Monday, 28 March 2016

Next Gen 22/03/16 Maths

Since this is the final week before the Easter holiday it was crunch week, especially since the exams are directly after the holiday. The final subjects we needed to learn were standard deviation and binary numbers.

This is the final maths-related piece we need before the first year exam. The other maths topics we needed to learn were: the dot product, trigonometry (with Pythagoras' theorem and SOHCAHTOA) and adding, subtracting and multiplying 3-dimensional co-ordinates.
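As a refresher, here is a minimal sketch of the co-ordinate maths and the dot product in C++ (the Vector3 struct and the function names are just my own, for illustration):

```cpp
#include <cmath>
#include <cstdio>

// A minimal 3D vector, just to illustrate the maths we revised.
struct Vector3 {
    float x, y, z;
};

// Adding and subtracting co-ordinates works component by component.
Vector3 add(Vector3 a, Vector3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vector3 subtract(Vector3 a, Vector3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// The dot product: multiply matching components and sum the results.
float dot(Vector3 a, Vector3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Pythagoras in 3D: the length of a vector is the square root of
// its dot product with itself.
float length(Vector3 a) { return std::sqrt(dot(a, a)); }

int main() {
    Vector3 a{1.0f, 2.0f, 3.0f};
    Vector3 b{4.0f, 5.0f, 6.0f};
    std::printf("dot = %.1f, length of a = %.3f\n", dot(a, b), length(a));
    return 0;
}
```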

Some of these topics were quite difficult. For binary numbers, for example, you need to know a whole other number system (counting 1, 2, 3, 4, 5... becomes 1, 10, 11, 100, 101...). Standard deviation is a combination of squared numbers, division, the mean and a square root.
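In words, the (population) standard deviation is the square root of the mean of the squared differences from the mean. Here is a minimal sketch of both ideas in C++; the example marks and function names are my own, not from the lesson:

```cpp
#include <cmath>
#include <cstdio>
#include <string>
#include <vector>

// Standard deviation: subtract the mean from each value, square the
// differences, average them, then take the square root.
double standardDeviation(const std::vector<double>& values) {
    double mean = 0.0;
    for (double v : values) mean += v;
    mean /= values.size();

    double sumSquares = 0.0;
    for (double v : values) sumSquares += (v - mean) * (v - mean);
    return std::sqrt(sumSquares / values.size());
}

// Decimal to binary: repeatedly divide by 2 and read the remainders
// back to front (so 5 becomes 101).
std::string toBinary(unsigned int n) {
    if (n == 0) return "0";
    std::string bits;
    while (n > 0) {
        bits = std::to_string(n % 2) + bits;
        n /= 2;
    }
    return bits;
}

int main() {
    std::vector<double> marks = {2, 4, 4, 4, 5, 5, 7, 9};
    std::printf("standard deviation = %.1f\n", standardDeviation(marks)); // prints 2.0
    for (unsigned int i = 1; i <= 5; ++i)
        std::printf("%u in binary is %s\n", i, toBinary(i).c_str());
    return 0;
}
```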




Next Gen 17/03/16 Coding Loops

The main point of coding this week was to cover all four loops: their purpose and how they operate (another piece to add to an ever expanding dictionary of code to use). The point of these loops is to repeat code until a condition is met (think of them as constantly re-checked if statements). A quick sketch of all four in C++ follows the list below.

1. While Loop: while a condition is met, the code in the body will keep looping.
2. Do While Loop: this follows the pattern of the while loop, but the condition is tested at the end of the body, meaning the code will run at least once before it can stop. These can be useful for mechanics like temporary barriers (say, until you get a certain item or enough resources).
3. For Loop: this has the same function as the while loop but uses a different syntax. It packs three separate sections of information into the loop (declaring a variable, the condition, and the increase/decrease). This is a more precise way of adding up multiple values for one final result.
4. For Each Loop: this loop steps through a collection (such as an array or list) to test each item, like looking through a list. This is useful for having enemies spawn at certain locations across a map without getting too close to some objects or the player.
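Here is a minimal sketch of all four loops in C++ (the values and names are my own toy examples, not the class exercises):

```cpp
#include <cstdio>
#include <vector>

int main() {
    // 1. While loop: keeps repeating as long as the condition is true.
    int health = 3;
    while (health > 0) {
        std::printf("health: %d\n", health);
        --health;
    }

    // 2. Do-while loop: the condition is checked at the end, so the body
    //    always runs at least once (like showing a prompt before testing input).
    int attempts = 0;
    do {
        std::printf("attempt %d\n", attempts);
        ++attempts;
    } while (attempts < 1);

    // 3. For loop: declaration, condition and increment in one line.
    int total = 0;
    for (int i = 1; i <= 5; ++i) {
        total += i;
    }
    std::printf("total: %d\n", total);

    // 4. For-each (range-based) loop: steps through every item in a collection.
    std::vector<const char*> spawnPoints = {"gate", "bridge", "tower"};
    for (const char* point : spawnPoints) {
        std::printf("spawn at: %s\n", point);
    }
    return 0;
}
```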

The next step is to use these in my own code to test them and then use them when needed. These pieces of code have so far been the hardest to use because (for me) they interfere with the rest of the coding.


Next Gen 18/03/16 Modeling

(Note) I got food poisoning for two days, so I was not in for one day of lectures and missed some work I am not aware of.

Since this week was more open with what we were doing, we continued with the modelling and coding we were working on last week. The main modelling work that I did was to fully finish the UV mapping of the model I was working on.


Something that I found out was that a button called the 'isolation toggle' hides all of the shapes in the model besides the one that you are currently editing. This allows you to edit shapes without being blocked by other shapes in the model. Also, for texturing's sake, the more shapes the better, because it allows more space for high quality texturing.

The only other thing I did with modelling: a while ago I had made a basic model of a power drill, and recently I have updated it, from the image on the left to the image on the right, to show some of the skills I have learned in 3D modelling. I am very happy with the updated result; this will be the model I occasionally update to show how well I can improve my models and show my progress. The only other thing I could do is update the textures, however without Photoshop (because of the Easter holiday) I don't know what to do about that. The next small model I try to make will be some form of a face.



Sunday, 13 March 2016

Next Gen 09/03/16 VFX

Since it has been a while since I used Maya and Adobe Premiere, I did have to find my bearings again. Today I finished one of the animations in Maya, and the main background shots have been compiled and edited into a single shot that is just over 2 minutes, so it is not long enough to drag but gives enough time to use the Maya animations without rushing through the shots. I did want to use an extra shot where Jacob (the person in the shot) turns around to the camera and gives a thumbs up, but since it was a test shot it might not make the final cut. Another issue I have is that, since the models in Maya can't be hidden easily, it is annoying to have the blocks drop at the same rate from an origin point that will not be in front of the filmed shots.

The next step is getting the finished digital shots into Maya and putting the animation in front of them, to match them together in the same shot. As well as that, I could add some sound effects (if anyone has played the Spider-Man 2 arcade game you know what I am talking about; it is somewhat similar to Space Invaders).

Next Gen 27/02/16 Unreal Engine

It has been a while since we used Unreal Engine 4 for anything, not since the 2nd half of the first term (the college model class project). What we attempted was to regain some understanding of Unreal Engine and create a basic day/night cycle.

This was done in the Blueprint editor, by combining and changing the settings of the light's brightness and point of origin and connecting them to a timer set to 24 seconds.

As shown by the image, Unreal Engine can be used to make games without writing any code. It is like an incredibly upgraded version of that knock-off program 'Scratch'. The way to 'code' in this program is by combining multiple functions through threads between different connection slots (as shown in the image).
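For reference, here is roughly the same idea written as Unreal C++ instead of a Blueprint. This is my own sketch rather than what we built in class, and the class and property names (ADayNightCycle, DayLengthSeconds, SunLight) are made up:

```cpp
// Rotate a "sun" light a full 360 degrees every DayLengthSeconds,
// giving a simple day/night cycle.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "DayNightCycle.generated.h"

UCLASS()
class ADayNightCycle : public AActor
{
    GENERATED_BODY()

public:
    ADayNightCycle()
    {
        // Let this actor update every frame.
        PrimaryActorTick.bCanEverTick = true;
    }

    // Length of one full day in seconds (24 to match the Blueprint timer).
    UPROPERTY(EditAnywhere)
    float DayLengthSeconds = 24.0f;

    // The directional light acting as the sun, assigned in the editor.
    UPROPERTY(EditAnywhere)
    AActor* SunLight = nullptr;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (SunLight && DayLengthSeconds > 0.0f)
        {
            // Spread 360 degrees of pitch across one day, so the light
            // swings below the horizon and back again.
            const float DegreesPerSecond = 360.0f / DayLengthSeconds;
            SunLight->AddActorLocalRotation(FRotator(DegreesPerSecond * DeltaSeconds, 0.0f, 0.0f));
        }
    }
};
```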

This process went mostly well; the only issue I had was that some of the components would not appear in the Blueprint unless some specific icons were highlighted. The next part will be using objects I have made to make a scene of my own and toying around further with some of the functions that Unreal offers.

Next Gen 08/03/16 Art and Modelling


This week it was another exercise based on the creativity of modelling and concept art. Firstly, we had to draw concept art based on a small section of the Harry Potter books, without any context. The scene was when Harry arrives in Diagon Alley. The main idea of the image is based around 'He wished he had more eyes'. The idea was that there is a bunch of really wondrous items in the shop window (books, brooms, potions, etc.), while the face is staring at the more odd/disgusting items, including the bat spleens and (specifically) the frog eyes. Since the writing could have been interpreted in any way, this was the (slightly creepy) way I interpreted it.

The other main point of the task was to either make a futuristic bedroom (which I and 70% of the group chose) or a magical shop. It was first drawn as a concept drawing, with a few thumbnails of what could be in the drawing. After the time limit, we then modelled an object from the room that we made up. The object I came up with was a cross between a wardrobe and a door. The idea was that the user would open the doors and a panel would appear. They would input what clothes they want on the panel, confirm, and step through, and the chosen clothes would appear on them, without all of the hassle of getting dressed in the morning.

My first attempt at this ended after around 2 hours of work when the program crashed and I lost all progress. Afterwards I tried again with a different approach. The objects for the model are mostly separate; the only pieces that had to be copied were the doors and the emitters (inside the doorway). I am also working on a polygon that can be transparent to make glass inside the doorway. I have not begun work on the textures yet; however, something I noticed was that most sci-fi human inventions and ships are quite bland in terms of texture. As shown in the image, it is mostly grey, white or black. So for my model I wanted to do something more stylised (think of it like combining hidden spy rooms (things from the Bond movies or Men in Black) and futuristic utopias (like the original BioShock)).

The only other main thing I could do in terms of design is move the point of origin on some objects to allow them to pivot, so the model has some functionality.

Monday, 7 March 2016

Next Gen My VFX Shots Pipeline

Modelling: The basic idea is creating objects in a 3D environment with the right programs. The only main objects that were needed were the Tetris blocks, so not a lot of modelling had to be done for the sequence, although the blocks were fairly bland.

Pre-vis: This is the basic process of using simple 3D models to show a rough form of the sequence, so you can see how the shot should go without putting major effort into the visual assets. We made a few pre-vis shots. Considering the shots that were taken were relatively stale in terms of actual actions in the scene, we could have used pre-vis to have the shots interact more with the CG.

High Dynamic Range Environment Photos: This is the process of taking multiple photos of the same environment (at different exposures) to get the ideal tint and lighting. Since the shots were done in a relatively bright environment and the screen in the shot was very bright, darkening the shots could make the screen appear brighter, so it could seem that it is very late in the day.

Reference Photos: This is the process of taking multiple photos during the shoot to give the animators later in the pipeline a reference for the models, environments and lighting. Since we took a few short shots from a few different angles and had no extra props in the shot (besides the computer and mouse), it will be a relatively simple process.

LIDAR/Cyberscan: A way to get highly detailed models of buildings or environments using laser-based scanners. Since we did not have access to that kind of help, we had to make our own assets from scratch. If we had it, I would use it to scan the inside of a PC to get an accurate representation of the wires and boards that go into it, to add to the surrounding environment.

Rigging: Fitting a rig to 3D models so they can be posed and moved in digital animation. Since the models I was originally going to put into the shot were not that complicated, they do not badly need to be rigged; plus the models are not required to twist and turn, only fall.

Tracking and Match-Moving: An image or marker on set is filmed, and once the shot is finished a digital copy can be placed over whatever was used in the shot. This process is used because it makes CG objects seem to have more 'life' in the movie. Since the objects are Tetris blocks, this process could easily be applied if the footage were slowed down and other interference removed from the shot. However, it would not add much to the shot, since the environment (a desk) has no soft surfaces and nothing heavy hits or affects anything on the desk.

Plate Preparation: Since most movies use film, scratches and chemical reactions can occur, and the footage needs to be cleaned up for the movie release. However, since the cameras we used were digital, we did not have to deal with this kind of issue.

Film Scans: Once the shots are finished, they are scanned to preserve the original negative; it is usually done at around 2K if the film used Super 35mm. Once again, since the shots were done on a digital camera, there was no need to scan any film ourselves.

Rotoscoping: Masking or tracing elements of a shot frame by frame, used alongside things such as green screen to add backgrounds or extra visual components. Since we don't have access to this kind of VFX equipment, we are not doing anything around rotoscoping.

Tests: Testing the look, style and/or piece of technology that might be worth putting in the final film. Test shots for this have used different footage from the final result, testing concepts such as tracking in Maya and a lightsaber effect in an active shot. Whether they make it into the final product is debatable.

Animation: This is the process of making models and rigging elements move on screen. The main animation in the shot is the Tetris blocks falling, moving and turning, plus the effect of the blocks shattering once a line is made (like in the game). I would also want a gaping hole in the air to show where the blocks are coming from instead of having them just appear.

Grading and Technical Grading: This is when the scene in the shot has alterations done in post-production, so that there are no jumps in contrast or brightness in the final cut. The main way I could use this in my VFX shots is to alter the brightness of the shot to make the blocks look more or less artificial for the shattering effect.

Element Shoots: When an extra effect of natural particles is filmed and put into a shot for more visual appeal. These effects are usually shot before post-production begins and stored so the VFX team can use them quickly. Since the main application of this in our project would be the shattering effect, it could be used, even though it can be dangerous; plus shattering is an effect we can add in After Effects.

Look Development: This is when the texturing and lighting are altered for the final shots, to make the CG objects appear more integrated with the rest of the shot. The main use for us would be making the blocks sit believably in front of the filmed shot of the desk, computer and person.

Research and Development: This is when all of the tools in the VFX process are set up before the shots are done, so that the animators have an easier time finishing each VFX shot. The main tool we used is the program 'Autodesk Maya', so most of the tools in the program were already set up. We also had some time to get used to the program and its tools.

Wednesday, 2 March 2016

Next Gen 24/02/16 Adobe tracking

Today we were briefly introduced to tracking in Adobe After Effects and were given short clips to tinker with. The first was a clip of random people walking down the street, and the second was a clip of the first clip being played on a screen while the camera panned from right to left over the screen.

The purpose was to understand tracking in the animation sequence so that other elements can be added to the visual shots. The first shot was done using the tracking tool in After Effects to find a point of contrast in the image, giving the tool a constant location to track. One good point was the chest hair of the man in the 'sand' shirt (since it is very dark compared to the surrounding imagery). However, the tool is very unreliable: it kept veering off in different directions and losing track of the point from a certain frame of the clip, and I kept having to redirect the point to follow the contrast point.

However, the next clip was the worst. We were asked to have a quad point track on the screen corners while the shot panned from right to left. The problem was that the corners had no contrast with the rest of the screen rim, so the points had nothing to focus on and kept moving. The only way around the problem was to put the points at contrasting points on the screen. Also, since the screen corners do go off-screen, they could not be tracked easily. The main purpose of the tracking was to put an image over the tracked area or to add space for a VFX effect.