Hello, my name is David Ker and this is a continuation of the summary of what I have done so far on the PG Cert 3D for Visual Effects course, covering the remaining six weeks.
Tracking & Integration
Introduction to Cameras
Camera Tracking with 3D Equalizer
Manual Tracking Techniques
Advanced Matchmoving Techniques
Advanced Shading and Lighting
Advanced Mental Ray Nodes
Advanced Lighting Theory
Render Layers and Compositing
Render Layers and Passes
Introduction to Linux
Introduction to Compositing
Compositing with Nuke (CG Layers)
Rotoscoping and keying with Nuke
Animation and Rigging
Constraints and Rigging
End of Course Presentation
LESSONS LEARNT AND ACHIEVING A VFX SHOT
Below is a log of how I achieved my VFX project using the lessons taught in the outline above. Although I did not use every single technique or workflow listed, I worked with the workflow I needed to complete the project.
At this point in the course I had finished compositing my 3D models (a guitar and belt) into a still image (the back plate). So for the next project I began lessons on how to track footage and export it for use in Maya. I was introduced to the software 3DEqualizer, its interface and its functions. It was quite a strange piece of software because it did not have the usual layout of most software I am familiar with.
I learnt all I needed in this phase of the course to help me achieve my next project. I learnt that before live footage can be tracked it has to be in the form of an image sequence, i.e. a series of frames instead of a single video file, so all the footage used during lessons was in sequence form. I opened my footage in Nuke using a Read node, then wrote it out as a sequence of Targa (.tga) files into a chosen file path. This was the method I used to convert my footage into sequence files.
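The video-to-sequence conversion can be sketched with a small helper that builds the frame-numbered file paths a Write node produces when rendering out a Targa sequence (the shot name and the 4-digit frame padding here are hypothetical, for illustration only):

```python
# Build the frame-numbered file names a Nuke Write node would produce
# when a clip is rendered out as a Targa (.tga) image sequence.
# "shot" and the 4-digit padding are assumptions for illustration.
def sequence_paths(basename, first, last, padding=4, ext="tga"):
    """Return one file path per frame, e.g. shot.0001.tga .. shot.0003.tga."""
    return ["%s.%0*d.%s" % (basename, padding, frame, ext)
            for frame in range(first, last + 1)]

print(sequence_paths("shot", 1, 3))
# ['shot.0001.tga', 'shot.0002.tga', 'shot.0003.tga']
```

The zero-padded counter is what lets 3DEqualizer (and any other package) recognise the frames as one continuous sequence rather than separate images.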
The footage at this point was ready to be opened and tracked in 3DEqualizer. I imported the footage and set the number of frames I wanted to work with. I also created a buffer compression file to increase playback speed. At this point I needed to set up my lens before I could start solving for it. I found the film back height and width of my lens online and entered the values (film back width was set to fixed and the height to passive, because a change in one affects the other). I also set the pixel aspect to fixed and the film aspect to passive, so that as I solved the lens the fixed values would drive the passive values.
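The fixed/passive coupling can be illustrated numerically: with the film back width held fixed and a film aspect, the passive height is simply derived from the two (the Super 35-ish values below are made-up illustrations, not the actual numbers from my lens):

```python
# With film back width fixed, the passive film back height is derived
# as width / aspect; changing a fixed value updates the passive one.
def passive_height(filmback_width_cm, film_aspect):
    return filmback_width_cm / film_aspect

# Hypothetical values for illustration only.
width = 2.46   # cm, fixed
aspect = 1.78  # roughly 16:9
print(round(passive_height(width, aspect), 3))  # 1.382
```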
Finally, I chose to calculate for lens distortion, and later on quartic distortion. I started tracking the footage manually using both pattern and marker tracking modes. I made sure I established tracking points that defined the floor, as well as a well spread array of tracking points around the rest of the footage. After I was done tracking I grouped my tracks for better organisation. For this particular footage I did not need the reference images I took, because the tracking points were good enough to solve for the lens. I began solving my points using the parametric adjustment tool (with a wide range and brute method, then a fine range and adaptive method after that). The calculated parameters were then transferred and calculated from scratch. I repeated this process a couple of times until my pixel deviation values were low enough for a good solve. The scene was then exported to Maya. I also had to dewarp the footage using the warp4 tool (by opening it in the menu, selecting the save option, choosing a path to save to, and setting overscan to automatic) and finally rendering it out.
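The pixel deviation that 3DEqualizer reports is essentially the reprojection error of the tracking points; a minimal sketch of that idea, with made-up point data (a deviation well under one pixel is generally considered a good solve):

```python
import math

# Pixel deviation as the root-mean-square distance between where each
# 2D track was placed and where the solved camera reprojects its 3D point.
def pixel_deviation(tracked, reprojected):
    errs = [(tx - rx) ** 2 + (ty - ry) ** 2
            for (tx, ty), (rx, ry) in zip(tracked, reprojected)]
    return math.sqrt(sum(errs) / len(errs))

# Hypothetical 2D positions in pixels.
tracked = [(100.0, 200.0), (300.0, 400.0)]
reproj  = [(100.3, 200.4), (300.0, 400.5)]
print(round(pixel_deviation(tracked, reproj), 3))  # 0.5
```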
In Maya I imported the tracked footage from 3DEqualizer with the manual and auto tracks, and then scaled my scene. I also imported the dewarped footage and set my image number and frame offset with respect to the footage (with 'use image sequence' selected). I imported the footage as an image file as well, so I could color correct it. The playback speed of the footage was really slow, so I converted the footage into a low-res version in Nuke and imported it back into Maya. The image plane was a bit too close to the camera, so I increased the far clip plane to have a better view. To place my models successfully in 3D space, the ground plane had to be set; to line it up I chose a mid-point on my manual tracks and snapped it to the centre of the grid in Maya.
Link to Video showing manual and auto tracks https://www.youtube.com/watch?v=QaKKZUC6i9A
Link to Video showing Primitive cone in shot https://www.youtube.com/watch?v=A7_GblhH_6E
Figuring out a theme for this project was done in the second week, and the footage was shot in the third week. I did, however, start building up props for the scene even though I had not yet got the shot at the time I was modelling. I had an idea of a 'half and half' shot where one half depicts a time in the past and the other half depicts the present.
I modeled furniture and buildings with respect to the time period (mostly props that would help sell the idea). As usual, before I got started I set my project and established a linear workflow. I imported my reference pictures as image planes for the different furniture and buildings I wanted to model.
Modelling the scene was done using the tools learnt in the earlier part of the course: the extrude tool, the revolve tool, the append polygon tool, etc. After modelling I assigned a mia_material_x_passes shader to my individual models and renamed them. Below are views showing the development of the models based on the reference images.
I created my UVs mostly as cylindrical and planar projections because of the nature of the forms I modeled. For the planar maps all I needed to do was alternate the projection axis according to where the plane was facing. The transfer attributes tool was used to transfer UVs between similar forms to avoid repeating projections. UVs were arranged in the UV bounding box using the UV layout tool and exported as PNG files.
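Alternating the projection axis amounts to dropping the coordinate along the axis the plane faces; a toy sketch of that idea (normalisation into the 0-1 UV range is omitted for brevity):

```python
# Planar UV projection: flatten a 3D point onto the plane facing a given
# axis by discarding that axis' coordinate.
def planar_uv(point, axis):
    x, y, z = point
    if axis == "y":      # plane facing up/down: keep x and z
        return (x, z)
    if axis == "x":      # plane facing sideways: keep z and y
        return (z, y)
    return (x, y)        # axis == "z": keep x and y

print(planar_uv((1.0, 2.0, 3.0), "y"))  # (1.0, 3.0)
```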
I matched my lights to the lights in the footage using a directional light (no decay) and two area lights (with a quadratic decay rate). An IBL was set up using a lat-long HDR made in Photoshop and Nuke. The exposure on the IBL was increased to light up the scene a bit more.
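Quadratic decay follows the physical inverse-square law, which is part of why it sits well in a linear workflow; a minimal sketch (the intensity value is arbitrary):

```python
# Inverse-square (quadratic) light decay: received intensity falls off
# with the square of the distance from the light.
def quadratic_decay(intensity, distance):
    return intensity / (distance ** 2)

# Doubling the distance quarters the received light.
print(quadratic_decay(8.0, 1.0))  # 8.0
print(quadratic_decay(8.0, 2.0))  # 2.0
```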
My texturing was done in Photoshop and saved as Targa files with a bit depth of 24. Below is a view of some of the textures used.
In preparing my scene for rendering I created my passes for the scene: Beauty, Depth, Diffuse, DiffuseMaterialColor, DirectIrradiance, Indirect, Mv2DToxik, Reflection, Refraction, Shadow, ShadowRaw and Specular. I linked them to their associated passes in the render settings. I set my sampling quality low to reduce render time because my scene was quite heavy, and final gather was turned on. Next I set my output path, resized my renders from 4K to 2016 x 1090 and did a batch render.
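The resize step just applies a pair of scale factors from the 4K source to the delivery resolution; a small sketch of what that works out to (assuming a 4096 x 2160 4K frame, which may differ from the actual plate):

```python
# Per-axis scale factors for resizing a render from a source resolution
# to a delivery resolution, as a resize/Reformat step would apply.
def resize_scale(src_w, src_h, dst_w, dst_h):
    return (dst_w / src_w, dst_h / src_h)

# Assumed 4096 x 2160 source going to the 2016 x 1090 delivery size.
sx, sy = resize_scale(4096, 2160, 2016, 1090)
print(round(sx, 4), round(sy, 4))  # roughly half size on each axis
```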
Link to Colored VFX Shot https://www.youtube.com/watch?v=1dotnHdT4A8
The rendered files were opened in Nuke for compositing. In Nuke I had to use a Reformat node to resize my original footage from its 4K format to 2016 x 1090 to match the renders. I then began compositing my shot. I started off by color correcting my rendered CG and used a Merge (multiply) node to connect it to my ambient occlusion. Then I did a color grade to match the CG to the footage (using the white points on the CG to match the white points on the footage, and the same for the blacks). I did another color correction and added a vector blur. I then merged (matte) the composition with the reformatted footage and added a Grain node to give the footage an old, grainy look. I added a ColorCorrect node and made it a sepia tone to add to the old look of the scene. I did a roto on one half of the sepia to give the change in time from old to new, and blurred the edges so it blends in. Finally, the composition was written out as a mov file using a Write node. Below is the node connection in Nuke.
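The black/white point matching described above is what a grade-style node does per channel under the hood; a minimal sketch of that mapping (the sample values are invented):

```python
# Remap a CG pixel value so the CG's black/white points land on the
# plate's black/white points, per channel (a grade-node-style mapping).
def grade(value, cg_black, cg_white, plate_black, plate_white):
    t = (value - cg_black) / (cg_white - cg_black)
    return plate_black + t * (plate_white - plate_black)

# The CG white point (0.9) is mapped onto the plate's white point (0.8).
print(round(grade(0.9, 0.05, 0.9, 0.02, 0.8), 6))  # 0.8
```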
Link to VFX shot https://www.youtube.com/watch?v=kJDdnRs72ko&feature=youtu.be
The project was an interesting one indeed, but to a large extent I underestimated the scope of the work to be done. I had a lot of generic texturing which could have been more specific with direction. The windows were a bit too clean for the idea I was trying to portray. This, however, was due to my misunderstanding of how my UVs were to be exported. I would have exported larger UV sets so that when texturing in Photoshop I had the freedom and a larger area to work with. Most of the exports I did were too small for precise detailing.
My final renders showed light flickering of some sort, and I could not identify what the problem was. Perhaps it had something to do with my final gather not being calculated, but I cannot really say. There were a couple of things at the far end of the footage that needed rotoscoping, which would have helped the CG sit in the shot a bit better. The half which is supposed to represent an earlier time could have been a little more broken down, with some atmospheric particles like smoke to sell the idea more. I would say that I have learnt a lot in this project. It is definitely a stretch compared to what I had to produce in my first project with the guitar and the belt.
There could have been more props made for the shops, with transparent glass to give the shot more life. The walls could also use some more life, perhaps more windows and broken bricks. There is more work to be done to get the shot to the point where I can say the original idea has been clearly portrayed. It's not there yet, but it's well on the way.