Technological advancements have made motion capture available at low cost. Independent and small-scale artists now have alternative means of achieving motion capture that were once available only to large production companies. This paper proposes an affordable approach to markerless motion capture using Microsoft's Xbox One Kinect sensor. A workflow incorporating full-body motion capture software (iPisoft) and facial capture software (Faceshift) into the animation pipeline was used to achieve motion capture. This research shows that motion capture can be done with consumer-grade hardware and software, using a markerless system and workflow to produce usable data for independent artists and small-scale productions.
Hello, my name is David Ker and this is a continuation of the summary of what I have done so far on the PG Cert 3D for Visual Effects course, covering the remaining six weeks.
Tracking & Integration
Introduction to Cameras
Camera Tracking with 3D Equalizer
Manual Tracking Techniques
Advanced Matchmoving Techniques
Advanced Shading and Lighting
Advanced Mental Ray Nodes
Advanced Lighting Theory
Render Layers and Compositing
Render Layers and Passes
Introduction to Linux
Introduction to Compositing
Compositing with Nuke (CG Layers)
Rotoscoping and Keying with Nuke
Animation and Rigging
Constraints and Rigging
End of course Presentation
LESSONS LEARNT AND ACHIEVING A VFX SHOT
Below is a log of how I achieved my VFX project using lessons taught from the outline above. Although I did not use every single technique or workflow in the outline, I worked with the workflow I needed to achieve the project.
At this point in the course I had finished compositing my 3D model (guitar and belt) into a still image (back plate). So for the next project I began lessons on how to track footage and export the tracked footage for use in Maya. I was introduced to the software 3DEqualizer, its interface and its functions. It was quite a strange piece of software because it did not have the usual layout of most software I am familiar with.
I learnt all I needed in this phase of the course to help me achieve my next project. I learnt that before live footage can be tracked it has to be in the form of a sequence. This means having a series of frames of the footage instead of a single video file, so every piece of footage used during lessons was always in sequence form. I had to open my footage in Nuke using a Read node; it was then written out as a sequence of Targa (.tga) files into a chosen file path. This was the method I used to convert my footage into sequence files, as shown below.
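The idea of an image sequence can be sketched as a small helper that generates the numbered per-frame file names a Write node produces (the base name, zero padding and extension here are just illustrative conventions, not the actual paths used on the course):

```python
def sequence_paths(base, start, end, padding=4, ext="tga"):
    """Generate the per-frame file names of an image sequence.

    A single video file becomes a series of numbered frames,
    e.g. shot.0001.tga, shot.0002.tga, ... which a tracker like
    3DEqualizer then reads back as one clip.
    """
    for frame in range(start, end + 1):
        yield f"{base}.{frame:0{padding}d}.{ext}"

frames = list(sequence_paths("shot", 1, 3))
# frames == ["shot.0001.tga", "shot.0002.tga", "shot.0003.tga"]
```

In Nuke itself this padding is expressed in the Write node's file path (e.g. a `%04d`-style token), and each rendered frame fills in one number.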
The footage at this point was ready to be opened and tracked in 3DEqualizer. I imported the footage and set the number of frames I wanted to work with. I also created a buffer compression file to increase playback speed. At this point I needed to set up my lens before I could start solving for it. The film back height and width of my lens were values I found online (the film back width was set to fixed and the height set to passive, because a change in one affects the other). I also set the pixel aspect to fixed and the film aspect to passive, so that as I solved the lens the fixed values would drive the passive values.
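The fixed/passive coupling can be illustrated with the film aspect relation: film aspect = film back width / film back height, so holding the width fixed determines the passive height for a given aspect (the numbers below are illustrative, not taken from the actual lens used):

```python
def filmback_height(filmback_width_mm, film_aspect):
    """With the film back width fixed, the passive height follows
    from the aspect relation: aspect = width / height.
    This is why refining one fixed value updates the passive one.
    """
    return filmback_width_mm / film_aspect

# Example (made-up values): a 22.3 mm wide film back at 16:9
h = filmback_height(22.3, 16 / 9)
# h == 12.54375 mm
```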
Finally, I chose to calculate lens distortion and, later on, quartic distortion. I started tracking the footage manually using both pattern and marker tracking modes. I made sure I established tracking points to define the floor, as well as a well-spread array of tracking points around the footage. After I was done tracking I grouped my tracks for better organisation. For this particular footage I did not need the reference images I took, because the tracking points were good enough to solve for the lens. I began solving my points using the parametric adjustment tool (first with a wide range and brute method, then fine and adaptive). The calculated parameters were then transferred and calculated from scratch. I repeated this process a couple of times until my pixel deviation values were low enough for a good solve. This was then exported to Maya. I also had to dewarp the footage using the warp4 tool (by opening it from the menu, selecting the save option, choosing a path to save to, and setting overscan to automatic) and finally rendering it out.
In Maya I imported the tracked footage from 3DEqualizer with the manual and auto tracks and then scaled my scene. I also imported the dewarped footage, then set my image number and frame offset with respect to the footage (with 'use image sequence' selected). I imported the footage as an image file as well, so I would be able to colour correct it. The playback speed of the footage was really slow, so I had to convert it into low-resolution footage in Nuke and import it back into Maya. The image plane was a bit too close to the camera, so I increased the far clip plane to get a better view. To place my models successfully in 3D space the ground plane had to be set, so to line it up I chose a mid-point on my manual tracks and snapped it to the centre of the grid in Maya.
Figuring out a theme for this project was done in the second week, and the footage was shot in the third week. I did, however, start building up props for the scene even though I had not yet got the shot at the time I was modelling. I had an idea of a 'half and half' shot where one half depicts a time in the past and the other half depicts the present.
I modelled furniture and buildings appropriate to each time period (mostly props that would help sell the idea). As usual, before I got started I set my project and established a linear workflow. I imported my reference pictures as image planes for the different furniture and buildings I wanted to model.
Modelling the scene was done using tools learnt in the earlier part of the course: the extrude tool, the revolve tool, the append polygon tool and so on. After modelling I assigned a mia_material_x_passes shader to my individual models and renamed them. Below are views showing the development of the models based on the reference images.
I created my UVs mostly as cylindrical and planar projections because of the nature of the forms I modelled. For the planar maps, all I needed to do was alternate the axis according to where the plane was facing. The transfer attributes tool was used to transfer UVs between similar forms to avoid repeating projections. UVs were arranged in the UV bounding box using the UV layout tool and exported as PNG files.
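Alternating the projection axis makes sense once you see what a planar projection does: it drops the coordinate along the axis the plane faces and keeps the other two as (u, v). A toy sketch of the idea, ignoring the scaling and normalisation Maya applies:

```python
def planar_uv(point, axis="z"):
    """Toy planar projection: discard the coordinate along the
    projection axis and keep the remaining two as (u, v).
    Switching `axis` is the 'alternate the axis' step.
    """
    x, y, z = point
    if axis == "x":
        return (z, y)
    if axis == "y":
        return (x, z)
    return (x, y)

# A point on a wall facing the z axis keeps its x and y:
uv = planar_uv((1.0, 2.0, 3.0), axis="z")
# uv == (1.0, 2.0)
```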
I matched my lights with the lights in the footage using a directional light (no decay) and two area lights (with a quadratic decay rate). An IBL was set up using a lat-long HDR made in Photoshop and Nuke. The exposure on the IBL was increased to light up the scene a bit more.
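The difference between the decay settings can be sketched as falloff functions: no decay keeps the intensity constant at any distance (like the directional light), while quadratic decay follows the physical inverse-square law used for the area lights (the intensity numbers are illustrative):

```python
def light_intensity(base_intensity, distance, decay="quadratic"):
    """Light intensity at a given distance for Maya-style decay rates.

    'none'      - constant, as with a directional light
    'linear'    - falls off with 1/d
    'quadratic' - physically based inverse-square falloff, 1/d^2
    """
    if decay == "none":
        return base_intensity
    if decay == "linear":
        return base_intensity / distance
    return base_intensity / distance ** 2

# Doubling the distance from a quadratic-decay light quarters it:
assert light_intensity(100, 2) == light_intensity(100, 1) / 4
```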
My texturing was done in Photoshop and saved as Targa files with a bit depth of 24. Below is a view of some of the textures used.
In preparing my scene for rendering I created my passes for the scene:
Beauty, Depth, Diffuse, DiffuseMaterialColor, DirectIrradiance, Indirect, Mv2DToxik, Reflection, Refraction, Shadow, ShadowRaw and Specular. I linked them to their associated passes in the render settings. I had my sampling quality set low to reduce render time because my scene was quite heavy, and final gather was turned on. Next I set my output path, resized my renders from 4K to 2016 x 1090 and did a batch render.
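Rendering separate passes pays off in the comp because the lighting components can be recombined there. One common convention, which varies with the shader setup and is therefore only a sketch, is that the additive lighting passes sum back to an approximation of the beauty:

```python
def rebuild_beauty(passes):
    """Sum the additive lighting passes back into an approximate
    beauty. Exactly which passes participate depends on the shader
    setup, so treat this as a sketch of the idea, not a recipe.

    `passes` maps a pass name to a flat list of pixel values.
    """
    additive = ("diffuse", "indirect", "reflection", "refraction", "specular")
    n = len(passes["diffuse"])
    return [sum(passes[name][i] for name in additive) for i in range(n)]

# One-pixel example (a single luminance value per pass):
pixel = {"diffuse": [0.5], "indirect": [0.125], "reflection": [0.25],
         "refraction": [0.0], "specular": [0.0625]}
# rebuild_beauty(pixel) -> [0.9375]
```

Utility passes such as Depth, Shadow and Mv2DToxik are not additive; they drive effects like defocus, shadow grading and vector blur instead.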
The rendered files were opened in Nuke for compositing. I had to use a Reformat node to resize my original footage from 4K to 2016 x 1090 to match the renders. I started off by colour correcting my rendered CG and used a Merge (multiply) node to combine it with my ambient occlusion. Then I did a colour grade to match the footage with the CG (using the white points on the CG to match the white points on the footage, and the same for the blacks). I did another colour correction and added a vector blur. I then merged (matte) the composition with the reformatted footage and added a Grain node to give the footage an old, grainy look. I added a ColorCorrect node and made it a sepia tone to add to the aged look of the scene. I did a roto on one half of the sepia to show the change in time from old to new, and I blurred the edges so it blends in. Finally, the composition was written out as a mov file using a Write node. Below is the node graph in Nuke.
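Matching white and black points is essentially what Nuke's Grade node does. Stripped of its multiply/add/gamma knobs, the remap can be sketched as a per-channel linear function that sends a measured blackpoint/whitepoint pair to chosen black/white targets (the sample values below are made up for illustration):

```python
def grade(value, blackpoint, whitepoint, black=0.0, white=1.0):
    """Core of a white/black point match: a linear remap that
    sends `blackpoint` to `black` and `whitepoint` to `white`.
    Nuke's Grade node layers multiply/add/gamma on top of this.
    """
    a = (white - black) / (whitepoint - blackpoint)
    b = black - a * blackpoint
    return a * value + b

# Made-up example: CG whites at 0.92 and blacks at 0.02 are
# remapped to a plate whose whites sit at 0.85 and blacks at 0.05:
matched = grade(0.92, 0.02, 0.92, black=0.05, white=0.85)
# matched == 0.85
```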
The project was an interesting one indeed, but to a large extent I underestimated the scope of the work to be done. I had a lot of generic texturing which could have been more specific with direction. The windows were a bit too clean for the idea I was trying to portray. This, however, was due to my misunderstanding of how my UVs were to be exported. I should have exported larger UV sets so that when texturing in Photoshop I had the freedom and a larger area to work with. Most of the exports I did were too small to allow precise detailing.
My final renders showed light flickering of some sort and I could not identify the cause. Perhaps it had something to do with my final gather not being calculated, but I cannot really say. I had a couple of things that needed rotoscoping at the far end of the footage, which would have helped make the CG sit in the shot a bit better. The half which is supposed to represent an earlier time could have been a little more broken, with some atmospheric particles such as smoke to sell the idea more. I would say that I have learnt a lot of things in this project. It is definitely a stretch compared to what I had to produce in my first project with the guitar and the belt.
There could have been more props made for the shops, with transparent glass to give the shot more life. The walls could also use some more life, perhaps more windows and broken bricks. There is more work to be done to get the shot to the point where I can say the original idea has been clearly portrayed. It's not there yet, but it's well on the way.
Hello, my name is David Ker and this is a summary of what I have done so far on the PG Cert 3D for Visual Effects course for the first six weeks. I started off with tutorials online because I was not present in class for the first three days of the course, so the first part of this log will be about things I had to learn from the online lessons before gradually moving into the classroom.

WEEK 1
Introduction to Maya
The User Interface
Introduction to Modelling
Introduction to Textures
Introduction to Lighting, Shading and Rendering

I started off by learning how to navigate the Maya interface, beginning with the fundamental tools and building on that as I needed to.
I went to the Autodesk Maya 2015 website to gain more knowledge and found some really helpful illustrations explaining the interface in more detail, which helped a lot in improving my understanding of it, as shown below.
The menu bar presents the more general tools you will find in most software, such as File, Edit… I also learnt to access the drop-down menu that categorises the different aspects of Maya: Animation, Polygons, Surfaces, Rendering and Dynamics. For every mode you choose, the menu changes as shown below.
Moving on, I learnt how to navigate the functions on the status line: how to create, open and save scenes instead of using the main menu bar, how to switch selection sets to make specific selections, and how to use the snap tools and their shortcuts X, C, V (snap to grid, snap to nearest curve and snap to point respectively).
Being able to conveniently collapse and restore option groups on the status line is something I found beneficial: it let me hide functions I did not need at a particular time, as shown below.
Interestingly, I found a way of customising the shelf to add frequently used functions to it, by holding Ctrl+Shift and clicking on the function in the menu bar (e.g. Hypershade or Outliner).
To open up menus using the shelf, specific tools need to be selected, as was the case for the drop-down menu shown earlier. So to bring up functions for a surface, polygon and so on, you have to select the surface option or polygon option, as shown below.
Then there is the workspace, where most of the work within Maya is done; it is a central window where objects and most editor panels are shown.
I got to know the 3D scene a bit better by understanding the axis indicators X, Y, Z and their colour codes, and also the shortcut keys for the toolbox: Q, W, E, R (selection, move, rotate and scale respectively).
And lastly there was the channel box. I learnt how to show and hide the channel box, and how to manipulate the values for the rotation, translation and scale of objects, which are always represented in X, Y, Z.
Getting familiar with the interface was fun and educational at the same time. Exploring the interface further opened up more possibilities for the things I could achieve with Maya. In the first week of the course I was also introduced to modelling with curves in one of the exercises, the making of the vase. The CV curve was used to create the profile of the vase, which was a bit tricky at first, making the points stick where you want them to. Using the snap tools proved helpful, as they made points snap to the grid and move along a curve. The revolve tool was the interesting bit: with just a click the profile was made into a vase, which was quite cool.
The primitive man was a fun way to get familiar with the default shapes in Maya and also with hierarchy. I got to place the different basic shapes that represented body parts into a hierarchy to enable me to pose the primitive man.
There was a reiteration of the use of curves to model, which can be seen in the making of the apple in the fruit bowl. This was a good way of getting familiar with modelling with curves and of using the revolve tool again. To further achieve the look of an apple, the geometry was deformed a bit by manipulating the vertices, which gave more control in achieving the irregular form intended.
Still in the first week of the course, an introduction to polygonal modelling built more familiarity with the extrude tool. It was the tool used most frequently in modelling the chair. The same technique of extruding faces and edges was also used in modelling the tap.
We went on to basic lighting of a scene, but first we were shown which lights in Maya represent lights found in the real world: the directional light, which is meant to mimic the sun; the spot light, used for a light source emanating from a point; and the area light, used to create light through a shape, e.g. light through a window.
The first week was a good orientation into Maya 3D: its interface, a general introduction to modelling, textures and basic render settings. The lighting class was especially interesting because it made the scene come to life. In summary, after one week of the course I had learnt how to navigate the Maya interface and gained a better knowledge of how to locate the tools I needed and their corresponding keyboard shortcuts. So instead of having to click the select, move, rotate or scale tool every other minute, I just use the Q, W, E, R shortcut keys respectively. I also learnt the importance of always setting your project so that Maya knows exactly where to look when you are working in a scene. The modelling class gave me the basic tools I needed to start modelling, and also showed me how to access the modelling tools quickly by holding Shift and right-clicking with the object selected. Everything learnt in the first week played a major role in achieving my first 3D composition in Maya.

WEEK 2
Modelling
NURBS Curves and Surfaces
NURBS Surfaces to Polygons
Polygon Modelling
Topological Rules

At this stage of the course I had to decide on a suitable composition for a 3D model, and I chose to model a Gibson L0 guitar and a guitar strap. I was able to start this off with the knowledge I gained from the previous classes, with the guidance of my tutor and the studio assistants, and as the work progressed so did my use of more tools. The first stage was choosing a theme for the project, which initially was a bit of a challenge. I wanted to model so many things for the project in one go. I was advised by the tutor to streamline my idea, stick to one piece and have supporting elements/props to sell the idea. So I came up with the theme 'Music' after I eliminated unrelated props and decided to stick with a single prop and its supporting props. The second stage was getting reference material to enable me to best model the guitar.
Key references included front, top and side views, plus a lot of supporting images taken from different angles, but for the purpose of this log I will show only a few. While modelling the vase an image was imported to serve as a guide; this same technique was used to import the key references as image planes for the front, top and side views of the guitar. After the image planes were set up I used basic shapes (a poly cube and a poly cylinder) and manipulated them using modelling tools such as the extrude tool, merging vertices, fill hole and so on. Week two was a step further into modelling, so it gave me a better footing as I commenced my first project. I was introduced to new tools such as the loft tool, extruding along a curve, the bridge tool, putting text on a model, and the snap tools with their keyboard shortcuts X, C, V (snap to grid, snap along a curve and snap to point respectively). I also learnt how to create Booleans, which I used to achieve the holes in the pegs of the guitar. For the strings I used a poly cube, scaled it to the desired length and moved the vertices to achieve the bend at the guitar bridge. In creating the tuning pegs I also used a poly cube and manipulated the vertices to achieve the required shape. I used the smooth tool to smooth the overall geometry.
The lesson on creating the headphones was a good one. I picked up some helpful ways to align irregular vertices along a line using the R and J keys (select the vertices, press R to bring up the scale tool, and while holding J click and drag to snap them along a line). This was really helpful when vertices went out of place while modelling the guitar. In summary, a lot was learnt in this week's class that made modelling much more efficient: being able to use the shortcut keys, and understanding that your models always have to have good topology (edge loops, quads etc.).

WEEK 3
Modelling Continued
Polygonal Modelling Continued
Organic Modelling
Non-Linear Modelling
UV Mapping Techniques (Organic and Hard Surface)

At this point in my project I had successfully used a poly cube and manipulated its vertices several times to achieve the body of the guitar. I created a poly cube and increased the subdivisions so I had a range of edges and vertices to work with. I started off by shaping the vertices to match the image plane I had of the guitar. Once I was pleased with the form, I selected some faces in the middle of the body and deleted them, then made an extrusion to achieve the thickness of the sound hole.
The making of the P51 Mustang was done in the third week. I learnt how to use the smooth proxy tool, which enabled me to model with one half of the model as the hard-edged geometry and the other half smoothed. That way, I did not have to keep going back and forth between smooth and rough geometry (I was able to model the guitar bridge using this tool).
I also found the isolate select tool efficient and very useful; it helped me build different parts of my geometry individually in the same scene. I was introduced to UV mapping techniques (planar, good for flat shapes; cylindrical, for objects that are cylindrical in shape; spherical, for circular or spherical objects; and automatic, which projects on all sides of an object). I did notice later on that if NURBS are used to create an object, it should always be converted to polygons: I tried creating UVs for a NURBS surface and it would not UV until I converted it to a polygon. I mostly used planar mapping and varied the axis as needed in projecting the UVs for the guitar. In summary, this week I got introduced to UVs: how to UV map an object, how to lay it out in UV space using the layout option, and how to snapshot UV maps for export into Photoshop, which I applied in my project.

WEEK 4
Colour & Lighting
Lighting – Traditional Cinematographic and Photographic Techniques
Colour Theory, Three Point Lighting
Natural Lighting
Indirect Lighting Techniques
Image Based Lighting
Matching Lighting
Advanced Lighting

Understanding the basic setup required to light a scene was taught in the fourth week using the three-point lighting technique. Applying this knowledge to lighting my scene was not about directly replicating what was taught in class; it was more about understanding my scene and creating the appropriate lights that best translate the real-world lighting conditions into my 3D scene. I had light coming through the window and some from the ceiling. I used an area light shaped like the window to mimic the light coming through it, and another area light, made rectangular to match its real-world counterpart, to represent the lights from the ceiling.
Both lights had different intensities to vary their level of impact in the scene, and they were also of different tints (the ceiling light was yellowish to represent the warmth of the lights, and a washed-out blue colour represented the cool light from the window).
In setting up my IBL (image-based lighting) I had to import the lat-long image, which I extracted from the shots taken with the Canon 7D using Photoshop and Nuke. The different exposure levels of the images were merged in Photoshop as an HDR, cropped and then saved. It was opened in Nuke, where it was unwrapped with a spherical transform node and the resolution doubled, then written out as an HDR. My exposures on the IBL were a bit low, so I had to adjust the gain to make it brighter. Below are the highest and lowest exposures from my chrome ball. I went on to match my lights with the grey ball, using the values for setting up the grey ball (diffuse weight of 0.9, roughness of 0.1, reflected colour of white, reflectivity of 0.1, glossiness of 0.5, and BRDF Fresnel turned on). I was reminded to always colour correct my colours with a gamma correct node with a value of 0.45, which was done on the diffuse colour of the grey ball. In order to match my lights with the CG grey ball I needed a reference to match it against, so I used the grey ball from the back plate. I was taught to always take as many reference pictures as possible to best recreate the real-world lighting. I had to set the back plate up in the scene before I could match the lighting. Creating a mip_matteshadow, connecting a mip_cameramap to it, and finally connecting a file node of the back plate to the mip_cameramap made this possible. To set the back plate, a camera was created and its focal length set to the focal length obtained from the real camera, after which the plane was lined up with the back plate using the grid on the floor.
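The 0.45 in the gamma correct node comes from the approximate sRGB transfer curve: Maya's gammaCorrect node applies an exponent of 1/gamma, so a gamma of about 0.45 applies an exponent of about 2.2, taking an sRGB colour value back to approximately linear for rendering. A minimal sketch of that conversion:

```python
def linearize(srgb_value, gamma=0.45):
    """Sketch of Maya's gammaCorrect node, which raises the input
    to 1/gamma. With gamma ~0.45 the exponent is ~2.2, so an sRGB
    colour value is taken back to approximately linear light.
    """
    return srgb_value ** (1.0 / gamma)

# Mid grey in sRGB (~0.5) is much darker in linear light:
v = linearize(0.5)
# v is roughly 0.21
```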
At this stage of the project I was already composing a shot of the guitar using a reference picture I took of a real-life guitar, as seen below.
In summary, the lighting class was a bit of a challenge. It took me quite a while to match my grey ball, but eventually I did after a couple of failed attempts. I varied the intensities of the lights and also changed the tints to replicate what I had in real life. I also learnt a few things about indirect lighting, of which I used the image-based lighting technique for my scene, and about setting up final gather, which accounts for all the inter-reflected light in my scene. With final gather I was able to produce very soft shadows efficiently and to eliminate or even out dark corners.

WEEK 5
Intro to Basic Texturing
Hypershade
Texturing Masterclass
Image Manipulation with Photoshop
Texture Distressing with Photoshop
Non-destructive Texturing Workflows
Camera Projection Mapping

The fifth week was mostly a Photoshop week, and I was looking forward to the class because at this point I had my model done and ready for texturing. Understanding the Hypershade was central to knowing how to build shading networks by creating, editing and connecting rendering nodes such as textures, materials, lights, rendering utilities and special effects. To create textures for the guitar I had to create UVs for its parts, which I did using lessons learnt in the third week. A planar projection was done for the front and back of the guitar, as well as the curved side, which had to be unfolded using the unfold tool to avoid overlapping UVs. The same process of planar mapping and unfolding was done for the neck of the guitar. I downloaded wood textures for my guitar, and a texture for the guitar belt, online.
The projected UVs were laid out in UV space using the layout tool; everything was set to default except the spacing preset, which was set to a 2048 map for adequate spacing. After the UVs were laid out, they were exported as PNG files at 2K resolution and opened in Photoshop, where a non-destructive approach and an orientation to the workflows were taught. I used this in achieving the textures I needed for my guitar. My workflow had to be well organised in groups and layers, which were named for easy accessibility. Masks were used to manipulate images in layers so that they could always be recovered if I ever decided to change anything (a non-destructive workflow). At this point texturing was more or less intuitive, because I was translating how I felt the guitar should look with regard to references. I wanted the look of a guitar that has been well used yet not completely damaged.
WEEK 6
Texturing and Surface Techniques
Displacement Mapping
Maya Nodes
Raytracing (Reflections, Refractions and Shadows)
MentalRay Shaders
Material Properties and Case Studies (MIA Shaders)

In the last week of the course I was further introduced to Mental Ray production shaders, although I had already used the camera map and matte shadow shaders earlier on when setting up my back plate. At this stage in my project I had finished modelling, texturing and lighting the scene, and I started composing the shot using the reference shot I had taken. I modelled the belt in the last week, which was a bit of a challenge as I had never done anything like it before. The belt was modelled from a poly cube using a reference image, after which I used the animation tools to create a motion path for the belt and posed it as I wanted, as shown below.
The final rendering was done in Mental Ray, and colour grading was done in Nuke by matching the black points on the CG with the black points on the back plate, and the white points of the CG with those of the back plate.
In conclusion, the project was quite the experience. Before now I had never used Maya to model, so it was quite an achievement for me. However, there are things that I would probably do differently, not necessarily because the methods I used were wrong, but because I can see other ways to get the model done, and more efficiently too. I did the majority of the work in the last three days because of delays with setting up my lights. I would say that my flow in achieving this project followed the process summarized below:
Setting of my project
Setting Colour management for a linear work flow (sRGB/Linear sRGB)
Importing in reference images
Modelling with good topology/assigning materials to my models
Matching my lights in the scene/Setting up my Camera/Back plate