Motion Capture in Face Robot
Face Robot is an extremely valuable tool for facial motion capture work. It solves the problems of traditional facial mocap and puts control back into the hands of the artist. Until now, higher-end productions have relied on the sheer number of mocap markers applied to the actor's face to relay an adequate representation of motion. This large number of markers (some productions approach 150) is very time consuming to attach at every capture session, and the risk of occlusion, or even of a marker falling off unnoticed, is high. Data cleanup is also labor intensive, and file sizes are large. For all of this work, the end result is usually of much lower quality than desired, often requiring significant hand tweaking to make it presentable. This largely defeats the purpose of motion capture, which is to quickly acquire highly accurate motion data and save the hours a keyframe animator would need to do the same job.
Even with 150 markers effectively scanning an actor's face, the results are less than lifelike. The problem is that a mocap actor's face seldom matches the face of the character being animated. Forcing those markers to move corresponding points on the character's mesh usually pulls the character off-model: that resolution of data simply does not translate well from the actor's face to a character of different proportions, which gives the character the slightly ugly quality associated with most facial mocap. Even when the actor and the character are an exact match, believable, realistic motion is hard to achieve with this approach.
The Face Robot solution. How to finally fix these problems? Stop plastering more and more markers onto the actors and throwing more and more data at the mesh. The human face has a finite number of useful landmarks to capture; the areas between them only react to the movement of these points, albeit in a very complex manner. What if artists could capture only the important parts of the face and let an intelligent rig do the rest of the work? This is precisely what Softimage|Face Robot allows animators to do. Simply place markers on the critical areas of the face (30 to 35 markers are all that is needed), and adjust the Soft Tissue Tuning to accurately move the face in response. The result is subtler movement and more believable animation, every facet of which remains controllable by the artist after the capture session has ended. Less setup. More control. Better results.
|Table of contents|
Motion Capture Marker Placement
Face Robot is a general-purpose animation system. It is particularly useful for motion capture animation because, unlike traditional approaches, it only requires a small number of facial capture markers (~32) to achieve high quality results.
Face Robot is driven by an animation control set consisting of 32 control points, which can be driven directly by motion capture data. The motion capture marker placement therefore corresponds closely to the positions of the animation control set.
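As a rough illustration of that relationship, the sketch below drives each control point from its marker by preserving the control's offset from the marker in a shared rest pose. This is illustrative only: all names are hypothetical, and Face Robot's actual retargeting is internal and more sophisticated.

```python
# Hypothetical sketch of driving rig control points from mocap markers.
# Each control point keeps the offset it had from its marker at rest.

def rest_offsets(rest_markers, rest_controls):
    """Per-control offset from its marker, measured in the rest pose."""
    return {
        name: tuple(c - m for c, m in zip(rest_controls[name], rest_markers[name]))
        for name in rest_controls
    }

def drive_controls(frame_markers, offsets):
    """Place each control at its marker's captured position plus the rest offset."""
    return {
        name: tuple(m + o for m, o in zip(frame_markers[name], off))
        for name, off in offsets.items()
    }
```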
Motion Capture Camera Placement
For facial-only capture, cameras should be positioned to provide good coverage (two or more cameras seeing every marker at all times) over a reasonable range of head movements. It is especially important to avoid occlusion of the nostril markers.
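A minimal way to sanity-check such a layout is to count how many cameras see each marker. The sketch below assumes simplified cone-shaped camera views and ignores occluders; all names and parameters are hypothetical, not part of any capture system's API.

```python
import math

def visible(camera_pos, camera_dir, half_angle_deg, marker):
    """True if the marker lies inside the camera's viewing cone (occlusion ignored)."""
    to_marker = [m - c for m, c in zip(marker, camera_pos)]
    norm = math.sqrt(sum(v * v for v in to_marker))
    dnorm = math.sqrt(sum(v * v for v in camera_dir))
    cos_angle = sum(a * b for a, b in zip(to_marker, camera_dir)) / (norm * dnorm)
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def coverage_ok(cameras, markers, min_cameras=2):
    """Check that every marker is seen by at least `min_cameras` cameras."""
    return all(
        sum(visible(p, d, a, m) for p, d, a in cameras) >= min_cameras
        for m in markers
    )
```

Run against planned camera positions, this flags markers (such as the nostril markers) that would drop below two-camera coverage for a given head orientation.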
What to Capture
What should be captured obviously depends on the performance required for the project. However, experience has shown that at least the following captures should be made to facilitate tuning of the head in the facial design phase:
- Range of Motion. The range of motion should exercise the widest possible range of facial deformations, from smiles to screams to frowns and sneers.
- Key Poses. In particular for 3-d scanned heads, it is also useful to capture the following additional poses because they make it easier to align the motion capture data with the facial model. This can be done either as part of the range of motion or in separate takes.
- Facial Base Pose. There should be a capture of a facial expression that corresponds to the rest pose 3-d scan. This will simplify the retargeting in the same way a t-pose is used for the body.
- Extreme Poses. Holds of the most extreme expressions from the range of motion.
- Lip Sync Take. A take of representative dialogue.
- Lead In / Lead Out. A short neutral hold at the start and end of each take simplifies trimming and alignment.
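The facial base pose described above can be used to align the capture space with the rest-pose scan. Below is a minimal, translation-only sketch with hypothetical helper names; a full solution would also solve for rotation (e.g. with the Kabsch algorithm).

```python
def centroid(points):
    """Mean position of a list of 3-d points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def align_translation(capture_base, scan_base, frames):
    """Shift every capture frame by the centroid offset between the captured
    facial base pose and the rest-pose scan landmarks (translation only)."""
    off = tuple(s - c for s, c in zip(centroid(scan_base), centroid(capture_base)))
    return [[tuple(x + o for x, o in zip(p, off)) for p in frame] for frame in frames]
```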
Mocap Data Processing
- Motion Capture Marker Naming - Naming convention for the marker positions.
- File Format - What file format to export the marker data in.
- Stabilization - What to stabilize, and when.
- Cleanup & Filtering - What, when, and how to clean up and filter the data.
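As one simple example of light cleanup, a centered moving average can remove per-frame jitter from a marker trajectory before it is applied to the rig. The window size is a project-level tuning choice, not a Face Robot setting, and the function name is hypothetical.

```python
def smooth(trajectory, window=5):
    """Centered moving average over a marker's per-frame 3-d positions.
    A light low-pass filter of this kind reduces capture jitter while
    leaving genuine motion largely intact."""
    half = window // 2
    out = []
    for i in range(len(trajectory)):
        lo, hi = max(0, i - half), min(len(trajectory), i + half + 1)
        chunk = trajectory[lo:hi]
        out.append(tuple(sum(p[k] for p in chunk) / len(chunk) for k in range(3)))
    return out
```

Too large a window flattens fast motions such as lip sync, so the amount of filtering should always be verified against the raw take.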
Importing & Applying Mocap Data
- How to get the data into Face Robot - see Motion Capture in Face Robot
- How to link the mocap marker data to the face - see Motion Capture in Face Robot
Tips & Tricks
- Keeping Shoulders Steady. The two motion capture markers driving the neck tendons in the Face Robot animation control set are also influenced by movements in the shoulders, for example when the performer's arms are raised above the head. Ideally, the performer sits in a chair, keeps the body relatively still, and focuses the emotional acting on head movement and facial expressions.
- Facial Scans with Mocap Markers. When evaluating the positions of the controls on the face, it can be helpful to also acquire a 3-d scan of the face with the motion capture markers in place. For more on 3-d scanning, see Facial Scanning.