Face Robot - The Big Picture
Phase 1: Research and Character Concept Development
In the research and concept phase, materials are compiled that guide the later stages of data acquisition, facial modeling, and face design. Outputs of this phase can include, but are not limited to, digital photographs of extreme poses, video closeups of the talent performing, concept sketches of extreme poses (for non-realistic characters), and maquettes.
Phase 2: Data Acquisition
Depending on the pipeline(s) used, data acquisition in various areas is required. This includes:
- Facial Scanning. For realistic human performers, one will acquire high-resolution facial scans to guide the modeling and face design work. Scans will include a 360-degree scan of a base pose (relaxed face) as well as frontal scans of several extreme poses. See Facial Scanning.
- Reference Photographs. For realistic human performers, high-resolution closeup photos are required for texture acquisition. Shots from different angles of a standard set of extreme poses support the face design phase. See Facial Reference Photography.
- Motion Capture. If a motion capture pipeline is used, a facial motion capture expert will acquire motion data through industry-standard capture techniques. The data will generally include a range-of-motion (ROM) take as well as any number of additional takes of specific acting performances. See Motion Capture in Face Robot.
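The .c3d files mentioned above follow the open C3D motion-capture standard, whose first 512 bytes form a fixed-layout header. As a rough illustration of what such a file contains (a minimal sketch assuming little-endian Intel byte order, which most modern files use; a production reader must detect the byte order from the parameter section):

```python
import struct

def read_c3d_header(path):
    """Read basic capture info from the 512-byte C3D header block.

    Hypothetical helper for illustration only; not a Face Robot API.
    Assumes little-endian data. Header layout (2-byte words, 1-indexed):
    word 2 = marker count, word 4 = first frame, word 5 = last frame,
    words 11-12 = frame rate as a 4-byte float.
    """
    with open(path, "rb") as f:
        header = f.read(512)
    if len(header) < 512 or header[1] != 0x50:
        raise ValueError("not a C3D file (missing 0x50 key byte)")
    n_markers, = struct.unpack_from("<H", header, 2)
    first_frame, = struct.unpack_from("<H", header, 6)
    last_frame, = struct.unpack_from("<H", header, 8)
    frame_rate, = struct.unpack_from("<f", header, 20)
    return {
        "n_markers": n_markers,
        "first_frame": first_frame,
        "last_frame": last_frame,
        "frame_rate": frame_rate,
    }
```

The marker count and frame range read here correspond to the per-take data that a facial capture session delivers, e.g. the ROM take and each acting performance.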
Phase 3: Digital Actor Preparation
Face modelers prepare faces for facial animation. The resulting models need to exhibit certain characteristics to work with Face Robot (guidelines are provided by Softimage). Specifications for Face Robot models include information about scale, placement, resolution and required components (such as eyeballs, teeth, etc.). Modelers deliver a fully modeled head to the face designer. Textures and hair can also be added at later stages in the pipeline.
- Modeling. Creation of a 3-D model from scratch or modification of an existing or scanned 3-D model.
- Texturing. Applying surface detail to a 3-D model.
- Hair. Applying hair, eyebrows, eyelashes, sideburns, chest hair, etc. to the model.
- Model Finishing. Completion of the interior of the mouth and the eyes, insertion of teeth, eyeballs and tongue and other finishing touches.
Phase 4: Working in Face Robot
Here is an overview of the Face Robot stages:
Stage 1 - Assembly
Bring together facial parts and check their validity:
- Import a completed 3D model of a head built according to Face Robot modeling guidelines.
- Check whether meshes are ready for Face Robot.
- Fix holes.
- Generate the interior of the eyes and mouth.
- Get heads and body parts from the library.
- Determine a symmetrical or asymmetrical topology.
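One way to understand the mesh-readiness and hole checks above is in terms of boundary edges: an edge that belongs to exactly one polygon. A watertight head mesh has none, so any boundary edge flags a hole or open border. This is a generic sketch of that test, not Face Robot's actual validation code:

```python
from collections import Counter

def boundary_edges(faces):
    """Return edges that belong to exactly one face.

    faces: list of vertex-index tuples (triangles or quads).
    Edges are stored with sorted endpoints so winding order
    does not affect the count.
    """
    counts = Counter()
    for face in faces:
        for i in range(len(face)):
            a, b = face[i], face[(i + 1) % len(face)]
            counts[tuple(sorted((a, b)))] += 1
    return [edge for edge, n in counts.items() if n == 1]
```

A closed mesh such as a tetrahedron returns an empty list; a mesh with a missing face returns the edges ringing the hole, which tells the modeler where patching is needed.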
Stage 2 - Object Picking
Identify the mesh components of the face:
- Pick the face mesh, eyes, and teeth.
Stage 3 - Landmark Picking
Pick landmark points on the face:
- Use the visually guided workflow to pick the key points on the face.
Stage 4 - Fit
Spatially align specialized parts of the face:
- Fit jaw tissue, neck, and jaw bone.
Stage 5 - Act
Act by setting keyframes or by applying motion capture:
- Key facial animation controls.
- Load facial motion capture from .c3d files.
- Re-author and alter the performance using motion retargeting.
- Blend keyframes and motion capture animation.
- Use the animation mixer to blend animation.
- Animate eye blinking.
- Test facial poses.
- Export animation to Softimage / Maya / Max / Lightwave / Messiah.
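At its core, blending keyframed animation with motion capture, as the mixer does, is a per-frame weighted crossfade between two channels. A minimal illustration of the idea (hypothetical `blend_channels` helper, not a Face Robot API):

```python
def blend_channels(keyed, mocap, weights):
    """Linearly crossfade two animation channels, frame by frame.

    keyed, mocap: per-frame values for one control (e.g. a jaw angle).
    weights: per-frame blend weights in [0, 1];
             0.0 = fully keyframed, 1.0 = fully motion capture.
    """
    return [k + w * (m - k) for k, m, w in zip(keyed, mocap, weights)]
```

Animating the weight curve from 0 to 1 over a handful of frames gives a smooth hand-off from keyframes into a capture take, or back again.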
Stage 6 - Tune
Adjust the mechanical behavior of the face to match artistic or data-driven goals. This process primarily consists of soft tissue tuning, an iterative technique that manipulates the soft tissue model in various ways to achieve the desired deformation of the face's surface across a range of facial expressions.
- Adjust the soft tissue regions of the face through sculpting.
- Adjust the eyelids.
- Adjust the mouth.
- Create maps to control facial behavior through rendering or to transfer to a game engine.