Lip Sync in Face Robot
The lip-sync tools in Face Robot help you create realistic facial animation using audio files as your starting point. Using speech recognition technology, Face Robot recognizes a standard set of phonemes (sounds of speech) and automatically determines where these phonemes occur in an audio file. Each phoneme is mapped to the face’s mouth animation controls, using the corresponding viseme pose (mouth shape for a phoneme). These poses drive the facial animation via a speech action clip.
Face Robot makes the process of generating lip-sync animation easy. You can quickly generate a large amount of lip-sync animation at a fairly high level of quality. Once the animation is generated, you can spend your time on the real art: getting the timing right and conveying the right emotional intent on the character's face for the dialogue.
A typical lip-sync process involves three basic steps. Here's how Face Robot can make things quicker and easier for you:
1. Break down the voice track into phonemes.
With Face Robot, this is done automatically for you when you create a speech clip based on an audio/text file combination.
2. Animate the character’s face to synchronize with the phonemes in the dialogue.
Face Robot provides a library of face poses (visemes) to match each phoneme sound. The visemes are automatically mapped to the phonemes when you generate the speech clip.
3. Animate the rest of the character’s face (and body) to coordinate with and reinforce the emotion of what is being said.
Using the animation mixer in Face Robot, you can combine the lip-sync animation with other animation on the head or face, such as mocap or keyframes.
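To make the phoneme-to-viseme step concrete, here is a minimal sketch of the kind of mapping Face Robot performs when it generates a speech clip. The phoneme symbols, viseme names, and the `visemes_for` function are all illustrative assumptions, not Face Robot's actual tables or API; note that several phonemes often share one viseme, since distinct sounds can produce the same mouth shape (for example, "p", "b", and "m" all close the lips).

```python
# Hypothetical phoneme-to-viseme lookup; symbols and pose names are
# illustrative, not taken from Face Robot.
PHONEME_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_teeth",   "v": "lip_teeth",
    "aa": "open_wide",  "iy": "smile_narrow",
    "uw": "rounded",    "th": "tongue_teeth",
}

def visemes_for(phoneme_track):
    """Map a timed phoneme track [(time, phoneme), ...] to viseme poses,
    falling back to a neutral pose for unknown phonemes."""
    return [(t, PHONEME_TO_VISEME.get(p, "neutral")) for t, p in phoneme_track]

# A short track, as speech analysis might produce for the word "map".
track = [(0.00, "m"), (0.12, "aa"), (0.30, "p")]
print(visemes_for(track))
# [(0.0, 'lips_closed'), (0.12, 'open_wide'), (0.3, 'lips_closed')]
```

In Face Robot itself this mapping happens automatically; the sketch only illustrates why editing a phoneme in the speech clip also changes which viseme pose drives the mouth.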
Once you have the basic lip-sync animation generated, the real fun begins!
• You can adjust the timing of whole words or individual phonemes to get perfect results. Or if the audio analysis doesn’t give exactly the results you want, you can change or add phonemes or words.
• You can adjust each viseme that is assigned to a phoneme for a perfect mouth shape, or create your own variations of the visemes.
• You can adjust the blending for each part of the mouth's animation (lips, jaw, tongue) for more natural-looking animation.
• Using the mixer, you can mix the lip-sync animation with motion capture or keyframed animation that’s applied to the entire face.
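The per-region blending mentioned above can be pictured as a weighted interpolation between a base pose and a viseme pose, with a separate weight for the lips, jaw, and tongue. The `blend_pose` function and the pose/weight values below are a conceptual sketch only; Face Robot exposes this through its mixer UI, not through this hypothetical function.

```python
def blend_pose(base, viseme, weights):
    """Linearly blend a base mouth pose toward a viseme pose, with an
    independent blend weight per region (lips, jaw, tongue)."""
    return {
        region: base[region] + weights.get(region, 1.0) * (viseme[region] - base[region])
        for region in base
    }

# Illustrative pose values in an arbitrary 0..1 range.
base   = {"lips": 0.0, "jaw": 0.0, "tongue": 0.0}
viseme = {"lips": 1.0, "jaw": 0.6, "tongue": 0.4}

# Soften the jaw and tongue so the motion looks less mechanical,
# while letting the lips hit the full viseme shape.
weights = {"lips": 1.0, "jaw": 0.5, "tongue": 0.7}

print(blend_pose(base, viseme, weights))
```

Dialing each region's weight independently is what lets you, for example, keep crisp lip shapes for a close-up while relaxing the jaw so it doesn't snap between phonemes.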
Level of Quality for Lip Syncing
The level of quality and detail of the lip sync that you need depends on two main things:
• The importance of the dialogue: is this the main character speaking, or a minor character having a background conversation?
• The distance of the character's face from the camera: is the character in close-ups a lot, or farther away?
If the quality doesn't have to be that high, you can move quickly through many segments of dialogue because the generation process is automated. In some cases, you can simply generate the lip-sync animation and you're good to go!
If the quality does matter, Face Robot gives you the tools you need to perfect the timing of the lip movements with the audio, tweak the facial expressions of each viseme, and blend the lips, tongue, and jaw movements so that they’re just right for close-ups.
Autodesk Softimage 2011 Subscription Advantage Pack