Facial Rigging in MGS4

Great article, translated from the Japanese Softimage site, on a very interesting setup.

Eric Chadwick posted it on polycount a few days ago; I was waiting for a real translation instead of… this. Thanks for the heads up. It’s an interesting way to leverage the best of morph targets (nuance) and bones (flexibility). When I was in school, I once took FaceGen morph targets and converted them to bones so I wouldn’t have to make my own phonemes or implement morph targets in our engine… This was before I could script, though: it ultimately produced a subpar result and took more time than remaking custom phonemes from scratch would have. Still, it was definitely one of my earlier introductions to the concept of technical art.

Ok, so let me get this straight. They’re animating with morph targets - and the targets are morphing a low res cage mesh. Then they’ve got face bones that are constrained to the movement of that cage. Finally, their game mesh is weighted to those constrained bones. Is that how it’s working? Interesting solution. I wonder if the morph target constrained bones have advantages over just using bones that have preset combinable poses. Any thoughts?

[QUOTE=bcloward;747]Ok, so let me get this straight. They’re animating with morph targets - and the targets are morphing a low res cage mesh. Then they’ve got face bones that are constrained to the movement of that cage. Finally, their game mesh is weighted to those constrained bones. Is that how it’s working? Interesting solution. I wonder if the morph target constrained bones have advantages over just using bones that have preset combinable poses. Any thoughts?[/QUOTE]

From the article, the advantage seems to be predictable results and a simpler system in the long run. It’s pretty smart if you ask me, because each bone just has to follow a specific vertex (I assume), and you don’t have to set up a complete system to deal with pose management, blending between poses, in-between shapes, and so on: all things that come for free with a built-in morph system.
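If that’s right, the core of it is almost trivial. Here’s a rough sketch in pure Python (all names invented, obviously not Kojima Productions’ actual code): blend the cage vertices from the weighted morph deltas, then snap each face bone to the cage vertex it is constrained to.

```python
# Hypothetical sketch of the cage-driven bone idea (not the actual MGS4 code).
# The cage is a low-res mesh deformed by morph targets; each face bone is
# constrained to one cage vertex, and the game mesh is skinned to the bones.

def blend_cage(base_verts, morph_deltas, weights):
    """Return cage vertex positions: base + sum(weight * delta) per morph."""
    out = [list(v) for v in base_verts]
    for name, weight in weights.items():
        for i, delta in enumerate(morph_deltas[name]):
            for axis in range(3):
                out[i][axis] += weight * delta[axis]
    return out

def drive_bones(cage_verts, bone_to_vertex):
    """Place each face bone at the cage vertex it is constrained to."""
    return {bone: tuple(cage_verts[idx]) for bone, idx in bone_to_vertex.items()}

# Example: one morph ("smile") pulling a mouth-corner vertex up and out.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
deltas = {"smile": [(0.0, 0.0, 0.0), (0.2, 0.3, 0.0)]}
bones = drive_bones(blend_cage(base, deltas, {"smile": 0.5}),
                    {"jaw": 0, "mouth_corner_L": 1})
print(bones)  # mouth_corner_L follows the half-weighted smile morph
```

The skinned game mesh then just follows the bones, so the runtime never has to evaluate morphs at all.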

I’m just curious how the skinning process went and whether it transferred from character to character. I suppose if you had enough bones in there you would get pretty accurate results compared to the original morphs.

Oh, and thanks for the link, Martin. Interesting read for sure.

That’s nice. Really nice.

We’ve always outsourced our facial capture and processing (for story characters), and our ambient characters are done internally by our animation guys.

Interesting solution. I guess using the face mask thing must make it easy to apply to other characters should you need to. At the very least, the face could be skin-wrapped to it to speed up skinning. Skin wrap is one of my favourite shortcut tools.
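For what it’s worth, the guts of a skin wrap-style transfer can be as simple as copying weights from the nearest source vertex. A rough sketch (names invented for illustration, not any package’s implementation):

```python
# Rough sketch of a nearest-vertex weight transfer, the basic idea behind
# skin wrap; brute-force search, no falloff or blending between sources.

def transfer_weights(src_verts, src_weights, dst_verts):
    """For each destination vertex, copy the skin weights (a dict of
    bone -> weight) from the closest source vertex."""
    def dist2(a, b):
        return sum((ax - bx) ** 2 for ax, bx in zip(a, b))

    return [dict(src_weights[min(range(len(src_verts)),
                                 key=lambda i: dist2(src_verts[i], dv))])
            for dv in dst_verts]

# Two cage vertices with known weights; the game-mesh vertex near the first
# cage vertex inherits its weights.
cage = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
cage_w = [{"jaw": 1.0}, {"mouth_corner_L": 1.0}]
print(transfer_weights(cage, cage_w, [(0.1, 0.0, 0.0)]))  # -> [{'jaw': 1.0}]
```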

We did a demo of a karaoke game on PS2 a few years back, and we used a simple set of phoneme morphs driven by a script generated by Magpie Pro. It was a very simple solution but worked pretty well and was very easy to sync with new songs; it also meant the same script could be used on all the characters. Initially we were going to drive jaw rotation from the volume coming in from the microphone too, but that didn’t really work out. The next plan was to dial down the level of the morphs depending on volume input, but we didn’t get that far.
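That unfinished volume idea would have looked something like this (a hypothetical sketch, not the actual demo code): scale the script-driven phoneme weights by the normalized mic level.

```python
# Hypothetical sketch of the unfinished volume idea: scale the phoneme morph
# weights from the Magpie-style script by the normalized microphone level, so
# quiet singing reads as smaller mouth shapes.

def scale_phonemes(phoneme_weights, mic_level, floor=0.2):
    """phoneme_weights: dict of morph name -> 0..1 from the lip-sync script.
    mic_level: 0..1 normalized microphone volume. The small floor keeps the
    mouth from going completely slack between louder notes."""
    gain = floor + (1.0 - floor) * max(0.0, min(1.0, mic_level))
    return {morph: weight * gain for morph, weight in phoneme_weights.items()}

print(scale_phonemes({"AH": 0.8, "M": 0.1}, mic_level=0.5))  # half volume
```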

[QUOTE=Rick Stirling;750]That’s nice. Really nice.

We’ve always outsourced our facial capture and processing (for story characters), and our ambient characters are done internally by our animation guys.[/QUOTE]

Rick, you guys used Image Metrics performance capture for the story cutscenes in GTA4 right? Do the characters use bones or morphs or both for the facial animation in that?

And to the Bioware guys. What did you do for the facial animation in Mass Effect? I thought that was very good too. Is that hand animated or did you use some sort of auto solution?

MGS4 is listed as a client of ImageMetrics as well.

Thanks for the plug, guys. My brother has also translated the part about the character pipeline; I posted that this morning. [link]

Yeah, we used Image Metrics - I’m pretty sure they did a talk on the tech the other day (and they just picked up an award for it last night at the Developer Awards 2008).

The cutscene models use bone-driven poses with corrective morphs over the top. The benefit is that the morphs can then be LODed out if required, to reduce processing for background characters.

Certain cutscene characters didn’t receive morphs at all, and some of the minor ones had a reduced rig that I set up (essentially a copy of the in-game stuff).
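The LOD side of that can be as simple as skipping the corrective deltas past a certain LOD level. A hypothetical sketch, not our actual runtime code:

```python
# Hypothetical sketch of LOD-able correctives: the bone rig always runs, but
# the corrective morph deltas are skipped for distant/background characters.

def deform_face(skinned_verts, corrective_deltas, corrective_weight,
                lod_level, max_corrective_lod=0):
    """skinned_verts: positions already deformed by the bone-driven poses.
    corrective_deltas: per-vertex offsets sculpted on top of the bone pose.
    lod_level: 0 = hero/cutscene character, higher = cheaper background LODs."""
    if lod_level > max_corrective_lod or corrective_weight == 0.0:
        return skinned_verts  # bones only; the morph pass is dropped entirely
    return [tuple(p + corrective_weight * d for p, d in zip(v, delta))
            for v, delta in zip(skinned_verts, corrective_deltas)]

# A hero at LOD 0 gets the full pose; the same pose at LOD 2 is bones only.
pose = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
fix = [(0.0, 0.05, 0.0), (0.0, 0.0, 0.0)]
print(deform_face(pose, fix, 1.0, lod_level=0))
print(deform_face(pose, fix, 1.0, lod_level=2))
```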

Thanks Rick for the explanation.

And thanks too to Chris and Chris’s brother for the translation. Is he doing the full article? There’s a lot of it.

For those that don’t subscribe to the feed, Chris has just put up the next MGS4 tech translation:

And for those that don’t subscribe to his blog, what is wrong with you?! :D

[QUOTE=robinb;754]And to the Bioware guys. What did you do for the facial animation in Mass Effect? I thought that was very good too. Is that hand animated or did you use some sort of auto solution?[/QUOTE]

I can’t go into details, but for Mass Effect we created an auto solution based on the tagging system that comes with FaceFX, only pumped up on steroids to handle not only lip-sync and emotions but also head and eye movement, glances, body weight-shifting, and basic gestures based on simple neurolinguistic (NLP) observations. This was complemented by an algorithm that chose cameras from pre-created sets based on simple cinematic rules.

This system was called Robobrad, named after designer Brad Prince, who led conversation creation on Jade Empire but “defected” to BioWare Austin and had to be replaced… by a robot. The system allowed entire conversations to be created with a single button press, though the best cinematic conversations still needed to be made by real people, since that simply can’t be beaten; Robobrad just gave the guys doing those conversations a head start and caught the rest (virtually all of the party-member dialogues on the Normandy remained Robobrad-only, for example).
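None of Robobrad’s internals are something that can be shared, but the flavor of a rule-driven pass like that is easy to sketch. Everything below is invented for illustration and has no relation to the real system or to FaceFX’s API:

```python
# Invented illustration of a rule-driven conversation pass in the spirit of
# what's described above; no relation to BioWare's actual code.

RULES = [
    (lambda line: "?" in line, "head_tilt"),         # questions read as a tilt
    (lambda line: "!" in line, "emphatic_gesture"),  # exclamations get emphasis
    (lambda line: any(w in line.lower().split() for w in ("i", "me", "my")),
     "hand_to_chest"),                               # self-reference cue
]

def auto_tag(dialogue_lines):
    """Return (line, gestures) pairs from simple text cues: the kind of
    head start a human animator could then polish or replace."""
    return [(line, [g for predicate, g in RULES if predicate(line)])
            for line in dialogue_lines]

for line, gestures in auto_tag(["Who are you?", "I trusted you!"]):
    print(line, "->", gestures)
```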

We have the “real” Brad at our studio - and we’re also using Robobrad. :0)

I’ve been pretty happy with the results we’re getting from our facial animation system - but I don’t think I can talk about it yet. I’ll post as much as I can on it as soon as I can.

Thanks guys, very interesting.

Hi all,

Just so you know, Softimage have just released an official English article on the Metal Gear Solid 4 workflow. It seems a little beefier than it was before, so it might be worth a reread:
