Facial pipeline queries

We are looking for a facial system that is faster and more accurate for our game project. Here is my list of questions:

  1. What are the best solutions studios use these days, in terms of both hardware and software, that can also be used for cinematics?
  2. Which is better, a bone-based system or a morph-based system? What are the pros and cons of each?
  3. What parts of the process can be automated?
  4. How good are facial mocap systems?
  5. Any thoughts on markerless capture systems?

Thanks.

i think a lot of these questions are very dependent on the type of project that you are making. depending on the volume of work, your choices may already be decided for you to a degree.

  1. facefx is still pretty high on the list, but the studios that have the best results from facefx have put a lot of work into it. if your game is cinematic or dialogue heavy, you may not have the luxury of having an animator hand key every piece of dialogue. facefx can be good for animation on secondary characters or rough pass animation that animators can quickly touch up.

  2. i think this is a big one that is dependent on the project. morphs may give you better results, but they are expensive in memory and performance. if you plan on having 50 characters on screen, you may not be able to use morph targets. most current games use bone based systems. you can still put out high quality animation with bone based systems, and they are easily extendable.

  3. the whole creation pipeline (rig, skin, LODs) can and should be automated. doing pipeline tasks like rigging heads by hand leaves a large margin for error. the more you can automate, the easier it is to edit later, the faster you can put out work, and the more characters you can create.
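to put rough numbers on the morph-vs-bones cost question in point 2, here's a back-of-the-envelope memory estimate. all the figures (vertex count, shape count, the 4-influence limit) are made-up example numbers, not from any real project:

```python
# illustrative only: rough per-character storage for morph targets
# vs bone skinning data, with hypothetical example numbers.

BYTES_PER_FLOAT = 4

def morph_target_bytes(num_verts, num_shapes):
    # each morph target stores a full (x, y, z) delta per vertex
    return num_verts * 3 * BYTES_PER_FLOAT * num_shapes

def bone_skinning_bytes(num_verts, influences_per_vert=4):
    # per influence: a 4-byte bone index plus a 4-byte weight
    return num_verts * influences_per_vert * (4 + BYTES_PER_FLOAT)

verts, shapes, characters = 3000, 40, 50
print(f"morphs: {morph_target_bytes(verts, shapes) * characters / 1e6:.1f} MB")
print(f"bones:  {bone_skinning_bytes(verts) * characters / 1e6:.1f} MB")
```

with these made-up numbers the morph data comes out roughly 15x larger than the skinning data, which is the scaling problem in a nutshell once you have 50 faces on screen.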

i don’t have any experience with facial mocap, but my general opinion is that it provides similar results to facefx (or body mocap) - it can give you a rough first pass, but it will certainly need animator touchups. facial mocap suffers from the same ‘swimmy’ artifacts that body mocap does.

  1. Bone rigs will always give you a wider range of usability. If you can afford enough facial bones, in many cases you can achieve visual quality on par with blendshapes. Blends tend to “look nicer” but limit animation. Bones are more animatable but require better animation to achieve good quality. If your engine supports both, you can always use corrective blends on top of bones (if needed) for even better results on those tough-to-hit shapes. We’ve always had less-than-desirable results with all-blend/morph facial game rigs and don’t recommend them.

  2. On the rigging side, practically all of it can be automated. The things you don’t want to automate are the artistic tasks such as skin-weighting, joint placement (in most cases), and any editing/modeling of expressions or blendshapes. Automate all the technical stuff that is easy to screw up and takes too long, thus leaving the riggers time to build nice rigs.

4/5. Our company does marker-less facial capture and we get awesome results. We also have head cams you can use while filming your body mocap, so the acting/eyelines/etc. all match up together. I’m biased of course :D but I think our system is great.
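To make the “corrective blends on top of bones” idea in point 1 concrete, here is a minimal sketch of one way a corrective shape could be driven by a joint angle. The function names, the linear ramp, and the 0-45 degree range are my own assumptions for illustration, not any particular engine’s API:

```python
def corrective_weight(angle_deg, start=0.0, end=45.0):
    # ramp the corrective shape from 0 to 1 as the driving joint
    # (e.g. the jaw) rotates from `start` to `end` degrees
    t = (angle_deg - start) / (end - start)
    return max(0.0, min(1.0, t))

def apply_corrective(skinned_verts, corrective_deltas, weight):
    # add the weighted sculpted deltas on top of the bone-skinned result
    return [(x + dx * weight, y + dy * weight, z + dz * weight)
            for (x, y, z), (dx, dy, dz) in zip(skinned_verts, corrective_deltas)]

# half-open jaw -> corrective shape at half strength
w = corrective_weight(22.5)
verts = apply_corrective([(0.0, 1.0, 0.0)], [(0.0, -0.2, 0.0)], w)
```

The nice part of this setup is that the corrective is fully automatic at runtime: animators only key the bones, and the sculpted fix rides along whenever the joint enters the problem range.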

I can recommend auto-generating skinning, joint positions and poses based on reference assets, but allowing unrestricted artist tuning after the fact. It should save you loads of time, with all the benefits. I also think the IM system is very nice - but it would be even cooler if one was sitting on my desktop, driving my facial rigs in real time :-)

oh right, imagemetrics does pretty good stuff too!

if you get your pipeline down and are able to do video capture for the markerless stuff while doing voiceover, i think imagemetrics is a totally solid solution. we were not so fortunate on this project.

jayg, i heard something about you guys licensing the targeting software to outside companies - is that coming down the line?

We do use automation and math to auto-generate skinning, joint placement and expressions. However, it’s never left that way. It’s only to get you a good percentage of the way there to save time. The rigger(s) always go in and fine-tune everything, as Sune mentioned.
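A minimal sketch of one such auto-generation step: seeding skin weights on a new head from an already-weighted reference head via nearest-vertex transfer. The data layout and helper names here are hypothetical, and real tools do something much smarter (surface-space or heat-based transfer), but it shows the “automate the first pass, then fine-tune” shape of the pipeline:

```python
import math

def nearest_index(point, ref_points):
    # index of the closest reference vertex (brute force is fine for a sketch)
    return min(range(len(ref_points)),
               key=lambda i: math.dist(point, ref_points[i]))

def transfer_weights(ref_verts, ref_weights, new_verts):
    # seed each new vertex with the skin weights of its nearest
    # reference vertex; a rigger then fine-tunes from there
    return [ref_weights[nearest_index(v, ref_verts)] for v in new_verts]

ref_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
ref_weights = [{"jaw": 1.0}, {"skull": 1.0}]
new_verts = [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0)]
seeded = transfer_weights(ref_verts, ref_weights, new_verts)
```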

[QUOTE=jeremyc;6400]jayg, i heard something about you guys licensing the targeting software to outside companies - is that coming down the line?[/QUOTE]

Yeah, actually it’s in use now and we’re getting great results. The software is called Faceware. Check out the IM website for more info if you’re interested or toss me an email.

[QUOTE=Sune;6398]I also think the IM system is very nice - but it would be even cooler if one was sitting on my desktop, driving my facial rigs in real time :-)[/QUOTE]

They’re working on that. :P

You know, the funny thing is, of all the major studios I asked this question, they all said they have used IM somewhere or other, hehe. IM seems to have a rock-solid facial pipeline that is giving remarkable results. What I’m curious about is how many artists normally work on a face (rigging/capture/animation), and how much time it takes to deliver an animation, say a minute long, using the IM pipeline.