We’re planning on using FaceFX to handle our facial animation. I’m curious what kinds of experiences others have had with it, since I have very little experience with it myself but only a short period of time to get a lot done.
One of the big things I’m curious about is the hand-keying ability within FaceFX. I didn’t even know it was a feature until opening it up and looking at the docs. Has anyone experimented with setting up a hand-key rig within FaceFX? How are the curve editing tools and other general animation tools compared to a 3D package like Max or Maya?
I use FaceFX a lot and really like it. I don’t do a lot of hand-keying in it, and when I did, it felt a bit clumsy - probably because I wasn’t used to the workflow. It’s different from Max or Maya and takes a bit of getting used to - but it’s usable.
I really like how easy it is to set up complex facial controls in FaceFX. It quickly does things that would be much harder, and often impractical, in Max. It does pretty nice audio analysis and lip sync generation too, if you work hard on refining your set of phonemes.
Ben, have you compared the phoneme data from FaceFX with MotionBuilder’s built-in phoneme generator? If you have, I’m curious what your findings were. Also, since there’s no pricing on the site, is it affordable software for a small company doing animation (not games)? I don’t need exact pricing - I know I should contact them - but an estimate could save me an email and a product manager spamming me about the purchase.
I haven’t used MotionBuilder, so I can’t really compare them. I will say this, though - before I used FaceFX, I did all my lip sync animation by hand. I really enjoyed doing it that way and got pretty good at it. The volume of facial animation on our project makes doing it by hand pretty much impossible, so we’ve worked with FaceFX to get its automatically generated animation pretty close to what I was doing by hand before. I’ve been pretty impressed with what it can do.
As far as pricing goes - I have no idea. Our company has a company-wide license that they bought many years ago - so when I started my job, they just handed it to me. Sorry I can’t be more helpful in that area.
FaceFX would be my favorite software for doing lip sync, but we work with emotionFX, which already has a lip sync tool. Unfortunately you can’t edit anything - it analyzes the audio and that’s it - which is kind of crappy when you want to polish important scenes.
Hey JHN, we tried MotionBuilder’s phoneme data and found it rather weak, so we used FaceFX, which gave us good results quickly. In terms of cost it’s fairly cheap (I can’t remember the exact price, as I had a different PC with the old emails back then), and it’s licensed on a per-title basis, so you buy it once and as many people as you want can use it.
I’m currently in the process of writing a big, encompassing FaceFX doc for UDN, though that covers the Unreal integration and not the standalone version. I think it’s a good solution. As a partner I’m probably not allowed to voice personal opinions too much, but it does the job quite well. I’ll agree with what others have said: the hand-keying inside is a nice feature, but it feels clumsy. You have to make a lot of bone poses to get a face rig with a good range for hand-key work.
One other super swanky feature is the template. I made one template ages ago, and if you make sure all your characters share a lot of the same bone poses (names, not actual poses), then you can sync to template and skip all the setup work on multiple characters. That includes workspaces, the face graph, and all!
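If you’re scripting your pipeline around that, the key point is that sync-to-template matches bone poses by name, so a quick sanity check like the sketch below can catch mismatches before you sync. This is plain Python with made-up pose names - not a real FaceFX API, purely illustrative:

```python
# Hypothetical pre-sync check: sync-to-template matches bone poses
# by NAME, so a character missing a pose the template expects will
# lose that part of the setup. These pose lists would come from your
# own exporter or rig metadata, not from any real FaceFX call.

template_poses = {"Jaw_Open", "Lip_UL_Up", "Lip_LL_Down", "Blink_L", "Blink_R"}

def check_poses(character_name, character_poses):
    """Report bone poses the template expects but the character lacks."""
    missing = template_poses - set(character_poses)
    if missing:
        print(f"{character_name} is missing poses: {sorted(missing)}")
    else:
        print(f"{character_name}: all template poses present, safe to sync.")

check_poses("Guard_01", ["Jaw_Open", "Lip_UL_Up", "Blink_L", "Blink_R"])
```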
For cinematic work, I’ve been looking into Voice-O-Matic to incorporate into my face rig. I like some of the settings it has, like anticipation (how many frames before the actual sound to start ramping up that shape) and smoothing (the higher it is, the fewer keys it plots). We’re thinking of using it as a base to start from, or for background characters and such. But for in-game work, we’ll continue to use FaceFX.
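To make those two knobs concrete, here’s my rough reading of what anticipation and smoothing amount to, sketched over a generic list of (frame, value) keys. This is illustrative Python, not Voice-O-Matic’s actual algorithm:

```python
# Illustrative sketch of "anticipation" and "smoothing" on a list of
# (frame, value) keys. Not the plugin's real internals -- just my
# reading of what the two settings do.

def anticipate(keys, frames_early):
    """Shift every key earlier so the mouth shape ramps up
    before the sound actually lands."""
    return [(frame - frames_early, value) for frame, value in keys]

def smooth(keys, tolerance):
    """Drop interior keys that deviate less than `tolerance` from a
    straight line between their neighbors; higher tolerance -> fewer keys."""
    if len(keys) < 3:
        return list(keys)
    kept = [keys[0]]
    for prev, cur, nxt in zip(keys, keys[1:], keys[2:]):
        # Linear prediction of the middle key from its neighbors.
        t = (cur[0] - prev[0]) / (nxt[0] - prev[0])
        predicted = prev[1] + t * (nxt[1] - prev[1])
        if abs(cur[1] - predicted) >= tolerance:
            kept.append(cur)
    kept.append(keys[-1])
    return kept

keys = [(10, 0.0), (12, 0.4), (14, 0.8), (16, 0.82), (18, 0.3), (20, 0.0)]
# Shift 2 frames early, then drop the key at frame 10, which sits on
# a straight line between its neighbors:
print(smooth(anticipate(keys, 2), 0.05))
```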
One thing I’ve realized about hand-keying in FaceFX is that without the accompanying body animation, it can be difficult to sync up motion if you’re doing more than just lip sync.
Anyone have any insight into how to deal with volume changes in audio? For example, if I want a guy to yell, I think I could use gestures to generate the actual curves, but wouldn’t that put the character in a sort of yelling pseudo-state that he’d be in at all times? Can FaceFX switch gestures on the fly to go from speaking to yelling?
For that, we’re using two sets of phonemes - one for loud or angry speech and one for normal speech. The loud speech phonemes are exported as additive. In FaceFX we can dial up and down how much they’re added with a speech volume parameter that can be animated.
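In case it helps to see the idea, here’s a toy sketch of that additive blend. The curve data and names are made up and this isn’t FaceFX code - it just shows the layering:

```python
# Toy version of the two-phoneme-set approach: the normal-speech curve
# plays as-is, and the loud-speech curve is layered on top, scaled by
# an animatable "speech volume" parameter. All names and numbers here
# are invented for illustration; this is not FaceFX code.

normal_jaw_open = [0.0, 0.3, 0.6, 0.4, 0.1]   # per-frame curve values
loud_jaw_open   = [0.0, 0.2, 0.3, 0.3, 0.1]   # additive delta for yelling
speech_volume   = [0.0, 0.0, 1.0, 1.0, 0.5]   # animated 0..1 parameter

final = [
    base + volume * extra
    for base, extra, volume in zip(normal_jaw_open, loud_jaw_open, speech_volume)
]
print(final)  # roughly [0.0, 0.3, 0.9, 0.7, 0.15]
```

Dialing speech_volume up only where the line gets loud avoids the permanent yelling pseudo-state - the additive layer contributes nothing while the parameter sits at zero.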
Hey… We are currently in the process of evaluating FaceFX.
So far we’ve been making good progress, and I believe that with further testing/troubleshooting and learning we can achieve the results we want ;o)
I have, however, come across my first ‘sticky’ point! I was wondering if you guys have faced similar issues…
On testing our pipeline, I’m finding that the FBX file exported from FaceFX contains large spikes in the rotational data on certain bones (eyelids and the bottom lip in particular). The roll/flip in the bones is quite significant, but only in selected areas. We’ve faced similar problems when importing skeletal mocap data, and those can be fixed simply by plotting the animation in MotionBuilder (unroll filter). For some reason that does not work with the FaceFX files (?), and it would be an issue if we intend to automate the process as much as possible (i.e. I don’t want to manually fix every FaceFX file we generate).
Currently, our pipeline is:
3D Studio Max (asset creation/bone posing) > FaceFX (animation creation/FBX export) > MotionBuilder (final scene creation)
So… my initial thoughts are:
- FaceFX is not exporting its own bone data correctly.
- I am not exporting the data correctly (i.e. am I being stupid?! Can you - do you have to - ‘bake’ the animation curves somehow in FaceFX before exporting?)
- Is there something in our base Max asset (e.g. when creating the bone poses or the FaceFX Actor in Max) that may affect the final exported animation? (e.g. bone values, Max controllers, etc.)
I hope I have missed something obvious (I usually do ;o)
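In the meantime, as a stopgap I’ve been toying with scripting my own unroll pass over the exported curves - something along these lines (plain Python over a single hypothetical rotation channel, not MotionBuilder’s actual Unroll filter, just the same idea by hand):

```python
# Rough sketch of the "unroll" idea over one Euler rotation channel:
# whenever consecutive samples jump by more than 180 degrees, assume
# the representation flipped and shift later samples by 360 so the
# curve stays continuous. Hypothetical data; not MotionBuilder code.

def unroll(samples, period=360.0):
    """Remove +/- period discontinuities from a rotation curve."""
    out = [samples[0]]
    offset = 0.0
    for cur in samples[1:]:
        delta = (cur + offset) - out[-1]
        # Fold jumps larger than half a period back into range.
        while delta > period / 2:
            offset -= period
            delta -= period
        while delta < -period / 2:
            offset += period
            delta += period
        out.append(cur + offset)
    return out

# e.g. an eyelid channel that spikes from 5 deg down to -355 deg:
print(unroll([0.0, 5.0, -355.0, -350.0, 10.0]))
# -> [0.0, 5.0, 5.0, 10.0, 10.0]
```

Of course that only handles single-axis wraps; if the spikes are real gimbal flips across multiple axes at once, a quaternion-continuity pass would probably be the better route.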