Facial Rigging Setup: BRINK

Hello Tech Artists,

I have a question for you all :) I am looking for some advice on facial rigging for games before I start working on a rig for the BRINK heads I have been building (still a W.I.P.!).

I am modelling these heads with the same topology and am hoping to build a rig that can share animation data between them. I am unsure about a few things and was hoping someone would be able to help out.

  • Poly count? What is acceptable these days for a game-res head? I am thinking around 5-6 thousand tris, but I am unsure on this.

  • Rig types. From what I can see you have three types:

    Bones/Joints
    Curve Based
    Blendshape
    

My understanding is that blendshapes and game engines don't get along. Bones/joints look to be the most commonly used from what I can see, but I have also come across curve-based rigs and am unsure if you can use these in game engines. If I were to go with bones, how many are suitable for a high-fidelity game rig?

I am also looking to use mo-cap data and would like advice on how to transfer the rig and animation data from head to head. I am planning to use the Faceware plugin for Maya but am open to hearing any other solutions or workflows that are out there!

Any help from you guys would be awesome! :slight_smile:

Thanks in advance.

Is this for an in-game rig or for cutscenes? A lot of studios swap out facial rigs depending on what part of the game it’s in. This way they can have higher fidelity in the scripted events/cutscenes and not require that same amount of memory for in-game stuff.

These days facial rigs are getting a bit more love so you may see head meshes with 5-6 thousand polygons. From what I’ve seen in the past year I’d say the average is about 2-3 thousand. Lowest I’ve seen for a hero character in the past year was about 1200 polygons. If you shoot for somewhere in that range you should be good.

Most game engines support blendshapes, but the memory cost associated with blends as opposed to joints/bones is significantly higher so in most cases you’ll find joints. If you can use both and build a hybrid setup, that is always my recommendation.
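To give a rough idea of what I mean by a hybrid (this is just a minimal Maya Python sketch, not a production setup, and the node names `jaw_jnt`, `face_mesh`, `mouthOpen_crv_geo` are placeholders): let a joint handle the broad motion and drive a small corrective shape off that joint, so the shape only has to fix what the skinning can't.

```python
# Minimal sketch of a joint + blendshape hybrid in Maya:
# the jaw joint does the broad rotation, and a corrective
# blendshape is keyed against that rotation to fix volume loss.
# All node names below are placeholders.
import maya.cmds as cmds

# Add the corrective target as a blendshape on the face mesh.
blend = cmds.blendShape('mouthOpen_crv_geo', 'face_mesh',
                        name='face_correctives')[0]

# Drive the corrective weight from the jaw joint's rotateX with
# set-driven keys: 0 when the mouth is closed, 1 when fully open.
cmds.setDrivenKeyframe(
    '{}.mouthOpen_crv_geo'.format(blend),
    currentDriver='jaw_jnt.rotateX',
    driverValue=0.0, value=0.0,
)
cmds.setDrivenKeyframe(
    '{}.mouthOpen_crv_geo'.format(blend),
    currentDriver='jaw_jnt.rotateX',
    driverValue=25.0, value=1.0,
)
```

Whether the driven shape survives export obviously depends on the engine, which is why the joint-only fallback matters.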

I’ve never seen a game engine that supported curve-based facial rigs, but then again I’ve never really looked for one so they might exist. :slight_smile:

As long as the point count doesn’t change and your joint orientation is the same across your rigs you shouldn’t have too much fuss transferring animations across characters. They all have about the same proportions so you want to try and get the joint positioning to be in the same relative spot for each character. It will never look perfect, but if you keep in mind when you animate that it’s going to be working on three characters and not just one (keep it slightly more generic in some cases) it will work better.
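As a rough illustration of the transfer itself (Maya Python; the namespaces and joint list are placeholders, and it assumes both rigs share joint names), copying keys between matching joints is about this simple:

```python
# Push facial animation from one head rig to another, assuming both
# use the same joint names under different namespaces
# (e.g. headA:jaw_jnt -> headB:jaw_jnt). Names are placeholders.
import maya.cmds as cmds

SOURCE_NS = 'headA'
TARGET_NS = 'headB'
FACE_JOINTS = ['jaw_jnt', 'brow_l_jnt', 'brow_r_jnt',
               'cheek_l_jnt', 'cheek_r_jnt']

for joint in FACE_JOINTS:
    src = '{}:{}'.format(SOURCE_NS, joint)
    dst = '{}:{}'.format(TARGET_NS, joint)
    # copyKey returns the number of animation curves copied,
    # so joints with no keys are simply skipped.
    if cmds.copyKey(src):
        cmds.pasteKey(dst, option='replaceCompletely')
```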

We use joints for animation in Brink, but the joints are driven by blendshapes in Motionbuilder.

All the faces in Brink share the same topology so we just copy skin weights between them since there is a direct 1-to-1 mapping of vertices.
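In Maya that step is roughly this (mesh names are placeholders; with identical vertex order, closest-point association with a name-based influence match behaves as a 1-to-1 transfer):

```python
# Copy skin weights between two heads that share the same topology.
# Mesh names below are placeholders.
import maya.cmds as cmds

def find_skin_cluster(mesh):
    """Return the first skinCluster deforming the mesh, if any."""
    history = cmds.listHistory(mesh) or []
    clusters = cmds.ls(history, type='skinCluster')
    return clusters[0] if clusters else None

src_skin = find_skin_cluster('headA_mesh')
dst_skin = find_skin_cluster('headB_mesh')

cmds.copySkinWeights(
    sourceSkin=src_skin,
    destinationSkin=dst_skin,
    noMirror=True,
    surfaceAssociation='closestPoint',
    influenceAssociation='name',
)
```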

The different faces are handled by an additive transform on the joints which is applied on top of the facial animation to make sure the joint positions match the different proportions, since otherwise you’d have to have all the facial features being in the same place in order to avoid distortion. This was a custom solution in our engine, I’m not sure how you’d handle it yourself if it’s going into a game.
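To illustrate the idea only (this is not our engine code, just a rough Maya Python sketch with placeholder names): record each head's per-joint delta from a reference neutral pose, then add that delta on top of the shared animation so the joints land on that head's proportions.

```python
# Illustrative sketch of the additive-offset idea, not the actual
# runtime implementation. Namespaces and joint names are placeholders.
import maya.cmds as cmds

REFERENCE_NS = 'refHead'   # head the shared animation was authored on
TARGET_NS = 'headB'        # head that needs the offset applied
FACE_JOINTS = ['jaw_jnt', 'brow_l_jnt', 'brow_r_jnt']

# 1. With both rigs in their neutral pose, record the deltas.
offsets = {}
for joint in FACE_JOINTS:
    ref_pos = cmds.xform('{}:{}'.format(REFERENCE_NS, joint),
                         query=True, translation=True, worldSpace=True)
    tgt_pos = cmds.xform('{}:{}'.format(TARGET_NS, joint),
                         query=True, translation=True, worldSpace=True)
    offsets[joint] = [t - r for t, r in zip(tgt_pos, ref_pos)]

# 2. Per frame, add the stored delta on top of the animated
#    reference position (the engine does this at runtime).
def apply_offsets():
    for joint, delta in offsets.items():
        anim_pos = cmds.xform('{}:{}'.format(REFERENCE_NS, joint),
                              query=True, translation=True, worldSpace=True)
        cmds.xform(
            '{}:{}'.format(TARGET_NS, joint),
            translation=[a + d for a, d in zip(anim_pos, delta)],
            worldSpace=True,
        )
```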

The stuff JayG said about the polygon counts is pretty much spot-on with what we’re using these days. In fact, everything JayG said makes sense and is good advice. :slight_smile: