Rigging in game vs animation

I’m a character rigger currently working in the animation industry, and I’m looking to switch into the game industry.

I’m wondering if any folks here have similar experience? What kind of changes should I expect?

In particular, I always hear that a big difference is render time: in animation a single frame can take hours, while in games we are talking about real-time rendering at around 16ms per frame. How does that affect rigging? Does that mean the rig’s motion system needs to be really fast? Or do you just bake out joint transforms for each animation so that only the deformation computation runs at runtime? Or… am I making no sense here?

Any input would be really helpful :)



Most systems use baked/plotted FK joint data as the base.
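For illustration, here’s a minimal sketch of that bake step in Maya Python, assuming a hypothetical export skeleton under a joint named “root”; bakeResults samples every frame so whatever drove the joints collapses to plain keyed channels:

```python
# Minimal sketch: bake FK joint data over the timeline (joint names hypothetical).
import maya.cmds as cmds

start = cmds.playbackOptions(q=True, min=True)
end = cmds.playbackOptions(q=True, max=True)

# Gather the export skeleton under a hypothetical "root" joint.
joints = ['root'] + (cmds.listRelatives('root', allDescendents=True, type='joint') or [])

# Sample every frame; constraints and other inputs collapse to plain keys.
cmds.bakeResults(joints, t=(start, end), simulation=True,
                 attribute=['tx', 'ty', 'tz', 'rx', 'ry', 'rz'])
```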

Depending on the number of characters the game has on screen, you’re looking at joint counts per character of somewhere between 25 and 200 (from hundreds of on-screen characters down to a handful).

Depending on the game engine, you also need some support joints. The most common is a reference/position node that sits hierarchically above the hips and controls the character’s location and blend space.

Max influences per vertex is usually 4.
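As a sketch of how you might enforce that limit in Maya, assuming a hypothetical mesh named “body_geo” with an existing skinCluster, you can prune near-zero weights and then cap the influence count:

```python
# Sketch: cap a skin to 4 influences per vertex (mesh name is hypothetical).
import maya.cmds as cmds

mesh = 'body_geo'
skin = cmds.ls(cmds.listHistory(mesh), type='skinCluster')[0]

# Drop near-zero weights first, then enforce a hard cap of 4 influences.
cmds.skinPercent(skin, mesh + '.vtx[*]', pruneWeights=0.01)
cmds.skinCluster(skin, e=True, obeyMaxInfluences=True, maximumInfluences=4)
```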

Some game engines don’t support everything your 3D software supports, for example joint orients, DOFs, etc.

I would suggest trying out Unity. It’s free and there are many tutorials for it. Build a real-time rig for it and you’ll probably learn some more.

As far as I know (and bearing in mind I only work on mobile games), the current limit in Unity is 64 bones per skinned mesh, 4 bone weights per vertex, and no morphs/shapes out of the box (though you could probably write something that supports them). You’d also probably want to make use of a shadow rig (just a hierarchy of nulls with their transforms baked) that you export into your engine, while driving the shadow rig with a more animator-friendly rig.
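A minimal sketch of that shadow-rig setup in Maya Python, with hypothetical deform joint names; the nulls are parent-constrained to the animator rig’s deform joints, baked, and the constraints deleted so only clean keyed transforms get exported:

```python
# Sketch: build and bake a shadow rig of nulls (joint names hypothetical).
import maya.cmds as cmds

deform_joints = ['hips', 'spine', 'chest', 'head']  # driven by the animator rig
shadow = []
constraints = []

for j in deform_joints:
    # One null per deform joint; re-parenting the nulls into a matching
    # hierarchy is omitted here for brevity.
    null = cmds.group(empty=True, name=j + '_shadow')
    shadow.append(null)
    constraints.append(cmds.parentConstraint(j, null)[0])

start = cmds.playbackOptions(q=True, min=True)
end = cmds.playbackOptions(q=True, max=True)
cmds.bakeResults(shadow, t=(start, end), simulation=True)

# The shadow hierarchy now carries plain baked keys and can be exported alone.
cmds.delete(constraints)
```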

Yeah, Elyaradine and primal_r are pretty much dead on.
Right now my rig is around 35 joints with a simple skin binding and 4 influences.
This will probably drop even lower when we port our current game to the 3DS and mobile.
Unity will give you a really good idea of what can work well.

We pretty much bake the joints down on export, which handles anything like constraints, SDKs, motion-path animations, etc. Once it’s baked down, the engine is just pulling in the joint/vertex transform information.
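As a rough sketch of what that export step can look like with Maya’s FBX plugin (the file path and the “root” selection are assumptions for the example):

```python
# Sketch: bake on export via Maya's FBX plugin (path and root are hypothetical).
import maya.cmds as cmds
import maya.mel as mel

cmds.loadPlugin('fbxmaya', quiet=True)

# Let the FBX exporter bake constraints, SDKs, etc. down to plain keys.
mel.eval('FBXExportBakeComplexAnimation -v true')

cmds.select('root')  # hypothetical export root joint
mel.eval('FBXExport -f "C:/exports/character_run.fbx" -s')
```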

Matt -

The actual rigs themselves in the DCC app can be as complex as you want: anything to speed up the animators’ workflow and make for more believable performances. But that always has to be balanced against what data you can actually output to the game (yes, baked-out joint data). So from a rigging perspective the core of the rigs is pretty similar to any animation rig. We’ve got custom soft-IK systems and all sorts of twist joints to correct skinning, and we’re also looking at corrective blendshapes, but again, we’re limited by what the engine can realistically run in real time at 60fps.
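As an example of the kind of twist setup that survives baking, here’s a minimal Maya Python sketch of a forearm twist joint driven at half the wrist’s roll (all node and joint names are hypothetical):

```python
# Sketch: forearm twist joint driven by half the wrist roll (names hypothetical).
import maya.cmds as cmds

# A multiplyDivide node scales the wrist's X rotation by 0.5 for the twist joint.
md = cmds.createNode('multiplyDivide', name='forearmTwist_md')
cmds.setAttr(md + '.input2X', 0.5)
cmds.connectAttr('wrist.rotateX', md + '.input1X')
cmds.connectAttr(md + '.outputX', 'forearmTwist.rotateX')
```

Since the multiplyDivide node itself never reaches the engine, the twist joint’s rotation gets baked down on export like everything else.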