Character/Rig/Animation Management systems

So I’m spinning off of this post:
http://tech-artists.org/forum/showpost.php?p=2912&postcount=19

What I’m interested in discussing is the work folks have done to get referencing/xref/namespace/whatever it is in your dcc package to play nice with the realities of their character and animation pipelines, be it in-game animation, cinematics, or whatever else you do with characters. I’ll start:

I’m going to re-tread some of the stuff presented in Modular Procedural Rigging, so some of this might be old news.

We’ve come to rely more on namespaces for managing assets, along with the xRig system (presented in Modular Procedural Rigging). Starting with xRig, we store any tool-relevant data on different types of xRig nodes. Examples include: rig path, render model path, rig component type, etc. This way, as outlined in the paper, we don’t even need to touch content to gather data. The big caveat some folks have suggested is “well, what if an artist touches something and it breaks?” Well:

  1. xRig nodes are network nodes, so if you really want to get to it, you have to dig a bit. I’ve noticed that artists are becoming increasingly skittish around the hypergraph:D:

  2. again, most artists we work with don’t care…if it isn’t something they need to know about to create content, it might as well be invisible.

  3. Since we control the system pretty explicitly, it’s not a big deal to write checks into your tools to validate any existing xRig nodes. Every class of xRig node has functionality to validate, version check, repair, upgrade, and otherwise manage the node.
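To make the validate/version-check/repair/upgrade pattern concrete, here is a minimal, hypothetical sketch in plain Python. None of these names come from the actual xRig API; in Maya the data would live as attributes on network nodes rather than a dict.

```python
# Hypothetical sketch of a per-class validate/upgrade pattern like the one
# described above. XRigNode and its attribute names are illustrative only.

class XRigNode:
    CURRENT_VERSION = 3

    def __init__(self, data):
        # 'data' stands in for the attributes stored on the network node,
        # e.g. rig path, component type, version stamp.
        self.data = dict(data)

    def validate(self):
        """Return a list of problems; an empty list means the node is healthy."""
        problems = []
        for key in ("rig_path", "component_type", "version"):
            if key not in self.data:
                problems.append("missing attribute: %s" % key)
        return problems

    def needs_upgrade(self):
        return self.data.get("version", 0) < self.CURRENT_VERSION

    def upgrade(self):
        """Walk the node through each version step until it is current."""
        while self.needs_upgrade():
            # Each version bump would run its own migration; here we just
            # stamp the new version number.
            self.data["version"] = self.data.get("version", 0) + 1

    def repair(self):
        """Fill in safe defaults for anything validate() flagged."""
        for problem in self.validate():
            key = problem.split(": ")[1]
            self.data.setdefault(key, None)

node = XRigNode({"rig_path": "rigs/hero.ma", "component_type": "spine", "version": 1})
node.upgrade()
print(node.data["version"])  # -> 3
```

The point of the pattern is that tools never trust a node blindly; they call validate() and upgrade() before reading anything off it.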

Namespaces can be thought of as our in-scene asset management hooks. As Mark pointed out, Maya’s namespace management tools aren’t that great, so we’ve written a bunch of scripts that wrap various string management hacks, etc., to allow us more robust namespace management. The next step is probably to port that to a command plugin (I know, I know, if you’re using MPxCommand, you’re doing it wrong) or break it out into a Python package. As an aside, we’re holding off on full Python development until such time as we can take the time to let everyone in the department get up to speed on Python. Porting MEL scripts to maya.cmds is no substitute for learning Python. :rolleyes:
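The string-level side of that kind of namespace wrapping can be sketched in a few lines. These helpers are hypothetical (a real toolset would wrap Maya’s namespace commands), but they show the basic idea: operate on Maya-style `ns:node` names.

```python
# Minimal, hypothetical namespace string helpers for Maya-style node names
# ("ns:child" or "ns:sub:child"). Pure Python, runnable anywhere.

def get_namespace(node):
    """Return the namespace portion of a node name, or '' for root."""
    return node.rpartition(":")[0]

def strip_namespace(node):
    """Return the short node name without any namespace qualifiers."""
    return node.rpartition(":")[2]

def renamespace(node, new_ns):
    """Move a node name into a different namespace ('' means root)."""
    short = strip_namespace(node)
    return "%s:%s" % (new_ns, short) if new_ns else short

print(get_namespace("hero:rig:L_Wrist"))     # -> 'hero:rig'
print(strip_namespace("hero:rig:L_Wrist"))   # -> 'L_Wrist'
print(renamespace("hero:L_Wrist", "enemy"))  # -> 'enemy:L_Wrist'
```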

Now, since xRig allows us to upgrade rig components in place, the only thing we really have to worry about is getting updated meshes and skeletons into a scene. The obvious answer here might be referencing. Referencing in Maya has definitely made some big strides in the last few versions, but the conclusion we came to was that we just didn’t need a lot of the complexity/functionality that the referencing system brings with it. So instead what we did was come up with a tool called xPlant that allows us to transplant a skeleton and mesh into an existing rig. You’d be surprised how minimal the code to cache a set of connections and re-assign them is, even in MEL, hehe.
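The “cache a set of connections and re-assign them” step can be sketched as plain Python. Here a list of `(source, destination)` plug pairs stands in for Maya’s dependency graph; in MEL or maya.cmds you would gather the pairs with listConnections and rewire with connectAttr. All node and plug names below are made up.

```python
# Hypothetical sketch of the xPlant-style transplant: cache connections
# touching the old skeleton, then replay them against the new one.

def cache_connections(graph, nodes):
    """Record every (source, destination) plug pair touching the given nodes."""
    return [(src, dst) for src, dst in graph
            if src.split(".")[0] in nodes or dst.split(".")[0] in nodes]

def reassign(cached, old_to_new):
    """Rewire cached connections onto replacement (transplanted) nodes."""
    def swap(plug):
        node, _, attr = plug.partition(".")
        return "%s.%s" % (old_to_new.get(node, node), attr)
    return [(swap(src), swap(dst)) for src, dst in cached]

graph = [("ctrl_arm.output", "skel_old.rotate"),
         ("skel_old.worldMatrix", "skin.matrix")]

cached = cache_connections(graph, {"skel_old"})
print(reassign(cached, {"skel_old": "skel_new"}))
# -> [('ctrl_arm.output', 'skel_new.rotate'), ('skel_new.worldMatrix', 'skin.matrix')]
```

The real tool would also delete the old skeleton and import the new one between the cache and reassign steps; this sketch only shows the rewiring.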

This is a really broad overview; if anyone is interested in specifics about a certain system, feel free to post questions…

Similar to you, we use nodes, marker attrs and message links to build up unseen networks that form the backbone of the entire pipeline. To manage and direct tools to a particular character, we just run a ‘define’ tool that goes into the rig nodes and hunts down the markers, storing them into a number of global arrays. Because we always know that Array[2] = L_Wrist, tools become a doddle and very stable. It gets over any naming or referencing issues, and because it’s run from IDs it’s totally expandable (we’ve pushed the same setup to its extreme on some recent multi-limbed bizarre characters). We use the same idea to ID mirror arrays.
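The ID-based marker array idea can be sketched like this. The slot IDs and node names are invented for illustration; the point is that tools index into fixed slots instead of depending on names or namespaces.

```python
# Hypothetical sketch of a 'define' pass that resolves tagged marker nodes
# into fixed-slot global arrays, so tools can rely on e.g. slot 2 == L_Wrist.

MARKER_IDS = {0: "Root", 1: "Spine", 2: "L_Wrist", 3: "R_Wrist"}

def define_character(tagged_nodes):
    """tagged_nodes maps marker ID -> actual node name (any name, any namespace)."""
    slots = [None] * len(MARKER_IDS)
    for marker_id, node in tagged_nodes.items():
        slots[marker_id] = node
    return slots

slots = define_character({0: "hero:root_jnt", 1: "hero:spine_ctrl",
                          2: "hero:hand_L_ctrl", 3: "hero:hand_R_ctrl"})

# Tools always know slot 2 is the left wrist, whatever the node is named:
print(slots[2])  # -> 'hero:hand_L_ctrl'
```

Because the lookup is by ID rather than name, renaming, re-referencing, or adding extra limbs (more IDs) never breaks existing tools.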

So, metaTags, MarkerAttrs or message links, whatever you want to call them, they’re the backbone of pipelines and get over any artist fuckups… mainly because the artists don’t know they’re there.

As for referencing, it’s great once you get over the learning curve. When we started during Pirates, it was because we didn’t want to run update batches to get changes to characters into production. With referencing, the modeller changes his model, that goes into the Prep-Rig that updates the skeleton/skin file, that’s published to the characterManager and out to production, and all animations automatically get the changes.

All well in theory, but we’ve had to build a substantial set of tools to manage this and get round referencing issues. One thing that’s different is that we run characterSets, and that drastically changes the way referenced files are loaded/connected. Nothing is done on names; it’s all broken into 3 lists, linear, angular and unitless arrays, which are then run through aliasAttr and characterMapping to rebuild links. The long and short of this is that if a rigger changes the order of the chSet, that affects the way the chSet is broken into the 3 arrays, and can destroy current animations. In Maya 2009 they added another check, placeholder lists, which seems to have fixed this issue, but we’d been on at Alias/Autodesk to fix this limitation since Maya 7. Once you know about it, and have tools to check for consistency, everything is sweet.
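A toy illustration of that fragility, with made-up channel names: channels are bucketed by unit type in characterSet order, and animation reconnects by index within each bucket, so reordering the set silently rewires curves.

```python
# Toy simulation of the characterSet gotcha described above: splitting the
# set into linear/angular/unitless arrays by order makes per-array indices
# fragile. This is an illustration, not Maya's actual implementation.

def split_by_unit(channels):
    """channels: list of (name, unit) pairs in characterSet order."""
    arrays = {"linear": [], "angular": [], "unitless": []}
    for name, unit in channels:
        arrays[unit].append(name)
    return arrays

original  = [("tx", "linear"), ("rx", "angular"), ("ty", "linear")]
reordered = [("ty", "linear"), ("rx", "angular"), ("tx", "linear")]

# A curve saved against "linear slot 0" targets tx in the original order...
print(split_by_unit(original)["linear"][0])   # -> 'tx'
# ...but lands on ty after a rigger reorders the set:
print(split_by_unit(reordered)["linear"][0])  # -> 'ty'
```

A consistency-check tool in this scheme would just diff the three arrays between the published rig and the rig a shot was animated against.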

What else… well, referenceEdits… what a pain! The order they get applied isn’t the order you did them. Let’s say you have a referenced cube and add a new attribute ‘test’; now delete the attribute and add it again. Save the file… you’ll never get the attribute back, no matter how many times you add it. This is because the edits are applied by group, so the addAttr edits get evaluated first, then the setAttr, connectAttr and finally, yep, you guessed it, deleteAttr. So because you added then deleted, the delete will always run after any further adds, unless you go into the edit list and remove the delete… again, a bug that’s caught us out.
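That grouped-replay behaviour is easy to demonstrate with a toy simulation (this is a model of the described behaviour, not Maya code):

```python
# Toy replay of the referenceEdit ordering gotcha: edits are applied grouped
# by command type, not in the order the user made them, so a recorded delete
# always runs after every add of the same attribute.

GROUP_ORDER = ["addAttr", "setAttr", "connectAttr", "deleteAttr"]

def apply_edits(edits):
    """edits: list of (command, attr) pairs in the order the user made them."""
    attrs = set()
    for command in GROUP_ORDER:          # replay by group, not by time
        for cmd, attr in edits:
            if cmd != command:
                continue
            if cmd == "addAttr":
                attrs.add(attr)
            elif cmd == "deleteAttr":
                attrs.discard(attr)
    return attrs

# User adds 'test', deletes it, then adds it again...
edits = [("addAttr", "test"), ("deleteAttr", "test"), ("addAttr", "test")]
print(apply_edits(edits))  # -> set(): the re-added attribute never survives
```

The fix, as described above, is to go into the stored edit list and remove the stale deleteAttr entry by hand (or with a cleanup tool).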

Mark

[QUOTE=Mark-J;2925]
As for referencing, it’s great once you get over the learning curve.[/QUOTE]

Do you mind elaborating a bit more on how you guys control/manage referencing? I know it’s a sore spot with a lot of Maya users, so it’d be great to hear from someone who actually beat it into submission. :D:

The main thing is consistency… a lot of the extra functionality we’ve got in the CharacterManager is there to get over Maya bugs which, in 2009, are fixed.

The best advice to ensure stability is use namespaces, especially if you know that you’re going to be referencing multiple copies of rigs into the scene. Most of our namespace management tools are there to ensure that these namespaces are kept clean. We also try NOT to use any nested referencing if we can avoid it. We’ve got tools for moving anything that was imported into nested spaces back out into root… maintaining the single-level NS. We did try having rigs with nesting in, so that the skinFile would be referenced and bound to the rig, which would maybe reference the props etc., all live and very flexible, but in the end it just proved too many levels of pain. We do use this for the riggers, but any published rigs get flattened so animators only ever take a single-level reference.

One thing we found from the outset was that using characterSets changed the way referencing handles connections for the better. In earlier versions of Maya, animations were reconnected back to nodes via node name, so if you renamed anything inside the rig it would fail to reconnect. With characterSets, because everything is piped through an arbitrary list, all you needed to manage was the slots in that list; you could rename, shift hierarchies, pretty much do anything to the rig and the animations would always manage to reconnect. In 2009 they introduced (or rather fixed) a new level in the referencing, placeHolder lists, so now everything goes through a new level of abstraction before being connected. To be honest, we’ve been trying to work out exactly how these placeholders are managed; we think we know what Autodesk are doing under the hood, but aren’t 100% sure.

So, for those who had bad experiences with referencing in Maya 2008 and previous, try again, it’s a hell of a lot more forgiving than it was when we started in Maya7. Biggest tip is make things consistent…

I attended David’s Modular Procedural Rigging class, and it was awesome! One of the best classes I attended this year.

Lately I have been doing a ton of research on file referencing since I heard horror stories of catastrophic issues from many studios. When I was with Midway Games, I really never ran into any major problems except for the occasional animator importing references, which broke all the tools and obviously didn’t update with mesh/rig changes, but that was minor in my eyes and we could easily work around it. We never nested references, just one reference once it got to the animators! We also used message attributes for everything. It makes everything expandable and non-name-dependent. Plus artists never go digging into attributes on transform nodes :slight_smile:

Anyway, I am still curious how you managed many files when you made minor updates without references. Example… when an animator opens an old file, do they have to run the Xrig Live tool on every file they open, or do they open the file through a script that will check the versions against the database?

Was this update script just doing what you did by hand yesterday to add a feature or fix a bug? For example, you added a prop bone under the hand, and the Xrig Live script just calls the ‘add prop bone under hand’ procedure?

Hey JAM, I bounced your questions off Dave and he’s gonna pop in and do a brain dump at some point. We’re finishing up a sprint here, so hopefully he’ll have a bit of time next week or this weekend… keep the questions coming!

Hi JAMsession,

Yes, I remember talking with you after the class! In fact, I have your card sitting right here on my desk in front of me. Let’s see if I can answer some of your questions…

I am still curious how you managed many files when you made minor updates without references. Example… when an animator opens an old file, do they have to run the Xrig Live tool on every file they open, or do they open the file through a script that will check the versions against the database?

If the update is critical for the animation data to remain consistent and synchronized across the entire project (for example: skeleton hierarchies) then our Tech Art Animation crew pushes out a batch update and verifies that the upgrade worked both in Maya and in game. During this process our scripts call the same functions used in the Xrig Live GUI.

For non-critical updates or for swapping character permutations our animators can run the Xrig Live update system at any time. And as a bonus for doing so they unlock achievements and increment their Riggerscores :cool:

Was this update script just doing what you did by hand yesterday to add a feature or fix a bug? For example, you added a prop bone under the hand, and the Xrig Live script just calls the ‘add prop bone under hand’ procedure?

As djTomServo mentioned, our skeleton and skinning data is updated through the xPlant system. This is effectively the same as doing a File Merge in 3D Studio Max on specific sub-components of the rigs in the scene. All DG and DAG relationships are preserved. This is a lot like file referencing, but it gives us much more granular control over how and when we want it.

The control rig that drives the deformation skeleton is assembled from a library of modular rig components. Each of these can be updated individually through script. And because our rigs are generated 100% procedurally, we encode them at the time of creation with rich metadata tracking. This makes it easy for the update scripts to traverse to the associated nodes. One of the main ways we distribute these updates is through the GUI tools that operate on the rigs. Each time the tool refreshes, it can verify that the rig it is working on is up to date. If the rig is not up to date, it automatically executes the update script on the spot, and the animator never even knows anything happened; it just works.
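The upgrade-on-refresh loop can be sketched like so. The version numbers, registry, and feature names are all hypothetical; the real system would run its actual rig-component update scripts at each step.

```python
# Hypothetical sketch of the "upgrade on tool refresh" pattern: compare the
# rig's stored version stamp to the latest, and replay any pending per-version
# update steps (e.g. "add prop bone under hand") before the animator works.

LATEST_VERSION = 4

# Registry of per-version update steps; the lambdas stand in for real scripts.
UPDATE_SCRIPTS = {
    2: lambda rig: rig["features"].append("prop_bone_under_hand"),
    3: lambda rig: rig["features"].append("extra_twist_joints"),
    4: lambda rig: rig["features"].append("facial_attach_points"),
}

def refresh_tool(rig):
    """Called on every GUI refresh; silently brings the rig up to date."""
    while rig["version"] < LATEST_VERSION:
        next_version = rig["version"] + 1
        step = UPDATE_SCRIPTS.get(next_version)
        if step:
            step(rig)                     # run this version's migration
        rig["version"] = next_version     # stamp the new version
    return rig

rig = {"version": 2, "features": ["base_skeleton"]}
refresh_tool(rig)
print(rig["version"])   # -> 4
print(rig["features"])  # -> ['base_skeleton', 'extra_twist_joints', 'facial_attach_points']
```

Running migrations stepwise (2→3→4 rather than 2→4 in one jump) is what lets arbitrarily old scenes catch up no matter how long ago they were last opened.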

There is also an interchangeable set of components that can work on top of the rig or a bare skeleton, based on animator preference. Our animators can add, remove and combine these in interesting ways to get the behaviors they need for various situations that arise. It’s pretty cool seeing what kinds of things they come up with! And if it ever gets too broken, they can always bake it down to the deformation skeleton and rip it back onto a fresh, clean rig using our skeleton-to-rig retargeting tool: ReAnimator.

I’m glad you found my GDC talk worthwhile, thanks for the feedback! If you want more details, you can download the Modular Procedural Rigging presentation on bungie.net. This is a shorter version of the Autodesk Masterclass I did in Asia and Australia last year. In the masterclass I had more time to talk about strategies for building metaNode networks and writing tools on that kind of framework.

Let me know if you have any more questions and I’ll be glad to go into greater detail.

-David

Great to see you on the Board David, I really like the direction rigging tools and strategies are heading and thank you again for sharing so much of your time and information.

Brad

[QUOTE=bclark;3113]Great to see you on the Board David, I really like the direction rigging tools and strategies are heading and thank you again for sharing so much of your time and information.

Brad[/QUOTE]

WOW, thanks for the in-depth response (posted at 10:30pm… hopefully you’re not at work)

Thanks for making me feel bad because I don’t have your card. I tore apart my desk wondering what I did with it. I have a few cards from other people from the class but not yours…

***Updated: I just remembered you ran out of cards that day. :D:

There is also an interchangeable set of components that can work on top of the rig or a bare skeleton, based on animator preference. Our animators can add, remove and combine these in interesting ways to get the behaviors they need for various situations that arise.

We have been “debating”, to put it lightly, how much control the animator has to change the rig. The main issue that keeps popping up is that a single animation may go through several animators before it is finished and shipped. Every animator has their preferences for animating, to say the least. With everyone doing stuff their way, the next person to open the file has to spend 30 minutes just figuring out what is going on. Or they bake it down to bones and then treat it like motion capture, adding it to an animation layer back on the base rig (the equivalent of your ReAnimator tool).

Some animators want that control, others want consistency.

I think Mark-J put it best; for me the main part is consistency. Especially since I am the one that has to stay late and troubleshoot problems.

This is a really great thread. I will try to add my thoughts once I have a bit more time (we are using some similar ideas to what has been mentioned, but I don’t think anywhere near as thoroughly yet). It’s fairly early days for me when it comes to rigging in Maya at the moment, but I have been forced to learn quite a lot recently, and reading the Modular Procedural Rigging presentation was definitely useful, so thanks for that!

Out of interest, can you say how long it took you Bungie guys to get to where you are with the procedural system, and how many people worked on it? How many are needed to look after it now that it all seems to have the main stuff in place (presumably various things still get tweaked and added as production continues)?
Just curious as to how much work is invested in a tool suite like that.

Great to see you on the Board David, I really like the direction rigging tools and strategies are heading and thank you again for sharing so much of your time and information.

Thanks, Brad. I got a lot of inspiration from hearing how you are doing rigging in Motion Builder when you visited Bungie last year. Thanks again for coming up here!

Out of interest, can you say how long it took you Bungie guys to get to where you are with the procedural system, and how many people worked on it? How many are needed to look after it now that it all seems to have the main stuff in place (presumably various things still get tweaked and added as production continues)?
Just curious as to how much work is invested in a tool suite like that.

Hi MoP,
It took only about two weeks to write the core xRig/metaNode system. This included a couple of days of whiteboard design discussions and running R&D tests in Maya; then we built the core Semantic Traversal functions and put them to use. That version carried us through the rest of Halo 3, which ran about 1.5 years. After that we did a refactor to add database support and some more scene management tools, which led us to what we have today.

How many people worked on it: for all of Halo 3 I was the only rigger. I worked closely with our Production Engineer, who used it to develop the cinematic animation toolset. We now have four total: two full-time riggers (when one of them is not getting pulled off to do mocap), one core Maya developer (Mr. djTomServo) and another contract rigger. All of us write tools and rig components within this system. It is easy to understand and very extensible. We have yet to find something we want to do that it won’t let us do. I always hear the other guys saying that they couldn’t imagine working without it, because otherwise your code turns into what we call Conditional Hell.

[QUOTE=MoP;3121]
Just curious as to how much work is invested in a tool suite like that.[/QUOTE]
I know Dave touched on this a little bit; I’ll toss in my own perspShape as well. I think once you get the framework down, most of the work is determining where you WANT to go with it next. One of our big projects now is to port the meta system over to Python/Pymel (FTW), which is a big project but is certainly optional. Point being, once you get through the initial design and implementation phase, it’s not really a case of maintenance so much as upgrades and extension. We’re still using paradigms that were born during the Halo 3 production and code that’s almost 2 years old and has had minimal fixes to it.

[QUOTE=david306;3162]I always hear the other guys saying that they couldn’t imagine working without it because otherwise your code turns into what we call Conditional Hell.[/QUOTE]
I’ll totally vouch for that statement, since I’m usually the guy that’s saying that :D: Like any good API or SDK, once you wrap your head around working with the functions and inputs, writing tools with it is simple and clean, especially when you compare it to the wild west that MEL development used to be. If you have the resources to develop this sort of rigging framework, it’s going to be well worth your effort in the long run. Hopefully our fully object-oriented version will make writing tools with it even cleaner (seriously, Pymel? oh yeah) :nod:

Seth, interesting you mention PyMel. We’ve had our current rigging pipelines up and running for something like 6 years now; it’s been through a lot of revisions and bug fixes, but we’re starting to look at a ground-up redesign in Python to clean everything up. Like you said, it’s a case of fleshing everything out on the whiteboard, figuring out the code logic, then writing it.

What I’m wondering is what everybody’s take is on integrating Pymel as core functionality? There’s no doubt that the integration with Python is a lot sweeter than cmds, and it would speed things up… BUT from a management point of view, would you put so much reliance on a non-standard module? What if Chad gets hit by a bus? If Autodesk won’t ship PyMel as a module, and won’t go away and update their cmds functionality to better reflect what Chad’s doing, then you’re screwed…

Mark

[QUOTE=Mark-J;3186]What I’m wondering is what everybody’s take is on integrating Pymel as core functionality? There’s no doubt that the integration with Python is a lot sweeter than cmds, and it would speed things up… BUT from a management point of view, would you put so much reliance on a non-standard module? What if Chad gets hit by a bus? If Autodesk won’t ship PyMel as a module, and won’t go away and update their cmds functionality to better reflect what Chad’s doing, then you’re screwed…[/QUOTE]

Those are exactly my concerns with Pymel. Even though the syntax is bloody beautiful, it gets really hard to say “yes, let’s put all our tools in this bucket”, because it is non-standard and unsupported. You could very well end up in a position where all your tools are deprecated in one fell swoop. There is also the danger associated with working with Maya in a way Autodesk doesn’t intend. As was discussed in another thread, understanding how the packages actually want you to do things is important. This drops the package’s own abstraction for (an admittedly very clean) alternate abstraction.

There are some defenses in place though:
[ul]
[li]Autodesk keeps fairly reasonable backwards compatibility. They want people to upgrade.[/li][li]If everything breaks, there are enough people invested that Pymel will probably get fixed before you can retrofit all your tools, which are probably also broken.[/li][li]Pymel is under the BSD license, so you can continue, or fork, development if anything happens.[/li][/ul]

I think, though, that for any really involved implementation, you are going to be reinventing parts of Pymel anyway. Their class set-up is pretty inspired, from what I can tell. I guess if you want to use it, you use it the same way you do any other third-party module. Get the code in escrow (easy since it’s BSD; just put a copy in version control). Don’t modify it heavily, so you can easily upgrade. Build your items that use it as a black box. If the black box ever fails, replace it. Your code should operate correctly with any replacement. A ton of studios do the same thing with Havok or SpeedTree.

As you may be able to tell, I’m torn. I guess the answer depends on your studio and your priorities. We don’t use it. I don’t disagree with that decision, but I’m glad that it wasn’t my decision to make, because it is a tough one.

[QUOTE=Mark-J;3186]What I’m wondering is what everybody’s take is on integrating Pymel as core functionality?[/QUOTE]

It sorta depends on your available resources. For now, since it’s not Autodesk-supported (fingers crossed for change soon), if you don’t have the resources to maintain it, I’d say it’s probably not the way to go, but that’s true of any sort of external code. We’ve sort of already gone down that road with things like Python .NET; we maintain our own in-house branch of that since it isn’t “actively” supported anymore, so making the leap to an internally supported version of Pymel wouldn’t be that big of a deal for us. I’m not taking any shots at anyone in particular with this next statement, and I understand most studios don’t have the resources to provide this sort of support, but the way I see it, you can support anything internally that you have source for; you just have to be willing to spend the resources on it. Pymel is one of those things that I feel is definitely worth putting some of my time towards on that level… of course, I’m the guy who’s actively lobbying to remove the term “artist” from his title too, so there’s probably just something wrong with me :laugh:

I am already investing a lot of my own personal development in Pymel. It solves a lot of the major problems we have experienced with standard Maya Python.

I have strong hopes that Autodesk will integrate Pymel into a future version of Maya and give it the backing it deserves. And I’ll be at Chad’s Pymel Masterclass at SIGGRAPH this year guaranteed, even if it means I have to walk there.

I think we’re all in the same boat by the sound of it, and I agree about having the resources to support external modules. I didn’t know Chad was doing a masterclass this year, I’ll be there!

Is there an overview or outline of the class yet online anywhere?

What makes PyMel so powerful is that it is actually object-oriented, not MEL with Python syntax.

The major downside with PyMel is support, and community. If you ever have a MEL issue, there are 1000000’s of people that have had the exact same issue and are more than willing to share the solution. With PyMel, everyone is trying to get up to speed at the same time, and only one person has truly mastered it (chadrik himself).

Seeing major studios using it, such as DreamWorks and ImageMovers, is quite a bit of reassurance, but they aren’t allowed to share anything work-related with the public. They are probably all maintaining their own versions too.

If it is written in a “pythonic” manner, you should be able to call help("pymel") in an interpreter, like Maya’s command line. If you want, you can also let loose pydoc to generate HTML help pages for you. Python can be very good about self-documenting like that.

There will always be help and documentation:

help(Cameras)

I should better explain myself.
I remember when Maya first integrated Python, I spent 3 days trying to figure out why skinPercent(query=True) wasn’t working and was throwing a syntax error. Weeks later, many people on the boards started to complain about the same command and syntax error, and no one had any solution to it. Now (~2 years later) we finally found out that it was a bug on Autodesk’s end and not a syntax error.

When the community is so small, you have no idea whether it is a user error, syntax error, logic error, or product error.

Either way, all my tools are Python-driven… too many long-term advantages to Python that outweigh MEL.