What’s the difference between production rigs and rigs downloaded from CreativeCrash and sites like that? Does more have to be taken into account with free rigs since they may fall into the hands of inexperienced 3D animators? Do they have to be more “by the book” than production rigs? Are either kind easier to build?
Most studios will build their own rigs.
A downloaded free rig, or an out-of-the-box solution:
- probably won’t meet the technical requirements of the project.
- will have built-in assumptions that you don’t know about.
- will be harder to troubleshoot when it doesn’t quite work.
- will be harder to update, revise, and adapt.
A downloaded free rig may be ‘already built’ and thus easier, but it is probably less work to build, update, revise, and adapt your own rig than to reverse engineer someone else’s rig to do the same.
And certainly more valuable experience.
But is there a difference between learning rigging for studio production and learning to rig for the public?
Not technically, though studio users tend to be pickier and have more specific requests. Some stuff out there is quite good, but a lot of it is just passable; then again, who complains if it’s free? In-house, when the person who uses the rig sits next to the person who writes it, the problems get fixed. On the other hand, some studio rigs can be really ugly because they have been baked into a pipeline in ways that prevent them from evolving and improving. If a studio has a lot of characters to model, they probably also invest in standardisation so that animators can move between tasks more easily; if you are using lots of random internet stuff, there will be a steeper learning curve for each new character.
In general I like to keep the export process ‘rig agnostic’ and make the rig work for the animators, not for a particular pipeline. I fill in the gaps with data or metadata that gives the engine what it needs, but try to keep that out of the control scheme for the character. That lets the animators tear stuff up and rebuild it as they see fit without breaking the game. If the engine needs special bones (‘look target’ or whatnot) they can be ‘rigged’ with controls too, but they get treated like any other exportable transform. Ideally I’d like to use completely different rigs in different animations without the engine having any idea that anything changed.
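The ‘rig agnostic’ export idea above can be sketched in a few lines of Python. This is a hypothetical illustration, not any studio’s actual exporter: the engine only ever sees named, baked transforms, so the control rig behind them can be rebuilt freely. `rig_evaluate`, `ExportJoint`, and the joint names are all made up for the example.

```python
# Hypothetical sketch of a rig-agnostic export layer. The engine consumes a
# flat list of named transforms plus optional metadata, so ANY control rig
# that drives the same export joints is interchangeable.

from dataclasses import dataclass, field

@dataclass
class ExportJoint:
    name: str                                 # stable name the engine keys on
    matrix: tuple                             # baked world-space transform
    meta: dict = field(default_factory=dict)  # extra engine data, e.g. a 'look target' tag

def bake_export_pose(rig_evaluate, joint_names, frame):
    """Sample the rig at `frame` and return only what the engine needs.

    `rig_evaluate(name, frame)` is a stand-in for whatever DCC call resolves
    a joint's world matrix -- the control scheme behind it is irrelevant
    to the exporter, which is the whole point.
    """
    return [ExportJoint(n, rig_evaluate(n, frame)) for n in joint_names]
```

Because the exporter never touches controller names, animators can swap an FK spine for a spline IK one and the baked output stays identical as long as the export joints end up in the same place.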
That pretty much sums it up, really. Free rigs tend to have a lot of eye candy and tricks that you probably won’t see in production rigs, where the key is stability and speed. Certainly all the rigs I’ve done over the years were based on a certain core setup which had been developed to suit the production pipeline at the given studio. The Eurocom setup was a procedural rig system that had been developed over 12 years and been through over 20 games in its time, continually developed and refined.
For studio-based rigs you have to look slightly wider: not just how that controller moves the rig, but how that data is then going to propagate through to the engine. There’s also the massive consideration of what happens if you tweak that setup. We have 3,000 anims all pointing at that rig, and you need to be damn sure that when you publish a new build you have something that regression-checks the effect of those changes, as well as tracking the change itself. The rig is only the top layer, the bit the animators get; at a studio you’ll also find that the rig plugs into the rest of the pipeline tools, and maybe has a ton of in-built metaData that your exporter picks up.
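The regression-check idea above can be illustrated with a small, hypothetical Python sketch. Assume every animation is batch-baked against both the old and new rig builds into per-joint value lists (the dict layout and tolerance here are invented for the example); the check then flags anything that drifted:

```python
# Hypothetical regression check for a rig update: bake every animation
# against the old and new rig builds, then flag joints whose baked
# values drift beyond a tolerance.

def compare_bakes(old_bake, new_bake, tolerance=1e-4):
    """old_bake / new_bake: {anim_name: {joint_name: [sampled values...]}}.

    Returns a list of (anim, joint) pairs that either disappeared in the
    new build or moved more than `tolerance` on any sample.
    """
    failures = []
    for anim, joints in old_bake.items():
        for joint, old_vals in joints.items():
            new_vals = new_bake.get(anim, {}).get(joint)
            if new_vals is None:
                failures.append((anim, joint))   # joint missing after rebuild
                continue
            if any(abs(a - b) > tolerance for a, b in zip(old_vals, new_vals)):
                failures.append((anim, joint))   # pose drifted
    return failures
```

With thousands of anims, a report like this is the difference between publishing a rig change confidently and finding out weeks later that every walk cycle’s wrists popped.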
It’s kind of one of the reasons I did Red9 and the Red9 metaRig setup, and how it connects into all the Red9 toolset; this is exactly what we’re doing at Crytek in production. The rig for us is plugged into the meta systems, and it’s that which makes it a more solid, studio-based solution. Our in-house rig on our project was hand-built, purely because of time constraints when we started, but the system that controls it and plugs it into both the animation pipeline and the engine is all metaData hooks.
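To make the metaData-hook idea concrete, here is a loose, self-contained Python illustration. To be clear, this is not the actual Red9 API and the node names are invented; it only shows the principle that pipeline tools resolve rig nodes through a metadata layer by *role*, never by hard-coded names, so the rig underneath can change:

```python
# Loose illustration of a metaData-hook layer (NOT the real Red9 API).
# Tools ask the meta node for a role; the rig is free to rename or
# rebuild the actual nodes as long as the hooks are re-tagged.

class MetaRig:
    def __init__(self):
        self._hooks = {}            # role -> node name in the scene

    def tag(self, role, node):
        """Register a scene node under a stable pipeline role."""
        self._hooks[role] = node

    def resolve(self, role):
        """Exporters and animation tools call this instead of assuming
        a node name baked into the rig. Returns None if untagged."""
        return self._hooks.get(role)

meta = MetaRig()
meta.tag('main_ctrl', 'CHAR_god_ctrl')        # hypothetical node names
meta.tag('export_root', 'CHAR_export_root')
```

The payoff is that when the hand-built rig gets replaced, only the tags are updated; every tool that went through `resolve()` keeps working untouched.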
cheers
Mark
What if you only want to work in the film and television industry? Ever since I heard about “ransomware,” on top of all the other threats, I’ve refused to run any sort of Windows on my Dell, but the common engines like Unity aren’t available on Linux, so I can’t learn them and claim competency to an employer who wants that. I don’t own any consoles, either; I can’t afford them. The only games I ever want to play are SHMUPs, and I don’t know of any US commercial companies making those. So I’m afraid that getting a rigging job in gaming is virtually impossible for me. That being said, is what you all said still relevant to a film/TV production pipeline?
That’s not the case at all! Rigging in games is much like rigging in films, only with more emphasis on consistency and speed. If you can rig and prove your knowledge, that’s all games companies will really want. Sure, you could be the best, most experienced user of Unity, but that’s no use if you came to Crytek, and vice versa. As long as you have a good grounding and a showreel, you’ll be fine.
The gap between high-end games production and film is getting smaller and smaller.
What I’ve noticed is that the line between games and film is getting thinner and thinner. Of course you can’t use the Weta Digital skin plugin that gives you realistic moving and sliding skin, but there are ways to make a character look like its muscles or skin are moving in a proper way. The only difference in the end is that games rely on a limited amount of resources to get the best-looking deformations all round, while movies need to look perfect in detail for every shot.
Having knowledge of as many disciplines within rigging as possible is a big plus, because it helps you understand certain problems and lets you figure out neat tricks. That is more important than knowing how to export for a certain engine (every engine is different, and so is every export pipeline and the conditions a rig needs to meet).
To be honest, that’s one of the things that makes games interesting: you have to come up with the results as cheaply as possible, and that makes you think outside the box more. It’s always a balancing act between using complex solutions and what will actually run in-game. Then there’s the layer that says: sure, we can make cloth work like this, but that solution now has to work on your generic animation library and not fuck up when you have 12 anims all blending together, which happens a lot. Some of the tech we’re doing at work at the moment is pretty incredible considering it’s all running hi-res in real time!