Next gen animimation?

Hello hello,

Having looked at almost nothing but graphics for the last year, I was wondering: what’s the latest in animation (or rather deformation) techniques?

When I look at our (or other) movie departments’ rigs (muscle, cloth sim, etc.), and then at deformation in current games, it seems we are still lagging a bit behind. Or maybe I’m missing something… :slight_smile:

The last time I checked I thought “Skinning By Example” was a really cool idea.

(http://portal.acm.org/citation.cfm?id=545283)

Any other “next gen” for real time deformation worth checking out?

Thanks in advance,

seb

PS: see how excited I am - stuttering the post title! geez. :slight_smile:

Next, next generation will be the same as this next gen: technically incompetent animators, daydreaming of working on the Disney epics of the last century, complaining endlessly about modern game development techniques.

Alan,

I love your reply. :slight_smile:
So true (for the most part)

I’ve only met a few animators who were really interested in pushing game animation into new heights, while most dream of jobs somewhere else.

:slight_smile:

I would like to add to that: ‘complain endlessly about modern game development techniques, while design talks about how many more characters and particles they need on the screen at the same time, only to run out of time and system memory to implement half the created animations and characters they wanted in the first place, leaving no time to adjust, tweak, or polish any of it.’

All sides are to blame for the lack of animation quality in current and future projects. :slight_smile: The reality of it is that no matter what animation wants to do, design has a huge hand in screwing it all up with unrealistic implementation standards: too-fast running speeds, poorly thought-out movement charts, over-simplistic play-mechanic animation requirements… programmers that strip all the animation data off the root without telling anyone.

Also, game cameras that are locked onto the root of the character cancel out any ability to play with framing, making even the best-looking runs/walks/moves look strangely static and uninteresting, and programmers don’t see anything wrong with it.

Ahh yes, many many roadblocks for animation quality in games and no wonder animators often feel like another industry might respect their contribution a little more.

*Not that this negates the problem of the technically incompetent animator.

Lol, well in terms of tech, we were looking into this for a while; we had some good ideas, but it could potentially lag some systems…

http://isg.cs.tcd.ie/projects/DualQuaternions/
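To make that concrete, here’s a rough pure-Python sketch of the dual quaternion linear blending (DLB) idea from that page. Quaternions are (w, x, y, z) tuples; this is just the math for illustration, nothing like the optimized vertex-shader version:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dq_from_rt(rot_q, t):
    """Build a unit dual quaternion from a rotation quaternion and a translation."""
    dual = tuple(0.5 * c for c in qmul((0.0, t[0], t[1], t[2]), rot_q))
    return (rot_q, dual)

def dq_blend(dqs, weights):
    """Weighted sum of dual quaternions, then renormalize (the DLB step)."""
    real = [0.0] * 4
    dual = [0.0] * 4
    for (r, d), w in zip(dqs, weights):
        for i in range(4):
            real[i] += w * r[i]
            dual[i] += w * d[i]
    norm = math.sqrt(sum(c * c for c in real))
    return (tuple(c / norm for c in real), tuple(c / norm for c in dual))

def dq_transform(dq, p):
    """Apply a unit dual quaternion to a point."""
    r, d = dq
    conj = (r[0], -r[1], -r[2], -r[3])
    # rotate p by r: r * p * conj(r), then take the vector part
    rotated = qmul(qmul(r, (0.0,) + tuple(p)), conj)[1:]
    # translation = vector part of 2 * d * conj(r)
    t = qmul(d, conj)[1:]
    return tuple(rp + 2.0 * tc for rp, tc in zip(rotated, t))
```

The nice property versus plain linear blend skinning: blending two rigid transforms this way gives another rigid transform, so you avoid the candy-wrapper collapse you get from averaging matrices.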

Wow, tough crowd here. :slight_smile:

Not to hijack the thread, but what exactly are you guys referring to when you say “technically incompetent animator”? Do you expect animators to know how to build custom rigs and understand the complexities of transform matrices, or is it something more general than that?

I ask because I’m an animator and part of the reason I’m here is to learn more about the technical side of things (to help you guys out hopefully). I feel most animators, and especially ones in games, need a certain level of technical knowledge. I don’t think every animator should rig their own characters, but some sort of functional technical knowledge is extremely helpful.

@bclark: you really hit the nail on the head with your reply. It’s like trying to fit a square peg into a round hole because of the design and gameplay restrictions. All I can do is try my best to make something look decent, let alone good.

Most of the time my animations are either chopped up into 3-5 different pieces and the blending ruins the timing, or they’re smashed down into half a second, completely destroying them. Granted, I’ve only worked on MMOs, which have some very limited systems to begin with (at least ours does at the moment).

I would love for animation in games to get better, so in your guys’ opinion, what are some things that each animator should know that would make your lives easier?

I think that the next gen stuff is going to be all about good state machines, animation trees and general clever blending.

Basically the ideal is that the animator creates something that looks great in the authoring package, and then the exporter / game-side stuff deals with that intelligently based on how it should be used in the game.

I think games like Drake’s Fortune are really pushing in a great direction with this, making good use of all sorts of current and new tech (layering anims, nice blending, etc.) and setting it up in a way that feels natural in the game.
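At its core, the state-machine side of that can be tiny. A hedged sketch (clip names and fade times are made up for illustration) of a cross-fading state machine that hands clip weights to a pose blender:

```python
class AnimStateMachine:
    """Minimal animation state machine with a linear cross-fade on transition."""

    def __init__(self, initial):
        self.current = initial      # clip being faded in (or fully on)
        self.previous = None        # clip being faded out, if any
        self.blend_time = 0.0
        self.blend_elapsed = 0.0

    def transition(self, new_state, blend_time=0.25):
        """Start a cross-fade from the current clip to a new one."""
        self.previous = self.current
        self.current = new_state
        self.blend_time = blend_time
        self.blend_elapsed = 0.0

    def update(self, dt):
        """Advance the fade; returns {clip_name: weight} for the pose blender."""
        if self.previous is None:
            return {self.current: 1.0}
        self.blend_elapsed += dt
        t = min(self.blend_elapsed / self.blend_time, 1.0)
        if t >= 1.0:
            self.previous = None
            return {self.current: 1.0}
        return {self.previous: 1.0 - t, self.current: t}
```

So `sm.transition("run", 0.2)` followed by `sm.update(0.1)` yields equal weights for “idle” and “run” halfway through the fade. The real work (and the real “next gen”, arguably) is in how transitions and blend times get authored per-state rather than hard-coded.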

I don’t know too much about anim stuff (yet!) since I haven’t had all that much experience with it, but personally I think it’s fine that animators don’t need to know all that much technical stuff - animation is what they do, they should just be given the simplest and most straightforward solution for getting their art into the game.

Equally, programmers and tech guys need to learn from these animators what it is that really makes the anims come to life, and use the principles of animation when implementing these things in the game or in the tools, so that everyone can do what they do best with minimal “technical” stuff in the way. If an animator knows how to make something look super-good, they should be communicating with the technical guys and programmers to figure out what it is that really gives the animation that extra life, and figure out the best ways to implement it in the game so it plays nice with the raw animation data handed to it.

Obviously stuff like real-time IK and advanced skinning solutions will be a big help to increasing visual fidelity and dynamic interaction, also the increasing budget for joints and physics simulations can also help to add more detail and quality.
I think cheaper cloth and physics simulations are definitely going to help the next-next-gen stuff, it’s already in many games but usually only the ones which have one or two “hero” characters, it will be much nicer when it becomes a cheaper solution (hardware accelerated?) which can be used more freely.

Basically I think the future is just more collaboration and understanding between all aspects of game creation (animation, art, programming, design) to really figure out what meets the needs of the game with the highest visual quality.

I already see nice layering, state machines, and blends going on currently. Morpheme does a great job for our team on these things at a low resource cost. I have not had the chance to look at HumanIK, but that does full-body IK too; I’m not sure of the performance hit there. And I’m sure everyone is aware of Euphoria, which will become more popular by next gen.

I see things getting more optimized to have more of these things on screen. Games are already like that, now that I think about it, because of the massive number of characters on screen, though of course that also depends on the design of the game.

It would be nice to see better deformation too: increased bone counts and maybe some blend shapes on knees, elbows, and the face. Of course, I’m sure for a lot of next-gen projects it will end up being about how many people we can fit on screen, but time will tell; you can always hope. :D

When I think of an animator in games being technically competent, to me it has nothing to do with rigging (they should be able to understand how to use the rig and communicate with character TDs, though). It comes down to having a technical understanding of how to animate for the game, how that animation gets implemented, and what factors beyond the animation are controlling that motion (designer speed variables, programmers stripping out data, etc.). They need enough tech know-how, and access to the tools, to make the final game animation match what they pictured in Maya and expect in game.

If you “just animate” and pass the data off, there is a very high chance that it will get munged beyond comprehension by the rest of the well-meaning team, who have very little understanding of what makes the animation work or not. But guess who gets blamed when it looks or plays back strangely: the animator! It helps to know how the data is exported, loaded, and controlled in the engine, because playback in game, being blended, is where the “polish” stage for animation really happens, not back in Maya.

*Sorry, this kind of went off topic.

As for better motion technically, Mumm has it right: it will just be more refinement and interoperability of the tools, AI using HIK solvers to affect characters, better smart blending systems, and more artist control, like in Havok Behavior.

There are some great examples now, like Assassin’s Creed and Prince of Persia, that show off some very excellent use of the current state of tech.

At the same time, there has to be a streamlining of the tools and less starting from scratch every hardware rev. If a film had to first build the cameras it was going to shoot with, it would be a hell of a lot harder and take longer to shoot a movie… yet games are, for the most part, building the tools and the content at the same time: laying road and driving on it, hoping not to run out of road before you get to the end.

Realtime character rigs are interesting, letting animators get exactly what they see in the DCC app, but they create their own set of problems, though that is already being looked at.

The push for more on screen, and more characters on screen, is a strange goal, because there is already not enough time to make what we do have screen space for unique and interesting, to move away from generic motion, and to push for more personality and quality acting from the game characters. This requires animation to be involved in more than just “make character run 20 feet per sec”, exporting, and then seeing it in game three weeks later looking nothing like what was exported.

Oh yeah, and it still has to be “fun” to play as a game, or no amount of amazing animation will save it.

Without agreeing or disagreeing with any opinion here, I’ll throw in my 2 cents. :slight_smile:

It certainly is hard to get game animation to feel fluid and look the way it was animated in the 3D App. Part of that comes from technical restrictions (We only have so much memory for animation), time restrictions (must get N number of motions done per day to stay on schedule), and communication restrictions.

I say communication ‘restrictions’ and not ‘issues’ because in my experience, more often than not it isn’t that people aren’t trying to communicate; it’s that they either don’t know how, or cannot communicate what needs to happen because they don’t have enough information. Design is an organic process: it changes throughout development. So can programming: we all try to have our tech nailed down going into production, but things come up that force us to look into changes.

I think animation blendtree/state machines are a good start, especially when giving animators the tools to implement the blends and conditions of their motions themselves. But that’s just the straight animation part of it. How the animation networks/blendtrees/state machines are used by game code still affects that data. To that end, my opinion is that scripting systems and AI need to be developed with animation and the existing animation system in mind, instead of animation being created to work within an existing scripting or AI system. Ideally, both would be created concurrently, though I can’t speak to the feasibility of that. This site:

www.aigamedev.com

is a great resource, if for nothing else than the fact that it posts whitepapers on AI and animation fairly regularly.

Who knows, maybe I’m crazy, but I envision a day, very soon, where animators control exactly how their animations look in game (with art direction and design input), and where AI/scripting uses the data given to them by animation and creates a system that no longer has characters spinning on a dime or doing the shakypants dance.

I have to say, at Volition we’re spoiled: all of our animators want to push game animation to new heights. There is also a definite stronger desire to work on in-game animation over cinematic animation, where there are fewer limitations.

[QUOTE=kees;3509]Alan,

I love your reply. :slight_smile:
So true (for the most part)

I’ve only met a few animators who were really interested in pushing game animation into new heights, while most dream of jobs somewhere else.

:)[/QUOTE]

To answer the OP, lots of games have cloth simulations these days; Havok has even released a third-party solution for it. Muscles can be faked to some extent by using ‘smart’ bones, or you can model morph targets for flexed vs. unflexed muscles and trigger them when needed. Fine details can be added via blended normal maps, as seen in Fight Night Round 4. It’s not fancy compared to Shrek, but I would argue that we have bigger problems to solve anyway, due to the interactive aspect of videogames. Uncharted 2’s characters’ skin may not accurately slide over the sternocleidomastoid muscles, but they do interaction well.
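As a sketch of the ‘trigger morph targets’ part: drive a flex shape’s weight from the joint angle. The angle range here is an assumption for illustration, not from any particular engine:

```python
def bicep_flex_weight(elbow_angle_deg, start=30.0, full=120.0):
    """0.0 while the arm is nearly straight, 1.0 at full bend, linear between.
    The 30/120 degree range is a made-up example, tune per character."""
    t = (elbow_angle_deg - start) / (full - start)
    return max(0.0, min(1.0, t))

def apply_morph(base_verts, morph_deltas, weight):
    """Blend precomputed morph-target deltas onto base vertex positions."""
    return [tuple(b + weight * d for b, d in zip(bv, dv))
            for bv, dv in zip(base_verts, morph_deltas)]
```

The same shape of code works for corrective shapes on knees and elbows: the driver is whatever the runtime already knows (a joint angle), so no extra animation data needs exporting.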

Uncharted 2, if you haven’t seen it: http://www.gametrailers.com/video/ex...harted-2/49328

To people blaming their animators, have you considered that perhaps they’re not happy because of the animation pipeline? We have fancy tech like IK, additive blend, look-at, smart ragdolls… but they’re almost always hard-coded by programmers instead of being freely available to animators.

So why can’t animators keyframe character limbs to IK goals like ‘floor’, ‘enemy spine’ or ‘closest climb anchor’ without having to ask programmers about it? These IK triggers are already in your animator’s working scene, they’re just not exported. Or how about better visualization tools? Most animations are done entirely out of context, without knowing how they blend to and from other behaviors, without knowing how it’d look if the ‘hurt’ animation played on top of the run cycle etc. Tech doesn’t really matter when you have no tools to take advantage of it.
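For the runtime side of a keyframed IK goal, even the classic analytic two-bone solve is small. A 2D sketch, under the assumption that the exporter has already resolved the goal (‘floor’, ‘enemy spine’, etc.) to a position; this is just the solver the engine would run after that:

```python
import math

def two_bone_ik(l1, l2, target):
    """Aim a two-bone chain (lengths l1, l2, root at origin) at a 2D target.
    Returns (shoulder_angle, elbow_bend) in radians; unreachable targets
    are clamped to the closest reachable distance."""
    tx, ty = target
    dist = math.hypot(tx, ty)
    dist = max(abs(l1 - l2) + 1e-6, min(dist, l1 + l2 - 1e-6))
    # law of cosines gives the interior elbow angle; bend is its supplement
    cos_elbow = (l1 * l1 + l2 * l2 - dist * dist) / (2.0 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # shoulder aims at the target, offset by the triangle's inner angle
    cos_inner = (l1 * l1 + dist * dist - l2 * l2) / (2.0 * l1 * dist)
    shoulder = math.atan2(ty, tx) - math.acos(max(-1.0, min(1.0, cos_inner)))
    return shoulder, elbow
```

The hard part in production isn’t this math; it’s exactly what the post says: getting the goal markers the animator already keyed in their scene exported and hooked up without a programmer in the loop every time.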

Yep, 100% agree with Francois here. All the animators here are totally about the game animation, not dreaming of making movies (we did interview someone for our lead animator position once, who actually said in the interview that their goal was to be a director or animator for film, at which point we kinda went “er, no”).

It’s definitely the best solution to try and make sure the animators are seeing their work in the “live” game environment. They should not be spending all their time in motionbuilder or maya or whatever and just letting other people implement their segments of animation into whatever part of the game needs them. That sort of thing leads too easily to bad-looking transitions and inconsistent motion.

Having tools for an animator to easily go from the authoring app straight into the game with the minimum of fuss, and having access to familiar controls (as Francois said, IK and various interaction parameters) in the game world would be a huge bonus wherever possible.

100% agreed here.

[QUOTE=Francois Levesque;3544]
To people blaming their animators, have you considered that perhaps they’re not happy because of the animation pipeline? We have fancy tech like IK, additive blend, look-at, smart ragdolls… but they’re almost always hard-coded by programmers instead of being freely available to animators.

So why can’t animators keyframe character limbs to IK goals like ‘floor’, ‘enemy spine’ or ‘closest climb anchor’ without having to ask programmers about it? These IK triggers are already in your animator’s working scene, they’re just not exported. Or how about better visualization tools? Most animations are done entirely out of context, without knowing how they blend to and from other behaviors, without knowing how it’d look if the ‘hurt’ animation played on top of the run cycle etc. Tech doesn’t really matter when you have no tools to take advantage of it.[/QUOTE]

Thanks everybody for the posted info!

I’ll definitely look into the Dual Quaternion stuff, thanks for the tip!

I’m looking forward to some real skin sliding and muscles for even nicer creatures though. With current polycounts it might not make much sense yet…?

Speaking of interactivity, have people seen the new Ico (“Trico”) trailer? Some very nice creature/character interplay there! Cool design, too…

Does anyone work in a team where animation/design/programming are grouped to work just on animation/action tuning? Either two or three people who are just working on animation and interaction visual tuning.

How many animators implement their animations in the game tools and tweak the AI parameters themselves, or is it left for design/programming to add in (bottlenecking the whole process)?

Are people finding tools like Havok animation are giving animators more control over the motion, or is it still being left in programming’s hands?

Aside from saying “the animator is evil!!”, I noticed a lot of animators who are more Disney… are able to create what looks right, especially when the movements are faster, and some focus more on making it look right even if the rig is not really being nice. I should mention my background has been smaller companies, so there tends to be more flexibility.

And for a new skinning system, I wonder how heavy Pixar’s harmonic coordinates weighting would be for real-time; Blender has now implemented it, if I’m not mistaken. I would assume it is about the same hit on the processor as the dual quaternion method.
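For what it’s worth, the runtime cost of harmonic coordinates should indeed be modest: the expensive part is the offline volumetric solve for the weights, and after that, deforming a vertex is just a weighted sum of cage vertex positions, the same shape as a skinning loop. A sketch under that assumption:

```python
def deform_vertex(weights, cage_positions):
    """Harmonic-coordinate deformation of one mesh vertex.
    weights: precomputed harmonic weights, one per cage vertex (sum to 1);
    cage_positions: current (x, y, z) of each cage vertex."""
    x = sum(w * p[0] for w, p in zip(weights, cage_positions))
    y = sum(w * p[1] for w, p in zip(weights, cage_positions))
    z = sum(w * p[2] for w, p in zip(weights, cage_positions))
    return (x, y, z)
```

So per vertex it’s one multiply-add per cage vertex, comparable to one per bone influence in skinning; the precomputed-weight table is where the memory cost lives.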

[QUOTE=bclark;3634]Does anyone work in a team where animation/design/programming are grouped to work just on animation/action tuning? Either two or three people who are just working on animation and interaction visual tuning.

How many animators implement their animations in the game tools and tweak the AI parameters themselves, or is it left for design/programming to add in (bottlenecking the whole process)?

Are people finding tools like Havok animation are giving animators more control over the motion, or is it still being left in programming’s hands?[/QUOTE]

We’ve been historically bottlenecked by having animators wait for programming to hook animations into the game. From there, it’s a back-and-forth workflow between the animator, a programmer, and a designer, making sure the animations look and behave as they should; it’s rarely exactly what the artists envisioned, but usually close enough. It’s something we’re moving away from: we’re now moving toward giving the animators more complete control over how animations look and behave, but I can’t say much more than that.

Havok’s Behavior Tool does allow a more technical artist to set up animation networks, but most animators would likely shy away from the tool’s complexity; I believe Havok expects this tool to be used by technical artists. Morpheme:Connect is more artist-friendly, but not as powerful as Havok (at least, it wasn’t last fall).

We use HBT (Havok Behavior Tool) at 7. It’s a great tool that allows us artists to determine animation blending and states, and pretty much control how we want the animations to display. That said, there are issues with the tool. Stability was one: versions 5 and 5.5 were prone to crashing if left open. There were many different file types associated with each character. Debugging exactly what was happening at runtime was a little complicated and required (at least for us) someone from the tech department. There were some bugs that I have been told have been fixed in the latest release (6.5). I am also told that the documentation for HBT is greatly improved; 5.0-5.5 had very poor help files.

Many of these files were platform-specific. We had file types for Wii, PS3, Xbox 360, and Win32; HBT only recognized and worked with the Win32 files. Then there were XML files, character files, the behavior graph file, all the anim files, and the project file. That’s not including any ragdoll files. Eventually I made a MAXScript (because it’s really the only language I know well enough for this) that allowed you to open the project file, and it would check it and all the supporting files out of Perforce for you, to make dealing with it easier.
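For anyone wanting to do the same outside MAXScript, a hypothetical Python version of that helper might look like the following. The `path="..."` attribute pattern is a guess at the project XML for illustration, not the real HBT schema:

```python
import re
import subprocess

def referenced_files(project_xml_text):
    """Pull file paths out of the project XML (hypothetical path="..." schema)."""
    return re.findall(r'path="([^"]+)"', project_xml_text)

def checkout_commands(project_path, project_xml_text):
    """Build the 'p4 edit' command for the project file plus its references."""
    files = [project_path] + referenced_files(project_xml_text)
    return ["p4", "edit"] + files

def checkout(project_path, project_xml_text):
    """Check the project file and all supporting files out of Perforce."""
    subprocess.run(checkout_commands(project_path, project_xml_text), check=True)
```

Splitting command construction from execution keeps the path-gathering logic testable without a Perforce server handy.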

The positives were that artists were able to add animations, make additive animations, use modifiers to add secondary movement procedurally, randomize idles, etc. It’s a very powerful tool, but complicated, and it really does take a little time to create a fast workflow. There are so many things that can be done with it. We made template graphs for certain characters to speed up the creation process.

Granted, HBT was new to us, so this could probably have been pared down. And we’ve only used it on two projects; the second was fairly similar to the first (no one wanted to fix what worked). So we really haven’t pushed it past the first project or really refined our process with it yet.

Most animations are done entirely out of context, without knowing how they blend to and from other behaviors, without knowing how it’d look if the ‘hurt’ animation played on top of the run cycle etc. Tech doesn’t really matter when you have no tools to take advantage of it.

I totally agree. This is exactly why, in my previous experience, I spent a lot of time and effort developing a modern system that would allow animators to do just these things. The animation tool was a node-based editor where each node was a piece of animation or a blending method, and nodes could be grouped and arranged to turn various states and bits of anim data on and off as they would occur in game.

Unfortunately, the fact of the matter is that all of this state management stuff is complicated no matter how you slice it; our animators could not wrap their heads around it. Even the simplest task, creating an anim tree consisting of two animations with a simple blend between them, was too difficult. The staff just did not understand how animation goes into a game in a modern system, and they were not technically savvy enough to deal with the node-based editor.
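For reference, the “two animations with a simple blend between” case is conceptually this small (poses reduced to {bone: value} dicts here so the idea stays visible; real nodes sample clip data at a time):

```python
class ClipNode:
    """Leaf node: stands in for a sampled animation clip."""

    def __init__(self, pose):
        self.pose = pose  # {bone_name: value}, a stand-in for real clip data

    def evaluate(self, time):
        return dict(self.pose)

class BlendNode:
    """Interior node: linearly blends the poses of its two children."""

    def __init__(self, a, b, weight):
        self.a, self.b, self.weight = a, b, weight  # weight 0 -> a, 1 -> b

    def evaluate(self, time):
        pa, pb = self.a.evaluate(time), self.b.evaluate(time)
        return {bone: (1.0 - self.weight) * pa[bone] + self.weight * pb[bone]
                for bone in pa}

# a two-clip tree with a 25% blend toward the second clip
tree = BlendNode(ClipNode({"knee": 0.0}), ClipNode({"knee": 90.0}), 0.25)
```

Which rather supports the point: the concept is simple, so the gap is in presentation and training, not in the underlying machinery.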

So, my perception of where animation is going is obviously tainted. If what I saw is occurring at other shops, then I do not have much hope that in game animation is going to get any better anytime soon.

I noticed a lot of animators who are more Disney… are able to create what looks right

Well, without going into a huge example-by-example discussion, I would say I disagree. On the whole, my opinion of keyframing is that it either ends up looking very keyframey (bad) or very Disney (good, but not appropriate).

Mocap is king. Obviously, mocap needs massaging by a trained animator, but in my book, beginning an animation without a mocap base and going keyframe from start to finish is not the answer.