The dream is simple:
- The animator grabs a point on the character, says “go there”.
- The rig figures it out. The point goes there.
How:
If the entire logic of the rig is written in a common, differentiable system, then we can use the same error propagation that neural networks use to train.
Here, the animator’s desired delta is the error of the system - we run the rig in reverse, propagating the error back to the controls, to find the configuration of controls that best achieves it.
(Then maybe we loop it a few more times, idk).
To be clear, I’m NOT talking about TRAINING or LEARNING anything - only using the BACK-PROPAGATION that neural net libraries provide.
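A minimal sketch of the whole loop, assuming a toy two-joint FK chain standing in for the rig (PyTorch here only because it’s compact - bone lengths, target and step size are all made up for illustration):

```python
import torch

# Toy planar FK "rig": two joint angles -> end-effector position.
BONE_A, BONE_B = 2.0, 1.5  # placeholder bone lengths

def fk(angles):
    a, b = angles[0], angles[0] + angles[1]
    elbow = torch.stack([BONE_A * torch.cos(a), BONE_A * torch.sin(a)])
    return elbow + torch.stack([BONE_B * torch.cos(b), BONE_B * torch.sin(b)])

angles = torch.zeros(2, requires_grad=True)  # the rig controls
target = torch.tensor([1.0, 2.0])            # where the animator dragged the point

opt = torch.optim.SGD([angles], lr=0.1)
for _ in range(200):                              # "loop it a few more times"
    opt.zero_grad()
    loss = torch.sum((fk(angles) - target) ** 2)  # the animator's delta is the error
    loss.backward()                               # back-propagate through the rig
    opt.step()                                    # nudge the controls
```

No training anywhere - the only thing back-propagation gives us is the gradient of the error with respect to the controls, and a real rig would swap `fk` for the full rig evaluation.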
I’m drawn to TensorFlow for its Graph system: Introduction to graphs and tf.function | TensorFlow Core
PyTorch also has a graph system, with seemingly less in-depth documentation: PyG Documentation — pytorch_geometric documentation
But I have no idea if better docs translate to better value.
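For comparison, roughly the same step under tf.function, which traces the rig evaluation into a graph on the first call and replays it after (same toy chain and made-up numbers as above):

```python
import tensorflow as tf

angles = tf.Variable([0.0, 0.0])    # the rig controls
target = tf.constant([1.0, 2.0])    # the animator's target

@tf.function  # traced into a graph once, then replayed
def step():
    with tf.GradientTape() as tape:
        a = angles[0]
        b = angles[0] + angles[1]
        elbow = tf.stack([2.0 * tf.cos(a), 2.0 * tf.sin(a)])
        hand = elbow + tf.stack([1.5 * tf.cos(b), 1.5 * tf.sin(b)])
        loss = tf.reduce_sum((hand - target) ** 2)
    grads = tape.gradient(loss, [angles])  # error w.r.t. the controls
    angles.assign_sub(0.1 * grads[0])      # nudge the controls
    return loss

for _ in range(200):
    step()
```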
DiffTaichi seems like the only library that would hold up to doing this on deformers as well: Differentiable Programming | Taichi Docs
- but maybe it’s not actually useful to be able to reverse soft deformations alongside rigids.
(Consider shot-sculpting a silhouette - it seems unlikely you would ever want to meet a shape using both rig kinematics and changes to the muscle rig, for example)
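Still, for a sense of what the Taichi side might look like, a toy “deformer” - just a global offset on a handful of points, with every value invented:

```python
import taichi as ti

ti.init(arch=ti.cpu)

n = 8
pts = ti.Vector.field(3, dtype=ti.f32, shape=n, needs_grad=True)      # points to deform
offset = ti.Vector.field(3, dtype=ti.f32, shape=(), needs_grad=True)  # the "control"
loss = ti.field(dtype=ti.f32, shape=(), needs_grad=True)

@ti.kernel
def deform_and_score():
    for i in range(n):
        p = pts[i] + offset[None]  # trivial deformer: push everything by offset
        # Squared distance to a made-up sculpt target at (0, 1, 0).
        loss[None] += (p - ti.Vector([0.0, 1.0, 0.0])).norm_sqr()

loss[None] = 0.0
with ti.ad.Tape(loss=loss):  # records the kernel, then runs its adjoint
    deform_and_score()
print(offset.grad[None])     # d(loss)/d(offset): how to move the control
```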
Stability is a problem here - I imagine controls would need a “stiffness” to say how easily they’re moved by deltas compared to others - hands and feet less stiff than body controls, for example.
And then this might need to be available to the animator in a tactile way, like a brush tool radius.
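A sketch of how that might plug in: divide each control’s gradient step by its stiffness, so stiff controls absorb less of the delta (the chain, stiffness values and step size are all invented):

```python
import torch

def fk(angles):
    # Same toy two-joint chain as the first sketch.
    a, b = angles[0], angles[0] + angles[1]
    elbow = torch.stack([2.0 * torch.cos(a), 2.0 * torch.sin(a)])
    return elbow + torch.stack([1.5 * torch.cos(b), 1.5 * torch.sin(b)])

angles = torch.zeros(2, requires_grad=True)
target = torch.tensor([1.0, 2.0])
stiffness = torch.tensor([4.0, 1.0])  # e.g. body control stiff, ankle loose

for _ in range(200):
    loss = torch.sum((fk(angles) - target) ** 2)
    (grad,) = torch.autograd.grad(loss, angles)
    with torch.no_grad():
        angles -= 0.1 * grad / stiffness  # stiff controls take smaller steps
```

A brush tool would then just be a UI for painting the `stiffness` values per control.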
Another advantage is in more complex constraints. Consider the human knee: the femur and tibia move in a specific relation - one relative position for each relative rotation.
With a fully differentiable system, any arbitrary constraint like this should be simpler to implement, just as you would set a driven key between values.
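One hedged sketch of how such a relation could ride along in the same solve - express the driven-key curve as a differentiable function and penalize deviation from it (the curve and the weight are invented):

```python
import torch

def driven_slide(bend):
    # Hypothetical driven-key curve: tibia slide as a function of knee bend.
    # In practice this could be a spline sampled from an anim curve.
    return 0.2 * torch.sin(bend)

bend = torch.tensor(0.3, requires_grad=True)   # knee rotation control
slide = torch.tensor(0.1, requires_grad=True)  # tibia translation control

# Soft constraint: deviation from the driven relation is just another
# error term, summed into the same loss as the animator's delta.
constraint = 10.0 * (slide - driven_slide(bend)) ** 2
constraint.backward()
# bend.grad and slide.grad now pull the pair back onto the relation curve.
```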
Some difficulties I can already imagine:
- TensorFlow has a linalg package, but it’s still marked experimental
- You’re locked to programs that support python (though there’s apparently a way to port a TF graph into C++)
- I’ve had great difficulty getting Taichi stable at all, and that’s without getting it to play nice with TensorFlow.
- Building a parallel graph to Maya’s own is always tricky, but I’ve done it a couple of times before
- The idea relies on the propagated error still being in a useful form by the time it hits the rig’s parameters. I have no idea how realistic this is.
Eager to hear any perspective you have on this, or if any of you have already tried it before.
Thanks