I’m curious if anybody else has worked on this problem, and what solutions you’ve found or come up with. But I’m kinda hoping that my solution is novel.
Background: In Simplex (like other blendshape systems I’ve seen), there are primaries, and combos. When certain groups of primaries are activated, that automatically activates certain combos. But it’s not just flipping switches. Every primary shape can vary between 0 and 100% and beyond, so the combos need to activate smoothly as well as handle extrapolation. This means they need to be controlled by smooth (or at least continuous) mathematical functions.
But I got a request to make it work with joint poses as well.
So here’s my problem: What function should I use to interpolate and extrapolate joint pose orientations? I want constant velocity (i.e., a 25% weight would give exactly 25% of the rotation). And I want the function to be order-agnostic (i.e., blending poses A, B, and C would produce the same output as blending poses C, A, and B).
Of course, a Radial Basis Function (RBF) immediately comes to mind. But that gives you weights to apply to the poses. I’m trying to find out how to apply those weights, so that doesn’t work.
The thing I WANT is to somehow sum SLERPing quaternions. But that doesn’t seem to work because applying rotations is an inherently ordered operation. I could SLERP between pairs of SLERPs (kinda like bilinear interpolation), but that requires 2**N items. And also, I’d have to make a decision about which order to pair things. And different pairings give different outputs.
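To make that concrete, here’s what the pairing idea looks like for two control weights (just a numpy sketch; slerp and bi_slerp are my own made-up helpers, not Simplex code):

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                     # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                  # nearly parallel: lerp + renormalize is fine
        out = q0 + t * (q1 - q0)
        return out / np.linalg.norm(out)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def bi_slerp(q00, q10, q01, q11, u, v):
    """'Bilinear' slerp: slerp along u twice, then slerp those results along v.
    Needs all 2**N corner poses, and pairing v-then-u instead of u-then-v
    can give a different answer -- which is exactly the problem."""
    return slerp(slerp(q00, q10, u), slerp(q01, q11, u), v)
```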
I could do NLERP (which is just normalizing a linear combination of quaternions). But that doesn’t do the constant velocity.
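For reference, here’s the kind of weighted NLERP I mean (a numpy sketch with my own made-up function name):

```python
import numpy as np

def nlerp(quats, weights):
    """Weighted NLERP: normalize a weighted linear combination of quaternions.
    Cheap and order-agnostic, but the rotation speed isn't constant -- it
    bunches up toward the middle of the blend."""
    quats = np.asarray(quats, float)          # shape (N, 4)
    weights = np.asarray(weights, float)      # shape (N,)
    # Flip signs so every quaternion sits in the same hemisphere as the first
    signs = np.where(quats @ quats[0] < 0.0, -1.0, 1.0)
    blended = (weights[:, None] * signs[:, None] * quats).sum(axis=0)
    return blended / np.linalg.norm(blended)
```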
I just read the paper on dual-quaternion skinning, and it does NLERP. So there’s no help there.
I could also just do a linear interpolation of the matrix deltas. And this can work. However, there are scaling issues during the blends. It’s just like the candy wrapper effect from linear skinning. I could add intermediate shapes, and spline interpolation, but that feels ugly to me. I’d like to get rid of intermediates. (That said, this is still useful for interfacing with linear skinning algorithms)
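That matrix-delta blend is basically this (again just a sketch, names are mine):

```python
import numpy as np

def blend_matrix_deltas(rest, poses, weights):
    """Weighted linear blend of 4x4 matrix deltas (pose - rest), added back onto rest.
    Translation comes out fine, but the rotation part stops being orthonormal
    mid-blend, which is where the shrinking / candy-wrapper artifacts come from."""
    rest = np.asarray(rest, float)
    deltas = [np.asarray(pose, float) - rest for pose in poses]
    return rest + sum(w * d for w, d in zip(weights, deltas))
```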
I was banging my head against this problem for a couple of days when I got an idea. I previously found a StackOverflow question on averaging quaternions, and there was a really neat idea in there based on a paper from NASA. The reasons why/how it works are way above my head, but I can sure call the numpy functions to implement it! Apparently the eigenvector with the largest eigenvalue of the sum of the matrices formed by taking the outer product of each quaternion with itself is the average quaternion.
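Here’s roughly how that averaging trick sketches out in numpy (based on my reading of that answer; the function name is mine):

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Average quaternions the NASA-paper way: build the weighted sum of the
    outer products q*q^T, then take the eigenvector with the largest eigenvalue.
    Handy side effect: q and -q give the same outer product, so sign flips don't matter."""
    quats = np.asarray(quats, float)              # shape (N, 4)
    if weights is None:
        weights = np.ones(len(quats))
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)          # M is symmetric, so eigh is safe
    avg = eigvecs[:, np.argmax(eigvals)]          # column with the largest eigenvalue
    return avg / np.linalg.norm(avg)
```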
Ignoring all that crazy stuff, what is the naive formula for averaging? Just ADD UP ALL THE VALUES, then divide by the count of values. So by averaging, I’m summing everything together, in a manner of speaking! So then I would just need to “multiply” by the count of values (done by raising the average quaternion to that power) to cancel out that division and get the sum.
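And raising a unit quaternion to a power is just scaling its rotation angle, so the whole “sum by un-dividing the average” idea sketches out like this (made-up names again, it reuses the average_quaternions sketch above, and it assumes x, y, z, w ordering):

```python
import numpy as np

def quat_pow(q, t):
    """Raise a unit quaternion to a (possibly fractional) power by scaling its
    rotation angle: q**t is slerp(identity, q, t)."""
    q = np.asarray(q, float)
    angle = 2.0 * np.arccos(np.clip(q[3], -1.0, 1.0))   # assumes (x, y, z, w) order
    axis = q[:3]
    norm = np.linalg.norm(axis)
    if norm < 1e-12:                                     # identity rotation: nothing to scale
        return np.array([0.0, 0.0, 0.0, 1.0])
    half = 0.5 * t * angle
    return np.concatenate([np.sin(half) * (axis / norm), [np.cos(half)]])

def quat_sum(quats):
    """'Sum' a set of rotations: average them with the eigenvector trick, then
    raise the average to the power of the count to cancel out the division."""
    avg = average_quaternions(quats)                     # from the sketch above
    return quat_pow(avg, len(quats))
```

With fractional weights on each pose, I’m assuming you’d weight the average and raise it to the total weight instead of the plain count.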
So here’s the “almost” in the title. I haven’t figured out how to reverse/invert this process in a useful way, which means I can’t get deltas the same way I would with blendshapes. But I can get a rotation offset relative to the output. So I just do the poses in layers: All the primary shapes, then all the 2-combos, then all the 3-combos and so on.
But with that little caveat … holy crap, this seems to work!