Shader based blend shape, stuck :(

Hello everyone.

I’m working on blend shape code in DX9. I’ve got it pretty much working, but I need some tips on how to calculate the weights in Maya to get the proper amount of displacement.

I’ve made a MEL script in Maya that takes a base model and a blend shape target and calculates the delta vector for each vertex by subtracting vertex positions. I then normalize it to unit length. After that I multiply it by 0.5 and add 0.5 to remap it from vector range [-1, 1] into color range [0, 1], and apply that as a vertex color to the original mesh.
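In other words, per vertex it boils down to this (a quick sketch in HLSL-style syntax just to show the math; the actual script is MEL, and the names are made up):

// encode: per-vertex delta direction -> 0-1 vertex color (hypothetical names)
float3 EncodeDelta(float3 basePos, float3 targetPos)
{
    float3 dir = normalize(targetPos - basePos); // unit-length delta direction (untouched vertices need a zero-delta guard)
    return dir * 0.5 + 0.5;                      // remap [-1, 1] to [0, 1] so it can be stored as a color
}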

So far very good right?

Now about the weights. What I do right now, in MEL pseudocode, is this:

for each vertex
{
    compute the length of the delta vector and keep track of the largest one (stored in a global float)
}

for each vertex
{
    compute the length of the delta vector and divide it by the previously stored max length (this yields a clamped 0-1 "weight")
    put it in the alpha of the vertex color
}

I use this in a vertex shader to displace the model vertices, but I don’t get the exact result I was expecting, even though it looks very close. I know I should be computing the weights differently than by simply taking a length in the 0-1 range, but I have no idea how to do that. Does anyone have any ideas?
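For reference, the displacement part of my vertex shader is roughly this (a minimal sketch with made-up names; I’m assuming the max delta length found in Maya gets passed back in as a shader constant so the displacement comes out in real units):

// minimal dx9 vertex shader sketch: direction from vertex color RGB, 0-1 weight from alpha
float4x4 WorldViewProj;
float    MaxDeltaLength;  // assumed: the largest delta length found in Maya, set as a constant
float    BlendAmount;     // assumed: 0-1 blend slider

float4 BlendShapeVS(float4 pos : POSITION, float4 col : COLOR0) : POSITION
{
    float3 dir  = col.rgb * 2.0 - 1.0;        // [0,1] color back to [-1,1] direction
    float  dist = col.a * MaxDeltaLength;     // relative weight back to an actual distance
    pos.xyz += dir * dist * BlendAmount;      // displace along the stored delta
    return mul(pos, WorldViewProj);           // out to clip space
}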

Thank you so much =]

EDIT:

Maya test model:
http://img153.imageshack.us/img153/4197/orgmesh.jpg

FX Composer result (sorry for the quality)
http://img684.imageshack.us/img684/5209/blendshape.gif

Okay, I fixed it. The only problem was that I took the length of the vector wrong :stuck_out_tongue: I should have taken the length of the delta direction, but I took it from the vertex position instead. Stupid mistake, I know, haha.

Here is the final result with pixel-perfect blend shaping, reading the vectors from the vertex color RGB and the weight from the alpha. The next step is to make several blend shapes and store them in a TGA, taking the square root of the vertex count to get the texture size. Maybe also store UVs and make things like wrinkle maps, etc. The possibilities are endless =D

A single-pass shader running just a single vertex shader in DX9 :slight_smile:

Nifty! May I ask a few questions?

Why have you decided that the Colors must be normalized? Couldn’t you send the same data across in 3 Channels instead of 4? Why a 0-1 Color channel? Wouldn’t it be fewer instructions for the same cost to just send another position channel, subtract it from the original, multiply by the blend amount and add it back?

Have you considered storing these as tangent-space vectors instead, and using normal-blending tricks to blend multiple poses at once?

Sorry for the probes, I totally geek out over this stuff, and it’s fun to see other people’s experiments.

[QUOTE=Lithium;5811]Nifty! May I ask a few questions?

Why have you decided that the Colors must be normalized? Couldn’t you send the same data across in 3 Channels instead of 4? Why a 0-1 Color channel? Wouldn’t it be fewer instructions for the same cost to just send another position channel, subtract it from the original, multiply by the blend amount and add it back?

Have you considered storing these as tangent-space vectors instead, and using normal-blending tricks to blend multiple poses at once?

Sorry for the probes, I totally geek out over this stuff, and it’s fun to see other people’s experiments.[/QUOTE]

Heya, thank you for your answer =D

Now that you mention it, I could premultiply the weights in Maya and send just the RGB values without the alpha. Didn’t think about that one, haha; I finished this blend shape code a few hours ago at school, when I should have been doing my artwork project instead.
So yes, you can send 3 channels instead of 4, because the colors are in the 0-1 range and so are the weights! Nothing will go below or above the 0-1 range when multiplied in Maya, and a weight of 1 will yield the same result. Thanks for that one, appreciated!

EDIT: Hmm, I just tested it and it didn’t work, and I know why. If we premultiply and then convert to the 0-1 range, converting back to the -1 to 1 range yields some weird vectors compared to the original method, where you multiply by the weight after converting back to the -1 to 1 range in the shader. So nope, this didn’t work :frowning: I thought it would for a moment ^^

The reason for a 0-1 color for the vectors is that vertex colors can’t store negative numbers, and Maya applies colors to vertices in the 0-1 range, so I need to scale them.
Normalized vectors are in the -1 to 1 range (which vertex colors can’t store, because Maya clamps -1 to black), so multiplying by 0.5 and adding 0.5 gives me a 0-1 color-range vector that I can store as a color on the vertices (same principle as normal maps). All I need to do in the shader is convert back to the -1 to 1 range by multiplying by 2 and subtracting 1, and I can use it without losing the directions :). As for sending another position channel and subtracting it in the shader: that would take more operations, because I’d have to calculate the directions on the fly, so storing the directions in the color channels up front is faster :slight_smile:

Before starting on this vertex-color-based blend shaping, I thought about tangent-space blend shapes, but I didn’t want to go that route before finishing a basic working blend shape.

But blending several poses won’t be a problem with this shader, because you could save the vectors out to a texture and use GPU vertex texture fetch with a blend shape atlas texture, which could hold something like 64 blend shapes inside a 512x512. Using modulus on the X direction and division on the Y direction, you can work out the correct tile of the texture to fetch a vector value from (see the sketch below). I believe you can do 4 vertex texture fetches at a time or something, so you could have your base model and 4 blends!
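Something like this, assuming a 512x512 atlas of 64x64 tiles (one texel per vertex, 8x8 tiles = 64 shapes) and a per-vertex vertexId attribute, since DX9 has no built-in vertex index (names made up):

// sketch: fetch one blend shape's delta for this vertex from the atlas, in the vertex shader
sampler2D BlendAtlas;

float3 FetchDelta(float vertexId, float shapeIndex)
{
    float tileSize    = 64.0;
    float tilesPerRow = 8.0;   // 512 / 64
    float2 local = float2(fmod(vertexId, tileSize), floor(vertexId / tileSize));            // texel inside the tile
    float2 tile  = float2(fmod(shapeIndex, tilesPerRow), floor(shapeIndex / tilesPerRow));  // which tile holds this shape
    float2 uv    = (tile * tileSize + local + 0.5) / 512.0;                                 // sample texel centers
    float4 c     = tex2Dlod(BlendAtlas, float4(uv, 0, 0));                                  // vertex texture fetch (SM 3.0)
    return (c.rgb * 2.0 - 1.0) * c.a;                                                       // decode direction * weight
}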

But we’ll see how I progress with that. I haven’t been able to export a texture in UV space from Maya, which is kind of frustrating. If anyone knows how to export a TGA of the vertex colors, or has any other ideas, please tell me :slight_smile:

Thanks again =]

Another picture :slight_smile:

How well do you think it will handle additive blending of shapes, and how will the performance hold up at full scale? Is there any upper limit, like texture samples or similar, with this method? Just curious, it looks interesting. :slight_smile:

Great experiments!

You need to export an Attribute Map; it’s found under the Paint Vertex Color Tool, near the bottom of the tool settings (sorry, don’t have Maya open atm). There you’ll find Import and Export Attribute Map options… I think TGA is supported.

@Denny: I left my USB stick at school, so all the code/test models/shaders are at my desk and I can’t give a precise answer. I’m positive it will perform very well compared to bones for facial expressions, which usually need a large number of vertex influences per bone, plus loops and multiplications in the shader for the bones, etc. That’s just too much compared to converting a vector from color back to the -1 to 1 range, multiplying by the weight, adding it to the vertex in world space, and then transforming to projection space for the pixel shader (or doing other stuff).

The only limit should be the maximum number of vertex texture fetches you can do, which is 4 in Shader Model 3.0 (128 in Shader Model 4, I think, but that’s DX10). If you want to be hacky, you can store blend shape vectors in vertex colors in addition to the texture atlas to increase the number of blends per object! But who needs more than 4 blends anyway ^^

But I’ll have more precise answers tomorrow when I get to work on it and finish it up, so I can show more than a single blend per object. The goal is to get 4 tomorrow.

@whw: I’m gonna work on the texture atlas method with the modulus/division operations on the X/Y directions tomorrow. Thanks man, really appreciated :slight_smile: very helpful (TGA isn’t supported, but the other formats will do just fine =] )

Hey again all!

I’ve noticed something weird with exporting attribute maps: vertex color values in the alpha channel get a kind of triangulated shading, as if everything were made of hard edges. This doesn’t happen for the main RGB channels though. Really weird =S

Here is what I get in the alpha!

Here is how it should be! (Done by writing the weights into RGB instead of A.)

Btw, the reason the whole head is colored is that I rotated all the vertices to test how it deforms. When you only do facial stuff like the eyes and mouth, the untouched areas get a gray color, which is 0 in vector space [-1, 1] :slight_smile:

It’s not a big deal; I can just export the weights in another pass where I generate them in the RGB channels instead of the A channel, then combine them in Photoshop!

Otherwise the textures work really well on the meshes! I was satisfied with the end result.

BUT! There is a problem.
Exporting an attribute map exports a texture, right? But there is no fill-texture-seams option, so it doesn’t fill in a little bit outside the UV seams. So if you move vertices in those areas of the source mesh, it causes problems in the vertex shader during the blending of the target mesh.
Polygons start to split apart, because the vertex texture fetch misses and returns a pixel to the left or right of the one it should (e.g. a black color instead of a vector). So hide those seams, say behind the head or something, and don’t touch them!

This isn’t a problem on border edges, because those can be fixed in Photoshop by taking the original image, blurring it, and layering it behind the original. So border edges are fine, but seams that aren’t actually borders (such as a seam behind the head) cause big trouble, so you can’t edit the vertex positions in those areas. Apart from that I was happy with the result of the textures: they gave exactly the same blending as the vertex colors, plus the problem in the UV-seamed areas.

If anyone gets any ideas after reading this post that could help me fix some of these problems, feel free to help out =]

Anyway I’ve gotta go home from school now. Regards!

EDIT: Hmm I wonder how bad things will get with those seamed areas when I use smaller textures O_o?? Time will tell =/

Okay, I have some images comparing vertex-color-based blends versus texture-based blend shapes. The vertex-color-based blend shape has no precision problems; it’s perfect. The texture-based one has some bleeding problems, fetching black pixels for the mouth and eyes, so they don’t move as much as they should. It could be fixed by adding some bleed in Photoshop, but still, it’s going to get a lot worse with smaller images. Technically a 64x64 image should be able to handle 4096 vertices, but the bleeding problems really force me to use bigger ones =/.

My 8800GT can handle COLORn where n can be 0-15 in SM3, so technically I could have 16 blend shapes with vertex colors. Of course you can also use some hacks and add two extra UV sets per blend shape (the first set’s XY is RG, the second set’s XY is BA) and use them for blend shape data as well xD.
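Roughly what the vertex declaration could look like (a sketch with a made-up layout):

// sketch: vertex input using several color sets plus packed UV sets (hypothetical layout)
struct BlendVertex
{
    float4 position : POSITION;
    float4 shape0   : COLOR0;       // blend shape 0: direction in RGB, weight in A
    float4 shape1   : COLOR1;       // blend shape 1
    float2 uv       : TEXCOORD0;    // regular texture coordinates
    float2 shape2RG : TEXCOORD1;    // hack: extra UV set carrying shape 2's R and G
    float2 shape2BA : TEXCOORD2;    // hack: extra UV set carrying shape 2's B and A
};

Each shapeN would get decoded with the same multiply-by-2, subtract-1 trick and added with its own weight.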

I’m not very happy with the results I get with the texture-based approach though =/.

result:

Here is the texture RGB and A

And here is a quick preview of the tool I coded to automate the process ^^
I’m using it for my XNA-based engine, for collision and some other stuff that gets exported as opaque data with the FBX.
http://img69.imageshack.us/img69/6442/mayasj.jpg

Anyway, if you guys have any ideas or want to help me make this even better, feel free. Time to go home! Regards!