[QUOTE=Lithium;5811]Nifty! May I ask a few questions?
Why have you decided that the Colors must be normalized? Couldn’t you send the same data across in 3 Channels instead of 4? Why a 0-1 Color channel? Wouldn’t it be fewer instructions for the same cost to just send another position channel, subtract it from the original, multiply by the blend amount and add it back?
Have you considered storing these as tangent-space vectors instead, and using normal-blending tricks to blend multiple poses at once?
Sorry for the probes, I totally geek out over this stuff, and it’s fun to see other people’s experiments.[/QUOTE]
Heya, thank you for your answer =D
Now that you mention it, I could premultiply the weights in maya and send just RGB values without the ALPHA. Didn't think about that one haha, I finished this blendshape code just a few hours ago at school when I should have been doing my artwork project instead.
So yes, you can send 3 channels instead of 4, because the colors are in the 0-1 range and so are the weights! Nothing will go below or above the 0-1 range if multiplied in maya, and a weight of 1 will yield the same result when multiplied. Thanks for that one, appreciated!
EDIT: Hmm, I just tested it and it didn't work, and I know why. If we premultiply the weight and then convert to the 0-1 range, converting back to the -1 to 1 range yields weird vectors compared to the original method, where the weight is multiplied after converting to the -1 to 1 range in the shader. So nope, this didn't work, though I thought it would for a moment ^^
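One way to see why the order matters (a minimal sketch, assuming the usual `v * 0.5 + 0.5` packing): if the weight ends up scaling the *encoded* color instead of the decoded vector, the 0.5 bias gets scaled along with it, so the decode no longer round-trips.

```python
# Hypothetical sketch: weighting the encoded color vs. the decoded vector.
def encode(v):   # pack a -1..1 component into a 0..1 color
    return v * 0.5 + 0.5

def decode(c):   # shader side: unpack back to -1..1
    return c * 2.0 - 1.0

v = 0.6   # original offset component, -1..1 range
w = 0.5   # blend weight

# Correct order: decode first, then apply the weight.
print(w * decode(encode(v)))   # 0.3 == w * v

# Weight applied to the stored color: the 0.5 bias is scaled too,
# giving w*v + (w - 1) instead of w*v.
print(decode(w * encode(v)))   # -0.2, a "weird vector"
```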
The reason for a 0-1 color for the vectors is that vertex colors can't store negative numbers, and maya applies colors to vertices in the 0-1 range, so I need to scale them.
Also, normalized vectors are in the -1 to 1 range (which vertex colors can't store, because maya clamps negative values to black), so multiplying by 0.5 and adding 0.5 gives me a 0-1 color-range vector that I can store on the vertices (same principle as normal maps). All I need to do in the shader is convert back to the -1 to 1 range by multiplying by 2 and subtracting 1, so I can use it without losing the directions :). As for sending another position channel and subtracting it in the shader: that would cost more operations, because the directions would have to be calculated on the fly, so storing the directions directly in the color channels is faster.
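The packing above can be sketched in a couple of lines (same trick as normal maps); the round trip is exact, so no direction is lost:

```python
# Sketch of the 0-1 vertex-color packing described above.
def pack(v):     # map a -1..1 vector component into a storable 0..1 color
    return v * 0.5 + 0.5

def unpack(c):   # shader side: map the 0..1 color back to -1..1
    return c * 2.0 - 1.0

offset = [-0.25, 1.0, -1.0]             # per-vertex blendshape delta, -1..1
color = [pack(x) for x in offset]       # what gets stored as a vertex color
print(color)                            # [0.375, 1.0, 0.0] -- all in 0..1
print([unpack(c) for c in color])       # round-trips back to the offset
```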
Before starting on this vertex-color-based blend shaping, I did think about tangent-space blend shapes, but I didn't want to go that route before finishing a basic working blendshape.
But blending several poses won't be a problem with this shader, because you could save the vectors out to a texture and use GPU vertex texture fetch with a blendshape ATLAS texture, which can hold something like 64 blendshapes inside a 512x512 (an 8x8 grid of 64x64 tiles). Using modulus for the X direction and division for the Y direction, you can find the correct tile of the texture to fetch a vector value from. I believe you can do about 4 vertex texture fetches at a time, so you could have your base model plus 4 blends!
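The modulus/division indexing could look something like this (a hypothetical sketch, assuming 64 shapes laid out as an 8x8 grid of tiles, each vertex addressed by its own 0-1 UV within a tile):

```python
# Hypothetical atlas indexing: modulus picks the column, integer
# division picks the row, then the vertex UV is squeezed into that tile.
TILES_PER_ROW = 8   # 8x8 grid of 64x64 tiles in a 512x512 atlas

def atlas_uv(shape_index, u, v):
    """Map a vertex's 0..1 UV into the tile for the given blendshape."""
    tile_x = shape_index % TILES_PER_ROW
    tile_y = shape_index // TILES_PER_ROW
    return ((tile_x + u) / TILES_PER_ROW,
            (tile_y + v) / TILES_PER_ROW)

print(atlas_uv(0, 0.5, 0.5))   # first tile, centre: (0.0625, 0.0625)
print(atlas_uv(9, 0.0, 0.0))   # second row, second column: (0.125, 0.125)
```

In a shader this would be the same arithmetic, with `fmod`/`floor` replacing `%` and `//`.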
But we'll see how I progress with that. I haven't been able to export a texture from maya in UV space, which is kind of frustrating. If anyone knows how to export a TGA of the vertex colors, or has any other idea, please tell me.
Thanks again =]