Shader-based blendshape V2.0 finished (major overhaul)

Heya all again =]

I’ve changed my whole approach today after coming up with a new idea.

Sorry for the aspect ratio btw.

Anyway my approach is like this in MEL:

Calculate the number of pixels needed for a square texture big enough to hold every vertex:
texDim = ceil(sqrt(vertexCount));
pixelsNeeded = texDim * texDim;

Then I move on to create an array to hold enough pixels.
I start by calculating the delta vectors and the linear depth, put them inside an array and send it off to an MLL plugin I wrote in C++ that simply outputs a texture.
Once the texture is written out, I calculate a new UV set for the entire mesh that matches the order of the colors in the texture. Since the orders match, I can simply use the new UV set.
It doesn’t matter whether the UVs overlap or not, because we do Vertex Texture Fetching in a vertex shader with POINT sampling.
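
To give an idea of the shader side, here is a minimal vs_3_0 sketch of the fetch (all names here are made up, and the weight is just a single slider uniform for now):

texture DeltaTex;
sampler2D DeltaSampler = sampler_state
{
    Texture   = <DeltaTex>;
    MinFilter = POINT;  //one texel per vertex, no filtering
    MagFilter = POINT;
    MipFilter = NONE;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
};

float4x4 WorldViewProjection;
float    BlendWeight;  //0..1 slider

struct VS_IN
{
    float4 Position : POSITION;
    float2 DeltaUV  : TEXCOORD1;  //the generated UV set, one texel per vertex
};

//Fetch the delta for this vertex and displace the position before projecting
float4 VS_Blend(VS_IN IN) : POSITION
{
    //vs_3_0 vertex texture fetch, the LOD has to be given explicitly
    float3 delta = tex2Dlod(DeltaSampler, float4(IN.DeltaUV, 0, 0)).xyz;
    return mul(IN.Position + float4(delta * BlendWeight, 0), WorldViewProjection);
}

Because the generated UVs and the texel order match, POINT sampling means every vertex reads back exactly its own delta.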

The texture you see in the movie is 30x30 pixels, but I resize it to 60x60 because there are some small errors otherwise, even though I already correct the half-pixel offset for the UVs ahead of time in MEL so they match the texels in DX9.
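
Just to spell out that mapping (shown in HLSL syntax for convenience, even though I actually bake it into the UV set from MEL; the helper name is made up):

//Hypothetical helper: vertex i on a texDim x texDim texture maps to the
//centre of its texel, so POINT sampling in DX9 reads exactly the texel
//that was written for that vertex
float2 VertexIndexToUV(int i, int texDim)
{
    float2 texel = float2(i % texDim, i / texDim);  //column, row
    return (texel + 0.5f) / texDim;                 //texel centre in [0,1]
}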

But I’m very happy with the end result now. 60x60 pixels for that head is kinda nothing I believe, and I can easily build a texture atlas by extending my script/plugin and do several blends at the same time in a single pass in HLSL =]

Best regards and peace out, it was a good day today =]

Yup, that’s exactly what I did about 2 years ago when I tried it on my own. The only real problem there was my limited post-fx skillset: I needed a way to combine those textures in a pre-processing pass based on some parameters (i.e. using these blendshapes in these ratios corresponds to blending the blendshape textures in the same ratios). Or you could probably use an atlas, like you say. One optimization you can make is to have verts using the same information point to the same pixel (though that may be overkill). I’m sure texture-based blend shapes are possible to do well; I didn’t have the chops to finish, but I hope to see what you can make.
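
To illustrate what I mean by the pre-processing combine pass, it would be something along these lines (completely made-up names, one sampler per blendshape texture and a weight for each):

//Hypothetical combine pass: sum the deltas of a few blendshape textures,
//weighted by their blend ratios, into one texture that the vertex shader
//then only has to fetch once
sampler2D ShapeA;
sampler2D ShapeB;
sampler2D ShapeC;
float3 Weights;  //blend ratios for A, B, C

float4 PS_CombineDeltas(float2 uv : TEXCOORD0) : COLOR
{
    float3 delta = tex2D(ShapeA, uv).xyz * Weights.x
                 + tex2D(ShapeB, uv).xyz * Weights.y
                 + tex2D(ShapeC, uv).xyz * Weights.z;
    return float4(delta, 1.0f);
}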

There are several performance implications here: VTF, dependent texture reads, texture size, and probably some I’m not thinking of. Since the work is almost all in the vertex shader you may be fine (since it is never the bottleneck anyway), but I’d do some serious profiling; these are things not normally done in the vertex shader, and it would be good to know what their actual cost is.

Good job Yasin!

Thanks for your reply Rob =]

You’ve given me some ideas that I’d like to try.

The first is, instead of using a square texture, I’ll use a 1-dimensional texture (laid out that way ---->).
The second is, just like you said, every vertex that is NEVER EVER gonna get blended should point to the same pixel, either the first or the last. This fits very well with the 1-dimensional texture idea.

So now I have two choices:
Calculate the vertices that never get blended and let them point to the gray pixel. The calculation will include all the SOURCE blendshapes and make a perfect UV set for the TARGET. So for the head, almost half the head would be gone =]

OR add another listbox where the user selects the vertices he wants to exclude (which automatically point to the first or last pixel), so for example the user would select half the head and add it to an exclude listbox or something!

This way building an atlas will be super simple since everything is 1-dimensional: you can just stack the rows up to create an atlas and index it by adjusting the Y coordinate of the UV in the shader =]
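
On the shader side the lookup stays trivial, something like this (sketch, names made up):

//Sketch: each blendshape is one row in the atlas. The per-vertex U comes from
//the generated UV set and V just selects the row; unblended vertices all share
//the last (gray) texel of their row, so they get no displacement.
texture   DeltaAtlas;
sampler2D AtlasSampler = sampler_state
{
    Texture   = <DeltaAtlas>;
    MinFilter = POINT;
    MagFilter = POINT;
    MipFilter = NONE;
};

float AtlasRows;   //number of stacked 1D textures
float ShapeIndex;  //which blendshape row to fetch from

float3 FetchDelta(float u)
{
    float v = (ShapeIndex + 0.5f) / AtlasRows;  //centre of the selected row
    return tex2Dlod(AtlasSampler, float4(u, v, 0, 0)).xyz;
}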

Also, the NVIDIA documentation suggests that I should use dynamic branching within the shader to break out early before even doing the VTFs, using a simple ViewDir dot Normal to determine whether the angle is obtuse or acute. I could also check whether the vertex is within the projection window [-1, 1] range and break out early there as well. But two dynamic branches, I don’t know if it’s worth it; I’ll need to make several techniques in the shader once I’m done with the 1D texture generation and do some profiling.
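
Something like this is what I have in mind (rough, untested sketch with made-up names; whether the two branches actually pay off is exactly what needs profiling):

float4x4 World;
float4x4 WorldViewProjection;
float3   CameraPosWorld;
float    BlendWeight;
sampler2D DeltaSampler;  //the point-sampled delta texture from before

struct VS_IN
{
    float4 Position : POSITION;
    float3 Normal   : NORMAL;
    float2 DeltaUV  : TEXCOORD1;
};

float4 VS_BlendEarlyOut(VS_IN IN) : POSITION
{
    float3 worldPos    = mul(IN.Position, World).xyz;
    float3 worldNormal = mul(IN.Normal, (float3x3)World);
    float3 viewDir     = normalize(CameraPosWorld - worldPos);
    float4 clipPos     = mul(IN.Position, WorldViewProjection);
    float3 delta       = 0;

    [branch]
    if (dot(viewDir, worldNormal) > 0)  //acute angle: facing the camera
    {
        [branch]
        if (all(abs(clipPos.xy) <= clipPos.w))  //inside the [-1,1] projection window
        {
            delta = tex2Dlod(DeltaSampler, float4(IN.DeltaUV, 0, 0)).xyz;
        }
    }

    //culled vertices simply stay unblended
    return mul(IN.Position + float4(delta * BlendWeight, 0), WorldViewProjection);
}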

Thanks again =]

Alright, finished the 1-dimensional texture + auto-stacking feature =]

Here is a very quick video of it in action =D

I had very bad luck today though: I calculated some stuff wrong and sent it off to the plugin without saving the scripts, and Maya crashed :P Lost everything, so I had to redo it twice, facepalm xD

Anyway I’m off to work on the optimization part =P gonna have a SINGLE 0.5 value in the texture, either at the beginning or the end ^^

Best regards

//Yaz

Big update =]

Added GUI support + optimization: it checks every vertex across all blendshapes and then decides whether it needs its own color or whether it should be PUT at the LAST PIXEL, which is gray in every row (no displacement), while keeping the auto-stacking feature =] So 2 clicks will handle the entire texture from now on!

The head with ~4k vertices:
Without optimization: ~8000x4 after 200% scaling
With optimization: ~3000x4 after 200% scaling

More than 60% of the data was shaved away ^^ Thanks for the idea Rob, the little push really made me think further =D

Now the files are very small, a few KBs, depending on the vertex count and how many of those vertices actually get blended, since the unblended ones are skipped ^^

Here is a video of it in action; it shows the textures for comparison and such as well.

Best regards!

Edit: Oh btw, I’d gladly take any advice on what would be optimal from you veterans in the industry =]

Hello all!

Summer is here and I finally have time again for this baby =P

I’ve been thinking about something that might work or it might totally break the whole thing.

It’s about encoding the tangent, normal, delta vector, UV coordinate offsets and weights into an A32B32G32R32 128-bit floating-point DDS texture at the preprocessing stage in Maya. This would require a C++ plugin that can output DDS textures. With that I can just do a single Vertex Texture Fetch, decode out all the vectors, and generate the final binormal with a simple cross product (see the sketch after the channel list).

R32 channel = Normals xyz
G32 channel = Tangents xyz
B32 channel = Deltas xyz
A32 channel = Weights scalar + UVs xy
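
The decode would look roughly like this (a sketch, assuming each channel packs three 8-bit components with the same dot/frac trick as in the R32F test further down in this post):

//Sketch: one vertex texture fetch from the 128-bit texture, unpack the three
//8-bit components stored in each float channel, then rebuild the binormal
//with a cross product instead of storing it
float3 UnpackVector(float packed)
{
    //inverse of packing with dot(v, {1, 1/256, 1/65536})
    return frac(packed * float3(1.0f, 256.0f, 65536.0f));
}

void DecodeBlendTexel(float4 texel,
                      out float3 normal,
                      out float3 tangent,
                      out float3 binormal,
                      out float3 delta)
{
    normal   = UnpackVector(texel.r) * 2.0f - 1.0f;  //[0,1] back to [-1,1]
    tangent  = UnpackVector(texel.g) * 2.0f - 1.0f;
    delta    = UnpackVector(texel.b) * 2.0f - 1.0f;  //would also need a range scale in practice
    binormal = cross(normal, tangent);
    //texel.a would hold the weight + UV offsets, decoded the same way
}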

I’ve just tried encoding a float4 into a float and back in FX Composer with the code far down below :P and it worked great! The testing was done with an R32F render target btw.

But there is something weird about it, and that’s why I’m posting this message. The weird thing is the alpha channel: a lot of information is lost when I decode it back. I guess this has to do with the floating-point precision at that low level (1 / (256*256*256)); a 32-bit float only has about 24 bits of mantissa, so the fourth 8-bit value ends up in bits that get rounded away.

Result:

Not that I need to pack an entire float4 into a float, it’s just that I like to use everything :P

But there is a funny situation that I found when I was testing around: when I add * 2 - 1 to the normal map in pass 0, I get perfect alpha in pass 2 when I decode it, but this time the colors are screwed (and no, I don’t mean more bluish, which is obvious since I scaled the normal map; what I meant is that it loses the normal map details) =P

WITH * 2 - 1 added to pass 0:

Anyway, back on topic! What I wanted to ask is: is there a way to pack A8R8G8B8 into an R32F without losing the alpha channel? As I said earlier I don’t need to pack all 4 elements, but it would be good :P

Here is the code that I threw together; it has the * 2 - 1 term in pass 0 which shouldn’t be there, but I left it in to show where I do that operation to get perfect alpha but screwed colors :P

uniform float4 ColorToFloat = {1.0f, 1.0f/256.0f, 1.0f/(256.0f*256.0f), 1.0f/(256.0f*256.0f*256.0f)};
uniform float4 FloatToColor = {1.0f, 256.0f, 256.0f*256.0f, 256.0f*256.0f*256.0f};

//DRAWS THE SCENE TO TexCol render target in Pass 0
float4 PS_DrawColor(OneTexelVertex IN) : COLOR0
{
float4 normal = tex2D(SampNormal, IN.UV);
return float4(normalize(normal.rgb)*2-1, normal.a);
}

//ENCODES THE SCENE TO TexFloat render target in Pass 1
float4 PS_ConvertFloat(OneTexelVertex IN) : COLOR
{
float4 vectorsRaw = tex2D(SampCol, IN.UV);

//Return encoded
return float4(dot(vectorsRaw, ColorToFloat), 0, 0, 0);
}

//DECODES THE TexFloat render target and outputs the color to the FRAMEBUFFER in pass 2
float4 PS_ConvertBack(OneTexelVertex IN) : COLOR
{
float vectorsEncoded = tex2D(SampFloat, IN.UV).r;
float4 vectorsDecoded = frac(vectorsEncoded*FloatToColor);
return float4(vectorsDecoded.xyz, vectorsDecoded.a);
}

After being busy with 3D art at school and portfolio work, I decided it was time to get back to more technical stuff and work on this babe again.

I just finished writing a C# console application that takes several .txt files at once, each containing numbers (one per line), reads them in and outputs an A32R32G32B32 DDS texture.

You just drag and drop the .txt file(s) onto the .exe, or pass the files as arguments, and it will create the textures for you. The .txt files can be created directly from MEL or MaxScript, for example. The reason I did it this way was to get something up and running fast; eventually I will edit my MLL plugin to handle it instead, but it’s really not needed. Besides, I love C#.

As I wrote earlier, I will use A32R32G32B32 textures to encode the tangent, normal, deltas and UVs + weights into the channels, decode them back by sampling a single pixel, and build the binormal with a cross product. This works because the components for the tangent, binormal, normal, deltas, weight and UVs will be 8 bits each (0-255), while I use the 32-bit floating-point channels to store entire 3D vectors by encoding them, just like I explained in the previous post.

Anyhow, I wanted to share the source code for the C# console app for anyone interested in outputting DDS textures in 128-bit format from C#. I also included three links in the source code to MSDN with the info needed to save out other formats etc. The Bin/x86/Debug folder contains 2 .txt files (Vectors1.txt and Vectors2.txt) that you can drag and drop right onto the exe to generate the textures, then open them up in Photoshop and check the values :) Make sure you have the NVIDIA DDS tools installed for Photoshop and that you open the textures as 32-bit.

DDSWriter for C#:
http://www.filefront.com/17232666/DDSWriter.rar

Peace out!