Height -> normal pixel shading

I'm looking for some advice on this topic. I've been venturing a little into procedural material work (I'm using UE3), and the main roadblock I've run into is generating the normal from the procedurally generated height image. I've read a few things on Sobel slope detection, but I'm not yet grasping how adjacent texel sampling works and how it can be done in the material editor. I think a plain-terms pseudocode explanation is what I need to help me understand it.

I also wonder how practical it is to do this sort of conversion in real time. There are bound to be different techniques of varying complexity and quality for accomplishing this sort of operation. Any insight is appreciated.

Here’s the code that we use in ShaderFX for converting height map samples to a normal:

float center = tex2D(Sampler, texcoords).r;                     // center height sample
float U      = tex2D(Sampler, texcoords + float2(0.005, 0)).r;  // height sample offset in U
float V      = tex2D(Sampler, texcoords + float2(0, 0.005)).r;  // height sample offset in V
float dHdU = U - center;                    // height change in the U direction
float dHdV = V - center;                    // height change in the V direction
float3 normal = float3(-dHdU, dHdV, 0.05);  // tangent-space normal (normalize afterwards)

The basic idea here is that you sample the height map three times. The first time is with the regular texture coordinates. The second time is with a small amount added to the U coordinate. The third time is with a small amount added to the V coordinate. dHdU is the change in height from the center out to the second sample, and dHdV is the change out to the third sample. These two values define the slope of the surface, and that’s what you want for a normal. So to make a normal you just use dHdU and dHdV as your X and Y components and 0.05 as the Z component. This gives you a normal in tangent space. You have to normalize it at the end; I didn’t include that line in the code I pasted above. You can adjust the Z component (0.05) to make the bumps deeper or shallower.
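
For completeness, here is a minimal sketch of the missing normalize step, with the 0.05 literal pulled out into a strength parameter (the bumpStrength name is my own, not from ShaderFX):

float bumpStrength = 0.05;  // larger = flatter bumps, smaller = deeper bumps
float3 n = normalize( float3( -dHdU, dHdV, bumpStrength ) );  // normalized tangent-space normal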

Seems REALLY expensive. Do you know of any examples where people might want to do this?

If you just want to use a black and white texture map and put it in a bump map slot, this is definitely not worth it. You’re much better off converting it to a normal map in an offline process beforehand instead of doing the conversion every frame.

What Cliffy is talking about is a scenario where the height map is being procedurally generated, and is therefore changing every frame. Water ripples are a good example.

Thanks for the help, I got it working. The quality is quite good, and it’s only 9 instructions, so it is very practical. The drawback, however, is that this method only works well when you’re sampling an actual texture. In the case of procedurals, where you’re dealing with pure math, it is not practical, since you’re potentially exponentially increasing the complexity of the material by duplicating functions with offsets applied to get the end result. So it looks like I’m going to have to approximate the normal somehow without taking samples. (Rendering to a texture is not an option in this case.)

Yep - it does get really expensive when you have to run the same procedural function three times to get the slope. I guess you’d have to change around your procedural stuff so it gives you a vector instead of a float. When you find a good way to do it, let me know. I’d really like to hear about it.
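
To make that cost concrete, here is a minimal sketch of the brute-force version: running a procedural height function three times with offsets, exactly like the texture-based snippet above. The proceduralHeight function is a made-up stand-in for whatever math would otherwise feed the bump slot.

// Hypothetical stand-in for the actual procedural math - a simple concentric ripple
float proceduralHeight( float2 uv )
{
	return 0.5 + 0.5 * sin( length( uv - 0.5 ) * 60.0 );
}

float3 proceduralNormal( float2 uv, float2 offset, float bumpStrength )
{
	// The whole procedural function is evaluated three times -
	// this is where the expense comes from.
	float center = proceduralHeight( uv );
	float U      = proceduralHeight( uv + float2( offset.x, 0 ) );
	float V      = proceduralHeight( uv + float2( 0, offset.y ) );

	float dHdU = U - center;
	float dHdV = V - center;

	return normalize( float3( -dHdU, dHdV, bumpStrength ) );
}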

I just did something like this - trying to make deforming surface waves in an ocean shader. This worked out pretty well for a basic wave, but it assumes there’s only vertical, upward displacement. It’s all happening in the vertex shader, so even though it’s inefficient, at least it’s not per-pixel…

I wouldn’t use this in a complicated environment, but for illustrative purposes:


sampler2D targetSampler;  // render target holding the displaced heightfield

float getDisplacement( float4 UV )
{
	float4 data = tex2Dlod(targetSampler, float4( UV.xy, 0, 0));
	float disp = data.g;  // displacement lives in the green channel

	return disp;
}

#define numPixels 128
#define uvStep (1.0f / numPixels)

float4 getNormal( float4 UV )
{
	// Sample the eight neighbouring texels for the Sobel filter
	float tl = getDisplacement( UV + uvStep * float4( 1, -1, 0, 0 ) );
	float  l = getDisplacement( UV + uvStep * float4( 1,  0, 0, 0 ) );
	float bl = getDisplacement( UV + uvStep * float4( 1,  1, 0, 0 ) );

	float  t = getDisplacement( UV + uvStep * float4( 0, -1, 0, 0 ) );
	float  b = getDisplacement( UV + uvStep * float4( 0,  1, 0, 0 ) );

	float tr = getDisplacement( UV + uvStep * float4( -1, -1, 0, 0 ) );
	float  r = getDisplacement( UV + uvStep * float4( -1,  0, 0, 0 ) );
	float br = getDisplacement( UV + uvStep * float4( -1,  1, 0, 0 ) );

	// Sobel kernel: horizontal and vertical slope of the heightfield
	float dx = tr + 2*r + br - tl - 2*l - bl;
	float dy = bl + 2*b + br - tl - 2*t - tr;

	// Y is up here; normalStr scales how pronounced the bumps look
	float normalStr = 0.5;
	float4 N = float4( normalize( float3( dx, 1.0 / normalStr, dy ) ), 1.0 );

	return N; // * 0.5 + 0.5 if you need to pack it into a color
}

Disclaimer - I did grab the technique from a paper on the 'net about calculating normals from heightmaps. I can’t find it, and I’ve forgotten the link, and I feel awful about it. So – I can’t take credit for the algorithm. If I find it I’ll edit this post with the info.
But, I did get it working in FXComposer and in the Max viewport.

>> Edit >>

Ah, I read about it in Catalin Zima’s Height -> Normals article, located here:
http://catalinzima.spaces.live.com/blog/cns!3D9ECAE1F2DC56EF!223.entry

It’s called the Sobel filter.
<< End Edit >>

~

There are three techniques I commonly use:

  1. partial derivatives (ddx/ddy, either manually or with the functions) to find the gradient slope, then construct the normal from there. It obviously works with pure math generation or other methods; the limitation is that it’s screen space, so there’s no fixed texel-neighbour sampling here.

  2. for purely mathematical input, you have to be very smart about constructing your functions using packed vector and matrix math methods. This means you can condense most functions to an acceptable expense by carrying 3 packed offsets from the start, giving you a gradient slope throughout the function.

  3. for reconstructing normals based off dynamic heightfields (generally texture based), one trick I’ve thought up recently is to simply pack the neighbouring texels into the RGB(A) channels. Such a simple concept, but surprisingly I’ve never seen it used before. A good example is generating water droplets or dynamically altered terrain: you simply encode the neighbouring texels into the RGB (RGBA if you want a 4-sample lookup instead of 3), and you’ve instantly got all the data you need for the normal gradient in one single texture lookup (a rough sketch of this idea follows below).
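
To illustrate point 3, here is a rough sketch of the packing idea. None of this is from an actual implementation: generateHeight stands in for whatever dynamic update produces the heightfield, and the resolution and channel layout are arbitrary assumptions.

// Stand-in for the dynamic heightfield update (ripples, terrain edits, etc.)
float generateHeight( float2 uv )
{
	return 0.5 + 0.5 * sin( uv.x * 40.0 ) * sin( uv.y * 40.0 );
}

// Pass 1: when writing the heightfield, store the right and down neighbours
// in the spare channels as well.
float4 PackedHeightPS( float2 uv : TEXCOORD0 ) : COLOR
{
	float texel  = 1.0 / 256.0;            // assumed heightfield resolution
	float h      = generateHeight( uv );
	float hRight = generateHeight( uv + float2( texel, 0 ) );
	float hDown  = generateHeight( uv + float2( 0, texel ) );
	return float4( h, hRight, hDown, 1 );  // R = centre, G = right, B = down
}

// Pass 2: the normal needs only a single lookup into the packed texture.
float3 NormalFromPacked( sampler2D packedHeight, float2 uv, float bumpStrength )
{
	float3 h = tex2D( packedHeight, uv ).rgb;
	float dHdU = h.g - h.r;
	float dHdV = h.b - h.r;
	return normalize( float3( -dHdU, dHdV, bumpStrength ) );
}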

This is only marginally related; bcloward made me think of it when he mentioned the water ripples example. It probably won’t directly help you achieve what you’re looking for, but it might just get you thinking about another path to resolve your problem, or the documents I reference may provide some valuable insight… At worst it’s out there for reference.

Several years ago, I wrote a C# application that created tileable, animated normal maps simulating waves based on a series of user inputs; things like maximum wave height, wind direction, wind speed, choppiness, etc… It used multi-band Fourier-domain ocean wave synthesis with a Phillips spectrum to get some pretty realistic results. (All the water in AoE3 was generated this way.)

We didn’t need real-time results except for boat wakes, since we just pre-generated the animated texture and played it back… However, if you utilize the GPU/DirectX for your calculations, you can get pretty darn close to real-time results depending on your texture resolution. The ATi paper linked below has specific examples of real-time waves mixed with real-time boat/duck wakes, ripples, etc.

There’s a decent discussion and some code samples in this book, as well as numerous SIGGRAPH (still the best conference!) papers on the subject. This is also a good paper from ATi back in '05.

I’m returning to this again because I’m looking for a way to do the conversion without extra texture samples. It seems I overlooked j.i. styles’ reply:

[QUOTE=j.i. styles;2440]There are three techniques I commonly use:

  1. partial derivatives (ddx/ddy, either manually or with the functions) to find the gradient slope, then construct the normal from there. It obviously works with pure math generation or other methods; the limitation is that it’s screen space, so there’s no fixed texel-neighbour sampling here.
    [/QUOTE]

I looked up information on ddx/ddy, but could never find enough info to get an understanding of how they could be used to generate a normal from a heightmap without taking extra samples of the heightmap.

Could you provide an example of your commonly used technique with them please?

Hi guys, sorry to dig up this thread :)
I had this problem a few times before, so I decided to spend a few hours getting a better understanding of all this.

I like the ddx() solution a LOT. It needs no tangent space, but you need the vertex normals, the light, and the eye in view space (which can be done on the vertex side). It also requires only one sample of a grayscale texture, making it potentially cheaper than every normal map solution I’ve seen before. Unfortunately there are a few problems: since it’s in screen space, things start to look ugly at grazing angles, and it’s also dependent on the distance AND the pixel density. I guess that last problem could be solved by storing a bump value in the alpha of the mipmaps or something like that. Tricky, but still, I like it :)
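
For anyone reading along, here is one way to flesh out that ddx()/ddy() idea. This formulation is just my own sketch (the function and variable names are made up), not code from anyone in the thread; everything is in view space, and only one grayscale height sample is needed.

// Perturb the interpolated view-space normal using screen-space derivatives
// of the view-space position and of a single height sample.
float3 PerturbNormal( float3 viewPos, float3 viewNormal, float height, float bumpStrength )
{
	float3 N = normalize( viewNormal );

	// Screen-space derivatives: how position and height change per pixel
	float3 dPdx = ddx( viewPos );
	float3 dPdy = ddy( viewPos );
	float  dHdx = ddx( height );
	float  dHdy = ddy( height );

	// Build the height gradient along the surface from the two screen-space tangents
	float3 R1  = cross( dPdy, N );
	float3 R2  = cross( N, dPdx );
	float  det = dot( dPdx, R1 );
	float3 surfGrad = sign( det ) * ( dHdx * R1 + dHdy * R2 );

	// Tilt the normal against the gradient
	return normalize( abs( det ) * N - bumpStrength * surfGrad );
}

// In the pixel shader:
//   float  h = tex2D( heightSampler, uv ).r;   // the single grayscale sample
//   float3 n = PerturbNormal( viewPos, viewNormal, h, 0.05 );
//   ...then light in view space with the light/eye vectors set up in the vertex shader.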

The solution from Ben works nicely and is pretty cheap. There are a few artifacts from the lack of precision; 16 bit would probably solve that. I like j.i. styles’ trick of packing neighbouring pixels into RGBA, but I can’t find any case where that might be useful?

Sobel looks very nice too, a bit more blurry but with a lot fewer artifacts. Expensive though :)

Thanks everyone for your inputs on this :)

Yes, I’ve used the ddx/ddy stuff for on-the-fly bump->normal conversion, but it can be pretty tough to use in a real situation, due to problems with:

- Blockiness: all pixels in a screen quad share the same ddx/ddy value, so it’s like doing the conversion at half-rez. This can really break the effect, especially on consoles.
- Distance: the effect gets stronger in the distance. I tried things like dividing by Z but never found a solution that kept the look constant.
- Grazing angles: I just never made it look good; it was too noisy or too soft in all my tests.

I also never figured out how to get it to output a normal relative to the surface normal in world space, but then my math there is a bit more sucky. But it is way cheaper than any other method!

I’m not sure of the cost of ddx/ddy instructions, but they can’t be too bad as the card naturally works on pixel quads at once, so all the values are in memory and it should just be a simple op. I never PIX’d it to find out though.

Hi cody :)
I’m glad you tested that distance problem too, I’m not the only one to fail :P It’s also related to the pixel density, because it seems to appear when scaling down an object and thus visually increasing the pixel ratio in screen space. Filtering/fading the mipmaps works to some extent (but the nvidia filter is broken for that feature, and it’s kind of a pain to do it by hand), but knowing the current mipmap would help in building a custom weight for the normal. The only way I have in mind is storing the mip level in its alpha.

You can retrieve the normal in world space by simply multiplying it by the inverse of the view matrix (the view matrix brings world space into view space). But usually you don’t want to do that kind of expensive calculation in the pixel shader if you’re looking at this odd method in the first place :) It’s cheaper to do everything in view space: transform the light and the vertex normal into view space in the vertex shader and voila. If you use a directional light you don’t even need to transform the light per vertex; the CPU can preprocess it.
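
For reference, a minimal sketch of that vertex-side setup; the constant names (WorldView, WorldViewIT, Projection, ViewSpaceLightDir) are assumptions about what the application would supply.

float4x4 WorldView;          // object -> view space
float4x4 WorldViewIT;        // inverse transpose of WorldView, for normals
float4x4 Projection;
float3   ViewSpaceLightDir;  // directional light, pre-transformed on the CPU

struct VS_OUT
{
	float4 pos        : POSITION;
	float2 uv         : TEXCOORD0;
	float3 viewPos    : TEXCOORD1;
	float3 viewNormal : TEXCOORD2;
};

VS_OUT BumpVS( float3 position : POSITION, float3 normal : NORMAL, float2 uv : TEXCOORD0 )
{
	VS_OUT o;
	o.viewPos    = mul( float4( position, 1 ), WorldView ).xyz;  // vertex into view space
	o.viewNormal = mul( normal, (float3x3)WorldViewIT );         // normal into view space
	o.pos        = mul( float4( o.viewPos, 1 ), Projection );
	o.uv         = uv;
	return o;
}
// The pixel shader can then light with viewNormal and ViewSpaceLightDir directly,
// with no per-pixel matrix multiplies.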

I guess working around those limitations is more trouble than using a regular normal map or a more expensive heightmap->normal solution. It might be useful in specific cases though, like water, or maybe a terrain on an iPhone 3GS :)