Tangent Space Confusion

I’m really trying to wrap my head around normal maps. I don’t have any background in math, so some of this has been quite a learning experience, but what I would like to be able to do is explain to someone how they work from start to finish.

This is what I think I understand so far.

Tangent space is a coordinate system that is relative to a surface. In the case of a 3D mesh it is a coordinate system that is relative to a single triangle.

With normal mapping we take points in world space and convert them into points in tangent space, which we then store in UV/texture space.

Here is where I get confused:

How exactly is the tangent basis calculated, and am I correct in assuming that the origin of this coordinate system is at the center of the triangle?

Also, what do the colors in the normal map actually correspond to? I know that if you have shading errors on your model the normal map will compensate for those errors, but how do the vertex normals of a mesh fit into how the tangent space is calculated, and why do they affect the colors in the normal map?

If it helps to understand where I am coming from, I am imagining all of this kind of like object space, but relative to a single triangle, so each pixel sits on the triangle like polygons sit on a flat grid. Then they are tilted relative to the high-res mesh you baked from. I don’t know if that’s the correct way of looking at this, and the problem is that I don’t see how the vertex normals of the low-poly mesh fit into that, or why they affect anything.

If someone could give their best shot at explaining it to me I would really appreciate it. It’s been a struggle to understand and I don’t have access to any 3D gurus.

I’m not wise when it comes to normal maps, but I’m pretty sure the lighting is calculated on a per-pixel basis across the polygon. If I understand these articles properly, the tangent/normals are calculated per pixel by using the RGB value in the normal map as a vector.

http://wiki.polycount.net/Normal_Map

The way I understand it is this:

The way light and shadow are calculated on a polygon is by having a light cast a ray at the face center and comparing that ray with the face’s normal. The larger the angle between them, the darker the polygon.

Normal maps are just a way to bake that data down to the texel level. So, it’s as if every texel on the mesh has its own face normal.
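The angle comparison described above is usually done with a dot product of unit vectors. A minimal sketch in Python (names and values here are illustrative, not engine code):

```python
# Lambert-style diffuse: brightness comes from the angle between the
# surface normal and the direction toward the light.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light):
    # cos(angle) between the normal and the light direction,
    # clamped to 0 so faces pointing away receive no light.
    return max(0.0, dot(normalize(normal), normalize(to_light)))

# Face pointing straight up, light directly overhead -> full brightness.
print(lambert((0, 0, 1), (0, 0, 1)))                              # 1.0
# Light at 60 degrees off the normal -> cos(60 deg) = 0.5.
print(round(lambert((0, 0, 1), (math.sqrt(3) / 2, 0, 0.5)), 3))   # 0.5
```

With a normal map, the same calculation simply runs per texel with a normal fetched from the texture instead of one normal per face.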

This article goes into better detail about it, and is a great read imo.
http://www.cgarena.com/freestuff/tutorials/maya/varga/index9.html

Tangent space and UV space can be seen as the same thing (UV space is essentially the xy-plane of tangent space in a z-up configuration). In most games, tangent space is computed based on the UVs applied to a face and the balanced normal of the face (typically, the average of the vertex normals for that face).

The reason that you will see your “shading” issues even when a normal map is applied is that seams in UVs or strange mesh normals affect the conversion of other vectors (eye, light, etc.) into tangent space. Inside a typical surface shader, the normal, tangent, and binormal (the three axes of tangent space) are computed per vertex and then interpolated across the pixels of a face for use by the pixel shader. The normal map already stores vectors in tangent space, so the eye and light vectors need to be converted to tangent space as well in order for the lighting calculations to be run with the normal map information. If there are issues with the normals on the mesh (from modeling) or the tangent/binormal (from UV mapping), this affects the conversion of these other vectors and they come out wonky. So, even if you are using a normal map, the end result of the surface shading is still disrupted by the mesh errors.
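The conversion step described above can be sketched like this (a simplified illustration, not shader code): moving a world-space vector into tangent space is just projecting it onto each of the three basis axes.

```python
# Moving a world-space vector (light, eye, etc.) into tangent space by
# projecting it onto the interpolated tangent, binormal, and normal.
# This is multiplication by the transpose of the TBN matrix.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, tangent, binormal, normal):
    return (dot(v, tangent), dot(v, binormal), dot(v, normal))

# With a clean, orthonormal basis the vector passes through intact...
print(to_tangent_space((0.3, 0.4, 0.5), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
# ...but a skewed basis (e.g. from a bad UV seam or averaged normals)
# distorts it, which is why mesh/UV problems still show up in the
# lighting even with a normal map applied.
print(to_tangent_space((0.3, 0.4, 0.5), (1, 0, 0), (0.5, 0.5, 0), (0, 0, 1)))
```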

I hope that gives you a basic idea of what is happening and how tangent space is typically used. There can be significant differences depending on your mesh or rendering pipeline and this explanation is somewhat simplified.

Great answer guys, learned something new today.

So, I am wondering after this –

On a skinned mesh the vertex normals are constantly updating, and must be fed through to the pixel shader for lighting calculations.

Why don’t we use world-space or object-space normals? Would they still need the extra binormals and tangents?

Since most of those calculations are being fed into the shader anyway, it seems like you could warp the world-space normals as usual, and use them to interpret the world-space normal map in its new orientation.

Is it a lot more calculations to throw at a graphics card?

I might be missing a big step somewhere, I haven’t really had my coffee yet. :zzz:

Animation does not change tangent space, unless it is doing something really weird and making your face non-planar (not possible with triangles). A normal for a face still points out from the face, and the UVs still sit on the face in the same way no matter which direction (in world space) the face is pointing. This, combined with the fact that normal maps need to be authored and applied in tangent space to make sense as a texture, makes tangent-space lighting the way to go.

I know I’m late to the game, and there are really good answers already, but I figured I’d throw in my sense of Normal Maps as well.

There is no meaningful “origin” in Tangent-space. It is a Linear Transformation, not an Affine Transformation (An Affine Transformation includes translation). What Tangent-space does have is a “Basis”. A Basis is a set of 3 (orthogonal) vectors that you can use to take a vector from one space to another. It’s kind of like a rotation, especially when no scale is involved. It lets us map a vector in one space to a vector in another. This is roughly the same math that gets your characters animated, or gets your objects into the proper place in the world, and then onto the screen!

So what is our Basis in Tangent-space? You may have seen “TBN” in some shaders. The Basis is defined by the Tangent, Binormal (really, Bitangent, but the misnomer has gone so far the distinction is pointless) and Normal of your mesh. So what defines the Tangent and Binormal? It’s actually quite arbitrary. There are an infinite number of orthogonal vectors on the plane defined by the Normal. We have to pick two, and be consistent. The UVs of a triangle give us a great definition, and for every face a Tangent can be defined. There is some fancy math to map these into the space the Normal is in, but the first image in this thread shows the result quite nicely. But that’s per face. Generally, much like Vertex Normals, we need smoothly varying Vertex Tangents across a mesh. So, just like how a vertex Normal is a weighted sum of the adjacent face normals, a vertex Tangent will be the weighted sum of the adjacent face tangents, often with a post-pass done to make sure everything is still orthogonal. “Tangent-space” is then the smoothly interpolated TBN between all the vertices, and actually varies per sample point!
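The “fancy math” for deriving a per-face tangent from the UVs can be sketched roughly like this. Conventions differ between engines and bakers, so treat this as one common formulation (the function names are my own), not the one true version:

```python
# Per-face tangent/bitangent from positions and UVs: solve for the
# directions in 3D that correspond to +U and +V on the texture.
def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def face_tangent(p0, p1, p2, uv0, uv1, uv2):
    e1, e2 = sub(p1, p0), sub(p2, p0)           # position edges
    du1, dv1 = uv1[0] - uv0[0], uv1[1] - uv0[1]  # UV edges
    du2, dv2 = uv2[0] - uv0[0], uv2[1] - uv0[1]
    r = 1.0 / (du1 * dv2 - du2 * dv1)            # assumes non-degenerate UVs
    tangent = scale(sub(scale(e1, dv2), scale(e2, dv1)), r)
    bitangent = scale(sub(scale(e2, du1), scale(e1, du2)), r)
    return tangent, bitangent

# A triangle whose UVs line up with the world X/Y axes should give a
# tangent along +X and a bitangent along +Y.
t, b = face_tangent((0, 0, 0), (1, 0, 0), (0, 1, 0),
                    (0, 0),    (1, 0),    (0, 1))
print(t, b)   # (1.0, 0.0, 0.0) (0.0, 1.0, 0.0)
```

Averaging these per-face results at shared vertices (and re-orthogonalizing against the vertex normal) gives the smoothly varying vertex tangents described above.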

That’s great, but how does that relate to a Normal Map? In a Transformation, you take each component of the source vector, multiply it against the corresponding vector in the Basis, and add those three vectors up to get your resultant vector. The Vertex Normal in Tangent-space is just the (0, 0, 1) vector transformed by the Basis. It is saying to use 100% of “N”: 0*Tangent + 0*Binormal + 1*Normal (0T 0B 1N in shorthand, if you will). If you think about a flat Normal Map being solid blue, this makes a lot of sense. But why 128,128,255 blue, not 0,0,255? Normal Maps are range-compressed, because Normals can go from -1 to 1 in each channel, and a texture can only go from 0 to 1. That’s why when you unpack a normal map you multiply by 2 then subtract 1. (Always in that order! Graphics cards do Multiply->Add really well, but less so Add->Multiply.) Red is the multiplier for Tangent, Green for Binormal, and Blue for Normal.
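The unpack and the basis multiply described above, as a small sketch (illustrative code, not shader source):

```python
# Range-decompression plus the TBN transform: a stored texel in [0,1]
# is unpacked with multiply-then-add, and the result is a weighted sum
# of the Tangent, Binormal, and Normal axes.
def unpack_normal(texel):
    # multiply by 2, then subtract 1, per channel
    return tuple(c * 2.0 - 1.0 for c in texel)

def apply_tbn(n, tangent, binormal, normal):
    # n.x * T + n.y * B + n.z * N, componentwise
    return tuple(n[0] * t + n[1] * b + n[2] * nn
                 for t, b, nn in zip(tangent, binormal, normal))

# A "flat" texel of (0.5, 0.5, 1.0) -- i.e. 128,128,255 blue -- unpacks
# to (0, 0, 1): 100% of the Normal axis, 0T 0B 1N.
flat = unpack_normal((0.5, 0.5, 1.0))
print(flat)                                          # (0.0, 0.0, 1.0)
print(apply_tbn(flat, (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # (0.0, 0.0, 1.0)
```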

So what if we want our normal to be bent “up” in a Normal Map? Really, the best we can do is say that we want to bend along the Binormal or Tangent, because “up” is constantly changing. If you wanted more Binormal, you would trade some Blue for Green and get something like 0T .7B .7N. Maybe it’s just easier to think of a Normal Map as how much you need to bend the Vertex Normal, but at heart, it is the vector that, when multiplied by our very arbitrary basis, will yield the correct vector in another, useful space, like World-space. This is why you actually get better results from Normal Map bakes if you can guarantee your sampling Tangents and your Display Tangents are computed identically.

Alright, so why such a mindbending and difficult space? Why not Object-space, can’t we do the same rotations but without this crazy varying basis? Yes, actually, but Tangent-space normals have some very interesting properties.

[ul]
[li]You can guarantee that the Z component will always be greater than 0, because it must point out from the face. This gives Tangent-space normals their characteristic blue color (because it will always be greater than 128 when range-compressed), and means that for compression, you can actually completely drop the Z Component, and recompute it using the Pythagorean Theorem.
[/li][li]Tangent-space Normal Maps are also far more reusable. Imagine having a single solid-mesh cube. A standard in-game crate. Every face would require different texture colors in Object-space, but a single Tangent-space image of one side could be transformed by the TBN for each face and re-used.
[/li][li]Tangent-space has interesting properties for blending as well. You can treat the Tangent-space Normals more as “offsets from default” instead of “this direction in space”, and get more pleasing blending. For example, if you had .7T 0B .7N blended with 0T .7B .7N, a vector lerp would give you .35T .35B .7N which would look washed out. Artistically, you really want something more like .66T .66B .66N, and there are several fast approximations to get that in Tangent-space, largely because you can make huge assumptions about the Z.
[/li][/ul]
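The Z-reconstruction trick from the first bullet is tiny in code. A sketch, assuming unit-length normals:

```python
# Because a tangent-space normal always points out of the surface,
# Z > 0, so only X and Y need to be stored; Z is recovered from the
# unit-length constraint x^2 + y^2 + z^2 = 1 (Pythagorean theorem).
import math

def reconstruct_z(x, y):
    # clamp against small negative values from rounding/compression
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

print(reconstruct_z(0.0, 0.0))                 # 1.0 -- the "flat" normal
print(round(reconstruct_z(0.6, 0.0), 3))       # 0.8
```

This is exactly what two-channel normal compression formats rely on: the Z channel is dropped at compression time and rebuilt in the shader.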

I’m sorry for rambling, and a bit of this is probably gibberish, but I hope small parts of it may be helpful. I hope that at the very least I didn’t make things more confusing.

Cheers!

Thanks for the great replies everyone. This forum rocks :slight_smile:

I also posted over at polycount and had a pretty good discussion there too. If anyone wants to take a look, feel free: http://boards.polycount.net/showthread.php?t=72018

Here’s a great link with some actual demo code related to this:

http://www.xnainfo.com/content.php?content=34#201005

Cool, thanks so much for posting it.