Ben has a great article there that I often refer to when pondering normal map issues.
However, I’m having trouble right now trying to explain what’s happening with one of the models I am trying to bake.
One of the “Golden Rules” I have learned about normal maps over the years is that you should always make edges that define UV borders hard. This, at least as I understand it, is because there is already going to be a break in continuity there, since the pixels on either side reside on different UV shells.
I’m having a really hard time visualizing all of this. What exactly does a UV seam do to a normal map bake, and what exactly do hard edges do?
When a normal map is baked, does it really project rays out perpendicular to every pixel like in the images Ben provided there, or does it work differently from that? How is the tangent basis taken into account in all of this? Does it affect the angle of the rays?
Thanks in advance for any replies. This made my head spin a bit, and I would really like to be able to explain it on our internal wiki.
In most engines a UV split causes two different tangent-space matrices to lie on the same point.
So two normals pointing in the same world-space direction can still end up with two different color values, due to the different tangent bases.
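Here’s a minimal sketch of that effect in plain numpy (just an illustration of the math, not any particular engine’s tangent-space convention): the same world-space normal is encoded against two different tangent frames, as if the point sat on the border between two UV shells that were unwrapped with different orientations, and the two resulting texel colors disagree.

```python
import numpy as np

def encode_tangent_normal(world_normal, tangent, bitangent, normal):
    """Transform a world-space normal into the given tangent frame and
    map it from [-1, 1] into the [0, 1] color range of a normal map."""
    tbn = np.array([tangent, bitangent, normal])  # rows = the frame's axes
    n_ts = tbn @ world_normal                     # world -> tangent space
    return n_ts * 0.5 + 0.5                       # [-1, 1] -> color

# One world-space normal, shared by both sides of the UV border.
world_normal = np.array([0.3, 0.0, 1.0])
world_normal /= np.linalg.norm(world_normal)

# Shell A: the tangent follows +X in world space.
color_a = encode_tangent_normal(world_normal,
                                tangent=np.array([1.0, 0.0, 0.0]),
                                bitangent=np.array([0.0, 1.0, 0.0]),
                                normal=np.array([0.0, 0.0, 1.0]))

# Shell B: unwrapped rotated 90 degrees, so the tangent follows +Y.
color_b = encode_tangent_normal(world_normal,
                                tangent=np.array([0.0, 1.0, 0.0]),
                                bitangent=np.array([-1.0, 0.0, 0.0]),
                                normal=np.array([0.0, 0.0, 1.0]))

print(color_a)  # ~[0.64, 0.50, 0.98]
print(color_b)  # ~[0.50, 0.36, 0.98] -- same normal, different color
```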
A smart baking process and in-game shader can take this into consideration and almost completely erase the seam. However, most games don’t, partly because the tangent space used during baking is different from the one used in-game.
The way the rays are shot again depends on your baking tool. 3ds Max, for example, can use cages, which give you a lot of control over the ray direction. And as you can see in your second pic, the rays aren’t perpendicular when you have a smooth edge.
The tangent basis shouldn’t affect the direction of the rays (the vertex normals usually do that), but it will affect the final color in your normal map.
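To make the ray-direction part concrete, here’s a tiny sketch (plain numpy with made-up geometry, not any baker’s actual API): the ray at each sample point follows the interpolated vertex normals, which is why the rays fan out instead of staying perpendicular across a smoothed edge.

```python
import numpy as np

def ray_for_sample(p0, p1, n0, n1, t):
    """Ray origin and direction at parameter t along an edge whose two
    endpoints carry the (smoothed) vertex normals n0 and n1."""
    origin = (1 - t) * p0 + t * p1
    direction = (1 - t) * n0 + t * n1            # interpolate vertex normals
    return origin, direction / np.linalg.norm(direction)

p0, p1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
n0 = np.array([0.0, 0.0, 1.0])                   # normal at one endpoint
n1 = np.array([0.0, 0.7071, 0.7071])             # averaged normal at a smooth edge

for t in (0.0, 0.5, 1.0):
    origin, direction = ray_for_sample(p0, p1, n0, n1, t)
    print(t, direction)                          # the directions fan out
```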
Oftentimes there’s no way around becoming very familiar with your engine and baking tools if you want to achieve the best possible results. Many engines don’t handle normal maps and baking exactly the way tutorials describe.
Thanks a bunch for the reply. So is there a ray cast per pixel? If I have a 256x256 image, does that mean I’m going to cast a total of 65,536 rays (one per pixel)?
I’m just trying my best to visualize the process; it’s the only way I can manage to wrap my head around this stuff.
Technically, only pixels that cover some area on the unwrapped triangles shoot a ray. Also, you’ll probably use supersampling/antialiasing, so you’ll end up shooting several rays per pixel.
Then wherever a ray hits your highpoly mesh, the baker calculates the normal at that point relative to the tangent space of that pixel on the texture and writes it to the texture as a color value.
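Putting the whole thread together, here’s a rough sketch of that loop for a single texel (the ray cast and the per-texel inputs are stand-ins here, not any real baker’s API): several jittered rays per covered texel, each hit normal transformed into that texel’s tangent frame and averaged into the final color.

```python
import numpy as np

rng = np.random.default_rng(0)
SAMPLES = 4                                        # rays per covered texel

def trace_highpoly(origin, direction):
    """Stand-in for the real ray cast against the highpoly mesh: returns
    the world-space normal at the hit point (here just a dummy value)."""
    n = np.array([0.1, 0.0, 1.0])
    return n / np.linalg.norm(n)

def bake_texel(origin, direction, tbn):
    """Average several jittered samples for one texel, each converted
    from world space into this texel's tangent frame."""
    colors = []
    for _ in range(SAMPLES):
        # jitter the origin to stand in for sampling sub-texel positions
        jitter = rng.normal(scale=1e-3, size=3)
        hit_normal = trace_highpoly(origin + jitter, direction)
        n_ts = tbn @ hit_normal                    # world -> tangent space
        colors.append(n_ts * 0.5 + 0.5)            # [-1, 1] -> color
    return np.mean(colors, axis=0)

# The per-texel inputs a rasterizer would hand over: interpolated position,
# interpolated vertex normal (= the ray direction), and the tangent frame.
origin    = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])
tbn       = np.eye(3)                              # rows: tangent, bitangent, normal

print(bake_texel(origin, direction, tbn))          # one texel's final color
```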