I'm working on a deferred rendering engine and am trying to figure out screen-space ambient occlusion. Right now all the examples I have found seem to work in world space.
Basically I am curious whether there are any implementations that work in view space, or that take view-space normals into account. Ideally I would like to be able to use normal maps, hence the view-space preference.
I worked on an SSAO shader some time ago in an indie engine. It used world-space normals and depth in a deferred renderer, so the normal and depth textures came for free, more or less.
That said, I don't see why you would need view-space normals to support normal mapping. Is there a different reason you need them? Normal mapping works in any space, as long as the light direction is in the same space so the math (the dot product) comes out correctly.
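To illustrate that point with a minimal, throwaway Python sketch (the vectors and the rotation here are made up, just a stand-in for a world-to-view transform): N·L is unchanged as long as both vectors are expressed in the same space, because a rigid transform preserves dot products.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def rotate_z(v, angle):
    """Rotate a vector about the Z axis (stand-in for a world->view rotation)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1], v[2])

normal = (0.0, 0.0, 1.0)   # world-space unit normal
light  = (0.0, 0.6, 0.8)   # world-space unit light direction

world_ndotl = dot(normal, light)
# transform BOTH vectors into "view space" with the same rotation
view_ndotl = dot(rotate_z(normal, 1.2), rotate_z(light, 1.2))
assert abs(world_ndotl - view_ndotl) < 1e-9
```

The lighting only breaks when the normal and the light direction end up in different spaces.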
If I remember right, Killzone 2 used view-space normals, and because in view space Z points forward (sometimes backwards, depending on the tech), they could reconstruct the blue channel with sqrt(1.0f - Normal.x * Normal.x - Normal.y * Normal.y), which lets you use an R16G16 texture, giving you more precision in your normals render target.
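A quick CPU-side sketch of that reconstruction (the function name is mine, not from any engine): store only x and y, then recover z from the unit-length constraint, assuming view-space normals that face the camera so the sign of z is known.

```python
import math

def decode_normal(x, y):
    """Reconstruct z from a two-channel (e.g. R16G16) stored normal,
    assuming a unit-length, camera-facing normal (z >= 0)."""
    # the max() guards against tiny negative values from quantization error
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

n = decode_normal(0.6, 0.0)   # recovers z ~= 0.8
```

The catch is that the sign of z is lost, which is exactly why this trick relies on view-space (camera-facing) normals.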
The main reason for view space is how I have lighting set up. Switching to world space would require somewhat of a rewrite of the lighting. View space isn't required for normal mapping, I know; it's just how I have things set up at the moment. Quite a bit of this is learning as I go. The engine is my final project for my bachelor's degree.
I ended up going a different route that just uses the depth value for the computations, which I found in a thread on gamedev.net. I didn't take illumination into account, which I am going to consider down the road.
Here's how the AO is looking at the moment, with just the AO term applied. I still need to blur and use a half-resolution render texture before I am done with it.
There's a good technique presented in "Rendering Techniques in Toy Story 3". It also only requires a depth buffer, and it performs quite well. I implemented it recently and was very happy with the results.
There are more examples over on my blog if you’re interested.
BTW, the volumetric-obscurance method used by the TS3 game (and described in the Peter-Pike Sloan paper) is extremely forgiving about the filter kernels – that is, if you don't actually go and calculate the Voronoi-based slices of the hemisphere, but just make up numbers that are reasonably close, you will still get a decent result. Playing with the weights can actually be a handle to give the SSAO a softer/harder "shaping" (essentially, you're tweaking the sampling shape to be something other than a perfect hemisphere), and as long as you're still working with symmetric sample pairs, you lose no efficiency.
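As a rough CPU-side toy of the idea (every offset and weight below is a made-up number in the spirit of the post, not the Voronoi-derived kernel; a real version runs in a shader over the depth buffer): each symmetric pair of taps estimates how much of a vertical line segment through the sample sphere is occupied, and the weights shape the kernel.

```python
def volumetric_ao(depth, x, y, radius, pairs, weights):
    """Toy line-sampled volumetric obscurance around pixel (x, y).
    Returns average occupancy in [0, 1]; ~0.5 on a flat surface."""
    center_z = depth[y][x]
    occupied = total = 0.0
    for (dx, dy), w in zip(pairs, weights):
        for sx, sy in ((x + dx, y + dy), (x - dx, y - dy)):  # symmetric pair
            t = (center_z - depth[sy][sx]) / radius
            # fraction of the line segment lying behind the sampled depth
            occupied += w * max(0.0, min(1.0, 0.5 + 0.5 * t))
            total += w
    return occupied / total

# made-up kernel: two symmetric pairs, eyeballed weights
pairs, weights = [(1, 0), (0, 1)], [1.0, 0.8]
flat = [[1.0] * 8 for _ in range(8)]
ao_flat = volumetric_ao(flat, 4, 4, 0.5, pairs, weights)

bump = [row[:] for row in flat]
bump[4][5] = 0.8                      # a nearer surface to the right
ao_bump = volumetric_ao(bump, 4, 4, 0.5, pairs, weights)   # > ao_flat
```

Tweaking `weights` away from uniform is the "shaping" handle described above, and because the taps come in ± pairs, changing the weights costs nothing extra per sample.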
[quote]There's a good technique presented in "Rendering Techniques in Toy Story 3". It also only requires a depth buffer, and it performs quite well. I implemented it recently and was very happy with the results.
There are more examples over on my blog if you're interested.[/quote]
Thanks. Honestly, I had seen that article, but for some reason skipped over it, assuming it was about the movie. There's very useful information in that presentation, including techniques I can use for shadows.
bjorke: good to know about the Voronoi-based slices.
For the moment I am going to stick with the basic version I have. The engine is my final project for my bachelor's degree, and I want to get the degree completed to help me find employment. I'm positive I will implement the Toy Story 3 technique in the future, once my degree is done.