Global Illumination with light volumes

Hi guys,

My name’s Stefan Kamoda and I’m a London-based technical artist with a background in animation and rendering for games and television. I’ve been mostly lurking on these forums, so I thought I’d finally introduce myself and show off some stuff I’m pretty excited about.

In previous jobs I’ve written lots of MaxScripts to soothe pipeline issues and the like, but I’ve also taken an interest in global illumination for games. I’ve been working on a GPU-based lightmap renderer for a while and recently read a paper on volume lighting by the guys at Monolith. I was intrigued by the idea so decided to give it a try. I’ve attached a video link below and there’s more info on my blog at copypastepixel.blogspot.com if you’re keen.

//youtu.be/3hJjgkuRRIg

P.S. I’ll be looking for work as a technical artist / tools programmer soon, so any feedback on the blog, etc. would be greatly appreciated :)

Hi there,

Thought I’d post another light volume video, this one featuring an animated character.

//youtu.be/TSYJWuCtW-0

There’s a short write-up on my blog as well if anyone is interested.

Cheers,
Stef.

That looks really nice. Do you know how fast it runs and how far apart the probe sample points are?

Thanks,

I used two volumes for the Sponza scene, a 32Wx17Hx64L and an 8Wx8Hx32L. There’s an image showing the layout in this post: copypastepixel: Light Volumes – Sponza Attrium. They could actually be a little higher res as there is some light leaking in places.

Speed-wise it’s hard to give concrete figures as the code I’ve written isn’t optimised. The scenes I’ve posted here easily run at 60fps on an NVIDIA 8800 GTX. With 64 deferred lights whizzing around the scene it drops to about 30-40fps, so there’s still room to move.

I’m using the technique in this presentation, which gives some more real-world performance figures.

http://www.microsoft.com/downloads/details.aspx?familyid=d68a47f9-ae4d-464c-9b40-9550c0aea3ec&displaylang=en

Inspiring, good job! :slight_smile:

I like your blog, will definitely follow it!

This really opened my eyes to how much you can do with cool tricks.
So I’ve decided to give this a try myself and perhaps learn something from it ^^

So if I understood correctly, to get the model into light space I have to take the cube grid’s transform matrix and multiply it in the shader with the vertex to get it into light space, then rescale it to the 0-1 range and do the lookup with the transformed vertex position.

Though I’m a bit concerned about the rescaling. What I had in mind was having the pivot point at the bottom-left back corner (so the grid vertices are always on the positive axes), and then dividing the transformed vertex in the shader by the X, Y, Z scale values of the cube grid. Is there a better way to do it? :o

Also, how do you decide which volume textures to use when rendering? I guess you just do an AABB collision test with the mesh and then pick the correct volume, which then hooks the shader up with the correct texture during the rendering pipeline, am I right? :o In deferred rendering this problem doesn’t exist since you render all of the volumes into the light buffer anyway ;p But I was thinking of trying a forward method first, and then perhaps switching to deferred ^^

Good job again very inspiring for me :slight_smile:

So if I understood correctly, to get the model into light space I have to take the cube grid’s transform matrix and multiply it in the shader with the vertex to get it into light space, then rescale it to the 0-1 range and do the lookup with the transformed vertex position.

Though I’m a bit concerned about the rescaling. What I had in mind was having the pivot point at the bottom-left back corner (so the grid vertices are always on the positive axes), and then dividing the transformed vertex in the shader by the X, Y, Z scale values of the cube grid. Is there a better way to do it? :o

That’s correct, the point being shaded needs to be mapped to the 0-1 space of the volume texture. If the light volumes are perfectly aligned to the world then you can just reposition and scale the world coordinate and use that as the lookup. If the light volume is rotated then you need to map the world position with a matrix transform. I calculate the required transform matrix for each volume once and then send that to the shader to save on instructions.

Matrix volWorldTransform; // Volume world transform, with its pivot at the point representing (0,0,0) in the volume texture

// Scales world units down to the 0-1 UVW range of the volume texture
Matrix volUVWTransform =
1 / volWidth, 0, 0, 0,
0, 1 / volHeight, 0, 0,
0, 0, 1 / volLength, 0,
0, 0, 0, 1;

Matrix worldToVolumeTransform = inverse(volWorldTransform) * volUVWTransform;

worldToVolumeTransform is then sent to the shader and used to transform the world position into the volume’s UVW space.

float3 volumeUVW = mul(float4(input.worldPos, 1.0f), worldToVolumeTransform).xyz;

float3 color = tex3D(sampler_volCol, volumeUVW).rgb;

Also, how do you decide which volume textures to use when rendering? I guess you just do an AABB collision test with the mesh and then pick the correct volume, which then hooks the shader up with the correct texture during the rendering pipeline, am I right? :o In deferred rendering this problem doesn’t exist since you render all of the volumes into the light buffer anyway ;p But I was thinking of trying a forward method first, and then perhaps switching to deferred ^^

I’m not sure of the best way to go about this in a forward renderer, but it’s pretty easy with a deferred renderer. It’s just a case of using texkill on the result of volumeUVW and rejecting any coordinates that fall outside the volume.
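
Something along these lines in the pixel shader (just a sketch; clip() is the HLSL intrinsic that compiles down to texkill and discards the pixel if any component of its argument is negative):

float3 volumeUVW = mul(float4(input.worldPos, 1.0f), worldToVolumeTransform).xyz;

// Reject any pixel whose coordinates fall outside the 0-1 range of this volume
clip(volumeUVW);          // kills the pixel if any coordinate is below 0
clip(1.0f - volumeUVW);   // kills the pixel if any coordinate is above 1

float3 color = tex3D(sampler_volCol, volumeUVW).rgb;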

-Stef

Awesome, I can already visualize it in my head now =}

One final question: do you do what Monolith did with Fear 2, i.e. render cube maps of the scene and progressively downsample them to 1x1 for each face?

I wanted to try more of a raytrace approach, sending a few rays from each volume vertex in all directions, kind of in a spherical fashion. It doesn’t have to be that many rays since the resulting pixel is going to be 1x1 anyway. I’d then use a dot product to decide which face I’m going to shade, add up all the bounce colors from the diffuse surfaces, use the distance of the rays as a factor to weight how much color should be added (could use a linear, quadratic or even cubic falloff here), and at the end divide the result by the ray count to get my final pixel color =}
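
Roughly what I have in mind, as a very rough sketch (all the names are made up, written HLSL-style just to show the bookkeeping):

// Per probe sample point: rayDir[i], rayColor[i] and rayDist[i] are the traced ray
// directions, bounce colors and hit distances; faceNormal[f] are the six cube face axes
float3 faceColor[6];
int faceHits[6];
for (int f = 0; f < 6; f++) { faceColor[f] = float3(0, 0, 0); faceHits[f] = 0; }

for (int i = 0; i < rayCount; i++)
{
    // Dot product decides which face this ray contributes to
    int bestFace = 0;
    float bestDot = -1.0f;
    for (int f = 0; f < 6; f++)
    {
        float d = dot(rayDir[i], faceNormal[f]);
        if (d > bestDot) { bestDot = d; bestFace = f; }
    }

    // Weight the bounce color by a distance falloff (linear/quadratic/cubic)
    float falloff = 1.0f / (1.0f + rayDist[i] * rayDist[i]);
    faceColor[bestFace] += rayColor[i] * falloff;
    faceHits[bestFace] += 1;
}

// Divide by the number of rays that landed on each face to get the final face colors
for (int f = 0; f < 6; f++)
    faceColor[f] /= max(faceHits[f], 1);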

EDIT: Or I could use some sort of tone mapping and maintain the HDR data, hmm maybe maybe!
Also, don’t volume textures allow arbitrary geometric lights, if I use them as emissive objects? I could use the first raytrace hit as the direct light color and the rest of the x bounces as indirect :slight_smile:

What do you think of it? Do you see any problems with the method I might try? Since you’ve already got something up and running I think you can see where I’m headed and perhaps notice problems that I might run into :slight_smile:

//Lonewolf

One final question: do you do what Monolith did with Fear 2, i.e. render cube maps of the scene and progressively downsample them to 1x1 for each face?

I went with a raytracing approach because it was an easy extension to the lightmap renderer I’d written. I use a fair few samples as well: at least 512 per sample point, but usually higher. That said, I think the cube map method would be simpler, so I’d probably start there.

EDIT: Also, you shouldn’t need to weight your ray samples by distance when sampling a lighting environment; if the rays are spread uniformly, the fraction of them that hit a distant surface already falls off with distance, so the falloff comes for free. The OMPF guys can explain it better than me :slight_smile: http://ompf.org/forum/viewtopic.php?f=17&t=1865&p=20757&hilit=ray+distance#p20757

Or I could use some sort of tone mapping and maintain the HDR data, hmm maybe maybe!

I don’t do any tone mapping during the volume generation phase; I leave that to the post-process stage. That’s not to say you couldn’t though, it all depends on how you’re using the light data at the other end of the pipeline.

Also, don’t volume textures allow arbitrary geometric lights, if I use them as emissive objects? I could use the first raytrace hit as the direct light color and the rest of the x bounces as indirect :slight_smile:

Yep, emissive objects are really easy to implement :slight_smile: You shouldn’t need to treat them any differently from any other object though; just raytrace or render the emissive object and add its contribution along with your regular geometry.
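
In the raytracer it really is just a case of adding the emissive term at the hit point, something like this (a rough sketch only, made-up names):

// When a ray hits a surface, its emissive color is simply added on top of whatever
// light the surface reflects - no special casing needed for "light" geometry
float3 ShadeHit(Material mat, float3 incomingLight)
{
    float3 reflected = mat.diffuse * incomingLight;
    return reflected + mat.emissive;   // emissive surfaces light the probes directly
}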

Let us know how you get along with it.

Thanks again for the answer :slight_smile:

I’ve already got a simple raytracing framework for XNA anyway, so I think I’ll stick with that :slight_smile:

I’ve got no experience with lighting models using raytracing, but from the thread you linked, if I understood the statements correctly, you don’t include a 1/d² term for quadratic falloff because the number of rays that hit the model is already a lot lower when the distance is greater, due to perspective. Since perspective is basically done like x/z and y/z, the 1/d² is included implicitly. The 1/d² is needed, though, when we just use the dot product, because that only compares angles and doesn’t take the distance in the 3D world into account, so effectively all rays hit the model.

And that’s also the reason why shaders use that 1/d² term to create a falloff effect. Am I correct?

Though I didn’t understand what all the formulas were for in that thread, if I’m gonna be honest :stuck_out_tongue:

What a cool discussion going on here between LoneWolf and copypastepixel… amazing… well, I agree with LoneWolf, the blog is great.