Let's get visual - recent R&D

I’m a fan of the idea behind tech-artists.org, but I’d love to see more people sharing what they can, when they can. I realise that when it comes to this I’m a great big fat hypocrite :laugh: so in a bid to curb that, here’s my thread to dump some of the stuff I’ve been doing in my own time that I can actually show.

sooooo first up…

Cubels.

  • Trying to tackle volume rendering. I call it Cubels! kinda like cubes crossed with pixels/texels/voxels… I thought it sounded ‘hip’.
  • Concept is based on cubic neighbours used in a voxel-style relationship… each point is cubic in form, but “knows” about its direct neighbour in a diagonal direction.
  • Computations are performed on different levels, going from point > cube > vert > pixel. Since they’re cubic, once you get to the surface level, you can step through the local volume as virtual surfaces within the shader, and easily LOD the number of steps.
  • Shading SHOULD be done on a per-point or per-vertex level, but here I’ve hacked a test together which uses an RGB texel offset in the volume map which, in world-position space, is used to reconstruct a 3D gradient normal (see the sketch after this list). Nicer, higher-frequency detail, but a bit slower.
  • Next I need to step through the cubels from screen projection into world space… this gives me the opportunity for an early out: if any cubels closer to the camera return solid, they occlude the ones behind. I can step through in this direction and sum the results at the point level… really required to reduce overdraw when viewing a big solid block of them.
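To make that gradient-normal hack a bit more concrete, here’s a minimal HLSL sketch of the same idea done with plain central differences on the density channel. It’s not my exact RGB texel-offset encoding, and the names (VolumeMap, TexelSize) are just placeholders:

[code]
// Minimal sketch: reconstruct a shading normal from a 3D volume by
// central differencing the density. Names here are placeholders.
sampler3D VolumeMap;   // density stored in alpha
float3 TexelSize;      // 1.0 / volume resolution per axis

float3 GradientNormal(float3 uvw)
{
    // Sample density either side of the shading point on each axis.
    float dx = tex3D(VolumeMap, uvw + float3(TexelSize.x, 0, 0)).a
             - tex3D(VolumeMap, uvw - float3(TexelSize.x, 0, 0)).a;
    float dy = tex3D(VolumeMap, uvw + float3(0, TexelSize.y, 0)).a
             - tex3D(VolumeMap, uvw - float3(0, TexelSize.y, 0)).a;
    float dz = tex3D(VolumeMap, uvw + float3(0, 0, TexelSize.z)).a
             - tex3D(VolumeMap, uvw - float3(0, 0, TexelSize.z)).a;
    // The density gradient points into the volume, so negate it
    // to get an outward-facing surface normal.
    return normalize(-float3(dx, dy, dz));
}
[/code]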

I’ve shown these before too, so may as well post here…

Directional SSS - aka, Generalized Translucency
A low cost technique for a generalized translucency lighting model.

I created a skin shader for 3dsMax and released it onto the web AGES ago (at least it feels that way?). It was popular at the time, but I was always bugged by the fact it wasn’t truly depth or scatter based - it was just an extension of the NVIDIA Dawn technique of wrapping the Lambert term to an expanded range, with some extra trickery on top. There was no particulate matter that it scattered through… it was all just a pretty hack with limited use. Since then I’ve played with raymarching through layers, localised density recovery and so on, but it’s all either too expensive or too abstract to set up in a proper production pipeline without massive overhead or managerial nightmares.

So this time around…

  • Takes into account dynamic manipulation of the surface and light, but cannot recalculate secondary occlusion. Think of it like the limitations of normal mapping… you can’t dynamically recalculate the wrinkles in a joint, but you can transform the surface and light.
  • This technique takes the low-poly asset and encodes scatter and density information into its vertex colours.
  • Internal occlusion geometry (such as skeletal/musculature proxies) can optionally be included in the pre-process to increase/decrease the amount of density in a localised area.
  • I’ve set up a baking rig to quickly adjust and apply the information in 3dsMax.
  • Extremely cheap.
    Can be processed on a per-vertex or per-pixel level. Works very nicely when completely offloaded into the vertex shader, since it’s all low enough frequency that it still looks great.
    1 interpolator, 7 ALU instructions (this can include the Lambert lighting). There’s a sketch just below.
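To give a feel for how the runtime side could look, here’s a hedged HLSL sketch. I’m assuming density in vertex alpha and the scatter tint in vertex RGB purely for illustration; the real channel layout comes out of the bake rig:

[code]
// Hedged sketch of a vertex-colour-driven translucency term.
// Assumed layout: baked density in vertex alpha, scatter tint in RGB.
float3 Translucency(float3 N, float3 L, float4 vertColour, float3 lightColour)
{
    float ndl = dot(N, L);
    // Low baked density lets the light term wrap further around the
    // surface, approximating light bleeding through thin areas.
    float wrap = 1.0 - vertColour.a;              // thin areas = high wrap
    float diffuse = saturate((ndl + wrap) / (1.0 + wrap));
    // Tint whatever wrapped past the terminator by the scatter colour.
    float3 scatter = vertColour.rgb * saturate(wrap - saturate(ndl));
    return lightColour * (diffuse + scatter);
}
[/code]

Evaluated in the vertex shader, something like this should land roughly in the cost ballpark quoted above.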

Dynamic Reflective Radiosity - per vertex
This one’s a bit of a mess; I’ll add some details once I know what the hell I’m doing :slight_smile:
The basics are:

  • try and get a quick approximation to a radiosity bake which can be used with dynamic lights on the cheap.
  • bake two data sets from a fully light-emitting scene: all surfaces cast their colour and worldspace normals, and the secondary effects of this get baked down. The worldspace normals are then used to come up with a simple weighting for how much of that baked colour should be mixed with the direct light (sketched below).
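As a very rough illustration only (the names and per-vertex layout are my assumptions, since this one is still a mess), the weighting boils down to something like:

[code]
// Hedged sketch: weight a baked bounce colour by how well the baked
// "source" normal faces the current dynamic light. bounceColour and
// bounceNormal are the two assumed per-vertex data sets from the bake.
float3 DynamicBounce(float3 bounceColour, float3 bounceNormal,
                     float3 lightDir, float3 lightColour)
{
    // If the surfaces that produced this baked bounce face the light,
    // the bounce should show up; if they face away, fade it out.
    float weight = saturate(dot(bounceNormal, lightDir));
    return bounceColour * lightColour * weight;
}
[/code]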

Dynamic approximation to directional secondary lighting - lightmap encoded
A new idea for approximating secondary lighting. Shown here are dynamically lit results with a simple point light, no shadow maps.

  • Reasonably good approximation to secondary bounced light. This doesn’t just give soft occlusion, but also an approximation to reflected light.
  • It’s an approximation somewhere in-between ambient occlusion and full radiosity, except it’s fully dynamic. There’s a whole lot of cheap hacks and dirty assumptions, but visually it’s quite pleasing :slight_smile:
  • It’s similar in concept to PRT, except the data is packed into just 2 channels and is a lot cheaper at runtime. It’s also a hell of a lot faster to pre-compute. I’ve been experimenting with a few different variations, but so far the best results are coming from a data set which uses the incidence between two surfaces to produce the partial derivative of the distance field and the average of the angular aperture in a hemisphere. (There’s a rough stab at a runtime decode below.)
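And the rough stab at the decode. The exact encoding is still changing, so treat the channel meanings here (r as a scaled distance-field derivative, g as the average aperture) and every name as an assumption:

[code]
// Very hedged sketch of decoding the two-channel data at runtime.
// data.r: scaled distance-field derivative; data.g: average angular
// aperture of the unoccluded hemisphere (0 = fully occluded).
float3 SecondaryLight(float2 data, float3 N, float3 L,
                      float3 lightColour, float3 bounceTint)
{
    float aperture = data.g;
    float ndl = saturate(dot(N, L));
    // Open areas receive direct light normally; tight areas pick up a
    // small "bounced" term driven by the distance-field channel instead.
    float bounce = data.r * saturate(dot(-N, L) * 0.5 + 0.5);
    return lightColour * (ndl * aperture + bounce * bounceTint);
}
[/code]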

Are you going to explain how these things are done so I (we) can steal it from you?

:slight_smile:

Nice stuff though

Hey Kees, that’s a fair ask but may take a little time to detail :laugh:

Since these are all in flux and still really at the R&D stage, I figure I’ll eventually make some white papers on a few of them to get into some detail on the “how”; for now I’ll add some more notes regarding the “what”.

I’ll update that top post with some more info :D:

Very cool.
I will instantly steal your SSS shader when you do :slight_smile:

Also like the voxel stuff.

Beautiful work, Joel! Nice to see you around again!

I experimented a bit with this style of sub-surface scattering. My method was to render the model with back faces, then render it with front faces and measure the distance between the two. Then I normalized the distance and used it to look up a color value in a gradient. I got some decent results but didn’t take my experiments far enough to share with anyone.
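In case anyone wants to poke at it, the core of my experiment looked roughly like this (the names are made up from memory, and you’d need to render the back-face depth target yourself):

[code]
// Hedged sketch of the depth-difference approach described above.
// Pass 1 renders back faces to a depth target; pass 2 renders front
// faces, samples that target, and maps thickness to a gradient LUT.
sampler2D BackFaceDepth;   // linear depth of the back faces (assumed name)
sampler2D ScatterGradient; // 1D colour ramp indexed by thickness
float MaxThickness;        // scene-tuned normalisation constant

float4 ThicknessSSS(float2 screenUV, float frontDepth)
{
    float backDepth = tex2D(BackFaceDepth, screenUV).r;
    // Thickness: how much material sits between the two surfaces here.
    float thickness = saturate((backDepth - frontDepth) / MaxThickness);
    // Thin areas index one end of the gradient, thick areas the other.
    return tex2D(ScatterGradient, float2(thickness, 0.5));
}
[/code]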

I’m also very interested in more details on your data baking process when you encode the scatter and density properties - and then how you use that data in the shader to get your results.

[QUOTE=j.i. styles;2474]I’ve been experimenting with a few different variations, but so far the best results are coming from a data set which uses the incidence between two surfaces to produce the partial derivative of the distance field and the average of the angular aperture in a hemisphere.[/QUOTE]

That sentence struck me as awesome. Hopefully when I’m sober in the morning I may pretend to understand it!

Nice looking R&D!

[QUOTE=Dave Buchhofer;2498]That sentence struck me as awesome. Hopefully when I’m sober in the morning I may pretend to understand it!

Nice looking R&D![/QUOTE]

Holy crap, I don’t think being sober will help… I read that and confused myself :D:
Here’s a much easier-to-understand breakdown:

  • For the current surface texel…
  • I do a distance search for the closest surface neighbours. I do this with a few random samples in a hemispherical pattern.
  • Based on “true hits” within a distance threshold, I generate two pieces of data:
  1. a rough distance field
  2. the direction the ray came from.
  • I combine these and take the partial derivative so that I have a dx/dy.
  • With this I can reconstitute the data at runtime and get a full angular aperture, with a scalable distance and “area of incidence”. In other words, I can get artificially bounced light, and also increase/decrease the incoming aperture to get softer/harder shading. (There’s a bake-side sketch at the end of this post.)

See! Makes just as much sense as “the incidence between two surfaces to produce the partial derivative of the distance field and the average of the angular aperture in a hemisphere.” eek, proofread your posts, Mr. Styles… :eek:
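And since code sometimes reads better than my sentences, here’s a rough bake-side sketch of that search. TraceClosestHit() is a made-up stand-in for whatever ray query the bake rig actually uses (in 3dsMax it’s a scene raycast, not shader code), so treat the whole thing as an illustration:

[code]
// Hedged bake-side sketch of the hemisphere search described above.
#define NUM_SAMPLES 16
float SearchRadius; // distance threshold for "true hits"

float TraceClosestHit(float3 origin, float3 dir); // hypothetical scene query

float2 BakeApertureData(float3 P, float3 N, float3 sampleDirs[NUM_SAMPLES])
{
    float  sumDist = 0;
    float3 sumDir  = 0;
    int    hits    = 0;
    for (int i = 0; i < NUM_SAMPLES; i++)
    {
        // Random-ish sample direction, flipped into the upper hemisphere.
        float3 dir = sampleDirs[i];
        if (dot(dir, N) < 0) dir = -dir;
        float dist = TraceClosestHit(P, dir);
        if (dist < SearchRadius) // a "true hit"
        {
            sumDist += dist;
            sumDir  += dir;
            hits++;
        }
    }
    if (hits == 0) return float2(1, 0); // nothing nearby: fully open

    // 1. rough distance field (normalised to the search radius)
    float avgDist = (sumDist / hits) / SearchRadius;
    // 2. spread of the average hit direction gives the aperture term;
    //    the partial derivative (dx/dy) is taken across the surface in
    //    a later pass, once neighbouring texels have their values.
    float aperture = 1.0 - saturate(length(sumDir) / hits);
    return float2(avgDist, aperture);
}
[/code]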