Thickness map for absorption?

I’m trying to achieve an approximation of light absorption in Unreal.
Everything must be done in one shader, so no light dependent depth passes
are possible, but I can live with that.
Basically this is what I’d like to get, on a ball in this case:

I came up with an idea (“solution” would be too strong a word at this point),
and I’d like to ask your opinion on it.

The basic concept is quite simple: a texture map stores a 3D vector and an
alpha at each pixel. The vector encodes a direction: if the camera is looking at
that texel from that direction, the amount of background darkening is given by
the alpha channel.
As the view angle deviates from this stored direction, the darkening
decreases.
The surface normal is transformed into view space, which looks like this on a sphere:


These normals are compared to the texture.

This is a 2D example: the red arrow is the direction stored in the texel, which
represents a depth, indicated by the gray line inside the triangle. It leads
from the texel’s surface point to the far corner.
Technically this isn’t the longest available path inside the shape: the dotted
line shows a longer one. But I think those can be discarded, as the related
angle is very steep, so the effect would be hard to see, especially once Fresnel
kicks in.
The green lines are examples of different view angles: as a view gets further
away from the red direction, the perceived depth gets shallower. No, it’s
not precise at all, but maybe it looks OK.

Applied to a sphere, it would give a Fresnel-like falloff.

In the middle of this rectangle there is an ambiguity: both corners are the
same distance away. Off the centerline it becomes obvious which corner to
pick.
Near the corners, steep view angles will produce a depth vastly different
from the actual thickness of the object.
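As a quick sanity check, the lookup described above can be simulated outside the engine. This is a minimal Python sketch (not Unreal code); the `perceived_depth` name and the `falloff` exponent are my own assumptions about how deviation from the stored direction could be mapped to a shallower depth:

```python
import numpy as np

def perceived_depth(view_dir, stored_dir, stored_depth, falloff=2.0):
    """Approximate perceived depth for one texel.

    view_dir     -- vector from the surface point toward the camera
    stored_dir   -- direction baked into the texture (the red arrow)
    stored_depth -- alpha value: darkening when viewed exactly along stored_dir
    falloff      -- hypothetical exponent controlling how fast the depth
                    fades as the view deviates from stored_dir
    """
    view_dir = np.asarray(view_dir, dtype=float)
    stored_dir = np.asarray(stored_dir, dtype=float)
    view_dir /= np.linalg.norm(view_dir)
    stored_dir /= np.linalg.norm(stored_dir)
    # Cosine of the angle between the actual and the stored direction;
    # clamp so back-facing views give zero depth.
    alignment = max(np.dot(view_dir, stored_dir), 0.0)
    return stored_depth * alignment ** falloff

# Looking exactly along the stored direction returns the full baked depth:
print(perceived_depth([0, 0, 1], [0, 0, 1], 0.8))  # 0.8
# A grazing view gives a much shallower result:
print(perceived_depth([1, 0, 0.2], [0, 0, 1], 0.8))
```

This also reproduces the corner problem from the text: for views nearly opposite the stored direction the result collapses to zero, regardless of the real thickness.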

Well, that’s it. :slight_smile:

I have two main problems:

  • How the hell could I generate a thickness map like this for testing? I’m no
    programmer, unfortunately.
  • How could I reduce imprecision at the corners? A separate texture could
    define falloffs, but I’m not sure how.

I’d really appreciate ideas and advice. :slight_smile:

I’m not exactly sure I understand what you want to achieve, but it sounds very similar to something we do on our current project: fake subsurface scattering.

Here’s how we do it:

  1. generate an “object thickness map”, storing the approximate object depth (the distance incoming light would travel before it exits the object again) at every pixel. 3ds Max 2010 can do this for you:
    Render → Render Surface Map → SubSurface Map - or use PolyBoost 4.0, which is where they grabbed the feature from.
    I am currently working on a script to add more control over the result (taking underlying bones into consideration), but the 3ds Max result is a good start.
    You can ditch the thickness map if you only want to use the shader on very simple, near-spherical objects, or paint it in Photoshop.
    Important: white = thin, black = thick!
  2. in the shader, approximate the thickness by scaling and offsetting a Fresnel term with your thickness map.
    pseudo code:

// Fresnel term: 0 when looking straight at the surface, 1 at grazing angles.
fresnel = 1 - saturate(dot(normal, viewVector));
// Blend the baked thickness toward 1 at grazing angles,
// i.e. lerp(thicknessMap, 1, fresnel).
objectThickness = fresnel * (1 - thicknessMap) + thicknessMap;

This will generate quite nice fake light-coming-from-all-directions absorption.
  3. to fake light influence you can just multiply the thickness with a half-Lambert term or something:

objectThickness *= dot(normal, lightVector) * 0.5 + 0.5; // half-Lambert wrap lighting
  4. use objectThickness as the diffuse term, or apply a color gradient, or some other fancy stuff.
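For reference, here is the per-pixel math of the steps above written out as a plain Python function, so it can be checked numerically before building it in a shader graph. The function name and the use of NumPy are just for this sketch; it mirrors the pseudo code one to one:

```python
import numpy as np

def fake_absorption(normal, view_vec, light_vec, thickness_map_value):
    """Per-pixel sketch of the fake-SSS steps above.

    All vectors are assumed unit length; thickness_map_value is the
    baked grayscale value (white = thin, black = thick).
    """
    # Step 2: scale/offset a Fresnel term with the thickness map,
    # i.e. lerp(thickness, 1, fresnel).
    fresnel = 1.0 - np.clip(np.dot(normal, view_vec), 0.0, 1.0)
    object_thickness = fresnel * (1.0 - thickness_map_value) + thickness_map_value

    # Step 3: fake light influence with a half-Lambert term.
    object_thickness *= np.dot(normal, light_vec) * 0.5 + 0.5

    # Step 4: use the result as a diffuse term (or feed it into a gradient).
    return object_thickness

# Facing the camera head-on, the baked value passes through unchanged;
# at grazing angles the Fresnel term pushes it toward 1 (thin).
print(fake_absorption([0, 0, 1], [0, 0, 1], [0, 0, 1], 0.5))
print(fake_absorption([0, 0, 1], [1, 0, 0], [0, 0, 1], 0.5))
```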

Try this and see if it is what you want.

Thank you for the answer.

I was making a shader quite like what you described, when I started to
suspect that the grayscale map might be my problem.

They work fine for SSS on complex meshes, but now I’m not sure they can
be used for absorption - especially on simple meshes, where the brain is not
confused by the complexity and it’s obvious how things should look.

For example here is a thickness map for a cube:

Using it in the light transmission channel would make a great fake SSS, but I
don’t really see how I could use it for absorption:

This got me thinking about a way to store more information in the map.

My first idea was to have each RGBA pixel define 4 thickness values related to
4 directions, then inter- and extrapolate from those values.
But then I was stuck… like now. :slight_smile:
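For what it’s worth, the inter-/extrapolation you describe could look something like this in Python. The four sample directions and the dot-product weighting are purely hypothetical choices, just to show one possible scheme for blending the RGBA channels by view direction:

```python
import numpy as np

# Four hypothetical fixed sample directions (tangent space), one per channel.
SAMPLE_DIRS = np.array([
    [ 1.0,  0.0, 0.5],  # R
    [-1.0,  0.0, 0.5],  # G
    [ 0.0,  1.0, 0.5],  # B
    [ 0.0, -1.0, 0.5],  # A
])
SAMPLE_DIRS /= np.linalg.norm(SAMPLE_DIRS, axis=1, keepdims=True)

def interpolated_thickness(view_dir, rgba):
    """Blend the four baked thickness values by how well the view
    direction lines up with each channel's sample direction."""
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir /= np.linalg.norm(view_dir)
    # Per-channel weights: cosine of the angle to each sample direction,
    # clamped so opposing directions contribute nothing.
    weights = np.clip(SAMPLE_DIRS @ view_dir, 0.0, None)
    total = weights.sum()
    if total == 0.0:  # view opposes every sample direction
        return float(np.mean(rgba))
    return float(weights @ np.asarray(rgba) / total)
```

Note this is still only an interpolation between four fixed rays, so the corner/steep-angle problem from before would shrink but not disappear.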

Oh my.
This might be tough without an extra depth pass.

Basically you’d have to crunch as many distance samples into a pixel as you can and then interpolate in between them - like you said.
This sounds a bit like a BRDF but completely unique per pixel. :frowning:
Maybe spherical harmonics could be useful here. Unfortunately I’ve never worked with them myself - but I think it would be overkill.
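In case it helps, here is roughly what the spherical harmonics idea would look like with just the first 4 coefficients (bands 0 and 1), which conveniently fit into one RGBA texel. This is only a sketch of the math (fit baked depth samples to SH coefficients offline, evaluate per view direction in the shader), not production code:

```python
import numpy as np

def sh_basis(d):
    """First-order (4-coefficient) real spherical harmonic basis for a
    unit direction d -- exactly enough to pack into one RGBA texel."""
    x, y, z = d
    return np.array([
        0.282095,       # Y_0^0  (constant term)
        0.488603 * y,   # Y_1^-1
        0.488603 * z,   # Y_1^0
        0.488603 * x,   # Y_1^1
    ])

def project_thickness(dirs, depths):
    """Least-squares fit of SH coefficients to baked depth samples."""
    basis = np.array([sh_basis(d) for d in dirs])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(depths, dtype=float), rcond=None)
    return coeffs

def eval_thickness(coeffs, view_dir):
    """Reconstruct the approximate depth along view_dir."""
    return float(coeffs @ sh_basis(view_dir))

# Fit six axis-aligned depth samples, then query any direction:
dirs = [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
coeffs = project_thickness(dirs, [0.2, 0.2, 0.2, 0.2, 0.8, 0.2])
print(eval_thickness(coeffs, [0, 0, 1]))   # deepest along +Z
print(eval_thickness(coeffs, [0, 0, -1]))  # shallow along -Z
```

With only 4 coefficients the reconstruction is very smooth, so it would round off exactly the sharp corner cases discussed above - which is probably why it would be overkill here anyway.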

May I ask what you need it for, if it has to be so accurate?

I’m trying to test the limits of the Unreal engine from an arch-viz perspective.
At this point I’m going through all the basic effects of an offline renderer to
see if each can be approximated well enough.
Dispersion, blurred reflection/refraction, SSS are pretty straightforward, but
this absorption is a tough nut.

Since my last post I did a few tests with the volumetric fog feature, which can
take the shape of an arbitrary mesh:

Best solution yet, but it’s much more involved than just dragging and dropping a
shader onto a mesh. It has its own set of rules that need to be considered for
the best look, and control over the falloff is also limited, so I’m not entirely
happy with it.

In that case fog volumes might be your best pick if you want accurate results.
Any shader that works in a single pass would require you to crunch the entire detail of the mesh potentially visible through each texel into a texture. :frowning:

You’re right, it would probably need quite a lot of preprocessing to generate and
much memory to store all that data… Fog volume it is.

Thanks for the help! :slight_smile:

[QUOTE=saschaherfort;4928]
Try this and see if it is what you want.[/QUOTE]

Hi! I’ve tried your code in Unreal with a custom node but it doesn’t seem to do anything :tear:

Have you tried doing it with the usual node linking? It should be easy to do it that way too.
(Btw, custom code nodes don’t get optimized, so try to do stuff the usual way if possible.)

[QUOTE=Zoltan;4943]Have you tried doing it with the usual node linking? It should be easy to do it that way too.
(Btw, custom code nodes don’t get optimized, so try to do stuff the usual way if possible.)[/QUOTE]

Yep, I did it with nodes first and didn’t get the expected result, so I tried with the custom node to be sure I didn’t miss something.

I’ll try again tomorrow as I only gave it a quick try :rolleyes:

I should’ve pointed out that my “code” is pseudo code. :slight_smile:
You’ll have to translate it into your favourite shader language and set up all the necessary textures etc., but it should be doable with Unreal’s shader graph.