Time lapse photos and normal maps

I was wandering in the forest when I arrived at a clearing
with a great view of a neighboring hillside. The dense
foliage was lit by the setting sun, producing an
aesthetically pleasing sight. The low angle of the light
accentuated the complex, visually interesting nature of
that cloud of leaves and branches flowing down the slopes.

My TA lobe kicked in and I started wondering about the
following:

Could we estimate a normal map based on timelapse photos
of an outdoor subject?

Let’s assume that…
… the subject doesn’t move or change shape. (So no wind
shaking the leaves; or say we’re dealing with a rock.)
… the weather is good, no clouds.
… we know the exact date and time for each picture.
… we know the exact location and orientation of the camera.
… we know the approximate location of the subject.

We take several photos throughout the day without moving
either the camera or the subject. Once that’s done, we
follow the brightness changes of every pixel: a pixel is
brightest when its surface faces the sun most directly. We
can compute the position of the sun from the date, time and
geo-location. The approximate orientation of the subject
could also be factored in.
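Taken a step further, fitting a Lambertian model per pixel across several sun directions is essentially photometric stereo. A minimal NumPy sketch of that idea, assuming we already have a unit sun-direction vector for each timestamp (all names here are hypothetical):

```python
import numpy as np

def estimate_normal(brightness, sun_dirs):
    """Fit a Lambertian model b ≈ albedo * (n · s) for one pixel.

    brightness: (N,) brightness samples for the pixel
    sun_dirs:   (N, 3) unit vectors toward the sun at each timestamp
    Returns (unit normal, albedo). Only feed it directly lit samples.
    """
    # Solve brightness ≈ sun_dirs @ g in the least-squares sense,
    # where g = albedo * n; the length of g is the albedo.
    g, *_ = np.linalg.lstsq(sun_dirs, brightness, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo

# Toy check: a surface tilted toward +x with albedo 0.8
sun = np.array([[0.0, 0.0, 1.0],
                [0.6, 0.0, 0.8],
                [-0.6, 0.0, 0.8]])
true_n = np.array([0.6, 0.0, 0.8])
b = 0.8 * sun @ true_n          # all three samples happen to be lit
n, albedo = estimate_normal(b, sun)
```

With three or more lit samples per pixel this recovers both the normal and the albedo, which is exactly why the shadow interpolation below matters.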

When the brightness changes suddenly, we can assume the area
moved into or out of a shadow. We only care about samples
where the pixel is directly lit, so if shadows come and go,
we interpolate through the shadowed gaps.
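One crude way to sketch that gap-filling, treating any sample far below the series maximum as shadowed (the threshold and names are made up for illustration):

```python
import numpy as np

def fill_shadow_gaps(times, brightness, drop_ratio=0.4):
    """Replace apparently shadowed samples by linear interpolation.

    Crude heuristic (illustrative only): any sample below
    drop_ratio * max(brightness) is assumed to be in cast shadow,
    and we interpolate between the surrounding directly lit samples.
    """
    times = np.asarray(times, dtype=float)
    brightness = np.asarray(brightness, dtype=float)
    lit = brightness > drop_ratio * brightness.max()
    filled = brightness.copy()
    # np.interp handles runs of consecutive shadowed samples too
    filled[~lit] = np.interp(times[~lit], times[lit], brightness[lit])
    return filled, lit

hours = np.array([8.0, 10.0, 12.0, 14.0, 16.0])
b = np.array([1.0, 1.2, 0.1, 1.6, 1.4])   # midday sample in shadow
filled, lit = fill_shadow_gaps(hours, b)
```

A real version would probably look at the suddenness of the change rather than an absolute threshold, since a steep north-facing slope is legitimately dim all day.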

The color of the sunlight at any given time can also be
computed, so the surface color can be compensated for it.
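A tiny sketch of that compensation, assuming we can estimate the sunlight’s RGB for each shot (say, from the sun’s elevation); dividing it out leaves something closer to the surface’s own reflectance:

```python
import numpy as np

def compensate_sun_color(pixels, sun_rgb):
    """Divide each observation by the sunlight color of its shot.

    pixels:  (N, 3) observed RGB of one pixel across N shots
    sun_rgb: (N, 3) estimated sunlight RGB per shot (hypothetical input)
    """
    # Guard against division by zero in very dark channels
    return np.asarray(pixels) / np.maximum(sun_rgb, 1e-6)

# A reddish evening sun over a gray surface:
sun = np.array([[1.0, 0.8, 0.5]])
observed = 0.5 * sun            # gray reflectance 0.5, tinted by the sun
reflectance = compensate_sun_color(observed, sun)
```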

So that’s it. :slight_smile: Thoughts?

Someone did a tutorial a long time ago using a bowl of peanuts and a simple lighting rig.
He took 3 photos and combined the result in Photoshop to create a normal map. Pretty convincing.
I think the biggest obstacle to overcome is the shadows. A hill inside the shadow of a larger hill won’t get correct surface data…

If only you could turn off shadow casting on the sun…

Yup Kovach, that was one of the earliest normal mapping things I remember. He just took a flashlight, put it at 12 o’clock and 9 o’clock (and maybe the other positions) and composited them as the R and G channels to create a normal map.
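For fun, that two-photo compositing trick might be sketched roughly like this, assuming normalized grayscale photos in [0, 1] and the usual tangent-space encoding (the axis signs are a guess; conventions vary):

```python
import numpy as np

def photos_to_normal_map(lit_9_oclock, lit_12_oclock):
    """Turn two directionally lit grayscale photos into a normal map.

    lit_9_oclock:  (H, W) photo lit from the left, values in [0, 1]
    lit_12_oclock: (H, W) photo lit from the top, values in [0, 1]
    Flip the axis signs if the bumps come out inverted.
    """
    # Remap brightness to [-1, 1] and use it as the X / Y component
    x = np.asarray(lit_9_oclock) * 2.0 - 1.0
    y = np.asarray(lit_12_oclock) * 2.0 - 1.0
    # Choose Z so the normal has unit length where possible
    z = np.sqrt(np.clip(1.0 - x * x - y * y, 0.0, None))
    n = np.stack([x, y, z], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Pack into the usual 0..255 RGB normal-map encoding
    return ((n * 0.5 + 0.5) * 255.0).astype(np.uint8)

# A flat patch (mid gray in both photos) encodes as the classic
# "flat normal map" blue, roughly (127, 127, 255)
flat = photos_to_normal_map(np.full((2, 2), 0.5), np.full((2, 2), 0.5))
```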

That’d be the easiest way to try it (get photos as close to those lighting angles as possible). I’m sure there are some super complex algorithms that could recover the geometry by modeling the light’s behavior, but I have no idea if they’d be accessible or practical.

Of course, Ryan Clark’s peanut normal method. So that’s where
my idea took root… I’m not that clever after all… :expressionless:
But yeah, the solution could be roughly the same, only with less
ideal angles and color values.

I guess shadowy areas can only be fixed with the clone brush.
Or by placing a mirror on the ground for extra coverage. Or using
the light reflected off the surface of natural bodies of water.
Or one can just throw away the camera and procedurally generate
cliffs and dense foliage in Houdini.