A full-bore "Image Based Specular" Term (with HDR pipeline tech)

Hey Gents,

Question for ya, mostly regarding lighting and specular. This is a bit more of a “high concept” question, and a complicated one for me to ask, so I’ll do my best. It’s a two-part question, I think.

We’re doing some proprietary work involving placed cubemap nodes within our levels, and are attempting to store a full spec exponent range for each cube in its mip chain. Our level assets’ cubemap samplers for their various materials automatically hook into these generated cubes based on proximity in world space.
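To make the setup concrete, the runtime side mostly boils down to picking a mip from the material’s specular exponent. A rough sketch of the kind of mapping I mean (not our actual code; kMaxExponent and kNumMips are made-up stand-ins for whatever constants your pipeline uses):

```cpp
// Sketch: map a material's specular exponent onto a mip in a pre-convolved
// cube chain. Assumes mip 0 holds the sharpest lobe (exponent kMaxExponent)
// and the last mip is the roughest. kMaxExponent / kNumMips are hypothetical.
#include <algorithm>
#include <cmath>

float SpecExponentToMip(float specExponent,
                        float kMaxExponent, // e.g. 2048.0f
                        int   kNumMips)     // e.g. 9 for 256x256 faces
{
    // Exponents span orders of magnitude, so interpolate on log2(exponent)
    // rather than on the exponent itself.
    float logMax = std::log2(kMaxExponent);
    float logE   = std::log2(std::max(specExponent, 1.0f));
    float t      = (logMax - logE) / logMax; // 0 = sharpest, 1 = roughest
    t = std::min(std::max(t, 0.0f), 1.0f);
    return t * static_cast<float>(kNumMips - 1);
}
```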

There are a couple of “extended tech” issues I’d like to ask about, to see if anyone has had success with this type of thing, or if anyone can wave any red flags.

1) How should the mip levels be blurred/convolved?
We are currently convolving (blurring) our cubes in our build pipeline using libraries from ATI’s CubeMapGen, but we don’t have the source for those libraries and can’t be sure they will link against later versions of the Visual Studio compiler (our rendering engineer tells me they almost certainly won’t). There is a paper on their convolve process, but reimplementing work that’s already been done feels wasted, and we’re on a pretty tight timeline w/ this R&D task. Is anyone else dealing with this, and does anyone have other methods worth mentioning?
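For what it’s worth, the core of that kind of filter is simple enough that reimplementing it may be less work than it sounds: for each output texel you take a cosine-power (Phong lobe) weighted sum over every source texel. A brute-force sketch along those lines (CubeTexel is a hypothetical struct you’d fill from your own face data, not a CubeMapGen type):

```cpp
// Brute-force cosine-power (Phong lobe) convolution of a cubemap, the same
// basic idea CubeMapGen's filtering is built on. You would fill 'texels'
// from your own cube face data.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

struct CubeTexel
{
    Vec3  dir;        // normalized direction through the texel center
    float solidAngle; // solid angle subtended by the texel
    Vec3  color;      // source radiance
};

// Returns the filtered color for output direction 'n' at specular power 'e'.
Vec3 ConvolveDirection(const std::vector<CubeTexel>& texels,
                       const Vec3& n, float e)
{
    Vec3  sum       = { 0.0f, 0.0f, 0.0f };
    float weightSum = 0.0f;
    for (const CubeTexel& t : texels)
    {
        float cosAngle = Dot(t.dir, n);
        if (cosAngle <= 0.0f)
            continue;                          // texel is behind the lobe
        float w = std::pow(cosAngle, e) * t.solidAngle;
        sum.x += t.color.x * w;
        sum.y += t.color.y * w;
        sum.z += t.color.z * w;
        weightSum += w;
    }
    if (weightSum > 0.0f)
    {
        sum.x /= weightSum;
        sum.y /= weightSum;
        sum.z /= weightSum;
    }
    return sum;
}
```

Run that once per output texel per mip, dropping the exponent each mip, and you get the whole chain. It’s O(N²) in texel count, but for an offline build step that’s usually tolerable, and it parallelizes trivially per output texel.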

2) How should the light sources themselves be captured?
When you take pictures of your level from various positions, you need the physical light sources (bright splotches?) to show up in your term, mimicking a traditional specular highlight lookup to a light source. These light sources should also sit above the 0-1 range of standard reflectance (HDR). Would it make sense to capture the cube faces at 16-bit and later compress them down to an 8-bit range? For reasons outside of this conversation, we probably need to stick with standard 8-bit cubes rather than HDR formats when all is said and done.
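In case it helps frame the question: one general trick for squeezing HDR into standard 8-bit cubes (not CubeMapGen-specific, and I’d love to hear if anyone has shipped with it) is an RGBM-style shared-multiplier encoding, done offline at your 16-bit capture precision and decoded in the shader as rgb * a * range. A minimal sketch, where kRGBMRange is an assumed tuning constant for the brightest representable value:

```cpp
// RGBM-style packing of HDR radiance into an 8-bit RGBA texel. kRGBMRange
// is an assumed tuning constant: the brightest representable value is
// kRGBMRange * 1.0. Encode offline at full precision; decode is cheap.
#include <algorithm>
#include <cmath>

static const float kRGBMRange = 6.0f; // assumed max HDR multiplier

struct RGBA8 { unsigned char r, g, b, a; };

RGBA8 EncodeRGBM(float r, float g, float b)
{
    // Scale into [0,1] against the chosen range.
    r = std::min(r / kRGBMRange, 1.0f);
    g = std::min(g / kRGBMRange, 1.0f);
    b = std::min(b / kRGBMRange, 1.0f);

    // Shared multiplier: the largest channel, quantized upward so the
    // reconstruction never clips below the true value.
    float m = std::max(std::max(r, g), std::max(b, 1e-5f));
    m = std::ceil(m * 255.0f) / 255.0f;

    RGBA8 out;
    out.r = static_cast<unsigned char>(std::min(r / m, 1.0f) * 255.0f + 0.5f);
    out.g = static_cast<unsigned char>(std::min(g / m, 1.0f) * 255.0f + 0.5f);
    out.b = static_cast<unsigned char>(std::min(b / m, 1.0f) * 255.0f + 0.5f);
    out.a = static_cast<unsigned char>(m * 255.0f + 0.5f);
    return out;
}

// Decode (the shader-side equivalent): color = rgb * a * kRGBMRange.
void DecodeRGBM(const RGBA8& in, float& r, float& g, float& b)
{
    float m = (in.a / 255.0f) * kRGBMRange;
    r = (in.r / 255.0f) * m;
    g = (in.g / 255.0f) * m;
    b = (in.b / 255.0f) * m;
}
```

The obvious cost is that the multiplier is shared across all three channels, so very saturated bright sources lose some chroma precision, but the bright splotches themselves survive the trip down to 8-bit.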