[Unreal3] Blurring reflections in Chrome material?

Hey,

I’m using a cube map rendered in Unreal to create the reflections for my Chrome material but the reflections are too sharp and intense. Is there any way I can blur the reflections in this material’s shader network? Here’s my setup.

Cheers,

Ps. I found this: http://tech-artists.org/forum/showthread.php?t=462

But I couldn’t seem to get it to work. The “A” channel of the top “Append” goes into the “A” of the “negative offset” multiply? If anyone could shed some light on how this network works then that might answer my question. :)

I have something that I think fits what you need. It’s currently really rough (I just hacked it together in the past hour while taking a break from some other work).

I threw in some basic comments, but you may have some difficulties. If I don’t upload a nicer version before you get to try it, feel free to ask questions. I’ll have a nicer version up soon. Just unzip it, it’s a .upk.

In the thread mentioned in the first post, the negative offset is made by multiplying the
positive offset by -1. It’s just for convenience.
So you can’t link two inputs to each other directly, although it does look like the line goes straight from A to A. :)

Oh, that’s what he meant by the A goes into the A…

…I feel dumb that I didn’t realize that was the problem he was having, haha.

The usual way to do it cheaply is to use a smaller cubemap and/or an LODBias, which shifts the texture to a lower mipmap. I’m not sure if Unreal supports the LODBias call; it might if you use a custom node.

It seems that Unreal’s CustomTexture node is “not flexible enough for the case
where you would like to sample the same texture multiple times in a loop, or
to use other sampling instructions such as tex2Dbias, tex2Dlod, tex2Dproj”.

But since Bside is not using a cubemap but a regular texture, it might be possible
to fake MIPs: actually create a texture with the mip levels laid out like this, and scale,
offset and clamp the texture coordinates appropriately in the shader.
I think it would be possible to make a PS action for the scaling, blurring and
placing of the bits.
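The scale/offset/clamp part of that idea can be sketched as a tiny function. This is a minimal illustration, assuming a hypothetical atlas layout where the mip tiles sit side by side (mip 0 on the left, each following mip half as wide, every tile spanning the full height); the real texture layout and names are up to whoever builds it.

```python
def fake_mip_uv(u, v, mip, base_frac=0.5, edge=0.01):
    """Scale, offset and clamp (u, v) so it samples "mip level" `mip`
    out of a side-by-side tile atlas.

    base_frac: fraction of the atlas width taken by mip 0
               (0.5 -> mip 0 spans x in [0, 0.5], mip 1 in [0.5, 0.75], ...).
    edge:      clamp margin (in tile UVs) so bilinear filtering
               doesn't bleed into the neighbouring tile.
    """
    # Tile for mip i starts where the previous tiles end: the offsets
    # form a geometric series, base_frac * (2 - 2^(1 - mip)).
    offset = base_frac * (2.0 - 2.0 ** (1 - mip))
    scale = base_frac * 2.0 ** -mip
    # Clamp inside the tile before remapping.
    u = min(max(u, edge), 1.0 - edge)
    v = min(max(v, edge), 1.0 - edge)
    return (offset + u * scale, v)
```

In a shader the same thing would be a multiply, an add and a clamp on the texture coordinate, driven by the desired blur level.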

Blur it in PS!

Blurring in PS destroys the seams of a cubemap, but I think ATI has a tool called CubeMapGen or something along those lines that allows you to blur cubemaps.

I thought that this was a dynamic map though? If not, then yeah, do it offline and don’t think about it.

Oh, doh, he’s using a cubemap reflection…
In that case the brute force on-the-fly blur can be done like this,
so the result will look similar to this:

If you have any hopes of getting this to run on a console, I would highly suggest not doing the above =). While it works, it probably doesn’t blur enough for what you’re looking for, and it’s pretty expensive.

If you’re working on a project with programmers, ask them to look into implementing an LODBias node. Unreal just spits out an FX file at the end of the day, so you should be able to do whatever you want, probably even add a parameter to the Texture2dSample node that compiles into an LODBias.

Hehe yeah it’s very expensive so it only makes sense for blurring a
render2texture cubemap.

For offline processing of cubemaps I don’t use CubeMapGen
because it can’t produce the images in the way Unreal expects them.
So I made a scene preset in modo where the camera has 6 frames of animation
where it’s oriented properly for the 6 cubemap sides.

Then there is a sphere around it with a blurry refraction material which blurs
the environment. (I tried DOF but that produced seams.) In this preset scene I
just swap the HDRI envmaps and stage geometry and render the animation.
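The six camera orientations behind that modo preset can be written down explicitly. As a reference sketch, here are the conventional D3D cube-face directions; the orientations Unreal actually expects differ by per-face flips/rotations (which is exactly what the preset bakes in), so treat this as a starting point, not Unreal’s layout.

```python
# One (face, forward, up) triple per cubemap side, in the usual
# D3D face order: +X, -X, +Y, -Y, +Z, -Z.
CUBE_FACES = [
    ("+X", ( 1,  0,  0), (0, 1,  0)),
    ("-X", (-1,  0,  0), (0, 1,  0)),
    ("+Y", ( 0,  1,  0), (0, 0, -1)),
    ("-Y", ( 0, -1,  0), (0, 0,  1)),
    ("+Z", ( 0,  0,  1), (0, 1,  0)),
    ("-Z", ( 0,  0, -1), (0, 1,  0)),
]

def frame_for_face(frame):
    """Map an animation frame (0-5) to the camera orientation for that
    face, like the 6-frame camera animation in the modo scene."""
    return CUBE_FACES[frame]
```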

I thought about this whole ordeal with blurred cubemaps and there were a
few things I really didn’t like:

  • Outside tools are required.
  • The generated images might need to be rotated in the way Unreal expects
    them.
  • The input cubemap, which gets blurred, usually has no relation to the
    actual Unreal map where it will be used. “One reflection fits all” is a rather
    old-school approach.

Yes, learning and using CubeMapGen is not that difficult; the images can be
rotated by an ImageMagick script; you can capture cubemaps of actual
maps, export and convert them to a format CubeMapGen can load, blur them,
then bring everything back again, every time the map changes significantly.

All that could be done but I prefer having the computer do the tedious work,
not me, so I tried to come up with a better workflow.

As it turns out, it is possible to produce a decent blur right in Unreal. Like
this one:

The trick is to use a recursive shader and let it run for a second.

Since actual loops are not allowed in Unreal shaders (not even in the custom
code node) I had to put on my “Hacky” hat and make a special
SceneCaptureCubemapActor.

The capture point is located inside an extra StaticMesh component, which is
an inside-out sphere (so the normals are facing toward the center).
There is a blur shader applied to this sphere: it slightly blurs the referenced
cubemap, which is generated by the same SceneCapture actor.

So in one frame it captures the surrounding sphere, the next frame the
sphere’s shader gets updated and blurs stuff a bit. The change is captured in
the next frame, then the updated cubemap is processed again and so on.
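The reason this feedback loop works can be simulated in a few lines: re-blurring the previous result with a tiny kernel each frame adds up to one increasingly wide blur. Here a 3-tap box filter on a 1D signal stands in for the cubemap blur shader; this is an illustration of the principle, not the actual shader.

```python
def small_blur(signal):
    """One 'frame' of the loop: 3-tap box blur with clamped edges."""
    n = len(signal)
    return [
        (signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, n - 1)]) / 3.0
        for i in range(n)
    ]

def feedback_blur(signal, frames):
    """Run the loop for `frames` iterations, like the SceneCapture actor
    re-capturing its own blurred output until the time limit hits."""
    for _ in range(frames):
        signal = small_blur(signal)
    return signal

spike = [0.0] * 7
spike[3] = 1.0
print(feedback_blur(spike, 1))   # narrow blur after one frame
print(feedback_blur(spike, 10))  # much wider and flatter after ten frames
```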

With the proper blur shader it’s also easy to simulate multilayer reflections
like this:


(It’s one cubemap and not two mixed together.)

There is a time limit set in the actor after which everything is deactivated
and removed from the scene. The generated cubemap stays there so you
can save it as an “offline” asset if you want.

The base cubemap might be some imported HDRI image, but it can also be
captured: place a normal SceneCaptureCubeMapActor above the map and
set its near plane to skip the actual map and only see the really distant
environment.

So the blurring workflow becomes like this:

  • Set blur parameters.
  • Start a PIE session and wait for a few seconds.
  • Close PIE and save the generated cubemap.

It’s also possible to have a more flexible and generalized way of dealing with
reflections, if you are willing to sacrifice the first second of each map.

Have cubemaps like “Environment” and “EnvironmentBlurred” in a package and
generate them once every time a map starts. By using only these CMs in
reflective shaders you can stop the proliferation of cubemaps and related
material instances and you can also be sure that reflections have at least a
remote resemblance to the visuals of the actual map.

So I set up this material:

and now I can forget about it, it will look right whatever I do to the skybox.

The next logical step would be having separate cubemaps for separate areas
on a map and have the (Static)Meshes use the closest ones automatically. I
don’t have time for that now but I’d like to dig into it in the future.

Zoltan - That looks like a very juicy shader. I would love to see it in motion!

You mentioned SceneCaptureCubemapActor, I take it that’s through UnrealScript? Could you shed some light on how you actually did that (for non-techy artists like myself who are mostly just interested in doing shaders)?

~B

I’ll make Fraps captures of the stuff when I have some free time.

In the shader I used the surface normals as the input UVs for the cubemap.
The blur itself is a brute force method: the base normals are rotated a bit on
each axis (simple 2D rotations), then used to sample the cubemap several
times, and at the end the samples get averaged.
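The rotate-and-average part can be sketched like this. It’s a minimal stand-in, not Zoltan’s shader: `sample_cubemap` is any function from a direction to a colour, and the axis rotations are the plain 2D rotations he describes.

```python
import math

def rotate2d(a, b, angle):
    """Rotate the pair (a, b) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return a * c - b * s, a * s + b * c

def jittered_normals(n, angle):
    """Rotate normal n around X, Y and Z by +/- angle: the original
    normal plus six rotated copies."""
    x, y, z = n
    out = [n]
    for a in (angle, -angle):
        yr, zr = rotate2d(y, z, a); out.append((x, yr, zr))   # around X
        xr, zr = rotate2d(x, z, a); out.append((xr, y, zr))   # around Y
        xr, yr = rotate2d(x, y, a); out.append((xr, yr, z))   # around Z
    return out

def blurred_sample(sample_cubemap, normal, angle=0.1):
    """Sample the cubemap once per jittered normal and average."""
    samples = [sample_cubemap(d) for d in jittered_normals(normal, angle)]
    n = len(samples)
    return tuple(sum(ch) / n for ch in zip(*samples))
```

Widening `angle` (or adding more rotation steps) blurs more, at the cost of one extra cubemap fetch per sample, which is why this gets expensive fast.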

I also added adjustable noise, brightness, contrast and saturation
parameters but they are not terribly important.

The script part is mostly for convenience: all components are encapsulated
within a single actor and the important parameters are at the same place.

So you can build a similar rig using existing building blocks: a
SceneCaptureCubeMapActor, a static mesh around it, and Kismet turning
everything off. (That’s what I did when prototyping this hack.)

You can read more about scene capture actors on UDN:
http://udn.epicgames.com/Three/RenderToTexture.html