Rendering Pipeline

We’re working on a game project that will use rendered backgrounds and I’m in the process of setting up a rendering pipeline. I’ve done rendering pipelines before, but I’m running into an issue that I’ve never encountered and am having a difficult time coming up with an efficient solution. I’m working in Maya.

Limitations:
[ul]
[li]Our viewable area is 1024x768, but we could have backgrounds as large as 3x that, where the viewable area scrolls around the background image.
[/li][li]The game world requires that we have a fixed camera in order to optimize point-and-click movement calculations. The camera must be in the same position for all scenes.
[/li][li]The background is rendered in perspective.
[/li][/ul]

The problem arises when I need to render one room at 1024x768 and another at 2048x1536.
[ul][li]I can’t simply change the rendering resolution without adjusting the camera, or my world will suddenly become twice as large. The artists would need to build everything at multiple scales to match the render scale, which would make sharing assets between scenes problematic.
[/li][li]I can’t move the camera, or we lose the optimizations that have been done for point-and-click.
[/li][li]I tried changing the Film Back properties, which allow keeping the camera in the same place and keeping objects scaled properly, but it ultimately changes the Focal Length, which causes the same problems as moving the camera.
[/li][/ul]
The only solution I’ve come up with is to place the camera in such a position that it works for the 3x backgrounds, then render it out full-size and crop down to 1024x768 or 2048x1536. Obviously, this is not an efficient pipeline when we’re rendering up to 3 times the size that’s needed. Is there something I’m missing for being able to render and save out only sub-sections of the render?

Anyone have any thoughts or solutions I’ve not thought of?

Camera scale. In your case, if your default width is 1024, using the expression perspShape.cameraScale = defaultResolution.width / 1024.0

would give you what I think you’re asking for.

I tried using camera scale, but essentially all that attribute does is become a multiplier for the focal length. Changing it yields the same results as changing focal length, which I can’t do because it won’t match the game camera anymore. :(

My current solution is to use the render region to only render the part that I want. Every image will need to go through an external script to crop and composite anyway, so I’ll be able to crop off the border that I’m not using. It means larger file sizes for the immediate render, but that shouldn’t be too big of a problem in the grand scheme of things.
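For the crop step, here is a minimal sketch of the region math in plain Python (the function name is illustrative; in Maya these bounds would be pushed onto the render globals’ render-region settings, or handed to whatever external crop script you’re using):

```python
def centered_region(full_w, full_h, crop_w, crop_h):
    """Return (left, right, bottom, top) inclusive pixel bounds for a
    crop of crop_w x crop_h centered inside a full_w x full_h render."""
    if crop_w > full_w or crop_h > full_h:
        raise ValueError("crop is larger than the full render")
    left = (full_w - crop_w) // 2
    bottom = (full_h - crop_h) // 2
    return left, left + crop_w - 1, bottom, bottom + crop_h - 1

# A 1024x768 room cropped out of the full 3072x2304 frame:
print(centered_region(3072, 2304, 1024, 768))  # (1024, 2047, 768, 1535)
```

The same call with (2048, 1536) gives the bounds for the larger rooms, so one function covers both target resolutions.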

Maybe you want a script to render background tiles using the game camera (but with nodal pan/tilt) and comp them together?

I had been thinking about how I would go about rendering out multiple smaller images and comping them together but haven’t come to anything usable yet. Can you explain what you mean by nodal pan/tilt?

Nodal pan or tilt just means a simple rotation of the camera up or down or sideways, but with no translation, so the camera stays in the same position.

Render multiple tiles, say in a noughts and crosses 3 x 3 grid, using the same focal length as the game camera, and stitch them together. You will then have an oversize background image you can pan around with the game camera.
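A sketch of the angle math for that grid, assuming you know the game camera’s horizontal and vertical FOV (the function name is illustrative): the tile centers lie on an image plane three times the frame size, one full frame apart, so the nodal pan/tilt that aims the camera at a tile center is the arctangent of that center’s offset on the plane — not simply a multiple of the FOV.

```python
import math

def tile_pan_tilt(hfov_deg, vfov_deg):
    """Pan/tilt angles (degrees) aiming a nodal camera at each tile
    center of a 3x3 grid laid out on the extended image plane."""
    half_w = math.tan(math.radians(hfov_deg) / 2)  # half frame width at unit distance
    half_h = math.tan(math.radians(vfov_deg) / 2)
    angles = []
    for row in (-1, 0, 1):       # bottom, middle, top
        for col in (-1, 0, 1):   # left, center, right
            pan = math.degrees(math.atan(2 * col * half_w))
            tilt = math.degrees(math.atan(2 * row * half_h))
            angles.append((pan, tilt))
    return angles

# The center tile needs no rotation at all:
print(tile_pan_tilt(54.4, 42.2)[4])  # (0.0, 0.0)
```

Note the outer angles come out smaller than a full FOV step, which is a first hint of the distortion issue mentioned below: rotated tiles don’t butt together on a flat plane without correction.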

You might or might not have to correct for lens distortion at the time you stitch the images back together. If you use a short focal length lens (wide fov) you might notice seams where the renders were stitched together. Can’t remember off the top of my head how to correct that distortion but I’m sure someone else on this forum will know or it’ll be floating around on the internet somewhere.
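Those seams come from the fact that each rotated tile is rendered onto its own tilted image plane, so a straight pixel-for-pixel stitch won’t line up; the fix is to reproject each tile back onto the reference camera’s plane before compositing. A 1D sketch of that reprojection for a pure pan (tilt works the same way on the vertical axis; the function name is illustrative):

```python
import math

def reproject_pan(x_tile, f, pan_rad):
    """Map a horizontal image-plane coordinate from a tile camera panned
    by pan_rad back onto the reference (unrotated) camera's plane.
    Both cameras share focal length f and the same nodal point."""
    # Recover the ray angle in the tile camera, add the pan, re-project.
    theta = math.atan2(x_tile, f) + pan_rad
    return f * math.tan(theta)

# With no pan, a centered pixel maps to itself:
print(reproject_pan(0.0, 1.0, 0.0))  # 0.0
```

In 2D this generalizes to rotating each pixel’s ray by the tile’s pan/tilt rotation and re-projecting, which is what most panorama/stitching tools do under the hood.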