I would go with the “spit out information” route rather than a reference scene. It allows for easy changes later: switching renderers/3d packages/whatever. Maybe overkill and more work than it’s worth for your situation, but it’s a “down the road” thought.
Yeah, for the moment the Layout Pipeline will use a simple json file (the clean output for the next pipeline stages) that describes the positional/rotational layout of the scene assets.
The SceneAssembler script will use that json to spawn and place the real rigged assets for the Animation scene. Likewise, the SceneAssembler will build the Lighting, Effects and Simulation scenes using the json that Layout spat out and the Alembic point caches that Animation spat out, this way avoiding conflicts between pipeline stages.
The Layout stage is hooked into the asset database system, so if a new proxy asset is added to it, it will show up in the json and the change will carry through downstream using the real assets. This does put a constraint on things though: if a new asset is needed in a shot, it has to be added in the Layout scene so it can propagate downwards and automatically show up in the other stages.
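Roughly what I have in mind, as a sketch (the json schema, asset names and paths here are made up; the real ones will come from the asset database):

```python
# layout.json -- the clean output of the Layout stage, e.g.:
# {
#   "assets": [
#     {"name": "cannon_01", "rig": "assets/cannon/cannon_anim.ma",
#      "translate": [0.0, 0.0, 5.0], "rotate": [0.0, 90.0, 0.0]}
#   ]
# }
import json
import maya.cmds as cmds

def assemble_scene(layout_path):
    """Spawn and place the real rigged assets described by the Layout json."""
    with open(layout_path) as f:
        layout = json.load(f)
    for asset in layout["assets"]:
        # Reference the real rig where the Layout proxy used to be.
        nodes = cmds.file(asset["rig"], reference=True,
                          namespace=asset["name"], returnNewNodes=True)
        # Apply the positional/rotational layout to the rig's top node(s).
        for root in cmds.ls(nodes, assemblies=True):
            cmds.xform(root, translation=asset["translate"],
                       rotation=asset["rotate"], worldSpace=True)
```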
The only thing I worry about atm is how more complicated upstream changes get propagated downstream.
These are untested ideas, just experimenting so far
One thing I did for some school projects was write a script to apply consistent render and render pass settings across multiple scene files. It saved tons of time that would otherwise have been spent troubleshooting rendering irregularities going from one shot assembly file to another. I also automated some additional custom passes in there, and wrote another script to quickly set up matte passes based on naming conventions. All in all it wasn’t without problems and took some time to write, but it cut back on a lot of repetitive work and allowed me to rely slightly less on the reference editor.
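The core of it was not much more than this (simplified from memory; the shot list and the particular globals are just examples):

```python
import maya.standalone
import maya.cmds as cmds

# Example list of shot assembly files to keep in sync.
SHOTS = ["shot_010.ma", "shot_020.ma", "shot_030.ma"]

def apply_render_settings():
    """Force identical render globals in the currently open scene."""
    cmds.setAttr("defaultRenderGlobals.animation", 1)
    cmds.setAttr("defaultResolution.width", 1920)
    cmds.setAttr("defaultResolution.height", 1080)

maya.standalone.initialize()
for shot in SHOTS:
    cmds.file(shot, open=True, force=True)
    apply_render_settings()
    cmds.file(save=True)
```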
I know most of this pertains to further down the pipeline, so it may seem a little far off now, but trust me when I say prepping for it now could save you more headaches than just about anything shy of Maya’s reference editor. Actually, on second thought, the often unstable reference editor is far easier to live with than having to toss a render you did overnight because of some simple human error. Especially so when you have multiple shots to manage and it becomes easier to make mistakes.
Yeah I plan to have something like that for every light and object for maximum control in Nuke while compositing.
So the SceneAssembler for the Rendering stage in the pipeline will set up all the contribution passes, all the render passes and maaaaybe some render layers as well, and this goes for every light and every object.
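Roughly along these lines, as a sketch (this uses Maya’s render layers; the naming convention is just a guess at this point):

```python
import maya.cmds as cmds

def make_light_layers(objects):
    """One render layer per light, so each light gets its own contribution pass."""
    for light in cmds.ls(lights=True):
        transform = cmds.listRelatives(light, parent=True)[0]
        cmds.createRenderLayer(objects + [transform],
                               name="contrib_" + transform,
                               makeCurrent=False)

make_light_layers(cmds.ls(geometry=True))
```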
Most of the time it will be a few objects only and the rest are done through Matte paintings and 2D foreground / background cards =}
Oh tinting all the lights for the right mood in comp will be so fun!
Some FX test passes from the FX stage in the pipe 
These use the parti_volume shader for the participating media in the air that I want for the final cinematic.
It’s slow with the default raymarch step settings, and the uniform variable, which is supposed to make it look like fog, is just pure ugly and requires tiny raymarch steps to look even close to good, which increases render time by orders of magnitude.
So what I did instead was use big raymarch steps (faster render time but very smooth fog) and then apply a 3D volume noise texture in the color slot of the spot lights. Tada: better quality, and it renders in under 20 seconds on my dual core 2.6 GHz.
No antialiasing was used and 1 shadow ray on the spot lights. Autovolume was set to on in the render settings.
I also moved and rotated the 3D volume noise transform node so it looks like there is wind in the scene 
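In script form the trick boils down to this (a sketch; the light name is whatever your scene uses):

```python
import maya.cmds as cmds

# Big raymarch steps on parti_volume keep it fast but smooth, so the
# detail comes from a 3D volume noise mapped into the spot light's color.
noise = cmds.shadingNode("volumeNoise", asTexture=True)
place3d = cmds.shadingNode("place3dTexture", asUtility=True)
cmds.connectAttr(place3d + ".worldInverseMatrix[0]", noise + ".placementMatrix")
cmds.connectAttr(noise + ".outColor", "spotLightShape1.color")

# Key the placement node so the noise drifts and rotates like wind.
cmds.setKeyframe(place3d, attribute="translateX", time=1, value=0)
cmds.setKeyframe(place3d, attribute="translateX", time=120, value=10)
cmds.setKeyframe(place3d, attribute="rotateY", time=1, value=0)
cmds.setKeyframe(place3d, attribute="rotateY", time=120, value=45)
```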
Couple this with some stock atmospheric footage and a few glares and glows, and I think it will look good
Any updates on your pipeline work? I was following with great interest. 
Well
yeah!
I built myself two “Wand of HDRI”s: I looted the raw materials from an old Christmas tree at my mother’s work and went to my blacksmith to build the two wands with the chrome balls on them, then I wrapped the sticks up in black leather for comfort!
I feel production ready 

Testing some integration stuff right now by shooting reflection maps! I need to get myself a good camera, Canon or Nikon. Would love to have a 360 degree panorama rig with a fish-eye lens though
I’m also working on automating animation transfer from the Layout Pipeline / Pre-Vis Pipeline to the Animation Pipeline. A friend of mine suggested that it might be very useful if some interesting animation pops out of the early stages, and I thought it would be nice to move that over automatically to the real assets in the Animation Pipeline, following the clean input / clean output rule you pointed out, pretty much the same philosophy as in system design in programming
I might also revisit an old auto rig script; that way animation transferring will be a lot easier. Though if something doesn’t match, I plan to pop up a window where you can re-route stuff
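The transfer itself should be simple as long as the interfaces match; something like this (a rough sketch, the namespaces and control names are made up):

```python
import maya.cmds as cmds

def transfer_animation(source_ns, target_ns, controls):
    """Copy anim curves from the Layout proxy rig onto the real anim rig."""
    for ctrl in controls:
        src = source_ns + ":" + ctrl
        dst = target_ns + ":" + ctrl
        if not cmds.objExists(dst):
            # Interface mismatch -- this is where the re-route window pops up.
            print("No match for %s, needs manual re-routing" % ctrl)
            continue
        if cmds.copyKey(src):
            cmds.pasteKey(dst, option="replaceCompletely")

transfer_animation("layoutProxy", "animRig", ["main_ctrl", "arm_L_ik_ctrl"])
```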
Cool. Yeah, keeping animation “pipeline friendly” is a tricky beast. It involves applying the “clean input/output” methodology to the animator, which has the drawback of restricting the animator to what the designer/rigger had in mind in the first place.
E.g. for a character to fingerpaint, one might think it would be useful to make the index finger IK and include that in the Input set of the rig. But if the character was also supposed to make a fist at some point, he should probably have FK too, so the FK controls go into the Input list as well.
This is all well and dandy: all of the animatable controls and their properties are in a single set, for easy export/import using whatever format you choose (json?).
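A minimal sketch of what that export could look like, assuming the set is called "controls_set" and you only care about key times/values per attribute:

```python
import json
import maya.cmds as cmds

def export_animation(set_name, out_path):
    """Dump the keys of every animatable control in the rig's set to json."""
    data = {}
    for ctrl in cmds.sets(set_name, query=True) or []:
        data[ctrl] = {}
        for attr in cmds.listAnimatable(ctrl) or []:
            times = cmds.keyframe(attr, query=True, timeChange=True)
            values = cmds.keyframe(attr, query=True, valueChange=True)
            if times:
                data[ctrl][attr] = list(zip(times, values))
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)

export_animation("controls_set", "shot_010_anim.json")
```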
The constraints you might want to adhere to are two-fold.
[ul]
[li]1. - Ensure that whatever assets you wish to transfer data between have the same interface. If not between characters themselves, then between the proxy/anim/render rigs of the one character (the “proxy” design pattern).
[/li]
[li]2. - Shots are unique and there will always be a need for special tricks when animating a shot.
It could be finalling issues, where the character is looking great all the way through the shot, except for this one spot where the tip of the finger really should mosh up against a glass surface.
It could be broad issues, like in one shot the character is doing a backflip with a heavy backpack, so the rotational pivot would suit this motion better if it was shifted a little bit.
[/li]
[/ul]
You could solve it by:
[ul]
[li]1. - …going back to the drawing board and implementing it into the rig, making sure to maintain backwards compatibility with existing animation. You would most likely want both blackbox and whitebox unit tests on the rig, and perhaps on a few randomly selected existing animations, perhaps along with a “workout” animation in which the character performs some basic “arm-up”, “leg-out”, “jaw-open” motions, depending on the scale of your project (see the sketch after this list). (Although some studios, that really should, don’t!)
[/li][li]2. - …allowing the artist to do whatever in this one particular shot, thus breaking the encapsulation of your rig and putting lots of responsibility on the artist to fix any issues this might cause in the long run.
[/li][/ul]
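A blackbox “workout” test doesn’t have to be fancy; a minimal sketch (the poses and control names are placeholders):

```python
import maya.cmds as cmds

# Workout poses exercising the rig's published interface (placeholder names).
WORKOUT = [
    ("arm_L_ctrl", "rotateZ", 90.0),  # arm-up
    ("leg_L_ctrl", "rotateX", 45.0),  # leg-out
    ("jaw_ctrl", "rotateX", 20.0),    # jaw-open
]

def test_rig(rig_path):
    """Blackbox check: every published control still exists and takes its pose."""
    cmds.file(rig_path, open=True, force=True)
    for ctrl, attr, value in WORKOUT:
        assert cmds.objExists(ctrl), "backwards compatibility broken: " + ctrl
        cmds.setAttr(ctrl + "." + attr, value)
        assert cmds.getAttr(ctrl + "." + attr) == value
```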
Ideally, one would closely study the actions a character will have to perform before doing any rigging work, to minimise any changes necessary in retrospect, since one person’s research is nothing compared to a group of people, or even a whole company, standing still waiting for an update.
Different people solve this issue in different ways. Some studios prefer having the clean input/output methodology be “Clean Rig In -> Clean Animation Curves Out”, letting a team of “Finalling Artists” take all the heat of fixing penetrations/collisions and some finer details (Framestore). Others let artists do whatever to the characters with whatever tools they know, as long as the output is vertex positions per frame (Weta). One puts more responsibility on the technical artists, while the other puts it on the animators, and which way to go depends on the level of comfort the Lead/Supervisor or Head has with the artists in question.
Woah, this got long. Hope it makes sense. And yes, most of this still doesn’t apply to your project since you’re doing it all on your own, but since the idea is to keep things modular, light and easy to change, I think it could still be a useful approach for whatever sized project you do.
Hey!
Thanks for the epic reply =}
I’m going with the proxy/anim/render rigs having exactly the same interface, for easy data transfer from the Layout Scene (uses proxy) to the Animation Scene (uses anim). The interface will be easily maintained by an auto rigging tool.
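Checking that the interfaces actually match can be as dumb as comparing the control sets of the two rigs (a sketch; the set names are my own convention, i.e. made up):

```python
import maya.cmds as cmds

def same_interface(proxy_set, anim_set):
    """True if the proxy and anim rigs publish identical control names."""
    strip = lambda nodes: set(n.split(":")[-1] for n in nodes or [])
    return strip(cmds.sets(proxy_set, query=True)) == \
           strip(cmds.sets(anim_set, query=True))
```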
So the Layout Scene will transfer raw curves to the Animation Scene through a script to give a good starting point, but the Animation Scene will only spit out vertex points (an Alembic cache), because I really don’t want anything animatable in the fx / sim / lighting / rendering stages. If a problem occurs it should be kicked back to the appropriate department imo.
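So the Animation stage’s clean output is just a bake, something like this (a sketch; it assumes the Alembic plug-in and a hypothetical geometry group):

```python
import maya.cmds as cmds

def export_cache(root, start, end, out_path):
    """Bake the shot down to vertex points -- no rigs survive past Animation."""
    cmds.loadPlugin("AbcExport", quiet=True)
    job = "-frameRange %d %d -root %s -file %s" % (start, end, root, out_path)
    cmds.AbcExport(j=job)

export_cache("|geo_grp", 1, 120, "shot_010_anim.abc")
```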
Which is why I also think a lighter shouldn’t tweak animations or tweak shaders. I value a texture pixel as much as I value a shader parameter as much as I value an animation curve, and they all belong to their appropriate departments. This keeps things consistent imo across the entire project, instead of having random tweaks by random people here and there
The issue of shot uniqueness is a good point and something I hadn’t thought about. I’ve seen some pipelines that have a “Finaling Stage” as well, right before rendering (like you said about Framestore), where they do additional tweaks here and there with deformers. It could be done there, or, just as you say, it could be kicked back to the right department.
Can’t say for sure which one I like; I’m not a fan of having random edits late in the pipeline. It sounds like a compositor trying to make a render look good when the rendering and shaders were done poorly. It’s not the compositor’s job to do that; he should only assemble everything, layer stuff with atmospherics and do the final color corrections. The renders should look 80% good when they arrive at the compositor, and the last 20% should be done there.
But of course, in the real world stuff like this is unavoidable (time constraints, people, management). Still, I always aim for perfection at the 100% level and beyond.
I’ve seen some Blur Studio tools which I thought were quite cool, especially their point cache placer tool with randomized accessories. I don’t think it was used for hero animation though, since that tool was there to create mass army / crowd shots with different animations.
I really need to check out a few other tools that are used by the industry as well. I’m mostly interested in the Animation Pipeline tools, since that’s where I think stuff gets very volatile, dangerous and explosive
My best regards
[QUOTE=LoneWolf]Can’t say for sure which one I like; I’m not a fan of having random edits late in the pipeline. It sounds like a compositor trying to make a render look good when the rendering and shaders were done poorly. It’s not the compositor’s job to do that.[/QUOTE]
It usually boils down to what is most practical in terms of time, budget or quality requirements. If you only knew how much work is performed on top of someone else’s hacked performance. ^^ It isn’t ideal, but the movie has to get made, and sending shots backwards is always very costly. In the end it’s all one big team effort, and everyone has to make the best of what they get.
Meant to add: the most important ingredient of any pipeline is the KISS principle - Wikipedia
Yup, that applies to programming as well =}
Playing around with fluids for the moment, needed a break
This smoke will come out of the cannons after each volley and then cool down for 4 seconds or so
It might be too fast right now, so I will probably slow it down and diffuse it more towards the end.

