Need some input for a new (Maya) pipeline tool

So! I’m considering developing a Maya/V-Ray pipeline tool for my studio: basically something that lets us work locally on our files, “submit” render/prepass instances of those files to a project folder on the server, and fix all paths so they work both on our in-house farm and on the online farm we use.

So basically you:

  1. Press button #1: it generates a master file on our server.

  2. Press button #2: it generates a file that writes out the GI prepass files.

  3. Press button #3: it generates the render file that is to be submitted to the farm, which reads the GI prepass.
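The path-fixing part of the steps above can be as simple as a prefix rewrite done just before saving each server copy. A minimal sketch in plain Python; the drive letters and share names in the mapping table are made-up placeholders, not an actual setup:

```python
# Hypothetical mapping from local working paths to farm-visible paths;
# the entries here are placeholders, not a real studio layout.
PATH_MAP = {
    "D:/work/project": "//server/projects/project",
}

def remap_path(path):
    """Rewrite a local file path so the render farm can resolve it."""
    norm = path.replace("\\", "/")
    for local, remote in PATH_MAP.items():
        if norm.lower().startswith(local.lower()):
            return remote + norm[len(local):]
    return norm  # paths outside the mapping are left untouched

print(remap_path(r"D:\work\project\textures\wall.exr"))
# -> //server/projects/project/textures/wall.exr
```

The same function could be run with a second mapping table to produce the online-farm variant of the file.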

So for the actual modification of the server files, I guess there are two ways of doing it. Either I store my settings internally in the code > change the settings > save a copy of the working file on the server > revert the settings, or I save a copy on the server > modify it in mayabatch.
Which approach would you prefer? I haven’t worked with Maya in batch mode before, so I’m not sure how easy it is to use. Can you pass arguments through to it?
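(For reference, one common way to get arguments through mayabatch is to bake them into the `-command` string. A rough sketch; `prepass_setup` is a hypothetical module you’d put on Maya’s script path, and the executable name assumes the Windows `mayabatch` binary:)

```python
import subprocess  # only needed if you actually launch the process

def build_batch_command(scene, mode):
    """Build a mayabatch invocation that runs a setup script with an argument.

    The argument is smuggled in through the -command string: MEL's
    python() call runs a line of Python inside the batch session.
    """
    py = "python(\"import prepass_setup; prepass_setup.run('%s')\")" % mode
    return ["mayabatch", "-file", scene, "-command", py]

cmd = build_batch_command("//server/projects/shot01_master.mb", "gi_prepass")
# subprocess.check_call(cmd)  # would launch Maya in batch mode
```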

The reason I’m doing this is primarily that our file server has seen better days, and it would be great to be able to work on the oh-so-much-faster SSD, without the clutter of n render masters and prepass files and whatnot, and without having to bother with all the file setup that could easily be automated.

Plus it would be kinda fun to code something a little more ambitious than just the 4-lines-per-script I usually get to do :slight_smile:

With that being said, it would also be great to hear your input on whether this is worth it at all, or if I’m just overcomplicating things.

Cheers!

Johan

You can run MayaBatch on the server, but I’d think about running a Maya Python tool with a WSGI front end that listens for incoming requests over HTTP, processes them using maya.standalone, and then notifies you about the results. The two advantages here are that (a) you can write the guts of the thing in ‘real Python’ if you use maya.standalone rather than batch, and (b) you can control things like job queues inside the application instead of worrying about command lines and so on.

Here’s a decent link to get you started with WSGI. As you can see, the ‘server’ is just a function that gets called every time a web request comes in; usually your function will read the request URL and take some action based on the URL or the query string. If you run this code:


```python
from wsgiref.simple_server import make_server

def simple_app(environ, start_response):
    # A WSGI app is just a callable: it receives the request environment
    # and a start_response callback, then yields the response body.
    status = '200 OK'
    headers = [('Content-type', 'text/html')]
    start_response(status, headers)

    # In Python 3, WSGI response bodies must be byte strings
    yield b"<html><h2>hello world</h2></html>"

httpd = make_server('', 8000, simple_app)
print("Serving on port 8000...")
httpd.serve_forever()
```

and then point a browser at http://127.0.0.1:8000 you’ll get a ‘hello world’ message. In a real application you’d parse that URL and delegate the actual work to a class that does something more interesting.
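Inside the WSGI callable, the path and query string arrive as `environ['PATH_INFO']` and `environ['QUERY_STRING']`. The delegation step could look something like this (pure stdlib; the handler names and job types are made-up placeholders, not part of any real API):

```python
from urllib.parse import parse_qs

# Hypothetical job handlers; in the real tool these would drive maya.standalone.
def make_master(params):
    return "master file queued for %s" % params["scene"][0]

def make_prepass(params):
    return "gi prepass queued for %s" % params["scene"][0]

ROUTES = {
    "/master": make_master,
    "/prepass": make_prepass,
}

def dispatch(path, query):
    """Look up the handler for a request path and hand it the parsed query."""
    handler = ROUTES.get(path)
    if handler is None:
        return "404: no such job type"
    return handler(parse_qs(query))

print(dispatch("/master", "scene=shot01.mb"))
# -> master file queued for shot01.mb
```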

If you’re feeling more ambitious you could also try the same thing in Django. I just finished a project doing the same thing and was very pleased with Django, although it’s more complex: it’s a full-blown web application framework, which means you have to worry about running a real server. The WSGI alternative can just be a standalone server that you run from a batch file or cron job.

Sorry for the late reply, the whole holiday debacle took me a bit by surprise. But this sounds awesome! I’ve never heard of WSGI, but it seems doable enough, even for someone at my skill level :slight_smile: Thanks!

You can also use something like Werkzeug or Flask. We use Werkzeug as it’s pretty easy to map a URL to a function with arguments and whatnot. If all you’re doing is a couple of simple URLs, then theodox’s solution should work fine.
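The bit that routing libraries like Werkzeug buy you, URL rules with named arguments mapped to functions, can be mimicked in a few lines of stdlib regex. This is just an illustration of the idea, not Werkzeug’s actual API:

```python
import re

# URL patterns with named groups, each mapped to a function taking those
# groups as keyword arguments. The rule and handler here are placeholders.
ROUTES = [
    (re.compile(r"^/submit/(?P<scene>[^/]+)/(?P<step>\w+)$"),
     lambda scene, step: "submitting %s for %s" % (scene, step)),
]

def route(path):
    """Match a request path against the rules and call the first handler."""
    for pattern, func in ROUTES:
        m = pattern.match(path)
        if m:
            return func(**m.groupdict())
    return "no route for %s" % path

print(route("/submit/shot01.mb/prepass"))
# -> submitting shot01.mb for prepass
```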