Is it possible to automatically consolidate builder calls in scons?

I use scons to process a directory containing models in 3D Studio Max format.

Example:

#Assume a builder called ToASE

env.ToASE("model.ase", "model.max")

Scons is smart enough to cache the results, so if I ever try to convert the same model to the ASCII scene export (ASE) format again, it will pull the ASE output from the cache and return instantly.

The hypothetical ToASE builder also accepts lists of inputs
env.ToASE(["model1.ase", "model2.ase"], ["model1.max", "model2.max"])

If it ever gets the exact same request, “convert models 1 and 2 into ase”, it will pull the output files from the cache.

The way I have the builder implemented, it is cheaper to convert multiple max models at the same time if a cached version is unavailable. If the list of models stays constant, even better, since the whole pack can be pulled from the cache.

But what if I add another model, say model3.max?

If I've been lumping all my models together, it will look like this:
env.ToASE(["model1.ase", "model2.ase", "model3.ase"], ["model1.max", "model2.max", "model3.max"])

The problem is that the cached statement is the one containing just the two models, so my system will proceed to convert them ALL again. Because of the overhead of reconverting every model whenever I add a new one, I end up specifying each conversion individually. That way I pay only a small penalty when I add one model, but a huge one on a fresh build with no cache.

If I could tell scons that each input and output is individually cacheable, this would make my life easier.

Or if I could get scons to automatically convert

env.ToASE("model1.ase", "model1.max")
env.ToASE("model2.ase", "model2.max")
env.ToASE("model3.ase", "model3.max")

to

env.ToASE(["model1.ase", "model2.ase", "model3.ase"], ["model1.max", "model2.max", "model3.max"])

while retaining the benefits of individual file caching.
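
One possibility I have not tried: newer SCons releases document a batch-builder feature, where an Action created with batch_key=True is run once per batch and $CHANGED_SOURCES / $CHANGED_TARGETS expand to only the out-of-date files, so each target stays individually cacheable. A rough sketch, assuming that feature is available and that max2ase is a hypothetical command-line converter accepting paired source and target paths:

# Sketch only: assumes SCons batch-builder support (the Action batch_key
# keyword) and a hypothetical max2ase command taking sources then targets.
from SCons.Script import Action, Builder, Environment

batch_action = Action('max2ase $CHANGED_SOURCES $CHANGED_TARGETS',
                      batch_key=True)
to_ase = Builder(action=batch_action, suffix='.ase', src_suffix='.max')
env = Environment(BUILDERS={'ToASE': to_ase})

# Each call is cached per file, but out-of-date conversions get
# grouped into a single max2ase invocation.
for name in ('model1', 'model2', 'model3'):
    env.ToASE(name + '.ase', name + '.max')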

If anyone is using scons like this to process max models, please let me know if you have encountered similar problems.

Note: The reason the per-model conversion time is shorter on an array of models is that I only have to launch and close 3dsmax once, whereas with individual conversions I have to open and close max once per file.

Hey JonnyRo,

I haven’t had enough experience with scons to help directly, but…

The following is one idea I have gotten to work in the past:

Create a job server, feeding it all of the files you need to be processed.
Have the job server start X instances of the processing application (Max, Maya, etc).
Each instance should go into a loop, connecting to the server, requesting and completing jobs.
When there are no more jobs, the instances close themselves.
The server then closes itself.

If you could make scons start the server and feed it jobs, scons would still check the cache before submitting them. That way you can let the server manage the opening and closing of Max and have the best of both worlds.

Keir

Thanks. I am beginning to come to the conclusion that what you suggest is the correct strategy.

I mocked up a prototype a few weeks ago that worked like this.

=Jobserver Process=
Written in Python
Exposes the following methods

  1. get_job_output_dir()
  2. get_job_submission_dir()
  3. setup_job(input_format_name, output_format_name) returns job_id
  4. start_job(job_id)
  5. query_job_status(job_id) returns one of JOB_PENDING_START, JOB_IN_PROGRESS, JOB_FAILED, JOB_COMPLETE
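
A minimal sketch of how the server might expose those methods, using XML-RPC (my choice of transport; the real thing could use anything) and an in-memory job table:

import itertools
from SimpleXMLRPCServer import SimpleXMLRPCServer  # Python 2 module name

JOB_PENDING_START, JOB_IN_PROGRESS, JOB_FAILED, JOB_COMPLETE = range(4)

class JobServer(object):
    def __init__(self):
        self._ids = itertools.count(1)
        self._status = {}  # job_id -> one of the constants above

    def get_job_output_dir(self):
        return r'\\server\jobs\output'      # hypothetical network drop

    def get_job_submission_dir(self):
        return r'\\server\jobs\submission'  # hypothetical network drop

    def setup_job(self, input_format_name, output_format_name):
        job_id = str(next(self._ids))
        self._status[job_id] = JOB_PENDING_START
        return job_id

    def start_job(self, job_id):
        self._status[job_id] = JOB_IN_PROGRESS
        return True  # XML-RPC cannot return None unless allow_none is set

    def query_job_status(self, job_id):
        return self._status.get(job_id, JOB_FAILED)

if __name__ == '__main__':
    server = SimpleXMLRPCServer(('0.0.0.0', 8000))
    server.register_instance(JobServer())
    server.serve_forever()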

=Client Process=
Written in Python
Exposes a single function (intended for use as an scons builder action)

  1. max_2_intermediate(target, source, env)

import os
import sys
import time
import xmlrpclib  # the transport is my assumption; matches an XML-RPC job server

# Status constants, assumed to agree with the server's values
JOB_PENDING_START, JOB_IN_PROGRESS, JOB_FAILED, JOB_COMPLETE = range(4)

# Proxy to the job server; the host and port are hypothetical
jobserverproxy = xmlrpclib.ServerProxy('http://jobserver:8000')

def max_2_intermediate(target, source, env):
    # For now, only process one file at a time
    source_max_file = str(source[0])
    target_intermediate_file = str(target[0])

    # Get drop locations on the network for submitting jobs and fetching results
    job_submission_drop = jobserverproxy.get_job_submission_dir()
    job_output_drop = jobserverproxy.get_job_output_dir()

    # Set up the job to get a job id (input format, then output format)
    job_id = jobserverproxy.setup_job('max', 'ase')

    # Copy the input file to the job submission drop
    # I use the system copy command because shutil.copyfile is slow for big files
    ret = os.system('copy %s %s' % (source_max_file, job_submission_drop))

    if ret != 0:
        sys.exit(-1)  # or something kinder, like a catchable exception

    # If we have reached this point, assume the copy succeeded;
    # signal the server to start the job
    jobserverproxy.start_job(job_id)

    while True:
        time.sleep(5)  # so that we don't spin too fast
        response = jobserverproxy.query_job_status(job_id)
        if response == JOB_PENDING_START:
            print "Job waiting for client to signal start."
            # this is bad, we just sent the start command
        elif response == JOB_IN_PROGRESS:
            print "Job in server queue for conversion."
        elif response == JOB_FAILED:
            print "Server was unable to complete conversion"
            sys.exit(-1)  # probably could do better than this, maybe return -1
        elif response == JOB_COMPLETE:
            print "Jobserver indicates conversion is complete"
            break  # leave loop

    # Job is complete, retrieve the results from the output drop
    job_output_path = os.path.join(job_output_drop, job_id)
    ret = os.system('copy %s %s' % (job_output_path, target_intermediate_file))

    # Ideally the copy sets the proper name on the target intermediate file

    return

Note the above code was written from memory, so it may contain syntax errors.
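
For reference, wiring a function like this into SCons as a builder would look roughly like the following (the suffixes are taken from the ToASE examples above):

# Sketch: registering max_2_intermediate as the action of a builder.
from SCons.Script import Builder, Environment

to_ase = Builder(action=max_2_intermediate,
                 suffix='.ase', src_suffix='.max')
env = Environment(BUILDERS={'ToASE': to_ase})

env.ToASE('model1.ase', 'model1.max')

Note that since the action blocks while polling, the server only sees several jobs at once if scons is run with -j N so that multiple builder actions poll concurrently.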

The server process can be implemented in a number of ways. I imagine its main loop would look something like this (pseudocode):


while running:
  obtain a list of all jobs that are queued up (start has been signaled)
  compose a maxscript that will process all of these jobs
  launch max with the script
  wait for results; max will close itself on completion
  update each job's status with the results and remove it from the queue
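
Fleshed out a little in Python (the queue helpers, compose_maxscript, and the 3dsmax command line are all assumptions on my part):

import subprocess
import time

MAX_EXE = r'C:\3dsmax\3dsmax.exe'  # hypothetical install path

def server_loop(job_queue):
    while True:
        jobs = job_queue.pop_started_jobs()    # hypothetical queue helper
        if jobs:
            script = compose_maxscript(jobs)   # writes one .ms covering all jobs
            # -U MAXScript <file> runs the script; max exits when it finishes
            ret = subprocess.call([MAX_EXE, '-U', 'MAXScript', script])
            for job in jobs:
                if ret == 0:
                    job_queue.mark_complete(job)
                else:
                    job_queue.mark_failed(job)
        time.sleep(5)  # avoid spinning while the queue is empty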
 

I’ve started developing a networked max-to-ase converter here.

Nice

It’s coming along. I’ve been doing most of my development using png->dds conversion, as I don’t have a copy of 3dsmax at home.

I have tried to keep the layout simple and not over-optimize.