Custom Cache Handler

David Rhodes 18 hours ago

SideFX support is the best. Here’s the reply for posterity:

Cache files need to be added to the work items when they’re being generated, because the logic for checking cache files runs before the work item cooks. PDG’s cache system determines whether the work item should actually cook in the first place, or whether it should be marked as cooked from cache immediately. The code in a Python Script node defines what gets executed while the work items are cooking, so it’s too late to specify cache file paths there.

You’ll need to use a Python Processor instead and break your code into two separate pieces. I’ve attached an updated example that uses a Python Processor to generate some work items with expected outputs in the Generate callback, and then writes those files to disk in the Cook Task callback. In the attached example, you can RMB → Generate Node on the Python Processor and then MMB on one of the generated work items – they’ll list an “Expected Output” path in the attribute panel. That’s the path PDG checks to determine whether the work item should cook, or whether the cache files already exist.

If all you want to do is run an external process, you could also just use a Generic Generator TOP instead. That node lets you specify a command string, either as a single line or using a multi-parm based command builder, along with a list of files the command is expected to produce. I also included an example of that in the same file. Using Python’s subprocess module from a Python Script node is generally not the preferred way to run an external process, because it bypasses the TOP network’s scheduler. Any work item that has a command set on it will automatically be queued with the active scheduler and run out of process, using whatever settings and concurrency limits are set on the scheduler.
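For posterity, here’s a minimal sketch of how that two-part split on a Python Processor might look (this is not the attached file; the `__PDG_DIR__` cache path, the `write_my_cache()` helper, and the exact `pdg.WorkItem` method names like `addExpectedOutputFile` are assumptions you should check against the PDG docs for your Houdini version):

```python
# --- "Generate" snippet on the Python Processor ---
# item_holder and upstream_items are provided by the node. This runs before
# anything cooks, which is why the expected outputs have to be declared here.
for upstream_item in upstream_items:
    new_item = item_holder.addWorkItem(parent=upstream_item)

    # Register the file this work item is expected to produce. PDG checks this
    # path before cooking; if it already exists, the item can be marked as
    # cooked from cache and the Cook Task code never runs.
    cache_path = "__PDG_DIR__/cache/data_{}.bgeo.sc".format(new_item.index)
    new_item.addExpectedOutputFile(cache_path, "file/geo")

# --- "Cook Task" snippet on the Python Processor ---
# work_item is provided by the node. This only runs when the expected outputs
# are missing or the cache has been invalidated.
for expected in work_item.expectedOutputFiles:
    write_my_cache(expected.path)  # hypothetical function that writes the cache file
```

The key point is that the expected output path is attached during generation, before any cooking happens, so PDG can decide to skip the cook entirely when the cache file is already on disk.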
