I’m just curious what opinions people have about this. What is the best way to organize your in-house tools and scripts with a folder structure? Our current structure has developed over time, which is to say with no organization or forethought whatsoever. Basically, we have a share that everyone maps a drive letter to. Tools and libraries live in their own folders, except when they work with certain apps like Shotgun or Deadline. There is some deviation; for example, Shotgun configurations live in their own folders.
Most projects’ code ends up organized this way. Every time you get a chance to restart, though, you can come up with a slightly more logical layout.
The only thing I would suggest is to get your code into Perforce if you’re already using it. Version control is much better than a shared network drive, and you get more options, like shelving. But you do have to rely on the artists/users getting latest when you update things.
Our setup keeps the entire studio toolset in a GitHub repository. It’s organized into four big categories:
Infrastructure is everything related to delivering the tools to the user. This is where things like “unzip this plugin into the user’s plugin directory if they don’t have it” live. This code is pretty much self-contained, and most of it runs on startup – users don’t interact with it much (apart from a couple of edge cases, like the bug reporter dialog that pops up when something goes wrong and posts a message to the TA team on Slack).
Libraries are code that does work but has no context. A function for finding the UV shell containing a given UV, for converting a transform from Maya coordinates to Unreal coordinates, or for creating the convex hull of a point set are all examples of library code. The libraries tend to be nested two or three levels deep for organization; for example, you’d import mayaScene.uv for UV-related Maya functions or engine.unreal.transform for Unreal transform code.
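To make the “code with no context” idea concrete, here is a minimal sketch of what one such library function might look like. The function name and the exact axis convention are assumptions for illustration, not the poster’s actual code – the point is that it is a pure function with no UI and no knowledge of which tool calls it.

```python
def maya_to_unreal_translation(x, y, z):
    """Convert a translation from Maya's convention (cm, Y-up,
    right-handed) to Unreal's (cm, Z-up, left-handed).

    Swapping the Y and Z axes converts between the two conventions;
    no application API is touched, so this is usable from any tool
    or from a batch process.
    """
    return (x, z, y)
```

Because a function like this depends on nothing but its arguments, it can live at any depth of the library hierarchy (e.g. a hypothetical engine.unreal.transform module) and be reused freely.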
Tools are the things the user sees – the menus, buttons, and dialog boxes.
The two important things about tools are: the UI is separate from the functionality (that way you can batch anything, or combine existing functions into more complex chains of behavior), and tools can’t import other tools. If you want to share code, it’s supposed to be library code, written to be nice, general, and application-agnostic.
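A tiny sketch of that tool/library split, with all names hypothetical: the library function does the actual work and can be batched, while the tool layer only talks to the user and calls down into the library – never sideways into another tool.

```python
# --- library layer (e.g. a hypothetical mayaScene/naming.py) ---
def strip_namespace(node_name):
    """Return a node name without any namespace prefix (pure logic,
    no UI, no selection handling)."""
    return node_name.rsplit(':', 1)[-1]


# --- tool layer: the 'button' the user sees ---
def strip_namespace_tool(selected_nodes):
    """Thin wrapper: apply the library function to the user's
    selection and report back.  In a real tool this would be a
    dialog; here it just prints."""
    results = [strip_namespace(n) for n in selected_nodes]
    print('processed %d nodes' % len(results))
    return results
```

Because the logic lives in the library, a batch process can call strip_namespace directly over thousands of nodes without ever touching the UI layer.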
Exporter is really a special subset of tool, but I keep it separate because it’s usually pretty complex and it has a lot of weird special cases driven by things outside our control – for example, if the engine doesn’t allow objects with duplicate names, you have to have a way to warn the user about that instead of allowing the export to go through.
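The duplicate-name case above can be sketched as a pre-export validation pass. This is an illustrative assumption about how such a check might look, not the poster’s actual exporter; the idea is to collect every problem and surface it to the user before anything is written out.

```python
from collections import Counter


def find_duplicate_names(names):
    """Return the names that appear more than once, sorted for a
    stable report."""
    counts = Counter(names)
    return sorted(name for name, count in counts.items() if count > 1)


def validate_export(names):
    """Run pre-export checks; raise instead of exporting bad data.
    In a real exporter this would populate a warning dialog rather
    than raise."""
    dupes = find_duplicate_names(names)
    if dupes:
        raise ValueError(
            'these names are duplicated and would fail in engine: '
            + ', '.join(dupes))
```

Keeping checks like this as data-in/data-out functions also means they can run headless in a build farm, not just behind the export button.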
I don’t show the folder hierarchy to the users; all our stuff is packed into a single zip file containing the entire current tools distribution. That eliminates all of the special cases caused by locked files, missing files, or odd drive locations. All of the top-level names in the project (‘tools’, ‘exporter’, etc.) are Python packages in the folder that gets zipped; when you distribute the zip file and add its location to the path, all of the imports just work.
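The “add its location to the path” step works because Python’s zipimport machinery treats a zip on sys.path like a directory of packages. A self-contained demonstration with made-up package names (the real distribution would be built by a deploy script, not inline like this):

```python
import os
import sys
import tempfile
import zipfile

# Build a stand-in for the studio tools zip: one pure-Python
# package ('tools') with a top-level __init__.py.
staging = tempfile.mkdtemp()
zip_path = os.path.join(staging, 'studio_tools.zip')
with zipfile.ZipFile(zip_path, 'w') as z:
    z.writestr('tools/__init__.py', "VERSION = '1.0'\n")

# Put the zip itself on the path; zipimport makes every top-level
# package inside it importable, no extraction needed.
sys.path.insert(0, zip_path)
import tools

print(tools.VERSION)
```

Note this only holds for pure-Python code, which is exactly why the compiled-extension caveat below exists.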
One thing to watch out for with Python zip deployments is that you cannot import compiled extensions from inside the zip. Those have to be installed separately, like plugins.
We’re using GitLab and PyPI for source and distribution of all our Python. Basically, we use virtual environments to build package bundles, and editable installs for dev. We only really use pip to resolve dependencies and move things around; the rest is standard entry points into DCCs and standalone applications. We treat everything as a package. I really need to document the system online somewhere.
The folder structure, for the Python-based libs/tools, is simply site-packages for the given lib/tool. This has allowed us to ingest shared toolsets and keep large/complex/clever systems at bay.
Given that Autodesk is super friendly and defaults site-packages to be part of the install directory, how do you get around needing elevated privileges when writing to anything under Program Files?
I’m not writing to Program Files. I create a virtual environment per pipeline and add to site. We never modify any application installations, registry entries, or global variables. Everything we do is scoped to a particular session.
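A sketch of the “add to site” approach described above, using the standard library’s site.addsitedir. The per-pipeline virtualenv location here is a stand-in (a temp directory so the snippet is runnable); the point is that the change is scoped to the current session’s sys.path and never touches the application install or the registry.

```python
import os
import site
import sys
import tempfile

# Stand-in for a per-pipeline virtualenv's site-packages directory;
# in practice this would be a real path chosen per show/pipeline.
venv_site_packages = os.path.join(tempfile.mkdtemp(), 'site-packages')
os.makedirs(venv_site_packages)

# addsitedir appends the directory to sys.path and processes any
# .pth files in it -- the same mechanism a normal site-packages
# directory gets at interpreter startup.
site.addsitedir(venv_site_packages)
```

Because nothing outside the process is modified, two sessions can point at entirely different pipeline environments without interfering with each other.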
What we do is include the binaries in the zip file and unpack them to a directory on the plugin path. We don’t have to support a wide variety of architectures or OSes, so it’s not a big burden.
This is really interesting. I was at PyCon last year, and I heard a presenter say that in most cases you should not use virtual environments to distribute code. Unfortunately, I don’t recall the reason why. Sounds like you made it work though.
If that talk is available anywhere, I’d love to hear/watch it. Our distribution is pretty lightweight. If there’s a good reason not to do it this way, I’m open to it.
How do you get around the issue of reading data files from within the zip file? For instance, taking a piece of code like this…
import os
import json


def foo():
    """
    Read some data from a JSON file that sits side-by-side with the module.
    :return: the parsed data
    """
    data_file = os.path.join(
        os.path.dirname(__file__),
        'bar.json',
    )
    with open(data_file, 'r') as f:
        data = json.load(f)
    return data
Zipping up the foo module, adding the zip path to sys.path, and calling that method returns this error…
FileNotFoundError: [Errno 2] No such file or directory: 'c:\\pyt\\foo.zip\\foo\\bar.json'
Which I assume is because open() expects a directly accessible folder structure. It’s not logic I use too often, but loading stylesheets dynamically onto Qt widgets is a good use case.
How do you get around this issue, or do you keep all data separate from source code as a rule?