The real bottleneck here is just the cmds.polyColorPerVertex calls. Having to call this command at least once per vertex is massively slow. Here is a little benchmark:
from itertools import izip
from functools import partial
from operator import add
import timeit

from maya.api import OpenMaya
from maya import cmds


def new_colorset(mesh, colorset):
    cmds.polyColorSet(mesh, create=True, colorSet=colorset)
    cmds.polyColorSet(mesh, colorSet=colorset, currentColorSet=True)


def get_colorset(mesh, colorset, **kwargs):
    cmds.polyColorSet(mesh, colorSet=colorset, currentColorSet=True)
    return cmds.polyColorPerVertex(mesh + '.vtx[:]', q=True, **kwargs)


def merge_colorsets(mesh, newset, colorsets):
    '''maya.cmds'''

    # Time rgb value adding and packing
    start = timeit.default_timer()
    num_verts = cmds.polyEvaluate(mesh, vertex=True)
    colorsets = (get_colorset(mesh, s, rgb=True) for s in colorsets)
    colors_per_vert = [[0, 0, 0] for i in xrange(num_verts)]
    for s in colorsets:
        # The query returns a flat [r, g, b, r, g, b, ...] list,
        # so step through it three values at a time.
        for i, j in enumerate(xrange(0, num_verts * 3, 3)):
            colors_per_vert[i][0] += s[j]
            colors_per_vert[i][1] += s[j + 1]
            colors_per_vert[i][2] += s[j + 2]
    print 'pack and add color: %s seconds' % (timeit.default_timer() - start)

    # Time creating and applying vertex colors
    start = timeit.default_timer()
    new_colorset(mesh, newset)
    vtx = mesh + '.vtx[%d]'
    for i, rgb in enumerate(colors_per_vert):
        cmds.polyColorPerVertex(vtx % i, rgb=rgb)
    print 'apply color: %s seconds' % (timeit.default_timer() - start)


def api_merge_colorsets(mesh, newset, colorsets):
    '''maya.api.OpenMaya'''

    # sum() doesn't work with OpenMaya.MColor objects, make our own that does
    sum_mcolors = partial(reduce, add)

    new_colorset(mesh, newset)
    dagpath = OpenMaya.MGlobal.getSelectionListByName(mesh).getDagPath(0)
    meshfn = OpenMaya.MFnMesh(dagpath)

    # Time rgba value adding and packing
    start = timeit.default_timer()
    default_color = OpenMaya.MColor((0, 0, 0, 0))
    colorsets = (meshfn.getVertexColors(s, default_color) for s in colorsets)
    colors = [sum_mcolors(vert_colors) for vert_colors in izip(*colorsets)]
    print 'pack and add color: %s seconds' % (timeit.default_timer() - start)

    # Time creating and applying vertex colors
    start = timeit.default_timer()
    meshfn.setVertexColors(colors, range(len(colors)))
    print 'apply color: %s seconds' % (timeit.default_timer() - start)


def setup_scene():
    cmds.file(new=True, force=True)
    cmds.polyPlane(name='colored_plane', w=24, h=24, sw=24, sh=24)
    mesh = 'colored_planeShape'
    new_colorset(mesh, 'redset')
    cmds.polyColorPerVertex(mesh, rgb=(1, 0, 0))
    new_colorset(mesh, 'greenset')
    cmds.polyColorPerVertex(mesh, rgb=(0, 1, 0))
    new_colorset(mesh, 'blueset')
    cmds.polyColorPerVertex(mesh, rgb=(0, 0, 1))


def benchmark():
    mesh = 'colored_planeShape'

    setup_scene()
    print 'maya.cmds merge vertex colorsets'
    print '================================'
    start = timeit.default_timer()
    merge_colorsets(
        mesh,
        newset='merged',
        colorsets=('redset', 'greenset', 'blueset')
    )
    print 'total: %s seconds\n' % (timeit.default_timer() - start)

    setup_scene()
    print 'maya.api merge vertex colorsets'
    print '==============================='
    start = timeit.default_timer()
    api_merge_colorsets(
        mesh,
        newset='merged',
        colorsets=('redset', 'greenset', 'blueset')
    )
    print 'total: %s seconds\n' % (timeit.default_timer() - start)


if __name__ == '__main__':
    benchmark()
I’ve tried to keep this fairly close to theodox’s script, minus the clever use of generators, and added a maya.api version for comparison. Here are the results:
maya.cmds merge vertex colorsets
================================
pack and add color: 0.00642395019531 seconds
apply color: 103.396903038 seconds
total: 103.408151865 seconds
maya.api merge vertex colorsets
===============================
pack and add color: 0.00123691558838 seconds
apply color: 0.00776481628418 seconds
total: 0.0191380977631 seconds
These results are pretty impressive: we’ve improved our run time by ~5000x by using the lower-level maya.api. As you can see, virtually ALL of the time in the maya.cmds version is spent on cmds.polyColorPerVertex calls. The real lesson here is that our handling of data had nothing to do with the performance of our script. It had everything to do with a poorly implemented maya.cmds function.
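The pattern is easy to reproduce outside of Maya. This toy benchmark (my own illustration, not from the original post, and a stand-in for the real cmds overhead) compares calling a setter once per element against one batched call:

```python
import timeit

data = list(range(10000))
store = {}

def set_one(i, value):
    # Stand-in for a per-vertex command call like cmds.polyColorPerVertex
    store[i] = value

def set_many(values):
    # Stand-in for a batched API call like MFnMesh.setVertexColors
    store.update(enumerate(values))

# One function call per element vs. a single batched call, 100 reps each
per_item = timeit.timeit(
    lambda: [set_one(i, v) for i, v in enumerate(data)], number=100)
batched = timeit.timeit(lambda: set_many(data), number=100)
```

Even without Maya's undo queue and string parsing on top, the per-item version loses badly to the batched one on call overhead alone.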
Attempting to improve performance using multiprocessing or threading in this case would be premature. If we had a mesh with 200 million verts and 10 colorsets, this would change. Then we might attempt to chunk up this large data set and use multiprocessing.Pool to calculate chunks simultaneously. Even then we might have better avenues to go down before trying multiprocessing. For example, maybe we would use the MItMeshVertex iterator to build up our resulting colors without storing each colorset in memory, OR use numpy datatypes.
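As a rough sketch of the numpy route (my own illustration, not from the original post), the flat [r, g, b, r, g, b, ...] lists that the cmds query returns can be summed in one vectorized step instead of a nested Python loop:

```python
import numpy as np

# Hypothetical flat rgb lists for a 4-vertex mesh, shaped like the
# return value of cmds.polyColorPerVertex(..., q=True, rgb=True)
redset   = [1.0, 0.0, 0.0] * 4
greenset = [0.0, 1.0, 0.0] * 4
blueset  = [0.0, 0.0, 1.0] * 4

# Stack the sets into a (num_sets, num_verts, 3) array, then
# sum across the colorset axis in a single vectorized operation
colorsets = np.array([redset, greenset, blueset]).reshape(3, -1, 3)
colors_per_vert = colorsets.sum(axis=0)  # shape: (num_verts, 3)
```

The same idea scales to millions of vertices, since the summation happens in C rather than in a Python loop.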
I guess this could all be boiled down to a simple list:
- maya.cmds
- maya.api
- numpy
- python multiprocessing/threading if you can break up your task
- c++
Start from the top and work your way down until you’ve solved your performance issues =).