I’m trying to set up a tool to help our cinematics guys quickly optimize a scene based on which polygons are actually visible from set camera angles.
Is there a method for doing raycast testing on individual faces or some other way to approach this?
What I’m currently trying is creating a light at the camera, baking the lighting info to the verts, and then deleting any face whose vertices all bake completely black, but this is pretty obviously a very slow method to use in a large scene.
Are you talking about some way to compute backface culling in software?
I’m not sure you want to go the raycast route if that’s what you’re after; why not just take the dot product of each face normal with the camera direction?
Unless I totally missed your point.
So my background in mathematics isn’t very strong, but if I’m understanding you correctly, you’re suggesting that I take the dot product of each face normal with the camera direction, and if the result is greater than zero, that face is pointing away from the camera and isn’t visible?
That seems much simpler than what I was trying, and I should be able to implement it fairly easily (I’ve actually been able to find an example or two of that method being used; the gist is below). Thanks for the help!
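For anyone finding this later, the sign convention in the examples I found boils down to roughly this (assuming the view vector points from the camera into the scene):

import pymel.core.datatypes as dt

# View vector pointing from the camera into the scene, and an outward
# face normal pointing straight back at the camera:
view = dt.Vector(0, 0, -1)
normal = dt.Vector(0, 0, 1)

# A negative result means the face points toward the camera (keep it);
# a positive result means a back-facing polygon (safe to cull).
print(dt.dot(view, normal))  # -1.0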
So raycasting is definitely not the way to go. For reference, this is doing what I want at a prototype level:
import pymel.core as pm
import pymel.core.datatypes as dt

camera = pm.PyNode('persp')
cameraShape = camera.getShape()
selGeo = pm.ls(sl=1)
pm.select(cl=1)

# Normalized world-space view vector, from the camera position to
# its center of interest.
cameraPos = camera.getTranslation(space='world')
cameraCOI = pm.camera(camera, q=1, wci=1)
cameraVector = dt.normal([coi - pos for coi, pos in zip(cameraCOI, cameraPos)])

for geo in selGeo:
    geoShape = geo.getShape()
    # A positive dot product means the face normal points the same way
    # as the view vector, i.e. the face is back-facing.
    for i in range(len(geo.faces)):
        polyNormal = geoShape.getPolygonNormal(i, space='world')
        if dt.dot(cameraVector, polyNormal) > 0:
            pm.select(geo.f[i], add=1)

selFaces = pm.ls(sl=1)
if selFaces:
    pm.delete()
pm.select(selGeo, r=1)
The one advantage that raycasting might have over this method is that this won’t detect faces that point toward the camera but are completely obscured by other geometry. I may look at doing a combo method that could catch both types of hidden faces; maybe something like the sketch below.
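A rough, untested sketch of what that occlusion pass could look like using the API’s MFnMesh.closestIntersection, in case it helps anyone. It only tests a mesh against itself (in practice you’d want to cast against every visible mesh), and occluded_faces is just a placeholder name:

import maya.api.OpenMaya as om
import pymel.core as pm

def occluded_faces(mesh_name, camera_name):
    """Indices of faces whose centers are hidden behind nearer faces
    of the same mesh, as seen from the given camera."""
    cam_pos = om.MFloatPoint(*pm.xform(camera_name, q=1, ws=1, t=1))
    sel = om.MSelectionList()
    sel.add(mesh_name)
    fn_mesh = om.MFnMesh(sel.getDagPath(0))
    points = fn_mesh.getPoints(om.MSpace.kWorld)
    hidden = []
    for i in range(fn_mesh.numPolygons):
        # Approximate the face center by averaging its vertex positions.
        vtx = fn_mesh.getPolygonVertices(i)
        center = om.MFloatPoint(sum(points[v].x for v in vtx) / len(vtx),
                                sum(points[v].y for v in vtx) / len(vtx),
                                sum(points[v].z for v in vtx) / len(vtx))
        ray_dir = (center - cam_pos).normal()
        hit = fn_mesh.closestIntersection(cam_pos, ray_dir,
                                          om.MSpace.kWorld, 9999.0, False)
        # If the nearest thing the ray hits isn't this face, something
        # else sits in front of it.
        if hit and hit[2] not in (-1, i):
            hidden.append(i)
    return hidden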
Dot products were my first reaction too. As you mentioned though, that doesn’t help with the faces hidden behind other objects.
There is another option I can think of that would give better results, but it could also potentially be a huge risk :D. The paint selection tool can collect the exact faces you need to omit from deletion, but there is no “flood select” from the camera. So… you could take control of the mouse :laugh:. Hide all the UI elements, script a mouse click-and-hold in the center of the screen, move to the top left, then sweep across the screen in columns.
It certainly would not be very safe, but I bet it could get the job done. :laugh:
Just a dirty ‘solution’…
but Maya does a pretty efficient z-depth pass. You could render it out quickly, then project it back through the scene camera.
Actually, typing this out: you could pretty much just project a flat color into the scene from the camera using a temporary visualization material, and then sample the color per poly to see whether it’s visible or not.
The advantage of the zDepth would be perhaps auto-LOD? Not really sure…just thinking out loud.
It’s pretty similar to your idea of light baking… but I don’t think sampling projections would need a bake, and it’d probably be more visually interactive. The projection math would be roughly what’s sketched below.
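To make the projection idea a bit more concrete, here’s a minimal, untested sketch that projects a world-space point (a face center, say) into normalized screen coordinates, so you could sample a rendered z-depth or flat-color image at that spot. point_to_screen is a placeholder name, and reading the rendered image back in is left out:

import math
import pymel.core as pm

def point_to_screen(camera, world_point):
    """World-space point -> (u, v) in 0-1 screen space, plus the
    point's distance in front of the camera; None if behind it."""
    # World -> camera space via the inverse of the camera's world matrix.
    p = pm.datatypes.Point(world_point) * camera.getMatrix(worldSpace=True).inverse()
    if p.z >= 0:  # the camera looks down -Z, so this point is behind it
        return None
    # Maya reports the fields of view in degrees.
    hfv = math.radians(pm.camera(camera, q=1, horizontalFieldOfView=1))
    vfv = math.radians(pm.camera(camera, q=1, verticalFieldOfView=1))
    u = 0.5 + 0.5 * (p.x / (-p.z * math.tan(hfv / 2.0)))
    v = 0.5 + 0.5 * (p.y / (-p.z * math.tan(vfv / 2.0)))
    return u, v, -p.z

From there you’d sample your rendered image at (u, v) and compare the stored depth (or flag color) against the face’s own distance.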
Perhaps you could do a preprocessing step and check the bounding box of each object against the view frustum? The Maya API’s MBoundingBox class, with its intersects and contains methods, might be useful for it. The idea is that you test your object’s bounding box and points rather than its faces; a quick sketch is below.
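For instance, a coarse pre-cull along those lines might look like the following, with the caveat that the frustum is approximated here by an axis-aligned box built from eight world-space corner points you’d compute yourself (maybe_visible and frustum_corners are placeholder names):

import maya.api.OpenMaya as om
import pymel.core as pm

def maybe_visible(node, frustum_corners):
    """False means the node's bounding box is definitely outside the
    coarse frustum box; True means it still needs the per-face test."""
    frustum_box = om.MBoundingBox()
    for corner in frustum_corners:  # eight world-space (x, y, z) tuples
        frustum_box.expand(om.MPoint(*corner))
    # exactWorldBoundingBox returns [xmin, ymin, zmin, xmax, ymax, zmax].
    bb = pm.exactWorldBoundingBox(node)
    node_box = om.MBoundingBox(om.MPoint(bb[0], bb[1], bb[2]),
                               om.MPoint(bb[3], bb[4], bb[5]))
    return frustum_box.intersects(node_box)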
[QUOTE=dgovil;14269]…you could pretty much just project a flat color into the scene from the camera using a temporary visualization material, and then sample the color per poly to see whether it’s visible or not.[/QUOTE]
So I tried the baking method, and it works reasonably well on geo with a high enough vertex density, but for low-poly geo (so, everything) it’s wildly inaccurate. I’m not familiar with the method you’re talking about here; is there a resource you could point me to that would get me started?