Basic question on Maya shading networks

Here’s the setup: you have a single mesh connected to a lambert and a texture.

Create a simple scene with a poly sphere and hook it up to lambert1. Set lambert1 to get its diffuse color from a file texture. In the file node, select a texture. You get the texture wrapped on the sphere. If you delete history, you basically have three nodes: a place2dTexture node with multiple connections to the file node, and the file node with either one or two connections to lambert1 (outColor and outTransparency, with outColor being the important one).

The outColor attribute on the file node is a compound attribute made of three floats: outColorR, outColorG, and outColorB. And whenever I do a getAttr on outColor, it comes back as 0, 0, 0.
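For what it’s worth, here’s the sort of thing I mean - a minimal C++ sketch of reading outColor through the API (it assumes the file node is literally named file1). As far as I can tell it would just hand me back a single RGB triple, same as getAttr:

[CODE]
// Minimal sketch: read file1.outColor through the C++ API.
// Assumes the file node is named "file1" (the default).
#include <maya/MSelectionList.h>
#include <maya/MFnDependencyNode.h>
#include <maya/MPlug.h>
#include <maya/MGlobal.h>

void printFileOutColor()
{
    MSelectionList sel;
    sel.add("file1");                     // assumes the default file node name
    MObject fileNode;
    sel.getDependNode(0, fileNode);

    MFnDependencyNode fn(fileNode);
    MPlug outColor = fn.findPlug("outColor", false);

    // outColor is a compound of three float children: outColorR/G/B.
    float r = outColor.child(0).asFloat();
    float g = outColor.child(1).asFloat();
    float b = outColor.child(2).asFloat();

    MGlobal::displayInfo(MString("outColor = ") + r + ", " + g + ", " + b);
}
[/CODE]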

So, the basic question is: how does the lambert node get all the texture data?

While the texture has only three channels, it holds far more data than three floats. It seems like there’s more going on here than just data flowing across the outColor connection.

The next question is: if I were implementing the lambert shader in C++, how do I get access to all the texture data? It doesn’t seem that I can just read it from outColor - or can I? (Also remember that the place2dTexture node can affect the data, so I can’t just read it from the file specified in the file node.)

Thanks!

I’m not a Maya person, but I am a shader person, so I’ll try to give an answer (but I hope someone more familiar with Maya shader networks can give a better one).

The file node is really only passing color data to the material. The place2dTexture node determines WHICH pixel of the file texture to pass. This is completely independent of the actual material, so the lambert doesn’t have a say in the process and doesn’t need to know anything beyond the texel it needs for whatever it is currently rendering. If you think of the network as data flowing towards the material, the material only ever needs a single color from the file node: the file node (by way of the 2d/3d placement node) decides which color to supply, and the material just processes it.
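If it helps, here’s a tiny self-contained sketch of that idea in C++. Nothing in it is Maya or GPU API - the Float3 struct, the 2x2 “texture”, and the nearest-neighbour lookup are just stand-ins - but it shows the shape of the flow: one UV pair per pixel, one texel per UV pair, one color handed to the material.

[CODE]
// Conceptual stand-in for the network: it is evaluated once per shaded pixel,
// and the color crossing the "outColor" connection is just the texel fetched
// for that pixel's UVs. None of this is Maya or Direct3D API.
#include <array>
#include <cstdio>

struct Float3 { float r, g, b; };

// A tiny 2x2 RGB "file texture".
static const std::array<Float3, 4> kTexture = {{
    {1, 0, 0}, {0, 1, 0},
    {0, 0, 1}, {1, 1, 1},
}};

// Nearest-neighbour lookup: this is the file node's job, at the UVs the
// place2dTexture node hands it.
Float3 sampleTexture(float u, float v)
{
    int x = u < 0.5f ? 0 : 1;
    int y = v < 0.5f ? 0 : 1;
    return kTexture[y * 2 + x];
}

int main()
{
    // "Render" a 4x4 image: for each pixel, one UV pair, one texel, one float3
    // handed to the material. The material never sees the whole image.
    for (int py = 0; py < 4; ++py)
        for (int px = 0; px < 4; ++px)
        {
            float u = (px + 0.5f) / 4.0f;
            float v = (py + 0.5f) / 4.0f;
            Float3 c = sampleTexture(u, v);   // the "outColor" for this pixel
            std::printf("pixel (%d,%d) -> %g %g %g\n", px, py, c.r, c.g, c.b);
        }
    return 0;
}
[/CODE]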

[QUOTE=wdavidlewis;2715]So, the basic question is: how does the lambert node get all the texture data?

The next question is: if I were implementing the lambert shader in C++, how do I get access to all the texture data? It doesn’t seem that I can just read it from outColor - or can I?[/QUOTE]

I don’t use Maya either, so I’m not sure exactly what you’re doing, but I think you’re saying place2dTexture is the equivalent of a tex2D in HLSL.

How can a 128x128 texture come through as only 3 floats? If it really were only 3 floats, it would have to be a single 1x1 RGB pixel, right?

It’s because GPUs are parallel beasts: imagine them processing every pixel at the same time. The float3 is just the one pixel currently being shaded, depending on which UVs you are sampling from.

Does this make sense? I’m not sure I can explain it well.
Is this what you were asking about?

So…

Those connections are wires that tell the shading node where to request the color input. The sub-attrs are there essentially so you can mix and match and get some blended combination of colors. As far as your node is concerned, they are just populating your MDataBlock.

Your question forks when you get to hardware versus software rendering. For software rendering, the answer is short and sweet: you can pretend that the renderer calls compute() on your node once per pixel, and that the MDataBlock will simply be correct for the pixel you’re on, not unlike a standard pixel shader. The “InterpNode example code walkthrough” in the API docs is pretty good about showing what goes on here.
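A stripped-down sketch of what that compute() looks like, loosely modeled on the InterpNode walkthrough - the class and attribute names (SimpleLambert, aColor, aOutColor) are mine, and the attribute creation in initialize() plus plugin registration are omitted:

[CODE]
// Skeleton of a software shading node's compute(). By the time the renderer
// calls this for a given sample, the upstream file/place2dTexture nodes have
// already been evaluated, so the data block holds the texel for that sample.
#include <maya/MPxNode.h>
#include <maya/MPlug.h>
#include <maya/MDataBlock.h>
#include <maya/MDataHandle.h>
#include <maya/MFloatVector.h>

class SimpleLambert : public MPxNode
{
public:
    static MObject aColor;     // input color plug (what file1.outColor feeds)
    static MObject aOutColor;  // output color plug

    MStatus compute(const MPlug& plug, MDataBlock& block) override
    {
        if (plug != aOutColor)
            return MS::kUnknownParameter;

        // The incoming color for the pixel/sample currently being shaded.
        MFloatVector color = block.inputValue(aColor).asFloatVector();

        // ... lambert math (N dot L, light color, etc.) would go here ...

        MDataHandle outHandle = block.outputValue(aOutColor);
        outHandle.asFloatVector() = color;   // pass-through for the sketch
        outHandle.setClean();
        return MS::kSuccess;
    }
};

MObject SimpleLambert::aColor;
MObject SimpleLambert::aOutColor;
[/CODE]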

For hardware rendering, my knowledge gets really fuzzy, but the “a hardware shading node plugin example” page in the API documentation is probably your best bet.

I do wonder though, could you not accomplish what you are trying to do with the HLSL and CGFX nodes now present in Maya?

Thanks everyone for the replies.

The key here that I wasn’t getting (but makes perfect sense in hindsight) is that the shader network describes the data flow per-pixel or per-sample. That’s why everything is only a single pixel wide.

It’s a little confusing though, because the code I’m looking at doesn’t seem to be written that way. (I’ve acquired ownership of a shader node written by someone who is no longer at the company.) The render() function of the shader basically pulls out the uniform constant values, passes them to D3D, and then calls DrawIndexedPrimitive() for every primitive. I’m a D3D noob (though probably not for long), so it might be doing something per-pixel that I’m not recognizing; I’m having difficulty reconciling this. It could be that the code is wrong… or is it different for hardware shaders? My shader is inheriting from MPxHardwareShader.

As for using the Maya built-in shaders, that might be possible in the future. I can take a look. However, I’ve got some bugs I need to fix in hopefully less time than re-implementing an existing part of the pipeline.

One impediment to using the provided shader nodes is that the shaders are embedded in compiled materials, so probably some code would need to be written to do the extraction (though obviously the current shader does it so it would just be a matter of repackaging).

Thanks!

— David

[QUOTE=wdavidlewis;2761]The key here that I wasn’t getting (but makes perfect sense in hindsight) is that the shader network describes the data flow per-pixel or per-sample. That’s why everything is only a single pixel wide.[/QUOTE]

Yeah, you got it! :D

It’s certainly not easy to explain these things, but once you get the concept it all becomes clear. I think the way shaders work goes back to RenderMan shaders; I often revisit RenderMan shaders because they’re useful for learning concepts you can apply to HLSL.

[QUOTE=wdavidlewis;2761]I’m a D3D noob (though probably not for long), so it might be doing something per-pixel. I’m having difficulty reconciling this. It could be that the code is wrong… Or is it different for hardware shaders? My shader is inheriting from MPxHardwareShader.
[/QUOTE]

Sorry it took a while to get back to you on this - you may actually be well beyond this point by now. It sounds like you’ve got it, and what you describe sounds a lot like the HLSL C++ example Autodesk provides.

It is a little different for hardware shaders, and please take this with a grain of salt; I’m piecing it together from the API docs and my general rendering knowledge. It looks like MPxHardwareShader wraps and manages the calls that set the various shader parameters, so you do all the setup through that: tell it what to draw, then let the GPU rip on the entire geometry. Your textures are getting passed directly to the GPU somewhere; it probably isn’t directly from those plugs like in a software shader.
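To make that concrete, here’s a hedged sketch of the usual D3D9 effect draw pattern I’d expect render() to boil down to. The function and parameter names (drawWithEffect, gDiffuseColor) are made up, and it assumes your vertex/index buffers and vertex declaration are already bound by the surrounding plugin code:

[CODE]
// Typical D3D9 effect draw loop: uniforms are set once per object on the CPU,
// then one draw call covers the whole primitive range and the GPU runs the
// effect's pixel shader once per covered pixel. Assumes vertex/index buffers
// and the vertex declaration are already bound.
#include <d3dx9.h>

void drawWithEffect(LPDIRECT3DDEVICE9 device,
                    LPD3DXEFFECT      effect,
                    UINT              numVertices,
                    UINT              numTriangles)
{
    // Per-object (uniform) state - the only data the CPU hands over here.
    D3DXVECTOR4 diffuse(1.0f, 1.0f, 1.0f, 1.0f);
    effect->SetVector("gDiffuseColor", &diffuse);   // hypothetical parameter name

    UINT numPasses = 0;
    effect->Begin(&numPasses, 0);
    for (UINT pass = 0; pass < numPasses; ++pass)
    {
        effect->BeginPass(pass);

        // One call for the whole mesh; the "per-pixel" part of the shading
        // network analogy happens on the GPU inside this draw.
        device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0,
                                     numVertices, 0, numTriangles);

        effect->EndPass();
    }
    effect->End();
}
[/CODE]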

Where it gets hairy is that I can’t really find clear documentation in the API on how you bind a texture as a uniform, which I believe is your fundamental question. If your shader is based on Autodesk’s example, I would look for something derived from MHwCallback that does something like D3DXCreateTextureFromFile(). That texture will probably end up being sent to hardware by something that looks a lot like:

effect->SetTexture(theParam, theTexture);

My best guess is that a uniform is bound as a string to an MUniformParameter and then parsed in some method with a name like “bind” or “bindTexture” - something that takes a parameter and a string, finds a texture from the string, and associates the texture resource with the parameter.
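In other words, something in the neighbourhood of this - the function name and arguments are guesses at how your plugin might be organised, but the D3DX calls themselves are standard D3D9:

[CODE]
// Hypothetical bind step: load the file the uniform's string value points at,
// then hand the resulting texture to the effect parameter. Real code would
// cache the texture and Release() it when done.
#include <d3dx9.h>

HRESULT bindTextureParam(LPDIRECT3DDEVICE9 device,
                         LPD3DXEFFECT      effect,
                         D3DXHANDLE        paramHandle,
                         const char*       texturePath)
{
    LPDIRECT3DTEXTURE9 texture = NULL;

    // ANSI variant of D3DXCreateTextureFromFile(): reads the image file and
    // creates a GPU texture resource.
    HRESULT hr = D3DXCreateTextureFromFileA(device, texturePath, &texture);
    if (FAILED(hr))
        return hr;

    // Associate the texture resource with the effect's texture/sampler uniform.
    return effect->SetTexture(paramHandle, texture);
}
[/CODE]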

Sorry to ramble, but I guess maybe the most concise answer is: for a hardware shader the texture doesn’t flow through the plugs per-sample at all - it gets loaded into a GPU texture, bound to an effect parameter, and the GPU does the per-pixel sampling itself.

… and that isn’t very concise… sorry. I hope it helped though.

Cheers! Good Luck!