Hi all,
I’m attempting to implement an SDF (signed distance field) shader in Maya, using the glslShader plug-in as a front end. Rendering SDFs by raymarching depends on being able to cast a ray per fragment, consistently, from the perspective camera.
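For context, the per-fragment ray setup I’m trying to reproduce looks roughly like this (a sketch only; camPos and invViewProj are placeholder uniforms, not Maya semantics, and uv is the screen-space position I can’t currently get):

```glsl
uniform vec3 camPos;      // camera position (hypothetical uniform)
uniform mat4 invViewProj; // inverse view-projection matrix (hypothetical uniform)

// Build a world-space ray direction for a fragment at screen position uv in [0,1].
vec3 rayDirection(vec2 uv)
{
    // Remap uv to normalized device coordinates in [-1,1].
    vec2 ndc = uv * 2.0 - 1.0;
    // Unproject a point on the far plane back into world space.
    vec4 world = invViewProj * vec4(ndc, 1.0, 1.0);
    // The ray runs from the camera through that point.
    return normalize(world.xyz / world.w - camPos);
}
```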
As far as I am aware, there is no way to determine the screen-space position of a fragment shader invocation without knowing the viewport’s resolution. That resolution is also essential for things like edge detection, blurring, etc.
A trivial example can be found in the first line of this Shadertoy shader (Shader - Shadertoy BETA):

"vec2 uv = fragCoord.xy / iResolution.xy;"

- where fragCoord is the pixel coordinate of the fragment (which I can get), and iResolution is the pixel resolution of the viewport (which I cannot).
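For reference, that line sits inside Shadertoy’s standard entry point, where the host supplies both values (a minimal sketch; iResolution is a Shadertoy built-in with no equivalent semantic I can find in Maya’s glslShader):

```glsl
// iResolution is declared here only for illustration -- Shadertoy itself
// injects it as a built-in uniform.
uniform vec3 iResolution; // viewport resolution in pixels

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Normalize the pixel coordinate to a [0,1] screen-space position.
    vec2 uv = fragCoord.xy / iResolution.xy;
    // Placeholder output: visualize uv as colour.
    fragColor = vec4(uv, 0.0, 1.0);
}
```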
I can’t see any suitable semantic in the docs (Semantics and annotations supported by the dx11Shader and glslShader plug-ins in Viewport 2.0) or in the provided ubershader implementation. I’ve also tried a workaround: resizing an actual polygon in Maya, assigning it a colour-ramp texture, and passing that into the shader — but that hasn’t worked either.
Apologies if this is a simple question; I’m still a total shader baby. If there is a solution to this, or an alternative approach entirely, please point me to it.
Thanks very much.