DepthNormals texture into custom global variable
I have a camera...
_camera = GetComponent<Camera>();
_camera.depthTextureMode = DepthTextureMode.DepthNormals;
...whose DepthNormals texture I use in a shader
sampler2D _CameraDepthNormalsTexture;
Works perfectly!
But when I try to put the DepthNormalsTexture into my own global variable...
_camera.targetTexture = new RenderTexture(_camera.pixelWidth, _camera.pixelHeight, 16, RenderTextureFormat.ARGB32);
_camera.targetTexture.filterMode = FilterMode.Bilinear;
Shader.SetGlobalTexture(_globalTextureName, _camera.targetTexture);
... it doesn't work. The contents are not the same, and I am unable to make it work via this RenderTexture + SetGlobalTexture approach.
What I've tried so far:
Various sorts of RenderTexture formats
Assigning the camera's RenderTexture directly to the shaders that use it. This also behaved weirdly, which is why my guess is that my RenderTexture has different contents than _CameraDepthNormalsTexture.
Any help is greatly appreciated!
Answer by StarkeyBoy · Apr 12, 2018 at 08:05 AM
When the camera renders to a texture, it renders what it can see (based on the applied culling mask), NOT the depth normals. You access the depth and normal data from within a shader using the built-in _CameraDepthNormalsTexture variable (which gives access to the GBuffer from the GPU side).
The only time that I've seen the shader variable accessed outside the GPU is to pass it into a compute shader. If you are using compute shaders, you have to pass it in with something like computeShader.SetTextureFromGlobal(kernelIndex, "_DepthNormalsTexture", "_CameraDepthNormalsTexture"); (note that SetTextureFromGlobal is an instance method on the ComputeShader you want to feed).
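For context, a minimal MonoBehaviour sketch of that binding might look like the following. This only runs inside Unity, and the kernel name "CSMain" and the texture parameter "_DepthNormalsTexture" are illustrative assumptions, not names from this thread:

```
using UnityEngine;

public class DepthNormalsToCompute : MonoBehaviour
{
    public ComputeShader compute;   // assigned in the Inspector
    private RenderTexture _result;
    private int _kernel;

    void Start()
    {
        _kernel = compute.FindKernel("CSMain");

        // Output texture the compute shader can write into.
        _result = new RenderTexture(Screen.width, Screen.height, 0)
        {
            enableRandomWrite = true
        };
        _result.Create();
        compute.SetTexture(_kernel, "Result", _result);

        // Bind the built-in global depth-normals texture to the kernel's
        // "_DepthNormalsTexture" parameter.
        compute.SetTextureFromGlobal(_kernel, "_DepthNormalsTexture",
                                     "_CameraDepthNormalsTexture");
    }

    void Update()
    {
        // Assumes the kernel uses [numthreads(8,8,1)].
        compute.Dispatch(_kernel, Screen.width / 8, Screen.height / 8, 1);
    }
}
```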
Hope this helps.
Answer by Doidel · Apr 13, 2018 at 08:03 AM
Thanks for your explanation @StarkeyBoy!
I am not using a compute shader. Instead, in my forward rendering setup, I have a surface shader which accesses the _CameraDepthNormalsTexture of a camera A with depth -1. There is also a camera B with depth 0 around, but with this setup the surface shader uses camera A's _CameraDepthNormalsTexture.
Now if I switch the forward rendering setup to a deferred one, the surface shader doesn't work out of the box anymore. This didn't come as a surprise to me, and I thought the easiest way to circumvent it would be to simply draw the camera's rendered depth and normal data (GBuffer) to a texture and then just reference that texture in my surface shader. After all, since I can do that with an albedo texture, why not with the depth-normals texture?
Should I also use something like MySurfaceShader.SetTextureFromGlobal(SSMainkernalID, "_DepthNormalsTexture", "_CameraDepthNormalsTexture"); or do you have a suggestion how I can make sure that my surface shader has access to camera A's GBuffer (which btw indeed has a specific culling mask)?
Thanks again for your effort, much appreciated!
Hi @Doidel,
I have not tried this, but you may want to take a look at Shader.SetGlobalTexture. It will make the texture available to all shaders, but it sounds like it will do what you require. Take a look at the Unity API manual on this topic. Remember you will still need to decode the _CameraDepthNormalsTexture, in each shader where it is used, using the DecodeDepthNormal function in UnityCG.cginc. Unless you decode it to a colour texture first and set that as a global texture.
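As a sketch, that decoding step inside a fragment shader could look like this (the screenUV input and the surrounding vertex struct are illustrative assumptions, not code from this thread):

```
// Requires: #include "UnityCG.cginc"
sampler2D _CameraDepthNormalsTexture;

fixed4 frag (v2f i) : SV_Target
{
    float depth;        // linear 0..1 depth
    float3 viewNormal;  // view-space normal

    // Unpack the encoded depth + normal written by the DepthNormals pass.
    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.screenUV),
                      depth, viewNormal);

    // Visualize the normal as a colour, e.g. for debugging.
    return fixed4(viewNormal * 0.5 + 0.5, 1);
}
```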
I have used a compute shader in the past to output onto a render texture. I created a colour texture from the decoded normals and output this from the compute shader. This worked fine and allowed me to visually see the normal map. Really I only did this as an experiment to see if it could be done. In reality I was able to use the _CameraDepthNormalsTexture within the compute shader. If you're working on post effects, take a look at compute shaders and the ComputeShader.SetTextureFromGlobal function.
However, from what you've described, I think the Shader.SetGlobalTexture option looks the best for you. Good luck, I hope this does what you need.