Compositing two RenderTextures based on depth
I want to achieve a "L4D2 see other players through the walls" shader effect (not for that purpose, but it's similar enough).
I have two cameras with identical transform and parameters, but different culling layers.
Camera 1 renders the scene (everything but characters). Camera 2 renders only the characters (and is disabled).
Camera 1 has a C# script whose OnRenderImage callback renders Camera 2 into a RenderTexture, then borrows the fragAlphaBlend pass from Unity's built-in Overlay image effect to composite the two RenderTextures with Graphics.Blit.
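For context, here is a minimal sketch of that setup; the field names (overlayCamera, compositeMaterial) and the _Overlay texture property are illustrative, not from my actual project:

```csharp
using UnityEngine;

// Attached to Camera 1. Each frame, renders the disabled character camera
// (Camera 2) into a RenderTexture, then composites it over the scene image.
[RequireComponent(typeof(Camera))]
public class CompositeCharacters : MonoBehaviour
{
    public Camera overlayCamera;        // Camera 2: characters only, disabled
    public Material compositeMaterial;  // material wrapping the alpha-blend shader

    private RenderTexture characterRT;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (characterRT == null)
            characterRT = new RenderTexture(src.width, src.height, 24);

        // Render the character-only camera into its own texture.
        overlayCamera.targetTexture = characterRT;
        overlayCamera.Render();
        overlayCamera.targetTexture = null;

        // Composite: the shader alpha-blends _Overlay on top of the source.
        compositeMaterial.SetTexture("_Overlay", characterRT);
        Graphics.Blit(src, dest, compositeMaterial);
    }
}
```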
At this point, I can see the entire scene, but the characters are always drawn on top of any foreground scene geometry, since the fragAlphaBlend shader does not know about either RenderTexture's depth/zbuffer.
I'd like to compare Camera 1's RenderTexture depth to Camera 2's, and blend the images differently when Camera 1's geometry is closer to the camera than Camera 2's characters. I see that a shader can read the current camera's depth buffer via _CameraDepthTexture, but that doesn't help here: by the time I'm compositing the two images in Graphics.Blit, neither camera is current.
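To illustrate what I'm after, the compare in the composite shader's fragment program might look like this sketch, assuming I had both depth buffers available as textures (the _SceneDepth and _CharacterDepth samplers are hypothetical; getting them populated is exactly the problem):

```
// Fragment-program sketch (Cg), not a complete shader.
sampler2D _MainTex;        // scene color (Camera 1)
sampler2D _Overlay;        // character color (Camera 2)
sampler2D _SceneDepth;     // Camera 1 depth -- somehow captured
sampler2D _CharacterDepth; // Camera 2 depth -- somehow captured

fixed4 frag(v2f_img i) : SV_Target
{
    fixed4 scene = tex2D(_MainTex, i.uv);
    fixed4 chars = tex2D(_Overlay, i.uv);
    float sceneZ = DECODE_EYEDEPTH(tex2D(_SceneDepth, i.uv).r);
    float charZ  = DECODE_EYEDEPTH(tex2D(_CharacterDepth, i.uv).r);

    // Only blend the characters in where they are in front of scene
    // geometry; a "through walls" effect would branch on the other case.
    float visible = (charZ <= sceneZ) ? 1.0 : 0.0;
    return lerp(scene, chars, chars.a * visible);
}
```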
Any suggestions? The only option I can think of that's sure to work is explicitly rendering depth passes for both cameras.
To make things even more fun, I'm currently on Unity 4.6.2 but plan to upgrade to 5.1 soon, so I'd like to avoid shooting myself in the foot with regard to forward compatibility.