Copying depth from one RenderTexture to another
Hi there,
I know this question has been asked before, but I still could not find a definitive answer and I am still struggling to make it work.
What I am doing in my program is some HDR rendering into a RenderTexture (format ARGBHalf). Then, after applying a tonemapping operator (Tonemapping.js), I continue rendering some overlays into the same RenderTexture (with another, similarly set up camera with a higher depth value, but in LDR mode; I am not doing any tonemapping after that). The problem is that as soon as tonemapping completes, the depth information is gone from the RenderTexture. I am aware that Graphics.Blit(), used by the Tonemapping implementation, does not copy depth info to the destination RenderTexture. To remedy that, I made my HDR camera create a depth texture for me (by setting its depthTextureMode):
function Start ()
{
....
camera.depthTextureMode=DepthTextureMode.Depth;
}
Then I am confused as to how I can use the generated texture (_CameraDepthTexture) to transfer the depth values to the destination RenderTexture. This is how I modified Tonemapping.js and this does not work:
@ImageEffectTransformsToLDR
function OnRenderImage (source: RenderTexture, destination: RenderTexture)
{
CopyDepth(source, destination);
....
}
function CopyDepth (source : RenderTexture, destination : RenderTexture)
{
var oldRT = RenderTexture.active;
Graphics.SetRenderTarget(destination.colorBuffer, destination.depthBuffer);
GL.Clear(true, false, Color.clear);
GL.PushMatrix();
GL.LoadOrtho();
depthCopier.SetPass(0);
//Render the full screen quad manually.
GL.Begin(GL.QUADS);
GL.TexCoord2(0.0, 0.0); GL.Vertex3(0.0, 0.0, 0.1);
GL.TexCoord2(1.0, 0.0); GL.Vertex3(1.0, 0.0, 0.1);
GL.TexCoord2(1.0, 1.0); GL.Vertex3(1.0, 1.0, 0.1);
GL.TexCoord2(0.0, 1.0); GL.Vertex3(0.0, 1.0, 0.1);
GL.End();
GL.PopMatrix();
RenderTexture.active = oldRT;
}
The shader used by the depthCopier material is this:
Shader "Hidden/DepthCopy"
{
Subshader {
Tags {"RenderType"="Opaque"}
// -- DepthTextureCopy
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#pragma fragmentoption ARB_precision_hint_fastest
#include "UnityCG.cginc"
sampler2D _CameraDepthTexture;
struct appdata_t {
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
struct v2f {
float4 vertex : POSITION;
float2 texcoord : TEXCOORD0;
};
v2f vert (appdata_t v)
{
v2f o;
o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
o.texcoord=v.texcoord;
return o;
}
half4 frag (v2f i) : COLOR
{
half d = Linear01Depth(tex2D(_CameraDepthTexture, i.texcoord).r);
return half4(d, d, d, d);
}
ENDCG
}
}
Fallback off
}
Answer by igorgiv · Aug 01, 2014 at 09:57 PM
Well, since no one answered and I have already solved the problem, I am posting my own solution.
For reasons still unknown to me, rendering depth into the destination texture (the destination argument of OnRenderImage()) does not work. Maybe something else happens to the destination RenderTexture after OnRenderImage() completes? I don't know. In any case, rendering depth in the OnPreRender() method of the next camera in depth order does work (in my example above, the next camera is the LDR one). To render the depth I used replacement shaders, by the way. Below are the script and the shader. I hope this will be helpful to someone.
using UnityEngine;
using System.Collections;
/// <summary>
/// The purpose of this script is to render scene objects (residing in a particular
/// layer) in depth-only mode. The point is to restore the state of the depth buffer
/// before the scene is rendered with the camera which this script is attached to.
/// This is useful in case you need to perform rendering AFTER some screen-space image
/// effects (such as Tonemapping or Contrast Enhance etc.) as those effects wipe out the
/// contents of the depth buffer. Another example would be rendering scene in HDR and then
/// rendering some LDR overlays.
/// USAGE: attach this script to a camera object whose depth is greater than that of the
/// camera that renders earlier and has image effect scripts attached to it.
/// </summary>
[RequireComponent(typeof(Camera))]
public class DepthRenderer : MonoBehaviour {
GameObject depthCamera=null;
Shader replacementShader=null;
// Use this for initialization
void Start ()
{
depthCamera=new GameObject();
depthCamera.AddComponent<Camera>();
depthCamera.camera.enabled=false;
depthCamera.hideFlags=HideFlags.HideAndDontSave;
depthCamera.camera.CopyFrom(camera);
depthCamera.camera.cullingMask=1<<0; // default layer for now
depthCamera.camera.clearFlags=CameraClearFlags.Depth;
replacementShader=Shader.Find("RenderDepth");
if (replacementShader==null)
{
Debug.LogError("could not find 'RenderDepth' shader");
}
}
// Update is called once per frame
void OnPreRender ()
{
if (replacementShader!=null)
{
Camera camCopy=depthCamera.camera;
// copy position and rotation
camCopy.transform.position=camera.transform.position;
camCopy.transform.rotation=camera.transform.rotation;
camCopy.RenderWithShader(replacementShader, "RenderType");
}
}
}
The replacement shader is trivial:
Shader "RenderDepth"
{
SubShader {
Tags { "RenderType"="Opaque" }
Pass {
ZWrite On
ColorMask 0
Fog { Mode Off }
}
}
}
Just don't forget to tag the SubShaders of the objects that you want written to the depth buffer with
Tags {"RenderType"="Opaque"}
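For example, a minimal opaque shader for such an object might look like this (the shader name and the fixed-function pass are just placeholders; what matters is the RenderType tag, which the RenderWithShader(replacementShader, "RenderType") call above matches against):
Shader "Custom/TaggedOpaque"
{
    SubShader {
        // Only shaders whose RenderType matches one of the replacement
        // shader's SubShader tags are rendered by RenderWithShader.
        Tags { "RenderType"="Opaque" }
        Pass {
            Color (1,1,1,1)
        }
    }
}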
Isn't this basically wiping out performance, since you render one more camera and all objects one more time? It should somehow be possible to store the old depth buffer in a RenderTexture and reuse it in a later shader...
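In principle, yes. A sketch of that idea (untested; assumes the overlays can be drawn inside the same OnRenderImage call, and note that destination may be null when the target is the screen) is to bind the destination's color buffer together with the source's untouched depth buffer after the Blit, since Graphics.Blit() only replaces the color contents:
using UnityEngine;

// Hypothetical sketch: after an image effect's Blit, re-attach the source's
// depth buffer so subsequent draws are depth-tested against the original scene.
[RequireComponent(typeof(Camera))]
public class KeepDepthAfterEffect : MonoBehaviour
{
    public Material effectMaterial; // e.g. the tonemapping material

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Apply the effect to the color contents as usual.
        Graphics.Blit(source, destination, effectMaterial);

        // Bind the destination's color buffer with the source's depth buffer;
        // anything rendered now sees the original scene depth.
        Graphics.SetRenderTarget(destination.colorBuffer, source.depthBuffer);
        // ... draw overlays here ...
        RenderTexture.active = null;
    }
}
This avoids re-rendering the scene, but the source RenderTexture may be recycled after OnRenderImage returns, so the overlays have to happen within this call.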
Answer by $$anonymous$$ · Sep 22, 2016 at 12:48 PM
I don't know if this will help anyone, but I spent 40 hours trying to get stencil buffers to work with the newer CommandBuffer system, and this is one of the threads I looked at during my research. Here's a link to my solution in case anyone else is struggling with this: http://forum.unity3d.com/threads/has-anyone-ever-gotten-a-stencil-buffer-to-copy-with-commandbuffer-blit.432503/