I'm trying to calculate the world-space position in a shader using the camera's depth buffer. I'm getting values that vary a lot based on the camera's rotation. Why?
The world-space distance between the camera and a fragment shouldn't change at all as the camera rotates, but I've been running into a problem where it does. I've made a simple example to demonstrate.
Below is a view of a landscape with hills and a mountain. The landscape is rendered with a post-process effect (shader code is below) that visualizes the reconstructed world-space distance from the camera's depth texture, snapped to discrete intervals to make the problem clearer:

[Screenshot: the landscape with the banded distance visualization]
Here is where the problem comes in: when I rotate my camera and objects move to the side of the view, their world-space distance shrinks by hundreds of meters (watch the mountain and large hill as they move to the right):

[Screenshot: the same landscape after rotating the camera; the distance bands on the mountain and hill have shifted]
Why are these depth values changing so dramatically? My depth visualization shader is below.
 Shader "Visualize Depth PostProcess"
 {
     Properties
     {
         _Range("Range", Float) = 2575
     }
     SubShader
     {
         Tags { "RenderType"="Opaque" }
 
         Pass
         {
             CGPROGRAM
             #pragma vertex vert
             #pragma fragment frag
 
             #include "UnityCG.cginc"
 
 
             struct appdata
             {
                 float4 vertex : POSITION;
             };
             struct v2f
             {
                 float4 vertex : SV_POSITION;
                 float4 screenPos : TEXCOORD0;
             };
 
             v2f vert (appdata v)
             {
                 v2f o;
                 o.vertex = UnityObjectToClipPos(v.vertex);
                 o.screenPos = ComputeScreenPos(o.vertex);
                 return o;
             }
 
             float _Range;
             sampler2D_float _CameraDepthTexture;
 
             fixed4 frag (v2f IN) : SV_Target
             {
                 float4 depthMapUV4 = UNITY_PROJ_COORD(IN.screenPos);
                 float rawDepth = tex2Dproj(_CameraDepthTexture, depthMapUV4).r;
 
                 float camToFragWorldDist = LinearEyeDepth(rawDepth);
                 float3 color = saturate(camToFragWorldDist / _Range).xxx;
 
                 //To make the effect more stark, snap the depth value to discrete steps.
                 return float4((0.2 * step(0.1, color)) +
                                 (0.2 * step(0.3, color)) +
                                 (0.2 * step(0.5, color)) +
                                 (0.2 * step(0.7, color)) +
                                 (0.2 * step(0.9, color)),
                               1);
             }
             ENDCG
         }
     }
 }
 
Answer by heyx3 · Nov 17, 2018 at 05:23 AM
OK, I'm pretty certain I've fixed this problem. At the very least, I've compensated for it so that the artifact from camera rotation isn't visible.
I thought the output of LinearEyeDepth was the view-space distance to the fragment (and therefore the world-space distance). However, it is only the view-space Z component of the fragment's position; I was ignoring the view-space X and Y! The true distance is found like this:
 float2 viewPosXY = ...;
 float viewPosZ = LinearEyeDepth(sampledDepth);
 return length(float3(viewPosXY, viewPosZ));
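To see why the two behave differently under rotation (this is just geometry, nothing Unity-specific): for a fragment at view-space position (x, y, z), LinearEyeDepth gives you only z, while the true distance is sqrt(x^2 + y^2 + z^2), which is always >= z. Rotating the camera doesn't change the distance to an object, but it does change the object's view-space Z; as something slides toward the edge of the view, its Z shrinks, which is exactly the "shrinking by hundreds of meters" in the screenshots above.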
After some time, I figured out how to calculate viewPosXY in the fragment shader so that I could finally compute the expression above. The steps are listed below, and a consolidated sketch of the whole pass follows at the end.
- Output the clip-space position from the vertex shader (i.e. UnityObjectToClipPos(v.vertex)).
- Also output the screen position for sampling the depth texture (i.e. ComputeScreenPos(UnityObjectToClipPos(v.vertex))).
- In the fragment shader, do the perspective division on the clip-space position: float2 clipPosXY = IN.clipPos.xy / IN.clipPos.w;
- Sample the depth texture using the screen position, and convert to linear eye depth: float viewPosZ = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r);
- You could also get the depth value by lerping between the camera's near and far plane distances, using Linear01Depth(sampledDepth) as the t value (see the snippet after this list). The results are slightly different, and I'm not sure which one is more mathematically correct (or what the difference between them really is). The LinearEyeDepth approach should be more performant because it doesn't have that extra lerp.
- Get the view-space X and Y, given the clip-space X and Y and the view-space Z (this part is taken from the Unity docs):
 float camAspectRatio = ...; //= width/height. Has to be fed in from a script.
 float camFOVDegrees = ...;  //Has to be fed in from a script.
 const float deg2rad = 0.0174533;
 float viewHeight = 2.0 * viewPosZ * tan(camFOVDegrees * 0.5 * deg2rad);
 float viewPosY = 0.5 * viewHeight * clipPosXY.y;
 float viewPosX = 0.5 * viewHeight * clipPosXY.x * camAspectRatio;
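For reference, the Linear01Depth route mentioned above would look something like this (a sketch using Unity's built-in _ProjectionParams, whose y and z components hold the near and far plane distances):
 float rawDepth = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r;
 float viewPosZ = lerp(_ProjectionParams.y, _ProjectionParams.z, Linear01Depth(rawDepth));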
It seems to me like you should also be able to get the view-space X and Y much more easily by calculating mul(UNITY_MATRIX_MV, v.vertex), but that gives me very strange results (likely because in a post-process pass, v.vertex belongs to the full-screen quad being rendered, not to the scene geometry behind each pixel, so its view-space position says nothing about the fragment sampled from the depth texture).
Finally, compute the distance:
float worldSpaceDist = length(float3(viewPosX, viewPosY, viewPosZ));
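Putting all the steps together, here's a minimal sketch of the whole pass (the _CamAspectRatio and _CamFOVDegrees property names are arbitrary; feed them in from a script via Material.SetFloat, using Camera.aspect and Camera.fieldOfView). It drops into the same kind of CGPROGRAM block as the shader in the question, with UnityCG.cginc included:
 float _Range;
 float _CamAspectRatio; //Set from script: camera.aspect
 float _CamFOVDegrees;  //Set from script: camera.fieldOfView
 sampler2D_float _CameraDepthTexture;

 struct appdata
 {
     float4 vertex : POSITION;
 };
 struct v2f
 {
     float4 vertex : SV_POSITION;
     float4 clipPos : TEXCOORD0;
     float4 screenPos : TEXCOORD1;
 };

 v2f vert (appdata v)
 {
     v2f o;
     o.vertex = UnityObjectToClipPos(v.vertex);
     o.clipPos = o.vertex;
     o.screenPos = ComputeScreenPos(o.vertex);
     return o;
 }

 fixed4 frag (v2f IN) : SV_Target
 {
     //Perspective division: clip-space XY in [-1, 1].
     float2 clipPosXY = IN.clipPos.xy / IN.clipPos.w;

     //Sample the depth texture and convert to view-space Z.
     float rawDepth = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.screenPos)).r;
     float viewPosZ = LinearEyeDepth(rawDepth);

     //Reconstruct view-space X and Y from the FOV and aspect ratio.
     const float deg2rad = 0.0174533;
     float viewHeight = 2.0 * viewPosZ * tan(_CamFOVDegrees * 0.5 * deg2rad);
     float viewPosY = 0.5 * viewHeight * clipPosXY.y;
     float viewPosX = 0.5 * viewHeight * clipPosXY.x * _CamAspectRatio;

     //The camera-to-fragment distance, now stable under rotation.
     float worldSpaceDist = length(float3(viewPosX, viewPosY, viewPosZ));
     return float4(saturate(worldSpaceDist / _Range).xxx, 1);
 }
With this in place, the banding in my visualization stays put as the camera rotates.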