Calculating world position from depth buffer; is there a better solution?
Initial problem: I have a procedural mesh representing a planet. I want to add water to that planet by ray casting against an ocean sphere and altering the color of any pixel where the sphere is intersected. Unfortunately, if the water extends outside the bounds of the planet mesh (i.e., the camera is looking at the horizon), the fragment shader is never invoked for those pixels and the ocean isn't rendered.
The solution I came up with was to Blit a post-processing shader inside the camera's OnRenderImage method and use the depth buffer to determine where the ocean sphere should be occluded. This works well enough at a distance, but something is wrong when the camera is close with a wide FOV:
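For context, the per-pixel ocean test described above is a standard analytic ray–sphere intersection. A minimal sketch of that math, in plain Python rather than HLSL (names are illustrative, not from the thread's shader):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the nearest non-negative ray parameter t where
    origin + t * direction hits the sphere, or None on a miss.
    `direction` is assumed to be normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    root = math.sqrt(disc)
    for t in (-b - root, -b + root):  # near hit first, then far hit
        if t >= 0.0:
            return t
    return None  # sphere is entirely behind the ray origin

# A ray fired straight at a radius-2 ocean sphere from 10 units away:
t = ray_sphere([0.0, 0.0, -10.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0], 2.0)
assert abs(t - 8.0) < 1e-9
```

In the shader version, the returned t is then compared against the scene depth to decide whether the ocean surface is occluded by the planet mesh, which is exactly where the comparison below goes wrong.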
I thought there might be an issue with how I'm generating rays from the UVs:
// In frag
Ray ray = CreateCameraRay(i.uv * -2 + 1);  // remap [0,1] uv to [-1,1] (note the sign flip)
[...]
Ray CreateRay(float3 origin, float3 direction)
{
    Ray ray;
    ray.origin = origin;
    ray.direction = direction;
    return ray;
}
Ray CreateCameraRay(float2 uv)
{
    // Set the origin of the ray to the camera origin in world space.
    float3 origin = mul(unity_CameraToWorld, float4(0.0f, 0.0f, 0.0f, 1.0f)).xyz;
    // Unproject the uv through the inverse projection matrix to get a view-space
    // direction, rotate it into world space, then normalize it.
    float3 direction = mul(_CameraInverseProjection, float4(uv, 0.0f, 1.0f)).xyz;
    direction = mul(unity_CameraToWorld, float4(direction, 0.0f)).xyz;
    direction = normalize(direction);
    return CreateRay(origin, direction);
}
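As a sanity check on this construction (my own, not from the thread): unprojecting an NDC point and treating the result as a view-space direction should round-trip, i.e. projecting that direction must land back on the same NDC point. A plain-Python sketch with an OpenGL-style projection matrix (illustrative names; Unity's conventions differ in sign details but the round-trip logic is the same):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style projection: view space looks down -Z, NDC z in [-1, 1]."""
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [1.0 / (aspect * t), 0.0, 0.0, 0.0],
        [0.0, 1.0 / t, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mul(m, v):
    # 4x4 matrix times 4-vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def view_ray(ndc_x, ndc_y, fov_y_deg, aspect):
    """Hand-derived view-space ray direction for an NDC point: the
    equivalent of mul(_CameraInverseProjection, float4(uv, 0, 1))."""
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    return [ndc_x * aspect * t, ndc_y * t, -1.0]

# Round-trip: projecting the ray direction lands back on the NDC point.
P = perspective(90.0, 16.0 / 9.0, 0.3, 1000.0)
for u, v in [(0.0, 0.0), (1.0, -1.0), (-0.5, 0.75)]:
    d = view_ray(u, v, 90.0, 16.0 / 9.0)
    clip = mul(P, [d[0], d[1], d[2], 1.0])
    assert abs(clip[0] / clip[3] - u) < 1e-9
    assert abs(clip[1] / clip[3] - v) < 1e-9
```

The fact that the lerped-frustum-corner directions described in the answer below matched the inverse-projection directions pixel for pixel is consistent with this: the ray generation itself is sound.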
I'm trying to debug this now, but I'm wondering if I'm going about this the wrong way in the first place. Is there a simpler way to draw outside the boundaries of the mesh using a script/shader on the planet itself rather than having to involve a post processing effect on the Camera [and effectively recalculate the world position from the depth buffer]?
Answer by CaseyCat · Sep 04, 2020 at 09:12 AM
While unrelated to the main question, I do have an update on the depth buffer / rayHit comparison discrepancy...
I decided to try passing the camera's frustum corners to the shader manually as properties and lerp/slerp between them per pixel; the lerped directions rendered an identical image to the one produced by the inverse projection matrix, suggesting the ray directions are not the issue. Okay, then.
I moved on to the next possible culprit: the depth values returned by LinearEyeDepth. I discovered something odd. The depth values change VERY slightly depending on the orientation of the camera, suggesting the value returned by LinearEyeDepth is not actually the distance to the camera origin like I expected. Naturally, I assumed it was measuring distance from the near plane; but changing the near plane doesn't affect the result.
EDIT: LinearEyeDepth is definitely "wrong", in terms of returning depth to camera origin, but I think I ruled this out as the cause.
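For what it's worth, this observation is consistent with how depth buffers generally work: they store view-space Z, the distance along the camera's forward axis, not the radial distance to the camera origin. An off-axis point at radial distance d reads as d·cos(θ), where θ is its angle from the view axis, so the discrepancy grows with FOV and is largest at the screen edges. A quick numeric sketch (plain Python, not Unity code):

```python
import math

# A surface point 10 units from the camera, 40 degrees off the view axis.
dist = 10.0
theta = math.radians(40.0)
view_z = dist * math.cos(theta)   # what a linearized depth buffer encodes

# Eye depth understates the radial distance for any off-axis point...
assert view_z < dist

# ...but the radial distance is recoverable by dividing by cos(theta),
# i.e. by dot(normalize(rayDirection), cameraForward) in shader terms.
recovered = view_z / math.cos(theta)
assert abs(recovered - dist) < 1e-9
```

If that is what is happening here, comparing LinearEyeDepth directly against a ray-hit distance would produce exactly this kind of FOV-dependent mismatch near the screen edges; scaling the eye depth by 1/dot(rayDir, forward) before the comparison should reconcile the two.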
My working theory is that LinearEyeDepth is only truly accurate for the point at the center of the screen. I think it takes the distance from the near plane and scales it to world space, then adds a flat value equal to the near clipping plane distance. However, since the near plane is flat, its distance from the camera origin is not constant across the plane; a flat offset is therefore too small near the edges, making points near the edges of the near plane appear closer than they should. This distortion would be more dramatic at larger FOVs, which is what we observe.
EDIT: Didn't work. I'll try generating the ray origins at the near plane and adding a flat near plane offset to their depth before comparing. But REGARDLESS, this illustrates why using a post processing shader to do this CANNOT be the best way to solve the problem. Reconstructing world space coordinates from the depth buffer is clumsy and hacky. Any suggestions for a better way to solve this problem?
So, the near plane was not the issue. I'm still not sure why LinearEyeDepth does not return an accurate depth value near the screen edges, but I was able to find a way to generate world-space coordinates from the depth buffer by adapting this solution:
(https://stackoverflow.com/questions/32227283/getting-world-position-from-depth-buffer-value)
The gist is: sample the depth texture, but don't linearize the value. Instead, scale the raw depth value and the UV to create a clip-space position, then transform it back into world space via the inverse projection matrix and the object-to-world matrix.
float3 DepthToWorld(float2 uv, float depth) {
    // Raw depth is reversed on most modern platforms (UNITY_REVERSED_Z), hence 1 - depth.
    float z = (1 - depth) * 2.0 - 1.0;
    float4 clipSpacePosition = float4(uv * 2.0 - 1.0, z, 1.0);
    float4 viewSpacePosition = mul(_CameraInverseProjection, clipSpacePosition);
    // Perspective divide.
    viewSpacePosition /= viewSpacePosition.w;
    float4 worldSpacePosition = mul(unity_ObjectToWorld, viewSpacePosition);
    return worldSpacePosition.xyz;
}
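The matrix pipeline inside DepthToWorld can be verified numerically. Here's a hedged sketch in plain Python (illustrative names, standard OpenGL-style clip conventions rather than Unity's reversed-Z): project a known view-space point, then run the clip-position → inverse-projection → perspective-divide steps in reverse and confirm the original point comes back.

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style projection: view space looks down -Z, NDC z in [-1, 1]."""
    t = math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [1.0 / (aspect * t), 0.0, 0.0, 0.0],
        [0.0, 1.0 / t, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def inverse_perspective(P):
    """Analytic inverse, valid only for the sparsity pattern above."""
    a, b, c, d = P[0][0], P[1][1], P[2][2], P[2][3]
    return [
        [1.0 / a, 0.0, 0.0, 0.0],
        [0.0, 1.0 / b, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, 1.0 / d, c / d],
    ]

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

P = perspective(60.0, 16.0 / 9.0, 0.3, 1000.0)
view = [2.0, 1.0, -5.0, 1.0]                  # known view-space point

clip = mul(P, view)
ndc = [c / clip[3] for c in clip]             # what the uv + depth sample encodes

# DepthToWorld's core: clip-space position -> inverse projection -> divide by w.
back = mul(inverse_perspective(P), [ndc[0], ndc[1], ndc[2], 1.0])
back = [x / back[3] for x in back]

assert all(abs(a - b) < 1e-6 for a, b in zip(back[:3], view[:3]))
```

The shader's extra wrinkles are the `(1 - depth)` flip, which accounts for Unity's reversed-Z depth buffer on most modern graphics APIs, and the final matrix multiply that takes the recovered view-space position out to world space.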
Definitely useful!
Still wondering if there isn't a simpler way to draw outside the bounds of the mesh, though. Seems wasteful to recalculate the world positions when I just had them [inside the planet's own shader].
Edit: Plus this solution doesn't easily work outside of Play mode; OnRenderImage doesn't seem to respond to the [ExecuteInEditMode] attribute.