Computing world space pixel position in VR
Hi there
I've been wrestling with this one for a day and have got no further. From within a fragment shader, towards the end of the render pipeline (but not in OnRenderImage), I'm attempting to sample the depth texture in order to calculate the world space position of the scene at a given pixel on the screen. This is in order to achieve localised volumetric effects.
The code is very simple, and I'm fairly sure I've used it before outside of VR (the soft particles shader does something similar):
sampler2D _CameraDepthTexture;

struct v2f {
    float4 pos     : SV_POSITION;
    float3 view    : TEXCOORD0;   // camera-to-vertex ray, world space
    float4 clippos : TEXCOORD1;   // clip-space position, for screen UVs
    float3 world   : TEXCOORD2;   // world-space vertex position
};

v2f vert (appdata_base v) {
    v2f o;
    float4 wPos = mul(unity_ObjectToWorld, v.vertex);
    o.pos = UnityObjectToClipPos(v.vertex);
    o.view = wPos.xyz - _WorldSpaceCameraPos;
    o.clippos = o.pos;
    o.world = wPos.xyz;
    return o;
}

half4 frag (v2f i) : COLOR
{
    // Screen-space coordinates for sampling the depth texture.
    float4 screenpos = ComputeScreenPos(i.clippos);
    // Linear eye depth of the scene at this pixel.
    float scenez = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(screenpos)));
    // Push out along the view ray by the sampled depth.
    float3 position = i.world + normalize(i.world - _WorldSpaceCameraPos) * scenez;
    // Visualise: colour bands repeating every 1/3 of a world unit.
    return half4(frac(abs(position * 3)), 1);
}
On rendering the above, I would expect a sort of 'tiled' effect, as the fragment shader outputs a colour based on the world space position of the pixel it is rendering. However, what I actually get is a warped output that seems accurate in the centre of the screen but wrong towards the edges.
Rendering scenez on its own instead looks fairly sensible (though part of me thinks it should perhaps be curving out from the camera if it were correct?).
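(A note on that, as I understand it: LinearEyeDepth returns distance along the camera's forward axis, not along the view ray, so a flat wall facing the camera should read as constant depth and no curving is expected. Rescaling it into a radial distance from the camera, if that were wanted, would look roughly like this sketch:)

    // Illustrative sketch only: rescale planar eye depth into radial distance.
    float3 rayDir = normalize(i.view);            // unit world-space view ray
    float3 camFwd = -UNITY_MATRIX_V[2].xyz;       // world-space camera forward
    float radial  = scenez / dot(rayDir, camFwd); // distance along the ray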
I am baffled. I've tried numerous different approaches: inserting divide-by-w steps for the projection in various places, passing different values between the vertex and fragment shaders, and calculating the depth in different ways. All of them eventually end up producing the same dodgy effect.
I think it must have something to do with the fact that in VR the output is warped to match the lens, but how to compensate for this, and why a similar post-effect-style approach works in OnRenderImage, is beyond me.
Anybody any experience with this issue? Do I need to 'unwarp' somehow? Is a warp causing me to sample the wrong area of the depth texture perhaps?
Are you attempting this in Single or Multi Pass Stereo?
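(Worth knowing for Single Pass: both eyes render into one double-wide target, so screen-space samples such as _CameraDepthTexture need a per-eye scale and offset. Recent versions of UnityCG.cginc apply this inside ComputeScreenPos; done by hand it would look roughly like the following, using the built-in unity_StereoScaleOffset:)

    #if defined(UNITY_SINGLE_PASS_STEREO)
        // Remap full-target UVs into the current eye's half of the texture.
        float2 uv = screenpos.xy / screenpos.w;
        float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
        uv = uv * scaleOffset.xy + scaleOffset.zw;
    #endif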
Answer by equalsequals · Jul 13, 2017 at 05:27 PM
For posterity, I believe I have resolved the issue, at least in my case. Please see my Forum Post.
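The linked post has the full write-up. As a general note for anyone reconstructing world position this way: LinearEyeDepth gives depth along the camera's forward axis, not distance along the view ray, so scaling a normalized ray by it drifts increasingly towards the screen edges, which matches the "correct in the centre, warped at the edges" symptom. A sketch of a reconstruction that scales the un-normalized ray instead (illustrative, and not necessarily identical to the fix in the linked post):

    sampler2D _CameraDepthTexture;

    struct v2f {
        float4 pos     : SV_POSITION;
        float3 view    : TEXCOORD0;   // camera-to-vertex ray, world space, NOT normalized
        float4 clippos : TEXCOORD1;
    };

    v2f vert (appdata_base v) {
        v2f o;
        float4 wPos = mul(unity_ObjectToWorld, v.vertex);
        o.pos = UnityObjectToClipPos(v.vertex);
        o.view = wPos.xyz - _WorldSpaceCameraPos;
        o.clippos = ComputeScreenPos(o.pos);   // also applies the single-pass stereo offset
        return o;
    }

    half4 frag (v2f i) : COLOR
    {
        float sceneZ = LinearEyeDepth(
            SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.clippos)));

        // Forward-axis extent of the interpolated ray.
        float3 camFwd   = -UNITY_MATRIX_V[2].xyz;   // world-space camera forward
        float  rayDepth = dot(i.view, camFwd);

        // Scale the un-normalized ray so its forward-axis extent matches the
        // sampled eye depth, then walk out from the camera position.
        float3 position = _WorldSpaceCameraPos + i.view * (sceneZ / rayDepth);

        return half4(frac(abs(position * 3)), 1);
    }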