Normals from Depth texture distorted
Hi all,
I've been writing some image effects using the camera's generated depth and normals texture. One thing I've noticed is that the camera's field of view seems to distort the depth values at the sides of the screen... If anyone wants to see what I mean, load up Unity's built-in global fog shader with some random geometry and watch the left and right sides of the screen as you turn left and right... It's a bit frustrating, as my world-space positions, which are based off the depth texture, change as the camera rotates, which is screwing up my lighting calculations...
I've included a little demo scene with a plane to highlight the problem. It's particularly noticeable on the right corner... as you rotate the camera around slowly you'll see the corner get more 'red' instead of staying the same color. And like I mentioned, seeing as my world-space positions are based off this... it screws things up! I'm inclined to think it's a problem on my end, but Unity's built-in global fog shader has the same problem, so I don't know...
Distortion is a general problem with perspective-based cameras. Lowering the field of view usually reduces it, but it still exists and can't be completely avoided.
Answer by Owen-Reynolds · Jul 13, 2013 at 03:08 PM
To expand on BenProd: depth is not distance. It's faster for the graphics pipeline to convert everything to the camera's local coordinates and use Z as depth. This means that a duck 30 meters to your left and 40 meters ahead has a depth of 40. Rotate to face that same duck and its depth magically increases to 50. Of course, the distance was always 50, but no one was computing that.
You see this issue in lots of games with depth culling. Walk back until a tree has an LOD change (vanishes, etc...), then turn sideways and the tree will pop back in on the edge of the screen, since its depth has gotten lower.
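To make the arithmetic concrete, here's the duck example as a tiny Cg-style sketch (the coordinate values are illustrative, and +z is taken as the camera's forward axis for simplicity):

```
// A duck 30 m to the left and 40 m ahead, in the camera's local space.
float3 viewPos = float3(-30, 0, 40);
float depth = viewPos.z;       // 40 -- what the depth buffer encodes
float dist  = length(viewPos); // sqrt(30^2 + 40^2) = 50 -- the true distance
// Rotate the camera to face the duck and viewPos becomes roughly (0, 0, 50):
// the depth jumps to 50, while the distance was 50 all along.
```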
Thanks Owen. So there's no effective way around it?
No, our real eye has a curved retina. The field of view represents an angle and should therefore result in a curved surface as well (actually a spherical sector). But since our monitor is flat, we use a viewing frustum instead.
Optics aside, the graphics card is wired to use "local-Z = depth" for speed. So, we could do all the math on the CPU, using real distance. I assume overnight renders do this (and it's one of the reasons they take all night to render 30 seconds of footage).
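That said, an image effect can still recover stable world-space positions from the depth texture by scaling a per-pixel frustum ray by the linear depth; this is essentially what Unity's built-in global fog shader does. A rough Cg sketch, where _CameraWS and i.interpolatedRay are assumed to be supplied by the effect's script and vertex shader (the names are illustrative):

```
// requires #include "UnityCG.cginc"
sampler2D _CameraDepthTexture;  // bound automatically by Unity
float4 _CameraWS;               // camera world position, set from script

// In the fragment shader; i.interpolatedRay is the ray from the camera
// to this pixel's far-plane corner, interpolated from the vertex shader.
float rawDepth  = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
float depth01   = Linear01Depth(rawDepth);  // 0 at the camera, 1 at the far plane
float3 worldPos = _CameraWS.xyz + depth01 * i.interpolatedRay.xyz;
// The per-pixel ray puts the perspective spread back in, so worldPos stays
// fixed as the camera rotates, even though the stored depth changes.
```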
Answer by spraycanmansam · Mar 12, 2014 at 07:37 AM
Just to follow up on this -- I mentioned my lighting calculations were being stuffed up in my original post. I had attributed this to my world-space calculations from the depth texture. It actually turned out to be the decoded normals from the _CameraDepthNormalsTexture: they were still in view space! I had forgotten to convert my normals to world space the same way as the depth values I was working with.
Just wanted to clarify that point :)
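For anyone who hits the same thing, a minimal sketch of that fix, assuming a _CamToWorld uniform filled from script with camera.cameraToWorldMatrix (the uniform name is just illustrative):

```
// requires #include "UnityCG.cginc"
sampler2D _CameraDepthNormalsTexture;  // bound automatically by Unity
float4x4 _CamToWorld;  // from C#: material.SetMatrix("_CamToWorld", cam.cameraToWorldMatrix);

// In the fragment shader:
float4 enc = tex2D(_CameraDepthNormalsTexture, i.uv);
float depth01;
float3 viewNormal;
DecodeDepthNormal(enc, depth01, viewNormal);  // the decoded normal is in VIEW space
float3 worldNormal = normalize(mul((float3x3)_CamToWorld, viewNormal));
// worldNormal now matches world-space positions reconstructed from the depth texture.
```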