Any solution to floating point precision issues when working with extreme FOVs?
I need to have a camera located at an extreme distance, and to allow the user to zoom in by correctly setting the FOV (as opposed to faking it by moving the camera in world position).
Imagine a scenario using a satellite, a surveillance airplane or a drone, observing a location from far away.
However, there are clear issues when rendering objects far away from the camera while using an FOV below 20.
An example video demonstrating the issue, zooming from an FOV of 20 down to an FOV of 1.0: https://www.youtube.com/watch?v=1lSR7KHg3D8 (The camera is located 1000m/units away, 1000m/units up.)
Notice the 'wobbling' caused by floating-point imprecision.
Is there any known solution to this issue, or do we just have to accept that render-engines today have too many precision-issues to be able to handle these use-cases?
NOTE
Moving the camera to fake the zoom factor suffers from incorrect lens depth as well. Perhaps more importantly, it can cause the camera to be placed beyond blocking objects without those objects blocking the view, if the base position isn't guaranteed to be high up and clear of any tall landmarks or terrain.
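For reference, the magnification a narrower FOV gives relative to a baseline follows from the tangents of the half-angles; this is standard perspective geometry, not anything Unity-specific. A minimal sketch, using the 20-to-1 zoom from the video above:

```python
import math

def zoom_factor(base_fov_deg: float, fov_deg: float) -> float:
    """Apparent magnification when narrowing the vertical FOV from a baseline."""
    return math.tan(math.radians(base_fov_deg / 2)) / math.tan(math.radians(fov_deg / 2))

# Zooming from FOV 20 down to FOV 1 is roughly a 20x magnification.
print(zoom_factor(20.0, 1.0))
```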
Answer by Zarkow · May 28, 2016 at 09:41 AM
I'll try to answer this myself:
From the manual regarding Camera: http://docs.unity3d.com/Manual/class-Camera.html
Clip Planes
The Near and Far Clip Plane properties determine where the Camera’s view begins and ends. The planes are laid out perpendicular to the Camera’s direction and are measured from its position. The Near plane is the closest location that will be rendered, and the Far plane is the furthest.
The clipping planes also determine how depth buffer precision is distributed over the scene. In general, to get better precision you should move the Near plane as far as possible.
Note that the near and far clip planes together with the planes defined by the field of view of the camera describe what is popularly known as the camera frustum. Unity ensures that when rendering your objects those which are completely outside of this frustum are not displayed. This is called Frustum Culling. Frustum Culling happens irrespective of whether you use Occlusion Culling in your game.
For performance reasons, you might want to cull small objects earlier. For example, small rocks and debris could be made invisible at much smaller distance than large buildings. To do that, put small objects into a separate layer and set up per-layer cull distances using Camera.layerCullDistances script function.
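As a sketch, the per-layer cull distances mentioned above can be set from a script like the one below. The layer index 8 for small debris is an assumption; by default an entry of 0 means the camera's far plane is used.

```csharp
using UnityEngine;

public class SmallObjectCulling : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();

        // One float per layer (32 layers); 0 means "use the camera's far clip plane".
        float[] distances = new float[32];
        distances[8] = 150f; // assumed: layer 8 holds small rocks and debris

        cam.layerCullDistances = distances;
    }
}
```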
From: http://www.davenewson.com/dev/unity-notes-on-rendering-the-big-and-the-small
Camera issues on large scales
Cameras in Unity (and most other engines) have a Near and Far clipping plane, which defines the View Frustum. Everything which falls within the View Frustum will be rendered, and anything outside this range will not.
To render something really small like an apple you may want to be able to bring the camera really close to the object, perhaps as close as 1cm (0.01u) before clipping gets in the way. Conversely, to render something really huge like a planet, you'd set your far clipping plane to encompass the object, at whatever distance it is. For the sake of our example, let's say we want to render out to our maximum play area, 100km (100000u).
If you plug these two values into a Camera (near=0.01, far=100000), you'll notice a few issues. Firstly, polygons might start to intersect and flicker as they z-fight. If you're using Unity Pro with Screen Space Ambient Occlusion (SSAO) applied to the camera, you'll also notice the effect looks like muddy stripes. Finally, if you really do have an object placed out at 100000u you might also notice some oddities in the lighting, with dynamic shadows becoming poorly detailed, or flicking on and off at certain viewing angles.
This is all caused by a lack of z-buffer (depth) accuracy when using a camera frustum this large, and is basically the same issue we were having with our floating point accuracy on the coordinate system, but applied to a camera. A simple way to imagine the problem is the distance between the Near and Far clipping planes will be divided like slices in bread. The number of slices is always the same, but they need to be reasonably closely spaced for everything to work as expected. Setting such a long frustum on the camera results in the slices being spaced too far apart, and weird things start to happen.
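The bread-slice analogy can be made concrete. With a standard perspective projection the depth buffer stores (roughly) a function of 1/z, so the world-space distance covered by one depth step grows with z² and shrinks as the near plane moves out. A rough sketch, assuming a 24-bit depth buffer and the conventional [0,1] depth mapping:

```python
def depth_step_at(z: float, near: float, far: float, bits: int = 24) -> float:
    """World-space distance covered by one depth-buffer step at view depth z.

    Uses the standard perspective mapping d = (far/(far-near)) * (1 - near/z),
    whose inverse has derivative dz/dd = z^2 * (far-near) / (near*far).
    """
    one_step = 1.0 / (2 ** bits)
    return z * z * (far - near) / (near * far) * one_step

# At 50 km into a 0.01..100000 frustum, one depth step spans kilometres:
print(depth_step_at(50_000, near=0.01, far=100_000))
# Moving the near plane out to 100 brings it down to roughly a metre:
print(depth_step_at(50_000, near=100, far=100_000))
```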
The default Unity camera near and far clipping planes are 0.3 and 1000 respectively, so it's recommended to use a ratio similar to this (1:10000) when configuring your cameras to avoid the issues above. For more information, check the MSDN Article "Common Techniques to Improve Shadow Depth Maps" and Unity's Camera Documentation
I adjusted my Near Plane to 100 and the apparent, horrible wobbling has decreased drastically. But that also limits camera placement to 'no nearer than 100 meters from any object that should be rendered'.
I hope there will be a chance to increase precision in the future. I wouldn't mind using an int64 for location (divided by 1000 for centimeter precision; the developer could choose the resolution).
Answer by Carve_Online · Jan 29, 2017 at 01:23 PM
There are two different problems you have to deal with.
The first is the floating point precision problem and the second is the z-buffer issue caused by using the camera to view both very close items and very far items.
The second problem is easily cured by having multiple stacked cameras, each with a different range: a camera from 0-1000 (depth 0, clear on depth only), a camera from 1000-10,000 (depth -1, clear on depth only) and a camera from 10,000 to max (depth -2, clear on skybox). This setup makes sure you still have the z-buffer precision necessary to view small, close items correctly while at the same time viewing large, far-away items.
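A minimal sketch of that camera stack, using the ranges suggested above. `depth` and `clearFlags` are the standard Unity `Camera` properties; a higher depth renders later, on top. The assumption is that three Camera components sharing the same transform and FOV are assigned in the inspector.

```csharp
using UnityEngine;

public class CameraStack : MonoBehaviour
{
    public Camera farCam, midCam, nearCam;

    void Start()
    {
        // Renders first: clears the colour buffer with the skybox.
        farCam.depth = -2;
        farCam.clearFlags = CameraClearFlags.Skybox;
        farCam.nearClipPlane = 10000f;
        farCam.farClipPlane = 1000000f;

        // Renders second: keeps the colour, resets only the depth buffer.
        midCam.depth = -1;
        midCam.clearFlags = CameraClearFlags.Depth;
        midCam.nearClipPlane = 1000f;
        midCam.farClipPlane = 10000f;

        // Renders last, on top of everything else.
        nearCam.depth = 0;
        nearCam.clearFlags = CameraClearFlags.Depth;
        nearCam.nearClipPlane = 0.3f;
        nearCam.farClipPlane = 1000f;
    }
}
```

Because each camera keeps its own near:far ratio small, every range gets a full depth buffer to itself.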
The floating point precision problem will only really show itself if you are trying to move objects, or have physics act on objects, that are very far away from 0,0,0 (usually 10,000 or more units). You can sometimes help things by making sure the origin (0,0,0) is in the middle of your view distance rather than where the camera is. For instance, Camera (0,0,0) and object (10000,0,0) might cause issues, but Camera (-5000,0,0) and object (5000,0,0) would not.
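The scale of the problem is easy to quantify: positions are 32-bit floats, and the gap between representable values grows with magnitude. A quick check, computing the float32 ULP via bit manipulation (assuming one unit is one metre):

```python
import struct

def float32_ulp(x: float) -> float:
    """Gap between x and the next representable 32-bit float above it."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    next_up = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return next_up - x

print(float32_ulp(100.0))    # steps far below a millimetre near the origin
print(float32_ulp(10000.0))  # about a millimetre of positional granularity
```

At 10,000 units out, positions can only move in roughly millimetre-sized jumps, which is where the visible wobble comes from.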
There are also some advanced techniques that allow you to move the origin at runtime to minimize the floating point precision problem. I believe there is an asset called 'World Streamer' on the asset store that does this.
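The core of such an origin shift can be sketched in a few lines. Assumptions here: a single player transform drives the shift, and every movable object sits at the root of the scene hierarchy; assets like 'World Streamer' wrap the same idea together with content streaming.

```csharp
using UnityEngine;

public class FloatingOrigin : MonoBehaviour
{
    public Transform player;
    public float threshold = 5000f; // shift once the player drifts this far out

    void LateUpdate()
    {
        Vector3 offset = player.position;
        if (offset.magnitude < threshold)
            return;

        // Move every root object (the player included) back towards 0,0,0,
        // so all coordinates stay small while relative positions are unchanged.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
            root.transform.position -= offset;
    }
}
```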
But I believe in your case the problem is just with the z-buffer, because you are only using one camera.