Is there a version of Camera.WorldToScreenPoint for equirect projections?
I'm using RenderTexture.ConvertToEquirect to capture 360 screenshots of my game, and I'd like to know the coordinates of a given object in my scene in the resulting images. For regular screenshots, I know I can do this with Camera.WorldToScreenPoint. Is there a similar function for equirect projections?
You would have to do it manually. WorldToScreenPoint relies on the projection matrix, but because equirectangular images require more than one camera as a source, there isn't a single projection matrix. Instead, you would have to think in terms of the original cubemap: calculate the screen position for each of the 6 cameras, find which one contains the point, then convert that point into the equirect format.
The top answer at the following link explains how to do this and includes some great Unity examples:
https://stackoverflow.com/questions/34250742/converting-a-cubemap-into-equirectangular-panorama
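If you do go the cubemap route, the first step is working out which of the six faces the point falls on. Here is a minimal sketch of that step, assuming you already have the world-space direction from the camera to the point; the class and method names are mine, not part of the Unity API:
using UnityEngine;

public static class CubemapFaceUtil
{
    // Picks the cubemap face a direction falls on by looking at its dominant axis.
    public static CubemapFace DominantFace(Vector3 dir)
    {
        float ax = Mathf.Abs(dir.x);
        float ay = Mathf.Abs(dir.y);
        float az = Mathf.Abs(dir.z);

        if (ax >= ay && ax >= az)
            return dir.x >= 0f ? CubemapFace.PositiveX : CubemapFace.NegativeX;
        if (ay >= ax && ay >= az)
            return dir.y >= 0f ? CubemapFace.PositiveY : CubemapFace.NegativeY;
        return dir.z >= 0f ? CubemapFace.PositiveZ : CubemapFace.NegativeZ;
    }
}
From there, the linked answer shows how a position on that face maps into the equirectangular image.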
Answer by Bunny83 · Aug 09, 2019 at 11:47 AM
No, there is no built-in conversion, since the cubemap is a 360° projection and not directly related to the camera view or the camera projection matrix. However, since the result is an equirect projection of the full 360°, a plain lat / lon conversion should work just fine, treating the camera position as the center.
I have never used the "RenderToCubemap()" or "ConvertToEquirect()" methods, so I don't know if the rendered image is actually relative to the camera rotation or relative to worldspace. Though the only difference would be that you first need to convert between local and world space coordinates.
To convert from worldspace to the equirect projection, just determine the longitude / azimuth (rotation angle around the y-axis) as well as the latitude / polar angle with respect to the camera position.
For a localspace conversion you can just use (where cam is your capturing Camera):
Vector3 v = cam.transform.InverseTransformPoint(worldSpacePoint);
For a worldspace conversion, just subtract the camera position:
Vector3 v = worldSpacePoint - cam.transform.position;
With that vector we can determine the azimuth using Mathf.Atan2:
float azimuth = Mathf.Atan2(v.z, v.x);
The polar angle can be determined from the normalized vector's y component:
Vector3 dir = v.normalized;
float polar = Mathf.Asin(dir.y);
Next you want to make sure the angles are positive:
azimuth += Mathf.PI;        // shift -180°..180° to 0°..360°
polar += Mathf.PI * 0.5f;   // shift -90°..90° to 0°..180°
Next divide by the angle range to get a value between 0 and 1:
float U = azimuth / (Mathf.PI * 2f);
float V = polar / Mathf.PI;
Those are essentially the UV coordinates of your equirect image. The z coordinate (the distance from the camera) is simply the magnitude of the v vector.
Note that, as I said, there might be some offsets involved or some signs that need to be flipped, but the general idea is the same.
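Putting the steps together, here is a minimal sketch of a complete helper, assuming the cubemap is world-space relative; the class and method names are my own, and as noted the exact offsets and signs may still need adjusting for your capture setup:
using UnityEngine;

public static class EquirectUtil
{
    // Returns (U, V, distance): U and V in the 0..1 range, distance in world units.
    public static Vector3 WorldToEquirectUV(Camera cam, Vector3 worldSpacePoint)
    {
        // World-relative direction; swap in InverseTransformPoint if your cubemap
        // turns out to be relative to the camera rotation instead.
        Vector3 v = worldSpacePoint - cam.transform.position;
        Vector3 dir = v.normalized;

        float azimuth = Mathf.Atan2(v.z, v.x);   // rotation around the y-axis, -PI..PI
        float polar = Mathf.Asin(dir.y);         // elevation from the horizon, -PI/2..PI/2

        azimuth += Mathf.PI;                     // shift to 0..2*PI
        polar += Mathf.PI * 0.5f;                // shift to 0..PI

        float u = azimuth / (Mathf.PI * 2f);     // normalize to 0..1
        float vCoord = polar / Mathf.PI;         // normalize to 0..1

        return new Vector3(u, vCoord, v.magnitude);
    }
}
To get pixel coordinates in the captured texture, multiply U and V by the texture width and height (and flip V if the result comes out upside down).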
To calculate the reverse, you just apply the steps in the other direction: convert the UV values back into angles (U spans a 2*PI range, V a PI range), construct a normalized direction vector from those angles using Mathf.Sin and Mathf.Cos, multiply that direction by the wanted distance from the camera to get an actual point, and in the last step either add the camera position or use TransformPoint.
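A minimal sketch of that reverse mapping, under the same assumptions (this could live in the same EquirectUtil class as above; the method name is mine):
// Reconstructs a world-space point from equirect UV (0..1) and a distance.
public static Vector3 EquirectUVToWorld(Camera cam, float u, float v, float distance)
{
    // Undo the 0..1 normalization and the positive-angle offsets.
    float azimuth = u * Mathf.PI * 2f - Mathf.PI;
    float polar = v * Mathf.PI - Mathf.PI * 0.5f;

    // Rebuild the normalized direction from the two angles.
    float cosPolar = Mathf.Cos(polar);
    Vector3 dir = new Vector3(cosPolar * Mathf.Cos(azimuth),
                              Mathf.Sin(polar),
                              cosPolar * Mathf.Sin(azimuth));

    // Scale by the wanted distance and move back into world space.
    return cam.transform.position + dir * distance;
}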
Thanks for the elaborate answer, I'll implement this and see if it works ;)
FYI, cubemaps are rendered relative to the camera view when using Camera.MonoOrStereoscopicEye.Left or Camera.MonoOrStereoscopicEye.Right, but relative to world space when using Camera.MonoOrStereoscopicEye.Mono.