Oculus VR and Sharing RenderTextures Across Both Eyes
I am attempting to leverage a RenderTexture drawn in the screen-space coordinates of an object in VR. I have a separate Camera with a target RenderTexture, which is told to render with a replacement Shader during the OnPreCull event of the Main Camera.
The object rendered into the RenderTexture with the replacement Shader is then drawn by the Main Camera during its Forward Base pass, using that same RenderTexture.
In doing this I am assuming that the Main Camera will render for the left eye, the RenderTexture will receive the results from the left eye's perspective, the object will then be rendered with that RenderTexture, and the process will repeat for the right eye.
To reiterate, the process occurs like this (a rough sketch follows the list):
1. OnPreCull of the Main Camera (from the left eye's perspective): the off-screen Camera is told to render the object into its RenderTexture.
2. The object is rendered by the Main Camera (from the left eye's perspective) with the RenderTexture as a property of its Material.
3. OnPreCull of the Main Camera (from the right eye's perspective): the off-screen Camera is told to render the object into its RenderTexture.
4. The object is rendered by the Main Camera (from the right eye's perspective) with the RenderTexture as a property of its Material.
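For concreteness, this is roughly what the driver script looks like. It is a minimal sketch of the setup described above; the names (OffscreenBufferDriver, offscreenCamera, _ScreenBuffer, etc.) are illustrative placeholders, not taken from my actual project:

```csharp
using UnityEngine;

// Attached to the Main Camera. Before the Main Camera culls, a disabled
// off-screen Camera renders the object into the shared RenderTexture with
// a replacement Shader; the Main Camera's Forward Base pass then samples it.
[RequireComponent(typeof(Camera))]
public class OffscreenBufferDriver : MonoBehaviour
{
    public Camera offscreenCamera;    // disabled in the scene; rendered manually below
    public Shader replacementShader;  // replacement Shader for the buffer pass
    public RenderTexture buffer;      // the shared RenderTexture
    public Material objectMaterial;   // Material the Main Camera draws the object with

    void OnPreCull()
    {
        // Copy the Main Camera's view/projection so the buffer is drawn
        // from the same perspective, then redirect output to the buffer.
        offscreenCamera.CopyFrom(GetComponent<Camera>());
        offscreenCamera.targetTexture = buffer;
        offscreenCamera.RenderWithShader(replacementShader, "RenderType");

        // Expose the buffer to the Material sampled during Forward Base.
        // "_ScreenBuffer" is a placeholder property name.
        objectMaterial.SetTexture("_ScreenBuffer", buffer);
    }
}
```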
With other APIs such as Google Cardboard, where there actually are two Cameras, this works as expected: each eye's perspective is correctly used for the RenderTexture and the buffer can be shared. In OVR, however, step 3 (or possibly step 1) above does not seem to happen; both eyes appear to sample the same RenderTexture, causing the resulting image to appear as "double vision".
I can only speculate that my assumptions about how the Camera works when rendering to the Oculus headset are incorrect, but I can't seem to find any information explaining what is actually going on with the Camera.
If anyone has any ideas it would be greatly appreciated.
Thanks in advance.
==
Answer by equalsequals · May 12, 2016 at 03:01 PM
For posterity, I figured out the issue.
My assumptions were correct in every respect except one detail: the OnPreCull hook fires only once for both eyes.
I can only speculate that this is an optimization, performing the frustum culling once rather than once per eye. Moving my off-screen buffer's Camera.Render call to OnPreRender of the Main Camera allows the RenderTexture to be drawn with the correct perspective for each eye, and there is no more "double vision".
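In terms of the sketch from the question, the fix amounts to moving the render call into OnPreRender, with the rest of the script unchanged:

```csharp
// Same driver as in the question, with the buffer render moved from
// OnPreCull (fires once, covering both eyes) to OnPreRender (fires per eye),
// so the off-screen Camera picks up each eye's view before the draw.
void OnPreRender()
{
    offscreenCamera.CopyFrom(GetComponent<Camera>());
    offscreenCamera.targetTexture = buffer;
    offscreenCamera.RenderWithShader(replacementShader, "RenderType");
    objectMaterial.SetTexture("_ScreenBuffer", buffer);
}
```

This works presumably because the Main Camera's matrices already reflect the current eye by the time OnPreRender runs, so CopyFrom captures the correct per-eye perspective.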