CAVEs, OpenGL stereoscopy and camera frustums
Hello,
I am currently trying to build a simple asset to display Unity apps in a CAVE. What it needs:
- Server/client management to handle clusters of computers (that part is completed and working)
- Custom camera frustums for each wall (also completed and working, thanks to Mr. Kooima and https://en.wikibooks.org/wiki/Cg_Programming/Unity/Projection_for_Virtual_Reality)
- Native tracking (also completed; it handles ART bodies through UDP reading)
- Stereoscopic rendering
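For context, the per-wall off-axis frustum mentioned above (Kooima-style generalized perspective projection, as in the linked wikibook) can be sketched roughly like this in Unity C#. This is an illustrative sketch, not the poster's actual code; the class and parameter names are made up:

```csharp
using UnityEngine;

// Rough sketch of a generalized perspective projection for one CAVE wall.
// pa/pb/pc are the wall's lower-left, lower-right and upper-left corners
// in world/tracker space; pe is the tracked eye position; n/f are the
// near and far clip distances.
public static class OffAxisFrustum
{
    public static Matrix4x4 Projection(Vector3 pa, Vector3 pb, Vector3 pc,
                                       Vector3 pe, float n, float f)
    {
        Vector3 vr = (pb - pa).normalized;             // screen right axis
        Vector3 vu = (pc - pa).normalized;             // screen up axis
        Vector3 vn = Vector3.Cross(vr, vu).normalized; // screen normal

        Vector3 va = pa - pe;                          // eye to each corner
        Vector3 vb = pb - pe;
        Vector3 vc = pc - pe;

        float d = -Vector3.Dot(va, vn);                // eye-to-screen distance

        // Frustum extents projected onto the near plane.
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, vb) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, vc) * n / d;

        // Standard OpenGL-style off-axis frustum matrix.
        var m = Matrix4x4.zero;
        m.m00 = 2f * n / (r - l); m.m02 = (r + l) / (r - l);
        m.m11 = 2f * n / (t - b); m.m12 = (t + b) / (t - b);
        m.m22 = -(f + n) / (f - n); m.m23 = -2f * f * n / (f - n);
        m.m32 = -1f;
        return m;
    }
}
```

Note that this matrix follows the OpenGL (right-handed) convention while Unity's scene coordinates are left-handed, so depending on how the accompanying view matrix is built, some signs may need flipping.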
This is where I am stuck. My setup uses Windows 7, so I can forget about Direct3D 11. I use OpenGL Core to access the quad buffer (the CAVE uses Quadro K6000 cards), with the "Virtual Reality Supported" option checked and the "Stereo Display (non head mounted)" sub-option selected. Additionally, all my cameras are doubled (one left, one right, with a 6.4 cm separation), and the result is strange:
- With "Virtual Reality Supported" checked, I get stereoscopy but the frustums are broken.
- If I just uncheck this option and leave the rest the same, the frustums are correct but I get no stereoscopy.
I've tried many other options, but I can't make both work :( Any reason or solution in mind? Why does stereoscopy change the frustums?
Anyway, even if you don't have an answer, thank you for reading this :)
Antoine
Answer by mikewarren · Aug 29, 2017 at 06:47 PM
Ok, if I didn't know better, I'd think I had written this post as I have nearly the exact same setup. CAVE, (no cluster though - single host), ART tracking, Windows 7 - OpenGL, Quadro cards. I've spent considerable time working on getting CAVE frustums working.
First, are you aware of the Cluster Rendering support? I've not used it as it's been historically buggy, so I'm just driving our five-wall CAVE off of a single host. But it's supposed to handle input device synchronization, frame sync, swap buffers, etc. Just letting you know in case you weren't aware.
Secondly, are you doing anything with the stereo frustums?
In a CAVE setup with head tracking, because the display position is fixed and the head/eyes are constantly moving, you'll need to calculate a view and projection matrix for each eye's asymmetric view frustum and use the SetStereoViewMatrix and SetStereoProjectionMatrix calls on the stereo camera.
I've had a few gotchas using this approach over the past few years (as I remember):
- Those matrices are OpenGL standard (i.e., right-handed).
- I think they completely override the other camera projection/view settings, and I think they're in world space, not local space.
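As a minimal sketch of this approach, assuming a single stereo camera driven by tracked head data — `BuildViewMatrix` and `BuildProjectionMatrix` are hypothetical placeholders for the off-axis frustum math, not Unity APIs:

```csharp
using UnityEngine;

// Minimal sketch: feed per-eye off-axis matrices to one stereo camera
// every frame. BuildViewMatrix / BuildProjectionMatrix are hypothetical
// placeholders for the per-wall frustum math.
[RequireComponent(typeof(Camera))]
public class CaveStereoCamera : MonoBehaviour
{
    public float eyeSeparation = 0.064f; // 6.4 cm, as in the question
    Camera cam;

    void OnEnable() { cam = GetComponent<Camera>(); }

    void LateUpdate()
    {
        Vector3 head = transform.position; // tracked head position
        Vector3 offset = transform.right * (eyeSeparation * 0.5f);

        Apply(Camera.StereoscopicEye.Left,  head - offset);
        Apply(Camera.StereoscopicEye.Right, head + offset);
    }

    void Apply(Camera.StereoscopicEye eye, Vector3 eyePos)
    {
        // Per the gotchas above: these matrices follow the OpenGL
        // (right-handed) convention, completely override the camera's own
        // projection/view settings, and are expected in world space.
        cam.SetStereoViewMatrix(eye, BuildViewMatrix(eyePos));
        cam.SetStereoProjectionMatrix(eye, BuildProjectionMatrix(eyePos));
    }

    // Placeholder stubs; a real implementation would compute the off-axis
    // view/projection for this wall and eye position.
    Matrix4x4 BuildViewMatrix(Vector3 eyePos)       { return Matrix4x4.identity; }
    Matrix4x4 BuildProjectionMatrix(Vector3 eyePos) { return Matrix4x4.identity; }
}
```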
As Unity has continued to add VR platform support, quad buffered stereo seems to be one of the things that breaks regularly between versions. I'm using 5.5.3 now. I think 5.4 was broken, and maybe 5.6. I thought 2017.1 was working, but I've been concentrating on head mounts lately, so it's been a while since I tested the OpenGL stereo code.
Not sure if that helps, but feel free to follow up with questions if you'd like.
Thank you a lot for your answer! I'm glad I am not alone with these issues. I knew about Cluster Rendering, but didn't use it as Unity support told me they stopped developing it. Also, I needed a special licence.
I am not doing anything special between mono and stereo mode — just adding eyes and calculating more frustums. Since it works perfectly in mono, I am pretty lost. I will try it on Unity 5.5.3 as you advise. When you say "broken", do you mean it doesn't work at all, or only with issues?
I will also try changing some parameters in my frustum scripts (especially the "estimate view frustum" part, cf. my previous link), and I'll keep in touch.
Actually, what you told me totally solved the issue. I was not using SetStereoProjectionMatrix and SetStereoViewMatrix, only the monoscopic equivalents for each eye. I don't really understand the difference, but I don't need to, I guess ;) Thank you again!
I've been working with the stereoscopic rendering for a number of years, and it's confusing to me, too. I think it's a hard problem for Unity because there's more than one way to accomplish the same thing, and I think they try real hard to provide enough simplicity for common use cases and then add more in-depth support for the remainder. The interfaces are always evolving and some conflict.
I use a single camera approach and set the left/right eye matrices in the stereo calls. I suspect you could do the same thing (maybe?) with the regular projection matrix interface if you created two cameras (left/right) and had each draw into the appropriate buffer.
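That two-camera variant might look something like the following — an untested sketch; `stereoTargetEye` is the real Unity camera property, but the rig structure and names are illustrative:

```csharp
using UnityEngine;

// Untested sketch of the two-camera alternative: one camera per eye,
// each restricted to one half of the quad buffer via stereoTargetEye,
// using the regular projectionMatrix interface for the off-axis frustum.
public class TwoCameraStereoRig : MonoBehaviour
{
    const float eyeSeparation = 0.064f;

    void Start()
    {
        Camera left  = CreateEye("LeftEye",  -eyeSeparation * 0.5f, StereoTargetEyeMask.Left);
        Camera right = CreateEye("RightEye",  eyeSeparation * 0.5f, StereoTargetEyeMask.Right);
        // Each frame, the per-wall off-axis matrix would then be assigned
        // to left.projectionMatrix / right.projectionMatrix.
    }

    Camera CreateEye(string name, float xOffset, StereoTargetEyeMask mask)
    {
        var cam = new GameObject(name).AddComponent<Camera>();
        cam.transform.SetParent(transform, false);
        cam.transform.localPosition = new Vector3(xOffset, 0f, 0f);
        cam.stereoTargetEye = mask;
        return cam;
    }
}
```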
Glad to hear it's working.