How can I update the camera orientation faster than the UI loop?
I'm developing a VR application which uses a modified system for tracking the HMD pose (usually a Vive). Most of the time it works nicely and is on par with the native Vive or Oculus support using their respective tracking systems.
It fails when the game logic is very demanding (in my case, a very complex geometry), so that the UI loop takes longer than the HMD's refresh interval to complete. Since I update the HMD pose in the UI loop, the pose is only updated at the UI loop's frequency, and the image naturally becomes sluggish.
Now, the thing that makes me wonder whether there's a better way to handle the pose is this: for the Oculus and the Vive, even if Update, OnPreCull and OnPreRender are called at lower rates (I measured this; it can drop below 30 fps), the HMD pose or orientation is still updated at the full 90 fps, which makes for a much more pleasant experience.
My question therefore is this: is there any way to modify the pose right before the frame is passed to the GPU for rendering, outside of the UI loop? The Oculus and the Vive are doing something like this, but it seems to happen in the innards of Unity.
Besides the basic UI-loop callbacks Update, OnPreCull and OnPreRender, I have tried adding a command buffer along these lines:
commandBuffer = new CommandBuffer();
commandBuffer.SetViewMatrix(myViewMatrix);
hmdCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);
and using that to set the camera pose, but to no avail. I didn't even run into the synchronization issues I was expecting.
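For completeness, this is roughly how that attempt was wired up (simplified sketch; the MonoBehaviour wrapper and field names are illustrative, only the command-buffer calls are from the snippet above):

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class HmdViewMatrixOverride : MonoBehaviour
{
    public Matrix4x4 myViewMatrix;   // pose computed by our own tracking system

    Camera hmdCamera;
    CommandBuffer commandBuffer;

    void OnEnable()
    {
        hmdCamera = GetComponent<Camera>();
        commandBuffer = new CommandBuffer { name = "Override HMD view matrix" };
        hmdCamera.AddCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);
    }

    void OnPreRender()
    {
        // Refill the buffer with the latest pose; this still runs only once per scripted frame.
        commandBuffer.Clear();
        commandBuffer.SetViewMatrix(myViewMatrix);
    }

    void OnDisable()
    {
        hmdCamera.RemoveCommandBuffer(CameraEvent.BeforeForwardOpaque, commandBuffer);
    }
}

Rebuilding the buffer in OnPreRender still happens only once per scripted frame, which turned out to be exactly the limitation discussed in the rest of this thread.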
Edit, since the result is somewhat hidden in this long thread: both the Oculus and the Vive generate fake frames to interpolate between the application-provided frames if the application falls behind and doesn't provide frame data in time. These frames are what make the native Lighthouse tracking seem smoother than our own implementation. This feature is outside the scope of Unity, and therefore my problem cannot be solved in Unity. The question misrepresents the inner workings of Unity slightly, because at the time I wrote it I didn't understand that this is not a Unity feature at all. Even native rendering plugins run at the same framerate as the game code.
After some playing around with a native plugin, I have measured that a commandBuffer as above is not called at the full frame rate.
To give a bit more detail: I have a native plugin which is invoked along these lines:
IntPtr cbFunc = GetMyCommandBufferEventFunc();   // entry point exported by the native plugin
commandBuffer = new CommandBuffer();
// Call back into the native plugin when the camera reaches this event each frame.
commandBuffer.IssuePluginEvent(cbFunc, 0);
hmdCamera.AddCommandBuffer(CameraEvent.BeforeImageEffectsOpaque, commandBuffer);
The native plugin measures the time between calls. If I make sure that my UI loop is slow, this plugin is also called at a lower frequency than the 90fps of the actual display.
Any hints where else I might be able to plug in?
The only alternatives I can think of are to either hijack openvr_api.dll or to write a full-fledged SteamVR driver. The latter runs into trouble because I want to use the Vive as a display and thus have to keep its driver running somehow as well. The former doesn't sound like something you would want to set up on somebody else's machine.
You said you were updating the HMD pose in Update. Since you can do that, do it in LateUpdate instead.
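Something along these lines (sketch only; GetCustomHmdPose is a placeholder for your own tracking call, not a real API):

using UnityEngine;

public class LateUpdatePoseDriver : MonoBehaviour
{
    public Transform hmdCameraTransform;   // transform of the VR eye camera

    void LateUpdate()
    {
        // Runs after all Update calls, but still once per frame of the script loop.
        Vector3 position;
        Quaternion rotation;
        GetCustomHmdPose(out position, out rotation);
        hmdCameraTransform.SetPositionAndRotation(position, rotation);
    }

    void GetCustomHmdPose(out Vector3 position, out Quaternion rotation)
    {
        // Placeholder: query the custom tracking system here.
        position = hmdCameraTransform.position;
        rotation = hmdCameraTransform.rotation;
    }
}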
Thank you. From the documentation I gather that LateUpdate is also executed in the UI loop, so I don't think it will solve my problem; it is actually executed before OnPreCull and OnPreRender, which I already tried. Documentation reference: https://docs.unity3d.com/Manual/ExecutionOrder.html
Diving into Unity a bit more, I'm convinced that a native rendering plugin can do the trick. Actually, the headers look like some functionality was added specifically for my purpose. I don't think it was added for me, though; that's probably how the Oculus and Vive support was implemented.
So far so good, my plugin is correctly initialized, now on to making it do something ...
No worries, even if your answer didn't solve my immediate problem, it allowed me to learn a bit more about Unity.
Answer by Anotheryeti · Oct 04, 2017 at 07:47 AM
It sounds like you have some resource-intensive code that is already decoupled from rendering. If that's the case, just put it into another thread, and be careful about synchronization. Or just put it into a coroutine and go for the cooperative multitasking route.
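For the coroutine route, a minimal sketch (HeavyWorkChunk stands in for whatever expensive logic you run; it's not a real API):

using System.Collections;
using UnityEngine;

public class HeavyWorkScheduler : MonoBehaviour
{
    void Start()
    {
        StartCoroutine(RunHeavyWork());
    }

    IEnumerator RunHeavyWork()
    {
        while (true)
        {
            // Do one bounded slice of work, then yield so the current frame can finish.
            HeavyWorkChunk();
            yield return null;
        }
    }

    void HeavyWorkChunk()
    {
        // Placeholder for one slice of the expensive game logic.
    }
}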
If the issue is how much you're rendering, no amount of finagling will fix the root problem of GPU throughput. At that point, it's just "render less."
Thanks for the pointer. Indeed, asynchronous time warp sounds like what I figured Unity would be doing internally. From the description you linked to, I take it that it actually happens inside the Oculus or SteamVR code, and that I therefore shouldn't be surprised if there's no way of doing it even in a native render plugin.
I don't believe the resource-intensive part of my scene can be decoupled from rendering, as the scene has a large number of polygons (anything between ~10 and ~100 million polygons with complex texture shaders), so culling and other optimizations which take place in the UI loop are highly desirable.
So I'm basically left with the option of implementing a SteamVR driver, which will not be easy as I somehow have to be able to use the Vive as my display, even though I wouldn't be using its SteamVR driver :(
If that's the case, your issue is GPU throughput, and no amount of driver rewrites or anything will help you. You just need to render less. Updating your HMD pose at 90 Hz won't make any difference if the card is only outputting frames at 30 Hz. There's no magical solution here. If there was, everyone would do that.
The Oculus asynchronous timewarp/spacewarp is a last-ditch solution to keep FPS up when frames have been dropped that shouldn't have been.
Again, your solution is don't draw 100 million polygons per frame. I would suggest looking into LOD groups, occlusion culling, and chopping up larger meshes so that frustum culling works better.
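As a rough illustration of the LOD-group idea, something like this can be set up from script (normally you'd configure it in the editor; the renderers and transition heights here are made up):

using UnityEngine;

public class LodSetupExample : MonoBehaviour
{
    public Renderer highDetail;
    public Renderer mediumDetail;
    public Renderer lowDetail;

    void Start()
    {
        LODGroup group = gameObject.AddComponent<LODGroup>();
        LOD[] lods = new LOD[]
        {
            new LOD(0.6f, new Renderer[] { highDetail }),     // shown while the object fills most of the screen
            new LOD(0.3f, new Renderer[] { mediumDetail }),
            new LOD(0.1f, new Renderer[] { lowDetail })       // below that, the cheapest version
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}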
And as a last note, the term "UI Loop" is a little confusing, given that Unity is a mostly single-threaded engine for these operations. Any UI code will run in serial with game logic and rendering. Your game will only update as fast as the CPU/GPU allows. Hence your command buffers not doing anything.
Thank you for your reply. I think it now all falls in place.
I should maybe point out that our scene is almost completely static, so time warp is appropriate and leads to a result that feels almost perfect. We don't have explosions, lightning or other single-frame events. Therefore, even though our scene runs at 30 fps as measured in Unity, we find that it feels almost perfectly smooth with the Oculus or the Lighthouse, whereas with our own system it feels like you would expect 30 fps to feel. Not knowing that Oculus and Lighthouse adjust the view matrix behind Unity's back, I had assumed when I started this thread that the rendering was actually fast enough. Given that assumption, you will probably understand why I thought that the scene code and the multithreaded rendering don't run in lockstep. Once I believed that, it was of course straightforward to come up with what amounts to dynamic time warp and to want to implement it inside Unity. Thanks to your explanation, I now understand that this is not possible.
If possible, one fallback for us would be to cache the depth buffer, light maps, etc. between frames, making use of the fact that the scene is (almost) static, but that would probably require taking the rendering apart completely.
(The large number of polygons is actually related to why the Lighthouse or the Oculus don't cut it: our scene covers a large volume in high detail, much larger than what they can cover.)
Answer by tobi_s · Oct 04, 2017 at 06:33 AM
I looked into writing a plugin using the low-level native plugin rendering extensions. Timing the frequency of calls into the plugin for the UnityRenderingExtEventType events kUnityRenderingExtEventSetStereoTarget and kUnityRenderingExtEventBeforeDrawCall tells me that these calls are also only issued at the frequency of the UI loop.
In other words, it doesn't seem possible to update the orientation with higher frequency than the UI loop.
One can also do custom blits, so it might be possible to shift the rendered image across the surface of the display in between the proper updates and thus simulate the effect of a 3D re-rendering. But besides not knowing Direct3D well enough to tell whether this is feasible with what I have available, I don't see the means to synchronize it correctly, either.