What is the best way to render parallel scenes?
Let's say that I want to emulate multiple parallel scenes in Unity. Everything about the game logic is solved; my only problem now is rendering. What I need is to have many worlds running simultaneously in the Unity scene, with each camera rendering only its own world. What would be the most efficient way to achieve this? Some options I have considered:
Place each subscene's origin far enough from each other
Use filtering by layers
Disable the subscene's root GameObject and only enable it during the subscene camera's rendering
Same as above, but change the layer to some invisible one instead of disabling
None of these options is ideal, and all of them, some more obviously than others, have performance penalties.
Solution 1:
Pros:
No extra work, easy to implement
Cons:
Not guaranteed to stay separate
Floating-point precision (if Unity could first subtract a common root/transform from the camera and renderers, this would work fine, but it can't)
Probable internal engine penalties, such as octree size/depth (a small sketch of this approach follows below)
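For reference, a minimal sketch of the offset idea under these assumptions; the `SubsceneSpawner` class, the `spacing` value and the prefab field are illustrative names, not anything Unity provides:

```csharp
using UnityEngine;

// Hypothetical helper: instantiates each parallel world at a large offset
// along X so their contents never overlap visually or physically.
public class SubsceneSpawner : MonoBehaviour
{
    public GameObject subscenePrefab;   // root prefab of one world
    public float spacing = 10000f;      // distance between world origins

    public GameObject SpawnSubscene(int index)
    {
        // Large offsets keep worlds apart, but float precision degrades
        // far from the origin (jittering transforms, shadow artifacts).
        Vector3 origin = new Vector3(index * spacing, 0f, 0f);
        return Instantiate(subscenePrefab, origin, Quaternion.identity);
    }
}
```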
Solution 2:
Pros:
Probably optimized in the engine
Cons:
Limited number of layers; each scene would probably require several of them for special effects (ignoring any game-logic requirements such as collision filtering, which I don't need here)
For procedurally created scenes this would require dynamic allocation of layers and translation between them, which in turn needs either a special workaround in every class that uses layers or automatic reflection-based corrections, making it slow and impractical. Also, layers are needed for so many things in the Unity engine, and they are shared per purpose... (the basic per-camera filtering is sketched below)
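A minimal sketch of the basic layer filtering, assuming layers named "World0", "World1", ... already exist in the project's layer settings (the class and method names are just illustrative):

```csharp
using UnityEngine;

// Minimal sketch of layer-based filtering per camera.
public static class SubsceneLayers
{
    // Restrict a camera so it renders only a single world's layer.
    public static void AssignWorld(Camera cam, int worldIndex)
    {
        int layer = LayerMask.NameToLayer("World" + worldIndex);
        cam.cullingMask = 1 << layer;   // cull everything else
    }

    // Recursively move a world's whole hierarchy onto its layer.
    public static void SetLayerRecursively(GameObject root, int layer)
    {
        root.layer = layer;
        foreach (Transform child in root.transform)
            SetLayerRecursively(child.gameObject, layer);
    }
}
```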
Solution 3:
Pros:
Technically this would work best; there is no real limit on the number of subscenes, BUT
Cons:
Each component would receive OnEnable and OnDisable every frame. They'd need to anticipate this and handle it correctly. Even then, I think this approach would be very slow: even if I'm disabling a single GameObject, the state is inherited by the children, which need to be actively disabled too. Unless I'm wrong, this solution is unusable.
+Alternative: only search for and disable Renderers? Is there a big penalty for updating some internal trees in the engine here, or are they rebuilt before each camera renders anyway? (A sketch of the per-camera enable/disable is below.)
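A rough sketch of option 3 under the built-in render pipeline, using the static Camera callbacks; the `worldCamera` and `worldRoot` fields are hypothetical wiring you would set up yourself, and this component should live outside `worldRoot` so it isn't disabled along with it:

```csharp
using UnityEngine;

// Enable a world's root only while its own camera is rendering.
public class SubsceneVisibility : MonoBehaviour
{
    public Camera worldCamera;
    public GameObject worldRoot;

    void OnEnable()
    {
        Camera.onPreCull += HandlePreCull;
        Camera.onPostRender += HandlePostRender;
    }

    void OnDisable()
    {
        Camera.onPreCull -= HandlePreCull;
        Camera.onPostRender -= HandlePostRender;
    }

    void HandlePreCull(Camera cam)
    {
        // Every component under worldRoot receives OnEnable/OnDisable here,
        // once per camera, every frame -- the cost the question worries about.
        worldRoot.SetActive(cam == worldCamera);
    }

    void HandlePostRender(Camera cam)
    {
        if (cam == worldCamera)
            worldRoot.SetActive(false);
    }
}
```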
Solution 4:
This is, for now, the most acceptable option for me, but I don't know if there is a penalty for changing object layers, or whether Unity precaches or maintains filtered lists or trees based on them. It would basically mean that before each camera renders, it changes the layer of every object depending on whether it wants to render it, and restores the layers afterwards (see the sketch below). By "most acceptable" I don't mean ideal; I'm pretty sure something would go wrong, or it would be very slow. I'd also need to maintain a dictionary or some other way of keeping each object's original layer so it can be restored, and I don't want to allocate memory for this every frame.
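A sketch of option 4 under the built-in pipeline, with the same hypothetical wiring as above (`worldCamera`, the `otherWorldRoots` list and `hiddenLayer` are assumptions, not established API); a real version would also cache the child lists instead of querying them every frame:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Before this world's camera culls, move every other world onto a hidden
// layer; restore the original layers once rendering is done.
public class SubsceneLayerSwapper : MonoBehaviour
{
    public Camera worldCamera;
    public List<GameObject> otherWorldRoots;
    public int hiddenLayer = 31;   // a layer excluded from the camera's culling mask

    // Preallocated and reused, so the per-frame bookkeeping allocates nothing.
    readonly Dictionary<GameObject, int> originalLayers = new Dictionary<GameObject, int>();

    void OnEnable()  { Camera.onPreCull += HandlePreCull;  Camera.onPostRender += HandlePostRender; }
    void OnDisable() { Camera.onPreCull -= HandlePreCull;  Camera.onPostRender -= HandlePostRender; }

    void HandlePreCull(Camera cam)
    {
        if (cam != worldCamera) return;
        originalLayers.Clear();
        foreach (var root in otherWorldRoots)
            // NOTE: GetComponentsInChildren allocates; cache this in practice.
            foreach (var t in root.GetComponentsInChildren<Transform>(true))
            {
                originalLayers[t.gameObject] = t.gameObject.layer;
                t.gameObject.layer = hiddenLayer;
            }
    }

    void HandlePostRender(Camera cam)
    {
        if (cam != worldCamera) return;
        foreach (var kvp in originalLayers)
            kvp.Key.layer = kvp.Value;   // restore original layers
    }
}
```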
None of these seem fit for the project, mostly for performance reasons. Maybe there's some simple solution I am missing, or some people have solved this themselves. Any help will be appreciated.
It would be absolutely best if Unity allowed overriding the culling methods (for camera culling, lights, and many other similar things), just like, for example, the OGRE engine does.
No, not that much... The best option for my case is the layer filtering, at least for the physics (I allocate a layer for each parallel world, replacing the "pseudo layer" with the real one in a Rigidbody wrapper), but it's a lot of extra setup IMO.
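A hypothetical sketch of that "pseudo layer" allocation, assuming a fixed pool of real layers reserved for parallel worlds (the class name, the layer indices and the world-id scheme are all illustrative, not from the original post):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Maps a logical world id to one of a small pool of real Unity layers,
// so physics and rendering can filter per world.
public static class WorldLayerAllocator
{
    static readonly Queue<int> freeLayers =
        new Queue<int>(new[] { 8, 9, 10, 11, 12, 13, 14, 15 });
    static readonly Dictionary<int, int> worldToLayer = new Dictionary<int, int>();

    public static int LayerForWorld(int worldId)
    {
        int layer;
        if (!worldToLayer.TryGetValue(worldId, out layer))
        {
            layer = freeLayers.Dequeue();   // throws if the pool runs out
            worldToLayer[worldId] = layer;
        }
        return layer;
    }

    public static void ReleaseWorld(int worldId)
    {
        int layer;
        if (worldToLayer.TryGetValue(worldId, out layer))
        {
            worldToLayer.Remove(worldId);
            freeLayers.Enqueue(layer);
        }
    }
}
```

This is where the limited number of layers bites: with only a handful of spare layers, the pool caps how many worlds can exist at once.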
Ah well. It would be nice to have some feedback from Unity on this. Are you doing research, or is this for a game/product?
I'm also interested in this.
I have a game that is so far using Option 1. This works, as most of the subscenes are physically separated and will never interact with each other. I need rendering to be switched off but the physics to keep running, which is important for me. So far I'm at the implementing-but-not-optimising stage.
I've been looking into trying to generalise some of what I've written so it can be reused in different ways in my scene (and maybe with a view to releasing what I have as an Asset so people don't have to start from scratch, but I'm not remotely sold on that).
I'm still too early in the process to offer real help, but I thought it worth saying you're not alone in doing this, and I'll be watching this thread with interest.
If you need the rendering to be switched off, can you just disable the MainCamera?
I have a similar problem and tried rendering with one layer per camera to achieve some parallel rendering. Performance-wise there was zero difference... so it seems there is no parallel rendering at all.
Perhaps this can be solved now with SRP? I haven't explored it yet, so it's a wild guess, but I think problems like this are the reason SRP even exists and what it tries to solve.
Back in the day, switching render targets was a big thing. An hour ago I noticed the possibility of letting several cameras render into the same RT (in Unity 2019.4), so I used one big texture and rendered into it. Too bad there was no change in performance. So rendering all cameras into one big texture performs the same as rendering each into its own RT.
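A minimal sketch of that shared-render-target test, assuming two cameras and a placeholder resolution (the field names and sizes are illustrative); each camera's viewport rect picks its half of the shared texture:

```csharp
using UnityEngine;

// Two cameras writing into one large RenderTexture, side by side.
public class SharedTargetTest : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;
    RenderTexture sharedTarget;

    void Start()
    {
        sharedTarget = new RenderTexture(2048, 1024, 24);

        leftCamera.targetTexture = sharedTarget;
        leftCamera.rect = new Rect(0f, 0f, 0.5f, 1f);    // left half

        rightCamera.targetTexture = sharedTarget;
        rightCamera.rect = new Rect(0.5f, 0f, 0.5f, 1f); // right half
    }

    void OnDestroy()
    {
        if (sharedTarget != null)
            sharedTarget.Release();
    }
}
```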