When & why should I use multiple cameras with different layers?
Sorry, this isn't a specific Unity question. I was wondering when and why I would use multiple cameras with different layers in a scene. I've noticed that in some games the developers use a separate layer for the HUD, and in 2D games they assign different background images to different layers to get a parallaxing background. Is there a general rule for when I'd want to put objects on a separate layer? Also, is there any major performance hit (on mobile or PC) from using all these cameras assigned to different layers?
Thanks!
Answer by Julien-Lynge · Dec 14, 2011 at 10:50 PM
http://unity3d.com/support/documentation/Components/Layers.html
http://unity3d.com/support/documentation/Components/class-Camera.html
Those pretty much spell it out: you use layers so that a camera renders only part of the scene (via its culling mask). Here's an example from this very answers site that shows a case where you'd want multiple cameras: http://answers.unity3d.com/questions/41298/fps-gun-clippes-through-walls.html
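As a rough sketch of the culling-mask idea from the docs above (the layer name "HUD" and the script name are assumptions, not from the original thread):

```csharp
using UnityEngine;

// Attach to a secondary camera that should render ONLY the HUD layer.
// Assumes a user-defined layer named "HUD" exists in the project settings.
public class HudCameraSetup : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();

        // cullingMask is a bit mask of layers; shift 1 into the HUD layer's bit
        // so this camera ignores everything else.
        cam.cullingMask = 1 << LayerMask.NameToLayer("HUD");

        // Clear only the depth buffer so the main camera's image shows through.
        cam.clearFlags = CameraClearFlags.Depth;

        // Higher depth = rendered later, i.e. drawn on top of the main camera.
        cam.depth = 1f;
    }
}
```

The main camera would then have "HUD" unticked in its own culling mask, so HUD objects never appear in the world view.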
I appreciate your attempt to answer my question, however I do not appreciate your assumption of my research and your snide comment about bugging the community. Please keep those opinions to yourself, thanks.
No snark required. Remember, your answer isn't just for this guy but for people like me who just needed a link to the right documentation.
Alright, I certainly hear you guys.
However, the snarky responses ("Please keep those opinions to yourself" and "no snark required") are also uncalled for. Please be nice to the people that are helping you out for no reward - if you think I'm being unreasonably mean, tell me so straight up.
It's always good to remember that the long time folks here put up with a lot of crap from new users. Plus, we're human beings too, and sometimes maybe you just caught us on a bad day.
I've removed the snark from my answer.
Technically, you do get a reward in the form of (ironically) Karma :) I can appreciate your perspective though.
Answer by Simon-O · Jun 15, 2016 at 06:00 PM
Personally, I've had to do this a couple of times...
First was a space scene where I wanted the near clip plane for the spacecraft to be quite small, but the far clip plane (to encompass the planet) had to be huge. Doing both with one camera leads to Z-buffer fighting, often seen as flickering or shimmering surfaces where the depth buffer no longer has enough precision to tell which polygon is in front. Using one camera for the planet and a second for the ship avoided the issue. In this case, both cameras shared the same position and orientation.
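A minimal sketch of that two-camera depth split, assuming layers named "FarScene" and "NearScene" (those names, the clip distances, and the script name are all illustrative, not from the original answer):

```csharp
using UnityEngine;

// Splits rendering across two cameras so neither needs an extreme
// near-to-far clip ratio, avoiding Z-buffer precision problems.
public class SplitDepthCameras : MonoBehaviour
{
    public Camera farCamera;  // renders distant geometry (the planet)
    public Camera nearCamera; // renders close geometry (the ship)

    void Start()
    {
        // Far camera: huge distances, but a large near plane keeps precision sane.
        farCamera.cullingMask = 1 << LayerMask.NameToLayer("FarScene");
        farCamera.nearClipPlane = 1000f;
        farCamera.farClipPlane = 10000000f;
        farCamera.depth = 0f; // drawn first

        // Near camera: small, tight clip range for the ship.
        nearCamera.cullingMask = 1 << LayerMask.NameToLayer("NearScene");
        nearCamera.nearClipPlane = 0.1f;
        nearCamera.farClipPlane = 2000f;
        nearCamera.clearFlags = CameraClearFlags.Depth; // keep far camera's image
        nearCamera.depth = 1f; // drawn second, on top
    }
}
```

Both cameras sit on the same transform, so the two halves of the scene line up seamlessly.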
Next example was generating a top-down mini-map. By putting the map icons on a layer visible only to the minimap camera, I was able to add icons and markers that didn't interfere with the player's view.
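A sketch of that setup, assuming a layer named "MapIcons" and a camera that follows the player from above (all names here are illustrative):

```csharp
using UnityEngine;

// Orthographic top-down minimap camera. The main camera should have the
// "MapIcons" layer removed from its culling mask so icons stay minimap-only.
public class MinimapCamera : MonoBehaviour
{
    public Transform player;
    public Camera miniCam;

    void Start()
    {
        miniCam.orthographic = true;
        miniCam.orthographicSize = 50f;

        // Add the icons-only layer to whatever world layers the minimap shows.
        miniCam.cullingMask |= 1 << LayerMask.NameToLayer("MapIcons");

        // Draw into the top-right corner of the screen.
        miniCam.rect = new Rect(0.75f, 0.75f, 0.25f, 0.25f);
    }

    void LateUpdate()
    {
        // Hover above the player, looking straight down.
        transform.position = player.position + Vector3.up * 100f;
        transform.rotation = Quaternion.LookRotation(Vector3.down, player.forward);
    }
}
```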
Finally, I used a similar technique to emulate Valve's "3D skybox" (most visible in Counter-Strike when you're dead and noclipping: on e.g. de_dust, the leaves of the palm trees always render behind other objects, no matter the camera position). This works because there's a smaller (usually 1/16th scale) model of the distant environment somewhere else in the scene. As the player moves around the map, a second camera moves at 1/16th of the speed around the "skybox" model. This lets middle-distance objects shift correctly with the player's perspective while avoiding depth-buffer precision issues.
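The camera movement for that trick can be sketched like this (the transform names, the script name, and the exact setup are assumptions; the 1/16 factor matches Valve's convention mentioned above):

```csharp
using UnityEngine;

// Drives the "3D skybox" camera: it moves through a 1/16th-scale miniature
// environment at 1/16th of the player's speed, but rotates exactly with the
// player's view. Render it before the main camera (lower depth), and set the
// main camera's clear flags to Depth so the skybox image shows behind the level.
public class SkyboxCamera : MonoBehaviour
{
    public Transform playerCamera; // the real in-game camera
    public Transform skyboxOrigin; // where the miniature environment sits

    const float Scale = 1f / 16f;

    void LateUpdate()
    {
        // Translate at 1/16th of the player's offset from the world origin...
        transform.position = skyboxOrigin.position + playerCamera.position * Scale;

        // ...but match the player's rotation exactly, so parallax is correct.
        transform.rotation = playerCamera.rotation;
    }
}
```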