Off Screen Render of Camera As Enemy Vision
I am writing the enemy code for a first-person shooter and have worked out a reasonable system for enemy vision using triggers. It detects players within the trigger volumes and then raycasts to check whether any collider lies between the enemy and the player.
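For context, the trigger-plus-raycast approach described above might look roughly like this in Unity (a sketch only; the `obstacleMask`, `eyes` transform, and "Player" tag are assumptions, not from the post):

```csharp
using UnityEngine;

public class EnemyVision : MonoBehaviour
{
    public LayerMask obstacleMask;   // layers that can block line of sight
    public Transform eyes;           // point the rays are cast from

    void OnTriggerStay(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        Vector3 toPlayer = other.bounds.center - eyes.position;
        // If nothing on the obstacle layers is hit before reaching the
        // player, the player is considered visible.
        if (!Physics.Raycast(eyes.position, toPlayer.normalized,
                             toPlayer.magnitude, obstacleMask))
        {
            // player spotted: attack logic would go here
        }
    }
}
```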
This works, but it means that objects without colliders, like some of the foliage in the game, do not work as cover. I was thinking a better system for my game would be to give each enemy a camera which renders (off-screen) all non-player objects in one colour and the player in another. If any player-coloured pixels were visible, the enemy would try to kill the player. Because this works on renderers instead of colliders, it would be more accurate.
However, I don't know if this is even possible, or how to do it. If anyone does know anything that could help, please tell me.
Sounds very very slow to me. I think you will need colliders on all cover opportunities.
I have heard that some games use this as the system for detecting bullet hits, and because most of the time in a render is taken up by shading, and this would use solid colour, it wouldn't be as slow. Also, it would only do this when a player is inside the vision trigger. Sorry, I should have explained: this is just replacing the raycasting that looks for cover.
It depends on your platform, but you are asking the graphics card to transform every vertex and rasterise every pixel, so you are still doing a lot of GPU work (just not much calculation in the fragment shader). You'd also still pay for all of the culling and other per-frame setup.
You also then have to find that pixel colour in the resulting render texture, which is another slow process. It might work with a very low-resolution render target, but I wouldn't hold out much hope of it giving real performance. Normally you handle cover searches by annotating the cover objects properly.
Also, I guess you'd have to check for more than one pixel: seeing a single pixel of an enemy isn't actually likely to break cover. Seeing an enemy behind some leaves would probably represent reasonable stealth cover; even though technically many pixels of the target are visible, the pattern of the leaves means the target is not recognisable.
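Putting the pieces discussed above together, a sketch of the off-screen check might look like this in Unity: render from a disabled per-enemy camera into a small RenderTexture with a solid-colour replacement shader, read it back, and only break cover once enough "player-coloured" pixels are visible. The shader, the pure-red player colour, and the threshold are all assumptions, and the `ReadPixels` GPU-to-CPU readback is exactly the slow step warned about:

```csharp
using UnityEngine;

public class VisionRenderCheck : MonoBehaviour
{
    public Camera visionCamera;       // disabled camera mounted on the enemy
    public Shader flatColourShader;   // assumed unlit, solid-colour shader
    public int size = 64;             // low resolution keeps readback cheap(ish)
    public int pixelThreshold = 10;   // a single pixel shouldn't break cover

    RenderTexture rt;
    Texture2D readback;

    void Start()
    {
        rt = new RenderTexture(size, size, 16);
        readback = new Texture2D(size, size, TextureFormat.RGB24, false);
        visionCamera.targetTexture = rt;
        visionCamera.SetReplacementShader(flatColourShader, "RenderType");
    }

    public bool CanSeePlayer()
    {
        visionCamera.Render();                     // one off-screen render
        RenderTexture.active = rt;
        readback.ReadPixels(new Rect(0, 0, size, size), 0, 0); // slow: GPU -> CPU
        RenderTexture.active = null;

        int count = 0;
        foreach (Color32 c in readback.GetPixels32())
            if (c.r > 200 && c.g < 50 && c.b < 50)  // assumes player renders pure red
                count++;
        return count >= pixelThreshold;
    }
}
```

Even at 64x64 this readback stalls the pipeline, which is why the thread suggests running it only every several frames, and only while a player is inside the vision trigger.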
I don't know much about shaders, but what if it used an unlit shader, or even no shader at all, so that no time is spent on lighting? Plus it would only render within a certain distance. What if it only ran every ten frames or so while the player is in the field-of-view trigger?
Answer by Coderdood · Jun 18, 2013 at 06:58 PM
Using an off-screen render is certainly a novel solution to your problem, and I admit it's interesting, but I think it would probably be an unworkable, slow solution.
I would suggest simply using colliders. As whydoidoit suggested, you can make the colliders triggers, or put them on a different layer. See Physics.IgnoreLayerCollision.
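A minimal sketch of that layer-based approach: give the foliage colliders their own layer and disable physical collisions between that layer and the player, while the vision raycast still treats it as cover (raycasts use layer masks, not the collision matrix, so they still hit those colliders). The layer indices here are assumptions:

```csharp
using UnityEngine;

public class CoverLayerSetup : MonoBehaviour
{
    const int FoliageLayer = 8;   // assumed user layer for foliage cover
    const int PlayerLayer = 9;    // assumed player layer

    void Awake()
    {
        // Foliage colliders no longer physically block the player...
        Physics.IgnoreLayerCollision(FoliageLayer, PlayerLayer, true);
        // ...but Physics.Raycast with a mask that includes FoliageLayer
        // still hits them, so they count as cover for enemy vision.
    }
}
```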
It's possible you could implement a custom solution using a "ProvidesCover"-type component attached to each cover object and some sort of "CoverSystem" manager object that determines whether a specific spot is visible from a specific angle, distance, etc. But the chances of it being better or faster than Unity's built-in raycast system are pretty slim. Especially since, unless you have force-field-type objects, any collider is automatically cover, so a custom cover system would have to replicate all the work of tracking and checking collision objects.
Thank you. I tried using layers and IgnoreLayerCollision, and it works with no problems at all.