Camera Culling per object?
I need some kind of camera culling effect, but on a per-object rather than per-layer basis. Just using layers isn't feasible because there could be more cameras/objects than layers and the culling must be dynamic.
Does anyone have any ideas on how one might isolate a camera's view to only one particular object, without needing to dedicate an entire layer to just that one object?
So far I'm thinking I'll probably need to use the camera depth buffer and/or a stencil shader to mask the object by its depth/mesh in the scene.
Answer by IsaiahKelly · Jan 26, 2017 at 05:15 PM
Okay, the simplest solution I've found so far is to place the object(s) you want to selectively render in a single layer that is only rendered by a certain camera. Then disable all objects in that layer, and enable only the ones you wish to render right before the camera for that layer renders.
using UnityEngine;

// Attach this to the camera's GameObject: OnPreCull/OnPostRender are only
// called on components that share a GameObject with a Camera.
public class SelectiveRenderer : MonoBehaviour
{
    public Renderer targetRenderer;   // the one object this camera should draw
    public Renderer[] layerRenderers; // every renderer in the dedicated layer (assign in the Inspector)

    private void Start()
    {
        // First disable all object renderers in the layer.
        foreach (Renderer r in layerRenderers)
            r.enabled = false;
    }

    private void OnPreCull()
    {
        // Enable the target object's renderer right before this camera renders.
        targetRenderer.enabled = true;
    }

    private void OnPostRender()
    {
        // Disable the target object's renderer again after the camera is done rendering.
        targetRenderer.enabled = false;
    }
}
There are probably better ways to do this, and I'm still researching them, but I thought I would at least post this here for others who might find it useful.
Answer by Bunny83 · Jan 22, 2017 at 01:22 AM
If you just want to render a single object (or a few) with another camera you might want to do this:
- Use any rendering callback (OnRenderObject should work).
- First set the object(s) you want to render to a dedicated layer.
- Now manually render your camera using Camera.Render() (of course you would disable the camera so it doesn't render automatically).
- Reset the layer of the object(s).

This procedure could even be done multiple times per frame for different objects.
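The steps above might be sketched roughly like this. Note that "renderCamera", "targets", and the "Isolated" layer name are placeholders of mine, not part of the original answer:

```csharp
using UnityEngine;

// Sketch of the manual-render approach: temporarily move each object onto a
// dedicated layer, render it with a disabled camera, then restore its layer.
public class IsolatedRender : MonoBehaviour
{
    public Camera renderCamera;   // disabled camera whose cullingMask only includes "Isolated"
    public GameObject[] targets;  // objects to render in isolation

    void OnRenderObject()
    {
        int isolatedLayer = LayerMask.NameToLayer("Isolated");
        foreach (GameObject go in targets)
        {
            int originalLayer = go.layer;
            go.layer = isolatedLayer; // move the object to the dedicated layer
            renderCamera.Render();    // manually render just this object
            go.layer = originalLayer; // restore the original layer
        }
    }
}
```

One camera and one layer can thus serve any number of objects, since the layer assignment only exists for the duration of each manual Render() call.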
The point of the layer mask is that the per-object check the camera has to do is very fast, since it's a bit mask. However, that limits us to 32 layers. Of course Unity could have implemented a different system, but it would be much slower and more complicated, and usually 32 layers are more than enough.
You haven't really said much about your use case, so we can't really suggest anything else without knowing the exact situation.
edit
Another way could be to simply render the objects manually. So setting up a projection / modelview matrix and just use Graphics.DrawMeshNow or a similar way. Again, it highly depends on your case.
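That manual-draw idea could look something like the following sketch; the "meshToDraw" and "drawMaterial" fields are assumed placeholders, not from the original answer:

```csharp
using UnityEngine;

// Minimal sketch of drawing a mesh manually with Graphics.DrawMeshNow,
// bypassing the camera's culling entirely.
public class ManualDraw : MonoBehaviour
{
    public Mesh meshToDraw;
    public Material drawMaterial;

    void OnRenderObject()
    {
        // Bind the material's first pass, then draw the mesh immediately,
        // using this transform's localToWorldMatrix as the model matrix.
        drawMaterial.SetPass(0);
        Graphics.DrawMeshNow(meshToDraw, transform.localToWorldMatrix);
    }
}
```

Because DrawMeshNow issues the draw call immediately on the current camera, you stay in full control of exactly which objects appear, at the cost of managing matrices and passes yourself.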
Thanks for all your suggestions! I'll look into these techniques and see if they can help me. My use case is a bit complicated, so it's kind of hard to explain. I also wanted to keep my question as simple as possible.
I essentially want to render multiple objects with their own isolated cameras (for special object effects) on top of the main camera. So each child camera needs to render only its own object, but right now each child camera renders all objects on its object layer, and using one layer mask per object and camera would be impractical.
There might actually be a better way to do all this with something like command buffers, but I don't have enough experience with these more advanced systems to know for sure yet.
I think I might have a solution: what if I disabled all mesh renderers on all objects, then cycled through each camera, enabling the mesh renderer only on the current object right before the current camera renders it?
Answer by Glurth · Jan 22, 2017 at 03:32 AM
If you are willing to use a different shader variant for each object, you can use a replacement shader on a camera to draw only shaders that define a particular, custom "tag".
https://docs.unity3d.com/Manual/SL-ShaderReplacement.html
It is the very last line of the link above that is relevant to your objective: "Any objects whose shader does not have a matching tag value for the specified key in the replacement shader will not be rendered."
Edit/example:
I do a limited form of this with my Object Preview editor. It allows me to control the lights rendered on any scene object, as well as show only the one object to preview in the camera (this is the part pertinent to your objective). It requires no assumptions about, or use of, layers, but all my objects are a single color because I'm only using one very simple shader variant. I haven't tried to do this for your exact objective yet, so I'll just post what I have working:
Shader "UI/PreviewShader"
{
    Properties
    {
        _PreviewLightDirection ("Light Direction", Vector) = (1,1,-1,0)
        _PreviewAmbientLight ("Ambient", Float) = 0.5
        _Color ("Color", Color) = (1,0.5,0.5,1)
        [Toggle] _UseVertexColor ("Use Vertex Color", Int) = 0
    }
    SubShader
    {
        //************ notice the PreviewTag *************
        Tags { "RenderType"="Transparent" "PreviewTag"="PreviewTag" }
        //************ notice the PreviewTag *************
        Pass
        {
            Tags { "PreviewTag"="PreviewTag" }
            Blend SrcAlpha OneMinusSrcAlpha

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 color : COLOR;
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
                float4 color : COLOR;
                float4 vertex : SV_POSITION;
            };

            float _PreviewAmbientLight;
            float4 _PreviewLightDirection;
            float4 _Color;
            int _UseVertexColor;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);
                if (length(v.normal) > 0) // GL.Lines don't get a normal
                {
                    half3 worldNormal = UnityObjectToWorldNormal(v.normal);
                    // dot product between normal and light direction for
                    // standard diffuse (Lambert) lighting
                    half nl = max(0, dot(worldNormal, _PreviewLightDirection.xyz));
                    nl *= (1 - _PreviewAmbientLight);
                    nl += _PreviewAmbientLight;
                    // factor in the light color
                    if (_UseVertexColor)
                    {
                        o.color.rgb = nl * v.color.rgb;
                        o.color.a = v.color.a;
                    }
                    else
                    {
                        o.color.rgb = nl * _Color.rgb;
                        o.color.a = _Color.a;
                    }
                }
                else
                {
                    if (_UseVertexColor)
                        o.color = v.color;
                    else
                        o.color = _Color;
                }
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return i.color;
            }
            ENDCG
        }
    }
}
A material using this shader is assigned to the object; this initializes it for display in the preview camera. I do this in code, but it could be done manually in the editor for scene objects.
void SetupPreviewObjectRendererWithMaterial(Material previewMaterial)
{
    Mesh targetMesh = (Mesh)target;
    int submeshes = targetMesh.subMeshCount;
    if (submeshes == 0)
    {
        previewObject.GetComponent<MeshRenderer>().sharedMaterial = previewMaterial;
        return;
    }
    // If we have submeshes, they each need a material reference to be visible.
    Material[] matArray = new Material[submeshes];
    for (int i = 0; i < submeshes; i++)
    {
        matArray[i] = previewMaterial;
    }
    previewObject.GetComponent<MeshRenderer>().sharedMaterials = matArray;
}
Then, each cycle, we use RenderWithShader(). (We actually "replace" the shader with the exact same one it's already using; we're just using the second parameter as our filter.)
previewCamera.RenderWithShader(previewShader, "PreviewTag");
Thanks for sharing all this. I might not use your exact technique, but it's all still very useful in helping me understand different possible approaches to the problem.