Rendering full-screen gradient background without overdraw?
I am making a 3D game where the background is always filled with a simple gradient. However, I need to optimize for older mobile devices, so avoiding overdraw is one of my goals. What would be the best approach to this problem?
Right now I am thinking about rendering a screen-sized quad with a gradient shader material behind everything, but as I understand it, the quad would always be fully drawn, even when partially covered by other objects, which is not ideal. Besides, synchronizing the quad's size with the window seems clunky. Is there a better solution, like drawing the "background" (the thing specified by the camera's Clear Flags) directly in screen space in a shader?
I know there are existing solutions, but I want to do it myself for learning purposes.
Thanks.
Answer by Namey5 · Apr 27, 2020 at 02:38 AM
The easiest way to go about this would simply be to use a custom skybox shader, something like the following:
Shader "Custom/GradientBackground"
{
    Properties
    {
        _Color1 ("Color 1", Color) = (1.0, 1.0, 1.0, 1.0)
        _Color2 ("Color 2", Color) = (0.75, 0.75, 0.75, 1.0)
        _Color3 ("Color 3", Color) = (0.25, 0.25, 0.25, 1.0)
        _Color4 ("Color 4", Color) = (0.0, 0.0, 0.0, 1.0)
        _Pos1 ("Gradient Position 1", Range (0, 1)) = 0.33
        _Pos2 ("Gradient Position 2", Range (0, 1)) = 0.66
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" "Queue"="Background" }
        LOD 100
        ZWrite Off
        Cull Off

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            fixed4 _Color1;
            fixed4 _Color2;
            fixed4 _Color3;
            fixed4 _Color4;
            half _Pos1;
            half _Pos2;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 uv : TEXCOORD0;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos (v.vertex);
                o.uv = ComputeScreenPos (o.pos);
                return o;
            }

            //A slightly cheaper alternative to smoothstep for linear gradients;
            //not called below, but you can swap it in for smoothstep if you prefer
            float linstep (float a, float b, float x)
            {
                return saturate ((x - a) / (b - a));
            }

            fixed4 frag (v2f i) : SV_Target
            {
                //Perspective-divide to get screen-space UVs
                float2 uv = i.uv.xy / i.uv.w;
                //Make sure the gradient always travels in the same direction
                float p1 = min (_Pos1, _Pos2);
                float p2 = max (_Pos1, _Pos2);
                //A simple 4-colour gradient on the y-axis, using smoothstep to get a more continuous derivative
                return lerp
                (
                    _Color1,
                    lerp
                    (
                        _Color2,
                        lerp
                        (
                            _Color3,
                            _Color4,
                            smoothstep (p2, 1.0, uv.y)
                        ),
                        smoothstep (p1, p2, uv.y)
                    ),
                    smoothstep (0.0, p1, uv.y)
                );
            }
            ENDCG
        }
    }
}
Skyboxes are drawn in the background queue (just after opaque objects) and are culled using z-testing, so you don't have to worry about overdraw. All you really need to do is find the screen-space coordinates of the skybox in the shader and use those for the gradient. You could easily do a similar thing using a quad, and you wouldn't even need to manually track the screen - just override the vertex coordinates in the vertex shader so the quad always covers the screen.
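If you want to sanity-check the fragment math outside the shader, the nested-lerp gradient and the linstep helper can be mirrored in plain Python. This is purely an illustrative CPU sketch of the same arithmetic (single-channel colours instead of fixed4), not Unity code:

```python
def saturate(x):
    # Clamp to [0, 1], like HLSL's saturate()
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    # Linear interpolation, like HLSL's lerp()
    return a + (b - a) * t

def linstep(a, b, x):
    # Linear ramp: 0 at x = a, 1 at x = b (cheaper than smoothstep)
    return saturate((x - a) / (b - a))

def smoothstep(a, b, x):
    # Hermite-smoothed ramp, like HLSL's smoothstep()
    t = saturate((x - a) / (b - a))
    return t * t * (3.0 - 2.0 * t)

def gradient(y, c1, c2, c3, c4, p1, p2):
    # Same structure as the fragment shader:
    # c1 -> c2 over [0, p1], c2 -> c3 over [p1, p2], c3 -> c4 over [p2, 1]
    p1, p2 = min(p1, p2), max(p1, p2)
    inner = lerp(c3, c4, smoothstep(p2, 1.0, y))
    mid = lerp(c2, inner, smoothstep(p1, p2, y))
    return lerp(c1, mid, smoothstep(0.0, p1, y))
```

At the bottom of the screen (y = 0) this returns c1 exactly, and at the top (y = 1) it returns c4, with the two inner colours peaking at the gradient positions in between.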
Thanks, this seems to be just what I need! I was confused by this sentence from the docs:
Background - this render queue is rendered before any others. You’d typically use this for things that really need to be in the background.
I read this as "the background queue is just drawn before everything and not culled", but as you say, this probably isn't the case.
The only issue is that Unity still uses its default skybox mesh (a high-poly sphere) when I create a material with this shader and set it as the skybox. I would like to try using a simpler quad, so can you elaborate on the "override the vertex coordinates in the vertex shader to always cover the screen" bit? My only idea is to use several "if" checks to map the corners of the quad to the screen corners, but maybe there is a better solution?
You're right, the regular background queue is drawn before opaque objects, but the actual skybox is manually drawn after opaque and before transparent objects (you can check the frame debugger to see the order in which things are actually drawn). I really don't think the skybox mesh will be a problem - it's not that high-poly, and I can't see it having a performance impact on any device made in the last ten years - but if you really want to use a quad, we can modify the above shader.
First, we will want to use a render queue that comes after all objects that write to depth (opaque, alpha-tested), but before transparencies so that blending works properly:
Tags { "RenderType"="Opaque" "Queue"="AlphaTest+50" }
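For context, Unity's named queues resolve to fixed integers (Background = 1000, Geometry = 2000, AlphaTest = 2450, Transparent = 3000, Overlay = 4000), so "AlphaTest+50" lands at 2500 - after all depth-writing geometry, before transparents. A small illustrative sketch of how such a tag resolves (the parsing helper is hypothetical, not a Unity API):

```python
# Unity's built-in render queue values (from the UnityEngine.Rendering.RenderQueue enum)
QUEUES = {
    "Background": 1000,
    "Geometry": 2000,
    "AlphaTest": 2450,
    "Transparent": 3000,
    "Overlay": 4000,
}

def resolve_queue(tag):
    # Resolve a ShaderLab queue tag like "AlphaTest+50" to its integer value
    if "+" in tag:
        name, offset = tag.split("+")
        return QUEUES[name] + int(offset)
    return QUEUES[tag]
```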
Then, we can modify the vertex shader to map the quad's vertices to the screen:
struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv : TEXCOORD0;
};

v2f vert (appdata v)
{
    v2f o;
    //This places the quad's vertices in the corners of the camera frustum's far plane
    o.pos = float4 (v.uv.xy * 2.0 - 1.0, 1.0, 1.0);
    //On some platforms the depth range is reversed, putting the far plane at z = 0
    #ifdef UNITY_REVERSED_Z
        o.pos.z = 1.0 - o.pos.z;
    #endif
    //On some platforms clip-space y points down, so flip it there
    #if UNITY_UV_STARTS_AT_TOP
        o.pos.y *= -1;
    #endif
    //Just use the vertex UVs directly
    o.uv = v.uv;
    return o;
}
From there, you just need to use the regular UVs in place of the screen-space UVs and it will work.
//The quad now covers the screen exactly, so its UVs are already screen-space UVs
float2 uv = i.uv.xy;
Just create a quad and attach this material. Something to note is that because the repositioning happens entirely on the GPU, Unity's per-object frustum culling still uses the quad's original bounds. We can't really do anything about that (hence why the skybox approach is 'better'), so just attach the quad to your camera to make sure it is always on screen. Alternatively, you could draw the quad using Graphics.DrawMesh, but I'm not sure how that interacts with culling.
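As a sanity check on the vertex remapping above, here is the uv * 2 - 1 corner mapping mirrored in plain Python (illustration only, ignoring the platform-specific z and y flips):

```python
def quad_uv_to_clip(u, v):
    # Map the quad's [0, 1] UVs to clip-space XY in [-1, 1],
    # placing each vertex on the far plane (z = w = 1)
    x = u * 2.0 - 1.0
    y = v * 2.0 - 1.0
    return (x, y, 1.0, 1.0)
```

The four UV corners land exactly on the four clip-space corners, which is why the quad fills the screen regardless of where its mesh actually sits in the world.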
Again, thanks for your response, I learned quite a few things from your code! The skybox mesh may not be an issue, but it's still good to know how to do it the other way. Good point on the Z axis being reversed on some platforms, I totally would've missed that.