Applying full screen image effects to UI only
I want to be able to apply a full-screen image effect like a glow on just my UI elements, without also applying the effect to the rest of my scene. As I understand it, the right approach is something like this:
- Set the main scene camera's Culling Mask to exclude the UI layer.
- Create a second camera for rendering the UI: orthographic, forward rendering path.
- Set the UI Canvas Render Mode to Screen Space - Camera and link it to the UI camera.
- On the UI camera, set Clear Flags to Depth Only and Culling Mask to just the UI layer. (Setting Clear Flags to Don't Clear here would create a smearing effect, right?)
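The steps above can also be done from a script. This is just a sketch of the same settings (the camera/canvas references and the "UI" layer name are assumptions; in practice they're usually configured in the Inspector):

```csharp
using UnityEngine;

// Hypothetical setup helper mirroring the Inspector settings described above.
public class UICameraSetup : MonoBehaviour
{
    public Camera mainCamera; // renders everything except the UI
    public Camera uiCamera;   // renders only the UI layer
    public Canvas uiCanvas;   // the Screen Space - Camera canvas

    void Awake()
    {
        int uiLayer = LayerMask.NameToLayer("UI");

        // Main camera: cull the UI layer out.
        mainCamera.cullingMask &= ~(1 << uiLayer);

        // UI camera: orthographic, UI layer only, clear depth but keep color.
        uiCamera.orthographic = true;
        uiCamera.cullingMask = 1 << uiLayer;
        uiCamera.clearFlags = CameraClearFlags.Depth; // "Depth Only"

        // Canvas renders through the UI camera.
        uiCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        uiCanvas.worldCamera = uiCamera;
    }
}
```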
At this point, any image effects applied to the UI camera will be applied to the entire render buffer, so a glow would affect both the UI and the scene. To get around this:
- Set the UI camera's Target Texture to a RenderTexture.
- Blit the render texture to the game screen.
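For the blit step, one common pattern is an OnRenderImage component on a camera that renders last. A minimal sketch, assuming the UI camera targets `uiTexture` and that `blendMaterial` uses a shader which alpha-blends the UI texture over the source (a plain Blit with no material would just overwrite the scene):

```csharp
using UnityEngine;

// Hypothetical: composite the UI render texture over the scene image.
public class BlitUI : MonoBehaviour
{
    public RenderTexture uiTexture;  // the UI camera's Target Texture
    public Material blendMaterial;   // assumed shader: blends _UITex over _MainTex

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        blendMaterial.SetTexture("_UITex", uiTexture);
        Graphics.Blit(source, destination, blendMaterial);
    }
}
```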
That last step is where I'm stuck. How do I combine this with the buffer from the main camera? Would the correct approach be to create a Screen Space - Overlay canvas with an Image component to display the RenderTexture on top of the game screen? I'm guessing some scripting is required.
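If the overlay-canvas route works, the scripting part would be small. A sketch, assuming a RawImage (the usual UI component for displaying a RenderTexture) stretched over a Screen Space - Overlay canvas:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical: display the UI camera's render texture on top of the game
// via a full-screen RawImage on a Screen Space - Overlay canvas.
public class ShowUITexture : MonoBehaviour
{
    public RawImage overlayImage;   // RawImage covering the overlay canvas
    public RenderTexture uiTexture; // the UI camera's Target Texture

    void Start()
    {
        overlayImage.texture = uiTexture;
    }
}
```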
Thanks in advance!
I've attached a test Unity project for this question.
I've changed the UI Camera Clear Flags to Solid Color (Black) instead of Depth Only, because I was seeing some weird smearing with Depth Only and the screen would be filled with white. I've also changed the World Camera to target its own render texture.
I've gotten as far as writing a shader to combine two render textures and display the result on the screen using a third camera. Here, _MainTex or "Texture 1" is what the third camera sees, which we don't care about. "Texture 2" is the World Camera's render texture and "Texture 3" is the UI Camera's render texture. When Texture 3 is blended with Texture 2, it appears to completely replace it instead of being added on top. So I'm guessing the render texture doesn't have the proper alpha info, or I'm not blending them in the right way?
Shader:
Shader "Hidden/BlendCameras"
{
    Properties {
        _BlendAmount ("Blend Amount", Range (0, 1)) = 1
        _MainTex ("Texture 1", 2D) = "black" {}
        _Texture2 ("Texture 2", 2D) = "" {}
        _Texture3 ("Texture 3", 2D) = "" {}
    }
    SubShader {
        Pass {
            Blend One Zero
            SetTexture [_MainTex]
            SetTexture [_Texture2] {
                ConstantColor (0, 0, 0, 1)
                Combine texture Lerp(constant) previous
            }
            SetTexture [_Texture3] {
                ConstantColor (0, 0, 0, [_BlendAmount])
                Combine texture Lerp(constant) previous
            }
        }
    }
}
Render UI script:
using UnityEngine;

public class RenderUI : MonoBehaviour
{
    [Range(0, 1)]
    public float blendAmount;

    private Material material;

    void Awake()
    {
        // _Texture2 and _Texture3 are presumably assigned on the material
        // elsewhere (e.g. via the Inspector).
        material = new Material(Shader.Find("Hidden/BlendCameras"));
    }

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        material.SetFloat("_BlendAmount", blendAmount);
        Graphics.Blit(source, destination, material);
    }
}
Answer by zxkne · Jan 22, 2017 at 09:53 PM
I don't know how to achieve the desired effect with fixed-function-style ShaderLab syntax, but I did it with regular vertex/fragment shaders. I also made some changes to the original approach.
Instead of using an asset-based RenderTexture, I used a temporary RT, which let me match the RT size to the screen size; stretching the UI to a fixed-size texture looked very bad.
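The temporary-RT idea can be sketched like this (a simplified assumption of how it might hook into the UI camera; the reworked project may place it differently, and a real composite would hold the RT until after blending):

```csharp
using UnityEngine;

// Hypothetical: give the UI camera a screen-sized temporary render texture
// each frame, so the UI is rendered at native resolution instead of being
// stretched from a fixed-size asset RT.
[RequireComponent(typeof(Camera))]
public class UITempTarget : MonoBehaviour
{
    private Camera uiCamera;
    private RenderTexture rt;

    void OnEnable()
    {
        uiCamera = GetComponent<Camera>();
    }

    void OnPreRender()
    {
        rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
        uiCamera.targetTexture = rt;
    }

    void OnPostRender()
    {
        uiCamera.targetTexture = null;
        // Released immediately here for brevity; in practice, release only
        // after the compositing pass has consumed the texture.
        RenderTexture.ReleaseTemporary(rt);
    }
}
```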
When I first tried this, I got the image shown on the left:
You may notice the black area around the text. It's because we render the UI on a black background. Yes, the background is fully transparent, but it's still black, and when we alpha-blend (0, 0, 0, 0.1) with our image we get a slightly dimmed pixel. I didn't think that was the effect the author wants to achieve, so I replaced the UI-on-black image with a UI-over-the-scene image and got the image shown on the right.
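The dimming follows directly from the standard per-channel "over" blend formula, result = src·a + dst·(1 − a). A quick worked example (plain C#, single channel):

```csharp
using System;

class AlphaBlendDemo
{
    static void Main()
    {
        float srcChannel = 0f;   // black UI background pixel
        float alpha = 0.1f;      // the glow gives it a small alpha
        float dstChannel = 1.0f; // bright scene pixel underneath

        // Standard "over" blend: src * a + dst * (1 - a)
        float result = srcChannel * alpha + dstChannel * (1f - alpha);
        Console.WriteLine(result); // 0.9: dimmed by 10% even though the UI drew "nothing"
    }
}
```

So every pixel the glow touches pulls the scene slightly toward black, which is what produces the dark halo around the text.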
Honestly, I doubt this is exactly the desired effect, but maybe it helps clarify some subtle details of the original task. The reworked project is attached.