Shader float4 component values computing strangely
Hi, I have a blur shader, and I'm trying to make the blur radius depend on the alpha value of the colour read from the fragment. Here's the horizontal pass (the vertical pass is very similar):
Pass
{
    CGPROGRAM
    #pragma vertex vert_img
    #pragma fragment frag
    #include "UnityCG.cginc"

    // Properties
    sampler2D _GlowMap;
    float4 _GlowMap_TexelSize;
    float _Radius;
    half _Resolution;

    float gauss(float x, float s)
    {
        return 0.39894 * exp(-0.5 * x * x / (s * s)) / s;
    }

    float4 Blur(sampler2D tex, float2 uv)
    {
        float4 col = tex2Dlod(tex, float4(uv.x, uv.y, 0, 0));
        const int blurWidth = 20; // (uint)_Resolution;
        const int kernel = (blurWidth - 1) * 0.5;
        const float blurKernelSize = _Radius * _GlowMap_TexelSize.x;
        const float multiplier = 2;

        for (int i = -kernel; i <= kernel; i++)
            col += (tex2Dlod(tex, float4(uv.x + i * blurKernelSize, uv.y, 0, 0)) * gauss(float(i), 7)) * multiplier;

        return col;
    }

    float4 frag(v2f_img input) : COLOR
    {
        float4 blur = Blur(_GlowMap, input.uv);
        return blur;
    }
    ENDCG
}
I want this line:

    const float blurKernelSize = _Radius * _GlowMap_TexelSize.x;

to instead be:

    const float blurKernelSize = _Radius * _GlowMap_TexelSize.x * col.a;
When I multiply col.a (or col.r, col.g, or col.b) into blurKernelSize, the value appears to be either 0 or 1, even though the shader's actual output draws the true colours. What's strange to me is that when I comment out every line of the Blur method except the first one (the one that declares col) and return col.r (or .g, .b, or .a) instead of col, the shader clearly draws fractional values of col.r, as I would expect.
I have also tried this, which does work:

    const float blurKernelSize = _Radius * _GlowMap_TexelSize.x * input.uv.y;
I am at a genuine loss with this one. I feel like it must be something obvious, but something this seemingly simple has eluded me for days. Any help greatly appreciated. Thanks!