How to get (usable) bilinear filtering on small textures?
I have a project which involves pixel-art style 2D sprites inhabiting a 3D world. Up until now, I've solved the point-filter blinky/jagged-edge issue by resizing the sprites 4x in Photoshop and then setting the texture's Filter Mode to 'Bilinear' in Unity's import settings. Works like a charm!! But it has caused another issue entirely: memory/storage.
A 1024x1024 sprite sheet sized up 4x becomes a whopping 21 MB fully compressed. Considering the scale of the game, that's going to become HUGE in file size, not to mention texture memory usage at runtime.
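(For anyone curious where a figure like that comes from, here's a rough sanity check, assuming the 4x-scaled 4096x4096 atlas is DXT5-compressed with a full mipmap chain; the format is my assumption, not stated in the post:)

```python
def dxt5_size_mb(width, height, mipmaps=True):
    """Estimate GPU memory for a DXT5 texture.
    DXT5 packs each 4x4 texel block into 16 bytes, i.e. 1 byte per pixel;
    a full mip chain adds roughly one third on top of the base level."""
    base = width * height  # 1 byte per pixel
    total = base * 4 / 3 if mipmaps else base
    return total / (1024 * 1024)

print(round(dxt5_size_mb(4096, 4096), 1))  # ~21.3 MB, in line with the post
```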
Here are some (long shot) options that came to mind. If anyone has new ideas on how to approach this, or ways to actually accomplish the methods below please let me know! Google has not been kind to me on this topic :)
Resize sprites during runtime. (Tried this, and it does work... but it doesn't reduce texture memory. It's actually worse, since you can't resize compressed textures. Not to mention figuring out the equivalent pixelsPerUnit at runtime... bleh)
Write a custom bilinear shader. This seems to be the most promising option. I'm no expert in CG, though, and I have a feeling it would really bog down the GPU with hundreds of sprites on screen. Opinions welcomed.
Scale sprites 2x instead of 4x. No good, too blurry :(
Create a custom FilterType class (IDEAL). In theory, I only need bilinear filtering to scale up 4x. However, I have zero idea whether Unity even exposes the bilinear texture-import filtering function. Something like a "Bilinear Scale" variable would be AMAZING for this.
There MUST be a way to kill these jaggies without eating gigabytes of storage and texture memory.
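(For reference, the filtering a "custom bilinear shader" would have to reproduce is just a distance-weighted average of the four nearest texels. A quick Python sketch of that math, with a hypothetical helper name, not Unity code:)

```python
def bilinear_sample(texels, u, v):
    """Sample a 2D grid (list of rows) at fractional coordinates (u, v),
    blending the four surrounding texels by their distance weights."""
    x0, y0 = int(u), int(v)
    x1 = min(x0 + 1, len(texels[0]) - 1)  # clamp at the texture edge
    y1 = min(y0 + 1, len(texels) - 1)
    fx, fy = u - x0, v - y0               # fractional position inside the cell
    top = texels[y0][x0] * (1 - fx) + texels[y0][x1] * fx
    bottom = texels[y1][x0] * (1 - fx) + texels[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# Halfway between a black (0.0) and a white (1.0) texel -> mid grey.
print(bilinear_sample([[0.0, 1.0]], 0.5, 0.0))  # 0.5
```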
Answer by domjon · Apr 19, 2015 at 12:32 PM
Went with the shader option. I wrote a very basic sprite shader that mimics the smoothness of bilinear filtering by blurring. It handles all lighting situations (boy, it was a pain to figure out how to avoid white outlines in the dark with a point light). It also does shadows! :)
Fairly new to shaders, so I'm unsure of how efficient this is, but it completely solves my issue.
Feedback and input more than welcome:
Shader "Sprites/Low Res - Diffuse With Shadows"
{
    Properties
    {
        [PerRendererData] _MainTex ("Sprite Texture", 2D) = "white" {}
        _Cutoff ("Alpha Cutoff", Range (0, 1)) = 0.0001
        _BlurAmount ("Blur Amount", Range (0, 1)) = 0.055
    }
    SubShader
    {
        Tags
        {
            "Queue" = "Transparent"
            "RenderType" = "Transparent"
            "PreviewType" = "Plane"
            "CanUseSpriteAtlas" = "True"
        }
        LOD 300
        Cull Back
        Blend SrcAlpha OneMinusSrcAlpha

        CGPROGRAM
        #pragma surface surf ToonRamp vertex:vert alpha alphatest:_Cutoff addshadow
        #pragma lighting ToonRamp

        sampler2D _MainTex;
        float _BlurAmount;
        float4 _MainTex_TexelSize;

        // Hard, double-sided proximity lighting.
        inline half4 LightingToonRamp (SurfaceOutput s, half3 lightDir, half atten)
        {
            half4 c;
            c.rgb = s.Albedo * _LightColor0.rgb * sqrt(atten);
            c.a = s.Alpha;
            return c;
        }

        struct Input
        {
            float2 uv_MainTex;
            fixed4 color;
        };

        void vert (inout appdata_full v, out Input o)
        {
            UNITY_INITIALIZE_OUTPUT(Input, o);
            o.color = v.color;
        }

        void surf (Input IN, inout SurfaceOutput o)
        {
            half4 original = tex2D(_MainTex, IN.uv_MainTex);
            half4 finalOutput = original;
            float amount = _BlurAmount;
            float2 up = float2(0.0, _MainTex_TexelSize.y) * amount;
            float2 right = float2(_MainTex_TexelSize.x, 0.0) * amount;

            // Three rings of eight offset samples each; the offset doubles per ring.
            for (int i = 0; i < 3; i++)
            {
                finalOutput += tex2D(_MainTex, IN.uv_MainTex + up);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex - up);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex + right);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex - right);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex + right + up);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex - right + up);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex - right - up);
                finalOutput += tex2D(_MainTex, IN.uv_MainTex + right - up);
                amount += amount;
                up = float2(0.0, _MainTex_TexelSize.y) * amount;
                right = float2(_MainTex_TexelSize.x, 0.0) * amount;
            }

            // 1 center sample + 3 * 8 ring samples = 25 samples total.
            finalOutput /= 25.0;

            // Kill white fringes in the dark: fade edge texels toward black,
            // then restore fully opaque texels to their sampled color.
            fixed3 blended = lerp(fixed3(0, 0, 0), finalOutput.rgb, finalOutput.a);
            blended = lerp(blended, finalOutput.rgb, original.a);
            blended *= IN.color.rgb;

            o.Alpha = finalOutput.a * IN.color.a;
            o.Albedo = blended;
        }
        ENDCG
    }
}
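(Edit: to make the "divide by 25" explicit, here's a little CPU-side sketch of the shader's tap pattern in Python. Not shader code, just the same loop counted out: one centre tap plus three rings of eight taps, with the offset doubling each ring.)

```python
def blur_offsets(amount):
    """Return the UV offsets the surf() loop samples at, centre tap first."""
    offsets = [(0.0, 0.0)]  # the initial centre sample
    for _ in range(3):      # three rings...
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):  # ...of eight neighbours each
                    offsets.append((dx * amount, dy * amount))
        amount += amount    # offset doubles every ring, as in the shader
    return offsets

taps = blur_offsets(0.055)
print(len(taps))  # 25 taps -> hence finalOutput / 25.0
```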
So could I, in theory, use this shader on a 128x128 tilemap if I enable point filtering? Sorry to resurrect something from 2015, but I have a problem with jitteriness on my tilemaps that I was hoping this might solve.
In theory, yes. Surface shaders like this are sadly pretty much a thing of the past now with URP and HDRP, but you could definitely implement a similar "blur" using the same approach I used here. It essentially just takes a bunch of offset duplicate samples and averages them out.
If you don't have many tilemaps, your best route might be to just resize your 128x128 to 512x512 and set the filter to 'Bilinear' in the import settings to prevent jitter. I only needed this shader fanciness because I had so many separate assets.
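(The resize-then-bilinear route amounts to baking a nearest-neighbour upscale into the asset and letting the GPU's bilinear filter smooth the result. A rough plain-Python sketch of that upscale, with a hypothetical helper name:)

```python
def nearest_upscale(texels, factor):
    """Scale a 2D grid up by an integer factor by repeating each texel,
    i.e. what resizing pixel art with 'nearest neighbour' does."""
    out = []
    for row in texels:
        scaled_row = [t for t in row for _ in range(factor)]
        out.extend(list(scaled_row) for _ in range(factor))  # repeat the row
    return out

big = nearest_upscale([[1, 2], [3, 4]], 4)
print(len(big), len(big[0]))  # 8 8 -- a 2x2 image becomes 8x8 (128 -> 512)
```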
I found a link to this thread on reddit. Hopefully it solves some of my jitter issues, thanks for uploading the script!