How does Unity handle scaling down textures when displaying them smaller than their actual size?
I have two identical game objects with a sprite renderer and identical textures placed next to each other. Each texture is 32 pixels, and I'm using an orthographic camera set up to display a 32-pixel sprite as 32 screen pixels (the camera sets orthographicSize based on screen height and only moves in pixel increments; AA is off, the filter mode is set to Point, and wrap is set to Clamp). I'm not using mipmaps.
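For context, here is a minimal sketch of that camera setup, assuming a script along these lines (the class name and `pixelsPerUnit` field are illustrative, not the asker's actual code):

```csharp
using UnityEngine;

// Minimal sketch of a pixel-perfect orthographic camera as described above.
// `pixelsPerUnit` must match the sprite's import setting for a 1:1 mapping.
public class PixelPerfectCamera : MonoBehaviour
{
    public float pixelsPerUnit = 32f;

    void LateUpdate()
    {
        var cam = GetComponent<Camera>();
        cam.orthographic = true;

        // orthographicSize is half the vertical view height in world units,
        // so this maps one texture pixel to exactly one screen pixel.
        cam.orthographicSize = Screen.height / (2f * pixelsPerUnit);

        // Snap the camera position to whole-pixel increments.
        float unitsPerPixel = 1f / pixelsPerUnit;
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x / unitsPerPixel) * unitsPerPixel;
        p.y = Mathf.Round(p.y / unitsPerPixel) * unitsPerPixel;
        transform.position = p;
    }
}
```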
If I double the orthographicSize to zoom out to 0.5x, so that each 32-pixel sprite is rendered as 16 pixels on screen, the sprites on the two game objects look different, despite being identical.
I get that there are artifacts caused by scaling the 32px image to 16px, but what I don't get is why each one has different artifacts.
How does Unity handle scaling down textures when displaying them smaller than they actually are? Does it scale the whole screen down, or does it scale the textures down independently?
Answer by hexagonius · Mar 12, 2018 at 09:04 AM
A sprite is just a quad. Each corner holds a normalized texture coordinate, either into the atlas or into the whole texture. If there are fewer screen pixels available than texels, the graphics card chooses the texel closest to the point that should be rendered. Depending on the position of the sprite, that could be the texel in any of the directions surrounding the coordinate it tries to render. In addition, floating-point imprecision can make a difference when adjacent texels share a sample coordinate that falls exactly between them.
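A rough model of that nearest-texel pick, in plain C# rather than the GPU's actual rasterizer (the function name is made up for illustration):

```csharp
using UnityEngine;

// Rough model of point (nearest-neighbour) sampling; an illustration,
// not Unity's actual rasterizer.
public static class PointSampleDemo
{
    // Map a normalized UV coordinate to the texel it falls inside.
    public static int NearestTexel(float uv, int textureSize)
    {
        // floor() snaps to a single texel. A UV that lands exactly on a
        // texel boundary can flip either way under floating-point error,
        // which is why two "identical" sprites can resolve differently.
        return Mathf.Clamp(Mathf.FloorToInt(uv * textureSize), 0, textureSize - 1);
    }
}
```

At 0.5x zoom each screen pixel spans two texels, so the sampled UV sits near a texel boundary: `NearestTexel(0.49999f, 32)` returns 15, while `NearestTexel(0.50001f, 32)` returns 16. A sub-pixel difference in sprite position is enough to tip the result.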
> the graphics card chooses the pixel closest to the point
Well, this only happens when you set the texture filter mode to "Point". If you have set it to Bilinear, the GPU will use bilinear sampling instead. Furthermore, if the texture has mipmaps and the mesh is rendered rather small, the GPU will pick a smaller mipmap level. Unity can also apply anisotropic filtering to get better results when viewing a mesh at a low angle.
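For reference, these settings can also be set from script, as sketched below (in practice they usually come from the texture's import settings rather than runtime code):

```csharp
using UnityEngine;

// Sketch of the texture settings the comment refers to.
public class FilteringExample : MonoBehaviour
{
    void Start()
    {
        // The final `true` builds the mipmap chain the GPU picks from
        // when the texture is drawn smaller than its native size.
        var tex = new Texture2D(32, 32, TextureFormat.RGBA32, true);

        tex.filterMode = FilterMode.Bilinear;   // or FilterMode.Point for hard pixels
        tex.wrapMode   = TextureWrapMode.Clamp;
        tex.anisoLevel = 4;                     // anisotropic filtering strength

        tex.Apply(true);                        // upload and regenerate mipmaps
    }
}
```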