forcing texture resolution?
I'm doing something rather convoluted with Unity Pro. I have a plane just in front of my camera that I'm trying to use as a custom distortion lens by applying a custom normal-map + direct-environment-map shader to it. Basically, each point on the plane has a normal which maps to a direction to sample a real-time cubemap from.
Naturally, this involves running something like the manual's render-to-cubemap script at a near-screen-size resolution. It also involves having a near-screen-size normal map.
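(For reference, the capture side of this looks roughly like the sketch below; the component name, the per-frame capture, and the `_Cube` property name are my own illustrative choices rather than the manual's script verbatim.)

```csharp
using UnityEngine;

// Rough sketch of the real-time cubemap capture side (a Unity Pro feature).
// Component name, per-frame capture, and "_Cube" property are illustrative.
public class RealtimeCubemapCapture : MonoBehaviour
{
    public int faceResolution = 2048;   // per-face size; the post experiments with 128-4096
    private Cubemap cubemap;
    private Camera captureCam;

    void Start()
    {
        cubemap = new Cubemap(faceResolution, TextureFormat.RGBA32, false); // no mipmaps

        // Hidden camera used only for the capture.
        var go = new GameObject("Cubemap capture camera", typeof(Camera));
        go.hideFlags = HideFlags.HideAndDontSave;
        captureCam = go.GetComponent<Camera>();
        captureCam.enabled = false; // rendered manually below
    }

    void LateUpdate()
    {
        // Capture the environment from the lens plane's position and feed it
        // to the distortion material's cubemap slot.
        captureCam.transform.position = transform.position;
        captureCam.RenderToCubemap(cubemap);
        GetComponent<Renderer>().sharedMaterial.SetTexture("_Cube", cubemap);
    }
}
```

Re-rendering six faces at that resolution every frame is obviously expensive; in practice you'd capture on demand or at a lower rate.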
Initially this resulted in a very garbled/grainy warping of a very pixelated image.
After discovering and playing with the texture import settings (I converted my normal map to a raw BMP at the desired res and set the format to 24-bit RGB uncompressed so the importer wouldn't give it horrible compression artifacts, disabled mipmapping, and set the import res to the native file res), I appeared to have a fairly smooth warping of a very pixelated image. I tried raising the cubemap res (from 2048 to 4096 O_O;) and got... a just-as-pixelated image. I tried lowering the cubemap res back to its original 128... and got a discernibly-but-not-by-much more pixelated image.
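(In case it helps anyone, those import settings can also be pinned from an editor script instead of the inspector. This is only a sketch: the folder filter is made up, and the property names here are the current ones, which differ from the 2010-era `textureFormat`-based importer API.)

```csharp
using UnityEditor;

// Sketch of forcing the import settings described above from code rather than
// the inspector: uncompressed, no mipmaps, and a max size big enough that the
// importer never downsizes the screen-sized normal map.
public class LensTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        if (!assetPath.Contains("LensNormalMaps"))
            return;

        var importer = (TextureImporter)assetImporter;
        importer.textureCompression = TextureImporterCompression.Uncompressed; // no DXT artifacts
        importer.mipmapEnabled = false;                                        // always sample full-res texels
        importer.maxTextureSize = 4096;                                        // don't let the importer downsize
    }
}
```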
However, when I dropped the normal map from 2048 back down to 512... I got not just blurrier pixelation, but actually ~less~ pixelation.
I now half-way suspect that when I throw ambiguously-too-large textures at Unity, it is automatically downsizing them based on its own preconceptions of what should be necessary for a single simple quad. And that's not what I want, particularly since I'm running on God's own graphics card right now explicitly for the purpose of trying just these sorts of crunchy graphics tricks, and said quad takes up the whole screen.
Can anyone confirm or deny whether Unity auto-converts textures at runtime, and if so, how to force it not to? Alternatively, is there a way to apply a shader directly through the camera to the screen buffer rather than to a plane directly in front of the camera?
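(On that last point, the Pro-only image effect route looks roughly like this: a material run over the whole screen buffer via OnRenderImage and Graphics.Blit. The class and field names here are hypothetical.)

```csharp
using UnityEngine;

// Sketch of the "shader through the camera" alternative: an image effect
// (Pro-only at the time) that runs a material over the entire screen buffer
// instead of over a quad floating in front of the camera.
[RequireComponent(typeof(Camera))]
public class ScreenDistortionEffect : MonoBehaviour
{
    public Material distortionMaterial; // the normal-map + cubemap shader

    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // The rendered frame arrives as "source" and is handed to the shader
        // as _MainTex; the shader's output lands in "destination".
        Graphics.Blit(source, destination, distortionMaterial);
    }
}
```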
Answer by sean · May 07, 2010 at 05:42 PM
Okay, kiddies. Let's review binary format precision 101: an 8-bit int can store 256 different values at a fixed spacing. Three 8-bit ints can store 2^24 different values at a fixed spacing. A 32-bit float can represent roughly 2^32 different values, with a ridiculous density spike between -1.0 and 1.0. Three 32-bit floats can represent... you get the picture.
So if you want a really nice smooth normal-bump curve on a screen-sized normal map, do you want to
a) create a next-power-of-2-above-screen-resolution normal map precisely storing the direction of the normal at each pixel with a per-axis granularity of 128 ticks on each side of center, for a theoretical number of expressible normal vectors of 16777216 (far fewer accounting for the fact that each vector "should" be unit-length)
OR
b) create the smallest normal map that will capture the curvature of your surface and let the graphics card sample it with interpolation to 32-bit floats in each direction with, I believe, a full half of those expressible values falling between -1 and 1, for a grand total of 9903520314283042199192993792 expressible vectors (with a similar fraction in the unit-length range)?
If your answer is (a), give yourself a facepalm. Then try a normal map of size 32 or 64 or something.
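If you want to see that granularity in numbers rather than on screen, here's a throwaway sketch of the 8-bit round trip (the input vector is just an arbitrary example):

```csharp
using UnityEngine;

// Throwaway illustration of the precision argument: encode a unit normal into
// 8 bits per axis (as a normal-map texel does) and decode it again.
public static class NormalPrecisionDemo
{
    public static void Run()
    {
        Vector3 n = new Vector3(0.30f, 0.72f, 0.625f).normalized;

        // 8-bit encode/decode: each axis snaps to one of 256 steps across [-1, 1],
        // i.e. a step size of 2/255 ~= 0.0078 per axis.
        byte bx = (byte)Mathf.RoundToInt((n.x * 0.5f + 0.5f) * 255f);
        byte by = (byte)Mathf.RoundToInt((n.y * 0.5f + 0.5f) * 255f);
        byte bz = (byte)Mathf.RoundToInt((n.z * 0.5f + 0.5f) * 255f);
        Vector3 decoded = new Vector3(bx / 255f * 2f - 1f,
                                      by / 255f * 2f - 1f,
                                      bz / 255f * 2f - 1f);

        Debug.Log("original:         " + n.ToString("F6"));
        Debug.Log("8-bit round trip: " + decoded.ToString("F6"));
    }
}
```

That per-axis step is what shows up as graininess when the normal map itself is the highest-resolution thing on screen; a small map sampled with bilinear filtering hands the shader interpolated float values instead.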
Yes, folks, it would appear no texture downsizing took place; I was just seeing the artifacts of behavior I had inadvertently asked for.