16bit Precision Custom Render Texture
Hello everyone. I have been trying to implement the Jump Flooding Algorithm in Unity using custom render textures.
The algorithm generates a texture, where each pixel contains the uv coordinates of the nearest seed pixel. It looks something like this:
The texture I'm using is 1024x1024, so the UV coordinates stored in it have to be precise enough to address 1024 distinct values, which makes 8 bits per channel insufficient. I have tried changing the format of the render texture to ARGBFloat, ARGB64, and others, but it didn't seem to change anything.
My implementation works fine for 256x256 textures, since that is the limit for 8-bit coordinates, but anything larger causes artifacts.
My question is: how can I ensure my custom render texture stores 16 bits per channel, or alternatively, how can I store the UV coordinates in another way?
I'm not sure I understand the question, but maybe you're looking for this: https://docs.unity3d.com/ScriptReference/TextureFormat.html
@RShields Thank you for your response. As I wrote in the question, I already tried changing the format of the texture, but it doesn't seem to make any difference. I am not sure whether changing the format works correctly, as the preview window always shows the texture as ARGB32.
Alright, I assumed Unity would use the texture format when encoding the PNG, but it doesn't. Will Texture2D.EncodeToEXR work?
(Make sure to use RGBAFloat)
Without changing formats you can't get 16 bits per channel, but you can get 16 bits per coordinate by using 2 channels each.
You're already using 2 components to store the 2D coordinates; if the remaining 2 are unused, make use of them as high-order offsets (multiples of 256). So:
(1, 0, 0, 0) is (1, 0) + (0, 0) * 256 = (1, 0)
(1, 0, 1, 0) is (1, 0) + (1, 0) * 256 = (257, 0)
...
and (255, 255, 255, 255) is (65535, 65535), the 16-bit limit.
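In a shader this packing is just integer division and modulo per coordinate. Here is a minimal sketch of the encode/decode arithmetic in Python (the function names are mine, for illustration; in practice you would divide by 255.0 when writing the channels and scale back when reading):

```python
def pack_coord(v):
    """Split a 16-bit coordinate (0..65535) into two 8-bit channel values:
    the low byte goes in the primary channel, the high byte in the offset channel."""
    low = v % 256
    high = v // 256
    return low, high

def unpack_coord(low, high):
    """Recombine the two 8-bit channel values into the original coordinate."""
    return low + high * 256

# The worked examples from the answer above:
assert unpack_coord(1, 0) == 1          # (1, 0, 0, 0) -> x = 1
assert unpack_coord(1, 1) == 257        # (1, 0, 1, 0) -> x = 257
assert unpack_coord(255, 255) == 65535  # 16-bit limit

# Round-trips cleanly over the full coordinate range of a 1024x1024 texture:
assert all(unpack_coord(*pack_coord(v)) == v for v in range(1024))
```

The trade-off is that all four channels are now spent on a single 2D coordinate, so there is no room left for extra per-pixel data such as the seed's distance.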