How do int textures work in ComputeShaders?
I'm having trouble understanding how to write into a RenderTexture with an integer format (e.g. RGInt). The following code produces a completely black texture, but by my understanding it should be yellow:
C# Script:
using UnityEngine;

public class IntTextureTest : MonoBehaviour {
    public RenderTexture renderTexture;
    public ComputeShader computeShader;

    void Start() {
        renderTexture = new RenderTexture(1024, 1024, 0, RenderTextureFormat.RGInt);
        renderTexture.enableRandomWrite = true;
        renderTexture.Create();

        computeShader.SetTexture(0, "Result", renderTexture);
        computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
    }
}
ComputeShader:
#pragma kernel CSMain

RWTexture2D<int2> Result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID) {
    Result[id.xy] = int2(0x7fffffff, 0x7fffffff);
}
I have verified that the texture format is supported using SystemInfo.SupportsRenderTextureFormat and I tried the same example with a float texture, which worked fine.
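For reference, the support check is essentially just:

if (!SystemInfo.SupportsRenderTextureFormat(RenderTextureFormat.RGInt)) {
    Debug.LogWarning("RGInt render textures are not supported on this device");
}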
Answer by andrew-lukasik · Jun 16, 2021 at 11:25 AM
Idk, but RWTexture2D<float2> works just fine. My guess is that RGInt might describe the hardware data encoding method and not the type exposed on the software side.
#pragma kernel CSMain

RWTexture2D<float2> Result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    Result[id.xy] = float2(id.x / 1024.0, id.y / 1024.0);
}
I see, thanks for looking into this. It seems like the entire integer range is only exposed to the shader as a small float range (maybe 0..1?). That sucks for me, though, since I was planning to write indices into a (possibly very large) array/buffer. I'm still hoping there is a way to write ints directly. Even if I get the range conversion exactly right, floats simply can't represent large integers precisely. It seems pointless to offer a 32-bit integer format but no way to write into it at full precision; if I wanted float precision, I could just use a float texture. I guess I'll have to use a ComputeBuffer instead.
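Roughly something like this is what I have in mind (the buffer name Indices, kernel index 0 and the 1024x1024 sizing are just placeholders):

C# Script (sketch):

// One uint per cell; a structured buffer keeps full 32-bit integer precision.
ComputeBuffer indexBuffer = new ComputeBuffer(1024 * 1024, sizeof(uint));
computeShader.SetBuffer(0, "Indices", indexBuffer);
computeShader.Dispatch(0, 1024 / 8, 1024 / 8, 1);

ComputeShader (sketch):

RWStructuredBuffer<uint> Indices;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID) {
    // Store one full-precision uint per cell of the 1024x1024 grid.
    Indices[id.y * 1024 + id.x] = id.y * 1024 + id.x;
}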
Answer by BastianUrbach · Jun 16, 2021 at 01:45 PM
I'm still hoping for better solutions but this is what I came up with so far:
Since apparently it's not possible to write ints or uints to a texture directly, I decided to use a float texture. Floats can represent integers up to 16777216 accurately, which isn't much but may be enough for some use cases.

You can actually do much better, though, by using a different kind of conversion between floats and ints. HLSL provides intrinsic functions (asfloat, asuint, asint) for reinterpreting bit patterns as floats or ints/uints. This should be cheap or even completely free. If you're only working with compute shaders and convert just before writing and just after reading, this works fine with any int/uint.

However, there is a small but annoying catch: if you perform any float operation on a denormalized float in HLSL, that float is flushed to zero. Unfortunately this also seems to apply to sampling a texture with tex2D and friends. If you (like me) want to write a texture in a compute shader and then use it in a regular vertex/fragment shader, you have to make sure that you only store uints whose bit patterns correspond to normalized floats. You can do that by setting the second-highest bit of the uint to 1. In practice this means you can't easily use the upper two bits, but it still allows you to store uints up to 1073741823, which is significantly better than just using plain floats.
Here are the conversion functions (as macros so that they work with arguments of any dimension):
#define EncodeUintToFloat(u) (asfloat(u | 0x40000000))
#define DecodeFloatToUint(f) (asuint(f) & 0xBFFFFFFF)
It's an ugly solution but seems to work. If you have a better one, please share.
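For illustration, this is roughly how I use them; the texture and variable names (Result, _IndexTex) and the 1024 width are just placeholders:

// ComputeShader: write an index as a bit-reinterpreted, always-normalized float
RWTexture2D<float> Result; // RFloat texture

#define EncodeUintToFloat(u) (asfloat(u | 0x40000000))
#define DecodeFloatToUint(f) (asuint(f) & 0xBFFFFFFF)

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID) {
    uint index = id.y * 1024 + id.x;          // any uint up to 1073741823
    Result[id.xy] = EncodeUintToFloat(index); // reinterpret bits, exponent bit keeps it normalized
}

// Later, e.g. in a fragment shader, recover the uint after sampling:
// uint index = DecodeFloatToUint(tex2D(_IndexTex, i.uv).r);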