How to set compute shader float precision to half explicitly in Unity 2022 and Direct3D12?
I'm optimizing for mobile, so I'd like all compute shader floats to use half precision explicitly. For example, if I fill a compute buffer with half floats on the CPU side, I'm not guaranteed that the buffer will be read with 16-bit precision on the GPU side. Depending on the hardware, it might be read as 32-bit instead.
To avoid this, it's usually recommended to pass HLSL compilation arguments such as "-enable-16bit-types", but I can't find where to set this in Unity. I see a "Shader Precision Model" property under the Player settings, but that seems to do the opposite of what I want.
What's the best way to ensure that all floating point operations in my compute shader always use 16-bit precision?
Answer by Nemquae · Feb 23 at 11:58 AM
I found the answer, thanks to this post:
https://forum.unity.com/threads/unity-is-adding-a-new-dxc-hlsl-compiler-backend-option.1086272/
https://docs.google.com/document/d/1yHARKE5NwOGmWKZY2z3EPwSz5V_ZxTDT8RnRl521iyE/edit#
Basically, you need to add these two lines to the top of your compute shader:
#pragma use_dxc
#pragma require Native16Bit
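For reference, here is a minimal sketch of a compute shader using those two pragmas with explicit 16-bit types. The kernel name CSMain and the buffer names _Input and _Output are just placeholders for illustration; with native 16-bit types enabled, float16_t is a true half-precision scalar rather than an alias for float.
#pragma use_dxc
#pragma require Native16Bit
#pragma kernel CSMain
// Hypothetical buffers for illustration; with Native16Bit these hold real 16-bit floats.
StructuredBuffer<float16_t> _Input;
RWStructuredBuffer<float16_t> _Output;
[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Arithmetic on float16_t values stays in half precision on supporting hardware.
    float16_t v = _Input[id.x];
    _Output[id.x] = v * (float16_t)2.0;
}
Note that this relies on the DXC backend and hardware/driver support for native 16-bit types (Shader Model 6.2 on Direct3D 12); on devices without that support the shader will not compile with these requirements.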