The 'fixed' numbers in shaders: what are the 11 bits used for?
I've just started getting into the magical world of shaders, and this line from the Unity docs bothers me:
fixed: low precision fixed point. Generally 11 bits, with a range of –2.0 to +2.0 and 1/256th precision.
How are those 11 bits being used? To get 1/256th precision, 8 bits must be for the fractional part and 1 bit for the sign, leaving 2 bits for the integer part. But that would make the true possible range [-4.0, 4.0)! Am I missing something that requires the use of the extra integer bit? Or is this one of those "the precise implementation depends on the platform" things?
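To show the arithmetic I'm doing, here's a quick C sketch that treats a signed fixed-point value as a two's-complement integer scaled by 2^-frac. The bit splits are just the candidate layouts I'm guessing at above, not anything official:

```c
#include <stdio.h>

/* For a signed fixed-point format with `frac` fractional bits out of
 * `total` bits (two's complement), the step size is 1/2^frac and the
 * representable range is [-2^(total-1-frac), +2^(total-1-frac)). */
static void describe(int total, int frac)
{
    double step_denominator = (double)(1 << frac);
    double max_magnitude = (double)(1 << (total - 1 - frac));
    printf("%2d bits, %2d fractional: step = 1/%.0f, range = [%g, %g)\n",
           total, frac, step_denominator, -max_magnitude, max_magnitude);
}

int main(void)
{
    describe(11, 8);  /* 1/256 precision, but the range works out to [-4, 4) */
    describe(11, 9);  /* range [-2, 2), but the precision would be 1/512     */
    describe(12, 10); /* the fx12 layout: [-2, 2) with 1/1024 precision      */
    return 0;
}
```

Whichever way I split the 11 bits, I can't get both the [-2.0, 2.0] range and the 1/256th precision at the same time.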
The closest thing I can find that makes sense is from here:
The fixed data type corresponds to a native signed fixed-point integer with the range [-2.0,+2.0), sometimes called fx12. This type provides 10 fractional bits of precision.
That makes perfect sense to me: 12 bits, with 1 for the sign, 1 for the integer part, and 10 for the fractional part, giving the expected range with "1/1024th precision" as the Unity docs would put it.
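To convince myself, here's a tiny C sketch of that fx12 layout. It's just my reading of the quote above (the helper name is made up, not from any driver source):

```c
#include <stdio.h>
#include <stdint.h>

/* Interpret a 12-bit two's-complement raw value as raw / 1024.0,
 * i.e. 1 sign bit, 1 integer bit, 10 fractional bits. */
static double fx12_to_double(int16_t raw12)   /* raw12 in [-2048, 2047] */
{
    return (double)raw12 / 1024.0;
}

int main(void)
{
    printf("smallest value: %f\n", fx12_to_double(-2048)); /* -2.000000          */
    printf("largest value:  %f\n", fx12_to_double(2047));  /*  1.999023          */
    printf("step size:      %f\n", fx12_to_double(1));     /*  0.000977 = 1/1024 */
    return 0;
}
```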
Answer by Owen-Reynolds · Feb 07, 2015 at 03:50 PM
Depends on the platform. The NVIDIA page is telling you about the exact specs on their best chip. Unity's page is probably giving you a worst case, for an older cell phone.
My (admittedly limited) experience is that the specs on a graphics chip are only a very rough guide to how it really works, anyway.