Color: Why does Color have four attributes with values from 0 to 1?
I noticed in the documentation that Color has four attributes, each with a value from 0 to 1. I'm not understanding the concept well, and I want to understand it.
e.g. I've seen this:
new Color(1f, 0f, 0f, 0.4f);
Let's say I have the color gray and I want to use the Color attribute. I know that gray's RGB value is (192, 192, 192). How do I convert that to a value from 0 to 1? Why is Color written from 0 to 1?
Can someone explain the concept and why Unity uses something like the new Color declared above? I can't find a good explanation online.
Answer by komodor · Feb 24, 2014 at 02:52 PM
It's RGBA: red, green, blue, alpha.
The minimum value is 0, the maximum is 1.
That means 0 = 0 and 1 = 255 for the RGB channels, and 0 = 0% and 1 = 100% for alpha.
It's actually pretty simple and convenient for a programmer, and that's the reason it's done this way. A number from 0 to 255 is nothing more than the minimum and maximum of some range, but whenever you want to do math with a color you first have to divide it by the maximum (by 255), so Unity just skips that step.
Your example means a red color at 40% opacity (in Photoshop terms).
The gray color you describe would be new Color(0.75f, 0.75f, 0.75f); (the alpha is optional and defaults to 1). You get that number by dividing 192 / 255 ≈ 0.753.
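The conversion above can be sketched in a small Unity script. This is just an illustration, and it assumes the script is attached to a GameObject that has a Renderer; the class name is made up:

```csharp
using UnityEngine;

public class GrayExample : MonoBehaviour
{
    void Start()
    {
        // Convert an 8-bit channel value (0-255) to Unity's 0-1 float range
        // by dividing by the maximum of the byte range.
        float gray = 192f / 255f; // ≈ 0.753

        // Alpha is omitted, so it defaults to 1 (fully opaque).
        Color grayColor = new Color(gray, gray, gray);

        GetComponent<Renderer>().material.color = grayColor;
    }
}
```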
To work with Unity I was forced to forget my old habits, like thinking in these Photoshop RGB values or thinking of animation as consisting of frames ... and it works even better.
"0 to 255 is nothing more than minimum and maximum of some range"
But 0-255 isn't arbitrary. Who would pick that? Your screen probably uses 32-bit RGBA, which means each channel is stored in a byte, and a byte has a range of 0-255. In other words, a pixel really does have 256 brightness steps for each color.
Thank you for the explanation Komodor! That makes me understand it a little better now.
Answer by edve98 · Feb 24, 2014 at 03:07 PM
Why Unity uses this format I have no idea, but you can always use Color32: http://docs.unity3d.com/Documentation/ScriptReference/Color32.html
This. If you want to limit yourself to 256 integer values per channel, use Color32 and you'll be happy; it also gives better performance.
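A minimal sketch of how Color32 relates to Color (this would live inside a MonoBehaviour method; the variable names are made up):

```csharp
using UnityEngine;

// Color32 stores each channel as a byte (0-255), matching Photoshop-style values.
Color32 gray32 = new Color32(192, 192, 192, 255);

// Unity converts implicitly between Color32 and Color,
// dividing each byte by 255 under the hood.
Color gray = gray32;   // ≈ (0.753, 0.753, 0.753, 1)
Color32 back = gray;   // implicit conversion the other way
```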
A float value between 0 and 1 can represent far more distinct values than 256 integers. (I'm not going to do the math right now to figure out exactly how many more.)
For still images, 32 bits per pixel should be enough for anyone (it's all most graphics cards support anyway), unless your audience is mantis shrimp. But 32-bit color is really 8-bit-per-channel precision, so you could still get banding in very shallow gradients or jumping in very slow fades, whereas the Color format lets you blend colors effectively losslessly as many times as you like before sending them to the card.
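The precision point above can be illustrated with plain arithmetic: repeatedly scaling a channel in byte space truncates on every step, while the float path keeps full precision until the final conversion. A sketch (the loop count and factor are arbitrary choices for the demonstration):

```csharp
// Darken one channel by 1% per step, 100 times, both ways.
byte b = 200;
float f = 200f / 255f;

for (int i = 0; i < 100; i++)
{
    b = (byte)(b * 0.99f); // truncates to an integer every step: error accumulates
    f *= 0.99f;            // keeps full float precision throughout
}

// The byte path ends up noticeably darker than the float path
// converted back at the end with (byte)(f * 255f).
```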
Also, 64-bit color is coming soon; once that becomes an established standard, graphics snobs will expect no less.
Thank you very much for the mantis shrimp enlightenment :)
Answer by Owen-Reynolds · Feb 24, 2014 at 04:26 PM
0-1 RGBA has always been the official format used inside all graphics cards and shaders. Because of that, programmers tend to use 0-1 for colors (since all colors are going into a shader at some point.) When I first looked at Unity, seeing they used 0-1 for Color variables was one of the things that convinced me they knew what they were doing.
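To see the 0-1 convention in action, here's a sketch of handing a color straight to a shader property; no divide-by-255 happens anywhere on the way to the GPU. It assumes the script sits on an object with a Renderer, and that its material uses the built-in `_Color` property:

```csharp
using UnityEngine;

public class TintExample : MonoBehaviour
{
    void Start()
    {
        // Shader properties take 0-1 floats, so a Color passes through as-is.
        var mat = GetComponent<Renderer>().material;
        mat.SetColor("_Color", new Color(0.2f, 0.6f, 1f, 1f));
    }
}
```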