Color of texture pixels not pure
I want to use a bitmap to populate a grid of cells I'm creating.
Here's the basic methodology...
Grid dimensions are determined by the image's dimensions.
Cell contents are determined by the color of the corresponding pixel (the pixel whose position in the image matches the cell's position in the grid). Each color corresponds to a type of cell contents.
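A minimal sketch of that approach, assuming a readable Texture2D and a hypothetical CellType enum (the names here are mine, not from the original post):

```csharp
using UnityEngine;

public enum CellType { Empty, Wall, Spawn } // placeholder content types

public class GridFromBitmap : MonoBehaviour
{
    public Texture2D map; // import settings must have Read/Write enabled
    CellType[,] grid;

    void Start()
    {
        int width = map.width;   // grid dimensions come from the image
        int height = map.height;
        Color32[] pixels = map.GetPixels32(); // row-major, bottom-left first

        grid = new CellType[width, height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                grid[x, y] = ToCellType(pixels[y * width + x]);
    }

    static CellType ToCellType(Color32 c)
    {
        // Exact byte comparison; this is what breaks when colors read back impure.
        if (c.r == 255 && c.g == 255 && c.b == 255) return CellType.Empty; // white
        if (c.r == 0   && c.g == 0   && c.b == 0)   return CellType.Wall;  // black
        if (c.r == 255 && c.g == 0   && c.b == 0)   return CellType.Spawn; // pure red
        return CellType.Empty;
    }
}
```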
Now here's the problem...
Sometimes, I get imperfect/impure colors... Without pure ones, I cannot reliably match the appropriate contents to the pixel/cell.
Black and white pixels in the bitmap are being read correctly (pure):
white: Color32(255,255,255,255)
black: Color32(0,0,0,255)
Unfortunately, other colors are not (impure):
red: Color32(237,28,36,255) - pure would be Color32(255,0,0,255)
blue: Color32(63,72,204,255) - pure would be Color32(0,0,255,255)
Here are the measures I've taken so far...
The texture's source file has been changed to .TIF - an uncompressed RGBA format.
The Unity power-of-2 setting has been adjusted so that it does not round.
The Unity format setting has been adjusted to RGBA 32 bit (uncompressed).
Ints are being read from the Color32 struct instead of floats from Color, to prevent precision/comparison problems (see the sketch below).
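A quick illustration of that last point, assuming a readable Texture2D field named tex (the names are mine): Color32 exposes components as bytes, so equality tests are exact, while Color uses floats.

```csharp
using UnityEngine;

public class Color32Check : MonoBehaviour
{
    public Texture2D tex; // must have Read/Write enabled in import settings

    void Start()
    {
        // Color32 components are bytes (0-255), so equality tests are exact.
        Color32 p = tex.GetPixels32()[0];
        bool isRed32 = (p.r == 255 && p.g == 0 && p.b == 0);

        // Color components are floats (0.0-1.0); exact equality is fragile there.
        Color f = tex.GetPixel(0, 0);
        bool isRedF = Mathf.Approximately(f.r, 1f)
                   && Mathf.Approximately(f.g, 0f)
                   && Mathf.Approximately(f.b, 0f);

        Debug.Log("byte test: " + isRed32 + ", float test: " + isRedF);
    }
}
```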
I don't have any insight into your color problem, but maybe the solution is instead to find the closest color in a list of colors rather than look for an exact match. You could encode your colors as normalized Vector3s and use Vector3.Dot(). The color pair closest to 1.0 will be the best match. It would not work if you need to match close colors, but if your colors are as divergent as the ones you list above, it should work well.
More info on color matching (an algorithm better than I've outlined) here:
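Here's a rough sketch of the dot-product idea, assuming the target palette is known up front (class and method names are mine, not from the thread):

```csharp
using UnityEngine;

public static class DotMatcher
{
    // Treat a color as a direction in RGB space. Black normalizes to the
    // zero vector, so every dot product with it is 0 (the degenerate case
    // discussed below).
    static Vector3 Dir(Color32 c)
    {
        return new Vector3(c.r, c.g, c.b).normalized;
    }

    // Index of the palette color whose direction best matches the sample;
    // the dot product closest to 1.0 wins.
    public static int ClosestIndex(Color32 sample, Color32[] palette)
    {
        Vector3 s = Dir(sample);
        int best = 0;
        float bestDot = float.MinValue;
        for (int i = 0; i < palette.Length; i++)
        {
            float d = Vector3.Dot(s, Dir(palette[i]));
            if (d > bestDot) { bestDot = d; best = i; }
        }
        return best;
    }
}
```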
I will take a look at the link. The issue with the dot method is that white (255, 255, 255, 255) dotted with the normal of red returns the same value as red (255, 0, 0, 255) dotted with the normal of red. Likewise, all colors dotted with black (0, 0, 0, 255) return the same value.
And beyond this, part of my tenacity in pursuing this matter is the fact that it simply does not make sense and should not be this difficult. I expect to use bitmap methods in which specific colors need to be recognized in the future, and getting to the root of this problem seems the more prudent solution.
Thank you very much for your feedback.
Getting to the bottom of the problem would be best. Perhaps you want to provide a link to a Unity package demonstrating the issue. No one is stepping forward that knows the answer, so having someone play with a project might be the way to get to an answer.
As for comparing colors: you are right about the dot product. I don't know your end use, but with a divergent color set I'm not sure it matters (except for the degenerate case of black). You aren't trying to make an arbitrary comparison; you have something that is close to one of the colors, so the fact that red and green are equally distant from blue doesn't matter. You will be comparing something that is almost blue against red, green, and blue.
Beyond a quick skim, I did not pay attention to all the algorithms. Some converted the color to HSL first, which may be a better fit.
I've solved the issue of matching divergent colors to the nearest color in a list. The best solution is to subtract one color (as a vector) from another, then take the squared magnitude of this 'between' vector and compare it to the squared magnitudes yielded by the other comparisons. The lowest squared magnitude indicates the best match.
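For reference, a minimal sketch of that nearest-color matching (names are mine, not from the thread):

```csharp
using UnityEngine;

public static class NearestColor
{
    // Index of the palette entry with the smallest squared RGB distance
    // to the sample.
    public static int ClosestIndex(Color32 sample, Color32[] palette)
    {
        int best = 0;
        float bestSqr = float.MaxValue;
        for (int i = 0; i < palette.Length; i++)
        {
            Vector3 between = new Vector3(
                sample.r - palette[i].r,
                sample.g - palette[i].g,
                sample.b - palette[i].b);
            float sqr = between.sqrMagnitude; // no sqrt needed for ranking
            if (sqr < bestSqr) { bestSqr = sqr; best = i; }
        }
        return best;
    }
}
```

With the impure reads above, Color32(237, 28, 36, 255) lands closest to pure red against a palette of pure red, blue, black, and white.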
Still, the root problem remains unsolved.
Answer by Eric5h5 · Jul 23, 2013 at 09:32 PM
Make sure the texture is not compressed. Also, you can use Color32 if you want ints 0-255.
I tried using Color32. I created a new image in paint and saved it as a .TIF to ensure that the pixel data would not be compressed. But, I'm still seeing weird reads... 239, 243, etc.
Any other thoughts?
Check the texture settings; under Max Texture Size you can change the Format from Compressed...
Unity will set your textures to compressed by default.
The source format is irrelevant; the only thing that matters is the settings in Unity. Saving as a .tif has nothing to do with compression, aside from making it possible to be correctly uncompressed (compared to .jpg, for example, which would always have compression artifacts even if you set it to uncompressed in Unity).
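For anyone who wants to enforce those import settings in code, here's a hedged sketch (not from the thread) of an editor script that forces uncompressed RGBA32 import. It would go in an Editor folder, and the "/Maps/" path filter is purely illustrative:

```csharp
using UnityEditor;

public class MapTexturePostprocessor : AssetPostprocessor
{
    void OnPreprocessTexture()
    {
        // Only touch textures in an assumed "Maps" folder.
        if (!assetPath.Contains("/Maps/")) return;

        TextureImporter importer = (TextureImporter)assetImporter;
        importer.textureFormat = TextureImporterFormat.RGBA32; // uncompressed truecolor
        importer.npotScale = TextureImporterNPOTScale.None;    // don't round to power of 2
        importer.isReadable = true;                            // allow GetPixels32()
        importer.mipmapEnabled = false;                        // avoid filtered mip reads
    }
}
```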
I've updated the original question to reflect the measures taken and suggested changes. Unfortunately we're still short of the goal. Only black and white are being read correctly. Additional input would be greatly appreciated.
Answer by RacingRapist · Jul 28, 2013 at 06:10 PM
When you choose "Automatic Truecolor" from the Format parameter of your imported image, won't it give you the right colors?
As silly as it sounds... I never noticed the uncompressed Truecolor setting. That may very well have solved my problem. I'll take a look and see, then post my findings.