Get a color on raycast whatever the source of the color
Hello,
I am making a simulator in which the user can place a color sensor. The color sensor's script then has to find the color of whatever is in front of it. Here is the relevant part of the script:
Color color = Color.black;
RaycastHit hitInfos;
Ray rayCursor = new Ray(transform.position, transform.forward);
if (Physics.Raycast(rayCursor, out hitInfos, 0.02f))
{
    Renderer rend = hitInfos.transform.GetComponent<MeshRenderer>();
    if (rend != null)
    {
        // textureCoord is normalized (0..1), so scale it to pixel coordinates
        Texture2D tex = (Texture2D)rend.material.mainTexture;
        color = rend.material.color * ((tex == null) ? Color.white
            : tex.GetPixel((int)(hitInfos.textureCoord.x * tex.width),
                           (int)(hitInfos.textureCoord.y * tex.height)));
    }
}
However, this script only seems to work when the sensor faces a simple GameObject like a yellow sphere or a red cube... The problem is that the collider doesn't always fit the object's mesh, but the sensor has to work in every situation (matching reality, since this is a simulator); the GameObject can just as well be a Terrain.
How can I do that?
Thanks in advance. (Sorry for my bad English)
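For the Terrain case specifically, the surface colour can also be approximated from the terrain's splatmaps, without any camera. A minimal sketch, assuming the current TerrainLayer API and readable layer textures; the weighting scheme (blending each layer's average colour by its alphamap weight) is an assumption, and the result ignores lighting:

```csharp
using UnityEngine;

public static class TerrainColorSampler
{
    // Approximate the terrain's base colour at a world position.
    public static Color SampleColor(Terrain terrain, Vector3 worldPos)
    {
        TerrainData data = terrain.terrainData;
        Vector3 local = worldPos - terrain.transform.position;

        // Convert the world position to alphamap coordinates.
        int mapX = (int)(local.x / data.size.x * data.alphamapWidth);
        int mapZ = (int)(local.z / data.size.z * data.alphamapHeight);

        // weights[0, 0, i] is the blend weight of terrain layer i at this point.
        float[,,] weights = data.GetAlphamaps(mapX, mapZ, 1, 1);

        Color color = Color.black;
        for (int i = 0; i < weights.GetLength(2); i++)
        {
            Texture2D tex = data.terrainLayers[i].diffuseTexture;
            color += AverageColor(tex) * weights[0, 0, i];
        }
        return color;
    }

    // Average colour of a layer texture; requires Read/Write enabled
    // in the texture's import settings. Cache the result in real code.
    static Color AverageColor(Texture2D tex)
    {
        Color sum = Color.black;
        Color[] pixels = tex.GetPixels();
        foreach (Color c in pixels) sum += c;
        return sum / pixels.Length;
    }
}
```

This handles the "collider doesn't match the mesh" issue for terrains, but it only returns the unlit splat colour, which may or may not be close enough for a simulator.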
Answer by tanoshimi · Jul 08, 2016 at 01:47 PM
If you want the colour as displayed on screen, accounting for lighting, effects etc., what you'll need to do is:
1. Set your camera to output to a RenderTexture.
2. Copy the RenderTexture to a regular texture using ReadPixels.
3. Convert the coordinates of the "colour sensor" to screen space relative to the camera (using WorldToScreenPoint).
4. Use GetPixel to retrieve the pixel value at the corresponding screen coordinate.
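Those four steps could be sketched like this, assuming a camera `cam` that already renders to a RenderTexture and a sensor transform `sensor` (both names are illustrative):

```csharp
using UnityEngine;

// Sketch of the four steps above; cam.targetTexture is assumed to be set.
Color SampleSensorColor(Camera cam, Transform sensor)
{
    RenderTexture rt = cam.targetTexture;

    // 1-2. Render the camera and read its output back into a Texture2D.
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;
    cam.Render();
    Texture2D tex = new Texture2D(rt.width, rt.height);
    tex.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
    tex.Apply();
    RenderTexture.active = previous;

    // 3. Project the sensor's world position into the camera's screen space.
    Vector3 screenPos = cam.WorldToScreenPoint(sensor.position);

    // 4. Look up the pixel under that screen coordinate.
    Color c = tex.GetPixel((int)screenPos.x, (int)screenPos.y);
    Object.Destroy(tex);
    return c;
}
```

Note the ReadPixels call forces a GPU-to-CPU copy, which is the expensive part of this approach.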
Thanks for your answer. Using a camera directly is a great idea. However, I can't use the main camera, because I have to get the color in front of the sensor even when it is not visible from the main camera. So I added a camera to the color sensor with the following settings: near plane as small as possible (0.01); far plane: the sensor's max distance; clear flags: solid color (black); and, optionally I think, field of view: 1.
Then I wrote the following script, taken from the RenderTexture.active example:
Color color = Color.black;
Camera cam = GetComponent<Camera>();

// Render the sensor camera into its target RenderTexture
RenderTexture currentRT = RenderTexture.active;
RenderTexture.active = cam.targetTexture;
cam.Render();

// Copy the rendered pixels back into a readable Texture2D
Texture2D image = new Texture2D(cam.targetTexture.width, cam.targetTexture.height);
image.ReadPixels(new Rect(0, 0, cam.targetTexture.width, cam.targetTexture.height), 0, 0);
image.Apply();

// The sensor looks at the centre of its own view
color = image.GetPixel(image.width / 2, image.height / 2);
RenderTexture.active = currentRT;
It works! But I have a performance problem: the fps drops from 70 to 40 with only one sensor (and further if I add more sensors).
So: how can I optimize this new camera? How can I reduce its resolution, since I need ONLY ONE pixel?
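One straightforward reduction, sketched here, is to give the sensor camera a deliberately tiny RenderTexture (the 8x8 size and 16-bit depth are illustrative choices, not requirements):

```csharp
using UnityEngine;

// On the sensor: render into the smallest RenderTexture that still works.
Camera cam = GetComponent<Camera>();
RenderTexture small = new RenderTexture(8, 8, 16); // 8x8 pixels, 16-bit depth buffer
cam.targetTexture = small;
```

This shrinks the rasterization work and the ReadPixels copy, though the dominant cost is often the CPU-GPU synchronization that ReadPixels forces, which a smaller texture does not remove.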
But how often do you need to sample the sensor? Does it really have to be done every frame?
I don't have to sample the sensor every frame, but I nevertheless need it several times per second. So is there any way to optimize this script? Even if the sensor gets the color only 4 times per second, for example, if the user puts 10 sensors in the scene it is still a problem...
I can be satisfied with this script (I can let the user set the sensor "rate", so he can choose according to the number of sensors and his computer's performance), but it would be great if we could optimize it...
In any case, thanks a lot for your help.
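The user-configurable rate mentioned above could be sketched with a coroutine; `samplesPerSecond` and `SampleColor` are hypothetical names standing in for the sensor's actual fields:

```csharp
using System.Collections;
using UnityEngine;

public class ColorSensorSampler : MonoBehaviour
{
    public float samplesPerSecond = 4f; // user-configurable sensor rate

    IEnumerator Start()
    {
        while (true)
        {
            SampleColor(); // whatever ReadPixels-based routine the sensor uses
            yield return new WaitForSeconds(1f / samplesPerSecond);
        }
    }

    void SampleColor()
    {
        // ... sampling code from the discussion above ...
    }
}
```

With several sensors, their coroutines could also be staggered so that at most one readback happens per frame.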
Also, what do you do once you know the colour value? The slow bit here is retrieving the value from the GPU to the CPU (i.e. reading the pixels from a render texture into a regular texture). But do you need to do that? Could you do all the remaining processing on the GPU side?
@Eric5h5: I did what you said. Now I have:
// Read back just the single centre pixel instead of the whole texture
Texture2D image = new Texture2D(1, 1);
image.ReadPixels(new Rect(cam.targetTexture.width / 2, cam.targetTexture.height / 2, 1, 1), 0, 0);
image.Apply();
color = image.GetPixel(0, 0);
But the fps still drops almost as much...
I also tried to reduce the camera rect (cam.pixelRect = new Rect(0, 0, 1, 1);), but the fps is barely higher, and the script no longer works (I always get black {0,0,0}).
@tanoshimi: "what do you do when you know the colour value?" All I do is display it...
"But do you need to do that?" I don't know. As I said, this script comes from the RenderTexture.active example, and I am on uncharted ground...
By the way, here are the performance stats:
When there is no sensor: main: 14ms ; render thread: 1ms
When there is a sensor: main: 21ms ; render thread: 16ms
So, to be clear, you're going to this effort to calculate the colour of one pixel position of the screen, and then you're displaying it (somewhere else)? There's no game logic associated with what the colour returned is? In that case, you can definitely do it all on the GPU, which should dramatically improve performance.
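If the value is only displayed, the readback can indeed be skipped entirely: leave the sensor camera's output in a RenderTexture and show that texture in the UI, so the colour never crosses to the CPU. A sketch, assuming a uGUI RawImage exists in the scene (the component and field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Display the sensor camera's output directly; no ReadPixels, no GPU->CPU copy.
public class SensorColorDisplay : MonoBehaviour
{
    public Camera sensorCamera;  // the camera attached to the colour sensor
    public RawImage display;     // UI element that shows the sampled colour

    void Start()
    {
        // A tiny RenderTexture is enough when only one colour value matters.
        var rt = new RenderTexture(8, 8, 16);
        sensorCamera.targetTexture = rt;
        display.texture = rt;    // the UI samples the texture on the GPU
    }
}
```

For the cases where the interpreter does need the value on the CPU, later Unity versions added AsyncGPUReadback, which fetches pixels without stalling the render thread.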
I am making an emulator of "Lego Mindstorms". I hope you know it, so you will understand more easily; otherwise, you can look at some pictures of it. The user can build his own robots, each of which has a brick (with a screen, buttons, and ports), and motors and sensors connected to the ports. Then the user can program his brick to make the robot autonomous or remote controlled. My project is an emulator, so the user can do all of this in the software.
The colour sensor may be used in the robot to detect a line on the ground, so that the robot can follow it, for example. The user sees everything from an independent camera, which will not always see the point faced by the sensor... and the emulator contains an interpreter which may need the colour value at any time.
"calculate the colour of one pixel position of the screen" is false... the pixel will not always be on the screen when the colour is needed.