Use OnMouseOver or Raycasting while also displaying a camera texture.
I've been making a first-person grid-based dungeon crawler that has a limited-range mouse look. When using this mouselook, which rotates the camera within certain ranges, I'd like to be able to allow for the player to click on objects to interact with them.
The problem is, my game also employs a graphics style where the main camera renders to a texture that is then rendered onto a plane and displayed on-screen by a secondary camera. This seems to stop any OnMouse function from firing at all, while making any ScreenPointToRay/ScreenToWorldPoint conversions extremely offset and unusable.
Does anyone know of a setting or workaround that would let the secondary camera remain the one rendering to the screen, while still allowing positional mouse tracking?
EDIT: The amount of offset seems to be tied to the resolution of the target texture of the camera for some reason.
Could you provide some sketches and code samples to describe your problem?
How do you transform the mouse position to the ray start and direction?
How do you fire the rays?
What does the setup look like?
What happens so far?
Sorry about that. I'd gotten some incorrect information, it seems.
But anyway, I had an issue with raycasts, but I think it's caused by a more fundamental problem, because my OnMouseOver function also just doesn't fire at all.
When I was raycasting, I was just doing this in JavaScript (UnityScript):

    var ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    var hit : RaycastHit;
    if (Physics.Raycast(ray, hit, 5.0)) {
        if (hit.transform.tag == "interactor") {
            Debug.Log("hit");
        }
    }
As far as I could tell, that's all that needed to be done. But the direction was wildly offset. The game takes place in a hallway, and if I click the wall at the back-end of the hallway (roughly the center of the screen), I instead get a raycast that shoots off toward the right wall.
To be clear, OnMouseOver and raycasting seem to work correctly when I disable my camera's target texture, so there must be something else going on.
Hello! Camera.main is the active rendering camera. Since you render the scene to a texture which is then drawn by the other camera, you are effectively shooting rays at the texture, not into the scene behind it.
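This also explains why the offset scales with the target texture's resolution (the EDIT above): when a camera has a target texture, its pixel rect is the texture's size, so ScreenPointToRay normalizes the window-space mouse position against the texture's dimensions instead of the window's. A rough sketch of that arithmetic (the window and texture sizes are illustrative, and `toViewport` is just a stand-in for the normalization step, not a Unity API):

```javascript
// Normalize a mouse position the way ScreenPointToRay does: divide by
// the camera's pixel rect. For a camera rendering to a target texture,
// that rect is the texture's resolution, not the window's.
function toViewport(mouseX, mouseY, rectW, rectH) {
    return { x: mouseX / rectW, y: mouseY / rectH };
}

// Illustrative numbers: a 1920x1080 window and a 512x512 render texture.
// The player clicks the center of the window:
var mouse = { x: 960, y: 540 };

var correct = toViewport(mouse.x, mouse.y, 1920, 1080); // (0.5, 0.5)
var skewed  = toViewport(mouse.x, mouse.y, 512, 512);   // (1.875, 1.05...)

// The ray gets built from (1.875, 1.05) -- well past the right edge of
// the view, which matches a ray "shooting off toward the right wall".
console.log(correct, skewed);
```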
In order to shoot rays from the actual scene camera, you need to calculate where the mouse lands on the render texture, convert that to the scene camera's viewport space, and then fire the ray from the scene camera.
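A sketch of that remapping, assuming the textured plane fills a known pixel rect of the window (the helper name and rect values are illustrative, not a Unity API; in Unity you would feed the result to sceneCamera.ViewportPointToRay):

```javascript
// Map a window-space mouse position to normalized (0..1) viewport
// coordinates of the scene camera, given the on-screen rect where the
// render-texture plane is displayed.
function mouseToSceneViewport(mouseX, mouseY, rect) {
    return {
        x: (mouseX - rect.x) / rect.width,
        y: (mouseY - rect.y) / rect.height
    };
}

// Example: the plane covers the whole 1920x1080 window, click at center.
var vp = mouseToSceneViewport(960, 540, { x: 0, y: 0, width: 1920, height: 1080 });
// vp is (0.5, 0.5): the center of the scene camera's view.

// In Unity (UnityScript), the ray would then come from the scene camera:
//   var ray = sceneCamera.ViewportPointToRay(Vector3(vp.x, vp.y, 0));
//   var hit : RaycastHit;
//   if (Physics.Raycast(ray, hit, 5.0)) { ... }
console.log(vp);
```

If the plane only covers part of the window (letterboxing, a framed viewport, etc.), use that sub-rect instead, and reject coordinates outside 0..1 so clicks beside the plane don't cast stray rays.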
Why are you using two cameras at all?
Wouldn't it be easier to directly use the scene camera as main camera?