iPad - camera.ScreenPointToRay always returns the same ray
I'm using camera.ScreenPointToRay like so:
Ray ray = cam.ScreenPointToRay(touch.position);
Touch and cam are both set ahead of time. This works fine on the PC, but when I build it out to iOS from a Mac, it always gives me back the same ray: specifically, one originating at the camera's position and pointing in the camera's forward direction.
I'm sure I'm missing something stupid, but I've double-checked that the camera's stats look valid and that the touch position looks right (by which I mean, they look the same as they do on the PC, where it's working).
Has anyone seen a problem like this before?
Thanks in advance!
Edit with more info: cam is simply a cached reference to the MonoBehaviour's camera, set up in Awake(). touch is defined on the previous line as: Touch touch = Input.touches[i];
where i is simply an iteration variable (checking rays from each touch).
In addition, from my debugging these seem to have appropriate values, but even replacing my touch with various hardcoded Vector2s and Vector3s has no effect on the resulting ray.
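Stripped down, the relevant part of the script looks roughly like this (names simplified for this post, and the Debug.Log is only there to show what I'm seeing):

using UnityEngine;

public class TouchRayTest : MonoBehaviour
{
    private Camera cam;

    void Awake()
    {
        // cache the camera once (in the real script this is the MonoBehaviour's camera property)
        cam = GetComponent<Camera>();
    }

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.touches[i];
            Ray ray = cam.ScreenPointToRay(touch.position);
            // on iOS this always prints the camera's own position and forward direction
            Debug.Log("origin: " + ray.origin + ", direction: " + ray.direction);
        }
    }
}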
I haven't personally encountered any obvious problems with ScreenPointToRay() on iOS. If I had to guess, it seems more likely that cam or touch are not what you expect them to be; since you're not showing us how those are set, it's difficult to offer any advice beyond that.
That's a fair point; I've edited the original post with some more info. The script is fairly simple, which is a large part of why I'm really puzzled by this behavior, but let me know if you need more than that.
Answer by MD_Reptile · May 14, 2012 at 09:46 PM
I wouldn't know about using any raycasting on iOS, but I do think another method might get the job done for you to detect touch "hits", which I kinda just thought up myself, with some inspiration from a few touch-related posts on the forums and here.
Instead of doing the raycast at all, take the info for turning the screen point into world space (where your touch is happening in world space), and then make an if statement something like:
// touchWorldPos is the touch position converted into world space (see the snippet further down)
if ((touchWorldPos.x - objectithit.transform.position.x) < 1.5f &&
    (touchWorldPos.y - objectithit.transform.position.y) < 2.0f)
{
    // do my logic cause I just touched the object on the screen...
    domystuff = true;
}
else
{
    // don't do my logic, cancel any bools or whatnot out
    domystuff = false;
}
touchWorldPos is just your touch converted into world space (more on that in a second), and objectithit is just a private (or public, I guess) GameObject that is your possible "touch-able" object. Could be buttons, enemy targets, whatever. That's the approach I've been going with (on Android, though): no rays needed, and it just seems simpler to me, although the logic usually needs some bool to track whether the object has already been hit or is still waiting to be touched, you know what I'm saying? I guess the same would go for the raycasting approach too, wouldn't it...
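To be clear about where touchWorldPos comes from: I mean converting the screen-space touch position into world space first, something like this sketch (depthFromCamera is just a made-up name for however far in front of the camera your objects sit):

// convert the screen-space touch into a world-space position at a fixed depth
float depthFromCamera = 10.0f; // made-up value, use whatever depth your objects are at
Vector3 touchWorldPos = Camera.main.ScreenToWorldPoint(
    new Vector3(touch.position.x, touch.position.y, depthFromCamera));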
Now, if for some reason you have to keep the raycasting to do something, then this won't work for you; for instance, if you don't have a "mostly" 2D game, this approach would be more problematic, I think.
And one last thing: I only wrote about half the logic above. You'd also need something like touchWorldPos.x minus objectithit.transform.position.x GREATER THAN some negative threshold, so a touch on the far side of the screen doesn't count as a hit, but I figure that's enough to get the idea across.
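In other words, the complete check would bound the difference on both sides, which you can do with Mathf.Abs, something like:

// full version of the check above: take the absolute difference so touches
// slightly to either side of the object still count, but far-away ones don't
if (Mathf.Abs(touchWorldPos.x - objectithit.transform.position.x) < 1.5f &&
    Mathf.Abs(touchWorldPos.y - objectithit.transform.position.y) < 2.0f)
{
    domystuff = true;
}
else
{
    domystuff = false;
}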
Thanks for the suggestion, but unfortunately the game's not very 2D :(. Still an alternative I'll keep in mind if I can't get it working otherwise.
Well, if you have to detect stuff in more than just x and y, you could do something similar: rather than comparing the touch against the transform of the object, get the touch position in world coords, keep it in its own vector, and then cast a ray from that, which might work better for you than trying to cast the ray directly off the camera.
For instance, something like:
private Vector2 touchPos; private Ray ray; private RaycastHit hit; private float myFloat = 1.5f;
// ...other stuff here
void Update()
{
    // keep the touch in its own vector to prevent problems that might happen on iOS (I don't know that they will)
    touchPos = Input.GetTouch(0).position;
    ray = Camera.main.ScreenPointToRay(touchPos);
    // now do a check for hits on colliders of stuff you need to detect, like this sort of
    if (Physics.Raycast(ray, out hit) && (hit.point - whateverobject.transform.position).magnitude < myFloat)
    {
        // do some stuff with the hit object or whatever
        // (comparing the magnitude also covers the "greater than a negative" case, since it's never negative)
    }
}
Again, I have no experience on iOS and no guarantees this stuff works or is even logical haha, but trying to help nonetheless :)
If you get hung up trying this out, I do have working code like this that I could try to implement some rays into and see if it works, and I'll check back later. If you haven't gotten a better answer by then, I might hack something together for you, since I'm sure I'll need something like this in the future anyway.
And unrelated: if you're planning on using multitouch at all, definitely keep track of individual touches rather than relying on an index into one mass of code, unless that code is set up to keep track in its own way; see the sketch below. OK, I'm rambling, I'm out haha.
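By keeping track of individual touches I mean something along these lines (just a sketch; the point is that Touch.fingerId stays stable for a finger across frames, while the index into Input.touches can shuffle around):

using System.Collections.Generic;
using UnityEngine;

public class TouchTracker : MonoBehaviour
{
    // map each finger's stable id to wherever it currently is on screen
    private Dictionary<int, Vector2> activeTouches = new Dictionary<int, Vector2>();

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            if (touch.phase == TouchPhase.Began || touch.phase == TouchPhase.Moved ||
                touch.phase == TouchPhase.Stationary)
            {
                activeTouches[touch.fingerId] = touch.position;
            }
            else if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
            {
                activeTouches.Remove(touch.fingerId);
            }
        }
    }
}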