Asynchronous Multi-Touch help - keeping track of touches and corresponding gameObjects (Android)
I'm trying to get multi-touch functionality incorporated into my game - not something like using two touches to do a unique action like pinch-zooming - but using multiple touches to do multiples of the same type of action, independently.
The problem is keeping track of objects that were touched, when other touches may begin or end during the same period the first touch is made (for example).
I'd been using Input.GetTouch(i) but I've read that "FingerID" is a better way to go as it won't change as touches get added or removed.
I've stripped down my code to the basics of what I'm trying to do. Basically there are several boxes that you can touch, which then generates a waypoint tracker that you can drag around, and when you release it, the box that you touched moves to the tracker's last position.
This works perfectly doing one at a time, but I want to be able to do this simultaneously and independently (so two people could play at the same time without worrying about messing the game up).
If someone could take a look and point me in the right direction I'd appreciate it! I also realize that gameObjects "tracker" and "savedSelection" should probably be arrays of gameObjects, but I'm not sure the best way to handle that, so looking for advice on that as part of the solution.
using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class MultiTouchTracking : MonoBehaviour
{
    public GameObject trackingIndicator; //prefab tracking indicator
    public GameObject tracker;           //instance of tracking indicator
    public GameObject savedSelection;    //selected object
    private Vector3 touchPosition;

    void Update ()
    {
        if (Input.touchCount > 0)
        {
            for (int i = 0; i < Input.touchCount; i++)
            {
                if (Input.GetTouch(i).phase == TouchPhase.Began)
                {
                    //Player has touched the screen and will raycast to an object, tagging it as "savedSelection"
                    Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(i).position);
                    RaycastHit hit;
                    if (Physics.Raycast(ray, out hit))
                    {
                        savedSelection = hit.collider.gameObject;
                        tracker = Instantiate(trackingIndicator, transform.position, Quaternion.Euler(0, 0, 0)) as GameObject;
                    }
                }
                if (Input.GetTouch(i).phase == TouchPhase.Moved)
                {
                    //The tracker will move around as the player drags the finger around the screen
                    touchPosition = Camera.main.ScreenToWorldPoint(new Vector3(Input.GetTouch(i).position.x, Input.GetTouch(i).position.y, 0f));
                    tracker.transform.position = new Vector3(touchPosition.x, 0f, touchPosition.z);
                }
                if (Input.GetTouch(i).phase == TouchPhase.Ended)
                {
                    //Once the finger is lifted up, we send a message to the game object "savedSelection", telling it to go to the point the tracker is at, then we destroy the tracker
                    savedSelection.SendMessage("MoveToPosition", tracker.transform.position);
                    Destroy(tracker);
                }
            }
        }
    }
}
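Since the question mentions fingerId: one way to keep this centralized script but make it multi-touch safe is to key the trackers and selections by `Touch.fingerId` in dictionaries, instead of single `tracker`/`savedSelection` fields. The `fingerId` stays stable for the lifetime of a touch even as the indices passed to `Input.GetTouch(i)` shift around. A minimal sketch of that idea, assuming the same `trackingIndicator` prefab and `MoveToPosition` message as the original script (the dictionary names are mine):

```csharp
using UnityEngine;
using System.Collections.Generic;

public class MultiTouchTracking : MonoBehaviour
{
    public GameObject trackingIndicator; //prefab tracking indicator

    // One tracker and one selection per fingerId; fingerId is stable
    // for a touch's lifetime even as touch indices change.
    private Dictionary<int, GameObject> trackers = new Dictionary<int, GameObject>();
    private Dictionary<int, GameObject> selections = new Dictionary<int, GameObject>();

    void Update ()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            int id = touch.fingerId;

            if (touch.phase == TouchPhase.Began)
            {
                //Raycast to find the touched box and spawn a tracker for this finger
                Ray ray = Camera.main.ScreenPointToRay(touch.position);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit))
                {
                    selections[id] = hit.collider.gameObject;
                    trackers[id] = (GameObject)Instantiate(trackingIndicator, transform.position, Quaternion.identity);
                }
            }
            else if (touch.phase == TouchPhase.Moved && trackers.ContainsKey(id))
            {
                //Drag only the tracker that belongs to this finger
                Vector3 p = Camera.main.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, 0f));
                trackers[id].transform.position = new Vector3(p.x, 0f, p.z);
            }
            else if ((touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled) && trackers.ContainsKey(id))
            {
                //Send this finger's box to its own tracker's position, then clean up
                selections[id].SendMessage("MoveToPosition", trackers[id].transform.position);
                Destroy(trackers[id]);
                trackers.Remove(id);
                selections.Remove(id);
            }
        }
    }
}
```

This also answers the "should these be arrays?" question: a dictionary keyed by fingerId is a better fit than an array, because touches come and go in arbitrary order.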
Answer by Jamora · Jul 28, 2013 at 03:35 AM
I would approach this more from a distributed-control perspective than the centralized control scheme you are currently using. It would be better if each tracker knew which touch it needs to follow, and then followed it. You wouldn't have to care about GameObject arrays or even keeping track of touches, because the trackers themselves handle that. All you need to do is slice up that script of yours a little, then attach the Tracker part to the trackingIndicator prefab.
... slicing code ...
The only thing I couldn't work out off the top of my head is how to check whether there is a new touch. But I'm sure you can figure that out. I hope my code works without a lot of hassle; here goes:
public class Tracker : MonoBehaviour
{
    public Touch thisTouch;
    public GameObject savedSelection;

    private Vector3 touchPosition;

    void Update ()
    {
        if (thisTouch.phase == TouchPhase.Moved)
        {
            //The tracker will move around as the player drags the finger around the screen
            touchPosition = Camera.main.ScreenToWorldPoint(new Vector3(thisTouch.position.x, thisTouch.position.y, 0f));
            transform.position = new Vector3(touchPosition.x, 0f, touchPosition.z);
        }
        if (thisTouch.phase == TouchPhase.Ended)
        {
            //Once the finger is lifted up, we send a message to the game object "savedSelection", telling it to go to the point the tracker is at, then we destroy the tracker
            savedSelection.SendMessage("MoveToPosition", transform.position);
            Destroy(this.gameObject);
        }
    }
}
public class MultitouchTracking : MonoBehaviour
{
    public GameObject trackingIndicator; //prefab with the Tracker component attached

    void Update ()
    {
        /*Somehow determine if there is a new touch, possibly by checking whether touchCount has increased*/
        if (newTouch.phase == TouchPhase.Began)
        {
            //Player has touched the screen and will raycast to an object, tagging it as "savedSelection"
            Ray ray = Camera.main.ScreenPointToRay(newTouch.position);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                GameObject go = (GameObject)Instantiate(trackingIndicator, newTouch.position, Quaternion.identity);
                Tracker tracker = go.GetComponent<Tracker>();
                tracker.savedSelection = hit.collider.gameObject;
                tracker.thisTouch = newTouch;
            }
        }
    }
}
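The "somehow determine if there is a new touch" gap can be filled without comparing touchCount between frames: every touch reports `TouchPhase.Began` exactly once, on the frame it starts, so the manager can loop over all current touches and treat each one in that phase as new. A short sketch of that loop, under the assumption that the rest of the manager's `Update` body stays as above:

```csharp
void Update ()
{
    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch newTouch = Input.GetTouch(i);
        if (newTouch.phase != TouchPhase.Began)
            continue; //only spawn a tracker on a touch's first frame

        // ...raycast and instantiate a Tracker for newTouch, as in the answer's code...
    }
}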
Hmm, this seems like a good method, and it seems to be assigning "thisTouch" correctly - but I can't seem to get the Tracker script to pick up on the "thisTouch" phases Moved and Ended.
Yeah, using this code, "thisTouch" seems locked in the TouchPhase.Began phase, even after I lift my finger off. I don't see why, though...
Each Update, Unity reallocates the Input's Touch data, so you can't just save a Touch struct in a variable and expect it to keep updating.
Solution is there: http://forum.unity3d.com/threads/how-to-handle-unity-re-assigning-touches.144536/
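In other words, since the cached `Touch` struct is a stale copy, the tracker should store the touch's stable `fingerId` instead and re-fetch the live touch every frame. A hedged sketch of the Tracker rewritten this way (the `fingerId` field is mine; the manager would set it at spawn time in place of `thisTouch`):

```csharp
using UnityEngine;

public class Tracker : MonoBehaviour
{
    public int fingerId;              //set by the manager when the tracker is spawned
    public GameObject savedSelection; //set by the manager after the raycast

    void Update ()
    {
        //Re-fetch the current Touch by its stable fingerId each frame;
        //a cached Touch struct would keep stale phase/position data.
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch touch = Input.GetTouch(i);
            if (touch.fingerId != fingerId)
                continue;

            if (touch.phase == TouchPhase.Moved)
            {
                Vector3 p = Camera.main.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, 0f));
                transform.position = new Vector3(p.x, 0f, p.z);
            }
            else if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
            {
                savedSelection.SendMessage("MoveToPosition", transform.position);
                Destroy(gameObject);
            }
            return;
        }
    }
}
```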