Panning (translating) code works in Unity Remote 5, but not when installed on device
Hi, I'm using Unity 5.4.0f3 Personal on a Windows 10 machine, building for Android. I have a script that handles a pinch-zoom gesture as well as panning, both tied to a 2D game object (with children). The panning simply moves the game object in 2D inside a scroll window, so that it follows the midpoint of the two finger touches.

The scaling and the translation both work fine when I test on my Android device via Unity Remote 5, but when the app is built, copied to the device, and run, only the scaling works. The translation produces extremely small changes in position (close to nothing), even when the fingers are dragged from one end of the screen to the other. Does anyone have an idea what could cause this difference in behaviour and how I could get past it? I've tried this on Android 4.0.4 and on Android 6.0 and hit the same problem on both. Thanks. Here is the code in question:
if (Input.touchCount == 2) {
    // Store both touches.
    Touch touchZero = Input.GetTouch (0);
    Touch touchOne = Input.GetTouch (1);
    if (touchOne.phase == TouchPhase.Began) {
        //...
    } else if (touchOne.phase == TouchPhase.Moved) {
        // ...
        // Pan the image to follow the change in the midpoint of the two touches.
        Vector2 pan = (touchZero.deltaPosition + touchOne.deltaPosition) / 2;
        transform.Translate (new Vector3 (pan.x, pan.y, 0), Space.World);
        //...
    } else {
        //...
    }
}
Answer by unit-One · Nov 14, 2016 at 02:18 AM
I've managed to get the panning logic to work with the app installed on the device. The critical change was to apply the panning/translation only when the change in position exceeded a certain threshold. That's all. I have to admit I'd seen this in examples before, but never realized its importance. Now it works just as well in Unity Remote as it does installed on the device.
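For anyone who wants to try the same fix, here is a minimal sketch of the threshold approach described above. The field names, the accumulation of sub-threshold deltas, and the threshold value (5 pixels) are my own illustrative choices, not taken from the original code, and the value will need tuning for your screen density:

```csharp
using UnityEngine;

public class TwoFingerPan : MonoBehaviour
{
    // Minimum accumulated midpoint displacement (in screen pixels)
    // before a pan is applied. 5f is an illustrative starting value.
    [SerializeField] private float panThreshold = 5f;

    // Accumulates sub-threshold movement so slow drags are not lost.
    private Vector2 pendingPan = Vector2.zero;

    void Update ()
    {
        if (Input.touchCount != 2)
            return;

        Touch touchZero = Input.GetTouch (0);
        Touch touchOne = Input.GetTouch (1);

        if (touchZero.phase == TouchPhase.Moved || touchOne.phase == TouchPhase.Moved) {
            // Midpoint displacement since the last frame.
            pendingPan += (touchZero.deltaPosition + touchOne.deltaPosition) / 2f;

            // Only translate once enough movement has accumulated.
            if (pendingPan.magnitude >= panThreshold) {
                transform.Translate (new Vector3 (pendingPan.x, pendingPan.y, 0f), Space.World);
                pendingPan = Vector2.zero;
            }
        }
    }
}
```

Accumulating the small deltas instead of discarding them means a slow, steady drag still pans eventually, rather than being silently thrown away frame by frame.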
Why was there a difference in behaviour between the two run modes with the previous code? I still don't know for sure, but I can venture a guess (whoever knows the correct answer, please weigh in): I think that very small changes in touch position, even though sensed by the input manager, are effectively lost when applied as changes in object position, hence the need for a minimum threshold to be passed before making that attempt.

A possible explanation for Unity Remote running without a hitch even without the threshold logic could lie in how Unity Remote (on the device) communicates with the Unity editor on the computer. To improve efficiency and responsiveness, Unity Remote may buffer touch input before sending it to the editor, so that enough small deltas get bunched together to exceed the needed threshold, achieving indirectly what an explicit "wait for enough displacement" check does in my code. That's all I've got. If anyone has a better idea, please let me know. Cheers.