Is it a good idea to implement camera drag with a UI drag event?
Greetings. I have written a basic script like the one below. I attach CameraDragUI to a transparent UI image that sits behind all other UI, and CameraControl to the main camera.
using UnityEngine;
using UnityEngine.EventSystems;

// Attached to a fullscreen transparent UI Image placed behind all other UI.
public class CameraDragUI : MonoBehaviour, IDragHandler, IScrollHandler {
    public CameraControl control;
    public float radiusScrollSpeed = 1f; // zoom sensitivity

    // Called by the EventSystem while the pointer is dragged over the image.
    void IDragHandler.OnDrag(PointerEventData e) {
        control.x += e.delta.x;
        control.y += e.delta.y;
    }

    // Called by the EventSystem when the scroll wheel moves over the image.
    void IScrollHandler.OnScroll(PointerEventData e) {
        control.radius -= e.scrollDelta.y * radiusScrollSpeed;
    }
}
using UnityEngine;

// Attached to the main camera; orbits it around the world origin.
public class CameraControl : MonoBehaviour {
    public float x;            // yaw, driven by CameraDragUI
    public float y;            // pitch, driven by CameraDragUI
    public float radius = 10f; // orbit distance

    void Update() {
        Quaternion rotation = Quaternion.Euler(y, x, 0f);
        Vector3 position = rotation * new Vector3(0f, 0f, -radius);
        transform.rotation = rotation;
        transform.position = position;
    }
}
This works quite well and the script is simple. The question is: is it really a good idea? Are there cases where it won't work? Is it less efficient? One advantage I can clearly see is that you don't have to explicitly check whether the mouse is over any UI, because the event system does that for you.
The downside to a fullscreen transparent image is that it directly hurts performance on low-end (mobile) devices because of their low fill rates.
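One way to reduce that fill-rate cost (my own sketch, not something from this thread) is a Graphic that registers as a UI raycast target but never generates any geometry, so nothing is actually drawn:

using UnityEngine;
using UnityEngine.UI;

// A possible workaround: a Graphic that participates in UI raycasts but
// produces no vertices, so the fullscreen "image" costs no fill rate.
public class InvisibleRaycastTarget : Graphic {
    // Accept every raycast that falls inside this RectTransform.
    public override bool Raycast(Vector2 sp, Camera eventCamera) {
        return true;
    }

    // Produce no vertices, so nothing is ever rendered.
    protected override void OnPopulateMesh(VertexHelper vh) {
        vh.Clear();
    }
}

You would use this component in place of the transparent Image on the fullscreen RectTransform; the drag and scroll handlers stay the same.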
Answer by coffiarts · Apr 03, 2018 at 07:28 PM
Hi @SirKurt,
I really like your approach!
I started experimenting with it. Unlike you, I tried not to implement everything in code (using the interfaces) but to work with Event Triggers in the Inspector (simply because I prefer not to pack too much logic into C# code and would rather go the "visible" way in the editor). As I understand it, this isn't really different; Unity just generates more or less the same code for you.
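For illustration (the wiring itself isn't shown in this thread), the Inspector setup described above would still need a small public method to point the Event Trigger at, roughly like this hypothetical component:

using UnityEngine;
using UnityEngine.EventSystems;

// Hypothetical target for the Inspector wiring: add an Event Trigger
// component to the invisible image, add a Drag entry, and point it at
// OnDragEvent (using the dynamic BaseEventData binding).
public class CameraDragTrigger : MonoBehaviour {
    public CameraControl control;

    public void OnDragEvent(BaseEventData data) {
        // Drag entries deliver a PointerEventData under the hood.
        var e = (PointerEventData)data;
        control.x += e.delta.x;
        control.y += e.delta.y;
    }
}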
However, I found out that the invisible image (you can also use a UI canvas, btw) completely blocks ANY other events.
In other words: my camera drag now works absolutely fine, but I am completely unable to let any other events (like PointerClick) pass through to my world-space 3D game objects.
As I understand Unity as of v2017 (hopefully this is correct), the new event system decides on its own (in every single tick) whether to use the Graphic or the Physics raycaster. I.e., if a complete UI screen overlay is expected to intercept events, it automatically takes precedence over everything else. This breaks my game completely.
I assume that this COULD be a killer to your approach. At least I didn't get it to work.
Any ideas on that would be highly appreciated!
EDIT: Just noticed that my thoughts seem to be exactly what @Maverick already mentioned above.
Hi, I have updated this script many times since then. The conclusion is that you can't use the Unity Event System for anything outside the UI with this approach, because events are first checked against the UI and only then against world objects, and at the very bottom of the UI sits this camera handler, which captures all pointer events. Alternatively, you can use other tools, such as raycasts, for your world objects. You also don't have to use the Unity Event System for a generic camera controller at all; just use the Input API. But I believe my approach has its own good sides and is useful in some situations.
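For completeness, here is a minimal sketch of the "just use the Input API" alternative. Names and tuning values are my own; it assumes the legacy Input manager, the CameraControl class from the question, and that the fullscreen drag image has been removed so only real UI counts as "over UI":

using UnityEngine;
using UnityEngine.EventSystems;

// Drives CameraControl directly from Input, ignoring input that starts
// over the UI, so the event system is not needed for the camera at all.
public class CameraDragInput : MonoBehaviour {
    public CameraControl control;        // the orbit controller from the question
    public float dragSpeed = 1f;         // assumed tuning value
    public float radiusScrollSpeed = 1f; // assumed tuning value

    bool dragging;

    void Update() {
        // Start a drag only if the press did not land on a UI element.
        if (Input.GetMouseButtonDown(0) &&
            !EventSystem.current.IsPointerOverGameObject())
            dragging = true;

        if (Input.GetMouseButtonUp(0))
            dragging = false;

        if (dragging) {
            control.x += Input.GetAxis("Mouse X") * dragSpeed;
            control.y += Input.GetAxis("Mouse Y") * dragSpeed;
        }

        // Zoom only while the pointer is not over the UI.
        if (!EventSystem.current.IsPointerOverGameObject())
            control.radius -= Input.mouseScrollDelta.y * radiusScrollSpeed;
    }
}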
Thanks for your reply, @SirKurt.
It is the first statement to finally confirm my assumption (i.e. that it's simply not possible with the new event system).
It seems to me that, originally, UI elements weren't blocking ANYTHING, but that was causing problems for many people (because clicking UI buttons caused unwanted click-throughs to game objects). So Unity changed that behaviour to the opposite (making things easier, but also breaking solutions like the one we're discussing here).
I wish Unity would now introduce something like a "passthrough mask", to allow UI events to block/consume only SELECTED event types. But I assume that this is not feasible with the given nature of the event system (which always uses only ONE of the two raycasts - graphic vs. physics - per frame).
So I am glad about your answer, because I can now definitely go the other way.
The downside is that I now have to find a way again to make camera control prevent the click-through - like in old times :-) If I find a robust solution, I'll be glad to post it back here.
And I agree, btw, that input-controlled camera movement doesn't require the event system, as you can handle any input directly. Furthermore, camera movement logically doesn't require any raycasting against UI or 3D objects at all, so the event system approach would cause unnecessary raycasts anyway. The UPSIDE of your idea (and that's why I would have loved to use it) is that it offers a very convenient way of handling the dragging (drag detection via the UI overlay and events seems to work reliably, like a charm). It would also allow the dragging logic to be configured in the editor/Inspector with Event Triggers instead of in code (which is also my favoured way of doing it). But as that's not possible, the scripted solution remains the only way (a bit "sweetened", of course, by the benefit that it helps avoid some raycasting).
Answer by Maverick · Oct 06, 2015 at 11:58 AM
The idea is really nice; I like it a lot.
What about 3D gameObjects in the scene that you might want to click/select via the same event system?
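For reference (my own sketch, not from the thread): clicking 3D objects through the same EventSystem normally only needs a Physics Raycaster on the camera, a Collider on the object, and a handler like the one below on the object. It is exactly this path that the fullscreen UI raycast target ends up intercepting, as discussed above.

using UnityEngine;
using UnityEngine.EventSystems;

// Receives clicks from the EventSystem via the camera's PhysicsRaycaster.
// A fullscreen UI raycast target in front of it will catch the click first.
public class SelectableObject : MonoBehaviour, IPointerClickHandler {
    public void OnPointerClick(PointerEventData e) {
        Debug.Log("Clicked " + name);
    }
}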