How does an indirect drag gesture work?

Hello, I’ve got a few questions about drag gestures in visionOS immersive scenes.

Once a user initiates a drag gesture, are their eyes involved in the gesture anymore?

If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall?

How can the user cancel the gesture if they don’t like the anticipated / telegraphed result?

I’m trying to craft a good experience, and it’s difficult without some of these details. I still have not heard back on my devkit application.

Thank you for any help.

One more:

What if the user is holding something far away and they rotate their torso away from it?

Will it follow along or stay static?

Hello,

Once a user initiates a drag gesture, are their eyes involved in the gesture anymore?

No, the eyes are only involved for the initial targeting of the gesture.

If not, and the user is dragging something farther away, how far can they move it using indirect gestures? I assume the user’s range of motion is limited because their hands are in their lap, so could they move something multiple meters along a distant wall?

This is app-defined behavior. An app can interpret the gesture data to move an object in any way it wants; if that means scaling up the gesture data to get larger translations, the app can do that.
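As a rough sketch of that idea, the handler below amplifies the drag translation by a multiplier before applying it, so small hand movements can push a distant object across several meters. The `dragScale` value, the `dragStart` bookkeeping, and the assumption that the entity has a parent are all illustrative choices, not part of any documented behavior:

```swift
import SwiftUI
import RealityKit

struct ScaledDragView: View {
    // Entity position and hand location captured at the start of the drag (illustrative).
    @State private var dragStart: (entity: SIMD3<Float>, hand: SIMD3<Float>)? = nil
    // Hypothetical multiplier: 1 m of hand motion moves the object 4 m.
    let dragScale: Float = 4.0

    var body: some View {
        RealityView { content in
            // ... add draggable entities with InputTargetComponent and CollisionComponent
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    // Convert the gesture location into the entity's parent space
                    // (assumes the entity has a parent).
                    let location = value.convert(value.location3D,
                                                 from: .local,
                                                 to: entity.parent!)
                    if dragStart == nil {
                        dragStart = (entity: entity.position, hand: location)
                    }
                    // Amplify the hand's displacement before applying it.
                    let delta = location - dragStart!.hand
                    entity.position = dragStart!.entity + delta * dragScale
                }
                .onEnded { _ in
                    dragStart = nil
                }
        )
    }
}
```

Scaling the delta rather than the absolute location keeps the object from jumping when the drag begins.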

How can the user cancel the gesture if they don’t like the anticipated / telegraphed result?

It is up to the app to preserve the initial state before the gesture, and then revert back to it if, for example, the user hits an undo button after the drag.
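A minimal sketch of that preserve-and-revert pattern: capture the entity's transform when the drag begins, then restore it if the user taps an app-provided undo button. The button placement and animation duration are illustrative:

```swift
import SwiftUI
import RealityKit

struct UndoableDragView: View {
    // State captured before the gesture modifies the entity (illustrative names).
    @State private var savedTransform: Transform? = nil
    @State private var draggedEntity: Entity? = nil

    var body: some View {
        RealityView { content in
            // ... add draggable entities here
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    if savedTransform == nil {
                        // Preserve the initial state before moving anything.
                        savedTransform = value.entity.transform
                        draggedEntity = value.entity
                    }
                    // Assumes the entity has a parent to convert into.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
        .toolbar {
            Button("Undo Move") {
                if let entity = draggedEntity, let transform = savedTransform {
                    // Animate back to the saved pre-gesture transform.
                    entity.move(to: transform, relativeTo: entity.parent, duration: 0.3)
                    savedTransform = nil
                    draggedEntity = nil
                }
            }
        }
    }
}
```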

What if the user is holding something far away and they rotate their torso away from it? Will it follow along or stay static?

It's conceivable that this rotation wouldn't get reflected in the gesture data at all, so the object could remain static.
