Eye tracking permission

Hello.

I am developing a space painting application in C++ and Metal on Apple Vision Pro. It's essentially a port of the PS4 application CoolPaintrVR. I would like to implement eye tracking for its menu system (3D menus displayed within the scene). How do I handle the permissions needed to use eye tracking for the menu system?

Thank you very much.

I'm afraid eye tracking information isn't currently available at all, regardless of the immersion level of your app. When creating an app in a mixed immersive space, Apple handles eye tracking for you (though it's abstracted away); when creating a fully immersive app, you get nothing.

@KTRosenberg Eye tracking in mixed and immersive spaces is supported in the sense that looking at a thing and tapping it can trigger a gesture, but any information about the interaction or the "gaze" is entirely abstracted away. For example, on macOS/iPadOS in SwiftUI you can use onHover(perform:) to trigger a closure when the user's mouse enters a View; there is no way to do this on visionOS, as Apple does not inform you of the gesture until the user taps to confirm the hit.
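To make that concrete, here is a minimal sketch (my example, not from the original post) of the visionOS-side pattern: an entity can receive the gaze-and-pinch input, but your code only hears about it once the tap is confirmed. The entity name and sizes are placeholders.

```swift
import SwiftUI
import RealityKit

struct PaletteView: View {
    var body: some View {
        RealityView { content in
            // Placeholder menu item; your real menu geometry goes here.
            let button = ModelEntity(mesh: .generateBox(size: 0.05))
            button.name = "brushButton"
            // Both components are needed for the entity to receive input.
            button.components.set(InputTargetComponent())
            button.components.set(CollisionComponent(shapes: [.generateBox(size: [0.05, 0.05, 0.05])]))
            content.add(button)
        }
        // Fires only when the user pinches to confirm; where they were
        // merely looking is never reported to the app.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    print("Tapped \(value.entity.name)")
                }
        )
    }
}
```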

Apple are of course aware of where you are looking, as they use that information to drive the automatic hover state on views in SwiftUI and to update the state of any Entity that has a HoverEffectComponent. But if you want to do drawing outside of RealityKit (in which case the HoverEffectComponent is not available, and gaze information in general is unavailable), or you want to do custom hover drawing in either SwiftUI or RealityKit, you are stuck.
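For reference, the system-handled path looks roughly like this in RealityKit (a sketch; the helper name is mine): the OS draws the highlight when the user looks at the entity, but the app is never told that the hover happened.

```swift
import RealityKit

// The system renders a hover highlight on this entity when the user looks
// at it, but no gaze event or callback ever reaches the app.
func makeHoverableMenuItem() -> ModelEntity {
    let item = ModelEntity(mesh: .generateBox(size: 0.05))
    // Input target and collision shapes are required for hover to apply.
    item.components.set(InputTargetComponent())
    item.components.set(CollisionComponent(shapes: [.generateBox(size: [0.05, 0.05, 0.05])]))
    item.components.set(HoverEffectComponent())
    return item
}
```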

It's understandable why this approach has been taken if Apple are protective of gaze information for privacy reasons. If you had the ability to know when a view is hovered, there are many ways you could abuse that to determine where the user is looking. Apple clearly feels this would be abused, likely by advertisers and bad actors.

Of course... I can't help but feel that if Apple had a way to inject custom drawing into RealityKit (not just pre-canned Entities, or products of Reality Composer), this would be lessened. For example, it would be nice to be able to stream geometry updates to a ModelEntity without having to rebuild the MeshResource; if you could somehow indicate that (for example) the shape has changed, it would request a new position buffer only. But even with that, RealityKit still has a long way to go to fill the gaps that force you to use Metal, including proper geometry hit testing against the surface of the mesh, and true custom shaders that let you describe a custom BRDF or custom effects, instead of the not-that-helpful CustomMaterial. But I guess this is a v1. 🤷
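To illustrate, here is a sketch of the rebuild-everything workaround being described (the function and stroke-buffer names are hypothetical, not from the thread): a brand-new MeshResource is generated and swapped into the ModelEntity even if only a few vertices moved.

```swift
import RealityKit

// Rebuilds the whole mesh whenever the painted stroke changes, because there
// is no way to stream just an updated position buffer to a ModelEntity.
func updateStrokeMesh(on entity: ModelEntity,
                      strokePositions: [SIMD3<Float>],
                      strokeNormals: [SIMD3<Float>],
                      strokeIndices: [UInt32]) throws {
    var descriptor = MeshDescriptor(name: "stroke")
    descriptor.positions = MeshBuffer(strokePositions)
    descriptor.normals = MeshBuffer(strokeNormals)
    descriptor.primitives = .triangles(strokeIndices)

    // Regenerates the entire resource even for a small edit.
    let mesh = try MeshResource.generate(from: [descriptor])
    entity.model?.mesh = mesh
}
```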

[@Matt Cox](https://developer.apple.com/forums/profile/Matt Cox) I have similar qualms about the limitations on custom rendering. I think a lot of this could be partially solved by, as you suggest, allowing for mesh streaming as opposed to just texture streaming. A better solution would be permitting custom Metal rendering outside of fully-immersive mode. I can imagine Compositor Services + Metal having special visionOS CPU-side Metal calls that allow the programmer to specify where to render the camera data/what to occlude. For custom shaders (which we really will need at some point, since surface shaders are pretty limiting), there'd need to be proper sandboxing so reading the color/depth of the camera couldn't leak back to the CPU. Some kind of Metal-builtin read/function-pointer support?
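For context, this is roughly what the status quo looks like: custom Metal rendering currently goes through Compositor Services inside a fully immersive space. A sketch following Apple's documented pattern, with the render loop elided and illustrative configuration values:

```swift
import SwiftUI
import CompositorServices

// Illustrative configuration; the formats and foveation choice are examples.
struct PainterLayerConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.depthFormat = .depth32Float
        configuration.isFoveationEnabled = capabilities.supportsFoveation
    }
}

@main
struct SpacePainterApp: App {
    var body: some Scene {
        // Custom Metal drawing is only available here, in a fully immersive space.
        ImmersiveSpace(id: "canvas") {
            CompositorLayer(configuration: PainterLayerConfiguration()) { layerRenderer in
                // Hand the LayerRenderer to your Metal render loop:
                // it supplies frame timing, drawables, and per-view matrices.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```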

I think you ought to file a feature request, for what it's worth. We're not the only ones who've raised this point. Pointing to specific examples probably helps a bit.
