RealityKit visionOS anchor to POV

Hi,

Is there a way in visionOS to anchor an entity to the POV via RealityKit? I need an entity that is always fixed to the 'camera'. I'm aware that this is discouraged from a design perspective, as it can be visually distracting. In my case, though, I want to use it to attach a fixed collider entity so that the camera can collide with objects in the scene.

Edit: ARView on iOS has a lot of very useful helper properties and functions, like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform). How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable. An example use case: I would like to add an entity to the scene at my user's eye level, i.e. depending on their height.

I found https://developer.apple.com/documentation/realitykit/realityrenderer, which has an activeCamera property, but so far it's unclear to me in which context RealityRenderer is used and how I could access it.

Appreciate any hints, thanks!

I just found this: https://developer.apple.com/documentation/arkit/worldtrackingprovider/4218774-querypose, which sounds promising if I enable ARKit tracking. I will give it a go. Can someone from the RealityKit team confirm that this would be the way to go?
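
For anyone trying the same thing, here is a minimal sketch of that ARKit route as I understand it. In recent SDKs the query call appears to be spelled queryDeviceAnchor(atTimestamp:) rather than queryPose, so treat the exact names as assumptions against your SDK version:

```swift
import ARKit        // the visionOS ARKit module
import QuartzCore   // CACurrentMediaTime()
import simd

// Run world tracking, then query the device (head) pose on demand.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async {
    do {
        // Only succeeds while an immersive space is open.
        try await session.run([worldTracking])
    } catch {
        print("Failed to start world tracking: \(error)")
    }
}

func currentHeadTransform() -> simd_float4x4? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return nil
    }
    // Pose of the device relative to the world origin.
    return device.originFromAnchorTransform
}
```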

Also, there is https://developer.apple.com/documentation/realitykit/anchoringcomponent/target-swift.enum/head. Does this also only work when ARKit is enabled? So far I haven't been able to run it successfully in the Simulator.

What I discovered is that you can indeed use RealityKit's head anchor to virtually attach things to the user's head (by attaching entities as children to the head anchor).

However, the head anchor's transform is not exposed; it always remains at identity. Child entities will correctly move with the head, but if you query their global position (or orientation) using position(relativeTo: nil), you just get back their local transform.
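
To make that concrete, here's roughly what it looks like (a minimal sketch; the entity names and offsets are just placeholders of mine):

```swift
import RealityKit

// Attach an entity to the head anchor so it follows the user's POV.
let headAnchor = AnchorEntity(.head)

let hud = ModelEntity(mesh: .generateSphere(radius: 0.02),
                      materials: [SimpleMaterial()])
hud.position = [0, 0, -0.5]          // half a meter in front of the eyes
headAnchor.addChild(hud)
// In a RealityView's make closure: content.add(headAnchor)

// It visually tracks the head, but the anchor's transform is never populated:
print(headAnchor.transform.matrix)       // stays at identity
print(hud.position(relativeTo: nil))     // just [0.0, 0.0, -0.5], the local offset
```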

This means it currently seems impossible to write any RealityKit system that reacts to the user's position (for example, a character looking at the user, like the dragon in Apple's own demo) without getting the pose from ARKit and injecting it into the RealityView via the update closure.
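
A sketch of that workaround, assuming the visionOS ARKit API mentioned above; the view, entity, and polling interval below are my own placeholders, not Apple's sample code:

```swift
import SwiftUI
import RealityKit
import ARKit
import QuartzCore

struct FollowTheUserView: View {
    @State private var session = ARKitSession()
    @State private var worldTracking = WorldTrackingProvider()
    @State private var headPosition = SIMD3<Float>(0, 1.5, 0)
    @State private var watcher = ModelEntity(mesh: .generateBox(size: 0.3),
                                             materials: [SimpleMaterial()])

    var body: some View {
        RealityView { content in
            watcher.position = [0, 1, -2]
            content.add(watcher)
        } update: { _ in
            // React to the injected head position, e.g. turn toward the user.
            watcher.look(at: headPosition,
                         from: watcher.position(relativeTo: nil),
                         relativeTo: nil)
        }
        .task {
            try? await session.run([worldTracking])
            // Poll the device anchor and push it into SwiftUI state, which
            // re-runs the update closure above. A real app might drive this
            // from a RealityKit System instead.
            while !Task.isCancelled {
                if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
                    let t = device.originFromAnchorTransform.columns.3
                    headPosition = [t.x, t.y, t.z]
                }
                try? await Task.sleep(for: .milliseconds(50))
            }
        }
    }
}
```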

I don't know if this is a bug or a conscious design decision. My guess is that it was an early design decision that should have been revised later but wasn't: initially the thinking was probably that the user's head transform should be hidden for privacy reasons, but later engineers realized that there are applications (like fully custom Metal renderers) that absolutely need the user's head pose, so it was exposed after all via ARKit.

It is probably worth filing feedback on this, because I can't see how it makes sense to hide the head anchor's transform when the same information is already accessible anyway, just in a less performant and less convenient way.

Ahhhh, I've just spent ages wondering what's going on with the head anchor transform, and I suspected it might be because it's considered sensitive info... or maybe a bug... :)

Anyone heard anything official about it? I'm pretty sure the docs don't mention it.

D'oh, I just realised that AnchorEntity(.head) can be used anywhere (shared/immersive/fully immersive), so I get why its transform would be restricted for privacy reasons. Using the ARKit world tracker works fine (since it can only be used in an immersive/fully immersive space).
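
In case it helps anyone following along, this is the scene setup that constraint implies. Again just a sketch with my own placeholder identifiers, and FollowTheUserView standing in for whatever view runs the tracking code from earlier in the thread:

```swift
import SwiftUI

@main
struct HeadPoseApp: App {
    var body: some Scene {
        WindowGroup {
            LaunchView()
        }
        // World tracking (and queryDeviceAnchor) only delivers data while an
        // immersive space like this one is open.
        ImmersiveSpace(id: "tracking") {
            FollowTheUserView()
        }
    }
}

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter immersive space") {
            Task {
                _ = await openImmersiveSpace(id: "tracking")
            }
        }
    }
}
```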

I had an app in mind with a similar use case, where an element/view/node would always be fixed to the POV, but I did not find a way to achieve it. I will follow this thread and hopefully we find a way. I suppose there will be multiple use cases, especially in games, where you would want something like a heads-up display, fixed and always visible. Fingers crossed we find a way to do it! :)

Can anyone give some guidance as to how I would use the ARKit world tracker to get the .head position? I've also run into this issue today. Thanks
