Allow me to explain.
Examining the data generated by API calls in a project I recently created, which mimics the code described in https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal , I noticed that, when running in the visionOS simulator, the matrix returned by the following line:

simd_float4x4 head_pose = ar_pose_get_origin_from_device_transform(pose);

has its translation components set to (0, 0, 0) as long as you do not move the camera within the simulator.
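For reference, here is how I'm reading the translation out of that matrix, as a small Swift sketch (the function name is my own):

import simd

// The translation lives in column 3 of the 4x4 origin-from-device transform;
// in the simulator this comes back as (0, 0, 0) until the camera is moved.
func headPosition(from originFromDevice: simd_float4x4) -> SIMD3<Float> {
    let t = originFromDevice.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}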
This means that, at least in the simulator, the user's head starts out at the origin of the world coordinate system. There is no indication that the headset sits at any particular distance from the floor of the room the simulator displays.
That value matters: if you want a fully immersive scene to convey, in a believable, non-VR-sickness-inducing way, that the user is standing at a specific spot and at a specific distance from the floor, you need to take the user's own height into account.
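The only workaround I can think of is to bake an assumed eye height into the transform I apply to my scene. A minimal Swift sketch, where the 1.6 m default is purely my own placeholder, not a value any API gave me:

import simd

// Shift scene content down by an assumed eye height, so the rendered floor
// ends up roughly that far below the head pose that starts at the origin.
func worldFromScene(assumedEyeHeight: Float = 1.6) -> simd_float4x4 {
    var transform = matrix_identity_float4x4
    transform.columns.3.y = -assumedEyeHeight
    return transform
}

But a hardcoded guess is obviously not as good as the headset's real distance to the floor.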
Is this "distance to ground" value available somewhere within the vast array of APIs exposed by the Vision Pro headset? If so, where should I look?
Or is this something I'm expected to work out myself, using ARKit or some other API?
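In case it clarifies what I mean, here is roughly what I imagine the do-it-yourself route would look like, using the Swift ARKit API (WorldTrackingProvider plus PlaneDetectionProvider). This is only a sketch, assuming the app has world-sensing authorization, and I haven't verified how it behaves in the simulator:

import ARKit
import QuartzCore
import simd

// Estimate the headset's height above the floor: wait for a horizontal plane
// classified as .floor, then compare its Y to the device anchor's Y.
func estimateDeviceHeightAboveFloor() async throws -> Float? {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([worldTracking, planeDetection])

    for await update in planeDetection.anchorUpdates {
        guard update.anchor.classification == .floor else { continue }
        let floorY = update.anchor.originFromAnchorTransform.columns.3.y
        guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { continue }
        let deviceY = device.originFromAnchorTransform.columns.3.y
        return deviceY - floorY  // meters from headset to floor
    }
    return nil
}

If there's a built-in value that avoids all of this, I'd much rather use it.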