Thank you, @Developer Tools Engineer. It seems that not even ARKit will let developers access any sensor data: https://developer.apple.com/videos/play/wwdc2023/10082?time=262
In the WWDC video you shared, it is mentioned that only a "composite .front camera" will be available via AVCaptureDevice.DiscoverySession. May I ask what is meant by "composite camera"? Does it mean that visionOS will provide a stream of RGB-only frames captured by a virtual camera, which is created internally by combining data from several hardware cameras?
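For reference, this is roughly how I would expect to enumerate that camera. This is only a sketch assuming the composite camera is exposed like any other capture device; the device type and position I pass here are my guesses, not confirmed behavior on visionOS:

```swift
import AVFoundation

// Sketch: enumerate front-facing video devices via AVCaptureDevice.DiscoverySession.
// NOTE: .builtInWideAngleCamera and .front are assumptions on my part; the video
// does not say which device type or position the composite camera reports.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera],
    mediaType: .video,
    position: .front
)

for device in discovery.devices {
    // Print what the system actually exposes, to check whether a single
    // composite device shows up instead of the individual hardware cameras.
    print(device.localizedName, device.deviceType.rawValue)
}
```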
Or is "composite" meant in the sense of "multiple modalities", i.e. AVFoundation providing access to both RGB and depth output via a single virtual composite camera?
Is AVFoundation going to be the only framework on visionOS capable of providing access to (composite) camera data, as the video you shared suggests?