I have the exact same question, any updates so far?
Bing / ChatGPT was hallucinating and told me it's possible with App Groups or a custom Info.plist key, "NSGroupActivitiesIdentifier".
How do you run the Apple Vision Pro destination on an iPad? I don't see a way to do that. Are you perhaps just running a RealityKit app on the iPad?
It turns out the export function does not support all of the options listed in CapturedRoom.USDExportOptions. When I selected .model, though, the objects in the exported USDZ were substituted correctly.
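In case it helps, this is roughly how I'm calling it (a minimal sketch; the file URL is a placeholder and `room` stands for whatever CapturedRoom you get from your capture delegate):

```swift
import Foundation
import RoomPlan

// Sketch only: `room` would be the CapturedRoom returned by
// RoomCaptureViewDelegate's captureView(didPresent:error:) callback.
func exportRoom(_ room: CapturedRoom) throws -> URL {
    let destinationURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("Room.usdz")
    // .model swaps the detected bounding boxes for bundled 3D models;
    // .parametric and .mesh are the other USDExportOptions.
    try room.export(to: destinationURL, exportOptions: .model)
    return destinationURL
}
```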
Encountered the same issue, any update on this?
It really depends on your camera frame. You can turn on feature-point debugging to see what the camera is actually picking up.
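For example, with RealityKit's ARView (a sketch, assuming you already have an `arView` running a world-tracking session):

```swift
import ARKit
import RealityKit

// Sketch: overlay the raw feature points ARKit is tracking, so you can
// see how much texture the current camera frame actually provides.
func showFeaturePoints(on arView: ARView) {
    arView.debugOptions.insert(.showFeaturePoints)
    // With SceneKit's ARSCNView, the equivalent would be:
    // sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
}
```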
What I observed is that they can return to the same location if the map is good and the camera is really looking at the same scene; otherwise there will always be inaccuracies. Most of the time I have to delete the map and start over, which may not be a good workflow for the user.
How to build a good map is also something I don't completely understand; it's a common problem for SLAM systems like the one underlying ARKit. What would be great is if RealityKit / ARKit provided an API that takes user input, or feedback from other sensors, to help with localization.
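For reference, the save-and-relocalize flow I'm describing looks roughly like this (a sketch; the file URL is a placeholder, and relocalization only succeeds once the camera sees geometry that matches the saved map):

```swift
import ARKit

// Placeholder location for the archived map.
let mapURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("worldMap.arexperience")

// Capture the current ARWorldMap and archive it to disk.
func saveWorldMap(from session: ARSession) {
    session.getCurrentWorldMap { worldMap, error in
        guard let worldMap else {
            print("No map captured: \(error?.localizedDescription ?? "unknown")")
            return
        }
        if let data = try? NSKeyedArchiver.archivedData(
            withRootObject: worldMap, requiringSecureCoding: true) {
            try? data.write(to: mapURL)
        }
    }
}

// Reload the archived map and restart the session against it.
func restoreWorldMap(into session: ARSession) throws {
    let data = try Data(contentsOf: mapURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(
        ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    session.run(configuration,
                options: [.resetTracking, .removeExistingAnchors])
}
```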
I see, thanks. I will try that and see how it goes.