Best practices for loading AR anchors on a loaded ARWorldMap

Hello,

I have been experimenting with saving and loading ARWorldMap, especially by following this guide:

Saving and Loading World Data

However, I'm using RealityKit rather than SceneKit. I also want to add an AnchorEntity at various locations: some derived from automatically detected anchors, and some that I create by tapping and raycasting into the camera frame.
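The tap-placement part looks roughly like this (a sketch; handleTap, the anchor name, and the arView property are my own):

```swift
import ARKit
import RealityKit
import UIKit

// Sketch: raycast from the tap point and add an ARAnchor at the hit.
// The anchor is then delivered to session(_:didAdd:) like any other,
// and gets serialized into the ARWorldMap when the map is saved.
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let point = recognizer.location(in: arView)
    guard let result = arView.raycast(from: point,
                                      allowing: .estimatedPlane,
                                      alignment: .any).first else { return }
    let anchor = ARAnchor(name: "sphere", transform: result.worldTransform)
    arView.session.add(anchor: anchor)
}
```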

I can successfully achieve what I've described above, but there is little guidance on loading multiple anchors at app launch, or when the ARSession delegate methods are called.
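For context, this is roughly how I restore the map at launch (a sketch; mapURL is a hypothetical file URL the map was archived to earlier):

```swift
import ARKit

// Sketch: load a previously saved ARWorldMap and relocalize against it.
func restoreWorldMap(from mapURL: URL, into session: ARSession) throws {
    let data = try Data(contentsOf: mapURL)
    guard let worldMap = try NSKeyedUnarchiver.unarchivedObject(
            ofClass: ARWorldMap.self, from: data) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.initialWorldMap = worldMap
    // The anchors stored in the map are re-delivered through
    // session(_:didAdd:) once ARKit relocalizes.
    session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}
```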

In particular, this ARSessionDelegate method:
session(_:didUpdate:)

One way is to add them directly to a Scene with addAnchor(_:).
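For example, inside the session delegate (a sketch; arView is assumed to be the ARView driving the session):

```swift
import ARKit
import RealityKit

// Naive approach: build a fresh sphere entity for every anchor reported.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: 0.02),
            materials: [SimpleMaterial(color: .red, isMetallic: false)])
        let anchorEntity = AnchorEntity(anchor: anchor)
        anchorEntity.addChild(sphere)
        arView.scene.addAnchor(anchorEntity)
    }
}
```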

However, this incurs a performance penalty on drawing the AR camera frame when those anchors are repeatedly updated and new ones keep being added, even though I'm only adding a sphere mesh.

This is especially slow on older devices such as the iPad Pro 9.7-inch (2016) with the A9X chip.

Any tips for rendering anchors performantly and asynchronously?

You mention that you are only adding a sphere mesh. This sounds like all geometries attached to the anchors are the same. In that case you could create an Entity with that sphere mesh once on startup and then call .clone() on this entity for every anchor added to the ARSession. This way you avoid the cost of re-creating the same sphere mesh over and over again.
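A minimal sketch of that idea (names are illustrative; arView is assumed):

```swift
import ARKit
import RealityKit

// Build the sphere entity once, e.g. at startup...
let sphereTemplate = ModelEntity(
    mesh: .generateSphere(radius: 0.02),
    materials: [SimpleMaterial(color: .red, isMetallic: false)])

// ...then clone it per anchor instead of regenerating mesh and material.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        let anchorEntity = AnchorEntity(anchor: anchor)
        anchorEntity.addChild(sphereTemplate.clone(recursive: true))
        arView.scene.addAnchor(anchorEntity)
    }
}
```

Since MeshResource is a reference type, the clones share the template's mesh rather than duplicating vertex data, which is where the savings come from.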
I see, thanks. I will try that and see how it goes.
I have been playing around with saving an ARWorldMap using RealityKit as well, but I am having trouble restoring the scene to what it was after saving. All ARAnchors are restored correctly, but my AnchorEntity instances always appear at the wrong location. Do you have any hints on that? I am using session(_:didAdd:) to add a new AnchorEntity to the scene whenever a new ARAnchor is added by my handleTap function.
It really depends on your camera frame. You can turn on feature-point debugging to understand what the camera is looking at.
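In RealityKit that's a single debug option on the view:

```swift
// Visualize the raw feature points ARKit is tracking.
arView.debugOptions.insert(.showFeaturePoints)
```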

What I've observed is that the anchors return to the same locations if the map is good and the camera is really looking at the same scene. Otherwise there will always be inaccuracies. Most of the time I have to delete the map and start all over again, which may not be a good workflow for the user.

Building a good map is also something that I don't completely understand. Relocalization is a common problem for SLAM systems like the one ARKit is built on. What would be great, though, is for RealityKit / ARKit to provide an API that takes user input or other sensor feedback to help with localization.

