Persist anchors across sessions

Is there any way to persist anchors across sessions (different launches of the app)?

Replies

I was wondering the same thing yesterday. I will talk to some of the Apple engineers this week to get a clearer answer, but I think it would be cool if you could use machine learning to identify and recognize persistent landmarks.

I hope they can be too. It would be great if they had a world-anchor store like HoloLens's, so that content could be developed to persist across sessions. I'm hoping that if this initial version doesn't have it, they plan to incorporate it down the road. The ARAnchor class seems to have a UUID identifier that could be used to identify it.
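
For reference, a minimal sketch (assuming a running ARSession) of reading that identifier; as the replies below note, nothing guarantees it survives a relaunch:

    import ARKit

    // Sketch: list the identifiers ARKit assigns to anchors in the current
    // frame. ARAnchor.identifier is a UUID, but it is only meaningful
    // within the current session.
    func logAnchorIdentifiers(in session: ARSession) {
        guard let frame = session.currentFrame else { return }
        for anchor in frame.anchors {
            print("anchor \(anchor.identifier) at \(anchor.transform)")
        }
    }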

I'll add a "me too" on this. Saving and loading tracked anchors is a must for this platform...

As far as I understand, there are no persistence APIs yet. I couldn't support the need for this functionality more. It would be great to have anchors persist per app (the equivalent of HoloLens's WorldAnchorStore) and into an app-independent binary file to pass around (the equivalent of HoloLens's WorldAnchorTransferBatch). The latter case is especially important for industrial use cases, where time is gold: you'd want to map your scenario's anchors once and then distribute the anchor list file to other iOS devices so they can recognise/localise the anchors against their own sensory data. Fingers crossed.
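
There's no such API today, but a rough sketch of the "anchor list file" half of the idea might look like the following. AnchorRecord is a made-up type, not an ARKit API, and this captures only poses relative to the session origin; without real relocalization against sensory data, restored anchors won't line up with the physical world:

    import ARKit

    // Hypothetical "transfer file" sketch. AnchorRecord is my own type,
    // not part of ARKit.
    struct AnchorRecord: Codable {
        let identifier: UUID
        let transform: [Float] // 16 column-major values of simd_float4x4
    }

    func records(from anchors: [ARAnchor]) -> [AnchorRecord] {
        anchors.map { anchor in
            let m = anchor.transform
            let values = [m.columns.0, m.columns.1, m.columns.2, m.columns.3]
                .flatMap { [$0.x, $0.y, $0.z, $0.w] }
            return AnchorRecord(identifier: anchor.identifier, transform: values)
        }
    }

    func save(_ records: [AnchorRecord], to url: URL) throws {
        try JSONEncoder().encode(records).write(to: url)
    }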

I don't think those UUIDs on the anchor are intended to be the same across sessions or devices. It would be great if they were.

Sorry, this isn't supported. Each session is entirely new from one execution of the app to the next. Tracking is all relative to the starting position of the session, and anchors are reset.
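
For anyone tempted to work around this by hand: you can re-add saved transforms to a fresh session with ARSession.add(anchor:), but since the new session's origin is wherever the device happened to be at launch, the restored anchors won't coincide with their original real-world positions. A sketch, reusing the hypothetical AnchorRecord from a few posts up:

    import ARKit

    // Sketch only: re-creates anchors from saved records in a new session.
    // The transforms are relative to the OLD session's origin, so without
    // relocalization the content will appear in the wrong place.
    func restore(_ records: [AnchorRecord], into session: ARSession) {
        for record in records where record.transform.count == 16 {
            let t = record.transform
            let matrix = simd_float4x4([
                simd_float4(t[0],  t[1],  t[2],  t[3]),
                simd_float4(t[4],  t[5],  t[6],  t[7]),
                simd_float4(t[8],  t[9],  t[10], t[11]),
                simd_float4(t[12], t[13], t[14], t[15])
            ])
            session.add(anchor: ARAnchor(transform: matrix))
        }
    }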


@stepanstulov Thanks for explaining the use-case but please also follow up with a feature request at http://bugreporter.apple.com/

@funnest

There we go. There is no ARKit category under iOS + SDK so I just chose SceneKit.

https://bugreport.apple.com/web/?problemID=33286970

No need for AI, I think. You could use a mix of GPS positioning and semantic grouping of features, such as bag of words (you'd have to extract your own descriptors; Apple doesn't give you access to theirs, nor say which feature extractor they're using). An example of how this is used to detect trajectory loops (i.e., when you get back to the same place) is here: https://github.com/raulmur/ORB_SLAM2 (careful, no copy-pasting, it's GPL!)


Basically, it's a rudimentary representation of a location. Using a lighting-, rotation-, and scale-invariant feature extractor makes it more likely that you'll detect the same location again. Save the bag of words for each location, along with its GPS position if outdoors, and you'll be able to use them in the next session or share them with a friend.
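
To make "bag of words" concrete, here is a toy sketch (illustrative only, not ORB_SLAM2 code): descriptors are quantized to their nearest vocabulary word, a location becomes a normalized word histogram, and two places are compared by cosine similarity:

    // Toy bag-of-words sketch. A real system would use binary ORB
    // descriptors and a vocabulary tree; plain float vectors keep the
    // idea readable. Assumes a non-empty vocabulary.
    typealias Descriptor = [Float]

    func squaredDistance(_ a: Descriptor, _ b: Descriptor) -> Float {
        zip(a, b).reduce(0) { $0 + ($1.0 - $1.1) * ($1.0 - $1.1) }
    }

    // Quantize each descriptor to its nearest vocabulary word and build
    // a normalized histogram: the location's "bag of words".
    func bagOfWords(descriptors: [Descriptor], vocabulary: [Descriptor]) -> [Float] {
        var histogram = [Float](repeating: 0, count: vocabulary.count)
        for d in descriptors {
            let nearest = vocabulary.indices.min {
                squaredDistance(d, vocabulary[$0]) < squaredDistance(d, vocabulary[$1])
            }!
            histogram[nearest] += 1
        }
        let total = max(histogram.reduce(0, +), 1)
        return histogram.map { $0 / total }
    }

    // Cosine similarity between two histograms: a value near 1 suggests
    // the camera is seeing the same place again (a loop-closure candidate).
    func similarity(_ a: [Float], _ b: [Float]) -> Float {
        let dot = zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
        let na = a.reduce(0) { $0 + $1 * $1 }.squareRoot()
        let nb = b.reduce(0) { $0 + $1 * $1 }.squareRoot()
        return dot / max(na * nb, .ulpOfOne)
    }

Matching a new frame then reduces to computing its histogram and picking the stored location with the highest similarity above some threshold.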