@KTRosenberg From looking at the SDK, it looks like SceneReconstructionProvider (https://developer.apple.com/documentation/arkit/scenereconstructionprovider) will be able to get you the raw detected geometry as MeshAnchor instances (https://developer.apple.com/documentation/arkit/meshanchor). This doesn't solve the issue of pixel data access, but it gives an idea of the lowest-level scene understanding you'll be able to get (so far).
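For anyone else digging into this, here's a rough sketch of what consuming those mesh anchors might look like with the new ARKitSession API. This is based on the beta docs, so treat the exact names and shapes as provisional; error handling is omitted:

```swift
import ARKit

// Sketch based on the visionOS beta docs; details may change between seeds.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

Task {
    guard SceneReconstructionProvider.isSupported else { return }
    try await session.run([sceneReconstruction])

    // Stream of MeshAnchor added/updated/removed events as the room is scanned.
    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor
        // anchor.geometry exposes the raw vertex/face buffers of the scanned mesh.
        print(update.event, anchor.id, anchor.geometry.vertices.count)
    }
}
```

So you get the reconstructed mesh itself, just not the camera pixels it was built from.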
@KTRosenberg I am also very concerned about the lack of ability to sense/interact with the world around you. It seems you will only get what xrOS decides to give you, which is currently simple things like walls, floors, etc. As you've no doubt noticed, there are repeated (and valid) references to user privacy in the sessions. The downside is that we are beholden to whatever xrOS shares with us; we can't implement any novel object detection of our own. This is of course v1, but being able to sense and interact with the world around us is vital for the AR vision to become a reality.
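For what it's worth, that wall/floor-level data appears to come through PlaneDetectionProvider, and the classifications really are coarse categories rather than arbitrary objects. A rough sketch (again going by the beta docs, so names may shift):

```swift
import ARKit

// Sketch of the plane-level scene understanding exposed in the beta;
// classifications are broad categories like wall/floor/ceiling/table.
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

Task {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        // A PlaneAnchor tells you the surface category, not what the object is.
        print(update.anchor.classification, update.anchor.alignment)
    }
}
```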