visionOS: Interact with reality

My understanding of visionOS is that you cannot use the LiDAR sensor or the depth camera to get a 3D model of the room you are in. This would mean that, at the moment, it is not possible to attach virtual objects such as sensors to the ceiling or the floor.

My client would like to use the Vision Pro to plan where sensors and cables should be placed in a physical room during construction.

Is my understanding correct? I could not find anything related in the documentation.

Accepted Reply

You should probably watch the WWDC23 sessions on spatial computing and ARKit. There are two types of interaction with the room you're in: plane detection and scene reconstruction. With plane detection, you get information about horizontal and vertical planes detected by ARKit, and you can anchor virtual content to those planes. With scene reconstruction, you get a more detailed mesh of your surroundings to work with.

So, yes, your use case should be possible with these features/techniques.
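As a rough illustration, here is a minimal sketch of how both features are driven by ARKit's data providers on visionOS (this must run inside an immersive space, and the exact placement logic for a "sensor marker" is hypothetical):

```swift
import ARKit

// Sketch: run plane detection and scene reconstruction in one ARKit session.
@MainActor
final class RoomTracker {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    let sceneReconstruction = SceneReconstructionProvider()

    func start() async {
        do {
            try await session.run([planeDetection, sceneReconstruction])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Plane anchors arrive as an async stream; each carries a
        // classification (ceiling, floor, wall, ...) and a transform
        // you can anchor virtual content to.
        for await update in planeDetection.anchorUpdates {
            if update.anchor.classification == .ceiling {
                // Hypothetical: place a virtual sensor marker at
                // update.anchor.originFromAnchorTransform
            }
        }
    }
}
```

Note that both providers require the user's permission and a full or mixed immersive space; they are not available in a plain window.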
