We are creating an iOS app similar to Snapchat. The app accesses the camera, and our goal is a seamless user experience: we want to start placing AR objects into the frame the moment the camera opens. Our problem is that ARKit forces us to move the camera around to scan the environment before we are allowed to place 3D/2D objects. Is there any documentation that would help us avoid waiting for plane detection and tracking before placing these objects into the frame? We are using the Unity ARKit plugin on iOS 11.
Your help would be so much appreciated!
No, you don't need to wait for ARKit to finish scanning the environment. Just add an object during initialization and it will be shown immediately.
The only problem is that ARKit cannot detect an anchor / plane anchor at that moment, so you should place your object at a hardcoded position first (for example, relative to the camera) and then update it later once a plane is found.
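A minimal sketch of that idea with the Unity ARKit plugin (names like `ImmediatePlacement` and the 1 m offset are my own choices; the `UnityARSessionNativeInterface` / `UnityARMatrixOps` calls are from the plugin, but check them against your plugin version):

```csharp
using UnityEngine;
using UnityEngine.XR.iOS; // Unity ARKit plugin namespace

public class ImmediatePlacement : MonoBehaviour
{
    public GameObject arObject;   // object to show right away
    bool anchored = false;

    void Start()
    {
        // Show the object immediately at a hardcoded position
        // (here: 1 m in front of the camera), before any plane exists.
        Transform cam = Camera.main.transform;
        arObject.transform.position = cam.position + cam.forward * 1.0f;
        arObject.SetActive(true);

        // Update the position once ARKit detects its first plane anchor.
        UnityARSessionNativeInterface.ARAnchorAddedEvent += OnPlaneFound;
    }

    void OnPlaneFound(ARPlaneAnchor anchor)
    {
        if (anchored) return;
        anchored = true;
        // Snap the object onto the detected plane.
        arObject.transform.position = UnityARMatrixOps.GetPosition(anchor.transform);
        arObject.transform.rotation = UnityARMatrixOps.GetRotation(anchor.transform);
    }

    void OnDestroy()
    {
        UnityARSessionNativeInterface.ARAnchorAddedEvent -= OnPlaneFound;
    }
}
```

Until the first plane arrives the object just floats in front of the camera; if you want it to stay glued to the view while waiting, parent it to the camera and un-parent it in `OnPlaneFound`.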