Is it possible to get the SCNVector3 position of a real-world object using Core ML and ARKit and create an SCNPlane above the object?

I am working on an AR-based solution in which I am rendering some 3D models using SceneKit and ARKit. I have also integrated Core ML to identify objects and render the corresponding 3D objects in the scene.

But right now I am just rendering it in the center of the screen as soon as I detect the object (only for the list of objects that I have). Is it possible to get the position of the real-world object so that I can show some overlay above the object?

That is, if I have a water bottle scanned, I should be able to get the position of the water bottle. It could be anywhere on the water bottle, but it shouldn't go outside of it. Is this possible using SceneKit?

Replies

Short answer: No


Core ML always works with images; in lucky cases it may give you a position in the 2D world, which is still missing a dimension for the 3D world. For classification models the situation is worse, since they give you nothing about the position or boundary of the object.


If your app can accept slower processing speed (I'm afraid that is not the case when working with ARKit), you may split your input image into smaller pieces and then use trial and error with Core ML to find a closer boundary of the object, as in the sketch below. However, one dimension is still missing (you may assign it a fixed number).
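
Here is a rough sketch of that tiling idea, assuming you have a classifier wrapped in a VNCoreMLModel and a CGImage captured from the camera frame; the function name, grid size, and confidence threshold are all illustrative, not part of any API:

```swift
import Vision
import CoreGraphics

/// Splits the image into a grid, classifies each tile, and returns the rects
/// (in image coordinates) whose top classification matches `targetLabel`.
/// The union of the matching tiles is a coarse 2D boundary; depth is still unknown.
func approximateBoundary(of targetLabel: String,
                         in image: CGImage,
                         using model: VNCoreMLModel,
                         grid: Int = 4,
                         minimumConfidence: VNConfidence = 0.6) -> [CGRect] {
    var matches: [CGRect] = []
    let tileWidth = image.width / grid
    let tileHeight = image.height / grid

    for row in 0..<grid {
        for col in 0..<grid {
            let rect = CGRect(x: col * tileWidth, y: row * tileHeight,
                              width: tileWidth, height: tileHeight)
            guard let tile = image.cropping(to: rect) else { continue }

            // Run the classifier on this tile only.
            let request = VNCoreMLRequest(model: model)
            let handler = VNImageRequestHandler(cgImage: tile, options: [:])
            try? handler.perform([request])

            if let top = (request.results as? [VNClassificationObservation])?.first,
               top.identifier == targetLabel,
               top.confidence >= minimumConfidence {
                matches.append(rect)
            }
        }
    }
    return matches
}
```

Keep in mind this runs the model once per tile, so it is far too slow to do on every ARKit frame; at best you would run it occasionally in the background.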


Good luck

Maybe you can add a tap gesture recognizer, so that when you tap the object, the scene view runs a hit test to look for a feature point. You can then approximate the 3D location of the object by the coordinates of that feature point.
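
A minimal sketch of that idea, assuming an ARSCNView outlet named sceneView with a running session; the handler name, plane size, and vertical offset are placeholders you would tune for your scene:

```swift
import UIKit
import ARKit
import SceneKit

class ViewController: UIViewController {
    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tap)
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: sceneView)

        // Hit-test against feature points ARKit has detected near the tapped screen point.
        guard let result = sceneView.hitTest(point, types: .featurePoint).first else { return }

        // The last column of the world transform holds the feature point's 3D position.
        let t = result.worldTransform.columns.3
        let position = SCNVector3(t.x, t.y, t.z)

        // Place a small plane slightly above the estimated position as an overlay.
        let plane = SCNPlane(width: 0.1, height: 0.05)
        plane.firstMaterial?.diffuse.contents = UIColor.white.withAlphaComponent(0.8)
        let node = SCNNode(geometry: plane)
        node.position = SCNVector3(position.x, position.y + 0.05, position.z)
        node.constraints = [SCNBillboardConstraint()] // keep the plane facing the camera
        sceneView.scene.rootNode.addChildNode(node)
    }
}
```

The accuracy depends entirely on ARKit having found a feature point on or near the object, so the overlay is an approximation rather than a true object position.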

No, you cannot tap / hit-test a real object in the real world. You can only do that with "artificial" objects which you have created or loaded (3D models).