Can the Object Capture API be integrated into an iOS app?

From my understanding, you capture images on an iOS device and send them to macOS, which uses the Object Capture API's photogrammetry to process them into a 3D model…

Is it possible to exclude macOS and pull the API into the app itself, so that everything from scanning to processing happens within the app? I see there are already scanner apps on the App Store, so I know it is possible to create 3D models on an iPhone within an app. But can this API do that? If not, are there any resources to point me in the right direction?

(I’m working on a 3D food app that scans food items and turns them into 3D models for restaurant owners… I’d like a restaurant owner to be able to scan their food item entirely within the app.)

Hi, as far as I understand from the session "Creating 3D models with Object Capture", yes you can, but you’re going to be limited to the reduced and medium detail levels.

Currently the API is limited to macOS, and I don’t know why they decided to exclude iOS.

The latest iPad uses the M1 chip, so there is apparently no reason to exclude iOS from creating 3D objects.

I’d appreciate it if Apple engineers could explain why it is limited to the Mac, or whether it will come to iOS in future betas.

The new Object Capture API is a macOS API. It requires the power of the Mac for the 3D reconstruction of objects. You can use iOS devices to capture the input images, but the reconstruction has to be done on a Mac.

We offer different detail levels which are optimized for different use cases. The “reduced” and “medium” detail levels optimize the output 3D model size for viewing in AR Quick Look on iOS devices.
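To make this concrete, here is a minimal sketch of the macOS reconstruction side using RealityKit’s `PhotogrammetrySession`, with the “reduced” detail level suited to AR Quick Look on iOS. The folder and output paths are placeholders, and error handling is abbreviated:

```swift
import Foundation
import RealityKit

// Folder of HEIC/JPEG images captured on an iOS device (placeholder path).
let inputFolder = URL(fileURLWithPath: "/path/to/CapturedImages", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/Output/dish.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .normal   // use .high for low-texture objects

do {
    let session = try PhotogrammetrySession(input: inputFolder,
                                            configuration: configuration)

    // ".reduced" keeps the output model small enough for AR Quick Look on iOS.
    let request = PhotogrammetrySession.Request.modelFile(url: outputModel,
                                                          detail: .reduced)

    Task {
        for try await output in session.outputs {
            switch output {
            case .requestProgress(_, let fraction):
                print("Progress: \(fraction)")
            case .requestComplete(_, let result):
                if case .modelFile(let url) = result {
                    print("Model written to \(url)")
                }
            case .requestError(_, let error):
                print("Reconstruction failed: \(error)")
            default:
                break
            }
        }
    }

    try session.process(requests: [request])
    RunLoop.main.run()  // keep the command-line tool alive while processing
} catch {
    print("Unable to create session: \(error)")
}
```

Run as a macOS command-line tool pointed at the folder of captured images; swapping `.reduced` for `.medium` or `.full` trades file size for detail.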
