I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses Point Cloud data captured from the iPhone's LiDAR sensor. I want to know how to take the Point Cloud data captured on the iPhone during an ObjectCaptureSession and use it to create 3D models with PhotogrammetrySession on macOS.
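For context, here is roughly what my capture setup looks like on the iOS side (a simplified sketch; the images folder location is just an example):

```swift
import RealityKit
import SwiftUI

// Simplified sketch of the iOS capture side; the images folder is only an example location.
struct CaptureView: View {
    @State private var session = ObjectCaptureSession()

    var body: some View {
        ObjectCaptureView(session: session)
            .onAppear {
                // Create the folder the session will write its captured images into.
                // This folder is what I later transfer to the Mac for reconstruction.
                let imagesFolder = FileManager.default
                    .urls(for: .documentDirectory, in: .userDomainMask)[0]
                    .appendingPathComponent("Images/", isDirectory: true)
                try? FileManager.default.createDirectory(at: imagesFolder,
                                                         withIntermediateDirectories: true)
                session.start(imagesDirectory: imagesFolder)
            }
    }
}
```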
From the WWDC21 sample code, I know that PhotogrammetrySession uses the depth map from each captured photo, embedded in the HEIC image, to create a 3D asset on macOS. I would like to know whether Point Cloud data is also embedded into the images for use during 3D reconstruction, and if not, how else the Point Cloud data is passed in for reconstruction.
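On the Mac side, my reconstruction step currently follows the WWDC21 sample, feeding PhotogrammetrySession the folder of transferred images (a sketch; the paths and detail level are placeholders):

```swift
import Foundation
import RealityKit

// Sketch of the macOS reconstruction step; paths are placeholders.
func reconstructModel() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/transferred/Images/", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/path/to/output/model.usdz")

    // The session reads the images (and the depth maps embedded in the HEICs) from the folder.
    let session = try PhotogrammetrySession(input: inputFolder)

    // Ask for a model file, then wait for the request to finish.
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            print("Completed \(request) with result \(result)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            print("All requests finished")
        default:
            break
        }
    }
}
```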
My other question: I know that Point Cloud data can be returned as the result of a PhotogrammetrySession.Request. I would like to know whether this PointCloud data is the same set of data captured during the ObjectCaptureSession from WWDC23 that is used to render ObjectCapturePointCloudView.
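For reference, this is the kind of request I mean (a sketch; I am assuming the .pointCloud request and result case here):

```swift
import Foundation
import RealityKit

// Sketch: requesting the point cloud result that my question is about.
func fetchReconstructionPointCloud(from inputFolder: URL) async throws {
    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [.pointCloud])

    for try await output in session.outputs {
        if case .requestComplete(_, .pointCloud(let cloud)) = output {
            // Is this the same point cloud that ObjectCaptureSession captures
            // on the iPhone and shows in ObjectCapturePointCloudView?
            print("Received reconstruction point cloud: \(cloud)")
        }
    }
}
```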
Thank you in advance to everyone for the help. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
Sorry folks, I know this is a duplicate of another post I submitted. I thought the original post hadn't gone through, and there is no way to delete a post on the Apple Developer Forums...