Utilizing point cloud data from `ObjectCaptureSession` (WWDC23)

I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.

I have two questions regarding the newly updated APIs.

From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses point cloud data captured from the iPhone's LiDAR sensor. I want to know how to take the point cloud data captured by ObjectCaptureSession on iPhone and use it to create 3D models with PhotogrammetrySession on macOS.

From the WWDC21 example code, I know that PhotogrammetrySession uses the depth map from captured photos by embedding it into the HEIC image, and that this data is used to create a 3D asset with PhotogrammetrySession on macOS. I would like to know whether point cloud data is also embedded into the image for use during 3D reconstruction and, if not, how else the point cloud data is passed in for reconstruction.

My other question: I know that point cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know whether this point cloud data is the same set of data captured during the WWDC23 ObjectCaptureSession that is used to render ObjectCapturePointCloudView.

Thank you in advance to everyone for the help. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.

  • Sorry folks, I know this is a duplicate of another post I submitted. I thought the original post hadn't gone through, and there is no way to delete a post on the Apple Developer Forums...


Accepted Reply

My understanding from the WWDC session "Meet Object Capture for iOS", as well as from talking to some of the Object Capture engineers during the WWDC 2023 Labs, is that the point cloud data captured during an ObjectCaptureSession is embedded in the HEIC file and automatically parsed during a macOS PhotogrammetrySession. So as long as you provide the input (images) directory to the PhotogrammetrySession when you instantiate it, alongside the optional checkpoint folder, I believe the macOS PhotogrammetrySession should automatically take advantage of the point cloud data for improved reconstruction times and performance.
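For reference, here's a minimal sketch of that setup on macOS, assuming the captured images and the checkpoint folder have already been copied off the device. The `imagesURL`, `checkpointsURL`, and `modelURL` paths are placeholders, and `.medium` detail is just an example, not a recommendation:

```swift
import RealityKit

func reconstruct() async throws {
    // Folder of HEIC images captured by ObjectCaptureSession (placeholder path).
    let imagesURL = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    // Checkpoint folder copied from the device's ObjectCaptureSession (placeholder path).
    let checkpointsURL = URL(fileURLWithPath: "/path/to/Snapshots", isDirectory: true)
    // Where the finished model should be written.
    let modelURL = URL(fileURLWithPath: "/path/to/model.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    // Pointing the session at the checkpoint folder lets it reuse the on-device data.
    configuration.checkpointDirectory = checkpointsURL

    let session = try PhotogrammetrySession(input: imagesURL, configuration: configuration)
    try session.process(requests: [.modelFile(url: modelURL, detail: .medium)])

    // Drain the output stream until processing finishes.
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(fractionComplete)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            print("Done")
        default:
            break
        }
    }
}
```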

For what it's worth, in the Taking Pictures for 3D Object Capture sample code, the depth data captured and intended to be provided to the PhotogrammetrySession isn't actually embedded in the HEIC file, but saved as a separate depth map in a TIFF file. Granted, I know HEIC files are capable of including depth information, so it may be that PhotogrammetrySession can accept a HEIC file with embedded depth data (and, new this year, embedded point cloud data), but I just wanted to make that distinction. At least in the Taking Pictures for 3D Object Capture sample code, each capture produces a HEIC file, a TIFF depth file, and a TXT gravity file; I've been passing an entire folder containing all of those files into the PhotogrammetrySession, and it seems that PhotogrammetrySession leverages all of the provided data accordingly.
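If you'd rather not rely on folder-naming conventions, you can also construct PhotogrammetrySample values yourself and attach the depth map and gravity data explicitly. A rough sketch, in which `loadPixelBuffer(from:)`, `loadDepthMap(from:)`, and `loadGravity(from:)` are hypothetical helpers you'd implement for your own file formats:

```swift
import RealityKit
import CoreVideo
import simd

// Hypothetical helpers -- implement these for your own HEIC/TIFF/TXT loading.
func loadPixelBuffer(from url: URL) throws -> CVPixelBuffer { fatalError("not implemented") }
func loadDepthMap(from url: URL) throws -> CVPixelBuffer { fatalError("not implemented") }
func loadGravity(from url: URL) throws -> simd_float3 { fatalError("not implemented") }

func makeSample(id: Int, imageURL: URL, depthURL: URL, gravityURL: URL) throws -> PhotogrammetrySample {
    // The sample carries the image plus any auxiliary data you attach to it.
    var sample = PhotogrammetrySample(id: id, image: try loadPixelBuffer(from: imageURL))
    sample.depthDataMap = try loadDepthMap(from: depthURL)  // the TIFF depth file
    sample.gravity = try loadGravity(from: gravityURL)      // the TXT gravity vector
    return sample
}
```

A sequence of these samples can then be passed to PhotogrammetrySession in place of a folder URL via its sample-sequence initializer.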

To your latter question, while I'm not entirely sure whether the PhotogrammetrySession.PointCloud returned from a PhotogrammetrySession.Request is actually the same thing that ObjectCapturePointCloudView uses, my guess would be yes. It seems like ObjectCapturePointCloudView is a convenience SwiftUI view that beautifully renders the underlying point cloud data, which would match the point cloud data returned from the PhotogrammetrySession.Request.
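If you want to inspect that data yourself, you can ask the session for a point cloud directly. A minimal sketch, assuming `imagesURL` points at a folder of captures and skipping error/progress handling:

```swift
import RealityKit

func fetchPointCloud(imagesURL: URL) async throws -> PhotogrammetrySession.PointCloud? {
    let session = try PhotogrammetrySession(
        input: imagesURL,
        configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.pointCloud])

    for try await output in session.outputs {
        // The point cloud arrives as one of the request results.
        if case .requestComplete(_, .pointCloud(let cloud)) = output {
            return cloud
        }
    }
    return nil
}
```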

Replies


We're using an automated photography studio to take pictures for Object Capture (on Mac), with 3 reflex cameras connected to a remotely controlled turntable. It would be very useful to inject the point cloud data coming from iOS to enhance the quality of featureless objects and also to get real-size measurements (which you don't get with reflex camera pictures).

Do you have any clue about this??

@KKodiac we're trying to do the same. How did you manage to create an Object Capture listener on the server side?

@Francesco_Esimple in my experience, I was able to get PhotogrammetrySession running on the server side to process and reconstruct a 3D model from the data captured on my iPhone. The captured data consisted of 1. the HEIC images and 2. a "Snapshot" folder containing the data ObjectCaptureSession saved to its checkpointDirectory (check out the docs here).
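A rough sketch of the relevant capture-side setup on iOS (the Images/Snapshots folder names are just placeholders, not necessarily what your app should use):

```swift
import RealityKit

@MainActor
func startCapture() -> ObjectCaptureSession {
    // Placeholder locations inside the app's Documents directory.
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let imagesDirectory = documents.appendingPathComponent("Images/", isDirectory: true)
    let checkpointDirectory = documents.appendingPathComponent("Snapshots/", isDirectory: true)
    try? FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
    try? FileManager.default.createDirectory(at: checkpointDirectory, withIntermediateDirectories: true)

    var configuration = ObjectCaptureSession.Configuration()
    // The session writes its intermediate data (including point cloud data) here.
    configuration.checkpointDirectory = checkpointDirectory

    let session = ObjectCaptureSession()
    session.start(imagesDirectory: imagesDirectory, configuration: configuration)
    // After capture finishes, upload both the Images and Snapshots folders to the server.
    return session
}
```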

I used Vapor as my server-side Swift framework, imported RealityKit's PhotogrammetrySession, and set the session configuration's checkpointDirectory property to the folder that had been uploaded to the server from the iPhone's ObjectCaptureSession checkpointDirectory. By doing this I was able to use the point cloud and other bits of data, along with the images, to reconstruct the 3D model in the server-side PhotogrammetrySession.
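For context, here's a rough sketch of how that can be wired up in a Vapor route once the images and checkpoint folder have already been saved to disk on the server. The route name, the `/var/captures` path, and the `captureID` parameter are illustrative, not my actual setup:

```swift
import Vapor
import RealityKit

func routes(_ app: Application) throws {
    // Hypothetical route: assumes the client already uploaded its Images and
    // Snapshots (checkpoint) folders and passes an identifier for them.
    app.post("reconstruct", ":captureID") { req async throws -> String in
        let captureID = try req.parameters.require("captureID")
        let base = URL(fileURLWithPath: "/var/captures/\(captureID)", isDirectory: true)

        var configuration = PhotogrammetrySession.Configuration()
        configuration.checkpointDirectory = base.appendingPathComponent("Snapshots", isDirectory: true)

        let session = try PhotogrammetrySession(
            input: base.appendingPathComponent("Images", isDirectory: true),
            configuration: configuration)
        let modelURL = base.appendingPathComponent("model.usdz")
        try session.process(requests: [.modelFile(url: modelURL, detail: .medium)])

        // Note: a real server would run this as a background job instead of
        // keeping the request open for the whole reconstruction.
        for try await output in session.outputs {
            if case .processingComplete = output { break }
        }
        return modelURL.path
    }
}
```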

I wasn't able to get access to the inner bits of data saved to the checkpointDirectory, though. The folder itself was a random collection of folders and .bin files, and opening them in a hex editor, I was only able to make out identifiers like com.oc.PointCloud. Hopefully more features and documentation will come out to help us figure these things out.

checkpointDirectory seems to only get the .bin files written to it when you flip the object, and they get deleted once the reconstruction is successful.