Applying Point Cloud to updated PhotogrammetrySession

Hello, I am testing the updated PhotogrammetrySession with a test iOS app I created that uses the ObjectCaptureSession announced at this year's WWDC23 session "Meet Object Capture for iOS".

I currently have server-side code using RealityKit's PhotogrammetrySession on macOS 14.0 beta, which means the PhotogrammetrySession should be able to utilize the Point Cloud data captured during the ObjectCaptureSession in the iOS app (as announced during the WWDC23 session).
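
For context, the server-side flow is roughly the following minimal sketch (the paths and the .reduced detail level are placeholders, not my actual setup):

```swift
import Foundation
import RealityKit

// Minimal sketch of the server-side reconstruction (macOS 14 beta).
// The paths and the .reduced detail level are placeholders.
let inputFolder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .normal

let session = try PhotogrammetrySession(input: inputFolder, configuration: configuration)

// Consume progress and results from the session's async output stream.
let outputTask = Task {
    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fractionComplete):
            print("Progress: \(fractionComplete)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        case .processingComplete:
            print("Processing complete")
        default:
            break
        }
    }
}

try session.process(requests: [.modelFile(url: outputModel, detail: .reduced)])
// In a command-line tool, keep the process alive until processing completes,
// e.g. by awaiting outputTask.value from an async main.
```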

My expectation was that the Point Cloud captured during ObjectCaptureSession was embedded into the images, so that I only needed to import the HEIC image files to use with PhotogrammetrySession on macOS. However, I came across the following warning message:

Image Folder Reader: Cannot read temporal depth point clouds of sample (id = 20)

for all of my input images. The thing to note is that when I run the same PhotogrammetrySession on iOS, the Point Cloud data seems to be processed just fine.

After digging into the hex dump of a HEIC image captured during ObjectCaptureSession, I came across the following item (infe) entries:

mime application/rdf+xml
infe hvc1
infe uri octag:com:apple:oc:cameraTrackingState
infe uri octag:com:apple:oc:cameraCalibrationData
infe uri octag:com:apple:oc:2022:objectTransform
infe uri octag:com:apple:oc:objectBoundingBox
infe uri octag:com:apple:oc:rawFeaturePoints
infe uri octag:com:apple:oc:pointCloudData
infe uri octag:com:apple:oc:version
infe uri octag:com:apple:oc:segmentID
infe uri octag:com:apple:oc:wideToDepthTransform

which, to me, looked like the locations of the auxiliary data, including the Point Cloud, embedded in each captured HEIC image.
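
For what it's worth, here is a minimal sketch of how one might inspect such a HEIC with ImageIO (the path is a placeholder); whether these octag: items are reachable at all through the public properties and auxiliary-data APIs is an open question:

```swift
import Foundation
import ImageIO

// Placeholder path to one HEIC captured by ObjectCaptureSession.
let imageURL = URL(fileURLWithPath: "/path/to/IMG_0001.HEIC")

if let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil) {
    // Standard per-image properties (EXIF, TIFF, HEIC dictionaries, ...).
    if let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any] {
        print(properties)
    }
    // Apple's documented auxiliary payloads, e.g. a depth map if present.
    if let depthInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDepth) {
        print("Found depth auxiliary data: \(depthInfo)")
    }
}
```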

So, the question is: is it possible for me to access these items, read them, and send their data to the server-side PhotogrammetrySession to be processed alongside their respective HEIC images? Or am I getting this completely wrong?

Accepted Reply

I found out that setting .checkpointDirectory on the ObjectCaptureSession to snapshot the current captures (including the Point Cloud), passing that snapshot to the server, and pointing the server-side PhotogrammetrySession.Configuration.checkpointDirectory at the snapshot URL makes the processing use the Point Cloud data captured during ObjectCaptureSession on iOS.

Importing the snapshot from the checkpointDirectory significantly improves the quality of the result for the featureless object shown above (my AirPods Pro). I believe this is the answer to utilizing the Point Cloud data as asked in my question.
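
To make the setup concrete, the two sides look roughly like this (a sketch only; the directory paths are placeholders, and the iOS and macOS halves of course live in separate targets):

```swift
import Foundation
import RealityKit

// Placeholder directories; on device these live under the app's Documents folder,
// and on the server they point at the uploaded copies.
let imagesDirectory = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true)
let snapshotDirectory = URL(fileURLWithPath: "/path/to/Snapshots/", isDirectory: true)

// iOS 17 (ObjectCaptureSession): write checkpoint data while capturing.
var captureConfiguration = ObjectCaptureSession.Configuration()
captureConfiguration.checkpointDirectory = snapshotDirectory

let captureSession = ObjectCaptureSession()
captureSession.start(imagesDirectory: imagesDirectory, configuration: captureConfiguration)

// ... capture finishes; upload both Images/ and Snapshots/ to the server ...

// macOS 14 (PhotogrammetrySession): reuse the uploaded snapshot as the checkpoint.
var reconstructionConfiguration = PhotogrammetrySession.Configuration()
reconstructionConfiguration.checkpointDirectory = snapshotDirectory

let reconstructionSession = try PhotogrammetrySession(
    input: imagesDirectory,
    configuration: reconstructionConfiguration
)
try reconstructionSession.process(
    requests: [.modelFile(url: URL(fileURLWithPath: "/path/to/model.usdz"), detail: .medium)]
)
```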

Replies

Without using RealityKit's Object Capture API, how can I manually add this data when capturing images through AVFoundation or ARKit?

  • Can you specify what kind of data you're talking about? If you mean the data captured in the checkpointDirectory by ObjectCaptureSession, I don't think you can create it from AVFoundation or ARKit, since those folders are created automatically in formats that are not publicly specified.
