Post · Replies · Boosts · Views · Activity
Custom UI for ObjectCaptureView
Is it possible to customize `ObjectCaptureView`? I'd like the turntable indicator, which shows whether a photo was captured with point-cloud data, to use a different foreground color: specifically, I want the white area beneath the point cloud to be a color I specify. Would this be possible by extending `ObjectCapturePointCloudView`?
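For context, the only workaround I can think of is layering a separately styled `ObjectCapturePointCloudView` over the session view. This is a hypothetical sketch, not a confirmed API: whether `foregroundStyle` is honored by the point-cloud view is exactly what I am unsure about.

```swift
import SwiftUI
import RealityKit

// Hypothetical sketch: ObjectCaptureView exposes no color API for its
// capture dial, so one idea is to stack a tinted point-cloud view on top.
struct TintedCaptureView: View {
    let session: ObjectCaptureSession

    var body: some View {
        ZStack {
            ObjectCaptureView(session: session)
            // Assumption: that the point-cloud view respects SwiftUI's
            // foregroundStyle. If it does not, a custom RealityKit/Metal
            // overlay would be required instead.
            ObjectCapturePointCloudView(session: session)
                .foregroundStyle(.cyan)
        }
    }
}
```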
0 replies · 0 boosts · 591 views · Jul ’23
Applying Point Cloud to updated PhotogrammetrySession
Hello, I am testing the updated PhotogrammetrySession with a test iOS app I created that uses the ObjectCaptureSession announced in this year's WWDC23 session "Meet Object Capture for iOS". I have server-side code using RealityKit's PhotogrammetrySession on macOS 14.0 beta, which means PhotogrammetrySession should be able to utilize the Point Cloud data captured during the ObjectCaptureSession in the iOS app (as announced during the WWDC23 session).

My expectation was that the Point Cloud captured during ObjectCaptureSession is embedded into the image, so that I only need to import the HEIC image files for PhotogrammetrySession on macOS. However, I get the following warning for all of my input images:

```
Image Folder Reader: Cannot read temporal depth point clouds of sample (id = 20)
```

Notably, when I run the same PhotogrammetrySession on iOS, the Point Cloud data seems to be processed just fine. After digging into the hex dump of a HEIC image captured during ObjectCaptureSession, I came across the following:

```
mimeapplication/rdf+xml infe 6hvc1<infe 7uri octag:com:apple:oc:cameraTrackingState>infe 8uri octag:com:apple:oc:cameraCalibrationData=infe 9uri octag:com:apple:oc:2022:objectTransform:infe :uri octag:com:apple:oc:objectBoundingBox9infe ;uri octag:com:apple:oc:rawFeaturePoints7infe <uri octag:com:apple:oc:pointCloudData0infe =uri octag:com:apple:oc:version2infe >uri octag:com:apple:oc:segmentID=infe ?uri octag:com:apple:oc:wideToDepthTransform
```

This looks to me like the location of the data, including the Point Cloud data, embedded in each captured HEIC image. So my question is: can I access these items, read them, and send their data to the server-side PhotogrammetrySession to be processed alongside the corresponding HEIC image? Or am I getting this completely wrong?
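For reference, this is a minimal sketch of the server-side workflow that produces the warning above. All paths are placeholders; the sketch assumes the images folder alone is sufficient input, which is exactly what the warning seems to contradict.

```swift
import Foundation
import RealityKit

// Minimal macOS reconstruction sketch; input/output paths are placeholders.
// Runs inside an async context (e.g. a top-level `@main` async entry point).
let images = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
let session = try PhotogrammetrySession(input: images)

// Request a full-detail USDZ model from the captured images.
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/model.usdz"), detail: .full)
])

// Stream progress and results as the session works through the request.
for try await output in session.outputs {
    switch output {
    case .requestComplete(let request, let result):
        print("Completed \(request): \(result)")
    case .requestError(let request, let error):
        print("Failed \(request): \(error)")
    default:
        break
    }
}
```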
2 replies · 0 boosts · 803 views · Jun ’23
Question regarding Reality Composer Pro as 3D Reconstruction Tool
I saw in the WWDC23 session "Meet Object Capture for iOS" that the new tool released today alongside Xcode 15 beta 2, "Reality Composer Pro", will be capable of creating 3D models with Apple's PhotogrammetrySession. However, I do not see any such feature in the tool. Has anyone managed to find the 3D-model-creation feature shown in the session?
1 reply · 0 boosts · 1.1k views · Jun ’23
Utilizing Point Cloud data from `ObjectCaptureSession` in WWDC23
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. I have two questions about the newly updated APIs.

From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses Point Cloud data captured by the iPhone's LiDAR sensor. I want to know how to take the Point Cloud data captured by ObjectCaptureSession on iPhone and use it to create 3D models with PhotogrammetrySession on macOS. From the WWDC21 example code, I know that PhotogrammetrySession uses the depth map of each captured photo by embedding it in the HEIC image, and uses that data during 3D reconstruction on macOS. I would like to know whether the Point Cloud data is likewise embedded into the image for use during reconstruction, and if not, how else the Point Cloud data is supplied to the reconstruction.

My other question: I know that Point Cloud data is returned as a result of a `PhotogrammetrySession.Request`. Is this Point Cloud data the same set of data captured during the WWDC23 ObjectCaptureSession that is used to create `ObjectCapturePointCloudView`?

Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
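My current working assumption, sketched below, is that the point-cloud data lives in the session's checkpoint (snapshots) directory rather than inside the HEIC files, and that `PhotogrammetrySession.Configuration` on macOS 14 can be pointed at a copy of that directory via its `checkpointDirectory` property. Paths are placeholders; I have not confirmed this is the intended transfer mechanism.

```swift
import Foundation
import RealityKit

// Sketch (unconfirmed assumption): copy both the Images and Snapshots
// folders from the iOS capture to the Mac, then hand the snapshots to
// the reconstruction session as its checkpoint directory.
var configuration = PhotogrammetrySession.Configuration()
configuration.checkpointDirectory = URL(
    fileURLWithPath: "/path/to/Snapshots", isDirectory: true)

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/Images", isDirectory: true),
    configuration: configuration)
```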
5 replies · 0 boosts · 1.9k views · Jun ’23
PhotogrammetrySession Update from WWDC23?
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS. From the session "Meet Object Capture for iOS", I understand that the API now accepts Point Cloud data from the iPhone's LiDAR sensor to create 3D assets. However, I could not find anything in the official Apple documentation for RealityKit and Object Capture that explains how to utilize Point Cloud data when creating the session. I have two questions regarding this API.

The original example from the documentation explains how to utilize the depth map of a captured image by embedding the depth map into the HEIC image. This made me assume that PhotogrammetrySession also reads Point Cloud data embedded in the photo. Is this correct?

I would also like to use the photos (and Point Cloud data) captured on iOS in a PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a PointCloud request result. Will that output be the same as the point cloud captured on-device by ObjectCaptureSession?

Thanks everyone in advance; it's been a real pleasure working with the updated Object Capture APIs.
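For completeness, this is a sketch of the capture side as I currently have it, under the assumption that the point-cloud data is written to the session's checkpoint directory rather than into the photos themselves. Directory URLs are placeholders.

```swift
import Foundation
import RealityKit

// iOS capture sketch (assumption: point-cloud data ends up in the
// checkpoint directory, so it must be preserved alongside the images).
let session = ObjectCaptureSession()

var config = ObjectCaptureSession.Configuration()
config.checkpointDirectory = URL(
    fileURLWithPath: "/path/to/Snapshots", isDirectory: true)

// Photos are written to imagesDirectory; both folders would then be
// transferred to the Mac for reconstruction.
session.start(
    imagesDirectory: URL(fileURLWithPath: "/path/to/Images", isDirectory: true),
    configuration: config)
```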
0 replies · 0 boosts · 939 views · Jun ’23