Hi. Every time I try to capture an object using the example from session https://developer.apple.com/videos/play/wwdc2023/10191, I get a crash. iPhone 14 Pro Max, iOS 17 beta 3, Xcode Version 15.0 beta 3 (15A5195k).
Log:
ObjectCaptureSession.: mobileSfM pose for the new camera shot is not consistent.
<<<< PlayerRemoteXPC >>>> fpr_deferPostNotificationToNotificationQueue signalled err=-12785 (kCMBaseObjectError_Invalidated) (item invalidated) at FigPlayer_RemoteXPC.m:829
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:485: failed assertion `MPSLibrary::MPSKey_Create internal error: Unable to get MPS kernel NDArrayMatrixMultiplyNNA14_EdgeCase. Error: Compiler encountered an internal error'
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm, line 485: error ''
Meet Object Capture for iOS
Discuss the WWDC23 Session Meet Object Capture for iOS
Posts under wwdc2023-10191 tag
31 Posts
The video mentions that developers can download and work off of a sample app that implements Object Capture for iOS.
How can we download it?
Thanks!
In the WWDC 2021 session, it says: 'we also offer an interface for advanced workflows to provide a sequence of custom samples.
A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.'
But in the sample code, PhotogrammetrySession is initialized with a directory of saved data.
How can I give a sequence of PhotogrammetrySamples as input to a PhotogrammetrySession?
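For anyone else hitting this: on macOS, PhotogrammetrySession has an initializer that takes a Sequence of PhotogrammetrySample instead of a folder URL. A minimal sketch, assuming you already have pixel buffers from your own capture pipeline (the `makeSamples` helper and `capturedBuffers` names are mine, not from Apple's sample):

```swift
import RealityKit
import CoreVideo

// Hypothetical helper: wrap pixel buffers from your own capture
// pipeline into PhotogrammetrySamples. Each sample needs a unique id.
func makeSamples(from buffers: [CVPixelBuffer]) -> [PhotogrammetrySample] {
    buffers.enumerated().map { index, buffer in
        var sample = PhotogrammetrySample(id: index, image: buffer)
        // Optional extras you can attach per sample:
        // sample.depthDataMap = ...       // depth map
        // sample.gravity = ...            // gravity vector
        // sample.objectMask = ...         // custom segmentation mask
        return sample
    }
}

// Feed the sample sequence (instead of a folder URL) to the session:
// let session = try PhotogrammetrySession(
//     input: makeSamples(from: capturedBuffers),
//     configuration: PhotogrammetrySession.Configuration())
```

The session construction itself is left commented out because it requires real capture data; the point is that the `input:` parameter accepts any Sequence whose elements are PhotogrammetrySample.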
As the speaker mentions, the documentation contains source code for the sample app. But when I went there, I only found the sample code from WWDC 2021. Is the new code available yet?
Hi there,
Just wondering when the sample project will be available. I am having trouble getting anything good out of the snippets and want to see the workings of the full project.
Where/when can we get it?
In iOS 17 beta 2, the photo selection doesn't have an option to enable/disable metadata. It was working in beta 1 but not in beta 2.
Any reason why?
If I make a custom point cloud, how can I send it to the PhotogrammetrySession? Is it saved separately to a directory, or is it saved into the HEIC image?
I saw in the WWDC23 session "Meet Object Capture for iOS" that the new tool released today along with Xcode 15 beta 2, called "Reality Composer Pro", will be capable of creating 3D models with Apple's PhotogrammetrySession. However, I do not see any such feature in the tool. Has anyone managed to find the feature for creating 3D models as shown in the session?
Is there a way to access the dimensions of the bounding box that is displayed around the object in the ObjectCaptureView?
I don't get a bounding box on my screen when I start to scan, and get the following error message when my laptop tries to process the files:
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
From the session "Meet Object Capture for iOS", I understand that the API now accepts point cloud data from the iPhone's LiDAR sensor to create 3D assets. However, I was not able to find anything in the official Apple documentation on RealityKit and Object Capture that explains how to utilize point cloud data in the session.
I have two questions regarding this API.
The original example from the documentation explains how to utilize the depth map from a captured image by embedding it in the HEIC image. This made me assume that PhotogrammetrySession also uses point cloud data embedded in the photo. Is this correct?
I would also like to use the photos captured on iOS (and the point cloud data) in a PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a point cloud request result. Will that output be the same as the point cloud captured on-device by the ObjectCaptureSession?
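In case it helps the discussion: on macOS 14 the session can be asked for both a model file and a point cloud in one pass. A sketch, assuming the iOS-captured images were transferred into `imagesFolder` (the function name and URLs are placeholders, not Apple sample code):

```swift
import Foundation
import RealityKit

// Sketch: reconstruct a full-detail model from iOS-captured images and
// also request the reconstructed point cloud (macOS 14+).
func reconstruct(imagesFolder: URL, outputModel: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [
        .modelFile(url: outputModel, detail: .full),
        .pointCloud  // yields a PhotogrammetrySession.PointCloud result
    ])
    for try await output in session.outputs {
        switch output {
        case .requestComplete(let request, let result):
            if case .pointCloud = result {
                print("Point cloud result received for \(request)")
            }
        case .processingComplete:
            return
        default:
            break
        }
    }
}
```

Whether this reconstructed point cloud matches the one ObjectCaptureSession gathers on-device is exactly the open question here; I have not seen Apple documentation confirming they are identical.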
Thanks everyone in advance and it's been a real pleasure working with the updated Object Capture APIs.