I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session.
I have a few unanswered questions:
Where is the sample code available from?
Are the new Object Capture APIs on iOS limited to certain devices?
Can we capture images from the front-facing cameras?
Meet Object Capture for iOS
Posts under the wwdc2023-10191 tag, discussing the WWDC23 session "Meet Object Capture for iOS" (27 posts).
The video mentions that developers can download and work off of a sample app that implements Object Capture for iOS.
How can we download it?
Thanks!
With the code below, I added color and depth images from a RealityKit ARView and ran photogrammetry on an iOS device. The mesh looks fine, but its scale is quite different from the real-world scale.
let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// 😀 Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// 😀 Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depth_dir = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depth_dir, options: [.atomic])
print("depth saved")
I also tried this:
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)
let depthCIImage = CIImage(cvImageBuffer: depth,
                           options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}
Does anyone know why this happens and how to fix it?
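One alternative to juggling separate HEIC/TIFF files, offered here only as a sketch and not a verified fix for the scale problem, is to attach the depth buffer to a PhotogrammetrySample directly (the macOS samples API). The makeSample function and its index parameter are hypothetical; color and depth are the pixel buffers captured above.

import CoreVideo
import RealityKit

// Sketch: wrap a captured color frame and its scene-depth map in a
// PhotogrammetrySample instead of writing them to separate files.
// ARKit's sceneDepth map is Float32 values in meters, which is the data
// that gives reconstruction a chance to recover real-world scale.
func makeSample(index: Int, color: CVPixelBuffer, depth: CVPixelBuffer) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: index, image: color)
    sample.depthDataMap = depth
    return sample
}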
As the speaker mentions, the documentation contains source code for the sample app. But when I went there I just found the sample code from WWDC 2021. Is the code available yet?
Now that we have the Vision Pro, I really want to start using Apple's Object Capture API to transform real objects into 3D assets. I watched the latest Object Capture video from WWDC23 and noticed they were using a "sample app".
Does Apple provide this sample app to visionOS developers, or do we have to build our own iOS app?
Thanks and cheers!
If I make a custom point cloud, how can I send it to a PhotogrammetrySession? Is it saved separately to a directory, or is it saved into the HEIC image?
In iOS 17 beta 2, the photo selection doesn't have an option to enable/disable metadata. It was working in beta 1 but is not working in beta 2.
Any reason why?
Hi there,
Just wondering when the sample project will be available. I am having trouble getting anything good out of the snippets and want to see the workings of the full project.
Where/when can we get this?
With AVFoundation's builtInLiDARDepthCamera,
if I save photo.fileDataRepresentation() as HEIC, it only has Exif and TIFF metadata.
But RealityKit Object Capture's HEIC images have not only Exif and TIFF but also HEIC metadata, including camera calibration data.
What should I do so that the image exported from AVFoundation has the same metadata?
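For reference, a minimal sketch of the photo settings that ask AVFoundation to deliver and embed depth; this is an assumption, not a confirmed way to reproduce the Object Capture metadata, and whether it ends up matching the HEIC metadata that ObjectCaptureSession writes is exactly the open question here.

import AVFoundation

// Sketch: enable depth delivery on the output during session configuration,
// then ask each capture to embed the depth map in the delivered file.
func configureDepthDelivery(on output: AVCapturePhotoOutput) {
    if output.isDepthDataDeliverySupported {
        output.isDepthDataDeliveryEnabled = true
    }
}

func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    if output.isDepthDataDeliveryEnabled {
        settings.isDepthDataDeliveryEnabled = true
        // Embed the depth map in the file; the calibration for that depth
        // arrives on photo.depthData?.cameraCalibrationData.
        settings.embedsDepthDataInPhoto = true
    }
    return settings
}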
In WWDC 2021 it says: 'we also offer an interface for advanced workflows to provide a sequence of custom samples.
A PhotogrammetrySample includes the image plus other optional data such as a depth map, gravity vector, or custom segmentation mask.'
But in the code, PhotogrammetrySession is initialized with a directory of saved data.
How can I give PhotogrammetrySamples as input to a PhotogrammetrySession?
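As a sketch of what that advanced interface can look like in practice (macOS; the reconstruct function and its parameters are placeholders, not Apple's sample code), PhotogrammetrySession also accepts a Sequence of PhotogrammetrySample values as input:

import Foundation
import RealityKit

// Sketch (macOS): hand a sequence of PhotogrammetrySample values to the
// session instead of pointing it at a folder on disk. The samples would be
// built elsewhere from your own images, depth maps, masks, etc.
func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) async throws {
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, let result):
            print("finished: \(result)")
        case .requestError(_, let error):
            print("failed: \(error)")
        default:
            break
        }
    }
}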
When I install and run the sample app Apple released just recently, everything works fine up until I try to start the capture. Bounding box sets up without a problem, but then every time, this error occurs:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCapturePhotoOutput capturePhotoWithSettings:delegate:] You are not authorized to use custom shutter sounds'
*** First throw call stack:
(0x19d6e8300 0x195cd4f30 0x1b9bfdcb4 0x1cc4fbf98 0x1cc432964 0x19d6e8920 0x19d70552c 0x1cc4328f8 0x1cc4a8fac 0x19d6e8920 0x19d70552c 0x1cc4a8e44 0x23634923c 0x23637abfc 0x2362d21a4 0x2362d139c 0x236339874 0x23636dc04 0x1a67f9b74 0x1a68023ac 0x1a67fa964 0x1a67faa78 0x1a67fa5d0 0x1039c6b34 0x1039d80b4 0x1a6800188 0x1a67f94bc 0x1a67f9fd0 0x1a6800098 0x1a67f9504 0x23633777c 0x23637201c 0x2354d081c 0x2354c8658 0x1039c6b34 0x1039c9c20 0x1039e1078 0x1039dfacc 0x1039d6ebc 0x1039d6ba0 0x19d774e94 0x19d758594 0x19d75cda0 0x1df4c0224 0x19fbcd154 0x19fbccdb8 0x1a142f1a8 0x1a139df2c 0x1a1387c1c 0x102a5d944 0x102a5d9f4 0x1c030e4f8)
libc++abi: terminating due to uncaught exception of type NSException
I have no idea why this is happening, so any help would be appreciated. My iPad is running the latest iPadOS 17 beta, and the crash also occurs when it isn't connected to Xcode...
Is it possible to capture only manually (automatic off) with the Object Capture API?
And can I proceed to the capturing stage right away?
Only the Object Capture API captures objects at real-world scale.
Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR and creating a PhotogrammetrySample, but it doesn't produce a real-scale object.
I think that during Object Capture the API gathers a point cloud and intrinsic parameters, and that helps the mesh come out at real scale.
Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
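For context, a sketch of the standard ObjectCaptureSession flow on iOS 17 (directory URLs are placeholders, and the function is a hypothetical wrapper); I am not aware of a public switch that disables the automatic capture behaviour entirely:

import Foundation
import RealityKit

// Sketch of the documented ready -> detecting -> capturing progression.
// The on-device session stores images plus the depth/point-cloud data that
// lets reconstruction recover real-world scale.
@MainActor
func startGuidedCapture(imagesDirectory: URL, checkpointDirectory: URL) -> ObjectCaptureSession {
    let session = ObjectCaptureSession()

    var configuration = ObjectCaptureSession.Configuration()
    configuration.checkpointDirectory = checkpointDirectory

    session.start(imagesDirectory: imagesDirectory, configuration: configuration)

    // Later, typically driven by UI buttons:
    // session.startDetecting()   // place the bounding box
    // session.startCapturing()   // begin the automatic capture pass
    return session
}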
Hi. Each time I try to capture an object using the example from session https://developer.apple.com/videos/play/wwdc2023/10191, I get a crash. iPhone 14 Pro Max, iOS 17 beta 3, Xcode Version 15.0 beta 3 (15A5195k).
Log:
ObjectCaptureSession.: mobileSfM pose for the new camera shot is not consistent.
<<<< PlayerRemoteXPC >>>> fpr_deferPostNotificationToNotificationQueue signalled err=-12785 (kCMBaseObjectError_Invalidated) (item invalidated) at FigPlayer_RemoteXPC.m:829
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
Compiler failed with XPC_ERROR_CONNECTION_INTERRUPTED
MTLCompiler: Compilation failed with XPC_ERROR_CONNECTION_INTERRUPTED on 3 try
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:485: failed assertion `MPSLibrary::MPSKey_Create internal error: Unable to get MPS kernel NDArrayMatrixMultiplyNNA14_EdgeCase. Error: Compiler encountered an internal error
'
/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm, line 485: error ''
I am trying the demo code at https://developer.apple.com/documentation/realitykit/guided-capture-sample
macOS: 13.4.1 (22F82)
Xcode: 15 beta 4
iPadOS: 17.0 public beta
iPad: Pro 11-inch (2nd generation) (has LiDAR Scanner)
But I get a runtime error: "Thread 1: Fatal error: ObjectCaptureSession is not supported on this device!"
When running the code from the Object Capture session from WWDC 23, I'm currently getting the error "dyld[607]: Symbol not found: _$s21DeveloperToolsSupport15PreviewRegistryPAAE7previewAA0D0VvgZ
Referenced from: <411AA023-A110-33EA-B026-D0103BAE08B6> /private/var/containers/Bundle/Application/9E9526BF-C163-420D-B6E0-2DC9E02B3F7E/ObjectCapture.app/ObjectCapture
Expected in: <0BD6AC59-17BF-3B07-8C7F-6D9D25E0F3AD> /System/Library/Frameworks/DeveloperToolsSupport.framework/DeveloperToolsSupport"
Hi,
In the Scanning objects using Object Capture sample project, when the content view is dismissed, the AppDataModel is always retained and deinit is never called.
@StateObject var appModel: AppDataModel = AppDataModel.instance
I am presenting the contentView using a UIHostingController
let hostingController = UIHostingController(rootView: ContentView())
hostingController.modalPresentationStyle = .fullScreen
present(hostingController, animated: true)
I have tried manually detaching the listeners and setting the objectCaptureSession to nil.
In the debug memory graph there is a coaching overlay retaining the AppDataModel.
I want to remove the appModel from memory when the contentView is dismissed.
Any suggestions?
Is it possible for me to customize the ObjectCaptureView?
I'd like the turntable indicator that shows which photos have been captured as a point cloud to use a different foreground color.
That is, I want the white part under the point clouds to be some other color that I specify.
Would it be possible by extending ObjectCapturePointCloudView?
Value of type 'ObjectCaptureSession' has no member '$feedback'
Value of type 'ObjectCaptureSession' has no member '$state'
Any thoughts? The code is exactly as it came in the .zip.
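A guess at what these errors point to, offered only as a sketch: ObjectCaptureSession does not expose Combine-style $state / $feedback projected publishers, but it does expose async update streams (stateUpdates and feedbackUpdates) that you can iterate; the observe function below is a hypothetical wrapper.

import RealityKit

// Sketch: iterate the session's async update streams instead of
// projected-value publishers.
@MainActor
func observe(_ session: ObjectCaptureSession) {
    Task {
        for await state in session.stateUpdates {
            print("state changed: \(state)")
        }
    }
    Task {
        for await feedback in session.feedbackUpdates {
            print("feedback changed: \(feedback)")
        }
    }
}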
We have implemented all the recent additions Apple made for this on the iOS side for guided capture using LiDAR and image data via ObjectCaptureSession.
After the capture finishes, we send our images to PhotogrammetrySession on macOS to reconstruct models at higher quality (Medium) than the Preview quality that is currently supported on iOS.
We have now done a few side-by-side captures using the new ObjectCaptureSession versus the traditional capture via the AVFoundation framework, but have not seen any of the improvements that were claimed during the session Apple hosted at WWDC.
As a matter of fact, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the images we get from AVFoundation.
Are we missing something here? Is PhotogrammetrySession on macOS not using this new additional LiDAR data, or have the improvements been overstated? From the documentation it is not clear at all how the new LiDAR data gets stored and how that data transfers.
We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled with Xcode 15 beta 5.
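For reference, a minimal sketch of the macOS-side reconstruction step described above (paths, the function name, and the .medium detail level are placeholders); pointing checkpointDirectory at the checkpoint folder produced by the on-device ObjectCaptureSession is, to my understanding, how the extra LiDAR-derived data is meant to be reused, which is exactly the part in question.

import Foundation
import RealityKit

// Sketch: reconstruct on macOS from the folder of images transferred off the
// device, reusing the on-device session's checkpoint data.
func reconstruct(imagesFolder: URL, checkpointFolder: URL, outputModel: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.checkpointDirectory = checkpointFolder

    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

    for try await output in session.outputs {
        if case .requestComplete(_, let result) = output {
            print("done: \(result)")
        }
    }
}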
Apple's Object Capture sample code crashes while generating the 3D model when using more than 10 images. The code was running fine in Xcode beta 4 (and the corresponding iOS version). Since beta 5 I get these crashes. When scanning with exactly 10 images the process runs through fine.
Does anybody know a workaround for that?