Object Capture


Turn photos from your iPhone or iPad into high‑quality 3D models that are optimized for AR using the new Object Capture API on macOS Monterey.

Object Capture Documentation

Posts under Object Capture tag

66 Posts
Post not yet marked as solved
0 Replies
377 Views
I am trying to extract the 6DOF (six degrees of freedom) information from PhotogrammetrySession.Pose using the ObjectCaptureSession in iOS. The API documentation for PhotogrammetrySession.Pose says it supports iOS 17 and later. However, the GuidedCapture sample program contains the following:

case .modelEntity(_, _), .bounds, .poses, .pointCloud:
    // Not supported yet
    break

Does this mean it is currently impossible to get 6DOF information from PhotogrammetrySession.Pose? Or is there another way to achieve this? Any guidance would be greatly appreciated.
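For reference, below is a minimal sketch of requesting camera poses from a reconstruction session on the Mac, where the .poses request is documented as supported; the function name printEstimatedPoses and the imagesURL parameter are illustrative, not part of the sample app, and the exact members of the returned Poses value should be checked against the current documentation.

import Foundation
import RealityKit

// Hedged sketch: ask a PhotogrammetrySession for camera poses only.
// `imagesURL` points to a folder of captured images.
func printEstimatedPoses(imagesURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesURL)
    try session.process(requests: [.poses])
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .poses(let poses)):
            // Each pose carries the estimated 6DOF camera transform for one input image.
            print(poses)
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}

Whether the same request is honored by on-device reconstruction in the shipping iOS 17 releases is exactly what the sample's "Not supported yet" comment leaves open.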
Posted by ioridev. Last updated.
Post not yet marked as solved
2 Replies
974 Views
With AVFoundation's builtInLiDARDepthCamera, if I save photo.fileDataRepresentation() to HEIC, it only has EXIF and TIFF metadata. However, the HEIC images from RealityKit's Object Capture have not only EXIF and TIFF but also HEIC metadata, including camera calibration data. What should I do so that the image exported from AVFoundation has the same metadata?
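One thing worth double-checking (a sketch under the assumption that the missing metadata is the embedded depth and the calibration stored with it, rather than something Object Capture writes through a private path) is that depth delivery and depth embedding are enabled on the photo settings before the capture:

import AVFoundation

// Hedged sketch: enable depth delivery and ask AVFoundation to embed the depth data,
// which carries its camera calibration, into the HEIC container.
// `output` is an AVCapturePhotoOutput already attached to a session that uses the
// LiDAR depth camera.
func makeDepthEmbeddingSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    output.isDepthDataDeliveryEnabled = output.isDepthDataDeliverySupported

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true  // store depth as HEIC auxiliary data
    return settings
}

Whether this reproduces every field that Object Capture writes into its HEIC files is not confirmed here.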
Posted by HeoJin. Last updated.
Post not yet marked as solved
0 Replies
409 Views
This may be a trivial question, but I can't find a clear answer. What interface should an ICA (scanner) driver implement to communicate with the application, i.e. to receive an acquisition request with parameters and transfer the image(s)? Thanks.
Posted by Farquaad. Last updated.
Post not yet marked as solved
0 Replies
522 Views
Hey there, I am a beginner on iOS trying to find a way to capture/extract depth data from a captured image in my photo gallery. I have been using Xcode to achieve this, but I am new to Swift, so I am having trouble. I need the depth data from the image so that I can work with and manipulate it.
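If the photo in the gallery was saved with embedded depth (for example a Portrait or LiDAR capture), the auxiliary depth can be read back from the file. A minimal sketch, assuming you already have a file URL for the image; loadDepthData is an illustrative name:

import AVFoundation
import ImageIO

// Hedged sketch: read the embedded auxiliary depth from an image file and wrap it
// in an AVDepthData, whose depthDataMap pixel buffer can then be processed further.
func loadDepthData(from imageURL: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypeDepth) as? [AnyHashable: Any]
    else { return nil }
    return try? AVDepthData(fromDictionaryRepresentation: info)
}

Some photos store disparity rather than depth, so kCGImageAuxiliaryDataTypeDisparity may need to be tried as well; photos captured without depth delivery contain neither.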
Posted by Musadiq. Last updated.
Post not yet marked as solved
0 Replies
430 Views
My app NFC.cool is using the Object Capture API, and I fully developed the feature with an iPhone 13 Pro Max. On that phone everything works fine. Now I have a new iPhone 15 Pro Max and I get crashes when the photogrammetry session is at around 1%. This happens when I have completed all three scan passes. When I prematurely end a scan with around 10 images, the reconstruction runs fine and I get a 3D model.

com.apple.corephotogrammetry.tracking:0 (40): EXC_BAD_ACCESS (code=1, address=0x0)

Anyone else seeing these crashes?
Posted by NickYaw. Last updated.
Post not yet marked as solved
1 Reply
422 Views
I am building a mobile app that requires 3D scanning functionality. The app should be able to create an accurate scan of a small object using TrueDepth and convert that scan to the STL geometric mesh file format. The required functionality is:

- 3D scanning using the TrueDepth front-facing sensor on an iPad
- The user should be able to view the scan as it's being generated
- The scan should be processed to eliminate bumps and holes
- No texture or colors; the output should be a 3D mesh
- Scan resolution under 2 mm
- The user can view, rotate, and zoom in on the completed scan
- The user can save and load past scans
- Export to STL and PLY file formats

I have found some SDKs that offer this functionality, but they are too expensive. Is it possible to scan and export files with ARKit, RealityKit, or other Apple libraries?
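For the export step specifically, ModelIO can write STL and PLY without a third-party SDK. A small sketch, assuming you have already built an MDLMesh from whatever geometry your scanning pipeline produces; exportMesh is an illustrative name:

import Foundation
import ModelIO

// Hedged sketch: wrap a mesh in an MDLAsset and let ModelIO write it out.
// The URL's file extension ("stl" or "ply") selects the output format.
func exportMesh(_ mesh: MDLMesh, to url: URL) throws {
    let asset = MDLAsset()
    asset.add(mesh)
    precondition(MDLAsset.canExportFileExtension(url.pathExtension), "Unsupported export format")
    try asset.export(to: url)
}

Capturing and fusing the TrueDepth frames into a clean mesh at sub-2 mm resolution is the harder part and is not covered by this sketch.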
Posted by LBJ34. Last updated.
Post not yet marked as solved
0 Replies
291 Views
Hello! I have a question about using snapshots from the iOS 17 sample app on macOS 14. I exported the "Photos" and "Snapshots" folders captured on iOS and then wrote something like:

let checkpointDirectoryPath = "/path/to/the/Snapshots/"
let checkpointDirectoryURL = URL(fileURLWithPath: checkpointDirectoryPath, isDirectory: true)
if #available(macOS 14.0, *) {
    configuration.checkpointDirectory = checkpointDirectoryURL
} else {
    // Fallback on earlier versions
}

But I didn't notice any speed or performance improvement. It looks like the "Snapshots" folder was simply ignored. Please advise what I can do so that the "Snapshots" folder is taken into account during reconstruction.
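One common pitfall (this is a sketch under the assumption that the paths themselves are correct) is that the checkpoint directory must be set on the configuration that is passed into the session initializer; mutating a configuration after the session has been created has no effect. imagesURL and snapshotsURL are illustrative names for the two exported folders:

import Foundation
import RealityKit

// Hedged sketch: build the configuration first, then create the session with it.
func makeSession(imagesURL: URL, snapshotsURL: URL) throws -> PhotogrammetrySession {
    var configuration = PhotogrammetrySession.Configuration()
    if #available(macOS 14.0, *) {
        configuration.checkpointDirectory = snapshotsURL
    }
    return try PhotogrammetrySession(input: imagesURL, configuration: configuration)
}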
Posted. Last updated.
Post not yet marked as solved
0 Replies
371 Views
I want to create a 3D model with a PhotogrammetrySession. It works fine with an AVCaptureSession with depth data, but I want to capture a series of frames from ARKit with sceneDepth, which is of type ARDepthData. The depth data is being stored as TIFF, but I'm still getting the error "No Auxiliary Depth Data found" while running the photogrammetry session.

if let depthData = self.arView?.session.currentFrame?.sceneDepth?.depthMap {
    if let colorSpace = CGColorSpace(name: CGColorSpace.linearGray) {
        let depthImage = CIImage(cvImageBuffer: depthData,
                                 options: [.auxiliaryDisparity: true, .auxiliaryDepth: true])
        depthMapData = context.tiffRepresentation(of: depthImage,
                                                  format: .Lf,
                                                  colorSpace: colorSpace,
                                                  options: [.disparityImage: depthImage])
    }
}
if let image = self.arView?.session.currentFrame?.capturedImage {
    if let imageData = self.convertPixelBufferToHighQualityJPEG(pixelBuffer: image) {
        self.addCapture(Capture(id: photoId, photoData: imageData, photoPixelBuffer: image, depthData: depthMapData))
    }
}
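A depth map written to a separate TIFF file is probably not picked up as auxiliary depth by the folder-based input. A sketch of an alternative path, assuming the captured color and depth pixel buffers are kept in memory, is to hand the session PhotogrammetrySample values directly; the tuple below is an illustrative stand-in for the poster's own Capture type:

import CoreVideo
import RealityKit

// Hedged sketch: attach the ARKit depth map directly to each sample instead of
// writing it to a TIFF. sceneDepth.depthMap is already a DepthFloat32 pixel buffer.
func makeSamples(captures: [(id: Int, color: CVPixelBuffer, depth: CVPixelBuffer?)]) -> [PhotogrammetrySample] {
    captures.map { capture in
        var sample = PhotogrammetrySample(id: capture.id, image: capture.color)
        sample.depthDataMap = capture.depth
        return sample
    }
}

A session can then be created from the sample sequence via PhotogrammetrySession(input:configuration:); availability of that initializer on iOS should be verified before relying on it.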
Posted by Mukaddir. Last updated.
Post not yet marked as solved
1 Reply
476 Views
In ARKit, I took a few color CVPixelBuffers and depth CVPixelBuffers and ran a PhotogrammetrySession with PhotogrammetrySamples. In my service, precise real scale is important, so I tried to figure out what determines whether the created model comes out at real scale. I ran some experiments with the same number of images (10), the same object, the same shot angles, and distances to the object of 30 cm, 50 cm, and 100 cm. But even with these controlled variables, the output sometimes has real scale and sometimes does not. Since I can't look at the photogrammetry source code or how it works internally, I wonder what I am missing and how I can get real scale every time, if that's possible.
Posted by HeoJin. Last updated.
Post not yet marked as solved
0 Replies
407 Views
Has anybody noticed a pivot issue in models constructed through Object Capture? Ideally the pivot of the object should be the centre of its bounding box, but with the new macOS changes the pivot is now at (0, 0, 0), below the bounding box. Here is a quick comparison of old vs. new.
Posted by lanxinger. Last updated.
Post not yet marked as solved
1 Reply
435 Views
Hey there, I recently tried out the iOS 17 photogrammetry sample app. The results are very promising compared to the iOS 16 apps; the real-world scale retention works amazingly well. However, my use case involves keeping the camera still and rotating the object instead, which was an option in iOS 16 but unfortunately removed in iOS 17. I wonder if there's a way to do this in the iOS 17 app!
Posted. Last updated.
Post not yet marked as solved
0 Replies
501 Views
Hi, we are searching for a solution to create real-life-size 3D models using reflex cameras. We created a Mac app called Smart Capture that uses Object Capture to recreate 3D models from pictures. We used this project to digitize 5,000 archaeological findings from the Archaeological Park of Pompeii. We built a solid workflow using Orbitvu automated photography boxes, with 3 reflex cameras per box to speed up the capture process, which allowed us to get a 3D model in less than 10 minutes (2-3 minutes to capture and about 7-8 minutes to process on an M2 Max). The problem is that the resulting object has no size information, and we have to manually take measurements and resize the 3D model accordingly, introducing a manual step and a possible source of error into the workflow. I was wondering if it's possible, using the iOS 17 Object Capture APIs, to get point cloud data which I could add to the reflex camera pictures and process the whole package on the Mac to retrieve the size of the real object. As far as I understood, the only way to get this working before iOS 17 was to use depth information (I tried the Sample Capture project), but the problem is that we have to work with objects ranging from small to huge (from about 1 to 25 inches). Do you have any clue on how to achieve this?
Posted. Last updated.
Post not yet marked as solved
2 Replies
947 Views
Is it possible to capture only manually (automatic off) with the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API captures objects at real scale. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR or creating a PhotogrammetrySample, but neither produces a real-scale object. I think that during Object Capture the API gathers the point cloud and intrinsic parameters, and that helps the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
Posted by HeoJin. Last updated.
Post not yet marked as solved
6 Replies
2.5k Views
I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session. I have a few unanswered questions:

- Where is the sample code available from?
- Are the new Object Capture APIs on iOS limited to certain devices?
- Can we capture images from the front-facing cameras?
Posted. Last updated.
Post not yet marked as solved
0 Replies
406 Views
I used ObjectCaptureView with an ObjectCaptureSession in different setups, for example nested in a UIViewController so that I could deallocate the view and the session after switching to another view. If I then use an ARSession with ARWorldTracking and SceneUnderstanding, the app no longer shows the overlay mesh. Using SceneUnderstanding without previously opening the ObjectCaptureView works fine. Has anyone faced the same issue, or how could I report this to Apple? It seems like a problem with the ObjectCaptureView/Session itself. During the start of the ObjectCaptureSession there are also some log messages telling me: "Wasn't able to pop ARFrame and Cameraframe at the same time"; it is shown 10 or 15 times on every start. So I nested it in an ARSCNView, but that didn't fix it.
Posted. Last updated.
Post not yet marked as solved
9 Replies
1k Views
The Object Capture Apple sample code crashes while generating the 3D model when using more than 10 images. The code was running fine in Xcode beta 4 (and the corresponding iOS version). Since beta 5 I get these crashes. When scanning with exactly 10 images the process runs through fine. Does anybody know a workaround for that?
Posted by NickYaw. Last updated.
Post not yet marked as solved
3 Replies
847 Views
Hello, after installing the Xcode 15 beta and the sample project provided for Object Capture at WWDC23, I am getting the error below:

dyld[2006]: Symbol not found: _$s19_RealityKit_SwiftUI20ObjectCaptureSessionC7Combine010ObservableE0AAMc
Referenced from: <35FD44C0-6001-325E-9F2A-016AF906B269> /private/var/containers/Bundle/Application/776635FF-FDD4-4DE1-B710-FC5F27D70D4F/GuidedCapture.app/GuidedCapture
Expected in: <6A96F77C-1BEB-3925-B370-266184BF844F> /System/Library/Frameworks/_RealityKit_SwiftUI.framework/_RealityKit_SwiftUI

I am trying to run the sample project on an iPhone 12 Pro (iOS 17.0 (21A5291j)). Any help in solving this issue would be appreciated. Thank you.
Posted by igyehia. Last updated.
Post not yet marked as solved
0 Replies
332 Views
Hi! My team has been playing with Object Capture on Mac for a while, and now that we have got our hands on a MacBook Pro M2, we are starting to see differences between running the API on an M1 Mac and an M2 Mac. The major difference observed is that in the M2 outputs the object is correctly rotated, normal to the ground, while M1 outputs may have random rotations. This has been observed on the latest Ventura 13.5.1 on the following machines: MacBook Pro M2 Pro 32 GB, MacBook Air M1 8 GB, and Mac mini 8 GB. We have tested using both the PhotoCatch application and the HelloPhotogrammetry example (even using the same compiled binary). We are sure we use the same options and the same files on both sides. Is this expected behavior? We would appreciate having the same rotation on both sides. Regards
Posted by Red13. Last updated.