Object Capture


Turn photos from your iPhone or iPad into high‑quality 3D models that are optimized for AR using the new Object Capture API on macOS Monterey.

Posts under Object Capture tag

34 Posts

Object Capture API Limitation Concerns
Hello, I'm currently building an app that uses the on-device Object Capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet:
1) Can on-device Object Capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models; if there is an option to disable it, where would one specify that in code?
2) Can models be exported to .obj instead of .usdz? The WWDC21 session mentions (around 3:00) that this is possible with the Apple Silicon API, but what about with on-device scanning?
I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
Replies: 2 · Boosts: 0 · Views: 694 · Mar ’24
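
A minimal, iOS-only sketch of the checks this post asks about, assuming RealityKit's documented ObjectCaptureSession.isSupported flag and the modelFile request; the folder paths are hypothetical placeholders, and whether a directory URL yields OBJ on iOS (as it does on macOS) is exactly the open question:

import RealityKit

@available(iOS 17.0, *)
func checkSupportAndReconstruct() throws {
    // On-device capture requires hardware support; on devices without
    // LiDAR this currently reports false, so check before showing any
    // capture UI.
    guard ObjectCaptureSession.isSupported else {
        print("On-device Object Capture is not supported on this device")
        return
    }

    // Reconstruction runs on a folder of captured images.
    let images = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true) // hypothetical
    let session = try PhotogrammetrySession(input: images)

    // Passing a directory URL (rather than a .usdz file URL) is the
    // documented way to request OBJ output on macOS.
    let outputFolder = URL(fileURLWithPath: "/path/to/Output/", isDirectory: true) // hypothetical
    try session.process(requests: [.modelFile(url: outputFolder)])
}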
Photogrammetry CLI Tool Error
Hi all, I took a batch of photos using Apple's 'Capture Sample' iOS app. Even though all of the images are in .HEIC/HEIF format, the CLI tool logs the following errors, and I couldn't find any solution:
1) HEIF file is expected.
2) *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
Replies: 1 · Boosts: 1 · Views: 614 · Mar ’24
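
One way to track down which input trips the "HEIF file is expected" assertion is to print the image type that ImageIO detects for every file in the capture folder. A minimal sketch, with a hypothetical folder path:

import Foundation
import ImageIO

// Any file whose detected type is not public.heic/public.heif is a
// candidate for the CLI tool's complaint.
let folder = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true) // hypothetical
let files = (try? FileManager.default.contentsOfDirectory(at: folder, includingPropertiesForKeys: nil)) ?? []
for file in files {
    guard let source = CGImageSourceCreateWithURL(file as CFURL, nil),
          let type = CGImageSourceGetType(source) else {
        print("\(file.lastPathComponent): unreadable as an image")
        continue
    }
    print("\(file.lastPathComponent): \(type)")
}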
Seeking Guidance on Extracting Point Cloud and Facial Measurements from Object Capture Scans
Hello Apple community, I am currently working with Object Capture and would appreciate some guidance on extracting specific data from the scans. I have successfully scanned objects, but I am now looking to obtain the point cloud and facial measurements from these scans. I have used https://developer.apple.com/documentation/RealityKit/guided-capture-sample as a reference for implementation.
Point cloud: How can I extract the point cloud data from my Object Capture scans? Are there any specific tools or methods recommended for this purpose?
Facial measurements: Is there a way to extract facial measurements accurately using Object Capture? Are there any built-in features or third-party tools that can assist with this?
I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated! Thank you in advance.
Replies: 0 · Boosts: 0 · Views: 503 · Feb ’24
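
On the point-cloud half of this question: a sketch, assuming the .pointCloud request that PhotogrammetrySession gained in iOS 17/macOS 14. The input folder is a hypothetical placeholder, and facial measurements have no dedicated Object Capture API, so only the point cloud side is shown:

import RealityKit

@available(iOS 17.0, macOS 14.0, *)
func extractPointCloud() async throws {
    let input = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true) // hypothetical
    let session = try PhotogrammetrySession(input: input)
    try session.process(requests: [.pointCloud])

    // The session reports results on its async output stream.
    for try await output in session.outputs {
        if case let .requestComplete(_, .pointCloud(cloud)) = output {
            // The payload carries the reconstructed points; inspect or
            // serialize it from here.
            print("Received point cloud: \(cloud)")
        }
    }
}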
Photogrammetry on iOS 17.0+ with TrueDepth
Hello, I came across the Object Capture for iOS example from WWDC23, which utilizes the LiDAR sensor. However, I'm interested in using the TrueDepth camera system instead. What I have tried is saving depth photos (.HEIC) to the Images/ folder (by modifying the example below), in the hope that the Photogrammetry session will use them. But I haven't been successful so far in starting the 3D reconstruction. Could there be something I've missed, or is the Object Capture sample code exclusively designed for LiDAR? Or maybe .HEIC is not the right format to use? Thank you for your assistance.

import AVFoundation
import UIKit

class DepthPhotoCapture: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()
    let captureSession = AVCaptureSession()

    override init() {
        super.init()
        setupCaptureSession()
    }

    func setupCaptureSession() {
        // Get the front camera (TrueDepth camera)
        guard let frontCamera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front) else {
            print("Unable to access front camera!")
            return
        }
        do {
            // Create an input object from the camera and add it to the capture session
            let input = try AVCaptureDeviceInput(device: frontCamera)
            captureSession.addInput(input)
        } catch {
            print("Unable to create AVCaptureDeviceInput: \(error)")
        }
        // Add the photo output to the capture session first: depth data
        // delivery support is only reported once the output is attached.
        captureSession.addOutput(photoOutput)
        if photoOutput.isDepthDataDeliverySupported {
            // Enable depth data capture
            photoOutput.isDepthDataDeliveryEnabled = true
        }
        // Start the capture session (blocking; call off the main thread in production code)
        captureSession.startRunning()
    }

    func captureDepthPhoto() {
        // Request HEVC/HEIC output with depth data attached
        let photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        // Capture a photo with depth data
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    // AVCapturePhotoCaptureDelegate callback: write the captured photo to disk
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let imageData = photo.fileDataRepresentation() else {
            print("Error while generating image from photo capture data.")
            return
        }
        // Build a unique file URL inside Documents/Images/
        let documentsDirectory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
        let imagesDirectory = documentsDirectory.appendingPathComponent("Images/")
        let fileURL = imagesDirectory.appendingPathComponent(UUID().uuidString).appendingPathExtension("heic")
        do {
            // Make sure the Images/ folder exists before writing into it
            try FileManager.default.createDirectory(at: imagesDirectory, withIntermediateDirectories: true)
            try imageData.write(to: fileURL)
            print("Saved photo with depth data to \(fileURL)")
        } catch {
            print("Failed to write the image data to disk: \(error)")
        }
    }
}
Replies: 1 · Boosts: 1 · Views: 689 · Jan ’24
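
For reference, once the images are on disk, the reconstruction side would be driven separately from the capture code above. A sketch of feeding the saved folder to PhotogrammetrySession; whether TrueDepth-derived HEICs carry depth that the session will actually use is the open question in this post:

import RealityKit

@available(iOS 17.0, macOS 12.0, *)
func reconstruct(from imagesFolder: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [
        .modelFile(url: imagesFolder.appendingPathComponent("model.usdz"))
    ])
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished")
        case let .requestError(_, error):
            // If reconstruction never starts, the reason usually
            // surfaces here.
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}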
iOS 17: Cannot find type 'PhotogrammetrySession' in scope (Object Capture)
Dear all, I'm building a SwiftUI-based frontend for the RealityKit Object Capture feature, using the iOS+macOS (native, not Catalyst) template on Xcode 15.1. When compiling for macOS 14.2, everything works as expected. When compiling for iPadOS 17.2, I receive the compiler error "Cannot find type 'PhotogrammetrySession' in scope". As the minimum deployment target, I selected iOS 17 and macOS 14 (see attachment). An example of the lines of code causing errors is

import RealityKit

typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity // error: Cannot find type PhotogrammetrySession in scope
typealias LevelOfDetail = PhotogrammetrySession.Request.Detail // error: Cannot find type PhotogrammetrySession in scope

I made sure to wrap code that uses features unavailable on iOS (e.g. the .raw LOD setting) in #available checks. Is this an issue with Xcode, or am I missing some compatibility check? The SDK clearly says

/// Manages the creation of a 3D model from a set of images.
///
/// For more information on using ``PhotogrammetrySession``, see
/// <doc://com.apple.documentation/documentation/realitykit/creating-3d-objects-from-photographs>.
@available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
@available(visionOS, unavailable)
public class PhotogrammetrySession { /* ... */ }

Thanks for any help with this :) And greetings from Köln, Germany
~ Alex
Replies: 2 · Boosts: 0 · Views: 900 · Jan ’24
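
One hedged observation on this error: PhotogrammetrySession is not available in the iOS simulator, so building an iOS destination against a simulator SDK can produce exactly this "cannot find type in scope" message even when the deployment targets are correct. A sketch of the usual guard, assuming that is the cause here:

#if !targetEnvironment(simulator)
import RealityKit

// Only compiled for device (and macOS) builds, where the symbol exists.
@available(iOS 17.0, macOS 12.0, *)
typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity

@available(iOS 17.0, macOS 12.0, *)
typealias LevelOfDetail = PhotogrammetrySession.Request.Detail
#endif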
ObjectCaptureView presents a memory leak
I have an app which shows a screen using ObjectCaptureView, and each time the view appears and disappears, memory grows by around 400-500 MB. After checking the memory graph, I found that the growth was related to the SwiftUI view, which creates the ARKit view under the hood. To confirm that ObjectCaptureView was the source of the leak, I commented out only the line that creates the ObjectCaptureView, keeping the rest of the logic that handles the session state, feedback, etc.
Replies: 1 · Boosts: 1 · Views: 647 · Dec ’23
How to replicate ObjectCaptureSession's boundary restriction?
Hello, I want to use Apple's PhotogrammetrySession to scan a window. However, ObjectCaptureSession seems to be a monotasker and won't allow capture to occur with anything but a small object on a flat surface. So, I need to manually feed data into PhotogrammetrySession. But when I do, it focuses way too much on the scene behind the window, sacrificing detail on the window itself. Is there a way for me to either coax ObjectCaptureSession into capturing an area on the wall, or for me to restrict PhotogrammetrySession's target bounding box manually? How does ObjectCaptureSession communicate the limited bounding box to PhotogrammetrySession? Thanks, Sebastian
Replies: 1 · Boosts: 0 · Views: 601 · Dec ’23
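
On the second half of this question: PhotogrammetrySession can at least report the bounding box it intends to use, which is the natural starting point for restricting it. A sketch, assuming the documented .bounds request; how to build a Geometry value from an adjusted box should be checked against the current SDK rather than taken from here:

import RealityKit

@available(iOS 17.0, macOS 12.0, *)
func estimateBounds(session: PhotogrammetrySession) async throws {
    try session.process(requests: [.bounds])
    for try await output in session.outputs {
        if case let .requestComplete(_, .bounds(box)) = output {
            print("Estimated reconstruction bounds: \(box)")
            // Shrink or shift `box` here, then issue a follow-up
            // .modelFile(url:detail:geometry:) request with geometry
            // derived from the adjusted box.
        }
    }
}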
Object Capture: Exporting model as OBJ.
According to https://developer.apple.com/documentation/realitykit/photogrammetrysession/request/modelfile(url:detail:geometry:), if the given path is a file URL with a .usdz extension, the model is saved as .usdz; if we provide a folder instead, it should be saved as OBJ. I tried this, but with no luck. Right before saving, the app shows the folder that will be written to, but after I click Done and check the folder, it is always empty.
Replies: 2 · Boosts: 2 · Views: 927 · Nov ’23
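
When the output folder ends up empty with no visible error, the session's output stream usually says why. A diagnostic sketch with hypothetical paths:

import RealityKit

@available(iOS 17.0, macOS 12.0, *)
func exportOBJ() async throws {
    let input = URL(fileURLWithPath: "/path/to/Images/", isDirectory: true) // hypothetical
    let session = try PhotogrammetrySession(input: input)

    // A directory URL (no .usdz extension) requests OBJ output per the
    // modelFile(url:detail:geometry:) documentation.
    let outDir = URL(fileURLWithPath: "/path/to/OBJOutput/", isDirectory: true) // hypothetical
    try session.process(requests: [.modelFile(url: outDir)])

    for try await output in session.outputs {
        switch output {
        case let .requestError(request, error):
            // An empty folder most often means the failure surfaced
            // here rather than in the UI.
            print("\(request) failed: \(error)")
        case let .requestComplete(_, .modelFile(url: url)):
            print("Model written to \(url)")
        case .processingComplete:
            return
        default:
            break
        }
    }
}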