Posts

Post not yet marked as solved
2 Replies
I tried to initialize the session with the code below.

```swift
// MARK: - Initialize Camera
private func initializeCamera() {
    print("Initialize Camera")

    // LiDAR depth camera on the back of the device.
    currentCamera = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .depthData, position: .back)

    currentSession = AVCaptureSession()
    currentSession.sessionPreset = .photo

    do {
        let cameraInput = try AVCaptureDeviceInput(device: currentCamera)
        currentSession.addInput(cameraInput)
    } catch {
        fatalError()
    }

    // Video frames as 32-bit BGRA.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: currentDataOutputQueue)
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    currentSession.addOutput(videoOutput)

    // Streaming depth data, with smoothing enabled.
    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.setDelegate(self, callbackQueue: currentDataOutputQueue)
    depthOutput.isFilteringEnabled = true
    currentSession.addOutput(depthOutput)

    // Still photo capture with depth data delivery.
    currentPhotoOutput = AVCapturePhotoOutput()
    currentSession.addOutput(currentPhotoOutput)
    currentPhotoOutput.isDepthDataDeliveryEnabled = true
}
```
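For reference, a capture call that matches this setup could look like the minimal sketch below; it assumes `self` also conforms to AVCapturePhotoCaptureDelegate and that the session has already been started.

```swift
// A minimal sketch of a matching capture call, assuming `self` also conforms to
// AVCapturePhotoCaptureDelegate and the session is already running.
func capturePhotoWithDepth() {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    // Only request depth if the configured photo output actually supports it.
    settings.isDepthDataDeliveryEnabled = currentPhotoOutput.isDepthDataDeliverySupported
    currentPhotoOutput.capturePhoto(with: settings, delegate: self)
}
```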
Post marked as solved
2 Replies
Without using RealityKit's Object Capture API, how can I manually add that data when capturing images through AVFoundation or ARKit?
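Assuming "that data" means the per-image depth map (and extras such as the gravity vector, which can be read from CMMotionManager's deviceMotion at capture time), one possible approach is to write the depth out next to the photo in the AVCapturePhotoCaptureDelegate callback. A minimal sketch, where `imageURL` and `depthURL` are hypothetical destinations and the DepthFloat32 conversion is an assumption about what the later reconstruction step expects:

```swift
import AVFoundation
import CoreImage

// A minimal sketch, assuming `photo` comes from an AVCapturePhotoCaptureDelegate
// callback with depth delivery enabled; `imageURL` and `depthURL` are hypothetical.
func save(photo: AVCapturePhoto, imageURL: URL, depthURL: URL) throws {
    // Image: HEIC container straight from the capture.
    if let imageData = photo.fileDataRepresentation() {
        try imageData.write(to: imageURL)
    }

    // Depth: convert to DepthFloat32 (meters) and write a single-channel float TIFF.
    if let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
        let map = CIImage(cvPixelBuffer: depth.depthDataMap)
        try CIContext().writeTIFFRepresentation(of: map,
                                                to: depthURL,
                                                format: .Lf,
                                                colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)
    }
}
```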
Post not yet marked as solved
5 Replies
How did you pass a PhotogrammetrySample to PhotogrammetrySession? As far as I know, PhotogrammetrySession only takes a directory of saved images.
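Besides the folder-based initializer, RealityKit also exposes a sequence-based initializer that takes PhotogrammetrySample values directly (macOS 12+). A minimal sketch, assuming the images (and optional depth maps) are already available as CVPixelBuffers:

```swift
import CoreVideo
import RealityKit

// A minimal sketch of the sequence-based initializer; property names are from
// RealityKit's PhotogrammetrySample on macOS 12+.
func makeSession(images: [CVPixelBuffer], depths: [CVPixelBuffer?]) throws -> PhotogrammetrySession {
    var samples = [PhotogrammetrySample]()
    for (index, image) in images.enumerated() {
        var sample = PhotogrammetrySample(id: index, image: image)
        sample.depthDataMap = depths[index]   // optional per-sample depth
        samples.append(sample)
    }
    return try PhotogrammetrySession(input: samples,
                                     configuration: PhotogrammetrySession.Configuration())
}
```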
Post not yet marked as solved
1 Replies
I'm asking because the new Object Capture session's output is quite accurate relative to real-world scale. But with AVFoundation, even though I've saved the photo as HEIC and the depth as TIFF, the size of the reconstructed model differs from the real size.
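One thing that may be worth checking is whether the saved depth is actually absolute metric depth rather than relative disparity. A quick diagnostic sketch, assuming `depthData` is the AVDepthData that was written to the TIFF:

```swift
// Rough checks on the depth that backs the TIFF; .absolute accuracy and the
// DepthFloat32 type indicate depth in meters, and the intrinsics affect scale too.
print(depthData.depthDataAccuracy == .absolute)
print(depthData.depthDataType == kCVPixelFormatType_DepthFloat32)
print(depthData.cameraCalibrationData?.intrinsicMatrix as Any)
```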
Post not yet marked as solved
23 Replies
Hi, brandon. I'm not really sure why your .ply export code keeps crashing. 😭 Do you have a GitHub repository for this?
Post not yet marked as solved
23 Replies
Amazing! You have saved me 😁
Post not yet marked as solved
23 Replies
Hi, brandonK212. I have quite a similar problem to yours, but because I'm a noob in Metal, I couldn't figure this out. Could you email me? (h890819j@gmail.com) Or could you give me any small advice on how to get the points' world-space positions into an array?
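A minimal sketch of one way to do this, assuming the points live in an MTLBuffer of per-particle structs (as in Apple's scene-depth point-cloud sample); the `PointUniforms` layout below is hypothetical and has to mirror whatever struct the shader actually writes:

```swift
import Metal
import simd

// Hypothetical mirror of the per-particle struct the Metal shader writes;
// the field layout must match the real shader-side definition.
struct PointUniforms {
    var position: SIMD3<Float>   // world-space position
    var color: SIMD3<Float>
    var confidence: Float
}

// Copy the world-space positions out of the GPU buffer into a Swift array.
func worldPositions(from buffer: MTLBuffer, count: Int) -> [SIMD3<Float>] {
    let points = buffer.contents().bindMemory(to: PointUniforms.self, capacity: count)
    return (0..<count).map { points[$0].position }
}
```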
Post marked as solved
4 Replies
Thanks for your reply. I'm not really familiar with the concept of unprojecting image pixels into 3D space and coloring them. Could you explain a little more?
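In case a concrete example helps: unprojection inverts the pinhole projection, turning a pixel (u, v) with depth d (in meters) into a camera-space 3D point using the camera intrinsics; the point's color is simply the RGB value at that same pixel. A minimal sketch, assuming the intrinsics come from something like ARFrame.camera.intrinsics or AVCameraCalibrationData.intrinsicMatrix:

```swift
import simd

// Unproject one pixel into camera space:
// X = d * (u - cx) / fx,  Y = d * (v - cy) / fy,  Z = d.
// `intrinsics` is the 3x3 camera matrix, column-major as ARKit/AVFoundation provide it.
func unproject(u: Float, v: Float, depth: Float, intrinsics: simd_float3x3) -> SIMD3<Float> {
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]
    return SIMD3<Float>((u - cx) / fx * depth,
                        (v - cy) / fy * depth,
                        depth)
}
```

Multiplying that camera-space point by the camera's world transform (e.g. ARFrame.camera.transform) then gives the world-space position.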
Post not yet marked as solved
4 Replies
Hey, I recently tried to use your code, but an error occurs: there's no UIImage.pixelData(). How can I fix this?
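pixelData() is not a UIKit API, so it was presumably a custom helper in the original code. A minimal sketch of one plausible implementation, under the assumption that the rest of the code expects 4 bytes per pixel (RGBA8, row-major):

```swift
import UIKit

extension UIImage {
    // Hypothetical replacement for the missing pixelData() helper:
    // renders the image into an RGBA8 buffer and returns the raw bytes.
    func pixelData() -> [UInt8]? {
        guard let cgImage = cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height
        var pixels = [UInt8](repeating: 0, count: width * height * 4)
        let drawn: Bool = pixels.withUnsafeMutableBytes { buffer in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: CGColorSpaceCreateDeviceRGB(),
                                          bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
                return false
            }
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return drawn ? pixels : nil
    }
}
```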