Hey, I recently tried to use your code, but I get an error: there's no UIImage.pixelData(). How can I fix this?
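In case it helps to show what I was expecting, here is the rough stand-in extension I wrote myself. It's only a sketch that renders the image into a CGContext and returns RGBA bytes; the original pixelData() may have produced a different layout.

import UIKit

extension UIImage {
    /// Returns the image's pixels as RGBA bytes (4 bytes per pixel), or nil on failure.
    /// Sketch only; I assumed pixelData() was meant to return an RGBA byte array.
    func pixelData() -> [UInt8]? {
        guard let cgImage = self.cgImage else { return nil }
        let width = cgImage.width
        let height = cgImage.height
        var data = [UInt8](repeating: 0, count: width * height * 4)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

        let success = data.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: colorSpace,
                                          bitmapInfo: bitmapInfo) else { return false }
            // Draw the image into the context so the byte buffer is filled with its pixels.
            context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
            return true
        }
        return success ? data : nil
    }
}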
Thanks for your reply.
I'm not really familiar with the concept of unprojecting image pixels into 3D space and coloring them.
Could you explain a little more?
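This is as far as my understanding goes so far: take each pixel's depth, push it back through the camera intrinsics into camera space, then move it into world space with the camera pose, and finally color it with the matching RGB pixel. Here is a sketch of that idea (my own function name; it assumes ARKit's intrinsics and camera transform, and the usual y/z flip between image space and ARKit camera space):

import ARKit
import simd

/// Unproject one pixel (u, v) with its depth in meters into world space.
/// Sketch of my current understanding, not the code discussed above.
func unproject(u: Float, v: Float, depth: Float, frame: ARFrame) -> SIMD3<Float> {
    let intrinsics = frame.camera.intrinsics            // 3x3 K matrix
    let fx = intrinsics[0][0], fy = intrinsics[1][1]
    let cx = intrinsics[2][0], cy = intrinsics[2][1]

    // Pixel -> camera-space point at the measured depth.
    let x = (u - cx) * depth / fx
    let y = (v - cy) * depth / fy
    // ARKit camera space is +x right, +y up, -z forward, so flip y and z.
    let localPoint = SIMD4<Float>(x, -y, -depth, 1)

    // Camera space -> world space using the camera's pose.
    let worldPoint = frame.camera.transform * localPoint
    return SIMD3<Float>(worldPoint.x, worldPoint.y, worldPoint.z)
}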
Hi, brandonK212.
I have a very similar problem to yours, but since I'm a noob in Metal, I couldn't figure it out.
Could you email me? (h890819j@gmail.com)
Or could you give me some small advice on how to get the world-space points into an array?
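In the meantime, this is the direction I've been experimenting with for collecting the points into an array. It's only a sketch: it assumes a Float32 depth buffer from ARFrame.sceneDepth and an unproject(u:v:depth:frame:) helper like the one I sketched in my earlier reply.

import ARKit

/// Walk the scene-depth buffer and collect one world-space point per valid pixel.
func collectWorldPoints(from frame: ARFrame) -> [SIMD3<Float>] {
    guard let depthMap = frame.sceneDepth?.depthMap else { return [] }

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    // The depth map is smaller than the camera image, so scale pixel
    // coordinates up to the resolution the intrinsics were computed for.
    let scaleX = Float(CVPixelBufferGetWidth(frame.capturedImage)) / Float(width)
    let scaleY = Float(CVPixelBufferGetHeight(frame.capturedImage)) / Float(height)

    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)

    for y in 0..<height {
        // kCVPixelFormatType_DepthFloat32: one Float32 per pixel, in meters.
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let depth = row[x]
            guard depth.isFinite, depth > 0 else { continue }   // skip invalid samples
            points.append(unproject(u: Float(x) * scaleX,
                                    v: Float(y) * scaleY,
                                    depth: depth,
                                    frame: frame))
        }
    }
    return points
}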
Amazing! You have saved me 😁
Hi, brandon.
I'm not really sure why your .ply export code keeps crashing. 😭
Do you have a GitHub repository for this?
I'm asking because the new Object Capture session's output is quite accurate in scale against the real world, but with AVFoundation, even though I save the photo as HEIC and the depth as TIFF, the size of the reconstructed model differs from the real size.
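For reference, this is the minimal ASCII .ply writer I've been testing with on my side. It's just a sketch of my own code, not the export code discussed above, and it assumes the points come with matching 0-255 RGB colors.

import Foundation
import simd

/// Write a colored point cloud as an ASCII .ply file.
func writePLY(points: [SIMD3<Float>], colors: [SIMD3<UInt8>], to url: URL) throws {
    // Standard ASCII PLY header for xyz + uchar rgb vertices.
    var text = """
    ply
    format ascii 1.0
    element vertex \(points.count)
    property float x
    property float y
    property float z
    property uchar red
    property uchar green
    property uchar blue
    end_header

    """
    for (p, c) in zip(points, colors) {
        text += "\(p.x) \(p.y) \(p.z) \(c.x) \(c.y) \(c.z)\n"
    }
    try text.write(to: url, atomically: true, encoding: .ascii)
}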
How did you put a PhotogrammetrySample into a PhotogrammetrySession? As far as I know, PhotogrammetrySession only accepts a directory of saved images.
Without using RealityKit's Object Capture API, how can I manually add that data when capturing images through AVFoundation or ARKit?
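For context, this is roughly what I'm imagining: building PhotogrammetrySample values from my own captures and handing them to the sequence-based PhotogrammetrySession initializer. It's only a sketch; capturedImages and capturedDepthMaps stand in for buffers I saved myself, and I haven't verified that this path behaves the same as the folder-based input.

import RealityKit
import CoreVideo

/// Sketch: reconstruct from my own AVFoundation captures instead of an image folder.
func reconstruct(capturedImages: [CVPixelBuffer],
                 capturedDepthMaps: [CVPixelBuffer?],
                 outputURL: URL) async throws {
    var samples: [PhotogrammetrySample] = []
    for (index, image) in capturedImages.enumerated() {
        var sample = PhotogrammetrySample(id: index, image: image)
        sample.depthDataMap = capturedDepthMaps[index]   // depth is optional
        // The docs also list gravity and objectMask properties, which I haven't tried yet.
        samples.append(sample)
    }

    let session = try PhotogrammetrySession(input: samples,
                                            configuration: PhotogrammetrySession.Configuration())

    try session.process(requests: [.modelFile(url: outputURL)])

    // Wait for the session to finish and report completion.
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Reconstruction finished: \(outputURL)")
        }
    }
}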
I tried to initialize the capture session as below.
// MARK: Initialize Camera
private func initializeCamera() {
    print("Initialize Camera")

    // The LiDAR depth camera publishes .video as its media type;
    // asking for .depthData here returns nil.
    currentCamera = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                            for: .video,
                                            position: .back)

    currentSession = AVCaptureSession()
    currentSession.sessionPreset = .photo

    // Color camera input.
    do {
        let cameraInput = try AVCaptureDeviceInput(device: currentCamera)
        currentSession.addInput(cameraInput)
    } catch {
        fatalError("Could not create the camera input: \(error)")
    }

    // Streaming BGRA video frames.
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: currentDataOutputQueue)
    videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    currentSession.addOutput(videoOutput)

    // Streaming (smoothed) depth frames.
    let depthOutput = AVCaptureDepthDataOutput()
    depthOutput.setDelegate(self, callbackQueue: currentDataOutputQueue)
    depthOutput.isFilteringEnabled = true
    currentSession.addOutput(depthOutput)

    // Still photos with depth. Depth delivery can only be enabled after the
    // output has been added to the session, and only if it is supported.
    currentPhotoOutput = AVCapturePhotoOutput()
    currentSession.addOutput(currentPhotoOutput)
    if currentPhotoOutput.isDepthDataDeliverySupported {
        currentPhotoOutput.isDepthDataDeliveryEnabled = true
    }
}
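And this is roughly how I request each photo once the session is running. Again just a sketch: the HEIC/TIFF saving itself is left out, and it assumes the containing class conforms to AVCapturePhotoCaptureDelegate.

// MARK: Capture a photo with depth
private func capturePhotoWithDepth() {
    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    // Only ask for depth if the output actually supports it.
    if currentPhotoOutput.isDepthDataDeliverySupported {
        settings.isDepthDataDeliveryEnabled = true
        settings.embedsDepthDataInPhoto = false   // I keep depth separate and save it as TIFF myself
    }
    currentPhotoOutput.capturePhoto(with: settings, delegate: self)
}

// MARK: AVCapturePhotoCaptureDelegate
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil else { return }
    // HEIC bytes for the color image.
    let heicData = photo.fileDataRepresentation()
    // Depth as Float32 AVDepthData; I convert and write the TIFF elsewhere.
    let depthData = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    print("Captured \(heicData?.count ?? 0) bytes of HEIC, depth present: \(depthData != nil)")
}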