How to get the actual distance of the depth map image subject from the true depth camera
I was able to obtain the depth map using AVCapturePhotoOutput from the delegate method func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?). I convert the depth map to the kCVPixelFormatType_DepthFloat32 format and read its pixel values with the code below:

```swift
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    let floatBuffer = unsafeBitCast(
        CVPixelBufferGetBaseAddress(depthMap),
        to: UnsafeMutablePointer<Float32>.self
    )
    for row in 0 ..< height {
        for col in 0 ..< width {
            if floatBuffer[width * row + col].isFinite {
                convertedDepthMap[row][col] = floatBuffer[width * row + col]
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)
    return convertedDepthMap
}
```

Is this the right way to access the float values of a depth map, and what unit are they in? Sometimes the values are around 0.7 even when I hold the device only 15 to 30 cm from the subject.
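In case it helps to check the numbers, here is a minimal sketch (the helper name is mine; it assumes the AVDepthData from the delegate callback is at hand) that converts to 32-bit depth before sampling, so the values are meters rather than disparity (1/meters), and that respects the buffer's row stride instead of assuming width * 4 bytes per row:

```swift
import AVFoundation

// Hypothetical helper: sample one depth value in meters from an AVDepthData.
// If the native subtype is disparity ('hdis'), the raw values are 1/meters;
// converting to kCVPixelFormatType_DepthFloat32 yields meters.
func depthInMeters(atRow row: Int, col: Int, from depthData: AVDepthData) -> Float32? {
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = depth.depthDataMap
    guard row < CVPixelBufferGetHeight(map), col < CVPixelBufferGetWidth(map) else { return nil }

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(map) else { return nil }
    // Use the real row stride; rows can be padded beyond width * MemoryLayout<Float32>.size.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(map)
    let value = base.advanced(by: row * bytesPerRow)
        .assumingMemoryBound(to: Float32.self)[col]
    return value.isFinite ? value : nil
}
```

With true depth the unit is meters, so roughly 0.7 at 15 to 30 cm would be suspicious; with disparity, 0.7 1/m would correspond to about 1.4 m, which also doesn't match. Printing the pixel format subtype of the buffer actually received would settle which one it is.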
Replies: 1 · Boosts: 0 · Views: 217 · Nov ’24
Depth map is always in hdis format instead of hdep. Unable to capture depth map in kCVPixelFormatType_DepthFloat format even after setting the activeDepthDataFormat for AVCapture device
I'm trying to capture a depth map using the TrueDepth camera on an iPhone 15 Plus. I set up the AVCaptureSession with an AVCaptureDeviceInput for builtInTrueDepthCamera and an AVCapturePhotoOutput with isDepthDataDeliveryEnabled set to true. I also manually set the capture device's activeDepthDataFormat to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally, I enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto, embedsPortraitEffectsMatteInPhoto, and embedsSemanticSegmentationMattesInPhoto in the AVCapturePhotoSettings before capturing the photo with capturePhoto(with: photoSettings, delegate: self).

I verified this by printing the device's activeDepthDataFormat. Before setting it, the default is:

Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)

After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32, the format is:

Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)

But when I receive the captured photo in func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?), the depth map is:

Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})

Here it shows hdis instead of hdep. Why is it capturing a disparity map instead of a true depth map, even though the depth quality is high and the depth data accuracy is absolute? Here is my code:

```swift
import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var resultLbl: UILabel!

    private var session = AVCaptureSession()
    private var captureDevice: AVCaptureDevice?
    private var inputDevice: AVCaptureDeviceInput?
    private var photoOutput: AVCapturePhotoOutput?
    private var photoSettings: AVCapturePhotoSettings?
    private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupCaptureSession()
    }

    func setupCaptureSession() {
        captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
        guard let captureDevice else {
            print("ERROR: UNABLE TO SET TRUE DEPTH CAMERA")
            return
        }
        session.beginConfiguration()
        do {
            inputDevice = try AVCaptureDeviceInput(device: captureDevice)
            guard let inputDevice else {
                print("ERROR: UNABLE TO SET UP INPUT DEVICE")
                return
            }
            if session.canAddInput(inputDevice) {
                session.addInput(inputDevice)
            }
        } catch {
            print(error)
        }
        photoOutput = AVCapturePhotoOutput()
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        if session.canAddOutput(photoOutput) {
            session.addOutput(photoOutput)
        }
        session.sessionPreset = .photo
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        print("IS DEPTH ENABLED: \(photoOutput.isDepthDataDeliveryEnabled)")
        session.commitConfiguration()

        let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
        let depthFormat = availableFormats.filter { format in
            let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
            return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
                    pixelFormatType == kCVPixelFormatType_DepthFloat32)
        }.first

        session.beginConfiguration()
        try! captureDevice.lockForConfiguration()
        captureDevice.activeDepthDataFormat = depthFormat
        captureDevice.unlockForConfiguration()
        session.commitConfiguration()

        self.setupPreviewLayer()
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        cameraPreviewLayer?.videoGravity = .resizeAspectFill
        if let cameraPreviewLayer {
            self.previewView.layer.addSublayer(cameraPreviewLayer)
            cameraPreviewLayer.frame = self.previewView.bounds
        }
        DispatchQueue.global(qos: .userInteractive).async {
            self.session.startRunning()
        }
    }

    @IBAction func captureBtnPressed(_ sender: Any) {
        photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        guard let photoSettings else {
            print("ERROR: UNABLE TO SET UP PHOTO SETTINGS")
            return
        }
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoSettings.embedsDepthDataInPhoto = true
        photoSettings.embedsPortraitEffectsMatteInPhoto = true
        photoSettings.embedsSemanticSegmentationMattesInPhoto = true
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        print(photo.depthData)
        switch photo.depthData?.depthDataQuality {
        case .low: print("Depth quality is low")
        case .high: print("Depth quality is high")
        case nil: print("Depth quality is nil")
        }
        switch photo.depthData?.depthDataAccuracy {
        case .relative: print("Depth accuracy is relative")
        case .absolute: print("Depth accuracy is absolute")
        case nil: print("Depth accuracy is nil")
        }
        if let imageData = photo.fileDataRepresentation() {
            if let image = UIImage(data: imageData) {
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
    }
}
```
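One sanity check worth running alongside this: whatever subtype actually arrives in the delegate, AVDepthData can be converted after the fact, so a sketch like the following (the helper name is mine; it uses only the documented converting(toDepthDataType:) API) still yields metric depth even from an 'hdis' photo:

```swift
import AVFoundation

// Sketch: call from photoOutput(_:didFinishProcessingPhoto:error:).
// Even if the photo's embedded depth arrives as disparity ('hdis'),
// AVDepthData can convert it to 32-bit metric depth after capture.
func metricDepthMap(from photo: AVCapturePhoto) -> CVPixelBuffer? {
    guard let depthData = photo.depthData else { return nil }
    let metric = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    return metric.depthDataMap
}
```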
Replies: 0 · Boosts: 0 · Views: 173 · Nov ’24
How can I use the iPhone TrueDepth front camera to detect whether a captured face depth map is from a real 3D face or a spoofed 2D image
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions and still can't find a proper working solution.

I trained a Core ML model using 22,000 depth images of human faces and 22,000 depth images of non-faces (objects, food, etc.). The model's accuracy is very low. When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for flat 2D images like this. Even though the image is flat, how does the camera produce a depth map for the person shown in the flat 2D picture, so that the model thinks it is a real face instead of a spoofed one? I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map: https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth

My next approach was to use the NCNN framework with the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using an Objective-C++ wrapper around the C++ code, since the sample was only available as an Android app. When I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy is lower than the Android version's. How can I solve this problem?
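One cheap signal that might be worth layering on top of the model (a sketch only, not a validated anti-spoofing method; the 2 cm threshold is an assumption that would need tuning against real data): a live face has genuine depth relief between the nose and the cheeks, while a picture on a screen lies on a single plane, so the depth range over the face crop can flag flat surfaces:

```swift
import AVFoundation

// Sketch: planarity check on a kCVPixelFormatType_DepthFloat32 face crop.
// A live face usually spans a few centimeters of depth; a photo shown on a
// flat screen is close to a single plane. The 0.02 m threshold is a guess.
func looksFlat(_ depthMap: CVPixelBuffer, threshold: Float = 0.02) -> Bool {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return true }
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)

    var minDepth = Float.greatestFiniteMagnitude
    var maxDepth = -Float.greatestFiniteMagnitude
    for row in 0 ..< height {
        let rowPtr = base.advanced(by: row * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)
        for col in 0 ..< width {
            let d = rowPtr[col]
            guard d.isFinite else { continue }
            minDepth = min(minDepth, d)
            maxDepth = max(maxDepth, d)
        }
    }
    // Too little relief across the crop -> likely a flat picture.
    return (maxDepth - minDepth) < threshold
}
```

Note that a tilted screen still produces a depth gradient, so a range check alone can be fooled; fitting a plane to the crop and thresholding the residuals would be the more robust version of the same idea.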
Replies: 0 · Boosts: 0 · Views: 233 · Nov ’24
CIImage property of UIImage is always nil
I'm trying to apply a Core Image filter to a UIImage. For that I want the CIImage representation of the UIImage, which I obtain as shown below:

```swift
if let inputImage = self.orginalImageView.image {
    if let ciImage = CIImage(image: inputImage) {
        print(ciImage)
        print(self.orginalImageView.image?.ciImage)
    }
}
```

This works. But I noticed that UIImage already has a ciImage property, and it is always nil. According to the documentation:

ciImage
The underlying Core Image data.
var ciImage: CIImage? { get }
Discussion: If the UIImage object was initialized using a CGImage, the value of the property is nil.

Is the image property of the UIImageView backed by a CGImage, so that the ciImage property is nil?
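For reference, a minimal sketch of the usual workaround (CISepiaTone stands in for whatever filter is intended; the helper name is mine): build the CIImage explicitly with CIImage(image:) instead of reading the ciImage property, apply the filter, and render back through a CIContext:

```swift
import UIKit
import CoreImage

// Sketch: apply a Core Image filter to a CGImage-backed UIImage,
// i.e. exactly the case where the ciImage property returns nil.
func applySepia(to input: UIImage, intensity: Double = 0.8) -> UIImage? {
    guard let ciInput = CIImage(image: input) else { return nil }

    let filter = CIFilter(name: "CISepiaTone")!
    filter.setValue(ciInput, forKey: kCIInputImageKey)
    filter.setValue(intensity, forKey: kCIInputIntensityKey)

    guard let ciOutput = filter.outputImage else { return nil }
    // Render eagerly to a CGImage so the result behaves like a normal UIImage.
    let context = CIContext()
    guard let cg = context.createCGImage(ciOutput, from: ciOutput.extent) else { return nil }
    return UIImage(cgImage: cg, scale: input.scale, orientation: input.imageOrientation)
}
```

And per the documentation quoted above, ciImage is only non-nil when the UIImage was created with init(ciImage:); images loaded from files, data, or asset catalogs are CGImage-backed, so the property being nil there is expected.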
Replies: 1 · Boosts: 0 · Views: 227 · Nov ’24