Hi, can anyone help me with how to obtain depth data using the rear-facing camera equipped with LiDAR?
Could you please give me some tips or code examples on how to access and work with the LiDAR data?
Also, from posts I have previously viewed, it seems you can obtain depth information through the TrueDepth camera API but not from the rear-facing camera. Why is this?
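From what I have gathered so far, ARKit seems to expose the rear LiDAR through ARFrame.sceneDepth when the sceneDepth frame semantic is requested. This is only my understanding and untested; the class name LiDARDepthReader and everything inside it are my own placeholders:

import ARKit
import CoreVideo

// Untested sketch: request per-frame LiDAR depth via ARKit's sceneDepth.
final class LiDARDepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Only request sceneDepth if the device (LiDAR-equipped) supports it.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // sceneDepth.depthMap is a CVPixelBuffer of Float32 depths in meters.
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        print("LiDAR depth map: \(width) x \(height)")
    }
}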
Thank you so much!!
Goal: to obtain depth data and calibration data from the TrueDepth camera for a computer vision task.
I am very confused because, for example, Apple says:
To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data.
which I tried and got nil. Then, looking through Stack Overflow, I read:
cameraCalibrationData is always nil in photo, you have to get it from photo.depthData. As long as you're requesting depth data, you'll get the calibration data.
and so when I tried print(photo.depthData) to obtain the depth and calibration data, my output was:
Optional(hdis 640x480 (high/abs)
calibration:
{intrinsicMatrix: [2735.35 0.00 2017.75 | 0.00 2735.35 1518.51 | 0.00 0.00 1.00],
extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm,
distortionCenter:{2017.75,1518.51},
ref:{4032x3024}})
^ But where is the depth data??
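My current guess (untested) is that the per-pixel values live in AVDepthData.depthDataMap rather than in the printed description, so something like this sketch is what I was planning to try from the capture delegate (inspectDepth is just my placeholder name):

import AVFoundation
import CoreVideo

// Untested sketch: pull the actual pixel buffer out of AVDepthData inside
// photoOutput(_:didFinishProcessingPhoto:error:) instead of printing the object.
func inspectDepth(of photo: AVCapturePhoto) {
    guard let depthData = photo.depthData else { return }
    // Convert to 32-bit float depth in case it arrives as 16-bit disparity ("hdis").
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let depthMap = converted.depthDataMap   // CVPixelBuffer (640x480 in my case)
    print("depth map size: \(CVPixelBufferGetWidth(depthMap)) x \(CVPixelBufferGetHeight(depthMap))")
    // The calibration seems to live on the depth data itself, not on the photo.
    print(converted.cameraCalibrationData?.intrinsicMatrix as Any)
}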
Below is my entire code:
Note: I'm new to Xcode and I'm used to coding in Python for computer vision tasks, so I apologize in advance for the messy code.
import AVFoundation
import UIKit
import Photos
class ViewController: UIViewController {

    var session: AVCaptureSession?
    let output = AVCapturePhotoOutput()
    var previewLayer = AVCaptureVideoPreviewLayer()

    // MARK: - Permission check
    private func checkCameraPermissions() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: .video) { [weak self] granted in
                guard granted else { return }
                DispatchQueue.main.async { self?.setUpCamera() }
            }
        case .restricted:
            break
        case .denied:
            break
        case .authorized:
            setUpCamera()
        @unknown default:
            break
        }
    }

    // MARK: - Camera setup
    private func setUpCamera() {
        let session = AVCaptureSession()
        if let captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: AVMediaType.depthData, position: .unspecified) {
            do {
                let input = try AVCaptureDeviceInput(device: captureDevice)
                if session.canAddInput(input) {
                    session.beginConfiguration()
                    session.sessionPreset = .photo
                    session.addInput(input)
                    session.commitConfiguration()
                }
                if session.canAddOutput(output) {
                    session.beginConfiguration()
                    session.addOutput(output)
                    session.commitConfiguration()
                }
                // Depth delivery must be enabled on the output before capturing.
                output.isDepthDataDeliveryEnabled = true
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.session = session
                session.startRunning()
                self.session = session
            } catch {
                print(error)
            }
        }
    }

    // MARK: - UI button
    private let shutterButton: UIButton = {
        let button = UIButton(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        button.layer.cornerRadius = 50
        button.layer.borderWidth = 10
        button.layer.borderColor = UIColor.white.cgColor
        return button
    }()

    // MARK: - Video preview setup
    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .black
        view.layer.insertSublayer(previewLayer, at: 0)
        view.addSubview(shutterButton)
        checkCameraPermissions()
        shutterButton.addTarget(self, action: #selector(didTapTakePhoto), for: .touchUpInside)
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        previewLayer.frame = view.bounds
        shutterButton.center = CGPoint(x: view.frame.size.width / 2, y: view.frame.size.height - 100)
    }

    // MARK: - Running and stopping the session
    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        session?.startRunning()
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        session?.stopRunning()
    }

    // MARK: - Taking a photo
    @objc private func didTapTakePhoto() {
        let photoSettings = AVCapturePhotoSettings()
        // Request depth alongside the photo; filtering smooths holes in the depth map.
        photoSettings.isDepthDataDeliveryEnabled = true
        photoSettings.isDepthDataFiltered = true
        output.capturePhoto(with: photoSettings, delegate: self)
    }
}

extension ViewController: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        guard let data = photo.fileDataRepresentation() else { return }
        print(photo.depthData)
        let image = UIImage(data: data)
        session?.stopRunning()

        // Add the captured image onto the UI.
        let imageView = UIImageView(image: image)
        imageView.contentMode = .scaleAspectFill
        imageView.frame = view.bounds
        view.addSubview(imageView)

        // Save the photo to the library.
        PHPhotoLibrary.requestAuthorization { status in
            guard status == .authorized else { return }
            PHPhotoLibrary.shared().performChanges({
                let creationRequest = PHAssetCreationRequest.forAsset()
                creationRequest.addResource(with: .photo, data: data, options: nil)
            }, completionHandler: { _, error in
                if let error = error {
                    print("Error saving photo: \(error)")
                }
            })
        }
    }
}
I have an application that captures an image with a depth map and calibration data and exports it so that I can work with it in Python.
The depth map and calibration data are converted to Float32 and stored as a JSON file. The image is stored as a JPEG file.
The depth map shape is (480, 640) and the image shape is (3024, 4032, 3).
My goal is to be able to create a point cloud from this data.
I'm new to working with data provided by Apple's TrueDepth camera and would like some clarity on what preprocessing steps I need to perform before creating the point cloud.
Here are my questions:
1) Since the 640x480 depth map is a scaled version of the 12MP image, I assume I can scale down the intrinsics as well. Should I scale [fx, fy, cx, cy] by the scaling factor 640/4032 ≈ 0.15873?
2) After scaling comes handling the distortion: should I use lensDistortionLookupTable to undistort both the image and the depth map?
Are the above two questions correct, or am I missing something? (A rough sketch of what I mean by question 1 is below.)
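To make question 1 concrete, this is the kind of scaling and back-projection I had in mind (an untested Swift sketch; the distortion correction from question 2 is left out, and makePointCloud is just my placeholder name):

import simd

// Untested sketch of question 1: scale the 12MP intrinsics down to the
// 640x480 depth resolution, then back-project each depth pixel to a 3D point.
// (Lens distortion from question 2 is ignored here.)
func makePointCloud(depth: [[Float]],               // (480, 640) depth values in meters
                    intrinsics: simd_float3x3,      // from the 4032x3024 reference frame
                    referenceWidth: Float = 4032,
                    depthWidth: Float = 640) -> [SIMD3<Float>] {
    let scale = depthWidth / referenceWidth         // 640 / 4032 ≈ 0.1587
    let fx = intrinsics[0][0] * scale
    let fy = intrinsics[1][1] * scale
    let cx = intrinsics[2][0] * scale
    let cy = intrinsics[2][1] * scale

    var points: [SIMD3<Float>] = []
    for v in 0..<depth.count {
        for u in 0..<depth[v].count {
            let z = depth[v][u]
            guard z.isFinite, z > 0 else { continue }
            // Standard pinhole back-projection.
            let x = (Float(u) - cx) / fx * z
            let y = (Float(v) - cy) / fy * z
            points.append(SIMD3<Float>(x, y, z))
        }
    }
    return points
}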
I can't figure out how to solve this error:
Value of type 'ARFrame' has no member 'viewTrans'
Nothing came up when I tried googling the error. Below I have attached the entire code; if you scroll down to basically the end, you will see a comment called //ERROR, and right below that is the line of code throwing the error.
code
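In case it helps, the closest things I could find on ARFrame/ARCamera (there is no viewTrans member as far as I can tell) are the two below; these are untested guesses on my part, and cameraMatrices is just my placeholder name:

import ARKit
import simd
import UIKit

// Untested notes-to-self: ARFrame itself has no 'viewTrans', so I assume the
// sample I copied from meant one of these ARCamera properties instead.
func cameraMatrices(from frame: ARFrame, orientation: UIInterfaceOrientation) {
    // World-space pose of the camera (camera-to-world transform).
    let cameraToWorld: simd_float4x4 = frame.camera.transform

    // View matrix (world-to-camera), which is what rendering code usually wants.
    let viewMatrix: simd_float4x4 = frame.camera.viewMatrix(for: orientation)

    print(cameraToWorld, viewMatrix)
}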
Xcode version: 13.2.1
iOS version on my iPhone: 15.6.1
Ever since I updated my iPhone, Xcode keeps failing to run my app on it, and when I tried to switch the deployment target to my iPhone's iOS version (15.6.1) I couldn't, since no options above 15.2 are listed.
How can I solve this? Do I need a newer version of Xcode, or is there a way to download support for my specific iOS version and add it to Xcode?
Thank you for your help:)
I got a codesign error when attempting to run one of my applications on my iPhone. I tried a couple more of my apps and still got the same error. I only get this error when running the app on the iPhone that's plugged into my computer; I get no errors when running on a simulator, so the problem seems to be specific to my iPhone.
From what I read online, I tried the following:
1) Restart both my Mac and iPhone: no success. I tried multiple times and still got the same error.
2) Check my provisioning profile and certificates: everything looks fine, but I'm a newbie so I could be wrong.
What I noticed that may be causing the error:
I stopped using Xcode for about 3 months, and during that time I updated my iPhone to iOS 15.6.1, but the iOS Deployment Target in Xcode only goes up to 15.2.
So with all this said, is it possible to run with an older deployment target, say 15.2, when your iPhone's iOS version is higher (like mine, which is 15.6.1)?
^ If this is possible, where else could this error be coming from?
Thank you!
My question is regarding the depth data acquired by the TrueDepth camera.
I want to know how to get real-distance depth data from the TrueDepth camera.
I can obtain depth maps, but how do I obtain the actual distance values (Z values)?
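For anyone else looking at this, here is my rough understanding so far (untested): the TrueDepth stream often arrives as disparity, so I believe it first has to be converted to kCVPixelFormatType_DepthFloat32, after which each pixel should be the Z distance from the camera in meters. metricDepth is just my own placeholder name:

import AVFoundation
import CoreVideo

// Untested sketch: convert AVDepthData to 32-bit float depth and read out
// the per-pixel values, which should then be metric distances (meters).
func metricDepth(from depthData: AVDepthData) -> [Float32] {
    let depth = depthData.depthDataType == kCVPixelFormatType_DepthFloat32
        ? depthData
        : depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)

    let map = depth.depthDataMap
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    var values = [Float32]()
    values.reserveCapacity(width * height)
    for row in 0..<height {
        // Respect row padding by stepping through the buffer with bytesPerRow.
        let rowPtr = base.advanced(by: row * rowBytes).assumingMemoryBound(to: Float32.self)
        for col in 0..<width {
            values.append(rowPtr[col])   // Z distance from the camera, in meters
        }
    }
    return values
}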