Hi there,
when I run the ObjectCapture sample project on my iPad Pro 2020, depth is always disabled.
Is there a way to enable it?
Thanks in advance
I'm having the same problem on an 11" 2020 iPad Pro. This is what I've found out so far. Perhaps someone from Apple can confirm that this device should be capable of producing depth data from the rear camera.
After doing some debugging, I determined that this iPad does not report having an AVCaptureDevice of type .builtInDualCamera. It does have .builtInDualWideCamera; however, that device type is not showing depth support.
This is the relevant section of the sample code in CameraViewModel.swift starting at line 550:
/// This method checks for a depth-capable dual rear camera and, if found, returns an `AVCaptureDevice`.
private func getVideoDeviceForPhotogrammetry() throws -> AVCaptureDevice {
    var defaultVideoDevice: AVCaptureDevice?

    // Specify dual camera to get access to depth data.
    if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video,
                                                      position: .back) {
        logger.log(">>> Got back dual camera!")
        defaultVideoDevice = dualCameraDevice
    } else if let dualWideCameraDevice = AVCaptureDevice.default(.builtInDualWideCamera,
                                                                 for: .video,
                                                                 position: .back) {
        logger.log(">>> Got back dual wide camera!")
        defaultVideoDevice = dualWideCameraDevice
    } else if let backWideCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                                 for: .video,
                                                                 position: .back) {
        logger.log(">>> Can't find a depth-capable camera: using wide back camera!")
        defaultVideoDevice = backWideCameraDevice
    }

    guard let videoDevice = defaultVideoDevice else {
        logger.error("Back video device is unavailable.")
        throw SessionSetupError.configurationFailed
    }
    return videoDevice
}
When you run the app with your iPad attached, you'll see in the console output that it prints >>> Got back dual wide camera!, having passed over the first choice of .builtInDualCamera. The way the code is written suggests that the dual wide camera would also support depth, because only the final else if branch (.builtInWideAngleCamera) specifically logs that it can't find a depth-capable camera. However, I found that the dual wide camera does not report any depth formats by adding this code:
let discoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInDualWideCamera, .builtInUltraWideCamera,
                  .builtInTelephotoCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera,
                  .builtInTripleCamera],
    mediaType: .video, position: .unspecified)
for device in discoverySession.devices {
    print("\(device) supports \(device.activeFormat.supportedDepthDataFormats)")
}
Console Output:
<AVCaptureFigVideoDevice: 0x104915b50 [Back Dual Wide Camera][com.apple.avfoundation.avcapturedevice.built-in_video:6]> supports []
<AVCaptureFigVideoDevice: 0x104915480 [Back Ultra Wide Camera][com.apple.avfoundation.avcapturedevice.built-in_video:5]> supports []
<AVCaptureFigVideoDevice: 0x104914810 [Back Camera][com.apple.avfoundation.avcapturedevice.built-in_video:0]> supports []
<AVCaptureFigVideoDevice: 0x104916140 [Front Camera][com.apple.avfoundation.avcapturedevice.built-in_video:1]> supports []
<AVCaptureFigVideoDevice: 0x104916750 [Front TrueDepth Camera][com.apple.avfoundation.avcapturedevice.built-in_video:4]> supports ['dpth'/'hdis' 160x 90, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdis' 160x 90, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'hdep' 160x 90, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdep' 160x 90, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'hdis' 320x 180, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdis' 320x 180, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'hdep' 320x 180, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdep' 320x 180, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'hdis' 640x 360, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdis' 640x 360, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'hdep' 640x 360, { 2- 30 fps}, HRSI: 640x 360, fov:61.161, 'dpth'/'fdep' 640x 360, { 2- 30 fps}, HRSI: 640x 360, fov:61.161]
Here are the descriptions of those two device types from https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype
static let builtInDualCamera: AVCaptureDevice.DeviceType
// A combination of wide-angle and telephoto cameras that creates a capture device.
static let builtInDualWideCamera: AVCaptureDevice.DeviceType
// A device that consists of two cameras of fixed focal length, one ultrawide angle and one wide angle.
Based on those descriptions, and the fact that this iPad Pro has only two rear cameras, it must have the wide and ultra-wide cameras while lacking the telephoto camera. The iPhone 12 Pro has three rear cameras, so I'm guessing it includes the missing telephoto lens; I'm buying one tomorrow and will report back if I can get this demo working properly.

I suspect this demo was written for and tested on an iPhone. In some ways that makes sense, since an iPad gets pretty heavy after taking hundreds of pictures, but it would be frustrating if the iPad with LiDAR doesn't support capturing rear depth data: many people, myself included, bought it specifically for scanning. I'm hoping that .builtInDualWideCamera not reporting depth support is a bug!
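If anyone wants to verify this on their own device, the sample's hard-coded fallback chain could instead be driven by a discovery session that filters on reported depth formats, so the result tells you directly whether a depth-capable back camera exists at all. This is just my sketch, not Apple's sample code; the function name is mine, and the device-type list is my choice:

```swift
import AVFoundation

/// Sketch: return the first back camera that advertises any depth formats.
/// A device is only useful for depth capture if at least one of its
/// formats lists supported depth data formats.
func firstDepthCapableBackCamera() -> AVCaptureDevice? {
    let session = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.builtInDualCamera, .builtInDualWideCamera, .builtInTripleCamera],
        mediaType: .video,
        position: .back)
    return session.devices.first { device in
        device.formats.contains { !$0.supportedDepthDataFormats.isEmpty }
    }
}
```

On the 2020 iPad Pros discussed in this thread, this returns nil, which is exactly the problem being reported.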
The Apple forums said my post was too long, so this is (1/2).
(2/2) Continuing, since this part of the answer made my first response too long. (Note: the forum error message only shows up when you try to submit your answer, not in the live preview, and it doesn't say how much you are over. Frustrating!)
There is one way to get depth images with this iPad: use the front TrueDepth camera (.builtInTrueDepthCamera). You can do this by altering the code from my other answer to prefer the TrueDepth camera:
if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video,
                                                  position: .back) {
    logger.log(">>> Got back dual camera!")
    defaultVideoDevice = dualCameraDevice
} else if let trueDepthDevice = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                        for: .video,
                                                        position: .unspecified) {
    logger.log(">>> Got true depth camera!")
    defaultVideoDevice = trueDepthDevice
} else if let dualWideCameraDevice = AVCaptureDevice.default(.builtInDualWideCamera,
                                                             for: .video,
                                                             position: .back) {
    logger.log(">>> Got back dual wide camera!")
    // Note: this does not have depth data. Why?!?
    defaultVideoDevice = dualWideCameraDevice
} else if let backWideCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                             for: .video,
                                                             position: .back) {
    logger.log(">>> Can't find a depth-capable camera: using wide back camera!")
    defaultVideoDevice = backWideCameraDevice
}
You'll notice when running the app that the yellow "!" changes to a green check mark and depth data is recorded with the image. Of course, this is a completely impractical way to scan an object, because it's hard to see the screen without getting into the photo! Also, unfortunately, the resolution of the depth image (640x480) is much smaller than that of the color image (3088x2316), but hopefully that won't be the case with the dual rear cameras.
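For completeness: even with a depth-capable device selected, depth also has to be enabled on the photo output, which is where the yellow/green indicator ultimately comes from. This sketch shows roughly what that enabling step looks like; it's my own illustration, not the sample's code, and it assumes `photoOutput` has already been added to a configured, running AVCaptureSession:

```swift
import AVFoundation

/// Sketch: turn on depth delivery for a photo output and build matching
/// per-capture settings. On the 2020 iPad Pro's rear cameras,
/// isDepthDataDeliverySupported is false, so depth stays off.
func makeDepthPhotoSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    if photoOutput.isDepthDataDeliverySupported {
        photoOutput.isDepthDataDeliveryEnabled = true
    }
    let settings = AVCapturePhotoSettings()
    // Per-capture settings must not request depth unless the output has it enabled.
    settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
    return settings
}
```

The key point is that depth delivery is gated twice: once on the output, once per capture, and both checks fail when the selected device reports no depth formats.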
Nice work, Mike. I posted a duplicate question, FWIW, but you came to the same conclusion and printed out the format data.
I agree this looks like a bug, and it's interesting to know the depth sensor resolution is roughly 1/5 that of the camera sensor.
Reproduced your findings on my iPad Pro 11" (3rd Gen) too, only the TrueDepth camera has anything listed for available depth formats. :-(
Is anyone on the iPadOS 15 Beta able to see if it's enabled there? I'm still on 14.6 here.
From reading the docs, I suspected it might be due to not setting up the session properly, but if the available depth formats aren't even listed, that rules out that possibility, I suppose.
Yes, I have the iPad Pro 12.9-inch (4th gen) from November running the iPadOS 15.0 beta and am experiencing the same issue. No depth is captured, so none can be used in the HelloPhotogrammetry app, and the sample capture app does not report depth data: I see the yellow warning icon on the photos, and depth is disabled (red) in the info panel. LiDAR scanning was the whole point of buying this iPad, so I'm also frustrated, and curious how much better (if at all) the scans would be with LiDAR.
Tried the same thing. The iPad Pro does not support depth capture in iPadOS 15.0 beta 5. I'm really disappointed that this app is effectively exclusive to the iPhone 12 Pro/Pro Max. Also, wouldn't it make more sense to have the M1 iPad as an all-in-one 3D capture device, without needing the command-line tools on a separate Mac? The API is said to support M1 Macs, so shouldn't it also be available on the M1 iPad?
I used my iPhone 11, which has a dual camera, and it works pretty well. I also tried my M1 iPad Pro, and no depth was captured. Basically, I think the LiDAR does not work with this app at all.
Does anyone know of any new updates on this?
I got a brand new iPad Pro with LiDAR as well, and I'm also not getting a depth map when using .builtInDualWideCamera.
It does work fine with .builtInTrueDepthCamera, and I managed to scan an object with it, but the photogrammetry command-line sample on macOS outputs the mesh at the wrong scale, much smaller than real-world scale.
I actually bought the new iPad Pro specifically to use with the new photogrammetry API in RealityKit, and it seems quite absurd that a brand-new device with a dual rear camera and LiDAR doesn't produce a depth/disparity map at all!
Shouldn't we get an answer from Apple about this? Do Apple engineers look at this forum at all?
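For anyone reproducing the wrong-scale result above, this is roughly what the macOS command-line reconstruction flow looks like with PhotogrammetrySession in RealityKit. It's a minimal sketch, not the HelloPhotogrammetry sample itself, and the paths are hypothetical; the relevance to scale is that without depth and gravity data embedded in the capture images, the session has no absolute size reference:

```swift
import RealityKit
import Foundation

// Hypothetical folder of capture images and output path.
let inputURL = URL(fileURLWithPath: "/tmp/Captures", isDirectory: true)
let outputURL = URL(fileURLWithPath: "/tmp/model.usdz")

var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .sequential  // images were taken in a continuous pass

let session = try PhotogrammetrySession(input: inputURL, configuration: configuration)
try session.process(requests: [.modelFile(url: outputURL, detail: .full)])

// Wait for the session to finish or fail.
Task {
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Done: \(outputURL.path)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }
}
```

If the TrueDepth captures carry depth but at a much lower resolution than the color images, it seems plausible that scale estimation suffers, though that's my speculation rather than anything documented.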
Just got an iPad for this too, lol. Same issue; I may be returning it if there's no fix.
Any updates on this issue?