Thanks, good to know.
By the way, the results are impressive.
My personal phone is a 12 Pro Max, but I will not install iOS 15 for the moment.
I have already experimented with stereo configurations on a 12 Pro with my own software, and the best quality was achieved by pairing the tele and the wide cameras.
Can I expect the same results between a 12 Pro and an iPad Pro M1? Since the iPad Pro M1 has only the wide and ultra wide cameras, I am afraid the results will be less accurate. Which would you recommend, a 12 Pro or an iPad Pro M1?
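In case it is useful, the camera pairs available on a given model can be listed with AVCaptureDeviceDiscoverySession (a minimal sketch; the constituent mapping in the comments is my understanding of the hardware):

NSArray<AVCaptureDeviceType> *types = @[AVCaptureDeviceTypeBuiltInDualCamera,     // wide + tele
                                        AVCaptureDeviceTypeBuiltInDualWideCamera, // ultra wide + wide
                                        AVCaptureDeviceTypeBuiltInTripleCamera];  // ultra wide + wide + tele
AVCaptureDeviceDiscoverySession *discovery =
    [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:types
                                                           mediaType:AVMediaTypeVideo
                                                            position:AVCaptureDevicePositionBack];
for (AVCaptureDevice *device in discovery.devices) {
    NSLog(@"%@ -> constituents: %@", device.deviceType, device.constituentDevices);
}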
The screen width of a 12 Pro Max is about 73 mm, measured with a ruler.
If we convert 428 points to mm, we get 150.989 mm, which is almost twice the real size.
Normally a point is defined as 25.4 mm / 72, which is about 0.352778 mm, as can easily be checked on Google.
It is as if the definition of a point is not the same.
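A quick sanity check of those numbers (428 pt is the logical width of the 12 Pro Max; 73 mm is the ruler measurement above):

double typographicPointInMM = 25.4 / 72.0;  // ~0.352778 mm, the PostScript definition
double screenWidthInMM      = 73.0;         // measured with a ruler
double screenWidthInPoints  = 428.0;        // UIScreen logical width of a 12 Pro Max
double iosPointInMM = screenWidthInMM / screenWidthInPoints;  // ~0.1706 mm
// 428 * 0.352778 = 150.989 mm, about twice the measured 73 mm: an iOS point on
// this display is roughly half a typographic point, so the two units differ.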
The following code works for me with AVCaptureDeviceTypeBuiltInTripleCamera on a Pro Max. The problem was the geometric distortion correction, which is enabled by default:

NSError *error = nil;
if ([videoDevice lockForConfiguration:&error]) {
    // Calibration delivery requires geometric distortion correction to be off.
    [videoDevice setGeometricDistortionCorrectionEnabled:NO];
    [videoDevice unlockForConfiguration];
}
#if defined(USE_AV_CAPTURE_PHOTO_OUTPUT)
AVCapturePhotoOutput *photoOutput = [[AVCapturePhotoOutput alloc] init];
if ([self.session canAddOutput:photoOutput]) {
    [self.session addOutput:photoOutput];
    self.photoOutput = photoOutput;
    self.photoOutput.highResolutionCaptureEnabled = NO;
    self.photoOutput.livePhotoCaptureEnabled = NO;
    if ([videoDevice isVirtualDevice]) {
        // Constituent photo delivery gives one photo per physical camera.
        BOOL supported = self.photoOutput.virtualDeviceConstituentPhotoDeliverySupported;
        NSLog(@"* virtualDeviceConstituentPhotoDeliverySupported %d", supported);
        self.photoOutput.virtualDeviceConstituentPhotoDeliveryEnabled = supported;
    }
    BOOL depthDataDeliverySupported = self.photoOutput.depthDataDeliverySupported;
    //self.photoOutput.depthDataDeliveryEnabled = depthDataDeliverySupported;
    BOOL cameraCalibrationDataDeliverySupported = self.photoOutput.cameraCalibrationDataDeliverySupported;
    NSLog(@"Constituent %@", [[self.videoDeviceInput device] constituentDevices]);
    NSLog(@"* depthDataDeliverySupported %d", depthDataDeliverySupported);
    NSLog(@"* depthDataDeliveryEnabled %d", self.photoOutput.depthDataDeliveryEnabled);
    NSLog(@"* cameraCalibrationDataDeliverySupported %d", cameraCalibrationDataDeliverySupported);
    // self.photoOutput.cameraCalibrationDataDeliverySupported = YES; // read-only, cannot be set
}
#endif
NSLog(@"Geom: %d %d", [videoDevice isGeometricDistortionCorrectionSupported], [videoDevice isGeometricDistortionCorrectionEnabled]);
You're right, geometric distortion correction is enabled by default on the ultra wide camera, which prevents the delivery of the calibration. How can we retrieve the camera calibration when we use only the AVCaptureDeviceTypeBuiltInWideAngleCamera?
When I try to use the ultra wide camera, nothing works. Only the tele and wide cameras work on my side.
Did you get time to check why no cameraCalibrationData is available in some cases? For information, I'm using an AVCaptureSession, not an AVCaptureMultiCamSession.
I'm using an iPhone 8 Plus, and self.videoDeviceInput.device.deviceType is AVCaptureDeviceTypeBuiltInDualCamera (made of tele and wide). Today someone lent me an iPhone 11 Pro Max:
If I use AVCaptureDeviceTypeBuiltInTripleCamera, the property self.photoOutput.cameraCalibrationDataDeliverySupported returns NO.
If I use AVCaptureDeviceTypeBuiltInDualWideCamera, the property self.photoOutput.cameraCalibrationDataDeliverySupported returns NO.
If I use AVCaptureDeviceTypeBuiltInDualCamera, the property self.photoOutput.cameraCalibrationDataDeliverySupported returns YES.
In this last mode, whether I set depthDataDeliveryEnabled to NO or YES, I get valid cameraCalibrationData.
It seems that there are several problems:
1) On my iPhone 8 Plus (AVCaptureDeviceTypeBuiltInDualCamera), if I don't enable depth data delivery, no cameraCalibrationData is available.
2) With the iPhone 11 Pro Max, some configurations cannot deliver camera calibration data.
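For anyone who wants to reproduce the comparison, a probe along these lines should print the support per device type (a sketch; per my understanding, constituent photo delivery has to be enabled on a virtual device before calibration delivery is reported as supported, which matches the code earlier in the thread):

NSArray<AVCaptureDeviceType> *types = @[AVCaptureDeviceTypeBuiltInDualCamera,
                                        AVCaptureDeviceTypeBuiltInDualWideCamera,
                                        AVCaptureDeviceTypeBuiltInTripleCamera];
for (AVCaptureDeviceType type in types) {
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithDeviceType:type
                                                                 mediaType:AVMediaTypeVideo
                                                                  position:AVCaptureDevicePositionBack];
    if (device == nil) {
        NSLog(@"%@: not available on this phone", type);
        continue;
    }
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    AVCapturePhotoOutput *output = [[AVCapturePhotoOutput alloc] init];
    if (input != nil && [session canAddInput:input] && [session canAddOutput:output]) {
        [session addInput:input];
        [session addOutput:output];
        // Enable constituent delivery first, otherwise calibration support reads NO.
        output.virtualDeviceConstituentPhotoDeliveryEnabled =
            output.virtualDeviceConstituentPhotoDeliverySupported;
        NSLog(@"%@: cameraCalibrationDataDeliverySupported = %d",
              type, output.cameraCalibrationDataDeliverySupported);
    }
}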
Thanks for answering me. I did what you advised and it worked 🙂. I get the results I want directly in the cameraCalibrationData property of AVCapturePhoto for both pictures. But I noticed something strange.
What I'm trying to do here is just taking synchronized pictures with the associated intrinsics, distortion and extrinsics for each frame, so I'm only interested in the cameraCalibrationData property of AVCapturePhoto.
If I do not set the depthDataDeliveryEnabled property of AVCapturePhotoOutput to YES, I do not get valid cameraCalibrationData on the AVCapturePhoto (it is nil). I assume that when this property is set to YES, the phone has to do some additional processing. I do not want to waste resources on data I don't need; I have to do my own processing once I get the two frames. Maybe it is my code.
The code where I configure the AVCapturePhotoOutput:

    AVCapturePhotoOutput *photoOutput = [[AVCapturePhotoOutput alloc] init];
    if ([self.session canAddOutput:photoOutput]) {
        [self.session addOutput:photoOutput];
        self.photoOutput = photoOutput;
        self.photoOutput.highResolutionCaptureEnabled = YES;
        self.photoOutput.livePhotoCaptureEnabled = NO;
        if ([videoDevice isVirtualDevice]) {
            self.photoOutput.virtualDeviceConstituentPhotoDeliveryEnabled = YES;
        }
        BOOL depthDataDeliverySupported = self.photoOutput.depthDataDeliverySupported;
        self.photoOutput.depthDataDeliveryEnabled = depthDataDeliverySupported;
        NSLog(@"depthDataDeliverySupported %d", depthDataDeliverySupported);
        NSLog(@"depthDataDeliveryEnabled %d", self.photoOutput.depthDataDeliveryEnabled);
        BOOL cameraCalibrationDataDeliverySupported = self.photoOutput.cameraCalibrationDataDeliverySupported;
        NSLog(@"cameraCalibrationDataDeliverySupported %d", cameraCalibrationDataDeliverySupported);
    }

and the one where I configure the AVCapturePhotoSettings:

    NSDictionary *dict = @{
        (id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA],
    };
    AVCapturePhotoSettings *photoSettings = [AVCapturePhotoSettings photoSettingsWithFormat:dict];
    AVCaptureDevice *device = self.videoDeviceInput.device;
    if ([device isVirtualDevice]) {
        // We have several constituent cameras
        photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = [device constituentDevices];
        photoSettings.depthDataDeliveryEnabled = self.photoOutput.isDepthDataDeliveryEnabled;
        NSLog(@"Virtual %@ %d",
              photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices,
              photoSettings.depthDataDeliveryEnabled);
    }
    photoSettings.cameraCalibrationDataDeliveryEnabled = self.photoOutput.cameraCalibrationDataDeliverySupported;
    NSLog(@"cameraCalibrationDataDeliveryEnabled %d", photoSettings.cameraCalibrationDataDeliveryEnabled);
    if (self.videoDeviceInput.device.isFlashAvailable) {
        photoSettings.flashMode = AVCaptureFlashModeOff;
    }
    photoSettings.highResolutionPhotoEnabled = YES;
    if (photoSettings.availablePreviewPhotoPixelFormatTypes.count > 0) {
        photoSettings.previewPhotoFormat = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : photoSettings.availablePreviewPhotoPixelFormatTypes.firstObject };
    }
    //photoSettings.photoQualityPrioritization = AVCapturePhotoQualityPrioritizationSpeed;
    [self.photoOutput capturePhotoWithSettings:photoSettings delegate:self.capturePhotoDelegate];

Can you please tell me if I've done something wrong, or if the system requires enabling depth delivery?
I was planning to support all iPhones supported by iOS 13, like the iPhone SE (first generation). It has only one camera at the back, and today with this code I cannot get the cameraCalibrationData.
My program will also use AVCaptureVideoDataOutput. When I use it with an iPhone SE, I am able to retrieve the camera matrix with the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix attachment on the sample buffer. Is there a way to retrieve the distortion?
And finally the last question: I was planning to do some 3D reconstruction from the synchronized pictures. Do you think the extrinsic matrix given by the phone is accurate enough? The two cameras are so close (about 10 mm apart) that a small error in the translation vector may have a significant impact on the true scale of the measured object.
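For the single-camera case, the per-frame intrinsics path I mentioned looks roughly like this (a sketch; videoDataOutput stands for my AVCaptureVideoDataOutput, the connection has to opt in before the session starts, and only the 3x3 matrix is delivered this way, not the distortion):

// During session setup, after adding the AVCaptureVideoDataOutput:
AVCaptureConnection *connection = [videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
if (connection.isCameraIntrinsicMatrixDeliverySupported) {
    connection.cameraIntrinsicMatrixDeliveryEnabled = YES;
}

// In the AVCaptureVideoDataOutputSampleBufferDelegate callback:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // The attachment is a CFData wrapping a matrix_float3x3.
    CFTypeRef attachment = CMGetAttachment(sampleBuffer,
                                           kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                           NULL);
    if (attachment != NULL) {
        matrix_float3x3 K;
        CFDataGetBytes((CFDataRef)attachment, CFRangeMake(0, sizeof(K)), (UInt8 *)&K);
        NSLog(@"fx=%f fy=%f ox=%f oy=%f",
              K.columns[0][0], K.columns[1][1], K.columns[2][0], K.columns[2][1]);
    }
}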