I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled = true, to enable delivery of "simultaneous" photos (AVCapturePhoto) from a single capture request.
I am using the hd1920x1080 preset, but both the wide and ultra-wide photos are being delivered at the highest possible resolution (4224x2376). I've tried disabling every setting on the AVCapturePhotoOutput, AVCapturePhotoSettings and AVCaptureDevice that might be forcing the 4K resolution rather than 1080p, but nothing has worked.
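For reference, a trimmed sketch of the configuration described above (error handling and the photo delegate are omitted; photoCaptureDelegate stands in for our own delegate object):

import AVFoundation

// Minimal sketch of the capture setup in question.
let session = AVCaptureSession()
session.beginConfiguration()
session.sessionPreset = .hd1920x1080

// .builtInDualWideCamera is a virtual device backed by the wide and ultra-wide cameras.
guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
      let input = try? AVCaptureDeviceInput(device: device) else {
    fatalError("builtInDualWideCamera unavailable")
}
session.addInput(input)

let photoOutput = AVCapturePhotoOutput()
session.addOutput(photoOutput)
photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
session.commitConfiguration()

// Per capture request: ask for one photo from each constituent device.
let photoSettings = AVCapturePhotoSettings()
photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureDelegate)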
Some debugging that I've done:
When I turn off constituent photo delivery by commenting out the line of code below, I get a single photo delivered at the expected 1080p resolution.
// photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
I tried constituent photo delivery with the .builtInDualCamera and got only 4K results (same as described above).
I tried using an AVCaptureMultiCamSession with .builtInDualWideCamera and also got only 4K imagery.
I inspected the resolved settings via photo.resolvedSettings.photoDimensions, and the dimensions suggest the imagery should be 1080p, but when I inspect the UIImage it is always 4K.
guard let imageData = photo.fileDataRepresentation() else { return }
guard let capturedImage = UIImage(data: imageData) else { return }
print("photo.resolvedSettings.photoDimensions", photo.resolvedSettings.photoDimensions) // 1920x1080
print("capturedImage.size", capturedImage.size) // 4224x2376
--
Any help here would be greatly appreciated, as I've run out of things to try and documentation to follow.
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is set up with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled, since some features we use are not supported in that mode.
At the moment, we are manually trying to synchronize frames between the two cameras, but we would like to use the AVCaptureDataOutputSynchronizer to improve our results.
Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs I've found show synchronization of video with depth, metadata, or audio, but never of two video outputs.
From my testing, the dataOutputSynchronizer fires with either the wide video output or the ultra-wide video output, but never both (at least one is always nil), suggesting that they are not being synchronized.
self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)
...
func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer,
                            didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncedWideData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // Either syncedWideData or syncedUltraData is always nil, so the guard condition never passes.
}
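In the meantime, this is roughly the manual pairing we fall back to: keep the latest sample buffer from each camera's AVCaptureVideoDataOutput delegate and emit a pair when the presentation timestamps are within a tolerance. A sketch (the class name and the tolerance value are illustrative):

import CoreMedia

// Sketch: pair frames from two video outputs by presentation timestamp.
final class FramePairer {
    private var latestWide: CMSampleBuffer?
    private var latestUltra: CMSampleBuffer?
    private let tolerance = CMTime(value: 1, timescale: 120) // ~8 ms

    // Call from each output's captureOutput(_:didOutput:from:) callback.
    // Note: holding onto sample buffers can stall capture; in production,
    // copy out what you need instead of retaining the buffers.
    func ingest(_ buffer: CMSampleBuffer, isWide: Bool) -> (wide: CMSampleBuffer, ultra: CMSampleBuffer)? {
        if isWide { latestWide = buffer } else { latestUltra = buffer }
        guard let wide = latestWide, let ultra = latestUltra else { return nil }
        let delta = CMTimeAbsoluteValue(CMTimeSubtract(
            CMSampleBufferGetPresentationTimeStamp(wide),
            CMSampleBufferGetPresentationTimeStamp(ultra)))
        guard CMTimeCompare(delta, tolerance) <= 0 else { return nil }
        latestWide = nil
        latestUltra = nil
        return (wide, ultra)
    }
}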
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled = true, to enable delivery of "simultaneous" photos (AVCapturePhoto) from a single capture request.
Ideally, our app would have the timestamp difference between the photos in a single capture request be as short as possible, but we don't have a good idea of the theoretical or practical limits of this difference.
In my testing on an iPhone 12 Pro, with a frame rate of 33 Hz and the preset set to hd1920x1080, I get a timestamp difference between photos of approximately 0.3 ms, which seems smaller than I would expect, unless the frames are being synchronized incredibly well under the hood.
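For reference, a sketch of how we measure that difference in the photo capture delegate, assuming both constituent photos of one request arrive in successive didFinishProcessingPhoto callbacks:

import AVFoundation

// Sketch: compute the timestamp delta between the two constituent photos.
var firstTimestamp: CMTime?

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    if let first = firstTimestamp {
        let delta = CMTimeSubtract(photo.timestamp, first)
        print("constituent timestamp delta (s):", CMTimeGetSeconds(delta))
        firstTimestamp = nil
    } else {
        firstTimestamp = photo.timestamp
    }
}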
This leaves the following unanswered questions:
1. What sort of ranges of values should we expect from these timestamp differences between photos?
2. What factors influence this?
3. Is there any way to control these values to ensure they are as small as possible? (Likely answered by (2).)
We have a camera application which attempts to display a zoomed-in area of the AVCaptureVideoPreviewLayer from an AVCaptureDeviceInput, while still capturing images of the non-zoomed area of the chosen camera. The user also needs to be able to tap to focus.
We are currently using the SwiftUI component below to achieve this. However, when handling tap-to-focus events using videoPreviewLayer.captureDevicePointConverted with videoPreviewCropFactor set to 2.0 (effectively zooming the camera preview 2x), the resulting captureDevicePoint does not correspond to the correct position in the capture device's reference frame.
import AVFoundation
import SwiftUI

@available(iOS 15.0, *)
struct CameraPreview: UIViewRepresentable {
    var preview: AVCaptureVideoPreviewLayer
    var cropFactor: Double

    func makeUIView(context _: Context) -> UIView {
        let uiView = UIView()
        uiView.layer.addSublayer(self.preview)
        DispatchQueue.main.async {
            self.preview.bounds = self.getPreviewBounds(uiView)
            self.preview.position = uiView.layer.position
        }
        return uiView
    }

    func updateUIView(_ uiView: UIView, context _: Context) {
        self.preview.bounds = self.getPreviewBounds(uiView)
        self.preview.position = uiView.layer.position
    }

    func getPreviewBounds(_ uiView: UIView) -> CGRect {
        return CGRect(x: 0, y: 0,
                      width: uiView.layer.bounds.width * self.cropFactor,
                      height: uiView.layer.bounds.height * self.cropFactor)
    }
}
// parent component
struct YieldMultiCaptureView: View {
    var body: some View {
        ZStack {
            CameraPreview(preview: cameraController.videoPreviewLayer,
                          cropFactor: self.videoPreviewCropFactor)
                .ignoresSafeArea()
                .opacity(cameraController.isCaptureInProgress ? 0.2 : 1)
                .onTouch(type: .started, perform: onCameraViewTap)
    ...
    func onCameraViewTap(_ location: CGPoint) {
        let captureDevicePoint = cameraController.videoPreviewLayer.captureDevicePointConverted(fromLayerPoint: location)
        cameraController.focusCamera(focusPoint: captureDevicePoint)
    }
    ...
Note: previous answers I have seen suggest applying a zoom factor to the device input. However, that will not work in this case, since we still want to capture the non-zoomed region of the camera.
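For what it's worth, a sketch of the compensation we are experimenting with, under the assumption that the tap location is in the UIView's coordinate space while captureDevicePointConverted expects a point in the (oversized, centered) preview layer's own space:

// Sketch: translate the tap from view space into the preview layer's space
// before converting. Assumes the layer bounds are cropFactor x the view
// bounds and the layer is centered on the view.
func onCameraViewTap(_ location: CGPoint) {
    let layer = cameraController.videoPreviewLayer
    let viewSize = CGSize(width: layer.bounds.width / videoPreviewCropFactor,
                          height: layer.bounds.height / videoPreviewCropFactor)
    let layerPoint = CGPoint(x: location.x + (layer.bounds.width - viewSize.width) / 2,
                             y: location.y + (layer.bounds.height - viewSize.height) / 2)
    let devicePoint = layer.captureDevicePointConverted(fromLayerPoint: layerPoint)
    cameraController.focusCamera(focusPoint: devicePoint)
}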
Hi there,
I am building a camera application to capture an image with the wide and ultra-wide cameras simultaneously (or as close as possible), with the intrinsics and extrinsics for each camera also delivered.
We are able to achieve this with an AVCaptureMultiCamSession and AVCaptureVideoDataOutput, setting up the .builtInWideAngleCamera and .builtInUltraWideCamera manually. Doing this, we can enable delivery of the intrinsics via each camera's AVCaptureConnection. Geometric distortion correction is also enabled for the ultra-wide camera (by default).
However, we are investigating whether it is possible to move the application over to the .builtInDualWideCamera with AVCapturePhotoOutput and AVCaptureSession, to simplify our application and get access to depth data. We are using the isVirtualDeviceConstituentPhotoDeliveryEnabled = true property to allow simultaneous capture. Functionally, everything works fine, except that unless isGeometricDistortionCorrectionEnabled is set to false, photoOutput.isCameraCalibrationDataDeliverySupported returns false.
From this thread and the docs, it appears that we cannot get the intrinsics when isGeometricDistortionCorrectionEnabled = true (only applicable to the ultra-wide), unless we use an AVCaptureVideoDataOutput.
Is there any way to get access to the intrinsics for the wide and ultra-wide cameras while keeping geometric distortion correction enabled for the ultra-wide?
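For context, with distortion correction disabled we request the calibration data per capture roughly like this (a sketch; delegate wiring omitted):

// Sketch: requesting calibration data per capture. This only works when
// isGeometricDistortionCorrectionEnabled is false on the device.
let settings = AVCapturePhotoSettings()
settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices
if photoOutput.isCameraCalibrationDataDeliverySupported {
    settings.isCameraCalibrationDataDeliveryEnabled = true
}
photoOutput.capturePhoto(with: settings, delegate: self)
// Each delivered AVCapturePhoto then carries photo.cameraCalibrationData
// (.intrinsicMatrix, .extrinsicMatrix, lens distortion lookup tables).

Our session setup, for reference: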
guard let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
    throw InitError.error("Could not find builtInDualWideCamera")
}
self.captureDevice = captureDevice
self.videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
self.photoOutput = AVCapturePhotoOutput()
self.captureSession = AVCaptureSession()

self.captureSession.beginConfiguration()
captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
captureSession.addInput(self.videoDeviceInput)
captureSession.addOutput(self.photoOutput)

try captureDevice.lockForConfiguration()
captureDevice.isGeometricDistortionCorrectionEnabled = false // <- NB line
captureDevice.unlockForConfiguration()

// configure photoOutput
guard self.photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported else {
    throw InitError.error("Dual photo delivery is not supported")
}
self.photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
print("isCameraCalibrationDataDeliverySupported", self.photoOutput.isCameraCalibrationDataDeliverySupported) // false when distortion correction is enabled

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer delegate", attributes: []))
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}

self.videoPreviewLayer.setSessionWithNoConnection(self.captureSession)
self.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let cameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: self.videoDeviceInput.ports.first!, videoPreviewLayer: self.videoPreviewLayer)
self.captureSession.addConnection(cameraVideoPreviewLayerConnection)

self.captureSession.commitConfiguration()
self.captureSession.startRunning()
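And for completeness, the video-data-output workaround the docs point to: intrinsic matrix delivery can be enabled on the AVCaptureVideoDataOutput's connection, and the matrix read from a sample buffer attachment, even with distortion correction enabled. A sketch (imports of CoreMedia/simd assumed):

// Sketch: per-frame intrinsics via AVCaptureVideoDataOutput.
if let connection = videoOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    if let data = CMGetAttachment(sampleBuffer,
                                  key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                  attachmentModeOut: nil) as? Data {
        // 3x3 intrinsic matrix, column-major (simd).
        let intrinsics = data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
        print("fx:", intrinsics.columns.0.x, "fy:", intrinsics.columns.1.y)
    }
}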