Posts

Post marked as solved
1 Reply
Camera calibration data delivery (of which intrinsics are a part) is currently only supported when geometric distortion correction (GDC) is off, unfortunately. This is documented in AVCapturePhotoOutput.h:

/*!
 @property cameraCalibrationDataDeliverySupported
 @abstract
    Specifies whether the photo output's current configuration supports delivery of AVCameraCalibrationData in the resultant AVCapturePhoto.
 @discussion
    Camera calibration data delivery (intrinsics, extrinsics, lens distortion characteristics, etc.) is only supported if virtualDeviceConstituentPhotoDeliveryEnabled is YES and contentAwareDistortionCorrectionEnabled is NO and the source device's geometricDistortionCorrectionEnabled property is set to NO. This property is key-value observable.
 */
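Putting those requirements together, here is a minimal Swift sketch of the opt-in sequence. It assumes `device` is a virtual AVCaptureDevice (e.g. the Dual or Triple Camera) and `photoOutput` is an AVCapturePhotoOutput already attached to a running session; the function name is just a placeholder.

import AVFoundation

// Sketch: `device` is assumed to be a virtual device (e.g. .builtInDualCamera),
// `photoOutput` an AVCapturePhotoOutput already added to the session.
func makeCalibrationPhotoSettings(device: AVCaptureDevice,
                                  photoOutput: AVCapturePhotoOutput) throws -> AVCapturePhotoSettings {
    // 1. Geometric distortion correction must be off on the source device.
    try device.lockForConfiguration()
    if device.isGeometricDistortionCorrectionSupported {
        device.isGeometricDistortionCorrectionEnabled = false
    }
    device.unlockForConfiguration()

    // 2. Constituent photo delivery on, content-aware distortion correction off.
    if photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported {
        photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
    }
    if photoOutput.isContentAwareDistortionCorrectionSupported {
        photoOutput.isContentAwareDistortionCorrectionEnabled = false
    }

    // 3. Only now does cameraCalibrationDataDeliverySupported report true.
    let settings = AVCapturePhotoSettings()
    settings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = device.constituentDevices
    if photoOutput.isCameraCalibrationDataDeliverySupported {
        settings.isCameraCalibrationDataDeliveryEnabled = true
    }
    return settings
}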
Post not yet marked as solved
2 Replies
Yes, please do. When you file your enhancement request, please include this write-up of the use-case you're hoping to enable, as well as what features you'd like to see in a manual flash control API. Thanks for the feedback!
Post marked as solved
2 Replies
Hi there. AVCaptureMovieFileOutput is only capable of recording from one video source at a time. You have a couple of options:

1. Use AVCaptureMultiCamSession + 2 AVCaptureVideoDataOutputs + 1 AVAssetWriter. Get buffers from each of the video data outputs, composite them into a larger buffer (positioning them side by side), and then pass that new buffer to AVAssetWriter to record to a movie. This is very similar to the sample code we published last year called AVMultiCamPiP (https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/avmulticampip_capturing_from_multiple_cameras). That app composites the second camera as a picture-in-picture in the corner of the screen and changes the primary camera when you double tap the screen. A skeleton of this setup is sketched after this list.

2. Use two different AVCaptureMovieFileOutputs to write 2 different movie files to disk, one representing each camera. After recording, read the two movies back frame by frame, composite them together, and re-write the result using AVAssetWriter.
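Here is a minimal Swift skeleton of option 1's capture-side setup, assuming multicam-capable hardware. The side-by-side compositing and the AVAssetWriter stage are omitted, and the class and queue names are placeholders rather than anything taken from the AVMultiCamPiP sample.

import AVFoundation

final class DualCameraRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureMultiCamSession()
    let backOutput = AVCaptureVideoDataOutput()
    let frontOutput = AVCaptureVideoDataOutput()
    let queue = DispatchQueue(label: "camera.output")

    func configure() {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return }
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Add each camera and its own video data output without implicit connections,
        // then wire the connections up explicitly.
        for (position, output) in [(AVCaptureDevice.Position.back, backOutput),
                                   (.front, frontOutput)] {
            guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                       for: .video, position: position),
                  let input = try? AVCaptureDeviceInput(device: camera),
                  session.canAddInput(input) else { continue }
            session.addInputWithNoConnections(input)

            guard session.canAddOutput(output) else { continue }
            session.addOutputWithNoConnections(output)
            output.setSampleBufferDelegate(self, queue: queue)

            if let port = input.ports(for: .video,
                                      sourceDeviceType: camera.deviceType,
                                      sourceDevicePosition: camera.position).first {
                let connection = AVCaptureConnection(inputPorts: [port], output: output)
                if session.canAddConnection(connection) {
                    session.addConnection(connection)
                }
            }
        }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Composite the two streams side by side here and append the result to an
        // AVAssetWriterInputPixelBufferAdaptor; omitted for brevity.
    }
}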
Post marked as solved
2 Replies
AVFoundation's capture APIs allow one to detect machine readable codes (a variety of 1-D and 2-D codes) from the camera. The API set is easy to configure — it requires no work on the part of the developer to distinguish between detection and tracking of barcodes. You just opt in for the type of codes you'd like to detect. The downside is that the AVCaptureMetadataOutput barcode detection only works on frames streamed in real-time from the camera. It's not a general purpose detector that can be used to find barcodes in a single image or sequence of images stored in a file. Vision's barcode detection is better for the latter purpose.
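For illustration, a hedged Swift sketch of the two approaches. It assumes an already-configured AVCaptureSession for the streaming case and a CGImage you've loaded yourself for the stored-image case; the function names are placeholders.

import AVFoundation
import Vision

// Live camera detection: opt in for the barcode types you want. Assumes `session`
// already has a camera input and `delegate` adopts AVCaptureMetadataOutputObjectsDelegate.
func addBarcodeDetection(to session: AVCaptureSession,
                         delegate: AVCaptureMetadataOutputObjectsDelegate) {
    let metadataOutput = AVCaptureMetadataOutput()
    guard session.canAddOutput(metadataOutput) else { return }
    session.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(delegate, queue: .main)
    // Only request types the current configuration can actually produce.
    let wanted: [AVMetadataObject.ObjectType] = [.qr, .ean13, .code128]
    metadataOutput.metadataObjectTypes = wanted.filter {
        metadataOutput.availableMetadataObjectTypes.contains($0)
    }
}

// Stored image: use Vision's general-purpose barcode detector instead.
func detectBarcodes(in image: CGImage) throws -> [VNBarcodeObservation] {
    let request = VNDetectBarcodesRequest()
    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return request.results ?? []
}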
Post marked as solved
4 Replies
Slightly cropped compared to what? Compared to the still image accompanying the Live Photo movie? This is expected, since Live Photo movies are stabilized to counter handshake, which eats slightly into the field of view.
Post not yet marked as solved
2 Replies
Are you aware that different configurations of microphones are used in AVCaptureMultiCamSession compared to AVCaptureSession? A multicam session is also a multi-mic capable session. It allows simultaneous capture of up to 3 audio beams:

- Omni-directional audio
- Front-facing audio
- Rear-facing audio

This differs from AVCaptureSession, in which the audio captured always follows the direction of the camera you're using. My guess would be that you're mistakenly capturing a different beam form than you're accustomed to getting in AVCaptureSession. All of this behavior is discussed in depth near the end of 2019's Session 249: Introducing Multi-Camera Capture for iOS (https://developer.apple.com/videos/play/wwdc2019/249/). It's also discussed in AVCaptureInput.h, such as in the headerdoc for the sourceDevicePosition property (copied below).

/*!
 @property sourceDevicePosition
 @abstract
    The AVCaptureDevicePosition of the source device providing input through this port.
 @discussion
    All AVCaptureInputPorts contained in an AVCaptureDeviceInput's ports array have the same sourceDevicePosition, which is deviceInput.device.position. When working with microphone input in an AVCaptureMultiCamSession, it is possible to record multiple microphone directions simultaneously, for instance, to record front-facing microphone input to pair with video from the front-facing camera, and back-facing microphone input to pair with the video from the back-facing camera. By calling -[AVCaptureDeviceInput portsWithMediaType:sourceDeviceType:sourceDevicePosition:], you may discover additional hidden ports originating from the source audio device. These ports represent individual microphones positioned to pick up audio from one particular direction. Examples follow.

    To discover the audio port that captures omnidirectional audio, use [microphoneDeviceInput portsWithMediaType:AVMediaTypeAudio sourceDeviceType:AVCaptureDeviceTypeBuiltInMicrophone sourceDevicePosition:AVCaptureDevicePositionUnspecified].firstObject.

    To discover the audio port that captures front-facing audio, use [microphoneDeviceInput portsWithMediaType:AVMediaTypeAudio sourceDeviceType:AVCaptureDeviceTypeBuiltInMicrophone sourceDevicePosition:AVCaptureDevicePositionFront].firstObject.

    To discover the audio port that captures back-facing audio, use [microphoneDeviceInput portsWithMediaType:AVMediaTypeAudio sourceDeviceType:AVCaptureDeviceTypeBuiltInMicrophone sourceDevicePosition:AVCaptureDevicePositionBack].firstObject.
 */
@property(nonatomic, readonly) AVCaptureDevicePosition sourceDevicePosition API_AVAILABLE(ios(13.0)) SPI_AVAILABLE(macos(10.15)) API_UNAVAILABLE(tvos, watchos);
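In Swift, the same port discovery looks roughly like this. A sketch: it assumes `micInput` is an AVCaptureDeviceInput wrapping the built-in microphone, added to your AVCaptureMultiCamSession, and the function name is a placeholder.

import AVFoundation

// Assumes `micInput` was created from the built-in microphone device and added
// to the multicam session (typically via addInputWithNoConnections(_:)).
func audioPorts(of micInput: AVCaptureDeviceInput)
    -> (omni: AVCaptureInput.Port?, front: AVCaptureInput.Port?, back: AVCaptureInput.Port?) {
    let omni = micInput.ports(for: .audio,
                              sourceDeviceType: .builtInMicrophone,
                              sourceDevicePosition: .unspecified).first
    let front = micInput.ports(for: .audio,
                               sourceDeviceType: .builtInMicrophone,
                               sourceDevicePosition: .front).first
    let back = micInput.ports(for: .audio,
                              sourceDeviceType: .builtInMicrophone,
                              sourceDevicePosition: .back).first
    return (omni, front, back)
}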
Post marked as solved
1 Reply
Hi Frank, what you've described is the expected behavior. Different engines are used for the disparity / depth generation in AVCaptureDepthDataOutput and AVCapturePhotoOutput. The photo output depth generation takes longer, but is more accurate. It is optimized to deliver depth (actually it's natively disparity) results that will be saved in image files (HEIC/JPEG). When saving depth data to image files, we always use disparity (it compresses better than depth), and translate the 16-bit floating point values to 8-bit fixed point values. When using the depth data output, there's no requirement to save depth to files, so we allow its delivery as 16 or 32 bit floating point, disparity or depth. Natively, it is calculated as disparity (speaking of Dual / DualWide / Triple / TrueDepth cameras).
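As a concrete illustration of the streaming side, here is a sketch of an AVCaptureDepthDataOutput delegate that converts whatever the native representation is into 32-bit floating-point depth. The session and output wiring (adding the output, setting the delegate) is assumed.

import AVFoundation
import CoreVideo

final class DepthReceiver: NSObject, AVCaptureDepthDataOutputDelegate {
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // Natively the data is disparity, typically kCVPixelFormatType_DisparityFloat16.
        let nativeType = depthData.depthDataType

        // Convert to 32-bit floating-point depth (meters) if that's what you need.
        let depth32 = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let map: CVPixelBuffer = depth32.depthDataMap
        _ = (nativeType, map)
    }
}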
Post marked as solved
2 Replies
We do not support writing uncompressed disparity/depth data to HEIC or JPEG. This is a great feature request though. Please feel free to use feedbackassistant.apple.com to formally request support for this feature.
Post not yet marked as solved
5 Replies
I'd suggest you send us a lite version of your code that reproduces the problem, using https://feedbackassistant.apple.com. You could of course swap in a different AVCaptureVideoPreviewLayer, though you shouldn't have to.
Post not yet marked as solved
3 Replies
Are you writing a native macOS app or attempting to compile a Catalyst app?
Post not yet marked as solved
5 Replies
Sounds like you didn't actually deallocate the multicam session. You may have released it, but someone was still holding onto it (perhaps an autorelease pool) at the time you made your new AVCaptureSession and tried to add that same camera to it. Setting the new AVCaptureSession's sessionPreset to 4K implicitly changes the camera's activeFormat to 4K, which is an unsupported configuration for the multicam session. In both cases, the correct thing to do is to stop the multicam session ([session stopRunning]) before trying to get rid of it. Then you can proceed with your 4K session without fear. The multicam session only cares what format you've set on your cameras while it's running.
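A sketch of that teardown order in Swift; the function and parameter names are placeholders.

import AVFoundation

func switchTo4KSession(from multiCamSession: AVCaptureMultiCamSession,
                       camera: AVCaptureDevice) throws -> AVCaptureSession {
    // Stop the multicam session before abandoning it or reusing its camera.
    multiCamSession.stopRunning()

    // Now a plain AVCaptureSession can take the same camera at 4K.
    let session = AVCaptureSession()
    session.beginConfiguration()
    session.sessionPreset = .hd4K3840x2160
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) { session.addInput(input) }
    session.commitConfiguration()
    session.startRunning()
    return session
}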
Post not yet marked as solved
12 Replies
Have you disabled geometric distortion correction on the AVCaptureDevice?
Post not yet marked as solved
12 Replies
May I ask which device you're running on, and what the device.deviceType of the AVCaptureDeviceInput connected to your AVCapturePhotoOutput is?
Post not yet marked as solved
12 Replies
Not a bug. Depth data is only available for the intersection of pixels between the two cameras being used to create stereo depth; in your case, the wide and the tele. We only have depth information for the tele field of view, so we only deliver a depthData with the tele image. You can get intrinsics for both images if you opt in for cameraCalibrationDataDeliveryEnabled, in which case the cameraCalibrationData is delivered as a top-level property of the AVCapturePhoto, and it will be delivered for both photos in the virtualDeviceConstituentPhotoDelivery capture. Note that the properties named "dualXXXDelivery" have been deprecated and replacements provided: virtualDeviceConstituentPhotoDelivery. This reflects the reality that we now have virtual cameras consisting of more than just two physical cameras; the Triple Camera on iPhone 11 Pro and iPhone 11 Pro Max consists of tele, wide, and ultra wide.
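On the receiving end, the difference shows up in your AVCapturePhotoCaptureDelegate, which is called once per constituent photo. A sketch, assuming calibration data delivery was enabled in the photo settings as described above:

import AVFoundation

final class PhotoReceiver: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Which constituent camera produced this photo (wide, tele, ...).
        let sourceType = photo.sourceDeviceType

        // Delivered for every constituent photo when calibration delivery is on.
        let calibration = photo.cameraCalibrationData

        // Only present for the photo whose field of view has stereo coverage
        // (the tele image in a wide + tele pair).
        let depth = photo.depthData

        print(sourceType as Any, calibration?.intrinsicMatrix as Any, depth as Any)
    }
}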
Post not yet marked as solved
1 Reply
Thanks for the report. Probably the best course of action would be to file a bug. Please include examples of the DNGs so we can get a feel for what's different (does it look like different exposure? white balance?), as well as a small test project that reproduces the problem so we can repeat your calling sequence. Also, let us know which software you're using to observe your results. Thanks.