I am using a RemoteIO unit along with AVCaptureSession to play back samples received from the built-in or an external microphone. The problem is that AVCaptureSession freezes for a few seconds at startup if the RemoteIO unit is initialised at the same time. Is there a better and faster way to loop back samples received through AVCaptureAudioDataOutput to the headphones? There is a sample code project by Apple called AVCaptureToAudioUnit that uses AUGraph, but it is quite old and no longer builds.
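For context, here is a minimal sketch of the kind of microphone monitoring I am after, assuming AVAudioEngine would be acceptable in place of a hand-configured RemoteIO unit (and that the AVAudioSession is already set up for play-and-record):
import AVFoundation
// Route the microphone input straight to the output for low-latency monitoring.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)
engine.connect(input, to: engine.mainMixerNode, format: format)
do {
    try engine.start()
} catch {
    print("Failed to start audio engine: \(error)")
}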
Is it possible to get certified financial reports from Apple detailing sales of apps in different countries across the world, along with VAT deductions? Online reports may not be accepted in some countries for audit purposes, particularly when you have income from multiple countries.
Do AVFoundation exports have only fixed frame rate output in presets other than passthrough? In other words, do AVAssetExportSession and AVAssetWriter/AVAssetReader ever output variable frame rate video? If the input video was 29.43 fps, I see that the output video is always 30 fps when I render using the above two interfaces. It looks like AVFoundation applies a default video composition while rendering.
The WWDC video gives compressionSettings for HDR10 and HLG but doesn't mention any setting for Dolby Vision. Is it not supported, or am I missing something?
WWDC 2020 video 10010 says we need to specify MDCV (mastering display color volume) and CLLI (content light level information) when encoding HDR10 video, but how do we determine the values for these fields from the camera?
I have CVPixelBuffers in kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange format, which is 10-bit HDR. I need to convert these to RGB and display them in an MTKView. I need to know the correct pixel format to use, the BT.2020 YCbCr-to-RGB conversion matrix, and how to display the 10-bit RGB pixel buffer in an MTKView.
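For what it's worth, a sketch of the first step I have in mind, assuming a CVMetalTextureCache already exists: wrap the luma and chroma planes as .r16Unorm / .rg16Unorm textures, and then do the BT.2020 conversion in a fragment shader.
import Metal
import CoreVideo
// The 10-bit samples sit in the upper bits of 16-bit words, so the shader may need to
// rescale the normalized values before applying the BT.2020 matrix.
func makeTextures(from pixelBuffer: CVPixelBuffer,
                  cache: CVMetalTextureCache) -> (luma: MTLTexture, chroma: MTLTexture)? {
    var lumaRef: CVMetalTexture?
    var chromaRef: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(nil, cache, pixelBuffer, nil, .r16Unorm,
                                              CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
                                              CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
                                              0, &lumaRef)
    CVMetalTextureCacheCreateTextureFromImage(nil, cache, pixelBuffer, nil, .rg16Unorm,
                                              CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
                                              CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
                                              1, &chromaRef)
    guard let luma = lumaRef.flatMap(CVMetalTextureGetTexture),
          let chroma = chromaRef.flatMap(CVMetalTextureGetTexture) else { return nil }
    return (luma, chroma)
}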
I see that some apps have started supporting Dolby Vision 10-bit HDR recording, but I see no AVFoundation API to enable it. AVFoundation engineers, please let me know if I am missing any specific API to record video in 10 bit.
I want to know how AVFoundation handles variable frame rate compositions. I create an AVMutableComposition from a bunch of videos (most have variable frame rate, as the iPhone camera never shoots video at a constant frame rate), then assign an AVVideoComposition with a custom frameDuration. If I then play back this composition or render it using AVAssetExportSession/AVAssetWriter, will the output always be a constant frame rate video, and will AVPlayer play it at a constant frame rate (perhaps by dropping frames)?
Also, how do I specify frame rates such as 29.97 or 23.98 for rendering and playback?
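As a concrete example of what I mean, the NTSC-style rates can presumably be expressed exactly with 1001-based CMTime values (a sketch, assuming videoComposition is an AVMutableVideoComposition):
videoComposition.frameDuration = CMTime(value: 1001, timescale: 30000) // 29.97 fps
// videoComposition.frameDuration = CMTime(value: 1001, timescale: 24000) // 23.976 fps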
Is it possible to have a horizontally scrolling 2D UICollectionView with cells that are variable in width but equal in height, without subclassing UICollectionViewLayout? Some items may even be wider than the collectionView frame, the size of any two items is not the same in general, and the gap between successive items may also be variable. The WWDC videos say compositional layout can achieve anything we can imagine; I'm not sure how to do this.
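One approach I can guess at is sketched below, with estimated item widths and an orthogonally scrolling section; I am not sure it handles items wider than the collectionView frame, which is partly why I'm asking:
import UIKit
// Self-sizing widths (resolved via Auto Layout in the cell), fixed row height,
// horizontal scrolling within the section.
let itemSize = NSCollectionLayoutSize(widthDimension: .estimated(100),
                                      heightDimension: .fractionalHeight(1.0))
let item = NSCollectionLayoutItem(layoutSize: itemSize)
let groupSize = NSCollectionLayoutSize(widthDimension: .estimated(100),
                                       heightDimension: .absolute(120))
let group = NSCollectionLayoutGroup.horizontal(layoutSize: groupSize, subitems: [item])
let section = NSCollectionLayoutSection(group: group)
section.orthogonalScrollingBehavior = .continuous
section.interGroupSpacing = 8
let layout = UICollectionViewCompositionalLayout(section: section)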
iPhone 12 Pro and iPhone 12 Pro Max support HDR recording in a 10-bit color space. But we are still waiting for documentation and dos/don'ts from AVFoundation engineers so we can make apps ready for recording 10-bit video on iPhone 12 Pro using AVCaptureVideoDataOutput and displaying the sample buffers in an MTKView.
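In the meantime, my best guess at the first step is sketched below: select a capture format whose native pixel format is the 10-bit 'x420' type before starting the session (assuming device is the AVCaptureDevice in use):
import AVFoundation
// Pick the first device format that natively delivers 10-bit 4:2:0 video.
let tenBitFormat = device.formats.first { format in
    CMFormatDescriptionGetMediaSubType(format.formatDescription) ==
        kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
}
if let format = tenBitFormat {
    do {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}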
Dear Apple Engineers,
There seems to be incomplete documentation about the new AVFoundation methods introduced in iOS 14:
@available(iOS 13.0, *)
optional func anticipateRendering(using renderHint: AVVideoCompositionRenderHint)
@available(iOS 13.0, *)
optional func prerollForRendering(using renderHint: AVVideoCompositionRenderHint)
How exactly do we use these methods? When are they called? How much time can we take to load textures from memory inside them? What is the correct usage? Can anyone explain?
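My current reading, sketched below purely as a guess: the render hint exposes the upcoming composition time range, so a custom compositor conforming to AVVideoCompositing could warm its texture caches there (preloadTextures is a hypothetical helper of my own, not an AVFoundation API):
// Guesswork, not confirmed by documentation.
func anticipateRendering(using renderHint: AVVideoCompositionRenderHint) {
    // Hypothetical helper: prefetch/decode textures for the hinted time range.
    preloadTextures(from: renderHint.startCompositionTime,
                    to: renderHint.endCompositionTime)
}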
When in multi-selection mode (selectionLimit = 0), PHPickerViewController almost every time detects a scroll or swipe gesture as a touch and selects the entire row of photos. It is so buggy that users don't know which photos they inadvertently selected! While it is easy to use third-party frameworks for this job, the biggest advantage of this picker is that it doesn't need the complex Photo Library permissions that are confusing users on iOS 14. Are Apple engineers even aware of this serious bug that makes the picker almost useless in multi-selection mode?
PHPickerViewController allows access to copies of photo library assets as well as returning PHAssets in the results. To get PHAssets instead of file copies, I do:
let photolibrary = PHPhotoLibrary.shared()
var configuration = PHPickerConfiguration(photoLibrary: photolibrary)
configuration.filter = .videos
configuration.selectionLimit = 0
let picker = PHPickerViewController(configuration: configuration)
picker.delegate = self
self.present(picker, animated: true, completion: nil)
And then,
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true) {
        let identifiers: [String] = results.compactMap(\.assetIdentifier)
        let fetchResult = PHAsset.fetchAssets(withLocalIdentifiers: identifiers, options: nil)
        NSLog("\(identifiers), \(fetchResult)")
    }
}
But the problem is that once the photo picker is dismissed, the app prompts for Photo Library access, which is confusing; since the user already implicitly granted access to the selected assets in PHPickerViewController, PHPhotoLibrary should load those assets directly. Is there any way to avoid the Photo Library permission prompt? The other option, copying the assets into the app, is a waste of space for editing applications.
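For comparison, the prompt-free path I would rather avoid is sketched below, inside the didFinishPicking callback shown above; it loads the picked video through its item provider without any Photo Library authorization, but it copies the file, which is exactly the waste of space mentioned above:
import UniformTypeIdentifiers
if let provider = results.first?.itemProvider,
   provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) {
    provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
        // url is temporary; copy it somewhere permanent if the asset is needed later.
    }
}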
I am using the following code to request Photo Library access with an add-only prompt:
PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
    handler(status)
}
However, this still shows the wrong prompt: "App Would Like to Access Your Photos". Why is that, and what can I do to show "App Would Like to Add to Your Photos" as shown in the WWDC video?
PHPhotoLibrary gives an AVComposition rather than an AVURLAsset for a video recorded in slow motion. I want to insert slo-mo videos into another AVMutableComposition, so this means I need to insert this AVComposition into the AVMutableComposition that is my editing timeline. The hack I used before was to load the tracks and segments and find the media URL of the asset:
AVCompositionTrack *track = [avAsset tracks][0];
AVCompositionTrackSegment *segment = track.segments[0];
mediaURL = [segment sourceURL];
Once I had the mediaURL, I was able to create a new AVAsset that could be inserted into the AVMutableComposition. But I wonder if there is a cleaner approach that allows the slo-mo AVComposition to be inserted directly into the timeline AVMutableComposition?
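One alternative I have considered is sketched below (assuming phAsset is the PHAsset for the slo-mo video): request the original file from PHImageManager, which returns an AVURLAsset, but I believe it drops the slow-motion timing ramp applied in Photos, so it may not be equivalent.
import Photos
import AVFoundation
// Request the original (non-rendered) asset instead of the AVComposition.
let options = PHVideoRequestOptions()
options.version = .original
PHImageManager.default().requestAVAsset(forVideo: phAsset, options: options) { avAsset, _, _ in
    if let urlAsset = avAsset as? AVURLAsset {
        // urlAsset.url can be used to build a new AVAsset for the editing timeline.
    }
}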