AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

How to fix this Swift 6 migration issue?
Here is a code snippet that uses AVPlayer:

avPlayer.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 60), queue: .main) { [weak self] _ in
    // Call main actor-isolated instance methods
}

Xcode shows the warning "Call to main actor-isolated instance method '***' in a synchronous nonisolated context; this is an error in the Swift 6 language mode." How can I fix this?

avPlayer.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 60), queue: .main) { [weak self] _ in
    Task { @MainActor in
        // Call main actor-isolated instance methods
    }
}

Can I use the solution above? It seems that switching actors this frequently could slow down performance.
Replies: 1 · Boosts: 0 · Views: 873 · Activity: Sep ’24
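A possible way to avoid the per-tick Task hop in the question above: because the observer is registered with queue: .main, the closure already runs on the main thread, so MainActor.assumeIsolated can assert isolation synchronously instead of scheduling new work. This is a minimal sketch rather than a confirmed answer; the view-model type, player property, and updateUI(for:) method are placeholders.

import AVFoundation

@MainActor
final class PlaybackViewModel {
    private let player = AVPlayer()
    private var timeObserver: Any?

    // Hypothetical main-actor method the observer needs to call.
    private func updateUI(for time: CMTime) { /* ... */ }

    func startObserving() {
        // The closure runs on the main queue, so asserting main-actor
        // isolation avoids creating a Task on every callback.
        timeObserver = player.addPeriodicTimeObserver(
            forInterval: CMTime(value: 1, timescale: 60),
            queue: .main
        ) { [weak self] time in
            MainActor.assumeIsolated {
                self?.updateUI(for: time)
            }
        }
    }
}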
Is there a way to detect the front camera location on the newest iPad Pro and iPad Air models?
On the newest iPad Pro and iPad Air models, the front-facing camera sits on the long horizontal edge, which differs from previous models that had the camera on the shorter top edge. We have an app that needs to place a UI item near the camera. Is there a way to detect where the front-facing camera is on iOS? I have tried simple resolution checks, e.g.:

if width == ipadProWidth || width == ipadAirWidth {
    do2024iPadProLayout()
} else {
    doStandardiPadLayout()
}

But this doesn't feel like the nicest way to do the check, because it's liable to break going forward, and there's the possibility Apple releases more devices with the camera on the horizontal edge. Any help here is appreciated!
Replies: 1 · Boosts: 1 · Views: 283 · Activity: Sep ’24
AVAudioPlayer init very slow on iOS 18
On Xcode 16 (16A242), app execution and the UI stall / lag as soon as an AVAudioPlayer is initialized.

let audioPlayer = try AVAudioPlayer(contentsOf: URL)
audioPlayer.volume = 1.0
audioPlayer.delegate = self
audioPlayer.prepareToPlay()

Typically you would not notice this in a music app, for example, but it is especially noticeable in games where multiple sounds are played using multiple instances of AVAudioPlayer. The entire app slows down because of it. This is similar to this issue from last year. I have reported it to Apple in FB15144369, as it breaks my production games, where fps drops to nothing when sounds are enabled. Unfortunately I cannot find a solution. Anyone?
Replies: 2 · Boosts: 0 · Views: 315 · Activity: Sep ’24
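A possible mitigation for the stall described above, under the assumption that the cost is in initialization and prepareToPlay() rather than in playback itself: create and prepare all players on a background queue at launch and reuse them, so the hitch never lands mid-gameplay. The SoundBank type, file names, and "caf" extension below are illustrative only.

import AVFoundation

// Hypothetical preloading cache; the type name and file list are illustrative.
final class SoundBank {
    private let loadQueue = DispatchQueue(label: "SoundBank.load", qos: .utility)
    private var players: [String: AVAudioPlayer] = [:]

    // Create and prepare players off the main thread so any slow
    // initialization work does not stall gameplay or UI.
    func preload(_ names: [String], completion: @escaping () -> Void) {
        loadQueue.async {
            var result: [String: AVAudioPlayer] = [:]
            for name in names {
                guard let url = Bundle.main.url(forResource: name, withExtension: "caf") else { continue }
                if let player = try? AVAudioPlayer(contentsOf: url) {
                    player.prepareToPlay()
                    result[name] = player
                }
            }
            DispatchQueue.main.async {
                self.players = result
                completion()
            }
        }
    }

    func play(_ name: String) {
        players[name]?.play()
    }
}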
iPhone 16 Camera Control - AVFoundation code sample?
I found these two documents for the Camera Control.

Human Interface Guidelines: https://developer.apple.com/design/human-interface-guidelines/camera-control
Developer documentation: https://developer.apple.com/documentation/avfoundation/capture_setup/enhancing_your_app_experience_with_the_camera_control

Any chance Apple has an updated or new AVFoundation code sample that integrates Camera Control? I can't find one yet.
Replies: 1 · Boosts: 0 · Views: 434 · Activity: Sep ’24
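I'm not aware of a dedicated sample either, but the capture-setup article above reduces to a fairly small amount of code. The sketch below, assuming an already-configured session and device, adds the system zoom slider control and a controls delegate; treat it as a starting point rather than official sample code, and check the delegate method names against the AVCaptureSessionControlsDelegate documentation.

import AVFoundation

final class CaptureController: NSObject, AVCaptureSessionControlsDelegate {
    let session = AVCaptureSession()
    private let sessionQueue = DispatchQueue(label: "capture.session")

    func addCameraControls(for device: AVCaptureDevice) {
        guard session.supportsControls else { return }
        session.beginConfiguration()
        // System-provided zoom slider that tracks videoZoomFactor for us.
        let zoomSlider = AVCaptureSystemZoomSlider(device: device)
        if session.canAddControl(zoomSlider) {
            session.addControl(zoomSlider)
        }
        // The delegate is told when the Camera Control overlay appears.
        session.setControlsDelegate(self, queue: sessionQueue)
        session.commitConfiguration()
    }

    // MARK: AVCaptureSessionControlsDelegate
    func sessionControlsDidBecomeActive(_ session: AVCaptureSession) {}
    func sessionControlsWillEnterFullscreenAppearance(_ session: AVCaptureSession) {}
    func sessionControlsWillExitFullscreenAppearance(_ session: AVCaptureSession) {}
    func sessionControlsDidBecomeInactive(_ session: AVCaptureSession) {}
}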
AVAssetReader init failure -- media services were reset
I work on a video editing app that composes multiple small video clips, sometimes hundreds or thousands. For one user in particular, attempting to export causes a failure 100% of the time. The failure occurs in the initialization of AVAssetReader, and is in the AVFoundationErrorDomain with code -11819 (AVErrorMediaServicesWereReset). We've done everything we can think of, including quitting other running apps, enabling airplane mode, and even performing the flow on an identical device using the customer's data, and have had no luck pinning down the cause of the error. Does anyone have any suggestion for how we might go about debugging this? Getting ready to file a TSI but thought I should ask here first.
Replies: 1 · Boosts: 0 · Views: 327 · Activity: Sep ’24
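Not an answer, but one angle worth instrumenting for -11819: the error means mediaserverd was restarted, and every AVFoundation object created before the reset has to be discarded and rebuilt. A hedged sketch of observing the reset notification and retrying the export with freshly created assets and readers; restartExport is a placeholder for the app's own export pipeline, and reducing the number of simultaneously open source clips is also worth trying.

import AVFoundation

final class ExportRetryCoordinator {
    private var observer: NSObjectProtocol?

    // Hypothetical hook into the app's own export pipeline.
    var restartExport: (() -> Void)?

    func startObserving() {
        // Fires when mediaserverd restarts; all existing AVAssetReader /
        // AVAssetWriter / AVAsset instances must be rebuilt afterwards.
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.mediaServicesWereResetNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { [weak self] _ in
            self?.restartExport?()
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }
}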
Can I setup an AVCaptureSession exclusively for use with the new Camera Control APIs?
I have a third-party app for controlling Sony mirrorless cameras over WiFi. I'm really excited to integrate the new camera controls on the iPhone 16 Pro with the app. I've found the documentation around this, and it seems I need an AVCaptureSession set up in order to utilise them.

func configureControls(_ controls: [AVCaptureControl]) {
    // Verify the host system supports controls; otherwise, return early.
    guard captureSession.supportsControls else { return }

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Iterate over the passed-in controls.
    for control in controls {
        // Add the control to the capture session if possible.
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}

Can I just use a freshly initialised capture session for this? Or does it need to be configured in any other way? Are there any downsides to creating a session (CPU usage etc.) that I may experience from this?

Also, the scope of the controls is quite narrow. Something like shutter speed or aperture has quite a number of possible values, requires custom labels, and uses a non-linear scale (so AVCaptureIndexPicker seems to be the way to go). Will that picker support enough values to represent something like shutter speed or aperture? Is there any chance we may get non-linear float-based controls in the future, which may feel more natural from a UX perspective than index-based ones?

Apologies, lots of edits going on here as I think about this more. Is there any way, or would any way be considered, of putting these controls in a disabled state like other UI elements in iOS? There are times (during capture, for example) when a lot of these settings are unavailable to be changed by the user (as communicated by the Sony camera), and managing a queue of changes while the function is unavailable to be set is going to be a challenge. If there won't be, how will they behave if controls are removed whilst being interacted with? Presumably they will disappear entirely from the UI?

Thanks!
Replies: 3 · Boosts: 1 · Views: 628 · Activity: Sep ’24
Why is isReadyForMoreMediaData sometimes false?
Hi, when recording videos with AVAssetWriter, the capture fps (camera output fps) is fine, but the final video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false. Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true, and so on. I have been tortured by this problem for a long time. How can I debug it? Any advice?
Replies: 2 · Boosts: 0 · Views: 320 · Activity: Sep ’24
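For capture sources the usual pattern is to set expectsMediaDataInRealTime = true before writing starts and, when isReadyForMoreMediaData is false for a given frame, drop that frame instead of blocking the capture queue. A minimal sketch of that pattern with assumed names; session/output wiring and finishing the file are omitted.

import AVFoundation

final class RealtimeRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private(set) var droppedFrameCount = 0

    init(outputURL: URL, videoSettings: [String: Any]) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
        // Required for capture sources: the writer prioritizes keeping up
        // over batching, so isReadyForMoreMediaData recovers quickly.
        videoInput.expectsMediaDataInRealTime = true
        if writer.canAdd(videoInput) { writer.add(videoInput) }
        super.init()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        guard writer.status == .writing, videoInput.isReadyForMoreMediaData else {
            // Dropping occasionally is expected; a steadily growing count
            // usually means the output settings are too expensive to encode
            // at the capture frame rate (resolution, bitrate, codec).
            droppedFrameCount += 1
            return
        }
        videoInput.append(sampleBuffer)
    }
}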
Compressing AVAudioPCMBuffer within AVAudioEngine Tap
Hi everyone, I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network. Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this. Any advice or code examples would be greatly appreciated! Thanks in advance, Ondřej

My current code without the compression:

inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
    guard let self else { return }

    // 1. Send data
    // a) Convert the buffer into the desired format
    if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
        // b) Use the converted buffer
        // TODO: compress it into a different format
        if let data = outputBuffer.convertToData() {
            self.sendAudio(data)
        }
    }

    // 2. Get sound level
    self.visualizeRecorderBuffer(buffer)
}

func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
    let outputFrameCapacity = AVAudioFrameCount(
        round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
    )

    guard
        let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
        let converter = AVAudioConverter(from: format, to: outputFormat)
    else { return nil }

    converter.convert(to: outputBuffer, error: nil) { packetCount, status in
        status.pointee = .haveData
        return self
    }

    return outputBuffer
}

static private let websocketInputFormat = AVAudioFormat(
    commonFormat: .pcmFormatInt16,
    sampleRate: 16000,
    channels: 1,
    interleaved: false
)!
Replies: 2 · Boosts: 0 · Views: 521 · Activity: Sep ’24
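One approach that stays inside AVFoundation is AVAudioConverter with a compressed output format. Core Audio defines kAudioFormatOpus, but whether an Opus encoder is actually available should be verified on the target OS (converter creation failing is the signal); kAudioFormatMPEG4AAC is a fallback. A sketch of encoding one PCM buffer from the tap into an AVAudioCompressedBuffer; the 20 ms packet size and packet capacity are assumptions.

import AVFoundation
import AudioToolbox

// Sketch: encode a 16 kHz mono PCM buffer into Opus packets.
// A nil converter here means the Opus encoder is not available.
func makeOpusConverter(from pcmFormat: AVAudioFormat) -> (AVAudioConverter, AVAudioFormat)? {
    var asbd = AudioStreamBasicDescription(
        mSampleRate: 16_000,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,      // variable-size packets
        mFramesPerPacket: 320,   // 20 ms at 16 kHz (assumption)
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    guard let opusFormat = AVAudioFormat(streamDescription: &asbd),
          let converter = AVAudioConverter(from: pcmFormat, to: opusFormat) else { return nil }
    return (converter, opusFormat)
}

func encode(_ buffer: AVAudioPCMBuffer, with converter: AVAudioConverter, to format: AVAudioFormat) -> Data? {
    let compressed = AVAudioCompressedBuffer(format: format,
                                             packetCapacity: 64,
                                             maximumPacketSize: converter.maximumOutputPacketSize)
    var consumed = false
    var error: NSError?
    let status = converter.convert(to: compressed, error: &error) { _, outStatus in
        // Feed the single tap buffer once, then report that no more data
        // is available for this call.
        if consumed { outStatus.pointee = .noDataNow; return nil }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }
    guard status != .error, error == nil, compressed.byteLength > 0 else { return nil }
    return Data(bytes: compressed.data, count: Int(compressed.byteLength))
}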
Rendering MV-HEVC encoded stereoscopic video frames using RealityKit VideoMaterial and AVSampleBufferVideoRenderer on visionOS
Hi, I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video) to encode a stereoscopic (left eye, right eye) video frame using MV-HEVC. The sample project creates tagged buffers for the left and right eye and uses a writer to write the MV-HEVC encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I render the MV-HEVC encoded left-eye sample buffer and the right-eye sample buffer sequentially, will AVSampleBufferVideoRenderer render it as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
Replies: 1 · Boosts: 0 · Views: 403 · Activity: Sep ’24
How can I read a dataless file from within the same or another FileProvider extension?
How can I read a dataless file from within the same or another FileProvider extension? When I pass the visible URL to AVAsset from AVFoundation, it throws the following error:

Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x15b01b1e0 {Error Domain=NSPOSIXErrorDomain Code=11 "Resource deadlock avoided"}, NSLocalizedFailureReason=An unknown error occurred (11), AVErrorFailedDependenciesKey=( "assetProperty_Tracks" ), NSURL=file:///Users/<<username>>/Library/CloudStorage/<<file-path>>, NSLocalizedDescription=The operation could not be completed}

The code snippet works fine if executed as a separate Swift process. I'm using AVAsset with AVAssetExportSession to export a subset of the file being read, so I can't use NSFileProviderManager.requestDownloadForItem(withIdentifier:requestedRange:) because I don't know the offset range required by the AVFoundation library.
Replies: 1 · Boosts: 0 · Views: 359 · Activity: Sep ’24
HLS output - influence fragment time.
Hello, I'm trying to create HLS output with a segment duration of 6 seconds but with sync samples (fragments) every 1 second, i.e. I want AVAssetWriter to write a sync sample / moof header every second. Am I correct in understanding that I could only achieve this with a pre-fragmented MP4, using a passthrough rendition, setting preferredOutputSegmentInterval to indefinite, and running flushSegment() as needed? Or is there another method using AVFoundation? Thanks in advance.
Replies: 0 · Boosts: 0 · Views: 326 · Activity: Sep ’24
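As far as I understand the AVAssetWriter segmentation API, that reading is roughly right: use a passthrough input (outputSettings: nil) with outputFileTypeProfile = .mpeg4AppleHLS, set preferredOutputSegmentInterval to .indefinite, and call flushSegment() on your own cadence, with segment data delivered through AVAssetWriterDelegate; the source samples must already contain sync samples at the desired flush points. A compressed sketch with the reading/appending side omitted:

import AVFoundation
import UniformTypeIdentifiers

final class FragmentingWriter: NSObject, AVAssetWriterDelegate {
    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput

    override init() {
        writer = AVAssetWriter(contentType: .mpeg4Movie)
        writer.outputFileTypeProfile = .mpeg4AppleHLS
        // Take over fragment timing entirely: no automatic flushing.
        writer.preferredOutputSegmentInterval = .indefinite
        writer.initialSegmentStartTime = .zero
        // Passthrough: samples are already encoded, with sync samples at
        // (or near) the 1-second points where we intend to flush.
        videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: nil)
        super.init()
        writer.delegate = self
        if writer.canAdd(videoInput) { writer.add(videoInput) }
    }

    // Call once per second, immediately before appending a sync sample.
    func cutFragment() {
        writer.flushSegment()
    }

    // Initialization and separable segments arrive here; the app can then
    // concatenate six 1-second fragments into each 6-second HLS segment file.
    func assetWriter(_ writer: AVAssetWriter,
                     didOutputSegmentData segmentData: Data,
                     segmentType: AVAssetSegmentType,
                     segmentReport: AVAssetSegmentReport?) {
        // Append segmentData to the current segment / playlist as needed.
    }
}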
Getting a strange SwiftData error when generating samples using AVFoundation
Hi all, I'm getting a strange SwiftData error at runtime in my voice recorder app. Whenever I attempt to generate and cache a samples array so that my app can visualize the waveform of the audio, the app crashes and the following pops up in Xcode:

{
    @storageRestrictions(accesses: _$backingData, initializes: _samples)
    init(initialValue) {
        _$backingData.setValue(forKey: \.samples, to: initialValue)
        _samples = _SwiftDataNoType()
    }
    get {
        _$observationRegistrar.access(self, keyPath: \.samples)
        return self.getValue(forKey: \.samples)
    }
    set {
        _$observationRegistrar.withMutation(of: self, keyPath: \.samples) {
            self.setValue(forKey: \.samples, to: newValue)
        }
    }
}

With an execution breakpoint on the line _$observationRegistrar.withMutation(of: self, keyPath: \.samples).

Here is my model class:

import Foundation
import SwiftData

@Model
final class Recording {
    var id: UUID?
    var name: String?
    var date: Date?
    var samples: [Float]? = nil

    init(name: String) {
        self.id = UUID()
        self.name = name
        self.date = Date.now
    }
}

And here is where the samples are being generated (sorry for the long code):

private func processSamples(from audioFile: AVAudioFile) async throws -> [Float] {
    let sampleCount = 128
    let frameCount = Int(audioFile.length)
    let samplesPerSegment = frameCount / sampleCount

    let buffer = try createAudioBuffer(for: audioFile, frameCapacity: AVAudioFrameCount(samplesPerSegment))
    let channelCount = Int(buffer.format.channelCount)

    let audioData = try readAudioData(from: audioFile, into: buffer, sampleCount: sampleCount, samplesPerSegment: samplesPerSegment, channelCount: channelCount)
    let processedResults = try await processAudioSegments(audioData: audioData, sampleCount: sampleCount, samplesPerSegment: samplesPerSegment, channelCount: channelCount)

    var samples = createSamplesArray(from: processedResults, sampleCount: sampleCount)
    samples = applyNoiseFloor(to: samples, noiseFloor: 0.01)
    samples = normalizeSamples(samples)
    return samples
}

private func createAudioBuffer(for audioFile: AVAudioFile, frameCapacity: AVAudioFrameCount) throws -> AVAudioPCMBuffer {
    guard let buffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat, frameCapacity: frameCapacity) else {
        throw Errors.AudioProcessingError
    }
    return buffer
}

private func readAudioData(from audioFile: AVAudioFile, into buffer: AVAudioPCMBuffer, sampleCount: Int, samplesPerSegment: Int, channelCount: Int) throws -> [[Float]] {
    var audioData = [[Float]](repeating: [Float](repeating: 0, count: samplesPerSegment * channelCount), count: sampleCount)

    for segment in 0..<sampleCount {
        let segmentStart = AVAudioFramePosition(segment * samplesPerSegment)
        audioFile.framePosition = segmentStart
        try audioFile.read(into: buffer)

        if let channelData = buffer.floatChannelData {
            let dataCount = samplesPerSegment * channelCount
            audioData[segment] = Array(UnsafeBufferPointer(start: channelData[0], count: dataCount))
        }
    }
    return audioData
}

private func processAudioSegments(audioData: [[Float]], sampleCount: Int, samplesPerSegment: Int, channelCount: Int) async throws -> [(Int, Float)] {
    try await withThrowingTaskGroup(of: (Int, Float).self) { taskGroup in
        for segment in 0..<sampleCount {
            let segmentData = audioData[segment]
            taskGroup.addTask {
                var rms: Float = 0
                vDSP_rmsqv(segmentData, 1, &rms, vDSP_Length(samplesPerSegment * channelCount))
                return (segment, rms)
            }
        }

        var results = [(Int, Float)]()
        for try await result in taskGroup {
            results.append(result)
        }
        return results
    }
}

private func createSamplesArray(from processedResults: [(Int, Float)], sampleCount: Int) -> [Float] {
    var samples = [Float](repeating: 0, count: sampleCount)
    vDSP_vfill([0], &samples, 1, vDSP_Length(sampleCount))

    for (segment, rms) in processedResults {
        samples[segment] = rms
    }
    return samples
}

private func applyNoiseFloor(to samples: [Float], noiseFloor: Float) -> [Float] {
    var result = samples
    let noiseFloorArray = [Float](repeating: noiseFloor, count: samples.count)
    vDSP_vsub(noiseFloorArray, 1, samples, 1, &result, 1, vDSP_Length(samples.count))
    return result
}

private func normalizeSamples(_ samples: [Float]) -> [Float] {
    var result = samples
    var min: Float = 0
    var max: Float = 0
    vDSP_minv(samples, 1, &min, vDSP_Length(samples.count))
    vDSP_maxv(samples, 1, &max, vDSP_Length(samples.count))

    if max > min {
        var a: Float = 1.0 / (max - min)
        var b: Float = -min / (max - min)
        vDSP_vsmsa(samples, 1, &a, &b, &result, 1, vDSP_Length(samples.count))
    } else {
        vDSP_vfill([0.5], &result, 1, vDSP_Length(samples.count))
    }
    return result
}

And this is how the processSamples function is used:

private func loadAudioSamples() async {
    let url = recording.fileURL
    if let audioFile = loadAudioFile(url: url) {
        if recording.samples == nil {
            recording.samples = try? await processSamples(from: audioFile)
        }
    }
}

private func loadAudioFile(url: URL) -> AVAudioFile? {
    do {
        let audioFile = try AVAudioFile(forReading: url)
        return audioFile
    } catch {
        return nil
    }
}

Any help or leads would be greatly appreciated! Thanks!
Replies: 1 · Boosts: 0 · Views: 301 · Activity: Sep ’24
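One thing worth ruling out for the crash above, since the sample generation runs in a task group and may execute off the main actor: SwiftData model objects are not thread-safe, and mutating recording.samples from a background task is a common way to crash inside the generated accessor shown in the breakpoint. A minimal sketch of keeping the heavy work in the background while doing the assignment on the main actor, assuming the Recording was fetched from a main-actor ModelContext; the compute closure stands in for processSamples(from:) from the post.

import SwiftData

// Heavy work stays off the main actor; the model mutation happens on it.
func cacheSamples(for recording: Recording,
                  compute: @escaping () async throws -> [Float]) async {
    guard recording.samples == nil else { return }
    let samples = try? await compute()          // e.g. processSamples(from: audioFile)
    await MainActor.run {
        recording.samples = samples             // mutate the SwiftData model on its actor
    }
}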
Video Background Removal
I am searching for a method to remove the background from a video. The video can come from the camera session's fileOutput URL or from the photo library. I was able to get a live preview with the background removed using the depth data and some Metal framework code from the example Enhancing Live Video by Leveraging TrueDepth Camera Data. However, I couldn't figure out a way to save this as a video so that I can upload it. Also, this method uses over 150% CPU (per Xcode's CPU gauge), which seems like a lot; the device heats up quickly and drops frames when it is hot. I also found something similar on GitHub: a Core ML-based example by Dmitry Voitekh which uses less than 40% CPU. Any information regarding this will be helpful. Objective: remove the background from a video and save it.
Replies: 5 · Boosts: 0 · Views: 751 · Activity: Sep ’24
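Not a complete answer, but since the TrueDepth/Metal route is depth-camera-only and heavy on CPU, one alternative worth prototyping is Vision's person segmentation, which works on any footage. A hedged sketch of masking a single frame; compositing the result over a chosen background and writing frames out with AVAssetWriter is still up to the app, and the quality/energy trade-offs should be measured on-device.

import Vision
import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Returns the frame with non-person pixels cleared to transparent, or nil on failure.
func removeBackground(from frame: CVPixelBuffer) -> CIImage? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced                       // trade accuracy vs. energy use
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    guard (try? handler.perform([request])) != nil,
          let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    let source = CIImage(cvPixelBuffer: frame)
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    // The mask arrives at model resolution; scale it up to the frame.
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: source.extent.width / mask.extent.width,
        y: source.extent.height / mask.extent.height))

    let blend = CIFilter.blendWithMask()
    blend.inputImage = source
    blend.backgroundImage = CIImage(color: .clear).cropped(to: source.extent)
    blend.maskImage = mask
    return blend.outputImage
}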
Speed up the conversion of MV-HEVC to Side-by-side
I have read Converting side-by-side 3D video to multi-view HEVC and spatial video, and now I want to convert back to side-by-side 3D video. On an iPhone 15 Pro Max, the conversion takes roughly as long as the original video. I do almost the same as in the article mentioned above; the only difference is that I get the frames from the spatial video and merge them into side-by-side. Currently my frame-merging code looks like the snippet below. Is there any suggestion to speed up the process? Or, staying within the official article's approach, is there anything we can do to speed up the conversion?

// Merge frame
let leftCI = resizeCVPixelBufferFill(bufferLeft, targetSize: targetSize)
let rightCI = resizeCVPixelBufferFill(bufferRight, targetSize: targetSize)

let lbuffer = convertCIImageToCVPixelBuffer(leftCI!)!
let rbuffer = convertCIImageToCVPixelBuffer(rightCI!)!

pixelBuffer = mergeFrames(lbuffer, rbuffer)
Replies: 1 · Boosts: 0 · Views: 317 · Activity: Sep ’24
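One place to look, hedged because the rest of the pipeline isn't shown: the per-eye CIImage-to-CVPixelBuffer round trip (convertCIImageToCVPixelBuffer) plus a separate merge pass implies several full-frame copies per frame. Compositing both eyes in Core Image and rendering once into a pooled output buffer avoids the intermediates; a sketch along those lines, assuming a CVPixelBufferPool sized for the side-by-side output already exists.

import CoreImage
import CoreVideo

// Reuse one CIContext for the whole export; creating one per frame is costly.
let ciContext = CIContext(options: [.cacheIntermediates: false])

// Renders left|right into a single side-by-side buffer drawn from `pool`.
func mergeSideBySide(left: CVPixelBuffer,
                     right: CVPixelBuffer,
                     targetSize: CGSize,
                     pool: CVPixelBufferPool) -> CVPixelBuffer? {
    var output: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &output)
    guard let output else { return nil }

    func scaled(_ buffer: CVPixelBuffer) -> CIImage {
        let image = CIImage(cvPixelBuffer: buffer)
        return image.transformed(by: CGAffineTransform(
            scaleX: targetSize.width / image.extent.width,
            y: targetSize.height / image.extent.height))
    }

    // Place the right eye next to the left eye and render both in one pass.
    let leftImage = scaled(left)
    let rightImage = scaled(right).transformed(by: .init(translationX: targetSize.width, y: 0))
    let composed = rightImage.composited(over: leftImage)

    ciContext.render(composed, to: output)
    return output
}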
Inexplicable Fence Hang
Hello, my app is getting a fence hang right after install in a specific scenario.

Issue 1: I attempted to follow the directions, tried to symbolicate the file, etc., but did not have much luck. I was able to pinpoint the lines of code where the hang seems to occur. I did this using simple prints and by commenting out / uncommenting blocks of code related to the specific scenario. I was able to do so because not much is happening on the main thread in this scenario.

Issue 2: The following lines of code (variable names modified, etc.) seem to cause the hang. Commenting them out gets rid of the hang across devices, both online and offline. I am not sure if I need to use a framework other than AVFoundation. Note: the file extension is mpg. The music files are static (included in the bundle) and not accessed from the user's playlist etc.

import AVFoundation

var plyr: AVAudioPlayer?

let pth = Bundle.main.path(forResource: "MusicFileName", ofType: "mpg")!
let url = URL(fileURLWithPath: pth)
do {
    plyr = try AVAudioPlayer(contentsOf: url)
    plyr?.prepareToPlay()
    plyr?.play()
} catch {
    // print error etc.
}

Thanks in advance. I would appreciate some help! Close to submission :)
Replies: 6 · Boosts: 0 · Views: 455 · Activity: Sep ’24
How can I use the F8 play/pause key to control media playback in Catalyst?
I have a Catalyst app that plays audio via AVQueuePlayer, and I'd like to use the system play/pause key (F8 on my MacBook Pro keyboard) to play and pause it. It doesn't seem to work automatically, and if I hook up a UIKeyCommand using UIKeyInputF8, it works with Fn-F8, but not F8 on its own. It does seem to work in Overcast's Mac app, but I think that's an iPad app for Mac, not Catalyst, so it's probably going through whatever system pathway the Lock Screen controls use on iOS. How do I make this work on Catalyst?
Replies: 1 · Boosts: 0 · Views: 413 · Activity: Aug ’24
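The play/pause media key is routed through the remote-command system rather than normal key events, so the usual approach (the same pathway the Lock Screen controls use on iOS) is MPRemoteCommandCenter plus now-playing info. A minimal sketch, assuming an existing AVQueuePlayer named player; whether this fully covers the Catalyst case is something to verify.

import AVFoundation
import MediaPlayer

func configureRemoteCommands(for player: AVQueuePlayer) {
    let center = MPRemoteCommandCenter.shared()

    _ = center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    _ = center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
    // The hardware play/pause key generally arrives as the toggle command.
    _ = center.togglePlayPauseCommand.addTarget { _ in
        if player.rate == 0 { player.play() } else { player.pause() }
        return .success
    }

    // Publishing now-playing info marks the app as the active audio app,
    // which is what routes the media keys to these handlers.
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: "Current item",            // placeholder metadata
        MPNowPlayingInfoPropertyPlaybackRate: player.rate
    ]
}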
How to Manage HLS Assets Downloaded with AVAssetDownloadTask to Appear in the iOS Settings App
I have an application that downloads content using AVAssetDownloadTask. In the iOS Settings app, these downloads are listed in the Storage section as a collection of downloaded movies, displaying the asset image, whether it's already been watched, the file size, and an option to delete it. Curious about how other apps handle this display, I noticed that Apple Music shows every downloaded artist, album, and song individually. This made me wonder: can I achieve something similar in my application? On the other hand, apps like Spotify and Amazon Music don't show any downloaded files in the Settings app. Is it possible to implement that approach as well? Here is a screenshot of the Apple Music Storage section in the Settings app. I tried moving the download directory into a subfolder using FileManager, but every attempt made the downloads stop showing in the Settings app.
Replies: 1 · Boosts: 0 · Views: 488 · Activity: Aug ’24
Constant color API improvement
I've experimented quite a bit with the new API designed to neutralize image colors using the iPhone flash, and I think the concept is brilliant. The flash could potentially serve as a substitute for a color checker, given our full control over it. However, I believe there are several areas where this API could be improved. Firstly, the resulting images often appear "unattractive"—colors tend to look faded, and the images themselves can be overly bright and washed out, losing the natural ambiance, shadows, and introducing unwanted flash reflections. There is also inconsistency in color rendering; for example, yellows sometimes appear unnatural, possibly due to reflections. In some cases, all the colors in the image are completely desaturated or become black and white if another light source does not fully illuminate the scene. Additionally, the shadows cast by the flash don't correct the colors properly since they fall outside the flash's range. I think many of these issues could be resolved if we had access to ProRAW images capturing both the ambient light (without flash) and the flash-illuminated scene. With these, we could use specific colors in the image as a reference, similar to a color checker, to create an ICC profile or color transformation matrix to adjust the image colors more globally. This approach could help retain the shadows from the ambient light while still correcting colors to a neutral tone. Access to ProRAW data is crucial for this, as it would provide images without the saturation issues that can affect some colors and with a linear tone curve. I hope this suggestion makes sense and could help improve the API's effectiveness.
Replies: 0 · Boosts: 1 · Views: 307 · Activity: Aug ’24
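For readers trying the feature, a sketch of opting into constant color together with fallback photo delivery, which returns the flash-free ambient capture the post asks about (although not as ProRAW). The property names are my recollection of the iOS 18 API and should be checked against the AVCapturePhotoOutput documentation.

import AVFoundation

// Property names per the iOS 18 constant-color API; verify against the docs.
func makeConstantColorSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings? {
    guard output.isConstantColorSupported else { return nil }
    output.isConstantColorEnabled = true                     // opt the output in first

    let settings = AVCapturePhotoSettings()
    settings.flashMode = .on                                 // the technique is flash-based
    settings.isConstantColorEnabled = true
    // Also deliver the flash-free (ambient) photo alongside the corrected one.
    settings.isConstantColorFallbackPhotoDeliveryEnabled = true
    return settings
}

// In the AVCapturePhotoCaptureDelegate callback, photo.isConstantColorFallbackPhoto
// distinguishes the ambient capture, and photo.constantColorConfidenceMap reports
// per-pixel confidence in the color correction.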
SwiftUI ScrollView scroll gesture doesn't work on macOS but works on iOS (Mac Catalyst)
The title says it all. I'm using SwiftUI for a multiplatform app, and I'm using ScrollView in it. My app has a player bar that tracks the AVPlayer's current time; I built it using .offset(y:). The problem is that whenever I change the offset, the scroll gesture stops working on macOS. Video link: https://streamable.com/euzuwk But weirdly, it works in the Mac Catalyst version. Video link: https://streamable.com/oq01mt The source code is on GitHub. I tried using Animation, but it made the player bar and the music go out of sync. So right now I've made a publisher based on AVPlayer's addPeriodicTimeObserver and receive the time from AVPlayer, but the scroll still doesn't work as expected.
Replies: 1 · Boosts: 0 · Views: 401 · Activity: Aug ’24
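Not a confirmed fix, but a pattern worth trying: instead of offsetting a view that participates in the ScrollView's content on every time tick, draw the playhead as an overlay whose position is plain state driven by the periodic-time publisher, so the scrollable content itself never changes. A generic sketch with assumed names (TimelineContent, pointsPerSecond, timePublisher):

import SwiftUI
import Combine

struct TimelineScrollView: View {
    let timePublisher: AnyPublisher<Double, Never>   // seconds, from addPeriodicTimeObserver
    let pointsPerSecond: CGFloat = 40                // assumed timeline scale
    @State private var playheadX: CGFloat = 0

    var body: some View {
        ScrollView(.horizontal) {
            TimelineContent()                         // the existing waveform / track view
                .overlay(alignment: .topLeading) {
                    Rectangle()
                        .fill(.red)
                        .frame(width: 2)
                        // Only the overlay's offset changes on each tick;
                        // the scroll content's layout is left alone.
                        .offset(x: playheadX)
                }
        }
        .onReceive(timePublisher) { seconds in
            playheadX = CGFloat(seconds) * pointsPerSecond
        }
    }
}

// Placeholder for the app's real timeline content.
struct TimelineContent: View {
    var body: some View { Color.clear.frame(width: 2_000, height: 120) }
}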