AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

82 Posts

Check if NSMicrophoneUsageDescription exists
In our third-party SDK we would like to use the microphone (as an optional feature) when the hosting app allows it. According to the docs, requestRecordPermission will crash if no NSMicrophoneUsageDescription key exists in the hosting app's Info.plist. Obviously I don't want to crash the app. Is there a way to check whether the hosting app allows me to call requestRecordPermission before actually calling it?
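A minimal sketch (not from the original post, just one possible approach): read the hosting app's Info.plist directly and only request permission when the key is present and non-empty.

import AVFoundation

func canRequestMicrophonePermission() -> Bool {
    // The key must be present (and non-empty), or requestRecordPermission terminates the app.
    let description = Bundle.main.object(forInfoDictionaryKey: "NSMicrophoneUsageDescription") as? String
    return !(description?.isEmpty ?? true)
}

func requestMicrophoneIfPossible(completion: @escaping (Bool) -> Void) {
    guard canRequestMicrophonePermission() else {
        completion(false)   // hosting app did not declare microphone usage
        return
    }
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        completion(granted)
    }
}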
Replies: 1 · Boosts: 0 · Views: 19 · Activity: 1h
USB microphone with high samplerate and AVAudioEngine
Hello, I can't wrap my head around the following problem: I have an external USB microphone capable of sample rates of up to 500 kHz. I want to capture the samples and do analysis and display - no playback required. I cannot find a way to run the microphone at its maximum sample rate; I always get 48 kHz. I would like to stick to AVAudioEngine if possible. Any pointers welcome. Thanks! volker
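A minimal sketch, assuming the USB device is the active input: setPreferredSampleRate is only a request, so query the session afterwards for the rate it actually granted (the hardware or driver may still clamp it to 48 kHz).

import AVFoundation

func configureHighSampleRateCapture() throws -> AVAudioEngine {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .measurement, options: [])
    try session.setPreferredSampleRate(500_000)      // a request, not a guarantee
    try session.setActive(true)
    print("Granted hardware sample rate: \(session.sampleRate)")

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)        // reflects the session's actual rate
    input.installTap(onBus: 0, bufferSize: 4096, format: format) { _, _ in
        // analysis / display of the captured samples goes here, no playback
    }
    try engine.start()
    return engine
}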
Replies: 2 · Boosts: 0 · Views: 142 · Activity: 1d
Command Center / Dynamic Island missing icons and animations
Hello all! I'm setting up a really simple media player in my SwiftUI app. The code is the following:

import AVFoundation
import MediaPlayer

class AudioPlayerProvider {
    private var player: AVPlayer

    init() {
        self.player = AVPlayer()
        self.player.automaticallyWaitsToMinimizeStalling = false
        self.setupAudioSession()
        self.setupRemoteCommandCenter()
    }

    private func setupAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print("Failed to set up audio session: \(error.localizedDescription)")
        }
    }

    private func setupRemoteCommandCenter() {
        let commandCenter = MPRemoteCommandCenter.shared()
        commandCenter.playCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.play()
            return .success
        }
        commandCenter.pauseCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.pause()
            return .success
        }
    }

    func loadAudio(from urlString: String) {
        guard let url = URL(string: urlString) else { return }
        let asset = AVAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        self.player.pause()
        self.player.replaceCurrentItem(with: playerItem)
        NotificationCenter.default.addObserver(self, selector: #selector(self.streamFinished), name: .AVPlayerItemDidPlayToEndTime, object: self.player.currentItem)
    }

    func setMetadata(title: String, artist: String, duration: Double) {
        var nowPlayingInfo = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyArtist: artist,
            MPMediaItemPropertyPlaybackDuration: duration,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0,
        ] as [String: Any]
        MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
    }

    @objc private func streamFinished() {
        self.player.seek(to: .zero)
        try? AVAudioSession.sharedInstance().setActive(false)
        MPNowPlayingInfoCenter.default().playbackState = .stopped
    }

    func play() {
        MPNowPlayingInfoCenter.default().playbackState = .playing
        self.player.play()
    }

    func pause() {
        MPNowPlayingInfoCenter.default().playbackState = .paused
        self.player.pause()
    }
}

Pretty textbook stuff. The code works when called from views. Playback also shows up on the lock screen / Dynamic Island (when in the background), but here lie the problems: the play/pause buttons do not appear in the Command Center or in the Dynamic Island. If I tap the spot where these buttons should be, the command works; just the icons are not appearing. The waveform animation does not animate while playing. Many audio apps work just fine, so my code must be lacking something, but I don't know what. What is missing? Thanks in advance!
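One hedged guess about the missing icons and waveform (an assumption, not a confirmed fix): the system UI generally wants MPNowPlayingInfoPropertyElapsedPlaybackTime and an up-to-date playback rate in nowPlayingInfo. A sketch, assuming it sits in the same file as AudioPlayerProvider so the private player property is accessible:

import MediaPlayer

extension AudioPlayerProvider {
    // Call this from play()/pause() to keep the system controls in sync.
    func updateNowPlayingProgress(isPlaying: Bool) {
        var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
        info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
        info[MPNowPlayingInfoPropertyPlaybackRate] = isPlaying ? 1.0 : 0.0
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }
}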
Replies: 1 · Boosts: 0 · Views: 126 · Activity: 4d
AVAudioSessionErrorCodeCannotInterruptOthers
When I receive the interruption-began notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause the music. When I receive the interruption-ended notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume the music. However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684). It seems that if some other app is holding on to the audio, the third-party app's attempt to resume playback fails with AVAudioSessionErrorCodeCannotInterruptOthers. In this case, can we find out which app is hogging the audio?
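As far as I know there is no public API that identifies which app currently holds the audio. A hedged sketch of a more defensive resume path (pausePlayback/resumePlayback are hypothetical placeholders for the app's own player calls): only reactivate when the ended notification carries .shouldResume and nothing else is currently playing.

import AVFoundation

func pausePlayback() { /* pause the app's player (placeholder) */ }
func resumePlayback() { /* resume the app's player (placeholder) */ }

func observeInterruptions() {
    NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                           object: nil, queue: .main) { note in
        guard let info = note.userInfo,
              let rawType = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
        switch type {
        case .began:
            pausePlayback()
        case .ended:
            let rawOptions = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: rawOptions)
            let session = AVAudioSession.sharedInstance()
            // Resuming only when appropriate helps avoid CannotInterruptOthers.
            if options.contains(.shouldResume), !session.isOtherAudioPlaying {
                try? session.setActive(true)
                resumePlayback()
            }
        @unknown default:
            break
        }
    }
}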
Replies: 2 · Boosts: 0 · Views: 219 · Activity: 1w
Getting 561015905 while trying to initiate recording when the app is in background
I'm trying to start and stop recording periodically while my app is in the background. I implemented it using Timer and DispatchQueue. However, whenever I try to initiate the recording I get this error; the issue does not exist in the foreground. Here is the current state of my app and configuration. I have added the "Background Modes" capability under Signing & Capabilities and checked Audio and Self Care. Here is my Info.plist:

<plist version="1.0">
<dict>
    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>
    <key>WKBackgroundModes</key>
    <array>
        <string>self-care</string>
    </array>
</dict>
</plist>

I also used the AVAudioSession with the .record category and activated it. Here is the code snippet:

func startPeriodicMonitoring() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.record, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: [])
        print("Session Activated")
        print(session)
        // Start recording.
        measurementTimer = Timer.scheduledTimer(withTimeInterval: measurementInterval, repeats: true) { _ in
            self.startMonitoring()
            DispatchQueue.main.asyncAfter(deadline: .now() + self.recordingDuration) {
                self.stopMonitoring()
            }
        }
        measurementTimer?.fire() // Start immediately
    } catch let error {
        print("Unable to set up the audio session: \(error.localizedDescription)")
    }
}

Any thoughts on this? I have tried most of the ways I could find, but the issue is still there.
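A small diagnostic helper (not a fix): AVAudioSession error codes are FourCC OSStatus values, so decoding them makes the name easier to look up. 561015905 decodes to "!pla", which I believe corresponds to AVAudioSession.ErrorCode.cannotStartPlaying; treat that mapping as an assumption to verify.

func fourCharCode(from status: Int) -> String {
    let value = UInt32(bitPattern: Int32(truncatingIfNeeded: status))
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

// fourCharCode(from: 561015905) == "!pla"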
Replies: 3 · Boosts: 0 · Views: 158 · Activity: 1w
AVAudioEngine connectMIDI with eventListBlock always sends MIDI 2.0 events
I connect two AVAudioNodes by using - (void)connectMIDI:(AVAudioNode *)sourceNode to:(AVAudioNode *)destinationNode format:(AVAudioFormat * __nullable)format eventListBlock:(AUMIDIEventListBlock __nullable)tapBlock and add a AUMIDIEventListBlock tap block to it to capture the MIDI events. Both AUAudioUnits of the AVAudioNodes involved in this connection are set to use MIDI 1.0 UMP events: [[avAudioUnit AUAudioUnit] setHostMIDIProtocol:(kMIDIProtocol_1_0)]; But all the MIDI voice channel events received are automatically converted to UMP MIDI 2.0 format. Is there something else I need to set so that the tap receives MIDI 1.0 UMPs? (Note: My app can handle MIDI 2.0, so it is not really a problem. So this question is mainly to find out if I forgot to set the protocol somewhere...). Thanks!!
Replies: 0 · Boosts: 0 · Views: 127 · Activity: 2w
Understanding AVAudioTime in AVAudioNodeTapBlock? Is there a way to get time relative to a scheduled Buffer?
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback, for example: if the audio's frame position is >= some point and less than some other point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

I have the frame positions calculated (predetermined before the audio is scheduled; I already made all the necessary computations), so I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus bufferSize:bufferSize format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer atTime:startTime options:0 completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out at what point in the full buffer I am within the tap block. The tap block passes chunks, not the full audio buffer. I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but I have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not my entire scheduled audio buffer). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
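A hedged sketch (in Swift, assuming a single buffer scheduled from frame 0, and that triggerRange holds the precomputed frame positions): AVAudioPlayerNode can convert the tap's node time into its own playback timeline with playerTime(forNodeTime:), whose sampleTime counts frames since the player started playing rather than frames within the tap's chunk.

import AVFoundation

func installPositionTap(on playerNode: AVAudioPlayerNode,
                        format: AVAudioFormat,
                        triggerRange: Range<AVAudioFramePosition>,
                        onTrigger: @escaping () -> Void) {
    playerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { _, when in
        // Translate the tap's render time into the player's timeline.
        guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        if triggerRange.contains(playerTime.sampleTime) {
            onTrigger()
        }
    }
}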
Replies: 1 · Boosts: 0 · Views: 350 · Activity: Apr ’24
watchOS: Resume recording from AudioInterruption in background mode
Hi, I have a watchOS app that records audio for an extended period of time and, because the mic is active, continues to record in background mode when the watch face is off. However, when a call comes in or Siri is activated, recording stops because of an audio interruption. Here is my code for setting up the session:

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.overrideMutedMicrophoneInterruption])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("Audio Session error: \(error)")
    }
}

Before this I register an interruption handler that holds a reference to my AudioEngine (which I start and stop each time recording is activated by the user):

_audioInterruptionHandler = AudioInterruptionHandler(audioEngine: _audioEngine)

And here is how this class implements recovery:

fileprivate class AudioInterruptionHandler {
    private let _audioEngine: AVAudioEngine

    public init(audioEngine: AVAudioEngine) {
        _audioEngine = audioEngine

        // Listen to interrupt notifications
        NotificationCenter.default.addObserver(self, selector: #selector(handleAudioInterruption(notification:)), name: AVAudioSession.interruptionNotification, object: nil)
    }

    @objc private func handleAudioInterruption(notification: Notification) {
        guard let userInfo = notification.userInfo,
              let interruptionTypeRawValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
              let interruptionType = AVAudioSession.InterruptionType(rawValue: interruptionTypeRawValue) else {
            return
        }

        switch interruptionType {
        case .began:
            print("[AudioInterruptionHandler] Interruption began")
        case .ended:
            print("[AudioInterruptionHandler] Interruption ended")
            do {
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("[AudioInterruptionHandler] Error resuming audio session: \(error.localizedDescription)")
            }
        default:
            print("[AudioInterruptionHandler] Unknown interruption: \(interruptionType.rawValue)")
        }
    }
}

Unfortunately, it fails with:

Error resuming audio session: Session activation failed

Is this even possible to do on watchOS? This code worked for me on iOS. Thank you, -- B.
Replies: 2 · Boosts: 0 · Views: 332 · Activity: Apr ’24
AVAudioSession multiRoute disables volume buttons
My app is trying to continuously record audio from the background. Due to user feedback, I'm setting the AVAudioSession to use the .multiRoute category and the .mixWithOthers option. This is because otherwise, if the device is connected to a car with CarPlay, output from the car's radio is muted. The only drawback seems to be that with this setup, controlling the phone's volume using the hardware volume buttons no longer works. This, of course, is also disliked by users. I've searched the docs, this forum, and others for any documentation of this behavior, and for anything I can do to either set up the session so it handles volume changes again, or, failing that, how I'm expected to receive notifications of these button presses and forward them to the right place. Unfortunately, I didn't find anything. Can anyone offer any ideas?
Replies: 0 · Boosts: 0 · Views: 227 · Activity: Apr ’24
Other Audio Ducking in AVAudio session
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduced the Other Audio Ducking feature. In iOS 17 we can control the amount of 'other audio' ducking through AVAudioEngine. Is this also possible with AVAudioSession? We are using an AVAudioSession for a VoIP call while concurrently attempting to play a video through an AVPlayer. However, the volume of the AVPlayer is considerably low. Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
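A hedged sketch from memory of the iOS 17 API shown in that session (the exact names are an assumption to verify): the ducking amount is configured on AVAudioEngine's input node once voice processing is enabled, not on AVAudioSession itself.

import AVFoundation

func configureDucking(on engine: AVAudioEngine) throws {
    try engine.inputNode.setVoiceProcessingEnabled(true)
    let config = AVAudioVoiceProcessingOtherAudioDuckingConfiguration(
        enableAdvancedDucking: true,
        duckingLevel: .min   // duck other audio (e.g. the AVPlayer) as little as possible
    )
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = config
}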
Replies: 0 · Boosts: 0 · Views: 261 · Activity: Apr ’24
AudioSession activation while App is in background or killed
Hello, I'm developing a voice communication app using the LiveKit SDK. Everything works fine in the foreground: the AudioSession is activated and audio is transmitted. However, I would like to add a feature: I would like my app to receive audio even when it's in the background or terminated. I know I can run code in that state by sending a background push notification, but the only thing that does not work in that case is the AudioSession activation. It fails with the error "Session activation failed", with no further clues. I tried every combination of category and mode, but with no success. Background Modes in Xcode have been enabled: Audio, AirPlay, and Picture in Picture; Background Processing. Is this a limitation of LiveKit? I would be grateful if someone could point me in the right direction.
Replies: 0 · Boosts: 0 · Views: 304 · Activity: Apr ’24
Is there a way to adjust (reduce) the upper limit of the system volume
Sometimes when I'm putting on or taking off clothes, I accidentally bump the Digital Crown of my Apple Watch or AirPods Max, and the volume suddenly becomes very loud. This has been bothering me for a long time. I followed the instructions in https://support.apple.com/zh-sg/guide/iphone/iphb71f9b54d/ios, but I couldn't find the relevant settings; the system option is "Reduce Loud Audio" rather than lowering the volume (iOS 17.4). I searched, but I couldn't find any related apps in the App Store. I asked an AI and it provided a possible solution, so I want to learn Swift and create an app myself (I've only been learning for less than a week). Here's the approach suggested by the AI: listen for AVAudioSession's routeChange notification through NotificationCenter, then use MPVolumeView to get the slider and set the slider's value to cap the volume. However, when I debugged it, I found that it didn't work even after setting the value. Where might the problem be, and how should I adjust this?

@objc func setMaximumVolume() {
    if !enableMaxvolume {
        return
    }
    let volumeView = MPVolumeView()
    if let slider = volumeView.subviews.first as? UISlider {
        slider.value = Float(self.maximumVolume / 100)
        print("setMaximumVolume: \(slider.value)")
    }
}
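A hedged sketch of a common workaround (not guaranteed to keep working, since it digs into MPVolumeView's subviews): the MPVolumeView must be in the view hierarchy, and its slider typically only accepts a value a moment after layout. `container` is assumed to be any view in the current window.

import MediaPlayer
import UIKit

func setMaximumVolume(_ maximum: Float, in container: UIView) {
    let volumeView = MPVolumeView(frame: .zero)
    volumeView.isHidden = true            // keep it out of sight but attached to a window
    container.addSubview(volumeView)

    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
        if let slider = volumeView.subviews.compactMap({ $0 as? UISlider }).first {
            slider.value = maximum        // 0.0 to 1.0
        }
    }
}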
Replies: 0 · Boosts: 0 · Views: 210 · Activity: Apr ’24
Intermittent Audio Recording Failure and UI Freezing Issue in iOS App.
The application is developed in SwiftUI. Our application is responsible for recording audio, transcribing the audio file, and uploading it to the backend. So the two main components in the iOS application are AVAudioRecorder and SFSpeechRecognizer. The UI comprises a visual design which showcases the recording of audio and lets the user know whether the audio is being recorded or not using a Text component. Lately customers have been complaining that although the application says "Recording" in the UI, their audio is not being received at the backend. The customers try restarting their device (iPad) and the application starts working normally again. We haven't been able to reproduce the issue, but we suspect an intermittent failure in audio capture or a potential UI freeze. Note: I have used the Leaks instrument and did not encounter any memory leaks while using the application. Is there a way to determine whether the issue lies with the audio recorder, the speech recognizer, or elsewhere in the app? Are there any known issues or limitations with the audio recorder on iOS lately that could be causing this behaviour? Please let me know if you have any suggestions to diagnose this issue, and do let me know if more information is required. Thank you in advance.
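A hedged sketch of a diagnostic aid (not a fix): AVAudioRecorder's delegate callbacks and the session's interruption notification can reveal whether recording silently stopped while the UI still says "Recording".

import AVFoundation

final class RecorderDiagnostics: NSObject, AVAudioRecorderDelegate {
    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
        print("Recorder finished, success: \(flag)")   // false means the system stopped it
    }
    func audioRecorderEncodeErrorDidOccur(_ recorder: AVAudioRecorder, error: Error?) {
        print("Encode error: \(String(describing: error))")
    }
}

func observeRecordingInterruptions() {
    // Log interruptions (phone calls, Siri) that stop recording without updating your UI.
    NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                           object: nil, queue: .main) { note in
        print("Audio session interruption: \(note.userInfo ?? [:])")
    }
}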
Replies: 0 · Boosts: 0 · Views: 242 · Activity: Apr ’24
Recording stereo audio with `AVCaptureAudioDataOutput`
Hey all! I'm building a Camera app using AVFoundation, and I am using the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. (I cannot use AVCaptureMovieFileOutput because I am doing some processing in between.) When recording the audio CMSampleBuffers to the AVAssetWriter, I noticed that compared to the stock iOS camera app, they are mono audio, not stereo audio. I wonder how recording in stereo audio works; are there any guides or documentation available for that? Is a stereo audio frame still one CMSampleBuffer, or will it be multiple CMSampleBuffers? Do I need to synchronize them? Do I need to set up the AVAssetWriter/AVAssetWriterInput differently? This is my Audio Session code:

func configureAudioSession(configuration: CameraConfiguration) throws {
    ReactLogger.log(level: .info, message: "Configuring Audio Session...")

    // Prevent iOS from automatically configuring the Audio Session for us
    audioCaptureSession.automaticallyConfiguresApplicationAudioSession = false
    let enableAudio = configuration.audio != .disabled

    // Check microphone permission
    if enableAudio {
        let audioPermissionStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if audioPermissionStatus != .authorized {
            throw CameraError.permission(.microphone)
        }
    }

    // Remove all current inputs
    for input in audioCaptureSession.inputs {
        audioCaptureSession.removeInput(input)
    }
    audioDeviceInput = nil

    // Audio Input (Microphone)
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio input...")
        guard let microphone = AVCaptureDevice.default(for: .audio) else {
            throw CameraError.device(.microphoneUnavailable)
        }
        let input = try AVCaptureDeviceInput(device: microphone)
        guard audioCaptureSession.canAddInput(input) else {
            throw CameraError.parameter(.unsupportedInput(inputDescriptor: "audio-input"))
        }
        audioCaptureSession.addInput(input)
        audioDeviceInput = input
    }

    // Remove all current outputs
    for output in audioCaptureSession.outputs {
        audioCaptureSession.removeOutput(output)
    }
    audioOutput = nil

    // Audio Output
    if enableAudio {
        ReactLogger.log(level: .info, message: "Adding Audio Data output...")
        let output = AVCaptureAudioDataOutput()
        guard audioCaptureSession.canAddOutput(output) else {
            throw CameraError.parameter(.unsupportedOutput(outputDescriptor: "audio-output"))
        }
        output.setSampleBufferDelegate(self, queue: CameraQueues.audioQueue)
        audioCaptureSession.addOutput(output)
        audioOutput = output
    }
}

This is how I activate the audio session just before I start recording:

let audioSession = AVAudioSession.sharedInstance()
try audioSession.updateCategory(AVAudioSession.Category.playAndRecord,
                                mode: .videoRecording,
                                options: [.mixWithOthers, .allowBluetoothA2DP, .defaultToSpeaker, .allowAirPlay])
if #available(iOS 14.5, *) {
    // prevents the audio session from being interrupted by a phone call
    try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
}
if #available(iOS 13.0, *) {
    // allow system sounds (notifications, calls, music) to play while recording
    try audioSession.setAllowHapticsAndSystemSoundsDuringRecording(true)
}
audioCaptureSession.startRunning()

And this is how I set up the AVAssetWriter:

let audioSettings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: options.fileType)
let format = audioInput.device.activeFormat.formatDescription
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: format)
audioWriter!.expectsMediaDataInRealTime = true
assetWriter.add(audioWriter!)
ReactLogger.log(level: .info, message: "Initialized Audio AssetWriter.")

The rest is trivial - I receive CMSampleBuffers of the audio in my delegate's callback, write them to the audioWriter, and it ends up in the .mov file - but it is not stereo, it's mono. Is there anything I'm missing here?
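A hedged sketch of the stereo-capture configuration for the built-in microphones (iOS 14+); whether it also applies when the input ultimately feeds an AVCaptureAudioDataOutput is an assumption to verify. The idea is to pick a data source that supports the stereo polar pattern before starting capture, after which the audio sample buffers should carry two-channel audio.

import AVFoundation

func enableStereoCapture() throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
          let stereoSource = builtInMic.dataSources?.first(where: {
              $0.supportedPolarPatterns?.contains(.stereo) == true
          }) else {
        return   // stereo capture not available on this device
    }
    try stereoSource.setPreferredPolarPattern(.stereo)
    try session.setPreferredInput(builtInMic)
    try session.setPreferredInputOrientation(.portrait)   // match the camera orientation as needed
}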
Replies: 0 · Boosts: 0 · Views: 300 · Activity: Apr ’24
Recording Unprocessed Audio From Multiple Microphones iOS
Hi all, I'm working on an app that involves measuring the heading of one iPhone relative to another iPhone. I need to be able to record audio from at least two of the built-in data sources at the same time. Does anyone know how I can achieve this? I've found that, when using the .measurement mode for an AVAudioSession, the stereo polar pattern is not available. Also, it doesn't seem possible to select multiple data sources. Is there something I'm missing? If this is not possible, why not?
Replies: 0 · Boosts: 0 · Views: 308 · Activity: Mar ’24
Push to talk block Audio session
I'm facing an issue where I can't play an audio file stored in my project after receiving a push-to-talk notification. Strangely, I'm able to play the audio file by tapping a button before receiving the push notification, but it doesn't work afterward, with no error messages. I've ensured that I've set up everything correctly in my project's capabilities. Any insights on what might be causing this issue would be greatly appreciated.
- I set everything in Capabilities
- Set the permission in the .plist
- Request permission in the app delegate
- I make the connection to the room when the app becomes active and receive success
- Then I set up .halfDuplex for this channel
- In restoredChannelUUID I activate the AVAudioSession
- After sending the PTT push, I parse the speaker and make it the activeRemoteParticipant. I can see that the delegate function channelManager didActivate works fine
- Where I try to play audio from my player, I see the expected prints in the console, but no sound plays
Replies: 3 · Boosts: 0 · Views: 351 · Activity: Mar ’24
After unholding CallKit, the audio does not restore.
In my application, I use CallKit and have supportsHolding = true set. During my phone call, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold. If I end the active call myself, everything is fine, and CallKit calls the method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession). However, if the other party ends the call, the second call remains on hold. In the application, the user taps unhold, and I notify CallKit that the hold has ended. But in this case, the didActivate method is not called at all. If I try to activate the audio myself after unhold, I receive the error: Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed} (AVAudioSessionErrorCodeInsufficientPriority, NSOSStatusErrorDomain code 561017449). What needs to be done for CallKit to activate my audio?
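A hedged sketch of one approach (an assumption about the flow, not a confirmed fix): route the unhold through a CXSetHeldCallAction so CallKit owns the audio session transition and calls didActivate itself; activating the session manually while CallKit still holds it tends to fail with the insufficient-priority error.

import CallKit

let callController = CXCallController()

func requestUnhold(callUUID: UUID) {
    let action = CXSetHeldCallAction(call: callUUID, onHold: false)
    callController.request(CXTransaction(action: action)) { error in
        if let error = error {
            print("Unhold request failed: \(error)")
        }
        // Wait for provider(_:didActivate:) before restarting audio I/O.
    }
}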
Replies: 0 · Boosts: 0 · Views: 353 · Activity: Mar ’24
AVAudioSession.setActive true not working after exception happened in initial set
I am using the category .playAndRecord with mode .videoChat and options .duckOthers. During an audio call, when I try to call setActive(true) an exception occurs, and when I try setActive(true) again after the audio call has ended, I don't get any exception, but no voice comes through. Below is what I am trying to do. So once the initial activation attempt fails, the system no longer activates the session. I have already used AVAudioSession.interruptionNotification, but it still does not activate the session as desired when the audio call ends.

try session.setCategory(.playAndRecord, mode: .videoChat, options: .duckOthers)
try session.setActive(true)
Replies: 0 · Boosts: 0 · Views: 300 · Activity: Mar ’24
Why is AVAudioRecorder creating corrupt files?
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:

func startRecording() {
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    } catch {
        recording = false
        recordingFinished(success: false)
    }
}

The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
  from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
  to: 1 ch, 8000 Hz, Int16

A subsequent attempt to load the file into AVAudioPlayer results in:

MP4_BoxParser.cpp:1089   DataSource read failed
MP4AudioFile.cpp:4365    MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105  OpenFromDataSource failed
AudioFileObject.cpp:80   Open failed

But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method. I've also tried constructing the recorder with

let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil {
    print("Audio format failed.")
} else {
    do {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...

with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:

AudioQueueObject.cpp:1580  BuildConverter: AudioConverterNew returned -50
  from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame
  to: 1 ch, 44100 Hz, Int32
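A hedged guess at the cause (not a confirmed diagnosis): the converter's zero-channel source often points at an audio session that isn't configured for recording. A sketch, reusing the post's recorder and tempFileURL() names, that activates a record-capable session before creating and starting the recorder:

import AVFoundation

func prepareSessionAndRecord() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
    recorder?.prepareToRecord()   // creates the file and validates settings up front
    recorder?.record()
}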
Replies: 1 · Boosts: 0 · Views: 416 · Activity: Mar ’24