AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

88 Posts
Post not yet marked as solved
1 Reply
2.5k Views
When the screen is unlocked, AVSpeechSynthesizer.speak works fine, but when the screen is locked it does not. Here is my setup:

```swift
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: .defaultToSpeaker)
    try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}

let utterance = AVSpeechUtterance(string: voiceOutdata)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
let synth = AVSpeechSynthesizer()
synth.speak(utterance)

defer { disableAVSession() }
```

Error log in the locked state:

```
[AXTTSCommon] Failure starting audio queue alp![AXTTSCommon] _BeginSpeaking: couldn't begin playback
```
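A possible explanation (an assumption, not confirmed by the post): `synth` is a local variable, so the synthesizer can be deallocated before speech finishes, and the `defer` tears the session down immediately. Playing while the device is locked also generally requires the "audio" entry in UIBackgroundModes. A minimal sketch along those lines:

```swift
import AVFoundation

final class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    // Strong reference: a locally scoped AVSpeechSynthesizer can be
    // deallocated before it finishes speaking.
    private let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(_ text: String) throws {
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
        try AVAudioSession.sharedInstance().setActive(true)
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synth.speak(utterance)
    }

    // Deactivate only after speech actually finishes, not via `defer`.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}
```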
Posted by
Post not yet marked as solved
4 Replies
3.1k Views
I want to record both IMU data and audio data from AirPods Pro. I have tried many times and failed. I can successfully record the IMU data and the iPhone's microphone data simultaneously, but when I select the AirPods Pro microphone in the setCategory() call, the IMU data collection stops. If I change

```swift
recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
```

to

```swift
recordingSession.setCategory(.playAndRecord, mode: .default)
```

everything is okay, except that the audio is recorded from the iPhone's built-in microphone. As soon as I add `options: .allowBluetooth`, the IMU updates stop. Could you give me some suggestions? Below are the relevant parts of my code.

```swift
let My_IMU = CMHeadphoneMotionManager()
let My_writer = CSVWriter()
var write_state: Bool = false

func test() {
    recordingSession = AVAudioSession.sharedInstance()
    do {
        try recordingSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
        try recordingSession.setActive(true)
        recordingSession.requestRecordPermission() { [unowned self] allowed in
            DispatchQueue.main.async {
                if allowed == false { print("failed to record!") }
            }
        }
    } catch {
        print("failed to record!")
    }

    let audioFilename = getDocumentsDirectory().appendingPathComponent("test_motion_Audio.m4a")
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 8000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioFilename, settings: settings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        print("Fail to record!")
        finishRecording()
    }

    write_state.toggle()
    let dir = FileManager.default.urls(
        for: .documentDirectory,
        in: .userDomainMask
    ).first!

    let filename = "test_motion_Audio.csv"
    let fileUrl = dir.appendingPathComponent(filename)
    My_writer.open(fileUrl)

    APP.startDeviceMotionUpdates(to: OperationQueue.current!, withHandler: { [weak self] motion, error in
        guard let motion = motion, error == nil else { return }
        self?.My_writer.write(motion)
    })
}
```
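A hedged diagnostic idea (assumption: `.allowBluetooth` moves the AirPods onto the HFP route, which may also tear down the motion link): CMHeadphoneMotionManager's delegate reports connect/disconnect events, so logging them around the setCategory call shows whether the route change is what stops the IMU stream:

```swift
import CoreMotion

final class MotionObserver: NSObject, CMHeadphoneMotionManagerDelegate {
    let manager = CMHeadphoneMotionManager()

    override init() {
        super.init()
        manager.delegate = self
    }

    func start() {
        guard manager.isDeviceMotionAvailable else { return }
        manager.startDeviceMotionUpdates(to: .main) { motion, error in
            if let motion = motion { print(motion.attitude) }
        }
    }

    // These fire when the headphone sensor link comes and goes; logging them
    // around the audio-session change shows whether the route switch kills it.
    func headphoneMotionManagerDidConnect(_ manager: CMHeadphoneMotionManager) {
        print("headphone motion connected")
    }

    func headphoneMotionManagerDidDisconnect(_ manager: CMHeadphoneMotionManager) {
        print("headphone motion disconnected")
    }
}
```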
Posted by
Post not yet marked as solved
3 Replies
1.7k Views
I have a music app that can play in the background, using AVQueuePlayer. I'm in the process of adding support for CloudKit sync of the Core Data store, switching from NSPersistentContainer to NSPersistentCloudKitContainer. The initial sync can be fairly large (10,000+ records), depending on how much the user has used the app. The issue I'm seeing is this:

✅ When the app is in the foreground, CloudKit sync uses a lot of CPU, nearly 100% for a long time (expected during the initial sync).
✅ If I am NOT playing music, when I put the app in the background, CloudKit sync eventually stops until I bring the app to the foreground again (also expected).
❌ If I AM playing music, when I put the app in the background, CloudKit never stops syncing, which leads the system to terminate the app after a certain amount of time due to high CPU usage.

Is there any way to pause the CloudKit sync when the app is in the background, or any way to mitigate this?
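As far as I know there is no public API to pause NSPersistentCloudKitContainer's sync, but its activity can at least be observed, for example to defer other work until the initial import finishes. A sketch using the container's event notification:

```swift
import CoreData

func observeSyncEvents(for container: NSPersistentCloudKitContainer) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: NSPersistentCloudKitContainer.eventChangedNotification,
        object: container,
        queue: .main
    ) { note in
        guard let event = note.userInfo?[NSPersistentCloudKitContainer.eventNotificationUserInfoKey]
                as? NSPersistentCloudKitContainer.Event else { return }
        // endDate == nil means the event (setup/import/export) is still running.
        print("CloudKit sync \(event.type): \(event.endDate == nil ? "started" : "finished")")
    }
}
```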
Posted by
Post not yet marked as solved
1 Reply
1.1k Views
I know that if you want background audio from AVPlayer you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have all that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the Home Screen, you can't perform the detachment operation, otherwise the PiP display fails. On an iPad, if PiP is active and you lock the device, background audio playback continues. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play in the lock-screen controls; this behavior is the same on iPad and iPhone. My questions are: Is there a way to keep background audio playing when PiP is inactive and the device is locked (iPhone and iPad)? Is there a way to keep background audio playing when PiP is active and the device is locked (iPhone)?
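For reference, a minimal sketch of the baseline configuration the post assumes: a .playback session plus a PiP controller (`playerLayer` stands in for the poster's layer):

```swift
import AVKit

func configureForBackgroundPlayback(playerLayer: AVPlayerLayer) throws -> AVPictureInPictureController? {
    // Locked-screen/background audio also requires the "audio" entry in
    // UIBackgroundModes (Info.plist).
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .moviePlayback)
    try AVAudioSession.sharedInstance().setActive(true)

    guard AVPictureInPictureController.isPictureInPictureSupported() else { return nil }
    let pip = AVPictureInPictureController(playerLayer: playerLayer)
    // iOS 14.2+: let the system start PiP automatically on backgrounding.
    pip?.canStartPictureInPictureAutomaticallyFromInline = true
    return pip
}
```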
Posted by
Post not yet marked as solved
1 Reply
1.9k Views
I’m developing a voice communication app for the iPad, with both playback and record, and I'm using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround? Searching the net, I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
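For context, a minimal sketch of how a voice-processing I/O unit of this type is typically instantiated (the setup the post refers to; bus and format configuration omitted):

```swift
import AudioToolbox

var description = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,  // echo-cancelling I/O
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0
)

var audioUnit: AudioUnit?
if let component = AudioComponentFindNext(nil, &description) {
    AudioComponentInstanceNew(component, &audioUnit)
    // ... enable input on bus 1, set stream formats, then
    // AudioUnitInitialize(audioUnit!) ...
}
```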
Posted by
Post not yet marked as solved
1 Reply
1.7k Views
Hi, I have multiple audio files and I want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. swap left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work today. Many thanks; I am a bit stumped.
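One avenue worth exploring (a sketch only; I have not verified the exact scope/element combination, and the 8-channel case would need a longer map): Core Audio's kAudioOutputUnitProperty_ChannelMap on the engine's underlying output unit remaps source channels to hardware channels. Here it swaps left and right on a stereo route:

```swift
import AVFoundation
import AudioToolbox

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

if let outputUnit = engine.outputNode.audioUnit {
    // Destination channel i plays source channel channelMap[i]:
    // [1, 0] sends the right source channel to the left output and vice versa.
    var channelMap: [Int32] = [1, 0]
    AudioUnitSetProperty(outputUnit,
                         kAudioOutputUnitProperty_ChannelMap,
                         kAudioUnitScope_Global,
                         0,
                         &channelMap,
                         UInt32(MemoryLayout<Int32>.stride * channelMap.count))
}
```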
Posted by
Post not yet marked as solved
10 Replies
3.9k Views
I work on a video conferencing application which makes use of AVAudioEngine and the videoChat AVAudioSession.Mode. This past Friday, an internal user reported an "audio cutting in and out" issue on their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 exhibit this issue. I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:

```swift
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
```

to:

```swift
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
```

This only causes issues on my iPhone 14 Pro Max, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones. I am also seeing the following logged to the console on either device, which appears to be specific to iOS 16, though I am not sure whether it is related to the videoChat issue:

```
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71  Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225  Invalid input size for property 1684431725
```

I am assuming 1684431725 is 'dfcm', but I am not sure which audio session property that might be.
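The FourCC guess can be checked mechanically; a small helper that decodes a 32-bit property ID into its four-character code confirms that 1684431725 is indeed 'dfcm':

```swift
func fourCharCode(_ value: UInt32) -> String {
    // Take the four bytes from most to least significant and read them as ASCII.
    let bytes = (0..<4).map { UInt8((value >> (8 * (3 - $0))) & 0xFF) }
    return String(bytes: bytes, encoding: .ascii) ?? String(value)
}

print(fourCharCode(1684431725))  // "dfcm"
```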
Posted by
Post not yet marked as solved
1 Reply
1.4k Views
In my app, which gives voice prompts and sound warnings while driving, I test whether the audio session is connected to CarPlay, in order to use CarPlay-specific settings for AVAudioSession:

```objc
BOOL found = NO;
for (AVAudioSessionPortDescription *portDescr in [AVAudioSession sharedInstance].currentRoute.outputs) {
    if ([portDescr.portType isEqualToString:AVAudioSessionPortCarAudio]) {
        found = YES;
    }
}
```

This has worked for several years. But now, on iOS 16, when the app is in the background, this method fails to detect that the iPhone is connected to CarPlay most of the time. As a result, I set up the audio session in a way that won't play any sound to the user. I have seen this happening in other apps such as Google Maps, so my guess is that something about AVAudioSession in the background is broken in iOS 16. Has anyone experienced this? Any workaround? I tried notifications to detect a route change, but sometimes I wouldn't get the notification if the app was in the background when connecting to CarPlay, so that workaround didn't work either. Any ideas? Thank you.
Posted by
Post not yet marked as solved
9 Replies
3.6k Views
I am getting an error on iOS 16 that doesn't appear on previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

```
Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets
```

This is how the audio format and the callback are set:

```objc
// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

AURenderCallbackStruct callbackStruct;
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Global,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
```

Note that the mSampleRate I set is 4000 Hz. On iOS 15 I get a buffer duration (IOBufferDuration) of 0.02322 seconds and 93 frames in each callback. This is expected, because:

sample rate × buffer duration = number of frames
4000 Hz × 0.02322 s ≈ 93 frames

However, on iOS 16 I am getting the aforementioned error in the callback. Since the number of frames is equal to the number of packets, I am getting only 1 or 2 frames per callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal. With 2 frames in a 0.02322-second buffer, the implied sample rate would be about 86 Hz, which doesn't make any sense. This error appears for other sampling rates too (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 Hz as my sampling rate. I have also tried to set the sampling rate using AVAudioSession's setPreferredSampleRate(_:), but the attempt didn't succeed: the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
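On the last point: setPreferredSampleRate(_:) is only a request, and the hardware usually stays at its native 44.1/48 kHz; the actual values can be read back after activation. A minimal sketch:

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(4000)          // a request, not a guarantee
    try session.setPreferredIOBufferDuration(0.02322) // likewise only a hint
    try session.setActive(true)
} catch {
    print("session setup failed: \(error)")
}

// The hardware usually keeps its native rate (44100 or 48000), so resampling
// to 4000 Hz has to happen in the app, e.g. with AVAudioConverter.
print("actual sample rate:", session.sampleRate)
print("actual IO buffer duration:", session.ioBufferDuration)
```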
Posted by
Post not yet marked as solved
3 Replies
1.7k Views
Hi folks, I'm currently working on a video conferencing app and use AVRoutePickerView for output device selection. Since the iOS 16 release it has started to display incorrect names for the earpiece and speaker options: both are named "iPhone" (before, it was "iPhone"/"Speaker"). Output switching works correctly, but the display names are confusing. Here is my audio session configuration:

```swift
private func configureSession() {
    let configuration = RTCAudioSessionConfiguration.webRTC()
    configuration.category = AVAudioSession.Category.playAndRecord.rawValue
    configuration.categoryOptions = [.allowBluetooth, .allowBluetoothA2DP, .duckOthers, .mixWithOthers]
    let session = RTCAudioSession.sharedInstance()
    session.lockForConfiguration()
    do {
        try session.setConfiguration(configuration)
    } catch let error as NSError {
        logError("[AudioSessionManager] Unable to configure RTCAudioSession with error: \(error.localizedDescription)")
    }
    session.unlockForConfiguration()
}

private func overrideOutputPortIfNeeded() {
    DispatchQueue.main.async {
        guard let currentOutputType = self.session.currentRoute.outputs.first?.portType else { return }

        self.session.lockForConfiguration()
        let shouldOverride = [.builtInReceiver, .builtInSpeaker].contains(currentOutputType)
        logDebug("[AudioSessionManager] Should override output to speaker? \(shouldOverride)")
        if shouldOverride {
            do {
                try self.session.overrideOutputAudioPort(.speaker)
            } catch let error as NSError {
                logError("[AudioSessionManager] Unable to override output to Speaker: \(error.localizedDescription)")
            }
        }
        self.session.unlockForConfiguration()
    }
}
```

Any help appreciated. Thanks!
Posted by
Post not yet marked as solved
2 Replies
1.1k Views
I am using AVSpeechSynthesizer to get an audio buffer and play it back with AVAudioEngine and AVAudioPlayerNode, but I am getting this error:

```
[avae] AVAEInternal.h:76 required condition is false: [AVAudioPlayerNode.mm:734:ScheduleBuffer: (_outputFormat.channelCount == buffer.format.channelCount)]
2023-05-02 03:14:35.709020-0700 AudioPlayer[12525:308940] *** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
```

Can anyone please help me play the AVAudioBuffer from AVSpeechSynthesizer's write method?
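The exception says the player node was connected with a channel count different from the synthesizer's buffers (speech buffers are often mono, while the mixer defaults to stereo). A minimal sketch that connects the node lazily using the first buffer's own format; the engine/player setup here is illustrative:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let synthesizer = AVSpeechSynthesizer()
engine.attach(player)

var connected = false
let utterance = AVSpeechUtterance(string: "Hello world")

synthesizer.write(utterance) { buffer in
    guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else { return }
    if !connected {
        // Connect using the buffer's own format (often mono) instead of the
        // mixer's default stereo format, so the channel counts match.
        engine.connect(player, to: engine.mainMixerNode, format: pcm.format)
        try? engine.start()
        player.play()
        connected = true
    }
    player.scheduleBuffer(pcm, completionHandler: nil)
}
```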
Posted by
Post not yet marked as solved
3 Replies
804 Views
When I press the system Talk button to talk in a channel while my app is in the background, I see that `channelManager(_: PTChannelManager, _: UUID, didBeginTransmittingFrom: PTChannelTransmitRequestSource)` is called, but `channelManager(_: PTChannelManager, didActivate: AVAudioSession)` is never called. When my app is in the foreground, everything works fine. I've set the app's capability Background Modes -> Push to Talk. Can anyone help?
Posted by
Post not yet marked as solved
0 Replies
363 Views
https://developer.apple.com/documentation/avfaudio/avaudiosession/mode/1616455-voicechat

The discussion section of this document mentions a side effect: "Using this mode has the side effect of enabling the allowBluetooth category option." But no details of the side effect are given. Please share the details.
Posted by
Post not yet marked as solved
0 Replies
418 Views
We noticed that the number of AVAudioSession.mediaServicesWereResetNotification posts increased significantly after upgrading to iOS 16.4.1. When AVAudioSession.mediaServicesWereResetNotification is posted, there is a high probability that restarting our audio modules fails, and this happens only on iOS 16.4.1. Another issue is that we can't emulate a media services reset in the same way after upgrading to iOS 16.4.1, because the call is ended by CallKit after resetting media services. Steps to reproduce:

1. Make a native call and wait for the call to connect.
2. Tap "Reset Media Services" (Settings -> Developer -> Reset Media Services) and return to the Phone app.
3. The call is terminated by CallKit.
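For reference, Apple's documented guidance when this notification fires is to dispose of and rebuild all audio objects. A minimal observer sketch; `rebuildAudioStack()` is a hypothetical app-specific routine:

```swift
import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { _ in
    // After a media services reset, existing players, engines, and recorders
    // are orphaned: recreate them, then reconfigure and reactivate the session.
    rebuildAudioStack()   // hypothetical app-specific routine
}
```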
Posted by
Post not yet marked as solved
0 Replies
957 Views
I'm writing an app in which a sound is played whenever a particular action occurs. During testing I noticed that this sound is always audible when my phone is connected to a Bluetooth speaker, even if the mute switch is active. Is there a way to prevent this? I feel like my app should respect the mute switch in this case. When I am not connected to the Bluetooth speaker, everything works as expected.
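Whether the Ring/Silent switch silences your audio is determined by the session category, not the output route: the ambient categories are silenced by the switch, while .playback and .playAndRecord are not. If the sounds are secondary UI feedback, a minimal sketch:

```swift
import AVFoundation

// .ambient and .soloAmbient (the default) are silenced by the Ring/Silent
// switch; .playback and .playAndRecord keep playing regardless of it.
do {
    try AVAudioSession.sharedInstance().setCategory(.ambient)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("audio session setup failed: \(error)")
}
```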
Posted by
Post not yet marked as solved
0 Replies
789 Views
I'm building an app that is mostly about audio playback, but I occasionally want to record a small bit from the microphone. I'm setting up my audio session like this:

```swift
try? AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.allowBluetoothA2DP])
```

When connected to Bluetooth headphones, or via Bluetooth audio to the car (a 2022 VW ID.4), this gives me the results I desire: playback is high quality via A2DP, and recordings use the built-in iPhone microphone. However, when connected to CarPlay, the output audio quality suffers greatly; it sounds like it's using a Bluetooth HFP connection. Note that this only happens when using .playAndRecord; if I set up the session with .playback, the audio quality over CarPlay is fine. I've tried bludgeoning some workarounds, like setting .playAndRecord at the last minute or calling setPreferredInput() to force the iOS mic, but nothing seems to help.
Posted by
Post not yet marked as solved
1 Reply
1.1k Views
I'm trying to change the audio input (microphone) between all the available devices from AVAudioSession.sharedInstance().availableInputs. I'm using AVAudioSession.routeChangeNotification to get automatic route changes when devices are connected or disconnected; I change the preferred input with setPreferredInput(_:), then restart my audio engine, and that works fine. But when I try to change the preferred input programmatically, the audio capture inputNode doesn't change: it keeps capturing from the last connected device. Even though AVAudioSession.sharedInstance().currentRoute.inputs changes, audioEngine?.inputNode doesn't respond to the setPreferredInput call. WhatsApp seems to have done this without any issues. Any suggestions or leads are highly appreciated. Thanks.
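One workaround worth trying (an assumption based on the inputNode being bound to the route that existed when the engine was created): discard the engine entirely and build a fresh one after changing the preferred input, rather than restarting the same instance. A sketch:

```swift
import AVFoundation

final class CaptureController {
    private var engine: AVAudioEngine?

    func switchInput(to input: AVAudioSessionPortDescription) throws {
        engine?.stop()
        engine = nil   // drop the old engine and its cached inputNode

        try AVAudioSession.sharedInstance().setPreferredInput(input)

        // A fresh engine picks up the new route when its inputNode is created.
        let newEngine = AVAudioEngine()
        let format = newEngine.inputNode.outputFormat(forBus: 0)
        newEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // process captured audio
        }
        try newEngine.start()
        engine = newEngine
    }
}
```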
Posted by
Post not yet marked as solved
0 Replies
844 Views
I'm currently developing an app that requires detecting Bluetooth connections and disconnections in cars. During testing, I've observed the following behavior: in certain vehicles, only a Bluetooth connection via the car's hands-free system is available. In these cases, the device initiates a call to itself, which is then displayed on the vehicle's infotainment system. In some of the tested vehicles, this self-call is brief and only occurs while the device is connecting or disconnecting. In other vehicles, however, the self-call remains visible for the entire duration of the device's pairing with the car's Bluetooth system. This blocks the entire infotainment system and causes the connection/disconnection observers in my app to stop functioning as expected. I'm looking for a solution or preventative measures to address this issue. Any guidance would be greatly appreciated. Here is a snippet of my code:

```swift
func audioSessionSetup() {
    do {
        resetAudioSession()
        let audioOptions: AVAudioSession.CategoryOptions = [.duckOthers, .allowBluetooth, .defaultToSpeaker]
        try audioSession.setCategory(.playAndRecord, mode: .spokenAudio, options: audioOptions)
        registerNotifications()
        try audioSession.setActive(true)
        print("audioSession is active")
    } catch let error as NSError {
        print("Failed to set the audio audioSession category and mode: \(error.localizedDescription)")
    }
}

/// Reset the audio session to deactivate it.
func resetAudioSession() {
    do {
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
    } catch let error as NSError {
        print("Failed to reset the audio audioSession, error: \(error.localizedDescription)")
    }
}

@objc func handleRouteChange(_ notification: Notification) {
    guard let userInfo = notification.userInfo,
          let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue) else { return }
    switch reason {
    case .newDeviceAvailable:
        /// Handle new device connection
        print("New device connected.")
        checkConnectionForSelectedOutput(notification)
    case .oldDeviceUnavailable:
        /// Handle device disconnection
        print("Device disconnected.")
        handleLocationServices(state: false)
    default:
        print("break")
        handleCategoryChange(notification)
    }
}

private func handleCategoryChange(_ notification: Notification) {
    if let connectedDeviceName = getConnectedBluetoothDeviceName() {
        if connectedDeviceName != connectedDevice && connectedDeviceName == BluetoothUtils.getBluetoothInfo().portName {
            connectedDevice = connectedDeviceName
            checkConnectionForSelectedOutput(notification)
        }
    } else {
        audioSessionSetup()
        checkConnectionForSelectedOutput(notification)
        print("handleRouteChange audio session is active")
    }
}
```
Posted by
Post not yet marked as solved
0 Replies
498 Views
Hi, I am developing a proof-of-concept music player app using AVAudioSession. I have implemented background music and integration with the command center, and I am now focusing on volume. I am able to receive volume changes with systemVolumeDidChange. As for setting the volume, I can set it using MPVolumeView, but not for remote Wi-Fi audio devices (for example, HomePods). I have the following open points: the native Podcasts app is able to control volume when connected to HomePods; how does it do that? The native Podcasts app also shows icons for AirPods, HomePods, and even car Bluetooth; are there icon properties for audioSession.currentRoute.outputs, or what should I use instead? Here is an example of what I would like to achieve:
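On the icon question: I'm not aware of a public icon property on AVAudioSessionPortDescription; one hedged approach is to map portType to SF Symbols yourself (the symbol names below are assumptions to verify in the SF Symbols app):

```swift
import AVFoundation

func symbolName(for port: AVAudioSessionPortDescription) -> String {
    switch port.portType {
    case .airPlay:                      return "airplayaudio"
    case .bluetoothA2DP, .bluetoothHFP: return "headphones"
    case .carAudio:                     return "car.fill"
    case .headphones:                   return "headphones"
    case .builtInSpeaker:               return "speaker.wave.2.fill"
    default:                            return "speaker.fill"
    }
}

// Usage: pick an icon for the current output route.
if let output = AVAudioSession.sharedInstance().currentRoute.outputs.first {
    print(symbolName(for: output))
}
```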
Posted by