In my application I use CallKit and have set supportsHolding = true. During a phone call, another call (e.g., GSM) comes in. I accept the incoming call and put the current call on hold.
If I end the active call myself, everything is fine, and CallKit calls the
method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession).
However, if the other party ends the call, the second call remains on hold. In the application, the user taps unhold, and I notify CallKit that the hold has ended.
But in this case, the didActivate method is not called at all. If I try to activate the audio myself after unhold, I receive the error:
Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
(Code 561017449 in NSOSStatusErrorDomain corresponds to AVAudioSessionErrorCodeInsufficientPriority.)
What needs to be done for CallKit to activate my audio?
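For reference, this is roughly how the unhold is reported to CallKit on my side (a simplified sketch; callController and the call UUID come from my own call-management code and are assumptions here):

import CallKit

func requestUnhold(callUUID: UUID, callController: CXCallController) {
    // Ask CallKit to take the call off hold. CallKit should then call
    // provider(_:didActivate:) once it activates the audio session.
    let unholdAction = CXSetHeldCallAction(call: callUUID, onHold: false)
    callController.request(CXTransaction(action: unholdAction)) { error in
        if let error = error {
            print("Unhold request failed: \(error)")
        }
        // Deliberately not calling AVAudioSession.setActive(true) here;
        // activating the session manually races with CallKit and is what
        // produces the insufficient-priority error above.
    }
}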
I am using the .playAndRecord category and .videoChat mode with the .duckOthers option. During an audio call, when I try to call setActive(true) an exception occurs, and when I try setActive(true) again after the audio call has ended, I get no exception, but no voice comes through. Below is what I am trying to do. So once the initial activation attempt fails, the system never activates the session.
I have already used AVAudioSession.interruptionNotification, but it still does not restore the session as desired when the audio call ends.
try session.setCategory(.playAndRecord, mode: .videoChat, options: .duckOthers)
try session.setActive(true)
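My interruption handling is roughly this (a simplified sketch; the pause/resume hooks are placeholders):

import AVFoundation

NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        // The call started: pause my audio here.
        break
    case .ended:
        // Only retry activation when the system says resumption is OK.
        let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            try? AVAudioSession.sharedInstance().setActive(true)
        }
    @unknown default:
        break
    }
}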
I'm attempting to record from a device's microphone (under iOS) using AVAudioRecorder. The examples are all quite simple, and I'm following the same method. But I'm getting error messages on attempts to record, and the resulting M4A file (after several seconds of recording) is only 552 bytes long and won't load. Here's the recorder usage:
func startRecording()
{
    let settings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 22050,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), settings: settings)
        recorder?.delegate = self
        recorder!.record()
        recording = true
    }
    catch
    {
        recording = false
        recordingFinished(success: false)
    }
}
The immediate sign of trouble appears to be the following, in the console. Note the 0 bits per channel and irrelevant 8K sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 8000 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 8000 Hz, Int16
A subsequent attempt to load the file into AVAudioPlayer results in:
MP4_BoxParser.cpp:1089 DataSource read failed
MP4AudioFile.cpp:4365 MP4Parser_PacketProvider->GetASBD() failed
AudioFileObject.cpp:105 OpenFromDataSource failed
AudioFileObject.cpp:80 Open failed
But that's not surprising given that it's only 500+ bytes and we had the earlier error. Anybody have an idea here? Every example on the Web shows essentially this exact method.
I've also tried constructing the recorder with
let audioFormat = AVAudioFormat.init(standardFormatWithSampleRate: 44100, channels: 1)
if audioFormat == nil
{
    print("Audio format failed.")
}
else
{
    do
    {
        recorder = try AVAudioRecorder(url: tempFileURL(), format: audioFormat!)
        ...
with mostly the same result. In that case the instantiation error message was the following, which at least mentions the requested sample rate:
AudioQueueObject.cpp:1580 BuildConverter: AudioConverterNew returned -50 from: 0 ch, 44100 Hz, .... (0x00000000) 0 bits/channel, 0 bytes/packet, 0 frames/packet, 0 bytes/frame to: 1 ch, 44100 Hz, Int32
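One thing not shown above is any audio session or permission setup before record() is called. My assumption (not confirmed) is that without a record-capable, active session and granted microphone permission the input reports zero channels, which would match the "0 ch ... 0 bits/channel" in the log. A minimal sketch of that setup:

import AVFoundation

func prepareForRecording(completion: @escaping (Bool) -> Void) {
    let session = AVAudioSession.sharedInstance()
    do {
        // A category that allows input, and an active session.
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
    } catch {
        completion(false)
        return
    }
    // Also requires NSMicrophoneUsageDescription in Info.plist.
    session.requestRecordPermission { granted in
        DispatchQueue.main.async { completion(granted) }
    }
}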
I need to duck the audio coming from ApplicationMusicPlayer while playing a local file using AVAudioPlayer.
I've tried using the duckOthers option as follows, but it doesn't work:
let appAudioSession = AVAudioSession.sharedInstance()
do
{
    try appAudioSession.setCategory(.playAndRecord, mode: .default, options: .duckOthers)
Maybe this is because there's one session for the entire app, and ApplicationMusicPlayer is using it?
This is a fairly critical problem for my application, since Music content is always much louder than locally recorded content. Any insight appreciated.
Background
When I receive the InterruptionBegan notification (the interruption type is AVAudioSessionInterruptionTypeBegan), I pause playing music.
When I receive the InterruptionEnded notification (the interruption type is AVAudioSessionInterruptionTypeEnded), I resume playing music.
However, sometimes I get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684).
Some Solutions
I searched Stack Overflow; there are some similar questions, but the suggested solutions are not satisfying because:
I don't want my app to mix with others, and once again, it all works most of the time.
My app already uses remote control events, so this doesn't solve anything.
Questions
1. Has anyone else encountered this problem?
2. Can this problem be solved, and how?
3. In addition, I noticed that there is a property named otherAudioPlaying on AVAudioSession that tells us another app is playing audio; the question is whether we can know which app is playing (see the sketch below).
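Regarding question 3, the check I mean looks roughly like this; as far as I can tell, the session only reports whether other audio is playing, not which app it belongs to:

let session = AVAudioSession.sharedInstance()
if session.isOtherAudioPlaying || session.secondaryAudioShouldBeSilencedHint {
    // Another app currently has audio focus; activating our session now can
    // fail with AVAudioSessionErrorCodeCannotInterruptOthers (560557684).
} else {
    try? session.setActive(true)
}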
I'm trying to integrate CallKit into a Flutter app that uses WebRTC for calls, and I have an issue with taking calls on a locked screen. CXAnswerCallAction requires the action.fulfill() method to be called after the connection is established. Here is a piece of code that does not wait for the connection to be established:
guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
    action.fail()
    return
}
call.data.isAccepted = true
self.answerCall = call
self.callManager?.updateCall(call)
sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
    self.configureAudioSession()
}
action.fulfill()
}
This causes the connection time counter to be immediately visible on the screen, but the user still has to wait for connection establishment and can't hear anything.
Here is the code that waits for the establishment of the connection before calling action.fulfill():
if (self.awaitedConnection.uuid != uuid) {
    action.fail()
} else if (self.awaitedConnection.isConnected) {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1200)) {
        self.configureAudioSession()
    }
    action.fulfill()
} else {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(1000)) {
        self.waitForConnection(uuid: uuid, action: action)
    }
}
}
public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    guard let call = self.callManager?.callWithUUID(uuid: action.callUUID) else {
        action.fail()
        return
    }
    call.data.isAccepted = true
    self.answerCall = call
    self.callManager?.updateCall(call)
    self.awaitedConnection.uuid = action.callUUID
    self.awaitedConnection.isConnected = false
    sendEvent(SwiftCallKeepPlugin.ACTION_CALL_ACCEPT, call.data.toJSON())
    waitForConnection(uuid: action.callUUID, action: action)
}
Unfortunately, though it works great on iOS 15.7, on iOS 17.3 it results in no audio at all: no sound and no recording. I also can't enable audio later while the call is ongoing. For reference:
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(AVAudioSession.Category.playAndRecord, options: AVAudioSession.CategoryOptions.allowBluetooth)
    try session.setMode(self.getAudioSessionMode(data?.audioSessionMode ?? "voiceChat"))
    try session.setActive(data?.audioSessionActive ?? true)
    try session.setPreferredSampleRate(data?.audioSessionPreferredSampleRate ?? 44100.0)
    try session.setPreferredIOBufferDuration(data?.audioSessionPreferredIOBufferDuration ?? 0.005)
} catch {
    print(error)
}
}
I can see in the docs for action.fulfill() that "You should only call this method from the implementation of a CXProviderDelegate method". Is this the reason for the issue? But how can I do that if I need to wait for the connection asynchronously and the provider method is synchronous?
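For comparison, the pattern I understand to be recommended for CallKit (a sketch; my call-manager and WebRTC plumbing are omitted) is to configure, but not activate, the audio session inside the delegate callback, fulfill the action right away, and only start audio I/O once CallKit activates the session:

public func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    // Configure the session (category/mode) here, but do not call setActive(true).
    configureAudioSession()
    // Kick off the WebRTC connection asynchronously; do not block on it.
    action.fulfill()
}

public func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    // CallKit has activated the session: start the WebRTC audio unit here.
}

public func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    // Stop the audio unit here.
}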
If a user has AirPods and has renamed them to "xxxx", how can I get the original name of the AirPods?
I am writing code to monitor the incoming audio levels on visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips.
I took out some of the code so it's a bit shorter, as it fails in setupAudioEngine when I try to start the engine with this error:
Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)
Thanks in advance!
Here is my code:
class AudioInputMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?
    @Published var inputLevel: Float = 0

    init() {
        requestMicrophonePermission()
    }

    private func requestMicrophonePermission() {
        AVAudioApplication.requestRecordPermission { granted in
            DispatchQueue.main.async {
                if granted {
                    self.setupAudioSessionAndEngine()
                } else {
                    print("Microphone permission not granted")
                    // Handle the case where permission is not granted
                }
            }
        }
    }

    private func setupAudioSessionAndEngine() {
        do {
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
            try audioSession.setActive(true)
            self.setupAudioEngine()
        } catch {
            print("Failed to set up the audio session: \(error)")
        }
    }

    private func setupAudioEngine() {
        audioEngine = AVAudioEngine()
        guard let inputNode = audioEngine?.inputNode else {
            print("Failed to get the audio input node")
            return
        }
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
            self?.analyzeAudio(buffer: buffer)
        }
        do {
            try audioEngine?.start()
        } catch {
            print("Error starting audio engine: \(error.localizedDescription)")
        }
    }

    private func analyzeAudio(buffer: AVAudioPCMBuffer) {
        // removed to be brief
    }

    func stopMonitoring() {
        // removed to be brief
    }
}
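For reference, the status code can be decoded into a four-character code with a small helper like the one below (a sketch); 561145187 appears to decode to "!rec", which may correspond to AVAudioSession.ErrorCode.cannotStartRecording (my assumption, not confirmed):

// Decode a Core Audio OSStatus into its four-character code, if printable.
func fourCharCode(from status: Int32) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [UInt8((value >> 24) & 0xFF),
                 UInt8((value >> 16) & 0xFF),
                 UInt8((value >> 8) & 0xFF),
                 UInt8(value & 0xFF)]
    return String(bytes: bytes, encoding: .ascii) ?? String(status)
}

// fourCharCode(from: 561145187) == "!rec"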
Hi all,
I have created a QuickLook Preview for my custom datatype in my app.
I use SwiftUI wrapped in UIKit for the preview. My issue is that when I try to play audio using AVAudioPlayer, I receive an OSStatus -50 error.
Does anyone know if there are separate permissions I need to request before being able to do this?
Here are the errors I get while trying to set my audio session active and play on the AVAudioPlayer.
Thanks for your help and advice!
The operation couldn’t be completed. (OSStatus error -50.)
nwi_state: registration failed (9)
connection <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
auto-cancelling <connection: 0x100e0b270> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
throwing swix::exception: !(is_valid())
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
rebuilding null connection
0x2816bf680 reply: XPC_ERROR_CONNECTION_INVALID
connection <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } : error <dictionary: 0x251524530> { count = 1, transaction: 0, voucher = 0x0, contents =
"XPCErrorDescription" => <string: 0x2515246c8> { length = 18, contents = "Connection invalid" }
}
throwing swix::exception: !(is_valid())
auto-cancelling <connection: 0x100822a90> { name = com.apple.audio.AudioQueueServer, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 }
AQ_API_V2Impl.cpp:134 AudioQueueNew: <-AudioQueueNew failed -302
There is a CustomPlayer class that uses an MTAudioProcessingTap internally to modify the audio buffer.
Let's say there are two instances, A and B, of the CustomPlayer class.
While A and B are both running, the moment A finishes its operation and its instance is terminated, B's MTAudioProcessingTap process callback stops and its finalize callback is invoked, even though B still has work left to do.
On iOS 17.0 and lower, the same code in the same project does not behave this way: when A is terminated, B completes its task without any impact.
What change in iOS 17.1 is causing this? I'd appreciate an answer on how to avoid the issue.
let audioMix = AVMutableAudioMix()
var audioMixParameters: [AVMutableAudioMixInputParameters] = []
try composition.tracks(withMediaType: .audio).forEach { track in
    let inputParameter = AVMutableAudioMixInputParameters(track: track)
    inputParameter.trackID = track.trackID
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: UnsafeMutableRawPointer(
            Unmanaged.passRetained(clientInfo).toOpaque()
        ),
        init: { tap, clientInfo, tapStorageOut in
            tapStorageOut.pointee = clientInfo
        },
        finalize: { tap in
            Unmanaged<ClientInfo>.fromOpaque(MTAudioProcessingTapGetStorage(tap)).release()
        },
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in
            var timeRange = CMTimeRange.zero
            let status = MTAudioProcessingTapGetSourceAudio(tap,
                                                            numberFrames,
                                                            bufferListInOut,
                                                            flagsOut,
                                                            &timeRange,
                                                            numberFramesOut)
            if noErr == status {
                ....
            }
        })
    var tap: Unmanaged<MTAudioProcessingTap>?
    let status = MTAudioProcessingTapCreate(kCFAllocatorDefault,
                                            &callbacks,
                                            kMTAudioProcessingTapCreationFlag_PostEffects,
                                            &tap)
    guard noErr == status else {
        return
    }
    inputParameter.audioTapProcessor = tap?.takeUnretainedValue()
    audioMixParameters.append(inputParameter)
    tap?.release()
}
audioMix.inputParameters = audioMixParameters
return audioMix
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.
try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)
filePlayerNode ---> engine.mainMixerNode
bufferPlayerNode --> engine.mainMixerNode
engine.mainMixerNode --> engine.outputNode
//bufferPlayer.scheduleBuffer() is called on its own queue
The input works fine, since the buffers can be collected into a file and played back correctly, and the recognizer works fine too; but when I try to play the live audio by sending the buffers to the bufferPlayer on this or another device, the audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse.
I've tried experimenting with the AVAudioSession category options, having separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.
The ability to both play and record simultaneously is a basic feature of phones; when on speaker mode, a phone doesn't need to behave like a walkie-talkie. In my mind, it's inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with via a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without this, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring).
Is there truly no way to do this with AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
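One thing I am still double-checking (a guess, not a confirmed cause) is whether the distortion comes from a format mismatch between the tapped buffers and the player node's connection. A sketch of keeping them identical:

func startLoopbackTest(engine: AVAudioEngine, bufferPlayer: AVAudioPlayerNode) throws {
    engine.attach(bufferPlayer)
    // Connect the player with the exact format the input tap will produce,
    // so no implicit conversion happens between the scheduled buffers and the mixer.
    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(bufferPlayer, to: engine.mainMixerNode, format: inputFormat)

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        // Local loopback: play the captured buffer back as-is.
        bufferPlayer.scheduleBuffer(buffer, completionHandler: nil)
    }

    try engine.start()
    bufferPlayer.play()
}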
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device.
I want to use the built-in mic for the input and an A2DP device (AirPods Pro 2) for the output route.
Whenever I set the .allowBluetoothA2DP option on my AVAudioSession, the output changes to the speaker.
The mode is .default and the category is .playAndRecord.
If I do the same procedure with AirPods Pro 1, the output is set to the AirPods Pro 1.
I only have this trouble when I use AirPods Pro 2 with an iPhone on iOS 17. There seems to be no issue on iOS versions below 17.
Has anyone run into this kind of issue?
Thank you in advance.
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you perform setMode as shown below, you will observe that all previously set categoryOptions are cleared.
Example:
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)
If you need to change the mode while keeping the categoryOptions, you have to call setCategory again (a sketch of this follows). I haven't identified the exact reason for this behavior, and its practical impact on an application's functionality isn't clear to me yet. Why do you think it is handled this way?
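For reference, the workaround looks roughly like this: set category, mode, and options in a single call, so that no later setMode(_:) clears the options:

let session = AVAudioSession.sharedInstance()
// One call that establishes category, mode, and options together.
try session.setCategory(.playAndRecord,
                        mode: .voiceChat,
                        options: [.allowBluetooth, .defaultToSpeaker])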
I've been trying to make a native visionOS version of my iPad app, which uses AVAudioPlayer. Everything works fine on iOS and iPadOS; however, when running on visionOS, the audio sounds like it's constantly skipping (both in the simulator and on an actual device).
Does anyone know why this might be, or have a fix or workaround?
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:
EXC_BREAKPOINT
Exception 6, Code 1, Subcode 4304279688
+0x009888 AudioRecordModule.setupAudioEngine
+0x009788 AudioRecordModule.setupAudioEngine
+0x00c5bc AudioRecordModule.handleConfigurationChange
Below is the relevant code in the Recorder class.
public class AudioRecordModule: Module {
    private var audioEngine: AVAudioEngine?

    private func startRecording(options recordingOptions: RecordingOptions) {
        try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
        try AVAudioSession.sharedInstance().setActive(true)
        outputFormat = AVAudioFormat(
            commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
            sampleRate: Double(recordingOptions.sampleRate),
            channels: AVAudioChannelCount(recordingOptions.channels),
            interleaved: true
        )!
        let fileUri = URL(string: recordingOptions.fileUri)!
        let formatSettings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: recordingOptions.sampleRate,
            AVNumberOfChannelsKey: recordingOptions.channels,
            AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
            AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
        ]
        self.recordedFile = try AVAudioFile(
            forWriting: fileUri,
            settings: formatSettings,
            commonFormat: outputFormat.commonFormat,
            interleaved: outputFormat.isInterleaved
        )
        if !hadSetupNotification {
            setupNotifications()
        }
    }

    func handleConfigurationChange() {
        DispatchQueue.main.async {
            self.releaseAudioEngine()
            self.setupAudioEngine()
            if self.state == "recording" {
                // we could attempt to keep recording
                do {
                    try self.audioEngine?.start()
                } catch {
                    self.internalPauseRecording()
                    self.sendInterruptEvent()
                }
            }
        }
    }

    func setupNotifications() {
        nc.addObserver(
            forName: Notification.Name.AVAudioEngineConfigurationChange,
            object: nil,
            queue: nil
        ) { [weak self] _ in
            guard let weakself = self else {
                return
            }
            if weakself.state != "inactive" {
                weakself.handleConfigurationChange()
            }
        }
    }

    private func setupAudioEngine() {
        self.audioEngine = nil
        let audioEngine = AVAudioEngine()
        self.audioEngine = audioEngine
        let inputNode = audioEngine.inputNode
        let inputFormat = inputNode.inputFormat(forBus: 0)
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            do {
                let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                    outStatus.pointee = AVAudioConverterInputStatus.haveData
                    return buffer
                }
                let frameCapacity =
                    AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength
                    / AVAudioFrameCount(buffer.format.sampleRate)
                let outputBuffer = AVAudioPCMBuffer(
                    pcmFormat: self.outputFormat,
                    frameCapacity: frameCapacity
                )!
                var error: NSError?
                converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                if let error = error {
                    throw error
                } else {
                    try self.recordedFile?.write(from: outputBuffer)
                }
            } catch {
                print(error)
            }
        }
    }

    private func releaseAudioEngine() {
        if let audioEngine = self.audioEngine {
            audioEngine.inputNode.removeTap(onBus: 0)
            audioEngine.stop()
        }
        audioEngine = nil
    }
}
Besides that, the record module works normally. It is just the configuration change that it does not handle well.
I understand that when the configuration changes, I need to reinitialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and so on). If I don't do that, the app also crashes, perhaps due to the format mismatch.
AVAudioRecorder is not an option for me.
Thank you for your help.
:(
We are currently in the process of developing a video calling app using WebRTC.
We initiate one-to-one video calls with the AVAudioSession configured as follows:
do {
    if audioSession.category != .playAndRecord {
        try audioSession.setCategory(
            AVAudioSession.Category.playAndRecord,
            options: [
                .defaultToSpeaker
            ]
        )
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    }
    if audioSession.mode != .videoChat {
        try audioSession.setMode(.videoChat)
    }
} catch {
    logger.error(msg: "AVAudioSession: \(error.localizedDescription)")
}
After initiating a video call, we recorded this app's video call using the iOS default screen recording feature.
As a result, the recorded video includes system audio.
However, iOS/iPad apps with similar features (Zoom, Skype, Slack) do not include audio in their recordings.
Why does this difference occur?
Is this behavior a security feature of iOS, and are there specific conditions required?
Is there a need for some sort of configuration in AVAudioSession?
additional :(
I also reached out to Apple Developer Technical Support, and they responded, "We were able to reproduce it, but since we don't understand the issue, we will investigate it."
What's that about...
I am creating an app where you can record a video and listen to music in the background. At the top of my viewDidLoad I set the AVAudioSession Category to .playAndRecord
let audioSession = AVAudioSession.sharedInstance()
AVCaptureSession().automaticallyConfiguresApplicationAudioSession = false
do {
    try audioSession.setCategory(AVAudioSession.Category.playAndRecord, options: [.mixWithOthers, .allowAirPlay, .allowBluetoothA2DP])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
However when I do this the audio cuts out for a second or less at app open and app close. I would like the audio to continue playing and not cutout. Is there anything I can do to ensure the audio continues to play?
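One detail I am unsure about (an assumption on my part): automaticallyConfiguresApplicationAudioSession has to be set on the capture session instance that is actually used for recording, whereas in the snippet above it is set on a throwaway AVCaptureSession(). A sketch of what I mean:

// captureSession is the session the app actually configures and runs.
let captureSession = AVCaptureSession()
captureSession.automaticallyConfiguresApplicationAudioSession = false
// ... add the camera/microphone inputs and outputs, then:
// captureSession.startRunning()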
Hi there,
I am trying to record a meeting and upload it to an AWS server. The recording is in .m4a format and the upload request is a URLSession request.
The following code works perfectly for recordings shorter than 15 minutes, but for longer recordings it gets stuck.
Could you please help me out with this?
func startRecording() {
    let audioURL = getAudioURL()
    let audioSettings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    do {
        audioRecorder = try AVAudioRecorder(url: audioURL, settings: audioSettings)
        audioRecorder.delegate = self
        audioRecorder.record()
    } catch {
        finishRecording(success: false)
    }
}

func uploadRecordedAudio() {
    let _ = videoURL.startAccessingSecurityScopedResource()
    let input = UploadVideoInput(signedUrl: signedUrlResponse, videoUrl: videoURL, fileExtension: "m4a")
    self.fileExtension = "m4a"
    uploadService.uploadFile(videoUrl: videoURL, input: input)
    videoURL.stopAccessingSecurityScopedResource()
}

func uploadFileWithMultipart(endPoint: UploadEndpoint) {
    var urlRequest: URLRequest
    urlRequest = endPoint.urlRequest
    uploadTask = URLSession.shared.uploadTask(withStreamedRequest: urlRequest)
    uploadTask?.delegate = self
    uploadTask?.resume()
}
I am creating a camera app where I would like music from another app (Apple Music, Spotify, etc.) to continue playing once the app is opened. Currently I am using .mixWithOthers to do this in my viewDidLoad.
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(AVAudioSession.Category.playback, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("error trying to record and play audio")
}
However, I am running into an issue where the music only plays if you resume music playback after you start recording a video. Otherwise, when you open the app the music stops as soon as you see the preview. The interesting thing is that if you start playing music while recording, then once you stop, the music continues to play in the preview view. If you close the app (not force close) and reopen it, music playback continues as expected. However, once you force close the app, it returns to the original behavior. I've tried to research this and have not been able to find anything. Any help is appreciated. Let me know if more details are needed.
There is a method, setPreferredInput, in AVAudioSession that can be used to select a different input device. But is there any similar function, like a "setPreferredOutput", so that in my app I can select a specific audio output device to play audio?
I do not want the user to change it through system interfaces (such as Control Center), but through logic inside the app.
Thanks!
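For context, the closest related API seems to be the output override, which only toggles between the built-in speaker and the default route (a sketch; this is not a real per-device selection, and as far as I know there is no setPreferredOutput counterpart to setPreferredInput):

let session = AVAudioSession.sharedInstance()
do {
    try session.overrideOutputAudioPort(.speaker) // force the built-in speaker
    try session.overrideOutputAudioPort(.none)    // return to the default route
} catch {
    print("Override failed: \(error)")
}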