Hi! I'm developing a music player app that switches between ApplicationMusicPlayer and AVAudioEngine. I'm facing an issue when switching from playback via ApplicationMusicPlayer to AVAudioEngine while the app is in the background. Based on testing, it seems the issue has to do with being unable to take audio focus while in the background, which causes the error AVAudioSessionErrorCodeCannotInterruptOthers.
I would like to check whether ApplicationMusicPlayer has its own audio focus, separate from the app's own audio focus. If so, is there anything I can do to ensure that ApplicationMusicPlayer returns focus to the app?
(I noticed that the issue does not occur when moving playback from AVAudioEngine to ApplicationMusicPlayer; I'm not sure why the opposite direction fails.)
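For reference, this is roughly the hand-over I'm attempting when AVAudioEngine takes over from ApplicationMusicPlayer. It's a minimal sketch under the assumption that the app's shared AVAudioSession is what needs to be (re)activated before the engine starts; activateSessionForEngine() is just an illustrative name, not existing code.
import AVFoundation

// Hypothetical sketch: activate the app's own session right before
// AVAudioEngine takes over from ApplicationMusicPlayer.
func activateSessionForEngine() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, options: [])
    // In my testing, this is the call that fails in the background with
    // AVAudioSessionErrorCodeCannotInterruptOthers.
    try session.setActive(true)
}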
AVAudioSession
Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.
Posts under AVAudioSession tag
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted, and attempting to restart the session and audio engine has no effect. I need to dismiss the app and then reopen it, which reopens the main window, in order for audio to start playing again.
This happens in all visionOS 2 betas. Note that I have background audio enabled for my app.
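For reference, this is the kind of restart logic I'm attempting. It's a simplified sketch that assumes a single AVAudioEngine instance and the shared AVAudioSession; observeInterruptions(engine:) is an illustrative name.
import AVFoundation

// Sketch: reactivate the session and restart the engine when the
// interruption ends. On visionOS 2 this appears to have no effect for me
// once the window has been closed.
func observeInterruptions(engine: AVAudioEngine) {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard
            let info = notification.userInfo,
            let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
            let type = AVAudioSession.InterruptionType(rawValue: typeValue),
            type == .ended
        else { return }

        do {
            try AVAudioSession.sharedInstance().setActive(true)
            try engine.start()
        } catch {
            print("Failed to restart audio after interruption: \(error)")
        }
    }
}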
I'm experiencing stuttering every time I record something with my iOS app on iOS 18 beta. The code ran fine on previous iOS versions.
The stuttering occurs for the first 2 seconds. Here's an example:
https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering
The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap are configured:
let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()

engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
    guard let self = self else { return }
    do {
        // Write recording to disk
        try audioFile.write(from: buffer)
    } catch {
        // ...
    }
}
I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
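One diagnostic I'm considering (just an idea, not a fix; TapGapLogger is my own hypothetical helper) is to log the sample-time gap between successive tap buffers, to see whether frames are actually being dropped during those first two seconds or whether the glitch happens later in the write path.
import AVFoundation

// Logs discontinuities between successive tap buffers.
final class TapGapLogger {
    private var lastEndSampleTime: AVAudioFramePosition?

    func log(buffer: AVAudioPCMBuffer, time: AVAudioTime) {
        guard time.isSampleTimeValid else { return }
        if let lastEnd = lastEndSampleTime, time.sampleTime != lastEnd {
            print("Gap of \(time.sampleTime - lastEnd) frames before this buffer")
        }
        lastEndSampleTime = time.sampleTime + AVAudioFramePosition(buffer.frameLength)
    }
}
Calling log(buffer:time:) from inside the existing tap closure should show whether the stutter corresponds to missing input frames or to something downstream.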
Hi, Team.
We are currently creating a VoIP calling app using pjsip and want to be able to end a call using the headset button while the app is in the middle of a call (AVAudioSession.category == .playAndRecord), but MPRemoteCommand does not receive any events.
After trying various things, we found that the button responds if the audio output destination is set to the speaker or if .allowBluetoothA2DP is set as an option, but neither is suitable for this use case because audio input and output would then come from the device rather than the headset.
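For context, the remote commands are registered roughly like this. This is a simplified sketch rather than the exact code from the sample project linked below, and the handler bodies are placeholders.
import MediaPlayer

// Sketch of the remote-command registration.
func registerRemoteCommands() {
    let center = MPRemoteCommandCenter.shared()

    // As far as I understand, the headset button is normally delivered as a
    // toggle play/pause command.
    center.togglePlayPauseCommand.isEnabled = true
    _ = center.togglePlayPauseCommand.addTarget { _ in
        print("togglePlayPauseCommand received")
        return .success
    }

    _ = center.playCommand.addTarget { _ in .success }
    _ = center.pauseCommand.addTarget { _ in .success }
}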
=================================================
Problem
Headset button events cannot be received from MPRemoteCommand during a call.
What is expected to happen?
When the headset button is pressed during a call, a handler registered on one of the MPRemoteCommand objects is called.
What actually happens?
No MPRemoteCommand handler responds when the headset button is pressed during a call.
Information
Sample code
Echoes back the audio input with a 5-second delay to simulate a phone call.
https://github.com/ryu-akaike/HeadsetTalkTest-iOS/
Versions
macOS: Sonoma 14.5
Xcode: 15.3
iPhone: 11
iOS: 17.5.1
=================================================
Thank you.
Ryu Akaike
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones).
I've done a fair bit of reading around using AVAudioSession.Category.multiroute but haven't found any modern examples. @theanalogkid posted a nice example using obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift.
To make matters worse, this is one of the very few examples of how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and are likely outdated, with most having no responses other than the poster's own echo. They don't paint a pretty picture of .multiroute's health: one recent poster noted that the volume buttons don't work in this mode and, after contacting DTS, found that there is no fix; another found that it simply doesn't work on certain devices; and so on.
Audio is giving me enough of a headache so I'd like to avoid slogging through this if possible. .multiroute feels like the developer mode of AVAudioSession, but without documentation.
tl;dr - Without using .multiroute, is there a way for an app to output to two different devices while simultaneously recording audio? If .multiroute is the only way to achieve this, can someone give me a quick rundown of how this category works?
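Based on the old sample and the WWDC session, my rough understanding of the session setup is something like the sketch below. It only covers the category and route inspection, not the actual routing of two separate streams, and it is an assumption on my part rather than verified guidance.
import AVFoundation

// Sketch: activate the multi-route category and inspect the resulting route.
func activateMultiRouteSession() throws {
    let session = AVAudioSession.sharedInstance()
    // .multiRoute is supposed to expose each connected output as its own set
    // of channels on the current route.
    try session.setCategory(.multiRoute, options: [])
    try session.setActive(true)

    for output in session.currentRoute.outputs {
        print("Output port: \(output.portType.rawValue), channels: \(output.channels?.count ?? 0)")
    }
}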
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app with a Vision Pro, the device seemed to only pick up the user's voice (in other words, the voice of the person wearing the Vision Pro). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up their speech well.
I tried to set the AVAudioSession to be equally sensitive to all directions:
private func configureAudioSession() {
    do {
        try audioSession.setCategory(.record, mode: .measurement)
        try audioSession.setActive(true)
        if #available(visionOS 1.0, *) {
            let availableDataSources = audioSession.availableInputs?.first?.dataSources
            if let omniDirectionalSource = availableDataSources?.first(where: { $0.preferredPolarPattern == .omnidirectional }) {
                try audioSession.setInputDataSource(omniDirectionalSource)
            }
        }
    } catch {
        print("Failed to set up audio session: \(error)")
    }
}
And here is how I set up the speech recognition and configure the microphone inputs:
private func startSpeechRecognition(completion: @escaping (String) -> Void) {
    do {
        // Cancel the previous task if it's running.
        if let recognitionTask = recognitionTask {
            recognitionTask.cancel()
            self.recognitionTask = nil
        }

        // The AudioSession is already active, creating input node.
        let inputNode = audioEngine.inputNode
        try inputNode.setVoiceProcessingEnabled(false)

        // Create and configure the speech recognition request
        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create a recognition request") }
        recognitionRequest.shouldReportPartialResults = true

        // Keep speech recognition data on device
        if #available(iOS 13, *) {
            recognitionRequest.requiresOnDeviceRecognition = true
        }

        // Create a recognition task for the speech recognition session.
        // Keep a reference to the task so that it can be canceled.
        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
            // var isFinal = false
            if let result = result {
                // Update the recognizedText
                completion(result.bestTranscription.formattedString)
            } else if let error = error {
                completion("Recognition error: \(error.localizedDescription)")
            }
            if error != nil || result?.isFinal == true {
                // Stop recognizing speech if there is a problem
                self.audioEngine.stop()
                inputNode.removeTap(onBus: 0)
                self.recognitionRequest = nil
                self.recognitionTask = nil
            }
        }

        // Configure the microphone input
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()
    } catch {
        completion("Audio engine could not start: \(error.localizedDescription)")
    }
}
Description:
I am developing a recording-only application that supports background recording using AVAudioEngine. The app segments the recording into 60-second files for further processing. For example, a 10-minute recording results in ten 60-second files.
Problem:
The application functions as expected in the background. However, after the app receives an interruption (such as a phone call) and the interruption ends, I can successfully restart the recording. The problem arises when the app then transitions to the background; it fails to restart the recording. Specifically, after ending the call and transitioning the app to the background, the app encounters an error and is unable to restart AVAudioSession and AVAudioEngine. The only resolution is to close and restart the app, which is not ideal for user experience.
Steps to Reproduce:
1. Start recording using AVAudioEngine.
2. The app records and saves 60-second segments.
3. Receive an interruption (e.g., an incoming phone call).
4. End the call.
5. Transition the app to the background.
6. Transition the app to the foreground and the session will be activated again.
7. Attempt to restart the recording.
Expected Behavior:
The app should resume recording seamlessly after the interruption and background transition.
Actual Behavior:
The app fails to restart AVAudioSession and AVAudioEngine, resulting in a continuous error. The recording cannot be resumed without closing and reopening the app.
How I’m Starting the Recording:
Configuration:
internal func setAudioSessionCategory() {
    do {
        try audioSession.setCategory(
            .playAndRecord,
            mode: .default,
            options: [.defaultToSpeaker, .mixWithOthers, .allowBluetooth]
        )
    } catch {
        debugPrint(error)
    }
}

internal func setAudioSessionActivation() {
    if UIApplication.shared.applicationState == .active {
        do {
            try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            if audioSession.isInputGainSettable {
                try audioSession.setInputGain(1.0)
            }
            try audioSession.setPreferredIOBufferDuration(0.01)
            try setBuiltInPreferredInput()
        } catch {
            debugPrint(error)
        }
    }
}
Starting AVAudioEngine:
internal func setupEngine() {
    if callObserver.onCall() { return }
    inputNode = audioEngine.inputNode
    audioEngine.attach(audioMixer)
    audioEngine.connect(inputNode, to: audioMixer, format: AVAudioFormat.validInputAudioFormat(inputNode))
}

internal func beginRecordingEngine() {
    audioMixer.removeTap(onBus: 0)
    audioMixer.installTap(onBus: 0, bufferSize: 1024, format: AVAudioFormat.validInputAudioFormat(inputNode)) { [weak self] buffer, _ in
        guard let self = self, let file = self.audioFile else { return }
        write(file, buffer: buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
        recordingTimer = Timer.scheduledTimer(withTimeInterval: recordingInterval, repeats: true) { [weak self] _ in
            self?.handleRecordingInterval()
        }
    } catch {
        debugPrint(error)
    }
}
On the try audioEngine.start() call, I receive error code 561145187 in the catch block.
Logs/Error Messages:
• Error code: 561145187
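For what it's worth, 561145187 looks like the FourCC '!rec', which I believe corresponds to AVAudioSession.ErrorCode.cannotStartRecording. Below is the interruption handling I'm using, reduced to a sketch; restartRecording() is a placeholder for my actual session and engine restart path.
import AVFoundation

// Sketch of the interruption handling (simplified).
func observeInterruptions() {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard
            let info = notification.userInfo,
            let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
            let type = AVAudioSession.InterruptionType(rawValue: typeValue)
        else { return }

        switch type {
        case .began:
            // The system has already stopped the engine at this point.
            break
        case .ended:
            let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                restartRecording() // placeholder for reactivating the session and engine
            }
        @unknown default:
            break
        }
    }
}

func restartRecording() {
    // Placeholder: reactivate AVAudioSession and restart AVAudioEngine here.
}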
Request:
I would appreciate any guidance or solutions to ensure the app can resume recording after interruptions and background transitions without requiring a restart.
Thank you for your assistance.
I'm building an app that will allow users to record voice notes. The functionality of all that is working great; I'm now trying to implement changes to the audio session to manage possible audio streams from other apps. I want it so that if there is audio playing from a different app and the user opens my app, that audio keeps playing. When we start recording, any third-party app audio should stop, and it can then resume again when we stop recording.
This is my main audio setup code:
private var audioEngine: AVAudioEngine!
private var inputNode: AVAudioInputNode!
private var audioPlayerNode: AVAudioPlayerNode!

func setupAudioEngine() {
    audioEngine = AVAudioEngine()
    inputNode = audioEngine.inputNode
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attach(audioPlayerNode)
    let format = AVAudioFormat(standardFormatWithSampleRate: AUDIO_SESSION_SAMPLE_RATE, channels: 1)
    audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: format)
}

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth])
        try audioSession.setPreferredSampleRate(AUDIO_SESSION_SAMPLE_RATE)
        try audioSession.setPreferredIOBufferDuration(0.005) // 5 ms buffer for lower latency
        try audioSession.setActive(true)
        // Add observers
        setupInterruptionObserver()
    } catch {
        audioErrorMessage = "Failed to set up audio session: \(error)"
    }
}
This is all called upon app startup so we're ready to record whenever the user presses the record button.
However, currently when this happens, any outside audio stops playing.
I isolated the issue to this line: inputNode = audioEngine.inputNode
When that's commented out, the audio will play -- but I obviously need this for recording functionality.
Is this a bug? Expected behavior?
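For clarity, this is the ordering I'm trying to achieve. It's a sketch based on my assumption that taking the record route at launch is what stops other audio; configureIdleSession() and startRecordingSession() are illustrative names, not existing code.
import AVFoundation

// While idle: stay mixable so other apps' audio keeps playing.
func configureIdleSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, options: [.mixWithOthers])
    try session.setActive(true)
}

// At record time: take the record-capable, non-mixable route, which is when
// other audio is allowed to stop.
func startRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true)
}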
Hi there community,
First and foremost, a big thank you to everyone who takes the time to read this.
TL;DR: How, if even possible, can I record multiple audio streams simultaneously on an iOS application (iPad/iPhone)?
I'm working on a recorder for the iPad to gather data for a machine learning project focused on speech recognition. Our goal is to capture extensive speech data, which requires recording from multiple microphones. Specifically, I need to record from all mics connected to our Scarlett 4i4 audio interface and, most importantly, also record from the built-in mic on the iPad or iPhone at the same time.
As a newcomer to Swift development, I initially explored AVAudioRecorder. However, I quickly realized that it only supports one active audio node at a time, making multi-channel recording impossible. (Perhaps you can prove me wrong; that would make my day.) Next, I transitioned to using AVAudioEngine, but encountered the same limitation: I couldn't manage to get input nodes for both the built-in mic and the Scarlett interface channels simultaneously. The application started behaving oddly, often resulting in identical audio data being recorded across all files.
Determined to find a solution, I delved deeper into the Core Audio framework, specifically using Audio Toolbox. My approach involved creating and configuring multiple Audio Units, each corresponding to a different audio input device. Here's a brief overview of my current implementation:
Listing Available Input Devices: I used AVAudioSession to enumerate all available input devices (see the sketch after this list).
Creating Audio Units: For each device, I created an Audio Unit and attempted to configure it for recording.
Setting Up Callbacks: I set up input and output callbacks to handle the audio processing.
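As a reference for step 1, the enumeration looks roughly like this. It's only a sketch that lists ports and data sources; it does not solve the simultaneous-capture problem.
import AVFoundation

// Sketch of step 1: list available input ports, their channel counts,
// and their data sources.
func listAvailableInputs() {
    let session = AVAudioSession.sharedInstance()
    for port in session.availableInputs ?? [] {
        let channelCount = port.channels?.count ?? 0
        print("\(port.portName) [\(port.portType.rawValue)] - \(channelCount) channel(s)")
        for source in port.dataSources ?? [] {
            print("  data source: \(source.dataSourceName)")
        }
    }
}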
Despite my efforts over the last few days, I haven't had much success. The callbacks for the Audio Units don't seem to be invoked correctly, and I'm struggling to achieve simultaneous multi-channel recording. Below is a snippet of my latest attempt:
let audioUnitCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }

    print("Input callback invoked")

    let audioUnit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee

    var bufferList = AudioBufferList(
        mNumberBuffers: 1,
        mBuffers: AudioBuffer(
            mNumberChannels: 1,
            mDataByteSize: 0,
            mData: nil
        )
    )

    let status = AudioUnitRender(audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList)
    if status != noErr {
        print("AudioUnitRender failed: \(status)")
        return status
    }

    // Copy rendered data to output buffer
    let outputBuffers = UnsafeMutableAudioBufferListPointer(ioData)
    outputBuffers[0].mData?.copyMemory(from: bufferList.mBuffers.mData!, byteCount: Int(bufferList.mBuffers.mDataByteSize))
    outputBuffers[0].mDataByteSize = bufferList.mBuffers.mDataByteSize

    print("Rendered audio data")
    return noErr
}

let outputCallback: AURenderCallback = { (
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBusNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?
) -> OSStatus in
    guard let ioData = ioData else {
        return noErr
    }
    print("Output callback invoked")
    // Process the output data if needed
    return noErr
}
In essence, I'm stuck and in need of guidance. Has anyone here successfully implemented multi-channel recording on iOS, especially involving both built-in microphones and external audio interfaces? Any shared experiences, insights, or suggestions on how to proceed would be immensely appreciated.
Thank you once again for your time and assistance!
We develop a music app and have run into a scenario in which there is no way to resume playing music, so I would like to ask how this should be handled technically.
For example, when another app plays a video, we pause the music; when the video is closed, we should resume the music.
Our code listens for AVAudioSessionInterruptionNotification; when we receive the notification and it includes AVAudioSessionInterruptionOptionShouldResume, we try to play music again, but error 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers) is reported. We are very confused.
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:&error];
[audioSession setActive:YES error:&error];
We compared this with the Apple Music app and found that Apple Music can resume playing.
Here is a video of the effects of our app:
https://drive.google.com/file/d/1J94S2kxkEpNvG536yzCnKmE7IN3cGzIJ/view?usp=sharing
Here's the apple music effect video:
https://drive.google.com/file/d/1c1Kdgkn2nhy8SdDvRJAFF2sPvqJ8fL48/view?usp=sharing
We want to improve our user experience. How can we do that?
Music app stops playing when switching to the background
In our app, which plays music and audio files, if the user moves to the home screen or runs another app while playback is in progress, the music stops.
Our app does not contain any code that stops playback when it moves to the background.
We are guessing that some people experience this and others do not.
We usually guide users to reboot their devices and try again.
How can this phenomenon be improved in the code?
Or is this a bug or error in the OS?
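For reference, this is the minimal configuration we understand is needed for background playback, in addition to the "audio" entry under UIBackgroundModes in Info.plist. It is a sketch of our setup, so please point out anything that is missing.
import AVFoundation

// Minimal playback session setup for background audio (as we understand it).
func configurePlaybackSession() {
    do {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default, options: [])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}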
Hello, I am building a new iOS app which uses AVSpeechSynthesizer and should be able to mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession active on its own, but it does not deactivate the audio session. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking.
I have implemented deactivating the audio session myself, which "works", in that it allows other audio sources to become "un-ducked", but it throws this exception each time even though it appears successful.
Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
It appears to be a bug with how AVSpeechSynthesizer handles activating/deactivating the audio session.
Below is a minimal example which illustrates the problem. It has two buttons: one manually deactivates the audio session, which throws the exception but otherwise works; the other leaves audio session management to the AVSpeechSynthesizer but does not "un-duck" other audio.
If you play some audio from another app (ex: Music), you'll see the button which throws/catches an exception successfully ducks/un-ducks the audio, while the one without attempting to deactivate the session ducks but does not un-duck the audio.
import AVFoundation
import SwiftUI

struct ContentView: View {
    let workingSynthesizer = UnduckingSpeechSynthesizer()
    let brokenSynthesizer = BrokenSpeechSynthesizer()

    init() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        } catch {
            print("Setup error info: \(error)")
        }
    }

    var body: some View {
        VStack {
            Button("Works Correctly") {
                workingSynthesizer.speak(text: "Hello planet")
            }
            Text("-------")
            Button("Does not work") {
                brokenSynthesizer.speak(text: "Hello planet")
            }
        }
        .padding()
    }
}

class UnduckingSpeechSynthesizer: NSObject {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}

extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        do {
            try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            // always throws an error
            // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
            print("Deactivate error info: \(error)")
        }
    }
}

class BrokenSpeechSynthesizer {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}
(I have a separate issue where the first speech attempt takes a few seconds but I don't think it's related)
Hello,
I hope this message finds you well. I am currently working on a Unity-based iOS application that requires continuous microphone input while also producing sound output. For this we need iOS echo cancellation, so some sounds need to be played via the iOS layer with echo cancellation. I am manually setting up the audio session after the app starts, using the .playAndRecord category of AVAudioSession. However, I am facing an issue where the volume of the sound output is inconsistent across different iOS devices and scenarios.
The process is quite simple: for each AudioClip we are about to play via Unity, we copy the buffer data to our iOS Swift layer, which does all the processing and then plays the audio via the native layer.
Here are the specific issues I am encountering:
The volume level of the game sound effects fluctuates between a normal, audible volume and a very low volume.
The sound output behaves differently depending on whether the app is launched with the device at full volume or muted, and on whether the app is sent to the background and brought back to the foreground.
The volume inconsistency affects my game negatively, as it is very hard to hear some audio clips, regardless of the device or its initial volume state. I have followed the basic setup for AVAudioSession as per the documentation, but the inconsistencies persist.
I'm also aware that Unity uses FMOD to set up the audio routing on iOS; we configure our custom routing after that.
We tried tweaking the output volume prior to playing an audio clip so there isn't much discrepancy. This seems to align the output volume, but there are still some places where the volume is very low. I've looked at the waveforms in Unity and they all seem consistent, so there is no obvious reason why the volume would take a dip.
private var audioPlayer = AVAudioPlayerNode()

@objc public func Play() {
    audioPlayer.volume = AVAudioSession.sharedInstance().outputVolume * 0.25
    audioPlayer.play()
}
We also explored changing the audio session options to see if we had any luck but unfortunately nothing has changed.
private func ConfigAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
        try audioSession.setMode(.spokenAudio)
        try audioSession.setActive(true)
    } catch {
        // Treat error
    }
}
Could anyone provide guidance or suggest best practices to ensure a stable and consistent volume output in this scenario? Any advice on this issue would be greatly appreciated.
Thank you in advance for your help!
I have an iPad Pro 12.9". I am looking to make an app which can take a simultaneous audio recording from two different microphones at the same time. I want to be able to specify which of the 5 built-in microphones each audio stream should use - ideally one should be from the microphone on the left side of the iPad, and the other should be from one of the mics at the top of the iPad. Is this possible to achieve with the API?
The end goal here is to be able to use the two audio streams and do some DSP on the recordings to determine the approximate direction a particular sound comes from.
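From what I can tell, the built-in microphones are exposed as data sources on a single built-in mic port. Here is how I'm enumerating and selecting them; it's a sketch, and whether two data sources can be captured simultaneously is exactly what I'm unsure about.
import AVFoundation

// Sketch: list the built-in microphone's data sources and their positions,
// then select one as the preferred source.
func inspectBuiltInMic() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setActive(true)

    guard let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }) else { return }

    for source in builtInMic.dataSources ?? [] {
        print("\(source.dataSourceName) - location: \(source.location?.rawValue ?? "?"), orientation: \(source.orientation?.rawValue ?? "?")")
    }

    // Example: prefer the data source at the top of the device.
    if let top = builtInMic.dataSources?.first(where: { $0.orientation == .top }) {
        try builtInMic.setPreferredDataSource(top)
        try session.setPreferredInput(builtInMic)
    }
}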
I am working on a VoIP-based PTT app. It uses the 'voip' APNs notification type to learn about new incoming PTT calls.
When my app receives a PTT call, the app plays audio, but the call audio is not heard. The API [[AVAudioSession sharedInstance] outputVolume] returns 0, yet the phone volume is clearly not zero: checking it with the side volume buttons shows it is above 50%.
This behavior is observed in both app foreground and background scenario.
Why does the API return a zero volume level? Is there any other reason why the app's audio is not heard?
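For reference, here is how I'm reading the value, written as a Swift sketch for brevity. One assumption I'm now testing is whether the session must be active before outputVolume reports a meaningful value, so the sketch reads it only after activation and also key-value observes it.
import AVFoundation

final class VolumeReader: NSObject {
    private var observation: NSKeyValueObservation?

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try session.setActive(true)

        // Read only after the session is active (the assumption being tested).
        print("outputVolume after activation: \(session.outputVolume)")

        // outputVolume is key-value observable, so also track changes.
        observation = session.observe(\.outputVolume, options: [.new]) { _, change in
            print("outputVolume changed to \(change.newValue ?? -1)")
        }
    }
}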
The input position reported by AVAudioSession is different when I use the speaker.
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
try session.overrideOutputAudioPort(.speaker)
try session.setActive(true)

let route = session.currentRoute
route.inputs.forEach { input in
    print(input.selectedDataSource?.location)
}
In iPhone 11(iOS 17.5.1),
AVAudioSessionLocation: Lower
In iPhone 7 Plus(iOS 15.8.2),
AVAudioSessionLocation: Upper
What causes this difference in behavior?
We check AVAudioSessionInterruptionOptionShouldResume to decide whether to restore audio playback.
This logic has been in production for a long time and has resumed audio playback normally.
But recently we've had a lot of user feedback asking why the audio won't resume playing.
Based on this feedback, we found that some apps hold the audio session even when they are not playing audio. For example, when a user was using the WeChat app, after sending a voice message we received the notification telling us to resume audio playback, and WeChat was not playing audio either, yet our attempt to resume failed with AVAudioSessionErrorCodeCannotInterruptOthers.
We reported this to the WeChat team and that case was fixed. But some users still report the problem, and we do not know which app is holding the audio session, so we do not know where to start troubleshooting.
We pay close attention to user feedback and hope you can help us solve this user experience problem.
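One diagnostic we are considering (an assumption on our side, not a confirmed troubleshooting method) is to log the system's view of other audio right before we try to reactivate, so we can correlate the failing cases.
import AVFoundation

// Log whether the system still considers other audio to be playing before we
// attempt to resume, to help identify which sessions are holding audio.
func attemptResume() {
    let session = AVAudioSession.sharedInstance()
    print("isOtherAudioPlaying: \(session.isOtherAudioPlaying)")
    print("secondaryAudioShouldBeSilencedHint: \(session.secondaryAudioShouldBeSilencedHint)")

    do {
        try session.setCategory(.playback)
        try session.setActive(true)
        // Resume playback here.
    } catch {
        print("Resume failed: \(error)")
    }
}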
I'm developing an app where a user can bring a video or content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView.
This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space.
When in windowed mode, content playing sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio.
When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb has been applied. The spatial audio effect is also reduced, and while still there, is nowhere near as immersive.
I've tried the following:
Setting all entities in my space to use channel audio, including the web view attachment.
for entity in content.entities {
    entity.channelAudio = ChannelAudioComponent()
    entity.ambientAudio = nil
    entity.spatialAudio = nil
}
Changing the AVAudioSessionSpatialExperience:
I've also tried every soundstage size and anchoring strategy; .large works the best, but doesn't remove that reverb.
let experience = AVAudioSessionSpatialExperience.headTracked(
    soundStageSize: .large,
    anchoringStrategy: .automatic
)
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)
I'm also aware of ReverbComponent in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too.
Am I missing something? Surely there's a way for developers to stop the system messing with the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space compared to in a window.
PLATFORM AND VERSION
iOS
Development environment: Xcode 15.0, macOS 14.4.1, Objective-C
Run-time configuration: iOS 17.2.1,
DESCRIPTION OF PROBLEM
I am developing an application that uses NetworkExtension (VoIP local push function).
But iOS sometimes doesn't call didActivateAudioSession after the following sequence.
Could you tell me why iOS doesn't call didActivateAudioSession?
(I said "sometimes", but once it occurs, it will occur repeatedly)
myApp --- CXStartCallAction ---> iOS
myApp <-- performStartCallAction callback --- iOS
myApp --- AVAudioSession setCategory: AVAudioSessionCategoryPlayAndRecord ---> iOS
myApp --- AVAudioSession setMode: AVAudioSessionModeVoiceChat ---> iOS
myApp <-- didActivateAudioSession callback --- iOS
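In code, that sequence corresponds roughly to the following. It is a simplified sketch written in Swift for brevity (our app is Objective-C), with error handling omitted.
import AVFoundation
import CallKit

final class CallManager: NSObject, CXProviderDelegate {
    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        // Configure (but do not activate) the session; CallKit is expected to
        // activate it and then call didActivate.
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playAndRecord)
        try? session.setMode(.voiceChat)
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Start the call audio here. This is the callback that sometimes
        // never arrives in my case.
    }

    func providerDidReset(_ provider: CXProvider) {}
}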
I suspect that myApp cannot acquire an AVAudioSession if another app is already using AVAudioSession.
[QUESTION1]
Is my guess correct? Should I consider another cause?
[QUESTION2]
If my guess is correct, how can I prove if another app is already using an AVAudioSession?
This issue is based on a customer complaint, but the customer said they don't use any other apps.
Best Regards,
Hello, today when we uploaded a new TestFlight Mac Catalyst build we received an email about the build being invalid:
TMS-90338: Non-public API usage - The app references non-public symbols in {app name}: _AVCaptureDeviceTypeBuiltInTelephotoCamera, _AVCaptureDeviceTypeBuiltInTrueDepthCamera, _AVCaptureDeviceTypeBuiltInUltraWideCamera, _AVCaptureSessionInterruptionReasonKey, _AVCaptureSessionInterruptionSystemPressureStateKey, _AVCaptureSystemPressureLevelCritical, _AVCaptureSystemPressureLevelFair, _AVCaptureSystemPressureLevelNominal, _AVCaptureSystemPressureLevelSerious, _AVCaptureSystemPressureLevelShutdown. If method names in your source code match the private Apple APIs listed above, altering your method names will help prevent this app from being flagged in future submissions. In addition, note that one or more of the above APIs may be located in a static library that was included with your app. If so, they must be removed. For further information, visit the Technical Support Information at http://developer.apple.com/support/technical/
We've been uploading builds the same way for months, using the same Xcode 15.2 and dependency versions, and have checked our most recent commits since the last release and nothing was updated around AVFoundation, archiving, etc. Did anything change on Apple's side recently?
We use Xcode 15.2 to build/archive/upload and xcodebuild to run all commands.