I'm developing an app where a user can bring video or other content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView.
This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space.
In windowed mode, playing content sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio.
When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb had been applied. The spatial audio effect is also diminished; it's still there, but nowhere near as immersive.
I've tried the following:
- Setting all entities in my space to use channel audio, including the web view attachment (a components-based equivalent is sketched right after this loop):

```swift
for entity in content.entities {
    // Render this entity's audio as head-locked channel audio and
    // clear any spatial/ambient components that might override it.
    entity.channelAudio = ChannelAudioComponent()
    entity.ambientAudio = nil
    entity.spatialAudio = nil
}
```
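If it matters, the equivalent through the components API would look like this (as far as I can tell, the convenience accessors above are just shorthand for it):

```swift
import RealityKit

// The same attempt via the components API; as far as I can tell the
// convenience accessors above are shorthand for exactly this.
for entity in content.entities {
    entity.components.set(ChannelAudioComponent())
    entity.components.remove(SpatialAudioComponent.self)
    entity.components.remove(AmbientAudioComponent.self)
}
```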
- Changing the `AVAudioSessionSpatialExperience`. I've also tried every sound stage size and anchoring strategy; `.large` works best, but doesn't remove that reverb. (Fuller context for where I call this is right after this list.)

```swift
let experience = AVAudioSessionSpatialExperience.headTracked(
    soundStageSize: .large,
    anchoringStrategy: .automatic
)
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)
```
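For completeness, this is roughly where I'm applying it: in `onAppear` of the view hosting the RealityView, with the error surfaced instead of swallowed. The `ImmersiveView` name and the `onAppear` placement are just from my project, not anything the docs prescribe:

```swift
import SwiftUI
import RealityKit
import AVFAudio

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // …entity and web view attachment setup…
        }
        .onAppear {
            do {
                // Ask the system for a head-tracked experience with the
                // largest sound stage; this still doesn't remove the reverb.
                let experience = AVAudioSessionSpatialExperience.headTracked(
                    soundStageSize: .large,
                    anchoringStrategy: .automatic
                )
                try AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)
            } catch {
                print("setIntendedSpatialExperience failed: \(error)")
            }
        }
    }
}
```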
I'm also aware of `ReverbComponent` in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too.
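If I do end up requiring visionOS 2, my understanding (untested, so treat the exact names as assumptions on my part) is that opting out of the room reverb would look something like this:

```swift
import RealityKit

// visionOS 2+ only. A sketch based on my reading of the docs:
// attach a ReverbComponent with .noReverb to an entity in the
// immersive space to opt out of the simulated room acoustics.
if #available(visionOS 2.0, *) {
    let reverbEntity = Entity()
    reverbEntity.components.set(ReverbComponent(reverb: .noReverb))
    content.add(reverbEntity)
}
```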
Am I missing something? Surely there's a way for developers to stop the system from messing with the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space than in a window.