Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

MusicKit: How do we search by title or name only?
I can't find any way to search for a song by title only. You can search for songs, but any term you provide appears to be applied to any metadata associated with the song. Look at the largely nonsensical results when I search for a song with the letters "de": In many cases, that string doesn't appear anywhere. I used MusicCatalogSearchRequest(term: searchTerm, types: [Song.self]) Likewise it stands to reason that people want to search for artist and album names using text strings. How do we do that?
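
One workaround worth trying (a sketch, not an official title-only search API — as far as I can tell MusicKit does not expose one): run the catalog search as above and then filter the returned songs locally on their title. The function name and the local filter are assumptions for illustration.

import MusicKit

// Search the catalog broadly, then keep only songs whose title actually
// contains the typed text. `searchTerm` is the user's query string.
func searchSongsByTitle(_ searchTerm: String) async throws -> [Song] {
    var request = MusicCatalogSearchRequest(term: searchTerm, types: [Song.self])
    request.limit = 25
    let response = try await request.response()
    return response.songs.filter {
        $0.title.localizedCaseInsensitiveContains(searchTerm)
    }
}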
0 replies · 0 boosts · 73 views · 1d

AVSpeechSynthesizer - just not working on 15.1.1
So create a Swift file and put this in it:

import Foundation
import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, testing speech synthesis on macOS.")

if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
    utterance.voice = voice
    print("Using voice: \(voice.name), \(voice.language)")
} else {
    print("Daniel voice not found on macOS.")
}

synthesizer.speak(utterance)

I get no speech output, and this log output:

Error reading languages in for local resources.
Error reading languages in for local resources.
Using voice: Daniel, en-GB
Program ended with exit code: 0

Why? And what's with "Error reading languages in for local resources."?
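
One thing worth checking first (an assumption about the setup, not a confirmed diagnosis): speak(_:) is asynchronous, and a command-line tool can reach the end of main and exit before any audio is rendered, which would match the immediate "Program ended with exit code: 0". Below is a minimal sketch that keeps the process alive until the synthesizer reports completion; it does not explain the "Error reading languages in for local resources." log.

import Foundation
import AVFoundation

// Keeps the run loop spinning until the utterance finishes, so a
// command-line process does not exit before audio is produced.
final class SpeechRunner: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()
    private var finished = false

    func speak(_ text: String) {
        synthesizer.delegate = self
        let utterance = AVSpeechUtterance(string: text)
        if let voice = AVSpeechSynthesisVoice(identifier: "com.apple.voice.compact.en-GB.Daniel") {
            utterance.voice = voice
        }
        synthesizer.speak(utterance)
        while !finished {
            RunLoop.current.run(mode: .default, before: Date(timeIntervalSinceNow: 0.1))
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        finished = true
    }
}

SpeechRunner().speak("Hello, testing speech synthesis on macOS.")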
0 replies · 0 boosts · 81 views · 1d

AirPods Pro issue during a VoIP call
Case-ID: 10075936

PLATFORM AND VERSION
iOS
Development environment: Xcode 15, macOS 14.5
Run-time configuration: iOS 18.0.1

DESCRIPTION OF PROBLEM
Our customer experienced a one-way audio issue when switching from the built-in microphone to AirPods Pro (model: A2084, version: 6F21) during a VoIP call. The customer's voice could not be heard by the other party, but the customer could hear the other party's voice.

STEPS TO REPRODUCE
Here are the details: after the issue occurred, subsequent VoIP calls experienced the same issue when using AirPods Pro, but the issue did not occur when using the built-in microphone. The issue could only be resolved by restarting the system; killing the app did not help.

Log and code analysis: WebRTC listens for AVAudioSessionRouteChangeNotification. In the above scenario, when WebRTC receives the route change notification, it prints the audio session configuration. At this point, the input channel count was 0, which is abnormal:

[Webrtc] (RTCLogging.mm:33): (audio_device_ios.mm:535 HandleValidRouteChange): RTC_OBJC_TYPE(RTCAudioSession): { category: AVAudioSessionCategoryPlayAndRecord categoryOptions: 128 mode: AVAudioSessionModeVoiceChat isActive: 1 sampleRate: 48000.00 IOBufferDuration: 0.020000 outputNumberOfChannels: 2 inputNumberOfChannels: 0 outputLatency: 0.021500 inputLatency: 0.005000 outputVolume: 0.600000 isPreferredSpeaker: 0 isCallkit: 0 }

If the app calls setPreferredInputNumberOfChannels at this point, it fails with error code -50:

setConfiguration:active:shouldSetActive:error:]): Failed to set preferred input number of channels(1): The operation couldn’t be completed. (OSStatus error -50.)

Our questions:
When AVAudioSession is active and the category and mode are as expected, why is the input channel count 0?
Assuming the AVAudioSession state is abnormal at this point, why does killing the app not resolve the issue, and why does the system need to be restarted?
Is it possible that the category and mode of the AVAudioSession fetched by the app are wrong? Do they need to be set again each time CallKit starts, even if the fetched category and mode are the same as the values to be set?
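
For what it's worth, here is a hedged sketch of a defensive mitigation (not a confirmed fix, and it does not answer why the channel count drops to 0): watch for route changes and, if the session reports zero input channels after switching to AirPods, deactivate and reconfigure the session before resuming capture.

import AVFoundation

// Observes route changes and attempts a session reset when the input
// channel count collapses to zero. The recovery steps are an assumption,
// not a documented remedy.
final class RouteRecovery {
    private var observer: NSObjectProtocol?

    func start() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { _ in
            let session = AVAudioSession.sharedInstance()
            guard session.inputNumberOfChannels == 0 else { return }
            do {
                try session.setActive(false, options: .notifyOthersOnDeactivation)
                try session.setCategory(.playAndRecord, mode: .voiceChat,
                                        options: [.allowBluetooth])
                try session.setActive(true)
            } catch {
                print("Audio session recovery failed: \(error)")
            }
        }
    }
}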
1 reply · 0 boosts · 83 views · 4d

ShazamKit with AirPods
Hi guys, I'm using ShazamKit in my iOS app and successfully capturing the currently playing track's details when using the device's (iPhone's) built-in mic. When I test with AirPods, though, my app cannot both send the output through the AirPods and capture that same output with the AirPods mic for ShazamKit recognition. I believe this must be possible, because the ShazamKit widget on iOS can do it. Is it restricted in some way for third-party apps? If not, I'd appreciate some guidance on how to achieve this in Swift code. Thanks in advance.
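
For reference, this is the baseline streaming-match pattern with SHSession and the engine's input node; it is only a sketch and does not by itself solve the simultaneous AirPods playback-plus-capture question above.

import AVFoundation
import ShazamKit

// Taps the input node and streams buffers into an SHSession for matching.
final class Matcher: NSObject, SHSessionDelegate {
    private let session = SHSession()
    private let engine = AVAudioEngine()

    func start() throws {
        session.delegate = self
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 2048, format: format) { [weak self] buffer, time in
            self?.session.matchStreamingBuffer(buffer, at: time)
        }
        engine.prepare()
        try engine.start()
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Matched:", match.mediaItems.first?.title ?? "unknown")
    }
}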
0 replies · 1 boost · 84 views · 1d

Normal audio mode AirPods iOS 18.1
Hi everyone, I just upgraded my iPhone to 18.1.1. I noticed the absence of the normal mode on my AirPods. I can't stand the noise cancellation mode for too long, and the transparency one is overwhelming every time a hair brushes near my AirPods. Why is that? I don't really see the progression here. Can this be fixed in the next version? I don't want to buy another brand; I use them every day for many hours.
0 replies · 0 boosts · 78 views · 2d

Increased and Mismatched Audio Buffer Sizes on iOS 18 when Sound Recognition or Vocal Shortcuts Is Enabled
Description
As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency. Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration continues to return ‘true’ and no error is produced.

Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset and launch the application.
5. Observe console logs showing a requested buffer size of 0.005805 (256 samples @ 48k) and an actual buffer size of 0.023220 (1104 samples @ 48k - this is regularly the resulting buffer size in all of our tests).
6. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
7. Launch the application.
8. Observe the same result as the earlier console-log step, a mismatched hardware buffer size of 1104 versus a recorded frame count of 1024, and mismatched playbackCount and recordCount.
9. Quit the app and disable Vocal Shortcuts.
10. Launch the app.
11. Observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior).

Expected results:
Requested IOBufferDuration is respected, or AVAudioSession returns false, or an error is produced.
Input and output buffer sizes match.

Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
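
For anyone trying to confirm the behavior quickly, here is a minimal sketch (assuming the goal is simply to compare the requested and granted values): request a small buffer duration and log what the session actually reports.

import AVFoundation

func logIOBufferDuration() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setPreferredIOBufferDuration(0.005)   // request ~5 ms (illustrative value)
        try session.setActive(true)
    } catch {
        print("Session setup failed: \(error)")
    }
    // The preferred value is only a request; ioBufferDuration is what the
    // hardware actually granted, which is the number to compare against.
    print("requested:", session.preferredIOBufferDuration,
          "granted:", session.ioBufferDuration)
}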
0 replies · 0 boosts · 119 views · 3d

AVAudioUnitTimePitch: speeding up introduces artifacts
For an upcoming update of one of my apps, I'm facing an issue: the .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch. However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts ("fluttering") in the audio output, which is not so nice (even at .overlap = 32). Intuitively, I'd have thought that speeding up the file should produce fewer artifacts than slowing it down? I've tried different sample rates (44.1 kHz and 48 kHz), but the result is the same. Grateful for any input on this 🙏
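
For context, a minimal sketch of the graph in question (player -> timePitch -> main mixer); the node names are placeholders rather than the app's actual code, and the varispeed comparison at the end is only an experiment suggestion, not a fix.

import AVFoundation

func makeTimePitchGraph() -> (AVAudioEngine, AVAudioPlayerNode, AVAudioUnitTimePitch) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let timePitch = AVAudioUnitTimePitch()

    engine.attach(player)
    engine.attach(timePitch)
    engine.connect(player, to: timePitch, format: nil)
    engine.connect(timePitch, to: engine.mainMixerNode, format: nil)

    timePitch.rate = 1.15     // rates above 1.0 are where the artifacts appear
    timePitch.overlap = 32.0  // maximum overlap; trades CPU for quality

    return (engine, player, timePitch)
}

// One comparison worth trying: AVAudioUnitVarispeed changes speed by
// resampling (pitch follows rate), which exercises a different algorithm
// than AVAudioUnitTimePitch's time-stretching.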
0 replies · 0 boosts · 59 views · 3d

AVMIDIPlayer ignores initial track volume settings on first playback
Issue Description
When playing certain MIDI files using AVMIDIPlayer, the initial volume settings for individual tracks are being ignored during the first playback. This results in all tracks playing at the same volume level, regardless of their specified volume settings in the MIDI file.

Steps to Reproduce
1. Load a MIDI file that contains different volume settings for multiple tracks.
2. Start playback using AVMIDIPlayer.
3. Observe that all tracks play at the same volume level, ignoring their individual volume settings.

Current Behavior
All tracks play at the same volume level during initial playback. Track volume settings specified in the MIDI file are not being respected. This behavior consistently occurs on first playback of affected MIDI files.

Expected Behavior
Each track should play at its specified volume level from the beginning. Volume settings in the MIDI file should be respected from the first playback.

Workaround
I discovered that the correct volume settings can be restored by:
1. Starting playback of the MIDI file
2. Setting the currentPosition property to (current time - 1 second)
After this operation, all tracks play at their intended volume levels. However, this is not an ideal solution as it requires manual intervention and may affect the playback experience.

Questions
Is there a way to ensure the track volume settings are respected during the initial playback?
Is this a known issue with AVMIDIPlayer?
Are there any configuration settings or alternative approaches that could resolve this issue?

Technical Details
iOS Version: 18.1.1 (22B91)
Xcode Version: 16.1 (16B40)
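
A sketch of the workaround described above, with hypothetical file URLs and an arbitrary 1-second delay chosen purely for illustration:

import AVFoundation

func playWithVolumeNudge(midiURL: URL, soundBankURL: URL) throws -> AVMIDIPlayer {
    let player = try AVMIDIPlayer(contentsOf: midiURL, soundBankURL: soundBankURL)
    player.prepareToPlay()
    player.play {
        print("Playback finished")
    }
    // Nudging the position shortly after playback starts seems to make the
    // per-track volume events take effect (the workaround from the post).
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        player.currentPosition = max(0, player.currentPosition - 1.0)
    }
    return player
}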
0 replies · 0 boosts · 45 views · 3d

iOS audio crackling issue when sending audio data to a UDP server and playing it back
I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback. I am sending the audio data using the library https://github.com/mindAndroid/swift-rtp, creating packets and sending them. Please help me resolve this issue. I have attached the code reference that I am currently using. Thank you. ViewController.swift
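
One common source of crackling in this kind of pipeline (an assumption about the setup, since the full code is in the attached ViewController.swift) is the Data <-> PCM buffer round trip not preserving the exact frame count and format; dropped or reordered UDP packets will also produce clicks. Here is a sketch of a round trip that assumes mono, non-interleaved Float32 PCM and the same AVAudioFormat on both ends.

import AVFoundation

// PCM buffer -> Data (mono, Float32, non-interleaved)
func data(from buffer: AVAudioPCMBuffer) -> Data {
    let samples = buffer.floatChannelData![0]
    return Data(bytes: samples, count: Int(buffer.frameLength) * MemoryLayout<Float>.size)
}

// Data -> PCM buffer; `format` must match the format used by the sender.
func pcmBuffer(from data: Data, format: AVAudioFormat) -> AVAudioPCMBuffer? {
    let frameCount = AVAudioFrameCount(data.count / MemoryLayout<Float>.size)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount   // forgetting this plays stale or zeroed frames
    data.withUnsafeBytes { raw in
        UnsafeMutableRawPointer(buffer.floatChannelData![0])
            .copyMemory(from: raw.baseAddress!, byteCount: data.count)
    }
    return buffer
}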
0 replies · 0 boosts · 97 views · 4d

Connect 2 mono nodes as L/R input for a stereo node
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio to 2 mono nodes using AVAudioConnectionPoint and channelMap, but I can't figure out how to connect them back to a stereo node. I'll post the code I have so far. The use case for this is that I'm trying to process the left/right channels with separate audio units. Any ideas?

let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!

let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()

[leftInputMixer, rightInputMixer, leftOutputMixer, rightOutputMixer, channelMixer].forEach { engine.attach($0) }

let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)

plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer

leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]

engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)

// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)

// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
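
A hedged idea building on the variables above (not verified end to end): hard-pan each mono output mixer before feeding both into channelMixer, so the left chain lands on the left channel and the right chain on the right of the stereo mix.

// Hard-pan the two mono chains, then mix them into a single stereo mixer.
leftOutputMixer.pan = -1.0   // full left
rightOutputMixer.pan = 1.0   // full right

engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)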
1 reply · 0 boosts · 118 views · 5d

Generate recommendation queue after selecting a song in MusicKit
Hi, I'm developing a MusicKit integration in my iOS app, and I want to select songs from recently played (done). The problem is that the queue is not auto-generated, and the user has to select another song if they want to go forward. Is there any method to ask for similar songs, or recommended songs, based on a song the user has already selected? It would be really great :) Also, if you know it: is there any publisher for the music duration, or do I need to use a timer? Thanks. David.
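
On the timer question: as far as I know MusicKit has no built-in publisher for playback progress, so polling ApplicationMusicPlayer.shared.playbackTime from a Timer is a common approach. A sketch follows; the class name and the 0.5-second interval are illustrative choices.

import Foundation
import Combine
import MusicKit

// Publishes the player's current playback time on a fixed interval.
final class PlaybackTimeObserver: ObservableObject {
    @Published var playbackTime: TimeInterval = 0
    private var cancellable: AnyCancellable?

    func start() {
        cancellable = Timer.publish(every: 0.5, on: .main, in: .common)
            .autoconnect()
            .sink { [weak self] _ in
                self?.playbackTime = ApplicationMusicPlayer.shared.playbackTime
            }
    }

    func stop() { cancellable = nil }
}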
0 replies · 0 boosts · 98 views · 6d

Error Domain=NSOSStatusErrorDomain Code=-16384, -16155, -16512
I’ve built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall it works great! However, I’ve noticed some unusual logs popping up: Domain: NSOSStatusErrorDomain, Error Codes: -16384, -16155, -16512. The error -16512 keeps happening repeatedly for one of our users, preventing them from playing any media at all. I’ve searched around but can’t find any documentation explaining what these errors mean. Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated! Thanks!
0 replies · 0 boosts · 162 views · 1w

MPNowPlayingInfoCenter without playing music
Hello! I am working on an app connected to an external streamer. I would like to display the currently playing song on the Lock Screen. I tried to update the information in MPNowPlayingInfoCenter, but I need to play a sound on my iPhone for the controls to be displayed. Is there a way to do it without playing a sound? If not, is playing a silent sound the only solution, and would that be validated by Apple? :-/ Thank you, Frederic
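
For reference, a sketch of the standard now-playing setup; in practice the system generally only surfaces these controls for an app that owns an active audio session, so this alone may not be enough when no sound is played locally. The function and its parameters are illustrative.

import MediaPlayer

func publishNowPlaying(title: String, artist: String, duration: TimeInterval) {
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: 0.0,
        MPNowPlayingInfoPropertyPlaybackRate: 1.0
    ]

    // Registering remote command handlers is part of what makes the system
    // treat the app as "now playing"; these handlers just acknowledge events.
    let commands = MPRemoteCommandCenter.shared()
    _ = commands.playCommand.addTarget { _ in .success }
    _ = commands.pauseCommand.addTarget { _ in .success }
}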
2 replies · 0 boosts · 362 views · Jul ’24

Research Purpose SensorKit Access
Dear Apple, I am currently working on a mental-health-related research project supported by South Korean government funding. In addition to SensorKit access, we are working with data from the microphone. Is there any contact point, aside from the SensorKit access application, to discuss the possibility of research data collection from restricted participant samples?
0 replies · 0 boosts · 78 views · 1w

Issues with Downsampling Live Audio from Mic with AVAudioMixerNode
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz). To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded — just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than the mic setup itself. To make progress, I implemented a workaround by resampling each chunk tapped with installTap (every 50 ms in my case) using AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely due to separate processing instead of continuous downsampling.

Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate, or for continuously downsampling the live input without chunk-based artifacts?

This issue seems longstanding, as noted in a 2018 forum post: https://forums.developer.apple.com/forums/thread/111726 Any guidance on configuring or processing mic input at a lower sample rate in real time would be greatly appreciated. Thank you!
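
On question 2, one approach that avoids per-chunk edges is to create a single AVAudioConverter and reuse it across tap callbacks, so the resampler keeps its internal filter state between chunks. A sketch under that assumption follows; the function name, buffer size, and 16 kHz target are illustrative.

import AVFoundation

func startDownsampledCapture(engine: AVAudioEngine,
                             onChunk: @escaping (AVAudioPCMBuffer) -> Void) throws {
    let input = engine.inputNode
    let inputFormat = input.outputFormat(forBus: 0)           // typically 44.1 or 48 kHz
    guard let targetFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                           sampleRate: 16_000,
                                           channels: 1,
                                           interleaved: false),
          let converter = AVAudioConverter(from: inputFormat, to: targetFormat) else {
        return
    }

    input.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { buffer, _ in
        let ratio = targetFormat.sampleRate / inputFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: targetFormat,
                                               frameCapacity: capacity) else { return }
        var consumed = false
        var error: NSError?
        // Feed exactly one input buffer per callback; the converter keeps its
        // resampling state across calls, avoiding chunk-boundary artifacts.
        let status = converter.convert(to: outBuffer, error: &error) { _, inputStatus in
            if consumed {
                inputStatus.pointee = .noDataNow
                return nil
            }
            consumed = true
            inputStatus.pointee = .haveData
            return buffer
        }
        if status != .error { onChunk(outBuffer) }   // ~16 kHz audio
    }

    engine.prepare()
    try engine.start()
}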
0 replies · 0 boosts · 111 views · 1w

Cannot Transcribe Audio During SharePlay in visionOS
I’ve encountered an issue when trying to transcribe audio during a SharePlay session in visionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference. Here’s the relevant code snippet that throws the error:

private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
    let audioEngine = AVAudioEngine()
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    let inputNode = audioEngine.inputNode
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
    return (audioEngine, request)
}

The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn’t appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear solution to proceed. Has anyone else faced similar issues with AVAudioSession and SharePlay in visionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated!

The specific error is: The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
0 replies · 0 boosts · 104 views · 1w

Distorted Audio When Recording External Mics With AVCaptureSession and AVAssetWriter
I’m working on a macOS app, written in Swift. My goal is to record audio from an external microphone, e.g., one connected via USB. For this, I’m using an AVCaptureSession and recording its output with an AVAssetWriter. This works perfectly in principle (and reliably with internal microphones, for example). The problem occurs after my app has successfully completed the first recording and I then want to make additional recordings (which makes me think it might be process-dependent, because it works again after restarting the app). The problem: Noisy or distorted-sounding audio files. In addition, the following error message appears in the Console from CoreAudio / its AudioConverter: Input data proc returned inconsistent 512 packets for 2048 bytes; at 3 bytes per packet, that is actually 682 packets It is easy to reproduce. This problem is reproducible even if I don’t configure the AVAssetWriter manually and instead let it receive its audioSettings using a preset from an AVOutputSettingsAssistant. I’m running on macOS 15.0 (24A335). I’ve filed a feedback including a demo project → FB15333298 🎟️ I would greatly appreciate any help! Have a great day, Martin
5 replies · 0 boosts · 357 views · Oct ’24

iOS Audio App + AirPlay tvOS Metadata Issue
I have an audio app that can play audio on an AirPlay device. On non-Apple TV devices, the AirPlay app (on Roku, Samsung, etc.) shows the now playing metadata: title, artist, and album art. However, on tvOS 18.1, no metadata is shown. The Apple TV device plays the audio, but there is no now playing information shown, nor any other indicators. Other media apps show the "Now Playing" controls on the upper right of the tvOS home screen. Can someone point me in the direction of how to solve this issue? I think I am missing something somewhere in regards to the tvOS metadata implementation.
0 replies · 0 boosts · 124 views · 1w