Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation


Sound randomly lowers
Hello all! I've been having this issue for a while on my iPhone 12 Pro. When I'm listening to music or watching YouTube, TikTok, etc., the volume will randomly lower: the audio slider stays at max, but the sound gets very quiet. The same thing happens on phone calls. I've followed other instructions, such as turning off audio awareness and other settings, but nothing seems to work. Has anyone else had this issue and managed to fix it?
Replies: 1 · Boosts: 0 · Views: 109 · Activity: 2d
Why is AVAudioEngine input giving all zero samples?
I am trying to get access to raw audio samples from the mic. I've written a simple example application that writes the values to a text file. Below is my sample application. All the input samples from the buffers delivered to the input tap are zero. What am I doing wrong? I did add the Privacy - Microphone Usage Description key to my application target's properties, and I am allowing microphone access when the application launches. I do find it strange that I have to grant permission every time, even though in Settings > Privacy my application is listed as one of the applications allowed to access the microphone.

import AVFoundation

class AudioRecorder {
    private let audioEngine = AVAudioEngine()
    private var fileHandle: FileHandle?

    func startRecording() {
        let inputNode = audioEngine.inputNode
        let audioFormat: AVAudioFormat
        #if os(iOS)
        let hardwareSampleRate = AVAudioSession.sharedInstance().sampleRate
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: hardwareSampleRate, channels: 1)!
        #elseif os(macOS)
        audioFormat = inputNode.inputFormat(forBus: 0) // Use input node's current format
        #endif

        setupTextFile()

        inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { [weak self] buffer, _ in
            self!.processAudioBuffer(buffer: buffer)
        }

        do {
            try audioEngine.start()
            print("Recording started with format: \(audioFormat)")
        } catch {
            print("Failed to start audio engine: \(error.localizedDescription)")
        }
    }

    func stopRecording() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        print("Recording stopped.")
    }

    private func setupTextFile() {
        let tempDir = FileManager.default.temporaryDirectory
        let textFileURL = tempDir.appendingPathComponent("audioData.txt")
        FileManager.default.createFile(atPath: textFileURL.path, contents: nil, attributes: nil)
        fileHandle = try? FileHandle(forWritingTo: textFileURL)
    }

    private func processAudioBuffer(buffer: AVAudioPCMBuffer) {
        guard let channelData = buffer.floatChannelData else { return }
        let channelSamples = channelData[0]
        let frameLength = Int(buffer.frameLength)

        var textData = ""
        var allZero = true
        for i in 0..<frameLength {
            let sample = channelSamples[i]
            if sample != 0 {
                allZero = false
            }
            textData += "\(sample)\n"
        }

        if allZero {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels. All data is zero.")
        } else {
            print("Got \(frameLength) worth of audio data on \(buffer.stride) channels.")
        }

        // Write to file
        if let data = textData.data(using: .utf8) {
            fileHandle!.write(data)
        }
    }
}
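One thing the snippet never does on iOS is configure the shared AVAudioSession for recording before starting the engine, which is a common reason an input tap delivers silence. A minimal sketch, assuming the rest of the recorder stays as posted (the category and mode chosen here are illustrative, not the only valid options):

import AVFoundation

// Configure the audio session so the engine's input node actually receives
// microphone samples; without a record-capable category the tap can fire
// with buffers full of zeros.
func configureSessionForRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)
}

// Call this at the top of startRecording(), before audioEngine.start().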
Replies: 2 · Boosts: 0 · Views: 111 · Activity: 3d
MATCH_ATTEMPT_FAILED error on Android Studio Java+Kotlin
Getting MatchError "MATCH_ATTEMPT_FAILED" every time matchStream is called in an Android Studio Java+Kotlin project. My project reads samples from the mic input using the AudioRecord class and sends them to ShazamKit's matchStream. I created a Kotlin class to handle ShazamKit. The AudioRecord is configured to be mono and 16-bit.

My Kotlin class:

class ShazamKitHelper {
    val shazamScope = CoroutineScope(Dispatchers.IO + SupervisorJob())
    lateinit var streaming_session: StreamingSession
    lateinit var signature: Signature
    lateinit var catalog: ShazamCatalog

    fun createStreamingSessionAsync(developerTokenProvider: DeveloperTokenProvider, readBufferSize: Int, sampleRate: AudioSampleRateInHz): CompletableFuture<Unit> {
        return CompletableFuture.supplyAsync {
            runBlocking {
                runCatching {
                    shazamScope.launch {
                        createStreamingSession(developerTokenProvider, readBufferSize, sampleRate)
                    }.join()
                }.onFailure { throwable ->
                }.getOrThrow()
            }
        }
    }

    private suspend fun createStreamingSession(developerTokenProvider: DeveloperTokenProvider, readBufferSize: Int, sampleRateInHz: AudioSampleRateInHz) {
        catalog = ShazamKit.createShazamCatalog(developerTokenProvider)
        streaming_session = (ShazamKit.createStreamingSession(
            catalog,
            sampleRateInHz,
            readBufferSize
        ) as ShazamKitResult.Success).data
    }

    fun startMatching() {
        val audioData = sharedAudioData ?: return // Return if sharedAudioData is null
        CoroutineScope(Dispatchers.IO).launch {
            runCatching {
                streaming_session.matchStream(audioData.data, audioData.meaningfulLengthInBytes, audioData.timestampInMs)
            }.onFailure { throwable ->
                Log.e("ShazamKitHelper", "Error during matchStream", throwable)
            }
        }
    }

    @JvmField
    var sharedAudioData: AudioData? = null

    data class AudioData(val data: ByteArray, val meaningfulLengthInBytes: Int, val timestampInMs: Long)

    fun startListeningForMatches() {
        CoroutineScope(Dispatchers.IO).launch {
            streaming_session.recognitionResults().collect { matchResult ->
                when (matchResult) {
                    is MatchResult.Match -> {
                        val match = matchResult.matchedMediaItems
                        println("Match found: ${match.get(0).title} by ${match.get(0).artist}")
                    }
                    is MatchResult.NoMatch -> {
                        println("No match found")
                    }
                    is MatchResult.Error -> {
                        val error = matchResult.exception
                        println("Match error: ${error.message}")
                    }
                }
            }
        }
    }
}

My Java code reads the samples from a thread:

shazam_create_session();
while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
    if (shazam_session_created) {
        byte[] buffer = new byte[288000]; // max_shazam_seconds * sampleRate * 2
        audioRecord.read(buffer, 0, buffer.length, AudioRecord.READ_BLOCKING);
        helper.sharedAudioData = new ShazamKitHelper.AudioData(buffer, buffer.length, System.currentTimeMillis());
        helper.startMatching();
        if (!listener_called) {
            listener_called = true;
            helper.startListeningForMatches();
        }
    } else {
        SystemClock.sleep(100);
    }
}

private void shazam_create_session() {
    MyDeveloperTokenProvider provider = new MyDeveloperTokenProvider();
    AudioSampleRateInHz sample_rate = AudioSampleRateInHz.SAMPLE_RATE_48000;
    if (sampleRate == 44100) sample_rate = AudioSampleRateInHz.SAMPLE_RATE_44100;
    CompletableFuture<Unit> future = helper.createStreamingSessionAsync(provider, 288000, sample_rate);
    future.thenAccept(result -> {
        shazam_session_created = true;
    });
    future.exceptionally(throwable -> {
        Toast.makeText(mine, "Failure", Toast.LENGTH_SHORT).show();
        return null;
    });
}

I implemented the developer token provider in Java as follows:

public static class MyDeveloperTokenProvider implements DeveloperTokenProvider {
    DeveloperToken the_token = null;

    @NonNull
    @Override
    public DeveloperToken provideDeveloperToken() {
        if (the_token == null) {
            try {
                the_token = generateDeveloperToken();
                return the_token;
            } catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
                throw new RuntimeException(e);
            }
        } else {
            return the_token;
        }
    }

    @NonNull
    private DeveloperToken generateDeveloperToken() throws NoSuchAlgorithmException, InvalidKeySpecException {
        PKCS8EncodedKeySpec priPKCS8 = new PKCS8EncodedKeySpec(Decoders.BASE64.decode(p8));
        PrivateKey appleKey = KeyFactory.getInstance("EC").generatePrivate(priPKCS8);
        Instant now = Instant.now();
        Instant expiration = now.plus(Duration.ofDays(90));
        String jwt = Jwts.builder()
            .header().add("alg", "ES256").add("kid", keyId).and()
            .issuer(teamId)
            .issuedAt(Date.from(now))
            .expiration(Date.from(expiration))
            .signWith(appleKey) // Specify algorithm explicitly
            .compact();
        return new DeveloperToken(jwt);
    }
}
Replies: 0 · Boosts: 0 · Views: 123 · Activity: 4d
Trouble with getting extended Album info from user library
Hello! I have a problem getting extended album info from the user's library. Note that the app is authorized to use Apple Music according to the documentation. I get albums from the user's library with this code:

func getLibraryAlbums() async throws -> MusicItemCollection<Album> {
    let request = MusicLibraryRequest<Album>()
    let response = try await request.response()
    return response.items
}

This is an example of the albums request response:

{
  "data" : [
    {
      "meta" : {
        "musicKit_identifierSet" : {
          "isLibrary" : true,
          "id" : "1945382328890400383",
          "dataSources" : [ "localLibrary", "legacyModel" ],
          "type" : "Album",
          "deviceLocalID" : { "databaseID" : "37336CB19CF51727", "value" : "1945382328890400383" },
          "catalogID" : { "kind" : "adamID", "value" : "1173535954" }
        }
      },
      "id" : "1945382328890400383",
      "type" : "library-albums",
      "attributes" : {
        "artwork" : { "url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=4A2F444C%2D336D%2D49EA%2D90C8%2D13C547A5B95B", "width" : 0, "height" : 0 },
        "genreNames" : [ "Pop" ],
        "trackCount" : 1,
        "artistName" : "Сара Окс",
        "isAppleDigitalMaster" : false,
        "audioVariants" : [ "lossless" ],
        "playParams" : { "catalogId" : "1173535954", "id" : "1945382328890400383", "musicKit_persistentID" : "1945382328890400383", "kind" : "album", "musicKit_databaseID" : "37336CB19CF51727", "isLibrary" : true },
        "name" : "Нимфомания - Single",
        "isCompilation" : false
      }
    },
    {
      "meta" : {
        "musicKit_identifierSet" : {
          "isLibrary" : true,
          "id" : "-8570883332059662437",
          "dataSources" : [ "localLibrary", "legacyModel" ],
          "type" : "Album",
          "deviceLocalID" : { "value" : "-8570883332059662437", "databaseID" : "37336CB19CF51727" },
          "catalogID" : { "kind" : "adamID", "value" : "1618488499" }
        }
      },
      "id" : "-8570883332059662437",
      "type" : "library-albums",
      "attributes" : {
        "isCompilation" : false,
        "genreNames" : [ "Pop" ],
        "trackCount" : 1,
        "artistName" : "TIMOFEEW & KURYANOVA",
        "isAppleDigitalMaster" : false,
        "audioVariants" : [ "lossless" ],
        "playParams" : { "catalogId" : "1618488499", "musicKit_persistentID" : "-8570883332059662437", "kind" : "album", "id" : "-8570883332059662437", "musicKit_databaseID" : "37336CB19CF51727", "isLibrary" : true },
        "artwork" : { "url" : "musicKit:\/\/artwork\/transient\/{w}x{h}?id=BEA6DBD3%2D8E14%2D4A10%2D97BE%2D8908C7C5FC2C", "width" : 0, "height" : 0 },
        "name" : "Не звони - Single"
      }
    },
    ...
  ]
}

In AlbumView, using the task: view modifier, I request extended information about the album with this code:

func loadExtendedInfo(_ album: Album) async throws -> Album {
    let response = try await album.with([.tracks, .audioVariants, .recordLabels], preferredSource: .library)
    return response
}

But in the response some of the fields are always nil, for example recordLabels, releaseDate, url, editorialNotes, copyright. Please tell me what I'm doing wrong?
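If the missing fields live only on the catalog representation of the album, one approach worth trying is to fetch the catalog album by its catalog identifier (visible as catalogID / playParams.catalogId in the dump above) and expand that instead. A hedged sketch, not a confirmed fix; the hard-coded ID in the usage comment is taken from the response above:

import MusicKit

// Fetch the catalog counterpart of a library album and expand its related
// collections there. The catalog ID must be recovered separately; it appears
// in the library response's identifier set.
func loadCatalogAlbum(catalogID: MusicItemID) async throws -> Album? {
    let request = MusicCatalogResourceRequest<Album>(matching: \.id, equalTo: catalogID)
    let response = try await request.response()
    guard let album = response.items.first else { return nil }
    return try await album.with([.tracks, .recordLabels], preferredSource: .catalog)
}

// Example: let album = try await loadCatalogAlbum(catalogID: "1173535954")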
Replies: 0 · Boosts: 0 · Views: 92 · Activity: 4d
How to find `AudioHardwareControl` direction?
I'm working with the modern Core Audio API introduced in macOS Sequoia. I have an AudioHardwareDevice which has several controls of type AudioHardwareControl. I figured out that to filter only the volume controls I can use the classID == kAudioVolumeControlClassID condition. Some devices have volume controls for both input and output. How can I determine the direction of a control? Streams (AudioHardwareStream objects) have a direction, but I haven't found a way to map controls to streams. There are kAudioObjectPropertyScopeInput and kAudioObjectPropertyScopeOutput property scopes, but no matter what I tried, controls always return false from control.hasProperty(address:) for any address. Any other ideas?
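One avenue (a sketch, not a confirmed answer) is to drop down to the classic C property API and read the control's own scope, which reports whether it hangs off the device's input or output side. This assumes you can get the control's underlying AudioObjectID out of the Swift AudioHardwareControl wrapper; the accessor may differ from what is shown here.

import CoreAudio

// Every AudioControl object carries a scope property (kAudioControlPropertyScope)
// that identifies the owning device's input or output side.
func direction(ofControl controlID: AudioObjectID) -> String {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioControlPropertyScope,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    var scope = AudioObjectPropertyScope(0)
    var size = UInt32(MemoryLayout<AudioObjectPropertyScope>.size)
    let status = AudioObjectGetPropertyData(controlID, &address, 0, nil, &size, &scope)
    guard status == noErr else { return "unknown (error \(status))" }
    switch scope {
    case kAudioObjectPropertyScopeInput: return "input"
    case kAudioObjectPropertyScopeOutput: return "output"
    default: return "other"
    }
}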
Replies: 0 · Boosts: 0 · Views: 79 · Activity: 4d
Only Apple based music devices show view
The following is my playground code. Any of the Apple audio units show the plugin view; however, anything else (i.e. Kontakt, Spitfire, etc.) does not. It does not error; the area where the view is expected is just blank.

import AppKit
import PlaygroundSupport
import AudioToolbox
import AVFoundation
import CoreAudioKit

let manager = AVAudioUnitComponentManager.shared()
let description = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                            componentSubType: 0,
                                            componentManufacturer: 0,
                                            componentFlags: 0,
                                            componentFlagsMask: 0)
var deviceComponents = manager.components(matching: description)
var names = deviceComponents.map { $0.name }

let pluginName: String = "AUSampler" // This works
//let pluginName: String = "Kontakt" // This does not

var plugin = deviceComponents.filter { $0.name.contains(pluginName) }.first!
print("Plugin name: \(plugin.name)")

var customViewController: NSViewController?

AVAudioUnit.instantiate(with: plugin.audioComponentDescription, options: []) { avAudioUnit, error in
    var ilip = avAudioUnit!.auAudioUnit.isLoadedInProcess
    print("Loaded in process: \(ilip)")
    guard error == nil else {
        print("Error: \(error!.localizedDescription)")
        return
    }
    print("AudioUnit successfully created.")
    let audioUnit = avAudioUnit!.auAudioUnit
    audioUnit.requestViewController { vc in
        if let viewCtrl = vc {
            customViewController = vc
            var b = vc?.view.bounds
            PlaygroundPage.current.liveView = vc
            print("Successfully added view controller.")
        } else {
            print("Failed to load controller.")
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 56 · Activity: 4d
Apple music web kit play issues (MusicKit JS)
Hello, I am trying to follow the getting started guide. I have produced a developer token via the MusicKit embedding approach and can confirm I'm successfully authorized. When I try to play music, I'm unable to hear anything. I thought it could be an auto-play problem with the browser, but it doesn't appear to be related, as I can trigger play from a button with no further success.

const music = MusicKit.getInstance()

try {
  await music.authorize() // successful
  const result = await music.api.music(`/v1/catalog/gb/search`, {
    term: 'Sound Travels',
    types: 'albums',
  })
  await music.play()
} catch (error) {
  console.error('play error', error) // ! No error triggered
}

I have searched the forum and found similar queries, but apparently none using v3 of the API. Other potentially helpful information: OS: macOS 15.1 (24B83); API version: v3; running on localhost; browser: Arc (Chromium based), also tried Safari. The only difference between the two browsers is that Safari appears to exit the breakpoint, whereas Arc will continue (without throwing any errors). authorizationStatus: 3. Side note: any reason this is still in beta so many years later?
Replies: 0 · Boosts: 0 · Views: 153 · Activity: 1w
Custom AVAssetResourceLoaderDelegate on iOS 15 fails to load large files
In our app we have implemented an AVAssetResourceLoaderDelegate to handle encrypted downloaded files. We have it working on all iOS versions, but we are seeing issues on iOS 15 (15.8.3) with large files (> 1 GB). So far we have seen two cases: either the load method on the AVURLAsset fails early and throws an unknown error code, or it starts requesting more data than the device has available RAM. The CPU usage is almost always over 100%, even after pausing playback. The memory issue can happen even though the player has successfully started playback. When running on devices with iOS 16 and above, we set isEntireLengthAvailableOnDemand to true on the AVAssetResourceLoadingContentInformationRequest. This seems to be key to solving the issue on the devices that support it; if we set the property to false, we see the same memory issue as on iOS 15. So we have a solution for iOS 16 and upwards but are at a loss for how to handle iOS 15. Is there something we have overlooked, or is it in fact an issue with that iOS version?
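For what it's worth, on iOS 15 (where isEntireLengthAvailableOnDemand is unavailable) the usual way to keep memory bounded is to advertise byte-range support and answer each data request in modest chunks rather than handing over the whole file. A rough sketch under those assumptions; decryptedData(in:) is a hypothetical stand-in for the app's own decrypt-and-read routine, and the content type is illustrative:

import AVFoundation

// Serve one loading request in bounded chunks so the whole file never has to
// sit in memory at once.
func serve(_ loadingRequest: AVAssetResourceLoadingRequest, fileLength: Int64) {
    if let info = loadingRequest.contentInformationRequest {
        info.contentType = AVFileType.mp4.rawValue
        info.contentLength = fileLength
        info.isByteRangeAccessSupported = true
    }
    guard let dataRequest = loadingRequest.dataRequest else {
        loadingRequest.finishLoading()
        return
    }
    let chunkSize: Int64 = 512 * 1024
    var offset = dataRequest.currentOffset
    let end = dataRequest.requestedOffset + Int64(dataRequest.requestedLength)
    while offset < end {
        let length = min(chunkSize, end - offset)
        dataRequest.respond(with: decryptedData(in: offset..<(offset + length)))
        offset += length
    }
    loadingRequest.finishLoading()
}

// Placeholder for the app's real decryption; returns zeroed bytes here.
func decryptedData(in range: Range<Int64>) -> Data {
    return Data(count: Int(range.upperBound - range.lowerBound))
}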
Replies: 0 · Boosts: 0 · Views: 134 · Activity: 1w
How not to block during recording
The problem I have at the moment is that if a phone call comes in during my recording, even if I don't answer, the recording is interrupted. The symptom is that the picture freezes; recording resumes automatically after the call ends, but the recorded video's sound and picture end up out of sync. Through AVCaptureSessionWasInterrupted I can listen for the interruption and its reason. As far as I can tell, a ringing or vibrating phone can take over the audio channel. I found the same scenario handled in other apps: they manage to suppress the ringtone or vibration, but I don't know how to do it; I tried a lot of ways, but nothing works. In Blackmagic Camera or the ProMovie app, when a call comes in during recording there is only a notification banner, with no ringtone or vibration, which avoids the recording interruption. I don't know if this requires some configuration or entitlement; please let me know if it does.
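I'm not aware of a public API that suppresses the ringer from a third-party app, but a sketch of at least detecting the interruption and restarting capture when it ends (assuming an existing AVCaptureSession) looks like this:

import AVFoundation

// Observe audio session interruptions so capture can be restarted once the
// call ends. This handles resumption; it does not silence the ringer.
final class InterruptionObserver {
    private var token: NSObjectProtocol?

    func start(session: AVCaptureSession) {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { note in
            guard let info = note.userInfo,
                  let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
            switch type {
            case .began:
                // Capture is being interrupted (e.g. an incoming call is ringing).
                break
            case .ended:
                // Restart capture; startRunning() blocks, so in production move
                // this off the main queue.
                if !session.isRunning {
                    session.startRunning()
                }
            @unknown default:
                break
            }
        }
    }
}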
Replies: 1 · Boosts: 0 · Views: 102 · Activity: 1w
Can backgrounded apps record audio?
I'd like to find out: can backgrounded apps record audio? In the past, as I recall, backgrounded apps were pretty restricted and couldn't do much of anything. However, I'm not familiar with the current state of affairs. With iOS 15.8 and above, can backgrounded apps record audio if they've been given permission by the user to access the microphone? Thanks.
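In general, recording can continue in the background when the app declares the audio background mode and keeps a record-capable audio session active while recording. A minimal sketch under those assumptions (the Info.plist requirements are noted as comments):

import AVFoundation

// Assumes Info.plist declares UIBackgroundModes = ["audio"] and
// NSMicrophoneUsageDescription, and the user has granted microphone access.
func startBackgroundCapableRecording(to url: URL) throws -> AVAudioRecorder {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    // Recording keeps running while backgrounded as long as the session stays active.
    _ = recorder.record()
    return recorder
}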
Replies: 1 · Boosts: 0 · Views: 160 · Activity: 1w
AVAssetWriterInput appendSampleBuffer failed with error -12780
I tried adding watermarks to the recorded video. Appending sample buffers using AVAssetWriterInput's append method fails, and when I inspect the AVAssetWriter's error property, I get the following: Error Domain=AVFoundationErrorDomain Code=-11800 "This operation cannot be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=This operation cannot be completed, NSUnderlyingError=0x302399a70 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}. As far as I can tell, -11800 indicates AVErrorUnknown; however, I have not been able to find information about the -12780 error code, which as far as I can tell is undocumented. Thanks! Here is the code
Replies: 1 · Boosts: 0 · Views: 166 · Activity: 1w
AUv3 recent "Failed to find component with type..." frequent issues
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work initially, it is easy (although I'm not sure how to do it reliably) to get the app into a state where it can no longer instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type..."). In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails. If I "Archive" the app+plugin, there is no audio unit extension in the bundle. If I switch to the audio unit extension scheme and build it, it builds fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there. Any ideas? If I can coax an unmodified audio unit extension project into exhibiting this behavior, I'll attach it here; right now what I have contains code I don't want to share.
Replies: 1 · Boosts: 0 · Views: 165 · Activity: 2w
AudioHardwareError: No Access to Int32 error constants
I am unable to access the Int32 error code from the errors that Core Audio throws as the Swift type AudioHardwareError. This is critical: there is no way to access the error code, or even to create an AudioHardwareError to test for errors.

do {
    _ = try AudioHardwareDevice(id: 0).streams // will throw
} catch {
    if let error = error as? AudioHardwareError { // cast to AudioHardwareError
        print(error) // prints the error code but not the errorDescription
    }
}

How can I reliably get the error's Int32, or create an AudioHardwareError from an error constant? There is no way for me to handle these errors in code or run tests without knowing what the error is. On top of that, by default the error's localizedDescription does not contain the errorDescription unless I extend AudioHardwareError with CustomStringConvertible:

extension AudioHardwareError: @retroactive CustomStringConvertible {
    public var description: String {
        return self.localizedDescription
    }
}
Replies: 2 · Boosts: 1 · Views: 205 · Activity: 2w
Listener for kAudioProcessPropertyIsRunningOutput
I'm trying to set up a listener for kAudioProcessPropertyIsRunningOutput, but it's never triggered. I get calls for kAudioProcessPropertyIsRunning and kAudioProcessPropertyDevices, but not for kAudioProcessPropertyIsRunningInput or kAudioProcessPropertyIsRunningOutput.

class MyDelegate: PropertyListenerDelegate {
    func propertiesChanged(properties: [AudioObjectPropertyAddress]) {
        print(properties)
    }
}

var myDelegate = MyDelegate()
var processes = try AudioHardwareSystem.shared.processes
for process in processes {
    process.delegates += [myDelegate]
    try process.addListener(forProperties: [AudioObjectPropertyAddress(
        mSelector: kAudioPropertyWildcardPropertyID,
        mScope: kAudioObjectPropertyScopeWildcard,
        mElement: kAudioObjectPropertyElementWildcard
    )])
}

Xcode 16.1, macOS 15.0.1
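One thing that might be worth ruling out (a guess, not a confirmed fix): register for the concrete selectors instead of the wildcard address, in case wildcard registration does not fan out to the per-direction running properties. This reuses MyDelegate and the Swift API from the snippet above:

// Register explicitly for the per-direction running properties.
for process in try AudioHardwareSystem.shared.processes {
    process.delegates += [myDelegate]
    try process.addListener(forProperties: [
        AudioObjectPropertyAddress(mSelector: kAudioProcessPropertyIsRunningInput,
                                   mScope: kAudioObjectPropertyScopeGlobal,
                                   mElement: kAudioObjectPropertyElementMain),
        AudioObjectPropertyAddress(mSelector: kAudioProcessPropertyIsRunningOutput,
                                   mScope: kAudioObjectPropertyScopeGlobal,
                                   mElement: kAudioObjectPropertyElementMain)
    ])
}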
Replies: 0 · Boosts: 0 · Views: 160 · Activity: 2w
MusicKit lastPlayedDate always nil
I am having trouble accessing the lastPlayedDate for any given album or track using MusicKit. The value is always nil, on the numerous albums and tracks I tested. As far as I know, this is not a property that has to be fetched separately, like tracks for example. I am running this on my physical iPhone 12 on iOS 18.1.1 with Xcode 16.1. The albums and tracks have definitely been played multiple times before, and the app has permission to the library via MusicAuthorization.request(). This post mentions the same problem but offers no solution. Thanks for any help.
Replies: 0 · Boosts: 0 · Views: 111 · Activity: 2w
PTTFramework w/ AVAudioSession
Hi all, I have spent a lot of time reading the tech note and watching the WWDC video that introduce the PTT framework on iOS. I currently have a custom setup where I am using AVAudioEngine to schedule and play buffers that are being streamed through a call. I am looking to use the PTT framework to allow a user to trigger this push-to-talk behavior from the lock screen and the various places within the system UI that it provides. However, I am unsure what the correct behavior is regarding handling of the audio session. Right now I am using the .playback category when there is no active voice transmission, so that devices such as AirPods can stay in A2DP mode where applicable, and transitioning to the .playAndRecord category only when the mic input should become active. Following this change in my AVAudioEngine manager, I then manually activate and deactivate the audio session when the engine is either playing/recording or idle. The documentation states that you should not attempt to activate or deactivate your audio session directly, but should allow the framework to handle it. Does that mean that I need to either call the request-to-transmit delegate function or set an active participant on the channel manager first, and then wait for the didBecomeActive delegate method to trigger before I actually attempt to play or record any audio? (I am using the fullDuplex mode currently.) I noticed that the delegate method will only trigger if the audio session wasn't already active before doing one of the above (setting an active participant, requesting transmit). Lastly, the PTT framework also mentions support for PTT devices, and I notice that the didBeginTransmittingFrom parameter has a handsfreeButton case. Is there any documentation or resources for what is actually supported out of the box for this? I am currently working on handling a lot of the push-to-talk through Bluetooth LE and wanted to make sure there wasn't overlap with what the system provides. Thank you!
Replies: 0 · Boosts: 0 · Views: 94 · Activity: 2w
AirPods Audio Sample Rate Issue on macOS Sequoia
I’m experiencing an unusual audio issue with AirPods on macOS Sequoia while developing VoIP applications like Zoom and FaceTime. When AirPods are connected, the other party’s voice sometimes sounds unnaturally stretched (approximately twice as long). This problem can be temporarily fixed by switching the sound output settings from AirPods to speakers and then back to AirPods. From our analysis, the issue appears to be related to the sample rate provided by AudioObjectGetPropertyData. Here’s what we’ve observed: When the issue occurs, the AudioStreamBasicDescription.sampleRate for AirPods is reported as 48000. Under normal conditions, it’s reported as 24000. It seems like the system is mistakenly returning a sample rate that doesn’t match the AirPods’ actual settings, perhaps defaulting to a system speaker value. Once the output setting is toggled, the correct sampleRate (24000) is retrieved. This discrepancy causes our application to transmit the audio stream at 48000, leading to the distorted playback. Has anyone encountered a similar issue or knows how to resolve it?
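Until the underlying cause is clear, one workaround-style sketch is to install a listener on the device's nominal sample rate and re-read it (and reconfigure the outgoing stream) whenever Core Audio reports a change. Here deviceID is assumed to be the AirPods' AudioObjectID obtained elsewhere:

import CoreAudio
import Foundation

// Watch a device's nominal sample rate and log the new value whenever Core
// Audio reports a change, so the app can reconfigure its stream accordingly.
func watchNominalSampleRate(of deviceID: AudioObjectID) {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyNominalSampleRate,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    let status = AudioObjectAddPropertyListenerBlock(deviceID, &address, DispatchQueue.main) { _, _ in
        var rate = Float64(0)
        var size = UInt32(MemoryLayout<Float64>.size)
        var rateAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyNominalSampleRate,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        if AudioObjectGetPropertyData(deviceID, &rateAddress, 0, nil, &size, &rate) == noErr {
            print("Nominal sample rate is now \(rate) Hz")
        }
    }
    if status != noErr {
        print("Failed to install sample rate listener: \(status)")
    }
}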
Replies: 2 · Boosts: 0 · Views: 164 · Activity: 2w
Delay w/ new AudioTap API when system device is a Bluetooth device
I'm capturing audio from other applications on macOS to mix it with other sources in a real-time streaming application. I noticed that audio data captured via the new tapping mechanism introduced in macOS 14.2 arrives delayed in my app when the macOS system output device is a Bluetooth headphone, e.g. Apple AirPods. Sometimes this delay is about 300-400 milliseconds, which makes it unusable for live streaming, because the audio is out of sync with the video and with audio captured from other devices. What is confusing to me is that this also happens when my app does not even use that output device. Is this a known issue? Is there a way around it?
Replies: 1 · Boosts: 0 · Views: 199 · Activity: 2w
Increased delay when AUGraph's output device is system output
I'm using an AUGraph to mix audio from different sources for a real-time streaming application. Whenever the audio device used as the graph's output device is also the Mac's default output device, the measured latency increases by about 35 milliseconds for wired devices. Any idea why this is? Is there a way around this besides nagging the user not to use the system output device in our app?
Replies: 0 · Boosts: 0 · Views: 152 · Activity: 2w