1. I saw the NullAudio custom property declared as static const AudioObjectPropertySelector kPlugIn_CustomPropertyID = 'PCst'; but I don't know how to use it from a project (a rough sketch of what I mean is below).
2. What is the difference between the PlugIn's custom properties and the Device's custom properties?
3. When I try to add a custom PropertySelector for the device (adding kAudioObjectPropertyCustomPropertyInfoList to the NullAudio_HasDeviceProperty method), recompile, and restart the Core Audio service, the virtual device no longer shows up.
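For question 1, this is the kind of client-side read I have in mind, written in Swift. It is only a sketch under my own assumptions: that the plug-in publishes 'PCst' as a global-scope property and that its data is a CFString, as in the NullAudio sample.
import CoreAudio
import CoreFoundation

// Sketch: read the NullAudio plug-in's custom property from a client app.
// Assumptions: the selector is 'PCst' (0x50437374), the scope is global, and the
// property's data is a CFString, as in the NullAudio sample code.
func readCustomPluginProperty(from objectID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: AudioObjectPropertySelector(0x5043_7374), // 'PCst'
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    guard AudioObjectHasProperty(objectID, &address) else { return nil }

    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(objectID, &address, 0, nil, &size) == noErr else {
        return nil
    }

    var value: Unmanaged<CFString>?
    let status = AudioObjectGetPropertyData(objectID, &address, 0, nil, &size, &value)
    guard status == noErr, let cfString = value?.takeRetainedValue() else { return nil }
    return cfString as String
}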
Here are the crash logs:
NSInvalidArgumentException - *** -[AVContentKeyRequest processContentKeyResponse:] AVContentKeySession's keySystem is not same as that of keyResponse
We observed that a few devices on iOS 16.x are unable to download audio media content, and if the user retries multiple times, the app crashes with the error above.
Please let us know if there are any known issues.
Haptics are often represented as audio for presentation purposes by Apple in videos and learning resources.
I am curious if:
...Apple has released, or is willing to release, any tools that may have been used to synthesize audio representing haptic patterns (such as in their WWDC19 Audio-Haptic presentation)?
...there are any current tools available that take haptic instruction as input (like AHAP) and outputs an audio file?
...there is some low-level access to the signal that drives the Taptic Engine, so that it can be repurposed as an audio stream?
...you have any other suggestions!
I can imagine some crude solutions that hack together preexisting synthesizers, fudge together a process to convert AHAP to MIDI instructions, and dial in some synth settings to mimic the behaviour of an actuator, but I'm not too interested in that rabbit hole just yet.
Any thoughts? Very curious what the process was for the WWDC videos and audio examples in their documentation...
Thank you!
We are using the MediaPlayer API to provide CarPlay support. Starting in iOS 18 we are having issues updating the content list. The initial list of items will populate on a fresh instance, but soon thereafter an error shows up saying we are not entitled to "com.apple.mediaremote.external-artwork-validation". From that point onwards, no changes we make to our MPPlayableContentDataSource are reflected in CarPlay, even after restarting the device.
While the MediaPlayer API is marked as deprecated, we are still using it to provide CarPlay support going back to iOS 10.
Has anyone else run into this or have suggestions for workarounds?
Setting the default output device using Core Audio stops working after using continuity with AirPods
Hi, I made an app that manages the sound input/output for the user, and I'm facing unexpected behavior that can be reproduced with Apple's HALLab tool as well.
Initially the app is able to set the input/output AudioDevice and everything works as expected. However, if you switch from your Mac to your iPhone while using AirPods Pro and then switch back again, the app is no longer able to set the output device. There's no error; it simply switches back to AirPods immediately.
You can reproduce the issue in HALLab (an app provided by Apple in the "Additional Tools for Xcode" package).
How to:
Open HALLab, put on your AirPods and play some media.
Try out changing the input and output source and study the expected behavior
Unlock your iPhone, play some media and wait for the AirPods Pro to switch to the iPhone.
Go back to your Mac and play some media and wait for AirPods to switch to Mac
Try changing the output source in HALLab and notice that the source immediately reverts to AirPods, which is the unexpected behavior.
Changing the source from the System Settings keeps working as expected.
Any ideas on what's going on and how to handle that?
I'm on macOS 15.1, using the following code to set the device:
private func setDevice(deviceID: AudioDeviceID, forType type: AudioDeviceType) throws {
guard isDeviceAvailable(deviceID) else {
throw AudioDeviceError.deviceNotAvailable
}
print("setting the device.")
var propertyAddress = AudioObjectPropertyAddress(
mSelector: type == .input ? kAudioHardwarePropertyDefaultInputDevice : kAudioHardwarePropertyDefaultOutputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
mElement: kAudioObjectPropertyElementMain
)
let dataSize = UInt32(MemoryLayout<AudioDeviceID>.size)
var mutableDeviceID = deviceID // Create a mutable copy
let status = AudioObjectSetPropertyData(AudioObjectID(kAudioObjectSystemObject), &propertyAddress, 0, nil, dataSize, &mutableDeviceID)
guard status == noErr else {
throw AudioDeviceError.deviceNotSet(status: status)
}
}
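One thing I'm experimenting with (an assumption on my part, not a confirmed fix) is at least detecting when the system flips the default device back, by listening for changes to kAudioHardwarePropertyDefaultOutputDevice, so the app can log it or re-apply its own selection:
import CoreAudio

// Sketch: observe changes to the default output device. What to do on a change
// (log, re-issue setDevice(deviceID:forType:), etc.) is left to the caller.
func observeDefaultOutputDevice(onChange: @escaping () -> Void) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain
    )
    return AudioObjectAddPropertyListenerBlock(
        AudioObjectID(kAudioObjectSystemObject),
        &address,
        DispatchQueue.main
    ) { _, _ in
        onChange()
    }
}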
I am using an AVQueuePlayer to play a series of audio files. While implementing the now playing functionality for the iOS lock screen, I got a little confused with how to use MPNowPlayingInfo. When you update MPNowPlayingInfo, one of the fields is MPNowPlayingInfoPropertyElapsedPlaybackTime. This leads me to believe you need to call it at least once a second to keep that up-to-date.
But if you don't call it that frequently, the Now Playing UI does update correctly as it's playing, so that makes me think you only need to call it once you start playing?
It feels very expensive to keep on calling it every time in my periodic time observer, but is that the correct approach? Or do you just call it when you play/pause/skip, etc. ?
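For what it's worth, my current reading of the docs (treat this as an assumption, not a confirmed answer) is that you only need to refresh the dictionary on state changes, because the system extrapolates the displayed position from the elapsed time plus the playback rate. A minimal sketch of what I mean:
import AVFoundation
import MediaPlayer

// Called on play/pause/seek/skip rather than from a periodic time observer.
// Assumes `player` is the AVQueuePlayer and the title/duration are already known.
func refreshNowPlayingInfo(player: AVQueuePlayer, title: String, duration: TimeInterval) {
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPMediaItemPropertyTitle] = title
    info[MPMediaItemPropertyPlaybackDuration] = duration
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = player.currentTime().seconds
    info[MPNowPlayingInfoPropertyPlaybackRate] = player.rate // 0.0 when paused, 1.0 when playing
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}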
Hello! I’m making an app that will show a waveform of the frequencies of whatever is playing on a Mac. The question is whether it is possible to get access to the media's signal and run an FFT on it.
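To frame the question a bit: once I have PCM samples (e.g. from a tap on an AVAudioEngine node in my own process), computing a spectrum with Accelerate is the easy half; the part I'm unsure about is getting at the signal of media playing elsewhere on the Mac. A minimal sketch of the FFT half, assuming mono Float samples with a power-of-two count:
import Accelerate

// Sketch: magnitude spectrum of a block of mono samples. Assumes samples.count
// is a power of two; capturing the Mac's system output is the separate problem.
func magnitudeSpectrum(of samples: [Float]) -> [Float] {
    guard let dft = vDSP.DFT(count: samples.count,
                             direction: .forward,
                             transformType: .complexComplex,
                             ofType: Float.self) else { return [] }
    let zeros = [Float](repeating: 0, count: samples.count)
    var outReal = [Float](repeating: 0, count: samples.count)
    var outImag = [Float](repeating: 0, count: samples.count)
    dft.transform(inputReal: samples,
                  inputImaginary: zeros,
                  outputReal: &outReal,
                  outputImaginary: &outImag)
    // Per-bin magnitudes; only the first count/2 bins are unique for a real input.
    return zip(outReal, outImag).map { (re, im) in (re * re + im * im).squareRoot() }
}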
Hi everyone,
I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network.
Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this.
Any advice or code examples would be greatly appreciated!
Thanks in advance,
Ondřej
--
My current code without the compression:
inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
guard let self else {
return
}
// 1. Send data
// a) Convert the buffer into the desired format
if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
// b) Use the converted buffer
// TODO: compress it into a different format
if let data = outputBuffer.convertToData() {
self.sendAudio(data)
}
}
// 2. Get sound level
self.visualizeRecorderBuffer(buffer)
}
func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
let outputFrameCapacity = AVAudioFrameCount(
round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
)
guard
let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
let converter = AVAudioConverter(from: format, to: outputFormat)
else {
return nil
}
converter.convert(to: outputBuffer, error: nil) { packetCount, status in
status.pointee = .haveData
return self
}
return outputBuffer
}
static private let websocketInputFormat = AVAudioFormat(
commonFormat: .pcmFormatInt16,
sampleRate: 16000,
channels: 1,
interleaved: false
)!
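The direction I'm currently exploring, in case it helps the discussion (this is only a sketch: it assumes Core Audio's kAudioFormatOpus encoder is available on the deployment target, and the 48 kHz mono, 20 ms packet layout is my own choice):
import AVFoundation
import AudioToolbox

// Build an AVAudioConverter from the tap's PCM format to an Opus format.
func makeOpusConverter(from inputFormat: AVAudioFormat) -> (AVAudioConverter, AVAudioFormat)? {
    var asbd = AudioStreamBasicDescription(
        mSampleRate: 48_000,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,        // variable bit rate
        mFramesPerPacket: 960,     // 20 ms at 48 kHz
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0
    )
    guard let opusFormat = AVAudioFormat(streamDescription: &asbd),
          let converter = AVAudioConverter(from: inputFormat, to: opusFormat) else { return nil }
    return (converter, opusFormat)
}

// Encode a single PCM buffer into Opus packets and return the raw packet bytes.
func encodeToOpus(_ buffer: AVAudioPCMBuffer, with converter: AVAudioConverter, format: AVAudioFormat) -> Data? {
    let outBuffer = AVAudioCompressedBuffer(format: format,
                                            packetCapacity: 32,
                                            maximumPacketSize: converter.maximumOutputPacketSize)
    var consumed = false
    var error: NSError?
    let status = converter.convert(to: outBuffer, error: &error) { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }
    guard status != .error else { return nil }
    return Data(bytes: outBuffer.data, count: Int(outBuffer.byteLength))
}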
Hello everyone,
I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device using Audio Units. On my MacBook, the default audio input is mono. But when I write a piece of code to capture audio using AUHAL, I discover that I need to provide an AudioBufferList with two channels, not one. Likewise, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, not 20.
To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on how I probe, I get different results: when I probe the stream format, I'm told there is 1 channel, but when I probe the input audio unit, I'm told there are 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
// clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
// Device Name: MacBook Pro Microphone
// Number of Channels (Stream Format): 1
// Number of Elements (Element Count): 2
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>
void printDeviceInfo(AudioUnit audioUnit) {
UInt32 size;
OSStatus err;
AudioStreamBasicDescription streamFormat;
size = sizeof(streamFormat);
err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
&streamFormat, &size);
if (err != noErr) {
printf("Error getting stream format\n");
exit(1);
}
int numChannels = streamFormat.mChannelsPerFrame;
UInt32 elementCount;
size = sizeof(elementCount);
err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
&elementCount, &size);
if (err != noErr) {
printf("Error getting element count\n");
exit(1);
}
printf("Number of Channels (Stream Format): %d\n", numChannels);
printf("Number of Elements (Element Count): %d\n", elementCount);
}
void printDeviceName(AudioDeviceID deviceID) {
UInt32 size;
OSStatus err;
CFStringRef deviceName = NULL;
size = sizeof(deviceName);
err = AudioObjectGetPropertyData(
deviceID,
&(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMain},
0, NULL, &size, &deviceName);
if (err != noErr) {
printf("Error getting device name\n");
exit(1);
}
char deviceNameStr[256];
if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
kCFStringEncodingUTF8)) {
printf("Error converting device name to C string\n");
exit(1);
}
CFRelease(deviceName);
printf("Device Name: %s\n", deviceNameStr);
}
int main(int argc, const char *argv[]) {
@autoreleasepool {
OSStatus err;
// Get the default input device ID
AudioDeviceID input_device_id = kAudioObjectUnknown;
{
UInt32 property_size = sizeof(input_device_id);
AudioObjectPropertyAddress input_device_property = {
kAudioHardwarePropertyDefaultInputDevice,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMain,
};
err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
&property_size, &input_device_id);
if (err != noErr || input_device_id == kAudioObjectUnknown) {
printf("Error getting default input device ID\n");
exit(1);
}
}
// Print the device name using the input device ID
printDeviceName(input_device_id);
// Open audio unit for the input device
AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
kAudioUnitManufacturer_Apple, 0, 0};
AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioUnit audioUnit;
err = AudioComponentInstanceNew(component, &audioUnit);
if (err != noErr) {
printf("Error creating AudioUnit\n");
exit(1);
}
// Enable IO for input on the AudioUnit and disable output
UInt32 enableInput = 1;
UInt32 disableOutput = 0;
err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
1, &enableInput, sizeof(enableInput));
if (err != noErr) {
printf("Error enabling input on AudioUnit\n");
exit(1);
}
err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
0, &disableOutput, sizeof(disableOutput));
if (err != noErr) {
printf("Error disabling output on AudioUnit\n");
exit(1);
}
// Set the current device to the input device
err =
AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
if (err != noErr) {
printf("Error setting device for AudioUnit\n");
exit(1);
}
// Initialize AudioUnit
err = AudioUnitInitialize(audioUnit);
if (err != noErr) {
printf("Error initializing AudioUnit\n");
exit(1);
}
// Print device info
printDeviceInfo(audioUnit);
// Clean up
AudioUnitUninitialize(audioUnit);
AudioComponentInstanceDispose(audioUnit);
}
return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output.
Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts mono input into stereo? Can I ask AUHAL to provide the same number of input channels that the audio device has?
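For reference, here is the shape of what I imagine the fix is, if the answer turns out to be "set the client-side format yourself" (written in Swift for brevity; an untested assumption on my part): copy the channel count from the device side of input bus 1 (input scope) onto the client side of the same bus (output scope). My current guess, which I'd like confirmed, is that the element count of 2 is just the number of AUHAL busses (0 for output, 1 for input) and has nothing to do with channel counts.
import AudioToolbox

// Sketch (untested assumption): make the client-side format of AUHAL's input bus (bus 1)
// carry the same channel count as the device-side format. Call before AudioUnitInitialize.
// Assumes the default non-interleaved Float32 client format, where the per-frame byte
// counts do not depend on the channel count.
func matchClientChannelsToDevice(_ audioUnit: AudioUnit) -> OSStatus {
    var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)

    // What the hardware delivers: input scope of bus 1.
    var deviceFormat = AudioStreamBasicDescription()
    var status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input, 1, &deviceFormat, &size)
    guard status == noErr else { return status }

    // What the input callback receives: output scope of bus 1.
    var clientFormat = AudioStreamBasicDescription()
    size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    status = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output, 1, &clientFormat, &size)
    guard status == noErr else { return status }

    clientFormat.mChannelsPerFrame = deviceFormat.mChannelsPerFrame
    return AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 1, &clientFormat,
                                UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
}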
I'm trying to create an app that uses the Apple Music API. While I can fetch playlists as I desire, when I select a song from a playlist the music does not play. First I get the playlists, then show those playlists in a list in a view, then pass that "song", which is a Track type, into a PlayBackView. There are several UI components in that view, but I want to boil it down here for simplicity's sake to better understand my problem.
struct PlayBackView: View {
@State private var playState: PlayState = .pause
private let player = ApplicationMusicPlayer.shared
@State var song: Track
private var isPlaying: Bool {
return (player.state.playbackStatus == .playing)
}
var body: some View {
VStack {
AsyncImage(url: song.artwork?.url(width: 100, height: 100)) { image in
image
.resizable()
.frame(maxWidth: .infinity)
.aspectRatio(1.0, contentMode: .fit)
} placeholder: {
Image(systemName: "music.note")
.resizable()
.frame(width: 100, height: 100)
}
// Song Title
Text(song.title)
.font(.title)
// Album Title
Text(song.albumTitle ?? "Album Title Not Found")
.font(.caption)
// Play/Pause Button
Button(action: {
handlePlayButton()
}, label: {
Image(systemName: playPauseImage)
})
.padding()
.foregroundStyle(.white)
.font(.largeTitle)
Image(systemName: airplayImage)
.font(ifDeviceIsConnected ? .largeTitle : .title3)
}
.padding()
}
private func handlePlayButton() {
Task {
if isPlaying {
player.pause()
playState = .play
} else {
playState = .pause
await playTrack(song: song)
}
}
}
@MainActor
public func playTrack(song: Track) async {
do {
try await player.play()
playState = .play
} catch {
print(error.localizedDescription)
}
}
}
These are the errors I see printed in the Xcode console:
prepareToPlay failed [no target descriptor]
The operation couldn’t be completed. (MPMusicPlayerControllerErrorDomain error 1.)
ASYNC-WATCHDOG-1: Attempting to wake up the remote process
ASYNC-WATCHDOG-2: Tearing down connection
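For completeness, this is the variant I plan to try next, on the assumption (not yet confirmed) that the "no target descriptor" error simply means the shared player has no queue to play from:
@MainActor
public func playTrack(song: Track) async {
    do {
        // Hypothetical fix: give the player a queue before asking it to play.
        player.queue = ApplicationMusicPlayer.Queue(for: [song])
        try await player.play()
        playState = .play
    } catch {
        print(error.localizedDescription)
    }
}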
Hi,
I am trying to detect if an audio stream is Dolby Atmos. I have existing code that determines if a stream is Dolby Atmos based on the following:
Channel count is greater than or equal to 8
Binaural is true
Immersive is true
Downmix is false
I am trying to determine whether these rules are correct, and to find documentation that specifies them which I can reference in the future (a concrete encoding of the heuristic is below).
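To make the heuristic concrete, this is how I currently encode it (purely my assumption that all four conditions combine with AND; not an authoritative rule):
// Encodes the heuristic listed above. The field values are assumed to come from
// whatever metadata describes the audio rendition; this is not an official check.
struct AudioRenditionAttributes {
    let channelCount: Int
    let isBinaural: Bool
    let isImmersive: Bool
    let isDownmix: Bool
}

func looksLikeDolbyAtmos(_ attributes: AudioRenditionAttributes) -> Bool {
    attributes.channelCount >= 8
        && attributes.isBinaural
        && attributes.isImmersive
        && !attributes.isDownmix
}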
Any help you can provide is greatly appreciated.
Regards,
John
Is it possible to play audio while the app is in the background or has been terminated? If yes, how can I do that in iOS using Swift? I receive an audio link in a Firebase notification; how can I play that audio link when the app is in the background or terminated?
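For the background half of this, my understanding (an assumption I'd like confirmed) is that you need the Audio background mode enabled on the target plus an active playback audio session; whether anything can be done once the app is terminated is exactly what I'm unsure about. A minimal sketch:
import AVFoundation

// Covers only the "app in background" case, with the Audio background mode enabled
// in the target's capabilities. The URL string is whatever arrives in the notification.
final class RemoteAudioPlayer {
    private var player: AVPlayer?

    func play(urlString: String) throws {
        guard let url = URL(string: urlString) else { return }
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
        player = AVPlayer(url: url)
        player?.play()
    }
}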
I am developing an app running on iOS/iPadOS and on macOS using MacCatalyst. It uses ApplicationMusicPlayer.shared to play music from Apple Music. However, on the Mac songs with contentRating == .explicit do not work.
I will get the following error (sorry, German localization):
Failed to prepareToPlay error=<MPMusicPlayerControllerErrorDomain.6 "Failed to prepare to play" {}>
Error playing item: Der Vorgang konnte nicht abgeschlossen werden. (MPMusicPlayerControllerErrorDomain-Fehler 6.) [English: The operation could not be completed. (MPMusicPlayerControllerErrorDomain error 6.)]
On iOS/iPadOS these songs play correctly. What can I do to also play explicit songs using MacCatalyst?
Thanks,
Dirk
I've encountered a critical issue while developing a music player app using SwiftUI and MusicKit. The problem persists across multiple devices and iOS versions, specifically with the endSeeking() method of ApplicationMusicPlayer, which fails to stop the fast-forward operation as expected.
Development Environment:
Xcode 16 Beta 6
macOS Sonoma 15.0 Beta 7 (24A5327a)
Affected Devices:
iPhone 11 Pro Max (iOS 17.6)
iPhone SE 3 (iOS 18.0 Beta 7)
Here's the relevant code snippet:
Image(systemName: "forward.end.circle")
.foregroundStyle(.accent)
.gesture(
TapGesture()
.onEnded { _ in
vm.nextTrack()
}
)
.simultaneousGesture(
LongPressGesture(minimumDuration: 0.5)
.onChanged { isPressing in
if isPressing {
vm.player.beginSeekingForward()
}
}
.onEnded { _ in
vm.player.endSeeking()
}
)
The issue manifests when the long press ends: despite invoking the endSeeking() method, the fast-forward operation persists.
To troubleshoot, I've taken the following steps:
Confirmed that vm.player is set to ApplicationMusicPlayer.shared.
Attempted to combine endSeeking() with beginSeekingForward(), as per the documentation guidelines.
Despite these efforts, the problem persists across all tested devices and OS versions. This leads me to two critical questions:
Has anyone else encountered a similar issue?
Could this potentially be an undocumented bug in the latest MusicKit implementation?
I want my media app to support Siri when using the phone, and CarPlay when in the car but I get an error during installation that it's not possible because 2 extensions both use INPlayMediaIntent.
Can someone explain why this is bad? I don't understand why I can't have both.
Has anyone figured this one out? Pasting them gives us detritus problems, but there must be some way of doing this? Thank you.
I'm having trouble using SFSpeechRecognizer & SFSpeechRecognitionTask to show me the words from an audio file. I found a solution on stackoverflow to separate the audio file into smaller sizes. How would I do that programmatically using Swift for a macOS app Xcode project?
I would prefer not to separate the file into smaller files. I will submit another post with more information for that.
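In case splitting does turn out to be the way to go, this is the shape of what I have in mind (a minimal sketch; the chunk length, CAF output format, and file naming are placeholders of my own):
import AVFoundation

// Sketch: read the source file in fixed-size chunks and write each chunk out as
// its own CAF file, returning the chunk URLs for later recognition passes.
func splitAudioFile(at url: URL, into outputDirectory: URL, secondsPerChunk: Double = 60) throws -> [URL] {
    let source = try AVAudioFile(forReading: url)
    let format = source.processingFormat
    let framesPerChunk = AVAudioFrameCount(secondsPerChunk * format.sampleRate)
    var chunkURLs: [URL] = []
    var index = 0

    while source.framePosition < source.length {
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: framesPerChunk) else { break }
        try source.read(into: buffer, frameCount: framesPerChunk)
        if buffer.frameLength == 0 { break }

        let chunkURL = outputDirectory.appendingPathComponent("chunk-\(index).caf")
        let sink = try AVAudioFile(forWriting: chunkURL, settings: format.settings)
        try sink.write(from: buffer)
        chunkURLs.append(chunkURL)
        index += 1
    }
    return chunkURLs
}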
iOS Audio Lockscreen Problem in PWA
Description
When running a PWA on iOS, playing audio from the lock screen works as expected until you leave the audio paused for 30 seconds. After that, the audio ceases to function until you bring the PWA back to the foreground.
Reproduction
In a PWA, create an HTML 5 audio element.
Load an audio file into it.
Set navigator.mediaSession data and action handlers for play and pause.
Everything is in working order and your audio plays and pauses from the lock screen.
Pause your audio and wait for 30 seconds.
Now, press the play button. Your audio will no longer function.
At this point, the only way to get the audio to function again is to open the PWA into the foreground. Once you do this, the audio will be in working order.
What is expected
In step number 6, when you press the play button, the audio should play. The lock screen audio should not enter a non-functional state or there should be some way to "wake up" the PWA.
Closing
If you follow these steps exactly on Android, you will see that the problem does not exist on those devices.
We do custom audio buffering in our app. A very minimal version of the relevant code would look something like:
import AVFoundation
class LoopingBuffer {
private var playerNode: AVAudioPlayerNode
private var audioFile: AVAudioFile
init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
self.playerNode = playerNode
self.audioFile = audioFile
}
func scheduleBuffer(_ frames: AVAudioFrameCount) async {
let audioBuffer = AVAudioPCMBuffer(
pcmFormat: audioFile.processingFormat,
frameCapacity: frames
)!
try! audioFile.read(into: audioBuffer, frameCount: frames)
await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
}
}
We are in the migration process to swift 6 concurrency and have moved a lot of our audio code into a global actor. So now we have something along the lines of
import AVFoundation
@globalActor public actor AudioActor: GlobalActor {
public static let shared = AudioActor()
}
@AudioActor
class LoopingBuffer {
private var playerNode: AVAudioPlayerNode
private var audioFile: AVAudioFile
init(playerNode: AVAudioPlayerNode, audioFile: AVAudioFile) {
self.playerNode = playerNode
self.audioFile = audioFile
}
func scheduleBuffer(_ frames: AVAudioFrameCount) async {
let audioBuffer = AVAudioPCMBuffer(
pcmFormat: audioFile.processingFormat,
frameCapacity: frames
)!
try! audioFile.read(into: audioBuffer, frameCount: frames)
await playerNode.scheduleBuffer(audioBuffer, completionCallbackType: .dataConsumed)
}
}
Unfortunately this now causes an error:
error: sending 'audioBuffer' risks causing data races
| `- note: sending global actor 'AudioActor'-isolated 'audioBuffer' to nonisolated instance method 'scheduleBuffer(_:completionCallbackType:)' risks causing data races between nonisolated and global actor 'AudioActor'-isolated uses
I understand the error, what I don't understand is how I can safely use this API?
AVAudioPlayerNode is not marked as @MainActor so it seems like it should be safe to schedule a buffer from a custom actor as long as we don't send it anywhere else. Is that right?
AVAudioPCMBuffer is not Sendable so I don't think it's possible to make this callsite ever work from an isolated context. Even forgetting about the custom actor, if you instead annotate the class with @MainActor the same error is still present.
I think the AVAudioPlayerNode.scheduleBuffer() function should have a sending annotation to make clear that the buffer can't be used after it's sent. I think that would make the compiler happy but I'm not certain.
Am I overlooking something, holding it wrong, or is this API just pretty much unusable in Swift 6?
My current workaround is just to import AVFoundation with @preconcurrency but it feels dirty and I am worried there may be a real issue here that I am missing in doing so.
I am trying to achieve AAC playback. I have stripped off the ADTS header using a function.
I am not being shown any errors by the Apple API however I cannot hear any playback.
Here is my ASBD (included in the code snippets below).
My sample is definitely 44.1 kHz and AAC-LC.
Here is the file for reference: https://dl.espressif.com/dl/audio/ff-16b-2c-44100hz.aac
Here are some relevant snippets of the code:
AudioStreamBasicDescription desc = {0};
desc.mSampleRate = 44100; // Sample rate
desc.mFormatID = kAudioFormatMPEG4AAC; // Format ID for AAC
desc.mChannelsPerFrame = 2; // Stereo audio
desc.mFramesPerPacket = 1024; // AAC typically uses 1024 frames per packet
desc.mBitsPerChannel = 0;
desc.mBytesPerPacket = 0;
desc.mBytesPerFrame = 0;
OSStatus status =
CMAudioFormatDescriptionCreate(kCFAllocatorDefault,
&desc,
inlayout_size, //32 corresponding to stereo
inlayout_buf,
kAudioFormatMPEG4AAC,
nil,
nil,
&_fmtDesc);
const CMBlockBufferCustomBlockSource blockSource = {
.version = kCMBlockBufferCustomBlockSourceVersion,
.FreeBlock = customBlock_Free,
.refCon = block,
};
OSStatus status;
CMSampleBufferRef sampleBuffer;
CMBlockBufferRef blockBuf;
status = CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault,
(block->p_buffer), // memoryBlock
(block->i_buffer), // blockLength
kCFAllocatorNull, // blockAllocator
&blockSource, // customBlockSource
0, // offsetToData
(block->i_buffer), // dataLength
0, // flags
&blockBuf);
const CMSampleTimingInfo timeInfo = {
.duration = kCMTimeInvalid,
.presentationTimeStamp = CMTimeMake(_ptsSamples, _sampleRate),
.decodeTimeStamp = kCMTimeInvalid,
};
status = CMSampleBufferCreateReady(kCFAllocatorDefault,
blockBuf, // dataBuffer
_fmtDesc, // formatDescription
1024, // numSamples
1, // numSampleTimingEntries
&timeInfo, // sampleTimingArray
1, // numSampleSizeEntries
&_bytesPerFrame, // sampleSizeArray
&sampleBuffer);
The renderer then handles this sampleBuffer, which is working correctly, as I have tested it with other formats.
I have verified the hex dump of the p_buffer and it matches with that of the .aac file having removed the ADTS header.
Here is an output example
Hex dump of p_buffer(which is being passed to CMBlockBufferCreateWithMemoryBlock):
4CFE1DE0: 21 1A 8F 20 63 E7 FF FF 11 72 A3 20 C5 E3 B7 E9
4CFE1DF0: 42 F5 3D 9A D1 77 D2 F0 9A 00 00 B2 32 53 84 8C
4CFE1E00: E8 24 ED DF 23 04 3D CF A6 51 A8 D2 8F EE B3 FB
4CFE1E10: F4 CC 17 F9 7C 8B 75 06 29 8D D6 95 98 78 9D 87
4CFE1E20: 9C B4 9D 8E 2B 6C D2 90 D7 E3 C4 37 05 97 85 C1
4CFE1E30: F7 5E 7F D8 F3 DD 20 B5 73 31 C5 EC 3D 6F AC 5E
4CFE1E40: 45 AF CC 38 0D 5B 98 F5 F9 3B 3E D7 C3 8E 1B 38
4CFE1E50: F8 F1 9A 6F 96 05 15 CE 39 D6 2B 06 60 33 8A C4
4CFE1E60: EE 4F 6B B3 C9 CF F2 BF 3F B1 96 69 B9 62 34 62
4CFE1E70: CD 41 1C 08 CF 80 5F A4 60 BD 45 36 AC 66 00 40
4CFE1E80: 42 F6 95 F4 89 8A A2 24 11 01 74 08 82 33 94 D1
4CFE1E90: 0B 24 51 4A 55 28 06 21 78 85 D4 B5 13 49 1D AA
4CFE1EA0: 44 02 32 E9 42 61 8C 59 4A 65 96 4D BC BC AE D2
4CFE1EB0: F1 D0 00 00 D4 A2 F8 87 A0 FD C8 93 87 59 A2 CB
4CFE1EC0: BE B3 AB 49 C6 37 60 2B 50 26 D3 0C 1D 29 45 81
4CFE1ED0: D9 4E 62 5E 29 8E 27 19 75 FB 62 0B 3B C0 B9 E6
4CFE1EE0: EB A0 3F B8 D5 7E 77 90 C1 E2 9C D9 4E 5B 82 ED
4CFE1EF0: CF BC 55 1C 55 1B F2 DE CC B2 13 25 CB ED F5 B5
4CFE1F00: 6E F9 EF 38 DE 8C C4 38 C2 60 CF DA F3 F2 1F 80
4CFE1F10: C5 23 0C 3E 57 31 0D 5E EB 63 58 1A 28 38 7B B2
4CFE1F20: 0B F3 5B 33 96 59 55 44 4A 09 55 73 EC 94 A0 F3
4CFE1F30: FC F4 70 F9 76 FB FF 8D AD 13 01 30 05 C0 90 01
4CFE1F40: B2 37 27 24 44 B9 F0 24 4E C5 D4 25 D6 F7 20 4D
4CFE1F50: 39 92 5D 31 71 5B 4A B2 A4 C1 59 D4 42 60 1C 00
4CFE1F60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
i_buffer: 383
CMBlockBufferIsEmpty check: 0
Block not null in wrapBuffer
CMBlockBufferCreateWithMemoryBlock status: 0
I have tried multiple configurations; no errors are shown in any log, yet I cannot hear playback.
Please help me identify what is wrong here.
I have used this as a reference, which seems to be based on previous Apple documentation
https://github.com/UFOooX/iOSAACStreamPlayer