Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Post

Replies

Boosts

Views

Activity

AVAudioPlayerNode can't play interleaved AVAudioPCMBuffer
I'm building a streaming app on visionOS that plays sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format. However, when I set up the AVAudioFormat with interleaved set to true, the app crashes with a memory issue:

    AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)

But if I set up the AVAudioFormat with interleaved set to false and manually fill the AVAudioPCMBuffer, it plays audio as expected. Could you please help me fix it? Below is the code snippet.

    @Observable final class MyAudioPlayer {
        private var audioEngine: AVAudioEngine = .init()
        private var audioPlayerNode: AVAudioPlayerNode = .init()
        private var audioFormat: AVAudioFormat?

        init() {
            audioEngine.attach(audioPlayerNode)
            audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
            try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try? AVAudioSession.sharedInstance().setActive(true)
            audioEngine.prepare()
            try? audioEngine.start()
            audioPlayerNode.play()
        }
        // more code...
        /// This crashes
        private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
            guard let buf,
                  let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                             sampleRate: 480000,
                                             channels: 2,
                                             interleaved: true),
                  let audioBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                                     frameCapacity: AVAudioFrameCount(samples))
            else { return }
            audioBuffer.frameLength = AVAudioFrameCount(samples)
            if let data = audioBuffer.floatChannelData?[0] {
                data.update(from: buf, count: samples * Int(format.channelCount))
            }
            audioPlayerNode.scheduleBuffer(audioBuffer)
        }

        /// This works
        private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
            guard let buf,
                  let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                             sampleRate: 480000,
                                             channels: 2,
                                             interleaved: false),
                  let audioBuffer = AVAudioPCMBuffer(pcmFormat: format,
                                                     frameCapacity: AVAudioFrameCount(samples))
            else { return }
            audioBuffer.frameLength = AVAudioFrameCount(samples)
            if let data = audioBuffer.floatChannelData {
                for channel in 0 ..< Int(format.channelCount) {
                    for frame in 0 ..< Int(audioBuffer.frameLength) {
                        data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
                    }
                }
            }
            audioPlayerNode.scheduleBuffer(audioBuffer)
        }
    }
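For reference, the deinterleaving step performed by audioFrameCallback_Non_Interleaved can be isolated as a small pure-Swift helper. This is an illustrative sketch (the function name and array-based signature are mine, not from the post); the post's real code operates on unsafe pointers instead:

```swift
// Deinterleave a [L0, R0, L1, R1, ...] sample stream into per-channel arrays,
// mirroring the inner loop of the working (non-interleaved) callback.
func deinterleave(_ interleaved: [Float], channelCount: Int) -> [[Float]] {
    precondition(channelCount > 0 && interleaved.count % channelCount == 0)
    let frames = interleaved.count / channelCount
    var channels = [[Float]](repeating: [Float](repeating: 0, count: frames),
                             count: channelCount)
    for frame in 0..<frames {
        for channel in 0..<channelCount {
            // Interleaved layout: sample for (frame, channel) sits at
            // frame * channelCount + channel.
            channels[channel][frame] = interleaved[frame * channelCount + channel]
        }
    }
    return channels
}
```

For example, `deinterleave([0.1, 0.2, 0.3, 0.4], channelCount: 2)` yields `[[0.1, 0.3], [0.2, 0.4]]` — the left channel first, then the right.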
1
0
175
2w
AVSpeechUtterance Mandarin voice output replaced by Siri language setting after upgrading iOS to 18
Hi, Apple engineers. Hoping you can reply to this one. We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18. AVSpeechSynthesisVoice(language: "zh-CN") runs well under iOS 16 and iOS 17: it speaks Mandarin correctly. In iOS 18, we noticed that Siri's language setting interferes with AVSpeechSynthesisVoice: it plays Cantonese instead of Mandarin. The buggy Siri language settings that affect AVSpeechSynthesisVoice are: Chinese (Cantonese - China mainland) and Chinese (Cantonese - Hong Kong).
1
2
214
2w
Clear ApplicationMusicPlayer queue after station queued
After an Album, Playlist, or collection of songs has been added to the ApplicationMusicPlayer queue, clearing the queue can be easily accomplished with:

    ApplicationMusicPlayer.shared.queue.entries = []

This transitions the player to a paused state with an empty queue. After queueing a Station, the same code cannot be used to clear the queue. Instead, it causes the queue to be refilled with a current and next MusicItem from the Station. What's the correct way to detect that the ApplicationMusicPlayer is in the state where it's being refilled by a Station, and to clear it? I've tried the following approaches with no luck:

    // Reinitialize the queue
    ApplicationMusicPlayer.shared.queue = ApplicationMusicPlayer.Queue()

    // Create an empty Queue
    let songs: [Song] = []
    let emptyQueue = ApplicationMusicPlayer.Queue(for: songs)
    ApplicationMusicPlayer.shared.queue = emptyQueue
1
0
196
2w
Help with corrupted audio plugin authentication
Somehow I have a corrupted audio plugin authentication problem. I'm on an Apple silicon M1 Mac, and two audio plugins that were installed and working will now not authenticate. The vendors are both unable to troubleshoot, and I think the issue is a corrupted low-level file. One product authenticates correctly when I create a new user, but the other plugin only authenticates on the original user account and not on the newly created one. Reinstalling the plugins and macOS does not fix the issue. Any thoughts?
0
0
177
2w
Data Persistence of AVAssets
Hey, I am fairly new to working with AVFoundation etc. As far as I could research on my own, if I want to get metadata from, let's say, a .m4a audio file, I have to get the data and then create an AVAsset. My files are all on local servers, and therefore I can't just pass in the URL. The extraction of the metadata works fine; however, those AVAssets create a huge overhead in storage consumption. To my knowledge, the Data instance for each audio file and its AVAsset should only live inside the function I call to extract the metadata. However, those Data/AVAsset instances still live on in storage, as I can clearly see the app's file size increase by multiple gigabytes (equal to the size of the library I test with). The only data that I purposefully save with SwiftData is the album artwork. Is this normal behavior for AVAssets, or am I missing some detail? PS: If I forgot to mention something important, please ask. This is my first ever post, so I'm not too sure what is worth mentioning. Thank you in advance! Denis
1
0
181
2w
Help with CoreAudio Input level monitoring
I have spent the past two weeks diving into Core Audio and have seemingly run into a wall. For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user-chosen audio input device (multi-channel in my case). I was not able to find any way to monitor the input levels of a single AUHAL input device, so I decided to create a simple AUGraph for input level monitoring. The graph looks like:

    [AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]

where B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower. B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't really want to pass audio through to an actual output, I have the volume muted on the MatrixMixerAU output channel. I tried using a GenericOutputAU instead of the default system output; however, the GenericOutputAU never seems to pull data from the ring buffer (the graph's render proc is never called if a GenericOutputAU is used instead of the AUHAL default output device). I have not been able to get this simple graph to work. I do not see any errors when creating and initializing the graph, and I have verified that the input proc is being called to fill the ring buffer, but when I read the levels from the MatrixMixer, they are always -758 (silence). I have posted my demo project on GitHub in hopes I can find someone with Core Audio expertise to help with this problem. I am willing to move this to DTS code-level support if there is someone in DTS with Core Audio experience. Notes:

My app is not sandboxed in this test.
I have tried with and without hardened runtime, with Audio Input checked.
The multichannel audio device I am using for testing is the Audient iD14 USB-C audio interface. It supports 12 input channels and 6 output channels. All input channels have been tested and are working in Ableton Live and Logic Pro.

Of particular interest is that I can't even get the Apple CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence. The MatrixMixerTest from the Apple documentation archives does work, but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.

Link to GitHub project page.
Diagram of AUGraph for initial test (code that is on GitHub).

Once I get audio input level metering to work, my plan is to implement something like Phase 2 below, with the purpose of capturing a stereo input stream, mixing it to mono, and sending it to lowpass, bandpass, and highpass AUs; I will again use the MatrixMixer for monitoring the levels out of each filter. I have no plans to pass audio through (sending actual audio out to devices); I am simply monitoring input levels.

Diagram of ultimate scope: rendering audio levels of a stereo-to-mono stream after passing through various filters.
0
0
198
3w
Recording audio from a microphone using the AVFoundation framework does not work after reconnecting the microphone
Different microphones can be connected via a 3.5 mm jack, via USB, or via Bluetooth; the behavior is the same. The code below gets access to the microphone (connected to the 3.5 mm audio jack) and starts an audio capture session. At that point, the microphone-in-use icon starts to be displayed. The capture of the audio device (microphone) continues for a few seconds, then the session stops and the microphone-in-use icon disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start an audio capture session, and the microphone-in-use icon is displayed again. After a few seconds, access to the microphone stops and the audio capture session stops, after which the icon disappears. Next, we perform the same actions, but after the first stop of access to the microphone, we pull the microphone plug out of the connector and insert it back before trying to start the second session. In this case, the second access attempt begins and the running part of the program does not return errors, but the microphone-in-use icon is not displayed; this is the problem. After the program is quit and restarted, the icon is displayed again. This problem is only the tip of the iceberg: it manifests in the fact that it is not possible to record sound from the microphone after reconnecting it until the program is restarted. Is this normal behavior of the AVFoundation framework? Is it possible to somehow ensure that after reconnecting the microphone, access to it occurs correctly and the usage indicator is displayed? What additional actions should the programmer perform in this case? Is this behavior described somewhere in the documentation? Below is the code to demonstrate the described behavior. I am also attaching an example of the microphone usage indicator icon.
Computer description: MacBook Pro 13-inch 2020, Intel Core i7, macOS Sequoia 15.1.

    #include <chrono>
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>

    #include <AVFoundation/AVFoundation.h>
    #include <Foundation/NSString.h>
    #include <Foundation/NSURL.h>

    AVCaptureSession* m_captureSession = nullptr;
    AVCaptureDeviceInput* m_audioInput = nullptr;
    AVCaptureAudioDataOutput* m_audioOutput = nullptr;

    std::condition_variable conditionVariable;
    std::mutex mutex;
    bool responseToAccessRequestReceived = false;

    void receiveResponse() {
        std::lock_guard<std::mutex> lock(mutex);
        responseToAccessRequestReceived = true;
        conditionVariable.notify_one();
    }

    void waitForResponse() {
        std::unique_lock<std::mutex> lock(mutex);
        conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
    }

    void requestPermissions() {
        responseToAccessRequestReceived = false;
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio
                                 completionHandler:^(BOOL granted) {
            const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
            std::cout << "Request completion handler granted: " << (int)granted
                      << ", status: " << status << std::endl;
            receiveResponse();
        }];
        waitForResponse();
    }

    void timer(int timeSec) {
        for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining) {
            std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }

    bool updateAudioInput() {
        [m_captureSession beginConfiguration];
        if (m_audioOutput) {
            AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
            [m_captureSession removeConnection:lastConnection];
        }
        if (m_audioInput) {
            [m_captureSession removeInput:m_audioInput];
            [m_audioInput release];
            m_audioInput = nullptr;
        }
        AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID:
            [NSString stringWithUTF8String:"BuiltInHeadphoneInputDevice"]];
        if (!audioInputDevice) {
            std::cout << "Error creating input audio device" << std::endl;
            return false;
        }
        // m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
        // NSError *error = nil;
        NSError *error = [[NSError alloc] init];
        m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
        if (error) {
            const auto code = [error code];
            const auto domain = [error domain];
            const char* domainC = domain ? [domain UTF8String] : nullptr;
            std::cout << code << " " << domainC << std::endl;
        }
        if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
            [m_audioInput retain];
            [m_captureSession addInput:m_audioInput];
        } else {
            std::cout << "Failed to create audio device input" << std::endl;
            return false;
        }
        if (!m_audioOutput) {
            m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
            if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput]) {
                [m_captureSession addOutput:m_audioOutput];
            } else {
                std::cout << "Failed to add audio output" << std::endl;
                return false;
            }
        }
        [m_captureSession commitConfiguration];
        return true;
    }

    void start() {
        std::cout << "Starting..." << std::endl;
        const bool updatingResult = updateAudioInput();
        if (!updatingResult) {
            std::cout << "Error while updating audio input" << std::endl;
            return;
        }
        [m_captureSession startRunning];
    }

    void stop() {
        std::cout << "Stopping..." << std::endl;
        [m_captureSession stopRunning];
    }

    int main() {
        requestPermissions();
        m_captureSession = [[AVCaptureSession alloc] init];
        start();
        timer(5);
        stop();
        timer(10);
        start();
        timer(5);
        stop();
    }
1
0
252
3w
Anyone know the output power of the headphone jack of a MacBook Pro for each percentage of volume?
Hello! I'm trying to create a headphone safety prototype that gives warnings if I listen to music too loud, by inputting my headphones' impedance, sensitivity, and a target SPL level. All I need is data on the amount of power each percentage of volume outputs (I'm assuming the MacBook Pro has a 1-100% volume scale). If anyone has this info, or can direct me to someone who does, that would be great! Also, do I contact Apple Support for things like this? I'm not too sure... Thanks!!
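For the arithmetic side of such a prototype, the usual back-of-the-envelope relations are P = V² / Z for the power delivered into an impedance Z, and SPL ≈ sensitivity (dB SPL at 1 mW) + 10·log₁₀(P in mW). A minimal sketch of that math (the specific numbers below are illustrative assumptions, not measured MacBook Pro output data — the per-volume-step voltage is exactly the unknown the question asks about):

```swift
import Foundation

// Power in milliwatts delivered by an output voltage (volts RMS)
// into a headphone impedance (ohms): P = V^2 / Z, converted to mW.
func powerMilliwatts(voltsRMS: Double, impedanceOhms: Double) -> Double {
    voltsRMS * voltsRMS / impedanceOhms * 1000
}

// Estimated SPL for headphones given their sensitivity (dB SPL at 1 mW)
// and the electrical power delivered, in milliwatts.
func estimatedSPL(sensitivityDBPerMilliwatt: Double, powerMilliwatts: Double) -> Double {
    sensitivityDBPerMilliwatt + 10 * log10(powerMilliwatts)
}
```

For example, hypothetical 32-ohm headphones rated at 100 dB SPL/mW and driven at 0.1 V RMS receive about 0.31 mW, or roughly 95 dB SPL.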
0
0
143
3w
Capturing system audio no longer works with macOS Sequoia
Our capture application records system audio via a HAL plugin; however, with the latest macOS 15 Sequoia, all audio buffer values are zero. I am attaching sample code that replicates the problem. Compile it as a Command Line Tool application with Xcode.

STEPS TO REPRODUCE
Install the BlackHole 2ch audio driver: https://existential.audio/blackhole/download/?code=1579271348
Start some system audio, e.g. YouTube.
Compile and run the sample application.

On macOS up to Sonoma, you will hear audio via loopback and see audio values in the debug/console window. On macOS Sequoia, you will not hear audio and the audio values are 0.

    #import <AVFoundation/AVFoundation.h>
    #import <CoreAudio/CoreAudio.h>

    #define BLACKHOLE_UID @"BlackHole2ch_UID"
    #define DEFAULT_OUTPUT_UID @"BuiltInSpeakerDevice"

    @interface AudioCaptureDelegate : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
    @end

    void setDefaultAudioDevice(NSString *deviceUID);

    @implementation AudioCaptureDelegate

    // Receive samples from the CoreAudio/HAL driver and print amplitude values for testing.
    // This is where samples would normally be copied and passed downstream for further
    // processing, which is not needed in this simple sample application.
    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        // Access the audio data in the sample buffer
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        if (!blockBuffer) {
            NSLog(@"No audio data in the sample buffer.");
            return;
        }
        size_t length;
        char *data;
        CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &length, &data);

        // Process the audio samples to calculate the average amplitude
        int16_t *samples = (int16_t *)data;
        size_t sampleCount = length / sizeof(int16_t);
        int64_t sum = 0;
        for (size_t i = 0; i < sampleCount; i++) {
            sum += abs(samples[i]);
        }

        // Calculate and log the average amplitude
        float averageAmplitude = (float)sum / sampleCount;
        NSLog(@"Average Amplitude: %f", averageAmplitude);
    }

    @end

    // Set the default audio device to BlackHole while testing, or to the speakers when done.
    // Called by main.
    void setDefaultAudioDevice(NSString *deviceUID) {
        AudioObjectPropertyAddress address;
        AudioDeviceID deviceID = kAudioObjectUnknown;
        UInt32 size;
        CFStringRef uidString = (__bridge CFStringRef)deviceUID;

        // Get the device corresponding to the given UID.
        AudioValueTranslation translation;
        translation.mInputData = &uidString;
        translation.mInputDataSize = sizeof(uidString);
        translation.mOutputData = &deviceID;
        translation.mOutputDataSize = sizeof(deviceID);
        size = sizeof(translation);
        address.mSelector = kAudioHardwarePropertyDeviceForUID;
        address.mScope = kAudioObjectPropertyScopeGlobal; //????
        address.mElement = kAudioObjectPropertyElementMain;
        OSStatus status = AudioObjectGetPropertyData(kAudioObjectSystemObject, &address,
                                                     0, NULL, &size, &translation);
        if (status != noErr) {
            NSLog(@"Error: Could not retrieve audio device ID for UID %@. Status code: %d",
                  deviceUID, (int)status);
            return;
        }

        AudioObjectPropertyAddress propertyAddress;
        propertyAddress.mSelector = kAudioHardwarePropertyDefaultOutputDevice;
        propertyAddress.mScope = kAudioObjectPropertyScopeGlobal;
        status = AudioObjectSetPropertyData(kAudioObjectSystemObject, &propertyAddress,
                                            0, NULL, sizeof(AudioDeviceID), &deviceID);
        if (status == noErr) {
            NSLog(@"Default audio device set to %@", deviceUID);
        } else {
            NSLog(@"Failed to set default audio device: %d", status);
        }
    }

    // Sets the BlackHole device as default and configures it as an AVCaptureDeviceInput.
    // Sets the speakers as loopback so we can hear what is being captured.
    // Sets up a queue to receive capture samples.
    // Runs the session for 30 seconds, then restores the speakers as the default output.
    int main(int argc, const char * argv[]) {
        @autoreleasepool {
            // Create the capture session
            AVCaptureSession *session = [[AVCaptureSession alloc] init];

            // Select the audio device
            AVCaptureDevice *audioDevice = nil;
            NSString *audioDriverUID = BLACKHOLE_UID;
            setDefaultAudioDevice(audioDriverUID);
            audioDevice = [AVCaptureDevice deviceWithUniqueID:audioDriverUID];
            if (!audioDevice) {
                NSLog(@"Audio device %s not found!", [audioDriverUID UTF8String]);
                return -1;
            } else {
                NSLog(@"Using audio device: %s", [audioDriverUID UTF8String]);
            }

            // Configure the audio input with the selected device (BlackHole)
            NSError *error = nil;
            AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error];
            if (error || !audioInput) {
                NSLog(@"Failed to create audio input: %@", error);
                return -1;
            }
            [session addInput:audioInput];

            // Configure the audio data output
            AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
            AudioCaptureDelegate *delegate = [[AudioCaptureDelegate alloc] init];
            dispatch_queue_t queue = dispatch_queue_create("AudioCaptureQueue", NULL);
            [audioOutput setSampleBufferDelegate:delegate queue:queue];
            [session addOutput:audioOutput];

            // Set audio settings
            NSDictionary *audioSettings = @{
                AVFormatIDKey: @(kAudioFormatLinearPCM),
                AVSampleRateKey: @48000,
                AVNumberOfChannelsKey: @2,
                AVLinearPCMBitDepthKey: @16,
                AVLinearPCMIsFloatKey: @NO,
                AVLinearPCMIsNonInterleaved: @NO
            };
            [audioOutput setAudioSettings:audioSettings];

            AVCaptureAudioPreviewOutput *loopback_output = [[AVCaptureAudioPreviewOutput alloc] init];
            loopback_output.volume = 1.0;
            loopback_output.outputDeviceUniqueID = DEFAULT_OUTPUT_UID;
            [session addOutput:loopback_output];
            const char *deviceID = loopback_output.outputDeviceUniqueID ?
                [loopback_output.outputDeviceUniqueID UTF8String] : "nil";
            NSLog(@"session addOutput for preview/loopback: %s", deviceID);

            // Start the session
            [session startRunning];
            NSLog(@"Capturing audio data for 30 seconds...");
            [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:30.0]];

            // Stop the session
            [session stopRunning];
            NSLog(@"Capture session stopped.");
            setDefaultAudioDevice(DEFAULT_OUTPUT_UID);
        }
        return 0;
    }
4
0
301
3w
Play audio data coming from serial port
Hi. I want to read ADPCM-encoded audio data coming from an external device to my Mac via a serial port (/dev/cu.usbserial-0001) in 256-byte chunks, and feed it into an audio player. So far I am using Swift and SwiftSerial (GitHub - mredig/SwiftSerial: A Swift Linux and Mac library for reading and writing to serial ports) to get the data via serialPort.asyncBytes() into an AsyncStream, but I am struggling to understand how to feed the stream to an AVAudioPlayer or similar. I am new to Swift and macOS audio development, so any help to get me on the right track is greatly appreciated. Thx
0
0
135
3w
Get audio volume from microphone
Hello. We are trying to get the audio volume from the microphone. We have two questions.

1. Can anyone tell me about AVAudioEngine.InputNode.volume? We expected it to return 0 in silence and a Float value up to 1.0 depending on the volume, but it looks like 1.0 (the default value) is returned at all times. In which cases does it return 0.5 or 0? Sample code is below. The microphone works correctly.

    // instance members
    private var engine: AVAudioEngine!
    private var node: AVAudioInputNode!

    // start method
    self.engine = .init()
    self.node = engine.inputNode
    engine.prepare()
    try! engine.start()

    // volume getter
    print("\(self.node.volume)")

2. What is the best practice to get the audio volume from the microphone? Requirements:
Without AVAudioRecorder. We use it for streaming audio.
It should withstand high-frequency access.

Testing info:
device: iPhone XR
OS version: iOS 18

Best regards.
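One common approach (my suggestion, not from the thread) is to install a tap on the input node and compute the level of each buffer yourself; note that InputNode.volume is a mixing gain, not a meter. The per-buffer level math is pure Swift and sketched below; wiring it up to inputNode.installTap(onBus:bufferSize:format:) and reading the buffer's floatChannelData is omitted:

```swift
import Foundation

// Root-mean-square level of a chunk of Float32 samples. This is the
// per-buffer math you would run inside an AVAudioEngine input tap.
func rmsLevel(_ samples: [Float]) -> Float {
    guard !samples.isEmpty else { return 0 }
    let sumOfSquares = samples.reduce(Float(0)) { $0 + $1 * $1 }
    return (sumOfSquares / Float(samples.count)).squareRoot()
}

// Convert an RMS value in 0...1 to decibels relative to full scale.
func decibelsFullScale(_ rms: Float) -> Float {
    // Clamp to avoid log10(0); -160 dBFS is effectively silence.
    rms > 0 ? max(20 * log10(rms), -160) : -160
}
```

A buffer of constant 0.5 samples gives an RMS of 0.5, about -6 dBFS; silence gives -160.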
2
0
301
Oct ā€™24
SFCustomLanguageModelData.CustomPronunciation and X-SAMPA string conversion
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation? I am following the example from WWDC23: https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/ For this kind of custom pronunciation, we need the X-SAMPA string of the specific word. There are tools available on the web to do this:
Word to IPA: https://openl.io/
IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html
But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo. For example, "Winawer" is converted to "w I n aU @r" in the demo, while the online tools give "/wI"nA:w@r/".
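X-SAMPA is essentially a one-to-one ASCII re-encoding of IPA, so a small greedy substitution table can produce space-separated phoneme strings in the demo's style. A minimal sketch, with only a handful of mappings (the table is a partial, illustrative subset of X-SAMPA, and the tokenizer assumes the input contains only listed symbols — stress marks and length marks, like those in the online tools' output, would need stripping first):

```swift
// Greedy longest-match conversion from a plain IPA phoneme string to
// space-separated X-SAMPA symbols. Partial table, for illustration only.
let ipaToXSampa: [String: String] = [
    "aʊ": "aU",  // diphthong: must match before single characters
    "ɪ": "I",
    "ə": "@",
    "w": "w",
    "n": "n",
    "r": "r",
]

func xSampa(fromIPA ipa: String) -> String? {
    var symbols: [String] = []
    var rest = Substring(ipa)
    // Sort keys longest-first so "aʊ" wins over any one-character key.
    let keys = ipaToXSampa.keys.sorted { $0.count > $1.count }
    outer: while !rest.isEmpty {
        for key in keys where rest.hasPrefix(key) {
            symbols.append(ipaToXSampa[key]!)
            rest = rest.dropFirst(key.count)
            continue outer
        }
        return nil  // unknown IPA symbol
    }
    return symbols.joined(separator: " ")
}
```

With this table, `xSampa(fromIPA: "wɪnaʊər")` yields "w I n aU @ r", matching the demo's "w I n aU @r" up to how the final rhotic schwa is grouped.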
0
1
186
Oct ā€™24
MusicKit currentEntry.item is nil but currentEntry exists.
I'm trying to get the item that's assigned to the currentEntry when playing any song, which is currently coming up nil while the song is playing. Note that currentEntry returns:

    MusicPlayer.Queue.Entry(id: "evn7UntpH::ncK1NN3HS", title: "SONG TITLE")

I'm a bit stumped on the API usage. If the song is playing, how could the queue item be nil?

    if queueObserver == nil {
        queueObserver = ApplicationMusicPlayer.shared.queue.objectWillChange
            .sink { [weak self] in
                self?.handleNowPlayingChange()
            }
    }

    private func handleNowPlayingChange() {
        if let currentItem = ApplicationMusicPlayer.shared.queue.currentEntry {
            if let song = currentItem.item {
                self.currentlyPlayingID = song.id
                self.objectWillChange.send()
                print("Song ID: \(song.id)")
            } else {
                print("NO ITEM: \(currentItem)")
            }
        } else {
            print("No Entries: \(ApplicationMusicPlayer.shared.queue.currentEntry)")
        }
    }
0
0
170
Oct ā€™24
Is there a recommended approach to safeguarding against audio recording losses via app crash?
AVAudioRecorder leaves a completely useless chunk of file if a crash happens while recording, and I need to be able to recover. I'm thinking of streaming the recording to disk. I know that is possible with AVAudioEngine, but I also know that API is a headache that will lead to unexpected crashes unless you're lucky or the person who built it. Does Apple have a recommended strategy for failsafe audio recordings? I'm thinking of chunking recordings using many instances of AVAudioRecorder and then stitching those chunks together.
1
0
186
Oct ā€™24
API to use for high-level audio playback to a specific audio device?
I'm working on a little light and sound controller in Swift, driving DMX lights and audio. For the audio portion, I need to play a bunch of looping sounds (long-duration MP3s), and occasionally play sound effects (short-duration sounds, varying formats). I want all of this mixed into selected channels on specific devices. That is, I might have one audio stream going to the left channel, and a completely different one going to the right channel. What's the right API to do this from Swift? Core Audio? AVPlayer stuff?
0
0
155
Oct ā€™24
arm64 Logic Leaking Plugins (Not Calling AP_Close)
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects. I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep). Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
1
1
174
Oct ā€™24
Ford Puma Sync 3 problems with iOS 18
Good afternoon. Since I've installed iOS 18 on my iPhone 15 Pro, I have problems using Apple CarPlay with my Ford Puma with Sync 3. More in detail: problems with audio commands, selecting an audio track, Bluetooth, etc. Are you aware of this? Thanks a lot. Regards, Alberto
0
0
229
Oct ā€™24