Thanks for this answer. But I'm wondering how to achieve low-latency playback when I have to wait for up to 4096 samples before I can process them further inside my driver. Applications like Logic Pro support buffer sizes down to 32 samples, which corresponds to 0.66 ms at a 48 kHz sample rate. That's fine if you want to get latencies below 10 ms. But when another app also starts playing and that second app uses buffers of 4096 samples, this ends up at a latency of 85 ms - which is quite a lot; thinking about audio/video synchronization, even at 25 frames per second this means a delay of more than 2 frames.
Is there a way to force an application to stay below a maximum buffer size? Or how can I know that the samples I've just received in IOUserAudioIOOperationWriteEnd are the final samples for the given time range and will not be overwritten a few moments later by the mixed audio data due to playback inside a second app? Always waiting for at least 4096 samples would, in my eyes, defeat all efforts towards a low-latency driver.
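Just to make the numbers above reproducible, here is the simple arithmetic as a Swift snippet (the buffer sizes and sample rate are just the examples from above):
// Buffer latency in milliseconds = samples / sample rate * 1000.
func latencyMs(bufferSize: Int, sampleRate: Double) -> Double {
    Double(bufferSize) / sampleRate * 1000.0
}

print(latencyMs(bufferSize: 32,   sampleRate: 48_000))   // ≈ 0.67 ms
print(latencyMs(bufferSize: 4096, sampleRate: 48_000))   // ≈ 85.33 ms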
Although I know you'll again blame my naming conventions, I decided to add a small example here: it does nothing except the peakmeter updates and still causes a CPU load of 25% to 30% for 128 channels. This CPU load barely changes if you shrink the window to only show two channels, while starting the example with only two channels (set s_NbPeakmeter = 2) shows a much lower CPU load. So invisible views also seem to cause CPU load?
import SwiftUI

// Generates random peak meter values for all channels on a 50 ms timer.
class CPeakmeterManager: NSObject, ObservableObject
{
    static public let s_NbPeakmeter: Int = 128

    @Published var m_VecPeakmeterValues: [CGFloat] = []
    var m_Timer: Timer? = nil

    override init()
    {
        super.init()
        m_VecPeakmeterValues = [CGFloat](repeating: 0.0, count: CPeakmeterManager.s_NbPeakmeter)
        m_Timer = Timer.scheduledTimer(timeInterval: 0.05, target: self, selector: #selector(OnTimer), userInfo: nil, repeats: true)
    }

    @objc func OnTimer()
    {
        for ChannelIndex in 0..<m_VecPeakmeterValues.count
        {
            m_VecPeakmeterValues[ChannelIndex] = CGFloat.random(in: 0...1)
        }
    }
}

// One channel: a green/yellow/red scale; a trailing-aligned gray overlay
// masks the part of the scale above the current value.
struct PeakmeterView: View
{
    @Binding var b_PeakmeterValue: CGFloat

    var body: some View
    {
        GeometryReader { geometry in
            ZStack(alignment: .trailing) {
                HStack(spacing: 0) {
                    Rectangle()
                        .frame(width: 0.4 * geometry.size.width, height: 10)
                        .foregroundColor(.green)
                    Rectangle()
                        .frame(width: 0.3 * geometry.size.width, height: 10)
                        .foregroundColor(.yellow)
                    Rectangle()
                        .frame(width: 0.3 * geometry.size.width, height: 10)
                        .foregroundColor(.red)
                }
                Rectangle()
                    .frame(width: min((1.0 - b_PeakmeterValue) * geometry.size.width, geometry.size.width), height: 10)
                    .opacity(0.9)
                    .foregroundColor(.gray)
            }
        }
    }
}

@main
struct PeakmeterTestApp: App {
    @StateObject var m_PeakmeterManager = CPeakmeterManager()

    var body: some Scene {
        WindowGroup {
            ContentView().environmentObject(self.m_PeakmeterManager)
        }
    }
}

struct ContentView: View {
    @EnvironmentObject var m_PeakmeterManager: CPeakmeterManager

    var body: some View {
        ScrollViewReader { proxy in
            ScrollView {
                ForEach(0 ..< CPeakmeterManager.s_NbPeakmeter, id: \.self) { ChannelIndex in
                    PeakmeterView(b_PeakmeterValue: $m_PeakmeterManager.m_VecPeakmeterValues[ChannelIndex])
                        .frame(width: 150)
                }
            }
        }
        .padding([.top, .bottom], 12)
    }
}
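For reference, a variant that might reduce the load (an untested sketch on my side): making the stack lazy, so SwiftUI only builds and updates the rows near the visible area instead of all 128 PeakmeterViews on every timer tick.
// Untested sketch: LazyVStack creates rows on demand, whereas a plain
// ScrollView + ForEach builds every PeakmeterView eagerly, even the
// off-screen ones.
ScrollView {
    LazyVStack {
        ForEach(0 ..< CPeakmeterManager.s_NbPeakmeter, id: \.self) { ChannelIndex in
            PeakmeterView(b_PeakmeterValue: $m_PeakmeterManager.m_VecPeakmeterValues[ChannelIndex])
                .frame(width: 150)
        }
    }
}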
Hi,
thanks for this explanation. It would be nice if Xcode marked this as an error if it is not supported. I already noticed that there's a difference between using a named variable and an _ for this Lock variable: using the underscore shows the same behaviour in debug mode as the optimized version does and just lets it go out of scope immediately. I'll switch to the withXxx version.
About your notes:
I know that the default naming is different; on the other hand, if you're working in different languages (C, C++, Java, Swift, TypeScript, etc.) and you want to follow each naming convention, it may also get confusing. So sometimes, when I know my code will not become public, I reuse the naming convention I'm used to and like best. For me, in the end, it's just names.
I think recursive locks are quite useful sometimes. Of course you shouldn't use them just because you don't know if you've already used this lock somewhere else. But I'd like to hear your explanation :-)
Hi, I did some further investigation and I think the problem is due to the fact that I'm doing the locking within a class that releases the lock when it goes out of scope. So my class to do this looks like this:
class CRecursiveMutex
{
    let m_RecursiveLock: NSRecursiveLock

    init( _ RecursiveLock: NSRecursiveLock )
    {
        m_RecursiveLock = RecursiveLock
        m_RecursiveLock.lock()
    }

    deinit
    {
        m_RecursiveLock.unlock()
    }

    // func DoNothing() {
    // }
}
Now I have another class using this:
class CMyClass {
    var m_CSMyLock: NSRecursiveLock = NSRecursiveLock()

    public func DoSomething()
    {
        let Lock: CRecursiveMutex = CRecursiveMutex( m_CSMyLock )
        // ... do something
        // Lock.DoNothing()
    }
}
So when you call DoSomething() from different threads, the locking will not work in optimized builds, while it works correctly when optimization is disabled. It also works if you uncomment the function DoNothing() and the call to it, which I only added for testing purposes. So to me it looks like the optimizer removes my local Lock variable early, probably because it thinks it is unused. I like this kind of scope-based locking because you can be sure the lock is released wherever you leave the function, and I also use it in C++ classes, where it survives compiler optimizations.
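For anyone reading along, this is the kind of pattern I'm switching to (a minimal sketch; CMyClassSafe is a hypothetical name). NSRecursiveLock conforms to NSLocking, whose withLock(_:) scopes the critical section to a closure, so the unlock cannot be optimized away:
import Foundation

class CMyClassSafe {
    let m_CSMyLock = NSRecursiveLock()

    public func DoSomething() {
        // withLock (NSLocking, available on recent SDKs) holds the lock
        // exactly for the duration of the closure.
        m_CSMyLock.withLock {
            // ... do something
        }
    }

    public func DoSomethingElse() {
        // Equivalent pattern without withLock: defer guarantees the
        // unlock on every path out of the function.
        m_CSMyLock.lock()
        defer { m_CSMyLock.unlock() }
        // ... do something
    }
}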
P.S.: This happens when I have my audio device selected as the default playback device.
In addition, I just noticed that there seems to be a little step back in the audio from the playback application at the moment my device is selected as the input device in System Settings. So a small part of the audio (roughly 0.3 s) from the playing application is transferred twice, while the same amount of audio is skipped when I switch back to a different input device. Looking at the in_sample_time of the audio callback with IO operation IOUserAudioIOOperationWriteEnd, it seems to increase steadily and consistently with the previous callbacks for my playback audio data. So I cannot see any jump backward or forward in the in_sample_time.
Hi Quinn,
when posting my finding I already expected it would result in an answer like yours. But as written in my first post, I actually checked out SCNetworkInterfaceGetLocalizedDisplayName() before digging deeper, and this function doesn't return the name I've given the NIC.
Best regards,
Johannes
OK, I think I found it - you can read it from:
/Library/Preferences/SystemConfiguration/preferences.plist
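In case someone else needs a starting point, a minimal sketch for reading the user-defined names out of that file. Note the NetworkServices / UserDefinedName key path is just what I saw when inspecting the plist, not a documented API, so treat it as an assumption:
import Foundation

// Sketch: dump the user-defined network service names from the system
// configuration. The key names below are assumptions from inspecting
// the file, not a documented format.
let url = URL(fileURLWithPath: "/Library/Preferences/SystemConfiguration/preferences.plist")
if let data = try? Data(contentsOf: url),
   let plist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil),
   let root = plist as? [String: Any],
   let services = root["NetworkServices"] as? [String: [String: Any]] {
    for (serviceID, service) in services {
        if let name = service["UserDefinedName"] as? String {
            print("\(serviceID): \(name)")
        }
    }
}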
OK, I think I figured out some of the tricks you have to do:
For abi::__cxa_demangle() you really have to extract only the mangled function name out of the complete line you get from backtrace_symbols(). Looking at my example, this means I have to call abi::__cxa_demangle( "_ZN14CZMQConnection21CFnCalledWorkerThread6ThreadEv", ... )
For the file names and line numbers you need the atos command and the DWARF file corresponding to your binary. The tricky thing here is to find the load address of your image, which is needed to call atos. You can get the load address by calling _dyld_get_image_header(0), e.g. in your C++ signal handler where you also log the backtrace and backtrace_symbols output.
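To illustrate, here is a sketch in Swift (the same C calls work from the C++ handler; the helper name and the paths in the atos comment are placeholders):
import Foundation
import MachO

// Sketch: log the load address plus the call stack so the output can
// later be fed into atos together with the dSYM.
func logSymbolicationInfo() {
    // Index 0 is the main executable; atos needs this as its -l argument.
    let loadAddress = _dyld_get_image_header(0)
    print("load address: \(String(describing: loadAddress))")

    // The same information backtrace_symbols() prints in the C++ handler.
    for line in Thread.callStackSymbols {
        print(line)
    }

    // Offline symbolication, e.g. (paths and addresses are placeholders):
    // atos -o MyApp.dSYM/Contents/Resources/DWARF/MyApp -l <loadAddress> <frameAddress>
}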
Perhaps this helps anybody else searching for similar things,
Johannes
Thank you very much. This was really helpful; I've already done all the steps and uploaded my app with the dext for notarization. So currently I have to learn what "a little while" actually means.
Will I have to go through all these steps for each version I'd like to distribute? Normally we do our builds in a fully automated build system like Jenkins and get a final installer out of this process without human interference. Is this also possible for macOS applications and dexts developed in Xcode?
Best regards,
Johannes
Meanwhile I found an approach that at least compiles and doesn't crash. I'm using IOUserAudioCustomProperty to add an icon property, but the icon still doesn't appear. In the console I'm seeing this output, which makes me think I'm not completely wrong with this approach.
default 17:47:07.554960+0200 Audio MIDI Setup HALC_ProxyObject.cpp:881 HALC_ProxyObject::GetPropertyData ('icon', 'glob', 0, DCFURL): got an error from the server, 0x77686174
error 17:47:07.554982+0200 Audio MIDI Setup HALC_ShellObject.mm:449 HALC_ShellObject::GetPropertyData: call to the proxy failed, Error: 2003329396 (what)
error 17:47:07.554996+0200 Audio MIDI Setup HALPlugIn.cpp:295 HALPlugIn::ObjectGetPropertyData: got an error from the plug-in routine, Error: 2003329396 (what)
error 17:47:07.555026+0200 Audio MIDI Setup CAHALAudioObject::GetPropertyData: got an error getting the property data, Error: 2003329396 (what)
Do you have any idea how I can set the correct DCFURL for my icon, which is inside my dext's resources?
Thanks - but that doesn't seem to support custom icons like a company logo etc.; it only allows selecting one of the predefined type values, which in turn presumably results in a system-defined icon. Using kAudioDevicePropertyIcon in Audio Server Plug-ins supports custom icons located inside my resource bundle. How could this be achieved in a dext based on AudioDriverKit?
OK, the Feedback number is FB13209186
Just for completeness: ... it seems I had a bug in my code, and now my settings persist across a reboot when I use WriteToStorage and CopyFromStorage :-)
Thanks for this answer - I'm still searching for answers to:
Why is the entitlement com.apple.developer.driverkit.transport.usb needed for the "Audio Server Plugin with Driver Extension" sample?
Would you be granted all the mentioned entitlements (com.apple.developer.driverkit, com.apple.developer.driverkit.transport.usb and com.apple.developer.driverkit.userclient-access) for a virtual audio driver if it were an "Audio Server Plugin"?
How would you write a virtual audio driver for iOS?
Is Apple thinking about opening up AudioDriverKit for virtual drivers too, or are there technical limitations making this impossible (though it seems to work)?
Best,
Johannes
Hi,
I also had quite a hard time getting this sample to run. Perhaps this thread helps:
https://developer.apple.com/forums/thread/726576
In addition I had to learn that AudioDriverKit currently doesn't support virtual audio drivers. Well ... it seems to work, but you will not be granted the required entitlements. I'm still trying to figure out whether this will be possible in the (near) future, as I think AudioDriverKit is much better than the Audio Server Plugin API, and currently my virtual driver seems to run fine here on my development machine with SIP off etc.
Best, Johannes