
Help with CoreAudio Input level monitoring
I have spent the past two weeks diving into Core Audio and have seemingly run into a wall. For my initial test, I am simply trying to create an AUGraph for monitoring input levels from a user-chosen audio input device (multi-channel in my case). I was not able to find any way to monitor input levels on a single AUHAL input device, so I decided to build a simple AUGraph for input level monitoring. The graph looks like:

[AUHAL Input Device] -> [B1] -> [MatrixMixerAU] -> [B2] -> [AUHAL Output Device]

B1 is an audio stream consisting of all the input channels available from the input device. The MatrixMixer has metering mode turned on, and level meters are read from each submix of the MatrixMixer using kMatrixMixerParam_PostAveragePower. B2 is a stereo (2-channel) stream from the MatrixMixerAU to the default audio device; however, since I don't actually want to pass audio through to an output, I have the volume muted on the MatrixMixerAU output channel.

I tried using a GenericOutputAU instead of the default system output, but the GenericOutputAU never seems to pull data from the ring buffer (the graph's renderProc is never called when a GenericOutputAU is used in place of the AUHAL default output device).

I have not been able to get this simple graph to work. I see no errors when creating and initializing the graph, and I have verified that the inputProc is being called to fill the ring buffer, but when I read the levels from the MatrixMixer, they are always -758 (silence). I have posted my demo project on GitHub in hopes of finding someone with Core Audio expertise to help with this problem. I am willing to move this to DTS Code Level Support if there is someone in DTS with Core Audio experience.

Notes:

- My app is not sandboxed in this test.
- I have tried with and without hardened runtime, with Audio Input checked.
- The multi-channel audio device I am using for testing is the Audient iD14 USB-C audio interface. It supports 12 input channels and 6 output channels. All input channels have been tested and work in Ableton Live and Logic Pro.
- Of particular interest: I can't even get Apple's CAPlayThrough demo to work on my system. I see no errors when creating the graph, but all I hear is silence.
- The MatrixMixerTest from the Apple documentation archive does work, but note that that demo does not use audio input devices; it reads audio into the graph from an audio file.

Link to GitHub project page.

Diagram of AUGraph for initial test (code that is on GitHub).

Once I get audio input level metering to work, my plan is to implement something like Phase 2 below, with the purpose of capturing a stereo input stream, mixing it to mono, and sending it to lowpass, bandpass, and highpass AUs, again using a MatrixMixer to monitor the levels out of each filter. I have no plans for passthrough audio (sending actual audio out to devices); I am simply monitoring input levels.

Diagram of ultimate scope: rendering audio levels of a stereo-to-mono stream after passing through various filters.
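For reference, a minimal sketch (not the poster's code) of how an AUHAL unit is typically configured for capture-only use. The function name is illustrative; the element numbering (1 = input, 0 = output) follows the standard AUHAL convention. If input is never enabled on element 1, the input callback never fires and everything downstream meters silence:

```swift
import AudioToolbox
import CoreAudio

// A sketch under stated assumptions: `inputUnit` is an instantiated
// kAudioUnitSubType_HALOutput (AUHAL) unit and `deviceID` is the
// user-chosen input device.
func configureInputAUHAL(_ inputUnit: AudioUnit, deviceID: AudioDeviceID) -> OSStatus {
    var enable: UInt32 = 1
    var disable: UInt32 = 0
    var device = deviceID
    let u32Size = UInt32(MemoryLayout<UInt32>.size)

    // Enable capture on element 1 (the input side of the HAL unit).
    var status = AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input, 1, &enable, u32Size)
    guard status == noErr else { return status }

    // Disable playback on element 0 so the unit is capture-only.
    status = AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output, 0, &disable, u32Size)
    guard status == noErr else { return status }

    // Bind the chosen hardware device to the unit.
    return AudioUnitSetProperty(inputUnit, kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global, 0, &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}
```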
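And a sketch of the metering read itself, assuming `mixerUnit` was obtained from the graph via AUGraphNodeInfo. kMatrixMixerParam_PostAveragePower only reports levels once metering is enabled via kAudioUnitProperty_MeteringMode; the exact scope/element addressing depends on how the matrix is wired, so the per-input-channel read below is one plausible arrangement, not the only one:

```swift
import AudioToolbox

// A sketch, not the poster's code: enable metering on a MatrixMixer and
// poll per-input-channel post-mix average power on a running graph.
func readMixerLevels(mixerUnit: AudioUnit, inputChannels: UInt32) {
    // Without this property set, the power parameters stay at the
    // silence floor no matter what audio flows through the mixer.
    var meteringOn: UInt32 = 1
    AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_MeteringMode,
                         kAudioUnitScope_Input, 0, &meteringOn,
                         UInt32(MemoryLayout<UInt32>.size))

    for channel in 0..<inputChannels {
        var levelDB: AudioUnitParameterValue = 0
        // One plausible addressing: input scope, element = channel index.
        let status = AudioUnitGetParameter(mixerUnit,
                                           kMatrixMixerParam_PostAveragePower,
                                           kAudioUnitScope_Input,
                                           channel,
                                           &levelDB)
        if status == noErr {
            print("channel \(channel): \(levelDB) dB")
        }
    }
}
```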
Replies: 0 · Boosts: 0 · Views: 245 · Activity: Nov ’24
How to use CFMessagePort in a Sandbox App when App Group naming convention is not possible?
I am working on an app and am in the process of adding Syphon support. Syphon uses CFMessagePort for IPC and for passing framebuffer data (MTLTexture) between apps, and is widely used in the professional video app and video production space.

What I have noticed is that when the app is built as a sandboxed app, I see the following error message in the log during Syphon initialization:

*** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x8703, name = 'info.v002.Syphon.D2499DBD-93AE-4CEA-B21F-FF356DCC069D'
See /usr/include/servers/bootstrap_defs.h for the error codes.

Syphon uses the "info.v002.Syphon.UUID" naming convention to identify Syphon IPC servers, so I don't think I can use the App Groups naming convention for sandbox support.

I have a very simple example app on GitHub that publishes SpriteKit frames as a Syphon server. To see the issue, simply enable App Sandbox for the build and run the app. You should see the error message in the log, and no data appears in any Syphon client (I use Syphon Recorder for testing, available at syphon.github.io).

I am looking for other options to enable CFMessagePorts in a sandboxed app.
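For anyone hitting the same wall, a minimal sketch of the pattern App Sandbox does permit, using a hypothetical App Group "ABCDE12345.com.example.shared": in a sandboxed app, bootstrap registration only succeeds when the local port name is prefixed with a group identifier the app is entitled to, which is exactly what Syphon's fixed "info.v002.Syphon.UUID" scheme cannot provide:

```swift
import CoreFoundation
import Foundation

// A sketch with a hypothetical App Group prefix; any other name triggers
// the bootstrap_register() 'Permission denied' failure seen above.
let callback: CFMessagePortCallBack = { _, _, _, _ in
    // Handle an incoming message here; returning nil sends no reply.
    return nil
}

let portName = "ABCDE12345.com.example.shared.frames" as CFString
var context = CFMessagePortContext(version: 0, info: nil,
                                   retain: nil, release: nil,
                                   copyDescription: nil)

if let port = CFMessagePortCreateLocal(kCFAllocatorDefault, portName,
                                       callback, &context, nil) {
    // Service the port on the current run loop.
    let source = CFMessagePortCreateRunLoopSource(kCFAllocatorDefault, port, 0)
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .defaultMode)
}
```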
Replies: 6 · Boosts: 0 · Views: 948 · Activity: May ’24
Can't disable App Sandbox
My Xcode workspace contains build settings for a macOS, iOS, and tvOS application. My sandboxed macOS app builds just fine and works great, and is on the App Store.

I am in the process of creating a new build/branch of this app that is not sandboxed so that I can add IPC (Syphon support), as I don't think I can use App Groups to enable CFMessagePort support (which Syphon requires) because Syphon (a third-party framework) uses its own naming convention for its ports. Anyway, sandbox support for a Syphon app is a topic for another day (it's actually quite disappointing that I can't release a Syphon version on the App Store).

The trouble I am having is that even after deleting the App Sandbox entitlement from my project, my app still seems to be running in the App Sandbox, and I can't figure out how to remove the App Sandbox entitlement completely. Even after deleting the entitlement in the project's "Signing & Capabilities" tab (and checking the entitlements file manually to make doubly sure it is gone), I am still seeing the following error message:

*** CFMessagePort: bootstrap_register(): failed 1100 (0x44c) 'Permission denied', port = 0x8703, name = 'info.v002.Syphon.332143F7-0916-428A-A88A-59B752F95304'
See /usr/include/servers/bootstrap_defs.h for the error codes.

The app is also saving its Application Support data in ~/Library/Containers, not in ~/Library/Application Support.

What step am I missing?
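One way to confirm what the built product is actually doing, independent of the project settings, is to dump the signed binary's entitlements with `codesign -d --entitlements - /path/to/My.app` and to check at runtime whether the process is containerized. A rough sketch of the runtime check follows; the Containers-path heuristic and the APP_SANDBOX_CONTAINER_ID environment variable are conventions observed on macOS rather than documented API:

```swift
import Foundation

// A rough diagnostic, not documented API: sandboxed processes get a home
// directory inside ~/Library/Containers and, by convention, an
// APP_SANDBOX_CONTAINER_ID environment variable. If either shows up in a
// build that supposedly has no sandbox entitlement, the binary is still
// being signed with com.apple.security.app-sandbox from somewhere.
let home = NSHomeDirectory()
let containerized = home.contains("/Library/Containers/")
    || ProcessInfo.processInfo.environment["APP_SANDBOX_CONTAINER_ID"] != nil
print(containerized ? "Running sandboxed, home = \(home)"
                    : "Not sandboxed, home = \(home)")
```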
Replies: 7 · Boosts: 0 · Views: 1.3k · Activity: May ’24