Posts

Post not yet marked as solved · 0 Replies · 871 Views
I am trying to mix the audio from 2 different hardware audio devices together in real time and record the result. This is on macOS. Does anybody have any idea how to do this?

Things I have tried, and why they didn't work:

1. Adding 2 audio AVCaptureDevices to an AVCaptureMovieFileOutput or AVAssetWriter. This results in a file that has 2 audio tracks, which doesn't work for me for various reasons. Sure, I can mix them together afterwards with an AVAssetExportSession, but it needs to be real time.
2. Programmatically creating an aggregate device and recording it as an AVCaptureDevice. This "sort of" works, but it always results in a recording with strange channel issues. For example, if I combine a 1-channel mic and a 2-channel device, I get a recording with 3-channel audio (L R C). If I make an aggregate out of 2 stereo devices, I get a recording with quadraphonic sound (L R Ls Rs), which won't even play back on some players. If I always force it to stereo, all stereo tracks get turned to mono for some reason.
3. Programmatically creating an aggregate device and trying to use it in an AVAudioEngine. I've had multiple problems with this, but the main one is that when the aggregate device is the input node, it only reports the format of its main device, not of its sub-devices. And I can't force it to be 3 or 4 channels without errors.
4. Using an AVCaptureSession to output the sample buffers of both devices, converting those samples, and scheduling them on their own AVAudioPlayerNodes, then mixing those nodes in an AVAudioEngine mixer (a sketch of this follows at the end of the post). This actually works, but the resulting audio lags so far behind real time that it is unusable. If I record a webcam video along with the audio, the lip sync is off by about half a second.

I really need help with this. If anybody has a way to do this, let me know. Some caveats that have also been tripping me up:

- The hardware devices that need to be recorded might not be the default input device for the system. The MacBook Pro built-in mic might be the default device, but I need to record 2 other devices and exclude the built-in mic.
- The devices usually don't have the same audio format. I might be mixing LPCM mono int16 interleaved with LPCM stereo float32 non-interleaved.
- It absolutely has to be real time and 1 single audio track.

It shouldn't be this hard, right?
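Here is a minimal sketch of the player-node approach from attempt 4, in case it clarifies the setup. The mix format, buffer size, and the pcmBuffer(from:) conversion helper are illustrative assumptions, not working code from my project:

```swift
import AVFoundation

// Two capture streams mixed to one track through AVAudioEngine.
let engine = AVAudioEngine()
let playerA = AVAudioPlayerNode()
let playerB = AVAudioPlayerNode()
let mixFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!

engine.attach(playerA)
engine.attach(playerB)
engine.connect(playerA, to: engine.mainMixerNode, format: mixFormat)
engine.connect(playerB, to: engine.mainMixerNode, format: mixFormat)

// Tap the mixer so the mixed, single-track audio can go to a file writer.
engine.mainMixerNode.installTap(onBus: 0, bufferSize: 1024, format: mixFormat) { buffer, when in
    // append `buffer` to an AVAssetWriterInput here (not shown)
}

do {
    try engine.start()
    playerA.play()
    playerB.play()
} catch {
    print("engine failed to start: \(error)")
}

// In each AVCaptureAudioDataOutput delegate callback, convert the
// CMSampleBuffer to an AVAudioPCMBuffer (e.g. via AVAudioConverter) and:
// if let pcm = pcmBuffer(from: sampleBuffer) {   // hypothetical helper
//     playerA.scheduleBuffer(pcm, completionHandler: nil)
// }
```

This is exactly the path where I see the half-second lag, so treat it as a description of the problem, not a solution.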
Posted by nitro805.
Post not yet marked as solved · 0 Replies · 812 Views
When I create an aggregate device with 2 hardware inputs and 1 output and try to use it with AVAudioEngine, the engine fails to start with the error:

IsFormatSampleRateAndChannelCountValid(outputHWFormat)

If I use an aggregate device with only 1 input/output, it works.

The problem seems to stem from how aggregate devices handle channels. If I add a 2-channel device and a 1-channel device to the aggregate as inputs, I get an aggregate device with 3 channels. However, if I ask for the format of the engine's input node, it only reports the format of the first device in the aggregate. So instead of saying the device has 3 channels, it says it has 1 or 2, depending on which device is the main device (the sketch below shows how I compare this against the HAL's own channel count). I've tried creating my own AVAudioFormat using channel layouts such as kAudioChannelLayoutTag_AAC_3_0, but this only works in very specific cases and is very unreliable.

Can anybody help with this? It is driving me crazy. The main problem I am trying to solve is to combine/mix 2 hardware (or virtual hardware via the HAL) audio devices in real time for recording. An aggregate device alone doesn't work (see https://developer.apple.com/forums/thread/703258).

Thanks for any help, you would save my day/week.
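For reference, this is a sketch of how the true input channel count can be read straight from the HAL, to compare against what inputNode.inputFormat(forBus: 0) reports. It assumes aggregateID is the AudioDeviceID returned by AudioHardwareCreateAggregateDevice:

```swift
import CoreAudio

// Sums mNumberChannels across all input streams of a device.
// kAudioObjectPropertyElementMain needs macOS 12; use
// kAudioObjectPropertyElementMaster on earlier systems.
func inputChannelCount(of aggregateID: AudioDeviceID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)

    var size: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(aggregateID, &address, 0, nil, &size) == noErr,
          size > 0 else { return 0 }

    let raw = UnsafeMutableRawPointer.allocate(
        byteCount: Int(size),
        alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { raw.deallocate() }

    guard AudioObjectGetPropertyData(aggregateID, &address, 0, nil, &size, raw) == noErr
    else { return 0 }

    let list = raw.bindMemory(to: AudioBufferList.self, capacity: 1)
    return UnsafeMutableAudioBufferListPointer(list).reduce(0) { $0 + Int($1.mNumberChannels) }
}
```

For the 2-channel + 1-channel aggregate described above, this should return 3 even while the input node's format claims 1 or 2.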
Posted by nitro805.
Post not yet marked as solved · 0 Replies · 1.1k Views
I need to record 2 stereo AVCaptureDevices into 1 audio track. I can successfully create the aggregate device using AudioHardwareCreateAggregateDevice, but when I record that device, the resulting audio track has quadraphonic audio. The problem with this is that some players, such as VLC, won't play all the channels.

So I tried forcing the file writer to use 2 channels with a stereo layout (see the code below). This doesn't quite work, because all 4 of the channels get mapped to both of the stereo channels. That is, they are not actually stereo: the left/right channels from the aggregate's devices play in both channels of the resulting file.

The code I used to make the track stereo:

```swift
var audioOutputSettings = movieFileOutput.outputSettings(for: audioConnection)
audioOutputSettings[AVNumberOfChannelsKey] = 2

var layout = AudioChannelLayout()
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
audioOutputSettings[AVChannelLayoutKey] = NSData(bytes: &layout,
                                                 length: MemoryLayout.size(ofValue: layout))

movieFileOutput.setOutputSettings(audioOutputSettings, for: audioConnection)
```

Can anyone help me with this, so that both left channels from the 2 devices play in the left channel, and the same with both right channels?
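To make the mapping I'm after concrete, here is a minimal sketch of an explicit 4-to-2 downmix on deinterleaved float PCM buffers. It assumes the aggregate's channels arrive ordered (L1, R1, L2, R2), and it illustrates the goal rather than anything AVCaptureMovieFileOutput does for me:

```swift
import AVFoundation

// Sums both left channels into the left output and both rights into the
// right, scaling by 0.5 to avoid clipping. `quad` and `stereo` must share
// a sample rate, and `stereo` needs capacity for quad.frameLength frames.
func downmixQuadToStereo(_ quad: AVAudioPCMBuffer, into stereo: AVAudioPCMBuffer) {
    guard let src = quad.floatChannelData,
          let dst = stereo.floatChannelData,
          quad.format.channelCount == 4,
          stereo.format.channelCount == 2 else { return }

    for i in 0..<Int(quad.frameLength) {
        dst[0][i] = 0.5 * (src[0][i] + src[2][i])   // L1 + L2 -> L
        dst[1][i] = 0.5 * (src[1][i] + src[3][i])   // R1 + R2 -> R
    }
    stereo.frameLength = quad.frameLength
}
```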
Posted by nitro805.
Post not yet marked as solved · 0 Replies · 784 Views
On Xcode Cloud, my macOS archive fails with:

NSLocalizedDescription=exportOptionsPlist error for key 'method': expected one of {}, but found development

It succeeds locally and on my other build systems; it only fails on Xcode Cloud. Any thoughts?
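For context, this is the shape of the export options plist entry the error complains about. The file below is a generic illustration, not my actual plist:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- 'development' is what my export passes; xcodebuild also documents
         values such as 'app-store', 'developer-id', and 'mac-application',
         yet Xcode Cloud reports an empty set of accepted values. -->
    <key>method</key>
    <string>development</string>
</dict>
</plist>
```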
Posted by nitro805.
Post not yet marked as solved · 4 Replies · 3.4k Views
Hi. I am making a test app almost exactly like the Tic Tac Toe example from WWDC19 (https://developer.apple.com/videos/play/wwdc2019/713). The biggest differences are that instead of connecting 2 iOS apps together, my server is macOS 10.15 and my client is iOS 13.

I create the Bonjour listener on my server and start it up. I then use NWBrowser on my iOS device, just like in the sample code, and I create my NWConnection objects just like in the sample code. The problem happens when I call NWConnection.send(...).

On my client I get this in the NWConnection.stateUpdateHandler failed case:

2019-06-14 08:36:10.853632-0400 ClientSample[622:125162] [] tcp_output [C1.6.1:2] flags=[R.] seq=2719089173, ack=1140980224, win=1025 state=CLOSED rcv_nxt=1140980224, snd_una=2719089173
failed with error: POSIXErrorCode: Network is down

On my server, I get this in the same update handler failed case:

2019-06-14 08:33:52.884427-0400 ServerSample[1872:22136] [BoringSSL] boringssl_session_handshake_error_print(112) [C1:1][0x1010052c0] 4313953176:error:100000ae:SSL routines:OPENSSL_internal:NO_CERTIFICATE_SET:/BuildRoot/Library/Caches/com.apple.xbs/Sources/boringssl/boringssl-264/ssl/tls13_server.cc:690:
2019-06-14 08:33:52.884495-0400 ServerSample[1872:22136] [BoringSSL] nw_protocol_boringssl_handshake_negotiate_proceed(684) [C1:1][0x1010052c0] handshake failed at state 0
failed with error: -9858: Optional(handshake failed)

I tried creating a self-signed cert on my server to see if that was the issue, following these instructions: https://devcenter.heroku.com/articles/ssl-certificate-self

I would also like to point out that the server portion of this will be part of a Mac app, so having all of our customers install certs would not be an option.

Not sure where to go from here.

Thanks,
Rob
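P.S. For reference, my understanding is that the linked sample avoids certificates entirely by deriving a TLS pre-shared key from a user-entered passcode. A sketch along those lines (adapted from the sample; treat the details as assumptions) looks like this:

```swift
import Network
import Security
import CryptoKit

// TLS options authenticated by a passcode-derived pre-shared key instead of
// a certificate. "TicTacToe" is the PSK identity label from the sample; both
// peers must use the same passcode and label.
func tlsOptions(passcode: String) -> NWProtocolTLS.Options {
    let options = NWProtocolTLS.Options()

    let key = SymmetricKey(data: Data(passcode.utf8))
    var code = HMAC<SHA256>.authenticationCode(for: Data("TicTacToe".utf8), using: key)
    let psk = withUnsafeBytes(of: &code) { DispatchData(bytes: $0) }
    let identity = Data("TicTacToe".utf8).withUnsafeBytes { DispatchData(bytes: $0) }

    sec_protocol_options_add_pre_shared_key(options.securityProtocolOptions,
                                            psk as __DispatchData,
                                            identity as __DispatchData)
    sec_protocol_options_append_tls_ciphersuite(
        options.securityProtocolOptions,
        tls_ciphersuite_t(rawValue: UInt16(TLS_PSK_WITH_AES_128_GCM_SHA256))!)
    return options
}

// Usage on both ends: NWParameters(tls: tlsOptions(passcode: "1234"), tcp: .init())
```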
Posted by nitro805.