Posts

Post not yet marked as solved. 1 reply, 536 views.
I am using the new AVAudioSourceNode to stream audio samples out to a hardware interface. I can do this successfully; however, when inspecting the stream I can see that the node seems to decide its own format. I assumed it would take the format used in the line: engine.connect(sourceNode, to: mainMixerNode, format: format). However, it is not using this.

I have tried setting the audio session's preferred sample rate to 48000. Everything else attached to my audio engine uses 48000 Hz, but for some reason the source node still seems to use 44100 Hz and 2 channels. Any suggestions?
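One thing worth checking, assuming the goal is to pin the stream to 48 kHz: AVAudioSourceNode also has an initializer that takes a format, which declares what the render block itself supplies; relying on the connect call alone may not be enough. A minimal sketch follows, where the engine setup, the assumed 48 kHz stereo format, and the silence-filling render block are illustrative only, not the poster's actual code.

```swift
import AVFoundation

// Sketch only: names and the 48 kHz stereo format are assumptions.
let engine = AVAudioEngine()

guard let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2) else {
    fatalError("Could not create 48 kHz format")
}

// Passing the format to the initializer tells the node what the render
// block supplies, independently of the connection format below.
let sourceNode = AVAudioSourceNode(format: format) { (isSilence, _, frameCount, audioBufferList) -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for buffer in buffers {
        // Real samples would be written here; this sketch just outputs silence.
        memset(buffer.mData, 0, Int(frameCount) * MemoryLayout<Float>.size)
    }
    isSilence.pointee = true
    return noErr
}

engine.attach(sourceNode)
engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
```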
Posted by samp17.
Post not yet marked as solved. 1 reply, 750 views.
I have an audio interface with multiple channels, of which I want to use just the first. The signal comes into my device at the hardware's preferred sample rate. What I would like to achieve is to downsample the incoming signal to my app's native format and record only the first channel. To do this, I have engine.inputNode feeding an AVAudioMixerNode (called formatMixer) at the input node's output format, and I then connect formatMixer to an AVAudioSinkNode to record the data.

I have two issues here. First, going by the what's-new-in-audio-APIs video from July, with a sink node I cannot specify the format; it takes the output format of formatMixer. So how do I specify this, given that I would normally do it in the connection call?

My second question: while doing the format conversion at the formatMixer stage, is there a way to map the channel output? I do not want to mix down as the documentation suggests, but instead make sure that only the first channel of the mixer's input bus is sent to the mixer's output.
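A minimal sketch of the wiring described above, with two caveats: whether the sink node honours a connection format whose sample rate differs from the hardware rate is exactly the open question here, and since I'm not aware of a channel-map control on AVAudioMixerNode, the sketch avoids the mixdown by keeping the hardware channel count through the mixer and reading only the first channel inside the receiver block. The name formatMixer, the 48 kHz target rate, and the receiver block contents are assumptions.

```swift
import AVFoundation

// Sketch only: formats and names are assumptions for illustration.
let engine = AVAudioEngine()
let formatMixer = AVAudioMixerNode()
engine.attach(formatMixer)

let inputFormat = engine.inputNode.outputFormat(forBus: 0)

// Assumed app-native format: 48 kHz, same channel count as the hardware,
// non-interleaved float, so no mixdown happens in the mixer.
guard let appFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000,
                                    channels: inputFormat.channelCount) else {
    fatalError("Could not create target format")
}

let sinkNode = AVAudioSinkNode { (_, frameCount, audioBufferList) -> OSStatus in
    // The receiver block sees whatever the upstream connection delivers.
    // With a non-interleaved stream each AudioBuffer is one channel, so
    // reading only the first buffer keeps only channel 1.
    let buffers = UnsafeMutableAudioBufferListPointer(
        UnsafeMutablePointer(mutating: audioBufferList))
    if let firstChannel = buffers.first, let data = firstChannel.mData {
        // Record `frameCount` frames of Float32 starting at `data` here.
        _ = data
    }
    return noErr
}
engine.attach(sinkNode)

// Hardware format into the mixer, app format out of it.
engine.connect(engine.inputNode, to: formatMixer, format: inputFormat)
engine.connect(formatMixer, to: sinkNode, format: appFormat)
```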
Posted by samp17.
Post not yet marked as solved. 0 replies, 339 views.
I am using the new AVAudioSinkNode to get audio samples from an input in real time. I have this working successfully for a 2-channel device. I am now testing on different hardware devices and have encountered an issue where the console reports too many buffers. The device's format appears to be 70 channels (24 channels in class-compliant mode), non-interleaved. I am assuming that, as this node is meant for real-time applications, there is a limit to the number of buffers it can use.

Is there a way I can stream just one channel of this input into my sink node? That is, I would like to specify a single channel to capture. Until now I have been manually deinterleaving in each callback. As the error occurs in the callback itself, not in the user block, I cannot use an AVAudioConverter within the callback. Can someone please suggest a method?
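Since the post mentions manually deinterleaving in each callback, here is a minimal sketch of how that can be kept real-time-friendly: preallocate a scratch buffer outside the callback and copy only the first channel's AudioBuffer inside the receiver block. The 4096-frame bound, the Float32 sample format, and the hand-off step are assumptions; this does not address the buffer limit itself.

```swift
import AVFoundation

// Sketch only: buffer size and sample format are assumptions.
let maxFrames = 4096
let scratch = UnsafeMutablePointer<Float>.allocate(capacity: maxFrames)

let sinkNode = AVAudioSinkNode { (_, frameCount, audioBufferList) -> OSStatus in
    let buffers = UnsafeMutableAudioBufferListPointer(
        UnsafeMutablePointer(mutating: audioBufferList))

    // In a non-interleaved stream each AudioBuffer holds one channel,
    // so buffers[0] is the device's first channel.
    guard let firstChannel = buffers.first,
          let source = firstChannel.mData?.assumingMemoryBound(to: Float.self),
          Int(frameCount) <= maxFrames else {
        return noErr
    }

    // Copy channel 1 only; no allocation or conversion on the audio thread.
    memcpy(scratch, source, Int(frameCount) * MemoryLayout<Float>.size)
    // Hand `scratch` / `frameCount` off to a lock-free queue or ring buffer here.
    return noErr
}
```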
Posted by samp17.
Post not yet marked as solved. 0 replies, 339 views.
I am using the new AVAudioSinkNode to get real-time audio samples to process. This works well for a single channel; however, I am looking at using it with different audio interfaces. One in particular has an input format of 70 non-interleaved channels, and in the console this gives me a "too many buffers" error. Is there a limit to the number of channels that can be used with a sink node? I'm guessing so, since it is designed for real-time use. If that is the case, is there a way to extract a specific channel from the 70 before streaming into the sink node?
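If routing all 70 channels into the sink node keeps tripping the buffer limit, one alternative worth sketching (not necessarily the answer) is to skip the sink node for this device and tap the input instead, configuring an AVAudioConverter with a channelMap once, outside the callback, to keep only the first channel. The tap buffer size, format choices, and the assumption that the converter's channelMap handles this channel extraction are mine, not the poster's.

```swift
import AVFoundation

// Sketch only: formats, buffer size, and the channelMap approach are assumptions.
let engine = AVAudioEngine()
let hwFormat = engine.inputNode.outputFormat(forBus: 0)

guard let monoFormat = AVAudioFormat(standardFormatWithSampleRate: hwFormat.sampleRate,
                                     channels: 1),
      let converter = AVAudioConverter(from: hwFormat, to: monoFormat) else {
    fatalError("Could not create converter")
}
converter.channelMap = [0]   // output channel 1 sourced from the device's channel 1

engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: hwFormat) { buffer, _ in
    guard let mono = AVAudioPCMBuffer(pcmFormat: monoFormat,
                                      frameCapacity: buffer.frameCapacity) else { return }
    do {
        // Same sample rate on both sides, so the simple conversion call applies.
        try converter.convert(to: mono, from: buffer)
        // `mono` now holds a single-channel copy of the tapped audio.
    } catch {
        print("Conversion failed: \(error)")
    }
}
```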
Posted by samp17.