I'm using the Network.framework API for my use case.
My use case is the following:
1. Start a UDP connection on a specific port.
2. Send UDP data to a host (A).
3. Maybe receive data from this host (A) (it's optional).
4. Receive data (on the same port as above) from another host (B) (must-have).
Is it possible to build this scenario with Network.framework?
With my current code I get no data from the other host (B); it's only possible to receive data from host (A).
Here is my code:
import Network

let endpoint = NWEndpoint.hostPort(host: NWEndpoint.Host("1.2.3.4"), port: NWEndpoint.Port(rawValue: 35000)!)
let params = NWParameters(dtls: nil, udp: .init())
params.requiredLocalEndpoint = NWEndpoint.hostPort(host: .ipv4(.any), port: 40000)
connection = NWConnection(to: endpoint, using: params)
connection.start(queue: queue)

// after the connection is ready (in the state update handler)
// I start the receive code like this:
connection.receiveMessage { (data, _, isComplete, error) in
    // handle the datagram from host A here
}
This code works fine for steps 1-3 of my scenario, but not for step 4. As far as I can tell, the Network.framework API creates a point-to-point connection to the remote side.
But I want point-to-multipoint behavior on my local side (receiving data from two or more hosts).
Is this possible?
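One way to receive datagrams on the same local port from more than one remote host is to bind an NWListener to that port instead of (or alongside) the outgoing NWConnection; every remote peer then shows up as its own inbound connection. The following is only a minimal sketch under that assumption (port 40000 taken from the question, error handling reduced to a force-try), not a confirmed-complete solution:

import Network

// Receive loop for one inbound peer: keep reading datagrams until an error occurs.
func receiveLoop(on connection: NWConnection) {
    connection.receiveMessage { (data, _, isComplete, error) in
        if let data = data, isComplete {
            print("Received \(data.count) bytes from \(connection.endpoint)")
        }
        if error == nil {
            receiveLoop(on: connection)
        }
    }
}

let listenerQueue = DispatchQueue(label: "UDP listener")
let listenerParams = NWParameters(dtls: nil, udp: .init())
listenerParams.allowLocalEndpointReuse = true   // allow sharing the port with the outgoing connection

let listener = try! NWListener(using: listenerParams, on: 40000)
listener.newConnectionHandler = { inbound in
    // Host A and host B each arrive as a separate inbound NWConnection.
    inbound.start(queue: listenerQueue)
    receiveLoop(on: inbound)
}
listener.start(queue: listenerQueue)

With this setup the outgoing NWConnection to host A can still be used for sending, while the listener accepts datagrams arriving on port 40000 from any source.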
I want to stream audio on iOS and use AVAudioEngine for that use case, but currently I'm not sure what the best solution for my problem is. I receive the RTP data from the network and want to play this audio data back with AVAudioEngine. I use the iOS Network.framework to receive the network data; first I decode my voice data, and then I want to play it back.
Here is my receive code:
connection.receiveMessage { (data, context, isComplete, error) in
    if isComplete {
        // decode the raw network data with the G.711 audio codec
        let decodedData = AudioDecoder.decode(enc: data, frames: 160)
        // create a PCM buffer for the decoded audio data for playback
        let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatALaw), AVSampleRateKey: 8000, AVNumberOfChannelsKey: 1])
        // let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)
        let buffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: 160)!
        buffer.frameLength = buffer.frameCapacity
        // TODO: now I have to copy decodedData --> buffer (AVAudioPCMBuffer)
        if error == nil {
            // call receive() again for the next message
            self.receive(on: connection)
        }
    }
}
How do I have to copy my decoded data into the AVAudioPCMBuffer? Currently my AVAudioPCMBuffer is created, but it doesn't contain any audio data.
Background information: my general approach would be to cache the PCM buffer (at the TODO line in the code above) in a collection and play back this collection with AVAudioEngine (on a background thread). My decoded linear data is cached in an array of type Int16, so the variable decodedData is of type [Int16]; maybe there is a way to consume this data directly? The function scheduleBuffer only accepts AVAudioPCMBuffer as input.
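Since decodedData is an [Int16] of linear PCM samples, one possible way to fill the buffer is to use a float PCM format (like the commented-out standardFormatWithSampleRate line) and scale each Int16 sample into the buffer's floatChannelData. This is only a sketch under that assumption; decodedData, the 8 kHz sample rate, and the mono channel count are taken from the question:

import AVFoundation

// Build a float PCM buffer from the decoded Int16 samples (mono, 8 kHz assumed).
func makePCMBuffer(from decodedData: [Int16]) -> AVAudioPCMBuffer? {
    guard let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(decodedData.count))
    else { return nil }
    buffer.frameLength = buffer.frameCapacity
    let channel = buffer.floatChannelData![0]               // channel 0 of the mono buffer
    for (index, sample) in decodedData.enumerated() {
        channel[index] = Float(sample) / Float(Int16.max)   // scale Int16 to the [-1, 1] float range
    }
    return buffer
}

A buffer built this way can then be handed to an AVAudioPlayerNode's scheduleBuffer, provided the player node is connected to the engine with the same 8 kHz mono float format; the mixer should handle conversion to the output format.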
Hello,
is it possible to get the own socket port for an NWConnection?
I open a network connection and want to find out which local port is used on the created socket.
My example code:
let queue = DispatchQueue(label: "UDP data thread")
let connection = NWConnection(host: "1.2.3.4", port: 40000, using: .udp)
connection.start(queue: queue)
It's really simple code. Now I want to know the socket port used for my connection.
Where can I read this information? Many thanks.
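One way that might work (a sketch, not confirmed as the only approach): once the connection reaches the .ready state, its currentPath exposes a localEndpoint, and pattern-matching it as .hostPort yields the local address and the port the system picked:

connection.stateUpdateHandler = { state in
    if case .ready = state {
        // The local endpoint (address + port) of the underlying socket.
        if let local = connection.currentPath?.localEndpoint,
           case let .hostPort(host, port) = local {
            print("Local endpoint is \(host):\(port)")
        }
    }
}
connection.start(queue: queue)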
Hello,
I want to switch my audio session to the external speaker.
On iOS 12 it works fine, but not on iOS 13.
Here is my audio setup code:
do {
		try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: .mixWithOthers)
		try AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
		try AVAudioSession.sharedInstance().setActive(true)
} catch {
		print(error)
}
The code above works fine: I start my audio session with the external speaker on the device. Now I want to switch the audio from the external speaker to the internal speaker during an active session.
For that I use this simple line:
try AVAudioSession.sharedInstance().overrideOutputAudioPort(.none)
The result: on iOS 12 it works fine, but not on iOS 13.
Is there an API change or something else? What is my mistake? Maybe I have to use another API from ObjC, or a lower-level one from the C layer, or work with AudioUnit here?
For playback I now use AVAudioEngine (after the speaker switch it plays no audio).
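Not a confirmed iOS 13 fix, just a sketch of an alternative worth trying: express the speaker preference through the .defaultToSpeaker category option and reapply the override on the shared session inside one routine, so both directions of the switch go through the same path (the function name and structure here are assumptions, not taken from the original post):

func route(toExternalSpeaker useSpeaker: Bool) {
    let session = AVAudioSession.sharedInstance()
    do {
        var options: AVAudioSession.CategoryOptions = [.mixWithOthers]
        if useSpeaker {
            options.insert(.defaultToSpeaker)   // prefer the loud speaker for playAndRecord
        }
        try session.setCategory(.playAndRecord, mode: .default, options: options)
        try session.overrideOutputAudioPort(useSpeaker ? .speaker : .none)
        try session.setActive(true)
    } catch {
        print("Audio route change failed: \(error)")
    }
}

If AVAudioEngine goes silent after the route change, it may also be worth restarting the engine in response to the configuration change, since a route change can stop a running engine.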