Receive UDP data from several hosts
For my use case I'm using the Network.framework API. The scenario is:

1. Open a UDP connection on a specific local port.
2. Send UDP data to a host (A).
3. Maybe get data back from this host (A) — that part is optional.
4. Receive data, on the same local port, from another host (B) — this is a must-have.

Is it possible to build this scenario with Network.framework? With my current code I get no data from the other host (B); it's only possible to receive data from host (A). Here is my code:

```swift
let endpoint = NWEndpoint.hostPort(host: NWEndpoint.Host("1.2.3.4"),
                                   port: NWEndpoint.Port(rawValue: 35000)!)
let params = NWParameters(dtls: nil, udp: .init())
params.requiredLocalEndpoint = NWEndpoint.hostPort(host: .ipv4(.any), port: 40000)
connection = NWConnection(to: endpoint, using: params)
connection.start(queue: queue)

// After the connection is ready (in the state update handler)
// I start receiving like this:
connection.receiveMessage { (data, _, isComplete, error) in
    // ...
}
```

This works fine for steps 1–3 of my scenario, but not for step 4. As far as I can tell, the Network.framework API creates a point-to-point connection to the remote side, but I want point-to-multipoint on my local side (receive data from two hosts or more). Is this possible?
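For comparison, here is a minimal sketch of the point-to-multipoint direction (my reading of the API, not something from the thread): an NWListener bound to the local port accepts inbound UDP flows from any remote host and hands out a separate NWConnection per peer, so hosts A and B each get their own connection object:

```swift
import Foundation
import Network

// Sketch: accept UDP datagrams on local port 40000 from any peer.
// Each distinct remote endpoint (host A, host B, ...) produces its
// own NWConnection via the listener's newConnectionHandler.
let queue = DispatchQueue(label: "UDP listener")
let listener = try! NWListener(using: .udp, on: 40000)

listener.newConnectionHandler = { connection in
    connection.start(queue: queue)
    connection.receiveMessage { data, _, _, _ in
        if let data = data {
            print("Received \(data.count) bytes from \(connection.endpoint)")
        }
    }
}
listener.start(queue: queue)
```

With this shape, sending to host A would still go through a dedicated outbound NWConnection (or through the connection the listener vends once A has sent something); the listener only covers the inbound, many-peers side.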
4 replies · 0 boosts · 903 views · Sep ’20
No audio on internal/external speaker
Hello, I want to switch my audio session to the external speaker. On iOS 12 it works fine, but not on iOS 13. Here is my audio setup code:

```swift
do {
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: .mixWithOthers)
    try AVAudioSession.sharedInstance().overrideOutputAudioPort(.speaker)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error)
}
```

The code above works fine: I start my audio session with the external speaker on the device. Now I want to switch the audio from the external speaker to the internal speaker during an active session. For that I use this single line:

```swift
try AVAudioSession.sharedInstance().overrideOutputAudioPort(.none)
```

The result: on iOS 12 it works fine, but not on iOS 13. Is there an API change or something else? What is my mistake? Do I have to use another API from Objective-C, or a lower-level one from the C layer, or work with AudioUnit here? For playback I now use AVAudioEngine (after the speaker switch it plays no audio).
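For reference, a small sketch of the toggle I would test against, with a check of the route iOS actually chose afterwards (the helper function name and the currentRoute inspection are my additions, not from the post):

```swift
import AVFoundation

// Sketch: toggle between the loud (external) speaker and the
// receiver during an active playAndRecord session, then print
// the output port the session actually routed to.
func setSpeakerEnabled(_ enabled: Bool) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.overrideOutputAudioPort(enabled ? .speaker : .none)
        for output in session.currentRoute.outputs {
            print("Now routed to: \(output.portType.rawValue)")
        }
    } catch {
        print("Route override failed: \(error)")
    }
}
```

Printing `currentRoute` right after the override at least narrows down whether iOS 13 rejected the route change or whether the route changed and the engine simply stopped rendering.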
0 replies · 0 boosts · 648 views · Aug ’20
Get own socket port (NWConnection)
Hello, is it possible to get the local socket port of an NWConnection? I open a network connection and want to find out which port was used for the created socket. My example code:

```swift
let queue = DispatchQueue(label: "UDP data thread")
let connection = NWConnection(host: "1.2.3.4", port: 40000, using: .udp)
connection.start(queue: queue)
```

It's deliberately simple code. Now I want to know the local socket port used by my connection. Where can I read this information? Many thanks.
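One place this information is exposed, as a sketch (my reading of the API, not from the thread): once the connection reaches the `.ready` state, `currentPath?.localEndpoint` reports the local address and the (possibly ephemeral) port the socket was bound to:

```swift
import Foundation
import Network

let queue = DispatchQueue(label: "UDP data thread")
let connection = NWConnection(host: "1.2.3.4", port: 40000, using: .udp)

// Sketch: read the local endpoint once the connection is ready.
connection.stateUpdateHandler = { state in
    if case .ready = state,
       let local = connection.currentPath?.localEndpoint,
       case let .hostPort(host, port) = local {
        print("Local endpoint: \(host):\(port)")
    }
}
connection.start(queue: queue)
```

Before `.ready`, `currentPath` may be nil, so checking inside the state update handler avoids reading it too early.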
10 replies · 0 boosts · 2.6k views · Jul ’20
iOS speaker output format (iPhone)
I want to play back audio data received from the network. My incoming PCM audio data are Int16, 1 channel, sample rate 8000 Hz, 160 bytes per packet. Now I'm not sure which audio format iOS supports on the speaker side. As far as I understand, I have to work with Float32 and a sample rate of 44,100 or 48,000 Hz, is that right? So I think I have to convert my Int16 linear PCM data to Float32. Maybe I also have to resample the data from 8 kHz to 48 kHz, I'm not sure (maybe the hardware does it). Could someone help me?

Here is my current code, where I build the AVAudioPCMBuffer:

```swift
func convertInt16ToFloat32(_ data: [Int16]) -> AVAudioPCMBuffer {
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 160)!
    // Each sample of the input array is scaled to the interval [-1, 1]
    for i in 0..<data.count {
        // Convert the samples to floats (before resampling).
        let div: Float32 = (1.0 / 32768.0)
        audioBuffer.floatChannelData?.pointee[i] = div * Float32(data[i])
    }
    audioBuffer.frameLength = audioBuffer.frameCapacity
    return audioBuffer
}
```

And on the other side I play back the created AVAudioPCMBuffer in my AVAudioEngine:

```swift
func playFromNetwork(data: [Int16]) {
    // data: linear PCM Int16, sample rate 8000, 160 bytes
    let audio = convertInt16ToFloat32(data)
    // Play back the converted data on the AVAudioPlayerNode
    self.playerNode!.scheduleBuffer(audio, completionHandler: nil)
    Logger.Audio.log("Play audio data .....")
}
```

Here is my setup for AVAudioEngine:

```swift
func initAudio() {
    try! AVAudioSession.sharedInstance().setActive(true)
    try! AVAudioSession.sharedInstance().setCategory(.playback)

    let outputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                     sampleRate: 8000,
                                     channels: 1,
                                     interleaved: false)
    engine = AVAudioEngine()
    playerNode = AVAudioPlayerNode()
    engine!.attach(playerNode!)
    engine!.connect(playerNode!, to: engine!.mainMixerNode, format: outputFormat)
    engine!.prepare()
    try! engine!.start()
    playerNode!.play()
}
```
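The Int16-to-Float32 scaling itself is plain arithmetic and can be checked in isolation from the audio frameworks (a small sketch; the function name is mine):

```swift
import Foundation

// Sketch: scale signed 16-bit PCM samples into [-1.0, 1.0).
// Dividing by 32768 maps Int16.min to exactly -1.0 and
// Int16.max to just under 1.0.
func int16ToFloat32(_ samples: [Int16]) -> [Float32] {
    return samples.map { Float32($0) / 32768.0 }
}

let floats = int16ToFloat32([Int16.min, 0, Int16.max])
print(floats)  // [-1.0, 0.0, ~0.99997]
```

For the sample-rate question: connecting the player node to the mixer with an 8 kHz Float32 format, as in the engine setup above, lets the engine's mixer resample to the hardware rate; an explicit AVAudioConverter is only needed if you want to do that conversion yourself.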
0 replies · 0 boosts · 1.1k views · Nov ’19
Playback streaming audio with AVAudioEngine (iOS)
I want to stream audio on iOS and use AVAudioEngine for that use case. Currently I'm not really sure what the best solution for my problem is. I get the RTP data from the network and want to play this audio data back with AVAudioEngine. I use the iOS Network.framework to receive the network data. First I decode my voice data, and now I want to play it back. Here is my receive code:

```swift
connection.receiveMessage { (data, context, isComplete, error) in
    if isComplete {
        // Decode the raw network data with audio codec G.711
        let decodedData = AudioDecoder.decode(enc: data, frames: 160)

        // Create a PCM buffer for the audio data for playback
        let format = AVAudioFormat(settings: [AVFormatIDKey: NSNumber(value: kAudioFormatALaw),
                                              AVSampleRateKey: 8000,
                                              AVNumberOfChannelsKey: 1])
        // let format = AVAudioFormat(standardFormatWithSampleRate: 8000, channels: 1)
        let buffer = AVAudioPCMBuffer(pcmFormat: format!, frameCapacity: 160)!
        buffer.frameLength = buffer.frameCapacity

        // TODO: now I have to copy the decodedData --> buffer (AVAudioPCMBuffer)

        if error == nil {
            // Recall receive() for the next message
            self.receive(on: connection)
        }
    }
}
```

How do I copy my decoded data into the AVAudioPCMBuffer? Currently my AVAudioPCMBuffer is created but does not contain any audio data. Background information: my general approach would be to cache the PCM buffer (at the TODO line in the code above) in a collection and play this collection back with AVAudioEngine in a background thread. My decoded linear data is cached in an array of type Int16, so the variable decodedData is of type [Int16]. Maybe there is a possibility to consume this data directly? The function scheduleBuffer accepts only AVAudioPCMBuffer as input.
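A sketch of the copy at the TODO line, under one assumption of mine that differs from the post: the buffer is created with a linear PCM Int16 format (`pcmFormatInt16`), because the channel-data pointers of AVAudioPCMBuffer are only meaningful for linear PCM, not for a compressed format such as A-law:

```swift
import AVFoundation

// Sketch: copy decoded [Int16] samples into an AVAudioPCMBuffer.
// int16ChannelData is nil unless the buffer's format is linear
// PCM Int16, so the format here is part of the assumption.
func makeBuffer(from decodedData: [Int16], sampleRate: Double = 8000) -> AVAudioPCMBuffer? {
    guard let format = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                     sampleRate: sampleRate,
                                     channels: 1,
                                     interleaved: false),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(decodedData.count)),
          let channel = buffer.int16ChannelData else {
        return nil
    }
    buffer.frameLength = buffer.frameCapacity
    // Copy the samples into channel 0 of the buffer.
    decodedData.withUnsafeBufferPointer { src in
        channel[0].update(from: src.baseAddress!, count: decodedData.count)
    }
    return buffer
}
```

Note that a player node connected to the mixer with a Float32 format will not accept an Int16 buffer directly; either connect the node with this Int16 format or run the buffer through an AVAudioConverter first.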
2 replies · 0 boosts · 1.8k views · Oct ’19