Post

Replies

Boosts

Views

Activity

QUIC connection error
Hi, we are currently implementing the method below for a quick POC on iOS (Xcode 15.3 / macOS Sonoma 14.0):

func startQUICConnection() async {
    // Set the initial stream to bidirectional.
    options.direction = .bidirectional

    self.mainConn?.stateUpdateHandler = { [weak self] state in
        print("Main Connection State: \(state)")
        switch state {
        case .ready:
            print("Ready...")
        default:
            break
        }
    }

    // Don't forget to start the connection.
    self.mainConn?.start(queue: self.queue)
}

This is what we have in the initializer of the class:

parameters = NWParameters(quic: options)
mainConn = NWConnection(to: endpoint, using: parameters)

These are the class's properties:

let endpoint = NWEndpoint.hostPort(host: "0.0.0.0", port: .init(integerLiteral: 6667))
let options = NWProtocolQUIC.Options(alpn: ["echo"])
let queue = DispatchQueue(label: "quic", qos: .userInteractive)
var mainConn: NWConnection? = nil
let parameters: NWParameters!

According to the logs, the NWConnection never reaches the .ready state. Logs:

nw_path_evaluator_create_flow_inner failed NECP_CLIENT_ACTION_ADD_FLOW (null) evaluator parameters: quic, attach protocol listener, attribution: developer, context: Default Network Context (private), proc: 022B7C28-0271-3628-8E5E-26B590B50E5B
nw_path_evaluator_create_flow_inner NECP_CLIENT_ACTION_ADD_FLOW 8FEBF750-979D-437F-B4A8-FB71F4C5A882 [22: Invalid argument]
nw_endpoint_flow_setup_channel [C2 0.0.0.0:6667 in_progress channel-flow (satisfied (Path is satisfied), interface: en0[802.11], ipv4, ipv6, dns, uses wifi)] failed to request add nexus flow
Main Connection State: preparing
Main Connection State: waiting(POSIXErrorCode(rawValue: 22): Invalid argument)

We're running a local server built with proxygen on port 6667, and the proxygen client connects to it without issue. We have tried several things, but the results are the same.
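For comparison, here is a minimal, self-contained sketch of the same kind of QUIC client; the loopback host "127.0.0.1" and the "echo" ALPN are placeholders rather than values confirmed to work with the proxygen setup described above:

import Network

// A minimal sketch of a standalone QUIC client, assuming a local test server.
// The host and ALPN below are placeholder values, not a verified configuration.
final class QUICClient {
    let options = NWProtocolQUIC.Options(alpn: ["echo"])
    let queue = DispatchQueue(label: "quic.client")
    var mainConn: NWConnection?

    func start() {
        options.direction = .bidirectional
        let parameters = NWParameters(quic: options)
        let endpoint = NWEndpoint.hostPort(host: "127.0.0.1", port: 6667)
        let connection = NWConnection(to: endpoint, using: parameters)
        connection.stateUpdateHandler = { state in
            print("Main Connection State: \(state)")
        }
        connection.start(queue: queue)
        mainConn = connection
    }
}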
1
0
1.5k
Mar ’24
NWConnectionGroup w/QUIC Best Practices
Hello. We wanted to ask about the right, or intended, way to leverage NWConnectionGroup for a QUIC-based streaming solution. The use case: the client makes a request to play a movie, and we want the streaming server, which also uses the Network framework, to send as many video frames as possible, as fast as possible. Our understanding is that NWConnectionGroup opens a QUIC tunnel between both parties so we can multiplex different streams to the client, and we are already doing that. We see a throughput of approximately 20-35 MB/s (the client device is an iPad and the server is an M2 MacBook Pro running a server app), and we would like to understand whether we can improve these results significantly. For example:

1. Is it good practice to create a second tunnel (NWConnectionGroup), or is that not needed here? We tried it, but the second group also comes with id 0 on its metadata object, just like the first group we instantiated, and we are not sure why that is the case.

2. We use a pool of several already-instantiated NWConnection objects (initialized with the group object) and send each video buffer in chunks as a stream on one connection. We use one connection per buffer, and when we need to send another buffer we pull a different NWConnection from the pool.

We mainly want a confirmation/validation of what we are doing, or to find out whether we are missing something in our implementation. Thanks in advance.
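As a reference point for the tunnel-plus-stream-pool setup described above, here is a minimal sketch; the endpoint, ALPN, pool size, and queue label are illustrative placeholders, not values from the post:

import Network

// A minimal sketch of the single-group, multiple-stream pattern: one QUIC
// tunnel (NWConnectionGroup) and a pool of NWConnections, each mapping to
// one stream inside that tunnel. All concrete values here are placeholders.
let options = NWProtocolQUIC.Options(alpn: ["movie"])
options.direction = .bidirectional
let parameters = NWParameters(quic: options)

let endpoint = NWEndpoint.hostPort(host: "192.168.1.10", port: 6667)
let descriptor = NWMultiplexGroup(to: endpoint)
let group = NWConnectionGroup(with: descriptor, using: parameters)
let queue = DispatchQueue(label: "quic.group")

group.stateUpdateHandler = { state in
    print("Group state: \(state)")
}
group.start(queue: queue)

// Each NWConnection created from the group becomes one QUIC stream inside
// the same tunnel; a small pool of them can be built up front.
var streamPool: [NWConnection] = []
for _ in 0..<4 {
    if let stream = NWConnection(from: group) {
        stream.start(queue: queue)
        streamPool.append(stream)
    }
}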
2
0
615
Jun ’24
QUIC Connection Group Server Sending Pace
We have an implementation that uses QUIC via a connection group; both server and client are written in Swift using the Network framework. Our use case: the server should send data buffers to the client as fast, and in as large a volume, as possible. The pace at which the server calls the send method has to be managed carefully, because if we send too much data the client will not be able to receive it all. The question is: is there a way to query the congestion window, so that on the server side we know how much data we should be able to send at a given point? We are asking because not all the data the server sends is arriving on the client side. We are using these settings:

let options = NWProtocolQUIC.Options(alpn: ["h3"])
options.direction = .bidirectional
// options.idleTimeout = 86_400_000
options.maxUDPPayloadSize = Int.max
options.initialMaxData = Int.max
options.initialMaxStreamDataBidirectionalLocal = Int.max
options.initialMaxStreamDataBidirectionalRemote = Int.max
options.initialMaxStreamDataUnidirectional = Int.max
options.initialMaxStreamsBidirectional = 400
options.initialMaxStreamsUnidirectional = 400

Questions:

1. Can we get a little more detail on the options above, specifically their impact on the actual connection?
2. Is initialMaxData the actual congestion window value?
3. Are we missing something or making incorrect assumptions?

Thanks in advance.
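In case it helps frame the pacing question, below is a sketch of one application-level approach that does not require reading the congestion window: keep a bounded number of sends in flight and hand the next buffer to the connection only when a completion fires. The maxInFlight value and the nextBuffer() helper are hypothetical placeholders:

import Network

// A sketch of pacing sends at the application layer, assuming no direct
// access to the congestion window. It assumes all calls run on the
// connection's dispatch queue. `maxInFlight` and `nextBuffer()` are
// hypothetical placeholders, not part of the implementation above.
final class PacedSender {
    let connection: NWConnection
    let maxInFlight = 4
    private var inFlight = 0

    init(connection: NWConnection) {
        self.connection = connection
    }

    func nextBuffer() -> Data? {
        // Placeholder: return the next buffer to send, or nil when done.
        return nil
    }

    func pump() {
        while inFlight < maxInFlight, let buffer = nextBuffer() {
            inFlight += 1
            connection.send(content: buffer, completion: .contentProcessed { [weak self] error in
                guard let self else { return }
                self.inFlight -= 1
                if let error {
                    print("Send failed: \(error)")
                    return
                }
                self.pump()   // refill the in-flight window as completions arrive
            })
        }
    }
}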
10
0
842
Jun ’24
QUIC Network framework interoperability
We would like to understand/double-check whether it is possible to use QUIC in Swift via the Network framework as the client, together with some other QUIC implementation on the server (e.g. s2n-quic, quiche, msquic, etc.) that will not be a macOS server. If that interoperability is indeed possible, NWConnectionGroup would not, in our opinion, be an approach we could use, since we would probably need to develop the multiplexing from scratch on both sides. Thanks in advance.
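If it is useful as a starting point, here is a sketch of what the client side might look like against a third-party QUIC server using a plain NWConnection; the host, port, and ALPN are placeholders that would have to match the server's configuration, and the verification override shown is only a local-testing convenience for self-signed certificates, not a recommendation:

import Network

// A sketch of a client-only setup against a non-Apple QUIC server, assuming
// a plain NWConnection rather than NWConnectionGroup. Host, port, and ALPN
// are placeholders; they must match what the server is configured to accept.
let options = NWProtocolQUIC.Options(alpn: ["my-proto"])
options.direction = .bidirectional

// For a local test server with a self-signed certificate, verification can be
// overridden. This disables certificate checking; local testing only.
sec_protocol_options_set_verify_block(
    options.securityProtocolOptions,
    { _, _, complete in complete(true) },
    DispatchQueue(label: "quic.tls.verify")
)

let parameters = NWParameters(quic: options)
let connection = NWConnection(
    to: .hostPort(host: "server.local", port: 4433),
    using: parameters
)
connection.stateUpdateHandler = { state in
    print("Connection state: \(state)")
}
connection.start(queue: DispatchQueue(label: "quic.client"))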
3
0
564
Jul ’24
macOS Server App on background state
Hi, let us explain the situation: we have a macOS server app that acts as a QUIC server (this setup is for a live demo). Once the server receives a streaming request from the client, it starts sending a number of QUIC streams to the client. The server needs to run on a MacBook Pro for the live demo, and everything works fine; however, when we click on a different app (so the server app loses focus), the server app goes into the background and network activity drops from about 90 MB/s to almost zero. When we click on the server app again, network activity goes back to 90 MB/s and continues normally. We understand this is the OS managing resources.

Question: what options do we have to keep the server app's QUIC networking tasks running continuously, even when it is not in the foreground (basically, for it to behave like an actual server/service)? Thanks in advance.
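One mitigation that is sometimes suggested on macOS, assuming the slowdown comes from App Nap, is to declare a long-running activity so the system avoids throttling the process while the app is not frontmost. This is a sketch of that idea, not a confirmed fix for the behavior described above:

import Foundation

// A sketch assuming the slowdown is App Nap-related: declare a long-running,
// user-initiated activity so macOS avoids throttling the process while the
// app is not frontmost. Not a confirmed fix for the behavior above.
final class StreamingActivity {
    private var token: NSObjectProtocol?

    func begin() {
        token = ProcessInfo.processInfo.beginActivity(
            options: [.userInitiated, .idleSystemSleepDisabled],
            reason: "Streaming QUIC data to clients"
        )
    }

    func end() {
        if let token {
            ProcessInfo.processInfo.endActivity(token)
            self.token = nil
        }
    }
}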
4
0
490
Aug ’24
QUIC streams/connections terminated when taking off the AVP
Hi, we have a situation in which we are sending buffers from a server to a Vision Pro on a local network, and for some reason, when the headset is taken off the user's head, the QUIC streams we are using get closed/terminated/disconnected. What are our options for avoiding this behavior, or at least for resuming gracefully and making sure the AVP is ready to receive buffers from the server again?
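As one possible direction, here is a minimal sketch of watching a stream's state and opening a replacement stream on the same QUIC tunnel after a failure, assuming the streams come from an NWConnectionGroup; the retry delay, queue handling, and function name are illustrative:

import Network

// A minimal sketch of watching a QUIC stream's state and reopening it after
// a failure, assuming the stream was created from an NWConnectionGroup.
// The retry delay and queue are illustrative placeholders.
func monitor(stream: NWConnection, in group: NWConnectionGroup, queue: DispatchQueue) {
    stream.stateUpdateHandler = { state in
        switch state {
        case .failed(let error):
            print("Stream failed: \(error); opening a replacement stream")
            queue.asyncAfter(deadline: .now() + 1) {
                if let newStream = NWConnection(from: group) {
                    monitor(stream: newStream, in: group, queue: queue)
                    newStream.start(queue: queue)
                }
            }
        case .cancelled:
            print("Stream cancelled")
        default:
            break
        }
    }
}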
1
0
455
Aug ’24
QUIC & http3
Hi, this is basically a fundamental question about the QUIC implementation in the Network framework. We are using the NWMultiplexGroup object to handle multiple streams over the wire, but we would like to understand whether this object uses HTTP/3 under the hood, because our understanding is that the actual connection multiplexing happens under that protocol. If that is not the case, could you please elaborate a little more? By the way, in this implementation we are not using URLSession at all; it is pure QUIC via the Network framework. Thanks in advance.
1
0
400
Aug ’24
Missing buffers on client side
Hi, we are working on a small QUIC POC in which a MacBook Pro is the server and a Vision Pro is the client (we use it to test QUIC's functionality). We have the logic below to send small buffers (128 KB) using only one stream, because we want the data to arrive in order and reliably, as QUIC guarantees:

private func createDummyData() {
    dummyData.append(Data(bytes: &frameNumber, count: MemoryLayout<UInt64>.size))
    frameNumber += 1
}

private func sendDataToClient() {
    createDummyData()
    let start = Date()
    Thread.sleep(forTimeInterval: 0.015)
    outgoingConnection?.sendBuffer(dummyData) { [weak self] in
        let interval = Date().timeIntervalSince(start)
        print("--> frame #: \(String(describing: self?.frameNumber)), send took: \(interval) seconds")
        self?.dummyData.removeLast(8)
        self?.sendDataToClient()
    }
}

As you can see, we wait for the completion handler before calling the next send operation. We needed to add a delay (0.015 s) because, even though the data arrives in order, a considerable number of buffers never show up on the client side. We include a frame number (1, 2, 3, 4, ...) in each buffer so we know which ones arrived at the client. With the delay removed, this is how we receive the data:

Connected to QUIC bi-di tunnel id: 0...
Timestamp: 00:42:40.413, Buffer received... Frame number: 0, received...
Timestamp: 00:42:40.414, Buffer received... Frame number: 1, received...
Timestamp: 00:42:40.416, Buffer received... Frame number: 29, received...
Timestamp: 00:42:40.416, Buffer received... Frame number: 30, received...
Timestamp: 00:42:40.418, Buffer received... Frame number: 43, received...
Timestamp: 00:42:40.418, Buffer received... Frame number: 52, received...
Timestamp: 00:42:40.422, Buffer received... Frame number: 65, received...
Timestamp: 00:42:40.424, Buffer received... Frame number: 80, received...
Timestamp: 00:42:40.426, Buffer received... Frame number: 90, received...

As you can see, we received frames 0 and 1, but after that we received #29, and then it jumps from 30 to 43, 52, and 65. With the delay this is much less severe; it does not fix the problem, but there are not nearly as many losses. We thought QUIC had an internal send queue in which every frame waits to be sent and is then delivered reliably. Kindly let us know what we are missing.
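One thing worth noting for context: a QUIC stream is an ordered byte stream, so if application-level frames are written without explicit framing, the receiver's reads may not line up with the sender's buffers. The sketch below shows simple length-prefixed framing over an NWConnection as a general technique; it is based on that assumption about the symptom and is not the sendBuffer implementation used in the post:

import Foundation
import Network

// A sketch of length-prefixed framing over a single QUIC stream: the sender
// prepends a 4-byte big-endian length, and the receiver reads the header
// first and then exactly that many payload bytes. Illustrative only; not the
// sendBuffer implementation from the post above.
func sendFramed(_ payload: Data, over connection: NWConnection,
                completion: @escaping (Error?) -> Void) {
    var length = UInt32(payload.count).bigEndian
    var framed = Data(bytes: &length, count: MemoryLayout<UInt32>.size)
    framed.append(payload)
    connection.send(content: framed, completion: .contentProcessed { error in
        completion(error)
    })
}

// Receiving side: read one complete frame (4-byte header, then payload).
func receiveFrame(from connection: NWConnection,
                  completion: @escaping (Data?) -> Void) {
    connection.receive(minimumIncompleteLength: 4, maximumLength: 4) { header, _, _, _ in
        guard let header, header.count == 4 else { completion(nil); return }
        let length = header.reduce(0) { ($0 << 8) | Int($1) }   // big-endian UInt32
        connection.receive(minimumIncompleteLength: length, maximumLength: length) { body, _, _, _ in
            completion(body)
        }
    }
}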
0
0
352
Aug ’24