We have an implementation that uses QUIC via a connection group; both the server and the client are written in Swift using the Network framework.
Our use case is that the server should send data buffers to the client as fast and in as much volume as possible. The pace at which we call the send method on the server has to be managed carefully, because if we send too much data the client won't be able to keep up.
The question is: is there a way to query the congestion window, so that on the server side we know how much data we should be able to send at a given point? We're asking because the client is not receiving all the data the server is sending...
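For context, the pacing we have in mind looks roughly like the sketch below: serialize sends, gating each buffer on the previous send's completion. Note that `sendBuffer` here is just a placeholder closure standing in for `NWConnection.send(content:completion:)` with a `.contentProcessed` completion; it is not a real Network framework API.

```swift
import Foundation

// Completion-driven pacing: hand the next buffer to the connection only
// after the previous send's completion fires. `sendBuffer` is a placeholder
// for NWConnection.send(content:completion:); in a real app the completion
// would be .contentProcessed { error in ... }.
func sendSerially(_ buffers: [Data],
                  sendBuffer: @escaping (Data, @escaping (Error?) -> Void) -> Void,
                  done: @escaping (Error?) -> Void) {
    var remaining = buffers[...]
    func sendNext() {
        guard let next = remaining.first else { done(nil); return }
        remaining = remaining.dropFirst()
        sendBuffer(next) { error in
            if let error { done(error); return }
            sendNext()
        }
    }
    sendNext()
}
```

This keeps at most one buffer in flight, which is the simplest form of backpressure when the stack doesn't expose the congestion window directly.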
We are using these settings:
let options = NWProtocolQUIC.Options(alpn: ["h3"])
options.direction = .bidirectional
//
options.idleTimeout = 86_400_000
options.maxUDPPayloadSize = Int.max
options.initialMaxData = Int.max
options.initialMaxStreamDataBidirectionalLocal = Int.max
options.initialMaxStreamDataBidirectionalRemote = Int.max
options.initialMaxStreamDataUnidirectional = Int.max
options.initialMaxStreamsBidirectional = 400
options.initialMaxStreamsUnidirectional = 400
Questions:
1.- Can we get a little more detail on the options above, specifically their impact on the actual connection?
2.- Is initialMaxData the actual congestion window value?
3.- Are we missing something or making incorrect assumptions?
Thanks in advance.
Sure np.
After isolating the logic to only the Network framework implementation, I'm observing exactly what you described:
1.- CPU is steady at 34%.
2.- Memory is under pressure, because I'm not waiting for the completion handler before sending the next buffer (I can find a balance in this scenario, so it's not a big deal).
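The balance mentioned in point 2 could be sketched as a bounded in-flight window: allow at most N outstanding sends, and top up from each completion. As before, `sendBuffer` is a placeholder for `NWConnection.send(content:completion:)`, not a real API, and a production version would serialize access on a `DispatchQueue` since completions can arrive on arbitrary queues.

```swift
import Foundation

// Bounded in-flight sketch: at most `maxInFlight` sends are outstanding
// at any time, so memory stays bounded without fully serializing sends.
// NOT thread-safe as written; wrap enqueue/pump in a DispatchQueue for real use.
final class PacedSender {
    private let maxInFlight: Int
    private var inFlight = 0
    private var queue: [Data] = []
    private let sendBuffer: (Data, @escaping () -> Void) -> Void

    init(maxInFlight: Int,
         sendBuffer: @escaping (Data, @escaping () -> Void) -> Void) {
        self.maxInFlight = maxInFlight
        self.sendBuffer = sendBuffer
    }

    func enqueue(_ data: Data) {
        queue.append(data)
        pump()
    }

    private func pump() {
        // Start sends until the in-flight budget is exhausted or the queue drains.
        while inFlight < maxInFlight, !queue.isEmpty {
            let data = queue.removeFirst()
            inFlight += 1
            sendBuffer(data) { [weak self] in
                guard let self else { return }
                self.inFlight -= 1
                self.pump()
            }
        }
    }
}
```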
My implementation must have some specific logic that is causing the CPU to spike like that, so I'm glad it's not a bug in the framework.
Really appreciate your help.
Thanks