QUIC Connection Group Server Sending Pace

We have an implementation in which we use QUIC via a connection group, server are client are on Swift using the Network framework.

Our use case is that the server should send data buffers to the client as fast, and in as large a volume, as possible. The pace at which the server calls the send method has to be managed carefully, because if we send too much data the client is of course not going to be able to receive it all.

The question would be: is there a way to query the congestion window so that, on the server side, we know how much data we should be able to send at a given point? Asking because the client is not receiving all the data the server sends...

We are using these settings:

        let options = NWProtocolQUIC.Options(alpn: ["h3"])
        options.direction = .bidirectional
        //
        options.idleTimeout = 86_400_000
        options.maxUDPPayloadSize = Int.max
        options.initialMaxData = Int.max
        options.initialMaxStreamDataBidirectionalLocal = Int.max
        options.initialMaxStreamDataBidirectionalRemote = Int.max
        options.initialMaxStreamDataUnidirectional = Int.max
        options.initialMaxStreamsBidirectional = 400
        options.initialMaxStreamsUnidirectional = 400  
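
For context, this is roughly how those options attach to the connection group on our side (the endpoint host and port here are placeholders, not our real values):

```swift
import Network

let options = NWProtocolQUIC.Options(alpn: ["h3"])
options.direction = .bidirectional

// NWParameters(quic:) wraps the QUIC options; the group is created from a
// multiplex descriptor pointing at the (placeholder) remote endpoint.
let parameters = NWParameters(quic: options)
let group = NWConnectionGroup(
    with: NWMultiplexGroup(to: .hostPort(host: "example.com", port: 4433)),
    using: parameters)
group.stateUpdateHandler = { state in print("group state:", state) }
group.start(queue: .main)
```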

Questions:

1.- Can we get a little more detail on the options above, specifically on their impact on the actual connection?

2.- Is initialMaxData the actual congestion window value?

3.- Are we missing something or making incorrect assumptions?

Thanks in advance.

How are you sending this data? Down a single stream? Creating a new stream for each message? Or something else?

Also:

server are client are on Swift using the Network framework.

Did you mean “server and client”?

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

1.- How are you sending this data?

We have a pool of NWConnection objects that were initialized on the server using the group object; all of those connections are already in the "ready" state. When a data buffer is ready/prepared on the server, we pull a connection from the pool, we divide the buffer in chunks (64K each) and we call the send method for every chunk. When the last chunk is sent, we wait for its completion (.contentProcessed) to be called (we wait only for that last chunk to be processed), and then we repeat the process with another buffer and a different connection from the pool. As you can see, this happens pretty fast; we aren't doing much pacing here, because we do not have an understanding of how many bytes can be in flight over the wire at a particular time, which would probably dictate how much data we can send.

So the short answer is: yes, we are using an existing NWConnection for each message.
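
Concretely, the chunking step looks roughly like this (chunks(of:size:) is our own helper, not Network framework API):

```swift
import Foundation

// Sketch of the chunking described above; `chunks(of:size:)` is our own
// helper, not Network framework API. Each chunk is at most `size` bytes,
// and each chunk is then passed to a separate call to send.
func chunks(of data: Data, size: Int = 64 * 1024) -> [Data] {
    stride(from: 0, to: data.count, by: size).map { offset in
        data.subdata(in: offset..<min(offset + size, data.count))
    }
}
```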

2.- Did you mean “server and client”?

Yes, that's what I meant.

Happy to clarify any other behavior...

Thanks.

we divide the buffer in chunks (64K each) and we call the send method for every chunk

Why do you chunk your data?

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

Our thought was that we want to be respectful of the client's bandwidth, but you are probably asking because that's something that's being handled for us? For example, if our messages were 100 MB (instead of 2.5 MB, which is the case now), should we just send the entire 100 MB buffer and let the Network framework take care of that? Interesting...

Then our client's receive method will process whatever comes in until we get the entire file?

func receive() {
    connection.batch {
        connection.receive(minimumIncompleteLength: 1, maximumLength: Int.max) { [weak self] content, contentContext, isComplete, error in
            // ... handle `content`, `isComplete`, and `error` ...
            self?.receive()
        }
    }
}

Questions:

  • Should we just send the entire buffer and let the receive method get all the chunks?
  • If the answer to the previous question is yes, should we even care about implementing some kind of pacing mechanism in order to understand the amount of data we need to send to the client?
  • For feedback/back-pressure purposes, if we wanted to implement that, is there a way to query the number of bytes that can be sent over the wire at a particular time?

Thanks in advance.

you are probably asking because that's something that's being handled for us?

Right. QUIC implements flow control both on each individual stream and on the tunnel as a whole. In the NWConnection API that flow control is based on the completion handler you pass to the send(…) API. So, I see two sensible paths forward:

  • If you want to reduce the amount of memory consumed in the sender, issue a send and then, when it completes, issue the next send.

  • If not, just issue one big send.

Right now you’re adding extra complexity without actually reducing your memory usage )-:
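
As a sketch of the first option, the next send can be driven from the previous send’s completion handler. Here `send` is a stand-in closure, not the real NWConnection.send(content:completion:) signature:

```swift
import Foundation

// Sketch of option one: one send in flight at a time, with the next send
// issued from the previous send's completion handler. `send` stands in
// for NWConnection.send(content:completion:); it is not the real API.
func sendSequentially(_ chunks: [Data],
                      send: @escaping (Data, @escaping (Error?) -> Void) -> Void,
                      completion: @escaping (Error?) -> Void) {
    var remaining = chunks[...]
    func sendNext() {
        guard let chunk = remaining.first else { return completion(nil) }
        remaining = remaining.dropFirst()
        send(chunk) { error in
            if let error { return completion(error) }
            sendNext()
        }
    }
    sendNext()
}
```

Because each send waits for the previous one to complete, flow control applies back pressure naturally: when QUIC can’t accept more data, the completion handler is simply delayed.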

Should we just send the entire buffer and let the receive method get all the chunks?

This is a decision you make on the send side, based on how much memory you want to consume there. No matter what you do on the send side, your sends won’t overwhelm the receiver because of QUIC’s flow control.

For feedback/back-pressure purposes, if we wanted to implement that, is there a way to query the number of bytes that can be sent over the wire at a particular time?

No, because that’s not how flow control is expressed in NWConnection. Rather, it’s based on the completion handler, as explained above.

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
