Another reason we are chunking the data is that we want to send termination bytes on the last chunk. That way we don't have to set isComplete = true, which would finalize the write side of that stream. We want to keep the streams open and ready in the pool, which saves us the cost of creating new NWConnections.
Unless we are missing an important point, of course?
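For concreteness, here is a minimal sketch of that chunked-send approach, assuming an established QUIC stream (NWConnection). The 8-byte terminator is hypothetical; any sentinel the receiver recognizes would do.

import Foundation
import Network

// Hypothetical app-level end-of-message marker; appended instead of
// setting isComplete = true so the stream stays open for reuse.
let chunkSize = 64 * 1024
let terminator = Data(repeating: 0xFF, count: 8)

func send(buffer: Data, over connection: NWConnection) {
    var offset = buffer.startIndex
    while offset < buffer.endIndex {
        let end = min(offset + chunkSize, buffer.endIndex)
        var chunk = buffer[offset..<end]
        if end == buffer.endIndex {
            chunk += terminator // mark the last chunk at the app level
        }
        connection.send(
            content: chunk,
            contentContext: .defaultMessage,
            isComplete: false, // never finalize; keep the stream pooled
            completion: .contentProcessed({ error in
                if let error = error { print("send failed: \(error)") }
            })
        )
        offset = end
    }
}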
Also, in the Console app, it seems I'm getting lots of messages like this one:
quic_fc_process_stream_data_blocked [C1:1] [-1ef174f11ce3c9b7] [S1313] received STREAM_DATA_BLOCKED (max=10128933), app read 10128933 bytes and has yet to read 0 bytes, stream receive window is 2097152 bytes, local stream max data is 12226085 bytes and the last received offset is 10128932
The messages differ in their byte counts, but the error is basically the same.
The data buffers we are working with have an average size of 2.4 MB (we divide them into 64 KB chunks).
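In case it helps frame the flow-control angle: STREAM_DATA_BLOCKED means the peer's sender hit the stream's flow-control limit, and that window only grows as the receiving app actually reads. Below is a minimal sketch of the kind of receive loop that keeps the 2 MB stream receive window credited; handleChunk is a placeholder for our own processing.

import Foundation
import Network

func handleChunk(_ data: Data) { /* placeholder: consume bytes */ }

func readLoop(on connection: NWConnection) {
    connection.receive(minimumIncompleteLength: 1,
                       maximumLength: 64 * 1024) { data, _, isComplete, error in
        if let data = data, !data.isEmpty {
            handleChunk(data) // consume immediately so flow control opens back up
        }
        if let error = error {
            print("receive failed: \(error)")
            return
        }
        if !isComplete {
            readLoop(on: connection) // re-arm the read right away
        }
    }
}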
We would really appreciate some suggestions/analysis on this issue.
Thanks in advance.
Assuming the CUBIC algorithm is in charge, correct?
Maybe the congestion window is getting really small, which would explain such a drop in throughput?
We are using these options on both ends:
options.idleTimeout = 86_400_000
options.maxUDPPayloadSize = Int.max
options.initialMaxData = Int.max
options.initialMaxStreamDataBidirectionalLocal = Int.max
options.initialMaxStreamDataBidirectionalRemote = Int.max
options.initialMaxStreamDataUnidirectional = Int.max
options.initialMaxStreamsBidirectional = 50
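For context, here is a minimal sketch of how we wire these options into a connection; the ALPN string, host, and port are placeholders, not our real values.

import Network

let quicOptions = NWProtocolQUIC.Options(alpn: ["my-proto"]) // placeholder ALPN
quicOptions.idleTimeout = 86_400_000
quicOptions.maxUDPPayloadSize = Int.max
quicOptions.initialMaxData = Int.max
quicOptions.initialMaxStreamDataBidirectionalLocal = Int.max
quicOptions.initialMaxStreamDataBidirectionalRemote = Int.max
quicOptions.initialMaxStreamDataUnidirectional = Int.max
quicOptions.initialMaxStreamsBidirectional = 50

let parameters = NWParameters(quic: quicOptions)
let connection = NWConnection(host: "example.com", port: 4433, using: parameters)
connection.start(queue: .main)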