Changing the host to 127.0.0.1 seems to solve the POSIXError 22.
Getting these logs now:
boringssl_session_handshake_incomplete(210) [C3:1][0x10463d380] SSL library error
boringssl_session_handshake_error_print(44) [C3:1][0x10463d380] Error: 4375179344:error:10000133:SSL routines:OPENSSL_internal:reason(307):/Library/Caches/com.apple.xbs/Sources/boringssl/ssl/extensions.cc:1433:
boringssl_session_handshake_error_print(44) [C3:1][0x10463d380] Error: 4375179344:error:10000093:SSL routines:OPENSSL_internal:ERROR_ADDING_EXTENSION:/Library/Caches/com.apple.xbs/Sources/boringssl/ssl/extensions.cc:3892:extension 16
quic_crypto_connection_state_handler [C2:1] [-7b9638dc5ec49cb9] TLS error -9858 (state failed)
nw_connection_copy_connected_local_endpoint_block_invoke [C3] Client called nw_connection_copy_connected_local_endpoint on unconnected nw_connection
nw_connection_copy_connected_remote_endpoint_block_invoke [C3] Client called nw_connection_copy_connected_remote_endpoint on unconnected nw_connection
nw_connection_copy_protocol_metadata_internal_block_invoke [C3] Client called nw_connection_copy_protocol_metadata_internal on unconnected nw_connection
Main Connection State: waiting(-9858: handshake failed)
If we connect via openssl s_client -connect 127.0.0.1:6667 -tls1_3, this is the output:
CONNECTED(00000003)
Can't use SSL_get_servername
depth=0 C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ...
verify error:num=18:self-signed certificate
verify return:1
depth=0 C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ...
verify return:1
---
Certificate chain
0 s:C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ...
i:C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ...
a:PKEY: rsaEncryption, 4096 (bit); sigalg: RSA-SHA256
v:NotBefore: May 8 06:59:00 2019 GMT; NotAfter: May 5 06:59:00 2029 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
<some_certificate>
-----END CERTIFICATE-----
subject=C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ..
issuer=C = US, ST = California, L = Menlo Park, O = Proxygen, OU = Proxygen, CN = Proxygen, emailAddress = ...
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2319 bytes and written 289 bytes
Verification error: self-signed certificate
---
New, TLSv1.3, Cipher is TLS_CHACHA20_POLY1305_SHA256
Server public key is 4096 bit
This TLS version forbids renegotiation.
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 18 (self-signed certificate)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
Protocol : TLSv1.3
Cipher : TLS_CHACHA20_POLY1305_SHA256
Session-ID: 8C3CC90C4E4C1EA2A464C8026A082617B9DDEB4E3C5390AD1209B1E7699A3B3F
Session-ID-ctx:
Resumption PSK: AE68BFE7F82ADB67D17B4838CE8C3E9231A25F1E880068B667CA44554276258C
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 3600 (seconds)
TLS session ticket:
Start Time: 1710478613
Timeout : 7200 (sec)
Verify return code: 18 (self-signed certificate)
Extended master secret: no
Max Early Data: 0
---
So we added the lines below to the startQUICConnection method, in order to use the cipher TLS_CHACHA20_POLY1305_SHA256 along with TLS 1.3, as reported by openssl:
sec_protocol_options_set_min_tls_protocol_version(options.securityProtocolOptions, .TLSv13)
sec_protocol_options_set_max_tls_protocol_version(options.securityProtocolOptions, .TLSv13)
sec_protocol_options_append_tls_ciphersuite(options.securityProtocolOptions, tls_ciphersuite_t(rawValue: TLS_PSK_WITH_CHACHA20_POLY1305_SHA256)!)
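For reference, a simplified sketch of where those lines sit in the connection setup (the surrounding boilerplate and queue are illustrative, not the exact code; the ciphersuite is written here with the tls_ciphersuite_t case for the TLS 1.3 ChaCha20-Poly1305 suite instead of a raw-value initializer):

import Network
import Security

// Simplified, illustrative sketch of the QUIC connection setup; the endpoint,
// queue, and surrounding structure are placeholders, not the exact implementation.
func startQUICConnection() -> NWConnection {
    let options = NWProtocolQUIC.Options()

    // The lines added above, applied to the QUIC security options.
    sec_protocol_options_set_min_tls_protocol_version(options.securityProtocolOptions, .TLSv13)
    sec_protocol_options_set_max_tls_protocol_version(options.securityProtocolOptions, .TLSv13)
    // tls_ciphersuite_t.CHACHA20_POLY1305_SHA256 is the TLS 1.3 ChaCha20-Poly1305 suite.
    sec_protocol_options_append_tls_ciphersuite(options.securityProtocolOptions, .CHACHA20_POL305_SHA256)

    let parameters = NWParameters(quic: options)
    let connection = NWConnection(host: "127.0.0.1", port: 6667, using: parameters)
    connection.start(queue: .main)
    return connection
}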
Now, what's the meaning of SSL routines:OPENSSL_internal:ERROR_ADDING_EXTENSION, and how can we solve it?
We really appreciate this recommendation regarding the use of QUIC Datagrams, and we just put it in the queue for the next POC. Thank you, this was extremely helpful.
Now, since we are constructing a document about the actual POC implementation we have (just because it is a business requirement), the one I just described previously, we would like to report/ask about a strange behavior: the throughput starts really well, approx. 62 MB/s or higher, but after some time (30 s or so) it drops to approx. 1-2 MB/s or less. Then, after a while, it recovers, but it can also drop again just as the first time, and the pattern repeats on and on. I'm two meters away from my 6 GHz Wi-Fi router when I run the tests.
Above is basically the server sending the packets; that flat behavior at the right is what we do not understand.
We would really appreciate some diagnosis of this issue, and we're happy to provide more details if needed.
Thanks in advance.
1.- How are you sending this data?
We have a pool of NWConnection objects which were initialized on the server using the group object; all of those connections are already in the "ready" state. When a data buffer is ready/prepared on the server, we pull a connection from the pool, divide the buffer into chunks (64 KB each), and call the send method for every chunk. When the last chunk is sent, we wait for the .contentProcessed completion to be called (we wait only for that last chunk to be processed), and then we repeat the process with another buffer and a different connection from the pool. As you can see, this happens pretty fast; there's not much pacing that we are doing here, because we do not have an understanding of how many bytes can be in flight over the wire at a particular time, which would probably dictate how much data we can send.
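To make that concrete, here is a minimal sketch of the chunked-send pattern just described (the function and constant names are illustrative, not the exact code):

import Foundation
import Network

// Minimal, illustrative sketch of the chunked-send pattern described above;
// the names and the 64 KB chunk size mirror the description, not the exact code.
let chunkSize = 64 * 1024

func send(buffer: Data, over connection: NWConnection,
          whenLastChunkProcessed completion: @escaping (NWError?) -> Void) {
    var offset = 0
    while offset < buffer.count {
        let end = min(offset + chunkSize, buffer.count)
        let isLastChunk = (end == buffer.count)
        connection.send(
            content: buffer.subdata(in: offset..<end),
            contentContext: .defaultMessage,
            isComplete: isLastChunk,
            completion: .contentProcessed { error in
                // Only the last chunk's completion gates moving on to the next
                // buffer (and the next connection from the pool).
                if isLastChunk { completion(error) }
            })
        offset = end
    }
}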
So the short answer is: yes, we are using an existing NWConnection for each message.
2.- Did you mean “server and client”?
Yes, that's what I meant.
Happy to clarify any other behavior...
Thanks.
Our thought is that we want to be respectful of the client's bandwidth, but you are probably asking because that's something that's handled for us? For example, if our messages were 100 MB (instead of 2.5 MB, which is the case now), should we just send the entire 100 MB buffer and let Network framework take care of that? Interesting...
Then, our client's receive method will process whatever comes until we get the entire file?
func receive() {
    connection.batch {
        connection.receive(minimumIncompleteLength: 1, maximumLength: Int.max) { [weak self] content, contentContext, isComplete, error in
            // ...
            // ...
            self?.receive()
        }
    }
}
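For completeness, a simplified sketch of how that receive loop gets kicked off once the connection reports ready (this assumes the same object that owns connection and receive(); the queue is illustrative):

// Simplified sketch: start the receive loop once the connection is ready.
func start() {
    connection.stateUpdateHandler = { [weak self] state in
        if case .ready = state {
            self?.receive()
        }
    }
    connection.start(queue: .main)
}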
Questions:
Should we just send the entire buffer and let the receive method get all the chunks?
If the answer to the previous question is yes, should we even care about implementing some kind of pacing mechanism in order to understand the amount of data we need to send to the client?
For feedback/back-pressure purposes, if we wanted to implement that, is there a way to query/get the number of bytes that can be sent over the wire at a particular time?
Thanks in advance.
What I'm doing right now is this: since we are working with 2.5 MB buffers, I send the entire buffer using only one call to the send method, and when I get the completion for that I send the next buffer, and I repeat that again and again.
The reason I do that is because, if I just call send sequentially for, let's say, 10 or 20 buffers WITHOUT waiting for the completion of each to be triggered, it doesn't seem to send data to the client and the CPU goes to 300-400%.
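In code, that pattern looks roughly like this (a simplified sketch; buffers stands in for the prepared 2.5 MB payloads, not the actual data source):

import Foundation
import Network

// Simplified sketch of the one-buffer-at-a-time pattern described above;
// `buffers` is an illustrative stand-in for the prepared 2.5 MB payloads.
func sendSequentially(_ buffers: [Data], over connection: NWConnection, startingAt index: Int = 0) {
    guard index < buffers.count else { return }
    connection.send(content: buffers[index], completion: .contentProcessed { error in
        guard error == nil else { return }
        // Only after the current buffer is processed do we send the next one.
        sendSequentially(buffers, over: connection, startingAt: index + 1)
    })
}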
Are we missing something?
What you are describing makes total sense to me.
Now, before filing a bug let me give you a little more context:
1.- I'm using a macOS app as the QUIC server using Network framework (macOS Sonoma 14.5)
2.- Client app is on visionOS 2 Beta 2 and I also use Network framework.
The macOS app is the one sending the streams to the client and showing the weird CPU usage.
The client app on visionOS seems stable in terms of CPU and memory. I'm using Xcode 16 Beta, btw.
Sure np.
After isolating the logic to only the Network framework implementation, I'm observing exactly what you described:
1.- CPU is steady at 34%.
2.- Memory is under pressure, because I'm not waiting for the completion handler before sending the next one (I can find a balance in this scenario, not a big deal).
My implementation must have some specific logic that is causing the CPU to spike like that, so I'm glad it is not a bug in the framework.
Really appreciate your help.
Thanks
That's exactly the kind of answer I was expecting.
Thank you.
Thanks for coming back so quickly.
The Server app is based on SwiftUI and is meant to be a macOS-only app.
The QUIC networking implementation that the Server app uses is inside a custom framework we created.
Kindly let us know if you need more details.
I tried adding the activity calls, and Activity Monitor shows the server app is preventing sleep. Also, I do not see the sleeping behavior anymore, so this was really helpful.
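For reference, the activity calls look roughly like this (a simplified sketch using ProcessInfo's activity API; the reason string and option set are illustrative):

import Foundation

// Simplified sketch of the activity calls; the reason string and options are illustrative.
let activity = ProcessInfo.processInfo.beginActivity(
    options: [.userInitiated, .idleSystemSleepDisabled],
    reason: "Streaming QUIC data to clients")

// ... long-running transfer work ...

ProcessInfo.processInfo.endActivity(activity)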
Thank you.