[Removed] this was an attempt to post the console output from the error above
I tried to add my console output with the network error stack trace for this issue multiple times, but the forum keeps deeming it "sensitive", so I'm attaching it as a .txt here.
Console output.txt
I decided to submit this as a bug using Feedback Assistant as well, see: FB13747811 (nw_proto_tcp_route_init [C6:3] no mtu received)
Hi Quinn,
Thanks for the response. I do have all the various handlers set, and unfortunately none of them are getting hit. I've been experimenting with some of our NWProtocolTCP options and noticed we originally had connectionTimeout set to 10 seconds. Increasing that to 300 seconds (an arbitrarily longer value) means we no longer hit the issue. However, we've had the shorter timeout for close to two years, so I'm not sure what's changed internally, or whether I'm just blindly throwing values at our TCP options and this one happened to work without being the actual cause of the issue.
Thanks!
Jatinder
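For reference, a minimal sketch of the kind of timeout tweak described above, assuming a peer-to-peer Bonjour service; the service name, endpoint, and the 300-second value are placeholders rather than the poster's actual configuration:

import Network

// Configure the TCP connection timeout (seconds). Raising this from 10 to 300
// is the change discussed above; the value here is illustrative.
let tcpOptions = NWProtocolTCP.Options()
tcpOptions.connectionTimeout = 300

let parameters = NWParameters(tls: nil, tcp: tcpOptions)
parameters.includePeerToPeer = true  // assumed, since the thread concerns peer-to-peer connections

// Hypothetical Bonjour service endpoint for illustration only.
let connection = NWConnection(
    to: .service(name: "example", type: "_example._tcp", domain: "local.", interface: nil),
    using: parameters
)
connection.stateUpdateHandler = { state in
    print("connection state:", state)
}
connection.start(queue: .main)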
Done and done. Feedback ID: FB13401552 (Enabling a multipathServiceType in NWParameters on iOS 17.x+ breaks establishing peer to peer NWConnections)
You're totally right; we originally didn't have it enabled at all. We eventually noticed that, with multipath enabled, we got a faster rate of sending data over the NWConnection using send(content:contentContext:isComplete:completion:).
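For context, this is roughly what enabling a multipath service type and then sending over the connection looks like; the .handover choice, host, and port are assumptions for illustration, not the poster's setup:

import Network

// Enable multipath on the connection's parameters.
let parameters = NWParameters.tcp
parameters.multipathServiceType = .handover
parameters.includePeerToPeer = true

// Placeholder host and port.
let connection = NWConnection(host: "example.local", port: 12345, using: parameters)
connection.start(queue: .main)

// Send data with the send(content:contentContext:isComplete:completion:) API mentioned above.
let payload = Data("hello".utf8)
connection.send(content: payload, completion: .contentProcessed { error in
    if let error {
        print("send failed:", error)
    }
})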
Hi Quinn,
Sorry for the slow response. I managed to record an RVI packet trace, and the result is very peculiar. The TCP traffic appears to just stop; it's as though a lock were occurring on the sending device, and even though we keep calling send(content:contentContext:isComplete:completion:), none of the data goes through after this "lock" occurs.
In the end I realized we had a number of Tasks potentially calling this code in parallel; it's wrapped in an async method in our peer-to-peer comms layer. By moving the method to always execute on @MainActor, the issue appears resolved. This is a stopgap, and I'm still investigating whether the way I've structured my concurrency code is at fault or can be restructured to interact with the NWConnection more cleanly. Just wanted to update the thread and see if you have any thoughts or opinions.
Thanks!
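One possible direction for replacing the @MainActor stopgap, sketched under assumptions (the PeerSender name and structure are hypothetical, not from the thread): confine the connection to a dedicated actor and bridge the callback-based send into async/await, so the many Tasks all funnel through one isolation domain instead of the main actor.

import Network

// Hypothetical actor that owns the NWConnection. Actor isolation keeps the calls
// into connection.send from running concurrently; note actors are reentrant, so a
// later send can still be issued while an earlier completion is pending.
actor PeerSender {
    private let connection: NWConnection

    init(connection: NWConnection) {
        self.connection = connection
    }

    // Bridges the callback-based send API into async/await so callers can `try await` it.
    func send(_ data: Data) async throws {
        try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
            connection.send(content: data, completion: .contentProcessed { error in
                if let error {
                    continuation.resume(throwing: error)
                } else {
                    continuation.resume()
                }
            })
        }
    }
}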