I'll start by answering your questions at the end, though I suspect the best response you can give will be "yeah, that's rough, I guess there isn't much that can be done here". As I implement this I'm actually more interested in some of the other things I describe below, so I don't mind if we don't have anything actionable here yet.
How many messages do you expect to send per second?
Several hundred.
And what’s a typical size?
Somewhat bimodal. I'm working on a video streaming application, so "control" messages range from tens of bytes to, say, a few kilobytes. Then there are video frames, which are much larger: hundreds of kilobytes to a few megabytes (at least I think that's about right; I'm still working on optimizing this).
What proportion of them must be delivered reliably?
Control messages must be delivered reliably and in order. Currently I send video frames on the same stream, which makes things easier for me, but in theory you could drop and reorder those and I could do a best-effort rendering on the other end based on whatever I receive.
What proportion of them are latency sensitive?
I know this isn't very helpful, but "all of them". Let's say control messages are a little more important, just because they carry metadata about what is going on, but in general I want all the messages delivered as fast as possible.
And roughly what sort of latency are we talking about?
"As little as possible". In practice let's say 10-50ms would be pretty good.
Currently I send everything over TCP, which works but isn't great. With my current understanding, I think what I "should" be doing is running my control messages over TCP and opening a separate UDP connection for video frames (you can make two connections to the same endpoint, right?). I don't know much about other transport protocols, but if you have suggestions here I'm all ears.
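To make that concrete, here is a rough sketch of the split I have in mind: two NWConnections to the same host, TCP for control and UDP for frames. The host name, the ports, and the choice to put the two channels on separate ports are all just placeholders for illustration.

import Foundation
import Network

// Placeholder endpoint details; in practice these would come from discovery.
let host = NWEndpoint.Host("receiver.local")
let controlEndpoint = NWEndpoint.hostPort(host: host, port: 7000)
let videoEndpoint = NWEndpoint.hostPort(host: host, port: 7001)

// Reliable, ordered channel for control messages.
let controlParameters = NWParameters.tcp
controlParameters.serviceClass = .responsiveData
let controlConnection = NWConnection(to: controlEndpoint, using: controlParameters)

// Best-effort channel for video frames; loss and reordering are tolerated.
let videoParameters = NWParameters.udp
videoParameters.serviceClass = .interactiveVideo
let videoConnection = NWConnection(to: videoEndpoint, using: videoParameters)

let queue = DispatchQueue(label: "transport")
controlConnection.start(queue: queue)
videoConnection.start(queue: queue)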
With that out of the way, I hear you about the underlying protocol. I'm actually very open to not worrying about it and letting the system pick whatever the ideal path is (in this case, whatever is fastest/has the least latency). From your responses it sounds like the way to do that is to give the system the most flexibility to pick a path (including setting includePeerToPeer) and then respond to path updates as network conditions change.
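Concretely, this is roughly the setup I mean (simplified; the endpoint here stands in for whatever discovery gives me):

import Foundation
import Network

// Placeholder endpoint; in my app this comes from service discovery.
let endpoint = NWEndpoint.hostPort(host: "receiver.local", port: 7000)

let parameters = NWParameters.tcp
parameters.includePeerToPeer = true   // let the system also consider peer-to-peer link types

let connection = NWConnection(to: endpoint, using: parameters)
connection.betterPathUpdateHandler = { betterPathAvailable in
    // Fires with true when the system believes a preferable path exists
    // (for example, Wi-Fi became available while the connection is on cellular).
    if betterPathAvailable {
        // This is where I would kick off the migration described below.
    }
}
connection.pathUpdateHandler = { newPath in
    // Fires when the path used by this particular connection changes.
    print("path is now \(newPath)")
}
connection.start(queue: DispatchQueue(label: "transport"))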
I've actually already done the first part, and it looks pretty similar to the TicTacToe example you linked. The API I expose to my app is a bidirectional "pipe" you can send data on, using the Network framework. This works, but now I need to think about the second part and I'm a little confused about migrating the connection.
At a high level, there are several things I need to do: I need to somehow create a new connection, then gracefully migrate to using that, then dispose of the original one. Each part poses challenges I have questions about.
When making a new connection, what I really want is a connection to the same endpoint. "Endpoint" is a term I've been using abstractly so far, but now I want to map it onto the actual classes in Network.framework. In my own words: I want a new path to the same endpoint, which means I create a new connection. Is that literally what I do with these APIs? Specifically, can I use the same NWEndpoint (in fact, can I just pull it out of the existing NWConnection?), or do I need to do service discovery again? I assume I then create a new NWConnection with that NWEndpoint (either the same instance, if that's allowed, or one I re-discovered). Will this automatically choose a new NWPath and know which one is best? If code is clearer than my words, my question is mostly whether reconnection in this case means something like this:
let originalConnection = /* whatever */
print(originalConnection.currentPath) // some bad path
/* betterPathUpdateHandler gets called */
let newConnection = NWConnection(to: originalConnection.endpoint, using: originalConnection.parameters) // Is this the right way to make a new connection?
print(newConnection.currentPath) // will this print a better path now?
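Spelled out a bit more, this is the shape I'm imagining, with the caveat that I'm assuming currentPath only becomes meaningful once the new connection reaches .ready (the function name and callback here are just mine):

import Foundation
import Network

// Hypothetical helper: open a replacement connection to the same endpoint and
// hand it back once it is ready, letting the system pick whatever path it now prefers.
func openReplacement(for originalConnection: NWConnection,
                     queue: DispatchQueue,
                     onReady: @escaping (NWConnection) -> Void) {
    let newConnection = NWConnection(to: originalConnection.endpoint,
                                     using: originalConnection.parameters)
    newConnection.stateUpdateHandler = { state in
        switch state {
        case .ready:
            // currentPath is populated now that the connection is established.
            print("new path: \(String(describing: newConnection.currentPath))")
            onReady(newConnection)
        case .failed(let error):
            print("replacement connection failed: \(error)")
            newConnection.cancel()
        default:
            break
        }
    }
    newConnection.start(queue: queue)
}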
Once I've made a new connection, I need to migrate traffic over to it. If I'm using UDP and my clients above me know this, then things seem easy: I can just swap the old and new connections and whatever is in flight gets dropped. However, if I'm promising sequential delivery, then it seems like I need to do more work here. The first thing I think I want to do is send NWConnection.ContentContext.finalMessage from the "host" (assuming this is the side whose path update handler got called first) to close the connection to new data. Once this happens I assume the host cannot send any more data. However, the other side (the "endpoint") won't know I did this until it receives the finalMessage. Is it legal for the endpoint to keep sending data before that finalMessage reaches it? Can it keep sending from its side until it sends its own finalMessage back to the host? I'm trying to understand whether this finalMessage is meant to be unidirectional, or whether it applies to data flowing in both directions on the connection.
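To make my mental model concrete, here is roughly what I picture the host side doing: one last send marked with finalMessage, then keep receiving until the peer's own final message shows up. Whether this is actually how finalMessage is meant to be used is exactly what I'm asking; windDown and handle are names I just made up:

import Foundation
import Network

// Hypothetical wind-down of the old connection: send my last message marked as
// final, then drain whatever the peer still has in flight until its own final
// message arrives. `handle` is whatever the app does with incoming data.
func windDown(_ oldConnection: NWConnection,
              lastControlMessage: Data,
              handle: @escaping (Data) -> Void) {
    oldConnection.send(content: lastControlMessage,
                       contentContext: .finalMessage,
                       completion: .contentProcessed({ error in
                           if let error = error {
                               print("final send failed: \(error)")
                           }
                       }))

    func drain() {
        oldConnection.receiveMessage { data, context, _, error in
            if let data = data {
                handle(data)                    // deliver data that was still in flight
            }
            if let context = context, context.isFinal {
                return                          // the peer is done too
            }
            if error == nil {
                drain()                         // otherwise keep reading
            }
        }
    }
    drain()
}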
Once I've finally stopped traffic on the old connection, I can swap in the new one for sending new traffic. To clean up the old one, I assume I should call cancel() on the NWConnection? Or should I be using cancelCurrentEndpoint() (I'm not entirely sure what that does)?
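In other words, something like this once the swap is done (activeConnection is just whatever my pipe wrapper currently sends on; the cancel() vs cancelCurrentEndpoint() choice is the part I'm unsure about):

import Network

// Hypothetical wrapper state: whichever connection the pipe currently sends on.
var activeConnection: NWConnection?

func finishMigration(old: NWConnection, new: NWConnection) {
    // By this point the old connection has been drained and the new one is .ready.
    activeConnection = new   // route all new traffic onto the replacement connection
    old.cancel()             // then tear down the old connection and release its resources
}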
If you have a sample project that actually implements betterPathUpdateHandler I think that would clear up a lot of these questions :)