Hi there, I have some threading questions regarding Network framework completion callbacks. In short, how should I process cross-thread data in the completion callbacks?
Here are more details. I have a background serial dispatch queue (call it dispatch queue A) that sequentially processes the nw_connection and any network I/O events. Meanwhile, user input is handled by another serial dispatch queue (dispatch queue B). How should I handle the cross-thread user data in this case? (I've written some simplified sample code below.)
struct {
    int client_status;
    char *message_to_sent;
} user_data;

nw_connection_t nw_connection;
dispatch_queue_t dispatch_queue_A;

static void send_message(void) {
    dispatch_data_t data = dispatch_data_create(
        user_data.message_to_sent, strlen(user_data.message_to_sent),
        dispatch_queue_A, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    nw_connection_send(
        nw_connection, data, NW_CONNECTION_DEFAULT_MESSAGE_CONTEXT, false,
        ^(nw_error_t error) {
            user_data.client_status = SENT;
            mem_release(user_data.message_to_sent);
        });
}

static void setup_connection(void) {
    dispatch_queue_A =
        dispatch_queue_create("unique_id_a", DISPATCH_QUEUE_SERIAL);
    nw_connection = nw_connection_create(endpoint, params);
    // Deliver all of this connection's handlers on dispatch queue A.
    nw_connection_set_queue(nw_connection, dispatch_queue_A);
    nw_connection_set_state_changed_handler(
        nw_connection, ^(nw_connection_state_t state, nw_error_t error) {
            if (state == nw_connection_state_ready) {
                user_data.client_status = CONNECTED;
            }
            // ... other operations ...
        });
    nw_connection_start(nw_connection);
}
static void user_main(void) {
    setup_connection();
    user_data.client_status = INIT;
    dispatch_queue_t dispatch_queue_B =
        dispatch_queue_create("unique_id_b", DISPATCH_QUEUE_SERIAL);

    // Write to the connection.
    dispatch_async(dispatch_queue_B, ^{
        if (user_data.client_status != CONNECTED) return;
        user_data.message_to_sent = malloc(/* ... */);
        // I would like all I/O events processed on dispatch queue A so that
        // the I/O events do not interact with the user events.
        dispatch_async(dispatch_queue_A, ^{ send_message(); });
    });

    // Disconnect block.
    dispatch_async(dispatch_queue_B, ^{
        dispatch_async(dispatch_queue_A, ^{
            nw_connection_cancel(nw_connection);
        });
        user_data.client_status = DISCONNECTING;
    });

    // clean up connection and so on...
}
To be more specific, my questions would be:

- As I was using serial dispatch queues, I didn't protect `user_data` here. However, on which thread would the `send_completion_handler` get called? Would it be a data race where the Disconnect block and `send_completion_handler` both access `user_data`?
- If I protect `user_data` with a lock, it might block the thread. How does the dispatch queue make sure it would NOT put a related execution block onto the "blocked thread"?
> How should I handle the cross-thread user data in this case?
That’s kinda up to you. The only guarantee that Network framework applies is that it’ll call the handlers for a connection on the queue that you apply to the connection. Everything after that is your concern.
If you use a serial queue, Dispatch guarantees that only one block will run on the queue at a time.
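Concretely, that guarantee hangs off `nw_connection_set_queue`. Here's a minimal sketch of applying a serial queue to a connection (Apple platforms only; `endpoint` and `params` stand in for values set up elsewhere, as in the original post):

```c
#include <Network/Network.h>
#include <dispatch/dispatch.h>

// Sketch: once a serial queue is applied to the connection, every handler
// — state changes, send/receive completions — runs on that queue, one at
// a time. `endpoint` and `params` are assumed to exist already.
static void apply_queue(nw_endpoint_t endpoint, nw_parameters_t params) {
    dispatch_queue_t queue_a =
        dispatch_queue_create("unique_id_a", DISPATCH_QUEUE_SERIAL);
    nw_connection_t connection = nw_connection_create(endpoint, params);
    nw_connection_set_queue(connection, queue_a);  // handlers run here
    nw_connection_start(connection);
}
```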
> As I was using serial dispatch queues, I didn't protect the `user_data` here.
And, as you’ve determined, that would be a problem because Dispatch only serialises execution for each serial queue. If you have two serial queues, they can do work in parallel [1].
> If I protect the `user_data` with a lock, it might block the thread.
That would be a reasonable approach. But, as always with locks, it’s best to hold the lock for a minimum amount of time. If you do this:
- Lock.
- Add some data to your buffer.
- Unlock.
you are unlikely to ever run into problems. OTOH, if you expand step 2 with “… and then call some higher-level function to process that data” then you’re more likely to hit issues.
> How does the dispatch queue make sure it would NOT put a related execution block onto the "blocked thread"?
Presuming that these are serial queues, this question doesn’t make sense. Dispatch assigns threads to queues when the queues have work to do. With a serial queue, only one thread can be running work from that queue at a time. So any subsequent work on the queue is going to be blocked until the thread doing the work returns. It doesn’t matter [2] if the thread doing the work is blocked, or just taking a long time to grind through its work.
> And here is my current solution to avoid any possible data races: I always wrap any `user_data` changes in dispatch queue B.
That’ll work. It may not be the most efficient approach — dispatching to a queue is fast, but it’s significantly slower than taking a lock — but that only matters if you’re dealing with a lot of data.
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] You can set up more complex systems here, most notably using target queues, but I’m presuming that you’ve not done so.
[2] Well, it kinda does, due to both priority propagation and overcommit, but those are much more advanced topics.