GCD dispatch_apply()

In the code snippet below, I am trying to have a routine run concurrently with GCD. However, the queue runs the code serially. How would one force the queue to run concurrently? The WWDC 2017 video shows:

dispatch_apply( count , DISPATCH_APPLY_AUTO , ^(size_t i) { ...} );

but Xcode doesn't seem to recognize this syntax. Is there a value for the flag parameter of dispatch_get_global_queue that would force concurrency?


    dispatch_queue_t aQueue = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);

    dispatch_apply( turnsMax, aQueue, ^( size_t t )
    {
        // ... the per-turn work goes here ...
    });
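
For reference, a complete, self-contained version of the form I believe the video is showing is sketched below. The searchAllTurns wrapper and the placeholder body are only illustrative, and as far as I can tell DISPATCH_APPLY_AUTO requires a deployment target of macOS 10.13 / iOS 11 or later, which by itself could be why Xcode rejects it here.

#include <dispatch/dispatch.h>

// Minimal sketch; the wrapper name and the per-iteration work are placeholders.
static void searchAllTurns(size_t turnsMax)
{
    dispatch_apply( turnsMax, DISPATCH_APPLY_AUTO, ^( size_t t )
    {
        // ... search the branch for turn t ...
    });
}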

Accepted Reply

Rather than have dispatch_apply() run the loop, I have replaced the above with repeated calls to dispatch_async() to avoid the problem. The following code keeps all of the processor cores busy.

- (void)searchTreeBranches
{
    int                 n;
    RBK_Turn            turn;

    // load up the processor cores

    for ( n = 0 ; n < turnsMax; n++)
    {
        turn = turns[n];


        dispatch_async( myDispatchQueue, ^( void )
        {
            // ... search the branch for this turn ...
        });
    }
}
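
For anyone trying to reproduce this, here is a self-contained sketch of the same pattern, assuming it is compiled as Objective-C with ARC. The RBK_Turn typedef, the searchBranchForTurn function, and the dispatch_group used to wait for all of the branches are assumptions added only so the sketch compiles and runs; they are not the original implementation.

#include <dispatch/dispatch.h>
#include <stdio.h>

typedef int RBK_Turn;                          // placeholder for the real turn type

// placeholder for the real per-branch search
static void searchBranchForTurn(RBK_Turn turn)
{
    printf("searching branch for turn %d\n", turn);
}

static void searchTreeBranches(const RBK_Turn *turns, int turnsMax)
{
    // a global concurrent queue, so the blocks can run in parallel
    dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
    dispatch_group_t group = dispatch_group_create();

    for (int n = 0; n < turnsMax; n++)
    {
        RBK_Turn turn = turns[n];              // the block captures this value

        dispatch_group_async(group, queue, ^{
            searchBranchForTurn(turn);
        });
    }

    // block until every branch has finished
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
}

Note that dispatch_group_wait blocks the calling thread until all of the blocks have completed, so in a GUI app this should not run on the main queue.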

Replies

I think I have a clearer picture of what the problem is. The code block is an IDA (Iterative Deepening Algorithm) search in the Cayley graph of the Rubik's Cube group for a solution to a cube position. For the first few iterations the code block returns almost immediately. As the search goes deeper the times increase exponentially. The problem is that GCD profiles the block based on the early iterations and decides it would be more efficiently run serially. So, when the search reaches a deeper level where parallel processing gives a huge performance increase GCD continues to run the tasks one at a time. How can one preempt that profiling?

GCD currently doesn't do any dynamic profiling of your code, nor does it adjust the number of threads it gives you accordingly, although that would be a very cool feature.

GCD does have some smarts to detect that we might be in nested dispatch_apply logic, and it throttles thread creation if so. Could that be happening in your code?

dispatch_apply(iterations, queue, ^(size_t i) {

    dispatch_apply(iterations2, DISPATCH_APPLY_AUTO, ^(size_t j) {
        /* We will not spawn more threads for the inner dispatch_apply */
    });
});

If you are using the code block above, I'd expect you to get some parallelism from the outer dispatch_apply block though. Are you profiling your code with Instruments System Trace?

  • Actually, I've decided that dispatch_apply is inappropriate to what I am doing. Rather, I've taken the loop out of the hands of dispatch and populated the queue with repeated calls to dispatch_async. From watching the WWDC video I got the idea that dispatch_apply would be more efficient, but that turns out not to be the case here.
