Consider the slide at 20:55. It appears to show two dispatch sources (S1, S2) that each target their own serial queue (Q1, Q2), which in turn target a single serial queue (EQ). My interpretation of that slide is that all of the work is serialized by the one root queue, which means that S1 and S2 do not provide any additional concurrency.

A minute later, the speaker mentions that the order of the work items is guaranteed by the root "mutual exclusion queue", but that would have been the case anyway with a single dispatch source.

A few slides later, there's one titled "QoS and Target Queue Hierarchy" which attempts to explain why you'd want to use multiple dispatch sources. In this example, S1 has a low QoS while S2 has a high QoS. But since they both target a root queue, there's a good chance that the entire tree will run at the higher QoS if S2 is adding a lot of work items. That means that low-priority items added by S1 will get boosted to a higher QoS, which is unlikely to be what I'd want. I'd much rather the system context switch over to the higher-QoS work item, execute it, then go back to the lower-QoS work item. This isn't possible in the presented design because of the root queue.

At 26:23, another example is presented using a "single mutual exclusion queue" as the root queue. In this example, the problem really seems to be that the jobs are too small to warrant individual work items. But the solution presented means that only a single event handler can be running at once.

At 28:30 the subject of subsystems is brought up. It's very possible I'm misinterpreting this part of the talk. The solutions presented involve each subsystem targeting a serial queue (main queue, networking queue, database queue). Excluding the main queue because it's special, why would I want the networking and database queues to be serial? A long-running work item on either would significantly slow down the overall application.
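For reference, here's a minimal sketch of the hierarchy as I read the 20:55 slide. The queue labels match the slide, but the source types and handlers are my own stand-ins, since the slide doesn't say what S1 and S2 actually are:

```swift
import Dispatch

// My reading of the slide: two dispatch sources, each on its own serial
// queue, with both queues targeting one "mutual exclusion" root queue (EQ).
let eq = DispatchQueue(label: "EQ")              // root serial queue
let q1 = DispatchQueue(label: "Q1", target: eq)  // serial, funnels into EQ
let q2 = DispatchQueue(label: "Q2", target: eq)  // serial, funnels into EQ

// Hypothetical user-data sources standing in for whatever S1/S2 really are.
let s1 = DispatchSource.makeUserDataAddSource(queue: q1)
let s2 = DispatchSource.makeUserDataAddSource(queue: q2)

s1.setEventHandler { print("S1 fired") }
s2.setEventHandler { print("S2 fired") }
s1.activate()
s2.activate()
```

Because Q1 and Q2 share EQ as their target, their event handlers can never run concurrently, which is exactly why I don't see what the second source buys you in this design.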
Multiple requests to read from a database should be allowed to happen concurrently, IMHO.

My earlier comment regarding a single root queue for the entire app was somewhat influenced by the subsequent slides that suggest using a "Fixed number of serial queue hierarchies." If you look at 31:30, "Mutual Exclusion Context", they show a simple tree with a root serial queue (EQ). On the next slide, they reference EQ as the application queue, or at least that's how I read it.

Finally, consider the slide at 43:00, "Protecting the Queue Hierarchy". The first bullet point suggests that one should "Build your queue hierarchy bottom to top." In that diagram, I see EQ as a root queue for the application, with Q1/S1 and Q2/S2 being task-related, or subsystems if the application is large enough.

But even if I was wrong in concluding that there should be a root serial queue, I'm still conflicted as to why I'd want all my subsystems to have serial queues. If all of my tasks are long enough to warrant their own work items, then I want as many of them running as is reasonably possible given the cores available to me. If I'm rendering thumbnails on a MacBook Pro, then I might want 4-6 thumbnail requests to run concurrently. If I'm running on a Mac Pro, then I can handle a lot more. I can't have that flexibility if I build a hierarchy of serial queues, yet that seems to be Apple's recommendation in some of the more recent WWDC videos related to GCD.

Follow-up:

Proper use of GCD is obviously quite dependent on how your application is architected, so I'm clearly approaching this from my app's perspective. Out of interest, my app's architecture looks something like this:

- A half-dozen or so "managers" that one could consider to be subsystems.
- Each manager has a single, concurrent execution queue with an appropriate QoS level.
- Each manager is responsible for a certain type of request.
  (Database, Export, Caching, Rendering, etc.)
- Requests are submitted to each manager, almost always from the main thread as the result of a user event.
- Requests are immutable and independent of each other. There are no locks or shared resources involved.
- Requests are allowed to execute out-of-order, if explicitly stated. (e.g., two database reads can happen out-of-order, but writes cannot.)
- Requests are relatively high-level and, with very few exceptions, run within their own work item. (i.e., a request does not, in turn, spawn other GCD work items.)

An example request might be exporting an image, rendering a thumbnail, performing a database operation, etc. A request might use a framework, like AVFoundation or Core Image, that in turn uses multi-threading. This is where some manual throttling needs to happen: if you have six cores and try to decode six RAW files concurrently, you'll get worse performance than decoding two or three concurrently, since Image IO spawns a bunch of threads itself.

Using serial queues in each manager/subsystem would reduce the concurrency in my app. I've tested that by limiting how many concurrent thumbnail requests I allow at any given time, and the degradation is visually obvious.

So my app makes almost exclusive use of concurrent queues, with the odd barrier block when some form of synchronization is required. However, this seems very much at odds with the above-mentioned WWDC talk, as well as the tips listed at:

https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057
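To make the manager pattern concrete, here's a minimal sketch of the shape one of these subsystems takes (the class, labels, and method names are hypothetical, not lifted from my actual code): a concurrent queue so independent reads can overlap, with a barrier block for the writes that can't be reordered:

```swift
import Dispatch

// Hypothetical sketch of one "manager"/subsystem: independent reads run
// concurrently; writes use a barrier so they run exclusively, mirroring
// "two database reads can happen out-of-order, but writes cannot".
final class DatabaseManager {
    private let queue = DispatchQueue(label: "app.database",
                                      qos: .userInitiated,
                                      attributes: .concurrent)
    private var store: [String: String] = [:]

    func read(_ key: String, completion: @escaping (String?) -> Void) {
        // Plain async on a concurrent queue: multiple reads may overlap.
        queue.async { completion(self.store[key]) }
    }

    func write(_ key: String, value: String) {
        // Barrier: waits for in-flight reads, runs alone, then lets
        // later work resume.
        queue.async(flags: .barrier) { self.store[key] = value }
    }
}
```

The throttling mentioned above (e.g., capping concurrent thumbnail renders to 4-6) would sit on top of this, for instance with a counting DispatchSemaphore or an OperationQueue's maxConcurrentOperationCount.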