My Setup: I have an Xcode workspace containing a software stack of three custom Swift frameworks (each with targets for both iOS and macOS) and a Swift iOS application that uses those frameworks. Other applications (both iOS and macOS) will be added to this stack in the future. My project currently has about 25,000 lines of Swift, no Objective-C, and is growing fast.
I am currently re-evaluating my concurrency model after watching a few WWDC talks from the GCD team, but both my old and new (theoretical) approaches seem to have fatal flaws.
My question is: What is the recommended practice for solving the problems in the approaches below, in Swift on Apple platforms?
Approach 1: The existing approach in my project is that each class requiring synchronization (because shared resources will be accessed) maintains its own serial queue per instance (with a nil target and the .workItem autorelease frequency) and calls sync and async on it as necessary. When done correctly, this approach properly avoids race conditions, but it clutters the code. Additionally, based on several years' worth of GCD talks at WWDC, it seems this approach could be prone to thread explosion.
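To make the pattern concrete, here is a minimal sketch of what Approach 1 looks like in my code (class and label names are illustrative, not from my actual project):

```swift
import Foundation

// Approach 1 sketch: each instance guards its own state with a
// private serial queue (nil target, .workItem autorelease frequency).
final class Counter {
    private let queue = DispatchQueue(
        label: "com.example.counter",
        autoreleaseFrequency: .workItem
    )
    private var value = 0

    func increment() {
        // Mutations are funneled through the serial queue.
        queue.async { self.value += 1 }
    }

    func read() -> Int {
        // Reads synchronize on the same queue.
        queue.sync { value }
    }
}
```

Every synchronized class in the project repeats this boilerplate, which is where the clutter comes from.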
Approach 2: At this year's WWDC, the GCD team recommended creating a small, finite number of serial queue hierarchies, one per subsystem, given recent changes to how GCD allots threads under the hood. I then created a new branch to try an approach where a QueueManager object distributes queues to the objects that need them. When an object wants a serial queue, it asks the singleton QueueManager for one and receives a new serial DispatchQueue whose target is set to the shared root queue for that subsystem. This approach not only retains the cluttered code (because sync and async are still everywhere), but it also introduces an inherent problem: deadlock. Suppose A and B are two classes in the same subsystem, both need to be thread-safe, and A happens to be built on top of B. In this model, both A and B use the same serial queue at the root of their queue hierarchy because they are part of the same subsystem. So, if A calls one of B's methods from within a sync block, and that B method also calls sync, deadlock will occur. Solving this would seem to require that A have intimate knowledge of B's implementation details, which also seems wrong.
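Here is a hedged sketch of the Approach 2 structure and the deadlock it produces (QueueManager and the A/B classes are hypothetical names for illustration; do not actually call doSomething() below, since it hangs by design):

```swift
import Foundation

// Approach 2 sketch: a QueueManager hands out serial queues that all
// target one shared root queue per subsystem.
final class QueueManager {
    static let shared = QueueManager()
    private let subsystemRoot = DispatchQueue(label: "com.example.subsystem")

    func makeQueue(label: String) -> DispatchQueue {
        DispatchQueue(label: label, target: subsystemRoot)
    }
}

final class B {
    private let queue = QueueManager.shared.makeQueue(label: "B")
    func work() { queue.sync { /* touch B's state */ } }
}

final class A {
    private let queue = QueueManager.shared.makeQueue(label: "A")
    private let b = B()
    func doSomething() {
        queue.sync {
            // Deadlock: b.work() calls sync on a queue targeting the
            // same root queue that is already blocked running this block.
            self.b.work()
        }
    }
}
```

Because both queues funnel into the same serial root, B's sync block can never start while A's block holds the root, and A's block never finishes because it is waiting on B.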
Yes, designing away the need for synchronization is the ideal approach, but I'm not sure how that is attainable in practice. For example, take an object that dispatches a bunch of blocks to run in parallel and then has a completion method called after each block finishes. Since that completion method can be called concurrently from multiple threads, you still need synchronization for any shared resources it touches. What is the recommended approach to avoid the issues above?
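For reference, here is a minimal sketch of the fan-out scenario I mean, using a DispatchGroup (names are illustrative). Even here, the completion bookkeeping has to be confined to a serial queue because the parallel blocks finish concurrently:

```swift
import Foundation

// Fan-out sketch: parallel blocks whose per-block completion handling
// mutates shared state, serialized on a private queue.
final class ParallelRunner {
    private let stateQueue = DispatchQueue(label: "com.example.runner.state")
    private var finishedCount = 0

    func run(blocks: [() -> Void], completion: @escaping (Int) -> Void) {
        let group = DispatchGroup()
        for block in blocks {
            DispatchQueue.global().async(group: group) {
                block()
                // Completion handling can fire from many threads at
                // once, so the shared counter lives on a serial queue.
                self.stateQueue.async { self.finishedCount += 1 }
            }
        }
        // Runs after all blocks (and their enqueued increments) finish.
        group.notify(queue: stateQueue) {
            completion(self.finishedCount)
        }
    }
}
```

The shared counter is exactly the kind of resource I don't see how to design away.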