Consider the slide at 20:55. It appears to show two dispatch sources (S1, S2) that each target their own serial queue (Q1, Q2), which in turn target a single serial queue (EQ). My interpretation of that slide (sketched in code at the end of this post) is that all of the work is serialized by the one root queue, which means that S1 and S2 do not provide any additional concurrency.

A minute later, the speaker mentions that the order of the work items is guaranteed by the root "mutual exclusion queue", but that would have been the case anyway with a single dispatch source.

A few slides later, there's one titled "QoS and Target Queue Hierarchy" which attempts to explain why you'd want to use multiple dispatch sources. In this example, S1 has a low QoS while S2 has a high QoS. But since they both target a root queue, there's a good chance that the entire tree will run at the higher QoS if S2 is adding a lot of work items. That means that low-priority items added by S1 will get boosted to a higher QoS, which is unlikely to be what I'd want. I'd much rather the system context switch over to the higher-QoS work item, execute it, then go back to the lower-QoS work item. That isn't possible in the presented design because of the root queue.

At 26:23, another example is presented using a "single mutual exclusion queue" as the root queue. In this example, the problem really seems to be that the jobs are too small to warrant individual work items. But the solution presented means that only a single event handler can be running at once.

At 28:30 the subject of subsystems is brought up. It's very possible I'm misinterpreting this part of the talk. The solutions presented involve each subsystem targeting a serial queue (Main Queue, Networking Queue, Database Queue). Excluding the main queue because it's special, why would I want the networking and database queues to be serial? A long-running work item on either would significantly slow down the overall application. Multiple requests to read from a database should be allowed to happen concurrently, IMHO.

My earlier comment regarding a single root queue for the entire app was somewhat influenced by the subsequent slides that suggest using a "fixed number of serial queue hierarchies."

If you look at 31:30, "Mutual Exclusion Context", they show a simple tree with a root serial queue (EQ). On the next slide, they reference EQ as the Application queue, or at least that's how I read it.

Finally, consider the slide at 43:00, "Protecting the Queue Hierarchy". The first bullet point suggests that one should "Build your queue hierarchy bottom to top." In that diagram, I see EQ as a root queue for the application, with Q1/S1 and Q2/S2 being task-related, or subsystems if the application is large enough.

But even if I was wrong to conclude that there should be a root serial queue, I'm still conflicted as to why I'd want all my subsystems to have serial queues. If all of my tasks are long enough to warrant their own work items, then I want as many of them running as is reasonably possible given the cores available to me. If I'm rendering thumbnails on a MacBook Pro, I might want 4-6 thumbnail requests to run concurrently; if I'm running on a Mac Pro, I can handle a lot more. I can't have that flexibility if I build a hierarchy of serial queues, yet that seems to be Apple's recommendation in some of the more recent WWDC videos related to GCD.
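For reference, here is the 20:55 hierarchy as I read it, expressed with the actual Dispatch API. This is my own reconstruction, not code from the talk, and the labels and QoS values are illustrative:

import Dispatch

// Two sources (S1, S2), each with its own serial queue (Q1, Q2),
// both of which target a single serial "mutual exclusion" queue (EQ).
let eq = DispatchQueue(label: "com.example.eq") // serial root queue
let q1 = DispatchQueue(label: "com.example.q1", qos: .utility, target: eq)
let q2 = DispatchQueue(label: "com.example.q2", qos: .userInitiated, target: eq)

let s1 = DispatchSource.makeUserDataAddSource(queue: q1)
let s2 = DispatchSource.makeUserDataAddSource(queue: q2)

s1.setEventHandler { /* low-QoS work, ultimately serialized by EQ */ }
s2.setEventHandler { /* high-QoS work, ultimately serialized by EQ */ }
s1.activate()
s2.activate()

Because Q1 and Q2 both target EQ, only one event handler can run at a time, which is what leads me to conclude that the two sources add no concurrency.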
Follow-up:

Proper use of GCD is obviously quite dependent on how your application is architected, so I'm clearly approaching this from my app's perspective. Out of interest, my app's architecture looks something like this:

- A half-dozen or so "managers" that one could consider to be subsystems.
- Each manager has a single, concurrent execution queue with an appropriate QoS level.
- Each manager is responsible for a certain type of request (database, export, caching, rendering, etc.).
- Requests are submitted to each manager, almost always from the main thread, as a result of a user event.
- Requests are immutable and independent of each other. There are no locks or shared resources involved.
- Requests are allowed to execute out-of-order, if explicitly stated. (e.g.: Two database reads can happen out-of-order, but writes cannot.)
- Requests are relatively high-level and, with very few exceptions, run within their own work item. (i.e.: A request does not, in turn, spawn other GCD work items.)
- An example request might be exporting an image, rendering a thumbnail, performing a database operation, etc.
- A request might use a framework, like AVFoundation or Core Image, that in turn uses multi-threading. This is where some manual throttling needs to happen, because if you have six cores and try to decode six RAW files concurrently, you'll get worse performance than decoding two or three concurrently, since Image IO spawns a bunch of threads itself.

Using serial queues in each manager/subsystem would reduce the concurrency in my app, and I've tested that by limiting how many concurrent thumbnail requests I allow at any given time; the degradation is visually obvious.

So my app makes almost exclusive use of concurrent queues, with the odd barrier block when some form of synchronization is required (see the sketch below). However, this seems very much at odds with the above-mentioned WWDC talk, as well as the tips listed on this page:

https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057
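For illustration, one of these managers looks roughly like this. It's a minimal sketch; the class name, queue label, and QoS are mine, not from any Apple sample:

import Dispatch

// Reads run concurrently; writes use a barrier so they remain
// exclusive relative to everything else on the queue.
final class DatabaseManager {
    private let queue = DispatchQueue(label: "com.example.database",
                                      qos: .userInitiated,
                                      attributes: .concurrent)

    func read(_ request: @escaping () -> Void) {
        queue.async(execute: request) // reads may run concurrently, out-of-order
    }

    func write(_ request: @escaping () -> Void) {
        queue.async(flags: .barrier, execute: request) // writes run alone
    }
}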
Very much appreciate the double-reply, thank you.

The thumbnail rendering situation was something I came across quite a while ago. I was using dispatch_apply to request thumbnails of dozens or more images at once, but that quickly overwhelmed disk I/O and various other frameworks like Image IO. Back then, I posted a question on StackOverflow related to just this issue:

https://stackoverflow.com/questions/23599251

In the end I just used a normal concurrent queue with an explicit semaphore to throttle in-flight work items, similar to NSOperationQueue.maxConcurrentOperationCount. It works for now, but it's an ad-hoc solution based on the number of reported CPU cores. I fully accept that there are no hard-and-fast rules for this and each application is somewhat different.

Comparing my app's architecture to the talking points in the WWDC video, I feel like the video is using rather small dispatch work items while my app uses rather large ones. Most of my work items are "jobs", like exporting an image, fetching results from a database, or rendering a thumbnail. For those types of operations, I'm not aiming for maximum throughput, but rather for the best user experience. For example, rendering three thumbnails concurrently might take longer to complete in an absolute sense, but if two of the thumbnails are for small images and one is for a massive panorama, then it's very likely the two small thumbnails will finish sooner and thus be shown to the user quickly. Had they had to wait for the panorama to finish, the user would see a blank screen for longer than needed. At least that's how I like to design things.

(This is particularly important for thumbnail rendering because it can be very hard to cancel a rendering that is in progress. Many of the Image IO and QuickLook APIs don't have a way to cancel their requests, so you're stuck waiting for a thumbnail to be generated even if the presenting view has scrolled off the screen.)

Similar concurrency thoughts apply to exporting. I'm OK with a small export job pre-empting a longer export job, because that allows the smaller job to complete sooner and the user to access the resulting files sooner. If a user initiates a small export while a large export is already underway, then chances are they want access to that small export ASAP; they shouldn't have to wait for the larger one to complete. I realize this causes the total time to completion for all outstanding requests to increase, but from the user's perspective (IMHO), the system appears more performant. The only way I know how to do this is with a "tree of concurrent queues", so perhaps my use case, architecture, and trade-offs are significantly different from those used in the WWDC talking points.

Additional commentary: Earlier you pointed out that I was likely wrong in concluding that an application should have a single, serial root queue. Given this discussion, I agree. But what about a single, concurrent root queue? I remember reading in older mailing lists that we shouldn't explicitly target any of the global queues; perhaps the introduction of QoS was the reason. Regardless, it's my understanding that if you have a concurrent queue running at a given QoS, then Dispatch will boost that queue's QoS to the highest QoS of any in-flight work item and then return to the original QoS once that work item has completed. I've applied that strategy by having a single, concurrent root queue in my application whose QoS is utility.
The various subsystems have their own concurrent queues with higher QoS values, but all target the root queue (sketched below). This appears to work OK, but I'm sort of just guessing on this strategy. Sadly, imptrace, the tool for monitoring QoS boosting, does not appear to work in Catalina.

Should I use a single concurrent queue as the root queue of an application? Does it make sense to run it at a low QoS and rely on boosting from targeted queues, or is that just silly?
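Concretely, the arrangement I'm describing looks roughly like this. The labels, QoS values, and throttle width are illustrative, not a recommendation:

import Dispatch
import Foundation

// A single concurrent root queue at a low QoS...
let rootQueue = DispatchQueue(label: "com.example.root",
                              qos: .utility,
                              attributes: .concurrent)

// ...with per-subsystem concurrent queues at higher QoS targeting it.
let renderQueue = DispatchQueue(label: "com.example.render",
                                qos: .userInitiated,
                                attributes: .concurrent,
                                target: rootQueue)

// The ad-hoc throttle mentioned earlier: a semaphore caps in-flight
// work items, similar to NSOperationQueue.maxConcurrentOperationCount.
// (Each queued item ties up a thread while it waits, which is part of
// why I consider this ad hoc.)
let gate = DispatchSemaphore(value: max(2, ProcessInfo.processInfo.activeProcessorCount / 2))

func enqueueThumbnailRender(_ render: @escaping () -> Void) {
    renderQueue.async {
        gate.wait() // park until one of the slots frees up
        defer { gate.signal() }
        render()
    }
}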
This question comes up so frequently that it would be extremely appreciated if someone on the SwiftUI team or a DTS representative could chime in on this. The vast majority of sample code and documentation assumes a view model can be created without any parameters to its initializer, which is not always the case. And in the Fruta example app a single Model is used for the entire application, which really isn't realistic for larger scale applications.
I've resorted to the following design pattern, but I remain unsure whether this is considered a "correct" way to initialize an @StateObject property:
import SwiftUI

struct ParentView: View {
    var body: some View {
        // The parent creates the model; ChildView's synthesized memberwise
        // initializer wraps it via StateObject.init(wrappedValue:).
        ChildView(viewModel: ChildViewModel(someValue: "foo"))
    }
}

class ChildViewModel: ObservableObject {
    init(someValue: Any) {
    }
}

struct ChildView: View {
    @StateObject var viewModel: ChildViewModel

    var body: some View {
        Text("child")
    }
}
This pattern appears to work correctly and doesn't require the small "hack" of using the underscore to initialize the @StateObject (shown for contrast below), which appears to be discouraged based on my reading of the documentation:
StateObject.init(wrappedValue:)
// You don’t call this initializer directly. Instead, declare a property with the
// @StateObject attribute in a View, App, or Scene, and provide an initial value:
struct MyView: View {
    @StateObject var model = DataModel()
}
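For contrast, the underscore-based initialization I'm trying to avoid looks something like this (a sketch of the commonly shown workaround, using the same hypothetical ChildViewModel):

import SwiftUI

struct ChildView: View {
    @StateObject var viewModel: ChildViewModel

    // The "hack": assigning the property wrapper's backing storage
    // directly from a custom initializer.
    init(someValue: Any) {
        _viewModel = StateObject(wrappedValue: ChildViewModel(someValue: someValue))
    }

    var body: some View {
        Text("child")
    }
}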
Sure thing, though as a perfect example of what I'm referring to, I actually can't log into Feedback Assistant within a virtual machine. Feedback Assistant's login screen simply reports "An error occurred during authentication." when I enter my Apple Developer credentials.
I can, of course, log into Feedback Assistant from the host macOS instance, but not from within the guest instance. At the moment, my host is running Monterey while my guest is running Ventura. I'd like to spend more time testing Ventura and building software within it, but without access to iCloud or my developer account, it makes that goal almost impossible.
Looks like it's as simple as attaching a binding to columnVisibility in the NavigationSplitView initializer and setting an appropriate column-visibility value as needed.
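A minimal sketch of what I mean; the view and visibility values are illustrative:

import SwiftUI

struct ContentView: View {
    @State private var columnVisibility: NavigationSplitViewVisibility = .all

    var body: some View {
        NavigationSplitView(columnVisibility: $columnVisibility) {
            List { Text("Sidebar") }
        } detail: {
            // Collapse the sidebar programmatically by changing the bound value.
            Button("Hide Sidebar") {
                columnVisibility = .detailOnly
            }
        }
    }
}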
Updated
(See reply below.)
Follow-up 2:
Oh, interesting! If you create a new XPC target in Xcode and opt for the "libxpc" option, the generated template shows how to pair xpc_session with xpc_listener.
Kudos to whoever at Apple was responsible for that...
Same problem here. Updated to macOS 15.1 today and then downloaded Xcode 16.1 (16B40).
On launch, Xcode attempted to download the predictive code model but failed. The download also fails when invoked from Xcode → Settings → Downloads.
This is the first attempt at downloading Xcode's predictive code model; prior to today, this laptop was running macOS 14.
The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
User Info: {
DVTErrorCreationDateKey = "2024-10-28 20:15:42 +0000";
}
--
Failed to find asset: com.apple.fm.code.generate_small_v1.tokenizer - no asset
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
--
System Information
macOS Version 15.1 (Build 24B83)
Xcode 16.1 (23503) (Build 16B40)
Timestamp: 2024-10-28T16:15:42-04:00