Posts

Post not yet marked as solved
1 Reply
Looks like it's as simple as attaching a binding to columnVisibility in the NavigationSplitView initializer and setting an appropriate column visibility value as needed.
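For reference, a minimal sketch of what that looks like; the view name and list contents here are illustrative, not from the original thread:

import SwiftUI

struct ContentView: View {
    // Drive the split view's column visibility from state.
    @State private var columnVisibility: NavigationSplitViewVisibility = .all

    var body: some View {
        NavigationSplitView(columnVisibility: $columnVisibility) {
            List(1..<10, id: \.self) { item in
                Text("Item \(item)")
            }
        } detail: {
            // Collapse to just the detail column when needed.
            Button("Hide Sidebar") {
                columnVisibility = .detailOnly
            }
        }
    }
}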
Post not yet marked as solved
9 Replies
Sure thing, though as a perfect example of what I'm referring to, I actually can't log into Feedback Assistant within a virtual machine. Feedback Assistant's login screen simply reports "An error occurred during authentication." when I enter my Apple Developer credentials. I can, of course, log into Feedback Assistant from the host macOS instance, but not from within the guest instance. At the moment, my host is running Monterey while my guest is running Ventura. I'd like to spend more time testing Ventura and building software within it, but without access to iCloud or my developer account, it makes that goal almost impossible.
Post not yet marked as solved
1 Reply
This question comes up so frequently that it would be extremely appreciated if someone on the SwiftUI team or a DTS representative could chime in on this. The vast majority of sample code and documentation assumes a view model can be created without any parameters to its initializer, which is not always the case. And in the Fruta example app a single Model is used for the entire application, which really isn't realistic for larger-scale applications. I've resorted to the following design pattern but I remain unsure if this is considered a "correct" way to initialize an @StateObject property:

struct ParentView: View {
    var body: some View {
        ChildView(viewModel: ChildViewModel(someValue: "foo"))
    }
}

class ChildViewModel: ObservableObject {
    init(someValue: Any) { }
}

struct ChildView: View {
    @StateObject var viewModel: ChildViewModel
}

This pattern appears to work correctly and doesn't require the small "hack" of using the underscore to initialize the @StateObject, which appears to be discouraged based on my reading of the documentation:

StateObject.init(wrappedValue:)
// You don't call this initializer directly. Instead, declare a property with the
// @StateObject attribute in a View, App, or Scene, and provide an initial value:
struct MyView: View {
    @StateObject var model = DataModel()
}
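For comparison, the underscore-based initialization referred to above would look roughly like this; a sketch only, reusing the ChildViewModel type from the example, with an illustrative someValue parameter:

struct ChildView: View {
    @StateObject private var viewModel: ChildViewModel

    init(someValue: Any) {
        // Initialize the property wrapper's synthesized underscore storage
        // directly, using StateObject's init(wrappedValue:) autoclosure.
        _viewModel = StateObject(wrappedValue: ChildViewModel(someValue: someValue))
    }

    var body: some View {
        Text("Child")
    }
}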
Post not yet marked as solved
7 Replies
Very much appreciate the double reply, thank you.

The thumbnail rendering situation was something I came across quite a while ago. I was using dispatch_apply to request thumbnails of dozens or more images at once, but that quickly overwhelmed disk I/O and various other frameworks like Image IO. Back then, I had posted a question on StackOverflow related to just this issue: https://stackoverflow.com/questions/23599251 In the end I just ended up using a normal concurrent queue with an explicit semaphore to throttle in-flight work items, similar to NSOperationQueue.maxConcurrentOperationCount. It works for now, but it's an ad-hoc solution based on the number of reported CPU cores. I fully accept that there are no hard-and-fast rules for this and that each application is somewhat different.

Comparing my app's architecture to the talking points in the WWDC video, I feel like the video is using rather small dispatch work items while my app uses rather large ones. Most of my work items are "jobs", like exporting an image, fetching results from a database or rendering a thumbnail. For those types of operations I'm not aiming for maximum throughput, but rather for the best user experience. For example, rendering three thumbnails concurrently might take longer to complete in an absolute sense, but if two of the thumbnails are for small images and one is for a massive panorama, then it's very likely the two small thumbnails will finish sooner and thus be shown to the user quickly. Had they had to wait for the panorama to finish, the user would see a blank screen for longer than needed. At least that's how I like to design things. (This is particularly important for thumbnail rendering because it can be very hard to cancel a rendering that is in progress. Many of the Image IO and QuickLook APIs don't have a way to cancel their requests, so you're stuck waiting for a thumbnail to be generated even if the presenting view has scrolled off the screen.)

Similar concurrency thoughts apply to exporting. I'm OK if a small export job preempts a longer export job, because that allows the smaller job to complete sooner and the user to get at the resulting files sooner. If a user initiates a small export while a large export is already underway, then chances are they want access to that small export ASAP. They shouldn't have to wait for the larger one to complete. I realize this causes the total time to completion for all outstanding requests to increase, but from the user's perspective (IMHO) the system appears more performant. The only way I know how to do this is with a "tree of concurrent queues", so perhaps my use case, architecture and trade-offs are significantly different from those used in the WWDC talking points.

Additional commentary: Earlier you pointed out that I was likely wrong in concluding that an application should have a single, serial root queue. Given this discussion, I agree. But what about in the context of a single, concurrent root queue? I remember reading in older mailing lists that we shouldn't explicitly target any of the global queues; perhaps the introduction of QoS was the reason. Regardless, it's my understanding that if you have a concurrent queue running at a given QoS, then Dispatch will boost that queue's QoS to the highest QoS of any in-flight work item and then return to the original QoS once that work item has completed. I've applied that strategy by having a single, concurrent root queue in my application whose QoS is utility. The various subsystems have their own concurrent queues with higher QoS values, but all target the root queue. This appears to work OK, but I'm sort of just guessing with this strategy. Sadly, imptrace, the tool for monitoring QoS boosting, does not appear to work in Catalina.

Should I use a single concurrent queue as the root queue of an application? Does it make sense to run it with a low QoS and rely on boosting from targeted queues, or is that just silly?
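To make that concrete, here is a minimal sketch of the arrangement described above: a concurrent root queue at utility QoS, subsystem queues targeting it, and a semaphore throttling in-flight thumbnail work. The queue labels, the concurrency limit and the makeThumbnail helper are illustrative assumptions, not the app's actual code:

import CoreGraphics
import Dispatch
import Foundation

// Single concurrent root queue for the whole app, running at a low QoS.
let rootQueue = DispatchQueue(label: "com.example.app.root",
                              qos: .utility,
                              attributes: .concurrent)

// Subsystem queues with higher QoS values, all targeting the root queue.
let thumbnailQueue = DispatchQueue(label: "com.example.app.thumbnails",
                                   qos: .userInitiated,
                                   attributes: .concurrent,
                                   target: rootQueue)
let exportQueue = DispatchQueue(label: "com.example.app.export",
                                qos: .utility,
                                attributes: .concurrent,
                                target: rootQueue)

// Throttle in-flight thumbnail renders, similar in spirit to
// NSOperationQueue.maxConcurrentOperationCount. Note that a waiting
// work item blocks its worker thread until a slot frees up.
let thumbnailSlots = DispatchSemaphore(
    value: max(2, ProcessInfo.processInfo.activeProcessorCount / 2))

// Hypothetical stand-in for the actual Image IO / QuickLook rendering code.
func makeThumbnail(for url: URL) -> CGImage? { nil }

func renderThumbnail(at url: URL, completion: @escaping (CGImage?) -> Void) {
    thumbnailQueue.async {
        thumbnailSlots.wait()
        defer { thumbnailSlots.signal() }
        completion(makeThumbnail(for: url))
    }
}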
Post not yet marked as solved
7 Replies
Consider the slide at 20:55. It appears to show two dispatch sources (S1, S2) that each target their own serial queues (Q1, Q2), which in turn target a single serial queue (EQ). My interpretation of that slide is that all of the work is serialized by the one root queue, which means that S1 and S2 do not provide any additional concurrency. A minute later, the speaker mentions that the order of the work items is guaranteed by the root "mutual exclusion queue", but that would have been the case anyway with a single dispatch source.

A few more slides later, there's one titled "QoS and Target Queue Hierarchy" which attempts to explain why you'd want to use multiple dispatch sources. In this example, S1 has a low QoS while S2 has a high QoS. But since they both target a root queue, there's a good chance that the entire tree will run at the higher QoS if S2 is adding a lot of work items. That means that low-priority items added by S1 will get boosted to a higher QoS, which is unlikely to be what I'd want. I'd much rather the system context switch over to the higher-QoS work item, execute it, then go back to the lower-QoS work item. That isn't possible in the presented design because of the root queue.

At 26:23, another example is presented using a "single mutual exclusion queue" as the root queue. In this example, the problem really seems to be that the jobs are too small to warrant individual work items. But the solution presented means that only a single event handler can be running at once.

At 28:30 the subject of subsystems is brought up. It's very possible I'm misinterpreting this part of the talk. The solutions presented involve each subsystem targeting a serial queue (Main Queue, Networking Queue, Database Queue). Excluding the main queue because it's special, why would I want the networking and database queues to be serial? A long-running work item on either would significantly slow down the overall application. Multiple requests to read from a database should be allowed to happen concurrently, IMHO.

My earlier comment regarding a single root queue for the entire app was somewhat influenced by the subsequent slides that suggest using a "fixed number of serial queue hierarchies." If you look at 31:30, "Mutual Exclusion Context", they show a simple tree with a root serial queue (EQ). On the next slide, they reference EQ as the Application queue, or at least that's how I read it. Finally, consider the slide at 43:00, "Protecting the Queue Hierarchy". The first bullet point suggests that one should "Build your queue hierarchy bottom to top." In that diagram, I see EQ as a root queue for the application, with Q1/S1 and Q2/S2 being task related, or subsystems if the application is large enough.

But even if I was wrong in concluding that there should be a root serial queue, I'm still conflicted as to why I'd want all my subsystems to have serial queues. If all of my tasks are long enough to warrant their own work items, then I want as many of them running as is reasonably possible given the cores available to me. If I'm rendering thumbnails on a MacBook Pro, then I might want 4-6 thumbnail requests to run concurrently. If I'm running on a Mac Pro, then I can handle a lot more. I can't have that flexibility if I build a hierarchy of serial queues, yet that seems to be Apple's recommendation in some of the more recent WWDC videos related to GCD.

Follow-up: Proper use of GCD is obviously quite dependent on how your application is architected, so I'm clearly approaching this from my app's perspective.

Out of interest, my app's architecture looks something like this:

- A half-dozen or so "managers", which one could consider to be subsystems.
- Each manager has a single, concurrent execution queue with an appropriate QoS level.
- Each manager is responsible for a certain type of request (Database, Export, Caching, Rendering, etc.).
- Requests are submitted to each manager almost always from the main thread as a result of a user event.
- Requests are immutable and independent of each other. There are no locks or shared resources involved.
- Requests are allowed to execute out of order, if explicitly stated. (i.e., two database reads can happen out of order, but writes cannot.)
- Requests are relatively high-level and, with very few exceptions, run within their own work item. (i.e., a request does not, in turn, spawn other GCD work items.)
- An example request might be exporting an image, rendering a thumbnail, performing a database operation, etc.
- A request might use a framework, like AVFoundation or Core Image, that in turn uses multi-threading. This is where some manual throttling needs to happen, because if you have six cores and try to decode six RAW files concurrently, you'll have worse performance than decoding two or three concurrently, since Image IO spawns a bunch of threads itself.

Using serial queues in each manager/subsystem would reduce the concurrency in my app, and I've tested that by limiting how many concurrent thumbnail requests I allow at any given time; the degradation is visually obvious. So my app makes almost exclusive use of concurrent queues, with the odd barrier block when some form of synchronization is required. However, this seems very much at odds with the above-mentioned WWDC talk as well as the tips listed on this page: https://gist.github.com/tclementdev/6af616354912b0347cdf6db159c37057
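As a sketch of the manager pattern described above, here's what a database-style manager with concurrent reads and barrier-serialized writes could look like. The DatabaseManager name, queue label and read/write helpers are hypothetical, not the app's actual code:

import Dispatch

// One concurrent queue per manager: reads run concurrently with each other,
// while writes use a barrier so they run exclusively on the queue.
final class DatabaseManager {
    private let queue = DispatchQueue(label: "com.example.app.database",
                                      qos: .userInitiated,
                                      attributes: .concurrent)

    // Reads may execute concurrently and out of order.
    func read(_ request: @escaping () -> Void) {
        queue.async(execute: request)
    }

    // Writes wait for in-flight work to drain, then run alone.
    func write(_ request: @escaping () -> Void) {
        queue.async(flags: .barrier, execute: request)
    }
}

An export or rendering manager would follow the same shape, just with a different QoS and, where needed, a semaphore-based throttle as described in the earlier reply.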
Post not yet marked as solved
7 Replies
"In your case you seem to be using requests without reply blocks, in which case the transaction closes as soon as you return from the request handler."Interesting. You're right that I return from the request handler on the XPC side almost immediately. All the request handler does is take the message that was sent over from the app and store it into an array for processing "at a later date" and in a separate thread. The request handler thus returns almost immediately though a strong reference to the original message is obviously in play as the message is in the array.According to your comments, this architecture does not affect the transaction state of an XPC connection, but does it affect the QOS boost? I could have sworn a year or so ago when I was playing around with this I "discovered" that I needed to hold on to that message longer than I thought otherwise my XPC's performance dropped significantly.This reasoning was based on my interpretation of WWDC 2014-716 Power, Performance and Diagnostics - What's new in GCD and XPC. At the 33:20 time mark, they are discussing two things that cause the lifetime of the XPC boost to be maintained: Until reply is sent.While using message.The first one is clear but doesn't apply to me since I'm not using the reply APIs. But the second item isn't so clear to me. What does it mean "while using the message". Note that this video is using the C API when discussing XPC, so I always just assumed that the xpc_object that was sent across the wire and received in the XPC service needed to be strongly held. I couldn't just extract the values I wanted and then discard the xpc_object. Doing so, appeared, to cause the XPC service to significantly slow down. (I was doing image processing in the service and saw terrible performance if I didn't keep the xpc_object around.) But perhaps this isn't the case with the NS-API? (Note that several DTS reports concluded that it was no longer possible to run imptrace to monitor QOS boosting, unfortunately.)
Post not yet marked as solved
7 Replies
Thanks for the fantastic reply, much appreciated and reassuring. I've gone back and forth between the XPC C-API and the NS-API for various reasons. I started with the C-API because it, originally, made more sense to me and it was the only way to use IOSurface at the time. When IOSurface support was added to the NS-API, I decided to give it another shot. But the "reply" model and "protocol" model for how the client should talk to the service never really felt right for the use case I had in mind. I much prefer a more asynchronous model where the client sends a packaged/serialized "message" across to the service and, asynchronously, receives packaged/serialized "replies" on a different connection. Messages and replies are matched up using unique identifiers.

So I went back to using the C-API for a while, until Metal introduced shared events and shared textures, which appeared to only support the NS-API. With IOSurface now on the NS-API and Metal appearing to favour it, I took my C-API "architecture" and implemented it using the NS-API, which is where I stand today, and I'm reasonably happy with it. But, per this thread and the other one, there's no DispatchData support in the NS-API (at least there wasn't until you showed me the new API for xpc_type_t). When all is said and done, I'm really just sending serialized data across as one big blob. I'll have to wire back up a way to "attach" IOSurfaces and shared Metal objects into the transport layer, but for those messages I might just fall back to a basic NSObject that implements NSSecureCoding to make things easier. (Message replies that include an IOSurface or shared Metal object don't have much other data in them, so there's no need for fast, efficient serialization of those messages.)

Finally, to tie things back to your reply, what is the NS-API equivalent of xpc_transaction_begin and xpc_transaction_end? Some of the WWDC videos talk about "holding on to the message" in the XPC process to ensure that appropriate QoS boost levels are applied. I never really understood what that meant, but I assume that if I'm "holding on to the message" in the service then perhaps I'm implicitly inside a transaction as well?

For example, my XPC service's protocol has a single method:

// Implemented by XPC Services.
@objc protocol ServiceRequestHandler {
    func handleServiceMessage(_ message: ServiceMessage)
}

// Implemented by the App to receive responses.
@objc protocol ServiceResponseHandler {
    func handleServiceMessage(_ message: ServiceMessage)
}

// ServiceMessage is an NSObject that implements NSSecureCoding. It has a single NSData field in it
// representing serialized data. (And, optionally, an IOSurface as mentioned above.)

// On the service side, the implementation of handleServiceMessage looks roughly like this:
func handleServiceMessage(_ message: ServiceMessage) {
    messagesToProcess.append(message)
    processAnyPendingMessagesInASeparateThread()
}

My understanding of "holding on to a message" is that if I keep a strong reference to `message`, then an implicit transaction remains open and my service remains boosted. As soon as I dequeue that message, process it and release it, then I'm no longer "holding on to it" and the XPC machinery can terminate the service or at least lower its QoS boost. I've just assumed this by concluding that when `ServiceMessage` is serialized on the client using an XPC coder, it must have some additional metadata attached to it that the XPC system tracks to determine whether the message is in-flight, delivered, handled and still alive or not.

For now I'll stick with the NS-API because I'm curious to explore shared Metal textures down the road, but perhaps something else compelling on the C-API will crop up and I'll swing back...
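For what it's worth, here is one way the deferred processing described above could be structured, building on the declarations in the snippet. This is only a sketch of the pattern being discussed, not the actual service code; the RequestHandler class, the queue label and the process(_:) helper are hypothetical:

final class RequestHandler: NSObject, ServiceRequestHandler {
    // All access to messagesToProcess is funneled through this serial queue.
    private let processingQueue = DispatchQueue(label: "com.example.service.processing")
    private var messagesToProcess: [ServiceMessage] = []

    func handleServiceMessage(_ message: ServiceMessage) {
        // Return to the XPC machinery immediately; the array keeps the
        // message alive until it has actually been processed.
        processingQueue.async {
            self.messagesToProcess.append(message)
            self.processNextMessage()
        }
    }

    private func processNextMessage() {
        // Runs on processingQueue: dequeue one message, process it, release it.
        guard !messagesToProcess.isEmpty else { return }
        let message = messagesToProcess.removeFirst()
        process(message)
        // Once `message` goes out of scope here, nothing in the service is
        // "holding on to it" any more.
    }

    private func process(_ message: ServiceMessage) {
        // Hypothetical: deserialize the NSData payload and do the real work.
    }
}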
Post not yet marked as solved
7 Replies
I want to explore using an XPC process for all the normal reasons. I already make extensive use of them for shorter-lived actions, as Apple routinely encourages. Now I'm interested in how much farther I can take them.

Some of the VR rendering stuff is done entirely in a separate process using Metal's new shared textures. How do those processes guarantee that macOS doesn't suddenly terminate them and thus kill the rendering? Is the solution to just ensure that the process is always "busy", or is there a more formal way to specify my intent using something like the NSSupportsSuddenTermination flag?
Post not yet marked as solved
6 Replies
For the most part I would agree. However, I'm still curious whether there's an efficient way of using NSData, DispatchData or Data to reduce the number of unnecessary allocations. Consider how prevalent the use of an IOSurface pool or Metal texture pool is for efficiently transferring graphics data between processes.
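One avenue, offered only as a sketch and not necessarily an answer to the XPC case above, is the no-copy initializers on Data and DispatchData, which wrap an existing buffer instead of allocating and copying (NSData has the equivalent bytesNoCopy/freeWhenDone initializer):

import Foundation

// Wrap an existing malloc'd buffer in Data without copying;
// Data frees the buffer when the last reference goes away.
let byteCount = 1_024 * 1_024
let buffer = malloc(byteCount)!
let data = Data(bytesNoCopy: buffer, count: byteCount, deallocator: .free)

// DispatchData offers the same idea over an UnsafeRawBufferPointer.
let otherBuffer = malloc(byteCount)!
let dispatchData = DispatchData(
    bytesNoCopy: UnsafeRawBufferPointer(start: otherBuffer, count: byteCount),
    deallocator: .free)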
Post not yet marked as solved
1 Reply
Huh, apparently the forums sanitize the (L)aunch (S)ervices (D)aemon three-letter abbreviation....
Post not yet marked as solved
7 Replies
I'm in the same position. I've been using Xcode 11 on macOS 10.15 for the last few days to play around with some of the new Swift stuff. I've made several very small "Hello World" style applications. Xcode 11's build system was quite buggy, with numerous failures being shown even though the build was successful. But at least it was "working"... up until this morning. Since waking up this morning and trying to use Xcode, I'm stuck with "Indexing..." as well. One small project builds, but another doesn't, and it worked fine a few hours ago! I have "SourceKitService" and two "swift" processes running in Activity Monitor, each at around 90% CPU utilization. The Xcode activity bar says "Indexing..." but the progress bar is not moving. Like you, I've tried deleting every Xcode folder I can think of, but to no avail. Rebooting didn't help either. There must be some other hidden folders where Xcode is caching data... At the moment, Xcode 11 has just become unusable for me, and it's not obvious how to "reset" it to whatever condition it was in last night when it was working...