Sorry that the GC thing didn't work out for you. My recommendation for something like this would be either Firebase (use their Realtime Database rather than Firestore, which has somewhat worse latency) or a combination of a slower persistence layer like CloudKit and a platform like Ably / PubNub / Pusher for the realtime component. Within the Apple ecosystem I have not heard of anything that would achieve what you are looking for, and I'm even surprised GC lives up to this. Out of curiosity, could you confirm whether the GC implementation actually worked for you over the internet (vs. just locally), and whether performance was sufficient?
Ah, interesting approach. Yes, I'd be extremely curious to hear how that panned out (and whether you ran into any noteworthy pitfalls or documentation inaccuracies; I've been having a lot of fun with the latter while messing with CloudKit.js). Really appreciate you sharing this experiment here.
Hey Philipp,
Thanks so much for this very thorough investigation.
Can confirm that this issue persists in Beta 3 and that your scripts above eliminate the issue for both categories of simulators.
I had toggled those switches off in the standalone simulators but somehow expected an immediate result rather than one after a restart, so I assumed the toggle wasn't effective. Upon checking after your post, it does seem that simply toggling and restarting also does the trick (for anyone reading this who doesn't want to mess with their simulator search preferences).
Thanks again!
Glad to see this pop up here.
Re rolling back to beta 4: sadly I already brought up all the relevant devices to beta 5, which then obviously breaks things when running on device, for reasons unrelated to this issue.
Given other idiosyncrasies when running in the simulator on a pre-Monterey Mac (stemming from the concurrency evolution), I was wondering if anyone is seeing this while running the simulator on Monterey?
That made me realize that I hadn't checked for it on an Intel Mac.
And it turns out, I actually can't reproduce my specific instance of this on an Intel machine.
Thanks Steve! Didn't think to check today's release notes for anything regarding this bug, since there was no new Xcode. Luckily I had notifications on for this thread.
Thanks for clarifying @FrameworksEngineer.
If that is the case, it would be good to make a note at the top of this document: https://developer.apple.com/news/?id=8vkqn3ih
Since lots of SO and blog posts mention this "one weird trick" of right-clicking the run button, a clear note about its deprecation would be good.
Feedback Filed: FB9632206
Also: PSA for anyone coming here, a workaround I've started using is the Logger API plus the Console app. Note that your log level seemingly must be .info or higher for Simulator / Preview logs to show in the host Mac's Console (whereas .debug is sufficient for attached physical devices).
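A minimal sketch of that workaround, assuming iOS 14+ / macOS 11+ for Logger (the subsystem and category strings are placeholders; use your own):

import os

let logger = Logger(subsystem: "com.example.MyApp", category: "general")

// In my testing, .info and above shows up in Console for Simulator / Preview runs;
// .debug only shows up for attached physical devices.
logger.info("View appeared")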
Probably the easiest solution:
Start with the background of your choosing (e.g. a very dark gray or black to match your example)
Two shapes in a ZStack (I'd try ellipses, rotate them, offset them a bit; custom shapes for more control)
Give each shape one of the colors you'd like to target (e.g. the turquoise-ish green and the yellow-ish green from your example)
Apply .blur(radius: [something high like 70.0]) to the ZStack
A basic implementation might look like this:

struct BlurredGradientView: View {
    let size = 200.0

    var body: some View {
        ZStack {
            // First color blob, nudged left and tilted slightly
            Ellipse()
                .frame(width: size, height: size * 1.5)
                .foregroundColor(.red)
                .offset(x: -size * 0.2)
                .rotationEffect(Angle(degrees: 5))
            // Second color blob, nudged right and tilted the other way
            Ellipse()
                .frame(width: size, height: size * 1.3)
                .foregroundColor(.blue)
                .offset(x: size * 0.2)
                .rotationEffect(Angle(degrees: -10))
        }
        // A high radius melts the shapes into one soft gradient
        .blur(radius: 70)
    }
}
Options
If you want this effect to automatically adapt to the foreground content, you could replace the ZStack of shapes with said content. Remember that you're going to be blurring it anyway, so if possible try to go with a thumbnail version. While this beats having to read out color values in order to achieve adaptivity, it does reduce control and may add a bit more detail to the effect than desired.
If you get noticeable banding in the blur, you could try overlaying a grain/noise pattern image (look for one that is made to be repeated) with something like .opacity(0.2) and .blendMode(.overlay) (this makes all of this even more expensive, of course). Sometimes this helps, sometimes it doesn't; it depends on the gradient and the level of banding.
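A minimal sketch of that overlay, reusing the BlurredGradientView from above ("noisePattern" is a hypothetical asset name; any tileable noise image should work):

import SwiftUI

struct GrainOverlayView: View {
    var body: some View {
        BlurredGradientView()
            .overlay(
                Image("noisePattern") // hypothetical tileable noise asset
                    .resizable(resizingMode: .tile)
                    .opacity(0.2)
                    .blendMode(.overlay)
            )
    }
}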
Performance
While this solution is simple, the .blur() effect is a bit expensive. If you're rendering this as a static element, I wouldn't worry about it. If you're animating the effect or modifying it in response to interaction, performance might become an issue if there are other expensive things going on, or if your app is running on old hardware, especially if that hardware is also in a low-performance state – e.g. it's hot from charging.
To start optimizing, fire up the SwiftUI template in Instruments (remember to profile on an actual device) and experiment with these approaches:
Eliminate transparencies and shadows wherever possible, but especially inside the hierarchy to be blurred (the ZStack of the two shapes or your re-rendering of the foreground content). For example, while it can be easier to calibrate the effect via the shapes' .opacity() or via colors with an alpha below 1.0, I would instead try to go with fully opaque colors and adjust their lightness values to approach your background.
Try moving the background into the blurred ZStack and then set the blur to be opaque (a sketch of this variant follows after this list). If this has performance advantages at all – SwiftUI isn't explicit about it, but there's a decent chance – those advantages may diminish if this approach causes the blur to drastically grow in size. Hence the need to profile along the way. This approach may also require some adjustments to how you place the outcome in your view hierarchy, and depending on your final desired result, it may not be an option at all.
Look into the .drawingGroup() modifier to understand what it does and then profile it in various locations (e.g. on each of the two shapes vs. on the ZStack before the blur vs. after the blur; its value may shift depending on where exactly you apply animations).
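Here's a sketch of the opaque-blur variant mentioned above, assuming a black background behind the shapes is acceptable; the values are untested and the .drawingGroup() placement is just one candidate to profile:

import SwiftUI

struct OpaqueBlurredGradientView: View {
    let size = 200.0

    var body: some View {
        ZStack {
            // The background now lives inside the blurred hierarchy...
            Color.black
            Ellipse()
                .frame(width: size, height: size * 1.5)
                .foregroundColor(.red)
                .offset(x: -size * 0.2)
            Ellipse()
                .frame(width: size, height: size * 1.3)
                .foregroundColor(.blue)
                .offset(x: size * 0.2)
        }
        // ...which allows marking the blur as opaque.
        .blur(radius: 70, opaque: true)
        // One of several placements worth profiling (see the last point above).
        .drawingGroup()
    }
}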
Alternatives
Alternative approaches that might be more performant but require going a little deeper:
Investigate a solution inside a Canvas view. Set it to draw asynchronously. If feasible in your design, initialize the Canvas with opaque: true and render the background inside the Canvas (in contrast to blur's opaque version, SwiftUI explicitly mentions possible perf gains in the documentation for opaque Canvas views). Blur the context and draw the same multi-colored shapes. You could also try rendering the ZStack of shapes/content from the first solution above as a symbol inside the Canvas, instead of using Canvas drawing code to create them. This has the advantage of allowing you to coordinate animations via SwiftUI code on the shapes. There isn't a lot of info as to how much of the Canvas performance gains this approach retains, but I've been positively surprised in a few cases. If you do go this route, don't forget that it's still the Canvas drawing context that should provide the blur, so remove the .blur() modifier from the views that power your symbols.
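A minimal sketch of the opaque, asynchronous Canvas route (iOS 15+ / macOS 12+; all rects and the radius are placeholder values):

import SwiftUI

struct CanvasGradientView: View {
    var body: some View {
        Canvas(opaque: true, rendersAsynchronously: true) { context, size in
            // An opaque Canvas must cover every pixel, so fill the background first.
            context.fill(Path(CGRect(origin: .zero, size: size)), with: .color(.black))
            // Filters apply to everything drawn afterwards, so the blur comes
            // from the context rather than from a .blur() modifier.
            context.addFilter(.blur(radius: 70))
            context.fill(
                Path(ellipseIn: CGRect(x: size.width * 0.15, y: size.height * 0.25,
                                       width: size.width * 0.4, height: size.height * 0.5)),
                with: .color(.red)
            )
            context.fill(
                Path(ellipseIn: CGRect(x: size.width * 0.45, y: size.height * 0.3,
                                       width: size.width * 0.35, height: size.height * 0.4)),
                with: .color(.blue)
            )
        }
    }
}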
A solution that foregoes blurs but retains the control needed for this effect is a gradient mesh. This post describes gradient meshes in detail, in the context of SceneKit: https://movingparts.io/gradient-meshes. You can use SceneKit rather easily inside SwiftUI via SceneView. Given the overhead of loading an entire 3D graphics framework, you might pay a price in load performance, but hopefully you can guarantee smooth execution even on old devices, given SceneKit's hardware acceleration and the simplicity of what you're rendering. You could also try adapting the gradient mesh approach to a graphics framework with lower overhead (especially if you are already using that framework in the same context, e.g. to render the foreground element). Of course this route complicates any interaction/animation code you might want to coordinate with SwiftUI, whereas the blurred-shapes solution above allows you to interact with the effect easily and directly in SwiftUI.
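If you go the SceneKit route, hosting the scene in SwiftUI is the easy part; a skeletal sketch, with the mesh construction itself (covered in the linked post) elided:

import SceneKit
import SwiftUI

struct GradientMeshView: View {
    let scene: SCNScene = {
        let scene = SCNScene()
        // Build the gradient mesh geometry per the movingparts.io post
        // and attach it to scene.rootNode here (elided).
        return scene
    }()

    var body: some View {
        // .rendersContinuously is only needed if you animate the mesh.
        SceneView(scene: scene, options: [.rendersContinuously])
    }
}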
Lastly, there may be a solution that uses only gradients, but I'm not sure how close to your example you could get without a blur helping you smooth everything out. That said, in writing this answer I came up with a long-shot idea for a gradients-only approach and I'm curious to investigate it – I'll update here if this leads anywhere.
Can confirm the same issue here. Can't even deactivate Workflows. Of course, everything is currently green on System Status.
This is still going on; I continue to be unable to deactivate a workflow that just keeps churning out useless builds.
How am I supposed to advocate for this service when, a week after reporting both here and via Feedback Assistant, there has been a complete failure to even acknowledge an issue that would run against billable hours if I switched more of my work to Xcode Cloud? It feels like Xcode Cloud and the organization that supports it graduated out of beta purely from a marketing perspective, not as an indicator of actual readiness for real-world usage.
I've been getting the same thing since somewhere during the Xcode 14 / iOS 16 beta cycle. Xcode puts up a variable, exaggerated number of warnings, none of which seem to be properly associated with the code in question. I do use for await over NotificationCenter's AsyncSequence and assume the warnings are just buggy.
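For reference, the pattern in question looks roughly like this (the notification name is just an example):

import UIKit

func observeForegroundTransitions() async {
    let notifications = NotificationCenter.default.notifications(
        named: UIApplication.willEnterForegroundNotification
    )
    for await _ in notifications {
        // React to the notification here.
    }
}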
Thanks for posting this, as it has been driving me nuts not to be able to find other instances online. Will watch this thread (and update if I find a solution).
This seems to be caused by having a space in the name of the Xcode app (e.g. to delineate betas, I had renamed the app "Xcode 15 Beta 5"). Removing the spaces solved this issue for me.