I've been trying to learn SwiftUI and was wondering how I can achieve this kind of effect (the one in the background). I have tried many methods but couldn't manage to recreate it.
I'd appreciate your help if you have any input.
Reference Image: https://www.imgpaste.net/image/K3JfVN
Probably the easiest solution:
- Start with the background of your choosing (e.g. a very dark gray or black to match your example)
- Two shapes in a ZStack (I'd try ellipses, rotate them, offset them a bit; custom shapes for more control)
- Give each shape one of the colors you'd like to target (e.g. the turquoise-ish green and the yellow-ish green from your example)
- Apply .blur(radius: [something high like 70.0]) to the ZStack
A basic implementation might look like this:
struct BlurredGradientView: View {
    let size = 200.0

    var body: some View {
        ZStack {
            Ellipse()
                .frame(width: size, height: size * 1.5)
                .foregroundColor(.red)
                .offset(x: -size * 0.2)
                .rotationEffect(Angle(degrees: 5))
            Ellipse()
                .frame(width: size, height: size * 1.3)
                .foregroundColor(.blue)
                .offset(x: size * 0.2)
                .rotationEffect(Angle(degrees: -10))
        }
        .blur(radius: 70)
    }
}
Options
If you want this effect to automatically adapt to the foreground content, you could replace the ZStack of shapes with said content. Remember that you're going to be blurring it anyway, so if possible try to go with a thumbnail version. While this beats having to read out color values in order to achieve adaptivity, it does reduce control and may add a bit more detail to the effect than desired.
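As a rough sketch of that idea (the asset names "album-thumb" and "album-art" are placeholders for whatever content you're displaying):

```swift
import SwiftUI

struct AdaptiveGlowView: View {
    var body: some View {
        ZStack {
            // A low-resolution thumbnail of the foreground content,
            // heavily blurred to become the background glow.
            Image("album-thumb")
                .resizable()
                .scaledToFill()
                .frame(width: 240, height: 240)
                .blur(radius: 70)

            // The actual foreground content, rendered sharp on top.
            Image("album-art")
                .resizable()
                .scaledToFit()
                .frame(width: 200, height: 200)
        }
    }
}
```

Since the thumbnail gets blurred out anyway, even a very small image (e.g. 50×50) should produce essentially the same glow at a fraction of the cost.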
If you get noticeable banding in the blur, you could try overlaying a grain/noise pattern image (look for one that is made to be repeated) with something like .opacity(0.2) and .blendMode(.overlay) (this makes all of this even more expensive, of course). Sometimes this helps, sometimes it doesn't; it depends on the gradient and the level of banding.
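A minimal sketch of the noise overlay, assuming a small tileable grain image named "noise" in your asset catalog (the name is a placeholder):

```swift
import SwiftUI

struct DebandedGlowView: View {
    var body: some View {
        ZStack {
            Ellipse().foregroundColor(.red).offset(x: -40)
            Ellipse().foregroundColor(.blue).offset(x: 40)
        }
        .blur(radius: 70)
        .overlay(
            Image("noise")
                .resizable(resizingMode: .tile) // repeats the grain to fill the area
                .opacity(0.2)
                .blendMode(.overlay)
        )
    }
}
```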
Performance
While this solution is simple, the .blur() effect is a bit expensive. If you're rendering this as a static element, I wouldn't worry about it. If you're animating the effect or modifying it in response to interaction, performance might become an issue if there are other expensive things going on, or if your app is running on old hardware, especially if that hardware is also in a low-performance state – e.g. it's hot from charging.
To start optimizing, fire up the SwiftUI template in Instruments (remember to profile on an actual device) and experiment with these approaches:
- Eliminate transparencies and shadows wherever possible, but especially inside the hierarchy to be blurred (the ZStack of the two shapes or your re-rendering of the foreground content). For example, while it can be easier to calibrate the effect via the shapes' .opacity() or via colors with an alpha below 1.0, I would instead try to go with fully opaque colors and adjust their lightness values to approach your background.
- Try moving the background into the blurred ZStack and then set the blur to be opaque. If this has performance advantages at all – SwiftUI isn't explicit about it, but there's a decent chance – those advantages may diminish if this approach causes the blur to drastically grow in size. Hence the need to profile along the way. This approach may also require some adjustments to how you place the outcome in your view hierarchy and depending on your final desired result, it may not be an option at all.
- Look into the .drawingGroup() modifier to understand what it does and then profile it in various locations (e.g. on each of the two shapes vs. on the ZStack before the blur vs. after the blur; its value may shift depending on where exactly you apply animations)
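To make the last point concrete, here's a sketch of two .drawingGroup() placements you could profile against each other (which one wins depends on your animations and hardware, so treat this as an experiment, not a recommendation):

```swift
import SwiftUI

struct RasterizedGlowView: View {
    var body: some View {
        ZStack {
            Ellipse().foregroundColor(.red).offset(x: -40)
            Ellipse().foregroundColor(.blue).offset(x: 40)
        }
        .drawingGroup()   // Variant A: rasterize the shapes before blurring
        .blur(radius: 70)
        // Variant B to profile against: remove the line above and instead
        // apply .drawingGroup() *after* .blur(radius: 70), so the blurred
        // result is composited as a single offscreen bitmap.
    }
}
```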
Alternatives
Alternative approaches that might be more performant but require going a little deeper:
- Investigate a solution inside a Canvas view. Set it to draw asynchronously. If feasible in your design, initialize the Canvas with opaque: true and render the background inside the Canvas (in contrast to blur's opaque version, SwiftUI explicitly mentions possible perf gains in the documentation for opaque Canvas views). Blur the context and draw the same multi-colored shapes. You could also try rendering the ZStack of shapes/content from the first solution above as a symbol inside the Canvas, instead of using Canvas drawing code to create them. This has the advantage of allowing you to coordinate animations via SwiftUI code on the shapes. There isn't a lot of info as to how much of the Canvas performance gains this approach retains but I've been positively surprised in a few cases. If you do go this route, don't forget that it's still the Canvas drawing context that should provide the blur, so remove the .blur() modifier from the views that power your symbols.
- A solution that foregoes blurs but retains the control needed for this effect is a gradient mesh. This post describes gradient meshes in detail, in the context of SceneKit: https://movingparts.io/gradient-meshes. You can use SceneKit rather easily inside SwiftUI via SceneView. Given the overhead of loading an entire 3D graphics framework, you might have to pay on load performance but hopefully you would be able to guarantee smooth execution even on old devices, given SceneKit's hardware acceleration and the simplicity of what you're rendering. You could also try adapting the gradient mesh approach to a graphics framework with lower overhead (especially if you are already using that framework in the same context, e.g. to render the foreground element). Of course this route complicates any interaction/animation code you might want to coordinate with SwiftUI, whereas the blurred shapes solution above allows you to interact with the effect easily and directly in SwiftUI.
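For the Canvas route, a minimal sketch of the opaque, asynchronous setup with the blur applied via the drawing context (the layout values are arbitrary placeholders to calibrate to your design):

```swift
import SwiftUI

struct CanvasGlowView: View {
    var body: some View {
        // opaque: true lets SwiftUI skip alpha compositing for the canvas;
        // rendersAsynchronously: true moves drawing off the main thread.
        Canvas(opaque: true, rendersAsynchronously: true) { context, size in
            // Render the background inside the Canvas, since it's opaque.
            context.fill(Path(CGRect(origin: .zero, size: size)),
                         with: .color(.black))

            // Blur comes from the drawing context, not a .blur() view modifier.
            context.addFilter(.blur(radius: 70))

            let w = size.width
            let h = size.height
            context.fill(
                Ellipse().path(in: CGRect(x: w * 0.15, y: h * 0.2,
                                          width: w * 0.4, height: h * 0.6)),
                with: .color(.red))
            context.fill(
                Ellipse().path(in: CGRect(x: w * 0.45, y: h * 0.25,
                                          width: w * 0.4, height: h * 0.5)),
                with: .color(.blue))
        }
    }
}
```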
Lastly, there may be a solution that uses only gradients but I'm not sure how close to your example you could get without a blur helping you smooth everything out. That said, in writing this answer I came up with a long shot idea for a gradients-only approach and I'm curious to investigate it – I'll update here if this leads anywhere.