I have an app in the App Store called MaskerAid (App Store). The general gist of the app is that it lets you hide things in images using emoji. The emoji are SwiftUI `View`s that can be manipulated by the user. They can be resized, relocated, etc. When the user is ready to share their image, a screenshot is taken using an absolutely revolting pile of hacks I wrote to get UIKit to take a snapshot of the screen.
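For context, the UIKit snapshot approach usually looks something like the sketch below. This is illustrative only — the extension name is mine, not MaskerAid's actual code — but it shows the core trick: `drawHierarchy(in:afterScreenUpdates:)` captures a view as it currently appears on-screen.

```swift
import UIKit

extension UIView {
    /// Renders this view (and its subviews) as they currently appear on-screen.
    /// A sketch of the general UIKit snapshot technique, not the app's real code.
    func snapshotImage() -> UIImage {
        let renderer = UIGraphicsImageRenderer(bounds: bounds)
        return renderer.image { _ in
            // afterScreenUpdates: true waits for pending layout changes
            // to commit before capturing.
            drawHierarchy(in: bounds, afterScreenUpdates: true)
        }
    }
}
```

In a SwiftUI app, the ugly part is getting hold of the backing `UIView` in the first place, which is where the "pile of hacks" tends to accumulate.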
I was super amped to see the new `ImageRenderer` API (docs), which would let me throw away my pile-o'-hacks and do things a more sane way. However, `ImageRenderer` seems to be written around a static set of views that are non-interactive.
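To illustrate the limitation: `ImageRenderer` renders the view *value* you hand it from scratch, so any state the user has applied to the on-screen instance (drag offsets, scale, etc.) is not reflected unless you thread that state through explicitly. A minimal sketch, with a hypothetical `EmojiOverlay` view standing in for the app's real hierarchy:

```swift
import SwiftUI

// Hypothetical stand-in for an emoji view the user can move around.
struct EmojiOverlay: View {
    var offset: CGSize
    var body: some View {
        Text("🙂").font(.largeTitle).offset(offset)
    }
}

@MainActor
func renderSnapshot(offset: CGSize) -> UIImage? {
    // ImageRenderer builds a fresh rendering of this view value; it does not
    // look at whatever instance is currently on screen.
    let renderer = ImageRenderer(content: EmojiOverlay(offset: offset))
    renderer.scale = UIScreen.main.scale  // match on-screen resolution
    return renderer.uiImage
}
```

This works when all the user-applied state lives in your model and can be passed in, but there's no obvious way to say "capture that `View` as it exists on-screen right now."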
Am I holding it wrong, or is this a missing part of the API surface for `ImageRenderer`? Is there any way to get it to render a `View` (and its sub-`View`s) as they exist on-screen?
For what it's worth, this has been filed as FB10393458.