Best practices for displaying very large images (macOS SwiftUI)

I’m writing an app that, among other things, displays very large images (e.g. 106,694 x 53,347 pixels). These are GeoTIFF images, in this case containing digital elevation data for a whole planet. I will eventually need to be able to draw polygons on the displayed image.

There was a time when one would use CATiledLayer, but I wonder what is best today. I started this app in Swift/Cocoa, but I'm toying with the idea of starting over in SwiftUI (my biggest hesitation is that I have yet to upgrade to Big Sur).
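For context, the sort of thing I mean by the CATiledLayer approach is a layer-backed NSView whose backing layer draws tiles on demand. This is an untested sketch from memory, not working code from my project; the class name TiledImageView and the tile parameters are placeholders:

import AppKit
import QuartzCore

// Untested sketch of the old CATiledLayer approach: a layer-backed view
// whose backing layer renders tiles on demand as the user scrolls/zooms.
final class TiledImageView: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        wantsLayer = true
    }

    override func makeBackingLayer() -> CALayer {
        let layer = CATiledLayer()
        layer.tileSize = CGSize(width: 512, height: 512)  // placeholder tile size
        layer.levelsOfDetail = 8       // zoomed-out levels
        layer.levelsOfDetailBias = 2   // extra detail when zoomed in
        return layer
    }

    override func draw(_ dirtyRect: NSRect) {
        // Decode and draw only the strips intersecting dirtyRect here.
    }
}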

The image data I have is stored in strips, with an integral number of image rows per strip; strips are not guaranteed to be contiguous in the file. Pixel formats vary, but in the motivating use case they are 16 bits per pixel, with each value giving elevation in meters. As a first approximation, I can simply display these values as a 16 bpp grayscale image.
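To make the data side concrete, here is an untested sketch of how I picture wrapping one decoded strip in a CGImage (makeStripImage is my own placeholder name; note that CGImage has no signed-integer formats, so signed elevation values would first need remapping into unsigned range):

import CoreGraphics
import Foundation

// Untested sketch: wrap one decoded strip (16 bpp grayscale) in a CGImage.
// width and rows come from the TIFF ImageWidth/RowsPerStrip tags; pixels is
// the raw strip data, assumed little-endian and already offset to unsigned.
func makeStripImage(pixels: Data, width: Int, rows: Int) -> CGImage? {
    guard let provider = CGDataProvider(data: pixels as CFData) else { return nil }
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
        .union(.byteOrder16Little)
    return CGImage(width: width,
                   height: rows,
                   bitsPerComponent: 16,
                   bitsPerPixel: 16,
                   bytesPerRow: width * 2,
                   space: CGColorSpaceCreateDeviceGray(),
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .defaultIntent)
}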

Is the right approach to set up a Core Image pipeline? As I understand it, that should give me some automatic memory management, right?
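If Core Image is indeed the way to go, I assume the entry point would be something along these lines (again untested; CIFormat.L16 is 16-bit single-channel luminance, and pixels/width/rows are as in the strip example above):

import CoreImage
import Foundation

// Untested sketch: hand Core Image a 16-bit single-channel buffer so it can
// evaluate the pipeline lazily rather than materializing the whole image.
func makeCIImage(pixels: Data, width: Int, rows: Int) -> CIImage {
    CIImage(bitmapData: pixels,
            bytesPerRow: width * 2,
            size: CGSize(width: width, height: rows),
            format: .L16,
            colorSpace: nil)
}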

I’m hoping to find out the best approach before I spend a lot of time going down the wrong path.