Hi tinrocket -
I was just about to post an update on my progress! I would greatly appreciate more detail. This is where I have got to:
As suggested above, I created a subclass of NSView to be the scroll view's document view and placed an MTKView as its subview (let's call it the image view). To keep the texture within memory limits, the image view is sized to the smaller of the image*scale size and the visibleRect size within the clip view. The document view itself is always resized to image*scale so the scroll view works correctly.
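A minimal sketch of that sizing rule, in Swift (the function and parameter names here are my own illustrations, not from my actual code):

```swift
import Foundation

// Sketch of the sizing rule described above. The image view (and hence
// its Metal texture) is clamped to the visible rect, while the document
// view always spans the full scaled image.
func imageViewSize(imageSize: CGSize, scale: CGFloat, visibleSize: CGSize) -> CGSize {
    let scaled = CGSize(width: imageSize.width * scale,
                        height: imageSize.height * scale)
    return CGSize(width: min(scaled.width, visibleSize.width),
                  height: min(scaled.height, visibleSize.height))
}

func documentViewSize(imageSize: CGSize, scale: CGFloat) -> CGSize {
    CGSize(width: imageSize.width * scale,
           height: imageSize.height * scale)
}
```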
To keep the image view in place I added two constraints, at the top and leading edges of the document view, and adjust them as the sizes change. (Basically, the image view floats around inside the empty document view.) I calculate which part of my original image is visible and draw only that. I can now magnify indefinitely. This works, but it's not great:

- I get a lot of "tearing" when the magnified image is larger than the visible rect and I scroll around, i.e. the newly exposed areas are black and only then get filled in.
- I am not seeing the benefits of responsive scrolling / overdraw.
- I feel like I'm effectively reimplementing NSClipView.
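For reference, the "which part of the image is visible" calculation can be sketched like this (names are my own, and it assumes a flipped document view so the clip view's bounds origin maps directly into image coordinates):

```swift
import Foundation

// Sketch of the visible-region calculation described above: map the
// on-screen viewport back into unscaled source-image coordinates.
func visibleSourceRect(scrollOrigin: CGPoint,
                       viewportSize: CGSize,
                       scale: CGFloat) -> CGRect {
    CGRect(x: scrollOrigin.x / scale,
           y: scrollOrigin.y / scale,
           width: viewportSize.width / scale,
           height: viewportSize.height / scale)
}
```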
I then reengineered it to remove the intermediate document view and make my MTKView the document view itself. This time I made its size image*scale and hoped that the dirty rect passed into drawRect: would effectively do the same clipping as above, but with the benefits of responsive scrolling (which I specifically opted into). However, I was always asked to redraw the full view, so I was back to the memory problems.
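For completeness, the opt-in I mean looks roughly like this (a sketch, class name my own; my understanding is that responsive scrolling's overdraw caching is designed around AppKit-drawn content, which may be why a Metal-backed view sees no benefit):

```swift
import MetalKit

// Opting a Metal-backed document view into responsive scrolling.
// Overriding draw(_:) normally disables responsive scrolling, so the
// class property explicitly opts back in.
final class ImageDocumentView: MTKView {
    override class var isCompatibleWithResponsiveScrolling: Bool { true }

    override func draw(_ dirtyRect: NSRect) {
        // AppKit passes the smallest rect that needs redrawing here, but
        // for a layer-backed Metal view it is often the full bounds, as
        // observed above.
        super.draw(dirtyRect)
    }
}
```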
I would love to learn more about how you accomplished this. The last part of my render pipeline is displaying a CIImage, so it should match yours. An Apple engineer in a WWDC20 lab suggested looking at MTLViewport but I couldn't figure out how this would work with a CIRenderDestination.
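In case it helps the discussion, here is roughly how I'd expect rendering only the visible crop through CIRenderDestination to look. This is a sketch under my own assumptions: `image`, `visibleRect`, `ciContext`, and `commandQueue` stand in for whatever your pipeline actually holds, and I believe MTLViewport would only come into play if you encoded your own render pass instead of letting Core Image write into the drawable.

```swift
import CoreImage
import MetalKit

// Sketch: render only the visible portion of a CIImage into an MTKView's
// current drawable via CIRenderDestination.
func render(image: CIImage,
            visibleRect: CGRect,
            in view: MTKView,
            using ciContext: CIContext,
            commandQueue: MTLCommandQueue) {
    guard let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    let destination = CIRenderDestination(
        width: Int(view.drawableSize.width),
        height: Int(view.drawableSize.height),
        pixelFormat: view.colorPixelFormat,
        commandBuffer: commandBuffer,
        mtlTextureProvider: { drawable.texture })

    // Crop to the visible region and translate it to the origin, so the
    // drawable never has to cover the whole magnified image.
    let cropped = image.cropped(to: visibleRect)
        .transformed(by: CGAffineTransform(translationX: -visibleRect.origin.x,
                                           y: -visibleRect.origin.y))

    _ = try? ciContext.startTask(toRender: cropped, to: destination)
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```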
Thanks for any help!