Posts

Post marked as solved
6 Replies
Hi tinrocket - I was just about to post an update on my progress! I would greatly appreciate more detail. Here is where I have got to:

As suggested above, I created a subclass of NSView to be the scroll view's document view and placed an MTKView as its subview (let's call it the image view). To keep the texture within memory limits, the image view is sized to the smaller of the image*scale size and the visibleRect size within the clip view. The document view is always resized to image*scale so that the scroll view works. To keep the image view in place, I added two constraints at the top and leading edges of the document view; these are adjusted as the sizes change. (Basically, the image view floats around the otherwise empty document view.) I calculate which part of my original image is visible and draw that. I can now magnify indefinitely.

This works, but it's not great: I get a lot of "tearing" when the magnification is greater than the visible rect size and I scroll around, i.e. the newly exposed areas are black and then get filled in. I am not seeing the benefits of responsive scrolling / overdraw, and I feel like I'm effectively reimplementing NSClipView.

I then reengineered it to remove the document view and make my MTKView the document view itself. This time, I made its size image*scale and hoped that the dirty rect passed into drawRect: would effectively do the same as above, but with the benefits of responsive scrolling (which I specifically opted into). However, I was always asked to redraw the full view, so I was back to the memory problems.

I would love to learn more about how you accomplished this. The last part of my render pipeline displays a CIImage, so it should match yours. An Apple engineer in a WWDC20 lab suggested looking at MTLViewport, but I couldn't figure out how that would work with a CIRenderDestination. Thanks for any help!
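For reference, here is roughly the sizing logic described above (a sketch only; the property names are mine, and the real code also has to account for flipped coordinates):

    // Called whenever the clip view's bounds or the zoom scale change.
    // documentView is the plain NSView; imageView is the MTKView subview.
    - (void)layoutImageView {
        NSSize scaledSize = NSMakeSize(self.imageSize.width * self.scale,
                                       self.imageSize.height * self.scale);
        [self.documentView setFrameSize:scaledSize];

        // Clamp the MTKView to the visible area so its drawable stays small.
        NSRect visible = self.documentView.visibleRect;
        NSSize viewSize = NSMakeSize(MIN(scaledSize.width, NSWidth(visible)),
                                     MIN(scaledSize.height, NSHeight(visible)));
        [self.imageView setFrameSize:viewSize];

        // The two constraints "float" the image view over the visible area.
        self.topConstraint.constant = NSMinY(visible);
        self.leadingConstraint.constant = NSMinX(visible);

        // The portion of the original image to render, in image coordinates.
        self.visibleImageRect = NSMakeRect(NSMinX(visible) / self.scale,
                                           NSMinY(visible) / self.scale,
                                           viewSize.width / self.scale,
                                           viewSize.height / self.scale);
        self.imageView.needsDisplay = YES;
    }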
Post marked as solved
6 Replies
That is a great suggestion; it didn't occur to me to place a subview inside the document view. I've worked with the scroll-sync code before, which is pretty straightforward. I have a feeling a lot of NSRect math will be involved, but I think the result will be pretty performant. Thanks!
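If it helps anyone finding this later, the scroll-sync hookup I'm referring to is the standard bounds-change notification pattern (a sketch; scrollView and the handler name are placeholders):

    // Opt the clip view into bounds-change notifications, then observe them
    // to re-run the layout/drawing math whenever the user scrolls or zooms.
    NSClipView *clipView = scrollView.contentView;
    clipView.postsBoundsChangedNotifications = YES;
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(clipViewBoundsDidChange:)
                                                 name:NSViewBoundsDidChangeNotification
                                               object:clipView];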
Post not yet marked as solved
2 Replies
Thanks for the reply! I created two sysdiagnose reports and filed a bug report: FB7804023. The reports were both taken while SKAgent was running at ~180% CPU.
Post not yet marked as solved
5 Replies
I had the same question and find this confusing as well. I have submitted a "bug report" asking for clarification in the documentation; I recommend anyone with the same question do the same. The documentation in question is: https://developer.apple.com/documentation/coreimage/ciimage/2915368-imagebyapplyingfilter?language=objc
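For context, this is the call in question; a minimal example (the filter name and parameter are arbitrary):

    // The receiver is used as the filter's input image; any other
    // parameters are passed in the dictionary.
    CIImage *blurred = [inputImage imageByApplyingFilter:@"CIGaussianBlur"
                                     withInputParameters:@{kCIInputRadiusKey : @10.0}];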
Post marked as solved
1 Reply
In case anyone needs the answer, the solution is to create a texture descriptor directly from the data:

    MTLTextureDescriptor *textureDescriptor =
        [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                           width:width
                                                          height:height
                                                       mipmapped:NO];

Create a new texture from that descriptor with newTextureWithDescriptor:, then use replaceRegion:mipmapLevel:withBytes:bytesPerRow: to copy the data directly into the texture's backing store.
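Putting it together (a sketch; device, data, width, and height come from your own code):

    id<MTLTexture> texture = [device newTextureWithDescriptor:textureDescriptor];
    // For MTLPixelFormatR32Float, each pixel is a single 32-bit float.
    [texture replaceRegion:MTLRegionMake2D(0, 0, width, height)
               mipmapLevel:0
                 withBytes:data
           bytesPerRow:width * sizeof(float)];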
Post not yet marked as solved
20 Replies
There is no way to do this. I think Apple would argue that this is precisely the point: the user must actively and specifically grant this permission. It would defeat the whole purpose if an app could just enable it for itself.
Post not yet marked as solved
20 Replies
I have filed an enhancement request, FB6188278. If any developer comes across this thread, please "+1" this ticket number.