Reply to CIContext reports that it supports images in and out up to 16384 x 16384, but I don't think that it does.
Hi Sam,

I can only guess, but could this be an issue caused by tiling? If an image exceeds a certain size, Core Image processes it in tiles and stitches the results back together at the end. Maybe some filter in the chain doesn't handle this case well, for instance by supplying the wrong region of interest.

Have you tried processing a very large image (close to 16384 pixels) on your machine? That should trigger tiling even on newer hardware.
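Just to illustrate what I mean by region of interest (this is only a sketch, not code from your project; `kernel` and `inputImage` are placeholders, and I'm assuming a hypothetical kernel that samples up to `radius` pixels around each output pixel):

    let radius: CGFloat = 10

    let output = kernel.apply(
        extent: inputImage.extent,
        roiCallback: { _, rect in
            // For each output tile Core Image requests, return the input region
            // needed to produce it. If the rect isn't grown by the sampling
            // radius, artifacts appear at tile borders once tiling kicks in.
            rect.insetBy(dx: -radius, dy: -radius)
        },
        arguments: [inputImage, radius]
    )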
Jan ’20
Reply to CIContext reports that it supports images in and out up to 16384 x 16384, but I don't think that it does.
While it might be possible, it seems unlikely that a driver error is causing this problem. But I also don't know how to solve it.

I did some quick research and found that you can do gamma correction in vImage. Check out the functions starting with vImageGamma. There is even an sRGB gamma preset (kvImageGamma_sRGB_forward_half_precision). This should be the most efficient alternative to Core Image.
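A rough, untested sketch of what that could look like, assuming you already have initialized planar float buffers:

    import Accelerate

    // Applies the forward sRGB gamma curve to a planar float buffer.
    // `source` and `destination` are assumed to be initialized and equally sized.
    func applySRGBGamma(from source: inout vImage_Buffer, to destination: inout vImage_Buffer) {
        // The gamma value (first parameter) is ignored by the sRGB preset.
        let gamma = vImageCreateGammaFunction(0,
                                              Int32(kvImageGamma_sRGB_forward_half_precision),
                                              vImage_Flags(kvImageNoFlags))
        defer { vImageDestroyGammaFunction(gamma) }

        let error = vImageGamma_PlanarF(&source, &destination, gamma, vImage_Flags(kvImageNoFlags))
        assert(error == kvImageNoError)
    }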
Jan ’20
Reply to Any way to let CI know output of CIKernel should not be colormatched?
It's true, Core Image assumes that all intermediate results are in the working color space of the CIContext.

You could try to wrap your lookup table in a CISampler with the color space set to Null before you pass it to the kernel. This should tell Core Image to not do color-matching on that image:

    let sampler = CISampler(image: image, options: [kCISamplerColorSpace: NSNull()])
    kernel.apply(extent: domainOfDefinition, roiCallback: roiCallback, arguments: [sampler, ...])

Please let me know if this works!
Feb ’20
Reply to Any way to let CI know output of CIKernel should not be colormatched?
Oh, what you see in the debug output isn't necessarily the truth. At the time you inspect/print the image, it doesn't "know" which CIContext is going to render it. That's why it always inserts a generic "convert to working space here" step, assuming it's needed. But if your context doesn't do color matching (workingColorSpace set to Null), or the input is already in the working space, that step is not actually performed.

I recommend you change your final render call to use the newish CIRenderDestination API (check out CIRenderDestination and CIContext.startTask(toRender:...)). This gives you access to a CIRenderTask object on which you can call waitUntilCompleted to get a CIRenderInfo object. You can Quick Look both objects in the debugger to get a nice graph showing what is actually done during rendering.

Maybe you can post them here and we'll figure out what needs to change.
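Roughly like this (just a sketch; `context`, `finalImage`, and `texture` stand in for your own context, output image, and render target, and any other CIRenderDestination initializer works as well):

    let destination = CIRenderDestination(mtlTexture: texture, commandBuffer: nil)

    let task = try context.startTask(toRender: finalImage, to: destination)
    let info = try task.waitUntilCompleted()

    // Set a breakpoint here and Quick Look `task` and `info` in Xcode to see
    // the graph Core Image actually executed, including any color conversions.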
Feb ’20
Reply to Any way to let CI know output of CIKernel should not be colormatched?
Yeah, I also couldn't find any equivalent for the __table attribute in the "new" Metal-based CIKernel language, which is favored by Apple, by the way, since the old CI Kernel Language is now deprecated.

What I was suggesting is that you use the destination API for your final rendering step to check the render graph and see what Core Image is doing under the hood with different color management settings. It would also be interesting to see the render graph when you use one of the non-image-producing kernels and check whether color management happens there.

By the way, in the iOS 13 release notes I found the following sentence: "Metal CIKernel instances support arguments with arbitrarily structured data." However, I was not able to find any documentation or examples for this. Maybe that's something you can leverage.

There is, for instance, the built-in CIColorCube filter that gets passed an NSData containing the lookup table. Maybe that's the way. Please let me know if you find a way!
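For reference, this is roughly how that built-in filter takes its table as plain data (a sketch; `inputImage` and `cubeData` are placeholders, and `cubeData` must contain dimension³ RGBA float values):

    let dimension = 32
    let colorCube = CIFilter(name: "CIColorCube", parameters: [
        kCIInputImageKey: inputImage,
        "inputCubeDimension": dimension,
        "inputCubeData": cubeData
    ])
    let output = colorCube?.outputImage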
Feb ’20
Reply to Extract common functionality in Core Image Metal files
Thanks for the fast reply! I tried your suggestion for building a single metallib, which was easy since I already had this setup (with the "old" -cikernel flag for the Metal linker). It seemed to work at first, but then I noticed that not all kernels get compiled into the resulting library (and hence are not found by CIKernel.init). I found that kernels that have a coreimage::sampler as an input parameter seemingly won't get compiled this way. They do get compiled, however, when using the other toolchain (.metal -> metal -c -> .air -> metallib -> .metallib, i.e., linking with metallib).

I already filed feedback for this, including a minimal sample project (FB7795164). It would be great if you'd find the time to look into this. Thanks!
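For context, the kernels are loaded from the compiled library along these lines (a sketch; the library and function names are placeholders):

    let url = Bundle.main.url(forResource: "MyKernels", withExtension: "metallib")!
    let data = try Data(contentsOf: url)
    let kernel = try CIKernel(functionName: "myKernel", fromMetalLibraryData: data)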
Jun ’20
Reply to Metal Core Image Filter does not Compile
It's also worth noting that there is a built-in filter that does exactly that: CIBlendWithRedMask (https://developer.apple.com/documentation/coreimage/cifilter/3228275-blendwithredmaskfilter?language=objc):

"Uses values from a mask image to interpolate between an image and the background. When a mask red value is 0.0, the result is the background. When the mask red value is 1.0, the result is the image."
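A minimal usage sketch (assuming `foreground`, `background`, and `mask` are CIImages):

    let blendFilter = CIFilter(name: "CIBlendWithRedMask", parameters: [
        kCIInputImageKey: foreground,
        kCIInputBackgroundImageKey: background,
        kCIInputMaskImageKey: mask
    ])
    let output = blendFilter?.outputImage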
Jul ’20