Hi suMac,
I just uploaded a sample project that covers this use case: https://github.com/frankschlegel/core-image-by-example
It's still very new and I haven't tested it on macOS yet, but on iOS it's working so far.
Any feedback is welcome! 🙂
A RAW photo is basically a direct dump of the camera sensor's raw data, so it will always be in the full resolution of the sensor.
You need to resize it yourself when you want a smaller version, for instance while converting ("developing") the RAW into a JPEG representation using Core Image. For the best downsampling quality, I recommend using the CILanczosScaleTransform filter.
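Just as a minimal sketch of how that could look using the CIFilterBuiltins interface (the helper function and the target width are assumptions, not an official recipe):

import CoreImage
import CoreImage.CIFilterBuiltins

// Downscale a (RAW-derived) CIImage to a target width with Lanczos resampling.
func downscaled(_ image: CIImage, toWidth targetWidth: CGFloat) -> CIImage {
    let filter = CIFilter.lanczosScaleTransform()
    filter.inputImage = image
    // Scale factor relative to the image's current extent.
    filter.scale = Float(targetWidth / image.extent.width)
    filter.aspectRatio = 1.0
    return filter.outputImage ?? image
}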
The process is actually not too hard:
Put the Metal code of your kernel function (e.g., myKernel) into a file with the following naming scheme: <file_name>.ci.metal (like MyFilter.ci.metal).
Add the two Build Rules described by David in the session. But be aware that the -I $MTL_HEADER_SEARCH_PATHS flag seems to cause trouble, so it's better to just omit it.
This will compile all the .ci.metal files into .ci.metallib files with the same <file_name>. You can then load your kernel function into a CIKernel like this:
let url = Bundle(for: type(of: self)).url(forResource: "MyFilter", withExtension: "ci.metallib")!
do {
    let data = try Data(contentsOf: url)
    self.kernel = try CIKernel(functionName: "myKernel", fromMetalLibraryData: data)
} catch {
    fatalError("Failed to create kernel: \(error.localizedDescription)")
}
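For completeness, applying the loaded kernel could then look roughly like this; note that the pass-through ROI callback shown here is an assumption that only holds for kernels that sample the input 1:1:

// Hypothetical usage: render the kernel over the input's extent.
// The ROI callback tells Core Image which input region is needed for a
// given output rect; returning it unchanged only works for point-wise kernels.
let output = kernel.apply(
    extent: inputImage.extent,
    roiCallback: { _, rect in rect },
    arguments: [inputImage]
)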
Maybe you can elaborate on what errors you are getting?
Yes, please let us stay logged-in for longer than half a day!
It's also worth noting that there is a built-in filter that does exactly that: CIBlendWithRedMask - https://developer.apple.com/documentation/coreimage/cifilter/3228275-blendwithredmaskfilter?language=objc
Uses values from a mask image to interpolate between an image and the background. When a mask red value is 0.0, the result is the background. When the mask red value is 1.0, the result is the image.
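A quick sketch of how to use it (assuming foreground, background, and mask already exist as CIImages):

import CoreImage.CIFilterBuiltins

let filter = CIFilter.blendWithRedMask()
filter.inputImage = foreground         // used where the mask's red value is 1.0
filter.backgroundImage = background    // used where the mask's red value is 0.0
filter.maskImage = mask
let result = filter.outputImage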
I'm afraid this needs more clarification:
Why can't you open it in Xcode 11? What is it saying?
What do you mean by "doesn't load any images"? What kind of images? From where? Using which APIs?
Thanks!
I also created sample code and filed another Feedback for the concatenation issue (FB7796293).
Thanks for the fast reply!
I tried your suggestion for building a single metallib, which was easy since I already had this setup (with the "old" -cikernel flag for the Metal linker).
It seemed to work at first, but then I noticed that not all kernels get compiled into the resulting library (and hence can't be found by CIKernel.init). I found that kernels that take a coreimage::sampler as an input parameter seemingly won't get compiled this way.
They do get compiled, however, when using the other toolchain: .metal -> (metal -c) -> .air -> (metallib) -> .metallib, i.e., when using metallib for linking.
I already filed feedback for this including a minimal sample project (FB7795164).
It would be great if you'd find the time to look into this.
Thanks!
Thanks for the fast response!
We filed a feature request as you suggested (FB7753672).
Thanks Cutterpillow, that was also my understanding so far.
Especially for blending, linear makes total sense. But for the blur, it seems the output is not what you would expect…
How big is your image, and in which format? Note that UIImage is just an opaque container that does not necessarily hold your image data in memory. The image is only loaded from the underlying provider when you actually access its data.
Yeah, I also couldn't find any equivalent for the __table attribute in the "new" Metal-based CIKernel language (which is favored by Apple, by the way, since the old CI Kernel Language is deprecated now).

What I was suggesting is that you use the destination API for your final rendering step to check the rendering graph and see what CI is doing under the hood with different color management settings. It would also be interesting to see the render graph when you use one of their non-image-producing kernels and check if color management happens there.

By the way, in the iOS 13 release notes I found the following sentence: "Metal CIKernel instances support arguments with arbitrarily structured data." However, I was not able to find any documentation or examples for this. Maybe that's something you can leverage. There is, for instance, the CIColorCube kernel that gets passed an NSData containing the lookup table. Maybe that's the way.

Please let me know if you find a way!
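For reference, this is the pattern CIColorCube uses for passing the table; a minimal sketch with a 2×2×2 identity cube (the input image is assumed to exist):

import CoreImage.CIFilterBuiltins

// Build a 2x2x2 identity lookup cube as RGBA floats in [0, 1].
let dimension = 2
var values: [Float] = []
for b in 0..<dimension {
    for g in 0..<dimension {
        for r in 0..<dimension {
            values += [Float(r), Float(g), Float(b), 1.0]
        }
    }
}

let filter = CIFilter.colorCube()
filter.cubeDimension = Float(dimension)
filter.cubeData = values.withUnsafeBufferPointer { Data(buffer: $0) }
filter.inputImage = inputImage // assumed to exist
let output = filter.outputImage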
Oh, what you see in the debug output needn't be the truth. That's because at the time you inspect/print the image, it doesn't "know" which CIContext is going to render it. That's why it's always inserting a generic "I'll convert to working space here" step in there, assuming it's needed. But if your context doesn't do color matching (workingColorSpace set to Null) or the input already is in working space, it's not actually performed.

I recommend you change your final render call to use the newish CIRenderDestination API (check out CIRenderDestination and CIContext.startTask(toRender:...)). This will give you access to a CIRenderTask object on which you can call waitUntilCompleted to get a CIRenderInfo object. You can Quick Look both objects in the debugger to get a nice graph showing you what is actually done during rendering.

Maybe you can post them here and we'll figure out what we need to change.
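As a rough sketch of that flow (assuming context is your CIContext and image is the final CIImage; a plain bitmap destination is just the simplest case):

import CoreImage

let width = Int(image.extent.width)
let height = Int(image.extent.height)
let bytesPerRow = width * 4
var bitmap = Data(count: bytesPerRow * height)

do {
    try bitmap.withUnsafeMutableBytes { buffer in
        let destination = CIRenderDestination(
            bitmapData: buffer.baseAddress!,
            width: width,
            height: height,
            bytesPerRow: bytesPerRow,
            format: .BGRA8
        )
        let task = try context.startTask(toRender: image, to: destination)
        // Quick Look `task` (CIRenderTask) and `info` (CIRenderInfo) in the
        // debugger to see the graph that was actually executed.
        let info = try task.waitUntilCompleted()
        print(info)
    }
} catch {
    print("Render failed: \(error)")
}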
It's true, Core Image assumes that all intermediate results are in the working color space of the CIContext.

You could try to wrap your lookup table in a CISampler with the color space set to Null before you pass it to the kernel. This should tell Core Image to not do color-matching on that image:

let sampler = CISampler(image: image, options: [kCISamplerColorSpace: NSNull()])
kernel.apply(extent: domainOfDefinition, roiCallback: roiCallback, arguments: [sampler, ...])

Please let me know if this works!
While it might be possible, it seems unlikely that some driver error is causing this problem. But I also don't know how to solve it.

I did some quick research and found that you can do gamma correction in vImage. Check out the functions starting with vImageGamma. They even have the sRGB gamma function (kvImageGamma_sRGB_forward_half_precision). This should be the most efficient alternative to Core Image.
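A rough sketch of what that could look like; the buffer setup is omitted, and the exact constants and function variants are worth double-checking against the vImage headers:

import Accelerate

// Create an sRGB forward gamma function; the half-precision variants are
// meant for 8-bit data. The gamma value is ignored for the sRGB presets.
let gamma = vImageCreateGammaFunction(
    0,
    Int32(kvImageGamma_sRGB_forward_half_precision),
    vImage_Flags(kvImageNoFlags)
)
defer { vImageDestroyGammaFunction(gamma) }

// srcBuffer (Planar8) and destBuffer (PlanarF) are assumed to be
// vImage_Buffers you have already allocated, one plane per color channel.
var src = srcBuffer
var dest = destBuffer
let error = vImageGamma_Planar8toPlanarF(&src, &dest, gamma, vImage_Flags(kvImageNoFlags))
assert(error == kvImageNoError)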