CIAreaMinimum / CIAreaMaximum not working for some images

I'm playing with CIImage and CIFilter to create some custom image processing. I'm on macOS 10.13.6 and using Xcode 10.1.

I can't work out this problem: the CIAreaMinimum and CIAreaMaximum filters (which return a single-pixel image containing the max/min colors from the CIImage) don't always produce sensible output.

It seems that for any PNG image with transparency (first read into a CGImage, and using this to create a CIImage), the max is always black and the min is always white. For any other image (including other PNGs), it seems to produce expected results.

If I can't get this working, an alternative is to write a custom CIFilter (since I need some other custom stuff anyway). In which case, how do I go about making my own max/min filter?

I'm guessing I need a way to store the min/max values, then for each pixel sample compare the pixel value against the stored min/max and update them when appropriate. But how do I create such a persistent value to pass in and out of a custom CIFilter using the Core Image Kernel Language (not Metal) and Swift?

Replies

Huh, interesting find...

Do you actually need the alpha channel for the max/min? If not, try performing the calculation on image.settingAlphaOne(in: image.extent).

If the alpha value is relevant, maybe you can use image.premultiplyingAlpha() instead.

Have you tried loading the image directly into a CIImage (without converting it to a CGImage in between)?
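
For example, testing the first suggestion could look like this (a quick sketch, assuming image is the CIImage in question):

// Drop the alpha channel, then run the built-in filter as usual.
let opaque = image.settingAlphaOne(in: image.extent)
let filter = CIFilter(name: "CIAreaMinimum")
filter?.setValue(opaque, forKey: kCIInputImageKey)
filter?.setValue(CIVector(cgRect: opaque.extent), forKey: kCIInputExtentKey)
let minPixel = filter?.outputImage   // a 1x1 image holding the minimum color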


Implementing a reduction operation like that on the GPU is actually not trivial, and off the top of my head I don't know how to easily do so with a custom filter. You could, however, use vImage in combination with a CIImageProcessorKernel to make it work on the CPU as part of a Core Image pipeline, along the lines of the sketch below.
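
A rough skeleton of that approach might look like this (my own sketch, not tested; passing the input extent through the arguments dictionary is just my convention here, and the plain loop in process could be swapped for vImage/vDSP calls):

import CoreImage

class AreaMinimumProcessor: CIImageProcessorKernel {
  // Ask Core Image for float pixels so the math stays simple.
  override class var outputFormat: CIFormat { return kCIFormatRGBAf }
  override class func formatForInput(at input: Int32) -> CIFormat { return kCIFormatRGBAf }

  // The single output pixel depends on the entire input image.
  override class func roi(forInput input: Int32,
                          arguments: [String: Any]?,
                          outputRect: CGRect) -> CGRect {
    return arguments?["inputExtent"] as? CGRect ?? outputRect
  }

  override class func process(with inputs: [CIImageProcessorInput]?,
                              arguments: [String: Any]?,
                              output: CIImageProcessorOutput) throws {
    guard let input = inputs?.first else { return }
    let width = Int(input.region.width)
    let height = Int(input.region.height)
    let floatsPerRow = input.bytesPerRow / MemoryLayout<Float>.size
    let src = input.baseAddress.assumingMemoryBound(to: Float.self)
    // Scan all pixels, keeping the per-channel minimum; start from white.
    var minima: [Float] = [1, 1, 1, 1]
    for y in 0..<height {
      for x in 0..<width {
        let pixel = src + y * floatsPerRow + x * 4
        for c in 0..<4 { minima[c] = Swift.min(minima[c], pixel[c]) }
      }
    }
    // For simplicity this assumes the output region is exactly the one
    // requested pixel.
    let dst = output.baseAddress.assumingMemoryBound(to: Float.self)
    for c in 0..<4 { dst[c] = minima[c] }
  }
}

// Usage: a one-pixel output whose value is computed on the CPU.
// let minImage = try AreaMinimumProcessor.apply(
//   withExtent: CGRect(x: 0, y: 0, width: 1, height: 1),
//   inputs: [image],
//   arguments: ["inputExtent": image.extent])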

I've tried all those things. It's as if the fact that the original image has an alpha channel somehow causes it to not perform the computation.


Below is some code I've written for a custom filter that seems to work - though it's certainly slower than the built-in CIAreaMinimum.


As you can see, to calculate the minimum I first set the minimum to white before updating it with any lower values found in the image. Apple's CIAreaMinimum might do this too (given that it returns white), but then fail to proceed with the actual calculation.


class CICustomAreaMinimum: CIFilter {
  @objc dynamic var inputImage: CIImage?

  // Walks every pixel of the source and keeps the componentwise minimum.
  // Starting from white guarantees any real pixel can only lower the value.
  let customMinKernel = CIKernel(source:
    """
    kernel vec4 customMin(sampler srcImage) {
        vec4 extent = samplerExtent(srcImage);
        vec4 minColor = vec4(1.0, 1.0, 1.0, 1.0);
        for (float x = extent.x; x < (extent.x + extent.z); x++) {
            for (float y = extent.y; y < (extent.y + extent.w); y++) {
                // Sample at the pixel center, mapping from image space
                // into the sampler's coordinate space.
                vec4 pixel = sample(srcImage, samplerTransform(srcImage, vec2(x + 0.5, y + 0.5)));
                minColor.rgb = min(minColor.rgb, pixel.rgb);
            }
        }
        return minColor;
    }
    """)

  override var outputImage: CIImage! {
    guard let inputImage = inputImage, let customMinKernel = customMinKernel else {
      return nil
    }
    // A one-pixel output; the ROI callback tells Core Image that this
    // pixel depends on the entire input image.
    return customMinKernel.apply(
      extent: CGRect(x: 0.0, y: 0.0, width: 1.0, height: 1.0),
      roiCallback: { _, _ in inputImage.extent },
      arguments: [inputImage])
  }
}

Your filter probably works, but is unfortunately highly inefficient. Here's why:


The kernel code is executed for each pixel on the GPU. That means for each output pixel you sample the whole input image to find the minimum. But since you set the extent to only one pixel, the GPU is effectively computing the minimum on a single thread. And a single thread executed on the GPU is very slow compared to the CPU.


To leverage the parallel nature of the GPU, one would usually perform an iterative parallel reduction, e.g. finding the minimum among 4 pixels per iteration, reducing the resulting image size with each iteration until only one pixel is left; see the sketch below.
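
For illustration, such a reduction could look roughly like this (my own untested sketch; it assumes the image extent's origin is at (0, 0)):

// Each pass emits one pixel per 2x2 input block, holding the componentwise
// minimum of that block; repeating halves the image until one pixel is left.
let reduceKernel = CIKernel(source:
  """
  kernel vec4 minReduce(sampler src) {
      vec2 c = destCoord() * 2.0;   // corner shared by the 2x2 input block
      vec4 p0 = sample(src, samplerTransform(src, c + vec2(-0.5, -0.5)));
      vec4 p1 = sample(src, samplerTransform(src, c + vec2( 0.5, -0.5)));
      vec4 p2 = sample(src, samplerTransform(src, c + vec2(-0.5,  0.5)));
      vec4 p3 = sample(src, samplerTransform(src, c + vec2( 0.5,  0.5)));
      return min(min(p0, p1), min(p2, p3));
  }
  """)

func areaMinimum(of image: CIImage) -> CIImage? {
  guard let kernel = reduceKernel else { return nil }
  var result = image
  while result.extent.width > 1 || result.extent.height > 1 {
    let size = CGSize(width: max(1, (result.extent.width / 2).rounded(.up)),
                      height: max(1, (result.extent.height / 2).rounded(.up)))
    guard let next = kernel.apply(
      extent: CGRect(origin: .zero, size: size),
      roiCallback: { _, rect in   // every output rect reads twice its size
        CGRect(x: rect.minX * 2, y: rect.minY * 2,
               width: rect.width * 2, height: rect.height * 2)
      },
      // Clamping makes out-of-bounds samples (odd dimensions) repeat edge
      // pixels rather than return transparent black, which would skew the
      // minimum.
      arguments: [result.clampedToExtent()]) else { return nil }
    result = next
  }
  return result   // a 1x1 image holding the componentwise minimum
}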


For your use case, however, I would recommend you use the APIs from the Accelerate framework, specifically vDSP, for finding the minimum on the CPU in the most efficient way possible.
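
If you go the vDSP route, it could look something like this (again a sketch: channelMinimums is my own helper name, and it assumes the context renders tightly packed RGBAf rows):

import Accelerate
import CoreImage

// Render the image into a float RGBA buffer, then let vDSP scan each
// interleaved channel with a stride of 4.
func channelMinimums(of image: CIImage, context: CIContext) -> [Float] {
  let width = Int(image.extent.width)
  let height = Int(image.extent.height)
  var pixels = [Float](repeating: 0, count: width * height * 4)
  pixels.withUnsafeMutableBytes { buffer in
    context.render(image,
                   toBitmap: buffer.baseAddress!,
                   rowBytes: width * 4 * MemoryLayout<Float>.size,
                   bounds: image.extent,
                   format: kCIFormatRGBAf,
                   colorSpace: nil)   // nil: skip color matching
  }
  var minimums = [Float](repeating: 0, count: 4)
  for channel in 0..<4 {
    pixels.withUnsafeBufferPointer { buffer in
      vDSP_minv(buffer.baseAddress! + channel, 4,
                &minimums[channel], vDSP_Length(width * height))
    }
  }
  return minimums   // [r, g, b, a] minima
}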


And you should definitely file a bug report with Apple. CIAreaMinimum and CIAreaMaximum not working on images containing transparent pixels doesn't sound right at all...

I'll look at filing a bug report.


Can I just ask... when I apply the kernel to the extent CGRect(x: 0, y: 0, width: 1, height: 1), doesn't this mean that the filter only performs the calculation over one pixel?

Or have I misunderstood this?

The extent specifies the size of the output, and your kernel is called once per pixel in the output image. So yes, the kernel only runs once on the GPU. However, since you iterate over the whole extent of the input, it's still using every pixel of the input image for the calculation.


But as I said, this is really not the use case the GPU is designed for. The GPU reaches its full potential when it can perform concurrent calculations on as many (output) pixels as possible. So ideally you would have a 1:1 mapping of input and output size. That's why reduction operations (reducing a big input image into one value, the minimum) are not trivial to implement efficiently on the GPU.

Ah yes... because the iteration happens in a single thread, eliminating the benefit of the GPU.


Thanks.