Cache intermediates in combination with cropping

Core Image has the concept of a Region of Interest (ROI), which enables some nice optimizations during processing. For instance, if a filtered image is cropped before rendering, Core Image can tell the filters to process only the cropped region of the image. This means no pixels are processed that would later be discarded by the crop.

Here is an example:

let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cropped = blurred.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))

First, we apply a Gaussian blur filter to the whole image, then we crop to a smaller rect. The corresponding filter graph looks like this:

Even though the extent of the image is rather large, the ROI of the crop is propagated back to the blur filter so that it only processes the pixels within the rendered region.
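To make this concrete, here is a minimal rendering sketch (the `CIContext` setup is assumed). Because Core Image walks the graph backwards from the requested rect, the blur only has to compute the pixels needed for `cropped.extent` (plus the blur's margin), not the full input image:

```swift
import CoreImage

let context = CIContext()

// Requesting only the cropped extent lets Core Image propagate
// that rect back through the graph as the ROI of each filter.
if let cgImage = context.createCGImage(cropped, from: cropped.extent) {
    // use cgImage…
}
```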

Now to my problem: Core Image can also cache intermediate results of a filter chain. In fact, it does that automatically. This improves performance when, for example, only changing the parameter of a filter in the middle of the chain and rendering again. Then everything before that filter doesn't change, so a cached intermediate result can be used.
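As an illustration of where this automatic caching helps, consider a hypothetical two-stage chain where only the second stage's parameter changes between renders (the filter names are real Core Image APIs; the function itself is just a sketch):

```swift
import CoreImage

// Sketch: expensive blur followed by a cheap, frequently-changed adjustment.
func render(input: CIImage, exposure: Double, context: CIContext) -> CGImage? {
    // Expensive stage; its inputs don't change between renders.
    let blurred = input.applyingGaussianBlur(sigma: 5)

    // Cheap stage whose parameter changes on every render.
    let adjusted = blurred.applyingFilter("CIExposureAdjust",
                                          parameters: [kCIInputEVKey: exposure])

    return context.createCGImage(adjusted, from: adjusted.extent)
}

// When only `exposure` changes between calls, Core Image can reuse a
// cached intermediate for the blur instead of recomputing it (assuming
// the context's cacheIntermediates option is enabled, which is the default).
```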

CI also has a mechanism for explicitly defining such a caching point: insertingIntermediate(cache: true). But I noticed that this doesn't play nicely with ROI propagation.

For example, if I change the example above like this:

let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cached = blurred.insertingIntermediate(cache: true)
let cropped = cached.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))

the filter graph looks like this:

As you can see, the blur filter suddenly wants to process the whole image, regardless of the cropping that happens afterward. The inserted cached intermediate always requires the whole input image as its ROI.
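If you want to inspect these filter graphs yourself, Core Image can dump them via the CI_PRINT_TREE environment variable (documented in Apple's Core Image material). A sketch, where `MyApp` stands in for your own binary:

```shell
# Dump Core Image's render trees to the console while the app runs.
# "1" prints a basic graph; other values/modes exist (see Apple's docs).
CI_PRINT_TREE=1 ./MyApp
```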

I find this rather confusing. It prevents us from inserting explicit caching points into our pipeline, since we also support non-destructive cropping via the method described above. Processing all those unneeded pixels makes performance too low and memory consumption too high.

Is there a way to insert an explicit caching point into the pipeline that correctly propagates the ROI?

Replies

I think this is a bad API design.

Typically, you want to be sure that the system is not doing a pile of extra unnecessary work. This sort of API design doesn't provide that certainty.

My experience was with trying to partially decode very large images. It's not good enough to know that it might try to "optimise" by only decoding the region of interest; I need certainty that it will do that.

I suggest you ignore this region-of-interest feature and find some other way to do what you want with certainty.