I'm writing custom Core Image filters and I'm having a hard time really understanding the `extent` parameter of `CIKernel`'s `apply` method. In all the documentation and WWDC talks I've found so far, it's described as the "domain of definition" of the kernel, i.e. the area for which the kernel produces meaningful, non-zero results.
From that definition I would assume that the extent of the output of a convolution kernel is the same as the extent of the input image, because a convolution always combines multiple input values into one output value. But in the examples I found, and from observing the behavior of built-in kernels such as `CIGaussianBlur`, the output extent is always larger than the input extent (by an amount depending on the size of the convolution kernel).

I don't understand why. Why should the kernel produce results for pixels that lie outside of the original input domain?