Understanding output image extent of convolution kernels

I'm writing custom Core Image filters and I'm having a hard time really understanding the extent parameter of CIKernel's apply method.


In all the documentation and WWDC talks I've found so far, it's described as the "domain of definition" of the kernel, i.e. the area for which the kernel produces meaningful, non-zero results.


From that definition I would assume that the extent of the output of a convolution kernel is the same as the extent of the input image, because a convolution always combines multiple input values into one output value. But in the examples that I found, and from observing the behavior of built-in kernels such as CIGaussianBlur, the output extent is always larger than the input (depending on the size of the convolution kernel).
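For example, a quick check along these lines (just a sketch; the 100x100 solid-color input and the radius value are purely illustrative) shows the blurred image reporting a larger extent than its input:

    import CoreImage

    // Illustrative only: build a finite 100x100 input and blur it with radius 10.
    let input = CIImage(color: CIColor.red)
        .cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

    let blurred = input.applyingFilter("CIGaussianBlur",
                                       parameters: [kCIInputRadiusKey: 10])

    print(input.extent)    // (0, 0, 100, 100)
    print(blurred.extent)  // noticeably larger than the input on every side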


I don't understand why. Why should the kernel produce results for pixels that lie outside of the original input domain?

Replies

I am afraid that I cannot help you understand why; all I know is that since the inception of Core Image (or at least since I started using it back in the 10.6 days), you have always had to crop the result of any convolution kernel back to the extent of the image that went into it. Of course, applying an Affine Clamp first prevents the edges from fading out.
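Something like the following sketch shows that pattern (assuming an existing CIImage called input; the radius is arbitrary): clamp first so the edges don't fade, blur, then crop back to the original extent.

    import CoreImage

    // Sketch of the crop-back pattern described above. `input` is assumed to be
    // an existing CIImage; the radius is arbitrary.
    func blurKeepingOriginalExtent(_ input: CIImage, radius: Double) -> CIImage {
        let clamped = input.clampedToExtent()          // same idea as CIAffineClamp
        let blurred = clamped.applyingFilter("CIGaussianBlur",
                                             parameters: [kCIInputRadiusKey: radius])
        return blurred.cropped(to: input.extent)       // back to the input's size
    }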

I'm not saying that this is certainly the reason, but often in convolution the input image is padded around the edges (based on the size of the kernel) in some manner so that the convolution results are smooth and consistent at the original image's boundaries.

Yes, but that padded region would be the region of interest, which needs to be larger than the input, right? That shouldn't be the reason the extent of the output is also larger.
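For what it's worth, the two concepts do show up in different places when applying a custom kernel. A rough sketch (the kernel, radius, and names here are hypothetical, not from this thread) of where each one goes:

    import CoreImage

    // Hypothetical example: `kernel` is a custom convolution CIKernel with radius `r`,
    // and `input` is a CIImage.
    func applyConvolution(_ kernel: CIKernel, to input: CIImage, radius r: CGFloat) -> CIImage? {
        // The extent says where the output is defined. Making it larger than the
        // input lets the kernel write the partially covered border pixels.
        let outputExtent = input.extent.insetBy(dx: -r, dy: -r)

        return kernel.apply(
            extent: outputExtent,
            roiCallback: { _, rect in
                // The ROI callback answers a different question: which part of the
                // input is needed to render a given output rect.
                rect.insetBy(dx: -r, dy: -r)
            },
            arguments: [input]
        )
    }

Whether you keep that larger extent or crop it away afterwards is then up to the filter.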

>Why should the kernel produce results for pixels that lie outside of the original input domain?


Assuming the original goal of the process is optimization, wouldn't it be safe to conclude that the built-in algorithmic latitude simply allows the end result to render outside the input domain by default?