What exactly is the use of CIKernel DOD?

I wrote the following Metal Core Image Kernel to produce constant red color.

extern "C" float4 redKernel(coreimage::sampler inputImage, coreimage::destination dest)
{
    return float4(1.0, 0.0, 0.0, 1.0);
}

And then I have this in Swift code:

class CIMetalRedColorKernel: CIFilter {
    var inputImage:CIImage?
    static var kernel:CIKernel = { () -> CIKernel in
        let bundle = Bundle.main
        let url = bundle.url(forResource: "Kernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "redKernel", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else {
            return nil
        }

        let dod = inputImage.extent
        return CIMetalTestRenderer.kernel.apply(extent: dod, roiCallback: { index, rect in
            return rect
        }, arguments: [inputImage])
    }
}

As you can see, the DOD is given as the extent of the input image. But when I run the filter, I get a fully red image beyond the extent of the input image (the DOD). Why? I have multiple filters chained together and the overall size is 1920x1080. Isn't the red kernel supposed to run only over the DOD rectangle passed to it and produce clear pixels for anything outside the DOD?

Answered by Graphics and Games Engineer in 705775022

For additional followup questions:

..on what extent rect is CIKernel processing done (it can't be infinite)

It can be infinite and the value returned by dest.coord() in your kernels can be values in an infinite space. In practice though, when your image is rendered only a finite number of pixels will be rendered.

How do I know the limits of destination coordinates in the kernel? I only get normalized texture coordinates for the current pixel being processed. But, just like in a Metal compute kernel, if I need the absolute coordinates I am currently computing, how do I get them? A simple example: if I need to put a 10-pixel red border around the edge of the input image, I don't see a straightforward way to do it.

This code snippet and kernel will add a red border to an image.

static dispatch_once_t onceToken;
static CIColorKernel *k;
dispatch_once(&onceToken, ^{
    k = [CIColorKernel kernelWithFunctionName:@"red_border" fromMetalLibraryData:metalLibData() error:nil];
});
    
CGRect iExtent = image.extent;
return [k applyWithExtent:iExtent
                arguments:@[image, [CIVector vectorWithCGRect:iExtent], @(10)]];

And the corresponding Metal kernel:

extern "C" float4 red_border (coreimage::sample_t i, float4 extent, float s, coreimage::destination dest)
{
    float2 dc = dest.coord();
    // if the current location is within 's' of 'extent'
    if (dc.x < extent.x + s ||
        dc.y < extent.y + s ||
        dc.x > extent.x + extent.z - s ||
        dc.y > extent.y + extent.w - s)
    {
        // return "red" multiplied by the image's alpha
        return float4(1.,0.0,0.,1.) * i.a;
    }
    return i;
}

Corrections: in the outputImage code, it should be:

return CIMetalRedColorKernel.kernel.apply(extent: dod, roiCallback: { index, rect in
    return rect
}, arguments: [inputImage])

The issue is that, even though your apply call is specifying that your extent is inputImage.extent, your actual kernel is returning non-clear pixels outside that extent. In CoreImage, the “truth” is in the kernel code and the extent given to the apply must reflect that.

One easy way to resolve this is to have your kernel multiply the pixel it returns by the alpha of the input image. This way you are guaranteed that the result image will indeed have the same extent as the inputImage.
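As a sketch (not Apple's exact code), the alpha-multiply fix applied to the red kernel from the question might look like this, assuming the input is taken as a coreimage::sample_t so its premultiplied alpha is available; the function name and metallib setup are unchanged from the question:

```metal
#include <CoreImage/CoreImage.h>
using namespace metal;

// Take the input pixel as a sample so its alpha can gate the output.
// Outside the input image the premultiplied sample is (0,0,0,0), so
// the kernel returns clear there and the declared DOD of
// inputImage.extent becomes truthful.
extern "C" float4 redKernel(coreimage::sample_t s, coreimage::destination dest)
{
    return float4(1.0, 0.0, 0.0, 1.0) * s.a;
}
```

Because this version only reads the pixel at the destination location, it also qualifies as a color kernel, so it could be loaded as a CIColorKernel and applied without an roiCallback.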

Another way is to have your apply call declare that the extent is CGRectInfinite and then call imageByCroppingToRect:inputImage.extent on the result.
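In Swift, the infinite-extent-then-crop approach could be sketched like this, reusing the outputImage property from the question (the kernel itself is unchanged):

```swift
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    // Declare the extent the kernel actually has: infinite.
    let result = CIMetalRedColorKernel.kernel.apply(
        extent: .infinite,
        roiCallback: { _, rect in rect },
        arguments: [inputImage])
    // Then crop the (conceptually infinite) result back down
    // to the extent we actually want.
    return result?.cropped(to: inputImage.extent)
}
```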

For followup questions:

Unfortunately, cropping to extent did not solve the problem...

That should have worked, but the crop will only work if kernel.apply(extent:) is given an extent of CGRect.infinite.

...But I would really like to understand if one really needs to do the same multiply by alpha in every filter in the filter chain? 

Yes and no. The extent passed to kernel.apply(extent:) always needs to match the behavior of the kernel, but that does not mean you always need to multiply by alpha.

This is because rgba samples in a CIKernel are typically already premultiplied. Consider, for example, a trivial kernel that modifies the RGB of a pixel. The kernel (in CIKL) would look like this:

kernel vec4 darken (__sample s)
{
    s.rgb *= 0.5;
    return s;
}

or perhaps

kernel vec4 something (__sample s)
{
    s = unpremultiply(s);
    s.rgb = foo(s.rgb);
    return premultiply(s);
}

In the case of the former, the unpremul/premul is not needed because the operation is linear.

In the case of the latter, the unpremul/premul are balanced.

In either case, when the __sample s comes from outside the input image, the value of s, and hence the value returned from the kernel, will be (0,0,0,0).

Because of this behavior, it is valid for the kernel to be applied with a finite extent like kernel.apply(extent: inputImage.extent, ...).

If, as in the original code provided, the kernel is just

kernel vec4 redEverywhere ()
{
    return vec4(1,0,0,1);
}

then the apply must declare that the kernel is infinite.
