Core Image: Gamma curve best practice

When setting up a CIContext, one can specify the workingColorSpace. The color space also determines which gamma curve is applied (usually sRGB or linear).
When no color space is set explicitly, Core Image uses a linear curve. This is also stated in the (pretty outdated) Core Image Programming Guide:

> By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space.

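For reference, the two context setups I'm comparing look roughly like this:

```swift
import CoreImage

// Default context: Core Image uses a linear working color space.
let linearContext = CIContext()

// Explicitly request a non-linear sRGB working color space instead.
let srgbContext = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!
])
```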
Now I'm wondering if this makes sense in most scenarios.
For instance, if I blur a checkerboard pattern with a CIGaussianBlur filter using a default CIContext, I get a different result than when using a non-linear sRGB color space. See here.
White clearly gets more weight than black with linear gamma, which makes sense, I suppose. But I find that the non-linear (sRGB) result looks "more correct".
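Here is a reduced version of the experiment (a sketch using the CIFilterBuiltins convenience API; the sizes and blur radius are arbitrary):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Checkerboard test image (the generator's output has infinite extent, so crop before rendering).
let checkerboard = CIFilter.checkerboardGenerator()
checkerboard.color0 = .white
checkerboard.color1 = .black
checkerboard.width = 40

let blur = CIFilter.gaussianBlur()
blur.inputImage = checkerboard.outputImage
blur.radius = 20

let extent = CGRect(x: 0, y: 0, width: 320, height: 320)
let blurred = blur.outputImage!.cropped(to: extent)

// Same image and filter, rendered through two differently configured contexts.
let linearContext = CIContext() // default: linear working space
let srgbContext = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!
])

let linearResult = linearContext.createCGImage(blurred, from: extent)
let srgbResult = srgbContext.createCGImage(blurred, from: extent)
```

Rendering these two side by side reproduces the difference described above.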

What are best practices here? When should the gamma curve be a consideration?


Linear gamma is preferred for most kinds of image filtering — blending, scaling, blurring, color operations, etc. The traditional workflow is to:

1. Convert from the Source color space to a linear “Working” color space.
2. Do the image filtering in the Working space.
3. Convert from the Working color space to the Output color space.

The Source & Output color spaces are usually gamma-based spaces — the Source is often sRGB, while the Output is typically your monitor profile (whose gamma curve is usually close to sRGB's).
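In Core Image terms, the three steps could be sketched like this (the file path is a hypothetical placeholder, and Display P3 merely stands in for a monitor profile; in a normal pipeline the CIContext performs steps 1 and 3 for you, based on the image's tagged color space and the context's output color space):

```swift
import CoreImage

// Assumed example spaces: source is sRGB, output stands in for a monitor profile.
let sourceSpace = CGColorSpace(name: CGColorSpace.sRGB)!
let outputSpace = CGColorSpace(name: CGColorSpace.displayP3)!

let someImageURL = URL(fileURLWithPath: "/tmp/input.png") // hypothetical path
let input = CIImage(contentsOf: someImageURL)!

// 1. Source -> linear working space.
let working = input.matchedToWorkingSpace(from: sourceSpace)!

// 2. Filter in the (linear) working space.
let filtered = working.applyingGaussianBlur(sigma: 10)

// 3. Working space -> output space.
let result = filtered.matchedFromWorkingSpace(to: outputSpace)!
```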

With that said, making images “look good” is more of an art than a science. If you get better results with a different gamma curve, use what works.

Thanks Cutterpillow, that has also been my understanding so far.

Especially for blending, linear makes total sense. But for the blur, it seems the output is not what you would expect…