How are the three Core Image working space options related?

Core Image offers three ways to set a color space:

  • kCIContextOutputColorSpace
  • kCIContextWorkingColorSpace
  • kCIImageColorSpace


How are they related, and which should I use? For example, I found that setting kCIImageColorSpace to nil, as Apple recommends in its Core Image performance guide (https://developer.apple.com/library/content/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_performance/ci_performance.html), messes up the gamma of the image.

Replies

It's impossible to answer your "Which should I use?" question without knowing more about your application. If you don't currently know much about RGB color spaces, chances are good that you should not use any of them. As the linked document points out about passing a null value for kCIImageColorSpace: don't use it unless you need it. If you do use it, you'll need to make sure that the input image has already been converted to the working space, or, the other way around, that you set the working space to the known color space of your input image (see the sketch below).
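A minimal Swift sketch of that second approach, assuming the input file is known to be encoded in sRGB (the file path and the choice of sRGB are placeholders, not anything from the original post):

```swift
import Foundation
import CoreImage

// Assumption: we know the input JPEG is actually encoded as sRGB.
let inputColorSpace = CGColorSpace(name: CGColorSpace.sRGB)!

// Make the context's working space match the input's color space,
// so skipping color management on the image does not shift its colors.
let context = CIContext(options: [
    .workingColorSpace: inputColorSpace
])

// NSNull() for kCIImageColorSpace tells Core Image not to color-manage
// this image; its pixel values are used as-is in the working space.
let url = URL(fileURLWithPath: "/path/to/input.jpg") // placeholder path
let image = CIImage(contentsOf: url, options: [.colorSpace: NSNull()])
```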

A CIImage is just a recipe for how to change an image buffer. When you render an image into a CIContext, that recipe is applied; you can see it by checking [CIImage description]. All filters and kernels are applied in the working color space (which should mostly be Display P3, if you ask me). The pixels of the input image, however, are encoded in a particular image color space. When you render the output image, the pixel values of the input image are converted to the working color space before any filter is applied. After all the filters have been applied, the pixels are converted to the output color space: mostly Display P3 as well if you render on screen (check the window's color space), sRGB if you render to a JPEG for the web, or Rec. 709 if you render into a video pixel buffer. Telling the CIContext all of these color spaces makes sure the correct color transformations are used and that the output image is exactly what you want.
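Here is a hypothetical Swift sketch of how the three options map onto that pipeline. The specific spaces chosen (Display P3 working/output, sRGB input), the file path, and the sepia filter are placeholders; substitute whatever matches your own pipeline:

```swift
import Foundation
import CoreImage

// 1. kCIImageColorSpace: how the input pixels are encoded.
let inputURL = URL(fileURLWithPath: "/path/to/input.jpg") // placeholder
let input = CIImage(contentsOf: inputURL,
                    options: [.colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!])!

// 2. kCIContextWorkingColorSpace: the space filters/kernels operate in.
// 3. kCIContextOutputColorSpace: the space the rendered pixels end up in.
let context = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
    .outputColorSpace:  CGColorSpace(name: CGColorSpace.displayP3)!
])

// Core Image converts input -> working space, applies the filter there,
// then converts working -> output space when the image is rendered.
let filtered = input.applyingFilter("CISepiaTone",
                                    parameters: [kCIInputIntensityKey: 0.8])
let rendered = context.createCGImage(filtered, from: filtered.extent)
```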


Removing the color space of the input images makes the CIContext assume a false input color space, I guess. Is the image darker or lighter than before?

I now understand much better how and when Core Image converts color spaces, thanks to you.


The image looked washed out, with low contrast, IIRC, when I set kCIImageColorSpace to nil.


I was just trying to follow Apple's recommendation to avoid color space conversions. I'll try setting both kCIContextWorkingColorSpace and kCIContextOutputColorSpace to the input image's color space; that should avoid the conversions. My kernel doesn't require a linear space.
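A sketch of that "no conversions" setup, assuming the input image carries an embedded color space tag and the kernel really doesn't need a linear working space (the file path below is a placeholder):

```swift
import Foundation
import CoreImage

let inputURL = URL(fileURLWithPath: "/path/to/input.jpg") // placeholder
guard let input = CIImage(contentsOf: inputURL),
      let imageSpace = input.colorSpace else {
    fatalError("Input image has no embedded color space")
}

// Working space == output space == image space, so Core Image has no
// color matching to do on either end of the render.
let context = CIContext(options: [
    .workingColorSpace: imageSpace,
    .outputColorSpace:  imageSpace
])
```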