Core Image lets us specify color spaces for a CIContext, as in:
let context = CIContext(options: [kCIContextOutputColorSpace: NSNull(), kCIContextWorkingColorSpace: NSNull()])
Or for a CIImage, as in:
let image = CIImage(cvImageBuffer: inputPixelBuffer, options: [kCIImageColorSpace: NSNull()])
How are these three options (kCIContextOutputColorSpace, kCIContextWorkingColorSpace, and kCIImageColorSpace) related?
What are the pros and cons of setting each of them?
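For reference, here is a minimal sketch of where each option takes effect in a capture-and-render pipeline; the pixel buffers and the render call are assumptions for illustration, not part of the snippets above:

import CoreImage
import CoreVideo

// Working space: the space filters operate in; output space: the space
// rendered pixels are converted to. NSNull() disables that conversion.
let context = CIContext(options: [kCIContextWorkingColorSpace: NSNull(),
                                  kCIContextOutputColorSpace: NSNull()])

func render(_ inputPixelBuffer: CVPixelBuffer, into outputPixelBuffer: CVPixelBuffer) {
    // Image space: the space the source pixels are assumed to be in.
    // NSNull() means the image is not color-matched into the working space.
    let image = CIImage(cvImageBuffer: inputPixelBuffer,
                        options: [kCIImageColorSpace: NSNull()])

    // With all three set to NSNull(), pixel values pass through unmanaged:
    // no matching on input, no working-space conversion, none on output.
    context.render(image, to: outputPixelBuffer)
}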