(For Apple folks: rdar://47577096.)
# Background
The Core Video function `CVImageBufferCreateColorSpaceFromAttachments` creates custom color profiles with simplified transfer functions instead of using the standard system color profiles. Let’s take ITU-R 709 as an example.
The macOS `Rec. ITU-R BT.709-5` system color profile specifies the transfer function as
```
f(x) = { (0.91x + 0.09)^2.222   where x >= 0.081
       { 0.222x                 where x <  0.081
```
The Apple-custom `HDTV` color profile created by the above Core Video function specifies the transfer function as
```
f(x) = x^1.961
```
My understanding is that `x^1.961` is intended as the closest pure-power approximation of the more complex piecewise ITU-R 709 transfer function.
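How close that approximation actually is can be checked numerically. A minimal Python sketch (both formulas are taken directly from the profiles quoted above; the sampling grid and the probe point `x = 0.02` are my own choices):

```python
# Compare the piecewise Rec. 709 transfer function from the system
# profile against the pure-gamma x^1.961 curve from the HDTV profile.

def rec709_piecewise(x: float) -> float:
    """Transfer function from the `Rec. ITU-R BT.709-5` system profile."""
    return (0.91 * x + 0.09) ** 2.222 if x >= 0.081 else 0.222 * x

def hdtv_gamma(x: float) -> float:
    """Simplified transfer function from the Apple-custom `HDTV` profile."""
    return x ** 1.961

# Sample both curves across the encoded range [0, 1].
samples = [i / 1000 for i in range(1001)]
worst = max(abs(rec709_piecewise(x) - hdtv_gamma(x)) for x in samples)
print(f"max absolute error: {worst:.4f}")

# The relative error is largest in the shadows: near x = 0.02 the
# piecewise (linear-segment) value is several times the gamma value.
x = 0.02
print(f"piecewise: {rec709_piecewise(x):.5f}  gamma: {hdtv_gamma(x):.5f}")
```

The absolute error stays small in the mid-tones and highlights, but the relative error below the `x = 0.081` breakpoint is large, which matches the loss of dark detail described in question 2.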
# Questions
1. Why use a custom color profile with a simplified transfer function rather than the official specification?
- Was it done for performance?
- Was it done for compatibility with non-QuickTime-based applications?
- etc.
2. Speaking of compatibility, a problem arises when the encoding application uses the official transfer function while the decoding application uses the approximation. I tested this with two images: one tagged with the `Rec. ITU-R BT.709-5` system color profile, and a copy of it tagged instead with the Apple-custom `HDTV` profile. The latter image loses detail in its darker areas. Why go to the trouble of approximating the transfer function when the approximation isn’t that great?
3. Are the Apple-custom color profiles also used for encoding? Or are they only for decoding?
4. Another thing that concerns me is that the Apple-custom `HDR (PQ)` and `HDR (HLG)` color profiles use the same simplified transfer function of `f(x) = x^1.801`. Isn’t the whole point of the PQ and HLG standards to define more sophisticated transfer functions? Doesn’t simplifying those two transfer functions defeat their purpose?
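To illustrate how far a pure gamma is from PQ, the SMPTE ST 2084 EOTF can be evaluated directly. A Python sketch (the ST 2084 constants are from the published standard; the `x^1.801` curve is the one reported above, and normalizing PQ so that 1.0 is its 10,000 cd/m² peak is my own choice for the comparison):

```python
# Compare the SMPTE ST 2084 (PQ) EOTF against the x^1.801 gamma that
# the Apple-custom `HDR (PQ)` profile reportedly uses.

# ST 2084 constants (exact rationals from the standard).
M1 = 2610 / 16384          # ~0.1593
M2 = 2523 / 4096 * 128     # ~78.8438
C1 = 3424 / 4096           # ~0.8359
C2 = 2413 / 4096 * 32      # ~18.8516
C3 = 2392 / 4096 * 32      # ~18.6875

def pq_eotf(signal: float) -> float:
    """ST 2084 EOTF, normalized so 1.0 corresponds to 10,000 cd/m^2."""
    p = signal ** (1 / M2)
    return (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

def hdr_gamma(signal: float) -> float:
    """Simplified transfer function reportedly used by the custom profile."""
    return signal ** 1.801

# At mid-signal the curves are nowhere near each other: PQ maps 0.5 to
# roughly 92 cd/m^2 (under 1% of peak), while the gamma curve maps it
# to almost 29% of peak.
print(pq_eotf(0.5), hdr_gamma(0.5))
```

HLG is likewise piecewise (a square-root segment plus a logarithmic segment), so a single power curve cannot track it either, which makes the shared `x^1.801` approximation for both HDR profiles all the more puzzling.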