I've got a custom Metal Core Image kernel (written with CIImageProcessorKernel) that I'm trying to make work properly with HDR video (HDR10 PQ to start).
I understand that for HDR video, the RGB values coming into the shader can be below 0.0 or above 1.0. However, I don't understand how the 10-bit integer values (i.e. 0–1023) in the video are mapped into floating point.
What are the minimum and maximum values in floating point? i.e., what will a 1023 (pure white) pixel be in floating point in the shader?
At 11:32 in WWDC20 session 10009, "Edit and play back HDR video with AVFoundation," there's an example of a Core Image Metal kernel (reproduced below) that isn't HDR aware and therefore won't work: it inverts the incoming values by subtracting them from 1.0, which clearly breaks down when 1.0 is not the maximum possible value. How should this be implemented to be HDR aware?
Code Block metal
extern "C" float4 ColorInverter(coreimage::sample_t s, coreimage::destination dest)
{
    return float4(1.0 - s.r, 1.0 - s.g, 1.0 - s.b, 1.0);
}
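For reference, my best guess so far is something like the sketch below, where headroom is a hypothetical value I'd pass in myself as a kernel argument (perhaps derived from the content's or display's peak brightness); it is not part of the WWDC sample, and I don't know whether inverting around a supplied peak is actually the right model, so treat it as a guess rather than a working solution:

Code Block metal
#include <metal_stdlib>
using namespace metal;
#include <CoreImage/CoreImage.h>

// Hypothetical HDR-aware variant: invert around an assumed peak ("headroom")
// instead of 1.0. The headroom parameter is my own assumption, supplied from
// the app side; it is not something the WWDC sample provides.
extern "C" float4 HDRColorInverter(coreimage::sample_t s, float headroom, coreimage::destination dest)
{
    float3 inverted = float3(headroom) - float3(s.r, s.g, s.b);
    // Clamp negatives in case a sample exceeds the assumed peak.
    return float4(max(inverted, float3(0.0)), s.a);
}

The assumption behind this sketch is that the kernel needs to know the actual maximum of the working range rather than hard-coding 1.0, which is exactly the piece I can't find documented.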