Rendering for Display P3 displays

In the Working with Wide Color session, Justin Stoyles mentioned that the brightest red (1.0, 0.0, 0.0, 1.0) in the Display P3 gamut is represented in Extended sRGB as (1.358, -0.074, -0.012, 1.0). Indeed, to display the brightest red in an MTKView with the bgra10_xr_srgb pixel format we need to use those values. But I couldn't find out what conversion technique is used to get them.


So can anyone tell me how to convert colors from the normalized Display P3 gamut to extended range for use with MTKViews?

Replies

This is described in What's new in Metal, Part 2 from WWDC 2016.


You render using a 10-bit Extended Range sRGB format, which allows you to render with the standard sRGB gamma that most artwork is tuned for, while allowing values in the wider P3 gamut.
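A minimal sketch of that setup (assuming an iOS MTKView; everything except the pixel format is illustrative):

```swift
import MetalKit

// Minimal sketch: a view whose drawable uses the 10-bit extended-range,
// sRGB-encoded format, so fragment outputs outside [0, 1] survive and
// can reach the display's P3 gamut.
let view = MTKView(frame: .zero, device: MTLCreateSystemDefaultDevice())
view.colorPixelFormat = .bgra10_xr_srgb
```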

Thank you for the response.

I have watched the session; it's really great, but it's mostly about automated color matching mechanisms and says little about manual color matching. I tried to calculate color matching matrices for the Display P3 -> Extended Range sRGB conversion, but I keep getting (1.22486, -0.0420312, -0.0196301) as the reddest red.


My actual problem is that I'm using Core Image to process images with wide gamut content. My CIContext's working and output color spaces are both set to Display P3, and the CIContext.render(_:to:commandBuffer:bounds:colorSpace:) method still renders the output image within sRGB bounds. I've tried passing extendedSRGB/extendedLinearSRGB/displayP3 as the last argument, passing MTKView.currentDrawable.texture as the destination texture. The result in the Metal view is still darker than the result of a UIImageView displaying a CGImage generated from the same context and source CIImage.


With the help of the Xcode GPU frame capture I noticed that CIContext outputs are always in the range 0.0-1.0. So I'm trying to manually convert those values to Extended Range sRGB via an additional custom render pass with a custom color-matching fragment shader.
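For reference, a sketch of what such a color-matching pass could look like. The matrix values are the commonly published linear Display P3 -> linear sRGB transform and are my assumption, not something confirmed by Apple; the function and texture names are hypothetical:

```swift
import Metal

// Hypothetical fragment shader source for a manual P3 -> extended linear
// sRGB color-matching pass. float3x3 takes columns, so this is the
// widely published linear Display P3 -> linear sRGB matrix (both D65).
let colorMatchSource = """
#include <metal_stdlib>
using namespace metal;

constant float3x3 p3ToLinearSRGB = float3x3(
    float3( 1.22494, -0.04206, -0.01964),   // column 0
    float3(-0.22494,  1.04206, -0.07864),   // column 1
    float3( 0.0,      0.0,      1.09828));  // column 2

fragment float4 colorMatch(float4 pos [[position]],
                           texture2d<float> src [[texture(0)]]) {
    float4 c = src.read(uint2(pos.xy));
    // Assumes the source texture holds *linear* P3 values; the results
    // outside [0, 1] are exactly what the extended-range formats keep.
    return float4(p3ToLinearSRGB * c.rgb, c.a);
}
"""

// Compile at runtime (assumes a Metal-capable device).
let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: colorMatchSource, options: nil)
```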

I too am confused about where these numbers in the WWDC video came from -- in my experience P3(r: 1.0, g: 0.0, b: 0.0) converts to one of two sets of sRGB values, depending on whether or not you apply standard gamma correction to the result:


(1.0930, -0.2267, -0.1501) <-- these are the values you get from Apple's ColorSync going from P3 -> Extended sRGB

(1.2249, -0.0420, -0.0196) <-- linear sRGB; no gamma correction


I don't know how they are getting (1.358, -0.074, -0.012)
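For what it's worth, both of those candidate values can be reproduced from the commonly published linear Display P3 -> linear sRGB matrix (an assumption on my part, not taken from Apple documentation); a small sketch:

```swift
import Foundation

// Commonly published linear Display P3 -> linear sRGB transform
// (both spaces share the D65 white point), row-major.
let p3ToLinearSRGB: [[Double]] = [
    [ 1.22494, -0.22494,  0.0    ],
    [-0.04206,  1.04206,  0.0    ],
    [-0.01964, -0.07864,  1.09828],
]

// Extended-range sRGB transfer function: mirror the curve for negatives.
func encodeExtendedSRGB(_ x: Double) -> Double {
    let u = abs(x)
    let encoded = u <= 0.0031308 ? 12.92 * u : 1.055 * pow(u, 1.0 / 2.4) - 0.055
    return x < 0 ? -encoded : encoded
}

// Brightest P3 red; P3 uses the sRGB transfer curve, so (1, 0, 0)
// is also (1, 0, 0) after linearization.
let p3Red = [1.0, 0.0, 0.0]
let linear = p3ToLinearSRGB.map { row in
    zip(row, p3Red).map(*).reduce(0, +)
}
// linear  ≈ ( 1.2249, -0.0421, -0.0196)  <- no gamma correction
let gamma = linear.map(encodeExtendedSRGB)
// gamma   ≈ ( 1.0931, -0.2268, -0.1501)  <- gamma-corrected extended sRGB
```

Neither path produces the (1.358, -0.074, -0.012) from the session, so whatever Metal does with bgra10_xr_srgb seems to involve something beyond this standard conversion.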

I'm less familiar with the math needed to convert between sRGB and P3. That's probably a question better asked in the Core Image forum.


petere is correct in that you'd get different values depending on whether you're converting to/from linear or gamma-corrected sRGB.

OK, thanks, Dan. I'll move the discussion to the Core Image place.

So, the topic is now in the Core Image place. Can anyone help me with rendering in wide gamuts?

Have you tried setting the kCIContextWorkingFormat of your CIContext to kCIFormatRGBAh?

I can also recommend reading the latest two posts from http://iosoteric.com about wide gamut color rendering on iOS.

Sure, the working format is set to RGBAh, and the working and output color spaces are set to displayP3 (I tried other combinations with extendedSRGB and extendedLinearSRGB as well). Calling createCGImage(_:from:format:colorSpace:) and displaying the result in an image view works just fine, but calling render(_:to:commandBuffer:bounds:colorSpace:) causes the result to be clamped to the sRGB gamut, losing all the information from the extended range.


P.S. The MTLTexture passed to render(_:to:commandBuffer:bounds:colorSpace:) is received from an MTKView with the bgra10_xr (or bgra10_xr_srgb) colorPixelFormat.
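Concretely, the call in question looks roughly like this (a sketch; the context, image, command buffer, and view are assumed to come from the surrounding code):

```swift
import CoreImage
import MetalKit

// Sketch: render a CIImage into an MTKView's drawable texture.
// `context`, `image`, `commandBuffer`, and `view` are assumed to exist.
func render(image: CIImage, context: CIContext,
            commandBuffer: MTLCommandBuffer, view: MTKView) {
    guard let drawable = view.currentDrawable else { return }
    context.render(image,
                   to: drawable.texture,
                   commandBuffer: commandBuffer,
                   bounds: image.extent,
                   colorSpace: CGColorSpace(name: CGColorSpace.extendedSRGB)!)
}
```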

I just tried a bit in my code. I'm using the CIRenderDestination APIs for rendering into the view because they allow for better parallelization and that seems to work. Here's my setup (simplified):


guard let currentDrawable = view.currentDrawable else { return }
let drawableSize = view.drawableSize

let destination = CIRenderDestination(width: Int(drawableSize.width),
                                      height: Int(drawableSize.height),
                                      pixelFormat: .rgba8Unorm,
                                      commandBuffer: commandBuffer,
                                      mtlTextureProvider: { () -> MTLTexture in
                                          return currentDrawable.texture
                                      })
destination.colorSpace = CGColorSpace(name: CGColorSpace.displayP3)

let task = try! self.context.startTask(toRender: input, to: destination)

You're drawing into an rgba8Unorm pixel format; it can't represent extended range. Try, for example, rendering this image and placing a UIImageView with the same image next to it. To avoid the asset catalog's image management mechanism you can just add the image to your project and initialize a UIImage by calling:

imageView.image = UIImage(named: "p3.png")


On iPhone 7/8/X/XS/XR devices (or any other device with a display supporting the P3 color space) you'll see the difference between the Metal view and the image view.

I've read it. The author (like me) assumes that the reddest P3 red should be either (1.2249, -0.04203, -0.0196) or (1.0930, -0.2267, -0.1501) in one of the extended range formats. But in the WWDC session mentioned in the original post, Justin Stoyles said that the reddest red for the Metal extended range is (1.358, -0.074, -0.012).

You can verify that with the simple experiment described above. The standard conversion from P3 to Extended Range doesn't work for Metal. Either we're doing something wrong, or there is a bug in the Metal engine.

Also, I couldn't make Core Image render Extended Range into a Metal texture. Using a CIContext I could never achieve the same result as displaying the same CIImage by simply wrapping it in a UIImage and showing it in a UIImageView.

Strange, they look the same to me. I tried various settings for the metal view, CIContext, and render destination, and (if I don't mess up gamma by setting one to linear and the other not) the image looks the same as in the UIImageView. Even when not explicitly specifying the colorPixelFormat of the metal view, the working color space of the context, and the color space of the rendering destination, they look identical to me.

Can you maybe upload a screenshot to show what you mean?


Also, did you try to set the format of the view and the rendering destination to MTLPixelFormatRGBA16Float? It seems the MTLPixelFormatBGR10_XR formats are not supported by Core Image, but 16-bit float is.

Turns out your test image somehow lost the P3 color space during download and was just a normal RGB image when I just tested...


I found another one and played around with it. As it turns out, MTLPixelFormatBGR10_XR is indeed not supported, but MTLPixelFormatRGBA16Float actually works! Here's the full setup:
  • set the colorPixelFormat of the MTKView to MTLPixelFormatRGBA16Float
  • set the workingColorSpace of the CIContext to extendedLinearSRGB (or non-linear, if that suits your needs better)
  • set the pixelFormat of the CIRenderDestination to MTLPixelFormatRGBA16Float
  • [optional] set the colorSpace of the destination to something with extended range (seems to be extendedLinearSRGB by default when using 16-bit float targets)

Still doesn't work for me. I've created a repo to demonstrate my results; I'd be very grateful if you could tell me what I'm doing wrong.
In the sample project I placed 2 Metal views and 1 UIImageView. I generated a fully red Display P3 image and rendered it in each view. The upper Metal view has an rgba16Float pixel format, the bottom one bgra10_xr_srgb. On my devices (iPhone 7/X/XS) the upper Metal view clearly renders in the sRGB gamut; the bottom one seems to render in wide gamut, but it's not the same as the UIImageView (you can see a difference between the views if you look closely).