How do MTKView/CAMetalLayer and extended colorspaces work?

These make no sense. Several of the presentations on wide gamut lack specific example code. I would assume that if I have linear rgba16f data, I could specify an srgb or linearSrgb colorspace and get the same output to the screen, but that is not the case. There is no documentation except for the briefest of comments on each color space, and nothing on how MTKView actually transforms the colors.

There is even less documentation on extended color spaces, and on what to do when they fail to display the expected results. When we set one of these, the gamma is totally off. And it's unclear what to set so we can go from HDR back to EDR.

src srgb8 -> srgbColorSpace -> gamma 2.2 -> incorrect; doubly applied srgb? Why can't the layer just do passthrough?

rgba16f -> srgbColorSpace -> gamma 2.2 -> correct; seems to apply gamma and composites properly with the rest of AppKit

rgba16f -> linearSrgbColorSpace -> gamma 1.0 -> incorrect; isn't my data already linear?


Replies

MTKView and CAMetalLayer are part of the Metal stack on iOS and macOS, used for low-level, high-performance graphics rendering.

An MTKView is a view capable of displaying Metal content. It uses a CAMetalLayer to manage the underlying Metal drawables and to present the rendered content on screen. CAMetalLayer is a CALayer subclass optimized for Metal rendering. Extended color spaces go beyond the standard sRGB color space and can represent a wider range of colors.

iOS and macOS support wide-gamut color spaces such as Display P3, which are used for displaying content with a wider color gamut on capable displays. When you create an MTKView or a CAMetalLayer, you can specify the desired color space, and the framework will take care of converting the content to the display's color space automatically.
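A minimal sketch of that configuration, assuming an MTKView; the helper name and the Display P3 choice here are illustrative, not prescribed:

```swift
import MetalKit
import QuartzCore

// Sketch: tag an MTKView's backing CAMetalLayer with a color space.
// Display P3 is just an example choice.
func configureView(_ view: MTKView) {
    view.colorPixelFormat = .rgba16Float              // linear float drawable
    if let layer = view.layer as? CAMetalLayer {
        // The tag tells the compositor how to interpret the drawable's pixels.
        layer.colorspace = CGColorSpace(name: CGColorSpace.displayP3)
    }
}
```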

Yes, we've got all that. But what are the correct settings for iOS and macOS? I've listed what I've tried so far above, along with the failures. What have you tried that worked? On SO, someone stated that MTKView can only composite to srgb spaces (a limitation of iOS and/or macOS?). We also can't use the XR_srgb formats except on the newer Macs, but we are doing development on Intel.

Why do I need to set a color space at all if I'm already passing linear data in rgba16f? I assume the rest of the UIKit/AppKit composite has a colorspace as well, not just my MTKView, so all of those need to be in agreement.

So far that agreement seems to be that my content needs to be srgb (or DisplayP3 with its similar srgb curve). Rec2020_PQ, which is HDR10, gets even more complex with EDR metadata on the CAMetalLayer, but that's all that HDR televisions speak. That means a lot of per-pixel matrix ops, and a different compositing path for the srgb8 UI and the rgba16f render.

We can create textures that are srgb or linear, and the GPU hardware will convert the srgb data back to linear for us. But I've got linear data in the render texture that doesn't look correct with most of these color spaces. I haven't changed the data. And the results are incorrect even in EDR (0 to 1), let alone HDR (<0, >1).
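To spell out the distinction I mean (a sketch; the sizes are arbitrary):

```swift
import Metal

// Same colorimetry, different transfer encoding: sampling the _srgb
// texture degammas automatically, while the float texture is read
// back as-is, so it must already hold linear data.
let srgbDesc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba8Unorm_srgb, width: 256, height: 256, mipmapped: false)
let linearDesc = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .rgba16Float, width: 256, height: 256, mipmapped: false)
```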

  • The CAMetalLayer.colorspace property effectively tags the drawable. This is how you opt into color management. If your shader is outputting linear sRGB, you must tag your drawable as such to have the system color-match your drawable to the display’s color space.

    On iOS, UIKit’s working color space (Extended sRGB) differs from the output color space (Display P3). You can think of UIKit’s layers as being tagged with Extended sRGB.
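In code, the tagging described above might look like this (a sketch; `metalLayer` stands in for your app's CAMetalLayer, and the EDR opt-in property is macOS-only):

```swift
import QuartzCore

// Sketch: the shader outputs linear sRGB, so tag the drawable as such
// and let the system color-match it to the display.
metalLayer.pixelFormat = .rgba16Float
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)
metalLayer.wantsExtendedDynamicRangeContent = true  // macOS-only EDR opt-in
```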


I mean, zero of the Metal sample apps even set a colorspace. So what do we reference here? macOS and iOS are supposed to be the ultimate "color managed" platforms, although iOS couldn't afford the per-pixel ops on the first phones/iPads.

Am I tagging my pixels, which are linear, as srgb? Do I need a transfer function of gamma 1.0 or gamma 2.2? What is kCGColorSpaceSRGB vs. kCGColorSpaceLinearSRGB? These are both srgb colorimetry, but am I stating that my content has the srgb gamma curve in the former and is linear in the latter? And how do I get the cheapest passthrough? I can set my display color space to sRGB (709) or P3, but how do I get the cheapest, fastest, and highest quality pixel output to the screen? Before all the color management, on iOS I had to use BGRA8 instead of RGBA8 to avoid a swizzle blit. These are the kinds of details that are sorely lacking even in the WWDC presentations.

There is also True Tone, and the user adjusting the maximum brightness of the display. Those also need to be responded to. And what do we do when NSApplicationDidChangeScreenParametersNotification fires and says screen.maximumExtendedDynamicRangeColorComponentValue changed?

Am I tagging my pixels, which are linear, as srgb? Do I need a transfer function of gamma 1.0 or gamma 2.2?

I’m not sure which of two possible things you’re referring to here, so I’ll explain both and how they interact.

There are a number of MTLPixelFormats which end in _sRGB. These formats implicitly degamma when sampled, and engamma when written to. So if you set the pixelFormat of your MTKView/CAMetalLayer to one of these _sRGB formats, that means that the GPU will implicitly engamma your shader’s output when it writes it into the memory that backs the drawable’s texture.

The compositor reads the drawable’s memory directly; it doesn’t sample it like a texture. Thus, if you use an _sRGB pixel format, your shader can work in linear space as long as you set your CAMetalLayer/MTKView’s colorspace to an sRGB colorspace. The compositor will read the engamma’d pixel values directly, and then color-match them from sRGB space to the display’s colorspace.
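Put together, that might look like this (a sketch; `view` is assumed to be your MTKView):

```swift
import MetalKit
import QuartzCore

// Sketch: the _sRGB format makes the GPU engamma shader output on
// write; the colorspace tag tells the compositor the memory holds
// sRGB-encoded values, so it can color-match them to the display.
view.colorPixelFormat = .bgra8Unorm_srgb
if let layer = view.layer as? CAMetalLayer {
    layer.colorspace = CGColorSpace(name: CGColorSpace.sRGB)
}
```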

What is kCGColorSpaceSRGB vs. kCGColorSpaceLinearSRGB? These are both srgb colorimetry, but am I stating that my content has the srgb gamma curve in the former and is linear in the latter?

Correct.
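That is, as a sketch:

```swift
import CoreGraphics

// Same sRGB primaries and white point; only the transfer function differs.
let gammaEncoded = CGColorSpace(name: CGColorSpace.sRGB)        // sRGB curve
let linear       = CGColorSpace(name: CGColorSpace.linearSRGB)  // gamma 1.0
```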

And how do I get the cheapest passthrough? I can set my display color space to sRGB (709) or P3, but how do I get the cheapest, fastest, and highest quality pixel output to the screen?

If by “highest quality” you mean “accurately color-matched”, this is in direct opposition to “cheapest” and “fastest”. The fastest approach is to opt out of color matching entirely, by setting the colorspace property to nil. If you do want colormatching, however, you probably want to minimize the amount of work in your shaders. If you use an _sRGB pixel format, you can avoid an explicit engamma/degamma in your shader as discussed above. If you’re working with HDR, you can either implement tone-mapping yourself, or you can use CAEDRMetadata to have the compositor tone-map for you. Which will be faster depends on your specific use case.
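Sketches of the two ends of that trade-off (the `metalLayer` variable and the HDR10 luminance values are placeholders, not recommendations):

```swift
import QuartzCore

// Cheapest: opt out of color matching; pixel values go to the display
// as-is, interpreted in its native color space.
metalLayer.colorspace = nil

// Or hand tone-mapping to the compositor (placeholder luminance values).
metalLayer.pixelFormat = .rgba16Float
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
metalLayer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.005,
                                             maxLuminance: 1000,
                                             opticalOutputScale: 100)
```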

There is also True Tone, and the user adjusting the maximum brightness of the display. Those also need to be responded to. And what do we do when NSApplicationDidChangeScreenParametersNotification fires and says screen.maximumExtendedDynamicRangeColorComponentValue changed?

The effects of True Tone are invisible to your application. Normally, the brightness of the display is also invisible to your application, but this can change when you’re using an Apple display with EDR capabilities, like the Pro Display XDR.

The maximumExtendedDynamicRangeColorComponentValue is known in UIKit as currentEDRHeadroom. “Headroom” is the ratio of the maximum brightness the screen can display relative to “SDR white”, which is the brightness of SRGB(1.0, 1.0, 1.0). On a non-EDR display, SRGB(1.0, 1.0, 1.0) always maps to the display’s “maximum brightness”, and therefore currentEDRHeadroom is equal to 1.0. But on an EDR display (including third-party displays with HDR support), pixels can of course be brighter. If the potentialEDRHeadroom value is equal to 4.0, that means the brightest white that the display can show is 4 times brighter than SRGB(1.0, 1.0, 1.0). One way to encode this particular “brightest white” is ExtendedLinearSRGB(4.0, 4.0, 4.0). If you implement tone-mapping yourself, you can provide this information to your tone-mapping shader, and update it whenever you get the ScreenParametersChanged notification.
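A sketch of wiring that up on macOS; `updateToneMapping` is a hypothetical hook into your renderer, not an AppKit API:

```swift
import AppKit

// Hypothetical hook: pass the current headroom to a tone-mapping shader.
func updateToneMapping(maxEDRValue: Float) { /* update shader uniforms */ }

// Re-read the EDR headroom whenever the screen parameters change.
let observer = NotificationCenter.default.addObserver(
    forName: NSApplication.didChangeScreenParametersNotification,
    object: nil, queue: .main) { _ in
    guard let screen = NSScreen.main else { return }
    let headroom = screen.maximumExtendedDynamicRangeColorComponentValue
    updateToneMapping(maxEDRValue: Float(headroom))
}
```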