How to change SceneKit/ARKit app texture format?

Hi,

I have a SceneKit/ARKit app (with my own Metal shader) and I need to change the main render target texture format from .bgra8Unorm_srgb to .bgra8Unorm. Since it is sRGB, anything I draw gets converted to sRGB again and shows up too bright.

Is there a way to change it, and if so, how? I didn't find anything useful in the docs or examples.

Thx

Answered by AlgoChris in 694902022

Alternatively, what would also help me is a conversion matrix from YCbCr (sRGB) to linear (non-sRGB) RGB, or a conversion from sRGB to linear RGB.

Hello AlgoChris,

Please request technical support for this question, and I can explain the options that you have to solve this problem!

Accepted Answer

For everyone who also has this problem (the captured ARKit video image turning out too bright when used in your fragment shader):

(This example is part of my iOS app, so the aspect ratio and similar values could/should change for other devices.)

The conversion from sRGB-encoded YCbCr to linear RGB is a two-step operation.

First, you have to matrix-multiply your color with this YCbCr-to-RGB conversion matrix.

// Full-range YCbCr -> RGB transform. Note that Metal matrices are
// column-major, so each line of the initializer below is one column, not one row.
constant float4x4 ycbcrToRGBTransform = float4x4(
    +1.0000f, +1.0000f, +1.0000f, +0.0000f,
    +0.0000f, -0.3441f, +1.7720f, +0.0000f,
    +1.4020f, -0.7141f, +0.0000f, +0.0000f,
    -0.7010f, +0.5291f, -0.8860f, +1.0000f
);
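
Written out as scalar math, this matrix is just the usual full-range BT.601 conversion. For reference, here is the equivalent scalar form as a small helper (not part of the shader below; the function name is made up):

// Equivalent scalar form of (ycbcrToRGBTransform * float4(y, cb, cr, 1.0)).rgb
inline float3 ycbcrToRGB(float y, float cb, float cr)
{
    const float r = y + 1.4020f * (cr - 0.5f);
    const float g = y - 0.3441f * (cb - 0.5f) - 0.7141f * (cr - 0.5f);
    const float b = y + 1.7720f * (cb - 0.5f);
    return float3(r, g, b);
}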

The second step is to adjust your gamma, i.e. convert the sRGB-encoded result to linear values.
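
The shader below just uses the common pow(x, 2.2) approximation for that. If you need an exact match to the sRGB transfer function, you could swap in the piecewise version instead; a minimal sketch (the helper name is my own):

// Exact sRGB -> linear conversion (piecewise sRGB EOTF),
// a drop-in replacement for pow(c, 2.2)
inline float3 srgbToLinear(float3 c)
{
    const float3 lo = c / 12.92f;
    const float3 hi = pow((c + 0.055f) / 1.055f, float3(2.4f));
    return select(hi, lo, c <= float3(0.04045f));
}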

So you end up with something like this:

fragment float4 myFragmentShader(fragmentInOut in [[ stage_in ]],
                                 texture2d<float, access::sample> capturedImageY [[ texture(0) ]],
                                 texture2d<float, access::sample> capturedImageCBCR [[ texture(1) ]]) {

    constexpr sampler s(mip_filter::linear,
                        mag_filter::linear,
                        min_filter::linear);

    // in.displayBounds and in.position are calculated in the vertex shader; 0.615764 is the aspect ratio of the video image
    float fx = (((in.displayBounds.x - in.position.x) / in.displayBounds.x) - 0.5) * 0.615764 + 0.5;
    float fy = (((in.position.y) / in.displayBounds.y) - 0.5) * 1.0 + 0.5;

    // Flip them around since we're in portrait mode here; for full device-orientation awareness you'd have to add a bit more logic
    const float2 tp = float2(fy, fx);

    // Sample the color from the captured video frame
    const float4 ycbcr = float4(capturedImageY.sample(s, tp).r, capturedImageCBCR.sample(s, tp).rg, 1.0);

    // Apply the YCbCr-to-RGB transform matrix, then linearize with the 2.2 gamma approximation
    const float3 rgbColor = pow((ycbcrToRGBTransform * ycbcr).rgb, 2.2);

    // Now if you return this color, it will look exactly the same as the video background
    return float4(rgbColor, 1.0);
}
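
And regarding the original question: if your render target really were .bgra8Unorm (non-sRGB), you wouldn't need the pow step at all. One way to keep a single shader source working against both formats is to gate the linearization behind a Metal function constant; a minimal sketch (the constant name and index are just examples, and the value would be set from the app via MTLFunctionConstantValues when building the pipeline):

// Set to true when rendering into an sRGB target, false for .bgra8Unorm
constant bool kLinearizeCameraColor [[ function_constant(0) ]];

// ... then inside the fragment shader, replace the pow() line with:
const float3 rgb = (ycbcrToRGBTransform * ycbcr).rgb;
const float3 rgbColor = kLinearizeCameraColor ? pow(rgb, float3(2.2f)) : rgb;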