How do I render a CIImage into a 32-bits-per channel floating-point pixel buffer?

I want to render a CIImage into a pixel buffer of type kCVPixelFormatType_128RGBAFloat. But CIContext.render() fails saying "unsupported format". I tested on the iPhone 7 Plus running iOS 11.

Here's my code:


let context = CIContext()

// Create a 128-bit (32 bits per channel) floating-point pixel buffer.
var buffer: CVPixelBuffer? = nil
let status = CVPixelBufferCreate(nil, width, height, kCVPixelFormatType_128RGBAFloat, nil, &buffer)
assert(status == kCVReturnSuccess && buffer != nil, "Couldn't create buffer")

context.render(ciImage, to: buffer!)


The buffer is created successfully — the assertion doesn't fire. It's only the rendering in the last line that fails saying "unsupported format".

I also tried creating an IOSurface-backed CVPixelBuffer by replacing the second nil with [kCVPixelBufferIOSurfacePropertiesKey: [:]] as CFDictionary, but it didn't help.
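For reference, that variant looked roughly like this (a sketch, reusing width, height, context, and ciImage from above; the attribute-dictionary spelling is just one way to build it):

// Sketch of the IOSurface-backed attempt.
let surfaceProps: [CFString: Any] = [kCVPixelBufferIOSurfacePropertiesKey: [:] as [String: Any]]

var ioBuffer: CVPixelBuffer? = nil
let ioStatus = CVPixelBufferCreate(nil, width, height, kCVPixelFormatType_128RGBAFloat,
                                   surfaceProps as CFDictionary, &ioBuffer)
assert(ioStatus == kCVReturnSuccess && ioBuffer != nil, "Couldn't create IOSurface-backed buffer")

context.render(ciImage, to: ioBuffer!)   // still fails with "unsupported format"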


How do I get this to work?


The format needs to be kCVPixelFormatType_128RGBAFloat, for reasons that are too complex to get into here. The short version is that the pixel values have a greater range than 0-255, including fractional values that cannot be rounded.

Replies

I tried some more things:

- kCVPixelFormatType_64ARGB

- The software renderer: https://developer.apple.com/documentation/coreimage/kcicontextusesoftwarerenderer

- Creating the CIContext backed by an EAGLContext

- Creating the CIContext backed by an MTLDevice

- Calling CIContext.createCGImage()

- Rendering to an MTLTexture, but I couldn't figure out how to create one (see the sketch after this list).

- Rendering to an IOSurface()

- Calling clearCaches() on the CIContext.

- Calling reclaimResources(), but that's not available on iOS.

- Checking that my input is smaller than the CIContext's inputImageMaximumSize and outputImageMaximumSize

None of these worked. Is rendering to 32-bit-per-channel floats or 16-bit-per-channel integers simply not supported by Core Image?
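For the Metal attempts, this is roughly what I was aiming for (a sketch only; whether rgba32Float is usable as a render destination on a given GPU, and the descriptor settings, are my guesses; ciImage is the same image as in the original code):

import CoreImage
import Metal

// Sketch: render a CIImage into a 32-bit float MTLTexture.
guard let device = MTLCreateSystemDefaultDevice(),
      let commandQueue = device.makeCommandQueue(),
      let commandBuffer = commandQueue.makeCommandBuffer() else {
    fatalError("Metal setup failed")
}

let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba32Float,
                                                          width: Int(ciImage.extent.width),
                                                          height: Int(ciImage.extent.height),
                                                          mipmapped: false)
descriptor.usage = [.shaderRead, .shaderWrite, .renderTarget]
guard let texture = device.makeTexture(descriptor: descriptor) else {
    fatalError("Couldn't create texture")
}

let metalContext = CIContext(mtlDevice: device)
metalContext.render(ciImage,
                    to: texture,
                    commandBuffer: commandBuffer,
                    bounds: ciImage.extent,
                    colorSpace: CGColorSpaceCreateDeviceRGB())
commandBuffer.commit()
commandBuffer.waitUntilCompleted()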

I also tried rendering to a raw byte array.
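That attempt looked roughly like this (a sketch; CIFormat.RGBAf and device RGB as the color space are my guesses at the right parameters, and width, height, context, and ciImage are from the original code, with the extent assumed to match width x height):

// Sketch of the raw-bytes attempt: render into a 32-bit-per-channel float bitmap.
let rowBytes = width * 4 * MemoryLayout<Float>.size   // 4 channels x 4 bytes each
var bitmap = [Float](repeating: 0, count: width * height * 4)

bitmap.withUnsafeMutableBytes { ptr in
    context.render(ciImage,
                   toBitmap: ptr.baseAddress!,
                   rowBytes: rowBytes,
                   bounds: ciImage.extent,
                   format: CIFormat.RGBAf,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
}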

Try kCVPixelFormatType_64RGBAHalf.


The problem is that iPhone GPUs don't support 32-bit float textures, only 16-bit (half precision). That's probably why Core Image tells you that the format is not supported.
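A minimal sketch of that suggestion, reusing context, width, height, and ciImage from the original post:

// Sketch: same as the original code, but with a 64-bit (16 bits per channel,
// half-float) pixel buffer, which the GPU can handle.
var halfBuffer: CVPixelBuffer? = nil
let halfStatus = CVPixelBufferCreate(nil, width, height,
                                     kCVPixelFormatType_64RGBAHalf, nil, &halfBuffer)
assert(halfStatus == kCVReturnSuccess && halfBuffer != nil, "Couldn't create half-float buffer")

context.render(ciImage, to: halfBuffer!)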

If the GPU can't support 32-bit floats (kCVPixelFormatType_128RGBAFloat) or 16-bit ints (kCVPixelFormatType_64ARGB), why can't the CPU? Even when I use the software renderer, Core Image doesn't support either of those two formats.


kCVPixelFormatType_64RGBAHalf is not precise enough for my use case, since 16-bit floats offer only 11 bits of significand precision (10 stored mantissa bits). Thanks for your help.

I guess it's because Core Image is designed to be completely opaque about how it performs the filtering. That's why it only supports a common subset of formats. Maybe you can use the Accelerate framework for your use case?
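One way Accelerate could fit, as a rough sketch: render into the supported half-float buffer first, then widen the result to 32-bit floats with vImage. The halfBuffer name refers to the half-float sketch above, and treating the interleaved RGBA data as a planar buffer with four times the width is just a per-element conversion trick, not an official recipe:

import Accelerate
import CoreVideo

// Sketch: widen half-float pixels from a kCVPixelFormatType_64RGBAHalf buffer
// to 32-bit floats with vImage. Assumes `halfBuffer` was rendered into as above.
CVPixelBufferLockBaseAddress(halfBuffer!, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(halfBuffer!, .readOnly) }

let w = CVPixelBufferGetWidth(halfBuffer!)
let h = CVPixelBufferGetHeight(halfBuffer!)
var floatPixels = [Float](repeating: 0, count: w * h * 4)

floatPixels.withUnsafeMutableBytes { ptr in
    // Treat interleaved RGBA as planar with 4x the width; the conversion is per element.
    var src = vImage_Buffer(data: CVPixelBufferGetBaseAddress(halfBuffer!),
                            height: vImagePixelCount(h),
                            width: vImagePixelCount(w * 4),
                            rowBytes: CVPixelBufferGetBytesPerRow(halfBuffer!))
    var dst = vImage_Buffer(data: ptr.baseAddress,
                            height: vImagePixelCount(h),
                            width: vImagePixelCount(w * 4),
                            rowBytes: w * 4 * MemoryLayout<Float>.size)
    let err = vImageConvert_Planar16FtoPlanarF(&src, &dst, vImage_Flags(kvImageNoFlags))
    assert(err == kvImageNoError, "vImage conversion failed")
}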

Makes sense, and thanks for your help. I was going to use Accelerate, but since I had put a lot of effort into Core Image, I wanted to check whether it could work.

You should also check out CIImageProcessorKernel. It allows you to integrate any custom subroutine into a Core Image pipeline.
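A minimal sketch of what such a kernel could look like; the class name and the row-copy body are placeholders, and requesting RGBAf as the processing format is an assumption:

import CoreImage
import Foundation

// Sketch: a custom processing step that receives 32-bit float RGBA pixel data.
final class FloatPassthroughKernel: CIImageProcessorKernel {

    // Ask Core Image for 32-bit float RGBA on both sides of the step.
    override class func formatForInput(at input: Int32) -> CIFormat { return .RGBAf }
    override class var outputFormat: CIFormat { return .RGBAf }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let input = inputs?.first else { return }
        // input.baseAddress / output.baseAddress expose the raw pixel bytes;
        // as a placeholder, they are simply copied row by row.
        let rowCount = min(Int(input.region.height), Int(output.region.height))
        let bytesPerRow = min(input.bytesPerRow, output.bytesPerRow)
        for row in 0..<rowCount {
            memcpy(output.baseAddress + row * output.bytesPerRow,
                   input.baseAddress + row * input.bytesPerRow,
                   bytesPerRow)
        }
    }
}

// Usage: insert the step into a regular Core Image pipeline.
// let processed = try FloatPassthroughKernel.apply(withExtent: ciImage.extent,
//                                                  inputs: [ciImage],
//                                                  arguments: nil)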