Loading Images to CVPixelBuffer results in invalid pixel format

To classify static images with my Core ML model, I first need to load each image into a CVPixelBuffer before passing it to the classifier.prediction(image:) method.

To do this, I am using the following code:

Code Block
import SwiftUI
import UIKit
import CoreImage
import CoreVideo

struct ImageClassifier {
    var pixelBuffer: CVPixelBuffer?
    var pixelBufferStatus: CVReturn?

    init(width: Int, height: Int) {
        // Ask for a buffer that is compatible with CGImage and CGBitmapContext rendering.
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
        self.pixelBufferStatus = CVPixelBufferCreate(kCFAllocatorDefault,
                                                     width,
                                                     height,
                                                     kCVPixelFormatType_32RGBA,
                                                     attrs as CFDictionary,
                                                     &self.pixelBuffer)
    }

    func loadImage(name: String) {
        guard let inputImage = UIImage(named: name) else { return }
        let beginImage = CIImage(image: inputImage)!
        let renderer = CIContext()
        // Only render if the buffer was created successfully.
        if pixelBufferStatus == kCVReturnSuccess {
            renderer.render(beginImage, to: pixelBuffer!)
        } else {
            print("Bad status! \(String(describing: pixelBufferStatus))")
        }
    }
}


However, when I call loadImage on an image, pixelBufferStatus remains at -6680 (kCVReturnInvalidPixelFormat).

How do I fix this?

I'm afraid kCVPixelFormatType_32RGBA is not supported.

Check this old SO thread (it's written for Objective-C, but it is still useful):
How do I create a CVPixelBuffer with 32RGBA format for iPhone?

I do not know offhand which pixel format Core ML expects, but you may need to try other formats; see the sketch below.
You can also use the helper routines from CoreMLHelpers.
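For example, here is a minimal sketch that creates the buffer as kCVPixelFormatType_32BGRA, a format CVPixelBufferCreate does support (the 224x224 size is an assumption; use your model's input dimensions):

Code Block
import CoreVideo

var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
// Same call as in the question, but with 32BGRA instead of 32RGBA.
// The 224x224 size is an assumption; match your model's input size.
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 224, 224,
                                 kCVPixelFormatType_32BGRA,
                                 attrs as CFDictionary,
                                 &pixelBuffer)
assert(status == kCVReturnSuccess)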

As of iOS 13, Core ML also lets you pass in CGImage objects, but this API is not very straightforward to use. I prefer the method from CoreMLHelpers.
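If you want to try the CGImage route anyway, here is a rough sketch (the input name "image" and the wrapper function are assumptions; match them to your model):

Code Block
import CoreML
import CoreVideo
import UIKit

// Sketch: let Core ML build the CVPixelBuffer from a CGImage (iOS 13+).
// The input name "image" is an assumption; use your model's actual input name.
func makePixelBuffer(from uiImage: UIImage, model: MLModel) throws -> CVPixelBuffer? {
    guard let cgImage = uiImage.cgImage,
          let constraint = model.modelDescription
              .inputDescriptionsByName["image"]?.imageConstraint else {
        return nil
    }
    // Core ML converts the CGImage to the pixel format and size the model expects.
    let value = try MLFeatureValue(cgImage: cgImage, constraint: constraint, options: nil)
    return value.imageBufferValue
}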
