To classify static images with my Core ML model, I first need to load each image into a CVPixelBuffer before passing it to the classifier.prediction(image:) method.
To do this, I am using the following code:
import SwiftUI
import CoreImage
import CoreVideo

struct ImageClassifier {
    var pixelBuffer: CVPixelBuffer?
    var pixelBufferStatus: CVReturn?

    init(width: Int, height: Int) {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
        self.pixelBufferStatus = CVPixelBufferCreate(kCFAllocatorDefault,
                                                     width,
                                                     height,
                                                     kCVPixelFormatType_32RGBA,
                                                     attrs as CFDictionary,
                                                     &self.pixelBuffer)
    }

    func loadImage(name: String) {
        guard let inputImage = UIImage(named: name) else { return }
        let beginImage = CIImage(image: inputImage)!
        let renderer = CIContext()
        if self.pixelBufferStatus == kCVReturnSuccess {
            renderer.render(beginImage, to: self.pixelBuffer!)
        } else {
            print("Bad status! \(pixelBufferStatus)")
        }
    }
}
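If it helps narrow things down, here is a minimal standalone version of just the buffer creation, isolated from the rest of the classifier. The only change from my original code is the pixel format constant (kCVPixelFormatType_32BGRA instead of kCVPixelFormatType_32RGBA); the 224 x 224 size is just an example model input size, not what my model necessarily uses:

```swift
import CoreVideo

// Minimal reproduction of the buffer creation, isolated from the classifier.
// Only the pixel format constant differs from my original code.
var buffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue]
let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                 224, 224,                    // example size only
                                 kCVPixelFormatType_32BGRA,   // swapped-in format to test
                                 attrs as CFDictionary,
                                 &buffer)
print(status == kCVReturnSuccess ? "success" : "failed: \(status)")
```

Is the format constant itself the problem here, i.e. is kCVPixelFormatType_32RGBA simply not a format CVPixelBufferCreate supports on iOS?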
However, when I call loadImage on an image, pixelBufferStatus remains -6680, which corresponds to kCVReturnInvalidPixelFormat.
How do I fix this?