NSInvalidArgumentException in MLClassifier predictionFromFeatures

We have a classification model trained with CreateML that is throwing


NSInvalidArgumentException


[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]


in MLClassifier predictionFromFeatures


This happens within the call stack of a prediction call on the Xcode auto-generated model class:


model.prediction(image: cvPxbuffer)
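
For context, the full call site is roughly this (a simplified sketch; FilterOutClassifier_4Class_93Val is the generated class named in a reply below, and the classLabel output property is an assumption based on a typical CreateML image classifier):

import CoreML
import UIKit

// Hypothetical wrapper around the generated model's prediction call;
// buffer() is the UIImage extension shown in a reply below. A nil buffer
// here would be one way for nil to reach the prediction machinery.
func classify(_ image: UIImage, with model: FilterOutClassifier_4Class_93Val) -> String? {
    guard let cvPxbuffer = image.buffer() else { return nil }
    let output = try? model.prediction(image: cvPxbuffer)
    return output?.classLabel
}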


Any idea what could be causing this? Google search is not helping on this one.

Replies

We had this code in two different projects. In Project 1 it always worked fine. In Project 2, some devices with iCloud's optimize-storage option enabled experienced this crash.


We replaced direct model execution with Vision and it seems to be working (a sketch of that path follows the extension below). We are still confused about what is happening; our best guess is that something goes wrong converting from UIImage to CVPixelBuffer:



import UIKit
import CoreVideo

extension UIImage {

    // Renders the image into a newly allocated 32ARGB pixel buffer.
    func buffer() -> CVPixelBuffer? {
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(size.width), Int(size.height),
                                         kCVPixelFormatType_32ARGB, attrs, &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else {
            return nil
        }

        CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
        defer { CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0)) }

        let pixelData = CVPixelBufferGetBaseAddress(buffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pixelData, width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }

        // Flip the coordinate system so UIKit drawing comes out right side up.
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context)
        draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        UIGraphicsPopContext()

        return buffer
    }
}
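
For reference, the Vision path we switched to looks roughly like this (a simplified sketch that reduces the completion handling to the top classification):

import Vision
import UIKit

// Sketch of the Vision-based replacement: Vision wraps the generated model
// and performs the image conversion itself, so the UIImage extension above
// is no longer involved.
func classifyWithVision(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let vnModel = try? VNCoreMLModel(for: FilterOutClassifier_4Class_93Val().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: vnModel) { req, _ in
        let top = (req.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
    } catch {
        completion(nil)
    }
}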

So it is failing in Project 2 on devices with iCloud activated. What is actually happening is that our initialization gets an error loading the model, and we have defensive code that hides it in the Vision version:


let model: VNCoreMLModel?

init() {
    // try? discards the error thrown while loading the model.
    model = try? VNCoreMLModel(for: FilterOutClassifier_4Class_93Val().model)
}
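
One way to surface that error instead of swallowing it (a minimal sketch of the debugging change):

init() {
    do {
        model = try VNCoreMLModel(for: FilterOutClassifier_4Class_93Val().model)
    } catch {
        // On the affected devices this catch fires: the model itself fails to load.
        model = nil
        print("Model load failed: \(error)")
    }
}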


This means that the app compiled in Project 2 has some issue loading the model, and the Xcode auto-generated class throws that indirect exception, which probably was not supposed to happen.


Now, why the app has an issue loading a model that was added directly through the Xcode interface, when the device running it has iCloud activated, is **** strange!

Back to this one. Not yet solved...


There seems to be an allocation issue. When we execute

VNImageRequestHandler.perform

it throws "Could not create buffer with format BGRA" with error code -6662, which Apple describes as kCVReturnAllocationFailed ("Memory allocation for a buffer or buffer pool failed."). Once again our best friend for solving problems, Google search, is not helping...


This is happening under the following conditions:

1 - It happens in Project 2 and never in Project 1 (the two projects are managed by different teams)

2 - It happens only on devices that have iCloud's optimize-storage option enabled


If Apple folks could look at this forum once in a while, that would be great...

Filed Radar 48907676