Core ML model prediction in Vision

Dear all,

I want to use a Core ML model that takes an RGB image and returns an RGB image (like a style-transfer network).

To get the model's prediction I use Vision inside a detect function, like this:

func detect(image: CIImage) {
    // Wrap the Core ML model for use with Vision
    guard let model = try? VNCoreMLModel(for: mlmodelzoo().model) else {
        fatalError("Cannot import model")
    }

    let request = VNCoreMLRequest(model: model) { (request, error) in
        // For image-to-image models the result should be a VNPixelBufferObservation
        let classification = request.results?.first as? VNPixelBufferObservation

        // Convert the output pixel buffer to a CGImage and show it
        // (this is the line that is highlighted when the app stops)
        self.imageView.image = UIImage(cgImage: (self.Buffer(pixelBuffer: (classification?.pixelBuffer)!))!)
    }

    let handler = VNImageRequestHandler(ciImage: image)
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }
}

But it does not work: the compiler is fine, but at runtime, when execution reaches the detect function, it stops and highlights the line self.imageView.image = ....

I also used a Buffer function I found online to convert the CVPixelBuffer to a CGImage.
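Here is a sketch of what I guess the safe handling should look like; the CIContext-based conversion is only my assumption of what the Buffer helper does, and handleStyleTransferResult is just an illustrative name:

import UIKit
import Vision
import CoreImage

// Sketch: safely unwrap the observation instead of force unwrapping.
// The CIContext conversion is an assumed stand-in for the Buffer helper.
func handleStyleTransferResult(_ request: VNRequest) -> UIImage? {
    // Image-to-image models produce a VNPixelBufferObservation
    guard let observation = request.results?.first as? VNPixelBufferObservation else {
        return nil
    }
    // Convert the output CVPixelBuffer to a CGImage via Core Image
    let ciImage = CIImage(cvPixelBuffer: observation.pixelBuffer)
    guard let cgImage = CIContext().createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}

Inside the VNCoreMLRequest completion handler I would then update the view on the main thread, something like DispatchQueue.main.async { self.imageView.image = handleStyleTransferResult(request) }. Is this the right approach?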

How can I get the prediction out of this type of Core ML model and show it in the image view?

Is it also possible to do the same thing when the model accepts a multiarray and returns a multiarray? For a model I trained myself, the Keras-to-Core ML conversion produces multiarray -> multiarray inputs and outputs. (I followed the comments people made about converting the model so that it comes out as image -> image, but that did not work for me.)
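To show what I mean, here is a rough sketch of how I imagine the multiarray output could be turned into an image on the Swift side. I am assuming a Float32 MLMultiArray of shape [3, height, width] in RGB order with values already scaled to 0...255, which may not match my model at all:

import UIKit
import CoreML

// Sketch: convert an MLMultiArray output to a UIImage.
// Assumptions: shape [3, height, width], Float32 values,
// RGB channel order, values already in 0...255.
func image(from multiArray: MLMultiArray) -> UIImage? {
    guard multiArray.shape.count == 3, multiArray.shape[0].intValue == 3 else { return nil }
    let height = multiArray.shape[1].intValue
    let width = multiArray.shape[2].intValue
    let channelStride = multiArray.strides[0].intValue
    let rowStride = multiArray.strides[1].intValue
    let columnStride = multiArray.strides[2].intValue
    let data = multiArray.dataPointer.assumingMemoryBound(to: Float32.self)

    // Build an RGBA byte buffer from the three channels (alpha fixed at 255)
    var pixels = [UInt8](repeating: 255, count: width * height * 4)
    for y in 0..<height {
        for x in 0..<width {
            let base = y * rowStride + x * columnStride
            for c in 0..<3 {
                let value = data[c * channelStride + base]
                pixels[(y * width + x) * 4 + c] = UInt8(max(0, min(255, value)))
            }
        }
    }

    // Wrap the raw bytes in a CGImage, then a UIImage
    guard let provider = CGDataProvider(data: Data(pixels) as CFData),
          let cgImage = CGImage(width: width, height: height,
                                bitsPerComponent: 8, bitsPerPixel: 32,
                                bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue),
                                provider: provider, decode: nil,
                                shouldInterpolate: false, intent: .defaultIntent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}

If manually decoding the multiarray like this is the wrong direction, I would also be happy to learn the correct way to make the converter emit an image output instead.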

Thanks,

Aarsh A. Omrani