CoreML Model Not Providing Output

Hello,


I've been working to understand the coremltools library for converting machine learning models to .mlmodel files for use in my project, but am not finding much success. For reference, I've been using Yahoo's Open NSFW model, which provides a .caffemodel and .prototxt file, suitable for conversion to .mlmodel. The conversion process worked without issue by running this command:


coreml_model = coremltools.converters.caffe.convert(('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'), image_input_names = 'data')


Once I saved out my .mlmodel, I brought the file into my Xcode project and set up my iOS project like so:


override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Perform analysis
    newDetect()
}


func newDetect() {
    do {
        let model = try VNCoreMLModel(for: open_nsfw().model)
        let request = VNCoreMLRequest(model: model, completionHandler: handleResults)
        let myImage = CIImage(image: UIImage(named: "testImage")!)
        let handler = VNImageRequestHandler(ciImage: myImage!)
        try handler.perform([request])
    } catch {
        print(error)
    }
}

func handleResults(request: VNRequest, error: Error?) {

    guard let results = request.results as? [VNCoreMLFeatureValueObservation]
      else { fatalError("An error has occurred.") }

    print(results)
}


The goal here is to take my UIImage, which lives in the bundle, pass it through the model, and receive a "probability" score in handleResults (just printing the results would be suitable). Upon running the app, I never end up with any results; rather, I receive an empty dictionary in my console.
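One way to narrow down whether the problem lies in the converted model or in the Vision code is to run a prediction on the .mlmodel directly in Python before touching Xcode. A hedged sketch (the filename 'open_nsfw.mlmodel', the image path, and the input name 'data' are assumptions based on the convert() call above):

```python
import coremltools
from PIL import Image

# Load the converted model and run a prediction outside of iOS.
# Assumption: the model was saved as 'open_nsfw.mlmodel' and its
# image input is named 'data', matching the convert() call above.
model = coremltools.models.MLModel('open_nsfw.mlmodel')

# Open NSFW expects a 224x224 input after resize/crop.
img = Image.open('testImage.jpg').resize((224, 224))

# Prints the raw output dictionary; if this looks wrong too, the
# problem is in the conversion, not in the Vision code.
print(model.predict({'data': img}))
```

If the output here is already nonsensical, the preprocessing parameters used at conversion time are the first thing to revisit.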


Have I done something wrong in this process?

Replies

I managed to convert it. Kindly have a look here:


https://github.com/kashif/NsfwDetector


Best wishes!

You did not convert your model to be a classifier. For that you need to add class_labels='somefile.txt' in the call to coremltools.converters.caffe.convert().


Maybe you did not want to create a classifier, which is a perfectly reasonable thing to do. I have found that Vision currently does not return any results for models that are not classifiers. This seems to be a bug in Vision.
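For reference, a conversion call that produces a classifier might look like the following sketch (not the exact command anyone in this thread used; 'labels.txt' is an assumed file containing one class name per line):

```python
import coremltools

# Hypothetical classifier conversion: 'labels.txt' would list the
# output classes (e.g. 'SFW' and 'NSFW'), one per line. Passing
# class_labels makes the converter emit a classifier model, which
# Vision then reports as VNClassificationObservation results.
coreml_model = coremltools.converters.caffe.convert(
    ('resnet_50_1by2_nsfw.caffemodel', 'deploy.prototxt'),
    image_input_names='data',
    class_labels='labels.txt')
coreml_model.save('open_nsfw.mlmodel')
```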

It's a bit more than that as well. Many times a pre-trained model will have some type of image pre-processing applied, as well as a certain channel ordering. You need to be aware of that and apply the same pre-processing when converting, or else you will not get meaningful results.

Thank you for the replies. I am aware that some form of image preprocessing is taking place, but other than setting is_bgr=True during the conversion phase with coremltools, I'm unsure how I'd go about determining the bias and scale. From the .py script, I know preprocessing takes place like so:


```
caffe_transformer.set_transpose('data', (2, 0, 1))  # move image channels to outermost
caffe_transformer.set_mean('data', np.array([104, 117, 123]))  # subtract the dataset-mean value in each channel
caffe_transformer.set_raw_scale('data', 255)  # rescale from [0, 1] to [0, 255]
caffe_transformer.set_channel_swap('data', (2, 1, 0))  # swap channels from RGB to BGR
```
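To translate this into coremltools terms: Core ML applies `image_scale * pixel + channel_bias` to pixel values that are already in the 0-255 range, so Caffe's mean subtraction becomes negative biases, and the `raw_scale` step should correspond to leaving `image_scale` at its default of 1.0. A small sketch of that arithmetic (note: the mean array above is subtracted after the channel swap, i.e. in BGR order, so mapping 104 to `blue_bias` rather than `red_bias` is an assumption worth verifying against the original script):

```python
# Sketch of Core ML's per-channel image preprocessing (assumption:
# it computes image_scale * pixel + bias on 0-255 pixel values).
# The bias values mirror Caffe's set_mean([104, 117, 123]), read in
# BGR order -- an assumption, not a confirmed mapping.
def coreml_preprocess(pixel_bgr, image_scale=1.0,
                      blue_bias=-104.0, green_bias=-117.0, red_bias=-123.0):
    b, g, r = pixel_bgr
    return (image_scale * b + blue_bias,
            image_scale * g + green_bias,
            image_scale * r + red_bias)

# A pixel equal to the dataset mean should come out as all zeros,
# exactly as Caffe's mean subtraction would produce.
print(coreml_preprocess((104.0, 117.0, 123.0)))  # → (0.0, 0.0, 0.0)
```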

I suppose this is a conceptual issue, but I do not believe this model is a classifier. Rather, it is prediction-based, as it is not classifying data and did not come provided with any sort of labels. Perhaps I'm misunderstanding, however.

Indeed, you need to account for line 02 above. You can use the `red_bias`, `blue_bias`, and `green_bias` parameters to set that when you convert.

Thanks for the comment. With your suggestion, I tried re-saving my Core ML model using red_bias=104, green_bias=117, and blue_bias=123. While the result I'm getting is now slightly different than before (factoring in that I am resizing the image to 256x256, then center-cropping to 224x224), I'm still not getting reasonable output. I'm thinking I need to do something with the scale when saving the Core ML model, but have yet to figure out what.

Can you try searching GitHub for kashif and NsfwDetector, where I have the model running? Thanks!

Thanks for the sample project! You have definitely succeeded much further than I have at working with this, including by having your project work with Vision. Would you be willing to share the code you used when converting this model via coremltools? I've not been able to find any combination of arguments that would even produce any output with Vision.


I'm interested in continuing the work on this. In your test project, I did find the confidence to vary wildly compared to the output when testing the caffemodel through Python (objectionable media didn't really show a high probability of NSFW confidence, and vice versa when testing safe media).

Hello, I'm running into the same problem as you. I converted the Caffe model to Core ML with these preprocessing parameters:

```
coreml_model = coremltools.converters.caffe.convert(
    ('./resnet_50_1by2_nsfw.caffemodel', './deploy.prototxt'),
    image_input_names='data',
    red_bias=-104,
    green_bias=-117,
    blue_bias=-124,
    is_bgr=True,
    class_labels=class_label)
```

But the result is wrong.

Did you manage to convert it successfully, and with what parameters?