Core ML framework generates different results from coremltools

Hi everyone in this forum,


I have been developing an image recognition app on iOS 11, following the Core ML examples. However, I notice a difference between the results when calling the model on iOS and the ones obtained with coremltools on the Mac in Python. I think the difference may lie in the image loading: the Python code uses Pillow to load the image, while the Xcode project uses Core Image. I have pasted the key code below. Hopefully somebody can point out the issue.


Also, the input image is a 299×299 JPEG, so no resizing should happen in either implementation. Thank you.


################

## Python code ##

################


import coremltools
from keras.preprocessing import image

IMG_PATH = './test.jpg'
# load_img returns a PIL image; the test image is already 299x299,
# so no resizing happens here.
img = image.load_img(IMG_PATH)

model = coremltools.models.MLModel('./Inceptionv3.mlmodel')
# The model's image input takes the PIL image directly.
res = model.predict({'image': img})
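
For reference, the preprocessing that was baked into the .mlmodel at conversion time can be inspected from Python as well. A minimal sketch, assuming the converted InceptionV3 is a neural-network classifier as produced by the standard converters:

import coremltools

# Load the raw protobuf spec rather than the compiled model.
spec = coremltools.utils.load_spec('./Inceptionv3.mlmodel')

# Declared input/output types, expected image size, and color space.
print(spec.description)

# Per-channel scale/bias applied before inference; a mismatch here is a
# common cause of results diverging between loaders.
for pre in spec.neuralNetworkClassifier.preprocessing:
    print(pre)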


##############

## iOS code ##

##############



    self.image = [CIImage imageWithContentsOfURL:fileURL];
    self.model = [[[Inceptionv3 alloc] init] model];

    VNCoreMLModel *m = [VNCoreMLModel modelForMLModel:self.model error:nil];
    VNCoreMLRequest *rq = [[VNCoreMLRequest alloc] initWithModel:m completionHandler:^(VNRequest *request, NSError *error) {
        NSArray *results = [request.results copy];
        NSString *topResults = @"";
        // Guard against Vision returning fewer than kNumResults observations.
        NSUInteger count = MIN((NSUInteger)kNumResults, results.count);
        for (NSUInteger index = 0; index < count; index++) {
            VNClassificationObservation *res = (VNClassificationObservation *)results[index];
            topResults = [topResults stringByAppendingFormat:@"- %lu %.4f %@\n", (unsigned long)index, res.confidence, res.identifier];
        }
        self.label_prob = [topResults copy];
    }];

    VNImageRequestHandler *handler = [[VNImageRequestHandler alloc] initWithCIImage:self.image options:@{}];

    dispatch_queue_t myCustomQueue = dispatch_queue_create("com.example.MyCustomQueue", NULL);
    dispatch_sync(myCustomQueue, ^{
        [handler performRequests:@[rq] error:nil];
    });

Thanks for spotting this issue. This is most unexpected. What is the output you are getting from iOS (via Objective-C) vs. the Mac (via Python)? Are they off within a certain tolerance? Depending on your hardware, it can happen that the network runs on the CPU on the Mac and on the GPU on iOS. If they are completely different, that would be most surprising.


Can you upload the image (along with some info about your Mac and iPhone hardware) so we can try to reproduce the exact issue you are encountering?
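
In the meantime, you can test the CPU-vs-GPU hypothesis from the Mac side: coremltools can force a prediction onto the CPU. A minimal sketch, reusing the file names from the original post:

import coremltools
from PIL import Image

model = coremltools.models.MLModel('./Inceptionv3.mlmodel')
img = Image.open('./test.jpg')

# Run the same input once restricted to the CPU and once on the default
# (possibly GPU) path. If the default run matches the iOS output, the
# compute device is the culprit rather than the model itself.
res_cpu = model.predict({'image': img}, useCPUOnly=True)
res_default = model.predict({'image': img})
print(res_cpu)
print(res_default)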

I am facing the same issue, which I posted here:

https://forums.developer.apple.com/thread/83060


I thought it was due to a bug in earlier beta versions of Xcode 9, as reported here:

https://forums.developer.apple.com/thread/81548


But I have upgraded to Xcode 9 beta 4 and am still encountering the same issue. Another bug is that the classification probabilities are rounded to 0 or 1.

For example, my model returns [[ 0.98856211 0.00662835 0.00480951]] in Python, while it returns [0, 1, 0] in iOS. I have tried using Core ML directly as well as via Vision; there was no difference in the results, which were wrong in both cases.


Thanks,

Rishi
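
One thing worth double-checking on the conversion side: if the image scale and channel biases were not declared when converting, Core ML feeds raw 0-255 pixels to the network, which can saturate the softmax to exactly the 0/1 outputs described above. A sketch with the old Keras converter; the paths, labels, and Inception-style values below are illustrative assumptions:

import coremltools

# Inception-style preprocessing maps pixels to [-1, 1]: x * 2/255 - 1.
coreml_model = coremltools.converters.keras.convert(
    './my_model.h5',                 # hypothetical model path
    input_names='image',
    image_input_names='image',
    class_labels=['a', 'b', 'c'],    # hypothetical class labels
    image_scale=2.0 / 255.0,
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0,
)
coreml_model.save('./my_model.mlmodel')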

Did you ever manage to resolve this?

I'm having the same issue. I asked for help on Stack Overflow, but it looks like nobody knows, so I'm guessing it's not a common issue.

https://stackoverflow.com/questions/73639510/after-converting-tensorflow-model-to-coreml-the-model-doesnt-predict-correct

If anybody can point me in the right direction, that would be good enough. Thank you.
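
In case it helps narrow things down: run the original TensorFlow model and the converted Core ML model on the same image and compare the outputs. If they agree, the problem is in the app-side preprocessing rather than in the conversion. A minimal sketch; the paths, input name, and 224x224 size are assumptions:

import numpy as np
import coremltools as ct
import tensorflow as tf
from PIL import Image

tf_model = tf.keras.models.load_model('./model.h5')  # hypothetical path
ml_model = ct.models.MLModel('./model.mlmodel')      # hypothetical path

img = Image.open('./test.jpg').resize((224, 224))
# Mirror whatever preprocessing the network was trained with.
arr = np.asarray(img, dtype=np.float32) / 255.0

tf_out = tf_model.predict(arr[None, ...])
ml_out = ml_model.predict({'image': img})            # input name assumed

print(tf_out)
print(ml_out)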
