Converting from .pb to .mlmodel causes significantly worse accuracy

Hey!


I have a classifier that I want to use on iOS. My problem is that after I convert the .pb to an mlmodel, the predictions are significantly worse.


Here is my conversion code:

import tfcoreml as tf_converter

tf_converter.convert(
    tf_model_path="optimized_graph.pb",
    mlmodel_path="output/optimized_graph.mlmodel",
    output_feature_names=["final_result:0"],
    input_name_shape_dict={"Mul:0": [1, 299, 299, 3]},
    image_input_names=["Mul:0"],
    class_labels="labels.txt",
)
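

Note that I'm not setting any of tfcoreml's preprocessing arguments (`image_scale` and the per-channel biases). If the codelab's default normalization of input_mean=128 / input_std=128 needs to be baked into the model, I'm guessing the call would look something like this (untested sketch; the 1/128 scale and -1 biases are my assumption from the codelab defaults, not something I've verified for my graph):

import tfcoreml as tf_converter

tf_converter.convert(
    tf_model_path="optimized_graph.pb",
    mlmodel_path="output/optimized_graph.mlmodel",
    output_feature_names=["final_result:0"],
    input_name_shape_dict={"Mul:0": [1, 299, 299, 3]},
    image_input_names=["Mul:0"],
    class_labels="labels.txt",
    # Bake in label_image.py's (pixel - 128) / 128 normalization;
    # Core ML applies channelScale * pixel + bias per channel
    image_scale=1.0 / 128.0,
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0,
)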


I ran some tests on the same image set; here are my results.

Percentage correctly classified:

- PB = 78.86%

- MLModel = 44.72%



My main suspicion is the way the images are being passed into the model. For the .pb I am using label_image.py from the TensorFlow for Poets codelab, but for the .mlmodel I'm using PIL on the desktop; on mobile I'm getting the image from the `imagePicker`, turning it into a `CIImage`, and passing that into `VNImageRequestHandler`. However, I have no idea whether this is actually the cause.
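

One way I could test this on the desktop is to run the exact same PIL image through both models and compare the top predictions. A rough sketch of what I have in mind, assuming TF 1.x, a hypothetical test.jpg, that the converted input is named "Mul__0" (tfcoreml seems to replace ":" with "__"; the real name is in the printed model description), and that the classifier's output key is the default "classLabel":

import numpy as np
import PIL.Image
import coremltools
import tensorflow as tf  # 1.x, as used by the codelab

img = PIL.Image.open("test.jpg").resize((299, 299), PIL.Image.BILINEAR)

# Core ML path: pass the PIL image straight in; any scale/bias baked in
# at conversion time is applied inside the model
mlmodel = coremltools.models.MLModel("output/optimized_graph.mlmodel")
print(mlmodel.predict({"Mul__0": img})["classLabel"])

# .pb path: replicate label_image.py's preprocessing by hand
x = (np.asarray(img, dtype=np.float32) - 128.0) / 128.0
graph_def = tf.GraphDef()
with open("optimized_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
with tf.Session() as sess:
    tf.import_graph_def(graph_def, name="")
    probs = sess.run("final_result:0", {"Mul:0": x[np.newaxis]})
    print(probs[0].argmax())

(On the mobile side I also realize `VNCoreMLRequest` defaults to `.centerCrop` scaling, which crops the image rather than squashing it to 299x299 the way label_image.py does, so that could be a second difference.)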


If it helps, I'm using a modified Inception V3 model that allows multiple classifications; however, I seem to have this issue even when following the TF for Poets codelab exactly.


Thank you.

Replies

I wrote a blog post about this. Not specifically about tfcoreml but the idea is the same:


http://machinethink.net/blog/help-core-ml-gives-wrong-output/
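

In case it helps anyone landing here: one quick check along those lines is to look at what preprocessing actually got baked into the .mlmodel. A sketch of my own (not from the post), assuming the conversion with class_labels produced a neuralNetworkClassifier model:

import coremltools

spec = coremltools.utils.load_spec("output/optimized_graph.mlmodel")
# Each image input gets a NeuralNetworkImageScaler; if channelScale is 1.0
# and all biases are 0.0, the network is seeing raw 0-255 pixels instead of
# the normalized range it was trained on
for p in spec.neuralNetworkClassifier.preprocessing:
    print(p.featureName, p.scaler)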