Image preprocessing with custom model

I'm just getting into ML, so forgive the elementary question here.


I trained my own model using scikit-learn that takes in image data and outputs a class. When testing the classifier on my computer, I'm obviously doing some preprocessing of the image to get the appropriate input vectors. When I use coremltools to generate an MLModel, will I have to duplicate that image processing in my iOS app the same way I'm doing it in Python? How does an MLModel that takes in an image know how to preprocess it?
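
For concreteness, here's roughly the kind of conversion step I mean (the classifier, dimensions, and file names below are just placeholders, not my actual code):

import numpy as np
import coremltools
from sklearn.linear_model import LogisticRegression

# Stand-in for my real pipeline: images are resized, flattened, and
# scaled to [0, 1] in Python before training.
X = np.random.rand(200, 32 * 32)       # 200 preprocessed 32x32 grayscale images
y = np.random.randint(0, 3, 200)       # 3 classes
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Convert the fitted scikit-learn model to Core ML.
mlmodel = coremltools.converters.sklearn.convert(clf)
mlmodel.save('MyClassifier.mlmodel')

My worry is that the saved model only knows about the flattened input vector, not about how the image was turned into that vector in the first place.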


Thanks!

Replies

Your question raises a very valid point. I'm also trying to port my own model to Core ML. Here are my 2 cents: I tried using the Inception v3 Core ML model inside my app, downloading the one from the Apple developer website. The only operation I did was to resize the image to 299x299 (I essentially followed this tutorial). I ran some tests on the ImageNet 2012 validation set and found an accuracy of ~73.5%. However, when I applied the preprocessing used in TensorFlow, my accuracy dropped to ~40%. Maybe the preprocessing is already included in the .mlmodel? Maybe I shouldn't apply any preprocessing at all? I don't know, but it seems to affect the model's accuracy a lot.
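
For what it's worth, my understanding is that this kind of preprocessing can be baked into the .mlmodel at conversion time, which might explain the difference. A rough sketch with the Keras converter — the scale/bias values are my guess at Inception v3's mapping of pixels from [0, 255] to [-1, 1], I haven't verified that Apple's published model uses exactly these:

import coremltools
from keras.applications.inception_v3 import InceptionV3

# Load a Keras Inception v3 with ImageNet weights (a stand-in for whatever
# graph was actually converted to produce the downloadable .mlmodel).
keras_model = InceptionV3(weights='imagenet')

# image_scale and the per-channel biases fold the preprocessing into the
# converted model, so the app only needs to hand Core ML a 299x299 image.
mlmodel = coremltools.converters.keras.convert(
    keras_model,
    input_names='image',
    image_input_names='image',         # expose the input as an image, not an MLMultiArray
    image_scale=2.0 / 255.0,           # assumed scaling: pixel * 2/255 ...
    red_bias=-1.0,                     # ... then -1 per channel, giving [-1, 1]
    green_bias=-1.0,
    blue_bias=-1.0,
    class_labels='imagenet_class_labels.txt',  # placeholder label file
)
mlmodel.save('Inceptionv3.mlmodel')

If the downloaded model already has something like that built in, then applying the TensorFlow preprocessing again before prediction would effectively scale the pixels twice, which would be consistent with the accuracy drop I saw.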