I'm just getting into ML, so forgive the elementary question here.
I trained my own model with scikit-learn that takes in image data and outputs a class. When testing the classifier on my computer, I'm obviously doing some preprocessing of each image to get the appropriate input vector. When I use coremltools to generate an MLModel, will I have to duplicate that image processing in my iOS app, mirroring what I'm doing in Python? How does an MLModel that takes in an image know how to preprocess that image?
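For context, here's a rough sketch of my current Python side (the 28×28 grayscale size, `LogisticRegression`, and the `"input"` / `"classLabel"` feature names are just placeholders standing in for my real setup):

```python
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
import coremltools

def preprocess(path):
    # My manual preprocessing: grayscale, resize to a fixed size,
    # flatten to a 784-value vector, scale to [0, 1].
    img = Image.open(path).convert("L").resize((28, 28))
    return np.asarray(img, dtype=np.float64).flatten() / 255.0

# Dummy training data standing in for my real preprocessed images.
X_train = np.random.rand(100, 28 * 28)
y_train = np.random.randint(0, 10, size=100)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Convert to Core ML. As far as I can tell, the resulting model's
# input is a plain 784-element multi-array, not an image -- which is
# what prompts my question about where the preprocessing lives.
mlmodel = coremltools.converters.sklearn.convert(
    clf, input_features="input", output_feature_names="classLabel")
mlmodel.save("ImageClassifier.mlmodel")
```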
Thanks!