I trained a model to classify images and inference is much slower than I expected: about 80ms to classify a single image. I'm sure there is a lot of complexity inside the image classifier, but isn't it at heart just transforming the image into an array of numbers and plugging that into a mathematical formula?
My questions are:
1. Is this expected or is there maybe something wrong with my setup?
2. Is there anything I can do to speed this up? There seem to be almost no parameters for this part of Core ML. My images are all 20x20 pixels, and one thing I suspect may be going wrong is that Core ML is resampling them to a much larger input size.
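
In case it helps, here is a sketch of how I'm loading the model and checking what input size it actually expects (the `MyClassifier` class name is a placeholder for the Xcode-generated model class; substitute your own):

```swift
import CoreML

let config = MLModelConfiguration()
config.computeUnits = .all  // allow CPU, GPU, and Neural Engine

// MyClassifier is the hypothetical auto-generated model class.
let model = try MyClassifier(configuration: config).model

// Inspect the image input the model declares; if this is larger than
// 20x20, Core ML will upscale every image before running inference.
for (name, input) in model.modelDescription.inputDescriptionsByName {
    if let constraint = input.imageConstraint {
        print("\(name): \(constraint.pixelsWide)x\(constraint.pixelsHigh)")
    }
}
```

This prints something much larger than 20x20 for my model, which is why I suspect resampling is involved.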
Extra info:
These timings were measured on-device with a release build.