Hey guys,
My long-term plan is to retrain InceptionV3 on some custom image classes. I've been planning on using Keras for that, because it seems easy to use, and I found some resources to help with my Python script.
I'm running into some trouble, though.
As a first step, I'm trying to just convert the standard InceptionV3 model trained on ImageNet. I run some test images against it and get the results I'd expect. However, once I convert the model to Core ML, I get different results. Similar, but different.
I also tried the same images against the InceptionV3 model Apple provides, and it gives correct results, similar to the unconverted Keras model.
Has anyone done this? I feel like I'm missing some sort of step to prepare the input, like maybe I need to add another input layer to Keras, or something.
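For reference, my current guess is a preprocessing mismatch: Keras's `preprocess_input` for InceptionV3 maps raw [0, 255] pixels to [-1, 1], while a plain conversion would feed raw pixels straight into the converted model. Here's a minimal sketch of the two formulas and how they could line up (the `image_scale`/`bias` names mirror coremltools' conversion parameters, which I'm assuming is the right knob; this isn't tested against my actual model yet):

```python
# Keras InceptionV3 preprocessing maps [0, 255] pixels to [-1, 1]:
#   x -> x / 127.5 - 1
# Core ML instead applies a per-channel scale-then-bias at the input:
#   x -> image_scale * x + bias
# These are the same transform when image_scale = 2/255 and bias = -1.

def keras_preprocess(pixel):
    """InceptionV3-style preprocessing: x / 127.5 - 1."""
    return pixel / 127.5 - 1.0

def coreml_preprocess(pixel, image_scale=2.0 / 255.0, bias=-1.0):
    """Core ML-style input preprocessing: scale first, then add bias."""
    return image_scale * pixel + bias

# The two formulations agree across the full pixel range.
for p in (0, 63.75, 127.5, 191.25, 255):
    assert abs(keras_preprocess(p) - coreml_preprocess(p)) < 1e-9

print(keras_preprocess(0), keras_preprocess(255))  # -1.0 1.0
```

If that's the issue, I'd expect passing `image_scale=2/255.0` along with `red_bias=-1`, `green_bias=-1`, `blue_bias=-1` to `coremltools.converters.keras.convert` to fix it, but I haven't confirmed that.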
For ease of use, I've posted my source code to github:
https://github.com/vml-ffleschner/coremltools-keras-inception-test
I'll keep hacking on this and post back if I figure something out, but hopefully someone can easily see what I'm doing wrong!