TL;DR: Image classification works in the Create ML Live View in Playgrounds, but not in Apple's Vision+Core ML example project.
Hi everyone!
I'm working on a machine learning app that classifies hand-drawn numbers. I made a model using Create ML that supposedly has 100% accuracy (I'll admit my sample size was only about 50 images per number). When I run it in my app, however, it doesn't work. To rule out a problem with my own app, I downloaded Apple's Vision+Core ML example Xcode project and replaced the MobileNet classifier with my model. I loaded the images saved to my phone from my own app, and the classifications were still inaccurate. What makes this interesting is that when I test the exact same images in the Create ML UI in the playground (where you can drop in test images), the classification works correctly.
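In case it helps, here is roughly the classification path as I have it set up in the example project (a sketch, not the exact sample code; `MyDigitClassifier` is a placeholder for my model's generated class, and the crop/scale option is the one the sample uses):

```swift
import CoreML
import UIKit
import Vision

// Sketch of the Vision+Core ML classification path, assuming
// `MyDigitClassifier` is the class Xcode generated from my .mlmodel.
func classify(_ image: UIImage) {
    guard let coreMLModel = try? MyDigitClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        print("Failed to load the Core ML model")
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("Classification: \(top.identifier) (confidence \(top.confidence))")
    }
    // This controls how Vision scales/crops the image to the model's
    // input size before inference.
    request.imageCropAndScaleOption = .centerCrop

    guard let cgImage = image.cgImage else { return }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

If the preprocessing here (scaling, cropping, orientation) differs from whatever the Create ML playground does before inference, that could also explain the mismatch, but I haven't been able to confirm that.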
My suspicion is that this is a problem involving some bridging happening in the backend, because apps can use Objective-C (mine is written in Swift, though), whereas the playground is purely Swift. I would really appreciate a solution.
Here is an example of an image that I tried to classify.
Here is what shows up in the app for 7; here is what shows up in the app for 5.
Here is what shows up in the playground for 7; here is what shows up in the playground for 5.