CoreML model predictions differ from training

I'm fairly new to Core ML, but I've had heaps of fun playing around with it so far. I'm currently learning how to train models to do facial recognition by creating the model in a playground and validating its results. I then save the .mlmodel and implement it in my app.

My issue is that when I test it in the playground it seems to have a very high degree of accuracy, but when I run the same model on the same pictures in my app environment I get completely different results, and it's pretty much unusable.

Here's some of the output I'm getting from the debug console.


[ BFB8D19B-40AE-45F9-B979-19C11A919DBE, revision 1, 0.778162 "Others",  9E9B2AC8-3969-4086-B3B0-6ED6BEDFFE71, revision 1, 0.221838 "Me"]


Here it wrongly classifies an image of me as someone else, even though it correctly classified the same image in the playground during testing. The app itself seems to be working fine; it's just the model that's suddenly off.
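
In case it helps, here's a simplified sketch of how I'm calling the model through Vision in the app (FaceClassifier is just a placeholder for my actual generated model class):

import UIKit
import CoreML
import Vision

func classify(_ image: UIImage) {
    // FaceClassifier is a placeholder for the class Xcode generates from the .mlmodel.
    guard let coreMLModel = try? FaceClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel),
          let cgImage = image.cgImage else {
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Results come back as label/confidence pairs, like the console output above.
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results {
            print(observation.identifier, observation.confidence)
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try? handler.perform([request])
}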


What am I missing here?


Thanks

Replies

Did you solve this? It seems quite common. What is not clear to me yet is how the images are reshaped to the square input that the model expects. There are a number of choices. I suspect resizing the images the same way they were resized during training would be best. What seems strange is that the test apps allow portrait and landscape mode but have no crop indicators. I would expect the best results if you train with square, cropped images and then provide a cropped square image at prediction time.
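
If you're going through Vision, one thing worth checking is the imageCropAndScaleOption on the VNCoreMLRequest, which controls how an arbitrary-sized photo gets squeezed into the model's square input. Something along these lines (the .centerCrop value is only my guess at what matches the training-time preprocessing):

import Vision

// Sketch: configure how Vision fits the input image into the model's square input.
// Assumes a Vision-based pipeline; .centerCrop is an assumption, not a confirmed match.
func makeClassificationRequest(with visionModel: VNCoreMLModel) -> VNCoreMLRequest {
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        print(results.map { "\($0.identifier): \($0.confidence)" })
    }
    // Options include .centerCrop, .scaleFit and .scaleFill - pick whichever one
    // matches how the training images were cropped/resized.
    request.imageCropAndScaleOption = .centerCrop
    return request
}

If the training images were square crops of faces and the app feeds in full, uncropped portrait or landscape photos, a mismatch here alone could explain the drop in accuracy.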