I successfully trained an object detection model with Turi Create and exported it in Core ML format.
The model was trained for 300 iterations and its mean_average_precision is about 0.7.
I then validated it on some images using Turi Create's bounding-box drawing util, and it recognized the object pretty well.
I then downloaded the sample project for recognizing objects in live capture here:
https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture
What's weird is that the object can't be detected when the iPhone camera is in portrait mode: no bounding box is drawn, and VNDetectedObjectObservation doesn't return any results. However, when I rotate the iPhone 90 degrees counter-clockwise (home button on the right), the bounding box appears. If I then move the phone horizontally, the offset between the bounding box and the object grows.
The same problem happens if I train the model with the Create ML app instead.
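For reference, here is a minimal sketch of how, as far as I understand, the sample feeds camera frames to Vision, since the orientation mapping is where I suspect the mismatch comes from. The names (exifOrientationForCurrentDeviceOrientation, handle, requests) are my own for illustration and may not match the sample exactly:

```swift
import AVFoundation
import ImageIO
import UIKit
import Vision

// Map the current device orientation to the EXIF orientation Vision expects.
// (My reading of what the sample's orientation helper does; possibly this is
// where portrait frames end up being interpreted incorrectly.)
func exifOrientationForCurrentDeviceOrientation() -> CGImagePropertyOrientation {
    switch UIDevice.current.orientation {
    case .portraitUpsideDown: return .left        // home button on top
    case .landscapeLeft:      return .upMirrored  // home button on the right
    case .landscapeRight:     return .down        // home button on the left
    case .portrait:           return .up          // home button on the bottom
    default:                  return .up
    }
}

// Per-frame handler: run the Core ML model on each camera buffer through Vision.
// `requests` is assumed to hold a VNCoreMLRequest built from the exported model.
func handle(sampleBuffer: CMSampleBuffer, requests: [VNRequest]) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: exifOrientationForCurrentDeviceOrientation(),
                                        options: [:])
    do {
        try handler.perform(requests)
    } catch {
        print(error)
    }
}
```

My guess is that the camera delivers pixel buffers in a fixed landscape orientation, so if the orientation passed to VNImageRequestHandler doesn't match how the phone is actually held, the model would see a rotated image, but I can't confirm that this is the cause.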
I'm not sure whether this is a problem with Turi Create, Core ML, or the sample's source code itself. Could anyone explain this to me, please?