When I train an object detection model with transfer learning and use it in an app on a device, I get a lot of false positive predictions with extremely high confidence that I do not get when I train the same model with the full network option. For example, a model looking for a shoe will generate a false positive on an image of a blank wall with over 95% confidence. Yet when I test the model by dragging an image of that same wall into the preview in Xcode, it classifies the image correctly. In fact, simply moving the camera so it goes out of focus for a brief second reliably produces an incorrect prediction. None of these issues occur with the model trained using the full network. I would prefer to use transfer learning until I am able to generate enough training data, so I have two questions:
1. Is there a reason for this?
2. Is there a way to prevent it?