I'm looking for a way to take a picture of, or point the camera at, a piece of clothing and match that image against the images the user has stored in my app. I'm storing the data in a Core Data database as a Binary Data attribute. Since the user also takes the pictures that get stored in the database, I don't think I can use pre-trained Core ML models.
I would like the matching to be done on device, if possible, rather than through an external service. An external service would probably just describe the item based on what its AI sees, and that description wouldn't let me match the item against the images stored in the app.
Does anyone know if this is possible with frameworks such as Vision or VisionKit?
You can use the Vision framework to compute image similarity and find the closest match. Vision's feature print API (VNGenerateImageFeaturePrintRequest) produces a compact feature vector for each image, and you can compare two feature prints with computeDistance(_:to:) — a smaller distance means the images are more similar. Apple's sample project walks through exactly this: https://developer.apple.com/documentation/vision/original_objective-c_and_swift_api/analyzing_image_similarity_with_feature_print
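Here's a minimal sketch of the idea. It assumes you've already decoded your Core Data Binary Data attributes into UIImage values; the `featurePrint(for:)` and `closestMatch(to:in:)` helpers and the `(id, image)` tuple shape are my own illustrative names, not part of Vision:

```swift
import Vision
import UIKit

// Computes a Vision feature print for an image, or nil if the request fails.
func featurePrint(for image: UIImage) -> VNFeaturePrintObservation? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    do {
        try handler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Feature print failed: \(error)")
        return nil
    }
}

// Finds the stored image whose feature print is closest to the query image.
// `storedImages` stands in for images you've decoded from Core Data.
func closestMatch(to query: UIImage,
                  in storedImages: [(id: String, image: UIImage)]) -> (id: String, distance: Float)? {
    guard let queryPrint = featurePrint(for: query) else { return nil }
    var best: (id: String, distance: Float)?
    for candidate in storedImages {
        guard let candidatePrint = featurePrint(for: candidate.image) else { continue }
        var distance: Float = 0
        do {
            // Smaller distance = more similar images.
            try queryPrint.computeDistance(&distance, to: candidatePrint)
        } catch {
            continue
        }
        if best == nil || distance < best!.distance {
            best = (candidate.id, distance)
        }
    }
    return best
}
```

In practice you'd want to compute the feature print for each stored image once (e.g. when the user saves it) and persist it alongside the image, rather than recomputing every print on each query.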