Hello,
My understanding of the paper linked below is that iOS ships with a MobileNetV3-based backbone model, which feeds different task-specific heads across the system.
I understand that this backbone is accessible for various uses through the Vision framework, but I was wondering whether it is also accessible for on-device fine-tuning for other purposes. As an example: if I want a model that detects some unique object in a photo, can I build on the built-in backbone, or do I have to bundle my own model in the app?
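For context, the closest public API I've found so far is Vision's image feature print, which I assume (but haven't confirmed) is produced by a shared built-in model rather than something I'd ship myself. Here's a rough sketch of the kind of thing I'm hoping is possible, matching a reference object by embedding distance; the threshold value is just a placeholder I'd have to tune:

```swift
import Vision
import UIKit

/// Compute a feature print (embedding) for an image using Vision's
/// built-in model. I'm assuming this is backed by an OS-provided
/// backbone; Apple doesn't document which model produces it.
func featurePrint(for image: UIImage) throws -> VNFeaturePrintObservation? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

/// Nearest-neighbor check: does a candidate photo look similar to a
/// reference object? The `threshold` is hypothetical and untuned.
func matchesReference(_ candidate: UIImage,
                      reference: UIImage,
                      threshold: Float = 0.5) throws -> Bool {
    guard let a = try featurePrint(for: candidate),
          let b = try featurePrint(for: reference) else { return false }
    var distance: Float = 0
    try a.computeDistance(&distance, to: b)
    return distance < threshold
}
```

This gives me image-level similarity, but it isn't fine-tuning, which is why I'm asking whether the backbone itself is exposed for training new heads on device.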
Thanks very much for any advice, and apologies if I've misunderstood something.
Source: https://machinelearning.apple.com/research/on-device-scene-analysis