Q: Can one perform model inference on the model deployment page, or is this just a step in the deployment process to the device?
A: I'm not sure which page you're referring to. If you mean https://ml.developer.apple.com, that is a step in the deployment process: developers bring their pre-trained models there and configure them for downloading to devices.
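As a rough sketch of how an app consumes a deployed collection on iOS 14 / macOS 11, the snippet below uses Core ML's `MLModelCollection.beginAccessing(identifier:completionHandler:)`; the collection identifier `"MyModelCollection"` is a hypothetical placeholder for whatever you configured on the deployment page.

```swift
import CoreML

// Hypothetical identifier configured at ml.developer.apple.com.
let collectionID = "MyModelCollection"

// beginAccessing downloads the collection into the app's container if needed,
// returns a Progress object, and delivers the entries asynchronously.
let progress = MLModelCollection.beginAccessing(identifier: collectionID) { result in
    switch result {
    case .success(let collection):
        for (name, entry) in collection.entries {
            // entry.modelURL points at the compiled model on device.
            print("Model \(name) available at \(entry.modelURL)")
        }
    case .failure(let error):
        print("Could not access collection: \(error)")
    }
}
```

Because the download is asynchronous, an app would typically observe `progress` or fall back to a bundled model until the collection is available.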

Q: Next, does this model storage count towards any of the various storage quotas?
A: Model collections are downloaded and placed into your app's container, so they do count against those quotas. Downloading collections is also network-attributed to your application.

Q: Does the encrypted ML model get decrypted every time it needs to perform an inference operation on device? Just curious about what's actually happening on the user's device here.
A: The model is decrypted into memory on demand, mostly at model load time.
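A minimal sketch of that load step, assuming a compiled encrypted model named `MyEncryptedModel.mlmodelc` bundled with the app (the name is hypothetical): the asynchronous `MLModel.load(contentsOf:configuration:completionHandler:)` call handles key retrieval and decrypts the model into memory, while the on-disk copy stays encrypted.

```swift
import CoreML

// Hypothetical compiled, encrypted model shipped in the app bundle.
guard let modelURL = Bundle.main.url(forResource: "MyEncryptedModel",
                                     withExtension: "mlmodelc") else {
    fatalError("Model not found in bundle")
}

// The async load fetches the decryption key if necessary and decrypts the
// model into memory; subsequent predictions reuse the in-memory model.
MLModel.load(contentsOf: modelURL, configuration: MLModelConfiguration()) { result in
    switch result {
    case .success(let model):
        print("Loaded: \(model.modelDescription)")
    case .failure(let error):
        print("Failed to load model: \(error)")
    }
}
```

Keeping the loaded `MLModel` alive between predictions avoids repeating the decryption work on every inference.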

Q: Finally, is this model deployment process the same for macOS 11?
A: Yes, the process is the same.