Use:
var mlImgClass = try? MLImageClassifier(trainingData: datasource, parameters: parameters)
Instead of:
var trainJob = try MLImageClassifier.train(
trainingData: datasource,
parameters: parameters,
sessionParameters: sessionParameters
)
(... handle the job progress ...)
I'm sorry to say that I don't understand why creating a job introduces so many complications, but using MLImageClassifier directly is faster and more efficient.
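For reference, here is a minimal sketch of the direct, synchronous approach. The paths and the `maxIterations` value are placeholders you would adapt to your own dataset; the dataset is assumed to follow the labeled-directories layout (one subfolder per class label):

```swift
import CreateML
import Foundation

// Hypothetical paths; adjust to your dataset layout
// (one subfolder per label, images inside each).
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")
let datasource = MLImageClassifier.DataSource.labeledDirectories(at: trainingDir)
let parameters = MLImageClassifier.ModelParameters(maxIterations: 20)

do {
    // Trains synchronously and returns the finished model.
    let classifier = try MLImageClassifier(trainingData: datasource,
                                           parameters: parameters)

    // Classify a single image by URL.
    let label = try classifier.prediction(from: URL(fileURLWithPath: "/path/to/test.jpg"))
    print("Predicted label:", label)

    // Export as a Core ML model for use in an app.
    try classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
} catch {
    print("Training failed:", error)
}
```

No job or session handling is required: the initializer blocks until training completes and hands back a usable classifier.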
It's possible that the train function behaves similarly to makeTrainingSession, where you manage a session through MLTrainingSession. In that case you get an array of checkpoints (MLCheckpoint). Every checkpoint has a URL: you should load the model from that location to make a prediction. The original documentation is not intuitive enough.
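If you do want the job-based API, the flow I believe the documentation intends looks roughly like the sketch below (based on the session-based training API shown in Apple's Create ML materials; the directory path and interval values are placeholders, and `datasource`/`parameters` are assumed to be set up as in the snippet above). The returned MLJob exposes Combine publishers for checkpoints and for the final result:

```swift
import CreateML
import Combine
import Foundation

// Hypothetical session directory; checkpoints are written here.
let sessionDir = URL(fileURLWithPath: "/path/to/TrainingSession")
let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: sessionDir
)

// Starts an asynchronous training job.
let job = try MLImageClassifier.train(
    trainingData: datasource,
    parameters: parameters,
    sessionParameters: sessionParameters
)

var subscriptions = Set<AnyCancellable>()

// Each checkpoint's `url` points at the state saved on disk
// inside the session directory.
job.checkpoints
    .sink { checkpoint in
        print("Checkpoint saved at:", checkpoint.url)
    }
    .store(in: &subscriptions)

// The result publisher delivers the trained classifier (or an error).
job.result
    .sink(receiveCompletion: { completion in
        print("Training finished:", completion)
    }, receiveValue: { classifier in
        try? classifier.write(to: sessionDir.appendingPathComponent("Classifier.mlmodel"))
    })
    .store(in: &subscriptions)
```

Calling train again with the same sessionDirectory should resume from the latest checkpoint rather than starting over, which seems to be the main point of the extra complexity.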
Although better documentation of the Core ML Tools in Python would be appreciated, if the question boils down to using the models available in Create ML, you can achieve the same result with the Swift classes such as MLImageClassifier: https://developer.apple.com/documentation/createml/mlimageclassifier
They support additional training of the models, presumably on the same architecture on which the model was created.
Thank you for your reply. I was thinking about a Safari Web Extension, but to be honest the important thing is knowing that it is possible to achieve a result normally restricted to JavaScript on web pages or to an external sandboxed app.
At the same time, it seems that an App Extension has more capabilities, and it could be included directly in the primary app without being downloaded separately. But in this case I have more difficulty discovering how to achieve that result: what I found in a web search is deprecated: https://developer.apple.com/documentation/safariextensions/safaribrowserwindow