Core ML question: running two mlmodels

Does Core ML support running models on separate threads?

I have two mlmodels that need to run in one project.

For example, an image is sent to the first mlmodel, and the processed result is passed to the second mlmodel.

Can I take an image, write two threads, and pass the image to the mlmodels?

Replies

Not sure I understand.


Can't you declare threads (with dispatch queues) in your app, one for each ML task?

Right, that's what I mean.

I'm very much a beginner researching this, but the examples I have seen run the classification asynchronously. You should be able to chain the handlers so the completion of one starts the other. Or, as suggested above, you can use dispatch queues, again chaining the dispatches. I've not done this, but I think the final interaction with the UI needs to be on the main queue. Hope this gives you plenty to look up and come up with a solution.


This is an example using an image file and an image classifier.


import Vision
import CoreML

// Handle the classification results produced by the request.
func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation] else {
        fatalError("Unexpected result type from VNCoreMLRequest")
    }
    for classification in results {
        print(classification.identifier,  // the scene label
              classification.confidence)
    }
}

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
try handler.perform([request])
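
Building on the chaining suggestion above, here is a rough sketch of running two models back to back on a background dispatch queue, with the final results handed back to the main queue. FirstModel and SecondModel are hypothetical Xcode-generated model classes, and the sketch assumes the first model outputs an image (so its results arrive as VNPixelBufferObservation) that the second model then classifies:

import Vision
import CoreML

// Serial background queue so the Core ML work stays off the main thread.
let mlQueue = DispatchQueue(label: "com.example.chained-ml")

func runChainedModels(on imageURL: URL) {
    mlQueue.async {
        do {
            // First model: produces a processed image.
            let firstModel = try VNCoreMLModel(for: FirstModel().model)
            let firstRequest = VNCoreMLRequest(model: firstModel)
            try VNImageRequestHandler(url: imageURL).perform([firstRequest])

            guard let processed = firstRequest.results?.first as? VNPixelBufferObservation else {
                print("First model did not return an image")
                return
            }

            // Second model: classifies the image produced by the first model.
            let secondModel = try VNCoreMLModel(for: SecondModel().model)
            let secondRequest = VNCoreMLRequest(model: secondModel)
            try VNImageRequestHandler(cvPixelBuffer: processed.pixelBuffer).perform([secondRequest])

            let labels = (secondRequest.results as? [VNClassificationObservation]) ?? []

            // Hand the results back on the main queue for any UI update.
            DispatchQueue.main.async {
                for observation in labels {
                    print(observation.identifier, observation.confidence)
                }
            }
        } catch {
            print("Chained prediction failed:", error)
        }
    }
}

If the two models instead take the same original image independently, you could simply perform one request per queue and combine the results once both have completed.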

I've done that



But the requirements have changed. Because of the data the two models were trained on, there is no way to merge them into one mlmodel, so I was wondering whether Core ML could support running the two mlmodels together.

Certainly you can merge two mlmodels into one. A typical way to do this is to make a pipeline. You can read about this on my blog (and more in my Core ML Survival Guide): http://machinethink.net/blog/mobilenet-ssdlite-coreml/