Posts

Post not yet marked as solved
1 Reply
Does your model use a NonMaximumSuppression layer (possibly as part of a pipeline)?
Post not yet marked as solved
1 Reply
It takes as long as it takes. Using multiple dispatch queues is unlikely to help, since you're limited by how fast the hardware can go. If you want a faster body pose detection method, you will have to train your own model.
Post not yet marked as solved
2 Replies
I don't know what the answer is, but I would try running the model in CPU-only or GPU mode to see if it happens there too. Perhaps it's an issue with the ANE. In any case, these developer forums are pretty deserted. If this is an important issue for your business (and it sounds like it is), you should use one of your Tech Support Incidents or contact a developer evangelist from Apple.
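For example, a minimal sketch (YourModel is a placeholder for your model's auto-generated class):

    import CoreML

    let config = MLModelConfiguration()
    config.computeUnits = .cpuOnly   // or .cpuAndGPU to rule out the ANE
    let model = try YourModel(configuration: config)
    // If the problem goes away with .cpuOnly but comes back with .all,
    // the ANE is the likely culprit.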
Post marked as solved
1 Reply
In general, with MLComputeUnitsAll, Core ML will try to use the ANE first, but there is no guarantee that it will actually choose the ANE. For more details, see https://github.com/hollance/neural-engine
Post marked as solved
1 Reply
Core ML models can only accept RGB or BGR images as inputs (the alpha channel is ignored). If your model wants a YCbCr image, you'll need to put the image into an MLMultiArray instead. You'll also need to change the input of the Core ML model to be a multi-array, not an image. Alternatively, you could add some layers at the start of the model that convert RGB back to YCbCr, but that seems like more trouble than it's worth.
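A rough sketch of the multi-array route (the [3, height, width] layout, the 224x224 size, and the ycbcrPixel helper are assumptions; match whatever layout your model was trained on):

    import CoreML

    let height = 224, width = 224   // assumed input size
    let array = try MLMultiArray(shape: [3, NSNumber(value: height), NSNumber(value: width)],
                                 dataType: .float32)
    let ptr = array.dataPointer.assumingMemoryBound(to: Float32.self)
    let planeSize = height * width
    for y in 0..<height {
        for x in 0..<width {
            // ycbcrPixel is a hypothetical helper that reads your source image.
            let (Y, Cb, Cr) = ycbcrPixel(x: x, y: y)
            ptr[0 * planeSize + y * width + x] = Y
            ptr[1 * planeSize + y * width + x] = Cb
            ptr[2 * planeSize + y * width + x] = Cr
        }
    }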
Post not yet marked as solved
1 Reply
Since you trained the model to use images as input, you will have to draw the SwiftUI paths into an image first.
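Something along these lines should work (a sketch using UIKit's renderer; the size, colors, and line width are placeholders you'd match to your training data):

    import SwiftUI
    import UIKit

    func image(from path: Path, size: CGSize) -> UIImage {
        let renderer = UIGraphicsImageRenderer(size: size)
        return renderer.image { ctx in
            // White background, black strokes; adjust to what the model expects.
            UIColor.white.setFill()
            ctx.fill(CGRect(origin: .zero, size: size))
            ctx.cgContext.setStrokeColor(UIColor.black.cgColor)
            ctx.cgContext.setLineWidth(4)
            ctx.cgContext.addPath(path.cgPath)
            ctx.cgContext.strokePath()
        }
    }

The resulting UIImage can then go through your usual image-to-model conversion.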
Post not yet marked as solved
1 Reply
It depends on what features your model uses. If it has a layer type that is only supported by macOS 10.15+, then the model cannot be used on earlier versions of macOS. However, if your model only uses the most basic features from Core ML version 1, it can be used on macOS 10.13. There is a field in the mlmodel file named specificationVersion; set this to 1 to make the model available on 10.13. Core ML will give an error if it turns out the model has layer types that are not supported by 10.13.

    import coremltools as ct

    spec = ct.utils.load_spec("YourModel.mlmodel")
    spec.specificationVersion = 1
    ct.utils.save_spec(spec, "YourNewModel.mlmodel")
Post not yet marked as solved
2 Replies
You can use the helper routines from CoreMLHelpers - https://github.com/hollance/CoreMLHelpers. As of iOS 13, Core ML also lets you pass in CGImage objects, but that API is not very straightforward to use. I prefer the method from CoreMLHelpers.
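For reference, the iOS 13 route looks roughly like this (a sketch; the input name "image", the 224x224 size, and the cgImage and model variables are assumptions you'd adapt to your own model):

    import CoreML
    import CoreVideo

    // Wrap the CGImage in a feature value at the model's expected size.
    let featureValue = try MLFeatureValue(cgImage: cgImage,
                                          pixelsWide: 224,
                                          pixelsHigh: 224,
                                          pixelFormatType: kCVPixelFormatType_32BGRA,
                                          options: nil)
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["image": featureValue])
    let result = try model.prediction(from: inputs)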
Post not yet marked as solved
1 Reply
I have had all kinds of weird errors with custom layers in Xcode 12, including the app crashing when trying to load the model. My solution: use coremlc from Xcode 11 to compile the model by hand, add the resulting mlmodelc folder to your project, remove the mlmodel file from the project, and load the model by passing in the URL to the mlmodelc folder in your app bundle. You will also need to copy and paste the auto-generated source file from Xcode 11 into your Xcode 12 project (since the project no longer contains an mlmodel file, Xcode no longer generates that source file for you).
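In code, loading the hand-compiled model looks like this (a minimal sketch; "YourModel" is a placeholder name):

    import CoreML

    // Compile on the Mac with Xcode 11's tools, for example:
    //   xcrun coremlc compile YourModel.mlmodel output/
    // then add the resulting YourModel.mlmodelc folder to the app target.
    let modelURL = Bundle.main.url(forResource: "YourModel", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: modelURL)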
Post not yet marked as solved
7 Replies
Here is a workaround:

1. Open the project in Xcode 11 and go to the auto-generated source file for your model.
2. Copy and paste that code into a new Swift source file and add it to your project.
3. Disable source code auto-generation for Core ML models in the project settings (see the note below).
4. Now open the project in Xcode 12 and it should no longer give that error.
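Note: I believe the underlying build setting is named as follows (an assumption worth double-checking in your Xcode version; set it on the target that contains the mlmodel):

    COREML_CODEGEN_LANGUAGE = None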
Post not yet marked as solved
3 Replies
I replied on Stack Overflow, as you asked there too.
Post not yet marked as solved
3 Replies
The labels are stored in your mlmodel file. If you open the mlmodel in Xcode 12, it will display what those labels are. My guess is that instead of actual labels, your mlmodel contains "CICAgICAwPmveRIJQWdsYWlzX2lv" and so on. If that is the case, you can make a dictionary in the app that maps "CICAgICAwPmveRIJQWdsYWlzX2lv" and so on to the real labels, or you can replace these labels inside the mlmodel file by editing it using coremltools. (My e-book Core ML Survival Guide has a chapter on how to replace the labels in the model.)
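The in-app dictionary could look like this (a sketch; the readable name is made up, and classifierOutput stands in for your model's prediction output):

    // Hypothetical mapping from the encoded labels to readable ones.
    let labelNames = [
        "CICAgICAwPmveRIJQWdsYWlzX2lv": "Some Readable Label",  // placeholder value
        // ...one entry per class...
    ]
    let rawLabel = classifierOutput.classLabel
    let displayLabel = labelNames[rawLabel] ?? rawLabel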
Post not yet marked as solved
6 Replies
When this happens, is it possible to save another large file (doesn't matter what is inside)? If not, perhaps the device's storage is full.
Post not yet marked as solved
3 Replies
If I were a visual artist, I'd skip Create ML and learn to train my own style transfer models using TensorFlow or PyTorch. Create ML is fun for simple stuff, but you'll quickly run into its limitations, and there is no way to work around them. For example, the Create ML style transfer models look like they're limited to 512x512 images (perhaps that's just the one for video; I didn't look closely). If you don't already have a Linux computer with a nice GPU lying around, that means doing the work in the cloud (which is often free for a number of hours).
Post not yet marked as solved
9 Replies
Yes, there is a way, but you're not going to like it. ;-) You'll have to write your own training code from scratch so that it can run on the phone. Then, to turn the trained model into a Core ML model, you can write protobuf messages to a new mlmodel file. Finally, you can load this mlmodel using the Core ML API, compile it on the device, and then use it to make predictions. But if you already have to write your own training code, it also makes sense to write your own inference code and skip Core ML altogether.
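The last two steps look roughly like this (a minimal sketch; modelURL is assumed to point at the mlmodel file you wrote):

    import CoreML

    // Compile the mlmodel on the device; this returns a URL to a
    // temporary .mlmodelc folder (move it somewhere permanent to keep it).
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL)
    // model.prediction(from:) can now be used as usual.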