With coremltools 4, you can convert PyTorch models directly to Core ML. To do this, you first create a "trace" by running the PyTorch model on a dummy input; the trace captures all the operations the model performs. It's quite possible that the trace of a fastai model doesn't contain any custom operations, since fastai is mostly a wrapper around PyTorch.
So I'd simply try to do the conversion first -- it might just work. ;-)
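In case it helps, here is a minimal sketch of the trace-then-convert workflow with coremltools 4. The input shape is an assumption, and `learn` stands in for your own fastai Learner; adjust both to match your model.

```python
# Minimal sketch (coremltools 4+): trace the underlying PyTorch model,
# then convert the trace to Core ML. The input shape is an assumption.
import torch
import coremltools as ct

pytorch_model = learn.model.eval()           # fastai Learner exposes the PyTorch model as .model
example_input = torch.rand(1, 3, 224, 224)   # dummy input used to record the trace

traced_model = torch.jit.trace(pytorch_model, example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
)
mlmodel.save("Model.mlmodel")
```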
You can use cloud deployment with or without encryption. When you deploy to the cloud, you can enable encryption by checking the "Encrypt model" option; it is then handled automatically for you.
The coremltools documentation was recently updated and is a good place to get started: https://coremltools.readme.io/docs
Vision does indeed resize your images, according to the VNImageCropAndScaleOption that you set on the VNCoreMLRequest object.
If you're not using Vision, but you're using the Core ML API, you'll have to do the resizing yourself. Or you can give the model flexible inputs so that it can handle images of different sizes.
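If you go the flexible-inputs route, coremltools can add a size range to an existing image input. A rough sketch; the feature name "image", the filenames, and the 64-512 pixel range are assumptions:

```python
# Rough sketch: give an existing image input a flexible size range.
# The feature name "image" and the 64-512 pixel range are assumptions.
import coremltools
from coremltools.models.neural_network import flexible_shape_utils

spec = coremltools.utils.load_spec("YourModel.mlmodel")

size_range = flexible_shape_utils.NeuralNetworkImageSizeRange()
size_range.add_height_range((64, 512))
size_range.add_width_range((64, 512))
flexible_shape_utils.update_image_size_range(
    spec, feature_name="image", size_range=size_range
)

coremltools.utils.save_spec(spec, "YourModelFlexible.mlmodel")
```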
I believe the model gets decrypted when you instantiate it, and the decrypted version stays in memory until the MLModel object is released or the app is terminated. So it doesn't get decrypted on every single inference operation (that would be very inefficient) but it does get decrypted every time it is loaded.
The difference here is that you need to host the model yourself on a server somewhere. It's not automagically handled for you like the new deployment stuff is.
It depends a little on the device you're using, but YOLO can be quite slow (especially the full version of YOLO). If YOLO runs at 15 FPS, for example, and you block the AVCapture thread, then it will automatically drop frames because your code isn't able to keep up.
One solution would be to use a queue of size 1, and have Core ML read from this queue on a separate thread. The AVCapture thread simply appends each new frame to this queue, saves the frame to the movie file using AVAssetWriter, and then waits for the next frame to come in. (Because the queue is size 1, this effectively always overwrites the old frame.)
Now the AVCapture thread will never be blocked for long amounts of time, and you won't drop any frames in the video. (Of course, YOLO will not see all frames.)
Ray Wenderlich publishes the book Machine Learning by Tutorials. The goal of that book is to teach ML to people who are familiar with iOS (or macOS) development but are new to ML. (Full disclosure: I am one of the authors.)
Core ML doesn't support Python at all, as it's the API for doing inference on iOS and macOS devices. It's written in Obj-C and C++ and has an Obj-C and Swift API.

coremltools, which is a Python package for building Core ML models and for converting them from other tools such as TensorFlow, works on both Python 2.7 and Python 3.x. (It used to work only with Python 2, but that was fixed several years ago. Since Python 2 is deprecated, you should really use it with Python 3.)
The correct thing to do is convert your model with the `image_input_names` option so that you can pass in an image instead of an MLMultiArray. (You can also change the input type from multi-array to image in the mlmodel afterwards.)

Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/
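For example, with the older Keras converter the option looks roughly like this; the model filename, input name, and preprocessing scale are placeholder assumptions:

```python
# Rough sketch using the older Keras converter; the filename, input name,
# and preprocessing scale are placeholder assumptions.
import coremltools

mlmodel = coremltools.converters.keras.convert(
    "your_model.h5",
    input_names=["image"],
    image_input_names=["image"],   # treat this input as an image, not an MLMultiArray
    image_scale=1 / 255.0,         # example preprocessing; match your model's training
)
mlmodel.save("YourModel.mlmodel")
```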
You can manually add a few new layers to the end of the model. It's easiest to do this in the original model and then convert it again to Core ML, or you can also patch the mlmodel file directly.

The layers you want to add are:

- add 1, so that the data is now in the range [0, 2]
- multiply by 127.5, so that the data is now in the range [0, 255]

Core ML has a preprocessing stage for the input image, but no "postprocessing" stage for output images, so you'll have to do this yourself with some extra layers.

Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/
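If the original model is in PyTorch, one way to bake those two layers in before converting is a small wrapper module. This is a hypothetical sketch; the class name and the assumption that the output is in [-1, 1] are mine:

```python
# Hypothetical wrapper that appends the two extra "layers":
# (x + 1) maps [-1, 1] to [0, 2], then * 127.5 maps it to [0, 255].
import torch.nn as nn

class ScaledOutput(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        x = self.model(x)
        return (x + 1.0) * 127.5   # output is now in pixel range [0, 255]

# Convert ScaledOutput(your_generator) to Core ML instead of the bare model.
```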