Reply to Core ML Converters and custom libraries on top of Pytorch
With coremltools 4, you can directly convert PyTorch models to Core ML. To do this, you first create a "trace" by running the PyTorch model on a dummy input; the trace captures all the operations the model performs. It's quite possible that the trace of a fastai model doesn't contain any custom operations, since fastai is mostly a wrapper around PyTorch. So I'd simply try the conversion first -- it might just work. ;-)
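A minimal sketch of what that looks like with coremltools 4 (the torchvision model and the input shape here are just stand-ins for your actual fastai model):

```python
import torch
import torchvision
import coremltools as ct

# Stand-in for your trained model; with fastai you'd grab the
# underlying PyTorch module via learn.model.
model = torchvision.models.resnet18()
model.eval()

# Trace the model by running it on a dummy input. The shape
# (1, 3, 224, 224) is an assumption -- use your model's real size.
dummy_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(model, dummy_input)

# Convert the traced model to Core ML.
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=dummy_input.shape)],
)
mlmodel.save("Model.mlmodel")
```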
Jun ’20
Reply to When is Preprocessing necessary
Vision does indeed resize your images, according to the VNImageCropAndScaleOption that you set on the VNCoreMLRequest object. If you're not using Vision but the Core ML API directly, you'll have to do the resizing yourself. Or you can give the model flexible inputs so that it can handle images of different sizes.
Jun ’20
Reply to Record video and Classify using YOLO at the same time
It depends a little on the device you're using, but YOLO can be quite slow (especially the full version of YOLO). If YOLO runs at 15 FPS, for example, and you block the AVCapture thread, then it will automatically drop frames because your code isn't able to keep up.

One solution would be to use a queue of size 1, and have Core ML read from this queue on a separate thread. The AVCapture thread simply puts each new frame into this queue, saves the frame to the movie file using AVAssetWriter, and then waits for the next frame to come in. (Because the queue is size 1, this effectively always overwrites the old frame.) Now the AVCapture thread is never blocked for long, and you won't drop any frames in the video. (Of course, YOLO will not see all frames.)
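The pattern itself is language-agnostic. Here's a minimal Python sketch of a one-slot "latest frame" buffer, just to illustrate the idea (all names are made up; on iOS you'd build the equivalent with a lock or a DispatchQueue around a single frame variable):

```python
import threading

class LatestFrameBuffer:
    """A 'queue' of size 1: writing overwrites any unread frame."""

    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None

    def put(self, frame):
        # Called from the capture thread; never blocks for long.
        with self._cond:
            self._frame = frame      # overwrite the old frame
            self._cond.notify()

    def take(self):
        # Called from the inference thread; waits for a frame.
        with self._cond:
            while self._frame is None:
                self._cond.wait()
            frame, self._frame = self._frame, None
            return frame
```

The capture thread calls put() for every frame and also hands the frame to AVAssetWriter; the YOLO thread loops on take(), so it always works on the most recent frame and simply skips the ones it missed.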
Jun ’20
Reply to Does Core ML 3 support Python 3?
Core ML doesn't support Python at all, as it's the API for doing inference on iOS and macOS devices. It's written in Obj-C and C++ and has an Obj-C and Swift API.

coremltools, which is a Python package for building Core ML models and for converting them from other tools such as TensorFlow, works with both Python 2.7 and Python 3.x. (It used to work only with Python 2, but that was fixed several years ago. Since Python 2 is deprecated, you should really use it with Python 3.)
May ’20
Reply to How to create MLMultiArray from UIImage?
The correct thing to do is convert your model with the `image_input_names` option so that you can pass in an image instead of an MLMultiArray. (You can also change the input type from multi-array to image in the mlmodel afterwards.)

Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/
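For example, with the old Keras converter (coremltools 3), it could look like this; the model file name, input name, and scaling are assumptions for illustration:

```python
import coremltools

# Convert a Keras model so Core ML treats the input as an image
# instead of an MLMultiArray.
mlmodel = coremltools.converters.keras.convert(
    "my_model.h5",
    input_names="image",
    image_input_names="image",   # this input becomes an image
    image_scale=1 / 255.0,       # optional input preprocessing
)
mlmodel.save("MyModel.mlmodel")
```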
Dec ’19
Reply to CoreML and UIImage.
You can manually add a few new layers to the end of the model. It's easiest to do this in the original model and then convert it again to Core ML, or you can also patch the mlmodel file directly.

The layers you want to add are:

- add 1, so that the data is now in the range [0, 2]
- multiply by 127.5, so that the data is now in the range [0, 255]

Core ML has a preprocessing stage for the input image, but no "postprocessing" stage for output images. So you'll have to do this yourself with some extra layers.

Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/
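If the original model happens to be a PyTorch module, a minimal sketch of the "do it in the original model" route could look like this (the wrapper class and names are made up for illustration):

```python
import torch.nn as nn

class PixelRangeWrapper(nn.Module):
    """Maps a model's [-1, 1] output to [0, 255] before export."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        # (output + 1) * 127.5, i.e. the "add" and "multiply" layers
        return (self.model(x) + 1.0) * 127.5
```

You'd then convert PixelRangeWrapper(model) instead of model, and the extra ops become the add and multiply layers in the resulting mlmodel.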
Dec ’19