-
Re: CoreML and UIImage.
akatti Dec 2, 2019 11:17 PM (in response to HeoJin)
What is the pixel format of this image - grayscale or RGB? Either way, you could scale the pixel values using vImage.
Something like https://developer.apple.com/documentation/accelerate/1533273-vimageconvert_fto16u?
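The conversion such a vImage call performs is essentially a linear rescale plus clamping to the integer range. A minimal pure-Python sketch of that math (the helper name is illustrative, not the vImage API; assumes model outputs are floats in [-1, 1]):

```python
def float_to_uint8(pixels, lo=-1.0, hi=1.0):
    """Linearly map floats in [lo, hi] to integers in [0, 255],
    clamping any out-of-range values (as the vImage converters do)."""
    out = []
    for p in pixels:
        scaled = (p - lo) / (hi - lo) * 255.0
        clamped = min(255.0, max(0.0, scaled))
        out.append(int(round(clamped)))
    return out

print(float_to_uint8([-1.0, 0.0, 1.0, 1.5]))  # → [0, 128, 255, 255]
```

vImage does the same thing in one vectorized call per plane, which is much faster than looping over pixels in Swift.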
-
Re: CoreML and UIImage.
kerfuffle Dec 9, 2019 2:00 PM (in response to HeoJin)
You can manually add a few new layers to the end of the model. It's easiest to do this in the original model and then convert it to Core ML again, but you can also patch the mlmodel file directly.
The layers you want to add are:
- add 1, so that the data is now in the range [0, 2]
- multiply by 127.5, so that the data is now in the range [0, 255]
Core ML has a preprocessing stage for the input image, but no "postprocessing" stage for output images. So you'll have to do this yourself with some extra layers.
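Those two extra layers together compute the affine map (x + 1) * 127.5, which sends [-1, 1] to [0, 255]; equivalently, a single scale layer with weight 127.5 and bias 127.5 would do. A quick sanity check of the math in plain Python (plain functions standing in for the layers, not Core ML API calls):

```python
def postprocess(x):
    """Mimic the two appended layers:
    layer 1: elementwise add 1   -> data in [0, 2]
    layer 2: multiply by 127.5   -> data in [0, 255]"""
    added = x + 1.0
    return added * 127.5

for v in (-1.0, 0.0, 1.0):
    print(v, "->", postprocess(v))  # -1 -> 0.0, 0 -> 127.5, 1 -> 255.0
```

With these layers in place (and the output declared as an image), Core ML hands you pixel data directly instead of an MLMultiArray you'd have to rescale yourself.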
Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/