CoreML and UIImage.

I've converted a pix2pix model whose input/output is an image (256x256).


But the model's outputs are normalized to the range [-1, 1], and the output image shows nothing but black.


Does anyone know how to convert this image to [0, 255]?

What is the pixel format of this image - grayscale or RGB? Either way, you could scale pixel values using vImage.


Something like https://developer.apple.com/documentation/accelerate/1533273-vimageconvert_fto16u?
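For 8-bit output there is also vImageConvert_PlanarFtoPlanar8, which maps a float range straight to [0, 255]. Here's a minimal sketch of that idea, assuming the model output has already been wrapped in a planar (single-channel) Float vImage_Buffer; the names are placeholders, and for RGB you'd run it once per channel or use an interleaved variant:

```swift
import Accelerate

// Converts a planar buffer of Float values in [-1, 1] into 8-bit pixels in [0, 255].
func planar8Pixels(from source: inout vImage_Buffer,
                   width: Int, height: Int) -> [UInt8]? {
    var pixels = [UInt8](repeating: 0, count: width * height)
    let error = pixels.withUnsafeMutableBytes { destBytes -> vImage_Error in
        var destination = vImage_Buffer(data: destBytes.baseAddress,
                                        height: vImagePixelCount(height),
                                        width: vImagePixelCount(width),
                                        rowBytes: width)
        // maxFloat = 1 maps to 255, minFloat = -1 maps to 0;
        // values outside [-1, 1] are clipped.
        return vImageConvert_PlanarFtoPlanar8(&source, &destination,
                                              1.0, -1.0,
                                              vImage_Flags(kvImageNoFlags))
    }
    return error == kvImageNoError ? pixels : nil
}
```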

You can manually add a few new layers to the end of the model. It's easiest to do this in the original model and then convert it to Core ML again, but you can also patch the mlmodel file directly.


The layers you want to add are:


- add 1, so the data is now in the range [0, 2]

- multiply by 127.5, so the data is now in the range [0, 255]


Core ML has a preprocessing stage for the input image, but no "postprocessing" stage for output images. So you'll have to do this yourself with some extra layers.
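If you can't (or don't want to) touch the model itself, an alternative is to apply the same (x + 1) * 127.5 scaling to the raw MLMultiArray output in app code. A minimal sketch, assuming a single 256x256 Float channel; the function name is just a placeholder:

```swift
import CoreML

// Maps MLMultiArray values in [-1, 1] to 8-bit pixel values in [0, 255].
func pixelBytes(from output: MLMultiArray) -> [UInt8] {
    var pixels = [UInt8](repeating: 0, count: output.count)
    for i in 0..<output.count {
        // (x + 1) * 127.5 maps [-1, 1] -> [0, 255]; clamp anything slightly out of range.
        let scaled = (output[i].floatValue + 1) * 127.5
        pixels[i] = UInt8(max(0, min(255, scaled)))
    }
    return pixels
}
```

You'd still have to copy these bytes into a CGImage/UIImage yourself, and for a 3-channel output you'd need to respect the model's channel layout when interleaving, so adding the layers to the model is usually the cleaner route.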


Edit: Because this question comes up a lot, I wrote a blog post about it: https://machinethink.net/blog/coreml-image-mlmultiarray/
