I am new to Swift and CoreML, and I have created a Keras model that transforms black-and-white images into color images. The inputs and outputs are:
- Model input: a black-and-white image array with shape (256, 256, 1); pixel values are floats in the range 0-1
- Model output: a color image array with shape (256, 256, 3); pixel values are floats in the range 0-1
The final model is converted with coremltools 0.7:
coreml_model = coremltools.converters.keras.convert(model=(path_to_architecture, path_to_weights), input_names='grayscaleImage', image_input_names='grayscaleImage', output_names='colorImage')
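As far as I know, the Keras converter also accepts preprocessing parameters (`image_scale` and the per-channel bias arguments) that Core ML applies to each input pixel before inference, which is a possible way to get 0-255 pixels into the 0-1 range the model expects. A sketch of the conversion call with that option (the parameter names are taken from the coremltools converter docs; the path variables are the same placeholders as above):

```python
# Conversion with built-in input preprocessing (coremltools Keras converter).
# Core ML computes: scaled_pixel = image_scale * pixel + bias before running
# the model, so UInt8 input pixels (0-255) arrive at the model as floats (0-1).
import coremltools

coreml_model = coremltools.converters.keras.convert(
    model=(path_to_architecture, path_to_weights),
    input_names='grayscaleImage',
    image_input_names='grayscaleImage',
    output_names='colorImage',
    image_scale=1.0 / 255.0,  # map 0-255 -> 0-1 on the input side
    gray_bias=0.0,            # no additive offset for the single gray channel
)
```

Note that this only preprocesses the input; the output side (0-1 back to 0-255) still has to be handled separately.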
Then I modified the output of the model with the code from this post so that the output is an image:
The next step was to import the model into Xcode, load an image from the gallery, crop and resize it to 256x256, and convert it to black and white. The image type is CVPixelBuffer. This image is fed to the model, and the output (CVPixelBuffer) is converted to a UIImage and displayed in the app. The problem is that the output image is always black.
- Is it possible at this time to work with CoreML and image to image transformations?
- Is there a way to scale the image pixel values to the range 0-1 instead of 0-255, or should the model's inputs and outputs be MLMultiArrays?
Thanks for any information
EDIT: I built a mini-model with "normal" images as input and output, and it works!
- Model input: a black-and-white image array with shape (256, 256, 1); pixel values are UInt8 in the range 0-255
- Model output: a color image array with shape (256, 256, 3); pixel values are UInt8 in the range 0-255
But the question remains: can I normalize CVPixelBuffer images (scale the pixel values from 0-255 to 0-1 in Swift for the model input, and scale the model output from 0-1 back to 0-255), or do I have to train a new model on unnormalized data?
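Whether the scaling happens in Swift or via converter preprocessing, the math itself is just a linear map in each direction. A minimal sketch of the two conversions in plain Python (the function names are my own, for illustration only):

```python
def normalize(pixels):
    """Map UInt8 pixel values (0-255) to floats (0-1) for the model input."""
    return [p / 255.0 for p in pixels]

def denormalize(pixels):
    """Map model-output floats (0-1) back to UInt8 pixel values (0-255),
    clamping to guard against values slightly outside 0-1."""
    return [min(255, max(0, round(p * 255.0))) for p in pixels]

gray_row = [0, 128, 255]            # one row of grayscale pixels
floats = normalize(gray_row)        # [0.0, ~0.502, 1.0]
restored = denormalize(floats)      # round-trips back to [0, 128, 255]
```

The clamp in `denormalize` matters in practice: a model's float output can land marginally below 0 or above 1, and without clamping those values would wrap or overflow when cast to UInt8.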