      EMBLab Level 1 (0 points)

        Hello,

        I am new to Swift and Core ML, and I have created a Keras model for transforming black-and-white images into color images. The model's inputs and outputs are:

         

        - Model input: a black-and-white image array with shape (256, 256, 1); pixel values are floats in the range 0-1

        - Model output: a color image array with shape (256, 256, 3); pixel values are floats in the range 0-1

         

        The final model is converted with coremltools 0.7:

        coreml_model = coremltools.converters.keras.convert(
            model=(path_to_architecture, path_to_weights),
            input_names='grayscaleImage',
            image_input_names='grayscaleImage',
            output_names='colorImage')
         

        Then I modified the model's output with the code from this post so that the output is an image:

        https://forums.developer.apple.com/thread/81571
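
        For reference, here is a minimal sketch of that kind of spec edit (the feature name 'colorImage' matches the convert call above; the output file name is only an example). It marks the multi-array output as a 256x256 RGB image so Core ML hands back a CVPixelBuffer:

            import coremltools
            import coremltools.proto.FeatureTypes_pb2 as ft

            # Turn the 'colorImage' multi-array output into an RGB image output
            # by editing the model spec, then rebuild and save the model.
            spec = coreml_model.get_spec()
            for output in spec.description.output:
                if output.name == 'colorImage':
                    output.type.imageType.colorSpace = ft.ImageFeatureType.ColorSpace.Value('RGB')
                    output.type.imageType.width = 256
                    output.type.imageType.height = 256

            coreml_model = coremltools.models.MLModel(spec)
            coreml_model.save('Colorizer.mlmodel')  # example file name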

         

        The next step was to import the model into Xcode, load an image from the gallery, crop and resize it to 256x256, and convert it to black and white. The image type is CVPixelBuffer. This image is passed to the model, and the output (CVPixelBuffer) is converted to a UIImage and displayed in the app. The problem is that the output image is always black.

         

        - Is it currently possible to do image-to-image transformations with Core ML?

        - Is there a way to scale the image pixel values to the range 0-1 instead of 0-255, or should the model inputs and outputs be MLMultiArrays?

         

        Thanks for any information

         

        EDIT: I built a mini-model with "normal" images as input and output, and it works!

        - Model input: a black-and-white image array with shape (256, 256, 1); pixel values are UInt8 in the range 0-255

        - Model output: a color image array with shape (256, 256, 3); pixel values are UInt8 in the range 0-255

         

        But there is still the question: can I normalize CVPixelBuffer images in Swift (scale the pixel values from 0-255 to 0-1 for the model input and scale the model output from 0-1 back to 0-255), or do I have to train a new model with unnormalized data?

        • Re: CoreML image to image transformation
          kerfuffle Level 3 (110 points)

          In the call to `coremltools.converters.keras.convert()` you can specify scaling options. This adds automatic preprocessing to the Core ML model, so you can pass it a CVPixelBuffer and the model will automatically convert from 0-255 to 0-1 (or whatever range you've trained the model to accept).
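
          For example, here is a sketch of the conversion call with the input preprocessing baked in (same file paths and feature names as in the original post). image_scale multiplies every incoming pixel value, so 1/255.0 maps the 0-255 values from the CVPixelBuffer into the 0-1 range the model was trained on:

              import coremltools

              # Same conversion as before, plus image_scale so the Core ML model
              # itself rescales the grayscale input from 0-255 to 0-1.
              coreml_model = coremltools.converters.keras.convert(
                  model=(path_to_architecture, path_to_weights),
                  input_names='grayscaleImage',
                  image_input_names='grayscaleImage',
                  output_names='colorImage',
                  image_scale=1 / 255.0)

          Note that this preprocessing applies to the input only; the output still comes back in whatever range the model produces.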