ValueError: Channel Value %d not supported for image inputs

When importing a Keras model, the Keras converter assumes that images are imported as <width, height, color space>. I would like either a parameter to configure `image_input_names` or a flag to have `image_input_names` parse inputs as <width, color space, height>.


Error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/home/keras/test/train.py in <module>()
     49 print model.input_shape
     50
---> 51 coremlmodel = coremltools.converters.keras.convert(model, class_labels="labels.txt", image_input_names="input1")
     52
     53


/opt/conda/envs/coreml/lib/python2.7/site-packages/coremltools/converters/keras/_keras_converter.pyc in convert(model, input_names, output_names, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale, class_labels, predicted_feature_name)
    429                                           blue_bias = blue_bias,
    430                                           gray_bias = gray_bias,
--> 431                                           image_scale = image_scale)
    432
    433     # Return the protobuf spec


/opt/conda/envs/coreml/lib/python2.7/site-packages/coremltools/models/neural_network.pyc in set_pre_processing_parameters(self, image_input_names, is_bgr, red_bias, green_bias, blue_bias, gray_bias, image_scale)
   1670                             input_.type.imageType.colorSpace = _FeatureTypes_pb2.ImageFeatureType.ColorSpace.Value('RGB')
   1671                     else:
-> 1672                         raise ValueError("Channel Value %d not supported for image inputs" % channels)
   1673                     input_.type.imageType.width = width
   1674                     input_.type.imageType.height = height


ValueError: Channel Value 100 not supported for image inputs


Here is a link to sample code on GitHub: https://github.com/joeblau/coremltools-demo

Replies

Thanks for trying out CoreML Beta and sharing your feedback! Keras supports two ways of passing image inputs:

<height, width, color space> and <color space, height, width>. The Keras conversion tool in CoreML Beta supports only <height, width, color space>. The CoreML team is working on supporting the <color space, height, width> configuration. Stay tuned for updates.


The <width, color space, height> layout is not a common configuration for image data. One suggestion would be to transpose the image input itself. If that's not something you want to change, you can also add a Permute layer as the first layer of the Keras model:

width, color_space, height = 100, 3, 100

# choose this for <height, width, color space>
model.add(Permute((3, 1, 2), input_shape=(width, color_space, height)))
# choose this for <color space, height, width>
model.add(Permute((2, 3, 1), input_shape=(width, color_space, height)))

# Add the rest of the layers
model.add(Convolution2D(32, 3, 3, border_mode='same'))
# ...
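The effect of those Permute specs can be checked without Keras at all. A minimal sketch (Permute dims are 1-indexed and exclude the batch axis, so this helper just reorders a shape tuple accordingly):

```python
def permute_shape(shape, dims):
    """Apply a Keras-style Permute spec (1-indexed, batch excluded) to a shape tuple."""
    return tuple(shape[d - 1] for d in dims)

wch = (100, 3, 100)  # <width, color space, height>, as in the example above

print(permute_shape(wch, (3, 1, 2)))  # (100, 100, 3) -> <height, width, color space>
print(permute_shape(wch, (2, 3, 1)))  # (3, 100, 100) -> <color space, height, width>
```

Either way the channel axis ends up where the converter expects it, with only 3 entries instead of 100.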


Unfortunately, with this workaround you would not be able to pass an image to the model; you would instead need to use MLMultiArray as the input type:

coremlmodel = coremltools.converters.keras.convert(model,
                                                   class_labels = 'label.txt')


Thanks!

I have a similar issue using the VGG16 model from Keras. Following this tutorial: http://www.codesofinterest.com/2017/08/bottleneck-features-multi-class-classification-keras.html


I get ValueError: Channel Value 512 not supported for image inputs


It seems to me that this number 512 is coming directly from:

model = applications.VGG16(include_top=False, weights='imagenet')

as none of the shapes in my model contain this number.


model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))


Does this mean VGG16 can't be used with the coreml keras converter? I'm pretty new to this and a bit confused.
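For what it's worth, the 512 is consistent with the shape of VGG16's bottleneck features. A small sanity check, assuming the standard 224x224 VGG16 input (smaller inputs change only the spatial side, not the 512):

```python
# VGG16's five conv blocks use 64, 128, 256, 512, 512 filters, and each
# block ends with a 2x2 max-pool that halves the spatial size.
block_filters = [64, 128, 256, 512, 512]
input_side = 224  # assumed standard VGG16 input size

channels = block_filters[-1]                    # 512
side = input_side // (2 ** len(block_filters))  # 224 // 32 = 7

bottleneck_shape = (side, side, channels)
print(bottleneck_shape)  # (7, 7, 512)
```

So a top model built on `train_data.shape[1:]` has an input like (7, 7, 512), and when converted on its own, the converter treats that input as an image with 512 "channels" instead of 1 or 3.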

Hi,


Did you find any solution to this?

I have retrained both InceptionV3 and VGG16 models with Keras using the same tutorial as you, and I get the exact same error.


ValueError: Channel Value 512 not supported for image inputs

I suspect that you're seeing an issue with dim ordering.


In machine learning, color images are usually represented in the shape (height, width, channels) or (channels, height, width), known as "channels_last" and "channels_first" respectively.


If your model is set to use channels as the last dimension but you are feeding it images where channels are the first dimension, it will throw an error like that, because it expects at most 3 color channels (R, G, and B).


I would advise you to look into your ~/.keras/keras.json file and see what your "image_data_format" setting is. You may have to reshape your images so that the channels are in the appropriate dimension.
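Reading that setting can be scripted with just the standard library. A minimal sketch (the real file lives at ~/.keras/keras.json; this uses a temporary copy with Keras' usual default contents so it runs anywhere):

```python
import json
import os
import tempfile

# Keras' typical default config; on a real machine you'd read
# os.path.expanduser("~/.keras/keras.json") instead.
default = {
    "image_data_format": "channels_last",
    "backend": "tensorflow",
    "floatx": "float32",
    "epsilon": 1e-07,
}

with tempfile.TemporaryDirectory() as d:
    cfg_path = os.path.join(d, "keras.json")
    with open(cfg_path, "w") as f:
        json.dump(default, f)

    with open(cfg_path) as f:
        cfg = json.load(f)

print(cfg["image_data_format"])  # channels_last
```

If it says "channels_last", your model expects (height, width, channels) inputs, and a channels-first array will trip the converter's channel check.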

Hi,


I had the same problem (using VGG16) and figured it out thanks to a post on Stack Overflow - https://stackoverflow.com/questions/47707728/coreml-converted-keras-model-requests-multi-array-input-instead-of-image-in-xcod/50993661#50993661 - : you have to save the whole model, including the VGG16 layers.

In my code, I was only saving the top model.


Here is what I did to solve it:


# add the model on top of the convolutional base
  fullModel = Model(inputs=base_model.input, outputs=top_model(base_model.output))


with:

  • base_model being your base model (VGG16) without its top layer
  • top_model being your top model


In my case, something like this:


  base_model = applications.VGG16(include_top=False, weights='imagenet', input_shape=(img_width, img_height, 3))
  ....
  top_model = Sequential()
  top_model.add(Flatten(input_shape=train_data.shape[1:]))
  top_model.add(Dense(256, activation='relu'))
  top_model.add(Dropout(0.5))
  top_model.add(Dense(num_classes, activation='softmax'))