5 Replies · Latest reply on Feb 27, 2019 1:23 AM by kerfuffle
arucoCode, Level 1 (0 points)

I am working on a Keras model that produces grayscale masks: a U-Net architecture built on SqueezeNet. Both the input and the output are images.

I managed to convert my model so that it accepts an image, but it still outputs an MLMultiArray.

I can see that there is a way to produce a model whose input and output are both images, for example https://developer.apple.com/videos/play/wwdc2018/708 at 14:12.


        Here is my approach:


        pip install coremltools==2.0b1


        Python script:

        import keras
        import coremltools
        from coremltools.models.neural_network.quantization_utils import *


followed by:


model = keras.models.load_model('sqeezeU.hdf5')
core_model = coremltools.converters.keras.convert(model, image_input_names='input1')  # 'input1' stands in for my model's real input name
core_model.save('core_model.mlmodel')


then I followed the post at https://forums.developer.apple.com/thread/81571:


model = coremltools.models.MLModel('core_model.mlmodel')
spec = model.get_spec()
newModel = coremltools.models.MLModel(spec)
newModel.save('core_model.mlmodel')


Despite all of that, nothing changes in my resulting .mlmodel. Inside Xcode I still get a model that accepts images but outputs an MLMultiArray.

        Am I missing something?

Is there a secret sauce for making a model that outputs images? Maybe I am using an incorrect version of coremltools?

Please help. I have been chasing my tail across various forums.

        Thanks in advance