I am working on a Keras model that produces grayscale masks. It is a U-Net architecture built on SqueezeNet, and both its input and output are images.
I managed to convert my model so that it accepts an image as input, but it still outputs an MLMultiArray.
I can see that there is a way to produce a model whose input and output are both images, for example https://developer.apple.com/videos/play/wwdc2018/708 at 14:12.
Here is my approach:
Terminal:
pip install coremltools==2.0b1
Python script:
import keras
import coremltools
from coremltools.models.neural_network.quantization_utils import *
followed by:
model = keras.models.load_model('sqeezeU.hdf5')
core_model = coremltools.converters.keras.convert(
    model,
    input_names="image",
    image_input_names="image",
    output_names="mask",
    image_scale=1/255.0,
)
core_model.save('core_model.mlmodel')
Then I follow this post: https://forums.developer.apple.com/thread/81571
model = coremltools.models.MLModel('core_model.mlmodel')
spec = model.get_spec()
# helper from the linked thread; sketch below
convert_multiarray_output_to_image(spec, 'imageOutput', is_bgr=True)
newModel = coremltools.models.MLModel(spec)
newModel.save('core_model.mlmodel')
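For reference, convert_multiarray_output_to_image is the helper defined in that thread; reproduced from memory, it looks roughly like this (the exact code is in the linked post):

from coremltools.proto import FeatureTypes_pb2 as ft

def convert_multiarray_output_to_image(spec, feature_name, is_bgr=False):
    # Mark the named multiarray output as an image; modifies the spec in place.
    for output in spec.description.output:
        if output.name != feature_name:
            continue
        if output.type.WhichOneof('Type') != 'multiArrayType':
            raise ValueError('%s is not a multiarray type' % output.name)
        channels, height, width = tuple(output.type.multiArrayType.shape)
        if channels == 1:
            output.type.imageType.colorSpace = ft.ImageFeatureType.GRAYSCALE
        elif channels == 3:
            output.type.imageType.colorSpace = ft.ImageFeatureType.BGR if is_bgr else ft.ImageFeatureType.RGB
        else:
            raise ValueError('Channel value %d not supported' % channels)
        output.type.imageType.width = width
        output.type.imageType.height = height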
Despite all of that, nothing changes in my resulting .mlmodel. Inside Xcode I still get a model that accepts an image and outputs an MLMultiArray.
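For what it's worth, printing the declared output types from the spec is another way to check this outside Xcode (a small sketch using the coremltools spec API):

import coremltools

spec = coremltools.utils.load_spec('core_model.mlmodel')
for out in spec.description.output:
    # 'imageType' vs. 'multiArrayType' shows which kind of output is declared
    print(out.name, out.type.WhichOneof('Type'))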
Am I missing something?
Is there a secret sauce to make a model that outputs images? Maybe I am using an incorrect version of coremltools?
Please help. I am chasing my tail on various forums.
Thanks in advance