I am trying to port a fully convolutional network to Core ML. In TensorFlow, it can take an image of any width and height as input. However, in the Core ML model specification (FeatureTypes.proto), both ImageFeatureType and ArrayFeatureType require the input dimensions to be specified explicitly. The Core ML compiler in Xcode then translates this into something like MultiArray<Double, 3, 128, 128>, where 128 is the fixed image width and height, which is not what I want.
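For concreteness, here is how the hard-coded shape shows up when I inspect the converted model with coremltools (the file name model.mlmodel and the feature name image are placeholders for my converted network):

```python
import coremltools

# Load the .mlmodel spec and print each input's FeatureDescription
# in protobuf text format.
spec = coremltools.utils.load_spec("model.mlmodel")
for inp in spec.description.input:
    print(inp)

# name: "image"
# type {
#   multiArrayType {
#     shape: 3
#     shape: 128
#     shape: 128
#     dataType: DOUBLE
#   }
# }
```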
In TensorFlow, I would simply specify the input shape as (None, None, 3) in Python. Is there a way to accept images of arbitrary size as input in Core ML? What should the protobuf message look like? Such a feature would be very useful for style transfer and image segmentation.
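For reference, this is roughly how I declare the flexible input on the TensorFlow side (a minimal Keras-style sketch; the conv layers just stand in for the real network):

```python
import tensorflow as tf

# None for height and width: a fully convolutional network
# accepts images of arbitrary spatial size.
inputs = tf.keras.Input(shape=(None, None, 3))
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
outputs = tf.keras.layers.Conv2D(3, 1, padding="same")(x)
model = tf.keras.Model(inputs, outputs)
```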
Thanks in advance.