CoreML throws uncaught exception on assigning imageConstraint

Hi all. I am trying to make an MLModel accept images for style transfer with the image size determined at runtime. I noticed that the image size constraint is specified by the imageConstraint property, of type MLImageConstraint, inside MLFeatureDescription, and that the property is declared with { get set }, which suggests we are allowed to change it. So I figured this might be a way to dynamically set the image input/output size of an MLModel.


So the approach taken is (note that I am not using the Xcode-generated interface):

  1. Create an instance of MLModel.
  2. From its modelDescription, use the dictionary inputDescriptionsByName to get the MLFeatureDescription that represents the input.
  3. Create a subclass of MLImageConstraint that returns the desired image width and height.
  4. Assign an instance of that subclass to the imageConstraint: MLImageConstraint? property of the MLFeatureDescription obtained in step 2.


However, it crashes at runtime with -[MLFeatureDescription setImageConstraint:]: unrecognized selector sent to instance.


What is worse, even if I assign nil to imageConstraint (just for testing), it still crashes. The code is here:
let bundle = Bundle(for: udnie.self)
let assetPath = bundle.url(forResource: "udnie", withExtension:"mlmodelc")
let model = try! MLModel(contentsOf: assetPath!)
NSLog("count %d", model.modelDescription.inputDescriptionsByName.count)
if let input = model.modelDescription.inputDescriptionsByName["input"] {
   NSLog("before")
   input.imageConstraint = nil
   NSLog("after")
}

Crash with:

2017-08-11 14:53:40.228981+0800 MobileNetCoreML[332:38134] count 1
2017-08-11 14:53:40.229107+0800 MobileNetCoreML[332:38134] before
2017-08-11 14:53:40.229193+0800 MobileNetCoreML[332:38134] -[MLFeatureDescription setImageConstraint:]: unrecognized selector sent to instance 0x1c024a9e0
2017-08-11 14:53:40.229594+0800 MobileNetCoreML[332:38134] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[MLFeatureDescription setImageConstraint:]: unrecognized selector sent to instance 0x1c024a9e0'
*** First throw call stack:
(0x180df0d5c 0x180304528 0x180dfe218 0x180df6708 0x180cdc35c 0x102c8eb3c 0x102c8ee84 0x102c93b68 0x102c93a84 0x102c93ae4 0x18a953310 0x18a953290 0x18a93e050 0x18a952b84 0x18a9526a4 0x18a94db74 0x18a91efc8 0x18b23cc2c 0x18b23f09c 0x18b238208 0x180d99570 0x180d994f0 0x180d98dcc 0x180d96950 0x180cb7558 0x182b3af84 0x18a982984 0x102c96d64 0x1807d9db4)
libc++abi.dylib: terminating with uncaught exception of type NSException

If the line "input.imageConstraint = nil" is commented out, style transfer runs (with a fixed image size).
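For context, the prediction path that works at the fixed size looks roughly like the sketch below. This is only a minimal illustration: "outputImage" is a hypothetical output feature name, and the pixel buffer is assumed to already match the model's fixed input size.

import CoreML
import CoreVideo

// Minimal sketch of the fixed-size prediction path. "input" matches the
// feature name used above; "outputImage" is a hypothetical output name.
func stylize(_ inputBuffer: CVPixelBuffer, with model: MLModel) throws -> CVPixelBuffer? {
    let provider = try MLDictionaryFeatureProvider(
        dictionary: ["input": MLFeatureValue(pixelBuffer: inputBuffer)])
    let result = try model.prediction(from: provider)
    return result.featureValue(for: "outputImage")?.imageBufferValue
}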

Getting imageConstraint is fine, but setting it will crash.
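For reference, the read path works as expected. A small continuation of the code above (assuming the same model instance and the "input" feature name):

if let constraint = model.modelDescription.inputDescriptionsByName["input"]?.imageConstraint {
    // Read-only access to the fixed input size and pixel format.
    NSLog("expects %ld x %ld, pixel format %u",
          constraint.pixelsWide, constraint.pixelsHigh, constraint.pixelFormatType)
}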


I can share my mlmodel if needed. Is there anything wrong with this piece of code, or is there a bug in CoreML?


Many thanks to anyone giving advice!

Accepted Reply

Input and output feature sizes and shapes are fixed. Dynamic sizes are not currently supported.


The MLImageConstraint, MLMultiArrayConstraint and MLDictionaryConstraint are readonly. There is a bug in the SDK in which these properties are mistakenly marked as readwrite. This will be fixed in an upcoming SDK update (i.e. they will be correctly annotated as readonly).

Replies

I guess it's a bug that it's marked settable externally. As far as I can see, models need to have a fixed input and output size defined. Dynamic sizes are not supported in the current CoreML version. The sizes of all resources (input, output, and intermediate resources) are determined at compile time, so it should not be possible to change them just by changing a property on the model object.


I'm also doing style transfer and I'd also love to see support for dynamic input and output sizes. Until then I use resizing and cropping to work around that limitation.
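For anyone wanting to do the same, here is a minimal sketch of one common way to handle the resizing (not necessarily the exact approach used above): render the source CGImage into a CVPixelBuffer whose dimensions are taken from the model's imageConstraint (pixelsWide x pixelsHigh).

import CoreGraphics
import CoreVideo

// Render a CGImage into a CVPixelBuffer of the model's fixed input size.
// Width/height would come from imageConstraint.pixelsWide / .pixelsHigh.
func pixelBuffer(from image: CGImage, width: Int, height: Int) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32ARGB,
                              attrs as CFDictionary, &buffer) == kCVReturnSuccess,
          let pb = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pb, [])
    defer { CVPixelBufferUnlockBaseAddress(pb, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pb),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pb),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }

    // Drawing into the full target rect scales the image to the model's size.
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pb
}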


Thanks.

Actually, I have found an ad-hoc (though unofficial) way to support dynamic input and output sizes. I hope you will find it useful.


  1. Make a style-transfer mlmodel with, say, a 512x512 input size, then make an identical one except that it is 511x511.
  2. Xcode will generate an mlmodelc for each mlmodel; find them in the app bundle and copy them out for later analysis.
  3. The 512 and 511 sizes are stored in a file named "coremldata.bin". Binary-diff the two files and you will find the difference.
  4. Copy and modify the coremldata.bin at runtime (see the sketch after this list).
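A rough sketch of step 4 only: loading a patched copy of the compiled model. The actual byte offsets and values inside coremldata.bin are omitted here, just as in the steps above; patchedBytes is a hypothetical placeholder for the modified contents produced by the binary diff in step 3.

import CoreML
import Foundation

// Copy the compiled .mlmodelc somewhere writable, overwrite its
// coremldata.bin with patched bytes, and load the patched copy.
func loadPatchedModel(original mlmodelcURL: URL, patchedBytes: Data) throws -> MLModel {
    let fm = FileManager.default
    let copyURL = fm.temporaryDirectory.appendingPathComponent("patched.mlmodelc")
    if fm.fileExists(atPath: copyURL.path) {
        try fm.removeItem(at: copyURL)
    }
    try fm.copyItem(at: mlmodelcURL, to: copyURL)

    // Overwrite coremldata.bin in the writable copy with the modified bytes.
    try patchedBytes.write(to: copyURL.appendingPathComponent("coremldata.bin"))

    // Load the patched compiled model as usual.
    return try MLModel(contentsOf: copyURL)
}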


I have skipped most of the details, and I cannot share source code here because the code I wrote belongs to my company. You may contact me by email at chiu6700@gmail.com for further details.


Although it looks complicated, it only took 2.5 days to implement.