Custom Layers with Core ML Result in Zero Shape

Generally speaking, what is the state of custom layer support on iPhone? Can we insert a single custom layer in the middle of a network?
Using custom layers produces zero shapes during the init call of the mlmodel, which then raises an exception. I am using Xcode 10.1. Everything works fine on the XS Max simulator, but as soon as I try it on an actual XS Max, this exception occurs.
Steps to Reproduce: Add any custom layer to a network and run it on an XS Max with the environment above. I'm unsure whether this is specific to Xcode, the SDK, or the phone.
Expected Results: The init completes without an exception.
(When I remove the custom layers, only the first 3 lines of the log below appear.)

2018-12-06 11:51:50.379147-0800 MyApp[824:140882] [DYMTLInitPlatform] platform initialization successful

2018-12-06 11:51:50.548854-0800 MyApp[824:140808] Metal GPU Frame Capture Enabled

2018-12-06 11:51:50.549081-0800 MyApp[824:140808] Metal API Validation Enabled

2018-12-06 11:51:50.664385-0800 MyApp[824:140808] [espresso] [Espresso::handle_ex_plan] exception=Zero shape error

2018-12-06 11:51:50.665249-0800 MyApp[824:140808] [coreml] Error in adding network -1.

2018-12-06 11:51:50.665421-0800 MyApp[824:140808] [coreml] MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}

2018-12-06 11:51:50.665471-0800 MyApp[824:140808] [coreml] MLModelAsset: modelWithError: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.} (lldb)
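
For reference, the exception surfaces from the model load itself. A minimal sketch of the kind of load that triggers it (the model name here is a placeholder; the generated class initializer goes through the same path):

import CoreML

// Hypothetical reproduction sketch: "MyModel" is a placeholder name.
// The throwing load below is where "Error in declaring network" shows up.
guard let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") else {
    fatalError("Compiled model not found in bundle")
}

do {
    let model = try MLModel(contentsOf: url)
    print("Loaded:", model.modelDescription)
} catch {
    print("Model load failed:", error)
}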

Replies

When you have a custom layer, you need to implement outputShapes(forInputShapes:), the method that returns the output shape of the layer. How did you implement it?

This custom layer produces an output with the same shape as its input, so it's simply:


import CoreML

@objc(MyCustomLayer) class MyCustomLayer: NSObject, MLCustomLayer {

    required init(parameters: [String : Any]) throws {
        // do something
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws {}

    // Output has the same shape as the input.
    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes
    }

    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        // do something
    }
}
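
For completeness: the layer above only implements the CPU path. MLCustomLayer also has an optional Metal entry point, and without it Core ML evaluates this layer on the CPU even when the rest of the network runs on the GPU. A stub sketch (no real kernel is encoded here) of that optional method:

import CoreML
import Metal

// Sketch only: the optional GPU entry point of MLCustomLayer.
// If this method is absent, Core ML falls back to evaluate(inputs:outputs:)
// on the CPU for this layer.
extension MyCustomLayer {
    func encode(commandBuffer: MTLCommandBuffer,
                inputs: [MTLTexture],
                outputs: [MTLTexture]) throws {
        // A real implementation would encode a compute pass that writes
        // every output texture; this stub only shows the signature that
        // Core ML calls on the GPU path.
    }
}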

But what is inputShapes at that point?

func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
    print(#function, " MIN ", inputShapes)
    return inputShapes
}


On the actual XS Max, it prints:

outputShapes(forInputShapes:) MIN [[0, 0, 0, 0, 0]]

This is the output with the XS Max simulator:


outputShapes(forInputShapes:) MIN [[1, 1, 3, 768, 512]]

Note that on the simulator Core ML uses a different (CPU-only) runtime, while on the device it uses the GPU or the Neural Engine (ANE). It's possible there is a bug in Core ML that makes your model fail on the GPU / ANE path. It's worth filing a bug report about this.
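
One way to test that theory, as a sketch only (this assumes the MLModelConfiguration API from Core ML 2 and uses a placeholder model name), is to force CPU-only execution on the device and see whether the load then succeeds:

import CoreML

// Force the whole network onto the CPU, which matches what the
// simulator runtime does. "MyModel" is a placeholder name.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly

if let url = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") {
    do {
        let model = try MLModel(contentsOf: url, configuration: config)
        print("CPU-only load succeeded:", model.modelDescription)
    } catch {
        print("CPU-only load failed:", error)
    }
}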

Actually, I filed a bug report last December (reference: 46531417). There has been zero activity on it, so I decided to try the forums.