Bug with iOS and real device? CoreML tools

Hi, I have a very simple Core ML script that works on the Simulator but not on a real device. I'm using iOS 11 Beta 5 for both Xcode and the phone.



validateTextureDimensions:765: failed assertion `MTLTextureDescriptor has arrayLength (29952) greater than the maximum allowed size of 2048.'


I have remade the project with as simple a neural network as possible, to rule out the possibility of it being anything else. The network is three fully connected dense layers followed by a softmax:


from keras.layers import Input, Dense

# mfcc_features, fc_size and num_classes are hyperparameters defined elsewhere
input_data = Input(name='the_input', shape=(None, mfcc_features))
x = Dense(fc_size, name='fc1', activation='relu')(input_data)
x = Dense(fc_size, name='fc2', activation='relu')(x)
x = Dense(fc_size, name='fc3', activation='relu')(x)
y_pred = Dense(num_classes, name='y_pred', activation='softmax')(x)
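
For reference, the conversion is just the standard coremltools Keras converter, roughly like this (the model file name here is a placeholder):

from keras.models import Model
import coremltools

# Wrap the layers above into a Model and convert it to Core ML
model = Model(inputs=input_data, outputs=y_pred)
coreml_model = coremltools.converters.keras.convert(
    model,
    input_names=['the_input'],
    output_names=['y_pred'])
coreml_model.save('SpeechModel.mlmodel')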


Any ideas? What differences are there between the Simulator and the device with regard to Core ML?


thanks

Replies

On the Simulator, Core ML uses Accelerate and runs on the CPU. On a real device it may use Metal, and Metal has certain limitations. In this case it sounds like the number of channels is too big, which can happen when the batch size, or possibly the sequence length, is larger than the GPU will accommodate.

Interesting. I am using a batch size of 1, with input dimensions of [1, 234, 26], so I fail to see how it could be that.

Sounds like your model may require a lot of memory when run on the GPU. You can run the model exclusively on the CPU by using the recently added usesCPUOnly option when making predictions: https://developer.apple.com/documentation/coreml/mlpredictionoptions
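
One sanity check on the numbers: 234 × 128 = 29952, so if your fc_size happens to be 128, the failing arrayLength is exactly sequence length × layer width, which would point at the sequence dimension being unrolled into texture array slices on the GPU.

You can also compare against the CPU path from Python on a Mac, since coremltools' predict accepts a similar useCPUOnly flag. A minimal sketch (the file name and input name assume the converter snippet above, and the dummy input shape may need adjusting to your converted model's input description):

import numpy as np
import coremltools

# Load the converted model (placeholder file name from the snippet above)
mlmodel = coremltools.models.MLModel('SpeechModel.mlmodel')

# Dummy MFCC input matching the [1, 234, 26] shape mentioned in this thread
x = np.zeros((1, 234, 26))

# useCPUOnly forces the same CPU path the Simulator uses
out = mlmodel.predict({'the_input': x}, useCPUOnly=True)
print(out['y_pred'])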