Cannot run CoreML model with Enumerated shape on ANE
I have a neural network that should run on my device with 3 different input shapes. When I convert it to an mlmodel or mlpackage file with a fixed input size, it runs on the ANE. But when I convert it with an EnumeratedShapes input, it runs only on the CPU. Why? I suspect the problematic layer is the slice (which is converted to SliceStatic in the flexible model), but I don't understand why, or whether there is any way to fix it so the enumerated model runs on the ANE.

Here is my code:

    import torch
    import coremltools as ct


    class TestModel(torch.nn.Module):
        def __init__(self):
            super(TestModel, self).__init__()
            self.dw1 = torch.nn.Conv2d(in_channels=641, out_channels=641, kernel_size=(5, 4), groups=641)
            self.pw1 = torch.nn.Conv2d(in_channels=641, out_channels=512, kernel_size=(1, 1))
            self.relu = torch.nn.ReLU()
            self.pw2 = torch.nn.Conv2d(in_channels=512, out_channels=641, kernel_size=(1, 1))
            self.dw2 = torch.nn.Conv2d(in_channels=641, out_channels=641, kernel_size=(5, 1), groups=641)
            self.pw3 = torch.nn.Conv2d(in_channels=641, out_channels=512, kernel_size=(1, 1))
            self.block1_dw = torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(5, 1), groups=512)
            self.block1_pw = torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=(1, 1))

        def forward(self, inputs):
            x = self.dw1(inputs)
            x = self.pw1(x)
            x = self.relu(x)
            x = self.pw2(x)
            x = self.dw2(x)
            x = self.pw3(x)
            x = self.relu(x)
            y = self.block1_dw(x)
            y = self.block1_pw(y)
            y = self.relu(y)
            # Slice x along the height axis so it matches y, then add.
            # This is the slice that becomes SliceStatic in the flexible-shape model.
            z = x[:, :, 4:, :] + y
            return z


    ex_input = torch.rand(1, 641, 44, 4)
    traced_model = torch.jit.trace(TestModel().eval(), [ex_input])

    # enum_shape is a ct.EnumeratedShapes covering the three supported input shapes
    # (see the sketch below).
    ct_enum_inputs = [ct.TensorType(name='inputs', shape=enum_shape)]
    ct_outputs = [ct.TensorType(name='out')]
    mlmodel_enum = ct.convert(traced_model,
                              inputs=ct_enum_inputs,
                              outputs=ct_outputs,
                              convert_to="neuralnetwork")
    mlmodel_enum.save(...)

Thanks.
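For reference, a minimal sketch of how enum_shape could be defined and how the converted model could be loaded restricted to the CPU and Neural Engine. The three shapes here (varying only the height dimension) and the file path are assumptions for illustration, since the post does not list the actual input sizes.

    import coremltools as ct

    # Assumed enumerated shapes: the model accepts three input sizes; only the
    # height dimension is varied here as an example. Replace with the real shapes.
    enum_shape = ct.EnumeratedShapes(
        shapes=[[1, 641, 44, 4], [1, 641, 54, 4], [1, 641, 64, 4]],
        default=[1, 641, 44, 4],
    )

    ct_enum_inputs = [ct.TensorType(name='inputs', shape=enum_shape)]

    # After conversion and saving, loading the model with compute units restricted
    # to CPU_AND_NE makes it easier to check (e.g. with an Xcode performance
    # report or Instruments) whether the ANE is actually being used.
    mlmodel_enum = ct.models.MLModel(
        "TestModelEnum.mlmodel",  # hypothetical path to the saved model
        compute_units=ct.ComputeUnit.CPU_AND_NE,
    )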