When building the MLModel, it is configured to use the NPU. During inference it appears that the GPU is used instead, and it crashes during Compile.
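For context, a minimal sketch of the kind of setup being described, assuming this is Core ML on an Apple device; the model path and compute-unit choice are assumptions, not taken from the report:

```swift
import CoreML

// Configuration that allows the Neural Engine (NPU); Core ML may still
// fall back to the GPU or CPU at runtime depending on the model's layers.
let config = MLModelConfiguration()
config.computeUnits = .all

do {
    // Compiling the .mlmodel on device -- the step where the crash is reported.
    // "MyModel.mlmodel" is a placeholder path.
    let compiledURL = try MLModel.compileModel(at: URL(fileURLWithPath: "MyModel.mlmodel"))
    let model = try MLModel(contentsOf: compiledURL, configuration: config)
    print("Model loaded: \(model.modelDescription)")
} catch {
    print("Compile/load failed: \(error)")
}
```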
The stack trace is as follows:
+1