It would be nice if, when engineers write error messages, they considered that people might waste hours trying to solve what only looks like a problem.
I believe that when you run on the CPU, you also run on the ANE; when you run on the GPU, you get just the GPU. MLCompute's set_mlc_device can be set to 'cpu', 'gpu', or 'any'. 'any' seems to be the fastest, since it can use the CPU, GPU, and ANE together. However, I think this only works for inference.
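For reference, a minimal sketch of how this device selection looked in Apple's tensorflow_macos fork (the module path and keyword argument are from that fork's documented API; the call must run before any ops are created, and this will only work in an environment with that fork installed):

```python
# Assumes Apple's tensorflow_macos fork, which bundled the MLCompute backend.
from tensorflow.python.compiler.mlcompute import mlcompute

# Pick the compute device for the MLCompute backend:
#   'cpu' -> CPU (and, per the discussion above, possibly the ANE as well)
#   'gpu' -> GPU only
#   'any' -> let MLCompute choose among CPU, GPU, and ANE
mlcompute.set_mlc_device(device_name='any')
```

After this call, model execution is dispatched through MLCompute on the selected device; no other code changes are needed.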