How to detect if prediction can run on GPU?

I am using CoreML Prediction on a model for style transfer.


My model runs perfectly fine on an iPhone 7+, using a total of around 60 MB of RAM. It crashes on some other devices that don't support running on the GPU, running out of memory while trying to allocate 957 MB of RAM for the same model.


What is also surprising to me is that it is apparently running on the CPU on the iPhone 7, but on the GPU on the iPhone 7+. So it crashes on the 7, but works splendidly on the 7+. They both use the A10, so why does it work on one but not the other?


How can I check at runtime if CoreML Prediction will run on the GPU? Which chipsets support Prediction on the GPU and which don't (I can't find this documented anywhere)? Is there any way to use less RAM with the same model on the CPU (e.g. running one channel at a time, maybe)?
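For reference, the only runtime signal I can think of is whether the device exposes a Metal GPU at all. Core ML doesn't tell you which compute unit it actually picked, so this is just a coarse proxy, and since both the 7 and the 7+ have the same A10 GPU it wouldn't explain the difference I'm seeing. A sketch (the helper name is mine):

```swift
import Metal

// Coarse proxy only: Core ML does not expose which compute unit it will use.
// If there is no Metal device at all, the GPU path is certainly unavailable;
// the reverse is not guaranteed.
func deviceExposesMetalGPU() -> Bool {
    return MTLCreateSystemDefaultDevice() != nil
}
```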

Replies

>detect if prediction can run on GPU?


We're told that Core ML will decide for itself whether to run the model on the CPU or the GPU. Are you sure you want to meddle with that process?
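If you do decide to constrain it, the only knob I'm aware of in the original Core ML API is MLPredictionOptions.usesCPUOnly, which forces the CPU path; there is no flag to force the GPU. A minimal sketch, assuming a loaded MLModel and a matching MLFeatureProvider (both names are placeholders):

```swift
import CoreML

// Sketch: force a prediction onto the CPU via MLPredictionOptions.
// `model` and `inputFeatures` are hypothetical, standing in for your own
// compiled model and its input features.
func predictOnCPU(model: MLModel, inputFeatures: MLFeatureProvider) throws -> MLFeatureProvider {
    let options = MLPredictionOptions()
    // Restricts Core ML to the CPU path; leaving this false lets Core ML
    // pick CPU or GPU on its own.
    options.usesCPUOnly = true
    return try model.prediction(from: inputFeatures, options: options)
}
```

Note that forcing the CPU won't help the memory problem you describe, since the 957 MB allocation is already happening on the CPU path.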