I have exported a quantized model with ct.convert using minimum_deployment_target=ct.target.iOS17. Can I run it without an iPhone?
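For reference, a minimal sketch of the kind of conversion call I mean (the model, input names, and shapes below are placeholders, not my actual quantized network):

```python
# Hypothetical sketch of the conversion described above; the model and shapes
# are placeholders, not my actual quantized network.
import torch
import coremltools as ct

class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x)

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(TinyNet().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    minimum_deployment_target=ct.target.iOS17,
)
mlmodel.save("model.mlpackage")

# Question: is it valid to call predict() like this on the Mac itself,
# or does the iOS17 deployment target require running on a physical iPhone?
out = mlmodel.predict({"x": example.numpy()})
```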
Hello, I am a machine learning engineer. I recently needed to run PyTorch's grid_sample operation on an iPhone, so I used coremltools to convert grid_sample to the MIL resample op, which is officially supported. But when running on the phone (Xcode connected to the device, running the official performance benchmark), the op falls back to the CPU instead of the GPU or ANE. I would like to ask why there is no efficient GPU implementation. A sketch of the conversion follows below.
I am hoping for around 2 ms, but it takes 8 ms on the CPU.
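In case it helps, here is roughly how the standalone grid_sample module is converted to an mlpackage (the shapes and the exact deployment target are illustrative assumptions, not my real values):

```python
# Hypothetical minimal reproduction of the conversion: a module containing
# only grid_sample, exported to an mlpackage. Shapes are illustrative.
import torch
import torch.nn.functional as F
import coremltools as ct

class GridSampleOnly(torch.nn.Module):
    def forward(self, x, grid):
        # grid_sample is converted to the MIL `resample` op by coremltools.
        return F.grid_sample(x, grid, mode="bilinear", align_corners=False)

x = torch.rand(1, 16, 64, 64)
grid = torch.rand(1, 64, 64, 2) * 2 - 1  # normalized sampling coords in [-1, 1]
traced = torch.jit.trace(GridSampleOnly().eval(), (x, grid))

mlmodel = ct.convert(
    traced,
    inputs=[
        ct.TensorType(name="x", shape=x.shape),
        ct.TensorType(name="grid", shape=grid.shape),
    ],
    # The exact minimum target needed for resample is an assumption here;
    # adjust if the converter complains.
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("grid_sample.mlpackage")
```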
Hello! I have converted a single grid_sample operation in PyTorch to an mlpackage using your coremltools and opened it in Xcode for benchmarking. The model contains only one op, called resample, and I ran it on my Mac (M1 Pro). I found that it runs only on the CPU, so the latency does not meet my requirements.
Can you support resample on the GPU, or can I implement it with Metal myself?
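For completeness, this is the kind of compute-unit preference I tried when loading the converted package on the Mac (a sketch; the file name is a placeholder). As far as I understand, the compute unit is only a request, and Core ML still falls back to the CPU when no GPU kernel exists for an op, which seems to be what happens with resample:

```python
import coremltools as ct

# Request GPU execution when loading the converted package; this is only a
# hint, and Core ML falls back to the CPU if no GPU kernel is available.
mlmodel = ct.models.MLModel(
    "grid_sample.mlpackage",  # placeholder path
    compute_units=ct.ComputeUnit.CPU_AND_GPU,
)
```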