Hi! Thank you for the Core ML Performance Report, it's a great tool! Is there a way to get peak memory footprint in addition to runtime? Thanks!
Hi, thanks for your feedback - glad to hear you like the feature. Currently we don't provide this information, but I'd love to hear more about what sort of memory information would be useful for your scenario here. Thanks.
Some models have considerable memory overhead when running inference on an iOS device, and we've seen crashes caused by the process running out of memory. It would be nice to see the total memory overhead incurred by running inference. A breakdown of memory consumption per compute unit or even per layer would be nice, but I'm guessing that isn't even well defined, so an overall "this is the amount of memory pressure using this model in your app is expected to add" figure would be great.