Core Motion (integrals for acceleration and angular velocity)

Synopsis: Core Motion reports both "raw" sensor data (such as angular velocities) and "processed" motion data (such as the attitude of the device).
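
For concreteness, here is that split in API terms, as a minimal sketch (the 100 Hz update interval is an arbitrary choice): CMMotionManager hands you the raw rotation rate, while CMDeviceMotion carries the fused attitude and gravity.

```swift
import CoreMotion

let manager = CMMotionManager()
manager.gyroUpdateInterval = 1.0 / 100.0           // 100 Hz, arbitrary
manager.deviceMotionUpdateInterval = 1.0 / 100.0

// "Raw": angular velocity straight off the gyroscope, in rad/s, bias and all.
if manager.isGyroAvailable {
    manager.startGyroUpdates(to: .main) { data, _ in
        guard let rate = data?.rotationRate else { return }
        print("raw ω: \(rate.x), \(rate.y), \(rate.z)")
    }
}

// "Processed": fused attitude, gravity, and user acceleration.
if manager.isDeviceMotionAvailable {
    manager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let m = motion else { return }
        print("attitude: \(m.attitude.roll), \(m.attitude.pitch), \(m.attitude.yaw)")
        print("gravity:  \(m.gravity.x), \(m.gravity.y), \(m.gravity.z)")
    }
}
```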


  1. How much processing takes place in the dedicated M-series motion coprocessor?
  2. Are the "processed" results of higher quality than what a CPU could calculate by, for instance, numerically integrating the rate series?


(Accelerometry might be a better example, if it really does take multiple raw data streams into account to filter confounders and arrive at user activity.)
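
For what it's worth, the public face of that pipeline is CMMotionActivityManager, and it only ever emits the fused classification, never the contributing streams. A minimal sketch:

```swift
import CoreMotion

// The activity pipeline exposes only its fused output: a classification
// plus a confidence level. The raw streams it consumed are not surfaced.
let activityManager = CMMotionActivityManager()
if CMMotionActivityManager.isActivityAvailable() {
    activityManager.startActivityUpdates(to: .main) { activity in
        guard let a = activity else { return }
        print("walking: \(a.walking), automotive: \(a.automotive), " +
              "stationary: \(a.stationary), confidence: \(a.confidence.rawValue)")
    }
}
```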

Naïvely, the "processed" accelerometer and gyroscope data from Core Motion could be calculated by hand from the "raw" sensor data. But that raw data is gathered by handheld, heavily multitasked sensors, under a combinatorial explosion of hardware, software, physical, and processor-bandwidth conditions, all of which would have to be reported for transparency in a published study.
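
To make "by hand" concrete: the naive version of the attitude calculation would be straight Euler integration of the rate series, something like the sketch below (the class and its names are mine, purely for illustration).

```swift
import CoreMotion
import Foundation

// Naive Euler integration of raw rotation rates into an angle estimate:
// roughly the "by hand" calculation at issue. Gyro bias and timestamp
// jitter accumulate into unbounded drift, with no accelerometer or
// magnetometer correction applied.
final class NaiveIntegrator {
    private var lastTimestamp: TimeInterval?
    private(set) var roll = 0.0, pitch = 0.0, yaw = 0.0   // radians

    func ingest(_ sample: CMGyroData) {
        defer { lastTimestamp = sample.timestamp }
        guard let last = lastTimestamp else { return }
        let dt = sample.timestamp - last
        // Treating body rates as Euler-angle rates is itself only an
        // approximation; a careful version would integrate a quaternion.
        roll  += sample.rotationRate.x * dt
        pitch += sample.rotationRate.y * dt
        yaw   += sample.rotationRate.z * dt
    }
}
```

Every confounder above (sample drops under load, gyro bias, timestamp jitter) lands directly in the accumulated angles, which is the heart of the quality question.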


Am I right that such hand calculations would be of significantly lower quality than the "processed" equivalents produced by an independent, high-performance motion coprocessor designed to, like, process motion, filtering those factors into a commensurable data series?


In other words,


  1. Does an M-series motion coprocessor have access to a broader range of data than would ever get through the Core Motion funnel to the CPU?
  2. Does the dedicated hardware of the M chip itself do the integrals (and the corrections for those dependencies, and the feeding of the angular-velocity integral into pattern recognition for the gravity vector)? A toy version of that fusion is sketched after this list.
  3. Or does it delegate the math to the CPU (whenever it wakes up), so that application code or offline stats packages could do just as well?
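
On question 2: I assume the correction step is some flavor of sensor fusion, and a textbook complementary filter is the toy version. Whatever Apple's pipeline actually does is undocumented, and alpha below is an illustrative constant, not a known system value.

```swift
import CoreMotion
import Foundation

// Toy complementary filter: integrate the gyro for responsiveness, then
// pull the estimate toward the accelerometer's gravity direction to cancel
// drift. This only illustrates the kind of correction at stake; it is not
// a claim about what the M-series coprocessor actually runs.
struct ComplementaryFilter {
    var roll = 0.0, pitch = 0.0      // radians
    let alpha = 0.98                 // gyro weight per step (illustrative)

    mutating func update(gyro: CMRotationRate, accel: CMAcceleration, dt: TimeInterval) {
        // Gravity direction as the accelerometer sees it (only valid while
        // the device is not otherwise accelerating, one of the confounders).
        let accelRoll  = atan2(accel.y, accel.z)
        let accelPitch = atan2(-accel.x, sqrt(accel.y * accel.y + accel.z * accel.z))
        // Blend: mostly the integrated gyro, nudged toward the accel fix.
        roll  = alpha * (roll  + gyro.x * dt) + (1 - alpha) * accelRoll
        pitch = alpha * (pitch + gyro.y * dt) + (1 - alpha) * accelPitch
    }
}
```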


These are hugely, comically loaded questions, but it's more than possible I'm totally wrong, which would make it extremely satisfying to tell me so. Have at me.