Referring to Apple Q&A QA1398, the sample code initializes the mach_timebase_info once, and then assumes that the ticks-to-time conversion fraction remains constant for the rest of the program's execution.
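For reference, the pattern I'm asking about looks roughly like this (a minimal sketch of the QA1398 approach, not the exact listing):

```c
#include <stdint.h>
#include <stdio.h>
#include <mach/mach_time.h>

// Convert mach_absolute_time() ticks to nanoseconds, caching the
// timebase on first use -- the assumption in question is that this
// numer/denom fraction never changes for the life of the process.
static uint64_t ticks_to_nanoseconds(uint64_t ticks)
{
    static mach_timebase_info_data_t timebase = {0, 0};

    if (timebase.denom == 0) {
        // Queried once, then reused for every subsequent conversion.
        (void)mach_timebase_info(&timebase);
    }

    return ticks * timebase.numer / timebase.denom;
}

int main(void)
{
    uint64_t start = mach_absolute_time();
    // ... work being timed ...
    uint64_t end = mach_absolute_time();

    printf("elapsed: %llu ns\n",
           (unsigned long long)ticks_to_nanoseconds(end - start));
    return 0;
}
```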
Is this safe? Could this fraction change depending on which underlying hardware timing mechanism is in use?

Take the A10 with ARM big.LITTLE as an example: would the tick-to-time fraction differ when code runs on the low-energy cores versus the high-performance cores?