Hi all,
I've been digging into Instruments lately to investigate some potential performance wins for my app. I'm really enjoying the level of detail in the System Trace template, and the os_signpost integration is perfect for getting a high-level view of where to focus attention - great job!
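For reference, here's roughly the kind of signpost instrumentation I've added around the per-frame work (the subsystem/category/interval names here are just placeholders, not my actual ones):

```swift
import os.signpost

// Placeholder subsystem/category - my real names differ.
let renderLog = OSLog(subsystem: "com.example.myapp", category: "Rendering")

func renderFrame() {
    // Each begin/end pair shows up as an interval in the os_signpost track.
    let spid = OSSignpostID(log: renderLog)
    os_signpost(.begin, log: renderLog, name: "Frame", signpostID: spid)
    defer { os_signpost(.end, log: renderLog, name: "Frame", signpostID: spid) }

    // ... per-frame work ...
}
```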
I'm using the latest Xcode Version 11.4.1 (11E503a) on Catalina, and an iPod Touch 7th Gen on the latest iOS - 13.4 (17E255).
I'm measuring the app in a steady state where the workload is fairly constant and low (it should have no trouble hitting 60 FPS), but Xcode's CPU gauge is quite jumpy, and the app drops a frame every couple of seconds - which seems to coincide with a dip in the CPU usage Xcode reports.
My suspicion is that the device thinks it can maintain the required frame rate while saving energy by reducing the CPU frequency. That assumption doesn't quite hold for my app: the main thread sleeps for a significant portion of each frame, but it is waiting on the run loop for a result before the frame can be completed. I can see why the OS might decide downclocking is safe here, and then push the frequency back up once it notices a dropped frame.
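To make the pattern concrete, it's roughly this (type and function names are illustrative, not my actual code):

```swift
import Foundation

// Illustrative names only (RenderData, finishFrame) - not my actual code.
struct RenderData { /* result produced by a worker thread */ }

var pendingResult: RenderData?

func finishFrame(with result: RenderData) {
    // Commit the frame using the worker's result.
}

// The main thread looks idle to the scheduler here, but the frame can't
// complete until the worker's result arrives via the run loop.
func awaitResultAndFinishFrame(deadline: Date) {
    while pendingResult == nil && Date() < deadline {
        _ = RunLoop.current.run(mode: .default, before: Date(timeIntervalSinceNow: 0.001))
    }
    if let result = pendingResult {
        finishFrame(with: result)
        pendingResult = nil
    }
}
```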
I did, however, once manage to get the device into a state where it was (I assume) running at full speed - the CPU meter in Xcode stayed rock steady at around 15% (with "other processes" at a similar level). I haven't been able to figure out exactly how I did that: Instruments was running a windowed tracing session, and I switched away from the app and back again, but that trick doesn't seem to work every time.
So the Instruments-related questions arising from this:
1) Is there a way to record and visualize CPU frequency scaling in Instruments? If the CPU frequency is changing and unknown, the timings coming out of Instruments aren't really telling the whole story.
2) Is there an officially supported method to prevent CPU frequency scaling whilst Instruments is recording data?
Thanks for any help!
Simon