Posts

I'm still seeing this behaviour on an iPhone 12 Pro on iOS 16.0.2 and the 16.1 beta. It does seem to be somewhat device-specific, though, and doesn't appear on earlier devices (an iPod 7th Gen, for example).

One other finding of interest: I wrote a quick iOS app for the peripheral side of things and the issue still reproduced there (with the peripheral running on the iPod 7th Gen and the central on the iPhone 12 Pro). However, I also implemented an L2CAP channel, and updates sent over the channel did not seem to be disrupted in the same way. I've made both the peripheral and central test apps (very rough and ready) public on GitHub:
https://github.com/tangobravo/ios-bluetooth-central
https://github.com/tangobravo/ios-bluetooth-peripheral

Here's an Instruments screenshot showing when the updates are received and when they are sent, both via the L2CAP channel and via a characteristic update. Note the gap in the characteristic updates still seems to correlate with bluetoothd activity, judging by the CPU states data shown on the CPU tracks (for some reason the bluetoothd thread views are missing that data now, unfortunately).

So that at least gives hope for a workaround - I'm now investigating implementing the L2CAP approach on my actual peripheral.
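In case it's useful, here's a minimal sketch of the central-side L2CAP setup (not taken from the repos above - the class and method names are just placeholders), assuming the peripheral is already connected and its PSM has already been read, e.g. from the standard L2CAP PSM characteristic:

```swift
import CoreBluetooth

// Sketch only - assumes `peripheral` is already connected and its PSM has been
// obtained elsewhere (e.g. by reading the CBUUIDL2CAPPSMCharacteristicString characteristic).
final class L2CAPCentral: NSObject, CBPeripheralDelegate, StreamDelegate {
    private var channel: CBL2CAPChannel?

    func openChannel(on peripheral: CBPeripheral, psm: CBL2CAPPSM) {
        peripheral.delegate = self
        peripheral.openL2CAPChannel(psm)
    }

    // CBPeripheralDelegate: called once the L2CAP channel is open.
    func peripheral(_ peripheral: CBPeripheral, didOpen channel: CBL2CAPChannel?, error: Error?) {
        guard let channel = channel, error == nil else { return }
        self.channel = channel                                  // keep a strong reference
        channel.inputStream.delegate = self
        channel.inputStream.schedule(in: .main, forMode: .default)
        channel.inputStream.open()
    }

    // StreamDelegate: updates arrive here as a plain byte stream.
    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        guard eventCode.contains(.hasBytesAvailable),
              let input = aStream as? InputStream else { return }
        var buffer = [UInt8](repeating: 0, count: 512)
        let count = input.read(&buffer, maxLength: buffer.count)
        if count > 0 {
            // In my tests, data arriving this way wasn't affected by the gaps
            // seen in characteristic update delivery.
        }
    }
}
```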
Here's an Instruments screenshot showing the correlation with activity in those bluetoothd threads, and a wider view that shows the 10-second pattern in that activity. I've also filed this via Feedback Assistant with the ID FB11469459.
I'm still trying to figure out the best route here. I should say the standard presentDrawable approach is usually described as "present as soon as possible", which also sounds like what I want, but in practice it seems to mean "present as soon as possible after all the other frames in the queue have been presented".

From my investigations so far it seems likely that CAMetalLayer has some internal frame-pacing logic, but I haven't seen it described anywhere in the docs or WWDC talks, and I'm struggling to work out what it's doing. For example, in https://developer.apple.com/videos/play/wwdc2019/606/ at 6:30, the focus is on the command encoder for a future frame blocking in nextDrawable for a full frame, and how offscreen draws could be dispatched ahead of that. But for me there's an unanswered frame-pacing question there too - the orange surface stays on the display for two frame periods, even though the following frame (shown in green in the Instruments trace) is fully complete well in advance of the swap interval where we'd expect it to be displayed. It's as if some component (likely CAMetalLayer) has decided that a future frame has missed some submission deadline and responds by delaying the presentation of the next one in the queue, even though it's ready to go.

With CAMetalLayer I might just end up triggering rendering of the following frame from the addPresentedHandler callback of the previous one, rather than using CADisplayLink / MTKView at all. That way I can hopefully keep maximumDrawableCount at 3 so nextDrawable should always be non-blocking, and guarantee presenting on the following VSYNC, without fighting opaque internal CAMetalLayer logic that decides I'm not submitting frames fast enough to keep the drawable queue full. I'd love to understand more about all this - any references greatly appreciated!
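For what it's worth, here's a rough sketch of that presented-handler idea - just an illustration under those assumptions (a CAMetalLayer configured with a device, and an encodeFrame placeholder standing in for the real rendering), not something I've validated yet:

```swift
import Foundation
import Metal
import QuartzCore

// Rough sketch only: `encodeFrame(into:onto:)` stands in for the app's real
// rendering, and error handling is omitted.
final class PresentedHandlerPacer {
    private let metalLayer: CAMetalLayer
    private let commandQueue: MTLCommandQueue

    init(layer: CAMetalLayer, device: MTLDevice) {
        metalLayer = layer
        metalLayer.device = device
        metalLayer.maximumDrawableCount = 3          // keep nextDrawable() non-blocking
        commandQueue = device.makeCommandQueue()!
    }

    func renderFrame() {
        guard let drawable = metalLayer.nextDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        encodeFrame(into: commandBuffer, onto: drawable.texture)

        // Kick off the next frame only once this one has actually reached the
        // display, instead of pacing with CADisplayLink / MTKView.
        drawable.addPresentedHandler { [weak self] _ in
            DispatchQueue.main.async { self?.renderFrame() }
        }

        commandBuffer.present(drawable)
        commandBuffer.commit()
    }

    private func encodeFrame(into commandBuffer: MTLCommandBuffer, onto texture: MTLTexture) {
        // App-specific encoding would go here.
    }
}
```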
And I've now tracked down the auto-rotation issue - it's another weird one... I noticed it was only happening when Xcode was debugging the app, not when it was launched manually. A simple non-Metal app worked fine, and I finally discovered that removing Renderer.m from the app target and commenting out the code that referenced it gave an app where rotation of the view controller worked as expected. Looking at the debugger output, I noticed the "Metal GPU Frame Capture Enabled" line only appeared when the Renderer.m code was linked in (even if it wasn't called), and that seemed to correlate with auto-rotation not working. That was enough to find this post: https://developer.apple.com/forums/thread/656241 which contains the solution - setting "GPU Frame Capture" (on the scheme's Run > Options page) to either "Metal" or "Disabled" fixes this bug and allows auto-rotation to work while the app is being debugged.
I've been pulling my hair out over this too. It turns out the main issue is that the example Metal project doesn't set a launch screen - and iOS, way back when, used the lack of launch screen images as a signal that an app wasn't compatible with the iPhone 5's taller aspect ratio... Just using New > File… > Launch Screen and setting it in the "General" tab of the target settings made the view full-screen for me. One other annoyance that I can't figure out is that the view doesn't rotate despite the orientations being set in the Info.plist. It might be to do with the sample still using a plain AppDelegate and not adopting the new UIScene stuff, but I'm sure I've seen rotation work as expected in other apps.
Here's the Xcode CPU usage graph for my app, showing the difference between the "jumpy" usage and the low steady state - the app's work is the same throughout (essentially an idle animation loop). The times where it drops to zero are when I double-tapped home to go into the app switcher (all the work done by the app is stopped on willResignActive), and the jumps back up are when I tapped the app to re-activate it.

NB: The image above shows up when I edit the post, but not in the main thread view - I've uploaded it here too: https://tango-bravo.net/ios-cpu-graph.png

You can see how much more stable the CPU usage is after the final (very short) trip to the switcher and back again. The overall pie chart suggested around 150% of the CPU was "free" during this time.

My current process is to repeat this app switching until I reach that state, and then record more detail in Instruments, so I can have more confidence that the timings are consistent and comparable. Something more repeatable would be nice, of course, and I suspect I've just stumbled onto some bug in the CPU frequency governor with this "app switching" trick.

I'm updating to iOS 13.4.1 now to see if that changes anything.
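For context, the pause/resume around those switcher trips is essentially this (a simplified sketch - the real app does more than a single CADisplayLink callback):

```swift
import UIKit

// Simplified sketch of the pause/resume behaviour described above.
final class IdleAnimationLoop: NSObject {
    private var displayLink: CADisplayLink?

    override init() {
        super.init()
        // Stop all work when heading to the app switcher, resume on re-activation.
        NotificationCenter.default.addObserver(self, selector: #selector(pause),
            name: UIApplication.willResignActiveNotification, object: nil)
        NotificationCenter.default.addObserver(self, selector: #selector(resume),
            name: UIApplication.didBecomeActiveNotification, object: nil)
    }

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(step))
        link.add(to: .main, forMode: .default)
        displayLink = link
    }

    @objc private func pause()  { displayLink?.isPaused = true }
    @objc private func resume() { displayLink?.isPaused = false }

    @objc private func step(_ link: CADisplayLink) {
        // Idle animation / rendering work goes here.
    }
}
```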