Apologies if this is the incorrect venue to pose this question, but I couldn't find a clear answer in the available documentation or other parts of the forums. I'm wondering if someone can help clarify the behavior of a call to `os_signpost` (from Swift code) in a version of an iOS app released through the app store.
To be more concrete: in my current project we've recently added performance instrumentation via `os_signpost`, based largely on the information in WWDC 2018 session 405. As described in that video, we conditionally substitute the `.disabled` `OSLog` handle at runtime, based on a launch argument that's passed through when the app is run via Xcode. I implemented this mainly to prevent unnecessary work being done when the app is running 'in the wild'.
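For reference, the pattern I'm using looks roughly like this (the subsystem, category, and argument names here are placeholders of my own, not from the session):

```swift
import os.signpost

// Hypothetical launch argument name; set in the Xcode scheme's
// "Arguments Passed On Launch" when profiling locally.
let signpostLog: OSLog = {
    if ProcessInfo.processInfo.arguments.contains("-enable-signposts") {
        return OSLog(subsystem: "com.example.myapp", category: "PointsOfInterest")
    } else {
        // OSLog.disabled turns every os_signpost call into a cheap no-op.
        return .disabled
    }
}()
```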
However, after implementing this, I realized that I don't understand how signpost logging behaves if it is *not* disabled in a release build. So I'm wondering if someone can shed some light on the exact behavior when an app calls `os_signpost` with a custom, non-disabled `OSLog` handle in a production build distributed via the App Store.
I've traced what I can through the publicly available source code, and the signpost calls appear to bottom out in an invocation of `_swift_os_signpost_with_format(...)`. The documentation for `os_log` suggests that which invocations actually emit data is controlled by the `OSLogType` specified at the callsite. Since that parameter is absent from the signpost API, I'm wondering whether the same level of control is available for signposts as for vanilla `os_log`. Lastly, the `os_log` docs also describe a way to install configuration profiles that control the desired logging level of different subsystems; how does that system interact with an app running on an iOS device?
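To make the comparison concrete, here's a sketch of the distinction I'm asking about (the subsystem/category names and the "Fetch" interval are illustrative, not from my actual project):

```swift
import os
import os.signpost

let log = OSLog(subsystem: "com.example.myapp", category: "networking")

// With os_log, the OSLogType at the callsite (.debug, .info, .default, …)
// is what governs whether the message is actually persisted.
os_log(.debug, log: log, "Fetched %d records", 42)

// With os_signpost, the only type parameter is the OSSignpostType
// (.begin / .end / .event) — there's no OSLogType, so I don't see a
// way to apply the same per-callsite level control.
let spid = OSSignpostID(log: log)
os_signpost(.begin, log: log, name: "Fetch", signpostID: spid)
// ... work being measured ...
os_signpost(.end, log: log, name: "Fetch", signpostID: spid)
```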
Thanks for your time,