My fear was that while one queue's work item is still in progress the associated thread could be used somehow to run the other queue's work item.
Let me try:
let queues = (0 ..< 200).map { i in DispatchQueue(label: "queue-\(i)") } // serial by default; also tried attributes: .concurrent
queues.forEach { queue in
queue.async {
while true {
// doing something lengthy here
// Is Thread.current unique here?
// In other words, was a dedicated thread allocated for this queue, or could a single thread serve several queues?
}
}
}
In my tests so far a unique thread is allocated per queue.
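A smaller, checkable version of that experiment (all names here are mine, not from the post above): block a few serial queues at the same time and record which thread each one is running on. While they are all blocked simultaneously, each must be occupying a distinct thread.

```swift
import Dispatch
import Foundation

let queueCount = 4
let started = DispatchGroup()
let release = DispatchSemaphore(value: 0)
let lock = NSLock()
var threadIDs = Set<ObjectIdentifier>()

let queues = (0 ..< queueCount).map { DispatchQueue(label: "queue-\($0)") }
for queue in queues {
    started.enter()
    queue.async {
        lock.lock()
        threadIDs.insert(ObjectIdentifier(Thread.current))
        lock.unlock()
        started.leave()
        release.wait()   // stands in for the lengthy work in the original
    }
}
started.wait()                                   // all four items are now blocked
(0 ..< queueCount).forEach { _ in release.signal() }
print("distinct threads:", threadIDs.count)      // prints "distinct threads: 4"
```

Note this demonstrates concurrent occupancy (while blocked, each queue pins its own thread), not that a thread stays permanently dedicated to a queue.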
I see. Is there a path that doesn't involve privilege escalation?
I am thinking of using task_threads + thread_info (with THREAD_BASIC_INFO to get cpu_usage), somehow determining which processor/core the thread is currently assigned to (†), and then accumulating CPU usages per processor/core. This will only account for threads in the current app, but for my purposes that would be enough.
(†) - pthread_cpu_number_np doesn't return sensible results for me.
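A sketch of the first half of that approach, assuming Darwin (the function name is mine; this only gets per-thread usage and leaves the per-core mapping, the (†) problem, open):

```swift
#if canImport(Darwin)
import Darwin

// Enumerate the current task's threads and read each one's cpu_usage
// via THREAD_BASIC_INFO. Returns one fraction in 0...1 per thread.
func perThreadCPUUsage() -> [Double] {
    var threads: thread_act_array_t?
    var count: mach_msg_type_number_t = 0
    guard task_threads(mach_task_self_, &threads, &count) == KERN_SUCCESS,
          let threads else { return [] }
    defer {
        // task_threads hands back port rights and a VM-allocated array;
        // both must be released by the caller.
        for i in 0 ..< Int(count) { mach_port_deallocate(mach_task_self_, threads[i]) }
        vm_deallocate(mach_task_self_,
                      vm_address_t(UInt(bitPattern: threads)),
                      vm_size_t(count) * vm_size_t(MemoryLayout<thread_t>.stride))
    }
    var usages: [Double] = []
    for i in 0 ..< Int(count) {
        var info = thread_basic_info()
        var infoCount = mach_msg_type_number_t(
            MemoryLayout<thread_basic_info_data_t>.size / MemoryLayout<natural_t>.size)
        let kr = withUnsafeMutablePointer(to: &info) { ptr in
            ptr.withMemoryRebound(to: integer_t.self, capacity: Int(infoCount)) {
                thread_info(threads[i], thread_flavor_t(THREAD_BASIC_INFO), $0, &infoCount)
            }
        }
        if kr == KERN_SUCCESS {
            // cpu_usage is scaled by TH_USAGE_SCALE (1000 == 100 %).
            usages.append(Double(info.cpu_usage) / Double(TH_USAGE_SCALE))
        }
    }
    return usages
}
#endif
```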
What’s your high-level goal here? What do you intend to do with these processor control ports when you get them?
I want to get the CPU usage per processor / core, similar to Activity Monitor's "CPU Usage" window.
Basically I wanted to have the (short) thread numbers compatible with those shown by Xcode's CPU panel.
I can get the thread number from NSThread already, but not from a pthread_t.
What was the best approach at the end of the day?
(for macOS / iOS + appstore)
Anyone knows?
This is not safe
I see, thank you.
I was thinking of setting up a signal handler that does Thread.current and assigns the result somewhere (e.g. to a global variable), and then calling pthread_kill(...) to invoke that signal handler on the thread in question. Do you think this won't work? All the mentioned APIs, including signal and pthread_kill, seem to be supported.
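A sketch of that idea, with one substitution: the handler stores pthread_self() rather than Thread.current, since calling into the Thread machinery from a signal handler is not async-signal-safe. The unsynchronized globals are good enough for a demo, not for production.

```swift
import Foundation

var handlerRanOn: pthread_t?
var workerID: pthread_t?
var stop = false

signal(SIGUSR1) { _ in
    handlerRanOn = pthread_self()   // runs on whichever thread received the signal
}

let worker = Thread {
    workerID = pthread_self()
    while !stop { usleep(1_000) }   // stands in for a busy thread
}
worker.start()

while workerID == nil { usleep(1_000) }       // wait for the worker to start
pthread_kill(workerID!, SIGUSR1)              // deliver SIGUSR1 to that specific thread
while handlerRanOn == nil { usleep(1_000) }   // wait for the handler to run
stop = true

// The handler ran on exactly the thread that was targeted:
print(pthread_equal(handlerRanOn!, workerID!) != 0)  // prints "true"
```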
This works reasonably well.
However, is there a better approach? Two different devices with the same txPowerLevel could have different RSSI-at-1-meter values (besides, txPowerLevel is optional).
It's probably related to the "Just Works" pairing method, which DOESN'T happen in the cases where I am seeing the alert.
Got a heavyweight workaround idea sketch: split the app in two, with the second being a background-only process. Specify the CBConnectPeripheralOptionNotifyOnConnectionKey / CBConnectPeripheralOptionNotifyOnDisconnectionKey / CBConnectPeripheralOptionNotifyOnNotificationKey keys as false and make the "connect" call from there. Then somehow (XPC?) communicate the data back to the main app.
There must be an easier way!
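The background-process side of that sketch would look roughly like this (the three option keys are real CoreBluetooth constants; `connectQuietly` is a hypothetical helper, and whether this actually avoids the alert for non-"Just Works" pairing is exactly the open question):

```swift
#if canImport(CoreBluetooth)
import CoreBluetooth

// Connect with all system connection/disconnection/notification
// alerts for this background app explicitly disabled.
func connectQuietly(_ central: CBCentralManager, to peripheral: CBPeripheral) {
    central.connect(peripheral, options: [
        CBConnectPeripheralOptionNotifyOnConnectionKey: false,
        CBConnectPeripheralOptionNotifyOnDisconnectionKey: false,
        CBConnectPeripheralOptionNotifyOnNotificationKey: false,
    ])
}
#endif
```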
No, that doesn't work: the side panel slides and partially obscures the main panel.
For the record: the newer NavigationSplitView + navigationSplitViewStyle(.balanced) does work correctly on my plus sized device.
No. I mean, it’ll probably work, but it’s relying on implementation details that could change.
In a way it feels even safer to fork in a pure Swift app's main.swift file, so long as this is done as the very first thing.
In a C++ app, globals' constructors could create threads, create mutexes, call malloc and whatnot before (and while) the app hits main().
I encourage you to run a performance test.
Indeed, fork is not faster than posix_spawn; each took about 120 µs in a simple app.
(I forked/spawned 1000 processes for accuracy.)
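A rough harness along the lines of that measurement (the names and the /usr/bin/true path are my choices, not from the post; absolute numbers will vary by machine):

```swift
import Dispatch
import Foundation

// Average the cost of one child per iteration, in microseconds.
func averageMicroseconds(_ n: Int, _ body: () -> Void) -> Double {
    let start = DispatchTime.now().uptimeNanoseconds
    for _ in 0 ..< n { body() }
    return Double(DispatchTime.now().uptimeNanoseconds - start) / Double(n) / 1_000
}

let iterations = 50

let forkUS = averageMicroseconds(iterations) {
    let pid = fork()
    if pid == 0 { _exit(0) }          // child: leave immediately, skipping atexit
    var status: Int32 = 0
    waitpid(pid, &status, 0)
}

let spawnUS = averageMicroseconds(iterations) {
    var pid: pid_t = 0
    var argv: [UnsafeMutablePointer<CChar>?] = [strdup("/usr/bin/true"), nil]
    var envp: [UnsafeMutablePointer<CChar>?] = [nil]
    posix_spawn(&pid, "/usr/bin/true", nil, nil, &argv, &envp)
    var status: Int32 = 0
    waitpid(pid, &status, 0)
    free(argv[0])
}

print("fork: \(forkUS) µs/child, posix_spawn: \(spawnUS) µs/child")
```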
This is the current formula I'm using:
let n = 2.0
let distance = pow(10, Double(txPowerLevel - 12 - 62 - rssi) / (10 * n))
Here 12 was the txPowerLevel reported by one of my test devices, and -62 was the approximate averaged RSSI at a 1 meter distance from that device.
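The same formula wrapped in a function, with the calibration made explicit (the function name and default parameter are mine; 12 dBm and -62 dBm are the calibration constants from the post, and n = 2 is the free-space path-loss exponent):

```swift
import Foundation

// Estimate distance in meters from RSSI, shifting the -62 dBm @ 1 m
// calibration by the difference between this device's txPowerLevel
// and the calibration device's txPowerLevel of 12.
func estimatedDistance(txPowerLevel: Int, rssi: Int, n: Double = 2.0) -> Double {
    let measuredPowerAt1m = -62 + (txPowerLevel - 12)
    return pow(10, Double(measuredPowerAt1m - rssi) / (10 * n))
}

// Sanity checks: at the calibrated 1 m RSSI the estimate is 1 m,
// and a 20 dB drop with n = 2 corresponds to 10x the distance.
print(estimatedDistance(txPowerLevel: 12, rssi: -62))  // 1.0
print(estimatedDistance(txPowerLevel: 12, rssi: -82))  // 10.0
```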
Will give this a test in the field to see how well it goes.
Your previous post was about the former, but now you’ve pivoted to the latter. That’s fine, but I want to highlight the change.
Yep. I was under the impression that fork would be faster than "Process().run" or posix_spawn, and if you are going to run a copy of "itself" it sounded like the more "direct" approach.
Limit your language and API choice, as described above.
From what you've said here and in the linked thread, it sounds like even when my app is written in C and uses threads it won't be safe to use fork (without an immediate exec), for exactly the same reasons: some mutexes could stay locked and never be unlocked, and the malloc heap state could be partly corrupted at the moment of fork.
I noticed that at the very beginning of a Swift app (in main.swift) I don't see any threads created by the system or runtime. (Unless I enable App Sandbox – then I see one extra thread, which doesn't seem to do much.) Do you think that in this special case (at least without the sandbox, where that secondary thread is created) it's fine to use fork (without exec) in a Swift app, or are there still gotchas to look out for?
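The pattern being asked about would look roughly like this (a sketch, not an endorsement; whether it's actually supported is the open question):

```swift
// main.swift — fork before anything else has run, while the process is
// presumably still single-threaded, and let the child work without exec.
#if canImport(Darwin)
import Darwin
#else
import Glibc
#endif

let childPID = fork()
if childPID == 0 {
    // Child: keep the work minimal and leave via _exit so that no
    // inherited atexit/runtime teardown runs in the forked copy.
    _exit(0)
}
var status: Int32 = 0
let reaped = waitpid(childPID, &status, 0)
print(reaped == childPID && childPID > 0 ? "forked and reaped child" : "fork failed")
```

Even at the top of main.swift, code in linked libraries (static initializers, Objective-C +load methods) runs before main and may already have started threads, which is the same class of gotcha as the C++ global constructors mentioned earlier.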