Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
Dispatch
Execute code concurrently on multicore hardware by submitting work to dispatch queues managed by the system using Dispatch.
Posts under Dispatch tag
Let's say I queue some tasks on DispatchQueue.global() and then switch to another app or lock the screen for a while. The app is not terminated and stays in the background.
Is there a chance that tasks which were queued but have not yet started could be discarded, even though the app hasn't been terminated, after switching to another app or locking the screen for a while?
I'm calling the following function in a SwiftUI View modifier in Xcode 16.1:
nonisolated func f() -> CGFloat {
let semaphore = DispatchSemaphore(value: 0)
var a: CGFloat = 0
DispatchQueue.main.async {
a = ...
semaphore.signal()
}
semaphore.wait()
return a
}
The app freezes, and code in the main queue is never executed.
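A likely explanation: if f() is called on the main thread (as a SwiftUI view modifier typically is), semaphore.wait() blocks the main thread before the block dispatched to DispatchQueue.main can ever run, so the signal never arrives. A minimal sketch of one way to avoid blocking, assuming the value can be produced asynchronously; computeOnMain() is a hypothetical stand-in for the work the original code performs on the main queue:

import CoreGraphics

// A hedged sketch, not the original code: compute the value on the main actor
// and await it instead of blocking the caller with a semaphore.
@MainActor
func computeOnMain() -> CGFloat {
    return 0 // placeholder for the real computation
}

func f() async -> CGFloat {
    // Hop to the main actor and await the result; nothing blocks.
    await computeOnMain()
}

At the call site this becomes let value = await f() inside a task, rather than a synchronous call from the view modifier.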
We are getting a crash _dispatch_assert_queue_fail when the cancellationHandler on NSProgress is called.
We do not see this with iOS 17.x, only on iOS 18. We are building in Swift 6 language mode and do not have any compiler warnings.
We have a type whose init looks something like this:
init(
request: URLRequest,
destinationURL: URL,
session: URLSession
) {
progress = Progress()
progress.kind = .file
progress.fileOperationKind = .downloading
progress.fileURL = destinationURL
progress.pausingHandler = { [weak self] in
self?.setIsPaused(true)
}
progress.resumingHandler = { [weak self] in
self?.setIsPaused(false)
}
progress.cancellationHandler = { [weak self] in
self?.cancel()
}
When the progress is cancelled and the cancellation handler is invoked, we get the crash. The crash is not reproducible 100% of the time, but it happens significantly often, especially after cleaning and rebuilding and running our tests.
* thread #4, queue = 'com.apple.root.default-qos', stop reason = EXC_BREAKPOINT (code=1, subcode=0x18017b0e8)
* frame #0: 0x000000018017b0e8 libdispatch.dylib`_dispatch_assert_queue_fail + 116
frame #1: 0x000000018017b074 libdispatch.dylib`dispatch_assert_queue + 188
frame #2: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284
frame #3: 0x000000010b80bd84 MyTests`closure #3 in MyController.init() at MyController.swift:0
frame #4: 0x000000010b80bb04 MyTests`thunk for @escaping @callee_guaranteed @Sendable () -> () at <compiler-generated>:0
frame #5: 0x00000001810276b0 Foundation`__20-[NSProgress cancel]_block_invoke_3 + 28
frame #6: 0x00000001801774ec libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #7: 0x0000000180178de0 libdispatch.dylib`_dispatch_client_callout + 16
frame #8: 0x000000018018b7dc libdispatch.dylib`_dispatch_root_queue_drain + 1072
frame #9: 0x000000018018bf60 libdispatch.dylib`_dispatch_worker_thread2 + 232
frame #10: 0x00000001012a77d8 libsystem_pthread.dylib`_pthread_wqthread + 224
Any thoughts on why this is crashing and what we can do to work around it? I have not been able to extract our code into a simple reproducible case yet. I mostly see it when running our code in a testing environment (XCTest), although I have been able to reproduce it in an app a few times; it's just less common.
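One hypothesis, since the full type isn't shown: in Swift 6 language mode, if the enclosing type is main-actor isolated, the handler closures call isolated methods, but NSProgress invokes them on an arbitrary dispatch queue, which trips the runtime's executor assertion. A hedged sketch of hopping back to the actor inside the handler; Downloader and cancel() are illustrative names, not the poster's code:

import Foundation

// A hedged sketch, assuming the enclosing type is @MainActor-isolated: wrap the
// handler body in a Task so the isolated method runs on its executor instead of
// on whatever queue NSProgress uses to invoke the handler.
@MainActor
final class Downloader {
    let progress = Progress()

    init() {
        progress.cancellationHandler = { [weak self] in
            // NSProgress may call this on an arbitrary queue; hop back first.
            Task { @MainActor [weak self] in
                self?.cancel()
            }
        }
    }

    func cancel() {
        // Illustrative cancellation work would go here.
    }
}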
Hi there, I have some threading questions about Network framework completion callbacks. In short, how should I process cross-thread data in the completion callbacks?
Here are more details. I have a background serial dispatch queue (call it dispatch queue A) that sequentially processes the nw_connection and any network I/O events. Meanwhile, user inputs are handled by another serial dispatch queue (dispatch queue B). How should I handle the cross-thread user data in this case?
(Simplified sample code below.)
#include <dispatch/dispatch.h>
#include <Network/Network.h>
#include <stdlib.h>
#include <string.h>

// Status values (definitions elided in the original post).
enum { INIT, CONNECTED, SENT, DISCONNECTING };

// Created elsewhere in the poster's project (declarations assumed here).
extern nw_endpoint_t endpoint;
extern nw_parameters_t params;
extern void mem_release(void *ptr);

struct {
    int client_status;
    char *message_to_sent;
} user_data;

nw_connection_t nw_connection;
dispatch_queue_t dispatch_queue_A;

// dispatch_function_t signature so it can be used with dispatch_async_f.
static void send_message(void *context) {
    (void)context;
    dispatch_data_t data = dispatch_data_create(user_data.message_to_sent,
                                                strlen(user_data.message_to_sent),
                                                dispatch_queue_A,
                                                DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    nw_connection_send(nw_connection, data, NW_CONNECTION_DEFAULT_MESSAGE_CONTEXT, false,
                       ^(nw_error_t error) {
        user_data.client_status = SENT;
        mem_release(user_data.message_to_sent);
    });
}

static void setup_connection(void) {
    dispatch_queue_A = dispatch_queue_create("unique_id_a", DISPATCH_QUEUE_SERIAL);
    nw_connection = nw_connection_create(endpoint, params);
    // All connection callbacks are delivered on dispatch queue A.
    nw_connection_set_queue(nw_connection, dispatch_queue_A);
    nw_connection_set_state_changed_handler(nw_connection,
                                            ^(nw_connection_state_t state, nw_error_t error) {
        if (state == nw_connection_state_ready) {
            user_data.client_status = CONNECTED;
        }
        // ... other operations ...
    });
    nw_connection_start(nw_connection);
    nw_retain(nw_connection);
}

static void user_main(void) {
    setup_connection();
    user_data.client_status = INIT;
    dispatch_queue_t dispatch_queue_B = dispatch_queue_create("unique_id_b", DISPATCH_QUEUE_SERIAL);
    // Write path
    dispatch_async(dispatch_queue_B, ^{
        if (user_data.client_status != CONNECTED) return;
        user_data.message_to_sent = malloc(128 /* size elided in the original post */);
        // I would like all I/O events processed on dispatch queue A so that the
        // I/O events do not interact with the user events.
        dispatch_async_f(dispatch_queue_A, NULL, send_message);
    });
    // Disconnect block
    dispatch_async(dispatch_queue_B, ^{
        dispatch_async(dispatch_queue_A, ^{
            nw_connection_cancel(nw_connection);
        });
        user_data.client_status = DISCONNECTING;
    });
    // clean up connection and so on...
}
To be more specific, my questions would be:
Since I'm using serial dispatch queues, I didn't protect user_data here. However, on which thread (or queue) does the send completion handler get called? Could there be a data race where the disconnect block and the send completion handler both access user_data?
If I protect user_data with a lock, the lock might block a thread. How does a dispatch queue make sure it does NOT put a related execution block onto that blocked thread?
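For what it's worth, Network framework delivers its handlers on the queue passed to nw_connection_set_queue, so one common approach is to confine all access to the shared state to a single serial queue and always hop onto it before touching the data, rather than locking. A sketch of that confinement pattern, written with the Swift Dispatch API for brevity (the same shape applies to the C API; the names are illustrative):

import Dispatch

// A hedged sketch of confining shared state to one serial queue: every read or
// write of `clientStatus` goes through `stateQueue`, so no lock is needed and
// nothing ever blocks waiting for it.
final class ConnectionState {
    private let stateQueue = DispatchQueue(label: "com.example.connection-state")
    private var clientStatus = "init"

    // Can be called from any queue (e.g. a send completion handler).
    func markSent() {
        stateQueue.async { self.clientStatus = "sent" }
    }

    // Asynchronous read: the answer is delivered back on the caller's queue.
    func readStatus(on callbackQueue: DispatchQueue, _ completion: @escaping (String) -> Void) {
        stateQueue.async {
            let status = self.clientStatus
            callbackQueue.async { completion(status) }
        }
    }
}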
How and why does the DispatchGroup notify method get called before every enter() has been balanced by a leave()?
I tried adding the dispatchGroup.enter() call within the same loop, and the output is the same.
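DispatchGroup.notify fires as soon as the group's enter count drops to zero, so if the work hasn't actually been entered before it starts (or leave() runs early), the notification can fire before everything finishes. A minimal sketch of the usual pairing, with illustrative names:

import Dispatch

// A minimal sketch of the canonical pattern: one enter() per work item, issued
// before the item starts, and exactly one matching leave() when it finishes.
// notify() then runs only after every item has left.
let group = DispatchGroup()
let workQueue = DispatchQueue.global()

for i in 0..<5 {
    group.enter()                      // before kicking off the async work
    workQueue.async {
        defer { group.leave() }        // exactly once, even on early return
        print("item \(i) finished")
    }
}

group.notify(queue: .main) {
    print("all items finished")        // fires only after the last leave()
}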
Background info:
I dispatch an async task to the main queue in an es_handler_block_t (the client subscribes to open, create, exit, and close events and mutes all processes except DesktopServicesHelper). The crash happens somewhat randomly, and is most likely to happen when I copy a folder containing a lot of files from one volume to another.
Here's the crashed part of the diagnostic report.
Thread 9 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x18c6e2a60 __pthread_kill + 8
1 libsystem_pthread.dylib 0x18c71ac20 pthread_kill + 288
2 libsystem_c.dylib 0x18c627a20 abort + 180
3 libc++abi.dylib 0x18c6d1d30 abort_message + 132
4 libc++abi.dylib 0x18c6c1fe8 demangling_terminate_handler() + 348
5 libobjc.A.dylib 0x18c3601d0 _objc_terminate() + 144
6 libc++abi.dylib 0x18c6d10f4 std::__terminate(void (*)()) + 16
7 libc++abi.dylib 0x18c6d1098 std::terminate() + 108
8 libdispatch.dylib 0x18c56a3fc _dispatch_client_callout + 40
9 libdispatch.dylib 0x18c571a14 _dispatch_lane_serial_drain + 748
10 libdispatch.dylib 0x18c572578 _dispatch_lane_invoke + 432
11 libdispatch.dylib 0x18c57bea8 _dispatch_root_queue_drain + 392
12 libdispatch.dylib 0x18c57c6b8 _dispatch_worker_thread2 + 156
13 libsystem_pthread.dylib 0x18c716fd0 _pthread_wqthread + 228
14 libsystem_pthread.dylib 0x18c715d28 start_wqthread + 8
Thread 9 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x000000018c6d62cb x5: 0x000000016c1eed20 x6: 0x000000000000006e x7: 0x0000000000000000
x8: 0x851ef9fdee51098d x9: 0x851ef9fc824ff98d x10: 0x0000000000000200 x11: 0x000000000000000b
x12: 0x0000000000000000 x13: 0x00000000001ff800 x14: 0x00000000000007fb x15: 0x00000000a5a0204e
x16: 0x0000000000000148 x17: 0x00000001fe792c30 x18: 0x0000000000000000 x19: 0x0000000000000006
x20: 0x000000016c1ef000 x21: 0x0000000000004003 x22: 0x000000016c1ef0e0 x23: 0x000000016c1ef0e0
x24: 0x00000001f442b6a8 x25: 0x0000000000000000 x26: 0x0000000000000000 x27: 0x0000600003664800
x28: 0x0000000000000000 fp: 0x000000016c1eec90 lr: 0x000000018c71ac20
sp: 0x000000016c1eec70 pc: 0x000000018c6e2a60 cpsr: 0x40001000
far: 0x0000000000000000 esr: 0x56000080 Address size fault
BUG IN CLIENT OF LIBDISPATCH: Unexpected EV_VANISHED (do not destroy random mach ports or file descriptors)
Which, ok, clear: somehow a file descriptor is being closed before DispatchIO.close() is called, yes?
Only I can't figure out where it is being closed. I am currently using change_fdguard_np() to prevent closes anywhere else, and every single place where I call Darwin.close() is preceded by another call to change_fdguard_np() and then DispatchIO.close(). e.g.
self.unguardSocket()
self.readDispatcher?.close()
Darwin.close(self.socket)
self.socket = -1
self.completion(self)
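A hedged sketch of one pattern that tends to avoid this class of problem, assuming the cause is something closing the descriptor while Dispatch is still monitoring it: hand ownership of the descriptor to DispatchIO and close it only in the cleanup handler, which runs once the channel is finished with it (makeChannel is an illustrative name):

import Dispatch
import Darwin

// A hedged sketch: the channel owns the descriptor, and the cleanup handler is
// the only place it is ever closed.
func makeChannel(owning fd: Int32, queue: DispatchQueue) -> DispatchIO {
    return DispatchIO(type: .stream, fileDescriptor: fd, queue: queue) { error in
        if error != 0 {
            print("channel cleaned up with errno \(error)")
        }
        close(fd)   // the only close of this descriptor
    }
}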
Hi,
Greetings of the day!
I would like help avoiding an Endpoint Security system extension crash with the termination reason below:
Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline
We have subscribed to a couple of events. For AUTH-related events we receive a deadline of 14 seconds on Sonoma, and to avoid the issue above we have implemented a queue so that we can provide the verdict within the deadline and the OS does not kill our extension. However, we sometimes still see a crash with the message below:
Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline
**Dispatch Thread Soft Limit Reached: 64** (too many dispatch threads blocked in synchronous operations)
There is no GCD API to check whether a queue has hit the soft limit, so we need help here to know or check whether we are approaching the 64-thread soft limit.
If we could check that, we would avoid adding new tasks until the queue is free to accept them.
For NOTIFY_CLOSE, we get a large deadline value in seconds, and we do all NOTIFY_CLOSE processing with dispatch_async, yet we still receive the crash.
Here is the code for AUTH_OPEN:
dispatch_queue_t gNotifyCloseQueue = dispatch_queue_create(
    "com.example.notify_close_queue",
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL,
                                            QOS_CLASS_UTILITY, 0));
dispatch_queue_t gAuthOpenQueue = dispatch_queue_create(
    "com.example.auth_open_queue",
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL,
                                            QOS_CLASS_USER_INTERACTIVE, 0));
BOOL AuthOpenEventHandler(es_message_t *pesMsg)
{
//Some Processing we are doing here like Calculate the deadline in seconds etc. and we are receiving 14 seconds in Sonoma
// deadline - 14 seconds
if ( deadlineInSeconds < 10 )
{
dispatch_time_t triggerTime = dispatch_time(pesMsg->deadline, (int64_t)(-1 * NSEC_PER_SEC));
__block es_message_t *pesTempMsg;
pesTempMsg = es_copy_message(pesMsg);
dispatch_after(triggerTime, gAuthOpenQueue, ^{
if (pesTempMsg != NULL)
{
esRespondRes = es_respond_flags_result(pesClt,pesMsg,pesMsg->event.open.fflag,false);
if(ES_RESPOND_RESULT_SUCCESS != esRespondRes)
{
es_free_message(pesTempMsg);
return;
}
if (pesTempMsg != NULL) {
es_free_message(pesTempMsg);
}
}
return;
});
}
// Some Processing we are doing here to provide verdict and we are making sure that within 11 seconds we are setting the verdict
// we are setting iRetFlag here based on verdict
if (NULL != pesMsg)
{
esRespondRes = es_respond_flags_result(pesClt,pesMsg,iRetFlag,false);
if(ES_RESPOND_RESULT_SUCCESS != esRespondRes)
{
es_free_message(pesMsg);
return FALSE;
}
}
return TRUE;
}
Here is the code for NOTIFY_CLOSE:
BOOL NotifyEventHandler(es_message_t *pesMessage)
{
if (pesMessage->event_type == ES_EVENT_TYPE_NOTIFY_CLOSE && YES == pesMessage->event.close.modified)
{
__block es_message_t *pesTempMsg;
pesTempMsg = es_copy_message(pesMessage);
dispatch_async(gNotifyCloseQueue, ^{
// Performing Some processing on es_message_t
if (pesTempMsg != NULL)
{
es_free_message(pesTempMsg);
}
});
if (pesMessage != NULL)
{
es_free_message(pesMessage);
}
}
else
{
es_free_message(pesMessage);
}
return TRUE;
}
It would be helpful if someone could help us identify what we are doing wrong in the code above and how to address those problems (a code snippet would be helpful) so we can avoid all of these crashes.
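There is indeed no GCD API that reports how close a process is to the 64-thread soft limit; the usual approach is to bound concurrency so that blocked work can't pile up in the first place. A hedged sketch of that idea using an OperationQueue with a fixed width, written in Swift for brevity (the width of 4 and the names are illustrative, not a recommended value):

import Foundation

// A hedged sketch of bounding concurrency without blocking the submitter: an
// OperationQueue with a fixed maximum keeps the number of busy worker threads
// small, so blocked work can't climb toward the 64-thread soft limit.
let processingQueue: OperationQueue = {
    let q = OperationQueue()
    q.name = "com.example.notify-close-processing"
    q.maxConcurrentOperationCount = 4
    q.qualityOfService = .utility
    return q
}()

func enqueueProcessing(_ work: @escaping () -> Void) {
    processingQueue.addOperation(work)
}

The submitter never blocks; excess work simply waits in the operation queue rather than occupying additional Dispatch worker threads.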
...
Thanks & Regards,
Mohamed Vasim
I came across a useful repo on GitHub:
https://github.com/GianniCarlo/DirectoryWatcher/blob/master/Sources/DirectoryWatcher/DirectoryWatcher.swift
self.queue = DispatchQueue.global()
self.source = DispatchSource.makeFileSystemObjectSource(fileDescriptor: descriptor, eventMask: .write, queue: self.queue)
self.source?.setEventHandler {
[weak self] in
self?.directoryDidChange()
}
self.source?.setCancelHandler() {
close(descriptor)
}
self.source?.resume()
How do I translate this to Objective-C? I have an app that was written in Objective-C, and I plan to incorporate this directory watcher into the project.
I have
var idleScanTimer = DispatchSource.makeTimerSource()
as a class ivar. When the object is started, I have
self.idleScanTimer.schedule(deadline: .now(), repeating: Double(5.0*60))
(and it sets an event handler that checks some times.)
When the object is stopped, it calls self.idleScanTimer.cancel().
At some point, the object containing it is deallocated, and ... sometimes, I think, not always, it crashes:
Crashed Thread: 61 Dispatch queue: NEFlow queue
[...]
Application Specific Information:
BUG IN CLIENT OF LIBDISPATCH: Release of an inactive object
[...]
Thread 61 Crashed:: Dispatch queue: NEFlow queue
0 libdispatch.dylib 0x7ff81c1232cd _dispatch_queue_xref_dispose.cold.2 + 24
1 libdispatch.dylib 0x7ff81c0f84f6 _dispatch_queue_xref_dispose + 55
2 libdispatch.dylib 0x7ff81c0f2dec -[OS_dispatch_source _xref_dispose] + 17
3 com.kithrup.simpleprovider 0x101df5fa7 MyClass.deinit + 87
4 com.kithrup.simpleprovider 0x101dfbdbb MyClass.__deallocating_deinit + 11
5 libswiftCore.dylib 0x7ff829a63460 _swift_release_dealloc + 16
6 com.kithrup.simpleprovider 0x101e122f4 0x101de7000 + 176884
7 libswiftCore.dylib 0x7ff829a63460 _swift_release_dealloc + 16
8 libsystem_blocks.dylib 0x7ff81bfdc654 _Block_release + 130
9 libsystem_blocks.dylib 0x7ff81bfdc654 _Block_release + 130
10 libdispatch.dylib 0x7ff81c0f3317 _dispatch_client_callout + 8
11 libdispatch.dylib 0x7ff81c0f9317 _dispatch_lane_serial_drain + 672
12 libdispatch.dylib 0x7ff81c0f9dfd _dispatch_lane_invoke + 366
13 libdispatch.dylib 0x7ff81c103eee _dispatch_workloop_worker_thread + 753
14 libsystem_pthread.dylib 0x7ff81c2a7fd0 _pthread_wqthread + 326
15 libsystem_pthread.dylib 0x7ff81c2a6f57 start_wqthread + 15
I tried changing it to an optional and having the deinit call .cancel() and set it to nil, but it still crashes.
I can't figure out how to get it deallocated in a small, standalone test program.
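One hypothesis, given the "Release of an inactive object" message: a dispatch source that is released while it has never been activated (or while suspended) trips exactly this assertion. A minimal sketch of a timer lifecycle that keeps the source active before it can be released; IdleScanner is an illustrative name, not the poster's class:

import Dispatch

// A hedged sketch: make sure the source has been activated before it can be
// released, and cancel it while it is active.
final class IdleScanner {
    private let timer = DispatchSource.makeTimerSource()

    func start() {
        timer.setEventHandler { /* periodic idle check */ }
        timer.schedule(deadline: .now(), repeating: 5.0 * 60)
        timer.activate()          // calling activate() again later has no effect
    }

    func stop() {
        timer.setEventHandler {}  // drop any captured state
        timer.cancel()
    }

    deinit {
        // If start() was never called, the source would otherwise be released
        // in the inactive state; activating first avoids the libdispatch trap.
        timer.activate()
        timer.cancel()
    }
}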
Hi,
When using Swift Concurrency, blocking tasks like file I/O, GPU work, and networking can prevent forward progress and have the potential to exhaust the cooperative thread pool and under-utilize the CPU. It's been recommended to offload these tasks from the cooperative thread pool.
Is my understanding correct that the preferred way to do this is by creating async tasks via Dispatch or OperationQueue, and combining these with continuations if a return value from the task is required? Or should I always be using continuations in combination with Dispatch/OperationQueue?
There are also Executors, but the documentation seems a bit limited on how to use them. The new TaskExecutor is also only available on the latest betas.
My question is basically: what is the recommended way to offload a task?
Thanks!
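For reference, a hedged sketch of one commonly suggested shape (not the only option): run the blocking work on a Dispatch queue and bridge the result back into async code with a checked continuation. The queue label and loadFile(at:) are illustrative names:

import Foundation

// A hedged sketch: blocking file I/O stays off the cooperative pool, and a
// checked continuation hands the result back to the awaiting task.
let blockingWorkQueue = DispatchQueue(label: "com.example.blocking-io",
                                      qos: .utility,
                                      attributes: .concurrent)

func loadFile(at url: URL) async throws -> Data {
    try await withCheckedThrowingContinuation { continuation in
        blockingWorkQueue.async {
            do {
                let data = try Data(contentsOf: url)   // blocking call
                continuation.resume(returning: data)
            } catch {
                continuation.resume(throwing: error)
            }
        }
    }
}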
I have the following var in an @Observable class:
var displayResult: String {
if let currentResult = currentResult, let decimalResult = Decimal(string: currentResult) {
let result = decimalResult.formatForDisplay()
UIAccessibility.post(notification: .announcement, argument: "Current result \(result)")
return result
} else {
return "0"
}
}
The UIAccessibility.post call gives me this warning:
Reference to static property 'announcement' is not concurrency-safe because it involves shared mutable state; this is an error in Swift 6
How can I avoid this?
I got a crash report for my mobile application.
private var _timedEvents: SynchronizedBarrier<[String: TimeInterval]>
private var timedEvents: [String: TimeInterval] {
get {
_timedEvents.value
}
set {
_timedEvents.value { $0 = newValue }
}
}
func time(event: String) {
let startTime = Date.now.timeIntervalSince1970
trackingQueue.async { [weak self, startTime, event] in
guard let self else { return }
var timedEvents = self.timedEvents
timedEvents[event] = startTime
self.timedEvents = timedEvents
}
}
From the report, the crash is happening at _timedEvents.value { $0 = newValue }
struct ReadWriteLock {
private let concurrentQueue: DispatchQueue
init(label: String,
qos: DispatchQoS = .utility) {
let queue = DispatchQueue(label: label,
qos: qos,
attributes: .concurrent)
self.init(queue: queue)
}
init(queue: DispatchQueue) {
self.concurrentQueue = queue
}
func read<T>(closure: () -> T) -> T {
concurrentQueue.sync { closure() }
}
func write<T>(closure: () throws -> T) rethrows -> T {
try concurrentQueue.sync(flags: .barrier) { try closure() }
}
}
struct SynchronizedBarrier<Value> {
private let lock: ReadWriteLock
private var _value: Value
init(_ value: Value,
lock: ReadWriteLock = ReadWriteLock(queue: DispatchQueue(label: "com.example.SynchronizedBarrier",
attributes: .concurrent))) {
self.lock = lock
self._value = value
}
var value: Value { lock.read { _value } }
mutating func value<T>(execute task: (inout Value) throws -> T) rethrows -> T {
try lock.write { try task(&_value) }
}
}
What could be the reason for the crash?
I have attached the crash report.
Masked.crash
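One hypothesis, and only a hypothesis since the report is masked: because SynchronizedBarrier is a struct, the mutating value(execute:) call needs exclusive access to the stored _timedEvents property for its whole duration, and overlapping calls from different threads can trip Swift's exclusivity enforcement. A hedged sketch of a reference-type variant that keeps the same queue-based barrier but needs no mutating access (SynchronizedStore is an illustrative name):

import Dispatch

// A hedged sketch, assuming the exclusivity hypothesis above: a class doesn't
// need `mutating`, so concurrent callers contend only on the queue, not on
// exclusive access to the enclosing property.
final class SynchronizedStore<Value> {
    private let queue = DispatchQueue(label: "com.example.SynchronizedStore",
                                      attributes: .concurrent)
    private var storage: Value

    init(_ value: Value) { self.storage = value }

    var value: Value {
        queue.sync { storage }
    }

    func withValue<T>(_ body: (inout Value) throws -> T) rethrows -> T {
        try queue.sync(flags: .barrier) { try body(&storage) }
    }
}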
I've been studying the AVCam example and noticed that everything pertaining to state transitions for the capture session is performed on a dedicated DispatchQueue. My question is this: can I use an actor instead?
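In principle an actor can play the same role as the dedicated serial queue, since it also guarantees one-at-a-time access to the session state. A minimal sketch of that shape, as an assumption about how it could look rather than how AVCam does it (CaptureService is an illustrative name):

import AVFoundation

// A hedged sketch: the actor serializes all access to the capture session,
// much as a dedicated serial DispatchQueue would.
actor CaptureService {
    private let session = AVCaptureSession()

    func configure() {
        session.beginConfiguration()
        // Inputs and outputs would be added here.
        session.commitConfiguration()
    }

    func start() {
        session.startRunning()
    }

    func stop() {
        session.stopRunning()
    }
}

One caveat worth weighing: startRunning() blocks its thread, and inside an actor that thread comes from the cooperative pool, which is part of what the dedicated-queue design avoids.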
I'm developing an iOS framework in Objective-C.
I create a dispatch_queue_t using dispatch_queue_create, and I call CFRunLoopRun() to run the run loop on that queue.
But it looks like different dispatch queues can share a run loop. Some class has added an invalid timer, and when I call CFRunLoopRun(), it crashes on my side.
Sample code:
- (void)viewDidLoad {
[super viewDidLoad];
self.queue1 = dispatch_queue_create("com.queue1", DISPATCH_QUEUE_CONCURRENT);
self.queue2 = dispatch_queue_create("org.queue2", DISPATCH_QUEUE_CONCURRENT);
}
- (IBAction)btnButtonAction:(id)sender {
dispatch_async(self.queue1, ^{
NSString *runloop = [NSString stringWithFormat:@"%@", CFRunLoopGetCurrent()];
runloop = [runloop substringWithRange:NSMakeRange(0, 22)];
NSLog(@"Queue1 %p run: %@", self.queue1, runloop);
//NSTimer *timer = [[NSTimer alloc] initWithFireDate:[NSDate date] interval:1 target:self selector:@selector(wrongSeletor:) userInfo:nil repeats:NO];
//[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSRunLoopCommonModes];
});
dispatch_async(self.queue2, ^{
NSString *runloop = [NSString stringWithFormat:@"%@", CFRunLoopGetCurrent()];
runloop = [runloop substringWithRange:NSMakeRange(0, 22)];
NSLog(@"Queue2 %p run: %@", self.queue2, runloop);
CFRunLoopRun();
});
}
Sometimes they get the same run loop:
https://i.stack.imgur.com/wGcv3.png
=====
You can see the crash by uncommenting the NSTimer code. The NSTimer was added on queue1, but it still runs when CFRunLoopRun() is called on queue2.
I have read some description like: https://stackoverflow.com/questions/38000727/need-some-clarifications-about-dispatch-queue-thread-and-nsrunloop
They say that "the system creates a run loop for the thread". But in my checks, the queues are sharing a run loop.
This is bad for me, because I'm seeing crashes when CFRunLoopRun() is called in production.
Can someone take a look at this?
In the 'Discussion' section of the current documentation for Swift's DispatchQueue.concurrentPerform(iterations:execute:), it says:
If the target queue is a concurrent queue, the blocks run in parallel and must therefore be reentrant-safe.
However, unlike dispatch_apply (on which this API is built), this method provides no direct means of specifying a target queue, so this callout is somewhat more confusing than it ought to be. IMO, it's important to highlight the reentrancy considerations that apply in most (all?) cases, but the implicit reference to the implementation details should be removed or clarified.
Feedback filed as: FB13708750
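For context, a short illustration of the reentrancy point the documentation is getting at, with arbitrary values; concurrentPerform runs its closure for each index, potentially on several threads at once, so the closure must not assume exclusive access to shared state:

import Dispatch

// Each iteration writes only to its own slot, so no locking is needed even
// though iterations may run in parallel.
let count = 8
var squares = [Int](repeating: 0, count: count)
squares.withUnsafeMutableBufferPointer { output in
    DispatchQueue.concurrentPerform(iterations: count) { i in
        output[i] = i * i
    }
}
print(squares)   // [0, 1, 4, 9, 16, 25, 36, 49]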
Hello,
I have received 3 almost identical crash reports from the App Store. They all come from the same user, and they are spaced just ± 45 seconds apart.
This is the backtrace of the crashed thread:
Crashed Thread: 3
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: Namespace SIGNAL, Code 6 Abort trap: 6
Terminating Process: Ssssssss [46033]
Thread 3 Crashed:
0 libsystem_kernel.dylib 0x00007ff81b90f196 __pthread_kill + 10
1 libsystem_pthread.dylib 0x00007ff81b946ee6 pthread_kill + 263 (pthread.c:1670)
2 libsystem_c.dylib 0x00007ff81b86dbdf __abort + 139 (abort.c:155)
3 libsystem_c.dylib 0x00007ff81b86db54 abort + 138 (abort.c:126)
4 libc++abi.dylib 0x00007ff81b901282 abort_message + 241
5 libc++abi.dylib 0x00007ff81b8f33fb demangling_terminate_handler() + 267
6 libobjc.A.dylib 0x00007ff81b5c67ca _objc_terminate() + 96 (objc-exception.mm:498)
7 libc++abi.dylib 0x00007ff81b9006db std::__terminate(void (*)()) + 6
8 libc++abi.dylib 0x00007ff81b900696 std::terminate() + 54
9 libdispatch.dylib 0x00007ff81b7a6047 _dispatch_client_callout + 28 (object.m:563)
10 libdispatch.dylib 0x00007ff81b7a87c4 _dispatch_queue_override_invoke + 800 (queue.c:4882)
11 libdispatch.dylib 0x00007ff81b7b5fa2 _dispatch_root_queue_drain + 343 (queue.c:7051)
12 libdispatch.dylib 0x00007ff81b7b6768 _dispatch_worker_thread2 + 170 (queue.c:7119)
13 libsystem_pthread.dylib 0x00007ff81b943c0f _pthread_wqthread + 257 (pthread.c:2631)
14 libsystem_pthread.dylib 0x00007ff81b942bbf start_wqthread + 15 (:-1)
In the backtrace of the main thread, I can see that the error is caught by the app delegate, which tries to display an alert, but obviously the message has no time to appear. Incidentally (though it's not my main question), I would like to know whether it would be possible in such a case to block the background thread for as long as the alert is displayed (e.g. using a dispatch queue)?
... (many other related lines)
72 SSUuuuIIIIIIIIIUUuuuu 0x000000010e8089f2 +[NSAlert(Ssssssss) fatalError:] + 32
73 Ssssssss 0x000000010dd5e75b __universalExceptionHandler_block_invoke (in Ssssssss) (SQAppDelegate.m:421) + 333659
74 libdispatch.dylib 0x00007ff81b7a4d91 _dispatch_call_block_and_release + 12 (init.c:1518)
75 libdispatch.dylib 0x00007ff81b7a6033 _dispatch_client_callout + 8 (object.m:560)
76 libdispatch.dylib 0x00007ff81b7b2fcf _dispatch_main_queue_drain + 954 (queue.c:7794)
77 libdispatch.dylib 0x00007ff81b7b2c07 _dispatch_main_queue_callback_4CF + 31 (queue.c:7954)
78 CoreFoundation 0x00007ff81ba62195 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 9 (CFRunLoop.c:1780)
79 CoreFoundation 0x00007ff81ba21ebf __CFRunLoopRun + 2452 (CFRunLoop.c:3147)
80 CoreFoundation 0x00007ff81ba20ec1 CFRunLoopRunSpecific + 560 (CFRunLoop.c:3418)
81 HIToolbox 0x00007ff8254a5f3d RunCurrentEventLoopInMode + 292 (EventLoop.c:455)
82 HIToolbox 0x00007ff8254a5d4e ReceiveNextEventCommon + 657 (EventBlocking.c:384)
83 HIToolbox 0x00007ff8254a5aa8 _BlockUntilNextEventMatchingListInModeWithFilter + 64 (EventBlocking.c:171)
84 AppKit 0x00007ff81eabfb18 _DPSNextEvent + 858 (CGDPSReplacement.m:818)
85 AppKit 0x00007ff81eabe9c2 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1214 (appEventRouting.m:407)
86 AppKit 0x00007ff81eab1037 -[NSApplication run] + 586 (NSApplication.m:3432)
87 AppKit 0x00007ff81ea85251 NSApplicationMain + 817 (NSApplication.m:9427)
88 dyld 0x00007ff81b5ec41f start + 1903 (dyldMain.cpp:1165)
In all 3 reports, another thread indicates that it is sending or receiving MIDI data (it's exactly the same backtrace in the 3 reports):
Thread 1:
0 libsystem_kernel.dylib 0x00007ff81b908552 mach_msg2_trap + 10
1 libsystem_kernel.dylib 0x00007ff81b9166cd mach_msg2_internal + 78 (mach_msg.c:201)
2 libsystem_kernel.dylib 0x00007ff81b90f584 mach_msg_overwrite + 692 (mach_msg.c:0)
3 libsystem_kernel.dylib 0x00007ff81b90883a mach_msg + 19 (mach_msg.c:323)
4 CoreMIDI 0x00007ff834adfd50 XServerMachPort::ReceiveMessage(int&, void*, int&) + 94 (XMachPort.cpp:62)
5 CoreMIDI 0x00007ff834b118c5 MIDIProcess::MIDIInPortThread::Run() + 105 (MIDIClientLib.cpp:204)
6 CoreMIDI 0x00007ff834af9c44 CADeprecated::XThread::RunHelper(void*) + 10 (XThread.cpp:21)
7 CoreMIDI 0x00007ff834afae9f CADeprecated::CAPThread::Entry(CADeprecated::CAPThread*) + 77 (CAPThread.cpp:324)
8 libsystem_pthread.dylib 0x00007ff81b9471d3 _pthread_start + 125 (pthread.c:893)
9 libsystem_pthread.dylib 0x00007ff81b942bd3 thread_start + 15 (:-1)
I wonder if this MIDI activity may be related to the crash, even if it doesn't occur in the crashed thread.
I also wonder whether the dispatch block at the start of thread 3's backtrace is the same one that leads to thread 1 (to open the NSAlert), which would mean it comes after the error, or whether it is the dispatch block in which the error occurs.
Finally,
2024-03-25_13-04-40.6314.crash
I wonder if std::terminate() could indicate a problem in the C++ code, because part of the application is written in C++, and that would be a clue.
More generally, is there something in these backtraces that can help me find the problem?
Any help greatly appreciated, thanks!
-dp
I'm trying to reproduce a case when there are more dispatch queues than there are threads serving them. Is that a possible scenario?
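Yes, that is the normal situation: queues are lightweight and are not tied one-to-one to threads, so a small pool of worker threads services however many queues exist. A minimal sketch of an experiment to observe it (the labels and counts are arbitrary):

import Dispatch
import Foundation

// A minimal sketch: create many more serial queues than GCD will ever create
// worker threads and log which thread services each one. The same few threads
// show up again and again across distinct queues.
for i in 0..<100 {
    let queue = DispatchQueue(label: "experiment.queue.\(i)")
    queue.async {
        print("queue \(i) ran on \(Thread.current)")
    }
}
// Keep the process alive long enough for the blocks to run (e.g. in a
// command-line tool); in an app the run loop already does this.
Thread.sleep(forTimeInterval: 2)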
Per my understanding of the DispatchQueue docs, and various WWDC videos on the matter, if one creates a queue in the following manner:
let q = DispatchQueue(
label: "my-q",
qos: .utility,
target: .global(qos: .userInteractive)
)
then one should expect work items submitted via async() to effectively run at userInteractive QoS, as the target queue should provide a 'floor' on the effective QoS value (assuming no additional rules are in play, e.g. higher priority items have been enqueued, submitted work items enforce QoS, etc).
In practice, however, this particular formulation does not appear to function that way, and the 'resolved' QoS value seems to be utility, contrary to what the potentially relevant documentation suggests. This behavior appears to be inconsistent with other permutations of queue construction, which makes it even more surprising.
Here's some sample code I was experimenting with to check the behavior of queues created in various ways that I would expect to function analogously (in regards to the derived QoS value for the threads executing their work items):
func test_qos_permutations() {
// q1
let utilTargetingGlobalUIQ = DispatchQueue(
label: "qos:util tgt:globalUI",
qos: .utility,
target: .global(qos: .userInitiated)
)
let customUITargetQ = DispatchQueue(
label: "custom tgt, qos: unspec, tgt:globalUI",
target: .global(qos: .userInitiated)
)
// q2
let utilTargetingCustomSerialUIQ = DispatchQueue(
label: "qos:util tgt:customSerialUI",
qos: .utility,
target: customUITargetQ
)
// q3
let utilDelayedTargetingGlobalUIQ = DispatchQueue(
label: "qos:util tgt:globalUI-delayed",
qos: .utility,
attributes: .initiallyInactive
)
utilDelayedTargetingGlobalUIQ.setTarget(queue: .global(qos: .userInitiated))
utilDelayedTargetingGlobalUIQ.activate()
let queues = [
utilTargetingGlobalUIQ,
utilTargetingCustomSerialUIQ,
utilDelayedTargetingGlobalUIQ,
]
for q in queues {
q.async {
Thread.current.name = q.label
let threadQos = qos_class_self()
print("""
q: \(q.label)
orig qosClass: \(q.qos.qosClass)
thread qosClass: \(DispatchQoS.QoSClass(rawValue: threadQos)!)
""")
}
}
}
Running this, I get the following output:
q: qos:util tgt:customSerialUI
orig qosClass: utility
thread qosClass: userInitiated
q: qos:util tgt:globalUI-delayed
orig qosClass: utility
thread qosClass: userInitiated
q: qos:util tgt:globalUI
orig qosClass: utility
thread qosClass: utility
This test suggests that constructing a queue with an explicit qos parameter and targeting a global queue of nominally 'higher' QoS does not result in a queue that runs its items at the target's QoS. Perhaps most surprising is that if the target queue is set after the queue is initialized, you do get the expected 'QoS floor' behavior. Is this behavior expected, or is it possibly a bug?
I was wondering if there is a way, while debugging, to observe the 'QoS boosting' behavior that is implemented in various places to provide priority inversion avoidance. The pthread_override_qos_class_start/end_np header comments specifically say that overrides aren't reflected in the qos_class_self() and pthread_get_qos_class_np() return values. As far as I can tell the 'CPU Report' UI in Xcode also does not reflect this information (perhaps for the reason the header comments call out).
Is there a direct mechanism to observe this behavior? Presumably a heuristic empirical test could be done to compare throughput of a queue that should have its priority boosted and one that should not, but I would prefer a less opaque means of verification if possible. Thanks in advance!