Swift Concurrency Resources:
DevForums tags: Concurrency
The Swift Programming Language > Concurrency documentation
Migrating to Swift 6 documentation
WWDC 2022 Session 110351 Eliminate data races using Swift Concurrency — This ‘sailing on the sea of concurrency’ talk is a great introduction to the fundamentals.
WWDC 2021 Session 10134 Explore structured concurrency in Swift — The table that starts rolling out at around 25:45 is really helpful.
Swift Async Algorithms package
Swift Concurrency Proposal Index DevForum post
Why is flow control important? DevForums post
Matt Massicotte’s blog
Dispatch Resources:
DevForums tags: Dispatch
Dispatch documentation — Note that the Swift API and C API, while generally aligned, are different in many details. Make sure you select the right language at the top of the page.
Dispatch man pages — While the standard Dispatch documentation is good, you can still find some great tidbits in the man pages. See Reading UNIX Manual Pages. Start by reading dispatch in section 3.
WWDC 2015 Session 718 Building Responsive and Efficient Apps with GCD [1]
WWDC 2017 Session 706 Modernizing Grand Central Dispatch Usage [1]
Avoid Dispatch Global Concurrent Queues DevForums post
Share and Enjoy
—
Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"
[1] These videos may or may not be available from Apple. If not, the URL should help you locate other sources of this info.
Dispatch
Execute code concurrently on multicore hardware by submitting work to dispatch queues managed by the system using Dispatch.
Posts under Dispatch tag
32 Posts
Based on crash reports for our app in production, we're seeing the SwiftUI crash below but can't figure out why it happens. These crashes are pretty frequent (>20 crashes per day).
I'd really appreciate any insight into why this happens. Based on the stack trace, I can't find anything that links back to our app (replaced with MyApp in the stack trace).
Thank you in advance!
Crashed: com.apple.main-thread
0 libdispatch.dylib 0x39dcc _dispatch_semaphore_dispose.cold.1 + 40
1 libdispatch.dylib 0x4c1c _dispatch_semaphore_signal_slow + 82
2 libdispatch.dylib 0x2d30 _dispatch_dispose + 208
3 SwiftUICore 0x77f788 destroy for StoredLocationBase.Data + 64
4 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196
5 libswiftCore.dylib 0x13a60 UnsafeMutablePointer.deinitialize(count:) + 40
6 SwiftUICore 0x95f374 AtomicBuffer.deinit + 124
7 SwiftUICore 0x95f39c AtomicBuffer.__deallocating_deinit + 16
8 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
9 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
10 SwiftUICore 0x77e53c StoredLocation.deinit + 32
11 SwiftUICore 0x77e564 StoredLocation.__deallocating_deinit + 16
12 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
13 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
14 MyApp 0x1673338 objectdestroyTm + 6922196
15 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
16 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
17 SwiftUICore 0x650290 _AppearanceActionModifier.MergedBox.__deallocating_deinit + 32
18 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
19 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
20 SwiftUICore 0x651b44 closure #1 in _AppearanceActionModifier.MergedBox.update()partial apply + 28
21 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
22 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
23 libswiftCore.dylib 0x3b56fc swift_arrayDestroy + 196
24 libswiftCore.dylib 0xa2a54 _ContiguousArrayStorage.__deallocating_deinit + 96
25 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
26 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
27 SwiftUICore 0x4a6c4c type metadata accessor for _ContiguousArrayStorage<CVarArg> + 120
28 libswiftCore.dylib 0x3d783c _swift_release_dealloc + 56
29 libswiftCore.dylib 0x3d8950 bool swift::RefCounts<swift::RefCountBitsT<(swift::RefCountInlinedness)1>>::doDecrementSlow<(swift::PerformDeinit)1>(swift::RefCountBitsT<(swift::RefCountInlinedness)1>, unsigned int) + 160
30 SwiftUICore 0x4a5d88 static Update.dispatchActions() + 1332
31 SwiftUICore 0xa0db28 closure #2 in closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 132
32 SwiftUICore 0xa0d928 closure #1 in ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 708
33 SwiftUICore 0xa0b0d4 ViewRendererHost.render(interval:updateDisplayList:targetTimestamp:) + 556
34 SwiftUI 0x8f1634 UIHostingViewBase.renderForPreferences(updateDisplayList:) + 168
35 SwiftUI 0x8f495c closure #1 in UIHostingViewBase.requestImmediateUpdate() + 72
36 SwiftUI 0xcc700 thunk for @escaping @callee_guaranteed () -> () + 36
37 libdispatch.dylib 0x2370 _dispatch_call_block_and_release + 32
38 libdispatch.dylib 0x40d0 _dispatch_client_callout + 20
39 libdispatch.dylib 0x129e0 _dispatch_main_queue_drain + 980
40 libdispatch.dylib 0x125fc _dispatch_main_queue_callback_4CF + 44
41 CoreFoundation 0x56204 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
42 CoreFoundation 0x53440 __CFRunLoopRun + 1996
43 CoreFoundation 0x52830 CFRunLoopRunSpecific + 588
44 GraphicsServices 0x11c4 GSEventRunModal + 164
45 UIKitCore 0x3d2eb0 -[UIApplication _run] + 816
46 UIKitCore 0x4815b4 UIApplicationMain + 340
47 SwiftUI 0x101f98 closure #1 in KitRendererCommon(_:) + 168
48 SwiftUI 0xe2664 runApp<A>(_:) + 100
49 SwiftUI 0xe5490 static App.main() + 180
50 MyApp 0x8a7828 main + 4340250664 (MyApp.swift:4340250664)
51 ??? 0x1ba496ec8 (Missing)
We will be creating N NWListener objects and M NWConnection objects in our process's communication subsystem: server sockets, the client sockets accepted on the server, and the client sockets on the clients.
Both NWConnection and NWListener rely on a DispatchQueue to deliver state changes, incoming connections, send/receive completions, and so on.
What DispatchQueues should I use, and why?
A global concurrent dispatch queue (and which QoS?) for all NWConnection and NWListener objects?
One custom concurrent queue (which QoS?) for all NWConnection and NWListener objects? (Does that get targeted at one of the global queues anyway?)
One custom concurrent queue per NWConnection and NWListener, all targeted at a global concurrent dispatch queue (and which QoS?)?
One custom concurrent queue per NWConnection and NWListener, all targeted at a single custom concurrent target queue?
For each option, how am I affected in terms of parallelism, concurrency, throughput, and latency, and how is the overall system affected (with other processes also running)?
Separate questions (sorry for the digression):
Are global concurrent queues specific to a process or shared across all processes on a device?
Can I safely use setSpecific on global dispatch queues in our app?
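For concreteness, here is a minimal sketch of the simplest of those setups, with hypothetical names ("com.example.network", port 4567): a single shared serial queue handed to both the listener and each accepted connection. Network.framework only requires that each object be given a DispatchQueue; a serial queue keeps callback ordering simple and leaves thread management to the system.
import Network

// A sketch under assumed names; a single shared serial queue for all
// Network.framework callbacks.
let networkQueue = DispatchQueue(label: "com.example.network", qos: .userInitiated)

func startListening() throws -> NWListener {
    let listener = try NWListener(using: .tcp, on: 4567)
    listener.newConnectionHandler = { connection in
        // Deliver this connection's events on the same serial queue.
        connection.start(queue: networkQueue)
    }
    listener.start(queue: networkQueue)
    return listener
}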
I understand that GCD and its underlying implementation have evolved over time, and that many details have not been spelled out explicitly in Apple's documentation.
I understand most of the concepts: DispatchQueue (serial and concurrent queues), DispatchQoS, target queues, and the system-provided queues (main and the globals).
I have some doubts and questions to clarify:
[Main Dispatch Queue] [Link] Because the main queue doesn't behave entirely like a regular serial queue, it may have unwanted side-effects when used in processes that are not UI apps (daemons). For such processes, the main queue should be avoided. What does this mean? Can you elaborate?
[Global Concurrent Dispatch Queues] Are they global to a process or shared across processes on a device? I believe it's the former, but I just wanted to be sure.
[Global Concurrent Dispatch Queues] Does the system create 4 (one per QoS) × 2 (overcommitting and non-overcommitting) = 8 queues in all? When does each type of queue come into play?
[Custom Queue][Target Queue concept] [swift-corelibs-libdispatch/man/dispatch_queue_create.3] QUOTE The default target queue of all dispatch objects created by the application is the default priority global concurrent queue. UNQUOTE Is this still true? (See the target-queue sketch after these questions.)
We could not find this mentioned in any current official Apple documentation (though some old forum threads (one more) and the GitHub source documentation indicate the same).
The official documentation only says:
[dispatch_set_target_queue] QUOTE If you want the system to provide a queue that is appropriate for the current object UNQUOTE
[dispatch_queue_create_with_target] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT to set the target queue to the default type for the current dispatch queue. UNQUOTE
[Dispatch>DispatchQueue>init] QUOTE Specify DISPATCH_TARGET_QUEUE_DEFAULT if you want the system to provide a queue that is appropriate for the current object. UNQUOTE
What is the difference between passing the target queue as nil vs. DISPATCH_TARGET_QUEUE_DEFAULT to the DispatchQueue initializer?
[Custom Queue][Target Queue concept] [dispatch_set_target_queue] QUOTE The system doesn't allocate threads to the dispatch queue if it has a target queue, unless that target queue is a global concurrent queue. UNQUOTE
So the system does allocate threads to custom dispatch queues whose (default) target is a global concurrent queue.
What does that mean? What does targeting a global concurrent queue imply in that case?
[System / GCD Thread Pool] The pool that executes work items from dispatch queues: is it per queue, shared across queues within a process, or shared across processes on the device?
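For illustration, here is a minimal sketch of an explicit target-queue hierarchy (hypothetical labels), which is the mechanism several of the questions above concern: work submitted to either child queue ultimately drains through the single serial root queue, so at most one of their work items runs at a time.
import Dispatch

// Hypothetical labels; both child queues funnel into one serial root queue,
// so their work items are mutually serialized even though they are distinct queues.
let rootQueue = DispatchQueue(label: "com.example.root")
let networkQueue = DispatchQueue(label: "com.example.network", target: rootQueue)
let storageQueue = DispatchQueue(label: "com.example.storage", target: rootQueue)

networkQueue.async { print("network work") }
storageQueue.async { print("storage work") }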
I’m currently developing an iOS metronome app using DispatchSourceTimer as the timer. The interval is set very small, around 50 milliseconds, and I’m using CFAbsoluteTimeGetCurrent to calculate the elapsed time to ensure the beat is played within a ±0.003-second margin.
The problem is that once the app goes to the background, the timing becomes unstable—it slows down noticeably, then recovers after 1–2 seconds.
When coming back to the foreground, it suddenly speeds up, and again, it takes 1–2 seconds to return to normal. It feels like the app is randomly “powering off” and then “overclocking.” It’s super frustrating.
I’ve noticed that some metronome apps in the App Store have similar issues, but there’s one called “Professional Metronome” that’s rock solid with no such problems. What kind of magic are they using? Any experts out there who can help? Thanks in advance!
P.S. I’ve already enabled background audio permissions.
The professional metronome that has no issues: https://link.zhihu.com/?target=https%3A//apps.apple.com/cn/app/pro-metronome-%25E4%25B8%2593%25E4%25B8%259A%25E8%258A%2582%25E6%258B%258D%25E5%2599%25A8/id477960671
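For reference, a minimal sketch (assumed names, not the poster's code) of the kind of setup described above: a strict repeating DispatchSourceTimer at 50 ms whose drift is measured with CFAbsoluteTimeGetCurrent. Note that the system is free to throttle or coalesce timers for a backgrounded app, which is one plausible source of the instability described.
import Dispatch
import CoreFoundation

// A sketch with assumed names; a strict timer with a tight leeway.
let timerQueue = DispatchQueue(label: "com.example.metronome", qos: .userInteractive)
let timer = DispatchSource.makeTimerSource(flags: .strict, queue: timerQueue)
var lastTick = CFAbsoluteTimeGetCurrent()

timer.schedule(deadline: .now(), repeating: .milliseconds(50), leeway: .milliseconds(1))
timer.setEventHandler {
    let now = CFAbsoluteTimeGetCurrent()
    let drift = now - lastTick - 0.050   // how far this beat is from the 50 ms grid
    lastTick = now
    // Play the beat here; log drift to verify the ±0.003 s budget.
    print("drift: \(drift) s")
}
timer.resume()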
I’ve been experimenting with Dispatch, and workloops in particular. I gather that they’re similar to serial queues, except that they reorder work items by QoS. I suspect there’s more to workloops than meets the eye, though; calling dispatch_set_target_queue on them has no effect, in spite of <dispatch/workloop.h> saying that workloops “can be passed to all APIs accepting a dispatch queue, except for functions from the dispatch_sync() family”.
Workloops keep showing up in odd places like Metal and Network.framework backtraces, and <dispatch/workloop.h> includes functionality for tying workloops to os_workgroups (?!).
What exactly is a workloop beyond just a serial queue with priority ordering, and why can’t I set the target queue of one?
I have the following TaskExecutor code in Swift 6 and am getting the following error:
//Error
Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.
May I know what is the best way to approach this?
This is the default code generated by Xcode when creating a Vision Pro App using Metal as the Immersive Renderer.
Renderer
@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { //Error
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
    }
}

final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}
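For comparison, here is a simplified, hypothetical sketch (not the Xcode template) of the diagnostic and one common way around it: the task closure runs on the preferred executor rather than the main actor, so it should only capture values that are safe to send. Reading the main-actor-isolated state first and capturing only Sendable values avoids the error.
// Hypothetical types; a sketch only, assuming the captured state is
// main-actor-isolated and not Sendable.
@MainActor
final class Model {
    var counter = 0
}

@MainActor
func startWork(model: Model) {
    // Capturing `model` directly in a closure that runs on another executor
    // is what produces the "sending parameter risks causing data races" error.
    let snapshot = model.counter            // read the isolated state on the main actor first
    Task(executorPreference: RendererTaskExecutor.shared) {
        print("snapshot was \(snapshot)")   // only a Sendable Int is captured
    }
}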
Crash occurs in @MainActor class or function on iOS 14
Apps built and distributed with Xcode 16 in Swift 6 language mode crash on iOS 14 devices.
We build a static library and link it into our app.
The crash occurs in every class or function in the static library that is marked @MainActor.
It does not occur on iOS/iPadOS 15 or later.
If we set the static library's minimum deployment target to iOS 11, the crash occurs; if we set it to iOS 14, it does not.
Is there a way to keep the static library's minimum deployment target at iOS 11 and prevent the crashes?
I create a DispatchIO object (in Swift) from a socketpair, set the low/high water marks to 1, and then call read on it. Elsewhere (multi-threaded, of course), I get data from somewhere, and write to the other side of it. Then when my data is done, I call dio?.close()
The cleanup handler never gets called.
What am I missing? (ETA: Ok, I can get it to work by calling dio?.close(flags: .stop) so that may be what I was missing.)
(Also, I really wish it would get all the data available at once for the read, rather than 1 at a time.)
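For reference, a minimal sketch (assumed names, not the poster's code) of the setup described above: a DispatchIO stream channel over one end of a socketpair, low/high water marks of 1, and a close with .stop so that the outstanding read is interrupted and the cleanup handler runs.
import Dispatch
import Darwin

// A sketch with assumed names.
var fds: [Int32] = [-1, -1]
socketpair(AF_UNIX, SOCK_STREAM, 0, &fds)

let ioQueue = DispatchQueue(label: "com.example.dispatchio")
let dio = DispatchIO(type: .stream, fileDescriptor: fds[0], queue: ioQueue) { error in
    // Cleanup handler: the channel has relinquished the descriptor.
    close(fds[0])
}
dio.setLimit(lowWater: 1)
dio.setLimit(highWater: 1)

dio.read(offset: 0, length: Int.max, queue: ioQueue) { done, data, error in
    if let data, !data.isEmpty {
        print("read \(data.count) byte(s)")
    }
    if done {
        print("read finished, error code \(error)")
    }
}

// ... write to fds[1] from elsewhere ...

// Closing with .stop interrupts the outstanding read so the cleanup handler
// can run; a plain close() waits for operations that may never finish on a
// still-open socket.
dio.close(flags: .stop)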
Let's say I queue some tasks on DispatchQueue.global() and then switch to another app or lock the screen for a while. The app is not terminated and stays in the background.
Is there a chance that tasks that were queued but not yet started could be discarded, even though the app hasn't been terminated, after switching to another app or locking the screen for a while?
I'm calling the following function in a SwiftUI View modifier in Xcode 16.1:
nonisolated func f() -> CGFloat {
    let semaphore = DispatchSemaphore(value: 0)
    var a: CGFloat = 0
    DispatchQueue.main.async {
        a = ...
        semaphore.signal()
    }
    semaphore.wait()
    return a
}
The app freezes, and code in the main queue is never executed.
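For what it's worth, here is a hedged sketch of why this pattern deadlocks and one alternative (the CGFloat(42) is a placeholder): the view modifier runs on the main thread, semaphore.wait() blocks that thread, and the block queued with DispatchQueue.main.async can therefore never run, so signal() is never called. Staying asynchronous avoids blocking entirely.
func f() async -> CGFloat {
    await MainActor.run {
        // Compute the value on the main actor without blocking the main thread.
        CGFloat(42)   // placeholder for the real computation
    }
}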
We are getting a _dispatch_assert_queue_fail crash when the cancellationHandler on NSProgress is called.
We do not see this with iOS 17.x, only on iOS 18. We are building in Swift 6 language mode and do not have any compiler warnings.
We have a type whose init looks something like this:
init(
    request: URLRequest,
    destinationURL: URL,
    session: URLSession
) {
    progress = Progress()
    progress.kind = .file
    progress.fileOperationKind = .downloading
    progress.fileURL = destinationURL
    progress.pausingHandler = { [weak self] in
        self?.setIsPaused(true)
    }
    progress.resumingHandler = { [weak self] in
        self?.setIsPaused(false)
    }
    progress.cancellationHandler = { [weak self] in
        self?.cancel()
    }
When the progress is cancelled and the cancellation handler is invoked, we get the crash. It is not reproducible 100% of the time, but it happens quite often, especially after cleaning, rebuilding, and running our tests.
* thread #4, queue = 'com.apple.root.default-qos', stop reason = EXC_BREAKPOINT (code=1, subcode=0x18017b0e8)
* frame #0: 0x000000018017b0e8 libdispatch.dylib`_dispatch_assert_queue_fail + 116
frame #1: 0x000000018017b074 libdispatch.dylib`dispatch_assert_queue + 188
frame #2: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284
frame #3: 0x000000010b80bd84 MyTests`closure #3 in MyController.init() at MyController.swift:0
frame #4: 0x000000010b80bb04 MyTests`thunk for @escaping @callee_guaranteed @Sendable () -> () at <compiler-generated>:0
frame #5: 0x00000001810276b0 Foundation`__20-[NSProgress cancel]_block_invoke_3 + 28
frame #6: 0x00000001801774ec libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #7: 0x0000000180178de0 libdispatch.dylib`_dispatch_client_callout + 16
frame #8: 0x000000018018b7dc libdispatch.dylib`_dispatch_root_queue_drain + 1072
frame #9: 0x000000018018bf60 libdispatch.dylib`_dispatch_worker_thread2 + 232
frame #10: 0x00000001012a77d8 libsystem_pthread.dylib`_pthread_wqthread + 224
Any thoughts on why this is crashing and what we can do to work around it? I have not been able to extract our code into a simple reproducible case yet, and I mostly see it when running our code in a testing environment (XCTest), although I have been able to reproduce it in an app a few times; it's just less common.
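One commonly discussed mitigation (a hedged sketch with hypothetical names, assuming the enclosing type is main-actor-isolated): in Swift 6 language mode a closure formed in an isolated context can inherit that isolation and assert it at runtime, while NSProgress invokes its handlers on an arbitrary queue, which trips the assertion. Marking the handler @Sendable keeps it nonisolated, and a Task hops back to the main actor for the real work.
import Foundation

// Hypothetical names; a sketch of the pattern, not the poster's code.
@MainActor
final class Download {
    let progress = Progress()

    init() {
        progress.cancellationHandler = { @Sendable [weak self] in
            // NSProgress may invoke this on an arbitrary queue, so do not rely
            // on inherited main-actor isolation; hop back explicitly instead.
            Task { @MainActor in
                self?.cancel()
            }
        }
    }

    func cancel() {
        // tear down the download here
    }
}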
Hi there, I have some threading questions regarding Network framework completion callbacks. In short, how should I handle cross-thread data in the completion callbacks?
Here are more details. I have a background serial dispatch queue (call it dispatch queue A) that sequentially processes the nw_connection and any network I/O events. Meanwhile, user input is handled by another serial dispatch queue (dispatch queue B). How should I handle the cross-thread user data in this case?
(I've written some simplified sample code below.)
#include <dispatch/dispatch.h>
#include <Network/Network.h>

struct {
    int client_status;
    char *message_to_sent;
} user_data;

nw_connection_t nw_connection;
dispatch_queue_t dispatch_queue_A;

static void send_message(void *context) {
    dispatch_data_t data = dispatch_data_create(user_data.message_to_sent,
                                                strlen(user_data.message_to_sent),
                                                dispatch_queue_A,
                                                DISPATCH_DATA_DESTRUCTOR_DEFAULT);
    nw_connection_send(nw_connection, data, NW_CONNECTION_DEFAULT_MESSAGE_CONTEXT, false,
                       ^(nw_error_t error) {
        // send completion handler
        user_data.client_status = SENT;
        mem_release(user_data.message_to_sent);
    });
}

static void setup_connection(void) {
    dispatch_queue_A = dispatch_queue_create("unique_id_a", DISPATCH_QUEUE_SERIAL);
    nw_connection = nw_connection_create(endpoint, params);
    // deliver connection events on queue A (as described above)
    nw_connection_set_queue(nw_connection, dispatch_queue_A);
    nw_connection_set_state_changed_handler(nw_connection,
                                            ^(nw_connection_state_t state, nw_error_t error) {
        if (state == nw_connection_state_ready) {
            user_data.client_status = CONNECTED;
        }
        // ... other operations ...
    });
    nw_connection_start(nw_connection);
    nw_retain(nw_connection);
}

static void user_main(void) {
    setup_connection();
    user_data.client_status = INIT;
    dispatch_queue_t dispatch_queue_B = dispatch_queue_create("unique_id_b", DISPATCH_QUEUE_SERIAL);

    // write socket
    dispatch_async(dispatch_queue_B, ^{
        if (user_data.client_status != CONNECTED) return;
        user_data.message_to_sent = malloc(XX, ***);
        // I would like all I/O events processed on dispatch queue A so that
        // the I/O events do not interleave with the user events.
        dispatch_async_f(dispatch_queue_A, NULL, send_message);
    });

    // Disconnect block
    dispatch_async(dispatch_queue_B, ^{
        dispatch_async(dispatch_queue_A, ^{
            nw_connection_cancel(nw_connection);
        });
        user_data.client_status = DISCONNECTING;
    });

    // clean up connection and so on...
}
To be more specific, my questions would be:
Since I'm using serial dispatch queues, I didn't protect user_data here. However, on which thread does the send completion handler get called? Could there be a data race where the disconnect block and the send completion handler both access user_data?
If I protect user_data with a lock, it might block a thread. How does the dispatch queue make sure it does NOT put a related execution block onto that blocked thread?
How and why does the DispatchGroup.notify method get called before all of the enter() calls have been balanced by leave()?
I tried adding the dispatchGroup.enter() call within the same loop and the output is the same.
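For reference, a minimal sketch (assumed work) of the usual pairing: notify fires only after every enter() has been balanced by a leave(). If it appears to fire early, the usual causes are an enter() that happens after the corresponding work (or the notify) has already been scheduled, or an extra leave().
import Dispatch

// A sketch: enter() before scheduling, exactly one leave() per enter().
let group = DispatchGroup()
let workQueue = DispatchQueue.global()

for i in 0..<3 {
    group.enter()                      // balance point: before the async work is scheduled
    workQueue.async {
        print("task \(i) running")
        group.leave()                  // exactly one leave per enter
    }
}

group.notify(queue: .main) {
    print("all tasks finished")        // runs only after all three leave() calls
}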
Background info:
I dispatch an async task to the main queue in an es_handler_block_t (the client subscribes to open, create, exit, and close events, and mutes all processes except DesktopServicesHelper). The crash happens somewhat randomly, and is most likely when I copy a folder containing many files from one volume to another.
Here's the crashed part of the diagnostic report.
Thread 9 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x18c6e2a60 __pthread_kill + 8
1 libsystem_pthread.dylib 0x18c71ac20 pthread_kill + 288
2 libsystem_c.dylib 0x18c627a20 abort + 180
3 libc++abi.dylib 0x18c6d1d30 abort_message + 132
4 libc++abi.dylib 0x18c6c1fe8 demangling_terminate_handler() + 348
5 libobjc.A.dylib 0x18c3601d0 _objc_terminate() + 144
6 libc++abi.dylib 0x18c6d10f4 std::__terminate(void (*)()) + 16
7 libc++abi.dylib 0x18c6d1098 std::terminate() + 108
8 libdispatch.dylib 0x18c56a3fc _dispatch_client_callout + 40
9 libdispatch.dylib 0x18c571a14 _dispatch_lane_serial_drain + 748
10 libdispatch.dylib 0x18c572578 _dispatch_lane_invoke + 432
11 libdispatch.dylib 0x18c57bea8 _dispatch_root_queue_drain + 392
12 libdispatch.dylib 0x18c57c6b8 _dispatch_worker_thread2 + 156
13 libsystem_pthread.dylib 0x18c716fd0 _pthread_wqthread + 228
14 libsystem_pthread.dylib 0x18c715d28 start_wqthread + 8
Thread 9 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x000000018c6d62cb x5: 0x000000016c1eed20 x6: 0x000000000000006e x7: 0x0000000000000000
x8: 0x851ef9fdee51098d x9: 0x851ef9fc824ff98d x10: 0x0000000000000200 x11: 0x000000000000000b
x12: 0x0000000000000000 x13: 0x00000000001ff800 x14: 0x00000000000007fb x15: 0x00000000a5a0204e
x16: 0x0000000000000148 x17: 0x00000001fe792c30 x18: 0x0000000000000000 x19: 0x0000000000000006
x20: 0x000000016c1ef000 x21: 0x0000000000004003 x22: 0x000000016c1ef0e0 x23: 0x000000016c1ef0e0
x24: 0x00000001f442b6a8 x25: 0x0000000000000000 x26: 0x0000000000000000 x27: 0x0000600003664800
x28: 0x0000000000000000 fp: 0x000000016c1eec90 lr: 0x000000018c71ac20
sp: 0x000000016c1eec70 pc: 0x000000018c6e2a60 cpsr: 0x40001000
far: 0x0000000000000000 esr: 0x56000080 Address size fault
BUG IN CLIENT OF LIBDISPATCH: Unexpected EV_VANISHED (do not destroy random mach ports or file descriptors)
Which, ok, clear: somehow a file descriptor is being closed before DispatchIO.close() is called, yes?
Only I can't figure out where it is being closed. I am currently using change_fdguard_np() to prevent closes anywhere else, and every single place where I call Darwin.close() is preceded by another call to change_fdguard_np() and then DispatchIO.close(). For example:
self.unguardSocket()
self.readDispatcher?.close()
Darwin.close(self.socket)
self.socket = -1
self.completion(self)
Hi,
Greetings of the day!
I would like help avoiding an Endpoint Security system extension crash with the following termination reason:
Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline
We have subscribed to a couple of events. For AUTH-related events we receive a deadline of 14 seconds on Sonoma, and to avoid the issue above we use a queue to deliver the verdict within the deadline so the OS doesn't kill our extension. However, we sometimes still see a crash with the message below:
Termination Reason: Namespace ENDPOINTSECURITY, Code 2 EndpointSecurity client terminated because it failed to respond to a message before its deadline
**Dispatch Thread Soft Limit Reached: 64** (too many dispatch threads blocked in synchronous operations)
There is no GCD API to check whether we have reached the soft limit, so we need help figuring out how to know or check whether we are at the 64-thread soft limit.
If we could check that, we would avoid adding new tasks to the queue until it is free to accept them.
For NOTIFY_CLOSE, we get a large deadline value in seconds, and we already do all of the NOTIFY_CLOSE processing with dispatch_async, yet we are still receiving the crash.
Here is the code for AUTH_OPEN:
dispatch_queue_t gNotifyCloseQueue = dispatch_queue_create(
    "com.example.notify_close_queue",
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL,
                                            QOS_CLASS_UTILITY, 0));

dispatch_queue_t gAuthOpenQueue = dispatch_queue_create(
    "com.example.auth_open_queue",
    dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL,
                                            QOS_CLASS_USER_INTERACTIVE, 0));
BOOL AuthOpenEventHandler(es_message_t *pesMsg)
{
//Some Processing we are doing here like Calculate the deadline in seconds etc. and we are receiving 14 seconds in Sonoma
// deadline - 14 seconds
if ( deadlineInSeconds < 10 )
{
dispatch_time_t triggerTime = dispatch_time(pesMsg->deadline, (int64_t)(-1 * NSEC_PER_SEC));
__block es_message_t *pesTempMsg;
pesTempMsg = es_copy_message(pesMsg);
dispatch_after(triggerTime, gAuthOpenQueue, ^{
if (pesTempMsg != NULL)
{
esRespondRes = es_respond_flags_result(pesClt,pesMsg,pesMsg->event.open.fflag,false);
if(ES_RESPOND_RESULT_SUCCESS != esRespondRes)
{
es_free_message(pesTempMsg);
return;
}
if (pesTempMsg != NULL) {
es_free_message(pesTempMsg);
}
}
return;
});
}
// Some Processing we are doing here to provide verdict and we are making sure that within 11 seconds we are setting the verdict
// we are setting iRetFlag here based on verdict
if (NULL != pesMsg)
{
esRespondRes = es_respond_flags_result(pesClt,pesMsg,iRetFlag,false);
if(ES_RESPOND_RESULT_SUCCESS != esRespondRes)
{
es_free_message(pesMsg);
return FALSE;
}
}
return TRUE;
}
Here is the code for NOTIFY_CLOSE:
BOOL NotifyEventHandler(es_message_t *pesMessage)
{
if (pesMessage->event_type == ES_EVENT_TYPE_NOTIFY_CLOSE && YES == pesMessage->event.close.modified)
{
__block es_message_t *pesTempMsg;
pesTempMsg = es_copy_message(pesMessage);
dispatch_async(gNotifyCloseQueue, ^{
// Performing Some processing on es_message_t
if (pesTempMsg != NULL)
{
es_free_message(pesTempMsg);
}
});
if (pesMessage != NULL)
{
es_free_message(pesMessage);
}
}
else
{
es_free_message(pesMessage);
}
return TRUE;
}
It would be helpful if someone could identify what we might be doing wrong in the code above and how to address those problems (a code snippet would be helpful) so we can avoid all of these possible crashes.
...
Thanks & Regards,
Mohamed Vasim
I came across a useful repo on GitHub:
https://github.com/GianniCarlo/DirectoryWatcher/blob/master/Sources/DirectoryWatcher/DirectoryWatcher.swift
self.queue = DispatchQueue.global()
self.source = DispatchSource.makeFileSystemObjectSource(fileDescriptor: descriptor, eventMask: .write, queue: self.queue)
self.source?.setEventHandler { [weak self] in
    self?.directoryDidChange()
}
self.source?.setCancelHandler {
    close(descriptor)
}
self.source?.resume()
How do I translate this to Objective-C? I have an app that was written in Objective-C, and I plan to incorporate this directory watcher into the project.
I have
var idleScanTimer = DispatchSource.makeTimerSource()
as a class ivar. When the object is started, I have
self.idleScanTimer.schedule(deadline: .now(), repeating: Double(5.0*60))
(and it sets an event handler that performs some periodic checks.)
When the object is stopped, it calls self.idleScanTimer.cancel().
At some point, the object containing it is deallocated, and ... sometimes, I think, not always, it crashes:
Crashed Thread: 61 Dispatch queue: NEFlow queue
[...]
Application Specific Information:
BUG IN CLIENT OF LIBDISPATCH: Release of an inactive object
[...]
Thread 61 Crashed:: Dispatch queue: NEFlow queue
0 libdispatch.dylib 0x7ff81c1232cd _dispatch_queue_xref_dispose.cold.2 + 24
1 libdispatch.dylib 0x7ff81c0f84f6 _dispatch_queue_xref_dispose + 55
2 libdispatch.dylib 0x7ff81c0f2dec -[OS_dispatch_source _xref_dispose] + 17
3 com.kithrup.simpleprovider 0x101df5fa7 MyClass.deinit + 87
4 com.kithrup.simpleprovider 0x101dfbdbb MyClass.__deallocating_deinit + 11
5 libswiftCore.dylib 0x7ff829a63460 _swift_release_dealloc + 16
6 com.kithrup.simpleprovider 0x101e122f4 0x101de7000 + 176884
7 libswiftCore.dylib 0x7ff829a63460 _swift_release_dealloc + 16
8 libsystem_blocks.dylib 0x7ff81bfdc654 _Block_release + 130
9 libsystem_blocks.dylib 0x7ff81bfdc654 _Block_release + 130
10 libdispatch.dylib 0x7ff81c0f3317 _dispatch_client_callout + 8
11 libdispatch.dylib 0x7ff81c0f9317 _dispatch_lane_serial_drain + 672
12 libdispatch.dylib 0x7ff81c0f9dfd _dispatch_lane_invoke + 366
13 libdispatch.dylib 0x7ff81c103eee _dispatch_workloop_worker_thread + 753
14 libsystem_pthread.dylib 0x7ff81c2a7fd0 _pthread_wqthread + 326
15 libsystem_pthread.dylib 0x7ff81c2a6f57 start_wqthread + 15
I tried changing it to an optional and having the deinit call .cancel() and set it to nil, but it still crashes.
I can't figure out how to get it deallocated in a small, standalone test program.
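For what it's worth, here is a hedged sketch (hypothetical class, not the code above) of the lifecycle rule that "Release of an inactive object" usually points at: a source returned by makeTimerSource() starts out inactive, and libdispatch traps if it is released without ever having been activated. One common pattern is to track whether the source was ever resumed and, if not, cancel and resume it once before it goes away.
import Dispatch

// Hypothetical class; a sketch of one common pattern.
final class IdleScanner {
    private let idleScanTimer = DispatchSource.makeTimerSource()
    private var started = false

    func start() {
        idleScanTimer.schedule(deadline: .now(), repeating: 5.0 * 60)
        idleScanTimer.setEventHandler { /* periodic checks */ }
        idleScanTimer.resume()          // activates the source
        started = true
    }

    func stop() {
        idleScanTimer.cancel()
    }

    deinit {
        if !started {
            // Never activated: cancel and resume once so the source is not
            // released while still inactive (which is what libdispatch traps on).
            idleScanTimer.cancel()
            idleScanTimer.resume()
        }
    }
}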
Hi,
When using Swift Concurrency, blocking tasks like file I/O, GPU work, and networking can prevent forward progress, potentially exhausting the cooperative thread pool and underutilizing the CPU. It's been recommended to offload these tasks from the cooperative thread pool.
Is my understanding correct that the preferred way to do this is by creating async tasks via Dispatch or OperationQueue? And combining these with Continuations if a return value from the task is required? Or should I always be using Continuations in combination with Dispatch/OperationQueue?
There are also Executors, but the documentation seems a bit limited on how to use them. The new TaskExecutor is also only available on the latest betas.
My question is basically: what is the recommended way to offload a task?
Thanks!
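For reference, one common pattern (a hedged sketch with hypothetical names) is to run the blocking work on a Dispatch queue and bridge the result back with a checked continuation, which keeps the blocking call off the cooperative thread pool.
import Foundation

// Hypothetical names; a sketch of bridging blocking work back into async code.
let blockingWorkQueue = DispatchQueue(label: "com.example.blocking-work", qos: .utility)

func loadFileSize(at path: String) async -> Int {
    await withCheckedContinuation { continuation in
        blockingWorkQueue.async {
            // Blocking file I/O stays off the cooperative thread pool.
            let attributes = try? FileManager.default.attributesOfItem(atPath: path)
            let size = attributes?[.size] as? Int ?? 0
            continuation.resume(returning: size)
        }
    }
}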
I have the following var in an @Observable class:
var displayResult: String {
if let currentResult = currentResult, let decimalResult = Decimal(string: currentResult) {
let result = decimalResult.formatForDisplay()
UIAccessibility.post(notification: .announcement, argument: "Current result \(result)")
return result
} else {
return "0"
}
}
The UIAccessibility.post call gives me this warning:
Reference to static property 'announcement' is not concurrency-safe because it involves shared mutable state; this is an error in Swift 6
How can I avoid this?
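One possible workaround (a hedged sketch, assuming iOS 17 or later is available and that a free function is an acceptable place for this) is to avoid referencing the UIKit static property altogether and post the announcement through the Accessibility framework's notification type instead.
import Accessibility

// A sketch of one alternative that avoids the non-concurrency-safe
// UIAccessibility static property.
func announce(_ text: String) {
    AccessibilityNotification.Announcement(text).post()
}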