Concurrency


Concurrency is the notion of multiple things happening at the same time.


Posts under Concurrency tag

88 Posts
Post marked as solved
6 Replies
687 Views
Hi, given a pthread_id (†), is there a way to find the associated NSThread (when one exists)? Perhaps using an undocumented / unsupported method – I don't mind. Thank you! (†) The pthread_id is neither that of the current thread nor of the main thread.
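There is no public API that maps a pthread_t back to an NSThread, so the sketch below is only one assumed workaround: have each thread you create register itself, keyed by its Mach thread port, and look the NSThread up in that table later. The ThreadRegistry type is hypothetical.

import Foundation

// Hypothetical registry: each thread calls registerCurrentThread() early in its entry point.
final class ThreadRegistry {
    static let shared = ThreadRegistry()
    private let lock = NSLock()
    private var threadsByPort: [mach_port_t: Thread] = [:]

    // Call from the thread itself, e.g. at the top of its main/run method.
    func registerCurrentThread() {
        let port = pthread_mach_thread_np(pthread_self())
        lock.lock(); defer { lock.unlock() }
        threadsByPort[port] = Thread.current
    }

    // Look up an NSThread given a pthread_t obtained elsewhere.
    func thread(for pthread: pthread_t) -> Thread? {
        let port = pthread_mach_thread_np(pthread)
        lock.lock(); defer { lock.unlock() }
        return threadsByPort[port]
    }
}

Entries for threads that have exited are never removed in this sketch; a real version would need to unregister them.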
Posted Last updated
.
Post marked as solved
3 Replies
467 Views
Hello, I'm currently rewriting my entire networking layer to use Swift concurrency. I have a Swift package that wraps NWConnection with my custom framing protocol. The Network framework itself doesn't support Swift concurrency, so I built an API around it. To receive messages I use an AsyncThrowingStream, and it works like this:

let connection = MyNetworkFramework(host: "example.org")

Task {
    await connection.start()
    for try await result in connection.receive() {
        // do something with result
    }
}

That's pretty neat and I like it a lot, but now things get tricky. In my application I open up to 10 different TCP streams. With this API change, every TCP connection runs in its own Task like the one above, and I have no idea how to handle the possible errors thrown by the .receive() function inside those Tasks. My first idea was to use a ThrowingTaskGroup, and I think that would work, but the biggest problem is that I initially start with, say, 4 TCP connections and need the ability to add additional ones later if I need them, and it doesn't seem possible to add a Task to a ThrowingTaskGroup afterwards. So what's a good way to handle a case like that? I have an actor which handles everything in its isolated context, and basically I just need the start function to throw if any of the Tasks I open up throws. Here is a basic sample of how it's structured. Thanks, Vinz

internal actor MultiConnector {
    internal var count: Int { connections.count }

    private var connections: [ConnectionsModel] = []
    private let host: String
    private let port: UInt16
    private let parameters: NWParameters

    internal init(host: String, port: UInt16, parameters: NWParameters) {
        self.host = host
        self.port = port
        self.parameters = parameters
    }

    internal func start(count: Int) async throws -> Void {
        guard connections.isEmpty else { return }
        guard count > .zero else { return }
        try await sockets(from: count)
    }

    internal func cancel() -> Void {
        guard !connections.isEmpty else { return }
        for connection in connections { connection.connection.cancel() }
        connections.removeAll()
    }

    internal func sockets(from count: Int) async throws -> Void {
        while connections.count < count { try await connect() }
    }
}

// MARK: - Private API -

private extension MultiConnector {
    private func connect() async throws -> Void {
        let uuid = UUID(), connection = MyNetworkFramework(host: host, port: port, parameters: parameters)
        connections.append(.init(id: uuid, connection: connection))
        let task = Task { [weak self] in
            guard let self else { return }
            try await stream(connection: connection, id: uuid)
        }
        try await connection.start()
        await connection.send(message: "Sample Message")
        // try await task.value <-- this does not work because the stream runs indefinitely until I cancel it
        // (that's expected and intended, but I need to handle the case where the stream throws an error)
    }

    private func stream(connection: MyNetworkFramework, id: UUID) async throws -> Void {
        for try await result in connection.receive() {
            if case .message(_) = result {
                await connection.send(message: "Sample Message")
            }
            // ... more to handle
        }
    }
}
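Not the poster's solution, just a minimal sketch of one pattern that sidesteps the "can't add to a running ThrowingTaskGroup" problem: keep one unstructured Task per connection inside the actor and have each task report a failure back to the actor, which can then cancel its siblings and publish the error. ConnectionPool and the receiveLoop closure are hypothetical stand-ins for MyNetworkFramework's receive loop.

import Foundation

actor ConnectionPool {
    private var tasks: [UUID: Task<Void, Never>] = [:]
    private(set) var lastFailure: Error?

    // Add a connection at any time; its receive loop runs until it throws or is cancelled.
    func add(_ receiveLoop: @escaping @Sendable () async throws -> Void) {
        let id = UUID()
        tasks[id] = Task { [weak self] in
            do {
                try await receiveLoop()
            } catch {
                await self?.connectionFailed(id: id, error: error)
            }
        }
    }

    // Cancel every receive loop, e.g. when tearing the pool down.
    func cancelAll() {
        tasks.values.forEach { $0.cancel() }
        tasks.removeAll()
    }

    private func connectionFailed(id: UUID, error: Error) {
        tasks[id] = nil
        lastFailure = error   // or forward the error through an AsyncStream / delegate
        cancelAll()           // tear down the remaining connections as well
    }
}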
Posted
by Vinz1911.
Last updated
.
Post not yet marked as solved
2 Replies
472 Views
I'm hoping someone can help me understand some unexpected behavior in a @MainActor function which internally creates a Task that calls a method on a background actor. Normally, the function would call the task, pause until the task completes, and then finish the rest of the function. However, when the function is annotated @MainActor, the internal Task appears to become detached and execute asynchronously, so that it finishes after the @MainActor function. The code below demonstrates this behavior in a playground:

actor SeparateActor {
    func actorFunc(_ str: String) {
        print("\tActorFunc(\(str))")
    }
}

class MyClass {
    var sa = SeparateActor()

    @MainActor func mainActorFunctionWithTask() {
        print("mainActorFunctionWithTask Start")
        Task {
            await self.sa.actorFunc("mainActorFunctionWithTask")
        }
        print("mainActorFunctionWithTask End")
    }

    func normalFuncWithTask() {
        print("normalFuncWithTask Start")
        Task {
            await self.sa.actorFunc("normalFuncWithTask")
        }
        print("normalFuncWithTask End")
    }
}

Task {
    let mc = MyClass()
    print("\nCalling normalFuncWithTask")
    mc.normalFuncWithTask()
    print("\nCalling mainActorFunctionWithTask")
    await mc.mainActorFunctionWithTask()
}

I would expect both normalFuncWithTask and mainActorFunctionWithTask to behave the same, with actorFunc being called before the end of the function, but instead my @MainActor function completes before the task:

Calling normalFuncWithTask
normalFuncWithTask Start
	ActorFunc(normalFuncWithTask)
normalFuncWithTask End

Calling mainActorFunctionWithTask
mainActorFunctionWithTask Start
mainActorFunctionWithTask End
	ActorFunc(mainActorFunctionWithTask)
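One hedged reading of this: a Task created inside a @MainActor function inherits main-actor isolation, so it cannot start until the synchronous main-actor code that created it returns, whereas the non-isolated version's Task runs on the cooperative pool and merely happens to win the race. A minimal sketch of how to force the expected ordering, assuming the method is added to the MyClass from the post, is to await the task explicitly:

@MainActor
func mainActorFunctionWithTaskAwaited() async {
    print("mainActorFunctionWithTaskAwaited Start")
    let task = Task {
        await self.sa.actorFunc("mainActorFunctionWithTaskAwaited")
    }
    await task.value   // suspend here so the child work finishes before "End"
    print("mainActorFunctionWithTaskAwaited End")
}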
Posted
by zabelc.
Last updated
.
Post marked as solved
1 Replies
403 Views
I run the following code in an actor:

func aaa() async throws -> Data {
    async let result = Task(
        operation: {
            ... // decompressing data through try (data as NSData).decompressed(using: .lzfse) as Data
        }
    ).result
    switch await result {
    case .success(let value): return value
    case .failure(let error): throw error
    }
}

I do it this way because I do not want to block the actor with the decompression, and there is no state change in the actor afterwards. I would say that the actor plays no significant role here. What is important is that many (14) concurrent tasks run in parallel, although NOT on the same data. It runs fine for a while (dozens/hundreds of data blocks decompressed), and then the following happens: Activity Monitor (the macOS GUI tool) shows almost no User CPU time and approx. 75% System CPU time; the rest is idle. (When it runs fine, User CPU time is 95+%.) When I pause the run in Xcode (the release config behaves the same), all threads are in mach_msg2_trap:

#0  0x0000000180ac21f4 in mach_msg2_trap ()
#1  0x0000000180ad4b24 in mach_msg2_internal ()
#2  0x0000000180ac52fc in vm_copy ()
#3  0x0000000180916b78 in szone_realloc ()
#4  0x000000018093cfb0 in _malloc_zone_realloc ()
#5  0x000000018093d7e8 in _realloc ()
#6  0x0000000180bb8a10 in __CFSafelyReallocate ()
#7  0x0000000181d00e30 in _NSMutableDataGrowBytes ()
#8  0x0000000181ce2630 in -[NSConcreteMutableData appendBytes:length:] ()
#9  0x00000001823c30d8 in -[_NSDataCompressor processBytes:size:flags:] ()
#10 0x00000001823c32c4 in -[NSData(NSDataCompression) _produceDataWithCompressionOperation:algorithm:handler:] ()
#11 0x00000001823c3598 in -[NSData(NSDataCompression) _decompressedDataUsingCompressionAlgorithm:error:] ()

It looks like something is wrong with safe reallocation; however, if this were a bug, all of macOS would be stuck. Any idea, please?
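Not a diagnosis of the stall, just a minimal sketch of a more conventional way to keep the decompression off the calling actor, under the assumption that only the result matters to the caller:

func decompress(_ data: Data) async throws -> Data {
    // Detached so the work does not run on the calling actor's executor.
    let task = Task.detached(priority: .userInitiated) {
        try (data as NSData).decompressed(using: .lzfse) as Data
    }
    return try await task.value
}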
Posted
by hibernat.
Last updated
.
Post not yet marked as solved
1 Replies
268 Views
Hello, I recently implemented a lock that uses OSAllocatedUnfairLock on iOS 16+ and os_unfair_lock below iOS 16. I know that using os_unfair_lock from Swift is tricky and error-prone. For example, the well-known open-source library Alamofire had even been using os_unfair_lock incorrectly with respect to Swift's lifetime rules (they fixed it, as in link [1]). So I implemented a lock like the one below. To use os_unfair_lock safely, I used the Noncopyable protocol added in Swift 5.9 [2], and I allocated memory on the heap for the os_unfair_lock.

public struct UnfairLock: ~Copyable {
    public init() {
        if #available(iOS 16.0, *) {
            _osAllocatedUnfairLock = OSAllocatedUnfairLock()
        } else {
            self.unfairLock = UnsafeMutablePointer.allocate(capacity: 1)
        }
    }

    deinit {
        if #unavailable(iOS 16.0) {
            unfairLock!.deallocate()
        }
    }

    public func lock() {
        if #available(iOS 16.0, *) {
            osAllocatedUnfairLock.lock()
        } else {
            os_unfair_lock_lock(unfairLock!)
        }
    }

    public func unlock() {
        if #available(iOS 16.0, *) {
            osAllocatedUnfairLock.unlock()
        } else {
            os_unfair_lock_unlock(unfairLock!)
        }
    }

    public func with<T>(_ closure: () -> T) -> T {
        lock()
        defer { unlock() }
        return closure()
    }

    private var _osAllocatedUnfairLock: Any?
    private var unfairLock: UnsafeMutablePointer<os_unfair_lock_s>?

    @available(iOS 16.0, *)
    private var osAllocatedUnfairLock: OSAllocatedUnfairLock<Void> {
        // swiftlint:disable force_cast
        _osAllocatedUnfairLock as! OSAllocatedUnfairLock
        // swiftlint:enable force_cast
    }
}

However, I got several crashes from iOS 14–15 users like this (the app targets iOS 14+, and on iOS 16+ it uses OSAllocatedUnfairLock). Sorry for using a third-party crash reporting tool's log, but I think it is enough to understand the issue:

BUG IN CLIENT OF LIBPLATFORM: os_unfair_lock is corrupt
Crashed: com.foo.bar.queue
0  libsystem_platform.dylib 0x6144 _os_unfair_lock_corruption_abort + 88
1  libsystem_platform.dylib 0xa20 _os_unfair_lock_lock_slow + 320
2  FoooBarr 0x159416c closure #1 in static FooBar.baz() + 6321360
3  FoooBarr 0x2e65b8 thunk for @escaping @callee_guaranteed @Sendable () -> () + 4298794424 (<compiler-generated>:4298794424)
4  libdispatch.dylib 0x1c04 _dispatch_call_block_and_release + 32
5  libdispatch.dylib 0x3950 _dispatch_client_callout + 20
6  libdispatch.dylib 0x6e04 _dispatch_continuation_pop + 504
7  libdispatch.dylib 0x6460 _dispatch_async_redirect_invoke + 596
8  libdispatch.dylib 0x14f48 _dispatch_root_queue_drain + 388
9  libdispatch.dylib 0x15768 _dispatch_worker_thread2 + 164
10 libsystem_pthread.dylib 0x1174 _pthread_wqthread + 228
11 libsystem_pthread.dylib 0xf50 start_wqthread + 8

(libplatform's source code [3] suggests that __ulock_wait returns an error, but I don't know the details.) Per @eskimo's suggestion in [4], I will change my code to use NSLock until OSAllocatedUnfairLock is available on all users' devices (i.e. iOS 16+), but I still want to know why this crash happens. I believed that making the struct noncopyable was enough to use os_unfair_lock safely, but it seems that it is not. Did I miss something? Or is there any other way to use os_unfair_lock safely?

[1] https://github.com/Alamofire/Alamofire/commit/1b89a57c2f272408b84d20132a2ed6628e95d3e2
[2] https://github.com/apple/swift-evolution/blob/1b0b339bc3072a83b5a6a529ae405a0f076c7d5d/proposals/0390-noncopyable-structs-and-enums.md
[3] https://github.com/apple-open-source/macos/blob/ea4cd5a06831aca49e33df829d2976d6de5316ec/libplatform/src/os/lock.c#L555
[4] https://forums.developer.apple.com/forums/thread/712379
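Not an answer to why the noncopyable struct crashes, just a minimal sketch of the more traditional pre-iOS-16 shape: a final class that owns the heap storage. One detail worth flagging as an assumption about the difference: allocate(capacity:) alone returns uninitialized memory, so the sketch explicitly initializes the pointee.

import os

final class UnfairLockFallback: @unchecked Sendable {
    private let lockPointer: UnsafeMutablePointer<os_unfair_lock>

    init() {
        lockPointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        lockPointer.initialize(to: os_unfair_lock())   // explicit initialization of the lock storage
    }

    deinit {
        lockPointer.deinitialize(count: 1)
        lockPointer.deallocate()
    }

    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(lockPointer)
        defer { os_unfair_lock_unlock(lockPointer) }
        return try body()
    }
}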
Posted
by daniel.l.
Last updated
.
Post not yet marked as solved
1 Replies
455 Views
CIFormat static vars such as RGBA16 give concurrency warnings: "Reference to static property 'RGBA16' is not concurrency-safe because it involves shared mutable state; this is an error in Swift 6". Should all these formats be static let to suppress the warnings (future errors)?
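For context, a minimal sketch of the kind of use site that trips the diagnostic under strict concurrency checking (the surrounding function is hypothetical):

import CoreImage
import Foundation

func makeImage(from data: Data, width: Int, height: Int) -> CIImage {
    // Referencing CIFormat.RGBA16 here is what the checker flags,
    // because the property is declared as a static var in the SDK.
    CIImage(bitmapData: data,
            bytesPerRow: width * 8,   // RGBA16 is 8 bytes per pixel
            size: CGSize(width: width, height: height),
            format: .RGBA16,
            colorSpace: nil)
}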
Posted
by yvsong.
Last updated
.
Post marked as solved
2 Replies
1.4k Views
The new Xcode 15.3 Release Candidate produces errors with strict concurrency checking that the usual pattern of using OSLog with a static property like static let logger = Logger(...) is not safe. "Static property 'logger' is not concurrency-safe because it is not either conforming to 'Sendable' or isolated to a global actor; this is an error in Swift 6" Is Logger thread safe and just not marked Sendable? Would it be "safe" to use nonisolated(unsafe) static let logger = Logger(...)?
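A sketch of the pattern in question together with the workaround the poster asks about; the enum namespace and the subsystem/category strings are placeholders:

import OSLog

enum Log {
    // Flagged by Xcode 15.3's strict concurrency checking:
    // static let logger = Logger(subsystem: "com.example.app", category: "general")

    // The workaround being asked about: assert to the compiler that this access is safe.
    nonisolated(unsafe) static let logger = Logger(subsystem: "com.example.app", category: "general")
}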
Posted Last updated
.
Post not yet marked as solved
6 Replies
546 Views
Dear Sirs, I've written a SwiftUI application in Xcode where I'm using multiple threads which are synchronized using NSLock and NSRecursiveLock. This works in Debug mode, and in Release mode as long as I don't use the Swift Compiler - Code Generation -> Optimization Level -O ("Optimize for Speed"). This is unfortunately the default setting, but it results in multiple threads accessing the same piece of code that is encapsulated inside an NSLock. Is this an intended behaviour? Thanks and best regards, Johannes
Posted Last updated
.
Post marked as solved
3 Replies
542 Views
I was wondering if there is a way, while debugging, to observe the 'QoS boosting' behavior that is implemented in various places to provide priority inversion avoidance. The pthread_override_qos_class_start/end_np header comments specifically say that overrides aren't reflected in the qos_class_self() and pthread_get_qos_class_np() return values. As far as I can tell the 'CPU Report' UI in Xcode also does not reflect this information (perhaps for the reason the header comments call out). Is there a direct mechanism to observe this behavior? Presumably a heuristic empirical test could be done to compare throughput of a queue that should have its priority boosted and one that should not, but I would prefer a less opaque means of verification if possible. Thanks in advance!
Posted
by jamie_sq.
Last updated
.
Post not yet marked as solved
2 Replies
1.9k Views
Hi, I'm trying to use async/await for KVO and it seems something is broken. For some reason, the for-in body is never entered when I change the observed property.

import Foundation
import PlaygroundSupport

class TestObj: NSObject {
    @objc dynamic var count = 0
}

let obj = TestObj()

Task {
    for await value in obj.publisher(for: \.count).values {
        print(value)
    }
}

Task.detached {
    try? await Task.sleep(for: .microseconds(100))
    obj.count += 1
}

Task.detached {
    try? await Task.sleep(for: .microseconds(200))
    obj.count += 1
}

PlaygroundPage.current.needsIndefiniteExecution = true

Expected result: 0, 1, 2
Actual result: 0

Does anyone know what is wrong here?
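Not a diagnosis, just a minimal sketch of one workaround: bridge classic closure-based KVO into an AsyncStream instead of going through the Combine publisher's .values. It assumes the same TestObj from the post.

import Foundation

func counts(of obj: TestObj) -> AsyncStream<Int> {
    AsyncStream { continuation in
        // NSKeyValueObservation keeps delivering changes until it is invalidated.
        let observation = obj.observe(\.count, options: [.initial, .new]) { _, change in
            if let newValue = change.newValue {
                continuation.yield(newValue)
            }
        }
        continuation.onTermination = { _ in observation.invalidate() }
    }
}

// Usage:
// Task { for await value in counts(of: obj) { print(value) } }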
Posted
by sviat_sem.
Last updated
.
Post not yet marked as solved
4 Replies
600 Views
I am perplexed as to how to use async/await. In the following example, I don't use GCD or performSelector(inBackground:with:). The view controller is an NSViewController, but it doesn't make any difference whether it's NSViewController or UIViewController.

import Cocoa

class ViewController: NSViewController {
    func startWriteImages() {
        Task {
            let bool = await startWriteImagesNext()
            if bool {
                print("I'm done!")
            }
        }
    }

    func startWriteImagesNext() async -> Bool {
        // pictures is a path to a folder in the sandbox folder
        // appDelegate.defaultFileManager is a variable pointing to FileManager.default in AppDelegate
        let pictURL = URL(fileURLWithPath: pictures)
        if let filePaths = try? self.appDelegate.defaultFileManager.contentsOfDirectory(atPath: pictURL.path) {
            for file in filePaths {
                let fileURL = pictURL.appending(component: file)
                if self.appDelegate.defaultFileManager.fileExists(atPath: fileURL.path) {
                    let newURL = self.folderURL.appending(component: file)
                    do {
                        try self.appDelegate.defaultFileManager.copyItem(at: fileURL, to: newURL)
                    } catch {
                        print("Ugghhh...")
                    }
                }
            }
            return true
        }
        return false
    }

    func startWriteImagesNext2() async -> Bool {
        let pictURL = URL(fileURLWithPath: pictures)
        if let filePaths = try? self.appDelegate.defaultFileManager.contentsOfDirectory(atPath: pictURL.path) {
            DispatchQueue.global().async {
                for file in filePaths {
                    let fileURL = pictURL.appending(component: file)
                    if self.appDelegate.defaultFileManager.fileExists(atPath: fileURL.path) {
                        let newURL = self.folderURL.appending(component: file)
                        do {
                            try self.appDelegate.defaultFileManager.copyItem(at: fileURL, to: newURL)
                        } catch {
                            print("Ugghhh...")
                        }
                    }
                }
            }
            return true
        }
        return false
    }
}

In the code above, I'm saving each file in the folder to a user-selected folder (self.folderURL), and the application should execute the print statement only when the work is done. Since it's heavy-duty work, I want to use GCD or performSelector(inBackground:with:). If I use the former (startWriteImagesNext2), the application executes the print statement right at the beginning. I suppose I cannot use GCD with async. So how can I perform heavy-duty work? Muchos thankos.
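A minimal sketch of one alternative, assuming the same pictures path and folderURL properties from the post: run the copy loop in a detached task and await its value, so "I'm done!" only prints after the copying finishes and no GCD queue is involved.

func startWriteImagesDetached() async -> Bool {
    let source = URL(fileURLWithPath: pictures)
    let destination = folderURL
    let task = Task.detached(priority: .utility) { () -> Bool in
        let fileManager = FileManager.default
        guard let files = try? fileManager.contentsOfDirectory(atPath: source.path) else { return false }
        for file in files {
            let fileURL = source.appending(component: file)
            guard fileManager.fileExists(atPath: fileURL.path) else { continue }
            do {
                try fileManager.copyItem(at: fileURL, to: destination.appending(component: file))
            } catch {
                print("Ugghhh...", error)
            }
        }
        return true
    }
    return await task.value
}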
Posted
by Tomato.
Last updated
.
Post marked as solved
1 Replies
765 Views
Hello community, I am in search of a tutorial that comprehensively explains the proper utilization of SwiftData for updating model data in a background thread. From my understanding, there is extensive coverage on creating a model and loading model data into a view, likely due to Apple's detailed presentation on this aspect of SwiftData during WWDC23. Nevertheless, I am encountering difficulties in finding a complete tutorial that addresses the correct usage of SwiftData for model updates in a background thread. While searching the web, I came across a few discussions on Stack Overflow and this forum that potentially provide an approach. However, they were either incomplete or proved ineffective in practical application. I would greatly appreciate any links to tutorials that thoroughly cover this topic.
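While waiting for a full tutorial, here is a minimal sketch of the pattern most discussions converge on, using a hypothetical Person model: perform background writes inside a ModelActor that is created with the same ModelContainer the views use.

import SwiftData

@Model
final class Person {
    var name: String
    init(name: String) { self.name = name }
}

@ModelActor
actor BackgroundImporter {
    // Runs on the actor's own executor, not on the main actor.
    func importPeople(named names: [String]) throws {
        for name in names {
            modelContext.insert(Person(name: name))
        }
        try modelContext.save()
    }
}

// Usage (e.g. from a view model):
// let importer = BackgroundImporter(modelContainer: container)
// try await importer.importPeople(named: ["Ada", "Grace"])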
Posted Last updated
.
Post not yet marked as solved
1 Replies
580 Views
I have observed in my application that even if I set the QoS of all the threads I create to USER_INTERACTIVE, the OS produces the warning "Thread running at User-Interactive QoS class waiting on a lower QoS thread running at Default QoS class. Investigate ways to avoid priority inversions". I am aware that it is not right to set all threads to USER_INTERACTIVE QoS, but if we assume this case for a moment, does it mean that the OS can dynamically change the QoS of threads even if we set it explicitly? Is this the correct understanding? Also, if the main thread has QoS USER_INTERACTIVE, will its child threads inherit the QoS value from the parent thread if we do not set any QoS for a pthread?
Posted Last updated
.
Post not yet marked as solved
5 Replies
2.1k Views
When marking the ViewController and the function with @MainActor, the assertion that checks that the UI is updated on the main thread fails. How do I guarantee that a function runs on the main thread when using @MainActor? Example code:

import UIKit

@MainActor
class ViewController: UIViewController {
    let updateObject = UpdateObject()

    override func viewDidLoad() {
        super.viewDidLoad()

        updateObject.fetchSomeData { [weak self] _ in
            self?.updateSomeUI()
        }
    }

    @MainActor
    func updateSomeUI() {
        assert(Thread.isMainThread) // Assertion failed!
    }
}

class UpdateObject {
    func fetchSomeData(completion: @escaping (_ success: Bool) -> Void) {
        DispatchQueue.global().async {
            completion(true)
        }
    }
}

Even changing DispatchQueue.global().async to Task.detached does not work. Tested with Xcode 13.2.1 and Xcode 13.3 RC.
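A minimal sketch of one way to make the hop explicit (one of several possible fixes, e.g. the completion handler could instead be declared @MainActor): replace the call in viewDidLoad with a main-actor Task so the isolation holds no matter which thread invokes the completion.

updateObject.fetchSomeData { [weak self] _ in
    Task { @MainActor in
        self?.updateSomeUI()
    }
}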
Posted
by BAR115.
Last updated
.
Post not yet marked as solved
8 Replies
759 Views
I'm having a hard time relying on TSAN to detect problems due to its rightful insistence on reporting data races (I know, stick with me). Picture the following implementation of a lazily-allocated property in an Obj-C class:

@interface MyClass {
    id _myLazyValue; // starts as nil as all other Obj-C ivars
}
@end

@implementation MyClass

- (id)myLazyValue {
    if (_myLazyValue == nil) {
        @synchronized(self) {
            if (_myLazyValue == nil) {
                _myLazyValue = <expensive computation>
            }
        }
    }
    return _myLazyValue;
}

@end

The first line in the method is reading a pointer-sized chunk of memory outside of the protection provided by the @synchronized(...) statement. That same value may be written by a different thread within the execution of the @synchronized block. This is what TSAN complains about, but I need it not to. The code above ensures the ivar is written by at most one thread. The read is unguarded, but it is impossible for any thread to read back a non-nil value that is invalid, uninitialized or unretained. Why go through this trouble? Such a lazily-allocated property usually locks on @synchronized once, until (at most) one thread does any work. Other threads may be temporarily waiting on the same lock, but again only while the value is being initialized. The cost of allocation and initialization is guaranteed to be paid once: multiple threads cannot initialize the value multiple times (that's the reason for the second _myLazyValue == nil check within the scope of the @synchronized block). Subsequent accesses of the initialized property skip locking altogether, which is exactly the performance we want from a lazily-allocated, immutable property that still guarantees thread-safe access. Assuming there isn't a big embarrassing hole in my logic, is there a way to decorate specific portions of our sources (akin to #pragma statements that disable certain warnings) so that you can mark any read/write access to a specific value as "safe"? Is the most granular tool for this purpose __attribute__((no_sanitize("thread")))? Ideally one would want to ask TSAN to ignore only specific reads/writes, rather than the entire body of a function. Thank you!
Posted
by FxFactory.
Last updated
.
Post marked as solved
2 Replies
407 Views
Suppose I have the following function:

func doWork(_ someValue: Int, completionHandler: @escaping () -> Void) {
    let q = DispatchQueue(label: "doWork.queue")   // a serial queue for the work; DispatchQueue requires a label
    q.async {
        // Long time of work
        completionHandler()
    }
}

How do I turn it into an async function so that I can call it using await doWork()? Are there guidelines/principles/practices for this purpose?
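The standard bridge is a checked continuation; a minimal sketch wrapping the function above (the async overload's shape is just one choice):

func doWork(_ someValue: Int) async {
    await withCheckedContinuation { continuation in
        doWork(someValue) {
            continuation.resume()   // resume exactly once, when the callback fires
        }
    }
}

// Usage:
// await doWork(42)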
Posted
by imneo.
Last updated
.
Post marked as solved
7 Replies
1.1k Views
Given that SwiftUI and modern programming idioms promote asynchronous activity, and observing a data model and reacting to changes, I wonder why it's so cumbersome in Swift at this point. Like many, I have run up against the problem where you perform an asynchronous task (like fetching data from the network) and store the result in a published variable in an observed object. This would appear to be an extremely common scenario at this point, and indeed it's exactly the one posed in question after question you find online about this resulting error: Publishing changes from background threads is not allowed. Then why is it done this way? Why aren't the changes simply published on the main thread automatically? Because they aren't, people suggest a bunch of workarounds, like making the enclosing object a MainActor. This just creates a cascade of errors in my application; but also (and I may not be interpreting the documentation correctly) I don't want the owning object to do everything on the main thread. So the go-to workaround appears to be wrapping every potentially problematic setting of a variable in a call to DispatchQueue.main. Talk about tedious and error-prone. Not to mention unmaintainable, since I or some future maintainer may be calling a function a level or two or three above where a published variable is actually set. And what if you decide to publish a variable that wasn't before, and now you have to run around checking every potential change to it? Is this not a mess?
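For what it's worth, a minimal sketch of the shape most answers suggest (ViewModel and fetchItems are hypothetical): isolate only the observable object to the main actor, let the slow work suspend off it, and the assignment to the published property then lands on the main thread without any DispatchQueue.main calls.

import Combine

@MainActor
final class ViewModel: ObservableObject {
    @Published private(set) var items: [String] = []

    func reload() async {
        let fetched = await fetchItems()   // suspends; the fetch runs off the main actor
        items = fetched                    // resumes on the main actor, so publishing is safe
    }
}

// Placeholder for the real network call.
func fetchItems() async -> [String] {
    ["example"]
}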
Posted Last updated
.
Post not yet marked as solved
1 Replies
2.6k Views
How does one add Codable conformance to a class that needs to be isolated to the MainActor? For example, the following code gives compiler errors:

@MainActor final class MyClass: Codable {
    var value: Int

    enum CodingKeys: String, CodingKey {
        case value
    }

    init(from decoder: Decoder) throws { // <-- Compiler error: Initializer 'init(from:)' isolated to global actor 'MainActor' can not satisfy corresponding requirement from protocol 'Decodable'
        let data = try decoder.container(keyedBy: CodingKeys.self)
        self.value = try data.decode(Int.self, forKey: .value)
    }

    func encode(to encoder: Encoder) throws { // <-- Compiler error: Instance method 'encode(to:)' isolated to global actor 'MainActor' can not satisfy corresponding requirement from protocol 'Encodable'
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(value, forKey: .value)
    }
}

I'm definitely struggling to get my head around actors and @MainActor at the moment!
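A minimal sketch of one workaround, under the simplifying assumption that the stored data can be immutable: mark the Codable requirements nonisolated. Initializing the stored property from a nonisolated init is permitted, and reading it back in encode is permitted because it is a let of a Sendable type.

@MainActor
final class MyModel: Codable {
    let value: Int

    enum CodingKeys: String, CodingKey { case value }

    nonisolated init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        value = try container.decode(Int.self, forKey: .value)
    }

    nonisolated func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode(value, forKey: .value)
    }
}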
Posted Last updated
.
Post not yet marked as solved
3 Replies
497 Views
Hi! I'm seeing some confusing behavior with a propertyWrapper that tries to constrain its wrappedValue to MainActor. I'm using this in a SwiftUI.View… but I'm seeing some confusing behavior when I try to add that component to my graph. There seems to be some specific problem when body is defined in an extension. I start with a simple property wrapper:

@propertyWrapper struct Wrapper<T> {
    @MainActor var wrappedValue: T
}

I then try a simple App with a View that uses a Wrapper:

@main
struct MainActorDemoApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @Wrapper var value = "Hello, world!"

    var body: some View {
        Text(self.value)
    }
}

This code compiles with no problems for me. For style… I might choose to define the body property of my MainActorDemoApp with an extension:

@main
struct MainActorDemoApp: App {
//    var body: some Scene {
//        WindowGroup {
//            ContentView()
//        }
//    }
}

extension MainActorDemoApp {
    var body: some Scene {
        WindowGroup {
            ContentView() // Call to main actor-isolated initializer 'init()' in a synchronous nonisolated context
        }
    }
}

struct ContentView: View {
    @Wrapper var value = "Hello, world!"

    var body: some View {
        Text(self.value)
    }
}

Explicitly marking my body as @MainActor fixes the compiler error:

@main
struct MainActorDemoApp: App {
//    var body: some Scene {
//        WindowGroup {
//            ContentView()
//        }
//    }
}

extension MainActorDemoApp {
    @MainActor var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

struct ContentView: View {
    @Wrapper var value = "Hello, world!"

    var body: some View {
        Text(self.value)
    }
}

So I guess the question is… why? Why would code that breaks when my body is in an extension not break when my body is in my original struct definition? Is this intended behavior? I'm on Xcode Version 15.2 (15C500b) and Swift 5.9.2 (swiftlang-5.9.2.2.56 clang-1500.1.0.2.5). It's unclear to me what is "wrong" about the code that broke… any ideas?
Posted Last updated
.