Concurrency


Concurrency is the notion of multiple things happening at the same time.

Posts under Concurrency tag

161 Posts

Post · Replies · Boosts · Views · Activity

Is there a way to update Observable on the main thread but not to read from it?
I've been obsessed with this topic for the past couple of weeks and unfortunately there just isn't a good answer out there even from the community. Therefore I am hoping that I can summon Quinn to get an official Apple position (on what's seemingly a fairly fundamental part of using SwiftUI). Consider this simple example: import Foundation @MainActor @Observable class UserViewModel { var name: String = "John Doe" var age: Int = 30 // other properties and logic } // NetworkManager does not need to update the UI but needs to read/write from UserViewModel. class NetworkManager { func updateUserInfo(viewModel: UserViewModel) { Task { // Read values from UserViewModel prior to making a network call let userName: String let userAge: Int // Even for a simple read, we have to jump onto the main thread await MainActor.run { userName = viewModel.name userAge = viewModel.age } // Now perform network call with the retrieved values print("Making network call with userName: \(userName) and userAge: \(userAge)") // Simulate network delay try await Task.sleep(nanoseconds: 1_000_000_000) // After the network call, we update the values, again on the main thread await MainActor.run { viewModel.name = "Jane Doe" viewModel.age = 31 } } } } // Example usage let viewModel = UserViewModel() let networkManager = NetworkManager() // Calling from some background thread or task Task { await networkManager.updateUserInfo(viewModel: viewModel) } In this example, we can see a few things The ViewModel is a class that manages states centrally It needs to be marked as MainActor to ensure that updating of the states is done on the main thread (this is similar to updating @Published in the old days). I know this isn't officially documented in Apple's documentation. But I've seen this mentioned many times to be recommended approach including www.youtub_.com/watch?v=4dQOnNYjO58 and here also I have observed crashes myself when I don't follow this practise Now so far so good, IF we assume that ViewModel are only in service to Views. The problem comes when the states need to be accessed outside of Views. in this example, NetworkManager is some random background code that also needs to read/write centralized states. In this case it becomes extremely cumbersome. You'd have to jump to mainthread for each write (which ok - maybe that's not often) but you'd also have to do that for every read. Now. it gets even more cumbersome if the VM holds a state that is a model object, mentioned in this thread.. Consider this example (which I think is what @Stokestack is referring to) import Foundation // UserModel represents the user information @MainActor // Ensuring the model's properties are accessed from the main thread class UserModel { var name: String var age: Int init(name: String, age: Int) { self.name = name self.age = age } } @MainActor @Observable class UserViewModel { var userModel: UserModel init(userModel: UserModel) { self.userModel = userModel } } // NetworkManager does not need to update the UI but needs to read/write UserModel inside UserViewModel. 
class NetworkManager { func updateUserInfo(viewModel: UserViewModel) { Task { // Read values from UserModel before making a network call let userName: String let userAge: Int // Jumping to the main thread to safely read UserModel properties await MainActor.run { userName = viewModel.userModel.name userAge = viewModel.userModel.age } // Simulate a network call print("Making network call with userName: \(userName) and userAge: \(userAge)") try await Task.sleep(nanoseconds: 1_000_000_000) // After the network call, updating UserModel (again, on the main thread) await MainActor.run { viewModel.userModel.name = "Jane Doe" viewModel.userModel.age = 31 } } } } // Example usage let userModel = UserModel(name: "John Doe", age: 30) let viewModel = UserViewModel(userModel: userModel) let networkManager = NetworkManager() // Calling from a background thread Task { await networkManager.updateUserInfo(viewModel: viewModel) } Now I'm not sure the problem he is referring still exists (because I've tried and indeed you can make codeable/decodables marked as @Mainactor) but it's really messy. Also, I use SwiftData and I have to imagine that @Model basically marks the class as @MainActor for these reasons. And finally, what is the official Apple's recommended approach? Clearly Apple created @Observable to hold states of some kind that drives UI. But how do you work with this state in the background?
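Not an official answer, but here is a minimal sketch of one pattern that keeps the main-actor hops down to a single read and a single write by exchanging value snapshots with the background work. UserSnapshot, refreshedUser(from:), and refresh(_:using:) are hypothetical names, not part of the original post.

```swift
import Foundation
import Observation

@MainActor @Observable
final class UserViewModel {
    var name = "John Doe"
    var age = 30
}

// A Sendable value copy of the state the background work needs.
struct UserSnapshot: Sendable {
    var name: String
    var age: Int
}

struct NetworkManager {
    // Runs off the main actor and touches only Sendable values.
    func refreshedUser(from snapshot: UserSnapshot) async throws -> UserSnapshot {
        print("Making network call with userName: \(snapshot.name) and userAge: \(snapshot.age)")
        try await Task.sleep(nanoseconds: 1_000_000_000) // simulated network delay
        return UserSnapshot(name: "Jane Doe", age: 31)
    }
}

@MainActor
func refresh(_ viewModel: UserViewModel, using manager: NetworkManager) async throws {
    // One read on the main actor, background work on plain values, one write back.
    let snapshot = UserSnapshot(name: viewModel.name, age: viewModel.age)
    let updated = try await manager.refreshedUser(from: snapshot)
    viewModel.name = updated.name
    viewModel.age = updated.age
}
```

The trade-off is that everything the background code needs must be expressible as a value, which is essentially the "read once, write once" shape the post already approximates with MainActor.run.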
6
0
257
4w
AsyncStream stops dispatching
Hello I'm a beginner to Swift Concurrency and have run into an issue with AsyncStream. I've run into a situation that causes an observing of a for loop to receiving a values from an AsyncStream. At the bottom is the code that you can copy it into a Swift Playground and run. The code is supposed to mock a system that has a service going through a filter to read and write to a connection. Here is a log of the prints 🙈🫴 setupRTFAsyncWrites Start ⬅️ Pretend to write 0 ➡️ Pretend to read 0 feed into filter 0 yield write data 1 🙈🫴 setupRTFAsyncWrites: write(1 bytes) ⬅️🙈🫴 Async Event: dataToDevice: 1 bytes ⬅️ Pretend to write 1 ➡️ Pretend to read 1 feed into filter 1 yield write data 2 // here our for loop should have picked up the value sent down the continuation. But instead it just sits here. Sample that can go into a playground //: A UIKit based Playground for presenting user interface import SwiftUI import PlaygroundSupport import Combine import CommonCrypto import Foundation class TestConnection { var didRead: ((Data) -> ()) = { _ in } var count = 0 init() { } func write(data: Data) { // pretend we sent this to the BT device print("⬅️ Pretend to write \(count)") Task { try await Task.sleep(ms: 200) print("➡️ Pretend to read \(self.count)") self.count += 1 // pretend this is a response from the device self.didRead(Data([0x00])) } } } enum TestEvent: Sendable { /// ask some one to write this to the device case write(Data) /// the filter is done case handshakeDone } class TestFilter { var eventsStream: AsyncStream<TestEvent> var continuation: AsyncStream<TestEvent>.Continuation private var count = 0 init() { (self.eventsStream, self.continuation) = AsyncStream<TestEvent>.makeStream(bufferingPolicy: .unbounded) } func feed(data: Data) { print("\tfeed into filter \(count)") count += 1 if count > 5 { print("\t✅ handshake done") self.continuation.yield(.handshakeDone) return } Task { // data delivered to us by a bluetooth device // pretend it takes time to process this and then we return with a request to write back to the connection try await Task.sleep(ms: 200) print("\tyield write data \(self.count)") // pretend this is a response from the device let result = self.continuation.yield(.write(Data([0x11]))) } } /// gives the first request to fire to the device for the handshaking sequence func start() -> Data { return Data([0x00]) } } // Here we facilitate communication between the filter and the connection class TestService { private let filter: TestFilter var task: Task<(), Never>? 
let testConn: TestConnection init(filter: TestFilter) { self.filter = filter self.testConn = TestConnection() self.testConn.didRead = { [weak self] data in self?.filter.feed(data: data) } self.task = Task { [weak self] () in await self?.setupAsyncWrites() } } func setupAsyncWrites() async { print("🙈🫴 setupRTFAsyncWrites Start") for await event in self.filter.eventsStream { print("\t\t🙈🫴 setupRTFAsyncWrites: \(event)") guard case .write(let data) = event else { print("\t\t🙈🫴 NOT data to device: \(event)") continue } print("\t\t⬅️🙈🫴 Async Event: dataToDevice: \(data)") self.testConn.write(data: data) } // for // This shouldn't end assertionFailure("This should not end") } public func handshake() async { let data = self.filter.start() self.testConn.write(data: data) await self.waitForHandshakedone() } private func waitForHandshakedone() async { for await event in self.filter.eventsStream { if case .handshakeDone = event { break } continue } } } Task { let service = TestService(filter: TestFilter()) await service.handshake() print("Done") } /* This is what happens: 🙈🫴 setupRTFAsyncWrites Start ⬅️ Pretend to write 0 ➡️ Pretend to read 0 feed into filter 0 yield write data 1 🙈🫴 setupRTFAsyncWrites: write(1 bytes) ⬅️🙈🫴 Async Event: dataToDevice: 1 bytes ⬅️ Pretend to write 1 ➡️ Pretend to read 1 feed into filter 1 yield write data 2 // It just stops here, the `for` loop in setupAsyncWrites() should have picked up the event sent down the continuation after "yield write data 2" // It should say 🙈🫴 setupRTFAsyncWrites: write(1 bytes) ⬅️🙈🫴 Async Event: dataToDevice: 1 bytes */ extension Task<Never, Never> { public static func sleep(ms duration: UInt64) async throws { try await Task.sleep(nanoseconds: 1_000_000 * duration) } }
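One detail that may or may not be the cause here (treat it as an assumption): the code above iterates the same AsyncStream from two places, setupAsyncWrites() and waitForHandshakedone(), and an AsyncStream is designed for a single consumer, so concurrent for await loops split the elements between them rather than each seeing every value. A self-contained sketch of that behaviour:

```swift
import Foundation

let (stream, continuation) = AsyncStream<Int>.makeStream()

// Two concurrent consumers of one stream: each yielded value is delivered
// to at most one of them, so neither loop observes the full sequence.
Task { for await value in stream { print("consumer A got \(value)") } }
Task { for await value in stream { print("consumer B got \(value)") } }

for i in 0..<6 { continuation.yield(i) }
continuation.finish()
```

If that single-consumer constraint is in fact the problem, giving each consumer its own stream is the usual way around it.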
1
0
204
Oct ’24
NSMetadataQuery threading issues
The code below is a simplified form of part of my code for my Swift Package Manager, Swift 5.6.1, PromiseKit 6.22.1, macOS command-line executable. It accepts a Mac App Store app ID as the sole argument. If the argument corresponds to an app ID for an app that was installed from the Mac App Store onto your computer, the executable obtains some information from Spotlight via a NSMetadataQuery, then prints it to stdout. I was only able to get the threading to work by calling RunLoop.main.run(). The only way I was able to allow the executable to return instead of being stuck forever on RunLoop.main.run() was to call exit(0) in the closure passed to Promise.done(). The exit(0) causes problems for testing. How can I allow the executable to exit without explicitly calling exit(0), and how can I improve the threading overall? I cannot currently use Swift Concurrency (await/async/TaskGroup) because the executable must support macOS versions that don't support Swift Concurrency. A Swift Concurrency solution variant would be useful as additional info, though, because, sometime in the future, I might be able to drop support for macOS versions older than 10.15. Thanks for any help. import Foundation import PromiseKit guard CommandLine.arguments.count > 1 else { print("Missing adamID argument") exit(1) } guard let adamID = UInt64(CommandLine.arguments[1]) else { print("adamID argument must be a UInt64") exit(2) } _ = appInfo(forAdamID: adamID) .done { appInfo in if let jsonData = try? JSONSerialization.data(withJSONObject: appInfo), let jsonString = String(data: jsonData, encoding: .utf8) { print(jsonString.replacingOccurrences(of: "\\/", with: "/")) } exit(0) } RunLoop.main.run() func appInfo(forAdamID adamID: UInt64) -> Promise<[String: Any]> { Promise { seal in let query = NSMetadataQuery() query.predicate = NSPredicate(format: "kMDItemAppStoreAdamID == %d", adamID) query.searchScopes = ["/Applications"] var observer: NSObjectProtocol? observer = NotificationCenter.default.addObserver( forName: NSNotification.Name.NSMetadataQueryDidFinishGathering, object: query, queue: .main ) { _ in query.stop() defer { if let observer { NotificationCenter.default.removeObserver(observer) } } var appInfo: [String: Any] = [:] for result in query.results { if let result = result as? NSMetadataItem { var attributes = ["kMDItemPath"] attributes.append(contentsOf: result.attributes) for attribute in attributes { let value = result.value(forAttribute: attribute) switch value { case let date as Date: appInfo[attribute] = ISO8601DateFormatter().string(from: date) default: appInfo[attribute] = value } } } } seal.fulfill(appInfo) } DispatchQueue.main.async { query.start() } } }
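For the "Swift Concurrency variant as additional info" part of the question, here is a rough sketch (assuming macOS 10.15 or later, and ignoring the strict-concurrency warnings that [String: Any] not being Sendable would produce) of the same query wrapped in a checked continuation instead of a Promise:

```swift
import Foundation

@available(macOS 10.15, *)
func appInfo(forAdamID adamID: UInt64) async -> [String: Any] {
    await withCheckedContinuation { continuation in
        let query = NSMetadataQuery()
        query.predicate = NSPredicate(format: "kMDItemAppStoreAdamID == %d", adamID)
        query.searchScopes = ["/Applications"]

        var observer: NSObjectProtocol?
        observer = NotificationCenter.default.addObserver(
            forName: NSNotification.Name.NSMetadataQueryDidFinishGathering,
            object: query,
            queue: .main
        ) { _ in
            query.stop()
            if let observer { NotificationCenter.default.removeObserver(observer) }

            var appInfo: [String: Any] = [:]
            for case let item as NSMetadataItem in query.results {
                for attribute in ["kMDItemPath"] + item.attributes {
                    switch item.value(forAttribute: attribute) {
                    case let date as Date:
                        appInfo[attribute] = ISO8601DateFormatter().string(from: date)
                    case let value:
                        appInfo[attribute] = value
                    }
                }
            }
            continuation.resume(returning: appInfo)
        }

        DispatchQueue.main.async { query.start() }
    }
}
```

In an executable this could then be awaited from an async @main entry point, which would also remove the need for RunLoop.main.run() and the explicit exit(0).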
7
0
292
Oct ’24
[crash] iOS 18 _os_unfair_lock_recursive_abort
Crash log: _os_unfair_lock_recursive_abort (SIGTRAP)

Background thread 29 - crashed:

libsystem_platform.dylib  _os_unfair_lock_recursive_abort
libsystem_platform.dylib  _os_unfair_lock_lock_slow
WebKit                    void IPC::Connection::dispatchToClient<IPC::Connection::enqueueIncomingMessage(***::UniqueRefIPC::Decoder)::$_0>(IPC::Connection::enqueueIncomingMessage(***::UniqueRefIPC::Decoder)::$_0&&)
WebKit                    IPC::Connection::enqueueIncomingMessage(***::UniqueRefIPC::Decoder)
WebKit                    IPC::Connection::processIncomingMessage(***::UniqueRefIPC::Decoder)
WebKit                    ___ZN3IPC10Connection12platformOpenEv_block_invoke
libdispatch.dylib         _dispatch_client_callout
libdispatch.dylib         _dispatch_continuation_pop
libdispatch.dylib         _dispatch_source_latch_and_call
libdispatch.dylib         _dispatch_source_invoke
libdispatch.dylib         _dispatch_lane_serial_drain
libdispatch.dylib         _dispatch_lane_invoke
libdispatch.dylib         _dispatch_root_queue_drain_deferred_wlh
libdispatch.dylib         _dispatch_workloop_worker_thread
libsystem_pthread.dylib   _pthread_wqthread
libsystem_pthread.dylib   start_wqthread

Note: this crash only occurs on iOS 18.0 and later.
1
0
175
Oct ’24
async let combination crashes on Xcode 16 with iOS 15 as minimum deployment
We started building our project in XCode 16 only to find a super weird crash that was 100% reproducible. I couldn't really understand why it was crashing, so I tried to trim down the problematic piece of code to something that I could provide in a side project. The actual piece of code crashing for us is significantly different, but this small example showcases the crash as well. https://github.com/Elih96/XCode16CrashReproducer our observation is, that this combination of async let usage + struct structure leads to a SIGABRT crash in the concurrency library. In both the main project and the example project, moving away from async let and using any other concurrency mechanism fixes the crash. This was reproducible only on Xcode 16 with iOS 15 set as minimum deployment for the target. It works fine on Xcode 15, and if we bump the min deployment to 16 on Xcode 16, it also runs fine. I've attached a small project that reproduces the error. I'm sure I didn't provide the ideal reproduction scenario, but that's what I managed to trim it down to. Making random changes such as removing some properties from the B struct or remove the: let _ = A().arrayItems.map { _ in "123" } will stop the crash from happening, so I just stopped making changes. The stack trace from the crash: frame #0: 0x00000001036d1008 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x0000000102ecf408 libsystem_pthread.dylib`pthread_kill + 256 frame #2: 0x00000001801655c0 libsystem_c.dylib`abort + 104 frame #3: 0x000000020a8b7de0 libswift_Concurrency.dylib`swift::swift_Concurrency_fatalErrorv(unsigned int, char const*, char*) + 28 frame #4: 0x000000020a8b7dfc libswift_Concurrency.dylib`swift::swift_Concurrency_fatalError(unsigned int, char const*, ...) + 28 frame #5: 0x000000020a8baf54 libswift_Concurrency.dylib`swift_task_dealloc + 124 frame #6: 0x000000020a8b72c8 libswift_Concurrency.dylib`asyncLet_finish_after_task_completion(swift::AsyncContext*, swift::AsyncLet*, void (swift::AsyncContext* swift_async_context) swiftasynccall*, swift::AsyncContext*, void*) + 72 * frame #7: 0x000000010344e6e4 CrashReproducer.debug.dylib`closure #1 in closure #1 in CrashReproducerApp.body.getter at CrashReproducerApp.swift:17:46 frame #8: 0x00000001cca0a560 SwiftUI`___lldb_unnamed_symbol158883 frame #9: 0x00000001cca09fc0 SwiftUI`___lldb_unnamed_symbol158825 frame #10: 0x00000001cca063a0 SwiftUI`___lldb_unnamed_symbol158636 frame #11: 0x00000001cca09268 SwiftUI`___lldb_unnamed_symbol158726
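For anyone hitting something similar, the post's observation that any other concurrency mechanism avoids the crash suggests a workaround of replacing the async let bindings with an explicit task group until the toolchain issue is resolved. A generic sketch (loadA/loadB are hypothetical stand-ins, not the project's real code):

```swift
func loadA() async -> [String] { ["a"] }
func loadB() async -> [String] { ["b"] }

// Instead of:
//   async let a = loadA()
//   async let b = loadB()
//   let items = await a + b
// ...collect the same results with a task group:
func loadAll() async -> [String] {
    await withTaskGroup(of: [String].self) { group in
        group.addTask { await loadA() }
        group.addTask { await loadB() }
        var items: [String] = []
        for await partial in group {
            items += partial
        }
        return items
    }
}
```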
5
3
495
Oct ’24
Core ML Async API Seems to Not Work Properly
I'm experiencing issues with the Core ML Async API, as it doesn't seem to be working correctly. It consistently hangs during the "03 performInference, after get smallInput, before prediction" part, as shown in the attached: log1.txt log2.txt Below is my code. Could you please advise on how I should modify it? private func createFrameAsync(for sampleBuffer: CMSampleBuffer ) { guard let pixelBuffer = sampleBuffer.imageBuffer else { return } Task { print("**** createFrameAsync before performInference") do { try await runModelAsync(on: pixelBuffer) } catch { print("Error processing frame: \(error)") } print("**** createFrameAsync after performInference") } } func runModelAsync(on pixelbuffer: CVPixelBuffer) async { print("01 performInference, before resizeFrame") guard let data = metalResizeFrame(sourcePixelFrame: pixelbuffer, targetSize: MTLSize.init(width: InputWidth, height: InputHeight, depth: 1), resizeMode: .scaleToFill) else { os_log("Preprocessing failed", type: .error) return } print("02 performInference, after resizeFrame, before get smallInput") let input = model_smallInput(input: data) print("03 performInference, after get smallInput, before prediction") if let prediction = try? await mlmodel!.model.prediction(from: input) { print("04 performInference, after prediction, before get result") var results: [Float] = [] let output = prediction.featureValue(for: "output")?.multiArrayValue if let bufferPointer = try? UnsafeBufferPointer<Float>(output!) { results = Array(bufferPointer) } print("05 performInference, after get result, before setRenderData") let localResults = results await MainActor.run { ScreenRecorder.shared .setRenderDataNormalized( screenImage: pixelbuffer, depthData: localResults ) } print("06 performInference, after setRenderData") } }
1
0
291
Oct ’24
How do you create an actor with a non-sendable member variable that is initialized with async init()?
Here is my code:

// A 3rd-party class I must use.
class MySession {
    init() async throws {
        // ..
    }
}

actor SessionManager {
    private var mySession: MySession? // MySession is not Sendable

    func createSession() async {
        do {
            mySession = try await MySession()
            log("getOrCreateSession() End, success.")
        } catch {
            log("getOrCreateSession() End, failure.")
        }
    }
}

I get this warning: "Non-sendable type 'MySession' returned by implicitly asynchronous call to a nonisolated function cannot cross the actor boundary." How can this be fixed?
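Not an authoritative fix, but one pragmatic workaround sometimes used when the non-Sendable type is only ever touched from inside one actor is a @unchecked Sendable conformance. This is a sketch, and it shifts the thread-safety guarantee from the compiler onto you:

```swift
// A third-party class we cannot modify.
class MySession {
    init() async throws { /* ... */ }
}

// Assumption: MySession is only ever used from inside SessionManager's
// isolation, so we assert (rather than prove) that it is safe to send.
extension MySession: @unchecked Sendable {}

actor SessionManager {
    private var mySession: MySession?

    func createSession() async {
        do {
            mySession = try await MySession()
            print("createSession() succeeded.")
        } catch {
            print("createSession() failed: \(error)")
        }
    }
}
```

If MySession actually lives in another module, Swift 6 will also want the conformance marked as retroactive, which is a further hint that this is an assertion you are making on someone else's type.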
1
0
275
Oct ’24
How is the HandTracking in Happy Beam avoiding data races?
I am new to learning about concurrency and I am working on an app that uses the HandTrackingProvider class. In the Happy Beam sample code, there is a HeartGestureModel which has a reference to a HandTrackingProvider, and this seems to write to a struct called HandUpdates inside the HeartGestureModel class through the publishHandTrackingUpdates() function. On another thread, there is a function called computeTransformofUserPerformedHeartGesture() which reads the values of HandUpdates to determine whether the user is making the appropriate gesture. My question is: how is the code handling the constant reads and writes to the HandUpdates struct?
1
0
261
Oct ’24
ARKit delegate code broken by Swift 6
I'm porting over some code that uses ARKit to Swift 6 (with Complete Strict Concurrency Checking enabled). Some methods on ARSCNViewDelegate, namely Coordinator.renderer(_:didAdd:for:) among at least one other is causing a consistent crash. On Swift 5 this code works absolutely fine. The above method consistently crashes with _dispatch_assert_queue_fail. My assumption is that in Swift 6 a trap has been inserted by the compiler to validate that my downstream code is running on the main thread. In Implementing a Main Actor Protocol That’s Not @MainActor, Quinn “The Eskimo!” seems to address scenarios of this nature with 3 proposed workarounds yet none of them seem feasible here. For #1, marking ContentView.addPlane(renderer:node:anchor:) nonisolated and using @preconcurrency import ARKit compiles but still crashes :( For #2, applying @preconcurrency to the ARSCNViewDelegate conformance declaration site just yields this warning: @preconcurrency attribute on conformance to 'ARSCNViewDelegate' has no effect For #3, as Quinn recognizes, this is a non-starter as ARSCNViewDelegate is out of our control. The minimal reproducible set of code is below. Simply run the app, scan your camera back and forth across a well lit environment and the app should crash within a few seconds. Switch over to Swift Language Version 5 in build settings, retry and you'll see the current code works fine. import ARKit import SwiftUI struct ContentView: View { @State private var arViewProxy = ARSceneProxy() private let configuration: ARWorldTrackingConfiguration @State private var planeFound = false init() { configuration = ARWorldTrackingConfiguration() configuration.worldAlignment = .gravityAndHeading configuration.planeDetection = [.horizontal] } var body: some View { ARScene(proxy: arViewProxy) .onAddNode { renderer, node, anchor in addPlane(renderer: renderer, node: node, anchor: anchor) } .onAppear { arViewProxy.session.run(configuration) } .onDisappear { arViewProxy.session.pause() } .overlay(alignment: .top) { if !planeFound { Text("Slowly move device horizontally side to side to calibrate") } else { Text("Plane found!") .bold() .foregroundStyle(.green) } } } private func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) { guard let planeAnchor = anchor as? ARPlaneAnchor, let device = renderer.device, let planeGeometry = ARSCNPlaneGeometry(device: device) else { return } planeFound = true planeGeometry.update(from: planeAnchor.geometry) let material = SCNMaterial() material.isDoubleSided = true material.diffuse.contents = UIColor.white.withAlphaComponent(0.65) planeGeometry.materials = [material] let planeNode = SCNNode(geometry: planeGeometry) node.addChildNode(planeNode) } } struct ARScene { private(set) var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)? 
private let proxy: ARSceneProxy init(proxy: ARSceneProxy) { self.proxy = proxy } func onAddNode( perform action: @escaping (SCNSceneRenderer, SCNNode, ARAnchor) -> Void ) -> Self { var view = self view.onAddNodeAction = action return view } } extension ARScene: UIViewRepresentable { func makeUIView(context: Context) -> ARSCNView { let arView = ARSCNView() arView.delegate = context.coordinator arView.session.delegate = context.coordinator proxy.arView = arView return arView } func updateUIView(_ uiView: ARSCNView, context: Context) { context.coordinator.onAddNodeAction = onAddNodeAction } func makeCoordinator() -> Coordinator { Coordinator() } } extension ARScene { class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate { var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)? func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) { onAddNodeAction?(renderer, node, anchor) } } } @MainActor class ARSceneProxy: NSObject, @preconcurrency ARSessionProviding { fileprivate var arView: ARSCNView! @objc dynamic var session: ARSession { arView.session } } Any help is greatly appreciated!
1
7
339
Oct ’24
Thread Error with @Model Class
I have a @Model class that is comprised of a String and a custom enum. It was working until I added raw String values for the enum cases; afterwards, the following macro-generated code and error appear when opening a view that uses the class:

{
    @storageRestrictions(accesses: _$backingData, initializes: _type)
    init(initialValue) {
        _$backingData.setValue(forKey: \.type, to: initialValue)
        _type = _SwiftDataNoType()
    }
    get {
        _$observationRegistrar.access(self, keyPath: \.type)
        return self.getValue(forKey: \.type)
    }
    set {
        _$observationRegistrar.withMutation(of: self, keyPath: \.type) {
            self.setValue(forKey: \.type, to: newValue)
        }
    }
}

Thread 1: EXC_BREAKPOINT (code=1, subcode=0x1cc165d0c)

I removed the String raw values but the error persists. Any guidance would be greatly appreciated. Below is the replicated code:

@Model
class CopingSkillEntry {
    var stringText: String
    var type: CaseType

    init(stringText: String, type: CaseType) {
        self.stringText = stringText
        self.type = type
    }
}

enum CaseType: Codable, Hashable {
    case case1
    case case2
    case case3
}
2
0
233
Oct ’24
Can SceneKit be used with Swift 6 Concurrency?
I am trying to port SceneKit projects to Swift 6, and I just can't figure out how that's possible. I even start thinking SceneKit and Swift 6 concurrency just don't match together, and SceneKit projects should - hopefully for the time being only - stick to Swift 5. The SCNSceneRendererDelegate methods are called in the SceneKit Thread. If the delegate is a ViewController: class GameViewController: UIViewController { let aNode = SCNNode() func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) { aNode.position.x = 10 } } Then the compiler generates the error "Main actor-isolated instance method 'renderer(_:updateAtTime:)' cannot be used to satisfy nonisolated protocol requirement" Which is fully understandable. The compiler even tells you those methods can't be used for protocol conformance, unless: Conformance is declare as @preconcurrency SCNSceneRendererDelegate like this: class GameViewController: UIViewController, @preconcurrency SCNSceneRendererDelegate { But that just delays the check to runtime, and therefore, crash in the SceneKit Thread happens at runtime... Again, fully understandable. or the delegate method is declared nonisolated like this: nonisolated func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) { aNode.position.x = 10 } Which generates the compiler error: "Main actor-isolated property 'position' can not be mutated from a nonisolated context". Again fully understandable. If the delegate is not a ViewController but a nonisolated class, we also have the problem that SCNNode can't be used. Nearly 100% of the SCNSceneRendererDelegate I've seen do use SCNNode or similar MainActor bound types, because they are meant for that. So, where am I wrong ? What is the solution to use SceneKit SCNSceneRendererDelegate methods with full Swift 6 compilation ? Is that even possible for now ?
4
0
400
2d
Audio Interruption Not Being Intercepted in AVAudioSession with Classification
Hi everyone, I’m experiencing an issue where audio interruptions (e.g., phone calls) are not being intercepted while running sound classification in an app that uses the AVAudioSession. Classification works fine, but interruptions aren’t handled, even though I’ve followed Apple’s guidelines on handling audio interruptions [1_Document]. The classification was initially based on [2_Classifer], where it worked perfectly. However, when I adopted classification in a more camera-focused app using [3_Cam], the interruption behavior stopped working. The classification setup is functioning with [3_Cam], but audio interruptions are not triggered. The listener is invoked before starting sound analysis as suggested in [2_Classifier]. startListeningForAudioSessionInterruptions() try startAnalyzing([(request, observer)]) FYI, one change I have made for classifications is following. This works fine in all cases. // try audioSession.setCategory(.record, mode: .default) try audioSession.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker, .allowBluetooth]) I suspect the issue might be related to the AVAudioSession configuration or how the app handles recording and playback together. Is there anything else I should check related to AVAudioSession? Are there additional APIs I could use to pre-check or better handle audio interruptions? Any suggestions or guidance would be greatly appreciated! Platform: Swift 5, Xcode 16, iOS 18. References: Document Classifier Cam Best Regards
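Not a diagnosis of the camera-app case, but for comparison, a minimal sketch of the standard interruption observer the guidelines describe; if this fires on its own but not alongside the capture pipeline, the problem is more likely in the session configuration than in the observer itself:

```swift
import AVFoundation

final class InterruptionMonitor {
    private var observer: NSObjectProtocol?

    func startListeningForAudioSessionInterruptions() {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { notification in
            guard
                let info = notification.userInfo,
                let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                let type = AVAudioSession.InterruptionType(rawValue: typeValue)
            else { return }

            switch type {
            case .began:
                print("Audio session interrupted - pause sound analysis")
            case .ended:
                print("Interruption ended - resume analysis if appropriate")
            @unknown default:
                break
            }
        }
    }

    deinit {
        if let observer { NotificationCenter.default.removeObserver(observer) }
    }
}
```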
1
0
231
Oct ’24
Type ReferenceWritableKeyPath does not conform to the 'Sendable' protocol
This is not a question but more of a hint where I was having trouble with. In my SwiftData App I wanted to move from Swift 5 to Swift 6, for that, as recommended, I stayed in Swift 5 language mode and set 'Strict Concurrency Checking' to 'Complete' within my build settings. It marked all the places where I was using predicates with the following warning: Type '' does not conform to the 'Sendable' protocol; this is an error in the Swift 6 language mode I had the same warnings for SortDescriptors. I spend quite some time searching the web and wrapping my head around how to solve that issue to be able to move to Swift 6. In the end I found this existing issue in the repository of the Swift Language https://github.com/swiftlang/swift/issues/68943. It says that this is not a warning that should be seen by the developer and in fact when turning Swift 6 language mode on those issues are not marked as errors. So if anyone is encountering this when trying to fix all issues while staying in Swift 5 language mode, ignore those, fix the other issues and turn on Swift 6 language mode and hopefully they are gone.
1
0
321
4w
Crash with Progress type, Swift 6, iOS 18
We are getting a crash, _dispatch_assert_queue_fail, when the cancellationHandler on NSProgress is called. We do not see this with iOS 17.x, only on iOS 18. We are building in Swift 6 language mode and do not have any compiler warnings. We have a type whose init looks something like this:

init(
    request: URLRequest,
    destinationURL: URL,
    session: URLSession
) {
    progress = Progress()
    progress.kind = .file
    progress.fileOperationKind = .downloading
    progress.fileURL = destinationURL
    progress.pausingHandler = { [weak self] in
        self?.setIsPaused(true)
    }
    progress.resumingHandler = { [weak self] in
        self?.setIsPaused(false)
    }
    progress.cancellationHandler = { [weak self] in
        self?.cancel()
    }
}

When the progress is cancelled and the cancellation handler is invoked, we get the crash. The crash is not reproducible 100% of the time, but it happens significantly often, especially after cleaning, rebuilding, and running our tests.

* thread #4, queue = 'com.apple.root.default-qos', stop reason = EXC_BREAKPOINT (code=1, subcode=0x18017b0e8)
* frame #0: 0x000000018017b0e8 libdispatch.dylib`_dispatch_assert_queue_fail + 116
frame #1: 0x000000018017b074 libdispatch.dylib`dispatch_assert_queue + 188
frame #2: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284
frame #3: 0x000000010b80bd84 MyTests`closure #3 in MyController.init() at MyController.swift:0
frame #4: 0x000000010b80bb04 MyTests`thunk for @escaping @callee_guaranteed @Sendable () -> () at <compiler-generated>:0
frame #5: 0x00000001810276b0 Foundation`__20-[NSProgress cancel]_block_invoke_3 + 28
frame #6: 0x00000001801774ec libdispatch.dylib`_dispatch_call_block_and_release + 24
frame #7: 0x0000000180178de0 libdispatch.dylib`_dispatch_client_callout + 16
frame #8: 0x000000018018b7dc libdispatch.dylib`_dispatch_root_queue_drain + 1072
frame #9: 0x000000018018bf60 libdispatch.dylib`_dispatch_worker_thread2 + 232
frame #10: 0x00000001012a77d8 libsystem_pthread.dylib`_pthread_wqthread + 224

Any thoughts on why this is crashing and what we can do to work around it? I have not been able to extract our code into a simple reproducible case yet, and I mostly see it when running our code in a testing environment (XCTest), although I have been able to reproduce it running an app a few times; it's just less common.
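Not a confirmed fix, but since the crash is dispatch asserting the expected executor when the handler calls back into isolated code, one workaround people try is hopping to the owner's isolation explicitly inside the handlers rather than calling the isolated methods synchronously. A sketch, assuming setIsPaused(_:) and cancel() are isolated to the owning type (here arbitrarily shown as a @MainActor class):

```swift
import Foundation

@MainActor
final class Download {
    let progress = Progress()
    private var isPaused = false

    init() {
        progress.kind = .file
        progress.fileOperationKind = .downloading

        // NSProgress may invoke these handlers on an arbitrary queue, so hop
        // to this type's isolation instead of calling isolated methods directly.
        progress.pausingHandler = { [weak self] in
            Task { await self?.setIsPaused(true) }
        }
        progress.resumingHandler = { [weak self] in
            Task { await self?.setIsPaused(false) }
        }
        progress.cancellationHandler = { [weak self] in
            Task { await self?.cancel() }
        }
    }

    func setIsPaused(_ paused: Bool) { isPaused = paused }
    func cancel() { /* tear down the download */ }
}
```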
23
7
955
Oct ’24
Casting `[String: Any]` to `[String: any Sendable]`
I have a simple wrapper class around WCSession to allow for easier unit testing. I'm trying to update it to Swift 6 concurrency standards, but I'm running into some issues. One of them is in the sendMessage function (docs here). It takes a [String: Any] as a parameter and returns one as the reply. Here's my code that calls this:

@discardableResult
public func sendMessage(_ message: [String: Any]) async throws -> [String: Any] {
    return try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<[String: Any], Error>) in
        wcSession.sendMessage(message) { response in
            continuation.resume(returning: response) // ERROR HERE
        } errorHandler: { error in
            continuation.resume(throwing: error)
        }
    }
}

However, I get this error:

Sending 'response' risks causing data races; this is an error in the Swift 6 language mode

which I think is because Any is not Sendable. I tried casting [String: Any] to [String: any Sendable], but then it says:

Conditional cast from '[String : Any]' to '[String : any Sendable]' always succeeds

Any ideas on how to get this to work?
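One pattern that is sometimes used here (a sketch, not necessarily the best answer): since the reply dictionary only ever carries property-list values, wrap it in a small @unchecked Sendable box so the value can cross into the continuation without the compiler trying to prove that Any is Sendable. UncheckedSendable and SessionWrapper are hypothetical names.

```swift
import WatchConnectivity

// Assumption: the dictionaries only carry property-list values, so moving
// them across concurrency domains is safe in practice even though the
// compiler cannot verify it.
private struct UncheckedSendable<Value>: @unchecked Sendable {
    let value: Value
}

final class SessionWrapper {
    let wcSession: WCSession = .default

    @discardableResult
    public func sendMessage(_ message: [String: Any]) async throws -> [String: Any] {
        let boxed = try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<UncheckedSendable<[String: Any]>, Error>) in
            wcSession.sendMessage(message) { response in
                continuation.resume(returning: UncheckedSendable(value: response))
            } errorHandler: { error in
                continuation.resume(throwing: error)
            }
        }
        return boxed.value
    }
}
```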
3
1
367
Oct ’24
How to wrangle Sendable ReferenceFileDocument and SwiftUI
I recently circled back to a SwiftUI Document-based app to check on warnings, etc. with Xcode 16 and Swift 6 now released. I just found a suite of new errors indicating that ReferenceFileDocument is now expected to be Sendable, which seems very odd given that it is fundamentally a reference type. I followed the general pattern from earlier sample code: Building Great Apps with SwiftUI, which goes from an Observable object to adding conformance to ReferenceFileDocument in an extension. But then the various stored properties on the class cause all manner of complaint, given that they're not, in fact, Sendable. What is an expected pattern that leverages a final class for the ReferenceFileDocument but maintains that Sendability? Or is there a way to "opt out" of Sendability for this protocol? I'm at a bit of a loss on the expectation that ReferenceFileDocument is Sendable at all, but since it's marked as such, I'm guessing there's some path - I'm just not (yet) clear on how to accommodate that.
1
0
234
Oct ’24
Crash with Incorrect actor executor assumption
I'm seeing a crash compiling with Swift 6 that I can reproduce with the following code. It crashes with "Incorrect actor executor assumption". Is there something that the compiler should be warning about so that this isn't a runtime crash? Note - if I use a for in loop instead of the .forEach closure, the crash does not happen. Is the compiler somehow inferring the wrong isolation domain for the closure? import SwiftUI struct ContentView: View { var body: some View { Text("Hello, world!") .task { _ = try? await MyActor(store: MyStore()) } } } actor MyActor { var credentials = [String]() init(store: MyStore) async throws { try await store.persisted.forEach { credentials.append($0) } } } final class MyStore: Sendable { var persisted: [String] { get async throws { return ["abc"] } } } The stack trace is: * thread #6, queue = 'com.apple.root.user-initiated-qos.cooperative', stop reason = signal SIGABRT frame #0: 0x0000000101988f30 libsystem_kernel.dylib`__pthread_kill + 8 frame #1: 0x0000000100e2f124 libsystem_pthread.dylib`pthread_kill + 256 frame #2: 0x000000018016c4ec libsystem_c.dylib`abort + 104 frame #3: 0x00000002444c944c libswift_Concurrency.dylib`swift::swift_Concurrency_fatalErrorv(unsigned int, char const*, char*) + 28 frame #4: 0x00000002444c9468 libswift_Concurrency.dylib`swift::swift_Concurrency_fatalError(unsigned int, char const*, ...) + 28 frame #5: 0x00000002444c90e0 libswift_Concurrency.dylib`swift_task_checkIsolated + 152 frame #6: 0x00000002444c63e0 libswift_Concurrency.dylib`swift_task_isCurrentExecutorImpl(swift::SerialExecutorRef) + 284 frame #7: 0x0000000100d58944 IncorrectActorExecutorAssumption.debug.dylib`closure #1 in MyActor.init($0="abc") at <stdin>:0 frame #8: 0x0000000100d58b94 IncorrectActorExecutorAssumption.debug.dylib`partial apply for closure #1 in MyActor.init(store:) at <compiler-generated>:0 frame #9: 0x00000001947f8c80 libswiftCore.dylib`Swift.Sequence.forEach((τ_0_0.Element) throws -> ()) throws -> () + 428 * frame #10: 0x0000000100d58748 IncorrectActorExecutorAssumption.debug.dylib`MyActor.init(store=0x0000600000010ba0) at ContentView.swift:27:35 frame #11: 0x0000000100d57734 IncorrectActorExecutorAssumption.debug.dylib`closure #1 in ContentView.body.getter at ContentView.swift:14:32 frame #12: 0x0000000100d57734 IncorrectActorExecutorAssumption.debug.dylib`closure #1 in ContentView.body.getter at ContentView.swift:14:32 frame #13: 0x00000001d1817138 SwiftUI`(1) await resume partial function for partial apply forwarder for closure #1 () async -> () in closure #1 (inout Swift.TaskGroup<()>) async -> () in closure #1 () async -> () in SwiftUI.AppDelegate.application(_: __C.UIApplication, handleEventsForBackgroundURLSession: Swift.String, completionHandler: () -> ()) -> () frame #14: 0x00000001d17b1e48 SwiftUI`(1) await resume partial function for dispatch thunk of static SwiftUI.PreviewModifier.makeSharedContext() async throws -> τ_0_0.Context frame #15: 0x00000001d19c10c0 SwiftUI`(1) await resume partial function for generic specialization <()> of reabstraction thunk helper <τ_0_0 where τ_0_0: Swift.Sendable> from @escaping @isolated(any) @callee_guaranteed @async () -> (@out τ_0_0) to @escaping @callee_guaranteed @async () -> (@out τ_0_0, @error @owned Swift.Error) frame #16: 0x00000001d17b1e48 SwiftUI`(1) await resume partial function for dispatch thunk of static SwiftUI.PreviewModifier.makeSharedContext() async throws -> τ_0_0.Context
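For readers hitting the same thing, the workaround mentioned in the post (a for-in loop instead of the .forEach closure) looks like this; whether the closure's inferred isolation is a compiler bug is left open here:

```swift
actor MyActor {
    var credentials = [String]()

    init(store: MyStore) async throws {
        // Iterating directly keeps the appends in the actor's own isolation,
        // which avoids the runtime executor assertion seen with .forEach.
        for credential in try await store.persisted {
            credentials.append(credential)
        }
    }
}

final class MyStore: Sendable {
    var persisted: [String] {
        get async throws { ["abc"] }
    }
}
```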
2
0
530
Oct ’24
My Swift concurrency is no longer running properly and it's screwing up my SwiftData objects
I have a bug I've come across since I've upgraded Xcode (16.0) and macOS (15.0 Sequoia) and figured I'd create a minimal viable example in case someone else came across this and help. I've noticed this in both the macOS and iOS version of my app when I run it. Essentially I have an area of my code that calls a class that has a step 1 and 2 process. The first step uses async let _ = await methodCall(...) to create some SwiftData items while the second step uses a TaskGroup to go through the created elements, checks if an image is needed, and syncs it. Before this worked great as the first part finished and saved before the second part happened. Now it doesn't see the save and thus doesn't see the insertions/edits/etc in the first part so the second part just isn't done properly. Those step one changes are set and shown in the UI so, in this case, if I run it again the second part works just fine on those previous items while any newly created items are skipped. I came across this issue when step one handled 74 inserts as each one didn't contain an image attached to it. When I switched the async let _ = await methodCall(...) to a TaskGroup, hoping that would wait better and work properly, I had the same issue but now only 10 to 30 items were created from the first step. Minimal Viable Example: to reproduce something similar In my minimal viable sample I simplified it way down so you can't run it twice and it's limited it creating 15 subitems. That said, I've hooked it up with both an async let _ = await methodCall(...) dubbed AL and a TaskGroup dubbed TG. With both types my second process (incrementing the number associated with the subissue/subitem) isn't run as it doesn't see the subitem as having been created and, when run with TaskGroup, only 12 to 15 items are created rather that the always 15 of async let. Code shared here: https://gist.github.com/SimplyKyra/aeee2d43689d907d7a66805ce4bbf072 And this gives a macOS view of showing each time the button is pressed the sub issues created never increment to 1 while, when using TaskGroup, 15 isn't guaranteed to be created and remembered. I'm essentially wondering if anyone else has this issue and if so have you figured out how to solve it? Thanks
2
0
291
Oct ’24
Actor and the Singleton Pattern
As I migrate my apps to Swift 6 one by one, I am gaining a deeper understanding of concurrency. In the process, I am quite satisfied to see the performance benefits of parallel programming being integrated into my apps. At the same time, I have come to think that actor is a great type for addressing the 'data race' issues that can arise when using the 'singleton' pattern with class. Specifically, by using actor, you no longer need to write code like private let lock = DispatchQueue(label: "com.singleton.lock") to prevent data races that you would normally have to deal with when creating a singleton with a class. It reduces the risk of developer mistakes. import EventKit actor EKDataStore: Sendable { static let shared = EKDataStore() let eventStore: EKEventStore private init() { self.eventStore = EKEventStore() } } Of course, since a singleton is an object used globally, it can become harder to manage dependencies over time. There's also the downside of not being able to inject dependencies, which makes testing more difficult. I still think the singleton pattern is ideal for objects that need to be maintained throughout the entire lifecycle of the app with only one instance. The EKDataStore example I gave is such an object. I’d love to hear other iOS developers' opinions, and I would appreciate any advice on whether I might be missing something 🙏
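For what it's worth, a small usage sketch building on the EKDataStore shown above (eventCalendarTitles() is a hypothetical method, not part of the post): keeping the EKEventStore work inside the actor and returning only Sendable values keeps call sites down to a single await and avoids handing the non-Sendable store across the actor boundary.

```swift
import EventKit

extension EKDataStore {
    // Runs inside the actor; only Sendable values leave it.
    func eventCalendarTitles() -> [String] {
        eventStore.calendars(for: .event).map(\.title)
    }
}

// Call site, from anywhere in the app:
// let titles = await EKDataStore.shared.eventCalendarTitles()
```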
1
0
451
Sep ’24
Coordination of Video Capture and Audio Engine Start in iOS Development
Question: When implementing simultaneous video capture and audio processing in an iOS app, does the order of starting these components matter, or can they be initiated in any sequence? I have an actor responsible for initiating video capture using the setCaptureMode function. In this actor, I also call startAudioEngine to begin the audio engine and register a resultObserver. While the audio engine starts successfully, I notice that the resultObserver is not invoked when startAudioEngine is called synchronously. However, it works correctly when I wrap the call in a Task. Could you please explain why the synchronous call to startAudioEngine might be blocking the invocation of the resultObserver? What would be the best practice for ensuring both components work effectively together? Additionally, if I were to avoid using Task, what approach would be required? Lastly, is the startAudioEngine effective from the start time of the video capture (00:00)? Platform: Xcode 16, Swift 6, iOS 18 References: Classifying Sounds in an Audio Stream – In my case, the analyzeAudio() method is not invoked. Setting Up a Capture Session – Here, the focus is on video capture. Classifying Sounds in an Audio File Code Snippet: (For further details. setVideoCaptureMode() surfaces the problem.) // ensures all operations happen off of the `@MainActor`. actor CaptureService { ... nonisolated private let resultsObserver1 = ResultsObserver1() ... private func setUpSession() throws { .. } ... setVideoCaptureMode() throws { captureSession.beginConfiguration() defer { captureSession.commitConfiguration() } /* -- Works fine (analyseAudio is printed) Task { self.resultsObserver1.startAudioEngine() } */ self.resultsObserver1.startAudioEngine() // Does not work - analyzeAudio not printed captureSession.sessionPreset = .high try addOutput(movieCapture.output) if isHDRVideoEnabled { setHDRVideoEnabled(true) } updateCaptureCapabilities() }
5
0
377
Oct ’24