I’m also seeing this. The obvious answer is that my code is doing what the AVAudioEngine documentation warns against — deallocating within the notification handler.
But while I do deallocate the engine as a result of the notification, I do it asynchronously on another queue, so I can't figure out why I'm getting this deadlock.
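For reference, here's roughly the shape of what I'm doing (a simplified sketch with made-up names, assuming the configuration-change notification is the one in question):
import AVFoundation

final class Recorder {
    private var engine: AVAudioEngine? = AVAudioEngine()
    private let teardownQueue = DispatchQueue(label: "recorder.teardown") // label made up
    private var observer: NSObjectProtocol?

    init() {
        observer = NotificationCenter.default.addObserver(
            forName: .AVAudioEngineConfigurationChange,
            object: engine,
            queue: nil
        ) { [weak self] _ in
            // Per the docs' warning, don't deallocate inside the handler;
            // hop to another queue before stopping and releasing the engine.
            self?.teardownQueue.async {
                self?.engine?.stop()
                self?.engine = nil
            }
        }
    }
}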
The answer to this, ultimately, was to ditch RemoteIO and use AVAudioEngine. It has a TON of its own quirks (have fun hunting down crashes when you make even the tiniest mistake with connections, formats, or engine lifecycle), but if you respond to the engine reset event it mostly keeps working when Sound Recognition is toggled on and off.
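In case it helps anyone, a minimal sketch of the reset handling (the single input-to-mixer connection is a stand-in for your own node graph):
import AVFoundation

// A sketch: rebuild the graph and restart whenever the engine resets itself,
// which is what happens when Sound Recognition is toggled.
func observeEngineReset(_ engine: AVAudioEngine) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: .main
    ) { _ in
        engine.stop()
        // Reconnect using the possibly-changed hardware format.
        let format = engine.inputNode.outputFormat(forBus: 0)
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
        do { try engine.start() } catch { print("engine restart failed: \(error)") }
    }
}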
Ok, after some investigation it looks like kLSRErrorDomain/301 is new in iOS 15 and is reported when you cancel a speech recognition request, but only on newer devices (specifically, devices with a CoreML-compatible neural engine: A12 Bionic and newer).
So it looks like iOS 15 changes the way speech recognition is handled by the neural engine, resulting in a new error code. Cancelling tasks previously seemed to result in kAFAssistantErrorDomain/216; 216 is still present in iOS 15 too, though I'm not sure of the details.
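If you just need to ignore these errors when tearing down a task, filtering on the raw domain and code works (a sketch; the string domains are the ones observed above, not public constants):
import Speech

func isCancellationError(_ error: Error) -> Bool {
    let nsError = error as NSError
    return (nsError.domain == "kLSRErrorDomain" && nsError.code == 301)
        || (nsError.domain == "kAFAssistantErrorDomain" && nsError.code == 216)
}

// In the recognition task's result handler:
// if let error, isCancellationError(error) { return } // expected after cancel()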
It looks like you can work around this by wrapping the call in a closure instead of passing the function reference directly:
Button(action: { cancelled() }) { Text("Cancel") }
Here’s the only code in my codebase that uses TaskGroup.
// Function that processes a bunch of 'action triggers' and produces a stream of actions
func run(action: Action, find: @escaping (Model.Type) -> Model?) -> AsyncStream<Action> {
    AsyncStream { continuation in
        Task {
            await withTaskGroup(of: Void.self) { group in
                for trigger in triggers {
                    group.addTask {
                        for await result in trigger(action, find) {
                            if result is VoidAction { continue }
                            continuation.yield(result)
                        }
                    }
                }
                await group.waitForAll()
                continuation.finish()
            }
        }
    }
}
// Later, from an async context. Process an action and dispatch its output.
Task {
    for await output in run(action: action, find: { store.find($0) }) {
        try await store.dispatch(output)
    }
}
I was able to solve the crash by explicitly marking both of the task group's closures as @MainActor, but I'd love a deeper understanding of why the original version doesn't work.
await withTaskGroup(of: Void.self) { @MainActor group in
    for trigger in triggers {
        group.addTask { @MainActor in
            for await result in trigger(action, find) {
                if result is VoidAction { continue }
                continuation.yield(result)
            }
        }
    }
    await group.waitForAll()
    continuation.finish()
}
You may be getting the navigation controller's gesture handling into a state it isn't intended to handle. I had a similar (or perhaps the same) issue: when swiping back on the root screen (which does nothing visible, but is nevertheless possible, and as it turns out is something I do often while trying to scroll vertically), the controller would mess up the transition of the next navigation operation, appearing to 'freeze' until the app was moved to the background and brought back. I solved this by implementing the gestureRecognizerShouldBegin delegate method and rejecting the gesture while on the root screen. Maybe worth trying for your case?
You can see an example of doing this here: https://github.com/siteline/swiftui-introspect/blob/main/Tests/UITestsHostApp/StatusBarStyle/NavigationView.swift#L42.
func gestureRecognizerShouldBegin(_: UIGestureRecognizer) -> Bool {
    viewControllers.count > 1
}
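For completeness, a sketch of the wiring, assuming a UINavigationController subclass (the class name is made up):
import UIKit

final class RootSwipeBlockingNavigationController: UINavigationController, UIGestureRecognizerDelegate {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Take over the back-swipe gesture's delegate so we can veto it.
        interactivePopGestureRecognizer?.delegate = self
    }

    func gestureRecognizerShouldBegin(_ gestureRecognizer: UIGestureRecognizer) -> Bool {
        // Reject the back-swipe when there's nowhere to go back to.
        viewControllers.count > 1
    }
}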
The only other piece of information I have is the crash_info_entry_0, which is as follows:
AttributeGraph/Attribute.swift:473: Fatal error: attempting to create attribute with no subgraph: UpdateAlertActions
Ok, I finally saw a runtime warning in debug when presenting the alert, which is something to go on. But why would this warning be issued? My view model is marked @MainActor.
Publishing changes from within view updates is not allowed, this will cause undefined behavior.
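The usual workaround seems to be deferring the published mutation out of the current view update (a sketch; AlertViewModel and showAlert are stand-ins for my actual types), but that doesn't explain why the alert path triggers it in the first place:
import Combine
import Foundation

@MainActor
final class AlertViewModel: ObservableObject {
    @Published var showAlert = false // stand-in for my actual property

    func requestAlert() {
        // Setting showAlert synchronously can land inside a view update;
        // deferring to the next run-loop turn avoids the warning.
        DispatchQueue.main.async { [weak self] in
            self?.showAlert = true
        }
    }
}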
@iaborodin Did you get anywhere with this? I’m looking into fixes and workarounds now with some urgency. Happy to chat, you can reach me at 34534543456789098767654 @ mailer.city.
@tom63001 You can check the segments' timestamp and duration properties and use that as a proxy for whether what you've received is final.
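Roughly, a sketch of that heuristic (it assumes segment timestamps stay at 0 until the recognizer has settled, which is observed rather than documented behavior):
import Speech

func looksFinal(_ result: SFSpeechRecognitionResult) -> Bool {
    // Segments report timestamp/duration of 0 while still volatile.
    guard let last = result.bestTranscription.segments.last else { return false }
    return last.timestamp > 0 && last.duration > 0
}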
Unfortunately this is all even more broken on iOS 18, where the bestTranscription just gets randomly erased after every pause in speech...