Posts

11 Replies
1k Views
I've been working on an iOS project for the iPhone and would like to support running it on macOS computers with Apple Silicon. In the target's Supported Destinations we added "Mac (Designed for iPhone)", but the app crashes immediately with Thread 1: EXC_BAD_ACCESS when we try to run it. We've isolated it down to Stepper UI elements in our view. Starting a new project and presenting a single Stepper in the ContentView produces the same crash.

Here is code that shows the issue:

// ContentView.swift
import SwiftUI

struct ContentView: View {
    @State var someValue = 5

    var body: some View {
        VStack {
            Stepper("Stepper", value: $someValue, in: 0...10)
        }
    }
}

When run from Xcode on an iOS device or the simulator, it runs fine. Trying to run it on the Mac, it crashes here:

// Stepper_01App.swift
import SwiftUI

@main // <-- Thread 1: EXC_BAD_ACCESS (code=2, address=0x16a643f70)
struct Stepper_01App: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

Xcode 14.3 (14E222b), macOS Ventura 13.3.1 (a), Mac mini M2. Target: Mac (Designed for iPhone).

We have verified that the same code crashes on all the Apple Silicon Macs we have access to. Searching the Internet and the Apple Developer forums, I don't find other reports, so I suspect there is some level of user error or system/project misconfiguration going on. If any iOS app that used Steppers simply crashed when run on a Mac, that would be a big deal. If anyone has input or can point out what we need to do differently, it would be appreciated!
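In case it helps anyone trying to reproduce this, here is the minimal workaround sketch we are experimenting with in the meantime: it swaps the Stepper for a plain pair of buttons when the iOS build is running on a Mac, using ProcessInfo.processInfo.isiOSAppOnMac. This is only our stopgap idea, not a fix for the underlying crash.

// ContentView.swift (workaround sketch, not a fix)
import SwiftUI

struct ContentView: View {
    @State var someValue = 5

    var body: some View {
        VStack {
            if ProcessInfo.processInfo.isiOSAppOnMac {
                // Avoid Stepper on "Mac (Designed for iPhone)" until the crash is understood.
                HStack {
                    Text("Stepper: \(someValue)")
                    Button("-") { someValue = max(0, someValue - 1) }
                    Button("+") { someValue = min(10, someValue + 1) }
                }
            } else {
                Stepper("Stepper", value: $someValue, in: 0...10)
            }
        }
    }
}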
5 Replies
1.8k Views
We recently started working on getting an iOS app to work on Macs with Apple Silicon as a "Designed for iPhone" app and are having issues with speech synthesis. Specifically, voices returned by AVSpeechSynthesisVoice.speechVoices() do not all work on the Mac. When we build an utterance and attempt to speak, the synthesizer falls back on a default voice and says some very odd text about voice parameters (that is not in the utterance speech text) before it does say the intended speech.

Here is some sample code to set up the utterance and speak:

func speak(_ text: String, _ settings: AppSettings) {
    let utterance = AVSpeechUtterance(string: text)
    if let voice = AVSpeechSynthesisVoice(identifier: settings.selectedVoiceIdentifier) {
        utterance.voice = voice
        print("speak: voice assigned \(voice.audioFileSettings)")
    } else {
        print("speak: voice error")
    }
    utterance.rate = settings.speechRate
    utterance.pitchMultiplier = settings.speechPitch

    do {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        self.synthesizer.speak(utterance)
        return
    } catch let error {
        print("speak: Error setting up AVAudioSession: \(error.localizedDescription)")
    }
}

When running the app on the Mac, this is the kind of error we get with "com.apple.eloquence.en-US.Rocko" as the selectedVoiceIdentifier:

speak: voice assigned [:]
2023-05-29 18:00:14.245513-0700 A.I.[9244:240554] [aqme] AQMEIO_HAL.cpp:742 kAudioDevicePropertyMute returned err 2003332927
2023-05-29 18:00:14.410477-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
(the "Could not retrieve voice" line repeats five more times over the next few milliseconds)
2023-05-29 18:00:14.416804-0700 A.I.[9244:240554] [AXTTSCommon] Audio Unit failed to start after 5 attempts.
2023-05-29 18:00:14.416974-0700 A.I.[9244:240554] [AXTTSCommon] VoiceProvider: Could not start synthesis for request SSML Length: 140, Voice: [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600002c29590] <speak><voice name="com.apple.eloquence.en-US.Rocko">How much wood would a woodchuck chuck if a wood chuck could chuck wood?</voice></speak> language: en-US footprint: premium rate: 0.500000 pitch: 1.000000 volume: 1.000000
2023-05-29 18:00:14.428421-0700 A.I.[9244:240360] [VOTSpeech] Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha

When we run AVSpeechSynthesisVoice.speechVoices(), "com.apple.eloquence.en-US.Rocko" is absolutely in the list, yet it fails to speak properly. Notice that the line

print("speak: voice assigned \(voice.audioFileSettings)")

shows:

speak: voice assigned [:]

An empty .audioFileSettings seems to be the common factor for the voices that do not work properly on the Mac. For voices that do work, we see this kind of output in .audioFileSettings:

speak: voice assigned ["AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 16, "AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 0, "AVSampleRateKey": 22050, "AVLinearPCMIsNonInterleaved": 0, "AVNumberOfChannelsKey": 1]

So we added a function to check the .audioFileSettings for each voice returned by AVSpeechSynthesisVoice.speechVoices():

// The voices are set in init(), paired with a count we fill in later:
var voices = AVSpeechSynthesisVoice.speechVoices().map { ($0, 0) }
...
func checkVoices() {
    DispatchQueue.global().async { [weak self] in
        guard let self = self else { return }
        // Record how many keys each voice reports in its audioFileSettings.
        let checkedVoices = self.voices.map { ($0.0, $0.0.audioFileSettings.count) }
        DispatchQueue.main.async {
            self.voices = checkedVoices
        }
    }
}

That looks simple enough, and it does identify which voices have no data in their .audioFileSettings. But we have to run it asynchronously because on a real iPhone it takes more than 9 seconds and produces a tremendous amount of error spew to the console:

2023-06-02 10:56:59.805910-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:56:59.971435-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.122976-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.144430-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\\pause=\d+\\)?Florida)\b, message=unrecognized character follows \, characterPosition=1
2023-06-02 10:57:00.147993-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000
2023-06-02 10:57:00.148036-0700 A.I.[17186:910116] [AXTTSCommon] Error loading rules: 2147483648
... This goes on and on and on ...

There must be a better way?
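For what it's worth, the best we've come up with so far is a sketch along these lines: whenever the selected identifier reports an empty audioFileSettings, fall back to the default voice for the current language instead of letting the synthesizer fail. This rests on our own assumption that an empty dictionary predicts the failure; it isn't documented behavior.

// Sketch: skip voices whose audioFileSettings is empty (our heuristic, not documented behavior).
import AVFoundation

func usableVoice(for identifier: String) -> AVSpeechSynthesisVoice? {
    guard let voice = AVSpeechSynthesisVoice(identifier: identifier) else { return nil }
    if voice.audioFileSettings.isEmpty {
        // Fall back to the default voice for the current language.
        return AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
    }
    return voice
}

We would then assign utterance.voice = usableVoice(for: settings.selectedVoiceIdentifier) in speak(_:_:), but we'd still like to understand why these voices misbehave in the first place.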
0 Replies
713 Views
We have an AppIntent that starts streaming data in its perform() function with a URLSession. This may be a quick operation, or it may take some time (more than 30 seconds but less than a minute). Is there any way we can keep that streaming URLSession active while the AppIntent asks the user to continue with requestConfirmation?

What we have seen so far is that any interaction with the user inside perform() causes the URLSession to be abruptly terminated with an NSURLErrorNetworkConnectionLost error when the app is not in the foreground. If the app is currently running in the foreground, the session remains active and data continues to stream in. Sadly, our primary use case is for the Siri/Shortcuts interaction to happen with openAppWhenRun set to false, without requiring the user to open the app. In that case (with the AppIntent invoked while the app is in the background) the network connection is dropped. It has been frustrating in initial development because on the simulator the connection is not dropped and data continues to stream in, even while the app is in the background. On a physical device this is not the case. The only condition under which we have found the network connection to be maintained is with the app in the foreground when the AppIntent runs.

Here is what we have now:

struct AskAI: AppIntent {
    static var title: LocalizedStringResource = "Ask"
    static var description: IntentDescription = IntentDescription("This will ask the A.I. app")
    static var openAppWhenRun = false

    @Parameter(title: "Prompt", description: "The prompt to send", requestValueDialog: IntentDialog("What would you like to ask?"))
    var prompt: String

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView & ReturnsValue<String> {
        var continuationCalled = false

        // Start the streaming data URLSession task
        Task<String, Never> {
            await withCheckedContinuation { continuation in
                Brain.shared.requestIntentStream(prompt: prompt,
                                                 model: Brain.shared.appSettings.textModel,
                                                 timeoutInterval: TimeInterval(Brain.shared.appSettings.requestTimeout)) { result in
                    if !continuationCalled {
                        continuationCalled = true
                        continuation.resume(returning: Brain.stripMarkdown(result))
                    }
                }
            }
        }

        // Start the intentTimeout timer and early out if continuationCalled changed
        let startTime = Date()
        let timeout = Brain.shared.appSettings.intentTimeout
        while !continuationCalled && Date().timeIntervalSince(startTime) < timeout {
            try? await Task.sleep(nanoseconds: 1_000_000_000)
        }

        // At this point either the intentTimeout was reached (data still streaming)
        // or continuationCalled is true (data stream complete).
        // Best effort for Siri to read the first part and continue as more is received.
        var allReadResponse = ""
        var partialResponse = ""
        while !continuationCalled {
            partialResponse = Brain.shared.responseText.replacingOccurrences(of: allReadResponse, with: "")
            allReadResponse += partialResponse
            do {
                let dialogResponse = partialResponse + " --- There is more, would you like to continue?"
                // THIS WILL TERMINATE THE URLSession if the app is not in the foreground!
                try await requestConfirmation(result: .result(dialog: "\(dialogResponse)") {
                    AISnippetView()
                })
            } catch {
                return .result(
                    value: Brain.shared.responseText,
                    dialog: "", // user cancelled, return what we have so far but we've already spoken the dialog
                    view: AISnippetView()
                )
            }
        }

        // Read the last part (or the whole thing if it was retrieved within the intentTimeout)
        let remainingResponse = Brain.shared.responseText.replacingOccurrences(of: allReadResponse, with: "")
        return .result(
            value: Brain.shared.responseText,
            dialog: "\(remainingResponse)",
            view: AISnippetView()
        )
    }
}

With this logic, Siri will read the first part of the response data when the timer expires and continuationCalled is false. The data is still streaming and will continue to come in while she is speaking, but ONLY IF THE APP IS IN THE FOREGROUND. Otherwise the call to requestConfirmation terminates the connection. Is there any way to get the task with the requestIntentStream URLSession to stay active?
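The interim workaround we're testing is simply to skip the confirmation loop when the app isn't frontmost, so nothing interrupts the session while it's streaming. A rough sketch is below; it assumes the intent runs inside the main app process, and it obviously gives up the "would you like to continue?" interaction in the background case.

// Sketch: only enter the confirmation loop when prompting won't kill our streaming session.
import UIKit

@MainActor
func canPromptWithoutKillingStream() -> Bool {
    // In our testing, requestConfirmation only leaves the streaming URLSession
    // alive when the app is frontmost.
    UIApplication.shared.applicationState == .active
}

In perform() the loop condition becomes while !continuationCalled && canPromptWithoutKillingStream() { ... }, and in the background case we fall through and wait for the stream to finish before returning a single result. We'd still prefer a way to keep the session alive through requestConfirmation.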
3 Replies
1.3k Views
We have a simple AppIntent to let users ask a question in our app. The intent has a single parameter, Prompt, which is retrieved by a requestValueDialog. Users have reported that when using Siri, the dialog for "What would you like to ask?" appears, but if they respond with phrases such as "What is the last album by Sting" it just presents the dialog for the prompt again. Testing it, I find that I can reproduce the behavior if I include words like "recent" or "last". Just providing those words in isolation causes the dialog to be presented over and over again.

Using the same intent from Shortcuts does not seem to have that limitation; it only happens when providing the spoken words for speech recognition. All of that interaction happens outside our code, so I can't see any way to debug or identify why the prompts are being rejected.

Is there a different way to specify the @Parameter to indicate that prompt: String can include any arbitrary text, including "recent" or "last"? My hunch is that those words are triggering a system response that is stepping on the requestValueDialog.

Here are the basics of how the intent and parameter are set up:

struct AskAI: AppIntent {
    static var title: LocalizedStringResource = "Ask"
    static var description: IntentDescription = IntentDescription("This will ask the A.I. app")
    static var openAppWhenRun = false

    @Parameter(title: "Prompt", description: "The prompt to send", requestValueDialog: IntentDialog("What would you like to ask?"))
    var prompt: String

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
        var response = ""
        ...
        response = "You asked: \"\(prompt)\" \n"
        ...
        return .result(dialog: "\(response)")
    }

    static var parameterSummary: some ParameterSummary {
        Summary("Ask \(\.$prompt)")
    }
}
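One experiment I'm planning, in case it prompts a better suggestion: make the parameter optional and hand the prompting back to the system from inside perform() via the projected value's needsValueError(), so at least the resolution round trip hits a point I can break on and log. I don't know whether this sidesteps whatever is intercepting "recent" and "last"; it's just a sketch.

// Sketch: optional parameter, with the re-prompt triggered from perform().
@Parameter(title: "Prompt", description: "The prompt to send")
var prompt: String?

@MainActor
func perform() async throws -> some IntentResult & ProvidesDialog {
    guard let prompt, !prompt.isEmpty else {
        // Ask the system to collect the value for this parameter;
        // unverified whether this changes how Siri treats "recent"/"last".
        throw $prompt.needsValueError("What would you like to ask?")
    }
    return .result(dialog: "You asked: \"\(prompt)\"")
}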
2 Replies
931 Views
The talk describes a Default View displayed whenever my intent is running (7:32). I never see any view other than the Siri interaction at the bottom. Are there other requirements for getting the Default View to display?

I have experimented with adding a custom snippet view to the result:

func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
    ...
    return .result(
        dialog: "\(response)",
        view: IntentSnippetView(prompt)
    )
}

That does display, but it has way too much information. My IntentSnippetView only has a single Text("Hello World"), yet the view shows the entire contents of the response value as well. Is there any way to limit what actually shows up in the result snippet view? Is there any way to get the described Default View to show while the intent is running?

The talk also suggests custom views can be shown at Intent Confirmation and Value Confirmation. In my case neither of those seems applicable, since those confirmation steps are not used. I'd really like to have at least the Default View, or be able to make the result view less overwhelming. I've looked through the documentation but don't find much about custom widget views or intent views - any pointers would be appreciated!

Here's my code, just in case it's clear I'm doing something wrong:

import AppIntents

struct AskMyApp: AppIntent {
    static var title: LocalizedStringResource = "Ask"
    static var description: IntentDescription = IntentDescription("This will ask MyApp")
    static var openAppWhenRun = false

    @Parameter(title: "Prompt", description: "The prompt to send", requestValueDialog: IntentDialog("What would you like to ask?"))
    var prompt: String

    @MainActor
    func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
        let response = ViewModel.shared.sendIntentPrompt(newPrompt: prompt)
        return .result(
            dialog: "\(response)",
            view: AskMyAppSnippetView(prompt: prompt)
        )
    }

    static var parameterSummary: some ParameterSummary {
        Summary("Ask \(\.$prompt)")
    }
}

import SwiftUI

struct AskMyAppSnippetView: View {
    let prompt: String

    var body: some View {
        Text("Hello World")
    }
}
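For completeness, the direction I'm currently experimenting with is the IntentDialog(full:supporting:) initializer, on the assumption that the "full" text is what Siri speaks while the shorter "supporting" text is what appears alongside the snippet. I haven't confirmed that this actually trims what is rendered next to the view, so treat it as a sketch:

// Sketch: give Siri the full response to speak, but keep the visual dialog line short.
@MainActor
func perform() async throws -> some IntentResult & ProvidesDialog & ShowsSnippetView {
    let response = ViewModel.shared.sendIntentPrompt(newPrompt: prompt)
    return .result(
        // Assumption: "full" is spoken, "supporting" is the short line shown with the snippet.
        dialog: IntentDialog(full: "\(response)", supporting: "Answer from MyApp"),
        view: AskMyAppSnippetView(prompt: prompt)
    )
}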