
Reply to Error throws while using the speech recognition service in my app
The error Code=1101 often has to do with incorrect/incomplete setup of offline dictation on your device. If you set request.requiresOnDeviceRecognition = true, the recognition process uses Apple’s dictation service. The dictation service only works offline if (1) you have a keyboard installed for the same language + region you want the dictation / speech recognition for, (2) Enable Dictation is toggled on, and (3) the dictation language for that language + region has been downloaded by the system. If the above conditions are not met, you will see the 1101 error. Example: If you want offline dictation for "de-DE" (German language for the region Germany), you need to have such a keyboard installed. In the device's Settings / General / Keyboard / Keyboards … make sure the keyboard for your language + region is installed (in our example "German (Germany)"). Further down in General / Keyboard, turn on Enable Dictation. If Dictation is enabled, you see a further entry below called Dictation Languages. Open it to make sure the dictation languages are downloaded (you see a note about the status there). Once the dictation language(s) are downloaded, speech recognition with request.requiresOnDeviceRecognition = true should work for that language/region.
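A minimal sketch of where that flag comes into play (the de-DE locale is just the example from above, not the original poster's code):

import Speech

// Sketch: request on-device (offline) recognition for the de-DE example above;
// swap in your own language + region.
guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "de-DE")) else {
    fatalError("Speech recognition is not supported for this locale")
}
let request = SFSpeechAudioBufferRecognitionRequest()

// With this flag set, recognition goes through the on-device dictation service,
// so the matching keyboard + downloaded dictation language must be present,
// otherwise recognition fails with the 1101 error.
request.requiresOnDeviceRecognition = true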
Feb ’24
Reply to Failure of speech recognition when "supportsOnDeviceRecognition" is set to "True".
The error Code=1101 often has to do with incorrect/incomplete setup of offline dictation on your device. If you set request.requiresOnDeviceRecognition = true, the recognition process uses Apple’s dictation service. The dictation service only works offline if (1) you have a keyboard installed for the same language + region you want the dictation / speech recognition for, (2) Enable Dictation is toggled on, and (3) the dictation language for that language + region has been downloaded by the system. If the above conditions are not met, you will see the 1101 error. Example: If you want offline dictation for "de-DE" (German language for the region Germany), you need to have such a keyboard installed. In the device's Settings / General / Keyboard / Keyboards … make sure the keyboard for your language + region is installed (in our example "German (Germany)"). Further down in General / Keyboard, turn on Enable Dictation. If Dictation is enabled, you see a further entry below called Dictation Languages. Open it to make sure the dictation languages are downloaded (you see a note about the status there). Once the dictation language(s) are downloaded, speech recognition with request.requiresOnDeviceRecognition = true should work for that language/region.
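Related to this thread's title, a small sketch (assumed locale, not the original poster's code) of checking supportsOnDeviceRecognition before forcing offline recognition:

import Speech

// Sketch: only force offline recognition when the recognizer reports support for it.
if let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "de-DE")) {
    let request = SFSpeechAudioBufferRecognitionRequest()
    if recognizer.supportsOnDeviceRecognition {
        // Still depends on the keyboard + downloaded dictation language described above.
        request.requiresOnDeviceRecognition = true
    } else {
        // Fall back to server-based recognition instead of failing with 1101.
        request.requiresOnDeviceRecognition = false
    }
}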
Feb ’24
Reply to AVAudioEngine: audio input does not work on iOS 17 simulator
You need to activate the AudioSession. I added the following line to your code sample from GitHub: try! AVAudioSession.sharedInstance().setActive(true) after the setCategory line. Then it works for me on an iOS 17.2 simulator. I can confirm that your code works on the iOS 16.4 Simulator without setActive. Not sure why, as my understanding is that calling setActive has always been the intended way of doing this.
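A minimal sketch of the session setup, assuming a playAndRecord category (the exact category/options in the linked GitHub sample may differ):

import AVFoundation

// Sketch: configure and explicitly activate the shared audio session
// before starting the AVAudioEngine / installing a tap on the input node.
func configureAudioSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    // Without this line, audio input stays silent on the iOS 17.x simulator
    // (it happened to work without setActive on the iOS 16.4 simulator).
    try session.setActive(true)
}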
Jan ’24
Reply to AVSpeechSynthesizer iOS 15/16 lagging for seconds when switching to (different) German language voice
Today I was testing something unrelated in my SpeechApp on the iOS 16.4 Simulator with Xcode 14.2 (and later 14.3), and to my surprise the change of German voices from one utterance to the next worked as fast as it should, no more 3+ second delay! Cool, has Apple finally solved the bug?! I moved to a device. But no: on a device with iOS 16.4.1 installed, the same issue as always, the delays between utterances with different German voices are back. I reinstalled the app on the device and re-downloaded the German voices used, but no luck.

This is the console output of the app running on device. The delay happens after the first "[AXTTSCommon] Invalid rule:" appears in the console:

Speech Synthesizer - Current utterance voice: Optional("Viktor (Enhanced)") | language: Optional("de-DE")
2023-04-09 13:04:06.172618+0200 SpeechApp[914:31631] [AXTTSCommon] Invalid rule: <----- DELAY HAPPENS AFTER THIS LINE
2023-04-09 13:04:10.052499+0200 SpeechApp[914:31631] [AXTTSCommon] Invalid rule:
2023-04-09 13:04:10.053138+0200 SpeechApp[914:31631] [AXTTSCommon] Invalid rule:
2023-04-09 13:04:10.055567+0200 SpeechApp[914:31631] [AXTTSCommon] Invalid rule:
2023-04-09 13:04:10.113567+0200 SpeechApp[914:31164] [audio] --- SpeechSynthesizer Delegate - did START speaking utterance.

This is the console output of the Simulator. It shows only one line with "[AXTTSCommon] Invalid rule:" and moves over it quickly, without any delay:

Speech Synthesizer - Current utterance voice: Optional("Viktor (Enhanced)") | language: Optional("de-DE")
2023-04-09 13:01:59.764986+0200 SpeechApp[7145:111421] [AXTTSCommon] Invalid rule:
2023-04-09 13:01:59.778640+0200 SpeechApp[7145:108690] [audio] --- SpeechSynthesizer Delegate - did START speaking utterance.

Can anybody confirm that switching German voices between utterances works correctly on the Simulator while still showing unacceptable delays between utterances on device?
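For reference, a sketch (assumed, not the actual SpeechApp code) of the kind of voice switch between consecutive utterances that triggers the delay on device:

import AVFoundation

let synthesizer = AVSpeechSynthesizer()

// Pick two different installed de-DE voices (e.g. Viktor and Anna, if downloaded).
let germanVoices = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == "de-DE" }

func speak(_ text: String, with voice: AVSpeechSynthesisVoice?) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = voice
    synthesizer.speak(utterance)
}

// On device (iOS 15/16) the second utterance starts 3+ seconds late because it
// uses a different German voice; on the iOS 16.4 Simulator there is no delay.
speak("Erster Satz.", with: germanVoices.first)
speak("Zweiter Satz.", with: germanVoices.dropFirst().first)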
Apr ’23
Reply to userDidAcceptCloudKitShareWith Not Called on App Launch
@Steepz I found a solution. Use the scene(_:willConnectTo:options:) Scene Delegate callback and extract the share metadata like so:

func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
    guard let cloudKitShareMetadata = connectionOptions.cloudKitShareMetadata else { return }
    // Do something with the cloudKitShareMetadata...
}
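One way that metadata can then be handled, as a sketch assuming a Core Data + CloudKit setup where persistentContainer and sharedPersistentStore are placeholders for your own NSPersistentCloudKitContainer and its shared store (as in Apple's sample code):

import CoreData
import CloudKit

// Sketch: accept the share invitation extracted in scene(_:willConnectTo:options:).
func accept(_ cloudKitShareMetadata: CKShare.Metadata,
            persistentContainer: NSPersistentCloudKitContainer,
            sharedPersistentStore: NSPersistentStore) {
    persistentContainer.acceptShareInvitations(from: [cloudKitShareMetadata],
                                               into: sharedPersistentStore) { _, error in
        if let error = error {
            print("Failed to accept share invitation: \(error)")
        }
    }
}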
Feb ’23
Reply to userDidAcceptCloudKitShareWith Not Called on App Launch
@Steepz Have you found a solution? I have the same issue and cannot find a way around it. Even the latest Apple sample code behaves like that, i.e. it does not trigger userDidAcceptCloudKitShareWith in the SceneDelegate on the app's first/cold launch. On subsequent launches the delegate method gets triggered!? (https://developer.apple.com/documentation/coredata/sharing_core_data_objects_between_icloud_users)
Feb ’23
Reply to AVSpeechSynthesizer iOS 15/16 lagging for seconds when switching to (different) German language voice
Hi @georgbachmann, I have not found a solution yet. Apple requested additional info two weeks ago (a sysdiagnose file with a Siri logging profile installed). No further communication from Apple so far. It would be really(!) helpful if you could file a Feedback regarding your (same) issue, referencing my Feedback ID (FB11380447). Thanks! Regarding the logs: I connected the device to the Mac running Xcode and then selected the device in Console. I suggest we keep each other updated on the issue here in this forum. Best, Klaus.
Oct ’22
Reply to NSPersistentCloudKitContainer - how to reset CoreData+CloudKit after failed automatic migration (while still in development environment)
Conclusions: After moving "back in time" by checking out a very early build of my app (with an early version of the managed object model / database description), installing that app version on the simulator, and then moving forward to the latest build, the console reported a successful automatic migration of the CoreData database. Even after that success, the "Skipping migration" error showed up again. I filed a feedback report with Apple and got the answer that this "skipping migration" console output is normal/expected behaviour. Case closed for me.
Apr ’21