Posts

2 Replies · 1.1k Views
I've noticed that contextual strings do not work for on-device speech recognition. I've filed a Feedback entry: FB7496068.

To reproduce:
1. Create a basic app that transcribes speech.
2. Add "Flubbery Dubbery", or a couple of made-up words, to a strings array and assign it to the contextualStrings property of SFSpeechAudioBufferRecognitionRequest.
3. On the recognition request being used, set the requiresOnDeviceRecognition Boolean property to true.
4. Transcribe audio and say the made-up words. Note that the device never transcribes them correctly.
5. Now set requiresOnDeviceRecognition to false.
6. Transcribe audio and say the made-up words again. Note that the device now transcribes them correctly.

Has anyone else run into this problem? I would love a fix.

PS: I noticed that if you add a custom word as a contact in the Contacts app, on-device recognition picks it up. So it seems this is possible, just not implemented quite right.
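For reference, a minimal Swift sketch of the configuration described above. It assumes speech-recognition authorization has already been granted and that audio buffers are appended to the request elsewhere; "en-US" is just an example locale.

import Speech

// Example locale; the issue described above applies regardless of language.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!

let request = SFSpeechAudioBufferRecognitionRequest()
request.contextualStrings = ["Flubbery Dubbery"]  // made-up phrase to bias recognition toward
request.requiresOnDeviceRecognition = true        // flip to false to compare with server-based results

// Keep a reference to the task so it can be monitored or cancelled later.
let task = recognizer.recognitionTask(with: request) { result, _ in
    if let result = result {
        print(result.bestTranscription.formattedString)
    }
}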
1 Reply · 754 Views
When creating a speech recognition app, such as the SpokenWord sample app, and asking for permission, the user sees a message that says: "'SpokenWord' Would Like to Access Speech Recognition. Speech data from this app will be sent to Apple to process your requests. …" However, it is my understanding that setting requiresOnDeviceRecognition to true means that all audio will be processed on device and not sent to or used by Apple. Is this correct? Or is data sent to Apple even if requiresOnDeviceRecognition is set to true?
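For context, here is a minimal sketch (not the full SpokenWord sample) of the on-device setup the question is about. The authorization call is what triggers the quoted prompt; the request below only asks that audio be processed locally, assuming the device supports it.

import Speech

// Requesting authorization triggers the system permission prompt quoted above.
SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized else { return }

    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    let request = SFSpeechAudioBufferRecognitionRequest()

    // Only request on-device processing where the device supports it.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }
}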
2 Replies · 1.1k Views
If you use the Document App template in Xcode to create a SwiftUI app, macOS starts the user off with a new document. This is good; I can work with that to present onboarding UI inside a new document. However, when that same app runs on iOS, the user is instead greeted by the stock document browser for creating or picking a document, which gives me no place to present onboarding information. I did notice that if you add a WindowGroup to the Scene, the app will display that window group instead, but then I don't know how to get the user back to the DocumentGroup's picker UI. Has anyone figured out how to present onboarding on top of a DocumentGroup-based app?
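For reference, a minimal sketch of the scene setup being described, modeled loosely on the Xcode Document App template. MyDocument, ContentView, and OnboardingView are placeholder names; the open question remains how to route from the extra WindowGroup back to the DocumentGroup's picker.

import SwiftUI
import UniformTypeIdentifiers

// Placeholder document type modeled on the Document App template.
struct MyDocument: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text: String

    init(text: String = "Hello, world!") {
        self.text = text
    }

    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }
        text = String(decoding: data, as: UTF8.self)
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

struct ContentView: View {
    @Binding var document: MyDocument
    var body: some View { TextEditor(text: $document.text) }
}

struct OnboardingView: View {
    var body: some View { Text("Welcome! Onboarding goes here.") }
}

@main
struct DocumentOnboardingApp: App {
    var body: some Scene {
        // The template's document scene: a new document on macOS,
        // the stock document browser on iOS.
        DocumentGroup(newDocument: MyDocument()) { file in
            ContentView(document: file.$document)
        }
        // Per the post, adding this extra scene makes iOS show it instead of
        // the browser, with no obvious way back to the DocumentGroup picker.
        WindowGroup("Onboarding") {
            OnboardingView()
        }
    }
}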
2 Replies · 1.1k Views
When using on-device speech recognition through iOS 13's Speech framework, the user sometimes gets the following error: "Error Domain=kAFAssistantErrorDomain Code=1103". The accompanying error message says that the language model is not found. In all of my own experience, if the user tries to recognize speech again after 10-15 minutes, the issue is gone. My assumption is that the error triggers the OS to download the appropriate language model. However, I've gotten reports that for some users, all outside the United States, the issue never goes away. They also tried changing the device's preferred language to the language they were trying to recognize, and the error still did not go away. Is there any official course of action to take when receiving this error? Are there any specific device settings (like language and region on the iPhone or iPad) required for a model to be downloaded?

STEPS TO REPRODUCE
1. Create an on-device speech-to-text app using Apple's Speech framework, setting requiresOnDeviceRecognition to true on the recognition request.
2. Switch to a new language, such as Spanish or Russian, if using an English-language phone.
3. See the error "Error Domain=kAFAssistantErrorDomain Code=1103" surface.

If any engineers read this: please see FB7617776 for additional details and supporting files.
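For reference, a minimal sketch of the setup that surfaces the error, assuming an English-language device asked to perform on-device recognition in another language ("es-ES" is just an example locale):

import Speech

// "es-ES" is only an example; the reports involved various non-US users.
guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "es-ES")),
      recognizer.isAvailable else {
    fatalError("No recognizer available for this locale")
}

let request = SFSpeechAudioBufferRecognitionRequest()
request.requiresOnDeviceRecognition = true  // forces use of the local language model

let task = recognizer.recognitionTask(with: request) { result, error in
    if let error = error {
        let nsError = error as NSError
        if nsError.domain == "kAFAssistantErrorDomain", nsError.code == 1103 {
            // "Language model not found" -- retrying after 10-15 minutes has
            // worked in my experience, presumably once the model downloads.
            print("On-device model missing: \(nsError.localizedDescription)")
        }
    } else if let result = result {
        print(result.bestTranscription.formattedString)
    }
}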
0 Replies · 431 Views
It seems there is no way to dismiss the keyboard when using a TextEditor in iOS. Is that correct?
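One commonly used workaround, sketched below, is to send resignFirstResponder up the responder chain through UIKit. This is not an official SwiftUI API for TextEditor, just an approach that has worked in practice; NotesView and the Done button are illustrative.

import SwiftUI
import UIKit

struct NotesView: View {
    @State private var text = ""

    var body: some View {
        VStack {
            TextEditor(text: $text)

            Button("Done") {
                // Sends resignFirstResponder up the responder chain,
                // which dismisses the keyboard for whichever view has it.
                UIApplication.shared.sendAction(
                    #selector(UIResponder.resignFirstResponder),
                    to: nil, from: nil, for: nil
                )
            }
        }
    }
}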