Recognize spoken words in recorded or live audio using Speech.

Speech Documentation

Posts under Speech tag

67 Posts
Post not yet marked as solved
0 Replies
300 Views
According to Apple's documentation, the didCancel delegate method should be called after a stopSpeaking(at:) call, but didFinish is being called instead. From what I've checked so far, it works correctly on iOS 13.2.2 but not after iOS 15. Is there anything I'm missing in my configuration? It worked on earlier versions without any extra setup.
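For reference, a minimal sketch of the scenario being described (class and method names are illustrative, not from the original post). Per the documentation, stopSpeaking(at:) should trigger the didCancel callback rather than didFinish:

import AVFoundation

// Sketch: speak, then interrupt, and observe which delegate callback fires.
final class SpeechStopObserver: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speakThenStop() {
        synthesizer.speak(AVSpeechUtterance(string: "A long sentence that will be interrupted."))
        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            // Per the documentation, this should result in didCancel, not didFinish.
            _ = self.synthesizer.stopSpeaking(at: .immediate)
        }
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didCancel utterance: AVSpeechUtterance) {
        print("didCancel")   // expected after stopSpeaking(at:)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        print("didFinish")   // reportedly called instead on iOS 15 and later
    }
}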
Posted Last updated
.
Post not yet marked as solved
1 Replies
542 Views
Hello, I have been struggling to resolve the issue in the question above. The utterance is spoken while my iPhone's screen is on, but once the phone goes to the background (screen off), it no longer speaks. I believe playing audio or speaking an utterance in the background should be possible, since YouTube can keep playing music in the background. Any help, please?
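A minimal sketch of the usual setup for speech that continues in the background, assuming the Audio background mode is enabled for the target (UIBackgroundModes includes "audio" in Info.plist); this is a sketch of the common approach, not a guaranteed fix:

import AVFoundation

// Sketch: keep speaking after the screen locks, assuming the "audio" background mode is enabled.
final class BackgroundSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        do {
            let session = AVAudioSession.sharedInstance()
            // .playback allows audio to keep running while the app is in the background.
            try session.setCategory(.playback, mode: .spokenAudio, options: [])
            try session.setActive(true)
        } catch {
            print("Audio session error: \(error)")
        }
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}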
Posted
by godtaehee.
Last updated
.
Post not yet marked as solved
5 Replies
1.8k Views
We recently started working on getting an iOS app to work on Macs with Apple Silicon as a "Designed for iPhone" app and are having issues with speech synthesis. Specifically, voices returned by AVSpeechSynthesisVoice.speechVoices() do not all work on the Mac. When we build an utterance and attempt to speak, the synthesizer falls back on a default voice and says some very odd text about voice parameters (that is not in the utterance speech text) before it says the intended speech. Here is some sample code to set up the utterance and speak:

func speak(_ text: String, _ settings: AppSettings) {
    let utterance = AVSpeechUtterance(string: text)
    if let voice = AVSpeechSynthesisVoice(identifier: settings.selectedVoiceIdentifier) {
        utterance.voice = voice
        print("speak: voice assigned \(voice.audioFileSettings)")
    } else {
        print("speak: voice error")
    }
    utterance.rate = settings.speechRate
    utterance.pitchMultiplier = settings.speechPitch
    do {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        self.synthesizer.speak(utterance)
        return
    } catch let error {
        print("speak: Error setting up AVAudioSession: \(error.localizedDescription)")
    }
}

When running the app on the Mac, this is the kind of error we get with "com.apple.eloquence.en-US.Rocko" as the selectedVoiceIdentifier:

speak: voice assgined [:]
2023-05-29 18:00:14.245513-0700 A.I.[9244:240554] [aqme] AQMEIO_HAL.cpp:742 kAudioDevicePropertyMute returned err 2003332927
2023-05-29 18:00:14.410477-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.412837-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.413774-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.414661-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.415544-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416384-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416804-0700 A.I.[9244:240554] [AXTTSCommon] Audio Unit failed to start after 5 attempts.
2023-05-29 18:00:14.416974-0700 A.I.[9244:240554] [AXTTSCommon] VoiceProvider: Could not start synthesis for request SSML Length: 140, Voice: [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600002c29590] <speak><voice name="com.apple.eloquence.en-US.Rocko">How much wood would a woodchuck chuck if a wood chuck could chuck wood?</voice></speak> language: en-US footprint: premium rate: 0.500000 pitch: 1.000000 volume: 1.000000
2023-05-29 18:00:14.428421-0700 A.I.[9244:240360] [VOTSpeech] Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha

When we run AVSpeechSynthesisVoice.speechVoices(), "com.apple.eloquence.en-US.Rocko" is absolutely in the list, but it fails to speak properly. Notice that the line

print("speak: voice assigned \(voice.audioFileSettings)")

shows

speak: voice assigned [:]

An empty .audioFileSettings seems to be a common factor for the voices that do not work properly on the Mac. For voices that do work, we see this kind of output and values in .audioFileSettings:

speak: voice assigned ["AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 16, "AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 0, "AVSampleRateKey": 22050, "AVLinearPCMIsNonInterleaved": 0, "AVNumberOfChannelsKey": 1]

So we added a function to check the .audioFileSettings for each voice returned by AVSpeechSynthesisVoice.speechVoices():

// The voices are set in init():
var voices = AVSpeechSynthesisVoice.speechVoices()
...
func checkVoices() {
    DispatchQueue.global().async { [weak self] in
        guard let self = self else { return }
        let checkedVoices = self.voices.map { ($0.0, $0.0.audioFileSettings.count) }
        DispatchQueue.main.async {
            self.voices = checkedVoices
        }
    }
}

That looks simple enough, and it does work to identify which voices have no data in their .audioFileSettings. But we have to run it asynchronously because on a real iPhone it takes more than 9 seconds and produces a tremendous amount of error spew in the console:

2023-06-02 10:56:59.805910-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:56:59.971435-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.122976-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.144430-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\\pause=\d+\\)?Florida)\b, message=unrecognized character follows \, characterPosition=1
2023-06-02 10:57:00.147993-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000
2023-06-02 10:57:00.148036-0700 A.I.[17186:910116] [AXTTSCommon] Error loading rules: 2147483648
... This goes on and on and on ...

There must be a better way?
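As a stopgap, a sketch of the filtering approach hinted at above: treat an empty audioFileSettings dictionary as a sign the voice will not render. This is an observed heuristic from the post, not documented behavior, and the enumeration is kept off the main thread because it can be slow on device:

import AVFoundation

// Heuristic sketch: keep only voices whose audioFileSettings is non-empty,
// since empty settings correlated with the silent/fallback voices described above.
func usableVoices(completion: @escaping ([AVSpeechSynthesisVoice]) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        let voices = AVSpeechSynthesisVoice.speechVoices()
            .filter { !$0.audioFileSettings.isEmpty }
        DispatchQueue.main.async { completion(voices) }
    }
}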
Posted Last updated
.
Post not yet marked as solved
1 Replies
530 Views
Hi, when attempting to use my Personal Voice with AVSpeechSynthesizer while the application is in the background, I receive the message below: > Cannot use AVSpeechSynthesizerBufferCallback with Personal Voices, defaulting to output channel. Other voices can be used without issue. Is this a published limitation of Personal Voice within applications, i.e. no background playback?
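For context, a minimal sketch (iOS 17+) of how a Personal Voice is requested and selected in the foreground; whether it can also be used from the background is exactly the open question above:

import AVFoundation

// Sketch: request Personal Voice access, then speak with the first Personal Voice found.
func speakWithPersonalVoice(_ text: String, using synthesizer: AVSpeechSynthesizer) {
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }
        DispatchQueue.main.async {
            let personalVoice = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personalVoice   // nil falls back to the default voice
            synthesizer.speak(utterance)
        }
    }
}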
Posted
by kauai.
Last updated
.
Post not yet marked as solved
2 Replies
1.7k Views
Crash - 1:
Fatal Exception: NSRangeException
0 CoreFoundation 0x9e38 __exceptionPreprocess
1 libobjc.A.dylib 0x178d8 objc_exception_throw
2 CoreFoundation 0x1af078 -[__NSCFString characterAtIndex:].cold.1
3 CoreFoundation 0x1a44c -[CFPrefsPlistSource synchronize]
4 UIKitCore 0x1075f68 -[UIPredictionViewController predictionView:didSelectCandidate:]
5 TextInputUI 0x2461c -[TUIPredictionView _didRecognizeTapGesture:]
6 UIKitCore 0xbe180 -[UIGestureRecognizerTarget _sendActionWithGestureRecognizer:]
7 UIKitCore 0x42c050 _UIGestureRecognizerSendTargetActions
8 UIKitCore 0x1a5a18 _UIGestureRecognizerSendActions
9 UIKitCore 0x86274 -[UIGestureRecognizer _updateGestureForActiveEvents]
10 UIKitCore 0x132348 _UIGestureEnvironmentUpdate
11 UIKitCore 0x9ba418 -[UIGestureEnvironment _deliverEvent:toGestureRecognizers:usingBlock:]
12 UIKitCore 0xf6df4 -[UIGestureEnvironment _updateForEvent:window:]
13 UIKitCore 0xfb760 -[UIWindow sendEvent:]
14 UIKitCore 0xfaa20 -[UIApplication sendEvent:]
15 UIKitCore 0xfa0d8 __dispatchPreprocessedEventFromEventQueue
16 UIKitCore 0x141e00 __processEventQueue
17 UIKitCore 0x44a4f0 __eventFetcherSourceCallback
18 CoreFoundation 0xd5f24 CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION
19 CoreFoundation 0xe22fc __CFRunLoopDoSource0
20 CoreFoundation 0x661c0 __CFRunLoopDoSources0
21 CoreFoundation 0x7bb7c __CFRunLoopRun
22 CoreFoundation 0x80eb0 CFRunLoopRunSpecific
23 GraphicsServices 0x1368 GSEventRunModal
24 UIKitCore 0x3a1668 -[UIApplication _run]
25 UIKitCore 0x3a12cc UIApplicationMain
============================================================
Crash - 2:
Crashed: com.apple.root.background-qos
0 libobjc.A.dylib 0x1c20 objc_msgSend + 32
1 UIKitCore 0xb0e0d8 __37-[UIDictationConnection cancelSpeech]_block_invoke + 152
2 libdispatch.dylib 0x24b4 _dispatch_call_block_and_release + 32
3 libdispatch.dylib 0x3fdc _dispatch_client_callout + 20
4 libdispatch.dylib 0x15b8c _dispatch_root_queue_drain + 684
5 libdispatch.dylib 0x16284 _dispatch_worker_thread2 + 164
6 libsystem_pthread.dylib 0xdbc _pthread_wqthread + 228
7 libsystem_pthread.dylib 0xb98 start_wqthread + 8
============================================================
I encountered these two keyboard-related crashes on iOS 16.x, but I cannot reproduce them. Can anyone tell me what is going on and how to fix them? Please let me know.
Posted
by lowser.
Last updated
.
Post not yet marked as solved
1 Replies
552 Views
I've been deaf and blind for 15 years. I'm not good at English pronunciation, since I don't hear what I say, much less hear it from others. When I went to read the phrases to record my Personal Voice in Accessibility > Personal Voice, the 150 phrases to read were in English. How do I record phrases in Brazilian Portuguese? I speak Portuguese well; my English pronunciation is very poor, and my deafness contributed to that. Help me.
Posted Last updated
.
Post not yet marked as solved
0 Replies
476 Views
How can I show a link in my app that gives direct access to a specific system settings page? If a user taps the link, the app should open that settings page directly. For example: "Enable Dictation" (Settings > General > Keyboards). App type: Multiplatform (Swift). Minimum deployments: iOS 16.4, macOS 13.3. Any help is really appreciated.
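For what it's worth, the only documented deep link opens your own app's page in Settings via UIApplication.openSettingsURLString; there is no public URL scheme for arbitrary panes such as General > Keyboards. A minimal sketch of the supported call:

import UIKit

// Sketch: open the Settings app at this app's own settings page.
// No documented URL exists for jumping to panes like General > Keyboards.
func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}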
Posted
by mrnaseer.
Last updated
.
Post not yet marked as solved
0 Replies
364 Views
The WWDC video "Extend Speech Synthesis with personal and custom voices" (https://developer.apple.com/wwdc23/10033) shows what appears to be an icon for "Personal Voice" at time 10:46. I suggest this be made available to developers for the final release.
Posted
by kauai.
Last updated
.
Post not yet marked as solved
1 Replies
1.3k Views
Hello. On iOS 17.0 I can see a list of available voices. However, some simply do not work: when selected, no sound is produced and no error is reported. This is true when using AVSpeechUtterance in my app, but also in Settings, where the preview button does nothing.
Posted
by MGG9.
Last updated
.
Post not yet marked as solved
1 Replies
673 Views
AVSpeechSynthesisVoice.speechVoices() returns voices that are no longer available after upgrading from iOS 16 to iOS 17 (although this has been an issue for a long time, I think). To reproduce:
1. On iOS 16, download one or more enhanced voices under Accessibility > Spoken Content > Voices.
2. Upgrade to iOS 17.
3. Call AVSpeechSynthesisVoice.speechVoices() and note that the voices installed in step (1) are still present, yet they are no longer downloaded and therefore don't work.
And there is no property on AVSpeechSynthesisVoice to indicate whether a voice is still available. This is a problem for apps that allow users to choose among the available system voices. I receive many support emails about this issue around iOS upgrades, and I have to tell users to re-download the voices, which is not obvious to them. I've created a feedback item for this as well (FB12994908).
Posted Last updated
.
Post not yet marked as solved
3 Replies
1k Views
Hello, I am deaf and blind, so my Apple studies are in text via Braille. One question: how do I add my own voice for speech synthesis? Do I have to record it somewhere first? What is the complete process, starting with recording my voice? Do I have to record my voice reading something and then add it as a synthesis voice? What's the whole process for that? There is no text explaining this; I found one about authorizing Personal Voice, but not the whole process, starting with the recording and so on. Thanks!
Posted Last updated
.
Post not yet marked as solved
3 Replies
2.1k Views
I have updated to macOS Monterey and my code for SFSpeechRecognizer just broke. I get this error if I try to configure an offline speech recognizer on macOS:

Error Domain=kLSRErrorDomain Code=102 "Failed to access assets" UserInfo={NSLocalizedDescription=Failed to access assets, NSUnderlyingError=0x6000003c5710 {Error Domain=kLSRErrorDomain Code=102 "No asset installed for language=es-ES" UserInfo={NSLocalizedDescription=No asset installed for language=es-ES}}}

Here is a code snippet from a demo project:

private func process(url: URL) throws {
    speech = SFSpeechRecognizer.init(locale: Locale(identifier: "es-ES"))
    speech.supportsOnDeviceRecognition = true
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true
    request.shouldReportPartialResults = false
    speech.recognitionTask(with: request) { result, error in
        guard let result = result else {
            if let error = error {
                print(error)
                return
            }
            return
        }
        if let error = error {
            print(error)
            return
        }
        if result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}

I have tried different languages (es-ES, en-US) and it reports the same error each time. Any idea how to install these assets or how to fix this?
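One mitigation, sketched below (it does not install the missing assets, which is the actual question): only require on-device recognition when the recognizer reports support for it, and otherwise fall back to server-based recognition instead of failing outright. Function and parameter names here are illustrative:

import Speech

// Sketch: prefer on-device recognition when the locale's assets are present,
// otherwise allow server-based recognition rather than erroring out.
func makeRecognitionRequest(for url: URL, locale: Locale) -> (SFSpeechRecognizer, SFSpeechURLRecognitionRequest)? {
    guard let recognizer = SFSpeechRecognizer(locale: locale), recognizer.isAvailable else { return nil }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.shouldReportPartialResults = false
    // Only insist on offline recognition if the recognizer claims to support it.
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition
    return (recognizer, request)
}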
Posted Last updated
.
Post marked as solved
3 Replies
918 Views
I'm trying to create a list of voices that users can pick from in my app. The code below works for the US, but if I change my locale settings to any other country, it fails to load any available voices, even though I have downloaded voices for other countries.

func voices() -> [String] {
    AVSpeechSynthesisVoice.speechVoices().filter { $0.language == NSLocale.current.voiceLanguage }.map { $0.name }
    // AVSpeechSynthesisVoice.speechVoices().map { $0.name }
}

If I list all available voices, I can select the voices for other countries that are loaded.
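A sketch of an alternative filter that sidesteps the locale-matching pitfall: compare against AVSpeechSynthesisVoice.currentLanguageCode(), and fall back to matching only the language prefix, so a device set to en-GB still surfaces en-US or en-AU voices. This is one possible approach, not the app's original design:

import AVFoundation

// Sketch: list voice names for the user's current language,
// falling back to a language-prefix match (e.g. "en") when no exact BCP-47 match exists.
func voiceNamesForCurrentLanguage() -> [String] {
    let current = AVSpeechSynthesisVoice.currentLanguageCode()   // e.g. "en-GB"
    let voices = AVSpeechSynthesisVoice.speechVoices()
    let exact = voices.filter { $0.language == current }
    if !exact.isEmpty { return exact.map(\.name) }
    let prefix = current.split(separator: "-").first.map(String.init) ?? current
    return voices.filter { $0.language.hasPrefix(prefix) }.map(\.name)
}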
Posted
by cleevans.
Last updated
.
Post not yet marked as solved
1 Replies
455 Views
CFBundleSpokenName = "Apple 123", CFBundleName = "Apple". The accessibility bundle name doesn't work without opening the app. When I touch the application on the device home screen, VoiceOver reads the app name as "Apple". After the app has launched, it reads it as "Apple 123". I want it to be read as "Apple 123" on the home screen too. Can you help me?
Posted
by seyma.
Last updated
.
Post not yet marked as solved
2 Replies
1.2k Views
Download this Apple Speech project: https://developer.apple.com/documentation/accessibility/wwdc21_challenge_speech_synthesizer_simulator
The project uses an iOS 15 deployment target; when building and running I receive the errors below. Setting the deployment target to iOS 17 results in the same errors. I would appreciate it if anyone else has determined how to re-enable this basic functionality; TTS appears to no longer work.
__
Folder ), NSFilePath=/Library/Developer/CoreSimulator/Volumes/iOS_21A5277g/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.0.simruntime/Contents/Resources/RuntimeRoot/System/Library/TTSPlugins, NSUnderlyingError=0x600000c75d40 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
Failed to get sandbox extensions
Query for com.apple.MobileAsset.VoiceServicesVocalizerVoice failed: 2
#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.CustomVoice failed: 2
Unable to list voice folder
Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
Unable to list voice folder
Posted
by kauai.
Last updated
.
Post not yet marked as solved
1 Replies
850 Views
Xcode Version 15.0 beta 4 (15A5195m) or Version 14.3.1 (14E300c): the same issue occurs when running the iOS Simulator with iPhone 14 Pro (iOS 17 or iOS 16.4) or iPhone 12 (iOS 17.0 build 21A5277j).

I just started to play around with SFSpeechRecognition and ran into an issue with SFSpeechURLRecognitionRequest. The simple project is just a ContentView with two buttons (one for selecting an audio file, one for starting transcription) and a SpeechRecognizer (from the Apple sample code "Transcribing speech to text", with minor additions). After selecting an audio file and tapping the transcribe button, the following error logs appear in the debugger console after the execution of recognitionTask(with:resultHandler:):

2023-07-18 13:58:16.562706-0400 TranscriberMobile[6818:475161] [] <<<< AVAsset >>>> +[AVURLAsset _getFigAssetCreationOptionsFromURLAssetInitializationOptions:assetLoggingIdentifier:figAssetCreationFlags:error:]: AVURLAssetHTTPHeaderFieldsKey must be a dictionary
2023-07-18 13:58:16.792219-0400 TranscriberMobile[6818:475166] [plugin] AddInstanceForFactory: No factory registered for id <CFUUID 0x60000023dd00> F8BB1C28-BAE8-11D6-9C31-00039315CD46
2023-07-18 13:58:16.824333-0400 TranscriberMobile[6818:475166] HALC_ProxyObjectMap.cpp:153 HALC_ProxyObjectMap::_CopyObjectByObjectID: failed to create the local object
2023-07-18 13:58:16.824524-0400 TranscriberMobile[6818:475166] HALC_ShellDevice.cpp:2609 HALC_ShellDevice::RebuildControlList: couldn't find the control object
2023-07-18 13:58:16.872935-0400 TranscriberMobile[6818:475165] [] <<<< FAQ Offline Mixer >>>> FigAudioQueueOfflineMixerCreate: [0x10744b8d0] failed to query kAudioConverterPrimeInfo err=561211770, assuming zero priming
2023-07-18 13:58:16.890002-0400 TranscriberMobile[6818:474951] [assertion] Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "originator doesn't have entitlement com.apple.runningboard.mediaexperience" UserInfo={NSLocalizedFailureReason=originator doesn't have entitlement com.apple.runningboard.mediaexperience}>
2023-07-18 13:58:16.890319-0400 TranscriberMobile[6818:474951] [AMCP] 259 HALRBSAssertionGlue.mm:98 Failed to acquire the AudioRecording RBSAssertion for pid: 6818 with code: 1 - RBSServiceErrorDomain
2023-07-18 13:58:16.893137-0400 TranscriberMobile[6818:474951] [assertion] Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "originator doesn't have entitlement com.apple.runningboard.mediaexperience" UserInfo={NSLocalizedFailureReason=originator doesn't have entitlement com.apple.runningboard.mediaexperience}>
2023-07-18 13:58:16.893652-0400 TranscriberMobile[6818:474951] [AMCP] 259 HALRBSAssertionGlue.mm:98 Failed to acquire the MediaPlayback RBSAssertion for pid: 6818 with code: 1 - RBSServiceErrorDomain

Since the AVURLAsset is not created manually, how do I get around the initial error "AVURLAssetHTTPHeaderFieldsKey must be a dictionary"?

SpeechRecognizer.swift:

import Foundation
import AVFoundation
import Speech
import SwiftUI

/// A helper for transcribing speech to text using SFSpeechRecognizer and AVAudioEngine.
actor SpeechRecognizer: ObservableObject {
    // ...
    @MainActor func startTranscribingAudioFile(_ audioURL: URL?) {
        Task {
            await transcribeAudioFile(audioURL)
        }
    }
    // ...
    private func transcribeAudioFile(_ audioURL: URL?) {
        guard let recognizer, recognizer.isAvailable else {
            self.transcribe(RecognizerError.recognizerIsUnavailable)
            return
        }
        guard let audioURL else {
            self.transcribe(RecognizerError.nilAudioFileURL)
            return
        }
        let request = SFSpeechURLRecognitionRequest(url: audioURL)
        request.shouldReportPartialResults = true
        self.audioURLRequest = request
        self.task = recognizer.recognitionTask(with: request, resultHandler: { [weak self] result, error in
            self?.audioFileRecognitionHandler(result: result, error: error)
        })
    }
    // ...
    nonisolated private func audioFileRecognitionHandler(result: SFSpeechRecognitionResult?, error: Error?) {
        if let result {
            transcribe(result.bestTranscription.formattedString)
        }
        if let error {
            Task { @MainActor in
                await reset()
                transcribe(error)
            }
        }
    }
}

ContentView.swift:

import SwiftUI

struct ContentView: View {
    @State var showFileBrowser = false
    @State var audioFileURL: URL? = nil
    @StateObject var speechRecognizer = SpeechRecognizer()

    var body: some View {
        VStack(spacing: 24) {
            Button {
                self.showFileBrowser.toggle()
            } label: {
                Text("Select an audio file to transcribe")
            }
            Text(audioFileURL != nil ? audioFileURL!.absoluteString : "No audio file selected")
                .multilineTextAlignment(.center)
            Button {
                speechRecognizer.startTranscribingAudioFile(audioFileURL)
            } label: {
                Text("Transcribe")
            }
            Text(speechRecognizer.transcript == "" ? "No transcript yet" : speechRecognizer.transcript)
                .multilineTextAlignment(.leading)
        }
        .padding()
        .fileImporter(isPresented: $showFileBrowser, allowedContentTypes: [.audio]) { result in
            switch result {
            case .success(let fileURL):
                fileURL.startAccessingSecurityScopedResource()
                audioFileURL = fileURL
                print(fileURL)
            case .failure(let error):
                NSLog("%s", error.localizedDescription)
            }
        }
    }
}
Posted
by mediter.
Last updated
.
Post not yet marked as solved
2 Replies
756 Views
We are using the Speech framework to enable users to interact with our app via voice commands. When a user says "start test", we send:

DispatchQueue.main.async {
    self.startButton.sendActions(for: .touchUpInside)
}

This works beautifully, except that the screen auto-locks in the middle of a test. Apparently, using sendActions does not actually send a touch event to the OS. My question is: how can I tell the OS that a touch event happened programmatically? Thank you
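sendActions(for:) only invokes the button's targets; it does not generate a system touch event, so it won't reset the auto-lock timer. A common workaround, sketched here under that assumption, is to disable the idle timer while a voice-driven test is running:

import UIKit

// Sketch: keep the screen awake for the duration of a test,
// since programmatic sendActions(for:) does not count as user interaction.
func setTestRunning(_ running: Bool) {
    UIApplication.shared.isIdleTimerDisabled = running   // re-enable auto-lock when the test ends
}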
Posted
by hwallace.
Last updated
.
Post not yet marked as solved
0 Replies
662 Views
For my project, I would really benefit from continuous on-device speech recognition without the automatic timeout, or at least with a much longer one. In the WebKit web speech implementation, it looks like there are some extra setters for SFSpeechRecognizer exposing exactly this functionality: https://github.com/WebKit/WebKit/blob/8b1a13b39bbaaf306c9d819c13b0811011be55f2/Source/WebCore/Modules/speech/cocoa/WebSpeechRecognizerTask.mm#L105 Is there a chance Apple could enable programmable duration/time-out? If it’s available in WebSpeech, then why not in native applications?
Posted Last updated
.
Post not yet marked as solved
1 Replies
888 Views
[catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
[AXTTSCommon] Invalid rule:
[AXTTSCommon] Invalid rule:
[AXTTSCommon] File file:///var/MobileAsset/AssetsV2/com_apple_MobileAsset_Trial_Siri_SiriTextToSpeech/purpose_auto/20700159d3b64fc92fc033a3e3946535bd231e4b.asset/AssetData/vocalizer-user-dict.dat contained data that was not null terminated
Posted Last updated
.
Post not yet marked as solved
1 Replies
840 Views
Hi, I'm creating a text-to-speech app that reads PDF/EPUB files aloud. It was working fine on older iOS versions but stopped working in the background on iOS 16.5: speaking stops after just a couple of minutes, which I don't want. I would really appreciate a solution for this!
Posted
by nghidoan.
Last updated
.