Hi, I spoke to someone during the audio lab today about audio output for Chinese dialects, so I can now get Cantonese and Mandarin output. But now I'm trying to read the same line of text in both dialects, and the audio overlaps. What do I need to change in my code to create a pause between the two utterances rather than having both read at the same time?
Code Block
let cantoUtterance = AVSpeechUtterance(string: chinese)
//utterance.voice = AVSpeechSynthesisVoice(language: "zh-HK")
cantoUtterance.voice = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.Sin-Ji-compact")
// utterance.rate = 0.1
let cantoSynthesizer = AVSpeechSynthesizer()
cantoSynthesizer.speak(cantoUtterance)

let mandoUtterance = AVSpeechUtterance(string: chinese)
mandoUtterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")
let mandoSynthesizer = AVSpeechSynthesizer()
mandoSynthesizer.speak(mandoUtterance)
Depending on how you are using this code, it's possible that the synthesizers are being deallocated. You need to retain the synthesizer until both utterances have been spoken. Set the synthesizer up as a property on your class, initialize it in an init method, and use that same instance whenever you want to speak your utterances. If all of this lives in one code block, the synthesizers are deallocated as soon as the block ends, which could explain why your remaining utterances are not spoken.
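A minimal sketch of that pattern is below. The class name and the speak(_:) method are illustrative, not from your project; it keeps a single retained synthesizer and speaks both utterances through it, which also queues them one after the other instead of playing them at once. The postUtteranceDelay line is an optional way to add an extra pause between the two readings.

Code Block
import AVFoundation

final class DualDialectSpeaker {
    // Keep a strong reference so the synthesizer isn't deallocated
    // before it finishes speaking.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ chinese: String) {
        let cantoUtterance = AVSpeechUtterance(string: chinese)
        cantoUtterance.voice = AVSpeechSynthesisVoice(identifier: "com.apple.ttsbundle.Sin-Ji-compact")
        // Optional gap after the Cantonese reading before the next utterance starts.
        cantoUtterance.postUtteranceDelay = 0.5

        let mandoUtterance = AVSpeechUtterance(string: chinese)
        mandoUtterance.voice = AVSpeechSynthesisVoice(language: "zh-CN")

        // Speaking both utterances on the same synthesizer queues them,
        // so the Mandarin reading starts only after the Cantonese one finishes.
        synthesizer.speak(cantoUtterance)
        synthesizer.speak(mandoUtterance)
    }
}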
If you are sure your synthesizer is being retained, please file a bug through Feedback Assistant, include a sample project that reproduces the issue, and paste the bug number here.