Hi, I'm using one of the AVSpeechSynthesizer delegate methods to determine when words are about to be spoken, and then applying a background color to highlight the text that is about to be spoken:
Based on the debug log I can see that it is reading every word in my UITextView, but when it comes to the yellow background highlighting, it's quite inconsistent and seems to skip around half the words. Is this an inherent issue related to the language I'm using (Japanese) or is there something fundamental in my code that needs to be fixed?
Code Block
func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                       willSpeakRangeOfSpeechString characterRange: NSRange,
                       utterance: AVSpeechUtterance) {
    // Log the substring that is about to be spoken.
    guard let rangeInString = Range(characterRange, in: utterance.speechString) else { return }
    print("Will speak: \(utterance.speechString[rangeInString])")

    // Rebuild the attributed string with the base attributes...
    let attributes: [NSAttributedString.Key: Any] = [
        .foregroundColor: UIColor.darkText,
        .font: UIFont.systemFont(ofSize: 16)
    ]
    let mutableAttributedString = NSMutableAttributedString(string: utterance.speechString,
                                                            attributes: attributes)
    // ...and highlight only the range that is about to be spoken.
    mutableAttributedString.addAttribute(.backgroundColor,
                                         value: UIColor.yellow,
                                         range: characterRange)
    jpTextView.attributedText = mutableAttributedString
}
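In case the indexing matters here: as far as I understand, the NSRange passed to the delegate counts UTF-16 code units (the same units NSAttributedString indexes by), and ordinary kana/kanji are one UTF-16 unit each, so I'd assume the range arithmetic itself lines up for my text. A quick sanity check (in Python, just to illustrate the code-unit counting, not part of the app):

```python
# Illustrative only: NSRange / NSAttributedString index by UTF-16
# code units, while Swift's Range(_:in:) converts them to String
# indices. Most Japanese characters are a single UTF-16 unit, but
# characters outside the BMP (e.g. some emoji) take a surrogate pair.

def utf16_length(s: str) -> int:
    """Number of UTF-16 code units, matching Foundation's NSString length."""
    return len(s.encode("utf-16-le")) // 2

kana = "こんにちは"       # ordinary kana: 1 UTF-16 unit each
mixed = "今日は🎌です"    # 🎌 needs a surrogate pair (2 units)

print(len(kana), utf16_length(kana))    # 5 5
print(len(mixed), utf16_length(mixed))  # 6 7
```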