iOS 18.2 beta 2 (22C5125e) does not fix the issue, either.
FYI, neither iOS 18.1 RC nor 18.2 beta 1 resolves the issue.
iOS 18.1 beta 7 does not fix this bug.
Is this even confirmed as a bug in iOS 18.x?
Would be great if someone from Apple could confirm it is a bug and that it is in the backlog for getting resolved. Thanks in advance!
iOS 18.1 beta 5 does not resolve the PDFKit PDFPage.characterBounds(at:) issue of returning the wrong character bounds.
It seems I do get the same crash on my iOS code. I could not reproduce it, but get between 1 and 4 crashes from every 200 user sessions. Any hints on how to solve this issue would be highly appreciated.
Here is my code (very similar to code shown above):
private func transcribe() {
    guard let recognizer, recognizer.isAvailable else {
        print("--- SpeechRec.transcribe - SpeechRecognizer TRANSCRIBE ERROR: \(RecognizerError.recognizerIsUnavailable)")
        return
    }
    do {
        if let audioEngine {
            let request = SFSpeechAudioBufferRecognitionRequest()
            request.shouldReportPartialResults = true
            request.requiresOnDeviceRecognition = false // might fix speechRec error 1101 in console
            let audioSession = AVAudioSession.sharedInstance()
            try audioSession.setCategory(.playAndRecord, mode: .measurement, policy: .default, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            let inputNode = audioEngine.inputNode
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, _) in
                request.append(buffer)
            }
            try audioEngine.start()
            self.recognitionTask = recognizer.recognitionTask(with: request, delegate: self)
        } else {
            let (audioEngine, request) = try Self.prepareEngine()
            self.audioEngine = audioEngine
            self.request = request
            self.recognitionTask = recognizer.recognitionTask(with: request, delegate: self)
        }
    } catch {
        Logger.audio.error("--- SpeechRec.transcribe - SpeechRecognizer AudioSession/AudioEngine ERROR: \(error)")
        self.reset()
    }
}
This is the backtrace that the Organizer shows in Xcode:
Last Exception Backtrace (0)
#0 (null) in __exceptionPreprocess ()
#1 (null) in objc_exception_throw ()
#2 (null) in +[NSException raise:format:arguments:] ()
#3 (null) in AVAE_RaiseException(NSString*, ...) ()
#4 (null) in AVAudioIONodeImpl::SetOutputFormat(unsigned long, AVAudioFormat*) ()
#5 (null) in AUGraphNodeBaseV3::CreateRecordingTap(unsigned long, unsigned int, AVAudioFormat*, void (AVAudioPCMBuffer*, AVAudioTime*) block_pointer) ()
#6 (null) in -[AVAudioNode installTapOnBus:bufferSize:format:block:] ()
#7 0x100d34e10 in SpeechRecognizer.transcribe() at /Users/klaus/Developer/ScriptBuddy/ScriptBuddy/Assistants/SpeechRecognizer.swift:245
#8 0x100d34298 in SpeechRecognizer.startTranscribing(andCompareTo:) at /Users/klaus/Developer/ScriptBuddy/ScriptBuddy/Assistants/SpeechRecognizer.swift:167
#9 (null) in Script.speakNextScriptElement() ()
#10 0x100d43bfc in specialized SpeechSynthesizer.speechSynthesizer(_:didFinish:) at /Users/klaus/Developer/ScriptBuddy/ScriptBuddy/Assistants/SpeechSynthesizer.swift:942
#11 (null) in SpeechSynthesizer.speechSynthesizer(_:didFinish:) ()
#12 (null) in @objc SpeechSynthesizer.speechSynthesizer(_:didFinish:) ()
#13 (null) in -[AVSpeechSynthesizer(PublicSpeechImplementation) processSpeechJobFinished:successful:] ()
#14 (null) in -[AVSpeechSynthesizer(PublicSpeechImplementation) _handleSpeechDone:successful:] ()
#15 (null) in __67-[AVSpeechSynthesizer(PublicSpeechImplementation) _speakUtterance:]_block_invoke_6 ()
#16 (null) in __46-[TTSSpeechManager _speechJobFinished:action:]_block_invoke ()
#17 (null) in _dispatch_call_block_and_release ()
#18 (null) in _dispatch_client_callout ()
#19 (null) in _dispatch_main_queue_drain ()
#20 (null) in _dispatch_main_queue_callback_4CF ()
#21 (null) in __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ ()
#22 (null) in __CFRunLoopRun ()
#23 (null) in CFRunLoopRunSpecific ()
#24 (null) in GSEventRunModal ()
#25 (null) in -[UIApplication _run] ()
#26 (null) in UIApplicationMain ()
#27 (null) in closure #1 in KitRendererCommon(_:) ()
#28 (null) in runApp<A>(_:) ()
#29 (null) in static App.main() ()
#30 (null) in static ScriptBuddyApp.$main() ()
#31 (null) in main ()
#32 (null) in start ()
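Not a confirmed diagnosis, but frame #4 (AVAudioIONodeImpl::SetOutputFormat throwing inside installTapOnBus:) suggests the input node occasionally reports an invalid hardware format, e.g. while the audio route is being reconfigured right after synthesis finishes. A defensive sketch (installTapSafely is a hypothetical helper, not Apple API):

```swift
import AVFoundation
import Speech

/// Installs the recognition tap only when the input node reports a usable
/// hardware format. An invalid format (0 Hz / 0 channels) would otherwise
/// raise the NSException seen in frame #4 of the backtrace above.
func installTapSafely(on audioEngine: AVAudioEngine,
                      request: SFSpeechAudioBufferRecognitionRequest) -> Bool {
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    guard format.sampleRate > 0, format.channelCount > 0 else {
        return false // caller should reset and retry later
    }
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    return true
}
```

Whether this actually avoids the crash in the wild is untested; it only guards against one plausible cause.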
The latest Xcode / iOS RC candidates still have the same issue: PDFKit's PDFPage.characterBounds(at:) is returning incorrect coordinates for characters on the PDF page.
This breaks apps that rely on PDFKit to read character positions from PDF files!
It should be fixed by Apple before the final release of iOS 18.
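A minimal way to observe the symptom might look like this (a sketch; dumpCharacterBounds is a hypothetical helper, and the URL is any text-bearing PDF you supply):

```swift
import PDFKit

/// Prints the bounds PDFKit reports for the first few characters of page 0,
/// so they can be compared against the glyphs actually drawn on the page.
func dumpCharacterBounds(of url: URL) {
    guard let document = PDFDocument(url: url),
          let page = document.page(at: 0) else { return }
    let mediaBox = page.bounds(for: .mediaBox)
    for index in 0..<min(20, page.numberOfCharacters) {
        let rect = page.characterBounds(at: index)
        // On affected iOS 18 betas these rects are reportedly wrong (offset
        // from the visible glyphs); on iOS 16 they match the rendered text.
        print(index, rect, mediaBox.contains(rect))
    }
}
```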
Yeah, sure! Thank you for commenting! I thought it might be a known "wrong" coding pattern or even an iOS 18 beta bug, as it is not showing up in iOS 17.x.
Will try to create a small reproducible example and post it here.
Any news on this topic? Still not working in the latest betas.
Hi, are you having this problem again with iOS 18 betas? I am having troubles again with the PDFPage.characterBounds(at: Int) -> CGRect function. Same as last year in the iOS 17 beta cycle. Filed feedback (FB14843671) but so far no changes in the latest betas.
The error Code=1101 often has to do with incorrect or incomplete setup of offline dictation on your device.
If you set request.requiresOnDeviceRecognition = true, the recognition process uses Apple's on-device dictation service.
The dictation service only works offline if:
- you have the keyboard installed for the same language + region you want dictation / speech recognition for,
- you have Enable Dictation toggled On, and
- the Dictation Language for the language + region you want has been downloaded by the system.
If the above conditions are not met, you will see the 1101 error.
Example:
If you want offline dictation for "de-DE" (German language for the region Germany), you need the matching keyboard installed. In the device's Settings / General / Keyboard / Keyboards, make sure the keyboard for your language + region is installed (in our example "German (Germany)"). Further down in General / Keyboard, turn on Enable Dictation. Once Dictation is enabled, a further entry called Dictation Languages appears below; open it to make sure the dictation languages are downloaded (you see a note about the status there).
Once the dictation language(s) are downloaded, speech recognition with request.requiresOnDeviceRecognition = true should work for that language/region.
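The setup above can also be checked in code before forcing on-device recognition: SFSpeechRecognizer exposes supportsOnDeviceRecognition (iOS 13+). A sketch (makeRequest is a hypothetical helper):

```swift
import Speech

/// Requests on-device recognition only when the recognizer reports support
/// for it, falling back to the server-based service otherwise.
func makeRequest(for recognizer: SFSpeechRecognizer) -> SFSpeechAudioBufferRecognitionRequest {
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    } else {
        // Keyboard / Enable Dictation / language download not set up for
        // this locale; forcing on-device recognition would yield error 1101.
        request.requiresOnDeviceRecognition = false
    }
    return request
}
```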
You need to activate the AVAudioSession.
I have added the following line to your code sample from GitHub:
try! AVAudioSession.sharedInstance().setActive(true)
after the setCategory line. Then it works for me on an iOS 17.2 simulator.
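For reference, the complete session setup with explicit activation might look like this (a sketch; in production, propagate the errors rather than using try!):

```swift
import AVFoundation

/// Configures and activates the shared audio session for recording.
/// Explicit activation is the documented pattern, even where older
/// simulators happened to work without it.
func activateRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .measurement, options: .duckOthers)
    try session.setActive(true, options: .notifyOthersOnDeactivation)
}
```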
I can confirm that your code works on iOS 16.4 Simulator without setActive. Not sure why, as my understanding is it has always been the intended way of doing stuff using setActive.
It should be possible without Xcode Cloud, too. See this WWDC 2023 video https://developer.apple.com/videos/play/wwdc2023/10175/?time=179 at 2:59, where it states: "With Xcode Cloud and xcodebuild command, your tests can have multiple run destinations."
@tonyrencard I had a similar issue after a GitHub username change for a current project under git. I had to update the git config file with the new path reflecting the new username. See here: https://developer.apple.com/forums/thread/737062
I ran into a PDFKit problem in the iOS 17 beta that is most likely related:
PDFPage.characterBounds(at: Int) -> CGRect
is returning incorrect coordinates with iOS 17 beta 5 / Xcode 15 beta 6. It worked fine on iOS 16 and earlier.
Same for me, it breaks critical functionality that my app relies on.
I have filed feedback (FB12918701).