I wrote a class for speech recognition with the Speech framework. Here is the part that configures the audio session and starts the recognition task:
<pre>
// Cancel the previous task if it's running.
if (self.recognitionTask) {
    // [self.recognitionTask cancel]; // Calling cancel here caused system errors and memory problems.
    [self.recognitionTask finish];
}
self.recognitionTask = nil;

// Configure the audio session for the app.
NSError *error = nil;
[AVAudioSession.sharedInstance setCategory:AVAudioSessionCategoryRecord withOptions:AVAudioSessionCategoryOptionDuckOthers error:&error];
if (error) {
    [self stopWithError:error];
    return;
}
[AVAudioSession.sharedInstance setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
if (error) {
    [self stopWithError:error];
    return;
}

// Create and configure the speech recognition request.
self.recognitionRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
self.recognitionRequest.taskHint = SFSpeechRecognitionTaskHintConfirmation;

// Allow server-based recognition; set requiresOnDeviceRecognition to YES to keep speech data on device (iOS 13+).
if (@available(iOS 13, *)) {
    self.recognitionRequest.requiresOnDeviceRecognition = NO;
}

// Create a recognition task for the speech recognition session.
// Keep a reference to the task so that it can be canceled.
__weak typeof(self) weakSelf = self;
self.recognitionTask = [self.speechRecognizer recognitionTaskWithRequest:self.recognitionRequest resultHandler:^(SFSpeechRecognitionResult * _Nullable result, NSError * _Nullable error) {
    // ...
    if (error != nil || result.final) {
    }
}];
</pre>
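For context, here is a minimal sketch of what the elided result handler typically contains, assuming the same audioEngine, recognitionRequest, and recognitionTask properties used elsewhere in the class; this is illustrative, not the original implementation:
<pre>
// Sketch only: assumes the surrounding class exposes audioEngine,
// recognitionRequest and recognitionTask properties.
if (result != nil) {
    // Deliver the latest (partial or final) transcription.
    NSLog(@"Recognized: %@", result.bestTranscription.formattedString);
}
if (error != nil || result.final) {
    // Tear down the audio pipeline once the task ends.
    [weakSelf.audioEngine stop];
    [weakSelf.audioEngine.inputNode removeTapOnBus:0];
    weakSelf.recognitionRequest = nil;
    weakSelf.recognitionTask = nil;
}
</pre>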
I want to know whether the Speech framework supports background tasks. If it does, how should I modify the iOS code?
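Independent of backgrounding, a class like this can only start once speech recognition has been authorized. A minimal sketch of the standard authorization request (illustrative only):
<pre>
// Sketch only: request speech recognition permission before starting the task.
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (status == SFSpeechRecognizerAuthorizationStatusAuthorized) {
            // Safe to create the recognition request and task as shown above.
        } else {
            // Surface the denial to the user.
        }
    });
}];
</pre>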
After updating to macOS Big Sur 11.0.1 (20B29), running svn gives:
zsh: command not found: svn
I then updated the Xcode Command Line Tools and ran svn --version again, but it still reports:
zsh: command not found: svn
Environment: Xcode Version 12.0 beta (12A6159), iOS 14 Simulator.
<pre>
- (void)startAudioEngine {
    NSError *error = nil;
    if (!self.audioEngine.isRunning) {
        self.audioEngine = [[AVAudioEngine alloc] init];
        AVAudioInputNode *inputNode = self.audioEngine.inputNode;
        AVAudioFormat *nativeAudioFormat = [inputNode inputFormatForBus:0];
        __weak typeof(self) weakSelf = self;
        [inputNode installTapOnBus:0 bufferSize:1024 format:nativeAudioFormat block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
            [weakSelf.recognitionRequest appendAudioPCMBuffer:buffer];
        }];
        [self.audioEngine prepare];
        [self.audioEngine startAndReturnError:&error];
        if (error) {
            [self stop];
            [self onError:[NSError errorWithDomain:@"startAudioEngine error" code:0 userInfo:nil]];
        } else {
            [self activeStatusChanged:MMSpeechRecognizerActiveStatusStared];
        }
    } else {
        [self stop];
        [self onError:[NSError errorWithDomain:@"The audio engine is running" code:0 userInfo:nil]];
    }
}
</pre>
After updating to the Xcode 12 beta and running the project with it, the app crashes as soon as the audio engine's input node is accessed, i.e. at this line:
AVAudioInputNode *inputNode = self.audioEngine.inputNode;
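If the crash is related to the input device not being available (which can happen on some Simulator configurations), a defensive check like the following can fail more gracefully; this is a sketch under that assumption, not a confirmed fix:
<pre>
// Sketch only: guard against a missing audio input before touching
// self.audioEngine.inputNode. inputAvailable is NO when no microphone
// route exists.
if (!AVAudioSession.sharedInstance.inputAvailable) {
    [self onError:[NSError errorWithDomain:@"No audio input available" code:0 userInfo:nil]];
    return;
}
AVAudioInputNode *inputNode = self.audioEngine.inputNode;
</pre>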