Posts

Post not yet marked as solved
4 Replies
3.3k Views
I’m seeing a crash that looks to be related to TaskGroup from Swift Concurrency. I can’t catch it in the debugger, only when running on device. It looks like the crash is actually in the Swift Concurrency library, and I’m not sure how to trace from there back to something in my own code... Some digging so far suggests the issue only happens:

- on iOS 15.0–15.3.x (i.e. it looks to not happen on 15.4)
- on older iPhones such as the 6s, 6s Plus, 7, 8, and X

Where do I start trying to figure out the root cause of this?

Crash Report
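For reference, the implicated API is an ordinary task group. A minimal sketch of the kind of usage that exercises the back-deployed concurrency runtime on iOS 15 looks like this (the workload and names are hypothetical, not taken from my app):

```swift
import Foundation

// Hypothetical minimal TaskGroup usage; nothing here is from the crashing
// app, it just shows the API surface implicated in the crash report.
@available(iOS 15.0, *)
func downloadSizes(for urls: [URL]) async throws -> [Int] {
    try await withThrowingTaskGroup(of: Int.self) { group in
        for url in urls {
            // Each child task runs concurrently on the Swift Concurrency runtime.
            group.addTask {
                let (data, _) = try await URLSession.shared.data(from: url)
                return data.count
            }
        }
        var sizes: [Int] = []
        for try await size in group {
            sizes.append(size)
        }
        return sizes
    }
}
```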
Post marked as solved
1 Reply
1.8k Views
I’m getting a flood of these errors in a shipping speech recognition app since users started upgrading to iOS 15. It’s usually returned by the speech recogniser a few seconds after recognition begins. I can’t find any reference to it anywhere in Apple’s documentation. What is it?

Code: 301
Domain: kLSRErrorDomain
Description: Recognition request was canceled
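For anyone searching: the error surfaces in the recognition task’s result handler. A hedged sketch of where I’m seeing it (the setup is trimmed, and treating code 301 as a benign cancellation is a guess on my part, since the domain is undocumented):

```swift
import Speech

// Trimmed sketch: where the undocumented kLSRErrorDomain/301 error arrives.
// Treating it as a benign cancellation is an assumption, not documented behaviour.
func startRecognition(recognizer: SFSpeechRecognizer,
                      request: SFSpeechAudioBufferRecognitionRequest) -> SFSpeechRecognitionTask {
    recognizer.recognitionTask(with: request) { result, error in
        if let error = error {
            let nsError = error as NSError
            if nsError.domain == "kLSRErrorDomain", nsError.code == 301 {
                // iOS 15: typically arrives a few seconds after recognition begins.
                return
            }
            // Handle genuine recognition errors here.
            return
        }
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
}
```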
Post not yet marked as solved
0 Replies
747 Views
I’m using AVAudioEngine to get a stream of AVAudioPCMBuffers from the device’s microphone using the usual installTap(onBus:) setup. To distribute the audio stream to other parts of the program, I’m sending the buffers through a Combine publisher similar to the following:

private let publisher = PassthroughSubject<AVAudioPCMBuffer, Never>()

I’m starting to suspect I have some kind of concurrency or memory-management issue with the buffers, because when consuming them elsewhere I’m getting a range of crashes suggesting that some internal pointer in a buffer is NULL (specifically, crashes in vDSP.convertElements(of:to:) when I try to read samples from the buffer). These crashes are in production and fairly rare — I can’t reproduce them locally. I never modify the audio buffers, only read them for analysis.

My questions: should it be possible to put AVAudioPCMBuffers into a Combine pipeline? Does the AVAudioPCMBuffer class not retain/release the underlying AudioBufferList’s memory the way I’m assuming? Is this a fundamentally flawed approach?
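For concreteness, here is a trimmed-down sketch of the tap setup, plus the deep copy I’m experimenting with in case the engine recycles the tap buffer’s memory after the block returns (that recycling is my assumption, not documented behaviour):

```swift
import AVFoundation
import Combine

final class MicPublisher {
    private let engine = AVAudioEngine()
    private let subject = PassthroughSubject<AVAudioPCMBuffer, Never>()
    var publisher: AnyPublisher<AVAudioPCMBuffer, Never> { subject.eraseToAnyPublisher() }

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0) // Float32 for the mic input tap

        input.installTap(onBus: 0, bufferSize: 4096, format: format) { [subject = self.subject] buffer, _ in
            // Deep-copy before publishing, in case the engine reuses the
            // tap buffer's memory once this block returns (assumption).
            guard let copy = AVAudioPCMBuffer(pcmFormat: buffer.format,
                                              frameCapacity: buffer.frameLength),
                  let src = buffer.floatChannelData,
                  let dst = copy.floatChannelData else { return }
            copy.frameLength = buffer.frameLength

            let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.stride
            for channel in 0..<Int(buffer.format.channelCount) {
                memcpy(dst[channel], src[channel], byteCount)
            }
            subject.send(copy)
        }
        try engine.start()
    }
}
```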
Post not yet marked as solved
0 Replies
994 Views
I’m getting the following crash when using AVAudioSession with AVAudioEngine. What I don’t understand is why InterruptionListener is listed twice in the stack trace. Does this mean it’s somehow being called again before it has returned? Is this likely to be a concurrency issue?

Crashed: AVAudioSession Notify Thread
0  libEmbeddedSystemAUs.dylib    0x1dbc3333c InterruptionListener(void*, unsigned int, unsigned int, void const*)
1  libEmbeddedSystemAUs.dylib    0x1dbc33270 InterruptionListener(void*, unsigned int, unsigned int, void const*)
2  AudioToolbox                  0x1c86e6484 AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3  AudioToolbox                  0x1c8740798 HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4  AudioToolbox                  0x1c873fec0 ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5  AudioToolbox                  0x1c873f17c ASCallbackReceiver_AudioSessionPingMessage + 632
6  AudioToolbox                  0x1c87ad398 _XAudioSessionPingMessage + 44
7  libAudioToolboxUtility.dylib  0x1c8840430 mshMIGPerform + 264
8  CoreFoundation                0x1bd42b174 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9  CoreFoundation                0x1bd42a880 __CFRunLoopDoSource1 + 444
10 CoreFoundation                0x1bd425634 __CFRunLoopRun + 1888
11 CoreFoundation                0x1bd424ba8 CFRunLoopRunSpecific + 424
12 AVFAudio                      0x1ca1f4a2c GenericRunLoopThread::Entry(void*) + 156
13 AVFAudio                      0x1ca2457a0 CAPThread::Entry(CAPThread*) + 204
14 libsystem_pthread.dylib       0x1bd1c2d98 _pthread_start + 156
15 libsystem_pthread.dylib       0x1bd1c674c thread_start + 8
Post marked as solved
1 Reply
1k Views
My app is using RemoteIO to record audio. It doesn’t do any playback. RemoteIO seems to be broadly compatible with the new Sound Recognition feature in iOS 14, but I’m seeing a glitch when Sound Recognition is first enabled. If my app is started and I initialise RemoteIO, and then turn on Sound Recognition (say via Control Centre), the RemoteIO input callback is not called thereafter, until I tear down the audio unit and set it back up. So something like the following:

1. Launch app
2. RemoteIO is initialised and working, can record
3. Turn on Sound Recognition via Settings or the Control Centre widget
4. Start recording with the already-set-up RemoteIO
5. Recording callback is never called again
6. Though no input callbacks are seen, kAudioOutputUnitProperty_IsRunning is reported as true, so the audio unit thinks it is active
7. Tear down audio unit
8. Set up audio unit again
9. Recording works
10. Buffer size is changed, reflecting some effect of the Sound Recognition feature on the audio session

I also noticed that when Sound Recognition is enabled, I see several (usually 3) AVAudioSession.routeChangeNotifications in quick succession. When Sound Recognition is disabled while RemoteIO is set up, I don’t see this problem. I’m allocating my own buffers, so it’s not a problem with their size.

What could be going on here? Am I not handling a route change properly? There doesn’t seem to be a reliable sequence of events I can catch to know when to reset the audio unit. The only fix I’ve found is to hack in a timer that checks for callback activity shortly after starting recording and resets the audio unit if no callback activity is seen (sketched below). Better than nothing, but not super reliable.
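For reference, the watchdog hack looks roughly like this (simplified; restartAudioUnit() stands in for my real teardown/re-setup code, and the lock isn’t realtime-safe; it’s only there to keep the sketch short):

```swift
import Foundation

// Simplified watchdog: restart the audio unit if the input callback goes
// quiet right after recording starts. restartAudioUnit() stands in for the
// real teardown/re-setup code. NB: a lock in a render callback is not
// realtime-safe; it is used here only to keep the sketch short.
final class CallbackWatchdog {
    private let lock = NSLock()
    private var callbackCount = 0

    // Call from the RemoteIO input callback on every render cycle.
    func noteCallbackFired() {
        lock.lock(); callbackCount += 1; lock.unlock()
    }

    // Arm shortly after starting recording; checks once, half a second later.
    func arm(restartAudioUnit: @escaping () -> Void) {
        lock.lock(); let baseline = callbackCount; lock.unlock()
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
            guard let self = self else { return }
            self.lock.lock(); let stalled = (self.callbackCount == baseline); self.lock.unlock()
            if stalled {
                // No input callbacks since arming: assume the unit was wedged
                // by the Sound Recognition route change and reset it.
                restartAudioUnit()
            }
        }
    }
}
```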
Post not yet marked as solved
3 Replies
920 Views
I would like to get an idea of whether anyone is actually using Multipeer Connectivity in a recent production iOS app. I’ve burned several days’ worth of my client’s money getting Multipeer set up, but the invitation system just isn’t anywhere near reliable enough to ship. The use case is trivial: it’s just a chat room. I assumed I was doing something wrong, so I went looking for examples on GitHub, and my code looked much the same as everyone else’s. Eventually I found the MultipeerGroupChat sample app (https://developer.apple.com/library/archive/samplecode/MultipeerGroupChat/Introduction/Intro.html#//apple_ref/doc/uid/DTS40013691), and after trying it on 3 devices I confirmed that the invite flow is broken for it too. Although the sample app uses Apple’s UI helpers (MCBrowserViewController and MCAdvertiserAssistant), and my app has a custom UI using the lower-level MCNearbyServiceAdvertiser and MCNearbyServiceBrowser (sketched below), I can see the underlying issues are the same.

Here’s one example which fails reliably for me:

1. Boot the sample on 3 devices
2. Open the browser on one device and invite the others
3. All 3 are connected and can send messages successfully
4. Go home/lock one device and wait until the others show it as disconnected; or you can force quit it
5. Bring the app back up on that device
6. Now, try to reconnect it to the other two
7. No matter which device(s) you send the invites from (either the one that returned, or one of the others), you’ll get in a mess
8. The invite is (usually) delivered, but then one or both devices involved get stuck in the 'connecting' state
9. There’s no way out, except to disconnect the session and start again

I genuinely want to know whether this framework is in use in real apps with workarounds for the reliability issues, or whether it’s just something with a bunch of tutorials online that nobody really uses.
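For context, my custom flow is the standard advertiser/browser pairing, roughly the following (the service type and names are placeholders, and the delegates are trimmed to just the invite path):

```swift
import MultipeerConnectivity
import UIKit

// Trimmed-down version of the custom (non-MCBrowserViewController) flow.
// Service type and display name are placeholders.
final class ChatSession: NSObject, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate {
    private let peerID = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peerID,
                                         securityIdentity: nil,
                                         encryptionPreference: .required)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID,
                                                            discoveryInfo: nil,
                                                            serviceType: "demo-chat")
    private lazy var browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "demo-chat")

    func start() {
        advertiser.delegate = self
        browser.delegate = self
        advertiser.startAdvertisingPeer()
        browser.startBrowsingForPeers()
    }

    // MARK: MCNearbyServiceBrowserDelegate

    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID,
                 withDiscoveryInfo info: [String: String]?) {
        // Invite every discovered peer; this is where reconnects get stuck.
        browser.invitePeer(peerID, to: session, withContext: nil, timeout: 30)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {}

    // MARK: MCNearbyServiceAdvertiserDelegate

    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        // Accept unconditionally; one or both sides then often hang in .connecting.
        invitationHandler(true, session)
    }
}
```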