Resetting the selected accessibility action on a button
I have a record button that either starts or stops a recording using the default action. While the user is recording, I want to add a custom action that discards the recording instead of saving it. That all works fine with the following code:

if isRecording {
    recordButton.accessibilityCustomActions = [
        .init(name: String(localized: "discard recording"), actionHandler: { [weak delegate] _ in
            delegate?.discardRecording()
            return true
        })
    ]
    recordButton.accessibilityLabel = String(localized: "stop recording", comment: "accessibility label")
} else {
    recordButton.accessibilityCustomActions = []
    recordButton.accessibilityLabel = String(localized: "start recording", comment: "accessibility label")
}

The problem I have is that once a user chooses "discard recording", it remains the selected action the next time the user records, and instead of stopping and saving the recording, the user might accidentally discard the next one as well. How can I programmatically reset the selected action on this recordButton to the default action?
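One workaround I've been considering is to clear the custom actions and move VoiceOver focus back to the button when the recording stops, on the assumption that re-focusing the element resets the actions rotor to the default action. I haven't verified that this assumption holds, so treat this as a sketch:

// Hypothetical reset attempt, called after the recording has been saved.
// Whether posting .layoutChanged actually resets the selected custom action
// back to the default is exactly what I'm unsure about.
func recordingDidStop() {
    recordButton.accessibilityCustomActions = []
    recordButton.accessibilityLabel = String(localized: "start recording", comment: "accessibility label")
    UIAccessibility.post(notification: .layoutChanged, argument: recordButton)
}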
0 replies · 0 boosts · 182 views · 4w
Recordings on iOS 18.0 beta start with stuttering.
I'm experiencing stuttering every time I record something with my iOS app on the iOS 18 beta. The code ran fine on previous iOS versions. The stuttering occurs for the first 2 seconds. Here's an example: https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap are configured:

let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()

engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
    guard let self = self else { return }
    do {
        // Write recording to disk
        try audioFile.write(from: buffer)
    } catch {
        // ...
    }
}

I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
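One thing I'm planning to try is logging the tap's buffer timeline and checking for gaps, on the assumption that the stutter corresponds to dropped or discontinuous buffers rather than a problem in the file writing. The expectedNextSampleTime bookkeeping below is mine, not part of the original code:

// Hypothetical gap detection inside the tap: compare each buffer's start
// sample time against where the previous buffer ended.
var expectedNextSampleTime: AVAudioFramePosition? = nil

recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { buffer, time in
    if time.isSampleTimeValid {
        if let expected = expectedNextSampleTime, time.sampleTime != expected {
            print("Discontinuity: expected sample \(expected), got \(time.sampleTime)")
        }
        expectedNextSampleTime = time.sampleTime + AVAudioFramePosition(buffer.frameLength)
    } else {
        expectedNextSampleTime = nil
    }
    // ... write the buffer to disk as before
}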
4 replies · 0 boosts · 598 views · Aug ’24
Custom rotor always plays an end-of-list sound
I have a custom rotor that changes the skim speed of the skim forward/backward feature of my audio player. The rotor works, but it always plays an end-of-list sound. Here is the code:

// Member variables
private let accessibilitySeekSpeeds: [Double] = [10, 30, 60, 180] // seconds
private var accessibilitySeekSpeedIndex: Int = 0

func seekSpeedRotor() -> UIAccessibilityCustomRotor {
    UIAccessibilityCustomRotor(name: "seek speed") { [weak self] predicate in
        guard let self = self else { return nil }

        let speeds = accessibilitySeekSpeeds
        switch predicate.searchDirection {
        case .previous:
            accessibilitySeekSpeedIndex = (accessibilitySeekSpeedIndex - 1 + speeds.count) % speeds.count
        case .next:
            accessibilitySeekSpeedIndex = (accessibilitySeekSpeedIndex + 1) % speeds.count
        @unknown default:
            break
        }

        // Return the currently selected speed as an accessibility element
        let accessibilityElement = UIAccessibilityElement(accessibilityContainer: self)
        let currentSpeed = localizedDuration(seconds: speeds[accessibilitySeekSpeedIndex])
        accessibilityElement.accessibilityLabel = currentSpeed + " seek speed"
        UIAccessibility.post(notification: .announcement, argument: currentSpeed + " seek speed")
        return UIAccessibilityCustomRotorItemResult(targetElement: accessibilityElement, targetRange: nil)
    }
}

The returned accessibility element isn't read out, and instead an end-of-list sound is played. I can announce the change manually using UIAccessibility.post, but it still plays the end-of-list sound. How can I prevent the rotor from playing the end-of-list sound?
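For context, this is roughly how such a rotor gets attached; playerView here is a hypothetical stand-in for the actual view in my code, not something from the snippet above:

// Expose the rotor on the view that has VoiceOver focus while the player is on screen.
playerView.accessibilityCustomRotors = [seekSpeedRotor()]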
2 replies · 0 boosts · 457 views · May ’24
UIKit drawing and animation differences in share extensions
I noticed two differences in my share extension's behaviour compared to my main app:

1. The layer.presentation() values can be massively out of date, which means that continuing animations from their current position is not possible. This is true both when I check the layer.presentation() values manually and when I let UIKit do the replacement-continuation via UIView.animate(..., options: [.beginFromCurrentState], ...).

2. UI updates seem to be ignored while the share extension performs heavy computation. Interestingly, it doesn't seem to matter whether I do this computation on the main thread or on a background thread and call back to the main thread for UI updates via DispatchQueue.main.sync { ... }. I see my Xcode console filling with progress updates from print(progress) statements, but the UI just doesn't move. Once the heavy processing is done, it instantly updates again.

I assume that 1 and 2 are related. If I cannot get the UI to draw while the computation is running, I probably also can't get up-to-date presentation-layer values. Are there any explanations for this behaviour, and any advice on how I could circumvent the problem? Again, this is specific to my share extension and doesn't happen in my main app.
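For reference, the background-thread variant I described looks roughly like this; makeWorkChunks, processChunk, and updateProgressUI are hypothetical stand-ins for the real work and UI code:

let chunks = makeWorkChunks()  // hypothetical split of the heavy work
DispatchQueue.global(qos: .userInitiated).async {
    for (index, chunk) in chunks.enumerated() {
        processChunk(chunk)  // hypothetical heavy computation
        let progress = Double(index + 1) / Double(chunks.count)
        DispatchQueue.main.sync {
            updateProgressUI(progress)  // hypothetical UI update
            print(progress)            // this appears in the console, but the UI doesn't redraw
        }
    }
}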
0 replies · 0 boosts · 463 views · Feb ’24
CarPlay output is mono when AVAudioSession configured as playAndRecord
When configuring an AVAudioSession as playAndRecord, I have to select the CarPlay input as the preferredInput to make sure that the output is also routed to the car - if I set the preferredInput to the built-in mic, the output is routed to the speakers instead. However, when I select the CarPlay input as the preferredInput, AVAudioSession configures the output as mono:

(lldb) po session.currentRoute.inputs.first!
<AVAudioSessionPortDescription: 0x282fcec30, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.inputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fccc70, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

(lldb) po session.currentRoute.outputs
▿ 1 element
  - 0 : <AVAudioSessionPortDescription: 0x282fce9d0, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.outputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fd8590, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

When I configure the session only for playback, the output is stereo, as you'd expect from a car system. This is on iOS 17 beta 1, and I'm afraid I can't check whether this is a new regression or has already existed before, but it's quite likely it has existed before. Any advice on how I can circumvent this issue?
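For reference, a minimal sketch of the session setup I'm describing; the exact category options are an assumption based on a typical playAndRecord configuration, not copied verbatim from my code:

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.allowAirPlay, .allowBluetoothA2DP])
try session.setActive(true)

// Select the CarPlay input so that output is also routed to the car.
// With the built-in mic as preferredInput, output falls back to the iPhone speaker.
if let carPlayInput = session.availableInputs?.first(where: { $0.portType == .carAudio }) {
    try session.setPreferredInput(carPlayInput)
}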
0 replies · 1 boost · 928 views · Jun ’23
Xcode 14.3 breaks linking to precompiled frameworks when compiling for real devices
The precompiled frameworks that we're using in our app aren't found by the linker in Xcode 14.3. It works fine in the simulator, but the app crashes on a real device with a dynamic-library loading error. The problem seems to be that Xcode 14.3 builds the app so that it looks for the frameworks in a folder called PackageFrameworks. The error messages show paths like this:

'/Users/{username}/Library/Developer/Xcode/DerivedData/{appname}/Build/Products/Debug-iphoneos/PackageFrameworks/librocksdb.framework/librocksdb' (errno=2)

Note that the library exists, but not at

Build/Products/Debug-iphoneos/PackageFrameworks/librocksdb.framework/librocksdb

but simply at

Build/Products/Debug-iphoneos/librocksdb.framework/librocksdb

This worked fine in Xcode 14.2. Clearing the whole derived data cache, as well as all of SwiftPM's caches, didn't help. To reproduce, include one of these precompiled frameworks in your code, then compile and run your app on a real device:

https://github.com/tcwalther/sentry-cocoa-sdk-xcframeworks
https://github.com/tapeit/rocksdb.swift
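As far as I understand, the linked repositories wrap precompiled xcframeworks as SwiftPM binary targets, roughly along these lines; the product name, URL, and checksum below are placeholders, not the real values from those repositories:

// swift-tools-version:5.7
import PackageDescription

let package = Package(
    name: "Dependencies",
    products: [
        .library(name: "librocksdb", targets: ["librocksdb"]),
    ],
    targets: [
        // Placeholder URL and checksum - the real values live in the linked repositories.
        .binaryTarget(
            name: "librocksdb",
            url: "https://example.com/librocksdb.xcframework.zip",
            checksum: "0000000000000000000000000000000000000000000000000000000000000000"
        ),
    ]
)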
0 replies · 1 boost · 867 views · Apr ’23
Configuring AVAudioSession to allow "Hey Siri"
I'm building a recording app, and I've built an App Shortcut that allows the user to start recording via Siri ("Hey Siri, start a recording"). This works well until the audio session is activated for the first time. After that, the device no longer listens to "Hey Siri". I'd like the following behaviour:

- While the app is recording, the device should not listen to Siri.
- When the app is not recording, the device should listen to Siri.

I've currently configured AVAudioSession this way:

try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])

I've tried switching the category back and forth between .playAndRecord and .playback when recording/not recording, but that didn't help either. I'd also like to avoid changing the category or activating/deactivating the audio session, since that always takes a bit of time. When the audio session is configured and/or activated, the user experiences a "slow" record button: there's a noticeable delay between tapping the button and the device actually recording. When it is already configured, the record button starts recording instantly. What impact does AVAudioSession have on Siri?
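For reference, the category toggling I tried looks roughly like this; the isRecording parameter and the playback options are my own illustration, not copied from the snippet above:

// Attempted workaround: only use a record-capable category while actually
// recording, and fall back to playback otherwise. This did not bring
// "Hey Siri" back once the session had been activated.
func updateSessionCategory(isRecording: Bool) throws {
    let session = AVAudioSession.sharedInstance()
    if isRecording {
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
    } else {
        try session.setCategory(.playback, options: [.mixWithOthers])
    }
}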
0 replies · 1 boost · 767 views · Feb ’23
Write AAC files in a way that they are readable even after a crash
I'm building an audio recording app. For our users it's important that recordings never get lost - even if the app crashes, users would like to keep the partial recording. We encode recordings in AAC or ALAC and store them in an m4a file using AVAudioFile. However, if the app crashes, those m4a files are invalid - the moov atom is missing. Are there recording settings that change the m4a file so that it is always playable, even if the recording is interrupted half-way? I'm not at all an expert in audio codecs, but from what I understand, it is possible to write the moov atom at the beginning of the audio file instead of the end, which could solve the problem. But of course, I'd prefer an actual expert to tell me what a good solution is, and how to configure this in AVAudioFile.
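For reference, the recordings are currently written roughly like this; the exact settings dictionary is an illustration of a typical AAC setup, not copied verbatim from the app:

// Hypothetical sketch of the current setup: an AAC .m4a written with AVAudioFile.
// If the app dies before the file is closed, the moov atom never gets written
// and the file is unreadable.
let settings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1,
]
let audioFile = try AVAudioFile(forWriting: recordingURL, settings: settings)
// later, from the recording tap:
try audioFile.write(from: buffer)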
3 replies · 1 boost · 1.6k views · Nov ’22
MLShapedArray, MLMultiArray, memory allocations and memory layout
I have two questions on MLShapedArray.

1. Memory layout

I'm using MLMultiArray to get data into and out of a CoreML model. I need to preprocess the data before feeding it to the model, and right now I store it in a ContiguousArray since I know that I can safely pass this to vDSP methods. I'm wondering if I could use an MLShapedArray instead. Is an MLShapedArray guaranteed to have a contiguous memory buffer underneath?

2. Memory allocations

MLShapedArray and MLMultiArray have initializers that allow converting between them. Is the data copied, or is the underlying buffer reused? I'd love for the buffer to be reused to avoid malloc calls. In my ML pipeline, I'd like to allocate all buffers at the start and then just reuse them as I do my processing.
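For context, this is the kind of access pattern I'd like to use on an MLShapedArray if contiguity is guaranteed; the stride check is my own guess at how one would verify a contiguous row-major layout, not something taken from the documentation:

import Accelerate
import CoreML

let input = MLShapedArray<Float>(repeating: 0, shape: [1, 1024])

let total = input.withUnsafeShapedBufferPointer { (buffer, _, strides) -> Float in
    // Assumption: a contiguous row-major layout has a final stride of 1.
    guard strides.last == 1 else { return 0 }
    // Pass the raw buffer to vDSP, as I currently do with a ContiguousArray.
    return vDSP.sum(buffer)
}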
1 reply · 0 boosts · 1.3k views · Nov ’22
Animating boundary supplementary item size changes with diffable data source
I have a UICollectionView with a diffable data source and a global list header in a compositional layout (i.e., one header for the whole layout, not one per section).

let config = UICollectionViewCompositionalLayoutConfiguration()
let listHeader = NSCollectionLayoutBoundarySupplementaryItem(
    layoutSize: NSCollectionLayoutSize(widthDimension: .fractionalWidth(1), heightDimension: .estimated(1)),
    elementKind: "ListHeader",
    alignment: .top
)
config.boundarySupplementaryItems = [listHeader]
let layout = UICollectionViewCompositionalLayout(sectionProvider: { ... }, configuration: config)

Sometimes the list header changes its size, and I'd love to animate that. However, I cannot call reconfigureItems with a diffable data source. If the new snapshot is identical to the old snapshot, the header size doesn't update in the list (even if I call layoutIfNeeded on the header view directly). If there are changes between the snapshots, the header size updates abruptly, without animation. How can I update and animate the size change of the global list header, regardless of whether the snapshot has changed or not?
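One direction I've considered is invalidating the layout inside an animation block after the header's content changes; I'm not sure this is the intended mechanism, so treat it as a sketch. The headerView.configure call stands in for whatever actually changes the header's content:

// Hypothetical attempt: update the header's content first, then animate
// the resulting size change by invalidating the layout.
headerView.configure(with: newContent)  // hypothetical content update
UIView.animate(withDuration: 0.3) {
    collectionView.collectionViewLayout.invalidateLayout()
    collectionView.layoutIfNeeded()
}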
0 replies · 0 boosts · 739 views · Jul ’22
Backing up and restoring an app group container
We're in the process of moving our app's storage from the app container to an app group container so that the data can also be accessed by app extensions (such as share extensions). However, we found that Xcode does not back up app group containers via the Devices and Simulators window. How can I include the app group in the backup and restore process? Similarly, could you confirm that app groups are indeed included in the iCloud backup and restore process?
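For context, the shared storage we're moving to is the standard app group container, accessed like this; the group identifier below is a placeholder:

// Shared container used by the main app and its extensions.
// "group.com.example.myapp" is a placeholder identifier.
let containerURL = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.myapp")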
4 replies · 0 boosts · 1.7k views · Nov ’21
AVAudioEngine when connected to AirPlay
Background

We're writing a small recording app - think Voice Memos for the sake of argument. In our app, users should always record with the built-in iPhone microphone.

Our Problem

Our setup works fine when using just the speakers or in combination with Bluetooth headsets. However, it doesn't work well with AirPlay. One of two things can happen:

- The app records just silence.
- The app crashes when trying to connect the inputNode to the recorderNode (see code below), complaining that IsFormatSampleRateAndChannelCountValid == false.

Our testing environment is an iPhone Xs connected to an AirPlay 2 compatible Sonos amp.

Code

We use the following code to set up the AVAudioSession (simplified, without error handling):

let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay])
try AVAudioSession.sharedInstance().setActive(true)

Every time we record, we configure the audio session to use the built-in mic, and then create a fresh AVAudioEngine.

let session = AVAudioSession.sharedInstance()
let builtInMicInput = session.availableInputs!.first(where: { $0.portType == .builtInMic })
try session.setPreferredInput(builtInMicInput)

let sampleRate: Double = 44100
let numChannels: AVAudioChannelCount = isStereoEnabled ? 2 : 1
let recordingOutputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: numChannels, interleaved: false)!

let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()

// This sets the input volume of the recorderNode in its destination node (mainMixerNode) to 0.
// The raw outputVolume of the node remains 1, so a tap on it still receives the samples.
// If we set outputVolume = 0 instead, the tap would only receive zeros.
recorderNode.volume = 0

engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

// and later
try engine.start()

We install a tap on the recorderNode to save the recorded audio into a file. The tap works fine and is out of scope for this question, and thus not included here.

Questions

How do we route/configure the audio engine correctly to avoid this problem? Do you have any advice on how to debug such issues in the future? Which variables/states should we inspect? Thank you so much in advance!
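For completeness, this is the kind of state dump we've started adding right before engine.start() to compare the working and failing routes; the exact set of properties is just our guess at what's relevant:

// Log the current route and the formats at the connection points that the
// IsFormatSampleRateAndChannelCountValid assertion complains about.
let session = AVAudioSession.sharedInstance()
print("inputs:", session.currentRoute.inputs)
print("outputs:", session.currentRoute.outputs)
print("session sampleRate:", session.sampleRate, "inputChannels:", session.inputNumberOfChannels)
print("inputNode format:", engine.inputNode.inputFormat(forBus: 0))
print("outputNode format:", engine.outputNode.outputFormat(forBus: 0))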
1 reply · 0 boosts · 1.5k views · Jul ’21