Posts

Post not yet marked as solved
0 Replies
76 Views
I have a custom rotor that changes the skim speed of the skim forward/backward feature of my audio player. The rotor works, but it always plays an end-of-list sound. Here is the code:

```swift
// Member variables
private let accessibilitySeekSpeeds: [Double] = [10, 30, 60, 180] // seconds
private var accessibilitySeekSpeedIndex: Int = 0

func seekSpeedRotor() -> UIAccessibilityCustomRotor {
    UIAccessibilityCustomRotor(name: "seek speed") { [weak self] predicate in
        guard let self = self else { return nil }
        let speeds = accessibilitySeekSpeeds
        switch predicate.searchDirection {
        case .previous:
            accessibilitySeekSpeedIndex = (accessibilitySeekSpeedIndex - 1 + speeds.count) % speeds.count
        case .next:
            accessibilitySeekSpeedIndex = (accessibilitySeekSpeedIndex + 1) % speeds.count
        @unknown default:
            break
        }

        // Return the currently selected speed as an accessibility element
        let accessibilityElement = UIAccessibilityElement(accessibilityContainer: self)
        let currentSpeed = localizedDuration(seconds: speeds[accessibilitySeekSpeedIndex])
        accessibilityElement.accessibilityLabel = currentSpeed + " seek speed"
        UIAccessibility.post(notification: .announcement, argument: currentSpeed + " seek speed")
        return UIAccessibilityCustomRotorItemResult(targetElement: accessibilityElement, targetRange: nil)
    }
}
```

The returned accessibility element isn't read out; instead, an end-of-list sound is played. I can announce the change manually using UIAccessibility.post, but it still plays the end-of-list sound. How can I prevent the rotor from playing the end-of-list sound?
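For reference, here is a minimal sketch of a variation I'm considering (an assumption, not a verified fix): giving the created element an explicit frame in the container's coordinate space, in case VoiceOver discards a frameless element and falls back to the end-of-list sound. PlayerView, the label text, and the use of bounds are all hypothetical.

```swift
import UIKit

final class PlayerView: UIView { // hypothetical container view
    private let accessibilitySeekSpeeds: [Double] = [10, 30, 60, 180] // seconds
    private var accessibilitySeekSpeedIndex: Int = 0

    func currentSeekSpeedRotorResult() -> UIAccessibilityCustomRotorItemResult {
        let element = UIAccessibilityElement(accessibilityContainer: self)
        let speed = accessibilitySeekSpeeds[accessibilitySeekSpeedIndex]
        element.accessibilityLabel = "\(Int(speed)) seconds seek speed"
        // Assumption: a non-empty frame inside the container might make VoiceOver
        // accept the element as a valid rotor target instead of playing the sound.
        element.accessibilityFrameInContainerSpace = bounds
        return UIAccessibilityCustomRotorItemResult(targetElement: element, targetRange: nil)
    }
}
```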
Post not yet marked as solved
0 Replies
271 Views
I noticed two differences in my share extension's behaviour compared to my main app:

1. The layer.presentation() values can be massively out of date, which means that continuing animations from their current position is not possible. This is true both when manually checking the layer.presentation() values and when letting UIKit do the replacement-continuation via UIView.animate(..., options: [.beginFromCurrentState], ...).
2. UI updates seem to be ignored while the share extension performs heavy calculation. Interestingly, it doesn't seem to matter whether I do this calculation on the main thread or on a background thread and dispatch UI updates to the main thread via DispatchQueue.main.sync { ... }. I see my Xcode console filling with progress updates from print(progress) statements, but the UI just doesn't move. Once the heavy processing is done, it instantly updates again.

I assume that 1 and 2 are related: if I cannot get the UI to draw while the computation is running, I probably also can't get up-to-date presentation-layer values. Are there any explanations for this behaviour, and any advice on how I could circumvent the problem? Again, this is specific to my share extension and doesn't happen in my main app.
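For illustration, here is a minimal sketch of the pattern described above; processChunk, runHeavyWork, and progressView are hypothetical names, and the loop just stands in for the real computation:

```swift
import UIKit

func runHeavyWork(updating progressView: UIProgressView) {
    DispatchQueue.global(qos: .userInitiated).async {
        for step in 0..<100 {
            processChunk(step) // hypothetical CPU-bound work
            let progress = Float(step + 1) / 100
            DispatchQueue.main.sync { // hop back to the main thread for the UI update
                progressView.progress = progress
                print(progress) // shows up in the console even while the UI appears frozen
            }
        }
    }
}

func processChunk(_ step: Int) { /* hypothetical heavy calculation */ }
```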
Post not yet marked as solved
0 Replies
732 Views
When configuring an AVAudioSession as playAndRecord, I have to select the CarPlay input as preferredInput to make sure that the output is also routed to the car - if I set the preferredInput to the built-in mic, the output is routed to the speakers instead. However, when I select the CarPlay input as preferredInput, AVAudioSession configures the output as mono:

```
(lldb) po session.currentRoute.inputs.first!
<AVAudioSessionPortDescription: 0x282fcec30, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.inputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fccc70, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>

(lldb) po session.currentRoute.outputs
▿ 1 element
  - 0 : <AVAudioSessionPortDescription: 0x282fce9d0, type = CarAudio; name = CarPlay; UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583; selectedDataSource = (null)>

(lldb) po session.currentRoute.outputs.first!.channels
▿ Optional<Array<AVAudioSessionChannelDescription>>
  ▿ some : 1 element
    - 0 : <AVAudioSessionChannelDescription: 0x282fd8590, name = CarPlay; label = 0 (0x0); number = 1; port UID = 48:F0:7B:C6:21:A8-Audio-AudioMain-92004763965583>
```

When I configure the session only for playback, the output is stereo, as you'd expect from a car system. This is on iOS 17 beta 1; I'm afraid I can't check whether this is a new regression or has already existed before, but it quite likely has existed before. Any advice on how I can circumvent this issue?
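For context, a minimal sketch of the routing setup described above; the category options shown are illustrative assumptions, not the exact production configuration:

```swift
import AVFoundation

func configureCarPlaySession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .allowAirPlay])

    // Selecting the CarPlay input keeps playback routed to the car,
    // but the output route then reports only a single channel.
    if let carPlayInput = session.availableInputs?.first(where: { $0.portType == .carAudio }) {
        try session.setPreferredInput(carPlayInput)
    }
    try session.setActive(true)

    print(session.currentRoute.outputs.first?.channels?.count ?? 0) // 1 instead of the expected 2
}
```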
Post not yet marked as solved
0 Replies
749 Views
The precompiled frameworks that we're using in our app aren't found by the linker in Xcode 14.3. It works fine in the simulator, but the app crashes on a real device with a dynamic library loading error. The problem seems to be that Xcode 14.3 compiles the app so that it looks in a folder called PackageFrameworks. The error messages show paths like this:

```
'/Users/{username}/Library/Developer/Xcode/DerivedData/{appname}/Build/Products/Debug-iphoneos/PackageFrameworks/librocksdb.framework/librocksdb' (errno=2)
```

Note that the library exists, but not at

```
Build/Products/Debug-iphoneos/PackageFrameworks/librocksdb.framework/librocksdb
```

but simply at

```
Build/Products/Debug-iphoneos/librocksdb.framework/librocksdb
```

This worked fine in Xcode 14.2. Clearing the whole derived data cache, as well as all of SwiftPM's caches, didn't help.

To reproduce, include one of these precompiled frameworks in your project, then compile and run the app on a real device:

https://github.com/tcwalther/sentry-cocoa-sdk-xcframeworks
https://github.com/tapeit/rocksdb.swift
Post not yet marked as solved
3 Replies
1.4k Views
I'm building an audio recording app. For our users it's important that recordings never get lost - even if the app crashes, users would like to keep the partial recording. We encode recordings in AAC or ALAC and store them in an m4a file using AVAudioFile. However, if the app crashes, those m4a files are invalid because the MOOV atom is missing.

Are there recording settings that change the m4a file so that it is always playable, even if the recording is interrupted half-way? I'm not at all an expert in audio codecs, but from what I understand, it is possible to write the MOOV atom at the beginning of the audio file instead of at the end, which could solve the problem. But of course, I'd prefer an actual expert to tell me what a good solution is, and how to configure it with AVAudioFile.
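For reference, a minimal sketch of how a file like ours is created; the settings shown (AAC, mono, 44.1 kHz) are illustrative assumptions rather than our exact configuration:

```swift
import AVFoundation

func makeRecordingFile(at url: URL) throws -> AVAudioFile {
    // Illustrative settings: AAC in an .m4a container, mono, 44.1 kHz.
    let settings: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ]
    // The MOOV atom only gets finalized when the file is closed normally,
    // which is why a crash mid-recording leaves the file unplayable.
    return try AVAudioFile(forWriting: url, settings: settings)
}
```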
Post not yet marked as solved
0 Replies
638 Views
I'm building a recording app and have built an App Shortcut that allows the user to start recording via Siri ("Hey Siri, start a recording"). This works well until the audio session is activated for the first time. After that, the device no longer listens to "Hey Siri".

I'd like to have the following behaviour:

- while the app is recording, the device should not listen to Siri
- when the app is not recording, the device should listen to Siri

I've currently configured AVAudioSession this way:

```swift
try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
```

I've tried switching the category back and forth between .playAndRecord and .playback when recording/not recording, but that didn't help either. I'd also like to avoid changing the category or activating/deactivating the audio session, since that always takes a bit of time: when the audio session still has to be configured and/or activated, the user experiences a "slow" record button, with a noticeable delay between tapping the button and the device actually recording. When the session is already configured, the record button starts recording instantly.

What impact does AVAudioSession have on Siri?
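For completeness, a minimal sketch of the activate/deactivate approach I'd like to avoid because of the delay; the function names are hypothetical, and whether .notifyOthersOnDeactivation actually lets "Hey Siri" resume is an assumption on my part:

```swift
import AVFoundation

func startRecordingSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay, .mixWithOthers])
    try session.setActive(true) // this is where the noticeable delay happens
    // ... start the recorder (omitted)
}

func stopRecordingSession() throws {
    // ... stop the recorder (omitted)
    try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
}
```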
Post not yet marked as solved
0 Replies
516 Views
In our app, we're currently using UIActivityViewController to allow users to share files (in our case, audio files). Among all the sharing options, a really common use case is to AirDrop a file to your Mac. We'd love to make that easier by providing a dedicated AirDrop button. Is it possible to initiate a share-via-AirDrop action directly from Swift?
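For reference, a minimal sketch of the sharing flow we use today; the function name is hypothetical, and as far as I can tell .airDrop only exists as a UIActivity.ActivityType for excluding activities, not for launching one directly:

```swift
import UIKit

func shareAudioFile(at url: URL, from presenter: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [url], applicationActivities: nil)
    // We could trim the sheet via activityVC.excludedActivityTypes,
    // but the full share sheet still has to appear first.
    presenter.present(activityVC, animated: true)
}
```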
Post marked as solved
1 Reply
986 Views
I have two questions on MLShapedArray.

1. Memory Layout

I'm using MLMultiArray to get data into and out of a CoreML model. I need to preprocess the data before feeding it to the model, and right now I store it in a ContiguousArray, since I know that I can safely pass this to vDSP methods. I'm wondering if I could use an MLShapedArray instead. Is an MLShapedArray guaranteed to have a contiguous memory buffer underneath?

2. Memory Allocations

MLShapedArray and MLMultiArray have initializers that allow converting between them. Is the data copied, or is the underlying buffer reused? I'd love for the buffer to be reused to avoid malloc calls. In my ML pipeline, I'd like to allocate all buffers at the start and then just reuse them as I do my processing.
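For illustration, a minimal sketch of what I'd like to do, assuming MLShapedArray's withUnsafeMutableShapedBufferPointer exposes a single contiguous buffer (the shape and sample data are made up):

```swift
import CoreML

var shaped = MLShapedArray<Float>(repeating: 0, shape: [1, 1024])
let preprocessed = [Float](repeating: 0.5, count: 1024) // hypothetical preprocessed samples

shaped.withUnsafeMutableShapedBufferPointer { buffer, shape, strides in
    // Assumption: for a freshly created MLShapedArray this buffer is contiguous,
    // so a plain element-wise copy is valid.
    for i in buffer.indices {
        buffer[i] = preprocessed[i]
    }
}
```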
Post not yet marked as solved
0 Replies
612 Views
I have a UICollectionView with a diffable data source and a global list header in a compositional layout (i.e., one header for the whole layout, not one per section):

```swift
let configuration = UICollectionViewCompositionalLayoutConfiguration()
let listHeader = NSCollectionLayoutBoundarySupplementaryItem(
    layoutSize: NSCollectionLayoutSize(widthDimension: .fractionalWidth(1),
                                       heightDimension: .estimated(1)),
    elementKind: String(describing: ListHeader.self),
    alignment: .top
)
configuration.boundarySupplementaryItems = [listHeader]
let layout = UICollectionViewCompositionalLayout(sectionProvider: { sectionIndex, environment in
    // ... per-section layout (omitted)
}, configuration: configuration)
```

Sometimes the list header changes its size, and I'd love to animate that. However, I cannot call reconfigureItems for the header with a diffable data source. If the new snapshot is identical to the old snapshot, the header size doesn't update in the list (even if I call layoutIfNeeded on the header view directly). If there are changes between the snapshots, the header size updates abruptly, without animation.

How can I update and animate the size change of the global list header, regardless of whether the snapshot has changed or not?
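For what it's worth, a minimal sketch of the kind of thing I'd expect to work (an untested assumption, just to show the direction I've been thinking in):

```swift
import UIKit

// Hypothetical attempt: invalidate the layout inside an animation block
// after the header's content changes, hoping the size change animates.
func animateHeaderResize(in collectionView: UICollectionView) {
    UIView.animate(withDuration: 0.3) {
        collectionView.collectionViewLayout.invalidateLayout()
        collectionView.layoutIfNeeded()
    }
}
```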
Post marked as solved
1 Reply
1.3k Views
When compiling my project for a physical device, Xcode does not reuse the build cache but instead recompiles every file. When compiling for a simulator target instead, Xcode properly uses the build cache, and incremental builds are lightning fast. Is there a configuration I can check to enable incremental builds for physical devices, too?
Post not yet marked as solved
4 Replies
1.3k Views
We're in the process of moving our app's storage from the app container to an app group container so that the data can also be accessed by app extensions (such as share extensions). However, we found that Xcode does not back up app group containers via the Devices and Simulators window. How can I include the app group in the backup and restore process? Similarly, could you confirm that app groups are indeed included in the iCloud backup and restore process?
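For context, a minimal sketch of how we resolve the shared container we'd like to see in backups; the group identifier and folder name are placeholders:

```swift
import Foundation

// Placeholder identifier - the real app group ID differs.
let groupID = "group.com.example.recorder"

if let containerURL = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: groupID) {
    // All data now lives inside the shared container instead of the app container.
    let storageURL = containerURL.appendingPathComponent("Recordings", isDirectory: true)
    print(storageURL.path)
}
```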
Post not yet marked as solved
1 Reply
1.3k Views
Background

We're writing a small recording app - think Voice Memos for the sake of argument. In our app, users should always record with the built-in iPhone microphone.

Our Problem

Our setup works fine when using just the speakers or in combination with Bluetooth headsets. However, it doesn't work well with AirPlay. One of two things can happen:

- The app records just silence
- The app crashes when trying to connect the inputNode to the recorderNode (see code below), complaining that IsFormatSampleRateAndChannelCountValid == false

Our testing environment is an iPhone Xs connected to an AirPlay 2 compatible Sonos amp.

Code

We use the following code to set up the AVAudioSession (simplified, without error handling):

```swift
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetoothA2DP, .allowAirPlay])
try AVAudioSession.sharedInstance().setActive(true)
```

Every time we record, we configure the audio session to use the built-in mic, and then create a fresh AVAudioEngine:

```swift
let session = AVAudioSession.sharedInstance()
let builtInMicInput = session.availableInputs!.first(where: { $0.portType == .builtInMic })
try session.setPreferredInput(builtInMicInput)

let sampleRate: Double = 44100
let numChannels: AVAudioChannelCount = isStereoEnabled ? 2 : 1
let recordingOutputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: numChannels, interleaved: false)!

let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()

// This sets the input volume of those nodes in their destination node (mainMixerNode) to 0.
// The raw outputVolume of these nodes remains 1, so when you tap them you still get the samples.
// If you set outputVolume = 0 instead, the taps would only receive zeros.
recorderNode.volume = 0

engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))

// and later
try engine.start()
```

We install a tap on the recorderNode to save the recorded audio into a file. The tap works fine and is out of scope for this question, so it's not included here.

Questions

How do we route/configure the audio engine correctly to avoid this problem? Do you have any advice on how to debug such issues in the future - which variables/states should we inspect? Thank you so much in advance!
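On the second question, a minimal sketch of the state that seems worth inspecting when the failure happens; the helper name is hypothetical, and engine is the AVAudioEngine from the code above:

```swift
import AVFoundation

func dumpAudioState(of engine: AVAudioEngine) {
    let session = AVAudioSession.sharedInstance()
    print("category:", session.category.rawValue)
    print("inputs:", session.currentRoute.inputs.map(\.portType.rawValue))
    print("outputs:", session.currentRoute.outputs.map(\.portType.rawValue))
    print("input format:", engine.inputNode.inputFormat(forBus: 0))    // sample rate & channel count
    print("output format:", engine.outputNode.outputFormat(forBus: 0))
}
```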