
Stereo recording in WWDC20-10226 example project creates distorted audio recordings
When running the example project - https://developer.apple.com/documentation/avfoundation/avaudiosession/capturing_stereo_audio_from_built-in_microphones - from the WWDC20 session "Record stereo audio with AVAudioSession", I get a broken sound file. It sounds heavily distorted. Here is a link to an example file: https://www.dropbox.com/s/r0l2wdloqw33j95/recording.wav?dl=0

The distortion affects the left and right channels equally. This is on an iPhone XS with the first iOS 14 beta. I first suspected that one of the microphones on my device might be broken, but I checked by manually recording with the Front, Back and Bottom microphones individually, and each of these recordings is fine. Mono recording in the example app also works fine.

It seems to me that this is a DSP bug on the iPhone XS. Could you confirm whether that is the case, or provide further guidance on debugging the issue?
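For reference, the stereo configuration in question boils down to something like this (a minimal sketch based on the public AVAudioSession API, not the sample project's exact code; error handling is reduced to try?):

import AVFoundation

// Sketch: pick a built-in microphone data source that supports the stereo
// polar pattern and select it, as described in the WWDC20 session.
func configureStereoCapture() {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, options: [.defaultToSpeaker, .allowBluetooth])

    guard
        let input = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
        let dataSource = input.dataSources?.first(where: {
            $0.supportedPolarPatterns?.contains(.stereo) == true
        })
    else { return }

    try? dataSource.setPreferredPolarPattern(.stereo)
    try? session.setPreferredInput(input)
    // The stereo orientation should match the device/UI orientation (new in iOS 14).
    try? session.setPreferredInputOrientation(.portrait)
    try? session.setActive(true)
}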
Replies: 2 · Boosts: 0 · Views: 700 · Jun ’20
SwiftUI in iOS14: drawingGroup() and Mask have broken behaviour
In the first iOS 14 beta, the following code no longer works when drawingGroup() is called:

import SwiftUI
import PlaygroundSupport

struct ContentView: View {
  var body: some View {
    MaskView().frame(width: 300, height: 50)
  }
}

struct MaskView: View {
  var ratio: CGFloat = 0.3

  var body: some View {
    GeometryReader { g in
      ZStack {
        LeftView()
          .mask(Rectangle()
              .padding(.trailing, (1 - self.ratio) * g.size.width))
        RightView()
          .mask(Rectangle()
              .padding(.leading, self.ratio * g.size.width))
      }
    }
  }
}

struct LeftView: View {
  var body: some View {
    Rectangle().fill(
      LinearGradient(
        gradient: Gradient(colors: [.black, .white]),
        startPoint: .leading,
        endPoint: .trailing)
    )
    .drawingGroup()
  }
}

struct RightView: View {
  var body: some View {
    Rectangle().fill(
      LinearGradient(
        gradient: Gradient(colors: [.white, .black]),
        startPoint: .leading,
        endPoint: .trailing)
    )
    .drawingGroup()
  }
}

PlaygroundPage.current.setLiveView(ContentView())

Without drawingGroup(), this code displays, on both iOS 13 and 14, a gradient from black to white (0-30% of the width), as well as a gradient from white to black (30%-100%), right next to each other.

On iOS 13 and 14, when enabling drawingGroup(), the gradient in the Playground changes from black->white to white->yellow (for the left view, and equivalently for the right view). This is the first odd behaviour I observed.

On iOS 14 only, calling drawingGroup() additionally breaks the .padding() calculation. I can fix this as follows, which works on both iOS 13 and 14:

  var body: some View {
    GeometryReader { g in
      ZStack {
        LeftView()
          .mask(Rectangle()
              .padding(.trailing, (1 - self.ratio) * g.size.width))
        RightView()
          .mask(Rectangle()
              .frame(width: g.size.width)
              .padding(.leading, self.ratio * g.size.width))
      }
    }
  }

Note that calling .frame(width: g.size.width) only works on RightView; on LeftView, I can either omit this call, or I need to call .frame(width: self.ratio * g.size.width), which I absolutely don't understand. It would make sense to call .frame(width: (1 - self.ratio) * g.size.width) on RightView, but that leads to the broken behaviour again.

It seems to me that in iOS 14, for drawing groups, left padding is included in the frame width, whereas right padding is not. In iOS 13, left padding is also not included in the frame width. If that is the case, it looks like a regression in iOS 14 to me.
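Spelled out, the working LeftView variant mentioned above would look like this (a sketch of my reading of that sentence; the exact modifier order is assumed):

        LeftView()
          .mask(Rectangle()
              .frame(width: self.ratio * g.size.width)
              .padding(.trailing, (1 - self.ratio) * g.size.width))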
Replies: 5 · Boosts: 0 · Views: 1.7k · Jul ’20
SwiftUI and frequent state updates for animation
I have a use case where I want to animate a sort of progress view based on the current playback position of an audio playback node. I draw the view myself using primitive shapes (in case that matters). Let's say our view consists of two rectangles:

import SwiftUI

struct ProgressView: View {
  let progress: CGFloat

  var body: some View {
    GeometryReader { g in
      HStack(spacing: 0) {
        Rectangle().fill(Color.red)
          .frame(width: g.size.width * self.progress, height: g.size.height)
        Rectangle().fill(Color.blue)
          .frame(width: g.size.width * (1 - self.progress), height: g.size.height)
      }
    }
  }
}

In a different class, I have the following code (simplified):

class Conductor: ObservableObject {
  // player, playbackTimer and totalTime are omitted in this simplified example
  @Published var progress: Double = 0

  func play() {
    self.player.play()
    self.playbackTimer = Timer.scheduledTimer(withTimeInterval: 0.05, repeats: true) { _ in
      self.progress = self.player.currentTime / self.totalTime
    }
  }
}

Then I can update my view above as follows:

struct UpdatedProgressView: View {
  @EnvironmentObject private var conductor: Conductor

  var body: some View {
    ProgressView(progress: CGFloat(conductor.progress))
  }
}

This works (assuming I have no typos in the example code), but it's very inefficient. At this point, SwiftUI has to redraw my ProgressView at 20 Hz. In reality, my progress view is not just two rectangles but a more complex shape (a waveform visualisation), and as a result, this simple playback costs 40% CPU time. It doesn't make a difference whether I use drawingGroup() or not.

Then again, I'm quite certain this is not the way it's supposed to be done. I'm not using any animation primitives here, and as far as I understand it, the system has to redraw the entire ProgressView every single time, even though only a tiny number of pixels actually changes. Any hints on how I should change my code to make it more efficient with SwiftUI?
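For completeness, the Conductor is injected into the environment along these lines (setup code not shown above; the PlayerApp name is hypothetical):

import SwiftUI

// Hypothetical app entry point, only to show how the @EnvironmentObject above is supplied.
@main
struct PlayerApp: App {
  var body: some Scene {
    WindowGroup {
      UpdatedProgressView()
        .environmentObject(Conductor())
    }
  }
}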
Replies: 4 · Boosts: 1 · Views: 1.7k · Jul ’20
Can I reset engine.inputNode.lastRenderTime to 0?
I noticed that when launching my app and creating a fresh AVAudioEngine, engine.inputNode.lastRenderTime is usually > 0. It is nil before calling engine.start(), as expected. However, I would've expected it to start counting from 0 when calling engine.start() for the first time. Calling engine.prepare() and/or engine.reset(), either before or after the .start() call, doesn't change that. What am I missing?

There are several code examples out there that seem to assume that inputNode.lastRenderTime starts counting at zero, such as Analyzing Audio to Classify Sounds - https://developer.apple.com/documentation/soundanalysis/analyzing_audio_to_classify_sounds. In this part of the Apple documentation, the code snippet states:

audioEngine.inputNode.installTap(onBus: inputBus,
                                 bufferSize: 8192,
                                 format: inputFormat) { buffer, time in
    self.analysisQueue.async {
        self.streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }
}

However, given that lastRenderTime is not guaranteed to start at zero, I've found the correct code to be:

let offsetTime = audioEngine.inputNode.lastRenderTime?.sampleTime ?? 0
audioEngine.inputNode.installTap(onBus: inputBus,
                                 bufferSize: 8192,
                                 format: inputFormat) { buffer, time in
    self.analysisQueue.async {
        self.streamAnalyzer.analyze(buffer, atAudioFramePosition: time.sampleTime - offsetTime)
    }
}

This, however, requires adding the tap after the engine has been started, since otherwise lastRenderTime will be nil. Is there a way to reset lastRenderTime to zero so that I don't have to include offsetTime in my code?
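Putting the pieces together, the ordering this requires looks roughly like this (a sketch; the analysis code is replaced by a print, and in a real app the engine would be a long-lived property):

import AVFoundation

// Sketch of the ordering described above: start the engine first, then capture
// the offset and install the tap (before start(), lastRenderTime is still nil).
func startTapWithOffset() throws {
    let audioEngine = AVAudioEngine()
    let inputBus: AVAudioNodeBus = 0
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: inputBus)

    audioEngine.prepare()
    try audioEngine.start()

    let offsetTime = audioEngine.inputNode.lastRenderTime?.sampleTime ?? 0
    audioEngine.inputNode.installTap(onBus: inputBus,
                                     bufferSize: 8192,
                                     format: inputFormat) { buffer, time in
        // Frame position relative to when the engine was started.
        let framePosition = time.sampleTime - offsetTime
        print("received \(buffer.frameLength) frames at frame position \(framePosition)")
    }
}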
Replies: 1 · Boosts: 0 · Views: 631 · Jul ’20
AVAudioEngine stops running when changing input to AirPods
I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.

Expected Behaviour

When switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.

Actual Behaviour

When switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through anymore. Querying engine.isRunning still returns true. When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false.

Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help, either. The only thing that reliably helps is discarding the whole engine and creating a new one.

OS

This is on iOS 14, beta 5. I can't test this on previous versions, I'm afraid; I only have one device around.

Code to Reproduce

Here is a minimal code example. Create a simple app project in Xcode (it doesn't matter whether you choose SwiftUI or Storyboard), and give it permission to access the microphone in Info.plist. Create the following file, Conductor.swift:

import AVFoundation

class Conductor {
    static let shared: Conductor = Conductor()

    private let _engine = AVAudioEngine()

    init() {
        // Session
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() { try! _engine.start() }
}

And in AppDelegate, call:

Conductor.shared.start()

This example will route the input straight to the output. If you don't have headphones, it will trigger a feedback loop.

Question

What am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
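For reference, the stop/start-on-route-change handling mentioned above looks roughly like this (a sketch, assumed to live in Conductor.swift so it can access _engine; this did not fix the problem):

import AVFoundation

extension Conductor {
    // Sketch: restart the engine whenever the audio route changes.
    // The caller must retain the returned token for as long as it wants to observe.
    func observeRouteChanges() -> NSObjectProtocol {
        return NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            guard let self = self else { return }
            self._engine.stop()
            self._engine.prepare()
            try? self._engine.start()
        }
    }
}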
Replies: 2 · Boosts: 1 · Views: 2.1k · Aug ’20
UIViewPropertyAnimator and application lifecycle
I'm trying to use UIViewPropertyAnimator to animate a progress bar. The progress bar displays the playback time of an audio file.

Approach

0) Given a progress bar and a playback duration:

let progressBar: UIViewSubclass = ProgressBar(...)
let duration: TimeInterval = ...

1) Create an animator from start to finish on the progress bar:

let animator = UIViewPropertyAnimator(duration: duration, curve: .linear)
animator.pausesOnCompletion = true
progressBar.setProgress(0)
animator.addAnimations { [weak self] in
    guard let self = self else { return }
    self.progressBar.setProgress(1)
}
animator.pauseAnimation()

2) When a file is played, start it with:

let startTime: TimeInterval = ...
animator.fractionComplete = startTime / duration
animator.continueAnimation(withTimingParameters: nil, durationFactor: 0)

This works well. It is CPU efficient, and, with a bit of extra code, it supports more features, such as dragging the progress bar to a different playback position.

Problem: App/View Lifecycle

Unfortunately, this approach breaks when sending the app into the background and reopening it. After that, animator.continueAnimation() doesn't work anymore, and the animation is stuck at the finish state.

Here is an example project that reproduces this problem: https://github.com/JanNash/AnimationTest/tree/apple-developer-forum-660767

The main logic is in the ViewController: https://github.com/JanNash/AnimationTest/blob/apple-developer-forum-660767/AnimationTest/ViewController.swift

In this project, a simple progress bar is animated after a button press, and the button press restarts the animation from the beginning. After the app has been sent to the background and restored to the foreground, the animation doesn't work anymore.

Question

How do I fix this problem? Is there maybe something inherent to animations that I didn't understand? I could, for example, imagine that the render server loses the animation when the app goes into the background, and that, as such, animations always have to be recreated when the app - or even a view - enters the foreground. But it would be good to know whether this just requires a simple code change to fix, or whether I have misunderstood something conceptually.
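If that hypothesis is correct, the fix would presumably be to rebuild the animator on every foreground transition, roughly like this (a sketch with hypothetical names such as ProgressAnimationController; not code from the example project):

import UIKit

// Sketch: recreate the animator whenever the app returns to the foreground,
// restoring the last known fraction. How currentFraction is kept up to date
// from playback is elided here.
final class ProgressAnimationController {
    private var animator: UIViewPropertyAnimator?
    private var currentFraction: CGFloat = 0
    private let duration: TimeInterval
    private let progressBar: ProgressBar

    init(progressBar: ProgressBar, duration: TimeInterval) {
        self.progressBar = progressBar
        self.duration = duration
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(rebuildAnimator),
                                               name: UIApplication.willEnterForegroundNotification,
                                               object: nil)
        rebuildAnimator()
    }

    @objc private func rebuildAnimator() {
        // Throw away the old (possibly invalidated) animator and build a fresh one.
        animator?.stopAnimation(true)
        let newAnimator = UIViewPropertyAnimator(duration: duration, curve: .linear)
        newAnimator.pausesOnCompletion = true
        progressBar.setProgress(0)
        newAnimator.addAnimations { [progressBar] in progressBar.setProgress(1) }
        newAnimator.pauseAnimation()
        newAnimator.fractionComplete = currentFraction
        animator = newAnimator
    }
}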
Replies: 2 · Boosts: 0 · Views: 2.6k · Sep ’20
AVAudioSession lifecycle management
I have a question about AVAudioSession lifecycle management. I already got a lot of help in yesterday's lab appointment - thanks a lot for that - but two questions remain. In particular, I'm wondering how to deal with exceptions in session.activate() and session.configure().

Here is my current understanding, assuming that the session configuration is intended to remain constant throughout an app life cycle:

- a session needs to be configured when the app first launches, or when the media service is reset
- a session needs to be activated for first use, and after every interruption (e.g. phone call, another app getting access, app was suspended)

Because we cannot guarantee that a session.configure or session.activate call will succeed at all times, in our app we currently check whether the session is configured and activated before starting playback, and if not, we configure/activate it:

extension Conductor {
    @discardableResult
    private func configureSessionIfNeeded() -> Bool {
        guard !isAudioSessionConfigured else { return true }
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                              .allowBluetoothA2DP,
                                                              .allowAirPlay])
            isAudioSessionConfigured = true
        } catch {
            Logging.capture(error)
        }
        return isAudioSessionConfigured
    }

    @discardableResult
    func activateSessionIfNeeded() -> Bool {
        guard !isAudioSessionActive else { return true }
        guard configureSessionIfNeeded() else { return false }
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setActive(true)
            isAudioSessionActive = true
        } catch {
            Logging.capture(error)
        }
        return isAudioSessionActive
    }
}

This, however, requires keeping track of the state of the session:

class Conductor {
    // Singleton
    static let shared: Conductor = Conductor()

    private var isAudioSessionActive = false
    private var isAudioSessionConfigured = false
}

This feels error-prone. Here is how we currently deal with interruptions:

extension Conductor {
    // AVAudioSession.interruptionNotification
    @objc private func handleInterruption(_ notification: Notification) {
        guard let info = notification.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue)
        else { return }

        if type == .began {
            // WWDC session advice: ignore the "app was suspended" reason. By the time it is delivered
            // (when the app re-enters the foreground) it is outdated and useless anyway. They probably
            // should not have introduced it in the first place, but thought too much information is
            // better than too little and erred on the safe side.
            //
            // While the app is in the background, the user could interact with it from the control center
            // and, for example, start playback. This will resume the app, and we will receive both the
            // command from the control center (resume) and the interruption notification (pause), but in
            // undefined order. It's a race condition, solved by simply ignoring the app-was-suspended
            // notification.
            if let wasSuspended = info[AVAudioSessionInterruptionWasSuspendedKey] as? NSNumber,
               wasSuspended.boolValue {
                return
            }
            // FIXME: in the app-was-suspended case, isAudioSessionActive remains true but should be false.
            if playbackState == .playing { pausePlayback() }
            if isRecording { stopRecording() }
            isAudioSessionActive = false
        } else if type == .ended {
            // Resume playback
            guard let optionsValue = notification.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt
            else { return }
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                startPlayback()
            }
            // NOTE: imagine the session was active, and the user stopped playback, and then we get
            // interrupted. When the interruption ends, we will still get a .shouldResume, but we should
            // check our own state to see whether we were even playing before that.
        }
    }

    // AVAudioSession.mediaServicesWereResetNotification
    @objc private func handleMediaServicesWereReset(_ notification: Notification) {
        // We need to completely reinitialise the audio stack here, including redoing the session
        // configuration.
        pausePlayback()
        isAudioSessionActive = false
        isAudioSessionConfigured = false
        configureSessionIfNeeded()
    }
}

And here, for full reference, is the rest of this example class: https://gist.github.com/tcwalther/8999e19ab7e3c952d6763f11c984ef70

With the above design, we check at every playback whether we need to configure or activate the session. If we do, and configuration or activation fails, we just ignore the playback request and fail silently. We feel that this is a better user experience ("play button not working") than crashing the app or ending up in an inconsistent UI state.

I think we could simplify this dramatically if we knew that:

- we'll get an interruption-ended notification alongside the interruption-began notification in case the app was suspended,
- if the app was resumed because of a media center control, the interruption-ended notification will come before the playback request, and
- we can trust session.activate() and session.configure() to never throw an exception.

How would you advise simplifying and/or improving this code to correctly deal with AVAudioSession interruptions and error cases?
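For completeness, these handlers are registered roughly as follows (registration code not included above; a sketch assuming it is called once from Conductor's init, and that Conductor ultimately inherits from NSObject, which the @objc handlers require anyway):

extension Conductor {
    // Sketch: observer registration assumed by the @objc handlers above.
    func registerForSessionNotifications() {
        let center = NotificationCenter.default
        let session = AVAudioSession.sharedInstance()
        center.addObserver(self,
                           selector: #selector(handleInterruption(_:)),
                           name: AVAudioSession.interruptionNotification,
                           object: session)
        center.addObserver(self,
                           selector: #selector(handleMediaServicesWereReset(_:)),
                           name: AVAudioSession.mediaServicesWereResetNotification,
                           object: session)
    }
}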
Replies: 0 · Boosts: 0 · Views: 1.1k · Jun ’21