Posts

Post not yet marked as solved
0 Replies
129 Views
Hi,

I'm triggering a notification from the audio interruption handler (but also have a debug button set to trigger it manually) and it frequently does not show up. I don't think I have ever seen it show up when the watch face is off. I have created a singleton class to trigger this notification as follows. Note that I use a UUID in the identifier because an old thread here suggests this is necessary, but it makes no difference as far as I can tell. Any ideas? I'd like this notification to be reliable. I'm also surprised that with trigger set to nil, it does not trigger instantaneously.

Any help would be much appreciated!

Thanks,
-- B.

import Foundation
import UserNotifications

class NotificationSender: NSObject, UNUserNotificationCenterDelegate {
    static let shared = NotificationSender()

    override init() {
        super.init()

        let center = UNUserNotificationCenter.current()
        center.requestAuthorization(options: [.sound, .badge]) { granted, error in
            if granted {
                print("Notification permission granted")
            } else {
                print("Notification permission denied")
            }
        }
        center.delegate = self

        // Define the action to open the app
        let openAction = UNNotificationAction(identifier: "openAction", title: "Open App", options: [.foreground])

        // Add the action to the notification content
        let category = UNNotificationCategory(identifier: "resumeAudioCategory", actions: [openAction], intentIdentifiers: [], options: [])
        center.setNotificationCategories([category])
    }

    func sendNotification() {
        let center = UNUserNotificationCenter.current()

        let content = UNMutableNotificationContent()
        content.title = "Recording Interrupted"
        content.body = "You will need to restart it manually."
        content.categoryIdentifier = "resumeAudioCategory"

        // Create the notification request
        //let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
        let request = UNNotificationRequest(identifier: "ResumeAudioNotification-\(UUID().uuidString)", content: content, trigger: nil)
        center.add(request) { error in
            if let error = error {
                print("Error adding notification request: \(error)")
            }
        }
    }

    // Handle notification when the app is in the foreground
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                willPresent notification: UNNotification,
                                withCompletionHandler completionHandler: @escaping (UNNotificationPresentationOptions) -> Void) {
        // Display the notification while the app is in the foreground
        completionHandler([.sound, .badge, .banner, .list])
    }

    // Handle notification response
    func userNotificationCenter(_ center: UNUserNotificationCenter,
                                didReceive response: UNNotificationResponse,
                                withCompletionHandler completionHandler: @escaping () -> Void) {
        // Handle the user's response to the notification
        // For example, navigate to a specific screen in your app
        completionHandler()
    }
}
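For reference, a minimal diagnostic sketch (not part of the class above, and the function name is made up): it requests .alert in addition to .sound and .badge and logs the resulting authorization status before anything is scheduled. Whether the missing .alert option is related to the banners not appearing is only a guess.

import UserNotifications

// Diagnostic sketch (assumption: .alert may matter; the class above only
// requests .sound and .badge). Logs what the system actually granted.
func logNotificationAuthorization() {
    let center = UNUserNotificationCenter.current()
    center.requestAuthorization(options: [.alert, .sound, .badge]) { granted, error in
        print("requestAuthorization granted: \(granted), error: \(String(describing: error))")
        center.getNotificationSettings { settings in
            print("authorizationStatus: \(settings.authorizationStatus.rawValue)")
        }
    }
}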
Posted by trzy. Last updated.
Post not yet marked as solved
2 Replies
245 Views
Hi,

I have a watchOS app that records audio for an extended period of time and, because the mic is active, continues to record in background mode when the watch face is off. However, when a call comes in or Siri is activated, recording stops because of an audio interruption. Here is my code for setting up the session:

private func setupAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.overrideMutedMicrophoneInterruption])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        print("Audio Session error: \(error)")
    }
}

Before this I register an interruption handler that holds a reference to my AudioEngine (which I start and stop each time recording is activated by the user):

_audioInterruptionHandler = AudioInterruptionHandler(audioEngine: _audioEngine)

And here is how this class implements recovery:

fileprivate class AudioInterruptionHandler {
    private let _audioEngine: AVAudioEngine

    public init(audioEngine: AVAudioEngine) {
        _audioEngine = audioEngine

        // Listen to interrupt notifications
        NotificationCenter.default.addObserver(self, selector: #selector(handleAudioInterruption(notification:)), name: AVAudioSession.interruptionNotification, object: nil)
    }

    @objc private func handleAudioInterruption(notification: Notification) {
        guard let userInfo = notification.userInfo,
              let interruptionTypeRawValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
              let interruptionType = AVAudioSession.InterruptionType(rawValue: interruptionTypeRawValue) else {
            return
        }

        switch interruptionType {
        case .began:
            print("[AudioInterruptionHandler] Interruption began")
        case .ended:
            print("[AudioInterruptionHandler] Interruption ended")
            do {
                try AVAudioSession.sharedInstance().setActive(true)
            } catch {
                print("[AudioInterruptionHandler] Error resuming audio session: \(error.localizedDescription)")
            }
        default:
            print("[AudioInterruptionHandler] Unknown interruption: \(interruptionType.rawValue)")
        }
    }
}

Unfortunately, it fails with:

Error resuming audio session: Session activation failed

Is this even possible to do on watchOS? This code worked for me on iOS.

Thank you,
-- B.
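For comparison, here is a minimal sketch of how the .ended case is often written on iOS: check AVAudioSessionInterruptionOptionKey for .shouldResume before reactivating, and restart the engine explicitly afterwards. The helper name is made up, and whether deferring reactivation this way behaves any differently on watchOS is an open assumption.

import AVFoundation

// Sketch only (not the code from the post): resume after an interruption by
// honoring the system's shouldResume hint and restarting the engine explicitly.
func handleInterruptionEnded(userInfo: [AnyHashable: Any], audioEngine: AVAudioEngine) {
    let optionsRaw = userInfo[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
    let options = AVAudioSession.InterruptionOptions(rawValue: optionsRaw)
    guard options.contains(.shouldResume) else {
        print("System did not indicate shouldResume; leaving the session inactive")
        return
    }
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        try audioEngine.start()  // the engine does not restart itself after an interruption
    } catch {
        print("Resume failed: \(error)")
    }
}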
Posted by trzy. Last updated.
Post not yet marked as solved
3 Replies
587 Views
Hi,

I'd like to upload audio samples to the OpenAI Whisper API or others that take m4a, mp3, and wav data. I capture from the microphone and perform some basic signal processing to try to filter out non-voice samples and am left with AVAudioPCMBuffer. It seems there are two approaches:

1. Use AVAudioConverter to create AVAudioCompressedBuffer in MPEG-4 audio format. I've gotten this working (although I can't verify that the compressed data is valid because I'm unable to export a file with proper MPEG-4 headers).
2. Use AVAssetWriter, but this writes to disk, which strikes me as inefficient.

Neither of these readily produces a memory buffer with a .m4a file inside. Am I missing some obvious way to do this? How do people upload compressed audio data to remote endpoints? I've also explored just trying to create my own MPEG-4 compliant header but I was unable to produce a valid file.

Thanks,
-- B.
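For what it's worth, a minimal sketch of the temp-file route (function name, bit rate, and temp path are illustrative choices, not from the post): AVAudioFile can encode PCM buffers into an AAC .m4a on disk, and the file can then be read back into a Data value for upload.

import AVFoundation

// Sketch: encode an AVAudioPCMBuffer to AAC in an .m4a container via a temporary
// file, then load the bytes into memory for upload.
func makeM4AData(from buffer: AVAudioPCMBuffer) throws -> Data {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("m4a")
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: buffer.format.sampleRate,
        AVNumberOfChannelsKey: buffer.format.channelCount,
        AVEncoderBitRateKey: 64_000
    ]
    do {
        // Scope the AVAudioFile so it is closed (deallocated) before reading back.
        let file = try AVAudioFile(forWriting: url, settings: settings)
        try file.write(from: buffer)
    }
    let data = try Data(contentsOf: url)
    try? FileManager.default.removeItem(at: url)
    return data
}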
Posted by trzy. Last updated.
Post not yet marked as solved
2 Replies
876 Views
Hi,

I've read over many of the helpful posts on background modes ([1], [2], and responses to my thread [3]) but am encountering something that doesn't quite line up with my understanding.

Background

My app is using Bluetooth background mode in order to receive data from a peripheral periodically and then perform a network request (eventually, this will involve two network requests, one of them being an upload of audio, which will be a much larger transfer of up to a couple hundred KB, possibly requiring m4a compression first). In background mode, all the Bluetooth messages (currently a small string) are received reliably. The app is woken up and, from my understanding, given some time to run in the background. I perform a POST request using URLSession.shared.dataTask(with: request).

Problem Encountered

These often succeed but sporadically fail with a timeout in background mode and with the error message "connection was lost". Furthermore, nearly every request in background mode is associated with some sort of connection error that appears in the log. Given that many of the requests still succeed, I'm not sure what to make of these. I see a variety of different ones. For example:

Case 1:

2023-05-23 16:33:49.428668-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_read_request_report [C1] Receive failed with error "Socket is not connected"
2023-05-23 16:33:49.429633-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_read_request_report [C1] Receive failed with error "Socket is not connected"
2023-05-23 16:33:49.431677-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_read_request_report [C1] Receive failed with error "Socket is not connected"
2023-05-23 16:33:49.440358-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.440528-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.440640-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.440784-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.441459-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.441684-0700 ChatGPT for Monocle[4775:1088487] [quic] quic_conn_send_frames_for_key_state_block_invoke [C1.1.1.1:2] [-0151a6eeef6cab8c6b53cceead6cada7cf118a4e] unable to request outbound data
2023-05-23 16:33:49.445598-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_endpoint_handler_add_write_request [C1.1.1.1 104.18.6.192:443 failed channel-flow (satisfied (Path is satisfied), viable, interface: en0[802.11], ipv4, ipv6, dns)] Cannot send after flow table is released
2023-05-23 16:33:49.445773-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_write_request_report [C1] Send failed with error "Socket is not connected"
2023-05-23 16:33:49.446925-0700 ChatGPT for Monocle[4775:1088487] Connection 1: received failure notification
2023-05-23 16:33:49.447348-0700 ChatGPT for Monocle[4775:1088487] Connection 1: write error 1:57
2023-05-23 16:33:49.464436-0700 ChatGPT for Monocle[4775:1088487] [connection] nw_endpoint_handler_unregister_context [C1.1.1.1 104.18.6.192:443 failed channel-flow (satisfied (Path is satisfied), viable, interface: en0[802.11], ipv4, ipv6, dns)] Cannot unregister after flow table is released
2023-05-23 16:33:49.464974-0700 ChatGPT for Monocle[4775:1088487] [] nw_endpoint_flow_fillout_data_transfer_snapshot copy_info() returned NULL

Case 2:

2023-05-23 16:34:09.422783-0700 ChatGPT for Monocle[4775:1088784] [connection] nw_read_request_report [C3] Receive failed with error "Socket is not connected"
2023-05-23 16:34:09.423511-0700 ChatGPT for Monocle[4775:1088784] [connection] nw_read_request_report [C3] Receive failed with error "Socket is not connected"
2023-05-23 16:34:09.425478-0700 ChatGPT for Monocle[4775:1088784] [connection] nw_read_request_report [C3] Receive failed with error "Socket is not connected"
2023-05-23 16:34:09.434263-0700 ChatGPT for Monocle[4775:1088784] [quic] quic_conn_send_frames_for_key_state_block_invoke [C3.1.1.1:2] [-01809801f02b5c3795812501322b6f9d3c91236f] unable to request outbound data

The code that is generating these requests is fairly straightforward:

public func send(query: String, apiKey: String, model: String, completion: @escaping (String, ChatGPTError?) -> Void) {
    let requestHeader = [
        "Authorization": "Bearer \(apiKey)",
        "Content-Type": "application/json"
    ]

    _payload["model"] = model
    if var messages = _payload["messages"] as? [[String: String]] {
        messages.append([
            "role": "user",
            "content": "\(query)"
        ])
        _payload["messages"] = messages
    }

    let jsonPayload = try? JSONSerialization.data(withJSONObject: _payload)

    let url = URL(string: "https://api.openai.com/v1/chat/completions")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.allHTTPHeaderFields = requestHeader
    request.httpBody = jsonPayload

    _task = URLSession.shared.dataTask(with: request) { data, response, error in
        if let error = error {
            DispatchQueue.main.async {
                completion("", ChatGPTError.networkRequestFailed(error: error))
            }
            return
        }

        if let data = data {
            let (contentError, response) = self.extractContent(from: data)
            if let contentError = contentError {
                DispatchQueue.main.async {
                    completion("", contentError)
                }
            } else if let response = response {
                DispatchQueue.main.async {
                    completion(response, nil)
                }
            }
            return
        }

        DispatchQueue.main.async {
            completion("", ChatGPTError.responsePayloadParseError)
        }
    }

    _task?.resume()
}

Questions

Is there any way to make this process more reliable? I assume the error messages are meaningful and should not be ignored. If not, will I end up encountering issues when trying to upload larger payloads?

Thank you!
-- B.
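One mitigation often suggested for this situation (sketch below, not something from the post, and the helper name is made up) is to explicitly ask for background execution time around the request so the app is less likely to be suspended while the transfer is in flight; whether it removes the sporadic "connection was lost" failures is unverified.

import UIKit

// Sketch: wrap a data task in a UIApplication background task so the process
// keeps running briefly after being backgrounded.
func sendWithBackgroundTime(request: URLRequest, completion: @escaping (Data?, Error?) -> Void) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask(withName: "ChatRequest") {
        // Expiration handler: the system is reclaiming the extra time.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
    let task = URLSession.shared.dataTask(with: request) { data, _, error in
        completion(data, error)
        if taskID != .invalid {
            UIApplication.shared.endBackgroundTask(taskID)
            taskID = .invalid
        }
    }
    task.resume()
}

For the larger audio uploads mentioned above, an upload task on a background URLSessionConfiguration is the other commonly suggested route, since the system then owns the transfer and completes it even if the app is suspended.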
Posted by trzy. Last updated.
Post not yet marked as solved
5 Replies
930 Views
Hi,

I'd like to develop an iOS application that keeps the mic open for voice recording and processing even when the screen is off. I want to perform speech-to-text requests whenever samples of voice are detected (using a voice activity detection library) and also send requests to the cloud based on what is spoken.

I've enabled the Audio background mode, and preliminary testing seems to indicate that this is working. That is, I can press "record" in my app, switch to another app, then shut the screen off, and speak for several seconds before auto-stopping the recording and sending it to an SFSpeechRecognizer task, which appears to succeed. However, I have read that this should not be supported, so before going further down this path I wanted to understand: what exactly are the processing limitations in this mode? The documentation doesn't seem very clear to me.

Thanks,
-- B.
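For context, a sketch of the audio session setup typically paired with the Audio background mode for continued mic capture after the screen locks; the function name, category, and options here are illustrative, and the post doesn't say which were actually used.

import AVFoundation

// Sketch: session configuration commonly used alongside the Audio background
// mode so recording continues while the app is not frontmost.
func configureBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)
}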
Posted by trzy. Last updated.
Post not yet marked as solved
0 Replies
539 Views
Hi,

On my iPhone 12 Pro Max (iOS 16.2, Xcode 14.2), I'm noticing that the body tracking sample produces ARBodyAnchors whose wrist joint transforms (left_hand_joint, right_hand_joint) never change, for example when waving my hand or twisting my wrist. Other joints work (I haven't checked the digits, however). Has anyone experienced this? I'm not even sure how to begin debugging this head scratcher.

Thanks,
Bart
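As a starting point for debugging, a small sketch (function name is made up) that logs the model transforms of the two hand joints whenever the body anchor updates, to confirm whether the values really are static from frame to frame:

import ARKit

// Sketch: print the left/right hand joint model transforms from the body
// anchor's skeleton; call this from session(_:didUpdate:) for each ARBodyAnchor.
func logHandJoints(for bodyAnchor: ARBodyAnchor) {
    let skeleton = bodyAnchor.skeleton
    if let left = skeleton.modelTransform(for: .leftHand),
       let right = skeleton.modelTransform(for: .rightHand) {
        print("leftHand: \(left)")
        print("rightHand: \(right)")
    }
}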
Posted by trzy. Last updated.