User Notifications Delayed on Watch
Hi,

When sending notifications to watchOS, either remotely via APNs or locally, they are delivered with a small but noticeable delay of roughly 12 to 20 seconds. That doesn't sound like much, but when, for example, the notification is read out on AirPods by Siri via the iOS companion app on the phone, it's quite annoying to have to wait for the long-look notification UI to appear on the watch before you can send a quick reaction.

Interestingly, if the watch is not connected to the phone, notifications are delivered as quickly as on the iPhone (<1 s delay). That makes me think this behavior is related to the notification forwarding feature described here: https://developer.apple.com/documentation/watchos-apps/enabling-and-receiving-notifications. For a "Dependent watchOS app with an iOS app" it states: "You can either send the notification just to iPhone, or send it to both devices. In either case, the system ensures that the user only receives one notification at the best destination." So the watch must somehow coordinate with the iPhone so that it doesn't show a notification if the "same" notification (same APNs collapse ID?) is also delivered to the phone.

Is there a way to disable this behavior?

Thanks!
Quirin
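For reference, here is a minimal sketch (not from the original post) of scheduling a purely local notification from the watchOS app, which is one way to observe the delivery delay described above; the identifier, content, and one-second trigger are placeholder assumptions:

import UserNotifications

// Ask for permission once, e.g. at app launch.
UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .sound]) { granted, _ in
    guard granted else { return }

    // Schedule a local notification and record when it was requested,
    // so the gap to its actual delivery on the watch can be measured.
    let content = UNMutableNotificationContent()
    content.title = "Delay test"
    content.body = "Requested at \(Date())"

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 1, repeats: false)
    let request = UNNotificationRequest(identifier: "delay-test", // placeholder identifier
                                        content: content,
                                        trigger: trigger)
    UNUserNotificationCenter.current().add(request)
}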
Replies: 2 · Boosts: 0 · Views: 1.2k · Sep ’23
Indicate Packet Loss With AVAudioConverter for OPUS Decoding
I'm using an AVAudioConverter object to decode an OPUS stream for VoIP. The decoding itself works well. However, whenever the stream stalls (no more audio packets are available to decode because of network instability), this is audible as crackling or an abrupt stop in the decoded audio.

OPUS can mitigate this if packet loss is indicated to the decoder by passing a null pointer to the C library function int opus_decode_float(OpusDecoder *st, const unsigned char *data, opus_int32 len, float *pcm, int frame_size, int decode_fec), see https://opus-codec.org/docs/opus_api-1.2/group__opus__decoder.html#ga9c554b8c0214e24733a299fe53bb3bd2.

However, with AVAudioConverter in Swift I'm constructing an AVAudioCompressedBuffer like so:

let compressedBuffer = AVAudioCompressedBuffer(
    format: VoiceEncoder.Constants.networkFormat,
    packetCapacity: 1,
    maximumPacketSize: data.count
)
compressedBuffer.byteLength = UInt32(data.count)
compressedBuffer.packetCount = 1
compressedBuffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
data.copyBytes(
    to: compressedBuffer.data.assumingMemoryBound(to: UInt8.self),
    count: data.count
)

where data: Data contains the raw OPUS frame to be decoded.

How can I indicate packet loss in this context and get the AVAudioConverter to output PCM data even when no more input data is available?

More context: I'm specifying the audio format like this:

static let frameSize: UInt32 = 960
static let sampleRate: Float64 = 48000.0
static var networkFormatStreamDescription = AudioStreamBasicDescription(
    mSampleRate: sampleRate,
    mFormatID: kAudioFormatOpus,
    mFormatFlags: 0,
    mBytesPerPacket: 0,
    mFramesPerPacket: frameSize,
    mBytesPerFrame: 0,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 0,
    mReserved: 0
)
static let networkFormat = AVAudioFormat(streamDescription: &networkFormatStreamDescription)!

I've already tried 1) setting byteLength and packetCount to zero, and 2) returning nil while still setting .haveData in the AVAudioConverterInputBlock I'm using, both with no success.
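To make the decode path concrete, here is a minimal sketch (not from the original post) of the AVAudioConverterInputBlock setup being discussed. The packetQueue, the output format, and the zero-length "loss" packet are assumptions; whether the Core Audio OPUS decoder actually treats such a packet as lost and conceals it is exactly the open question:

import AVFoundation

// Output PCM format matching the stream: 48 kHz mono float (assumption).
let pcmFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)!
let converter = AVAudioConverter(from: VoiceEncoder.Constants.networkFormat, to: pcmFormat)!

// Compressed OPUS packets received from the network (assumed to be filled elsewhere).
var packetQueue: [AVAudioCompressedBuffer] = []

let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
    if !packetQueue.isEmpty {
        outStatus.pointee = .haveData
        return packetQueue.removeFirst()
    }
    // Experiment: hand the converter a zero-length packet on underrun, hoping the
    // decoder interprets it as a lost packet. This is an assumption, not a
    // confirmed behavior of the kAudioFormatOpus decoder.
    let loss = AVAudioCompressedBuffer(format: VoiceEncoder.Constants.networkFormat,
                                       packetCapacity: 1,
                                       maximumPacketSize: 1)
    loss.packetCount = 1
    loss.byteLength = 0
    loss.packetDescriptions!.pointee.mDataByteSize = 0
    outStatus.pointee = .haveData
    return loss
}

// One decode step: ask for a frame's worth of PCM (960 samples at 48 kHz).
let pcm = AVAudioPCMBuffer(pcmFormat: pcmFormat, frameCapacity: 960)!
var error: NSError?
let status = converter.convert(to: pcm, error: &error, withInputFrom: inputBlock)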
Replies: 0 · Boosts: 0 · Views: 666 · Feb ’22
Disable Input Node for AVAudioEngine
We’re working on a VoIP app that can also spontaneously play audio in the background for a walkie-talkie-like feature. We’re using AVAudioEngine for recording and playback, with voice processing enabled for echo cancellation.

However, we’ve been struggling with disabling the microphone of an AVAudioEngine, so that the system doesn’t show the red recording indicator while our app isn’t actually recording; it also seems that we cannot start an engine that records while in the background. On top of that, it appears we cannot disable an engine’s input node / microphone for this purpose once it has been used.

We therefore tried having two audio engines, one for recording and one for playback. However, that almost completely silenced our audio output; we assume this is because voice processing only works correctly when input and output are routed through the same engine.

We then tried a third approach: one engine for playback only and one for both recording and playback. This works; we can simply disable the combined engine whenever we’re not recording. The trade-off is a bit of lag during playback whenever we switch the engine used for playback.

It would be great if we could just have a single engine and disable its input node temporarily. Setting the isVoiceProcessingInputMuted property on the input node to true doesn’t have the desired effect of removing the system’s recording and privacy indicators (and therefore, I assume, of allowing the engine to start in the background). We discussed this issue during a WWDC lab session, where the engineers thought our current workaround is probably the best we can do at the moment, so please consider this a feature request.
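To make the single-engine setup concrete, here is a rough sketch (not from the original post) of a voice-processing engine and the attempted input mute; the node graph is reduced to one player node and the tap handling is an assumption:

import AVFoundation

// Single-engine setup with voice processing, plus the attempted input mute.
func makeVoiceEngine() throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    // Voice processing (echo cancellation) ties input and output to one engine.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    // Tap the microphone for the VoIP uplink (format and handling are app-specific).
    engine.inputNode.installTap(onBus: 0, bufferSize: 960, format: nil) { buffer, _ in
        // forward buffer to the network...
    }

    try engine.start()
    return engine
}

do {
    let engine = try makeVoiceEngine()
    // The attempt described in the post: mute the voice-processing input while not
    // transmitting. Audio stops coming from the microphone, but the system keeps
    // showing the recording/privacy indicator, which is the crux of the request.
    engine.inputNode.isVoiceProcessingInputMuted = true
} catch {
    print("Engine setup failed: \(error)")
}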
Replies: 1 · Boosts: 2 · Views: 1.1k · Jun ’21