We have a video conferencing app and are configuring AVAudioSession with the .videoChat mode so that we get echo cancellation.
let options: AVAudioSession.CategoryOptions = [.mixWithOthers, .defaultToSpeaker, .allowBluetooth]
try session.setCategory(.playAndRecord, mode: .videoChat, options: options)
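For completeness, a minimal sketch of how the session is configured and then activated (assuming the shared instance; the do/catch and logging are illustrative):
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .videoChat, options: options)
    // The session must be activated for the category/mode to take effect.
    try session.setActive(true)
} catch {
    print("AVAudioSession configuration failed: \(error)")
}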
We also want to record audio samples locally using the AVAssetWriter API. The AVAssetWriter setup is as follows:
let audioAssetWriter = try AVAssetWriter(outputURL: fileURL, fileType: .m4a)
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: Int(audioParameters.samplingRate),
    AVEncoderBitRateKey: Int(audioParameters.bitrate),
    AVNumberOfChannelsKey: Int(audioParameters.channels),
    AVEncoderAudioQualityKey: AVAudioQuality.max.rawValue
]
let assetWriterAudioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
assetWriterAudioInput.expectsMediaDataInRealTime = true
audioAssetWriter.add(assetWriterAudioInput)
I am seeing that on the iPhone 14 Pro, assetWriterAudioInput.append() is failing. Everything works fine on the iPhone 13 and older devices.
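For reference, here is a minimal sketch of the append path where the failure shows up (sampleBuffer stands in for the CMSampleBuffer delivered by our audio capture callback; the logging is illustrative):
if assetWriterAudioInput.isReadyForMoreMediaData {
    if !assetWriterAudioInput.append(sampleBuffer) {
        // append(_:) returning false usually means the writer itself has failed,
        // so we log its status and error to see why.
        print("Audio append failed, status: \(audioAssetWriter.status.rawValue), error: \(String(describing: audioAssetWriter.error))")
    }
}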
Is anyone else seeing this issue on the iPhone 14 Pro?
The following call to heifRepresentation is causing occasional crashes on iOS 16:
context.heifRepresentation(of: image, format: CIFormat.RGBA8, colorSpace: perceptualColorSpace!, options: [kCGImageDestinationLossyCompressionQuality as CIImageRepresentationOption: 0.6])
When we switch to context.jpegRepresentation:
return context.jpegRepresentation(of: image, colorSpace: perceptualColorSpace!, options: [kCGImageDestinationLossyCompressionQuality as CIImageRepresentationOption : compressionFactor])
The app runs fine.
Also, context.heifRepresentation runs fine on iOS 15.
Can someone please help?
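In case it helps, here is a sketch of the fallback we are experimenting with (names are illustrative; colorSpace is the already-unwrapped perceptual color space and the quality matches the 0.6 used above). It falls back to jpegRepresentation whenever heifRepresentation returns nil:
func encodedData(for image: CIImage, using context: CIContext, colorSpace: CGColorSpace) -> Data? {
    let options: [CIImageRepresentationOption: Any] =
        [kCGImageDestinationLossyCompressionQuality as CIImageRepresentationOption: 0.6]
    // Try HEIF first; heifRepresentation returns nil rather than throwing when encoding fails.
    if let heif = context.heifRepresentation(of: image, format: .RGBA8, colorSpace: colorSpace, options: options) {
        return heif
    }
    // Fall back to JPEG, which has been stable for us on iOS 16.
    return context.jpegRepresentation(of: image, colorSpace: colorSpace, options: options)
}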
Voice Isolation does a great job with noise suppression when the user is holding the phone in hand (the FaceTime use case). But when the phone is about 4 feet away from the user, Voice Isolation quality drops substantially, to the point where we find it better not to use it at all.
Our use case requires the user to mount the phone on a tripod and sit approximately 4 feet away from the camera. In this setup we see the worst performance from Voice Isolation, presumably because of the heavy signal processing and the weaker original signal to begin with.
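Because of this, we would rather detect when Voice Isolation is enabled and steer users away from it in the tripod setup. A minimal sketch, assuming iOS 15+ and that surfacing the system microphone-mode picker is acceptable (the mode is user-controlled, so it cannot be changed programmatically):
if AVCaptureDevice.activeMicrophoneMode == .voiceIsolation {
    // We can't switch the mode ourselves, so we bring up the system UI
    // and ask the user to pick Standard for the tripod scenario.
    AVCaptureDevice.showSystemUserInterface(.microphoneModes)
}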
We are using AVCaptureMetadataOutput to detect face and body rects. The face rect is shown in green and the body rect in blue. We would like a body rect that encompasses the entire human body, as shown by the red box, i.e. an absolute body rect that includes the hands, feet, arms, etc.
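For reference, a sketch of roughly how we configure the metadata output (captureSession and metadataQueue are placeholders; the delegate then converts and draws the rects):
let metadataOutput = AVCaptureMetadataOutput()
if captureSession.canAddOutput(metadataOutput) {
    captureSession.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(self, queue: metadataQueue)
    // .face drives the green rect, .humanBody the blue rect described above.
    metadataOutput.metadataObjectTypes = [.face, .humanBody]
}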
We are a VoIP app, so we are automatically enrolled in microphone modes and video effects such as Portrait video. We would like to opt out of these features. Is there any way to do that?
In my understanding, there is a way for non-VoIP apps to opt in, but no way for VoIP apps to opt out.
Continuity Camera is a way to stream raw video and metadata from an iPhone to a Mac. Is it possible for an iPhone local recording app to use Continuity Camera to stream a preview from the iPhone to a Mac?
Can Continuity Camera be made available on iPad, so that one can stream video/metadata to an iPad screen? (The use case is needing a better camera when the user does not have a MacBook.)