Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Post · Replies · Boosts · Views · Activity

Climate change and streaming: human behavior needs to change
Every time you listen to music, you are streaming from a server that is frequently powered by coal or gas and only rarely by green energy. As an iOS developer, I am asking Apple to let us offer downloads of audio files inside our audio apps. The goal is not to resell the audio or violate authors' rights; YouTube already offers this. It is time to find ways to reduce energy consumption, especially in data delivery and the pointless re-streaming of the same song again and again. Is it possible to change the API to reflect this reality?
0 replies · 0 boosts · 481 views · Feb ’24
CoreAudio server plugin: 24bit big endian audio streams
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the CoreAudio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers for our CoreAudio server plugin. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to CoreAudio? Thanks, hagen.
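Server plugins are typically written in C, but the same AudioStreamBasicDescription fields apply either way. Below is a minimal sketch, in Swift, of how one might describe 2-channel, packed, signed-integer, big-endian 24-bit PCM; whether the HAL actually accepts this physical format is exactly the open question here, and the 48 kHz sample rate and channel count are assumptions for illustration.

import CoreAudio

// Sketch only: 2-channel, 24-bit, packed, big-endian, signed-integer PCM.
var bigEndian24 = AudioStreamBasicDescription(
    mSampleRate: 48_000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsBigEndian
        | kAudioFormatFlagIsSignedInteger
        | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 6,    // 2 channels * 3 bytes per sample
    mFramesPerPacket: 1,
    mBytesPerFrame: 6,
    mChannelsPerFrame: 2,
    mBitsPerChannel: 24,
    mReserved: 0
)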
0 replies · 0 boosts · 658 views · Feb ’24
Resetting categoryOptions when setting the mode in Swift AVAudioSession
When setting the mode during the configuration of an audio session in Swift, the previously configured categoryOptions get reset. For example, if you call setMode as shown below, all previously set categoryOptions are cleared:

try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .videoChat, options: [.allowBluetooth, .defaultToSpeaker])
try AVAudioSession.sharedInstance().setMode(.voiceChat)

If you need to change the mode while keeping the categoryOptions, you have to call setCategory again. I haven't found the exact reason for this behavior, and its practical impact on an application isn't clear to me either. Why do you think this handling is in place?
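A minimal sketch of the workaround mentioned above, assuming the goal is to end up in voiceChat mode with the original options intact: pass the category, mode, and options together in a single setCategory(_:mode:options:) call instead of calling setMode(_:) separately.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    // Setting category, mode, and options in one call keeps the options
    // from being cleared by a later setMode(_:).
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)
} catch {
    print("Audio session configuration failed: \(error)")
}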
0 replies · 0 boosts · 603 views · Feb ’24
AirPods Pro 2 A2DP option issue
Hello, I am having an issue setting my AVAudioSession output to a Bluetooth A2DP device. I want to use the built-in mic for input and an A2DP device (AirPods Pro 2) for the output route. Whenever I set the .allowBluetoothA2DP option on my AVAudioSession, the output changes to the speaker. The mode is .default and the category is .playAndRecord. If I do the same with AirPods Pro 1, the output is routed to the AirPods Pro 1 as expected. The trouble only appears when I use AirPods Pro 2 with an iPhone on iOS 17; there seems to be no issue on iOS versions below 17. Has anyone run into this? Thank you in advance.
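For reference, a sketch of the configuration described above, plus a dump of the current route so you can see where input and output actually land after activation (the session setup is standard API; the route printout is just for debugging).

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetoothA2DP])
    try session.setActive(true)
    // Inspect the actual route after activation.
    for input in session.currentRoute.inputs {
        print("Input:  \(input.portType.rawValue) - \(input.portName)")
    }
    for output in session.currentRoute.outputs {
        print("Output: \(output.portType.rawValue) - \(output.portName)")
    }
} catch {
    print("Audio session setup failed: \(error)")
}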
0 replies · 1 boost · 450 views · Feb ’24
hdiutil failure (bug?)
hdiutil bug? When making a DMG image of the whole content of a user profile (srcfolder = /Users/user1) using hdiutil, the program fails with:

/Users/user1/Library/VoiceTrigger: Operation not permitted
hdiutil: create failed - Operation not permitted

The complete command used was:

sudo hdiutil create -srcfolder /Users/user1 -skipunreadable -format UDZO /Volumes/testdmg/test.dmg

And, of course, the user had local admin rights. I was using Sonoma 14.2.1 on a MacBook Pro (Intel, T2). What I would have expected, assuming that /VoiceTrigger cannot be copied for whatever reason, is for hdiutil to skip that file or folder and continue, then produce a log at the end listing the files/folders that were not included and the reason for their exclusion. The fact that hdiutil just stops immediately looks like a bug to me. Or what else could explain the behavior described?
3 replies · 1 boost · 1.1k views · Feb ’24
The lack of a 24-megapixel mode is the iPhone 14 Pro Max's biggest shortcoming
The iPhone 14 Pro Max sharpens its 12-megapixel photos so aggressively that they are hard to look at, while 48-megapixel shots take up too much space; the line starts at 128 GB, and that doesn't leave much room once you also shoot video. The biggest problem right now is that there is no 24-megapixel mode, which greatly affects the everyday shooting experience. People around me who use the 14 Pro series agree that the missing 24-megapixel option is the biggest issue at the moment.
0 replies · 0 boosts · 398 views · Feb ’24
My 14 Pro Max urgently needs a 24-megapixel mode
A 24-megapixel mode is the most useful for daily photography; 48 megapixels is only worth using for landscapes or special shots, since it takes up too much storage. The biggest problem with the 14 Pro Max now is that its photography falls short: 12 megapixels has long lagged far behind Android. Adding a 24-megapixel mode would be far more valuable than another iOS update, and the experience would be directly doubled.
0 replies · 0 boosts · 444 views · Feb ’24
Unable to List Voice Folder - AVSpeechSynthesizer
I am trying to use the speech synthesizer to speak the pronunciation of a word in British English rather than play a local audio file, which is what I had before. However, I keep getting this in the debugger:

#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
IPCAUClient.cpp:129 IPCAUClient: bundle display name is nil
Unable to list voice folder

Here is my code, any suggestions?

func playSampleAudio() {
    let speechSynthesizer = AVSpeechSynthesizer()
    let speechUtterance = AVSpeechUtterance(string: currentWord)

    // Search for a voice with a British English accent.
    let voices = AVSpeechSynthesisVoice.speechVoices()
    var foundBritishVoice = false
    for voice in voices {
        if voice.language == "en-GB" {
            speechUtterance.voice = voice
            foundBritishVoice = true
            break
        }
    }
    if !foundBritishVoice {
        print("British English voice not found. Using default voice.")
    }

    // Configure the utterance's properties as needed.
    speechUtterance.rate = AVSpeechUtteranceDefaultSpeechRate
    speechUtterance.pitchMultiplier = 1.0
    speechUtterance.volume = 1.0

    // Speak the word.
    speechSynthesizer.speak(speechUtterance)
}
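As an aside, a possible simplification (not necessarily a fix for the "Unable to list voice folder" messages): ask for the en-GB voice directly instead of scanning speechVoices(). The word below is a placeholder.

import AVFoundation

// Direct lookup instead of looping over speechVoices(); the initializer
// returns nil if no British English voice is installed on the device.
let utterance = AVSpeechUtterance(string: "colour")
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")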
3 replies · 1 boost · 1.7k views · Feb ’24
Using Music API previews in my app
I have an app that allows users to send messages to each other via a link. I'm receiving many requests from my users to add songs. Can I use the Apple Music API to add 30-second song previews along with the user's message? If users want to listen to the full song, there will be a link to the full song on Apple Music. My app contains ads and subscriptions.
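On the technical side, fetching a preview URL is straightforward with MusicKit; whether the ads-and-subscriptions business model satisfies the Apple Music API terms is a separate licensing question. A minimal sketch, assuming MusicKit authorization has already been granted:

import Foundation
import MusicKit

// Searches the Apple Music catalog and returns the 30-second preview URL
// of the top matching song, if one exists.
func previewURL(matching term: String) async throws -> URL? {
    var request = MusicCatalogSearchRequest(term: term, types: [Song.self])
    request.limit = 1
    let response = try await request.response()
    return response.songs.first?.previewAssets?.first?.url
}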
0 replies · 0 boosts · 420 views · Feb ’24
AVAudioEngine: Is there a way to play audio at full volume while having an active input tap?
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

try session.setCategory(.playAndRecord, mode: .default, options: [])
try session.setActive(true, options: .notifyOthersOnDeactivation)
try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

filePlayerNode   --> engine.mainMixerNode
bufferPlayerNode --> engine.mainMixerNode
engine.mainMixerNode --> engine.outputNode
// bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine, since the buffers can be collected into a file and play back correctly, and the recognizer works as well. But when I try to play the live audio by sending the buffers to the bufferPlayer on this or another device, the audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've experimented with the AVAudioSession category options, with separate AVAudioEngines, and much more, yet I still haven't figured this out. I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly.

The ability to play and record simultaneously is a basic feature of phones; when on speaker mode, a phone doesn't need to behave like a walkie-talkie. It's hard to believe that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback loops) can be dealt with by a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without it, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this with AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
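A hedged sketch of one direction to try: run the session in voice-chat mode with the speaker as the default receiver, and enable the input node's built-in voice processing (echo cancellation), which exists precisely for simultaneous record-and-playback. The mode and option choices here are assumptions, not a confirmed fix for the low-volume symptom.

import AVFoundation

func configureDuplexAudio(engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.defaultToSpeaker, .allowBluetooth])
    try session.setActive(true, options: .notifyOthersOnDeactivation)

    // Enables the system's voice processing (echo cancellation and gain
    // control) on the input/output path; set this before starting the engine.
    try engine.inputNode.setVoiceProcessingEnabled(true)
}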
1 reply · 0 boosts · 896 views · Feb ’24
Random EXC_BAD_ACCESS using AVFoundation
My app uses AVFoundation to pronounce some words. Running the app from Xcode, either on a simulator or a device, I frequently get this crash at start-up: AXSpeech (13): EXC_BAD_ACCESS (code=EXC_I386_GPFLT). It seems to occur randomly, maybe 20-30% of the time I launch the app. When it doesn't crash, audio works as expected. When launched directly on a device, it has never crashed (at least, so far). Here's the code that outputs speech.

Declared at the top level of the View struct:

@State var synth = AVSpeechSynthesizer()

In the View, as part of a Button's closure:

let utterance = AVSpeechUtterance(string: answer)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
synth.speak(utterance)

Any idea how to stop this? It doesn't stop development, but it sure slows it down, often requiring multiple app launches.
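A hedged guess at a mitigation, not a confirmed fix: keep the synthesizer inside a reference type that the view owns, so SwiftUI cannot recreate or discard it while speech is in flight. The Speaker type below is illustrative, not from the original post.

import SwiftUI
import AVFoundation

// Illustrative wrapper: the synthesizer lives in a long-lived reference type.
final class Speaker: ObservableObject {
    private let synth = AVSpeechSynthesizer()

    func say(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synth.speak(utterance)
    }
}

// In the View:
//   @StateObject private var speaker = Speaker()
//   Button("Speak") { speaker.say(answer) }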
0 replies · 0 boosts · 384 views · Feb ’24
FPS key-rotation: use-case
We are aiming to apply key rotation to the FairPlay Streaming (FPS) service for our content streaming service. It is not clear from the specification ("FairPlay Streaming Programming Guide.pdf") how to apply this feature and have the FPS DRM key server (KSM) send multiple keys at a time in one license (CKC) at the key-rotation moment, without using the rental and lease key features. Note that we don't use offline playback capabilities for our streams.
2 replies · 0 boosts · 691 views · Feb ’24
Fisheye Projection
Does anyone have any knowledge or experience with Apple's fisheye projection type? I'm guessing that it's as the name implies: a circular capture in a square frame (encoded in MV-HEVC) that is de-warped during playback. It'd be nice to be able to experiment with this format without guessing/speculating on what to produce.
2 replies · 7 boosts · 820 views · Feb ’24
Cannot read playbackStoreID property from MPMediaItem in Mac target
Running on a Mac (Catalyst) target or on Apple Silicon (Designed for iPad), just accessing the playbackStoreID from an MPMediaItem shows this error in the console:

-[ITMediaItem valueForMPMediaEntityProperty:]: Unhandled MPMediaEntityProperty subscriptionStoreItemAdamID.

The value returned is always "". This works as expected on iOS and iPadOS, returning a valid playbackStoreID.

import SwiftUI
import MediaPlayer

@main
struct PSIDDemoApp: App {
    var body: some Scene {
        WindowGroup {
            Text("playbackStoreID demo")
                .task {
                    let authResult = await MPMediaLibrary.requestAuthorization()
                    if authResult == .authorized {
                        if let item = MPMediaQuery.songs().items?.first {
                            let persistentID = item.persistentID
                            let playbackStoreID = item.playbackStoreID // <--- Here
                            print("Item \(persistentID), \(playbackStoreID)")
                        }
                    }
                }
        }
    }
}

Xcode 15.1, also tested with Xcode 15.3 beta 2. macOS Sonoma 14.3.1. FB13607631
1 reply · 1 boost · 898 views · Feb ’24
Spatial Video In Apple Vision Pro
HELP! How can I play a spatial video in my own visionOS app the way the built-in Photos app does? I've used the AVKit API to play a spatial video in the Xcode visionOS simulator, following the official developer documentation. The video plays, but it looks different from the way Photos presents it: in Photos the edges of the video appear soft and feathered, while in my app the edges are hard. How can I play spatial video in my own app with the same effect as Photos?
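For reference, a minimal sketch of the plain AVKit path the post describes ("SpatialClip.mov" is a placeholder asset name, and the surrounding app scaffolding is assumed). This baseline plays the MV-HEVC file but does not reproduce the feathered edge treatment that Photos applies, which is exactly what the question is about.

import Foundation
import SwiftUI
import AVKit

struct SpatialVideoView: View {
    @State private var player = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .onAppear {
                // Placeholder asset; replace with your own spatial (MV-HEVC) clip.
                if let url = Bundle.main.url(forResource: "SpatialClip", withExtension: "mov") {
                    player.replaceCurrentItem(with: AVPlayerItem(url: url))
                    player.play()
                }
            }
    }
}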
1 reply · 0 boosts · 1.2k views · Feb ’24
App is excluded from capture but still triggers frame capture
Hello, I'm playing with ScreenCaptureKit. My scenario is very basic: I capture frames and assign them to a CALayer to display in a view (for previewing), basically what Apple's sample app does. I have two screens, and I capture only one of them, which has no app windows on it. My app also excludes itself from capture.

When I place my app on the screen that is not being captured, I observe that most didOutputSampleBuffer calls receive frames with Idle status, which is expected for a screen that is not updating. However, if I bring my capturing app (which, again, is excluded from capture) onto the captured screen, the frames start arriving with Complete status, i.e. holding pixel buffers. That is not what I expect: from the capture's perspective the screen is still not updating, since the app is excluded. The preview the app displays proves that, showing an empty desktop.

So it seems that updating a CALayer triggers a screen-capture frame regardless of whether the app is included or excluded. This looks like a bug to me, and it can lead to a noticeable waste of resources and battery when frames are not just displayed on screen but also processed and/or sent over a network. I'm also observing another issue due to this behavior, where capture hangs when queueDepth is set to 3 in this same scenario (but I'll describe that separately). Please advise whether I should file a bug somewhere, or whether there is a rational explanation for this behavior. Thank you.
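For context, a sketch of the self-exclusion setup described above, assuming the stream configuration and output handling already exist elsewhere in the app.

import Foundation
import ScreenCaptureKit

// Builds a content filter that captures the first display while excluding
// this app's own windows from the stream.
func makeSelfExcludingFilter() async throws -> SCContentFilter {
    let content = try await SCShareableContent.excludingDesktopWindows(false,
                                                                       onScreenWindowsOnly: true)
    guard let display = content.displays.first else {
        throw NSError(domain: "CaptureDemo", code: 1)
    }
    let ownApp = content.applications.filter {
        $0.bundleIdentifier == Bundle.main.bundleIdentifier
    }
    return SCContentFilter(display: display,
                           excludingApplications: ownApp,
                           exceptingWindows: [])
}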
0 replies · 0 boosts · 531 views · Feb ’24