Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

How to edit gain map image to preserve HDR in Live Photo
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:

    let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
    let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])

    // edit them...

    try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])

I also support editing the still photo in Live Photos. To do this you create a PHLivePhotoEditingContext, set the frameProcessor block which gives you a CIImage that I edit when the frame.type is .photo, then you create a PHContentEditingOutput and call saveLivePhoto. I'm not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don't see any difference between these images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
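For reference, a minimal sketch of the Live Photo editing flow described above, assuming the input comes from PHAsset's requestContentEditingInput and using a placeholder filter; note the frame processor only hands back an SDR CIImage, with no documented gain-map counterpart:

    import Photos
    import CoreImage

    // Sketch only: `input` is assumed to come from PHAsset.requestContentEditingInput(with:completionHandler:).
    func editStillFrame(of input: PHContentEditingInput,
                        completion: @escaping (PHContentEditingOutput?) -> Void) {
        guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: input) else {
            completion(nil)
            return
        }
        context.frameProcessor = { frame, _ in
            var image = frame.image
            if frame.type == .photo {
                // Edit the still frame here. Only this SDR CIImage is provided;
                // there is no parameter for an HDR gain map image in this block.
                image = image.applyingFilter("CIPhotoEffectChrome")
            }
            return image
        }
        let output = PHContentEditingOutput(contentEditingInput: input)
        context.saveLivePhoto(to: output) { success, _ in
            completion(success ? output : nil)
        }
    }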
Replies: 1 · Boosts: 0 · Views: 247 · Activity: Oct ’24
Connect 2 mono nodes as L/R input for a stereo node
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect 2 mono nodes as left/right input to a stereo node. I was successful in splitting the input audio to 2 mono nodes using AVAudioConnectionPoint and channelMap. But I can't figure out how to connect them back to a stereo node. The use case for this is that I'm trying to process the left/right channels with separate audio units. Any ideas? I'll post the code I have so far:

    let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!

    let leftInputMixer = AVAudioMixerNode()
    let rightInputMixer = AVAudioMixerNode()
    let leftOutputMixer = AVAudioMixerNode()
    let rightOutputMixer = AVAudioMixerNode()
    let channelMixer = AVAudioMixerNode()

    [leftInputMixer, rightInputMixer, leftOutputMixer, rightOutputMixer, channelMixer].forEach { engine.attach($0) }

    let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
    let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)

    plugin.leftInputMixer = leftInputMixer
    plugin.rightInputMixer = rightInputMixer
    plugin.leftOutputMixer = leftOutputMixer
    plugin.rightOutputMixer = rightOutputMixer
    plugin.channelMixer = channelMixer

    leftInputMixer.auAudioUnit.channelMap = [0]
    rightInputMixer.auAudioUnit.channelMap = [1]

    engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)

    // Process right channel, pass through left channel
    engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
    engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
    engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)

    // Mix back to stereo?
    engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
    engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
Replies: 1 · Boosts: 0 · Views: 168 · Activity: 2w
Crash when presenting Camera via Web View in iOS 18.2 Beta - WebCore::AVVideoCaptureSource::create
We are experiencing thousands of crashes in our application when attempting to present the camera through a web view. The app crashes during this process, and the crash logs point to WebCore::AVVideoCaptureSource::create and WebCore::RealtimeMediaSourceCenter::getUserMediaDevices. This issue has only been observed in iOS 18.2 beta versions (beta 1 - 22C5109p, beta 2 - 22C5125e, beta 3 - 22C5131e). On iOS versions below 18.2 the functionality works, and we haven't identified any correlation with specific device models. The problem seems to stem from changes to the WebCore framework in these 18.2 beta releases. We kindly request a review and fix for this issue in upcoming beta releases to restore functionality, and would appreciate knowing about any workarounds or adjustments we can implement in the interim. Thank you for your attention to this matter.
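For context, a minimal sketch of the kind of setup involved (illustrative names, not the reporter's code): a WKWebView hosting a page that calls getUserMedia, with the app granting capture permission through WKUIDelegate:

    import UIKit
    import WebKit

    final class CameraWebViewController: UIViewController, WKUIDelegate {
        private lazy var webView: WKWebView = {
            let config = WKWebViewConfiguration()
            config.allowsInlineMediaPlayback = true
            let view = WKWebView(frame: .zero, configuration: config)
            view.uiDelegate = self
            return view
        }()

        override func loadView() {
            // The loaded page is assumed to request the camera via getUserMedia.
            view = webView
        }

        // iOS 15+: decide on camera/microphone capture requested by the page.
        func webView(_ webView: WKWebView,
                     requestMediaCapturePermissionFor origin: WKSecurityOrigin,
                     initiatedByFrame frame: WKFrameInfo,
                     type: WKMediaCaptureType,
                     decisionHandler: @escaping (WKPermissionDecision) -> Void) {
            decisionHandler(.grant)
        }
    }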
Replies: 2 · Boosts: 0 · Views: 273 · Activity: 2w
Live Photo not applying as a live wallpaper on iOS 17+
Hi fellow iOS developers! 👋 I've written Swift code that converts a video (from a URL) into a Live Photo after downloading it. The conversion process seems fine, but when I try to set the generated Live Photo as a wallpaper on iOS 17+, it shows the message 'Motion not Available.' Has anyone else experienced this issue, or does anyone know why this might be happening? Could it be related to changes in iOS 17 Live Photo handling or to the generated file structure? Any help or suggestions would be greatly appreciated! 🙏
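For reference, a rough sketch of the saving step usually involved in such a pipeline, assuming stillURL and pairedVideoURL already carry the matching content identifier written during conversion (placeholder names):

    import Photos

    // Sketch: save a paired still image and video to the library as a Live Photo.
    // The two files are assumed to share the same content identifier in their metadata.
    func saveLivePhoto(stillURL: URL, pairedVideoURL: URL,
                       completion: @escaping (Bool, Error?) -> Void) {
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, fileURL: stillURL, options: nil)
            let options = PHAssetResourceCreationOptions()
            options.shouldMoveFile = false
            request.addResource(with: .pairedVideo, fileURL: pairedVideoURL, options: options)
        }, completionHandler: completion)
    }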
Replies: 1 · Boosts: 1 · Views: 98 · Activity: 2w
Generate a recommendation queue after selecting a song in MusicKit
Hi, I'm developing a MusicKit integration in my iOS app, and I want to select songs from recently played (done). The problem is that the queue is not auto-generated, so the user has to select another song if they want playback to continue. Is there any method to ask for similar or recommended songs based on a song the user has already selected? That would be really great :) Also, if you know: is there any publisher for the playback position/duration, or do I need to use a timer? Thanks. David.
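As a reference for the queueing part, a sketch under the assumption that the recently played items are available as MusicKit Song values; note MusicKit exposes no publisher for the playback position, so it is polled here with a timer:

    import Foundation
    import MusicKit

    // Sketch: queue the selected song followed by the rest of the list so playback
    // keeps going, then poll the position (there is no publisher for playback time).
    func play(_ selected: Song, from recentlyPlayed: [Song]) async throws {
        let player = ApplicationMusicPlayer.shared
        player.queue = ApplicationMusicPlayer.Queue(for: recentlyPlayed, startingAt: selected)
        try await player.play()

        DispatchQueue.main.async {
            Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
                print("position:", player.playbackTime, "of", selected.duration ?? 0)
            }
        }
    }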
Replies: 1 · Boosts: 1 · Views: 135 · Activity: 2w
HEIC image incompatibility issue
I am a developer working on iOS apps. In a demo I planned to replace the local PNG images with HEIC images, but the actual test results showed abnormalities on one device, while the other test devices displayed them normally. The HEIC images were converted with the built-in image conversion feature on the Mac. I tested multiple HEIC images, but none of them were displayed and the image information returned nil; PNG images, however, display normally. Device information:
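A small diagnostic sketch (an assumption about how to narrow this down, not the poster's code) that checks whether the failing device can decode the file at all, via UIKit and ImageIO:

    import UIKit
    import ImageIO

    func inspectHEIC(at url: URL) {
        // Does UIKit decode it?
        let uiImage = UIImage(contentsOfFile: url.path)
        print("UIImage:", uiImage as Any)

        // Does ImageIO recognize the container and report image properties?
        if let source = CGImageSourceCreateWithURL(url as CFURL, nil) {
            print("type:", CGImageSourceGetType(source) as Any)
            print("properties:", CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as Any)
        } else {
            print("CGImageSource could not open the file")
        }
    }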
Replies: 6 · Boosts: 0 · Views: 352 · Activity: Oct ’24
Why does recommendedVideoSettings sometimes have no recommended settings?
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings. I found that it sometimes returns nil. Here are my test results:

HEVC .mov with activeColorSpace sRGB: 60 FPS -> ok, 120 FPS -> ok
HEVC .mov with activeColorSpace displayP3_HLG: 60 FPS -> nil, 120 FPS -> nil
H.264 .mov: 30 FPS -> ok, 60 FPS -> nil, 120 FPS -> nil

So if no recommended settings are provided, and this behavior isn't documented, how is a developer supposed to use it?
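For reference, a sketch of how this property is typically read, with a hand-rolled fallback when it returns nil (the fallback values are assumptions, not documented defaults):

    import AVFoundation

    func writerVideoSettings(for output: AVCaptureVideoDataOutput,
                             dimensions: CMVideoDimensions) -> [String: Any] {
        // Ask for HEVC settings suited to a QuickTime movie; this can come back nil.
        if let recommended = output.recommendedVideoSettings(forVideoCodecType: .hevc,
                                                             assetWriterOutputFileType: .mov) {
            return recommended
        }
        // Fallback when no recommendation is available (illustrative values).
        return [
            AVVideoCodecKey: AVVideoCodecType.hevc,
            AVVideoWidthKey: Int(dimensions.width),
            AVVideoHeightKey: Int(dimensions.height)
        ]
    }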
Replies: 0 · Boosts: 0 · Views: 167 · Activity: 2w
Error Domain=NSOSStatusErrorDomain Code=-16384, -16155, -16512
I've built a custom media player using AVSampleBufferAudioRenderer and AVSampleBufferRenderSynchronizer, and overall it works great! However, I've noticed some unusual logs popping up:

Domain: NSOSStatusErrorDomain
Error codes: -16384, -16155, -16512

The -16512 error keeps happening repeatedly for one of our users, preventing them from playing any media at all. I've searched around but can't find any documentation explaining what these errors mean. Has anyone run into this issue or have any suggestions? Any help would be hugely appreciated! Thanks!
Replies: 0 · Boosts: 0 · Views: 200 · Activity: 2w
Some album artwork from MPMediaItem display as nil
Hey there, I'm trying to display all of a user's albums using the MediaPlayer library. I'm getting many albums returning nil, but I know artwork exists because they show up in the default Music app. There doesn't seem to be much rhyme or reason for what shows up and what doesn't. All downloaded albums display artwork, but some cloud album artwork displays as well. Here's the code I'm using to debug this:

    let query = MPMediaQuery.albums()
    if let albumCollections = query.collections {
        albums = albumCollections
    }
    for album in albums {
        let artwork = album.representativeItem?.artwork
        print(artwork, artwork?.image(at: CGSize(width: 100, height: 100)))
    }

Any help would be greatly appreciated. Thanks!
Replies: 2 · Boosts: 1 · Views: 606 · Activity: Jan ’24
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'.
I have an app that edits photos in your library. When I call

    try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)

the following is logged to the console:

    writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.

What does that mean, and how can I resolve it?
Xcode Version 16.0 (16A242d), iOS 18.1 (22B82)
Replies: 3 · Boosts: 8 · Views: 548 · Activity: Oct ’24
MPRemoteCommand play conflicting with .playPause gesture
We develop a video playback app on Apple TV which has the two following features:

- Its content browsing screen has installed a gesture recognizer for presses on the PlayPause Siri Remote button in order to directly launch a playback. The gesture recognizer is attached to the content browsing UIViewController view.
- It presents its own custom playback UI with an AVPlayerLayer for the video and supports MPNowPlayingSession in order to publish current playback information and respond to remote commands. It also supports switching between fullscreen and Picture in Picture playback.

Both features work fine, i.e. the playback is launched when pressing the PlayPause Siri Remote button and, during playback, the playback info is properly advertised on other devices and remote commands are also triggered as expected. However, when pressing the PlayPause Siri Remote button while the video is playing in PiP, the "pause" remote command is sometimes triggered instead of the .playPause gesture recognizer. The issue may not occur the first time, but it does for subsequent PlayPause presses. Navigating a bit in the app UI seems to help prevent the issue from occurring. Finally, the issue only occurs if the video is playing; if the video is paused, the PlayPause Siri Remote button gesture is always recognized instead of the remote command. Please note that, before using MPNowPlayingSession (and the corresponding MPRemoteCommandCenter), the app was using the default MPRemoteCommandCenter to support remote commands and the issue did not occur. We don't reproduce this issue with the Apple TV app, so there's probably something we are not doing right. Does anyone have any clue?
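For context, a minimal sketch of the kind of PlayPause gesture recognizer setup described for the browsing screen (illustrative only, not the app's actual code):

    import UIKit

    final class ContentBrowsingViewController: UIViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // Recognize presses of the Siri Remote's play/pause button.
            let playPause = UITapGestureRecognizer(target: self, action: #selector(playPausePressed))
            playPause.allowedPressTypes = [NSNumber(value: UIPress.PressType.playPause.rawValue)]
            view.addGestureRecognizer(playPause)
        }

        @objc private func playPausePressed() {
            // Directly launch playback of the focused item here.
        }
    }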
Replies: 0 · Boosts: 0 · Views: 197 · Activity: 2w
MPNowPlayingInfoCenter without playing music
Hello! I am working on an app connected to an external streamer. I would like to display the currently playing song on the Lock Screen. I tried updating the information in MPNowPlayingInfoCenter, but I need to play a sound on my iPhone for the controls to be displayed. Is there a way to do it without playing a sound? If not, would playing a silent sound be the only solution, and would that be accepted by Apple review? :-/ Thank you. Frederic
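For reference, a sketch of the kind of Now Playing update being described (values are placeholders); in practice the Lock Screen controls generally only appear while the app has an active, audible playback session:

    import MediaPlayer

    func publishNowPlaying(title: String, artist: String,
                           duration: TimeInterval, position: TimeInterval) {
        let info: [String: Any] = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyArtist: artist,
            MPMediaItemPropertyPlaybackDuration: duration,
            MPNowPlayingInfoPropertyElapsedPlaybackTime: position,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0
        ]
        MPNowPlayingInfoCenter.default().nowPlayingInfo = info
    }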
Replies: 2 · Boosts: 0 · Views: 372 · Activity: Jul ’24
Audio sometimes disappears when streaming high-bitrate video in AVPlayer
Hello, I use AVPlayer in my project to play network movies. Most movies play normally, but I found that the sound sometimes disappears when I play a specific 4K network video stream. The video continues playing, but the audio stops after the video has played for a while. If I pause the player and then resume, the sound comes back but disappears again after several seconds.

Checking the AVPlayerItem status:
isPlaybackLikelyToKeepUp == true
isPlaybackBufferEmpty == false
player.volume > 0

According to the values above, it does not seem to be caused by an empty playback buffer or a volume issue. I am quite confused by this situation.

Movie information:
Video: AVC (Advanced Video Codec), High L5.1 profile, codec ID avc1, variable bit rate 100.0 Mb/s, 3840 x 2160 pixels, 16:9 display aspect ratio, constant frame rate 29.970 (30000/1001) FPS
Audio: AAC LC (Advanced Audio Codec Low Complexity), codec ID mp4a-40-2, duration 5 min 19 s, constant bit rate 192 kb/s (nominal 48.0 kb/s), 2 channels (L R), 48.0 kHz sampling rate, 46.875 FPS (1024 SPF)

Does anyone know if AVPlayer has limitations when playing high-bitrate movie streams, and are there any solutions?
Replies: 0 · Boosts: 0 · Views: 152 · Activity: 2w
Research Purpose SensorKit Access
Dear Apple, I am currently working on a mental-health-related research project supported by South Korean government funding. In addition to SensorKit access, we are working with data from the microphone. Is there any contact point, aside from the SensorKit access application, to discuss the possibility of research data collection from a restricted sample of participants?
Replies: 0 · Boosts: 0 · Views: 106 · Activity: 2w
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size for our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let input = request.sourceImage // <- this still has the video's original size
        // ...
    })
    composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be way better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
Replies: 0 · Boosts: 1 · Views: 170 · Activity: 2w
Managing Excessive Memory Usage with AVAssetReader and AVAssetWriter
Hello, I'm a deaf-blind programmer. I'm experiencing memory issues in my app. Essentially, I'm writing a video. In this output video, I get content from two sources. The first source is an already recorded video of 18 seconds (just for testing), which will be shown at the beginning of the output video. The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video. So, in the end, the output video should be: the 18-second video, then the array of photos as video frames with the buffers from AVSpeechSynthesizer.write() as audio. However, my app crashes as soon as I start the first process. I'm using AVAssetWriter to write the video and AVAssetReader to read the video. Below is the code where I get the CMSampleBuffers. I'd like an example of how to add the 18-second video to the beginning of the output video. It doesn't need to be a big piece of code. Here it is:

    // Variables
    var audioReaderBuffers = [CMSampleBuffer]()
    var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()

    // Get the CMSampleBuffers of a video asset
    if let videoURL = videoURL {
        let videoAsset = AVAsset(url: videoURL)
        Task {
            let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
            let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
            let reader = try AVAssetReader(asset: videoAsset)

            let videoSettings = [
                kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
                kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
                kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
            ] as [String: Any]
            let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)

            let audioSettings = [
                AVFormatIDKey: kAudioFormatLinearPCM,
                AVSampleRateKey: 44100,
                AVNumberOfChannelsKey: 2
            ] as [String: Any]
            let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)

            reader.add(readerVideoOutput)
            reader.add(readerAudioOutput)
            reader.startReading()

            // Video CMSampleBuffers
            while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
                autoreleasepool {
                    if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                        let pixBuf = imgBuffer as CVPixelBuffer
                        let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                        videoReaderBuffers.append((frame: pixBuf, time: pTime))
                    }
                }
            }
        }
    }
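Not the poster's code, but a rough sketch of one way the collected 18-second video frames could be written at the start of the output with AVAssetWriter; names and settings are assumptions, and streaming buffers as they are read (instead of accumulating them in arrays) would also reduce the memory pressure described above:

    import AVFoundation
    import CoreVideo

    // Sketch: write the previously collected video frames at the start of the output movie.
    // `buffers` is assumed to be the videoReaderBuffers array built above, with times starting at zero.
    func writeLeadingVideo(to outputURL: URL,
                           buffers: [(frame: CVPixelBuffer, time: CMTime)],
                           size: CGSize) throws {
        let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ]
        let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                           sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)

        for (pixelBuffer, time) in buffers {
            while !input.isReadyForMoreMediaData { usleep(10_000) }   // simple back-pressure wait
            if !adaptor.append(pixelBuffer, withPresentationTime: time) {
                print("append failed:", writer.error ?? "unknown error")
            }
        }
        input.markAsFinished()
        writer.finishWriting { }
        // The photo frames and AVSpeechSynthesizer audio buffers would then be appended
        // with presentation times offset by the 18-second video's duration.
    }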
Replies: 1 · Boosts: 0 · Views: 183 · Activity: 2w
[AVPlayer][5G] The buffer duration (preferredForwardBufferDuration) configuration property of AVPlayerItem does not work on a 5G network
I tried configuring the preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases, AVPlayer works correctly according to the configured buffer duration. However, when the device is connected to a 5G network, the configuration value no longer works. For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
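For reference, the configuration being described amounts to something like this sketch (streamURL is a placeholder):

    import AVFoundation

    let item = AVPlayerItem(url: streamURL)      // streamURL: placeholder URL of the stream
    item.preferredForwardBufferDuration = 30     // seconds AVPlayer should keep buffered ahead
    let player = AVPlayer(playerItem: item)
    player.automaticallyWaitsToMinimizeStalling = true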
Replies: 0 · Boosts: 0 · Views: 207 · Activity: 2w
Replace MWPhotoBrowser with modern alternative
I have an iPad app, written in Objective-C and distributed through the Apple Developer Enterprise Program, as it is not for public use but specific to some large companies. The app has a local database and works offline. For some functions of the app I need to display images (not edit or crop them, just display them). Right now it integrates the MWPhotoBrowser viewer, which has not been maintained for almost 10 years, so in addition to compilation warnings I have to fight some long-standing bugs, especially with high-resolution images. https://github.com/mwaterfall/MWPhotoBrowser Do you know of a modern, maintained OFFLINE photo viewer? I am evaluating both free and paid options (maybe an SDK). My needs are very basic. I have found https://github.com/TimOliver/TOCropViewController, but I would need to disable the photo editing features, and especially I would lose the useful feature of displaying multiple images (MWPhotoBrowser showed a gallery for multiple images).
Replies: 0 · Boosts: 0 · Views: 151 · Activity: 2w
Issues with Downsampling Live Audio from the Mic with AVAudioMixerNode
I'm working on a memo app that records audio from the iPhone's microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I've encountered limitations with installTap in AVAudioEngine, which doesn't natively support configuring a target sample rate on the mic input (the default being 44.1 kHz). To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded, just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than with the mic setup itself. To make progress, I implemented a workaround that taps and resamples each chunk (every 50 ms in my case) with AVAudioConverter inside installTap. While this works, it can introduce artifacts at the beginning and end of each chunk, likely due to separate processing instead of continuous downsampling. Here are the key issues and questions I have:

1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting the desired sample rate up front would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate, or for continuously downsampling the live input without chunk-based artifacts?

This issue seems longstanding, as noted in a 2018 forum post: https://forums.developer.apple.com/forums/thread/111726 Any guidance on configuring or processing mic input at a lower sample rate in real time would be greatly appreciated. Thank you!
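For reference, a sketch of the tap-plus-converter workaround described above, assuming 16 kHz mono Float32 output; reusing a single AVAudioConverter across tap callbacks (rather than creating one per chunk) is the usual way to reduce the per-chunk edge artifacts mentioned:

    import AVFoundation

    final class MicDownsampler {
        private let engine = AVAudioEngine()
        private var converter: AVAudioConverter?
        private let targetFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                                 sampleRate: 16_000,
                                                 channels: 1,
                                                 interleaved: false)!

        func start(onChunk: @escaping (AVAudioPCMBuffer) -> Void) throws {
            let input = engine.inputNode
            let inputFormat = input.outputFormat(forBus: 0)      // typically 44.1 or 48 kHz
            converter = AVAudioConverter(from: inputFormat, to: targetFormat)

            input.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { [weak self] buffer, _ in
                guard let self = self, let converter = self.converter else { return }
                let ratio = self.targetFormat.sampleRate / inputFormat.sampleRate
                let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
                guard let converted = AVAudioPCMBuffer(pcmFormat: self.targetFormat,
                                                       frameCapacity: capacity) else { return }

                // Feed this tap buffer once, then tell the converter no more data is available for now,
                // so the same converter instance stays alive across successive taps.
                var fed = false
                _ = converter.convert(to: converted, error: nil) { _, outStatus in
                    if fed {
                        outStatus.pointee = .noDataNow
                        return nil
                    }
                    fed = true
                    outStatus.pointee = .haveData
                    return buffer
                }
                onChunk(converted)
            }
            engine.prepare()
            try engine.start()
        }
    }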
Replies: 0 · Boosts: 0 · Views: 143 · Activity: 2w
Cannot Transcribe Audio During SharePlay in VisionOS
I've encountered an issue when trying to transcribe audio during a SharePlay session in VisionOS. Specifically, the AVAudioSession appears to fail when sharing audio, preventing successful transcription. The problem seems related to AVAudioSession.sharedInstance() and using the .mixWithOthers option, which is supposed to enable multiple audio sources to coexist without interference. Here's the relevant code snippet that throws the error:

    private static func prepareEngine() throws -> (AVAudioEngine, SFSpeechAudioBufferRecognitionRequest) {
        let audioEngine = AVAudioEngine()
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true

        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .allowBluetooth])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        let inputNode = audioEngine.inputNode
        let recordingFormat = inputNode.outputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
            request.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()
        return (audioEngine, request)
    }

The setup is designed to initialize an AVAudioEngine and an SFSpeechAudioBufferRecognitionRequest for real-time transcription, but it fails within the SharePlay context. Notably, while .mixWithOthers is intended to handle concurrent audio sessions, it doesn't appear to work as expected during SharePlay. The audioSession.setActive(true) line is where the setup typically fails, with no clear solution to proceed. Has anyone else faced similar issues with AVAudioSession and SharePlay in VisionOS? Any insights on how to manage audio sharing or transcription during a SharePlay session would be greatly appreciated! The specific error is: The operation couldn't be completed. (com.apple.coreaudio.avfaudio error 561145187.)
Replies: 0 · Boosts: 0 · Views: 127 · Activity: 2w