Our multimedia application Boinx FotoMagico displays media files of various kinds with a Metal rendering engine. At the moment we still use .bgra8Unorm pixel format and sRGB color space and only render in SDR, which is increasingly a problem, as much of the video content is HDR nowadays (e.g. videos shot on an iPhone). For that reason we would like to switch to EDR rendering with .rgba16Float pixel format and extendedLinearDisplayP3 color space.
We have already worked out how to do this for HDR image files, but we still have a technical problem when rendering HDR video files. We are using AVFoundation to get the video frames as CVPixelBuffers and convert them to MTLTextures using a CVMetalTextureCache. The MTLTextures are then further processed in various compute shaders before being rendered to screen. However, the pixel values in the textures are not what we expected: video frames appear too bright/overexposed.
In WWDC21 session "Explore HDR rendering with EDR" Ken Greenebaum mentioned:
“AVFoundation does not presently decode HDR formats, such as HDR10, to EDR. Consequently, these need to be adapted for use with EDR rendering. This conversion is straightforward and involves two steps. First, converting to linear light by applying the inverse transfer function. And second, dividing by the medium's reference white.”
https://developer.apple.com/videos/play/wwdc2021/10161?time=1498
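To make sure we understand those two steps correctly, here is a minimal sketch of them in Swift (in our app this logic lives in a compute shader). The linearization assumes HLG and uses the BT.2100 inverse OETF; PQ would need its own transfer function. The referenceWhite parameter is exactly the value we do not know how to obtain:

import Foundation

// Step 1: convert a non-linear HLG code value (0...1) to linear light,
// using the ITU-R BT.2100 HLG inverse OETF.
func hlgToLinear(_ codeValue: Float) -> Float {
    let a: Float = 0.17883277
    let b: Float = 0.28466892
    let c: Float = 0.55991073
    return codeValue <= 0.5
        ? (codeValue * codeValue) / 3.0
        : (exp((codeValue - c) / a) + b) / 12.0
}

// Step 2: divide by the medium's reference white, so that SDR reference white
// ends up at 1.0 as EDR rendering expects. How to compute referenceWhite for
// an arbitrary video is the open question of this request.
func edrValue(hlgCodeValue: Float, referenceWhite: Float) -> Float {
    return hlgToLinear(hlgCodeValue) / referenceWhite
}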
However, the session does not explain how to get or calculate the correct value for "reference white". We could not find any relevant info on the web. This is why we need DTS assistance. We need code that calculates the correct value for reference white for any kind of video, whether it is SDR or HDR, and regardless of codec and encoding. I assume that Ken Greenebaum is the best Apple engineer to ask in this case, because he recorded most of the EDR-related WWDC sessions in recent years?
We have written a small test app that renders a short sample video (HLG encoding). The window contains two views. The upper view uses an AVPlayerLayer and renders the video natively just like QuickTime Player. The video content looks correct here. BTW, the window background is SDR white, so that bright EDR pixels can be clearly identified, e.g. the clouds just above the mountains in the upper left corner of the sample video. You may need to lower display brightness a bit if these clouds do not appear brighter than the white window background.
The bottom view uses a CAMetalLayer and low-level Metal rendering. The CVPixelBuffers we receive from AVFoundation still need to be scaled down so that SDR reference white reaches pixel value 1.0. Entering a value of 9.0 to 10.0 for reference white in the text field makes it look about right on my Studio Display. But that is just experimental for this sample video file. We need code to calculate the correct value for reference white for any kind of video file!
We have a couple of questions:
SDR videos should probably use 1.0 as reference white, since their encoded pixel values can already be used as-is. Is this assumption correct?
Will different HDR video encodings (HLG, PQ, etc.) lead to different values for reference white?
Is the value for reference white constant throughout a video, or can it vary over time, either scene by scene, or even frame by frame?
If it can vary, does the CVPixelBuffer of the current video frame contain all the necessary metadata to calculate the correct value?
Does the NSScreen.maximumExtendedDynamicRangeColorComponentValue also influence the reference white value?
The attached sample project is structured in a way that the only piece of code that needs to be modified is the ViewController.sdrReferenceWhiteValue() function. Please read the comments and the #warning in this function. This is where the code for calculating the reference white value should be inserted.
Here is the download link for the sample project:
https://www.dropbox.com/scl/fi/4w5gmftav5xhbixu9u6pb/HDRMetalTest.zip?rlkey=n8cm02soux3rx03vplgo6h1lm&dl=0
Hey, I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except 4K 120fps on the new iPhone 16 Pro. This just shows black in the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4K up to 60fps)?
Thanks so much for the help.
import AVFoundation

class CameraManager: NSObject {
enum Errors: Error {
case noCaptureDevice
case couldNotAddInput
case unsupportedConfiguration
}
enum Resolution {
case hd1080p
case uhd4K
var preset: AVCaptureSession.Preset {
switch self {
case .hd1080p:
return .hd1920x1080
case .uhd4K:
return .hd4K3840x2160
}
}
var dimensions: CMVideoDimensions {
switch self {
case .hd1080p:
return CMVideoDimensions(width: 1920, height: 1080)
case .uhd4K:
return CMVideoDimensions(width: 3840, height: 2160)
}
}
}
enum CameraType {
case wide
case ultraWide
var captureDeviceType: AVCaptureDevice.DeviceType {
switch self {
case .wide:
return .builtInWideAngleCamera
case .ultraWide:
return .builtInUltraWideCamera
}
}
}
enum FrameRate: Int {
case fps60 = 60
case fps120 = 120
case fps240 = 240
}
let orientationManager = OrientationManager()
let captureSession: AVCaptureSession
let previewLayer: AVCaptureVideoPreviewLayer
let movieFileOutput = AVCaptureMovieFileOutput()
let videoDataOutput = AVCaptureVideoDataOutput()
private var videoCaptureDevice: AVCaptureDevice?
override init() {
self.captureSession = AVCaptureSession()
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
super.init()
self.previewLayer.videoGravity = .resizeAspect
}
func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
assert(Thread.isMainThread)
captureSession.beginConfiguration()
defer { captureSession.commitConfiguration() }
captureSession.sessionPreset = resolution.preset
if captureSession.canAddOutput(movieFileOutput) {
captureSession.addOutput(movieFileOutput)
} else {
throw Errors.couldNotAddInput
}
videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
if captureSession.canAddOutput(videoDataOutput) {
captureSession.addOutput(videoDataOutput)
// Set the video orientation if needed
if let connection = videoDataOutput.connection(with: .video) {
//connection.videoOrientation = .portrait
}
} else {
throw Errors.couldNotAddInput
}
guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
throw Errors.noCaptureDevice
}
let useDimensions = resolution.dimensions
guard let format = videoCaptureDevice.formats.first(where: { format in
let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
let frameRates = format.videoSupportedFrameRateRanges
return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
}) else {
throw Errors.unsupportedConfiguration
}
self.videoCaptureDevice = videoCaptureDevice
do {
let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
if captureSession.canAddInput(videoInput) {
captureSession.addInput(videoInput)
} else {
throw Errors.couldNotAddInput
}
try videoCaptureDevice.lockForConfiguration()
videoCaptureDevice.activeFormat = format
videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
videoCaptureDevice.exposureMode = .locked
videoCaptureDevice.unlockForConfiguration()
} catch {
throw error
}
configureStabilization(enabled: stabilizationEnabled)
}
}
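For reference, here is a small diagnostic sketch (assuming the same AVCaptureDevice instance as in the code above) that lists every 4K format of the device together with its supported frame-rate ranges, to confirm whether a 3840x2160 120fps format is actually reported for the chosen camera type:

import AVFoundation

// Dump all 4K formats and their frame-rate ranges for a given device.
func log4KFormats(for device: AVCaptureDevice) {
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        let ranges = format.videoSupportedFrameRateRanges
            .map { "\($0.minFrameRate)-\($0.maxFrameRate)" }
            .joined(separator: ", ")
        print("4K format: \(format), fps ranges: \(ranges)")
    }
}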
Hi all, we are trying to migrate our project to Swift 6.
The project uses AVPlayer on the MainActor.
Selecting audio and subtitle tracks does not work.
Task { @MainActor in
    let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible)
We get the error: Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary
And a second example:
if #available(iOS 15.0, *) {
player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in
if error != nil {
return
}
if let groupWrp = group {
DispatchQueue.main.async {
self.setupAudio(groupWrp, audio: audioLang)
}
}
})
}
We get the error: Sending 'groupWrp' risks causing data races
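For what it's worth, here is a minimal sketch of the pattern we are experimenting with: load and consume the non-Sendable AVMediaSelectionGroup inside a single isolation context, so it never has to cross an actor boundary (the language code and the direct call to select(_:in:) stand in for our setupAudio(_:audio:) helper; whether this satisfies the strict concurrency checker may depend on the compiler and SDK version):

import AVFoundation

@MainActor
func selectAudio(on item: AVPlayerItem, languageCode: String) async throws {
    // Load and use the group in the same MainActor context, so the
    // non-Sendable AVMediaSelectionGroup is never sent across actors.
    guard let group = try await item.asset.loadMediaSelectionGroup(for: .audible) else { return }
    let options = AVMediaSelectionGroup.mediaSelectionOptions(
        from: group.options,
        with: Locale(identifier: languageCode)
    )
    if let option = options.first {
        item.select(option, in: group)
    }
}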
Hello there,
I need to move through a video loaded in an AVPlayer one frame at a time, back or forth. For that I tried to use AVPlayerItem's method step(byCount:) and it works just fine.
However I need to know when the stepping has happened, and as far as I can observe it is not immediate. If I check currentTime() just after calling the method it is the same, and if I check it slightly later (depending on the video itself) it shows the correct "jumped" time.
To achieve my goal I tried subclassing AVPlayerItem and implementing my own async method using NotificationCenter and the timeJumpedNotification, assuming it would be delivered as the time actually jumps, but that is not the case.
Here is my "stripped" and simplified version of the custom Player Item:
import AVFoundation
final class PlayerItem: AVPlayerItem {
private var jumpCompletion: ( (CMTime) -> () )?
override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
}
deinit {
NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
jumpCompletion = nil
}
@discardableResult func step(by count: Int) async -> CMTime {
await withCheckedContinuation { continuation in
step(by: count) { time in
continuation.resume(returning: time)
}
}
}
func step(by count: Int, completion: @escaping ( (CMTime) -> () )) {
guard jumpCompletion == nil else {
completion(currentTime())
return
}
jumpCompletion = completion
step(byCount: count)
}
@objc private func timeDidChange(_ notification: Notification) {
switch notification.name {
case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
jumpCompletion?(currentTime())
jumpCompletion = nil
default: return
}
}
}
In short, the notification never gets posted, thus the above is not working.
I guess the key here is what the docs say about timeJumpedNotification:
"A notification the system posts when a player item’s time changes discontinuously."
so step(byCount:) is apparently not considered a discontinuous operation and doesn't trigger it.
I'd be really grateful if somebody could help, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it's not accurate in terms of the exact next/previous frame: the video might have VFR, which sometimes causes repeated frames or even skips one.
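As an interim idea (just a sketch, not a recommendation), polling currentTime() after step(byCount:) until it changes also seems possible, since the update appears to land asynchronously; the 2 ms interval and the retry cap below are arbitrary:

import AVFoundation

extension AVPlayerItem {
    // Hypothetical helper: step, then wait until currentTime() actually changes.
    @discardableResult
    func stepAndWait(byCount count: Int) async -> CMTime {
        let before = currentTime()
        step(byCount: count)
        for _ in 0..<100 {
            let now = currentTime()
            if now != before { return now }
            try? await Task.sleep(nanoseconds: 2_000_000)
        }
        return currentTime()
    }
}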
Thanks a lot
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException
My previous code worked fine, but it's crashing with Swift 6.
Does anyone know a solution for this?
## Previous code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler = exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
// Periodically check the progress of the export session
let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
audioExportProgressPublisher.send(exportSession.progress)
}
// Export the audio track asynchronously
exportSession.exportAsynchronously {
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
timer.invalidate()
}
}
## New code:
func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
let asset = AVAsset(url: videoURL)
// Create an AVAssetExportSession to export the audio track
guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
return
}
// Set the output file type and path
guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
let outputURL = VideoUtils.getTempAudioExportUrl(filename)
VideoUtils.deleteFileIfExists(outputURL.path)
let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
if let exportHandler {
exportHandler(exportSession, audioExportProgressPublisher)
}
if #available(iOS 18.0, *) {
do {
try await exportSession.export(to: outputURL, as: .m4a)
let states = exportSession.states(updateInterval: 0.1)
for await state in states {
switch state {
case .pending, .waiting:
break
case .exporting(progress: let progress):
print("Exporting: \(progress.fractionCompleted)")
if progress.isFinished {
completion(.success(outputURL))
}else if progress.isCancelled {
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
}else {
audioExportProgressPublisher.send(Float(progress.fractionCompleted))
}
}
}
}catch let error {
print(error.localizedDescription)
}
}else {
// Periodically check the progress of the export session
let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
.autoconnect()
.sink { [weak exportSession] _ in
guard let exportSession else { return }
audioExportProgressPublisher.send(exportSession.progress)
}
exportSession.outputFileType = .m4a
exportSession.outputURL = outputURL
await exportSession.export()
switch exportSession.status {
case .completed:
completion(.success(outputURL))
case .failed:
completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
case .cancelled:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
default:
completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
}
// Invalidate the timer when the export session completes or is cancelled
publishTimer.cancel()
}
}
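For comparison, here is a minimal sketch of how the iOS 18 branch could be structured so that progress is observed while the export runs rather than after it, with export(to:as:) awaited exactly once (the names come from the code above; this is not a verified fix for the crash):

if #available(iOS 18.0, *) {
    // Observe states concurrently with the export. export(to:as:) throws on
    // failure or cancellation, so there is no status enum to switch over.
    let monitor = Task {
        for await state in exportSession.states(updateInterval: 0.1) {
            if case .exporting(progress: let progress) = state {
                audioExportProgressPublisher.send(Float(progress.fractionCompleted))
            }
        }
    }
    do {
        try await exportSession.export(to: outputURL, as: .m4a)
        completion(.success(outputURL))
    } catch {
        completion(.failure(error))
    }
    monitor.cancel()
}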
In my project, I want to add an emoji overlay to my video, but the emoji image becomes dark when it is added to an HDR video. I'm trying to convert my emoji image to an HDR format, or to create it in an HDR format in the first place, but that doesn't seem to work. I hope someone with experience in this area can help me.
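One thing that might be worth checking, purely as a guess and with hypothetical names: if the compositing is done with Core Image, explicitly tagging the SDR emoji with its own color space before compositing it over the HDR frame lets Core Image convert it into the working space, instead of treating the emoji's values as if they were already in that space:

import CoreImage
import CoreGraphics

// Sketch only: `emoji` is an SDR CGImage, `frame` is a CIImage made from the
// HDR video frame. Tagging the overlay as sRGB lets Core Image convert it
// into the (extended-range) working space during compositing.
func compose(emoji: CGImage, over frame: CIImage) -> CIImage {
    let overlay = CIImage(cgImage: emoji, options: [
        .colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!
    ])
    return overlay.composited(over: frame)
}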
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app and Picture in Picture both work, except in one situation: when I pause the video in the application and then switch to the background, PiP does not activate. What am I doing wrong?
import UIKit
import AVKit
import AVFoundation
class ViewControllerSec: UIViewController,AVPictureInPictureControllerDelegate {
var pipPlayer: AVPlayer!
var avCanvas : UIView!
var pipCanvas: AVPlayerLayer?
var pipController: AVPictureInPictureController!
var mainViewControler : UIViewController!
var playerItem : AVPlayerItem!
var videoAvasset : AVAsset!
public func link(to parentViewController : UIViewController) {
mainViewControler = parentViewController
setup()
}
@objc func appWillResignActiveNotification(application: UIApplication) {
guard let pipController = pipController else {
print("PiP not supported")
return
}
print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
print("PIP isPossible: \(pipController.isPictureInPicturePossible)"
if playerItem.status == .readyToPlay {
if pipPlayer.rate == 0 {
pipPlayer.play()
}
pipController.startPictureInPicture() // Error in log: "Failed to start picture in picture."
} else {
print("Player not ready for PiP.")
}
}
private func setupAudio() {
do {
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playback, mode: .moviePlayback)
try session.setActive(true)
} catch {
print("Audio session setup failed: \(error.localizedDescription)")
}
}
@objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
print("Failed to play to end: \(error.localizedDescription)")
}
}
func setup() {
setupAudio()
guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
videoAvasset = AVAsset(url: videoURL)
playerItem = AVPlayerItem(asset: videoAvasset)
addPlayerObservers()
pipPlayer = AVPlayer(playerItem: playerItem)
avCanvas = UIView(frame: view.bounds)
pipCanvas = AVPlayerLayer(player: pipPlayer)
guard let pipCanvas else { return }
pipCanvas.frame = avCanvas.bounds
//pipCanvas.videoGravity = .resizeAspectFill
mainViewControler.view.addSubview(avCanvas)
avCanvas.layer.addSublayer(pipCanvas)
if AVPictureInPictureController.isPictureInPictureSupported() {
pipController = AVPictureInPictureController(playerLayer: pipCanvas)
pipController?.delegate = self
pipController?.canStartPictureInPictureAutomaticallyFromInline = true
}
let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
playButton.setTitle("Play", for: .normal)
playButton.backgroundColor = .blue
playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
mainViewControler.view.addSubview(playButton)
let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
pauseButton.setTitle("Pause", for: .normal)
pauseButton.backgroundColor = .red
pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
mainViewControler.view.addSubview(pauseButton)
let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
pipButton.setTitle("Start PiP", for: .normal)
pipButton.backgroundColor = .green
pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
mainViewControler.view.addSubview(pipButton)
print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")
NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
guard let self = self else { return }
if self.pipPlayer.rate == 0 {
self.pipPlayer.play()
pipController?.startPictureInPicture()
}
}
}
func addPlayerObservers() {
playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
}
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
if keyPath == "status" {
if let statusNumber = change?[.newKey] as? NSNumber {
let status = AVPlayer.Status(rawValue: statusNumber.intValue)!
switch status {
case .readyToPlay:
print("Player is ready to play")
case .failed:
print("Player failed: \(String(describing: playerItem?.error))")
case .unknown:
print("Player status is unknown")
@unknown default:
fatalError()
}
}
}
}
@objc func playerDidFinishPlaying(_ notification: Notification) {
print("Video finished playing.")
}
deinit {
playerItem?.removeObserver(self, forKeyPath: "status")
NotificationCenter.default.removeObserver(self)
}
@objc func playTapped() {
pipPlayer.play()
}
@objc func pauseTapped() {
pipPlayer.pause()
}
@objc func startPictureInPicture() {
if let pipController = pipController, !pipController.isPictureInPictureActive {
pipController.startPictureInPicture()
}
}
@objc func stopPictureInPicture() {
if let pipController = pipController, pipController.isPictureInPictureActive {
pipController.stopPictureInPicture()
}
}
func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
print("Failed to start PiP: \(error.localizedDescription)")
if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
print("Underlying error: \(underlyingError)")
}
}
}
Safari is supposed to support animated AVIF images since version 16, but the ones I've tested perform very poorly, even on an M4 Mac Mini running Sequoia 15.1.1.
I believe Safari delegates decoding to the operating system itself, so this issue also happens in Live Preview in the Finder when I try to preview a file.
Sample file here: https://s3.us-west-2.amazonaws.com/cdn.paintera.org/test/sample.avif
322KB file, 5 seconds long, 12fps
This plays perfectly in Chrome on macOS, but is slow and laggy in Safari and Live Preview (it takes about 6.5 seconds to finish the 5-second video).
Does anyone know how to fix this or workaround this issue?
Hello,
I’m experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but stops unexpectedly after a few iterations in the native iOS app.
<video src={videoPath} autoplay muted loop playsinline
class="h-auto w-full max-w-full object-cover"></video>
Has anyone encountered a similar issue or have insights into what might be causing this behavior on iOS?
Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power saving policy?
Thank you in advance for your help!
Hello,
I work on a video streaming app, and I have been working on this crash that we are seeing quite frequently (it is our #2 crasher at the moment). The stack trace indicates that an AVPictureInPictureController is being deallocated on a background thread. This leads to dangling AutoLayout constraints getting cleaned up, further resulting in an exception being thrown from the framework about the layout engine being accessed from a background thread.
We have internal analytics which indicate that the crash occurs after the user comes back to the app after putting it in the background, and switches playback from Picture in Picture mode back to the app's regular playback UI.
What has me puzzled here is that, as I'm sure you know, we have no control over the PIP UI. It is entirely system-provided, and there is none of our app's code in the stack trace. The whole process is even initiated by a KVO on an AVPlayerController property, and our app doesn't use that class directly anywhere. So how did we manage to cause this process to happen on a background thread?
Add to this the fact that the bug only appeared once we switched to compiling with Xcode 16, and is overwhelmingly present only on devices running iOS 18.
These factors lead me to believe that this is probably an OS issue. But before I go to file a feedback, I thought I would post here in case anyone has any ideas. I have attached an instance of the crash log to this post.
2025-01-08_17-32-45.7003_-0800-ea8d5c3323e0f1fc059cf83f6ec86377bdae1788.crash
How do we export a spatial video at a higher resolution than 2200 x 2200?
For example, I want to export a video that I edited in a spatial timeline as 4096 x 4096 resolution with spatial metadata to output as an MV-HEVC file.
I looked under both Final Cut and Compressor export settings, but couldn't find a way to do this.
Thanks for your help!
Can anyone explain how AVAssetExportSession works in iOS 18 and earlier versions?
I see in most of the old sample code from Apple that when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronized using an NSLock or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation saying that calls to assetWriterInput.append(sampleBuffer) for different media samples, such as audio and video, should not be made concurrently. Is it not valid for these methods to be executed in parallel, for instance:
`videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1
`audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2
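For concreteness, this is the serialization pattern the older samples imply, sketched with hypothetical names (the two inputs stand in for the AVAssetWriterInputs above): every append, regardless of media type, is funnelled through one serial queue so that no two appends run concurrently:

import AVFoundation

final class SampleAppender {
    private let writerQueue = DispatchQueue(label: "AssetWriterQueue")
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput

    init(videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async {
            // All appends happen on the same serial queue.
            if self.videoInput.isReadyForMoreMediaData {
                self.videoInput.append(sampleBuffer)
            }
        }
    }

    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async {
            if self.audioInput.isReadyForMoreMediaData {
                self.audioInput.append(sampleBuffer)
            }
        }
    }
}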
Hello!
I am building a video camera app and trying to implement Apple log for iPhone 15 Pro and 16 Pro.
I am not seeing a lot of documentation on it, and I notice that the number of apps that use it on the App Store is rather limited: fewer than 5, to be exact.
Is Apple Log recording a feature that is accessible to developers?
Here is a link to documentation: https://developer.apple.com/documentation/avfoundation/avcapturecolorspace/applelog
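Based only on the documentation page linked above, here is a hedged sketch of how selecting that color space would presumably look; this has not been verified on a device:

import AVFoundation

// Pick a format that advertises Apple Log and make it the active color space.
func enableAppleLog(on device: AVCaptureDevice) throws {
    guard let format = device.formats.first(where: {
        $0.supportedColorSpaces.contains(.appleLog)
    }) else { return }
    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .appleLog
    device.unlockForConfiguration()
}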
We integrate with FCP X using a custom share destination and the AppleScript interface. This has been working fine until the recent version 11 update of FCP X.
With this update we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what or how to troubleshoot it.
This works fine in 10.8.1 and below.
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. It works fairly well with regular MP4; however, if I set preferredOutputSegmentInterval to generate fMP4 output, even though I specify initialSegmentStartTime as 0, it always adds a one-frame PTS offset to all frames.
For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and use ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8 to display the PTS of the output, it doesn't start from 0 but has a one-frame PTS offset. I also tried opening it with MP4Box, which likewise shows that the first frame's DTS and CTS do not start from 0.
However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0. So I can't use it to calculate the PTS difference between the two videos either.
Can I get some help to understand why there is a difference between AVAssetWriter/Reader's fMP4 PTS and what tools like ffprobe report?
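For reference, this is roughly the AVAssetReader check described above, as a sketch (the URL is assumed to point at a local copy of the output):

import AVFoundation

// Print the first video sample's PTS of a local asset, to compare against
// what ffprobe/MP4Box report for the same content.
func firstVideoPTS(of url: URL) async throws -> CMTime? {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return nil }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return nil }
    guard let sample = output.copyNextSampleBuffer() else { return nil }
    return CMSampleBufferGetPresentationTimeStamp(sample)
}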
After the iOS 18.2 update, videos are not playing in Netflix, Amazon Prime, or YouTube.
Hi all,
I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor.
On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback.
One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released.
It is possible to reproduce the issue with the simple code:
import AVFoundation
import UIKit
class ViewController: UIViewController {
var displayBufferLayer: AVSampleBufferDisplayLayer?
override func viewDidLoad() {
super.viewDidLoad()
let displayBufferLayer = AVSampleBufferDisplayLayer()
displayBufferLayer.videoGravity = .resizeAspectFill
displayBufferLayer.frame = view.bounds
view.layer.insertSublayer(displayBufferLayer, at: 0)
self.displayBufferLayer = displayBufferLayer
DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
self.displayBufferLayer?.flush()
self.displayBufferLayer?.removeFromSuperlayer()
self.displayBufferLayer = nil
}
}
}
In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers. This is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayer instances keeps increasing.
I wonder whether I should use a pool of AVSampleBufferDisplayLayers and reuse them (see the sketch below); however, I'm slightly afraid that this could also lead to strange bugs.
Edit: it doesn't cause leaks on an iOS 18 device, but it leaks on an iPad Pro running iOS 17.5.1.
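To make the pooling idea above concrete, here is a rough sketch (hypothetical type) of what reuse could look like while the leak exists on iOS 17:

import AVFoundation

final class DisplayLayerPool {
    private var available: [AVSampleBufferDisplayLayer] = []

    // Hand out an existing layer if one is available instead of creating a new one.
    func dequeueLayer() -> AVSampleBufferDisplayLayer {
        if let layer = available.popLast() { return layer }
        return AVSampleBufferDisplayLayer()
    }

    // Flush and detach the layer on return so it can be reused later.
    func recycle(_ layer: AVSampleBufferDisplayLayer) {
        layer.flushAndRemoveImage()
        layer.removeFromSuperlayer()
        available.append(layer)
    }
}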
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation.
My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e. the process hosting my plugin instance that is handling the MESampleBuffer protocol calls, for example).
Is there a way to configure Xcode to attach interactively to this background process for debugging?
Right now I have to use Console + print, which is not fun or productive.
Does Apple have a working example of a MediaExtension anywhere?
This is an exciting API that is very under-documented.
I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused.
Any assistance is highly appreciated!