Can AVQT be used to measure the encoding quality of PQ- or HLG-based content, beyond SDR? If so, how can I leverage it? If not, is there a roadmap or timeline for enabling this kind of capability?
VideoToolbox
Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.
Posts under VideoToolbox tag
Does Video Toolbox's compression session yield data I can decompress on a different device that doesn't have Apple's decoder, i.e. so I can send the data over the network to devices that aren't necessarily Apple?
Or is the format proprietary rather than just regular H.264 (for example)?
If I can decompress without Video Toolbox, could I get a reference to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
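For what it's worth, here is a minimal sketch of the kind of session I have in mind, assuming the goal is standard H.264 output (the function name and settings are illustrative, not a confirmed recipe). My understanding is that the resulting CMSampleBuffers carry ordinary H.264 in AVCC (length-prefixed) form, with the SPS/PPS stored in the format description, so a non-Apple decoder such as FFmpeg should be able to consume it once the NAL units are repackaged (e.g. converted to Annex B or muxed into a container):
import VideoToolbox

// Sketch only: create a compression session that emits standard H.264.
func makeH264Session(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,   // plain H.264, not a proprietary format
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,                 // frames are delivered via encodeFrame's outputHandler
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session = session else { return nil }

    // A standard profile keeps the bitstream decodable by non-Apple decoders.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_H264_Main_AutoLevel)
    return session
}
As far as I know, converting to Annex B is just a matter of replacing the 4-byte length prefixes with start codes and injecting the SPS/PPS from the format description; FFmpeg can also consume AVCC directly if the avcC extradata is supplied.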
First of all, I tried MobileVLCKit, but there was too much delay.
Then I wrote a UDPManager class; my code is below. I would be very happy if anyone has information and can point me in the right direction.
Broadcast code
ffmpeg -f avfoundation -video_size 1280x720 -framerate 30 -i "0" -c:v libx264 -preset medium -tune zerolatency -f mpegts "udp://127.0.0.1:6000?pkt_size=1316"
Live View Code (almost 0 delay)
ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 1 -strict experimental -framedrop -f mpegts -vf setpts=0 udp://127.0.0.1:6000
OR
mpv udp://127.0.0.1:6000 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1
UDPManager
import Foundation
import AVFoundation
import CoreMedia
import VideoDecoder
import SwiftUI
import Network
import Combine
import CocoaAsyncSocket
import VideoToolbox
class UDPManager: NSObject, ObservableObject, GCDAsyncUdpSocketDelegate {
private let host: String
private let port: UInt16
private var socket: GCDAsyncUdpSocket?
@Published var videoOutput: CMSampleBuffer?
init(host: String, port: UInt16) {
self.host = host
self.port = port
}
func connectUDP() {
do {
socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: .global())
//try socket?.connect(toHost: host, onPort: port)
try socket?.bind(toPort: port)
try socket?.enableBroadcast(true)
try socket?.enableReusePort(true)
try socket?.beginReceiving()
} catch {
print("UDP soketi oluşturma hatası: \(error)")
}
}
func closeUDP() {
socket?.close()
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didConnectToAddress address: Data) {
print("UDP Bağlandı.")
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didNotConnect error: Error?) {
print("UDP soketi bağlantı hatası: \(error?.localizedDescription ?? "Bilinmeyen hata")")
}
func udpSocket(_ sock: GCDAsyncUdpSocket, didReceive data: Data, fromAddress address: Data, withFilterContext filterContext: Any?) {
if !data.isEmpty {
DispatchQueue.main.async {
self.videoOutput = self.createSampleBuffer(from: data)
}
}
}
func createSampleBuffer(from data: Data) -> CMSampleBuffer? {
var blockBuffer: CMBlockBuffer?
var status = CMBlockBufferCreateWithMemoryBlock(
allocator: kCFAllocatorDefault,
memoryBlock: UnsafeMutableRawPointer(mutating: (data as NSData).bytes),
blockLength: data.count,
blockAllocator: kCFAllocatorNull,
customBlockSource: nil,
offsetToData: 0,
dataLength: data.count,
flags: 0,
blockBufferOut: &blockBuffer)
if status != noErr {
return nil
}
var sampleBuffer: CMSampleBuffer?
let sampleSizeArray = [data.count]
status = CMSampleBufferCreateReady(
allocator: kCFAllocatorDefault,
dataBuffer: blockBuffer,
formatDescription: nil,
sampleCount: 1,
sampleTimingEntryCount: 0,
sampleTimingArray: nil,
sampleSizeEntryCount: 1,
sampleSizeArray: sampleSizeArray,
sampleBufferOut: &sampleBuffer)
if status != noErr {
return nil
}
return sampleBuffer
}
}
I didn't know how to convert the Data object into video, so I searched, found the createSampleBuffer code shown above, and wanted to try it.
Then I tried to display the CMSampleBuffer in a player view, but it just shows a white screen and doesn't work:
struct SampleBufferPlayerView: UIViewRepresentable {
typealias UIViewType = UIView
var sampleBuffer: CMSampleBuffer
func makeUIView(context: Context) -> UIView {
let view = UIView(frame: .zero)
let displayLayer = AVSampleBufferDisplayLayer()
displayLayer.videoGravity = .resizeAspectFill
view.layer.addSublayer(displayLayer)
context.coordinator.displayLayer = displayLayer
return view
}
func updateUIView(_ uiView: UIView, context: Context) {
context.coordinator.sampleBuffer = sampleBuffer
context.coordinator.updateSampleBuffer()
}
func makeCoordinator() -> Coordinator {
Coordinator()
}
class Coordinator {
var displayLayer: AVSampleBufferDisplayLayer?
var sampleBuffer: CMSampleBuffer?
func updateSampleBuffer() {
guard let displayLayer = displayLayer, let sampleBuffer = sampleBuffer else { return }
if displayLayer.isReadyForMoreMediaData {
displayLayer.enqueue(sampleBuffer)
} else {
displayLayer.requestMediaDataWhenReady(on: .main) {
if displayLayer.isReadyForMoreMediaData {
displayLayer.enqueue(sampleBuffer)
print("isReadyForMoreMediaData")
}
}
}
}
}
}
I tried to use it as below, but I couldn't get it to work. Can anyone help me?
struct ContentView: View {
// udp://@127.0.0.1:6000
@ObservedObject var udpManager = UDPManager(host: "127.0.0.1", port: 6000)
var body: some View {
VStack {
if let buffer = udpManager.videoOutput{
SampleBufferPlayerView(sampleBuffer: buffer)
.frame(width: 300, height: 200)
}
}
.onAppear(perform: {
udpManager.connectUDP()
})
}
}
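For context on the white screen: as far as I understand, AVSampleBufferDisplayLayer can only decode sample buffers that carry a video format description (built from the stream's SPS/PPS) and timing information, and the raw MPEG-TS packets read from the socket are not directly usable as H.264 access units. Below is a minimal sketch of how a decodable sample buffer is normally assembled, assuming the TS stream has already been demuxed; sps, pps and avccFrame (one length-prefixed access unit) are illustrative inputs, not something the code above produces:
import CoreMedia

// Sketch: build a decodable H.264 sample buffer from already-demuxed data.
func makeDecodableSampleBuffer(sps: Data, pps: Data, avccFrame: Data,
                               presentationTime: CMTime) -> CMSampleBuffer? {
    // 1. Format description from the parameter sets.
    var formatDescription: CMFormatDescription?
    let formatStatus: OSStatus = sps.withUnsafeBytes { (spsBytes: UnsafeRawBufferPointer) -> OSStatus in
        pps.withUnsafeBytes { (ppsBytes: UnsafeRawBufferPointer) -> OSStatus in
            let pointers: [UnsafePointer<UInt8>] = [
                spsBytes.bindMemory(to: UInt8.self).baseAddress!,
                ppsBytes.bindMemory(to: UInt8.self).baseAddress!
            ]
            let sizes = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: 2,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,
                formatDescriptionOut: &formatDescription)
        }
    }
    guard formatStatus == noErr, let formatDescription = formatDescription else { return nil }

    // 2. Block buffer that owns its own copy of the frame bytes.
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault, memoryBlock: nil, blockLength: avccFrame.count,
            blockAllocator: nil, customBlockSource: nil, offsetToData: 0,
            dataLength: avccFrame.count, flags: 0, blockBufferOut: &blockBuffer) == noErr,
          let blockBuffer = blockBuffer,
          CMBlockBufferAssureBlockMemory(blockBuffer) == noErr else { return nil }
    avccFrame.withUnsafeBytes { bytes in
        _ = CMBlockBufferReplaceDataBytes(with: bytes.baseAddress!, blockBuffer: blockBuffer,
                                          offsetIntoDestination: 0, dataLength: avccFrame.count)
    }

    // 3. Sample buffer with format description and timing, so the layer can decode and schedule it.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleSize = avccFrame.count
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReady(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer,
                              formatDescription: formatDescription, sampleCount: 1,
                              sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                              sampleSizeEntryCount: 1, sampleSizeArray: &sampleSize,
                              sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}
In particular, without a format description attached, the layer has no way to decode the bytes at all, which would explain a blank layer.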
Hey Developers,
I'm on the hunt for a new Apple laptop geared towards coding, and I'd love to tap into your collective wisdom. If you have recommendations or personal experiences with a specific model that excels in the coding realm, please share your insights. Looking for optimal performance and a seamless coding experience.
Your input is gold – thanks a bunch!
I am using the API from the official documentation to decode, but the callback is not being triggered.
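For reference, assuming the callback in question is the per-frame output handler of a VTDecompressionSession, this is the minimal setup I am comparing against (a sketch only; the handler runs only when VTDecompressionSessionDecodeFrame itself returns noErr, so the returned status is worth logging):
import VideoToolbox

// Sketch: decode one sample buffer with a per-frame output handler.
func decode(_ sampleBuffer: CMSampleBuffer, formatDescription: CMVideoFormatDescription) {
    var session: VTDecompressionSession?
    guard VTDecompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            formatDescription: formatDescription,
            decoderSpecification: nil,
            imageBufferAttributes: nil,
            outputCallback: nil,              // using the per-frame output handler below instead
            decompressionSessionOut: &session) == noErr,
          let session = session else {
        print("failed to create decompression session")
        return
    }

    var infoFlags = VTDecodeInfoFlags()
    let status = VTDecompressionSessionDecodeFrame(
        session,
        sampleBuffer: sampleBuffer,
        flags: [._EnableAsynchronousDecompression],
        infoFlagsOut: &infoFlags) { status, _, imageBuffer, presentationTime, _ in
            // This closure is the "callback": it runs once per decoded frame.
            print("decoded frame, status:", status, "pts:", presentationTime, imageBuffer as Any)
        }
    if status != noErr {
        print("VTDecompressionSessionDecodeFrame failed:", status)   // if this fails, the handler never runs
    }
}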
I am trying to set up HLS with MV-HEVC. I have an MV-HEVC MP4, converted with AVAssetWriter, that plays as a "spatial video" in Photos in the simulator. I've used ffmpeg to fragment the video for HLS (sample m3u8 file below).
The HLS stream of the MP4 plays on a VideoMaterial with an AVPlayer in the simulator, but it is hard to determine whether the streamed video is stereo. Is there any guidance on confirming that the streamed MP4 video is properly being read as stereo?
Additionally, I see that REQ-VIDEO-LAYOUT is required for multivariant HLS. If there is ONLY stereo video in the playlist, is it still needed? Are there any other configurations needed for the device to read the stream as stereo? (See the multivariant sketch after the sample playlist below.)
Sample m3u8 playlist
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:13
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:12.512500,
sample_video0.ts
#EXTINF:8.341667,
sample_video1.ts
#EXTINF:12.512500,
sample_video2.ts
#EXTINF:8.341667,
sample_video3.ts
#EXTINF:8.341667,
sample_video4.ts
#EXTINF:12.433222,
sample_video5.ts
#EXT-X-ENDLIST
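For reference, my current understanding (to be checked against Apple's HLS authoring specification, since I have not confirmed it end to end) is that stereo content is declared at the multivariant level with REQ-VIDEO-LAYOUT, which requires protocol version 12; the bandwidth, resolution, and media playlist name below are placeholders:
#EXTM3U
#EXT-X-VERSION:12
#EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=1920x1080,REQ-VIDEO-LAYOUT="CH-STEREO"
sample_video_stereo.m3u8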
After encoding with the VideoToolbox library, I send the stream over the network via NDI. On a Mac the NDI source is received and the picture displays successfully, but on Windows only the NDI source name appears and no picture is shown.
I'd like to know whether the VideoToolbox library simply cannot produce an encoding that works correctly on Windows, and how this problem should be solved.
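One thing I plan to check (a sketch only, assuming the stream is H.264 and that the Windows receiver expects an Annex B elementary stream with in-band SPS/PPS, which is common outside Apple platforms): VideoToolbox keeps the parameter sets in the sample buffer's format description rather than in the data buffer, so they have to be extracted and prepended explicitly, for example:
import CoreMedia

// Sketch: pull SPS/PPS out of the encoder's format description as Annex B NAL units.
func annexBParameterSets(from formatDescription: CMFormatDescription) -> Data {
    let startCode: [UInt8] = [0, 0, 0, 1]
    var result = Data()
    var parameterSetCount = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        formatDescription,
        parameterSetIndex: 0,
        parameterSetPointerOut: nil,
        parameterSetSizeOut: nil,
        parameterSetCountOut: &parameterSetCount,
        nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDescription,
            parameterSetIndex: index,
            parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size,
            parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if let pointer = pointer {
            result.append(contentsOf: startCode)                        // Annex B start code
            result.append(UnsafeBufferPointer(start: pointer, count: size))
        }
    }
    return result   // SPS + PPS, typically prepended before each keyframe
}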
On iOS 17, the encoder is configured with an average bit rate of 5 Mbps.
First 25 minutes: the encoding bit rate is normal.
25-30 minutes: the bit rate drops to about 4 Mbps but recovers.
30-70 minutes: the bit rate is normal.
After 70 minutes: the bit rate suddenly drops to 1 Mbps and never recovers.
As shown in the figure below, the yellow line is the frame rate and the green line is the bit rate.
The code is shown below:
- (void)_setBitrate:(NSUInteger)bitrate forSession:(VTCompressionSessionRef)session {
NSParameterAssert(session && bitrate);
OSStatus status = VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(bitrate));
if (status != noErr) NSLog(@"set AverageBitRate error");
NSArray *limit = @[@(bitrate * 1.5/8), @(1)];
status = VTSessionSetProperty(session, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);
if (status != noErr) NSLog(@"set DataRateLimits error");
}
The problem only occurs on iOS 17. Does anyone know what the cause might be?
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:
VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES
Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:
{
mediaType:'vide'
mediaSubType:'av01'
mediaSpecific: {
codecType: 'av01' dimensions: 394 x 852
}
extensions: {{
CVFieldCount = 1;
CVImageBufferChromaLocationBottomField = Left;
CVImageBufferChromaLocationTopField = Left;
CVPixelAspectRatio = {
HorizontalSpacing = 1;
VerticalSpacing = 1;
};
FullRangeVideo = 0;
}}
}
but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume).
So it presumably has something to do with the extensions dictionary, but I can't find anywhere which set of extensions is necessary for it to work 😿.
VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1.
As of today, I am using Xcode 15.0 with the iOS 17.0 SDK.
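For reference, this is the direction I have been exploring, on the assumption (which I have not been able to confirm anywhere) that AV1 follows the same convention as H.264/HEVC and needs its codec configuration record delivered as a sample description extension atom ('av1C', analogous to 'avcC'/'hvcC'), taken from the container or built from the sequence header OBU:
import CoreMedia

// Sketch: build an AV1 format description, assuming the 'av1C' configuration record is available.
func makeAV1FormatDescription(av1CRecord: Data, width: Int32, height: Int32) -> CMVideoFormatDescription? {
    // Deliver the codec configuration the same way 'avcC'/'hvcC' are delivered for H.264/HEVC.
    let atoms: [String: Any] = ["av1C": av1CRecord]
    let extensions: [String: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String: atoms
    ]
    var formatDescription: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(
        allocator: kCFAllocatorDefault,
        codecType: kCMVideoCodecType_AV1,
        width: width,
        height: height,
        extensions: extensions as CFDictionary,
        formatDescriptionOut: &formatDescription)
    return status == noErr ? formatDescription : nil
}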