I don’t know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can’t scrub within the video, only within several seconds of the pause, or, worse, it restarts the whole damn video.
It used to be an easy and enjoyable process to harvest photos from a video, and you’ve turned it into the most frustrating part of operating my phone.
Please release a patch to optionally enable the old scrubbing behavior.
VideoToolbox
Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.
I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported.
Do I need to pay a license for using the H.264 format to MPEG LA? Or are these fees already covered by Apple?
I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware components), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
Xcode 16:
VT_EXPORT void
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) API_AVAILABLE(macos(10.8), ios(16.0), tvos(16.0), visionos(1.0)) API_UNAVAILABLE(watchos);
Xcode 15:
VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
CM_NULLABLE CFAllocatorRef allocator,
CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) VT_AVAILABLE_STARTING(10_8);
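For reference, a rough Swift usage sketch of the declaration above, guarded by the availability that the Xcode 16 header now requires (a sketch only; error handling is reduced to returning the OSStatus):
import VideoToolbox
import CoreVideo

// Sketch: create a VTPixelTransferSession, copy one CVPixelBuffer into another,
// then invalidate the session. Guarded by the availability the Xcode 16 header
// declares (macOS 10.8, iOS 16, tvOS 16).
func transferPixels(from source: CVPixelBuffer, to destination: CVPixelBuffer) -> OSStatus {
    guard #available(iOS 16.0, tvOS 16.0, macOS 10.8, *) else {
        return kVTPixelTransferNotSupportedErr
    }
    var session: VTPixelTransferSession?
    let status = VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault,
                                              pixelTransferSessionOut: &session)
    guard status == noErr, let session else { return status }
    defer { VTPixelTransferSessionInvalidate(session) }
    return VTPixelTransferSessionTransferImage(session, source, destination)
}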
Hi,
I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video)
to encode a stereoscopic (left eye, right eye) video frame using MV-HEVC. The sample project creates tagged buffers for the left and right eye and uses a writer to write the MV-HEVC-encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I render the MV-HEVC-encoded left-eye sample buffer and the right-eye sample buffer sequentially, will the AVSampleBufferVideoRenderer render it as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
I was trying to migrate Core Image-based code that rotates an image in a CVPixelBuffer to the newer VTPixelRotationSession from Video Toolbox, hoping to increase performance.
The original code does:
let rotatedImage = CIImage(cvPixelBuffer: origPixelBuffer).oriented(.left)
context.render(rotatedImage, to: newPixelBuffer)
The new code uses a session:
_ = VTPixelRotationSessionRotateImage(rotationSession, origPixelBuffer, newPixelBuffer)
However, I immediately ran into memory limitations, since my code has to be able to run in an iOS extension. It seems VTPixelRotationSessionRotateImage easily lets memory usage spike over the 50MB of allowed memory, while the CIImage-based implementation shows no such high memory usage at all.
Is this expected? Does the VTPixelRotationSession implementation gain more performance by sacrificing memory? Or is there something I'm overlooking?
I was expecting the VTPixelRotationSession at worst to be on par in terms of memory usage and processing speed compared to CIImage. At this moment it seems VTPixelRotationSession is unusable in extensions.
See also Feedback: FB14977240
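For reference, a minimal sketch of the rotation-session setup on the VideoToolbox side (an assumption here is that .oriented(.left) maps to a 90° counter-clockwise rotation):
import VideoToolbox
import CoreVideo

// Sketch: create the rotation session once, configure the rotation angle,
// and reuse the session for every frame.
var rotationSession: VTPixelRotationSession?
let createStatus = VTPixelRotationSessionCreate(allocator: kCFAllocatorDefault,
                                                pixelRotationSessionOut: &rotationSession)
if createStatus == noErr, let session = rotationSession {
    // Assumption: .oriented(.left) corresponds to a 90° counter-clockwise rotation.
    let setStatus = VTSessionSetProperty(session,
                                         key: kVTPixelRotationPropertyKey_Rotation,
                                         value: kVTRotation_CCW90)
    assert(setStatus == noErr)
    // Per frame, as in the post:
    // _ = VTPixelRotationSessionRotateImage(session, origPixelBuffer, newPixelBuffer)
}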
My team and I have created an iPhone application that receives and utilizes sensor data from a separate Raspberry Pi-powered device.
The iPhone app does not function without the use of the sensor data. How will the App Reviewers be able to test the application when submitting to the App Store?
How is it possible to enable EDR on Apple TV without AVFoundation for custom HDR video playback? The use case is a custom video player for HDR playback via VideoToolbox and Metal, which seem to render colors correctly on iOS but not on tvOS.
All related documentation and WWDC sessions describe APIs that are unavailable for tvOS:
let metalLayer = CAMetalLayer()
metalLayer.wantsExtendedDynamicRangeContent = true
metalLayer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.0, maxLuminance: 1000, opticalOutputScale: 100)
What's the alternative path for tvOS to have correct system tone mapping for a setup like:
metalLayer.pixelFormat = .rgba16Float // (or .bgr10_xr)
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
Video format: HEVC, YUV 4:2:0 10bit, BT.2020 PQ.
We do set the preferredDisplayCriteria on AVDisplayManager and thus video range matching is in place.
WWDC Ref: https://developer.apple.com/videos/play/wwdc2022/110565?time=557
Is there any way to place 3D objects, maybe using ARKit or MetalKit, in a video?
I have tried extracting frames from the video, drawing a cube using an SCNNode, rendering it into a UIImage, then gathering all the images and creating a video.
But this is not a feasible solution, as it creates a huge memory spike and ultimately triggers memory warnings.
So is there any other way to draw 3D objects on a video file?
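One direction that is sometimes suggested for the pipeline described above (a hedged sketch, not a confirmed fix): keep a single off-screen SCNRenderer and render the 3D overlay per frame on demand, inside an autoreleasepool, instead of accumulating an array of UIImages; the compositing and writing steps are assumed to live elsewhere.
import SceneKit
import Metal
import UIKit

// Rough sketch: one reusable off-screen renderer for the 3D content.
let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
let scene = SCNScene()
scene.rootNode.addChildNode(SCNNode(geometry: SCNBox(width: 0.2, height: 0.2,
                                                     length: 0.2, chamferRadius: 0)))
renderer.scene = scene

// Render the overlay for a single timestamp; the autoreleasepool keeps only one
// rendered image alive at a time when this is called from a per-frame loop.
func renderOverlay(at seconds: TimeInterval, size: CGSize) -> UIImage {
    autoreleasepool {
        renderer.snapshot(atTime: seconds, with: size, antialiasingMode: .multisampling4X)
    }
}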
Hello
I am testing the new Media Extension API in macOS 15 Beta 4.
Firstly, THANK YOU FOR THIS API!!!!!! This is going to be huge for the video ecosystem on the platform. Seriously!
My understanding is that to support custom container formats you make a MEFormatReader extension, and to support a specific custom codec, you create a MEVideoDecoder for that codec.
OK, I have followed the docs (especially the inline header info) and have gotten quite far:
A host app which hosts my Media Extension (MKV files)
An extension bundle which exposes the UTTypes it supports to the system and the plugin class ID, as per the docs
Entitlements as per docs
I'm building debug - but I have a valid Developer ID / Account associated in Teams in Xcode
My Plugin is visible to the Media Extension System preference
My Plugin is properly initialized, I get the MEByteReader and can read container level metadata in callbacks
I can instantiate my tracks readers, and validate the tracks level information and provide the callbacks
I can instantiate my sample cursors, and respond to seek requests for samples for the track in question
Now, here is where I hit some issues.
My format reader leverages FFmpeg's libavformat library, and I am testing with MKV files that contain AVC1 H.264 samples, which as I understand it should be decodable out of the box by VideoToolbox (i.e., I do not need a separate MEVideoDecoder plugin to handle this format).
Here is my CMFormatDescription which I vend from my MKV parser to AVFoundation via the track reader
Made Format Description: <CMVideoFormatDescription 0x11f005680 [0x1f7d62220]> {
mediaType:'vide'
mediaSubType:'avc1'
mediaSpecific: {
codecType: 'avc1' dimensions: 1920 x 1080
}
extensions: {(null)}
}
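For comparison, a rough Swift sketch of how an 'avc1' format description is typically created with the parameter sets in its extensions (an assumption-level aside; in the MKV case the SPS/PPS would come from the track's codec private data):
import CoreMedia

// Sketch: build a CMVideoFormatDescription from H.264 SPS/PPS so the resulting
// description carries parameter-set extensions rather than extensions: {(null)}.
// `sps` and `pps` are assumed to be raw parameter-set NAL units without start codes.
func makeAVCFormatDescription(sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    let status = sps.withUnsafeBufferPointer { (spsPtr: UnsafeBufferPointer<UInt8>) -> OSStatus in
        pps.withUnsafeBufferPointer { (ppsPtr: UnsafeBufferPointer<UInt8>) -> OSStatus in
            let parameterSets: [UnsafePointer<UInt8>] = [spsPtr.baseAddress!, ppsPtr.baseAddress!]
            let parameterSetSizes: [Int] = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: parameterSets.count,
                parameterSetPointers: parameterSets,
                parameterSetSizes: parameterSetSizes,
                nalUnitHeaderLength: 4,              // 4-byte AVCC length prefixes
                formatDescriptionOut: &formatDescription)
        }
    }
    return status == noErr ? formatDescription : nil
}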
My MESampleCursor implementation implements all of the callbacks, and some of the 'optional' sample cursor location methods (I'm only sharing the optional ones here):
- (MESampleLocation * _Nullable) sampleLocationReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
- (MESampleCursorChunk * _Nullable) chunkDetailsReturningError:(NSError *__autoreleasing _Nullable * _Nullable) error
I also populate the AVSampleCursorSyncInfo and AVSampleCursorDependencyInfo structs for each AVPacket* I decode from libavformat.
Now my issue:
I get these log messages in my host app:
<<<< VRP >>>> figVideoRenderPipelineSetProperty signalled err=-12852 (kFigRenderPipelineError_InvalidParameter) (sample attachment collector not enabled) at FigStandardVideoRenderPipeline.c:2231
<<<< VideoMentor >>>> videoMentorDependencyStateCopyCursorForDecodeWalk signalled err=-12836 (kVideoMentorUnexpectedSituationErr) (Node not found for target cursor -- it should have been created during videoMentorDependencyStateAddSamplesToGraph) at VideoMentor.c:4982
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
<<<< VideoMentor >>>> videoMentorThreadCreateSampleBuffer signalled err=-12841 (err) (FigSampleGeneratorCreateSampleBufferAtCursor failed) at VideoMentor.c:3960
I presume this is telling me that I am not providing the GOP or dependency metadata correctly to the plugin.
I've included console logs from my extension and host app:
LibAVExtension system logs
And my SampleCursor implementation is here
https://github.com/vade/FFMPEGMediaExtension/blob/main/LibAVExtension/LibAVSampleCursor.m
Any guidance is very helpful.
Thank you!
tl;dr how can I get raw YUV in a Metal fragment shader from a VideoToolbox 10-bit/BT.2020 HEVC stream without any extra/secret format conversions?
With VideoToolbox and 10-bit HEVC, I've found that it defaults to CVPixelBuffers with the formats kCVPixelFormatType_Lossless_420YpCbCr10PackedBiPlanarFullRange or kCVPixelFormatType_Lossy_420YpCbCr10PackedBiPlanarFullRange. To mitigate this, I added the following snippet of code to my application:
// We need our pixels unpacked for 10-bit so that the Metal textures actually work
var pixelFormat:OSType? = nil
let bpc = getBpcForVideoFormat(videoFormat!)
let isFullRange = getIsFullRangeForVideoFormat(videoFormat!)
// TODO: figure out how to check for 422/444, CVImageBufferChromaLocationBottomField?
if bpc == 10 {
pixelFormat = isFullRange ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
}
let videoDecoderSpecification:[NSString: AnyObject] = [kVTVideoDecoderSpecification_EnableHardwareAcceleratedVideoDecoder:kCFBooleanTrue]
var destinationImageBufferAttributes:[NSString: AnyObject] = [kCVPixelBufferMetalCompatibilityKey: true as NSNumber, kCVPixelBufferPoolMinimumBufferCountKey: 3 as NSNumber]
if pixelFormat != nil {
destinationImageBufferAttributes[kCVPixelBufferPixelFormatTypeKey] = pixelFormat! as NSNumber
}
var decompressionSession:VTDecompressionSession? = nil
err = VTDecompressionSessionCreate(allocator: nil, formatDescription: videoFormat!, decoderSpecification: videoDecoderSpecification as CFDictionary, imageBufferAttributes: destinationImageBufferAttributes as CFDictionary, outputCallback: nil, decompressionSessionOut: &decompressionSession)
In short, I need kCVPixelFormatType_420YpCbCr10BiPlanar so that I have a straightforward MTLPixelFormat.r16Unorm/MTLPixelFormat.rg16Unorm texture binding for Y/CbCr. Metal, seemingly, has no direct pixel format for 420YpCbCr10PackedBiPlanar. I'd also rather not use any color conversion in VideoToolbox, in order to save on processing (and to ensure that the color transforms/transfer characteristics match between streamer/client, since I also have a custom transfer characteristic to mitigate blocking in dark scenes).
However, I noticed that in visionOS 2, the CVPixelBuffer I receive is no longer a compressed render target (likely a bug), which caused GPU texture read bandwidth to skyrocket from 2GiB/s to 30GiB/s. More importantly, this implies that VideoToolbox may in fact be doing an extra color conversion step, wasting memory bandwidth.
Does Metal actually have no way to handle 420YpCbCr10PackedBiPlanar? Are there any examples for reading 10-bit HDR HEVC buffers directly with Metal?
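For what it's worth, a sketch of how the two planes of a 420YpCbCr10BiPlanar buffer are typically bound as Metal textures (assuming the decoder has been configured to output that format, as in the snippet above):
import CoreVideo
import Metal

// Sketch: wrap plane 0 (Y) as .r16Unorm and plane 1 (CbCr) as .rg16Unorm via a
// CVMetalTextureCache, so a fragment shader can sample the (scaled) YUV values directly.
func makeYUVTextures(from pixelBuffer: CVPixelBuffer,
                     cache: CVMetalTextureCache) -> (y: MTLTexture, cbcr: MTLTexture)? {
    func plane(_ index: Int, _ format: MTLPixelFormat) -> MTLTexture? {
        var cvTexture: CVMetalTexture?
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, index)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, index)
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            format, width, height, index, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
    guard let y = plane(0, .r16Unorm), let cbcr = plane(1, .rg16Unorm) else { return nil }
    return (y, cbcr)
}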
I'm creating an Objective-C command-line utility to encode RAW image sequences to ProRes 4444, but I'm encountering blocky compression artifacts in the ProRes 4444 video output.
To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding, and the PNG looks perfect; I can see all the detail in every part of the image's dynamic range.
Here's a comparison between the 16-bit PNG (on the right) and the ProRes 4444 output (on the left).
As a further test, I re-encoded the test PNG to ProRes 4444 using DaVinci Resolve, and the ProRes 4444 output video from Resolve doesn't have any blocky compression artifacts; it looks identical.
In short, this is what the utility does:
Unpacks the 12-bit raw data into 16-bit values. After unpacking, the raw data is debayered to convert it into a standard color image format (BGR) using OpenCV.
Scales the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. Up to this point everything is fine, confirmed by saving 16-bit PNGs.
Encodes the images to ProRes 4444 using the AVFoundation framework.
The pixel buffers are created and managed using the dictionary method with kCVPixelFormatType_64RGBALE.
I need help figuring this out; I'm a real novice when it comes to AVFoundation/encoding to ProRes.
See relevant parts of my 'encodeToProRes' function:
void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) {
    NSError *error = nil;
    NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error];
    if (error) {
        std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl;
        return;
    }

    // Load the first image to get the dimensions
    std::cout << "Debayering the first image to get dimensions..." << std::endl;
    Mat firstImage;
    int width = 5320;
    int height = 3900;
    if (!debayer_image(rawPaths[0], firstImage, width, height)) {
        std::cerr << "Error debayering the first image" << std::endl;
        return;
    }
    width = firstImage.cols;
    height = firstImage.rows;

    // Save the first frame as a PNG 16-bit image for validation
    std::string pngFilePath = outputPath + "_frame1.png";
    if (!imwrite(pngFilePath, firstImage)) {
        std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl;
    } else {
        std::cout << "First frame saved as PNG: " << pngFilePath << std::endl;
    }

    NSString *codecKey = nil;
    if (proResFlavor == "4444") {
        codecKey = AVVideoCodecTypeAppleProRes4444;
    } else if (proResFlavor == "422HQ") {
        codecKey = AVVideoCodecTypeAppleProRes422HQ;
    } else if (proResFlavor == "422") {
        codecKey = AVVideoCodecTypeAppleProRes422;
    } else if (proResFlavor == "LT") {
        codecKey = AVVideoCodecTypeAppleProRes422LT;
    } else {
        std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl;
        return;
    }

    NSDictionary *outputSettings = @{
        AVVideoCodecKey: codecKey,
        AVVideoWidthKey: @(width),
        AVVideoHeightKey: @(height)
    };
    AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    videoInput.expectsMediaDataInRealTime = YES;

    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE),
        (id)kCVPixelBufferWidthKey: @(width),
        (id)kCVPixelBufferHeightKey: @(height)
    };
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes];
    ...
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps
    int numFrames = static_cast<int>(rawPaths.size());
    ...
    // Encoding thread
    std::thread encoderThread([&]() {
        int frameIndex = 0;
        std::vector<CVPixelBufferRef> pixelBufferBuffer;
        while (frameIndex < numFrames) {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; });
            if (!frameQueue.empty()) {
                auto [index, debayeredImage] = frameQueue.front();
                frameQueue.pop();
                lock.unlock();
                if (index == frameIndex) {
                    cv::Mat rgbaImage;
                    cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA);
                    CVPixelBufferRef pixelBuffer = NULL;
                    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer);
                    if (result != kCVReturnSuccess) {
                        std::cerr << "Error: Could not create pixel buffer" << std::endl;
                        dispatch_group_leave(dispatchGroup);
                        return;
                    }
                    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
                    for (int row = 0; row < height; ++row) {
                        memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer),
                               rgbaImage.ptr(row),
                               width * 8);
                    }
                    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                    pixelBufferBuffer.push_back(pixelBuffer);
                    ...
Thanks very much!
Hello,
I notice that when using H.265 VideoToolbox encoding with HandBrake (latest snapshots), the resulting output is larger than the H.264 source.
This was already the case on macOS 14.x and below.
Is this a known "regression"?
Thanks.
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
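For reference, the experiment looked roughly like this (a stripped-down sketch of appending one BGRA pixel buffer per PNG to an HEVC-with-alpha movie via AVAssetWriter; the helper name and the fixed 30 fps timescale are arbitrary choices, and nothing here makes the result lossless):
import AVFoundation
import CoreVideo

// Stripped-down sketch: write a sequence of pixel buffers (one per PNG) into an
// HEVC-with-alpha QuickTime movie.
func writeImagesAsMovie(pixelBuffers: [CVPixelBuffer], size: CGSize, to url: URL) throws {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mov)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevcWithAlpha,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height,
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for (index, buffer) in pixelBuffers.enumerated() {
        while !input.isReadyForMoreMediaData { usleep(1000) }   // naive back-pressure
        let time = CMTime(value: CMTimeValue(index), timescale: 30)
        if !adaptor.append(buffer, withPresentationTime: time) { break }
    }
    input.markAsFinished()
    writer.finishWriting { /* inspect writer.status / writer.error here */ }
}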
I'm trying to cast the screen from an iOS device to an Android device.
I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression.
While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android.
Data transmission over the TCP socket seems to be functioning correctly.
My question is:
Is there a way to ensure a common container format for H.264 compression and decompression across iOS and Android platforms?
Here's a breakdown of the iOS sender details:
Device: iPhone 13 mini running iOS 17
Development Environment: Xcode 15 with a minimum deployment target of iOS 16
Screen Capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
Video Compression: VideoToolbox for H.264 compression
Compression Properties:
kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
kVTCompressionPropertyKey_RealTime: true (real-time encoding)
kVTCompressionPropertyKey_Quality: 1 (lowest quality)
NAL Unit Handling: Custom header is added to NAL units
Android Receiver Details:
Device: RedMi 7A running Android 10
Video Decoding: MediaCodec API for receiving and decoding the H.264 stream
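For context on the container-format question: VideoToolbox produces AVCC-style sample buffers (4-byte length prefixes, with SPS/PPS stored in the format description), while a raw H.264 stream fed to Android's MediaCodec is generally expected in Annex B form (start codes, with in-band SPS/PPS). A hedged sketch of the usual sender-side conversion:
import CoreMedia
import Foundation

// Sketch: convert one AVCC sample buffer from VTCompressionSession into an
// Annex B byte stream (start codes + in-band SPS/PPS on keyframes).
func annexBData(from sampleBuffer: CMSampleBuffer, isKeyframe: Bool) -> Data? {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var output = Data()

    // On keyframes, prepend SPS/PPS pulled out of the format description.
    if isKeyframe {
        var parameterSetCount = 0
        _ = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription,
            parameterSetIndex: 0, parameterSetPointerOut: nil, parameterSetSizeOut: nil,
            parameterSetCountOut: &parameterSetCount, nalUnitHeaderLengthOut: nil)
        for index in 0..<parameterSetCount {
            var pointer: UnsafePointer<UInt8>?
            var size = 0
            _ = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(formatDescription,
                parameterSetIndex: index, parameterSetPointerOut: &pointer,
                parameterSetSizeOut: &size, parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
            if let pointer { output.append(startCode); output.append(pointer, count: size) }
        }
    }

    // Walk the block buffer, replacing each 4-byte big-endian length with a start code.
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    guard CMBlockBufferGetDataPointer(dataBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                      totalLengthOut: &totalLength,
                                      dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
          let base = dataPointer else { return nil }
    var offset = 0
    while offset + 4 <= totalLength {
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = CFSwapInt32BigToHost(nalLength)
        output.append(startCode)
        base.withMemoryRebound(to: UInt8.self, capacity: totalLength) { bytes in
            output.append(bytes + offset + 4, count: Int(nalLength))
        }
        offset += 4 + Int(nalLength)
    }
    return output
}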
I am a bit confused about whether certain Video Toolbox (VT) encoders support hardware acceleration or not.
When I query the list of VT encoders (VTCopyVideoEncoderList(nil, &encoderList)) on an iPhone 14 Pro device, for the avc1 (AVC / H.264) and hevc1 (HEVC / H.265) encoders, the kVTVideoEncoderList_IsHardwareAccelerated flag is not there, which, based on the documentation in VTVideoEncoderList.h, means that the encoders do not support hardware acceleration:
optional. CFBoolean. If present and set to kCFBooleanTrue, indicates that the encoder is hardware accelerated.
In fact, no encoders from this list return this flag as true, and most of them do not include the flag at all in their dictionaries.
On the other hand, when I create a compression session using VTCompressionSessionCreate() and pass kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as true in the encoder specification, after querying kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder using the following code, I get a CFBoolean value of true for both the H.264 and H.265 encoders.
In fact, I get a true value (for both of the aforementioned encoders) even if I don't specify kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder during the creation of the compression session (note here that this flag was introduced in iOS 17.4 ^1).
So the question is: Are those encoders actually hardware accelerated on my device, and if so, why isn't that reflected on the VTCopyVideoEncoderList() call?
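Roughly, that property query looks like this (a minimal sketch, since the code itself isn't included above):
import VideoToolbox

// Sketch: ask an existing VTCompressionSession whether it ended up using a
// hardware-accelerated encoder.
func isUsingHardwareEncoder(_ session: VTCompressionSession) -> Bool {
    var value: CFTypeRef?
    let status = VTSessionCopyProperty(session,
                                       key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
                                       allocator: kCFAllocatorDefault,
                                       valueOut: &value)
    guard status == noErr, let boolValue = value as? Bool else { return false }
    return boolValue
}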
I have been seeing some crash reports for my app on some devices (not all of them). The crash occurs while converting a CVPixelBuffer captured from video to a JPG using VTCreateCGImageFromCVPixelBuffer from VideoToolbox. I have not been able to reproduce the crash on local devices, even under adverse memory conditions (many apps running in the background).
The field crash reports show that VTCreateCGImageFromCVPixelBuffer does the conversion on another thread and that that thread crashed in a call to vConvert_420Yp8_CbCr8ToARGB8888_vec.
Any suggestions on how to debug this further would be helpful.
I have an image viewing app with support for AVIF (and AVIS) images. I'm trying to figure out if the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097
The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context.
From some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1.
From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
I am using a VideoToolbox VTCompressionSession to encode frames in H.264 format, which I then send through a WebSocket to a browser. The received frames are decoded and the output is rendered on the website. Now, when using some encoders, the video is always rendered with a four-frame latency.
How frames are sent to the server:
start>------------ f1 ------------ f2 ------------ f3 ------------ f4 ------------- f5 ...
How rendering happens:
start>-------------------------------------------------------------------------- f1 ------------ f2 ------------ f3 ------------ f4 ----------- ...
This sometimes becomes a two-frame latency and sometimes a sixteen-frame latency, so usability is affected.
I'm using this configuration in VideoToolbox's VTCompressionSession:
kVTCompressionPropertyKey_AverageBitRate=3MB
kVTCompressionPropertyKey_ExpectedFrameRate=24
kVTCompressionPropertyKey_RealTime=true
kVTCompressionPropertyKey_ProfileLevel=kVTProfileLevel_H264_High_AutoLevel
kVTCompressionPropertyKey_AllowFrameReordering = false
kVTCompressionPropertyKey_MaxKeyFrameInterval=1000
With the same configuration I am able to achieve 1-in/1-out with com.apple.videotoolbox.videoencoder.h264.gva.
This issue reproduces with the encoder com.apple.videotoolbox.videoencoder.ave.avc.
I'm not sure if it's encoder-specific. I have also seen that there are differences in the VUI parameters between the encoded output of the two encoders.
I want to know if there is something I could do, either in the encoder configuration or through another API provided by VideoToolbox, to ensure that frames are decoded and rendered by the decoder without this delay.
Thanks in advance.
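For reference, the configuration listed above written out as the corresponding property calls (a sketch against an already-created compression session; 3_000_000 assumes the "3MB" above means roughly 3 Mbps):
import VideoToolbox

// Sketch: apply the configuration from the post to an existing VTCompressionSession
// and assert that every property was accepted.
func applyPostConfiguration(to session: VTCompressionSession) {
    let statuses: [OSStatus] = [
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AverageBitRate,
                             value: NSNumber(value: 3_000_000)),   // assumed ~3 Mbps
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate,
                             value: NSNumber(value: 24)),
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime,
                             value: kCFBooleanTrue),
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                             value: kVTProfileLevel_H264_High_AutoLevel),
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering,
                             value: kCFBooleanFalse),
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval,
                             value: NSNumber(value: 1000)),
    ]
    assert(statuses.allSatisfy { $0 == noErr })
}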
So I've been trying for weeks now to implement a compression mechanism into my app project that compresses MV-HEVC video files in-app without stripping videos of their 3D properties, but every single implementation I have tried has either stripped the encoded MV-HEVC video file of its 3D properties (making the video monoscopic) or has crashed with a fatal error. I've read the Reading multiview 3D video files and Converting side-by-side 3D video to multiview HEVC documentation, but was unable to come up with anything useful myself.
My question therefore is: How do you go about compressing/encoding an MV-HEVC video file in-app whilst preserving the stereoscopic/3D properties of that MV-HEVC video file? Below is the best implementation I was able to come up with (which simply compresses uploaded MV-HEVC videos with an arbitrary bit rate). With this implementation (my compressVideo function), the MV-HEVC files that go through it are compressed fine, but the final result is the loss of that MV-HEVC video file's stereoscopic/3D properties.
If anyone could point me in the right direction with anything it would be greatly, greatly appreciated.
My current implementation (that strips MV-HEVC videos of their stereoscopic/3D properties):
static func compressVideo(sourceUrl: URL, bitrate: Int, completion: @escaping (Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: sourceUrl)
    asset.loadTracks(withMediaType: .video) { videoTracks, videoError in
        guard let videoTrack = videoTracks?.first, videoError == nil else {
            completion(.failure(videoError ?? NSError(domain: "VideoUploader", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to load video track"])))
            return
        }
        asset.loadTracks(withMediaType: .audio) { audioTracks, audioError in
            guard let audioTrack = audioTracks?.first, audioError == nil else {
                completion(.failure(audioError ?? NSError(domain: "VideoUploader", code: -2, userInfo: [NSLocalizedDescriptionKey: "Failed to load audio track"])))
                return
            }
            let outputUrl = sourceUrl.deletingLastPathComponent().appendingPathComponent(UUID().uuidString).appendingPathExtension("mov")
            guard let assetReader = try? AVAssetReader(asset: asset),
                  let assetWriter = try? AVAssetWriter(outputURL: outputUrl, fileType: .mov) else {
                completion(.failure(NSError(domain: "VideoUploader", code: -3, userInfo: [NSLocalizedDescriptionKey: "AssetReader/Writer initialization failed"])))
                return
            }
            let videoReaderSettings: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB]
            let videoSettings: [String: Any] = [
                AVVideoCompressionPropertiesKey: [AVVideoAverageBitRateKey: bitrate],
                AVVideoCodecKey: AVVideoCodecType.hevc,
                AVVideoHeightKey: videoTrack.naturalSize.height,
                AVVideoWidthKey: videoTrack.naturalSize.width
            ]
            let assetReaderVideoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
            let assetReaderAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
            if assetReader.canAdd(assetReaderVideoOutput) {
                assetReader.add(assetReaderVideoOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -4, userInfo: [NSLocalizedDescriptionKey: "Couldn't add video output reader"])))
                return
            }
            if assetReader.canAdd(assetReaderAudioOutput) {
                assetReader.add(assetReaderAudioOutput)
            } else {
                completion(.failure(NSError(domain: "VideoUploader", code: -5, userInfo: [NSLocalizedDescriptionKey: "Couldn't add audio output reader"])))
                return
            }
            let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
            let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
            videoInput.transform = videoTrack.preferredTransform
            assetWriter.shouldOptimizeForNetworkUse = true
            assetWriter.add(videoInput)
            assetWriter.add(audioInput)
            assetReader.startReading()
            assetWriter.startWriting()
            assetWriter.startSession(atSourceTime: CMTime.zero)
            let videoInputQueue = DispatchQueue(label: "videoQueue")
            let audioInputQueue = DispatchQueue(label: "audioQueue")
            videoInput.requestMediaDataWhenReady(on: videoInputQueue) {
                while videoInput.isReadyForMoreMediaData {
                    if let sample = assetReaderVideoOutput.copyNextSampleBuffer() {
                        videoInput.append(sample)
                    } else {
                        videoInput.markAsFinished()
                        if assetReader.status == .completed {
                            assetWriter.finishWriting {
                                completion(.success(outputUrl))
                            }
                        }
                        break
                    }
                }
            }
            audioInput.requestMediaDataWhenReady(on: audioInputQueue) {
                while audioInput.isReadyForMoreMediaData {
                    if let sample = assetReaderAudioOutput.copyNextSampleBuffer() {
                        audioInput.append(sample)
                    } else {
                        audioInput.markAsFinished()
                        break
                    }
                }
            }
        }
    }
}
Hi everyone, I need to add a spatial video maker to my app, which was written in Objective-C. I found some reference code in Swift; can you help me convert it to Objective-C?
let left = CMTaggedBuffer(
    tags: [.stereoView(.leftEye), .videoLayerID(leftEyeLayerIndex)], pixelBuffer: leftEyeBuffer)
let right = CMTaggedBuffer(
    tags: [.stereoView(.rightEye), .videoLayerID(rightEyeLayerIndex)],
    pixelBuffer: rightEyeBuffer)
let result = adaptor.appendTaggedBuffers(
    [left, right], withPresentationTime: leftPresentationTs)