Posts

Post not yet marked as solved
0 Replies
128 Views
As a straightforward example, I've taken Apple's MV-HEVC sample project and added two lines. First, after the AVAssetWriterInput is created:

    frameInput.performsMultiPassEncodingIfSupported = true

Second, after the call to multiviewWriter.startWriting():

    print("canPerformMultiplePasses: \(frameInput.canPerformMultiplePasses)")

This prints true, which leads me to believe that the first encoding pass should proceed as normal (even though I haven't yet handled the logic for the completion of the first pass, etc.). However, when the code attempts to append tagged buffers to the AVAssetWriterInputTaggedPixelBufferGroupAdaptor, I receive this error:

    Fatal error: Failed to append tagged buffers to multiview output

Am I missing a step? Or is multi-pass encoding only supported for standard sample/pixel buffers (and not tagged buffers)?
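For completeness, here's the shape of the pass-handling logic I expect I'd eventually need. This is only a sketch: frameInput and multiviewWriter come from the sample project, and serialQueue is a serial DispatchQueue of my own.

    frameInput.performsMultiPassEncodingIfSupported = true

    frameInput.respondToEachPassDescription(on: serialQueue) {
        if let pass = frameInput.currentPassDescription {
            // sourceTimeRanges lists the time ranges to (re)append for this
            // pass; append those tagged buffers, then call
            // frameInput.markCurrentPassAsFinished() once appends complete.
            _ = pass.sourceTimeRanges
        } else {
            // A nil description means no further passes were requested.
            frameInput.markAsFinished()
        }
    }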
Post not yet marked as solved
0 Replies
118 Views
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene. Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
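For context, this is the CompositorLayer route I mentioned (and would like to avoid). A sketch of the foveation knob; NoFoveationConfiguration is my own name, while the protocol and flag come from CompositorServices:

    import SwiftUI
    import CompositorServices

    struct NoFoveationConfiguration: CompositorLayerConfiguration {
        func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                               configuration: inout LayerRenderer.Configuration) {
            // The setting in question; using it means driving my own Metal renderer.
            configuration.isFoveationEnabled = false
        }
    }

    // ImmersiveSpace(id: "Immersive") {
    //     CompositorLayer(configuration: NoFoveationConfiguration()) { renderer in
    //         // ...custom Metal render loop...
    //     }
    // }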
Post not yet marked as solved
0 Replies
166 Views
I know there have been a lot of questions about playing back spatial/immersive/MV-HEVC video content on the Vision Pro. Today, I released an example player on GitHub that might answer some questions. Of course, without official documentation on some of these formats, it could be that Apple will eventually do something a little different. We'll just have to wait. In the meantime: https://github.com/mikeswanson/SpatialPlayer
Post not yet marked as solved
2 Replies
427 Views
I don't know when these were posted, but I noticed them in the AVFoundation documentation last night. There have been a lot of questions about working with this format, and these are useful. They also include code samples.

Reading multiview 3D video files:
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/reading_multiview_3d_video_files

Converting side-by-side 3D video to multiview HEVC:
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc
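As I read it, the conversion article boils down to tagging each eye's pixel buffer with a layer ID and a stereo view, then appending the pair through an AVAssetWriterInputTaggedPixelBufferGroupAdaptor. A condensed sketch (the two-layer values mirror the article's left/right setup; helper names are my own):

    import AVFoundation
    import CoreMedia
    import VideoToolbox

    // Writer input settings: HEVC with two multiview layers (left/right eyes).
    func makeMultiviewOutputSettings(width: Int, height: Int) -> [String: Any] {
        let compressionProperties: [CFString: Any] = [
            kVTCompressionPropertyKey_MVHEVCVideoLayerIDs: [0, 1],
            kVTCompressionPropertyKey_MVHEVCViewIDs: [0, 1],
            kVTCompressionPropertyKey_MVHEVCLeftAndRightViewIDs: [0, 1],
            kVTCompressionPropertyKey_HasLeftStereoEyeView: true,
            kVTCompressionPropertyKey_HasRightStereoEyeView: true
        ]
        return [
            AVVideoCodecKey: AVVideoCodecType.hevc,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height,
            AVVideoCompressionPropertiesKey: compressionProperties
        ]
    }

    // Per frame: one tagged buffer per eye, appended as a group.
    func appendStereoFrame(left: CVPixelBuffer, right: CVPixelBuffer,
                           at time: CMTime,
                           to adaptor: AVAssetWriterInputTaggedPixelBufferGroupAdaptor) -> Bool {
        let taggedBuffers: [CMTaggedBuffer] = [
            .init(tags: [.videoLayerID(0), .stereoView(.leftEye)], pixelBuffer: left),
            .init(tags: [.videoLayerID(1), .stereoView(.rightEye)], pixelBuffer: right)
        ]
        return adaptor.appendTaggedBuffers(taggedBuffers, withPresentationTime: time)
    }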
Post not yet marked as solved
0 Replies
232 Views
Does anyone have any knowledge or experience with Apple's fisheye projection type? I'm guessing that it's as the name implies: a circular capture in a square frame (encoded in MV-HEVC) that is de-warped during playback. It'd be nice to be able to experiment with this format without guessing/speculating on what to produce.
Post not yet marked as solved
0 Replies
409 Views
What are the Mac hardware and software requirements to decode and encode MV-HEVC video with AVFoundation? Many of the new MV-HEVC-related keys require macOS 14.0+, so I'm guessing that macOS Sonoma or later is required on the software side. What about processor architectures?

I can read an MV-HEVC source on my Apple Silicon M1, but when I run the same code on my Intel Mac mini (2018) running Sonoma 14.3, AVAssetReader's startReading() returns false. Similarly, when I try to create an AVAssetWriterInput with MV-HEVC output settings, I receive:

    -[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] Compression property MVHEVCVideoLayerIDs is not supported for video codec type 'hvc1'

Is this because Intel-based Macs don't support MV-HEVC? Or am I missing something else?
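As a data point, VideoToolbox appears to include runtime capability checks for stereo MV-HEVC (macOS 14.0+) that could confirm this per machine rather than guessing. A sketch, assuming these report hardware support:

    import VideoToolbox

    if #available(macOS 14.0, *) {
        // Presumably false on machines that can't handle MV-HEVC.
        print("MV-HEVC decode supported: \(VTIsStereoMVHEVCDecodeSupported())")
        print("MV-HEVC encode supported: \(VTIsStereoMVHEVCEncodeSupported())")
    }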
Post marked as solved
4 Replies
602 Views
I’m working with the Spatial Video related APIs in AVFoundation, and while I can create an AVAssetReader that reads an AVAssetTrack that reports a .containsStereoMultiviewVideo media characteristic (on a spatial video recorded by an iPhone 15 Pro), the documentation doesn’t make it clear how I can obtain the secondary video frame from that track. Does anyone know where to look? I've scoured the forums, documentation, and other resources, and I've had no luck. Thanks!
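For anyone who lands here, the direction that looks most promising to me is requesting both layer IDs through the video decompression properties and reading each sample's tagged buffers. A sketch (unverified; url is assumed, and 0/1 is my guess at the left/right layer IDs):

    import AVFoundation
    import CoreMedia
    import VideoToolbox

    func readSpatialFrames(from url: URL) async throws {
        let asset = AVURLAsset(url: url)
        guard let track = try await asset.loadTracks(
            withMediaCharacteristic: .containsStereoMultiviewVideo).first else { return }

        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            AVVideoDecompressionPropertiesKey: [
                // Ask the decoder to emit both layers.
                kVTDecompressionPropertyKey_RequestedMVHEVCVideoLayerIDs: [0, 1]
            ]
        ])
        reader.add(output)
        reader.startReading()

        while let sample = output.copyNextSampleBuffer() {
            // With multiple layers requested, each sample should carry tagged
            // buffers (one per view) instead of a single image buffer.
            if let taggedBuffers = sample.taggedBuffers {
                for tagged in taggedBuffers {
                    _ = tagged // tags include .videoLayerID(...) / .stereoView(...)
                }
            }
        }
    }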
Post not yet marked as solved
4 Replies
2.2k Views
Our app is recording video from an iPhone and using ffmpeg to create an HLS stream. We then play back the stream using AVPlayer on iOS. When we record "vertical" (portrait) video, we notice that the stream plays back incorrectly. However, if we point AVPlayer directly at the fMP4, it plays back correctly. We can see the rotation metadata in the fMP4 itself, so it makes sense that direct playback works as expected. It seems that the HLS playback path ignores this same metadata. Is that correct? In the meantime, we're programmatically rotating the player to accommodate.

Here's a screen recording of the simulator playing back both versions; it's easy to see the issue: https://drive.google.com/file/d/1szeIcGFM7qL4IlB3vpKLbLTR9GRkq_4o/view?usp=sharing

Here's a link to the extremely basic Xcode project in that example: https://drive.google.com/open?id=1zuZ0IzxjFw606fBNpsNW3hWIe2GTTGmy

And here are the URLs to the media:

https://alfredo-soba.s3.us-west-2.amazonaws.com/assets/0bcb458e-8627-48d5-9886-7c1785ed28ee/video.mp4
https://alfredo-soba.s3-us-west-2.amazonaws.com/assets/0bcb458e-8627-48d5-9886-7c1785ed28ee/playlist.m3u8

The playlist.m3u8 points to the same video.mp4 that is played directly. Does anyone know if this is expected? Or is there another option/setting that we're missing?

Thanks.
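For reference, our rotation workaround looks roughly like this: read preferredTransform from the MP4 (since the HLS path seems to ignore it) and rotate the player's view to match. A sketch; playerView is the view hosting our AVPlayerLayer:

    import AVFoundation
    import UIKit

    // Reads the rotation from the MP4's video track and rotates the view
    // hosting the AVPlayerLayer to compensate.
    func applyRotationWorkaround(mp4URL: URL, playerView: UIView) {
        let asset = AVAsset(url: mp4URL)
        guard let track = asset.tracks(withMediaType: .video).first else { return }
        let t = track.preferredTransform
        let angle = atan2(t.b, t.a) // e.g. .pi / 2 for portrait recordings
        playerView.transform = CGAffineTransform(rotationAngle: angle)
    }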
Post not yet marked as solved
0 Replies
477 Views
I'm experimenting with mediafilesegmenter to segment our HLS streams (to date, we've used ffmpeg), and everything is working fine. However, I notice that the start_time (PTS) of the segmented fMP4 is ~10 seconds, whereas ffmpeg's was ~0.

Is there a way to change the start_time? Ideally, we'd like it to behave like ffmpeg and start at ~0.

Thanks.