No feedback here, no feedback in Feedback Assistant; I don't know why I bothered.
Unfortunately, so far the only way I have seen to get an iOS or tvOS device to output Dolby Atmos is to use AVPlayer.
I have filed a feature request for more APIs to be able to output Atmos; for example, why can't AVSampleBufferAudioRenderer output Atmos when given a suitable EAC3/JOC stream?
At the moment it's very restricted.
So an update to this, in case anyone is remotely interested!
I added some sample code to the Feedback Assistant report which illustrates the problem.
However, I have since discovered that the equivalent Objective-C API, AVAudioConverter, does not suffer from the same problem, i.e. the exact same packets are decoded successfully on all platforms when using this API instead of the lower-level AudioConverterFillComplexBuffer API.
If anything I would have expected the higher-level API to be using the lower-level one, so this is a bit surprising, but it may help someone else who has come across the same issue.
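For anyone who wants to try the same workaround, here's a minimal sketch of the AVAudioConverter path, assuming a 48 kHz 5.1 E-AC-3 stream delivered one sync frame per packet; the class and function names (EAC3Decoder, decodePacket) and the 1536 frames-per-packet value are my own assumptions, so adjust them for your stream.

import AVFoundation

// Sketch: decode E-AC-3 packets to float PCM via AVAudioConverter instead of
// AudioConverterFillComplexBuffer. Assumes 48 kHz, 5.1, one sync frame per packet.
final class EAC3Decoder {
    private let inputFormat: AVAudioFormat
    private let outputFormat: AVAudioFormat
    private let converter: AVAudioConverter

    init?() {
        var asbd = AudioStreamBasicDescription(mSampleRate: 48_000,
                                               mFormatID: kAudioFormatEnhancedAC3,
                                               mFormatFlags: 0,
                                               mBytesPerPacket: 0,     // variable-size packets
                                               mFramesPerPacket: 1536, // typical E-AC-3 sync frame (assumption)
                                               mBytesPerFrame: 0,
                                               mChannelsPerFrame: 6,
                                               mBitsPerChannel: 0,
                                               mReserved: 0)
        guard let layout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_MPEG_5_1_A),
              let inFmt = AVAudioFormat(streamDescription: &asbd, channelLayout: layout) else {
            return nil
        }
        let outFmt = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                   sampleRate: 48_000,
                                   interleaved: true,
                                   channelLayout: layout)
        guard let conv = AVAudioConverter(from: inFmt, to: outFmt) else { return nil }
        inputFormat = inFmt
        outputFormat = outFmt
        converter = conv
    }

    // Decode one encoded packet to PCM; returns nil on failure.
    func decodePacket(_ packet: Data) -> AVAudioPCMBuffer? {
        let inBuffer = AVAudioCompressedBuffer(format: inputFormat,
                                               packetCapacity: 1,
                                               maximumPacketSize: packet.count)
        packet.withUnsafeBytes { raw in
            inBuffer.data.copyMemory(from: raw.baseAddress!, byteCount: packet.count)
        }
        inBuffer.byteLength = UInt32(packet.count)
        inBuffer.packetCount = 1
        inBuffer.packetDescriptions?[0] = AudioStreamPacketDescription(mStartOffset: 0,
                                                                       mVariableFramesInPacket: 0,
                                                                       mDataByteSize: UInt32(packet.count))

        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: 1536) else { return nil }
        var error: NSError?
        var consumed = false
        let status = converter.convert(to: outBuffer, error: &error) { _, outStatus in
            if consumed { outStatus.pointee = .noDataNow; return nil }
            consumed = true
            outStatus.pointee = .haveData
            return inBuffer
        }
        return (status == .haveData || status == .inputRanDry) ? outBuffer : nil
    }
}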
Hello, yes I did report this via the Feedback Assistant back on Oct 1st; it's FB15344866.
I've not yet had a response though.
PS: I also discovered the issue is present on iOS 18. The exact same piece of code can decode a 5.1 EAC3 stream on macOS Sequoia, iOS 17 and tvOS 17, but fails on both iOS 18 and tvOS 18.
Does your decoder callback always receive -12911 or is that an occasional error? I've had issues with occasional and unexplained errors, and the only solution was to wait for (or in my case request) a key frame to get it back on track. It seemed to relate to low bitrate AV1 streams.
No idea about VP9, I don't use it.
Sorry for the late reply; I hadn't set up notifications (I have now).
1. If you don't append the sequence header OBU data to the end of the AV1CodecConfigurationRecord, then decoder initialisation will fail with error -12911.
2. If you don't include the sequence header OBU data at the start of the first frame you decode, then you will get error -12909 inside the decompression callback.
In my scenario I am in control of the encoder (NVENC), so I get the sequence header OBU when I initialise the encoder and send it across as my "extra data" before I send any encoded frame data. On the decode side, I then use it for 1 and 2 above. My encoder doesn't send any further sequence header OBUs at all, as in my case the format never changes.
In a more general scenario there might be a sequence header with every IDR frame, so you'd just need to make sure you wait until you get the first sequence header so you can initialise the decoder as per 1 above; and since the sequence header is already part of the encoded packet data, 2 would be taken care of anyway.
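To make 2 concrete, this is the kind of thing I do on the decode side; just a sketch, where sequenceHeaderOBU is the extra-data blob received from the encoder and the names are mine.

import Foundation

// Sketch: make sure the very first sample handed to the decoder starts with the
// Sequence Header OBU. frameData is the encoded packet exactly as produced by the encoder.
func firstSampleData(sequenceHeaderOBU: Data, frameData: Data) -> Data {
    var sample = Data(capacity: sequenceHeaderOBU.count + frameData.count)
    sample.append(sequenceHeaderOBU)   // omitting this produces -12909 in the decode callback
    sample.append(frameData)
    return sample
}
// Every subsequent frame is passed through unchanged (my format never changes mid-stream).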
Just posting back here as I got all this working in the end.
In case it's useful, here are the stumbling blocks I encountered. These are probably just a reflection of my own lack of understanding, but maybe it'll help someone.
1. To construct the AV1 Codec Configuration Box (av1C) outside of FFmpeg etc., this describes the structure: https://aomediacodec.github.io/av1-isobmff/#av1codecconfigurationbox-section
2. The information needed comes from parsing the Sequence Header OBU: https://aomediacodec.github.io/av1-spec/#general-sequence-header-obu-syntax
3. If you're writing this from scratch (i.e. not using FFmpeg or similar), you need to write or find code to parse the Sequence Header OBU.
4. Once you've written the 4 bytes described in 1, you also need to append the Sequence Header OBU data to the end of the record. If you don't, decoder setup will fail.
5. This is then added to the extensions dictionary, along with all the other basic information needed to initialise the decoder (the Chromium source links detail all of this information).
6. You then create the video format description using CMVideoFormatDescriptionCreate, passing in the extensions.
7. I then got caught out by a decode error because I didn't realise I also had to pass in the Sequence Header OBU with the first frame data I attempted to decode. It wasn't enough to have already supplied the same Sequence Header OBU when creating the video format description (via the extensions).
After that it worked.
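For reference, here's roughly what 4 to 6 above look like in Swift. This is only a sketch: it assumes the relevant fields have already been parsed out of the Sequence Header OBU (step 3), and all the function and parameter names are mine.

import CoreMedia

// Sketch: build the av1C record, put it in the extensions, create the format description.
func makeAV1FormatDescription(width: Int32, height: Int32,
                              seqProfile: UInt8, seqLevelIdx0: UInt8, seqTier0: UInt8,
                              highBitdepth: UInt8, twelveBit: UInt8, monochrome: UInt8,
                              chromaX: UInt8, chromaY: UInt8, chromaSamplePosition: UInt8,
                              sequenceHeaderOBU: Data) -> CMVideoFormatDescription? {
    // The 4 fixed bytes of the AV1CodecConfigurationRecord, per the av1-isobmff spec (step 1).
    var av1C = Data()
    av1C.append(0x81)                                        // marker (1) + version = 1
    av1C.append((seqProfile << 5) | (seqLevelIdx0 & 0x1F))   // seq_profile + seq_level_idx_0
    av1C.append((seqTier0 << 7) | (highBitdepth << 6) | (twelveBit << 5) |
                (monochrome << 4) | (chromaX << 3) | (chromaY << 2) |
                (chromaSamplePosition & 0x03))
    av1C.append(0x00)                                        // no initial_presentation_delay

    // Step 4: append the Sequence Header OBU as configOBUs,
    // otherwise decoder initialisation fails with -12911.
    av1C.append(sequenceHeaderOBU)

    // Step 5: put the av1C record into the sample description extension atoms.
    let extensions: [CFString: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms: ["av1C": av1C]
    ]

    // Step 6: create the video format description.
    var formatDesc: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width, height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &formatDesc)
    return status == noErr ? formatDesc : nil
}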
Decoding itself is slightly simpler than with HEVC, in that you don't need to parse the OBUs; you just pass the data straight to the decoder. With HEVC you had to parse the NALUs and only pass in slice segments, while also doing some minor conversion of how each NALU's length is presented to the decoder.
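As a sketch of that decode path (assuming a format description built as above; the callback body and the names are mine):

import CoreMedia
import VideoToolbox

// Sketch: create the decompression session and hand encoded AV1 data straight to it,
// with the Sequence Header OBU prepended to the first frame only.
final class AV1FrameDecoder {
    private let session: VTDecompressionSession
    private let formatDescription: CMVideoFormatDescription

    init?(formatDesc: CMVideoFormatDescription) {
        var callbackRecord = VTDecompressionOutputCallbackRecord(
            decompressionOutputCallback: { _, _, status, _, imageBuffer, pts, _ in
                guard status == noErr, let image = imageBuffer else { return }
                _ = (image, pts)   // hand the decoded CVImageBuffer off for display here
            },
            decompressionOutputRefCon: nil)
        var newSession: VTDecompressionSession?
        let status = VTDecompressionSessionCreate(allocator: kCFAllocatorDefault,
                                                  formatDescription: formatDesc,
                                                  decoderSpecification: nil,
                                                  imageBufferAttributes: nil,
                                                  outputCallback: &callbackRecord,
                                                  decompressionSessionOut: &newSession)
        guard status == noErr, let created = newSession else { return nil }
        session = created
        formatDescription = formatDesc
    }

    // Unlike HEVC there is no NALU parsing or length-prefix rewriting here:
    // the encoded bytes are wrapped in a sample buffer and handed over as-is.
    func decode(_ frameData: Data, presentationTime: CMTime) -> OSStatus {
        var blockBuffer: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(allocator: kCFAllocatorDefault,
                                                        memoryBlock: nil,
                                                        blockLength: frameData.count,
                                                        blockAllocator: kCFAllocatorDefault,
                                                        customBlockSource: nil,
                                                        offsetToData: 0,
                                                        dataLength: frameData.count,
                                                        flags: kCMBlockBufferAssureMemoryNowFlag,
                                                        blockBufferOut: &blockBuffer)
        guard status == noErr, let block = blockBuffer else { return status }

        status = frameData.withUnsafeBytes {
            CMBlockBufferReplaceDataBytes(with: $0.baseAddress!, blockBuffer: block,
                                          offsetIntoDestination: 0, dataLength: frameData.count)
        }
        guard status == noErr else { return status }

        var timing = CMSampleTimingInfo(duration: .invalid,
                                        presentationTimeStamp: presentationTime,
                                        decodeTimeStamp: .invalid)
        var sampleSize = frameData.count
        var sampleBuffer: CMSampleBuffer?
        status = CMSampleBufferCreateReady(allocator: kCFAllocatorDefault,
                                           dataBuffer: block,
                                           formatDescription: formatDescription,
                                           sampleCount: 1,
                                           sampleTimingEntryCount: 1,
                                           sampleTimingArray: &timing,
                                           sampleSizeEntryCount: 1,
                                           sampleSizeArray: &sampleSize,
                                           sampleBufferOut: &sampleBuffer)
        guard status == noErr, let sample = sampleBuffer else { return status }

        return VTDecompressionSessionDecodeFrame(session, sampleBuffer: sample,
                                                 flags: [], frameRefcon: nil, infoFlagsOut: nil)
    }
}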
It would be helpful, Apple, if you could consider adding something like CMVideoFormatDescriptionCreateFromAV1SequenceHeaderOBU, similar to the existing CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets.
This would lower the bar a little to AV1 hardware decoding.
Well this is annoying. Just tried to profile for the first time in ages and it fails with this issue on both tvOS and iOS.
And this issue goes back a year, with no apparent solution.
How exactly is one supposed to develop when a basic tool like this isn't working for so many people?
Might these links be of use?
1. https://chromium.googlesource.com/chromium/src/+/master/media/gpu/mac/vt_video_decode_accelerator_mac.cc
2. https://chromium.googlesource.com/chromium/src/+/HEAD/media/gpu/mac/vt_config_util.mm
Specifically, the function CreateVideoFormatAV1 in 1 and the function CreateFormatExtensions in 2.
It would be great if Apple could provide an example of how to decode AV1 video on an iPhone 15 Pro Max or one of the new M3 MacBook Pros, using VideoToolbox.
The information doesn't appear to be documented anywhere.
I know this is from 7 years ago, but I've just run into this on macOS 14.0 with a Mac Studio.
shutdown -h -u
...simply does not work. It powers off the Mac immediately.
This makes it really hard to use the machine as a NUT client.
It's really sloppy that they've allowed the final release of Xcode 15 to still have this issue; I'm getting thousands of these every time I run a build. Ridiculous.
I would like this too.
I know it's possible to check programmatically for the availability of HEVC hardware decoding, but in my case the app is exclusively an HEVC streaming app, so I really don't even want it to be installed or used on anything before the first Apple TV 4K.
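For the runtime side, this is the kind of check I mean; a one-liner assuming a platform and SDK where VTIsHardwareDecodeSupported is available, and it obviously doesn't solve the App Store availability problem:

import CoreMedia
import VideoToolbox

// Runtime capability check only: true when the current device has an HEVC hardware decoder.
// It does not stop the App Store from offering the app to older devices, which is the real ask.
let hasHEVCHardwareDecode = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)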
I find the FFmpeg source code a useful reference for things like this; in this case, check out videotoolbox.c, which has a function that creates the AVCC extradata.
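As an illustration of what that extradata contains, here's my own rough sketch of the avcC (AVCDecoderConfigurationRecord) layout for the common single-SPS/single-PPS case; it's not FFmpeg's actual code, and it assumes sps and pps are raw NAL units without start codes and under 64 KB each:

import Foundation

// Sketch: assemble avcC extradata from one SPS and one PPS NAL unit.
func makeAVCCExtradata(sps: Data, pps: Data) -> Data {
    let s = [UInt8](sps)   // SPS NAL: [header, profile_idc, constraint flags, level_idc, ...]
    var avcc = Data()
    avcc.append(contentsOf: [1, s[1], s[2], s[3]])  // configurationVersion, profile, compat, level
    avcc.append(0xFF)                               // reserved bits + lengthSizeMinusOne = 3 (4-byte lengths)
    avcc.append(0xE1)                               // reserved bits + numOfSequenceParameterSets = 1
    avcc.append(contentsOf: [UInt8(sps.count >> 8), UInt8(sps.count & 0xFF)])
    avcc.append(sps)
    avcc.append(1)                                  // numOfPictureParameterSets
    avcc.append(contentsOf: [UInt8(pps.count >> 8), UInt8(pps.count & 0xFF)])
    avcc.append(pps)
    return avcc
}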
On my macOS 13.1 system there is this file inside /System/Library/Plug-Ins:
AV1DecoderSW.bundle
Inside the contents there is this plist:
However, when creating a VTDecompressionSession for AV1, it cannot find a decoder, failing with kVTCouldNotFindVideoDecoderErr.
If I export the symbols from the binary inside the above bundle, it has the following top two entries:
/Users/oliver/Desktop/AV1DecoderSW (for architecture arm64e):
00000000000362dc T _AV1Decoder_CreateInstance
0000000000037794 T _AV1RegisterDecoder
I tried manually loading the plugin bundle as a CFBundleRef, getting a pointer to the AV1RegisterDecoder function shown above, and calling it, but this still didn't make a difference.
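For anyone curious, this is roughly what I tried; the assumed signature of AV1RegisterDecoder (no arguments, no return value) is just a guess, since the plug-in is private and undocumented.

import Foundation

// Sketch: load the private plug-in and call its exported registration symbol.
let bundleURL = URL(fileURLWithPath: "/System/Library/Plug-Ins/AV1DecoderSW.bundle") as CFURL
if let bundle = CFBundleCreate(kCFAllocatorDefault, bundleURL), CFBundleLoadExecutable(bundle) {
    if let symbol = CFBundleGetFunctionPointerForName(bundle, "AV1RegisterDecoder" as CFString) {
        typealias RegisterFn = @convention(c) () -> Void   // assumed signature
        let register = unsafeBitCast(symbol, to: RegisterFn.self)
        register()   // even after this, VTDecompressionSession still reports kVTCouldNotFindVideoDecoderErr
    }
}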
Would it be a correct assumption, then, that this is WIP that hasn't been enabled yet, or does anyone know of a way to enable it?
Finally, I opened the executable in a text editor and noticed that it appears to be linked directly against a static build of libdav1d, as I could see the function names and error messages.