In the context of an app that uses a VTDecompressionSession to decode incoming TS streams, the creation of the decompression session always fails with code -12911 for some H.264 streams when running on an M1 iPad (iPad Pro, 12.9-inch, 5th generation, iOS 15.5).
When the same app is executed on my M1 Mac mini, as My Mac (Designed for iPad), with the same stream, VTDecompressionSessionCreate succeeds and the stream can be decoded.
Any idea of what could cause this error on the iPad?
The code:
decompressionSessionCreationStatus = VTDecompressionSessionCreate(allocator: kCFAllocatorDefault,
                                                                  formatDescription: videoFormatDescription!,
                                                                  decoderSpecification: nil,
                                                                  imageBufferAttributes: nil,
                                                                  outputCallback: nil,
                                                                  decompressionSessionOut: &videoDecompressionSession)
if videoDecompressionSession != nil { ... }
else {
    NSLog("videoDecompressionSession could not be created (status: %d): video format: %@",
          decompressionSessionCreationStatus,
          (videoFormatDescription != nil) ? (CFCopyDescription(videoFormatDescription!) as String) : "{?}")
}
where videoFormatDescription was previously created by extracting the H.264 parameter sets and calling CMVideoFormatDescriptionCreateFromH264ParameterSets.
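For reference, the construction of the format description boils down to something like this (simplified sketch; sps and pps are assumed to hold the raw SPS and PPS NAL units extracted from the stream, without start codes):

import Foundation
import CoreMedia

// Simplified sketch: build the CMVideoFormatDescription from the extracted
// SPS/PPS NAL units (start codes stripped); error handling reduced to returning nil.
func makeFormatDescription(sps: Data, pps: Data) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    let status: OSStatus = sps.withUnsafeBytes { spsBytes in
        pps.withUnsafeBytes { ppsBytes in
            let pointers: [UnsafePointer<UInt8>] = [
                spsBytes.baseAddress!.assumingMemoryBound(to: UInt8.self),
                ppsBytes.baseAddress!.assumingMemoryBound(to: UInt8.self)
            ]
            let sizes: [Int] = [sps.count, pps.count]
            return CMVideoFormatDescriptionCreateFromH264ParameterSets(
                allocator: kCFAllocatorDefault,
                parameterSetCount: pointers.count,
                parameterSetPointers: pointers,
                parameterSetSizes: sizes,
                nalUnitHeaderLength: 4,
                formatDescriptionOut: &formatDescription)
        }
    }
    return status == noErr ? formatDescription : nil
}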
Output:
videoDecompressionSession could not be created (status: -12911):
video format: <CMVideoFormatDescription 0x281894360 [0x1dacc01b8]> {
    mediaType:'vide'
    mediaSubType:'avc1'
    mediaSpecific: {
        codecType: 'avc1'  dimensions: 1920 x 1080
    }
    extensions: {{
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVImageBufferColorPrimaries = "ITU_R_709_2";
        CVImageBufferTransferFunction = "ITU_R_709_2";
        CVImageBufferYCbCrMatrix = "ITU_R_709_2";
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
        SampleDescriptionExtensionAtoms = {
            avcC = {length = 119, bytes = 0x01640028 ffe10064 67640028 ad90a470 ... 68ff3cb0 fdf8f800 };
        };
    }}
}
Any help on this would be greatly appreciated! :)
Hi,
I am struggling to receive multicast UDP packets on an iPad Pro (iOS 15.5) in the context of an ethernet-only lab network.
The packet reception code uses a NWConnectionGroup configured with a NWMulticastGroup, as described in https://developer.apple.com/news/?id=0oi77447.
This code works well on a Mac connected to the lab network with a USB ethernet adapter, provided the ethernet adapter interface has the highest priority among connected network interfaces.
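For context, the reception setup follows the pattern from that article, roughly as follows (simplified sketch; the multicast address and port are placeholders for the actual lab group):

import Network

// Simplified sketch of the multicast reception setup; the address and port
// are placeholders for the actual lab multicast group.
func startMulticastReceiver() throws -> NWConnectionGroup {
    let multicast = try NWMulticastGroup(for: [.hostPort(host: "239.65.75.1", port: 5000)])
    let connectionGroup = NWConnectionGroup(with: multicast, using: .udp)

    connectionGroup.setReceiveHandler(maximumMessageSize: 65_535,
                                      rejectOversizedMessages: true) { message, content, _ in
        if let content = content {
            print("received \(content.count) bytes from \(String(describing: message.remoteEndpoint))")
        }
    }
    connectionGroup.stateUpdateHandler = { state in
        print("connection group state: \(state)")
    }
    connectionGroup.start(queue: .main)
    return connectionGroup
}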
To make it work on iOS, I have successfully added the com.apple.developer.networking.multicast entitlement to the app, following the process detailed by @eskimo in https://developer.apple.com/forums/thread/663271.
However, on the iPad, the app doesn't receive any data packet on the configured connection group, although no error shows on the console.
I suspected that the issue might be related to network interface selection by the receiving NWConnectionGroup, but disabling Wi-Fi on the iPad doesn't seem to help.
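For reference, the kind of interface pinning I had in mind would look like this, building on the sketch above (I haven't verified that it makes any difference):

// Untested idea: constrain the connection group to the wired interface
// instead of relying on the default interface selection.
let parameters = NWParameters.udp
parameters.requiredInterfaceType = .wiredEthernet
let connectionGroup = NWConnectionGroup(with: multicast, using: parameters)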
Searching in the dev forums, I found this message where @meaton wrote: "You will want to make sure that you test this on a physical device connected to Wi-Fi to know that the Multicast feature is actually working." This makes me wonder if using the Wi-Fi network is mandatory here…
Hence my question: is there a way to receive multicast UDP packets on an ethernet network on an iPad?
Thanks.
Hi,
Our iOS HDR video app uses AVSampleBufferDisplayLayer to present CVPixelBuffers containing HDR images, and on the HDR iPad screen the output video looks great.
Recently I added support for external displays to the app, so that it can play the video on AirPlay-compatible devices, typically an Apple TV 4K.
The video display on the Apple TV works as expected, except that the image colours and dynamic range don't look right.
The same colour issue appears when displaying the output video on an M1 Max MacBook Pro HDR screen, so the issue does not seem to be due to the TV screen.
An example of presented image buffer format:
(lldb) po outputImage
▿ Optional<VideoImage>
▿ some : VideoImage
- pixelBuffer : <CVPixelBuffer 0x283fa4960 width=3840 height=2160 pixelFormat=x444 iosurface=0x280cac880 planes=2>
<Plane 0 width=3840 height=2160 bytesPerRow=7680>
<Plane 1 width=3840 height=2160 bytesPerRow=15360>
<attributes={
Height = 2160;
IOSurfaceProperties = {
IOSurfacePurgeWhenNotInUse = 1;
};
PixelFormatType = 2016687156;
Width = 3840;
} propagatedAttachments={
CVFieldCount = 1;
CVImageBufferChromaLocationBottomField = Left;
CVImageBufferChromaLocationTopField = Left;
CVImageBufferColorPrimaries = "ITU_R_2020";
CVImageBufferTransferFunction = "SMPTE_ST_2084_PQ"
CVImageBufferYCbCrMatrix = "ITU_R_2020";
} nonPropagatedAttachments={
}
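For completeness, each pixel buffer is wrapped in a CMSampleBuffer and enqueued on the layer roughly as follows (simplified sketch):

import AVFoundation
import CoreMedia

// Simplified sketch: wrap a decoded CVPixelBuffer (with its colour attachments
// already set, as shown above) into a CMSampleBuffer and enqueue it on the layer.
func enqueue(_ pixelBuffer: CVPixelBuffer,
             at presentationTime: CMTime,
             on displayLayer: AVSampleBufferDisplayLayer) {
    var formatDescription: CMVideoFormatDescription?
    guard CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                       imageBuffer: pixelBuffer,
                                                       formatDescriptionOut: &formatDescription) == noErr,
          let formatDescription = formatDescription else { return }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                                   imageBuffer: pixelBuffer,
                                                   formatDescription: formatDescription,
                                                   sampleTiming: &timing,
                                                   sampleBufferOut: &sampleBuffer) == noErr,
          let sampleBuffer = sampleBuffer else { return }

    if displayLayer.isReadyForMoreMediaData {
        displayLayer.enqueue(sampleBuffer)
    }
}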
Hence my question: is HDR video playback via AirPlay supposed to work in this context? Or is such a configuration not supported yet (tested on iOS 15.6)?
Thanks.
My current app implements a custom video player, based on an AVSampleBufferRenderSynchronizer synchronising two renderers:
an AVSampleBufferDisplayLayer receiving decoded CVPixelBuffer-based video CMSampleBuffers,
and an AVSampleBufferAudioRenderer receiving decoded lpcm-based audio CMSampleBuffers.
The AVSampleBufferRenderSynchronizer is started when the first image (in presentation order) is decoded and enqueued, using avSynchronizer.setRate(_ rate: Float, time: CMTime), with rate = 1 and time the presentation timestamp of the first decoded image.
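The setup is essentially the following (simplified sketch; names are illustrative):

import AVFoundation

// Simplified sketch of the player setup.
let synchronizer = AVSampleBufferRenderSynchronizer()
let videoLayer = AVSampleBufferDisplayLayer()
let audioRenderer = AVSampleBufferAudioRenderer()

synchronizer.addRenderer(videoLayer)
synchronizer.addRenderer(audioRenderer)

// ... decoded video CMSampleBuffers are enqueued on videoLayer,
//     decoded LPCM audio CMSampleBuffers on audioRenderer ...

// Once the first image (in presentation order) is decoded and enqueued:
let firstImagePTS = CMTime(value: 0, timescale: 90_000) // placeholder for the actual PTS
synchronizer.setRate(1.0, time: firstImagePTS)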
Presentation timestamps of video and audio sample buffers are consistent, and on most streams, the audio and video are correctly synchronized.
However on some network streams, on iOS, the audio and video aren't synchronized, with a time difference that seems to increase with time.
On the other hand, with the same player code and network streams on macOS, the synchronization always works fine.
This reminds me of something I've read, about cases where an AVSampleBufferRenderSynchronizer could not synchronize audio and video, causing them to run with independent and potentially drifting clocks, but I cannot find it again.
So, any help / hints on this sync problem will be greatly appreciated! :)