Since upgrading to iOS 17, WebRTC playback has had problems going fullscreen: the video element rapidly changes its dimensions while expanding to full-screen size, and the animation looks very glitchy.
I'm observing this issue with every WebRTC player available, so I think the problem is in mobile Safari.
Is there any way to prevent the video from resizing when entering fullscreen?
Video
Integrate video and other forms of moving visual media into your apps.
Posts under Video tag (88 posts)
Hi,
I started learning SwiftUI a few months ago, and now I'm trying to build my first app :)
I am trying to display VTT subtitles from an external URL on a streaming video using AVPlayer and AVMutableComposition.
I have been trying for a few days, checking online and in Apple's documentation, but I can't manage to make it work. So far, I've managed to display the subtitles, but there is no video or audio playing...
Could someone help?
Thanks in advance, I hope the code is not too confusing.
// EpisodeDetailView.swift
// OroroPlayer_v1
//
// Created by Juan Valenzuela on 2023-11-25.
//

import AVKit
import SwiftUI

struct EpisodeDetailView4: View {
    @State private var episodeDetailVM = EpisodeDetailViewModel()
    let episodeID: Int
    @State private var player = AVPlayer()
    @State private var subs = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .ignoresSafeArea()
            .task {
                do {
                    try await episodeDetailVM.fetchEpisode(id: episodeID)
                    let episode = episodeDetailVM.episodeDetail
                    guard let videoURLString = episode.url else {
                        print("Invalid videoURL or missing data")
                        return
                    }
                    guard let subtitleURLString = episode.subtitles?[0].url else {
                        print("Invalid subtitleURLs or missing data")
                        return
                    }
                    let videoURL = URL(string: videoURLString)!
                    let subtitleURL = URL(string: subtitleURLString)!
                    let videoAsset = AVURLAsset(url: videoURL)
                    let subtitleAsset = AVURLAsset(url: subtitleURL)

                    // Build a composition with separate video, audio, and text tracks.
                    let movieWithSubs = AVMutableComposition()
                    let videoTrack = movieWithSubs.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let audioTrack = movieWithSubs.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let subtitleTrack = movieWithSubs.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)

                    // Copy each source track into the composition for the full duration.
                    if let videoTrackItem = try await videoAsset.loadTracks(withMediaType: .video).first {
                        try await videoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: videoTrackItem,
                                                              at: .zero)
                    }
                    if let audioTrackItem = try await videoAsset.loadTracks(withMediaType: .audio).first {
                        try await audioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: audioTrackItem,
                                                              at: .zero)
                    }
                    if let subtitleTrackItem = try await subtitleAsset.loadTracks(withMediaType: .text).first {
                        try await subtitleTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                                 of: subtitleTrackItem,
                                                                 at: .zero)
                    }

                    let playerItem = AVPlayerItem(asset: movieWithSubs)
                    player = AVPlayer(playerItem: playerItem)
                    let playerController = AVPlayerViewController()
                    playerController.player = player
                    playerController.player?.play()
                    // player.play()
                } catch {
                    print("Error: \(error.localizedDescription)")
                }
            }
    }
}

#Preview {
    EpisodeDetailView4(episodeID: 39288)
}
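One thing worth double-checking here (an observation, not a confirmed fix): the AVPlayerViewController created at the end of the task is never presented, so it has no effect, and the composition will stay silently empty if episode.url points to an HLS playlist, because AVMutableComposition only accepts file-based assets and loadTracks(withMediaType:) returns no tracks for a stream. A minimal sketch of the closing lines, assuming the URLs are direct MP4/VTT files, that hands the composition to the VideoPlayer already on screen:

// Hypothetical replacement for the last lines of the .task closure above.
let playerItem = AVPlayerItem(asset: movieWithSubs)
player.replaceCurrentItem(with: playerItem) // reuse the player the VideoPlayer observes
player.play()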
I'm building a Camera app where I have two AVCaptureSessions, one for video and one for audio. (See this for an explanation of why I don't just use one.)
I receive my CMSampleBuffers in the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates.
Now, when I enable the "cinematicExtended" video stabilization mode, the AVCaptureVideoDataOutput has a 1-2 second delay, meaning I receive my audio CMSampleBuffers 1-2 seconds earlier than my video CMSampleBuffers!
This is the code:
func captureOutput(_ captureOutput: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from _: AVCaptureConnection) {
    let type = captureOutput is AVCaptureVideoDataOutput ? "Video" : "Audio"
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    print("Incoming \(type) buffer at \(timestamp.seconds) seconds...")
}
Without video stabilization, this logs:
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107862.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107862.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107862.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
With video stabilization, this logs:
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107861.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107861.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107861.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
As you can see, the video frames arrive almost a full second later than when they are intended to be presented!
There are a few guides on how to use AVAssetWriter online, but all recommend starting the AVAssetWriter session once the first video frame arrives. In my case I cannot do that, since with the stabilization delay the first video frames are from before the user even started the recording.
I also can't simply wait 1 second, as I would then lose 1 second of audio samples, since those arrive in realtime and are not delayed.
I also can't start the session on the first audio frame and drop all video frames until that point, since the resulting video would then start with a blank frame: a video frame never lands exactly on that first audio frame's timestamp.
Any advice on how to synchronize these?
Here is my code: RecordingSession.swift
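One possible approach (a sketch under assumptions, not a confirmed solution): hold incoming audio in a small FIFO, drop the late-arriving video frames stamped before the record button was hit, start the AVAssetWriter session at the PTS of the first usable video frame, and then drain only the buffered audio at or after that PTS. The class below is hypothetical; it assumes both capture sessions stamp buffers against a common clock, both delegates run on the same queue, and writer.startWriting() has already been called.

import AVFoundation
import CoreMedia

// Hypothetical feeder: buffers realtime audio until the delayed video pipeline
// delivers its first frame captured after record start, then anchors the
// AVAssetWriter session on that video frame's presentation timestamp.
final class SyncedWriterFeeder {
    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput // assumed configured elsewhere
    private let audioInput: AVAssetWriterInput // assumed configured elsewhere
    private let recordStart: CMTime            // capture-clock time when the user hit record
    private var pendingAudio: [CMSampleBuffer] = []
    private var sessionStarted = false

    init(writer: AVAssetWriter, videoInput: AVAssetWriterInput,
         audioInput: AVAssetWriterInput, recordStart: CMTime) {
        self.writer = writer
        self.videoInput = videoInput
        self.audioInput = audioInput
        self.recordStart = recordStart
    }

    // Call from the AVCaptureVideoDataOutput delegate.
    func appendVideo(_ buffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
        // Frames stamped before record start are the stabilizer's backlog: drop them.
        guard pts >= recordStart else { return }
        if !sessionStarted {
            // Anchor the session exactly on the first usable video frame...
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
            // ...then drain the audio that was held back while video lagged.
            for audio in pendingAudio where CMSampleBufferGetPresentationTimeStamp(audio) >= pts {
                if audioInput.isReadyForMoreMediaData { audioInput.append(audio) }
            }
            pendingAudio.removeAll()
        }
        if videoInput.isReadyForMoreMediaData { videoInput.append(buffer) }
    }

    // Call from the AVCaptureAudioDataOutput delegate.
    func appendAudio(_ buffer: CMSampleBuffer) {
        guard sessionStarted else {
            pendingAudio.append(buffer) // audio is realtime; video is 1-2 s behind
            return
        }
        if audioInput.isReadyForMoreMediaData { audioInput.append(buffer) }
    }
}

This way only the few milliseconds of audio before the anchor frame are discarded rather than a full second, and the video never starts on a blank frame.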
I'm using the AVFoundation Swift APIs to record video and audio (as CMSampleBuffers) to a file using AVAssetWriter.
Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release.
Since it blocks all threads and only happens in Debug, I'm led to believe that this is an Xcode/debugger/LLDB hang issue that I'm seeing.
Does anyone experience something similar?
Here’s how I set all of that up: startRecording(...)
And here’s the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
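One way to test the debugger theory (a diagnostic sketch, not a fix): time the call, then run the same Debug build once from Xcode and once launched directly on the device with no debugger attached, and compare the two numbers.

import Foundation

// Hypothetical timing probe around the blocking call; `assetWriter` comes
// from your existing startRecording(...) setup.
let begin = CFAbsoluteTimeGetCurrent()
let started = assetWriter.startWriting()
print("startWriting() returned \(started) after \(CFAbsoluteTimeGetCurrent() - begin) seconds")

If the stall only appears with the debugger attached, it points at LLDB overhead (e.g. symbol loading) rather than AVAssetWriter itself.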
Simple question: can I shoot in Log using the new MVHEVC format for spatial video, for viewing on the Vision Pro?
We're experiencing an issue on an iPhone 15 (iOS 17.1) where some video files can't be loaded from the results of a PHPickerViewController.
results[index].itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier)
Gives error:
Cannot load representation of type public.movie
Video info (taken from Mac Finder):
H.264
MPEG-4 AAC
HD (1-1-1)
480x848px
Filetype .MP4
Origin: Recorded on iPhone 14, sent over WhatsApp, & auto saved from WhatsApp to an iPhone 15
The iPhone 15 has iCloud enabled and the videos failing are frequently viewed and used in testing, so are likely to be downloaded/cached locally.
I've tried changing the PHPickerConfiguration preferredAssetRepresentationMode to .current with no difference in the error.
I've also tried the openInPlace alternative, but the debug output complains that it's not supported.
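One pitfall worth ruling out (a guess, not a confirmed cause for the WhatsApp-sourced files): the URL that loadFileRepresentation hands to its completion handler points to a temporary file that is deleted as soon as the handler returns, so the file must be copied out synchronously inside the handler. A minimal sketch, with the helper name and destination path being illustrative only:

import PhotosUI
import UniformTypeIdentifiers

// Hypothetical helper: copies the picked movie to a location we own before
// the temporary file behind `url` is deleted.
func loadMovie(from result: PHPickerResult, completion: @escaping (URL?) -> Void) {
    let provider = result.itemProvider
    guard provider.hasItemConformingToTypeIdentifier(UTType.movie.identifier) else {
        completion(nil)
        return
    }
    provider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
        if let error { print("Load failed: \(error)") }
        guard let url else { completion(nil); return }
        let dest = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(url.pathExtension)
        do {
            try FileManager.default.copyItem(at: url, to: dest) // must happen inside the handler
            completion(dest)
        } catch {
            print("Copy failed: \(error)")
            completion(nil)
        }
    }
}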
The Safari version for visionOS (spatial computing) supports WebXR, as reported here.
I am developing a web app that intends to leverage WebXR, so I've tested several code samples in the Safari browser of the Vision Pro simulator to understand the level of support for immersive web content.
I am currently facing what looks like a bug: video playback stops working when entering an XR session (i.e. going into VR mode) in a 3D web environment (using Three.js or similar).
There's an example from the Immersive Web Community Group called Stereo Video (https://immersive-web.github.io/webxr-samples/stereo-video.html) that lets you easily replicate the issue; the code is available here.
It's worth mentioning that video playback has been successfully tested on other VR platforms such as the Meta Quest 2.
The issue has been reported in the following forums:
https://discourse.threejs.org/t/videotexture-playback-html5-videoelement-apple-vision-pro-simulator-in-vr-mode-not-playing/53374
https://bugs.webkit.org/show_bug.cgi?id=260259
I am calling AVSampleBufferDisplayLayer.flush from a background queue, but this seems to occasionally crash the app. I am calling it from the same queue I pass to - (void)requestMediaDataWhenReadyOnQueue:(dispatch_queue_t)queue usingBlock:(void (^)(void))block;.
My question is: is this API thread-safe, or do I need to call flush from the main thread? Or is there another issue that I am not considering? It seems strange to me that this API would trigger an Auto Layout pass.
0 CoreFoundation 0x00000001bb384e38 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x00000001b451b8d8 objc_exception_throw + 59
2 CoreAutoLayout 0x00000001d7e09e84 _AssertAutoLayoutOnAllowedThreadsOnly + 327
3 CoreAutoLayout 0x00000001d7e00e60 -[NSISEngine withBehaviors:performModifications:] + 35
4 UIKitCore 0x00000001be58fd40 -[UIView _postMovedFromSuperview:] + 671
5 UIKitCore 0x00000001bd56dfec -[UIView(Internal) _addSubview:positioned:relativeTo:] + 1903
6 UIKitCore 0x00000001bda57ccc -[_UITextLayoutCanvasView textViewportLayoutController:configureRenderingSurfaceForTextLayoutFragment:] + 455
7 UIFoundation 0x00000001c588bc9c __48-[NSTextViewportLayoutController layoutViewport]_block_invoke_4 + 151
8 UIFoundation 0x00000001c5836b50 __80-[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:]_block_invoke + 43
9 UIFoundation 0x00000001c580e158 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke_2 + 535
10 CoreFoundation 0x00000001bb385350 __NSARRAY_IS_CALLING_OUT_TO_A_BLOCK__ + 23
11 CoreFoundation 0x00000001bb3b24dc -[__NSSingleObjectArrayI enumerateObjectsWithOptions:usingBlock:] + 91
12 UIFoundation 0x00000001c580de28 __83-[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:]_block_invoke + 775
13 UIFoundation 0x00000001c57f7504 -[NSTextLayoutManager enumerateTextLayoutFragmentsFromLocation:options:usingBlock:] + 659
14 UIFoundation 0x00000001c57f7264 -[NSTextLayoutManager enumerateViewportElementsFromLocation:options:usingBlock:] + 99
15 UIFoundation 0x00000001c57f6d7c -[NSTextViewportLayoutController layoutViewport] + 1299
16 UIKitCore 0x00000001bd580a3c +[UIView(Animation) performWithoutAnimation:] + 75
17 UIKitCore 0x00000001bd5582d0 -[_UITextLayoutCanvasView layoutSubviews] + 139
18 UIKitCore 0x00000001bd5544c8 -[UIView(CALayerDelegate) layoutSublayersOfLayer:] + 1979
19 QuartzCore 0x00000001bca277fc CA::Layer::layout_if_needed(CA::Transaction*) + 499
20 QuartzCore 0x00000001bca3aeb0 CA::Layer::layout_and_display_if_needed(CA::Transaction*) + 147
21 QuartzCore 0x00000001bca4c234 CA::Context::commit_transaction(CA::Transaction*, double, double*) + 443
22 QuartzCore 0x00000001bca81630 CA::Transaction::commit() + 651
23 MediaToolbox 0x00000001ca8d0da0 videoQueueRemote_SetProperty + 367
24 AVFCore 0x00000001cad191b4 __63-[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:]_block_invoke + 179
25 libdispatch.dylib 0x00000001c299cf88 _dispatch_client_callout + 19
26 libdispatch.dylib 0x00000001c29ac574 _dispatch_lane_barrier_sync_invoke_and_complete + 55
27 AVFCore 0x00000001cad190d0 -[AVSampleBufferVideoRenderer _setContentLayerOnFigVideoQueue:] + 167
28 AVFCore 0x00000001cad14674 -[AVSampleBufferVideoRenderer _createVideoQueue:errorStep:] + 195
29 AVFCore 0x00000001cad14ac8 -[AVSampleBufferVideoRenderer createVideoQueue:] + 55
30 AVFCore 0x00000001cad179cc -[AVSampleBufferVideoRenderer flushWithRemovalOfDisplayedImage:completionHandler:] + 439
31 App 0x0000000102370214 -[AVSampleBufferDisplayLayer flush] + 51
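Reading the backtrace, the flush tears down and re-creates the video queue (frames 28-30) and re-attaches a content layer, which produces a view-hierarchy change (frames 4-5) and trips _AssertAutoLayoutOnAllowedThreadsOnly on the background queue. A defensive workaround sketch, assuming main-thread confinement is acceptable for your pipeline (whether flush is documented as thread-safe is exactly the open question here):

import AVFoundation
import Foundation

// Hypothetical workaround: route flush through the main thread, since the
// backtrace shows it re-attaching a layer and triggering an Auto Layout pass.
func flushOnMain(_ layer: AVSampleBufferDisplayLayer) {
    if Thread.isMainThread {
        layer.flush()
    } else {
        DispatchQueue.main.async { layer.flush() }
    }
}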