With Safari 14.3 my video can autoplay, but Safari 15 and above requires user interaction before it will autoplay. I don't want to require the user to interact, so what should I do?
Video
Integrate video and other forms of moving visual media into your apps.
Posts under Video tag
90 Posts
I've encountered an issue with the seek bar time display in the video player on iOS 17, specifically affecting live stream videos using HLS manifests with the time displayed in am/pm format.
As the video progresses, the displayed start time appears to shift backwards in time with a peculiar pattern:
Displayed Start Time = Normal Start Time - Viewed Duration
For instance, if a program begins at 9:00 AM, at 9:30 AM, the start time shown will erroneously be 8:30 AM. Similarly, at 9:40 AM, the displayed start time will be 8:20 AM.
This issue is observed with both VideoPlayer and AVPlayerViewController on iOS 17. The same implementation of the video player on iOS 16 displays the duration of the viewed program and doesn’t have any issues.
Please advise on any known workarounds or solutions to address this issue on iOS 17.
Hey devs!
I recently started a project, a macOS app, which is like a Remote Desktop app but only on local network.
For this I wanted to use the MultipeerConnectivity framework, and it's my first time using it. I've already built the device-discovery side, which works well since it's the easier part. Now I need someone who knows the framework and has time to explain (I couldn't find much documentation about this) how OutputStream and InputStream work in MultipeerConnectivity, and whether it's a good choice for my needs: it has to be low latency and high resolution. I have also looked at other frameworks such as WebRTC, which I could combine with a local WebSocket server, but as I'm new to live video streaming and don't know anyone experienced with it, I wanted to ask here for your advice.
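For reference, here is a minimal sketch of MultipeerConnectivity's stream API, assuming an already-connected MCSession and peer (the names session, peer, and the stream name "screen-frames" are placeholders): the sender opens an OutputStream with startStream(withName:toPeer:), and the receiver gets the matching InputStream through an MCSessionDelegate callback.

import Foundation
import MultipeerConnectivity

// Sender side: open an OutputStream to one connected peer and write encoded frame data to it.
func openFrameStream(session: MCSession, to peer: MCPeerID) throws -> OutputStream {
    let output = try session.startStream(withName: "screen-frames", toPeer: peer)
    output.schedule(in: .main, forMode: .default)
    output.open()
    return output
}

// Receiver side: the session delegate hands you the matching InputStream.
final class FrameReceiver: NSObject, MCSessionDelegate, StreamDelegate {
    func session(_ session: MCSession, didReceive stream: InputStream,
                 withName streamName: String, fromPeer peerID: MCPeerID) {
        stream.delegate = self
        stream.schedule(in: .main, forMode: .default)
        stream.open()
    }

    func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
        guard eventCode == .hasBytesAvailable, let input = aStream as? InputStream else { return }
        var buffer = [UInt8](repeating: 0, count: 64 * 1024)
        let read = input.read(&buffer, maxLength: buffer.count)
        if read > 0 {
            // Hand buffer[0..<read] to your decoder here (e.g. reassemble encoded video frames).
        }
    }

    // Remaining MCSessionDelegate requirements, unused in this sketch.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, with error: Error?) {}
}

Note that these streams carry raw bytes with no built-in framing, timing, or congestion control, so for low-latency, high-resolution screen sharing you would still compress frames yourself and add your own packetization; that is the main trade-off against something like WebRTC, which handles those parts for you.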
Thank you in advance, TR-MZ (just an unknown Indie dev).
Hey, all!
I've been trying to upload a video preview to the AVP storefront for our app, but some of the export requirements seem to contradict one another.
For the AVP, a resolution of 4K is needed... which would require H264 level 5.2.
Yet, the H264 level can't be any higher than 4... which is 1080p.
It seems like a catch-22 where either the H264 level will be too high, or the resolution will be too low.
Does anyone have a fix or a way around this issue?
Does Video Toolbox’s compression session yield data I can decompress on a different device that doesn’t have Apple’s decompression? i.e. so I can network data to other devices that aren’t necessarily Apple?
or is the format proprietary rather than just regular h.264 (for example)?
If I can decompress without Video Toolbox, could I get a reference to some examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
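For what it's worth, here is a hedged sketch (not a full pipeline; the dimensions and function names are placeholders) showing that a compression session is created with the standard kCMVideoCodecType_H264 codec. The CMSampleBuffers it emits contain ordinary H.264: AVCC/length-prefixed NAL units in the data buffer, with the SPS/PPS in the format description. After converting the length prefixes to Annex B start codes and prepending the parameter sets, FFmpeg/libavcodec or any other standards-based decoder on a non-Apple device should be able to decode the stream.

import VideoToolbox

// Create a plain H.264 compression session; nothing Apple-specific about the bitstream.
func makeH264Session(width: Int32 = 1920, height: Int32 = 1080) -> VTCompressionSession? {
    var session: VTCompressionSession?
    VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,   // regular H.264
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,                 // using the block-based encode call below instead
        refcon: nil,
        compressionSessionOut: &session
    )
    return session
}

// Encode one frame; the handler receives a CMSampleBuffer with AVCC (length-prefixed) H.264.
func encode(_ pixelBuffer: CVPixelBuffer, at pts: CMTime, using session: VTCompressionSession) {
    VTCompressionSessionEncodeFrame(
        session,
        imageBuffer: pixelBuffer,
        presentationTimeStamp: pts,
        duration: .invalid,
        frameProperties: nil,
        infoFlagsOut: nil
    ) { status, _, sampleBuffer in
        guard status == noErr, let sampleBuffer = sampleBuffer else { return }
        if let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) {
            // This is where you would convert the AVCC length prefixes to Annex B start codes,
            // prepend the SPS/PPS from the format description
            // (CMVideoFormatDescriptionGetH264ParameterSetAtIndex), and send the bytes over the network.
            print("Encoded frame: \(CMBlockBufferGetDataLength(dataBuffer)) bytes")
        }
    }
}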
HELP! How can I play a spatial video in my own Vision Pro app the way the official Photos app does? I've used the AVKit API to play a spatial video in the Xcode Vision Pro simulator, following the official developer documentation. The video plays, but it looks different from what Photos shows: in Photos the edges of the video appear soft and fuzzy, while in my own app the edges are sharp.
How can I play the spatial video in my own app with the same effect as in Photos?
When I try to play video on my Apple Vision Pro simulator using a custom view with an AVPlayerLayer (as seen in my VideoPlayerView below), nothing displays but a black screen, while the audio for the video I'm trying to play runs in the background. I've tried everything I can think of to resolve this issue, but to no avail.
import SwiftUI
import AVFoundation
import AVKit

struct VideoPlayerView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let playerLayer = AVPlayerLayer(player: player)
        playerLayer.videoGravity = .resizeAspect
        // The layer is added with a zero frame here; it only gets sized in updateUIView.
        view.layer.addSublayer(playerLayer)
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        if let layer = uiView.layer.sublayers?.first as? AVPlayerLayer {
            layer.frame = uiView.bounds
        }
    }
}
I have noticed, however, that if I use the default VideoPlayer (as demonstrated below) instead of my custom VideoPlayerView, the video displays just fine, but any modifiers I apply to that VideoPlayer (like the ones in my custom struct above) cause the video to render black while the audio plays in the background.
import SwiftUI
import AVKit

struct MyView: View {
    var player: AVPlayer

    var body: some View {
        ZStack {
            VideoPlayer(player: player)
        }
    }
}
Does anyone know a solution that would make the video display properly instead of just showing a black screen while the audio plays in the background?
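One workaround worth trying, sketched below under the assumption that the problem is the manually managed sublayer frame (which is not guaranteed to be the cause on the visionOS simulator): back the UIView with an AVPlayerLayer via layerClass so the layer always tracks the view's size, instead of adding a sublayer whose frame can get stuck at .zero.

import SwiftUI
import AVKit

// A view whose backing layer is the AVPlayerLayer itself, so it resizes with the view.
final class PlayerUIView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }
    var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }
}

struct LayerBackedVideoView: UIViewRepresentable {
    var player: AVPlayer

    func makeUIView(context: Context) -> PlayerUIView {
        let view = PlayerUIView(frame: .zero)
        view.playerLayer.player = player
        view.playerLayer.videoGravity = .resizeAspect
        return view
    }

    func updateUIView(_ uiView: PlayerUIView, context: Context) {
        uiView.playerLayer.player = player
    }
}

Because the layer is the view's backing layer, any SwiftUI modifier that resizes the representable resizes the video along with it, with no manual frame management.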
If an iPhone user is recording vertical video, it would be a great feature if the iPhone also recorded the areas to the left and right of the visible frame (where the black bars would be). This would eliminate the need, when cutting a movie in landscape format, to fall back on poor-looking "ghost" pictures in the left and right bars.
It would be nice if this feature were switchable as an option, with flags describing how the video was recorded and should be played back. Think about it: most people all over the world record vertically not because it's cool, but because it's the most natural way to hold the device.
Any suggestions...
Thomas N.- Hamburg/Germany
This is my HTML5 code:
<video id="myVideo" src="xxxapp://***.***.xx/***/***.mp4" style="object-fit:cover;opacity:1;width:100%;height:100%;display:block;position:absolute;" type="video/mp4"></video>
I want to load a large local video, so I use WKURLSchemeHandler.
- (void)webView:(WKWebView *)webView startURLSchemeTask:(id<WKURLSchemeTask>)urlSchemeTask {
    NSURLRequest *request = [urlSchemeTask request];
    NSURL *url = request.URL;
    NSString *urlString = url.absoluteString;
    NSString *videoPath = [[NSBundle mainBundle] pathForResource:@"***" ofType:@"mp4"];
    // Note: the options argument is NSDataReadingOptions, so pass 0 rather than nil.
    NSData *videoData = [NSData dataWithContentsOfFile:videoPath options:0 error:nil];
    NSURLResponse *response = [[NSURLResponse alloc] initWithURL:url MIMEType:@"video/mp4" expectedContentLength:videoData.length textEncodingName:nil];
    [urlSchemeTask didReceiveResponse:response];
    [urlSchemeTask didReceiveData:videoData];
    [urlSchemeTask didFinish];
}
But it doesn't work: the data is not nil, but the video does not play. I would greatly appreciate it if someone could help me find a solution!
P.S. We can make it work another way, but we cannot use that approach for certain reasons.
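One thing to check, shown in the hedged sketch below (in Swift for brevity; the scheme and the bundled file name "movie.mp4" are placeholders): WKWebView's video element issues byte-range requests, and a custom WKURLSchemeHandler that always returns the whole file with a plain NSURLResponse often stalls playback. Honoring the Range header and answering with an HTTP 206 Partial Content response is usually what gets the video going.

import WebKit

final class LocalVideoSchemeHandler: NSObject, WKURLSchemeHandler {
    func webView(_ webView: WKWebView, start urlSchemeTask: WKURLSchemeTask) {
        guard let url = urlSchemeTask.request.url,
              let path = Bundle.main.path(forResource: "movie", ofType: "mp4"),
              let data = try? Data(contentsOf: URL(fileURLWithPath: path)) else {
            urlSchemeTask.didFailWithError(URLError(.fileDoesNotExist))
            return
        }

        // Honor a header like "Range: bytes=0-1023"; default to the whole file.
        var range = 0..<data.count
        if let rangeHeader = urlSchemeTask.request.value(forHTTPHeaderField: "Range"),
           rangeHeader.hasPrefix("bytes=") {
            let parts = rangeHeader.dropFirst("bytes=".count)
                .split(separator: "-", omittingEmptySubsequences: false)
            let start = parts.count > 0 ? (Int(parts[0]) ?? 0) : 0
            let end = parts.count > 1 ? (Int(parts[1]) ?? data.count - 1) : data.count - 1
            let lower = max(0, start)
            let upper = min(end + 1, data.count)
            if lower < upper { range = lower..<upper }
        }

        let headers = [
            "Content-Type": "video/mp4",
            "Accept-Ranges": "bytes",
            "Content-Length": "\(range.count)",
            "Content-Range": "bytes \(range.lowerBound)-\(range.upperBound - 1)/\(data.count)"
        ]
        let response = HTTPURLResponse(url: url, statusCode: 206,
                                       httpVersion: "HTTP/1.1", headerFields: headers)!
        urlSchemeTask.didReceive(response)
        urlSchemeTask.didReceive(data.subdata(in: range))
        urlSchemeTask.didFinish()
    }

    func webView(_ webView: WKWebView, stop urlSchemeTask: WKURLSchemeTask) {}
}

Register the handler for your custom scheme with WKWebViewConfiguration's setURLSchemeHandler(_:forURLScheme:) before creating the web view.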
Since iOS 17.2, the video player in Safari turns black if I jump forward in an HLS video stream. I only hear the audio of the video. If I close full screen and reopen it, the video continues normally.
I checked whether the source meets all of the requirements mentioned here, and it does.
Does anybody have the same issue or maybe a solution for this problem?
Hello! I'm trying to save videos asynchronously. I've already used performChanges without the completionHandler, but it didn't work. Can you give me an example? Consider that the variable with the file URL is named fileURL. What would this look like asynchronously?
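A minimal sketch of the async form, assuming photo-library add permission has already been granted and fileURL points at a local video file:

import Photos

func saveVideoAsync(fileURL: URL) async throws {
    // The async performChanges overload suspends until the change block has been committed.
    try await PHPhotoLibrary.shared().performChanges {
        // Queue a change request that creates a new video asset from the file.
        _ = PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }
}

// Call site, from a Task or another async function:
// try await saveVideoAsync(fileURL: fileURL)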
Hello. I have three questions about the Sensitive Content Analysis (SCA) framework:
SCA seems to be asynchronous. Is there a limit to how much a single app can send through it at a time?
For video analysis, can the video be broken into smaller chunks, and then all of the chunks analyzed concurrently?
Can a video stream be sampled as it's being streamed? e.g. Maybe it samples one frame every 3 seconds and scans those?
Thanks.
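On the third question, here is a hedged sketch of the frame-sampling idea for a local file (assuming the app has the Sensitive Content Analysis entitlement and the user has enabled the feature; the function and parameter names are placeholders). Whether this pattern is supported for in-flight streams is exactly what the questions above ask.

import AVFoundation
import SensitiveContentAnalysis

func containsSensitiveFrame(videoURL: URL, stepSeconds: Double = 3) async throws -> Bool {
    let asset = AVURLAsset(url: videoURL)
    let duration = try await asset.load(.duration).seconds
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true

    let analyzer = SCSensitivityAnalyzer()
    var time = 0.0
    while time < duration {
        // Grab one frame every stepSeconds and run it through the analyzer.
        let frame = try generator.copyCGImage(at: CMTime(seconds: time, preferredTimescale: 600),
                                              actualTime: nil)
        let analysis = try await analyzer.analyzeImage(frame)
        if analysis.isSensitive { return true }
        time += stepSeconds
    }
    return false
}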
Hello, can anybody help me with this? I am downloading a video to the file system, and when I give that URL to the player it gives me this error. It only happens with m3u8; other formats like mp4 work fine locally. Please help!
{"error": {"code": -12865, "domain": "CoreMediaErrorDomain", "localizedDescription": "The operation couldn’t be completed. (CoreMediaErrorDomain error -12865.)", "localizedFailureReason": "", "localizedRecoverySuggestion": ""}, "target": 13367}
Why does the bit rate of MV-HEVC videos recorded by iPhone double? Is there some depth information hidden in MV-HEVC?
Hi,
I am looking to display some spatial video content captured on iPhone 15 Pros in a side-by-side format. I've read the HEVC Stereo Video Profile provided by Apple, but I am confused about how to access the left- and right-eye video. Looking at the AVAsset track information, there is one video track, one audio track, and three metadata tracks.
Apple's document references them as layers, but I am unsure how to access them. Could anyone provide some guidance on accessing them?
Thanks,
Will
A video that played on tvOS 17 won't play now on tvOS 17.2. This isn't true for all videos, or even for all videos of a certain type.
This code works fine on tvOS 17, but not on 17.2
import SwiftUI
import AVKit

struct ContentView: View {
    var body: some View {
        let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)
        VideoPlayer(player: player)
            .onAppear {
                player.play()
            }
    }
}
I have tried reloading the metadata. I tried making the player from an AVAsset rather than a URL. I can't work out why it plays some videos and not others, or what changed between tvOS 17 and 17.2.
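One way to narrow this down (a diagnostic sketch, not a fix; it reuses the same sample URL): observe the AVPlayerItem's status and log its error, which usually explains why a particular asset is rejected on a given tvOS build.

import AVFoundation
import Combine

final class PlayerDiagnostics {
    let player: AVPlayer
    private var cancellable: AnyCancellable?

    init(url: URL) {
        let item = AVPlayerItem(url: url)
        player = AVPlayer(playerItem: item)
        // AVPlayerItem.status is KVO-observable; .failed means item.error describes the refusal.
        cancellable = item.publisher(for: \.status).sink { status in
            if status == .failed {
                print("Playback failed:", item.error?.localizedDescription ?? "unknown error")
            }
        }
    }
}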
Who can I contact about removing the gray frame that appears around the full-screen video player in my iOS mobile application? This is an Apple iOS feature that I have no control over.
The screenshot attached below shows the full-screen view of a video when the iPhone is held sideways. The issue is that the big gray frame around the video takes up too much space, and it needs to be removed so the video can fill the screen.
Can we confirm that, as of iOS 16.3.1, key frames are now mandatory for MPEG-TS segments delivered via HLS?
I've been trying to figure out why https://chaney-field3.click2stream.com/ shows "Playback Error" across Safari, Chrome, Firefox, etc. I ran the diagnostics against one of the m3u8 files generated via Developer Tools (e.g. mediastreamvalidator "https://e1-na7.angelcam.com/cameras/102610/streams/hls/playlist.m3u8?token=" and then hlsreport validation_data.json) and see this particular error:
Video segments MUST start with an IDR frame
Variant #1, IDR missing on 3 of 3
Do Safari and iOS devices explicitly block playback when they don't find one? From what I understand, AngelCam simply acts as a passthrough for the video/audio packets and does no transcoding; it only repackages the RTSP packets into HLS for web browsers. But IP cameras are constantly streaming their data, and a user connecting to the site may start receiving the video between key frames, so it would likely violate this expectation.
From my investigation, it also seems like this problem started happening in iOS 16.3. I'm seeing similar reports for other IP cameras here:
https://ipcamtalk.com/threads/blue-iris-ui3.23528/page-194#post-754082
https://www.reddit.com/r/BlueIris/comments/1255d78/ios_164_breaks_ui3_video_decode/
For what it's worth, when I re-encode the MPEG-TS files (e.g. ffmpeg -i /tmp/streaming-master-m4-na3.bad/segment-375.ts -c:v h264 /tmp/segment-375.ts), the non-key frames at the beginning are stripped, and playback then works properly if I host the same segments on a static site and have the iOS device connect to them.
It seems like Chrome, Firefox, VLC, and ffmpeg are much more forgiving about missing key frames. I'm wondering what the reason is for enforcing this requirement, and can anyone confirm it's a recent change?
Is there a way to play a specific rectangular region of interest of a video in an arbitrarily-sized view?
Let's say I have a 1080p video but I'm only interested in a sub-region of the full frame. Is there a way to specify a source rect to be displayed in an arbitrary view (SwiftUI view, ideally), and have it play that in real time, without having to pre-render the cropped region?
Update: I may have found a solution here: img DOT ly/blog/trim-and-crop-video-in-swift/ (Apple won't allow that URL for some dumb reason)
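In case it helps others, here is a hedged sketch of one way to do the crop at playback time without pre-rendering (the crop rect and frame rate are placeholders): attach an AVVideoComposition to the AVPlayerItem whose render size is the region of interest and whose layer instruction translates the frame so that region lands on the canvas. The composition is evaluated in real time during playback.

import AVFoundation
import CoreGraphics

func playerItemCropped(asset: AVAsset, to cropRect: CGRect) async throws -> AVPlayerItem {
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        throw CocoaError(.fileReadUnknown)   // no video track in the asset
    }

    let composition = AVMutableVideoComposition()
    composition.renderSize = cropRect.size                      // canvas = region of interest
    composition.frameDuration = CMTime(value: 1, timescale: 30) // placeholder frame rate

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: try await asset.load(.duration))

    // Translate the source frame so the crop rect's origin lands at (0, 0) of the canvas;
    // everything outside the render size is clipped at playback time.
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    layerInstruction.setTransform(CGAffineTransform(translationX: -cropRect.origin.x,
                                                    y: -cropRect.origin.y), at: .zero)
    instruction.layerInstructions = [layerInstruction]
    composition.instructions = [instruction]

    let item = AVPlayerItem(asset: asset)
    item.videoComposition = composition
    return item
}

An AVPlayer built from this item can then be shown in a SwiftUI VideoPlayer (or any other player view), and only the region of interest is rendered, scaled to whatever size the view has.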