I want to show the user the actual start and end times of the video on the AVPlayer time slider, instead of the elapsed/total duration.
I would like to show something like this: 09:00:00 ... 12:00:00 (which indicates that the video started at 09:00:00 CET and ended at 12:00:00 CET), instead of: 00:00:00 ... 02:59:59.
I would appreciate any pointers to this direction.
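For reference, a minimal sketch of one way to approach this, assuming the absolute start time of the recording is known (a hypothetical startDate below); as far as I know the built-in AVPlayerViewController time labels can't be replaced directly, so this usually pairs with custom labels overlaid on the player.
import AVFoundation

// Hypothetical sketch: derive wall-clock labels from a known recording start time.
// `startDate` (when the recording actually began) is an assumption; AVPlayer itself
// only knows a media timeline that starts at zero.
func clockLabels(for player: AVPlayer, startDate: Date) -> (current: String, end: String)? {
    guard let item = player.currentItem, item.duration.isNumeric else { return nil }

    let formatter = DateFormatter()
    formatter.dateFormat = "HH:mm:ss"
    formatter.timeZone = TimeZone(identifier: "Europe/Paris") // CET, as in the example

    let currentDate = startDate.addingTimeInterval(player.currentTime().seconds)
    let endDate = startDate.addingTimeInterval(item.duration.seconds)
    return (formatter.string(from: currentDate), formatter.string(from: endDate))
}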
For example, this video
does not work in Safari either through the video tag or through simply opening the link.
I can't figure out what exactly is wrong with this video, since most of the others work well.
Any advice?
I want to generate a video from a set of images, and the video should have an animation when transitioning from one image to the next.
Is it possible with UIView.transition(with:duration:options:animations:completion:)? If so, how can I achieve this?
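For reference, UIView.transition animates views on screen; it does not by itself render frames into a video file (that part is typically done with AVFoundation, e.g. AVAssetWriter). A minimal sketch of the on-screen part, with a hypothetical imageView and images array:
import UIKit

// Hypothetical sketch: cross-dissolve between images in an on-screen image view.
// This only animates the UI; it does not produce a video file on its own.
func showNextImage(in imageView: UIImageView, from images: [UIImage], index: Int) {
    guard images.indices.contains(index) else { return }
    UIView.transition(with: imageView,
                      duration: 0.5,
                      options: .transitionCrossDissolve,
                      animations: { imageView.image = images[index] },
                      completion: nil)
}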
I have multiple AVAssets that I am trying to merge into a single video track using AVComposition.
I am iterating over my assets and inserting each one into a single AVMutableCompositionTrack, like so:
- (AVAsset *)combineAssets
{
    // Create a mutable composition
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *compositionVideoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    AVMutableCompositionTrack *compositionAudioTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];

    // Keep track of time offset
    CMTime currentOffset = kCMTimeZero;
    for (AVAsset *audioAsset in _audioSegments) {
        AVAssetTrack *audioTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];

        // Add the audio track to the composition audio track
        NSError *audioError;
        [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration)
                                       ofTrack:audioTrack
                                        atTime:currentOffset
                                         error:&audioError];
        if (audioError) {
            NSLog(@"Error combining audio track: %@", audioError.localizedDescription);
            return nil;
        }
        currentOffset = CMTimeAdd(currentOffset, audioAsset.duration);
    }

    // Reset offset to do the same with videos.
    currentOffset = kCMTimeZero;
    for (AVAsset *videoAsset in _videoSegments) {
        // Get the video track
        AVAssetTrack *videoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] firstObject];

        // Add the video track to the composition video track
        NSError *videoError;
        [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoTrack.timeRange.duration)
                                       ofTrack:videoTrack
                                        atTime:currentOffset
                                         error:&videoError];
        if (videoError) {
            NSLog(@"Error combining video track: %@", videoError.localizedDescription);
            return nil;
        }
        // Increment current offset by the segment's duration
        currentOffset = CMTimeAdd(currentOffset, videoTrack.timeRange.duration);
    }
    return composition;
}
The issue is that when I export the composition using an AVAssetExportSession, I notice there's a black frame between the merged segments in the track.
In other words, if two 30-second AVAssets are merged into the composition track to create a 60-second video, you see a black frame for a split second at the 30-second mark where the two assets join.
I don't really want to re-encode the assets, I just want to stitch them together. How can I fix the black frame issue?
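Not a confirmed fix for the black frame, but one variant sometimes tried is to let the composition insert entire assets back to back with insertTimeRange(_:of:at:), which keeps the audio and video tracks aligned and still avoids re-encoding. A minimal Swift sketch, assuming a hypothetical segments array of AVAssets:
import AVFoundation

// Hypothetical sketch: append whole assets to one composition, back to back.
func combine(_ segments: [AVAsset]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    for asset in segments {
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        // insertTimeRange(_:of:at:) copies all compatible tracks without re-encoding.
        try composition.insertTimeRange(range, of: asset, at: composition.duration)
    }
    return composition
}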
Safari for visionOS (spatial computing) supports WebXR, as reported here.
I am developing a web app that intends to leverage WebXR, so I've tested several code samples in the Safari browser of the Vision Pro simulator to understand the level of support for immersive web content.
I am currently facing an issue that looks like a bug: video playback stops working when entering an XR session (i.e. going into VR mode) in a 3D web environment (using Three.js or similar).
There's an example from the Immersive Web Community Group called Stereo Video (https://immersive-web.github.io/webxr-samples/stereo-video.html) that lets you easily reproduce the issue; the code is available here.
It's worth mentioning that video playback has been successfully tested on other VR platforms such as the Meta Quest 2.
The issue has been reported in the following forums:
https://discourse.threejs.org/t/videotexture-playback-html5-videoelement-apple-vision-pro-simulator-in-vr-mode-not-playing/53374
https://bugs.webkit.org/show_bug.cgi?id=260259
I'm trying to use the resourceLoader of an AVAsset to progressively supply media data, but I'm unable to because the loading request asks for the full content (requestsAllDataToEndOfResource = true).
class ResourceLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        if let ci = loadingRequest.contentInformationRequest {
            ci.contentType = // public.mpeg-4
            ci.contentLength = // GBs
            ci.isEntireLengthAvailableOnDemand = false
            ci.isByteRangeAccessSupported = true
        }
        if let dr = loadingRequest.dataRequest {
            if dr.requestedLength > 200_000_000 {
                // memory pressure
                // dr.requestsAllDataToEndOfResource is true
            }
        }
        return true
    }
}
I also tried using a fragmented MP4 created with AVAssetWriter, but that didn't work either. Is it possible for the AVAssetResourceLoader to not ask for the full content?
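For reference, respond(with:) can be called more than once on the same AVAssetResourceLoadingDataRequest, so one pattern is to stream sequential chunks from dataRequest.currentOffset instead of building one huge buffer, and only call finishLoading() once everything has been delivered. A minimal sketch, where chunkSize and readChunk(at:length:) are hypothetical:
import AVFoundation

// Assumed helper: returns `length` bytes of the media file starting at `offset`,
// or nil when no more data is available (implementation not shown here).
func readChunk(at offset: Int64, length: Int) -> Data? { /* read from disk or network */ nil }

// Hypothetical sketch: feed an open-ended data request in chunks instead of one buffer.
func fulfill(_ loadingRequest: AVAssetResourceLoadingRequest) {
    guard let dataRequest = loadingRequest.dataRequest else { return }
    let chunkSize: Int64 = 2_000_000 // ~2 MB per respond(with:) call; tune as needed
    var offset = dataRequest.currentOffset
    let end = dataRequest.requestedOffset + Int64(dataRequest.requestedLength)

    while offset < end, !loadingRequest.isCancelled {
        let length = Int(min(chunkSize, end - offset))
        guard let chunk = readChunk(at: offset, length: length), !chunk.isEmpty else { break }
        dataRequest.respond(with: chunk) // respond(with:) may be called repeatedly
        offset += Int64(chunk.count)
    }
    if offset >= end {
        loadingRequest.finishLoading()
    }
}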
We're using code based on AVAssetReader to get decoded video frames through AVFoundation.
The decoding part per se works great, but seeking just doesn't work reliably. For a given H.264 file (in a MOV container), the decoded frames sometimes have presentation timestamps that don't correspond to the actual decoded frames.
For example: the decoded frame's reported PTS is 2002/24000, but the frame's actual content corresponds to 6006/24000. The frames have burnt-in timecode, so we can clearly tell.
Here is our code:
- (BOOL)setupAssetReaderForFrameIndex:(int32_t)frameIndex
{
    NSError *theError = nil;
    NSDictionary *assetOptions = @{ AVURLAssetPreferPreciseDurationAndTimingKey: @YES };
    self.movieAsset = [[AVURLAsset alloc] initWithURL:self.filePath options:assetOptions];

    if (self.assetReader)
        [self.assetReader cancelReading];
    self.assetReader = [AVAssetReader assetReaderWithAsset:self.movieAsset error:&theError];

    NSArray<AVAssetTrack *> *videoTracks = [self.movieAsset tracksWithMediaType:AVMediaTypeVideo];
    if ([videoTracks count] == 0)
        return NO;
    self.videoTrack = [videoTracks objectAtIndex:0];
    [self retrieveMetadata];

    NSDictionary *outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey: @(self.cvPixelFormat) };
    self.videoTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:self.videoTrack outputSettings:outputSettings];
    self.videoTrackOutput.alwaysCopiesSampleData = NO;
    [self.assetReader addOutput:self.videoTrackOutput];

    // Seek by restricting the reader's time range to start at the requested frame.
    CMTimeScale timeScale = self.videoTrack.naturalTimeScale;
    CMTimeValue frameDuration = (CMTimeValue)round((float)timeScale / self.videoTrack.nominalFrameRate);
    CMTimeValue startTimeValue = (CMTimeValue)frameIndex * frameDuration;
    CMTimeRange timeRange = CMTimeRangeMake(CMTimeMake(startTimeValue, timeScale), kCMTimePositiveInfinity);
    self.assetReader.timeRange = timeRange;

    [self.assetReader startReading];
    return YES;
}
This is then followed by this code to actually decode the frame:
CMSampleBufferRef sampleBuffer = [self.videoTrackOutput copyNextSampleBuffer];
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
if (!imageBuffer)
{
CMSampleBufferInvalidate(sampleBuffer);
AVAssetReaderStatus theStatus = self.assetReader.status;
NSError* theError = self.assetReader.error;
NSLog(@"[AVAssetVideoTrackOutput copyNextSampleBuffer] didn't deliver a frame - %@", theError);
return false;
}
Is this method by itself the correct way of seeking, and if not, what is the correct way?
Thanks!
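For reference, one pattern sometimes used as a workaround is to start the reader's timeRange a bit earlier than the wanted frame and then skip decoded frames until the presentation timestamp reaches the target. A minimal Swift sketch, assuming trackOutput is the configured AVAssetReaderTrackOutput and targetTime is the wanted CMTime:
import AVFoundation

// Hypothetical sketch: skip decoded frames until the reported PTS reaches the target.
// Assumes `trackOutput` comes from an AVAssetReader whose timeRange starts at or
// before the wanted frame.
func copyFrame(at targetTime: CMTime, from trackOutput: AVAssetReaderTrackOutput) -> CVImageBuffer? {
    while let sampleBuffer = trackOutput.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if pts >= targetTime, let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            return imageBuffer
        }
        // Frame is earlier than the target; discard it and keep reading.
    }
    return nil
}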
Hi everyone. In my app I use AVCaptureSession to record video. I add a videoDeviceInput and an audioDeviceInput, and as output I use AVCaptureMovieFileOutput. On some iPhones, specifically models after the iPhone X (iPhone 11, 12, 13, 14), the result has bad audio quality: the sound is very low and only becomes normal after a few seconds (around 7 seconds). I have tried setting the audio settings for the movie file output, but it still happens. Does anyone know how to solve this issue?
If I disable playback controls on AVPlayerViewController (showsPlaybackControls), some features of MPNowPlayingInfoCenter no longer work (play/pause, skip forward and backward).
I need custom video and audio controls on my AVPlayer in my app, that's why I disabled the iOS playback controls. But I also need the features of the MPNowPlayingInfoCenter. Is there another solution to achieve this?
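For reference, a common pattern when using custom controls is to register handlers with MPRemoteCommandCenter and keep MPNowPlayingInfoCenter updated yourself, so the system transport controls keep working without the built-in AVPlayerViewController UI. A minimal sketch, assuming an existing AVPlayer:
import AVFoundation
import MediaPlayer

// Hypothetical sketch: keep system transport controls working with a custom player UI.
// (Receiving remote events also requires an active playback audio session.)
func configureRemoteCommands(for player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()

    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
    center.skipForwardCommand.preferredIntervals = [15]
    center.skipForwardCommand.addTarget { _ in
        player.seek(to: player.currentTime() + CMTime(seconds: 15, preferredTimescale: 600))
        return .success
    }

    // Keep the Now Playing info in sync with the custom UI (title, artwork, rate, etc.).
    MPNowPlayingInfoCenter.default().nowPlayingInfo = [
        MPNowPlayingInfoPropertyPlaybackRate: player.rate
    ]
}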
I am trying to set the .commonIdentifierTitle, .iTunesMetadataTrackSubTitle, and .commonIdentifierDescription metadata on an AVPlayerItem's externalMetadata property, but unfortunately only the title and subtitle show up in the AVPlayerViewController UI.
According to the WWDC22 "Create a great video playback experience" video, the description should appear with a chevron.
Example code I used is exactly same as outlined in the video:
https://developer.apple.com/videos/play/wwdc2022/10147/?time=248
// Setting content external metadata
let titleItem = AVMutableMetadataItem()
titleItem.identifier = .commonIdentifierTitle
titleItem.value = // Title string
let subtitleItem = AVMutableMetadataItem()
subtitleItem.identifier = .iTunesMetadataTrackSubTitle
subtitleItem.value = // Subtitle string
let infoItem = AVMutableMetadataItem()
infoItem.identifier = .commonIdentifierDescription
infoItem.value = // Descriptive info paragraph
playerItem.externalMetadata = [titleItem, subtitleItem, infoItem]
Does anyone have a solution to this issue, or is this a bug in AVPlayerViewController?
Thanks
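Not confirmed by the session, but one thing that may be worth checking is whether the description item needs its extendedLanguageTag (and data type) set before AVPlayerViewController will display it. A sketch of a fully populated item, reusing the titleItem and subtitleItem from the snippet above and a placeholder description string:
// Hypothetical check: fully populate the description item, including extendedLanguageTag.
let infoItem = AVMutableMetadataItem()
infoItem.identifier = .commonIdentifierDescription
infoItem.value = "Descriptive info paragraph" as NSString
infoItem.dataType = kCMMetadataBaseDataType_UTF8 as String
infoItem.extendedLanguageTag = "und" // undetermined language
playerItem.externalMetadata = [titleItem, subtitleItem, infoItem]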
Can I do the same with the image?
PHAssetChangeRequest.creationRequestForAsset(from: UIImage(data: bytes)!)
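For reference, a minimal sketch of how that call is usually wrapped (assuming bytes holds valid image data and add-to-library permission has been granted):
import Photos
import UIKit

// Hypothetical sketch: save a UIImage built from raw data into the photo library.
func saveImage(from bytes: Data) {
    guard let image = UIImage(data: bytes) else { return }
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAsset(from: image)
    }) { success, error in
        if let error = error {
            print("Saving failed: \(error.localizedDescription)")
        }
    }
}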
Are there plans to expose the cinematic frames (e.g. disparity) to an AVAsynchronousCIImageFilteringRequest?
I want to use my own lens-blur shader on the cinematic frames.
Right now it looks like the cinematic frames are only available in an AVAsynchronousVideoCompositionRequest, like this:
guard let sourceFrame = SourceFrame(request: request, cinematicCompositionInfo: cinematicCompositionInfo) else { return }
let disparity = sourceFrame.disparityBuffer
Wondering if there is a way to programmatically enable enhanced stabilization when recording a video on iPhone 14+, something like https://developer.apple.com/documentation/avfoundation/avcaptureconnection/1620484-preferredvideostabilizationmode
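For reference, a minimal sketch of setting the property linked above on a movie file output's video connection. Which mode corresponds to the system camera's enhanced stabilization is an assumption here; .cinematicExtended is just used as an example:
import AVFoundation

// Hypothetical sketch: request a stabilization mode on the video connection.
func enableStabilization(on movieOutput: AVCaptureMovieFileOutput) {
    guard let connection = movieOutput.connection(with: .video) else { return }
    if connection.isVideoStabilizationSupported {
        // .cinematicExtended is one of the stronger modes; .auto lets the system decide.
        connection.preferredVideoStabilizationMode = .cinematicExtended
    }
}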
Hello,
consider this very simple example:
guard let url = URL(string: "some_url") else {
    return
}

let player = AVPlayer(url: url)
let controller = AVPlayerViewController()
controller.player = player
present(controller, animated: true) {
    player.play()
}
When the video URL redirects (returns a 302 when queried), AVPlayer's internal implementation queries it twice, which we have confirmed by proxying.
I'm not sure if I can provide the actual links, thus the screenshot is blurred.
It can be seen, though, that the redirecting URL, which receives a 302 response, is queried twice, and only after the second attempt does the actual redirection take place.
This behavior is problematic for the backend services and we need to remediate it somehow.
Do you have any idea how to address this problem, please?
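Not an explanation of why AVPlayer queries twice, but one possible mitigation is to resolve the redirect once with URLSession and hand the final URL to AVPlayer, so the player never touches the 302 itself. A minimal sketch, where the HEAD request assumes the server redirects HEAD the same way as GET:
import Foundation

// Hypothetical sketch: follow the 302 once ourselves, then play the resolved URL.
func resolveRedirect(of url: URL, completion: @escaping (URL) -> Void) {
    var request = URLRequest(url: url)
    request.httpMethod = "HEAD" // only the redirect target is needed, not the body
    URLSession.shared.dataTask(with: request) { _, response, _ in
        // URLSession follows redirects by default, so response?.url is the final URL.
        completion(response?.url ?? url)
    }.resume()
}

// Usage: resolveRedirect(of: someRedirectingURL) { finalURL in
//     let player = AVPlayer(url: finalURL)
//     // present AVPlayerViewController as before
// }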
I have an Electron app for Mac Catalyst. I have implemented audio/video calling functionality, and it works well.
I have also implemented functionality to share the screen, using the code below.
navigator.mediaDevices.getDisplayMedia(options).then(
  (streams) => {
    var peer_connection = session.sessionDescriptionHandler.peerConnection;
    var video_track = streams.getVideoTracks()[0];
    var sender_kind = peer_connection.getSenders().find((sender) => {
      return sender.track.kind == video_track.kind;
    });
    sender_kind.replaceTrack(video_track);
    video_track.onended = () => {};
  },
  () => {
    console.log("Error occurred while sharing screen");
  }
);
But when I hit the button to share the screen using the code above, I get the error below.
Uncaught (in promise) DOMException: Not supported
I have also tried navigator.getUserMedia(options, success, error). It is supported in Mac Catalyst desktop apps, but it only gives the webcam stream.
I have also checked online whether navigator.mediaDevices.getDisplayMedia(options) is supported in Mac Catalyst. It is, but I am still facing this issue.
I have also tried the desktopCapturer API of Electron, but I don't know how to get the streams from it.
//CODE OF 'main.js'
ipcMain.on("ask_permission", () => {
desktopCapturer
.getSources({ types: ["window", "screen"] })
.then(async (sources) => {
for (const source of sources) {
// console.log(source);
if (source.name === "Entire screen") {
win.webContents.send("SET_SOURCE", source.id);
return;
}
}
});
});
I have tried to get streams using the code below in preload.js, but I got the error Cannot read property 'srcObject' of undefined.
window.addEventListener("DOMContentLoaded", (event) => {
ipcRenderer.on("SET_SOURCE", async (event, sourceId) => {
try {
const stream = await navigator.mediaDevices.getUserMedia({
audio: false,
video: {
mandatory: {
chromeMediaSource: "desktop",
chromeMediaSourceId: sourceId,
minWidth: 1280,
maxWidth: 1280,
minHeight: 720,
maxHeight: 720,
},
},
});
handleStream(stream);
} catch (e) {
handleError(e);
}
});
let btn = document.getElementById("btnStartShareOutgoingScreens");
btn.addEventListener("click", () => {
if (isSharing == false) {
ipcRenderer.send("ask_permission");
} else {
console.error("USer is already sharing the screen..............");
}
});
});
function handleStream(stream) {
  const video = document.createElement("video");
  video.srcObject = stream;
  video.muted = true;
  video.id = "screenShareVideo";
  video.style.display = "none";

  const box = document.getElementById("app");
  box.appendChild(video);
  isSharing = true;
}
How can I resolve this? If it is not supported in Mac Catalyst, is there any other way to share the screen from a Mac Catalyst app using WebRTC?
I watched the WWDC 2023 session titled Explore media formats for the web. It talked in depth about the Managed Media Source API, but Google isn't finding anything about this API. I'd really like to read more about it.
I am currently working on a SwiftUI video app. When I load a slow-motion video shot at 240 fps (239.68), I use asset.loadTracks and then .load(.nominalFrameRate), which returns 30 fps (29.xx), asset being AVAsset(url:). The duration in asset.load(.duration) is also 8 times longer than the original duration. Do you know how to get the 239.68 that the Apple Photos app displays? Is it stored somewhere in the video metadata, or is it computed?
We provide a mechanism for a user to upload video files via mobile Safari, using a standard HTML file input, eg:
<input type="file" multiple>
As per a StackOverflow answer from a few years back, we've been including the multiple attribute, which worked around whatever was compressing the video and allowed it to be uploaded in its original format.
This no longer works, and the video is compressed as part of this upload workflow.
We've also noticed this is specific to the Photo Library: if the user copies the video over to Files and then uploads it via the "browse" prompt (instead of the Photo Library), it uploads as-is without compression.
Is there anything else we can do to prevent this compression of video prior to upload?
We noticed the video on our site being distorted in the banner. After hours of emptying caches and testing different phones, we narrowed it down to iOS 16.5 being the issue. We replicated it on BrowserStack in seconds.
You can see the issue here if you have 16.5. It reproduces in Safari, Firefox, and Chrome.
https://stundesign.com/
Background
I am building a web app where users can talk to a microphone and watch dynamic video that changes depending on what they say. Here is a sample flow:
User accesses the page
Mic permission popup appears, user grants it
User presses the start button, a standby video starts playing, and the mic turns on
User speaks, the speech gets turned into text and analyzed, and based on that, the video src changes
The new video plays, the mic turns on again, and the loop continues.
Problem
The problem on iOS is that, if mic access is granted, the volume drops dramatically, to a level where it is hard to hear even at max volume. The iPhone's volume up/down buttons don't help much either, which results in a terrible UX. I think the OS is forcefully keeping the volume down, but I could not find any documentation about it.
On the contrary, if mic permission is not granted, the volume does not change and the video plays at a normal volume.
Question
Why does this happen, and what can I do to prevent the volume from going down automatically? Any help would be appreciated. This does not happen on PC (macOS, Windows) or Android. Has anyone had a similar experience before?
Context
For context:
I have two video tags (position: absolute, width and height 100%) that are switched (by toggling z-indexes) to appear one on top of the other. This is to hide loading, buffering, and black screens from the user for better UX. If the next video is loaded and can play, the two are switched.
Both video tags have playsinline to enable inline playback, as required by WebKit.
Both tags start out muted; muted is removed after playback starts.
video.play() is initiated after the user grants mic permission.
Tech stack
Next.js with TypeScript, latest versions
Testing on the latest Chrome and Safari on iOS 16, fully updated