Integrate video and other forms of moving visual media into your apps.

Posts under the Video tag

91 Posts

This video scrubbing is ******* worthless
I don’t know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can’t scrub within the video, only within several seconds of the pause point, or, worse, it restarts the whole damn video. It used to be an easy and enjoyable process to harvest photos from a video, and you’ve turned it into the most frustrating part of operating my phone. Please release a patch to optionally enable the old scrubbing behavior.
Replies: 0 · Boosts: 0 · Views: 102 · Activity: 4d
Decode video frames at a lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to deliver video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of the output frames, not the size of the source frames that come into the composition's handler block. For example:

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let input = request.sourceImage // <- this still has the video's original size
        // ...
    })
    composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be much better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
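Absent a decoder-level option, a minimal sketch of the in-pipeline workaround the post mentions: frames are still decoded at full size, but downscaling first with CILanczosScaleTransform caps the working size for everything after it. The name maxDimension and the asset variable are illustrative:

    import AVFoundation
    import CoreImage

    let maxDimension: CGFloat = 1280 // hypothetical cap on the longest frame edge

    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        var image = request.sourceImage // still decoded at full size
        let longestEdge = max(image.extent.width, image.extent.height)
        if longestEdge > maxDimension {
            // Downscale first so the rest of the chain works on small frames.
            let scale = maxDimension / longestEdge
            image = image.applyingFilter("CILanczosScaleTransform",
                                         parameters: [kCIInputScaleKey: scale,
                                                      kCIInputAspectRatioKey: 1.0])
        }
        // ... apply the actual filter chain to `image` here ...
        request.finish(with: image, context: nil)
    })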
Replies: 0 · Boosts: 1 · Views: 123 · Activity: 6d
Rendering YCbCr input using Metal
I would like to take YCbCr CVPixelBuffers from AVCaptureVideoDataOutput, apply some processing in RGB space, render to an MTKView, and pass to AVAssetWriter for recording.

Right now, I'm doing this all manually: deswing the incoming data if necessary, choose the right matrix to convert to RGB, apply processing, etc. I also have to convert back to YCbCr before feeding the frames to AVAssetWriter, because encoding performs much better if I do. Is there any efficient, built-in way to achieve the same?

I can't use AVCaptureVideoPreviewLayer, since I need to do some further processing before display. I can't use AVCaptureVideoDataOutput's videoSettings to get automatic BGRA conversion, because that would lose bit depth for 10-bit video formats (and isn't available on all formats anyway). I see these Accelerate functions, but they seemingly don't use the GPU, nor do they support all the formats and bit depths I'd need. I found reference to some undocumented MTLPixelFormats that seem to do exactly what I want, but I don't want to rely on something like this unless it's explicitly endorsed. This would also incur an RGB/YCbCr conversion on every texture read and write, right?

Is there anything I'm missing here?
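For reference, a minimal sketch (assuming an 8-bit 4:2:0 biplanar buffer such as kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) of wrapping the two planes as Metal textures via CVMetalTextureCache, so the RGB conversion can run in a custom shader without copying:

    import AVFoundation
    import CoreVideo
    import Metal

    let device = MTLCreateSystemDefaultDevice()!
    var textureCache: CVMetalTextureCache?
    CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)

    // Wrap one plane of a CVPixelBuffer as an MTLTexture without copying.
    func makeTexture(from pixelBuffer: CVPixelBuffer,
                     plane: Int,
                     format: MTLPixelFormat) -> MTLTexture? {
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache!,
                                                  pixelBuffer, nil, format,
                                                  width, height, plane, &cvTexture)
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }

    // For 8-bit 4:2:0 biplanar video: luma is r8Unorm, interleaved CbCr is rg8Unorm.
    // (A 10-bit format would need 16-bit formats and range expansion in the shader.)
    func textures(for pixelBuffer: CVPixelBuffer) -> (y: MTLTexture, cbcr: MTLTexture)? {
        guard let y = makeTexture(from: pixelBuffer, plane: 0, format: .r8Unorm),
              let cbcr = makeTexture(from: pixelBuffer, plane: 1, format: .rg8Unorm)
        else { return nil }
        return (y, cbcr)
    }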
Replies: 0 · Boosts: 0 · Views: 164 · Activity: 1w
WKWebView breaks with isElementFullscreenEnabled preference set to true on fullscreen pause
I have a WKWebView that loads videos. We would like to be able to full-screen our videos, so we enable the fullscreen preference from the documentation. However, when it is set to true, full-screening a video and then pausing it makes the entire video player disappear. You can exit full screen and attempt to full-screen the video player once again, but upon doing this the entire app view will disappear and you'll see your desktop background (or whatever is currently behind your app). This behavior seems consistent across multiple websites with the current app. I have set up a sample project you can test here. The main error that appears in the console is the one below; I have not been able to find a solution to it, so maybe I am simply missing something here. I am on Sequoia 15.2 for Mac.

    Attempting to update all DD element frames, but the bounds or contentsRect are invalid. Bounds: X: 0.00 Y: 0.00, W: 0.00 H: 0.00, contentsRect: X: 0.00 Y: 0.00, W: 1.00 H: 1.00 , skipping
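For context, a minimal sketch of the configuration being described, assuming the post means WKPreferences' element-fullscreen switch:

    import WebKit

    let configuration = WKWebViewConfiguration()
    // Allows pages to call requestFullscreen() on elements such as <video>.
    configuration.preferences.isElementFullscreenEnabled = true

    let webView = WKWebView(frame: .zero, configuration: configuration)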
Replies: 2 · Boosts: 0 · Views: 244 · Activity: 1w
FaceTime camera resolves promises before operations complete
The attached strange_behaviour.mp4 video shows how, when running a list of statements that open (startCamera) and close (closeCamera) the camera on a MacBook Pro 2019 using the FaceTime camera, the calls return their value almost immediately, when in reality the camera is still opening and closing. We believe there might be a queue for statements to run against the camera, and the awaits finish when all the statements are inserted into the queue rather than when they are actually resolved. We also attached the expected_behaviour.mp4 video, where we replicate the same flow but using an external camera instead of the FaceTime camera. In this video, the promises take a bit longer to return, but they do so once the camera has actually been opened and closed the requested number of times. The project used in the videos is attached as project.tar.xz. We would like to know if the behaviour in strange_behaviour.mp4 is reproducible on your side and if there is a way to access the FaceTime camera so that it behaves like the external camera (expected_behaviour.mp4). Attachments: https://drive.google.com/drive/folders/1cOeFb5GMbh4mPOeZiZyyevk3N778Kn1M?usp=sharing
Replies: 0 · Boosts: 0 · Views: 129 · Activity: 1w
Issue with WebM Video Transparency Rendering on iOS
I'm experiencing a persistent issue with transparent WebM videos rendered via WKWebView in an iOS Capacitor app. The videos play normally; however, they display a black background frame, which does not occur in the web version of the app. I've tried: playing with the CSS settings, enabling experimental WKWebView features, and adjusting meta tags for inline video playback and hardware acceleration. Here is my code:

    <Carousel
      showThumbs={false}
      showStatus={false}
      showIndicators={true}
      showArrows={false}
      infiniteLoop={true}
      autoPlay={true}
      interval={5000} // Change slide every 5 seconds
      onChange={(index) => {
        if (playerRefs.current[index]) {
          playerRefs.current[index]?.seekTo(0);
          playerRefs.current[index]?.getInternalPlayer()?.play();
        }
      }}
    >
      {videos.map((video, index) => (
        <div key={index} className="video-slide">
          <ReactPlayer
            ref={(player) => (playerRefs.current[index] = player)}
            url={video.src}
            playing={isLoaded[index]} // Play only when video is loaded
            loop
            muted
            width="100%"
            onReady={() => handleVideoReady(index)} // Set loaded state when video is ready
            style={{ backgroundColor: 'transparent' }}
            config={{
              file: {
                attributes: {
                  playsInline: true,
                },
              },
            }}
          />
          <p className="description">{video.description}</p>
        </div>
      ))}
    </Carousel>

I'm working with React and Capacitor. The videos work perfectly when I test on the web app; the problem occurs only in my iOS app.
Replies: 2 · Boosts: 0 · Views: 109 · Activity: 1w
Playback problem on AVPlayer with MPEG-TS streams
We are experiencing an issue with our HLS MPEG-TS streams on Apple devices: the AVPlayer in our iOS app and in Safari jumps back to the start when the player automatically changes quality. This occurs despite the stream still indicating that it is live, and there is no change in the seek bar. After testing our streams with the Apple HLS Validator, the only problem that occurred was a "Measured peak bitrate compared to multivariant playlist declared value exceeds error tolerance" error. In Chrome and in our Android app this playback bug does not happen. Has anyone else experienced similar issues with AVPlayer?
Replies: 1 · Boosts: 0 · Views: 154 · Activity: 2w
H.264 License
I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported. Do I need to pay a license fee to MPEG LA for using the H.264 format? Or are these fees already covered by Apple? I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware components), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
Replies: 1 · Boosts: 0 · Views: 201 · Activity: 3w
AVPlayer HLS: Restrict Stream Resolution
Hello, we have an HLS streaming app and we use AVPlayer for HLS playback. We want to implement a dynamic-resolution feature based on the user's selection: for example, if the user selects 1080p, playback should stay at 1080p. We tried the preferredMaximumResolution and preferredPeakBitRate parameters, but AVPlayer does not enforce the selection: setting preferredMaximumResolution = CGSize(width: 1920, height: 1080) does not force the player to play only the 1080p profile; it still drops the resolution to 720p, and we do not want a 720p stream when the user selected 1080p. Is there any method to force it, even if the stream stalls? Thank you in advance.
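For reference, a minimal sketch of the two advisory properties the post mentions (streamURL and the bitrate value are illustrative). Both act only as upper bounds for adaptive bitrate selection, which matches the behavior described: neither pins playback to a single variant.

    import AVFoundation

    let item = AVPlayerItem(url: streamURL) // streamURL is illustrative

    // Upper bounds only: the player may still switch to lower variants
    // when bandwidth drops; neither property forces one profile.
    item.preferredMaximumResolution = CGSize(width: 1920, height: 1080)
    item.preferredPeakBitRate = 6_000_000 // illustrative cap, bits per second

    let player = AVPlayer(playerItem: item)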
Replies: 0 · Boosts: 0 · Views: 121 · Activity: 3w
Core Image Tiling and ROI
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects: backwards, upside-down, etc.

The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:

    return [mirrorsKernel applyWithExtent:[src extent]
                              roiCallback:^CGRect(int index, CGRect destRect) {
                                  return CGRectInset(destRect, -1, -1);
                              }
                                arguments:@[src]];

The kernel is very simple, sampling from the X coordinate equal to the src width minus the current coordinate:

    float4 backwards(sampler image, destination dest) {
        float2 dc = dest.coord();
        dc.x = image.size().x - dc.x;
        return image.sample(image.transform(dc));
    }

When this runs on an image that is wider than 4096, tiling happens, and destRect is not the entire image, so the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this leads to serious performance issues when src gets too large. All of this makes sense to me.

What I'd like to know is whether there is a way to handle this filter's requirement of sampling from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits, but I wanted to see if there's anything we're missing.

I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that mirror either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that covers only the portion of the source that is being mirrored. We have not attempted that yet.

Any thoughts/input appreciated, thanks!
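On the post's final idea, a sketch (untested, in Swift rather than the post's Objective-C; mirrorsKernel and src stand for the post's objects): the backwards kernel reads exactly one source pixel per output pixel, so the ROI for each tile can be destRect reflected across the image's vertical centerline instead of the full source extent:

    import CoreImage

    // Assuming `mirrorsKernel: CIKernel` and `src: CIImage` as in the post.
    let srcExtent = src.extent

    let output = mirrorsKernel.apply(
        extent: srcExtent,
        roiCallback: { _, destRect in
            // An output pixel at x samples the source at (width - x), so the
            // tile's ROI is destRect mirrored about the vertical centerline.
            let mirroredMinX = srcExtent.minX + (srcExtent.maxX - destRect.maxX)
            return CGRect(x: mirroredMinX, y: destRect.minY,
                          width: destRect.width, height: destRect.height)
        },
        arguments: [src]
    )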
Replies: 3 · Boosts: 0 · Views: 255 · Activity: 3w
How to Load Stereoscopic Video Using AVFoundation?
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos. Any guidance or code snippets would be greatly appreciated; I’m not understanding the Apple developer videos very well... Thank you in advance for your help! Best, Lau
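For what it's worth, a minimal sketch (assuming iOS 17+/visionOS, where AVFoundation exposes a stereo multiview media characteristic; videoURL is illustrative) of loading an asset and checking whether its video track is stereoscopic before playing it:

    import AVFoundation

    let asset = AVURLAsset(url: videoURL) // videoURL is illustrative

    Task {
        // Load tracks with the modern async loading API.
        let videoTracks = try await asset.loadTracks(withMediaType: .video)
        if let track = videoTracks.first,
           track.hasMediaCharacteristic(.containsStereoMultiviewVideo) {
            // MV-HEVC stereoscopic content; AVPlayer renders it natively
            // on platforms that support it (e.g. visionOS).
            print("Stereoscopic video track found")
        }
        let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))
        player.play()
    }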
Replies: 1 · Boosts: 0 · Views: 218 · Activity: Oct ’24
View HomeKit Camera Live Stream
Hello, my goal is to display the live video stream from a HomeKit camera in my Swift/SwiftUI iOS app. Using HMHomeManager, I have retrieved the HMHome object and found the camera as an HMAccessory object. Within this, I have located the HMCameraProfile and, through it, the HMCameraStreamControl. So far, I believe I understand the object model. However, I do not know how to display the stream. I have come across HMCameraView, but I am not sure how to obtain the required HMCameraSource for the view from the object model described above. Is my current approach fundamentally incorrect? I would appreciate it if you could explain the relationships involved and possibly point me to an example project. Best regards, Matthias
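As far as I understand the object model, the missing link is that the stream control's cameraStream is itself an HMCameraSource once the stream has started. A minimal UIKit sketch (for SwiftUI, HMCameraView would need a UIViewRepresentable wrapper):

    import HomeKit
    import UIKit

    final class CameraStreamViewController: UIViewController, HMCameraStreamControlDelegate {
        let cameraView = HMCameraView()
        var streamControl: HMCameraStreamControl? // from the accessory's HMCameraProfile

        override func viewDidLoad() {
            super.viewDidLoad()
            cameraView.frame = view.bounds
            view.addSubview(cameraView)

            streamControl?.delegate = self
            streamControl?.startStream()
        }

        // Once the stream is live, its HMCameraStream acts as the camera source.
        func cameraStreamControlDidStartStream(_ cameraStreamControl: HMCameraStreamControl) {
            cameraView.cameraSource = cameraStreamControl.cameraStream
        }

        func cameraStreamControl(_ cameraStreamControl: HMCameraStreamControl,
                                 didStopStreamWithError error: Error?) {
            cameraView.cameraSource = nil
        }
    }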
Replies: 0 · Boosts: 0 · Views: 190 · Activity: Oct ’24
Video Memory Leak when Backgrounding
While trying to control the following two scenes in one ImmersiveSpace, we found a memory leak when backgrounding the app while a stereoscopic video is playing.

ImmersiveView's two scenes:
- Scene 1 has one toggle button.
- Scene 2 has the same toggle button plus a 180-degree skysphere playing a stereoscopic video.

Attached are the files and images of the memory leak as captured in Xcode. To replicate this memory leak, follow these steps:
1. Create a new visionOS app using the Xcode template.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video with a large 3D 180-degree video of your own, bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display memory use by pressing command+7 and selecting Memory to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it. The first immersive space should appear.
11. Repeat steps 7 through 10 multiple times.
12. Observe the memory use going up; the graph should look similar to the attached illustration.

In ImmersiveView, upon backgrounding the app, I do:
- a reset method to clear the video's memory
- a dismiss of the ImmersiveSpace containing the video (even though upon execution visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)

Am I not releasing the memory correctly? Or is there really a memory leak in either SwiftUI's ImmersiveSpace or AVFoundation's AVPlayer upon backgrounding the app?

Attachments: App file (TestVideoLeakOneImmersiveView), First ImmersiveSpace file (InitialImmersiveView), Second ImmersiveSpace file (ImmersiveView), Skysphere Model file, Immersive180VideoViewModel file, AppModel
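On the reset question, a minimal sketch (hypothetical holder type, not the post's actual files) of a teardown that releases an AVPlayer's decoder resources; any remaining strong reference, for example inside a RealityKit video material, would still keep the memory alive:

    import AVFoundation

    final class VideoPlaybackModel { // hypothetical holder; the post's view model may differ
        var player: AVPlayer?

        // Tear down playback so the decoder and its buffers can be released.
        func reset() {
            player?.pause()
            player?.replaceCurrentItem(with: nil) // drop the AVPlayerItem and its buffers
            player = nil                          // release the player itself
        }
    }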
Replies: 3 · Boosts: 0 · Views: 351 · Activity: Oct ’24
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
    [[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                    options:videoOptions
                                              resultHandler:^(AVAsset *_Nullable avAsset, AVAudioMix *_Nullable audioMix, NSDictionary *_Nullable info) {
        if ([avAsset isKindOfClass:[AVURLAsset class]]) {
            AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
            NSURL *videoURL = urlAsset.URL;
            mediaInfo[@"path"] = videoURL.absoluteString;
        } else {
            // Failed to get video asset
            completion(nil);
        }
    }];

Before iOS 18, I was able to access the AVAsset video using the method above with the URL, but starting from iOS 18, the following error appears: 'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
Replies: 2 · Boosts: 0 · Views: 367 · Activity: Oct ’24
AVExternalStorageDevice permissions behavior completely broken on iOS 18?
I'm attempting to use AVExternalStorageDevice.requestAccess on iOS 18 using Xcode 16. When calling requestAccess, a dialog does appear, but the completionHandler closure is never called to indicate whether access was granted. If using the async version, the function just never returns. Calling requestAccess also results in a mediaServicesWereReset (-11819) error without fail. Supposedly, "the system only presents the dialog to a person the first time your app calls the method." That also doesn't appear to be the case. The dialog appears every time requestAccess is called, regardless of previous invocations and whether "Allow" or "Don't Allow" was selected. The dialog itself says "You can change this in Privacy settings." I cannot find this permission anywhere in the Settings app, neither under Privacy & Security nor under the app-specific settings page. Has anyone else experienced these issues? Am I missing something here? I did suspect permissions issues and tried adding an NSRemovableVolumesUsageDescription entry to the app. This did not appear to change anything.
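For reference, a sketch of the call pattern the post describes; the signatures below follow the post's wording (a completion-handler form and an async form) and should be verified against the current AVFoundation headers:

    import AVFoundation

    // Completion-handler form: per the post, the dialog appears on iOS 18
    // but this closure is reportedly never invoked.
    AVExternalStorageDevice.requestAccess { granted in
        print("External storage access granted: \(granted)")
    }

    // Async form: per the post, this call reportedly never returns.
    Task {
        let granted = await AVExternalStorageDevice.requestAccess()
        print("External storage access granted: \(granted)")
    }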
Replies: 1 · Boosts: 1 · Views: 299 · Activity: Oct ’24
WKWebView on iOS 18: Video Auto-Play Not Working Inline, Launching in Full-Screen Instead
Hello, I'm experiencing an issue with WKWebView in my iOS app when running on devices with iOS 18. Specifically, auto-playing videos do not play inline as expected but instead switch to full-screen mode. This problem only occurs on iOS 18; in earlier versions, the videos played inline without any issues. Here is the configuration I'm using for WKWebView:

    let configuration = WKWebViewConfiguration()
    configuration.allowsInlineMediaPlayback = true
    configuration.mediaTypesRequiringUserActionForPlayback = []

Additionally, I have set the playsinline attribute in the video tag on the web page. Despite these settings, when the webpage initially loads, the video sometimes plays in full-screen mode rather than inline. I am looking for a solution to ensure that videos always play inline as intended on iOS 18. Has anyone encountered a similar issue or know of any workarounds? Any help would be greatly appreciated! Thank you!
Replies: 1 · Boosts: 1 · Views: 486 · Activity: Sep ’24