Integrate video and other forms of moving visual media into your apps.

Posts under Video tag

90 Posts
Post not yet marked as solved
1 Reply
499 Views
Since upgrading to iOS 17, WebRTC playback has problems when going fullscreen - the video element rapidly changes its dimensions while taking the full screen size, and the animation seems very glitchy. I'm observing this issue with every WebRTC player available, so I think the problem is in mobile Safari. Is there any way to prevent resizing of the video on fullscreen?
Post not yet marked as solved
1 Reply
444 Views
I'm building a Camera app where I have two AVCaptureSessions, one for video and one for audio. (See this for an explanation of why I don't just have one.) I receive my CMSampleBuffers in the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates. Now, when I enable the video stabilization mode "cinematicExtended", the AVCaptureVideoDataOutput has a 1-2 second delay, meaning I will receive my audio CMSampleBuffers 1-2 seconds earlier than I will receive my video CMSampleBuffers! This is the code:

```swift
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    let type = captureOutput is AVCaptureVideoDataOutput ? "Video" : "Audio"
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    print("Incoming \(type) buffer at \(timestamp.seconds) seconds...")
}
```

Without video stabilization, this logs:

```
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107862.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107862.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107862.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
```

With video stabilization, this logs:

```
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107861.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107861.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107861.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
```

As you can see, the video frames arrive almost a full second later than when they are intended to be presented! There are a few guides on how to use AVAssetWriter online, but all recommend starting the AVAssetWriter session once the first video frame arrives - in my case I cannot do that, since the first second of video frames is from before the user even started the recording. I also can't really wait 1 second here, as then I would lose 1 second of audio samples, since those are realtime and not delayed. I also can't really start the session on the first audio frame and drop all video frames until that point, since then the resulting video would start with one blank frame, as the video frame is never exactly on that first audio frame timestamp. Any advice on how I can synchronize that? Here is my code: RecordingSession.swift
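A minimal sketch of one way out of that dilemma, assuming both capture sessions timestamp their buffers against the default host-time clock (the variable names are illustrative, not from the original post): anchor the writer session at the moment the user taps record, then filter every incoming buffer by its presentation timestamp. Late-arriving stabilized video frames land at their true timeline positions, no audio is lost, and pre-record frames are dropped.

```swift
import AVFoundation

var recordStartTime: CMTime = .invalid
var sessionStarted = false

func startRecording() {
    // Capture buffers are stamped against the host time clock, so the tap
    // time is directly comparable to their presentation timestamps.
    recordStartTime = CMClockGetTime(CMClockGetHostTimeClock())
    assetWriter.startSession(atSourceTime: recordStartTime)
    sessionStarted = true
}

func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
    guard sessionStarted else { return }
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    // Stabilized video frames stamped before the record tap fall out here;
    // the delayed-but-valid frames after it are written normally.
    guard pts >= recordStartTime else { return }
    if input.isReadyForMoreMediaData {
        input.append(sampleBuffer)
    }
}
```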
Posted by mrousavy.
Post not yet marked as solved
0 Replies
454 Views
Hi, I started learning SwiftUI a few months ago, and now I'm trying to build my first app :) I am trying to display VTT subtitles from an external URL on top of a streaming video using AVPlayer and AVMutableComposition. I have been trying for a few days, checking online and Apple's documentation, but I can't manage to make it work. So far, I have managed to display the subtitles, but there is no video or audio playing... Could someone help? Thanks in advance, I hope the code is not too confusing.

```swift
//
//  EpisodeDetailView.swift
//  OroroPlayer_v1
//
//  Created by Juan Valenzuela on 2023-11-25.
//

import AVKit
import SwiftUI

struct EpisodeDetailView4: View {
    @State private var episodeDetailVM = EpisodeDetailViewModel()
    let episodeID: Int
    @State private var player = AVPlayer()
    @State private var subs = AVPlayer()

    var body: some View {
        VideoPlayer(player: player)
            .ignoresSafeArea()
            .task {
                do {
                    try await episodeDetailVM.fetchEpisode(id: episodeID)
                    let episode = episodeDetailVM.episodeDetail
                    guard let videoURLString = episode.url else {
                        print("Invalid videoURL or missing data")
                        return
                    }
                    guard let subtitleURLString = episode.subtitles?[0].url else {
                        print("Invalid subtitleURLs or missing data")
                        return
                    }
                    let videoURL = URL(string: videoURLString)!
                    let subtitleURL = URL(string: subtitleURLString)!
                    let videoAsset = AVURLAsset(url: videoURL)
                    let subtitleAsset = AVURLAsset(url: subtitleURL)
                    let movieWithSubs = AVMutableComposition()
                    let videoTrack = movieWithSubs.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let audioTrack = movieWithSubs.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
                    let subtitleTrack = movieWithSubs.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)
                    //
                    if let videoTrackItem = try await videoAsset.loadTracks(withMediaType: .video).first {
                        try await videoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: videoTrackItem, at: .zero)
                    }
                    if let audioTrackItem = try await videoAsset.loadTracks(withMediaType: .audio).first {
                        try await audioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                              of: audioTrackItem, at: .zero)
                    }
                    if let subtitleTrackItem = try await subtitleAsset.loadTracks(withMediaType: .text).first {
                        try await subtitleTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: videoAsset.load(.duration)),
                                                                 of: subtitleTrackItem, at: .zero)
                    }

                    let playerItem = AVPlayerItem(asset: movieWithSubs)
                    player = AVPlayer(playerItem: playerItem)
                    let playerController = AVPlayerViewController()
                    playerController.player = player
                    playerController.player?.play()
                    // player.play()
                } catch {
                    print("Error: \(error.localizedDescription)")
                }
            }
    }
}

#Preview {
    EpisodeDetailView4(episodeID: 39288)
}
```
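One thing worth checking (a diagnostic sketch, assuming the source is meant to be a file-based asset such as a remote MP4; HLS streams do not vend tracks that AVMutableComposition can splice): log what loadTracks actually returns before building the composition, since a source with zero loadable video/audio tracks produces exactly this subtitles-but-no-video symptom.

```swift
import AVFoundation

func logSourceTracks(for videoURL: URL) async throws {
    let videoAsset = AVURLAsset(url: videoURL)
    let videoTracks = try await videoAsset.loadTracks(withMediaType: .video)
    let audioTracks = try await videoAsset.loadTracks(withMediaType: .audio)
    print("video tracks: \(videoTracks.count), audio tracks: \(audioTracks.count)")
    // Both counts being 0 usually means the URL is a stream (e.g. HLS) whose
    // media cannot be inserted into an AVMutableComposition; in that case,
    // play the URL directly and render the subtitles another way.
}
```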
Posted by JuanV.
Post marked as solved
1 Reply
578 Views
Hey all! I'm trying to record Video from one AVCaptureSession and Audio from another AVCaptureSession. The reason I'm using two separate capture sessions is that I want to disable and enable the Audio one on the fly without interrupting the Video session. I believe Snapchat and Instagram also use this approach, as background music keeps playing when you open the Camera, and only slightly stutters (caused by the AVAudioSession.setCategory(..) call) once you start recording. However, I couldn't manage to synchronize the two AVCaptureSessions, and whenever I try to record CMSampleBuffers into an AVAssetWriter, the video and audio frames are out of sync. Here's a quick YouTube video showcasing the offset: https://youtube.com/shorts/jF1arThiALc

I notice two bugs:

1. The video and audio tracks are out of sync - video frames start almost a second before the first audio sample starts to be played back, and towards the end the delay is also noticeable because the video stops/freezes while the audio continues to play.
2. The video contains frames from BEFORE I even pressed startRecording(), as if my iPhone had a time machine!

I am not sure how the second one can even happen, so at this point I'm asking for help if anyone has any experience with that. Roughly my code:

```swift
let videoCaptureSession = AVCaptureSession()
let audioCaptureSession = AVCaptureSession()

func setup() {
    // ...adding videoCaptureSession outputs (AVCaptureVideoDataOutput)
    // ...adding audioCaptureSession outputs (AVCaptureAudioDataOutput)
    videoCaptureSession.startRunning()
}

func startRecording() {
    self.assetWriter = AVAssetWriter(outputURL: tempURL, fileType: .mov)
    self.videoWriter = AVAssetWriterInput(...)
    assetWriter.add(videoWriter)
    self.audioWriter = AVAssetWriterInput(...)
    assetWriter.add(audioWriter)
    AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker])
    audioCaptureSession.startRunning() // <-- lazy start that
}

func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    // Record Video Frame/Audio Sample to File in custom `RecordingSession` (AVAssetWriter)
    if isRecording {
        switch captureOutput {
        case is AVCaptureVideoDataOutput:
            self.videoWriter.append(sampleBuffer)
        case is AVCaptureAudioDataOutput:
            // TODO: Do I need to update the PresentationTimestamp here to synchronize it to the other capture session? or not?
            self.audioWriter.append(sampleBuffer)
        default:
            break
        }
    }
}
```

Full code here:
- Video Capture Session Configuration
- Audio Capture Session Configuration
- Later on, the startRecording() call
- RecordingSession, my AVAssetWriter abstraction
- Audio Session activation
- And finally, writing the CMSampleBuffers
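A sketch of the usual fix for both symptoms, assuming the default setup where both capture sessions stamp buffers against the shared host-time clock, so no PTS rewriting is needed (recordTapTime is an illustrative name: capture it with CMClockGetTime(CMClockGetHostTimeClock()) when the user taps record, and pass it to assetWriter.startSession(atSourceTime:)):

```swift
func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from _: AVCaptureConnection) {
    guard isRecording else { return }
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    // Dropping buffers stamped before the record tap removes both the
    // "time machine" frames and the initial A/V offset, because the two
    // sessions' timestamps already live on the same clock.
    guard pts >= recordTapTime else { return }
    switch captureOutput {
    case is AVCaptureVideoDataOutput:
        if videoWriter.isReadyForMoreMediaData { videoWriter.append(sampleBuffer) }
    case is AVCaptureAudioDataOutput:
        if audioWriter.isReadyForMoreMediaData { audioWriter.append(sampleBuffer) }
    default:
        break
    }
}
```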
Posted by mrousavy.
Post not yet marked as solved
0 Replies
460 Views
I have a use case where I need to rotate a CMSampleBuffer from landscape to portrait. I have written rough code for it, but I am still facing many issues with appending the sample buffer to the input frame. Here's the code:

```swift
guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
    return nil
}

// Get the dimensions of the image buffer
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)

// Determine if the image needs to be rotated
let shouldRotate = width > height

// Create a CIImage from the buffer
var image = CIImage(cvImageBuffer: imageBuffer)

// Rotate the CIImage if necessary
if shouldRotate {
    image = image.oriented(forExifOrientation: 6) // Rotate 90 degrees clockwise
}

let originalPixelFormatType = CVPixelBufferGetPixelFormatType(imageBuffer)

// Create a new pixel buffer
var newPixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, height, width, originalPixelFormatType, nil, &newPixelBuffer)
guard status == kCVReturnSuccess, let pixelBuffer = newPixelBuffer else {
    return nil
}
CVBufferPropagateAttachments(imageBuffer, newPixelBuffer!)

// Render the rotated image onto the new pixel buffer
let context = CIContext()
context.render(image, to: pixelBuffer)
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

var videoInfo: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault, imageBuffer: newPixelBuffer!, formatDescriptionOut: &videoInfo)

var sampleTimingInfo = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                          presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
                                          decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))

var newSampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault, imageBuffer: newPixelBuffer!, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: videoInfo!, sampleTiming: &sampleTimingInfo, sampleBufferOut: &newSampleBuffer)

let attachments: CFArray! = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true)
let dictionary = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
if let attachmentsArray = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true) as? [CFDictionary] {
    for attachment in attachmentsArray {
        for (key, value) in attachment as! Dictionary<CFString, Any> {
            if let value = value as? CFTypeRef {
                CMSetAttachment(newSampleBuffer!, key: key, value: value, attachmentMode: kCMAttachmentMode_ShouldPropagate)
            }
        }
    }
}

return newSampleBuffer!
```

The error that I am getting while appending the frame is:

```
Error occured, isVideo = false, status = 3, Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x282b87390 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
```

I read online that this error might be due to a different PixelFormatType, but how can that be, when I am obtaining the PixelFormatType from the buffer itself? If you want to see the difference between the original and rotated sample buffers: https://www.diffchecker.com/V0a55kCB/ Thanks in advance!
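One possible culprit worth ruling out (an editor's guess, not confirmed by this thread): CVPixelBufferCreate with nil attributes produces a buffer without IOSurface backing, which AVAssetWriterInput frequently rejects with -12780. A sketch of creating the destination buffer with explicit attributes instead:

```swift
import CoreVideo

// Request an IOSurface-backed pixel buffer for the rotated frame.
let attrs: [CFString: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary,
    kCVPixelBufferMetalCompatibilityKey: true,
]
var rotatedBuffer: CVPixelBuffer?
let createStatus = CVPixelBufferCreate(kCFAllocatorDefault,
                                       height,  // swapped: portrait width
                                       width,   // swapped: portrait height
                                       originalPixelFormatType,
                                       attrs as CFDictionary,
                                       &rotatedBuffer)
```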
Post not yet marked as solved
0 Replies
265 Views
When I upload a preview video to ASC (one which conforms to the preview specifications), the video uploads correctly at first. During the upload, ASC shows a blurred image of the first video frame. So far so good. But once the upload is finished, the video turns into a "cloud" image and says it is currently being processed. The problem is that it gets stuck in the "currently processed" status forever. I waited a few days, but processing never ended. To make it worse, the landscape "cloud" image turns into a portrait one when I come back to the ASC media center.

The problem:
- is reproducible
- occurs on different video files
- occurs on different appIDs
- occurs on all iPhone resolutions

This is a serious bug. I can't finalise my app submission. Any ideas?
Posted by chnbr.
Post not yet marked as solved
0 Replies
307 Views
Hello! I'm trying to display AVPlayerViewController in a separate WindowGroup - my main window opens a new window where the only element is a struct that implements UIViewControllerRepresentable for AVPlayerViewController:

```swift
@MainActor
public struct AVPlayerView: UIViewControllerRepresentable {
    public let assetName: String

    public init(assetName: String) {
        self.assetName = assetName
    }

    public func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = AVPlayer()
        return controller
    }

    public func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        Task {
            if context.coordinator.assetName != assetName {
                let url = Bundle.main.url(forResource: assetName, withExtension: ".mp4")
                guard let url else { return }
                controller.player?.replaceCurrentItem(with: AVPlayerItem(url: url))
                controller.player?.play()
                context.coordinator.assetName = assetName
            }
        }
    }

    public static func dismantleUIViewController(_ controller: AVPlayerViewController, coordinator: Coordinator) {
        controller.player?.pause()
        controller.player = nil
    }

    public func makeCoordinator() -> Coordinator {
        return Coordinator()
    }

    public class Coordinator: NSObject {
        public var assetName: String?
    }
}

WindowGroup(id: Window.videoPlayer.rawValue) {
    AVPlayerView(assetName: "wwdc")
        .onDisappear {
            print("DISAPPEAR")
        }
}
```

This displays the video player in non-inline mode and plays the video. The problem appears when I try to close the video player's window using the close button: sound from the video continues playing in the background. I've tried to clean up the state myself using the dismantleUIViewController and onDisappear methods, but they are not called by the system (it works correctly if a window doesn't contain AVPlayerView). This appears on Xcode 15.1 Beta 3 (I haven't tested it on other versions). Is there something I am doing incorrectly that is causing this issue, or is it a bug and I need to wait until it's fixed?
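A workaround sketch (an editor's assumption, given that dismantleUIViewController reportedly isn't called): own the AVPlayer outside the representable and pause it when the window's scene phase changes, rather than relying on view teardown. This assumes AVPlayerView is modified to accept the injected player.

```swift
import AVKit
import SwiftUI

struct VideoPlayerWindow: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var player = AVPlayer()   // owned by the window scene, not the representable

    var body: some View {
        AVPlayerView(assetName: "wwdc")      // assumed to be changed to use this injected player
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background || newPhase == .inactive {
                    player.pause()           // stop audio even if dismantleUIViewController never runs
                }
            }
    }
}
```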
Posted by kmoczala.
Post not yet marked as solved
1 Reply
604 Views
Hey all! I'm trying to build a Camera app that records Video and Audio buffers (AVCaptureVideoDataOutput and AVCaptureAudioDataOutput) to an mp4/mov file using AVAssetWriter. When creating the Recording Session, I noticed that it blocks for around 5-7 seconds before starting the recording, so I dug deeper to find out why. This is how I create my AVAssetWriter:

```swift
let assetWriter = try AVAssetWriter(outputURL: tempURL, fileType: .mov)
let videoWriter = self.createVideoWriter(...)
assetWriter.add(videoWriter)
let audioWriter = self.createAudioWriter(...)
assetWriter.add(audioWriter)
assetWriter.startWriting()
```

There are two slow parts in that code:

1. The createAudioWriter(...) function takes ages! This is how I create the audio AVAssetWriterInput:

```swift
// audioOutput is my AVCaptureAudioDataOutput, audioInput is the microphone
let settings = audioOutput.recommendedAudioSettingsForAssetWriter(writingTo: .mov)
let format = audioInput.device.activeFormat.formatDescription
let audioWriter = AVAssetWriterInput(mediaType: .audio,
                                     outputSettings: settings,
                                     sourceFormatHint: format)
audioWriter.expectsMediaDataInRealTime = true
```

The above code takes up to 3000ms on an iPhone 11 Pro! When I remove the recommended settings and just pass nil as outputSettings:

```swift
audioWriter = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
audioWriter.expectsMediaDataInRealTime = true
```

...it initializes almost instantly - something like 30 to 50ms.

2. Starting the AVAssetWriter takes ages! Calling this method:

```swift
assetWriter.startWriting()
```

...takes 3000 to 5000ms on my iPhone 11 Pro!

Does anyone have any ideas why this is so slow? Am I doing something wrong? It feels like passing nil as the outputSettings is not a good idea, and recommendedAudioSettingsForAssetWriter should be the way to go, but 3 seconds of initialization time is not acceptable. Here's the full code: RecordingSession.swift from react-native-vision-camera. This gets called from here. I'd appreciate any help, thanks!
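A mitigation sketch (an editor's suggestion, assuming the initialization cost itself can't be avoided): build the writer and call startWriting() on a background queue as soon as the camera screen appears, so the multi-second cost is paid before the user taps record; startSession(atSourceTime:) is then deferred until the first buffer arrives.

```swift
import AVFoundation

let prepareQueue = DispatchQueue(label: "recording.prepare", qos: .userInitiated)
var preparedWriter: AVAssetWriter?

// Call this when the camera UI appears, well before the user taps record.
func prepareWriterAhead(outputURL: URL) {
    prepareQueue.async {
        do {
            let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
            // ...add the video/audio AVAssetWriterInputs here...
            writer.startWriting()       // pay the slow startup cost up front
            preparedWriter = writer     // recording later only needs startSession(atSourceTime:)
        } catch {
            print("Failed to prepare writer: \(error)")
        }
    }
}
```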
Posted by mrousavy.
Post not yet marked as solved
0 Replies
436 Views
I'm encountering an issue with live video streaming on iOS 17 using AVPlayer with AVMutableMovie. I'm using a wss URL to stream video by capturing data in chunks (e.g., 5 seconds) and playing it. Upon completion of a 5-second segment, I load another 5 seconds using self.player.replaceCurrentItem(with: nextPlayerItem). Despite listening to events via self.player.currentItem?.observe, the functionality appears to work well on iOS 16 but consistently displays a blank video on iOS 17.

```swift
private func playNext() {
    let nextSet = self.dataCollector.getNextItem(length: self.configuration.frameDelay)
    if nextSet.count == self.configuration.frameDelay {
        var playerTime: Int = self.player.currentItem != nil ? Int(CMTimeGetSeconds(player.currentTime())) : 0
        var allData = Data()
        allData.appendAll(dataSet: dataCollector.getFileType())
        nextSet.forEach { (data) in
            playerTime += 1
            allData.append(data.getFragmentData())
            self.currentFragmentTimes.updateValue(data.getFragmentTime(), forKey: playerTime)
        }
        if allData.count > 0 {
            self.player.replaceCurrentItem(with: AVPlayerItem(asset: AVMutableMovie(data: allData, options: nil)))
            self.playerInitializedTime = nil
            self.player.play()
        }
    }
}
```
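A diagnostic sketch (an editor's suggestion, not a fix): observe the replacement item's status before playing, so an iOS 17-specific asset rejection surfaces as an error instead of a silent black screen.

```swift
import AVFoundation

var statusObservation: NSKeyValueObservation?  // keep a strong reference alive

func playChecked(_ allData: Data, on player: AVPlayer) {
    let item = AVPlayerItem(asset: AVMutableMovie(data: allData, options: nil))
    statusObservation = item.observe(\.status, options: [.new]) { item, _ in
        switch item.status {
        case .failed:
            // iOS 17 may reject the asset here; log the reason instead of playing blind.
            print("Item failed: \(String(describing: item.error))")
        case .readyToPlay:
            print("Item ready to play")
        default:
            break
        }
    }
    player.replaceCurrentItem(with: item)
    player.play()
}
```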
Posted by GVRajeev.
Post not yet marked as solved
0 Replies
267 Views
I am using the AVCapture API and have tried all the stabilization settings and video settings that I am aware of, however I am unable to get the same quality of frames that Action Mode gets in the native iOS Camera app during motion video. How can I access the same API or settings that Action Mode uses within my app?
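For reference, a sketch of how the strongest available stabilization mode is normally selected (an editor's example; whether any public mode, including the iOS 17 .cinematicExtendedEnhanced, matches Action Mode exactly is not documented):

```swift
import AVFoundation

func enableBestStabilization(on connection: AVCaptureConnection, device: AVCaptureDevice) {
    guard connection.isVideoStabilizationSupported else { return }
    // Try the strongest modes first; availability depends on the active format.
    let preferred: [AVCaptureVideoStabilizationMode] = [
        .cinematicExtendedEnhanced,  // iOS 17+
        .cinematicExtended,
        .cinematic,
        .standard,
    ]
    for mode in preferred where device.activeFormat.isVideoStabilizationModeSupported(mode) {
        connection.preferredVideoStabilizationMode = mode
        break
    }
}
```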
Posted by IanWils.
Post not yet marked as solved
0 Replies
451 Views
I wish to parse the bitstream of HEVC video with alpha (the specific video format is described in WWDC2019 session 506: https://developer.apple.com/videos/play/wwdc2019/506). Taking the 'puppets_with_alpha_hevc.mov' file from 'Using HEVC Video with Alpha' as an example, I first extract the HEVC bitstream and then parse its fields. When it comes to the VPS field, as I reach the vps_extension, I find that the bitstream in 'puppets_with_alpha_hevc.mov' does not conform to the HEVC standard document, preventing further parsing. Besides the 'HEVC Video with Alpha Interoperability Profile.pdf', are there any more detailed documents describing the HEVC-video-with-alpha format? Also, has anyone been able to encode or decode HEVC-with-alpha videos on systems other than macOS?
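Not a bitstream-level answer, but for cross-checking: on Apple platforms the container advertises alpha through a format-description extension, which can confirm you are parsing the right track before digging into the VPS by hand. A sketch (editor's example):

```swift
import AVFoundation
import CoreMedia

func trackContainsAlpha(_ asset: AVURLAsset) async throws -> Bool {
    guard let track = try await asset.loadTracks(withMediaType: .video).first else {
        return false
    }
    let descriptions = try await track.load(.formatDescriptions)
    for desc in descriptions {
        // The extension is a CFBoolean; present and true for alpha-carrying tracks.
        if let flag = CMFormatDescriptionGetExtension(desc,
                extensionKey: kCMFormatDescriptionExtension_ContainsAlphaChannel) as? NSNumber {
            return flag.boolValue
        }
    }
    return false
}
```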
Posted by PaulDirac.
Post not yet marked as solved
0 Replies
444 Views
I have a ProRes 4444 format video with an alpha channel. The circle in the middle of the video is completely opaque, while the rest is fully transparent. Everything looks normal in iMovie, as I can see the background through the surrounding parts. However, when I play it using AVPlayer, the parts that are supposed to be fully transparent appear somewhat opaque, as shown in the image below. I used the official project provided by Apple named 'using_hevc_video_with_alpha', but I only replaced the HEVC-with-alpha file with a ProRes 4444 format video file. Below is the main code.

```swift
import Cocoa
import SpriteKit
import AVFoundation

class ViewController: NSViewController {
    @IBOutlet var skView: SKView!
    var videoPlayer: AVPlayer!

    override func viewDidLoad() {
        super.viewDidLoad()
        if let view = self.skView {
            // Load the SKScene from 'backgroundScene.sks'
            guard let scene = SKScene(fileNamed: "backgroundScene") else {
                print("Could not create a background scene")
                return
            }
            // Set the scale mode to scale to fit the window
            scene.scaleMode = .aspectFill
            // Present the scene
            view.presentScene(scene)
            // Add the video node
            guard let alphaMovieURL = Bundle.main.url(forResource: "xuewang", withExtension: "mov") else {
                print("Failed to overlay alpha movie on the background")
                return
            }
            videoPlayer = AVPlayer(url: alphaMovieURL)
            let video = SKVideoNode(avPlayer: videoPlayer)
            video.size = CGSize(width: view.frame.width, height: view.frame.height)
            print("Video size is %f x %f", video.size.width, video.size.height)
            scene.addChild(video)
            // Play video
            videoPlayer.play()
        }
    }
}
```
Posted by PaulDirac.
Post not yet marked as solved
0 Replies
460 Views
Our initial understanding was that this event is fired only when DRM blocks the video playback. However, in the present case we see that it is fired even when playback is successful (playback with an external screen connected). To assess whether playback remains functional when the 'outputObscuredDueToInsufficientExternalProtection' event is triggered, we conducted two specific scenario tests: 1) playing an asset without any DRM restrictions, and 2) playing an asset with DRM restrictions.

Result: In our analysis, we identified that the 'outputObscuredDueToInsufficientExternalProtection' flag always remains set to one, even when playback is successful. However, it is expected to be set to zero when playback is successful.

Working case log, when playback is successful:

```
default 13:23:19.096682+0530 AMC ||| observeValueForKeyPath = "outputObscuredDueToInsufficientExternalProtection" object = <AVPlayer: 0x281284930> change kind = { kind = 1; new = 1; old = 0; }
```

Non-working case log, when playback came up as a black screen:

```
default 13:45:21.356857+0530 AMC ||| observeValueForKeyPath = "outputObscuredDueToInsufficientExternalProtection" object = <AVPlayer: 0x281c071e0> change kind = { kind = 1; new = 1; old = 0; }
```

We searched through related documents and conducted a Google search, but we couldn't find any information or references related to this behavior of the 'outputObscuredDueToInsufficientExternalProtection' event. It would be really appreciated if anyone could help us with this!
Posted by vinay1234.
Post not yet marked as solved
0 Replies
404 Views
We have logic in the SDK which stops playback when the outputObscuredDueToInsufficientExternalProtection event is fired by the player. Our initial understanding was that this event is fired only when DRM blocks the video playback. However, in the present case we see that it is fired even when playback is successful (playback with an external screen connected). To determine whether playback still functions when the 'outputObscuredDueToInsufficientExternalProtection' event is triggered, we temporarily disabled the playback-stop implementation that runs after the event is triggered. code snippet -

Observations:
- After this event was triggered during mirrored playback using a Lightning to HDMI connector, our expectation was that the playback would result in a black screen. However, to our surprise, the playback worked perfectly, indicating that this event is being triggered even when there are no DRM restrictions on that asset's playback.
- Another scenario we tested involved using a VGA connector. In this case, we observed that the 'outputObscuredDueToInsufficientExternalProtection' event was triggered. Initially, playback started as expected when we commented out the playback-stop implementation. However, after a few seconds of playback, the screen went black.

In the first scenario, it was unexpected for the 'outputObscuredDueToInsufficientExternalProtection' event to trigger, as the playback worked without issues even after the event was triggered. However, in the second scenario, the event was triggered as expected. The issue we identified is that this event is being triggered irrespective of the presence of DRM restrictions on the asset. In another scenario, we attempted to differentiate between the VGA and HDMI connectors to determine whether such a distinction was possible. However, we found that the VGA cable was also recognized as an HDMI port in the case of iOS. We also tested the issue on an older iOS version (iOS 14.6.1) to see if the problem persisted. Surprisingly, we found that the 'outputObscuredDueToInsufficientExternalProtection' event was triggered even on the older OS version.

Conclusion: In our analysis, we have identified that the 'outputObscuredDueToInsufficientExternalProtection' flag always remains true even though the output is not obscured.

Working case log:

```
default 13:23:19.096682+0530 AMC ||| observeValueForKeyPath = "outputObscuredDueToInsufficientExternalProtection" object = <AVPlayer: 0x281284930> change kind = { kind = 1; new = 1; old = 0; }
```

Non-working case log:

```
default 13:45:21.356857+0530 AMC ||| observeValueForKeyPath = "outputObscuredDueToInsufficientExternalProtection" object = <AVPlayer: 0x281c071e0> change kind = { kind = 1; new = 1; old = 0; }
```

We searched through related documents and conducted a Google search, but we couldn't find any information or references related to this behavior of the 'outputObscuredDueToInsufficientExternalProtection' event. It would be really appreciated if anyone could help us with this!
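For context, a sketch of the observation being described, reconstructed with the block-based KVO API (editor's reconstruction; the stop-playback decision is exactly the part under question here):

```swift
import AVFoundation

var obscuredObservation: NSKeyValueObservation?  // retain the observation

func observeObscuredFlag(on item: AVPlayerItem) {
    obscuredObservation = item.observe(
        \.isOutputObscuredDueToInsufficientExternalProtection,
        options: [.old, .new]
    ) { _, change in
        // Per the reports above, this can read true even while frames render
        // normally, so treating the flag alone as "stop playback" is unreliable.
        print("outputObscured changed: \(change.oldValue ?? false) -> \(change.newValue ?? false)")
    }
}
```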
Posted by vinay1234.
Post marked as solved
4 Replies
1.1k Views
I'm loading a page inside an iframe. The page contains a video with the playsinline attribute. The video is completely unresponsive to touch events, meaning if it fills up the screen I can't scroll past it, thus breaking the page. Note the video also has autoplay, loop and muted attributes, but these did not cause scrolling issues; only playsinline causes the break. Myself and another tester are running Safari on an iPhone 14 with iOS 17.0.3 and it's broken for both of us. Confirmed with 2 additional testers there is no issue with iPhones running iOS 16.6.

Test case: https://codepen.io/gem0303/pen/qBgEeaG

Repro steps:
1. Load in Safari on an iPhone with iOS 17.0.3.
2. Tap and drag on the white background and dummy scrolling text -- works fine.
3. Tap and drag on the cat video -- scrolling is impossible.
Posted by gem0303.
Post not yet marked as solved
0 Replies
380 Views
I have designed a media player app in which I need to implement live TV, movies, and series, so the URLs can be of any type: .ts format for live TV, and .mp4, .mov, etc. for other content. I am also going to work with m3u playlists. But AVPlayer does not support all of these URLs, so I would appreciate some suggestions and solutions: what is the best practice for working with all these kinds of URLs?
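A small triage sketch (an editor's suggestion): AVPlayer natively handles HLS (.m3u8) and file formats such as .mp4/.mov, but not raw MPEG-TS or .m3u playlists, so one pragmatic approach is to test playability per URL and route unsupported sources to a fallback engine (e.g. an FFmpeg-based player).

```swift
import AVFoundation

func playableByAVPlayer(_ url: URL) async -> Bool {
    let asset = AVURLAsset(url: url)
    // isPlayable is false for containers AVFoundation can't handle (e.g. raw .ts).
    return (try? await asset.load(.isPlayable)) ?? false
}

// Usage sketch:
// if await playableByAVPlayer(url) { play(with: AVPlayer(url: url)) }
// else { playWithFallbackEngine(url) }
```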
Post not yet marked as solved
0 Replies
362 Views
I have two MacBook Pro computers. On one, an M1 Max, HomeKit video playback will not work on my account, although it works in a guest account on the same computer. On the second, an i7 13", there are no issues at all. I can also view the playback on my iPhone 14 and my iPad Pro. This started when I first began to use HomeKit video, well over a year ago and on previous OS versions. There is a HomePod and several Apple TVs around the house, so there is no issue of a missing HomeKit hub, and as I mentioned, it's only on the M1 that there is an issue. It shows the recordings, but when I click on one to view it, it just zooms in on the scene. To confuse things, it will work on occasion, especially if I am calling Apple about it (yes, it's fickle). Everything is up to date, both machines are running the latest Sonoma, and the 13" has never had a problem with the video. It's going to be something so simple that I will do a "DUH", but what's the answer?
Post not yet marked as solved
1 Reply
527 Views
I am learning to develop WebRTC apps and have noticed that, starting with Safari and Safari Mobile 17, there is a noticeable zoom distortion when resizing some WebRTC players. This seems to be Safari-specific and only on version 17. What feature change could cause this? Here is an example of Catalina vs Sonoma. Sorry, I don't have access to any other versions in between at the moment, but I have only seen this issue since updating to Safari 17.
Post marked as solved
4 Replies
6.2k Views
Hello there, our team was asked to add the ability to manually select the video quality. I know that HLS is an adaptive stream and that, depending on the network conditions, it chooses the best quality that fits the current situation. I also tried some settings with preferredMaximumResolution and preferredPeakBitRate, but neither of them worked once the user was already watching the stream. I also tried replacing the currentPlayerItem with the new configuration, but that only allowed me to downgrade the quality of the video. When I wanted to set it to 4K, for example, it did not change to that track even if I set very high values for both parameters mentioned above. My question is whether there is any method that would allow me to force a certain quality from the manifest file. I already have an extraction step that can parse the manifest file and provide all the available information, but I still couldn't figure out how to make the player play a specific stream with my desired quality from the available playlist.
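A sketch of the usual workaround (an editor's example, assuming the manifest parser exposes each variant's BANDWIDTH and RESOLUTION attributes; AVPlayer has no public "pick this exact variant" API): pin both caps to the chosen variant so the adaptive logic has only one eligible rendition.

```swift
import AVFoundation

struct Variant {               // hypothetical output of the manifest parser
    let bandwidth: Double      // BANDWIDTH attribute, in bits per second
    let resolution: CGSize     // RESOLUTION attribute
}

func pin(_ variant: Variant, on item: AVPlayerItem) {
    // Cap the bitrate just above the target variant so higher renditions are excluded...
    item.preferredPeakBitRate = variant.bandwidth * 1.1
    // ...and cap the resolution so same-bitrate, higher-resolution variants are too.
    item.preferredMaximumResolution = variant.resolution
}
```

Note that these caps only bound the selection from above: under poor network conditions the player may still drop below the pinned variant.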