I have the following piece of code that works in Swift 5:
func test() {
    let url = Bundle.main.url(forResource: "movie", withExtension: "mov")
    let videoAsset = AVURLAsset(url: url!)

    let t1 = CMTime(value: 1, timescale: 1)
    let t2 = CMTime(value: 4, timescale: 1)
    let t3 = CMTime(value: 8, timescale: 1)
    let timesArray = [
        NSValue(time: t1),
        NSValue(time: t2),
        NSValue(time: t3)
    ]

    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    generator.generateCGImagesAsynchronously(forTimes: timesArray) { requestedTime, image, actualTime, result, error in
        let img = UIImage(cgImage: image!)
    }
}
When I compile and run it in Swift 6, it gives an
EXC_BREAKPOINT (code=1, subcode=0x1021c7478)
I understand that Swift 6 adopts strict concurrency. My question is: if I start porting my code, what is the recommended way to change the code above?
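For reference, here is the direction I'm considering: a minimal sketch assuming iOS 16 or later so the async image(at:) API is available. I don't know whether this is the officially recommended approach.
import AVFoundation
import UIKit

// Sketch only: image(at:) is the async counterpart of the completion-handler
// API and avoids capturing non-Sendable state in a callback closure.
func test() async throws {
    guard let url = Bundle.main.url(forResource: "movie", withExtension: "mov") else { return }
    let videoAsset = AVURLAsset(url: url)
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let times = [CMTime(value: 1, timescale: 1),
                 CMTime(value: 4, timescale: 1),
                 CMTime(value: 8, timescale: 1)]
    for time in times {
        // Suspends until the frame is ready instead of calling back on an arbitrary queue.
        let (cgImage, _) = try await generator.image(at: time)
        let img = UIImage(cgImage: cgImage)
        _ = img // use the thumbnail here
    }
}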
Rgds,
James
I want my app to allow the user to search for certain words in a video file and the transcript of that video. I found a Transcript class, but I don't remember which framework it is in. Would someone point me in the right direction? What framework and classes should I use?
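If it helps clarify what I'm after, here is a rough sketch using the Speech framework's SFTranscription/SFTranscriptionSegment types, which may or may not be what I half-remember. This is an assumption on my part, not a confirmed answer; it also assumes speech-recognition authorization has been granted and that the recognizer accepts the movie file's URL directly (otherwise the audio track would need to be exported first).
import Speech

// Sketch: transcribe the file and search the timestamped segments.
func searchSpokenWords(in videoURL: URL, for term: String) {
    guard let recognizer = SFSpeechRecognizer() else { return }
    let request = SFSpeechURLRecognitionRequest(url: videoURL)
    _ = recognizer.recognitionTask(with: request) { result, _ in
        guard let result, result.isFinal else { return }
        // Each SFTranscriptionSegment carries the word and its timestamp,
        // which is what a search feature needs to jump to the match.
        for segment in result.bestTranscription.segments
        where segment.substring.localizedCaseInsensitiveContains(term) {
            print("'\(term)' spoken at \(segment.timestamp) seconds")
        }
    }
}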
1. In the FxRemoteWindowAPI protocol in FxPlug 4.3, there is no way to set window.frame.origin.
2. When I use a custom NSWindow in FxPlug, [Window setLevel:NSFloatingWindowLevel]; is not applied either.
3. How should I keep my window in front of Final Cut Pro without affecting the operation of Final Cut Pro?
We have developed a custom player tvOS application using AVFoundation's AVPlayer. When we use the Siri command "What did they say?", playback goes backwards, but subtitles temporarily stop working. Could anyone please suggest a solution for this issue? :)
NSPanel *panel = [[myPanel alloc] initWithContentRect:NSMakeRect(100, 100, 400, 300)
                                            styleMask:NSWindowStyleMaskTitled | NSWindowStyleMaskClosable
                                              backing:NSBackingStoreBuffered
                                                defer:NO];
[panel setLevel:NSFloatingWindowLevel]; // Has no effect????
[panel makeKeyAndOrderFront:self];
Question: In FxPlug 4.3, why can't setLevel place the panel in front of Final Cut Pro and Motion?
Help~~~ I haven't been able to find an answer anywhere!
I have the Facebook SDK version 17.0.2 and Xcode 15. Sharing photos and links works fine, but when I try sharing videos, I get the following error:
Failed to log access with error: access=<PATCCAccess 0x301d12b20> accessor:<<PAApplication 0x301d27e30 identifierType:auditToken identifier:{pid:18440, version:47210}>> identifier:A9159DCD-76B1-4C77-A01E-DA611929B50B kind:intervalEvent timestampAdjustment:0 visibilityState:0 assetIdentifierCount:0 accessCount:0 tccService:kTCCServicePhotos, error=Error Domain=NSCocoaErrorDomain Code=4097 "connection to service with pid 15679 named com.apple.privacyaccountingd" UserInfo={NSDebugDescription=connection to service with pid 15679 named com.apple.privacyaccountingd}
I've created a video recorder that records front-facing video and depth-camera video at the same time. But as soon as audio is added to the AVCaptureSession, the depth video capture stops working.
Is there any way to record and save audio, video and depth camera at the same time?
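For reference, this is roughly the single-session configuration I have in mind: video and depth synchronized through AVCaptureDataOutputSynchronizer, with audio delivered through its own data-output delegate. It's a sketch under assumptions, not a confirmed fix; delegate conformances and the AVAssetWriter wiring are omitted.
import AVFoundation

// Sketch: one session carrying TrueDepth video, depth data, and microphone audio.
func makeSession(delegate: AVCaptureDataOutputSynchronizerDelegate & AVCaptureAudioDataOutputSampleBufferDelegate,
                 queue: DispatchQueue) throws -> (AVCaptureSession, AVCaptureDataOutputSynchronizer)? {
    let session = AVCaptureSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    guard let camera = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front),
          let mic = AVCaptureDevice.default(for: .audio) else { return nil }

    let cameraInput = try AVCaptureDeviceInput(device: camera)
    let micInput = try AVCaptureDeviceInput(device: mic)
    guard session.canAddInput(cameraInput), session.canAddInput(micInput) else { return nil }
    session.addInput(cameraInput)
    session.addInput(micInput)

    let videoOutput = AVCaptureVideoDataOutput()
    let depthOutput = AVCaptureDepthDataOutput()
    let audioOutput = AVCaptureAudioDataOutput()
    for output in [videoOutput, depthOutput, audioOutput] as [AVCaptureOutput] {
        guard session.canAddOutput(output) else { return nil }
        session.addOutput(output)
    }

    // Synchronize video and depth frames; audio arrives via its own delegate callback.
    let synchronizer = AVCaptureDataOutputSynchronizer(dataOutputs: [videoOutput, depthOutput])
    synchronizer.setDelegate(delegate, queue: queue)
    audioOutput.setSampleBufferDelegate(delegate, queue: queue)

    return (session, synchronizer)
}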
I had no luck compiling a sample code provided by Apple with Xcode 16.0 beta 5.
ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos)
The part that is failing is:
streamOutput.capturedFrameHandler = { continuation.yield($0) }
And the error message is
Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses
Please enlighten me as to why this is an issue and how to avoid it.
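My current understanding, which may be wrong, is that AsyncStream.Continuation.yield takes its element as a sending parameter, so the diagnostic should disappear if the element type is Sendable. A sketch of the workaround I'm experimenting with, assuming the sample's CapturedFrame struct really is safe to share across tasks:
// Sketch only: declare the sample's CapturedFrame type Sendable so the value
// can be sent into the continuation. @unchecked is an assumption that its
// stored properties are actually safe to share; please correct me if not.
extension CapturedFrame: @unchecked Sendable {}

// With a Sendable element, the original handler should compile unchanged:
// streamOutput.capturedFrameHandler = { continuation.yield($0) }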
Thanks in advance!
Hello,
We are currently developing a mobile game using Unreal Engine 5, and we have encountered an issue where a specific video (mp4 format) stops displaying at a particular frame during playback within the game.
The code within Unreal fails at the following point, causing the issue:
CMTime OutputItemTime = [Output itemTimeForHostTime:CACurrentMediaTime()];
if (![Output hasNewPixelBufferForItemTime:OutputItemTime])
{
    return;
}
We have referred to the following Apple documentation:
AVPlayerTimeControlStatus
reasonForWaitingToPlay
Upon logging, we observed the following:
[2024.08.13-05.18.35:266][429]LogTemp: AMP PlayerItem.status AVPlayerItemStatusReadyToPlay
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.timeControlStatus AVPlayerTimeControlStatusWaitingToPlayAtSpecifiedRate
[2024.08.13-05.18.35:266][429]LogTemp: AMP reasonForWaitingToPlay: AVPlayerWaitingToMinimizeStallsReason
[2024.08.13-05.18.35:266][429]LogTemp: AMP MediaPlayer.rate 1.000000
[2024.08.13-05.18.35:268][430]LogTemp: avf CurrentMediaTime : 455097.836833
[2024.08.13-05.18.35:268][430]LogTemp: avf OutputItemTime: 3.868346
[2024.08.13-05.18.35:268][430]LogTemp: avf Sampler::Tick() fail hasNewPixelBufferForItemTime OutputItemTime: 3.868346
This issue consistently occurs with videos that have the following specifications:
Codec: H.264
Resolution: 1080x608
Bitrate: 7,922,135 bits/sec
Duration: 90.17 seconds
Frame Rate: 30.0 fps
Pixel Format: yuv420p
Profile: Main
We would like to inquire about the possible reasons for the playback failure and the recommended MP4 specifications for seamless playback on Apple devices. Specifically, we need guidance on recommended resolution, FPS, profile, level, and bitrate limits.
Your assistance would be greatly appreciated.
I am working on a project for a university which wants to alter the passthrough camera feed more so than the standard filters (saturation/contrast/etc.) that some of the headsets provide.
I don't have access to the headset or enterprise SDK yet, as I'd like to nail down whether or not this is feasible before we purchase the hardware. In the API I see I can use CameraFrameProvider to access a CameraFrame and then grab a sample. The sample has a CVPixelBuffer. I have 2 questions regarding the pixelBuffer:
I see that the buffer itself is read-only, but can I alter the bytes within this pixel buffer? Let's say, change all green pixels to red (not my actual use case, just an example).
Will the updated pixel buffer then be used in the passthrough screen?
If not, then is there any way to have control over the video feed that is being displayed as passthrough? Our ideal setup would be to have access to a frame, alter it however we want, and then have the frame displayed in passthrough. I realize I could take the feed and copy it into a floating window and alter that, but that breaks the immersion we are aiming to create here.
Thanks in advance!
I'm making an app that reads a ProRes file, processes each frame through metal to resize and scale it, then outputs a new ProRes file. In the future the app will support other codecs but for now just ProRes. I'm reading the ProRes 422 buffers in the kCVPixelFormatType_422YpCbCr16 pixel format. This is what's recommended by Apple in this video https://developer.apple.com/wwdc20/10090?time=599.
When the MTLTexture is run through a Metal Performance Shaders kernel, the colorspace seems to be forced to RGB, or yCbCr textures are simply not allowed, as the output is all green/purple. If you look at the render code, you will see there is a commented-out block of code to just blit-copy the outputTexture; if you perform the copy instead of scaling through MPS, the output colorspace is fine. So it appears the issue comes from Metal Performance Shaders.
Side note: I noticed that when using this format, it brings in the YpCbCr texture as a single plane. I thought it was preferred to handle this as two separate planes? That said, if I have two separate planes, that makes my app more complicated, as I would need to scale both planes or merge them to RGB. But I'm going for the most performance possible.
A sample project can be found here: https://www.dropbox.com/scl/fo/jsfwh9euc2ns2o3bbmyhn/AIomDYRhxCPVaWw9XH-qaN0?rlkey=sp8g0sb86af1u44p3xy9qa3b9&dl=0
Inside the supporting files, there is a test movie. For ease, I would move this somewhere easily accessible (i.e. the Desktop).
1. Load and run the example project.
2. Click 'Select Video'.
3. Select the video you placed on your desktop.
4. It will now output a new video next to the selected one, named "Output.mov".
The new video should just be scaled at 50%, but the colorspace is all wrong.
Below is a photo of before and after the Metal Performance Shader.
1. In the FxRemoteWindowAPI protocol, there is no way to set window.frame.origin.
2. When using NSWindow, you cannot set [Window setLevel:NSFloatingWindowLevel].
3. How can I keep the window in front of Final Cut Pro without affecting the normal use of Final Cut Pro?
How is it possible to enable EDR on Apple TV without AVFoundation for custom HDR video playback? The use case is a custom video player for HDR playback via VideoToolbox and Metal, which seem to render colors correctly on iOS but not on tvOS.
All related documentation and WWDC sessions describe APIs that are unavailable for tvOS:
let metalLayer = CAMetalLayer()
metalLayer.wantsExtendedDynamicRangeContent = true
metalLayer.edrMetadata = CAEDRMetadata.hdr10(minLuminance: 0.0, maxLuminance: 1000, opticalOutputScale: 100)
What's the alternative path for tvOS to have correct system tone mapping for a setup like:
metalLayer.pixelFormat = .rgba16Float // (or .bgr10_xr)
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)
Video format: HEVC, YUV 4:2:0 10bit, BT.2020 PQ.
We do set the preferredDisplayCriteria on AVDisplayManager and thus video range matching is in place.
WWDC Ref: https://developer.apple.com/videos/play/wwdc2022/110565?time=557
I have implemented PiP in my app. When the user goes to the background while a video is playing, the app opens PiP and everything works fine. But when the user locks the device, PiP is not stopping/pausing the video.
Is there a way I can pause the video when the user locks the device?
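One idea I've been considering is to pause the player when the protected-data notification fires, since that notification is posted when a passcode-protected device is locked. This is a sketch only; I have not verified it behaves correctly in the PiP case.
import AVFoundation
import UIKit

// Sketch: pause the given player when the device is about to lock.
final class LockPauseObserver {
    private var token: NSObjectProtocol?

    init(player: AVPlayer) {
        // Posted shortly before protected data becomes unavailable,
        // i.e. when the user locks a passcode-protected device.
        token = NotificationCenter.default.addObserver(
            forName: UIApplication.protectedDataWillBecomeUnavailableNotification,
            object: nil,
            queue: .main
        ) { _ in
            player.pause()
        }
    }

    deinit {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}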
I'm trying to secure my m3u8 streaming link with a token. To achieve this, I'm using AVAssetResourceLoaderDelegate in my SwiftUI app. However, the video doesn't play in AVPlayer when I'm using the AVAssetResourceLoaderDelegate. I can see that data is being received in the resourceLoader, but the player does not start playback.
Here's the code I'm using:
@State private var player: AVPlayer?
@EnvironmentObject var pilot: UIPilot<AppRoute>

var body: some View {
    VStack {
        VerticalSpacer(height: 50)
        HStack {
            Image(systemName: "arrow.left")
                .onTapGesture {
                    pilot.pop()
                }
            Spacer()
            Text("liveStreamData.titleShort")
                .font(.poppins(.semibold, size: 18))
                .lineLimit(1)
            HorizontalSpacer(width: 16)
            Spacer()
        }
        .padding(.horizontal)
        if let player = player {
            VideoPlayer(player: player)
                .onAppear {
                    player.play()
                }
                .onDisappear {
                    player.pause()
                }
        } else {
            Text("Loading video...")
        }
    }
    .onAppear {
        setupPlayer()
    }
}

private func setupPlayer() {
    guard let url = URL(string: "https://assets.afcdn.com/video49/20210722/v_645516.m3u8") else {
        print("Invalid URL")
        return
    }
    // Replace the scheme with a custom scheme
    var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
    components?.scheme = "customscheme" // Change the scheme to a custom one
    guard let customURL = components?.url else {
        print("Failed to create custom URL")
        return
    }
    let asset = AVURLAsset(url: customURL)
    // Set the resource loader delegate
    let resourceLoaderDelegate = VideoResourceLoaderDelegate()
    asset.resourceLoader.setDelegate(resourceLoaderDelegate, queue: DispatchQueue.main)
    let playerItem = AVPlayerItem(asset: asset)
    player = AVPlayer(playerItem: playerItem)
}
}
class VideoResourceLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url else {
            print("Invalid request URL")
            return false
        }
        // Replace the custom scheme with the original HTTP/HTTPS scheme
        var components = URLComponents(url: url, resolvingAgainstBaseURL: false)
        components?.scheme = "https" // Change the scheme back to HTTP/HTTPS
        guard let originalURL = components?.url else {
            print("Failed to convert URL back to HTTPS")
            return false
        }
        // Fetch the data from the original URL
        let urlSession = URLSession.shared
        let task = urlSession.dataTask(with: originalURL) { data, response, error in
            if let error = error {
                print("Error loading resource: \(error)")
                loadingRequest.finishLoading(with: error)
                return
            }
            if let data = data, let dataRequest = loadingRequest.dataRequest {
                print("Data loaded: \(data.count) bytes")
                dataRequest.respond(with: data)
                loadingRequest.finishLoading()
            } else {
                print("No data received")
                loadingRequest.finishLoading(with: NSError(domain: "VideoResourceLoader", code: -1, userInfo: nil))
            }
        }
        task.resume()
        return true
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader, didCancel loadingRequest: AVAssetResourceLoadingRequest) {
        print("Loading request was canceled")
    }
}
Problem:
The video does not play when using AVAssetResourceLoaderDelegate. The data is being loaded correctly as confirmed by the logs, but AVPlayer fails to start playback.
Without the resource loader, the video plays without any issues.
Question:
What could be causing the player to not play the video when using AVAssetResourceLoaderDelegate?
Are there any additional steps or configurations I need to ensure smooth playback while using a resource loader?
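For context, here is a sketch of one thing I suspect might be missing, though I'm not sure it's the fix: filling in the contentInformationRequest from the HTTP response before responding with data, so AVPlayer knows what kind of resource it received. The helper name is mine.
import AVFoundation
import UniformTypeIdentifiers

// Sketch: describe the resource to AVPlayer, then deliver the bytes.
func finish(_ loadingRequest: AVAssetResourceLoadingRequest,
            response: URLResponse, data: Data) {
    if let info = loadingRequest.contentInformationRequest {
        // Map the response's MIME type to a UTI for contentType.
        if let mime = response.mimeType, let type = UTType(mimeType: mime) {
            info.contentType = type.identifier
        }
        info.contentLength = Int64(data.count)
        info.isByteRangeAccessSupported = false
    }
    loadingRequest.dataRequest?.respond(with: data)
    loadingRequest.finishLoading()
}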
Any help would be greatly appreciated!
I'm using the "Converting side-by-side 3D video to multiview HEVC and spatial video" sample code on iOS. It takes about 8 seconds to convert a 6-second video. At this rate, a 1-hour video would take 1.3 hours to convert.
How can I speed up the conversion?
BTW, are there solutions to convert side-by-side 3D video to spatial video for Windows?
I was trying to migrate Core Image-based code that rotates an image in a CVPixelBuffer to the newer VTPixelRotationSession from Video Toolbox, hoping to increase performance.
The original code does:
let rotatedImage = CIImage(cvPixelBuffer: origPixelBuffer).oriented(.left)
context.render(rotatedImage, to: newPixelBuffer)
The new code uses a session:
_ = VTPixelRotationSessionRotateImage(rotationSession, origPixelBuffer, newPixelBuffer)
However, I immediately ran into memory limitations, since my code has to be able to run in an iOS extension. It seems VTPixelRotationSessionRotateImage easily lets memory usage spike over the 50 MB of allowed memory, while the CIImage-based implementation has no such high memory usage at all.
Is this expected? Does the VTPixelRotationSession implementation gain more performance by sacrificing memory? Or is there something I'm overlooking?
I was expecting the VTPixelRotationSession at worst to be on par in terms of memory usage and processing speed compared to CIImage. At this moment it seems VTPixelRotationSession is unusable in extensions.
See also Feedback: FB14977240
When displaying and playing multiple HLS videos (4 or 6 screens) side by side using AVPlayer on iPad devices running iOS 17 or later, even though the videos are set to play at normal speed, some frames appear to be skipped, causing the videos to play faster than intended. This issue occasionally occurs when repeatedly playing and pausing the videos, and the more screens there are, the more frequently it happens. However, the occurrence rate is not very high (about 1 in 50 times).
This phenomenon has been reproduced on iPad devices running iOS 17 or later and does not occur on devices running iOS 16 or earlier.
Devices where the issue has been confirmed:
iPad 6th generation / iOS ver 17.6.1
iPad 9th generation / iOS ver 17.6.1
iPad Pro 11-inch 1st generation / iOS ver 17.4.1
I have tried implementing countermeasures based on information from similar issues, such as those mentioned on the following website, but the problem remains unresolved:
https://stackoverflow.com/questions/77224167/avplayer-unexpected-behaviour-after-ios-and-tvos-update-to-17-0
From the console logs, I observed that on devices running iOS 17 or later, the following log was output:
AppleD5500: Bad NAL type 10
I suspect that some kind of decoding failure may be occurring, leading to the issue described above. If you have any information or can provide support on this matter, I would greatly appreciate it.
Hi,
I'm trying to use this example (https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video)
to encode a stereoscopic (left eye/right eye) video frame using MVHEVC. The sample project creates tagged buffers for the left and right eye, and uses a writer to write the MVHEVC-encoded video buffers. But after I get the right and left tagged buffers, I want to use VideoMaterial and its AVSampleBufferVideoRenderer to enqueue these video frames. If I enqueue an MVHEVC-encoded left-eye sample buffer and a right-eye sample buffer sequentially, will the AVSampleBufferVideoRenderer render it as a stereoscopic view? How does this work with VideoMaterial and AVSampleBufferVideoRenderer? Thanks!
Hi, when recording videos with AVAssetWriter, the capture fps (camera output fps) is OK, but the final video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true, and so on...
This problem has been torturing me for a long time. How can I debug it? Any advice?
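In case it helps, this is the kind of instrumentation I'm thinking of adding to quantify the drops. It is a sketch with placeholder names, and it assumes expectsMediaDataInRealTime was set to true on the input before startWriting() was called.
import AVFoundation
import CoreMedia

// Sketch: count appended vs. dropped frames and log when the writer input
// reports it is not ready for more media data.
final class FrameAppender {
    private let writerInput: AVAssetWriterInput
    private(set) var appended = 0
    private(set) var dropped = 0

    init(writerInput: AVAssetWriterInput) {
        self.writerInput = writerInput
    }

    func append(_ sampleBuffer: CMSampleBuffer) {
        if writerInput.isReadyForMoreMediaData {
            if writerInput.append(sampleBuffer) {
                appended += 1
            }
        } else {
            dropped += 1
            let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            print("Dropped frame at \(CMTimeGetSeconds(pts)) s; total dropped: \(dropped)")
        }
    }
}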