I am making an app that can take two videos and then stitch them together on the screen (one video on the top half and the other on the bottom half).
This is achieved with AVMutableComposition, and then I am using AVAssetExportSession to export an MP4 file:
guard let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPreset1920x1080) else {
return
}
export.exportAsynchronously {
....
When the two input videos are around 1 GB each, starting the export session immediately increases memory usage by ~2 GB, as if it loads the input files into memory up front (my guess), and at some point my app is killed for using too much memory.
Is there a way to avoid this upfront memory usage and/or avoid getting killed?
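For context, the composition is built roughly like this (a sketch that assumes 1920x1080 sources and hard-codes the render size and transforms; it is not the app's exact code):

import AVFoundation
import CoreGraphics

func makeStackedComposition(topAsset: AVAsset, bottomAsset: AVAsset) async throws -> (AVMutableComposition, AVMutableVideoComposition) {
    let composition = AVMutableComposition()
    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = CGSize(width: 1920, height: 2160)   // two 1080p halves stacked
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)

    var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []
    for (index, asset) in [topAsset, bottomAsset].enumerated() {
        guard let sourceTrack = try await asset.loadTracks(withMediaType: .video).first,
              let compositionTrack = composition.addMutableTrack(withMediaType: .video,
                                                                 preferredTrackID: kCMPersistentTrackID_Invalid) else { continue }
        let duration = try await asset.load(.duration)
        try compositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: duration),
                                             of: sourceTrack, at: .zero)
        // Keep the first video at the top and push the second one down by 1080 points.
        let layer = AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)
        layer.setTransform(CGAffineTransform(translationX: 0, y: CGFloat(index) * 1080), at: .zero)
        layerInstructions.append(layer)
    }

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    instruction.layerInstructions = layerInstructions
    videoComposition.instructions = [instruction]
    return (composition, videoComposition)
}

With this shape of setup, the resulting video composition is assigned to export.videoComposition before calling exportAsynchronously.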
I think I have the simplest possible Mac app, trying to see if I can get VideoPlayer to work in an Xcode Preview. It works in an iOS app project. In a Mac app project it builds and runs, but if I preview it in Xcode it crashes.
The diagnostic says:
| [Remote] Unknown Error: The operation couldn’t be completed. XPC error received on message reply handler
|
| BSServiceConnectionErrorDomain (3):
| ==NSLocalizedFailureReason: XPC error received on message reply handler
| ==BSErrorCodeDescription: OperationFailed
The code I'm using is the exact code from the VideoPlayer documentation page. See this link.
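For reference, the kind of view being previewed is along these lines (a sketch with a placeholder URL, not the documentation sample verbatim):

import SwiftUI
import AVKit

struct PlayerDemo: View {
    var body: some View {
        // Placeholder URL, not the one from the documentation sample.
        VideoPlayer(player: AVPlayer(url: URL(string: "https://example.com/sample.mp4")!))
            .frame(width: 320, height: 180)
    }
}

#Preview {
    PlayerDemo()
}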
Any ideas about this XPC error, and how to work around it?
I'm using Xcode 16.0 on macOS 14.6.1
Xcode 16:

VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) API_AVAILABLE(macos(10.8), ios(16.0), tvos(16.0), visionos(1.0)) API_UNAVAILABLE(watchos);
Xcode 15:

VT_EXPORT OSStatus
VTPixelTransferSessionCreate(
    CM_NULLABLE CFAllocatorRef allocator,
    CM_RETURNS_RETAINED_PARAMETER CM_NULLABLE VTPixelTransferSessionRef * CM_NONNULL pixelTransferSessionOut) VT_AVAILABLE_STARTING(10_8);
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos.
I would like to know how to correctly load and handle these assets.
Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well...
Thank you in advance for your help!
Best,
Lau
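For what it's worth, a minimal sketch (an assumption about the intended approach; it requires iOS 17 or later, where the stereo multiview media characteristic is available) of loading an AVURLAsset and checking whether its video track is stereoscopic MV-HEVC:

import AVFoundation

func inspectSpatialAsset(at url: URL) async throws {
    let asset = AVURLAsset(url: url)
    guard let videoTrack = try await asset.loadTracks(withMediaType: .video).first else { return }

    let characteristics = try await videoTrack.load(.mediaCharacteristics)
    if characteristics.contains(.containsStereoMultiviewVideo) {
        // Stereoscopic (MV-HEVC, left/right eye) content.
        print("Track contains stereo multiview video")
    } else {
        print("Track is monoscopic")
    }
}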
Hi all,
I'm working on a particle system. I got it to work using drawn circles. Now I want to replace the circles with an image. I'm trying to do so in the draw section, but I'm not sure that's the right place.
Any suggestions for code to:
connect the image from BIN to Xcode
replace the particles with the image(s).
Thank you kindly.
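One way this can look (a sketch that assumes a SwiftUI Canvas-based particle system and an asset-catalog image named "particle", both hypothetical here):

import SwiftUI

struct ParticleCanvas: View {
    // Hypothetical particle positions; the real system presumably updates these over time.
    let particles: [CGPoint]

    var body: some View {
        Canvas { context, _ in
            // Resolve the image once, then stamp it at each particle position
            // where a filled circle used to be drawn.
            let sprite = context.resolve(Image("particle"))
            for position in particles {
                context.draw(sprite, at: position)
            }
        }
    }
}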
I am developing an iOS app that uses YOLOv8 for object detection and aims to detect objects at 60 FPS using the UltraWide camera. My goal is to process every frame within captureOutput and utilize the detected data (such as coordinates) for each one.
I have a question regarding how background thread processing behaves in this scenario. Does the size of the YOLO model (n, s, m, etc.) or the weight of the operations inside captureOutput affect the number of frames that can be successfully processed?
Specifically, I would like to know if all frames will be processed sequentially with a delay due to heavy processing in the background, or if some frames will be dropped and not processed at all. Any insights on how to handle this would be greatly appreciated.
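For reference, a minimal sketch (hypothetical class and queue names) of how AVCaptureVideoDataOutput behaves here: when the delegate is still busy with a frame, late frames are dropped rather than queued, so heavier models reduce the processed frame rate instead of adding unbounded delay.

import AVFoundation
import Foundation

final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let videoOutput = AVCaptureVideoDataOutput()
    private let processingQueue = DispatchQueue(label: "camera.frame.processing")   // hypothetical label

    override init() {
        super.init()
        videoOutput.alwaysDiscardsLateVideoFrames = true   // the default: drop late frames rather than queueing them
        videoOutput.setSampleBufferDelegate(self, queue: processingQueue)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Heavy work here (model size, pre/post-processing) directly lowers how many
        // frames per second make it through; the rest arrive in didDrop instead.
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didDrop sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Called for frames discarded because the delegate was still busy.
    }
}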
Thank you!
Hello,
First, some version and software details:
Software: iOS 18.1
Hardware: iPhone 14 Pro Max and later
Xcode: 16.0
Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.
Details:
Now, let's talk about the app.
Basically, I’m developing an app that generates a video with the following features:
My app will create an output video that is split into an opening scene followed by a fully blue screen.
The opening scene will be taken from a video I choose from my gallery.
I will read the opening video using AVAssetReader as usual.
After the opening scene, I will use the content of a text read aloud by AVSpeechSynthesizer.write(); this synthesized audio will start playing while the blue screen is displayed.
All of this is already defined in the attached project.
Each project file has a comment at the beginning introducing its content.
How to test:
Write something in the field above the "Generate Video" button. For example, type "Hello, world!"
Then, press the "Library" button and select a video from the gallery, about 30 seconds long.
That’s it. Press the "Generate Video" button.
The result I’ve experienced is a crash or failure to generate the video.
Practical example of what I want to achieve:
Suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White."
Then, I paste the "Snow White" story into the field above the "Generate Video" button.
The output video should contain me saying, "I’m going to tell you the story of Snow White."
After that, the AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.
I look forward to a solution.
Thank you very much!
convertToCMSampleBuffer.swift
convertToPixelBuffer.swift
createInputs.swift
createVideo.swift
test.swift
saveVideo.swift
TestApp.swift
editingVideo.swift
sampleReaderProvider.swift
misc.swift
sampleProvider.swift
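For reference, the AVSpeechSynthesizer.write() path mentioned above has roughly this shape (a sketch, not the attached project's code):

import AVFoundation

func synthesizeSpeechBuffers(for text: String, completion: @escaping ([AVAudioPCMBuffer]) -> Void) {
    // NOTE: in a real app, keep a strong reference to the synthesizer (e.g. a property)
    // until synthesis finishes; a local is only used here to keep the sketch short.
    let synthesizer = AVSpeechSynthesizer()
    let utterance = AVSpeechUtterance(string: text)
    var buffers: [AVAudioPCMBuffer] = []

    synthesizer.write(utterance) { buffer in
        guard let pcmBuffer = buffer as? AVAudioPCMBuffer else { return }
        if pcmBuffer.frameLength == 0 {
            // A zero-length buffer signals the end of synthesis.
            completion(buffers)
        } else {
            // These PCM buffers can later be appended to an AVAssetWriter audio input
            // alongside the blue-screen video frames.
            buffers.append(pcmBuffer)
        }
    }
}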
I have been using the Destination Video sample project as a template to recreate a similar project.
I have been able to update all text and images, but when I go to change out the videos, it still plays the old videos. If I rename the new video to the old video's name, the new video appears. The problem is, there are only two videos that repeat on different links.
But changing the URLs in SampleData.swift to the new file path does not work.
Any suggestions on how to replace a video with a newly imported asset? Here is an example of the code.
Video(
    id: 1,
    name: "Landing",
    synopsis: """
        After a long journey through the stars, the robot botanist and its trusty spaceship finally arrive at Wolf 1069 B,
        ready to explore the mysteries that lie on the planet’s surface.
        New plants to catalog, new animals to discover, and cool rocks to unearth. Follow along as the botanist’s mission begins!
        """,
    categoryIDs: [
        1004,
        1005
    ],
    url: URL(string: "file://BOT-anist_video.mov")!,
    imageName: "landing",
    yearOfRelease: 2024,
    duration: 66,
    contentRating: "NR",
    isFeatured: true
),
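One thing worth checking is how that URL is built. A minimal sketch (the file name is hypothetical, and it assumes the new clip is added to the app target as a bundle resource) of resolving a bundled clip rather than hard-coding a file:// string:

import Foundation

// Hypothetical file name; assumes "MyNewVideo.mov" is part of the app target's resources.
let newVideoURL = Bundle.main.url(forResource: "MyNewVideo", withExtension: "mov")!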
Hi everyone,
I’ve encountered an issue with the showsPlaybackControls property in AVPlayerViewController after updating to iOS 18. Even though it’s set to true, the native playback controls (play, pause, etc.) are no longer appearing as they used to in previous iOS versions. This behavior was consistent and worked perfectly prior to iOS 18.
Additionally, I’m seeing the same problem when using the VideoPlayer in SwiftUI. The native controls that should appear by default seem to have vanished after the update. Has anyone else experienced this? Is there any workaround or additional configuration required to restore the native controls?
Any help or insights would be appreciated. Thanks!
struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
        playerController.player = player
        playerController.showsPlaybackControls = true
        player.play()
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        return AVPlayerViewController()
    }
}
The app needs to play remote videos. Sometimes it takes a very long time (~10 seconds) to load the media and start playback with AVPlayer.
So I use a timer to check and try to play the next video if loading takes over 5 seconds:
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil]; // line 1
NSArray *keys = @[@"playable"];
mediaLoaded = NO;
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() { // line 2
    mediaLoaded = YES; // line 4
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.player replaceCurrentItemWithPlayerItem:[AVPlayerItem playerItemWithAsset:asset]];
        [self.player playImmediatelyAtRate:playSpeed];
    });
}];
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
    if (!mediaLoaded) {
        [self playNextVideo]; // line 3
    }
});
So the flow is: line 1 (of video 1) -> line 2 (of video 1) -> line 3 (if over 5 seconds and video 1 is not playing) -> line 1 (of video 2) -> ...
Now the problem is that line 2 seems to be blocking: line 2 for video 2 is only executed after line 4 for video 1 runs (i.e. after video 1's completionHandler fires, ~10 seconds later).
Can anybody give any insight?
Thx!
We have observed consistent glitches in video playback when using AVPlayer to stream HLS (HTTP Live Streaming) videos on iOS. The issue manifests as intermittent frame drops, stuttering, and playback instability during HLS streams. However, the same behavior is not present when playing MP4 videos using the same AVPlayer instance. The HLS streams being used follow standard encoding practices, and network conditions have been ruled out as a cause for this problem.
https://drive.google.com/file/d/1lhdpHTyjPYCYLHjzvb6ZF6P6jehIuwY0/view?usp=sharing
Steps to Reproduce:
1. Load an HLS video into AVPlayer and initiate playback.
2. Observe intermittent glitches and stuttering during video playback.
3. Load and play an MP4 video in the same AVPlayer instance.
4. Notice that MP4 playback is smooth without any glitches.
I am trying to play HLS videos in my iPhone app and I see frequent glitches in the video; this does not happen with MP4.
https://drive.google.com/file/d/1lhdpHTyjPYCYLHjzvb6ZF6P6jehIuwY0/view?usp=sharing.
Hi,
I'm recording videos frame by frame and occasionally a sound plays (from an MP3 asset). I want to composite these sounds into the video at the correct timings. But this doesn't work. Really pulling my hair out here. I've tried everything, including adding one after another and then inserting silence in between (allegedly this pushes subsequent clips back) but nothing works.
Here, _currentTime is the current time according to the video frames added, which are added at 20Hz.
You can see I am adding silence long enough to cover the time from the end of the last audio clip to now, plus extra padding to contain the audio we are about to add. Doesn't matter if I remove this, it just doesn't work. Sometimes I can get two pieces of audio to play but never a third and usually, only the first audio plays, and then nothing after.
I'm completely stumped.
func addFrame(_ pixelBuffer: CVPixelBuffer) {
    guard CGSize(width: pixelBuffer.width, height: pixelBuffer.height) == _outputSize else { return }
    let frameTime = CMTimeMake(value: Int64(_frameCount), timescale: _frameRate)
    if _videoInput?.isReadyForMoreMediaData == true {
        _pixelBufferAdaptor?.append(pixelBuffer, withPresentationTime: frameTime)
        _frameCount += 1
        _currentTime = frameTime
    }
}

func addMP3AudioClip(_ audioData: Data) async throws {
    let tempURL = FileManager.default.temporaryDirectory.appendingPathComponent(UUID().uuidString + ".mp3")
    try audioData.write(to: tempURL)

    let asset = AVAsset(url: tempURL)
    let duration = try await asset.load(.duration)
    let audioTrack = try await asset.loadTracks(withMediaType: .audio).first!

    let currentAudioTime = _currentTime.convertScale(duration.timescale, method: .default)
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: _lastAudioClipEndTime, end: currentAudioTime))
    _audioTrack?.insertEmptyTimeRange(CMTimeRangeFromTimeToTime(start: currentAudioTime, end: CMTimeAdd(currentAudioTime, duration)))

    let timeRange = CMTimeRangeMake(start: .zero, duration: duration)
    try _audioTrack?.insertTimeRange(timeRange, of: audioTrack, at: currentAudioTime)
    _lastAudioClipEndTime = CMTimeAdd(currentAudioTime, duration)

    try FileManager.default.removeItem(at: tempURL)
    _audioClipTimeRanges.append(CMTimeRangeMake(start: _currentTime, duration: duration))
}
Thank you,
-- B.
I have an intra-**** video device that's supported by Apple's AVCaptureDevice. I can use the AV classes to connect to the device and get video. However, this device has a button that's used to acquire still images from the video stream. I can't use the IOUSBDeviceInterface to do an asynchronous read, because the Apple driver has the device opened exclusively. How do I go about receiving the button event in this scenario? I know which pipe to read, based on a bus analyzer when I run this on Windows, I just need to know how to access that pipe when the device is opened by another process.
I have an app that lets the user draw a circle or line over a live video feed. There's a view for circles and a view for lines. These are displayed in ContentView, and their position in the ZStack is altered based on the toolbar selection made by the user. Whichever view is on top of the stack is active and usable. That all works fine, but I want the user to be able to select any shape on screen, whether it is a circle or a line. My question is: what is best practice for managing tool selection and drawing the shapes? I'm guessing I should draw all shapes onto one view, but I would like to avoid this as I have all my logic in each shape's respective view.
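One possible direction (a sketch under assumed types, not the app's existing code): represent both kinds of shape with one value type so a single overlay can own selection and hit-testing, while the toolbar only decides which kind a new drag creates.

import SwiftUI
import Foundation

struct DrawnShape: Identifiable {
    enum Kind {
        case circle(center: CGPoint, radius: CGFloat)
        case line(start: CGPoint, end: CGPoint)
    }

    let id = UUID()
    var kind: Kind

    // Rough hit test so a single overlay can select either kind of shape.
    func contains(_ point: CGPoint, tolerance: CGFloat = 12) -> Bool {
        switch kind {
        case .circle(let center, let radius):
            return abs(hypot(point.x - center.x, point.y - center.y) - radius) <= tolerance
        case .line(let start, let end):
            let dx = end.x - start.x, dy = end.y - start.y
            let lengthSquared = dx * dx + dy * dy
            guard lengthSquared > 0 else { return false }
            let t = max(0, min(1, ((point.x - start.x) * dx + (point.y - start.y) * dy) / lengthSquared))
            let nearest = CGPoint(x: start.x + t * dx, y: start.y + t * dy)
            return hypot(point.x - nearest.x, point.y - nearest.y) <= tolerance
        }
    }
}

With something like this, one gesture handler can walk an array of DrawnShape values to find what was tapped, and the per-shape drawing logic can still live in separate views keyed off Kind.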
Hi Team,
When we select the Screen Time option and a screen limit exists for the day, we have the option to tap Ignore Limit -> Ignore Limit for Today. After we select this option, if we are watching videos in a third-party app (YouTube), the app hangs; after tapping the play button multiple times the video starts to play, but there is no sound, and only after tapping the previous or forward button does the sound come back. Kindly look into this issue and address it.
Thanks,
Pemkumar S
My app is live on the App Store. Users on iPhone 16 Pro Max are getting "Operation Stopped" while combining videos and audio, specifically and only on iPhone 16 Pro Max; on every other device it works fine. And when I add AVAssetExportPresetPassthrough it is able to combine the videos and audio, but it does not respect the encoding and the output has no audio.
NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:composition];
if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {
    presetName = AVAssetExportPresetHighestQuality;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1920x1080]) {
    presetName = AVAssetExportPreset1920x1080;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1280x720]) {
    presetName = AVAssetExportPreset1280x720;
} else {
    presetName = AVAssetExportPresetPassthrough;
}
// NOTE: the opening of the outer if, whose else branch follows, is not included in the snippet.
} else {
    presetName = AVAssetExportPreset1280x720;
}
I have a Mac Catalyst video conferencing app that streams video using AVCaptureMultiCamSession. Everything has been working well for me in a variety of scenarios and hardware, but recently I got a report that virtual cameras / camera extensions do not seem to work - which I can reproduce 100% of the time by using something like OBS's virtual camera. FaceTime and Photo Booth work okay with these virtual cameras. Although my app can see and add the external AVCaptureDevice, I get an AVCaptureSessionRuntimeError posted when I start the session with a connection between the virtual camera and a AVCaptureVideoDataOutput (I don't get the error if I don't connect or add an output). The posted error is AVUnknown:
AVCaptureSessionRuntimeErrorNotification with Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x600001dcd680 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
Which doesn't tell me too much. I do see some fig assertions just above in Console though:
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:3964) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1591) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1418) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:3572) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:4518) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:483) - (err=-12780)
I've verified formats are sane (the usual 420v 1080p 30fps I have everywhere else) and data output functions and such, but I'm a bit stuck as to where to go from here.
One thing that did stand out is that in the AVCamBarcode example I can see the virtual camera in that app's preview layer, but if I create an AVCaptureVideoDataOutput and add it to the session in that example, it fails in what looks like exactly the same way that my app does, with the same assertions.
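For reference, the connection path that triggers the error looks roughly like this (a sketch, not the app's exact code):

import AVFoundation

func attachCamera(_ device: AVCaptureDevice,
                  to session: AVCaptureMultiCamSession) throws -> NSObjectProtocol {
    let input = try AVCaptureDeviceInput(device: device)
    let output = AVCaptureVideoDataOutput()
    session.addInputWithNoConnections(input)
    session.addOutputWithNoConnections(output)

    let ports = input.ports(for: .video,
                            sourceDeviceType: device.deviceType,
                            sourceDevicePosition: device.position)
    let connection = AVCaptureConnection(inputPorts: ports, output: output)
    if session.canAddConnection(connection) {
        session.addConnection(connection)   // fine with built-in cameras, errors with virtual ones
    }

    // Keep the returned token alive for as long as the observer should stay installed.
    return NotificationCenter.default.addObserver(forName: .AVCaptureSessionRuntimeError,
                                                  object: session,
                                                  queue: .main) { note in
        // This is where the AVUnknown (-11800 / -12780) error described above shows up.
        print(note.userInfo?[AVCaptureSessionErrorKey] ?? "unknown runtime error")
    }
}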
Does anyone have any advice? Thanks!
When the command set in Final Cut Pro/Logic/Compressor/Motion is set to German, there is no way to set it to English.
I am working on a macOS app that uses AVFoundation to record the screen. During a recording, if I make a window full screen, AVFoundation stops capturing screen frames (or captures them at a very slow rate). In my logs I get the following error:
Error Domain=AVFoundationErrorDomain Code=-11844
Note that I have had instances where I could not reproduce the error, but they were rare.
The screen recording sometimes resumes normally if I switch desktops or minimize the full screen window.
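For context, the capture path in question is along these lines (a sketch that assumes AVCaptureScreenInput as the screen source; the real app's setup differs):

import AVFoundation
import CoreGraphics

func startScreenRecordingSession() -> AVCaptureSession {
    // Assumption: the screen is captured from the main display via AVCaptureScreenInput.
    let session = AVCaptureSession()
    if let screenInput = AVCaptureScreenInput(displayID: CGMainDisplayID()),
       session.canAddInput(screenInput) {
        session.addInput(screenInput)
    }
    let movieOutput = AVCaptureMovieFileOutput()
    if session.canAddOutput(movieOutput) {
        session.addOutput(movieOutput)
    }
    session.startRunning()
    // Frames stop arriving (AVFoundationErrorDomain -11844) once a window enters full screen.
    return session
}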
Has anyone ever run across a similar issue, or know how to fix it?