Hello,
I'm trying to get the video from an HDMI USB capture card and show it in a preview layer at 60 fps. The device I'm using (ShadowCast 2) supports 1080p at 60 fps in "yuvs" and "420v".
Below is the code I use to build the previewLayer, with the uninteresting parts stripped out and error handling removed.
I am using the AVFrameRateRange because the capture device doesn't directly report 60.00 fps, but <AVFrameRateRange: 0x600000875680 60.00 - 60.00 (1000000 / 60000240 - 1000000 / 60000240)>.
@Observable
final class AVFoundationService: AVService {
// Live View
private let session: AVCaptureSession = .init()
var previewLayer: AVCaptureVideoPreviewLayer {
let layer = AVCaptureVideoPreviewLayer(session: session)
layer.videoGravity = .resizeAspect
return layer
}
var activeVideoDevice: AVCaptureDevice? {
// TODO: implement correct logic
if let device = videoDevices.first(where: { $0.localizedName.contains("Shadow") }) {
return device
}
return AVCaptureDevice.default(for: .video)
}
func setupStreamDemo(completion: @escaping (Error?) -> Void) {
session.beginConfiguration()
if let device = activeVideoDevice {
do {
let input = try AVCaptureDeviceInput(device: device)
if session.canAddInput(input) {
session.addInput(input)
} else {
print("explode")
}
for format in device.formats {
let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
if dimensions.width == 1920 && dimensions.height == 1080 && format.formatDescription.mediaSubType.description == "'yuvs'" {
let foundFPS = format.videoSupportedFrameRateRanges.first {
Int($0.minFrameRate) == 60 && Int($0.maxFrameRate) == 60
}
try device.lockForConfiguration()
device.activeFormat = format
device.activeVideoMinFrameDuration = foundFPS!.minFrameDuration
device.activeVideoMaxFrameDuration = foundFPS!.minFrameDuration
device.unlockForConfiguration()
}
}
} catch {
return completion(error)
}
}
session.commitConfiguration()
session.startRunning()
completion(nil)
}
}
I am using the following code in SwiftUI to show the AVCaptureVideoPreviewLayer.
struct VideoPreviewView: NSViewRepresentable {
private let previewLayer: AVCaptureVideoPreviewLayer
func makeNSView(context: Context) -> NSView {
let view = NSView()
view.layer = self.previewLayer
view.layer?.frame = view.bounds
return view
}
func updateNSView(_ nsView: NSView, context: Context) {
if let layer = nsView.layer as? AVCaptureVideoPreviewLayer {
layer.session = self.previewLayer.session
}
}
}
When I run my app, it ignores whatever I set on device.activeVideoMinFrameDuration and/or device.activeVideoMaxFrameDuration. If I set it to 10 fps it runs at 30; if I set 60 it still runs at 30.
If I launch QuickTime in parallel with my app and start a recording from the USB capture card, the device switches to 60 fps mode.
I am on Mac Sequoia 15.0 with Xcode 16.0.
What am I doing wrong?
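For reference, the format-selection loop above can be factored into a small helper. This is only a sketch of the same logic (same 1080p / 'yuvs' / 60 fps criteria, applying the range's own frame durations), not a confirmed fix for the 30 fps behavior:

import AVFoundation
import CoreVideo

// Sketch: pick the first 1080p 'yuvs' format whose frame-rate range covers 60 fps
// and apply that range's own frame durations (so the 1000000/60000240 value is accepted).
func apply1080p60Format(on device: AVCaptureDevice) throws {
    for format in device.formats {
        let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dimensions.width == 1920, dimensions.height == 1080,
              CMFormatDescriptionGetMediaSubType(format.formatDescription) == kCVPixelFormatType_422YpCbCr8_yuvs,
              let range = format.videoSupportedFrameRateRanges.first(where: {
                  $0.minFrameRate <= 60 && $0.maxFrameRate >= 60
              })
        else { continue }

        try device.lockForConfiguration()
        device.activeFormat = format
        device.activeVideoMinFrameDuration = range.minFrameDuration
        device.activeVideoMaxFrameDuration = range.maxFrameDuration
        device.unlockForConfiguration()
        return
    }
}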
Posts under the AVFoundation tag
Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.
Hello everyone,
I am working on an iOS app that involves capturing images automatically, and I would like to control the start/stop of the capture process remotely from a Mac app. I explored the iPhone Mirroring feature, which allows some remote control but has the limitation of only functioning when the iPhone is locked, and it doesn’t permit access to the iPhone’s camera from the Mac.
Ideally, I am looking for a solution that would allow me to:
Remotely control the camera capture process on the iOS app from the Mac app.
Ensure the iPhone’s camera remains fully operational and controllable from the Mac during the capture process.
I have considered options like Handoff for communication between the apps, but ran into issues communicating between the iOS and Mac apps. I would like to know if there is a more optimal solution within Apple's ecosystem, or if there are APIs I might have overlooked.
Any advice or guidance on how to achieve this functionality would be greatly appreciated!
Thanks in advance!
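One possible transport for this (an assumption, since the post doesn't name one) is MultipeerConnectivity. Below is a minimal sketch of sending a start/stop command from the Mac app to the iOS app, assuming both have already joined the same MCSession; the command strings are made up for illustration:

import MultipeerConnectivity

// Sketch: send a capture command to connected peers over an existing MCSession.
// Peer discovery/advertising and the receiving delegate are not shown.
func sendCaptureCommand(start: Bool, over session: MCSession) throws {
    let command = start ? "startCapture" : "stopCapture" // hypothetical protocol strings
    guard !session.connectedPeers.isEmpty else { return }
    try session.send(Data(command.utf8), toPeers: session.connectedPeers, with: .reliable)
}

// On the iOS side, MCSessionDelegate's session(_:didReceive:fromPeer:) would decode
// the string and start or stop the capture session accordingly.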
I'm attempting to use AVExternalStorageDevice.requestAccess on iOS 18 using Xcode 16.
When calling requestAccess, a dialog does appear, but the completionHandler closure is never called to indicate whether access was granted. If using the async version, the function just never returns.
Calling requestAccess also results in a mediaServicesWereReset (-11819) error without fail.
Supposedly, "the system only presents the dialog to a person the first time your app calls the method." That also doesn't appear to be the case. The dialog appears every time requestAccess is called, regardless of previous invocations and whether "Allow" or "Don't Allow" was selected.
The dialog itself says "You can change this in Privacy settings." I cannot find this permission anywhere in the Settings app, neither under Privacy & Security nor under the app-specific settings page.
Has anyone else experienced these issues? Am I missing something here? I did suspect permissions issues and tried adding a NSRemovableVolumesUsageDescription entry to the app. This did not appear to change anything.
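For context, here is a minimal sketch of the call pattern being described, assuming the iOS 17+ class method on AVExternalStorageDevice; everything apart from the API call itself is a placeholder:

import AVFoundation

// Sketch of the request described above. In the reported scenario this closure
// is never invoked, and the async variant never returns.
func requestExternalStorageAccess() {
    AVExternalStorageDevice.requestAccess { granted in
        print("External storage access granted: \(granted)")
    }
}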
Question:
When implementing simultaneous video capture and audio processing in an iOS app, does the order of starting these components matter, or can they be initiated in any sequence?
I have an actor responsible for initiating video capture using the setCaptureMode function. In this actor, I also call startAudioEngine to begin the audio engine and register a resultObserver. While the audio engine starts successfully, I notice that the resultObserver is not invoked when startAudioEngine is called synchronously. However, it works correctly when I wrap the call in a Task.
Could you please explain why the synchronous call to startAudioEngine might be blocking the invocation of the resultObserver? What would be the best practice for ensuring both components work effectively together? Additionally, if I were to avoid using Task, what approach would be required? Lastly, is the startAudioEngine effective from the start time of the video capture (00:00)?
Platform: Xcode 16, Swift 6, iOS 18
References:
Classifying Sounds in an Audio Stream – In my case, the analyzeAudio() method is not invoked.
Setting Up a Capture Session – Here, the focus is on video capture.
Classifying Sounds in an Audio File
Code Snippet: (For further details. setVideoCaptureMode() surfaces the problem.)
// ensures all operations happen off of the `@MainActor`.
actor CaptureService {
...
nonisolated private let resultsObserver1 = ResultsObserver1()
...
private func setUpSession() throws { .. }
...
func setVideoCaptureMode() throws {
captureSession.beginConfiguration()
defer { captureSession.commitConfiguration() }
/* -- Works fine (analyseAudio is printed)
Task {
self.resultsObserver1.startAudioEngine()
}
*/
self.resultsObserver1.startAudioEngine() // Does not work - analyzeAudio not printed
captureSession.sessionPreset = .high
try addOutput(movieCapture.output)
if isHDRVideoEnabled {
setHDRVideoEnabled(true)
}
updateCaptureCapabilities()
}
Hi all, we are working on an iOS application that includes camera functionality. This week we received a few customer complaints about camera usage on iPhone 16/16 Pro: both customers said that when the camera is open, the camera preview just freezes, while every other function and the UI work as expected. Moreover, the issue happens only with the back camera; the front camera works perfectly.
We have tested on iOS 18 with iPhone 14/15/15 Pro/15 Pro Max, and all of those devices work perfectly without any issues. So we assume the problem is not iOS 18 itself, but some breaking change with the new iPhone 16/16 Pro cameras that causes this effect. Unfortunately, we currently can't test directly on an iPhone 16/16 Pro since we don't have these devices.
We are using SwiftUI framework and here the implementation of the camera preview:
VideoPreviewLayer
final class CameraPreviewView: UIView {
var previewLayer: AVCaptureVideoPreviewLayer {
guard let layer = layer as? AVCaptureVideoPreviewLayer else {
fatalError("Layer expected is of type VideoPreviewLayer")
}
return layer
}
var session: AVCaptureSession? {
get { return previewLayer.session }
set { previewLayer.session = newValue }
}
override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
}
UIKit -> SwiftUI
struct CameraRecordingView: UIViewRepresentable {
@ObservedObject var cameraManager: CameraManager
func makeUIView(context: Context) -> CameraPreviewView {
let previewView = CameraPreviewView()
previewView.session = cameraManager.session /// AVCaptureSession
previewView.previewLayer.videoGravity = .resizeAspectFill
return previewView
}
func updateUIView(_ uiView: CameraPreviewView, context: Context) {
}
}
Setup camera input
private func saveInput(input: AVCaptureDevice) {
/// Where input is AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
do {
let cameraInput = try AVCaptureDeviceInput(device: input)
if session.canAddInput(cameraInput) {
session.addInput(cameraInput) /// session is AVCaptureSession
} else {
sendError(error: .cannotAddInput)
status = .failed
}
} catch {
if error.nsError.code == -11852 {
sendError(error: .microphoneError)
} else {
sendError(error: .createCaptureInput(error))
}
status = .failed
}
}
Does anybody have similar issues with iPhone 16/16 Pro? We would appreciate any ideas of how to potentially resolve the issue.
Hello,
I am a deaf-blind wheelchair user, and I program in Swift using a braille display.
I’m reaching out for your help on an issue I’ve been struggling to solve.
Basically, when I extract a CMSampleBuffer from an AVAsset of a video, it comes with the Audio Format ID as Linear PCM. However, when I try to pass this CMSampleBuffer to write another video using AVAssetWriter, the video ends up muted.
The audio settings of the output video are configured to MPEG-4 AAC, but the input CMSampleBuffer has the Audio Format ID as Linear PCM.
I would like to request an extension for CMSampleBuffer that converts Linear PCM audio to MPEG-4 AAC.
I’ve searched extensively and couldn’t find anything.
Looking forward to your help.
Thank you.
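One commonly used route, offered here only as a hedged sketch rather than a confirmed fit for this pipeline: instead of converting each CMSampleBuffer yourself, configure the AVAssetWriterInput with AAC output settings and append the Linear PCM buffers as-is; AVAssetWriter then performs the encode.

import AVFoundation

// Sketch: an audio writer input configured for AAC output. Appending LPCM sample
// buffers from an AVAssetReader to this input lets the writer handle the
// PCM-to-AAC conversion. The sample-rate and bit-rate values are examples.
func makeAACAudioInput(sampleRate: Double = 44_100, channels: Int = 2) -> AVAssetWriterInput {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: sampleRate,
        AVNumberOfChannelsKey: channels,
        AVEncoderBitRateKey: 128_000
    ]
    let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
    input.expectsMediaDataInRealTime = false
    return input
}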
I have a Mac Catalyst video conferencing app that streams video using AVCaptureMultiCamSession. Everything has been working well for me in a variety of scenarios and hardware, but recently I got a report that virtual cameras / camera extensions do not seem to work - which I can reproduce 100% of the time by using something like OBS's virtual camera. FaceTime and Photo Booth work okay with these virtual cameras. Although my app can see and add the external AVCaptureDevice, I get an AVCaptureSessionRuntimeError posted when I start the session with a connection between the virtual camera and a AVCaptureVideoDataOutput (I don't get the error if I don't connect or add an output). The posted error is AVUnknown:
AVCaptureSessionRuntimeErrorNotification with Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12780), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x600001dcd680 {Error Domain=NSOSStatusErrorDomain Code=-12780 "(null)"}}
Which doesn't tell me too much. I do see some fig assertions just above in Console though:
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:3964) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1591) - (err=-12780)
<<<< BWMultiStreamCameraSourceNode >>>> Fig assert: "err == 0 " at bail (BWMultiStreamCameraSourceNode.m:1418) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:3572) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:4518) - (err=-12780)
<<<< FigCaptureCameraSourcePipeline >>>> Fig assert: "err == 0 " at bail (FigCaptureCameraSourcePipeline.m:483) - (err=-12780)
I've verified formats are sane (the usual 420v 1080p 30fps I have everywhere else) and data output functions and such, but I'm a bit stuck as to where to go from here.
One thing that did stand out is that in the AVCamBarcode example I can see the virtual camera in that app's preview layer, but if I create an AVCaptureVideoDataOutput and add it to the session in that example, it fails in what looks like exactly the same way that my app does, with the same assertions.
Does anyone have any advice? Thanks!
Hello,
I used the following technical note to develop an app that records a .mov file with SMPTE timecode.
https://developer.apple.com/library/archive/technotes/tn2310/_index.html
As a result, a timecode track is present within the .mov file (the other tracks are audio and video).
Unfortunately, QuickTime Player doesn't display timecode information.
Analyzer tools like mediainfo, or online services such as https://media-analyzer.pro/app, show that the timecode track has a null duration (and therefore no "time code of last frame").
Example 1 of a timecode track:
Other
ID : 3
Type : Time code
Format : QuickTime TC
Frame rate : 60.000 FPS
Time code of first frame : 17:39:59:00
Time code, stripped : Yes
Title : Core Media Time Code
Encoded date : 2024-09-10 15:39:46 UTC
Tagged date : 2024-09-10 15:39:59 UTC
Example 2 of a timecode track:
0000569562 Quicktime Timecode #0
00007f6b8a 'trak' Track atom #1
00007f6b92 'tkhd' Track header atom #2
size 92 (0x5C)
type 'tkhd' (hex 74 6B 68 64)
version 0
flags 15 (0xF)
creation_time 0xE30618C2, '2024-09-10 15:39:46'
modification_time 0xE30618CF, '2024-09-10 15:39:59'
track_ID 3
reserved 0
duration 0
reserved [0, 0]
In each case, the duration is reported as null, even though the recording lasts more than 20 s.
STEPS TO REPRODUCE
Use AVAssetWriter for video and audio.
Create an AVAssetWriterInput for timecode and associate it with the video track.
Just before stopping the recording, a sample buffer containing the SMPTE timecode is generated and appended.
All tracks are marked as finished before stopping the recording with finishWritingWithCompletionHandler.
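For reference, a condensed sketch of the writer-side portion of those steps, assuming a setup along the lines of TN2310; videoInput is the existing video writer input, the timecode format description is created as shown in the technical note, and all names here are placeholders rather than the poster's actual code:

import AVFoundation
import CoreMedia

// Sketch: add a timecode input to the writer and associate it with the video input.
// `timecodeFormatDescription` would come from CMTimeCodeFormatDescriptionCreate
// (1/60 s frame duration, 60 frame quanta), as in TN2310.
func addTimecodeInput(to writer: AVAssetWriter,
                      videoInput: AVAssetWriterInput,
                      timecodeFormatDescription: CMTimeCodeFormatDescription) -> AVAssetWriterInput? {
    let timecodeInput = AVAssetWriterInput(mediaType: .timecode,
                                           outputSettings: nil,
                                           sourceFormatHint: timecodeFormatDescription)
    timecodeInput.expectsMediaDataInRealTime = true
    guard writer.canAdd(timecodeInput) else { return nil }
    writer.add(timecodeInput)
    videoInput.addTrackAssociation(withTrackOf: timecodeInput,
                                   type: AVAssetTrack.AssociationType.timecode.rawValue)
    return timecodeInput
}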
Hello,
I am trying to create an MP4 by obtaining the content from another source MP4.
The source MP4 would be read with AVAssetReader and the output written with AVAssetWriter.
I wanted to do partial tests: first, I placed only the video in the output MP4.
Now, I am trying to place only the audio in the output MP4.
I even managed to get the output MP4 to have the same length (in seconds) as the source MP4.
But the problem is simple: the output MP4 is simply silent.
Naturally, I want it to have audio.
Below are two excerpts from the source code.
Reading and writing.
Note: The variable videoURL is from the class where the function writeVideo() is located. Its assignment happens in another scope, already debugged.
Snippet:
let semaphore = DispatchSemaphore(value: 0)
func writeVideo() {
var audioReaderBuffers = [CMSampleBuffer]()
// File directory
url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0].appendingPathComponent("*****/output.mp4")
guard let url = url else { return }
try FileManager.default.createDirectory(at: url.deletingLastPathComponent(), withIntermediateDirectories: true)
if FileManager.default.fileExists(atPath: url.path()) {
try FileManager.default.removeItem(at: url)
}
if let videoURL = videoURL {
let videoAsset = AVAsset(url: videoURL)
Task {
let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
let reader = try AVAssetReader(asset: videoAsset)
let audioSettings = [
AVFormatIDKey: kAudioFormatLinearPCM,
AVSampleRateKey: 44100,
AVNumberOfChannelsKey: 2
] as [String : Any]
let audioOutputTest = try await audioTrack.getAudioSettings()
let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioSettings)
reader.add(readerAudioOutput)
reader.startReading()
while let sampleBuffer = readerAudioOutput.copyNextSampleBuffer() {
audioReaderBuffers.append(sampleBuffer)
}
semaphore.signal()
}
}
semaphore.wait()
let audioInput = createAudioInput(sampleBuffer: audioReaderBuffers[0])
let assetWriter = try AVAssetWriter(outputURL: url, fileType: .mp4)
assetWriter.add(audioInput)
assetWriter.startWriting()
assetWriter.startSession(atSourceTime: .zero)
for (index, buffer) in audioReaderBuffers.enumerated() {
while !audioInput.isReadyForMoreMediaData {
usleep(1000)
}
audioInput.append(buffer)
}
assetWriter.finishWriting {
switch assetWriter.status {
case .completed:
print("Operation completed successfully: \(url.absoluteString)")
case .failed:
if let error = assetWriter.error {
print("Error description: \(error.localizedDescription)")
} else {
print("Error not found.")
}
default:
print("Error not found.")
}
}
}
Below is the createAudioInput method:
func createAudioInput(sampleBuffer: CMSampleBuffer) -> AVAssetWriterInput {
let audioSettings = [
AVFormatIDKey: kAudioFormatMPEG4AAC,
AVSampleRateKey: 48000,
AVEncoderBitRatePerChannelKey: 64000,
AVNumberOfChannelsKey: 1
] as [String : Any]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings, sourceFormatHint: sampleBuffer.formatDescription)
audioInput.expectsMediaDataInRealTime = false
return audioInput
}
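One hedged variation to try on the writing path above, reusing the names from the snippet (this is an assumption about the session timing, not a confirmed fix for the silent output): start the writer session at the first audio buffer's presentation timestamp rather than at .zero, so the appended samples line up with the session's timeline.

// Variation on the code above: align the session start with the first buffer.
if let firstBuffer = audioReaderBuffers.first {
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(firstBuffer))
}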
I await your help, please.
As you can see, the value shown in the AVCaptureSystemZoomSlider is not the same as the raw camera zoom factor.
I tried to calculate this value, and it seems it's 0.8. (5-1)*0.8=4.2-1 in this image.
It seems this factor only applies to the default wide-angle camera. And I can't get this value from anywhere. (It's not displayVideoZoomFactorMultiplier btw, I checked that.)
What is it?
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it.
Here's a simplified version of my tracking logic:
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
guard let device = AVCaptureDevice.default(for: .video),
let input = try? AVCaptureDeviceInput(device: device) else {
fatalError("Camera not available")
}
let currentZoomFactor = device.videoZoomFactor
let dimensions = device.activeFormat.formatDescription.dimensions
let referenceDimensions = CGSize(width: CGFloat(dimensions.width), height: CGFloat(dimensions.height))
let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))
let deviceOrientation = UIDevice.current.orientation
let cameraOrientation: DockAccessory.CameraOrientation = {
switch deviceOrientation {
case .landscapeLeft: return .landscapeLeft
case .landscapeRight: return .landscapeRight
case .portrait: return .portrait
case .portraitUpsideDown: return .portraitUpsideDown
default: return .unknown
}
}()
let cameraInfo = DockAccessory.CameraInformation(
captureDevice: input.device.deviceType,
cameraPosition: input.device.position,
orientation: cameraOrientation,
cameraIntrinsics: useIntrinsics ? intrinsics : nil,
referenceDimensions: referenceDimensions
)
let observation = DockAccessory.Observation(
identifier: 0,
type: .object,
rect: boundingBox
)
let observations = [observation]
try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}
func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
let width = Float(dimensions.width)
let height = Float(dimensions.height)
let diagonalPixels = sqrt(width * width + height * height)
let estimatedFocalLength = diagonalPixels * 0.8
let fx = Float(estimatedFocalLength) * Float(currentZoom)
let fy = fx
let cx = width / 2.0
let cy = height / 2.0
return matrix_float3x3(
SIMD3<Float>(fx, 0, cx),
SIMD3<Float>(0, fy, cy),
SIMD3<Float>(0, 0, 1)
)
}
I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases.
Questions:
Why might increasing the zoom factor cause this erratic movement?
I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently?
Are there any other factors I should consider when using DockKit with a variable zoom?
Could the frequency of calls to trackRider (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?
Any insights or suggestions would be greatly appreciated. Thanks!
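On the intrinsics question: rather than estimating the focal length from the diagonal, AVFoundation can deliver the real intrinsic matrix with each video sample buffer. The sketch below shows that mechanism only; whether DockKit behaves better with these values is not confirmed here, and the video data output and delegate wiring are assumed to exist already.

import AVFoundation
import CoreMedia
import simd

// Ask the capture connection to attach the camera intrinsic matrix to each buffer.
func enableIntrinsicMatrixDelivery(on output: AVCaptureVideoDataOutput) {
    if let connection = output.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// Read the matrix back in captureOutput(_:didOutput:from:).
func cameraIntrinsics(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else { return nil }
    return data.withUnsafeBytes { $0.loadUnaligned(as: matrix_float3x3.self) }
}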
I have an app that plays sound files stored locally. I'm using a single SwiftUI view with an MPVolumeView so the user can control the system volume from the player in my app. When I'm playing the sound file on the iPhone, the volume slider operates as expected. When I AirPlay to my Apple TV, the slider still controls the volume, but when I hit play in my app the slider snaps to a different value while the actual sound volume doesn't change. Control still works. Flipping to Control Center, I see a mismatch between the system volume and the MPVolumeView.
Here's the code that I use to put the slider in my app.
struct VolumeSlider: UIViewRepresentable {
func makeUIView(context: Context) -> MPVolumeView {
let vv = MPVolumeView(frame: .zero)
vv.showsVolumeSlider = true
vv.setVolumeThumbImage(UIImage(), for: UIControl.State.normal)
return vv
}
func updateUIView(_ uiView: MPVolumeView, context: Context) {
// No need to update the view in this case
}
}
I'm using AVFoundation and AVAudioPlayer to play back the sound file, and MediaPlayer to give MPNowPlayingInfoCenter the track info and album art. Audio control via Control Center works perfectly. The same happens if I target iOS 16 or 17.
Is this a bug with the MPVolumeView or the way I added it to the app?
Hello,
I recently started integrating HLS downloads into my application by using AVAssetDownloadTask and AVAssetDownloadConfiguration. I took an example from the documentation as a basis, with only one small difference: the minimum target for my application is iOS 16, so I replaced urlSession(_:assetDownloadTask:willDownloadTo:) with urlSession(_:assetDownloadTask:didFinishDownloadingTo:).
And I encountered the following issue: after pausing a download and resuming it later, the progress no longer functions as expected.
Could you, please, help me with this? What are the right approaches to implementing pause and progress tracking?
Some details:
I used devices with iOS 16.0.2 and 17.6.1 for testing.
There was no code in the example that pauses the download and resumes it. So, I used the following methods to do this: suspend and resume
Also, I have tried to track downloading progress using two different approaches:
Using task.progress.observe(\.fractionCompleted) { ... }, which was presented in the example. In this scenario, after a pause, an observation callback will only be called once, when the download has completed, despite the fact that data is being successfully downloaded over the network.
Using urlSession(_:assetDownloadTask:didLoad:totalTimeRangesLoaded:timeRangeExpectedToLoad:) and calculating progress as totalTimeRangesLoaded.reduce(0.0) { $0 + CMTimeGetSeconds($1.timeRangeValue.duration) / CMTimeGetSeconds(timeRangeExpectedToLoad.duration) }. In this scenario, I have noticed that the result of the calculation does not always increase, but sometimes there are outliers. Example of logs: 68%, 69%, 70%, 72%, 63%, 65%, 66%, 69%, 70%, 71%, 72%. Such fluctuations are most easily reproduced when I try to resume the download after pause. However, sometimes they occur spontaneously. It's important to mention, that this method marked as deprecated, perhaps for this reason.
In both cases download is successful, the problem is with progress reporting only.
Full version of code can be found here.
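For reference, a condensed sketch of the first (KVO-based) approach described above; task is the AVAssetDownloadTask from the post, and the observation must be kept alive for callbacks to keep arriving:

// Sketch of the fractionCompleted observation; keep a strong reference to it.
let progressObservation = task.progress.observe(\.fractionCompleted, options: [.new]) { progress, _ in
    print(String(format: "Download progress: %.0f%%", progress.fractionCompleted * 100))
}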
Hello everyone, I am using the QRCodeScanner library in my project. QR code scanning was working on earlier iPadOS versions, but on iPadOS 18 it has stopped working.
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
<<<< FigPlayerInterstitial >>>> fpic_ServiceCurrentEvent signalled err=-15671 (kFigPlayerInterstitialError_ClientReleased) (no primary) at FigPlayerInterstitialCoordinator.m:7885
My project uses AVPlayer (AVPlayerViewController) to play video. There are continuous warning logs while playing, and when the player is deallocated it prints the information below.
<<<< PlayerRemoteXPC >>>> remoteXPCItem_handleSetProperty signalled err=-12860 (kFigPlayerError_ParamErr) (propertyValue should be MTAudioProcessingTap) at FigPlayer_RemoteXPC.m:2760
This only happens on iOS 18 and I have no idea what it means. There is no information available about FigPlayerInterstitial or the other symbols.
When I set a custom exposure duration, like 1/8 s, and then switch back to continuous auto exposure, the exposure duration in scenes that were previously at 1/17 s changes to something like 1/5 or 1/10 s. As a result, the screen becomes laggy and overexposed. I'm not sure why this is happening.
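For context, a minimal sketch of the sequence being described (1/8 s custom duration, then back to continuous auto exposure); successful device locking and format support for that duration are assumed:

import AVFoundation

// Sketch of the described sequence: custom 1/8 s exposure, then continuous auto.
func setCustomThenAutoExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    // The duration must fall within the active format's min/max exposure duration.
    device.setExposureModeCustom(duration: CMTime(value: 1, timescale: 8),
                                 iso: AVCaptureDevice.currentISO,
                                 completionHandler: nil)
    device.unlockForConfiguration()

    // Later, switch back to continuous auto exposure.
    try device.lockForConfiguration()
    device.exposureMode = .continuousAutoExposure
    device.unlockForConfiguration()
}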
Basic
iPhone 11
iOS 17.5.1
Main Thread
libsystem_kernel.dylib___ulock_wait (in libsystem_kernel.dylib) +8
libdispatch.dylib__dlock_wait (in libdispatch.dylib) +52
libdispatch.dylib__dispatch_thread_event_wait_slow (in libdispatch.dylib) +52
libdispatch.dylib___DISPATCH_WAIT_FOR_QUEUE__ (in libdispatch.dylib) +364
libdispatch.dylib__dispatch_sync_f_slow (in libdispatch.dylib) +144
MediaToolbox_fpic_CopyCurrentEvent (in MediaToolbox) +132
AVFCore___104-[AVPlayer _setRate:withVolumeRampDuration:playImmediately:rateChangeReason:affectsCoordinatedPlayback:]_block_invoke_2 (in AVFCore) +244
AVFCore-[AVPlayer _setRate:withVolumeRampDuration:playImmediately:rateChangeReason:affectsCoordinatedPlayback:] (in AVFCore) +276
AVFCore-[AVPlayer setRate:] (in AVFCore) +56
call AVPlayer pause
Thread 81 name: fpic-sync
libsystem_kernel.dylib___ulock_wait (in libsystem_kernel.dylib) +8
libdispatch.dylib__dlock_wait (in libdispatch.dylib) +52
libdispatch.dylib__dispatch_thread_event_wait_slow (in libdispatch.dylib) +52
libdispatch.dylib___DISPATCH_WAIT_FOR_QUEUE__ (in libdispatch.dylib) +364
libdispatch.dylib__dispatch_sync_f_slow (in libdispatch.dylib) +144
MediaToolbox_itemasync_CopyProperty (in MediaToolbox) +588
MediaToolbox_fpic_CurrentItemMoment (in MediaToolbox) +184
MediaToolbox___fpic_EstablishCurrentEventForCurrentItem_block_invoke (in MediaToolbox) +136
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_barrier_sync_invoke_and_complete (in libdispatch.dylib) +52
MediaToolbox_fpic_ServiceCurrentEvent (in MediaToolbox) +600
MediaToolbox___fpic_NotifyServiceCurrentEvent_block_invoke (in MediaToolbox) +912
libdispatch.dylib__dispatch_call_block_and_release (in libdispatch.dylib) +28
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_serial_drain (in libdispatch.dylib) +744
libdispatch.dylib__dispatch_lane_invoke (in libdispatch.dylib) +428
libdispatch.dylib__dispatch_root_queue_drain (in libdispatch.dylib) +388
libdispatch.dylib__dispatch_worker_thread (in libdispatch.dylib) +256
libsystem_pthread.dylib__pthread_start (in libsystem_pthread.dylib) +132
libsystem_pthread.dylib_thread_start (in libsystem_pthread.dylib) +4
Thread 93 name: com.apple.coremedia.player.async.0x303c60240.P/GR
libsystem_kernel.dylib_mach_msg2_trap (in libsystem_kernel.dylib) +8
libsystem_kernel.dylib_mach_msg2_internal (in libsystem_kernel.dylib) +76
libsystem_kernel.dylib_mach_msg_overwrite (in libsystem_kernel.dylib) +432
libsystem_kernel.dylib_mach_msg (in libsystem_kernel.dylib) +20
libdispatch.dylib__dispatch_mach_send_and_wait_for_reply (in libdispatch.dylib) +540
libdispatch.dylib_dispatch_mach_send_with_result_and_wait_for_reply (in libdispatch.dylib) +56
libxpc.dylib_xpc_connection_send_message_with_reply_sync (in libxpc.dylib) +260
CoreMedia_FigXPCConnectionSendSyncMessageCreatingReply (in CoreMedia) +288
CoreMedia_FigXPCRemoteClientSendSyncMessageCreatingReply (in CoreMedia) +44
MediaToolbox_remoteXPCPlayer_SetRateWithOptions (in MediaToolbox) +148
MediaToolbox_playerasync_runOneCommand (in MediaToolbox) +768
MediaToolbox_playerasync_runAsynchronousCommandOnQueue (in MediaToolbox) +180
libdispatch.dylib__dispatch_client_callout (in libdispatch.dylib) +16
libdispatch.dylib__dispatch_lane_serial_drain (in libdispatch.dylib) +744
libdispatch.dylib__dispatch_lane_invoke (in libdispatch.dylib) +428
libdispatch.dylib__dispatch_root_queue_drain (in libdispatch.dylib) +388
libdispatch.dylib__dispatch_worker_thread (in libdispatch.dylib) +256
libsystem_pthread.dylib__pthread_start (in libsystem_pthread.dylib) +132
libsystem_pthread.dylib_thread_start (in libsystem_pthread.dylib) +4
We are trying to build a simple image capture app using AVFoundation and AVCaptureDevice.
Custom settings are used for exposure point and bias.
But when an image is captured using the front camera, the image captured from the app and the one from the native front camera do not match.
The image captured from the app includes more area than the native app's.
There is also a difference in the tilt angle between the two images.
So is there any way to capture an image exactly the same as the native camera using AVFoundation and AVCaptureDevice?
Native
Custom
We are trying to build a video recording app using AVFoundation and AVCaptureDevice.
No custom settings are used like iso, exposure duration. All the settings are kept to auto.
But when video is captured using the front camera at 1080x1920, the video captured from the app and the one from the native front camera do not match.
In Settings I have set the video format to 30 fps at 1080x1920.
The video captured from the app includes more area than the native app's, and some values like ISO and exposure duration do not match.
So is there any way to capture video exactly the same as the native camera using AVFoundation and AVCaptureDevice?
I have attached screenshots from video for reference.
Native
AVCapture
Here is a code snippet about AVPlayer.
avPlayer.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 60), queue: .main) { [weak self] _ in
// Call main actor-isolated instance methods
}
Xcode shows the warning "Call to main actor-isolated instance method '***' in a synchronous nonisolated context; this is an error in the Swift 6 language mode". How can I fix this?
avPlayer.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 60), queue: .main) { [weak self] _ in
Task { @MainActor in
// Call main actor-isolated instance methods
}
}
Can I use the solution above? It seems that switching actors this frequently could slow down performance.
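As a point of comparison, another pattern sometimes used here, offered as a sketch rather than a definitive answer: since the observer is added on queue: .main, the callback already runs on the main thread, so MainActor.assumeIsolated can assert the isolation instead of scheduling a new Task on every tick. someMainActorMethod stands in for the isolated calls.

avPlayer.addPeriodicTimeObserver(forInterval: CMTime(value: 1, timescale: 60), queue: .main) { [weak self] _ in
    // The observer queue is .main, so assert main-actor isolation rather than
    // hopping to a new Task on every callback.
    MainActor.assumeIsolated {
        self?.someMainActorMethod() // placeholder for the main actor-isolated calls
    }
}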