AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

AVFoundation Documentation

Posts under AVFoundation tag

363 Posts
Post not yet marked as solved
0 Replies
54 Views
Just watched the new product release, and I'm really hoping that the new iPad Pro, advertised as the next creative tool for filmmakers and artists, will finally allow RAW captures in the native Camera app or through the AVFoundation API (currently, querying for RAW-capable capture returns 0 results on the previous iPad Pro). With all the fancy multicam features and camera hardware, I don't think it takes much to enable ProRAW and Action Mode on the software side of the iPad. Unless their strategy is to make us "shoot on iPhone and edit on iPad" (as implied in their video credits), which has been my workflow with the iPhone 15 and 2022 iPad Pro :( :(
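For anyone who wants to reproduce the check, a minimal sketch (photoOutput is assumed to be an AVCapturePhotoOutput attached to a configured capture session):

import AVFoundation

// Standard AVCapturePhotoOutput queries for RAW support.
if photoOutput.isAppleProRAWSupported {
    photoOutput.isAppleProRAWEnabled = true
}
// This is the list that comes back empty on the previous iPad Pro.
print("Available RAW pixel formats:", photoOutput.availableRawPhotoPixelFormatTypes)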
Posted
by megatran.
Last updated
.
Post not yet marked as solved
1 Replies
35 Views
Hi everyone! I'm getting random crashes when I'm using the Speech Recognizer functionality in my app. This is an old bug (reported on the Apple Forums for 8 years) and I would really appreciate it if anyone from Apple could find a fix for these crashes. Can anyone also help me understand what I could do to keep the Speech Recognizer functionality available in my app while avoiding these crashes (if there is any other native library available, or a CocoaPods library)? Here is my code and also the crash log for it.

Code:

func startRecording() {
    startStopRecordBtn.setImage(UIImage(#imageLiteral(resourceName: "microphone_off")), for: .normal)
    if UserDefaults.standard.bool(forKey: Constants.darkTheme) {
        commentTextView.textColor = .white
    } else {
        commentTextView.textColor = .black
    }
    commentTextView.isUserInteractionEnabled = false
    recordingLabel.text = Constants.recording
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSession.Category.record)
        try audioSession.setMode(AVAudioSession.Mode.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError(Constants.error)
    }
    recognitionRequest.shouldReportPartialResults = true
    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        if result != nil {
            self.commentTextView.text = result?.bestTranscription.formattedString
            isFinal = (result?.isFinal)!
        }
        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            self.startStopRecordBtn.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // CRASH HERE
        self?.recognitionRequest?.append(buffer)
    }
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        showAlertWithTitle(message: Constants.error)
    }
}

Here is the crash log:

Thanks very much for reading this!
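One defensive tweak to the tap installation above that might be worth trying (an assumption on my part, not a confirmed root cause): when the microphone isn't actually available, inputNode can report a format with a zero sample rate or zero channels, and installing a tap or appending buffers with such a format can raise an exception. A sketch, reusing the same identifiers as the function above:

let recordingFormat = inputNode.outputFormat(forBus: 0)
// Bail out instead of tapping with an invalid (zeroed) hardware format.
guard recordingFormat.sampleRate > 0, recordingFormat.channelCount > 0 else {
    showAlertWithTitle(message: Constants.error)
    return
}
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] buffer, _ in
    self?.recognitionRequest?.append(buffer)
}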
Posted
by rtdevuk.
Last updated
.
Post not yet marked as solved
1 Replies
64 Views
I'm getting an issue where even unencrypted video playback fails with status .failed: Error Domain=CoreMediaErrorDomain Code=-12927 "(null)". I'm unable to find any info on the above error code. Is there some way to look this up? A sample master M3U8 is shared below. Note: if I use any variant M3U8 directly, then it plays fine.
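In case it helps with diagnosis, a small sketch (the URL is a placeholder) that dumps the player item's error log whenever a new entry arrives; the log sometimes carries a status code, comment, and URI with more detail than -12927 alone:

import AVFoundation

let item = AVPlayerItem(url: URL(string: "https://example.com/master.m3u8")!) // failing master playlist
let player = AVPlayer(playerItem: item)

NotificationCenter.default.addObserver(forName: .AVPlayerItemNewErrorLogEntry, object: item, queue: .main) { _ in
    // Print every error log event collected so far.
    for event in item.errorLog()?.events ?? [] {
        print("HLS error:", event.errorStatusCode, event.errorDomain, event.errorComment ?? "", event.uri ?? "")
    }
}
player.play()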
Posted Last updated
.
Post not yet marked as solved
0 Replies
39 Views
Hi, I just generated an HDR10 MVHEVC file; mediainfo is below:

Color range : Limited
Color primaries : BT.2020
Transfer characteristics : PQ
Matrix coefficients : BT.2020 non-constant
Codec configuration box : hvcC+lhvC

Then I generate the segment files with the command below:

mediafilesegmenter --iso-fragmented -t 4 -f av_1 av_new_1.mov

Then I upload the segment files and prog_index.m3u8 to a web server, and find that I cannot play the HLS stream in Safari... the URL is http://ip/vod/prog_index.m3u8. I checked that if I remove the tag Transfer characteristics : PQ when generating the MVHEVC file, run the same mediafilesegmenter command, and upload the files to the web server, the new version of the HLS stream does play in Safari... Is there any way to play HLS PQ video in Safari? Thanks.
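One detail that may matter (an assumption on my part, not a confirmed fix): the HLS authoring guidelines expect HDR variants to be advertised from a multivariant (master) playlist whose EXT-X-STREAM-INF carries a VIDEO-RANGE=PQ attribute, rather than loading prog_index.m3u8 directly. A hypothetical master playlist, with placeholder bandwidth, codec string, and resolution:

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-STREAM-INF:BANDWIDTH=8000000,CODECS="hvc1.2.4.L150.B0",VIDEO-RANGE=PQ,RESOLUTION=3840x2160
prog_index.m3u8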
Posted Last updated
.
Post not yet marked as solved
0 Replies
99 Views
Hello, I am working on a fairly complex iPhone app that controls the front built-in wide angle camera. I need to take and display a sequence of photos that cover the whole range of focus values available. Here is how I do it:

- call setExposureModeCustom to set the first lens position
- wait for the completionHandler to be called back
- capture a photo
- do it again for the next lens position, etc.

This works fine, but it takes longer than I expected for the completionHandler to be called back. From what I've seen, the delay scales with the exposure duration. When I set the exposure duration to the max value:

- on the iPhone 14 Pro, it takes about 3 seconds (3 times the max exposure)
- on the iPhone 8, about 1.3 s (4 times the max exposure)

I was expecting a delay of two times the exposure duration: take a photo, throw one away while changing lens position, take the next photo, etc., but it takes more than that. I also tried the same thing with changing the ISO instead of the focus position and I get the same kind of delays. Also, I do not think the problem is linked to the way I process the images, because I get the same delay even if I do nothing with the output. Is there something I could do to make things go faster for this use case? Any input would be appreciated. Thanks.

I created a minimal testing app to reproduce the issue:

import Foundation
import AVFoundation

class Main: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let dispatchQueue = DispatchQueue(label: "VideoQueue", qos: .userInitiated)
    let session: AVCaptureSession
    let videoDevice: AVCaptureDevice
    var focus: Float = 0

    override init() {
        session = AVCaptureSession()
        session.beginConfiguration()
        session.sessionPreset = .photo
        videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
        super.init()
        let videoDeviceInput = try! AVCaptureDeviceInput(device: videoDevice)
        session.addInput(videoDeviceInput)
        let videoDataOutput = AVCaptureVideoDataOutput()
        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
            videoDataOutput.setSampleBufferDelegate(self, queue: dispatchQueue)
        }
        session.commitConfiguration()
        dispatchQueue.async {
            self.startSession()
        }
    }

    func startSession() {
        session.startRunning()
        // lock max exposure duration
        try! videoDevice.lockForConfiguration()
        let exposure = videoDevice.activeFormat.maxExposureDuration.seconds * 0.5
        print("set max exposure", exposure)
        videoDevice.setExposureModeCustom(duration: CMTime(seconds: exposure, preferredTimescale: 1000), iso: videoDevice.activeFormat.minISO) { time in
            print("did set max exposure")
            self.changeFocus()
        }
        videoDevice.unlockForConfiguration()
    }

    func changeFocus() {
        let date = Date.now
        print("set focus", focus)
        try! videoDevice.lockForConfiguration()
        videoDevice.setFocusModeLocked(lensPosition: focus) { time in
            let dt = abs(date.timeIntervalSinceNow)
            print("did set focus - took:", dt, "frames:", dt / self.videoDevice.exposureDuration.seconds)
            self.next()
        }
        videoDevice.unlockForConfiguration()
    }

    func next() {
        focus += 0.02
        if focus > 1 {
            print("done")
            return
        }
        changeFocus()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("did receive video frame")
    }
}
Posted
by Saturnyn.
Last updated
.
Post not yet marked as solved
0 Replies
109 Views
Hi everyone! We are wondering whether it's possible to have two macOS apps use voice processing from AVAudioEngine at the same time, since we have had issues trying to do so. Specifically, our app seems to cut off the input stream from the other, but only when it has voice processing enabled. We are developing a macOS app that records microphone input simultaneously with videoconference apps like Zoom. We are using the voice processing from AVAudioEngine as in this sample: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing We have also noticed this behaviour in Safari when recording audio with the JavaScript Web Audio API, which also seems to use voice processing under the hood for its echo cancellation. Any leads on this would be greatly appreciated! Thanks
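For anyone comparing setups, a minimal sketch of the kind of configuration involved (simplified; the buffer size and tap body are illustrative, not our exact code):

import AVFoundation

let engine = AVAudioEngine()
do {
    // Voice processing must be enabled before the engine starts running.
    try engine.inputNode.setVoiceProcessingEnabled(true)
} catch {
    print("Could not enable voice processing:", error)
}
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: nil) { buffer, _ in
    // Microphone buffers arrive here while another app (e.g. Zoom) is also capturing.
}
do {
    try engine.start()
} catch {
    print("Could not start engine:", error)
}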
Posted
by davidesq.
Last updated
.
Post not yet marked as solved
0 Replies
86 Views
I am using AVFoundation to capture a photo. This was all working fine, then I realized all the photos were saving to the photo library in portrait mode. I wanted them to save in the orientation the device was in when the camera took the picture, much as the built-in Camera app does on iOS. So I added this code:

if let videoConnection = photoOutput.connection(with: .video), videoConnection.isVideoOrientationSupported {
    // From() is just a helper to get video orientations from the device orientation.
    videoConnection.videoOrientation = .from(UIDevice.current.orientation)
    print("Photo orientation set to \(videoConnection.videoOrientation).")
}

With this addition, the first photo taken after a device rotation logs this error in the debugger:

<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:866) - (err=-12784)

Subsequent photos will not repeat the error. Once you rotate the device again, same behavior. Photos taken after the app loads, but before any rotations have been made, do not produce this error. I have tried many things, no dice. If I comment this code out it works without error, but of course the photos are all saved in portrait mode again.
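One variation that might be worth trying (a sketch, not a confirmed fix): resolve the orientation right before each capture instead of inside a rotation callback, so the connection isn't being reconfigured while the session is still settling after the rotation. It reuses the same From() helper and photoOutput as above:

func capturePhoto() {
    let settings = AVCapturePhotoSettings()
    if let connection = photoOutput.connection(with: .video),
       connection.isVideoOrientationSupported {
        // Same helper as above, applied immediately before the capture request.
        connection.videoOrientation = .from(UIDevice.current.orientation)
    }
    photoOutput.capturePhoto(with: settings, delegate: self)
}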
Posted
by broomhead.
Last updated
.
Post not yet marked as solved
1 Replies
75 Views
I'm trying to read meta information from MXF files without success. I get an empty AVAsset array. I saw that there are mentions of "MTRegisterProfessionalVideoWorkflowFormatReaders". But there is absolutely no documentation. I don't know where to look. Has anyone encountered this? Please help with any information
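For what it's worth, the call I believe is meant to be made before creating the asset, sketched from the MediaToolbox header and not verified beyond that; the file path is a placeholder:

import MediaToolbox
import AVFoundation

// Register the professional video workflow format readers (which, as far as I can tell,
// is what allows AVFoundation to open formats like MXF), then create the asset.
MTRegisterProfessionalVideoWorkflowFormatReaders()

let asset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/clip.mxf"))
Task {
    let metadata = try await asset.load(.metadata)
    print(metadata)
}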
Posted
by IGGGG.
Last updated
.
Post not yet marked as solved
1 Replies
98 Views
I have built a camera application which uses an AVCaptureSession with the AVCaptureDevice set to .builtInDualWideCamera and isVirtualDeviceConstituentPhotoDeliveryEnabled=true to enable delivery of "simultaneous" photos (AVCapturePhoto) for a single capture request. I am using the hd1920x1080 preset, but both the wide and ultra-wide photos are being delivered in the highest possible resolution (4224x2376). I've tried to disable any setting that suggests it should be using that 4k resolution rather than 1080p on the AVCapturePhotoOutput, AVCapturePhotoSettings and AVCaptureDevice, but nothing has worked.

Some debugging that I've done:

- When I turn off constituent photo delivery by commenting out the line of code below, I end up getting a single photo delivered with the 1080p resolution, as you'd expect.

// photoSettings.virtualDeviceConstituentPhotoDeliveryEnabledDevices = captureDevice.constituentDevices

- I tried the constituent photo delivery with the .builtInDualCamera and got only 4k results (same as described above).
- I tried using an AVCaptureMultiCamSession with .builtInDualWideCamera and also only got 4k imagery.
- I inspected the resolved settings on photo.resolvedSettings.photoDimensions, and the dimensions suggest the imagery should be 1080p, but then when I inspect the UIImage, it is always 4k.

guard let imageData = photo.fileDataRepresentation() else { return }
guard let capturedImage = UIImage(data: imageData) else { return }
print("photo.resolvedSettings.photoDimensions", photo.resolvedSettings.photoDimensions) // 1920x1080
print("capturedImage.size", capturedImage.size) // 4224x2376

Any help here would be greatly appreciated, because I've run out of things to try and documentation to follow 🙏
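One more thing that might be worth trying (a sketch, not something I have verified against constituent delivery): on iOS 16 and later, the delivered photo resolution is governed by maxPhotoDimensions rather than the session preset, so pinning it to a 1920x1080 entry from the active format could help. captureDevice, photoOutput and photoSettings are the same objects as in the code above:

let target = CMVideoDimensions(width: 1920, height: 1080)
if let dims = captureDevice.activeFormat.supportedMaxPhotoDimensions.first(where: {
    $0.width == target.width && $0.height == target.height
}) {
    photoOutput.maxPhotoDimensions = dims     // set on the output before capturing
    photoSettings.maxPhotoDimensions = dims   // and echoed on the per-capture settings
}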
Posted
by nanders.
Last updated
.
Post not yet marked as solved
0 Replies
73 Views
I am implementing pan and zoom features for an app using a custom USB camera device, in iPadOS. I am using an update function (shown below) to apply transforms for scale and translation but they are not working. By re-enabling the animation I can see that the scale translation seems to initially take effect but then the image animates back to its original scale. This all happens in a fraction of a second but I can see it. The translation transform seems to have no effect at all. Printing out the value of AVCaptureVideoPreviewLayer.transform before and after does show that my values have been applied.

private func updateTransform() {
#if false
    // Disable default animation.
    CATransaction.begin()
    CATransaction.setDisableActions(true)
    defer { CATransaction.commit() }
#endif
    // Apply the transform.
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
    let transform = CATransform3DIdentity
    let translate = CATransform3DTranslate(transform, translationX, translationY, 0)
    let scale = CATransform3DScale(transform, scale, scale, 1)
    videoPreviewLayer.transform = CATransform3DConcat(translate, scale)
    logger.debug("\(String(describing: self.videoPreviewLayer.transform))")
}

My question is this, how can I properly implement pan/zoom for an AVCaptureVideoPreviewLayer? Or even better, if you see a problem with my current approach or understand why the transforms I am applying do not work, please share that information.
Posted Last updated
.
Post not yet marked as solved
1 Replies
165 Views
In this code, I aim to enable users to select an image from their phone gallery and display it with lower opacity on top of the z-index. The selected image should appear on top of the user's phone camera feed, allowing them to see the canvas on which they are drawing as well as the low-opacity image. The app's purpose is to enable users to trace an image on the canvas while simultaneously seeing the camera feed.

CameraView.swift

import SwiftUI
import AVFoundation

struct CameraView: View {
    let selectedImage: UIImage

    var body: some View {
        ZStack {
            CameraPreview()
            Image(uiImage: selectedImage)
                .resizable()
                .aspectRatio(contentMode: .fill)
                .opacity(0.5) // Adjust the opacity as needed
                .edgesIgnoringSafeArea(.all)
        }
    }
}

struct CameraPreview: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        let cameraPreview = CameraPreviewView()
        return cameraPreview
    }

    func updateUIView(_ uiView: UIView, context: Context) {}
}

class CameraPreviewView: UIView {
    private let captureSession = AVCaptureSession()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupCamera()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    private func setupCamera() {
        guard let backCamera = AVCaptureDevice.default(for: .video) else {
            print("Unable to access camera")
            return
        }
        do {
            let input = try AVCaptureDeviceInput(device: backCamera)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)
                let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
                previewLayer.videoGravity = .resizeAspectFill
                previewLayer.frame = bounds
                layer.addSublayer(previewLayer)
                captureSession.startRunning()
            }
        } catch {
            print("Error setting up camera input:", error.localizedDescription)
        }
    }
}

Thanks for your help and your time.
Posted
by jhems.
Last updated
.
Post not yet marked as solved
0 Replies
179 Views
After upgrading to iOS 17, Thread Performance Checker is complaining of a priority inversion when converting a CVPixelBuffer to a UIImage through a CIImage instance. Is this a false positive, or a real issue?

- (UIImage *)imageForSampleBuffer:(CMSampleBufferRef)sampleBuffer andOrientation:(UIImageOrientation)orientation {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *uiImage = [UIImage imageWithCIImage:ciImage];
    NSData *data = UIImageJPEGRepresentation(uiImage, 90);
}

The code snippet above, when running in a thread set to the default priority, results in the message below:

Thread Performance Checker: Thread running at User-interactive quality-of-service class waiting on a lower QoS thread running at Default quality-of-service class. Investigate ways to avoid priority inversions
PID: 1188, TID: 723209
Backtrace
=================================================================
3   AGXMetalG14              0x0000000235c77cc8 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 2927816
4   AGXMetalG14              0x0000000235ccd784 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3278724
5   AGXMetalG14              0x0000000235ccf6a4 1FEF1F89-B467-37B0-86F8-E05BC8A2A629 + 3286692
6   MetalTools               0x000000022f758b68 E712D983-01AD-3FE5-AB66-E00ABF76CD7F + 568168
7   CoreImage                0x00000001a7c0e580 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 267648
8   CoreImage                0x00000001a7d0cc08 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 1309704
9   CoreImage                0x00000001a7c0e2e0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266976
10  CoreImage                0x00000001a7c0e1d0 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 266704
11  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
12  libdispatch.dylib        0x0000000105e5be24 _dispatch_lane_barrier_sync_invoke_and_complete + 176
13  CoreImage                0x00000001a7c0a784 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 251780
14  CoreImage                0x00000001a7c0a46c 3D2AC243-0880-3BA9-BBF3-A214454875E0 + 250988
15  libdispatch.dylib        0x0000000105e5b764 _dispatch_block_async_invoke2 + 148
16  libdispatch.dylib        0x0000000105e4a7bc _dispatch_client_callout + 20
17  libdispatch.dylib        0x0000000105e5266c _dispatch_lane_serial_drain + 832
18  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
19  libdispatch.dylib        0x0000000105e524a4 _dispatch_lane_serial_drain + 376
20  libdispatch.dylib        0x0000000105e5343c _dispatch_lane_invoke + 460
21  libdispatch.dylib        0x0000000105e60404 _dispatch_root_queue_drain_deferred_wlh + 328
22  libdispatch.dylib        0x0000000105e5fa38 _dispatch_workloop_worker_thread + 444
23  libsystem_pthread.dylib  0x00000001f35a4f20 _pthread_wqthread + 288
24  libsystem_pthread.dylib  0x00000001f35a4fc0 start_wqthread + 8
Posted Last updated
.
Post not yet marked as solved
1 Replies
120 Views
In my app I play HLS streams via AVPlayer. It works well! However, when I try to download those same HLS urls via MakeAssetDownloadTask I regularly come across the error: Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found" UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=( "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>" ), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>} I have a feeling that the AVPlayer has a way to resolve this that the MakeAssetDownloadTask lacks. I am wondering if any of you have come across this or have insight. Thank you! BTW this is using Xcode Version 15.3 (15E204a) and developing for visionOS 1.0.1
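For context, roughly the shape of the download setup involved (a simplified sketch; the session identifier, URL, and the delegate object are placeholders, not the exact production code):

import AVFoundation

let config = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
let downloadSession = AVAssetDownloadURLSession(configuration: config,
                                                assetDownloadDelegate: delegate, // our AVAssetDownloadDelegate
                                                delegateQueue: .main)
// The same URL plays fine through AVPlayer.
let asset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
let task = downloadSession.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "Example",
                                                 assetArtworkData: nil,
                                                 options: nil)
task?.resume()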
Posted Last updated
.
Post not yet marked as solved
0 Replies
247 Views
Dear Apple Developer Forum, we have customers complaining that they can no longer play live streams (HLS FairPlay) with our application since upgrading their phones to iOS 17.4.1. We can't reproduce this problem in-house, but the error code sent to our analytics platform is CoreMediaErrorDomain error -12852. Would it be possible to get more information on this error, especially its potential cause, and, if the app is not responsible, how we can help our customers? Kind regards, Cédric
Posted Last updated
.
Post not yet marked as solved
2 Replies
307 Views
Hello, I tried to build the AVCam sample application for iOS 17 and run it on a MacBook (Designed for iPad) with macOS 14.3 (Sonoma). https://developer.apple.com/documentation/avfoundation/capture_setup/avcam_building_a_camera_app?language=objc

When building and testing with Xcode 15.2, the AVCam application crashes systematically when choosing the target "My Mac (Designed for iPad)". In fact, a SIGABRT signal is received in a thread dealing with the "portrait effect":

Thread 19 Queue : com.apple.portrait.effect_init (serial)

Is it a known bug? Is there a workaround for this case? Best regards

Also, an external webcam is detected by AVCam, but preview and capture are systematically upside down (it may be the same with the FaceTime HD camera). Is it a known bug? Is there a workaround for this case?
Posted
by ftristani.
Last updated
.
Post not yet marked as solved
0 Replies
113 Views
Is it possible to find an IDR frame (CMSampleBuffer) in an AVAsset H.264 video file?
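Not an authoritative answer, but one approach that I believe gets close: read the video track with AVAssetReader in passthrough mode and check each sample's kCMSampleAttachmentKey_NotSync attachment; for H.264, sync samples are the IDR frames. A minimal sketch:

import AVFoundation

func printSyncFrames(in asset: AVAsset) async throws {
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil) // nil = passthrough
    reader.add(output)
    reader.startReading()
    while let sample = output.copyNextSampleBuffer() {
        // A sample without the "not sync" attachment (or with it set to false) is a sync sample.
        let attachments = CMSampleBufferGetSampleAttachmentsArray(sample, createIfNecessary: false) as? [[CFString: Any]]
        let notSync = attachments?.first?[kCMSampleAttachmentKey_NotSync] as? Bool ?? false
        if !notSync {
            print("sync (IDR) frame at", CMSampleBufferGetPresentationTimeStamp(sample).seconds)
        }
    }
}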
Posted
by tien6b0.
Last updated
.
Post not yet marked as solved
0 Replies
129 Views
I have a camera application which aims to take images as close to simultaneously as possible from the wide and ultra-wide cameras. The AVCaptureMultiCamSession is setup with manual connections. Note: we are not using builtInDualWideCamera with constituent photo delivery enabled since some features we use are not supported in that mode. At the moment, we are manually trying to synchronize frames between the two cameras, but we would like to use the AVCaptureDataOutputSynchronizer to improve our results. Is it possible to synchronize the wide and ultra-wide video outputs? All examples and docs that I've found show synchronization with video and depth, metadata, or audio, but not two video outputs. From my testing, I've found that the dataOutputSynchronizer either fires with the wide video output, or the ultra video output, but never both (at least one is nil), suggesting that they are not being synchronized.

self.outputSync = AVCaptureDataOutputSynchronizer(dataOutputs: [wideCameraOutput, ultraCameraOutput])
outputSync.setDelegate(self, queue: .main)

...

func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer, didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) {
    guard let syncWideData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: self.wideCameraOutput) as? AVCaptureSynchronizedSampleBufferData,
          let syncedUltraData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: self.ultraCameraOutput) as? AVCaptureSynchronizedSampleBufferData else {
        return
    }
    // either syncWideData or syncUltraData is always nil, so the guard condition never passes.
}
Posted
by nanders.
Last updated
.
Post not yet marked as solved
1 Replies
246 Views
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point and < some other point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

Now I have the frame positions calculated (predetermined before audio is scheduled; I already made all the necessary computations). So I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus bufferSize:bufferSize format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer atTime:startTime options:0 completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out what point in the full buffer I'm at within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not my entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid using a timer if possible and use sample time.
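One idea, shown in Swift for brevity and under the assumption that the tap's when timestamp is in the node's time base: AVAudioPlayerNode's playerTime(forNodeTime:) should map it onto the player's sample timeline (which starts at 0 when playback begins), giving a frame position to compare against the precomputed trigger points. triggerFrame below is a placeholder for one of those points.

playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, when in
    guard let playerTime = playerNode.playerTime(forNodeTime: when),
          playerTime.isSampleTimeValid else { return }
    let startFrame = playerTime.sampleTime                              // frame position of this chunk's first sample
    let endFrame = startFrame + AVAudioFramePosition(buffer.frameLength)
    if startFrame <= triggerFrame && triggerFrame < endFrame {
        // fire the event associated with triggerFrame here
    }
}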
Posted Last updated
.
Post not yet marked as solved
0 Replies
138 Views
We found that crashes occur on some specific devices, but we don't know the root cause. The crash only appears on the user side and cannot be reproduced on our local devices. From the stack, the crash occurs inside AVCapture after calling discoverySessionWithDeviceTypes:

NSArray<AVCaptureDevice*>* GetVideoCaptureDevices() {
    NSArray* captureDeviceType = @[
        AVCaptureDeviceTypeBuiltInWideAngleCamera,
        AVCaptureDeviceTypeExternalUnknown
    ];
    AVCaptureDeviceDiscoverySession* deviceDiscoverySession =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:captureDeviceType
                                                                mediaType:AVMediaTypeVideo
                                                                 position:AVCaptureDevicePositionUnspecified];
    return deviceDiscoverySession.devices;
}

The following is the crash call stack:

OS Version: macOS 13.5 (22G74)
Report Version: 104
Crashed Thread: 10301

Application Specific Information:
Fatal Error: EXC_BAD_INSTRUCTION / EXC_I386_INVOP / 0x7ff8194b3522

Thread 10301 Crashed:
0   AppKit                   0x7ff8194b3522 -[NSApplication _crashOnException:]
1   AppKit                   0x7ff8194b32b3 -[NSApplication reportException:]
2   AppKit                   0x7ff819569efa NSApplicationUncaughtExceptionHandler
3   CoreFoundation           0x7ff8161c010a <unknown>
4   libobjc.A.dylib          0x7ff815c597c8 <unknown>
5   libc++abi.dylib          0x7ff815f926da std::__terminate
6   libc++abi.dylib          0x7ff815f92695 std::terminate
7   libobjc.A.dylib          0x7ff815c65929 <unknown>
8   libdispatch.dylib        0x7ff815e38046 _dispatch_client_callout
9   libdispatch.dylib        0x7ff815e39266 _dispatch_once_callout
10  AVFCapture               0x7ff8328cafb6 +[AVCaptureDALDevice devices]
11  AVFCapture               0x7ff832996410 +[AVCaptureDevice_Tundra _devicesWithAllowIOSMacEnvironment:]
12  AVFCapture               0x7ff83299652b +[AVCaptureDevice_Tundra _devicesWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:]
13  AVFCapture               0x7ff83299e8c0 -[AVCaptureDeviceDiscoverySession_Tundra _initWithDeviceTypes:mediaType:position:allowIOSMacEnvironment:prefersUnsuspendedAndAllowsAnyPosition:]
14  AVFCapture               0x7ff83299e7a4 +[AVCaptureDeviceDiscoverySession_Tundra discoverySessionWithDeviceTypes:mediaType:position:]
15  Electron Framework       0x119453784 media::GetVideoCaptureDevices (video_capture_device_avfoundation_helpers.mm:22)

I want to know what the root cause of this crash is. How should I simulate and fix it? Any suggestions would be highly appreciated. Thank you.
Posted
by Colin1994.
Last updated
.