So I've spent the last five years optimizing my video AI system so that it runs with less than 5% CPU while processing a 30fps video feed on a MacBook Pro M2, and everything is great, until Sonoma comes out and I find myself consuming 40% CPU for the exact same workload.
So I fire up Instruments, and the "heaviest stack trace" (see screenshot) turns out to be Espresso doing some completely unasked-for and absolutely useless processing on my video frames. I turn off Reactions, but nothing helps - the CPU consumption stays at 40%.
"Reactions" is nothing but a useless toy to please some WWDC keynote fanboys, I don't want it anywhere near my app or my users, and I especially do not want to take the blame for it pissing away the user's CPU cycles and battery.
Now, how do I make it go away, forever?
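For reference, the closest public API I could find is a pair of read-only class properties in the macOS 14 SDK; a minimal sketch to at least confirm whether the effect is switched on (they only report the Control Center setting, they do not disable it):
import AVFoundation
// Read-only class properties (macOS 14+ / iOS 17+); they reflect the user's
// Control Center "Reactions" setting and cannot be changed programmatically.
if #available(macOS 14.0, iOS 17.0, *) {
    print("Reaction effects enabled:", AVCaptureDevice.reactionEffectsEnabled)
    print("Reaction gestures enabled:", AVCaptureDevice.reactionEffectGesturesEnabled)
}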
Best regards
Jacob
Subject: Urgent: iPhone 13 Camera App Crashing Issue in Portrait Mode
Dear Apple Support Team,
I trust this message finds you well. I am writing to urgently bring to your attention a persistent issue I have encountered with my iPhone 13 camera.
The problem arises when using Portrait Mode with studio lighting specifically for human subjects. In these instances, the camera screen consistently goes black, leading to a complete crash of the camera app. Despite attempts to address the issue, including resetting all settings and rebooting the device, the problem persists.
Interestingly, I have not experienced any disruptions when capturing photos of objects in Portrait Mode. The camera app crash seems to be directly associated with capturing human subjects using studio lighting in Portrait Mode.
Given that this issue has occurred over 10 times, it is causing a significant disruption to the functionality of the iPhone 13 camera and impacting my overall user experience. I am reaching out to seek your assistance in resolving this matter promptly.
I kindly request a thorough investigation into this issue and appreciate your prompt attention to rectifying the problem. If any additional information is required, or if there are specific troubleshooting steps you recommend, please do let me know.
Your swift resolution to this matter will be highly valued, and I thank you in advance for your assistance.
I have also tried resetting all settings.
I am also on the latest version, iOS 17.2.
The issue has been happening since I bought the phone.
It is still not fixed.
Starting with Sonoma 14.2 it is no longer possible to connect Canon cameras to an app via USB using Canon's EDSDK framework. This was working fine up to Sonoma 14.1.
The app using the EDSDK is not crashing, but the SDK no longer reports any connected cameras. The camera is connected and can be seen in the system report as well as in e.g. gphoto2 and even in the EOS Utility software.
It seems that 14.2 introduced some breaking change to how apps access cameras.
I've tried upgrading to the newest EDSDK version and checked with and without App Sandbox. There is no way to find the camera on 14.2 any longer.
On macOS Sonoma I have a SwiftUI app that correctly plays remote video files and local video files from the app bundle.
Where I'm having trouble is setting up the AVPlayer URL for a UVC camera device directly connected on the Mac.
let url = URL(string: "https://some-remote-video.mp4")!
player = AVPlayer(url: url)
player.play()
Is there some magic to using a UVC device with AVPlayer, or do I need to access the UVC device differently?
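For context, the usual route for a directly connected UVC camera is AVCaptureSession rather than AVPlayer; a rough sketch, assuming macOS 14's .external device type (older SDKs used .externalUnknown):
import AVFoundation
// Hedged sketch: capture from an external (UVC) camera with AVCaptureSession.
let session = AVCaptureSession()
let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                 mediaType: .video,
                                                 position: .unspecified)
if let uvcDevice = discovery.devices.first,
   let input = try? AVCaptureDeviceInput(device: uvcDevice),
   session.canAddInput(input) {
    session.addInput(input)
}
let previewLayer = AVCaptureVideoPreviewLayer(session: session) // show this layer in the view
session.startRunning()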
Thanks,
Grahm
Greetings everyone,
My app crashes when I open and close the camera screen. I have added a subview in the camera that shows the main screen. The app does not crash every time; it works well five or six times, and then it crashes.
I'm using the Quickpose.ai library, and the crash appears to happen inside the library, so I don't know where the problem is. I have included some code and my crash log.
*** Terminating app due to uncaught exception 'NSGenericException', reason: '*** -[AVCaptureSession startRunning] startRunning may not be called between calls to beginConfiguration and commitConfiguration'
*** First throw call stack:
(0x1889f4870 0x180d13c00 0x1a4e30b44 0x10505cff0 0x1047ed7cc 0x1047ed84c 0x105824f50 0x105826b34 0x10582e98c 0x10582f728 0x10583c5f8 0x10583bc2c 0x1f2365964 0x1f2365a04)
libc++abi: terminating due to uncaught exception of type NSException
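For what it's worth, the exception message points at the ordering: startRunning cannot be called while a configuration block is still open. A minimal sketch of the expected sequence (illustrative names only):
import AVFoundation
let captureSession = AVCaptureSession() // illustrative

captureSession.beginConfiguration()
// ... add or remove inputs and outputs here ...
captureSession.commitConfiguration() // must be called before startRunning

// startRunning blocks the calling thread, so run it off the main queue.
DispatchQueue.global(qos: .userInitiated).async {
    captureSession.startRunning()
}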
![]("https://developer.apple.com/forums/content/attachment/a1eeece3-6529-4c79-8931-963f58818a93" "title=Screenshot 2023-12-12 at 9.35.27 AM.png;width=1920;height=1080")
![]("https://developer.apple.com/forums/content/attachment/2184c975-e299-40e4-b466-cafa5165ae03" "title=Screenshot 2023-12-12 at 9.35.32 AM.png;width=1920;height=1080")
`
![]("https://developer.apple.com/forums/content/attachment/d78ac3ac-313a-4df9-960d-0c58c3087bec" "title=Screenshot 2023-12-15 at 12.11.38 PM.png;width=1920;height=1080")
``
if UIImagePickerController.isSourceTypeAvailable(.camera) {
    let imagePicker = UIImagePickerController()
    imagePicker.delegate = self
    imagePicker.allowsEditing = false
    imagePicker.sourceType = .camera
    self.present(imagePicker, animated: true, completion: nil)
}
This code crashes on an M2 Mac (Designed for iPad) with the following exception:
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:249) - (err=0)
An uncaught exception was raised
*** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]
(
0 CoreFoundation 0x0000000180b02800 __exceptionPreprocess + 176
1 libobjc.A.dylib 0x00000001805f9eb4 objc_exception_throw + 60
2 CoreFoundation 0x0000000180a1a724 -[__NSPlaceholderDictionary initWithObjects:forKeys:count:] + 728
3 CoreFoundation 0x0000000180a1a420 +[NSDictionary dictionaryWithObjects:forKeys:count:] + 52
4 AVFCapture 0x000000019de90374 -[AVCaptureFigVideoDevice _cameraInfo] + 200
5 AVFCapture 0x000000019de90278 -[AVCaptureFigVideoDevice updateStreamingDeviceHistory] + 36
6 AVFCapture 0x000000019deec8c0 -[AVCaptureSession _startFigCaptureSession] + 464
7 AVFCapture 0x000000019def0980 -[AVCaptureSession _buildAndRunGraph:] + 1936
8 AVFCapture 0x000000019deecc00 -[AVCaptureSession _setRunning:] + 120
9 AVFCapture 0x000000019deec46c -[AVCaptureSession startRunning] + 452
10 libRPAC.dylib 0x00000001051c9024 _replacement_AVCaptureSession_startRunning + 104
11 libdispatch.dylib 0x000000010509cf14 _dispatch_call_block_and_release + 32
12 libdispatch.dylib 0x000000010509eb4c _dispatch_client_callout + 20
13 libdispatch.dylib 0x00000001050a7cd8 _dispatch_lane_serial_drain + 864
14 libdispatch.dylib 0x00000001050a8dcc _dispatch_lane_invoke + 416
15 libdispatch.dylib 0x00000001050b877c _dispatch_root_queue_drain_deferred_wlh + 652
16 libdispatch.dylib 0x00000001050b7a54 _dispatch_workloop_worker_thread + 444
17 libsystem_pthread.dylib 0x0000000105147d9c _pthread_wqthread + 288
18 libsystem_pthread.dylib 0x000000010514fab4 start_wqthread + 8
)
I tried running this demo app in "Designed for iPad" mode on my M3 MacBook Pro, and it crashes with the following errors:
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
CMIO_DAL_PlugInManagement.cpp:917:CreatePlugIn Could not find plugin with kCMIOHardwarePlugInTypeID
CMIO_DAL_CMIOExtension_Device.mm:355:Device legacy uuid isn't present, using new style uuid instead
CMIO_DAL_CMIOExtension_Device.mm:355:Device legacy uuid isn't present, using new style uuid instead
[C:1-3] Error received: Invalidated by remote connection.
CMIO_DAL_CMIOExtension_Stream.mm:1429:GetPropertyData wrong data size for kCMIOStreamPropertyCenterStageFramingMode
CMIOHardware.cpp:331:CMIOObjectGetPropertyData Error: 561211770, failed
Fig assert: "err == 0 " at bail (CMIOUtilities.h:133) - (err=561211770)
CMIO_DAL_CMIOExtension_Stream.mm:1429:GetPropertyData wrong data size for kCMIOStreamPropertyCenterStageFramingMode
CMIOHardware.cpp:331:CMIOObjectGetPropertyData Error: 561211770, failed
Fig assert: "err == 0 " at bail (CMIOUtilities.h:133) - (err=561211770)
Using capture device: FaceTime HD Camera
Camera access not determined.
Unknown client: Capturing Photos
LSPrefs: could not find untranslocated node for <FSNode 0x6000022578c0> { isDir = ?, path = '/private/var/folders/yk/2vw8ntf53r79cyldlxx4t4t80000gn/X/6527F067-B4CF-5E9F-8412-6ADCB21853EE/d/Wrapper/Capturing Photos.app' }, proceeding on the assumption it is not translocated: Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
Error loading /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos (84): dlopen(/System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/com.apple.Photos, 0x0109): Symbol not found: _OBJC_CLASS_$_PXSearchResultsViewModel
Referenced from: <128FED4B-1EFC-38CC-BFB9-F6980FB96165> /System/Library/Accessibility/BundlesBase/com.apple.Photos.axbundle/Versions/A/com.apple.Photos
Expected in: <0E82B4EE-CAFC-36CC-8E72-5DF1BAD3BBD2> /System/iOSSupport/System/Library/PrivateFrameworks/PhotosUICore.framework/Versions/A/PhotosUICore
AX Safe category class 'SFUnifiedBarRegistrationAccessibility' was not found!
Photo library access not determined.
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
<<<< FigCaptureCameraParameters >>>> Fig assert: "success" at bail (FigCaptureCameraParameters.m:252) - (err=0)
fopen failed for data file: errno = 2 (No such file or directory)
Errors found! Invalidating cache...
fopen failed for data file: errno = 2 (No such file or directory)
Errors found! Invalidating cache...
CMIOHardware.cpp:1388:CMIOStreamRegisterAsyncStillCaptureCallback stream doesn't support async still capture
CMIOHardware.cpp:1412:CMIOStreamRegisterAsyncStillCaptureCallback Error: 1970171760, failed
<<<< CMIOFigCaptureStream >>>> Fig assert: "! stream->streaming" at bail (CMIOFigCaptureStream.m:1173) - (err=0)
-[MTLDebugDevice newTextureWithDescriptor:iosurface:plane:]:2641: failed assertion `Texture Descriptor Validation
IOSurface textures must use MTLStorageModeShared
libsystem_kernel.dylib`:
0x188f2e0d4 <+0>: mov x16, #0x148
0x188f2e0d8 <+4>: svc #0x80
-> 0x188f2e0dc <+8>: b.lo 0x188f2e0fc ; <+40> Thread 19: signal SIGABRT
0x188f2e0e0 <+12>: pacibsp
0x188f2e0e4 <+16>: stp x29, x30, [sp, #-0x10]!
0x188f2e0e8 <+20>: mov x29, sp
0x188f2e0ec <+24>: bl 0x188f26230 ; cerror_nocancel
0x188f2e0f0 <+28>: mov sp, x29
0x188f2e0f4 <+32>: ldp x29, x30, [sp], #0x10
0x188f2e0f8 <+36>: retab
0x188f2e0fc <+40>: ret
When developing a custom camera for iOS, setting the AVCaptureSession sessionPreset to AVCaptureSessionPresetPhoto makes it impossible to take photos on the iPhone 15 Pro Max, while other devices behave normally. With other sessionPreset values, photos can be taken normally. Please help us determine the cause.
In addition, I initially thought there was a problem with our code, but when I looked at some demos written by others, the same problem occurred when using the AVCaptureSessionPresetPhoto enumeration and running on an iPhone 15 Pro Max.
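For clarity, a minimal sketch of the configuration being described (illustrative only):
import AVFoundation
let session = AVCaptureSession()
session.beginConfiguration()
if session.canSetSessionPreset(.photo) {
    session.sessionPreset = .photo // the preset that reportedly fails to capture on iPhone 15 Pro Max
}
session.commitConfiguration()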
With the iPhone 15's addition of the USB-C port which can power external devices, such as USB cameras or capture devices, I'm curious what would have to happen to enable the same external USB Camera functionality in iOS 17 that is currently available in iPadOS?
I created a CMIO CameraExtension target in Xcode for my macOS Swift app, and it is running with FaceTime. I guess this kind of extension has a lot of security limitations.
I'd like to run a command like "netstat" in the extension. Is it possible to call Process.run()? I keep getting an error like "The file zsh doesn't exist". The same code using Process.run() worked in a macOS app.
I'd also like to use DistributedNotificationCenter to send text from the app to the CameraExtension. Is that possible? I do not receive any messages in the CameraExtension (see the sketch below).
If there is any other IPC method between a macOS app and a CameraExtension, please let me know.
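A minimal sketch of the DistributedNotificationCenter attempt described above, with a hypothetical notification name (this is what does not seem to reach the extension):
import Foundation

// App side (sender); "com.example.cameraextension.message" is a hypothetical name.
DistributedNotificationCenter.default().postNotificationName(
    Notification.Name("com.example.cameraextension.message"),
    object: "hello from the app",
    userInfo: nil,
    deliverImmediately: true
)

// Extension side (receiver).
DistributedNotificationCenter.default().addObserver(
    forName: Notification.Name("com.example.cameraextension.message"),
    object: nil,
    queue: .main
) { note in
    NSLog("Received: \(String(describing: note.object))")
}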
Hello,
Faced with a really perplexing issue. The primary problem is that sometimes I get depth and video data as expected, but at other times I don't. And sometimes I'll get both data outputs for 4-5 frames and then it'll just stop. The source code I implemented is a modified version of the sample code provided by Apple, and interestingly enough I can't re-create this issue with the Apple sample app. So I'm wondering what I could be doing wrong?
Here's the code for setting up the capture input. preferredDepthResolution is 1280 in my case. I'm running this on an iPad Pro (6th gen), iOS version 17.0.3 (21A360). I encounter this issue on an iPhone 13 Pro as well, iOS version 17.0 (21A329).
private func setupLiDARCaptureInput() throws {
    // Look up the LiDAR camera.
    guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera, for: .video, position: .back) else {
        throw ConfigurationError.lidarDeviceUnavailable
    }
    guard let format = (device.formats.last { format in
        format.formatDescription.dimensions.width == preferredWidthResolution &&
        format.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange &&
        format.videoSupportedFrameRateRanges.first(where: { $0.maxFrameRate >= 60 }) != nil &&
        !format.isVideoBinned &&
        !format.supportedDepthDataFormats.isEmpty
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    guard let depthFormat = (format.supportedDepthDataFormats.last { depthFormat in
        depthFormat.formatDescription.mediaSubType.rawValue == kCVPixelFormatType_DepthFloat16
    }) else {
        throw ConfigurationError.requiredFormatUnavailable
    }
    // Begin the device configuration.
    try device.lockForConfiguration()
    // Configure the device and depth formats.
    device.activeFormat = format
    device.activeDepthDataFormat = depthFormat
    let desc = format.formatDescription
    dimensions = CMVideoFormatDescriptionGetDimensions(desc)
    let duration = CMTime(value: 1, timescale: CMTimeScale(60))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    // Finish the device configuration.
    device.unlockForConfiguration()
    self.device = device
    print("Selected video format: \(device.activeFormat)")
    print("Selected depth format: \(String(describing: device.activeDepthDataFormat))")
    // Add a device input to the capture session.
    let deviceInput = try AVCaptureDeviceInput(device: device)
    captureSession.addInput(deviceInput)
    guard let audioDevice = AVCaptureDevice.default(for: .audio) else {
        return
    }
    // Configure audio input - always configure audio even if isAudioEnabled is false
    audioDeviceInput = try! AVCaptureDeviceInput(device: audioDevice)
    captureSession.addInput(audioDeviceInput)
    deviceSystemPressureStateObservation = device.observe(
        \.systemPressureState,
        options: .new
    ) { _, change in
        guard let systemPressureState = change.newValue else { return }
        print("system pressure \(systemPressureState.levelAsString()) due to \(systemPressureState.factors)")
    }
}
Here's how I'm setting up the output:
private func setupLiDARCaptureOutputs() {
    // Create an object to output video sample buffers.
    videoDataOutput = AVCaptureVideoDataOutput()
    captureSession.addOutput(videoDataOutput)
    // Create an object to output depth data.
    depthDataOutput = AVCaptureDepthDataOutput()
    depthDataOutput.isFilteringEnabled = false
    captureSession.addOutput(depthDataOutput)
    audioDeviceOutput = AVCaptureAudioDataOutput()
    audioDeviceOutput.setSampleBufferDelegate(self, queue: videoQueue)
    captureSession.addOutput(audioDeviceOutput)
    // Create an object to synchronize the delivery of depth and video data.
    outputVideoSync = AVCaptureDataOutputSynchronizer(dataOutputs: [depthDataOutput, videoDataOutput])
    outputVideoSync.setDelegate(self, queue: videoQueue)
    // Enable camera intrinsics matrix delivery.
    guard let outputConnection = videoDataOutput.connection(with: .video) else { return }
    if outputConnection.isCameraIntrinsicMatrixDeliverySupported {
        outputConnection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}
The top part of my delegate implementation is as follows:
func dataOutputSynchronizer(
    _ synchronizer: AVCaptureDataOutputSynchronizer,
    didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection
) {
    // Retrieve the synchronized depth and sample buffer container objects.
    guard let syncedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData,
          let syncedVideoData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else {
        if synchronizedDataCollection.synchronizedData(for: depthDataOutput) == nil {
            print("no depth data at time \(mach_absolute_time())")
        }
        if synchronizedDataCollection.synchronizedData(for: videoDataOutput) == nil {
            print("no video data at time \(mach_absolute_time())")
        }
        return
    }
    print("received depth data \(mach_absolute_time())")
}
As you can see, I'm console logging whenever depth data is not received. Note that because I'm driving the video frames at 60 fps, it's expected that I'll only receive depth data for every alternate video frame.
Console output is posted as a follow-up comment (because of the character limit). I edited some lines out for brevity. You'll see it started streaming correctly, but after a while it stopped receiving both video and depth outputs (in some other runs it works perfectly, and in some other runs I receive no depth data whatsoever). One thing to note: I sometimes run QuickTime mirroring to see the device screen and what the app is displaying (so I'm not sure if that's causing any interference; that said, I don't see any system pressure changes either).
Any help is most appreciated! Thanks.
Trying to access camera hardware using
let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera],
                                                              mediaType: AVMediaType.video,
                                                              position: AVCaptureDevice.Position.front)
guard let captureDevice = deviceDiscoverySession.devices.first else {
    print("Failed to get the camera device")
    return
}
On many devices I get the "Failed to get the camera device" error.
Devices like
iPhone XS
iPhone X
iPhone 7
iPhone 11
iPhone 11 Pro Max
iPhone 8
iPhone14,7
iPhone 12
iPhone XR
iPhone SE (2nd generation)
iPhone 13
iPhone 13 Pro
iPhone 6s
iPhone 6s Plus
Is there a reason why this might happen if the hardware is functioning properly? Please help, as a lot of users are facing this issue.
I made a "Camera Extension" target in my Xcode macOS Swift app.
I got Swift code with CMIOExtensionDeviceSource.
I added NSLog() calls and String.write() to a file under FileManager.default.temporaryDirectory.
My camera extension installation was successful and it runs with FaceTime.
But I cannot see the NSLog output or the debug output temp file in Xcode or Console.
How can I see debug output from my Camera Extension?
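One common approach, sketched here with hypothetical identifiers: log through os.Logger with an explicit subsystem, then filter Console.app on that subsystem (with Info and Debug messages enabled) while the extension is running under FaceTime.
import os

// Hypothetical subsystem/category; filter on the subsystem in Console.app.
let logger = Logger(subsystem: "com.example.MyCameraExtension", category: "lifecycle")

logger.info("Camera extension started")
logger.debug("startStream called")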
I'm building a Camera app, where I have two AVCaptureSessions, one for video and one for audio. (See this for an explanation why I don't just have one).
I receive my CMSampleBuffers in the AVCaptureVideoDataOutput and AVCaptureAudioDataOutput delegates.
Now, when I enable the video stabilization mode "cinematicExtended", the AVCaptureVideoDataOutput has a 1-2 seconds delay, meaning I will receive my audio CMSampleBuffers 1-2 seconds earlier than I will receive my video CMSampleBuffers!
This is the code:
func captureOutput(_ captureOutput: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from _: AVCaptureConnection) {
    let type = captureOutput is AVCaptureVideoDataOutput ? "Video" : "Audio"
    let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    print("Incoming \(type) buffer at \(timestamp.seconds) seconds...")
}
Without video stabilization, this logs:
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107862.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107862.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107862.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
With video stabilization, this logs:
Incoming Audio frame at 107862.52558333334 seconds...
Incoming Video frame at 107861.535921166 seconds...
Incoming Audio frame at 107862.54691666667 seconds...
Incoming Video frame at 107861.569257333 seconds...
Incoming Audio frame at 107862.56825 seconds...
Incoming Video frame at 107861.585925333 seconds...
Incoming Audio frame at 107862.58958333333 seconds...
As you can see, the video frames arrive almost a full second later than when they are intended to be presented!
There are a few guides on how to use AVAssetWriter online, but all of them recommend starting the AVAssetWriter session once the first video frame arrives - in my case I cannot do that, since the first second of video frames is from before the user even started the recording.
I also can't really wait 1 second here, as then I would lose 1 second of audio samples, since those are realtime and not delayed.
I also can't really start the session on the first audio frame and drop all video frames until that point, since then the resulting video would start with one blank frame, as the video frame is never exactly on that first audio frame timestamp.
Any advice on how I can synchronize this?
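One possible direction, sketched under the assumption that the writer session can start at the first stabilized video frame's timestamp: queue the early audio buffers rather than dropping them, then flush the ones at or after the session start once the first video frame arrives. The assetWriter/videoInput/audioInput names below are hypothetical, not the actual RecordingSession code.
import AVFoundation

// Hedged sketch: buffer audio until the first (delayed) video frame arrives,
// then start the writer session at the video timestamp and flush queued audio.
final class SyncedWriter {
    let assetWriter: AVAssetWriter
    let videoInput: AVAssetWriterInput
    let audioInput: AVAssetWriterInput
    private var pendingAudio: [CMSampleBuffer] = []
    private var sessionStarted = false

    init(assetWriter: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.assetWriter = assetWriter
        self.videoInput = videoInput
        self.audioInput = audioInput
    }

    func appendAudio(_ buffer: CMSampleBuffer) {
        // Audio arrives ~1s ahead of stabilized video, so queue it until the session starts.
        if sessionStarted {
            if audioInput.isReadyForMoreMediaData { audioInput.append(buffer) }
        } else {
            pendingAudio.append(buffer)
        }
    }

    func appendVideo(_ buffer: CMSampleBuffer) {
        let pts = CMSampleBufferGetPresentationTimeStamp(buffer)
        if !sessionStarted {
            assetWriter.startSession(atSourceTime: pts)
            sessionStarted = true
            // Flush queued audio that belongs inside the session.
            for audio in pendingAudio where CMSampleBufferGetPresentationTimeStamp(audio) >= pts {
                if audioInput.isReadyForMoreMediaData { audioInput.append(audio) }
            }
            pendingAudio.removeAll()
        }
        if videoInput.isReadyForMoreMediaData { videoInput.append(buffer) }
    }
}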
Here is my code: RecordingSession.swift
I'm using the AVFoundation Swift APIs to record Video (CMSampleBuffers) and Audio (CMSampleBuffers) to a file using AVAssetWriter.
Initializing the AVAssetWriter happens quite quickly, but calling assetWriter.startWriting() fully blocks the entire application AND ALL THREADS for 3 seconds. This only happens in Debug builds, not in Release.
Since it blocks all threads and only happens in Debug, I'm led to believe that this is an Xcode/Debugger/LLDB hang issue that I'm seeing.
Does anyone experience something similar?
Here’s how I set all of that up: startRecording(...)
And here’s the line that makes it hang for 3+ seconds: assetWriter.startWriting(...)
Hi there,
I am building a camera application to be able to capture an image with the wide and ultra wide cameras simultaneously (or as close as possible) with the intrinsics and extrinsics for each camera also delivered.
We are able to achieve this with an AVCaptureMultiCamSession and AVCaptureVideoDataOutput, setting up the .builtInWideAngleCamera and .builtInUltraWideCamera manually. Doing this, we are able to enable the delivery of the intrinsics via the AVCaptureConnection of the cameras. Also, geometric distortion correction is enabled for the ultra camera (by default).
However, we are investigating whether it is possible to move the application over to the .builtInDualWideCamera with AVCapturePhotoOutput and AVCaptureSession, to simplify our application and get access to depth data. We are using the isVirtualDeviceConstituentPhotoDeliveryEnabled=true property to allow for simultaneous capture. Functionally, everything is working fine, except that when isGeometricDistortionCorrectionEnabled is not set to false, photoOutput.isCameraCalibrationDataDeliverySupported returns false.
From this thread and the docs, it appears that we cannot get the intrinsics when isGeometricDistortionCorrectionEnabled=true (only applicable to the ultra wide), unless we use an AVCaptureVideoDataOutput.
Is there any way to get access to the intrinsics for the wide and ultra while enabling geometric distortion correction for the ultra?
guard let captureDevice = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
    throw InitError.error("Could not find builtInDualWideCamera")
}
self.captureDevice = captureDevice
self.videoDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
self.photoOutput = AVCapturePhotoOutput()
self.captureSession = AVCaptureSession()
self.captureSession.beginConfiguration()

captureSession.sessionPreset = AVCaptureSession.Preset.hd1920x1080
captureSession.addInput(self.videoDeviceInput)
captureSession.addOutput(self.photoOutput)

try captureDevice.lockForConfiguration()
captureDevice.isGeometricDistortionCorrectionEnabled = false // <- NB line
captureDevice.unlockForConfiguration()

/// configure photoOutput
guard self.photoOutput.isVirtualDeviceConstituentPhotoDeliverySupported else {
    throw InitError.error("Dual photo delivery is not supported")
}
self.photoOutput.isVirtualDeviceConstituentPhotoDeliveryEnabled = true
print("isCameraCalibrationDataDeliverySupported", self.photoOutput.isCameraCalibrationDataDeliverySupported) // false when distortion correction is enabled

let videoOutput = AVCaptureVideoDataOutput()
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer delegate", attributes: []))
if captureSession.canAddOutput(videoOutput) {
    captureSession.addOutput(videoOutput)
}

self.videoPreviewLayer.setSessionWithNoConnection(self.captureSession)
self.videoPreviewLayer.videoGravity = AVLayerVideoGravity.resizeAspect
let cameraVideoPreviewLayerConnection = AVCaptureConnection(inputPort: self.videoDeviceInput.ports.first!, videoPreviewLayer: self.videoPreviewLayer)
self.captureSession.addConnection(cameraVideoPreviewLayerConnection)

self.captureSession.commitConfiguration()
self.captureSession.startRunning()
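For reference, a sketch of the AVCaptureVideoDataOutput route mentioned above, reusing the videoOutput from the code here; per-connection intrinsic matrix delivery would need to be enabled during configuration, before startRunning:
if let connection = videoOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// The intrinsics then arrive as a per-buffer attachment in captureOutput(_:didOutput:from:):
// if let data = CMGetAttachment(sampleBuffer,
//                               key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
//                               attachmentModeOut: nil) as? Data {
//     let intrinsics = data.withUnsafeBytes { $0.load(as: matrix_float3x3.self) }
// }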
The WWDC23 Platform State of the Union mentioned that using the volume buttons to trigger the camera shutter is coming later this year. This was mentioned at 0:30:15.
Would anyone know when this will be available?
We activate our camera extension from the host application and wait for the user to allow access to it in System Settings. Once our host application receives the notification that the camera extension is ready to be used, we want to communicate with the extension.
When we enumerate AVCaptureDevices or try to find the newly added device using CMIOObjectGetPropertyData for the property kCMIOHardwarePropertyDevices, our camera extension is not shown. Once we stop and restart the host application, the camera extension is shown as expected; the issue only happens right after activating the extension.
It looks like capture devices are not refreshed for the host application after the camera extension is activated and approved. Is there a way to force the system to refresh cameras? Or any other ideas to make the extension immediately visible to the host application without relaunching it?
Hello!
I am having trouble calculating accurate distances in the real world using the camera's returned intrinsic matrix and pixel coordinates/depths captured from the iPhone's LiDAR. For example, in the image below, I set a mug 0.5m from the phone. The mug is 8.5cm wide. The intrinsic matrix returned from the phone's AVCameraCalibrationData class has focalx = 1464.9269, focaly = 1464.9269, cx = 960.94916, and cy = 686.3547. Selecting the two pixel locations denoted in the image below, I calculated each one's xyz coordinates using the formula:
x = d * (u - cx) / focalx
y = d * (v - cy) / focaly
z = d
Where I get depth from the appropriate pixel in the depth map - I've verified that both depths were 0.5m. I then calculate the distance between the two points to get the mug width. This gives me a calculated width of 0.0357, or 3.5 cm, instead of the 8.5cm I was expecting. What could be accounting for this discrepancy?
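One thing that might be worth checking: AVCameraCalibrationData also exposes intrinsicMatrixReferenceDimensions, the pixel grid the intrinsic matrix refers to, and pixel coordinates read from a lower-resolution depth map have to be scaled into that grid before applying the formula above. A sketch of the back-projection with that scaling made explicit (names and the scale factor here are illustrative):
import simd

// Back-project a pixel (u, v) with depth d (metres) into camera space,
// after scaling the pixel into the intrinsics' reference resolution.
func cameraPoint(u: Double, v: Double, depth d: Double,
                 fx: Double, fy: Double, cx: Double, cy: Double,
                 scale: Double) -> SIMD3<Double> {
    let su = u * scale   // scale = referenceWidth / depthMapWidth, for example
    let sv = v * scale
    return SIMD3<Double>(d * (su - cx) / fx, d * (sv - cy) / fy, d)
}

// The mug width is then the distance between the two back-projected edge pixels:
// let width = simd_distance(cameraPoint(...), cameraPoint(...))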
Thank you so much for your help!
I am building an iOS app that uses the phone's GPS EXIF data from both camera and image library.
My problem is that while I am able to get GPS data from images in the phone's library, I have not been able to get any GPS data when using the camera within my app.
I first built this app about a year ago, and at that time I was able to get GPS data from both the library AND the camera from within the app. I believe that at that point I was still building for iOS 12. I believe that the new security features that came with iOS 13 or 14 now disallow my app's access to the GPS data when using the camera.
This issue is new as of iOS 13 or 14; the code I had was working fine with earlier versions of iOS.
I am having no issues with getting GPS from the EXIF on the device library images.
Images taken with the NATIVE iOS CAMERA APP are saved to the library with full GPS data.
However, I am not able to get GPS data directly from the camera image EXIF when using the camera from within my app.
When saving an image taken by the camera from within my app, the image is saved to the library with NO GPS data.
I am able, at any time, to ask the device for current GPS coordinates.
As far as I can tell, device settings are all correct. Location services are available at all times.
My feeling is that iOS is stripping the GPS data from the EXIF image before handing the image data to my app.
I have searched the Apple developer forums, Apple documentation, Stack Exchange, and more for several weeks now, and I seem no closer to knowing whether the camera API even returns this data, or whether it will be necessary for me to talk to location services and add the GPS data myself (which is what I am working on now, as I have about given up on getting it from the camera).
Info.plist keys I am currently setting:
LSRequiresIPhoneOS
NSCameraUsageDescription
NSLocationAlwaysUsageDescription
NSLocationWhenInUseUsageDescription
NSMicrophoneUsageDescription
NSPhotoLibraryUsageDescription
NSPhotoLibraryAddUsageDescription
Am I missing some required plist key? I have been searching and searching for the name of a key that I might be missing but have found absolutely nothing other than people trying to hack some post-camera device location merging.
This has been very frustrating.
Any insight is appreciated.
Is it currently possible to get GPS data directly from the camera's EXIF output any more?
Do I need to ask the device for the current GPS values and insert the GPS data into the image EXIF on my own?
Is there any example code of getting GPS data from the camera?
Is there any example code of inserting GPS data into the Exif before saving the file to the device?
Sample Swift code that processes the camera image:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
    let pickedImage = info[UIImagePickerController.InfoKey.originalImage] as? UIImage
    // let pickedImage = info[UIImagePickerController.InfoKey.editedImage] as? UIImage
    // Using editedImage vs. originalImage has no effect on the availability of the GPS data.
    userImage.image = pickedImage
    picker.dismiss(animated: true, completion: nil)
}
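Regarding the questions above about inserting the GPS data yourself: one route, sketched here on the assumption that a CLLocation has already been obtained from CLLocationManager, is to set the location on the PHAssetChangeRequest when saving the image to the library.
import UIKit
import Photos
import CoreLocation

// Hedged sketch: save a UIImage to the photo library and stamp it with a
// location obtained separately (currentLocation is assumed to exist).
func save(_ image: UIImage, at currentLocation: CLLocation) {
    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetChangeRequest.creationRequestForAsset(from: image)
        request.location = currentLocation
        request.creationDate = Date()
    }) { success, error in
        if let error = error {
            print("Save failed: \(error)")
        }
    }
}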