Discuss using the camera on Apple devices.

Posts under Camera tag

167 Posts

Camera settings at calibration time
Hi everyone, I am wondering which settings the camera(s) were using at the time they were calibrated. One aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, which can be retrieved via intrinsicMatrixReferenceDimensions; this ensures the principal point is referenced to the resolution that was in use while calibration was ongoing. However, I recently saw that there are focusing modes that can physically displace the lens, for example:

autoFocusRangeRestriction: none, near, far
setFocusModeLocked: locks the lens position at the specified value, and sets the focus mode to a locked state.

My concern lies in the impact these lens displacements have on the intrinsic matrix parameters: if the lens is displaced, those parameters no longer describe the camera, since the lens position has changed with respect to the lensPosition [0-1] set when they were calibrated. If my understanding is correct, autoFocusRangeRestriction is just a range within which the system is allowed to auto-focus, not a specific lens position. Conversely, setFocusModeLocked does indeed fix the lensPosition to a certain value [0-1]. In simple words: at which focus lensPosition were the cameras set when they were calibrated for intrinsics?
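For anyone landing here with the same question, a minimal sketch of pinning the lens yourself with AVFoundation (the position value is an arbitrary illustration, not Apple's calibration setting — whether the factory intrinsics correspond to one specific lensPosition is exactly the open question above):

    import AVFoundation

    // Sketch: lock the lens at a fixed position so the focus element stops
    // moving and the geometry stays consistent with the locked position.
    // 0.75 is an arbitrary example value, not the factory calibration position.
    func lockLens(of device: AVCaptureDevice, at position: Float = 0.75) {
        guard device.isLockingFocusWithCustomLensPositionSupported else { return }
        do {
            try device.lockForConfiguration()
            device.setFocusModeLocked(lensPosition: position) { syncTime in
                // syncTime marks when the lens finished moving, on the device clock.
                print("Lens locked at \(device.lensPosition) (t = \(syncTime.seconds))")
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock configuration: \(error)")
        }
    }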
0 replies · 0 boosts · 327 views · Jan ’25
Camera settings at intrinsic calibration time
Hi everyone, I am wondering which settings the camera(s) were using at the time they were calibrated. One aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, which can be retrieved via intrinsicMatrixReferenceDimensions; this ensures the principal point is referenced to the resolution that was in use while calibration was ongoing. However, I recently saw that there are focusing modes that can physically displace the lens, for example:

autoFocusRangeRestriction: none, near, far
setFocusModeLocked: locks the lens position at the specified value, and sets the focus mode to a locked state.

My concern lies in the impact these lens displacements can have on the intrinsic matrix parameters, as those parameters would no longer describe the camera once the lens position has changed. In simple words: what focus 'mode'/'range' were the cameras set to when they were calibrated for intrinsics?
0 replies · 0 boosts · 392 views · Jan ’25
Can't swap camera with AVCaptureMultiCamSession
I have a PIP camera that streams from the front and back cameras, based on AVCaptureMultiCamSession. It works fine, but when I go to swap the cameras it crashes. This is code that works with a single camera, so I'm not sure what is wrong. The object also appears valid in the debugger. This is the snippet where the camera is swapped:

    private func updateSessionConfiguration() {
        guard isCaptureSessionConfigured else { return }
        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        // Remove all current inputs
        for input in captureSession.inputs {
            if let deviceInput = input as? AVCaptureDeviceInput {
                captureSession.removeInput(deviceInput)
                app_log("removing input for \(input)")
            }
        }

        // Add the primary device input
        if let deviceInput = deviceInputFor(device: captureDevice) {
            app_log("device input \(deviceInput)")
            if !captureSession.inputs.contains(deviceInput), captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
        }

        if let secondaryDeviceInput = deviceInputFor(device: secondaryCaptureDevice) {
            app_log("Secondary device input \(secondaryDeviceInput)")
            if !captureSession.inputs.contains(secondaryDeviceInput), captureSession.canAddInput(secondaryDeviceInput) {
                captureSession.addInput(secondaryDeviceInput)
            }
        }

        updateVideoOutputConnection()
    }

It crashes at:

    captureSession.addInput(deviceInput)

with:

    Thread 10: EXC_BAD_ACCESS (code=1, address=0xcaeb36b964f0)

which is strange, because canAddInput is checked prior to this call. Totally stumped here. Please help. Not sure if this is an AVCaptureMultiCamSession issue or something else.
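One thing worth ruling out before the swap — a hedged sketch, not a confirmed fix: AVCaptureMultiCamSession only accepts inputs whose devices are running multi-cam-compatible formats, so selecting a flagged format explicitly before adding the input can make canAddInput/addInput behave consistently:

    import AVFoundation

    // Sketch: pick a multi-cam-compatible active format before adding the input.
    func selectMultiCamFormat(for device: AVCaptureDevice) throws {
        guard AVCaptureMultiCamSession.isMultiCamSupported else { return }
        guard let format = device.formats.first(where: { $0.isMultiCamSupported }) else { return }
        try device.lockForConfiguration()
        device.activeFormat = format
        device.unlockForConfiguration()
    }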
4 replies · 0 boosts · 329 views · Jan ’25
How to confidently select one type of camera on iOS
We have a web application that uses high-resolution images to validate the authenticity of products. For this purpose we want to use the best camera to capture these high-resolution images; on iPhone Pro devices this is the ultra-wide-angle camera. The issue is how to confidently select that camera from the list returned by navigator.mediaDevices.enumerateDevices. We can't use the device ID, as it changes every time (and for every user). We could use the camera name, but the string is translated into the device language, which is very problematic. We could also just select a specific item in the list, but we are not sure the order is preserved, and that makes it hard to deal with other iPhone models that don't have an ultra-wide-angle camera. Selecting a specific camera looks like an essential feature, and not only for us. What is the best option? We are looking for something that is future-proof and easily scalable.
0 replies · 0 boosts · 342 views · Dec ’24
Camera feed access issue from web content in Autofill extension
I am working on a task to add a WKWebView to an Autofill extension. This web view presents web content that can access the camera feed. As an example, here is a simple HTML page: I have added the camera permission entitlements to both the main app's and the Autofill extension's Info.plist. The camera feed is accessed properly from the main app. However, doing the same in the Autofill extension does not show the camera stream in the web content. I receive the camera permission alert and allow permission, but it just gets stuck on a black screen, and in the console I see these logs:

    0x116000a00 - GPUProcessProxy::didClose:
    0x116000a00 - GPUProcessProxy::gpuProcessExited: reason=Crash
    0x1150180c0 - [PID=1523] WebProcessProxy::gpuProcessExited: reason=Crash
    Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
    0x115020360 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1524, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}
    0x1160012a0 - GPUProcessProxy::didClose:
    0x1160012a0 - GPUProcessProxy::gpuProcessExited: reason=Crash
    0x1150180c0 - [PID=1523] WebProcessProxy::gpuProcessExited: reason=Crash
    Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
    0x115020300 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1525, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}

It looks like the WKWebView crashes. Here is my configuration for the WKWebView:

    let webConfiguration = WKWebViewConfiguration()
    webConfiguration.allowsInlineMediaPlayback = true
    webConfiguration.mediaTypesRequiringUserActionForPlayback = []

    let webView = WKWebView(frame: .zero, configuration: webConfiguration)
    webView.navigationDelegate = self
    webView.uiDelegate = self
    webView.scrollView.isScrollEnabled = false
    webView.contentMode = .scaleAspectFit
    view.addSubview(webView)

Does anyone know what the problem might be? Is it even possible to access the camera from web content in an Autofill extension?
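For comparison, a sketch of the WKUIDelegate permission hook (iOS 15+) that web camera access goes through — `AutofillWebViewController` is a hypothetical name, and this does not explain the GPU-process crash, but it can confirm whether the getUserMedia path is even reached inside the extension:

    import WebKit

    extension AutofillWebViewController: WKUIDelegate {
        // Called when page JavaScript invokes getUserMedia; granting here
        // suppresses the per-origin prompt and hands the decision to the host.
        func webView(_ webView: WKWebView,
                     requestMediaCapturePermissionFor origin: WKSecurityOrigin,
                     initiatedByFrame frame: WKFrameInfo,
                     type: WKMediaCaptureType,
                     decisionHandler: @escaping (WKPermissionDecision) -> Void) {
            decisionHandler(.grant)
        }
    }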
0 replies · 1 boost · 390 views · Dec ’24
Best Way to Trigger and Download Images from Canon Cameras via USB?
I explored several methods to trigger a 35mm camera connected via USB:

1. ICCameraDevice: unable to make it work with Canon cameras (details).
2. Canon's EDSDK: works, but is complex to implement.
3. gPhoto2 (command line): simple to use, but requires gPhoto2 to be installed.

In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
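For the ICCameraDevice route, a hedged sketch of device discovery with ImageCaptureCore — whether the subsequent requestTakePicture() works depends on the specific Canon model's PTP support, which may be the failure described above:

    import ImageCaptureCore

    // Sketch: browse for connected cameras. Opening a session and calling
    // camera.requestTakePicture() additionally requires a full
    // ICCameraDeviceDelegate implementation, omitted here for brevity.
    final class CameraFinder: NSObject, ICDeviceBrowserDelegate {
        private let browser = ICDeviceBrowser()

        func start() {
            browser.delegate = self
            // Depending on SDK, the location bits (ICDeviceLocationTypeMask.local)
            // may also need OR-ing into this mask for USB discovery.
            browser.browsedDeviceTypeMask = .camera
            browser.start()
        }

        func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
            guard let camera = device as? ICCameraDevice else { return }
            print("Found camera: \(camera.name ?? "unnamed")")
        }

        func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
            print("Removed: \(device.name ?? "unnamed")")
        }
    }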
0 replies · 0 boosts · 321 views · Dec ’24
Custom Image Filters
I’m building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app is already able to capture photos, but I also want to implement an important feature: applying my custom image filter to the image, both for the live preview in the camera and when the image is saved to the photo library (like the default Apple camera app with Photographic Styles). My image filter has to be fairly advanced, because I’m a photographer and I’m trying to achieve the same colours I get with my custom preset in Lightroom. I want to control image parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), colour grading, and more. Currently I’m struggling with the implementation. I tried to create a custom image filter using Metal (it works for saturation), but I’m not sure it is the best approach. I need help and recommendations on how developers implement this complex feature in their apps (what technologies I should use, etc.).
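As a starting point, a hedged sketch of the Core Image route: the built-in filters cover basic controls and tone curves, while per-hue HSL and colour grading usually end up as a custom Metal CIColorKernel or a 3D LUT via CIColorCube. All values below are arbitrary illustrations:

    import CoreImage
    import CoreImage.CIFilterBuiltins

    // Sketch: chain colorControls and a five-point tone curve.
    func applyLook(to input: CIImage) -> CIImage {
        let controls = CIFilter.colorControls()
        controls.inputImage = input
        controls.saturation = 1.1
        controls.contrast = 1.05
        controls.brightness = 0.0

        let curve = CIFilter.toneCurve()
        curve.inputImage = controls.outputImage
        curve.point0 = CGPoint(x: 0.00, y: 0.02)  // lifted blacks
        curve.point1 = CGPoint(x: 0.25, y: 0.22)
        curve.point2 = CGPoint(x: 0.50, y: 0.50)
        curve.point3 = CGPoint(x: 0.75, y: 0.78)
        curve.point4 = CGPoint(x: 1.00, y: 0.98)  // softened highlights

        return curve.outputImage ?? input
    }

The same CIImage pipeline can run on preview frames (via AVCaptureVideoDataOutput) and on the final AVCapturePhoto, which keeps the live preview and the saved file consistent.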
2 replies · 0 boosts · 629 views · Dec ’24
VisionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4 — great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics

I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is the point from which the scene is rendered onto the user's display? What is the coordinate system in which this offset is defined — which axis is left, which is up, which is forward? The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras to find out whether it is 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume the z axis points to the right (from the user's perspective). Is that so? For x and y, I assume the physical camera is approximately 2 cm in front of the user and 2 cm toward the bottom; which of x and y is horizontal, and which is vertical? How is the camera image indexed — is it row-major, with the origin at the top left? I am looking forward to learning all the details of these extrinsics in order to make use of them.
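While waiting for an authoritative answer on the axis conventions, a sketch of pulling the translation and rotation out of the matrix, assuming simd's column-major layout with the translation in column 3 (the convention question itself remains open):

    import simd

    func decompose(_ extrinsics: simd_float4x4) -> (translation: SIMD3<Float>, rotation: simd_float3x3) {
        // Translation lives in the last column; values are in meters,
        // e.g. roughly (0.02, -0.02, -0.05) per the measurements above.
        let t = SIMD3<Float>(extrinsics.columns.3.x, extrinsics.columns.3.y, extrinsics.columns.3.z)
        // The upper-left 3x3 block is the rotation.
        let r = simd_float3x3(
            SIMD3<Float>(extrinsics.columns.0.x, extrinsics.columns.0.y, extrinsics.columns.0.z),
            SIMD3<Float>(extrinsics.columns.1.x, extrinsics.columns.1.y, extrinsics.columns.1.z),
            SIMD3<Float>(extrinsics.columns.2.x, extrinsics.columns.2.y, extrinsics.columns.2.z)
        )
        return (t, r)
    }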
4 replies · 0 boosts · 652 views · Jan ’25
Camera app not working after update
After updating to iOS 18.2 beta 4, the Camera app opens, but there is a black screen with the settings and options showing and no response from them. I rebooted the iPhone, checked the camera settings, and reset the iPhone — still no camera. iPhone 14 Plus, iOS 18.2 beta 4, Canada.
1 reply · 0 boosts · 340 views · Dec ’24
Does the videoDeviceNotAvailableWithMultipleForegroundApps Interruption Occur on iPhones?
Hello, I am developing a service using capture sessions, and I have a question about something curious I've noticed. Occasionally I'm informed that the capture session stops working. Upon investigation, I found records of the videoDeviceNotAvailableWithMultipleForegroundApps interruption on the devices. From what I've read in the documentation, it seems to occur due to multitasking capabilities, but I'm wondering whether there are specific scenarios where this happens on iPhone. Here is the relevant documentation link: https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps I suspect it might have something to do with Picture-in-Picture (PiP) mode, but when I developed and tested a direct video-streaming PiP, the issue did not occur. Does anyone have insights on this, or related experiences they could share?
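A hedged sketch for narrowing this down in the field — log every interruption with its reason, so occurrences can be correlated with what the user was doing (`session` is whatever AVCaptureSession the service runs):

    import AVFoundation

    // Sketch: record capture-session interruption reasons, including
    // videoDeviceNotAvailableWithMultipleForegroundApps.
    func observeInterruptions(of session: AVCaptureSession) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: AVCaptureSession.wasInterruptedNotification,
            object: session,
            queue: .main
        ) { note in
            if let raw = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: raw) {
                print("Capture interrupted, reason: \(reason)")
            }
        }
    }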
0 replies · 0 boosts · 313 views · Dec ’24
Why does a triple-camera device take photos faster than a single-camera device?
I found this phenomenon, and it can be reproduced reliably. If I take a photo with a triple-camera device while the scene is moving — or while I move the phone, say horizontally — and press the shutter when aiming at an object (call this time T), the picture in the viewfinder is from T, and the resulting photo is from about T+100 ms. If I do the same with a single-camera device, moving the phone at the same speed and pressing the shutter when aiming at the same object, the resulting photo is from about T+400 ms.

To describe the problem another way: suppose a row of cards is laid out horizontally on a table, numbered from left to right 0, 1, 2, 3, 4, 5, 6... Aim the camera at the number 0, then move right at a uniform speed, so the numbers passing through the viewfinder keep increasing. Press the shutter when aiming at the number 5. With a triple camera, the photo will probably show 6, while with a single camera it will show about 9. This means the triple camera captures photos faster — but why is this the case? Any explanation?
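A rough way to put numbers on this — a sketch, not a rigorous measurement, since the delta mixes clock domains and should only be compared between camera configurations:

    import AVFoundation
    import QuartzCore

    // Sketch: measure wall-clock time from shutter request to photo delivery.
    final class LagMeter: NSObject, AVCapturePhotoCaptureDelegate {
        private var requestTime: CFTimeInterval = 0

        func capture(using output: AVCapturePhotoOutput) {
            requestTime = CACurrentMediaTime()
            output.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            let deltaMs = (CACurrentMediaTime() - requestTime) * 1000
            print("Request → delivery: \(deltaMs) ms; media timestamp: \(photo.timestamp.seconds)")
        }
    }

One knob worth checking between the two configurations is AVCapturePhotoOutput's isZeroShutterLagSupported / isZeroShutterLagEnabled: with zero shutter lag, the delivered frame can be one captured near the moment of the shutter press rather than after it, which would produce exactly the asymmetry described (though whether that is the cause here is unconfirmed).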
2 replies · 0 boosts · 443 views · Dec ’24
How to add CIFilter to AVCaptureDeferredPhotoProxy
Hello, I'm trying to implement deferred photo processing in my photo-capture app. After I take a photo, I pass it through a CIFilter. Now, with deferred photo processing, where would I pass the resulting photo through the CIFilter, given that there is no way for me to know when the system has finished processing the photo? If I have to do it in my app's foreground every time, how do I prevent a scenario where the user takes a photo, heads straight to the Photos app, and sees the image without the filter?
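For reference, a sketch of where the proxy arrives when deferred processing is enabled (iOS 17+). One hedged approach is to apply the filter at this point and save your own output, since persisting the proxy untouched means Photos shows the unfiltered, system-finalized image first:

    import AVFoundation

    final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                         error: Error?) {
            guard error == nil, let proxy = deferredPhotoProxy,
                  let data = proxy.fileDataRepresentation() else { return }
            // A hypothetical applyMyFilter(data) would stand in for the CIFilter
            // chain here. Filtering the proxy data trades the system's deferred
            // enhancement for a guarantee that Photos never shows an unfiltered frame.
            print("Proxy received: \(data.count) bytes")
        }
    }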
2 replies · 0 boosts · 493 views · Dec ’24
How to save 4K60 ProRes Log video internally on iPhone?
Hello Apple engineers,

Specific issue: I am working on a video-recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format to the iPhone's internal storage. Here's what I have tried so far: I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and the ProRes codec. The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log. However, when attempting to record 4K60 ProRes video, I get the error:

    "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."

This error seems to imply that 4K60 ProRes recording is restricted to external storage devices, but I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which natively supports ProRes encoding. Here are my questions:

Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example, the iPhone 15 Pro Max)? There are third-party apps (e.g. Blackmagic 👍🏻) that can save 4K60 ProRes Log video to iPhone internal storage.
If internal saving is supported, what additional configuration is needed for the AVCaptureSession, or what other technique bypasses this limitation?

If anyone has successfully saved 4K60 ProRes Log video to iPhone internal storage, your guidance would be highly appreciated. Thank you for your help!
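A hedged diagnostic sketch — list which codecs the movie output actually offers in the current configuration, and opt into ProRes only when present; on devices where 4K60 ProRes is external-storage-only, this is where the restriction surfaces:

    import AVFoundation

    func configureProResIfAvailable(output: AVCaptureMovieFileOutput) {
        guard let connection = output.connection(with: .video) else { return }
        print("Available codecs: \(output.availableVideoCodecTypes)")
        if output.availableVideoCodecTypes.contains(.proRes422) {
            // Apple Log is a separate choice made via the device's
            // activeColorSpace; this only selects the ProRes 422 codec.
            output.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422],
                                     for: connection)
        }
    }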
0 replies · 0 boosts · 460 views · Nov ’24
Issues setting up the Enterprise API entitlements (Main Camera Access)
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.

What happens: when executing code from this WWDC tutorial, I get this error when trying to use a CameraFrameProvider:

    ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}

What I've tried: I followed the instructions given by mail, by adding the .license file at the root of my project and adding the .entitlements file via capabilities in the project (Main Camera Access & Passthrough in screen capture are there). I've added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned). I've also followed the advice in those posts. When checking the account settings, I do see the capabilities under "additional capabilities". On first launch I'm also prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist is valid.

What did I miss to get the entitlements working? Here's the code:

    import ARKit
    import SwiftUI
    import Vision
    import RealityKit

    class MainCameraAccess {
        var arKitSession = ARKitSession()
        var cameraFrameProvider = CameraFrameProvider()
        var pixelBuffer: CVPixelBuffer?

        func startCameraSession() async {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

            // Request authorization
            await arKitSession.requestAuthorization(for: [.cameraAccess])

            // Start the session
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                print("Failed to start ARKit session: \(error)")
                return
            }

            // Get camera frame updates
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }

            // Process frames
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        }

        func saveLatestImage() {
            guard let pixelBuffer = self.pixelBuffer else {
                print("No image available to save.")
                return
            }

            // Convert CVPixelBuffer to UIImage
            let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
            let context = CIContext()
            guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
                print("Failed to create CGImage.")
                return
            }
            let uiImage = UIImage(cgImage: cgImage)

            // Save UIImage to Photos album
            UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
            print("Image saved to photo library.")
        }
    }

Thanks in advance for the help,
Jeremy
3 replies · 0 boosts · 522 views · Dec ’24
How can I use the iPhone TrueDepth front camera to detect whether a captured depth map of a face is a true 3D face or a spoofed 2D image?
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions and still can't find a proper working solution. I trained a Core ML model using 22,000 depth images of human faces and 22,000 non-face images (objects, food, etc.). The accuracy of the model is very low. When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for flat 2D images. Even though the image is flat, why does it produce a depth map for the person shown in the flat 2D picture, such that the model thinks it is a real face instead of a spoofed one? I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map: https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth

My next approach was to use the NCNN framework to implement anti-spoofing, using the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using an Objective-C++ wrapper for the C++ code, as the sample was only available as an Android app. When testing by feeding an 80x80 UIImage in an OpenCV matrix format, its accuracy is lower than the Android version's. How can I solve this problem?
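On the "why do flat photos still have depth" point: the TrueDepth pipeline always produces a map of whatever is in front of it — a phone screen held up is simply a flat surface at some distance — so liveness checks usually test the geometry of the face region (genuine relief along the nose/cheek profile) rather than the mere existence of a map. A hedged capture sketch, making sure depth delivery is on and converting disparity to absolute depth before analysis:

    import AVFoundation

    func depthEnabledSettings(for photoOutput: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        let settings = AVCapturePhotoSettings()
        settings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
        return settings
    }

    func depthMap(from photo: AVCapturePhoto) -> CVPixelBuffer? {
        guard let depthData = photo.depthData else { return nil }
        // Convert any disparity format to depth in meters; a screen re-display
        // of a face should then show near-planar values across the face region.
        return depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap
    }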
0 replies · 0 boosts · 467 views · Nov ’24
LockedCameraCaptureManager sessionContentUpdates is sometimes not called
Within my app, I have:

    for try await update in LockedCameraCaptureManager.shared.sessionContentUpdates {

It seems that the first time my app opens from LockedCameraCapture (after enabling camera permissions etc...), this update is never delivered, and the user will not see their capture (.added or .initial). If I then take another picture/video through my LockedCameraCapture control, it takes the video and opens the app as before, but this time sessionContentUpdates is called twice — once for the first video and once for the second! After that it doesn't seem to occur again, and everything works perfectly.

My device is an iPhone 16 Pro Max running the iOS 18.2 developer beta. Has anyone experienced this?
0 replies · 0 boosts · 295 views · Nov ’24
IOSurface with System Extensions
Hi all, I'm working on a camera system extension where the main app is supposed to transfer a video stream to the extension using IOSurface memory sharing. I have built a sample app that contains all the logic but no camera extension, essentially using IOSurface to render a video in one SwiftUI view and show the result in another — just for testing purposes — and everything works fine so far. Now, when moving the receiver code into the camera extension, I'm having problems accessing the IOSurface by ID. I share the IOSurface ID via UserDefaults, and I know from the logs that the ID is transferred correctly. Here is the code that uses IOSurfaceLookup to get the IOSurface; it fails with the message below, and the error logs the surface ID, which matches the one printed in the main app:

    private var surfaceId: Int = -1 {
        didSet {
            logger.info("surfaceId has changed")
            if surfaceId == -1 {
                stopReceivingFrames()
                ioSurface = nil
            } else {
                guard let surface = IOSurfaceLookup(IOSurfaceID(surfaceId)) else {
                    logger.error("failed to lookup IOSurface with ID: \(self.surfaceId)")
                    return
                }
                self.ioSurface = surface
                logger.info("surface set, now starting receiving frames")
                startReceivingFrames()
            }
        }
    }

My gut feeling says this issue might be related to a missing entitlement or sandboxing. In general, I have a working camera extension; I'm just not able to render a video in the main app and send it over to the camera extension to overlay another webcam. Both the main app and the camera extension are in the same Xcode workspace and share the same App Group.

In short, my actual questions are:

Is any entitlement required for using IOSurface between an app and a camera system extension?
Is using IOSurface actually possible in system extensions?
Is there any specific setting/requirement that I need to handle to make this work?
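On the lookup failure, a hedged alternative worth trying: IOSurfaceLookup resolves surfaces by global ID, which sandboxed processes generally cannot rely on. Passing the surface as a mach port over an IPC channel you already have (e.g. an NSXPCConnection) avoids the global namespace entirely. A minimal sketch, with placeholder function names:

    import IOSurface

    // Sender (main app): wrap the surface in a mach port and send it over IPC.
    func machPort(for surface: IOSurfaceRef) -> mach_port_t {
        IOSurfaceCreateMachPort(surface)
    }

    // Receiver (camera extension): reconstruct the surface from the port.
    func surface(from port: mach_port_t) -> IOSurfaceRef? {
        IOSurfaceLookupFromMachPort(port)
    }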
0 replies · 0 boosts · 432 views · Nov ’24