Discuss using the camera on Apple devices.

Posts under Camera tag

174 Posts

Post | Replies | Boosts | Views | Activity

LockedCameraCapture with an ARKit-based App
Hello, I'm building a camera app around ARKit. I've created a Lock Screen Capture Extension and added a control to launch my camera app, but when I launch the extension I see just a black screen with no hint of any errors. Attaching the debugger to the running process also shows no logs. I'm wondering: is LockedCameraCapture supported with ARView and ARSession? ARKit was featured in a WWDC video with a camera-app use case, and the introduction of captureHighResolutionFrame(completion:) made me pick it as an interesting camera app backbone - but if Lock Screen capture is not possible with it, I will have to refactor my codebase.
1
0
104
1d
Slow Auto Focus on iPhone 16 Pro with ARKit camera
I have recently started testing ARKit on an iPhone 16 Pro and I have noticed that the autofocus reaction on this device is much slower than on other devices. For example, if I point the camera at a close object, autofocus takes 4-5 seconds to stabilize; the focal length is adjusted very slowly. In some cases (although this is rare) autofocus seems almost stuck and requires a bit of device movement to trigger. This is quite problematic when using ARKit features like image and object detection, as the detection algorithms struggle with out-of-focus images. The problem is limited to ARKit: autofocus is significantly more responsive when the standard AVFoundation camera API is used. This behavior is easy to reproduce with any of the ARKit samples, such as https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes Is anybody else experiencing this problem?
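One thing worth trying (a hedged sketch, not a confirmed fix): from iOS 16, ARKit exposes the underlying capture device for world-tracking configurations, so the focus behaviour can be nudged directly, for example by restricting the autofocus range for close-up work.

    import ARKit
    import AVFoundation

    // Sketch: after running a world-tracking configuration, grab the underlying
    // capture device and restrict its autofocus range. The .near restriction is an
    // assumption about what might speed up refocusing on close objects.
    func restrictFocusRange(for session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isAutoFocusEnabled = true   // default, shown here for clarity
        session.run(configuration)

        guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
        do {
            try device.lockForConfiguration()
            if device.isAutoFocusRangeRestrictionSupported {
                device.autoFocusRangeRestriction = .near
            }
            device.unlockForConfiguration()
        } catch {
            print("Could not lock capture device: \(error)")
        }
    }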
3
0
170
1d
Macro mode in AVCaptureDevice (custom camera)
Hi, I would like to use macro mode with a custom camera built on AVCaptureDevice in my project. This feature would help to automatically adjust and switch between lenses to get a clear close-up image. It looks like this feature is not available and there are no open APIs from Apple to achieve macro mode. Is there a way to get this functionality in a custom camera without losing image quality? Please let me know if this is possible. Thank you, Adil Thamarasseri
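There is indeed no public "macro mode" switch, but a hedged sketch of the usual approach: use a virtual device (dual-wide or triple camera) and allow automatic switching between its constituent cameras, so the system can fall back to the close-focusing ultra wide on supported hardware. Actual behaviour may also depend on the device and the user's camera settings, so treat this as an assumption to verify.

    import AVFoundation

    // Sketch: build an input from a virtual device and allow automatic constituent
    // switching, which is what enables the system's close-up lens switch.
    func makeMacroCapableInput() throws -> AVCaptureDeviceInput? {
        guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back) else {
            return nil
        }
        try device.lockForConfiguration()
        device.setPrimaryConstituentDeviceSwitchingBehavior(.auto, restrictedSwitchingBehaviorConditions: [])
        device.unlockForConfiguration()

        print("Closest focus distance: \(device.minimumFocusDistance) mm")
        return try AVCaptureDeviceInput(device: device)
    }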
1
1
147
1w
Combining ARKit Face Tracking with High-Resolution AVCapture and Perspective Rendering on Front Camera
Hello Apple Developer Community,
We're developing an application using the front camera that requires both real-time ARKit face tracking/guidance and the capture of high-resolution still images via AVCaptureSession. Our goal is to leverage ARKit's depth and face data to render the captured image from another perspective post-capture while maintaining high image quality.
Our approach:
Real-time ARKit guidance: use ARKit (e.g., ARFaceTrackingConfiguration) for continuous face tracking, depth, and scene understanding to guide the user in real time.
High-resolution capture transition: at the moment of capture, we plan to pause the ARKit session and switch to an AVCaptureSession to take a high-resolution image. We assume that for a front-facing image the subject's face is directly front-on and that the relative pose between the face and camera remains the same during the transition; the only variation we expect is a change in distance. Our intention is to minimize the delay between the last ARKit frame and the high-res capture to maintain temporal consistency.
Post-processing perspective rendering: using the last ARKit face data (depth, pose, and landmarks) along with the high-resolution 2D image, we aim to render the scene from another perspective. We want to correct the perspective of the 2D image using SceneKit or RealityKit, leveraging the collected ARKit scene information to achieve a natural, high-quality rendering from a different viewpoint that matches the quality of a normally captured high-resolution image, adjusting for the difference in distance.
Our questions:
1. Session transition best practices: what are the recommended best practices to seamlessly pause ARKit and switch to a high-resolution AVCapture session on the front camera? How can we minimize user movement or other issues during this brief transition, given our assumption that the face-camera pose remains largely consistent except for distance changes?
2. Data integration for perspective rendering: how can we effectively integrate stored ARKit face, depth, and pose data with the high-res image to perform accurate perspective correction or rendering from another viewpoint? Given that we assume the relative pose is constant except for distance, are there strategies or APIs that leverage this assumption to simplify the perspective transformation?
3. Perspective correction with SceneKit/RealityKit: what techniques or workflows using SceneKit or RealityKit are recommended for correcting the perspective of a captured 2D image based on ARKit scene data? How can we use these frameworks to render the high-resolution image from an alternative perspective while maintaining image quality and fidelity?
4. Pitfalls and guidelines: what common pitfalls should we be aware of when combining ARKit tracking data with high-res capture and post-processing for perspective rendering? Are there performance considerations, recommended thresholds for acceptable temporal consistency, or validation techniques to ensure the ARKit data remains applicable at the moment of high-res capture?
We appreciate any advice, sample code references, or documentation pointers that could assist us in implementing this workflow effectively. Thank you!
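For illustration, a rough, hedged sketch of the ARKit-to-AVCapture hand-off described in question 1 (type and method names are illustrative assumptions, not an established recipe):

    import ARKit
    import AVFoundation

    // Sketch only: keep the last ARFrame, pause ARKit, then run a preconfigured
    // AVCaptureSession on the front TrueDepth camera and take one photo.
    final class CaptureHandoff: NSObject, AVCapturePhotoCaptureDelegate {
        let arSession: ARSession
        let captureSession = AVCaptureSession()
        let photoOutput = AVCapturePhotoOutput()
        private(set) var lastARFrame: ARFrame?

        init(arSession: ARSession) {
            self.arSession = arSession
            super.init()
        }

        func captureHighResolutionStill() {
            lastARFrame = arSession.currentFrame   // pose/depth/landmarks for later rendering
            arSession.pause()

            captureSession.beginConfiguration()
            captureSession.sessionPreset = .photo
            if let device = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .front),
               let input = try? AVCaptureDeviceInput(device: device),
               captureSession.canAddInput(input) {
                captureSession.addInput(input)
            }
            if captureSession.canAddOutput(photoOutput) {
                captureSession.addOutput(photoOutput)
            }
            captureSession.commitConfiguration()

            // startRunning() blocks; call it off the main thread in real code.
            captureSession.startRunning()
            photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            // Combine `photo` with `lastARFrame` in the SceneKit/RealityKit rendering step.
        }
    }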
2
0
196
1d
PiP Camera in iOS App
I am developing an iOS app with video call functionality and implementing Picture in Picture (PiP) mode for video calls. The issue I am facing is that the camera stops capturing video when the app goes to the background, even though the PiP view is still visible. I have noticed that some apps, like Telegram, manage to keep the camera working in PiP mode while the app is in the background. How can I achieve this in my app?
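For what it's worth, a minimal sketch (assuming iOS 16+ and a video-call style app): AVCaptureSession exposes multitasking camera access, which is what lets apps such as video-call clients keep the camera running while backgrounded in PiP. Availability depends on the device and on having a video-call/VoIP use case or the multitasking camera access entitlement, so treat this as a pointer rather than a guaranteed fix.

    import AVFoundation

    // Enable continued camera capture while the app is backgrounded (where supported).
    // Configure this before calling startRunning().
    func configureForBackgroundCapture(_ session: AVCaptureSession) {
        if session.isMultitaskingCameraAccessSupported {
            session.isMultitaskingCameraAccessEnabled = true
        } else {
            // Not supported here: the video device input is interrupted when the app
            // leaves the foreground, which matches the behaviour described above.
        }
    }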
1
0
111
1w
Debug View Hierarchy not showing AVCaptureVideoPreviewLayer
I have an iOS application view that contains an AVCaptureSession, an AVCaptureVideoPreviewLayer (created with the AVCaptureSession), and a UIImageView (in the backend the app takes the output of the AVCaptureSession, runs it through a semantic segmentation model, and displays the output in the UIImageView). When I pause the app and run "Debug View Hierarchy", it shows the UIImageView and the relevant buttons and labels. However, it does not show the AVCaptureVideoPreviewLayer that I have set up in my application. Is there some special setup that needs to be done to be able to view camera-related layers? The following is part of the view code, a component that is used to render the AVCaptureVideoPreviewLayer (not sure if this is enough; please let me know if it's not):

    class CameraViewController: UIViewController {
        var session: AVCaptureSession?
        var frameRect: CGRect = CGRect()
        var rootLayer: CALayer! = nil
        private var previewLayer: AVCaptureVideoPreviewLayer! = nil

        init(session: AVCaptureSession) {
            self.session = session
            super.init(nibName: nil, bundle: nil)
        }

        required init?(coder: NSCoder) {
            super.init(coder: coder)
        }

        override func viewDidLoad() {
            super.viewDidLoad()
            setUp(session: session!)
        }

        private func setUp(session: AVCaptureSession) {
            previewLayer = AVCaptureVideoPreviewLayer(session: session)
            previewLayer.videoGravity = .resizeAspectFill
            previewLayer.frame = self.frameRect
            DispatchQueue.main.async { [weak self] in
                guard let self else { return }
                self.view.layer.addSublayer(self.previewLayer)
                // self.view.layer.addSublayer(self.detectionLayer)
            }
        }
    }

    struct HostedCameraViewController: UIViewControllerRepresentable {
        var session: AVCaptureSession!
        var frameRect: CGRect

        func makeUIViewController(context: Context) -> CameraViewController {
            let viewController = CameraViewController(session: session)
            viewController.frameRect = frameRect
            return viewController
        }

        func updateUIViewController(_ uiView: CameraViewController, context: Context) { }
    }
3
0
180
1w
Camera's settings at calibration time
Hi everyone, I am wondering under which settings the camera(s) were set at the time they were calibrated. For instance, one aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, which can be retrieved via intrinsicMatrixReferenceDimensions; this makes sure the principal point is referenced to the resolution in use when the calibration was performed. However, I recently noticed that there are focusing modes that can displace the lens's physical position. Settings like: autoFocusRangeRestriction (none, near, far) and setFocusModeLocked (locks the lens position at the specified value and sets the focus mode to a locked state). My concern lies in the impact these focus-related lens displacements have on the intrinsic matrix parameters: if the lens is displaced, these parameters no longer describe the camera, since the lens position has changed with respect to the lensPosition set when they were calibrated [0-1]. If my understanding is correct, autoFocusRangeRestriction is just a range within which the system is allowed to autofocus, not a specific lens position. Conversely, setFocusModeLocked does indeed fix the lensPosition to a certain value [0-1]. In simple words: what is the focus lensPosition the cameras were set to when they were calibrated for intrinsics?
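Not an answer to the calibration question itself, but for experiments, a minimal sketch (assuming a device that supports custom lens positions) of pinning the lens so captures are taken at a known, repeatable lensPosition:

    import AVFoundation

    func lockLens(of device: AVCaptureDevice, at position: Float) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }
        guard device.isLockingFocusWithCustomLensPositionSupported else { return }
        device.setFocusModeLocked(lensPosition: position) { _ in
            // Called once the lens has settled at `position` (0.0 ... 1.0).
        }
    }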
0
0
163
2w
Camera settings at intrinsic calibration time
Hi everyone, I am wondering under which settings the camera(s) were set at the time they were calibrated. For instance, one aspect that is easy to find is the reference resolution of the images used when calibrating the intrinsics, which can be retrieved via intrinsicMatrixReferenceDimensions; this makes sure the principal point is referenced to the resolution in use when the calibration was performed. However, I recently noticed that there are focusing modes that can displace the lens's physical position. Settings like: autoFocusRangeRestriction (none, near, far) and setFocusModeLocked (locks the lens position at the specified value and sets the focus mode to a locked state). My concern lies in the impact these focus-related lens displacements can have on the intrinsic matrix parameters, as these parameters may no longer describe the camera once the lens position has changed. In simple words: what focus mode/range were the cameras set to when they were calibrated for intrinsics?
0
0
203
2w
Can't swap camera with AVCaptureMultiCamSession
I have a PiP camera that streams from the front and back cameras based on AVCaptureMultiCamSession. It works fine, but when I go to swap the cameras it crashes. This is code that works with a single camera, so I'm not sure what is wrong. Also, the object appears valid in the debugger. This is the snippet where the camera is swapped:

    private func updateSessionConfiguration() {
        guard isCaptureSessionConfigured else { return }

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        // Remove all current inputs
        for input in captureSession.inputs {
            if let deviceInput = input as? AVCaptureDeviceInput {
                captureSession.removeInput(deviceInput)
                app_log("removing input for \(input)")
            }
        }

        // Add the primary device input
        if let deviceInput = deviceInputFor(device: captureDevice) {
            app_log("device input \(deviceInput)")
            if !captureSession.inputs.contains(deviceInput), captureSession.canAddInput(deviceInput) {
                captureSession.addInput(deviceInput)
            }
        }

        if let secondaryDeviceInput = deviceInputFor(device: secondaryCaptureDevice) {
            app_log("Secondary device input \(secondaryDeviceInput)")
            if !captureSession.inputs.contains(secondaryDeviceInput), captureSession.canAddInput(secondaryDeviceInput) {
                captureSession.addInput(secondaryDeviceInput)
            }
        }

        updateVideoOutputConnection()
    }

It crashes at captureSession.addInput(deviceInput) with: Thread 10: EXC_BAD_ACCESS (code=1, address=0xcaeb36b964f0), which is strange because canAddInput is checked prior to this call. Totally stumped here. Please help. Not sure if this is an AVCaptureMultiCamSession issue or something else.
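Not a confirmed diagnosis of the crash, but for comparison, a hedged sketch of the pattern Apple's multicam sample code follows: with AVCaptureMultiCamSession, inputs and outputs are added with no implicit connections and are wired together explicitly, which avoids the session forming connections while inputs are being swapped (helper names below are illustrative):

    import AVFoundation

    func attach(device: AVCaptureDevice,
                to session: AVCaptureMultiCamSession,
                output: AVCaptureVideoDataOutput) throws {
        // Assumes `output` was previously added with session.addOutputWithNoConnections(output).
        let input = try AVCaptureDeviceInput(device: device)
        guard session.canAddInput(input) else { return }
        session.addInputWithNoConnections(input)

        let ports = input.ports(for: .video,
                                sourceDeviceType: device.deviceType,
                                sourceDevicePosition: device.position)
        let connection = AVCaptureConnection(inputPorts: ports, output: output)
        if session.canAddConnection(connection) {
            session.addConnection(connection)
        }
    }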
4
0
187
2w
How to confidently select one type of camera on iOS
We have a web application that uses high-resolution images to validate the authenticity of products. For this purpose we want to use the best camera to take the high-resolution picture; on iPhone Pro devices this is the ultra-wide-angle camera. The issue we have is how to confidently select that camera from the list returned by navigator.mediaDevices.enumerateDevices. We can't use the device ID, as it changes every time (and for every user); we could use the camera name, but the string is translated into the device language, which is very problematic. We could also just select a specific item in the list, but we are not sure the order is preserved, and it makes it hard to deal with other iPhone models that don't have that ultra-wide-angle camera. Selecting a specific camera looks like an essential feature, not only for us. What is the best option? We are looking for something that is future proof and easily scalable.
0
0
204
Dec ’24
Camera feed access issue from web content in Autofill extension
I am working on a task to add a WKWebView to an Autofill extension. This web view presents web content that can access the camera feed. As an example here is a simple html: I have added Camera permission entitlements to both the main app and the Autofill extension Info.plist. The camera feed is accessed properly from the main app. However, doing the same in the Autofill extension does not show the camera stream in the web content. I receive the camera permissions alert and allow permissions. It just gets stuck on a black screen, and in the console I see these logs:

    0x116000a00 - GPUProcessProxy::didClose:
    0x116000a00 - GPUProcessProxy::gpuProcessExited: reason=Crash
    0x1150180c0 - [PID=1523] WebProcessProxy::gpuProcessExited: reason=Crash
    Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
    0x115020360 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1524, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}
    0x1160012a0 - GPUProcessProxy::didClose:
    0x1160012a0 - GPUProcessProxy::gpuProcessExited: reason=Crash
    0x1150180c0 - [PID=1523] WebProcessProxy::gpuProcessExited: reason=Crash
    Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
    0x115020300 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1525, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}

It looks like the WKWebView crashes. Here are my configurations for the WKWebView:

    let webConfiguration = WKWebViewConfiguration()
    webConfiguration.allowsInlineMediaPlayback = true
    webConfiguration.mediaTypesRequiringUserActionForPlayback = []

    let webView = WKWebView(frame: .zero, configuration: webConfiguration)
    webView.navigationDelegate = self
    webView.uiDelegate = self
    webView.scrollView.isScrollEnabled = false
    webView.contentMode = .scaleAspectFit
    view.addSubview(webView)

Does anyone know what might be the problem? Is it even possible to access the camera from web content in an Autofill extension?
0
0
228
Dec ’24
Best Way to Trigger and Download Images from Canon Cameras via USB?
I explored several methods to trigger a 35mm camera connected via USB:
1. ICCameraDevice: unable to make it work with Canon cameras (details).
2. Canon's EDSDK: works but is complex to implement.
3. gPhoto2 (command-line): simple to use but requires gPhoto2 to be installed.
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
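For reference, a hedged sketch of option 1 (ICCameraDevice) on macOS: discover a tethered camera with ICDeviceBrowser. Opening a session and calling requestTakePicture() additionally requires a full ICCameraDeviceDelegate implementation (omitted here), and whether the trigger request is honored is vendor-dependent, which matches the Canon issue described above.

    import ImageCaptureCore

    final class CameraFinder: NSObject, ICDeviceBrowserDelegate {
        private let browser = ICDeviceBrowser()

        func start() {
            browser.delegate = self
            browser.start()
        }

        func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
            if let camera = device as? ICCameraDevice {
                print("Found camera: \(camera.name ?? "unknown")")
                // Next steps (not shown): set a camera delegate, requestOpenSession(),
                // requestTakePicture(), then download new ICCameraFile items as they appear.
            }
        }

        func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) { }
    }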
0
0
243
Dec ’24
Custom Image Filters
I'm building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app is already able to capture photos, but I also want to implement an important feature: applying my custom image filter both to the live preview in the camera and to the image saved to the photo library (like Photographic Styles in the default Apple Camera app). My image filter must be fairly advanced because I'm a photographer and I'm trying to achieve the same colours I get with my custom preset in Lightroom. I want to control image parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), colour grading and more. Currently I'm struggling with the implementation. I tried to create a custom image filter using Metal (it works for saturation), but I'm not sure if it is the best approach. I need help and recommendations on how developers implement this kind of complex pipeline in their apps (what technologies should I use, etc.).
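For the live-preview side, a minimal sketch (assuming a Core Image pipeline; the parameter values are placeholders, not a real preset) of chaining built-in filters for the basic adjustments and a tone curve. Per-channel curves and HSL adjustments are typically baked into a 3D lookup table and applied with CIColorCube, or written as a custom Metal (CIKernel) filter; the same function can then be applied to preview frames and to the final captured photo.

    import CoreImage
    import CoreImage.CIFilterBuiltins

    func applyLook(to image: CIImage) -> CIImage {
        let exposure = CIFilter.exposureAdjust()
        exposure.inputImage = image
        exposure.ev = 0.3                       // illustrative values only

        let color = CIFilter.colorControls()
        color.inputImage = exposure.outputImage
        color.contrast = 1.05
        color.saturation = 0.95

        let curve = CIFilter.toneCurve()
        curve.inputImage = color.outputImage
        curve.point0 = CGPoint(x: 0.0, y: 0.02) // lifted blacks
        curve.point1 = CGPoint(x: 0.25, y: 0.24)
        curve.point2 = CGPoint(x: 0.5, y: 0.5)
        curve.point3 = CGPoint(x: 0.75, y: 0.78)
        curve.point4 = CGPoint(x: 1.0, y: 1.0)

        // Render the result with a shared CIContext (into Metal textures for preview,
        // or into image data before writing to the photo library).
        return curve.outputImage ?? image
    }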
2
0
354
Dec ’24
VisionOS ARKit CameraFrame Sample Parameters Extrinsics
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great! https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor. Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display? What is the coordinate system in which this offset is defined: which axis is left, which one is up, which one is forward? The last column of the extrinsics seems to define a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras in order to find out whether it's 2 cm or 5 cm from the "middle"; it looks more like 5, so I assume that the z axis points towards the right (from the user's perspective). Is that so? For x and y, I assume that the physical camera is approximately 2 cm in front of the user and 2 cm towards the bottom; which of x and y is horizontal, and which one vertical? How is the camera image indexed: is it row-major, and is the origin at the top left? I am looking forward to learning all the details about these extrinsics in order to make use of them.
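For reference, a small sketch of pulling the translation and rotation out of the column-major simd_float4x4; it does not answer which axis is up/forward relative to the device anchor, only how to read the values being discussed:

    import simd

    func describe(extrinsics: simd_float4x4) {
        let translation = SIMD3<Float>(extrinsics.columns.3.x,
                                       extrinsics.columns.3.y,
                                       extrinsics.columns.3.z)
        let c0 = SIMD3<Float>(extrinsics.columns.0.x, extrinsics.columns.0.y, extrinsics.columns.0.z)
        let c1 = SIMD3<Float>(extrinsics.columns.1.x, extrinsics.columns.1.y, extrinsics.columns.1.z)
        let c2 = SIMD3<Float>(extrinsics.columns.2.x, extrinsics.columns.2.y, extrinsics.columns.2.z)
        let rotation = simd_float3x3(c0, c1, c2)

        print("camera offset from device anchor (metres): \(translation)")
        print("camera orientation (3x3): \(rotation)")
    }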
4
0
462
2w
Camera app post update
After updating to iOS 18.2 beta 4, the Camera app opens but shows a black screen; the settings and options are visible but do not respond. I rebooted the iPhone, checked the camera settings, and reset the iPhone, but there is still no camera. iPhone 14 Plus, iOS 18.2 beta 4, Canada.
1
0
249
Dec ’24
Does the videoDeviceNotAvailableWithMultipleForegroundApps Interruption Occur on iPhones?
Hello, I am developing a service using capture sessions. I have a question concerning something curious I've noticed. Occasionally, I've been informed that the capture session stops working. Upon investigation, I found records of the videoDeviceNotAvailableWithMultipleForegroundApps interruption on the devices. From what I've looked up in the documents, it seems to occur due to multitasking capabilities, but I'm wondering if there are any specific scenarios where this happens on iPhone devices? Here is the relevant documentation link: https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps I suspect it might have something to do with Picture-in-Picture (PIP) mode, but when I developed and tested a direct video streaming PIP, the issue did not occur. Does anyone have insights on this matter or related experiences they could share?
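A minimal sketch for confirming which interruption reason is being hit in the field, assuming an observer can be attached to the session:

    import AVFoundation

    func observeInterruptions(of session: AVCaptureSession) -> NSObjectProtocol {
        NotificationCenter.default.addObserver(
            forName: AVCaptureSession.wasInterruptedNotification,
            object: session,
            queue: .main
        ) { notification in
            if let value = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
               let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
                // Log this to your analytics to see whether
                // .videoDeviceNotAvailableWithMultipleForegroundApps is the reason reported.
                print("Capture session interrupted, reason: \(reason)")
            }
        }
    }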
0
0
234
Dec ’24
Why does a triple camera take photos faster than a single-camera device?
I found this phenomenon, and it can be reproduced reliably. Suppose I use a triple-camera device to take a photo while the scene is moving, or while I move the phone, let's say horizontally. When I aim at an object and press the shutter at time T, the picture in the viewfinder is the frame at T, and the photo produced corresponds to roughly T+100ms. If I use a single-camera device, move the phone at the same speed, and press the shutter when aiming at the same object, the photo produced corresponds to roughly T+400ms. Let me describe the problem another way. Suppose a row of cards is placed horizontally on a table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6... Aim the camera at the number 0, then move to the right at a uniform speed; the numbers passing through the viewfinder keep increasing. When aiming at the number 5, press the shutter. With a triple camera, the photo obtained will probably show 6, while with a single camera the photo obtained will show about 9. This means the triple camera captures the photo sooner after the shutter press, but why is this the case? Any explanation?
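One possibly related knob (an assumption, not a confirmed explanation of the triple-camera difference): on iOS 17 and later, AVCapturePhotoOutput exposes zero shutter lag and responsive/fast capture prioritization, which shrink the gap between the shutter press and the frame that ends up in the photo. A hedged sketch:

    import AVFoundation

    // Enable the capture-latency features where the hardware supports them.
    // Responsive capture depends on zero shutter lag, and fast capture prioritization
    // depends on responsive capture, hence the ordering.
    func reduceShutterLag(on photoOutput: AVCapturePhotoOutput) {
        if photoOutput.isZeroShutterLagSupported {
            photoOutput.isZeroShutterLagEnabled = true
        }
        if photoOutput.isResponsiveCaptureSupported {
            photoOutput.isResponsiveCaptureEnabled = true
        }
        if photoOutput.isFastCapturePrioritizationSupported {
            photoOutput.isFastCapturePrioritizationEnabled = true
        }
    }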
2
0
349
Dec ’24