We have a web application that uses high-resolution images to validate the authenticity of products. For this purpose we want to use the best camera to capture those high-resolution images; on iPhone Pro devices this is the ultra-wide-angle camera. The issue we have is how to confidently select that camera from the list returned by navigator.mediaDevices.enumerateDevices. We can't use the device ID because it changes every time (and for every user); we could use the camera name, but the string is translated into the device language, which is very problematic. We could also just select a specific item in the list, but we are not sure the order is preserved, and that makes it hard to deal with other iPhone models that don't have an ultra-wide-angle camera.
Selecting a specific camera looks like an essential feature, and not only for us. What is the best option? We are looking for something that is future proof and easily scalable.
Camera
Discuss using the camera on Apple devices.
Posts under Camera tag: 181
I am working on a task to add a WKWebView to an Autofill extension. This web view presents web content that can access the camera feed.
As an example, here is a simple HTML page:
I have added the camera permission entries to both the main app's and the Autofill extension's Info.plist.
The camera feed is accessed properly from the main app. However, doing the same in the Autofill extension does not show the camera stream in the web content.
I receive the camera permission alert and allow access.
It just gets stuck on a black screen, and in the console I see these logs:
16000a00 - GPUProcessProxy::didClose:
0x116000a00 - GPUProcessProxy::gpuProcessExited: reason=Crash
0x1150180c0 - [PID=1 523] WebProcessProxy::gpuProcessExited: reason=Crash
Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
0x115020360 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1 524, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}
0x1160012a0 - GPUProcessProxy::didClose:
0x1160012a0 - GPUProcessProxy::gpuProcessExited: reason=Crash
0x1150180c0 - [PID=1 523] WebProcessProxy::gpuProcessExited: reason=Crash
Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}>
0x115020300 - ProcessAssertion::acquireSync Failed to acquire RBS assertion 'GPUProcess Background Assertion' for process with PID=1 525, error: Error Domain=RBSServiceErrorDomain Code=1 "target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit" UserInfo={NSLocalizedFailureReason=target is not running or doesn't have entitlement com.apple.runningboard.assertions.webkit}
It looks like the WKWebView's GPU process crashes.
Here are my configurations for the WKWebView:
let webConfiguration = WKWebViewConfiguration()
webConfiguration.allowsInlineMediaPlayback = true
webConfiguration.mediaTypesRequiringUserActionForPlayback = []
let webView = WKWebView(frame: .zero, configuration: webConfiguration)
webView.navigationDelegate = self
webView.uiDelegate = self
webView.scrollView.isScrollEnabled = false
webView.contentMode = .scaleAspectFit
view.addSubview(webView)
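For reference, the WKUIDelegate can also answer the page's media-capture request explicitly; here is a minimal sketch of that callback (iOS 15+), shown with a placeholder class rather than my exact delegate:

import WebKit

// Sketch: grant camera/microphone capture requested by the page.
// In production, validate `origin` before granting; .prompt shows the system alert instead.
final class WebCameraUIDelegate: NSObject, WKUIDelegate {
    func webView(_ webView: WKWebView,
                 requestMediaCapturePermissionFor origin: WKSecurityOrigin,
                 initiatedByFrame frame: WKFrameInfo,
                 type: WKMediaCaptureType,
                 decisionHandler: @escaping (WKPermissionDecision) -> Void) {
        decisionHandler(.grant)
    }
}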
Does anyone know what might be the problem?
Is it even possible to access the camera from web content in an Autofill extension?
I explored several methods to trigger a 35mm camera connected via USB:
1- ICCameraDevice: Unable to make it work with Canon cameras (details); a minimal discovery sketch is shown after this list.
2- Canon's EDSDK: Works but is complex to implement.
3- gPhoto2 (command-line): Simple to use but requires gPhoto2 to be installed.
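For reference on option 1, the ImageCaptureCore setup looks roughly like this (a minimal macOS sketch; the ICCameraDeviceDelegate conformance that opens the session, calls requestTakePicture() and downloads the resulting ICCameraFile items is omitted):

import ImageCaptureCore

// Sketch: discover locally connected cameras with ICDeviceBrowser.
final class CanonTrigger: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()   // begin browsing for connected cameras
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        guard let camera = device as? ICCameraDevice else { return }
        // A complete implementation sets an ICCameraDeviceDelegate here,
        // calls camera.requestOpenSession(), then camera.requestTakePicture(),
        // and downloads new ICCameraFile items as they are reported.
        print("Found camera: \(camera.name ?? "unknown")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Camera removed: \(device.name ?? "unknown")")
    }
}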
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
I'm building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app is already able to capture photos, but I also want to implement an important feature: applying my custom image filter to the live camera preview and to the image when it is saved to the photo library (like Photographic Styles in the default Apple Camera app).
My image filter has to be fairly advanced because I'm a photographer and I'm trying to achieve the same colours I get with my custom preset in Lightroom. I want to control parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), colour grading and more.
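To make that concrete, here is a minimal sketch of the kind of per-frame adjustment chain I have in mind, using Core Image built-ins for the basic controls (the parameter values are placeholders; the per-channel curves and HSL are the parts I expect would need a custom CIKernel or Metal shader):

import CoreImage
import CoreImage.CIFilterBuiltins
import CoreVideo

// Sketch: apply basic adjustments to one camera frame.
// The same chain can run on the live preview (per CVPixelBuffer)
// and on the final still before saving it.
func applyLook(to pixelBuffer: CVPixelBuffer) -> CIImage {
    var image = CIImage(cvPixelBuffer: pixelBuffer)

    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = image
    exposure.ev = 0.3                        // placeholder value
    image = exposure.outputImage ?? image

    let controls = CIFilter.colorControls()
    controls.inputImage = image
    controls.contrast = 1.05
    controls.saturation = 1.1
    image = controls.outputImage ?? image

    // A single luminance tone curve; separate RGB curves and HSL
    // would typically live in a custom kernel instead.
    let curve = CIFilter.toneCurve()
    curve.inputImage = image
    curve.point0 = CGPoint(x: 0.00, y: 0.02)
    curve.point1 = CGPoint(x: 0.25, y: 0.22)
    curve.point2 = CGPoint(x: 0.50, y: 0.50)
    curve.point3 = CGPoint(x: 0.75, y: 0.78)
    curve.point4 = CGPoint(x: 1.00, y: 0.98)
    return curve.outputImage ?? image
}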
Currently I'm struggling with the implementation. I tried to create a custom image filter using Metal (it works for saturation), but I'm not sure it is the best approach. I need help and recommendations on how developers implement something this complex in their apps (which technologies to use, and so on).
Hello Devs!
Does anyone have an idea whether it is feasible to override the native camera on Apple devices?
So if a user has an app called "xyz" installed, when the user opens the native Camera and a QR code is detected, we display a pop-up asking whether they want to continue with "xyz".
Thanks
The following documentation tells me that CameraFrame.Sample.Parameters.extrinsics is of type simd_float4x4, great!
https://developer.apple.com/documentation/arkit/cameraframe/sample/parameters/4443449-extrinsics
I have read in the answer to another post that these extrinsics represent the pose of the physical camera relative to the device anchor.
Did I understand correctly that the device anchor is where the scene is rendered from onto the user's display?
In which coordinate system is this offset defined? Which axis points left, which one up, and which one forward?
The last column of the extrinsics seems to encode a translation of approximately 2 cm along the x axis, -2 cm along the y axis and -5 cm along the z axis. I tried to measure the physical distance between the main left and right cameras to find out whether it is about 2 cm or 5 cm from the "middle"; it looks more like 5 cm, so I assume the z axis points to the right (from the user's perspective). Is that so? For x and y, I assume the physical camera sits roughly 2 cm in front of the user and 2 cm below; which of x and y is horizontal, and which is vertical?
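For reference, this is how I read the translation mentioned above (a small sketch, assuming the usual column-major simd layout where the last column holds the translation):

import simd

// Extract the translation (in meters) of the physical camera relative to the
// device anchor from the extrinsics matrix.
func cameraOffset(from extrinsics: simd_float4x4) -> SIMD3<Float> {
    let t = extrinsics.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)   // roughly (0.02, -0.02, -0.05) in my measurements
}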
How is the camera image indexed? Is it row-major, with the origin at the top left?
I am looking forward to learning all the details of these extrinsics so I can make use of them.
After updating to iOS 18.2 beta 4, the Camera app opens, but there is a black screen; the settings and options are shown but do not respond.
Rebooted the iPhone
Checked the camera settings
Reset the iPhone
Still no camera
iPhone 14 Plus
iOS 18.2 beta 4
Canada
Hello, I am developing a service that uses capture sessions, and I have a question about something curious I've noticed. Occasionally I'm informed that the capture session stops working. Upon investigation, I found records of the videoDeviceNotAvailableWithMultipleForegroundApps interruption on the affected devices.
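For anyone investigating the same thing, this is the kind of diagnostic that surfaces those records (a small sketch; session is assumed to be the running AVCaptureSession):

import AVFoundation

// Sketch: log which interruption reason fires for a capture session.
func observeInterruptions(of session: AVCaptureSession) -> NSObjectProtocol {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session,
        queue: .main
    ) { note in
        if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            // .videoDeviceNotAvailableWithMultipleForegroundApps is one possible case
            print("Capture session interrupted: \(reason)")
        }
    }
}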
From what I've looked up in the documentation, it seems to occur because of multitasking capabilities, but I'm wondering whether there are specific scenarios where this happens on iPhone. Here is the relevant documentation link: https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps
I suspect it might have something to do with Picture in Picture (PiP) mode, but when I developed and tested a direct video-streaming PiP, the issue did not occur. Does anyone have insights on this matter or related experiences they could share?
I have found a phenomenon that can be reproduced reliably.
If I take a photo with the triple camera while the scene is moving, or while I move the phone (let's assume it moves horizontally), and I press the shutter when aiming at an object at time T, the picture in the viewfinder at that moment is T0 and the photo produced corresponds to roughly T+100 ms.
If I take a photo with a single camera, moving the phone at the same speed and pressing the shutter when aiming at the same object, the photo produced corresponds to roughly T+400 ms.
Let me describe the problem I encountered another way.
Suppose a row of cards is placed horizontally on the table, numbered from left to right: 0, 1, 2, 3, 4, 5, 6...
Now aim the camera at the number 0 and move to the right at a uniform speed. The numbers passing through the viewfinder keep increasing. When aiming at the number 5, press the shutter.
With the triple camera, the photo obtained will probably show 6, while with a single camera the photo will show about 9.
This means the triple camera captures photos faster, but why is that the case? Any explanation?
Hello,
I'm trying to implement deferred photo processing in my photo capture app. After I take a photo, I pass it through a CIFilter. With deferred photo processing, where would I pass the resulting photo through the CIFilter, given that there is no way for me to know when the system has finished processing a photo?
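For context, the deferred path I'm referring to is the proxy-based delegate callback; here is a minimal sketch of how it is typically wired (an assumption, not my exact code; the CIFilter step is precisely what I don't know where to put):

import AVFoundation
import Photos

// Sketch: receive the deferred photo proxy and hand it to the Photos library,
// letting the system finish processing in the background.
final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let data = deferredPhotoProxy?.fileDataRepresentation() else { return }
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: data, options: nil)
        }, completionHandler: nil)
    }
}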
If I have to do it in the foreground of my app every time, how do I prevent a scenario where the user takes a photo, heads straight to the Photos app and sees the image without the filter?
Hello Apple Engineers,
Specific Issue:
I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format to the iPhone's internal storage. Here's what I have tried so far:
I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec.
The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log.
However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which natively supports ProRes encoding.
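For reference, the alternative I'm considering is an AVAssetWriter pipeline fed by AVCaptureVideoDataOutput instead of AVCaptureMovieFileOutput, roughly like the sketch below (an assumption about how third-party apps do it, not something I have confirmed; outputURL and the dimensions are placeholders):

import AVFoundation

// Sketch: configure an AVAssetWriter for ProRes 422 video.
func makeProResWriter(outputURL: URL) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.proRes422,
        AVVideoWidthKey: 3840,
        AVVideoHeightKey: 2160
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = true
    writer.add(input)
    // Sample buffers from captureOutput(_:didOutput:from:) are appended to `input`
    // after writer.startWriting() and writer.startSession(atSourceTime:).
    return (writer, input)
}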
Here are my questions:
Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example, the iPhone 15 Pro Max)?
There are some third-party apps (e.g. Blackmagic 👍🏻) that can save 4K60 ProRes Log video internally on iPhone. If internal saving is supported, what additional configuration of the AVCaptureSession, or what other technique, is needed to get past this limitation?
If anyone has successfully saved 4K60 ProRes Log video on iPhone internal storage, your guidance would be highly appreciated.
Thank you for your help!
Hello, I've recently received the entitlement to access the main camera stream for a project on Apple Vision Pro.
What happens :
When executing code from this WWDC tutorial, I'm getting this error when trying to use a CameraFrameProvider:
ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}
What I've tried :
I followed the instructions given by mail, by:
adding the .license file at the root of my project,
adding the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there).
I've added NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned).
I've also followed the advice from those posts.
When checking the account settings, I do see the capabilities under "Additional Capabilities".
On first launch, I'm also prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist file is valid?
What did I miss to get the entitlement working?
Here's the code :
import ARKit
import SwiftUI
import Vision
import RealityKit
class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}
Thanks in advance for the help,
Jeremy
I'm trying to implement anti-spoofing in an iOS app using the iPhone TrueDepth front camera. I have checked the following questions but still can't find a proper working solution.
I trained a Core ML model using 22,000 depth images of human faces and 22,000 non-face images (objects, food, etc.). The accuracy of the model is very low.
When testing with flat 2D images shown on a smartphone screen, I found that I get a depth map even for flat 2D images like this. Even though the image is flat, how does it produce a depth map for the person shown in the flat 2D picture, so that the model thinks it is a real face instead of a spoofed one?
I implemented depth capture by following this documentation, and I made sure that I get a depth map instead of a disparity map:
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_photos_with_depth
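For reference, the depth-versus-disparity conversion is typically done like this (a small sketch; photo is assumed to come from the AVCapturePhotoCaptureDelegate callback):

import AVFoundation
import CoreVideo

// Sketch: make sure the captured AVDepthData is true depth (meters), not disparity.
func depthMap(from photo: AVCapturePhoto) -> CVPixelBuffer? {
    guard var depthData = photo.depthData else { return nil }
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData.depthDataMap
}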
My next approach was to use the NCNN framework to implement anti-spoofing with the model from the Mini-vision Android anti-spoofing sample. I rewrote their library for iOS using the Objective-C++ wrapper for C++, since the sample was only available as an Android app. I tested it by feeding an 80x80 UIImage in an OpenCV matrix format, and its accuracy is lower than the Android one.
How can I solve this problem?
Within my app, I have:
for try await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
It seems that the first time my app opens from LockedCameraCapture (after enabling camera permissions etc.), this update is never called and the user will not see their capture (.added or .initial).
If I then take another picture/video through my LockedCameraCapture control, it takes the video and opens the app as before, but this time sessionContentUpdates is called twice: once for the first video and once for the second video!
After that it doesn't seem to occur again and everything works perfectly!
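For context, the loop above is consumed roughly like this (a simplified sketch; my real handling of the content is omitted):

import LockedCameraCapture

// Sketch: consume session content updates delivered to the main app.
func observeCaptureContent() async throws {
    for try await update in LockedCameraCaptureManager.shared.sessionContentUpdates {
        switch update {
        case .initial:
            // Content captured before this launch
            print("initial content delivered")
        case .added:
            // A capture taken from the locked-camera control
            print("new content added")
        default:
            break
        }
    }
}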
My device is: iPhone 16 Pro Max, iOS 18.2 developer beta
Has anyone experienced this?
Hi all, I'm working on a camera system extension where the main app is supposed to transfer a video stream to the camera extension using IOSurface memory sharing.
I have built a sample app that contains all the logic, but without a camera extension. So I'm essentially using IOSurface to render a video in one SwiftUI view and show the result in another SwiftUI view, just for testing purposes. Everything works fine so far.
Now, when moving the receiver code into the camera extension, I'm having problems accessing the IOSurface via its ID. I am sharing the IOSurface ID via UserDefaults, and I know from the logs that the ID is transferred correctly.
Here is the code that uses IOSurfaceLookup to get the IOSurface. It fails with the error logged below, and the error prints the surface ID, which is the correct one (I know this from the main app, where I get and print the same ID).
private var surfaceId: Int = -1 {
    didSet {
        logger.info("surfaceId has changed")
        if surfaceId == -1 {
            stopReceivingFrames()
            ioSurface = nil
        } else {
            guard let surface = IOSurfaceLookup(IOSurfaceID(surfaceId)) else {
                logger.error("failed to lookup IOSurface with ID: \(self.surfaceId)")
                return
            }
            self.ioSurface = surface
            logger.info("surface set, now starting receiving frames")
            startReceivingFrames()
        }
    }
}
My gut feeling says this issue might be related to a missing entitlement or to sandboxing. In general, I have a working camera extension; I'm just not able to render a video in the main app and send it over to the camera extension to overlay another webcam.
Both the main app and the camera extension are in the same Xcode workspace and share the same App Group.
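For completeness, the sender side in the main app creates the surface roughly like this (a sketch; the App Group suite name is a placeholder and filling the pixel data is omitted):

import IOSurface
import CoreVideo
import Foundation

// Sketch: create a BGRA IOSurface and publish its ID for the extension to look up.
let width = 1280
let height = 720
let properties: [String: Any] = [
    kIOSurfaceWidth as String: width,
    kIOSurfaceHeight as String: height,
    kIOSurfaceBytesPerElement as String: 4,
    kIOSurfaceBytesPerRow as String: width * 4,
    kIOSurfacePixelFormat as String: kCVPixelFormatType_32BGRA
]
if let surface = IOSurfaceCreate(properties as CFDictionary) {
    let surfaceID = IOSurfaceGetID(surface)
    // The extension reads this value and calls IOSurfaceLookup(IOSurfaceID(surfaceId)).
    UserDefaults(suiteName: "group.example.shared")?.set(Int(surfaceID), forKey: "surfaceId")
}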
In short, my actual questions are:
Is there any entitlement required for using IOSurface between app and camera system extension?
Is using IOSurface actually possible in system extensions?
Is there any specific setting/requirement that I need to handle to make this work?
I love to use my iPhone as a camera; it is what makes this phone the best, ever, period.
But ever since the iOS 18 update, slow-mo capture and playback has been a painful experience. OK, I get it, it is not an action camera, but I was able to get instant slow-mo playback on prior iOS versions. Since iOS 18, I have to wait at least 5-10 minutes to play a slow-mo video, and I noticed something else: even though I didn't modify any settings on the video, iOS 18 creates a slow-mo video file in the phone's storage that takes up a LOT of space, often making me clean out all of my files to find out what was wrong with the device.
I think, in all honesty, Apple must pull the iOS 18 update and work on it a little bit more.
Until then, I'll be using my DJI action camera for 240 fps; at least it works, instantly.
Hi. I'm encountering some random crashes of my camera app. After some investigation, I found that it is terminated by the system; a crash log is generated, but the information is not very useful. Here is the log found via the Console app.
Termination & Crash log
"Camera not actively used; AVCaptureEventInteraction not installed":
Received termination request from [osservice<com.apple.SpringBoard>:10931] on <RBSProcessPredicate <RBSProcessInstancePredicate| [app<com.juniperphoton.PhotonCam]>> with context <RBSTerminateContext| explanation:Capture Application Requirements Unmet: "Camera not actively used; AVCaptureEventInteraction not installed" reportType:CrashLog maxTerminationResistance:Interactive>
The crash log exported from the device contains some common information:
It's an EXC_CRASH (SIGKILL) type with no termination reason.
Exception Type: EXC_CRASH (SIGKILL)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: RUNNINGBOARD 0
It's triggered by the main thread, but the thread seems to be waiting for an event to process.
Triggered by Thread: 0
Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x1ee165788 mach_msg2_trap + 8
1 libsystem_kernel.dylib 0x1ee168e98 mach_msg2_internal + 80
2 libsystem_kernel.dylib 0x1ee168db0 mach_msg_overwrite + 424
3 libsystem_kernel.dylib 0x1ee168bfc mach_msg + 24
4 CoreFoundation 0x19cbe47f4 __CFRunLoopServiceMachPort + 160
5 CoreFoundation 0x19cbe3ea0 __CFRunLoopRun + 1212
6 CoreFoundation 0x19cc36274 CFRunLoopRunSpecific + 588
7 GraphicsServices 0x1e9d6d4c0 GSEventRunModal + 164
8 UIKitCore 0x19f783480 -[UIApplication _run] + 816
9 UIKitCore 0x19f3a9410 UIApplicationMain + 340
10 UIKitCore 0x19fae4bb0 0x19f394000 + 7670704
11 PhotonCam 0x1002e7e3c 0x1002cc000 + 114236
12 dyld 0x1c2d5ade8 start + 2724
Address size fault on the main thread
Thread 0 crashed with ARM Thread State (64-bit):
...
far: 0x0000000000000000 esr: 0x56000080 Address size fault
I once tried to reproduce this issue with the app attached to the debugger, and it says:
Terminated due to signal 9
When the crash or termination happened, the app state was as follows:
No AVCaptureSession is running.
The app is in the foreground and the user is interacting with features like viewing or editing photos. When the user exits the camera view, for example entering the gallery or settings, the camera session is stopped.
Both TestFlight and Debug builds have the same issue.
No third-party crash reporter is installed (I deliberately disable it in the Debug and TestFlight builds).
The app has adopted LockedCameraCapture, but it is currently running as the main app target (if not, my app would show an unlock button, so I can confirm this).
Also, regarding memory consumption, there is no JetsamEvent around the crash time.
Device and app information
Additionally, some information about the tech stack and the current state of my device and my app:
iPhone 16 Pro with iOS 18.2 beta 3.
The app is a camera-based app (it's PhotonCam and you can find it on the App Store); its main functionality is the camera feature, built with AVFoundation + Core Image + Metal.
It has adopted the Camera Control, AVCaptureEventInteraction and LockedCameraCapture features.
If I remember correctly, it also occurs in the iOS 18.1 release build, but I currently have no such device to confirm. In iOS 17.x the issue never happened.
Regarding this termination, the first thing that comes to mind is the "watchdog" mechanism that terminates processes running via the LockedCameraCapture feature. However, I can confirm that the app is currently running as the main target from the Home Screen.
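For reference, since the termination message mentions AVCaptureEventInteraction, this is roughly how that interaction gets installed (a sketch with placeholder names; I'm not claiming its absence at that moment is the actual cause):

import AVKit
import UIKit

// Sketch: install an AVCaptureEventInteraction so hardware capture-button
// events are delivered while the camera UI is active (iOS 17.2+).
final class CameraViewController: UIViewController {
    private var captureInteraction: AVCaptureEventInteraction?

    override func viewDidLoad() {
        super.viewDidLoad()
        let interaction = AVCaptureEventInteraction { [weak self] event in
            if event.phase == .ended {
                self?.capturePhoto()   // placeholder for the actual capture call
            }
        }
        view.addInteraction(interaction)
        captureInteraction = interaction
    }

    private func capturePhoto() { /* trigger AVCapturePhotoOutput here */ }
}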
Has anybody encountered this kind of issue and has found some solutions? Thanks in advance.
I recently updated both my phones, an iPhone 14 Pro Max and an iPhone 16 Pro Max, to iOS 18.1. After the update, when I open the Camera and go to SLO-MO, the screen starts flickering, and even after recording, when I play the video, it plays back with the same flickering. Is it an iOS update issue or something else?
I would like to take YCbCr CVPixelBuffers from AVCaptureVideoDataOutput, apply some processing in RGB space, render to an MTKView, and pass to AVAssetWriter for recording.
Right now, I'm doing this all manually – deswing the incoming data if necessary, choose the right matrix to convert to RGB, apply processing, etc. I also have to convert back to YCbCr before feeding the frames to AVAssetWriter because encoding performs much better if I do. Is there any efficient, built-in way to achieve the same?
I can't use AVCaptureVideoPreviewLayer, since I need to do some further processing before display. I can't use AVCaptureVideoDataOutput's videoSettings to get automatic BGRA conversion because that would lose bit depth for 10 bit video formats (and isn't available on all formats anyway).
I see these Accelerate functions, but they seemingly don't use the GPU, nor do they support all the formats and bit depths I'd need.
I found reference to some undocumented MTLPixelFormats that seem to do exactly what I want, but I don't want to rely on something like this unless it's explicitly endorsed. This would also incur an RGB/YCbCr conversion on every texture read and write, right?
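For context, the manual path is roughly the sketch below: the two planes of the YCbCr buffer are wrapped as Metal textures and the YCbCr-to-RGB matrix is then applied in a custom shader (the plane formats shown assume 8-bit 420f input):

import CoreVideo
import Metal
import Foundation

// Sketch: wrap the luma and chroma planes of a YCbCr CVPixelBuffer as Metal textures.
func makeTextures(from pixelBuffer: CVPixelBuffer,
                  cache: CVMetalTextureCache) -> (y: MTLTexture, cbcr: MTLTexture)? {
    func texture(plane: Int, format: MTLPixelFormat) -> MTLTexture? {
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil, format, width, height, plane, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture else { return nil }
        return CVMetalTextureGetTexture(cvTexture)
    }
    guard let y = texture(plane: 0, format: .r8Unorm),      // .r16Unorm for 10-bit formats
          let cbcr = texture(plane: 1, format: .rg8Unorm) else { return nil }
    return (y, cbcr)
}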
Is there anything I'm missing here?
I'm trying to create an "Extensible Enterprise SSO" extension as described in the Introducing Extensible Enterprise SSO tech talk.
My SSO extension works fine, but I want to be able to access the camera (via AVFoundation) from within the SSO extension.
According to this thread (which I can't seem to reply to), it should be possible to access the camera from within an SSO extension; however, this doesn't work for me.
When I try to access the camera, I get the permission dialog, but after accepting, the camera preview is empty and no camera frames are produced.
I don't get any errors or warnings in the logs, but it immediately fires an AVCaptureSession.wasInterruptedNotification with AVCaptureSessionInterruptionReasonKey = 1, which corresponds to videoDeviceNotAvailableInBackground.
However, the SSO extension's view controller is clearly not in the background, so is this a bug, or are there special rules for requesting camera permission in an SSO extension? The same camera access works fine in the host app, just not inside the extension.
Interestingly, accessing the camera in a WKWebView using various webcam test pages doesn't work either.
All of these tests have been on iPadOS 18.