Hi all, we are working on an iOS application that includes camera functionality. This week we received a few customer complaints about camera usage on iPhone 16/16 Pro: both customers said that when the camera is open, the camera preview is simply frozen, while all other functionality and UI works as expected. Moreover, the issue happens only for the back camera; the front camera works perfectly.
We have tested on iOS 18 with iPhone 14/15/15 Pro/15 Pro Max, and all of those devices work perfectly without any issues. So we assume the problem is not iOS 18 itself, but rather some breaking change introduced with the new iPhone 16/16 Pro cameras. Unfortunately, we currently can't test directly on an iPhone 16/16 Pro since we don't have these devices.
We are using the SwiftUI framework, and here is the implementation of the camera preview:
VideoPreviewLayer
final class CameraPreviewView: UIView {
    var previewLayer: AVCaptureVideoPreviewLayer {
        guard let layer = layer as? AVCaptureVideoPreviewLayer else {
            fatalError("Expected the backing layer to be AVCaptureVideoPreviewLayer")
        }
        return layer
    }

    var session: AVCaptureSession? {
        get { return previewLayer.session }
        set { previewLayer.session = newValue }
    }

    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
}
UIKit -> SwiftUI
struct CameraRecordingView: UIViewRepresentable {
    @ObservedObject var cameraManager: CameraManager

    func makeUIView(context: Context) -> CameraPreviewView {
        let previewView = CameraPreviewView()
        previewView.session = cameraManager.session // AVCaptureSession
        previewView.previewLayer.videoGravity = .resizeAspectFill
        return previewView
    }

    func updateUIView(_ uiView: CameraPreviewView, context: Context) {}
}
Setup camera input
private func saveInput(input: AVCaptureDevice) {
    // input is AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
    do {
        let cameraInput = try AVCaptureDeviceInput(device: input)
        if session.canAddInput(cameraInput) {
            session.addInput(cameraInput) // session is AVCaptureSession
        } else {
            sendError(error: .cannotAddInput)
            status = .failed
        }
    } catch {
        if (error as NSError).code == -11852 {
            sendError(error: .microphoneError)
        } else {
            sendError(error: .createCaptureInput(error))
        }
        status = .failed
    }
}
Does anybody have similar issues with iPhone 16/16 Pro? We would appreciate any ideas on how to resolve the issue.
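In the meantime, we are considering shipping some extra diagnostics to the affected users. Below is a minimal sketch of what we have in mind: observing the capture session's interruption and runtime-error notifications so a frozen preview at least leaves a trace in the logs. It assumes our CameraManager owns the AVCaptureSession shown above; the method name is ours.

extension CameraManager {
    // Log session interruptions and runtime errors so affected iPhone 16 users
    // can send us more context about the frozen preview.
    func observeSessionEvents() {
        let center = NotificationCenter.default
        center.addObserver(forName: .AVCaptureSessionWasInterrupted,
                           object: session, queue: .main) { note in
            let reason = note.userInfo?[AVCaptureSessionInterruptionReasonKey]
            print("Capture session interrupted, reason: \(String(describing: reason))")
        }
        center.addObserver(forName: .AVCaptureSessionRuntimeError,
                           object: session, queue: .main) { note in
            let error = note.userInfo?[AVCaptureSessionErrorKey] as? AVError
            print("Capture session runtime error: \(String(describing: error))")
        }
        center.addObserver(forName: .AVCaptureSessionInterruptionEnded,
                           object: session, queue: .main) { _ in
            print("Capture session interruption ended")
        }
    }
}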
Camera
Discuss using the camera on Apple devices. (Posts under the Camera tag: 182 posts.)
Since I updated my iPhone XR to iOS 18, every time I use the camera, mainly to record, it keeps shaking.
As you can see, the value shown in the AVCaptureSystemZoomSlider is not the same as the raw camera zoom factor.
I tried to calculate this value, and it seems it's 0.8. (5-1)*0.8=4.2-1 in this image.
It seems this factor only applies to the default wide-angle camera. And I can't get this value from anywhere. (It's not displayVideoZoomFactorMultiplier btw, I checked that.)
What is it?
I'm developing an iOS app using DockKit to control a motorized stand. I've noticed that as the zoom factor of the AVCaptureDevice increases, the stand's movement becomes increasingly erratic up and down, almost like a pendulum motion. I'm not sure why this is happening or how to fix it.
Here's a simplified version of my tracking logic:
func trackObject(_ boundingBox: CGRect, _ dockAccessory: DockAccessory) async throws {
    guard let device = AVCaptureDevice.default(for: .video),
          let input = try? AVCaptureDeviceInput(device: device) else {
        fatalError("Camera not available")
    }

    let currentZoomFactor = device.videoZoomFactor
    let dimensions = device.activeFormat.formatDescription.dimensions
    let referenceDimensions = CGSize(width: CGFloat(dimensions.width),
                                     height: CGFloat(dimensions.height))
    let intrinsics = calculateIntrinsics(for: device, currentZoom: Double(currentZoomFactor))

    let deviceOrientation = UIDevice.current.orientation
    let cameraOrientation: DockAccessory.CameraOrientation = {
        switch deviceOrientation {
        case .landscapeLeft: return .landscapeLeft
        case .landscapeRight: return .landscapeRight
        case .portrait: return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        default: return .unknown
        }
    }()

    let cameraInfo = DockAccessory.CameraInformation(
        captureDevice: input.device.deviceType,
        cameraPosition: input.device.position,
        orientation: cameraOrientation,
        cameraIntrinsics: useIntrinsics ? intrinsics : nil,
        referenceDimensions: referenceDimensions
    )

    let observation = DockAccessory.Observation(
        identifier: 0,
        type: .object,
        rect: boundingBox
    )
    let observations = [observation]

    try await dockAccessory.track(observations, cameraInformation: cameraInfo)
}

func calculateIntrinsics(for device: AVCaptureDevice, currentZoom: Double) -> matrix_float3x3 {
    let dimensions = CMVideoFormatDescriptionGetDimensions(device.activeFormat.formatDescription)
    let width = Float(dimensions.width)
    let height = Float(dimensions.height)
    let diagonalPixels = sqrt(width * width + height * height)
    let estimatedFocalLength = diagonalPixels * 0.8
    let fx = Float(estimatedFocalLength) * Float(currentZoom)
    let fy = fx
    let cx = width / 2.0
    let cy = height / 2.0
    return matrix_float3x3(
        SIMD3<Float>(fx, 0, cx),
        SIMD3<Float>(0, fy, cy),
        SIMD3<Float>(0, 0, 1)
    )
}
I'm calling this function regularly (10-30 times per second) with updated bounding box information. The erratic movement seems to worsen as the zoom factor increases.
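One mitigation I've been experimenting with (a sketch only, not verified on hardware) is smoothing the bounding box and rate-limiting how often the result reaches dockAccessory.track(...). The 0.3 smoothing factor and the roughly 10 Hz cap below are guesses, not recommended values:

import Foundation
import CoreGraphics

// Exponentially smooths bounding boxes and drops updates that arrive
// faster than a fixed rate before they reach dockAccessory.track(...).
final class TrackingThrottle {
    private var smoothed: CGRect?
    private var lastSent = Date.distantPast
    private let alpha: CGFloat = 0.3            // smoothing factor (guess)
    private let minInterval: TimeInterval = 0.1 // cap at ~10 updates/sec (guess)

    // Returns a smoothed rect when an update should be sent, or nil to skip this frame.
    func filter(_ rect: CGRect) -> CGRect? {
        let previous = smoothed ?? rect
        let next = CGRect(
            x: previous.origin.x + alpha * (rect.origin.x - previous.origin.x),
            y: previous.origin.y + alpha * (rect.origin.y - previous.origin.y),
            width: previous.width + alpha * (rect.width - previous.width),
            height: previous.height + alpha * (rect.height - previous.height)
        )
        smoothed = next

        guard Date().timeIntervalSince(lastSent) >= minInterval else { return nil }
        lastSent = Date()
        return next
    }
}

I call filter(_:) before the tracking function and skip the frame when it returns nil, but I'm not sure this addresses the root cause.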
Questions:
Why might increasing the zoom factor cause this erratic movement?
I'm currently calculating camera intrinsics based on the current zoom factor. Is this approach correct, or should I be doing something differently?
Are there any other factors I should consider when using DockKit with a variable zoom?
Could the frequency of calls to the tracking function (10-30 times per second) be contributing to the erratic movement? If so, what would be an optimal frequency?
Any insights or suggestions would be greatly appreciated. Thanks!
Hi,
We are a team of students working on a project featuring the Vision Pro, and we'd simply like to know whether a 3rd party app can access the video stream of the front cameras.
From our tests, FaceTime, for example, is able to screen share the entire stream that the user is seeing (real world + app windows), but apps such as Discord are only able to share app windows; the real world is fully black.
Is this a privacy/security restriction, or is it because 3rd party apps don't yet support the stream of the front cameras?
To give some more context, we'd like to screenshot an area of the view (real world) with a pinch and drag gesture, and then access the screenshot to work on it. How would we be able to access the video stream?
Thanks in advance for your help,
MrCubic
I've just received my iPhone 16 Pro to develop some of the Camera Control features.
I am trying to set up my app to be launched from a button press, and from my research in the documentation this is only possible if I develop a LockedCameraCaptureExtension.
Is this correct? My app is written in React Native, so building an extension would require me to re-create the entire UI in Swift, which just isn't possible with my resources. Ideally I could build a simple extension that requires authentication to open the app, but I'm not sure that will work:
The app extension terminates shortly after launch if it doesn’t have an active camera view that uses AVCaptureEventInteraction to handle events from the hardware buttons, or if access to the camera hasn’t been requested.
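For context, the minimal hook the documentation seems to describe looks roughly like the following: a capture view controller that installs an AVCaptureEventInteraction to receive the hardware button events. This is a sketch based on my reading of the docs, not something I've managed to wire up from React Native yet.

import AVKit
import UIKit

final class MinimalCaptureViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Handle presses of the hardware capture / Camera Control button.
        let interaction = AVCaptureEventInteraction { event in
            if event.phase == .ended {
                // Trigger capture, or hand off to the main app here.
                print("Hardware capture button pressed")
            }
        }
        view.addInteraction(interaction)
    }
}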
This is a bit frustrating for something as simple as just opening an app.
Thanks,
Alex
I’m developing a visionOS app using EnterpriseKit, and I need access to the main camera for QR code detection. I’m using the ARKit CameraFrameProvider and ARKitSession to capture frames, but I’m encountering this error when trying to start the camera stream:
ar_camera_frame_provider_t: Failed to start camera stream with error: <ar_error_t Error Domain=com.apple.arkit Code=100 "App not authorized.">
Context:
VisionOS using EnterpriseKit for camera access and QR code scanning.
My Info.plist includes necessary permissions like NSCameraUsageDescription and NSWorldSensingUsageDescription.
I’ve added the com.apple.developer.arkit.main-camera-access.allow entitlement as per the official documentation here.
My app is allowed camera access as shown in the logs (Authorization status: [cameraAccess: allowed]), but the camera stream still fails to start with the “App not authorized” error.
I followed Apple’s WWDC 2024 sample code for accessing the main camera in visionOS from this session.
Sample of My Code:
import ARKit
import Vision

class QRCodeScanner: ObservableObject {
    private var arKitSession = ARKitSession()
    private var cameraFrameProvider = CameraFrameProvider()
    private var pixelBuffer: CVPixelBuffer?

    init() {
        Task {
            await requestCameraAccess()
        }
    }

    private func requestCameraAccess() async {
        await arKitSession.queryAuthorization(for: [.cameraAccess])

        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else { return }

        Task {
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else { continue }
                self.pixelBuffer = mainCameraSample.pixelBuffer
                // QR Code detection code here
            }
        }
    }
}
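One detail I only noticed while writing this up: I ignore the dictionary that queryAuthorization(for:) returns. Here is a sketch of checking it explicitly before running the session (the method name and the early-return handling are mine):

private func requestCameraAccessChecked() async {
    // queryAuthorization(for:) returns a status per requested authorization type.
    let results = await arKitSession.queryAuthorization(for: [.cameraAccess])
    guard results[.cameraAccess] == .allowed else {
        print("Main camera access not granted: \(String(describing: results[.cameraAccess]))")
        return
    }
    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        print("Failed to start ARKit session: \(error)")
    }
}

Given that my logs already show [cameraAccess: allowed], I suspect this check would pass and the error would remain, but I'd like to rule it out.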
Things I’ve Tried:
Verified entitlements in both Info.plist and .entitlements files. I have added the com.apple.developer.arkit.main-camera-access.allow entitlement.
Confirmed camera permissions in the privacy settings.
Followed the official documentation and WWDC 2024 sample code.
Checked my provisioning profile to ensure it supports ARKit camera access.
Request:
Has anyone encountered this "App not authorized" error when accessing the main camera via ARKit in visionOS using EnterpriseKit? Are there additional entitlements or provisioning profile configurations I might be missing? Any help would be greatly appreciated! I haven't seen any official examples using the new API for main camera access, and no open source examples either.
Hello.
Is it possible to control the iPhone 16 Camera Control Button via HID (Bluetooth/USB) command?
Thanks
When I set a custom exposure duration, like 1/8 s, and then switch back to continuous auto exposure, the exposure duration in areas that previously metered at 1/17 s changes to something like 1/5 or 1/10 s. As a result, the screen becomes laggy and overexposed. I'm not sure why this is happening.
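For reference, this is roughly the sequence I'm describing (a sketch; the 1/8 s value is the one from my report, error handling is trimmed, and the timing of the switch back is driven by my UI):

import AVFoundation

func setCustomThenAutoExposure(on device: AVCaptureDevice) throws {
    // 1. Lock exposure to a custom 1/8 s duration, keeping the current ISO.
    try device.lockForConfiguration()
    device.setExposureModeCustom(
        duration: CMTime(value: 1, timescale: 8),
        iso: AVCaptureDevice.currentISO,
        completionHandler: nil
    )
    device.unlockForConfiguration()

    // 2. Later, switch back to continuous auto exposure. This is where the
    //    metered duration jumps from about 1/17 s to 1/5 or 1/10 s for me.
    try device.lockForConfiguration()
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    }
    device.unlockForConfiguration()
}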
Hello, I want to see the camera output of Apple Vision Pro with a small delay, for example 2 ms. I tried a few approaches, but all of them only delay the start of the stream, not every frame. I'd appreciate any help.
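To clarify what I mean, this is the direction I tried: hold each frame and hand it to the display a fixed interval after it arrived, so every frame is late by the same amount (a sketch; the 2 ms default and the display callback are placeholders):

import CoreVideo
import Foundation

// Buffers incoming camera frames and releases each one `delay` seconds
// after it was received, so every frame is delayed, not just the first.
final class DelayedFrameRelay {
    private let delay: TimeInterval

    init(delay: TimeInterval = 0.002) {   // 2 ms, as an example
        self.delay = delay
    }

    func relay(_ pixelBuffer: CVPixelBuffer, to display: @escaping (CVPixelBuffer) -> Void) {
        DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
            display(pixelBuffer)
        }
    }
}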
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why is nothing delivered, and where am I going wrong in the code below? Please guide me.
for await cameraFrame in cameraFrameUpdates { print("cameraFrame:: \(cameraFrame)") }
var body: some View {
    VStack {
        image
            .resizable()
            .scaledToFit()
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }

            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()

            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() failed with session error: \(sessionError)")
            }

            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")

            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(String(describing: cameraFrame.sample(for: .left)))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print(" ======== PIXEL BUFFER ::: \(String(describing: self.pixelBuffer)) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
I have a third party app for controlling Sony mirrorless cameras over WiFi. I'm really excited to integrate the new camera controls on the iPhone 16 Pro with the app. I've found the documentation around this, and it seems I need an AVCaptureSession set up in order to utilise them.
func configureControls(_ controls: [AVCaptureControl]) {
    // Verify the host system supports controls; otherwise, return early.
    guard captureSession.supportsControls else { return }

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Iterate over the passed in controls.
    for control in controls {
        // Add the control to the capture session if possible.
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}
Can I just use a freshly initialised capture session for this, or does it need to be configured in any other way? Are there any downsides to creating a session (CPU usage, etc.) that I may experience from this?
Also, the scope of the controls is quite narrow. Something like shutter speed or aperture has quite a number of possible values, requires custom labels, and uses a non-linear scale, so AVCaptureIndexPicker seems to be the way to go. Will that picker support enough values to represent something like shutter speed or aperture? Is there any chance we may get non-linear, float-based controls in the future, which may feel more natural from a UX perspective than index-based ones?
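To make the shutter speed case concrete, here is the kind of thing I had in mind (a sketch; I'm assuming the localizedIndexTitles initialiser and setActionQueue(_:action:) from the docs, and the setShutterSpeed call into my own app is hypothetical):

import AVFoundation

// Shutter speeds reported by the Sony body, formatted as display labels.
let shutterSpeeds = ["1/8000", "1/4000", "1/2000", "1/1000", "1/500", "1/250",
                     "1/125", "1/60", "1/30", "1/15", "1/8", "1/4", "1/2", "1\""]

let shutterPicker = AVCaptureIndexPicker("Shutter Speed",
                                         symbolName: "camera.aperture",
                                         localizedIndexTitles: shutterSpeeds)

// Forward selections to the code that talks to the Sony camera over WiFi.
shutterPicker.setActionQueue(.main) { index in
    let selected = shutterSpeeds[index]
    print("Requested shutter speed: \(selected)")
    // sonyCamera.setShutterSpeed(selected) // hypothetical call into my app
}

// Then add it to the session via configureControls([shutterPicker]) above.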
Apologies, lots of edits going on here as I think about this more.
Is there any way, or would a way be considered, of putting these controls into a disabled state, like other UI elements in iOS? There are times (during capture, for example) when a lot of these settings are unavailable to be changed by the user (as communicated by the Sony camera), and managing a queue of changes while a function is unavailable to be set is going to be a challenge. If there won't be, how will the controls behave if they are removed while being interacted with? Presumably they will disappear entirely from the UI?
Thanks!
Hi, when recording videos with AVAssetWriter, the capture fps (camera output fps) is fine, but the final video's fps is lower. The reason is that AVAssetWriterInput.isReadyForMoreMediaData is sometimes false.
Yes, I have read the documentation many times; it says to set expectsMediaDataInRealTime to true, and so on.
I've been stuck on this problem for a long time. How can I debug it? Any advice?
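For what it's worth, here is roughly what my appending path looks like, in case someone spots the mistake (a sketch; dropping the frame when the input isn't ready is the part I suspect is losing fps):

import AVFoundation

func configureWriterInput(_ writerInput: AVAssetWriterInput) {
    // Required for live camera capture so the writer paces itself for real time.
    writerInput.expectsMediaDataInRealTime = true
}

func append(_ sampleBuffer: CMSampleBuffer, to writerInput: AVAssetWriterInput) {
    // If the input can't take data right now, the frame is dropped rather than
    // blocking the capture callback; this is where output fps can be lost.
    guard writerInput.isReadyForMoreMediaData else {
        print("Writer input not ready; dropping frame")
        return
    }
    if !writerInput.append(sampleBuffer) {
        print("Append failed")
    }
}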
I want to see the Vision Pro camera view in my application window. I have written some code based on Apple's samples, but I'm stuck on CVPixelBuffer: how do I convert the pixel buffer into a video frame?
Button("Camera Feed") {
Task{
if #available(visionOS 2.0, *) {
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions:[.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
await arKitSession.queryAuthorization(for: [.cameraAccess])
do {
try await arKitSession.run([cameraFrameProvider])
} catch {
return
}
guard let cameraFrameUpdates =
cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
return
}
for await cameraFrame in cameraFrameUpdates {
guard let mainCameraSample = cameraFrame.sample(for: .left) else {
continue
}
//====
print("=========================")
print(mainCameraSample.pixelBuffer)
print("=========================")
// self.pixelBuffer = mainCameraSample.pixelBuffer
}
} else {
// Fallback on earlier versions
}
}
}
I want to convert "mainCameraSample.pixelBuffer" into video. Could you please guide me!
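The direction I was exploring is converting each pixel buffer into a displayable image first (a sketch using Core Image; I'm not sure this is the right route to an actual video, and the orientation handling is a guess):

import CoreImage
import CoreVideo
import SwiftUI

// Reuse one context rather than creating a new one per frame.
let ciContext = CIContext()

// Converts one camera pixel buffer into a SwiftUI Image for display.
func makeImage(from pixelBuffer: CVPixelBuffer) -> Image? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    return Image(decorative: cgImage, scale: 1.0, orientation: .up)
}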
I am currently working on a project where I aim to overlay the camera feed obtained via the Apple Vision Pro's camera access API to align perfectly with the user's perspective in Vision Pro.
However, I've noticed a discrepancy between the captured camera feed and the actual view from the user's perspective. My assumption is that this difference might be related to lens distortion correction or the lack thereof.
Unfortunately, I'm not entirely sure how the camera feed is being corrected or processed. For the overlay, I'm using a typical 3D CG approach where a texture captured from the background plane is projected onto a surface. In this case, the "background capture" is the camera feed that I'm projecting.
If anyone has insights or suggestions on how to align the camera feed with the user's perspective more accurately, any information would be greatly appreciated.
The attached image shows the difference between the camera feed and the user's actual field of view.
I want to align the camera feed image to the user's perspective.
Hi,
After installing the iPadOS 18 beta 3 on my iPad (7th gen), the default camera app no longer detects QR codes.
I tried updating to beta 7, but the issue remained.
Also, third-party apps that use AVCaptureMetadataOutput from the AVFoundation framework to detect QR codes no longer work.
You can reproduce the issue by running the default camera app, or the AVFoundation sample code from the Apple developer site, on an iPad (7th gen) with the iPadOS 18 beta installed.
https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Has anyone else experienced this issue?
I would like to know if this issue occurs on other iPad models as well.
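For anyone trying to reproduce in a third-party app, our detection setup is roughly the following (a sketch, close to but not identical to Apple's sample):

import AVFoundation

final class QRScanner: NSObject, AVCaptureMetadataOutputObjectsDelegate {
    let session = AVCaptureSession()

    func configure() throws {
        guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }

        let output = AVCaptureMetadataOutput()
        if session.canAddOutput(output) {
            session.addOutput(output)
            output.setMetadataObjectsDelegate(self, queue: .main)
            output.metadataObjectTypes = [.qr] // the delegate below never fires on the iPadOS 18 beta
        }

        DispatchQueue.global(qos: .userInitiated).async {
            self.session.startRunning()
        }
    }

    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        let payloads = metadataObjects.compactMap { ($0 as? AVMetadataMachineReadableCodeObject)?.stringValue }
        print("Detected QR payloads: \(payloads)")
    }
}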
This is similar to the following issue that previously occurred with iPadOS 17.4.
https://support.apple.com/en-lamr/118614
https://developer.apple.com/forums/thread/748092
I have used the same external mic on my iPhone 13 Pro Max for years with great quality audio. Now, when I plug the mic in, the input doesn't switch and the mic isn't recognized, so all apps record with the internal mic, which is really bad in a room setting.
Hello, I have received Enterprise.license from Apple and I am trying to implement main Camera access for Vision Pro by following https://developer.apple.com/videos/play/wwdc2024/10139/. Here is my camera function.
func takePicture() async {
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    let cameraFrameProvider = CameraFrameProvider()
    var arKitSession = ARKitSession()
    var pixelBuffer: CVPixelBuffer?

    await arKitSession.queryAuthorization(for: [.cameraAccess])

    do {
        try await arKitSession.run([cameraFrameProvider])
    } catch {
        return
    }

    guard let cameraFrameUpdates =
            cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
        return
    }

    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        pixelBuffer = mainCameraSample.pixelBuffer
        let image = UIImage(ciImage: CIImage(cvPixelBuffer: pixelBuffer!))
        print(image)
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
    }
}
My problem is that, when debugging, execution stops at this line:
guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else { return }
Why does this happen, and what else do I need to do?
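In case it's relevant, these are extra checks I added at the top of takePicture() while debugging (a sketch; the variable names are mine):

// Extra checks at the top of takePicture() (sketch).
guard CameraFrameProvider.isSupported else {
    print("CameraFrameProvider is not supported here (simulator, missing entitlement, or unsupported device)")
    return
}
let supportedFormats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
guard !supportedFormats.isEmpty else {
    print("No supported main-camera video formats were returned")
    return
}
let authorization = await arKitSession.queryAuthorization(for: [.cameraAccess])
print("cameraAccess status: \(String(describing: authorization[.cameraAccess]))")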
Hello,
How can I find the vendor ID and product ID for the internal webcam ("FaceTime HD Camera") on M1, M2, and M3 MacBooks?
I can find the Model ID and Unique ID, but cannot find the vendor ID and product ID.
I can find the VID and PID on Intel-based MacBooks, but not on Apple silicon MacBooks.
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video.
But I found the video by my app differs from the iPhone build-in camera app in:
Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Is there any api or parameters to make my app more close to the iPhone build-in app? I have tried "whiteBalanceMode" and "exposureMode" but no luck.
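In case it's useful context, this is the kind of adjustment I experimented with for the colour-temperature difference: locking white balance to explicit temperature/tint values (a sketch; the 5000 K value is arbitrary, and it did not make my output match the built-in app):

import AVFoundation

func lockWhiteBalance(on device: AVCaptureDevice, temperature: Float = 5000, tint: Float = 0) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    let target = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(temperature: temperature, tint: tint)
    var gains = device.deviceWhiteBalanceGains(for: target)

    // Clamp each channel gain into the device's supported range.
    let maxGain = device.maxWhiteBalanceGain
    gains.redGain = min(max(gains.redGain, 1.0), maxGain)
    gains.greenGain = min(max(gains.greenGain, 1.0), maxGain)
    gains.blueGain = min(max(gains.blueGain, 1.0), maxGain)

    if device.isLockingWhiteBalanceWithCustomDeviceGainsSupported {
        device.setWhiteBalanceModeLocked(with: gains, completionHandler: nil)
    }
}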