What is the purpose of AdjustmentsSecondary.data included in the PHAssetResource for a cleaned-up image?
When using creationRequest.addResource, what should be set for the PHAssetResourceType?
If I set the PHAssetResourceType as follows to create an asset, it appears correctly in the camera roll. However, when attempting to edit the image in the Photos app, the app crashes:
IMG_5332.HEIC → .photo
FullSizeRender.HEIC → .fullSizePhoto
Adjustments.plist → .adjustmentData
AdjustmentsSecondary.data → .adjustmentData
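For context, a minimal sketch of the kind of asset-creation call being described might look like the following (the function and URL parameter names are placeholders; whether this particular resource-type mapping is what Photos expects for an editable asset is exactly the open question):

import Photos

// A sketch only: the URLs are placeholders for the four files listed above.
func createAsset(photoURL: URL, fullSizeRenderURL: URL, adjustmentsPlistURL: URL, secondaryDataURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        let creationRequest = PHAssetCreationRequest.forAsset()
        creationRequest.addResource(with: .photo, fileURL: photoURL, options: nil)
        creationRequest.addResource(with: .fullSizePhoto, fileURL: fullSizeRenderURL, options: nil)
        creationRequest.addResource(with: .adjustmentData, fileURL: adjustmentsPlistURL, options: nil)
        // Mapping AdjustmentsSecondary.data to .adjustmentData as well, per the table above.
        creationRequest.addResource(with: .adjustmentData, fileURL: secondaryDataURL, options: nil)
    }) { success, error in
        print("saved: \(success), error: \(String(describing: error))")
    }
}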
I'm a new app developer and am trying to add a button that adds pictures from the photo library AND camera. I added the first function (adding pictures from the photo library) using the new-ish photoPicker, but I can't find a way to do the same thing for the camera. Should I just tough it out and use the UIViewControllerRepresentable approach that I've seen in all of the YouTube tutorials I've come across?
I also want the user to be able to crop the picture in the app after they take a picture.
Thanks in advance
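For reference, the UIKit bridge those tutorials usually show looks roughly like this (a sketch only; CameraPicker and its binding are illustrative names, and allowsEditing enables a basic built-in crop step):

import SwiftUI
import UIKit

struct CameraPicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    @Environment(\.dismiss) private var dismiss

    func makeUIViewController(context: Context) -> UIImagePickerController {
        // Check UIImagePickerController.isSourceTypeAvailable(.camera) before presenting;
        // the camera source is unavailable in the simulator.
        let picker = UIImagePickerController()
        picker.sourceType = .camera
        picker.allowsEditing = true
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {
        context.coordinator.parent = self
    }

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    final class Coordinator: NSObject, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
        var parent: CameraPicker
        init(_ parent: CameraPicker) { self.parent = parent }

        func imagePickerController(_ picker: UIImagePickerController,
                                   didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            parent.image = (info[.editedImage] ?? info[.originalImage]) as? UIImage
            parent.dismiss()
        }

        func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
            parent.dismiss()
        }
    }
}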
Hello
Our application is backing up the user photos to some back end.
When retrieving the asset data from the Photo Library, we set the option 'isNetworkAccessAllowed' to true to get the assets that might be optimized in iCloud.
In the application logs, we can see the message below, and it shows as coming from com.apple.photos.backend (PhotoKit)
Missing prefetched properties for PHAssetAdjustmentProperties on <PHAsset: 0x160b1ec00> BCF5688F-F7A7-4196-AFC7-A84E8BD95F3E/L0/001 mediaType=1/0, sourceType=1, (5601x3734), creationDate=2022-01-24 23:36:05 +0000, location=0, hidden=0, favorite=0, adjusted=0 . Fetching on demand on the main queue, which may degrade performance.
In particular, the message says 'Fetching on demand on the main queue', but I'm not sure whether that means that PhotoKit will fetch on the main queue, or that our application is requesting the data on the main queue.
Could anyone clarify?
thanks
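For context, one way to rule out the app itself as the main-queue caller is to issue the request synchronously from a background queue with network access allowed. A minimal sketch (the function name is illustrative):

import Photos

func backUp(_ asset: PHAsset) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true   // allow fetching from iCloud if the original is not local
    options.isSynchronous = true            // synchronous requests run entirely on the calling queue

    DispatchQueue.global(qos: .utility).async {
        PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, uti, _, info in
            guard let data = data else { return }
            // Hand `data` to the backup pipeline here.
            print("Got \(data.count) bytes, uti: \(uti ?? "unknown")")
        }
    }
}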
Hi everyone,
I am working on a 3D reconstruction project.
Recently I have been able to retrieve the intrinsics from the two cameras on the back of my iPhone.
One consideration is that I want this app to run even if there is no LiDAR, as long as there are at least two cameras on the back. If there is a LiDAR, that is something I have considered supporting later in the course of the project.
I am using an AVCaptureSession with two AVCaptureDevice cameras:
builtInWideAngleCamera
builtInUltraWideCamera
The intrinsic matrices seem to be correct. However, when I retrieve the extrinsics, e.g., builtInWideAngleCamera w.r.t. builtInUltraWideCamera, the matrix I get looks like this:
Extrinsic Matrix (Ultra-Wide to Wide):
[0.9999968, 0.0008149305, -0.0023960583, 0.0]
[-0.0008256607, 0.9999896, -0.0044807075, 0.0]
[0.002392382, 0.0044826716, 0.99998707, 0.0]
[-14.277955, -8.135408e-10, -0.3359985, 0.0]
The extrinsic matrix should be of the form [R | t]. The rotational part seems correct, but the translational vector is ALL ZEROS, which would suggest that the cameras are physically overlapped; also, the last element is not 1, as homogeneous coordinates would require.
Has anyone encountered this 'issue' before?
Is there a flaw in my reasoning or something I might be missing?
Any comments are very much appreciated.
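For reference, one way to read the relative pose directly is the class-level API sketched below (iOS 13+; the function name is illustrative). The returned Data wraps a matrix_float4x3, i.e. a 3-row by 4-column [R | t], and simd matrices are stored and printed column by column, so in a raw dump a column can easily be mistaken for a row; that may be worth double-checking against the values above.

import AVFoundation
import simd

func relativeExtrinsics(from fromDevice: AVCaptureDevice, to toDevice: AVCaptureDevice) -> matrix_float4x3? {
    // Returns nil when the extrinsics cannot be provided for this device pair.
    guard let data = AVCaptureDevice.extrinsicMatrix(from: fromDevice, to: toDevice) else { return nil }
    var matrix = matrix_float4x3()
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    return matrix
}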
I explored several methods to trigger a 35mm camera connected via USB:
1- ICCameraDevice: Unable to make it work with Canon cameras (details).
2- Canon's EDSDK: Works but is complex to implement.
3- gPhoto2 (command-line): Simple to use but requires gPhoto2 to be installed.
In your opinion, what is the most efficient way to trigger and download images via USB from Canon cameras?
I'm building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app is already able to capture photos, but I also want to implement an important feature: applying my custom image filter to the image, both for the live preview in the camera and when the image is saved to the photo library (like in the default Apple Camera app with Photographic Styles).
My image filter must be pretty advanced because I'm a photographer and I'm trying to achieve the same colours as I have with my custom image preset in Lightroom. I want to control image parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green, and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple, and Magenta), colour grading, and more.
Currently I'm struggling with the implementation of this. I tried to create a custom image filter using Metal (it works for saturation), but I'm not sure it is the best approach. I need help and recommendations on how developers implement this kind of complex processing in their apps (which technologies to use, etc.).
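As one possible starting point (not necessarily the best architecture), a Core Image pass over the camera's video frames can host this kind of grading for the live preview. The sketch below only applies exposure and saturation/contrast tweaks and is purely illustrative; real tone-curve, HSL, and colour-grading math would typically live in a custom CIKernel or Metal shader plugged into the same place.

import AVFoundation
import CoreImage
import CoreImage.CIFilterBuiltins

final class FrameFilter: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Called with the filtered frame; render it via a CIContext into an MTKView or layer elsewhere.
    var onFilteredImage: ((CIImage) -> Void)?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let input = CIImage(cvPixelBuffer: pixelBuffer)

        let exposure = CIFilter.exposureAdjust()
        exposure.inputImage = input
        exposure.ev = 0.3

        let color = CIFilter.colorControls()
        color.inputImage = exposure.outputImage
        color.saturation = 1.1
        color.contrast = 1.05

        if let result = color.outputImage {
            onFilteredImage?(result)
        }
    }
}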
I have a photo editing app which uses a simple Metal render to display CIFilter output images. It works just fine in Swift 5, but in Swift 6 it crashes on starting the Metal command buffer, with an error in the queue com.Metal.CompletionQueueDispatch (serial).
The crash occurs before I can debug it. I changed the command buffer descriptor to report errors by setting errorOptions = .encoderExecutionStatus on the MTLCommandBufferDescriptor.
No luck getting insight into the source of the crash.
Likewise the error is happening before any of the usual Metal debug tools are enabled.
The Metal render works just fine in Swift 5 and also works fine with almost all of the Swift compiler upcoming feature flags set to Yes. (The "Default Internal Imports" flag is still No; the number of compile errors with this setting is absolutely scary, but that's another topic.)
Do you have any suggestions on debugging, or ideas on why the Metal library is crashing in Swift 6?
Everything is current release versions and hardware.
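For reference, a sketch of the extended error-reporting setup mentioned above (it assumes an existing MTLCommandQueue; the completed handler is one place the per-encoder execution status can surface):

import Metal

func makeInstrumentedCommandBuffer(on queue: MTLCommandQueue) -> MTLCommandBuffer? {
    let descriptor = MTLCommandBufferDescriptor()
    descriptor.errorOptions = .encoderExecutionStatus

    guard let commandBuffer = queue.makeCommandBuffer(descriptor: descriptor) else { return nil }

    commandBuffer.addCompletedHandler { buffer in
        guard let error = buffer.error as NSError? else { return }
        print("Command buffer failed: \(error.localizedDescription)")
        if let infos = error.userInfo[MTLCommandBufferEncoderInfoErrorKey] as? [any MTLCommandBufferEncoderInfo] {
            for info in infos {
                print("Encoder \(info.label): error state \(info.errorState.rawValue)")
            }
        }
    }
    return commandBuffer
}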
Hello Devs!
Does anyone have an idea whether it is feasible to override the native camera on Apple devices?
So if a user has an app called "xyz" installed, then when the user opens the native camera and a QR code is detected, we display a pop-up asking whether they want to continue with "xyz".
Thanks
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I have to mention that my iMac is pretty old, is running macOS High Sierra 10.13.6, and will not let me update to a newer version of macOS. Anyway, the main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to import, the error message reads, "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to my iMac.
This was not a problem before the recent iPhone update. I tried closing the Photo app and reopening it. I tried restarting my iPhone and iMac but nothing seems to work. Any help would be much appreciated.
Hi all,
we are in the business of scanning documents and barcodes with the camera system of mobile devices. Since there is a wide variety of use cases, from scanning the tiniest barcodes and small business cards to scanning barcodes or large documents from far distances, we prefer to rely on the triple-camera devices, if available, with automatic constituent device switching.
This approach used to be working perfectly fine. Depending on the zoom level (we prefer to use an initial zoom value of 2.0) and the focusing distance the iPhone Pro models switched through the different camera systems at light speed: from ultra-wide to wide, tele and back. No issues at all.
Unfortunately the new iPhone 16 Pro models behave very differently when it comes to constituent device switching based on focus distance. The switching is slow and sometimes does not happen at all when the focusing distance changes, especially when aiming at a distant object for a longer time and then aiming at a very close object that is maybe 2" away. The iPhone 15 Pro always switches immediately to the ultra-wide camera, while the iPhone 16 Pro takes at least 2-3 seconds, in rare cases up to 10 seconds, and sometimes forever to switch to the ultra-wide camera.
Of course we assumed that our code is responsible for these issues. So we experimented with restricting the devices and so on. Then we stripped more and more configuration code but nothing we tried improved the situation.
So we ended up writing a minimal example app that demonstrates the problem. You can find the code below. Execute it on various iPhones and aim at a far distance (> 10 feet) and then quickly at a very close distance (< 5 inches).
Here is a list of devices and our test results:
iPhone 15 Pro, iOS 17.6: very fast and reliable switching
iPhone 15 Pro, iOS 18.1: very fast and reliable switching
iPhone 13 Pro Max, iOS 15.3: very fast and reliable switching
iPhone 16 (dual-wide camera), iOS 18.1: very fast and reliable switching
iPhone 16 Pro, iOS 18.1: slow switching, unreliable
iPhone 16 Pro Max, iOS 18.1: slow switching, unreliable
Questions:
Does anyone else have seen this issue? And possibly found a workaround?
Is this behaviour intended on iPhone 16 Pro models? Can we somehow improve the switching speed?
Furthermore, the iPhone 16 Pro models also show a jumping preview in the preview layer when they switch the active constituent device. Not dramatic, but compared to the other phones it looks like a glitch.
Thank you very much!
Kind regards,
Sebastian
import UIKit
import AVFoundation

class ViewController: UIViewController {

    var captureSession: AVCaptureSession!
    var captureDevice: AVCaptureDevice!
    var captureInput: AVCaptureInput!
    var previewLayer: AVCaptureVideoPreviewLayer!
    var activePrimaryConstituentToken: NSKeyValueObservation?
    var zoomToken: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        checkPermissions()
        setupAndStartCaptureSession()
    }

    func checkPermissions() {
        let cameraAuthStatus = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
        switch cameraAuthStatus {
        case .authorized:
            return
        case .denied:
            abort()
        case .notDetermined:
            AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (authorized) in
                if !authorized {
                    abort()
                }
            })
        case .restricted:
            abort()
        @unknown default:
            fatalError()
        }
    }

    func setupAndStartCaptureSession() {
        DispatchQueue.global(qos: .userInitiated).async {
            self.captureSession = AVCaptureSession()
            self.captureSession.beginConfiguration()
            if self.captureSession.canSetSessionPreset(.photo) {
                self.captureSession.sessionPreset = .photo
            }
            self.captureSession.automaticallyConfiguresCaptureDeviceForWideColor = true
            self.setupInputs()
            DispatchQueue.main.async {
                self.setupPreviewLayer()
            }
            self.captureSession.commitConfiguration()
            self.captureSession.startRunning()
            self.activePrimaryConstituentToken = self.captureDevice.observe(\.activePrimaryConstituent, options: [.new], changeHandler: { (device, change) in
                let type = device.activePrimaryConstituent!.deviceType.rawValue
                print("Device type: \(type)")
            })
            self.zoomToken = self.captureDevice.observe(\.videoZoomFactor, options: [.new], changeHandler: { (device, change) in
                let zoom = device.videoZoomFactor
                print("Zoom: \(zoom)")
            })
            let switchZoomFactor = 2.0
            DispatchQueue.main.async {
                self.setZoom(CGFloat(switchZoomFactor), animated: false)
            }
        }
    }

    func setupInputs() {
        if let device = AVCaptureDevice.default(.builtInTripleCamera, for: .video, position: .back) {
            captureDevice = device
        } else {
            fatalError("no back camera")
        }
        guard let input = try? AVCaptureDeviceInput(device: captureDevice) else {
            fatalError("could not create input device from back camera")
        }
        if !captureSession.canAddInput(input) {
            fatalError("could not add back camera input to capture session")
        }
        captureInput = input
        captureSession.addInput(input)
    }

    func setupPreviewLayer() {
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        view.layer.addSublayer(previewLayer)
        previewLayer.frame = self.view.layer.frame
    }

    func setZoom(_ value: CGFloat, animated: Bool) {
        guard let device = captureDevice else { return }
        let maxZoom: CGFloat = captureDevice.maxAvailableVideoZoomFactor
        let minZoom: CGFloat = captureDevice.minAvailableVideoZoomFactor
        let zoomValue = max(min(value, maxZoom), minZoom)
        let deltaZoom = Float(abs(zoomValue - device.videoZoomFactor))
        do {
            try device.lockForConfiguration()
            if animated {
                device.ramp(toVideoZoomFactor: zoomValue, withRate: max(deltaZoom * 50.0, 50.0))
            } else {
                device.videoZoomFactor = zoomValue
            }
            device.unlockForConfiguration()
        } catch {
            return
        }
    }
}
Hi Team,
The camera preview plugin stopped working after upgrading the phone to iOS 18.1.1. Is there any way to get the CameraPreview plugin working in the application? After tapping the camera icon, only a black screen is shown; on the other hand, the same build works fine on iOS versions prior to 18.
Is there any way to resolve this issue?
Error CameraPreview Plugin upgraded to iOS 18
I have the following code in my ObservableObject class, and recently Xcode started giving purple-colored runtime issues with it (probably in iOS 18):
Issue 1: Performing I/O on the main thread can cause slow launches.
Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
Issue 3: Interprocess communication on the main thread can cause non-deterministic delays.
Here is the code:
@Published var cameraAuthorization: AVAuthorizationStatus
@Published var micAuthorization: AVAuthorizationStatus
@Published var photoLibAuthorization: PHAuthorizationStatus
@Published var locationAuthorization: CLAuthorizationStatus
var locationManager: CLLocationManager

override init() {
    // Issue 1 (Performing I/O on the main thread can cause slow launches.)
    cameraAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.video)
    micAuthorization = AVCaptureDevice.authorizationStatus(for: AVMediaType.audio)
    photoLibAuthorization = PHPhotoLibrary.authorizationStatus(for: .addOnly)

    // Issue 1: Performing I/O on the main thread can cause slow launches.
    locationManager = CLLocationManager()
    locationAuthorization = locationManager.authorizationStatus

    super.init()

    // Issue 2: Interprocess communication on the main thread can cause non-deterministic delays.
    locationManager.delegate = self
}
And also in the route-change notification handler for AVAudioSession.routeChangeNotification:
//Issue 3: Hangs - Interprocess communication on the main thread can cause non-deterministic delays.
let categoryPlayback = (AVAudioSession.sharedInstance().category == .playback)
I wonder how checking authorization status can cause these issues. What is the fix here?
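One possible mitigation (a sketch only, not necessarily the intended fix) is to give the published properties placeholder defaults and resolve the real statuses from a background task, so the authorization queries stay off the main thread during launch:

import AVFoundation
import Photos
import CoreLocation
import Combine

@MainActor
final class PermissionsModel: NSObject, ObservableObject, CLLocationManagerDelegate {
    @Published var cameraAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var micAuthorization: AVAuthorizationStatus = .notDetermined
    @Published var photoLibAuthorization: PHAuthorizationStatus = .notDetermined
    @Published var locationAuthorization: CLAuthorizationStatus = .notDetermined

    // CLLocationManager wants to be created on a thread with a run loop, so it stays here.
    let locationManager = CLLocationManager()

    override init() {
        super.init()
        locationManager.delegate = self
        refreshStatuses()
    }

    func refreshStatuses() {
        Task.detached(priority: .userInitiated) {
            // These status queries are what the runtime flags as I/O or IPC on the main thread.
            let camera = AVCaptureDevice.authorizationStatus(for: .video)
            let mic = AVCaptureDevice.authorizationStatus(for: .audio)
            let photos = PHPhotoLibrary.authorizationStatus(for: .addOnly)
            await MainActor.run {
                self.cameraAuthorization = camera
                self.micAuthorization = mic
                self.photoLibAuthorization = photos
            }
        }
    }

    nonisolated func locationManagerDidChangeAuthorization(_ manager: CLLocationManager) {
        let status = manager.authorizationStatus
        Task { @MainActor in
            self.locationAuthorization = status
        }
    }
}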
I found this phenomenon, and it can be reproduced stably.
If I use a triple-camera device to take a photo while the scene is moving, or while I move the phone (let's assume it moves horizontally), I press the shutter at the moment I aim at an object; call that time T. At that moment the picture in the viewfinder is T0, and the photo produced corresponds to roughly T+100 ms.
If I use a single-camera device to take a photo, move the phone at the same speed, and press the shutter when aiming at the same object, the photo produced corresponds to roughly T+400 ms.
Let me describe the problem I encountered in another way.
Suppose a pile of cards are placed horizontally on the table, and the cards are written with numbers from left to right, 0,1,2,3,4,5,6...
Now aim the camera at the number 0, and then move to the right at a uniform speed. The numbers pass through the camera's viewfinder and continue to increase. When aiming at the number 5, press the shutter.
If it is a triple-camera, the photo obtained will probably show 6, while if it is taken with a single-camera, the photo obtained will be about 9.
This means the triple camera can capture photos faster, but why is this the case? Any explanation?
I tried stacking 30 RAW exposures of 1 second each, but the quality is far inferior to the 30-second long exposure in night mode.
Hello,
I'm trying to implement deferred photo processing in my photo capture app. After I take a photo, I pass it through a CIFilter. Now, with deferred photo processing, where would I pass the resulting photo through the CIFilter, since there is no way for me to know when the system has finished processing a photo?
If I have to do it in my app foreground every time, how do I prevent a scenario, where the user takes a photo, heads straight to the Photos App and sees the image without the filter?
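For context, a sketch of the deferred-capture delivery path this refers to (assuming iOS 17's AVCaptureDeferredPhotoProxy API; where a custom CIFilter would slot into this flow is exactly the open question):

import AVFoundation
import Photos

final class DeferredCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil,
              let proxyData = deferredPhotoProxy?.fileDataRepresentation() else { return }

        // Saving the proxy hands the photo to the system, which finishes processing it later.
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: proxyData, options: nil)
        }) { success, error in
            print("Proxy saved: \(success), error: \(String(describing: error))")
        }
    }
}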
When presenting the limited-library picker with [[PHPhotoLibrary sharedPhotoLibrary] presentLimitedLibraryPickerFromViewController:self];, the navigation bar background color and the navigation bar text color are both white, making the bar unreadable.
Using
[[UINavigationBar appearanceWhenContainedInInstancesOfClasses:@[UIImagePickerController.class]] setTintColor:[UIColor blackColor]]; has no effect.
Hi,
I'm having difficulties figuring out how I can reliably detect if the macOS system PHPhotoLibrary is available.
If I place the system Photo Library on an external drive and then eject it, other apps on startup, such as Photos, will tell me that the library isn't available. How can I replicate this behavior?
The only API I can find for this detection on startup is "PHPhotoLibrary.shared().unavailabilityReason". However, this always returns nil.
Another strange behavior: if I register a PHPhotoLibraryAvailabilityObserver class on startup while the library is available and then eject the drive, I do get a notification via photoLibraryDidBecomeUnavailable, but directly after the call the app is terminated. This prevents the app from performing any kind of graceful termination. Is this the expected behavior? It would make sense for it to be up to the developer to decide what happens when the library becomes unavailable.
Thanks
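For reference, a minimal sketch of the availability-observer registration being described (the class name is illustrative, and the observer object has to be kept alive for the callback to fire):

import Photos

final class LibraryAvailabilityWatcher: NSObject, PHPhotoLibraryAvailabilityObserver {

    func start() {
        let library = PHPhotoLibrary.shared()
        if let reason = library.unavailabilityReason {
            print("Library unavailable at startup: \(reason)")
        }
        library.register(self)
    }

    func photoLibraryDidBecomeUnavailable(_ photoLibrary: PHPhotoLibrary) {
        // Whether there is time for meaningful cleanup here before termination
        // is exactly the behavior being questioned above.
        print("Photo library became unavailable: \(String(describing: photoLibrary.unavailabilityReason))")
    }
}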
In the docs it looks like something allows you to set a background replacement image in the Control Center controls (like on a Mac).
However, I can't find any documentation on it, beyond this reference in the Apple docs. https://developer.apple.com/documentation/avfoundation/avcapturedevice/isbackgroundreplacementactive?language=objc
Does anyone have any advice to enable backgrounds in the camera system wide?
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC.
We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow.
I re-wrote the last part like this:
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
This doesn't cause any warnings.
An issue that might be related has been reported here.
I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
Hello Developers,
I am working on an app where I need to capture 48MP high-resolution photos using the ultra-wide camera of the iPhone 16 Pro while an AR session is running. The goal is to take these photos without interrupting or impacting the AR session, which uses the main wide-angle camera. Despite extensive testing and various approaches, we have been unable to achieve the desired functionality.
What We Have Tried So Far
1. Using AVCaptureMultiCamSession:
• We attempted to leverage AVCaptureMultiCamSession to simultaneously use the wide-angle camera for ARKit and the ultra-wide camera for photo capture.
• However, this approach resulted in resource conflicts, with errors such as Cannot Record (OSStatus error -16409) and dropped frames.
Additionally, the ultra-wide camera feed would frequently freeze or stop.
2. Dedicated AVCaptureSession for the Ultra-Wide Camera:
• We separated the ultra-wide camera into its own AVCaptureSession while letting ARKit exclusively use the wide-angle camera.
• This setup showed initial promise, but the ultra-wide camera feed would still stop running after a very short time (under one second).
• Debugging logs indicated potential system-level interruptions, possibly due to resource prioritization by iOS.
3. Notification-Based Monitoring:
• We implemented monitoring for session interruptions (AVCaptureSession.wasInterruptedNotification), but this provided limited insights into the exact cause of the session stopping.
• We suspect iOS is de-prioritizing the ultra-wide camera session due to resource management policies or conflicts with ARKit.
4. Adjusting Camera Configurations:
• We attempted to simplify both ARKit and AVCaptureSession configurations by reducing features like depth data and by using lower session presets for video capture. However, the core issue persisted.
The Core Problem
• The ultra-wide camera session frequently stops or freezes when used alongside ARKit.
• Capturing high-resolution 48MP photos during the AR session is critical to the functionality of our app.
Question
Has anyone successfully implemented a similar setup? Specifically:
• Capturing 48MP photos with the ultra-wide camera while ARKit is actively using the main camera.
• Avoiding conflicts between ARKit and AVCaptureSession for the ultra-wide camera.
Any insights, suggestions, or alternative approaches would be greatly appreciated. Thank you in advance for your help! 😊
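For what it's worth, a sketch of how a dedicated session can request maximum-resolution stills from the ultra-wide camera (this alone does not resolve the contention with ARKit described above; the function name and error values are illustrative):

import AVFoundation

func configureUltraWidePhotoSession() throws -> (AVCaptureSession, AVCapturePhotoOutput) {
    guard let device = AVCaptureDevice.default(.builtInUltraWideCamera, for: .video, position: .back) else {
        throw NSError(domain: "Capture", code: -1)
    }

    let session = AVCaptureSession()
    session.beginConfiguration()
    session.sessionPreset = .photo

    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else { throw NSError(domain: "Capture", code: -2) }
    session.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else { throw NSError(domain: "Capture", code: -3) }
    session.addOutput(photoOutput)

    // Ask for the largest photo dimensions the active format supports (48 MP where available).
    // Each AVCapturePhotoSettings used for a capture should request the same maxPhotoDimensions.
    if let largest = device.activeFormat.supportedMaxPhotoDimensions.max(by: { $0.width * $0.height < $1.width * $1.height }) {
        photoOutput.maxPhotoDimensions = largest
    }

    session.commitConfiguration()
    return (session, photoOutput)
}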