We are using the inline PhotosPicker introduced in iOS 17.0. Accessibility navigation using touch gestures works fine, but keyboard navigation doesn't work properly.
Tab/arrow-based keyboard navigation can't move from the native app process to the inline PhotosUI process, or vice versa. Our accessibility team has logged this as a high-severity bug.
Please look into this.
Sample code for repro:
https://github.com/saalisumer/AccessibilityIssueInlinePhotoPickerIOS
Repro video:
https://github.com/saalisumer/AccessibilityIssueInlinePhotoPickerIOS/blob/main/Simulator%20Screen%20Recording%20-%20iPhone%2015%20Pro%20-%202024-12-16%20at%2016.27.48.mp4
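For reference, a minimal inline-picker setup along these lines (view and state names are illustrative, not taken from the repro project):
import SwiftUI
import PhotosUI

// Illustrative repro sketch: an inline PhotosPicker embedded between two
// focusable controls. With Full Keyboard Access, Tab/arrow keys move between
// the buttons but never into the embedded PhotosUI content, and vice versa.
struct PickerDemoView: View {
    @State private var selection: [PhotosPickerItem] = []

    var body: some View {
        VStack {
            Button("Focusable control above the picker") {}
            PhotosPicker(selection: $selection, matching: .images) {
                Text("Select photos")
            }
            .photosPickerStyle(.inline) // iOS 17+: embeds the remote PhotosUI view
            .frame(height: 400)
            Button("Focusable control below the picker") {}
        }
    }
}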
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground in a fixed size. Especially on macOS, this modal is rather small and doesn't utilize the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
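For reference, the setup in question looks roughly like this (a sketch assuming the iOS 18.1+ imagePlaygroundSheet API); since the sheet content is system-provided, there is no obvious place to attach presentationDetents:
import SwiftUI
import ImagePlayground

struct PlaygroundDemo: View {
    @State private var showPlayground = false

    var body: some View {
        Button("Generate image") { showPlayground = true }
            // The sheet's size is fixed by the system on macOS.
            .imagePlaygroundSheet(isPresented: $showPlayground,
                                  concept: "a lighthouse at sunset") { url in
                // The system returns a file URL for the generated image.
                print("Generated image at \(url)")
            }
    }
}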
I’m building a camera app using SwiftUI and UIKit (with UIViewControllerRepresentable). My app can already capture photos, but I also want to implement an important feature: applying my custom image filter to the live camera preview and to the image when it is saved to the photo library (like the default Apple camera app with Photographic Styles).
My image filter must be pretty advanced, because I’m a photographer and I’m trying to achieve the same colours as my custom image preset in Lightroom. I want to control image parameters such as the basics (exposure, contrast, shadows, etc.), tone curves for each channel (Red, Green and Blue separately), HSL (for Red, Orange, Yellow, Green, Blue, Aqua, Purple and Magenta), colour grading and more.
Currently I’m struggling with the implementation. I tried to create a custom image filter using Metal (it works for saturation), but I’m not sure if that is the best approach. I need help and recommendations on how developers implement something this complex in their apps (which technologies to use, etc.).
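Not an authoritative answer, but a common structure is to chain Core Image's built-in filters for the basic adjustments and reserve custom Metal kernels for what they don't cover (per-channel tone curves, HSL, colour grading). A sketch with assumed parameter values:
import CoreImage
import CoreImage.CIFilterBuiltins

// Chain built-in filters for the basics; per-channel curves, HSL and colour
// grading would drop down to a custom CIKernel written in Metal.
func applyPreset(to input: CIImage) -> CIImage {
    let exposure = CIFilter.exposureAdjust()
    exposure.inputImage = input
    exposure.ev = 0.35                      // exposure in stops

    let shadows = CIFilter.highlightShadowAdjust()
    shadows.inputImage = exposure.outputImage
    shadows.shadowAmount = 0.3
    shadows.highlightAmount = 0.9

    let curve = CIFilter.toneCurve()        // single luminance curve;
    curve.inputImage = shadows.outputImage  // per-channel curves need a custom kernel
    curve.point0 = CGPoint(x: 0.00, y: 0.02)
    curve.point1 = CGPoint(x: 0.25, y: 0.22)
    curve.point2 = CGPoint(x: 0.50, y: 0.50)
    curve.point3 = CGPoint(x: 0.75, y: 0.78)
    curve.point4 = CGPoint(x: 1.00, y: 0.98)

    return curve.outputImage ?? input
}
For live preview, the same function can be applied to each frame from an AVCaptureVideoDataOutput and rendered with a CIContext into a Metal-backed view.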
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner and the following is logged to console:
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]
ImagePlaygroundViewController.isAvailable returns true, however.
In a "real" Mac Catalyst app, it's working. Just not when the app is actually an iPad app.
Is this a bug?
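For context, the presentation path is roughly this (simplified sketch, not our exact code); the spinner appears right after present:
import UIKit
import ImagePlayground

// In the "Designed for iPad" build on macOS, isAvailable is true
// but the presented sheet never finishes loading.
func presentPlayground(from presenter: UIViewController) {
    guard ImagePlaygroundViewController.isAvailable else { return }
    let playground = ImagePlaygroundViewController()
    presenter.present(playground, animated: true)
}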
Since the OS was recently updated to 18.1.1 on my iPhone 15, I am no longer able to import my pictures into the Photos app on my iMac. I should mention that my iMac is pretty old, running macOS High Sierra 10.13.6, and it won't let me update to a newer version. Anyway, the main error message I get is: "Some items cannot be added to your Photo library because they may be an unrecognizable file format or the file may not contain valid data". Then, for each individual photo that failed to upload, the error message reads: "unable to read metadata. The file may be corrupt". However, videos import just fine from my iPhone to my iMac.
This was not a problem before the recent iPhone update. I tried closing the Photos app and reopening it. I tried restarting my iPhone and iMac, but nothing seems to work. Any help would be much appreciated.
I have a picture with no white background (transparency); the format is on point, but when I save it to Photos, it adds a white background(?). What are you trying to do, Apple?
I am an artist (singer-songwriter) and I use the Photos app to manage albums related to my various creative projects. These are some BIG issues that I am SURPRISED were never taken into account, or maybe were overlooked:
Missing search bar when adding photos to albums: why is there no search bar when adding a photo to one of hundreds of albums? (Artists like me like to organise things into different albums and folders.)
I can no longer search for albums by name after the iOS 18 update, which was previously very helpful for quickly locating them.
Albums can be arranged and moved within the same folder, but there is no way to move albums between DIFFERENT FOLDERS; the only way is to create a new album in the target folder, select and transfer everything, and delete the old album.
Hi!
I recently updated to the latest 18.2 Beta version of iOS on my iPhone 15 Pro Max. Could you please guide me on how to locate and utilize the Image Search feature powered by Apple Intelligence?
Just a little detail: I went on YouTube, and the instruction was to hold the camera action button on the iPhone 16, and image search appears.
So far, I haven’t been able to replicate these results on my iPhone 15 Pro Max. This is a great capability and I’d really like to try it out.
“Live long and prosper.” -Spock
-Jordan
Hello,
I'm trying to implement deferred photo processing in my photo capture app. After I take a photo, I pass it through a CIFilter. Now, with Deferred Photo Processing, where would I pass the resulting photo through the CIFilter, since there is no way for me to know when the system has finished processing a photo?
If I have to do it in my app's foreground every time, how do I prevent a scenario where the user takes a photo, heads straight to the Photos app, and sees the image without the filter?
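For context, here is a sketch of the iOS 17 deferred-capture flow as I understand it (an illustration, not a solution): the proxy is committed straight to the photo library and the system finalizes it there, which is exactly why there is no obvious point to apply the CIFilter.
import AVFoundation
import Photos

final class CaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy proxy: AVCaptureDeferredPhotoProxy?,
                     error: (any Error)?) {
        guard error == nil, let data = proxy?.fileDataRepresentation() else { return }
        PHPhotoLibrary.shared().performChanges({
            // .photoProxy hands the placeholder to Photos, which finishes
            // processing in the background at a time the app doesn't control.
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photoProxy, data: data, options: nil)
        }, completionHandler: nil)
    }
}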
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC.
We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil) else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow.
I re-wrote the last part like this:
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil) else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
This doesn't cause any warnings.
An issue that might be related has been reported here.
I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
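For completeness, here is the working variant as one self-contained function (the thrown errors are placeholders for the real error type elided above):
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Decode the primary image and copy its properties explicitly instead of
// using CGImageDestinationAddImageFromSource, which avoids the internal
// thumbnail generation that logs the errors shown above.
func transcodeToHEIC(from inputURL: URL, to outputURL: URL) throws {
    guard let source = CGImageSourceCreateWithURL(inputURL as CFURL, nil),
          let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.heic.identifier as CFString, 1, nil) else {
        throw CocoaError(.fileReadUnknown) // placeholder error
    }
    let index = CGImageSourceGetPrimaryImageIndex(source)
    guard let image = CGImageSourceCreateImageAtIndex(source, index, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, index, nil) else {
        throw CocoaError(.fileReadCorruptFile) // placeholder error
    }
    CGImageDestinationAddImage(destination, image, properties)
    guard CGImageDestinationFinalize(destination) else {
        throw CocoaError(.fileWriteUnknown) // placeholder error
    }
}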
Hi, I'm working on a task.
I need to transfer files (photos, RAW format) from an SD card to my iPhone, on my iPad Pro.
I need to move the photos to Files first, then import them into Photos.
Now, if I do this for a few files, there is no problem.
If I copy more than 300 files, the job doesn't complete.
When I select the RAW files in the Files app and then tap Share > Save to Photos, there's no progress bar and no way to confirm the copy completed successfully. At least 298 photos are imported, without any error or warning.
Has anyone found a trick for this?
Is it possible to copy from Files to Files, or import from Files to Photos, with a progress bar displayed?
It's very, very strange that something so basic is missing.
Please help me.
Thank you very much in advance.
This is my first day with iOS 18.1.1, and so far it's smooth. My only problem is how chaotic the Photos app has become with the update. For one, I don't like that all of the organization features are at the very bottom, and even after customizing and reorganizing, there's no way to move that section to the top. I also don't like how all my photos are just out on front street when the app is launched; it makes everything hard to look at and hard to find. Please fix this and make browsing photos enjoyable again.
I was able to obtain the depth map image using AVCapturePhotoOutput from the delegate method
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
I convert the depth map to the kCVPixelFormatType_DepthFloat32 format and read the pixel values of the depth map using the code below:
func convertDepthData(depthMap: CVPixelBuffer) -> [[Float32]] {
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    var convertedDepthMap: [[Float32]] = Array(
        repeating: Array(repeating: 0, count: width),
        count: height
    )
    CVPixelBufferLockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    let floatBuffer = unsafeBitCast(
        CVPixelBufferGetBaseAddress(depthMap),
        to: UnsafeMutablePointer<Float32>.self
    )
    for row in 0 ..< height {
        for col in 0 ..< width {
            if floatBuffer[width * row + col].isFinite {
                convertedDepthMap[row][col] = floatBuffer[width * row + col]
            }
        }
    }
    CVPixelBufferUnlockBaseAddress(depthMap, CVPixelBufferLockFlags(rawValue: 2))
    return convertedDepthMap
}
Is this the right way to access the float values of a depth map? And what is their unit? Sometimes the depth values are around 0.7 when I hold the device close to the subject, around 15 to 30 cm away.
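One thing worth checking, since the post doesn't show where depthMap comes from: TrueDepth captures often deliver disparity (hdis, units of 1/metres) rather than depth (hdep, metres). Converting the AVDepthData before reading the buffer guarantees metres; a sketch:
import AVFoundation

// Assumed context: called from the AVCapturePhotoCaptureDelegate callback.
func depthMapInMetres(from photo: AVCapturePhoto) -> CVPixelBuffer? {
    guard let depthData = photo.depthData else { return nil }
    // converting(toDepthDataType:) turns disparity into depth if needed.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    return converted.depthDataMap // Float32 distance values in metres
}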
I'm trying to capture a depth map image using the TrueDepth camera on an iPhone 15 Plus. I was able to set up the AVCaptureSession with an AVCaptureDeviceInput for builtInTrueDepthCamera and an AVCapturePhotoOutput with isDepthDataDeliveryEnabled set to true. I also manually set the activeDepthDataFormat of the capture device to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32. Finally, I enabled isDepthDataDeliveryEnabled, embedsDepthDataInPhoto, embedsPortraitEffectsMatteInPhoto and embedsSemanticSegmentationMattesInPhoto in AVCapturePhotoSettings before capturing the photo using the capturePhoto(with: photoSettings, delegate: self) method.
I checked by manually printing the activeDepthDataFormat of the capture device. Before setting it, the default is:
Optional('dpth'/'hdis' 640x 480, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
After forcing it to kCVPixelFormatType_DepthFloat16 or kCVPixelFormatType_DepthFloat32, the format is:
Optional('dpth'/'hdep' 160x 120, { 2- 30 fps}, photo dims:{}, fov:73.699, system exposure bias range:-2.0-2.0)
But when I receive the captured photo in
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?)
The depth map is
Optional(hdis 640x480 (high/abs) calibration:{intrinsicMatrix: [2723.07 0.00 2016.00 | 0.00 2723.07 1512.00 | 0.00 0.00 1.00], extrinsicMatrix: [1.00 0.00 0.00 0.00 | 0.00 1.00 0.00 0.00 | 0.00 0.00 1.00 0.00] pixelSize:0.001 mm, distortionCenter:{2016.00,1512.00}, ref:{4032x3024}})
Here it shows hdis instead of hdep. Why is it capturing a disparity map instead of a true depth map?
The depth quality is high and depth data accuracy is absolute.
Here is my code
import UIKit
import AVKit
import AVFoundation

class ViewController: UIViewController, AVCapturePhotoCaptureDelegate {

    @IBOutlet weak var previewView: UIView!
    @IBOutlet weak var resultLbl: UILabel!

    private var session = AVCaptureSession()
    private var captureDevice: AVCaptureDevice?
    private var inputDevice: AVCaptureDeviceInput?
    private var photoOutput: AVCapturePhotoOutput?
    private var photoSettings: AVCapturePhotoSettings?
    private var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupCaptureSession()
    }

    func setupCaptureSession() {
        captureDevice = AVCaptureDevice.default(.builtInTrueDepthCamera, for: .video, position: .unspecified)
        guard let captureDevice else {
            print("ERROR: UNABLE TO SET TRUE DEPTH CAMERA")
            return
        }
        session.beginConfiguration()
        do {
            inputDevice = try AVCaptureDeviceInput(device: captureDevice)
            guard let inputDevice else {
                print("ERROR: UNABLE TO SET UP INPUT DEVICE")
                return
            }
            if session.canAddInput(inputDevice) {
                session.addInput(inputDevice)
            }
        } catch {
            print(error)
        }
        photoOutput = AVCapturePhotoOutput()
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        if session.canAddOutput(photoOutput) {
            session.addOutput(photoOutput)
        }
        session.sessionPreset = .photo
        photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        print("IS DEPTH ENABLED: \(photoOutput.isDepthDataDeliveryEnabled)")
        session.commitConfiguration()

        // Pick the first true-depth (not disparity) format supported by the device.
        let availableFormats = captureDevice.activeFormat.supportedDepthDataFormats
        let depthFormat = availableFormats.filter { format in
            let pixelFormatType = CMFormatDescriptionGetMediaSubType(format.formatDescription)
            return (pixelFormatType == kCVPixelFormatType_DepthFloat16 ||
                    pixelFormatType == kCVPixelFormatType_DepthFloat32)
        }.first

        session.beginConfiguration()
        try! captureDevice.lockForConfiguration()
        captureDevice.activeDepthDataFormat = depthFormat
        captureDevice.unlockForConfiguration()
        session.commitConfiguration()
        self.setupPreviewLayer()
    }

    func setupPreviewLayer() {
        cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: session)
        cameraPreviewLayer?.videoGravity = .resizeAspectFill
        if let cameraPreviewLayer {
            self.previewView.layer.addSublayer(cameraPreviewLayer)
            cameraPreviewLayer.frame = self.previewView.bounds
        }
        DispatchQueue.global(qos: .userInteractive).async {
            self.session.startRunning()
        }
    }

    @IBAction func captureBtnPressed(_ sender: Any) {
        photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])
        guard let photoSettings else {
            print("ERROR: UNABLE TO SET UP PHOTO SETTINGS")
            return
        }
        guard let photoOutput else {
            print("ERROR: UNABLE TO SET UP PHOTO OUTPUT")
            return
        }
        photoSettings.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
        photoSettings.embedsDepthDataInPhoto = true
        photoSettings.embedsPortraitEffectsMatteInPhoto = true
        photoSettings.embedsSemanticSegmentationMattesInPhoto = true
        photoOutput.capturePhoto(with: photoSettings, delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        print(photo.depthData)
        switch photo.depthData?.depthDataQuality {
        case .low:
            print("Depth quality is low")
        case .high:
            print("Depth quality is high")
        case nil:
            print("Depth quality is nil")
        }
        switch photo.depthData?.depthDataAccuracy {
        case .relative:
            print("Depth accuracy is relative")
        case .absolute:
            print("Depth accuracy is absolute")
        case nil:
            print("Depth accuracy is nil")
        }
        if let imageData = photo.fileDataRepresentation() {
            if let image = UIImage(data: imageData) {
                UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
            }
        }
    }
}
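Not an explanation for why hdis is delivered despite the hdep activeDepthDataFormat, but as a workaround the delivered depth data can be normalized to true depth in the delegate before use:
import AVFoundation

// Workaround sketch: convert whatever the pipeline delivers to true depth.
func trueDepth(from photo: AVCapturePhoto) -> AVDepthData? {
    guard var depthData = photo.depthData else { return nil }
    if depthData.depthDataType != kCVPixelFormatType_DepthFloat32 {
        // Converts the disparity map ('hdis') to a depth map ('hdep') in metres.
        depthData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    }
    return depthData
}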
When trying to edit some Live Photos, calling PHLivePhotoEditingContext.saveLivePhoto results in the following error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12815), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x300d05380 {Error Domain=NSOSStatusErrorDomain Code=-12815 "(null)"}}
I was able to replicate it on my device by taking a new Live Photo. Not sure what's wrong with that one specifically, not all Live Photos replicate the issue.
I've submitted FB15880825 with a sysdiagnose and a Photos Diagnostics as well. Any ideas what's going on here? It's impacting multiple customers. Thanks!
I have an app that allows the user to change a photo’s EXIF metadata. To do this, I request a content editing input, get the full-size image, modify its properties, create a content editing output, write the output image to the rendered content URL, then call performChanges on the PHPhotoLibrary, creating an asset change request for that asset and setting its content editing output. This works as expected for regular photos, but Live Photos get converted to a regular photo.
To address this, I’m doing something similar by changing the properties of the .photo image in the Live Photo. I detect when the content editing input has a Live Photo, create a Live Photo editing context, set a frame processor that returns the frame’s image (with its properties set to the updated properties) when the frame type is photo, then I create the content editing output and save the Live Photo to that output. It modifies the Live Photo successfully, but the metadata is not updated. If you get the full-size image again, the properties are the original properties. If you look at the EXIF metadata using an app like Metapho, it remains unchanged. What am I doing wrong here? Thanks!
let imageURL = contentEditingInput.fullSizeImageURL!
let inputImage = CIImage(contentsOf: imageURL, options: [.applyOrientationProperty: true])!
var metadata: [AnyHashable: Any] = inputImage.properties
// Edit the metadata as desired...

let editingContext = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
editingContext.frameProcessor = { frame, error -> CIImage? in
    // Edit only the still photo
    if frame.type == .photo {
        return frame.image.settingProperties(metadata)
    }
    return frame.image
}

let contentEditingOutput = try await withCheckedThrowingContinuation { continuation in
    let editingOutput = PHContentEditingOutput(contentEditingInput: contentEditingInput)
    editingOutput.adjustmentData = adjustmentData
    editingContext.saveLivePhoto(to: editingOutput) { success, error in
        if success {
            continuation.resume(returning: editingOutput)
        } else {
            continuation.resume(throwing: error!)
        }
    }
}

try await PHPhotoLibrary.shared().performChanges {
    let request = PHAssetChangeRequest(for: asset)
    request.contentEditingOutput = contentEditingOutput
}
Hi fellow iOS developers! 👋
I've written Swift code that converts a video (from a URL) into a Live Photo after downloading it. The conversion process seems fine, but when I try to set the generated Live Photo as a wallpaper on iOS 17+, it shows the message 'Motion not Available.'
Has anyone else experienced this issue or know why this might be happening? Could it be related to changes in iOS 17 Live Photo handling or the generated file structure? Any help or suggestions would be greatly appreciated! 🙏
I don’t know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can’t scrub within the video, only within several seconds of the pause, or, worse, it restarts the whole damn video.
It used to be an easy and enjoyable process to harvest photos from a video, and you’ve turned it into the most frustrating part of operating my phone.
Please release a patch to optionally enable the old scrubbing behavior.
I have an iPad app, written in Objective-C and distributed through the Enterprise program, as it is not for public use but specific to some large companies.
The app has a local database and works offline.
For some functions of the app I need to display images (not edit or cut them, just display them).
Right now the integrated viewer is MWPhotoBrowser, which has not been maintained for almost 10 years, so in addition to compilation warnings I have to fight some historical bugs, especially with high-resolution images. https://github.com/mwaterfall/MWPhotoBrowser
Do you know of a modern and maintained OFFLINE photo viewer? I'm evaluating both free and paid options (maybe an SDK). My needs are very basic.
I have found this one, https://github.com/TimOliver/TOCropViewController, but I would need to disable the photo editing features, and above all I would lose the useful ability to display multiple images (MWPhotoBrowser showed a gallery for multiple images).
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])!
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])!
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
I also support editing the still photo in Live Photos. To do this, you create a PHLivePhotoEditingContext, set the frameProcessor block, which gives you a CIImage that I edit when frame.type is .photo, then you create a PHContentEditingOutput and call saveLivePhoto. I’m not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don’t see any difference between those images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
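For reference, a condensed sketch of the flow described above; editingInput and adjustmentData are assumed to exist, and applyEdit is a hypothetical stand-in for the actual edit. The open question is where a gain map could be reattached, since nothing analogous to writeHEIFRepresentation's .hdrGainMapImage option appears to be available on this path:
import Photos
import CoreImage

// Hypothetical stand-in for the real edit.
func applyEdit(to image: CIImage) -> CIImage { image }

func saveEditedLivePhoto(editingInput: PHContentEditingInput,
                         adjustmentData: PHAdjustmentData,
                         completion: @escaping (PHContentEditingOutput?) -> Void) {
    guard let context = PHLivePhotoEditingContext(livePhotoEditingInput: editingInput) else {
        completion(nil)
        return
    }
    context.frameProcessor = { frame, _ in
        // The still frame here appears to be SDR only; editing it drops HDR.
        guard frame.type == .photo else { return frame.image }
        return applyEdit(to: frame.image)
    }
    let output = PHContentEditingOutput(contentEditingInput: editingInput)
    output.adjustmentData = adjustmentData
    context.saveLivePhoto(to: output) { success, _ in
        completion(success ? output : nil)
    }
}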