I have an app that allows the user to change a photo’s EXIF metadata. To do this, I request a content editing input, get the full-size image, modify its properties, create a content editing output, write the output image to the rendered content URL, then call performChanges on the PHPhotoLibrary, creating an asset change request for that asset and setting its content editing output. This works as expected for regular photos, but Live Photos lose their Live Photo status and get converted to regular photos.
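For reference, the regular-photo flow I’m describing looks roughly like this (a condensed sketch; error handling is omitted, and asset, updatedProperties, and adjustmentData come from elsewhere in my app):
let options = PHContentEditingInputRequestOptions()
options.isNetworkAccessAllowed = true
asset.requestContentEditingInput(with: options) { input, _ in
    guard let input = input, let url = input.fullSizeImageURL else { return }
    let image = CIImage(contentsOf: url, options: [.applyOrientationProperty: true])!
    let edited = image.settingProperties(updatedProperties)

    let output = PHContentEditingOutput(contentEditingInput: input)
    output.adjustmentData = adjustmentData
    try! CIContext().writeJPEGRepresentation(of: edited,
                                             to: output.renderedContentURL,
                                             colorSpace: edited.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!)

    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetChangeRequest(for: asset)
        request.contentEditingOutput = output
    }, completionHandler: nil)
}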
To address this, I’m doing something similar by changing the properties of the .photo image in the Live Photo. I detect when the content editing input has a Live Photo, create a Live Photo editing context, and set a frame processor that, when the frame type is photo, returns the frame’s image after setting its properties to the updated properties; then I create the content editing output and save the Live Photo to that output. It modifies the Live Photo successfully, but the metadata is not updated. If I get the full-size image again, the properties are still the original properties. If I look at the EXIF metadata using an app like Metapho, it remains unchanged. What am I doing wrong here? Thanks!
let imageURL = contentEditingInput.fullSizeImageURL!
let inputImage = CIImage(contentsOf: imageURL, options: [.applyOrientationProperty: true])!
var metadata: [AnyHashable: Any] = inputImage.properties
// Edit the metadata as desired...
let editingContext = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
editingContext.frameProcessor = { frame, error -> CIImage? in
    // Edit only the still photo
    if frame.type == .photo {
        return frame.image.settingProperties(metadata)
    }
    return frame.image
}
let contentEditingOutput = try await withCheckedThrowingContinuation { continuation in
    let editingOutput = PHContentEditingOutput(contentEditingInput: contentEditingInput)
    editingOutput.adjustmentData = adjustmentData
    editingContext.saveLivePhoto(to: editingOutput) { success, error in
        if success {
            continuation.resume(returning: editingOutput)
        } else {
            continuation.resume(throwing: error!)
        }
    }
}
try await PHPhotoLibrary.shared().performChanges {
    let request = PHAssetChangeRequest(for: asset)
    request.contentEditingOutput = contentEditingOutput
}
Photos and Imaging
Integrate still images and other forms of photography into your apps.
Posts under Photos and Imaging tag
Hi fellow iOS developers! 👋
I've written Swift code that converts a video (from a URL) into a Live Photo after downloading it. The conversion process seems fine, but when I try to set the generated Live Photo as a wallpaper on iOS 17+, it shows the message 'Motion not Available.'
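For context, the save step looks roughly like this (a condensed sketch; photoURL and videoURL are the files produced by my conversion step, and I tag both with the same content identifier beforehand, which as I understand it is required for Photos to pair them as a Live Photo):
PHPhotoLibrary.shared().performChanges({
    let request = PHAssetCreationRequest.forAsset()
    let options = PHAssetResourceCreationOptions()
    options.shouldMoveFile = false
    request.addResource(with: .photo, fileURL: photoURL, options: options)
    request.addResource(with: .pairedVideo, fileURL: videoURL, options: options)
}, completionHandler: { success, error in
    print("Saved Live Photo: \(success), error: \(String(describing: error))")
})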
Has anyone else experienced this issue or know why this might be happening? Could it be related to changes in iOS 17 Live Photo handling or the generated file structure? Any help or suggestions would be greatly appreciated! 🙏
I don’t know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can’t scrub within the video, only within several seconds of the pause, or, worse, it restarts the whole damn video.
It used to be an easy and enjoyable process to harvest photos from a video, and you’ve turned it into the most frustrating part of operating my phone.
Please release a patch to optionally enable the old scrubbing behavior.
I have an iPad app, written in Objective-C and distributed through the Enterprise developer program, as it is not for public use but specific to some large companies.
The app has a local database and works offline.
For some functions of the app I need to display images (not edit or crop them, just display them).
Right now the integrated viewer is MWPhotoBrowser, which has not been maintained for almost 10 years, so in addition to compilation warnings I have to fight some historical bugs, especially with high-resolution images. https://github.com/mwaterfall/MWPhotoBrowser
Do you know of a modern and maintained OFFLINE photo viewer? I'll evaluate both free and paid options (maybe an SDK). My needs are very basic.
I have found this one, https://github.com/TimOliver/TOCropViewController, but I would need to disable its photo editing features, and above all I would lose the useful feature of displaying multiple images (MWPhotoBrowser showed a gallery for multiple images).
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
I also support editing the still photo in Live Photos. To do this I create a PHLivePhotoEditingContext, set its frameProcessor block, and edit the CIImage it gives me when frame.type is .photo; then I create a PHContentEditingOutput and call saveLivePhoto. I’m not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don’t see any difference between these images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo?
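For reference, the (SDR-only) flow I’m describing looks roughly like this; editStill(_:) stands in for my actual editing code, and I don’t see where a gain map image could be supplied or read back:
let context = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
context.frameProcessor = { frame, _ -> CIImage? in
    if frame.type == .photo {
        return editStill(frame.image)   // SDR edit only; no access to the gain map here
    }
    return frame.image
}
let output = PHContentEditingOutput(contentEditingInput: contentEditingInput)
output.adjustmentData = adjustmentData
context.saveLivePhoto(to: output) { success, error in
    // then performChanges with a PHAssetChangeRequest setting contentEditingOutput
}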
I've requested photo library authorization in my main app.
PHPhotoLibrary.requestAuthorization(for: .readWrite) { status in }
I've added the privacy description in both the main app and the extension.
But no matter whether the device is locked or unlocked, when I call
let fetchResult = PHAsset.fetchAssets(with: .image, options: nil)
let count = fetchResult.count
the count is always zero, even after a new photo is saved to the album in the same session.
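For completeness, here is the fetch in context, with an authorization status check added just for illustration:
let status = PHPhotoLibrary.authorizationStatus(for: .readWrite)
print("Authorization status in extension: \(status.rawValue)")

let fetchResult = PHAsset.fetchAssets(with: .image, options: nil)
print("Fetched asset count: \(fetchResult.count)")   // always zero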
I have an app that edits photos in your library. When I call
try CIContext().writeHEIFRepresentation(of: editedImage, to: fileURL, format: .RGBA8, colorSpace: originalImage.colorSpace!)
The following is logged to the console:
writeImageAtIndex:1012: ⭕️ ERROR: 'App' is trying to save an opaque image (5712x4284) with 'AlphaLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
What does that mean and how can I resolve it?
Xcode Version 16.0 (16A242d)
iOS 18.1 (22B82)
Whenever I try to open the Photos app on my iPhone 14 Pro, it immediately crashes.
I cannot access my photos. Please help me solve this!
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the src width - current coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is whether there is a way to handle this filter's requirement of sampling from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
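For the simple backwards kernel above, such a custom ROI might look like this (a sketch in Swift, untried; it assumes the source extent has its origin at zero, so for each destination tile the only source pixels needed are the horizontally mirrored tile):
let extent = src.extent
let output = mirrorsKernel.apply(
    extent: extent,
    roiCallback: { _, destRect in
        // The mirror of destRect about the image's vertical center line
        return CGRect(x: extent.width - destRect.maxX,
                      y: destRect.origin.y,
                      width: destRect.width,
                      height: destRect.height)
    },
    arguments: [src]
)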
Any thoughts/input appreciated, thanks!
Hello,
I’ve encountered an issue where a green line appears only in the thumbnail view of an image in the iPhone photo gallery. The green line is not present in the actual image itself.
Here are some additional details:
• The issue occurs only when saving the image as a JPEG. When saved as PNG, the green line does not appear.
• The green line also shows up when using the PHAsset method requestImage(for:targetSize:). Depending on the targetSize, the resulting image may contain the green line.
• Interestingly, this issue does not appear on iPhone Xs Max running iOS 15.2.1. However, the green line does appear on iPhone 15 Pro Max running iOS 18.0.1 when viewing the same image.
I have attached the problematic image for your reference.
The following images are screen captures that show the issue occurring on my iPhone.
iPhone 15 Pro Max (iOS 18.0.1)
iPhone Xs Max (iOS 15.2.1)
Could this be related to a display or gallery app issue on iOS? Any advice or solutions would be greatly appreciated.
Thank you!
Hello developer community.
I recently purchased my new iPhone 16 Pro Max; it is a premium device with great quality overall.
However, I am having big trouble shooting in ProRAW Max (48 MP mode) with the native camera.
Just to be clear, the problem I will describe does not happen in third-party apps such as ProCam; only with the native camera.
When I use ProRAW Max, take the photo, and view it in the gallery, the image can’t load and render properly. When I zoom the image all the way in, I can see pixelated portions, defects, very low resolution, and excessive denoising.
For comparison, this does not occur with my previous iPhone 15 Pro Max, or when I capture photos with ProCam (same settings and configuration) on the 16 Pro Max. I take the photo, open the gallery, and see plenty of detail when zoomed to 100%.
I tried formatting the phone and reinstalling the software via my Mac. I even tried looking at some forums to see whether anyone has the same issue; the information available so far is very limited.
I’m in contact with Apple support in my country (Portugal), and they escalated this problem to the engineers (that’s what I’ve been told).
They did all the tests remotely (via analytics and improvements) and told me that my phone is perfect in the hardware department. I will wait to be contacted again in the coming days.
I’m on iOS 18.0.1 (the latest software available at this time).
I tried multiple 16 Pro Max units from friends, family, and stores (more or less 10 units), and they all showed the exact same problem.
I’m a professional photographer, so I find this frustrating and unacceptable.
I would appreciate any additional suggestion or information. Thank you!
Cannot add photos or files because they are bigger than 5 MB.
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. But you can turn off the option to view HDR photos in the complete dynamic range in Settings > Photos. When you do that, open a photo, and tap the edit button, it does not appear in the full range as expected, but when you select my app from More > Extensions, it does appear in the complete dynamic range unexpectedly. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to get that in my PHContentEditingController.
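In other words, I want to be able to do something like this, but I don't see where the flag could come from (userWantsFullHDR is hypothetical; I haven't found an API in PHContentEditingController that exposes the "View Full HDR" setting):
func configurePreview(_ imageView: UIImageView, userWantsFullHDR: Bool) {
    // .high when the user views full HDR, .standard otherwise
    imageView.preferredImageDynamicRange = userWantsFullHDR ? .high : .standard
}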
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:
SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
This won't compile because the format argument is missing. What format should be used?
In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use.
If relevant, I'm editing photos from the user's photo library, so the image was probably taken on iPhone but perhaps not. Thanks!
[[PHImageManager defaultManager]
    requestAVAssetForVideo:asset
                   options:videoOptions
             resultHandler:^(AVAsset *_Nullable avAsset,
                             AVAudioMix *_Nullable audioMix,
                             NSDictionary *_Nullable info) {
        if ([avAsset isKindOfClass:[AVURLAsset class]]) {
            AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
            NSURL *videoURL = urlAsset.URL;
            mediaInfo[@"path"] = videoURL.absoluteString;
        } else {
            // Failed to get video asset
            completion(nil);
        }
    }];
Before iOS 18, I was able to access the AVAsset video using the method mentioned above with the URL, but starting from iOS 18, the following error appears:
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs.
For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, while on the other hand, they can be viewed correctly in the system Photos app:
Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with iPhone 15 Pro.
Even if the image pair has the same size, when viewed in my app it has some artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application can now support viewing spatial photos, but I would like to keep my current in-app implementation, for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
// the data is from a photos picker item
let data = try await photo.loadTransferable(type: Data.self)!
let source = CGImageSourceCreateWithData(data as CFData, nil)!
let sbsImage = source.extractSpatialPhoto()
extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCIImage = extractSpatialImage(at: 0),
              let rightCIImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCIImage)
        let rightImage = UIImage(ciImage: rightCIImage)
        guard leftImage.size == rightImage.size else {
            return nil
        }

        // merge left + right into a single side-by-side image
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double]
        {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
I'm using UIKit to display a long list of large images inside a SwiftUI ScrollView and LazyHStack using UIViewControllerRepresentable. When an image is loaded, I'm using SDWebImage to load the image from the disk.
As the user navigates through the list and continues to load more images, more memory is used and is never cleared, even as the images are unloaded by the LazyHStack. Eventually, the app reaches the memory limit and crashes. This issue persists if I load the image with UIImage(contentsOfFile: ...) instead of SDWebImage.
How can I free the memory used by UIImage when the view is removed?
ScrollView(.horizontal, showsIndicators: false) {
    LazyHStack(spacing: 16) {
        ForEach(allItems) { item in
            TestImageDisplayRepresentable(item: item)
                .frame(width: geometry.size.width, height: geometry.size.height)
                .id(item.id)
        }
    }
    .scrollTargetLayout()
}
import UIKit
import SwiftUI
import SDWebImage

class TestImageDisplay: UIViewController {
    var item: TestItem

    init(item: TestItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
        imageView.center = view.center
        view.addSubview(imageView)
        imageView.sd_setImage(with: item.imageURL, placeholder: nil)
    }
}

struct TestImageDisplayRepresentable: UIViewControllerRepresentable {
    var item: TestItem

    func makeUIViewController(context: Context) -> TestImageDisplay {
        return TestImageDisplay(item: item)
    }

    func updateUIViewController(_ uiViewController: TestImageDisplay, context: Context) {
        uiViewController.item = item
    }
}
Question:
I'm working on a project in Xcode 16.1, using Swift 6 with iOS 18. My code is working fine in Swift 5, but I'm running into concurrency issues when upgrading to Swift 6, particularly with the @preconcurrency attribute in AVFoundation.
Here is the relevant part of my code:
import SwiftUI
@preconcurrency import AVFoundation
struct OverlayButtonBar: View {
    ...
    let audioTracks = await loadTracks(asset: asset, mediaType: .audio)
    ...

    // Tracks are extracted before crossing concurrency boundaries
    private func loadTracks(asset: AVAsset, mediaType: AVMediaType) async -> [AVAssetTrack] {
        do {
            return try await asset.load(.tracks).filter { $0.mediaType == mediaType }
        } catch {
            print("Error loading tracks: \(error)")
            return []
        }
    }
}
Issues:
When using @preconcurrency, I get the warning:
@preconcurrency attribute on module AVFoundation has no effect. The fix suggested by Xcode is to remove @preconcurrency.
But if I remove @preconcurrency, I get both a warning and an error:
Warning: Add '@preconcurrency' to treat 'Sendable'-related errors from module 'AVFoundation' as warnings.
Error: Non-sendable type [AVAssetTrack] returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary. (Class AVAssetTrack does not conform to the Sendable protocol (AVFoundation.AVAssetTrack)). This error appears if I attempt to directly access the non-Sendable AVAssetTrack in an async context:
let audioTracks = await loadTracks(asset: asset, mediaType: .audio)
How can I resolve this issue while staying compliant with Swift 6 concurrency rules? Is there a recommended approach to handling non-Sendable types like AVAssetTrack in concurrency contexts?
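For example, would something like the following be the intended pattern? (loadAudioTrackIDs is a hypothetical helper: the AVURLAsset and its tracks never leave the function, and only Sendable value types cross the concurrency boundary.)
private func loadAudioTrackIDs(from url: URL) async -> [CMPersistentTrackID] {
    let asset = AVURLAsset(url: url)   // created locally, so the non-Sendable asset never crosses an actor boundary
    do {
        let tracks = try await asset.loadTracks(withMediaType: .audio)
        return tracks.map(\.trackID)   // CMPersistentTrackID is a plain value type
    } catch {
        print("Error loading tracks: \(error)")
        return []
    }
}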
Appreciate any guidance on making this work in Swift 6, especially considering it worked fine in Swift 5.
Thanks in advance!
Hello,
I’m encountering an issue with the PHPhotoLibrary API in Swift 6 and iOS 18. The code I’m using worked fine in Swift 5, but I’m now seeing the following error:
Sending main actor-isolated value of type '() -> Void' with later accesses to nonisolated context risks causing data races
Here is the problematic code:
Button("Save to Camera Roll") {
saveToCameraRoll()
}
...
private func saveToCameraRoll() {
guard let overlayFileURL = mediaManager.getOverlayURL() else {
return
}
Task {
do {
let status = await PHPhotoLibrary.requestAuthorization(for: .addOnly)
guard status == .authorized else {
return
}
try await PHPhotoLibrary.shared().performChanges({
if let creationRequest = PHAssetCreationRequest.creationRequestForAssetFromVideo(atFileURL: overlayFileURL) {
creationRequest.creationDate = Date()
}
})
await MainActor.run {
saveSuccessMessage = "Video saved to Camera Roll successfully"
}
} catch {
print("Error saving video to Camera Roll: \(error.localizedDescription)")
}
}
}
Problem Description:
The error message suggests that a main actor-isolated value of type () -> Void is being accessed in a nonisolated context, potentially leading to data races.
This issue arises specifically at the call to PHPhotoLibrary.shared().performChanges.
Questions:
How can I address the data race issues related to main actor isolation when using PHPhotoLibrary.shared().performChanges?
What changes, if any, are required to adapt this code for Swift 6 and iOS 18 while maintaining thread safety and actor isolation?
Are there any recommended practices for managing main actor-isolated values in asynchronous operations to avoid data races?
I appreciate any points or suggestions to resolve this issue effectively.
Thank you!
If adding text or shapes to a photo, you cannot change their colour. This was working in a prior release.
Please fix this in the final release.
Hello!
I'm getting crash reports in PHPickerViewController for iOS 17 users only. Can someone point me in the right direction as to what the root cause could be in my case, since it's related to PHPickerViewController?
Thread 0 name:
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000001e7e9342c __pthread_kill + 8 (:-1)
1 libsystem_pthread.dylib 0x00000001fbc32c0c pthread_kill + 268 (pthread.c:1721)
2 libsystem_c.dylib 0x00000001a6d36ba0 abort + 180 (abort.c:118)
3 PhotoFoundation 0x00000001d2420280 -[PFAssertionPolicyAbort notifyAssertion:] + 68 (PFAssert.m:432)
4 PhotoFoundation 0x00000001d2420068 -[PFAssertionPolicyComposite notifyAssertion:] + 160 (PFAssert.m:259)
5 PhotoFoundation 0x00000001d242061c -[PFAssertionPolicyUnique notifyAssertion:] + 176 (PFAssert.m:292)
6 PhotoFoundation 0x00000001d241f7f4 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140 (PFAssert.m:169)
7 PhotoFoundation 0x00000001d2420c74 _PFAssertFailHandler + 148 (PFAssert.m:127)
8 PhotosUI 0x0000000216b59e30 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356 (PHPicker.m:1502)
9 PhotosUI 0x0000000216b5a954 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52 (PHPicker.m:1454)
Crash report: 2024-09-05_18-27-56.7526_+0500-a953eaee085338a690ac1604a78de86e3e49d182.crash