I have an app that allows the user to change a photo’s EXIF metadata. To do this, I request a content editing input, get the full size image, modify its properties, create a content editing output, write the output image to the rendered content URL, then call performChanges on the PHPhotoLibrary, creating an asset change request for that asset and setting its content editing output. This works as expected for regular photos, but Live Photos lose their Live Photo status and get converted to regular photos.
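For reference, the regular-photo path described above looks roughly like this (a sketch; asset, adjustmentData, and the edited metadata dictionary are placeholders, and JPEG output is just an example):
let options = PHContentEditingInputRequestOptions()
options.canHandleAdjustmentData = { _ in false }

asset.requestContentEditingInput(with: options) { input, _ in
    guard let input, let url = input.fullSizeImageURL else { return }

    // Load the full-size image and merge in the edited properties.
    guard let image = CIImage(contentsOf: url, options: [.applyOrientationProperty: true]) else { return }
    let edited = image.settingProperties(metadata)

    // Render the edited image to the output's renderedContentURL.
    let output = PHContentEditingOutput(contentEditingInput: input)
    output.adjustmentData = adjustmentData
    try? CIContext().writeJPEGRepresentation(of: edited,
                                             to: output.renderedContentURL,
                                             colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!)

    PHPhotoLibrary.shared().performChanges({
        let request = PHAssetChangeRequest(for: asset)
        request.contentEditingOutput = output
    }, completionHandler: nil)
}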
To address this, I’m doing something similar for Live Photos by changing the properties of the .photo frame. I detect when the content editing input contains a Live Photo, create a Live Photo editing context, and set a frame processor that, when the frame type is .photo, returns the frame’s image with its properties replaced by the updated ones. Then I create the content editing output and save the Live Photo to it. The edit saves successfully, but the metadata is not updated: if I request the full size image again, the properties are still the original ones, and an app like Metapho shows the EXIF metadata unchanged. What am I doing wrong here? Thanks!
let imageURL = contentEditingInput.fullSizeImageURL!
let inputImage = CIImage(contentsOf: imageURL, options: [.applyOrientationProperty: true])!
var metadata: [AnyHashable: Any] = inputImage.properties

// Edit the metadata as desired...

let editingContext = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
editingContext.frameProcessor = { frame, error -> CIImage? in
    // Edit only the still photo
    if frame.type == .photo {
        return frame.image.settingProperties(metadata)
    }
    return frame.image
}

let contentEditingOutput = try await withCheckedThrowingContinuation { continuation in
    let editingOutput = PHContentEditingOutput(contentEditingInput: contentEditingInput)
    editingOutput.adjustmentData = adjustmentData
    editingContext.saveLivePhoto(to: editingOutput) { success, error in
        if success {
            continuation.resume(returning: editingOutput)
        } else {
            continuation.resume(throwing: error!)
        }
    }
}

try await PHPhotoLibrary.shared().performChanges {
    let request = PHAssetChangeRequest(for: asset)
    request.contentEditingOutput = contentEditingOutput
}
I have tried several times to use the PhotosPickerItem loadTransferable function with the goal of receiving a progress value, especially when loading large video media.
However, the Progress object returned by the function always has isIndeterminate == true, so there is no progress value to observe.
Is there some way to make it work? Some configuration I might have overlooked? Or is it just not working?
I might have to revert to the UIKit photo picker because of this.
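For reference, this is roughly the call pattern I’m using (a sketch; the KVO observation is only there to show where I expect to read fractionCompleted):
import PhotosUI
import SwiftUI

func load(item: PhotosPickerItem) {
    // loadTransferable returns a Progress object.
    let progress = item.loadTransferable(type: Data.self) { result in
        switch result {
        case .success(let data):
            print("Loaded \(data?.count ?? 0) bytes")
        case .failure(let error):
            print("Failed: \(error)")
        }
    }

    print(progress.isIndeterminate)   // always true in my tests, even for large videos
    let observation = progress.observe(\.fractionCompleted) { progress, _ in
        print("fraction:", progress.fractionCompleted)
    }
    _ = observation // in real code the observation would need to be retained
}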
I have a web view that loads videos, and we would like to be able to play them fullscreen, so we enable the fullscreen preference described in the documentation. However, when it is set to true, entering fullscreen on a video and then pausing it makes the entire video player disappear.
You can exit fullscreen and try to fullscreen the video player again, but then the entire app view disappears and you see your desktop background (or whatever is currently behind your app). This behavior is consistent across multiple websites in the app. I have set up a sample project you can test here
The main error that appears in the console is below. I have not been able to find a solution; maybe I am simply missing something here. I am on macOS Sequoia 15.2.
Attempting to update all DD element frames, but the bounds or contentsRect are invalid. Bounds: X: 0.00 Y: 0.00, W: 0.00 H: 0.00, contentsRect: X: 0.00 Y: 0.00, W: 1.00 H: 1.00 , skipping
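For reference, the fullscreen preference mentioned above amounts to something like this in my setup (a sketch; the frame and URL are placeholders):
import WebKit

let configuration = WKWebViewConfiguration()
// The element-fullscreen preference that triggers the behavior described above.
configuration.preferences.isElementFullscreenEnabled = true

let webView = WKWebView(frame: .zero, configuration: configuration)
webView.load(URLRequest(url: URL(string: "https://example.com")!))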
We have had the same video player in our app for at least 5 years with few issues, but after the iOS 18 update, videos that our users have downloaded for offline viewing now play back at 2x speed.
While customizing and using an ImagePicker, we found that metadata changes are not reflected as expected, so we are reporting it.
The situation is as follows.
The time or time zone of an image is changed in the Photos app.
For example, the time zone of an image with an actual capture date of 2024:11:08 08:27:44 is changed so that it becomes 2024:11:07 17:27:44.
Image data is extracted from the PHAsset using PHImageManager.
The metadata is read from that image data.
The Exif tag information does not reflect the time or time zone that was changed in the Photos app.
let asset: PHAsset = ...
....
let options = PHImageRequestOptions()
options.isSynchronous = true
options.version = .current
options.deliveryMode = .highQualityFormat
options.resizeMode = .none
options.normalizedCropRect = .zero
options.isNetworkAccessAllowed = true
options.progressHandler = { progress, error, _, _ in }
PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { imageData, uti, orientation, info in
    let cgImageSource = CGImageSourceCreateWithData(imageData! as CFData, nil)
    let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource!, 0, nil) as? Dictionary<String, Any>
    let exif = properties!["{Exif}"]
    let dictionary = exif as? Dictionary<String, Any>
}
Metadata Check
In this case, the change is reflected in the creationDate of the PHAsset, so it can be partially compensated for by forcibly replacing the metadata with that date (see the sketch below).
However, because PHAsset does not carry time zone information, when the time zone is changed as well, it is impossible to calculate the correct local time for that zone.
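A minimal sketch of that compensation, assuming the asset and the properties dictionary read earlier are already in hand (compensateCaptureDate is a hypothetical helper, and the date format string is an assumption):
import Photos
import ImageIO

func compensateCaptureDate(in properties: [String: Any], using asset: PHAsset) -> [String: Any] {
    var updated = properties
    var exif = updated[kCGImagePropertyExifDictionary as String] as? [String: Any] ?? [:]

    if let creationDate = asset.creationDate {
        // PHAsset.creationDate carries no time zone, so the string can only be
        // formatted in a chosen zone (here, the device's current one).
        let formatter = DateFormatter()
        formatter.dateFormat = "yyyy:MM:dd HH:mm:ss"
        exif[kCGImagePropertyExifDateTimeOriginal as String] = formatter.string(from: creationDate)
    }

    updated[kCGImagePropertyExifDictionary as String] = exif
    return updated
}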
PHPicker
This issue is resolved when using the PHPickerResult provided by PHPicker.
extension PhotosPickerViewController: PHPickerViewControllerDelegate {
    public func picker(_ picker: PHPickerViewController,
                       didFinishPicking results: [PHPickerResult]) {
        .....
        for result in results {
            let identifier = UTType.image.identifier
            if result.itemProvider.hasItemConformingToTypeIdentifier(identifier) {
                result.itemProvider.loadDataRepresentation(forTypeIdentifier: identifier) { data, error in
                    guard let data = data,
                          let cgImageSource = CGImageSourceCreateWithData(data as CFData, nil),
                          let properties = CGImageSourceCopyPropertiesAtIndex(cgImageSource, 0, nil) as? Dictionary<String, Any>,
                          let exif = properties["{Exif}"],
                          let dictionary = exif as? Dictionary<String, Any>
                    else {
                        return
                    }
                }
            }
        }
    }
}
Metadata Check
Question
I wonder why this happens, and whether it is normal behavior.
If we use a customized picker instead of the system picker Apple provides, is there any way to compensate for this in that situation?
Different microphones can be connected via the 3.5 mm jack, USB, or Bluetooth; the behavior is the same in all cases.
The code below gets access to a microphone (connected to the 3.5 mm audio jack) and starts an audio capture session, at which point the microphone-in-use indicator appears. Capture from the audio device runs for a few seconds, then the session stops and the indicator disappears. After a pause of a few seconds, a second attempt is made to access the same microphone and start another capture session; the indicator appears again, and after a few seconds the session stops and the indicator disappears.
Next, we repeat the same steps, but after the first session stops we unplug the microphone from the connector and plug it back in before starting the second session. In this case, the second access attempt begins and the running code returns no errors, but the microphone-in-use indicator is not displayed, and that is the problem. After the program is quit and restarted, the indicator is displayed again.
This problem is only the tip of the iceberg: in practice it means that sound cannot be recorded from the microphone after reconnecting it until the program is restarted.
Is this normal behavior of the AVFoundation framework? Is there a way to make access work correctly, with the usage indicator displayed, after the microphone is reconnected? What additional steps should the programmer take in this case? Is this behavior documented anywhere?
Below is the code to demonstrate the described behavior.
I am also attaching an example of the microphone usage indicator icon.
Computer: MacBook Pro 13-inch (2020), Intel Core i7, macOS Sequoia 15.1.
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

#include <AVFoundation/AVFoundation.h>
#include <Foundation/NSString.h>
#include <Foundation/NSURL.h>

AVCaptureSession* m_captureSession = nullptr;
AVCaptureDeviceInput* m_audioInput = nullptr;
AVCaptureAudioDataOutput* m_audioOutput = nullptr;

std::condition_variable conditionVariable;
std::mutex mutex;
bool responseToAccessRequestReceived = false;

void receiveResponse()
{
    std::lock_guard<std::mutex> lock(mutex);
    responseToAccessRequestReceived = true;
    conditionVariable.notify_one();
}

void waitForResponse()
{
    std::unique_lock<std::mutex> lock(mutex);
    conditionVariable.wait(lock, [] { return responseToAccessRequestReceived; });
}

void requestPermissions()
{
    responseToAccessRequestReceived = false;
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted)
    {
        const auto status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeAudio];
        std::cout << "Request completion handler granted: " << (int)granted << ", status: " << status << std::endl;
        receiveResponse();
    }];
    waitForResponse();
}

void timer(int timeSec)
{
    for (auto timeRemaining = timeSec; timeRemaining > 0; --timeRemaining)
    {
        std::cout << "Timer, remaining time: " << timeRemaining << "s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
}

bool updateAudioInput()
{
    [m_captureSession beginConfiguration];
    if (m_audioOutput)
    {
        AVCaptureConnection *lastConnection = [m_audioOutput connectionWithMediaType:AVMediaTypeAudio];
        [m_captureSession removeConnection:lastConnection];
    }
    if (m_audioInput)
    {
        [m_captureSession removeInput:m_audioInput];
        [m_audioInput release];
        m_audioInput = nullptr;
    }
    AVCaptureDevice* audioInputDevice = [AVCaptureDevice deviceWithUniqueID: [NSString stringWithUTF8String: "BuiltInHeadphoneInputDevice"]];
    if (!audioInputDevice)
    {
        std::cout << "Error input audio device creating" << std::endl;
        return false;
    }
    // m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:nil];
    // NSError *error = nil;
    NSError *error = [[NSError alloc] init];
    m_audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioInputDevice error:&error];
    if (error)
    {
        const auto code = [error code];
        const auto domain = [error domain];
        const char* domainC = domain ? [domain UTF8String] : nullptr;
        std::cout << code << " " << domainC << std::endl;
    }
    if (m_audioInput && [m_captureSession canAddInput:m_audioInput]) {
        [m_audioInput retain];
        [m_captureSession addInput:m_audioInput];
    }
    else
    {
        std::cout << "Failed to create audio device input" << std::endl;
        return false;
    }
    if (!m_audioOutput)
    {
        m_audioOutput = [[AVCaptureAudioDataOutput alloc] init];
        if (m_audioOutput && [m_captureSession canAddOutput:m_audioOutput])
        {
            [m_captureSession addOutput:m_audioOutput];
        }
        else
        {
            std::cout << "Failed to add audio output" << std::endl;
            return false;
        }
    }
    [m_captureSession commitConfiguration];
    return true;
}

void start()
{
    std::cout << "Starting..." << std::endl;
    const bool updatingResult = updateAudioInput();
    if (!updatingResult)
    {
        std::cout << "Error, while updating audio input" << std::endl;
        return;
    }
    [m_captureSession startRunning];
}

void stop()
{
    std::cout << "Stopping..." << std::endl;
    [m_captureSession stopRunning];
}

int main()
{
    requestPermissions();
    m_captureSession = [[AVCaptureSession alloc] init];
    start();
    timer(5);
    stop();
    timer(10);
    start();
    timer(5);
    stop();
}
Does anyone know how to resolve this, or how much time it takes to get access to Playground?
I have an app that allows you to edit your photos. To preserve HDR, I edit both the SDR image and gain map image, like so:
let sdrImage = CIImage(data: data, options: [.applyOrientationProperty: true])
let gainMapImage = CIImage(data: data, options: [.applyOrientationProperty: true, .auxiliaryHDRGainMap: true])
// edit them...
try CIContext().writeHEIFRepresentation(of: sdrImage, to: url, format: .RGBA8, colorSpace: colorSpace, options: [.hdrGainMapImage: gainMapImage])
I also support editing the still photo in Live Photos. To do this, you create a PHLivePhotoEditingContext and set its frameProcessor block, which gives you a CIImage that I edit when frame.type is .photo; then you create a PHContentEditingOutput and call saveLivePhoto. I’m not seeing any way to preserve HDR here. Interestingly, the frame processor is called twice with the .photo frame type, but I don’t see any difference between those images. How can I edit a gain map image to preserve HDR in the still photo of a Live Photo? A sketch of the flow I’m describing is below.
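Roughly, the Live Photo path looks like this (a sketch; the actual edit and adjustmentData are placeholders, and there is no gain map handling, which is exactly what I’m missing):
let context = PHLivePhotoEditingContext(livePhotoEditingInput: contentEditingInput)!
context.frameProcessor = { frame, _ -> CIImage? in
    guard frame.type == .photo else { return frame.image }
    // Edit the SDR still frame here; I see no hook for the HDR gain map.
    return frame.image
}

let output = PHContentEditingOutput(contentEditingInput: contentEditingInput)
output.adjustmentData = adjustmentData
context.saveLivePhoto(to: output) { success, error in
    // Commit `output` with a PHAssetChangeRequest as usual.
}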
There are many usability issues and glitches.
When reviewing video clips in the Photos app, there is no way to get a full-screen view while pausing and scrubbing forward and backward; it only shows the shrunken view.
When zooming in on a video and scrubbing forward and backward, it only shows part of the clip; you cannot scrub to the end of the clip. This must be a glitch in the software.
Was any of this app tested for usability beforehand? iOS 18, in general, may be one of the worst releases I've experienced.
This may have been said many times already, but my goodness, fix the Photos app immediately!
It’s impossible to enjoy video playback with this update. 1) Starting with the play/pause location: it isn’t friendly to your hand. 2) How are we supposed to step back and forth just a few frames? I thought there must be a hidden button somewhere to bring back the old playback, because this is absurd.
3) With the playback buttons enabled, the video itself gets smaller, which is ridiculous.
I have noticed a problem when a PHAsset creation request is made with the resource type PHAssetResourceType.photoProxy.
let creationRequest = PHAssetCreationRequest.forAsset()
creationRequest.addResource(with: .photoProxy, data: photoData, options: nil)
creationRequest.location = location
creationRequest.isFavorite = true
After successfully saving the resulting asset through PHPhotoLibrary.shared().performChanges, I could verify it in the Photos app.
I noticed that the created photo was initially marked as Favorite and that the location was added to the info as expected. The title of the image changes from "Today" to "" too.
Next, the photo was refreshed, and location data was purged. However, the title remains unchanged and displays the .
This refresh was also observed in code: the PHPhotoLibraryChangeObserver protocol's func photoLibraryDidChange(_ changeInstance: PHChange) receives a change notification. The same asset has changed, and there is no location information anymore; the isFavorite information persists correctly.
After debugging for a few hours, I discovered that changing the resource type to .photo fixes this issue. Location data is not removed in the Photos app, and no refresh callback is seen in func photoLibraryDidChange(_ changeInstance: PHChange).
I initially used .photoProxy because, in my AVCapturePhotoCaptureDelegate implementation, I always get the callback func photoOutput(_ output: AVCapturePhotoOutput, didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?, error: Error?), and that is where I capture the photo data as photoData = deferredPhotoProxy?.fileDataRepresentation(). A sketch of that delegate method is below.
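For context, that capture path looks roughly like this (a sketch; the save call and error handling are trimmed):
import AVFoundation
import Photos

final class CaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    private var photoData: Data?

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCapturingDeferredPhotoProxy deferredPhotoProxy: AVCaptureDeferredPhotoProxy?,
                     error: Error?) {
        guard error == nil else { return }
        // The proxy's file data is what later goes into the .photoProxy creation request.
        photoData = deferredPhotoProxy?.fileDataRepresentation()
    }
}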
I’m currently working on an iOS project that involves loading and playing stereoscopic/spatial videos. I’m using the AVFoundation framework, specifically AVURLAsset, but I’m having trouble determining how to correctly load and handle stereoscopic videos.
I would like to know:
Any guidance or code snippets would be greatly appreciated; I'm not understanding the Apple developer videos very well...
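To be concrete, this is roughly what I'm attempting (a sketch; I'm assuming that checking the containsStereoMultiviewVideo media characteristic is the right starting point):
import AVFoundation

func loadSpatialVideo(from url: URL) async throws {
    let asset = AVURLAsset(url: url)
    guard let videoTrack = try await asset.loadTracks(withMediaType: .video).first else { return }

    // Check whether the track carries stereo multiview (MV-HEVC) video.
    let characteristics = try await videoTrack.load(.mediaCharacteristics)
    let isStereo = characteristics.contains(.containsStereoMultiviewVideo)
    print("stereo multiview:", isStereo)

    // Playback via AVPlayer; correctly handling the stereo layers is the part I'm unsure about.
    let playerItem = AVPlayerItem(asset: asset)
    _ = AVPlayer(playerItem: playerItem)
}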
Thank you in advance for your help!
Best,
Lau
This idea would be cool:
When people finish recording a video and later realize there's something else worth capturing, they can only create a second clip. But what if it were possible to reopen the first video and continue recording from where they left off? This would be a great convenience for many people.
I'm seeking information about the original file schema for an m4a file recorded directly on an iPhone (iPhone 5 running iOS 9.2.0).
I currently have two files from which I extracted metadata using ExifTool.
The first file was provided to me by someone who claims it was recorded on an iPhone 5 with iOS 9.2.0. I would like to verify whether this file has been edited.
File Permissions: -rwx------
Content Create Date: 2016:03:01 14:21:08+07:00
The second file was recorded by me on the same device model and iOS version.
File Permissions: -rw-r--r--
Date/Time Original: 2024:10:03 11:44:16+07:00
As you can see, the file permissions differ, and the key for the recording date also differs: one uses "Content Create Date" while the other uses "Date/Time Original." I would like to determine if the first file was edited, but I haven't been able to find any official documentation on the m4a schema or metadata structure from audio recorder apps. I reached out to support, and they directed me to this forum. Any insights or help would be appreciated.
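In case it helps to compare against what the system itself exposes, here is a minimal sketch of dumping an m4a file's metadata with AVFoundation (the keys it reports won't map one-to-one onto ExifTool's names; this is just another view of the same container):
import AVFoundation

func dumpMetadata(of url: URL) async throws {
    let asset = AVURLAsset(url: url)

    // All metadata items AVFoundation can read from the container.
    for item in try await asset.load(.metadata) {
        let key = item.identifier?.rawValue ?? "unknown"
        let value = try await item.load(.value)
        print("\(key): \(String(describing: value))")
    }
}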
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. But you can turn off the option to view HDR photos in the complete dynamic range in Settings > Photos. When you do that, open a photo, and tap the edit button, it does not appear in the full range as expected, but when you select my app from More > Extensions, it does appear in the complete dynamic range unexpectedly. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to get that in my PHContentEditingController.
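For reference, this is the kind of switch I'd like to make inside the extension (a sketch; viewFullHDREnabled is a hypothetical value I don't know how to obtain from my PHContentEditingController):
import UIKit

func configure(_ imageView: UIImageView, viewFullHDREnabled: Bool) {
    // Mirror the system "View Full HDR" setting, if there were a way to read it.
    imageView.preferredImageDynamicRange = viewFullHDREnabled ? .high : .standard
}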
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted this is how you save edits for Adaptive HDR:
SDR + HDR: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])
SDR + Gain: writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
This won't compile because the format argument is missing. What format should be used?
In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use.
If relevant, I'm editing photos from the user's photo library, so the image was probably taken on iPhone but perhaps not. Thanks!
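For completeness, this is the full call with a format filled in so it compiles; the choice of .RGBA16 here is just a placeholder assumption, which is exactly what I'm asking about:
// SDR + gain map, with the format argument made explicit.
// The .RGBA16 value is a placeholder, not a confirmed recommendation.
try CIContext().writeHEIFRepresentation(of: sdrImage,
                                        to: url,
                                        format: .RGBA16,
                                        colorSpace: p3Space,
                                        options: [.hdrGainMapImage: gainImage])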
Hi,
My app allows users to share and view spatial photos.
For viewing spatial photos, I'm using a plane in a RealityView that has a camera index switch material node, which takes the stereo images as the inputs.
For sharing native spatial photos taken on the Vision Pro prior to visionOS 2.0, I extract the stereo image pair and merge them into a single side-by-side image to upload to the app's backend.
However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, even though the same photos can be viewed correctly in the system Photos app:
Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
Even when the image pair has the same size, it shows artefacts when viewed in my app, especially around the edges of objects closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.
Google drive link here:
https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa
I know that the Quick Look preview application can now display spatial photos, but I would like to keep my in-app implementation for compatibility reasons.
Below is a code snippet that deals with the extraction. Please point out the correct way to extract the stereo image pair from a generated spatial photo.
Happy to submit a code-level support request if more information is needed.
// the data is from photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCGImage = extractSpatialImage(at: 0),
              let rightCGImage = extractSpatialImage(at: 1)
        else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCGImage)
        let rightImage = UIImage(ciImage: rightCGImage)
        guard leftImage.size == rightImage.size else {
            return nil
        }

        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double]
        {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
Hey all!
In my personal quest to future-proof my apps while moving to Swift 6, one of my apps has a problem when setting an artwork image in MPNowPlayingInfoCenter.
Here's what I'm using to set the metadata:
func setMetadata(title: String? = nil, artist: String? = nil, artwork: String? = nil) async throws {
    let defaultArtwork = UIImage(named: "logo")!
    var nowPlayingInfo = [
        MPMediaItemPropertyTitle: title ?? "***",
        MPMediaItemPropertyArtist: artist ?? "***",
        MPMediaItemPropertyArtwork: MPMediaItemArtwork(boundsSize: defaultArtwork.size) { _ in
            defaultArtwork
        }
    ] as [String: Any]

    if let artwork = artwork {
        guard let url = URL(string: artwork) else { return }
        let (data, response) = try await URLSession.shared.data(from: url)
        guard (response as? HTTPURLResponse)?.statusCode == 200 else { return }
        guard let image = UIImage(data: data) else { return }
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size) { _ in
            image
        }
    }

    MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
}
The app crashes when hitting
MPMediaItemPropertyArtwork: MPMediaItemArtwork(boundsSize: defaultArtwork.size) { _ in
    defaultArtwork
}
or
nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size) { _ in
    image
}
Commenting out these two makes the app work again.
Again, I have no clue why.
Thanks in advance
I am experiencing an issue with my app, which includes a WKWebView used for displaying and playing WebRTC content (audio and video). Everything works fine on macOS, but on iOS 18, while the video is displayed correctly, there is no sound.
I am wondering if this could be related to privacy permissions on iOS. Could you please clarify if there are any specific privacy permissions I need to address?
I would like to confirm:
Whether AVAudioSession.sharedInstance().setCategory requires any special configuration for WebRTC audio. Are there any particular settings needed? My current setting is below:
try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, policy: .longFormAudio)
Does the JavaScript in the HTML file require any special handling to ensure WebRTC audio works properly on iOS?
const audioRender = document.createElement('audio');
audioRender.id = 'xxxid';
audioRender.srcObject = streamSource;
audioRender.autoplay = true;
audioHolder.appendChild(audioRender);
Does WKWebViewConfiguration need any specific parameter adjustments to ensure audio playback in WebRTC works as expected?
let webViewConfiguration = WKWebViewConfiguration()
let contentController = WKUserContentController()
contentController.add(self, name: "***")
webViewConfiguration.userContentController = contentController
webViewConfiguration.allowsInlineMediaPlayback = true
After upgrading to iOS 18, CarPlay with a 2023 Lexus and an iPhone 15 Pro Max shows multiple issues:
• speakers reduced to mono sound (going back to normal after some minutes and then reducing again)
• no speaker sound at all
• touching or moving the phone while driving results in the sound cutting in and out
No reboot or shutdown helps.
No cable connection works.
@Apple: do you test your software professionally, or is this outsourced to the community? It doesn't look like a professional approach at all.
Please solve this dangerous (traffic!) and annoying issue ASAP!
Thanks - Torsten