Photos and Imaging


Integrate still images and other forms of photography into your apps.

76 posts under the Photos and Imaging tag

Get View Full HDR state from Settings > Photos to properly set preferredImageDynamicRange in editing extension
I'm updating my Photo Editing Extension to support HDR. To do this I set imageView.preferredImageDynamicRange = .high. However, users can turn off the option to view HDR photos in their full dynamic range in Settings > Photos. When that option is off, opening a photo and tapping the edit button shows it in standard range, as expected, but when you select my app from More > Extensions, it unexpectedly appears in the full dynamic range. I need to set imageView.preferredImageDynamicRange = .standard when View Full HDR is off, but I don't see any way to read that setting from my PHContentEditingController.
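There doesn't appear to be a public API that exposes the View Full HDR switch directly, so here is only a heavily hedged workaround sketch: the screen's EDR headroom drops to about 1.0 when the system is rendering SDR-only, which may (this is an assumption, not documented behavior) track that setting:

```swift
import UIKit

// Hedged workaround sketch: infer whether the display is currently rendering
// HDR by checking its EDR headroom. currentEDRHeadroom near 1.0 means the
// screen is effectively SDR right now; whether that reliably mirrors the
// "View Full HDR" setting is an assumption to verify.
func inferredDynamicRange(for view: UIView) -> UIImage.DynamicRange {
    let headroom = view.window?.screen.currentEDRHeadroom ?? 1.0
    return headroom > 1.0 ? .high : .standard
}

// Hypothetical call site in the extension's view controller:
// imageView.preferredImageDynamicRange = inferredDynamicRange(for: imageView)
```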
1 reply · 0 boosts · 555 views · Oct ’24
What format for writeHEIFRepresentation preserves HDR?
In the WWDC 24 session "Use HDR for dynamic image experiences in your app" it's noted that this is how you save edits for Adaptive HDR:

```swift
// SDR + HDR:
writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrImage: hdrImage])

// SDR + Gain:
writeHEIFRepresentation(of: sdrImage, to: url, colorSpace: p3Space, options: [.hdrGainMapImage: gainImage])
```

This won't compile because the format argument is missing. What format should be used? In the WWDC 23 session "Support HDR images in your app", RGBAf, RGBAh, RGBA16, and RGB10 were mentioned, but I'm not sure which one to use. If relevant, I'm editing photos from the user's photo library, so the image was probably taken on an iPhone, but perhaps not. Thanks!
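For reference, a minimal sketch of the same call with the missing format argument filled in. Treat the .RGBA16 choice as an assumption on my part (a 16-bit-per-channel format is wide enough to preserve the 10-bit HDR content a HEIF stores), not a confirmed recommendation:

```swift
import CoreImage

// Hedged sketch: writeHEIFRepresentation with an explicit CIFormat.
// .RGBA16 is an assumption, not a documented requirement.
func saveAdaptiveHDR(context: CIContext, sdrImage: CIImage, gainImage: CIImage, to url: URL) throws {
    guard let p3Space = CGColorSpace(name: CGColorSpace.displayP3) else { return }
    try context.writeHEIFRepresentation(of: sdrImage,
                                        to: url,
                                        format: .RGBA16,
                                        colorSpace: p3Space,
                                        options: [.hdrGainMapImage: gainImage])
}
```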
1 reply · 0 boosts · 579 views · Oct ’24
'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
```objc
[[PHImageManager defaultManager] requestAVAssetForVideo:asset
                                                options:videoOptions
                                          resultHandler:^(AVAsset *_Nullable avAsset, AVAudioMix *_Nullable audioMix, NSDictionary *_Nullable info) {
    if ([avAsset isKindOfClass:[AVURLAsset class]]) {
        AVURLAsset *urlAsset = (AVURLAsset *)avAsset;
        NSURL *videoURL = urlAsset.URL;
        mediaInfo[@"path"] = videoURL.absoluteString;
    } else {
        // Failed to get video asset
        completion(nil);
    }
}];
```

Before iOS 18, I was able to access the video's AVAsset through its URL using the method above, but starting with iOS 18 the following error appears: 'You don’t have permission. - The AVPlayerItem instance has failed with the error code 257 and domain "NSCocoaErrorDomain".'
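A hedged sketch of one way around this, assuming iOS 18 has tightened direct file access to the library URL: export the video into the app's own container and use that copy instead. The preset choice and temp-file naming are illustrative:

```swift
import Photos
import AVFoundation

// Hedged sketch: export the PHAsset's video into the app sandbox instead of
// holding on to the AVURLAsset's library URL. Preset and naming are assumptions.
func exportVideo(for asset: PHAsset, completion: @escaping (URL?) -> Void) {
    let options = PHVideoRequestOptions()
    options.isNetworkAccessAllowed = true
    PHImageManager.default().requestExportSession(forVideo: asset,
                                                  options: options,
                                                  exportPreset: AVAssetExportPresetPassthrough) { session, _ in
        guard let session = session else { completion(nil); return }
        let outURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("mov")
        session.outputURL = outURL
        session.outputFileType = .mov
        session.exportAsynchronously {
            completion(session.status == .completed ? outURL : nil)
        }
    }
}
```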
2 replies · 0 boosts · 640 views · Oct ’24
How to extract a stereo image pair from spatial photos generated by visionOS 2.0
Hi, my app allows users to share and view spatial photos. For viewing, I'm using a plane in a RealityView that has a camera-index-switch material node, which takes the stereo images as inputs. For sharing native spatial photos taken on the Vision Pro, prior to visionOS 2.0 I extracted the stereo image pair and merged it into a single side-by-side image to upload to the app's backend. However, since visionOS 2.0 introduced generating spatial photos from normal photos, I've been seeing some unexpected behaviours in my app, while the same photos are viewed correctly in the system Photos app:

1. Sometimes the extracted images have different sizes; the right image is smaller than the left image. See the first image in the Google Drive folder below, taken with an iPhone 15 Pro.
2. Even when the image pair has the same size, viewing it in my app shows artefacts, especially around the edges of objects that are closer to the camera. See the second image in the Google Drive folder below, taken with an iPhone 11.

Google Drive link: https://drive.google.com/drive/folders/1UTfpxvO3-ChqshwfyzY5E_KCgk8VgUaa

I know the Quick Look preview application can now display spatial photos, but I would like to keep the way I implemented it in the app, for compatibility reasons. Below is a code snippet that deals with the extraction. Please point out the correct way to extract a stereo image pair from a generated spatial photo. Happy to submit a code-level support request if more information is needed.

```swift
// the data is from photos picker item
let data = try await photo.loadTransferable(type: Data.self)
let source = CGImageSourceCreateWithData(data as CFData, nil)
let sbsImage = source.extractSpatialPhoto()

extension CGImageSource {
    func extractSpatialPhoto() -> UIImage? {
        guard let leftCGImage = extractSpatialImage(at: 0),
              let rightCGImage = extractSpatialImage(at: 1) else {
            return nil
        }
        let leftImage = UIImage(ciImage: leftCGImage)
        let rightImage = UIImage(ciImage: rightCGImage)

        guard leftImage.size == rightImage.size else { return nil }

        // merge left + right
        let size = CGSize(width: leftImage.size.width * 2, height: leftImage.size.height)
        UIGraphicsBeginImageContextWithOptions(size, true, 1.0)
        leftImage.draw(in: CGRect(x: 0, y: 0, width: leftImage.size.width, height: leftImage.size.height))
        rightImage.draw(in: CGRect(x: leftImage.size.width, y: 0, width: rightImage.size.width, height: rightImage.size.height))
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return mergedImage
    }

    // not sure if this actually works
    func extractSpatialImage(at index: Int) -> CIImage? {
        guard let cgImage = CGImageSourceCreateImageAtIndex(self, index, nil) else {
            return nil
        }
        var ciImage = CIImage(cgImage: cgImage)
        if let properties = CGImageSourceCopyPropertiesAtIndex(self, index, nil) as? [String: Any],
           let heifDictionary = properties[kCGImagePropertyHEIFDictionary as String] as? [String: Any],
           let extrinsics = heifDictionary[kIIOMetadata_CameraExtrinsicsKey as String] as? [String: Any],
           let position = extrinsics[kIIOCameraExtrinsics_Position as String] as? [Double] {
            // Default baseline is 64mm (0 for left camera, 0.064m for right camera)
            let standardBaseline = 0.064
            // Check if it's the right image (should be at [0.064, 0, 0])
            let isRightImage = (index == 1)
            let expectedPosition = isRightImage ? standardBaseline : 0.0
            // Calculate the translation needed to align to standard baseline
            let positionDelta = position[0] - expectedPosition
            // Apply translation only if there's a mismatch in position
            if positionDelta != 0 {
                let transform = CGAffineTransform(translationX: CGFloat(positionDelta), y: 0)
                ciImage = ciImage.transformed(by: transform)
            }
        }
        return ciImage
    }
}
```
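One observation, offered as a hedged sketch rather than a confirmed answer: spatial photos carry a stereo-pair image group, so asking ImageIO for that group's left/right indices should be more robust than hard-coding indices 0 and 1. The dictionary parsing below is my assumption about the shape of the properties:

```swift
import ImageIO

// Hedged sketch: look up the stereo-pair image group instead of assuming the
// pair lives at indices 0 and 1. The parsing is an assumption to verify.
func stereoPairIndices(in source: CGImageSource) -> (left: Int, right: Int)? {
    guard let props = CGImageSourceCopyProperties(source, nil) as? [String: Any],
          let groups = props[kCGImagePropertyGroups as String] as? [[String: Any]] else {
        return nil
    }
    for group in groups {
        guard let type = group[kCGImagePropertyGroupType as String] as? String,
              type == (kCGImagePropertyGroupTypeStereoPair as String),
              let left = group[kCGImagePropertyGroupImageIndexLeft as String] as? Int,
              let right = group[kCGImagePropertyGroupImageIndexRight as String] as? Int else {
            continue
        }
        return (left, right)
    }
    return nil
}
```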
1 reply · 0 boosts · 1k views · Oct ’24
UIKit in SwiftUI memory leak while displaying images
I'm using UIKit to display a long list of large images inside a SwiftUI ScrollView and LazyHStack using UIViewControllerRepresentable. When an image is loaded, I use SDWebImage to load it from disk. As the user navigates through the list and more images load, memory use grows and is never released, even as the LazyHStack unloads the images. Eventually, the app hits the memory limit and crashes. The issue persists if I load the image with UIImage(contentsOfFile:) instead of SDWebImage. How can I free the memory used by UIImage when the view is removed?

```swift
ScrollView(.horizontal, showsIndicators: false) {
    LazyHStack(spacing: 16) {
        ForEach(allItems) { item in
            TestImageDisplayRepresentable(item: item)
                .frame(width: geometry.size.width, height: geometry.size.height)
                .id(item.id)
        }
    }
    .scrollTargetLayout()
}
```

```swift
import UIKit
import SwiftUI
import SDWebImage

class TestImageDisplay: UIViewController {
    var item: TestItem

    init(item: TestItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
        imageView.center = view.center
        view.addSubview(imageView)
        imageView.sd_setImage(with: item.imageURL, placeholder: nil)
    }
}

struct TestImageDisplayRepresentable: UIViewControllerRepresentable {
    var item: TestItem

    func makeUIViewController(context: Context) -> TestImageDisplay {
        return TestImageDisplay(item: item)
    }

    func updateUIViewController(_ uiViewController: TestImageDisplay, context: Context) {
        uiViewController.item = item
    }
}
```
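Not a guaranteed fix, but a common mitigation for exactly these symptoms is to decode a downsampled copy via ImageIO so each visible cell retains only the pixels it actually draws, rather than the full-resolution UIImage. A sketch, with illustrative sizes:

```swift
import UIKit
import ImageIO

// Hedged sketch: decode a thumbnail at display size instead of the full image.
// kCGImageSourceShouldCache: false avoids keeping the full-size decode alive;
// maxPixelSize is whatever the cell actually needs.
func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

In viewDidLoad, the image view could then be fed downsampledImage(at:maxPixelSize:) instead of sd_setImage or UIImage(contentsOfFile:), so the retained bitmap matches the 200×200 view rather than the original file.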
0 replies · 0 boosts · 464 views · Sep ’24
Issues with @preconcurrency and AVFoundation in Swift 6 on Xcode 16.1/iOS 18 (Worked fine in Swift 5)
Question: I'm working on a project in Xcode 16.1, using Swift 6 with iOS 18. My code works fine in Swift 5, but I'm running into concurrency issues when upgrading to Swift 6, particularly with the @preconcurrency attribute and AVFoundation. Here is the relevant part of my code:

```swift
import SwiftUI
@preconcurrency import AVFoundation

struct OverlayButtonBar: View {
    ...
    let audioTracks = await loadTracks(asset: asset, mediaType: .audio)
    ...

    // Tracks are extracted before crossing concurrency boundaries
    private func loadTracks(asset: AVAsset, mediaType: AVMediaType) async -> [AVAssetTrack] {
        do {
            return try await asset.load(.tracks).filter { $0.mediaType == mediaType }
        } catch {
            print("Error loading tracks: \(error)")
            return []
        }
    }
}
```

Issues: When using @preconcurrency, I get the warning "@preconcurrency attribute on module AVFoundation has no effect", and Xcode's suggested fix is to remove @preconcurrency. But if I remove it, I get both a warning and an error:

Warning: Add '@preconcurrency' to treat 'Sendable'-related errors from module 'AVFoundation' as warnings.
Error: Non-sendable type [AVAssetTrack] returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary. (Class AVAssetTrack does not conform to the Sendable protocol.)

The error appears when I directly access the non-Sendable AVAssetTrack array in an async context:

let audioTracks = await loadTracks(asset: asset, mediaType: .audio)

How can I resolve this issue while staying compliant with Swift 6 concurrency rules? Is there a recommended approach to handling non-Sendable types like AVAssetTrack in concurrency contexts? I'd appreciate any guidance on making this work in Swift 6, especially considering it worked fine in Swift 5. Thanks in advance!
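One pattern that satisfies strict concurrency here (a hedged sketch, not the only way): never let AVAssetTrack cross the actor boundary at all, and return a small Sendable value type with just the fields the UI needs. The fields below are illustrative, and note that AVAsset itself is also non-Sendable in Swift 6, so this helper is best called from wherever the asset already lives:

```swift
import AVFoundation
import CoreMedia

// Hedged sketch: extract Sendable facts from non-Sendable AVAssetTracks so
// only this value type crosses actor boundaries.
struct TrackInfo: Sendable {
    let trackID: CMPersistentTrackID
    let isPlayable: Bool
}

func loadAudioTrackInfo(from asset: AVAsset) async throws -> [TrackInfo] {
    let tracks = try await asset.load(.tracks)
    var result: [TrackInfo] = []
    for track in tracks where track.mediaType == .audio {
        // Load only Sendable properties off the track.
        let playable = try await track.load(.isPlayable)
        result.append(TrackInfo(trackID: track.trackID, isPlayable: playable))
    }
    return result
}
```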
1 reply · 0 boosts · 1.9k views · Sep ’24
Handling Main Actor-Isolated Values with `PHPhotoLibrary` in Swift 6
Hello, I’m encountering an issue with the PHPhotoLibrary API in Swift 6 and iOS 18. The code I’m using worked fine in Swift 5, but I’m now seeing the following error:

Sending main actor-isolated value of type '() -> Void' with later accesses to nonisolated context risks causing data races

Here is the problematic code:

```swift
Button("Save to Camera Roll") {
    saveToCameraRoll()
}

...

private func saveToCameraRoll() {
    guard let overlayFileURL = mediaManager.getOverlayURL() else { return }
    Task {
        do {
            let status = await PHPhotoLibrary.requestAuthorization(for: .addOnly)
            guard status == .authorized else { return }
            try await PHPhotoLibrary.shared().performChanges({
                if let creationRequest = PHAssetCreationRequest.creationRequestForAssetFromVideo(atFileURL: overlayFileURL) {
                    creationRequest.creationDate = Date()
                }
            })
            await MainActor.run {
                saveSuccessMessage = "Video saved to Camera Roll successfully"
            }
        } catch {
            print("Error saving video to Camera Roll: \(error.localizedDescription)")
        }
    }
}
```

Problem description: the error message suggests that a main actor-isolated value of type () -> Void is being accessed in a nonisolated context, potentially leading to data races. The issue arises specifically at the call to PHPhotoLibrary.shared().performChanges.

Questions:
How can I address the data race issues related to main actor isolation when using PHPhotoLibrary.shared().performChanges?
What changes, if any, are required to adapt this code for Swift 6 and iOS 18 while maintaining thread safety and actor isolation?
Are there any recommended practices for managing main actor-isolated values in asynchronous operations to avoid data races?

I'd appreciate any pointers or suggestions to resolve this issue effectively. Thank you!
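A hedged sketch of one way to resolve this: hand performChanges a closure that captures only Sendable values (the file URL) by moving the call into a nonisolated helper, and update UI state back on the main actor afterwards:

```swift
import Foundation
import Photos

// Hedged sketch: the change block captures only the Sendable URL, so nothing
// main-actor-isolated crosses into Photos' background execution context.
nonisolated func saveVideoToLibrary(at fileURL: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        let request = PHAssetCreationRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
        request?.creationDate = Date()
    }
}
```

After `try await saveVideoToLibrary(at: overlayFileURL)`, setting saveSuccessMessage can stay inside MainActor.run exactly as in the post.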
1 reply · 0 boosts · 1.7k views · Sep ’24
Crashes in PHPickerViewController PFAssertionPolicyAbort
Hello! I'm getting crash reports in PHPickerViewController from iOS 17 users only. Can someone point me in the right direction as to the root cause, since it's related to PHPickerViewController?

```
Thread 0 name:
Thread 0 Crashed:
0   libsystem_kernel.dylib   0x00000001e7e9342c __pthread_kill + 8 (:-1)
1   libsystem_pthread.dylib  0x00000001fbc32c0c pthread_kill + 268 (pthread.c:1721)
2   libsystem_c.dylib        0x00000001a6d36ba0 abort + 180 (abort.c:118)
3   PhotoFoundation          0x00000001d2420280 -[PFAssertionPolicyAbort notifyAssertion:] + 68 (PFAssert.m:432)
4   PhotoFoundation          0x00000001d2420068 -[PFAssertionPolicyComposite notifyAssertion:] + 160 (PFAssert.m:259)
5   PhotoFoundation          0x00000001d242061c -[PFAssertionPolicyUnique notifyAssertion:] + 176 (PFAssert.m:292)
6   PhotoFoundation          0x00000001d241f7f4 -[PFAssertionHandler handleFailureInFunction:file:lineNumber:description:arguments:] + 140 (PFAssert.m:169)
7   PhotoFoundation          0x00000001d2420c74 _PFAssertFailHandler + 148 (PFAssert.m:127)
8   PhotosUI                 0x0000000216b59e30 -[PHPickerViewController _handleRemoteViewControllerConnection:extension:extensionRequestIdentifier:error:completionHandler:] + 1356 (PHPicker.m:1502)
9   PhotosUI                 0x0000000216b5a954 __66-[PHPickerViewController _setupExtension:error:completionHandler:]_block_invoke_3 + 52 (PHPicker.m:1454)
```

Crash report: 2024-09-05_18-27-56.7526_+0500-a953eaee085338a690ac1604a78de86e3e49d182.crash
2 replies · 0 boosts · 508 views · Oct ’24
Can I setup an AVCaptureSession exclusively for use with the new Camera Control APIs?
I have a third-party app for controlling Sony mirrorless cameras over WiFi. I'm really excited to integrate the new camera controls on the iPhone 16 Pro with the app. I've found the documentation around this, and it seems I need an AVCaptureSession set up in order to use them:

```swift
func configureControls(_ controls: [AVCaptureControl]) {
    // Verify the host system supports controls; otherwise, return early.
    guard captureSession.supportsControls else { return }

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Iterate over the passed in controls.
    for control in controls {
        // Add the control to the capture session if possible.
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}
```

Can I just use a freshly initialised capture session for this, or does it need to be configured in any other way? Are there any downsides to creating a session (CPU usage, etc.) that I may experience from this?

Also, the scope of the controls is quite narrow. For something like shutter speed or aperture, which has quite a number of possible values but requires custom labels and a non-linear scale, AVCaptureIndexPicker seems to be the way to go. Will that picker support enough values to represent something like shutter speed or aperture? Is there any chance we may get non-linear float-based controls in the future, which might feel more natural from a UX perspective than index-based ones?

Apologies, lots of edits going on here as I think about this more. Is there any way, or would any way be considered, of putting these controls in a disabled state, like other UI elements in iOS? There are times (during capture, for example) when many of these settings are unavailable to be changed by the user (as communicated by the Sony camera), and managing a queue of changes while the function is unavailable to be set is going to be a challenge. If there won't be, how will the controls behave if they are removed while being interacted with? Presumably they will disappear entirely from the UI? Thanks!
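On the index-picker question, a minimal sketch of what a shutter-speed control could look like. The label list, symbol name, and camera-forwarding call are placeholders I made up, and I don't know of a documented cap on the number of indexes:

```swift
import AVFoundation

// Hedged sketch: an AVCaptureIndexPicker for shutter speed with custom labels.
// The value list is illustrative; a real app would mirror the camera's range.
let speeds = ["1/8000", "1/4000", "1/2000", "1/1000", "1/500",
              "1/250", "1/125", "1/60", "1/30", "1/15"]
let shutterPicker = AVCaptureIndexPicker("Shutter",
                                         symbolName: "camera.shutter.button",
                                         localizedIndexTitles: speeds)
shutterPicker.setActionQueue(.main) { index in
    // Forward the selected value to the Sony camera over WiFi here (placeholder).
    print("Selected shutter speed: \(speeds[index])")
}
// Then pass [shutterPicker] to configureControls(_:) from the snippet above.
```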
3 replies · 1 boost · 914 views · Sep ’24
Sonoma: Is It No Longer Possible to Fetch Wallpaper File Names?
Under Ventura, desktop wallpaper image names were stored in an SQLite database at ~/Library/Application Support/Dock/desktoppicture.db. That file is no longer used under Sonoma. I built a process that fetches the desktop image file names and displays them, either as a service or on the desktop. I do this because I have many photos I've taken, and I like to know which one I'm viewing so I can make edits if necessary. I set these images across five spaces and have them change randomly every hour. I tried using AppleScript, but it would not pull the file names. A few people have pointed me to ~/Library/Application Support/com.apple.wallpaper/Store/Index.plist. However, on my system this only reveals the source folder, not the image name itself. On one of my Macs it shows 64 items, even though I have only five spaces! Is there a way to fetch the image file names under Sonoma? Will Sequoia make this easier or harder?
3 replies · 0 boosts · 744 views · Nov ’24
AssistantIntent for Photos without library access
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering if it is possible to create intents that do not require full library access. Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an App Intent to allow interactions like "Open this image in BeCasso and apply preset X". Would that be possible without full library access?
0 replies · 0 boosts · 614 views · Jul ’24
PhotoAsset in TagView
I'm trying to recreate the Tag People functionality from Instagram, where a carousel of media the user has selected is displayed to them and they can go through and tag people in each item. I'm trying to achieve this (but with food items instead of people) with a TabView using PHAssets; however, the result is some funky behaviour I'm pulling my hair out trying to understand. The items are tagged correctly, but the scroll feature on the TabView works sporadically: it occasionally scrolls fine, but all of a sudden won't let me scroll past one image (see the attached video for an example).

```swift
import SwiftUI
import Photos

struct TagItemView: View {
    @Binding var selectedAssets: [PHAsset]
    @State private var showTagSheet = false
    @State private var currentAsset: PHAsset? {
        didSet {
            if let currentAsset = currentAsset {
                assetTags = tags[currentAsset.localIdentifier] ?? []
            }
        }
    }
    @State private var tags: [String: [String]] = [:] // Dictionary to store tags for each media item
    @State private var assetTags: [String] = [] // Tags for the current asset

    var body: some View {
        VStack {
            mediaCarousel
            tagsView
            Spacer()
        }
        .background(Color.black.ignoresSafeArea())
        .onAppear {
            if let firstAsset = selectedAssets.first {
                currentAsset = firstAsset
            }
        }
        .onChange(of: currentAsset) { newAsset in
            if let currentAsset = newAsset {
                assetTags = tags[currentAsset.localIdentifier] ?? []
                print("currentAsset changed: \(currentAsset.localIdentifier)")
                print("assetTags: \(assetTags)")
            }
        }
        .sheet(isPresented: $showTagSheet) {
            TagSheetView(selectedAsset: $currentAsset, tags: $tags, showTagSheet: $showTagSheet, assetTags: $assetTags)
        }
    }

    private var mediaCarousel: some View {
        VStack {
            TabView(selection: $currentAsset) {
                ForEach(selectedAssets, id: \.self) { asset in
                    if asset.mediaType == .image {
                        TagItemImageView(asset: asset)
                            .tag(asset.localIdentifier)
                            .onAppear {
                                currentAsset = asset
                                print("Asset in view (onAppear): \(asset.localIdentifier)")
                            }
                            .onTapGesture {
                                currentAsset = asset
                                showTagSheet = true
                            }
                    } else if asset.mediaType == .video {
                        TagItemVideoView(asset: asset)
                            .tag(asset.localIdentifier)
                            .onAppear {
                                currentAsset = asset
                                print("Asset in view (onAppear): \(asset.localIdentifier)")
                            }
                            .onTapGesture {
                                currentAsset = asset
                                showTagSheet = true
                            }
                    }
                }
            }
            .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
            .frame(height: UIScreen.main.bounds.height * 0.4) // Fixed height for carousel
        }
    }

    private var tagsView: some View {
        ScrollView {
            if !assetTags.isEmpty {
                ItemView(assetTags: assetTags, removeTag: { tag in
                    removeTag(tag, from: currentAsset!)
                })
                .transition(.opacity)
            } else {
                InstructionsView()
                    .transition(.opacity)
            }
        }
        .background(Color.black)
        .padding(.top, 8)
        .padding(.horizontal, 15)
    }

    private func removeTag(_ tag: String, from asset: PHAsset) {
        guard var assetTags = tags[asset.localIdentifier] else { return }
        assetTags.removeAll { $0 == tag }
        tags[asset.localIdentifier] = assetTags
        if currentAsset?.localIdentifier == asset.localIdentifier {
            self.assetTags = assetTags
        }
    }
}
```
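Not an official fix, but one thing stands out in the snippet above: the TabView's selection is a PHAsset? while each page's .tag is a String (the localIdentifier), and SwiftUI paging breaks when the selection and tag types don't match, which would explain the sporadic scrolling. A minimal sketch of lining the types up by selecting on the identifier throughout (TagItemImageView is reused from the post; media-type branching omitted for brevity):

```swift
import SwiftUI
import Photos

// Hedged sketch: selection type (String?) now matches the .tag type exactly.
struct TagCarousel: View {
    let selectedAssets: [PHAsset]
    @State private var currentAssetID: String?

    var body: some View {
        TabView(selection: $currentAssetID) {
            ForEach(selectedAssets, id: \.localIdentifier) { asset in
                TagItemImageView(asset: asset)
                    .tag(Optional(asset.localIdentifier)) // Optional to match String?
            }
        }
        .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
    }
}
```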
0 replies · 0 boosts · 392 views · Jul ’24
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I’m building a camera app where I apply real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I scale down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this? The simplified chain is:

1. Scale down the viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depthMap CVPixelBuffer to match the viewFinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale up the final image to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination

From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload this to the Metal view? Any pointers greatly appreciated!
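On the Accelerate question: a rough sketch of scaling with vImage on the CPU, assuming 8-bit BGRA buffers and a destination buffer you allocate yourself (ideally from a CVPixelBufferPool). Whether this actually beats the GPU Lanczos path is something to profile, not a claim:

```swift
import Accelerate
import CoreVideo

// Hedged sketch: CPU scaling of one 8-bit, 4-channel pixel buffer into
// another with vImage. Assumes both buffers are BGRA and already sized.
func scale(_ src: CVPixelBuffer, into dst: CVPixelBuffer) -> Bool {
    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    defer {
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
        CVPixelBufferUnlockBaseAddress(dst, [])
    }
    var srcBuffer = vImage_Buffer(data: CVPixelBufferGetBaseAddress(src),
                                  height: vImagePixelCount(CVPixelBufferGetHeight(src)),
                                  width: vImagePixelCount(CVPixelBufferGetWidth(src)),
                                  rowBytes: CVPixelBufferGetBytesPerRow(src))
    var dstBuffer = vImage_Buffer(data: CVPixelBufferGetBaseAddress(dst),
                                  height: vImagePixelCount(CVPixelBufferGetHeight(dst)),
                                  width: vImagePixelCount(CVPixelBufferGetWidth(dst)),
                                  rowBytes: CVPixelBufferGetBytesPerRow(dst))
    let error = vImageScale_ARGB8888(&srcBuffer, &dstBuffer, nil,
                                     vImage_Flags(kvImageHighQualityResampling))
    return error == kvImageNoError
}
```

For the single-channel float depth map, vImageScale_PlanarF would be the equivalent call.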
2 replies · 0 boosts · 907 views · Jul ’24
Really High Energy Use
I'm developing an app where users can select items to add to a screen, similar to creating a Canva presentation or choosing blocks in Minecraft. However, I'm encountering an issue with energy usage. When users click the arrows to browse different items, the energy use spikes significantly. Although it returns to normal after a while, continuous clicking causes the energy use to skyrocket. The images I'm using are 500x500 pixels. Ideally, I would like to avoid caching all the images, as the app might have up to 500 items and caching them all would consume too much memory. I have tried numerous ways to avoid this, but I just can't seem to make it work. Does anyone know how to avoid this problem?

```swift
struct ContentView: View {
    struct babyBackground {
        var littleImage = ""
    }

    @State var firstSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape")
    ]

    @State var secondSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle")
    ]

    @State var thirdSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
    ]

    let columns: [GridItem] = Array(repeating: .init(.flexible()), count: 4)

    func createBackgroundGridView(for backgrounds: [babyBackground], columns: [GridItem]) -> some View {
        LazyVGrid(columns: columns, spacing: 10) {
            ForEach(0..<backgrounds.count, id: \.self) { index in
                Button(action: {}, label: {
                    if let path = Bundle.main.path(forResource: backgrounds[index].littleImage, ofType: "png"),
                       let uiImage = UIImage(contentsOfFile: path) {
                        Image(uiImage: uiImage)
                            .resizable()
                            .frame(width: 126, height: 96)
                    }
                })
            }
        }
        .padding()
    }

    @State var indexOn = 0

    var body: some View {
        HStack {
            Button(action: {
                indexOn = (indexOn == 0) ? 2 : indexOn - 1
            }) {
                Label("", systemImage: "arrowtriangle.left.fill")
                    .font(.system(size: 50))
            }
            Spacer()
            ScrollView {
                switch indexOn {
                case 0: createBackgroundGridView(for: firstSet, columns: columns)
                case 1: createBackgroundGridView(for: secondSet, columns: columns)
                case 2: createBackgroundGridView(for: thirdSet, columns: columns)
                case 3: createBackgroundGridView(for: thirdSet, columns: columns)
                default: createBackgroundGridView(for: firstSet, columns: columns)
                }
            }
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            Spacer()
            Button(action: {
                indexOn = (indexOn == 2) ? 0 : indexOn + 1
            }) {
                Label("", systemImage: "arrowtriangle.right.fill")
                    .font(.system(size: 50))
            }
        }
    }
}
```

(Attached screenshots: energy use when the app starts; energy use after about 10 seconds of continuous clicking; the app UI.)
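One middle ground between no caching and caching all 500 items, sketched here as a suggestion rather than a known fix: an NSCache with a small count limit keeps recently shown images decoded (so arrow taps stop re-reading PNGs from disk) and evicts automatically under memory pressure. The cap of 64 is an arbitrary illustration:

```swift
import UIKit

// Hedged sketch: a bounded cache so repeat visits don't re-decode from disk,
// without pinning the whole catalog in memory.
final class ThumbnailCache {
    static let shared = ThumbnailCache()
    private let cache = NSCache<NSString, UIImage>()
    private init() { cache.countLimit = 64 } // illustrative cap

    func image(named name: String) -> UIImage? {
        if let cached = cache.object(forKey: name as NSString) { return cached }
        guard let path = Bundle.main.path(forResource: name, ofType: "png"),
              let image = UIImage(contentsOfFile: path) else { return nil }
        cache.setObject(image, forKey: name as NSString)
        return image
    }
}
```

In createBackgroundGridView, UIImage(contentsOfFile:) could then be replaced with ThumbnailCache.shared.image(named: backgrounds[index].littleImage).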
1 reply · 1 boost · 684 views · Jul ’24
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a Live Photo. I am utilizing the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to generate the Live Photo and save it to the gallery. Here is the relevant code and the errors I am encountering.

Console prints:

```
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo
```

I have verified that the app has the necessary permissions to access the photo library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files.

```swift
struct WallpaperDetailView: View {
    var wallpaper: Wallpaper
    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                        ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL, let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async {
                    self.isLoading = false
                }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
```
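For reference, a sketch of the saving half using the Photos framework directly rather than the helper's saveToLibrary. A Live Photo only pairs successfully when the still and the video carry a matching content identifier in their metadata (which is what the LivePhoto helper is supposed to write during generation), so if generation itself fails, the inputs' metadata is the first thing to inspect. The function shape below is an assumption, not the helper's API:

```swift
import Photos

// Hedged sketch: saving an already-paired image + video as one Live Photo
// asset. Assumes both files share a content identifier; this does not fix
// a failing generation step upstream.
func saveLivePhoto(imageURL: URL, pairedVideoURL: URL) async throws {
    try await PHPhotoLibrary.shared().performChanges {
        let request = PHAssetCreationRequest.forAsset()
        request.addResource(with: .photo, fileURL: imageURL, options: nil)
        request.addResource(with: .pairedVideo, fileURL: pairedVideoURL, options: nil)
    }
}
```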
0 replies · 0 boosts · 619 views · Jul ’24