Photos and Imaging


Integrate still images and other forms of photography into your apps.

Posts under Photos and Imaging tag

70 Posts

Can I set up an AVCaptureSession exclusively for use with the new Camera Control APIs?
I have a third-party app for controlling Sony mirrorless cameras over WiFi, and I'm really excited to integrate the new camera controls on the iPhone 16 Pro. I've found the documentation around this, and it seems I need an AVCaptureSession set up in order to utilise them:

```swift
func configureControls(_ controls: [AVCaptureControl]) {
    // Verify the host system supports controls; otherwise, return early.
    guard captureSession.supportsControls else { return }

    // Begin configuring the capture session.
    captureSession.beginConfiguration()

    // Remove previously configured controls, if any.
    for control in captureSession.controls {
        captureSession.removeControl(control)
    }

    // Add each passed-in control to the capture session if possible.
    for control in controls {
        if captureSession.canAddControl(control) {
            captureSession.addControl(control)
        } else {
            print("Unable to add control \(control).")
        }
    }

    // Commit the capture session configuration.
    captureSession.commitConfiguration()
}
```

Can I just use a freshly initialised capture session for this, or does it need to be configured in any other way? Are there any downsides to creating a session (CPU usage, etc.) that I may experience from this?

Also, the scope of the controls is quite narrow. Something like shutter speed or aperture has quite a number of possible values, requires custom labels, and uses a non-linear scale, so AVCaptureIndexPicker seems to be the way to go. Will that picker support enough values to represent something like shutter speed or aperture? Is there any chance we may get non-linear, float-based controls in the future, which may feel more natural from a UX perspective than index-based ones?

Apologies, lots of edits going on here as I think about this more. Is there any way (or would any way be considered) of putting these controls in a disabled state, as with other UI elements in iOS? There are times (during capture, for example) when a lot of these settings are unavailable to be changed by the user (as communicated by the Sony camera), and managing a queue of changes while a function is unavailable to be set is going to be a challenge. If there won't be, how will the controls behave if they are removed whilst being interacted with? Presumably they will disappear entirely from the UI? Thanks!
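For reference, a minimal sketch of what an index-based shutter-speed control might look like, assuming the `AVCaptureIndexPicker` initializer that takes localized index titles and its selection callback; the stop values and queue name here are illustrative, and forwarding to the Sony camera is app-specific:

```swift
import AVFoundation

// Hypothetical shutter-speed stops reported by the camera.
let shutterSpeedLabels = ["1/8000", "1/4000", "1/2000", "1/1000",
                          "1/500", "1/250", "1/125", "1/60", "1/30"]

let shutterPicker = AVCaptureIndexPicker(
    "Shutter Speed",
    symbolName: "camera.aperture",
    localizedIndexTitles: shutterSpeedLabels
)

// Deliver selection changes on a dedicated queue, then forward to the camera.
let controlQueue = DispatchQueue(label: "camera-controls")
shutterPicker.setActionQueue(controlQueue) { index in
    // Send the chosen stop to the camera over WiFi (app-specific).
    print("Selected shutter speed: \(shutterSpeedLabels[index])")
}

// Then add it via the configureControls(_:) pattern shown above.
```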
3 replies · 1 boost · 625 views · Sep ’24
Sonoma: Is It No Longer Possible to Fetch Wallpaper File Names?
Under Ventura, desktop wallpaper image names were stored in a SQLite database at ~/Library/Application Support/Dock/desktoppicture.db. This file is no longer used under Sonoma.

I have a process I built that fetches the desktop image file names and displays them, either as a service or on the desktop. I do this because I have many photos I've taken, and I like to know which one I'm viewing so I can make edits if necessary. I set these images across five spaces and have them randomly change every hour.

I tried using AppleScript, but it would not pull the file names. A few people have pointed me to ~/Library/Application Support/com.apple.wallpaper/Store/Index.plist. However, on my system this only reveals the source folder and not the image name itself. On one of my Macs, it shows 64 items, even though I have only five spaces!

Is there a way to fetch the image file names under Sonoma? Will Sequoia make this easier or harder?
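For anyone inspecting that plist programmatically, a small sketch; note the structure of Index.plist is undocumented, so any keys found inside should be treated as subject to change between macOS releases:

```swift
import Foundation

// Dump the top-level structure of the Sonoma wallpaper store.
// The file's format is undocumented; keys and nesting may change.
let url = FileManager.default.homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/com.apple.wallpaper/Store/Index.plist")

if let data = try? Data(contentsOf: url),
   let plist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil) {
    print(plist) // Inspect manually; image file names may not be present at all.
}
```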
3 replies · 0 boosts · 369 views · 2w
AssistantIntent for Photos without library access
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering if it is possible to create intents that do not require full library access. Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an app intent to allow interactions like "Open this image in BeCasso and apply preset X". Would that be possible without full library access?
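For context, the conformance in question looks roughly like this. This is a loose sketch: the @AssistantEntity and @AssistantIntent macros enforce the exact required shapes, the property set shown is illustrative rather than authoritative, and whether `target` can be resolved from PHPicker-provided images without library access is exactly the open question:

```swift
import AppIntents

@AssistantEntity(schema: .photos.asset)
struct AssetEntity: IndexedEntity {
    // The schema macro enforces the full required property list;
    // only a few illustrative properties are shown here.
    let id: String
    var title: String?
    var creationDate: Date?
    var isFavorite: Bool

    var displayRepresentation: DisplayRepresentation { "Photo \(id)" }
}

@AssistantIntent(schema: .photos.openAsset)
struct OpenAssetIntent: OpenIntent {
    var target: AssetEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the asset in-app. How the asset was obtained
        // (PHPicker vs. full library access) is up to the app.
        .result()
    }
}
```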
0 replies · 0 boosts · 412 views · Jul ’24
PhotoAsset in TagView
I'm trying to recreate the Tag People functionality in Instagram, where a carousel of media the user has selected is displayed to them and they can then go through and tag people to the media. I'm trying to achieve this (but with food items instead of people) with a TabView of PHAssets; however, the result is some funky behaviour I'm pulling my hair out trying to understand. The items are tagging correctly, but the scroll feature on the TabView works sporadically. It occasionally scrolls fine, but all of a sudden won't let me scroll past one image (see attached video for an example).

```swift
import SwiftUI
import Photos

struct TagItemView: View {
    @Binding var selectedAssets: [PHAsset]
    @State private var showTagSheet = false
    @State private var currentAsset: PHAsset? {
        didSet {
            if let currentAsset = currentAsset {
                assetTags = tags[currentAsset.localIdentifier] ?? []
            }
        }
    }
    @State private var tags: [String: [String]] = [:] // Tags for each media item, keyed by local identifier
    @State private var assetTags: [String] = [] // Tags for the current asset

    var body: some View {
        VStack {
            mediaCarousel
            tagsView
            Spacer()
        }
        .background(Color.black.ignoresSafeArea())
        .onAppear {
            if let firstAsset = selectedAssets.first {
                currentAsset = firstAsset
            }
        }
        .onChange(of: currentAsset) { newAsset in
            if let currentAsset = newAsset {
                assetTags = tags[currentAsset.localIdentifier] ?? []
                print("currentAsset changed: \(currentAsset.localIdentifier)")
                print("assetTags: \(assetTags)")
            }
        }
        .sheet(isPresented: $showTagSheet) {
            TagSheetView(selectedAsset: $currentAsset, tags: $tags, showTagSheet: $showTagSheet, assetTags: $assetTags)
        }
    }

    private var mediaCarousel: some View {
        VStack {
            TabView(selection: $currentAsset) {
                ForEach(selectedAssets, id: \.self) { asset in
                    if asset.mediaType == .image {
                        TagItemImageView(asset: asset)
                            .tag(asset.localIdentifier)
                            .onAppear {
                                currentAsset = asset
                                print("Asset in view (onAppear): \(asset.localIdentifier)")
                            }
                            .onTapGesture {
                                currentAsset = asset
                                showTagSheet = true
                            }
                    } else if asset.mediaType == .video {
                        TagItemVideoView(asset: asset)
                            .tag(asset.localIdentifier)
                            .onAppear {
                                currentAsset = asset
                                print("Asset in view (onAppear): \(asset.localIdentifier)")
                            }
                            .onTapGesture {
                                currentAsset = asset
                                showTagSheet = true
                            }
                    }
                }
            }
            .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
            .frame(height: UIScreen.main.bounds.height * 0.4) // Fixed height for carousel
        }
    }

    private var tagsView: some View {
        ScrollView {
            if !assetTags.isEmpty {
                ItemView(assetTags: assetTags, removeTag: { tag in
                    removeTag(tag, from: currentAsset!)
                })
                .transition(.opacity)
            } else {
                InstructionsView()
                    .transition(.opacity)
            }
        }
        .background(Color.black)
        .padding(.top, 8)
        .padding(.horizontal, 15)
    }

    private func removeTag(_ tag: String, from asset: PHAsset) {
        guard var assetTags = tags[asset.localIdentifier] else { return }
        assetTags.removeAll { $0 == tag }
        tags[asset.localIdentifier] = assetTags
        if currentAsset?.localIdentifier == asset.localIdentifier {
            self.assetTags = assetTags
        }
    }
}
```
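One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: the TabView's selection binding is a `PHAsset?`, but each page is tagged with a `String` localIdentifier, so the selection can never actually match a tag. A sketch of the image branch with the types aligned (PHAsset is Hashable, so the asset itself can be the tag):

```swift
// Sketch: make the tag type match the selection type (PHAsset?).
TabView(selection: $currentAsset) {
    ForEach(selectedAssets, id: \.localIdentifier) { asset in
        TagItemImageView(asset: asset)
            .tag(Optional(asset)) // Optional<PHAsset> matches $currentAsset
            .onTapGesture {
                currentAsset = asset
                showTagSheet = true
            }
        // Avoid setting currentAsset in onAppear; it fights the
        // selection binding while the user is mid-swipe.
    }
}
.tabViewStyle(.page(indexDisplayMode: .always))
```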
0 replies · 0 boosts · 298 views · Jul ’24
Performant alternative to scaling a CIImage / PixelBuffer
Hey, I’m building a camera app where I am applying real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur itself. Is there a more efficient way to do this?

The simplified chain is:

1. Scale down the viewfinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
2. Scale up the depth-map CVPixelBuffer to match the viewfinder size (CIFilter.lanczosScaleTransform)
3. Create CIImages from both CVPixelBuffers
4. Apply the variable depth blur (CIFilter.maskedVariableBlur)
5. Scale the final image up to the Metal view size (CIFilter.lanczosScaleTransform)
6. Render the CIImage to an MTKView using CIRenderDestination

From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster? Also, instead of scaling the final image, perhaps I could offload this to the Metal view? Any pointers greatly appreciated!
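On the Accelerate idea, a minimal sketch of scaling a pixel buffer on the CPU with vImage. It assumes 32-bit BGRA buffers and a pre-allocated destination buffer, and elides error handling:

```swift
import Accelerate
import CoreVideo

// Sketch: scale a 32-bit BGRA CVPixelBuffer with vImage.
func scale(_ src: CVPixelBuffer, into dst: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(src, .readOnly)
    CVPixelBufferLockBaseAddress(dst, [])
    defer {
        CVPixelBufferUnlockBaseAddress(src, .readOnly)
        CVPixelBufferUnlockBaseAddress(dst, [])
    }

    var srcBuffer = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(src),
        height: vImagePixelCount(CVPixelBufferGetHeight(src)),
        width: vImagePixelCount(CVPixelBufferGetWidth(src)),
        rowBytes: CVPixelBufferGetBytesPerRow(src))

    var dstBuffer = vImage_Buffer(
        data: CVPixelBufferGetBaseAddress(dst),
        height: vImagePixelCount(CVPixelBufferGetHeight(dst)),
        width: vImagePixelCount(CVPixelBufferGetWidth(dst)),
        rowBytes: CVPixelBufferGetBytesPerRow(dst))

    // kvImageHighQualityResampling trades speed for quality; drop it for speed.
    vImageScale_ARGB8888(&srcBuffer, &dstBuffer, nil,
                         vImage_Flags(kvImageHighQualityResampling))
}
```

Whether this beats the GPU path depends on resolution and where the buffers live, so it is worth profiling both.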
2 replies · 0 boosts · 573 views · Jul ’24
Really High Energy Use
I'm developing an app where users can select items to add to a screen, similar to creating a Canva presentation or choosing blocks in Minecraft. However, I'm encountering an issue with energy usage. When users click the arrows to browse different items, energy use spikes significantly. Although it returns to normal after a while, continuous clicking causes energy use to skyrocket. The images I'm using are 500x500 pixels. Ideally, I would like to avoid caching all the images, as the app might have up to 500 items and caching them all would consume too much memory. I have tried numerous ways to avoid this, but I just can't seem to make it work. Would anyone know how to avoid this problem? (Attached: energy use when just opened, energy use after about 10 seconds of continuously clicking an arrow, and the app UI.)

```swift
import SwiftUI
import UIKit

struct ContentView: View {
    struct babyBackground {
        var littleImage = ""
    }

    @State var firstSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape")
    ]
    @State var secondSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
        babyBackground(littleImage: "anotherShape"),
        babyBackground(littleImage: "circle")
    ]
    @State var thirdSet: [babyBackground] = [
        babyBackground(littleImage: "circle"),
        babyBackground(littleImage: "square"),
        babyBackground(littleImage: "triangle"),
    ]

    let columns: [GridItem] = Array(repeating: .init(.flexible()), count: 4)

    func createBackgroundGridView(for backgrounds: [babyBackground], columns: [GridItem]) -> some View {
        LazyVGrid(columns: columns, spacing: 10) {
            ForEach(0..<backgrounds.count, id: \.self) { index in
                Button(action: {}, label: {
                    // Re-reads and re-decodes the PNG from disk on every appearance.
                    if let path = Bundle.main.path(forResource: backgrounds[index].littleImage, ofType: "png"),
                       let uiImage = UIImage(contentsOfFile: path) {
                        Image(uiImage: uiImage)
                            .resizable()
                            .frame(width: 126, height: 96)
                    }
                })
            }
        }
        .padding()
    }

    @State var indexOn = 0

    var body: some View {
        HStack {
            Button(action: {
                indexOn = (indexOn == 0) ? 2 : indexOn - 1
            }) {
                Label("", systemImage: "arrowtriangle.left.fill")
                    .font(.system(size: 50))
            }
            Spacer()
            ScrollView {
                switch indexOn {
                case 0: createBackgroundGridView(for: firstSet, columns: columns)
                case 1: createBackgroundGridView(for: secondSet, columns: columns)
                case 2: createBackgroundGridView(for: thirdSet, columns: columns)
                case 3: createBackgroundGridView(for: thirdSet, columns: columns)
                default: createBackgroundGridView(for: firstSet, columns: columns)
                }
            }
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            Spacer()
            Button(action: {
                indexOn = (indexOn == 2) ? 0 : indexOn + 1
            }) {
                Label("", systemImage: "arrowtriangle.right.fill")
                    .font(.system(size: 50))
            }
        }
    }
}
```
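A common pattern for this kind of spike, offered as a sketch rather than a drop-in fix: decode each image once at display size with ImageIO and keep it in an NSCache, which evicts automatically under memory pressure, instead of re-reading the full 500x500 PNG from disk on every click. The cache and size values here are illustrative:

```swift
import UIKit
import ImageIO

let thumbnailCache = NSCache<NSString, UIImage>()

// Decode a bundled PNG at (at most) maxPixelSize, caching the result.
func thumbnail(named name: String, maxPixelSize: Int = 256) -> UIImage? {
    if let cached = thumbnailCache.object(forKey: name as NSString) { return cached }

    guard let url = Bundle.main.url(forResource: name, withExtension: "png"),
          let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }

    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
    ]
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary) else {
        return nil
    }

    let image = UIImage(cgImage: cgImage)
    thumbnailCache.setObject(image, forKey: name as NSString)
    return image
}
```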
1 reply · 1 boost · 498 views · Jul ’24
Generating Live Photo from JPG and MOV fails
I am working on an iOS application using SwiftUI where I want to convert a JPG and a MOV file to a Live Photo. I am utilizing the LivePhoto class from GitHub for this. The JPG and MOV files are displayed correctly in my WallpaperDetailView, but I am facing issues when trying to generate the Live Photo and save it to the gallery. Here is the relevant code and the errors I am encountering.

Console prints:

```
Play button should be visible
Image URL fetched and set: Optional("https://firebasestorage.googleapis.com/...")
Video is ready to play
Video downloaded to: file:///var/mobile/Containers/Data/Application/.../tmp/CFNetworkDownload_7rW5ny.tmp
Failed to generate Live Photo
```

I have verified that the app has the necessary permissions to access the photo library. The JPEG and MOV files are successfully downloaded and can be displayed in the app. The issue seems to occur when generating the Live Photo from the downloaded files.

```swift
struct WallpaperDetailView: View {
    var wallpaper: Wallpaper
    @State private var isLoading = false
    @State private var isImageSaved = false
    @State private var imageURL: URL?
    @State private var livePhotoVideoURL: URL?
    @State private var player: AVPlayer?
    @State private var playerViewController: AVPlayerViewController?
    @State private var isVideoReady = false
    @State private var showBuffering = false

    var body: some View {
        ZStack {
            if let imageURL = imageURL {
                GeometryReader { geometry in
                    KFImage(imageURL)
                        .resizable()
                        ...
                }
            }
            if let playerViewController = playerViewController {
                VideoPlayerViewController(playerViewController: playerViewController)
                    .frame(maxWidth: .infinity, maxHeight: .infinity)
                    .clipped()
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .onAppear {
            PHPhotoLibrary.requestAuthorization { status in
                if status == .authorized {
                    loadImage()
                } else {
                    print("User denied access to photo library")
                }
            }
        }
    }

    private func loadImage() {
        isLoading = true
        if let imageURLString = wallpaper.imageURL, let imageURL = URL(string: imageURLString) {
            self.imageURL = imageURL
            if imageURL.scheme == "file" {
                self.isLoading = false
                print("Local image URL set: \(imageURL)")
            } else {
                fetchDownloadURL(from: imageURLString) { url in
                    self.imageURL = url
                    self.isLoading = false
                    print("Image URL fetched and set: \(String(describing: url))")
                }
            }
        }
        if let livePhotoVideoURLString = wallpaper.livePhotoVideoURL,
           let livePhotoVideoURL = URL(string: livePhotoVideoURLString) {
            self.livePhotoVideoURL = livePhotoVideoURL
            preloadAndPlayVideo(from: livePhotoVideoURL)
        } else {
            self.isLoading = false
            print("No valid image or video URL")
        }
    }

    private func preloadAndPlayVideo(from url: URL) {
        self.player = AVPlayer(url: url)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = self.player
        self.playerViewController = playerViewController
        let playerItem = AVPlayerItem(url: url)
        playerItem.preferredForwardBufferDuration = 1.0
        self.player?.replaceCurrentItem(with: playerItem)
        ...
        print("Live Photo Video URL set: \(url)")
    }

    private func saveWallpaperToPhotos() {
        if let imageURL = imageURL, let livePhotoVideoURL = livePhotoVideoURL {
            saveLivePhotoToPhotos(imageURL: imageURL, videoURL: livePhotoVideoURL)
        } else if let imageURL = imageURL {
            saveImageToPhotos(url: imageURL)
        }
    }

    private func saveImageToPhotos(url: URL) {
        ...
    }

    private func saveLivePhotoToPhotos(imageURL: URL, videoURL: URL) {
        isLoading = true
        downloadVideo(from: videoURL) { localVideoURL in
            guard let localVideoURL = localVideoURL else {
                print("Failed to download video for Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Video downloaded to: \(localVideoURL)")
            self.generateAndSaveLivePhoto(imageURL: imageURL, videoURL: localVideoURL)
        }
    }

    private func generateAndSaveLivePhoto(imageURL: URL, videoURL: URL) {
        LivePhoto.generate(from: imageURL, videoURL: videoURL, progress: { percent in
            print("Progress: \(percent)")
        }, completion: { livePhoto, resources in
            guard let resources = resources else {
                print("Failed to generate Live Photo")
                DispatchQueue.main.async { self.isLoading = false }
                return
            }
            print("Live Photo generated with resources: \(resources)")
            self.saveLivePhotoToLibrary(resources: resources)
        })
    }

    private func saveLivePhotoToLibrary(resources: LivePhoto.LivePhotoResources) {
        LivePhoto.saveToLibrary(resources) { success in
            DispatchQueue.main.async {
                if success {
                    self.isImageSaved = true
                    print("Live Photo saved successfully")
                } else {
                    print("Failed to save Live Photo")
                }
                self.isLoading = false
            }
        }
    }

    private func fetchDownloadURL(from gsURL: String, completion: @escaping (URL?) -> Void) {
        let storageRef = Storage.storage().reference(forURL: gsURL)
        storageRef.downloadURL { url, error in
            if let error = error {
                print("Failed to fetch image URL: \(error)")
                completion(nil)
            } else {
                completion(url)
            }
        }
    }

    private func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
        let task = URLSession.shared.downloadTask(with: url) { localURL, response, error in
            guard let localURL = localURL, error == nil else {
                print("Failed to download video: \(String(describing: error))")
                completion(nil)
                return
            }
            completion(localURL)
        }
        task.resume()
    }
}
```
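One frequent cause of generation failures with arbitrary JPG/MOV pairs, hedged since it depends on what the third-party LivePhoto class already does internally: the still and the video must share a content identifier before Photos will pair them. A sketch of the image side, using the undocumented Apple maker-note key commonly used for this (the paired MOV needs the same UUID as its "com.apple.quicktime.content.identifier" metadata item):

```swift
import ImageIO
import UniformTypeIdentifiers

// Sketch: embed a shared content identifier in the still image.
func addContentIdentifier(_ identifier: String, toJPEGAt src: URL, output dst: URL) {
    guard let source = CGImageSourceCreateWithURL(src as CFURL, nil),
          let dest = CGImageDestinationCreateWithURL(dst as CFURL,
                                                     UTType.jpeg.identifier as CFString, 1, nil)
    else { return }

    var metadata = (CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]) ?? [:]
    var makerApple = (metadata[kCGImagePropertyMakerAppleDictionary] as? [String: Any]) ?? [:]
    makerApple["17"] = identifier // undocumented key used to pair Live Photo resources
    metadata[kCGImagePropertyMakerAppleDictionary] = makerApple

    CGImageDestinationAddImageFromSource(dest, source, 0, metadata as CFDictionary)
    CGImageDestinationFinalize(dest)
}
```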
0 replies · 0 boosts · 404 views · Jul ’24
Show which user-created album a photo belongs to in the Photos app's Info panel
I collect a lot of memes from the Internet and save them on my iPhone, naming and classifying them into albums. But when I tap a photo in "All Photos", its Info panel does not show which album I added it to, which is very frustrating. With this feature, I could easily spot and manage the memes that I did not correctly add to the corresponding album.
1 reply · 0 boosts · 447 views · Jun ’24
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited to compression by video codecs since they're highly similar to one another.

I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts, which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.

Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce the storage size of many large PNGs are also welcome.

I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (a 5000%+ increase), on top of there being fatal errors when using async.
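One hedged note on the lossless requirement: HEVC with alpha is lossy as typically encoded, which would explain the artifacts. ProRes 4444 preserves the alpha channel and is visually (though not bit-for-bit) lossless, so whether it fits depends on how strict "losslessly" needs to be. A sketch of the writer setup under that assumption, with `outputURL` as a placeholder:

```swift
import AVFoundation

// Sketch: an AVAssetWriter input for ProRes 4444, which carries alpha.
// ProRes is visually lossless, not bit-exact, so round-tripped PNGs
// can still differ slightly from the originals.
func makeProResWriter(outputURL: URL, width: Int, height: Int) throws -> (AVAssetWriter, AVAssetWriterInput) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.proRes4444,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height,
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    input.expectsMediaDataInRealTime = false
    writer.add(input)
    return (writer, input)
}
```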
18 replies · 0 boosts · 1.4k views · Jun ’24
Add 30 frames per second in AVAssetWriter
Hello, I have converted UIImage to CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to add 30 CVPixelBuffers per second because, to work on social media, the video must be 30 frames per second.

The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error. The error is something like "Operation cannot be completed". Give me an example of a loop to add 30 CVPixelBuffers per second to a video currently being written. My attempt:

```swift
while true {
    if videoInput.isReadyForMoreMediaData {
        break
    }
    if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
        adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
    }
}
```

I await your response.
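A sketch of such a loop, where `videoInput`, `adaptor`, and `videoProvider` stand in for the poster's own objects above. The two key points: the presentation time must advance for every appended frame (the snippet above reuses `value: 1`, so every frame lands at the same timestamp), and the loop should wait rather than spin when the input isn't ready:

```swift
import AVFoundation

// Sketch: append frames at 30 fps with advancing presentation timestamps.
var frameIndex: Int64 = 0
while let buffer = videoProvider.getNextFrame() { // frame source from the post
    while !videoInput.isReadyForMoreMediaData {
        Thread.sleep(forTimeInterval: 0.01) // back off briefly instead of spinning
    }
    // For a frame that should last 2 seconds, append it 60 times
    // (or append it once and advance frameIndex by 60).
    let time = CMTime(value: frameIndex, timescale: 30)
    adaptor.append(buffer, withPresentationTime: time)
    frameIndex += 1
}
videoInput.markAsFinished()
```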
0 replies · 0 boosts · 496 views · Jun ’24
Not able to view custom stereo/spatial images in visionOS 2
Hello, I've been creating my own stereoscopic images on my laptop and AirDropping them to the Vision Pro to view them in 3D. My custom images have a left_eye.png and a right_eye.png that have been combined into one HEIF image (as is done natively with the headset). In the visionOS 1.x Photos app I was able to see my custom images in 3D, but in visionOS 2 the device no longer recognizes that my image(s) should also be shown stereoscopically and instead shows them in 2D. I see that it gives me the option to use the AI tool to convert 2D into 3D, but the original file that I AirDropped to myself (Mac --> AVP Photos album) already has a left and right image pair. Is this something that can be fixed?
1 reply · 0 boosts · 471 views · Jun ’24
Images retain memory usage
This is a very simple app in which there is only one button to start with. After you click the button, a list of images appears. The issue I have is that when I click the new button to hide the images, memory stays at the same level as when all the images appeared for the first time. As you can see from the figures below, the app starts at 18.5 MB, jumps to 38.5 MB when I show the images, and remains like that forever. I have tried various ways to reduce the memory usage, but I just can't find a solution that works. Does anyone know how to solve this? Thank you!

```swift
import SwiftUI

struct ContentView: View {
    @State private var imagesBeingShown = false
    @State var listOfImages = ["ImageOne", "ImageTwo", "ImageThree", "ImageFour", "ImageFive",
                               "ImageSix", "ImageSeven", "ImageEight", "ImageNine", "ImageTen",
                               "ImageEleven", "ImageTwelve", "ImageThirteen", "ImageFourteen",
                               "ImageFifteen", "ImageSixteen", "ImageSeventeen", "ImageEighteen"]

    var body: some View {
        if !imagesBeingShown {
            VStack {
                Button(action: {
                    imagesBeingShown = true
                }, label: {
                    Text("Turn True")
                })
            }
            .padding()
        } else {
            VStack {
                Button(action: {
                    imagesBeingShown = false
                }, label: {
                    Text("Turn false")
                })
                ScrollView {
                    LazyVStack {
                        ForEach(0..<listOfImages.count, id: \.self) { many in
                            Image(listOfImages[many])
                        }
                    }
                }
            }
        }
    }
}
```
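A hedged note on the likely cause: `Image(_:)` with an asset name goes through the same system cache as `UIImage(named:)`, which deliberately keeps decoded images in memory for reuse and releases them on its own schedule (typically under memory pressure), so the plateau isn't necessarily a leak. Loading by file path bypasses that cache, a sketch:

```swift
import UIKit

// Sketch: UIImage(named:) caches decoded images for reuse;
// UIImage(contentsOfFile:) does not, so memory can be reclaimed
// once the view hierarchy releases the image.
func uncachedImage(named name: String) -> UIImage? {
    guard let path = Bundle.main.path(forResource: name, ofType: "png") else { return nil }
    return UIImage(contentsOfFile: path)
}
```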
1 reply · 1 boost · 702 views · May ’24
The FOV is inconsistent when taking a photo with stabilization applied on iPhone 15 Pro Max
Hi, I am developing an iOS camera app, and I noticed an issue related to user privacy. When AVCaptureVideoStabilizationModeStandard is set on an AVCaptureConnection whose session preset is 1920x1080, and a photo is taken via the system API, the photo's FOV is larger than the preview stream and shows more content, especially on the iPhone 15 Pro Max rear camera. I think this inconsistency causes a user-privacy issue. Can you show me a solution that doesn't require turning the stabilization mode off? I tried other devices and the issue is not noticeable, but on the iPhone 15 Pro Max it is very obvious. Any suggestions are appreciated.
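For reference, a sketch of the configuration being described (the question is about the resulting field-of-view mismatch, not how to set the mode; `videoDataOutput` is a placeholder for whichever output the connection belongs to):

```swift
import AVFoundation

// Sketch: the stabilization setup described above.
if let connection = videoDataOutput.connection(with: .video),
   connection.isVideoStabilizationSupported {
    // Stabilization crops and warps preview frames, which is why the
    // stabilized preview can show a narrower FOV than the captured photo.
    connection.preferredVideoStabilizationMode = .standard
}
```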
1 reply · 0 boosts · 533 views · May ’24
Why Does CameraPicker Require Authorization While ImagePicker and PhotoPicker Do Not?
Why does using CameraPicker require user authorization through a pop-up, while ImagePicker and PhotoPicker do not require additional pop-up authorizations for accessing the photo library? All of these are implemented using UIImagePickerController, so why does one require a pop-up and the others do not? Additionally, I thought that by configuring the picker I would theoretically not need any permissions. If permissions are still required, wouldn't it make more sense to directly request camera permission and use the native camera functionality? What, then, are the advantages of using the picker?
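For comparison, a sketch of the permission-free path referenced here: PHPickerViewController runs out of process, so the app only ever receives the items the user explicitly picks and no authorization prompt is shown.

```swift
import PhotosUI

// Sketch: present the out-of-process photo picker; no permission prompt.
var config = PHPickerConfiguration()
config.filter = .images
config.selectionLimit = 1

let picker = PHPickerViewController(configuration: config)
// picker.delegate = self   // receive items via PHPickerViewControllerDelegate
// present(picker, animated: true)
```

The camera, by contrast, captures new content the user hasn't pre-approved, which is one plausible reading of why it sits behind an explicit permission.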
0 replies · 0 boosts · 512 views · Apr ’24
Problems importing iPhone media into macOS's Photos app via USB cable
When importing over a USB cable connection (no cloud) from updated iPhones (11 Pro Max, 12 mini, and 13, with their updated iOS versions) into the Photos app on updated macOS versions (Ventura 13.x, Big Sur 11.x, and Mojave 10.14.x), I noticed that the import view shows already-imported media and misses brand-new media. Others and I have noticed this problem across multiple Macs and iPhones: https://discussions.apple.com/thread/255565285 and https://talk.tidbits.com/t/does-anyone-have-problems-importing-iphones-medias-into-macos-photos-app/27406/. Thank you for reading and hopefully answering soon. :)
0 replies · 0 boosts · 488 views · Apr ’24