Image I/O

Read and write most image file formats, manage color, and access image metadata using Image I/O.

Posts under the Image I/O tag

55 Posts
Each entry below shows its reply, boost, and view counts, plus the date of its latest activity.

Screenshot with ScreenCaptureKit much larger than with Command-Shift-3
I am capturing a screenshot with SCScreenshotManager's captureImageWithFilter. The resulting PNG has the same resolution as a PNG taken with Command-Shift-3 (4112x2658) but is roughly 10x larger (14.4 MB vs 1.35 MB). My SCStreamConfiguration uses the SCDisplay's width and height and sets the color space to kCGColorSpaceSRGB. I currently save to file by initializing an NSBitmapImageRep with initWithCGImage:, producing PNG data with representationUsingType: NSBitmapImageFileTypePNG, and calling writeToFile:atomically:. Is there some configuration or compression I can use to bring the PNG size more closely in line with a Command-Shift-3 screenshot? Thanks!
Replies: 1 · Boosts: 0 · Views: 862 · Latest activity: May ’24

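A plausible cause for the size gap above: ScreenCaptureKit can hand back a 10/16-bit or wide-gamut image, which PNG encodes losslessly and therefore much larger, while Command-Shift-3 writes 8-bit sRGB. That diagnosis is an assumption, not confirmed by the thread; a minimal sketch that re-draws the capture into an 8-bit sRGB bitmap before encoding:

```swift
import Foundation
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

// Sketch: redraw the captured CGImage into an 8-bit sRGB bitmap, then
// encode that as PNG. Assumption: the original capture is deeper than
// 8 bits per component, which inflates lossless PNG output.
func pngData(downconverting image: CGImage) -> Data? {
    guard let srgb = CGColorSpace(name: CGColorSpace.sRGB),
          let ctx = CGContext(data: nil,
                              width: image.width,
                              height: image.height,
                              bitsPerComponent: 8,
                              bytesPerRow: 0,
                              space: srgb,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    ctx.draw(image, in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    guard let converted = ctx.makeImage() else { return nil }

    let data = NSMutableData()
    guard let dest = CGImageDestinationCreateWithData(data as CFMutableData,
                                                      UTType.png.identifier as CFString,
                                                      1, nil)
    else { return nil }
    CGImageDestinationAddImage(dest, converted, nil)
    return CGImageDestinationFinalize(dest) ? data as Data : nil
}
```
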
macOS Finder Preview/Thumbnail Generation Limited to 20-21 Alpha Channels?
I am using the following shell command to return an image preview for use in FileMaker:

```sh
qlmanage -t [sourcePath] -s 512 -o [outputPath]
```

This usually works well, but it hangs if the RGB image (.tif, .psb, or .psd) has too many alpha channels (more than 20 on a transparent background; more than 21 if flattened). The issue can also be seen in the Finder's thumbnails and previews: it appears macOS won't create a thumbnail when the image has over 21 alpha channels, and just shows the default tif/psb/psd icon, even if the image is very small.

Environment:
- macOS Sonoma 14.4.1
- Adobe Photoshop 2024 (25.6.0)
- "Maximize PSD and PSB File Compatibility" enabled when saving from Photoshop

Since I'm only able to upload a screenshot to this post, the original test files can be found on the Adobe forum under the title "MacOS Finder Preview Limited to 20-21 Alpha Channels?"
Replies: 1 · Boosts: 0 · Views: 632 · Latest activity: May ’24

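A possible workaround for the post above is to generate the preview with Image I/O directly instead of going through qlmanage and QuickLook; whether ImageIO tolerates the many-alpha-channel files any better is an assumption to test, not a confirmed fix:

```swift
import Foundation
import ImageIO
import UniformTypeIdentifiers

// Sketch: build a 512px PNG preview with CGImageSource, bypassing QuickLook.
func writeThumbnail(from source: URL, to output: URL, maxPixelSize: Int = 512) -> Bool {
    guard let src = CGImageSourceCreateWithURL(source as CFURL, nil) else { return false }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true, // don't trust embedded thumbnails
        kCGImageSourceCreateThumbnailWithTransform: true,   // respect EXIF orientation
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]
    guard let thumb = CGImageSourceCreateThumbnailAtIndex(src, 0, options as CFDictionary),
          let dest = CGImageDestinationCreateWithURL(output as CFURL,
                                                     UTType.png.identifier as CFString, 1, nil)
    else { return false }
    CGImageDestinationAddImage(dest, thumb, nil)
    return CGImageDestinationFinalize(dest)
}
```
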
iOS 17 UIImageReader has memory leaks
In my SwiftUI view, I try to load the image from data:

```swift
var body: some View {
    Group {
        if let data = model.detailImageData,
           let uiimage = UIImage(data: data) { // no memory issue
            Image(uiImage: uiimage)
                .resizable()
                .scaledToFit()
        }
    }
}
```

But I want the HDR rendition of my image, so I use:

```swift
if let data = model.detailImageData,
   let uiimage = UIImageReader.default.image(data: data) { // memory leaks!
```

When I change the data, the memory of the previous image is never freed, which eventually caused my app to crash. You can see it in the Instruments screenshot.
Replies: 1 · Boosts: 1 · Views: 868 · Latest activity: Apr ’24

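To isolate the leak in the post above from SwiftUI entirely, a minimal stress harness using only the API the poster names may help; `hdrData` is a hypothetical placeholder for the HDR image bytes:

```swift
import UIKit

// Decode the same HDR payload repeatedly and watch the Xcode memory gauge
// or Instruments' Allocations track. If the footprint grows roughly
// linearly, the decode path itself is retaining the surfaces.
func stressDecode(_ hdrData: Data, iterations: Int = 100) {
    for _ in 0..<iterations {
        autoreleasepool {
            _ = UIImageReader.default.image(data: hdrData)
        }
    }
}
```
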
Memory Leak in ImageIO?
I use this code to show the image in HDR in SwiftUI:

```swift
struct HDRImageView: UIViewRepresentable {
    // Set up a common reader for all UIImage read requests.
    static let reader: UIImageReader = {
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        return UIImageReader(configuration: config)
    }()

    let data: Data?
    let enableHDR: Bool

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView()
        view.preferredImageDynamicRange = enableHDR ? .high : .standard
        update(view)
        // Set this view to fit itself to the parent view.
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        view.setContentHuggingPriority(.required, for: .horizontal)
        view.setContentHuggingPriority(.required, for: .vertical)
        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        update(view)
    }

    func update(_ view: UIImageView) {
        autoreleasepool { // not working
            if let data = data {
                view.image = nil // setting to nil first is not working
                view.image = HDRImageView.reader.image(data: data)
            } else {
                view.image = nil
            }
            view.preferredImageDynamicRange = enableHDR ? .high : .standard
        }
    }
}
```

But when I update the input data, it seems the old image data cannot be freed. After several changes, the app takes too much memory and crashes. I found that it's the VM: ImageIO_Surface_Data and VM_Image_IO allocations that take up the memory. If I change HDRImageView to a normal Image(uiImage: UIImage(data:)), it no longer has this issue. Is this a memory leak, and how do I solve it? Update: I then tried Image(_:cgImage), and it appears to give the same result.
Replies: 0 · Boosts: 0 · Views: 814 · Latest activity: Apr ’24

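One debugging step for the post above: the shared static reader lives for the whole app, so a throwaway UIImageReader per decode would rule out a cache held by the reader instance. This uses only the configuration API shown in the post; that the retained IOSurface data is owned by the reader is an unconfirmed assumption:

```swift
import UIKit

// Sketch: decode with a short-lived reader so anything cached inside the
// reader instance is released along with it.
func decodeHDR(_ data: Data) -> UIImage? {
    var config = UIImageReader.Configuration()
    config.prefersHighDynamicRange = true
    return UIImageReader(configuration: config).image(data: data)
}
```
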
Does CVE-2024-1580 affect my app?
I have an image viewing app with support for avif (and avis) images. I'm trying to figure out whether the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097

The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context. From some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1. From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
Replies: 0 · Boosts: 0 · Views: 805 · Latest activity: Apr ’24

iOS ImageRenderer unable to localize text correctly (bug)
A simple view has misaligned localized content after being converted to an image with ImageRenderer. This is still a problem on a real phone and in TestFlight. I'm not sure what the cause is; I assume it's an ImageRenderer bug. I tried UIGraphicsImageRenderer instead, but it captures the image at an inaccurate position, and the resulting offset produces a white border. I also don't know why it sometimes runs into circular references that result in blank images. "(1) days" is also not converted to "1 day" properly.
Replies: 4 · Boosts: 1 · Views: 878 · Latest activity: 1w

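ImageRenderer renders its content outside the host view hierarchy, so environment values such as the locale are not inherited automatically; that mismatch would explain unlocalized output. A sketch, where `ShareCardView` is a hypothetical stand-in for the poster's view:

```swift
import SwiftUI

struct ShareCardView: View { // hypothetical placeholder view
    var body: some View {
        Text("\(1) days") // mirrors the "(1) days" pluralization issue
    }
}

// Sketch: inject the locale explicitly before rendering.
@MainActor
func renderLocalized() -> UIImage? {
    let content = ShareCardView()
        .environment(\.locale, Locale(identifier: Locale.preferredLanguages.first ?? "en"))
    let renderer = ImageRenderer(content: content)
    renderer.scale = UIScreen.main.scale
    return renderer.uiImage
}
```
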
Capturing the coordinates of an image and placing a second image at those coordinates
```swift
if balloon == yellow1_balloon {
    soundFile = "Sounds/newblop.wav"
    playSound()
    balloon.isHidden = true
    poppedImages.isHidden = false
    poppedImages.animationImages = ["popyellow-1", "popyellow-2", "popyellow-3",
                                    "popyellow-4", "popyellow-5", "popyellow-6",
                                    "popyellow-7"]
        .compactMap { name in UIImage(named: name) }
    let x: CGFloat = yellow1_balloon.frame.origin.x
    let y: CGFloat = yellow1_balloon.frame.origin.y
    poppedImages.frame.origin.x = x
    poppedImages.frame.origin.y = y
    poppedImages.animationDuration = 1.0
    poppedImages.animationRepeatCount = 1
    poppedImages.startAnimating()
    score = score + 10
    scoreLbl.text = String(score)
    return
}
```

The x,y coordinates are always the same as when yellow1_balloon was first created, not where it ends up after being touched.
Replies: 0 · Boosts: 0 · Views: 546 · Latest activity: Mar ’24

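If the balloon is moved with a UIView or Core Animation animation, its model frame keeps reporting the starting position; the presentation layer reflects what is actually on screen. A sketch under that assumption, using the identifiers from the post:

```swift
// Read the balloon's on-screen frame from the presentation layer,
// falling back to the model frame when no animation is in flight.
let onscreenFrame = yellow1_balloon.layer.presentation()?.frame ?? yellow1_balloon.frame
poppedImages.frame.origin = onscreenFrame.origin
```
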
Save ARDepthData as .tiff
I would like to save the depth map from ARDepthData as a .tiff, but I notice my output TIFF distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:

```swift
import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)
```

I am reading the file like this in Python:

```python
import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()
```

which creates this image: the farthest parts of the room should be around 4 meters, not 2, and the dark blue spot on the lower right is closer than half a meter away. Notably, the depth map contains distances from the camera plane to each region, not from the camera sensor to the region. Even correcting for this, though, the depth map remains about the same. Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
Replies: 1 · Boosts: 1 · Views: 948 · Latest activity: Mar ’24

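To check whether the CIImage-to-TIFF path is rescaling values, it may help to sample raw Float32 depths straight from the pixel buffer and compare them with what tifffile reads back. ARKit documents scene-depth buffers as kCVPixelFormatType_DepthFloat32, which this sketch assumes:

```swift
import CoreVideo

// Sketch: read one raw depth sample (in meters) directly from the buffer.
func depthSample(_ buffer: CVPixelBuffer, x: Int, y: Int) -> Float? {
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)
    return row[x]
}
```
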
Lossy option has no effect when exporting PNG to HEIF
Under Sonoma 14.4 the compression option has no effect when the source is a PNG image; it works for JPG/HEIF sources. Preview can export a PNG file to HEIC with the compression option honored. What am I missing? This worked previously. I am trying 0.01 and 0.9 as the compression quality, and the file size is the same for PNG sources. Is Preview using some trick to convert the image, such as ciContext.createCGImage? P.S.: A compression option of 1.0 was broken under 14.4 RC, and Preview created an empty file.

```swift
func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }
    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData as CFMutableData,
                                                                  "public.heic" as CFString, 1, nil)
    else { return nil }
    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    return success ? mutableData as Data : nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
```
Replies: 5 · Boosts: 0 · Views: 1.3k · Latest activity: Jul ’24

Is it possible to compile images into an APNG using Swift?
Hello, I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution, but it doesn't seem to work; my code is included below. I've also done a lot of searching and found plenty of code for displaying APNGs, but have had no luck with code for writing them. Any hints or pointers would be appreciated.

```swift
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0,
              _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString,
                                                                images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil); return
        } else {
            finished?(true, path)
        }
    }
}
```

My adaptation of the above code for APNGs (doesn't work; outputs an empty file):

```swift
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString,
                                                            images.count, nil)
    else { fatalError("Failed") }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
```
Replies: 3 · Boosts: 0 · Views: 1.6k · Latest activity: Mar ’24

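One difference between the two listings above stands out: the GIF version calls CGImageDestinationFinalize, while the APNG adaptation never does, and Image I/O does not write any output until the destination is finalized. That alone would explain the empty file. A sketch of the missing tail of generateAPNG:

```swift
// Finalize the destination after adding all frames; without this call
// the encoded APNG is never flushed to disk.
if !CGImageDestinationFinalize(destination) {
    print("APNG finalize failed")
}
```
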
Rendering a SwiftUI view into an image doesn't work in Dark Mode
This is my test code.

```swift
import SwiftUI

extension View {
    @MainActor
    func render(scale: CGFloat) -> UIImage? {
        let renderer = ImageRenderer(content: self)
        renderer.scale = scale
        return renderer.uiImage
    }
}

struct ContentView: View {
    @Environment(\.colorScheme) private var colorScheme
    @State private var snapImg: UIImage = UIImage()

    var snap: some View {
        Text("I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
            .foregroundStyle(colorScheme == .dark ? .red : .green)
    }

    @ViewBuilder
    func snapEx() -> some View {
        VStack {
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.pink)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.purple)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
        }
    }

    @ViewBuilder
    func snapView() -> some View {
        VStack {
            Text("Text")
            Text("Test2")
                .background(.green)
            snap
            snapEx()
        }
    }

    var body: some View {
        let snapView = snapView()
        VStack {
            snapView
            Image(uiImage: snapImg)
            Button("Snap") {
                snapImg = snapView.render(scale: UIScreen.main.scale) ?? UIImage()
            }
        }
    }
}
```

When using ImageRenderer, there are problems converting the view to an image: for example, Text does not automatically adopt the Dark Mode foreground color. This is just simple test code; the issue is not limited to Text. How should I solve this?
Replies: 4 · Boosts: 1 · Views: 1.6k · Latest activity: Apr ’24

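As with the localization post above, ImageRenderer does not inherit the host view's environment, so the rendered tree falls back to light mode. A sketch that injects the current color scheme before rendering, reusing the post's render(scale:) helper; a likely fix, but not confirmed in the thread:

```swift
// Sketch: pass the live color scheme into the rendered content explicitly.
Button("Snap") {
    snapImg = snapView
        .environment(\.colorScheme, colorScheme)
        .render(scale: UIScreen.main.scale) ?? UIImage()
}
```
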
EXIF MakerNote not readable in Ventura
I have a custom app running on a Mac Studio with Ventura that grabs a snapshot image from a network camera, then adds some extra information into the EXIF MakerNote field. However, the metadata cannot be read back out of the image when running Ventura; it can, however, be read out of the same image file on a Mac that is not running Ventura. It would appear Apple has removed support for reading MakerNote in Ventura while still supporting writing it. This code is about 7 years old, written in Objective-C, and worked with no issue until Ventura came along. Calls used:

```objc
CGImageDestinationAddImageFromSource(); // used to write the image to disk with the extra metadata - works on Ventura
CGImageSourceCopyPropertiesAtIndex();   // used to read the metadata from an image - does not return MakerNote data
```

Is there a new way to read EXIF MakerNote data from image files that was introduced with Ventura?
Replies: 2 · Boosts: 1 · Views: 813 · Latest activity: Nov ’24

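A quick way to test the post's theory from Swift: read the EXIF dictionary with Image I/O and look for the MakerNote key, then compare the result on Ventura against an older macOS for the same file. This assumes kCGImagePropertyExifMakerNote still surfaces the field when the OS supports reading it:

```swift
import Foundation
import ImageIO

// Sketch: return whatever Image I/O exposes for EXIF MakerNote, if anything.
func readMakerNote(at url: URL) -> Any? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let exif = props[kCGImagePropertyExifDictionary] as? [CFString: Any]
    else { return nil }
    return exif[kCGImagePropertyExifMakerNote]
}
```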