I extracted the gain map info from an image using:

```swift
let url = Bundle.main.url(forResource: "IMG_1181", withExtension: "HEIC")
let source = CGImageSourceCreateWithURL(url! as CFURL, nil)
let portraitData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source!, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as! [AnyHashable: Any]
let metaData = portraitData[kCGImageAuxiliaryDataInfoMetadata] as! CGImageMetadata
```
Then I printed all the metadata tags:

```swift
func printMetadataProperties(from metadata: CGImageMetadata) {
    guard let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
        return
    }
    for tag in tags {
        if let prefix = CGImageMetadataTagCopyPrefix(tag) as String?,
           let namespace = CGImageMetadataTagCopyNamespace(tag) as String?,
           let key = CGImageMetadataTagCopyName(tag) as String?,
           let value = CGImageMetadataTagCopyValue(tag) {
            print("Namespace: \(namespace), Key: \(key), Prefix: \(prefix), value: \(value)")
        }
    }
}

// Namespace: http://ns.apple.com/ImageIO/1.0/, Key: hasXMP, Prefix: iio, value: True
// Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapVersion, Prefix: HDRGainMap, value: 131072
// Namespace: http://ns.apple.com/HDRGainMap/1.0/, Key: HDRGainMapHeadroom, Prefix: HDRGainMap, value: 3.586325
```
Now I want to create a new CGImageMetadata with the same tags, but when it comes to the HDR tags, adding them to the metadata always fails:
```swift
let tag = CGImageMetadataTagCreate(
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    "HDRGainMapHeadroom" as CFString,
    .default,
    3.56 as CFNumber
)
let path = "HDRGainMap:HDRGainMapHeadroom" as CFString
let success = CGImageMetadataSetTagWithPath(metadata, nil, path, tag!) // always false
```
Setting the hasXMP tag works fine. Is the HDR gain map namespace private to Apple?
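For what it's worth, CGImageMetadataSetTagWithPath returning false for a custom namespace is often caused by the namespace not being registered on the mutable metadata first. A minimal sketch under that assumption (the namespace and prefix strings are taken from the tags printed above):

```swift
import ImageIO

// Create a mutable metadata container.
let metadata = CGImageMetadataCreateMutable()

// Register the gain-map namespace/prefix pair before setting any tags in it.
// Without this, CGImageMetadataSetTagWithPath can fail for custom namespaces.
var error: Unmanaged<CFError>?
let registered = CGImageMetadataRegisterNamespaceForPrefix(
    metadata,
    "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
    "HDRGainMap" as CFString,
    &error
)

if registered,
   let tag = CGImageMetadataTagCreate(
       "http://ns.apple.com/HDRGainMap/1.0/" as CFString,
       "HDRGainMap" as CFString,
       "HDRGainMapHeadroom" as CFString,
       .default,
       3.56 as CFNumber
   ) {
    let success = CGImageMetadataSetTagWithPath(
        metadata, nil, "HDRGainMap:HDRGainMapHeadroom" as CFString, tag
    )
    print("tag set:", success)
}
```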
Image I/O
Read and write most image file formats, manage color, and access image metadata using Image I/O.
Posts under the Image I/O tag
Hi, I have a problem that I can't solve, and I hope you can help me:
When I switch tabs or scroll down, the background color of the tab bar changes automatically.
This started happening with Xcode 16.0.
Can you help me, please?
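Not an authoritative answer, but this symptom usually comes from the tab bar's scroll-edge appearance differing from its standard appearance. A minimal sketch of pinning both to one opaque appearance, assuming a UITabBarController subclass (the class name here is hypothetical):

```swift
import UIKit

// Sketch: pin the tab bar to a single opaque appearance so it stops
// changing as content scrolls underneath it.
final class FixedAppearanceTabBarController: UITabBarController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let appearance = UITabBarAppearance()
        appearance.configureWithOpaqueBackground()
        appearance.backgroundColor = .systemBackground
        tabBar.standardAppearance = appearance
        tabBar.scrollEdgeAppearance = appearance
    }
}
```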
I am using Apple’s Vision framework with DetectHorizonRequest to detect the horizon in an image. Here is my code:
```swift
import Vision

func processHorizonImage(_ ciImage: CIImage) async {
    let request = DetectHorizonRequest()
    do {
        let result = try await request.perform(on: ciImage)
        print(result)
    } catch {
        print(error)
    }
}
```
After calling the perform method, I get a nil result. To ensure the request is set up correctly, I have verified the following:
The input CIImage is valid and contains a visible horizon.
No errors are being thrown.
The relevant frameworks are properly imported.
Given that my image contains a clear horizon, why am I still not getting any results? I would appreciate any help or suggestions to resolve this issue.
Thank you for your support!
(The test image was attached to the original post.)
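As a cross-check, the same image can be run through the older VNDetectHorizonRequest API; if that also returns no observations, the problem is likely the image rather than the new request type. A minimal sketch (the function name is mine):

```swift
import Vision
import CoreImage

// Cross-check with the long-standing VNDetectHorizonRequest.
func detectHorizonLegacy(in ciImage: CIImage) {
    let request = VNDetectHorizonRequest()
    let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
    do {
        try handler.perform([request])
        if let observation = request.results?.first {
            // The angle is in radians, measured from horizontal.
            print("horizon angle:", observation.angle)
        } else {
            print("no horizon observation")
        }
    } catch {
        print(error)
    }
}
```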
Hello, during development of my website I discovered that when there are numerous or large WebP images on screen, the screen keeps flickering and the phone heats up. The page returns to normal after converting the images to PNG. The issue seems to have been introduced by the iOS 18 update. Could you please take a look?
PNG (it works)
https://static.xdbbtswu.com/bbt_of/assets/test/good/#/
WebP (it doesn't work)
https://static.xdbbtswu.com/bbt_of/assets/test/nogood/#/
I am a developer working on iOS apps.
In a demo I planned to replace the local PNG images with HEIC versions, but the actual test results showed abnormalities on one device, while the other test devices displayed the images normally.
The HEIC images were converted with the built-in image conversion function on Mac. I tested multiple HEIC images; none of them were displayed, and the image information returned nil. PNG images display normally on the same device.
device information:
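One way to narrow this down on the affected device is to bypass UIImage and ask Image I/O directly whether the file decodes; a minimal diagnostic sketch (the file name is a placeholder):

```swift
import ImageIO

// Check whether Image I/O on this device can even open the HEIC file.
if let url = Bundle.main.url(forResource: "sample", withExtension: "heic"),
   let source = CGImageSourceCreateWithURL(url as CFURL, nil) {
    print("image count:", CGImageSourceGetCount(source))
    print("type:", CGImageSourceGetType(source) ?? "unknown" as CFString)
    if let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) {
        print(properties)
    } else {
        print("properties are nil — the decoder likely rejected the file")
    }
} else {
    print("could not create an image source for the file")
}
```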
Hi everyone,
I've been working with the autoAdjustmentFilters provided by Core Image, which include filters like CIHighlightShadowAdjust, CIVibrance, and CIToneCurve. However, I've noticed that the results differ significantly from the "Auto" enhancement feature in the Photos app. In the Photos app, the Auto function seems to adjust multiple parameters, such as contrast, exposure, white balance, highlights, and shadows, in a more advanced manner.
Is there an API or a framework available that can replicate the more sophisticated "Auto" adjustments as seen in the Photos app? Or would I need to manually combine filters (like CIExposureAdjust, CIWhitePointAdjust, etc.) to approximate this functionality?
Any insights or recommendations on how to achieve this would be greatly appreciated. Thank you!
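As far as I know, the Photos "Auto" pipeline is not exposed as a public API, so chaining the suggested filters is the usual starting point. A minimal sketch of applying whatever autoAdjustmentFilters returns, in order (the function name is mine):

```swift
import CoreImage

// Apply Core Image's suggested auto-adjustment filters in sequence.
func autoAdjusted(_ input: CIImage) -> CIImage {
    var image = input
    let filters = image.autoAdjustmentFilters(options: [.redEye: false])
    for filter in filters {
        filter.setValue(image, forKey: kCIInputImageKey)
        if let output = filter.outputImage {
            image = output
        }
    }
    return image
}
```

This only reproduces Core Image's suggestions, not the Photos result; approximating Photos more closely means hand-tuning additional filters such as CIExposureAdjust on top.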
Apps like Viber and WhatsApp are no longer able to recognize or upload photos after installing iOS 18.0.1.
I am building a photos app and want to create a photo sharing feature like the one in Apple's Photos app.
Please see Steps to Reproduce and attached project.
The current share method has the following issues:
The file name of the shared photo changes to "FullSizeRender".
The creation and modification dates of shared photos change to the date they were edited or shared.
I want to ensure that the following conditions are met:
Share the latest edited version.
The creation date should be when the original photo was first created.
How can I improve the code?
STEPS TO REPRODUCE
```swift
class PHAssetShareManager {
    static func shareAssets(_ assets: [PHAsset], from viewController: UIViewController, sourceView: UIView) {
        let manager = PHAssetResourceManager.default()
        var filesToShare: [URL] = []
        let group = DispatchGroup()

        for asset in assets {
            group.enter()
            getAssetFile(asset, resourceManager: manager) { fileURL in
                if let fileURL = fileURL {
                    filesToShare.append(fileURL)
                }
                group.leave()
            }
        }

        group.notify(queue: .main) {
            self.presentShareSheet(filesToShare, from: viewController, sourceView: sourceView)
        }
    }

    private static func getAssetFile(_ asset: PHAsset, resourceManager: PHAssetResourceManager, completion: @escaping (URL?) -> Void) {
        print("getAssetFile")
        let resources: [PHAssetResource]

        switch asset.mediaType {
        case .image:
            if asset.mediaSubtypes.contains(.photoLive) {
                // let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePairedVideo }
                // let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .pairedVideo }
                let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePhoto }
                let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .photo }
                resources = editedResources.isEmpty ? originalResources : editedResources
            } else {
                let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizePhoto }
                let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .photo }
                resources = editedResources.isEmpty ? originalResources : editedResources
            }
        case .video:
            let editedResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .fullSizeVideo }
            let originalResources = PHAssetResource.assetResources(for: asset).filter { $0.type == .video }
            resources = editedResources.isEmpty ? originalResources : editedResources
        default:
            print("Unsupported media type")
            completion(nil)
            return
        }

        guard let resource = resources.first else {
            print("No resource found")
            completion(nil)
            return
        }

        let fileName = resource.originalFilename
        let tempDirectoryURL = FileManager.default.temporaryDirectory
        let localURL = tempDirectoryURL.appendingPathComponent(fileName)

        // Delete existing files and reset cache
        if FileManager.default.fileExists(atPath: localURL.path) {
            do {
                try FileManager.default.removeItem(at: localURL)
            } catch {
                print("Error removing existing file: \(error)")
            }
        }

        let options = PHAssetResourceRequestOptions()
        options.isNetworkAccessAllowed = true

        resourceManager.writeData(for: resource, toFile: localURL, options: options) { error in
            if let error = error {
                print("Error writing asset data: \(error)")
                completion(nil)
            } else {
                completion(localURL)
            }
        }
    }

    private static func presentShareSheet(_ items: [Any], from viewController: UIViewController, sourceView: UIView) {
        print("presentShareSheet")
        let activityViewController = UIActivityViewController(activityItems: items, applicationActivities: nil)
        if UIDevice.current.userInterfaceIdiom == .pad {
            activityViewController.popoverPresentationController?.sourceView = sourceView
            activityViewController.popoverPresentationController?.sourceRect = sourceView.bounds
        }
        viewController.present(activityViewController, animated: true, completion: nil)
    }
}
```
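For the creation-date issue, one option (not Photos-specific) is to stamp the temporary file with the asset's own dates after writeData completes. A sketch of that idea; applyAssetDates is a hypothetical helper, and whether a share target preserves these file attributes is up to the receiving app:

```swift
import Foundation
import Photos

// Hypothetical helper: after writeData(for:toFile:) succeeds, copy the
// asset's dates onto the temp file so it carries the original creation date.
func applyAssetDates(from asset: PHAsset, to fileURL: URL) {
    var attributes: [FileAttributeKey: Any] = [:]
    if let created = asset.creationDate {
        attributes[.creationDate] = created
    }
    if let modified = asset.modificationDate {
        attributes[.modificationDate] = modified
    }
    do {
        try FileManager.default.setAttributes(attributes, ofItemAtPath: fileURL.path)
    } catch {
        print("Could not set file dates: \(error)")
    }
}
```

For the "FullSizeRender" name, one option is to keep the edited resource's data but take the file name from the original (.photo) resource instead of the edited one.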
I have a user whose app keeps crashing on his iOS 18 device, and I need some help.
Exception 1, Code 26, Subcode 8 > Attempted to dereference garbage pointer 0x1a.
```
0   ImageIO                  IIOScanner::getVal32() + 36
1   ImageIO                  PSDReadPlugin::initialize(IIODictionary*) + 620
2   ImageIO                  PSDReadPlugin::initialize(IIODictionary*) + 620
3   ImageIO                  IIOReadPlugin::callInitialize() + 400
4   ImageIO                  IIO_Reader::initImageAtOffset(CGImagePlugin*, unsigned long, unsigned long, unsigned long) + 164
5   ImageIO                  IIOImageSource::makeImagePlus(unsigned long, IIODictionary*) + 832
6   ImageIO                  IIOImageSource::getPropertiesAtIndexInternal(unsigned long, IIODictionary*) + 72
7   ImageIO                  IIOImageSource::createThumbnailAtIndex(unsigned long, IIODictionary*, int*) + 1352
8   ImageIO                  CGImageSourceCreateThumbnailAtIndex + 740
9   Photos                   _createDecodedImageUsingImageIOWithFileUrlOrData + 856
10  Photos                   __91-[PHImageIODecoder decodeImageFromData:orFileURL:options:existingRequestHandle:completion:]_block_invoke_2 + 176
11  libdispatch.dylib        _dispatch_call_block_and_release + 32
12  libdispatch.dylib        _dispatch_client_callout + 20
13  libdispatch.dylib        _dispatch_continuation_pop + 596
14  libdispatch.dylib        _dispatch_async_redirect_invoke + 580
15  libdispatch.dylib        _dispatch_root_queue_drain + 392
16  libdispatch.dylib        _dispatch_worker_thread2 + 156
17  libsystem_pthread.dylib  _pthread_wqthread + 228
```
I'm adding support for Genmoji to my app, but it's unclear to me whether this is something I should call manually in code to insert an adaptive image glyph, or something the system calls when the user inserts a Genmoji / adaptive image glyph object (i.e., a sticker) into the UITextView.
I'm using UIKit to display a long list of large images inside a SwiftUI ScrollView and LazyHStack using UIViewControllerRepresentable. When an image is loaded, I'm using SDWebImage to load the image from the disk.
As the user navigates through the list and continues to load more images, more memory is used and is never cleared, even as the images are unloaded by the LazyHStack. Eventually, the app reaches the memory limit and crashes. This issue persists if I load the image with UIImage(contentsOfFile: ...) instead of SDWebImage.
How can I free the memory used by UIImage when the view is removed?
```swift
ScrollView(.horizontal, showsIndicators: false) {
    LazyHStack(spacing: 16) {
        ForEach(allItems) { item in
            TestImageDisplayRepresentable(item: item)
                .frame(width: geometry.size.width, height: geometry.size.height)
                .id(item.id)
        }
    }
    .scrollTargetLayout()
}
```
```swift
import UIKit
import SwiftUI
import SDWebImage

class TestImageDisplay: UIViewController {
    var item: TestItem

    init(item: TestItem) {
        self.item = item
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
        imageView.center = view.center
        view.addSubview(imageView)
        imageView.sd_setImage(with: item.imageURL, placeholder: nil)
    }
}

struct TestImageDisplayRepresentable: UIViewControllerRepresentable {
    var item: TestItem

    func makeUIViewController(context: Context) -> TestImageDisplay {
        return TestImageDisplay(item: item)
    }

    func updateUIViewController(_ uiViewController: TestImageDisplay, context: Context) {
        uiViewController.item = item
    }
}
```
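One common mitigation is to decode a downsampled bitmap instead of the full-size image, so each cell only holds the pixels it actually displays. A minimal sketch using Image I/O, independent of SDWebImage (the function name is mine):

```swift
import UIKit
import ImageIO

// Decode a downsampled UIImage so memory scales with the target size,
// not the file's full pixel dimensions.
func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```

It may also help to nil out imageView.image when the controller's view goes offscreen, so unloaded cells release their bitmaps promptly.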
My app dynamically loads different immersive furniture-design scenes.
After each scene is loaded, I need to set an HDR image as the image-based light.
How can I load an EnvironmentResource dynamically, so that I can set the ImageBasedLightComponent at runtime?
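In case it helps, RealityKit has an async initializer for environment resources bundled with the app; a minimal sketch, assuming visionOS and that "Sunlight" is a placeholder for one of your bundled environment resources:

```swift
import RealityKit

// Load an environment resource by name and attach it as an image-based light.
func applyImageBasedLight(named name: String, to entity: Entity) async {
    do {
        let resource = try await EnvironmentResource(named: name)
        entity.components.set(ImageBasedLightComponent(source: .single(resource)))
        entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
    } catch {
        print("Failed to load environment resource: \(error)")
    }
}
```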
Hi Team,
We have been working on an image processing app developed with React. The app makes XMLHttpRequests to a server and stores the responses in a cache of around 200MB-250MB. We track the memory footprint using the Xcode Instruments tool.
While downloading and rendering the data, Instruments shows a memory footprint of around 800MB-1000MB. We suspect that garbage collection is not working as expected, or that some resources are not released after use, which would explain a footprint this high for 200MB-250MB of data. When the data changes, we remove the existing data from the cache and store the new data; however, deleting the data from the cache does not release the memory immediately, taking three seconds or more.
In the meantime, memory is also allocated for the new data, which increases the app's overall footprint, and in some cases the app crashes. The maximum memory we have seen averages 1.5GB, varying with device configuration. When we try the same activity in the Safari browser, memory is released immediately. When the app releases the initially acquired memory before loading new data, we see far fewer crashes. We need help understanding whether there is a way to release memory immediately to avoid the crashes.
To reproduce this scenario, we created a simple app that allocates a 100MB array and checks the memory footprint with Instruments. When we create the 100MB array, the footprint sometimes peaks around 700MB-800MB, and when we clear the array by assigning an empty array, the memory is released only after 2-3 seconds.
If you create an array, remove it, immediately create and remove a new array of the same size, and repeat these steps a few times, the memory footprint keeps growing because memory is not released in time, and the app eventually crashes.
I have a 3x3 matrix that I need to apply to a UIImage and then save the result to the Documents folder. I successfully converted the 3x3 matrix (represented as [[Double]]) to a CATransform3D, but then hit a wall trying to figure out how to apply it to a UIImage.
The only property I can apply it to is a UIView's (or UIImageView's, when working with a UIImage) transform property, but that has nothing to do with the UIImage itself: I can't save a UIImage from the transformed UIImageView with the transformations applied.
And the Core Graphics methods (like concatenate(_:) on CGContext) only work with affine transformations, which don't suit my needs.
Please give me a hint as to which direction I should look. Does Apple have native methods for this, or do I have to use third-party frameworks?
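One workable direction: a 3x3 matrix is a homography, so you can map the image's four corners through it and hand those corners to Core Image's CIPerspectiveTransform filter, which renders to a CGImage you can save. A sketch of that idea, assuming a row-major matrix defined in Core Image's bottom-left coordinate space (flip the y axis first if it came from UIKit coordinates); both function names are mine:

```swift
import UIKit
import CoreImage

// Map a point through a row-major 3x3 homography (with perspective divide).
func apply(_ m: [[Double]], to p: CGPoint) -> CGPoint {
    let (px, py) = (Double(p.x), Double(p.y))
    let x = m[0][0] * px + m[0][1] * py + m[0][2]
    let y = m[1][0] * px + m[1][1] * py + m[1][2]
    let w = m[2][0] * px + m[2][1] * py + m[2][2]
    return CGPoint(x: x / w, y: y / w)
}

// Warp a UIImage by mapping its corners through the homography and
// feeding them to CIPerspectiveTransform.
func warped(_ image: UIImage, by m: [[Double]]) -> UIImage? {
    guard let ciImage = CIImage(image: image),
          let filter = CIFilter(name: "CIPerspectiveTransform") else { return nil }
    let r = ciImage.extent
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    // Core Image's origin is bottom-left, so "top" corners use maxY.
    filter.setValue(CIVector(cgPoint: apply(m, to: CGPoint(x: r.minX, y: r.maxY))), forKey: "inputTopLeft")
    filter.setValue(CIVector(cgPoint: apply(m, to: CGPoint(x: r.maxX, y: r.maxY))), forKey: "inputTopRight")
    filter.setValue(CIVector(cgPoint: apply(m, to: CGPoint(x: r.minX, y: r.minY))), forKey: "inputBottomLeft")
    filter.setValue(CIVector(cgPoint: apply(m, to: CGPoint(x: r.maxX, y: r.minY))), forKey: "inputBottomRight")
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}
```

The resulting UIImage can then be encoded with pngData() or jpegData(compressionQuality:) and written to the Documents folder.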
I have an iOS app using WKWebView (Ionic framework). My images (hosted remotely, not local) are visible in my app on iOS 17 but not on iOS 18.
Were there changes in WKWebView that affect this, or am I missing something else?
Dear all,
I'm building my first macOS app.
I created my app icon and added it to the AppIcon asset, but when I build the application, the icon shows in the Dock with sharp corners instead of the rounded borders all the other apps have.
I'm attaching the icon here; as you can see, it has sharp edges, and it appears the same way in the Dock.
Why? Has anybody experienced the same?
Thanks for the support in advance,
A.
It seems that there’s still no way to get all TIFF tags from a TIFF image, is that right? I've got these GeoTIFF images that have a handful of specialized TIFF tags in them. Calling CGImageSourceCopyPropertiesAtIndex(), I can see basic properties common to all TIFF images, like dimensions and color/pixel information, but no others.
Short of including libtiff, is there another way to get at the metadata? I've tried all of the options in CGImageSourceCopyAuxiliaryDataInfoAtIndex.
I've filed a few bug reports about this since 2020, all ignored.
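One more Image I/O avenue worth trying, if you haven't: CGImageSourceCopyMetadataAtIndex returns the XMP-level CGImageMetadata, which sometimes surfaces tags the properties dictionary omits. A minimal sketch (the function name is mine):

```swift
import ImageIO

// Dump every XMP metadata tag Image I/O exposes for the first image.
func dumpMetadata(at url: URL) {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let metadata = CGImageSourceCopyMetadataAtIndex(source, 0, nil),
          let tags = CGImageMetadataCopyTags(metadata) as? [CGImageMetadataTag] else {
        return
    }
    for tag in tags {
        let name = CGImageMetadataTagCopyName(tag) as String? ?? "?"
        print(name, CGImageMetadataTagCopyValue(tag) ?? "nil")
    }
}
```

Whether the GeoTIFF-specific tags show up there is another question; for a complete read of custom TIFF tags, libtiff may indeed still be the only option.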
Hi, Experts
I am using PHPickerViewController to open photos on iPhone. It works well for most photos, such as JPEG and HEIF, but it fails for a RAW photo on an iPhone 14 Plus, whose type identifier is com.adobe.raw-image.
I use result.itemProvider.loadFileRepresentation(forTypeIdentifier: "com.adobe.raw-image") to load the RAW photo, and it always fails with "Error loading file representation: Cannot load representation of type com.adobe.raw-image".
I tried some other parameters, such as forTypeIdentifier: public.image and public.camera-raw-image, but neither worked.
How can I load this type of RAW photo?
Here are the details of my code:
```swift
// MARK: - PHPickerViewControllerDelegate
func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true, completion: nil)
    var resultIndex = 0
    DDLogInfo("Pick \(results.count) photos")
    for result in results {
        resultIndex += 1
        DDLogInfo("Process \(resultIndex) photo")
        DDLogInfo("Registered type identifiers for itemProvider:")
        for typeIdentifier in result.itemProvider.registeredTypeIdentifiers {
            DDLogInfo("TypeIdentifier \(typeIdentifier)")
        }
        if result.itemProvider.hasItemConformingToTypeIdentifier(UTType.image.identifier) {
            DDLogInfo("Result \(resultIndex) is image")
        }
        if result.itemProvider.canLoadObject(ofClass: UIImage.self) {
            DDLogInfo("Can load \(resultIndex) image")
            // more code for photo
        } else {
            DDLogInfo("Load special image, such as raw")
            result.itemProvider.loadFileRepresentation(forTypeIdentifier: "com.adobe.raw-image") { url, error in
                if let error = error {
                    DDLogInfo("Error loading file representation: \(error.localizedDescription)")
                    return
                }
                // (snippet truncated in the original post)
            }
        }
    }
}
```
Hi,
My app currently uses the ImageCaptureCore framework to work with DSLR cameras. But when I tested it on iOS 18, it turned out my camera cannot connect to the iPhone over a wired connection.
Some other developers seem to have run into the same problem:
https://forums.developer.apple.com/forums/thread/756960
https://stackoverflow.com/questions/78618886/icdevicebrowser-fails-to-find-any-devices-after-ios-18-update
The issue also reproduces in other apps that are expected to use the ImageCaptureCore framework.
I'd like to clarify:
Is this an iOS 18 bug?
Is there any plan at Apple to remove wired-connection support from the ImageCaptureCore framework?
Thank you.
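For anyone trying to reproduce, the failing setup is the standard device browser configuration; a minimal sketch (class name is mine, other delegate methods elided):

```swift
import ImageCaptureCore

final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start() // on iOS 18, didAdd is reportedly never called for wired cameras
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        print("found device:", device.name ?? "unnamed")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("removed device:", device.name ?? "unnamed")
    }
}
```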
I am currently running iOS 18 Beta 3 and am working on enabling users to paste (and copy) custom emojis (AdaptiveImageGlyph, such as Memoji, Stickers, and soon GenMoji) into a text field.
I am looking for the UTI for AdaptiveImageGlyph—something similar to "public.adaptive-image-glyph". Does anyone know if such a UTI exists?
Here's my situation: when typing AdaptiveImageGlyph using the system keyboard, everything functions correctly. However, if I copy some text containing AdaptiveImageGlyph from the Notes app and paste it into my playground app, only the plain text is pasted. The reverse is also true: if I copy some AdaptiveImageGlyph from the playground app and paste it elsewhere, only the text is pasted.
Interestingly, copying AdaptiveImageGlyph from the Notes app and pasting it into iMessage works flawlessly, and vice versa. I am trying to achieve the same seamless functionality in my app.
Given that this feature works in iMessage and Notes, I am inclined to believe the issue might be on my side, though I recognize these are system apps and not third-party.
Example Code:
```swift
import SwiftUI
import UIKit

struct AdaptiveImageGlyphTextView: UIViewRepresentable {
    class Coordinator: NSObject, UITextViewDelegate {
        var parent: AdaptiveImageGlyphTextView

        init(parent: AdaptiveImageGlyphTextView) {
            self.parent = parent
        }

        func textViewDidChange(_ textView: UITextView) {
            parent.text = textView.text
        }

        func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
            // Handle insertion of adaptive image glyphs here if needed
            return true
        }
    }

    @Binding var text: String

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    func makeUIView(context: Context) -> UITextView {
        let textView = UITextView()
        textView.delegate = context.coordinator
        textView.supportsAdaptiveImageGlyph = true
        textView.isEditable = true
        textView.isSelectable = true
        textView.font = UIFont.systemFont(ofSize: 17)

        // Enable paste with NSAdaptiveImageGlyphs
        textView.pasteConfiguration = UIPasteConfiguration(acceptableTypeIdentifiers: [
            "public.text",
            "public.image",
            "public.adaptive-image-glyph" // Replace with the correct UTI if different
        ])

        return textView
    }

    func updateUIView(_ uiView: UITextView, context: Context) {
        if uiView.text != text {
            uiView.text = text
        }
    }
}

struct ContentView: View {
    @State private var text: String = ""

    var body: some View {
        AdaptiveImageGlyphTextView(text: $text)
            .frame(height: 200)
            .padding()
    }
}

#Preview {
    ContentView()
}
```
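In case it's useful to others: since Notes and iMessage appear to exchange attributed text, one approach is to put the text view's attributedText on the pasteboard as flat RTFD (type "com.apple.flat-rtfd"), which can embed the glyph data, rather than plain text. A sketch, assuming NSAdaptiveImageGlyph round-trips through RTFD serialization (the helper name is mine):

```swift
import UIKit

// Hypothetical helper: copy attributed text (including adaptive image
// glyphs) to the pasteboard as flat RTFD.
func copyAttributedText(from textView: UITextView) {
    let attributed = textView.attributedText ?? NSAttributedString()
    let range = NSRange(location: 0, length: attributed.length)
    do {
        let data = try attributed.data(
            from: range,
            documentAttributes: [.documentType: NSAttributedString.DocumentType.rtfd]
        )
        UIPasteboard.general.setData(data, forPasteboardType: "com.apple.flat-rtfd")
    } catch {
        print("Could not serialize attributed text: \(error)")
    }
}
```

The paste direction would read the same type back and initialize an NSAttributedString from the data, then assign it to attributedText.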