I want to create an XCFramework for this repo: https://github.com/BradLarson/GPUImage, but failed.
I downloaded the repo and ran the following:
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage"

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS Simulator" \
  -archivePath "archivessimulator/GPUImage"

xcodebuild -create-xcframework \
  -archive archives/GPUImage.xcarchive -framework GPUImage.framework \
  -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework \
  -output xcframeworks/GPUImage.xcframework
This fails with the errors: 'cryptexDiskImage' is an unknown content type, and 'com.apple.platform.xros' is an unknown platform identifier.
Hi,
I'm looking to update the metadata properties of a DNG image stored on disc, saving to a new file.
Using ImageIO's CGImageSource and CGImageDestination classes, I run into a problem whereby the destination doesn't support the type of the source. For example:
let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
if
    let cgImageSource = CGImageSourceCreateWithURL(sourceURL as CFURL, imageSourceOptions),
    let type = CGImageSourceGetType(cgImageSource) {
    guard let imageDestination = CGImageDestinationCreateWithURL(destinationURL as CFURL, type, 1, nil) else {
        fatalError("Unable to create image destination")
    }
    // Code to update properties and write out to destination URL
}
When this code is executed I get the following errors on the command line when trying to create the destination:
2024-06-30 11:52:25.531530+0100 ABC[7564:273101] [ABC] findWriterForTypeAndAlternateType:119: *** ERROR: unsupported output file format 'com.adobe.raw-image'
2024-06-30 11:52:25.531661+0100 ABC[7564:273101] [ABC] CGImageDestinationCreateWithURL:4429: *** ERROR: CGImageDestinationCreateWithURL: failed to create 'CGImageDestinationRef'
I don't see a way to create a destination directly from a source. The code works for, say, a JPEG file, but I want it to work with any image format that CGImageSource can read.
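Not an answer from the thread, just a sketch of one thing worth checking: CGImageDestination only writes a subset of the types CGImageSource can read, and CGImageDestinationCopyTypeIdentifiers() lists the writable ones, so the source's type (e.g. 'com.adobe.raw-image') can be verified before the destination is created. The TIFF fallback here is an assumption for illustration only.

import ImageIO
import UniformTypeIdentifiers

// Sketch: return the source's type if CGImageDestination can write it,
// otherwise fall back to an assumed alternative type.
func destinationType(forSourceType sourceType: CFString) -> CFString {
    let writableTypes = (CGImageDestinationCopyTypeIdentifiers() as? [String]) ?? []
    if writableTypes.contains(sourceType as String) {
        return sourceType
    }
    // Fallback chosen for illustration; writing back to DNG may not be
    // supported at all, so a different strategy might be needed.
    return UTType.tiff.identifier as CFString
}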
I want to display my own image in an inline widget. Using the SwiftUI Image syntax doesn't show the image, so following advice from online forums, I used the syntax Image(uiImage: UIImage(named: String)). This successfully displays the image, but if I change the image file name in the app, the image doesn't update properly.
I tested displaying the image file name using Text in the inline widget, and it correctly shows the updated file name from my app. So, my AppStorage and AppGroups seem to be working correctly.
I'd like to ask if there's a way to update my images properly and if there's an alternative method to display images without converting them to UIImage. Thanks.
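A minimal sketch of one way a widget can pick up an image shared from the app through the App Group container instead of the asset catalog; the group identifier and file name are assumptions for illustration.

import SwiftUI
import UIKit

// Sketch: load the image from the shared App Group container when the
// timeline entry is built, so a renamed or replaced file is picked up.
func sharedImage(named fileName: String) -> UIImage? {
    guard let containerURL = FileManager.default
        .containerURL(forSecurityApplicationGroupIdentifier: "group.com.example.myapp") // assumed identifier
    else { return nil }
    return UIImage(contentsOfFile: containerURL.appendingPathComponent(fileName).path)
}

// In the widget view this still goes through UIImage:
// if let ui = sharedImage(named: imageName) { Image(uiImage: ui) }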
In the example https://developer.apple.com/documentation/imageio/writing-spatial-photos, we see that for each image encoded with the photo we include the following information:
kCGImagePropertyGroups: [
    kCGImagePropertyGroupIndex: 0,
    kCGImagePropertyGroupType: kCGImagePropertyGroupTypeStereoPair,
    (isLeft ? kCGImagePropertyGroupImageIsLeftImage : kCGImagePropertyGroupImageIsRightImage): true,
    kCGImagePropertyGroupImageDisparityAdjustment: encodedDisparityAdjustment
],
This identifies which image is left and which is right, along with the group type (stereo pair).
Now, how do you read those back?
I tried a simple read with CGImageSourceCopyPropertiesAtIndex, but that did not work; I get back "No property groups found."
func tryToReadThose() {
    guard
        let imageData = try? Data(contentsOf: outputImageURL),
        let source = CGImageSourceCreateWithData(imageData as NSData, nil)
    else {
        print("cannot read")
        return
    }
    for i in 0..<CGImageSourceGetCount(source) {
        guard let imageProperties = CGImageSourceCopyPropertiesAtIndex(source, i, nil) as? [String: Any] else {
            print("cannot read options")
            continue
        }
        if let propertyGroups = imageProperties[String(kCGImagePropertyGroups)] as? [Any] {
            // Process the property groups as needed
            print(propertyGroups)
        } else {
            print("No property groups found.")
        }
        //print(imageProperties)
    }
}
I assume CGImageSourceCopyPropertiesAtIndex may expect something as its third (options) parameter, but under "Specifying the Read Options" in https://developer.apple.com/documentation/imageio/cgimagesource I don't see anything related to that.
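Not a confirmed answer, just a sketch of another place worth looking: image sources also expose container-level properties through CGImageSourceCopyProperties (no index), so the stereo-pair groups might be surfaced there rather than per image.

import ImageIO

// Sketch: also check the container-level properties, in addition to
// the per-index properties read in the function above.
func containerPropertyGroups(of source: CGImageSource) -> [Any]? {
    guard let props = CGImageSourceCopyProperties(source, nil) as? [String: Any] else { return nil }
    return props[String(kCGImagePropertyGroups)] as? [Any]
}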
I am capturing a screenshot with SCScreenshotManager's captureImageWithFilter. The resulting PNG has the same resolution as the PNG taken from Command-Shift-3 (4112x2658) but is 10x larger (14.4MB vs 1.35MB).
My SCStreamConfiguration uses the SCDisplay's width and height and sets the color space to kCGColorSpaceSRGB.
I currently save to file by initializing a NSBitmapImageRep using initWithCGImage, then representing as PNG with representationUsingType NSBitmapImageFileTypePNG, then writeToFile:atomically.
Is there some configuration or compression I can use to bring the PNG size down to be more closely in line with a Command-Shift-3 screenshot?
Thanks!
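For context, a minimal Swift sketch of the save path described above (the function name and call site are assumptions); the question is whether some property or alternative API in this path can shrink the PNG.

import AppKit

// Sketch of the described pipeline: CGImage -> NSBitmapImageRep -> PNG data -> file.
func savePNG(_ cgImage: CGImage, to url: URL) throws {
    let rep = NSBitmapImageRep(cgImage: cgImage)
    guard let pngData = rep.representation(using: .png, properties: [:]) else {
        throw CocoaError(.fileWriteUnknown)
    }
    try pngData.write(to: url, options: .atomic)
}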
I am using the following shell script to return an image preview for use in FileMaker:
qlmanage -t [sourcePath] -s 512 -o [outputPath]
This usually works well, but it hangs if the RGB image (.tif, .psb, or .psd) has too many alpha channels (>20 if on a transparent background; >21 if flattened).
This issue can also be seen when looking at the image thumbnail or preview in the Finder. It appears macOS won't create a thumbnail when the image has over 21 alpha channels; it just shows the default tif/psb/psd thumbnail, even if the image is very small.
Environment
macOS Sonoma 14.4.1
Adobe Photoshop 2024 (25.6.0)
Maximize PSD and PSB File Compatibility is enabled when saved from Photoshop
Since I'm only able to upload a screenshot to this post, the original test files can be found in the Adobe Forum with the Title: "MacOS Finder Preview Limited to 20-21 Alpha Channels?"
When I fetch images and pass them to
await faceapi.fetchImage(label);
I get ReferenceError: Can't find variable: FileReader.
Please help with this.
In my SwiftUI view, I try to load the image from data.
var body: some View {
    Group {
        if let data = model.detailImageData, let uiimage = UIImage(data: data) { // no memory issue
            Image(uiImage: uiimage)
                .resizable()
                .scaledToFit()
        }
    }
}
But I want to get the HDR version of my image, so I use
if let data = model.detailImageData, let uiimage = UIImageReader.default.image(data: data) { // memory leaks!!!
When I change the data, the memory of the previous image is never freed, which finally caused my app to crash.
You can see it in the Instruments screenshot.
I use this code to show the image in HDR in SwiftUI:
struct HDRImageView: UIViewRepresentable {
    // Set up a common reader for all UIImage read requests.
    static let reader: UIImageReader = {
        var config = UIImageReader.Configuration()
        config.prefersHighDynamicRange = true
        return UIImageReader(configuration: config)
    }()

    let data: Data?
    let enableHDR: Bool

    func makeUIView(context: Context) -> UIImageView {
        let view = UIImageView()
        view.preferredImageDynamicRange = enableHDR ? .high : .standard
        update(view)
        // Set this view to fit itself to the parent view.
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .vertical)
        view.setContentHuggingPriority(.required, for: .horizontal)
        view.setContentHuggingPriority(.required, for: .vertical)
        return view
    }

    func updateUIView(_ view: UIImageView, context: Context) {
        update(view)
    }

    func update(_ view: UIImageView) {
        autoreleasepool { // not working
            if let data = data {
                view.image = nil // setting to nil first is not working
                view.image = HDRImageView.reader.image(data: data)
            } else {
                view.image = nil
            }
            view.preferredImageDynamicRange = enableHDR ? .high : .standard
        }
    }
}
But when I update the input data, it seems the old image data cannot be freed.
After several changes, the app takes too much memory and crashes.
I found it's the VM: ImageIO_Surface_Data and VM_Image_IO allocations that take up the memory.
If I change HDRImageView into a normal Image(uiImage: UIImage(data:)), it no longer has this issue.
Is this a memory leak, and how do I solve it?
Update: I then tried using Image(_:cgImage), and it appears to give the same result.
I have an image viewing app with support for avif (and avis) images. I'm trying to figure out if the recent bug in CoreMedia (dav1d) affects my app. The Apple security update: https://support.apple.com/en-gb/HT214097
The vulnerable code path in dav1d is only reached when c->n_fc > 1 (https://code.videolan.org/videolan/dav1d/-/blob/2b475307dc11be9a1c3cc4358102c76a7f386a51/src/decode.c#L2845), where c is the dav1d context.
With some reverse engineering, the way I see CMPhoto calling into VideoToolbox (which internally calls into AV1SW.videodecoder, a wrapper around dav1d), the max frame delay is hardcoded to 1 in the dav1d settings, which in turn means that c->n_fc in dav1d is always 1, so the vulnerable code path is never reached.
From my understanding, this should mean that my app isn't affected. The Apple security update, however, clearly states that "Processing an image may lead to arbitrary code execution". Surely I'm missing something?
A simple view has misaligned localized content after being converted to an image using ImageRenderer.
This is still a problem on a real phone and in TestFlight.
I'm not sure what the problem is; I'm assuming it's an ImageRenderer bug.
I tried UIGraphicsImageRenderer, but it captures the image at an inaccurate position, and the resulting offset produces a white border. And I don't know why, in some cases, it encounters circular references that result in blank images.
"(1) days" is also not converted to "1 day" properly.
I have some depth map files in TIFF format that I am trying to extract data from programmatically. I see that I can import TIFF format images with NSImage, and from there, can get the raw pixel data. But how do I convert this to real distances? Any help would be appreciated, thanks!
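A minimal sketch of the NSImage/NSBitmapImageRep route mentioned above, assuming the TIFF stores one 32-bit float sample per pixel (as depth-map tools commonly write); whether those floats are already metres or need a scale factor depends on how the file was produced.

import AppKit

// Sketch: read a single-channel 32-bit float TIFF and return its raw values.
// Assumes .floatingPointSamples with one sample per pixel; real files may differ.
func readFloatDepthTIFF(at url: URL) -> [Float]? {
    guard let data = try? Data(contentsOf: url),
          let rep = NSBitmapImageRep(data: data),
          rep.bitmapFormat.contains(.floatingPointSamples),
          rep.samplesPerPixel == 1,
          rep.bitsPerSample == 32,
          let base = rep.bitmapData else { return nil }

    let width = rep.pixelsWide
    let height = rep.pixelsHigh
    let bytesPerRow = rep.bytesPerRow
    var values = [Float]()
    values.reserveCapacity(width * height)
    for y in 0..<height {
        // Reinterpret each row of bytes as Float values.
        let row = (base + y * bytesPerRow).withMemoryRebound(to: Float.self, capacity: width) { ptr in
            Array(UnsafeBufferPointer(start: ptr, count: width))
        }
        values.append(contentsOf: row)
    }
    return values
}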
I have a page that needs to display a large PNG image (1024 x 100247).
Everything works fine in Chrome and Edge, but it fails in Safari.
This is the test image:
https://storage-staging.passton.jp/images/2024/03/11/E0R6G8FKd3B3iLPO.png
Is there any limit in Safari?
if balloon == yellow1_balloon {
    soundFile = "Sounds/newblop.wav"
    playSound()
    balloon.isHidden = true
    poppedImages.isHidden = false
    poppedImages.animationImages = ["popyellow-1", "popyellow-2", "popyellow-3", "popyellow-4", "popyellow-5", "popyellow-6", "popyellow-7"]
        .compactMap({ name in
            UIImage(named: name)
        })
    let x: CGFloat = yellow1_balloon.frame.origin.x
    let y: CGFloat = yellow1_balloon.frame.origin.y
    poppedImages.frame.origin.x = x
    poppedImages.frame.origin.y = y
    poppedImages.animationDuration = 1.0
    poppedImages.animationRepeatCount = 1
    poppedImages.startAnimating()
    score = score + 10
    scoreLbl.text = String(score)
    return
}
The x, y coordinates are always the same as when yellow1_balloon is first created, not where it ends up after being touched.
I would like to save the depth map from ARDepthData as .tiff, but my output TIFF distances are incorrect. Objects that are close are reported to be slightly farther away, and walls that are around 4 meters away from me have a recorded value of 2 meters. I am using this code to write the TIFF:
import UIKit

// Save method
extension CVPixelBuffer {
    func saveDepthMapToTIFF(to path: URL) {
        let ciImage = CIImage(cvPixelBuffer: self)
        let context = CIContext()
        do {
            try context.writeTIFFRepresentation(
                of: ciImage,
                to: path,
                format: .Lf,
                colorSpace: CGColorSpaceCreateDeviceGray()
            )
        } catch {
            print("Failed to write TIFF: \(error)")
        }
    }
}

// Calling the save
arFrame.sceneDepth?.depthMap.saveDepthMapToTIFF(to: depthMapPath)
I am reading the file like this in Python
import tifffile
import matplotlib.pyplot as plt

depth_map = tifffile.imread("test.tiff")
plt.imshow(depth_map)
plt.colorbar()
which creates this image:
The farthest parts of the room should be around 4 meters, not 2. The dark blue spot on the lower right is closer than half a meter away.
Notably the depth map contains distances from the camera plane to each region, not the distance from the camera sensor to the region. Even correcting for this though, the depth map remains about the same.
Is there an issue with how I am saving the depth image? Is there a scale factor or format error?
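Not a fix, just a sketch of a sanity check under the assumption that depthMap is a kCVPixelFormatType_DepthFloat32 buffer: read a few raw float values straight from the CVPixelBuffer before the CIImage/TIFF round trip, to see whether the halving happens in the source data or in the save path.

import CoreVideo

// Sketch: return the raw Float32 depth value (in meters) at a given pixel.
func rawDepth(in buffer: CVPixelBuffer, x: Int, y: Int) -> Float? {
    guard CVPixelBufferGetPixelFormatType(buffer) == kCVPixelFormatType_DepthFloat32 else { return nil }
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(buffer),
          x < CVPixelBufferGetWidth(buffer), y < CVPixelBufferGetHeight(buffer) else { return nil }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    let rowPtr = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: Float32.self)
    return rowPtr[x]
}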
Under Sonoma 14.4 the compression option doesn't work with PNG images. It works for JPG/HEIF. Preview can export a PNG file to HEIC with a compression option. What am I missing? This worked previously. I am trying with 0.01 and 0.9 as the compression quality, and the file size is the same for PNG.
Is Preview using some trick to convert the image, such as ciContext.createCGImage?
PS: A compression option of 1.0 was broken under 14.4 RC, and Preview created an empty file.
func heifImageDataUsingDestination(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    guard let cgImage = CGImageSourceCreateImageAtIndex(imageSource, 0, nil) else { return nil }
    let mutableData = NSMutableData()
    guard let imageDestination = CGImageDestinationCreateWithData(mutableData, "public.heic" as CFString, 1, nil) else { return nil }
    let options = [kCGImageDestinationLossyCompressionQuality: compressionQuality] as CFDictionary
    CGImageDestinationAddImage(imageDestination, cgImage, options)
    let success = CGImageDestinationFinalize(imageDestination)
    if success {
        return mutableData as Data
    }
    return nil
}

func heifImageDataUsingCIContext(at url: URL, compressionQuality: CGFloat) -> Data? {
    guard let ciImage = CIImage(contentsOf: url) else { return nil }
    let context = CIContext()
    let colorspace = ciImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
    let options = [CIImageRepresentationOption(rawValue: kCGImageDestinationLossyCompressionQuality as String): compressionQuality]
    return context.heifRepresentation(of: ciImage, format: .RGBA8, colorSpace: colorspace, options: options)
}
I am developing an app that uses a data cable to link a camera. When I enter the page for the first time, I can detect the camera device, but when I exit the page and enter it again, I can no longer detect the linked camera.
- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    [self addImageCaptureCore];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        [self checkCameraConnection];
    });
}

- (void)checkCameraConnection {
    if (@available(iOS 13.0, *)) {
        NSArray<ICDevice *> *connectedDevices = self.browser.devices;
        if (connectedDevices.count > 0) {
            NSLog(@"Camera is connected");
        } else {
            NSLog(@"Camera is not connected");
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    if (@available(iOS 13.0, *)) {
        if (self.cameraDevice) {
            if (self.cameraDevice.hasOpenSession) {
                [self.cameraDevice requestCloseSession];
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            } else {
                dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.5 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
                    [self.browser stop];
                    self.browser.delegate = nil;
                    self.browser = nil;
                });
            }
        }
    } else {
        // Fallback on earlier versions
    }
}

- (void)addImageCaptureCore {
    if (@available(iOS 13.0, *)) {
        ICDeviceBrowser *browser = [[ICDeviceBrowser alloc] init];
        browser.delegate = self;
        [browser start];
        self.browser = browser;
    } else {
    }
}

#pragma mark - ICDeviceBrowserDelegate

- (void)deviceBrowser:(ICDeviceBrowser *)browser didAddDevice:(ICDevice *)device moreComing:(BOOL)moreComing API_AVAILABLE(ios(13.0)) {
    NSLog(@"Device name = %@", device.name);
    if ([device isKindOfClass:[ICCameraDevice class]]) {
        if ([device.capabilities containsObject:ICCameraDeviceCanAcceptPTPCommands]) {
            ICCameraDevice *cameraDevice = (ICCameraDevice *)device;
            cameraDevice.delegate = self;
            [cameraDevice requestOpenSession];
            self.cameraDevice = cameraDevice;
        }
    }
}

- (void)deviceBrowser:(ICDeviceBrowser *)browser didRemoveDevice:(ICDevice *)device moreGoing:(BOOL)moreGoing API_AVAILABLE(ios(13.0)) {
    if (self.cameraDevice) {
        if (self.cameraDevice.hasOpenSession) {
            [self.cameraDevice requestCloseSession];
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        } else {
            self.cameraDevice.delegate = nil;
            self.cameraDevice = nil;
        }
    }
}

#pragma mark - ICCameraDeviceDelegate

- (void)cameraDevice:(ICCameraDevice *)camera didAddItems:(NSArray<ICCameraItem *> *)items API_AVAILABLE(ios(13.0)) {
    if (items.count > 0) {
        ICCameraItem *latestItem = items.lastObject;
        NSLog(@"name = %@", latestItem.name);
    }
}

#pragma mark - ICDeviceDelegate

- (void)device:(ICDevice *)device didOpenSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Failed to open session %@", error.localizedDescription);
    } else {
        NSLog(@"open session success");
    }
}

- (void)device:(ICDevice *)device didCloseSessionWithError:(NSError *_Nullable)error API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"close session error = %@", error.localizedDescription);
    } else {
        NSLog(@"didCloseSession");
    }
}

- (void)didRemoveDevice:(ICDevice *)device {
}
How can I extract an object from a picture, or remove the background behind an object, just like creating stickers in the Photos app? Is there any official model or library other than using some website's API? (DeepLabV3.mlmodel cannot infer what I need.)
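Not necessarily what the Photos sticker feature uses internally, but a sketch assuming iOS 17's Vision subject-lifting request (VNGenerateForegroundInstanceMaskRequest), which produces a foreground mask without a custom model.

import Vision
import CoreImage

// Sketch: lift the foreground subject(s) out of a CGImage (iOS 17+ / macOS 14+).
func liftSubject(from cgImage: CGImage) throws -> CVPixelBuffer? {
    let request = VNGenerateForegroundInstanceMaskRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    guard let observation = request.results?.first else { return nil }
    // Returns the subject composited over transparency, cropped to its extent.
    return try observation.generateMaskedImage(
        ofInstances: observation.allInstances,
        from: handler,
        croppedToInstancesExtent: true
    )
}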
I want to use Core ML to process video data. The ML model will take multiple frames as input. How should I get multiple frames on iOS and process them?
Thanks in advance for any suggestions.
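A minimal sketch of one common approach, assuming the frames come from a video file: decode it with AVAssetReader, collect pixel buffers, and hand a window of consecutive frames to the model. The model-feeding step is omitted since it depends on the model's input description.

import AVFoundation
import CoreVideo

// Sketch: decode a video file frame by frame as BGRA pixel buffers.
func readFrames(from url: URL, frameHandler: (CVPixelBuffer) -> Void) throws {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    reader.startReading()
    while let sample = output.copyNextSampleBuffer() {
        if let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            frameHandler(pixelBuffer) // e.g. append to a sliding window of N frames for the model
        }
    }
}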