hdiutil bug?
When making a DMG image of the whole contents of the user1 profile (that is, using srcFolder = /Users/user1) with hdiutil, the program fails with:
/Users/user1/Library/VoiceTrigger: Operation not permitted
hdiutil: create failed - Operation not permitted
The complete command used was: "sudo hdiutil create -srcfolder /Users/user1 -skipunreadable -format UDZO /Volumes/testdmg/test.dmg"
And, of course, the user had local admin rights. I was using Sonoma 14.2.1 on a MacBook Pro (Intel, T2).
What I would have expected, assuming that /VoiceTrigger cannot be copied for whatever reason, is for hdiutil to skip that file or folder and continue, then produce a log at the end listing the files/folders not included and the reason for their exclusion. The fact that hdiutil just ended immediately looks like a bug to me. Or what else could explain the problem described?
Image I/O
Read and write most image file formats, manage color, access image metadata using Image I/O.
Hi Team,
I would like to create a line of code that would take any image in my ***** in Xcode and, whatever its size, resize it to fit in a circle.
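A minimal sketch of one way to do this with UIKit (the function name and parameters are just illustrative): scale the image so it fills a square of the target diameter, then clip the drawing to a circle.

import UIKit

// Hypothetical helper: scales any UIImage to fill a circle of the given
// diameter and clips it to that circle.
func circularImage(from image: UIImage, diameter: CGFloat) -> UIImage {
    let targetRect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
    let renderer = UIGraphicsImageRenderer(size: targetRect.size)
    return renderer.image { _ in
        // Clip all subsequent drawing to a circle.
        UIBezierPath(ovalIn: targetRect).addClip()
        // Scale to fill the square so the circle is fully covered.
        let scale = max(diameter / image.size.width, diameter / image.size.height)
        let scaledSize = CGSize(width: image.size.width * scale,
                                height: image.size.height * scale)
        let origin = CGPoint(x: (diameter - scaledSize.width) / 2,
                             y: (diameter - scaledSize.height) / 2)
        image.draw(in: CGRect(origin: origin, size: scaledSize))
    }
}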
I have trained a model to classify some symbols using Create ML.
In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data.
If I use a CVPixelBuffer obtained from an AVCaptureSession then the classifier runs as I would expect. If I point it at the symbols it will work fairly accurately, so I know the model is trained fairly correctly and works in my app.
If I try to use a cgImage that is obtained by cropping a section out of a larger image (from the gallery), then the classifier does not work. It always seems to return the same result (although the confidence is not exactly 1.0 and varies for each image, it is within several decimal places of it, e.g. 0.9999).
If I pause the app when I have the cropped image and use the debugger to obtain the cropped image (via the little eye icon and then open in preview), then drop the image into the Preview section of the MLModel file or in Create ML, the model correctly classifies the image.
If I scale the cropped image to be the same size as I get from my camera, and convert the cgImage to a CVPixelBuffer with the same size and colour space as the camera (1504, 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options. So I know that 'something' is happening, but it's not the correct thing.
I was under the impression that passing a cgImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML this conversion is obviously being done behind the scenes because the cropped part is being detected.
What am I doing wrong?
tl;dr
my model works, as backed up by using video input directly and also dropping cropped images into preview sections
passing the cropped images directly to the VNImageRequestHandler does not work
modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.
I'd like my app to behave the same way the preview part behaves, I give it a cropped part of an image, it does some processing, it goes to the classifier, it returns a result same as in Create ML.
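For reference, a minimal sketch of how a CGImage classification request is typically set up with Vision ("SymbolClassifier" is a placeholder for the Create ML model class). The two knobs that most often cause a train/inference mismatch are the imageCropAndScaleOption and the orientation passed to the handler:

import Vision
import CoreML

// Minimal sketch; "SymbolClassifier" is a placeholder model name.
func classify(cgImage: CGImage, orientation: CGImagePropertyOrientation) throws {
    let coreMLModel = try SymbolClassifier(configuration: MLModelConfiguration()).model
    let model = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        print(results.first?.identifier ?? "none", results.first?.confidence ?? 0)
    }
    // Match whatever cropping/scaling the model saw during training.
    request.imageCropAndScaleOption = .centerCrop
    // Pass the correct orientation; a wrong value can silently skew results.
    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])
}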
I use a data cable to connect my Nikon camera to my iPhone. In my project I use the ImageCaptureCore framework. I can now read the photos on the camera's memory card, but when I press the camera's shutter to take a picture, the camera does not respond, even though the connection between the camera and the phone is normal. The camera screen then shows a picture of a laptop. I don't know why that is. I hope someone can help me.
I'm working on a very simple app where I need to display an image on the screen of an iPhone. However, the image has some special properties: it's a 16-bit, yuv422_yuy2-encoded image, and I already have all the raw bytes saved in a Data object.
After googling for a long time, I still have not figured out the correct way. My current understanding is to first create a CVPixelBuffer to properly represent the encoding, then convert the CVPixelBuffer to a UIImage. The following is my current implementation.
public func YUV422YUY2ToUIImage(data: Data, height: Int, width: Int, bytesPerRow: Int) -> UIImage {
    var data = data
    return data.withUnsafeMutableBytes { rawPointer in
        let baseAddress = rawPointer.baseAddress!
        // Wrap the raw bytes in a CVPixelBuffer with the assumed format.
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_422YpCbCr16,
                                     baseAddress,
                                     bytesPerRow,
                                     nil,
                                     nil,
                                     nil,
                                     &pixelBuffer)
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer!)
        return UIImage(ciImage: ciImage)
    }
}
However, when I execute the code, I get the following error:
-[CIImage initWithCVPixelBuffer:options:] failed because its pixel format v216 is not supported.
So it seems CIImage is unhappy. I think I need to convert the encoding from yuv422_yuy2 to something like plain ARGB, but after a long time googling I didn't find a way to do that. The closest function I can find is https://developer.apple.com/documentation/accelerate/1533015-vimageconvert_422cbypcryp16toarg
But the function is too complex for me to understand how to use it.
Any help is appreciated. Thank you!
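For what it's worth, here is a minimal sketch of the vImage route, assuming "yuv422_yuy2" means the common YUY2 layout (Y0 Cb Y1 Cr, 8 bits per component, 16 bits per pixel) and assuming BT.601 video-range data; both assumptions need checking against the actual source. If the data really is 8 bits per component, the 8-bit converter is simpler to use than the 16-bit function linked above:

import Accelerate
import UIKit

// Sketch only: converts assumed YUY2 (kvImage422YpCbYpCr8 ordering, BT.601,
// video range) to ARGB8888 with vImage, then wraps the result in a UIImage.
func yuy2ToUIImage(data: Data, width: Int, height: Int, bytesPerRow: Int) -> UIImage? {
    // Describe the Y'CbCr -> ARGB conversion (matrix and range are assumptions).
    var info = vImage_YpCbCrToARGB()
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 16, CbCr_bias: 128,
                                             YpRangeMax: 235, CbCrRangeMax: 240,
                                             YpMax: 255, YpMin: 0,
                                             CbCrMax: 255, CbCrMin: 0)
    guard vImageConvert_YpCbCrToARGB_GenerateConversion(
            kvImage_YpCbCrToARGBMatrix_ITU_R_601_4, &pixelRange, &info,
            kvImage422YpCbYpCr8, kvImageARGB8888,
            vImage_Flags(kvImageNoFlags)) == kvImageNoError else { return nil }

    var data = data
    return data.withUnsafeMutableBytes { rawPointer -> UIImage? in
        var src = vImage_Buffer(data: rawPointer.baseAddress,
                                height: vImagePixelCount(height),
                                width: vImagePixelCount(width),
                                rowBytes: bytesPerRow)
        var dest = vImage_Buffer()
        guard vImageBuffer_Init(&dest, vImagePixelCount(height),
                                vImagePixelCount(width), 32,
                                vImage_Flags(kvImageNoFlags)) == kvImageNoError
        else { return nil }
        defer { free(dest.data) }
        guard vImageConvert_422YpCbYpCr8ToARGB8888(&src, &dest, &info, nil, 255,
                                                   vImage_Flags(kvImageNoFlags)) == kvImageNoError
        else { return nil }
        // Wrap the converted ARGB buffer in a CGImage (alpha first, 8 bits per channel).
        var format = vImage_CGImageFormat(
            bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil,
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue),
            version: 0, decode: nil, renderingIntent: .defaultIntent)
        var error = kvImageNoError
        let cgImage = vImageCreateCGImageFromBuffer(&dest, &format, nil, nil,
                                                    vImage_Flags(kvImageNoFlags), &error)
        guard error == kvImageNoError, let cg = cgImage?.takeRetainedValue()
        else { return nil }
        return UIImage(cgImage: cg)
    }
}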
I found that the app reported a crash from a pure virtual function call, which I could not reproduce.
A third-party library is referenced:
https://github.com/lincf0912/LFPhotoBrowser
It implements smearing, blurring, and mosaic processing of images.
Crash code:
if (![LFSmearBrush smearBrushCache]) {
    [_edit_toolBar setSplashWait:YES index:LFSplashStateType_Smear];
    CGSize canvasSize = AVMakeRectWithAspectRatioInsideRect(self.editImage.size, _EditingView.bounds).size;
    [LFSmearBrush loadBrushImage:self.editImage canvasSize:canvasSize useCache:YES complete:^(BOOL success) {
        [weakToolBar setSplashWait:NO index:LFSplashStateType_Smear];
    }];
}
- (UIImage *)LFBB_patternGaussianImageWithSize:(CGSize)size orientation:(CGImagePropertyOrientation)orientation filterHandler:(CIFilter *(^ _Nullable )(CIImage *ciimage))filterHandler
{
    CIContext *context = LFBrush_CIContext;
    NSAssert(context != nil, @"This method must be called using the LFBrush class.");
    CIImage *midImage = [CIImage imageWithCGImage:self.CGImage];
    midImage = [midImage imageByApplyingTransform:[self LFBB_preferredTransform]];
    midImage = [midImage imageByApplyingTransform:CGAffineTransformMakeScale(size.width / midImage.extent.size.width, size.height / midImage.extent.size.height)];
    if (orientation > 0 && orientation < 9) {
        midImage = [midImage imageByApplyingOrientation:orientation];
    }
    // Begin processing the image
    CIImage *result = midImage;
    if (filterHandler) {
        CIFilter *filter = filterHandler(midImage);
        if (filter) {
            result = filter.outputImage;
        }
    }
    CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
    UIImage *image = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return image;
}
This line triggers the crash:
CGImageRef outImage = [context createCGImage:result fromRect:[midImage extent]];
b9c90c7bbf8940e5aabed7f3f62a65a2-symbolicated.crash
- (void)cameraDevice:(ICCameraDevice *)camera
  didReceiveMetadata:(NSDictionary * _Nullable)metadata
             forItem:(ICCameraItem *)item
               error:(NSError * _Nullable)error API_AVAILABLE(ios(13.0)) {
    NSLog(@"metadata = %@", metadata);
    if (item) {
        ICCameraFile *file = (ICCameraFile *)item;
        NSURL *downloadsDirectoryURL = [[NSFileManager defaultManager] URLsForDirectory:NSDocumentDirectory inDomains:NSUserDomainMask].firstObject;
        downloadsDirectoryURL = [downloadsDirectoryURL URLByAppendingPathComponent:@"Downloads"];
        NSDictionary *downloadOptions = @{ ICDownloadsDirectoryURL: downloadsDirectoryURL,
                                           ICSaveAsFilename: item.name,
                                           ICOverwrite: @YES,
                                           ICDownloadSidecarFiles: @YES };
        [self.cameraDevice requestDownloadFile:file options:downloadOptions downloadDelegate:self didDownloadSelector:@selector(didDownloadFile:error:options:contextInfo:) contextInfo:nil];
    }
}

- (void)didDownloadFile:(ICCameraFile *)file
                  error:(NSError * _Nullable)error
                options:(NSDictionary<NSString *, id> *)options
            contextInfo:(void * _Nullable)contextInfo API_AVAILABLE(ios(13.0)) {
    if (error) {
        NSLog(@"Download failed with error: %@", error);
    } else {
        NSLog(@"Download completed for file: %@", file);
    }
}
I don't know what's wrong, or whether this is the right way to get the camera's pictures. I hope someone can help me.
Is it impossible to write tEXt chunk data to a PNG on iOS?
I successfully read the chunk data, modified it to add tEXt data, and saved the result as an image in the Gallery.
But the tEXt data keeps disappearing when I read the chunk data back from the image in the Gallery.
Does iOS prevent preserving tEXt data when saving an image to the Gallery?
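For comparison, a minimal sketch of writing PNG text metadata with Image I/O to a plain file URL (the function name and metadata values here are just illustrative). Whether the Photos library preserves such chunks on import is a separate question, since Photos may re-encode the image on save:

import Foundation
import ImageIO
import UniformTypeIdentifiers

// Minimal sketch: write a PNG with Title/Description text metadata.
func writePNG(_ image: CGImage, to outputURL: URL) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL,
                                                            UTType.png.identifier as CFString,
                                                            1, nil) else { return false }
    // These keys end up in the PNG's text chunks.
    let pngProperties: [CFString: Any] = [
        kCGImagePropertyPNGTitle: "A title",
        kCGImagePropertyPNGDescription: "A description"
    ]
    CGImageDestinationAddImage(destination, image,
                               [kCGImagePropertyPNGDictionary: pngProperties] as CFDictionary)
    return CGImageDestinationFinalize(destination)
}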
I have applied some filters (like applyingGaussianBlur) to a CIImage that was converted from a UIImage. The resulting image data gets corrupted, but only on lower-end devices. What could be the reason?
Hello,
I'm wondering if there is a way to programmatically write a series of UIImages into an APNG, similar to what the code below does for GIFs (credit: https://github.com/AFathi/ARVideoKit/tree/swift_5). I've tried implementing a similar solution but it doesn't seem to work. My code is included below
I've also done a lot of searching and have found lots of code for displaying APNGs, but have had no luck with code for writing them.
Any hints or pointers would be appreciated.
func generate(gif images: [UIImage], with delay: Float, loop count: Int = 0, _ finished: ((_ status: Bool, _ path: URL?) -> Void)? = nil) {
    currentGIFPath = newGIFPath
    gifQueue.async {
        let gifSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFLoopCount as String: count]]
        let imageSettings = [kCGImagePropertyGIFDictionary as String: [kCGImagePropertyGIFDelayTime as String: delay]]
        guard let path = self.currentGIFPath else { return }
        guard let destination = CGImageDestinationCreateWithURL(path as CFURL, __UTTypeGIF as! CFString, images.count, nil)
        else { finished?(false, nil); return }
        //logAR.message("\(destination)")
        CGImageDestinationSetProperties(destination, gifSettings as CFDictionary)
        for image in images {
            if let imageRef = image.cgImage {
                CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
            }
        }
        if !CGImageDestinationFinalize(destination) {
            finished?(false, nil); return
        } else {
            finished?(true, path)
        }
    }
}
My adaptation of the above code for APNGs (doesn't work; outputs empty file):
func generateAPNG(images: [UIImage], delay: Float, count: Int = 0) {
    let apngSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGLoopCount as String: count]]
    let imageSettings = [kCGImagePropertyPNGDictionary as String: [kCGImagePropertyAPNGDelayTime as String: delay]]
    guard let destination = CGImageDestinationCreateWithURL(outputURL as CFURL, UTType.png.identifier as CFString, images.count, nil)
    else { fatalError("Failed") }
    CGImageDestinationSetProperties(destination, apngSettings as CFDictionary)
    for image in images {
        if let imageRef = image.cgImage {
            CGImageDestinationAddImage(destination, imageRef, imageSettings as CFDictionary)
        }
    }
}
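One likely culprit, for what it's worth: unlike the GIF version, this adaptation never calls CGImageDestinationFinalize, and Image I/O writes nothing to disk until finalize runs. A sketch of the missing step at the end of generateAPNG:

    // After adding all frames, the destination must be finalized,
    // otherwise the output file stays empty.
    if !CGImageDestinationFinalize(destination) {
        print("Failed to finalize the APNG destination")
    }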
I want to read the metadata of image files, such as copyright and author. I did a web search, and the closest thing I found is CGImageSourceCopyPropertiesAtIndex:
- (void)tableViewSelectionDidChange:(NSNotification *)notif {
    NSDictionary *metadata = [[NSDictionary alloc] init];
    // get selected item
    NSString *rowData = [fileList objectAtIndex:[tblFileList selectedRow]];
    // set path to file selected
    NSString *filePath = [NSString stringWithFormat:@"%@/%@", objPath, rowData];
    // declare a file manager
    NSFileManager *fileManager = [[NSFileManager alloc] init];
    // check to see if the file exists
    if ([fileManager fileExistsAtPath:filePath] == YES) {
        // escape all the garbage in the string
        NSString *percentEscapedString = (NSString *)CFURLCreateStringByAddingPercentEscapes(NULL, (CFStringRef)filePath, NULL, NULL, kCFStringEncodingUTF8);
        // convert path to NSURL
        NSURL *filePathURL = [[NSURL alloc] initFileURLWithPath:percentEscapedString];
        NSError *error;
        NSLog(@"%d", [filePathURL checkResourceIsReachableAndReturnError:&error]);
        // declare a CG source reference
        CGImageSourceRef sourceRef;
        // set the CG source reference to the image by passing its URL path
        sourceRef = CGImageSourceCreateWithURL((CFURLRef)filePathURL, NULL);
        // set a dictionary with the image metadata from the source reference
        metadata = (NSDictionary *)CGImageSourceCopyPropertiesAtIndex(sourceRef, 0, NULL);
        NSLog(@"%@", metadata);
        [filePathURL release];
    } else {
        [self showAlert:@"I cannot find this file."];
    }
    [fileManager release];
}
Is there any better or easier approach than this?
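For comparison, the same Image I/O call reduces to a few lines in Swift (note that -initFileURLWithPath: already handles path escaping, so the manual percent-escaping step shouldn't be needed); a minimal sketch:

import Foundation
import ImageIO

// Minimal sketch: read all image properties (EXIF, TIFF, PNG, etc.) at once.
func imageProperties(at url: URL) -> [CFString: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return nil }
    return CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]
}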
WebP images work on iOS and iPadOS, but they don't work on tvOS.
In fact, Apple says that:
But why doesn't it work? Is there a way to make it work on tvOS?
This is my test code.
import SwiftUI

extension View {
    @MainActor func render(scale: CGFloat) -> UIImage? {
        let renderer = ImageRenderer(content: self)
        renderer.scale = scale
        return renderer.uiImage
    }
}

struct ContentView: View {
    @Environment(\.colorScheme) private var colorScheme
    @State private var snapImg: UIImage = UIImage()

    var snap: some View {
        Text("I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
            .foregroundStyle(colorScheme == .dark ? .red : .green)
    }

    @ViewBuilder
    func snapEx() -> some View {
        VStack {
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.pink)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .background(.purple)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
            Text("@ViewBuilder I'm now is \(colorScheme == .dark ? "DARK" : "LIGHT") Mode!")
                .foregroundStyle(colorScheme == .dark ? .red : .green)
        }
    }

    @ViewBuilder
    func snapView() -> some View {
        VStack {
            Text("Text")
            Text("Test2")
                .background(.green)
            snap
            snapEx()
        }
    }

    var body: some View {
        let snapView = snapView()
        VStack {
            snapView
            Image(uiImage: snapImg)
            Button("Snap") {
                snapImg = snapView.render(scale: UIScreen.main.scale) ?? UIImage()
            }
        }
    }
}
When using ImageRenderer, there are some problems converting a View to an image.
For example, Text does not automatically adapt its foreground color to Dark Mode.
This is just simple test code; the issue is not limited to Text.
How should I solve it?
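One workaround to try (an assumption on my part, not a confirmed fix): ImageRenderer renders its content outside the window's environment, so the current color scheme can be injected into the content explicitly before rendering:

Button("Snap") {
    // Pass the live color scheme into the rendered content explicitly,
    // since ImageRenderer does not inherit it from the window.
    let content = snapView().environment(\.colorScheme, colorScheme)
    let renderer = ImageRenderer(content: content)
    renderer.scale = UIScreen.main.scale
    snapImg = renderer.uiImage ?? UIImage()
}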
guard let rawfilter = CoreImage.CIRAWFilter(imageData: data, identifierHint: nil) else { return }
guard let ciImage = rawfilter.outputImage else { return }
let width = Int(ciImage.extent.width)
let height = Int(ciImage.extent.height)
let rect = CGRect(x: 0, y: 0, width: width, height: height)
let context = CIContext()
guard let cgImage = context.createCGImage(ciImage, from: rect, format: .RGBA16, colorSpace: CGColorSpaceCreateDeviceRGB()) else { return }
print("cgImage prepared")
guard let dataProvider = cgImage.dataProvider else { return }
let rgbaData = CFDataCreateMutableCopy(kCFAllocatorDefault, 0, dataProvider.data)
On iOS 16 this process is much faster than the same process on iOS 17.
Is there a way to speed up the decoding?
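One general optimization worth trying, though it may not explain the iOS 16 vs. 17 gap: CIContext creation is expensive, so create one context and reuse it across decodes rather than per image. A minimal sketch:

import CoreImage

// Create the context once; creating a CIContext per image is expensive.
let sharedContext = CIContext()

func decodeRAW(_ data: Data) -> CGImage? {
    guard let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil),
          let ciImage = rawFilter.outputImage else { return nil }
    return sharedContext.createCGImage(ciImage, from: ciImage.extent,
                                       format: .RGBA16,
                                       colorSpace: CGColorSpaceCreateDeviceRGB())
}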
I am able to create a UIImage from WebP data using UIImage(data:) on iOS and iPadOS. When I try the same thing on watchOS 10, it fails. Is there a workaround for displaying WebP images on watchOS, if this isn't expected to work?
It appears I can't add a WebP image as an Image Set in an Asset Catalog. Is that correct?
As a workaround, I added the WebP image as a Data Set. I'm then loading it as a CGImage with the following code:
guard let asset = NSDataAsset(name: imageName),
      let imageSource = CGImageSourceCreateWithData(asset.data as CFData, nil),
      let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil)
else {
    return nil
}
// Use image
Is it fine to store and load WebP images in this way? If not, then what's best practice?
I have written and used code to get the colors from a CGImage, and it worked fine up to iOS 16.
However, when I use the same code on iOS 17, Red and Blue in the RGB output are swapped.
Is this a temporary bug in the OS that will be fixed in the future? Or has the behavior changed, and will it remain this way after iOS 17?
Here is my code:
let pixelDataByteSize = 4
guard let cfData = image.cgImage?.dataProvider?.data else { return }
let pointer: UnsafePointer<UInt8> = CFDataGetBytePtr(cfData)
let scale = UIScreen.main.nativeScale
let address = (Int(image.size.width * scale) * Int(image.size.height * scale / 2) + Int(image.size.width * scale / 2)) * pixelDataByteSize
let r = CGFloat(pointer[address]) / 255
let g = CGFloat(pointer[address + 1]) / 255
let b = CGFloat(pointer[address + 2]) / 255
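A note on why this can break, plus a defensive sketch: the byte order of a CGImage's backing data is not guaranteed to be RGBA; it depends on the image's bitmapInfo/alphaInfo, which the decoder may choose differently across OS versions. Redrawing into a context with an explicit layout gives stable ordering (a sketch, assuming UIKit; the function name is illustrative):

import UIKit

// Redraw into a context with a known RGBA8888 layout, then read the bytes.
func rgbaPixels(of cgImage: CGImage) -> [UInt8]? {
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    // Big-endian, alpha last: bytes are R, G, B, A in memory.
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue |
                     CGBitmapInfo.byteOrder32Big.rawValue
    let ok: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: bitmapInfo) else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return ok ? pixels : nil
}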
I am facing an issue with a blog post: I cannot view an image that was added to the post. The image is a BMP. I have read some earlier posts, and it seems that BMP was not supported in earlier iOS versions. My iOS version is 17.0.2.
Browser: Safari
OS: iOS 17.0.2
Device: iPhone 14
URL: https://www.optimabatteries.com/experience/blog/if-a-cars-charging-system-isnt-working-properly-why-cant-we-just-jump-start-it-with-an-external-booster-pack
Please let me know what could be the issue.
Our iOS app can access the photo library when running on an M1 Mac. The app was written in Objective-C using Xcode. However, we cannot select a photo from the library, and we need the Objective-C code to accomplish this task. None of our attempts were successful.
Using the screencapture CLI on macOS Sonoma 14.0 (23A344) results in a 72 dpi image file, no matter whether it was captured on a Retina display or not.
For example, running screencapture -i ~/Desktop/test.png in Terminal lets me create a selective screenshot, but the resulting file does not contain any DPI metadata (checked using mdls ~/Desktop/test.png), nor does the image itself have the correct DPI information (it should be 144, but it's always 72; checked using Preview.app).
I noticed a (new?) flag option, -r, for which the documentation states:
-r Do not add screen dpi meta data to captured file.
Is that flag somehow set automatically? Setting it myself makes no difference and, as expected, results in a file with no DPI metadata and the wrong DPI in the image.
The only two ways I got the correct DPI information into a resulting image file were using the default options (forced by -p): screencapture -i -p, and making the capture go to the clipboard: screencapture -i -c. Sadly, I can't use those in my case.
Feedback filed: FB13208235
I'd appreciate any pointers,
Matthias