Core Image


Use built-in or custom filters to process still and video images using Core Image.

Posts under Core Image tag

50 Posts

iOS 17 swift get GPS Location from Image
I am fetching an image from the photo library and trying to read its GPS location data, but it isn't working. This needs to work on iOS 17 as well, so I used PHPicker. kCGImagePropertyGPSDictionary always returns nil. The code I tried:

import CoreLocation
import MobileCoreServices
import PhotosUI

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate, PHPickerViewControllerDelegate { // PHPickerViewControllerDelegate needed for picker.delegate = self
    @IBOutlet weak var selectedImageView: UIImageView!

    @IBAction func selectTheImage() {
        self.pickImageFromLibrary_PH()
    }

    func pickImageFromLibrary_PH() {
        var configuration = PHPickerConfiguration(photoLibrary: PHPhotoLibrary.shared())
        configuration.filter = .images
        let picker = PHPickerViewController(configuration: configuration)
        picker.delegate = self
        present(picker, animated: true, completion: nil)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        if let itemProvider = results.first?.itemProvider, itemProvider.canLoadObject(ofClass: UIImage.self) {
            itemProvider.loadObject(ofClass: UIImage.self) { (image, error) in
                if let image = image as? UIImage {
                    self.fetchLocation(for: image)
                }
            }
        }
        picker.dismiss(animated: true, completion: nil)
    }

    func fetchLocation(for image: UIImage) {
        let locationManager = CLLocationManager()
        guard let imageData = image.jpegData(compressionQuality: 1.0) else {
            print("Unable to fetch image data.")
            return
        }
        guard let source = CGImageSourceCreateWithData(imageData as CFData, nil) else {
            print("Unable to create image source.")
            return
        }
        guard let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any] else {
            print("Unable to fetch image properties.")
            return
        }
        print(properties)
        if let gpsInfo = properties[kCGImagePropertyGPSDictionary as String] as? [String: Any],
           let latitude = gpsInfo[kCGImagePropertyGPSLatitude as String] as? CLLocationDegrees,
           let longitude = gpsInfo[kCGImagePropertyGPSLongitude as String] as? CLLocationDegrees {
            let location = CLLocation(latitude: latitude, longitude: longitude)
            print("Image was taken at \(location.coordinate.latitude), \(location.coordinate.longitude)")
        } else {
            print("PHPicker - Location information not found in the image.")
        }
    }
}

Properties available in that image (Exif metadata is available; I expected GPS location data as well):

ColorSpace = 65535; PixelXDimension = 4032; PixelYDimension = 3024; },
"DPIWidth": 72, "Depth": 8, "PixelHeight": 3024, "ColorModel": RGB, "DPIHeight": 72,
"{TIFF}": { Orientation = 1; ResolutionUnit = 2; XResolution = 72; YResolution = 72; },
"PixelWidth": 4032, "Orientation": 1,
"{JFIF}": { DensityUnit = 0; JFIFVersion = ( 1, 0, 1 ); XDensity = 72; YDensity = 72; }]

Note: I'm trying this in Xcode 15 and iOS 17. In the Photos app the location data is available, but in code it returns nil.
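One likely cause, offered as a guess rather than a confirmed answer: decoding to a UIImage and re-encoding it with jpegData() produces a brand-new JPEG with no EXIF/GPS metadata, so the GPS dictionary is gone before CGImageSource ever sees it. A minimal sketch of reading the original encoded data from the item provider instead (the loadGPS(from:) helper name is illustrative, and whether GPS appears still depends on the picked photo actually carrying location data):

import PhotosUI
import UniformTypeIdentifiers
import ImageIO
import CoreLocation

func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
    picker.dismiss(animated: true)
    guard let itemProvider = results.first?.itemProvider else { return }
    // Load the original encoded bytes instead of a decoded UIImage,
    // so the file's EXIF/GPS metadata is still attached.
    itemProvider.loadDataRepresentation(forTypeIdentifier: UTType.image.identifier) { data, error in
        guard let data else { return }
        self.loadGPS(from: data)
    }
}

func loadGPS(from data: Data) {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String: Any],
          let gps = properties[kCGImagePropertyGPSDictionary as String] as? [String: Any],
          let latitude = gps[kCGImagePropertyGPSLatitude as String] as? CLLocationDegrees,
          let longitude = gps[kCGImagePropertyGPSLongitude as String] as? CLLocationDegrees else {
        print("No GPS metadata found")
        return
    }
    // Note: kCGImagePropertyGPSLatitudeRef / kCGImagePropertyGPSLongitudeRef carry the N/S and E/W signs.
    print("Image was taken at \(latitude), \(longitude)")
}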
0 replies · 0 boosts · 883 views · Dec ’23
Create CGColorSpace from CFPropertyList
I am trying to create a custom CGColorSpace in Swift on macOS, but am not sure I really understand the concepts. I want to use a custom color space called Spot1, and if I extract the spot color from a PDF I get the following:

"ColorSpace<Dictionary>" = {
    "Cs2<Array>" = (
        Separation,
        Spot1,
        DeviceCMYK,
        {
            "BitsPerSample<Integer>" = 8;
            "Domain<Array>" = ( 0, 1 );
            "Filter<Name>" = FlateDecode;
            "FunctionType<Integer>" = 0;
            "Length<Integer>" = 526;
            "Range<Array>" = ( 0, 1, 0, 1, 0, 1, 0, 1 );
            "Size<Array>" = ( 1024 );
        }
    );
};

How can I create this same color space using the CGColorSpace(propertyListPlist: CFPropertyList) API?

func createSpot1() -> CGColorSpace? {
    let dict0: NSDictionary = [
        "BitsPerSample": 8,
        "Domain": [0, 1],
        "Filter": "FlateDecode",
        "FunctionType": 0,
        "Length": 526,
        "Range": [0, 1, 0, 1, 0, 1, 0, 1],
        "Size": [1024]]
    let dict: NSDictionary = ["Cs2": ["Separation", "Spot1", "DeviceCMYK", dict0]]
    let space = CGColorSpace(propertyListPlist: dict as CFPropertyList)
    if space == nil {
        DebugLog("Spot1 color space is nil!")
    }
    return space
}
0 replies · 0 boosts · 464 views · Dec ’23
CIImage.clampedToExtent() doesn't fill some edges
Hi, I'm trying to find an explanation for the strange behaviour of the .clampedToExtent() method. I'm doing a pretty straightforward thing: clamp the image and then crop it with some insets, so as a result I expect the same image as the original, padded on every side by repeating the last pixel of each edge (in order to apply a CIPixellate filter afterwards). Here is the code:

originalImage
    .clampedToExtent()
    .cropped(to: originalImage.extent.insetBy(dx: -50, dy: -50))

The result is strange: the image has padding as specified, but only three sides (left, right, bottom) have content, and the top side has transparent padding. Sometimes the right side has transparency instead of content. So the question is: why does this happen, and how do I get all sides filled with the last pixel's data? I tested on two different devices with iOS 16 and 17.
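This doesn't explain the underlying behaviour, but as a defensive workaround sketch, compositing the clamped-and-cropped image over an opaque background of the padded rect guarantees all four sides of the padding contain pixels (the padded(_:by:) helper name and the gray fill are assumptions for illustration):

import CoreImage

// Sketch: pad an image by `inset` on every side with defined content.
func padded(_ originalImage: CIImage, by inset: CGFloat) -> CIImage {
    let paddedRect = originalImage.extent.insetBy(dx: -inset, dy: -inset)
    let clamped = originalImage
        .clampedToExtent()               // infinite extent, edge pixels repeated
        .cropped(to: paddedRect)
    // Fallback fill for any edge the clamp leaves transparent.
    let background = CIImage(color: .gray).cropped(to: paddedRect)
    return clamped.composited(over: background)
}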
2 replies · 0 boosts · 670 views · Dec ’23
MTLTexture streaming to app directory
For better memory usage when working with MTLTextures (editing + displaying in render passes, compute shaders, etc.), is it possible to save the texture to the app's Documents folder and then use an UnsafeMutablePointer to access/modify the contents of the texture before displaying it in a render pass? And would this be performant (i.e. 60 fps)? That way the texture wouldn't be directly in memory all the time, but the contents could still be edited and displayed when needed.
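One possible direction, sketched under several assumptions (file layout, BGRA8 format, shared storage) and not a definitive answer to the performance question: memory-map a file in Documents into a page-aligned MTLBuffer and create a linear texture view of it, so the CPU can mutate the mapped bytes through a pointer while the GPU reads the same storage. Whether this sustains 60 fps depends heavily on the access pattern, and linear textures can't be private or mipmapped.

import Metal
import Foundation

// Sketch: wrap a memory-mapped file as a linear MTLTexture (BGRA8).
func makeFileBackedTexture(device: MTLDevice, url: URL, width: Int, height: Int) -> MTLTexture? {
    let pageSize = Int(getpagesize())
    let align = device.minimumLinearTextureAlignment(for: .bgra8Unorm)
    let bytesPerRow = ((width * 4 + align - 1) / align) * align
    let length = ((bytesPerRow * height + pageSize - 1) / pageSize) * pageSize

    // Create/size the backing file, then map it read-write.
    let fd = open(url.path, O_RDWR | O_CREAT, 0o644)
    guard fd >= 0 else { return nil }
    ftruncate(fd, off_t(length))
    let ptr = mmap(nil, length, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0)
    close(fd)
    guard let ptr, ptr != MAP_FAILED else { return nil }

    // The buffer adopts the mapping; buffer.contents() is the same pointer the CPU can mutate.
    guard let buffer = device.makeBuffer(bytesNoCopy: ptr, length: length,
                                         options: .storageModeShared,
                                         deallocator: { pointer, length in _ = munmap(pointer, length) }) else { return nil }

    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width, height: height,
                                                        mipmapped: false)
    desc.usage = [.shaderRead, .shaderWrite]
    desc.storageMode = .shared
    return buffer.makeTexture(descriptor: desc, offset: 0, bytesPerRow: bytesPerRow)
}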
1 reply · 0 boosts · 616 views · Nov ’23
Correctly process HDR in Metal Core Image Kernels (& Metal)
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have some doubts, expressed below.

Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case?

I see that setting workingColorSpace to NSNull() in the context creation options will guarantee receiving the samples as-is, normalised to [0-1]. But then it's possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader. In short, how do you safely process HDR pixel buffers received from the camera (which are YCbCr 4:2:0 10-bit and which, I believe, have gamma correction applied, so the Y in YCbCr is actually Y'. Can the AVFoundation team clarify this?)?
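Not an authoritative answer, but one way to remove the guesswork is to pin the context's working space and format explicitly, so kernel samples are known to be linear and can exceed [0, 1] for HDR content. A sketch, assuming extended linear BT.2020 as the working space and half-float intermediates:

import CoreImage

// Sketch: a CIContext whose kernels see linear, extended-range samples.
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
let context = CIContext(options: [
    .workingColorSpace: workingSpace,   // samples are converted into this space before the kernel runs
    .workingFormat: CIFormat.RGBAh      // half-float so values above 1.0 survive
])

// Wrapping a camera pixel buffer: Core Image converts from the buffer's
// attached color space (e.g. HLG) into the working space before sampling.
// let image = CIImage(cvPixelBuffer: pixelBuffer)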
0 replies · 0 boosts · 603 views · Nov ’23
Metal Core Image kernel workingColorSpace
I understand that by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means that the color values received (or sampled from a sampler) in a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]

do we receive the color values as they exist in the input texture (which may have gamma correction already applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
0 replies · 0 boosts · 513 views · Nov ’23
Correct settings to record HDR/SDR with AVAssetWriter
I have set up AVCaptureVideoDataOutput with 10-bit 4:2:0 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation.

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
// srcImage is created from the sample buffer received from the video data output
_ciContext.render(dstImage,
                  to: dstPixelBuffer!,
                  bounds: dstImage.extent,
                  colorSpace: srcImage.colorSpace)

I then set the color attachments on this dstPixelBuffer according to the colorProfile chosen in the app settings (BT.709 or BT.2020):

switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}

These pixel buffers are then vended to an AVAssetWriter whose videoSettings is set to the recommended settings from the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
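One thing worth checking (a guess, not a confirmed fix): the colorSpace passed to CIContext.render(_:to:bounds:colorSpace:) describes how the output pixels are encoded, so it should match the space the destination buffer is tagged with rather than the source image's color space. A sketch, reusing the colorProfile switch from the post above:

import CoreImage
import CoreVideo

// Sketch: encode rendered pixels in the same space the buffer is tagged with.
let outputColorSpace: CGColorSpace
switch colorProfile {
case .BT709:
    outputColorSpace = CGColorSpace(name: CGColorSpace.itur_709)!
case .HLG2100:
    outputColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)!
}

_ciContext.render(dstImage,
                  to: dstPixelBuffer!,
                  bounds: dstImage.extent,
                  colorSpace: outputColorSpace)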
0 replies · 0 boosts · 537 views · Nov ’23
Difference between CIContext outputs
I have two CIContexts configured with the following options:

let options1: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false, CIContextOption.outputColorSpace: NSNull(), CIContextOption.workingColorSpace: NSNull()]
let options2: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false]

And an MTKView with its CAMetalLayer configured for HDR output:

metalLayer = self.layer as? CAMetalLayer
metalLayer?.wantsExtendedDynamicRangeContent = true
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
colorPixelFormat = .bgr10a2Unorm

The two context options produce different outputs when the input is BT.2020 pixel buffers, but I believe the outputs shouldn't be different: the first option simply disables color management, while the second performs intermediate calculations in the extended linear sRGB color space and then converts those buffers to the BT.2020 color space for output.
0 replies · 0 boosts · 415 views · Nov ’23
[[stitchable]] Metal core image CIKernel fails to load
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:

class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return FilterTwo.kernel.apply(extent: inputImage.extent,
                                      roiCallback: { (index, rect) in return rect },
                                      arguments: [inputImage])
    }
}

Here is the Metal code:

using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}

And here is the usage:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}

And the output:

StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …}
Kernels []
Function 'secondFilter' does not exist.
2 replies · 0 boosts · 731 views · Nov ’23
Xcode 15 [[stitchable]] Metal core image kernels fail
I have imported two Metal files and defined two stitchable Metal Core Image kernels, one being a CIColorKernel and the other a CIKernel. As outlined in the WWDC video, I need to add the flag -framework CoreImage to Other Metal Linker Flags. Unfortunately, Xcode 15 puts double quotes around this and generates an error: metal: error: unknown argument: '-framework CoreImage'. So I built without this flag, and it works for the first kernel that was added. The other kernel is never added to default.metallib and fails to load. How do I get it working?

class SobelEdgeFilterHDR: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "sobelEdgeFilterHDR", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return SobelEdgeFilterHDR.kernel.apply(extent: inputImage.extent,
                                               roiCallback: { (index, rect) in return rect },
                                               arguments: [inputImage])
    }
}
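A commonly reported workaround for the quoting problem, offered as an assumption rather than an official fix: enter -framework and CoreImage as two separate items in the Other Metal Linker Flags list, or set the flag from an xcconfig file so the build-setting UI never gets a chance to quote it. The setting name below is assumed to be the one backing "Other Metal Linker Flags":

// MyTarget.xcconfig (hypothetical file name)
// Passed to the Metal linker without the extra quoting added by the Xcode 15 build-settings UI.
MTLLINKER_FLAGS = -framework CoreImage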
1 reply · 0 boosts · 700 views · Nov ’23
Processing YCbCr422 10-bit HDR pixel buffers with Metal
I am currently using Core Image to process YCbCr422/420 10-bit pixel buffers, but its performance is lacking at high frame rates, so I decided to switch to Metal. But with Metal I am getting even worse performance. I am loading both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:

let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}

And here is the shader code:

vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]])
{
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}

Is there anything fundamentally wrong in the code that makes it slow?
1 reply · 0 boosts · 596 views · Oct ’23
iOS 17 Problems in getting RGB color from image
I have written and used code to get colors from a CGImage, and it worked fine up to iOS 16. However, when I use the same code on iOS 17, the Red and Blue components of RGB are swapped. Is this a temporary bug in the OS that will be fixed in the future? Or has the behaviour changed, and will it remain this way after iOS 17? Here is my code:

let pixelDataByteSize = 4
guard let cfData = image.cgImage?.dataProvider?.data else { return }
let pointer: UnsafePointer<UInt8> = CFDataGetBytePtr(cfData)
let scale = UIScreen.main.nativeScale
let address = (Int(image.size.width * scale) * Int(image.size.height * scale / 2) + Int(image.size.width * scale / 2)) * pixelDataByteSize
let r = CGFloat(pointer[address]) / 255
let g = CGFloat(pointer[address + 1]) / 255
let b = CGFloat(pointer[address + 2]) / 255
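Rather than depending on whatever byte order the decoder happened to produce (a CGImage's backing data can legitimately be BGRA or RGBA depending on how the image was created), a more defensive sketch is to redraw the image into a CGContext with an explicit RGBA layout before reading bytes. The rgbaPixel(of:x:y:) helper name is illustrative:

import CoreGraphics

// Sketch: copy a CGImage into a known RGBA8 layout, then read one pixel.
func rgbaPixel(of cgImage: CGImage, x: Int, y: Int) -> (r: CGFloat, g: CGFloat, b: CGFloat)? {
    let width = cgImage.width, height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let drawn = pixels.withUnsafeMutableBytes { raw -> Bool in
        guard let context = CGContext(data: raw.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * 4,
                                      space: CGColorSpace(name: CGColorSpace.sRGB)!,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
                                                | CGBitmapInfo.byteOrder32Big.rawValue) else { return false }
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    guard drawn else { return nil }
    let offset = (y * width + x) * 4          // bytes are now R, G, B, A in that order
    return (CGFloat(pixels[offset]) / 255,
            CGFloat(pixels[offset + 1]) / 255,
            CGFloat(pixels[offset + 2]) / 255)
}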
1 reply · 0 boosts · 636 views · Oct ’23
macOS 14 Sonoma cifilter issue (Designed for iPad)
I just updated my M1 Mac to macOS 14 Sonoma. I ran the iOS/iPadOS app, built with Xcode 15.0, on the updated Mac via TestFlight. (My app is Designed for iPad and is set to run on macOS.) When a UIImage is processed using a CIFilter, the generated image may come out grayed out. The same code does not have this problem on iOS and iPadOS; in other words, it does not occur on iPhone and iPad running iOS/iPadOS 17. Has anyone experienced a similar issue? I would like to know the solution ASAP. I am distributing the app as Designed for iPad, and every year when macOS is updated an issue like this arises, so I am considering not allowing Vision Pro to run the current iOS/iPadOS app.
0 replies · 0 boosts · 542 views · Sep ’23
PNG Compression - ImageIO
Hello All, I am trying to compress a PNG image by applying PNG filters (Sub, Up, Average, Paeth). I am applying the filters using the property kCGImagePropertyPNGCompressionFilter, but there is no change in the resultant images after trying any of the filters. What is the issue here? Can someone help me with this? Do I have to compress the image data after applying the filter? If yes, how do I do that? Here is my source code:

CGImageDestinationRef outImageDestRef = NULL;
long keyCounter = kzero;
CFStringRef dstImageFormatStrRef = NULL;
CFMutableDataRef destDataRef = CFDataCreateMutable(kCFAllocatorDefault, 0);
Handle srcHndl = /* source image handle */;
ImageTypes srcImageType = /* 'JPEG', 'PNGf', etc. */;
CGImageRef inImageRef = CreateCGImageFromHandle(srcHndl, srcImageType);
if (inImageRef) {
    CFTypeRef keys[4] = {nil};
    CFTypeRef values[4] = {nil};
    dstImageFormatStrRef = CFSTR("public.png");

    long png_filter = IMAGEIO_PNG_FILTER_SUB; // one of IMAGEIO_PNG_FILTER_SUB, _UP, _AVG, _PAETH at a time
    keys[keyCounter] = kCGImagePropertyPNGCompressionFilter;
    values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &png_filter);
    keyCounter++;

    outImageDestRef = CGImageDestinationCreateWithData(destDataRef, dstImageFormatStrRef, 1, NULL);
    if (outImageDestRef) {
        // keys[keyCounter] = kCGImagePropertyDPIWidth;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;
        //
        // keys[keyCounter] = kCGImagePropertyDPIHeight;
        // values[keyCounter] = CFNumberCreate(NULL, kCFNumberLongType, &Resolution);
        // keyCounter++;

        CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, keyCounter, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        CGImageDestinationAddImage(outImageDestRef, inImageRef, options);
        CFRelease(options);
        status = CGImageDestinationFinalize(outImageDestRef);
        if (status == true) {
            UInt8 *destImagePtr = CFDataGetMutableBytePtr(destDataRef);
            destSize = CFDataGetLength(destDataRef);
            // using destImagePtr after this ...
        }
        CFRelease(outImageDestRef);
    }

    for (long cnt = kzero; cnt < keyCounter; cnt++)
        if (values[cnt]) CFRelease(values[cnt]);

    if (inImageRef) CGImageRelease(inImageRef);
}
2 replies · 0 boosts · 850 views · Sep ’23
HDR Image capture/conversion
Hello! After the recent WWDC 2023 talk about HDR support and finding the documentation page on applying the Apple HDR effect to photos, I became very interested in the HDR Gain Map format. From the documentation page it is clear how we can restore the original HDR from the SDR and Gain Map representation, but my question is: how do we convert from HDR back to the SDR + Gain Map representation? As I understand it right now, conversion from HDR to SDR + Gain Map involves two steps: first, tone mapping the HDR to get a correct SDR; then, once we have both HDR and SDR, calculating the Gain Map from the equation on the documentation page. Am I correct? If so, what tone-mapping algorithm is used for the HDR -> SDR conversion right now? I can't find any information about this on the internet. Would be very grateful for your response!
3 replies · 2 boosts · 1.3k views · Sep ’23
CIColor.init(color: UIColor) not working properly on macOS 14 Catalyst
When initializing a CIColor with a dynamic UIColor (like the system colors that resolve differently based on light/dark mode) on macOS 14 (Mac Catalyst), the resulting CIColor is invalid/uninitialized. For instance:

po CIColor(color: UIColor.systemGray2)
→ <uninitialized>

po CIColor(color: UIColor.systemGray2.resolvedColor(with: .current))
→ <CIColor 0x60000339afd0 (0.388235 0.388235 0.4 1) ExtendedSRGB>

But also, not all colors work even when resolved:

po CIColor(color: UIColor.systemGray.resolvedColor(with: .current))
→ <uninitialized>

I think this is caused by the color space of the resulting UIColor:

po UIColor.systemGray.resolvedColor(with: .current)
→ kCGColorSpaceModelRGB 0.596078 0.596078 0.615686 1

po UIColor.systemGray2.resolvedColor(with: .current)
→ UIExtendedSRGBColorSpace 0.388235 0.388235 0.4 1

This worked correctly before, on macOS 13.
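A hedged workaround sketch, not an explanation of the regression: resolve the dynamic color for the current trait collection and convert its CGColor into a concrete sRGB space before handing it to CIColor, which sidesteps color spaces the CIColor initializer apparently won't accept. The ciColor(from:traits:) helper name is illustrative:

import UIKit
import CoreImage

// Sketch: force a dynamic UIColor into sRGB before building a CIColor.
func ciColor(from uiColor: UIColor, traits: UITraitCollection = .current) -> CIColor? {
    let resolved = uiColor.resolvedColor(with: traits).cgColor
    guard let srgb = CGColorSpace(name: CGColorSpace.sRGB),
          let converted = resolved.converted(to: srgb, intent: .defaultIntent, options: nil) else {
        return nil
    }
    return CIColor(cgColor: converted)
}

// Usage
// let c = ciColor(from: .systemGray)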
2 replies · 0 boosts · 760 views · Aug ’23