Reply to Core Image drawing corruption
Hmm, this is really hard to debug without seeing the actual filters. But if I had to guess, I'd say the implementation of the ROI callback (passed when calling the CIKernel) is wrong. I can also recommend using an MTKView for displaying a CIImage instead of using Core Graphics. There is a sample from Apple showing how to do that.
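To illustrate, here is a minimal sketch of a correct ROI callback (the kernel, its 10 px sampling radius, and the helper name are made up for illustration):

import CoreImage

// Assume `kernel` samples up to 10 px around each output pixel.
// The ROI callback must return the input region that is needed to
// render a given output rect; here, the rect outset by the radius.
func applyBlurLikeKernel(_ kernel: CIKernel, to inputImage: CIImage) -> CIImage? {
    kernel.apply(
        extent: inputImage.extent,
        roiCallback: { _, destRect in destRect.insetBy(dx: -10, dy: -10) },
        arguments: [inputImage]
    )
}

If the callback returns a region that is too small, the kernel samples outside the tile Core Image provides, which shows up as exactly this kind of corruption.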
Jul ’23
Reply to High CPU usage with CoreImage vs Metal
Every time you render a CIImage with a CIContext, CI does a filter graph analysis to determine the best path for rendering the image (determining intermediates, region of interest, kernel concatenation, etc.). This can be quite CPU-intensive. If you only have a few simple operations to perform on your image, and you can easily implement them in Metal directly, you are probably better off doing that. However, I would also suggest you file Feedback with the Core Image team and report your findings. We also observe a very heavy CPU load in our apps caused by Core Image. Maybe they can find a way to further optimize the graph analysis, especially for consecutive render calls with the same instructions.
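One mitigation (it doesn't remove the per-render graph analysis, but it avoids extra setup cost) is to create the CIContext once and reuse it for all renders. A rough sketch, assuming you render into CVPixelBuffers:

import CoreImage
import CoreVideo

// Create the context once; it caches compiled kernels and other state.
let sharedContext = CIContext()

// Reuse the same context for every frame instead of creating one per render.
func render(_ image: CIImage, into buffer: CVPixelBuffer) {
    sharedContext.render(image, to: buffer)
}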
Jul ’23
Reply to EDR doesn't work on iOS?
To enable EDR rendering, all we do is set the colorPixelFormat to MTLPixelFormatRGBA16Float (note: RGBA, not BGRA) and wantsExtendedDynamicRangeContent to YES. We don't change the colorspace since it is already set to extended linear sRGB when setting the other properties. As soon as we render pixel values outside [0...1], the screen switches to EDR mode, and the potential and current HDR headroom adjust accordingly.
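In Swift, that setup looks roughly like this (view creation and drawing omitted; the cast assumes an MTKView whose backing layer is a CAMetalLayer):

import MetalKit

let view = MTKView(frame: .zero, device: MTLCreateSystemDefaultDevice())
view.colorPixelFormat = .rgba16Float // note: RGBA, not BGRA

if let metalLayer = view.layer as? CAMetalLayer {
    // Available on iOS 16+; the screen switches to EDR as soon as
    // rendered pixel values exceed 1.0.
    metalLayer.wantsExtendedDynamicRangeContent = true
    // metalLayer.colorspace is left as-is; it is already
    // extended linear sRGB after setting the properties above.
}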
Mar ’23
Reply to CIFilter documentation for CIMaximumComponent?
For Core Image documentation in general, I can recommend cifilter.io, though it does not list the newest filters. You can also check out the Filter Magic app, which lets you play with most CIFilters and has a lot of documentation. As for CIMaximumComponent and CIMinimumComponent: they take the max/min of the R, G, and B values and return a pixel with all channels set to that value. Some examples:

RGB(1.0, 0.0, 0.0) -> max: RGB(1.0, 1.0, 1.0) | min: RGB(0.0, 0.0, 0.0)
RGB(0.5, 0.7, 0.3) -> max: RGB(0.7, 0.7, 0.7) | min: RGB(0.3, 0.3, 0.3)

So yes, they turn the image into grayscale, but it might not be what you want since the value doesn't represent the perceived lightness of the color. You might want to check out CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal for more natural grayscale conversions.
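For completeness, a minimal usage sketch with the CIFilterBuiltins convenience API (the function name is just for illustration):

import CoreImage
import CoreImage.CIFilterBuiltins

// Grayscale from the per-pixel maximum of R, G, and B.
func maxComponentGrayscale(of image: CIImage) -> CIImage? {
    let filter = CIFilter.maximumComponent()
    filter.inputImage = image
    return filter.outputImage
}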
Jan ’23
Reply to [CIRAWFilterImpl semanticSegmentationHairMatte]: unrecognized selector sent to instance
This seems like a bug in the CIRAWFilter implementation. It would be great if you could file a bug report in the Feedback app for that. Thanks!

A conceptual note: CIRAWFilter is meant to be initialized with RAW image data. You are passing it PNG data, which is not what it was designed for. It's a bit surprising that it even works with non-RAW images. If you want to read the auxiliary data embedded in an image, you can instead do the following:

let hairMatte = CIImage(contentsOf: imageFileURL, options: [CIImageOption.auxiliarySemanticSegmentationHairMatte: true])

This should work with most CIImage initializers that provide the options parameter. Though I'm not sure it would work if you load the image with UIImage(named:), as that might strip the auxiliary data on load. Check out CIImageOption for the available aux data options.
Jan ’23