Filed under FB13121690. Thanks for looking into this!
Also filed as FB12966266, including a demo project. Thanks for looking into this!
FB12914857. Thanks for looking into this!
Filed as FB12901757. Thanks for looking into this!
Hmm, this is really hard to debug without seeing the actual filters. But if I had to guess, I'd say the implementation of the ROI callback (passed when calling the CIKernel) is wrong. I can also recommend using an MTKView for displaying a CIImage instead of using Core Graphics. There is a sample from Apple showing how to do that.
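To illustrate the ROI callback point: here is a hedged sketch (not your actual filter) of applying a CIKernel whose Metal function samples one pixel to the left of each destination pixel. The function name and shift amount are made up for the example; the key part is that the callback returns the full source region the kernel reads.

```swift
import CoreImage

// Sketch: applying a hypothetical CIKernel that reads 1 px to the left
// of each destination pixel.
func applyShiftKernel(_ kernel: CIKernel, to input: CIImage) -> CIImage? {
    kernel.apply(
        extent: input.extent,
        roiCallback: { _, destRect in
            // The ROI callback must return the *source* rect the kernel
            // reads for a given destination rect. If it's too small,
            // Core Image clamps at tile boundaries and you get garbage
            // or corrupted edges — a common cause of "corrupted" output.
            destRect.insetBy(dx: -1, dy: 0)
        },
        arguments: [input]
    )
}
```

If the kernel only reads the pixel at the destination coordinate, returning `destRect` unchanged is correct; any neighborhood sampling needs a correspondingly expanded rect.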
Can you please upload some screenshots to show what "corrupted" means? Thanks!
Filed as FB12389932. Thanks!
Every time you render a CIImage with a CIContext, CI does a filter graph analysis to determine the best path for rendering the image (determining intermediates, region of interest, kernel concatenation, etc.). This can be quite CPU-intensive.
If you only have a few simple operations to perform on your image, and you can easily implement them in Metal directly, you are probably better off using that.
However, I would also suggest you file Feedback with the Core Image team and report your findings. We also observe a very heavy CPU load in our apps, caused by Core Image. Maybe they can find a way to further optimize the graph analysis – especially for consecutive render calls with the same instructions.
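One thing that helps in the meantime: create the CIContext once and reuse it for every render, since a fresh context per call discards Core Image's internal caches on top of the per-render graph analysis. A minimal sketch (the `.workingColorSpace: NSNull()` option is an assumption — only use it if you handle color management yourself):

```swift
import CoreImage
import Metal

// Create the context once, backed by the system Metal device, and reuse it.
let device = MTLCreateSystemDefaultDevice()!
let context = CIContext(mtlDevice: device, options: [
    // Assumption: skip color management if your pipeline doesn't need it.
    .workingColorSpace: NSNull()
])

func render(_ image: CIImage) -> CGImage? {
    // Repeated renders through the same context let CI reuse cached state.
    context.createCGImage(image, from: image.extent)
}
```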
Please provide a bit more context, e.g., what byte-order info and pixel format are you setting?
I think this is a design decision from Apple's side. You are not supposed to set the pixel brightness to more than 8x the current brightness of the screen. Otherwise, you could "blind" the user with very bright pixels, even though they set the screen to be dim.
Also filed as FB12119140. Thanks!
To enable EDR rendering, all we do is set the colorPixelFormat to MTLPixelFormatRGBA16Float (note: RGBA, not BGRA) and wantsExtendedDynamicRangeContent to YES. We don't change the colorSpace since it is already set to extended linear sRGB when setting the other properties.
As soon as we render pixel values outside [0...1], the screen switches to EDR mode and the potential and current HDR headroom adjust accordingly.
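In Swift, that setup looks roughly like this (a sketch assuming an MTKView whose backing layer is a CAMetalLayer, which is where `wantsExtendedDynamicRangeContent` lives):

```swift
import MetalKit

func enableEDR(on view: MTKView) {
    // Half-float pixel format so values above 1.0 survive (RGBA, not BGRA).
    view.colorPixelFormat = .rgba16Float

    if let metalLayer = view.layer as? CAMetalLayer {
        // Opt the layer into extended dynamic range.
        metalLayer.wantsExtendedDynamicRangeContent = true
        // No need to touch metalLayer.colorspace here: it is already
        // extended linear sRGB after setting the properties above.
    }
}
```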
Also filed as FB12042331. Thanks for looking into this!
For Core Image documentation in general, I can recommend cifilter.io, though it does not list the newest filters. You can also check out the Filter Magic app, which lets you play with most CIFilters and has a lot of documentation.
As for CIMaximumComponent and CIMinimumComponent: They will take the max/min values of R, G, and B and return a pixel with all channels set to this value.
Some examples:
RGB(1.0, 0.0, 0.0) -> max: RGB(1.0, 1.0, 1.0) | min: RGB(0.0, 0.0, 0.0)
RGB(0.5, 0.7, 0.3) -> max: RGB(0.7, 0.7, 0.7) | min: RGB(0.3, 0.3, 0.3)
So yes, they turn the image into grayscale, but it might not be what you want since the value doesn't represent the perceived lightness of the color.
You might want to check out CIPhotoEffectMono, CIPhotoEffectNoir, and CIPhotoEffectTonal for more natural grayscale conversions.
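The per-pixel math above can be sketched in plain Swift (function names `maxComponent`/`minComponent` are mine, not Core Image API — the real filters do this on the GPU for every pixel):

```swift
// What CIMaximumComponent / CIMinimumComponent compute per pixel:
// take the max (or min) of R, G, B and write it to all three channels.
func maxComponent(_ rgb: (Double, Double, Double)) -> (Double, Double, Double) {
    let v = max(rgb.0, max(rgb.1, rgb.2))
    return (v, v, v)
}

func minComponent(_ rgb: (Double, Double, Double)) -> (Double, Double, Double) {
    let v = min(rgb.0, min(rgb.1, rgb.2))
    return (v, v, v)
}
```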
This seems like a bug in the CIRAWFilter implementation. It would be great if you could file a bug report in the Feedback app for that. Thanks!
A conceptual note: The CIRAWFilter is meant to be initialized with RAW image data. You are passing it PNG data, which is not what it was designed for. It's a bit surprising that it even works with non-RAW images.
If you want to read the auxiliary data embedded in an image, you can instead do the following:
let hairMatte = CIImage(contentsOf: imageFileURL, options: [CIImageOption.auxiliarySemanticSegmentationHairMatte: true])
This should work with most CIImage initializers that provide the options parameter. Though I'm not sure if it would work if you load the image with UIImage(named:) as it might strip the auxiliary data on load.
Check out CIImageOption for available aux data to load.