
Correct CIContext setup for wide gamut processing
What is the correct way to set up Core Image for processing (and preserving) wide gamut images? I understand that there are four options for the workingColorSpace: displayP3, extendedLinearDisplayP3, extendedSRGB, and extendedLinearSRGB. While I understand the implications of all of them (linear vs. sRGB gamma curve and Display P3 vs. sRGB primaries), I don't know which is recommended or best practice to use.

Also, might there be compatibility issues with the built-in CI filters? Do they make assumptions about the working color space?

Further, what's the recommended color space for the rendering destination? I assume Display P3, since it's also the color space of photos taken with the iPhone camera…

Regarding the workingFormat: while I understand that it makes sense to use a 16-bit float format (RGBAh) for extended range, it also seems very costly. Would it be somehow possible (and advisable) to set up the CIContext with an 8-bit format while still preserving wide gamut?

Are there differences or special considerations between the platforms, for both color space and format? (Sorry for the many questions, but they all seem related…)
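For reference, here is a minimal sketch of the setup I currently have in mind, assuming extendedLinearSRGB as the working space and Display P3 as the rendering destination (just my current guess, not a confirmed best practice):

```swift
import CoreImage

// Working space: extended linear sRGB (an assumption, see the question above).
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!
// Destination: Display P3, matching the color space of iPhone camera photos.
let destinationSpace = CGColorSpace(name: CGColorSpace.displayP3)!

let context = CIContext(options: [
    .workingColorSpace: workingSpace,
    .workingFormat: NSNumber(value: CIFormat.RGBAh.rawValue)  // 16-bit float per channel
])

// Render a placeholder image into the Display P3 destination.
let image = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))
let rendered = context.createCGImage(image,
                                     from: image.extent,
                                     format: .RGBAh,
                                     colorSpace: destinationSpace)
```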
Replies: 0 · Boosts: 0 · Views: 478 · Jun ’20

Core Image: Gamma curve best practice
When setting up a CIContext, one can specify the workingColorSpace. The color space also determines which gamma curve is used (usually sRGB or linear). When no color space is explicitly set, Core Image uses a linear curve. The (pretty outdated) Core Image Programming Guide (https://developer.apple.com/library/archive/documentation/GraphicsImaging/Conceptual/CoreImaging/ci_advanced_concepts/ci.advanced_concepts.html#//apple_ref/doc/uid/TP30001185-CH9-SW14) also says: "By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space."

Now I'm wondering if this makes sense in most scenarios. For instance, if I blur a checkerboard pattern with a CIGaussianBlur filter using a default CIContext, I get a different result than when using a non-linear sRGB color space (see https://www.icloud.com/keynote/0FLvnwEPx-dkn95dMorENGa0w#Presentation). White clearly gets more weight than black with a linear gamma, which makes sense, I suppose. But I find that the non-linear (sRGB) result looks "more correct".

What are best practices here? When should the gamma curve be a consideration?
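This is a simplified sketch of the comparison I did (the checkerboard generator and blur radius stand in for my actual test setup):

```swift
import CoreImage

// Test image: a simple checkerboard, blurred with CIGaussianBlur.
let checkerboard = CIFilter(name: "CICheckerboardGenerator")!
    .outputImage!
    .cropped(to: CGRect(x: 0, y: 0, width: 256, height: 256))
let blurred = checkerboard.applyingFilter("CIGaussianBlur",
                                          parameters: [kCIInputRadiusKey: 8])

// Default context: linear working color space.
let linearContext = CIContext()
// Context with a non-linear sRGB working color space.
let srgbContext = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!
])

// Render both into the same sRGB destination and compare the results.
let destination = CGColorSpace(name: CGColorSpace.sRGB)!
let linearResult = linearContext.createCGImage(blurred, from: blurred.extent,
                                               format: .RGBA8, colorSpace: destination)
let srgbResult = srgbContext.createCGImage(blurred, from: blurred.extent,
                                           format: .RGBA8, colorSpace: destination)
```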
Replies: 2 · Boosts: 0 · Views: 947 · Jun ’20

Store uncompressed depth data in HEIF
I set up my AVCaptureSession for photo capture with depth data. In my AVCapturePhotoCaptureDelegate I get the AVCapturePhoto of the capture, which contains the depth data. I call fileDataRepresentation() on it and later use a PHAssetCreationRequest to save the image (including the depth data) to a new asset in Photos. When I later load the image and its depth data again, the depth data appears compressed: I observe some heavy quantization of the data. Is there a way to avoid this compression? Do I need to use specific settings, or even a different API, for exporting the image?
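For context, this is roughly the save path I use (a simplified sketch of my delegate, with error handling omitted):

```swift
import AVFoundation
import Photos

final class PhotoSaver: NSObject, AVCapturePhotoCaptureDelegate {

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // The HEIF data returned here already embeds the depth map.
        guard error == nil,
              let photoData = photo.fileDataRepresentation() else { return }

        PHPhotoLibrary.shared().performChanges({
            // Create a new asset from the container produced by AVFoundation;
            // no separate resource is added for the depth data.
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, data: photoData, options: nil)
        }, completionHandler: { success, error in
            // handle result…
        })
    }
}
```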
Replies: 2 · Boosts: 0 · Views: 1.1k · Jun ’20

Capture photo depth data in 32-bit
Is it possible to set up an AVCaptureSession in a way that it will deliver 32-bit depth data (instead of 16-bit) during a photo capture? I configured the AVCapturePhotoOutput and the AVCapturePhotoSettings to deliver depth data, and it works: my delegate receives an AVDepthData block… containing 16-bit depth data. I tried setting the AVCaptureDevice's activeDepthDataFormat to a 32-bit format, but the format of the delivered AVDepthData is still only 16-bit, regardless of which format I set on the device. For video capture using an AVCaptureDepthDataOutput this seems to work, just not for an AVCapturePhotoOutput. Any hints are appreciated. 🙂
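This is roughly what I'm doing (simplified; the session is assumed to already be configured with a depth-capable device input and the photo output):

```swift
import AVFoundation
import CoreMedia
import CoreVideo

// Sketch of the configuration described above. Despite selecting a 32-bit
// float depth format on the device, the AVDepthData delivered with the
// photo still arrives as 16-bit.
func configureDepthCapture(device: AVCaptureDevice,
                           photoOutput: AVCapturePhotoOutput) throws {
    // Enable depth data delivery on the photo output
    // (after it has been added to the session).
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    // Try to select a 32-bit float depth format on the device
    // (could also be kCVPixelFormatType_DisparityFloat32, depending on the device).
    let formats = device.activeFormat.supportedDepthDataFormats
    if let depth32 = formats.first(where: {
        CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat32
    }) {
        try device.lockForConfiguration()
        device.activeDepthDataFormat = depth32
        device.unlockForConfiguration()
    }

    // Request depth data for the individual capture as well.
    let settings = AVCapturePhotoSettings()
    settings.isDepthDataDeliveryEnabled = true
    // photoOutput.capturePhoto(with: settings, delegate: …)
}
```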
Replies: 1 · Boosts: 0 · Views: 1.5k · Jun ’20