A UIImage is a wrapper that can be backed by different types of images. The ciImage property is only set when the UIImage was created from a CIImage using UIImage(ciImage:). In most cases, however, a UIImage is backed by a CGImage.
If you want to create a CIImage from a UIImage, you should always use the CIImage(image:) initializer and never rely on the ciImage property.
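For illustration, here is a minimal sketch of that conversion (makeCIImage is just a helper name for this example):

import UIKit
import CoreImage

// Sketch: turning a UIImage into a CIImage for further processing.
// CIImage(image:) works regardless of whether the UIImage is backed by a
// CGImage or a CIImage; reading uiImage.ciImage only works for the latter.
func makeCIImage(from uiImage: UIImage) -> CIImage? {
    CIImage(image: uiImage)
}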
Same here. ✋
Both APIs are used for writing a Gain Map HDR image, i.e., an SDR RGB image plus an auxiliary single-channel gain map that encodes the HDR information.
You can and should use an 8-bit format for this kind of image, e.g., RGBA8.
The format Apple mentioned in the 2023 video (they called it ISO HDR) is for storing an HDR image directly (without an SDR representation). For that, you'd need more than 8 bits because the range of color values in an HDR image is much larger.
However, it seems the industry is moving towards the SDR + gain map standard introduced by Adobe last year, which Apple is now also adopting. I would assume that they won't pursue the ISO HDR format much further, as it's not as compatible and takes more space.
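As a rough sketch, writing the 8-bit SDR base image with Core Image could look like this (the file URL and color space are placeholders, and the option for attaching the gain map is omitted because it depends on which of the two APIs you use):

import CoreImage

let context = CIContext()
let sdrImage = CIImage(color: .white).cropped(to: CGRect(x: 0, y: 0, width: 512, height: 512)) // stand-in for your SDR base image
let outputURL = URL(fileURLWithPath: "/tmp/photo.heic") // placeholder path

// 8 bit per channel is enough for the SDR base image;
// the HDR information lives in the auxiliary gain map.
try context.writeHEIFRepresentation(
    of: sdrImage,
    to: outputURL,
    format: .RGBA8,
    colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
    options: [:] // gain map option omitted in this sketch
)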
I recommend checking out the old Core Image Programming Guide on how to supply a ROI function.
Basically, you are given a rect in the target image (that your filter should produce) and are asked what part of the input image your filter needs to produce it. For small images, this is usually not relevant because Core Image processes the whole image in one go. For large images, however, CI applies tiling, i.e., processing slices of the image in sequence and stitching them together in the end. For this, the ROI is very important.
In your mirroring example, the first tile might be the left side of the image and the second tile the right side. When your ROI is asked what part of the input is needed to produce the left side of the result, you need to return the right side of the input image because it's mirrored along the x-axis, and vice versa.
So you basically have to apply the same x-mirroring trick you use when sampling to mirror the rect in your ROI callback.
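As a sketch, assuming mirrorKernel is your Metal-based CIKernel and the mirroring happens across the vertical center line of the input extent, it could look like this:

import CoreImage

/// Sketch: applying an x-mirroring CIKernel with a matching ROI callback.
func applyMirror(_ mirrorKernel: CIKernel, to inputImage: CIImage) -> CIImage? {
    let inputExtent = inputImage.extent
    return mirrorKernel.apply(
        extent: inputExtent,
        roiCallback: { _, rect in
            // The left part of the output is produced from the right part of
            // the input (and vice versa), so mirror the requested rect across
            // the vertical center line of the input extent.
            CGRect(x: inputExtent.minX + (inputExtent.maxX - rect.maxX),
                   y: rect.origin.y,
                   width: rect.width,
                   height: rect.height)
        },
        arguments: [inputImage]
    )
}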
Objective-C or Swift?
You can check out my MTKView subclass for an example of how to render Core Image output with Metal. I hope that helps.
We found a workaround for this issue by replacing NSDecimalRound with the following helper (works when coming from a Double):
import Foundation

extension Double {
/// Helper for rounding a number to a fixed number of decimal places (`scale`).
///
/// This is a replacement for `NSDecimalRound`, which causes issues in release builds
/// with the Xcode 16 RC.
func roundedDecimal(scale: Int = 0, rule: FloatingPointRoundingRule = .toNearestOrEven) -> Decimal {
let significand = Decimal((self * pow(10, Double(scale))).rounded(rule))
return Decimal(sign: self.sign, exponent: -scale, significand: significand)
}
}
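For example, (3.14159).roundedDecimal(scale: 2) returns the Decimal value 3.14.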
Same here. 🖐️
I'm not sure using Core Image is the best choice here. CI might impose limits on the runtime of kernels, and your regression kernel seems too expensive.
It's also not intended that you pass the images and the mask into the kernel as CGImages via the arguments. It would be better to convert them to CIImages first and pass them via the inputs parameter; CI would then convert them to Metal textures for you. Unfortunately, Core Image doesn't support texture arrays, so you would need to find a workaround for that.
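Assuming you're working with a CIImageProcessorKernel, that would look roughly like this (MyRegressionKernel, cgImage, and cgMask are placeholder names):

import CoreImage

// Sketch: passing the images via inputs instead of as CGImages in arguments.
// Core Image hands them to the kernel's process() method as Metal textures.
let ciImage = CIImage(cgImage: cgImage)
let ciMask = CIImage(cgImage: cgMask)

let output = try MyRegressionKernel.apply(
    withExtent: ciImage.extent,
    inputs: [ciImage, ciMask], // converted to textures by Core Image
    arguments: nil             // only additional (small) parameters go here
)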
Have you tried running your kernel in a pure Metal pipeline? It might be the better choice here. Or do you need it to be part of a Core Image pipeline?
It turns out that the issue only occurs when a CIContext is also being initialized in the same file (it doesn't matter where in the file).
As soon as I remove the CIContext, the compiler doesn't complain anymore about the cast to IOSurfaceRef (doesn't even need to be a force-cast), and there is also no runtime error.
Any updates on this issue?
I unfortunately still can't ask a question about App Intents and Apple Intelligence. 😕
It's not yet available:
Apple Intelligence will be available in an upcoming beta.
https://developer.apple.com/apple-intelligence/
The static properties on CIFormat are lets now in iOS 18 / macOS 15. 👍
I can confirm the issue (tested the Core Image API).
Interestingly, the heif10Representation(...) API still works as expected.
Did you set wantsExtendedDynamicRangeContent on the CAMetalLayer? What happens when you set the layer's color space to some HDR color space? Also, make sure that you set the CIRenderDestination's colorSpace to the same space as the layer's.
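For reference, a minimal sketch of that setup (metalLayer, drawable, and commandBuffer are placeholders; the chosen color space is just an example):

import CoreImage
import Metal
import QuartzCore

let hdrColorSpace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)!

// Opt the layer into EDR rendering and give it an HDR-capable color space.
// Note that the layer property is spelled colorspace (lowercase s).
metalLayer.wantsExtendedDynamicRangeContent = true
metalLayer.colorspace = hdrColorSpace
metalLayer.pixelFormat = .rgba16Float

// The render destination should use the same color space as the layer.
let destination = CIRenderDestination(mtlTexture: drawable.texture, commandBuffer: commandBuffer)
destination.colorSpace = hdrColorSpace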
So far, there is no API in visionOS that gives developers access to the live video feed.
This is by design, and most likely to protect the user's privacy: While on an iPhone you explicitly consent to sharing your surroundings with an app by pointing your camera at things, you can't really avoid that on the Apple Vision Pro.