You can easily test Apple Intelligence features in the EU on an iPad. You just have to set the system and Siri language to English (US) and sign in with a US Apple account under iCloud → Media & Purchases.
It even works with sandbox accounts! So if you don’t have a US account, you can easily create a sandbox one in App Store Connect.
Also note that Apple Intelligence is available on macOS in Europe without the need for a US Apple account. You just have to set the languages to English.
It turns out this issue is most notable for Mac Catalyst apps that use the Mac Idiom.
We built a small package that provides a better alternative to the system default Image Playground: BetterImagePlayground.
See screenshots on GitHub for comparison.
More countries will be supported in April.
Regardless, Apple Intelligence is only available on iPhone 15 Pro and newer.
There is also an API variant that takes an optional sourceImage instead of the URL:
imagePlaygroundSheet(isPresented:concepts:sourceImage:onCompletion:onCancellation:)
Feedback filed: FB16090123
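For illustration, a minimal sketch of that variant could look like this (the concept text and asset name are made up):

import SwiftUI
import ImagePlayground

struct EditView: View {
    @State private var playgroundVisible = false

    var body: some View {
        Button("Edit with Image Playground") { playgroundVisible = true }
            .imagePlaygroundSheet(
                isPresented: $playgroundVisible,
                concepts: [.text("a friendly robot")],  // hypothetical concept
                sourceImage: Image("portrait"),         // hypothetical asset name
                onCompletion: { url in print("Image generated: \(url)") },
                onCancellation: { print("Canceled") }
            )
    }
}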
Hi J,
Thanks for the response!
Yes, Apple Intelligence is enabled on my Mac. When I run the app as a "real" Mac Catalyst app, it actually works.
I filed FB16077581 for this, including a very minimal sample project.
The following should be enough when added to a new iOS app project:
import SwiftUI
import ImagePlayground

struct ContentView: View {
    @State var playgroundVisible = false

    var body: some View {
        Button(action: { playgroundVisible.toggle() }, label: {
            Label("Show Image Playground", systemImage: "apple.image.playground")
        })
        .padding()
        .imagePlaygroundSheet(isPresented: $playgroundVisible) { url in
            print("Image generated: \(url)")
        }
    }
}
Have you exported the image from Photos and checked in another app whether the background is really white?
It might be that Photos just renders transparency as white.
It could be a problem that you are getting the currentDrawable outside the Operation. If your rendering can't keep up with the draw requests, the operations will pile up while also holding locks on the view's drawable.
Instead, you could get the view's drawable inside the operation and discard the whole draw operation if the view is not yet ready.
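A rough sketch of that idea; renderQueue and the encoding details are placeholders, not your actual code:

import MetalKit

// Serial queue so draw operations run one at a time.
let renderQueue: OperationQueue = {
    let queue = OperationQueue()
    queue.maxConcurrentOperationCount = 1
    return queue
}()

func scheduleDraw(for view: MTKView) {
    renderQueue.addOperation { [weak view] in
        // Fetch the drawable *inside* the operation and bail out early
        // instead of blocking while holding on to the view's drawable.
        guard let view, let drawable = view.currentDrawable else {
            return // view not ready → discard this frame
        }
        // ... encode the render pass targeting drawable.texture and
        // present the drawable via the command buffer ...
        _ = drawable
    }
}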
You have to import CoreImage.CIFilterBuiltins explicitly to get access to the type-safe filters. Did you do that?
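For example (the filter choice here is arbitrary):

import CoreImage
import CoreImage.CIFilterBuiltins

let filter = CIFilter.gaussianBlur() // type-safe builtin, no string-based lookup
filter.inputImage = inputImage       // some existing CIImage
filter.radius = 8
let blurred = filter.outputImage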
A UIImage is a wrapper that can be backed by different types of images. The ciImage property is only set when the UIImage was created from a CIImage using UIImage(ciImage:). In most cases, however, a UIImage is backed by a CGImage.
If you want to create a CIImage from a UIImage, you should always use the CIImage(image:) initializer and never rely on the ciImage property.
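A quick illustration (the image name is made up):

import UIKit
import CoreImage

let uiImage = UIImage(named: "photo")! // typically backed by a CGImage
// uiImage.ciImage is usually nil here; it's only set for UIImage(ciImage:).

// This works regardless of the backing type:
let ciImage = CIImage(image: uiImage)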
Same here. ✋
Both APIs are used for writing a Gain Map HDR image, i.e., an SDR RGB image with an auxiliary single-channel gain map image that holds the HDR information.
You can and should use an 8-bit format for this kind of image, e.g., RGBA8.
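With Core Image, for example, a minimal sketch of writing such a file could look like this (sdrImage, gainMap, and url are assumed to exist):

import CoreImage

let context = CIContext()
// Write the 8-bit SDR base image and its gain map into a single HEIC file.
try context.writeHEIFRepresentation(
    of: sdrImage,
    to: url,
    format: .RGBA8, // 8 bit is enough for the SDR base
    colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
    options: [.hdrGainMapImage: gainMap] // auxiliary single-channel gain map
)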
The format Apple mentioned in the 2023 video (they called it ISO HDR) is for storing an HDR image directly (without an SDR representation). For that, you'd need more than 8 bits because the range of color values in an HDR image is much larger.
However, it seems the industry is moving towards the SDR + gain map standard introduced by Adobe last year, which Apple is now also adopting. I would assume that they won't pursue the ISO HDR format much further, as it's not as compatible and takes more space.
I recommend checking out the old Core Image Programming Guide on how to supply a ROI function.
Basically, you are given a rect in the target image (that your filter should produce) and are asked what part of the input image your filter needs to produce it. For small images, this is usually not relevant because Core Image processes the whole image in one go. For large images, however, CI applies tiling, i.e., processing slices of the image in sequence and stitching them together in the end. For this, the ROI is very important.
In your mirroring example, the first tile might be the left side of the image and the second tile the right side. When your ROI is asked what part of the input is needed to produce the left side of the result, you need to return the right side of the input image because it's mirrored along the x-axis, and vice versa.
So you basically have to apply the same x-mirroring trick you use when sampling to mirror the rect in your ROI callback.
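In code, that could look roughly like this (kernel and inputImage stand in for your setup):

let extent = inputImage.extent
let mirrored = kernel.apply(
    extent: extent,
    roiCallback: { _, rect in
        // The left part of the output needs the right part of the input,
        // and vice versa: mirror the requested rect within the extent.
        CGRect(x: extent.minX + (extent.maxX - rect.maxX),
               y: rect.minY,
               width: rect.width,
               height: rect.height)
    },
    arguments: [inputImage]
)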
Objective-C or Swift?
You can check out my MTKView subclass for an example on how to render CoreImage output with Metal. I hope that helps.
We found a workaround for this issue by replacing NSDecimalRound with the following helper (works when coming from a Double):
import Foundation

extension Double {
    /// Helper for rounding a number to a fixed number of decimal places (`scale`).
    ///
    /// This is a replacement for `NSDecimalRound`, which causes issues in release builds
    /// with the Xcode 16 RC.
    func roundedDecimal(scale: Int = 0, rule: FloatingPointRoundingRule = .toNearestOrEven) -> Decimal {
        let significand = Decimal((self * pow(10, Double(scale))).rounded(rule))
        return Decimal(sign: self.sign, exponent: -scale, significand: significand)
    }
}
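For example:

(3.14159).roundedDecimal(scale: 2) // Decimal 3.14
(2.5).roundedDecimal()             // Decimal 2 (.toNearestOrEven = banker's rounding)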