
Unable to change Photos permission of iPad app on Mac
Users can run our apps on Macs with Apple Silicon via the "iPad Apps on Mac" feature. The apps use PHPhotoLibrary.requestAuthorization(for: .addOnly, handler: callback) to request write-only access to the user's Photo Library during image export. This works as intended on macOS, but a huge problem arises when the user denies access (by accident or intentionally) and later decides that they want us to add their image to Photos: there is no way to grant this permission again.

In System Preferences → Privacy & Security → Photos, the app is simply not listed; in fact, none of the "iPad Apps on Mac" apps appear here. Not even tccutil reset all my.bundle.id works; it just reports tccutil: Failed to reset all approval status for my.bundle.id. Uninstalling, restarting the Mac, and reinstalling the app also doesn't work. The system seems to remember the initial decision.

Is this an oversight in the integration of those apps with macOS, or are we missing something fundamental here? Is there maybe a way to prompt the user again?
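For reference, here is a minimal sketch of the request flow in question; the save helper and UI handling are simplified, and the .denied branch is exactly where we are stuck, since the app never shows up in the macOS Photos privacy settings:

```swift
import Photos
import UIKit

/// Minimal sketch of the add-only authorization flow described above.
func exportToPhotos(_ image: UIImage) {
    switch PHPhotoLibrary.authorizationStatus(for: .addOnly) {
    case .notDetermined:
        // First export: the system permission prompt appears.
        PHPhotoLibrary.requestAuthorization(for: .addOnly) { status in
            guard status == .authorized else { return }
            saveToLibrary(image)
        }
    case .authorized, .limited:
        saveToLibrary(image)
    case .denied, .restricted:
        // On iOS we would direct the user to Settings → Privacy → Photos.
        // When running as an "iPad App on Mac", the app is missing from the
        // corresponding macOS settings pane, so there is nothing to point to.
        break
    @unknown default:
        break
    }
}

func saveToLibrary(_ image: UIImage) {
    PHPhotoLibrary.shared().performChanges({
        _ = PHAssetChangeRequest.creationRequestForAsset(from: image)
    }, completionHandler: nil)
}
```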
Replies: 3 · Boosts: 3 · Views: 1.3k · Apr ’23

PHPicker fails to load RAW images
We observed that the PHPicker is unable to load RAW images captured on an iPhone in some scenarios, and it is also somehow related to iCloud. Here is the setup: The PHPickerViewController is configured with preferredAssetRepresentationMode = .current to avoid transcoding. The image is loaded from the item provider like this:

```swift
if itemProvider.hasItemConformingToTypeIdentifier(kUTTypeImage as String) {
    itemProvider.loadFileRepresentation(forTypeIdentifier: kUTTypeImage as String) { url, error in
        // work
    }
}
```

This usually works, also for RAW images. However, when trying to load a RAW image that has just been captured with the iPhone, the loading fails with the following errors on the console:

```
[claims] 43A5D3B2-84CD-488D-B9E4-19F9ED5F39EB grantAccessClaim reply is an error: Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.image" UserInfo={NSLocalizedDescription=Cannot load representation of type public.image, NSUnderlyingError=0x280480540 {Error Domain=NSCocoaErrorDomain Code=4097 "Couldn’t communicate with a helper application." UserInfo={NSUnderlyingError=0x2804a8e70 {Error Domain=NSCocoaErrorDomain Code=4097 "connection from pid 19420 on anonymousListener or serviceListener" UserInfo={NSDebugDescription=connection from pid 19420 on anonymousListener or serviceListener}}}}}
```

We observed that on some devices, loading the image will actually work after a short time (~30 sec), but on others it will always fail. We think it is related to iCloud Photos: On the device that has iCloud Photos sync enabled, the picker is able to load the image right after it was synced to the cloud. On devices that don't sync the image, loading always fails. It seems that the sync process is doing some processing (?) of the image that later enables the picker to load it successfully, but that's just guessing.

Additional observations:

- This seems to only occur for images that were taken with the stock Camera app. When using Halide to capture RAW (either ProRAW or RAW), the picker is able to load the image.
- When trying to load the image as kUTTypeRawImage instead of kUTTypeImage, it also fails.
- The picker also can't load RAW images that were AirDropped from another device, unless they synced to iCloud first.
- This is reproducible using the Selecting Photos and Videos in iOS sample code project.
- We observed this happening in other apps that use the PHPicker, not just ours.

Is this a bug, or is there something that we are missing?
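For context, this is a sketch of the picker configuration described above; presentation and the delegate implementation are omitted:

```swift
import PhotosUI

/// Sketch of the picker setup used in the scenario above.
func makeRAWCapablePicker(delegate: PHPickerViewControllerDelegate) -> PHPickerViewController {
    var configuration = PHPickerConfiguration(photoLibrary: .shared())
    configuration.filter = .images
    // Request the asset as-is to avoid transcoding; this is the mode in which
    // freshly captured RAW images fail to load.
    configuration.preferredAssetRepresentationMode = .current

    let picker = PHPickerViewController(configuration: configuration)
    picker.delegate = delegate
    return picker
}
```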
Replies: 5 · Boosts: 1 · Views: 2.2k · Mar ’23

Observe currentEDRHeadroom for changes
Is there a way to observe the currentEDRHeadroom property of UIScreen for changes? KVO is not working for this property... I understand that I can query the current headroom in the draw(...) method to adapt the rendering. However, our apps only render on demand when the user changes parameters. We would also like to re-render when the current EDR headroom changes, to adapt the tone mapping to the new environment. The only solution we've found so far is to continuously poll the screen for changes, which doesn't seem ideal. It would be better if the property were observable via KVO or if there were a system notification to listen for. Thanks!
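For illustration, the polling workaround looks roughly like this; a sketch assuming a CADisplayLink-driven check, where the tolerance and the redraw hook are placeholders:

```swift
import UIKit

/// Sketch of the polling workaround: check the headroom on every display
/// refresh and trigger a re-render only when it actually changed.
final class EDRHeadroomObserver: NSObject {
    private var displayLink: CADisplayLink?
    private var lastHeadroom: CGFloat = 1.0
    private let onChange: (CGFloat) -> Void

    init(onChange: @escaping (CGFloat) -> Void) {
        self.onChange = onChange
        super.init()
        displayLink = CADisplayLink(target: self, selector: #selector(tick))
        displayLink?.add(to: .main, forMode: .common)
    }

    /// The display link retains its target, so it must be invalidated explicitly.
    func invalidate() {
        displayLink?.invalidate()
        displayLink = nil
    }

    @objc private func tick() {
        let headroom = UIScreen.main.currentEDRHeadroom
        // Tolerance is arbitrary, just to avoid redundant re-renders.
        if abs(headroom - lastHeadroom) > 0.01 {
            lastHeadroom = headroom
            onChange(headroom) // e.g. trigger setNeedsDisplay() on the rendering view
        }
    }
}
```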
Replies: 1 · Boosts: 0 · Views: 1k · Jan ’23

Large memory consumption when running Core ML model on A13 GPU
We recently had to change our MLModel's architecture to include custom layers, which means the model can't run on the Neural Engine anymore. After the change, we observed a lot of crashes being reported on A13 devices. It turns out that the memory consumption when running the prediction with the new model on the GPU is much higher than before, when it was running on the Neural Engine. Before, the peak memory load was ~350 MB; now it spikes to over 2 GB, leading to a crash most of the time. This only seems to happen on the A13. When forcing the model to run only on the CPU, the memory consumption is still high, but the same as running the old model on the CPU (~750 MB peak). All tested on iOS 16.1.2.

We profiled the process in Instruments and found that there are a lot of memory buffers allocated by Core ML that are not freed after the prediction. The allocation stack trace for those buffers is the following:

[allocation stack trace attachment]

We ran the same model on a different device and found the same buffers in Instruments, but there they are only 4 KB in size. It seems Core ML is somehow massively over-allocating memory when run on the A13 GPU.

So far we limit the model to run only on the CPU for those devices, but this is far from ideal. Is there any other model setting or workaround that we can use to avoid this issue?
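For reference, the CPU-only fallback we use on affected devices boils down to something like this; a minimal sketch where the model URL and the device check are placeholders:

```swift
import CoreML

/// Sketch of the current workaround: force CPU-only execution on affected devices.
func loadModel(at modelURL: URL, isAffectedA13Device: Bool) throws -> MLModel {
    let configuration = MLModelConfiguration()
    // .all would allow GPU execution (the ANE is ruled out by our custom layers anyway);
    // on the A13 the GPU path is the one showing the 2 GB memory spikes.
    configuration.computeUnits = isAffectedA13Device ? .cpuOnly : .all
    return try MLModel(contentsOf: modelURL, configuration: configuration)
}
```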
Replies: 2 · Boosts: 0 · Views: 1.2k · Dec ’22

CIColorCube sometimes producing no or broken output in macOS 13
With macOS 13, the CIColorCube and CIColorCubeWithColorSpace filters gained the extrapolate property for supporting EDR content. When setting this property, we observe that the outputImage of the filter sometimes (~1 in 3 tries) just returns nil. And sometimes it “just” causes artifacts to appear when rendering EDR content (see screenshot below). The artifacts even appear sometimes when extrapolate was not set.

[Screenshot: input | correct output | broken output]

This was reproduced on Intel-based and M1 Macs. All of the LUT-based filters in our apps are broken in this way, and we could not find a workaround for the issue so far. Does anyone experience the same?
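For reference, this is roughly how such a filter is set up; a minimal sketch with a trivial 2×2×2 identity cube, since the actual LUT data doesn't matter for the repro:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Sketch: a tiny identity color cube with the new `extrapolate` flag set.
/// With extrapolate enabled, `outputImage` intermittently comes back nil on macOS 13.
func makeColorCubeFilter(input: CIImage) -> CIImage? {
    let dimension = 2
    var cube = [Float]()
    for b in 0..<dimension {
        for g in 0..<dimension {
            for r in 0..<dimension {
                // Identity mapping (values are already 0 or 1 for dimension 2), RGBA order.
                cube += [Float(r), Float(g), Float(b), 1.0]
            }
        }
    }

    let filter = CIFilter.colorCubeWithColorSpace()
    filter.inputImage = input
    filter.cubeDimension = Float(dimension)
    filter.cubeData = cube.withUnsafeBufferPointer { Data(buffer: $0) }
    filter.colorSpace = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!
    filter.extrapolate = true // new in macOS 13 / iOS 16
    return filter.outputImage // sometimes nil, sometimes produces artifacts
}
```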
Replies: 2 · Boosts: 0 · Views: 1.2k · Oct ’22

Old macOS duplicate detection in Photos
A few of our users reported that images saved with our apps disappear from their library in Photos after a few seconds. All of them own a Mac with an old version of macOS, and all of them have iCloud syncing enabled for Photos. Our apps use Core Image to process images. Core Image will transfer most of the input's metadata to the output. While we thought this was generally a good idea, this seems to be causing the issue: The old version of Photos (or even iPhoto?) that is running on the Mac seems to think that the output image of our app is a duplicate of the original image that was loaded into our app. As soon as the iCloud sync happens, the Mac removes the image from the library, even when it's in sleep mode. When the Mac is turned off or disconnected from the internet, the images stay in the library—until the Mac comes back online. This seems to be caused by the output's metadata, but we couldn't figure out what fields are causing the old Photos to detect the new image as duplicate. It's also very hard to reproduce without installing an old macOS on some machine. Does anyone know what metadata field we need to change to not be considered a duplicate?
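In case it helps others experiment: this is a sketch of how one could strip or override suspect Exif/TIFF date fields on export via Image I/O. Which fields (if any) actually trigger the duplicate detection is exactly the open question, so the chosen keys below are guesses, not a confirmed fix:

```swift
import ImageIO
import UniformTypeIdentifiers

/// Experimental sketch: write `cgImage` to `destinationURL` while dropping
/// metadata fields that might make old Photos versions treat it as a duplicate.
func writeImageStrippingSuspectMetadata(_ cgImage: CGImage,
                                        originalProperties: [String: Any],
                                        to destinationURL: URL) -> Bool {
    var properties = originalProperties

    // Drop the Exif capture dates so the export no longer shares them with the input.
    var exif = (properties[kCGImagePropertyExifDictionary as String] as? [String: Any]) ?? [:]
    exif[kCGImagePropertyExifDateTimeOriginal as String] = nil
    exif[kCGImagePropertyExifDateTimeDigitized as String] = nil
    properties[kCGImagePropertyExifDictionary as String] = exif

    var tiff = (properties[kCGImagePropertyTIFFDictionary as String] as? [String: Any]) ?? [:]
    tiff[kCGImagePropertyTIFFDateTime as String] = nil
    properties[kCGImagePropertyTIFFDictionary as String] = tiff

    guard let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL,
                                                            UTType.jpeg.identifier as CFString,
                                                            1, nil) else { return false }
    CGImageDestinationAddImage(destination, cgImage, properties as CFDictionary)
    return CGImageDestinationFinalize(destination)
}
```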
Replies: 0 · Boosts: 0 · Views: 933 · Oct ’22

Core ML model execution sometimes fails under load
I'm processing a 4K video with a complex Core Image pipeline that also invokes a neural style transfer Core ML model. This works very well, but sometimes, for very few frames, the model execution fails with the following error messages:

```
Execution of the command buffer was aborted due to an error during execution. Internal Error (0000000e:Internal Error)
Error: command buffer exited with error status.
    The Metal Performance Shaders operations encoded on it may not have completed.
    Error:
        (null)
        Internal Error (0000000e:Internal Error)
        <CaptureMTLCommandBuffer: 0x280b95d90> -> <AGXG15FamilyCommandBuffer: 0x108f143c0>
            label = <none>
            device = <AGXG15Device: 0x106034e00>
                name = Apple A16 GPU
            commandQueue = <AGXG15FamilyCommandQueue: 0x1206cee40>
                label = <none>
                device = <AGXG15Device: 0x106034e00>
                    name = Apple A16 GPU
            retainedReferences = 1
[espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": Internal Error (0000000e:Internal Error); code=1 status=-1
[coreml] Error computing NN outputs -1
[coreml] Failure in -executePlan:error:.
```

It's really hard to reproduce since it only happens occasionally. I also didn't find a way to access that Internal Error mentioned, so I don't know the real reason why it fails. Any advice would be appreciated!
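One pragmatic mitigation (not a fix) would be to simply retry the prediction for the affected frame, since the failure is transient; a sketch assuming a generic MLModel and a prepared MLFeatureProvider:

```swift
import CoreML

/// Sketch: retry a prediction a couple of times, since the command-buffer
/// failure above only hits occasional frames.
func predictWithRetry(model: MLModel,
                      input: MLFeatureProvider,
                      attempts: Int = 3) throws -> MLFeatureProvider {
    var lastError: Error?
    for _ in 0..<attempts {
        do {
            return try model.prediction(from: input)
        } catch {
            lastError = error // e.g. the "Error computing NN outputs" case above
        }
    }
    throw lastError ?? NSError(domain: "Prediction", code: -1)
}
```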
Replies: 1 · Boosts: 0 · Views: 1.7k · Oct ’22

EDR support for images
More and more iOS devices can capture content with high/extended dynamic range (HDR/EDR) now, and even more devices have screens that can display that content properly. Apple also gave us developers the means to correctly display and process this EDR content in our apps on macOS and now also on iOS 16. There are a lot of EDR-related sessions from WWDC 2021 and 2022. However, most of them focus on HDR video but not images, even though Camera captures HDR images by default on many devices. Interestingly, those HDR images seem to use a proprietary format that relies on EXIF metadata and an embedded HDR gain map image for displaying the HDR effect in Photos.

Some observations:

- Only Photos will display those metadata-driven HDR images in their proper brightness range. Files, for instance, does not.
- Photos will not display other HDR formats like OpenEXR or HEIC with a BT.2100-PQ color space in their proper brightness.
- When using the PHPicker, it will even automatically tone-map the EDR values of OpenEXR images to SDR. The only way to load those images is to request the original image via PHAsset, which requires photo library access.

And here comes my main point: There is no API that enables us developers to load iPhone HDR images (with metadata and gain map) in a way that decodes image + metadata into EDR pixel values. That means we cannot display and edit those images in our app the same way Photos does. There are ways to extract and embed the HDR gain maps from/into images using Image I/O APIs (see the sketch below), but we don't know the algorithm used to blend the gain map with the image's SDR pixel values to get the EDR result.

It would be very helpful to know how decoding and encoding from SDR + gain map to HDR and back works. Alternatively (or in addition), it would be great if common image loading APIs like Image I/O and Core Image would provide APIs to load those images into an EDR image representation (16-bit float linear sRGB with extended values, for example) and write EDR images into SDR + gain map images so that they are correctly displayed in Photos.

Thanks for your consideration! We really want to support HDR content in our image editing apps, but without the proper APIs, we can only guess how image HDR works on iOS.
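For completeness, extracting the gain map auxiliary image via Image I/O looks roughly like this; a sketch only, and interpreting the returned data (the actual blending math) is the undocumented part:

```swift
import ImageIO

/// Sketch: read the HDR gain map auxiliary image that Camera embeds in HEIC/JPEG files.
/// The returned dictionary contains the raw gain map bitmap, a description of its layout,
/// and its metadata, but not the formula for combining it with the SDR image:
///   info[kCGImageAuxiliaryDataInfoData as String]            → Data with the gain map pixels
///   info[kCGImageAuxiliaryDataInfoDataDescription as String] → width/height/bytesPerRow etc.
///   info[kCGImageAuxiliaryDataInfoMetadata as String]        → CGImageMetadata
func loadHDRGainMapInfo(from url: URL) -> [String: Any]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeHDRGainMap) as? [String: Any]
    else { return nil }
    return info
}
```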
Replies: 2 · Boosts: 8 · Views: 2k · Sep ’22

Cache intermediates in combination with cropping
Core Image has the concept of a Region of Interest (ROI) that allows for nice optimizations during processing. For instance, if a filtered image is cropped before rendering, Core Image can tell the filters to only process that cropped region of the image. This means no pixels are processed that would be discarded by the cropping. Here is an example:

```swift
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cropped = blurred.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
```

First, we apply a Gaussian blur filter to the whole image, then we crop to a smaller rect. The corresponding filter graph looks like this:

[filter graph attachment]

Even though the extent of the image is rather large, the ROI of the crop is propagated back to the filter so that it only processes the pixels within the rendered region.

Now to my problem: Core Image can also cache intermediate results of a filter chain. In fact, it does that automatically. This improves performance when, for example, only changing the parameter of a filter in the middle of the chain and rendering again: everything before that filter doesn't change, so a cached intermediate result can be used. CI also has a mechanism for explicitly defining such a caching point by using insertingIntermediate(cache: true). But I noticed that this doesn't play nicely together with propagating the ROI. For example, if I change the example above like this:

```swift
let blurred = ciImage.applyingGaussianBlur(sigma: 5)
let cached = blurred.insertingIntermediate(cache: true)
let cropped = cached.cropped(to: CGRect(x: 100, y: 100, width: 200, height: 200))
```

the filter graph looks like this:

[filter graph attachment]

As you can see, the blur filter suddenly wants to process the whole image, regardless of the cropping that happens afterward. The inserted cached intermediate always requires the whole input image as its ROI.

I found this a bit confusing. It prevents us from inserting explicit caching points into our pipeline, since we also support non-destructive cropping using the abovementioned method. Performance is too low, and memory consumption is too high when processing all those unneeded pixels. Is there a way to insert an explicit caching point into the pipeline that correctly propagates the ROI?
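For anyone who wants to reproduce this quickly, here is a self-contained sketch of the two variants; the generated test image and the crop rect are arbitrary:

```swift
import CoreImage

// Sketch: render the same blur-then-crop chain with and without an explicit
// cached intermediate and compare the processing behavior (e.g. with
// CI_PRINT_TREE or Instruments).
let context = CIContext()
let input = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 4000, height: 4000))
let cropRect = CGRect(x: 100, y: 100, width: 200, height: 200)

// Variant 1: the ROI of the crop is propagated back to the blur.
let direct = input.applyingGaussianBlur(sigma: 5).cropped(to: cropRect)

// Variant 2: the cached intermediate makes the blur process the full 4000×4000 extent.
let viaCache = input.applyingGaussianBlur(sigma: 5)
    .insertingIntermediate(cache: true)
    .cropped(to: cropRect)

_ = context.createCGImage(direct, from: cropRect)
_ = context.createCGImage(viaCache, from: cropRect)
```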
Replies: 1 · Boosts: 0 · Views: 878 · Aug ’22

Hardware camera access from inside a Camera Extension
While trying to re-create the CIFilterCam demo shown in the WWDC session, I hit a roadblock when trying to access a hardware camera from inside my extension. Can I simply use an AVCaptureSession + AVCaptureDeviceInput + AVCaptureVideoDataOutput to get frames from an actual hardware camera and pass them to the extension's stream? If yes, when should I ask for camera access permissions? It seems the extension code is run as soon as I install the extension, but I never get prompted for access permission. Do I need to set up the capture session lazily? What's the best practice for this use case?
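This is the kind of setup I have in mind; a sketch only, assuming permission is requested lazily when the virtual camera's stream is first started rather than at extension launch:

```swift
import AVFoundation

/// Sketch: lazily set up capture from the physical camera when the virtual
/// camera's stream is started, requesting permission first if needed.
final class HardwareCameraSource: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "camera.frames")

    func startIfAuthorized() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            configureAndStart()
        case .notDetermined:
            // Open question: does this prompt ever appear when running inside a camera extension?
            AVCaptureDevice.requestAccess(for: .video) { granted in
                if granted { self.configureAndStart() }
            }
        default:
            break
        }
    }

    private func configureAndStart() {
        guard let device = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input), session.canAddOutput(output) else { return }
        session.beginConfiguration()
        session.addInput(input)
        output.setSampleBufferDelegate(self, queue: queue)
        session.addOutput(output)
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Forward the pixel buffer to the extension's stream here.
    }
}
```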
Replies: 1 · Boosts: 0 · Views: 1.3k · Jun ’22

Allow 16-bit RGBA image formats as input/output of MLModels
Starting in iOS 16 and macOS Ventura, OneComponent16Half will be a new scalar type for image features. Ideally, we would also like to use the 16-bit support for RGBA images. As of now, we need to make an indirection through MLMultiArray with Float (Float16 after the update) as the data type and copy the data into the desired image buffer. Direct support for 16-bit RGBA predictions in image format would be ideal for applications requiring high-precision outputs, like models that are trained on EDR image data. This is also useful when integrating Core ML into Core Image pipelines, since CI's internal image format is 16-bit RGBA by default. When passing that into a neural style transfer model with an (8-bit) RGBA image input/output type, conversions are always necessary (as demonstrated in WWDC2022-10027). If we could modify the models to use 16-bit RGBA images instead, no conversion would be necessary anymore. Thanks for the consideration!
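For illustration, the indirection we mean looks roughly like this once Float16 multi-arrays are available; a sketch in which the "styledImage" feature name and the layout of the output are assumptions about the model:

```swift
import CoreML

/// Sketch of the current workaround: get the high-precision result as an
/// MLMultiArray instead of an image and convert it manually.
func float16RGBAOutput(from model: MLModel, input: MLFeatureProvider) throws -> MLMultiArray? {
    let result = try model.prediction(from: input)
    guard let array = result.featureValue(for: "styledImage")?.multiArrayValue,
          array.dataType == .float16 else { return nil }

    // From here the values still have to be rearranged and copied into a
    // kCVPixelFormatType_64RGBAHalf CVPixelBuffer (or a CIImage) by hand;
    // this is exactly the conversion step that native 16-bit RGBA image
    // inputs/outputs would make unnecessary.
    return array
}
```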
Replies: 3 · Boosts: 0 · Views: 1.2k · Jun ’22

CIKernel ROI Callback Leak
The ROI callback that is passed to a CIKernel’s apply(…) method seems to be referenced beyond the render call and is not released properly. That also means that any captured state is retained longer than expected. I noticed this in a camera capture scenario because the capture session stopped delivering new frames after the initial batch. The output ran out of buffers because they were not properly returned to the pool. I was capturing the filter’s input image in the ROI callback like in this simplified case:

```swift
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    let roiCallback: CIKernelROICallback = { _, _ in
        return inputImage.extent
    }
    return Self.kernel.apply(extent: inputImage.extent, roiCallback: roiCallback, arguments: [inputImage])
}
```

While it is avoidable in this case, it is also very unexpected that the ROI callback is retained longer than needed for rendering the output image. Even when not capturing a lot of state, this would still unnecessarily accumulate over time. Note that calling ciContext.clearCaches() does actually seem to release the captured ROI callbacks, but I don’t want to do that after every frame since there are also resources worth caching. Is there a reason why Core Image caches the ROI callbacks beyond the rendering calls they are involved in?
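For anyone hitting the same frame-drop symptom: one way to avoid capturing the input image, as alluded to above, is to capture only its extent:

```swift
override var outputImage: CIImage? {
    guard let inputImage = inputImage else { return nil }
    // Capture only the CGRect, so the retained callback doesn't keep the
    // CIImage (and the camera pixel buffer backing it) alive between frames.
    let extent = inputImage.extent
    let roiCallback: CIKernelROICallback = { _, _ in extent }
    return Self.kernel.apply(extent: extent, roiCallback: roiCallback, arguments: [inputImage])
}
```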
Replies: 1 · Boosts: 0 · Views: 736 · Apr ’22

Support tiling in ML-based CIImageProcessorKernel
I would like to know if there are any best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to support for tiling.

We are using a CIImageProcessorKernel for integrating an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles the scaling of the input image to the size the model input requires. In the roi(forInput:arguments:outputRect:) method, the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling). In the process(with:arguments:output:) method, the kernel performs the prediction of the model on the input pixel buffer and then copies the result into the output buffer.

This works well until the filter chain gets more and more complex and input images become larger. At this point, Core Image wants to perform tiling to stay within its memory limits. It can't tile the input image of the kernel since we defined the ROI to be the whole image. However, it still calls the process(…) method multiple times, each time demanding a different tile/region of the output to be rendered. But since the model doesn't support producing only a part of the output, we effectively have to process the whole input image again for each output tile that should be produced.

We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to identify that the next call still belongs to the same rendering call, just for a different tile, instead of being a different rendering entirely, potentially with a different input image. If we had access to the digest that Core Image computes for an image during processing, we would be able to detect whether the input changed between calls to process(…). But this is not part of the CIImageProcessorInput.

What is the best practice here to avoid needless re-evaluation of the model? How does Apple handle that in their ML-based filters like CIPersonSegmentation?
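To make the setup concrete, here is a stripped-down sketch of the kernel described above; the model, its "image" feature name, and the copy into the output buffer are placeholders:

```swift
import CoreImage
import CoreML

/// Stripped-down sketch of the kernel described above; error handling is omitted.
final class MLFilterKernel: CIImageProcessorKernel {

    static var model: MLModel! // assumed to be loaded elsewhere

    override class func roi(forInput input: Int32, arguments: [String: Any]?, outputRect: CGRect) -> CGRect {
        // The model can't tile, so always request the full input image.
        return .infinite
    }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let inputBuffer = inputs?.first?.pixelBuffer else { return }

        // Called once per requested output tile. Because the ROI above is the whole
        // image, this full-size prediction is repeated for every tile unless we can
        // cache it, and there is currently no reliable key to cache on.
        let features = try MLDictionaryFeatureProvider(
            dictionary: ["image": MLFeatureValue(pixelBuffer: inputBuffer)])
        let prediction = try model.prediction(from: features)

        // Copy the region `output.region` of the predicted image into
        // `output.pixelBuffer` here (omitted).
        _ = prediction
    }
}
```

The wrapping CIFilter then applies it via MLFilterKernel.apply(withExtent:inputs:arguments:).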
Replies: 1 · Boosts: 0 · Views: 1k · Mar ’22

External build configuration for framework target
We have a Filters framework that contains many image processing filters (written in Swift and Metal) and the resources they require (like ML models and static images). But not every app we have uses all the filters in Filters. Rather, we want to build and bundle only the filters and resources that are needed by each app. The only way we can think of to achieve that is to create different framework targets in Xcode, one for each app. But that would require the Filters framework project to "know" all of its consumers (apps), which we would rather avoid, especially since the filters live in a separate repository. Is there a way to, for instance, pass some kind of configuration file to the framework that is used at build time to decide which files to build and bundle?
Replies: 0 · Boosts: 0 · Views: 799 · Nov ’21

Allow PHPicker access to original/unadjusted asset
The newish PHPicker is a great way to access the user's photo library without requiring full access permissions. However, there is currently no way to access the original or unadjusted version of an asset. The preferredAssetRepresentationMode of the PHPickerConfiguration only allows the options automatic, compatible, and current, where current still returns the asset with previous adjustments applied. The option only seems to impact potential asset transcoding. In contrast, when fetching PHAsset data, one can specify the PHImageRequestOptionsVersion unadjusted or original, which gives access to the underlying untouched image. It would be great to have these options in the PHPicker interface as well. The alternative would be to load the image through PHAsset via the identifier returned by the picker, but that would require full library access, which I want to avoid. Or is there another way to access the original image without these permissions?
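For reference, the PHAsset-based alternative mentioned above looks roughly like this; a sketch, and it only works once the user has granted library read access, which is exactly what we'd like to avoid:

```swift
import Photos
import PhotosUI

/// Sketch: load the unadjusted original of a picked asset via PHAsset.
func loadOriginalImageData(for result: PHPickerResult,
                           completion: @escaping (Data?) -> Void) {
    guard let identifier = result.assetIdentifier, // set only when the picker was created with a photo library
          let asset = PHAsset.fetchAssets(withLocalIdentifiers: [identifier], options: nil).firstObject
    else { completion(nil); return }

    let options = PHImageRequestOptions()
    options.version = .original          // or .unadjusted
    options.isNetworkAccessAllowed = true

    PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
        completion(data)
    }
}
```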
Replies: 2 · Boosts: 0 · Views: 888 · Oct ’21