
Resize Image Playground sheet
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground in a fixed size. Especially on macOS, this modal is rather small and doesn't utilize the available space efficiently. Is there a way to make the modal bigger, or to allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system... I guess this question applies to other system-provided sheets like the photo picker as well.
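For reference, a minimal sketch of the setup in question (the button label and the completion handling are placeholders):

```swift
import SwiftUI
import ImagePlayground

struct PlaygroundButton: View {
    @State private var isPresented = false

    var body: some View {
        Button("Create Image") { isPresented = true }
            // The sheet's content is provided by the system, so there is no
            // obvious place to attach presentationDetents or a frame to it.
            .imagePlaygroundSheet(isPresented: $isPresented) { url in
                // Handle the generated image at `url`.
            }
    }
}
```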
Replies: 1 · Boosts: 0 · Views: 147 · Activity: 1w
Image Playground not available for "Designed for iPad" apps?
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner and the following is logged to the console:

```
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]
```

ImagePlaygroundViewController.isAvailable returns true, however. In a "real" Mac Catalyst app, it's working. Just not when the app is actually an iPad app. Is this a bug?
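For context, a minimal sketch of the presentation path that triggers the behavior above; the presenting view controller and delegate conformance are assumptions:

```swift
import ImagePlayground
import UIKit

// Minimal presentation sketch; `viewController` is assumed to be a view
// controller that conforms to ImagePlaygroundViewController.Delegate.
func presentPlayground(from viewController: UIViewController & ImagePlaygroundViewController.Delegate) {
    guard ImagePlaygroundViewController.isAvailable else { return } // reports true in the case above
    let playground = ImagePlaygroundViewController()
    playground.delegate = viewController
    viewController.present(playground, animated: true)
}
```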
Replies: 3 · Boosts: 1 · Views: 385 · Activity: 2w
CGImageDestinationAddImageFromSource causes issues in iOS 18 / macOS 15
There seems to be an issue in iOS 18 / macOS 15 related to image thumbnail generation and/or HEIC. We are transcoding JPEG images to HEIC when they are loaded into our app (HEIC has a much lower memory footprint when loaded by Core Image, for some reason). We use Image I/O for that:

```swift
guard let source = CGImageSourceCreateWithURL(inputURL, nil),
      let destination = CGImageDestinationCreateWithURL(outputURL, UTType.heic.identifier as CFString, 1, nil)
else {
    throw <error>
}
let primaryImageIndex = CGImageSourceGetPrimaryImageIndex(source)
CGImageDestinationAddImageFromSource(destination, source, primaryImageIndex, nil)
```

When we use CGImageDestinationAddImageFromSource, we get the following warnings on the console:

```
createImage:1445: *** ERROR: bad image size (0 x 0) rb: 0
CGImageSourceCreateThumbnailAtIndex:5195: *** ERROR: CGImageSourceCreateThumbnailAtIndex[0] - 'HJPG' - failed to create thumbnail [-67] {alw:-1, abs: 1 tra:-1 max:4620}
writeImageAtIndex:1025: ⭕️ ERROR: '<app>' is trying to save an opaque image (4620x3466) with 'AlphaPremulLast'. This would unnecessarily increase the file size and will double (!!!) the required memory when decoding the image --> ignoring alpha.
```

It seems that CGImageDestinationAddImageFromSource is trying to extract/create a thumbnail, which fails somehow. I re-wrote the last part like this:

```swift
guard let primaryImage = CGImageSourceCreateImageAtIndex(source, primaryImageIndex, nil),
      let properties = CGImageSourceCopyPropertiesAtIndex(source, primaryImageIndex, nil)
else {
    throw <error>
}
CGImageDestinationAddImage(destination, primaryImage, properties)
```

This doesn't cause any warnings. An issue that might be related has been reported here. I've also heard from others having issues with CGImageSourceCreateThumbnailAtIndex.
Replies: 0 · Boosts: 0 · Views: 203 · Activity: 3w
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size in our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of the output frames, not the size of the source frames that come into the composition's handler block. For example:

```swift
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
```

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be way better if AVFoundation could decode the video frames in the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
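For illustration, a sketch of the in-pipeline downscaling workaround mentioned above; `applyFilters` stands in for the app's Core Image pipeline. The frames still arrive at full size, so this only reduces the cost of the filter chain, not of decoding:

```swift
import AVFoundation
import CoreImage

// Sketch: downscale the full-size source frame before running the filter chain.
// `applyFilters` is a hypothetical placeholder for the actual filter pipeline.
func makeComposition(for asset: AVAsset,
                     targetSize: CGSize,
                     applyFilters: @escaping (CIImage) -> CIImage) -> AVMutableVideoComposition {
    let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
        let source = request.sourceImage // full-size frame from the decoder
        let scale = min(targetSize.width / source.extent.width,
                        targetSize.height / source.extent.height,
                        1.0)
        let downscaled = source.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
        request.finish(with: applyFilters(downscaled), context: nil)
    })
    composition.renderSize = targetSize
    return composition
}
```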
Replies: 0 · Boosts: 1 · Views: 227 · Activity: Nov ’24
AssistantIntent for Photos without library access
The new .photos AssistantSchema for intents allows integrating App Intents for Photos-related actions with Apple Intelligence. I was wondering if it would be possible to create intents that do not require full library access. Our app supports loading images from Photos via the PHPicker, which doesn't require any user permission. Now we want to support the .photos.openAsset schema in an app intent to allow interactions like "Open this image in BeCasso and apply preset X". Would that be possible without full library access?
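For context, a rough sketch of the kind of intent in question, following the assistant-schema macro pattern; the OpenIntent conformance, the `target` property, and the AssetEntity type are assumptions about what the .photos.openAsset schema expects:

```swift
import AppIntents

// Rough sketch only: the exact requirements of the .photos.openAsset schema
// are assumed here. AssetEntity would be the app's own entity conforming to
// the .photos.asset schema.
@AssistantIntent(schema: .photos.openAsset)
struct OpenAssetIntent: OpenIntent {
    var target: AssetEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Open `target` in the app; the open question is whether this can be
        // backed by picker-provided assets instead of full library access.
        return .result()
    }
}
```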
Replies: 0 · Boosts: 0 · Views: 462 · Activity: Jul ’24
Transition from "Designed for iPad" to "Mac Catalyst"
Our apps can currently be installed on Apple Silicon Macs via the iPad app on Mac feature (“Designed for iPad”). Now we are working on “proper” (universal) Catalyst-based Mac apps that will be available on the Mac App Store. How does the transition work for users that currently have the iPad version installed? Will they automatically update to the Mac Catalyst app once it’s available, or do they need to re-install the app from the Mac App Store?
Replies: 1 · Boosts: 1 · Views: 513 · Activity: Jul ’24
IOSurface vs. IOSurfaceRef on Catalyst
I have an IOSurface and I want to turn that into a CIImage. However, the constructor of CIImage takes an IOSurfaceRef instead of an IOSurface. On most platforms, this is not an issue because the two types are toll-free bridgeable... except for Mac Catalyst, where this fails. I observed the same back in Xcode 13 on macOS. But there I could force-cast the IOSurface to an IOSurfaceRef:

```swift
let image = CIImage(ioSurface: surface as! IOSurfaceRef)
```

This cast fails at runtime on Catalyst. I found that unsafeBitCast(surface, to: IOSurfaceRef.self) actually works on Catalyst, but it feels very wrong. Am I missing something? Why aren't the types bridgeable on Catalyst? Also, there should ideally be an init for CIImage that takes an IOSurface instead of a ref.
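For reference, a small sketch of the unsafeBitCast workaround described above; it compiles and runs on Catalyst but bypasses the type system entirely, so it should be treated as a stopgap:

```swift
import CoreImage
import IOSurface

// Workaround sketch: reinterpret the IOSurface object as an IOSurfaceRef.
// This avoids the failing bridge cast on Mac Catalyst, but is not a proper bridge.
func makeCIImage(from surface: IOSurface) -> CIImage {
    let ref = unsafeBitCast(surface, to: IOSurfaceRef.self)
    return CIImage(ioSurface: ref)
}
```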
Replies: 2 · Boosts: 1 · Views: 633 · Activity: Jun ’24
TipKit: explicit vs. implicit iCloud sync
With iOS 18, TipKit got explicit support for syncing tip state via iCloud. However, before that, TipKit already did iCloud syncing implicitly, as far as I know. How does the new explicit syncing relate to the previous mechanism? Do we have to enable iCloud syncing manually now to retain the functionality in iOS 18? Is there a way to sync with the state that was already stored by TipKit in iCloud on iOS 17?
Replies: 1 · Boosts: 0 · Views: 550 · Activity: Jun ’24
Camera intrinsic matrix for single photo capture
Is it possible to get the camera intrinsic matrix for a captured single photo on iOS?

I know that one can get the cameraCalibrationData from an AVCapturePhoto, which also contains the intrinsicMatrix. However, this is only provided when using a constituent (i.e. multi-camera) capture device and setting virtualDeviceConstituentPhotoDeliveryEnabledDevices to multiple devices (or enabling isDualCameraDualPhotoDeliveryEnabled on older iOS versions). Then photoOutput(_:didFinishProcessingPhoto:) is called multiple times, delivering one photo for each camera specified. Those then contain the calibration data. As far as I know, there is no way to get the calibration data for a normal, single-camera photo capture.

I also found that one can set isCameraIntrinsicMatrixDeliveryEnabled on a capture connection that leads to an AVCaptureVideoDataOutput. The buffers that arrive at the delegate of that output then contain the intrinsic matrix via the kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix metadata. However, this requires adding another output to the capture session, which feels quite wasteful just for getting this piece of metadata. Also, I would somehow need to figure out which buffer was temporally closest to when the actual photo was taken.

Is there a better, simpler way to get the camera intrinsic matrix for a single photo capture? If not, is there a way to calculate the matrix based on the image's metadata?
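For reference, a sketch of the video-data-output route described above; the AVCaptureVideoDataOutput is assumed to already be added to a running capture session:

```swift
import AVFoundation
import CoreMedia
import simd

// Enable intrinsic-matrix delivery on the connection of an existing
// AVCaptureVideoDataOutput (assumed to already be attached to the session).
func enableIntrinsicsDelivery(on videoOutput: AVCaptureVideoDataOutput) {
    if let connection = videoOutput.connection(with: .video),
       connection.isCameraIntrinsicMatrixDeliverySupported {
        connection.isCameraIntrinsicMatrixDeliveryEnabled = true
    }
}

// In the AVCaptureVideoDataOutputSampleBufferDelegate callback, read the
// intrinsics from the sample buffer attachment.
func cameraIntrinsics(from sampleBuffer: CMSampleBuffer) -> matrix_float3x3? {
    guard let data = CMGetAttachment(sampleBuffer,
                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                     attachmentModeOut: nil) as? Data else {
        return nil
    }
    var matrix = matrix_float3x3()
    _ = withUnsafeMutableBytes(of: &matrix) { data.copyBytes(to: $0) }
    return matrix
}
```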
Replies: 0 · Boosts: 0 · Views: 837 · Activity: Feb ’24
Sharing a JPEG via Action or Share Extension fails in Photos on macOS
We have a Share Extension that fails in Photos on macOS when trying to share a JPEG image for the following reason:

From the NSItemProvider we get from NSExtensionItem.attachments, we try to load the image using loadFileRepresentation(forTypeIdentifier: “public.image”, completionHandler: …). This fails for .jpeg images in the library. There seems to be a mismatch between the expected and actual file extension internally. Here is the log:

```
Error copying file type public.image. Error: Error Domain=NSItemProviderErrorDomain Code=-1000 "Cannot load representation of type public.jpeg" UserInfo={NSLocalizedDescription=Cannot load representation of type public.jpeg, NSUnderlyingError=0x1527c1a80 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg." UserInfo={NSLocalizedDescription=Cannot copy file at URL file:///Users/frank/Library/Containers/com.apple.Photos/Data/tmp/TemporaryItems/ShareKit-Exports/7CCFA760-AAC9-42B0-812D-68F051ED1543/F912E593-2BE5-4E70-86AB-7657A40657E5/IMG_3517.jpg., NSUnderlyingError=0x152789670 {Error Domain=NSItemProviderErrorDomain Code=-1 "Cannot create a temporary file. Error: Undefined error: 0" UserInfo={NSLocalizedDescription=Cannot create a temporary file. Error: Undefined error: 0}}}}}
```

In the specified folder, there is an image, however, it’s named IMG_3517.jpeg, not IMG_3517.jpg. This seems to be a bug in Photos’ item provider implementation.

If we use loadObject(ofClass: URL.self, completionHandler: …) instead, we get the correct .jpeg URL in the completion handler.
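For illustration, a minimal sketch of the workaround mentioned in the last sentence; `itemProvider` is assumed to come from NSExtensionItem.attachments:

```swift
import Foundation

// Workaround sketch: load the attachment as a URL instead of a file
// representation; this returns the correctly named .jpeg file.
func loadImageURL(from itemProvider: NSItemProvider) {
    _ = itemProvider.loadObject(ofClass: URL.self) { url, error in
        if let url {
            // `url` points to the exported image in Photos' temporary folder.
        } else if let error {
            print("Failed to load URL: \(error)")
        }
    }
}
```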
Replies: 1 · Boosts: 0 · Views: 863 · Activity: Jan ’24
arrowEdge of popoverTip not working anymore on iOS 17.1
In iOS 17.1 (and 17.2 beta), the arrowEdge parameter of the SwiftUI popoverTip doesn't work anymore. This code

```swift
button
    .popoverTip(tip, arrowEdge: .bottom)
```

places the popover's arrow at the bottom edge on iOS 17.0, but not on 17.1 and up. I checked permittedArrowDirections of the corresponding UIPopoverPresentationController (via the Memory Graph): It's .down on iOS 17.0 and .any (the default) on 17.1. It seems the parameter of popoverTip is not properly propagated to the popover controller anymore.
Replies: 2 · Boosts: 1 · Views: 1.2k · Activity: Nov ’23
How to contribute to Journaling Suggestions?
Apple's new Journal app was introduced with the iOS 17.2 beta. In the release notes, the following is mentioned: If your app donates activities or interactions to SiriKit or CallKit or if someone authorizes your app to save data to HealthKit, some data might show up as part of Journaling Suggestions. Is there any documentation on how this works exactly? What kind of activities can be featured in Journal? How does the system decide what to feature? For instance, if I have an app that allows the user to create art images, can I somehow make those images appear in the Journaling Suggestions?
Replies: 1 · Boosts: 7 · Views: 1k · Activity: Oct ’23
SwiftUI: popoverTip prevents other modal views from appearing
I want to show a tip on a button that will open a modal sheet on tap. The state of whether the sheet should be presented is held in an ObservableObject view model:

```swift
@MainActor
class ViewModel: ObservableObject {
    @Published var showSheet = false
}

struct ContentView: View {
    var tip = PopoverTip()
    @ObservedObject var viewModel = ViewModel()

    var body: some View {
        Button(action: {
            viewModel.showSheet.toggle()
        }, label: {
            Text("Button")
        })
        .popoverTip(tip)
        .sheet(isPresented: $viewModel.showSheet) {
            Text("Sheet")
        }
    }
}
```

Here is the issue: When the tip is dismissed by tapping outside of it instead of tapping the close button, the tip will always reappear when tapping the button instead of showing the sheet. So effectively there is no way of triggering the actual button action; the tip will always pop up again and prevent the sheet from appearing.

This is only an issue when using an ObservableObject to track the sheet state. When using a @State var showSheet: Bool inside the view itself instead, the sheet is shown as expected when tapping the button.

It seems to be an issue of timing: Attempting to show the sheet somehow causes the view to be re-evaluated, which causes the tip to reappear (since it wasn't dismissed via the close action). And since the tip is presented using modal presentation, the sheet can't be presented anymore. Is this a bug, or is there a simple way to avoid this issue?
Replies: 3 · Boosts: 3 · Views: 1.7k · Activity: Sep ’23
Popover tips not affected by tip view modifiers
Tips presented using the popoverTip view modifier can't be styled using other tip view modifiers (as of beta 8). For instance, the last two modifiers don't have any effect here:

```swift
Image(systemName: "wand.and.stars")
    .popoverTip(tip)
    .tipBackground(.red)
    .tipCornerRadius(30)
```

Whereas applying the same modifiers to a TipView changes its look:

```swift
TipView(tip, arrowEdge: .bottom)
    .tipBackground(.red)
    .tipCornerRadius(30)
```

Is this intended behavior? How can we change the appearance of popover tips?
Replies: 2 · Boosts: 0 · Views: 1.2k · Activity: Sep ’23
Issues with new MLE5Engine in Core ML
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption is increasing rapidly with each new frame.

Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

```
0  _malloc_zone_malloc_instrumented_or_legacy
1  _CFRuntimeCreateInstance
2  CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3  CVPixelBuffer::alloc(_CFAllocator const*)
4  CVPixelBufferCreate
5  +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6  MLE5OutputPixelBufferFeatureValueByCopyingTensor
7  -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8  -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9  __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)
```

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
Replies: 4 · Boosts: 0 · Views: 1.9k · Activity: Aug ’23