I'm trying to archive my app via Xcode Cloud and auto-deploy to TestFlight but it fails during "Prepare Build for App Store Connect" with the following error message:
ITMS-90334: Invalid Code Signature Identifier. The identifier "CoreImageExtensions-dynamic-555549444e09e22796a23eadb2704bf219d5c1fa" in your code signature for "CoreImageExtensions-dynamic" must match its Bundle Identifier "CoreImageExtensions-dynamic"
CoreImageExtensions-dynamic is a dynamic (.dynamic) library target of a Swift package that we are using.
It seems that at some point a UUID is appended to the library's code signature identifier, which breaks code signing. When archiving and uploading the app directly in Xcode, everything works just fine.
Any idea why this is happening and how I could fix it?
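For context, the dependency declares a dynamic library product roughly like this (a sketch of what we assume the package manifest looks like; the product name is taken from the error message, the rest is illustrative):

```swift
// swift-tools-version:5.5
// Package.swift of the dependency (assumed structure, for illustration only).
import PackageDescription

let package = Package(
    name: "CoreImageExtensions",
    products: [
        // The dynamic variant of the library; this is the target that fails validation in Xcode Cloud.
        .library(name: "CoreImageExtensions-dynamic", type: .dynamic, targets: ["CoreImageExtensions"]),
    ],
    targets: [
        .target(name: "CoreImageExtensions"),
    ]
)
```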
We are implementing a CIImageProcessorKernel that uses an MTLRenderCommandEncoder to perform some mesh-based rendering into the output’s metalTexture. This works on iOS but crashes on macOS because the texture’s usage does not include renderTarget on those devices—though not always. Sometimes the output’s texture can be used as a renderTarget, sometimes not. It seems CI’s internal texture cache contains both kinds of textures, and which one we get depends on the order in which the filters are executed.
So far we have only observed this on macOS (on different Macs, including M1 Macs and the macOS 12 beta) but not on iOS (also not on an M1 iPad).
We would expect to always be able to use the output’s texture as render target so we can use it as a color attachment for the render pass.
Is there some way to configure a CIImageProcessorKernel to always get renderTarget output textures? Or do we really need to render into a temporary texture and blit the result into the output texture? This would be a huge waste of memory and time…
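For illustration, here is a stripped-down version of what our kernel does (the actual mesh rendering is omitted):

```swift
import CoreImage
import Metal

// Stripped-down sketch of our kernel; the actual mesh rendering is omitted.
class MeshRenderKernel: CIImageProcessorKernel {
    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let commandBuffer = output.metalCommandBuffer,
              let texture = output.metalTexture else { return }

        // On macOS this check fails intermittently: the texture CI hands us
        // sometimes lacks the .renderTarget usage flag.
        assert(texture.usage.contains(.renderTarget),
               "output texture cannot be used as a color attachment")

        let descriptor = MTLRenderPassDescriptor()
        descriptor.colorAttachments[0].texture = texture
        descriptor.colorAttachments[0].loadAction = .clear
        descriptor.colorAttachments[0].storeAction = .store

        guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }
        // ... encode the mesh draw calls here ...
        encoder.endEncoding()
    }
}
```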
Adding to my previous post: it would be great if the PHPicker indicated when an asset is only available in iCloud and needs to be downloaded first. This would give the user a hint that loading might take longer and cause network traffic.
Right now, it's unclear to the user (and to us developers) that an asset needs to be downloaded. A small cloud icon would help a lot, I think. (FB9221095)
Thanks for considering!
We have a lot of users reporting that they can't load images into our app. They just "see the spinner spin indefinitely". We think we have now found the reason:
When trying to load an asset via the PHPickerViewController that hasn't been downloaded yet while there is no active internet connection, the item provider's loadFileRepresentation method will just stall without reporting any progress or error. The timeout for this seems to be 5 minutes, which is way too long.
The same is true if the user has disabled cellular data for Photos and attempts to load a cloud asset while not on Wi-Fi.
Steps to reproduce:
have a photo in iCloud that is not yet downloaded
activate Airplane Mode
open the picker and select that photo
see when loadFileRepresentation will return
Since it is clear that without an internet connection the asset can’t be downloaded, I would hope to be informed via a delegate method of the picker or the loadFileRepresentation callback that there was an error trying to load the asset. (FB9221090)
Right now we are attempting to work around this by adding an extra timer and a network check. But this will not catch the "no cellular data allowed" case.
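For reference, our workaround currently looks roughly like this (the 10-second timeout and the helper name are our own choices):

```swift
import PhotosUI
import UniformTypeIdentifiers

// Sketch of the workaround: race loadFileRepresentation against a timer so the
// user isn't stuck with a spinner for 5 minutes. (The real code also copies the
// file out of the handler before using the URL, and checks network reachability.)
func loadImage(from result: PHPickerResult,
               timeout: TimeInterval = 10,
               completion: @escaping (URL?) -> Void) {
    var didFinish = false
    let finish: (URL?) -> Void = { url in
        guard !didFinish else { return }
        didFinish = true
        completion(url)
    }

    _ = result.itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.image.identifier) { url, _ in
        DispatchQueue.main.async { finish(url) }
    }

    // Give up after `timeout` seconds instead of waiting for the 5-minute stall.
    DispatchQueue.main.asyncAfter(deadline: .now() + timeout) { finish(nil) }
}
```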
Please consider some callback mechanism to the API so we can inform the user what the problem might be. Thanks!
Our account holder got the message that we can now use Xcode Cloud via the beta. However, the developers in our organization are not able to access any Xcode Cloud features. Creating a workflow fails with the message
This operation couldn’t be completed.
Is there any additional setup required to get Xcode Cloud running on a developer machine?
The new VNGeneratePersonSegmentationRequest is a stateful request, i.e. it keeps state and improves the segmentation mask generation for subsequent frames.
There is also the new CIPersonSegmentationFilter as a convenient way for using the API with Core Image. But since the Vision request is stateful, I was wondering how this is handled by the Core Image filter.
Does the filter also keep state between subsequent calls? And how is VNStatefulRequest's requirement that "the request requires the use of CMSampleBuffers with timestamps as input" fulfilled?
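To make the question more concrete, this is how I picture the two paths (an illustrative sketch of my understanding, not confirmed behavior):

```swift
import CoreMedia
import Vision
import CoreImage.CIFilterBuiltins

// Vision path: the stateful request is fed CMSampleBuffers carrying timestamps,
// so it can improve the mask across frames.
func visionMask(for sampleBuffer: CMSampleBuffer,
                request: VNGeneratePersonSegmentationRequest) throws -> CVPixelBuffer? {
    let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, options: [:])
    try handler.perform([request]) // reuse the same request instance across frames
    return request.results?.first?.pixelBuffer
}

// Core Image path: the filter only receives a CIImage, which carries no timestamp,
// hence the question of how the stateful behavior is handled internally.
func coreImageMask(for frame: CIImage) -> CIImage? {
    let filter = CIFilter.personSegmentation()
    filter.inputImage = frame
    return filter.outputImage
}
```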
In the "Explore Core Image kernel improvements" session, David mentioned that it is now possible to compile [[stitchable]] CI kernels at runtime. However, I fail to get it working.
The kernel requires the #import of <CoreImage/CoreImage.h> and linking against the CoreImage Metal library. But I don't know how to link against the library when compiling my kernel at runtime. Also, according to the Metal Best Practices Guide, "the #include directive is not supported at runtime for user files."
Any guidance on how the runtime compilation works is much appreciated! 🙂
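For reference, this is roughly what I'm trying (assuming CIKernel.kernels(withMetalString:) is the intended entry point for runtime compilation):

```swift
import CoreImage

// A minimal [[stitchable]] kernel, compiled at runtime (sketch; this fails for me
// because of the CoreImage.h include and the missing link to the CI Metal library).
let source = """
#include <CoreImage/CoreImage.h>

[[stitchable]] half4 invertColor(coreimage::sample_t s) {
    return half4(1.0 - s.r, 1.0 - s.g, 1.0 - s.b, s.a);
}
"""

do {
    let kernels = try CIKernel.kernels(withMetalString: source)
    print("compiled kernels:", kernels)
} catch {
    print("runtime compilation failed:", error)
}
```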
I want to play a video and process it using Core Image filters. I have the following setup:
the video as an AVAsset
an AVPlayer for controlling the video playback
the AVAsset is passed via AVPlayerItem to the player
the Core Image filters are applied using an AVMutableVideoComposition that was initialized with init(asset:applyingCIFiltersWithHandler:) and assigned to the player item
an AVPlayerItemVideoOutput is used to get the processed frames using copyPixelBuffer(forItemTime:itemTimeForDisplay:)
This works as expected. However, I'm observing strange behavior when seeking backwards on the player using seek(to:completionHandler:): the AVPlayerItemVideoOutput suddenly stops delivering new pixel buffers and hasNewPixelBuffer(forItemTime:) returns false. But this only happens when the filters applied in the video composition are a bit more expensive. When using a very simple filter or no filter at all, seeking works as expected.
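For context, a condensed sketch of the setup described above (videoURL and the filter are placeholders for our actual asset and pipeline):

```swift
import AVFoundation
import CoreImage
import QuartzCore

// Condensed sketch of the setup; videoURL and the filter are placeholders.
let videoURL = URL(fileURLWithPath: "/path/to/video.mov")
let asset = AVAsset(url: videoURL)

let composition = AVMutableVideoComposition(asset: asset) { request in
    // Stand-in filter; the seek issue only shows up with more expensive pipelines.
    let filtered = request.sourceImage.applyingFilter("CIPhotoEffectNoir")
    request.finish(with: filtered, context: nil)
}

let item = AVPlayerItem(asset: asset)
item.videoComposition = composition

let output = AVPlayerItemVideoOutput(pixelBufferAttributes: nil)
item.add(output)

let player = AVPlayer(playerItem: item)
player.play()

// Driven by a display link: poll the output for processed frames.
let time = output.itemTime(forHostTime: CACurrentMediaTime())
if output.hasNewPixelBuffer(forItemTime: time) {
    let pixelBuffer = output.copyPixelBuffer(forItemTime: time, itemTimeForDisplay: nil)
    _ = pixelBuffer // hand off to the renderer in the real code
}
```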
Inspired by this technical note (https://developer.apple.com/library/archive/qa/qa1966/_index.html), I found a workaround: re-assigning the composition to the player item after the seek finishes. But this feels very hacky—especially since seeking works fine for simple filter pipelines without it.
Did anybody encounter this before? Is there anything I need to configure differently?
I have the following setup:
An AVCaptureSession with an AVCaptureVideoDataOutput delivers video frames from the camera.
OpenGL textures are created from the CVPixelBuffers using a CVOpenGLESTextureCache.
Some OpenGL-based image processing is performed on the frames (with many intermediate steps) in a separate queue.
The final texture of the processing pipeline is rendered into a CAEAGLLayer on the main thread (with proper context and share group handling).
This worked very well up to iOS 13. Now in iOS 14 the AVCaptureVideoDataOutput suddenly stops delivering new frames (to the delegate) after ~4 sec. of capture—without any warning or log message.
Some observations:
The AVCaptureSession is still running (isRunning is true, isInterrupted is false).
All connections between the camera device and output are still there and active.
The capture indicator (green circle in the status bar, new in iOS 14) is still there.
The output's delegate does not report any frame drops.
When I perform an action that causes the session to be re-configured (like switching to the front camera), the output will start delivering frames again for ~4 sec. and then stop again.
When I don't process and display the frames, the output continues to deliver frames without interruption.
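For completeness, a stripped-down sketch of the capture side (the processing and display steps are reduced to a comment; names are illustrative):

```swift
import AVFoundation

// Minimal sketch of the capture side; FrameProcessor stands in for our pipeline object.
final class FrameProcessor: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // create an OpenGL texture from the pixel buffer, process, display …
    }
}

let session = AVCaptureSession()
let processor = FrameProcessor()
let captureQueue = DispatchQueue(label: "camera.capture")

if let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
   let input = try? AVCaptureDeviceInput(device: camera) {
    session.addInput(input)
}

let output = AVCaptureVideoDataOutput()
output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(processor, queue: captureQueue)
session.addOutput(output)
session.startRunning()
```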
I've been debugging this for a while now and I'm pretty clueless. Any hints or ideas on what might cause this behavior in iOS 14 are much appreciated! 🙂
Extending the PencilKit APIs to expose the inner data structures was a great move! And for apps like the "Handwriting Tutor" sample app this is enough.
However, I'd love to implement a custom drawing engine based on PKDrawing since it provides a lot of functionality out of the box (user interaction through PKCanvasView, de-/serialization, spline interpolation). But for a custom renderer, two key parts are missing:
Custom inks (FB8261616), so we can define custom brushes that render differently than the three system styles.
Detecting changes while the user is drawing (FB8261554), otherwise we can't draw their current stroke on screen.
I know this was mentioned here before, but I wanted to emphasize that those two features would enable us to implement a full custom render engine based on PencilKit.
Thanks for considering!
Some of the filters that can be created using the CIFilterBuiltins extensions cause runtime exceptions when assigning an inputImage to them:
NSInvalidArgumentException: "-[CIComicEffect setInputImage:]: unrecognized selector sent to instance ..."
I tried a few and found this to be the case for CIFilter.comicEffect(), CIFilter.cmykHalftone(), and CIFilter.pointillize() (probably more). Other filters like CIFilter.gaussianBlur() work fine.
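A minimal snippet along the lines of what we do (any input image will do):

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Minimal example; any input image triggers the exception on affected filters.
let input = CIImage(color: .red).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

let blur = CIFilter.gaussianBlur()
blur.inputImage = input // works

let comic = CIFilter.comicEffect()
comic.inputImage = input // throws: unrecognized selector -[CIComicEffect setInputImage:]
```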
This happens in Xcode 11.5 and Xcode 12 beta 2 on iOS and macOS.
I already filed feedback for this (FB8013603).
From the iOS 13 release notes:
"Metal CIKernel instances support arguments with arbitrarily structured data."
How does this work? Is there any example code for this?
So far I have only been able to pass float-typed literals, CIVectors, NSNumbers, CIImages, and CISamplers into kernels as arguments when calling apply.
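For context, this is roughly how we call the kernel today (the kernel, image, and parameter values are illustrative):

```swift
import CoreImage

// What currently works for us: scalar, vector, image, and sampler arguments
// passed via apply(extent:roiCallback:arguments:). How to pass an arbitrary
// struct or data blob instead is the open question.
func render(with kernel: CIKernel, image: CIImage) -> CIImage? {
    kernel.apply(extent: image.extent,
                 roiCallback: { _, rect in rect },
                 arguments: [CISampler(image: image), CIVector(x: 1, y: 0), NSNumber(value: 0.5)])
}
```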
In his talk "Build Metal-based Core Image kernels with Xcode", David presents the build phases necessary to compile Core Image Metal files into Metal libraries, that can be used to instantiate CIKernels.
There is a 1-to-1 mapping between a .ci.metal file and its corresponding .ci.metallib file. I also found that the Metal linker doesn't allow linking more than one .air file into one library when building for Core Image.
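For reference, we load each per-file library roughly like this (file and function names are illustrative):

```swift
import CoreImage

// How we load a kernel from one of the per-file libraries.
func loadColorKernel() throws -> CIColorKernel {
    guard let url = Bundle.main.url(forResource: "MyKernel", withExtension: "ci.metallib") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let data = try Data(contentsOf: url)
    return try CIColorKernel(functionName: "myColorKernel", fromMetalLibraryData: data)
}
```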
This works fine until I want to extract some common code (such as math functions) into a separate file to be used by multiple kernels. As soon as I have two color kernels (that get concatenated during filter execution) that use the same shared functions, the runtime Metal compiler crashes (I assume because of duplicate symbols in the merged libraries).
Is there a good way to extract common functionality to be usable by multiple kernels in a pipeline?
In his talk, David mentioned the updated documentation for built-in Core Image filters, which got me very excited. However, I was not able to find it online or in Xcode.
I just found out that it's only visible when switching the documentation language to Objective-C (for instance: https://developer.apple.com/documentation/coreimage/cifilter/3228331-gaussianblurfilter?language=objc). From the screenshot in the presentation it looks like it should be available for Swift as well, though.
It also seems that very few filters are documented yet.
Is support for Swift and more documentation coming before release? That would be very helpful!
One of the announcements from WWDC 2020 was that Family Sharing will be available for IAP and subscriptions:
"And in addition to shared family app purchases, the App Store now supports Family Sharing for subscriptions and in-app purchases. This is great for developers who offer content for the whole family to enjoy."
How can we enable this, and is it only available in iOS/iPadOS 14?