Posts

0 Replies
429 Views
Are there plans to expose the cinematic frames (e.g. disparity) to an AVAsynchronousCIImageFilteringRequest? I want to use my own lens blur shader on the cinematic frames. Right now it looks like the cinematic frames are only available in an AVAsynchronousVideoCompositionRequest, like this:

guard let sourceFrame = SourceFrame(request: request, cinematicCompositionInfo: cinematicCompositionInfo) else { return }
let disparity = sourceFrame.disparityBuffer
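
For reference, a minimal sketch of applying a disparity-driven blur inside the custom compositor path described above. It assumes SourceFrame and cinematicCompositionInfo are the Cinematic sample-code types referenced in the post, that ciContext is a stored CIContext on the compositor, and it substitutes the built-in CIMaskedVariableBlur for a custom lens-blur kernel:

import AVFoundation
import CoreImage

// Sketch only: assumes this lives inside a custom AVVideoCompositing implementation
// that owns `cinematicCompositionInfo` and a reusable `ciContext` (CIContext).
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
    guard let trackID = request.sourceTrackIDs.first?.int32Value,
          let colorBuffer = request.sourceFrame(byTrackID: trackID),
          let sourceFrame = SourceFrame(request: request,
                                        cinematicCompositionInfo: cinematicCompositionInfo),
          let outputBuffer = request.renderContext.newPixelBuffer() else {
        request.finish(with: NSError(domain: "CinematicCompositor", code: -1))
        return
    }

    // Wrap the color frame and the disparity map as CIImages.
    let image = CIImage(cvPixelBuffer: colorBuffer)
    let disparity = CIImage(cvPixelBuffer: sourceFrame.disparityBuffer)

    // Drive the blur with the disparity map; a custom lens-blur CIKernel
    // could be swapped in here instead of CIMaskedVariableBlur.
    let blurred = image.applyingFilter("CIMaskedVariableBlur", parameters: [
        "inputMask": disparity,
        kCIInputRadiusKey: 10.0
    ])

    ciContext.render(blurred, to: outputBuffer)
    request.finish(withComposedVideoFrame: outputBuffer)
}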
0 Replies
1.2k Views
In iOS 16, will there be an open dialog that combines Photos and Files? For any creative app, this seems like a common scenario: I want to open a photo from either the Camera Roll or Files to edit in my cool app. I would envision the left toolbar showing sources such as On My iPad, iCloud Drive, and Camera Roll. Currently, it's a bit clunky and confusing for users to have two different choices just to open a photo for editing. Thanks! James
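
For context, a minimal sketch of the current split the post calls clunky: one picker for the photo library and a separate one for Files. EditorViewController is a hypothetical host view controller; loading of the picked item is omitted:

import UIKit
import PhotosUI
import UniformTypeIdentifiers

// Hypothetical host view controller showing the two separate pickers.
final class EditorViewController: UIViewController, PHPickerViewControllerDelegate, UIDocumentPickerDelegate {

    // Photos source (Camera Roll / photo library).
    func presentPhotoPicker() {
        var config = PHPickerConfiguration()
        config.filter = .images
        config.selectionLimit = 1
        let picker = PHPickerViewController(configuration: config)
        picker.delegate = self
        present(picker, animated: true)
    }

    // Files source (On My iPad / iCloud Drive / other file providers).
    func presentDocumentPicker() {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.image])
        picker.delegate = self
        present(picker, animated: true)
    }

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)
        // Load the picked image via results.first?.itemProvider here.
    }

    func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
        // Open the picked file URL here.
    }
}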
0 Replies
491 Views
I'm using CoreML for image segmentation. I have a VNCoreMLRequest to run a model that returns an MLMultiArray. To accelerate processing the model output, I use MPSNNReduceFeatureChannelsArgumentMax to reduce the multiarray output to a 2D array, which I then convert to a grayscale image. This works great on iOS, but when running on Mac as a Catalyst build, the output 2D array is all zeros. I'm running Xcode 12.2 beta 2 (12B5025f) on an iMac Pro. I'm not seeing any runtime errors. MPSNNReduceFeatureChannelsArgumentMax appears to not work on Mac Catalyst. I'm able to reduce the channels directly on the CPU by looping through all the array dimensions, but it's very slow. This proves the model output works; just the Metal reduce-features step fails. Anyone else using CoreML and Catalyst? Here's the bit of code that doesn't work:

let buffer = self.queue.makeCommandBuffer()
let filter = MPSNNReduceFeatureChannelsArgumentMax(device: self.device)
filter.encode(commandBuffer: buffer!, sourceImage: probs, destinationImage: classes)

// add a callback to handle the buffer's completion and commit the buffer
buffer?.addCompletedHandler({ (_buffer) in
    let argmax = try! MLMultiArray(shape: [1, softmax.shape[1], softmax.shape[2]], dataType: .float32)
    classes.readBytes(argmax.dataPointer,
                      dataLayout: .featureChannelsxHeightxWidth,
                      imageIndex: 0)

    // unmap the discrete segmentation to RGB pixels
    guard let mask = codesToMask(argmax) else {
        return
    }

    // display image in view
    DispatchQueue.main.async {
        self.imageView.image = mask
    }
})
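
For reference, a minimal sketch of the CPU fallback mentioned above (an argmax over the feature-channel axis of the MLMultiArray). It assumes the model output is laid out as [channels, height, width] in Float32; the shape indices would need to match the actual model:

import CoreML

// Hypothetical CPU fallback: argmax over the channel axis of a CoreML
// segmentation output shaped [channels, height, width] (Float32).
func argmaxChannels(_ probs: MLMultiArray) throws -> MLMultiArray {
    let channels = probs.shape[0].intValue
    let height   = probs.shape[1].intValue
    let width    = probs.shape[2].intValue

    let classes = try MLMultiArray(shape: [1, NSNumber(value: height), NSNumber(value: width)],
                                   dataType: .float32)
    let src = probs.dataPointer.assumingMemoryBound(to: Float32.self)
    let dst = classes.dataPointer.assumingMemoryBound(to: Float32.self)

    // Use the source array's strides so the indexing matches its memory layout.
    let channelStride = probs.strides[0].intValue
    let rowStride     = probs.strides[1].intValue
    let colStride     = probs.strides[2].intValue

    for y in 0..<height {
        for x in 0..<width {
            var bestClass = 0
            var bestScore = -Float.greatestFiniteMagnitude
            for c in 0..<channels {
                let score = src[c * channelStride + y * rowStride + x * colStride]
                if score > bestScore {
                    bestScore = score
                    bestClass = c
                }
            }
            dst[y * width + x] = Float32(bestClass)
        }
    }
    return classes
}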
2 Replies
1.7k Views
I opened up a basic iOS project in Xcode 12 on the DTK. In the scheme, I don't see My Mac (Designed for iPad) as a run destination, as shown in the WWDC video (session 10114). The DTK comes with Xcode 12 for macOS beta (12A8158a). Is this the right Xcode for testing an iOS app running on the DTK? Note: I'm trying to build with the iOS SDK, not create a Catalyst target, to test on Apple silicon.
0 Replies
736 Views
As of iOS 13 beta 6, a UIViewController's modalPresentationStyle (UIModalPresentationStyle) defaults to .pageSheet instead of .fullScreen. This appears to break any touch control such as touchesMoved, since the .pageSheet style consumes touches to drag the sheet up and down. Solution: use .fullScreen when touch control is needed on the presented view.
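
A minimal sketch of that workaround; DrawingViewController and HostViewController are hypothetical names for a presented screen that relies on raw touch handling and its presenter:

import UIKit

// Hypothetical presented screen that relies on raw touch handling.
final class DrawingViewController: UIViewController {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        // custom drawing / drag handling here
    }
}

final class HostViewController: UIViewController {
    // Workaround from the post: opt back into full-screen presentation so the
    // sheet's drag gesture doesn't swallow touches on the presented view.
    func presentDrawingScreen() {
        let drawingVC = DrawingViewController()
        drawingVC.modalPresentationStyle = .fullScreen   // iOS 13 defaults to .pageSheet
        present(drawingVC, animated: true)
    }
}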