Posts

Post not yet marked as solved
0 Replies
229 Views
I am trying to recreate a discrete scrubber in SwiftUI with haptic feedback and snapping to the nearest integer step. I use a ScrollView and a LazyHStack as follows:

struct DiscreteScrubber: View {
    @State var numLines: Int = 100

    var body: some View {
        ScrollView(.horizontal, showsIndicators: false) {
            LazyHStack {
                ForEach(0..<numLines, id: \.self) { _ in
                    Rectangle()
                        .frame(width: 2, height: 10, alignment: .center)
                        .foregroundStyle(Color.red)
                    Spacer().frame(width: 10)
                }
            }
        }
    }
}

Problem: I need to add a content inset of half the frame width of the ScrollView so that the first and last lines of the scrubber can sit at the center of the view, and I also need to generate haptic feedback as it scrolls. This was easy in UIKit but is not obvious in SwiftUI.
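A minimal sketch of one possible approach on iOS 17 or later (my own assumption, not from the post): contentMargins for the half-width inset, scrollTargetBehavior for snapping, and sensoryFeedback for the haptic tick. The type name DiscreteScrubberSketch and the selectedStep property are illustrative only, and whether viewAligned snapping centers each tick exactly would need to be verified.

import SwiftUI

struct DiscreteScrubberSketch: View {
    @State private var selectedStep: Int? = 0
    let numLines = 100

    var body: some View {
        GeometryReader { proxy in
            ScrollView(.horizontal, showsIndicators: false) {
                LazyHStack(spacing: 10) {
                    ForEach(0..<numLines, id: \.self) { index in
                        Rectangle()
                            .frame(width: 2, height: 10)
                            .foregroundStyle(Color.red)
                            .id(index)
                    }
                }
                .scrollTargetLayout()
            }
            // Inset the scroll content by half the view width so the first and
            // last ticks can reach the center of the view.
            .contentMargins(.horizontal, proxy.size.width / 2, for: .scrollContent)
            // Snap to the tick layout declared by scrollTargetLayout().
            .scrollTargetBehavior(.viewAligned)
            // Track which tick is currently aligned.
            .scrollPosition(id: $selectedStep)
            // Play a haptic tick whenever the aligned step changes (iOS 17+).
            .sensoryFeedback(.selection, trigger: selectedStep)
        }
    }
}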
Post not yet marked as solved
1 Reply
322 Views
Dear StoreKit engineers, I recently migrated my app from paid to a freemium model and am using AppTransaction to get the original purchase version and original purchase date, to determine whether a user already paid for the app. This seems to work for normal App Store users, but I am now flooded with complaints from VPP (Volume Purchase Program) users who previously purchased the app. The AppTransaction history appears to be absent for these users, and they are asked to sign in with their Apple ID. What is the solution here?
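A minimal sketch (my own illustration, not Apple guidance) of the entitlement check the post describes, with an AppTransaction.refresh() fallback that triggers the App Store sign-in prompt. The helper name and the cutoff version "2.0" are hypothetical:

import StoreKit

// Hypothetical helper: returns true if the app was originally bought before the
// freemium switch. "2.0" is an illustrative cutoff version, not from the post.
func purchasedBeforeFreemiumSwitch() async -> Bool {
    func isPaid(_ transaction: AppTransaction) -> Bool {
        transaction.originalAppVersion.compare("2.0", options: .numeric) == .orderedAscending
    }
    do {
        if case .verified(let transaction) = try await AppTransaction.shared {
            return isPaid(transaction)
        }
    } catch {
        // No cached app transaction (the situation VPP users reportedly hit):
        // refresh() re-fetches it, prompting the user to sign in to the App Store.
        if let refreshed = try? await AppTransaction.refresh(),
           case .verified(let transaction) = refreshed {
            return isPaid(transaction)
        }
    }
    return false
}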
Post not yet marked as solved
0 Replies
389 Views
I am embedding a SwiftUI VideoPlayer in a VStack and see that the screen goes black when the device is rotated (the content disappears even though the video player itself autorotates). The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?

var videoURL: URL
let player = AVPlayer()

var body: some View {
    VStack {
        VideoPlayer(player: player)
            .frame(maxWidth: .infinity)
            .frame(height: 300)
            .padding()
            .ignoresSafeArea()
            .background { Color.black }
            .onTapGesture {
                player.rate = player.rate == 0.0 ? 1.0 : 0.0
            }
        Spacer()
    }
    .ignoresSafeArea()
    .background(content: { Color.black })
    .onAppear {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback,
                                         mode: AVAudioSession.Mode.default,
                                         options: AVAudioSession.CategoryOptions.duckOthers)
        } catch {
            NSLog("Unable to set session category to playback")
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player.replaceCurrentItem(with: playerItem)
    }
}
Post not yet marked as solved
0 Replies
441 Views
The title says it all: Xcode 15 freezes in so many cases that it has to be force-quit and reopened, whether opening Swift packages or recognizing connected devices. This happens on an Intel-based 2018 MacBook Pro. Is this a problem specific to Intel-based Macs, or is Xcode 15 this buggy on all machines? For instance, I cannot open and build a project downloaded from GitHub; specifically, opening the file named MetalViewUI.swift in that project makes Xcode hang forever.
Post not yet marked as solved
0 Replies
368 Views
I have been using MTKView to display CVPixelBuffers from the camera. I use many options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone map content to the display (CAEDRMetadata, for instance). If I use AVSampleBufferDisplayLayer instead, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses the pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise in functionality?
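For reference, a minimal sketch (assumptions on my part, not the poster's code) of how a camera CVPixelBuffer would reach an AVSampleBufferDisplayLayer; the layer relies on the buffer's color attachments rather than explicit color-space configuration:

import AVFoundation
import CoreMedia

// Hypothetical helper: wrap a camera CVPixelBuffer in a CMSampleBuffer and hand it
// to the layer, which reads the buffer's primaries/transfer function/matrix
// attachments to color match and tone map for the display.
func display(_ pixelBuffer: CVPixelBuffer, on layer: AVSampleBufferDisplayLayer) {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    // For a real capture pipeline, carry the sample's own timestamps instead.
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .zero,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    if let sampleBuffer {
        layer.enqueue(sampleBuffer)
    }
}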
Post not yet marked as solved
0 Replies
477 Views
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have the following doubts.

Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what are the possible maximum and minimum sample values in either case?

I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as-is, normalized to [0, 1]. But then it is possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader.

In short, how do you safely process HDR pixel buffers received from the camera, which are 10-bit YCbCr 4:2:0 and, I believe, have gamma correction applied (so the Y in YCbCr is actually Y')? Can the AVFoundation team clarify this?
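For context, a minimal sketch (an assumption about one possible setup, not an answer from Apple) of a CIContext whose working space is extended linear BT.2020, so that correctly tagged HLG/PQ camera buffers would be linearized by Core Image before samples reach a Metal kernel:

import CoreImage
import CoreGraphics

// Half-float working format preserves sample values above 1.0 for HDR content.
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
let hdrContext = CIContext(options: [
    .workingColorSpace: workingSpace,
    .workingFormat: CIFormat.RGBAh
])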
Post not yet marked as solved
1 Reply
350 Views
Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (the YCbCr 4:2:2 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:

device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()

It works for other formats, such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations or bad use of the API?
Post not yet marked as solved
0 Replies
442 Views
I understand that by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means the color values received (or sampled from a sampler) in a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]

do we receive the color values as they exist in the input texture (which may already have gamma correction applied)? In other words, are the color values received in the kernel gamma corrected, so that we need to manually convert them to linear values in the Metal kernel if required?
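As a reference point, here is the kind of manual conversion the question alludes to, sketched in Swift for illustration (in practice it would live in the Metal kernel). This is the standard sRGB decode; HLG or PQ inputs would need their own transfer functions:

import Foundation

// Standard sRGB electro-optical transfer function: encoded value -> linear.
func srgbToLinear(_ encoded: Double) -> Double {
    encoded <= 0.04045 ? encoded / 12.92 : pow((encoded + 0.055) / 1.055, 2.4)
}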
Post not yet marked as solved
0 Replies
254 Views
I am trying to use the new API CAEDRMetadata.hlg(ambientViewingEnvironment:) introduced in iOS 17.0. Since ambientViewingEnvironmentData is dynamic, I understand the edrMetadata of the CAMetalLayer needs to be set on every draw call. But doing so causes the CAMetalLayer to freeze and even crash.

if let pixelBuffer = image.pixelBuffer,
   let aveData = pixelBuffer.attachments.propagated[kCVImageBufferAmbientViewingEnvironmentKey as String] as? Data {
    if #available(iOS 17.0, *) {
        metalLayer.edrMetadata = CAEDRMetadata.hlg(ambientViewingEnvironment: aveData)
    } else {
        // Fallback on earlier versions
    }
}
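A minimal sketch of a possible mitigation (purely an assumption on my part, not a confirmed fix): rebuild the CAEDRMetadata only when the ambient viewing environment attachment actually changes, rather than on every draw. The class and property names are hypothetical:

import AVFoundation
import QuartzCore

final class EDRMetadataUpdater {
    private var lastAVEData: Data?

    func update(from pixelBuffer: CVPixelBuffer, on metalLayer: CAMetalLayer) {
        guard #available(iOS 17.0, *),
              let aveData = CVBufferCopyAttachment(pixelBuffer,
                                                   kCVImageBufferAmbientViewingEnvironmentKey,
                                                   nil) as? Data else { return }
        // Only touch edrMetadata when the attachment data changes.
        if aveData != lastAVEData {
            metalLayer.edrMetadata = CAEDRMetadata.hlg(ambientViewingEnvironment: aveData)
            lastAVEData = aveData
        }
    }
}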
Post not yet marked as solved
0 Replies
426 Views
I have set up an AVCaptureVideoDataOutput that delivers 10-bit 4:2:0 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation.

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
// srcImage is created from the sample buffer received from the video data output
_ciContext.render(dstImage,
                  to: dstPixelBuffer!,
                  bounds: dstImage.extent,
                  colorSpace: srcImage.colorSpace)

I then set the color attachments on dstPixelBuffer according to the colorProfile chosen in the app settings (BT.709 or BT.2020):

switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}

These pixel buffers are then vended to an AVAssetWriter whose videoSettings is set to the recommended settings from the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
Post not yet marked as solved
0 Replies
350 Views
I have two CIContexts configured with the following options:

let options1: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false,
                                        CIContextOption.outputColorSpace: NSNull(),
                                        CIContextOption.workingColorSpace: NSNull()]

let options2: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false]

And an MTKView whose CAMetalLayer is configured for HDR output:

metalLayer = self.layer as? CAMetalLayer
metalLayer?.wantsExtendedDynamicRangeContent = true
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
colorPixelFormat = .bgr10a2Unorm

The two context configurations produce different outputs when the input is BT.2020 pixel buffers, but I believe the outputs shouldn't differ: the first option simply disables color management, while the second performs intermediate buffer calculations in the extended linear sRGB color space and then converts those buffers to the BT.2020 color space for output.
Post not yet marked as solved
1 Reply
701 Views
I am trying to migrate my app from paid to freemium and am facing several issues and doubts. Specifically, I am trying to use the StoreKit 2 AppTransaction API, but I am not averse to using the original StoreKit if StoreKit 2 does not solve my problems. Here are my questions:

AppTransaction/receipt on launch: I see that on launch the AppTransaction.shared call initially fails in the sandbox. Does that mean that for users who purchased the app previously, the AppTransaction (or the appStoreReceipt in the original StoreKit) may not be available when they download or update the app, so that I will need to ask every such user to authenticate with the App Store to refresh the receipt/AppTransaction?

Volume purchase users: I see that StoreKit 2 is not advised for volume purchases on the Apple website. I am not sure why that is the case, but does it mean AppTransaction will not be available for users who purchased under the Volume Purchase Program (VPP)? Is the flow to validate VPP users different? If StoreKit 2 cannot be used, can the original StoreKit API help here, or can nothing help?
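For the original-StoreKit side of the question, a minimal sketch (my own assumption about the usual pattern, not Apple guidance) of checking for a local receipt and refreshing it with SKReceiptRefreshRequest, which also prompts for App Store sign-in. The class name and method names are hypothetical:

import StoreKit

// Hypothetical helper using the original StoreKit API: if no local receipt is
// present, SKReceiptRefreshRequest fetches one (prompting for App Store sign-in).
// Keep a strong reference to this object while the request is in flight.
final class ReceiptRefresher: NSObject, SKRequestDelegate {
    private var request: SKReceiptRefreshRequest?

    func refreshIfMissing() {
        guard let url = Bundle.main.appStoreReceiptURL,
              !FileManager.default.fileExists(atPath: url.path) else { return }
        let request = SKReceiptRefreshRequest()
        request.delegate = self
        request.start()
        self.request = request
    }

    func requestDidFinish(_ request: SKRequest) {
        // The receipt (if any) is now at Bundle.main.appStoreReceiptURL; validate
        // it to read original_application_version / original_purchase_date.
        self.request = nil
    }

    func request(_ request: SKRequest, didFailWithError error: Error) {
        // The user cancelled sign-in or the refresh failed.
        self.request = nil
    }
}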
Post not yet marked as solved
1 Reply
589 Views
I have imported two Metal files and defined two stitchable Metal Core Image kernels, one a CIColorKernel and the other a CIKernel. As outlined in the WWDC video, I need to add the flag -framework CoreImage to Other Metal Linker Flags. Unfortunately, Xcode 15 puts double quotes around it and generates the error metal: error: unknown argument: '-framework CoreImage'. So I built without this flag, and it works for the first kernel that was added. The other kernel is never added to the default metallib and fails to load. How do I get it working?

class SobelEdgeFilterHDR: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "sobelEdgeFilterHDR", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return SobelEdgeFilterHDR.kernel.apply(extent: inputImage.extent,
                                               roiCallback: { (index, rect) in
                                                   return rect
                                               },
                                               arguments: [inputImage])
    }
}
Post marked as solved
2 Replies
583 Views
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:

class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return FilterTwo.kernel.apply(extent: inputImage.extent,
                                      roiCallback: { (index, rect) in
                                          return rect
                                      },
                                      arguments: [inputImage])
    }
}

Here is the Metal code:

using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}

And here is the usage:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}

And the output:

StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …•∆}
Kernels []
reflect Function 'secondFilter' does not exist.
Post not yet marked as solved
0 Replies
420 Views
I have been allocating pixel buffers from a CVPixelBufferPool, with code adapted from various older Apple samples such as RosyWriter. I see that the direct API, CVPixelBufferCreate, is highly performant and rarely causes frame drops, whereas allocating from a pixel buffer pool regularly gives me frame drops. Is this a known issue or a bad use of the API? Here is the code for creating the pixel buffer pool:

private func createPixelBufferPool(_ width: Int32, _ height: Int32, _ pixelFormat: FourCharCode, _ maxBufferCount: Int32) -> CVPixelBufferPool? {
    var outputPool: CVPixelBufferPool? = nil
    let sourcePixelBufferOptions: NSDictionary = [kCVPixelBufferPixelFormatTypeKey: pixelFormat,
                                                  kCVPixelBufferWidthKey: width,
                                                  kCVPixelBufferHeightKey: height,
                                                  kCVPixelFormatOpenGLESCompatibility: true,
                                                  kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary]
    let pixelBufferPoolOptions: NSDictionary = [kCVPixelBufferPoolMinimumBufferCountKey: maxBufferCount]
    CVPixelBufferPoolCreate(kCFAllocatorDefault, pixelBufferPoolOptions, sourcePixelBufferOptions, &outputPool)
    return outputPool
}

private func createPixelBufferPoolAuxAttributes(_ maxBufferCount: size_t) -> NSDictionary {
    // CVPixelBufferPoolCreatePixelBufferWithAuxAttributes() will return kCVReturnWouldExceedAllocationThreshold
    // if we have already vended the max number of buffers
    return [kCVPixelBufferPoolAllocationThresholdKey: maxBufferCount]
}

private func preallocatePixelBuffersInPool(_ pool: CVPixelBufferPool, _ auxAttributes: NSDictionary) {
    // Preallocate buffers in the pool, since this is for real-time display/capture
    var pixelBuffers: [CVPixelBuffer] = []
    while true {
        var pixelBuffer: CVPixelBuffer? = nil
        let err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, pool, auxAttributes, &pixelBuffer)
        if err == kCVReturnWouldExceedAllocationThreshold {
            break
        }
        assert(err == noErr)
        pixelBuffers.append(pixelBuffer!)
    }
    pixelBuffers.removeAll()
}

And here is the usage:

bufferPool = createPixelBufferPool(outputDimensions.width, outputDimensions.height, outputPixelFormat, Int32(maxRetainedBufferCount))
if bufferPool == nil {
    NSLog("Problem initializing a buffer pool.")
    success = false
    break bail
}
bufferPoolAuxAttributes = createPixelBufferPoolAuxAttributes(maxRetainedBufferCount)
preallocatePixelBuffersInPool(bufferPool!, bufferPoolAuxAttributes!)

And then creating pixel buffers from the pool:

err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer)
if err == kCVReturnWouldExceedAllocationThreshold {
    // Flush the texture cache to potentially release the retained buffers and try again to create a pixel buffer
    err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer)
}
if err != 0 {
    if err == kCVReturnWouldExceedAllocationThreshold {
        NSLog("Pool is out of buffers, dropping frame")
    } else {
        NSLog("Error at CVPixelBufferPoolCreatePixelBuffer %d", err)
    }
    break bail
}

When used with AVAssetWriter, I see a lot of frame drops caused by the kCVReturnWouldExceedAllocationThreshold error. No frame drops are seen when I allocate the pixel buffer directly, without a pool:

CVPixelBufferCreate(kCFAllocatorDefault, Int(dimensions.width), Int(dimensions.height), outputPixelFormat, sourcePixelBufferOptions, &dstPixelBuffer)

What could be the cause?