Post

Replies

Boosts

Views

Activity

Making a ScrollView-based discrete scrubber in SwiftUI
I am trying to recreate a discrete scrubber in SwiftUI with haptic feedback and snapping to the nearest integer step. I use a ScrollView with a LazyHStack as follows:

struct DiscreteScrubber: View {
    @State var numLines: Int = 100

    var body: some View {
        ScrollView(.horizontal, showsIndicators: false) {
            LazyHStack {
                ForEach(0..<numLines, id: \.self) { _ in
                    Rectangle()
                        .frame(width: 2, height: 10, alignment: .center)
                        .foregroundStyle(Color.red)
                    Spacer().frame(width: 10)
                }
            }
        }
    }
}

Problem: I need to add a content inset of half the frame width of the ScrollView, so that the first line in the scrubber can start at the center of the view and so can the last line, and I also need to generate haptic feedback as it scrolls. This was easy in UIKit but is not obvious in SwiftUI.
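Not from the original post, but a minimal sketch of one way the inset and the haptics could be approached, assuming the iOS 17 scroll APIs (contentMargins, scrollTargetLayout with viewAligned snapping, scrollPosition, and sensoryFeedback); DiscreteScrubberSketch and selectedStep are illustrative names, not the asker's code:

import SwiftUI

struct DiscreteScrubberSketch: View {
    let numLines = 100
    @State private var selectedStep: Int?   // id of the line currently snapped to center

    var body: some View {
        GeometryReader { geo in
            ScrollView(.horizontal, showsIndicators: false) {
                LazyHStack(spacing: 10) {
                    ForEach(0..<numLines, id: \.self) { index in
                        Rectangle()
                            .frame(width: 2, height: 10)
                            .foregroundStyle(.red)
                            .id(index)
                    }
                }
                .scrollTargetLayout()                                   // marks each line as a snap target
            }
            .contentMargins(.horizontal, geo.size.width / 2, for: .scrollContent) // half-width inset
            .scrollTargetBehavior(.viewAligned)                         // snap to the nearest line
            .scrollPosition(id: $selectedStep)                          // track the centered line
            .sensoryFeedback(.selection, trigger: selectedStep)         // haptic tick per step change
        }
        .frame(height: 40)
    }
}

The half-width content margin lets both the first and the last line reach the center, and the selection feedback fires whenever the snapped item changes.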
0
0
279
Feb ’24
AppTransaction issues for previous purchases under VPP
Dear StoreKit Engineers, I recently migrated my app from a paid model to freemium and am using AppTransaction to get the original purchase version and original purchase date, to determine whether the user had already paid for the app. It seems to be working for normal App Store users, but I am now flooded with complaints from VPP users who previously purchased the app. It seems the AppTransaction history is absent for these users, and they are asked to sign in with an Apple ID. What is the solution here?
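For reference, a minimal sketch of the kind of check being described, assuming StoreKit 2; the cutoff version "2.0" is a placeholder, not a value from the post:

import StoreKit

// Returns true if the original download predates the first freemium release.
func userPaidBeforeFreemiumSwitch() async -> Bool {
    do {
        let result = try await AppTransaction.shared          // may prompt for App Store sign-in
        guard case .verified(let appTransaction) = result else { return false }
        // "2.0" is a hypothetical first freemium version.
        return appTransaction.originalAppVersion.compare("2.0", options: .numeric) == .orderedAscending
    } catch {
        return false
    }
}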
1
0
363
Jan ’24
SwiftUI Video Player autorotation issue
I am embedding the SwiftUI VideoPlayer in a VStack and see that the screen goes black (i.e. the content disappears, even though the video player gets autorotated) when the device is rotated. The issue happens even when I use AVPlayerViewController (as a UIViewControllerRepresentable). Is this a bug, or am I doing something wrong?

var videoURL: URL
let player = AVPlayer()

var body: some View {
    VStack {
        VideoPlayer(player: player)
            .frame(maxWidth: .infinity)
            .frame(height: 300)
            .padding()
            .ignoresSafeArea()
            .background {
                Color.black
            }
            .onTapGesture {
                player.rate = player.rate == 0.0 ? 1.0 : 0.0
            }
        Spacer()
    }
    .ignoresSafeArea()
    .background(content: {
        Color.black
    })
    .onAppear {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback,
                                         mode: AVAudioSession.Mode.default,
                                         options: AVAudioSession.CategoryOptions.duckOthers)
        } catch {
            NSLog("Unable to set session category to playback")
        }
        let playerItem = AVPlayerItem(url: videoURL)
        player.replaceCurrentItem(with: playerItem)
    }
}
0
0
464
Jan ’24
AVSampleBufferDisplayLayer vs CAMetalLayer for displaying HDR
I have been using MTKView to display CVPixelBuffers from the camera. I use a number of options to configure the color space of the MTKView/CAMetalLayer that may be needed to tone map content to the display (CAEDRMetadata, for instance). If, however, I use AVSampleBufferDisplayLayer, there are not many configuration options for color matching. I believe AVSampleBufferDisplayLayer uses the pixel buffer attachments to determine the native color space of the input image and does the tone mapping automatically. Does AVSampleBufferDisplayLayer have any limitations compared to MTKView, or can both be used without any compromise in functionality?
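For context, a minimal sketch (not code from the post) of the AVSampleBufferDisplayLayer path, where the layer derives color information from the pixel buffer's attachments rather than from explicit configuration; the function name display(_:at:on:) is illustrative:

import AVFoundation

func display(_ pixelBuffer: CVPixelBuffer, at time: CMTime, on layer: AVSampleBufferDisplayLayer) {
    // Wrap the pixel buffer in a CMSampleBuffer; the color primaries/transfer/matrix
    // attachments travel with the buffer and drive the layer's color matching.
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: time,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                             imageBuffer: pixelBuffer,
                                             formatDescription: formatDescription,
                                             sampleTiming: &timing,
                                             sampleBufferOut: &sampleBuffer)
    if let sampleBuffer, layer.isReadyForMoreMediaData {
        layer.enqueue(sampleBuffer)
    }
}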
0
0
439
Nov ’23
Xcode 15 freezes frequently on Intel-based MacBook Pro
The title says it all: Xcode 15 freezes in so many cases that it has to be force quit and reopened, whether opening Swift packages or recognising connected devices. This happens on an Intel-based 2018 MacBook Pro. Is this a problem with Intel-based Macs, or is Xcode 15 buggy on all machines? For instance, I cannot open and build this project downloaded from GitHub; specifically, opening the file named MetalViewUI.swift in the project hangs Xcode forever.
0
0
489
Nov ’23
Correctly process HDR in Metal Core Image Kernels (& Metal)
I am trying to carefully process HDR pixel buffers (10-bit YCbCr buffers) from the camera. I have watched all the WWDC videos on this topic but still have the doubts expressed below.

Q. What assumptions are safe to make about sample values in Metal Core Image kernels? Are the sample values received in a Metal Core Image kernel linear or gamma corrected? Or does that depend on the workingColorSpace property, or on the input image that is supplied (through the imageByMatchingToColorSpace() API, etc.)? And what could be the max and min values of these samples in either case? I see that setting workingColorSpace to NSNull() in the context creation options guarantees receiving the samples as-is, normalised to [0, 1]. But then it is possible the values are non-linear (gamma corrected), and extracting linear values would involve writing conversion functions in the shader.

In short, how do you safely process HDR pixel buffers received from the camera (which are YCbCr 4:2:0 10-bit and, I believe, have gamma correction applied, so the Y in YCbCr is actually Y')? Can the AVFoundation team clarify this?
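A minimal sketch of one possible context configuration, stated as an assumption rather than confirmed guidance: giving the CIContext an extended-linear BT.2020 working color space, so that a Metal Core Image kernel receives linear samples that may exceed 1.0 for HDR content:

import CoreImage

// Extended-linear BT.2020 keeps kernel samples linear and allows components above 1.0.
let workingSpace = CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!
let ciContext = CIContext(options: [.workingColorSpace: workingSpace])

// CIImage(cvPixelBuffer:) picks up the buffer's color attachments (e.g. the HLG transfer
// function), so Core Image converts the Y'CbCr input into the working space before the
// kernel samples it:
// let input = CIImage(cvPixelBuffer: pixelBuffer)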
0
0
538
Nov ’23
Adding multiple AVCaptureVideoDataOutput stalls captureSession
Adding multiple AVCaptureVideoDataOutputs is officially supported in iOS 16 and works well, except for certain configurations such as ProRes (the YCbCr422 pixel format), where the session fails to start if two video data outputs are added. Is this a known limitation or a bug? Here is the code:

device.activeFormat = device.findFormat(targetFPS, resolution: targetResolution, pixelFormat: kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange)!
NSLog("Device supports tone mapping \(device.activeFormat.isGlobalToneMappingSupported)")
device.activeColorSpace = .HLG_BT2020
device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(targetFPS))
device.unlockForConfiguration()

self.session?.addInput(input)

let output = AVCaptureVideoDataOutput()
output.alwaysDiscardsLateVideoFrames = true
output.setSampleBufferDelegate(self, queue: self.samplesQueue)
if self.session!.canAddOutput(output) {
    self.session?.addOutput(output)
}

let previewVideoOut = AVCaptureVideoDataOutput()
previewVideoOut.alwaysDiscardsLateVideoFrames = true
previewVideoOut.automaticallyConfiguresOutputBufferDimensions = false
previewVideoOut.deliversPreviewSizedOutputBuffers = true
previewVideoOut.setSampleBufferDelegate(self, queue: self.previewQueue)
if self.session!.canAddOutput(previewVideoOut) {
    self.session?.addOutput(previewVideoOut)
}

self.vdo = output
self.previewVDO = previewVideoOut
self.session?.startRunning()

It works for other formats such as 10-bit YCbCr video-range HDR sample buffers, but there are a lot of frame drops when recording with AVAssetWriter at 4K@60 fps. Are these known limitations or a bad use of the API?
1
0
395
Nov ’23
Metal Core Image kernel workingColorSpace
I understand that, by default, Core Image uses extended linear sRGB as the working color space for executing kernels. This means that the color values received (or sampled from a sampler) in a Metal Core Image kernel are linear values without gamma correction applied. But if we disable color management by setting

let options: [CIContextOption: Any] = [CIContextOption.workingColorSpace: NSNull()]

do we receive the color values as they exist in the input texture (which may already have gamma correction applied)? In other words, are the color values received in the kernel gamma corrected, so that we would need to manually convert them to linear values in the Metal kernel if required?
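A minimal sketch (an illustration, not code from the post) of what such a manual conversion could look like in a Metal Core Image color kernel, assuming gamma-encoded sRGB input; manualLinearize is a hypothetical kernel name:

#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" {
    namespace coreimage {
        // Undo the sRGB transfer function per channel; alpha is passed through.
        float4 manualLinearize(sample_t s) {
            float3 c = s.rgb;
            float3 lin = select(pow((c + 0.055) / 1.055, float3(2.4)),
                                c / 12.92,
                                c <= float3(0.04045));
            return float4(lin, s.a);
        }
    }
}

HDR transfer functions (HLG/PQ) would need their own inverse curves; the sRGB case above is only meant to show the shape of the conversion.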
0
0
476
Nov ’23
CAEDRMetadata.hlg(ambientViewingEnvironment:) API crash
I am trying to use the new API CAEDRMetadata.hlg(ambientViewingEnvironment:) introduced in iOS 17.0. Since the ambient viewing environment data is dynamic, I understand that the edrMetadata of the CAMetalLayer needs to be set on every draw call. But doing so causes the CAMetalLayer to freeze and even crash.

if let pixelBuffer = image.pixelBuffer,
   let aveData = pixelBuffer.attachments.propagated[kCVImageBufferAmbientViewingEnvironmentKey as String] as? Data {
    if #available(iOS 17.0, *) {
        metalLayer.edrMetadata = CAEDRMetadata.hlg(ambientViewingEnvironment: aveData)
    } else {
        // Fallback on earlier versions
    }
}
0
0
283
Nov ’23
Correct settings to record HDR/SDR with AVAssetWriter
I have set up AVCaptureVideoDataOutput with 10-bit 420 YCbCr sample buffers. I use Core Image to process these pixel buffers for simple scaling/translation.

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size

// srcImage is created from the sample buffer received from the video data output
_ciContext.render(dstImage,
                  to: dstPixelBuffer!,
                  bounds: dstImage.extent,
                  colorSpace: srcImage.colorSpace)

I then set the color attachments on this dstPixelBuffer based on the colorProfile set in the app settings (BT.709 or BT.2020).

switch colorProfile {
case .BT709:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)
case .HLG2100:
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
    CVBufferSetAttachment(dstPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)
}

These pixel buffers are then vended to an AVAssetWriter whose videoSettings is set to the recommended settings from the video data output. But the output looks completely washed out, especially for SDR (BT.709). What am I doing wrong?
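One thing worth checking, shown here only as a hedged sketch and not a confirmed fix: spelling out AVVideoColorPropertiesKey in the writer's video settings so they match the attachments set on the buffers; makeWriterInput is an illustrative helper and recommendedSettings stands for the dictionary returned by the video data output:

import AVFoundation

// BT.709 values shown for the SDR case; the corresponding BT.2020/HLG constants apply to HDR.
func makeWriterInput(recommendedSettings: [String: Any]) -> AVAssetWriterInput {
    var videoSettings = recommendedSettings   // e.g. from recommendedVideoSettingsForAssetWriter(writingTo:)
    videoSettings[AVVideoColorPropertiesKey] = [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
    ]
    return AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
}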
0
0
476
Nov ’23
Difference between CIContext outputs
I have two CIContexts configured with the following options:

let options1: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false,
                                        CIContextOption.outputColorSpace: NSNull(),
                                        CIContextOption.workingColorSpace: NSNull()]
let options2: [CIContextOption: Any] = [CIContextOption.cacheIntermediates: false]

And an MTKView whose CAMetalLayer is configured for HDR output:

metalLayer = self.layer as? CAMetalLayer
metalLayer?.wantsExtendedDynamicRangeContent = true
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.itur_2100_HLG)
colorPixelFormat = .bgr10a2Unorm

The two sets of options produce different outputs when the input is BT.2020 pixel buffers, but I believe the outputs shouldn't differ: the first option simply disables color management, while the second performs intermediate buffer calculations in the extended linear sRGB color space and then converts those buffers to the BT.2020 color space on output.
0
0
382
Nov ’23
[[stitchable]] Metal Core Image CIKernel fails to load
It looks like [[stitchable]] Metal Core Image kernels fail to get added to the default Metal library. Here is my code:

class FilterTwo: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "secondFilter", fromMetalLibraryData: data) // <-- This fails!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return FilterTwo.kernel.apply(extent: inputImage.extent,
                                      roiCallback: { (index, rect) in
                                          return rect
                                      },
                                      arguments: [inputImage])
    }
}

Here is the Metal code:

using namespace metal;

[[ stitchable ]] half4 secondFilter(coreimage::sampler inputImage, coreimage::destination dest)
{
    float2 srcCoord = inputImage.coord();
    half4 color = half4(inputImage.sample(srcCoord));
    return color;
}

And here is the usage:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        let filter = FilterTwo()
        filter.inputImage = CIImage(color: CIColor.red)
        let outputImage = filter.outputImage!
        NSLog("Output \(outputImage)")
    }
}

And the output:

StitchableKernelsTesting/FilterTwo.swift:15: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=CIKernel Code=1 "(null)" UserInfo={CINonLocalizedDescriptionKey=Function does not exist in library data. …}
Kernels []
reflect Function 'secondFilter' does not exist.
2
0
658
Nov ’23
Xcode 15 [[stitchable]] Metal Core Image kernels fail
I have imported two Metal files and defined two stitchable Metal Core Image kernels, one of them a CIColorKernel and the other a CIKernel. As outlined in the WWDC video, I need to add the flag -framework CoreImage to Other Metal Linker Flags. Unfortunately, Xcode 15 puts double quotes around this and generates the error metal: error: unknown argument: '-framework CoreImage'. So I built without this flag, and it works for the first kernel that was added. The other kernel is never added to metal.defaultlib and fails to load. How do I get it working?

class SobelEdgeFilterHDR: CIFilter {
    var inputImage: CIImage?
    var inputParam: Float = 0.0

    static var kernel: CIKernel = { () -> CIKernel in
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        let kernelNames = CIKernel.kernelNames(fromMetalLibraryData: data)
        NSLog("Kernels \(kernelNames)")
        return try! CIKernel(functionName: "sobelEdgeFilterHDR", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return SobelEdgeFilterHDR.kernel.apply(extent: inputImage.extent,
                                               roiCallback: { (index, rect) in
                                                   return rect
                                               },
                                               arguments: [inputImage])
    }
}
1
0
645
Nov ’23
CVPixelBufferPool poor performance vis-à-vis direct allocation
I have been allocating pixel buffers from a CVPixelBufferPool, with code adapted from various older Apple sample code such as RosyWriter. I see that direct APIs such as CVPixelBufferCreate are highly performant and rarely cause frame drops, as opposed to allocating from a pixel buffer pool, where I regularly get frame drops. Is this a known issue or a bad use of the API?

Here is the code for creating the pixel buffer pool:

private func createPixelBufferPool(_ width: Int32, _ height: Int32, _ pixelFormat: FourCharCode, _ maxBufferCount: Int32) -> CVPixelBufferPool? {
    var outputPool: CVPixelBufferPool? = nil

    let sourcePixelBufferOptions: NSDictionary = [
        kCVPixelBufferPixelFormatTypeKey: pixelFormat,
        kCVPixelBufferWidthKey: width,
        kCVPixelBufferHeightKey: height,
        kCVPixelFormatOpenGLESCompatibility: true,
        kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
    ]

    let pixelBufferPoolOptions: NSDictionary = [kCVPixelBufferPoolMinimumBufferCountKey: maxBufferCount]

    CVPixelBufferPoolCreate(kCFAllocatorDefault, pixelBufferPoolOptions, sourcePixelBufferOptions, &outputPool)

    return outputPool
}

private func createPixelBufferPoolAuxAttributes(_ maxBufferCount: size_t) -> NSDictionary {
    // CVPixelBufferPoolCreatePixelBufferWithAuxAttributes() will return kCVReturnWouldExceedAllocationThreshold
    // if we have already vended the max number of buffers
    return [kCVPixelBufferPoolAllocationThresholdKey: maxBufferCount]
}

private func preallocatePixelBuffersInPool(_ pool: CVPixelBufferPool, _ auxAttributes: NSDictionary) {
    // Preallocate buffers in the pool, since this is for real-time display/capture
    var pixelBuffers: [CVPixelBuffer] = []
    while true {
        var pixelBuffer: CVPixelBuffer? = nil
        let err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, pool, auxAttributes, &pixelBuffer)
        if err == kCVReturnWouldExceedAllocationThreshold {
            break
        }
        assert(err == noErr)
        pixelBuffers.append(pixelBuffer!)
    }
    pixelBuffers.removeAll()
}

And here is the usage:

bufferPool = createPixelBufferPool(outputDimensions.width, outputDimensions.height, outputPixelFormat, Int32(maxRetainedBufferCount))
if bufferPool == nil {
    NSLog("Problem initializing a buffer pool.")
    success = false
    break bail
}
bufferPoolAuxAttributes = createPixelBufferPoolAuxAttributes(maxRetainedBufferCount)
preallocatePixelBuffersInPool(bufferPool!, bufferPoolAuxAttributes!)

And then creating pixel buffers from the pool:

err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer)
if err == kCVReturnWouldExceedAllocationThreshold {
    // Flush the texture cache to potentially release the retained buffers and try again to create a pixel buffer
    err = CVPixelBufferPoolCreatePixelBufferWithAuxAttributes(kCFAllocatorDefault, bufferPool!, bufferPoolAuxAttributes, &dstPixelBuffer)
}
if err != 0 {
    if err == kCVReturnWouldExceedAllocationThreshold {
        NSLog("Pool is out of buffers, dropping frame")
    } else {
        NSLog("Error at CVPixelBufferPoolCreatePixelBuffer %d", err)
    }
    break bail
}

When used with AVAssetWriter, I see a lot of frame drops caused by the kCVReturnWouldExceedAllocationThreshold error. No frame drops are seen when I directly allocate the pixel buffer without using a pool:

CVPixelBufferCreate(kCFAllocatorDefault, Int(dimensions.width), Int(dimensions.height), outputPixelFormat, sourcePixelBufferOptions, &dstPixelBuffer)

What could be the cause?
0
0
469
Nov ’23
Processing YCbCr422 10-bit HDR pixel buffers with Metal
I am currently using Core Image to process YCbCr422/420 10-bit pixel buffers, but it lacks performance at high frame rates, so I decided to switch to Metal. But with Metal I am getting even worse performance. I am loading both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:

let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}

And here is the shader code:

vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]])
{
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}

Is there anything fundamentally wrong in the code that makes it slow?
1
0
549
Oct ’23