Posts

Post not yet marked as solved
0 Replies
524 Views
I am currently using CoreImage to process YCbCr422/420 10-bit pixel buffers, but it can't keep up at high frame rates, so I decided to switch to Metal. With Metal, however, I am getting even worse performance. I load both the luma (Y) and chroma (CbCr) textures in 16-bit format as follows:

let pixelFormatY = MTLPixelFormat.r16Unorm
let pixelFormatUV = MTLPixelFormat.rg16Unorm

renderPassDescriptorY!.colorAttachments[0].texture = texture
renderPassDescriptorY!.colorAttachments[0].loadAction = .clear
renderPassDescriptorY!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorY!.colorAttachments[0].storeAction = .store

renderPassDescriptorCbCr!.colorAttachments[0].texture = texture
renderPassDescriptorCbCr!.colorAttachments[0].loadAction = .clear
renderPassDescriptorCbCr!.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
renderPassDescriptorCbCr!.colorAttachments[0].storeAction = .store

// Vertices and texture coordinates for the Metal shader
let vertices: [AAPLVertex] = [
    AAPLVertex(position: vector_float2(-1.0, -1.0), texCoord: vector_float2(0.0, 1.0)),
    AAPLVertex(position: vector_float2( 1.0, -1.0), texCoord: vector_float2(1.0, 1.0)),
    AAPLVertex(position: vector_float2(-1.0,  1.0), texCoord: vector_float2(0.0, 0.0)),
    AAPLVertex(position: vector_float2( 1.0,  1.0), texCoord: vector_float2(1.0, 0.0))
]

let commandBuffer = commandQueue!.makeCommandBuffer()
if let commandBuffer = commandBuffer {
    let renderEncoderY = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorY!)
    renderEncoderY?.setRenderPipelineState(pipelineStateY!)
    renderEncoderY?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderY?.setFragmentTexture(CVMetalTextureGetTexture(lumaTexture!), index: 0)
    renderEncoderY?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthY), height: Double(dstHeightY), znear: 0, zfar: 1))
    renderEncoderY?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderY?.endEncoding()

    let renderEncoderCbCr = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptorCbCr!)
    renderEncoderCbCr?.setRenderPipelineState(pipelineStateCbCr!)
    renderEncoderCbCr?.setVertexBytes(vertices, length: vertices.count * MemoryLayout<AAPLVertex>.stride, index: 0)
    renderEncoderCbCr?.setFragmentTexture(CVMetalTextureGetTexture(chromaTexture!), index: 0)
    renderEncoderCbCr?.setViewport(MTLViewport(originX: 0, originY: 0, width: Double(dstWidthUV), height: Double(dstHeightUV), znear: 0, zfar: 1))
    renderEncoderCbCr?.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
    renderEncoderCbCr?.endEncoding()

    commandBuffer.commit()
}

And here is the shader code:

vertex MappedVertex vertexShaderYCbCrPassthru(constant Vertex *vertices [[ buffer(0) ]],
                                              unsigned int vertexId [[ vertex_id ]])
{
    MappedVertex out;
    Vertex v = vertices[vertexId];
    out.renderedCoordinate = float4(v.position, 0.0, 1.0);
    out.textureCoordinate = v.texCoord;
    return out;
}

fragment half fragmentShaderYPassthru(MappedVertex in [[ stage_in ]],
                                      texture2d<float, access::sample> textureY [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float Y = float(textureY.sample(s, in.textureCoordinate).r);
    return half(Y);
}

fragment half2 fragmentShaderCbCrPassthru(MappedVertex in [[ stage_in ]],
                                          texture2d<float, access::sample> textureCbCr [[ texture(0) ]])
{
    constexpr sampler s(s_address::clamp_to_edge, t_address::clamp_to_edge, min_filter::linear, mag_filter::linear);
    float2 CbCr = float2(textureCbCr.sample(s, in.textureCoordinate).rg);
    return half2(CbCr);
}

Is there anything fundamentally wrong in the code that makes it slow?
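Update: Since both fragment shaders are pure pass-throughs, I am experimenting with replacing the two render passes with blit copies, which skips the vertex/fragment pipeline entirely. This is only a sketch: it assumes the destination texture (dstTextureY here is a hypothetical stand-in for my render target) matches the source dimensions and pixel format.

if let blitEncoder = commandBuffer.makeBlitCommandEncoder(),
   let srcY = CVMetalTextureGetTexture(lumaTexture!) {
    // Copy the luma plane directly; no pipeline state, vertices, or sampling needed.
    blitEncoder.copy(from: srcY,
                     sourceSlice: 0, sourceLevel: 0,
                     sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
                     sourceSize: MTLSize(width: srcY.width, height: srcY.height, depth: 1),
                     to: dstTextureY, // hypothetical destination texture
                     destinationSlice: 0, destinationLevel: 0,
                     destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blitEncoder.endEncoding()
}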
Posted Last updated
.
Post not yet marked as solved
1 Replies
561 Views
I have CVPixelBuffers in YCbCr (422 or 420 biplanar, 10-bit video range) coming from the camera. I see the vImage framework is sophisticated enough to handle a variety of image formats (including pixel buffers in various YCbCr formats). I am looking to compute histograms for both Y (luma) and RGB. For 8-bit YCbCr samples, I can use this code to compute the histogram of the Y component:

CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
let bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let baseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)

var buffer = vImage_Buffer(data: baseAddress,
                           height: vImagePixelCount(height),
                           width: vImagePixelCount(width),
                           rowBytes: bytesPerRow)

alphaBin.withUnsafeMutableBufferPointer { alphaPtr in
    let error = vImageHistogramCalculation_Planar8(&buffer, alphaPtr.baseAddress!, UInt32(kvImageNoFlags))
    guard error == kvImageNoError else {
        fatalError("Error calculating histogram luma: \(error)")
    }
}

// Unlock only after vImage has finished reading the plane.
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)

How does one implement the same for 10-bit HDR pixel buffers, preferably using the new iOS 16 vImage APIs that provide a lot more flexibility (for instance, getting an RGB histogram as well from a YCbCr sample without explicitly performing a pixel format conversion)?
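Update: For the luma plane, my current workaround is below. It is a sketch that assumes the 10-bit biplanar formats store samples left-aligned in 16-bit words (so the plane can be read as Planar16U), and since I could not find a Planar16 variant of vImageHistogramCalculation, it converts to PlanarF and uses the float histogram:

CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

var src16 = vImage_Buffer(data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
                          height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
                          width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
                          rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))

// Convert the 16-bit plane to float in [0, 1]; for left-aligned 10-bit data
// the low 6 bits are zero, so dividing by UInt16.max is a uniform rescale.
var srcF = vImage_Buffer()
vImageBuffer_Init(&srcF, src16.height, src16.width, 32, vImage_Flags(kvImageNoFlags))
defer { free(srcF.data) }
vImageConvert_16UToF(&src16, &srcF, 0, 1.0 / Float(UInt16.max), vImage_Flags(kvImageNoFlags))

// 1024 bins, one per 10-bit code value.
var lumaHistogram = [vImagePixelCount](repeating: 0, count: 1024)
lumaHistogram.withUnsafeMutableBufferPointer { ptr in
    let error = vImageHistogramCalculation_PlanarF(&srcF, ptr.baseAddress!, 1024, 0.0, 1.0, vImage_Flags(kvImageNoFlags))
    guard error == kvImageNoError else { fatalError("Histogram error: \(error)") }
}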
Posted Last updated
.
Post not yet marked as solved
0 Replies
251 Views
I have the following code, adapted from the AVCamManual sample code, to set white balance. I still see crash reports in analytics where an exception is raised while setting WB:

*** -[AVCaptureDevice temperatureAndTintValuesForDeviceWhiteBalanceGains:] whiteBalanceGains contain an out-of-range value - red, green, and blue gain

Here is my code; it is not clear how things are turning out of range.

public func normalizedGains(gains: AVCaptureDevice.WhiteBalanceGains) -> AVCaptureDevice.WhiteBalanceGains {
    var g = gains
    if let device = videoDevice {
        g.redGain = max(1.0, g.redGain)
        g.blueGain = max(1.0, g.blueGain)
        g.greenGain = max(1.0, g.greenGain)
        g.redGain = min(device.maxWhiteBalanceGain, g.redGain)
        g.blueGain = min(device.maxWhiteBalanceGain, g.blueGain)
        g.greenGain = min(device.maxWhiteBalanceGain, g.greenGain)
    }
    return g
}

And my code to set WB:

public func setTemperatureAndTint(colorTemperature: Float?, tint: Float?) {
    if let device = videoDevice {
        var tint = tint
        var colorTemperature = colorTemperature
        if colorTemperature == nil {
            colorTemperature = device.temperatureAndTintValues(for: device.deviceWhiteBalanceGains).temperature
        }
        if tint == nil {
            tint = device.temperatureAndTintValues(for: device.deviceWhiteBalanceGains).tint
        }
        let temperatureTint = AVCaptureDevice.WhiteBalanceTemperatureAndTintValues(temperature: colorTemperature!, tint: tint!)
        NSLog("Setting tint \(temperatureTint.tint)")
        do {
            try device.lockForConfiguration()
            device.setWhiteBalanceModeLocked(with: normalizedGains(gains: device.deviceWhiteBalanceGains(for: temperatureTint)), completionHandler: nil)
            device.unlockForConfiguration()
            wbLockedtoGray = false
        } catch {
            NSLog("Unable to change white balance gain \(error)")
        }
    }
}

Is there anything I am doing wrong?
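Update: One theory I am testing (unconfirmed): the exception names temperatureAndTintValuesForDeviceWhiteBalanceGains:, and I call that with the raw device.deviceWhiteBalanceGains, which may momentarily report values outside [1.0, maxWhiteBalanceGain] while white balance is converging. Clamping before that query as well, not just before setWhiteBalanceModeLocked, would look like this:

// Clamp the raw gains before asking for temperature/tint, in case
// deviceWhiteBalanceGains briefly reports out-of-range values.
let safeGains = normalizedGains(gains: device.deviceWhiteBalanceGains)
let current = device.temperatureAndTintValues(for: safeGains)
if colorTemperature == nil { colorTemperature = current.temperature }
if tint == nil { tint = current.tint }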
Posted Last updated
.
Post not yet marked as solved
1 Replies
1.2k Views
I am trying to use a CIColorKernel or CIBlendKernel with sampler arguments, but the program crashes. Here is my shader code, which compiles successfully:

extern "C" float4 wipeLinear(coreimage::sampler t1, coreimage::sampler t2, float time)
{
    float2 coord1 = t1.coord();
    float2 coord2 = t2.coord();
    float4 innerRect = t2.extent();

    float minX = innerRect.x + time * innerRect.z;
    float minY = innerRect.y + time * innerRect.w;
    float cropWidth = (1 - time) * innerRect.w;
    float cropHeight = (1 - time) * innerRect.z;

    float4 s1 = t1.sample(coord1);
    float4 s2 = t2.sample(coord2);

    if (coord1.x > minX && coord1.x < minX + cropWidth && coord1.y > minY && coord1.y <= minY + cropHeight) {
        return s1;
    } else {
        return s2;
    }
}

And it crashes on initialization:

class CIWipeRenderer: CIFilter {
    var backgroundImage: CIImage?
    var foregroundImage: CIImage?
    var inputTime: Float = 0.0

    static var kernel: CIColorKernel = { () -> CIColorKernel in
        let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIColorKernel(functionName: "wipeLinear", fromMetalLibraryData: data) // Crashes here!!!!
    }()

    override var outputImage: CIImage? {
        guard let backgroundImage = backgroundImage else { return nil }
        guard let foregroundImage = foregroundImage else { return nil }
        return CIWipeRenderer.kernel.apply(extent: backgroundImage.extent, arguments: [backgroundImage, foregroundImage, inputTime])
    }
}

It crashes in the try line with the following error:

Fatal error: 'try!' expression unexpectedly raised an error: Foundation._GenericObjCError.nilError

If I replace the kernel code with the following, it works like a charm:

extern "C" float4 wipeLinear(coreimage::sample_t s1, coreimage::sample_t s2, float time)
{
    return mix(s1, s2, time);
}
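Update: My current suspicion (unverified) is that a kernel taking coreimage::sampler arguments is a general kernel rather than a color kernel, so it would have to be built as a CIKernel and applied with a region-of-interest callback, roughly:

static var kernel: CIKernel = {
    let url = Bundle.main.url(forResource: "AppCIKernels", withExtension: "ci.metallib")!
    let data = try! Data(contentsOf: url)
    // General kernels (sampler arguments) use CIKernel, not CIColorKernel.
    return try! CIKernel(functionName: "wipeLinear", fromMetalLibraryData: data)
}()

// Applying a general kernel requires an ROI callback:
// CIWipeRenderer.kernel.apply(extent: backgroundImage.extent,
//                             roiCallback: { _, rect in rect },
//                             arguments: [backgroundImage, foregroundImage, inputTime])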
Posted Last updated
.
Post not yet marked as solved
2 Replies
530 Views
It seems AVAssetWriter is rejecting CVPixelBuffers with error -12743 when appending NSData for kCVImageBufferAmbientViewingEnvironmentKey for HDR videos. Here is my code:

var ambientViewingEnvironment: CMFormatDescription.Extensions.Value?
var ambientViewingEnvironmentData: NSData?

ambientViewingEnvironment = sampleBuffer.formatDescription?.extensions[.ambientViewingEnvironment]
let plist = ambientViewingEnvironment?.propertyListRepresentation
ambientViewingEnvironmentData = plist as? NSData

And then attaching this data:

CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, ambientViewingEnvironmentData! as CFData, .shouldPropagate)

No matter what I do, including copying the attachment from sourcePixelBuffer to destinationPixelBuffer as-is, the error remains:

var attachmentMode: CVAttachmentMode = .shouldPropagate
let attachment = CVBufferCopyAttachment(sourcePixelBuffer!, kCVImageBufferAmbientViewingEnvironmentKey, &attachmentMode)
NSLog("Attachment \(attachment!), mode \(attachmentMode)")
CVBufferSetAttachment(dstPixelBuffer, kCVImageBufferAmbientViewingEnvironmentKey, attachment!, attachmentMode)

I need to know if there is anything wrong in the way the metadata is copied.
Posted Last updated
.
Post not yet marked as solved
4 Replies
1.2k Views
I am processing CVPixelBuffers received from the camera using both Metal and CoreImage and comparing the performance. The only processing done is taking a source pixel buffer, applying crop & affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using CoreImage and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}

if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size

_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know if I am doing anything wrong, and what could be done to lower CPU usage in CoreImage.
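Update: For reference, I am also experimenting with CIRenderDestination, which queues the render and returns instead of blocking inside render(_:to:bounds:colorSpace:). This is just a sketch, not a confirmed CPU saving:

let destination = CIRenderDestination(pixelBuffer: dstPixelBuffer!)
do {
    // startTask(toRender:to:) returns as soon as the work is queued.
    let task = try _ciContext.startTask(toRender: dstImage, to: destination)
    // Wait only when the result is actually needed:
    try task.waitUntilCompleted()
} catch {
    NSLog("CoreImage render failed: \(error)")
}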
Posted Last updated
.
Post not yet marked as solved
0 Replies
276 Views
I need to know the correct color conversion matrices for converting YCbCr422 and YCbCr420 10-bit video range sample buffers (BT.2020 color space) to RGB. The AVFoundation framework mentions AVVideoYCbCrMatrix_ITU_R_2020, which is a string constant, but I need the full matrix that can be used to perform the color conversion. I have this matrix for full-range BT.2020; I am not sure if it is correct, or what the correct way is to adapt it to video range:

let colorMatrixBT2020_fullRange = ColorConversion(matrix: matrix_float3x3(columns: (simd_float3(1.0, 1.0, 1.0),
                                                                                    simd_float3(0.000, -0.11156702/0.6780, 1.8814),
                                                                                    simd_float3(1.4746, -0.38737742/0.6780, 0.000))),
                                                  offset: vector_float3(0.0, -0.5, -0.5))
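Update: My working assumption for the video-range adaptation (please correct me if this is wrong): 10-bit video range maps Y' to codes [64, 940] and Cb/Cr to [64, 960] out of [0, 1023], so the samples must first be expanded to full range, which can be folded into the matrix. Reusing the ColorConversion type above and assuming samples normalized to [0, 1]:

// Expand video-range samples to full range by folding the scale factors
// into the full-range matrix columns (assumption, not verified against spec).
let yScale: Float = 1023.0 / (940.0 - 64.0)   // luma codes span [64, 940]
let cScale: Float = 1023.0 / (960.0 - 64.0)   // chroma codes span [64, 960]
let m = colorMatrixBT2020_fullRange.matrix
let videoRangeMatrix = matrix_float3x3(columns: (m.columns.0 * yScale,
                                                 m.columns.1 * cScale,
                                                 m.columns.2 * cScale))
let colorMatrixBT2020_videoRange = ColorConversion(matrix: videoRangeMatrix,
                                                   offset: vector_float3(-64.0/1023.0, -512.0/1023.0, -512.0/1023.0))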
Posted Last updated
.
Post not yet marked as solved
1 Replies
438 Views
I want to know the correct way to enable Apple Log on an AVCaptureDevice. My understanding is that one can query the device formats, check whether a format supports AVCaptureColorSpace_AppleLog, set that format as the active format, and then set activeColorSpace to this value. Is that the right way to enable Log?
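In code, what I have in mind is roughly the following (a sketch of my understanding, not yet verified on hardware; .appleLog requires iOS 17):

if let format = device.formats.first(where: { $0.supportedColorSpaces.contains(.appleLog) }) {
    do {
        try device.lockForConfiguration()
        device.activeFormat = format
        device.activeColorSpace = .appleLog
        device.unlockForConfiguration()
        // Note: AVCaptureSession.automaticallyConfiguresCaptureDeviceForWideColor
        // may need to be false, or the session can override activeColorSpace.
    } catch {
        NSLog("Could not enable Apple Log: \(error)")
    }
}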
Posted Last updated
.
Post not yet marked as solved
1 Replies
974 Views
This issue is driving me crazy. I load an NSAttributedString into a UITextView, and within moments after loading, the foregroundColor attribute of the text is erased (i.e., it becomes white) without me doing anything. Here is the code and the NSLog dump. How do I debug this, I wonder?

class ScriptEditingView: UITextView, UITextViewDelegate {
    var defaultFont = UIFont.preferredFont(forTextStyle: .body)
    var defaultTextColor = UIColor.white

    private func commonInit() {
        self.font = UIFont.preferredFont(forTextStyle: .body)
        self.allowsEditingTextAttributes = true
        self.textColor = defaultTextColor
        self.backgroundColor = UIColor.black
        self.isOpaque = true
        self.isEditable = true
        self.isSelectable = true
        self.dataDetectorTypes = []
        self.showsHorizontalScrollIndicator = false
    }
}

And then in my ViewController that contains the UITextView, I have this code:

textView = ScriptEditingView(frame: newTextViewRect, textContainer: nil)
textView.delegate = self
view.addSubview(textView)
textView.allowsEditingTextAttributes = true

let guide = view.safeAreaLayoutGuide
textView.translatesAutoresizingMaskIntoConstraints = false
NSLayoutConstraint.activate([
    textView.leadingAnchor.constraint(equalTo: guide.leadingAnchor),
    textView.trailingAnchor.constraint(equalTo: guide.trailingAnchor),
    textView.topAnchor.constraint(equalTo: view.topAnchor),
    textView.bottomAnchor.constraint(equalTo: view.bottomAnchor)
])

textView.attributedText = attributedString

NSLog("Attributed now")
dumpAttributesOfText()

DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
    NSLog("Attributes after 1 sec")
    self.dumpAttributesOfText()
}

And here is the code to dump attributes of the text:

private func dumpAttributesOfText() {
    textView.attributedText?.enumerateAttributes(in: NSRange(location: 0, length: textView.attributedText!.length), options: .longestEffectiveRangeNotRequired, using: { dictionary, range, stop in
        NSLog(" range \(range)")
        if let font = dictionary[.font] as? UIFont {
            NSLog("Font at range \(range) - \(font.fontName), \(font.pointSize)")
        }
        if let foregroundColor = dictionary[.foregroundColor] as? UIColor {
            NSLog("Foregroundcolor \(foregroundColor) at range \(range)")
        }
        if let underline = dictionary[.underlineStyle] as? Int {
            NSLog("Underline \(underline) at range \(range)")
        }
    })
}

The logs show this:

2022-07-02 13:16:02.841199+0400 MyApp[12054:922491] Attributed now
2022-07-02 13:16:02.841370+0400 MyApp[12054:922491]  range {0, 14}
2022-07-02 13:16:02.841486+0400 MyApp[12054:922491] Font at range {0, 14} - HelveticaNeue, 30.0
2022-07-02 13:16:02.841586+0400 MyApp[12054:922491] Foregroundcolor UIExtendedGrayColorSpace 1 1 at range {0, 14}
2022-07-02 13:16:02.841681+0400 MyApp[12054:922491]  range {14, 6}
2022-07-02 13:16:02.841770+0400 MyApp[12054:922491] Font at range {14, 6} - HelveticaNeue, 30.0
2022-07-02 13:16:02.841855+0400 MyApp[12054:922491] Foregroundcolor kCGColorSpaceModelRGB 0.96863 0.80784 0.27451 1 at range {14, 6}
2022-07-02 13:16:03.934816+0400 MyApp[12054:922491] Attributes after 1 sec
2022-07-02 13:16:03.935087+0400 MyApp[12054:922491]  range {0, 20}
2022-07-02 13:16:03.935183+0400 MyApp[12054:922491] Font at range {0, 20} - HelveticaNeue, 30.0
2022-07-02 13:16:03.935255+0400 MyApp[12054:922491] Foregroundcolor UIExtendedGrayColorSpace 1 1 at range {0, 20}
Posted Last updated
.
Post not yet marked as solved
2 Replies
445 Views
I noticed on iOS 17 that calling AppTransaction.shared fails with the following errors. Here is the code:

let result: VerificationResult<AppTransaction> = try await AppTransaction.shared
NSLog("Result \(result)")

Execution never reaches the NSLog statement; instead, the following errors are logged:

Error getting app transaction: Error Domain=ASDErrorDomain Code=500 "(null)" UserInfo={NSUnderlyingError=0x281815050 {Error Domain=AMSErrorDomain Code=301 "Invalid Status Code" UserInfo={NSLocalizedDescription=Invalid Status Code, AMSURL=https://mzstorekit-sb.itunes.apple.com/inApps/v1/receipts/createAppReceipt?REDACTED, AMSStatusCode=401, AMSServerPayload={ errorCode = 500317; }, NSLocalizedFailureReason=The response has an invalid status code}}}

Received error that does not have a corresponding StoreKit Error: Error Domain=AMSErrorDomain Code=301 "Invalid Status Code" UserInfo={NSLocalizedDescription=Invalid Status Code, AMSURL=https://mzstorekit-sb.itunes.apple.com/inApps/v1/receipts/createAppReceipt?REDACTED, AMSStatusCode=401, AMSServerPayload={ errorCode = 500317; }, NSLocalizedFailureReason=The response has an invalid status code}

Received error that does not have a corresponding StoreKit Error: Error Domain=ASDErrorDomain Code=500 "(null)" UserInfo={NSUnderlyingError=0x281815050 {Error Domain=AMSErrorDomain Code=301 "Invalid Status Code" UserInfo={NSLocalizedDescription=Invalid Status Code, AMSURL=https://mzstorekit-sb.itunes.apple.com/inApps/v1/receipts/createAppReceipt?REDACTED, AMSStatusCode=401, AMSServerPayload={ errorCode = 500317; }, NSLocalizedFailureReason=The response has an invalid status code}}}
Posted Last updated
.
Post not yet marked as solved
1 Replies
1.5k Views
I see these errors when presenting a UIActivityViewController with a video file:

[ShareSheet] Failed to request default share mode for fileURL:file:///var/mobile/Containers/Data/Application/B0EB55D3-4BF1-430A-92D8-2231AFFD9499/Documents/IMG-0155.mov error:Error Domain=NSOSStatusErrorDomain Code=-10814 "(null)" UserInfo={_LSLine=1538, _LSFunction=runEvaluator}

I don't understand whether I am doing something wrong or what the error means. The share sheet shows anyway.
Posted Last updated
.
Post not yet marked as solved
1 Replies
496 Views
I am planning to convert a paid app to freemium. I would like existing paid users to remain unaffected in this process. In this question, I am focusing on volume purchase users (both existing and future). The info on the Apple developer website advises using the original StoreKit API if one needs to support volume purchase users:

"You may need to use the Original API for in-app purchase for the following features, if your app supports them: The Volume Purchase Program (VPP). For more information, see Device Management."

Does that mean: (a) I can't use StoreKit 2 to verify receipts of volume purchases made before the app went freemium (to get the original purchase version and date); or (b) the API cannot be used to make in-app volume purchases, and perhaps users will not be able to make volume purchases from the App Store; or (c) both?
Posted Last updated
.
Post not yet marked as solved
0 Replies
395 Views
I am looking to move from a paid app to freemium without upsetting my existing users. I saw the WWDC2022 session where new fields introduced in iOS 16 are used to extract the original application version with which the user purchased the app. While my app supports iOS 14 and above, I am willing to sacrifice iOS 14 and go iOS 15 and above, as StoreKit 2 requires iOS 15 at the minimum. The code below, however, is only valid for iOS 16. I need to know the best way out for iOS 15 devices if I am using StoreKit 2. If it is not possible in StoreKit 2, then how much work is involved with the original StoreKit API (because in that case I could handle iOS 14 as well)?
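For reference, the iOS 16-only code I am referring to is along these lines (reconstructed from memory of the WWDC2022 session; the freemium cutover version is a hypothetical placeholder):

// Requires an async throwing context; AppTransaction is iOS 16+.
if #available(iOS 16.0, *) {
    let shared = try await AppTransaction.shared
    if case .verified(let appTransaction) = shared {
        let freemiumCutoverVersion = "2.0" // hypothetical placeholder
        let original = appTransaction.originalAppVersion
        let isLegacyPaidUser = original.compare(freemiumCutoverVersion, options: .numeric) == .orderedAscending
        NSLog("Original version \(original); legacy paid user: \(isLegacyPaidUser)")
    }
}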
Posted Last updated
.
Post not yet marked as solved
0 Replies
467 Views
Dear AVKit Engineers,

I see a strange bug in AVKit. Even after the view controller hosting AVPlayerViewController is dismissed, I see CPU spiking to over 100%, caused by animation code that is still running after the AVPlayerViewController no longer exists ([AVMobileChromelessControlsViewController__animateSliderToTintState:duration:completionHandler:]). How does this code continue to run after the AVPlayerViewController is gone, and what can I do to fix it?
Posted Last updated
.