Post | Replies | Boosts | Views | Activity

StoreKit2 for Volume Purchases
I am planning to convert a paid app to freemium. I would like existing paid users to remain unaffected by this change. In this question I am focusing on Volume Purchase users (both existing and future). The Apple Developer website advises using the original StoreKit API if you need to support Volume Purchase users: "You may need to use the Original API for in-app purchase for the following features, if your app supports them: The Volume Purchase Program (VPP). For more information, see Device Management." Does that mean (a) I can't use StoreKit 2 to verify receipts of volume purchases made before the app went freemium (to get the original purchase version and date), (b) the API can't be used to make in-app volume purchases, so users won't be able to make volume purchases from the App Store, or (c) both?
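For reference, here is a minimal StoreKit 2 sketch of the kind of check I have in mind (assuming AppTransaction on iOS 16+; whether it covers VPP purchases is exactly the question):

import StoreKit

// Hypothetical sketch: read the original purchase version/date with StoreKit 2's AppTransaction.
// Whether this works for Volume Purchase Program purchases is the open question above.
func originalPurchaseInfo() async throws -> (version: String, date: Date)? {
    let result = try await AppTransaction.shared
    guard case .verified(let appTransaction) = result else { return nil }
    return (appTransaction.originalAppVersion, appTransaction.originalPurchaseDate)
}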
1
0
647
Jul ’23
AVPlayerViewController causing CPU spikes after deallocation
Dear AVKit Engineers, I see a strange bug in AVKit. Even after the view controller hosting AVPlayerViewController is dismissed, I see CPU usage spiking to over 100%, caused by animation code that keeps running after the AVPlayerViewController no longer exists ([AVMobileChromelessControlsViewController__animateSliderToTintState:duration:completionHandler:]). How can this code continue to run after the AVPlayerViewController is gone, and what can I do to fix it?
0
0
575
Jul ’23
High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)

Here is how the CIContext was created:

_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])

I want to know if I am doing anything wrong and what could be done to lower CPU usage with Core Image.
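For comparison, here is a rough sketch of the Metal path I am measuring against (illustrative only; the class name, the texture cache setup, and the BGRA pixel format are assumptions, not my exact code):

import Metal
import MetalPerformanceShaders
import CoreVideo

// Rough sketch of a Metal resize path (assumed BGRA buffers, names illustrative):
// wrap source/destination CVPixelBuffers in Metal textures via a CVMetalTextureCache
// and let MPSImageBilinearScale resample from the source into the destination.
final class MetalScaler {
    private let device = MTLCreateSystemDefaultDevice()!
    private lazy var queue = device.makeCommandQueue()!
    private lazy var scaler = MPSImageBilinearScale(device: device)
    private var textureCache: CVMetalTextureCache!

    init() {
        CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)
    }

    private func texture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        var cvTexture: CVMetalTexture?
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                                  nil, .bgra8Unorm, width, height, 0, &cvTexture)
        return cvTexture.flatMap { CVMetalTextureGetTexture($0) }
    }

    func scale(_ src: CVPixelBuffer, into dst: CVPixelBuffer) {
        guard let srcTexture = texture(from: src),
              let dstTexture = texture(from: dst),
              let commandBuffer = queue.makeCommandBuffer() else { return }
        scaler.encode(commandBuffer: commandBuffer, sourceTexture: srcTexture, destinationTexture: dstTexture)
        commandBuffer.commit()
    }
}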
4
1
1.6k
Jul ’23
ShareSheet errors on console when presenting UIActivityViewController
I see these errors when presenting UIActivityViewController with a video file:

[ShareSheet] Failed to request default share mode for fileURL:file:///var/mobile/Containers/Data/Application/B0EB55D3-4BF1-430A-92D8-2231AFFD9499/Documents/IMG-0155.mov error:Error Domain=NSOSStatusErrorDomain Code=-10814 "(null)" UserInfo={_LSLine=1538, _LSFunction=runEvaluator}

I don't understand whether I am doing something wrong or what the error means. The share sheet is shown anyway.
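For context, this is roughly how I present it (simplified sketch; the function and variable names are placeholders, not my exact code):

import UIKit

// Simplified sketch (placeholder names): the only activity item is the video file URL.
func presentShareSheet(for videoURL: URL, from presenter: UIViewController) {
    let activityVC = UIActivityViewController(activityItems: [videoURL], applicationActivities: nil)
    activityVC.popoverPresentationController?.sourceView = presenter.view
    presenter.present(activityVC, animated: true)
}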
5
3
2k
Apr ’23
What exactly does CVBufferSetAttachment do?
I understand that CVBufferSetAttachment simply attaches metadata to the buffer in a dictionary. But I see no errors when attaching metadata that is contradictory. For instance, for sample buffers received from the camera in HDR mode, which are in YUV422 10-bit biplanar format, both of the following succeed:

CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_2020, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_2100_HLG, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_2020, .shouldPropagate)

Or

CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferColorPrimariesKey, kCVImageBufferColorPrimaries_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferTransferFunctionKey, kCVImageBufferTransferFunction_ITU_R_709_2, .shouldPropagate)
CVBufferSetAttachment(testPixelBuffer!, kCVImageBufferYCbCrMatrixKey, kCVImageBufferYCbCrMatrix_ITU_R_709_2, .shouldPropagate)

So one could set the color primaries and transfer function to BT.709 for sample buffers that are actually 10-bit HDR. I see no errors when either sample buffer is appended to AVAssetWriter. I am wondering how attachments actually work and how AVFoundation resolves the contradictions.
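For reference, here is an illustrative way to read the attachments back for inspection (the helper name is made up, not from my project):

import CoreVideo

// Illustrative helper: read back the color-related attachments currently set on a buffer.
func logColorAttachments(of pixelBuffer: CVPixelBuffer) {
    let keys: [CFString] = [kCVImageBufferColorPrimariesKey,
                            kCVImageBufferTransferFunctionKey,
                            kCVImageBufferYCbCrMatrixKey]
    for key in keys {
        let value = CVBufferGetAttachment(pixelBuffer, key, nil)?.takeUnretainedValue()
        print("\(key): \(String(describing: value))")
    }
}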
1
0
472
Mar ’23
CVPixelBuffer HDR10 to BT.709 conversion using AVAssetWriter
I am getting CMSampleBuffers in kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange format from the camera. The pixel buffers are 10-bit HDR. I need to record using the ProRes 422 codec but in a non-HDR format. I am not sure what a reliable way of doing this is, so I am reaching out here. What I did is simply set the AVAssetWriter compression dictionary as follows:

compressionSettings[AVVideoTransferFunctionKey] = AVVideoTransferFunction_ITU_R_709_2
compressionSettings[AVVideoColorPrimariesKey] = AVVideoColorPrimaries_ITU_R_709_2
compressionSettings[AVVideoYCbCrMatrixKey] = AVVideoYCbCrMatrix_ITU_R_709_2

It works, and the resulting recording shows the color space as HD 1-1-1 with the Apple ProRes codec. But I am not sure whether AVAssetWriter has actually performed a color space conversion from HDR10 to BT.709 or has simply clipped the out-of-range colors. I need a definitive way to achieve this. I see Apple's native Camera app doing this, but I am not sure how.
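To make the intent concrete, here is a sketch of writer input settings with the BT.709 tags nested under AVVideoColorPropertiesKey (dimensions are placeholders and this is not my exact code):

import AVFoundation

// Sketch (placeholder dimensions): ProRes 422 output with BT.709 color properties requested.
// Whether the writer converts the HDR pixels or merely tags the output is the open question.
let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.proRes422,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_709_2,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_ITU_R_709_2,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_709_2
    ]
]
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)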
0
0
548
Mar ’23
AVAssetWriter sample rate AV drift
I have an AVAssetWriter, and I validate the audio compression settings dictionary using the canApply(outputSettings: audioCompressionSettings, forMediaType: .audio) API. One of the fields in the compression settings is the audio sample rate, set via AVSampleRateKey. My question: if the sample rate I set with this key differs from the sample rate of the audio sample buffers that are appended, can this cause the audio to drift away from the video? Is setting an arbitrary sample rate in the asset writer settings not recommended?
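To make the question concrete, here is an illustrative version of the settings and the check (the values and the writer variable are placeholders, not my exact code):

import AVFoundation

// Illustrative settings (placeholder values): the question is what happens when
// AVSampleRateKey differs from the native sample rate of the appended buffers.
let audioCompressionSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,      // requested rate, possibly different from the buffers' native 48 kHz
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRateKey: 128_000
]
// assetWriter is assumed to be an existing AVAssetWriter
let applies = assetWriter.canApply(outputSettings: audioCompressionSettings, forMediaType: .audio)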
1
0
811
Dec ’22
iPadPro M2 ProRes unavailable
I have the following code to determine ProRes and HDR support on iOS devices.

extension AVCaptureDevice.Format {
    var supports10bitHDR: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
    }

    var supportsProRes422: Bool {
        let mediaType = CMFormatDescriptionGetMediaType(formatDescription)
        let mediaSubtype = CMFormatDescriptionGetMediaSubType(formatDescription)
        return mediaType == kCMMediaType_Video && mediaSubtype == kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange
    }
}

On iPad Pro M2, supportsProRes422 returns false for all device formats (for the wide-angle camera). Is it a bug or intentional?
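For completeness, a small usage sketch of the helpers above (the camera position is just an example):

import AVFoundation

// Usage sketch: count formats of the built-in wide-angle camera that report ProRes 422
// support via the helper above.
if let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
    let proResFormats = device.formats.filter { $0.supportsProRes422 }
    print("ProRes422-capable formats: \(proResFormats.count)")
}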
1
0
966
Dec ’22
'User Assigned Device Name' in XCode Capabilities list
I got approval for the User Assigned Device Name entitlement, but now I am stuck on how to add this capability in Xcode. I successfully updated my App ID in the membership portal to reflect this permission, but I can't find anything in Xcode to import it. I opened Editor -> Add Capability, but there is no option for User Assigned Device Name in the list! User Assigned Device Name
17
1
5.9k
Nov ’22
AVAssetWriter audio settings failure with compression settings
I have the following audio compression settings, which fail with AVAssetWriter (mov container, HEVC video codec, kAudioFormatMPEG4AAC format ID):

["AVSampleRateKey": 48000,
 "AVFormatIDKey": 1633772320,
 "AVNumberOfChannelsKey": 1,
 "AVEncoderBitRatePerChannelKey": 128000,
 "AVChannelLayoutKey": <02006500 00000000 00000000 00000000 00000000 00000000 00000000 00000000>]

Here is the line of code that fails:

if _assetWriter?.canApply(outputSettings: audioSettings!, forMediaType: AVMediaType.audio) ?? false {
} else {
    /* Failure */
}

I want to understand what is wrong. I cannot reproduce it on my end (it is only reproducible on a user's device with a particular microphone). Is it mandatory to provide a value for AVChannelLayoutKey in the dictionary when using kAudioFormatMPEG4AAC? That could be the culprit.
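If it helps, here is an illustrative way to build the same dictionary, deriving the mono AVChannelLayoutKey data from an AudioChannelLayout rather than raw bytes (sketch only, not my shipping code):

import AVFoundation

// Sketch (not my shipping code): build the mono channel layout data explicitly
// instead of passing raw bytes for AVChannelLayoutKey.
var channelLayout = AudioChannelLayout()
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Mono
let channelLayoutData = Data(bytes: &channelLayout, count: MemoryLayout<AudioChannelLayout>.size)

let audioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 48_000,
    AVNumberOfChannelsKey: 1,
    AVEncoderBitRatePerChannelKey: 128_000,
    AVChannelLayoutKey: channelLayoutData
]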
1
0
983
Oct ’22
iOS 16.1 CDPurgeableResultCache logs
I keep getting repeated logs on iOS 16.1:

[client] 114 CDPurgeableResultCache _recentPurgeableTotals no result for /private/var/wireless/baseband_data, setting to zero

These logs flood the console, making it difficult to find the useful logs coming from my own code. What can be done to disable them?
0
3
1.4k
Oct ’22
iOS 16.1(b5) - AVMultiCamSession emitting silent audio frames
This seems like a new bug in iOS 16.1 (b5): AVMultiCamSession outputs silent audio frames when both the back and front mics have been added to it. This issue is not seen on iOS 16.0.3 or earlier. I can't reproduce the issue with the AVMultiCamPIP sample code, so I believe some AVAudioSession or AVMultiCamSession configuration in my code is causing it. Setting captureSession.usesApplicationAudioSession = true also works around the issue, but then I do not get audio samples from both microphones. Here is the code:

public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if let videoDataOutput = output as? AVCaptureVideoDataOutput {
        processVideoSampleBuffer(sampleBuffer, fromOutput: videoDataOutput)
    } else if let audioDataOutput = output as? AVCaptureAudioDataOutput {
        processAudioSampleBuffer(sampleBuffer, fromOutput: audioDataOutput)
    }
}

private var lastDumpTime: TimeInterval?

private func processAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer, fromOutput audioDataOutput: AVCaptureAudioDataOutput) {
    if lastDumpTime == nil {
        lastDumpTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    }
    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer).seconds
    if time - lastDumpTime! >= 1.0 {
        dumpAudioSampleBuffer(sampleBuffer)
        lastDumpTime = time
    }
}

private func dumpAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
    NSLog("Dumping audio sample buffer")
    var audioBufferList = AudioBufferList(mNumberBuffers: 1,
                                          mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
    var buffer: CMBlockBuffer? = nil
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer,
                                                            bufferListSizeNeededOut: nil,
                                                            bufferListOut: &audioBufferList,
                                                            bufferListSize: MemoryLayout.size(ofValue: audioBufferList),
                                                            blockBufferAllocator: nil,
                                                            blockBufferMemoryAllocator: nil,
                                                            flags: UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
                                                            blockBufferOut: &buffer)
    // Create an UnsafeBufferPointer over the variable-length array starting at audioBufferList.mBuffers
    withUnsafePointer(to: &audioBufferList.mBuffers) { ptr in
        let buffers = UnsafeBufferPointer<AudioBuffer>(start: ptr, count: Int(audioBufferList.mNumberBuffers))
        for buf in buffers {
            // Reinterpret the buffer's data pointer as Int16 samples
            let numSamples = Int(buf.mDataByteSize) / MemoryLayout<Int16>.stride
            let samples = buf.mData!.bindMemory(to: Int16.self, capacity: numSamples)
            for i in 0..<numSamples {
                NSLog("Sample \(samples[i])")
            }
        }
    }
}

And here is the output: Dump Audio Samples
2
0
717
Oct ’22