Hi all,

I'm trying to run a temporal neural net (essentially an LSTM with a convolutional featurizer) on iOS. It was originally trained in PyTorch and then converted to Core ML via ONNX. It needs to run sequentially on video frames (i.e. it cannot be parallelized).

In my Xcode unit tests, I always get the same run time (~0.06 s, or ~17 FPS, on an iPhone 11). When I actually run the app, however, I only achieve that 17 FPS some of the time on an iPhone 11 - the other times, the run time climbs to roughly ~0.1 s (~10 FPS).

Firstly, I'm a bit surprised the model runs so slowly (17 FPS) in the best case, as I've tried many large off-the-shelf CNNs that can run at well over 30 FPS on an iPhone 11. What's more concerning is that the run-time performance is inconsistent, and seems to depend on OTHER apps that I've backgrounded. If I force quit all other open apps, I can guarantee the unit-test performance of ~17 FPS every single time I run the app!

My only guess is that my model is not running on the Apple Neural Engine and is instead running on the CPU... otherwise, why would run-time performance depend on what other apps I have backgrounded?

In any case, any help or suggestions on my architecture would be greatly appreciated! I'm attaching a link to my 16-bit quantized model here.

Thank you!
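For reference, one way to probe whether a CPU fallback is happening is to load the model with an explicit compute-unit preference and time a prediction under each setting. Below is a minimal Swift sketch, assuming a compiled model URL and an MLFeatureProvider input (both placeholders for whatever the app actually uses):

```swift
import Foundation
import CoreML

// Minimal sketch: load the model with an explicit compute-unit preference and
// time a single prediction. `compiledModelURL` and `input` are placeholders.
func timePrediction(compiledModelURL: URL, input: MLFeatureProvider) throws -> TimeInterval {
    let config = MLModelConfiguration()
    // .all lets Core ML schedule work on the CPU, GPU, or Neural Engine.
    // Comparing timings against .cpuOnly (and .cpuAndGPU) can hint at where
    // the model is actually executing.
    config.computeUnits = .all
    let model = try MLModel(contentsOf: compiledModelURL, configuration: config)

    let start = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    return CFAbsoluteTimeGetCurrent() - start
}
```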
I'm looking to set up offer codes for my in-app purchases.
The support documentation here - https://help.apple.com/app-store-connect/#/dev6a098e4b1 - states the following:
"Each offer code can be redeemed only once per customer, per offer."
So does this mean:
(a) multiple customers can redeem the same code, albeit only once each; or
(b) each code can only be redeemed by one customer?
Also, is there a way to track analytics for each code (e.g. number of times redeemed, number of times those redemptions converted to paid, etc.)?
I am writing an MP4 file from a video recording.
I'd like to understand how large this file is getting during the recording itself so that I can warn the user if the file is close to exceeding the amount of available space on the device.
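As a point of reference for the "available space" side of this, here is a minimal sketch of how the remaining capacity can be queried, assuming the recording lives on the same volume as the app's Documents directory (the directory choice is just an illustration):

```swift
import Foundation

// Minimal sketch: query how many bytes iOS is willing to give an "important"
// file on the volume containing the app's Documents directory. The directory
// choice is illustrative; use the volume the recording is actually written to.
func availableCapacityForRecording() -> Int64? {
    guard let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                                      in: .userDomainMask).first else {
        return nil
    }
    let values = try? documentsURL.resourceValues(
        forKeys: [.volumeAvailableCapacityForImportantUsageKey])
    return values?.volumeAvailableCapacityForImportantUsage
}
```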
Is there a way to do this prior to finishing writing the file? I understand that AVCaptureFileOutput has a maxRecordedFileSize property (which would obviate the need for me to estimate the file size in the first place), but I have to use AVAssetWriter in this case since I need access to the individual frames' sample buffers during the recording for some real-time image processing.
One attempt I've made is to call CMSampleBufferGetTotalSampleSize() on each audio and video sample buffer that is successfully appended to its corresponding AVAssetWriterInput, and add these values up. However, the resulting total is about 10x smaller than what I get if I check FileManager.default.fileSize(atPath:) after the file has finished writing.
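For clarity, here is a rough sketch of that running-total approach. The class and method names are placeholders, and note that the per-buffer sizes reported this way measure the buffers handed to the writer, not the bytes the writer ultimately puts in the MP4 container:

```swift
import AVFoundation

// Rough sketch of the running-total approach described above. `estimatedBytes`
// accumulates CMSampleBufferGetTotalSampleSize for every buffer that is
// successfully appended to a writer input.
final class RecordingSizeEstimator {
    private(set) var estimatedBytes: Int = 0

    /// Appends the buffer to the given writer input and, on success, adds its
    /// reported total sample size to the running estimate.
    func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) -> Bool {
        guard input.isReadyForMoreMediaData, input.append(sampleBuffer) else {
            return false
        }
        estimatedBytes += CMSampleBufferGetTotalSampleSize(sampleBuffer)
        return true
    }
}
```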
Are there any other options here?