I have a support incident open with Apple and have also filed a bug report. They had me try VNVideoProcessor, but the issue persists; instead of throwing an error, it simply skips the frame and never invokes the request. The support engineer says he can only reproduce the issue in the Simulator, not on a physical device, and I am waiting on further feedback to clarify this.

I am processing videos by extracting each frame, so I am dealing with many frames. Some frames process, but most do not, and which frames succeed or fail appears to be consistent from run to run. Prior to iOS/iPadOS 14.5 we had no issues at all.

My fear is that this will not be fixed anytime soon, since the problem appears to live in the OS itself. Because Apple bakes these frameworks into the OS, issues like this stay unsolvable until a future OS release, and even then users need to be on that newer OS.

The support engineer's suggestion, if VNVideoProcessor did not work, was to file a bug report and go a layer lower: work with Core ML directly, bypassing the Vision API. If it comes to that, I might move off Core ML entirely and work with OpenCV.
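For reference, this is roughly the VNVideoProcessor setup I tried. `MyModel` (a stand-in for my Xcode-generated model class) and the cadence value are placeholders, not my exact configuration:

```swift
import CoreML
import CoreMedia
import Vision

// Rough sketch of the VNVideoProcessor path the support engineer suggested.
// `MyModel` is a placeholder for the Xcode-generated Core ML model class.
func analyze(videoAt url: URL, duration: CMTime) throws {
    let coreMLModel = try MyModel(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // On the affected frames this handler is never invoked;
        // the processor silently skips them instead of erroring out.
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print(top.identifier, top.confidence)
    }

    let processor = VNVideoProcessor(url: url)
    let options = VNVideoProcessor.RequestProcessingOptions()
    options.cadence = VNVideoProcessor.FrameRateCadence(30) // run on every frame at 30 fps
    try processor.addRequest(request, processingOptions: options)

    // Blocks until the requested time range has been analyzed.
    try processor.analyze(CMTimeRange(start: .zero, duration: duration))
}
```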
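And here is a minimal sketch of the "go a layer lower" approach, calling Core ML directly with Vision out of the loop. The feature names `image` and `classLabel` are assumptions based on a typical classifier; the real names are whatever the model declares:

```swift
import CoreML
import CoreVideo

// Minimal sketch of calling Core ML directly, bypassing Vision.
// The feature names "image" and "classLabel" are assumptions; the actual
// names are listed in model.modelDescription.inputDescriptionsByName.
func classify(frame pixelBuffer: CVPixelBuffer, using model: MLModel) throws -> String? {
    // Without Vision in the loop, the buffer must already match the model's
    // expected pixel format and dimensions (no automatic scaling or cropping).
    let input = try MLDictionaryFeatureProvider(
        dictionary: ["image": MLFeatureValue(pixelBuffer: pixelBuffer)]
    )
    let output = try model.prediction(from: input)
    return output.featureValue(for: "classLabel")?.stringValue
}
```

If the same frames still fail at this level, that would point at Core ML itself rather than Vision, which is what would push me toward the OpenCV route.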