Hi,
I have an application that creates spherical panoramas using ARKit. We want to improve the quality of the stitching by adding depth data (https://developer.apple.com/documentation/avfoundation/avdepthdata) per photo; the app takes 40 photos on an iPhone X to cover a full sphere.
The problem is that ARKit does not expose depth data directly, so we would also need AVFoundation. However, ARKit's ARSession owns the camera while it is running, so we can't run our own AVFoundation capture session at the same time. Is there a workaround?
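For reference, this is roughly the AVFoundation depth-capture setup we would want to run alongside ARKit (a minimal sketch, not our exact code; error handling and the photo-capture delegate are omitted, and the rear dual camera is assumed as the depth source):

```swift
import AVFoundation

// Sketch: configure an AVCaptureSession for depth data delivery.
// This is the session that conflicts with ARKit, since ARSession
// already holds the camera while world tracking is running.
func makeDepthCaptureSession() throws -> (AVCaptureSession, AVCapturePhotoOutput) {
    let session = AVCaptureSession()
    session.beginConfiguration()
    // Depth delivery requires the .photo preset.
    session.sessionPreset = .photo

    // Depth data needs a depth-capable device (dual camera on the
    // iPhone X rear, or the TrueDepth camera on the front).
    guard let device = AVCaptureDevice.default(.builtInDualCamera,
                                               for: .video,
                                               position: .back) else {
        throw NSError(domain: "DepthCapture", code: -1, userInfo: nil)
    }

    let input = try AVCaptureDeviceInput(device: device)
    guard session.canAddInput(input) else {
        throw NSError(domain: "DepthCapture", code: -2, userInfo: nil)
    }
    session.addInput(input)

    let photoOutput = AVCapturePhotoOutput()
    guard session.canAddOutput(photoOutput) else {
        throw NSError(domain: "DepthCapture", code: -3, userInfo: nil)
    }
    session.addOutput(photoOutput)
    // Must be enabled after the output is attached to the session.
    photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported

    session.commitConfiguration()
    return (session, photoOutput)
}
```

Each AVCapturePhoto delivered by this output would then carry an `AVDepthData` in its `depthData` property; the blocker is only that this session can't start while ARKit is tracking.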