ARKit vs AVFoundation for high-res depth capture

Hi there!

From the documentation and sample code (https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera), my understanding is that AVFoundation provides more manual control, as well as roughly 2x higher-resolution depth images, than ARKit.
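For context, here's roughly the AVFoundation setup I have in mind, a minimal sketch based on that sample (I haven't verified every detail; the delegate and startRunning boilerplate are omitted):

```swift
import AVFoundation

let session = AVCaptureSession()
session.beginConfiguration()

// The dedicated LiDAR depth camera device (available since iOS 15.4).
guard let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                           for: .video,
                                           position: .back),
      let input = try? AVCaptureDeviceInput(device: device) else {
    fatalError("LiDAR depth camera unavailable on this device")
}
session.addInput(input)

// Deliver depth frames alongside the video feed.
let depthOutput = AVCaptureDepthDataOutput()
depthOutput.isFilteringEnabled = true // smoothed depth; set false for raw
session.addOutput(depthOutput)

session.commitConfiguration()

// Pick the highest-resolution Float16 depth format the active format supports.
if let format = device.activeFormat.supportedDepthDataFormats
    .filter({ CMFormatDescriptionGetMediaSubType($0.formatDescription) == kCVPixelFormatType_DepthFloat16 })
    .max(by: { CMVideoFormatDescriptionGetDimensions($0.formatDescription).width <
               CMVideoFormatDescriptionGetDimensions($1.formatDescription).width }) {
    do {
        try device.lockForConfiguration()
        device.activeDepthDataFormat = format
        device.unlockForConfiguration()
    } catch {
        print("Could not lock device for configuration: \(error)")
    }
}
```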

However, after reading https://developer.apple.com/augmented-reality/arkit/ as well as the WWDC video (https://developer.apple.com/videos/play/wwdc2022/10126/), it looks like ARKit 6 now also supports 4K video capture (RGB) while scene understanding runs under the hood. Does anyone know whether depth image resolution is still a limitation of ARKit compared to AVFoundation?
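On the ARKit side, this is what I understand the ARKit 6 setup to look like (again just a sketch; the 256×192 depth size in the comment is what I've seen reported, not something I've confirmed on every device):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// Opt into 4K color capture where the device supports it (iOS 16+).
if let fourKFormat = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
    configuration.videoFormat = fourKFormat
}

// Request LiDAR scene depth on every frame.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    configuration.frameSemantics.insert(.sceneDepth)
}

let session = ARSession()
session.run(configuration)

// Later, per frame: the depth map is a CVPixelBuffer. From what I've read,
// it stays at 256×192 even when the color feed is 4K.
if let depthMap = session.currentFrame?.sceneDepth?.depthMap {
    print("depth:", CVPixelBufferGetWidth(depthMap), "×", CVPixelBufferGetHeight(depthMap))
}
```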

I'm trying to build a capture app that relies on high-quality depth images/LiDAR data. Which framework would you suggest, and are there any other considerations I should keep in mind?

Thank you!