I've been using the front-facing camera's depth data from the ARFrame during an ARKit session. On the iPhone XS/XR/XS Max, the resulting point clouds are 'wider' or more spread out than on the 11 Pro or the X, and they differ from the results produced by the standard AVFoundation depth data captured on the same device.
In reviewing the data, the depth values themselves seem correct; it's more that the intrinsics matrix and reference image size used to project them into 3D produce points that are 5-6% too spread out.
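For context, my unprojection looks roughly like the sketch below. The function name is my own, the scaling of the intrinsics down to the depth map's resolution is my assumption about the correct approach, and I'm assuming a DepthFloat32 buffer:

```swift
import ARKit

// Unproject the front camera's depth map into camera-space points.
// Sketch only: assumes the intrinsics (given relative to
// camera.imageResolution) must be scaled down to the depth map's size,
// and that the buffer is kCVPixelFormatType_DepthFloat32.
func pointCloud(from frame: ARFrame) -> [simd_float3] {
    guard let depthMap = frame.capturedDepthData?.depthDataMap else { return [] }
    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    let intrinsics = frame.camera.intrinsics           // relative to imageResolution
    let referenceSize = frame.camera.imageResolution   // the size in question
    let fx = intrinsics[0][0] * Float(width) / Float(referenceSize.width)
    let fy = intrinsics[1][1] * Float(height) / Float(referenceSize.height)
    let cx = intrinsics[2][0] * Float(width) / Float(referenceSize.width)
    let cy = intrinsics[2][1] * Float(height) / Float(referenceSize.height)

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)

    var points: [simd_float3] = []
    for v in 0..<height {
        let row = base.advanced(by: v * rowBytes).assumingMemoryBound(to: Float32.self)
        for u in 0..<width {
            let z = row[u]
            guard z.isFinite, z > 0 else { continue }
            // Standard pinhole unprojection (up to the usual axis conventions).
            let x = (Float(u) - cx) * z / fx
            let y = (Float(v) - cy) * z / fy
            points.append(simd_float3(x, y, z))
        }
    }
    return points
}
```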
Is there a known issue with the XS/XR/XS Max and these values in ARKit? The focal length in ARKit's intrinsics matrix appears to be the same as in the AV depth data's intrinsics matrix, but the ARKit reference image size is larger, by about 5.1%. Coincidence?
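For illustration, this is the kind of side-by-side check I've been doing (a hypothetical sketch; `avCalibration` would come from the AVDepthData's cameraCalibrationData captured in a separate AVCaptureSession on the same device):

```swift
import ARKit
import AVFoundation

// Compare ARKit's intrinsics/reference size against AVFoundation's.
// `avCalibration` is assumed to come from AVDepthData.cameraCalibrationData
// captured separately on the same device.
func logComparison(frame: ARFrame, avCalibration: AVCameraCalibrationData) {
    let arFx = frame.camera.intrinsics[0][0]
    let arSize = frame.camera.imageResolution
    let avFx = avCalibration.intrinsicMatrix[0][0]
    let avSize = avCalibration.intrinsicMatrixReferenceDimensions

    print("focal length (fx): ARKit \(arFx) vs AV \(avFx)")
    print("reference width:   ARKit \(arSize.width) vs AV \(avSize.width)")
    print("width ratio: \(arSize.width / avSize.width)")  // ~1.051 on XS/XR/XS Max?
}
```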
I think that scaling the focal lengths by the ratio of the reference image sizes would come close to correcting the issue, BUT this is a 'hack', and it implies that the OS is not providing correct values for us to trust/use. If it is a bug and it gets corrected, any such 'hack' would then overcompensate and distort the data.
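Concretely, the hack I'm contemplating would look something like this (a sketch only, assuming the AV reference dimensions are the trustworthy ones):

```swift
import simd
import CoreGraphics

// Tentative workaround: scale the ARKit focal lengths up by the ratio of the
// ARKit reference width to the AV reference width (~1.051 on the affected
// devices), so the effective field of view matches the AV-based result.
func correctedIntrinsics(arIntrinsics: simd_float3x3,
                         arReferenceSize: CGSize,
                         avReferenceSize: CGSize) -> simd_float3x3 {
    let scale = Float(arReferenceSize.width / avReferenceSize.width)
    var k = arIntrinsics
    k[0][0] *= scale  // fx
    k[1][1] *= scale  // fy
    // If the OS values are later corrected, this would overcompensate.
    return k
}
```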
Can anyone help?
Thanks in advance,
Peter Myerscough-Jackopson
P.S. (I think there may be a smaller issue with the X as well, but I'm not confident enough in my analysis, and the number of devices I have available isn't large enough to say so with any certainty.)