Hi everyone,
I am working on a 3D reconstruction project.
Recently I have been able to retrieve the intrinsics from the two cameras on the back of my iPhone.
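For context, this is roughly how I read the intrinsics, via the per-frame sample buffer attachment (a simplified sketch, not my exact code; videoOutput and sampleBuffer are stand-ins for my actual AVCaptureVideoDataOutput and its delegate callback argument):

import AVFoundation
import simd

// Ask for per-frame intrinsics on the output's video connection.
if let connection = videoOutput.connection(with: .video),
   connection.isCameraIntrinsicMatrixDeliverySupported {
    connection.isCameraIntrinsicMatrixDeliveryEnabled = true
}

// Then, inside captureOutput(_:didOutput:from:):
if let data = CMGetAttachment(sampleBuffer,
                              key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                              attachmentModeOut: nil) as? Data {
    var intrinsics = matrix_float3x3()
    _ = withUnsafeMutableBytes(of: &intrinsics) { data.copyBytes(to: $0) }
    // intrinsics holds fx, fy (focal lengths) and cx, cy (principal point) in pixels.
}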
One constraint is that I want the app to run even when there is no LiDAR, as long as the device has at least two back cameras. If a LiDAR is present, that is something I plan to take advantage of later in the project.
I am using an AVCaptureSession with two AVCaptureDevices:
builtInWideAngleCamera
builtInUltraWideCamera
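And this is roughly how I query the extrinsics between the two devices (again a trimmed sketch):

import AVFoundation

// Discover the two back cameras (no LiDAR required).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInUltraWideCamera],
    mediaType: .video,
    position: .back)

guard let wide = discovery.devices.first(where: { $0.deviceType == .builtInWideAngleCamera }),
      let ultraWide = discovery.devices.first(where: { $0.deviceType == .builtInUltraWideCamera })
else { fatalError("Both back cameras are required") }

// Returns Data wrapping a matrix_float4x3 of the form [R | t],
// or nil when AVFoundation cannot relate the two devices.
let extrinsicsData = AVCaptureDevice.extrinsicMatrix(from: ultraWide, to: wide)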
The intrinsic matrices seem to be correct. However, when I retrieve the extrinsics, e.g., builtInWideAngleCamera w.r.t. builtInUltraWideCamera, the matrix I get looks like this:
Extrinsic Matrix (Ultra-Wide to Wide):
[0.9999968, 0.0008149305, -0.0023960583, 0.0]
[-0.0008256607, 0.9999896, -0.0044807075, 0.0]
[0.002392382, 0.0044826716, 0.99998707, 0.0]
[-14.277955, -8.135408e-10, -0.3359985, 0.0]
The extrinsic matrix should have the form [R | t]. The rotation part looks correct, but the translation vector is ALL ZEROS, which would imply that the two cameras are physically co-located. On top of that, the last element is not 1, as I would expect for homogeneous coordinates.
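In case the flaw is in how I unpack the matrix, this is essentially what I do (a reconstructed sketch; data is the Data returned by extrinsicMatrix(from:to:) above):

import Foundation
import simd

// My current reading (possibly the flaw): copy the returned Data into a
// 4x4 matrix, treat each printed line as a ROW of [R | t], and take t as
// the fourth element of the first three rows, which is where the zeros
// come from.
func translation(fromExtrinsics data: Data) -> simd_float3 {
    var m = matrix_float4x4()
    _ = withUnsafeMutableBytes(of: &m) { data.copyBytes(to: $0) }
    print("Extrinsic Matrix (Ultra-Wide to Wide):")
    for i in 0..<4 { print(m[i]) }  // roughly the four lines shown above
    // NOTE: simd subscripting is m[column][row]; if those printed lines are
    // in fact COLUMNS, t would live in m[3] (the fourth line, which is non-zero).
    return simd_float3(m[0][3], m[1][3], m[2][3])
}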
Has anyone encountered this 'issue' before?
Is there a flaw in my reasoning or something I might be missing?
Any comments are very much appreciated.