It seems pretty fundamental to want to know how the depth pixels align with the image pixels. Is curvsurf right that the raw LiDAR sensor array is only 24x24, and that all we get is the result of some black-box fusion between the image and the 24x24 depth map?
Regardless, it would be really helpful to know how the LiDAR pixels align to the image...
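For what it's worth, the usual way depth pixels get mapped onto image pixels is via the pinhole camera model: unproject each depth sample to a 3D point using the depth sensor's intrinsics, then transform into the color camera's frame and project with the color camera's intrinsics. A minimal sketch, with made-up intrinsics and extrinsics (the real calibration values would have to come from the device, e.g. its camera calibration data):

```python
import numpy as np

# Hypothetical calibration -- illustrative values only, not the
# device's real parameters.
K_depth = np.array([[20.0,  0.0, 12.0],   # fx, 0, cx for a 24x24 grid
                    [ 0.0, 20.0, 12.0],
                    [ 0.0,  0.0,  1.0]])
K_rgb = np.array([[1500.0,    0.0, 960.0],  # a ~1920-wide color image
                  [   0.0, 1500.0, 720.0],
                  [   0.0,    0.0,   1.0]])
R = np.eye(3)                    # depth->RGB rotation (assume axes aligned)
t = np.array([0.01, 0.0, 0.0])   # depth->RGB baseline in meters (assumed)

def depth_to_image(u, v, z):
    """Map one depth pixel (u, v) with range z (meters) to RGB pixel coords."""
    # Unproject to a 3D point in the depth camera's frame
    p_depth = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Transform into the RGB camera's frame, then project
    p_rgb = R @ p_depth + t
    uv = K_rgb @ p_rgb
    return uv[:2] / uv[2]

print(depth_to_image(12, 12, 1.0))  # center of the 24x24 grid
```

Note that the mapping depends on the measured depth itself (parallax from the baseline `t`), which is exactly why a fixed per-pixel correspondence table can't exist and why the vendor ships a fused, upsampled depth map instead.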