ARKit 4 iPad Pro (LiDAR) - transforming depth data into a point cloud

Hello,
I am new to iOS/iPadOS and Apple development.
I am interested in transforming and saving point cloud data without rendering it, and without using the shaders from the SceneDepthPointCloud example.

To that end, I have been able to successfully transform the depth map into local point cloud data, like so:

Code Block
// x and y are the coordinates in the depth buffer; widthResFactor and heightResFactor
// scale them up to the camera image resolution, which is the pixel space the intrinsics use
let cameraPoint = simd_float2(Float(x * widthResFactor), Float(y * heightResFactor))
// cameraIntrinsicsInversed is camera.intrinsics.inverse; depthValue is the depth sample in metres
let localPoint = cameraIntrinsicsInversed * simd_float3(cameraPoint, 1) * depthValue
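
In case the full context helps, this is the routine I'm using to build the local point cloud for one frame (localPoints(from:) is just my own helper name). I'm assuming the sceneDepth buffer is 32-bit float, which is what I see on my device, and I scale the depth coordinates up to camera.imageResolution since the intrinsics are expressed in that pixel space:

Code Block
import ARKit
import simd

// Unproject every pixel of the scene-depth map into camera-local 3D points.
// Assumes the depth buffer holds Float32 values in metres, which is what
// frame.sceneDepth gives me on the LiDAR iPad Pro.
func localPoints(from frame: ARFrame) -> [simd_float3] {
    guard let depthMap = frame.sceneDepth?.depthMap else { return [] }

    let depthWidth = CVPixelBufferGetWidth(depthMap)
    let depthHeight = CVPixelBufferGetHeight(depthMap)

    // The intrinsics are expressed in the camera image's pixel space,
    // so depth-map coordinates are scaled up to that resolution.
    let widthResFactor = Float(frame.camera.imageResolution.width) / Float(depthWidth)
    let heightResFactor = Float(frame.camera.imageResolution.height) / Float(depthHeight)
    let cameraIntrinsicsInversed = frame.camera.intrinsics.inverse

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return [] }

    var points: [simd_float3] = []
    points.reserveCapacity(depthWidth * depthHeight)

    for y in 0..<depthHeight {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<depthWidth {
            let depthValue = row[x] // metres along the camera's forward axis
            let cameraPoint = simd_float2(Float(x) * widthResFactor,
                                          Float(y) * heightResFactor)
            let localPoint = cameraIntrinsicsInversed * simd_float3(cameraPoint, 1) * depthValue
            points.append(localPoint)
        }
    }
    return points
}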


If I capture the data without moving the device, the point cloud looks reasonable.
However, I am not sure what the correct way is to transform it into world coordinates.

First I tried to follow the same implementation as the shader in the SceneDepthPointCloud example, but that didn't seem to work once I moved the device.

I also tried the following:

Code Block
var worldPoint = transform * simd_float4(localPoint, 1) // transform is taken from frame.camera.transform
worldPoint = worldPoint / worldPoint.w


I am still getting jumbled data when I move the device.
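
Putting the two steps together, this is the per-frame routine I'm testing at the moment (worldPoints is just my own helper name; the local points come from the unprojection above):

Code Block
import simd

// Transform one frame's camera-local points with that frame's camera.transform.
// This is the step that gives me jumbled results as soon as the device moves.
func worldPoints(localPoints: [simd_float3], cameraTransform: simd_float4x4) -> [simd_float3] {
    return localPoints.map { localPoint in
        var worldPoint = cameraTransform * simd_float4(localPoint, 1)
        worldPoint /= worldPoint.w // w should stay 1 for a rigid transform
        return simd_float3(worldPoint.x, worldPoint.y, worldPoint.z)
    }
}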

I would greatly appreciate any suggestions or tips.


I've been reading the source code of that Apple sample recently. I noticed there is a rotateToARCamera matrix that corrects the coordinate system, but I don't know the theory behind it. Maybe this is also the key to your problem? By the way, if you know what the rotateToARCamera matrix means, please let me know.
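
For what it's worth, the part of that matrix I can read directly from the sample is a flip of the Y and Z axes. Roughly this, reconstructed from memory and leaving out the extra rotation the sample applies for the interface orientation:

Code Block
import simd

// A 180° rotation about the X axis: it negates the Y and Z components of
// whatever point it multiplies. Why the sample needs this is the part I
// don't understand yet.
let flipYZ = simd_float4x4(
    simd_float4(1,  0,  0, 0),
    simd_float4(0, -1,  0, 0),
    simd_float4(0,  0, -1, 0),
    simd_float4(0,  0,  0, 1)
)

// If I read the sample right, it multiplies this into the camera-to-world
// matrix before transforming each local point, roughly:
// worldPoint = cameraToWorld * flipYZ * simd_float4(localPoint, 1)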