How do I create a point cloud from an RGB image and depth map obtained from the TrueDepth camera?

I have an application that captures an image along with a depth map and calibration data and exports them so that I can work with them in Python.

The depth map and calibration data are converted to Float32 and stored in a JSON file. The image is stored as a JPEG file.

The depth map shape is (480, 640), and the image shape is (3024, 4032, 3).

My goal is to create a point cloud from this data.

I'm new to working with data provided by Apple's TrueDepth camera and would like some clarity on what preprocessing steps I need to perform before creating the point cloud. Here they are below:

1) Since the 640x480 depth map is a scaled-down version of the 12MP image, does that mean I can scale down the intrinsics as well? That is, should I scale [fx, fy, cx, cy] by the scaling factor 640/4032 = 0.15873? (See the sketch after these questions.)

2) After scaling comes handling the distortion: should I use the lensDistortionLookupTable to undistort both the image and the depth map?

Are the above two steps correct, or am I missing something?
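
To make step 1 concrete, here is a minimal Python sketch of scaling the intrinsics from the 12MP reference dimensions down to the depth-map resolution. The file name, the JSON key name ("intrinsicMatrix"), and the matrix layout are assumptions about how the capture app exports AVCameraCalibrationData; adapt them to whatever your export actually writes.

```python
import json
import numpy as np

# Load the exported calibration data. The key name "intrinsicMatrix" is an
# assumption about how the capture app writes the JSON; adjust as needed.
with open("calibration.json") as f:
    calib = json.load(f)

# AVCameraCalibrationData.intrinsicMatrix is expressed at
# intrinsicMatrixReferenceDimensions (here the 12MP photo: 4032 x 3024).
# Depending on how the simd matrix was serialized, the 3x3 may come out
# transposed; fx/fy should sit on the diagonal and cx/cy in the last column.
K_ref = np.array(calib["intrinsicMatrix"], dtype=np.float32).reshape(3, 3)

ref_w, ref_h = 4032, 3024   # reference dimensions of the intrinsics
dep_w, dep_h = 640, 480     # depth map resolution

# Same 4:3 aspect ratio, so both factors are 640/4032 = 480/3024 = 0.15873.
sx, sy = dep_w / ref_w, dep_h / ref_h

fx, fy = K_ref[0, 0] * sx, K_ref[1, 1] * sy
cx, cy = K_ref[0, 2] * sx, K_ref[1, 2] * sy

print("scaled intrinsics:", fx, fy, cx, cy)
```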

Answered by DTS Engineer in 714724022

Hello,

I recommend that you take a look at the "Displaying a Point Cloud Using Scene Depth" sample project.

It doesn't use the TrueDepth camera, but it does demonstrate all of the steps required to combine depth map data with camera intrinsics to create a point cloud.
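
That sample is written in Swift/Metal, but the back-projection it performs can be reproduced offline. Here is a rough Python sketch, assuming the exported depth map holds metric depth in meters (convert from disparity first if not), that distortion has already been handled, and that the RGB image has been resized to the 640x480 depth resolution for per-point colors:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, rgb=None):
    """Back-project an (H, W) depth map into an (N, 3) point cloud.

    depth : (H, W) float array of depths in meters. If the exported map is
            disparity (1/m), convert it with depth = 1.0 / disparity first.
    fx, fy, cx, cy : pinhole intrinsics expressed at the depth-map resolution.
    rgb : optional (H, W, 3) image already resized to the depth resolution,
          used to attach a color to each point.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Standard pinhole back-projection: X = (u - cx) * Z / fx, etc.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy

    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Drop NaN/inf values and non-positive depths (holes in the depth map).
    valid = np.isfinite(points).all(axis=1) & (points[:, 2] > 0)
    points = points[valid]

    colors = rgb.reshape(-1, 3)[valid] if rgb is not None else None
    return points, colors
```

From there, the (N, 3) array can be handed to any point-cloud library or written out as a PLY file for viewing.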
