Exporting Point Cloud as 3D PLY Model

I have seen this question come up a few times here on the Apple Developer Forums (recently noted here), but I keep finding myself unclear on which technologies and steps are required to achieve this goal.

In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth sample project from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but I remain unsure how to do it.

From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset, append an MDLObject for each point, and finally end up with a model ready for export.

How would one go about converting each "point" to an MDLObject to append to the MDLAsset? Or am I going down the wrong path?
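For what it's worth, the furthest I've gotten is an untested sketch along these lines, which builds a single MDLMesh with point geometry rather than one MDLObject per point. Everything here (the `points` array, `outputURL`) is a placeholder, not code from the sample project:

Code Block
import ModelIO

// Untested sketch; `points` and `outputURL` are placeholders.
let points: [SIMD3<Float>] = [] // world-space positions gathered elsewhere
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory() + "pointcloud.ply")

let allocator = MDLMeshBufferDataAllocator()

// Vertex buffer of packed float3 positions.
let vertexData = points.withUnsafeBytes { Data($0) }
let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

// Describe a single position attribute.
let descriptor = MDLVertexDescriptor()
descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                              format: .float3,
                                              offset: 0,
                                              bufferIndex: 0)
descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)

// One index per vertex, drawn as points rather than triangles.
let indices = Array(0..<UInt32(points.count))
let indexData = indices.withUnsafeBytes { Data($0) }
let submesh = MDLSubmesh(indexBuffer: allocator.newBuffer(with: indexData, type: .index),
                         indexCount: points.count,
                         indexType: .uInt32,
                         geometryType: .points,
                         material: nil)

let mesh = MDLMesh(vertexBuffer: vertexBuffer,
                   vertexCount: points.count,
                   descriptor: descriptor,
                   submeshes: [submesh])

let asset = MDLAsset()
asset.add(mesh)
try? asset.export(to: outputURL)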

Accepted Reply

Okay, probably the easiest approach to exporting the point cloud to some 3D file is to make use of SceneKit.

The general steps would be as follows:
  1. Use Metal (as demonstrated in the point cloud sample project) to unproject points from the depth texture into world space.

  2. Store world space points in a MTLBuffer. (You could also store the sampled color for each point if you wanted to use that data in your model)

  3. When the command buffer has completed, copy the world space points from the buffer and append them to an array. Repeat with the next frame. (Consider limiting how large you allow this array to grow; otherwise you will eventually run out of memory.)

  4. When you are ready to write out your file (i.e. you have finished "scanning"), create an SCNScene.

  5. Iterate through the stored world space points and add an SCNNode with some geometry (e.g., an SCNSphere) to your SCNScene. (If you also stored a color, use it as the diffuse material parameter of your geometry.)

  6. Use write(to:options:delegate:progressHandler:) to write your point cloud model to some supported 3D file format, such as .usdz. (A sketch of steps 4–6 follows below.)
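Here is a rough sketch of steps 4–6, assuming you have already accumulated the world-space positions (and, optionally, matching colors) into arrays; the names here are placeholders, not code from the sample project:

Code Block
import SceneKit
import UIKit

// Sketch: `points` are accumulated world-space positions,
// `colors` the matching sampled colors; both are placeholders.
func exportPointCloud(points: [SIMD3<Float>], colors: [UIColor], to url: URL) {
    let scene = SCNScene()
    for (point, color) in zip(points, colors) {
        let sphere = SCNSphere(radius: 0.001)
        sphere.firstMaterial?.diffuse.contents = color
        let node = SCNNode(geometry: sphere)
        node.simdPosition = point
        scene.rootNode.addChildNode(node)
    }
    // The output format is inferred from the URL's file extension (e.g. .usdz).
    let ok = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
    print("Export succeeded: \(ok)")
}

Note that one SCNNode per point gets heavy quickly; for large clouds, a single SCNGeometry built from an SCNGeometrySource with a point-type SCNGeometryElement is the usual alternative.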

  • Hi, @gchiste! Thanks a lot for all your answers! Can you suggest how to create a point cloud from more than one TrueDepth frame? Do I understand correctly that I need to get the inverted transform matrix from the ARFrame to get the position of the camera in world space, and then multiply all my XYZ points by this matrix? Can I build a point cloud from all sides this way?
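For reference, a small sketch of the idea being asked about (hypothetical names throughout; note that ARCamera.transform already maps camera space to world space, so for points no inversion is needed):

Code Block
import ARKit

// Sketch: ARCamera.transform is the camera's pose in world space,
// i.e. a camera-to-world matrix, so camera-space points go to world
// space by multiplying with it directly (no inverse needed).
func worldSpacePoint(from cameraSpacePoint: SIMD3<Float>, in frame: ARFrame) -> SIMD3<Float> {
    let cameraToWorld = frame.camera.transform // simd_float4x4
    let world = cameraToWorld * SIMD4<Float>(cameraSpacePoint, 1)
    return SIMD3<Float>(world.x, world.y, world.z)
}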


Replies

Hi @HeoJin,

I certainly would not fancy myself an expert in Metal or in working with LiDAR/point clouds in any way, but the help I received in this thread is what got me headed in the right direction toward understanding how to work with the data being gathered and rendered by the LiDAR scanner and Metal.

My suggestion is to begin with the Visualizing a Point Cloud Using Scene Depth sample project that Apple provides, and have a look at the comments in this thread to gather an understanding of where the points are being saved. Namely, this code from @gchiste:

Code Block
commandBuffer.addCompletedHandler { [self] _ in
    print(particlesBuffer[9].position) // Prints the 10th particle's position
}


If you have a look in Renderer.swift in that referenced sample project, you will find that particlesBuffer is already a variable: a buffer containing an array of ParticleUniforms, which holds the position (or rather, the coordinate) of each point, its color values, its confidence, and an index.
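Paraphrasing from memory, the struct behind that buffer looks roughly like this; it's defined in the sample's ShaderTypes.h, so check your copy for the exact fields and layout:

Code Block
import simd

// Paraphrased sketch of the sample's ParticleUniforms; the exact
// fields and layout in ShaderTypes.h may differ.
struct ParticleUniforms {
    var position: simd_float3  // world-space coordinate of the point
    var color: simd_float3     // sampled RGB color
    var confidence: Float      // ARKit depth confidence
}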

What I ended up doing, per my comment to @JeffCloe, is to iterate over the particlesBuffer "array" using currentPointCount, which is another variable you will find in Renderer.swift. As an example:

Code Block
for i in 0..<currentPointCount {
    let point = particlesBuffer[i] // position, color, and confidence of each point
}

Doing that gives you access to each gathered point from the "scan" of the environment. That said, I still have a way to go in learning more about this topic myself, including improving efficiency, but exploring that particlesBuffer really helped me understand what's happening here.
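To make the export itself concrete, here is a rough sketch of an ASCII .ply writer built on that same loop. It assumes particlesBuffer and currentPointCount from Renderer.swift, and that the sample stores color components as 0–1 floats; fileURL is a placeholder:

Code Block
// Rough sketch of an ASCII .ply export over the loop above.
func exportPLY(to fileURL: URL) throws {
    var ply = """
    ply
    format ascii 1.0
    element vertex \(currentPointCount)
    property float x
    property float y
    property float z
    property uchar red
    property uchar green
    property uchar blue
    end_header

    """
    for i in 0..<currentPointCount {
        let point = particlesBuffer[i]
        // Assumes color components are 0–1 floats; .ply wants 0–255 integers.
        let r = UInt8(max(0, min(255, point.color.x * 255)))
        let g = UInt8(max(0, min(255, point.color.y * 255)))
        let b = UInt8(max(0, min(255, point.color.z * 255)))
        ply += "\(point.position.x) \(point.position.y) \(point.position.z) \(r) \(g) \(b)\n"
    }
    try ply.write(to: fileURL, atomically: true, encoding: .ascii)
}

The three uchar color properties are what most viewers use to colorize the cloud; leaving them out of the header produces the gray renders mentioned elsewhere in this thread.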
Amazing! You have saved me 😁
Hi, brandon.

I'm not really sure why your .ply export code keeps crashing. 😭

Do you have a GitHub repository for this?
Hello @brandonK212,
Could we apply a similar solution to this sample code? I recently posted several questions, including this one, regarding how to export similar data (X, Y, Z) from the available depthMap. Do you know of a viable option to achieve this?

@brandonK212,

Thanks for your explanation of exporting a point cloud in .ply format.
I tried the same, but it exports in grayscale.
Can you help me figure out how to generate a .ply model in actual color?
It looks like you are going down the right path.
Hi @brandonK212

Is it possible to share your code for exporting the .ply with texture?
Hi @aditiSwaroop, I made a repository based on this thread.

https://github.com/pjessesco/iPad-PLY-scanner


  • Hi @pjessesco, with the iPhone 12 and iPad LiDAR, how do I set the scan range? For example, I'd like to draw a rectangular box so that the point cloud is generated inside the box and nothing is generated outside it. Any help would be appreciated.
