In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth sample project from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this done (quite a few of the resulting exports are available on popular 3D modeling websites), but remain unsure how to do it.
From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset, append an MDLObject for each point, and end up with a model ready for export.
How would one go about converting each "point" to an MDLObject to append to the MDLAsset? Or am I going down the wrong path?
The general steps would be as follows:
Use Metal (as demonstrated in the point cloud sample project) to unproject points from the depth texture into world space.
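In the sample project this unprojection runs in a Metal shader, but the math itself is simple. Here is a CPU-side Swift sketch of it, assuming `cameraIntrinsicsInversed` and `localToWorld` come from the current ARFrame's camera, as they do in the sample:

```swift
import simd

// A minimal sketch of the unprojection the sample performs on the GPU:
// back-project a pixel through the inverse camera intrinsics, scale by the
// sampled depth, then transform the camera-space point into world space.
func unprojectToWorld(pixel: simd_float2,
                      depth: Float,
                      cameraIntrinsicsInversed: simd_float3x3,
                      localToWorld: simd_float4x4) -> simd_float3 {
    let localPoint = cameraIntrinsicsInversed * simd_float3(pixel.x, pixel.y, 1) * depth
    let worldPoint = localToWorld * simd_float4(localPoint.x, localPoint.y, localPoint.z, 1)
    return simd_float3(worldPoint.x, worldPoint.y, worldPoint.z) / worldPoint.w
}
```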
Store the world-space points in an MTLBuffer. (You could also store the sampled color for each point if you want to use that data in your model.)
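For illustration, such a buffer might look like the sketch below; `PointVertex` and the capacity are illustrative, not names from the sample project:

```swift
import Metal
import simd

// One stored point: a world-space position plus an optional sampled color.
struct PointVertex {
    var position: simd_float3
    var color: simd_float3
}

let device = MTLCreateSystemDefaultDevice()!

// Size the buffer for the maximum number of points a single frame can
// produce; 256 x 192 matches the LiDAR depth texture resolution.
let maxPointsPerFrame = 256 * 192
let pointBuffer = device.makeBuffer(
    length: MemoryLayout<PointVertex>.stride * maxPointsPerFrame,
    options: .storageModeShared)!
```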
When the command buffer has completed, copy the world-space points from the buffer and append them to an array, then repeat with the next frame. (Consider limiting how large you allow this array to grow; otherwise you will eventually run out of memory.)
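One way to do that copy is from the command buffer's completed handler. A sketch, assuming `commandBuffer`, `pointBuffer`, and a `pointCount` of valid points written this frame all come from your render loop:

```swift
// Points accumulated across frames, capped so memory use stays bounded.
var accumulatedPoints: [PointVertex] = []
let maxAccumulatedPoints = 500_000  // hypothetical cap; tune for your device

commandBuffer.addCompletedHandler { _ in
    // The GPU has finished, so the shared buffer is now safe to read.
    // Note the handler runs on an internal Metal queue; guard
    // `accumulatedPoints` if you also touch it from other threads.
    let points = pointBuffer.contents().bindMemory(to: PointVertex.self,
                                                   capacity: pointCount)
    let framePoints = UnsafeBufferPointer(start: points, count: pointCount)
    if accumulatedPoints.count + pointCount <= maxAccumulatedPoints {
        accumulatedPoints.append(contentsOf: framePoints)
    }
}
commandBuffer.commit()
```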
When you are ready to write out your file (i.e. you have finished "scanning"), create an SCNScene.
Iterate through the stored world-space points and add an SCNNode with some geometry (e.g., an SCNSphere) to your SCNScene. (If you also stored a color, use it as the diffuse material property of your geometry.)
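Putting those two steps together, a minimal sketch reusing the illustrative `PointVertex` array from above:

```swift
import SceneKit
import UIKit

// Build an exportable scene with one small sphere per accumulated point.
let scene = SCNScene()
for point in accumulatedPoints {
    // The radius here is an arbitrary value; tune it to your point density.
    let sphere = SCNSphere(radius: 0.001)
    sphere.firstMaterial?.diffuse.contents = UIColor(red: CGFloat(point.color.x),
                                                     green: CGFloat(point.color.y),
                                                     blue: CGFloat(point.color.z),
                                                     alpha: 1)
    let node = SCNNode(geometry: sphere)
    node.simdPosition = point.position
    scene.rootNode.addChildNode(node)
}
```

Be aware that one node per point gets expensive quickly; for large clouds, a single custom SCNGeometry built from an SCNGeometrySource and an SCNGeometryElement with the .point primitive type scales much better, at the cost of more setup.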
Use write(to:options:delegate:progressHandler:) to write your point cloud model to a supported 3D file format, such as .usdz.
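For example (the file name and destination directory here are arbitrary):

```swift
// SceneKit infers the .usdz format from the file extension.
let exportURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("pointCloud.usdz")
let success = scene.write(to: exportURL, options: nil,
                          delegate: nil, progressHandler: nil)
print("Export \(success ? "succeeded" : "failed"): \(exportURL)")
```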