Point Cloud Processing on iOS

Hi everyone, Swift developer here, working on an iPhone 12 app capturing dense point clouds.

I'm new to all of this ARKit/Metal stuff, but not to Swift/C/C++.
My goal is to denoise/decimate the cloud (probably voxel filtering) and then mesh the points (Poisson reconstruction?).

I've been told this has to be done in the cloud, but so far I'm getting great performance on device.

I compiled the VTK C++ library for arm64 after a few hair-pulling days, but now I wonder: should I wrap it for use in Swift, or can I use it directly alongside Metal?
Also, is there an alternative in Metal that I should use instead, since my needs may be simple enough?

I'm just seeing the tip of the iceberg, I guess, but these 3D libraries seem super powerful, and using them on an iOS device seems promising.

Cheers,

A
Hi Alex,

Thanks for sharing your experience. We believe the GPUs on our iOS devices are capable of some quite heavy workloads.

Please let us know how your development is going.

Hello Guys!

It's going great so far. This is my first time doing any GPU/graphics work since... 2011 and ncurses stuff 😅

I've released a simple app called PointCloudKit (capture, then display with an SCNView).

I've since compiled VTK for arm64, and I'm now rendering my ARKit point cloud capture (XYZ-RGB-D) using the provided renderer. It's OpenGL, but so far so good.

My knowledge is limited, so I'm learning VTK pipelines now, but I'm processing 2M-point clouds almost instantly with voxel filtering and can then display and navigate them in real time at 60 fps. It's quite impressive.
One issue I had is that I wanted to stay on the GPU, passing my MTLBuffer directly to VTK/OpenGL, but I haven't succeeded yet. For now I'm casting MTLBuffer.contents() in the C++ code, and it seems very fast so far (still in debug mode).
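
For what it's worth, that hand-off is roughly the following shape. This is a minimal sketch, not my actual code: the C++/VTK entry point is assumed to be exposed to Swift as a plain C function through the bridging header, and it's represented here as a closure so the snippet compiles on its own.

Code Block Swift
import Metal

/// Passes the CPU-visible memory of a shared MTLBuffer to a C/C++ filter without copying.
/// `filter` stands in for the (hypothetical) C function declared in the bridging header.
func runNativeFilter(on buffer: MTLBuffer,
                     pointCount: Int,
                     filter: (UnsafeMutableRawPointer, Int) -> Void) {
    // contents() returns the CPU address of the buffer's storage; this requires
    // .storageModeShared, which is the default on the iPhone's unified memory.
    let rawPoints = buffer.contents()
    // The C++ side reinterprets this memory as an array of interleaved point records
    // (position / color / confidence) and filters it in place.
    filter(rawPoints, pointCount)
}

Representing the C entry point as a closure keeps the sketch self-contained; in the real project it would be the C function imported through the bridging header.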

My real "end game boss" goal is to do all of this in Metal, in real time. That would be super interesting, and I'll explore it once I'm more comfortable and have more time.

I also reconstruct a mesh and apply colors. The results are great, and it takes about 10 seconds for a denoised 500k-point capture (which ends up at around 50k points after filtering).

Cheers
OK, I've ditched all non-Apple tech and I'm starting from scratch with SwiftUI and a pure Metal implementation. It's super fast, but implementing post-processing (or live processing) will now be challenging.

I have one question. Currently I move my data around using this custom struct containing the Metal buffer:

Code Block Swift
import MetalKit
import ModelIO
import SceneKit

// MetalBuffer and ParticleUniforms are defined elsewhere in the capture code
// (interleaved position / color / confidence records).
public struct PointCloudCapture {
  public let buffer: MetalBuffer<ParticleUniforms>
  public let count: Int

  public var stride: Int {
    buffer.stride
  }

  /// Describes one interleaved attribute of a ParticleUniforms record.
  public enum Component {
    case position
    case color
    case confidence

    public var format: MTLVertexFormat {
      switch self {
      case .position:
        return MTKMetalVertexFormatFromModelIO(.float3)
      case .color:
        return MTKMetalVertexFormatFromModelIO(.float3)
      case .confidence:
        return MTKMetalVertexFormatFromModelIO(.float)
      }
    }

    /// Byte offset of the attribute inside one record
    /// (assuming position and color are float3s padded to 16 bytes each).
    public var dataOffset: Int {
      switch self {
      case .position:
        return 0
      case .color:
        return MemoryLayout<Float>.size * 4
      case .confidence:
        return MemoryLayout<Float>.size * 8
      }
    }

    public var semantic: SCNGeometrySource.Semantic {
      switch self {
      case .position:
        return .vertex
      case .color:
        return .color
      case .confidence:
        return .confidence
      }
    }
  }
}

extension SCNGeometrySource.Semantic {
  // Represents the confidence value from the ARKit depth capture.
  public static let confidence = SCNGeometrySource.Semantic(rawValue: "confidence")
}


As you can see, I wanted to embed the confidence data as metadata of some sort (I use these later with SCNGeometrySource and SCNGeometryElement to render the point cloud in SceneKit).

I'm not sure how to use my custom confidence data with SCNGeometry(sources:elements:), though; I'd gladly take any pointers.

I would like some kind of overridable behaviour that decides, based on confidence, whether or not to render each point.
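
To make the idea concrete, here's the kind of thing I'm imagining; a rough CPU-side sketch, not code from my app. Since SceneKit has no built-in notion of a confidence semantic, the points would be filtered on the CPU whenever the threshold changes and the point geometry rebuilt from the survivors (CapturedPoint is a made-up, unpacked copy of one buffer record):

Code Block Swift
import SceneKit
import simd

/// A made-up, unpacked copy of one record from the capture buffer.
struct CapturedPoint {
    var position: SIMD3<Float>
    var color: SIMD3<Float>
    var confidence: Float
}

/// Rebuilds point geometry from only the points whose confidence meets the threshold.
func makeGeometry(from points: [CapturedPoint], minimumConfidence: Float) -> SCNGeometry {
    let kept = points.filter { $0.confidence >= minimumConfidence }

    let vertexSource = SCNGeometrySource(vertices: kept.map {
        SCNVector3($0.position.x, $0.position.y, $0.position.z)
    })
    let element = SCNGeometryElement(indices: Array(0..<Int32(kept.count)),
                                     primitiveType: .point)
    // Without these, points tend to render as single pixels.
    element.pointSize = 4
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 8

    return SCNGeometry(sources: [vertexSource], elements: [element])
}

A color source could be added the same way; what I don't know is whether there's a smarter path where SceneKit itself consumes the confidence semantic.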

All of this could easily be done earlier, in my Metal code, but the flow of my app is two steps: Capture, then navigate to Viewer.
I want this Viewer to retain as much data as possible, so the user can apply/revert treatments to their capture and export it when they're happy.


I'm very happy with my transition from Capture to Viewer (Metal buffer to SCNScene is instant even for captures of millions of points, AWESOME).
But yes, I'm wondering whether my idea of applying treatments at this stage makes sense?
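
For anyone curious, here is a minimal sketch of one way to make that hand-off without copying the point data (not necessarily exactly what my app does): wrap the shared MTLBuffer's memory in a Data with bytesNoCopy and describe the interleaved layout to SCNGeometrySource. The offsets below assume the ParticleUniforms layout shown earlier (position at 0, color at 16 bytes).

Code Block Swift
import Metal
import SceneKit

/// Builds point-cloud geometry directly over the capture buffer's memory.
/// Offsets/stride assume interleaved ParticleUniforms records; adjust to the real struct.
func makePointCloudGeometry(buffer: MTLBuffer, pointCount: Int, stride: Int) -> SCNGeometry {
    // Wrap the shared buffer's memory without copying; the MTLBuffer must outlive this Data.
    let data = Data(bytesNoCopy: buffer.contents(),
                    count: stride * pointCount,
                    deallocator: .none)

    let positionSource = SCNGeometrySource(data: data,
                                           semantic: .vertex,
                                           vectorCount: pointCount,
                                           usesFloatComponents: true,
                                           componentsPerVector: 3,
                                           bytesPerComponent: MemoryLayout<Float>.size,
                                           dataOffset: 0,
                                           dataStride: stride)
    let colorSource = SCNGeometrySource(data: data,
                                        semantic: .color,
                                        vectorCount: pointCount,
                                        usesFloatComponents: true,
                                        componentsPerVector: 3,
                                        bytesPerComponent: MemoryLayout<Float>.size,
                                        dataOffset: MemoryLayout<Float>.size * 4,
                                        dataStride: stride)
    let element = SCNGeometryElement(indices: Array(0..<Int32(pointCount)),
                                     primitiveType: .point)
    element.pointSize = 4
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 8

    return SCNGeometry(sources: [positionSource, colorSource], elements: [element])
}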

NB: I also plan to add some very light, non-critical treatments in the live capture Metal code. My goal there is to prevent oversampling areas, kind of a voxel-grid filter of some sort, but running live. Not sure how yet, but I'll update. My company asked me to work on this for another project, so I can revive this pet project!
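
The rough idea, sketched on the CPU for clarity (in practice it would live in the capture path or a Metal kernel, and the 1 cm voxel size is an arbitrary example): quantize each incoming point to a voxel index and only keep the first point that lands in each voxel.

Code Block Swift
import simd

/// Live voxel-grid style deduplication: each voxel of the grid accepts at most one point.
struct LiveVoxelGrid {
    private var occupied = Set<SIMD3<Int32>>()
    let voxelSize: Float

    init(voxelSize: Float = 0.01) {
        self.voxelSize = voxelSize
    }

    /// Returns true if the point falls in a voxel that was still empty (i.e. keep the point).
    mutating func accept(_ position: SIMD3<Float>) -> Bool {
        let index = SIMD3<Int32>(Int32((position.x / voxelSize).rounded(.down)),
                                 Int32((position.y / voxelSize).rounded(.down)),
                                 Int32((position.z / voxelSize).rounded(.down)))
        return occupied.insert(index).inserted
    }
}

Points rejected by accept(_:) would simply never be appended to the capture buffer, which keeps already well-covered areas from being oversampled.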

Cheers,

A

Continuing updates on my journey...
  • I've seen that you guys released a new point cloud app demo; it seems to be a WIP? The layout is buggy and I'm not sure what it's about, but it seems more 2D-focused. It's interesting, though, as it streamlines the way to use it in SwiftUI (I've done the same, but it was a bit tedious to figure out by myself).

  • I've been using Python from Swift for processing, and I've got some great post-processing working: voxel filtering, statistical outlier removal, normal estimation, surface reconstruction... I go through GPU<->CPU shared memory, write to a file, open the file with the Python library, then go back into memory and into the Metal buffer (a rough sketch of the round trip is just below this list). Thanks to aligned memory, everything happens very fast (1 ms for 100k points on average), making this solution much better than my previous attempts at using C++ libraries (even if that depends more on how I interface things than on the inherent performance of C++ or Python :))
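
The round trip is roughly the following. This is a minimal sketch built on assumptions, not code from the app: it assumes PythonKit for the Swift-to-Python bridge and Open3D as the Python library (the thread doesn't name the exact packages), and the parameter values are arbitrary examples.

Code Block Swift
import PythonKit

/// Reads a PLY that was written from the capture buffer, runs a few Open3D passes,
/// and writes the result back out for re-upload into the Metal buffer.
func postProcess(plyIn: String, plyOut: String) {
    let o3d = Python.import("open3d")

    let cloud = o3d.io.read_point_cloud(plyIn)
    // Voxel filtering, statistical outlier removal, then normal estimation.
    let downsampled = cloud.voxel_down_sample(voxel_size: 0.01)
    let cleaned = downsampled.remove_statistical_outlier(nb_neighbors: 20, std_ratio: 2.0)[0]
    cleaned.estimate_normals()

    o3d.io.write_point_cloud(plyOut, cleaned)
}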

It's going great, and SwiftUI is a blast. I'll work on this a bit more and update my app on the store.

To anyone reading my AppleForumBlogUpdates™️... Cheers 😁
Alright, here is a first release of the SwiftUI version, which is more like a toolbox. Processing of large sets is stable, except surface reconstruction sometimes.

https://apps.apple.com/us/app/pointcloudkit/id1546476130

Hi Alex, I'm Jin from Korea. I have been working with Swift for 2-3 months, so everything is hard for me. I made a PLY file in Swift on the CPU (Metal is very, very hard for me, so I made it with Swift on the CPU). I am trying to show my PLY file just like your app, but I don't know how to do it. (I tried a SceneKit view to show the PLY; it would not show a PLY file, only an OBJ file.)

Thanks for any kind of hints or code.
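
One minimal way to try this (a sketch under the assumption that Model I/O can parse your particular PLY; not tested against your file) is to load it with MDLAsset and bridge it into SceneKit:

Code Block Swift
import ModelIO
import SceneKit
import SceneKit.ModelIO

/// Loads a .ply with Model I/O and bridges it to a SceneKit scene for display in an SCNView.
func loadScene(fromPLYAt url: URL) -> SCNScene {
    let asset = MDLAsset(url: url)     // Model I/O can import .ply (among other formats)
    return SCNScene(mdlAsset: asset)   // bridge provided by SceneKit.ModelIO
}

If the PLY is a pure point cloud with no faces, this may come up looking empty; in that case the vertices can be read manually and turned into point geometry (SCNGeometryElement with primitiveType .point), as in the earlier sketches.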
