Alright, here is a first release of the SwiftUI version, which is more like a toolbox. Processing of large sets is stable, except for surface reconstruction, which sometimes fails.
https://apps.apple.com/us/app/pointcloudkit/id1546476130
Continuing updates on my journey...
I've seen that you guys released a new Point Cloud App demo; it seems to be WIP? The layout is buggy and I'm not sure what it's about, but it seems more 2D focused. It's interesting though, as it streamlines the way to use it in SwiftUI (I've done the same, but it was a bit tedious to figure out by myself).
I've been using Python from Swift for processing, and I've got some great post-processing working: voxel filtering, statistical outlier removal, normal estimation, surface reconstruction... I use GPU-CPU shared memory, write to a file, open the file with a Python library, then go back to memory and into the Metal buffer. Thanks to aligned memory everything happens very fast (1 ms for 100k points on average), making this solution much better than my previous attempts at using C++ libraries (even if that depends more on how I interface things than on the inherent performance of C++ or Python :))
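Roughly, the round trip looks like this (a minimal sketch; the file names, binary layout, and how the Python side gets invoked are placeholders, not the app's exact code):
Swift
import Foundation
import Metal

// Dump a shared-storage MTLBuffer of packed points to disk, let the Python
// side process the file, then copy the result back into a fresh shared buffer.
func roundTripThroughPython(device: MTLDevice,
                            buffer: MTLBuffer,
                            count: Int,
                            stride: Int) throws -> MTLBuffer {
    let tmp = FileManager.default.temporaryDirectory
    let inputURL = tmp.appendingPathComponent("points_in.bin")
    let outputURL = tmp.appendingPathComponent("points_out.bin")

    // 1. Write the buffer contents to disk. With .storageModeShared the CPU
    //    reads the same memory the GPU wrote, so there's no blit.
    try Data(bytes: buffer.contents(), count: count * stride).write(to: inputURL)

    // 2. The embedded Python side (voxel filtering, outlier removal, normal
    //    estimation, ...) reads points_in.bin and writes points_out.bin here.

    // 3. Read the processed points back into a shared buffer that the Metal
    //    renderer (or SceneKit) can consume directly.
    let processed = try Data(contentsOf: outputURL)
    guard let result = device.makeBuffer(length: processed.count, options: .storageModeShared) else {
        fatalError("Could not allocate the result buffer")
    }
    processed.withUnsafeBytes { source in
        result.contents().copyMemory(from: source.baseAddress!, byteCount: processed.count)
    }
    return result
}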
It's going great and SwiftUI is a blast. Will work on this a bit more and update my app on the store
To anyone reading my AppleForumBlogUpdates™️... Cheers 😁
Ok, I've been ditching any non-Apple tech and I'm starting from scratch with SwiftUI and a pure Metal implementation. It's super fast, but implementing post-processing (offline or live) will now be challenging.
I have one question. Currently I move my data around using this custom struct containing the Metal buffer:
Swift
import Metal
import MetalKit
import ModelIO
import SceneKit

public struct PointCloudCapture {
    public let buffer: MetalBuffer<ParticleUniforms>
    public let count: Int

    public var stride: Int {
        buffer.stride
    }

    public enum Component {
        case position
        case color
        case confidence

        public var format: MTLVertexFormat {
            switch self {
            case .position:
                return MTKMetalVertexFormatFromModelIO(.float3)
            case .color:
                return MTKMetalVertexFormatFromModelIO(.float3)
            case .confidence:
                return MTKMetalVertexFormatFromModelIO(.float)
            }
        }

        public var dataOffset: Int {
            switch self {
            case .position:
                return 0
            case .color:
                // The float3 position is padded to 16 bytes (SIMD alignment),
                // so color starts 4 floats in.
                return MemoryLayout<Float>.size * 4
            case .confidence:
                // Confidence follows the two padded float3s (position + color).
                return MemoryLayout<Float>.size * 8
            }
        }

        public var semantic: SCNGeometrySource.Semantic {
            switch self {
            case .position:
                return .vertex
            case .color:
                return .color
            case .confidence:
                return .confidence
            }
        }
    }
}
extension SCNGeometrySource.Semantic {
    // Represents the confidence value from the ARKit capture
    public static let confidence = SCNGeometrySource.Semantic(rawValue: "confidence")
}
As you can see, I wanted to embed the Confidence data as metadata of some sort (I use these later with SCNGeometrySource and SCNGeometryElement to render the point cloud in SceneKit).
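Concretely, the geometry construction looks roughly like this (a sketch; the rawBuffer accessor exposing the underlying MTLBuffer is just an illustrative name):
Swift
func makeGeometry(from capture: PointCloudCapture) -> SCNGeometry {
    // One SCNGeometrySource per component, reading straight from the Metal
    // buffer using the format/offset/stride metadata defined above.
    func source(for component: PointCloudCapture.Component) -> SCNGeometrySource {
        SCNGeometrySource(buffer: capture.buffer.rawBuffer, // underlying MTLBuffer (accessor name assumed)
                          vertexFormat: component.format,
                          semantic: component.semantic,
                          vertexCount: capture.count,
                          dataOffset: component.dataOffset,
                          dataStride: capture.stride)
    }
    // Render every vertex as an individual point.
    let element = SCNGeometryElement(data: nil,
                                     primitiveType: .point,
                                     primitiveCount: capture.count,
                                     bytesPerIndex: MemoryLayout<UInt32>.size)
    // The custom .confidence source is the one I'd like to plug in here too.
    return SCNGeometry(sources: [source(for: .position), source(for: .color)],
                       elements: [element])
}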
I'm not sure how to use my custom Confidence data with SCNGeometry(sources:elements:) though; I would gladly take any pointers.
I would like some kind of overridden version that decides, based on Confidence, whether or not to render each point.
All this could easily be done earlier in my Metal code, but the flow of my app is two steps: Capture, then navigate to the Viewer.
And I want this Viewer to retain as much data as possible, to allow the user to apply/revert treatments to their capture and export it when happy.
I'm very happy with my transition from Capture to Viewer (Metal buffer to SCNScene is instant even for multi-million-point captures, AWESOME).
But yes, I'm wondering if my idea of applying treatments at this stage makes sense?
nb: I also plan to add some very light, non-critical treatments in the live capture Metal code. My goal here is to prevent oversampling areas: kind of a voxel-grid filter, but running live. Not sure exactly how yet, but I'll update. My company asked me to work on this for another project, so I can revive this pet project!
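The rough idea, sketched on the CPU (the live version would do the equivalent inside the Metal kernel; the leaf size here is arbitrary):
Swift
// Keep at most one point per voxel: quantize each position to a voxel index
// and drop points whose voxel is already occupied.
func voxelFiltered(_ points: [SIMD3<Float>], leafSize: Float = 0.01) -> [SIMD3<Float>] {
    var occupied = Set<SIMD3<Int32>>()
    var kept: [SIMD3<Float>] = []
    for point in points {
        let key = SIMD3<Int32>(Int32((point.x / leafSize).rounded(.down)),
                               Int32((point.y / leafSize).rounded(.down)),
                               Int32((point.z / leafSize).rounded(.down)))
        if occupied.insert(key).inserted {
            kept.append(point)
        }
    }
    return kept
}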
Cheers,
A
This works for me
Swift
particles.pointSize = 10.0
particles.minimumPointScreenSpaceRadius = 2.5
particles.maximumPointScreenSpaceRadius = 2.5
You need to increase the min/max values, as stated in the docs:
Discussion
Some visual effects call for rendering a geometry as a collection of individual points—that is, a point cloud, not a solid surface or wireframe mesh. When you use this option, SceneKit can render each point as a small 2D surface that always faces the camera. By applying a texture or custom shader to that surface, you can efficiently render many small objects at once.
To render a geometry element as a point cloud, you must set three properties: pointSize, minimumPointScreenSpaceRadius, and maximumPointScreenSpaceRadius. Use pointSize to determine how large each point appears in world space, so that points farther away appear as smaller 2D surfaces. Use the minimum and maximum radius properties to ensure that the on-screen rendering of each point fits within a certain range of pixel sizes.
For example, to render a point cloud where each point is always one pixel wide (like a field of stars), set both the minimum and maximum sizes to one pixel. To render a group of objects whose screen sizes vary with perspective (like a set of images representing planets), set the minimum size to one pixel and the maximum size to a much larger value.
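For context, particles in the snippet above is the SCNGeometryElement rendered as points; it can be created along these lines (the point count is illustrative):
Swift
import SceneKit

let pointCount = 100_000
// Render the vertices as individual points rather than triangles.
let particles = SCNGeometryElement(data: nil,
                                   primitiveType: .point,
                                   primitiveCount: pointCount,
                                   bytesPerIndex: MemoryLayout<UInt32>.size)
particles.pointSize = 10.0                      // world-space size of each point
particles.minimumPointScreenSpaceRadius = 2.5   // on-screen clamp, in pixels
particles.maximumPointScreenSpaceRadius = 2.5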
Hello Guys!
It's going great so far; this is my first time doing any GPU/graphics work since... 2011 and NCurses stuff 😅
I've released a simple app called PointCloudKit (Capture, then display with an SCNView)
I've since compiled VTK for arm64, and I'm now rendering my ARKit point cloud capture (XYZ-RGB-D) with the provided renderer; it's OpenGL, but so far so good.
I'm limited in my knowledge, so I'm learning VTK pipelines now, but I'm processing 2M-point clouds instantly with voxel filtering, then I can display and play around with them in real time at 60 fps; it's quite impressive.
One issue I had was that I wanted to stay on the GPU, passing my MTLBuffer directly to VTK/OpenGL, but I haven't succeeded yet. For now I'm casting my MTLBuffer.contents in the C++ code and it seems to be very fast (still in debug mode).
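The hand-off itself is basically this (a sketch; vtkProcessPoints stands for a hypothetical bridged C++ shim, not an actual VTK API):
Swift
import Metal

// Hypothetical C entry point exposed through the bridging header:
// void vtkProcessPoints(void *points, int32_t count, int32_t stride);

// Pass the shared buffer's base pointer to the C++ side; the cast back to the
// point struct (and the VTK pipeline itself) lives over there, so nothing is
// copied in Swift.
func handOffToVTK(buffer: MTLBuffer, count: Int, stride: Int) {
    vtkProcessPoints(buffer.contents(), Int32(count), Int32(stride))
}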
My real "end-game boss" goal is to do all of this in Metal, and in real time. That would be super interesting, and I'll explore it once I'm more comfortable and have more time.
I also reconstruct a mesh and apply colors; the results are great and it takes about 10 seconds for a denoised 500k-point capture (which ends up being 50k points after filtering).
Cheers
Same problem