Ok, I've been ditching any non-Apple tech and I'm starting from scratch with SwiftUI and a pure Metal implementation. It's super fast, but implementing post-processing (or live processing) will now be challenging.
I have one question. Currently I move my data around using this custom struct containing the Metal buffer:
Code Block Swift
import Metal
import MetalKit
import ModelIO
import SceneKit

public struct PointCloudCapture {
    public let buffer: MetalBuffer<ParticleUniforms>
    public let count: Int

    public var stride: Int {
        buffer.stride
    }

    public enum Component {
        case position
        case color
        case confidence

        public var format: MTLVertexFormat {
            switch self {
            case .position:
                return MTKMetalVertexFormatFromModelIO(.float3)
            case .color:
                return MTKMetalVertexFormatFromModelIO(.float3)
            case .confidence:
                return MTKMetalVertexFormatFromModelIO(.float)
            }
        }

        // Byte offset of this component inside ParticleUniforms.
        public var dataOffset: Int {
            switch self {
            case .position:
                return 0
            case .color:
                return MemoryLayout<Float>.size * 4
            case .confidence:
                return MemoryLayout<Float>.size
            }
        }

        public var semantic: SCNGeometrySource.Semantic {
            switch self {
            case .position:
                return .vertex
            case .color:
                return .color
            case .confidence:
                return .confidence
            }
        }
    }
}

extension SCNGeometrySource.Semantic {

    // Represents the confidence from the ARKit capture
    public static let confidence = SCNGeometrySource.Semantic(rawValue: "confidence")

}
As you can see, I wanted to embed the Confidence data as metadata of some sort (I use these later with SCNGeometrySource and SCNGeometryElement to render the point cloud in SceneKit).
I'm not sure how to use my custom Confidence data with SCNGeometry(sources:elements:) though; I would gladly take any pointers.
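For context, here is roughly how I build the geometry from the buffer today. This is sketched from memory, so the rawBuffer accessor is just an approximation of however MetalBuffer exposes its underlying MTLBuffer. I include the confidence source too, but as far as I can tell nothing consumes a custom semantic unless you bind it yourself (with an SCNProgram, for example), which is basically my question:

Code Block Swift
import Metal
import SceneKit

// Rough sketch: build the SceneKit geometry straight from the capture's Metal buffer.
// `rawBuffer` is a stand-in for however MetalBuffer exposes its underlying MTLBuffer.
func makeGeometry(from capture: PointCloudCapture) -> SCNGeometry {
    let mtlBuffer: MTLBuffer = capture.buffer.rawBuffer

    // One SCNGeometrySource per component, all pointing into the same interleaved buffer.
    let components: [PointCloudCapture.Component] = [.position, .color, .confidence]
    let sources = components.map { component in
        SCNGeometrySource(buffer: mtlBuffer,
                          vertexFormat: component.format,
                          semantic: component.semantic,
                          vertexCount: capture.count,
                          dataOffset: component.dataOffset,
                          dataStride: capture.stride)
    }

    // Draw each point once, as a point primitive.
    let element = SCNGeometryElement(indices: Array(0..<UInt32(capture.count)),
                                     primitiveType: .point)
    element.pointSize = 1
    element.minimumPointScreenSpaceRadius = 1
    element.maximumPointScreenSpaceRadius = 5

    return SCNGeometry(sources: sources, elements: [element])
}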
I would like to have an overridden version of some sort that decides, based on Confidence, whether or not to render each point.
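Something like this is what I have in mind on the Viewer side: a non-destructive filter that always rebuilds the geometry from the untouched capture buffer. Only a sketch again, it assumes ParticleUniforms is visible from Swift with position/color/confidence fields, that the buffer is CPU-readable (shared storage), and it reuses the hypothetical rawBuffer accessor from above:

Code Block Swift
import Metal
import SceneKit

// Non-destructive confidence filter (sketch). Assumes ParticleUniforms is visible from
// Swift and the capture buffer uses shared storage, so its contents are CPU-readable.
// The original capture buffer is never modified.
func filteredGeometry(from capture: PointCloudCapture,
                      minimumConfidence: Float,
                      device: MTLDevice) -> SCNGeometry? {
    let particles = capture.buffer.rawBuffer.contents()
        .bindMemory(to: ParticleUniforms.self, capacity: capture.count)

    // Keep only the points above the threshold; re-running with another threshold
    // (or with 0 to revert) always starts again from the full capture.
    let kept = (0..<capture.count)
        .filter { particles[$0].confidence >= minimumConfidence }
        .map { particles[$0] }
    guard !kept.isEmpty,
          let filteredBuffer = device.makeBuffer(bytes: kept,
                                                 length: kept.count * MemoryLayout<ParticleUniforms>.stride,
                                                 options: .storageModeShared)
    else { return nil }

    let sources = [PointCloudCapture.Component.position, .color].map { component in
        SCNGeometrySource(buffer: filteredBuffer,
                          vertexFormat: component.format,
                          semantic: component.semantic,
                          vertexCount: kept.count,
                          dataOffset: component.dataOffset,
                          dataStride: MemoryLayout<ParticleUniforms>.stride)
    }
    let element = SCNGeometryElement(indices: Array(0..<UInt32(kept.count)),
                                     primitiveType: .point)
    return SCNGeometry(sources: sources, elements: [element])
}

The point being that the threshold can be changed or reverted at any time, because the source of truth stays in the original capture buffer.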
All of this could easily be done earlier in my Metal code, but the flow of my app is two steps: Capture, then navigate to Viewer.
And I want this Viewer to retain as much data as possible, to allow the user to apply/revert treatments to their capture and, when happy, export it.
I'm very happy with my transition from Capture to Viewer (Metal buffer to SCNScene is instant even for captures of millions of points, AWESOME).
But yes, I'm wondering if my idea of applying treatments at this stage makes sense?
nb: I also plan to add some very light, non-critical treatments in the live-capture Metal code. My goal here is to prevent oversampling areas, kind of a voxel-grid filter of some sort but running live. Not sure how yet, but I'll update. My company asked me to work on this for another project, so I can revive this pet project!
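To illustrate the idea on the CPU (the live version would have to do the same bucketing in a Metal compute kernel instead, and the ParticleUniforms field names here are assumed):

Code Block Swift
import simd

// CPU sketch of the voxel-grid idea: keep at most one point per voxelSize-sized cube,
// preferring the highest-confidence sample in each cube.
func voxelThin(_ points: [ParticleUniforms], voxelSize: Float) -> [ParticleUniforms] {
    struct VoxelKey: Hashable { let x, y, z: Int32 }

    var bestPerVoxel = [VoxelKey: ParticleUniforms]()
    for point in points {
        // Quantize the position into integer voxel coordinates.
        let key = VoxelKey(x: Int32((point.position.x / voxelSize).rounded(.down)),
                           y: Int32((point.position.y / voxelSize).rounded(.down)),
                           z: Int32((point.position.z / voxelSize).rounded(.down)))
        if let current = bestPerVoxel[key], current.confidence >= point.confidence {
            continue
        }
        bestPerVoxel[key] = point
    }
    return Array(bestPerVoxel.values)
}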
Cheers,
A