SceneUnderstanding missing from SceneKit?

The new demo app for the LiDAR scanner uses RealityKit, and the user is able to visualize meshes detected by the scanner:


https://developer.apple.com/documentation/arkit/world_tracking/visualizing_and_interacting_with_a_reconstructed_scene


The meshes are visualized using 'showSceneUnderstanding' as a debug option:


arView.debugOptions.insert(.showSceneUnderstanding)


You can also occlude virtual content by accessing the sceneUnderstanding options:


arView.environment.sceneUnderstanding.options.insert(.occlusion)


I am trying to recreate these features using SceneKit, but so far I've had no success. 'sceneUnderstanding' doesn't seem to exist in SceneKit, and there doesn't seem to be any documentation on this.


Any help would be appreciated.

Thanks.

Unfortunately, SceneKit doesn't support any features of the LiDAR scanner. So, you'll have to apply information from the scene mesh (ARMeshAnchors) in SceneKit yourself.

Thanks for the response.

Well, that's a shame. I suppose it's only a matter of time before SceneKit becomes obsolete.

Do you know of any guides or articles explaining how to create SCNGeometry using the ARMeshAnchors?

ARMeshAnchors use ARGeometrySource and ARGeometryElement, and so far I haven't been able to convert them into SCNGeometrySource and SCNGeometryElement.

Here is a convenient class that wraps ARMeshGeometry to provide an SCNGeometry of the mesh (note that this does not include the normals):


import ARKit
import SceneKit

class ARSCNMeshGeometry {
    let scnGeometry: SCNGeometry

    init(meshAnchor: ARMeshAnchor) {
        let meshGeometry = meshAnchor.geometry

        // Vertices source
        let vertices = meshGeometry.vertices
        let verticesSource = SCNGeometrySource(buffer: vertices.buffer,
                                               vertexFormat: vertices.format,
                                               semantic: .vertex,
                                               vertexCount: vertices.count,
                                               dataOffset: vertices.offset,
                                               dataStride: vertices.stride)

        // Indices element (copy the bytes; see the clarification further down)
        let faces = meshGeometry.faces
        let facesData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)
        let facesElement = SCNGeometryElement(data: facesData,
                                              primitiveType: .triangles,
                                              primitiveCount: faces.count,
                                              bytesPerIndex: faces.bytesPerIndex)

        scnGeometry = SCNGeometry(sources: [verticesSource], elements: [facesElement])
    }
}



Using that class, you can get the SCNGeometry from the ARMeshGeometry for each mesh anchor.


Whenever an ARMeshAnchor is added to the session (use session(_:didAdd:) to check), you should create an SCNNode, add the geometry to it, and then add the SCNNode to an SCNScene.

Whenever an ARMeshAnchor is updated (use session(_:didUpdate:) to check), you should update the corresponding node's geometry with the anchor's new geometry. You should also set the node's simdWorldTransform to the ARMeshAnchor's transform at this time, since ARMeshGeometry vertices are located relative to their mesh anchor's transform.
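Here is a minimal sketch of that flow, using the ARSCNMeshGeometry wrapper above. It assumes an ARSCNView named sceneView whose session delegate is set to the view controller; the dictionary keyed by anchor identifier is just one convenient way to track nodes, and the names here are illustrative, not part of any API:

Code Block
import ARKit
import SceneKit
import UIKit

class MeshViewController: UIViewController, ARSessionDelegate {
    @IBOutlet var sceneView: ARSCNView!
    // One node per mesh anchor, keyed by the anchor's identifier
    private var meshNodes = [UUID: SCNNode]()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.session.delegate = self
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let meshAnchor as ARMeshAnchor in anchors {
            let node = SCNNode(geometry: ARSCNMeshGeometry(meshAnchor: meshAnchor).scnGeometry)
            node.simdWorldTransform = meshAnchor.transform
            meshNodes[meshAnchor.identifier] = node
            sceneView.scene.rootNode.addChildNode(node)
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let meshAnchor as ARMeshAnchor in anchors {
            guard let node = meshNodes[meshAnchor.identifier] else { continue }
            // Swap in the regenerated geometry and refresh the transform
            node.geometry = ARSCNMeshGeometry(meshAnchor: meshAnchor).scnGeometry
            node.simdWorldTransform = meshAnchor.transform
        }
    }
}

Rebuilding the SCNGeometry on every update is the simplest approach; you may want to throttle updates if performance becomes an issue.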

Thanks for the code sample! It works for me for a few seconds, but then the camera feed freezes and I get this error:

LiDAR Meshes[2865:523366] Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (IOAF code 4)

Same here. Any update on the proper way to dynamically update and display our own version of the LiDAR mesh?

Please request technical support for this issue: https://developer.apple.com/support/technical/

That's really bad news for AR. RealityKit might be the successor to SceneKit, but it lacks so much functionality; creating custom geometry is just one example.
The decision to stop developing SceneKit is therefore hurting many developers.

Just give us all the LiDAR features we need to develop great apps with SceneKit!

An important clarification on the code above:

Code Block
let facesData = Data(bytesNoCopy: faces.buffer.contents(), count: faces.buffer.length, deallocator: .none)


Using bytesNoCopy here means that SceneKit and ARKit will share the underlying data, so if ARKit updates this data while SceneKit needs to access it for some reason (for example, for hit testing), this will likely result in a bad access and your app will crash.

So, if this describes your use-case (you need to access the SCNGeometry after creation for something like hit-testing), then you should copy the bytes:

Code Block
let facesData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)

Have you found a way to use occlusion with SceneKit and LiDAR meshes?

Code Block
arView.environment.sceneUnderstanding.options.insert(.occlusion)
still doesn't exist in SceneKit, does it?
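One possible workaround, if you're rendering the mesh nodes yourself as described above, is an occlusion material that writes depth but no color. This is only a sketch of my own, not an official API for this: colorBufferWriteMask and renderingOrder are standard SceneKit properties, and applying them to LiDAR mesh nodes is just one plausible approach.

Code Block
import SceneKit

// Sketch: make a mesh node occlude virtual content by writing depth only
func applyOcclusionMaterial(to meshNode: SCNNode) {
    let occlusion = SCNMaterial()
    occlusion.colorBufferWriteMask = []   // write depth, but no color
    occlusion.isDoubleSided = true
    meshNode.geometry?.materials = [occlusion]
    // Render the occluder before other content so its depth is in place
    meshNode.renderingOrder = -1
}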


Also, here's some code I'm using to get geometry from ARMeshAnchors:

Code Block
import ARKit
import SceneKit
import UIKit

@available(iOS 13.4, *)
extension SCNGeometry {
    /**
     Constructs an SCNGeometry from an ARMeshAnchor.
     If setColors is true, sets a color on each face based on its ARMeshClassification (see the extension below).
     */
    public static func fromAnchor(meshAnchor: ARMeshAnchor, setColors: Bool) -> SCNGeometry {
        let meshGeometry = meshAnchor.geometry
        let vertices = meshGeometry.vertices
        let normals = meshGeometry.normals
        let faces = meshGeometry.faces

        // Use the MTLBuffers that ARKit gives us directly
        let vertexSource = SCNGeometrySource(buffer: vertices.buffer, vertexFormat: vertices.format, semantic: .vertex, vertexCount: vertices.count, dataOffset: vertices.offset, dataStride: vertices.stride)
        let normalsSource = SCNGeometrySource(buffer: normals.buffer, vertexFormat: normals.format, semantic: .normal, vertexCount: normals.count, dataOffset: normals.offset, dataStride: normals.stride)

        // Copy the face bytes, as we may use them later
        let faceData = Data(bytes: faces.buffer.contents(), count: faces.buffer.length)

        // Create the geometry element
        let geometryElement = SCNGeometryElement(data: faceData, primitiveType: .of(faces.primitiveType), primitiveCount: faces.count, bytesPerIndex: faces.bytesPerIndex)

        let geometry: SCNGeometry
        if setColors {
            // Calculate a color for each individual face, instead of the entire mesh
            var colors = [SIMD4<Float>]()
            for i in 0..<faces.count {
                colors.append(meshGeometry.classificationOf(faceWithIndex: i).colorVector)
            }
            let colorSource = SCNGeometrySource(data: Data(bytes: &colors, count: colors.count * SIMD4_FLOAT_STRIDE),
                                                semantic: .color,
                                                vectorCount: colors.count,
                                                usesFloatComponents: true,
                                                componentsPerVector: 4,
                                                bytesPerComponent: FLOAT_STRIDE,
                                                dataOffset: 0,
                                                dataStride: SIMD4_FLOAT_STRIDE)
            geometry = SCNGeometry(sources: [vertexSource, normalsSource, colorSource], elements: [geometryElement])
        } else {
            geometry = SCNGeometry(sources: [vertexSource, normalsSource], elements: [geometryElement])
        }
        return geometry
    }
}
let SIMD4_FLOAT_STRIDE = MemoryLayout<SIMD4<Float>>.stride
let FLOAT_STRIDE = MemoryLayout<Float>.stride

let VECTOR_WHITE: SIMD4<Float> = SIMD4<Float>(1.0, 1.0, 1.0, 1.0)
let VECTOR_YELLOW: SIMD4<Float> = SIMD4<Float>(1.0, 1.0, 0, 1.0)
let VECTOR_BLUE: SIMD4<Float> = SIMD4<Float>(0, 0, 1.0, 1.0)

@available(iOS 13.4, *)
extension ARMeshClassification {
    var description: String {
        switch self {
        case .ceiling: return "Ceiling"
        case .door: return "Door"
        case .floor: return "Floor"
        case .seat: return "Seat"
        case .table: return "Table"
        case .wall: return "Wall"
        case .window: return "Window"
        case .none: return "None"
        @unknown default: return "Unknown"
        }
    }

    // More or less the same vertical/horizontal colors as detected planes
    var color: UIColor {
        switch self {
        case .ceiling: return .blue
        case .door: return .white
        case .floor: return .blue
        case .seat: return .white
        case .table: return .white
        case .wall: return .yellow
        case .window: return .white
        case .none: return .white
        @unknown default: return .white
        }
    }

    var colorVector: SIMD4<Float> {
        switch self {
        case .ceiling: return VECTOR_BLUE
        case .door: return VECTOR_WHITE
        case .floor: return VECTOR_BLUE
        case .seat: return VECTOR_WHITE
        case .table: return VECTOR_WHITE
        case .wall: return VECTOR_YELLOW
        case .window: return VECTOR_WHITE
        case .none: return VECTOR_WHITE
        @unknown default: return VECTOR_WHITE
        }
    }
}
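Note that the code above calls two helpers that aren't shown: SCNGeometryPrimitiveType.of(_:) and ARMeshGeometry.classificationOf(faceWithIndex:). Here is a sketch of plausible implementations; the classification lookup assumes the classification source stores one UInt8 per face, read out of its Metal buffer using the source's offset and stride:

Code Block
import ARKit
import SceneKit

@available(iOS 13.4, *)
extension SCNGeometryPrimitiveType {
    // Map ARKit's primitive type onto SceneKit's
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles
        }
    }
}

@available(iOS 13.4, *)
extension ARMeshGeometry {
    // Read the per-face classification value out of the classification buffer
    func classificationOf(faceWithIndex index: Int) -> ARMeshClassification {
        guard let classification = classification else { return .none }
        let address = classification.buffer.contents()
            .advanced(by: classification.offset + classification.stride * index)
        let value = Int(address.assumingMemoryBound(to: UInt8.self).pointee)
        return ARMeshClassification(rawValue: value) ?? .none
    }
}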
