Create Alpha Matte from SceneDepth API in SceneKit (Occlusion)

Hello,

I would love to use the new occlusion feature that has been added to RealityKit.
But since my app still requires a lot of features that RealityKit doesn't yet deliver, I have to keep using SceneKit for now.

I was wondering if I could convert the depth map provided by the SceneDepth API into an alpha matte that I could then feed into an SCNTechnique to achieve a 'poor man's' occlusion.

Is there some kind of CIFilter or workflow that could help me with this? Maybe some kind of edge detection?

Thankful for any hints!
Configure for ARWorldTracking, then set

configuration.frameSemantics = .personSegmentationWithDepth

and run the configuration.
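For reference, a minimal sketch of that setup (assuming an ARSCNView property named sceneView, which is hypothetical here; adapt it to however you manage your session):

import ARKit

// Minimal sketch: enable person segmentation with depth on a world-tracking session.
let configuration = ARWorldTrackingConfiguration()

// Only request the semantics if the current device supports them.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics = .personSegmentationWithDepth
}

sceneView.session.run(configuration)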
As Sunkenf250 points out, occlusion actually comes as an ARKit feature, not a RealityKit-specific one. In addition, once you've added that frame semantic to your configuration, each frame's estimatedDepthData property contains a pixel buffer with depth values for all recognized people, which can easily be wrapped in a CIImage.
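As a minimal sketch of that step (assuming you implement ARSessionDelegate; what you do with the CIImage afterwards, e.g. feeding it into a CIFilter chain or an SCNTechnique input, is up to you):

import ARKit
import CoreImage

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Depth values for recognized people; only populated while
    // .personSegmentationWithDepth is enabled on the configuration.
    guard let depthBuffer = frame.estimatedDepthData else { return }

    // Wrap the pixel buffer for Core Image processing.
    let depthImage = CIImage(cvPixelBuffer: depthBuffer)
    _ = depthImage
}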
Hello,

It will be significantly easier to make use of the sceneReconstruction mesh to implement scene occlusion in SceneKit.

You would construct your SCNGeometry from the ARMeshAnchors (see this thread: https://developer.apple.com/forums/thread/130599), and then apply an occlusion material to that geometry. This sample (https://developer.apple.com/documentation/arkit/tracking_and_visualizing_faces) explains how to create an occlusion material in SceneKit.
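As a rough sketch of the occlusion-material part (this assumes you already build an SCNGeometry per ARMeshAnchor as in the linked thread; the helper name below is hypothetical):

import ARKit
import SceneKit

func occlusionNode(for geometry: SCNGeometry) -> SCNNode {
    // The material writes to the depth buffer but not the color buffer,
    // so virtual content behind the reconstructed mesh is hidden while
    // the camera feed remains visible.
    let material = SCNMaterial()
    material.colorBufferWriteMask = []
    material.writesToDepthBuffer = true
    geometry.materials = [material]

    let node = SCNNode(geometry: geometry)
    node.renderingOrder = -1   // draw before other virtual content
    return node
}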
