SceneKit method to calculate depth from smoothedSceneDepth

Hey ARKit Engineers,
I'm after a simple way to calculate the depth of a pixel - in SceneKit units from my
Code Block
sceneView.session.currentFrame?.smoothedSceneDepth?.depthMap
I don't want to switch to Metal or RealityKit. I just want to take a point in my currentFrame and its corresponding depth map, and get the depth of that point in SceneKit (ideally in world coordinates, not just local to the frustum at that point in time).

I don't require great performance - I just don't want to port an unproject method from Metal to find a single point's value.

Here's a start I've made on the code:


Code Block
guard let frame = sceneView.session.currentFrame else { return }
guard let depthData = frame.smoothedSceneDepth else { return }
let camera = frame.camera
let depthPixelBuffer = depthData.depthMap
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthWidth  = CVPixelBufferGetWidth(depthPixelBuffer)
let resizeScale = CGFloat(depthWidth) / CGFloat(CVPixelBufferGetWidth(frame.capturedImage))
// toCGImage, pixelData, and depthValues are helper extensions (not shown).
let resizedColorImage = frame.capturedImage.toCGImage(scale: resizeScale)
guard let colorData = resizedColorImage.pixelData() else {
    fatalError()
}
// Scale the intrinsics from the captured-image resolution down to the depth-map resolution.
var intrinsics = camera.intrinsics
let referenceDimensions = camera.imageResolution
let ratio = Float(referenceDimensions.width) / Float(depthWidth)
intrinsics.columns.0[0] /= ratio
intrinsics.columns.1[1] /= ratio
intrinsics.columns.2[0] /= ratio
intrinsics.columns.2[1] /= ratio
var points: [SCNVector3] = []
let depthValues = depthPixelBuffer.depthValues()
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let x = Float32(uu) / Float32(depthWidth) * 2.0 - 1.0
        let y = 1.0 - Float32(vv) / Float32(depthHeight) * 2.0
        points.append(SCNVector3(x, y, z))
    }
}
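As a quick sanity check on that intrinsics scaling, the pinhole math can be exercised in isolation. Here is a minimal sketch in plain Swift (no ARKit; the resolutions and fx/fy/cx/cy values are made-up examples): projecting the same camera-space point with the scaled intrinsics should land on the same pixel, divided by the ratio.

```swift
// Pinhole projection: pixel = (fx * X/Z + cx, fy * Y/Z + cy).
// Dividing fx, fy, cx, cy by the ratio maps full-resolution pixel
// coordinates to depth-map pixel coordinates.
struct Intrinsics {
    var fx: Float, fy: Float, cx: Float, cy: Float

    func project(_ p: (x: Float, y: Float, z: Float)) -> (u: Float, v: Float) {
        (fx * p.x / p.z + cx, fy * p.y / p.z + cy)
    }

    func scaled(by ratio: Float) -> Intrinsics {
        Intrinsics(fx: fx / ratio, fy: fy / ratio, cx: cx / ratio, cy: cy / ratio)
    }
}

// Example values: a 1920-wide captured image and a 256-wide depth map.
let full = Intrinsics(fx: 1500, fy: 1500, cx: 960, cy: 540)
let ratio: Float = 1920.0 / 256.0  // 7.5
let depthRes = full.scaled(by: ratio)

let point: (x: Float, y: Float, z: Float) = (0.4, -0.2, 2.0)
let pxFull = full.project(point)    // (u: 1260, v: 390) in the captured image
let pxDepth = depthRes.project(point)

// The depth-map pixel is the full-resolution pixel divided by the ratio.
print(pxFull, pxDepth)
```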


I can get this into a point cloud; however, it's badly warped along the Z axis and doesn't look right.

Thanks for any help.

Hello,

I don't require great performance - I just don't want to port an unproject method from Metal to find a single point's value. 

The unprojection calculation itself is going to be identical, regardless of whether it is done CPU side or GPU side.

CPU side, the calculation would look something like this:

Code Block
/// Returns a world-space position given a point in the camera image, the eye-space depth (sampled/read from the corresponding point in the depth image), the inverse camera intrinsics, and the inverse view matrix.
func worldPoint(cameraPoint: SIMD2<Float>, eyeDepth: Float, cameraIntrinsicsInversed: simd_float3x3, viewMatrixInversed: simd_float4x4) -> SIMD3<Float> {
    let localPoint = cameraIntrinsicsInversed * simd_float3(cameraPoint, 1) * -eyeDepth
    let worldPoint = viewMatrixInversed * simd_float4(localPoint, 1)
    return (worldPoint / worldPoint.w)[SIMD3(0, 1, 2)]
}
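As a rough sanity check, the same math can be inlined in plain Swift (no simd or ARKit; identity view matrix assumed, made-up intrinsics), which shows the expected camera-space result. Note that cameraPoint here is a pixel coordinate in the (depth-resolution) image, not a normalized [-1, 1] coordinate:

```swift
// Unproject a depth-map pixel: localPoint = K⁻¹ * (u, v, 1) * -eyeDepth.
// K is a simple pinhole matrix, so its inverse can be applied directly:
// K⁻¹ * (u, v, 1) = ((u - cx) / fx, (v - cy) / fy, 1).
func unproject(u: Float, v: Float, eyeDepth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> (x: Float, y: Float, z: Float) {
    let xn = (u - cx) / fx
    let yn = (v - cy) / fy
    // Multiply by -eyeDepth: the ARKit camera looks down -Z in eye space.
    return (xn * -eyeDepth, yn * -eyeDepth, -eyeDepth)
}

// Made-up intrinsics for a 100x100 depth map, principal point at the center.
// The center pixel at 2 m depth lands 2 m straight ahead of the camera.
let p = unproject(u: 50, v: 50, eyeDepth: 2, fx: 100, fy: 100, cx: 50, cy: 50)
print(p)  // p == (0, 0, -2)
```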


If you continue to have issues with this, please Request Technical Support.
Hey thanks so much for the quick reply.


At line 28, this is what I'm trying, however the points are appearing off camera in a thin line distant from the camera - so I guess I need to submit a TSI to get any further help?
Code Block
let worldPoint = worldPoint(cameraPoint: SIMD2(x,y), eyeDepth: z, cameraIntrinsicsInversed: simd_float3x3(intrinsics.inverse), viewMatrixInversed: frame.camera.projectionMatrix.inverse)
points.append(SCNVector3(worldPoint))


Thanks heaps for this so far.
Thanks - I'm starting to get acceptable results with the worldPoint function. This is how I've implemented it:
Code Block
let orientation = UIApplication.shared.statusBarOrientation
let viewMatInverted = frame.camera.viewMatrix(for: orientation).inverse
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let worldPoint = worldPoint(cameraPoint: SIMD2(Float(uu), Float(vv)),
                                    eyeDepth: z,
                                    cameraIntrinsicsInversed: intrinsics.inverse,
                                    viewMatrixInversed: viewMatInverted * rotateToARCamera)
        points.append(SCNVector3(worldPoint))
    }
}

Some points seem off; however, I'll need to start checking each point against its corresponding confidence map entry. Thanks heaps, this has been a great start.
Incidentally, I noticed that confidence levels beyond Low are not currently working on the latest iOS 14.5 beta (18E5178a) on an iPad Pro 11"; this is affecting Apple's Point Cloud demo project too. This makes monitoring the re-projection difficult, as the point cloud is very noisy.
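For that confidence check, the filtering step might look like the following sketch (plain arrays standing in for the pixel buffers; in ARKit the values come from the `confidenceMap` pixel buffer, where the `ARConfidenceLevel` raw values are 0 = low, 1 = medium, 2 = high):

```swift
// Keep only depth samples whose confidence meets a minimum level;
// rejected samples become nil so indices still line up with the depth map.
func filterByConfidence(depths: [Float], confidences: [UInt8], minimum: UInt8) -> [Float?] {
    zip(depths, confidences).map { depth, confidence in
        confidence >= minimum ? depth : nil
    }
}

// Made-up sample row of a depth map and its confidence map.
let depths: [Float] = [1.2, 0.8, 3.5, 2.1]
let confidences: [UInt8] = [2, 0, 1, 2]

// Require at least medium confidence; the low-confidence sample drops out.
let filtered = filterByConfidence(depths: depths, confidences: confidences, minimum: 1)
print(filtered)  // [Optional(1.2), nil, Optional(3.5), Optional(2.1)]
```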

At line 28, this is what I'm trying, however the points are appearing off camera in a thin line distant from the camera - so I guess I need to submit a TSI to get any further help?

Code Block
let worldPoint = worldPoint(cameraPoint: SIMD2(x,y), eyeDepth: z, cameraIntrinsicsInversed: simd_float3x3(intrinsics.inverse), viewMatrixInversed: frame.camera.projectionMatrix.inverse)
points.append(SCNVector3(worldPoint))
Looks like you are using the inverse projectionMatrix instead of the inverse viewMatrix, which would likely lead to some strange results. Give it a try with the inverse viewMatrix, and then if you still have issues, please submit a TSI.