Hey ARKit Engineers,
I'm after a simple way to calculate the depth of a pixel, in SceneKit units, from my sceneView.session.currentFrame?.smoothedSceneDepth?.depthMap. I don't want to switch to Metal or RealityKit. I just want to take a point in my currentFrame and its corresponding depth map, and get the depth of that point in SceneKit (ideally in world coordinates, not just local to the frustum at that point in time).
I don't require great performance - I just don't want to port an unproject method from Metal to find a single point's value.
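For context, here is roughly what I mean by a single-point lookup. This is only a sketch under assumptions: the function name worldPoint is mine, the depth buffer is assumed to be 32-bit float (as sceneDepth produces on current devices), and imagePoint is assumed to be in the captured image's landscape pixel space, the same space the intrinsics are defined in.

```swift
import ARKit

// Hypothetical helper: sample the smoothed depth map at one captured-image
// pixel and lift it into world space via the camera transform.
func worldPoint(at imagePoint: CGPoint, in frame: ARFrame) -> simd_float4? {
    guard let depthMap = frame.smoothedSceneDepth?.depthMap else { return nil }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)

    // imagePoint is in captured-image pixels; rescale to depth-map pixels.
    let scaleX = CGFloat(width) / frame.camera.imageResolution.width
    let scaleY = CGFloat(height) / frame.camera.imageResolution.height
    let u = min(max(Int(imagePoint.x * scaleX), 0), width - 1)
    let v = min(max(Int(imagePoint.y * scaleY), 0), height - 1)

    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let row = base.advanced(by: v * rowBytes).assumingMemoryBound(to: Float32.self)
    let z = row[u] // metres from the camera along the view direction

    // Unproject through the intrinsics (defined at imageResolution).
    let K = frame.camera.intrinsics
    let x = (Float(imagePoint.x) - K[2][0]) * z / K[0][0]
    let y = (Float(imagePoint.y) - K[2][1]) * z / K[1][1]

    // Camera space: +y up, looking down -z, whereas image rows grow downward,
    // hence the sign flips. The camera transform takes us to world space.
    let cameraPoint = simd_float4(x, -y, -z, 1)
    return frame.camera.transform * cameraPoint
}
```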
Here's a start I've made on the code:
guard let frame = sceneView.session.currentFrame else { return }
guard let depthData = frame.sceneDepth else { return } // or smoothedSceneDepth

let camera = frame.camera
let depthPixelBuffer = depthData.depthMap
let depthHeight = CVPixelBufferGetHeight(depthPixelBuffer)
let depthWidth = CVPixelBufferGetWidth(depthPixelBuffer)

let resizeScale = CGFloat(depthWidth) / CGFloat(CVPixelBufferGetWidth(frame.capturedImage))
let resizedColorImage = frame.capturedImage.toCGImage(scale: resizeScale) // custom extension
guard let colorData = resizedColorImage.pixelData() else { // custom extension
    return
}

// Scale the intrinsics from the captured image's resolution to the depth map's.
var intrinsics = camera.intrinsics
let referenceDimensions = camera.imageResolution
let ratio = Float(referenceDimensions.width) / Float(depthWidth)
intrinsics.columns.0[0] /= ratio // fx
intrinsics.columns.1[1] /= ratio // fy
intrinsics.columns.2[0] /= ratio // cx
intrinsics.columns.2[1] /= ratio // cy

var points: [SCNVector3] = []
let depthValues = depthPixelBuffer.depthValues() // custom extension
for vv in 0..<depthHeight {
    for uu in 0..<depthWidth {
        let z = -depthValues[uu + vv * depthWidth]
        let x = Float32(uu) / Float32(depthWidth) * 2.0 - 1.0
        let y = 1.0 - Float32(vv) / Float32(depthHeight) * 2.0
        points.append(SCNVector3(x, y, z))
    }
}
I can get this into a point cloud, but it's badly bent along the Z axis and doesn't look right.
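For what it's worth, the bend usually comes from mixing normalised device coordinates for x/y with metric depth for z; unprojecting each pixel through the scaled intrinsics instead keeps the cloud straight. A minimal plain-Swift sketch of that pinhole unprojection (fx, fy, cx, cy are the intrinsics already scaled to the depth map's resolution; the function name is mine):

```swift
// Pinhole unprojection: map a depth-map pixel (u, v) with metric depth z
// to a camera-space point. Image rows grow downward while camera-space y
// grows up, hence the flip on y; the camera looks down -z, hence -z.
func unproject(u: Int, v: Int, z: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> (x: Float, y: Float, z: Float) {
    let x = (Float(u) - cx) * z / fx
    let y = (cy - Float(v)) * z / fy
    return (x, y, -z)
}
```

The inner loop then becomes `points.append(SCNVector3(p.x, p.y, p.z))` with `p` from `unproject`, using the depth value directly (no pre-negation) rather than the `2.0 - 1.0` NDC mapping.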
Thanks for any help.
I'm trying to extract the depth information for my scene, but only for my virtual objects, not the depthDataMap. In OpenGL it was pretty easy to get this grayscale map, but I'm really struggling to find information on how to do this for my virtual objects. The main objective is to be able to render out 3D photos for Facebook. Obviously ARKit's depth of field must use something like this to calculate the blur of scene objects based on distance; however, I'm at a loss as to how to do this with SceneKit/ARKit.
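One SceneKit-only approach (a sketch under assumptions, not a tested pipeline): override each virtual object's fragment shading with a shader modifier that writes linearised view-space depth out as grey, apply it to every material, and snapshot the view. That yields a grayscale depth image of just the virtual content. The variable name and the 10-metre "far" distance below are my assumptions:

```swift
// Fragment shader modifier that replaces a virtual object's colour with its
// view-space depth as a grey value. _surface.position is the fragment's
// position in view (camera) space; points in front of the camera have
// negative z, so negate it and normalise by an assumed far distance of 10 m.
let depthShaderModifier = """
#pragma body
float gray = clamp(-_surface.position.z / 10.0, 0.0, 1.0);
_output.color = vec4(vec3(gray), 1.0);
"""

// Applied to every virtual object's material, then captured with
// sceneView.snapshot():
// material.shaderModifiers = [.fragment: depthShaderModifier]
```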