Should Face Tracking Position and Depth Map 3D Point Be in the Same Space?

Hi,


Should the position returned by the face-tracking ARFaceAnchor transform property match a 3D point derived from the depth map using the intrinsics matrix? I thought they should correlate, but when I look at my head position in the 3D point cloud produced from the depth map plus intrinsics matrix, its center position does not seem to correlate with the position returned by the face anchor.


I'm using currentFrame.capturedDepthData.cameraCalibrationData.intrinsicMatrix and currentFrame.capturedDepthData.cameraCalibrationData.intrinsicMatrixReferenceDimensions to convert a depth-map value (front-facing camera, real meters) to a 3D point using the formula...



// use ARKit camera intrinsics (mRC = row R, column C)
float principleX = thisPhoto.intrinsicMatrix.m20;
float principleY = thisPhoto.intrinsicMatrix.m21;
float focalX = thisPhoto.intrinsicMatrix.m00;
float focalY = thisPhoto.intrinsicMatrix.m11;

// normalize depth-map coordinates, scale to the intrinsics'
// reference dimensions, then back-project with depth z
float U = (float)u / (float)thisPhoto.depthWidth;
float V = (float)v / (float)thisPhoto.depthHeight;
float y = (U * thisPhoto.intrinsicSize.x - principleX) * z / focalX;
float x = (V * thisPhoto.intrinsicSize.y - principleY) * z / focalY;



Thanks

Ray

Replies

Ugh, I'm a *****. It seems...


float principleX = thisPhoto.intrinsicMatrix.m20;
float principleY = thisPhoto.intrinsicMatrix.m21;

Should have been...

float principleX = thisPhoto.intrinsicMatrix.m02;
float principleY = thisPhoto.intrinsicMatrix.m12;


The row/column indices were swapped: the principal point lives at m02/m12, not m20/m21.