Hi, I'm not sure whether to post this here in Vision or in ARKit as it pertains to both. I followed the Apple project "Using Vision in Real Time with ARKit" and added Vision's VNDetectFaceRectanglesRequest.
The issue I have is mapping the results from Vision's coordinate space onto the AR/video view. I'm testing in portrait mode and the Y-axis maps fine. The X-axis, however, comes out wrong, presumably because ARKit's captured camera image is wider than what is actually displayed on the iPhone's screen (the image appears to be aspect-filled and cropped at the sides).
What is the correct way to map the boundingBox into screen (UIKit) coordinates when using ARKit? I can't use AVFoundation helpers like `self.cameraLayer.layerRectConverted(fromMetadataOutputRect: transformedRect)`, since I'm not using AVCaptureSession (I'm using ARKit instead).
In the following code the Y-axis works, but the X-axis is skewed (more noticeably toward the edges of the screen).
```swift
// face is an instance of VNFaceObservation
// Flip the Y-axis, since Vision's normalized rect has a lower-left origin
let transform = CGAffineTransform(scaleX: 1, y: -1).translatedBy(x: 0, y: -view.frame.height)
// Scale the normalized rect up to the full view size
let translate = CGAffineTransform.identity.scaledBy(x: view.frame.width, y: view.frame.height)
let rect = face.boundingBox.applying(translate).applying(transform)
```
Using ARKit + Vision, I'm not sure how to convert the X-axis of Vision's normalized rect into ARKit/UIKit coordinate space. The X origin of the resulting CGRect is noticeably off; it seems like it should sit further outward, because the camera image is wider than what the view actually shows.
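Is `ARFrame.displayTransform(for:viewportSize:)` the intended way to handle this? My understanding is that it converts normalized image coordinates into normalized view coordinates and accounts for the aspect-fill crop, but I haven't gotten it working. Below is a rough sketch of the direction I've been looking at; the helper name, the `sceneView` reference, and the placement of the Y flip are just my guesses:

```swift
import ARKit
import UIKit
import Vision

// Sketch: mapping a VNFaceObservation.boundingBox into an ARSCNView's coordinate
// space via ARFrame.displayTransform(for:viewportSize:). The function name,
// `sceneView`, and where the Y flip happens are assumptions on my part.
func screenRect(for face: VNFaceObservation,
                in frame: ARFrame,
                sceneView: ARSCNView) -> CGRect {
    let viewportSize = sceneView.bounds.size

    // Vision's boundingBox is normalized with a lower-left origin; flip it to a
    // top-left origin first (not sure whether this flip belongs before or after
    // the display transform).
    let boundingBox = face.boundingBox
    let flipped = CGRect(x: boundingBox.origin.x,
                         y: 1 - boundingBox.origin.y - boundingBox.height,
                         width: boundingBox.width,
                         height: boundingBox.height)

    // displayTransform converts normalized image coordinates into normalized
    // view coordinates, accounting for rotation and the aspect-fill crop.
    let displayTransform = frame.displayTransform(for: .portrait,
                                                  viewportSize: viewportSize)

    // Scale the normalized view rect up to points.
    let toViewPoints = CGAffineTransform(scaleX: viewportSize.width,
                                         y: viewportSize.height)

    return flipped.applying(displayTransform).applying(toViewPoints)
}
```

I'm also not sure whether hard-coding `.portrait` is right here, or whether the orientation should come from the current interface orientation.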
Thank you