I'm working on an app with a feature that determines and marks the point a user is looking at on the screen (i.e. in screen coordinates).
I have successfully set up my ARKit configuration to retrieve the leftEyeTransform, rightEyeTransform, and lookAtPoint values. My challenge is correctly converting this data into CGPoint coordinates so I can present the point the user is looking at.
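For context, this is roughly how I set things up (a minimal sketch; sceneView is my ARSCNView, and the values are read in the standard ARSCNViewDelegate callback):

```swift
import ARKit

// Start a face-tracking session (sceneView is my ARSCNView).
func startFaceTracking(in sceneView: ARSCNView) {
    guard ARFaceTrackingConfiguration.isSupported else { return }
    sceneView.session.run(ARFaceTrackingConfiguration())
}

// Per-frame eye data arrives via ARSCNViewDelegate.
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let faceAnchor = anchor as? ARFaceAnchor else { return }
    let leftEye = faceAnchor.leftEyeTransform    // simd_float4x4, face-anchor space
    let rightEye = faceAnchor.rightEyeTransform  // simd_float4x4, face-anchor space
    let gaze = faceAnchor.lookAtPoint            // simd_float3, face-anchor space
}
```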
I have attempted using renderer.projectPoint to derive the screen coordinates from the leftEyeTransform, rightEyeTransform, and lookAtPoint, but I haven't been able to get it to work correctly; a sketch of what I'm attempting is below.
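Here's a minimal sketch of the conversion I'm attempting. Since lookAtPoint is expressed in the face anchor's coordinate space, I convert it to world space before projecting:

```swift
import ARKit

// Convert the face anchor's lookAtPoint to 2D view coordinates.
func screenPoint(for faceAnchor: ARFaceAnchor, in sceneView: ARSCNView) -> CGPoint {
    // Face-anchor space -> world space.
    let worldGaze = faceAnchor.transform * simd_float4(faceAnchor.lookAtPoint, 1)
    // World space -> 2D view coordinates via the scene renderer.
    let projected = sceneView.projectPoint(SCNVector3(worldGaze.x, worldGaze.y, worldGaze.z))
    return CGPoint(x: CGFloat(projected.x), y: CGFloat(projected.y))
}
```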
I think I'm missing something; any suggestions or pointers would be greatly appreciated.
Thank you in anticipation.
Hi there,
Please, I need some help with ARKit. I'm currently working on an app that tracks the point on the device screen that a user's eyes are looking at. I have been able to do this successfully by performing a hit test from the left and right eye transforms against a target node placed at the screen, roughly as sketched below.
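The hit test itself looks roughly like this (a sketch; eyeNode is a node I update from the eye transforms each frame, and targetNode is my invisible plane positioned at the screen):

```swift
import ARKit

// Cast a segment from the eye along its look direction and see
// where it intersects the target plane.
func gazeHit(from eyeNode: SCNNode, onto targetNode: SCNNode,
             in sceneView: ARSCNView) -> SCNVector3? {
    let origin = eyeNode.worldPosition
    // Eyes look down their local -Z axis; worldFront is that
    // direction expressed in world space.
    let direction = eyeNode.worldFront
    let end = SCNVector3(origin.x + direction.x * 2,
                         origin.y + direction.y * 2,
                         origin.z + direction.z * 2)
    let hits = sceneView.scene.rootNode.hitTestWithSegment(from: origin, to: end, options: nil)
    // localCoordinates on the plane can then be mapped to screen points.
    return hits.first(where: { $0.node == targetNode })?.localCoordinates
}
```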
I'm able to get these points (i.e. x, y in screen coordinates); my problem is that they're not stable, and a slight movement of the head or the device significantly affects the coordinates.
My question: is there a way to improve the accuracy of the data retrieved while tracking the eyes (i.e. without requiring the user to keep their head still)? Or is there a better way to track the eyes, derive the lookAtPoint, and then convert it to screen coordinates (2D)?
It's not clear what I'm doing wrong; I can see the face is tracked accurately, because moving the device or my head doesn't affect the face mesh I set up.
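One mitigation I'm considering is smoothing the raw points with a simple moving average, as in the sketch below (GazeSmoother is just an illustrative name), but that adds latency, so I'd rather improve the underlying accuracy:

```swift
import CoreGraphics

// Damp per-frame jitter by averaging the last few raw gaze points.
// windowSize trades responsiveness for stability.
final class GazeSmoother {
    private var samples: [CGPoint] = []
    private let windowSize: Int

    init(windowSize: Int = 10) {
        self.windowSize = windowSize
    }

    // Feed in the latest raw point; get back the smoothed point.
    func smooth(_ point: CGPoint) -> CGPoint {
        samples.append(point)
        if samples.count > windowSize { samples.removeFirst() }
        let sum = samples.reduce(CGPoint.zero) {
            CGPoint(x: $0.x + $1.x, y: $0.y + $1.y)
        }
        let n = CGFloat(samples.count)
        return CGPoint(x: sum.x / n, y: sum.y / n)
    }
}
```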
Please, any helpful suggestions or ideas are welcome.
Thanks in anticipation.