About "Tracking the User's Face in Real Time" of Developer

I am running a feasibility study on building 3D vision.

Questions:

1) How is the XYZ coordinate system defined for the video frame? (Is it right-handed? Where is the origin? Which direction does the X-axis point? The Y-axis? The Z-axis? Are the coordinates absolute or relative?)



2) Where are the defined points for the left eye and right eye (e.g. the center of the eyebrow, the center of the pupil), and what are the variable names (leftEyebrow? rightEyebrow? leftEye? rightEye?)?



3) Where is the defined point for the nose (e.g. the nose crest), and what is its variable name (noseCrest? nose?)?



4) What is the purpose of the rectangle? Why is the rectangle needed?



5) Please show me a concrete flow chart of ViewController.swift, so that I can understand it and tweak it to add more 3D functions.


Sincerely

Replies

You should move this thread to the ARKit section of the forum.


Good luck.

Hello,


To clarify, is this a question about ARFaceAnchor, or is this about VNFaceObservation?


If it is about VNFaceObservation as I suspect, then you should move this thread to the Vision section of the forum, where I will be happy to attempt to answer your questions!

Gchiste-san, thank you very much for taking your precious time for me.

Speaking directly, I think it would be faster and more productive for both Apple and me (a developer) to get the answers directly from the ARFaceAnchor team (the presenters of WWDC17/506).

Can you arrange and coordinate that for us?

Hello, it is still unclear which API you want to use here. You mention ARFaceAnchor, but the WWDC video that you linked to is about VNFaceLandmarks?


Please let me know which API you are using and I will be happy to answer some of your questions.

Gchiste-san, thank you very much.

I think that the WWDC video "WWDC17/section507" includes ARFaceLandmarks anyway.

Could you arrange and coordinate between Apple's experts and me so that we can make progress?

Hello!

Are you very busy?

Could you explain your progress inside Apple, if there is any?

It is still not clear to me whether you are asking about VNFaceLandmarks or ARFaceAnchor, but I am going to go ahead and assume you are asking about ARFaceAnchor, since this post is in the ARKit section of the forum.


In the order of your original questions:


1. As described in the figures on this page of the documentation, in a session running the ARFaceTrackingConfiguration the origin is "centered behind the face" (see the sketch after this list for one way to inspect the anchor's transform yourself).


2. Again, check the diagrams on this page of the documentation: leftEyeTransform. (The sketch below also reads the eye transforms.)


3. ARFaceAnchors do not give you a point that directly corresponds to the nose. (The sketch below includes a rough workaround.)


4. Not sure what you are asking here.


5. This documentation talks about view controllers in a general sense: https://developer.apple.com/library/archive/featuredarticles/ViewControllerPGforiPhoneOS/
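
To tie answers 1 through 3 together, here is a minimal sketch, assuming an ARSCNView-based session running ARFaceTrackingConfiguration. The class name is my own, and the nose-tip part is a rough heuristic of mine, not an Apple-defined landmark:

import ARKit

// Minimal sketch for answers 1-3 (class name is my own).
class FaceInfoDelegate: NSObject, ARSCNViewDelegate {

    // Called whenever ARKit updates the face anchor.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }

        // 1. The anchor's transform is a right-handed 4x4 matrix relative to the
        //    session's world coordinate space: columns 0-2 hold the local X, Y and Z
        //    axis directions, and column 3 holds the origin position (see the axis
        //    diagrams in the documentation for which way each axis points).
        let t = faceAnchor.transform
        let origin = SIMD3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
        let xAxis = SIMD3(t.columns.0.x, t.columns.0.y, t.columns.0.z)
        print("face origin:", origin, "x-axis direction:", xAxis)

        // 2. leftEyeTransform / rightEyeTransform are expressed relative to the
        //    face anchor; their fourth column gives each eye's position.
        let leftEye = faceAnchor.leftEyeTransform.columns.3
        let rightEye = faceAnchor.rightEyeTransform.columns.3
        print("left eye (face-local):", leftEye.x, leftEye.y, leftEye.z)
        print("right eye (face-local):", rightEye.x, rightEye.y, rightEye.z)

        // 3. There is no named nose point. As a rough workaround (my own heuristic),
        //    take the most forward-protruding vertex of the face mesh, which usually
        //    lands near the tip of the nose.
        let vertices = faceAnchor.geometry.vertices   // face-local vertex positions
        if let noseTip = vertices.max(by: { $0.z < $1.z }) {
            print("approximate nose tip (face-local):", noseTip)
        }
    }
}

If you go the ARKit route, you would assign an instance of this class (or your existing view controller) as the ARSCNView's delegate; all of the positions above are in meters.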

Hello Gchiste, thank you very much for the explanations.

I am running several tests of the sample app "Tracking the User's Face in Real Time" on my iPhone.

I added "print(results)" to ViewController.swift at the step-545th for minitoring.


......................................................(ViewController.swift)...............................................................

guard let landmarksRequest = request as? VNDetectFaceLandmarksRequest,
      let results = landmarksRequest.results as? [VNFaceObservation] else {
    return
}

print(results) // ***************************** 2019.2.28 Try Test by addition.

// Perform all UI updates (drawing) on the main queue, not the background queue on which this handler is being called.
DispatchQueue.main.async {
    self.drawFaceObservations(results)
}

.....................................................................................................................................................


The real-time output in the debug area is the following.

......................................................................................................................................................
[<VNFaceObservation: 0x131e11da0> 36E02117-F477-47AF-BD6A-FC94E2DEBC8D, revision 0, 1.000000 [0.204768 0.0890206 0.626646 0.352488] ID=0 landmarks 0.499072]
[<VNFaceObservation: 0x131d084f0> D0B5286F-7A77-45AE-94DA-1D8ABE0DFD94, revision 0, 1.000000 [0.212853 0.0844527 0.633526 0.356358] ID=0 landmarks 0.524259]
......................................................................................................................................................


Could you explain the meaning of each item? In particular, I am assuming that [0.204768 0.0890206 0.626646 0.352488] is the [x, y, width, height] of the rectangle. Is this correct?


Respectfully.

Hello Gchiste,

Are you very busy?

Could you explain your progress inside Apple again?

VNFaceObservation is a subclass of VNDetectedObjectObservation. VNDetectedObjectObservation provides a bounding box, which is a CGRect, and CGRects are defined by an origin (x, y) and a size (width, height), so yes, your assumption is correct.
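
For example, here is a minimal sketch (the function name and the 1920 x 1080 size are placeholders of mine) that converts one observation's normalized boundingBox into pixel coordinates:

import Vision

// Minimal sketch: the four numbers printed for each VNFaceObservation are its
// normalized boundingBox (origin.x, origin.y, width, height), where (0, 0) is
// the lower-left corner of the image and all values run from 0 to 1.
func printPixelRect(for observation: VNFaceObservation) {
    let normalized = observation.boundingBox   // CGRect with normalized values
    // Scale into pixel coordinates; replace 1920 x 1080 with your capture buffer's size.
    let pixelRect = VNImageRectForNormalizedRect(normalized, 1920, 1080)
    print("normalized:", normalized, "pixels:", pixelRect)
}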

Hello Gchiste, thank you for your advice.

By the way, about the condition of Xcode on my Mac when running this sample app:

Suddenly, I could no longer run "Tracking the User's Face in Real Time" in Xcode on my Mac.

The two errors that occurred are copied at the following link.


https://teratail.com/questions/179202


It seems that my research results depend on the Apple side of the system.

Could you repair this condition, if possible?