Is there a guarantee on the rate of renderer callbacks when using ARKit face tracking?

I'm checking the feasibility of eye tracking with ARKit for a new application.

We would like to record the point on the screen (along with a timestamp) that the user is looking at, using an iOS device with a TrueDepth camera.

I have 2 questions:



1. Is there any guarantee on the rate at which `renderer:didUpdate:` is called? Do we know, for example, that it is called at least 30 times per second?

2. In all the examples I have seen, ARKit face tracking requires SceneKit. Is there an option to use face tracking without SceneKit?

Replies

Hello,


"1. Is there some guarantee on the rate that `renderer:didUpdate` is called. Do we know for example that it is called at least 30 times per second?"


No. Even if your `preferredFramesPerSecond` is set to 30 or higher, there is no guarantee that the device will actually be able to render your scene at that frame rate. You are only setting a target; whether it can actually be achieved with the content and effects in your scene is another matter. That said, in practice a simple (or even reasonably complex) scene should have no difficulty maintaining 30 fps, but it is best to verify this by rigorously testing your app.
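
As a quick sketch, the target is a single property on the view (`sceneView` here is a stand-in for your own `ARSCNView`):

```swift
import ARKit

// Requesting a target frame rate on an ARSCNView. This is only a request;
// the actual callback rate can drop below it under load.
let sceneView = ARSCNView(frame: .zero)
sceneView.preferredFramesPerSecond = 30
```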


Also, it sounds like you are not necessarily trying to render anything, so you do not actually need the `renderer:didUpdate:` method. It would make more sense for you to use `session:didUpdateFrame:`, which delivers each captured `ARFrame` directly through the `ARSessionDelegate` protocol.
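
For illustration, here is a minimal sketch of such a delegate. `GazeRecorder` is a hypothetical name, and mapping `lookAtPoint` (which is expressed in the face anchor's coordinate space) to an on-screen point is app-specific, so it is left out:

```swift
import ARKit

// Hypothetical delegate that records one timestamped gaze sample per frame.
final class GazeRecorder: NSObject, ARSessionDelegate {
    // Called once per captured camera frame while the session is running.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let face = frame.anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // lookAtPoint is in the face anchor's coordinate space; projecting it
        // to a screen point (via the frame's camera) is up to your app.
        print(frame.timestamp, face.lookAtPoint)
    }
}
```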


"2. In all examples that i saw, using ARKit face tracking requires SceneKit, is there an option to use face tracking without SceneKit?"


Yes, there are other options for using face tracking without SceneKit: you could use RealityKit, SpriteKit, or Metal. Also keep in mind that you can run an `ARSession` without rendering anything at all.
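
For example, a minimal sketch of running face tracking headlessly, reusing the hypothetical `GazeRecorder` delegate from the snippet above:

```swift
import ARKit

// Sketch: run a face-tracking session with no view and no rendering at all.
// Returns the session so the caller can retain it; face tracking requires a
// TrueDepth camera, so we check isSupported first.
func startHeadlessFaceTracking(delegate: ARSessionDelegate) -> ARSession? {
    guard ARFaceTrackingConfiguration.isSupported else { return nil }
    let session = ARSession()
    session.delegate = delegate
    session.run(ARFaceTrackingConfiguration())
    return session
}
```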