1 Reply
      Latest reply on May 21, 2020 12:04 AM by Budabellly
      KTRosenberg Level 1 (0 points)

        The ARKit API supports simultaneous world and face tracking via the back and front cameras, but unfortunately, due to hardware limitations, the new 2020 iPad Pro cannot use this feature (probably because the LiDAR scanner draws considerably more power). This is a bit of a step backward.


        Here is an updated reference in the example project.



        guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
            fatalError("""
                This sample code requires iOS 13 / iPadOS 13, and an iOS device \
                with a front TrueDepth camera. Note: 2020 iPads do not support \
                user face tracking while world tracking.
                """)
        }



        There is also a forum conversation indicating that this is an unintentional hardware limitation.


        It looks like the mobile hardware is not "there yet" for both. However, has anyone confirmed whether this limitation extends to simply receiving both video feeds, as opposed to running tracking on both cameras?


        What I want to achieve: run a continuous world-tracking session in ARKit and render the rear camera feed. At the same time, capture front-camera frames using the regular video APIs, with no tracking at all: the front-camera frames would only be processed with Core ML or Vision for other purposes.
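
        Concretely, the setup I have in mind looks something like the sketch below. This is only an illustration: the class and queue names are mine, and whether iOS actually lets an AVCaptureSession run on the front camera while an ARSession holds the rear camera is exactly the open question.

        import ARKit
        import AVFoundation

        // Hypothetical sketch: rear-camera ARKit world tracking plus a plain
        // AVFoundation capture of the front camera, with no face tracking.
        final class DualFeedController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
            // Rear camera: ARKit world tracking (userFaceTrackingEnabled left false).
            let arSession = ARSession()

            // Front camera: ordinary video capture, frames go to Core ML / Vision.
            let captureSession = AVCaptureSession()
            private let videoQueue = DispatchQueue(label: "front-camera-frames")

            func start() {
                arSession.run(ARWorldTrackingConfiguration())

                guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                                for: .video,
                                                                position: .front),
                      let input = try? AVCaptureDeviceInput(device: frontCamera),
                      captureSession.canAddInput(input) else { return }
                captureSession.addInput(input)

                let output = AVCaptureVideoDataOutput()
                output.setSampleBufferDelegate(self, queue: videoQueue)
                guard captureSession.canAddOutput(output) else { return }
                captureSession.addOutput(output)

                // May be interrupted if ARKit has exclusive use of the camera pipeline.
                captureSession.startRunning()
            }

            func captureOutput(_ output: AVCaptureOutput,
                               didOutput sampleBuffer: CMSampleBuffer,
                               from connection: AVCaptureConnection) {
                // Hand each front-camera frame to Core ML / Vision here.
            }
        }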


        The comment says "Note: 2020 iPads do not support user face-tracking while world tracking." This almost suggests the issue is related exclusively to *tracking*. Simultaneous front/back camera feed support was only added in 2019, I believe, with a new API for starting a capture session that uses both cameras. Since ARKit implicitly takes over one of the cameras, does that make what I want impossible?
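
        If the 2019 API in question is AVCaptureMultiCamSession (iOS 13), device-level support can at least be queried up front; note, though, that multi-cam support says nothing about whether a camera can be shared with a running ARSession:

        import AVFoundation

        // Assuming the "new API" referred to is AVCaptureMultiCamSession (iOS 13+):
        if AVCaptureMultiCamSession.isMultiCamSupported {
            // This device can run front and back capture inside one
            // AVCaptureMultiCamSession, but ARKit owning one camera is a
            // separate constraint that this check does not answer.
        }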


        In short: can I use ARKit to do rear-camera world tracking and simultaneously receive and process front-camera data? If so, how? A code example would be much appreciated. I hope there is a solution.