Is there really no AVCaptureDevice that returns depth data (via AVCaptureDepthDataOutput) for any of the rear cameras on a 2020 iPad Pro with LiDAR?
Will something like a .builtInTOFCamera source device become available in iPadOS 13.6, or is rear-camera background substitution on the 2020 iPad Pro (which shipped in March) out of the question until iPadOS 14 (due in late fall) and the all-new APIs in ARKit 4?
[Apple described the APIs in "Enhancing Live Video with TrueDepth Camera Data" (2018), with accompanying sample code that performs real-time background substitution (similar to "Selfie Scenes" in Apple's Clips app). It works with the front TrueDepth camera and, on more recent iPhones, with the rear dual cameras by specifying .builtInDualWideCamera or .builtInDualCamera as the source device instead of .builtInTrueDepthCamera.]
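For concreteness, here is a minimal sketch of the probe we are running, modeled on that sample's discovery step; the device-type list is our own choice, with .builtInTrueDepthCamera included only as a front-camera control case:

```swift
import AVFoundation

// Ask AVCaptureDevice.DiscoverySession which devices (if any) can feed an
// AVCaptureDepthDataOutput. A device qualifies only if at least one of its
// formats advertises supported depth data formats.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInDualWideCamera,
                  .builtInWideAngleCamera, .builtInTrueDepthCamera],
    mediaType: .video,
    position: .unspecified)

for device in discovery.devices {
    let hasDepth = device.formats.contains { !$0.supportedDepthDataFormats.isEmpty }
    print("\(device.localizedName): depth-capable = \(hasDepth)")
}
```

In our testing on the 2020 iPad Pro, no rear device appears to report any depth-capable formats.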
At WWDC20 last week, Apple indicated new support for applying video as a material on AR objects.
In the latest Reality Composer beta, we cannot find an option under Materials to associate a video with an object, which seems to preclude using video on objects in .reality files played back with AR Quick Look.
Are we "holding it wrong," or should we avoid AR Quick Look and call RealityKit/ARKit directly to use this feature?
It seems that whenever an external audio source of any kind is connected (hardwired or via Bluetooth), or whenever the built-in microphone is enabled in ReplayKit, the iPhone switches the AVAudioSession into "voiceChat" (or "videoChat") mode and will not even let us specify which microphone(s) to use, or any gain or parabolic settings.
This does not seem to happen if the audio originates from within the app (with the built-in microphone disabled), for example when the app is playing an audio file.
For the case where we are trying to use an external audio device, is there any way to tell ReplayKit to set (or leave) the iPhone's audio mode to the same mode used when the audio originates from within the app?
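For what it's worth, this is the session configuration we would like ReplayKit to respect (a sketch; the category options are just our current guess), yet the mode still reads back as voiceChat once recording starts with the microphone enabled:

```swift
import AVFoundation
import ReplayKit

// Request plain .playAndRecord with the .default mode, rather than the
// .voiceChat mode that ReplayKit appears to force, so that input selection
// and gain stay under our control.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .defaultToSpeaker])
    try session.setActive(true)
} catch {
    print("AVAudioSession configuration failed: \(error)")
}

// In our observation, the mode flips once the recorder starts with
// RPScreenRecorder.shared().isMicrophoneEnabled = true.
print("mode after activation: \(session.mode.rawValue)")
```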