We're well into COVID times now, so building vision apps that involve people wearing masks should be expected. Vision's face rectangle detector works perfectly well on faces with masks, but that's not the case for face landmarks. Even when someone is wearing a mask, many landmarks remain exposed (e.g., pupils, eyes, nose, and eyebrows). When can we expect face landmark detection to work on faces with masks?
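For reference, here is a minimal sketch of how the two requests compare, using the public Vision APIs (`VNDetectFaceRectanglesRequest` and `VNDetectFaceLandmarksRequest`); the function name and structure are illustrative, not from the session:

```swift
import Vision
import CoreGraphics

// Sketch: run face-rectangle detection (works on masked faces)
// and landmark detection (currently unreliable on masked faces)
// on the same image, then report what each request returned.
func analyzeFaces(in image: CGImage) throws {
    let rectanglesRequest = VNDetectFaceRectanglesRequest()
    let landmarksRequest = VNDetectFaceLandmarksRequest()

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([rectanglesRequest, landmarksRequest])

    // Face rectangles: bounding boxes in normalized coordinates.
    for face in rectanglesRequest.results ?? [] {
        print("Face rectangle:", face.boundingBox)
    }

    // Landmarks: may be missing or degraded when the face is masked.
    for face in landmarksRequest.results ?? [] {
        if let landmarks = face.landmarks {
            print("Landmark confidence:", landmarks.confidence)
        } else {
            print("Face found, but no landmarks returned")
        }
    }
}
```

Checking `VNFaceLandmarks2D.confidence` on the landmark results is one way to gauge how much the mask is degrading detection in practice.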
At around the 5-minute mark of "Discover advancements in iOS camera capture: Depth, focus, and multitasking", you state that TrueDepth delivers relative depth. This appears to contradict the official documentation: in Capturing Photos with Depth, you state explicitly that the TrueDepth camera measures depth directly with absolute accuracy. Why the change?
Despite the host stating that RecognizedObjectsVisualizer.swift would be available on the session page, it does not seem to be included in the project available for download. I hope you can provide it so we can experiment with the playground project he started.