PeopleOcclusion not working?

Hi all,


I am running Apple's people occlusion sample on my A12 iPad mini. It is running iOS 13 beta 3, and I am using Xcode 11 beta 3. It builds and runs OK, but for whatever reason the occlusion itself doesn't work, whether it's turned on or off.


I am getting this output in the console:


2019-07-12 10:37:19.714564-0400 PeopleOcclusion[460:32143] [espresso] Warning: padding deconvolution MobileNetv2-Depth/FPN/p1/conv2d_transpose:0 in SAME mode will not be pad-invariant for all resolutions


Any ideas? Anyone have a similar issue?

Replies

I have the same problem on my iPad mini 5 (A12), iPadOS 13 beta 4.

I think iPad mini cannot use people occlusion.

very disappointing...

Can we get any confirmation of this from Apple? It doesn't make sense why the A12 iPad mini would be excluded. This would be highly disappointing if true.

Please file a bug report.

I am getting the same thing with the iPad Air 3, but I am on iPadOS. Also, my USDZ files with transparency no longer have transparency in ARKit/RealityKit or the Quick Look viewer. I am on Xcode 11.0 beta 4 and macOS Mojave 10.14.6.

Frankly, having to file a bug report on this raises eyebrows considering we are now at iOS 13b4, and the iPad Air 3 and iPad mini 5 have been available since March 18th; besides the 2018-2019 iPhones, they will be the most popular devices for this capability. The only difference I can think of is the 8-megapixel cameras shared by these two devices versus the 12-megapixel cameras in the iPad Pros and iPhones.
I for one will not be very happy to have spent $400 on an iPad Air 3 based on Apple's guidance in print, and all of the mentions on stage during WWDC 2019 of only requiring an A12. I will be even angrier if 30 days go by since my purchase without a solution, or a refined list of compatible devices that would let me return it and purchase the correct device, to say nothing of the time lost.

Hi I've filed a bug report on this, but haven't heard anything yet:


https://feedbackassistant.apple.com/feedback/6784030


Would be great to get some feedback from Apple on this.

I have just updated to iPadOS 13b5 and Xcode 11b5, and sadly it has still not been added for the iPad Air Gen 3 (2019). In both cases the back cameras are 8 megapixels versus 12 megapixels in the other A12 devices.


It should also be noted that the latest AR Quick Look in iOS and iPadOS 13 is also supposed to have people occlusion and face anchoring, and, if you have an A12X, depth of field and motion blur.


I am also not happy that my request has not been fulfilled for the iOS RealityKit app. My guess is that it's because I have not renewed my developer license, which would be another $99 on top of the $399 I have already spent on the iPad Air Gen 3. It also seems I have to spend another $800+ on a Mac that can run macOS 10.15 Catalina with Metal 3 capability, since that too has been mandated; without it, Reality Composer (macOS) just crashes when you try to do anything with it.

It works on iPad Mini 2019 A12 in Unity with the ARFoundation SDK ...

Still not working on my end, how about you?


I have filed a bug report today, since it is still not working in iOS 13b6 either.


I'd like to know if it works for anyone else on other devices.

Hmmm, will have to try that. What version of the ARFoundation SDK? The latest sample only does masking, not occlusion, yet.

Any updates on this? The bug report has been open for quite some time with no response. We'd really like to start developing to take advantage of this feature, but our primary target is the tablet. Disappointing that it isn't working yet.

Still not working with an iPad Pro 11 on iPadOS 13.1.

Also filed a bug report.


ARConfiguration.supportsFrameSemantics( .personSegmentationWithDepth )

returns false


But why? Shouldn't it be supported?

You need to check support for frame semantics on the specific configuration that you will run.


i.e.


ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
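To illustrate the point above, here is a minimal sketch of how the check and the opt-in fit together: query support on the concrete configuration class you intend to run (not the abstract `ARConfiguration` base class), then add the frame semantic before starting the session. The `sceneView` property is assumed to be an existing `ARSCNView` in your view controller, and people occlusion still requires an A12-class device or later.

```swift
import ARKit

// Check support on the concrete configuration you will actually run.
guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else {
    fatalError("People occlusion is not supported on this device.")
}

let configuration = ARWorldTrackingConfiguration()
// Opt in to people occlusion with depth.
configuration.frameSemantics.insert(.personSegmentationWithDepth)

// `sceneView` is an assumed, pre-existing ARSCNView.
sceneView.session.run(configuration)
```

Calling `supportsFrameSemantics(_:)` on `ARConfiguration` itself can return false even on supported hardware, because support varies per configuration subclass; that is why the check in the reply above uses `ARWorldTrackingConfiguration`.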