Can we share planes/anchor points with others?

Does anyone know if it might be possible to share a plane with someone? Meaning a friend and I could look at the same scene from different angles based on our positions?

Replies

It's possible, but would require a bit of work as this is not something natively supported by Apple.


To do this the naive way, User 1 would "initiate" the experience and User 2 would attempt to "find" the same surface. However, since all you have is the extent of a surface, you would just have to assume that any surface found by User 2 that matches User 1's active surface (within reason) is the same one.
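
For example, a minimal check along those lines (the tolerance value and the idea of shipping User 1's plane extent over a network are my own assumptions, not anything ARKit provides):

```swift
import ARKit

/// Naive matching: treat a locally detected plane as "the same" surface
/// User 1 reported if its extent is within a tolerance of the reported size.
/// This ignores position and orientation entirely, so two similarly sized
/// tables would match just as happily.
func matchesReportedSurface(_ local: ARPlaneAnchor,
                            reportedWidth: Float,
                            reportedLength: Float,
                            tolerance: Float = 0.1) -> Bool {
    // ARPlaneAnchor.extent.x/.z are the plane's dimensions in meters.
    return abs(local.extent.x - reportedWidth) < tolerance
        && abs(local.extent.z - reportedLength) < tolerance
}
```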


Ideally, to make this really work well, you'd have to reconcile the difference between User 1's world and User 2's world. If you knew that User 2's world origin is [5, 2, -4] units away from User 1's, then you could "detect" that User 1 and User 2 are looking at the same plane. The best way to do this is to build a 3D point cloud map of the users' environment. I'm not sure Apple provides the entire world's point cloud, but ARFrame has a rawFeaturePoints property, which is an ARPointCloud local to that one frame. You could take these ARPointClouds and try to reconcile them on your own service. There will also be the issue of User 1 seeing User 2 in their camera frame and vice versa, which would add "bad" points to the point clouds.
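
As a rough sketch of the collection side (the accumulation strategy and the idea of uploading to your own service are assumptions; ARKit only hands you the sparse points it tracked in each frame):

```swift
import ARKit

/// Collects the per-frame sparse feature points into one growing cloud
/// that could later be serialized and sent to a shared service, where the
/// two users' clouds would be reconciled. Unbounded growth and duplicate
/// points are ignored here for brevity.
final class FeaturePointCollector: NSObject, ARSessionDelegate {
    private(set) var accumulatedPoints: [simd_float3] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let cloud = frame.rawFeaturePoints else { return }
        // rawFeaturePoints are already expressed in the session's
        // world coordinate system, so they can be appended directly.
        accumulatedPoints.append(contentsOf: cloud.points)
    }
}
```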


TL;DR: It's not supported by Apple. It'd be pretty hard, but you just *might* have enough information to do it yourself using your own service.

I thought about this some more today and wonder if this solution would work. Every ARCamera (a property of ARFrame, I believe) has a transform matrix that represents the translation and rotation from the user's camera to the origin of the scene; the origin is where the session began. My solution would involve two people, while in a session, pointing their cameras at the same real-world object (something small like a cup) and setting an anchor on that object. At that point, you would have a very close point in each user's coordinate system that they could share with the other. Take that point relative to their origin and do some math to figure out the other user's origin in their own coordinate system.
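
A rough sketch of that last step, assuming the two users have already exchanged the shared anchor's position and that the only difference between their sessions is translation (running with worldAlignment set to gravityAndHeading, or using a second shared point, would be needed to also recover any rotation between the sessions):

```swift
import simd

/// If both users set an anchor on the same real-world object (the "cup"),
/// the translation between their session origins is just the difference
/// between that anchor's position in each coordinate system.
func offsetBetweenOrigins(anchorInUser1Space p1: simd_float3,
                          anchorInUser2Space p2: simd_float3) -> simd_float3 {
    // A point at p2 in User 2's space sits at p2 + offset in User 1's space.
    return p1 - p2
}

/// Convert any point from User 2's coordinate system into User 1's,
/// under the translation-only assumption above.
func pointInUser1Space(_ pointInUser2Space: simd_float3,
                       offset: simd_float3) -> simd_float3 {
    return pointInUser2Space + offset
}
```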

> My solution would involve two people, while in a session, pointing their cameras at the same real-world object (something small like a cup) and setting an anchor on that object.


I think that the ability to plant a virtual 'buoy' is something ARKit could benefit from greatly. Perhaps using the reflection from pinging the LED flash, as an example...

For those interested, I got this working and it's pretty accurate.

Awesome! Would love to see a write-up or a GitHub project.

This use case for ARKit sounds fun. Could you please share your approach with a GitHub sample :)

+1 please share if you can

I will put this on github in the coming weeks and respond back here.

Just found this thread and am excited at the prospects. Looking forward to hearing about your progress, EpcMhwk

This sounds great. I love the way ARKit handles AR in that the user doesn't have to worry about having a tracking marker that they are always looking at, but one advantage the tracking-marker system had was that you could lock the two players onto the exact same point, and that could make for some pretty fun applications.

+1 I am also very interested in your experiment EpcMhwk

KMT:

"The ability to plant a virtual 'bouy' is something ARKit could benefit from greatly."

"Virtual 'bouy' - I think that is an excellent way to discribe what is need :-)

Sounds amazing! I just bumped into this thread while trying to figure out something similar. Can you share your findings?

Can't wait for Tuesday! Any chance you could show us a demo of this?