I am using Unity with ARKit on an iPhone 8 and trying out the UnityARImageAnchor scene, where markers can be tracked and become anchors in the scene.
I am trying to track this marker set https://github.com/artoolkit/artoolkit5/blob/master/doc/patterns/Cubes/cube00-05-a4.pdf, which I printed with a 26 mm edge length. I set the marker size correctly in Unity and verified it in Xcode. Even though the marker is clearly visible on the iPhone screen, tracking is very poor and most of the time the marker cannot be tracked at all. I also tried the same markers with a 58 mm edge length, and they work much better. It's strange that the smaller markers yield such poor tracking performance in comparison.
Any ideas why the size should have such a large impact, even though the small marker appears on screen at good resolution and at a reasonable size? Is ARKit using a much lower-resolution image internally for the tracking?
I'm stuck with the same question: I want to keep markers as small as possible, since I am not necessarily covering them with AR visuals, and I also want to minimize their visual impact on bystanders in a public space.
My best guess is that ARKit saves processing power at runtime by requiring a minimum on-screen size before its image recognition algorithms will consider a marker at all.
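That minimum-on-screen-size hypothesis can be sanity-checked with a simple pinhole-camera projection: the 26 mm marker covers far fewer pixels than the 58 mm one at the same distance, which would explain the gap if detection needs some minimum pixel footprint. The sketch below uses assumed values (a 1920 px wide video feed and a ~60° horizontal field of view); the real iPhone 8 camera parameters may differ, so treat the numbers as illustrative only.

```python
import math

# Assumed camera parameters (hypothetical, for illustration only):
# ARKit video feed width and horizontal field of view.
IMAGE_WIDTH_PX = 1920
HFOV_DEG = 60.0

def marker_width_px(marker_width_m: float, distance_m: float) -> float:
    """Projected on-screen width in pixels of a flat marker facing the camera,
    using a pinhole-camera model."""
    focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HFOV_DEG / 2))
    return marker_width_m * focal_px / distance_m

# Compare the two printed edge lengths at a 0.5 m viewing distance.
for edge_mm in (26, 58):
    px = marker_width_px(edge_mm / 1000, 0.5)
    print(f"{edge_mm} mm marker at 0.5 m -> {px:.0f} px wide")
```

Under these assumptions the 26 mm marker spans well under 100 px at half a metre, while the 58 mm one spans roughly twice that, so it is plausible the smaller print simply drops below whatever internal threshold ARKit applies.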