AR app behaving differently for different people

I have written a small app which recognizes an image and then places spheres on the four corners and in the center. This is the code which places them:

      let width = Float(imageAnchor.referenceImage.physicalSize.width)
      let height = Float(imageAnchor.referenceImage.physicalSize.height)
      let x = imageAnchor.transform.columns.3.x
      let y = imageAnchor.transform.columns.3.y
      let z = imageAnchor.transform.columns.3.z

      let lowerLeft = SIMD3<Float>(x - width/2, y - height/2, z)
      let lowerRight = SIMD3<Float>(x + width/2, y - height/2, z)
      let upperRight = SIMD3<Float>(x + width/2, y + height/2, z)
      let upperLeft = SIMD3<Float>(x - width/2, y + height/2, z)
      let center = SIMD3<Float>(x, y, z)

      self.model_01.position = lowerLeft  // pink
      self.model_02.position = lowerRight // blue
      self.model_03.position = upperRight // red
      self.model_04.position = upperLeft  // green
      self.model_05.position = center     // yellow
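
One thing worth noting about the block above: it reads only the translation column of the anchor transform, so the ± width/2 and ± height/2 offsets are applied along world axes and ignore the image's rotation. A rotation-aware variant (a sketch only; `cornerPositions` is a hypothetical helper name, and ARKit lays an image anchor's content in the anchor's local X/Z plane with the origin at the image center) would push local offsets through the full transform:

```swift
import ARKit
import simd

// Sketch: compute corner and center offsets in the image anchor's local
// space, then transform them into world space with the full 4x4 transform
// so rotation and translation are both applied.
// `cornerPositions` is a placeholder name, not from the original code.
func cornerPositions(for imageAnchor: ARImageAnchor) -> [SIMD3<Float>] {
    let w = Float(imageAnchor.referenceImage.physicalSize.width)
    let h = Float(imageAnchor.referenceImage.physicalSize.height)

    // Local offsets from the image center. Which Z sign corresponds to
    // "upper" vs "lower" depends on the image's orientation in the anchor,
    // so verify the ordering empirically.
    let local: [SIMD4<Float>] = [
        SIMD4<Float>(-w / 2, 0,  h / 2, 1), // lower left
        SIMD4<Float>( w / 2, 0,  h / 2, 1), // lower right
        SIMD4<Float>( w / 2, 0, -h / 2, 1), // upper right
        SIMD4<Float>(-w / 2, 0, -h / 2, 1), // upper left
        SIMD4<Float>( 0,     0,  0,     1), // center
    ]
    return local.map { p in
        let world = imageAnchor.transform * p
        return SIMD3<Float>(world.x, world.y, world.z)
    }
}
```

The five results could then be assigned to model_01 through model_05 in the same order as above.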

I have run this app on an iPhone 14 Pro Max and an iPhone X, both running iOS 16.3. On both devices, the spheres that should sit on the corners are placed noticeably inside the actual corners, though still in a rectangular pattern with the same aspect ratio as the image.

When my co-worker builds the app on his computer from the same source and uses the same reference image, the spheres are placed at the corners as they should be. He has an iPhone 11 Pro running iOS 15.7 and an iPhone X running iOS 16.3, and gets the same result on both.

Our values for width, height, x, y, and z are all the same, but somehow the outcome is still different. We've eliminated all the variables we can think of, like displaying the reference image on our laptop screens which are the same model.

What could possibly be causing this???

What triggers calling this code? It's possible that the imageAnchor's transform is not yet fully accurate when the image is first detected, which results in inconsistent placement of the spheres. To benefit from refinements of the tracked image's pose, you can do the following:

  1. Make sure you are running an ARImageTrackingConfiguration, or an ARWorldTrackingConfiguration with automaticImageScaleEstimationEnabled set to true and maximumNumberOfTrackedImages set to a value > 0.
  2. Whenever session(_:didUpdate:) is called, run the above code block to update the position of the spheres.

When running this code with the same reference image, you should get a similar result regardless of the device model.
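
The two steps above might look like this in RealityKit (a sketch only, assuming an `arView` property, a `referenceImages` set loaded from the asset catalog, and an `updateSpherePositions(for:)` method wrapping the placement code from the question; all of those names are placeholders):

```swift
import ARKit
import RealityKit

// Sketch of the configuration and delegate wiring described above.
// `ViewController`, `arView`, `referenceImages`, and
// `updateSpherePositions(for:)` are placeholder names.
extension ViewController: ARSessionDelegate {
    func runImageTracking() {
        let configuration = ARImageTrackingConfiguration()
        configuration.trackingImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 1
        arView.session.delegate = self
        arView.session.run(configuration)
    }

    // Called whenever tracked anchors change; re-running the placement here
    // picks up ARKit's ongoing refinements to the image's estimated pose.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors
            where imageAnchor.isTracked {
            updateSpherePositions(for: imageAnchor)
        }
    }
}
```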

I have two versions of this code which both exhibit the same problem, one in SceneKit and one in RealityKit.

For RealityKit, this code is called from session(_:didUpdate:).

For SceneKit, it is called in renderer(_:nodeFor:). I also added it to renderer(_:didUpdate:for:) just in case, but it didn't help.

I am using ARImageTrackingConfiguration in both, and I have set maximumNumberOfTrackedImages to 1.

I can share the whole file if you would like... just let me know if you want to see SceneKit, RealityKit, or both.

Thanks for your help!
