My organisation Ghyston is looking at a similar problem at the moment. One idea would be to use QR codes with the Vision framework to track the object. The QR codes could be attached at fixed positions on the real model, giving a relatively simple way to track its movement, though I'm not sure it would be precise enough for you.
A more involved solution might be to take a sample point cloud from the real-world model and then use an algorithm like ICP (Iterative Closest Point) to map it to your CAD model, but again, if the point cloud isn't particularly accurate, I'm not sure that would benefit you either.
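For anyone curious what an ICP step actually involves, here's a minimal 2-D sketch of a single iteration: brute-force nearest-neighbour matching followed by a closed-form rigid alignment (2-D Procrustes). A real pipeline would run this in 3-D over many iterations with outlier rejection; all names below are illustrative, not from any particular library.

```swift
import Foundation

struct Point { var x: Double; var y: Double }

// One ICP iteration: match each source point to its nearest target point,
// then compute the rigid transform (rotation theta + translation tx, ty)
// that best aligns the source onto those matches, in closed form.
func icpStep(source: [Point], target: [Point]) -> (theta: Double, tx: Double, ty: Double) {
    // 1. Nearest-neighbour correspondences (brute force; fine for small clouds).
    let matches = source.map { s in
        target.min(by: { hypot($0.x - s.x, $0.y - s.y) < hypot($1.x - s.x, $1.y - s.y) })!
    }
    // 2. Centroids of both point sets.
    let n = Double(source.count)
    let cs = source.reduce((0.0, 0.0)) { ($0.0 + $1.x, $0.1 + $1.y) }
    let ct = matches.reduce((0.0, 0.0)) { ($0.0 + $1.x, $0.1 + $1.y) }
    let (csx, csy) = (cs.0 / n, cs.1 / n)
    let (ctx, cty) = (ct.0 / n, ct.1 / n)
    // 3. Cross-covariance terms give the optimal rotation angle directly in 2-D.
    var sxx = 0.0, sxy = 0.0, syx = 0.0, syy = 0.0
    for (s, t) in zip(source, matches) {
        sxx += (s.x - csx) * (t.x - ctx); sxy += (s.x - csx) * (t.y - cty)
        syx += (s.y - csy) * (t.x - ctx); syy += (s.y - csy) * (t.y - cty)
    }
    let theta = atan2(sxy - syx, sxx + syy)
    // 4. Translation moves the rotated source centroid onto the target centroid.
    let tx = ctx - (csx * cos(theta) - csy * sin(theta))
    let ty = cty - (csx * sin(theta) + csy * cos(theta))
    return (theta, tx, ty)
}
```

You'd iterate this until the transform stops changing; convergence quality depends heavily on how clean the captured point cloud is, which is exactly the concern above.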
The iPhone 7 Plus, 8 Plus and X include dual rear cameras and might give better results too (if you're using an older model at the moment). Camera specs can be seen here:
If you'd be interested in working with us to achieve higher (mm) accuracy with AR, please contact me / Ghyston.
I've attempted tracking using QR codes: recognition of the code and its embedded value is fast and accurate, but reprojecting the marker's position and orientation is neither accurate nor consistent at the moment. It's a little better when the QR code sits on a horizontal surface.
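One thing that can take the edge off that inconsistency is smoothing the detected marker pose across frames rather than trusting each detection in isolation. Below is a generic exponential-smoothing sketch (not an ARKit API; the type and names are mine) for a 2-D pose; the heading angle is blended via its sin/cos so it behaves correctly at the ±π wrap-around.

```swift
import Foundation

// Exponential smoothing for a jittery marker pose (x, y, heading angle).
struct PoseFilter {
    var alpha: Double            // 0...1, higher = trust new samples more
    var x = 0.0, y = 0.0
    var s = 0.0, c = 1.0         // running sin/cos of the heading
    var primed = false

    // Feed in each raw detection; get back the smoothed pose.
    mutating func update(x nx: Double, y ny: Double, theta: Double) -> (x: Double, y: Double, theta: Double) {
        if !primed {
            // First sample initialises the filter directly.
            (x, y, s, c, primed) = (nx, ny, sin(theta), cos(theta), true)
        } else {
            x += alpha * (nx - x)
            y += alpha * (ny - y)
            s += alpha * (sin(theta) - s)
            c += alpha * (cos(theta) - c)
        }
        return (x, y, atan2(s, c))
    }
}
```

A small `alpha` damps jitter at the cost of lag, so it suits markers that are roughly stationary relative to the model, which is the fixed-marker setup described above.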
Point cloud mapping is also a bit tricky from what I've tried.
I'm hoping to have object recognition built into the system, which would allow mapping to a model and alignment of the scene.
Is this the sort of things you guys are looking to do? https://www.youtube.com/watch?v=6W7_ZssUTDQ
Vuforia has accurate 3D model recognition and tracking, so you can overlay content on top of objects based on their CAD data. These features should be available this month.
(I work for Vuforia)