I'm checking out the "Image Anchors" functionality. If I understand it correctly, the system first detects multiple feature points of the marker in the image, and then uses those to compute the marker's position and orientation in the AR coordinate system.
However, I'm in a situation where I have 2D-3D coordinate correspondences produced by my own code for an object (not a marker), and would like to estimate its pose relative to the camera. My research suggests I could use the Perspective-n-Point (PnP) algorithm with RANSAC, which is implemented in OpenCV for example, but I'm fairly sure ARKit uses something similar internally after the marker is detected in the image. Is there any way to access that lower-level functionality so that I can get an object's anchor without handing the system a 2D marker, but instead by providing my own 2D-3D coordinate correspondences?