2 Replies
      Latest reply on Aug 2, 2018 8:29 AM by Ysee-kzsln
      Ysee-kzsln Level 1 (0 points)

        I'm working on perspective tests with ARKit and SceneKit. The idea is to improve the 3D rendering when displaying a flat 3D model on the ground. I had already opened a ticket about another perspective problem, which is almost solved. (https://stackoverflow.com/questions/51377892/arkit-perspective-rendering?noredirect=1#comment89803924_51377892)



        However, after many tests displaying 3D models, I noticed that sometimes when I anchor a 3D model, its size (width and length) can differ...

        I usually display a 3D model that is 16 meters long and 1.5 meters wide, so you can imagine how much this distorts my rendering.

        I don't know why the displayed size of my 3D model may differ.

        Maybe it comes from the tracking or from my test environment.



        Below is the code I use to add my 3D model to the scene:







           func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
                guard let imageAnchor = anchor as? ARImageAnchor else { return }
                let referenceImage = imageAnchor.referenceImage
                let imageAnchorPosition = imageAnchor.transform.columns.3
                print("Image detected")
                let modelName = "couloirV2"
                //let modelName = "lamp"
                guard let object = VirtualObject
                    .filter({ $0.modelName == modelName })
                    .first else { fatalError("Cannot get model \(modelName)") }
                print("Loading \(object)...")
                self.sceneView.prepare([object], completionHandler: { _ in
                    self.updateQueue.async {
                        // Translate the object's position to the reference node position.
                        object.position.x = imageAnchorPosition.x
                        object.position.y = imageAnchorPosition.y
                        object.position.z = imageAnchorPosition.z
                        // Save the initial y value for slider handler function
                        self.tmpYPosition = object.position.y
                        // Match y node's orientation
                        object.orientation.y = node.orientation.y
                        print("Adding object to the scene")
                        // Translate on the z axis so the object lines up with the detected image.
                        var translation = matrix_identity_float4x4
                        translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
                        object.simdTransform = matrix_multiply(object.simdTransform, translation)
                        self.virtualObjectInteraction.selectedObject = object
                        self.sceneView.addOrUpdateAnchor(for: object)
                    }
                })
            }
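
        To check whether the anchored model actually comes out at the expected 16 m × 1.5 m, one option is to log the node's world-space dimensions right after placement. A minimal sketch (the helper name `logWorldSize` is mine, not from the project above):

           import SceneKit
           import simd

           /// Logs the world-space footprint of a node so a wrong scale
           /// can be spotted immediately after anchoring.
           func logWorldSize(of node: SCNNode) {
               let (minCorner, maxCorner) = node.boundingBox
               // boundingBox is in local space; scale it by the node's
               // accumulated world transform to get real-world meters.
               let w = node.simdWorldTransform
               let scaleX = simd_length(simd_float3(w.columns.0.x, w.columns.0.y, w.columns.0.z))
               let scaleZ = simd_length(simd_float3(w.columns.2.x, w.columns.2.y, w.columns.2.z))
               let length = (maxCorner.z - minCorner.z) * scaleZ
               let width  = (maxCorner.x - minCorner.x) * scaleX
               print("World size: \(length) m long × \(width) m wide")
           }

        Calling this right after addOrUpdateAnchor(for:) should show whether the node itself is mis-scaled or whether the distortion only comes from the rendering.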
        • Re: Bad scaling rendering ARKit
          Ysee-kzsln Level 1 (0 points)

          This case appears randomly after fast camera motion or during tracking initialization.

          • Re: Bad scaling rendering ARKit
            Ysee-kzsln Level 1 (0 points)

            I investigated the problem mentioned above today. I ran tests again in the lab and outside (in a parking lot), always placing the 3D model on a marker. The result is always the same: the scaling problem is random.

            I thought the scaling problem came from the environment (I usually ran my tests in a long corridor), so I then repeated the tests outside.

            I noticed that the scaling problem can appear when I move the camera too fast or when tracking is lost. However, I only add the 3D model when the tracking state is normal (ARCamera.TrackingState.normal).

            The code used to add the 3D model is the same as above.
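
            Since fast motion seems to be the trigger, one thing I'm trying is to watch for tracking degradation through the session delegate and avoid placing (or to re-validate) content while tracking is limited. A sketch, assuming the view controller is the session delegate (the class name ViewController is a placeholder):

               import ARKit

               extension ViewController: ARSessionObserver {
                   func session(_ session: ARSession, cameraDidChangeTrackingState camera: ARCamera) {
                       switch camera.trackingState {
                       case .normal:
                           // Safe to place or re-check anchored content.
                           print("Tracking normal")
                       case .limited(.excessiveMotion):
                           // Fast camera motion: a placement done now may come out mis-scaled.
                           print("Tracking limited: excessive motion")
                       case .limited(let reason):
                           print("Tracking limited: \(reason)")
                       case .notAvailable:
                           print("Tracking not available")
                       }
                   }
               }

            That at least makes it visible in the logs whether the bad scale correlates with a .limited state just before or after the anchor is added.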