Posts

I am able to track body motion using ARKit. In the session delegate's anchor-update callback I get the anchor values. I need to override the anchor/joint transform values in real time so that the USDZ character shows motion that I program rather than the tracked human motion. How can we achieve this? The session function is below:

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }

            // World transform of the root (hip) joint.
            let hipWorldPosition = bodyAnchor.transform
            // The skeleton geometry and the joint transforms relative to the root.
            let skeleton = bodyAnchor.skeleton
            let jointTransforms = skeleton.jointModelTransforms
            for (i, jointTransform) in jointTransforms.enumerated() {
                let parentIndex = skeleton.definition.parentIndices[i]
                // The root has no parent.
                guard parentIndex != -1 else { continue }
                let parentJointTransform = jointTransforms[parentIndex]
            }

            // Update the character anchor's position.
            let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
            characterAnchor.position = bodyPosition + characterOffset
            // Also copy over the rotation of the body anchor, because the skeleton's pose
            // in the world is relative to the body anchor's rotation.
            characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation

            if let character = character, character.parent == nil {
                characterAnchor.addChild(character)
                //characterAnchor1.addChild(character1!)
            }
        }
    }
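A minimal sketch of one way to do this, assuming the character is a BodyTrackedEntity and using RealityKit's jointNames/jointTransforms entity properties to write a programmed pose over the tracked one (the "head_joint" suffix and the spin animation are illustrative assumptions, not from the post). Note that ARKit rewrites the tracked pose every frame, so for fully scripted motion it may be cleaner to load the USDZ as a regular Entity and animate it directly:

    // Inside session(_:didUpdate:), after the body-anchor handling above.
    if let character = character {
        // Joint names are full paths such as ".../neck_3_joint/head_joint";
        // print character.jointNames to inspect the rig.
        if let headIndex = character.jointNames.firstIndex(where: { $0.hasSuffix("head_joint") }) {
            var headTransform = character.jointTransforms[headIndex]
            // Replace the tracked rotation with a programmed one,
            // e.g. a slow spin driven by the frame timestamp.
            let angle = Float(session.currentFrame?.timestamp ?? 0)
            headTransform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
            character.jointTransforms[headIndex] = headTransform
        }
    }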
Posted by RPiOS.
I have code where I get the body anchors and am able to track the human motion using ARKit. This is the same as the WWDC sample and it works fine. My next step is to be able to record/save the human motion and replay it when needed. The replay will be a 3D animation. What steps are needed to achieve this?
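A minimal sketch of the recording half, assuming you buffer each frame's pose from the body anchor (MotionFrame and recording are hypothetical names, not ARKit API):

    import ARKit

    // Hypothetical container for one captured pose. simd_float4x4 is not
    // Codable out of the box, so add custom coding before serializing to disk.
    struct MotionFrame {
        var timestamp: TimeInterval
        var rootTransform: simd_float4x4
        var jointTransforms: [simd_float4x4]
    }

    var recording: [MotionFrame] = []

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
            // Capture the full pose for this frame.
            recording.append(MotionFrame(
                timestamp: session.currentFrame?.timestamp ?? 0,
                rootTransform: bodyAnchor.transform,
                jointTransforms: bodyAnchor.skeleton.jointModelTransforms))
        }
    }

For replay, one approach is to step through the saved frames on a timer (e.g. a CADisplayLink) and write each frame's transforms onto the character's rig instead of the live tracking data.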
Posted by RPiOS.
I am trying to instantiate my ARView view controller from the AppDelegate. The code is below:

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let initialViewController = storyboard.instantiateViewController(withIdentifier: "StartViewMain") as! ViewController
        self.window?.rootViewController = initialViewController
        self.window?.makeKeyAndVisible()
        return true
    }

This doesn't give me the ARView. However, if I comment out the instantiation code in the AppDelegate and make Main.storyboard the initial view controller, it works fine. What could be the problem with this code? The end objective is to be able to show the ARView controller on demand. Below is the code for the ViewController.swift file. Thanks

    /*
     See LICENSE folder for this sample's licensing information.

     Abstract:
     The sample app's main view controller.
     */

    import UIKit
    import RealityKit
    import ARKit
    import Combine

    class ViewController: UIViewController, ARSessionDelegate {

        @IBOutlet var arView: ARView!

        // The 3D character to display.
        var character: BodyTrackedEntity?
        let characterOffset: SIMD3<Float> = [-1.0, 0, 0] // Offset the character by one meter to the left
        let characterAnchor = AnchorEntity()

        override func viewDidAppear(_ animated: Bool) {
            super.viewDidAppear(animated)
            arView.session.delegate = self
            arView.debugOptions.insert(.showStatistics)

            // If the iOS device doesn't support body tracking, raise a developer error for
            // this unhandled case.
            guard ARBodyTrackingConfiguration.isSupported else {
                fatalError("This feature is only supported on devices with an A12 chip")
            }

            // Run a body tracking configuration.
            let configuration = ARBodyTrackingConfiguration()
            arView.session.run(configuration)

            arView.scene.addAnchor(characterAnchor)

            // Asynchronously load the 3D character.
            var cancellable: AnyCancellable? = nil
            cancellable = Entity.loadBodyTrackedAsync(named: "character/robot").sink(
                receiveCompletion: { completion in
                    if case let .failure(error) = completion {
                        print("Error: Unable to load model: \(error.localizedDescription)")
                    }
                    cancellable?.cancel()
                }, receiveValue: { (character: Entity) in
                    if let character = character as? BodyTrackedEntity {
                        // Scale the character to human size.
                        character.scale = [1.0, 1.0, 1.0]
                        self.character = character
                        cancellable?.cancel()
                    } else {
                        print("Error: Unable to load model as BodyTrackedEntity")
                    }
                })
        }

        func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
            for anchor in anchors {
                guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }

                // Update the character anchor's position.
                let bodyPosition = simd_make_float3(bodyAnchor.transform.columns.3)
                characterAnchor.position = bodyPosition + characterOffset
                // Also copy over the rotation of the body anchor, because the skeleton's pose
                // in the world is relative to the body anchor's rotation.
                characterAnchor.orientation = Transform(matrix: bodyAnchor.transform).rotation

                if let character = character, character.parent == nil {
                    // Attach the character to its anchor as soon as
                    // 1. the body anchor was detected and
                    // 2. the character was loaded.
                    characterAnchor.addChild(character)
                }

                // Anchor transform in 3D.
                // Access the position of the root node.
                let hipWorldPosition = bodyAnchor.transform
                // Access the skeleton geometry.
                let skeleton = bodyAnchor.skeleton
                // Access the list of transforms of all joints relative to the root.
                let jointTransforms = skeleton.jointModelTransforms
                // Iterate over all the joints.
                for (i, jointTransform) in jointTransforms.enumerated() {
                    // Extract the parent index from the definition.
                    let parentIndex = skeleton.definition.parentIndices[i]
                    // Check that it's not the root (the root doesn't have a parent).
                    guard parentIndex != -1 else { continue }
                    // Find the transform of the parent joint.
                    let parentJointTransform = jointTransforms[parentIndex]
                    // Use this however you want...
                }
            }
        }
    }
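A likely cause, assuming the classic AppDelegate lifecycle with no SceneDelegate: when the storyboard is not set as the app's initial interface, UIKit never creates the window, so self.window is nil and the optional-chained calls silently do nothing. A minimal sketch of creating the window explicitly:

    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Create the window ourselves; UIKit only creates one automatically
        // when it loads an initial storyboard.
        self.window = UIWindow(frame: UIScreen.main.bounds)
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let initialViewController = storyboard.instantiateViewController(withIdentifier: "StartViewMain") as! ViewController
        self.window?.rootViewController = initialViewController
        self.window?.makeKeyAndVisible()
        return true
    }

If the project was created from an iOS 13+ template that uses a SceneDelegate, the same window setup belongs in scene(_:willConnectTo:options:), built from the provided UIWindowScene, instead.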
Posted by RPiOS.
Hello, I know that we can capture an object using RealityKit and create a 3D model of the real-world object. Can we also use RealityKit to scan a human and create a 3D model of the human along with the skeletal structure? Then we could use the same scanned human in motion capture and display the motion animation with the real person. Thanks & Regards, Rakesh
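For reference, the RealityKit capture API in question is PhotogrammetrySession (Object Capture, available on macOS 12+ and later on supported iOS devices). As documented it reconstructs a static textured mesh from photos; it does not generate a skeleton, so a scanned human would still need to be rigged in a separate tool before it could drive motion-capture animation. A minimal sketch of a capture job, with illustrative paths:

    import Foundation
    import RealityKit

    // Folder of source photos and the output model path (illustrative).
    let inputURL = URL(fileURLWithPath: "/tmp/HumanScanImages", isDirectory: true)
    let outputURL = URL(fileURLWithPath: "/tmp/HumanScan.usdz")

    let session = try PhotogrammetrySession(input: inputURL)

    // Ask for a medium-detail USDZ model.
    try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

    // Consume the output stream (requires an async context, e.g. a Task).
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, let result):
            if case .modelFile(let url) = result {
                print("Model written to \(url)")
            }
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }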
Posted by RPiOS.