Generating MeshResource.Skeleton for IKComponent in Reality Composer Pro/Xcode

I'm building a visionOS 2.0 app where the Apple Vision Pro user can change the position of the end effector of a robot model, which was generated in Reality Composer Pro (RCP) from primitive shapes. I'd like to use an IKComponent to achieve this functionality, following the example code here. I am able to load my entity and access its MeshResource following the IKComponent example code, but on the line

let modelSkeleton = meshResource.contents.skeletons[0]

I get an error since my MeshResource does not include a skeleton.

Is there some way to directly generate the skeleton with my entities in RCP, or is there a way to add a skeleton generated in Xcode to an existing MeshResource that corresponds to my entities generated in RCP? I have tried using MeshSkeletonCollection.insert() with a skeleton I generated in Xcode, but I cannot figure out how to assign this skeleton collection to the MeshResource of the entity.

Hi @AVPDeveloper

RealityKit's IKComponent is currently best suited for animating skeletal models, and does not work out of the box with models assembled from multiple primitive shape entities in Reality Composer Pro. If this is a feature you'd like to see Reality Composer Pro support in the future, please file a feedback request at https://feedbackassistant.apple.com and post the FB number here so I can take a look or forward it to the relevant engineers. Thanks!

That being said, it is technically possible to achieve the effect you're after. The key is to first create an invisible mesh with a custom skeleton that matches the layout of the entities in your hierarchy. Then, position your entities at each of the joint positions of the skeleton using jointTransforms. By doing this, you can have your primitive entities match up with the joint transforms of the invisible skeleton, which you can then apply inverse kinematics to.

Here's the approach I took to achieve this effect:

To start, and since I don't know the exact hierarchy of your custom robot entity built with primitives in RCP, I built an entity hierarchy in code in a RealityView that could represent the joints of an arm.

// Create entities to represent the joints of an arm.
let rootJoint = ModelEntity(mesh: .generateBox(size: 0.03), materials: [SimpleMaterial(color: .red, isMetallic: false)])
let armJoint = ModelEntity(mesh: .generateBox(size: 0.025), materials: [SimpleMaterial(color: .blue, isMetallic: false)])
let forearmJoint = ModelEntity(mesh: .generateBox(size: 0.02), materials: [SimpleMaterial(color: .green, isMetallic: false)])
let handJoint = ModelEntity(mesh: .generateBox(size: 0.015), materials: [SimpleMaterial(color: .yellow, isMetallic: false)])
// Position entities hierarchically: root -> arm_joint -> forearm_joint -> hand_joint.
rootJoint.name = "root"
armJoint.setParent(rootJoint)
armJoint.name = "arm_joint"
armJoint.position = [0.1, 0, 0]
forearmJoint.setParent(armJoint)
forearmJoint.name = "forearm_joint"
forearmJoint.position = [0.3, 0, 0]
handJoint.setParent(forearmJoint)
handJoint.name = "hand_joint"
handJoint.position = [0.2, 0, 0]

In your case, feel free to replace this with code that loads your root entity from Reality Composer Pro into a variable named rootJoint.
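For reference, loading an entity authored in Reality Composer Pro usually looks something like the sketch below. The scene name "Robot" is a placeholder; substitute the name of the robot entity in your own project's Reality Composer Pro package.

```swift
import RealityKit
import RealityKitContent  // The Swift package generated by Reality Composer Pro.

// Inside the RealityView's make closure.
// "Robot" is a placeholder; use the name of your robot entity in your
// Reality Composer Pro package.
if let rootJoint = try? await Entity(named: "Robot", in: realityKitContentBundle) {
    content.add(rootJoint)
}
```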

With the root entity of your hierarchy in hand, the next step is to programmatically generate a skeleton from it. Two helper methods are essential to doing this:

  • getEntityHierarchyAsArray() takes a root entity and flattens its hierarchy into a one-dimensional array.
  • createSkeletonFromEntityArray() takes that array and attempts to create a skeleton from it, with rest pose transforms and parent indices derived from the current transforms and entity hierarchy.

/// Takes a root entity and traverses through its hierarchy breadth first to create a flat array of all the entities in the hierarchy.
func getEntityHierarchyAsArray(rootEntity: Entity) -> [Entity] {
    // Prepare the entity array.
    var entities: [Entity] = []
    
    // Create the queue which will be used to traverse the entity hierarchy breadth first.
    var queue = [rootEntity]
    
    // There are more entities to traverse as long as the queue isn't empty.
    while !queue.isEmpty {
        // Get the first entity in the queue.
        let entity = queue.removeFirst()
        // Add it to the entities array.
        entities.append(entity)
        
        // Enqueue all of the entity's children.
        for child in entity.children {
            queue.append(child)
        }
    }
    
    // Return the array of entities. The root entity will be the first entity in the array.
    return entities
}

/// Creates a skeleton from an array of joint entities created with `getEntityHierarchyAsArray`.
func createSkeletonFromEntityArray(skeletonId: String, jointEntities: [Entity]) -> MeshResource.Skeleton? {
    // Prepare the arrays needed to create the skeleton.
    var jointNames: [String] = []
    var restPoseTransforms: [Transform] = []
    var parentIndices: [Int?] = []
    
    // Iterate through each of the entities.
    for jointEntity in jointEntities {
        // Set a unique joint name.
        jointNames.append(jointEntity.name + UUID().uuidString)
        // Record the entity's transform.
        restPoseTransforms.append(jointEntity.transform)
        // Find the entity's parent index, if it has one.
        let jointParent = jointEntity.parent
        parentIndices.append(jointParent == nil ? nil : jointEntities.lastIndex(of: jointParent!))
    }
    
    // Create a skeleton from the joint names, transforms and parent indices.
    return MeshResource.Skeleton(id: skeletonId,
                                 jointNames: jointNames,
                                 inverseBindPoseMatrices: .init(repeating: matrix_identity_float4x4, count: jointNames.count),
                                 restPoseTransforms: restPoseTransforms,
                                 parentIndices: parentIndices)
}

The following snippet demonstrates how to use these functions to create a skeleton from the rootJoint of your entity hierarchy.

// Define a unique skeleton id.
let skeletonId = "customSkeleton"

// Convert the root entity hierarchy into an array of "joint" entities.
let jointEntities = getEntityHierarchyAsArray(rootEntity: rootJoint)

// Create a custom skeleton from the entity array.
guard let customSkeleton = createSkeletonFromEntityArray(skeletonId: skeletonId, jointEntities: jointEntities) else {
    assertionFailure("Failed to create custom skeleton.")
    return
}

Continued in next post.

In order for inverse kinematics to be able to operate on the skeleton, the skeleton needs to be part of a mesh on an entity. Since you'll be visualizing the skeleton with the jointEntities instead of a skinned mesh, create an empty mesh with the skeleton and add it to an entity like so.

// In order for an entity to use IK it needs a mesh with a skeleton,
// so create a dummy mesh part referencing the skeleton.
var newPart = MeshResource.Part(id: "dummyPart", materialIndex: 0)
newPart.skeletonID = skeletonId
// Make the mesh part consist of a single invisible triangle.
newPart.positions = .init([[0,0,0], [0,0,0], [0,0,0]])
newPart.triangleIndices = .init([0, 1, 2])
newPart.jointInfluences = .init(influences: .init([.init(), .init(), .init()]), influencesPerVertex: 1)
// Add the invisible mesh part and skeleton to the mesh contents.
var meshContent = MeshResource.Contents()
meshContent.models = [.init(id: "dummyModel", parts: [newPart])]
meshContent.skeletons = [customSkeleton]

// Create a model entity to contain the skeleton mesh.
guard let meshResource = try? MeshResource.generate(from: meshContent) else {
    assertionFailure("Failed to create skeleton mesh resource.")
    return
}
let skeletonContainerEntity = ModelEntity(mesh: meshResource, materials: [])

// Add the skeleton model entity and the root joint entity to the scene.
let skeletonRootEntity = Entity()
skeletonContainerEntity.setParent(skeletonRootEntity)
jointEntities.first?.setParent(skeletonRootEntity)  // `jointEntities.first` is the root joint entity.
content.add(skeletonRootEntity)

From here, you can follow the IKComponent documentation you previously referenced to set up inverse kinematics. Here's how I set up my IKComponent.

// Create an IK rig for the skeleton.
guard var customIKRig = try? IKRig(for: customSkeleton) else {
    assertionFailure("Failed to create IK rig from custom skeleton.")
    return
}

// Set the global forward kinematics weight to zero so that the arm is entirely moved by inverse kinematics.
customIKRig.globalFkWeight = 0.0

// Define the joint constraints for the rig.
let rootConstraintName = "root_constraint"
let handConstraintName = "end_constraint"
customIKRig.constraints = [
    // Constrain the root joint's position.
    .point(named: rootConstraintName, on: customSkeleton.joints.first!.name, positionWeight: [5.0, 5.0, 5.0]),
    // Add a point demand to the hand joint.
    // This will be used to set a target position for the hand.
    .point(named: handConstraintName, on: customSkeleton.joints.last!.name)
]
    
// Create an IK resource for the custom IK rig.
guard let ikResource = try? IKResource(rig: customIKRig) else {
    assertionFailure("Failed to create IK resource.")
    return
}
    
// Add an IK component to the entity using the new resource.
skeletonContainerEntity.components.set(IKComponent(resource: ikResource))

Finally, subscribe to the SkeletalPoseUpdateComplete event and position the jointEntities to match their corresponding skeleton joint transforms whenever it is triggered.

// Subscribe to the skeletal pose update complete event and update the joint entities whenever it is triggered.
content.subscribe(to: AnimationEvents.SkeletalPoseUpdateComplete.self) { event in
    for i in 0..<skeletonContainerEntity.jointTransforms.count {
        jointEntities[i].transform.rotation = skeletonContainerEntity.jointTransforms[i].rotation
        jointEntities[i].transform.translation = skeletonContainerEntity.jointTransforms[i].translation
    }
}

I then use RealityKit's Entity Component System to set a target position for the hand constraint every frame. Let me know if you would like an example of how to do that!

@Vision Pro Engineer Fantastic, thank you for the detailed response. This approach makes sense, but I am still wondering about setting the hand position at every frame. Could you please share an implementation for this using the Entity Component System as you mentioned? I'd like to use a DragGesture() targeted to the 'handJoint' to direct its position at each frame, which I have been able to implement without IK. I'm thinking along these lines, but I do not know what I am missing (note this example code does not successfully update the handJoint and other joint positions):

.gesture(
    DragGesture()
        .targetedToEntity(shapes.handJoint)
        .onChanged { value in
            let currentHandPosition = value.convert(value.location3D, from: .local, to: shapes.skeletonContainerEntity)

            var ikComponent = shapes.skeletonContainerEntity.components[IKComponent.self]!
            ikComponent.solvers[0].constraints["end_constraint"]!.target.translation = currentHandPosition
            shapes.skeletonContainerEntity.components.set(ikComponent)
        }
)

Thank you very much!

Hi @AVPDeveloper

Here's how you can use RealityKit's Entity Component System to update the target position of an IK constraint in response to a drag gesture.

First, create a component and a system responsible for updating the target position of a given IK constraint every frame.

/// Stores the name of an IK constraint and its target position, along with an optional helper entity for visualizing the target position.
struct IKTargetPositionerComponent: Component {
    let targetConstraintName: String
    var targetPosition: SIMD3<Float>
    let targetVisualizerEntity: Entity?
}

/// Updates the target position of an IK constraint every frame.
struct IKTargetPositionerSystem: System {
    let query: EntityQuery = EntityQuery(where: .has(IKTargetPositionerComponent.self))
    
    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        // Get all entities with an `IKTargetPositionerComponent`.
        let entities = context.entities(matching: self.query, updatingSystemWhen: .rendering)
        
        for entity in entities {
            // Get the necessary IK components attached to the entity.
            guard var ikComponent = entity.components[IKComponent.self],
                  let ikTargetPositionerComponent = entity.components[IKTargetPositionerComponent.self] else {
                assertionFailure("Entity is missing required IK components.")
                return
            }
            
            // Set the target position of the target constraint.
            ikComponent.solvers[0].constraints[ikTargetPositionerComponent.targetConstraintName]!.target.translation = ikTargetPositionerComponent.targetPosition
            // Fully override the rest pose, allowing IK to fully move the joint to the target position.
            ikComponent.solvers[0].constraints[ikTargetPositionerComponent.targetConstraintName]!.animationOverrideWeight.position = 1.0
            
            // Position the target visualizer entity at the target position.
            ikTargetPositionerComponent.targetVisualizerEntity?.position = ikTargetPositionerComponent.targetPosition

            // Apply component changes.
            entity.components.set(ikComponent)
        }
    }
}

Be sure to register the IKTargetPositionerSystem. You can do this in the initializer of your view.

init() {
    IKTargetPositionerSystem.registerSystem()
}

Next, add an InputTargetComponent and a collision shape to the hand joint entity so that it can be targeted by gestures. I modified the constraint initialization code to achieve this like so:

// Get the index of the hand joint.
// May be different for your custom skeleton.
let handJointIndex = customSkeleton.joints.count - 1

// Define the joint constraints for the rig.
let rootConstraintName = "root_constraint"
let handConstraintName = "end_constraint"
customIKRig.constraints = [
    // Constrain the root joint's position.
    .point(named: rootConstraintName, on: customSkeleton.joints.first!.name, positionWeight: [5.0, 5.0, 5.0]),
    // Add a point demand to the hand joint.
    // This will be used to set a target position for the hand.
    .point(named: handConstraintName, on: customSkeleton.joints[handJointIndex].name)
]

// Add an input target and collision shape to the hand joint entity
// so that it can be the target of a drag gesture.
handJointEntity = jointEntities[handJointIndex]
handJointEntity.components.set(InputTargetComponent())
handJointEntity.generateCollisionShapes(recursive: false)

Where, for the sake of this example, handJointEntity is defined as a state variable so that it can be accessed in the drag gesture in the final step.

@State var handJointEntity: Entity = Entity()

Then, add the IKTargetPositionerComponent to the skeletonContainerEntity.

// Create a helper entity to visualize the target position.
let targetVisualizerEntity = ModelEntity(mesh: .generateSphere(radius: 0.015), materials: [SimpleMaterial(color: .magenta, isMetallic: false)])
targetVisualizerEntity.components.set(OpacityComponent(opacity: 0.5))
targetVisualizerEntity.setParent(skeletonContainerEntity)
// Create the IK target positioner component and add it to the skeleton container entity.
skeletonContainerEntity.components.set(IKTargetPositionerComponent(targetConstraintName: handConstraintName,
                                                                   targetPosition: handJointEntity.position(relativeTo: skeletonContainerEntity),
                                                                   targetVisualizerEntity: targetVisualizerEntity))

The targetVisualizerEntity is useful for visualizing the target position and making sure everything is working correctly, but feel free to leave it out in your final implementation.

Finally, modify your drag gesture to update the targetPosition of the skeletonContainerEntity's IKTargetPositionerComponent.

.gesture(
    DragGesture()
        .targetedToEntity(handJointEntity)
        .onChanged { value in
            let currentHandPosition = value.convert(value.location3D, from: .local, to: skeletonContainerEntity)
            
            skeletonContainerEntity.components[IKTargetPositionerComponent.self]?.targetPosition = currentHandPosition
        }
)

Let me know if you run into any issues getting this to work!

Hi @Vision Pro Engineer! Thank you so much for this information. The implementation works great for controlling the entities based on a single gesture from the end effector.

I am interested in extending this functionality so the user can select and drag different joints along the hierarchy, and have the IKComponent solve the IK for the desired joint positions regardless of which joint is selected.

How would I go about implementing this?
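One possible direction, sketched here under the assumption that you keep the IKTargetPositionerComponent approach from the earlier posts: give every joint in the rig its own point constraint, make every joint entity an input target, and use the dragged entity to look up which constraint to drive. Everything below other than the RealityKit API (constraintNameByEntity, the "constraint_N" naming scheme) is a hypothetical helper for illustration, and this sketch is untested.

```swift
// Hypothetical sketch: one point constraint per joint so any joint can be dragged.
// `customIKRig`, `customSkeleton`, `jointEntities`, and `skeletonContainerEntity`
// come from the earlier posts; `constraintNameByEntity` is made up for illustration.

// Create a point constraint for every joint and remember which entity maps to it.
var constraintNameByEntity: [Entity: String] = [:]
customIKRig.constraints = customSkeleton.joints.enumerated().map { index, joint in
    let constraintName = "constraint_\(index)"
    constraintNameByEntity[jointEntities[index]] = constraintName
    return .point(named: constraintName, on: joint.name)
}

// Make every joint entity a target for drag gestures.
for jointEntity in jointEntities {
    jointEntity.components.set(InputTargetComponent())
    jointEntity.generateCollisionShapes(recursive: false)
}
```

Then a single gesture can target any joint entity and swap the active constraint into the IKTargetPositionerComponent, for example:

```swift
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Look up which constraint corresponds to the dragged joint entity.
            guard let constraintName = constraintNameByEntity[value.entity] else { return }
            let targetPosition = value.convert(value.location3D, from: .local, to: skeletonContainerEntity)
            skeletonContainerEntity.components.set(
                IKTargetPositionerComponent(targetConstraintName: constraintName,
                                            targetPosition: targetPosition,
                                            targetVisualizerEntity: nil))
        }
)
```

Note that you may also need to zero out the position weights and animation override weights of the inactive constraints each frame so that only the dragged joint pulls on the skeleton.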
