Best way to include dynamic elements in RealityKit

I am trying to work around RealityKit's limited support for code-generated elements, and I am looking for recommendations on creating dynamic elements that can change in response to other code.

Here are some ideas I am considering:
  • create pre-built models and animations that I can skip to particular frames in code depending on data

  • coordinating with a UIView or SKView subview, unprojecting points from the 3D space to 2D to align the elements

Are there other approaches to consider, especially ones that would allow me to include 3D elements?

Replies

Hi,

If I correctly understand what you wish to do:

1) “create pre-built models and animations for dynamic element models which can skip to particular frames in the code depending on data” — In Reality Composer, one of the behavior actions you can add is a ‘Notify’ action, which sends a notification message to your Xcode code. Consider placing one of these ‘actions’ at various points in a specific scene’s or object’s overall behavior, wherever you wish to have your code ‘do something’ in response to being ‘notified’ by the ‘dynamic element’ in Reality Composer.

If, instead, you meant that you want your code to first respond to ‘data’ (perhaps from a Core ML model) and, based on that ‘data stream’, then drive a RealityKit scene, object grouping, or single object entity, that too is possible, but it may require more intricate coding in Xcode.
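For concreteness, here is a rough sketch of both directions, assuming a Reality Composer project whose generated Swift class is called `Experience` with a scene named "Box". The member names (`swooshFinished`, `startSwoosh`) stand in for whatever identifiers you give the Notify action and the Notification trigger in Reality Composer, so they will differ in your project:

```swift
import RealityKit
import UIKit

class ViewController: UIViewController {
    @IBOutlet var arView: ARView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Load the scene generated from the .rcproject and anchor it.
        guard let scene = try? Experience.loadBox() else { return }
        arView.scene.anchors.append(scene)

        // React in code when a Reality Composer behavior reaches its
        // 'Notify' action (identifier "swooshFinished" in this sketch).
        scene.actions.swooshFinished.onAction = { entity in
            print("Reality Composer notified us, entity:", entity?.name ?? "none")
        }

        // Going the other way: when your data says so, kick off a behavior
        // that was set up with a 'Notification' trigger ("startSwoosh" here).
        scene.notifications.startSwoosh.post()
    }
}
```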

2) coordinating with a UIView or SKView subview, unprojecting points from the 3D space to 2D to align the elements.

If I understand correctly, you’d be using a UIView or SKView subview for the UI aspect of taking your existing app and going AR with it?
So the desired outcome is to take the coordinates of a specific 3D object, or points in a 3D ‘AR box’, and ‘pancake’ them into a 2D interface? ‘Unprojecting’ may be the wrong term; it sounds more like a coordinate-space transform to me.

While there may be an app for that, there’s at least Xcode. I do not know the ‘how’ of it, but my internal ML says it can be done without too much bloodshed.
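For what it’s worth, ARView does have a project(_:) method that maps a world-space point to view coordinates, which is the core of that ‘pancaking’ step. A minimal sketch, where `arView`, `trackedEntity`, and `overlayView` are placeholders for objects you would already have in your own view controller:

```swift
import RealityKit
import UIKit

// Keep a 2D overlay aligned with a 3D entity by projecting the entity's
// world position into the ARView's coordinate space.
func updateOverlayPosition(arView: ARView, trackedEntity: Entity, overlayView: UIView) {
    // World-space position of the entity (relativeTo: nil = world space).
    let worldPosition = trackedEntity.position(relativeTo: nil)

    // project(_:) returns nil when the point cannot be mapped
    // (for example, when it is behind the camera).
    if let screenPoint = arView.project(worldPosition) {
        overlayView.center = screenPoint
        overlayView.isHidden = false
    } else {
        overlayView.isHidden = true
    }
}
```

You would typically call something like this from a scene-update subscription (arView.scene.subscribe(to: SceneEvents.Update.self) { _ in ... }, retaining the returned Cancellable) so the overlay follows the entity every frame.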

3) Are there other approaches to consider, especially ones that would allow me to include 3D elements.

Your ‘approach’ should usually be connected to the identifiable needs of your app’s conversion to AR space. My experience has been that you’ll benefit from trying multiple strategies and approaches to any given “how do I do this in RealityKit or Reality Composer?” Being open to applying new strategies is half the battle. AR is still a very new space for us developers to explore, and a sense of curiosity is almost as important as specific detailed knowledge about this or that.

Part of the ‘untold secret’ of effectively using the great capabilities Apple has given us with ARKit, RealityKit, Reality Composer, Reality Converter, SceneKit, and SpriteKit together in the AR ‘experience’ is going back and forth between the result you want and the available capabilities of the tools. By refining the fit between them, strategies, approaches, and solutions have fertile ground to emerge.

I hope this helps.

Best,

Catfish

When loading models or entities into a RealityKit app, the structure in the model or entity files is exposed to you as entities and components, which can be manipulated in code. For example, you can show and hide parts of the file programmatically, or move and animate a specific entity and its children.
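As a small illustration (the asset name "toyRobot" and the child names are hypothetical and depend entirely on how your file is structured):

```swift
import RealityKit

func configureModel() throws -> Entity {
    // Load the whole file; its internal structure comes back as an
    // entity tree you can walk and modify.
    let model = try Entity.load(named: "toyRobot")

    // Hide a sub-entity programmatically.
    if let antenna = model.findEntity(named: "antenna") {
        antenna.isEnabled = false
    }

    // Move an entity (and its children) over time.
    if let arm = model.findEntity(named: "leftArm") {
        var transform = arm.transform
        transform.translation += SIMD3<Float>(0, 0.1, 0)
        arm.move(to: transform, relativeTo: arm.parent, duration: 0.5)
    }

    return model
}
```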

Do you have any examples of the kinds of dynamic elements you are intending to build?

In my case I am using motion capture, and I am trying to include visual elements, for example a swoosh as someone moves their arm. Ideally the swoosh could be dynamically created and three-dimensional, to better integrate with the user's motion. The two ideas I proposed are partial solutions:
  • the first requires the user's movement to match up with a pre-built animation that gets positioned relative to the user and stepped through in coordination with their motion (see the sketch after this list)

  • the second can be drawn dynamically, but only in a 2D plane that approximates the user's true 3D movement.
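On the first idea: one rough way to keep a baked animation in step with live data is to start it paused and drive its playback speed from your own estimate of the user's pace. Note this adjusts speed rather than scrubbing to exact frames, and the names here (`SwooshDriver`, `speedRatio`) are placeholders for your own types and measurements:

```swift
import RealityKit

// Drives a pre-built swoosh animation so it roughly keeps step with
// tracked motion. speedRatio is your estimate of how fast the user is
// moving compared to the motion the animation was authored for (1 = same pace).
final class SwooshDriver {
    private let controller: AnimationPlaybackController

    init?(swooshEntity: Entity) {
        guard let animation = swooshEntity.availableAnimations.first else { return nil }
        // Start paused; we only advance it once the gesture begins.
        controller = swooshEntity.playAnimation(animation, transitionDuration: 0, startsPaused: true)
    }

    func begin() {
        controller.resume()
    }

    // Call whenever you re-estimate the user's pace.
    func update(speedRatio: Float) {
        controller.speed = max(0, speedRatio)
    }

    func cancel() {
        controller.pause()
    }
}
```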

Are there other ideas that might have different trade-offs or are closer to my goal?
One approach you could take is to create a model for your visual effect that is skinned to a set of joints placed throughout the effect's shape. The joint transforms can be modified by the app code using the jointTransforms property. With enough joints, fairly dynamic shapes can be expressed in true 3D.
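A minimal sketch of that idea, assuming a model file "swoosh" whose mesh is skinned to a chain of joints laid out along its length; the joint count, names, and rig depend entirely on how the asset is authored, and real code would convert the sampled points into each joint's local space:

```swift
import RealityKit

// Bend a skinned swoosh model along a set of captured points by writing
// to the model's per-joint transforms.
func bendSwoosh(along points: [SIMD3<Float>]) throws -> ModelEntity {
    let swoosh = try ModelEntity.loadModel(named: "swoosh")
    guard !points.isEmpty else { return swoosh }

    // jointNames tells you the order of the entries in jointTransforms.
    print("joints:", swoosh.jointNames)

    // Drive each joint toward a sampled point on the captured arm path.
    // (This skips the conversion into each joint's local space; it just
    // shows where the per-joint transforms get written.)
    var transforms = swoosh.jointTransforms
    for index in transforms.indices {
        let t = Float(index) / Float(max(transforms.count - 1, 1))
        let sampleIndex = min(Int(t * Float(points.count - 1)), points.count - 1)
        transforms[index].translation = points[sampleIndex]
    }
    swoosh.jointTransforms = transforms

    return swoosh
}
```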