Posts

Post not yet marked as solved
4 Replies
1.3k Views
I am trying to work around RealityKit's limited support for code-generated elements, and I'm looking for recommendations for creating dynamic elements that can change in response to other code. Here are some ideas I am considering:
- creating pre-built models with baked animations, which I can skip to particular frames of in code depending on the data
- coordinating with a UIView or SKView subview, projecting points from the 3D space to 2D to align the elements

Are there other approaches to consider, especially ones that would allow me to include 3D elements?
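The first idea above (scrubbing a pre-baked animation based on data) can be sketched with a small pure helper. This is a sketch under assumptions: `AnimationScrubber`, the 60-frame clip, and the 0...1 data value are all hypothetical; in RealityKit you would then apply the computed offset to an `AnimationPlaybackController` obtained from `Entity.playAnimation(_:)` (seeking support varies by OS version, so check availability).

```swift
import Foundation

// Sketch: mapping a data value to a playback offset in a pre-baked animation.
// `frameCount` and `frameRate` describe the hypothetical authored clip.
struct AnimationScrubber {
    let frameCount: Int
    let frameRate: Double  // frames per second of the authored clip

    // Map a normalized data value (clamped to 0...1) to a time offset
    // into the clip, snapped to the nearest whole frame.
    func timeOffset(for normalizedValue: Double) -> TimeInterval {
        let clamped = min(max(normalizedValue, 0), 1)
        let frame = (Double(frameCount - 1) * clamped).rounded()
        return frame / frameRate
    }
}

// Example: a 60-frame clip at 60 fps; a data value of 0.5 lands mid-clip.
let scrubber = AnimationScrubber(frameCount: 60, frameRate: 60)
let offset = scrubber.timeOffset(for: 0.5)
```

Snapping to whole frames keeps the displayed pose consistent with the frames the animator authored, rather than interpolating between them.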
Post marked as solved
3 Replies
739 Views
I frequently crash when I run an AR session on my LiDAR iPad Pro. The error is: [ADMillimeterRadiusPairsLensDistortionModel applyDistortionModelToPixels:inPixels:intrinsicsMatrix:pixelSize:distort:outPixels:] (). The crash only happens while debugging, so I suspect it is a resource issue. To try to fix it, I sometimes restart Xcode, sometimes force quit the app, and sometimes unplug and replug the device from the computer. Nothing I've found works consistently. Is there anything I can do to avoid it?
Post marked as solved
1 Reply
380 Views
Since ARKit cannot simultaneously use motion capture and people occlusion with depth, I am brainstorming ways to still make use of virtual objects in my app, even in more limited ways. One idea I am considering is using the occlusion configuration until I detect that the person is in a particular position, then switching configurations to motion capture. Will that switch cause problems, such as loss of the world anchor or other disruptions for the user? How seamless will the mode switch appear?
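The switch described above can be sketched as follows. This is a sketch, not a verified answer to the question: `session` and the trigger condition are app-specific assumptions, and whether the world origin survives the change is exactly what would need testing on device.

```swift
import ARKit

// Sketch: switching a running session from a people-occlusion configuration
// to body tracking. The trigger (detecting the person's position) is assumed
// to have already fired elsewhere in the app.
func switchToMotionCapture(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }
    let config = ARBodyTrackingConfiguration()
    // Passing an empty options set (i.e. neither .resetTracking nor
    // .removeExistingAnchors) asks ARKit to keep the current world origin
    // and existing anchors across the configuration change. Relocalization
    // behavior is not guaranteed to be seamless, so verify on device.
    session.run(config, options: [])
}
```

The key lever is the `options` parameter of `ARSession.run(_:options:)`: resetting tracking is what would discard the world anchor, so the sketch deliberately omits it.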
Post marked as solved
1 Reply
1.1k Views
I currently use motion capture in an app, and I am intrigued by the new Action Classifiers as a way to detect behaviors, either as a signal to start or end something or to score the user's performance. I am wondering how realistic it is to run a machine learning model through the Vision framework simultaneously with ARKit motion capture.
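One common pattern for combining the two is to run Vision off the main thread on the frames ARKit is already delivering, dropping frames while a request is in flight. A minimal sketch, assuming an `ARSessionDelegate` and using `VNDetectHumanBodyPoseRequest` as a stand-in for the pose input an action classifier would consume (the classifier model itself is hypothetical here):

```swift
import ARKit
import Vision

// Sketch: running a Vision body-pose request alongside an ARKit session.
// Pose results would feed the action classifier's sliding window of frames.
final class FrameClassifier: NSObject, ARSessionDelegate {
    private let visionQueue = DispatchQueue(label: "vision.classify")
    private var isProcessing = false

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Drop frames while a request is in flight so Vision work never
        // backs up behind ARKit's per-frame update loop.
        guard !isProcessing else { return }
        isProcessing = true
        let pixelBuffer = frame.capturedImage
        visionQueue.async { [weak self] in
            defer { self?.isProcessing = false }
            let request = VNDetectHumanBodyPoseRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                orientation: .right)
            try? handler.perform([request])
            // request.results (VNHumanBodyPoseObservation) would be
            // accumulated here and fed to the action classifier.
        }
    }
}
```

Whether this is "realistic" is ultimately a per-device performance question, but the frame-dropping pattern keeps the Vision workload from competing with ARKit's own processing for every frame.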