In my application, we want to add an optional RealityKit experience for customers who have upgraded their OS, without forcing everyone else to do so. I was going through my code adding @available attributes to all of the classes that access iOS 13-specific features, but I discovered in the process that a number of the errors were in the generated RealityKit class files, which I cannot modify. Is there any way to produce a build that includes RealityKit when the deployment target is below iOS 13?
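For context, this is the availability pattern I am applying to my own code; RealityExperienceLauncher is just a placeholder name for a wrapper that confines all RealityKit usage to iOS 13 and later:

```swift
import UIKit

// Hypothetical wrapper: keep every RealityKit reference inside a type that is only
// available on iOS 13, so the rest of the app can keep its lower deployment target.
@available(iOS 13.0, *)
final class RealityExperienceLauncher {
    func present(from presenter: UIViewController) {
        // Load the Reality Composer scene / ARView here.
    }
}

func startOptionalARExperience(from presenter: UIViewController) {
    if #available(iOS 13.0, *) {
        RealityExperienceLauncher().present(from: presenter)
    } else {
        // Older OS versions simply never see the AR experience.
    }
}
```

This works for classes I wrote myself; the generated RealityKit files are the part I can't guard this way.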
I am making use of personSegmentationWithDepth in my app, and the actual occlusion in my RealityKit implementation works very well. But when I retrieve values from estimatedDepthData for a particular point, it has trouble beyond a depth of about 1.5-1.75 meters: past that distance it returns a depth value of 0 meters most of the time. That said, when it does return a nonzero value, the value appears to be accurate.
I am working with a LiDAR-equipped iPad on a tripod running iOS 14 beta 2. Could the stillness of the tripod be limiting the data returned? And since estimatedDepthData predates iOS 13.5, I'm also wondering whether it takes advantage of the newer LiDAR hardware.
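For reference, this is roughly how I sample the buffer. It assumes the depth data is stored as 32-bit floats and that the point is already in the buffer's pixel coordinate space; if the format is actually Float16 the read would need to change, which is worth verifying with CVPixelBufferGetPixelFormatType:

```swift
import ARKit

// A sketch of sampling estimatedDepthData at a pixel location. Assumes the buffer
// stores 32-bit floats and that `point` is in the depth buffer's pixel coordinates.
@available(iOS 13.0, *)
func depthInMeters(at point: CGPoint, in frame: ARFrame) -> Float? {
    guard let depthBuffer = frame.estimatedDepthData else { return nil }

    CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly) }

    let width = CVPixelBufferGetWidth(depthBuffer)
    let height = CVPixelBufferGetHeight(depthBuffer)
    let x = Int(point.x)
    let y = Int(point.y)
    guard x >= 0, x < width, y >= 0, y < height,
          let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }

    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthBuffer)
    let depth = base.advanced(by: y * bytesPerRow)
        .assumingMemoryBound(to: Float32.self)[x]

    // Beyond roughly 1.5-1.75 m I mostly see 0 here, which I treat as "no estimate".
    return depth > 0 ? depth : nil
}
```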
I am trying to do a hit test of sorts between a person in my ARFrame and a RealityKit Entity. So far I have been able to take the position value of my entity, project it to a CGPoint, and match that up with the ARFrame's segmentationBuffer to determine whether a person intersects with that entity. Now I want to find out whether that person is at the same depth as the entity. How do I relate the entity's SIMD3 position value, which I believe is in meters, to the estimatedDepthData value?
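My current thinking, sketched below, is to convert the entity's world position into camera space and compare its forward-axis distance against the sampled depth; personIsNear and depthAtPoint are placeholder names, and the tolerance is arbitrary:

```swift
import ARKit
import RealityKit

// A sketch comparing the person's sampled depth to the entity's depth. `depthAtPoint`
// is assumed to be the estimatedDepthData value (in meters) sampled at the entity's
// projected screen point; the entity's own depth is its distance along the camera's
// forward axis. The 0.25 m tolerance is arbitrary.
@available(iOS 13.0, *)
func personIsNear(entity: Entity, frame: ARFrame, depthAtPoint: Float, tolerance: Float = 0.25) -> Bool {
    // Entity position in world space, in meters.
    let p = entity.position(relativeTo: nil)

    // Transform the world-space position into camera space.
    let worldToCamera = frame.camera.transform.inverse
    let positionInCamera = worldToCamera * SIMD4<Float>(p.x, p.y, p.z, 1)

    // In ARKit's camera space, -z points out in front of the camera.
    let entityDepth = -positionInCamera.z

    return abs(entityDepth - depthAtPoint) < tolerance
}
```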
Our team is doing innovative things with ARKit motion capture. Is there someone we can talk to about partnering with Apple on motion capture?
I am trying to work around RealityKit's limited support for code-generated elements, and I am looking for recommendations for creating dynamic elements that can change in response to other code.
Here are some ideas I am considering:
- creating pre-built models with animations for them, which I can skip to particular frames in code depending on the data
- coordinating with a UIView or SKView subview, projecting points from the 3D space to 2D to align the elements (a sketch of that approach follows this list)
Are there other approaches to consider, especially ones that would allow me to include 3D elements?
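For the second idea, this is roughly what I have in mind; overlayView is a placeholder for whatever dynamic element I would render in UIKit:

```swift
import RealityKit
import UIKit

// A sketch of the overlay-alignment idea: project an entity's world position into
// the ARView's 2D coordinate space and move a UIKit view to match.
@available(iOS 13.0, *)
func align(overlayView: UIView, to entity: Entity, in arView: ARView) {
    let worldPosition = entity.position(relativeTo: nil)

    // project(_:) returns nil when the point cannot be mapped onto the view.
    guard let screenPoint = arView.project(worldPosition) else {
        overlayView.isHidden = true
        return
    }
    overlayView.isHidden = false
    overlayView.center = screenPoint
}
```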
In the absence of the ability to simultaneously use ARKit motion capture and people occlusion with depth, I am brainstorming ways I can still make use of virtual objects in my app in more limited ways. One idea I am considering is using the occlusion configuration until I detect that the person is in a particular position, then switching configurations to motion capture. Is that switch going to cause problems such as loss of the world anchor or other disruptions for the user? How seamless will the mode switch appear?
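Concretely, the switch I'm imagining looks something like the sketch below. My understanding is that running a new configuration with no reset options asks ARKit to carry the current session state forward, but whether that appears seamless is exactly what I'm unsure about:

```swift
import ARKit

// A sketch of switching frame semantics mid-session by rerunning the session with a
// different configuration.
@available(iOS 13.0, *)
func switchToMotionCapture(on session: ARSession) {
    guard ARBodyTrackingConfiguration.isSupported else { return }

    let config = ARBodyTrackingConfiguration()
    // Passing no reset options (i.e. neither .resetTracking nor .removeExistingAnchors)
    // should keep existing anchors and the current world map.
    session.run(config, options: [])
}
```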
I currently use motion capture in an app, and I am intrigued by the new Action Classifiers as a way to detect behaviors, either as a signal to start or end something or to score the user's performance. I am wondering how realistic it is to run a Vision framework machine learning model simultaneously with ARKit motion capture.
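The setup I have in mind looks roughly like the sketch below, where a body pose request stands in for the full Action Classifier pipeline (which would consume the resulting pose keypoints over a window of frames):

```swift
import ARKit
import Vision

// A sketch of running a Vision request against each ARFrame's camera image on a
// background queue while the ARKit body-tracking session keeps running.
@available(iOS 14.0, *)
final class PoseAnalyzer {
    private let visionQueue = DispatchQueue(label: "vision.pose")
    private var isProcessing = false

    func analyze(frame: ARFrame) {
        // Drop frames while a request is in flight so Vision doesn't back up behind ARKit.
        guard !isProcessing else { return }
        isProcessing = true

        let request = VNDetectHumanBodyPoseRequest()
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right,
                                            options: [:])
        visionQueue.async {
            defer { self.isProcessing = false }
            try? handler.perform([request])
            // request.results would feed the action classification step here.
        }
    }
}
```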
In ARKit 3, the person segmentation with depth and body detection frame semantics seem to be mutually exclusive; combining them crashes with the error "This set of frame semantics is not supported on this configuration." Is this still the case in ARKit 4?
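For what it's worth, this is the check I would expect to guard the combination with, rather than discovering the limitation through the crash:

```swift
import ARKit

// A sketch: ask the configuration class whether the combined frame semantics are
// supported before enabling them, and fall back if they are not.
@available(iOS 13.0, *)
func makeBodyTrackingConfiguration() -> ARBodyTrackingConfiguration {
    let config = ARBodyTrackingConfiguration()
    let desired: ARConfiguration.FrameSemantics = [.bodyDetection, .personSegmentationWithDepth]

    if ARBodyTrackingConfiguration.supportsFrameSemantics(desired) {
        config.frameSemantics = desired
    } else {
        // The combination was rejected on ARKit 3; keep body detection only.
        config.frameSemantics = .bodyDetection
    }
    return config
}
```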
My app frequently crashes when I run an AR session on my LiDAR-equipped iPad Pro. The error is: [ADMillimeterRadiusPairsLensDistortionModel applyDistortionModelToPixels:inPixels:intrinsicsMatrix:pixelSize:distort:outPixels:] ().
The crash only happens when debugging, so I suspect it is a resource issue. To try to fix it, sometimes I restart Xcode, sometimes I force quit the app, and sometimes I unplug and replug the device. Nothing I've tried works consistently.
Is there anything I can do to avoid it?