
Crash with Debugger when using LiDAR
I frequently crash when I run an AR session on my LiDAR iPad Pro. The error is: [ADMillimeterRadiusPairsLensDistortionModel applyDistortionModelToPixels:inPixels:intrinsicsMatrix:pixelSize:distort:outPixels:]. The crash only happens while debugging, so I suspect it is a resource issue. To try to fix it I sometimes restart Xcode, sometimes force quit the app, and sometimes unplug and replug the device, but I have found nothing that works consistently. Is there anything I can do to avoid it?
Replies: 3 · Boosts: 0 · Views: 897 · Jun ’20
Switching configuration for occlusion and motion capture
In the absence of the ability to use ARKit motion capture and people occlusion with depth simultaneously, I am brainstorming ways to still make use of virtual objects in my app in more limited ways. One idea I am considering is using the occlusion configuration until I detect that the person is in a particular position, then switching configurations to motion capture. Will that switch cause problems such as loss of world anchors or other disruptions for the user? How seamless will the mode switch appear?
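For reference, the switch I have in mind looks roughly like this (a sketch only; whether anchors survive the swap is exactly what I am unsure about):

    import ARKit
    import RealityKit

    // Phase 1: world tracking with people occlusion (depth).
    func runOcclusionConfiguration(on arView: ARView) {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)
    }

    // Phase 2: switch the same session to motion capture once the person
    // reaches the trigger position. Passing no reset options is my attempt
    // to preserve existing anchors.
    func switchToBodyTracking(on arView: ARView) {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        let configuration = ARBodyTrackingConfiguration()
        arView.session.run(configuration, options: [])
    }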
Replies: 1 · Boosts: 0 · Views: 461 · Jun ’20
Best way to include dynamic elements in RealityKit
I am trying to work around RealityKit's limited code-generated elements, and I am looking for recommendations for creating dynamic elements that can change in response to other code. Here are some ideas I am considering:
- creating pre-built models and animations for them, which I can skip to particular frames in code depending on the data
- coordinating with a UIView or SKView subview, projecting points from the 3D space to 2D to align the elements
Are there other approaches to consider, especially ones that would allow me to include 3D elements?
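For the second idea, the alignment I have in mind is roughly this (a sketch; overlayView and entity are placeholders for my own objects):

    import RealityKit
    import UIKit

    // Keep a UIKit overlay positioned over an entity by projecting its
    // world-space position into view coordinates each frame (e.g. from a
    // SceneEvents.Update subscription).
    func updateOverlay(_ overlayView: UIView, tracking entity: Entity, in arView: ARView) {
        let worldPosition = entity.position(relativeTo: nil)
        if let screenPoint = arView.project(worldPosition) {
            overlayView.center = screenPoint
            overlayView.isHidden = false
        } else {
            // project(_:) returns nil when the point cannot be mapped into the view.
            overlayView.isHidden = true
        }
    }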
Replies: 4 · Boosts: 0 · Views: 1.5k · Jun ’20
Matching Virtual Object Depth with ARFrame Estimated Depth Data
I am trying to do a hit test of sorts between a person in my ARFrame and a RealityKit Entity. So far I have been able to use the position value of my entity and project it to a CGPoint which I can match up with the ARFrame's segmentationBuffer to determine whether a person intersects with that entity. Now I want to find out if that person is at the same depth as that entity. How do I relate the SIMD3 position value for the entity, which is in meters I think, to the estimatedDepthData value?
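For context, this is the comparison I am attempting; the Float32 layout of estimatedDepthData and the 0.2 m tolerance are my assumptions, and I am unsure whether the buffer stores distance along the camera axis or straight-line distance:

    import ARKit
    import RealityKit
    import simd

    // Compare an entity's distance from the camera with the estimated depth
    // at a pixel (point is in depth-buffer pixel coordinates).
    func entityMatchesPersonDepth(_ entity: Entity, frame: ARFrame, at point: CGPoint) -> Bool? {
        guard let depthBuffer = frame.estimatedDepthData else { return nil }

        // Distance from the camera to the entity, in meters.
        let entityPosition = entity.position(relativeTo: nil)
        let cameraTranslation = frame.camera.transform.columns.3
        let entityDistance = simd_distance(entityPosition,
                                           SIMD3(cameraTranslation.x, cameraTranslation.y, cameraTranslation.z))

        // Read the depth value at the given pixel, assuming 32-bit float depth.
        CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly) }
        guard let base = CVPixelBufferGetBaseAddress(depthBuffer) else { return nil }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(depthBuffer)
        let row = base.advanced(by: Int(point.y) * bytesPerRow)
        let depth = row.assumingMemoryBound(to: Float32.self)[Int(point.x)]

        guard depth > 0 else { return nil }          // 0 appears to mean "no estimate"
        return abs(depth - entityDistance) < 0.2     // tolerance in meters
    }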
Replies: 9 · Boosts: 0 · Views: 2.5k · Jun ’20
Does estimatedDepthData take advantage of LiDAR?
I am making use of personSegmentationWithDepth in my app, and the actual occlusion in my RealityKit implementation works very well. But when I retrieve values from estimatedDepthData for a particular point, it has trouble beyond a depth of about 1.5-1.75 meters. Past that depth it returns a depth value of 0 meters most of the time. That said, when it does return a value, the value appears to be accurate. I am working with a LiDAR-equipped iPad on a tripod running iOS 14 beta 2. Does the stillness of the tripod contribute to the limited data returned? And since estimatedDepthData predates iOS 13.5, I'm also wondering whether it takes advantage of the newer hardware at all.
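In case it matters, this is how I configure the session; the iOS 14 sceneDepth semantic is the LiDAR-backed alternative I am aware of, though I have not verified whether it can be combined with personSegmentationWithDepth in one configuration:

    import ARKit

    func configureSession(_ session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()

        // What I use today: person segmentation with depth, which feeds
        // ARFrame.estimatedDepthData.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }

        // iOS 14: LiDAR scene depth, surfaced through ARFrame.sceneDepth
        // rather than estimatedDepthData.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }

        session.run(configuration)
    }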
Replies: 3 · Boosts: 0 · Views: 1.3k · Jul ’20
Include RealityKit in iOS 12 project for iOS 13 users
In my application, we want to add an optional RealityKit experience for customers who have upgraded their OS, without forcing anyone to upgrade. I was going through my code adding @available attributes to all of the classes that access iOS 13-specific features, but I discovered in the process that a bunch of the errors were in the generated RealityKit class files, which I cannot modify. Is there any way to produce a build that includes RealityKit when the deployment target is below iOS 13?
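For my own classes the gating looks like this (a sketch); the issue is that the files Reality Composer generates need the same treatment, and I cannot add these attributes to them:

    import RealityKit
    import UIKit

    final class ExperienceLauncher {

        // Only offer the RealityKit experience on iOS 13 and later;
        // iOS 12 users keep the existing flow.
        func presentARExperienceIfAvailable(from presenter: UIViewController) {
            if #available(iOS 13.0, *) {
                presenter.present(makeARViewController(), animated: true)
            } else {
                // Fall back to the non-AR experience on iOS 12.
            }
        }

        @available(iOS 13.0, *)
        private func makeARViewController() -> UIViewController {
            let viewController = UIViewController()
            viewController.view = ARView(frame: .zero)   // RealityKit symbol, gated above
            return viewController
        }
    }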
Replies: 1 · Boosts: 0 · Views: 881 · Jul ’20
Using a PerspectiveCamera in RealityKit
It looks like it is possible to add a PerspectiveCamera to a RealityKit scene for nonAR configurations. Is there a way to do a kind of picture-in-picture layout, with a live camera feed showing one perspective while a PerspectiveCamera simultaneously shows a second angle on the same virtual content? Short of that, is it feasible to record my AR data and play it back in a nonAR configuration using the PerspectiveCamera?
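To make the question concrete, the secondary view I am imagining is something like this (a sketch; I have not confirmed that cloning content between two ARViews behaves well, and keeping the clone in sync would be a separate problem):

    import RealityKit
    import UIKit

    // A small non-AR view that shows the same content from a second angle
    // using a PerspectiveCamera.
    func makeSecondaryView(showing contentAnchor: AnchorEntity) -> ARView {
        let pipView = ARView(frame: CGRect(x: 0, y: 0, width: 200, height: 150),
                             cameraMode: .nonAR,
                             automaticallyConfigureSession: false)

        // Clone the virtual content into the second scene.
        pipView.scene.addAnchor(contentAnchor.clone(recursive: true))

        // Place a virtual camera for the second angle.
        let camera = PerspectiveCamera()
        camera.position = [0, 1.5, 2]
        camera.look(at: .zero, from: camera.position, relativeTo: nil)

        let cameraAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
        cameraAnchor.addChild(camera)
        pipView.scene.addAnchor(cameraAnchor)

        return pipView
    }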
Replies: 0 · Boosts: 0 · Views: 680 · Aug ’20
Possible Bug in Install Gestures for RealityKit
I have a navigation controller with screens leading up to a RealityKit scene in an ARView. Once the view loads and my model is in the scene, I call arView.installGestures(_, for:) to add translation and rotation gestures to my model. This works just fine, unless I pop my view controller to a previous screen and return to my ARView. The model loads and the gestures appear to be attached to the ARView exactly as in the first scenario, but now the gestures no longer work, or are possibly shifted out of position relative to the model. Sometimes I can reactivate the gestures by touching areas around the model, after which touching the model works too, but it is very inconsistent. I will also file a bug report, but I'm wondering if there is simply some cleanup I need to do before exiting the view controller to ensure the gestures work every time.
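For reference, this is my setup plus the speculative teardown I have been experimenting with; removing the recognizers in viewWillDisappear is a guess on my part, not something I have seen documented:

    import RealityKit
    import UIKit

    class ARSceneViewController: UIViewController {
        @IBOutlet var arView: ARView!
        private var installedGestureRecognizers: [UIGestureRecognizer] = []

        func attachGestures(to model: ModelEntity) {
            // installGestures requires collision shapes on the entity.
            model.generateCollisionShapes(recursive: true)
            installedGestureRecognizers = arView.installGestures([.translation, .rotation], for: model)
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            // Speculative cleanup before popping back to the previous screen.
            for recognizer in installedGestureRecognizers {
                arView.removeGestureRecognizer(recognizer)
            }
            installedGestureRecognizers.removeAll()
        }
    }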
Replies: 1 · Boosts: 0 · Views: 1k · Sep ’20
Using AVAudioEngine output for RealityKit AudioResource
I have been able to take a static audio file and implement it as an AudioFileResource attached to an entity in my RealityKit scene. But I am currently using AVAudioEngine to change pitch based on relative position, and ideally I would be able to spatialize the sounds I'm generating relative to 3D points. I have experimented with AVAudioPlayerNode's sourceMode, but as far as I can tell there is no way to tie an AVAudioPlayerNode to a 3D point in my AR scene unless I can attach my audio to a RealityKit AudioResource. Is that possible? Is there some other way to do it?
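For context, the file-based version that already works for me looks roughly like this; what I am looking for is an equivalent where the source is my AVAudioEngine output rather than a file (the file name is a placeholder):

    import RealityKit

    // Spatialized playback of a pre-recorded file attached to an entity.
    @discardableResult
    func playChime(on entity: Entity) throws -> AudioPlaybackController {
        let resource = try AudioFileResource.load(named: "chime.mp3",
                                                  in: nil,
                                                  inputMode: .spatial,
                                                  loadingStrategy: .preload,
                                                  shouldLoop: false)
        return entity.playAudio(resource)
    }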
Replies: 0 · Boosts: 0 · Views: 831 · Sep ’20
iOS 14 change in performance cost of SCNNode creation
Code that had been working for many months, including on iOS 14 beta builds, became unusable with the release of iOS 14. In the past the performance cost of creating an SCNNode was small enough that I could use its many transform properties as a convenience tool at runtime for converting between quaternions and Euler angles, among other things. After the release of iOS 14, the cost grew dramatically, dropping my app's frame rate from 60-120 fps to under 20 fps. I have found mathematical replacements for my usage of SCNNodes, but it seems like there are plenty of circumstances that could not be solved so readily.
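As an example of the kind of replacement I mean, this is roughly the simd-only math I swapped in for the quaternion-to-Euler case; the axis ordering here is the common Tait-Bryan convention, and matching SceneKit's eulerAngles ordering exactly may require a different sequence:

    import Foundation
    import simd

    // Convert a unit quaternion to Euler angles in radians (x, y, z).
    func eulerAngles(from q: simd_quatf) -> SIMD3<Float> {
        let x = q.imag.x, y = q.imag.y, z = q.imag.z, w = q.real

        // Rotation about the x axis.
        let sinRoll = 2 * (w * x + y * z)
        let cosRoll = 1 - 2 * (x * x + y * y)
        let roll = atan2f(sinRoll, cosRoll)

        // Rotation about the y axis, clamped to avoid NaN at the poles.
        let sinPitch = max(-1, min(1, 2 * (w * y - z * x)))
        let pitch = asinf(sinPitch)

        // Rotation about the z axis.
        let sinYaw = 2 * (w * z + x * y)
        let cosYaw = 1 - 2 * (y * y + z * z)
        let yaw = atan2f(sinYaw, cosYaw)

        return SIMD3<Float>(roll, pitch, yaw)
    }

    // The reverse direction composed from per-axis quaternions; the
    // multiplication order must match the convention used above.
    func quaternion(fromEulerAngles angles: SIMD3<Float>) -> simd_quatf {
        let qx = simd_quatf(angle: angles.x, axis: SIMD3<Float>(1, 0, 0))
        let qy = simd_quatf(angle: angles.y, axis: SIMD3<Float>(0, 1, 0))
        let qz = simd_quatf(angle: angles.z, axis: SIMD3<Float>(0, 0, 1))
        return qz * qy * qx
    }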
Replies: 2 · Boosts: 0 · Views: 1.3k · Sep ’20
ARKit Body Detection in Squats
From my tests, one of the major limitations of body detection is a lack of environmental awareness. Body detection has no relationship to scene detection, such as the floor plane, so a detected figure can move in unnatural ways, sliding backwards and even through the ground plane, to solve for the detected position. This is particularly true when a user squats, which causes the root joint to move down in space and the child joints to move up relative to the root. During a real squat the root moves backward and down in space, but body detection often moves the root backward by a meter or more and rotates the upper leg joints outward to match what it sees, causing the knees and feet to widen even though the feet are not actually moving.

I have tried many techniques to correct for this, from feeding motion data through a SceneKit IK robot to using geometry to correct for the root's backward movement, but I have not been able to convincingly correct for ARKit's body detection anomalies. Is there any way to put limits on ARKit's body detection (for example, don't move the feet), or some other technique for correcting it after the fact that anyone has devised?
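As one example of an after-the-fact correction I have tried, this sketch pushes the detected root up whenever the lowest foot joint would fall below a known floor height; floorY comes from my own plane detection, and the whole approach is my own workaround, not anything documented:

    import ARKit
    import simd

    // Shift a body anchor's root transform up if the lowest foot joint would
    // otherwise end up below the detected floor (floorY is a world-space height).
    func correctedRootTransform(for bodyAnchor: ARBodyAnchor, floorY: Float) -> simd_float4x4 {
        let skeleton = bodyAnchor.skeleton
        var rootTransform = bodyAnchor.transform

        let footJoints: [ARSkeleton.JointName] = [.leftFoot, .rightFoot]
        var lowestFootY = Float.greatestFiniteMagnitude

        for joint in footJoints {
            guard let modelTransform = skeleton.modelTransform(for: joint) else { continue }
            // Joint position in world space: anchor transform * joint model transform.
            let worldTransform = bodyAnchor.transform * modelTransform
            lowestFootY = min(lowestFootY, worldTransform.columns.3.y)
        }

        // If a foot ended up below the floor, push the whole root up by the difference.
        if lowestFootY < floorY {
            rootTransform.columns.3.y += floorY - lowestFootY
        }
        return rootTransform
    }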
Replies: 1 · Boosts: 0 · Views: 991 · Nov ’20
Using NSURLSession delegate with BackgroundTasks
It seems that apps using background processing are required to adopt BackgroundTasks, but I am struggling to figure out how to do that when continuing a URLSession upload task after the device enters the background. Currently, I use BGTaskScheduler to register a new task, and I schedule that task when I enter the background:

    BGTaskScheduler.shared.register(forTaskWithIdentifier: backgroundTaskIdentifier, using: nil) { task in
        self.handleUploadTask(task)
    }

The actual content of the task uses a stored uploadIdentifier to recreate the session configuration and get the relevant upload tasks. I then cancel each of those tasks when the BGTask expires, and (unnecessarily?) resume each of the ongoing upload tasks:

    func handleUploadTask(_ task: BGTask) {
        let uploader = UploadService()
        if let identifier = uploader.uploadIdentifier {
            let sessionConfig = URLSessionConfiguration.background(withIdentifier: identifier)
            let session = URLSession(configuration: sessionConfig, delegate: firebase, delegateQueue: OperationQueue.main)
            task.expirationHandler = {
                session.getTasksWithCompletionHandler { (_, uploadTasks, _) in
                    for uploadTask in uploadTasks {
                        uploadTask.cancel()
                    }
                }
            }
            session.getTasksWithCompletionHandler { (_, uploadTasks, _) in
                for uploadTask in uploadTasks {
                    uploadTask.resume()
                }
            }
        }
    }

How do I complete the BGTask when I get a callback from my URLSession delegate in my UploadService? Are there other pieces of the puzzle I am missing?
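What I think I need is something along these lines, where the session delegate finishes the scheduled task; the delegate class here is a stand-in for my actual delegate, and the currentBGTask hand-off is my own invention rather than a documented pattern:

    import BackgroundTasks
    import Foundation

    final class UploadSessionDelegate: NSObject, URLSessionDelegate, URLSessionTaskDelegate {

        // Held while a BGTask is running so the session delegate can finish it.
        var currentBGTask: BGTask?

        // Called when the background session has delivered all events for this process.
        func urlSessionDidFinishEvents(forBackgroundURLSession session: URLSession) {
            DispatchQueue.main.async {
                self.currentBGTask?.setTaskCompleted(success: true)
                self.currentBGTask = nil
            }
        }

        // A per-task failure could mark the BGTask as unsuccessful instead.
        func urlSession(_ session: URLSession, task: URLSessionTask, didCompleteWithError error: Error?) {
            if error != nil {
                currentBGTask?.setTaskCompleted(success: false)
                currentBGTask = nil
            }
        }
    }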
Replies: 1 · Boosts: 0 · Views: 1.1k · Feb ’21