Reply to Does estimatedDepthData take advantage of LiDAR?
Yes, thanks, I recognize the estimated depth data is specific to people in the segmentation buffer. I had some strange experiences yesterday--I turned on mesh scene reconstruction and the showSceneUnderstanding flag on my iPad running the iOS 14 beta, and all of a sudden I got a continuous stream of estimated depth data out to a depth of around 4-5 meters. But when I tried it again with a new build, I got literally no estimated depth data (i.e. all depth values were 0). I restarted my device and Xcode, again got a steady stream for a single build, and then it again showed no data. Once I get a better handle on it I will file a bug report, but it did strike me that turning on one or both of those flags seemed to "wake up" the LiDAR for the estimated depth data. One other important note: in another test, I had someone hold the iPad instead of using a tripod, and that also seemed to improve my estimated depth data, again suggesting that the LiDAR is not activated, at least by default, for estimated depth data.
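For reference, the setup I'm describing is roughly this (a minimal sketch; the helper name and the arView parameter are just placeholders for however your view is actually set up):

import ARKit
import RealityKit

func runSession(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    // Person segmentation with depth is what populates frame.estimatedDepthData.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Mesh scene reconstruction (LiDAR devices only) -- one of the flags that
    // seemed to "wake up" the depth stream in my tests.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
    }

    // The other flag: RealityKit's scene-understanding debug visualization.
    arView.debugOptions.insert(.showSceneUnderstanding)

    arView.session.run(config)
}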
Jul ’20
Reply to Matching Virtual Object Depth with ARFrame Estimated Depth Data
That extension was super helpful and solved my problems, so thank you so much! Comparing the extension to my code, I think the key problem was in fact what you highlighted earlier--I needed to account for the pixel buffer width. In my previous implementation I had only been accounting for the bytes per row, which is what I thought you meant too, but in fact you need to account for both. Thanks again!
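For anyone who finds this later, the idea boils down to something like this sketch (my own paraphrase rather than the actual extension from the other thread; it assumes a single-channel Float32 buffer such as estimatedDepthData, and the method name is just for illustration):

import CoreVideo
import CoreGraphics

extension CVPixelBuffer {
    // Read a Float32 value at a normalized (0...1) point in the buffer.
    func value(at normalizedPoint: CGPoint) -> Float32? {
        CVPixelBufferLockBaseAddress(self, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(self, .readOnly) }

        guard let baseAddress = CVPixelBufferGetBaseAddress(self) else { return nil }

        let width  = CVPixelBufferGetWidth(self)
        let height = CVPixelBufferGetHeight(self)
        let col = min(Int(normalizedPoint.x * CGFloat(width)),  width  - 1)
        let row = min(Int(normalizedPoint.y * CGFloat(height)), height - 1)

        // bytesPerRow can include padding beyond width * 4, so the row stride
        // in Float32 elements comes from bytesPerRow, not the pixel width.
        let rowStride = CVPixelBufferGetBytesPerRow(self) / MemoryLayout<Float32>.stride
        let pointer = baseAddress.assumingMemoryBound(to: Float32.self)
        return pointer[row * rowStride + col]
    }
}

The two fixes relative to my original code: the row stride comes from bytesPerRow, and the lookup point gets rescaled to this buffer's own dimensions rather than the segmentation buffer's.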
Jul ’20
Reply to Matching Virtual Object Depth with ARFrame Estimated Depth Data
Thanks for the suggestion. Since posting this I have indeed been able to get the beginnings of a hit test going with the segmentationBuffer, but when I try to use the estimatedDepthData, I run into trouble extracting values. Here's some of my code:

let segmentationCols = CVPixelBufferGetWidth(segmentationBuffer)
let segmentationRows = CVPixelBufferGetHeight(segmentationBuffer)
let colPosition = screenPosition.x / UIScreen.main.bounds.width * CGFloat(segmentationCols)
let rowPosition = screenPosition.y / UIScreen.main.bounds.height * CGFloat(segmentationRows)

CVPixelBufferLockBaseAddress(segmentationBuffer, .readOnly)
guard let baseAddress = CVPixelBufferGetBaseAddress(segmentationBuffer) else { return }
let bytesPerRow = CVPixelBufferGetBytesPerRow(segmentationBuffer)
let buffer = baseAddress.assumingMemoryBound(to: UInt8.self)
let index = Int(colPosition) + Int(rowPosition) * bytesPerRow
let b = buffer[index]

if let segment = ARFrame.SegmentationClass(rawValue: b), segment == .person, let depthBuffer = frame.estimatedDepthData {
    print("Person!")
    CVPixelBufferLockBaseAddress(depthBuffer, .readOnly)
    guard let depthAddress = CVPixelBufferGetBaseAddress(depthBuffer) else { return }
    let depthBytesPerRow = CVPixelBufferGetBytesPerRow(depthBuffer)
    let depthBoundBuffer = depthAddress.assumingMemoryBound(to: Float32.self)
    let depthIndex = Int(colPosition) * Int(rowPosition)
    let depth_b = depthBoundBuffer[depthIndex]
    print(depth_b)
    CVPixelBufferUnlockBaseAddress(depthBuffer, .readOnly)
}
CVPixelBufferUnlockBaseAddress(segmentationBuffer, .readOnly)

I strongly suspect that my problem is in the two lines where I compute depthIndex and read depth_b out of the buffer, but I can't figure out the right values to find the point I want in the estimatedDepthData.
Jun ’20
Reply to Best way to include dynamic elements in RealityKit
In my case I am using motion capture and I am trying to include visual elements, for example a swoosh as someone moves their arm. Ideally the swoosh could be dynamically created and three-dimensional, to better integrate with the user's motion. The two ideas I proposed are partial solutions: the first requires the user's movement to match up with a pre-built animation that gets positioned relative to the user and steps through in time with their motion; the second can be drawn dynamically, but only in a 2D plane that approximates the user's true 3D movement. Are there other ideas that might have different trade-offs or come closer to my goal?
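For concreteness, the raw data I have to work with each frame is just joint transforms from the body anchor. Something like the sketch below (class and property names are placeholders) is how I'm collecting the points a swoosh would be built from:

import ARKit
import simd

// Sample the tracked hand position every frame and keep a short trail of
// world-space points that a swoosh (mesh or 2D line) could be built from.
final class SwooshTracker: NSObject, ARSessionDelegate {
    private(set) var trail: [SIMD3<Float>] = []
    private let maxPoints = 60

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // The joint transform is relative to the body anchor, so compose it
            // with the anchor's transform to get a world-space position.
            guard let hand = bodyAnchor.skeleton.modelTransform(for: .rightHand) else { continue }
            let world = bodyAnchor.transform * hand
            trail.append(simd_make_float3(world.columns.3))
            if trail.count > maxPoints { trail.removeFirst() }
        }
    }
}

The open question is how best to turn that trail of points into true 3D geometry on the fly.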
Jun ’20