RGB-D and Point Clouds in visionOS
Dear all,

We are building an XR application demonstrating our research on open-vocabulary 3D instance segmentation for assistive technology, and we intend to bring it to visionOS using the new Enterprise APIs. Our method was trained on datasets resembling ScanNet, which contain:

(1) localized RGB camera frames
(2) depth maps
(3) camera intrinsics
(4) a point cloud

I understand we can query (1) and (3) from the CameraFrameProvider. As for (2) and (4), it is unclear to me if/how we can obtain that data. In handheld ARKit, this example project demos how the depthMap can be used to simulate raw point clouds. However, this property doesn't seem to be available in visionOS.

Is there some way for us to obtain depth data associated with camera frames? "Faking" depth data from the SceneReconstructionProvider-generated meshes is too coarse for our method. I hope I'm just missing some detail and there's some way to configure CameraFrameProvider to also deliver depth and/or point clouds.

Thanks for any help or a pointer in the right direction!

~ Alex
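P.S. For reference, here is roughly how we obtain (1), (3), and the camera pose today. This is only a sketch of my understanding of the visionOS 2 Enterprise camera access (authorization checks and error handling omitted), so please correct me if any of it is off:

```swift
import ARKit

// Sketch: stream RGB frames (1) plus intrinsics (3) and pose from CameraFrameProvider.
// Assumes the Enterprise main-camera-access entitlement has already been granted.
func streamCameraFrames() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()
    try await session.run([provider])

    // Pick a supported format for the left main camera.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first,
          let updates = provider.cameraFrameUpdates(for: format) else { return }

    for await frame in updates {
        guard let sample = frame.sample(for: .left) else { continue }
        let pixelBuffer = sample.pixelBuffer            // (1) RGB camera frame
        let intrinsics = sample.parameters.intrinsics   // (3) 3x3 camera intrinsics
        let extrinsics = sample.parameters.extrinsics   // camera pose ("localized")
        // This is exactly the point where we would also need a depth map (2)
        // and/or a point cloud (4) to feed our segmentation pipeline.
        _ = (pixelBuffer, intrinsics, extrinsics)
    }
}
```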
1 reply · 0 boosts · 382 views · Sep ’24
iOS 17: Cannot find type 'PhotogrammetrySession' in scope (Object Capture)
Dear all,

I'm building a SwiftUI-based frontend for the RealityKit Object Capture feature, using the iOS + macOS (native, not Catalyst) template on Xcode 15.1. When compiling for macOS 14.2, everything works as expected. When compiling for iPadOS 17.2, I receive the compiler error "Cannot find type 'PhotogrammetrySession' in scope". As minimum deployment targets, I selected iOS 17 and macOS 14 (see attachment).

An example of the lines of code causing errors is:

```swift
import RealityKit

typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity // error: Cannot find type 'PhotogrammetrySession' in scope
typealias LevelOfDetail = PhotogrammetrySession.Request.Detail // error: Cannot find type 'PhotogrammetrySession' in scope
```

I made sure to wrap code that uses features unavailable on iOS (e.g. the .raw LOD setting) in #available checks. Is this an issue with Xcode, or am I missing some compatibility check? The SDK clearly says:

```swift
/// Manages the creation of a 3D model from a set of images.
///
/// For more information on using ``PhotogrammetrySession``, see
/// <doc://com.apple.documentation/documentation/realitykit/creating-3d-objects-from-photographs>.
@available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
@available(visionOS, unavailable)
public class PhotogrammetrySession { /* ... */ }
```

Thanks for any help with this :)

And greetings from Köln, Germany

~ Alex
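In case it helps narrow things down, the guard pattern I use around macOS-only features looks roughly like this (a sketch; requestDetail is just an illustrative helper, not code from the project):

```swift
import RealityKit

// Sketch of the platform guard around macOS-only options such as .raw.
@available(iOS 17.0, macOS 12.0, *)
func requestDetail(highQuality: Bool) -> PhotogrammetrySession.Request.Detail {
    #if os(macOS)
    return highQuality ? .raw : .medium      // .raw is unavailable on iOS
    #else
    return highQuality ? .medium : .reduced  // detail levels usable on iOS
    #endif
}
```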
2 replies · 0 boosts · 971 views · Jan ’24
Dynamic Mesh Updates
Dear all, In "Explore advanced rendering with RealityKit 2," Courtland presents how one can efficiently leverage dynamic meshes in RealityKit and update them at runtime. My question is quite practical: Say, I have a model of fixed topology and a set of animations (coordinates of each vertex per frame, finite duration) that I can only generate at runtime. How do I drive the mesh updates at 60FPS? Can I define a reusable Animation Resource for every animation once at startup and then schedule their playback like simple transform animations? Any helpful reply pointing me in the right direction is appreciated. Thank you. ~ Alexander
0 replies · 0 boosts · 1.3k views · May ’22
Capture the texture of a face for later use with `ARSCNFaceGeometry`
Hello everyone!

My team and I are working on a shared AR experience involving the users' faces. Upon launch, we want all users to capture their faces on their respective device, i.e. generate a texture that, when applied to ARSCNFaceGeometry, looks similar to them. We will then broadcast the textures before starting the session and use them to create digital replicas of everyone.

Can anyone recommend a specific technique to obtain these textures? This step needn't be incredibly efficient since it only happens once. It should, though, produce a high-quality result without blank areas on the face.

My first intuition was to somehow distort a snapshot of the ARView using the spatial information provided by ARSCNFaceGeometry. If I understand correctly, textureCoordinates can be used to map vertices to their corresponding 2D coordinates in the texture bitmap. How would I approach the transforms concretely, though?

Writing this down has already helped a lot. We would nevertheless appreciate any input. Thanks!

~ Alex

(Note: None of us have prior experience with shaders but are eager to learn if necessary.)
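To make that first intuition a bit more concrete, here is a rough CPU-side sketch of the mapping I have in mind: project each face-mesh vertex into the current camera view and remember its UV coordinate, so that the sampled colour can later be written into the texture at that UV. All names are placeholders, the projected points are in viewport coordinates (mapping them onto the capturedImage pixels still needs the display transform), and a real implementation would presumably rasterise whole triangles in a shader instead:

```swift
import ARKit

// Sketch: pair each face-mesh vertex's UV coordinate with the 2D point
// where that vertex appears in the current camera view.
func projectedImagePoints(for faceAnchor: ARFaceAnchor,
                          frame: ARFrame,
                          viewportSize: CGSize) -> [(uv: simd_float2, imagePoint: CGPoint)] {
    let geometry = faceAnchor.geometry
    var result: [(uv: simd_float2, imagePoint: CGPoint)] = []
    for i in 0..<geometry.vertices.count {
        // Vertex in face-anchor space -> world space.
        let v = geometry.vertices[i]
        let world4 = faceAnchor.transform * simd_float4(v, 1)
        let world = simd_float3(world4.x, world4.y, world4.z)
        // World space -> 2D viewport point for the captured frame.
        let p = frame.camera.projectPoint(world,
                                          orientation: .portrait,
                                          viewportSize: viewportSize)
        result.append((uv: geometry.textureCoordinates[i], imagePoint: p))
    }
    return result
}
```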
1 reply · 0 boosts · 740 views · Mar ’22