Posts

Post marked as solved
1 Reply
492 Views
Dear all,

I'm building a SwiftUI-based frontend for the RealityKit Object Capture feature. It uses the iOS + macOS (native, not Catalyst) template in Xcode 15.1. When compiling for macOS 14.2, everything works as expected. When compiling for iPadOS 17.2, I receive the compiler error "Cannot find type 'PhotogrammetrySession' in scope". As minimum deployment targets, I selected iOS 17 and macOS 14 (see attachment). An example of the lines of code causing errors:

import RealityKit

typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity // error: Cannot find type 'PhotogrammetrySession' in scope
typealias LevelOfDetail = PhotogrammetrySession.Request.Detail // error: Cannot find type 'PhotogrammetrySession' in scope

I made sure to wrap code that uses features unavailable on iOS (e.g. the .raw level-of-detail setting) in #available checks. Is this an issue with Xcode, or am I missing some compatibility check? The SDK clearly says:

/// Manages the creation of a 3D model from a set of images.
///
/// For more information on using ``PhotogrammetrySession``, see
/// <doc://com.apple.documentation/documentation/realitykit/creating-3d-objects-from-photographs>.
@available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
@available(visionOS, unavailable)
public class PhotogrammetrySession { /* ... */ }

Thanks for any help with this :)

And greetings from Köln, Germany
~ Alex
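For reference, the kind of guarded declaration I have in mind looks roughly like this (just a sketch; the annotations simply mirror the availability quoted from the SDK header above):

import RealityKit

// Sketch only: availability annotations copied from the SDK's own declaration
// of PhotogrammetrySession, so the aliases carry the same constraints.
@available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
@available(visionOS, unavailable)
typealias FeatureSensitivity = PhotogrammetrySession.Configuration.FeatureSensitivity

@available(iOS 17.0, macOS 12.0, macCatalyst 15.0, *)
@available(visionOS, unavailable)
typealias LevelOfDetail = PhotogrammetrySession.Request.Detail

If I read the headers correctly, there is also a runtime check (PhotogrammetrySession.isSupported) for devices that can't run Object Capture, but that shouldn't have any bearing on a compile-time "cannot find type" error.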
Posted by AlexLike.
Post not yet marked as solved
0 Replies
1.1k Views
Dear all,

In "Explore advanced rendering with RealityKit 2," Courtland presents how one can efficiently leverage dynamic meshes in RealityKit and update them at runtime. My question is quite practical: say I have a model of fixed topology and a set of animations (coordinates of each vertex per frame, finite duration) that I can only generate at runtime. How do I drive the mesh updates at 60 FPS? Can I define a reusable Animation Resource for every animation once at startup and then schedule their playback like simple transform animations?

Any helpful reply pointing me in the right direction is appreciated. Thank you.

~ Alexander
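For concreteness, the fallback I can picture (if a prebuilt Animation Resource isn't the right tool) is regenerating the mesh from a per-frame SceneEvents.Update subscription, roughly like the sketch below, where framePositions stands in for my runtime-generated vertex data and indices is the fixed topology:

import RealityKit
import Combine

// A rough sketch (not from the session) of driving per-frame vertex updates
// from a SceneEvents.Update subscription. Topology stays constant; only the
// vertex positions change each frame.
final class MeshAnimator {
    private var subscription: Cancellable?
    private var elapsed: TimeInterval = 0

    func start(entity: ModelEntity,
               in scene: RealityKit.Scene,
               indices: [UInt32],
               framePositions: @escaping (TimeInterval) -> [SIMD3<Float>]) {
        subscription = scene.subscribe(to: SceneEvents.Update.self) { [weak self] event in
            guard let self = self else { return }
            self.elapsed += event.deltaTime

            // Rebuild the mesh for the current time from the precomputed data.
            var descriptor = MeshDescriptor(name: "animatedMesh")
            descriptor.positions = MeshBuffers.Positions(framePositions(self.elapsed))
            descriptor.primitives = .triangles(indices)

            if let mesh = try? MeshResource.generate(from: [descriptor]) {
                entity.model?.mesh = mesh
            }
        }
    }

    func stop() {
        subscription?.cancel()
        subscription = nil
    }
}

Regenerating a whole MeshResource every frame feels heavier than what the session demonstrates, though, so if there is a way to bake each animation into a reusable resource up front, that would be my preference.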
Posted by AlexLike.
Post marked as solved
1 Reply
599 Views
Hello everyone!

My team and I are working on a shared AR experience involving the users' faces. Upon launch, we want all users to capture their faces on their respective devices, i.e. generate a texture that, when applied to ARSCNFaceGeometry, looks similar to them. We will then broadcast the textures before starting the session and use them to create digital replicas of everyone.

Can anyone recommend a specific technique to obtain these textures? This step needn't be incredibly efficient since it only happens once. It should, though, produce a high-quality result without blank areas on the face.

My first intuition was to somehow distort a snapshot of the ARView using the spatial information provided by ARSCNFaceGeometry. If I understand correctly, textureCoordinates can be used to map vertices to their corresponding 2D coordinates in the texture bitmap. How would I approach the transforms concretely, though?

Writing this down has already helped a lot. We would nevertheless appreciate any input. Thanks!

~ Alex

(Note: None of us has prior experience with shaders, but we are eager to learn if necessary.)
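To make the question more concrete, here is roughly how I picture the per-vertex mapping on the CPU, using the ARFaceGeometry that backs ARSCNFaceGeometry (a sketch under my own assumptions; the orientation and viewportSize arguments are exactly the part I'm unsure about):

import ARKit
import UIKit

// Sketch of the mapping I have in mind: for every face vertex, find where it
// lands in the captured camera image, and remember which texel (via the
// vertex's UV coordinate) that image location should fill. Rasterising the
// triangles between these samples would be the missing step.
func vertexToImageMapping(frame: ARFrame,
                          faceAnchor: ARFaceAnchor,
                          textureSize: CGSize,
                          orientation: UIInterfaceOrientation) -> [(texel: CGPoint, imagePoint: CGPoint)] {
    let geometry = faceAnchor.geometry // ARFaceGeometry backing ARSCNFaceGeometry
    var mapping: [(CGPoint, CGPoint)] = []

    for (vertex, uv) in zip(geometry.vertices, geometry.textureCoordinates) {
        // Face-local vertex -> world space.
        let world = faceAnchor.transform * SIMD4<Float>(vertex, 1)

        // World space -> 2D point in the captured camera image.
        let imagePoint = frame.camera.projectPoint(SIMD3<Float>(world.x, world.y, world.z),
                                                   orientation: orientation,
                                                   viewportSize: frame.camera.imageResolution)

        // UV in [0, 1] -> pixel position in the output texture (V flipped).
        let texel = CGPoint(x: CGFloat(uv.x) * textureSize.width,
                            y: (1 - CGFloat(uv.y)) * textureSize.height)
        mapping.append((texel, imagePoint))
    }
    return mapping
}

From there, I imagine filling each triangle of the texture by interpolating between its three sample points, either on the CPU or with a simple fragment shader, but whether that's the recommended technique is what I'd like to find out.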
Posted by AlexLike.