Hand Tracking Palm towards face or not
Hi all, I’m quite new to XR development in general and need some guidance. I want to create a function that simply tells me whether my palm is facing me or not (returning a Bool), but I honestly have no idea where to start. I saw a Reddit post from about 6 months ago that asked for essentially the same thing I need, but the only response was this:

"Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user’s palm is showing or not."

While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I’m not asking for the entire implementation, just enough to get me on the right track. Here’s basically all I have so far (no idea if this is correct or not):

func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Get the wrist, thumb knuckle, and little finger metacarpal positions as 3D vectors
    let wristPos = SIMD3<Float>(hand.joint(.wrist).anchorFromJointTransform.columns.3.x,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.y,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.z)
    let thumbKnucklePos = SIMD3<Float>(hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.z)
    let littleFingerPos = SIMD3<Float>(hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.z)
}
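A minimal sketch of how that function might be finished, following the triangle idea from the quoted reply. Two caveats: the positions from anchorFromJointTransform are in the hand anchor's coordinate space, so devicePosition must be expressed in that same space (or the joint positions converted to world space via the HandAnchor's originFromAnchorTransform first), and the sign of the final test may need flipping between the left and right hand because the triangle's winding order differs.

import simd
import ARKit

// Sketch only: true if the palm side of the wrist / thumb-knuckle / little-finger-metacarpal
// triangle faces devicePosition. Assumes all positions share one coordinate space.
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Helper that pulls a joint's translation out of its 4x4 transform.
    func position(of joint: HandSkeleton.JointName) -> SIMD3<Float> {
        let t = hand.joint(joint).anchorFromJointTransform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }

    let wristPos = position(of: .wrist)
    let thumbKnucklePos = position(of: .thumbKnuckle)
    let littleFingerPos = position(of: .littleFingerMetacarpal)

    // Normal of the triangle formed by the three joints (its direction depends on winding order).
    let normal = simd_normalize(simd_cross(thumbKnucklePos - wristPos,
                                           littleFingerPos - wristPos))

    // Direction from the hand toward the device.
    let toDevice = simd_normalize(devicePosition - wristPos)

    // The palm faces the device when the triangle's front side points toward it.
    return simd_dot(normal, toDevice) > 0
}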
Replies: 1 · Boosts: 0 · Views: 375 · Sep ’24
Streaming Mac Virtual Display
Is it possible to stream my Mac's virtual display to a website I create, so I can view my screen remotely? The main goal is to capture screenshots of my Mac's display using voice commands. The idea is to have the display streamed to the website, where I could say something like 'take a screenshot,' and the website would then capture and save a screenshot of the display. Has anyone done something similar or knows how this could be accomplished?
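For the screenshot piece specifically, a minimal sketch using Core Graphics might look like the following. It assumes a one-shot capture of the main display is enough; it needs Screen Recording permission, and on recent macOS Apple is steering this kind of capture toward ScreenCaptureKit instead. The streaming-to-a-website and voice-command parts would be separate layers on top of this.

import Foundation
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

// Sketch only: capture the main display and write it to a PNG file.
func saveScreenshot(to url: URL) -> Bool {
    // Grab a snapshot of the main display (requires Screen Recording permission).
    guard let image = CGDisplayCreateImage(CGMainDisplayID()) else { return false }

    // Encode the CGImage as PNG at the given file URL.
    guard let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            UTType.png.identifier as CFString,
                                                            1, nil) else { return false }
    CGImageDestinationAddImage(destination, image, nil)
    return CGImageDestinationFinalize(destination)
}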
Replies: 0 · Boosts: 0 · Views: 222 · Aug ’24
'init(make:update:attachments:)' is unavailable in visionOS
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...

RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
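For context, the attachments builder expects Attachment values rather than bare SwiftUI views, which is one common reason this initializer fails to resolve. A minimal sketch along these lines (the entity setup is simplified and the attachment id "label" is made up) compiles against the attachments-aware initializer:

import SwiftUI
import RealityKit

struct AttachmentExampleView: View {
    var body: some View {
        RealityView { content, attachments in
            let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
            content.add(plane)

            // Look up the entity RealityKit generated for the "label" attachment and place it.
            if let label = attachments.entity(for: "label") {
                label.position = [0, 0.3, 0]
                content.add(label)
            }
        } attachments: {
            // Views must be wrapped in Attachment(id:) inside the attachments builder.
            Attachment(id: "label") {
                Text("Hello!")
            }
        }
    }
}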
Replies: 1 · Boosts: 0 · Views: 503 · May ’24
VisionOS Image tracking help
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit, so please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:

VIEWMODEL CODE:

@Observable
class ImageTrackingModel {
    var session = ARKitSession() // ARSession used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES

        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }

        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}

APP:

@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed
    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }
        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}

Content View:

RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
}
// I'm seriously so clueless on how to use this view
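Not a confirmed fix, but one thing that stands out: updateImage creates a brand-new entity on every anchor update, never uses entityMap, and touches RealityKit entities from the anchor-update task. A sketch of how that method could be restructured inside ImageTrackingModel (reusing the post's own entityMap and keeping entity work on the main actor) looks like this:

// Sketch only: reuse one entity per ImageAnchor and keep RealityKit work on the main actor.
@MainActor
func updateImage(_ anchor: ImageAnchor) {
    if entityMap[anchor.id] == nil {
        // First time this image is recognized: create the marker once and add it to the scene.
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05))
        entityMap[anchor.id] = entity
        rootEntity.addChild(entity)
    }

    if anchor.isTracked {
        // Keep the existing marker aligned with the tracked image.
        entityMap[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)
    }
}

The anchor-update loop would then need to await the call (e.g. await updateImage(update.anchor)), and rootEntity still has to be added to the RealityView's content (content.add(viewModel.rootEntity)) for anything to appear.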
Replies: 1 · Boosts: 0 · Views: 697 · May ’24