Placing Item on plane, position is great, but trouble with rotation

The Goal

My goal is to place an item where the user taps on a plane, and have that item match the outward facing normal-vector where the user tapped.

In beta 3, a 3D spatial tap gesture now returns an accurate location3D, so determining the position to place an item is working great. I simply do:

let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
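For context, here's roughly how that sits in the gesture handler. This is just a minimal sketch; placeItem(at:) is a placeholder helper, not a real API:

RealityView { content in
    // add the plane entities to `content` here
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // Convert the tap location from the gesture's local space into scene (world) space.
            let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)
            placeItem(at: worldPosition)
        }
)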

The Problem

Now, I notice that my entities aren't oriented correctly:

The placed item always 'faces' the camera, so if the camera isn't looking straight at the target plane, the orientation of the new entity is off.

If I retrieve the transform of my newly placed item, it says the rotation relative to nil is (0, 0, 0), which... doesn't look correct?

I know I'm dealing with several coordinate systems here: the plane being tapped, the world coordinate system, and the item being placed, and I'm getting a bit lost in it all. My intuition for the API is also still pretty low, and quaternions are new to me.

So, I'm curious, what rotation information can I use to "correct" the placed entity's orientation?

What I tried:

I've tried investigating the tapped target entity like so:

let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation

I believe this returns the rotation of the "plane entity" the user tapped, relative to the world.

While that gets me the following, I'm not sure whether it's useful:

rotationRelativeToWorld:

▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
  ▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)
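
For what it's worth, that quaternion is easier to read in angle/axis form. Dropping the quoted value into a quick check:

let q = simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 0, 0))
print("angle: \(q.angle * 180 / .pi)°, axis: \(q.axis)")
// roughly 90°, axis (-1, 0, 0)

So it's a 90° rotation about the negative X axis, i.e. the tapped plane's orientation relative to the world rather than a garbage value.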

If anyone has better intuition than me about the coordinate spaces involved, I would appreciate some help. Thanks!

Accepted Reply

My Solution

You can get the location of the tap gesture on an entity with:

let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

You can grab the orientation of the tapped plane with:

let rotation = value.entity.orientation(relativeTo: nil)
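
If you just want to orient the newly placed entity right away (without persistence), you can apply those two values directly. A quick sketch, where placedEntity is whatever entity you just created and added to the scene:

placedEntity.setPosition(worldPosition, relativeTo: nil)
placedEntity.setOrientation(rotation, relativeTo: nil)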

I want to track this location so it can be persisted via the WorldTrackingProvider (whenever it works...) with a world anchor. I do so by making a world anchor with:

let pose = Pose3D(position: worldPosition, rotation: rotation)
let worldAnchor = WorldAnchor(originFromAnchorTransform: simd_float4x4(pose))
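
To actually persist it, the anchor still needs to be added to the running provider. Assuming worldTracking is the WorldTrackingProvider you started in your ARKitSession:

try await worldTracking.addAnchor(worldAnchor)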

When you get the anchor from your world tracking listener, you can apply its transform (which contains both position and orientation) to an entity like so:

entityForAnchor.transform = Transform(matrix: anchor.originFromAnchorTransform)
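
The listener side looks roughly like this. A rough sketch, where anchoredEntities is an assumed [UUID: Entity] dictionary mapping each WorldAnchor's id to the entity placed for it:

for await update in worldTracking.anchorUpdates {
    guard let entityForAnchor = anchoredEntities[update.anchor.id] else { continue }
    switch update.event {
    case .added, .updated:
        // The anchor transform carries both position and orientation.
        entityForAnchor.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
    case .removed:
        entityForAnchor.removeFromParent()
    @unknown default:
        break
    }
}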


If you've got another way, or one that makes more intuitive use of the API, please share. This was cobbled together with a ton of trial and error.

  • Is there any info about this type of usage of world anchors? We would need exactly the same (anchor an object relative to the user, but not tied to a wall or desk).


Replies

An item (a model, or virtual object) has a shape, size, position, and orientation. There is the world reference coordinate system, and your device has its own moving device coordinate system. A real moving or stationary object has its own model coordinate system (shape, size, position, and orientation). Finally, you want to attach your item onto or around a real object surface seen through your device. That's a real-time AR problem, and in general it is still a difficult, unsolved one.

It's a hard problem, but there is already a solution.

First, you created/designed an item (a model, or virtual object) with a shape and size in its model coordinate system.

Then, the world reference coordinate system was set when you ran your app. The device's coordinate system relative to the world reference coordinate system is determined in real time by ARKit's motion tracking.

What information is still missing at this point?

What is unknown is the shape, size, position, and orientation of the real object surface of interest (e.g., your real plane).

Our FindSurface runtime library solves your last problem:

FindSurface Web Demo https://developers.curvsurf.com/WebDemo/

Virtual ads inside Chungmuro station Line 3 - iPhone Pro 12 https://youtu.be/BmKNmZCiMkw

  • This is a simplified case where the “real object surface of interest” is a known model entity being tapped on. This simulates the scenario where the PlaneDetectionProvider exists. Also, do your resources use RealityKit and run on iOS?

  • There are numerous object surfaces around us. Accurately determining their shape, size, position, and orientation in real time is a major problem. Essentially, this is the problem we are all trying to solve.

  • Sure, but my question is more about entity-to-entity implementation where the entities are known.
