Hi, yes, using scene.anchors.append(myAnchor) has the same effect, with the added bonus that you can add multiple independent anchors at once via scene.anchors.append(contentsOf: [Scene.AnchorCollection.Element]). I don't know of any reason one should or shouldn't be used over the other; only that, to me, scene.addAnchor() reads a little more clearly, so it may be what the RealityKit team expects us to use.
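For illustration, a minimal sketch of both approaches (arView and the anchor/box names are placeholders, not from the original question):

import RealityKit

// Assumes an existing ARView called arView.
let boxAnchor = AnchorEntity(plane: .horizontal)
boxAnchor.addChild(ModelEntity(mesh: .generateBox(size: 0.1)))

// Either of these adds the anchor to the scene:
arView.scene.addAnchor(boxAnchor)
// arView.scene.anchors.append(boxAnchor)

// Appending several independent anchors at once:
// arView.scene.anchors.append(contentsOf: [anchorA, anchorB])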
I typically set the frame to zero when making UIKit apps too (non SwiftUI), and then set the frame later like this:

self.arView.frame = self.view.bounds
self.arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]

I'm not extremely knowledgeable about UIKit in general, but this works for me to make the view scale to fill the screen no matter the orientation.
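Here is a minimal sketch of how that might look in a view controller, assuming the ARView is created in code (names are placeholders):

import UIKit
import RealityKit

class ViewController: UIViewController {
    // Created with a zero frame; sized once the view hierarchy exists.
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.addSubview(arView)
        arView.frame = view.bounds
        // Keeps the ARView matching the screen size on rotation.
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    }
}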
It should be straightforward to fix those issues; try something like this:

entity.look(at: camera.transform.translation, from: entity.position, upVector: [0, 1, 0], relativeTo: nil)

This does depend on your entity not being a child of something else in the scene with a non-identity transform. Otherwise you have to find the camera transform relative to the entity's parent; there are several conversion methods in the HasTransform documentation. As a side note, I noticed that this look() function also scales your object, so watch out for that if your object has a non-[1, 1, 1] scale.
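If the entity does sit under a parent with its own transform, one possible approach (a sketch; arView and the variable names are assumptions) is to convert the camera position into the parent's coordinate space first:

// Convert the world-space camera position into the parent's space,
// then look at it relative to that parent.
let cameraPosition = arView.cameraTransform.translation
if let parent = entity.parent {
    let localCameraPosition = parent.convert(position: cameraPosition, from: nil)
    entity.look(at: localCameraPosition, from: entity.position, upVector: [0, 1, 0], relativeTo: parent)
}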
Yes exactly 🚀
Hi, using the method HasTransform.look() has worked great for me; you just have to make sure the direction RealityKit thinks is forward on your model is the same direction you do. https://developer.apple.com/documentation/realitykit/hastransform/3244204-look You just set 'at' to the camera transform's position (the point you want the entity to face), and 'from' to the entity's own position.
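A minimal sketch of that call, assuming arView is your ARView and myEntity sits directly under an anchor with no extra parent transform:

// Make the entity face the camera; positions here are treated as world space.
let cameraPosition = arView.cameraTransform.translation
myEntity.look(at: cameraPosition, from: myEntity.position, upVector: [0, 1, 0], relativeTo: nil)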
I think the easiest way would be to import your model and animation into Blender (or similar), export it as a glb file, and then convert that to USDZ. The glb file bundles the model and animations together.
Assuming you've set up a UITapGestureRecognizer for your ARView, and have a function that starts like this:

@objc func handleTap(_ sender: UITapGestureRecognizer? = nil) {

Get the CGPoint of that touch:

guard let touchInView = sender?.location(in: self.arView) else {
    return
}

Perform a raycast at that CGPoint:

if let result = arView.raycast(
    from: touchInView,
    allowing: .existingPlaneGeometry, alignment: .horizontal
).first {
    print(result.worldTransform)
    // worldTransform is of type simd_float4x4
}

From there, use this 4x4 matrix to position your entity at the touch location in the scene using move(to:relativeTo:).
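For that last step, a minimal sketch that would go inside the if block above (placedEntity is a placeholder for whichever entity you want to place):

// Position the entity at the raycast hit, in world space.
let hitTransform = Transform(matrix: result.worldTransform)
placedEntity.move(to: hitTransform, relativeTo: nil)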
I haven't worked on image detection with ARKit in a little while, but as I remember, ARImageTrackingConfiguration is used for actively tracking an image, for example a postcard or something else that might move. ARWorldTrackingConfiguration should be reserved for images that will be static in the scene, where you can rely on them existing at only one location (for example, a poster). The benefit of using ARWorldTrackingConfiguration over ARImageTrackingConfiguration is that ARKit will search for the image less often, by assuming that the image isn't moving and only the camera is. This means you can get away with more complex scenes, whereas ARImageTrackingConfiguration will want to use the CPU on as many frames as it can.
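For reference, a sketch of how each configuration might be set up; the reference-image group name "AR Resources" is an assumption about your asset catalog:

import ARKit

// Static images anchored in the world (e.g. a poster):
let worldConfig = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    worldConfig.detectionImages = referenceImages
}
// arView.session.run(worldConfig)

// Images that may move (e.g. a postcard):
let imageConfig = ARImageTrackingConfiguration()
if let trackingImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources", bundle: nil) {
    imageConfig.trackingImages = trackingImages
    imageConfig.maximumNumberOfTrackedImages = 1
}
// arView.session.run(imageConfig)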
Not sure exactly what you're after, but look up "RealityKit Entity Gestures" for information on how to add panning gestures to an entity using installGestures().
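As a rough sketch, adding a translation (pan) gesture to an entity might look like this; arView is assumed to be your ARView, and the entity needs collision shapes for the gesture to hit-test against:

// ModelEntity conforms to HasCollision, but still needs collision shapes generated.
let boxEntity = ModelEntity(mesh: .generateBox(size: 0.1))
boxEntity.generateCollisionShapes(recursive: true)
arView.installGestures(.translation, for: boxEntity)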
I just posted an article on exactly this, with a library to take away the boilerplate code and a working example:
https://medium.com/@maxxfrazer/realitykit-synchronization-289ba9409a6e

And the library to do most of the work for you, complete with an example project:
https://github.com/maxxfrazer/MultipeerHelper

Hopefully it helps you all!