Where do I need to add one of these lines of code in order to enable people occlusion in my scene?
config.frameSemantics.insert(.personSegmentationWithDepth)
static var personSegmentationWithDepth: ARConfiguration.FrameSemantics { get }
Here is the code:
import UIKit
import RealityKit
import ARKit

class ViewControllerBarock: UIViewController, ARSessionDelegate {

    @IBOutlet var arView: ARView!

    // Scene loaded from the Reality Composer project.
    let qubeAnchor = try! Barock.loadQube()

    // Maps each detected image anchor to the RealityKit anchor that holds the model.
    var imageAnchorToEntity: [ARImageAnchor: AnchorEntity] = [:]

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.scene.addAnchor(qubeAnchor)
        arView.session.delegate = self
    }

    // When ARKit detects a new image anchor, attach the cube model to it.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        anchors.compactMap { $0 as? ARImageAnchor }.forEach {
            let anchorEntity = AnchorEntity()
            let modelEntity = qubeAnchor.stehgreifWurfel!
            anchorEntity.addChild(modelEntity)
            arView.scene.addAnchor(anchorEntity)
            anchorEntity.transform.matrix = $0.transform
            imageAnchorToEntity[$0] = anchorEntity
        }
    }

    // Keep the model aligned with the tracked image as it moves.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        anchors.compactMap { $0 as? ARImageAnchor }.forEach {
            let anchorEntity = imageAnchorToEntity[$0]
            anchorEntity?.transform.matrix = $0.transform
        }
    }
}
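For reference, frameSemantics is a property of the ARConfiguration that the session runs, so one natural place for that line is at the end of viewDidLoad, before running a configuration manually. Below is a minimal sketch of that placement; it assumes the device supports people occlusion, and if the image detection currently comes from RealityKit's automatic session configuration, the detection images would also have to be assigned to this configuration.

override func viewDidLoad() {
    super.viewDidLoad()
    arView.scene.addAnchor(qubeAnchor)
    arView.session.delegate = self

    // People occlusion with depth is only supported on devices with an A12 chip or newer.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) else { return }

    let configuration = ARWorldTrackingConfiguration()
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
    // If the scene relies on RealityKit configuring image detection automatically,
    // the detection images would also need to be set here (configuration.detectionImages).
    arView.session.run(configuration)
}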
Greetings everyone!
Judging by the fact that Xcode is running very slowly and the banner up top says "An internal error occurred. Editing functionality might be limited.", I'm thinking either I'm doing something wrong or something is not working right. Is it too many elements/pages? How is one supposed to do this correctly?
Context: I'm making an app that is supposed to be an AR platform, kind of like a browser for AR content.
Hi everyone,
I want to make an AR experience that continuously drops a cube into the scene every 10 seconds. I was able to create something close in Reality Composer, where a cube spawns every time you tap the screen, but a) it's not automatic and b) it's not continuous. Basically, I am moving a cube from very far away (where it cannot be seen) into the view and then dropping it. Instead, I would like this action to be automated and run continuously on a loop.
Any ideas on how to achieve this?
Many thanks in advance!
– Berke
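One possible approach (a sketch, not a confirmed solution from this thread): schedule a Timer that adds a fresh cube entity with a dynamic physics body every 10 seconds, so each cube falls under gravity on its own instead of being moved in from off screen. The class, anchor, and entity names below are illustrative and not taken from an existing Reality Composer project.

import UIKit
import RealityKit

class CubeDropViewController: UIViewController {

    @IBOutlet var arView: ARView!

    // Anchor the scene to the first horizontal plane ARKit finds.
    let sceneAnchor = AnchorEntity(plane: .horizontal)
    var spawnTimer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.scene.addAnchor(sceneAnchor)

        // Spawn a new cube every 10 seconds, automatically and on a loop.
        spawnTimer = Timer.scheduledTimer(withTimeInterval: 10, repeats: true) { [weak self] _ in
            self?.dropCube()
        }
    }

    func dropCube() {
        // A 10 cm box with a dynamic physics body so it falls under gravity.
        let box = ModelEntity(mesh: .generateBox(size: 0.1),
                              materials: [SimpleMaterial(color: .blue, isMetallic: false)])
        box.generateCollisionShapes(recursive: true)
        box.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                               material: nil, // nil = default physics material
                                               mode: .dynamic)
        // Start one meter above the anchor so the cube visibly drops into the scene.
        box.position = [0, 1, 0]
        sceneAnchor.addChild(box)
    }
}

Note that for the cubes to land on something, the scene would also need a static collision surface (for example, a large plane entity with a static physics body); otherwise they just keep falling.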
Hello everyone!
I'm an architecture student posting here for the first time, so I apologize in advance for any rookie mistakes.
For a semester project, I designed an empty cube which is supposed to serve as a vessel for AR experiences. The idea is users would be able to discover all sorts of exhibitions and works of art (I partnered up with a local museum).
I managed to get my 3D CAD models converted to USDZ and build demos using Reality Composer. Now my professors have asked me to build the actual thing, and I am faced with a challenge… the cube is 4x4x4 meters in size, and I would like viewers to be able to step inside the cube, walk around, and experience whatever they're looking at as if it were actually standing in the cube – without the phone losing track of the AR object. I have figured out how to map objects to a single image or some type of marker, but what about multiple markers? Is it possible, for instance, to color each corner of the cube a different color so the software understands this real-life object and can scale and rotate the model accordingly?
Many thanks in advance! I included a render from my pitch, as they say a picture says more than a thousand words haha…
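One way multiple markers are often handled (a sketch, assuming the marker photos are added to an AR resource group in the asset catalog; the group name "CubeMarkers" is hypothetical): a world-tracking configuration can detect and track several reference images at the same time, so each marker on the physical cube can re-anchor and realign the virtual model as the viewer walks around. One caveat: flat, uniformly colored corners make poor reference images, since ARKit needs markers with plenty of visual detail and contrast.

import ARKit
import RealityKit

// A sketch: detect and track several image markers at once.
// "CubeMarkers" is a hypothetical AR resource group in the asset catalog.
func runMarkerDetection(on arView: ARView) {
    guard let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "CubeMarkers",
                                                                 bundle: nil) else {
        fatalError("Missing AR resource group")
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = referenceImages
    // Track several markers simultaneously so different sides of the
    // 4x4x4 m cube can each keep the model aligned.
    configuration.maximumNumberOfTrackedImages = 4
    arView.session.run(configuration)
}

Each detected marker then shows up as an ARImageAnchor in the session delegate, similar to the code at the top of this thread, and the model can be positioned relative to whichever marker is currently tracked.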