I'm trying to achieve behaviour similar to the native AR preview on iOS: you place a model and, as you move or rotate it, it automatically detects obstacles, gives haptic feedback, and doesn't go through walls. I'm targeting LiDAR-equipped devices only.
Here is what I have so far:
Session setup
private func configureWorldTracking() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.environmentTexturing = .automatic

    // Use the LiDAR mesh with per-face classification when the device supports it.
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
        configuration.sceneReconstruction = .meshWithClassification
    }

    // Request raw and smoothed depth where available.
    let frameSemantics: ARConfiguration.FrameSemantics = [.smoothedSceneDepth, .sceneDepth]
    if ARWorldTrackingConfiguration.supportsFrameSemantics(frameSemantics) {
        configuration.frameSemantics.insert(frameSemantics)
    }

    session.run(configuration)
    session.delegate = self

    arView.debugOptions.insert(.showSceneUnderstanding)
    arView.renderOptions.insert(.disableMotionBlur)
    arView.environment.sceneUnderstanding.options.insert([.collision, .physics, .receivesLighting, .occlusion])
}
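With the .collision scene-understanding option enabled, the reconstructed mesh can in principle also be queried directly. A minimal sketch of such a query follows; the function name and parameters are illustrative, not code from my project:
// Sketch (not from my project): check whether moving the entity by `offset`
// would hit the reconstructed mesh. `arView` is the ARView configured above.
func movementWouldHitSceneMesh(from position: SIMD3<Float>, offset: SIMD3<Float>) -> Bool {
    let distance = simd_length(offset)
    guard distance > 0 else { return false }
    let hits = arView.scene.raycast(origin: position,
                                    direction: offset / distance,
                                    length: distance,
                                    query: .nearest,
                                    mask: .sceneUnderstanding,
                                    relativeTo: nil)
    return !hits.isEmpty
}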
Custom entity:
import Combine
import RealityKit

class CustomEntity: Entity, HasModel, HasCollision, HasPhysics {

    var modelName: String = ""
    private var cancellable: AnyCancellable?

    init(modelName: String) {
        super.init()
        self.modelName = modelName
        self.name = modelName
        load()
    }

    required init() {
        fatalError("init() has not been implemented")
    }

    deinit {
        cancellable?.cancel()
    }

    func load() {
        cancellable = Entity.loadModelAsync(named: modelName + ".usdz")
            .sink(receiveCompletion: { result in
                switch result {
                case .finished:
                    break
                case .failure(let failure):
                    debugPrint(failure.localizedDescription)
                }
            }, receiveValue: { modelEntity in
                // Copy the loaded model and give it collision shapes so it can
                // interact with the scene-understanding mesh.
                modelEntity.generateCollisionShapes(recursive: true)
                self.model = modelEntity.model
                self.collision = modelEntity.collision
                self.collision?.filter.mask.formUnion(.sceneUnderstanding)
                self.physicsBody = modelEntity.physicsBody
                self.physicsBody?.mode = .kinematic
            })
    }
}
Entity loading and placing
// Inside the tap gesture handler (`sender` is the tap gesture recognizer):
let tapLocation = sender.location(in: arView)
guard let raycastResult = arView.raycast(from: tapLocation, allowing: .estimatedPlane, alignment: .horizontal).first else { return }

let entity = CustomEntity(modelName: modelName)
let anchor = AnchorEntity(world: raycastResult.worldTransform)
anchor.name = entity.name
anchor.addChild(entity)

arView.scene.addAnchor(anchor)
arView.installGestures([.rotation, .translation], for: entity)
This loads my model properly and lets me move and rotate it, but I cannot figure out how to handle collisions with the real environment (walls, for example) and how to interrupt the gestures once my model starts going through them.
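To make the goal concrete, the behaviour I'm after is roughly the following sketch: subscribe to RealityKit collision events on the placed entity and fire a haptic when it begins touching the reconstructed mesh. The collisionSubscription property and the gesture-interruption step are placeholders, not working code:
// Sketch only: react when the placed entity starts colliding with the
// scene-understanding mesh. `collisionSubscription` would be a stored Cancellable.
collisionSubscription = arView.scene.subscribe(to: CollisionEvents.Began.self, on: entity) { event in
    let feedback = UIImpactFeedbackGenerator(style: .medium)
    feedback.impactOccurred()
    // Ideally the active translation/rotation gesture would be interrupted here,
    // which is the part I cannot figure out.
}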
Is it possible to use ARReferenceImage objects created programmatically from images on iOS 11.3.1?
The images that I want to track are downloaded from a web service and stored locally. I'm creating the ARReferenceImage objects from them using the following code:
guard
		let image = UIImage(contentsOfFile: imageLocalPath),
		let cgImage = image.cgImage
else {
		return nil
}
return ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.12)
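That snippet runs once per downloaded file; collecting the results into the set passed to the configuration looks roughly like the hypothetical helper below (the paths array and the fixed 0.12 m physical width are assumptions):
// Hypothetical helper: build the reference-image set from locally stored files.
func makeReferenceImages(from localPaths: [String]) -> Set<ARReferenceImage> {
    var references = Set<ARReferenceImage>()
    for path in localPaths {
        guard let image = UIImage(contentsOfFile: path),
              let cgImage = image.cgImage else { continue }
        let reference = ARReferenceImage(cgImage, orientation: .up, physicalWidth: 0.12)
        reference.name = (path as NSString).lastPathComponent
        references.insert(reference)
    }
    return references
}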
The session is configured depending on the current iOS version, since ARImageTrackingConfiguration is not available on iOS 11:
private lazy var configuration: ARConfiguration = {
		if #available(iOS 12.0, *),
				ARImageTrackingConfiguration.isSupported {
				return ARImageTrackingConfiguration()
		}
		return ARWorldTrackingConfiguration()
}()
if #available(iOS 12.0, *),
		let imagesTrackingConfig = configuration as? ARImageTrackingConfiguration {
		imagesTrackingConfig.trackingImages = referenceImages
} else if let worldTrackingConfig = configuration as? ARWorldTrackingConfiguration {
		worldTrackingConfig.detectionImages = referenceImages
}
session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
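For reference, detections are observed through the session delegate; a minimal callback that would confirm an image was found looks roughly like this (logging only, for illustration):
// Sketch: log when ARKit adds an anchor for a detected or tracked image.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let imageAnchor as ARImageAnchor in anchors {
        debugPrint("Detected image:", imageAnchor.referenceImage.name ?? "unnamed")
    }
}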
The code above works fine on iOS 12 and 13, even when I use ARWorldTrackingConfiguration; the images are correctly detected by ARKit. But when I try to run it on iOS 11.3.1, the app immediately crashes with the following error:
Assert: /BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleCV3D/AppleCV3D-1.13.11/library/VIO/OdometryEngine/src/FrameDownsampleNode/FrameDownsampler.cpp, 62: std::abs(static_cast(aspect_1) - static_cast(src_frame.image.width * output_frame_height)) < max_slack
(lldb)
Is it possible that creating markers dynamically in code is not supported on iOS versions below 12.0, or am I doing something wrong? Unfortunately, I wasn't able to find any information regarding the specific versions. Thank you.