Currently in an app I am working on, we are adding collision shapes/components to objects by using the ShapeResource.generateConvex method to generate the shape from the mesh of our ModelEntity. Unfortunately, this does not result in a totally accurate collision shape. The following example shows how the collision component currently looks.
Is there any way to generate a collision shape that fits the exact bounds of the ModelEntity?
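For context, this is roughly how we generate the shape today (a simplified sketch; the entity variable name is an assumption):
// Sketch of the current approach ("modelEntity" is assumed): generate a convex
// hull collision shape from the entity's own mesh.
if let mesh = modelEntity.model?.mesh {
    let convexShape = ShapeResource.generateConvex(from: mesh)
    modelEntity.components.set(CollisionComponent(shapes: [convexShape]))
}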
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...
RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}
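For comparison, the documented shape of the attachments closure on visionOS is roughly the following (a sketch; the "label" identifier is an assumption): each view is wrapped in Attachment(id:), and the matching entity is retrieved from the attachments parameter and added to the content.
RealityView { content, attachments in
    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(plane)

    // Look up the SwiftUI view as an entity and add it to the scene.
    if let label = attachments.entity(for: "label") {
        content.add(label)
    }
} attachments: {
    // Each attachment view is wrapped in Attachment with an identifier.
    Attachment(id: "label") {
        Text("Hello!")
    }
}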
I am fairly new to 3D model rendering and do not know where to start.
I am trying to scan an environment, ideally with ARKit & RealityKit or SceneKit. This includes:
Applying realistic textures to the model.
Being able to save it as a .usdz file (to be able to open it within the app itself).
Once it is saved, doing post-processing measurements within the model.
I would prefer to accomplish this using a mesh, instead of the point cloud used in Apple's sample project. Would this be doable using Apple's APIs on a mobile device, or would it be necessary to use a third-party program?
I have managed to create a USDZ file using SceneKit's scene.write(to:delegate:) method. However, the saved file is a "single object", and it is not possible to use raycasting to do post-processing measurements in the model.
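For reference, the export call looks roughly like this (a sketch; the output URL is an assumption):
// Sketch: write the SceneKit scene out as USDZ ("scene" is the SCNScene built from the scan).
let exportURL = FileManager.default
    .urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("scan.usdz")

let didWrite = scene.write(to: exportURL, options: nil, delegate: nil, progressHandler: nil)
print("USDZ export succeeded: \(didWrite)")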
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:
VIEWMODEL CODE:
@Observable class ImageTrackingModel {
    var session = ARKitSession() // ARKitSession used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES
        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }
        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}
APP:
@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed

    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }

        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}
Content View:
RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
} // I'm seriously so clueless on how to use this view
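For reference, my current best guess at how this view should be wired up (a sketch, not verified) is to add the view model's rootEntity to the RealityView content so the entities created in updateImage(_:) actually appear:
RealityView { content in
    // Add the view model's root entity once; entities created in
    // updateImage(_:) are children of it and will show up automatically.
    content.add(viewModel.rootEntity)
}
.task {
    // Start image tracking when the view appears.
    viewModel.setupImageTracking()
}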
In visionOS mixed mode, I place a virtual object on the floor and a chair in front of it, but the chair does not occlude the virtual object, which makes the effect unrealistic. How can I make chairs and other real-world objects cover virtual objects?
I was executing some code from Incorporating real-world surroundings in an immersive experience:
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
I would like to toggle the Occlusion mesh available in the dev tools below, but programmatically: I would like to have a button that activates and deactivates it.
I was checking .showSceneUnderstanding, but it does not seem to work in visionOS. I get the error 'ARView' is unavailable in visionOS when I try what is shown in Visualizing and Interacting with a Reconstructed Scene.
I am trying to map the 3D skeleton joint positions of an ARBodyAnchor to the real body on the camera image.
I know I could simply use the "detectedBody" of the ARFrame, which would already deliver the normalized 2D position of each joint, but what I am mostly interested in is the z-axis (the distance of each joint to the camera).
I am starting an ARBodyTrackingConfiguration, setting the world alignment to ARWorldAlignmentCamera (in which case the camera transform is an identity matrix) and multiplying each joint transform in model space (via modelTransformForJointName:) with the transform of the ARBodyAnchor. I then tried many different ways to get the joints to line up with the image, for example by multiplying the transforms with the projectionMatrix of the ARCamera. But whatever I do, it never lines up correctly.
For example, there doesn't really seem to be a scale factor in the projectionMatrix or the ARBodyAnchor transform: no matter the distance of the camera to the detected body, the scale of the body is always the same.
Which means I am missing something important, and I haven't figured out what. So does anyone have an example of how I can get the body to align with the camera image? (Or get the distance to each joint in any other way?)
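For reference, this is the kind of projection path I would expect to work (a hedged sketch; the joint name and viewport size are assumptions), using ARCamera.projectPoint(_:orientation:viewportSize:) instead of multiplying the projection matrix by hand:
// Sketch: project a body joint into image space and measure its distance to the camera.
// "bodyAnchor", "frame", and "viewportSize" are assumed to exist in scope.
let jointModelTransform = bodyAnchor.skeleton.modelTransform(for: .head) ?? matrix_identity_float4x4
let jointWorldTransform = bodyAnchor.transform * jointModelTransform
let jointWorldPosition = simd_make_float3(jointWorldTransform.columns.3)

// 2D position of the joint in the viewport.
let imagePoint = frame.camera.projectPoint(jointWorldPosition,
                                           orientation: .portrait,
                                           viewportSize: viewportSize)

// Straight-line distance from the camera to the joint (not just the z component).
let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
let distanceToJoint = simd_distance(cameraPosition, jointWorldPosition)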
Thanks!
We have a random issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and it never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine.
Unfortunately this is very difficult to manage in the flow of our App.
I am trying to determine the corners of a RoomPlan-detected wall using the information available in the ARView session's frame, but can't quite figure out what I'm doing wrong. The corners appear to be correct relative to each other, but the wall appears too large when I render it. (I'm also not sure I'm handling the image rotation correctly, which may be compounding my problem.) Here is the code I currently have, along with a sample image and the resulting image when I pass it through the perspective filter. It is close, but it isn't cropping the walls and floors correctly.
func captureSession(_ session: RoomCaptureSession, didChange room: CapturedRoom) {
    for surface in room.walls {
        if let frame = self.arView.session.currentFrame {
            var image: CGImage? = nil
            VTCreateCGImageFromCVPixelBuffer(frame.capturedImage, options: nil, imageOut: &image)

            let wallTransform = surface.transform
            let cameraTransform = frame.camera.transform
            let intrinsics = frame.camera.intrinsics
            let projectionMatrix = frame.camera.projectionMatrix
            let width = surface.dimensions.y
            let height = surface.dimensions.x
            let inverseCameraTransform = simd_inverse(cameraTransform)

            let wallTopRight = simd_float4(width/2, height/2, 0, 1)
            let wallTopLeft = simd_float4(-width/2, height/2, 0, 1)
            let wallBottomRight = simd_float4(width/2, -height/2, 0, 1)
            let wallBottomLeft = simd_float4(-width/2, -height/2, 0, 1)

            let worldTopRight = wallTransform * wallTopRight
            let worldTopLeft = wallTransform * wallTopLeft
            let worldBottomRight = wallTransform * wallBottomRight
            let worldBottomLeft = wallTransform * wallBottomLeft

            let cameraTopRight = projectionMatrix * inverseCameraTransform * worldTopRight
            let cameraTopLeft = projectionMatrix * inverseCameraTransform * worldTopLeft
            let cameraBottomRight = projectionMatrix * inverseCameraTransform * worldBottomRight
            let cameraBottomLeft = projectionMatrix * inverseCameraTransform * worldBottomLeft

            let imageTopRight = intrinsics * simd_float3(cameraTopRight.x / cameraTopRight.w, cameraTopRight.y / cameraTopRight.w, cameraTopRight.z / cameraTopRight.w)
            let imageTopLeft = intrinsics * simd_float3(cameraTopLeft.x / cameraTopLeft.w, cameraTopLeft.y / cameraTopLeft.w, cameraTopLeft.z / cameraTopLeft.w)
            let imageBottomRight = intrinsics * simd_float3(cameraBottomRight.x / cameraBottomRight.w, cameraBottomRight.y / cameraBottomRight.w, cameraBottomRight.z / cameraBottomRight.w)
            let imageBottomLeft = intrinsics * simd_float3(cameraBottomLeft.x / cameraBottomLeft.w, cameraBottomLeft.y / cameraBottomLeft.w, cameraBottomLeft.z / cameraBottomLeft.w)

            let topRight = CGPoint(x: CGFloat(imageTopRight.x), y: CGFloat(imageTopRight.y))
            let topLeft = CGPoint(x: CGFloat(imageTopLeft.x), y: CGFloat(imageTopLeft.y))
            let bottomRight = CGPoint(x: CGFloat(imageBottomRight.x), y: CGFloat(imageBottomRight.y))
            let bottomLeft = CGPoint(x: CGFloat(imageBottomLeft.x), y: CGFloat(imageBottomLeft.y))

            if let image {
                let filter = CIFilter.perspectiveCorrection()
                filter.inputImage = CIImage(image: UIImage(cgImage: image))
                filter.topRight = topRight
                filter.topLeft = topLeft
                filter.bottomRight = bottomRight
                filter.bottomLeft = bottomLeft
                let transformedImage = filter.outputImage
                if let transformedImage {
                    let context = CIContext()
                    if let outputImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
                        let wall = Wall(id: surface.identifier, image: outputImage, surface: surface)
                        self.walls.append(wall)
                    }
                }
            }
        }
    }
}
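A hedged alternative to the manual intrinsics/projection math above is to let ARKit project the points itself; a rough sketch (reusing names from the snippet above, and assuming the captured image is in landscapeRight orientation):
// Sketch: project one wall corner into captured-image pixel coordinates with ARKit's
// helper instead of combining projectionMatrix and intrinsics manually.
let worldCorner = simd_make_float3(worldTopRight) // drop the homogeneous w component
let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                       height: CVPixelBufferGetHeight(frame.capturedImage))
let pixelTopRight = frame.camera.projectPoint(worldCorner,
                                              orientation: .landscapeRight,
                                              viewportSize: imageSize)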
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking.
When isAutoFocusEnabled is set to false, the camera does not focus.
How should this be set up to solve the problem?
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
I am trying to change the color of a USDZ asset provided by my designer, but I am unable to do so. Can someone help me with some sample code?
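A minimal sketch of the usual approach for simple assets (the asset name and color are assumptions): load the USDZ as a ModelEntity and replace its materials with a SimpleMaterial; note this also replaces any textures on the original asset.
// Sketch: load a USDZ (name assumed) and override its materials with a solid color.
if let entity = try? ModelEntity.loadModel(named: "DesignerAsset"),
   var model = entity.model {
    let tinted = SimpleMaterial(color: .systemBlue, roughness: 0.4, isMetallic: false)
    // Keep the same material count so every mesh part gets the new color.
    model.materials = model.materials.map { _ in tinted }
    entity.model = model
}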
Hi all,
I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:
var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}
Every single time, faceEntity comes back empty. Any help/pointers would be appreciated!
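For completeness, since query1 is not shown above, here is a hypothetical definition of the kind of query this refers to (an assumption, not the original code):
// Hypothetical definition of query1 (not in the original post): match every
// entity that carries a SceneUnderstandingComponent, then filter for faces.
let query1 = EntityQuery(where: .has(SceneUnderstandingComponent.self))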
I'm trying to take an object capture and scale it. What I did so far is create a Reality Composer project, insert the .objcap file into the project, and then scale it from 100% to 200%. I then extracted it as a USDZ. It just won't show up in the Xcode preview now, and I'm not sure why. Is there any way to fix this? I'm going crazy trying to find a fix for this.
I am developing an iOS app intended to be used only at a specific location (a campus). In this case, I'd like to use ARGeoAnchor to anchor content across a relatively large space, though in the pedestrian-only areas this is not supported and tracking begins to fail.
Is it possible to additionally use ARReferenceObject to re-localize to a specific location when I am walking in an unsupported area?
(FB13719373)
I am planning to build a visionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona?
Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I
Any help is very welcome, thanks.
Hello, I tried to build something with scene reconstruction, but I want to add occlusion to the surfaces. How can I do that? I tried to create an entity and then apply an OcclusionMaterial, but I received a ShapeResource, and I need a MeshResource to create a mesh for the entity before applying a material. Any suggestions?
It appears that when a class like the following:
" class RoomCaptureViewController: UIViewController,
RoomCaptureViewDelegate,ARSCNViewDelegate,
MTKViewDelegate, ARSessionDelegate, RoomCaptureSessionDelegate. "
has multiple delegates, each message is delivered to a delegate by a priority-sensitive, order-based algorithm, and a given message can be processed by only one delegate; it is not passed off to other delegates if they don't have the proper entry points. Specifically, I noted that changing the order seems to result in a delegate not getting a message that it should be seeing. Is there a "handoff" call that can be made after a delegate has seen a message but needs to pass it off to another delegate for processing? This protocol is typically used in interrupt handlers for PCIe and other messaging protocols, and I have not been able to find a similar capability in the voluminous documentation available for iOS and Mac systems. I would also like to know how a message is dispatched by a class to the particular delegate for which the message was intended. Is there a detailed document that explains how the messaging protocol works that is not so fragmented as to require having multiple monitors open in order to form a coherent picture of the messaging interface for delegates belonging to a class?
I have a simple visionOS app that uses a RealityView to map floors and ceilings using PlaneDetectionProvider and PlaneAnchors.
I can look at a location on the floor or ceiling, tap, and place an object at that location (I am currently placing a small cube with X-Y-Z axes sticking out at the location).
The tap locations are consistently about 0.35 m off from where I was looking along the horizontal plane (they are never off vertically).
Has anyone else run into the issue of a spatial tap gesture resulting in a location offset from where they are looking?
And if I move to different locations, the offset is the same in real space, so the offset doesn't appear to be associated with the orientation of the Apple Vision Pro (e.g. it isn't off a little to the left of where I was looking relative to the headset).
Attached is an image showing this. I focused on the corner of the carpet (yellow circle), tapped my fingers to trigger a tap gesture in RealityView, extracted the location, and placed a purple cube at that location.
I stood in 4 different locations (where the orange squares are), looked at the corner of the rug (yellow circle) and tapped. All 4 purple cubes are placed at about the same location, roughly 0.35 m away from the look location.
Here is how I captured the tap gesture and extracted the 3D location:
var myTapGesture: some Gesture {
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { event in
            let location3D = event.convert(event.location3D, from: .global, to: .scene)
            let entity = event.entity
            model.handleTap(location: location3D, entity: entity)
        }
}
Here is how I set the position of the purple cube:
func handleTap(location: SIMD3<Float>, entity: Entity) {
    let positionEntity = Entity()
    positionEntity.setPosition(location, relativeTo: nil)
    ...
}
While WorldTrackingProvider.removeAnchor() completes without error, the WorldAnchor might be back the next time the app is run. This can easily be replicated with the ObjectPlacement sample: just add 10 objects, tap Remove All, then run the app again. On the first run the anchors might be gone, but run the app a couple more times and the anchors come back.
This becomes a big problem when paired with the issue that anchors are not always found when the app enters Immersive mode. When an anchor is not found, our app adds a new one. That usually works fine for that run. On the next run, however, the other anchors show up again. Anchors accumulate, and it becomes difficult to keep track of them.
Flow:
User enters the app and starts an ARKit session with world tracking and scene reconstruction.
User closes the app, so we stop the session.
User re-enters the app and we try to run the session again, but the app crashes with the error: "It is not possible to re-run a stopped data provider."
If we remove the code that stops the session, then when the user re-enters the app, scene reconstruction doesn't work properly and shows inaccurate meshing data.
Is this a bug, or am I doing something wrong here? Any ideas or insight are appreciated.
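One workaround sketch (unconfirmed, but suggested by the error text) is to create fresh data provider instances before each run instead of re-running the stopped ones:
// Sketch: recreate the providers each time the immersive experience starts,
// rather than reusing instances that were stopped with the previous session.
// "session" is assumed to be an existing ARKitSession.
func startSession() async throws {
    let worldTracking = WorldTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([worldTracking, sceneReconstruction])
}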