Post · Replies · Boosts · Views · Activity

RealityKit's ARView raycast returns nothing
Hello, I have exported a USDZ file using SceneKit's .write() method on the displayed scene. When I load it into another RealityKit ARView that uses the camera's .nonAR mode, I try to use the view's raycast(from:allowing:alignment:) method to get coordinates on the model. I applied collision components when loading the model, using the .generateCollisionShapes(recursive:) function, so that I can interact with the ModelEntity. However, the raycast returns no results. Is there something I am missing to make it work? Thanks!
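
For reference, here is a minimal sketch of the setup being described (the entity and view names are placeholders, not the original code). Note that ARView also offers hitTest(_:query:mask:), which casts against the entities' CollisionComponents rather than against ARKit's understanding of the real world, and that may be the relevant query in .nonAR mode:

import RealityKit
import UIKit

func loadModel(into arView: ARView, url: URL) throws -> ModelEntity {
    let entity = try ModelEntity.loadModel(contentsOf: url)
    // Build collision shapes for the whole hierarchy so hit-testing can find it.
    entity.generateCollisionShapes(recursive: true)
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
    return entity
}

func queryModel(in arView: ARView, at point: CGPoint) {
    // raycast(from:allowing:alignment:) queries ARKit scene geometry (planes / mesh),
    // so it likely has nothing to hit when the camera runs in .nonAR mode.
    // hitTest(_:query:mask:) casts against the entities' collision shapes instead.
    if let hit = arView.hitTest(point, query: .nearest, mask: .all).first {
        print("Hit \(hit.entity.name) at \(hit.position)") // world-space position
    }
}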
Replies: 1 · Boosts: 0 · Views: 543 · Jun ’24
Applying post-processing to SceneKit's Scene and saving it to a USDZ file
I am fairly new to 3D model rendering and do not know where to start. I am trying, ideally with ARKit & RealityKit or SceneKit, to scan an environment. This includes:
- Applying realistic textures to the model.
- Being able to save it as a .usdz file (so it can be opened within the App itself).
- Once it is saved, doing post-processing measurements within the model.
I would prefer to accomplish this using a mesh instead of the point cloud used in Apple's sample project. Would this be doable with Apple's APIs on a mobile device, or would a third-party program be necessary? I have managed to create a USDZ file using SceneKit's scene.write(to:options:delegate:progressHandler:) method. However, the saved file is a "single object", and it is not possible to use raycasting on it to do post-processing measurements in the model.
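
For context, the export path being referred to looks roughly like this (the file name and wrapper function are placeholders, not taken from the original post):

import SceneKit

func exportScene(_ scene: SCNScene, to directory: URL) -> URL? {
    let url = directory.appendingPathComponent("scan.usdz") // placeholder file name
    // write(to:options:delegate:progressHandler:) picks the output format from the extension.
    let ok = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
    return ok ? url : nil
}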
Replies: 0 · Boosts: 0 · Views: 593 · May ’24
Displaying captured image while device orientation is locked
Hello, I am creating an AR application that can be used on iPad and iPhone. While it is used on iPhone, I lock the App's orientation to portrait mode. When I take a snapshot of the ARView while holding the iPhone in landscape orientation and then display the captured image in a gallery / Image view, the image is always rotated by 90°. Similarly, when the iPhone is held upside down, the displayed image appears upside down. Is there a way to make sure the image is displayed properly?
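
Not an authoritative fix, but one workaround sketch is to record the physical device orientation at capture time and re-tag the snapshot with a matching UIImage.Orientation before displaying it. The function below is hypothetical, and its case mapping is a guess that may need adjusting for a particular capture path:

import UIKit

// Sketch: the interface stays locked to portrait, so compensate by tagging the snapshot
// with an orientation derived from how the device was physically held.
func reoriented(_ image: UIImage, capturedWith deviceOrientation: UIDeviceOrientation) -> UIImage {
    guard let cgImage = image.cgImage else { return image }
    let imageOrientation: UIImage.Orientation
    switch deviceOrientation {
    case .landscapeLeft:      imageOrientation = .right  // adjust these mappings if the
    case .landscapeRight:     imageOrientation = .left   // result comes out mirrored
    case .portraitUpsideDown: imageOrientation = .down
    default:                  imageOrientation = .up
    }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: imageOrientation)
}

The device orientation would be read from UIDevice.current.orientation at the moment arView.snapshot(saveToHDR:completion:) delivers its image.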
Replies: 0 · Boosts: 0 · Views: 485 · Nov ’23
Reconstructing 3D data points from SceneDepth
Hello, I am trying to add a feature to my App that lets the user take a picture, open the image, and, by tapping on the screen, measure a linear distance on the image. According to this thread, by saving the cameraIntrinsicsInversed matrix and the localToWorld matrix, I should be able to compute the 3D data points from the location tapped on the screen and the depth from the SceneDepth API. However, I can't seem to find a formula using those parameters that gives me the data I am looking for. Any help is appreciated! Thank you!
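
For what it is worth, the unprojection used in Apple's point-cloud sample boils down to the following. This is a hedged Swift sketch, assuming the tapped location has already been converted into capturedImage pixel coordinates and the matching depth value has been read from the depth map:

import simd

// Sketch: unproject a pixel (in capturedImage coordinates) plus its metric depth into world space.
// cameraIntrinsicsInversed and localToWorld are the matrices saved from the ARFrame.
func worldPoint(pixelX: Float, pixelY: Float, depth: Float,
                cameraIntrinsicsInversed: simd_float3x3,
                localToWorld: simd_float4x4) -> SIMD3<Float> {
    // Back-project through the intrinsics and scale by the depth…
    let localPoint = cameraIntrinsicsInversed * SIMD3<Float>(pixelX, pixelY, 1) * depth
    // …then move from camera space into world space.
    let world = localToWorld * SIMD4<Float>(localPoint, 1)
    return SIMD3<Float>(world.x, world.y, world.z) / world.w
}

The linear distance is then simply simd_distance between two such world points.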
Replies: 0 · Boosts: 0 · Views: 508 · Sep ’23
Accessing vertex data from an MTKView
I am drawing an image in an MTKView using a Metal shader based on the 'pointCloudVertexShader' from this sample code. The image can be moved with a drag gesture in the MTKView, similarly to the 'MetalPointCloud' view in the sample code. I want to implement a UITapGestureRecognizer that, when the MTKView is tapped, returns the 'vecout' value (from 'pointCloudVertexShader') for the tapped location (see the code below).

// Calculate the vertex's world coordinates.
float xrw = ((int)pos.x - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0];
float yrw = ((int)pos.y - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1];
float4 xyzw = { xrw, yrw, depth, 1.f };
// Project the coordinates to the view.
float4 vecout = viewMatrix * xyzw;

The 'vecout' variable is computed in the Metal vertex shader; it would be associated with a coloured pixel. My idea is to use the 'didTap' method to evaluate the desired pixel data (3D coordinates).

@objc func didTap(_ gesture: UITapGestureRecognizer) {
    let location = gesture.location(in: mtkView)
    // Use the location to get the pixel value.
}

Is there any way to get this value directly from the MTKView? Thanks in advance!
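
The MTKView itself does not expose the shader's per-vertex output, so one approach is to redo the same arithmetic on the CPU for the tapped texel. Below is a rough sketch under the assumption that a CPU-readable copy of the depth map (depthValues, row-major, depthWidth texels per row) and the same cameraIntrinsics and viewMatrix used by the shader are kept around; those names are assumptions, not part of the sample code:

import simd

// Sketch: reproduce the vertex shader's math on the CPU for a tapped point.
// Assumes the tap has already been mapped into depth-map coordinates (depthX, depthY).
func cameraSpacePoint(depthX: Int, depthY: Int,
                      depthValues: [Float], depthWidth: Int,
                      cameraIntrinsics: simd_float3x3,
                      viewMatrix: simd_float4x4) -> SIMD4<Float> {
    let depth = depthValues[depthY * depthWidth + depthX]
    // Same formula as the vertex shader.
    let xrw = (Float(depthX) - cameraIntrinsics[2][0]) * depth / cameraIntrinsics[0][0]
    let yrw = (Float(depthY) - cameraIntrinsics[2][1]) * depth / cameraIntrinsics[1][1]
    let xyzw = SIMD4<Float>(xrw, yrw, depth, 1)
    return viewMatrix * xyzw   // the shader's 'vecout'
}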
Replies: 0 · Boosts: 0 · Views: 663 · Jul ’23
ARView rotation animation changes when coming back to it from a NavigationLink
I have an app that uses RealityKit and ARKit and includes some capture features (capturing an image with added Entities). I have a NavigationLink that lets the user see a gallery of the images they have taken. When the App is launched, the rotation animation of the ARView happens smoothly: the navigation bar transitions from one orientation to the other while the ARView keeps its orientation. However, after I go to the gallery view to see the images and then come back to the root view containing the ARView, the rotation animation of the ARView changes: when transitioning from one orientation to another, the ARView is flipped by 90° before transitioning to the new orientation. The issue is shown in this gif (https://i.stack.imgur.com/IOvCx.gif). Any idea why this happens and how I could resolve it without locking the App's orientation changes? Thanks!
Replies: 0 · Boosts: 0 · Views: 655 · Jul ’23
Interacting with Metal PointCloud
I am trying to build an App that allows the user to interact with an MTKView and the point cloud drawn in it (tap in the view to get the 3D coordinates of the tapped location), using SwiftUI. Based on the Displaying a Point Cloud Using Scene Depth sample App, I managed to save a frame and the data needed to draw the captured image in the MTKView. From what I read in this post, I need to store a given set of information (depthMap / capturedImage / ...) to be able to access the 3D data of each point. I am not sure I fully understand how to do this. I suppose that, by combining the cameraIntrinsicsInversed matrix, the localToWorld matrix, and the depthMap, I can recreate a 3D map of the captured image? Once the 3D map is created, I would have a 256x192 matrix that maps each pixel to its 3D coordinates. That would mean that, when tapping in the MTKView, I would have to fetch the coordinates for the pressed location, which can then be shown to the user.

However, the MTKView is much bigger than the drawn image. On top of that, I would expect the drawn image to be either 256x192 pixels or a ratio of the capturedImage over the depthMap, which is not the case. Is there a way to fit the drawn image to the whole MTKView? Do I need to set a frame for the MTKView? If so, what size should it be, since I cannot seem to find the size of the drawn image? Is my train of thought correct, or am I missing some information to make this possible? Any help would be much appreciated!
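
As a rough illustration of the coordinate mapping in question, here is a sketch that maps a tap in the MTKView's coordinate space to a texel of the 256x192 depth map, assuming the image is drawn aspect-fit and centered in the view (the function and its parameters are placeholders):

import CoreGraphics

// Sketch: map a tap location in MTKView points to a depth-map texel, assuming the
// captured image is drawn aspect-fit and centered inside the view.
func depthMapCoordinate(for tap: CGPoint, viewSize: CGSize,
                        depthMapSize: CGSize = CGSize(width: 256, height: 192)) -> (x: Int, y: Int)? {
    // Rect actually covered by the aspect-fit image inside the view.
    let scale = min(viewSize.width / depthMapSize.width, viewSize.height / depthMapSize.height)
    let drawnSize = CGSize(width: depthMapSize.width * scale, height: depthMapSize.height * scale)
    let origin = CGPoint(x: (viewSize.width - drawnSize.width) / 2,
                         y: (viewSize.height - drawnSize.height) / 2)
    // Normalize the tap into the drawn image, then scale to depth-map texels.
    let u = (tap.x - origin.x) / drawnSize.width
    let v = (tap.y - origin.y) / drawnSize.height
    guard (0...1).contains(u), (0...1).contains(v) else { return nil } // tap outside the image
    return (Int(u * (depthMapSize.width - 1)), Int(v * (depthMapSize.height - 1)))
}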
Replies: 0 · Boosts: 0 · Views: 712 · Jun ’23
Access plane anchor relative to plane surface
I am using SwiftUI/ARKit/RealityKit to perform some live changes in my ARView. The idea is to draw planes when the user applies a long press on the view and moves their finger. The drawing of the plane happens in the following way:
1. A sphere is drawn where the user presses for more than 0.5 sec.
2. When the user moves their finger, a second sphere follows the finger.
3. While the finger moves, a plane is drawn with its center positioned between the spheres from steps 1 and 2, using the code below.
4. When the user lifts their finger, steps 2 and 3 are performed once more to finalise the position of the end sphere and the plane.

My issue is that the drawn plane's normal vector (and probably the spheres' as well) is tilted by an angle compared to the surface's actual normal vector. When showing the anchor origins in the ARView's debug options, there is always an anchor origin that corresponds to the touched surface and whose axes are collinear with gravity and the surface normal. Is there a way to access this anchor origin so that I can project the device's world anchor onto it? Below is the code I am using to draw the surface.

let location = sender.location(in: parent.arView)
var position = SIMD3<Float>()
let raycastResults = parent.arView.raycast(from: location, allowing: .existingPlaneGeometry, alignment: .vertical)
if let raycastResult = raycastResults.first {
    position = SIMD3(x: raycastResult.worldTransform.columns.3.x,
                     y: raycastResult.worldTransform.columns.3.y,
                     z: raycastResult.worldTransform.columns.3.z)
    let anchorDistance = distance(position, self.parent.arView.cameraTransform.translation)
    switch sender.state {
    case .began:
        clearAllAnchors()
        let startSphereModel = sphere(radius: anchorDistance * 0.005, color: .red)
        let startSphereAnchorEntity = AnchorEntity(world: raycastResult.worldTransform)
        startSphereAnchorEntity.addChild(startSphereModel)
        self.currentAnchor.append(startSphereAnchorEntity)
        parent.arView.scene.addAnchor(startSphereAnchorEntity)
        startPosition = position
        startSphere = startSphereModel
    case .changed:
        clearAnchors(tempAnchor)
        let tempEndSphereModel = sphere(radius: anchorDistance * 0.005, color: .red)
        guard let anchor = raycastResult.anchor as? ARPlaneAnchor else { return }
        let rectangle = computeRectangleParameters(position: position, anchor: anchor)
        let width = rectangle.width
        let height = rectangle.height
        let midPosition = (startPosition + position) / 2
        let planeMesh = MeshResource.generatePlane(width: width, height: height)
        let tempPlaneModel = ModelEntity(mesh: planeMesh,
                                         materials: [SimpleMaterial(color: UIColor(red: 1, green: 0, blue: 0, alpha: 0.4),
                                                                    isMetallic: false)])
        let tempPlaneAnchorEntity = AnchorEntity(world: midPosition)
        let tempEndSphereAnchorEntity = AnchorEntity(world: raycastResult.worldTransform)
        tempEndSphereAnchorEntity.addChild(tempEndSphereModel)
        tempPlaneAnchorEntity.addChild(tempPlaneModel)
        self.tempAnchor.append(tempPlaneAnchorEntity)
        self.tempAnchor.append(tempEndSphereAnchorEntity)
        parent.arView.scene.addAnchor(tempEndSphereAnchorEntity)
        parent.arView.scene.addAnchor(tempPlaneAnchorEntity)
    // .ended state not shown here – it is pretty similar to .changed.
    default:
        break
    }
}

I have not posted the .ended state since it is pretty similar to the .changed state. On top of that, the plane dimensions are actually not correct as the code stands now (the plane does not expand in the right directions), but this is easily resolvable once I manage to get access to the plane's "intrinsic" anchor.
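
Not a definitive answer, but one way to obtain an origin that is aligned with the detected surface is to use the ARPlaneAnchor returned by the raycast: its local coordinate system has the Y axis along the surface normal, and an AnchorEntity created from it follows that orientation. A hedged sketch:

import ARKit
import RealityKit

// Sketch: anchor the drawn content to the detected plane itself, so its axes
// follow the plane's own coordinate system (Y axis = surface normal).
func planeAlignedAnchor(for raycastResult: ARRaycastResult) -> AnchorEntity? {
    guard let planeAnchor = raycastResult.anchor as? ARPlaneAnchor else { return nil }
    // AnchorEntity(anchor:) tracks the ARAnchor, so children inherit its orientation.
    // Alternatively, the raw transform is available as Transform(matrix: planeAnchor.transform).
    return AnchorEntity(anchor: planeAnchor)
}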
Replies: 0 · Boosts: 0 · Views: 725 · Mar ’23
Saving Mesh on iOS/iPadOS
I am pretty new to ARKit/RealityKit and am trying to build an app that would, on an iPad or iPhone with a LiDAR sensor, save the mesh data of the scanned environment to a 3D document that would later be used to make measurements as a post-processing step. For now I am unsure how to start saving the mesh that is visible in debugging. I am thinking about saving the ARMeshAnchors and their corresponding geometries and maybe converting the whole list into a USDZ (or similar) file. Would this be the right approach, or am I going completely the wrong way? I would appreciate any help! Thanks in advance.
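
One commonly suggested route (not necessarily the only one, and not an official Apple recipe) is to convert each ARMeshAnchor's geometry into an MDLMesh via ModelIO and export the resulting MDLAsset to a format ModelIO can write, such as .obj; USDZ itself may need an extra conversion step. A rough sketch that ignores normals and per-anchor transforms:

import ARKit
import Metal
import MetalKit
import ModelIO

// Sketch: turn ARMeshAnchors into an MDLAsset and export it to a file.
// Vertex positions are left in each anchor's local space here; a complete exporter
// would also apply anchor.transform so the pieces line up in world space.
func export(meshAnchors: [ARMeshAnchor], to url: URL, device: MTLDevice) throws {
    let allocator = MTKMeshBufferAllocator(device: device)
    let asset = MDLAsset(bufferAllocator: allocator)

    for anchor in meshAnchors {
        let geometry = anchor.geometry
        let vertices = geometry.vertices
        let faces = geometry.faces

        // Copy the vertex positions out of the Metal buffer.
        let vertexData = Data(bytes: vertices.buffer.contents().advanced(by: vertices.offset),
                              count: vertices.stride * vertices.count)
        let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

        // Copy the triangle indices.
        let indexCount = faces.count * faces.indexCountPerPrimitive
        let indexData = Data(bytes: faces.buffer.contents(),
                             count: faces.bytesPerIndex * indexCount)
        let indexBuffer = allocator.newBuffer(with: indexData, type: .index)
        let submesh = MDLSubmesh(indexBuffer: indexBuffer, indexCount: indexCount,
                                 indexType: .uInt32, geometryType: .triangles, material: nil)

        // Describe the vertex layout (float3 positions only).
        let descriptor = MDLVertexDescriptor()
        descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                      format: .float3, offset: 0, bufferIndex: 0)
        descriptor.layouts[0] = MDLVertexBufferLayout(stride: vertices.stride)

        let mesh = MDLMesh(vertexBuffer: vertexBuffer, vertexCount: vertices.count,
                           descriptor: descriptor, submeshes: [submesh])
        asset.add(mesh)
    }
    try asset.export(to: url) // choose an extension ModelIO can export, e.g. .obj
}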
Replies: 0 · Boosts: 0 · Views: 1k · Mar ’23