Reply to Access to Raw Lidar point cloud
The numbers 576, 256x192, 60 Hz, etc., and the steps of data processing come from:

WWDC 2020 video, Scene Geometry (15m42s): https://developer.apple.com/kr/videos/play/wwdc2020/10611/
What kind of LiDAR cameras does Apple use: https://developer.apple.com/forums/thread/692724?answerId=692054022#692054022

The steps of data processing:
1. The 576 laser distance points are the originals.
2. Interpolation of the 576 points with the RGB image produces the depthMap (reading sketch below).
3. MeshAnchor is generated from the depthMap.

We expect a measurement accuracy of 2-3 mm in the 576 points, but 10-15 mm in the vertex points of MeshAnchor. We therefore prefer the 576 points to the vertex points of MeshAnchor. Both are sparse.
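For reference, a minimal ARKit sketch of step 2's output on iOS, reading the interpolated 256x192 depthMap (the class name is hypothetical; this is a sketch, not CurvSurf's code):

```swift
import ARKit

// Sketch: reads the interpolated 256x192 depthMap (step 2). The raw 576 points have no API.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)   // LiDAR-derived depth, updated at 60 Hz
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let depthMap = depth.depthMap               // CVPixelBuffer, 256x192, Float32 meters
        print("depthMap \(CVPixelBufferGetWidth(depthMap))x\(CVPixelBufferGetHeight(depthMap))")
    }
}
```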
1w
Reply to Prevent Window (or Volume) Mouse Focus
User intent changes quickly and is sometimes misinterpreted, e.g. by mouse, eye, device or hand tracking. Recently we added an ad-hoc solution with confirmation dialog to our visionOS app. https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time However, the better solution would be to consider the speed and acceleration of the mouse, eye, device or hand tracking.
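As a rough illustration of the ad-hoc approach (SwiftUI; names are hypothetical and not taken from the linked repo):

```swift
import SwiftUI

// Sketch: gate an action behind a confirmation so a misread gaze/tap does not trigger it.
struct ConfirmedActionView: View {
    @State private var showConfirm = false

    var body: some View {
        Button("Detect surface here") {
            showConfirm = true          // do not act on the possibly unintended input yet
        }
        .confirmationDialog("Run surface detection at this point?",
                            isPresented: $showConfirm,
                            titleVisibility: .visible) {
            Button("Detect") { /* run the detection */ }
            Button("Cancel", role: .cancel) { }
        }
    }
}
```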
1w
Reply to Access to Raw Lidar point cloud
Feedback FB15735753 is filed. Any data processing (in HW or SW) irreversibly loses some of the original information.

The data processing steps:
1. Acquisition of 'sparse' 576 raw LiDAR distance points, even in dark lighting (no API; R1 chip inside?).
2. Interpolation of the 576 distance points with the RGB image, producing a 'dense' 256x192 depthMap image at 60 Hz (API in iOS).
3. Generating and updating a 'sparse' MeshAnchor at about 2 Hz from the depthMap (API in iOS and visionOS; see the sketch below).

Review of the data processing:
- The 576 raw LiDAR distance points are the originals.
- Object edges and textures cause artifacts in the depthMap image.
- Low lighting conditions further degrade the original information.
- Data density goes sparse -> dense -> sparse.

In summary, the 576 raw LiDAR distance points are preferable to MeshAnchor.
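For step 3, a minimal visionOS sketch of receiving MeshAnchor updates (a plain ARKitSession/SceneReconstructionProvider setup; not CurvSurf's app code):

```swift
import ARKit

// Sketch: observes the 'sparse' MeshAnchor updates (~2 Hz) derived from the depthMap.
func observeMeshAnchors() async throws {
    guard SceneReconstructionProvider.isSupported else { return }
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor
        // These vertices are what downstream surface fitting sees:
        // expect ~10-15 mm accuracy here versus ~2-3 mm in the 576 raw points.
        print("MeshAnchor \(update.event): \(anchor.geometry.vertices.count) vertices")
    }
}
```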
1w
Reply to [Reality Composer Pro] Is it possible to play video over a specific mesh like a material?
We are currently developing a set of visionOS apps that detect and measure object surface geometries from MeshAnchor: https://github.com/CurvSurf/FindSurface-visionOS

- FindSurfaceST (Spatial Tap): object surface detection by spatial tap
- FindSurfaceRR (Response-to-Request): autonomous object surface detection
- FindSurfaceRT (Real-Time): real-time object surface detection
- FindSurfaceAD (Ads): rendering photos/videos on detected object surfaces

The corresponding iOS app is here: https://github.com/CurvSurf/FindSurface-SceneKit-ARDemo-iOS

The source code of the FindSurfaceAD app is planned to be released in December 2024. Photos/videos are planned to be selectable from the Photos app. Please keep watching GitHub CurvSurf.
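To the thread's original question: video can be applied to a mesh with RealityKit's VideoMaterial. A minimal sketch (using a plain generated plane; FindSurfaceAD instead uses the detected surface geometry):

```swift
import RealityKit
import AVFoundation

// Sketch: plays a video as the material of a specific mesh.
func makeVideoEntity(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)

    // Any MeshResource works; a detected plane's extent could be used instead.
    let mesh = MeshResource.generatePlane(width: 1.0, depth: 0.6)
    let entity = ModelEntity(mesh: mesh, materials: [material])

    player.play()
    return entity
}
```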
2w
Reply to Geometry recognition and measurement from MeshAnchor
The source code of the real-time app is available: https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time

This is a minimized and optimized version of https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request

The speed of object surface extraction and measurement:
- Plane: 600 Hz
- Sphere/cylinder: 300 Hz
- Cone/torus: 100 Hz
2w
Reply to Object Occlusion in Non LiDAR devices
The FindSurfaceFramework for iOS basically requires a point set generated by a scanner (e.g. LiDAR) or even collected manually.

How to collect rawFeaturePoints from ARKit:
https://github.com/CurvSurf/ARKitPointCloudRecorder
https://developer.apple.com/documentation/arkit/arframe/2887449-rawfeaturepoints

Once a point set is prepared, it can be fed to FindSurfaceFramework. The following videos demonstrate object occlusion by detecting and measuring object geometry from the provided point set:
- 3-D Augmented Reality - Apple ARKit (2018): https://youtu.be/FzdrxtPQzfA
- Lee JaeHyo Gallery Ball Park - ARKit (2019): https://youtu.be/QhBtGHmfBOg
- Apple ARKit: Occlusion Tree Trunk (2019): https://youtu.be/rGW-FtA6P1Q
- Apple ARKit: Augmented Reality Based on Curved Object Surfaces (2019): https://youtu.be/4U4FlavRKa4
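A minimal sketch of accumulating rawFeaturePoints on a non-LiDAR device (an ordinary ARSessionDelegate; the linked ARKitPointCloudRecorder does this more completely, including saving the points):

```swift
import ARKit

// Sketch: accumulates ARKit's sparse feature points across frames into a point set
// that can then be fed to a surface-fitting step.
final class PointCloudCollector: NSObject, ARSessionDelegate {
    private(set) var points: [simd_float3] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let featurePoints = frame.rawFeaturePoints else { return }
        points.append(contentsOf: featurePoints.points)   // world-space positions
    }
}
```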
2w
Reply to Anchor Wall
Given:
- Width/height ratio of the portal
- PlaneAnchors of the wall, floor, and ceiling

Goal: attaching the portal to the wall.

Method (see the sketch after this list):
1. Take the ray (6DoF pose) from DeviceAnchor, HandAnchor, or eye tracking via spatial tap.
2. Ray-cast that ray onto the PlaneAnchor of the wall.
3. The initial position of the portal is the ray-cast point; the normal vector of the portal is the normal vector of the wall.
4. Move the center of the portal to the point on the wall midway between the floor and the ceiling.
5. Set the height of the portal to the vertical distance between the floor and the ceiling (the width follows from the given ratio).

Hope this is helpful.
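A minimal geometry sketch of steps 2-5, assuming a y-up world frame (plain simd math; the function names are illustrative, not from any ARKit API):

```swift
import simd

// Intersects a ray with a plane given by a point on the plane and its normal.
// Returns nil when the ray is (nearly) parallel to the plane or points away from it.
func rayPlaneIntersection(rayOrigin: SIMD3<Float>, rayDirection: SIMD3<Float>,
                          planePoint: SIMD3<Float>, planeNormal: SIMD3<Float>) -> SIMD3<Float>? {
    let denom = dot(planeNormal, rayDirection)
    guard abs(denom) > 1e-6 else { return nil }
    let t = dot(planeNormal, planePoint - rayOrigin) / denom
    guard t >= 0 else { return nil }
    return rayOrigin + t * rayDirection
}

// Places the portal: the hit point gives the horizontal position on the wall,
// the vertical position and height come from the floor/ceiling heights (y-up assumed).
func portalPlacement(hit: SIMD3<Float>, floorY: Float, ceilingY: Float,
                     widthHeightRatio: Float) -> (center: SIMD3<Float>, width: Float, height: Float) {
    let height = ceilingY - floorY
    let center = SIMD3<Float>(hit.x, (floorY + ceilingY) / 2, hit.z)
    return (center, widthHeightRatio * height, height)
}
```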
Oct ’24
Reply to Geometry recognition and measurement from MeshAnchor
There was a delay in rendering the mesh triangle in the line of sight (ray-casting), so we tried to use the "LowLevelMesh" of visionOS 2.0. The problem is solved. We plan to apply "LowLevelMesh" to all mesh rendering, such as the view-triangle, MeshAnchors, and object surface meshes.

Current speed of object surface extraction:
- Plane: 400 Hz
- Sphere/cylinder/cone/torus: 200 Hz

The source code of the app is available: https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
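For context, a minimal LowLevelMesh sketch along the lines of Apple's documentation example: a single triangle whose vertex buffer can be rewritten in place each frame, instead of rebuilding a MeshResource (treat as a sketch of the technique, not CurvSurf's implementation):

```swift
import RealityKit
import Metal

struct TriangleVertex {
    var position: SIMD3<Float> = .zero
}

// Sketch: a LowLevelMesh holding one triangle (e.g. the view-triangle highlight).
func makeTriangleMesh() throws -> LowLevelMesh {
    var desc = LowLevelMesh.Descriptor()
    desc.vertexAttributes = [
        .init(semantic: .position, format: .float3,
              offset: MemoryLayout<TriangleVertex>.offset(of: \.position)!)
    ]
    desc.vertexLayouts = [
        .init(bufferIndex: 0, bufferStride: MemoryLayout<TriangleVertex>.stride)
    ]
    desc.vertexCapacity = 3
    desc.indexCapacity = 3

    let mesh = try LowLevelMesh(descriptor: desc)
    mesh.withUnsafeMutableIndices { rawIndices in
        let indices = rawIndices.bindMemory(to: UInt32.self)
        indices[0] = 0; indices[1] = 1; indices[2] = 2
    }
    mesh.parts.replaceAll([
        LowLevelMesh.Part(indexCount: 3, topology: .triangle,
                          bounds: BoundingBox(min: [-1, -1, -1], max: [1, 1, 1]))
    ])
    return mesh
}

// Per-frame update: only the vertex buffer is rewritten; no new MeshResource is built.
func update(_ mesh: LowLevelMesh, corners: [SIMD3<Float>]) {
    mesh.withUnsafeMutableBytes(bufferIndex: 0) { rawBytes in
        let vertices = rawBytes.bindMemory(to: TriangleVertex.self)
        for i in 0..<3 { vertices[i].position = corners[i] }
    }
}
```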
Oct ’24