This may be helpful: VideoMaterial (RealityKit).
The source code of the FindSurfaceAD app is now accessible on GitHub CurvSurf. Photos/videos are imported from the Photos app.
The FindSurfaceAD app's source code for Apple Vision Pro is now accessible on GitHub CurvSurf. This innovative app excels in detecting and quantifying object geometries, such as planes, spheres, cylinders, cones, and tori, from the meshes of object surfaces. Users can seamlessly view and play images (PNG) or videos (MP4) from the Photos app directly on these object geometries.
Check out the GitHub repository here: GitHub Link
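For readers curious how video playback on a detected surface might look in code, here is a minimal, hypothetical RealityKit sketch (not taken from the FindSurfaceAD source). It assumes a plane has already been detected and simply textures a plane entity with a VideoMaterial backed by an AVPlayer; the URL and dimensions are placeholders.

```swift
import RealityKit
import AVFoundation

// Hypothetical sketch: play an MP4 on a plane entity sized to a detected surface.
// The video URL and plane dimensions are placeholders, not values from the FindSurfaceAD app.
func makeVideoPlane(videoURL: URL, width: Float, depth: Float) -> ModelEntity {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)   // RealityKit video-backed material
    let plane = ModelEntity(mesh: .generatePlane(width: width, depth: depth),
                            materials: [material])
    player.play()
    return plane
}
```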
FindSurface is revolutionizing the way we detect and measure geometries of real object surfaces, enabling the display of virtual ads on these surfaces. Check out the YouTube video "FindSurface AD Demo 2 - Apple Vision Pro" for a firsthand look at our innovative technology.
Exciting news for developers! The AVP app source code will soon be accessible on GitHub CurvSurf starting the week of December 16, 2024. Stay tuned for this valuable resource to further explore our spatial computing capabilities: GitHub Link.
Join us in unlocking the potential of spatial computing with FindSurface.
#spatialcomputing #visionpro #visionos #pointcloud #realtime
Q:
Why does CurvSurf FindSurface use only the vertex coordinates and ignore the vertex normals included in MeshAnchor data from ARKit in iOS/visionOS?
A:
The raw data is the best data!
The raw data of 3D measurement is a point cloud.
Meshing decimates and smooths the point cloud.
Smoothing is a signal-processing operation akin to integration.
Even so, traces of the raw information remain at the mesh vertices.
Calculating vertex normals is a signal-processing operation akin to differentiation, which generates noise. Smoothing, then noising?
Vertex normals produced through these processes are of no value!
The raw 3D measurement from Apple's LiDAR camera consists of the 576 distance points!
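To illustrate "vertex coordinates only", here is a minimal visionOS sketch (ours, not code from the FindSurface repositories) that reads just the vertex positions of a MeshAnchor and ignores its normals. It assumes the packed three-Float32 vertex layout that ARKit's GeometrySource typically provides.

```swift
import ARKit
import simd

// Minimal sketch: collect world-space vertex positions from a MeshAnchor,
// ignoring anchor.geometry.normals entirely (assumes a packed .float3 vertex format).
func vertexPoints(of anchor: MeshAnchor) -> [SIMD3<Float>] {
    let vertices = anchor.geometry.vertices
    let base = vertices.buffer.contents().advanced(by: vertices.offset)
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(vertices.count)
    for index in 0..<vertices.count {
        let v = base.advanced(by: index * vertices.stride)
                    .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        // Transform from the anchor's local frame to the world frame.
        let world = anchor.originFromAnchorTransform * SIMD4<Float>(v.0, v.1, v.2, 1)
        points.append(SIMD3<Float>(world.x, world.y, world.z))
    }
    return points
}
```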
NOTE: With the recent visionOS 2.1 update, we noticed significant changes in the data provided by MeshAnchor on Apple Vision Pro (these observations are based on our own analysis through our app and are not mentioned in Apple's patch notes). Based on that analysis, we recommend the following adjustments to the optimal parameter set of the FindSurface framework for visionOS.
[GitHub Link] https://github.com/CurvSurf/FindSurface-visionOS#optimal-parameter-set-for-apple-vision-pro
How about trying:
Web Demo:
https://developers.curvsurf.com/WebDemo/
iOS app (iPhone Pro, iPad Pro):
https://github.com/CurvSurf/FindSurface-iOS
visionOS app (Vision Pro):
https://github.com/CurvSurf/FindSurface-visionOS
The source code of the visionOS apps, which process the vertex points of MeshAnchor, is available on GitHub CurvSurf.
https://github.com/CurvSurf/FindSurface-visionOS
The numbers 576, 256x192, 60 Hz, ..., and the steps of data processing:
WWDC 2020 Video, Scene Geometry (15m42s):
https://developer.apple.com/kr/videos/play/wwdc2020/10611/
What kind of LiDAR camera does Apple use:
https://developer.apple.com/forums/thread/692724?answerId=692054022#692054022
The steps of data processing:
The 576 laser distance points are the originals.
Interpolation of the 576 points with the RGB image produces the depthMap.
MeshAnchor is generated from the depthMap.
We expect a measurement accuracy of 2-3 mm for the 576 points, but 10-15 mm for the vertex points of MeshAnchor.
We prefer the 576 points to the vertex points of MeshAnchor. Both are sparse.
We are interested in real-time detection and measurement of object geometries in 3D points.
We hope that interface issues will be solved at iOS or VisionOS level.
User intent changes quickly and is sometimes misinterpreted, e.g. by mouse, eye, device, or hand tracking. Recently we added an ad-hoc solution with a confirmation dialog to our visionOS app (a sketch of this kind of confirmation step follows the link below).
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time
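The confirmation step mentioned above could look roughly like the following SwiftUI sketch; the view and callback names are illustrative and are not taken from the linked repository.

```swift
import SwiftUI

// Hedged sketch of a confirmation step before committing a detected surface.
struct DetectionConfirmationButton: View {
    @State private var showConfirmation = false
    var onConfirm: () -> Void = {}

    var body: some View {
        Button("Detect surface here") { showConfirmation = true }
            .confirmationDialog("Use this detected surface?", isPresented: $showConfirmation) {
                Button("Confirm") { onConfirm() }
                Button("Cancel", role: .cancel) {}
            }
    }
}
```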
However, a better solution would be to consider the speed and acceleration of the mouse, eye, device, or hand tracking, as sketched below.
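One way to factor in tracking speed, sketched here as an assumption rather than an implemented feature: estimate the angular speed of the user-intention ray between updates and accept the intent only while the ray is nearly still.

```swift
import Foundation
import simd

// Hedged sketch (illustrative names, not from the CurvSurf repos): accept the
// user intent only while the intention ray moves slowly, filtering fast sweeps.
struct RayIntentFilter {
    private var previousDirection: SIMD3<Float>?
    private var previousTime: TimeInterval?
    var maxAngularSpeed = 0.2   // radians per second; placeholder threshold

    mutating func isIntentional(direction: SIMD3<Float>, at time: TimeInterval) -> Bool {
        defer {
            previousDirection = direction
            previousTime = time
        }
        guard let prevDir = previousDirection, let prevTime = previousTime, time > prevTime
        else { return false }
        // Angle between the previous and current ray directions, per unit time.
        let cosAngle = Double(max(-1, min(1, simd_dot(simd_normalize(prevDir),
                                                      simd_normalize(direction)))))
        let angularSpeed = acos(cosAngle) / (time - prevTime)
        return angularSpeed < maxAngularSpeed
    }
}
```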
Feedback FB15735753 has been filed.
Any data processing (in hardware or software) irreversibly loses some of the original information.
The data processing steps:
Acquisition of the ‘sparse’ 576 raw LiDAR distance points, even in dark lighting (no API; R1 chip inside?)
Interpolation of the 576 distance points with the RGB image, producing a ‘dense’ 256x192 depthMap image at 60 Hz (API in iOS; see the sketch after this list)
Generating and updating a ‘sparse’ MeshAnchor at about 2 Hz from the depthMap (API in iOS and visionOS).
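On iOS, the depthMap of step 2 is exposed through ARKit's scene-depth frame semantics. A minimal sketch of enabling and reading it (the class name is illustrative and not from the CurvSurf repositories):

```swift
import ARKit

// Minimal iOS sketch: enable and read the LiDAR-derived 256x192 depthMap
// that ARKit interpolates from the raw distance points.
final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .sceneDepth       // requires a LiDAR-equipped device
        configuration.sceneReconstruction = .mesh        // ARMeshAnchor updates (~2 Hz)
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // Float32 depth values in meters, delivered with each camera frame.
        let width = CVPixelBufferGetWidth(depthMap)      // 256
        let height = CVPixelBufferGetHeight(depthMap)    // 192
        print("depthMap: \(width)x\(height)")
    }
}
```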
Review of the data processing:
The 576 raw LiDAR distance points are the original data.
Object edges and textures cause artefacts in the depthMap image.
Low lighting conditions cause loss of the original information.
Data density: sparse -> dense -> sparse.
In summary, the 576 raw LiDAR distance points are preferable to MeshAnchor.
We are currently developing a set of visionOS apps that detect and measure object surface geometries from MeshAnchor.
https://github.com/CurvSurf/FindSurface-visionOS
FindSurfaceST (Spatial Tap): object surface detection by spatial tap
FindSurfaceRR (Response-to-Request): autonomous object surface detection
FindSurfaceRT (Real-Time): real-time object surface detection
FindSurfaceAD (Ads): rendering photos/videos on detected object surfaces. The corresponding iOS app is here: https://github.com/CurvSurf/FindSurface-SceneKit-ARDemo-iOS
The source code of the FindSurfaceAD app is planned to be released in December 2024. Photos/videos will be selectable from the Photos app.
Please keep following GitHub CurvSurf.
The source code below includes 'Eye tracking (spatial tap)', 'Device tracking', and 'Hand tracking'.
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Response-to-Request
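As one example of the spatial-tap path, here is a hedged visionOS sketch (illustrative, not code from the linked repository): a spatial tap on RealityView content yields a world-space point that can seed surface detection.

```swift
import SwiftUI
import RealityKit

// Hedged sketch: a spatial tap on RealityView content yields a scene-space seed point.
struct TapToDetectView: View {
    var onTap: (SIMD3<Float>) -> Void = { _ in }

    var body: some View {
        RealityView { content in
            // Add a collision-enabled entity covering the scene mesh here.
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Convert the tap location from the entity's local space to scene space.
                    let seedPoint = value.convert(value.location3D, from: .local, to: .scene)
                    onTap(seedPoint)
                }
        )
    }
}
```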
It's basically a ray-casting problem.
The user-intention ray can come from:
Eye tracking
Device tracking
Hand tracking.
The source code of an example app using device tracking:
https://github.com/CurvSurf/FindSurface-RealityKit-visionOS-Real-Time
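For device tracking on visionOS, the user-intention ray can be derived from the current DeviceAnchor. A minimal sketch (illustrative, not from the linked repository), under the assumption that a WorldTrackingProvider is already running in an ARKitSession:

```swift
import ARKit
import QuartzCore
import simd

// Hedged sketch: build a device-tracking ray from the headset pose.
// Assumes `worldTracking` is a WorldTrackingProvider already running in an ARKitSession.
func deviceRay(from worldTracking: WorldTrackingProvider)
    -> (origin: SIMD3<Float>, direction: SIMD3<Float>)? {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
    else { return nil }
    let m = device.originFromAnchorTransform
    let origin = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    // The device looks along its local -Z axis.
    let forward = -SIMD3<Float>(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    return (origin, simd_normalize(forward))
}
```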