Dug in some more last night and I think you just can't scroll noise in the Reality Composer Pro shader graph right now. Feeding "Texture Coordinates" into a 2D (or 3D) image node (like noise, or even a regular texture) seems to switch it to outputting a single value, and there is no way to turn that back into an image.
However, 'tiling and offset' seems to be the functionality that I want. Unfortunately, only the 'TiledImage' node supports UV tiling and offset. None of the noise nodes support this, and I can't figure out how to make a noise node be tiled or offset by using other nodes.
I did download an image of Perlin noise, and loaded that into a TiledImage node, then offset it using my example above and got similar behavior to what I want.
That results in a preview like this, which is sort of like what I want. I don't like the Sin back-and-forth, but I can probably fix that. The main thing is that it is changing each vertex. Just wish you could UV tile/offset the noise node!
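If you end up driving the offset from code instead of a Time + Sin node, one hedged option (assuming you promote the TiledImage offset to a material input, here called "UVOffset" on a hypothetical "NoiseScroll" material) is to scroll it linearly each frame from RealityKit:

import RealityKit

// Sketch only: assumes the shader graph exposes a "UVOffset" float2 input
// promoted from the TiledImage node's offset. Call this every frame with
// accumulated time (e.g. from a SceneEvents.Update subscription).
func updateNoiseScroll(on entity: ModelEntity, elapsed: Float) {
    guard var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }
    // Scroll steadily in U; tweak 0.05 to change the speed.
    try? material.setParameter(name: "UVOffset",
                               value: .simd2Float(SIMD2<Float>(elapsed * 0.05, 0)))
    // ShaderGraphMaterial is a value type, so write it back for the change to apply.
    entity.model?.materials = [material]
}

Since the offset only ever increases there's no back-and-forth (assuming the noise image you load is tileable).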
As far as I know, there is no third-party app camera access on the Vision Pro. To use ARKit, you have to be in an ImmersiveSpace (vs. the shared space). The data you get from ARKit comes via providers for things like anchors, planes, and hand tracking. You can get meshes of the geometry of the room the Vision Pro is seeing, but no images.
Check out the cube demo from the ARKit WWDC23 videos for good examples of ARKit and hand tracking.
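To make the provider pattern concrete, here is a minimal sketch (visionOS 1 APIs; the view body and entity setup are placeholders) of running an ARKitSession from inside an ImmersiveSpace:

import SwiftUI
import RealityKit
import ARKit

// Sketch only: this view must be presented inside an ImmersiveSpace, and hand
// tracking also needs the user's permission.
struct ImmersiveView: View {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()

    var body: some View {
        RealityView { content in
            // Add your entities here.
        }
        .task {
            do {
                try await session.run([handTracking, sceneReconstruction])
                // You get anchor updates from the providers, never camera frames.
                for await update in handTracking.anchorUpdates {
                    let hand = update.anchor
                    print(hand.chirality, hand.originFromAnchorTransform)
                }
            } catch {
                print("ARKitSession error:", error)
            }
        }
    }
}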
Yeaaaah! I just figured this out today and posted on an older thread about it. The key for me was finding this repo: https://github.com/XRealityZone/what-vision-os-can-do/blob/main/WhatVisionOSCanDo/ShowCase/WorldScening/WorldSceningTrackingModel.swift#L70
Important code:
@MainActor fileprivate func generateModelEntity(geometry: MeshAnchor.Geometry) async throws -> ModelEntity {
    // Generate the mesh. (asSIMD3(ofType:) is a helper extension from that project, not a built-in API.)
    var desc = MeshDescriptor()
    let posValues = geometry.vertices.asSIMD3(ofType: Float.self)
    desc.positions = .init(posValues)
    let normalValues = geometry.normals.asSIMD3(ofType: Float.self)
    desc.normals = .init(normalValues)
    do {
        desc.primitives = .polygons(
            // They should all be triangles, so just write 3 here.
            (0..<geometry.faces.count).map { _ in UInt8(3) },
            (0..<geometry.faces.count * 3).map {
                geometry.faces.buffer.contents()
                    .advanced(by: $0 * geometry.faces.bytesPerIndex)
                    .assumingMemoryBound(to: UInt32.self).pointee
            }
        )
    }
    let meshResource = try await MeshResource.generate(from: [desc])
    let material = SimpleMaterial(color: .red, isMetallic: false)
    let modelEntity = ModelEntity(mesh: meshResource, materials: [material])
    return modelEntity
}
Went deep into Google and turned up a project from a few months ago that shows converting a MeshAnchor.Geometry to a ModelEntity! 🎉
Here is the link: https://github.com/XRealityZone/what-vision-os-can-do/blob/main/WhatVisionOSCanDo/ShowCase/WorldScening/WorldSceningTrackingModel.swift#L70
And here is the relevant code:
@MainActor fileprivate func generateModelEntity(geometry: MeshAnchor.Geometry) async throws -> ModelEntity {
    // Generate the mesh. (asSIMD3(ofType:) is a helper extension from that project, not a built-in API.)
    var desc = MeshDescriptor()
    let posValues = geometry.vertices.asSIMD3(ofType: Float.self)
    desc.positions = .init(posValues)
    let normalValues = geometry.normals.asSIMD3(ofType: Float.self)
    desc.normals = .init(normalValues)
    do {
        desc.primitives = .polygons(
            // They should all be triangles, so just write 3 here.
            (0..<geometry.faces.count).map { _ in UInt8(3) },
            (0..<geometry.faces.count * 3).map {
                geometry.faces.buffer.contents()
                    .advanced(by: $0 * geometry.faces.bytesPerIndex)
                    .assumingMemoryBound(to: UInt32.self).pointee
            }
        )
    }
    let meshResource = try await MeshResource.generate(from: [desc])
    let material = SimpleMaterial(color: .red, isMetallic: false)
    let modelEntity = ModelEntity(mesh: meshResource, materials: [material])
    return modelEntity
}
Curious about this also.
From the SceneReconstructionExample there is this code:
let entity = ModelEntity()
entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
I think you need to add this line:
entity.model = ModelComponent(mesh: <#T##MeshResource#>, materials: <#T##[Material]#>)
Just not sure how to make a MeshResource out of the MeshAnchor (or the ShapeResource that generateStaticMesh returns).
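Putting those pieces together, here is a hedged sketch that builds both the visible model and the collision shape for each MeshAnchor, using the generateModelEntity(geometry:) helper from the repo linked above (provider is assumed to be a running SceneReconstructionProvider):

import ARKit
import RealityKit

// Sketch only: a real implementation would also handle .updated and .removed
// events instead of just adding entities.
func addReconstructedMeshes(from provider: SceneReconstructionProvider, to root: Entity) async {
    for await update in provider.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor),
              let entity = try? await generateModelEntity(geometry: meshAnchor.geometry)
        else { continue }

        entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
        entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
        // generateModelEntity already sets the ModelComponent; if you start from a bare
        // ModelEntity() instead, this is where you'd assign entity.model yourself.
        root.addChild(entity)
    }
}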
Answered my second question here:
https://developer.apple.com/documentation/visionos/setting-up-access-to-arkit-data#Open-a-space-and-run-a-session
"To help protect people’s privacy, ARKit data is available only when your app presents a Full Space and other apps are hidden. Present one of these space styles before calling..."
That seems to indicate that a 'Full Space' is an ImmersiveSpace set to any style (.full, .mixed or .progressive), not ONLY the .full style.
Still not sure how .progressive uses the Digital Crown. I wonder if maybe it detects dropping below a certain level of immersion and programmatically switches the immersion style from .progressive to .mixed? Hmmmmm.
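For reference, here is roughly what that looks like at the scene level (the type and space names are placeholders). Declaring multiple allowed styles is what lets the same ImmersiveSpace be opened as .mixed, .progressive, or .full, and any of them is presented as a Full Space:

import SwiftUI

@main
struct ExampleApp: App {
    // The concrete style types all have plain initializers; start progressive here.
    @State private var immersionStyle: any ImmersionStyle = ProgressiveImmersionStyle()

    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
        }
        // Any of these styles counts as a Full Space, so ARKit data becomes available.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}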
I figured out my own problem.
The Vision Pro hardware really doesn't like starting the audio engine during launch. I wanted to minimize the amount of UI in my app and needed the audio input the whole time, so I figured I'd just start it as soon as I had permission. No go. But if I separate the audioEngine?.start() out into a startMonitoring() call, and call that after a timer fires from my ContentView's .onAppear (or I guess I could add a button for the user to start it), it all works great!
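In case it helps anyone else, here is a rough sketch of the workaround (AudioMonitor is a name I made up for this post, startMonitoring() is the call mentioned above, and asyncAfter stands in for the timer; the point is that audioEngine.start() happens after the view appears, not during launch):

import AVFoundation
import SwiftUI

// Sketch only: assumes microphone permission has already been granted.
final class AudioMonitor: ObservableObject {
    private var audioEngine: AVAudioEngine?

    // Deliberately NOT called at launch.
    func startMonitoring() {
        let engine = AVAudioEngine()
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            // Process microphone buffers here.
        }
        do {
            try engine.start()
            audioEngine = engine
        } catch {
            print("Audio engine failed to start:", error)
        }
    }
}

struct ContentView: View {
    @StateObject private var monitor = AudioMonitor()

    var body: some View {
        Text("Listening…")
            .onAppear {
                // Waiting a beat after the UI is up avoids the launch-time failure.
                DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
                    monitor.startMonitoring()
                }
            }
    }
}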