Updating mesh vertex positions per frame (from CPU)

We have a reasonably complex mesh and need to update the vertex positions every frame, using custom code running on the CPU.

It seems like SceneKit is not really set up to make this easy, as the SCNGeometry is immutable.

What is the easiest (yet performant) way to achieve this?


So far I can see two possible approaches:

1) Create a new SCNGeometry for every frame (see the sketch after this list). I suspect that this will be prohibitively expensive, but maybe not?

2) It seems that SCNProgram and its handleBinding... method would allow updating the vertex positions. But does using SCNProgram mean that we have to write all our own shaders from scratch? Or can we still use the default SceneKit vertex and fragment shaders even when using SCNProgram?
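In its simplest form, option 1 would be something like this minimal sketch (newPositions and element stand in for our own data):

import SceneKit

// Option 1 sketch: build a brand-new SCNGeometry every frame.
// `newPositions` and `element` are stand-ins for the app's own data.
func rebuildGeometry(on node: SCNNode,
                     newPositions: [SCNVector3],
                     element: SCNGeometryElement) {
    let vertexSource = SCNGeometrySource(vertices: newPositions)
    node.geometry = SCNGeometry(sources: [vertexSource], elements: [element])
}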

Accepted Reply

Just in case somebody else with the same problem finds this post:

The solution I found was to use a SCNGeometrySource wrapping a custom MTLBuffer.

We then use another MTLBuffer (actually multiple, in a ring-buffer scheme) that we write to from our CPU code. Once the CPU has finished writing the vertex positions, the next time renderer(_:willRenderScene:atTime:) comes around, we queue up a blit command that copies from this MTLBuffer into the MTLBuffer used by the SCNGeometrySource.

I also wrote a two stage compute shader for computing the vertex normals based on the updated vertex positions. These vertex normals are also written to a MTLBuffer that is wrapped in a SCNGeometrySource.

This scheme works beautifully and seems to have very low overhead in terms of both CPU and GPU load.
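In rough outline, the scheme looks like this (a minimal sketch with illustrative names and sizes; the real code uses a ring of staging buffers rather than the single one shown here):

import SceneKit
import Metal

// Minimal sketch of the scheme above; names and sizes are illustrative.
final class DynamicMesh: NSObject, SCNSceneRendererDelegate {
    let vertexCount = 20_000
    let positionStride = MemoryLayout<SIMD3<Float>>.stride // 16 bytes (padded float3)

    let device = MTLCreateSystemDefaultDevice()!
    lazy var commandQueue = device.makeCommandQueue()!

    // GPU-side buffer that SceneKit renders from.
    lazy var positionBuffer = device.makeBuffer(
        length: vertexCount * positionStride, options: .storageModePrivate)!

    // CPU-writable staging buffer that our update code fills each frame.
    lazy var stagingBuffer = device.makeBuffer(
        length: vertexCount * positionStride, options: .storageModeShared)!

    // Wrapping the MTLBuffer makes the geometry source "live": SceneKit
    // renders whatever the buffer contains, with no geometry rebuild.
    lazy var positionSource = SCNGeometrySource(
        buffer: positionBuffer, vertexFormat: .float3, semantic: .vertex,
        vertexCount: vertexCount, dataOffset: 0, dataStride: positionStride)

    func renderer(_ renderer: SCNSceneRenderer,
                  willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        // By this point the CPU has finished writing into stagingBuffer.
        let commandBuffer = commandQueue.makeCommandBuffer()!
        let blit = commandBuffer.makeBlitCommandEncoder()!
        blit.copy(from: stagingBuffer, sourceOffset: 0,
                  to: positionBuffer, destinationOffset: 0,
                  size: vertexCount * positionStride)
        blit.endEncoding()
        // (The two-stage normals compute pass would be encoded here as well.)
        commandBuffer.commit()
    }
}

The geometry itself is built once from positionSource (plus a normals source and an element); after that, only the buffer contents change.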

Replies

1. Depending on how large the mesh is, how many you would need, and what else takes up resources, this could work just fine.

2. Yes, if you start using a SCNProgram you lose the default shaders.


"Or can we still use the default Scenekit vertex and fragment shaders even when using SCNProgram?”


Yes, you may be able to use shader modifiers to reach your goal. Shader modifiers allow you to add snippets of shader code to the default shaders. For example, I have a shader modifier with a uniform "transmat" in the geometry entry point. Every frame, or less often, this transmat (a mat4) is updated via setValue(_:forKeyPath:). Inside the shader modifier code I multiply the position of the vertex with the transmat matrix.


For example (note: I use the alpha value of the color to determine whether a vertex is one of those that need to be moved):

// Pass the transform to the shader modifier via KVC.
NSValue *transmatVal = [NSValue valueWithSCNMatrix4:transmat];
[_solidNode.geometry setValue:transmatVal forKeyPath:@"transmat"];

// Geometry-entry-point shader modifier: move only the vertices whose
// color alpha was set to 0.5, then restore the alpha to 1.0.
_solidNode.geometry.shaderModifiers = @{SCNShaderModifierEntryPointGeometry :
    @"uniform mat4 transmat;\n"
     "#pragma body\n"
     "if (_geometry.color.a == 0.5) {\n"
     "    _geometry.position = transmat * _geometry.position;\n"
     "}\n"
     "_geometry.color = vec4(_geometry.color.r, _geometry.color.g, _geometry.color.b, 1.0);\n"
};


That works great (and blazing fast) because all the vertices that are moved need the same movement/rotation/scaling in my case.


Unfortunately, it’s (to my limited knowledge) not possible to add another buffer through shader modifiers. But if your movement consists of only translations on x, y, z per vertex, AND you don’t use vertex colors, you could use the rgb values of the color semantic to store the translation. That way you can calculate the movement per vertex on the CPU, update only the geometry source of the SCNGeometry that holds the colors, and let the GPU perform the actual movement. A sketch of this idea follows below.
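Sketched in Swift (hypothetical; geometry is assumed to already carry a color source holding the per-vertex translations):

import SceneKit

// Hypothetical sketch: the CPU stores each vertex's xyz translation in the
// rgb channels of the color source; this shader modifier applies it on the GPU.
geometry.shaderModifiers = [
    .geometry: """
    #pragma body
    _geometry.position.xyz += _geometry.color.rgb;
    _geometry.color = vec4(1.0);
    """
]

Note that since color components typically live in the 0...1 range, a real implementation would likely have to scale and bias the stored translations.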


This is how I update only the colors of the SCNGeometry, reusing the geometry sources for vertices and normals and the element, i.e. only _colorSource is recalculated:

SCNGeometry *_rebuildGeometry =
    [SCNGeometry geometryWithSources:@[_solidNode.geometry.geometrySources.firstObject, // vertices
                                       _solidNode.geometry.geometrySources[1],          // normals
                                       _colorSource]                                    // recalculated colors
                            elements:@[_solidNode.geometry.geometryElements.firstObject]];

Thank you.

Yes, I think option 1 is definitely worth trying. Performance might just be fine.

If not, it gets tricky. Shader modifiers won't help (as far as I can tell), because they do not provide a way of getting the new vertex positions from the CPU to the GPU.


I don't think updating the vertex colors and then copying the info from color to position on the GPU would buy me anything, because updating the color information isn't any easier than updating the vertex positions.


When updating the colors by rebuilding the SCNGeometry using that last line of code, how has performance been for you? Has it been an issue? For how complex a mesh? Because that approach is basically the option 1 solution I was thinking about (just replacing vertex positions instead of colors).

I just tested it with meshes of 32k and 128k polygons, with non-shared vertices, so four times as many color vectors. There is no noticeable delay even at 120 fps (on an iPad Pro) in the first case. There is a minor delay with the 128k polygons, but that does include, in my case, going through all the half-edges of each polygon. It seems to depend primarily on how you update the vertex positions (I use GCD to do that, as well as to update the SCNGeometry in the background) rather than on updating one of the geometry sources of the SCNGeometry; the pattern is sketched below.
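Roughly, the background-update pattern looks like this (a sketch; names are illustrative and error handling is omitted):

import SceneKit

let updateQueue = DispatchQueue(label: "mesh.update", qos: .userInteractive)

// Sketch: rebuild the SCNGeometry on a background queue via GCD, reusing the
// untouched sources, then swap it in from the main queue.
func updateColors(of node: SCNNode, newColorSource: SCNGeometrySource) {
    updateQueue.async {
        guard let geometry = node.geometry else { return }
        // Reuse the existing vertex and normal sources and the element;
        // only the color source is replaced.
        let rebuilt = SCNGeometry(
            sources: [geometry.geometrySources[0],
                      geometry.geometrySources[1],
                      newColorSource],
            elements: geometry.geometryElements)
        rebuilt.materials = geometry.materials
        DispatchQueue.main.async { node.geometry = rebuilt }
    }
}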

Awesome, thank you for trying that out. That is very encouraging!

Sounds like replacing the SCNGeometry every frame (option 1) might be fine then. We are targeting the new iPhones, so our framerate target is just 60fps, and the CPU & GPU performance should be pretty similar. And I believe the mesh will be around 20k triangles or so.

Hello, ppix!


Remember that SceneKit has morphing capabilities that natively allow you to animate SCNGeometry! Take a look at the WWDC 2013 sample code; it has a scene where a 3D map model is animated using the SCNMorpher class.

Thanks, yes, I am quite aware of SCNMorpher. However, as I wrote in the question, we really do need to update the vertex positions using custom code on the CPU. SCNMorpher only allows you to interpolate between a set of target poses, roughly as in the sketch below.
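For reference, a minimal SCNMorpher sketch (targetA and targetB are hypothetical pre-built poses), which shows exactly the limitation: the targets must exist up front.

import SceneKit

// SCNMorpher can only blend between pre-built target geometries; it cannot
// accept arbitrary per-frame vertex positions computed on the CPU.
let morpher = SCNMorpher()
morpher.targets = [targetA, targetB] // hypothetical pre-built poses
node.morpher = morpher
node.morpher?.setWeight(0.75, forTargetAt: 0)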

Hello,


I'm stuck on the exact same issue, and there are no materials or examples covering how to do this with SceneKit and Metal.

Can you please be very kind and give an example of how to do that? Any help from anyone else would be very much appreciated.

Did you open source this solution? Would love to take a look! Dealing with the exact same issue currently.

Hi, I'm having the same problem here. I'd appreciate it if anyone has an example.

I sorta figured out a way to do this. The key is to use SCNGeometrySource's proper initializer that takes an MTLBuffer as input: https://developer.apple.com/documentation/scenekit/scngeometrysource/1522873-init.

I'm posting my pseudocode here in case anyone comes across the same issue.

/* Initialize the color buffer; the vertex buffer is handled similarly. */
var color_buffer_array: [UInt8] = []

/* Append rgba values (0...1), stored as Floats, to the color buffer. */
color_buffer_array.append(contentsOf: withUnsafeBytes(of: Float(UInt8(xyzrgb[6])!) / 255.0, Array.init))
color_buffer_array.append(contentsOf: withUnsafeBytes(of: Float(UInt8(xyzrgb[7])!) / 255.0, Array.init))
color_buffer_array.append(contentsOf: withUnsafeBytes(of: Float(UInt8(xyzrgb[8])!) / 255.0, Array.init))
color_buffer_array.append(contentsOf: withUnsafeBytes(of: Float(1.0), Array.init))
curr_point_cloud.color_buffer = Data(color_buffer_array)

/* NOTE: use 4 UInt8s for rgba in Data, use 4 Floats for rgba in MTLBuffer.
   Below is an example of NOT using an MTLBuffer, so color cannot be updated in real time:

   curr_point_cloud.color_source = SCNGeometrySource(data: curr_point_cloud.color_buffer!,
                                                     semantic: .color,
                                                     vectorCount: curr_point_cloud.points.count, // number of vertices
                                                     usesFloatComponents: true, // must be true to display correct color
                                                     componentsPerVector: 4,    // 4 UInt8s: r, g, b, a
                                                     bytesPerComponent: 1,      // 1 UInt8 == 1 byte
                                                     dataOffset: 0,
                                                     dataStride: 4)             // 4 * 1
*/

/* Below is an example of using an MTLBuffer, so color can be updated in real time. */
curr_point_cloud.color_buffer!.withUnsafeBytes { rawBufferPointer in
    let rawPtr = rawBufferPointer.baseAddress!
    curr_point_cloud.color_mtl_buffer = mtl_device!.makeBuffer(
        bytes: rawPtr, length: curr_point_cloud.color_buffer!.count, options: [])
    curr_point_cloud.tmp_color_mtl_buffer = mtl_device!.makeBuffer(
        bytes: rawPtr, length: curr_point_cloud.color_buffer!.count, options: [])
    curr_point_cloud.color_source = SCNGeometrySource(
        buffer: curr_point_cloud.color_mtl_buffer!,
        vertexFormat: .float4,
        semantic: .color,
        vertexCount: curr_point_cloud.points.count,
        dataOffset: 0,
        dataStride: 16) // 4 Floats * 4 bytes
}

/* Update the MTLBuffer.
   NOTE: two options: change color_mtl_buffer directly, or change
   tmp_color_mtl_buffer and copy it over with a blit command:

   func renderer(_ renderer: SCNSceneRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
       if DataModel.shared.point_cloud_objects.count == 0 { return }
       // https://stackoverflow.com/questions/40476426/scenekit-metal-depth-buffer
       let commandBuffer = DataModel.shared.mtl_command_queue!.makeCommandBuffer()!
       let blitCommandEncoder: MTLBlitCommandEncoder = commandBuffer.makeBlitCommandEncoder()!
       blitCommandEncoder.copy(from: DataModel.shared.point_cloud_objects[0].tmp_color_mtl_buffer!, sourceOffset: 0,
                               to: DataModel.shared.point_cloud_objects[0].color_mtl_buffer!, destinationOffset: 0,
                               size: DataModel.shared.point_cloud_objects[0].color_buffer!.count)
       blitCommandEncoder.endEncoding()
       commandBuffer.commit()
   }
*/