Posts

Post not yet marked as solved
0 Replies
278 Views
Hi there, I can't find a way to share constants, e.g. an int, with Swift via a bridging header. I have tried a number of ways. I would love either a simple "no" or an explanation of how to do this, so that I can ideally share some constants used by both the CPU and GPU. A rough sketch of the setup I have in mind is below. Thank you!
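For clarity, the kind of setup I am after looks roughly like this. The file names and the constant are placeholders, and this is a sketch of the goal rather than code from my project:

// ShaderTypes.h: a single header meant to be visible to both Swift and MSL
#ifndef ShaderTypes_h
#define ShaderTypes_h

#define kTerrainChunkMaxVertices 65536

#endif /* ShaderTypes_h */

// MyApp-Bridging-Header.h
#include "ShaderTypes.h"

// Shaders.metal
#include "ShaderTypes.h"
// ...kTerrainChunkMaxVertices is now usable inside kernel/vertex functions.

// Swift: simple integer #defines from the bridging header are imported as constants
let maxVertices = kTerrainChunkMaxVertices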
Posted by spamheat. Last updated.
Post not yet marked as solved
6 Replies
1.1k Views
Hi there, I am working on a 3D game engine in Swift and Metal. Currently I dynamically generate vertex buffers for terrain "chunks" on the CPU, pass all models to the GPU via an argument buffer, and make indirect draw calls. Calculating where vertices should be is costly and I would like to offload the work to a compute shader. Setting up the shader was straightforward and I can see that it (at least as an empty function) is being executed in the command buffer. However, I run into this problem: since I do not know ahead of time how many vertices a chunk of terrain will have, I cannot create a correctly sized MTLBuffer to pass into the compute function to be populated for later use in a draw call.

The only solution I could think of is something like the following:

1. For each chunk model, allocate a VertexBuffer and IndexBuffer that will accommodate the maximum possible number of vertices for a chunk.
2. Pass the empty too-large buffers to the compute function.
3. Populate the too-large buffers and set the actual vertex count and index count on the relevant argument buffer.
4. On the CPU, before the render encoder executes commands in the indirect command buffer:
- for each chunk argument buffer, create new buffers sized to the actual vertex count and index count;
- blit copy the populated sections of memory from the original too-large buffers to the new correctly sized buffers;
- replace the buffers on each chunk model and update the argument buffers for the draw kernel function.

A sketch of step 4 as I imagine it is below. But I am still a Metal novice and would like to know if there is a more straightforward or optimal way to accomplish something like this.
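For concreteness, the CPU-side compaction in step 4 would look roughly like this. The names (chunk, Vertex, oversizedVertexBuffer, and so on) are placeholders, and it assumes the compute pass has already completed so the per-chunk counts are readable on the CPU:

let blitEncoder = commandBuffer.makeBlitCommandEncoder()!
for chunk in chunks {
    // Counts written by the compute kernel; only valid after that command buffer completes.
    let vertexBytes = chunk.vertexCount * MemoryLayout<Vertex>.stride
    let indexBytes = chunk.indexCount * MemoryLayout<UInt32>.stride

    let packedVertices = device.makeBuffer(length: vertexBytes, options: .storageModePrivate)!
    let packedIndices = device.makeBuffer(length: indexBytes, options: .storageModePrivate)!

    // Copy only the populated front sections of the oversized buffers.
    blitEncoder.copy(from: chunk.oversizedVertexBuffer, sourceOffset: 0,
                     to: packedVertices, destinationOffset: 0, size: vertexBytes)
    blitEncoder.copy(from: chunk.oversizedIndexBuffer, sourceOffset: 0,
                     to: packedIndices, destinationOffset: 0, size: indexBytes)

    chunk.vertexBuffer = packedVertices
    chunk.indexBuffer = packedIndices
    // ...then rewrite this chunk's argument buffer to reference the packed buffers.
}
blitEncoder.endEncoding()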
Posted by spamheat. Last updated.
Post marked as solved
1 Reply
536 Views
Hi, I'm experiencing something I can't explain with the Metal debugger, on Xcode 15 Beta 2. I have two render passes that should accomplish the same thing via different approaches, which I use alternately for testing:

- Render Pass A (purple cubes) makes direct instanced drawIndexedPrimitives calls from the CPU.
- Render Pass B (green cubes) uses an indirect command buffer with instanced drawIndexedPrimitives calls encoded on the GPU.

Render Pass A draws geometry exactly as expected in the application window. But when I use the Metal debugger, the command buffer step preview shows nothing, and debugging the geometry of a particular draw call shows some values as null that cannot possibly be null, since the rendered images are exactly as expected.

Render Pass B draws nothing in the application window. But when I use the Metal debugger, the command buffer step preview shows exactly the expected image, and even the expected values in the geometry debugger.

The opposite nature of what I am seeing here may be a red herring. A better question may be: why does Render Pass B seem to produce the expected geometry, but then not draw it in the running application view? Adding to the confusion, even though the small previews show the expected geometry, the "CAMetalLayer Display Drawable" shows exactly what I get in the end result: just the skybox.

I would highly appreciate any tips on why what appears to be OK is not making it into the final drawable. The rough shape of the Render Pass B encoding is below; let me know if any other specific code would help debug this.
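For reference, Render Pass B is encoded roughly like this (trimmed, with placeholder names; the compute kernel that fills the indirect command buffer runs earlier in the same command buffer):

let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)!
renderEncoder.setRenderPipelineState(cubePipelineState)

// Resources referenced only through the indirect command buffer are made resident explicitly.
renderEncoder.useResource(cubeVertexBuffer, usage: .read)
renderEncoder.useResource(cubeIndexBuffer, usage: .read)
renderEncoder.useResource(uniformsBuffer, usage: .read)

// Execute the draws that the GPU encoded into the ICB.
renderEncoder.executeCommandsInBuffer(indirectCommandBuffer, range: 0..<maxCommandCount)
renderEncoder.endEncoding()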
Posted by spamheat. Last updated.
Post not yet marked as solved
2 Replies
553 Views
Looking to use Tier 2 argument buffers in Swift. Since the struct can't be shared between the header and the Swift file the way it can in Obj-C, I have two structs.

In Swift:

struct VertexShaderArguments {
    var uniforms: MTLBuffer
    var materials: MTLBuffer
}

In the shader header:

struct VertexShaderArguments {
    device Uniforms &uniforms;
    device Material *materials;
};

And I construct and populate the argument buffer like so:

let vertexShaderArgumentBuffer = Renderer.device.makeBuffer(length: MemoryLayout<VertexShaderArguments>.stride)!
vertexShaderArgumentBuffer.label = "Vertex Shader Argument Buffer"
self.vertexShaderArgumentBuffer = vertexShaderArgumentBuffer

let vertexShaderArgumentBufferContents = vertexShaderArgumentBuffer.contents().assumingMemoryBound(to: VertexShaderArguments.self)
vertexShaderArgumentBufferContents.pointee.uniforms = uniformsBuffer
vertexShaderArgumentBufferContents.pointee.materials = scene.materialsBuffer

I've followed this example closely (https://developer.apple.com/documentation/metal/buffers/managing_groups_of_resources_with_argument_buffers) and consulted other resources. Examining the VertexShaderArgumentBuffer in the Metal debugger reveals the error "Not a valid buffer" for both members.

I would appreciate any assistance, and ask that Apple please provide more examples in Swift in the future. The fact that virtually all Metal examples are Objective-C only is baffling.
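One variant I am wondering about, in case it is relevant: storing raw GPU addresses in the Swift-side struct instead of MTLBuffer references, so the memory layout matches the pointers the shader header declares. This is just a sketch and assumes a Metal 3 device where MTLBuffer exposes gpuAddress:

struct VertexShaderArguments {
    var uniforms: UInt64   // device address of the uniforms buffer
    var materials: UInt64  // device address of the materials buffer
}

let contents = vertexShaderArgumentBuffer.contents().assumingMemoryBound(to: VertexShaderArguments.self)
contents.pointee.uniforms = uniformsBuffer.gpuAddress
contents.pointee.materials = scene.materialsBuffer.gpuAddress

// The referenced buffers would still need to be made resident for the draw, e.g.
// renderEncoder.useResource(uniformsBuffer, usage: .read)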
Posted by spamheat. Last updated.
Post not yet marked as solved
3 Replies
1.2k Views
I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples I have found of such approaches use fixed scenes, where all the data is packed before the first draw call into something like a MeshBuffer that holds all scene geometry in the form of Mesh objects (for instance). I assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be some clever tricks with pointers to update a MeshBuffer as needed (a sketch of what I mean is below), but I would like to know if there is an established or optimal solution to this problem, or if these approaches are simply incompatible with dynamic geometry. Any example projects that do what I am asking that I may have missed would be appreciated, too.

* I know these are not the same, but they seem to share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but it also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.
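To illustrate the "pointer tricks" idea, the kind of per-frame patching I have in mind looks roughly like this. Mesh, MeshEntry, and the rest are placeholder names, and it assumes a Metal 3 device with gpuAddress, so it is a sketch of the question rather than a known-good pattern:

// A GPU-visible table of mesh entries; only this table is rewritten when the
// scene changes, while the geometry buffers themselves stay where they are.
struct MeshEntry {
    var vertexBufferAddress: UInt64
    var indexBufferAddress: UInt64
    var indexCount: UInt32
    var padding: UInt32 = 0
}

func rebuildMeshTable(_ meshes: [Mesh], into table: MTLBuffer) {
    let entries = table.contents().assumingMemoryBound(to: MeshEntry.self)
    for (i, mesh) in meshes.enumerated() {
        entries[i] = MeshEntry(vertexBufferAddress: mesh.vertexBuffer.gpuAddress,
                               indexBufferAddress: mesh.indexBuffer.gpuAddress,
                               indexCount: UInt32(mesh.indexCount))
    }
}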
Posted by spamheat. Last updated.