2D buffer visualization for GPGPU application

I am working on a project which involves intensive GPGPU calculations, and I need to visualize the process as a 2D color buffer.

Is there any open-source example of how to set up all the boilerplate code in the easiest way? I do not really need vertex shaders and primitives; just putting a 2D array into the view would be enough.

I expect most of the calculation logic will be in Metal shader code.
Simple answer:
You pass your buffer to your kernel, write to your texture, and then render the texture on screen as appropriate for your specific circumstance.
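As a rough sketch of that flow (the names here - `encodeVisualization`, `dataBuffer`, `outputTexture`, and a kernel of your own behind the pipeline state - are placeholders, and whether this shape fits depends on your specifics):

Code Block Swift
import Metal

// Sketch: run a compute kernel that reads your results buffer and writes into a texture.
// The pipeline is assumed to have been built from your own kernel function;
// error handling and device feature checks are omitted.
func encodeVisualization(commandQueue: MTLCommandQueue,
                         pipeline: MTLComputePipelineState,
                         dataBuffer: MTLBuffer,
                         outputTexture: MTLTexture) {
    guard let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }

    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(dataBuffer, offset: 0, index: 0)   // your calculation results
    encoder.setTexture(outputTexture, index: 0)          // texture the kernel writes into

    // One thread per texel.
    let w = pipeline.threadExecutionWidth
    let h = pipeline.maxTotalThreadsPerThreadgroup / w
    encoder.dispatchThreads(MTLSize(width: outputTexture.width, height: outputTexture.height, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: w, height: h, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
}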

If you are performing intensive calculations, then the answer to 'what is the boilerplate for me' depends on the specifics of your situation.

You mention that you don't need vertex shaders and primitives, but those are in fact how you would get the texture onto the screen. Your specific use case will also change how you architect the pipeline in your CPU code, and that part may be large for intense and complex calculations.
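For reference, the host side of such a pass is fairly small. A minimal sketch, assuming an MTKView, a render pipeline built from a trivial pass-through vertex/fragment shader pair of your own, and the texture your compute pass produced:

Code Block Swift
import MetalKit

// Sketch: draw a fullscreen quad that samples `resultTexture` into the view's drawable.
// `renderPipeline` is assumed to come from a simple vertex/fragment pair whose
// vertex shader generates the quad corners from the vertex index.
func present(texture resultTexture: MTLTexture,
             in view: MTKView,
             commandQueue: MTLCommandQueue,
             renderPipeline: MTLRenderPipelineState) {
    guard let descriptor = view.currentRenderPassDescriptor,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { return }

    encoder.setRenderPipelineState(renderPipeline)
    encoder.setFragmentTexture(resultTexture, index: 0)
    // Four vertices of a triangle strip covering the whole view.
    encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    encoder.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}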

Aside:

I find the Metal developer community feels small, with only a minority who are familiar with the entirety of it, and even Apple employees can turn out to be unfamiliar with simple things. (I had to find this out the hard way, by building endless tests to measure and assess each thing myself.) Because of this, you are unlikely to find quality sample code anywhere online, and even if you do, the specifics of that sample may be incorrect for you.

Frequently, when you want to do one thing in combination with other things, you have to change how you do both. Because of this, the boilerplate you ask for is likely not what you will truly need for the real use case - causing you either to ask a second time for something else, or to settle for something significantly inefficient just because that's what you got working. That isn't a great approach. And there are enough options in the possibility space that I couldn't predict exactly what applies to you without a very thorough description of everything.

The more specific and clear you are about the entirety of your application's use case, the more likely you are to get not just an answer, but the answer that is most correct for your specific circumstance.

However, an even better position to take would be to worry less about achieving your immediate goal and more about understanding the fundamentals of the system, so that you can design architectures on your own and know why things need to be a certain way. Then, when you come to ask for help, your questions will be less about a lack of basic understanding and more about Apple's bugs or the cutting-edge algorithm you're trying to implement. If we can foster that level of thoroughness in the developer culture outside Apple, then we might see Apple follow along and adopt a culture of higher standards in their foundation software and hardware.


Thank you @MoreLightning, it is really tough to get up-to-date information on such subjects, and many examples are either in Obj-C or in old Swift versions which do not even build in the latest Xcode :(

Finally, after looking through many tutorials, I have found exactly what I need for 2D buffer visualization - there is a technique that uses a compute pipeline instead of a render pipeline, with a very basic kernel function that outputs the texture directly into the view:

Code Block Swift
init() {
...
    // Note: the view's drawable must allow shader writes (view.framebufferOnly = false).
    let function = library?.makeFunction(name: "compute")
    pipelineState = try! device.makeComputePipelineState(function: function!)
...
}

func draw(in view: MTKView) {
    if let commandBuffer = commandQueue.makeCommandBuffer(),
       let computeEncoder = commandBuffer.makeComputeCommandEncoder(),
       let drawable = view.currentDrawable {
        computeEncoder.setComputePipelineState(pipelineState)
        computeEncoder.setTexture(drawable.texture, index: 0)   // output: the drawable itself
        computeEncoder.setTexture(colorMap, index: 1)           // input: the 2D data texture
        // One thread per pixel of the drawable.
        var w = pipelineState.threadExecutionWidth
        var h = pipelineState.maxTotalThreadsPerThreadgroup / w
        let threadsPerGroup = MTLSizeMake(w, h, 1)
        w = drawable.texture.width
        h = drawable.texture.height
        let threadsPerGrid = MTLSizeMake(w, h, 1)
        computeEncoder.dispatchThreads(threadsPerGrid, threadsPerThreadgroup: threadsPerGroup)
        computeEncoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}


and compute.metal:
Code Block Metal
#include <metal_stdlib>
using namespace metal;

kernel void compute(texture2d<float, access::write> output [[texture(0)]],
                    texture2d<float, access::sample> input  [[texture(1)]],
                    uint2 id [[thread_position_in_grid]])
{
    // Copy one texel of the input data texture into the drawable.
    float4 color = input.read(id);
    output.write(color, id);
}


Kudos to metalkit.org/2019/01/31/intro-to-metal-compute.html
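One configuration detail this approach relies on (worth double-checking against your own setup): an MTKView's drawable is framebuffer-only by default, which disallows writes from a compute kernel, so the view has to be set up to allow it:

Code Block Swift
import MetalKit

// Sketch of the MTKView configuration the compute-into-drawable approach needs.
func configure(view: MTKView, device: MTLDevice) {
    view.device = device
    // framebufferOnly is true by default; turning it off lets the kernel
    // use the drawable's texture as a writable output.
    view.framebufferOnly = false
    view.colorPixelFormat = .bgra8Unorm
}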
One part of the reason it is so hard to find up-to-date material is that using Swift is, ironically, slow and inefficient compared to C, and not appropriate if your real goal is performance and maximizing the capability of the hardware.

In almost all serious scientific applications, games, art tools, etc. where Metal is to be used, there is a large amount of memory and data to be managed, transformed, and passed back and forth between the CPU and GPU.

Using Metal at all implies the developer is trying to get the fullest performance or stability possible, at the sacrifice of ease of use or whatever other benefit the Swift coder might believe they are gaining.

So combining something inefficient like Swift with Metal is a strange practice for a use case where we sincerely are maximizing the potential of the hardware. What exactly is the developer trying to achieve by going only part way? Perhaps they are just following cultural fads or caught up in novelty and not measuring, or perhaps they are doing something where they simply do not care about the performance consequences. Many of the Swift architecture and syntax choices over the years have reflected strange preferences of the community that don't align with clear rationality, so I don't think the community can readily engage in a logical dialog about this without also confronting the irrationality of many ideas that were already adopted.

In any case, the typical lesson here is to avoid the tendency to learn by example - not to learn by reading other people's code, but to come to understand the fundamentals. Then you will understand how to build anything, and you will clearly see the limits, problems, and incorrect practices in publicly published examples.

Agreeing with most of what MoreLightning said; another major aspect here is that Swift cannot cope with C++ code at all. There is a reason why Apple chose C++ as the basis for their Metal shading language, not Swift. In high-performance and/or real-time-sensitive software, companies usually have a fairly large portion of code written in a system-level language, typically C or C++. Application-level languages like Swift, Objective-C, or Java don't fulfil the requirements for that. So application-level languages (Swift, Objective-C, Java, ...) are usually only used in such applications to handle some of the UI API calls with the system, and accordingly they must somehow be capable of interfacing with the other portions of the application written in a system-level language like C or C++.

Objective-C code can be mixed with both C and C++ code (for the latter you just have to rename the .m file to .mm in Xcode, or select "Objective-C++ Source" from the file inspector on the right side). Swift source code, however, can only be mixed with C code. So if you really wanted to use Swift for the iOS/macOS API handling, then in practice you would need to manually write a huge amount of C bindings for your app's C++ code portions. Not bored™.
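As a small illustration of that workflow (everything here is hypothetical: `mylib_mean` stands for a hand-written C wrapper around some C++ code, declared in the bridging header):

Code Block Swift
// Assumed declaration in the bridging header, wrapping the C++ implementation:
//   double mylib_mean(const double *values, int count);

func mean(of samples: [Double]) -> Double {
    // Hand the Swift array to the C wrapper without copying.
    return samples.withUnsafeBufferPointer { buffer in
        mylib_mean(buffer.baseAddress, Int32(buffer.count))
    }
}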

BTW, what's the deal with the forum these days, that it's now interpreting "C++" (written without spaces) as underline markup? @Apple: please fix this!
To extend this a little more...

(As a side note: it should go without saying that even though you can call C from Swift, it is not the same thing, and there is overhead.)

I almost always write software tools that try to be as efficient as possible, but they still hit the hardware limits for realtime interaction, and I need them to do more.

Because of this, in a real tool it never makes sense for me to use slower languages and existing frameworks for the UI either. The performance gain from rolling custom UI at the same level as the rest of the Metal code is really significant - especially if you are using any of the features of Swift that people do like, because those actually come at the highest cost.

I write almost all my code in raw C, and only use Objective-C for the Metal calls, because I am trying to get the most out of the hardware at the sacrifice of programmer convenience. It's not that I enjoy this; I just accept the suffering in order to make the final tool better. I ended up this way because I wasn't getting results as good the other ways.

The initial brainstorming ideas that led to Swift 1 and playgrounds reflected concepts like Bret Victor's thoughts on interactivity. But the ideas were greatly misunderstood.

Ideally:
  • the final compiled machine code should be as optimized and stable as it can be, because the end result is the first priority

  • the second priority is how we interact with the machine as developers, and distinguishing features of the text language from features of the IDE - Swift's core implementation misunderstood this, even though LLVM's AST format is completely able to support the proper solutions if people had thought it through correctly. This was obvious all along, but it goes back to the cultural problems and miscommunication. (After all, I believe C uses the same underlying AST format, does it not? The failure of other languages to achieve the same level of performance comes from how people think about the language/interface, the system, and the goals.)

There are also additional moral concerns, since performance is tied to physical energy consumption and environmental damage.

If we take responsibility for the impact of our efficiency choices as developers as a whole, then it means confronting that as well. If millions of coders take the position that it doesn't matter much (take, for instance, the widespread adoption of Unity and Unreal), then the collective impact is quite large, so it becomes something we'd have to address collectively as a culture. It's similar to how, if everyone litters only a little, it is a big problem in the whole - even though each person will say "It's just one".