Basic Metal help needed: render loop just not working.

So, there are a number of Metal tutorials. Apple has zero of them updated to Swift 3, and zero updated to the version of Metal that ships with Swift 3.

So third parties are the only place to go. But... none of the third-party tutorials do anything even remotely the same way.


I am at the point where I am trying to refactor a working HelloTriangle example. I am trying to follow along with code that is no longer valid (Swift 2 or 1) and DID NOT WORK. Why am I following along with that code? Because the code that does work doesn't take it any further. So, learning is fractured; just be aware of that.


I am making a class that draws a triangle. This is so simple, and yet it doesn't draw anything, and there is no indication why.



The view delegate, which works, is this:


    /*code from elsewhere that defines the buffers.
      
        let floatPos : [Float] = [-1.0, -1.0, 0.0, 1.0,
                                   1.0, -1.0, 0.0, 1.0,
                                   0.0, 1.0, 0.0, 1.0]
     
        let colors : [Float] = [1,0,0,1,0,1,0,1,0,0,1,1]
      
         posBuffer = view.device?.makeBuffer(bytes: floatPos, length: floatPos.count * MemoryLayout<Float>.stride, options: .storageModeShared)
         colorBuffer = view.device?.makeBuffer(bytes: colors, length: colors.count * MemoryLayout<Float>.stride , options: .storageModeShared)
       */
     
    func draw(in view: MTKView) {
        if let drawable = view.currentDrawable,
           let passDescriptor = view.currentRenderPassDescriptor {
            let cmndQueue = view.device?.makeCommandQueue()
            let cmdBuff = cmndQueue?.makeCommandBuffer()
            let cmdEncoder = cmdBuff?.makeRenderCommandEncoder(descriptor: passDescriptor)
            cmdEncoder?.setRenderPipelineState(renderPipelineState!)
            cmdEncoder?.setVertexBuffer(posBuffer, offset: 0, at: 0)
            cmdEncoder?.setVertexBuffer(colorBuffer, offset: 0, at: 1)
            cmdEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1)

            cmdEncoder?.endEncoding()
            cmdBuff?.present(drawable)
            cmdBuff?.commit()
        }
    }
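One incidental thing worth flagging in this otherwise working version: it creates a new command queue every frame. The command queue is meant to be a long-lived object, created once and reused. Here is a sketch of the same delegate with the queue hoisted out of the per-frame path (the setup function and property names are hypothetical; the Swift 3 API names match the snippet above):

```swift
import MetalKit

// Long-lived objects: created once, e.g. during view setup, and reused every frame.
var commandQueue: MTLCommandQueue?
var renderPipelineState: MTLRenderPipelineState?
var posBuffer: MTLBuffer?
var colorBuffer: MTLBuffer?

func setup(for view: MTKView) {
    // Making a command queue is relatively expensive; do it once, not per frame.
    commandQueue = view.device?.makeCommandQueue()
}

func draw(in view: MTKView) {
    if let drawable = view.currentDrawable,
       let passDescriptor = view.currentRenderPassDescriptor {
        // Per-frame objects: command buffer and encoder are cheap and transient.
        let cmdBuff = commandQueue?.makeCommandBuffer()
        let cmdEncoder = cmdBuff?.makeRenderCommandEncoder(descriptor: passDescriptor)
        cmdEncoder?.setRenderPipelineState(renderPipelineState!)
        cmdEncoder?.setVertexBuffer(posBuffer, offset: 0, at: 0)
        cmdEncoder?.setVertexBuffer(colorBuffer, offset: 0, at: 1)
        cmdEncoder?.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1)
        cmdEncoder?.endEncoding()
        cmdBuff?.present(drawable)
        cmdBuff?.commit()
    }
}
```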



Refactoring that triangle into a class suggests that the triangle class needs to be passed the command queue, pipeline state, drawable, and clear color, so that it can make its own command encoder and command buffer.


To me, in the vacuum of no information, that seems like an overweight set of dependencies. But I tried it. And it doesn't render anything; instead it fails for a variety of reasons, which are related to the lack of a tutorial that works with Swift 3 and refactors the code.


Next I tried just passing an already built command encoder, running only this code in my triangle class's draw func:

    cmdEncoder.setVertexBuffer(vertexBuff, offset: 0, at: 0)
    cmdEncoder.setVertexBuffer(colorBuff, offset: 0, at: 1)
    cmdEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertCount, instanceCount: vertCount/3)


You can assume that the two vertex buffers are built by exactly the same code as before.

We don't get SIGABRTs, but we also do not get any triangles.


So next I tried to synthesize a version of the tutorial's approach, while making adjustments to make it work in Swift 3 and with the approach from another tutorial:

    func draw(_ commandQueue: MTLCommandQueue, _ drawable: MTLDrawable, _ renderPassDescriptor: MTLRenderPassDescriptor, _ renderPipelineState: MTLRenderPipelineState?) {

        let commandBuffer = commandQueue.makeCommandBuffer()
        let cmdEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
        cmdEncoder.setRenderPipelineState(renderPipelineState!)

        if vertCount > 0 {
            cmdEncoder.setVertexBuffer(vertexBuff, offset: 0, at: 0)
            cmdEncoder.setVertexBuffer(colorBuff, offset: 0, at: 1)
            cmdEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: vertCount, instanceCount: vertCount/3)
        }
        cmdEncoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }


I added a check (if vertCount > 0) to make sure we were getting vertex data.

Again: no triangles are drawn, no errors thrown. And no tutorial that explains any of this in plain English.


I get it: we need to encapsulate and transport render commands to the GPU. It's a bit of a miracle, and it's awesome. What I don't get is why the exact same code, restructured slightly, does not work. I'm sure I am misunderstanding something. I just need help figuring out what that is.

Accepted Reply

OK, I have progress.

I was able to get my refactored triangle class to draw to the window. I made some significant changes to how the class creates the two buffers. Now it creates them at render time, from a literal copy of the Float array that worked originally.


The class used to build those buffers ahead of time, using an elaborate structure that generated the buffer. I wasn't going to find the issue in that mess.


But the next step is clearly not working. And I THINK I know what it is, but I wanted to ask.


I think the tutorial is sending me down the wrong path. I want to be able to make multiple entities and draw them at once. And I think the division of labor in the classes is incorrect.


My entity class does this at render time:

    func draw(_ commandQueue: MTLCommandQueue, _ drawable: MTLDrawable, _ renderPassDescriptor: MTLRenderPassDescriptor, _ renderPipelineState: MTLRenderPipelineState?) {

        let commandBuffer = commandQueue.makeCommandBuffer()
        let cmdEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
        cmdEncoder.setRenderPipelineState(renderPipelineState!)

        cmdEncoder.setVertexBuffer(self.vertexBuffer(), offset: 0, at: 0)
        cmdEncoder.setVertexBuffer(self.colorBuffer(), offset: 0, at: 1)
        cmdEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1)
        cmdEncoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }


Everything it does is prescribed by a tutorial.

And to my eye, it should definitely NOT be ending the encoding, presenting the drawable, or committing the buffer.

That stuff seems like it should be done one time only in a render pass, and it is probably what causes the SIGABRT I get when I run it.


?

Replies

Unfortunately, GPU programming is hard, and from time to time everybody (including the giants of the industry) wonders "where the triangles went". I think that your restructuring went wrong in some subtle way. The Metal debugger helps with that (check out the presentations from previous WWDCs). Or post a complete .zip of your source code somewhere and I can take a look at it. Guessing in a vacuum just doesn't make sense.

Thanks.

Currently regrouping. I took lunch and zoomed out, and decided to simplify my vertex buffer code in the entity class I've written and explore that. The goal now is to eliminate complications, whittle the whole thing down to the basics, and then see what is what.


If I do not figure it out, I might take you up on the offer.

-td


So, what I found was what I suspected.


I quickly commented out the code in the entity class that creates and sets up the command encoder, and instead passed an already-built encoder to the entity from the main rendering function. No problems. I added a second triangle, with an offset; both render. No more SIGABRTs.
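The shape of that fix can be sketched like this (hypothetical Entity/render names, same Swift 3 Metal API as the code above): the main rendering function owns the command buffer and encoder, and each entity only binds its buffers and issues its draw call.

```swift
import Metal

// Hypothetical sketch: entities encode draws; the renderer owns the frame.
class Entity {
    var vertexBuff: MTLBuffer?
    var colorBuff: MTLBuffer?
    var vertCount = 0

    // Note what is NOT here: no endEncoding, no present, no commit.
    func draw(_ cmdEncoder: MTLRenderCommandEncoder) {
        guard vertCount > 0 else { return }
        cmdEncoder.setVertexBuffer(vertexBuff, offset: 0, at: 0)
        cmdEncoder.setVertexBuffer(colorBuff, offset: 0, at: 1)
        cmdEncoder.drawPrimitives(type: .triangle, vertexStart: 0,
                                  vertexCount: vertCount, instanceCount: 1)
    }
}

// The main rendering function, called once per frame:
func render(_ entities: [Entity], commandQueue: MTLCommandQueue, drawable: MTLDrawable,
            renderPassDescriptor: MTLRenderPassDescriptor, renderPipelineState: MTLRenderPipelineState) {
    let commandBuffer = commandQueue.makeCommandBuffer()
    let cmdEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
    cmdEncoder.setRenderPipelineState(renderPipelineState)

    for entity in entities {
        entity.draw(cmdEncoder)       // many entities, one encoder
    }

    cmdEncoder.endEncoding()          // once per pass
    commandBuffer.present(drawable)   // once per frame
    commandBuffer.commit()            // once per frame
}
```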


This behavior is not unlike the animation system in Core Animation: build the transforms, set them, then commit them. It's probable that Core Animation drew that design from something like OpenGL in the first place.


Anyway, MikeAlpha: thanks.


Users looking for guidance: in refactoring your data handling, the only thing I have learned so far is that Metal follows rules. It's not irrational.

@eblu

Yeah, you're right, you want to do this stuff once per rendering pass (well, sometimes you have to use multiple encoders, for example when switching between compute and drawing operations). And yes, Metal has nothing to do with how you structure classes and such at a higher level.


The structure of Metal _is_ kind of similar to other graphics APIs, because it reflects very closely what the GPU does. And in fact this is Metal's raison d'être: letting you be more precise in sending commands to the GPU than, say, OpenGL, and letting you do all the "heavy" things in advance.


If there is a problem with Metal's documentation, it is this: the documentation lets an already experienced GPU programmer switch to Metal. So if you have, say, a year of serious OpenGL under your belt, you'll probably feel fine. On the other hand, it is (or must be) very hard to learn GPU programming AND Metal at the same time. There are some basic tutorials, and there are Apple guides/specs, and nothing (that I know of) in between.


So there are plenty of people coming to this forum, saying things like "it doesn't work / it stopped working, please help". And it turns out that they don't know how to lay out data in a buffer using the correct alignment/sizes; they expect that Metal/Swift/whatever will do some magic trick for them and it will "just work". Sorry, it won't. If you want performance, it takes time and sweat.
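To make the alignment/size point concrete, here is a small CPU-side Swift sketch (PackedVec3 is a made-up name): a hand-rolled three-float struct is 12 bytes, while the simd type that matches MSL's float3 is padded to 16, so copying an array of one where the shader expects the other silently shifts every subsequent vertex.

```swift
import simd

// A "vec3 of floats" defined by hand is tightly packed: 3 * 4 = 12 bytes.
struct PackedVec3 {
    var x: Float
    var y: Float
    var z: Float
}

// The simd types on the Swift side match the GPU-side layout: float3 in a
// Metal shader occupies 16 bytes (padded to a 4-lane register).
print(MemoryLayout<PackedVec3>.stride)      // 12 — tightly packed
print(MemoryLayout<SIMD3<Float>>.stride)    // 16 — padded like MSL float3
```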


Good luck!

Michal

Well, I'm probably the ugly duckling.

I have experience with QD3D, Quartz, Core Animation, Cocoa drawing (and any number of OTHER GLs), drawing Béziers from my own home-brewed code, and limited experience with OpenGL. All self-taught, so the depth is only where I had to put effort in to get what I needed. I have a fairly good high-level understanding of GLs in general, a slippery grasp on OpenGL's approach to drawing (the weirdly inflected coding style is nausea-inducing, almost as bad as Adobe's plugin system), and a slightly firmer high-level understanding of how Metal SHOULD work. With Metal, implementation details are, uh... clear as mud.


I have developed some sense of "not getting it" vs. "it doesn't work," and I tend to hit the books when the books aren't yet written. I can remember when NSView was a fully documented class, with documentation about the behind-the-scenes, and I have been stymied by some of the stupidest things (Auto Layout, for example). I know I come off frustrated at times, and I am shockingly undereducated in some facets of development (debugging is a high art form that seems to always elude me). But I make up for my other shortcomings with persistence and observational skills.


That said, I think you hit the nail on the head re: Metal documentation. There's no on-ramp. And SceneKit... is a joke, unusable due to its overly specific format. I was thrilled to find MTKView, because it seemed that Apple realized something needs to exist in the space between Metal and SceneKit, for those of us who want to build UI with Metal. I believe they are working on it (or at least put a bookmark there and will come back to it), and it will get there... eventually. I'm just there now. But in defense of the docs... it's not like OpenGL has a decent on-ramp either. You get thrown into the shark tank and told to learn how to swim.


There are a handful of decent tutorials; they are all out of date to some extent, and every single one of them is mutually exclusive... one solution cannot be applied to a different tutorial.


Currently, I could be using a 2D drawing system for what I am trying to accomplish. I am using Metal to build a basic level of understanding that I can then use to bootstrap myself up to the next level later. I've attempted this... 3-4 times, from scratch, starting with tutorials and the WWDC videos (pretty much all of them; it's darned frustrating watching those guys rocket off into the distance without explaining anything, but I get some understanding from it).


Anyway. Moved on from triangles to cubes, which are not rendering either. Going to try something in between.

OK, this is truly messed up.

Moving to generating a cube instead of a triangle, it wasn't rendering. We don't get anything; same problem as I had in the beginning.


I deleted everything that was obfuscating the differences between the classes, and I found it.

I am creating a buffer from an array of Floats.

    var theFloats = [0,0,0,0,0,0,0,0,0]

That sort of thing. And for the vertex buffer, the triangle class's vertex Float data uses 4 values per vertex. With 4 values for each of the 3 vertices, that comes to 12 values in the data, versus the expected 9.


It doesn't work otherwise. As far as I can tell, it also renders properly (keeping track of which value is x, which is y, and which is z for each vertex).


WHAT?
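Sketched concretely, the finding looks like this (illustrative values; note also that the explicit [Float] annotation matters, because a bare integer literal array like the one above is inferred as [Int], whose 8-byte elements would silently give the buffer the wrong size and contents):

```swift
import Metal

// 3 vertices, 4 floats each (x, y, z, and a trailing w = 1.0):
// 12 floats total, where 9 (plain x, y, z) might be expected.
let triangle: [Float] = [
    -1.0, -1.0, 0.0, 1.0,   // vertex 0: x, y, z, w
     1.0, -1.0, 0.0, 1.0,   // vertex 1
     0.0,  1.0, 0.0, 1.0,   // vertex 2
]
let byteLength = triangle.count * MemoryLayout<Float>.stride
print(triangle.count, byteLength)   // 12 floats, 48 bytes
// vertexBuff = device.makeBuffer(bytes: triangle, length: byteLength, options: .storageModeShared)
```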

I think the buffer expects a 4x4 matrix.

The buffer isn't really "expecting" anything; it is just a bunch of bytes. The shader does. Please paste your shader code here.

I am not yet up to spec on Metal shader code; this is literally somebody else's code.

I plan on rewriting it as soon as I can make sense out of it.

    struct ColoredVertex
    {
        float4 position [[position]];
        float4 color;
    };

    struct Uniforms {
        float4x4 modelMatrix;
    };

    vertex VertexOut vertex_main(const device VertexIn* vertex_array [[ buffer(0) ]],
                                 const device Uniforms& uniforms    [[ buffer(1) ]],
                                 unsigned int vid [[ vertex_id ]]) {

        float4x4 mv_Matrix = uniforms.modelMatrix;

        VertexIn VertexIn = vertex_array[vid];

        VertexOut VertexOut;
        VertexOut.position = mv_Matrix * float4(VertexIn.position, 1);
        VertexOut.color = VertexIn.color;

        return VertexOut;
    }

    fragment float4 fragment_main(ColoredVertex vert [[stage_in]])
    {
        return vert.color;
    }


I found that the trailing value in the position has to be 1.0. This all seems to be by design, but I cannot fathom why that is so.

MikeAlpha,

don't bother with that.


I just refactored all of that code as part of a move to get transforms up and running, which forced me to adopt an inline vertex format [x,y,z,r,g,b,a,x,y,z...].

I had to quickly rewrite all of that stuff and follow through on a brand new shader, and it eliminated the 4x4 matrix, cleaned up some of my misunderstandings, and got me transforms.
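A sketch of what that inline (interleaved) format looks like on the CPU side, with illustrative data; the MSL indexing shown in the trailing comment is one possible matching scheme, not the actual shader from the thread:

```swift
// Interleaved layout: [x, y, z, r, g, b, a, x, y, z, ...]
// 7 floats per vertex, one buffer instead of separate position/color buffers.
let floatsPerVertex = 7   // 3 position + 4 color

// One red, one green, one blue corner of a triangle:
let interleaved: [Float] = [
    -1.0, -1.0, 0.0,   1.0, 0.0, 0.0, 1.0,
     1.0, -1.0, 0.0,   0.0, 1.0, 0.0, 1.0,
     0.0,  1.0, 0.0,   0.0, 0.0, 1.0, 1.0,
]
let vertexCount = interleaved.count / floatsPerVertex
print(vertexCount)   // 3
// A matching MSL vertex function could unpack per vertex_id, e.g.:
//   const device float *verts [[ buffer(0) ]];
//   float3 pos   = float3(verts[vid*7+0], verts[vid*7+1], verts[vid*7+2]);
//   float4 color = float4(verts[vid*7+3], verts[vid*7+4], verts[vid*7+5], verts[vid*7+6]);
```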


I'm still ping-ponging between tutorials, but I'm getting better at it. Thanks for your interest and help.

Yeah, and that code (you must have something elsewhere) won't even work. The VertexIn and VertexOut types are used, but not defined. It looks like VertexIn is

    struct { float3 position; float4 color; };

which has its own problems, BTW.


"trailing value in the position has to be 1.0. this all seems to be by design, but I cannot fathom why that is so."

Well, 3d graphics is dealing with, duh, 3d coordinates (vectors). A convenient (for some definitions of "convenient" :-) ) way of operating on vectors is multiplying them by matrices, which "describe" the operations you want (like scaling, rotation, translation, or a superposition of these). The problem is (I am treating linear algebra very informally) that with 3d vectors you have to use 3x3 matrices (again, skipping some stuff here), and a 3x3 matrix can "describe" scaling OR rotation in 3d, but not translation. To do translations in 3d, you need 4x4 matrices, and then you need 4-d vectors (you can't multiply a 3-vector and a 4x4 matrix). So there is a "fourth" coordinate added to vectors, the "w" coordinate, and it works like this:

    // x, y, z being your original "input" data
    float4 original_vector = float4(x, y, z, 1.0);

    float4 transformed_vector = transformation_matrix * original_vector;

    // then after all transformations, what the GPU finally does is:
    float3 final_coordinates = float3(transformed_vector.x / transformed_vector.w,
                                      transformed_vector.y / transformed_vector.w,
                                      transformed_vector.z / transformed_vector.w); // and this is the final 3d result


And so unless you're really good at this stuff and want to use some advanced tricks, you ALWAYS bring in 3 coordinates (or 2, if working in a 2d plane), set "w" = 1.0, and let the GPU worry about the rest. Of course you can send the 1.0s (fourth coordinate) from the host CPU, or just generate them in the shader; it doesn't matter.
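The same arithmetic can be checked CPU-side with Swift's simd module (a standalone sketch, no GPU required): w = 1 marks a position that a translation matrix moves, while w = 0 marks a direction that the same matrix leaves alone.

```swift
import simd

// A 4x4 translation matrix: identity with the offset in the fourth column.
var translate = matrix_identity_float4x4
translate.columns.3 = SIMD4<Float>(2.0, 3.0, 4.0, 1.0)   // translate by (2, 3, 4)

let point     = SIMD4<Float>(1.0, 1.0, 1.0, 1.0)   // w = 1: a position
let direction = SIMD4<Float>(1.0, 1.0, 1.0, 0.0)   // w = 0: a direction

let movedPoint = translate * point          // (3, 4, 5, 1) — translated
let movedDir   = translate * direction      // (1, 1, 1, 0) — unaffected

// The GPU's final perspective divide by w:
let final = SIMD3<Float>(movedPoint.x, movedPoint.y, movedPoint.z) / movedPoint.w
print(final)   // SIMD3<Float>(3.0, 4.0, 5.0)
```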