Using Indirect Command Buffers seems to break Xcode's ability to debug Metal shaders.
Attempting to debug a vertex shader yields errors like "Unable to connect to device (6)" or "Function argument 'vertexBuffer.0' does not have a valid vertex buffer binding at index '0'". I've tried Xcode 13.1 (13A1030d) and 13.2 beta 2 (13C5081f).
Is this a known problem/limitation of Xcode's Metal shader debugger?
Are there any workarounds for this?
Are there better ways to "debug" Metal shaders in general?
I thought my code was doing something bad/weird (I'm still new to Metal), but I found that even with the Apple Developer sample code Encoding Indirect Command Buffers on the CPU, Xcode cannot debug the shaders.
I've reduced a reproduction to a simple, single-file Swift macOS application that draws a triangle (see the GitHub repo link below).
Setting up Render Pipeline:
let desc = MTLRenderPipelineDescriptor()
desc.label = "RenderPipeline"
desc.vertexFunction = vertexFn
desc.fragmentFunction = fragFn
desc.colorAttachments[0].pixelFormat = COLOR_PIXEL_FORMAT
desc.supportIndirectCommandBuffers = true
desc.inputPrimitiveTopology = .triangle
let vertexDesc = MTLVertexDescriptor()
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.attributes[0].offset = 0
vertexDesc.attributes[0].format = .float4
vertexDesc.layouts[0].stride = MemoryLayout<VertexPosition>.stride
vertexDesc.layouts[0].stepRate = 1
vertexDesc.layouts[0].stepFunction = .perVertex
desc.vertexDescriptor = vertexDesc
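For completeness, the pipelineState used when encoding into the indirect command buffer below is just the compiled form of this descriptor; a minimal sketch (force-try only for brevity):
// Compile the descriptor above into the pipeline state referenced by the indirect commands.
let pipelineState = try! device.makeRenderPipelineState(descriptor: desc)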
Setting up Indirect Command Buffer:
let desc = MTLIndirectCommandBufferDescriptor()
desc.commandTypes = .draw
desc.inheritBuffers = false
desc.inheritPipelineState = false
desc.maxVertexBufferBindCount = 1
desc.maxFragmentBufferBindCount = 0
let buf = device.makeIndirectCommandBuffer(descriptor: desc, maxCommandCount: 1, options: MTLResourceOptions.storageModeManaged)!
buf.label = "IndirectCommandBuffer"
let renderEncoder = buf.indirectRenderCommandAt(0)
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(vertexFnArgBuffer, offset: 0, at: 0)
renderEncoder.drawPrimitives(.triangle, vertexStart: 0, vertexCount: 3, instanceCount: 1, baseInstance: 0)
Issue command:
let commandBuffer = commandQueue.makeCommandBuffer()!
commandBuffer.label = "@CommandBuffer"
let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: view.currentRenderPassDescriptor!)!
renderEncoder.label = "@RenderCommandEncoder"
renderEncoder.setViewport(drawableViewport)
renderEncoder.useResource(vertexFnArgBuffer, usage: .read, stages: .vertex)
renderEncoder.executeCommandsInBuffer(indirectCommandBuffer, range: 0..<1)
renderEncoder.endEncoding()
commandBuffer.present(view.currentDrawable!)
commandBuffer.commit()
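Regarding "better ways to debug": one alternative I can also try is triggering the GPU trace programmatically instead of via the Xcode capture button. A minimal sketch with MTLCaptureManager (the output file name is just a placeholder); I don't know yet whether it sidesteps the "Unable to connect to device (6)" error:
// Programmatically capture one frame to a .gputrace document on disk.
let captureManager = MTLCaptureManager.shared()
let captureDesc = MTLCaptureDescriptor()
captureDesc.captureObject = device
captureDesc.destination = .gpuTraceDocument
captureDesc.outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("icb-frame.gputrace")
try! captureManager.startCapture(with: captureDesc)
// ... encode and commit the frame exactly as above ...
captureManager.stopCapture()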
The Xcode project is pushed to this GitHub repo: peterwmwong/MetalGPUFrameCaptureDebugRepro. It includes GIF and MOV screen captures of the specific errors I encountered while attempting to debug the simple shader.
Any help would be appreciated!
Thanks.
I'm trying to use an Argument Buffer that contains a Buffer pointer, and noticed it crashes Xcode 13.2 (19585) while debugging (Frame Capture) on an MBP M1 Max (macOS 12.2.1):
Crashed Thread: 24 Dispatch queue: gputools.GPUMTLVariablesViewContentProvider.0x2d0638a40
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000a10
Exception Codes: 0x0000000000000001, 0x0000000000000a10
Exception Note: EXC_CORPSE_NOTIFY
I've created a small reproduction project (with screen recordings): https://github.com/peterwmwong/ArgumentBufferXCodeCrashRepro
It's essentially a stripped-down version of the Apple Developer sample code: https://developer.apple.com/documentation/metal/buffers/managing_groups_of_resources_with_argument_buffers?language=objc
Basically, this is the setup of the Argument Buffer (SceneArgumentBuffer) and Buffer pointer (Rectangle *) it references:
typedef struct Rectangle
{
    vector_float4 color;
    vector_float2 size;
} Rectangle;

struct SceneArgumentBuffer {
    device Rectangle *rects [[ id(SceneArgumentBufferIDRectangles) ]];
};

vertex VertexOut
vertexShader(
    uint instanceID [[ instance_id ]],
    uint vertexID [[ vertex_id ]],
    constant SceneArgumentBuffer &args [[ buffer(VertexBufferIndexArgumentBuffer) ]])
{
    const Rectangle rect = args.rects[instanceID];
    // ...
}
uint16_t numRects = 2;
_rectanglesBuffer = [_device newBufferWithLength:sizeof(Rectangle) * numRects
                                         options:MTLResourceStorageModeShared];
_rectanglesBuffer.label = @"Rects Buffer";

Rectangle * const rects = _rectanglesBuffer.contents;
rects[0].color = simd_make_float4(0.0, 1.0, 0.0, 1.0);
rects[0].size = simd_make_float2(0.75, 0.75);
rects[1].color = simd_make_float4(0.0, 0.0, 1.0, 1.0);
rects[1].size = simd_make_float2(0.5, 0.25);

// Argument buffer creation and encoding
id<MTLArgumentEncoder> argumentEncoder =
    [vertexFunction newArgumentEncoderWithBufferIndex:VertexBufferIndexArgumentBuffer];
_argumentBuffer = [_device newBufferWithLength:argumentEncoder.encodedLength options:0];
_argumentBuffer.label = @"Argument Buffer";
[argumentEncoder setArgumentBuffer:_argumentBuffer offset:0];
[argumentEncoder setBuffer:_rectanglesBuffer
                    offset:0
                   atIndex:SceneArgumentBufferIDRectangles];

// Encoding into a Render Command
id<MTLRenderCommandEncoder> renderEncoder = [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
[renderEncoder useResource:_rectanglesBuffer
                     usage:MTLResourceUsageRead
                    stages:MTLRenderStageVertex];
[renderEncoder setVertexBuffer:_argumentBuffer
                        offset:0
                       atIndex:VertexBufferIndexArgumentBuffer];
My application "works": it runs as expected, and reading the argument buffer and the referenced buffer yields the correct data to draw the screen correctly... it just stinks (a maintenance/development nightmare) that debugging is hosed. I can't debug shaders or look at pipeline statistics.
Is this supposed to work? Are there any weird Xcode/Metal environment variables or settings that I should be using to get this to work?
Any help/suggestions would be appreciated, thanks!
Is there a limit to the number of MTLIO load-texture commands that can safely be executed in a single command buffer?
When I try loading each of the 6 faces of a cube texture using a single command buffer, some of the faces end up with bad data. Basically...
let cmdBuf = ioCommandQueue.makeCommandBuffer()
for slice in 0...5 {
    cmdBuf.load(cubeTexture,
                slice: slice,
                level: 0,
                size: size,
                sourceBytesPerRow: bytesPerRow,
                sourceBytesPerImage: bytesPerImage,
                destinationOrigin: destOrigin,
                sourceHandle: fileHandles[slice],
                sourceHandleOffset: 0)
}
cmdBuf.commit()
cmdBuf.waitUntilCompleted()
/* This passes, even though my texture is messed up...? */
assert(cmdBuf.status == .complete)
// Also, no errors or warnings when Xcode Diagnostics / Metal Validation is turned on.
If I use multiple command buffers (one load texture command each), everything is fine...
for slice in 0...5 {
    // IMPORTANT: Moved inside the LOOP
    let cmdBuf = ioCommandQueue.makeCommandBuffer()
    cmdBuf.load(cubeTexture,
                slice: slice,
                level: 0,
                size: size,
                sourceBytesPerRow: bytesPerRow,
                sourceBytesPerImage: bytesPerImage,
                destinationOrigin: destOrigin,
                sourceHandle: fileHandles[slice],
                sourceHandleOffset: 0)

    // IMPORTANT: Moved inside the LOOP
    cmdBuf.commit()
    cmdBuf.waitUntilCompleted()
    assert(cmdBuf.status == .complete)
}
I'm assuming this is a bug (opened FB10582329)... or, at the very least, some documentation or a Metal Validation message would help in understanding what's going wrong and how to work around it.
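One middle ground I haven't verified: if MTLIOCommandBuffer's addBarrier() is meant to order operations within a single command buffer (an assumption on my part; independent per-slice loads shouldn't need ordering at all), a sketch like this might avoid the one-command-buffer-per-face workaround:
let cmdBuf = ioCommandQueue.makeCommandBuffer()
for slice in 0...5 {
    cmdBuf.load(cubeTexture,
                slice: slice,
                level: 0,
                size: size,
                sourceBytesPerRow: bytesPerRow,
                sourceBytesPerImage: bytesPerImage,
                destinationOrigin: destOrigin,
                sourceHandle: fileHandles[slice],
                sourceHandleOffset: 0)
    // Assumption: a barrier forces the previous load to finish before the next one starts.
    cmdBuf.addBarrier()
}
cmdBuf.commit()
cmdBuf.waitUntilCompleted()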
I've uploaded a reduced reproduction (single Swift file, command-line application, Xcode project) to this GitHub repository: https://github.com/peterwmwong/x-mtlio-load-multiple-textures
Thanks.
I tried with the latest Xcode 14 RC, but am still unable to extract a pipeline script from a harvested binary archive, as shown in the WWDC 2022 session video.
Here's what I've tried (runnable Xcode project on GitHub):
// Create a simple render pipeline
let device = MTLCreateSystemDefaultDevice()!
let lib = device.makeDefaultLibrary()!
let pipelineDesc = MTLRenderPipelineDescriptor()
pipelineDesc.vertexFunction = lib.makeFunction(name: "main_vertex")
pipelineDesc.fragmentFunction = lib.makeFunction(name: "main_fragment")
pipelineDesc.colorAttachments[0]?.pixelFormat = .bgra8Unorm
// Harvest GPU Archive
let archivePath = NSTemporaryDirectory().appending("harvested-archive.metallib")
let archiveDesc = MTLBinaryArchiveDescriptor()
let archive = try device.makeBinaryArchive(descriptor: archiveDesc)
try archive.addRenderPipelineFunctions(descriptor: pipelineDesc)
try archive.serialize(to: NSURL.fileURL(withPath: archivePath))
// In a terminal...
> xcrun metal-source -flatbuffers=json harvested-archive.metallib -o descriptors.mtlp-json
Unfortunately, metal-source produces a directory of metallibs and no pipeline JSON script in sight.
Did I miss something?
Has the process changed since WWDC 2022?
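One sanity check I can think of, to rule out the archive itself being the problem: reload the serialized .metallib and require the pipeline state to come from it via .failOnBinaryArchiveMiss (my assumption being that a successful creation here proves the pipeline was actually harvested):
// Reload the serialized archive from disk...
let reloadDesc = MTLBinaryArchiveDescriptor()
reloadDesc.url = NSURL.fileURL(withPath: archivePath)
let reloaded = try device.makeBinaryArchive(descriptor: reloadDesc)
// ...and require the pipeline to be found in it; this throws if the archive doesn't contain it.
pipelineDesc.binaryArchives = [reloaded]
_ = try device.makeRenderPipelineState(descriptor: pipelineDesc,
                                       options: .failOnBinaryArchiveMiss,
                                       reflection: nil)
If that succeeds, the archive looks fine and the problem would be on the metal-source side.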
Thanks.
With the Xcode 15 RC, the documentation for Metal Pipelines Script (man metal-pipelines-script) doesn't mention anything about defining a Mesh Render Pipeline (MTLMeshRenderPipelineDescriptor).
Is there a way to do offline compilation OR harvest binary archives for mesh shaders?
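For concreteness, this is the kind of pipeline I'd want to compile offline or harvest into a binary archive (the function names below are placeholders for this sketch):
let device = MTLCreateSystemDefaultDevice()!
let lib = device.makeDefaultLibrary()!

// Placeholder object/mesh/fragment functions standing in for my real shaders.
let meshDesc = MTLMeshRenderPipelineDescriptor()
meshDesc.objectFunction = lib.makeFunction(name: "object_main")
meshDesc.meshFunction = lib.makeFunction(name: "mesh_main")
meshDesc.fragmentFunction = lib.makeFunction(name: "fragment_main")
meshDesc.colorAttachments[0].pixelFormat = .bgra8Unorm

// What I'm looking for: a binary-archive or metal-pipelines-script equivalent of
// addRenderPipelineFunctions(descriptor:) that accepts this descriptor.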