Hi all,

I'm trying to do a "render to texture" (i.e. create a render pass that is separate from the main rendering code, with separate textures, etc.) and read back the depth texture so I can store it in an image in my application. This is supposed to be a one-time step when loading a new scene. I had found hints on Stack Overflow that one has to use MTLBlitCommandEncoder copyFromTexture because with recent releases the texture lives in private memory. So far so good.

My problem is that this only seems to work for large sizes. If the render resolution is, for example, 512x512 (or smaller), the copied data in the buffer looks partial or empty, as if the blit had occurred in the middle of rendering (2 of 10 drawables missing, or in a larger mesh there are gaps that look like triangles haven't been rasterized). If I keep the code exactly the same and increase the render size to 2048x2048, it works perfectly. Also, if I use the same code to copy the color attachment, it works every time. Finally, I used Instruments to verify that, yes, first the render encoder finishes and then the blit encoder runs.

The interesting part is: if I artificially call this code at the beginning of every frame and capture a frame in Xcode, it says "Your application created a command encoder but did not encode any work on it" for the line that creates the encoder, although just two lines later in the frame capture there indeed is the copyFromTexture call.

Anyone got an idea why this might happen? Any suggestion is much appreciated. Thanks,
Alex

P.S.: I'm running iOS 9.3.5 on an iPhone 6 and an iPad Pro. The implementation roughly looks like this (using non-multisampled rendering):

MTLRenderPassDescriptor * renderPass = [MTLRenderPassDescriptor renderPassDescriptor];

MTLTextureDescriptor * colorBufferDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm
                                                                                                  width:imageSize.getWidth()
                                                                                                 height:imageSize.getHeight()
                                                                                              mipmapped:NO];
colorBufferDescriptor.usage = MTLTextureUsageRenderTarget;
renderPass.colorAttachments[0].texture = [self.mtlDevice newTextureWithDescriptor:colorBufferDescriptor];
renderPass.colorAttachments[0].clearColor = MTLClearColorMake(0.0, 0.0, 0.0, 0.0);
renderPass.colorAttachments[0].loadAction = MTLLoadActionClear;

MTLTextureDescriptor * depthBufferDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatDepth32Float_Stencil8
                                                                                                  width:imageSize.getWidth()
                                                                                                 height:imageSize.getHeight()
                                                                                              mipmapped:NO];
depthBufferDescriptor.usage = MTLTextureUsageRenderTarget;
renderPass.depthAttachment.texture = [self.mtlDevice newTextureWithDescriptor:depthBufferDescriptor];
renderPass.depthAttachment.loadAction = MTLLoadActionClear;
renderPass.stencilAttachment.texture = renderPass.depthAttachment.texture;

id<MTLCommandBuffer> commandBuffer = [self.mtlCommandQueue commandBuffer];

// [...] <- doing render encoding here

id<MTLBuffer> depthImageBuffer = [self.mtlDevice newBufferWithLength:(4 * pixelCount)
                                                             options:MTLResourceOptionCPUCacheModeDefault];

id<MTLBlitCommandEncoder> blitCommandEncoder = [commandBuffer blitCommandEncoder];
blitCommandEncoder.label = @"Depth buffer to CPU blit";
[blitCommandEncoder copyFromTexture:renderPass.depthAttachment.texture
                        sourceSlice:0
                        sourceLevel:0
                       sourceOrigin:MTLOriginMake(0, 0, 0)
                         sourceSize:MTLSizeMake(imageSize.getWidth(), imageSize.getHeight(), 1)
                           toBuffer:depthImageBuffer
                  destinationOffset:0
             destinationBytesPerRow:(4 * imageSize.getWidth())
           destinationBytesPerImage:(4 * pixelCount)
                            options:MTLBlitOptionDepthFromDepthStencil];
[blitCommandEncoder endEncoding];

[commandBuffer commit];
[commandBuffer waitUntilCompleted];
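For completeness, after waitUntilCompleted returns I read the buffer back on the CPU roughly like this (just a sketch; the conversion into my application's own image type is omitted). It is the contents of this buffer that look partial or empty at the small render sizes:

// waitUntilCompleted has returned, so the blit is finished and the buffer is readable.
// MTLBlitOptionDepthFromDepthStencil copies only the depth part, one 32-bit float per pixel.
const float * depthData = (const float *)depthImageBuffer.contents;
NSUInteger width  = imageSize.getWidth();
NSUInteger height = imageSize.getHeight();
for (NSUInteger y = 0; y < height; ++y) {
    // destinationBytesPerRow was 4 * width, so rows are tightly packed
    const float * row = depthData + y * width;
    for (NSUInteger x = 0; x < width; ++x) {
        float depth = row[x]; // normalized depth value in [0, 1]
        // ... store 'depth' into the application-side image here ...
    }
}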