Strange artifacts when porting GLSL to Metal
I tried to port this shadertoy: https://www.shadertoy.com/view/4dX3zl to Metal with as few changes as possible. But there are small artifacts when I run it on a Mac or an iPhone: some voxels occasionally "jump" out of place. This happens with both the branchless and the ordinary version. The original shadertoy runs fine in a browser without any issues. I wonder what causes these artifacts and how I can get rid of them. Here is the Swift+Metal code, which you can run directly in a Playground or in the Swift Playgrounds app:

```swift
import MetalKit
import PlaygroundSupport

public let metalFunctions = """
#include <metal_stdlib>
using namespace metal;

constant constexpr int MAX_RAY_STEPS = 64;

float sdSphere(float3 p, float d) { return length(p) - d; }

float sdBox(float3 p, float3 b) {
    float3 d = abs(p) - b;
    return min(max(d.x, max(d.y, d.z)), 0.0) + length(max(d, 0.0));
}

bool getVoxel(int3 c) {
    float3 p = float3(c) + float3(0.5);
    float d = min(max(-sdSphere(p, 7.5), sdBox(p, float3(6.0))), -sdSphere(p, 25.0));
    return d < 0.0;
}

float2 rotate2d(float2 v, float a) {
    float sinA = sin(a);
    float cosA = cos(a);
    return float2(v.x * cosA - v.y * sinA, v.y * cosA + v.x * sinA);
}

kernel void shader(texture2d<float, access::write> out [[texture(0)]],
                   constant float& time [[buffer(1)]],
                   constant uint2& viewportSize [[buffer(0)]],
                   uint2 tid [[thread_position_in_grid]]) {
    if (tid.x >= viewportSize.x || tid.y >= viewportSize.y) return;

    float2 res = float2(viewportSize);
    float2 fragCoord = float2(tid);
    float2 screenPos = (fragCoord / res) * 2.0 - 1.0;

    float3 cameraDir = float3(0.0, 0.0, 0.8);
    float3 cameraPlaneU = float3(1.0, 0.0, 0.0);
    float3 cameraPlaneV = float3(0.0, 1.0, 0.0) * res.y / res.x;
    float3 rayDir = cameraDir + screenPos.x * cameraPlaneU + screenPos.y * cameraPlaneV;
    float3 rayPos = float3(0.0, 2.0 * sin(time * 2.7), -12.0);

    rayPos.xz = rotate2d(rayPos.xz, time);
    rayDir.xz = rotate2d(rayDir.xz, time);

    int3 mapPos = int3(floor(rayPos + 0.5));
    float3 deltaDist = abs(float3(length(rayDir)) / rayDir);
    int3 rayStep = int3(sign(rayDir));
    float3 sideDist = (sign(rayDir) * (float3(mapPos) - rayPos) + (sign(rayDir) * 0.5) + 0.5) * deltaDist;
    bool3 mask;

    for (int i = 0; i < MAX_RAY_STEPS; i++) {
        if (getVoxel(mapPos)) continue;
        mask = sideDist.xyz <= min(sideDist.yzx, sideDist.zxy);
        sideDist += float3(mask) * deltaDist;
        mapPos += int3(float3(mask)) * rayStep;
    }

    float3 color;
    if (mask.x) { color = float3(0.5); }
    if (mask.y) { color = float3(1.0); }
    if (mask.z) { color = float3(0.75); }
    // float3 color = .5;
    out.write(float4(color, 1), tid);
}
"""

class MainView: MTKView {
    var time: Float = 0
    var viewportSize: simd_uint2 = [0, 0]
    var commandQueue: MTLCommandQueue!
    var computePass: MTLComputePipelineState!

    init(frame: CGRect) {
        super.init(frame: frame, device: MTLCreateSystemDefaultDevice())
        self.framebufferOnly = false
        self.commandQueue = device?.makeCommandQueue()
        var library: MTLLibrary!
        do {
            library = try device?.makeLibrary(source: metalFunctions, options: nil)
        } catch {
            print(error)
        }
        let shader = library?.makeFunction(name: "shader")
        do {
            computePass = try device?.makeComputePipelineState(function: shader!)
        } catch {
            print(error)
        }
    }

    required init(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

extension MainView {
    override func draw(_ dirtyRect: CGRect) {
        guard let drawable = self.currentDrawable else { return }
        viewportSize = [UInt32(drawable.texture.width), UInt32(drawable.texture.height)]
        time += 0.01

        let commandbuffer = commandQueue.makeCommandBuffer()
        let computeCommandEncoder = commandbuffer?.makeComputeCommandEncoder()
        computeCommandEncoder?.setComputePipelineState(computePass)
        computeCommandEncoder?.setTexture(drawable.texture, index: 0)
        computeCommandEncoder?.setBytes(&viewportSize, length: MemoryLayout<simd_uint2>.stride, index: 0)
        computeCommandEncoder?.setBytes(&time, length: MemoryLayout<Float>.stride, index: 1)

        let w = computePass.threadExecutionWidth
        let h = computePass.maxTotalThreadsPerThreadgroup / w
        let threadsPerThreadGroup = MTLSize(width: w, height: h, depth: 1)
        let threadgroupsPerGrid = MTLSize(width: drawable.texture.width / w + 1,
                                          height: drawable.texture.height / h + 1,
                                          depth: 1)
        computeCommandEncoder?.dispatchThreadgroups(threadgroupsPerGrid, threadsPerThreadgroup: threadsPerThreadGroup)
        computeCommandEncoder?.endEncoding()
        commandbuffer?.present(drawable)
        commandbuffer?.commit()
    }
}

let frame = CGRect(x: 0, y: 0, width: 1000, height: 1000)
PlaygroundPage.current.setLiveView(MainView(frame: frame))
```
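An editor's aside, hedged: unlike GLSL in a browser, Metal compiles runtime-built libraries with fast math enabled by default, and its precision shortcuts (in `sign`, `floor`, or the trigonometry inside `rotate2d`) are a commonly suggested culprit for single-voxel "jumps" in DDA traversals. A minimal sketch of ruling that out, assuming the playground setup above, is to compile the library with fast math disabled:

```swift
// Sketch (assumption: fast-math precision shortcuts cause the jumps).
// MTLCompileOptions.fastMathEnabled defaults to true for runtime compilation.
let options = MTLCompileOptions()
options.fastMathEnabled = false
library = try device?.makeLibrary(source: metalFunctions, options: options)
```

If the artifacts disappear with this change, the fix can stay, at some performance cost, or the shader can be rewritten to avoid the precision-sensitive operations.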
Replies: 2 · Boosts: 0 · Views: 688 · May ’23
Distance to bounding_box in Metal ray tracing
I wonder if there is an easy way to get the distance to a `bounding_box` primitive from the `raytracing::intersector` function. As far as I can tell (though the docs are not very clear on this), if you provide no intersection function to `intersector.intersect`, it won't detect any intersections, even with opaque primitives or `force_opacity(opaque)`. So if you just need the distance to a bounding-box primitive in your scene, you have to implement the intersection logic yourself in an intersection function, since the `distance` parameter is only available for the `triangle` primitive. And that is not cheap. Am I right that there is no way to get the distance to a bounding-box primitive directly from Metal?
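For what it's worth, if the cost concern is about intersecting the actual geometry inside each box, one workaround is to make the intersection function itself do only a cheap ray/AABB slab test and report the entry distance as the hit distance. The sketch below is unverified against the docs; the `AABB` struct layout, the buffer binding, and the function name are all assumptions:

```metal
#include <metal_stdlib>
#include <metal_raytracing>
using namespace metal;
using namespace metal::raytracing;

// Hypothetical per-primitive layout matching the acceleration structure input.
struct AABB { float3 lo; float3 hi; };

struct BoxResult {
    bool accept    [[accept_intersection]];
    float distance [[distance]];
};

// Sketch: slab test reporting the ray's entry distance into the box,
// instead of intersecting the geometry the box contains.
[[intersection(bounding_box)]]
BoxResult boxDistance(float3 origin            [[origin]],
                      float3 direction         [[direction]],
                      float minDistance        [[min_distance]],
                      float maxDistance        [[max_distance]],
                      uint primitiveIndex      [[primitive_id]],
                      device const AABB *boxes [[buffer(0)]]) {
    const device AABB &box = boxes[primitiveIndex];
    float3 invDir = 1.0f / direction;
    float3 t0 = (box.lo - origin) * invDir;
    float3 t1 = (box.hi - origin) * invDir;
    float tNear = max3(min(t0.x, t1.x), min(t0.y, t1.y), min(t0.z, t1.z));
    float tFar  = min3(max(t0.x, t1.x), max(t0.y, t1.y), max(t0.z, t1.z));

    BoxResult result;
    result.distance = max(tNear, minDistance);
    result.accept = tNear <= tFar && tFar >= minDistance && tNear <= maxDistance;
    return result;
}
```

This avoids per-triangle work, but it still pays the cost of invoking an intersection function, so it does not answer whether a function-free path exists.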
Replies: 0 · Boosts: 1 · Views: 353 · May ’23
How to change storage mode of MTKView's depth texture
In Metal's profiler I get this suggestion:

> Texture:0x12880c010 "MTKView Depth" has storage mode 'Private' but was a transient render target accessed exclusively by the GPU. Consider changing the storage mode to 'Memoryless'.

This texture is created automatically by MTKView when the `depthStencilPixelFormat` property is set to a meaningful value. It is even possible to control the texture usage by setting the `depthStencilAttachmentTextureUsage` property, but I can't see how to change the storage mode of this texture. It seems the MTKView should set the right storage mode automatically, as this excerpt from the documentation suggests:

> ...the view automatically creates those textures for you and configures them as part of any render passes that the view creates.

But in my case it clearly fails to take into account that my pipeline never reads from this texture. So the question is: how can I change the storage mode of MTKView's depth texture to `.memoryless`?
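If memory serves, recent SDKs added a dedicated knob for exactly this: `MTKView.depthStencilStorageMode`, available from macOS 13 / iOS 16. Treat this as a sketch to verify against the headers rather than a confirmed answer:

```swift
import MetalKit

let view = MTKView(frame: .zero, device: MTLCreateSystemDefaultDevice())
view.depthStencilPixelFormat = .depth32Float

// Assumption: depthStencilStorageMode exists on this deployment target.
// Memoryless is only valid when nothing reads or stores the depth texture
// outside the render pass, which matches the profiler's "transient" note.
if #available(iOS 16.0, macOS 13.0, *) {
    view.depthStencilStorageMode = .memoryless
}
```

On older targets the usual workaround is to stop using MTKView's automatic depth texture and attach a manually created `.memoryless` texture to the render pass descriptor instead.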
Replies: 2 · Boosts: 0 · Views: 621 · May ’23
Render a SwiftUI view into an image with alpha
I need to render a SwiftUI view into an image with opacity, so that the empty space of the view stays transparent when I layer the image over a background. I use this code for the conversion:

```swift
func convertViewToData<V>(view: V, size: CGSize) -> UIImage? where V: View {
    guard let rootVC = UIApplication.shared.windows.first?.rootViewController else { return nil }
    let imageVC = UIHostingController(rootView: view.edgesIgnoringSafeArea(.all))
    imageVC.view.frame = CGRect(origin: .zero, size: size)
    rootVC.view.insertSubview(imageVC.view, at: 0)
    let uiImage = imageVC.view.asImage(size: size)
    imageVC.view.removeFromSuperview()
    return uiImage
}

extension UIView {
    func asImage(size: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.opaque = false
        return UIGraphicsImageRenderer(bounds: bounds, format: format).image { context in
            layer.render(in: context.cgContext)
        }
    }
}

extension View {
    func convertToImage(size: CGSize) -> UIImage? {
        convertViewToData(view: self, size: size)
    }
}
```

And this code to test the resulting image:

```swift
struct ContentView: View {
    var view: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
        }
    }

    var body: some View {
        HStack {
            view
            Image(uiImage: view.convertToImage(size: .init(width: 200, height: 200))!)
        }
        .background(LinearGradient(stops: [.init(color: .green, location: 0),
                                           .init(color: .red, location: 1)],
                                   startPoint: .bottom, endPoint: .top))
    }
}
```

This code produces two instances of the text: the one on the left is layered over the gradient background, while the one on the right sits on a black background. Clearly, the transparent parts are replaced with black in the image. I figured out that the alpha channel is discarded somewhere in `convertViewToData`. Is there any way to make it preserve the alpha channel?
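Two hedged suggestions from an editor. First, `UIHostingController`'s backing view is opaque with a system background by default, and that opaque layer may be what fills the alpha with black; clearing it before snapshotting is a common fix. Second, on iOS 16+ SwiftUI's own `ImageRenderer` rasterizes the view directly and preserves transparency. A sketch of both, under those assumptions:

```swift
import SwiftUI

// Option 1: make the hosting view non-opaque before rendering it.
// Assumption: the opaque hosting layer is what paints the black background.
let imageVC = UIHostingController(rootView: Text("Hello").padding())
imageVC.view.backgroundColor = .clear
imageVC.view.isOpaque = false

// Option 2 (iOS 16+): let SwiftUI rasterize the view itself,
// skipping UIKit snapshotting entirely.
if #available(iOS 16.0, *) {
    let renderer = ImageRenderer(content: Text("Hello").padding())
    renderer.scale = UIScreen.main.scale
    let uiImage = renderer.uiImage
}
```

Option 2 also removes the need to temporarily insert the hosting view into the window hierarchy.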
Replies: 2 · Boosts: 0 · Views: 1.9k · Feb ’23