Render advanced 3D graphics and perform data-parallel computations on graphics processors using Metal.

Posts under Metal tag

200 Posts
Each post below is listed with its replies, boosts, views, and most recent activity.

Usage of colorCurves CIFilter
How can I use my RGB curve points:

let redCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.152), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let greenCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.247, y: 0.196), CIVector(x: 0.5, y: 0.5), CIVector(x: 1, y: 1)]
let blueCurve = [CIVector(x: 0, y: 0), CIVector(x: 0.235, y: 0.184), CIVector(x: 0.466, y: 0.466), CIVector(x: 1, y: 1)]

in the colorCurves filter, which I found in the Apple docs?

func colorCurves(inputImage: CIImage) -> CIImage {
    let colorCurvesEffect = CIFilter.colorCurves()
    colorCurvesEffect.inputImage = inputImage
    colorCurvesEffect.curvesDomain = CIVector(x: 0, y: 1)
    colorCurvesEffect.curvesData = Data(
        bytes: [Float32]([
            0.0, 0.0, 0.0,
            0.8, 0.8, 0.8,
            1.0, 1.0, 1.0
        ]), count: 36)
    colorCurvesEffect.colorSpace = CGColorSpaceCreateDeviceRGB()
    return colorCurvesEffect.outputImage!
}

(One way to pack these points is sketched below.)
Replies: 0 · Boosts: 0 · Views: 75 · Activity: 3d
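A minimal sketch of one way to feed control points like these to CIColorCurves. It assumes, based on the data-oriented API shown above, that curvesData is read as evenly spaced samples per channel across curvesDomain, interleaved R, G, B per sample; because the x coordinates of the points above are not evenly spaced, the sketch resamples them with a small hypothetical linear-interpolation helper (sampleCurve). Treat the sample count and the interpolation as starting points, not as the filter's documented contract.

import CoreImage
import CoreImage.CIFilterBuiltins

// Hypothetical helper: linearly interpolate a curve's y value at x,
// given control points sorted by ascending x.
func sampleCurve(_ curve: [CIVector], at x: CGFloat) -> Float32 {
    for i in 1..<curve.count where x <= curve[i].x {
        let p0 = curve[i - 1]
        let p1 = curve[i]
        let t = (x - p0.x) / max(p1.x - p0.x, .ulpOfOne)
        return Float32(p0.y + t * (p1.y - p0.y))
    }
    return Float32(curve.last?.y ?? 0)
}

func colorCurves(inputImage: CIImage,
                 red: [CIVector], green: [CIVector], blue: [CIVector],
                 sampleCount: Int = 8) -> CIImage? {
    // Resample each channel at evenly spaced x positions and interleave R, G, B per sample.
    var samples: [Float32] = []
    for i in 0..<sampleCount {
        let x = CGFloat(i) / CGFloat(sampleCount - 1)
        samples.append(sampleCurve(red, at: x))
        samples.append(sampleCurve(green, at: x))
        samples.append(sampleCurve(blue, at: x))
    }

    let filter = CIFilter.colorCurves()
    filter.inputImage = inputImage
    filter.curvesDomain = CIVector(x: 0, y: 1)
    filter.curvesData = samples.withUnsafeBufferPointer { Data(buffer: $0) } // sampleCount * 3 * 4 bytes
    filter.colorSpace = CGColorSpaceCreateDeviceRGB()
    return filter.outputImage
}

Calling colorCurves(inputImage:red:green:blue:) with the three arrays above and rendering the output through a CIContext is a quick way to check the result against a reference curves adjustment.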
Swift playground + Metal crashes on Swift 6
The following code (from "Metal by Tutorials") crashes (SIGSEGV in lldb-rpc-server) when run as Swift 6, but runs correctly when run as Swift 5:

import PlaygroundSupport
import MetalKit

print("start")
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("GPU is not supported")
}
let frame = CGRect(x: 0, y: 0, width: 600, height: 600)
let view = MTKView(frame: frame, device: device)
view.clearColor = MTLClearColor(red: 1, green: 1, blue: 0.8, alpha: 1)

let allocator = MTKMeshBufferAllocator(device: device)
let mdlMesh = MDLMesh(sphereWithExtent: [0.75, 0.75, 0.75],
                      segments: [100, 100],
                      inwardNormals: false,
                      geometryType: .triangles,
                      allocator: allocator)
let mesh = try MTKMesh(mesh: mdlMesh, device: device)

guard let commandQueue = device.makeCommandQueue() else {
    fatalError("Could not create a command queue")
}

let shader = """
#include <metal_stdlib>
using namespace metal;

struct VertexIn {
    float4 position [[attribute(0)]];
};

vertex float4 vertex_main(const VertexIn vertex_in [[stage_in]]) {
    return vertex_in.position;
}

fragment float4 fragment_main() {
    return float4(1, 0, 0, 1);
}
"""

print("A")
let library = try device.makeLibrary(source: shader, options: nil)
let vertexFunction = library.makeFunction(name: "vertex_main")
let fragmentFunction = library.makeFunction(name: "fragment_main")

let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
pipelineDescriptor.vertexFunction = vertexFunction
pipelineDescriptor.fragmentFunction = fragmentFunction
print("X")
pipelineDescriptor.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(mesh.vertexDescriptor)
let pipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)

guard let commandBuffer = commandQueue.makeCommandBuffer(),
      let renderPassDescriptor = view.currentRenderPassDescriptor,
      let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor)
else { fatalError() }

renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.setVertexBuffer(mesh.vertexBuffers[0].buffer, offset: 0, index: 0)

guard let submesh = mesh.submeshes.first else { fatalError() }
renderEncoder.drawIndexedPrimitives(type: .triangle,
                                    indexCount: submesh.indexCount,
                                    indexType: submesh.indexType,
                                    indexBuffer: submesh.indexBuffer.buffer,
                                    indexBufferOffset: 0)
renderEncoder.endEncoding()

guard let drawable = view.currentDrawable else { fatalError() }
commandBuffer.present(drawable)
commandBuffer.commit()
print("test")

PlaygroundPage.current.liveView = view

Crash report: https://gist.githubusercontent.com/tumdum/8aa53bc806619c0d21c93a55fae07937/raw/370b00c07b08fff8856f9fc678de9888faa8d06e/crash.log

I'm on macOS 15.1.1 (24B2091) + Xcode 16.2 (16C5032a).
Replies: 0 · Boosts: 0 · Views: 82 · Activity: 1w
How to save a point cloud in the sample code "Capturing depth using the LiDAR camera" with the photoOutput
Hello dear community, I have the Apple sample code "CapturingDepthUsingLiDAR" to access the LiDAR on my iPhone 12 Pro. My goal is to use the photo output to generate a point cloud from a single capture and then save it as a PLY file. So far I have tested different approaches to create a .ply file from the depth map, the intrinsic camera data, and the RGBA values. Unfortunately I have had no success so far; the result has always been an incorrect point cloud. My question is whether there are already approaches to this and whether anyone has experience with it. (A rough back-projection sketch follows at the end of this post.) Thank you very much in advance!
Replies: 2 · Boosts: 0 · Views: 190 · Activity: 1w
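A rough back-projection sketch, not taken from the sample project: it assumes the AVDepthData delivered by the photo output carries cameraCalibrationData (i.e. calibration delivery was enabled on the capture), converts the depth map to Float32, scales the intrinsics from intrinsicMatrixReferenceDimensions to the depth map's resolution, and writes a geometry-only ASCII PLY so the cloud can be sanity-checked in a viewer before colors are added. The column/row indexing of intrinsicMatrix follows the usual simd column-major convention.

import AVFoundation
import simd

func writePLY(from depthData: AVDepthData, to url: URL) throws {
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    guard let calibration = depth.cameraCalibrationData else { return }
    let map = depth.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    let base = CVPixelBufferGetBaseAddress(map)!

    // Intrinsics are expressed at the reference dimensions; rescale them to the depth map size.
    let refSize = calibration.intrinsicMatrixReferenceDimensions
    let K = calibration.intrinsicMatrix
    let sx = Float(width) / Float(refSize.width)
    let sy = Float(height) / Float(refSize.height)
    let fx = K[0][0] * sx, fy = K[1][1] * sy
    let cx = K[2][0] * sx, cy = K[2][1] * sy

    var points: [SIMD3<Float>] = []
    for v in 0..<height {
        let row = base.advanced(by: v * rowBytes).assumingMemoryBound(to: Float32.self)
        for u in 0..<width {
            let z = row[u]
            guard z.isFinite, z > 0 else { continue }
            // Pinhole back-projection; depth is taken here as distance along the optical axis.
            let x = (Float(u) - cx) * z / fx
            let y = (Float(v) - cy) * z / fy
            points.append(SIMD3<Float>(x, y, z))
        }
    }

    var ply = "ply\nformat ascii 1.0\nelement vertex \(points.count)\n"
    ply += "property float x\nproperty float y\nproperty float z\nend_header\n"
    for p in points { ply += "\(p.x) \(p.y) \(p.z)\n" }
    try ply.write(to: url, atomically: true, encoding: .ascii)
}

If the cloud still looks wrong after this, the next things worth checking are the orientation of the depth map relative to the photo and whether the depth values are in meters as expected.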
metal-cpp syntax for MTL::Buffer float2 parameter
I'm trying to pass a buffer of float2 items from the CPU to the GPU. In the kernel, I can declare a parameter for the buffer, for example:

device const float2* values

How do I specify float2 as the type for the MTL::Buffer? I managed to get the code to work by "cheating": defining a simple class that has the same data members as a float2, but there is probably a better way.

class Coord_f {
public:
    float x{0.0f};
    float y{0.0f};
};

then allocating like this:

NS::TransferPtr(device->newBuffer(n_elements * sizeof(Coord_f), MTL::ResourceStorageModeManaged))

The headers for metal-cpp do not appear to define vector types like float2, but I'm doubtless missing something. Thanks.
Replies: 2 · Boosts: 0 · Views: 189 · Activity: 2w
Texture Definitions for MPSSVGF Denoise
I am trying to use the SVGF denoiser to denoise my ray-traced shadows (and later other textures as well). I do get a smoothed image, but with wonky denoising. I need the depth-normal textures and motion textures for the SVGF and assume that these are badly filled in my case. However, neither in the linked documentation nor in the WWDC19 video can I find how they should be defined. I am looking for answers to the following:

Is depth in the red or the alpha channel of the depth-normal texture?
Are the normals in screen space?
Is depth linear? Is it distance or the z coordinate in view space? Or even logarithmically scaled, or something else?
Are the motion vectors supposed to be in pixels per frame? What is the orientation of the axes? Is y up or down?
Are there other restrictions on the formats?

The linked code also did not help me (I have not found any SVGF in it so far; also, all the code is in Objective-C++, not Swift, but that's a different topic). So how should I fill these textures? Can someone point me to documentation where these kinds of questions are answered?
Replies: 0 · Boosts: 0 · Views: 217 · Activity: 3w
MetalTools.framework Missing/Corrupted
Like I said in the title, it looks like MetalTools.framework is missing or corrupted. I think I saw that the symbolic link was broken. The items look like aliases in the Finder, but I can't find the original. This was a problem with Ventura (using the last compatible Xcode version) and with Sequoia 15.2 (Xcode 16.2). I didn't use Xcode before that. Note that none of my apps need the Metal API (I don't think). I only noticed it when Xcode gave an error regarding Metal. Sorry this is so long; I hope the Terminal info will help. I don't want to reinstall Sequoia, and this has been a problem since at least Ventura. Recommendations? (A quick Metal device check that sidesteps this framework is sketched at the end of this post.)

ls -l /System/Library/PrivateFrameworks/MetalTools.framework/
total 0
lrwxr-xr-x  1 root wheel  27 Dec  7 01:11 MetalTools -> Versions/Current/MetalTools
lrwxr-xr-x  1 root wheel  26 Dec  7 01:11 Resources -> Versions/Current/Resources
drwxr-xr-x  4 root wheel 128 Dec  7 01:11 Versions

ls -la /System/Library/PrivateFrameworks/MetalTools.framework/
total 0
drwxr-xr-x     5 root wheel   160 Dec  7 01:11 .
drwxr-xr-x  1885 root wheel 60320 Dec  7 01:11 ..
lrwxr-xr-x     1 root wheel    27 Dec  7 01:11 MetalTools -> Versions/Current/MetalTools
lrwxr-xr-x     1 root wheel    26 Dec  7 01:11 Resources -> Versions/Current/Resources
drwxr-xr-x     4 root wheel   128 Dec  7 01:11 Versions

codesign -v /System/Library/PrivateFrameworks/MetalTools.framework/MetalTools
/System/Library/PrivateFrameworks/MetalTools.framework/MetalTools: No such file or directory

ls -la /System/Library/PrivateFrameworks/MetalTools.framework/Versions/
total 0
drwxr-xr-x  4 root wheel 128 Dec  7 01:11 .
drwxr-xr-x  5 root wheel 160 Dec  7 01:11 ..
drwxr-xr-x  4 root wheel 128 Dec  7 01:11 A
lrwxr-xr-x  1 root wheel   1 Dec  7 01:11 Current -> A

ls -la /System/Library/PrivateFrameworks/MetalTools.framework/Versions/A/
total 0
drwxr-xr-x   4 root wheel 128 Dec  7 01:11 .
drwxr-xr-x   4 root wheel 128 Dec  7 01:11 ..
drwxr-xr-x  10 root wheel 320 Dec  7 01:11 Resources
drwxr-xr-x   3 root wheel  96 Dec  7 01:11 _CodeSignature

Note: according to ChatGPT, an entry like "-rwxr-xr-x 1 root wheel ... MetalTools" should appear in the listing above.

system_profiler SPDisplaysDataType shows Intel UHD Graphics 630 and AMD Radeon Pro 5500M (including "Metal Support: Metal 3").

Playground code => "Metal is supported." Default device: Apple iOS simulator GPU.

Thanks,
Ashley
Replies: 3 · Boosts: 0 · Views: 292 · Activity: 2w
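A note and a quick check, offered as a sketch rather than a diagnosis of the Xcode error: on modern macOS the binaries of system frameworks live in the dyld shared cache rather than as separate files on disk, so codesign reporting "No such file or directory" for the framework binary is expected on its own. A small Swift snippet run natively on the Mac (a command-line tool or a macOS playground, not an iOS-simulator playground, which is why the earlier test reported "Apple iOS simulator GPU") can confirm that Metal itself sees both GPUs:

import Metal

// List every Metal device visible to this Mac (MTLCopyAllDevices is macOS-only).
let devices = MTLCopyAllDevices()
if devices.isEmpty {
    print("No Metal devices found")
}
for device in devices {
    print("\(device.name) - low power: \(device.isLowPower), removable: \(device.isRemovable)")
}

On this machine that should list both the Intel UHD Graphics 630 and the AMD Radeon Pro 5500M.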
[visionOS] How to render side-by-side stereo video?
I want to render a 3D/stereoscopic video in an Apple Vision Pro window using RealityKit/RealityView. The video is left-right stereo. The straightforward approach would be to spawn a quad and give it a custom Shader Graph material that has a CameraIndexSwitch, which chooses between the right and left textures. https://i.sstatic.net/XawqjNcg.png The issue I have here is that I have to extract the video frames from my AVSampleBufferVideoRenderer. This should work okay, but not if I'm playing FairPlay content. So my question is: how do I render stereo FairPlay videos in a SwiftUI RealityView?
Replies: 0 · Boosts: 0 · Views: 223 · Activity: 3w
Performance Regression: In iOS 18.2, 3D rotation in volume rendering is not as smooth as in iOS 18.1
We are a team of engineers working on an app intended to visualize medical images. The situations where the app is used involve time-critical decision making for acute clinical conditions. Stability and performance are of utmost importance and can directly help timely treatment. The app uses multiple libraries and tools such as vtk, webgl, opengl, webkit, gl-matrix, etc. The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application. We have taken reference from the cornerstone.js code, and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d

Steps to reproduce:
Load the above-mentioned test example on an iPhone running 18.2 using Safari.
Perform volume rendering using the provided dataset.
Measure the time taken by the volume for each rotate or drag action.
Repeat the same steps on an iPhone running 18.1 for comparison.

Additional information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)

I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know. Thank you.
Replies: 1 · Boosts: 2 · Views: 250 · Activity: 2d
Performance Regression: iOS 18.2 Takes More Time to Render Volumes Than iOS 18.1
Hello, I am experiencing a performance regression in my application when rendering volumes on iPhone. Specifically, I have noticed that iOS 18.2 takes significantly more time for each render cycle compared to iOS 18.1.

Details:
Affected versions: iOS 18.2; iOS 18.1 (baseline for comparison)
Issue description: In iOS 18.2, the time taken to render volumes has increased compared to iOS 18.1. This performance regression is affecting the user experience in my application.
Test example: https://www.cornerstonejs.org/live-examples/volumeviewport3d

Steps to reproduce:
Load the above test example on an iPhone running 18.2 using Safari.
Perform volume rendering using the provided dataset.
Measure the time taken by the volume for each rotate or drag action.
Repeat the same steps on an iPhone running 18.1 for comparison.

Additional information:
Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)

I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know. Thank you.
Replies: 1 · Boosts: 0 · Views: 230 · Activity: 3w
RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085 I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
Replies: 0 · Boosts: 0 · Views: 240 · Activity: 3w
How to use imageblock_slice
Is there a working example of imageblock_slice with implicit layout somewhere? I get a compilation error when I write this:

imageblock_slice color_slice = img_blk.slice(frag->color);

Error:
No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'

It seems the syntax has changed since the Imageblocks presentation (https://developer.apple.com/videos/play/tech-talks/603/). I tried supplying the struct type of the image block between <> but it still does not work.
Replies: 1 · Boosts: 0 · Views: 282 · Activity: 4w
Tile Shaders performance when writing to tile texture vs. resolve texture
I am working on a custom resolve tile shader for a client. I see a big difference in performance depending on where we write to:

1. the resolve texture of the color attachment
2. a read-write tile shader texture set via [renderEncoder setTileTexture: myResolvedTexture]

Option 2 is more than twice as slow as option 1. Our compute shader writes to 4 UAVs, so using only the resolve texture entry is not possible. Why is there such a difference when no more data is being written? Can option 2 be as fast as option 1? I can demonstrate the issue in a modified version of the Multisample code sample.
Replies: 0 · Boosts: 0 · Views: 152 · Activity: Dec ’24
M1 GPU violates atomic_thread_fence across threadgroups
I have an M1 Pro with a 16-core GPU. When I run a shader with 8193 threads, atomic_thread_fence is violated across the boundary between thread 8191 (the last thread in the 7th threadgroup) and 8192 (the first thread in the 9th threadgroup). I've attached the Metal and Swift files, but I'll repost the relevant kernel here. It's a function that launches N threads to iterate through a binary tree from the leaves, where the first thread to reach a parent terminates and the second one populates it with the sum of the node's two children.

// clang-format off
void sum(device const int& size,
         device const int* __restrict__ in,
         device int* __restrict__ out,
         device atomic_int* visited,
         uint i [[thread_position_in_grid]]) {
  // clang-format on
  int val = in[i];
  uint cur = (size + i - 1);
  out[cur] = val;
  atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
  cur = (cur - 1) / 2;
  int proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  while (proceed == 1) {
    uint left = 2 * cur + 1;
    uint right = 2 * cur + 2;
    uint val_left = out[left];
    uint val_right = out[right];
    uint val_cur = val_left + val_right;
    out[cur] = val_cur;
    if (cur == 0) {
      break;
    }
    cur = (cur - 1) / 2;
    atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
    proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  }
}

What I'm observing is that thread 8192 hits the atomic_fetch_add first and terminates, while thread 8191 hits it second (observes that thread 8192 had incremented it by 1) and proceeds into the loop. Thread 8191 reads out[16383] (which it populated with 8191) and out[16384] (which thread 8192 populated with 8192 prior to the atomic_thread_fence). Instead of reading 8192 from out[16384], though, it reads 0. Maybe I'm missing something, but this seems like a pretty clear violation of the atomic_thread_fence, which (I thought) was supposed to guarantee that the write from thread 8192 to out[16384] would be visible to any thread observing the effects of the following atomic_fetch_add. Is atomic_fetch_add not a store operation? Modifying it to an atomic_store or atomic_exchange still results in the bug. Adding another atomic_thread_fence between the atomic_fetch_add and the reading of out also doesn't change anything. I only begin to observe this on grid sizes of 8193 and upwards. That's 9 threadgroups per grid, which I assume could be related to my M1 Pro GPU having 16 cores. Running the same example on an A17 Pro GPU doesn't show any of this behavior up through a tested grid size of 4194303 (2^22-1), at which point testing larger grid sizes starts to run into other issues, so I can't test anything larger. Removing the atomic_thread_fences on both the M1 and A17 causes the test to fail at much smaller grid sizes, as expected.

sum.metal
main.swift
Replies: 2 · Boosts: 0 · Views: 197 · Activity: Dec ’24
Metal GPU capture
I'm trying to get performance information in a Metal C++ program. I'm using Xcode 14.2. In the Counters tab, only "No Data" appears. What could be the problem, and what am I doing wrong? The Metal, MetalKit, and AppKit frameworks are linked. The code is taken from the "Learn Metal with C++" section of the Apple website.
Replies: 1 · Boosts: 0 · Views: 208 · Activity: Dec ’24
How to Train and Deploy PyTorch Models on Apple Hardware: A Unified Path for Deep ML Practice on Core ML?
Submitted as: FB16052050

I am looking to adopt machine learning in a more granular manner, going beyond just using pre-built Metal, Core ML, or Create ML approaches. Specifically, I want to train models using the open Python PyTorch libraries, as these offer greater flexibility compared to Apple's native tools. However, these PyTorch APIs are primarily optimised for NVIDIA GPUs (or TPUs), not Apple's M3 or the Apple Neural Engine (ANE). My goal is to train the models locally without resorting to cloud-based solutions for training or inference, and then to convert the models into Core ML format for deployment on Apple hardware. This would allow me to leverage Apple's hardware acceleration (via ANE, Metal, and MPS) while maintaining control over the training process in PyTorch.

I want to know:
What are my options for training models in PyTorch on local hardware (Apple M3 or equivalent), and how can I ensure that the PyTorch model can eventually be converted to Core ML without losing flexibility in model training and customisation?
How can I perform training in PyTorch and avoid being restricted to the inference-only workflows that Core ML typically allows?
Is it possible to use the training capabilities of PyTorch and still get the performance benefits of Apple's hardware for both training and inference?
What are the best practices or tools to ensure that my training pipeline in PyTorch is compatible with Apple's hardware constraints and optimised for local execution?

I'm seeking a practical, cloud-free approach on Apple hardware only that allows me to train models in PyTorch (keeping control over the training process) while ensuring that they can be deployed efficiently using Core ML on Apple hardware. (A small sketch of the deployment end is shown after this post.)
Replies: 1 · Boosts: 0 · Views: 361 · Activity: Dec ’24
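On the deployment side of the pipeline described above, the common route is to train with PyTorch's Metal-backed "mps" device on Apple Silicon, convert the trained model with coremltools, and then let Core ML schedule inference across the CPU, GPU, and Neural Engine. A minimal Swift loading sketch follows; the model file name is a placeholder for whatever coremltools produced, and the prediction call will depend on that model's actual inputs.

import CoreML

// Allow Core ML to use any available compute unit (CPU, GPU, Neural Engine).
let configuration = MLModelConfiguration()
configuration.computeUnits = .all

// "MyConvertedModel.mlmodelc" is a placeholder for a model produced by coremltools
// and compiled into the app bundle; adjust the URL to your project layout.
let modelURL = Bundle.main.url(forResource: "MyConvertedModel", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: configuration)

print("Loaded model inputs:", model.modelDescription.inputDescriptionsByName.keys)

On-device training through Core ML itself is limited to updatable models driven by MLUpdateTask, so keeping the training loop in PyTorch and treating Core ML as the inference target, as the post describes, is usually the more flexible split.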
MTKView draw method causes EXC_BAD_ACCESS crash
Hello, I am using MTKView to display a camera preview and video playback. I am testing on iPhone 16. The app crashes at a random moment whenever the MTKView is rendering a CIImage.

MetalView:

import SwiftUI
import MetalKit
import CoreImage
import Combine

public enum MetalActionType {
    case image(CIImage)
    case buffer(CVPixelBuffer)
}

public struct MetalView: UIViewRepresentable {
    let mtkView = MTKView()
    public let actionPublisher: any Publisher<MetalActionType, Never>

    public func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    public func makeUIView(context: UIViewRepresentableContext<MetalView>) -> MTKView {
        guard let metalDevice = MTLCreateSystemDefaultDevice() else { return mtkView }
        mtkView.device = metalDevice
        mtkView.framebufferOnly = false
        mtkView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
        mtkView.drawableSize = mtkView.frame.size
        mtkView.delegate = context.coordinator
        mtkView.isPaused = true
        mtkView.enableSetNeedsDisplay = true
        mtkView.preferredFramesPerSecond = 60
        context.coordinator.ciContext = CIContext(
            mtlDevice: metalDevice,
            options: [.priorityRequestLow: true, .highQualityDownsample: false])
        context.coordinator.metalCommandQueue = metalDevice.makeCommandQueue()
        context.coordinator.actionSubscriber = actionPublisher.sink { type in
            switch type {
            case .buffer(let pixelBuffer):
                context.coordinator.updateCIImage(pixelBuffer)
            case .image(let image):
                context.coordinator.updateCIImage(image)
            }
        }
        return mtkView
    }

    public func updateUIView(_ nsView: MTKView, context: UIViewRepresentableContext<MetalView>) { }

    public class Coordinator: NSObject, MTKViewDelegate {
        var parent: MetalView
        var metalCommandQueue: MTLCommandQueue!
        var ciContext: CIContext!
        private var image: CIImage? {
            didSet {
                Task { @MainActor in
                    self.parent.mtkView.setNeedsDisplay() // <--- triggers the draw method
                }
            }
        }
        var actionSubscriber: (any Combine.Cancellable)?
        private let operationQueue = OperationQueue()

        init(_ parent: MetalView) {
            self.parent = parent
            operationQueue.qualityOfService = .background
            super.init()
        }

        public func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) { }

        public func draw(in view: MTKView) {
            guard let drawable = view.currentDrawable,
                  let ciImage = image,
                  let commandBuffer = metalCommandQueue.makeCommandBuffer(),
                  let ci = ciContext
            else { return }
            // making sure nothing is nil, now we can add the current frame to the operationQueue for processing
            operationQueue.addOperation(
                MetalOperation(
                    drawable: drawable,
                    drawableSize: view.drawableSize,
                    ciImage: ciImage,
                    commandBuffer: commandBuffer,
                    pixelFormat: view.colorPixelFormat,
                    ciContext: ci))
        }

        // consumed by Subscriber
        func updateCIImage(_ img: CIImage) {
            image = img
        }

        // consumed by Subscriber
        func updateCIImage(_ buffer: CVPixelBuffer) {
            image = CIImage(cvPixelBuffer: buffer)
        }
    }
}

Now the MetalOperation class:

private class MetalOperation: Operation, @unchecked Sendable {
    let drawable: CAMetalDrawable
    let drawableSize: CGSize
    let ciImage: CIImage
    let commandBuffer: MTLCommandBuffer
    let pixelFormat: MTLPixelFormat
    let ciContext: CIContext

    init(
        drawable: CAMetalDrawable,
        drawableSize: CGSize,
        ciImage: CIImage,
        commandBuffer: MTLCommandBuffer,
        pixelFormat: MTLPixelFormat,
        ciContext: CIContext
    ) {
        self.drawable = drawable
        self.drawableSize = drawableSize
        self.ciImage = ciImage
        self.commandBuffer = commandBuffer
        self.pixelFormat = pixelFormat
        self.ciContext = ciContext
    }

    override func main() {
        let width = Int(drawableSize.width)
        let height = Int(drawableSize.height)
        let ciWidth = Int(ciImage.extent.width) // <-- Thread 22: EXC_BAD_ACCESS (code=1, address=0x5e71f5490) A bad access to memory terminated the process.
        let ciHeight = Int(ciImage.extent.height)
        let destination = CIRenderDestination(
            width: width,
            height: height,
            pixelFormat: pixelFormat,
            commandBuffer: commandBuffer,
            mtlTextureProvider: { [self] () -> MTLTexture in
                return drawable.texture
            })
        let transform = CGAffineTransform(
            scaleX: CGFloat(width) / CGFloat(ciWidth),
            y: CGFloat(height) / CGFloat(ciHeight))
        do {
            try ciContext.startTask(toClear: destination)
            try ciContext.startTask(toRender: ciImage.transformed(by: transform), to: destination)
        } catch {
        }
        commandBuffer.present(drawable)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()
    }
}

Now, I am no Metal expert, but I believe it's a very simple execution that shouldn't cause a memory problem, especially since we have already checked whether the CIImage is nil. I have also tried running this code without the OperationQueue, and also with @autoreleasepool, but neither has solved the problem. Am I missing something? (A synchronous-rendering sketch follows after this post.)
Replies: 1 · Boosts: 0 · Views: 337 · Activity: Dec ’24
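One pattern worth trying, sketched below on the assumption that the crash comes from the drawable, command buffer, and CIImage being captured into an Operation and used after draw(in:) has returned: do the Core Image work synchronously inside draw(in:) and drop the background queue, since drawables are meant to be held as briefly as possible. This reuses the ciContext, metalCommandQueue, and image properties from the Coordinator above and is a sketch, not a drop-in fix.

// A synchronous draw(in:), assuming the surrounding Coordinator from the post.
public func draw(in view: MTKView) {
    guard let drawable = view.currentDrawable,
          let ciImage = image,
          let commandBuffer = metalCommandQueue.makeCommandBuffer(),
          let ciContext = ciContext,
          ciImage.extent.width > 0, ciImage.extent.height > 0   // also guard against an empty extent
    else { return }

    // Scale the image to fill the drawable.
    let scaleX = view.drawableSize.width / ciImage.extent.width
    let scaleY = view.drawableSize.height / ciImage.extent.height
    let scaled = ciImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    // Render straight into the drawable's texture with the same command buffer,
    // then present; nothing outlives this call.
    ciContext.render(scaled,
                     to: drawable.texture,
                     commandBuffer: commandBuffer,
                     bounds: CGRect(origin: .zero, size: view.drawableSize),
                     colorSpace: CGColorSpaceCreateDeviceRGB())
    commandBuffer.present(drawable)
    commandBuffer.commit()
}

Because the didSet on image already coalesces to the latest frame, removing the OperationQueue and the waitUntilCompleted() call should not change what ends up on screen.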
How to draw directly to the pixels of the Vision Pro screens?
I have been playing around with the idea of drawing directly onto the pixels of the Vision Pro, as I am working on a telepresence app that streams a live stereoscopic feed from an articulated robot neck to the wearer. I was playing around in the Compositor Services demo and modified it to show the following. I created a grid pattern using normalized device coordinates (-1 to 1), and it looks great when it shows up in the simulator as shown below. I wanted to see the effects of lens distortion on the image, so I launched this modified app inside the actual Vision Pro, and it seems that each eye has only a portion of the screen visible. I have included a screen capture of a screen recording made inside the Vision Pro while running the modified app. The lines appear straight, which says to me that there must be some automatic pre-distortion correction applied (similar to an image taken from an AVP teardown that I cannot link here). However, I am wondering why the grid appears cropped, and what defines the bounds of the frame.
Replies: 0 · Boosts: 1 · Views: 266 · Activity: Dec ’24
Render to multiple offscreen images with SCNRenderer
I am trying to extract some built-in and custom render passes from SceneKit, so that I can pass them into a Metal pipeline and do some additional work with them. I have a Metal viewport, and have instantiated an SCNRenderer so that I can render an SCNScene using SceneKit to a texture as part of my Metal draw pass. This works as expected. Now I want to output multiple textures from the SceneKit render, not just the final color. I want to extract Depth, Normal, Lighting, Colour, and a custom SCNTechnique for world position. I can easily use an SCNTechnique to render one of these to the color output, but it's not clear how I would render multiple passes in one render call. Is there some way to pass a writeable buffer/texture to an SCNTechnique, so that I can populate it in my SCNTechnique shader at render time with the output from the pass? Similar to how one would bind a buffer for a Metal shader. SCNTechnique obfuscates things, so it's not clear how to proceed. Does anyone have any ideas? (A per-output fallback is sketched after this post.)
Replies: 0 · Boosts: 0 · Views: 260 · Activity: Dec ’24
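If a single-pass multi-output route doesn't surface, one blunt fallback is to render the scene once per desired output with SCNRenderer, swapping the technique between passes. The sketch below assumes SCNRenderer's technique property (from SCNTechniqueSupport) routes each technique's final output into the bound color attachment, the way it does for SCNView; the technique dictionary keys and the texture formats are placeholders.

import SceneKit
import Metal

// Render one SCNScene several times into offscreen textures, one SCNTechnique per output.
func renderPasses(scene: SCNScene,
                  device: MTLDevice,
                  commandQueue: MTLCommandQueue,
                  techniques: [String: SCNTechnique],   // e.g. ["worldPosition": ..., "normals": ...]
                  size: CGSize) -> [String: MTLTexture] {
    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = scene

    var outputs: [String: MTLTexture] = [:]
    for (name, technique) in techniques {
        let colorDesc = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .rgba16Float,
            width: Int(size.width), height: Int(size.height),
            mipmapped: false)
        colorDesc.usage = [.renderTarget, .shaderRead]

        let depthDesc = MTLTextureDescriptor.texture2DDescriptor(
            pixelFormat: .depth32Float,
            width: Int(size.width), height: Int(size.height),
            mipmapped: false)
        depthDesc.usage = .renderTarget
        depthDesc.storageMode = .private

        guard let colorTexture = device.makeTexture(descriptor: colorDesc),
              let depthTexture = device.makeTexture(descriptor: depthDesc),
              let commandBuffer = commandQueue.makeCommandBuffer() else { continue }

        let passDescriptor = MTLRenderPassDescriptor()
        passDescriptor.colorAttachments[0].texture = colorTexture
        passDescriptor.colorAttachments[0].loadAction = .clear
        passDescriptor.colorAttachments[0].storeAction = .store
        passDescriptor.depthAttachment.texture = depthTexture
        passDescriptor.depthAttachment.loadAction = .clear
        passDescriptor.depthAttachment.storeAction = .dontCare

        renderer.technique = technique   // route this pass's output into the color attachment
        renderer.render(atTime: 0,
                        viewport: CGRect(origin: .zero, size: size),
                        commandBuffer: commandBuffer,
                        passDescriptor: passDescriptor)
        commandBuffer.commit()
        outputs[name] = colorTexture
    }
    return outputs
}

Each entry in techniques would be whatever plist-defined SCNTechnique the project already uses for that output (for example the custom world-position technique mentioned above); the extra cost is one scene traversal per output.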
TaskExecutor and Swift 6 question
I have the following TaskExecutor code in Swift 6 and am getting the following error:

// Error: Passing closure as a sending parameter risks causing data races between main actor-isolated code and concurrent execution of the closure.

May I know what is the best way to approach this? This is the default code generated by Xcode when creating a Vision Pro app using Metal as the immersive renderer.

Renderer:

@MainActor
static func startRenderLoop(_ layerRenderer: LayerRenderer, appModel: AppModel) {
    Task(executorPreference: RendererTaskExecutor.shared) { // Error
        let renderer = Renderer(layerRenderer, appModel: appModel)
        await renderer.startARSession()
        await renderer.renderLoop()
    }
}

final class RendererTaskExecutor: TaskExecutor {
    private let queue = DispatchQueue(label: "RenderThreadQueue", qos: .userInteractive)

    func enqueue(_ job: UnownedJob) {
        queue.async {
            job.runSynchronously(on: self.asUnownedSerialExecutor())
        }
    }

    func asUnownedSerialExecutor() -> UnownedTaskExecutor {
        return UnownedTaskExecutor(ordinary: self)
    }

    static let shared: RendererTaskExecutor = RendererTaskExecutor()
}
Replies: 1 · Boosts: 0 · Views: 427 · Activity: Dec ’24