Render advanced 3D graphics and perform data-parallel computations on graphics processors using Metal.

Metal Documentation

Posts under Metal tag

278 Posts
Post not yet marked as solved
0 Replies
353 Views
Hi, I am looking for a routine to perform complex-valued linear algebra on the GPU in Python for scientific programming, in particular quantum physics simulations. At the moment I am looking for a routine for complex-valued matrix multiplication. I found that MLX has a routine for float matrix multiplication, but it does not work directly for complex-valued matrices. I figured out a workaround by splitting the complex-valued matrix into its real and imaginary parts and working with the pair, but that makes it cumbersome to integrate with the rest of the code. I was hoping for a library-based implementation similar to CuPy. I also tried the TensorFlow linear algebra routines, but I couldn't get them to run on the GPU so far. Specifically, a test file with a tensorflow.keras.applications.ResNet50 routine runs on the GPU, but the routines from tensorflow.linalg and tensorflow.math that I tested (matmul, expm, eigh) were not running on the GPU. Any advice on how to make linear algebra calculations work on Mac GPUs is highly appreciated! For my application the unified memory might be especially beneficial. Thank you!
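For reference, the real/imaginary split the poster describes comes down to the identity (A + iB)(C + iD) = (AC - BD) + i(AD + BC). A minimal, purely illustrative sketch of that decomposition in plain Swift (the poster's actual code is Python/MLX, so this only shows the math, not any library API):

    // Naive row-major real matrix product, for illustration only.
    func matmul(_ x: [[Float]], _ y: [[Float]]) -> [[Float]] {
        let n = x.count, k = y.count, m = y[0].count
        var r = [[Float]](repeating: [Float](repeating: 0, count: m), count: n)
        for i in 0..<n {
            for j in 0..<m {
                var s: Float = 0
                for t in 0..<k { s += x[i][t] * y[t][j] }
                r[i][j] = s
            }
        }
        return r
    }

    // Elementwise x + scale * y, used for the add/subtract steps below.
    func addScaled(_ x: [[Float]], _ y: [[Float]], scale: Float) -> [[Float]] {
        var r = x
        for i in 0..<x.count {
            for j in 0..<x[i].count { r[i][j] += scale * y[i][j] }
        }
        return r
    }

    // (A + iB)(C + iD) = (AC - BD) + i(AD + BC): four real products per complex product.
    func complexMatmul(realA A: [[Float]], imagA B: [[Float]],
                       realB C: [[Float]], imagB D: [[Float]]) -> (real: [[Float]], imag: [[Float]]) {
        let real = addScaled(matmul(A, C), matmul(B, D), scale: -1)
        let imag = addScaled(matmul(A, D), matmul(B, C), scale: 1)
        return (real, imag)
    }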
Posted by MG607. Last updated.
Post not yet marked as solved
1 Reply
439 Views
I used Metal and CompositorLayer to render an immersive-space skybox. In this space, the window created with SwiftUI only displays the gray frosted-glass background effect (it seems to ignore the Metal-rendered skybox and only samples a black background). Why is that? Is there any solution to display the normal frosted-glass background? Thank you very much!
Posted by zane1024. Last updated.
Post not yet marked as solved
0 Replies
299 Views
Hi, we have an issue where the blurred background shown behind SwiftUI views is incorrect. We are using the compositor layer to render a fully immersive scene with Metal, so there is no real-world passthrough. We have a couple of SwiftUI views in the scene that have translucency and the glass background effect enabled. We expected the background to show the blurred immersive scene, in the same way it shows the blurred real world in passthrough mode. Instead, it seems to still show the blurred real world, which feels strange from the user's perspective. This is only visible on-device; in the simulator the background is always black. Thanks, Mark
Posted. Last updated.
Post not yet marked as solved
2 Replies
419 Views
In the code below I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using a SceneKit SCNGeometry. This lets me stretch face mesh vertices as required. The problem I am now facing is as follows: I am trying to apply a lipstick texture material, which is of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using my CustomFaceGeometry. When I apply a lipstick texture that looks like the image attached below, the whole face gets colored; I only want the part of the face where the texture's transparency is > 0 to be modified, and the rest of the face left untouched. Can you give me a detailed solution using code?

// ViewController.swift
import UIKit
import ARKit
import SceneKit
import simd

class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
    @IBOutlet weak var sceneView: ARSCNView!
    let vertexIndicesOfInterest = [250]
    var customFaceGeometry: CustomFaceGeometry!
    var scnFaceGeometry: SCNGeometry!
    private var faceUvGenerator: FaceTextureGenerator!
    var faceGeometry: ARSCNFaceGeometry!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARFaceTrackingConfiguration()
        sceneView.session.run(configuration)
    }
}

extension ViewController {
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
        let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
        customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
        customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
        customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
        node.addChildNode(customGeometryNode)
    }

    func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceMeshNode = node.childNodes.first else { return }
        DispatchQueue.main.async {
            self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
        }
    }
}

class CustomFaceGeometry {
    var geometry: SCNGeometry
    let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

    init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
        self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
    }

    static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
        var vertices = vertices_o
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [Int32] = Array(0..<Int32(vertices.count))
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .point,
                                         primitiveCount: vertices.count,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }

    static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
        let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
    }

    func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
        if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
            node.geometry = newGeometry
            let lipstickNode = SCNNode(geometry: newGeometry)
            let lipstickTextureMaterial = SCNMaterial()
            lipstickTextureMaterial.diffuse.contents = lipImage
            lipstickTextureMaterial.transparency = 1.0
            lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
            node.geometry?.firstMaterial?.fillMode = .lines
            node.geometry?.firstMaterial?.transparency = 0.5
        }
    }

    static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
        let faceGeometry = faceAnchor.geometry
        var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
        print(vertices[250])
        // Scale the y component of selected vertices by ll_ratio_y.
        let ll_ratio_y = Float(0.969999)
        vertices[290] = SCNVector3(x: vertices[290].x, y: vertices[290].y * ll_ratio_y, z: vertices[290].z)
        vertices[274] = SCNVector3(x: vertices[274].x, y: vertices[274].y * ll_ratio_y, z: vertices[274].z)
        vertices[265] = SCNVector3(x: vertices[265].x, y: vertices[265].y * ll_ratio_y, z: vertices[265].z)
        vertices[700] = SCNVector3(x: vertices[700].x, y: vertices[700].y * ll_ratio_y, z: vertices[700].z)
        vertices[730] = SCNVector3(x: vertices[730].x, y: vertices[730].y * ll_ratio_y, z: vertices[730].z)
        vertices[25]  = SCNVector3(x: vertices[25].x,  y: vertices[25].y  * ll_ratio_y, z: vertices[25].z)
        vertices[709] = SCNVector3(x: vertices[709].x, y: vertices[709].y * ll_ratio_y, z: vertices[709].z)
        vertices[725] = SCNVector3(x: vertices[725].x, y: vertices[725].y * ll_ratio_y, z: vertices[725].z)
        vertices[710] = SCNVector3(x: vertices[710].x, y: vertices[710].y * ll_ratio_y, z: vertices[710].z)
        let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
        let vertexSource = SCNGeometrySource(data: vertexData,
                                             semantic: .vertex,
                                             vectorCount: vertices.count,
                                             usesFloatComponents: true,
                                             componentsPerVector: 3,
                                             bytesPerComponent: MemoryLayout<Float>.size,
                                             dataOffset: 0,
                                             dataStride: MemoryLayout<SCNVector3>.stride)
        let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .triangles,
                                         primitiveCount: indices.count / 3,
                                         bytesPerIndex: MemoryLayout<UInt16>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
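For what it's worth, one hedged sketch of confining the effect to the lip area (not the poster's code; it assumes the lip image is transparent outside the lips and that the custom geometry also supplies a texture-coordinate source, which createCustomSCNGeometry above currently does not) is to keep the lip texture on its own material and discard fragments where the sampled alpha is near zero:

    import SceneKit
    import UIKit

    // Hypothetical helper: draw only where the lip texture has alpha > 0.
    // Requires the geometry to carry a .texcoord SCNGeometrySource so the
    // diffuse texture can be sampled per fragment.
    func applyLipMask(to node: SCNNode, lipImage: UIImage) {
        let material = SCNMaterial()
        material.diffuse.contents = lipImage
        material.isDoubleSided = true
        material.blendMode = .alpha
        // Surface-stage shader modifier: skip fully transparent texels so the
        // rest of the face mesh is left untouched.
        material.shaderModifiers = [
            .surface: """
            if (_surface.diffuse.a < 0.01) {
                discard_fragment();
            }
            """
        ]
        node.geometry?.firstMaterial = material
    }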
Posted by akash-ar. Last updated.
Post not yet marked as solved
0 Replies
325 Views
Hi there, I've run into a problem with my working project's build settings: there is no Metal Compiler build setting, although it appears in my demo project. I'm certain I've moved the shader (.metal) files into the bundle. How can I resolve this? Xcode 15.0 on macOS 13.6.1 (22G313).
Posted. Last updated.
Post not yet marked as solved
0 Replies
283 Views
Is there a way to render stereoscopic (left/right) images on a 2D plane that resides in a SwiftUI view? I know this is possible with RealityKit shaders and in immersive Metal compositing, but is it possible via SwiftUI shaders, CAMetalLayer, etc.? I'd like to draw a 2D window with standard UI chrome (resize, move, etc.) that displays stereoscopic content on the flat plane of the window.
Posted. Last updated.
Post not yet marked as solved
0 Replies
222 Views
When developing JAX code locally, I use JAX's debug_callback. Metal does not implement it: NotImplementedError: MLIR translation rule for primitive 'debug_callback' not found for platform METAL
Posted by GeoffNN. Last updated.
Post not yet marked as solved
0 Replies
317 Views
I'm experiencing an issue with SceneKit that is driving me crazy :( I get severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary? (That is, run with MTL_DEBUG_LAYER=1 for TestFlight or App Store builds.) The hangs happen on Catalyst, but also on iOS if I use lightingEnvironment...
Posted by Gil. Last updated.
Post not yet marked as solved
0 Replies
276 Views
Hi, Re: WWDC23 session 10089, I have a question about creating texture maps during pipeline setup. In traditional MTKView setups, it's easy to query the view size to know what the dimensions of a texture map should be. But after digging through all the documentation on these classes, I don't see any way to find this information. There's the drawable, and querying it, and then maybe getting the info from the default render textures; but I'm trying to set these textures up when I set up the pipelines, so I don't think that will work, because the render loop won't have started yet. Secondly, I'm wondering whether, with foveation, there's even more that needs to be considered when creating these kinds of auxiliary render passes. Basically, for example's sake, imagine you have a working visionOS Metal pipeline, but now you want to add a special render pass to do some effects. Typically you'd create a texture map to store that pass, calculate the work in a fragment shader, etc., and then use another pipeline state to mix that with the default rendering pipeline. Any help appreciated, thanks!
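Not an answer to where the dimensions come from, but as a point of comparison, here is a minimal sketch (plain Metal API, illustrative names) of creating the auxiliary pass's render target lazily on the first frame, once some width/height is known, rather than at pipeline-setup time:

    import Metal

    // Hypothetical cache for an offscreen effect-pass target, created lazily
    // once the drawable dimensions are known instead of during pipeline setup.
    final class EffectPassTargets {
        private(set) var colorTexture: MTLTexture?

        func texture(for device: MTLDevice, width: Int, height: Int) -> MTLTexture? {
            // Reuse the existing texture unless the size changed (or on first use).
            if let tex = colorTexture, tex.width == width, tex.height == height {
                return tex
            }
            let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                                width: width,
                                                                height: height,
                                                                mipmapped: false)
            desc.usage = [.renderTarget, .shaderRead]
            desc.storageMode = .private
            colorTexture = device.makeTexture(descriptor: desc)
            return colorTexture
        }
    }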
Posted by Gregory_w. Last updated.
Post not yet marked as solved
0 Replies
296 Views
In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:

// The 16 byte aligned size of our uniform structures
let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100

If I understand correctly, this line of code:
1. calculates the size of the SharedUniforms structure in bytes,
2. clears out the last 8 bits of that size,
3. adds 256 bytes to it.
So if I'm not mistaken, this rounds the size of the SharedUniforms structure up to a multiple of 256 bytes, not 16 bytes as the comment suggests. Is there something I've overlooked, since I can't wrap my head around how this would align the size to 16 bytes?
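For a concrete check of that reading, here is the expression evaluated for a hypothetical structure size (392 is an invented example value), next to the usual 16-byte round-up:

    // Illustrative only: suppose MemoryLayout<SharedUniforms>.size were 392.
    let size = 392
    let alignedAs256 = (size & ~0xFF) + 0x100   // (392 & ~255) = 256, + 256 = 512
    let roundedTo16  = (size + 0xF) & ~0xF      // 400, an actual 16-byte round-up
    // The template's expression always yields a multiple of 256 (and adds a full
    // 256-byte slot even when the size is already a multiple of 256), which
    // matches the poster's reading: 256-byte alignment, not 16.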
Posted by BanSee. Last updated.
Post not yet marked as solved
2 Replies
226 Views
Here is an example fragment shader (rendering a cube with texCoord in [0, 1]):

colorSample.x = in.texCoord.x;

which produces this result (first attached image). However, if I make a small change to the code like this:

colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;

then it produces this result (second attached image). If I disable fast math in the second case, it produces the same image as the first. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operand in the same expression. Is this a bug in fast-math mode? How should I work around this problem?
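If the goal is to rule fast math in or out per library rather than project-wide, one option is shown below as a sketch using the runtime shader compiler (for .metal files compiled by Xcode, the equivalent knob is the Metal compiler's fast-math build setting / -fno-fast-math); `device` and `shaderSource` are assumed to exist already:

    import Metal

    // Compile the shader source with fast math disabled so fract() and the
    // surrounding expression keep full precision.
    func makePreciseLibrary(device: MTLDevice, shaderSource: String) throws -> MTLLibrary {
        let options = MTLCompileOptions()
        options.fastMathEnabled = false   // newer SDKs also expose a mathMode property for the same purpose
        return try device.makeLibrary(source: shaderSource, options: options)
    }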
Posted. Last updated.
Post not yet marked as solved
3 Replies
463 Views
Platform: iPhone XR. System: iOS 17.3.1. Using the iPhone front camera (the normal camera), with the data output format configured as kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange ('420v', video range), I found that Cb and Cr stay inside [16, 240], but Y falls outside the range [16, 235], e.g. 240 or 255. This means that after converting to RGB, values can be negative; after clamping r, g and b to [0, 255] and converting back to YUV, the result differs from the original YUV. The maximum difference in the Y channel is around 20. Both processing on the pure CPU and using a Metal shader give this result.

From CVPixelBuffer.h:

kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange = '420v',
/* Bi-Planar Component Y'CbCr 8-bit 4:2:0, video-range (luma=[16,235] chroma=[16,240]).
   baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct */

// ... some code ...
// Configure the camera data output format
NSDictionary* options = @{
    (__bridge NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    //(__bridge NSString*)kCVPixelBufferMetalCompatibilityKey : @(YES),
};
[_videoDataOutput setVideoSettings:options];

// ... some code ...
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferRef pixelBuffer = imageBuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    uint8_t* yBase  = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    uint8_t* uvBase = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int imageWidth  = (int)CVPixelBufferGetWidth(pixelBuffer);   // 720
    int imageHeight = (int)CVPixelBufferGetHeight(pixelBuffer);  // 1280
    int y_width   = (int)CVPixelBufferGetWidthOfPlane (pixelBuffer, 0);  // 720
    int y_height  = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);  // 1280
    int uv_width  = (int)CVPixelBufferGetWidthOfPlane (pixelBuffer, 1);  // 360
    int uv_height = (int)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1);  // 640
    int y_stride  = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int uv_stride = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1); // 768

    // Check the Y plane for samples outside the nominal video range
    if (TRUE) {
        for (int i = 0; i < imageHeight; i++) {
            for (int j = 0; j < imageWidth; j++) {
                uint8_t nv12pixel = *(yBase + y_stride * i + j);
                if (nv12pixel < 16 || nv12pixel > 235) { // [16, 235]
                    NSLog(@"%s: y plane out of range, coord (x:%d, y:%d), h-coord (x:%d, y:%d) ; nv12 %u",
                          __FUNCTION__, j, i, j / 2, i / 2, nv12pixel);
                }
            }
        }
    }

    // Unlock to balance the lock above.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}
// ... some code ...

How should I deal with this case? Hope to get a reply, thanks!
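One small, hedged sketch of a mitigation (not a claim about what the camera should emit): clamp samples into the nominal video ranges before any YUV-to-RGB conversion, if the rest of the pipeline assumes luma in [16, 235]:

    // Illustrative Swift helper: clamp biplanar video-range samples to their
    // nominal ranges (luma [16, 235], chroma [16, 240]) before conversion.
    @inline(__always)
    func clampToVideoRange(y: UInt8, cb: UInt8, cr: UInt8) -> (y: UInt8, cb: UInt8, cr: UInt8) {
        (min(max(y, 16), 235), min(max(cb, 16), 240), min(max(cr, 16), 240))
    }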
Posted by ZoGo996. Last updated.
Post not yet marked as solved
0 Replies
204 Views
On startup I'm getting a "We reached more than 3 frames in flight. That's too many. Did you forget to call cp_frame_end_submission()?" error, despite cp_frame_end_submission() being called when needed. Nothing is rendered in the one frame that does go through. Is there something I'm missing that would cause cp_frame_end_submission() to not register?
Posted by rob42lou. Last updated.
Post not yet marked as solved
2 Replies
479 Views
Hi, I have an issue with jax.numpy.linalg.inv(a).

import jax.numpy as jnp
import jax.numpy.linalg as jnpl

B = jnp.identity(2)
jnpl.inv(B)

throws the following error:

XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
/var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = "mhlo.triangular_solve"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>

Any ideas what could be the issue or how to solve it?
Posted. Last updated.
Post not yet marked as solved
0 Replies
376 Views
Is there a way for an FxPlug to access the source audio? Or do we need to make an AU plug-in, apply it to an audio source (either a video or an audio track), and feed the info to the FxPlug via shared memory? Is there an AU plug-in that lets external processes "listen" to the audio?
Posted by belisoful. Last updated.
Post not yet marked as solved
0 Replies
308 Views
Namaste! I'm putting together an FCPX effect that is supposed to increase the resolution with AI upscaling, but the only way to add resolution is by scaling. The problem is that scaling causes the video to clip. I want to be able to give a 480p video this "Resolution Upscale" effect and have it output a 720p or 1080p AI-upscaled video; however, neither FxPlug nor Motion effects allow such a thing. The FxPlug always gets 640x480 input (correct) but only 640x480 output. What is the FxPlug code or Motion configuration/concept for upscaling the resolution without affecting the scale? Is there a way to do this in Motion/FxPlug? Scaling up in the FxPlug effect and then scaling down in a parent Motion group doesn't do anything. Setting the group's 2D Fixed Resolution doesn't output different dimensions; the debug output from the FxPlug continues to report 640x480 for input and output, even when the group is set to a fixed resolution of 1920x1080. A hierarchy of groups with different settings for 2D Fixed Resolution and 3D Flatten doesn't work either; in these cases the debug output still says 640x480 for both input and output, so the plug-in isn't aware of the Fixed Resolution change. Does there need to be a new FxPlug property, via [properties:...], like "kFxPropertyKey_ResolutionChange", and an API for changing the destination image resolution (without changing the destination rect size)? How do we do this?
Posted by belisoful. Last updated.
Post not yet marked as solved
0 Replies
467 Views
I'm working on an app for macOS where it would be very useful to display the GPU (graphics card) workload as a percentage. CPU usage monitoring is easy, but GPU monitoring on Apple silicon is next to impossible. Apple only seems to give us our own app's GPU usage, which is not what we want, since we want the total GPU workload for the whole system. I'm using the latest version of Xcode and Swift; any ideas how to achieve this?
Posted by Filip27. Last updated.
Post not yet marked as solved
3 Replies
855 Views
Is it possible to use the Metal API on Vision Pro? I noticed that MTKView is not recognized in my visionOS app, and I also noticed other forum posts from months ago saying that MTKView is not yet supported. If it is still not an option, when, if ever, will it be supported? I'm also wondering about metal-cpp support, since my app involves integrating an existing C++ library with visionOS (see here: https://github.com/MinVR/MinVR). Is this possible?
Posted. Last updated.