Is there a new API for generating indirect commands with the Metal Shader Converter? Is there an example project? I currently use a shader to copy indirect commands. Is there a way to do that with the new Shader Converter pipeline?
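For reference, the shader-side copy can also be done with a blit encoder, which is independent of the Shader Converter pipeline. A minimal sketch, where `source`, `destination`, and `commandCount` are placeholders for existing objects:

import Metal

// Sketch: copy a range of indirect commands on the blit encoder instead of in a compute shader.
func copyCommands(commandBuffer: MTLCommandBuffer,
                  source: MTLIndirectCommandBuffer,
                  destination: MTLIndirectCommandBuffer,
                  commandCount: Int) {
    guard let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    // Copies commands [0, commandCount) from `source` into `destination`, starting at index 0.
    blit.copyIndirectCommandBuffer(source,
                                   sourceRange: 0..<commandCount,
                                   destination: destination,
                                   destinationIndex: 0)
    blit.endEncoding()
}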
Hi there,
My app uses the .mixed immersion mode with an ImmersiveSpace rendering Metal content into a compositor frame, while also using windows for SwiftUI content.
In the screenshot below, you can see a red outline rendered in Metal. Note that the SwiftUI content is always rendered on top, even though the depth of the plane is behind the depth of the Metal content.
Is this behaving as expected or should I be hunting for a bug in my code?
Thank you!
Hello.
I am trying to calculate rays from the NDC coordinates of the screen and the inverses of the projection and view matrices provided by the visionOS API. It works perfectly in the simulator, but on device the projected rays do not match the (correct) projection of the raster scene rendered with the same projection and view matrices.
Are there some differences between the device and simulator projection matrices that might cause this issue?
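For reference, the unprojection being described looks roughly like this as a standalone sketch. It assumes column-major simd matrices and unprojects the NDC point at z = 1; the depth convention the device's projection matrix actually uses is one thing worth comparing against the simulator:

import simd

// Builds a world-space ray from a normalized device coordinate, given the projection and view matrices.
func ray(fromNDC ndc: SIMD2<Float>,
         projectionMatrix: simd_float4x4,
         viewMatrix: simd_float4x4) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let inverseProjection = projectionMatrix.inverse
    let inverseView = viewMatrix.inverse

    // Unproject a point on the far side of the clip volume back to view space.
    var viewPoint = inverseProjection * SIMD4<Float>(ndc.x, ndc.y, 1, 1)
    viewPoint /= viewPoint.w

    // Move to world space; the camera position is the translation column of the inverse view matrix.
    let worldPoint = inverseView * viewPoint
    let origin = SIMD3<Float>(inverseView.columns.3.x, inverseView.columns.3.y, inverseView.columns.3.z)
    let direction = simd_normalize(SIMD3<Float>(worldPoint.x, worldPoint.y, worldPoint.z) - origin)
    return (origin, direction)
}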
I'm trying to figure out how to engrave some text into this ellipsoid mesh. So far the only thing I've found that comes close to what I'm looking for is SCNText, but it floats above the ellipsoid and doesn't conform to the curved surface.
let allocator = MTKMeshBufferAllocator(device: MTLCreateSystemDefaultDevice()!)
let disc = MDLMesh.newEllipsoid(
    withRadii: vector_float3(Float(discDiameter/2), Float(discDiameter/2), Float(discThickness/2)),
    radialSegments: 64,
    verticalSegments: 64,
    geometryType: .triangles,
    inwardNormals: false,
    hemisphere: false,
    allocator: allocator
)
let discGeometry = SCNGeometry(mdlMesh: disc)
let material = createIridescentMaterial()
discGeometry.materials = [material]
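One common workaround, sketched below under the assumption that a texture-based look is acceptable: render the text into an image and assign it to the material, so it wraps around the ellipsoid via the mesh's UV mapping instead of floating above it (true engraving would additionally displace the geometry, e.g. with a shader modifier or a normal/displacement map):

import SceneKit
import UIKit

// Draws `text` into an image that can be assigned to an SCNMaterial property, so the
// text follows the ellipsoid's UV mapping instead of sitting above the surface.
func textImage(_ text: String, size: CGSize = CGSize(width: 1024, height: 512)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { context in
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 120),
            .foregroundColor: UIColor.darkGray
        ]
        let textSize = (text as NSString).size(withAttributes: attributes)
        let origin = CGPoint(x: (size.width - textSize.width) / 2,
                             y: (size.height - textSize.height) / 2)
        (text as NSString).draw(at: origin, withAttributes: attributes)
    }
}

// Hypothetical usage with the geometry above:
// material.diffuse.contents = textImage("ENGRAVE ME")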
I'm trying to create heat maps for a variety of functions of two variables. My first implementation didn't use Metal and was far too slow, so now I'm looking into doing it with Metal.
I managed to get a very simple example running but I can't figure out how to pass different functions to the fragment shader. Here's the example:
in ContentView.swift:
struct ContentView: View {
    var body: some View {
        Rectangle()
            .aspectRatio(contentMode: .fit)
            .visualEffect { content, gp in
                let width = Shader.Argument.float(gp.size.width)
                let height = Shader.Argument.float(gp.size.height)
                return content.colorEffect(
                    ShaderLibrary.heatMap(width, height)
                )
            }
    }
}
in Shader.metal:
#include <metal_stdlib>
using namespace metal;
constant float twoPi = 6.283185005187988;
// input in [0,1], output in [0,1]
float f(float x) { return (sin(twoPi * x) + 1) / 2; }
// inputs in [0,1], output in [0,1]
float g(float x, float y) { return f(x) * f(y); }
[[ stitchable ]] half4 heatMap(float2 pos, half4 color, float width, float height) {
    float u = pos.x / width;
    float v = pos.y / height;
    float c = g(u, v);
    return half4(c/2, 1-c, c, 1);
}
As it is, it works great and is blazing fast...
...but the function I'm heat-mapping is hardcoded in the Metal file. I'd like to be able to write different functions in Swift and pass them to the shader from within SwiftUI (i.e., from the ContentView, by querying a model to get the function).
I tried something like this in the metal file:
// (u, v) in [0,1] x [0,1]
// w = f(u, v) in [0,1]
[[ stitchable ]] half4 heatMap(
    float2 pos, half4 color,
    float width, float height,
    float (*f) (float u, float v),
    half4 (*c) (float w)
) {
    float u = pos.x / width;
    float v = pos.y / height;
    float w = f(u, v);
    return c(w);
}
but I couldn't get Swift and C++ to work together to make sense of the function pointers, and now I'm stuck. Any help is greatly appreciated.
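One workaround for the function-pointer limitation, as a sketch rather than a definitive answer: compile a small fixed set of candidate functions into Shader.metal, give heatMap an extra float selector parameter that it switches on, and pass the selector (plus any coefficients) from SwiftUI as ordinary shader arguments. The Swift side would mirror the working version above; `functionIndex` is a hypothetical model-driven value:

import SwiftUI

struct HeatMapView: View {
    // Hypothetical: picks one of the functions compiled into Shader.metal.
    var functionIndex: Double = 0

    var body: some View {
        Rectangle()
            .aspectRatio(contentMode: .fit)
            .visualEffect { content, gp in
                let width = Shader.Argument.float(gp.size.width)
                let height = Shader.Argument.float(gp.size.height)
                let selector = Shader.Argument.float(functionIndex)
                return content.colorEffect(
                    ShaderLibrary.heatMap(width, height, selector)
                )
            }
    }
}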
Many thanks!
I'm testing on an iPhone 12 Pro, running iOS 17.5.1.
Playing an HDR video with AVPlayer without explicitly specifying a pixel format (but specifying Metal Compatibility as below) gives buffers with the pixel format kCVPixelFormatType_Lossless_420YpCbCr10PackedBiPlanarVideoRange (&xv0).
_videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (NSString*)kCVPixelBufferMetalCompatibilityKey: @(YES)
}];
I can't find an appropriate Metal pixel format to use for these buffers to access the data in a shader. Using MTLPixelFormatR16Unorm for the Y plane and MTLPixelFormatRG16Unorm for the UV plane causes GPU command buffer aborts.
My suspicion is that this compressed format isn't actually Metal-compatible, due to the lack of padding bytes between pixels. Explicitly selecting kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange (which uses 16 bits per pixel) for the AVPlayerItemVideoOutput works, but I'd ideally like to use the compressed formats if possible for the bandwidth savings.
With SDR video, the pixel format is the lossless 8-bit one, and there are no problems binding those buffers to Metal textures.
I'm just looking for confirmation that there's currently no appropriate Metal pixel format for binding the packed 10-bit planes. And if that's the case, is it a bug that AVPlayerItemVideoOutput uses this format despite Metal compatibility being requested?
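For reference, this is the kind of per-plane binding being described for the uncompressed 10-bit format, sketched under the assumption that a CVMetalTextureCache already exists and with error handling omitted:

import CoreVideo
import Metal

// Wraps the two planes of a kCVPixelFormatType_420YpCbCr10BiPlanar* buffer as Metal textures.
// Assumes `textureCache` was created with CVMetalTextureCacheCreate for the same MTLDevice.
func makePlaneTextures(from pixelBuffer: CVPixelBuffer,
                       textureCache: CVMetalTextureCache) -> (y: MTLTexture, cbcr: MTLTexture)? {
    func plane(_ index: Int, _ format: MTLPixelFormat) -> MTLTexture? {
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, index)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, index)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                  pixelBuffer, nil, format,
                                                  width, height, index, &cvTexture)
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }
    guard let y = plane(0, .r16Unorm), let cbcr = plane(1, .rg16Unorm) else { return nil }
    return (y, cbcr)
}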
Here is test code run in a macOS app (macOS 15 beta 3).
If the executable path does not contain Chinese characters, everything works as we expect. Otherwise (simply place the executable in a directory with a Chinese name), the MTLLibrary we create with newLibraryWithSource: contains no functions; we just get these logs:
"Library contains the following functions: {}"
"Function 'squareKernel' not found."
Note: macOS 14 works fine
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
if (!device) {
    NSLog(@"Metal is not supported.");
}

NSString *shaderSource = @
    "#include <metal_stdlib>\n"
    "using namespace metal;\n"
    "kernel void squareKernel(device float* data [[buffer(0)]], uint gid [[thread_position_in_grid]]) {\n"
    "    data[gid] *= data[gid];\n"
    "}";

MTLCompileOptions *options = [[MTLCompileOptions alloc] init];
options.languageVersion = MTLLanguageVersion2_0;

NSError *error = nil;
id<MTLLibrary> library = [device newLibraryWithSource:shaderSource options:options error:&error];
if (error) {
    NSLog(@"New MTLLibrary error: %@", error);
}

NSArray<NSString *> *functionNames = [library functionNames];
NSLog(@"Library contains the following functions: %@", functionNames);

id<MTLFunction> computeShaderFunction = [library newFunctionWithName:@"squareKernel"];
if (computeShaderFunction) {
    NSLog(@"Found function 'squareKernel'.");

    NSError *pipelineError = nil;
    id<MTLComputePipelineState> pipelineState = [device newComputePipelineStateWithFunction:computeShaderFunction error:&pipelineError];
    if (pipelineError) {
        NSLog(@"Create pipeline state error: %@", pipelineError);
    } else {
        NSLog(@"Create pipeline state succeeded!");
    }
} else {
    NSLog(@"Function 'squareKernel' not found.");
}
I had a problem where my app did not work outside Xcode (it turned out I had to include MetalKit.framework).
I found a simple trick to get my NSLog outputs and locate the issue:
NSURL *downloadsDirectory = [[[NSFileManager defaultManager] URLsForDirectory:NSDownloadsDirectory inDomains:NSUserDomainMask] firstObject];
NSURL *fileURL = [downloadsDirectory URLByAppendingPathComponent:@"Log.txt"];
NSString *filePath = [fileURL path];
// Redirect stderr to the file
freopen([filePath fileSystemRepresentation], "a+", stderr);
Now you get debug info in your Downloads folder!
I have recently developed an interest in the shader effects commonly found in Apple's UI and have been studying them. Additionally, as I own a Vision Pro, I have a strong desire to understand LowLevelMesh and am currently analyzing the sample code after watching the related session.
The part where I am completely stuck and unable to understand is the initializer section of CurveExtruder.
/// Initializes the `CurveExtruder` with the shape to sweep along the curve.
///
/// - Parameters:
/// - shape: The 2D shape to sweep along the curve.
init(shape: [SIMD2<Float>]) {
    self.shape = shape

    // Compute topology //
    // Triangle fan lists each vertex in `shape` once for each ring, except for vertex `0` of `shape` which
    // is listed twice. Plus one extra index for the end-index (0xFFFFFFFF).
    let indexCountPerFan = 2 * (shape.count + 1) + 1
    var topology: [UInt32] = []
    topology.reserveCapacity(indexCountPerFan)

    // Build triangle fan.
    for vertexIndex in shape.indices.reversed() {
        topology.append(UInt32(vertexIndex))
        topology.append(UInt32(shape.count + vertexIndex))
    }
    // Wrap around to the first vertex.
    topology.append(UInt32(shape.count - 1))
    topology.append(UInt32(2 * shape.count - 1))
    // Add end-index.
    topology.append(UInt32.max)
    assert(topology.count == indexCountPerFan)
I have tried to understand why the capacity reserved for the topology array is 2 * (shape.count + 1) + 1, but I am struggling to figure it out.
I also do not understand the principle behind the order in which vertexIndex is added to topology.
The confusion is even greater because, while the comment mentions a triangle fan, the actual creation of the LowLevelMesh.Part object uses the topology: .triangleStrip argument. (Did I misunderstand? I know the topology options include .triangle, but this uses duplicated vertices.)
I am feeling very stuck, and it's hard to find answers even through search or LLMs. Maybe this requires specialized knowledge of computer graphics, which makes me feel embarrassed to ask.
However, I have tried various directions on my own without finding a clear path, so I am desperately seeking assistance!
P.S. As Korean is my primary language, I apologize in advance if there are any awkward or rude expressions.
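To make the index count concrete, here is a small standalone sketch that simply reruns the loop from the initializer above for a 3-vertex shape. Counting the appends: the loop adds 2 * shape.count indices, the wrap-around adds 2, and the end-index adds 1, which is 2 * (shape.count + 1) + 1:

// Rerunning the topology construction from the initializer above for a tiny shape.
let shape: [SIMD2<Float>] = [[0, 0], [1, 0], [0, 1]] // shape.count == 3

var topology: [UInt32] = []
for vertexIndex in shape.indices.reversed() {
    topology.append(UInt32(vertexIndex))               // index into the current ring
    topology.append(UInt32(shape.count + vertexIndex)) // same index offset into the next ring
}
// Wrap around to the first vertex so the strip closes the ring.
topology.append(UInt32(shape.count - 1))
topology.append(UInt32(2 * shape.count - 1))
// End-index (0xFFFFFFFF), i.e. the primitive-restart value.
topology.append(UInt32.max)

print(topology)       // [2, 5, 1, 4, 0, 3, 2, 5, 4294967295]
print(topology.count) // 9 == 2 * (3 + 1) + 1

Read as a strip, the indices alternate between a vertex on one ring (vertexIndex) and the matching vertex on the next ring (shape.count + vertexIndex), which appears consistent with the .triangleStrip argument used when the LowLevelMesh.Part is created, despite the "fan" wording in the comment.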
Hey, I'm building a camera app where I am applying real-time effects to the viewfinder. One of those effects is a variable blur, so to improve performance I am scaling down the input image using CIFilter.lanczosScaleTransform(). This works fine and runs at 30 FPS, but when running the Metal profiler I can see that the scaling transforms use a lot of GPU time, almost as much as the variable blur. Is there a more efficient way to do this?
The simplified chain is like this:
Scale down viewFinder CVPixelBuffer (CIFilter.lanczosScaleTransform)
Scale up depthMap CVPixelBuffer to match viewFinder size (CIFilter.lanczosScaleTransform)
Create CIImages from both CVPixelBuffers
Apply VariableDepthBlur (CIFilter.maskedVariableBlur)
Scale up final image to metal view size (CIFilter.lanczosScaleTransform)
Render CIImage to a MTKView using CIRenderDestination
From some research, I wonder if scaling the CVPixelBuffer using the Accelerate framework would be faster (see the sketch below). Also, instead of scaling the final image, perhaps I could offload that to the Metal view?
Any pointers greatly appreciated!
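For reference, a vImage-based downscale might look roughly like this; it is a sketch that assumes 32-bit BGRA pixel buffers (other formats need the matching vImage variants) and leaves out pixel-buffer pooling and error handling:

import Accelerate
import CoreVideo

// Scales `source` into `destination` using vImage, assuming both are kCVPixelFormatType_32BGRA.
func scaleBGRA(from source: CVPixelBuffer, into destination: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(source, .readOnly)
    CVPixelBufferLockBaseAddress(destination, [])
    defer {
        CVPixelBufferUnlockBaseAddress(source, .readOnly)
        CVPixelBufferUnlockBaseAddress(destination, [])
    }

    var src = vImage_Buffer(data: CVPixelBufferGetBaseAddress(source),
                            height: vImagePixelCount(CVPixelBufferGetHeight(source)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(source)),
                            rowBytes: CVPixelBufferGetBytesPerRow(source))
    var dst = vImage_Buffer(data: CVPixelBufferGetBaseAddress(destination),
                            height: vImagePixelCount(CVPixelBufferGetHeight(destination)),
                            width: vImagePixelCount(CVPixelBufferGetWidth(destination)),
                            rowBytes: CVPixelBufferGetBytesPerRow(destination))

    // Ask vImage for its higher-quality (slower) resampling filter.
    let error = vImageScale_ARGB8888(&src, &dst, nil, vImage_Flags(kvImageHighQualityResampling))
    assert(error == kvImageNoError)
}

Whether this beats the Core Image path depends on the format the camera delivers; it avoids the GPU but adds CPU work, so profiling both is the only way to know.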
Hi, all.
I've been writing various compute functions using Metal.
However, unlike the + and * operators, the / operator in the following kernel has an accuracy issue.
This function divides a matrix of shape [n, x, y] by a scalar [1].
When compared to numpy or torch, if I change the operator in the function to * or + instead of /, I get exactly the same results; but with /, the mean differs by more than 1e-5.
(For reference, this was written with reference to the Metal kernel code in llama.cpp.)
kernel void kernel_div_single_f16(
        device const half * src0,
        device const half * src1,
        device       half * dst,
        constant  int64_t & ne00,
        constant  int64_t & ne01,
        constant  int64_t & ne02,
        constant  int64_t & ne03,
        uint3 tgpig[[threadgroup_position_in_grid]],
        uint3 tpitg[[thread_position_in_threadgroup]],
        uint3 ntg[[threads_per_threadgroup]]) {
    const int64_t i03 = tgpig.z;
    const int64_t i02 = tgpig.y;
    const int64_t i01 = tgpig.x;
    const uint offset = i03*ne02*ne01*ne00 + i02*ne01*ne00 + i01*ne00;
    for (int i0 = tpitg.x; i0 < ne00; i0 += ntg.x) {
        dst[offset + i0] = src0[offset + i0] / *src1;
    }
}
My machine is a MacBook Pro (16-inch, 2021) running macOS 12.5 with an Apple M1 Pro.
Are there any known issues related to division? Thanks in advance for your reply.
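One thing worth ruling out, stated as an assumption rather than a confirmed cause: Metal compiles shaders with fast math enabled by default, which relaxes floating-point precision, and half itself carries only about three decimal digits of precision, so some divergence from numpy/torch results computed in float32/float64 is expected. A quick host-side experiment is to rebuild the library with fast math disabled and see whether the mean difference shrinks:

import Metal

// Recompile the kernel source with fast math disabled, to test whether relaxed-precision
// division explains the difference. `device` and `kernelSource` are assumed to exist elsewhere.
func makePreciseLibrary(device: MTLDevice, kernelSource: String) throws -> MTLLibrary {
    let options = MTLCompileOptions()
    options.fastMathEnabled = false // request IEEE 754-conformant floating-point behavior
    return try device.makeLibrary(source: kernelSource, options: options)
}

If the difference persists, it is likely just the cost of dividing in half; a common mitigation is to do the arithmetic in float inside the kernel and cast the result back to half.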
Hello!
I noticed that after WWDC 24, support was added for MTKView in visionOS 1.0+. This is great! But when I use an MTKView on anything before visionOS 2.0, it doesn't work and the app ends up crashing.
Console error when running on a device that is on visionOS 1.2:
Symbol not found: _$s27_CompositorServices_SwiftUI0A5LayerV13configuration8rendererAcA0aE13Configuration_p_ySo019CP_OBJECT_cp_layer_G0CScMYcctcfC
Expected in: <EFD973D2-97E1-380B-B89A-13CC3820B7F7> /System/Library/Frameworks/_CompositorServices_SwiftUI.framework/_CompositorServices_SwiftUI
It looks like MTKView may be using Compositor Services under the hood?
Any help would be great.
Thank you!
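If the crash only happens on visionOS 1.x at the point the MTKView path is reached, one defensive pattern (a sketch, not a confirmed fix) is to gate that path on a runtime availability check; `MetalKitContentView` is a hypothetical wrapper around MTKView:

import SwiftUI

struct RendererContainer: View {
    var body: some View {
        if #available(visionOS 2.0, *) {
            MetalKitContentView() // hypothetical MTKView-based wrapper
        } else {
            // Fall back to whatever rendering path worked on visionOS 1.x.
            Text("This experience requires visionOS 2.")
        }
    }
}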
I am using Metal for rendering. When I call Metal's newCommandQueue method, there is some probability that it returns nil. However, MTLCreateSystemDefaultDevice returns a non-nil value, which means my device supports Metal. What causes newCommandQueue to return nil, and is there any way to avoid this situation? Thank you.
tl;dr how can I get raw YUV in a Metal fragment shader from a VideoToolbox 10-bit/BT.2020 HEVC stream without any extra/secret format conversions?
With VideoToolbox and 10-bit HEVC, I've found that it defaults to CVPixelBuffers with the formats kCVPixelFormatType_Lossless_420YpCbCr10PackedBiPlanarFullRange or kCVPixelFormatType_Lossy_420YpCbCr10PackedBiPlanarFullRange. To mitigate this, I have added the following snippet to my application:
// We need our pixels unpacked for 10-bit so that the Metal textures actually work
var pixelFormat:OSType? = nil
let bpc = getBpcForVideoFormat(videoFormat!)
let isFullRange = getIsFullRangeForVideoFormat(videoFormat!)
// TODO: figure out how to check for 422/444, CVImageBufferChromaLocationBottomField?
if bpc == 10 {
    pixelFormat = isFullRange ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
}

let videoDecoderSpecification:[NSString: AnyObject] = [kVTVideoDecoderSpecification_EnableHardwareAcceleratedVideoDecoder:kCFBooleanTrue]
var destinationImageBufferAttributes:[NSString: AnyObject] = [kCVPixelBufferMetalCompatibilityKey: true as NSNumber, kCVPixelBufferPoolMinimumBufferCountKey: 3 as NSNumber]
if pixelFormat != nil {
    destinationImageBufferAttributes[kCVPixelBufferPixelFormatTypeKey] = pixelFormat! as NSNumber
}

var decompressionSession:VTDecompressionSession? = nil
err = VTDecompressionSessionCreate(allocator: nil, formatDescription: videoFormat!, decoderSpecification: videoDecoderSpecification as CFDictionary, imageBufferAttributes: destinationImageBufferAttributes as CFDictionary, outputCallback: nil, decompressionSessionOut: &decompressionSession)
In short, I need kCVPixelFormatType_420YpCbCr10BiPlanar so that I have a straightforward MTLPixelFormat.r16Unorm/MTLPixelFormat.rg16Unorm texture binding for Y/CbCr. Metal, seemingly, has no direct pixel format for 420YpCbCr10PackedBiPlanar. I'd also rather not use any color conversion in VideoToolbox, in order to save on processing (and to ensure that the color transforms/transfer characteristics match between streamer/client, since I also have a custom transfer characteristic to mitigate blocking in dark scenes).
However, I noticed that in visionOS 2, the CVPixelBuffer I receive is no longer a compressed render target (likely a bug), which caused GPU texture read bandwidth to skyrocket from 2GiB/s to 30GiB/s. More importantly, this implies that VideoToolbox may in fact be doing an extra color conversion step, wasting memory bandwidth.
Does Metal actually have no way to handle 420YpCbCr10PackedBiPlanar? Are there any examples for reading 10-bit HDR HEVC buffers directly with Metal?
The title is self-explanatory. I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
What is the info property of SwiftUI::Layer?
I couldn't find any document or resource about it.
It appears in SwiftUI::Layer's definition:
struct Layer {
    metal::texture2d<half> tex;
    float2 info[5];

    /// Samples the layer at `p`, in user-space coordinates,
    /// interpolating linearly between pixel values. Returns an RGBA
    /// pixel value, with color components premultipled by alpha (i.e.
    /// [R*A, G*A, B*A, A]), in the layer's working color space.
    half4 sample(float2 p) const {
        p = metal::fma(p.x, info[0], metal::fma(p.y, info[1], info[2]));
        p = metal::clamp(p, info[3], info[4]);
        return tex.sample(metal::sampler(metal::filter::linear), p);
    }
};
Suppose I want to draw a red rectangle onto my render target using a compute shader.
id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
[encoder setComputePipelineState:pipelineState];
simd_ushort2 position = simd_make_ushort2(100, 100);
simd_ushort2 size = simd_make_ushort2(50, 50);
[encoder setBytes:&position length:sizeof(position) atIndex:0];
[encoder setTexture:drawable.texture atIndex:0];
[encoder dispatchThreads:MTLSizeMake(size.x, size.y, 1)
threadsPerThreadgroup:MTLSizeMake(32, 32, 1)];
[encoder endEncoding];
#include <metal_stdlib>
using namespace metal;
kernel void
Compute(ushort2 position_in_grid [[thread_position_in_grid]],
constant ushort2 &position,
texture2d<half, access::write> texture)
{
texture.write(half4(1, 0, 0, 1), position_in_grid + position);
}
This works just fine:
Now, say for whatever reason I want to start using imageblocks in my compute kernel. First, I set the imageblock size on the CPU side:
id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
[encoder setComputePipelineState:pipelineState];
MTLSize threadgroupSize = MTLSizeMake(32, 32, 1);
[encoder setImageblockWidth:threadgroupSize.width
height:threadgroupSize.height];
simd_ushort2 position = simd_make_ushort2(100, 100);
simd_ushort2 size = simd_make_ushort2(50, 50);
[encoder setBytes:&position length:sizeof(position) atIndex:0];
[encoder setTexture:drawable.texture atIndex:0];
MTLSize gridSize = MTLSizeMake(size.x, size.y, 1);
[encoder dispatchThreads:gridSize threadsPerThreadgroup:threadgroupSize];
And then I update the compute kernel to simply declare the imageblock – note I never actually read from or write to it:
#include <metal_stdlib>
using namespace metal;
struct Foo
{
int foo;
};
kernel void
Compute(ushort2 position_in_grid [[thread_position_in_grid]],
constant ushort2 &position,
texture2d<half, access::write> texture,
imageblock<Foo> imageblock)
{
texture.write(half4(1, 0, 0, 1), position_in_grid + position);
}
And now out of nowhere Metal’s shader validation starts complaining about mismatched texture usage flags:
2024-06-22 00:57:15.663132+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.672004+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.682422+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.687587+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
2024-06-22 00:57:15.698106+1000 TextureUsage[80558:4539093] [GPUDebug] Texture usage flags mismatch executing kernel function "Compute" encoder: "1", dispatch: 0
The texture I’m writing to comes from a CAMetalDrawable whose associated CAMetalLayer has framebufferOnly set to NO. What am I missing?
A sample of some error messages presented in the Xcode log during execution of a program. There is nothing in the messages that helps identify a component as the origin of the message, nor is there a locatable definition for the various labels and fields in the text. Which component, or even which framework, does this set of messages originate from? Your search engine is useless because it returns gibberish. It doesn't even follow the common behavior of search engines, because it takes label strings compounded from common words and searches for the common words instead of using the concatenated string that is the internal variable name in the text.
I am looking for a definition of the error, along with a way to locate the context in which it occurs.
2024-06-22 19:45:58.089943-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation isRegisteredSubsystem:category:]) Permission denied: GenerativeFunctionMetrics / ANEInferenceOperationPrepareForEncode
2024-06-22 19:45:58.089967-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation sendEventWithIdentifier:payload:]) Invalid inputs: payload={
aneModelPath = "/System/Library/PrivateFrameworks/RoomScanCore.framework/PrecompiledModels/lcnn_floorplan_model.bundle/H14G.bundle/main/segment_0__ane/net.hwx";
bundleIdentfier = "com.example.apple-samplecode.RoomPlanExampleApp9QSS565686";
}
2024-06-22 19:45:58.094770-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation sendEventWithIdentifier:payload:]) Invalid inputs: payload={
bundleIdentfier = "com.example.apple-samplecode.RoomPlanExampleApp9QSS565686";
e5FunctionName = main;
numSegments = 1;
}
I have an immersive space that is rendered using Metal. Is there a way to position SwiftUI views at coordinates relative to positions in my immersive space?
I know that I can display a volume with RealityKit content simultaneously with my Metal content. The volume's coordinate system (specifically, its bounds) does not coincide with my entire Metal scene, though.
One approach I thought of would be to open two views in my immersive space. That way, I could simply add attachments to invisible RealityKit entities in one view at positions where I have content in my Metal scene.
Unfortunately, it seems that while I can declare an ImmersiveSpace composed of multiple RealityViews, like this:
ImmersiveSpace() {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }
    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}
That results in two coinciding RealityKit views in the immersive space.
I cannot, however, do something like this:
ImmersiveSpace() {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my metal renderer and stuff
    }
    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
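For the attachment idea itself, the basic pattern inside a single RealityView looks roughly like this; the position is a placeholder for a value taken from the Metal scene, and the sketch assumes the RealityView and the Metal content share the immersive space's origin:

import SwiftUI
import RealityKit

struct MetalOverlayView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                label.position = SIMD3<Float>(0, 1.2, -1.5) // placeholder world-space position
                content.add(label)
            }
        } update: { content, attachments in
            // Reposition the attachment here if the Metal scene moves.
        } attachments: {
            Attachment(id: "label") {
                Text("Hello from SwiftUI")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}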
I'm having issues compiling a visionOS app via Xcode Cloud.
Here's the error:
2024-06-20T09:24:47.634651911Z compileSkybox /Volumes/workspace/DerivedData/Build/Intermediates.noindex/ArchiveIntermediates/Fi22/InstallationBuildProductsLocation/Applications/VR22.app/ImageBasedLighting.rclink /Volumes/workspace/repository/Fi22/ImageBasedLighting.skybox (in target 'Fi22' from project 'Fi22')
2024-06-20T09:24:47.634669847Z cd /Volumes/workspace/repository
2024-06-20T09:24:47.634681756Z /Applications/Xcode.app/Contents/Developer/usr/bin/rctool create skybox -skyboxPath=/Volumes/workspace/repository/Fi22/ImageBasedLighting.skybox -o=/Volumes/workspace/DerivedData/Build/Intermediates.noindex/ArchiveIntermediates/Fi22/InstallationBuildProductsLocation/Applications/VR22.app
2024-06-20T09:24:47.634730433Z Error: There is no available Metal device on this system.
2024-06-20T09:24:47.634745602Z Command compileSkybox failed with a nonzero exit code
How do I configure Xcode Cloud to use an instance with Metal support? Another option would be precompiling the skybox, but I couldn't find any info on that either.