Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc.
The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
The kernel is very simple, sampling from the X coordinate equal to the source width minus the current X coordinate:
float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.
All of this makes sense to me. What I'd like to know is if there is a way to handle this filter's requirements for sampling from the entire source while still limiting the ROI to maintain performance? I think the answer is probably no within our current structure and performance limits. But I wanted to see if there's anything we're missing.
I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that perform mirroring of either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.
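To make that last idea concrete, here is a rough sketch (in Swift, with illustrative names; mirrorKernel and the "mirror the left half onto the right half" behavior are assumptions, not our actual filter) of an roiCallback that requests only the tile itself plus its mirrored counterpart rather than the full source extent:

import CoreImage

// Hypothetical ROI for a kernel that mirrors the left half of `src` onto the right half.
func applyHalfMirror(kernel: CIKernel, to src: CIImage) -> CIImage? {
    let extent = src.extent
    let axis = extent.midX  // vertical mirror axis

    return kernel.apply(extent: extent,
                        roiCallback: { _, destRect in
        // Reflect the requested tile about the mirror axis...
        let mirrored = CGRect(x: 2 * axis - destRect.maxX,
                              y: destRect.minY,
                              width: destRect.width,
                              height: destRect.height)
        // ...and union it with the tile itself, since the un-mirrored half samples identity.
        // Inset by -1 for edge sampling, then clamp to the source extent.
        return destRect.union(mirrored).insetBy(dx: -1, dy: -1).intersection(extent)
    },
                        arguments: [src])
}

The ROI per tile then stays roughly twice the tile size instead of the whole source, at the cost of writing a callback per mirroring variant.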
Any thoughts/input appreciated, thanks!
We've recently updated a view which displays photos via a Core Image chain, from an NSOpenGLView subclass to an NSView backed by a CAMetalLayer.
Things are mostly working fine, but we occasionally hit a deadlock involving CALayer and CIMetalCommandQueue. I've made a spindump, and it appears none of our code is involved in the locked threads. Despite this, I'm assuming the problem is ours 😅
I saw the mention in the CAMetalLayer documentation about releasing drawables with an @autoreleasepool in drawRect; we have done this, and I can't find any places where we're retaining a drawable outside drawRect.
https://developer.apple.com/documentation/quartzcore/cametallayer?language=objc
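For reference, this is the scoping pattern we follow, shown here as a minimal Swift sketch (class, property, and render details are illustrative, not our actual code):

import Cocoa
import Metal
import QuartzCore

// Sketch: keep each frame's drawable inside an autorelease pool so it is released promptly.
final class MetalBackedView: NSView {
    var metalLayer: CAMetalLayer!
    var commandQueue: MTLCommandQueue!

    override func draw(_ dirtyRect: NSRect) {
        autoreleasepool {
            guard let drawable = metalLayer.nextDrawable(),
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }
            // Encode rendering into `commandBuffer`, targeting drawable.texture ...
            commandBuffer.present(drawable)
            commandBuffer.commit()
            // `drawable` goes out of scope here, inside the pool.
        }
    }
}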
I am seeing this on macOS 15.0.1, on an M2 Max MacBook Pro. We haven't seen it on macOS 14.x, but that may just be luck, as we have not tested much on that OS.
I don't know how to move forward debugging this, any help much appreciated!
The two locking threads in the spindump are MainThread and CI::RenderCompletionQueue.
Thread 0xb3b0f8 DispatchQueue "com.apple.main-thread"(1)
…
CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 364 (QuartzCore + 178484) [0x1a5dba934]
invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 176 (QuartzCore + 1782676) [0x1a5f42394]
-[CALayer(CALayerPrivate) _copyRenderLayer:layerFlags:commitFlags:] + 720 (QuartzCore + 179304) [0x1a5dbac68]
-[NSImage(CALayerSupport) CA_copyRenderValue] + 52 (AppKit + 1517960) [0x1a0fe0988]
-[NSImage CGImageForProposedRect:context:hints:] + 440 (AppKit + 1246368) [0x1a0f9e4a0]
-[NSImage _usingBestRepresentationForRect:context:hints:body:] + 148 (AppKit + 1247980) [0x1a0f9eaec]
__48-[NSImage CGImageForProposedRect:context:hints:]_block_invoke + 80 (AppKit + 1248792) [0x1a0f9ee18]
-[NSCIImageRep CGImageForProposedRect:context:hints:] + 112 (AppKit + 6200292) [0x1a1457be4]
+[CIContext contextWithOptions:] + 40 (CoreImage + 549532) [0x1a8df129c]
-[CIContext initWithOptions:] + 588 (CoreImage + 65744) [0x1a8d7b0d0]
+[CIContext(Internal) internalContextWithMTLDevice:options:] + 76 (CoreImage + 66568) [0x1a8d7b408]
CIMetalCommandQueueCreate + 52 (CoreImage + 66692) [0x1a8d7b484]
-[CaptureMTLDevice newCommandQueue] + 168 (GPUToolsCapture + 130200) [0x1029e7c98]
-[CaptureMTLCommandQueue initWithBaseObject:captureDevice:] + 204 (GPUToolsCapture + 799812) [0x102a8b444]
GTMTLGuestAppClientAddMTLCommandQueueInfo + 108 (GPUToolsCapture + 313572) [0x102a148e4]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb41b08 - part of a deadlock)
and
Thread 0xb41b08 DispatchQueue "CI::RenderCompletionQueue"(535) 1000 samples (1-1000) priority 46 (base 46)
start_wqthread + 8 (libsystem_pthread.dylib + 52464) [0x1035f4cf0]
_pthread_wqthread + 288 (libsystem_pthread.dylib + 20736) [0x1035ed100]
_dispatch_workloop_worker_thread + 580 (libdispatch.dylib + 129956) [0x1026afba4]
_dispatch_root_queue_drain_deferred_wlh + 652 (libdispatch.dylib + 133360) [0x1026b08f0]
_dispatch_lane_invoke + 468 (libdispatch.dylib + 68516) [0x1026a0ba4]
_dispatch_lane_serial_drain + 860 (libdispatch.dylib + 64160) [0x10269faa0]
_dispatch_client_callout + 20 (libdispatch.dylib + 26788) [0x1026968a4]
_dispatch_call_block_and_release + 32 (libdispatch.dylib + 19300) [0x102694b64]
CI::Object::unref() const + 120 (CoreImage + 35360) [0x1a8d73a20]
CI::MetalContext::~MetalContext() + 16 (CoreImage + 192260) [0x1a8d99f04]
CI::MetalContext::~MetalContext() + 236 (CoreImage + 192536) [0x1a8d9a018]
-[CaptureMTLCommandQueue dealloc] + 44 (GPUToolsCapture + 797916) [0x102a8acdc]
GTMTLGuestAppClientRemoveMTLCommandQueueInfo + 236 (GPUToolsCapture + 314240) [0x102a14b80]
GTMTLGuestAppClient_allCaptureObjectsUnsafe + 392 (GPUToolsCapture + 298776) [0x102a10f18]
AllMetalLayers + 64 (GPUToolsCapture + 518224) [0x102a46850]
MakeLayerInfos + 320 (GPUToolsCapture + 518608) [0x102a469d0]
-[CALayer frame] + 88 (QuartzCore + 74624) [0x1a5da1380]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb3b0f8 - part of a deadlock)
This is a game where you can play over 100 games, and every game is very different and unique. You can save your favorite games out of the 100 and store them, and you can store all 100 if you like them all. Make your wildest dreams into games that you can search up, and YouTubers could play them and make good videos with this game. - the Creator
:D
Hope you enjoy it! Also, I'm a kid, so I don't know how to make an update.
I am looking to implement CAMetalDisplayLink on a separate thread on a macOS application. I am basing my implementation on the following example project:
Achieving Smooth Frame Rates with Metal Display Link
This project allows you to configure whether a separate thread is used for rendering by setting RENDER_ON_MAIN_THREAD in GameConfig to 0. However, when I set it to use a separate thread, nothing is rendered. Stepping through the code shows that a separate thread is created, but a CAMetalDisplayLinkUpdate is never received. Does anyone know why this does not work?
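For context, this is the general pattern I understand the separate-thread path to use, written as a hedged Swift sketch (class and thread names are illustrative; the usual gotcha is that the dedicated thread has to keep its run loop alive, or no updates arrive):

import QuartzCore
import Metal

// Sketch: drive a CAMetalDisplayLink from a dedicated thread.
final class RenderThreadDriver: NSObject, CAMetalDisplayLinkDelegate {
    private let metalLayer: CAMetalLayer
    private var displayLink: CAMetalDisplayLink?

    init(metalLayer: CAMetalLayer) {
        self.metalLayer = metalLayer
        super.init()
        let thread = Thread { [weak self] in
            guard let self else { return }
            let link = CAMetalDisplayLink(metalLayer: self.metalLayer)
            link.delegate = self
            link.add(to: .current, forMode: .default)
            self.displayLink = link
            // Keep the run loop alive so the display link can fire on this thread.
            while !Thread.current.isCancelled {
                _ = RunLoop.current.run(mode: .default, before: .distantFuture)
            }
        }
        thread.name = "RenderThread"
        thread.start()
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        let drawable = update.drawable
        // Encode and present a frame targeting `drawable` here.
        _ = drawable
    }
}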
Hi everyone,
I encountered a very strange shader bug that seems to be related to Metal only (not OpenGL).
You can find the full description of the issue on the Babylon.js forums here: https://forum.babylonjs.com/t/strange-shader-related-issue-on-macos-with-safari-and-chrome-not-firefox/54289 (sorry, I couldn't post a clickable link, as that seems to be blocked here).
I have a workaround to fix the issue (as described in the link above), but this really looks like an issue in Metal itself.
Let me know if you need more details or explanations.
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. Then we overwrite all pixels of the buffer, get the IOSurface from the buffer, and set the IOSurface as a CALayer's contents property in another process to show it; everything works fine.
Now we want to do some optimization by only overwriting the pixels that changed between frames. The way we'd like to do this is that after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to frame info. Next time, if we get the same IOSurface ID, we just compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.
However, there is no documentation stating whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer will contain the previous pixels (the content from before it was returned to the pool). Do we have this guarantee? If not, is there any way we can know whether the created buffer contains the previous pixels or not?
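The bookkeeping we have in mind looks roughly like this Swift sketch (names and the frame-index scheme are illustrative; whether a recycled buffer still holds its previous contents is exactly the open question):

import CoreVideo
import IOSurface

// Sketch: remember which frame was last written to each IOSurface the pool hands back,
// so unchanged regions could be skipped on reuse.
final class FrameTracker {
    private var lastFrameBySurface: [IOSurfaceID: Int] = [:]

    func dequeueBuffer(from pool: CVPixelBufferPool,
                       frameIndex: Int) -> (buffer: CVPixelBuffer, previousFrame: Int?)? {
        var buffer: CVPixelBuffer?
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer) == kCVReturnSuccess,
              let pixelBuffer = buffer,
              let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
            return nil
        }
        let surfaceID = IOSurfaceGetID(surface)
        let previous = lastFrameBySurface[surfaceID]   // nil if we've never seen this surface
        lastFrameBySurface[surfaceID] = frameIndex
        return (pixelBuffer, previous)
    }
}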
First I get this
ar_world_tracking_provider_query_device_anchor_at_timestamp <0x302b9c0a0>: The device_anchor can only be queried when the world tracking provider is running.
This all seemed to break with the auto-update to 2.0.1. The Simulator runs the code fine.
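For context, the guard that error message describes would look something like this rough sketch (assuming the visionOS ARKit API; names are illustrative and this is not my actual code):

import ARKit
import QuartzCore

// Sketch: only query the device anchor once the world tracking provider is running,
// which is the condition the error message refers to.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async {
    do {
        try await session.run([worldTracking])
    } catch {
        print("Failed to start world tracking: \(error)")
    }
}

func currentDeviceAnchor() -> DeviceAnchor? {
    guard worldTracking.state == .running else { return nil }
    return worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
}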
I seem to see an infinite stall here
frameLayer.endUpdate()
// Pace frames by waiting for the optimal prediction time.
try await LayerRenderer.Clock().sleep(until: timing.optimalInputTime, tolerance: nil)
// Start submitting the updated frame.
frameLayer.startSubmission() <-
Hello Dev Community,
I've been thinking over Apple's preference for USDZ for AR and 3D content, especially when there's the widely used GLTF. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, GLTF too is a popular format with its own merits, like being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like GLTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
I'm trying to display a right-aligned timecode in my game. I had expected that digits would all have the same width, but this doesn't seem to be the case in SpriteKit, even though it seems to be the case in AppKit.
In SpriteKit, with the default font, there is a noticeable difference in width between the digit 1 and the rest (1 is thinner), so whenever a number's least significant digit is 1, all preceding digits shift slightly to the right. This happens even when setting an NSAttributedString with a font that has a fixedAdvance attribute.
class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let label = SKLabelNode(text: "")
        view.scene!.addChild(label)
//        label.horizontalAlignmentMode = .left
        label.horizontalAlignmentMode = .right
        var i = 11
        Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
            label.text = "\(i)"
//            let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-UltraLight", .fixedAdvance: 20]), size: 30)!
//            let paragraphStyle = NSMutableParagraphStyle()
//            paragraphStyle.alignment = .right
//            label.attributedText = NSAttributedString(string: "\(i)", attributes: [.font: font, .foregroundColor: SKColor.labelColor, .paragraphStyle: paragraphStyle])
            i += 5
        }
    }
}
With AppKit, when using SpriteKit's default font HelveticaNeue-UltraLight, this issue doesn't exist, regardless of whether I set the fixedAdvance font attribute.
class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-UltraLight"]), size: 30)!
//        let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-Light", .fixedAdvance: 20]), size: 30)!
        let paragraphStyle = NSMutableParagraphStyle()
        paragraphStyle.alignment = .right
        let textField = NSTextField(labelWithString: "")
        textField.font = font
        textField.alignment = .right
//        textField.alignment = .left
        textField.frame = CGRect(x: 100, y: 100, width: 100, height: 100)
        view.addSubview(textField)
        var i = 11
        Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
            textField.stringValue = "\(i)"
//            textField.attributedStringValue = NSAttributedString(string: "\(i)", attributes: [.font: font, .paragraphStyle: paragraphStyle])
            i += 5
        }
    }
}
Is there a solution to this problem?
I filed FB15553700.
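One possibility I haven't fully verified against SKLabelNode's renderer (sketch only): the system font has a tabular-figures variant via monospacedDigitSystemFont, which gives every digit the same advance width, at the cost of swapping HelveticaNeue for the system font. The function name here is illustrative.

import SpriteKit
import AppKit

// Sketch: build a right-aligned timecode label using the monospaced-digit system font.
func makeTimecodeLabel(showing value: Int) -> SKLabelNode {
    let label = SKLabelNode(text: "")
    label.horizontalAlignmentMode = .right

    let font = NSFont.monospacedDigitSystemFont(ofSize: 30, weight: .ultraLight)
    let paragraphStyle = NSMutableParagraphStyle()
    paragraphStyle.alignment = .right
    label.attributedText = NSAttributedString(
        string: "\(value)",
        attributes: [.font: font,
                     .foregroundColor: SKColor.labelColor,
                     .paragraphStyle: paragraphStyle])
    return label
}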
I want to use Swift to write code that draws multiple polygons. I would like to find some examples as references. Can anyone provide example code or tell me where I can find such examples? Thank you! A small sketch of one possible approach is below.
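As a starting point, here is a small self-contained example using SwiftUI's Canvas and Path to fill and stroke several polygons; the vertex lists are arbitrary sample data:

import SwiftUI

// Draws each polygon from its vertex list: move to the first point, add lines, close, fill, stroke.
struct PolygonsView: View {
    let polygons: [[CGPoint]] = [
        [CGPoint(x: 50, y: 20), CGPoint(x: 90, y: 90), CGPoint(x: 10, y: 90)],                              // triangle
        [CGPoint(x: 120, y: 20), CGPoint(x: 200, y: 20), CGPoint(x: 200, y: 90), CGPoint(x: 120, y: 90)]    // quad
    ]

    var body: some View {
        Canvas { context, _ in
            for vertices in polygons {
                guard let first = vertices.first else { continue }
                var path = Path()
                path.move(to: first)
                for point in vertices.dropFirst() {
                    path.addLine(to: point)
                }
                path.closeSubpath()
                context.fill(path, with: .color(.blue.opacity(0.4)))
                context.stroke(path, with: .color(.blue), lineWidth: 2)
            }
        }
        .frame(width: 220, height: 110)
    }
}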
I've been working with ARKit and Metal on the Vision Pro, but I've encountered a slight flickering issue with the mesh rendered using Metal. The flickering tends to occur around the edges of objects or on pixels with high color contrast, and it becomes more noticeable as the distance increases. Is there any way to resolve this issue?
Hello, I am trying to obtain the RealityKit URL. Can someone share please?
Technical Issue Report for Maple Tale App - Audio Format Compatibility
Dear Apple Technical Support Team,
I hope this message finds you well. My name is [Your Name], and I am part of the development team behind the Maple Tale app. We have encountered an issue with audio format compatibility within our app that we believe requires your assistance.
The issue pertains to the audio formats supported by our app. Currently, our app only supports WAV and OGG formats, which has led to a limitation in user experience. We are looking to expand our support to include additional formats such as MP3 and AAC, which are widely used by our user base.
To provide a clear understanding of the issue, I have outlined the steps to reproduce the problem:
Launch the Maple Tale app.
Proceed with the game normally.
Upon picking up equipment within the game, a warning box pops up indicating the audio format compatibility issue.
This warning box appears due to the app's inability to process audio files in formats other than WAV and OGG. We understand that this can be a significant hindrance to the user experience, and we are eager to resolve this as quickly as possible.
We have reviewed the documentation available on the official Apple Developer website but are still seeking clarification on the best practices for supporting a wider range of audio formats within our app. We would greatly appreciate any official recommendations or guidelines that could assist us in this endeavor.
Additionally, we are considering updating our app to inform users about the current audio format requirements and provide guidance on how to optimize their audio files for the best performance within our app. If there are any official documents or resources that we should reference when crafting this update, please let us know.
We appreciate your time and assistance in this matter and look forward to your guidance on how to best implement audio format support on the iOS platform.
Thank you once again for your support.
Warm regards,
The update, which I did just the other day, was the only change I can see. I did log on to iCloud.com, and I also looked at Apple Developer; I didn't see any additional terms that needed to be accepted. I make sure to log into iCloud on the simulator, and it seems to stay logged in until I call fetchSavedGames, which I have exit after 20 seconds due to timing out. Then, when I go back to check my account in Settings, it's asking me to "Sign in to iCloud" again. It does work properly on a device. So it doesn't stay logged into iCloud on the simulator, and it seems like fetchSavedGames from GKLocalPlayer is what resets that. Any help or suggestions would be appreciated. Thanks.
Is the format of the error, warning, and other messages generated by the Metal compiler documented anywhere?
Hi,
is there a way in visionOS to anchor an entity to the POV via RealityKit?
I need an entity which is always fixed to the 'camera'.
I'm aware that this is discouraged from a design perspective as it can be visually distracting. In my case though I want to use it to attach a fixed collider entity, so that the camera can collide with objects in the scene.
Edit:
ARView on iOS has a lot of very useful helper properties and functions like cameraTransform (https://developer.apple.com/documentation/realitykit/arview/cameratransform)
How would I get this information on visionOS? RealityView's content does not seem to offer anything comparable.
An example use case would be adding an entity to the scene at my user's eye level, basically depending on their height.
I found https://developer.apple.com/documentation/realitykit/realityrenderer which has an activeCamera property but so far it's unclear to me in which context RealityRenderer is used and how I could access it.
Appreciate any hints, thanks!
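For the fixed-to-camera collider case specifically, one approach I've seen suggested (sketch only; entity names and the 0.5 m offset are illustrative) is a head-targeted AnchorEntity, which keeps its children fixed relative to the wearer's head:

import RealityKit
import SwiftUI

// Sketch: a collider entity parented to a head anchor so it follows the POV.
struct HeadAnchoredView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            let collider = Entity()
            collider.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            collider.position = [0, 0, -0.5]   // half a metre in front of the head
            headAnchor.addChild(collider)
            content.add(headAnchor)
        }
    }
}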
Hey guys,
is it possible to implement mirror-like reflections like in this project:
https://developer.apple.com/documentation/metal/metal_sample_code_library/rendering_reflections_in_real_time_using_ray_tracing
for visionOS? Or is the hardware not capable of Metal ray tracing?
Thanks in advance
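One quick runtime check worth mentioning (a sketch, not a statement about what Vision Pro hardware actually reports): MTLDevice exposes a supportsRaytracing flag you can query before attempting a reflections renderer like the sample.

import Metal

// Ask the current GPU whether it advertises support for the Metal ray tracing API.
if let device = MTLCreateSystemDefaultDevice() {
    print("Metal ray tracing supported: \(device.supportsRaytracing)")
}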
Hi, when I run my app, the console says:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed
Given
I do not understand much at all about how to write shaders
I do not understand the math associated with page-curl effects
I am trying to:
implement a page-curl shader for use on SwiftUI views.
I've lifted a shader from HIROKI IKEUCHI that I believe they lifted from a non-metal shader resource online, and I'm trying to digest it.
One thing I want to do is to paint the "underside" of the view with a given color and maintain the transparency of rounded corners when they are flipped over.
So, if an underside pixel is "clear" then I want to sample the pixel at that position on the original layer instead of the "curl effect" pixel.
There are two comments in the shader below where I check the alpha, and underside flags, and paint the color red as a debug test.
The shader gives this result:
The outside of those rounded corners is appropriately red, and the white border pixels are detected as "not clear". But the "inner" portion of the border is... mistakenly red?
I don't get it. Any help would be appreciated. I feel tapped out and I don't have any IRL resources I can ask.
//
// PageCurl.metal
// ShaderDemo3
//
// Created by HIROKI IKEUCHI on 2023/10/17.
//
#include <metal_stdlib>
#include <SwiftUI/SwiftUI_Metal.h>
using namespace metal;
#define pi float(3.14159265359)
#define blue half4(0.0, 0.0, 1.0, 1.0)
#define red half4(1.0, 0.0, 0.0, 1.0)
#define radius float(0.4)
// Return the color of this pixel
[[ stitchable ]] half4 pageCurl
(
float2 _position,
SwiftUI::Layer layer,
float4 bounds,
float2 _clickedPoint,
float2 _mouseCursor
) {
half4 undersideColor = half4(0.5, 0.5, 1.0, 1.0);
float2 originalPosition = _position;
// Correct the y coordinate
float2 position = float2(_position.x, bounds.w - _position.y);
float2 clickedPoint = float2(_clickedPoint.x, bounds.w - _clickedPoint.y);
float2 mouseCursor = float2(_mouseCursor.x, bounds.w - _mouseCursor.y);
float aspect = bounds.z / bounds.w;
float2 uv = position * float2(aspect, 1.) / bounds.zw;
float2 mouse = mouseCursor.xy * float2(aspect, 1.) / bounds.zw;
float2 mouseDir = normalize(abs(clickedPoint.xy) - mouseCursor.xy);
float2 origin = clamp(mouse - mouseDir * mouse.x / mouseDir.x, 0., 1.);
float mouseDist = clamp(length(mouse - origin)
+ (aspect - (abs(clickedPoint.x) / bounds.z) * aspect) / mouseDir.x, 0., aspect / mouseDir.x);
if (mouseDir.x < 0.)
{
mouseDist = distance(mouse, origin);
}
float proj = dot(uv - origin, mouseDir);
float dist = proj - mouseDist;
float2 linePoint = uv - dist * mouseDir;
half4 pixel = layer.sample(position);
if (dist > radius)
{
pixel = half4(0.0, 0.0, 0.0, 0.0); // background behind curling layer (note: 0.0 opacity)
pixel.rgb *= pow(clamp(dist - radius, 0., 1.) * 1.5, .2);
}
else if (dist >= 0.0)
{
// THIS PORTION HANDLES THE CURL SHADED PORTION OF THE RESULT
// map to cylinder point
float theta = asin(dist / radius);
float2 p2 = linePoint + mouseDir * (pi - theta) * radius;
float2 p1 = linePoint + mouseDir * theta * radius;
bool underside = (p2.x <= aspect && p2.y <= 1. && p2.x > 0. && p2.y > 0.);
uv = underside ? p2 : p1;
uv = float2(uv.x, 1.0 - uv.y); // invert y
pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME<----
if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
pixel = red;
}
// Commented out while debugging alpha issues
// if (underside && pixel.a == 0.0) {
// pixel = layer.sample(originalPosition);
// } else if (underside) {
// pixel = undersideColor; // underside
// }
// Shadow the pixel being returned
pixel.rgb *= pow(clamp((radius - dist) / radius, 0., 1.), .2);
}
else
{
// THIS PORTION HANDLES THE NON-CURL-SHADED PORTION OF THE SAMPLING.
float2 p = linePoint + mouseDir * (abs(dist) + pi * radius);
bool underside = (p.x <= aspect && p.y <= 1. && p.x > 0. && p.y > 0.);
uv = underside ? p : uv;
uv = float2(uv.x, 1.0 - uv.y); // invert y
pixel = layer.sample(uv * float2(1. / aspect, 1.) * float2(bounds[2], bounds[3])); // ME
if (underside && pixel.a == 0.0) { //<---- PIXEL.A IS 0.0 WHYYYYY
pixel = red;
}
// Commented out while debugging alpha issues
// if (underside && pixel.a == 0.0) {
// // If the new underside pixel is clear, we should sample the original image's pixel.
// pixel = layer.sample(originalPosition);
// } else if (underside) {
// pixel = undersideColor;
// }
}
return pixel;
}
I have a neural network model for segmentation; I successfully integrated it and am getting a grayscale image. Next, I need to apply the segmentation mask in RealityKit to achieve an occlusion effect (like person segmentation). I tried doing it through post-processing and other methods, but none of them worked. Is there any example of how this can be done in RealityKit?