Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

SK3DNode hitTest not working in SpriteKit/SceneKit
I have this minimum repro code:

import SpriteKit
import SceneKit
import GameplayKit
import UIKit

class MyGameScene3D: SCNScene {
    weak var node3D: MyNode3D!

    override init() {
        super.init()

        background.contents = UIColor.green

        let playground = SCNNode()
        playground.boundingBox = (
            min: SCNVector3(x: 0, y: 0, z: 0),
            max: SCNVector3(x: 10, y: 10, z: 10))

        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
        box.position = SCNVector3(x: 5, y: 5, z: 5)
        playground.addChildNode(box)

        playground.position = SCNVector3(x: 0, y: 0, z: 0)
        rootNode.addChildNode(playground)

        let light = SCNLight()
        light.type = .ambient
        let lightNode = SCNNode()
        lightNode.light = light
        rootNode.addChildNode(lightNode)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.eulerAngles = SCNVector3(x: -3.14 / 2, y: 0, z: 0)
        cameraNode.position = SCNVector3(x: 5, y: 11, z: 5)
        rootNode.addChildNode(cameraNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handleTouchBegan(_ location: CGPoint) {
        let res = node3D.hitTest(location)
        print(res)
    }
}

class MyNode3D: SK3DNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let scene = scnScene as! MyGameScene3D
        let location = touch.location(in: self)
        print(location)
        scene.handleTouchBegan(location)
    }
}

class GameScene: SKScene {
    init() {
        super.init(size: CGSize(width: 500, height: 1000))
        self.backgroundColor = .red

        let node3D = MyNode3D()
        let scene3D = MyGameScene3D()
        node3D.scnScene = scene3D
        scene3D.node3D = node3D
        node3D.isUserInteractionEnabled = true
        node3D.viewportSize = CGSize(width: 100, height: 200)
        node3D.position = CGPoint(x: 50, y: 100)
        addChild(node3D)

        let up = SKSpriteNode(color: .blue, size: CGSize(width: 500, height: 10))
        up.anchorPoint = CGPoint(x: 0, y: 0)
        up.position = CGPoint(x: 0, y: 200)
        addChild(up)

        let right = SKSpriteNode(color: .gray, size: CGSize(width: 10, height: 500))
        right.anchorPoint = CGPoint(x: 0, y: 0)
        right.position = CGPoint(x: 100, y: 0)
        addChild(right)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

Basically, I have an SK3DNode of size 100x200, positioned at the lower-left corner of the screen (see screenshot below). Inside this SK3DNode I have an SCNScene, where I put a 10x10x10 Playground node at position (0, 0, 0). I then put a camera node right at the top of the Playground at position (5, 11, 5); the camera looks down along the -y axis, with Euler angles (-90, 0, 0). Inside the Playground I put a small 1x1x1 box at the center of the Playground, at (5, 5, 5). The two long bars (gray and blue) are just there to indicate the boundary of the SK3DNode. The rendered result is correct (see screenshot below). However, I can't get the hit test working: I tap on the center 1x1x1 box on screen and the correct coordinate is printed out, but the hit-test result is empty. I want to get the center 1x1x1 box back when tapping there. How can I do so?

Update: I tried to loop through all the pixels from -2000 to 2000, and still no hit:

func handleTouchBegan(_ location: CGPoint) {
    for x in -2000...2000 {
        print("handling x: \(x)")
        for y in -2000...2000 {
            // Test the looped coordinate rather than the original touch location.
            let res = node3D.hitTest(CGPoint(x: x, y: y))
            if !res.isEmpty {
                print("\(x), \(y), \(res)")
            }
        }
    }
    print("Done")
}
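One thing that may be worth ruling out (a sketch, not a confirmed fix, reusing the class names from the post above): SK3DNode's own coordinate space is centered on the node with y pointing up, while viewport-style hit testing is conventionally expressed as a top-left-origin point within viewportSize, so converting the touch point before calling hitTest could behave differently.

import SpriteKit

extension MyNode3D {
    // Hypothetical conversion from this node's centered, y-up coordinate
    // space into a top-left-origin viewport space of size `viewportSize`.
    func viewportPoint(from nodePoint: CGPoint) -> CGPoint {
        CGPoint(x: nodePoint.x + viewportSize.width / 2,
                y: viewportSize.height / 2 - nodePoint.y)
    }
}

Calling node3D.hitTest(node3D.viewportPoint(from: location)) instead of passing the raw node-space location would at least tell you whether the empty result is a coordinate-space mismatch.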
Replies: 1 · Boosts: 0 · Views: 215 · Oct ’24
Game Porting Toolkit Crashing Playing HighEnd Games
Updated to GPT 2 with the app format and Whisky in October. I'm using Steam in GPT to play the Windows-only games. The games consistently crash my computer into a restart when I play games like Liar's Bar and Phasmophobia, the latter even with its performance-optimization update released today. Both of them do that when I load into the game room (the bar / ghost house), or even while loading. It ran Outpath properly though, which is something with much lower quality demands than LB and Phas. I don't think it's a simple RAM config issue, because Phasmophobia ran fine for me with older versions in GPT 1; I did experience occasional crashes, but I think those only crashed the game, not my Mac. I'm not sure what I'm doing wrong that I can't run those games, or if it's a GPT problem. I'm using an M1 MacBook Pro with 16GB memory on Sequoia 15.0.1.
Replies: 1 · Boosts: 0 · Views: 233 · Oct ’24
Resolution for Games
Hi,

When using a High Definition Display, is there a way to render at exactly the target resolution on the physical screen?

My understanding is that the default behavior is to render to a backing store with a resolution (in pixels) which can be twice the size of the logical resolution (in points), and then let the OS handle the down-scaling to the actual target resolution on the screen. This is all nice for non-graphics-intensive apps, but it means that my game will render at a higher resolution than needed, which seems like an obvious loss of performance. My expectation is that, for graphics-intensive applications such as games, we should be able to query and render at the final resolution of the display. Can it / should it be done?

Thank you for your help :)

FYI, I did find a document which explains how to set up your CAMetalLayer to render at a custom resolution. I suspect that this may be what I have to do?
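For what it's worth, a minimal sketch of one way to query the panel's physical pixel size and size a full-screen CAMetalLayer's drawable to it; the helper name is made up, and whether rendering at (or below) the backing-store resolution is acceptable for your content is the judgment call described above:

import AppKit
import QuartzCore

// Hypothetical helper: size a full-screen CAMetalLayer's drawable to the
// display's physical pixel resolution instead of the (possibly larger)
// backing-store resolution, letting the system scale the result.
func sizeDrawableToPanel(_ layer: CAMetalLayer, in window: NSWindow) {
    guard let screen = window.screen,
          let screenNumber = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber,
          let mode = CGDisplayCopyDisplayMode(CGDirectDisplayID(screenNumber.uint32Value))
    else { return }

    // pixelWidth/pixelHeight are the mode's true pixels, which can differ
    // from points * backingScaleFactor when a scaled mode is selected.
    layer.drawableSize = CGSize(width: mode.pixelWidth, height: mode.pixelHeight)
}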
Replies: 2 · Boosts: 0 · Views: 407 · Oct ’24
GameplayKit usage with Swift 6: Call to main actor-isolated instance method 'run' in a synchronous nonisolated context
Hi there,

With a couple of other developers we have been busy migrating our SpriteKit games and frameworks to Swift 6. There is one issue we are unable to resolve, and it involves the interaction between SpriteKit and GameplayKit.

There is a very small demo repo that clearly demonstrates the issue. It can be found here: https://github.com/AchrafKassioui/GameplayKitExplorer/blob/main/GameplayKitExplorer/Basic.swift

The relevant code is also pasted here:

import SwiftUI
import SpriteKit
import GameplayKit

struct BasicView: View {
    var body: some View {
        SpriteView(scene: BasicScene())
            .ignoresSafeArea()
    }
}

#Preview {
    BasicView()
}

class BasicScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .gray
        view.isMultipleTouchEnabled = true

        let entity = BasicEntity(color: .systemYellow, size: CGSize(width: 100, height: 100))
        if let renderComponent = entity.component(ofType: BasicRenderComponent.self) {
            addChild(renderComponent.sprite)
        }
    }
}

@MainActor
class BasicEntity: GKEntity {
    init(color: SKColor, size: CGSize) {
        super.init()

        let renderComponent = BasicRenderComponent(color: color, size: size)
        addComponent(renderComponent)

        let animationComponent = BasicAnimationComponent()
        addComponent(animationComponent)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

@MainActor
class BasicRenderComponent: GKComponent {
    let sprite: SKSpriteNode

    init(color: SKColor, size: CGSize) {
        self.sprite = SKSpriteNode(texture: nil, color: color, size: size)
        super.init()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

class BasicAnimationComponent: GKComponent {
    let action1 = SKAction.scale(to: 1.3, duration: 0.07)
    let action2 = SKAction.scale(to: 1, duration: 0.15)

    override init() {
        super.init()
    }

    override func didAddToEntity() {
        if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
            renderComponent.sprite.run(SKAction.repeatForever(SKAction.sequence([action1, action2])))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

As SKNode is designed to run on the MainActor, BasicRenderComponent is attributed with @MainActor as well. This is needed because this GKComponent is dedicated to encapsulating the node that is rendered to the scene. There is also a BasicAnimationComponent; this GKComponent is responsible for animating the rendered node.

Obviously this is just an example, but when using GameplayKit in combination with SpriteKit it is very common that a GKComponent instance manipulates an SKNode referenced from another GKComponent instance, often via open func update(deltaTime seconds: TimeInterval) or, as in this example, inside didAddToEntity.

Now, the problem is that in the above example (and the same goes for update(deltaTime seconds: TimeInterval)) the method didAddToEntity is not isolated to the MainActor, as GKComponent is not either. This leads to the error Call to main actor-isolated instance method 'run' in a synchronous nonisolated context, as indeed the compiler cannot infer that didAddToEntity is isolated to the MainActor. Marking BasicAnimationComponent as @MainActor does not help, as this isolation is not propagated to the methods inherited from the superclass. In fact, we tried a plethora of other options, but none resolved this issue.

How should we proceed with this? As of now, this is really holding us back from migrating to Swift 6. Hope someone is able to help out here!
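Not an official answer, but one workaround that can be sketched (under the assumption that these GameplayKit callbacks are in fact invoked on the main thread when driven from a SpriteKit scene, which the compiler cannot verify): keep the override nonisolated and assert main-actor isolation explicitly with MainActor.assumeIsolated, which traps at runtime if that assumption is ever violated. A revised BasicAnimationComponent might look like this:

import SpriteKit
import GameplayKit

class BasicAnimationComponent: GKComponent {
    override func didAddToEntity() {
        // didAddToEntity is nonisolated (GKComponent is not main-actor-isolated),
        // so assert main-actor isolation before touching the SKNode.
        MainActor.assumeIsolated {
            if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
                renderComponent.sprite.run(.repeatForever(.sequence([
                    .scale(to: 1.3, duration: 0.07),
                    .scale(to: 1, duration: 0.15)
                ])))
            }
        }
    }
}

Whether trading compile-time checking for a runtime assertion is acceptable for your frameworks is of course a judgment call.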
Replies: 2 · Boosts: 2 · Views: 401 · Oct ’24
SceneKit app seriously hangs when run in fullscreen
I've been running my SceneKit game for many weeks in Xcode without performance issues. The game itself is finished, so I thought I could go ahead with publishing it on the App Store, but when archiving it in Xcode and running the archived app, I noticed that it seriously hangs. The hangs only seem to happen when I run the game in fullscreen mode. I tried disabling Game Mode, but the hangs still happen. Only when I run in windowed mode does the game run smoothly. Instruments confirms that there are many serious hangs, but it also reports that CPU usage is quite low during those hangs, on average about 15%. From what I know, hangs happen when the main thread is busy, but how can that be when CPU usage is so low, and why does it only happen in fullscreen mode for release builds?
Replies: 2 · Boosts: 0 · Views: 296 · Oct ’24
SceneKit custom physics fields using wrong position?
In the simplest case I can come up with, I create a scene (either fully or partially in code) with a single dynamic body, located slightly away from the origin. I give the body a charge as well as adding an electric field to the node. The body does nothing (as is to be expected, since it's the source of the field). However, if I replace that field with a custom field (which does nothing except report back the position value passed in), the position reported is the location of the body in the local space of its parent (in this case, the root node) rather than that of the node the field is attached to (i.e. itself).

I've attached the code customising the SwiftUI app template. Hopefully someone can tell me what I'm doing wrong?

The ContentView customisation…

struct ContentView: View {
    var body: some View {
        SceneView(scene: ElectricScene(), options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}

And the code to create the scene…

import Foundation
import SceneKit

class ElectricScene: SCNScene {
    override init() {
        super.init()

        physicsWorld.gravity = SCNVector3(0, 0, 0)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 10)
        rootNode.addChildNode(cameraNode)

        let ballNode = SCNNode(geometry: SCNSphere(radius: 0.5))
        ballNode.position = SCNVector3(2, 0, 0)
        ballNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
        ballNode.physicsBody?.charge = -1
        rootNode.addChildNode(ballNode)

//        ballNode.physicsField = SCNPhysicsField.electric()
        ballNode.physicsField = SCNPhysicsField.customField { position, _, _, _, _ in
            print(position)
            return SCNVector3Zero
        }
    }

    @available(*, unavailable)
    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

This (repeatedly) prints out the following…

SCNVector3(x: 2.0, y: 0.0, z: 0.0)

…which is the position of the node relative to the root node, rather than relative to the source of the field (itself).
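If the callback really does hand back positions in the root node's space, one workaround sketch (not a fix for the underlying behavior, and the helper name is invented) is to convert the incoming position into the field node's own space before using it:

import SceneKit

// Hypothetical setup: attach a custom field to fieldNode and convert the
// incoming position into fieldNode's coordinate space ourselves.
func attachCustomField(to fieldNode: SCNNode, in scene: SCNScene) {
    fieldNode.physicsField = SCNPhysicsField.customField { [weak fieldNode, weak scene] position, _, _, _, _ in
        guard let fieldNode, let scene else { return SCNVector3Zero }
        // Convert from the root node's space (what the callback appears to
        // pass) into the field node's local space, using the presentation
        // node so the conversion tracks the animated/physics position.
        let local = scene.rootNode.convertPosition(position, to: fieldNode.presentation)
        print(local)
        return SCNVector3Zero
    }
}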
Replies: 1 · Boosts: 0 · Views: 267 · Oct ’24
CAMetalDisplayLink does not work on separate thread
I am looking to implement CAMetalDisplayLink on a separate thread in a macOS application. I am basing my implementation on the following example project: Achieving Smooth Frame Rates with Metal Display Link. This project allows you to configure whether a separate thread is used for rendering by setting RENDER_ON_MAIN_THREAD in GameConfig to 0. However, when I set it to use a separate thread, nothing is rendered. Stepping through the code shows that a separate thread is created, but a CAMetalDisplayLinkUpdate is never received. Does anyone know why this does not work?
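For comparison, a minimal sketch of the pattern generally needed when driving CAMetalDisplayLink from your own thread: the link must be added to that thread's RunLoop and the run loop must actually be kept running, otherwise no update is ever delivered. Class and property names here are placeholders, not taken from the sample project:

import Foundation
import QuartzCore

final class RenderThreadDriver: NSObject, CAMetalDisplayLinkDelegate {
    private var displayLink: CAMetalDisplayLink?

    func start(with layer: CAMetalLayer) {
        let thread = Thread {
            let link = CAMetalDisplayLink(metalLayer: layer)
            link.delegate = self
            link.add(to: .current, forMode: .default)
            self.displayLink = link
            // Keep this thread's run loop alive so the link can call back.
            while !Thread.current.isCancelled {
                _ = RunLoop.current.run(mode: .default, before: .distantFuture)
            }
        }
        thread.name = "RenderThread"
        thread.start()
    }

    func metalDisplayLink(_ link: CAMetalDisplayLink,
                          needsUpdate update: CAMetalDisplayLink.Update) {
        // Encode and present using update.drawable here.
    }
}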
Replies: 1 · Boosts: 0 · Views: 254 · Oct ’24
Strange Metal related shader issue
Hi everyone, I encountered a very strange shader bug that seems related to Metal only (not OpenGL). You can find the full description of the issue on the Babylon.js forums here: https://forum.babylonjs.com/t/strange-shader-related-issue-on-macos-with-safari-and-chrome-not-firefox/54289 (sorry, I couldn't post a clickable link, as that seems to be blocked here). I have a workaround for the issue (as described in the link above), but this really looks like an issue in Metal itself. Let me know if you need more details or explanations.
Replies: 0 · Boosts: 0 · Views: 206 · Oct ’24
Getting stuck in first frame of renderLoop.
First I get this:

ar_world_tracking_provider_query_device_anchor_at_timestamp <0x302b9c0a0>: The device_anchor can only be queried when the world tracking provider is running.

This all seemed to break with the auto-update to 2.0.1. The Simulator runs the code fine. I seem to see an infinite stall here:

frameLayer.endUpdate()

// Pace frames by waiting for the optimal prediction time.
try await LayerRenderer.Clock().sleep(until: timing.optimalInputTime, tolerance: nil)

// Start submitting the updated frame.
frameLayer.startSubmission() <-
Replies: 0 · Boosts: 0 · Views: 223 · Oct ’24
Nimbus Steel Series not working with AVP Simulator
I have this game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS. I have the Simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
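For reference, a small sketch of how discovery can be double-checked on the app side (no claim that it changes the Simulator behavior): check the already-connected list at launch in addition to observing the notification, and optionally nudge wireless discovery.

import Foundation
import GameController

final class ControllerWatcher {
    private var token: NSObjectProtocol?

    func start() {
        // Controllers that were already connected before this code ran.
        for controller in GCController.controllers() {
            print("Already connected:", controller.vendorName ?? "unknown")
        }

        // Observe new connections.
        token = NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                                       object: nil,
                                                       queue: .main) { note in
            guard let controller = note.object as? GCController else { return }
            print("Connected:", controller.vendorName ?? "unknown")
        }

        // Optionally kick off wireless discovery as well.
        GCController.startWirelessControllerDiscovery {
            print("Wireless discovery finished")
        }
    }
}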
Replies: 3 · Boosts: 0 · Views: 251 · Oct ’24
Will CVPixelBuffer created from CVPixelBufferPoolCreatePixelBuffer keep existing pixels
We have a pixel buffer pool managed by the system (created using the CVPixelBufferPoolCreate API). Each time we need a pixel buffer, we call CVPixelBufferPoolCreatePixelBuffer to create one from the pool. We then overwrite all pixels of the buffer, get the IOSurface from the buffer, and set the IOSurface as a CALayer's contents property in another process to show it. Everything works fine.

Now we want to do some optimization by only overwriting the pixels that changed between frames. The way we'd like to do this: after we call CVPixelBufferPoolCreatePixelBuffer to create a buffer, we get the underlying IOSurface ID and map it to the frame info. The next time we get the same IOSurface ID, we compare the current frame info with the one we stored and only update the changed pixels in the CVPixelBuffer.

However, there is no documentation mentioning whether a CVPixelBuffer created using CVPixelBufferPoolCreatePixelBuffer will contain its previous pixels (the content from before it was returned to the pool). Do we have this guarantee? If not, is there any way we can know whether the created buffer contains the previous pixels or not?
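A sketch of the bookkeeping described above, written so that it does not depend on the pool preserving contents at all: only surfaces this writer has fully written before are eligible for an incremental update, and anything else gets a full rewrite. Type and method names are invented for illustration:

import CoreVideo
import IOSurface

struct FrameInfo { /* whatever per-frame state you diff against */ }

final class IncrementalPixelWriter {
    private var lastFrameInfo: [UInt32: FrameInfo] = [:]   // keyed by IOSurfaceID

    func acquireBuffer(for frame: FrameInfo,
                       from pool: CVPixelBufferPool) -> (buffer: CVPixelBuffer, needsFullRewrite: Bool)? {
        var pixelBuffer: CVPixelBuffer?
        guard CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer,
              let surface = CVPixelBufferGetIOSurface(buffer)?.takeUnretainedValue() else { return nil }

        let surfaceID = IOSurfaceGetID(surface)
        // Only trust incremental updates for surfaces we have fully written
        // at least once; unknown surfaces are rewritten from scratch.
        let needsFullRewrite = (lastFrameInfo[surfaceID] == nil)
        lastFrameInfo[surfaceID] = frame
        return (buffer, needsFullRewrite)
    }
}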
Replies: 2 · Boosts: 0 · Views: 184 · Oct ’24
SKLabelNode keeps jumping back and forth when displaying different numbers with equal number of digits
I'm trying to display a right-aligned timecode in my game. I had expected that digits would all have the same width, but this doesn't seem to be the case in SpriteKit, even though it does seem to be the case in AppKit.

In SpriteKit, with the default font there is a noticeable difference in width between the digit 1 and the rest (1 is thinner), so whenever displaying a number whose least significant digit is 1, all preceding digits shift slightly to the right. This happens even when setting an NSAttributedString with a font that has a fixedAdvance attribute.

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        let label = SKLabelNode(text: "")
        view.scene!.addChild(label)

//        label.horizontalAlignmentMode = .left
        label.horizontalAlignmentMode = .right

        var i = 11
        Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
            label.text = "\(i)"
//            let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-UltraLight", .fixedAdvance: 20]), size: 30)!
//            let paragraphStyle = NSMutableParagraphStyle()
//            paragraphStyle.alignment = .right
//            label.attributedText = NSAttributedString(string: "\(i)", attributes: [.font: font, .foregroundColor: SKColor.labelColor, .paragraphStyle: paragraphStyle])
            i += 5
        }
    }
}

With AppKit, when using SpriteKit's default font HelveticaNeue-UltraLight, this issue doesn't exist, regardless of whether I set the fixedAdvance font attribute.

class ViewController: NSViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-UltraLight"]), size: 30)!
//        let font = NSFont(descriptor: NSFontDescriptor(fontAttributes: [.name: "HelveticaNeue-Light", .fixedAdvance: 20]), size: 30)!
        let paragraphStyle = NSMutableParagraphStyle()
        paragraphStyle.alignment = .right

        let textField = NSTextField(labelWithString: "")
        textField.font = font
        textField.alignment = .right
//        textField.alignment = .left
        textField.frame = CGRect(x: 100, y: 100, width: 100, height: 100)
        view.addSubview(textField)

        var i = 11
        Timer.scheduledTimer(withTimeInterval: 0.5, repeats: true) { _ in
            textField.stringValue = "\(i)"
//            textField.attributedStringValue = NSAttributedString(string: "\(i)", attributes: [.font: font, .paragraphStyle: paragraphStyle])
            i += 5
        }
    }
}

Is there a solution to this problem? I filed FB15553700.
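One avenue worth testing (a sketch only, and it does mean giving up HelveticaNeue-UltraLight for the digits): a font whose digits share a fixed advance, such as the monospaced-digit system font, applied through attributedText on the SKLabelNode.

import SpriteKit
import AppKit

func makeTimecodeLabel() -> SKLabelNode {
    let label = SKLabelNode(text: "")
    label.horizontalAlignmentMode = .right

    // Digits all share the same advance width in this font, so "11" and
    // "14" occupy identical horizontal space.
    let font = NSFont.monospacedDigitSystemFont(ofSize: 30, weight: .ultraLight)
    label.attributedText = NSAttributedString(
        string: "11",
        attributes: [.font: font, .foregroundColor: SKColor.labelColor])
    return label
}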
Replies: 0 · Boosts: 0 · Views: 168 · Oct ’24
Core Image deadlock on Sequoia
We've recently updated a view which displays photos via a Core Image chain from an NSOpenGLView subclass to an NSView with a backing CAMetalLayer. Things are mostly working fine, but we occasionally hit a deadlock involving CALayer and CIMetalCommandQueue. I've made a spindump; it appears none of our code is involved in the locked threads. Despite this, I'm assuming the problem is ours 😅

I saw the mention in the CAMetalLayer documentation about releasing drawables with an @autoreleasepool in drawRect; we have done this, and I can't find any place where we're retaining a drawable outside drawRect. https://developer.apple.com/documentation/quartzcore/cametallayer?language=objc

I am seeing this on macOS 15.0.1, M2 Max MacBook Pro. We haven't seen it on macOS 14.x, but that may be luck, as we have not tested much on that OS. I don't know how to move forward debugging this, any help much appreciated!

The two locking threads in the spindump are the main thread and CI::RenderCompletionQueue.

Thread 0xb3b0f8 DispatchQueue "com.apple.main-thread"(1)
…
CA::Layer::commit_if_needed(CA::Transaction*, void (CA::Layer*, unsigned int, unsigned int) block_pointer) + 364 (QuartzCore + 178484) [0x1a5dba934]
invocation function for block in CA::Context::commit_transaction(CA::Transaction*, double, double*) + 176 (QuartzCore + 1782676) [0x1a5f42394]
-[CALayer(CALayerPrivate) _copyRenderLayer:layerFlags:commitFlags:] + 720 (QuartzCore + 179304) [0x1a5dbac68]
-[NSImage(CALayerSupport) CA_copyRenderValue] + 52 (AppKit + 1517960) [0x1a0fe0988]
-[NSImage CGImageForProposedRect:context:hints:] + 440 (AppKit + 1246368) [0x1a0f9e4a0]
-[NSImage _usingBestRepresentationForRect:context:hints:body:] + 148 (AppKit + 1247980) [0x1a0f9eaec]
__48-[NSImage CGImageForProposedRect:context:hints:]_block_invoke + 80 (AppKit + 1248792) [0x1a0f9ee18]
-[NSCIImageRep CGImageForProposedRect:context:hints:] + 112 (AppKit + 6200292) [0x1a1457be4]
+[CIContext contextWithOptions:] + 40 (CoreImage + 549532) [0x1a8df129c]
-[CIContext initWithOptions:] + 588 (CoreImage + 65744) [0x1a8d7b0d0]
+[CIContext(Internal) internalContextWithMTLDevice:options:] + 76 (CoreImage + 66568) [0x1a8d7b408]
CIMetalCommandQueueCreate + 52 (CoreImage + 66692) [0x1a8d7b484]
-[CaptureMTLDevice newCommandQueue] + 168 (GPUToolsCapture + 130200) [0x1029e7c98]
-[CaptureMTLCommandQueue initWithBaseObject:captureDevice:] + 204 (GPUToolsCapture + 799812) [0x102a8b444]
GTMTLGuestAppClientAddMTLCommandQueueInfo + 108 (GPUToolsCapture + 313572) [0x102a148e4]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb41b08 - part of a deadlock)

and

Thread 0xb41b08 DispatchQueue "CI::RenderCompletionQueue"(535) 1000 samples (1-1000) priority 46 (base 46)
start_wqthread + 8 (libsystem_pthread.dylib + 52464) [0x1035f4cf0]
_pthread_wqthread + 288 (libsystem_pthread.dylib + 20736) [0x1035ed100]
_dispatch_workloop_worker_thread + 580 (libdispatch.dylib + 129956) [0x1026afba4]
_dispatch_root_queue_drain_deferred_wlh + 652 (libdispatch.dylib + 133360) [0x1026b08f0]
_dispatch_lane_invoke + 468 (libdispatch.dylib + 68516) [0x1026a0ba4]
_dispatch_lane_serial_drain + 860 (libdispatch.dylib + 64160) [0x10269faa0]
_dispatch_client_callout + 20 (libdispatch.dylib + 26788) [0x1026968a4]
_dispatch_call_block_and_release + 32 (libdispatch.dylib + 19300) [0x102694b64]
CI::Object::unref() const + 120 (CoreImage + 35360) [0x1a8d73a20]
CI::MetalContext::~MetalContext() + 16 (CoreImage + 192260) [0x1a8d99f04]
CI::MetalContext::~MetalContext() + 236 (CoreImage + 192536) [0x1a8d9a018]
-[CaptureMTLCommandQueue dealloc] + 44 (GPUToolsCapture + 797916) [0x102a8acdc]
GTMTLGuestAppClientRemoveMTLCommandQueueInfo + 236 (GPUToolsCapture + 314240) [0x102a14b80]
GTMTLGuestAppClient_allCaptureObjectsUnsafe + 392 (GPUToolsCapture + 298776) [0x102a10f18]
AllMetalLayers + 64 (GPUToolsCapture + 518224) [0x102a46850]
MakeLayerInfos + 320 (GPUToolsCapture + 518608) [0x102a469d0]
-[CALayer frame] + 88 (QuartzCore + 74624) [0x1a5da1380]
__ulock_wait2 + 8 (libsystem_kernel.dylib + 60540) [0x19d24bc7c]
*??? (kernel.release.t6020 + 6102048) [0xfffffe0008cd5c20] (blocked by turnstile waiting for Phocus [11343] [unique pid 1001657] thread 0xb3b0f8 - part of a deadlock)
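Two observations, offered tentatively rather than as a diagnosis: both stacks go through GPUToolsCapture (the Xcode GPU frame capture interposer), so it may be worth checking whether the deadlock still reproduces with GPU Frame Capture disabled in the scheme; and the main-thread stack shows NSCIImageRep creating a brand-new CIContext (and Metal command queue) during the CALayer commit. A minimal sketch of avoiding that implicit context, assuming you control where the CIImage gets rendered, is to render through one long-lived CIContext yourself and hand the layer a CGImage:

import CoreImage
import Metal

enum SharedCI {
    // One context (and therefore one Metal command queue) reused for every
    // render, instead of a fresh context created inside the layer commit.
    static let context: CIContext = {
        if let device = MTLCreateSystemDefaultDevice() {
            return CIContext(mtlDevice: device)
        }
        return CIContext()
    }()
}

func renderToCGImage(_ image: CIImage) -> CGImage? {
    SharedCI.context.createCGImage(image, from: image.extent)
}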
Replies: 1 · Boosts: 0 · Views: 248 · Oct ’24
Shrink the world game
This game is where you can play over 100 games, and every game is very different and unique. You can save your favorite games out of the 100 and store them, and you can store over 100 if you like them all. Make your wildest dreams into games that you can search up, and it could have them. YouTubers, you can make good videos with this game. (the Creator) :D Hope you enjoy it. Also, I'm a kid, so I don't know how to make an update.
Replies: 1 · Boosts: 0 · Views: 229 · Oct ’24
Technical Issue Report for Maple Tale App
Technical Issue Report for Maple Tale App - Audio Format Compatibility

Dear Apple Technical Support Team,

I hope this message finds you well. My name is [Your Name], and I am part of the development team behind the Maple Tale app. We have encountered an issue with audio format compatibility within our app that we believe requires your assistance.

The issue pertains to the audio formats supported by our app. Currently, our app only supports WAV and OGG formats, which has led to a limitation in user experience. We are looking to expand our support to include additional formats such as MP3 and AAC, which are widely used by our user base.

To provide a clear understanding of the issue, I have outlined the steps to reproduce the problem:

1. Launch the Maple Tale app.
2. Proceed with the game normally.
3. Upon picking up equipment within the game, a warning box pops up indicating the audio format compatibility issue.

This warning box appears due to the app's inability to process audio files in formats other than WAV and OGG. We understand that this can be a significant hindrance to the user experience, and we are eager to resolve this as quickly as possible.

We have reviewed the documentation available on the official Apple Developer website but are still seeking clarification on the best practices for supporting a wider range of audio formats within our app. We would greatly appreciate any official recommendations or guidelines that could assist us in this endeavor.

Additionally, we are considering updating our app to inform users about the current audio format requirements and provide guidance on how to optimize their audio files for the best performance within our app. If there are any official documents or resources that we should reference when crafting this update, please let us know.

We appreciate your time and assistance in this matter and look forward to your guidance on how to best implement audio format support on the iOS platform.

Thank you once again for your support.

Warm regards,
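As a side note on the decoding question itself (a sketch under the assumption that the game engine can accept PCM buffers): on iOS, AVAudioFile reads MP3 and AAC natively and can return decoded PCM, so no extra decoder needs to be bundled. The helper below is hypothetical, not part of the Maple Tale code.

import AVFoundation

// Hypothetical helper: decode an MP3/AAC (or WAV) file into a PCM buffer
// that an audio engine can consume.
func loadPCM(from url: URL) throws -> AVAudioPCMBuffer? {
    let file = try AVAudioFile(forReading: url)
    let format = file.processingFormat
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        return nil
    }
    try file.read(into: buffer)
    return buffer
}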
Replies: 1 · Boosts: 0 · Views: 239 · Oct ’24
Core Image Tiling and ROI
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects (backwards, upside-down, etc.).

The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:

return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];

The kernel is very simple, sampling from the X coordinate equal to the src width minus the current coordinate:

float4 backwards(sampler image, destination dest)
{
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}

When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image, and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large.

All of this makes sense to me. What I'd like to know is whether there is a way to handle this filter's requirement of sampling from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits, but I wanted to see if there's anything we're missing.

I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that mirror either half of the source image or one quadrant of the source image. In these cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source that is being mirrored. We have not attempted that yet.

Any thoughts/input appreciated, thanks!
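One possible middle ground, sketched here in Swift and only valid for the pure left-right mirror case (the half/quadrant variants would need their own mapping): have the ROI callback return the mirrored counterpart of each requested tile rather than the whole source, so the sampled region stays tile-sized. backwardsKernel stands in for the compiled kernel shown above.

import CoreImage

// Hypothetical wrapper around the "backwards" mirror kernel.
func applyBackwards(_ backwardsKernel: CIKernel, to src: CIImage) -> CIImage? {
    let extent = src.extent
    return backwardsKernel.apply(extent: extent,
                                 roiCallback: { _, destRect in
        // The kernel samples from x' = width - x, so the region of interest
        // for a tile is that tile reflected across the image's vertical
        // center line (plus a small safety inset).
        let mirroredMinX = extent.minX + (extent.maxX - destRect.maxX)
        let mirrored = CGRect(x: mirroredMinX, y: destRect.minY,
                              width: destRect.width, height: destRect.height)
        return mirrored.insetBy(dx: -1, dy: -1)
    },
                                 arguments: [src])
}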
Replies: 3 · Boosts: 0 · Views: 284 · Oct ’24