Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Nimbus Steel Series not working with AVP Simulator
I have this game controller connected to my M1, and the Simulator won't announce it via .GCControllerDidConnect. This works fine on iOS and macOS. I have the Simulator set to "Send Game Controller to Device", which the Simulator does. If I disable that, then I can control the simulator view. But once it's enabled, the Simulator doesn't tell the app about the controller.
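For reference, a minimal observation sketch of the kind that works on iOS and macOS; the open question above is why the connect notification never arrives in the visionOS simulator:

```swift
import Foundation
import GameController

// Minimal sketch: observe controller connects/disconnects and list anything
// the system already knows about.
final class ControllerWatcher {
    private var tokens: [NSObjectProtocol] = []

    init() {
        tokens.append(NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                                             object: nil, queue: .main) { note in
            if let controller = note.object as? GCController {
                print("Connected:", controller.vendorName ?? "unknown controller")
            }
        })
        tokens.append(NotificationCenter.default.addObserver(forName: .GCControllerDidDisconnect,
                                                             object: nil, queue: .main) { _ in
            print("Controller disconnected")
        })
        // Controllers that were already connected before the observers were added.
        print("Already connected:", GCController.controllers().map { $0.vendorName ?? "?" })
    }
}
```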
3 replies · 0 boosts · 251 views · Oct ’24
Does the SpriteView of an SKScene have layers? Unable to get magnifying glass view to work with scene.
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen. I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that the line `magnifiedView?.layer.render(in: context)` in:

```swift
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
```

can be removed without altering the situation, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass. Does anyone know why this is? I don't think it's an issue with the code, so I'm suspecting it's something specific to SpriteKit or SKScene, likely related to how CALayers work. Any pointers would be greatly appreciated.

Full code below:

```swift
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet { layer.borderColor = borderColor.cgColor }
    }

    public var borderWidth: CGFloat = 3 {
        didSet { layer.borderWidth = borderWidth }
    }

    public var showsCrosshair = true {
        didSet { crosshair.isHidden = !showsCrosshair }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet { crosshair.strokeColor = crosshairColor.cgColor }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet { crosshair.lineWidth = crosshairWidth }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If above disabled, no change
        // Possible that nothing's being rendered into context
        // Could it be that SKScene view has no layer?
        magnifiedView?.addSubview(self)
    }
}
```
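One thing that might be worth trying (an assumption, not a confirmed fix): SKView is Metal-backed, and CALayer.render(in:) generally cannot read back content that is not composited through Core Graphics, which would explain the empty context. UIView.drawHierarchy(in:afterScreenUpdates:) snapshots through the window server instead, so a variant of draw(_:) along these lines may behave differently:

```swift
// Hypothetical variant of MagnifyingGlassView.draw(_:) that snapshots the
// magnified view via the window server instead of rendering its layer.
public override func draw(_ rect: CGRect) {
    guard let magnifiedView, let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)

    isHidden = true   // keep the magnifying glass itself out of the snapshot
    magnifiedView.drawHierarchy(in: magnifiedView.bounds, afterScreenUpdates: false)
    isHidden = false
}
```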
0 replies · 0 boosts · 200 views · 3w
Using the same texture for both input & output of a fragment shader
Hello, This exact question was already asked in this forum (8 years ago) but I can't find a definitive answer: Does Metal allow using the same color texture as both an input and output (color attachment) of a fragment shader? Is the behavior defined somewhere? I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident. Would love to get a clarification! Thanks ahead!
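The defined-behavior route, if the answer turns out to be "don't do that", is usually to snapshot the attachment before the pass; a minimal Swift sketch under that assumption (texture, descriptor, and pipeline names are placeholders, and the scratch texture is assumed to match the attachment's size and format):

```swift
import Metal

// Workaround sketch with defined behavior: copy the current color attachment
// into a scratch texture and bind the copy as the fragment input, so the pass
// never reads and writes the same texture.
func encodeFeedbackPass(commandBuffer: MTLCommandBuffer,
                        colorTarget: MTLTexture,
                        scratch: MTLTexture,
                        renderPass: MTLRenderPassDescriptor,
                        pipeline: MTLRenderPipelineState) {
    // 1. Snapshot the attachment's previous contents.
    if let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: colorTarget, to: scratch)
        blit.endEncoding()
    }

    // 2. Render into colorTarget while the shader samples the snapshot.
    renderPass.colorAttachments[0].texture = colorTarget
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPass) {
        encoder.setRenderPipelineState(pipeline)
        encoder.setFragmentTexture(scratch, index: 0)
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
        encoder.endEncoding()
    }
}
```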
8 replies · 1 boost · 758 views · May ’24
SK3DNode hitTest not working in SpriteKit/SceneKit
I have this minimum repro code:

```swift
import SpriteKit
import GameplayKit

class MyGameScene3D: SCNScene {
    weak var node3D: MyNode3D!

    override init() {
        super.init()
        background.contents = UIColor.green

        let playground = SCNNode()
        playground.boundingBox = (
            min: SCNVector3(x: 0, y: 0, z: 0),
            max: SCNVector3(x: 10, y: 10, z: 10))

        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
        box.position = SCNVector3(x: 5, y: 5, z: 5)
        playground.addChildNode(box)

        playground.position = SCNVector3(x: 0, y: 0, z: 0)
        rootNode.addChildNode(playground)

        let light = SCNLight()
        light.type = .ambient
        let lightNode = SCNNode()
        lightNode.light = light
        rootNode.addChildNode(lightNode)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.eulerAngles = SCNVector3(x: -3.14/2, y: 0, z: 0)
        cameraNode.position = SCNVector3(x: 5, y: 11, z: 5)
        rootNode.addChildNode(cameraNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handleTouchBegan(_ location: CGPoint) {
        let res = node3D.hitTest(location)
        print(res)
    }
}

class MyNode3D: SK3DNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let scene = scnScene as! MyGameScene3D
        let location = touch.location(in: self)
        print(location)
        scene.handleTouchBegan(location)
    }
}

class GameScene: SKScene {
    init() {
        super.init(size: CGSize(width: 500, height: 1000))
        self.backgroundColor = .red

        let node3D = MyNode3D()
        let scene3D = MyGameScene3D()
        node3D.scnScene = scene3D
        scene3D.node3D = node3D
        node3D.isUserInteractionEnabled = true
        node3D.viewportSize = CGSize(width: 100, height: 200)
        node3D.position = CGPoint(x: 50, y: 100)
        addChild(node3D)

        let up = SKSpriteNode(color: .blue, size: CGSize(width: 500, height: 10))
        up.anchorPoint = CGPoint(x: 0, y: 0)
        up.position = CGPoint(x: 0, y: 200)
        addChild(up)

        let right = SKSpriteNode(color: .gray, size: CGSize(width: 10, height: 500))
        right.anchorPoint = CGPoint(x: 0, y: 0)
        right.position = CGPoint(x: 100, y: 0)
        addChild(right)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

Basically, I have an SK3DNode of size 100x200, positioned at the lower left corner of the screen (see screenshot below). Then in this SK3DNode, I have an SCNScene, where I put a 10x10x10 Playground node at position (0, 0, 0). Then I put a camera node right at the top of the Playground at position (5, 11, 5), and the camera looks down along the -y axis, with euler angles = (-90, 0, 0). Then in this Playground, I put a small box of size 1x1x1, at the center of the Playground at (5, 5, 5). The 2 long bars (gray & blue) are just there to indicate the boundary of the SK3DNode. The resulting rendering is correct (see screenshot below). However, I can't get the hit test working. I tap on the center 1x1x1 box on screen, get the right coordinate printed out, but the hit test result is empty. I want to get the center 1x1x1 box when tapping there. How can I do so?

Update: I tried to loop through all the pixels from -2000 to 2000, and still got no hit:

```swift
func handleTouchBegan(_ location: CGPoint) {
    for x in -2000...2000 {
        print("handling x: \(x)")
        for y in -2000...2000 {
            let res = node3D.hitTest(location)
            if !res.isEmpty {
                print("\(x), \(y), \(res)")
            }
        }
    }
    print("Done")
}
```
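A small aside on the diagnostic sweep above: the loop never feeds x and y back into the point being tested, so it only ever retests the original touch location. A corrected version of that sweep (still brute force, purely diagnostic):

```swift
func handleTouchBegan(_ location: CGPoint) {
    // Diagnostic sweep: actually vary the tested point this time.
    for x in -2000...2000 {
        for y in -2000...2000 {
            let res = node3D.hitTest(CGPoint(x: CGFloat(x), y: CGFloat(y)))
            if !res.isEmpty {
                print("hit at \(x), \(y): \(res)")
            }
        }
    }
    print("Done")
}
```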
1 reply · 0 boosts · 215 views · Oct ’24
ARKit body tracking (ARBodyAnchor) broken in iOS 18.0 + 18.1
I'm wondering if anyone can suggest a workaround for the broken ARKit body tracking in iOS / iPadOS 18.0 and 18.1? The orientation for foot bones (and possibly other bones) is incorrect in the ARBodyAnchor returned via ARView.session.delegate update method. It works correctly in iOS / iPadOS 17.x. The same failure occurs in a SceneKit app via a ARBodyAnchor in ARSCNViewDelegate. You can easily verify the problem using Apple’s own sample app: https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d Any help would be greatly appreciated.
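For anyone preparing a bug report or workaround, a small diagnostic sketch like the following (assuming an ARBodyTrackingConfiguration session) logs the model-space transform of a foot joint each frame, which makes the 17.x vs 18.x difference easy to capture:

```swift
import ARKit

// Diagnostic sketch: print the left foot joint's model-space transform on
// every anchor update so the orientation regression can be compared between
// OS versions and attached to a feedback report.
class BodyTrackingDiagnostics: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            if let leftFoot = bodyAnchor.skeleton.modelTransform(for: .leftFoot) {
                print("leftFoot model transform:", leftFoot)
            }
        }
    }
}
```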
2 replies · 0 boosts · 283 views · Oct ’24
Cannot disable "Show Graphics HUD"
For some reason I can't disable the Graphics HUD. It's not really a problem for development, but it's also showing in TestFlight apps. It appears, for example, when swiping down on the keyboard, but also in some other places. Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer Mode does not work. Is this a known issue? I've already scrolled through possibly every Google search result, but I can't figure out how to solve this.
0 replies · 0 boosts · 166 views · 3w
Metal rendering shows incorrect color, gputrace shows fragment function returning correct color
I'm experiencing a strange issue where I'm seeing black in a Metal drawable where it should be a different color. When I capture the frame and inspect the returned value from the fragment function, it's correct, but the drawable isn't. This screenshot hopefully illustrates the issue. I've not found any references to similar issues. I saw something about some out-of-bounds or NaN values being dropped to 0 (which would be black), but the debugger doesn't indicate this is happening.
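Hard to diagnose from a description alone, but one cheap thing to rule out is a mismatch between the pipeline's color attachment and the drawable it writes into, or a blend state that zeroes the output; a sanity-check sketch assuming an MTKView-based setup (function names are placeholders):

```swift
import MetalKit

// Sanity-check sketch: build the pipeline against the same pixel format the
// drawable actually has, and rule out a blend configuration that multiplies
// the output by a zero alpha.
func makePipeline(device: MTLDevice, view: MTKView, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertexMain")     // hypothetical names
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentMain")

    let attachment = descriptor.colorAttachments[0]
    attachment?.pixelFormat = view.colorPixelFormat   // must match the drawable
    attachment?.isBlendingEnabled = false             // rule out blending as the culprit

    return try device.makeRenderPipelineState(descriptor: descriptor)
}
```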
1 reply · 0 boosts · 176 views · 3w
GameplayKit usage with Swift 6: Call to main actor-isolated instance method 'run' in a synchronous nonisolated context
Hi there, With a couple of other developers we have been busy migrating our SpriteKit games and frameworks to Swift 6. There is one issue we are unable to resolve, and it involves the interaction between SpriteKit and GameplayKit. There is a very small demo repo that clearly demonstrates the issue. It can be found here: https://github.com/AchrafKassioui/GameplayKitExplorer/blob/main/GameplayKitExplorer/Basic.swift

The relevant code is also pasted here:

```swift
import SwiftUI
import SpriteKit
import GameplayKit

struct BasicView: View {
    var body: some View {
        SpriteView(scene: BasicScene())
            .ignoresSafeArea()
    }
}

#Preview {
    BasicView()
}

class BasicScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .gray
        view.isMultipleTouchEnabled = true

        let entity = BasicEntity(color: .systemYellow, size: CGSize(width: 100, height: 100))
        if let renderComponent = entity.component(ofType: BasicRenderComponent.self) {
            addChild(renderComponent.sprite)
        }
    }
}

@MainActor
class BasicEntity: GKEntity {
    init(color: SKColor, size: CGSize) {
        super.init()

        let renderComponent = BasicRenderComponent(color: color, size: size)
        addComponent(renderComponent)

        let animationComponent = BasicAnimationComponent()
        addComponent(animationComponent)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

@MainActor
class BasicRenderComponent: GKComponent {
    let sprite: SKSpriteNode

    init(color: SKColor, size: CGSize) {
        self.sprite = SKSpriteNode(texture: nil, color: color, size: size)
        super.init()
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

class BasicAnimationComponent: GKComponent {
    let action1 = SKAction.scale(to: 1.3, duration: 0.07)
    let action2 = SKAction.scale(to: 1, duration: 0.15)

    override init() {
        super.init()
    }

    override func didAddToEntity() {
        if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
            renderComponent.sprite.run(SKAction.repeatForever(SKAction.sequence([action1, action2])))
        }
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
```

As SKNode is designed to run on the MainActor, the BasicRenderComponent is attributed with @MainActor as well. This is needed as this GKComponent is dedicated to encapsulating the node that is rendered to the scene. There is also a BasicAnimationComponent; this GKComponent is responsible for animating the rendered node. Obviously, this is just an example, but when using GameplayKit in combination with SpriteKit it is very common that a GKComponent instance manipulates an SKNode referenced from another GKComponent instance, often done via open func update(deltaTime seconds: TimeInterval) or, as in this example, inside didAddToEntity.

Now, the problem is that in the above example (and the same goes for update(deltaTime seconds: TimeInterval)) the method didAddToEntity is not isolated to the MainActor, as GKComponent is not either. This leads to the error Call to main actor-isolated instance method 'run' in a synchronous nonisolated context, as indeed the compiler cannot infer that didAddToEntity is isolated to the MainActor. Marking BasicAnimationComponent as @MainActor does not help, as this isolation is not propagated back to the superclass's inherited methods. In fact, we tried a plethora of other options, but none resolved this issue. How should we proceed with this? As of now, this is really holding us back from migrating to Swift 6. Hope someone is able to help out here!
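One pattern that may be worth trying here (an assumption, not a confirmed GameplayKit-sanctioned fix, and it requires a Swift runtime that provides MainActor.assumeIsolated): since SpriteKit drives these callbacks on the main thread in practice, the main-actor work can be wrapped in MainActor.assumeIsolated, which traps at runtime if that assumption is ever violated. A sketch against the types from the demo above:

```swift
import GameplayKit
import SpriteKit

// Sketch: assert at runtime that GameplayKit is calling didAddToEntity on the
// main thread, then do the main-actor-isolated work inside that scope.
class AssumedMainActorAnimationComponent: GKComponent {
    override func didAddToEntity() {
        MainActor.assumeIsolated {
            if let renderComponent = entity?.component(ofType: BasicRenderComponent.self) {
                let pulse = SKAction.sequence([
                    SKAction.scale(to: 1.3, duration: 0.07),
                    SKAction.scale(to: 1.0, duration: 0.15)
                ])
                renderComponent.sprite.run(SKAction.repeatForever(pulse))
            }
        }
    }
}
```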
2 replies · 2 boosts · 401 views · Oct ’24
PingFang.ttc font file is missing in iOS 18.0
I'm an iOS developer, and I've been testing our app on the iOS 18.0 beta. I noticed a problem with font rendering, and after troubleshooting I've found that it's caused by the removal of the PingFang.ttc font in 18.0. I would like to ask the reason for removing this font file, and which font should be used to display Chinese in the future? My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested Beta 1 and it has the same issue. In previous versions of the system, the PingFang font is located at /System/Library/Fonts/LanguageSupport/PingFang.ttc. But in iOS 18.0, the font file in this directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either. I traversed the following system font directories and could not find the PingFang.ttc font file:

/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch

Looking for answers, thanks for the help!
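For anyone triaging the same report, a small diagnostic sketch along these lines can confirm whether the PingFang SC faces still resolve and which font CoreText actually falls back to for Chinese text (the face names below are the usual PostScript names, assumed unchanged):

```swift
import UIKit
import CoreText

// Diagnostic sketch: check whether the PingFang faces still resolve on the
// installed OS, and ask CoreText which installed font it would fall back to
// when drawing Chinese text.
func dumpChineseFontInfo() {
    for name in ["PingFangSC-Regular", "PingFangTC-Regular", "PingFangHK-Regular"] {
        print(name, UIFont(name: name, size: 17) != nil ? "available" : "missing")
    }

    let base = CTFontCreateWithName("Helvetica" as CFString, 17, nil)
    let fallback = CTFontCreateForString(base, "你好" as CFString, CFRange(location: 0, length: 2))
    print("Chinese fallback font:", CTFontCopyPostScriptName(fallback))
}
```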
6 replies · 2 boosts · 2.3k views · Jun ’24
Normally distributed MPSMatrixRandom number generation generates NaN
When generating large arrays of random numbers, NaNs show up. They also show up at the same indices when using the same seed, leading me to believe that this is a bug with MPSMatrixRandom's normally distributed Float32 random number distribution. Happens with both Philox and MTGP32. Is this intentional and how do I work around this? See the original post for a MWE in Swift and Julia: https://github.com/JuliaGPU/Metal.jl/issues/474
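Until the underlying generator issue is resolved, one stopgap is to scrub the output on the CPU; a minimal sketch that assumes the results land in a shared-storage MTLBuffer of Float32 and resamples any non-finite entries with Box-Muller:

```swift
import Foundation
import Metal

// Stopgap sketch: replace any non-finite values produced by the GPU random
// generator with CPU-generated Gaussian samples. Assumes the buffer uses
// .storageModeShared and holds `count` Float32 values.
func scrubNonFinite(in buffer: MTLBuffer, count: Int) {
    let values = buffer.contents().bindMemory(to: Float.self, capacity: count)
    for i in 0..<count where !values[i].isFinite {
        let u1 = Double.random(in: .ulpOfOne..<1)
        let u2 = Double.random(in: 0..<1)
        values[i] = Float(sqrt(-2 * log(u1)) * cos(2 * .pi * u2))
    }
}
```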
0 replies · 1 boost · 196 views · 4w
Accessibility and Screen Recording Permissions for Helper and Main Bundles
Guys, In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window displays what the helper is requesting permission for. However, when the helper requests permission for Screen Recording, the system modal window displays that the main application bundle is requesting permission, which includes the helper. This issue seems to be specific to Ventura, as both requests are displayed on behalf of the helper in Monterey. I'm wondering if this is a known issue or limitation or if there is a way to make the permission request specifically from the helper.
1 reply · 1 boost · 677 views · Feb ’23
Metal and NVIDIA graphic driver
Hi, A user sent us a crash report that indicates an error occurring just after loading the default Metal library of our app.

Application Specific Information:
Crashing on exception: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array

The report pointed me to these (simplified) lines of code in the library setup:

```objc
_vertexFunctions = [[NSMutableArray alloc] init];
_fragmentFunctions = [[NSMutableArray alloc] init];
id<MTLLibrary> library = [device newDefaultLibrary];
```

2 vertex shaders and 5 fragment shaders are then loaded and stored in these two arrays using this method:

```objc
- (BOOL)addShaderNamed:(NSString *)name library:(id<MTLLibrary>)library isFragment:(BOOL)isFragment {
    id shader = [library newFunctionWithName:name];
    if (!shader) {
        ALOG(@"Error : Unable to find the shader named : “%@”", name);
        return NO;
    }
    [(isFragment ? _fragmentFunctions : _vertexFunctions) addObject:shader];
    return YES;
}
```

As you can see, the arrays are not filled if the method fails... however, a few lines later, they are used without checking whether they are really filled, and that causes the crash. But this coding error doesn't explain why no shader of a certain type (or of both types) was added to the arrays, meaning: why did -newFunctionWithName: return nil for all given names (since the implicated array appears completely empty)?

Clue: this error has only been detected once, by a user running the app on macOS 10.13 with an NVIDIA Web Driver instead of the default macOS graphics driver. Moreover, it wasn't possible to reproduce the problem on the same OS using the native macOS driver. So my question is: are there any known conflicts between NVIDIA drivers and the use of Metal libraries? Or would this case require some specific options in the Metal implementation? Any help appreciated, thanks!
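Not an answer to the driver question, but failing fast at library-creation time can at least turn the later out-of-bounds crash into a diagnosable error. A Swift sketch of the equivalent defensive pattern (the Objective-C counterpart would be newDefaultLibraryWithBundle:error:; the shader names below are placeholders):

```swift
import Foundation
import Metal

// Defensive-loading sketch: surface an error as soon as the default library
// or any required function fails to load, instead of indexing empty arrays later.
func loadShaders(device: MTLDevice) throws -> (vertex: [MTLFunction], fragment: [MTLFunction]) {
    // Unlike makeDefaultLibrary(), this variant reports why it failed.
    let library = try device.makeDefaultLibrary(bundle: .main)

    func function(_ name: String) throws -> MTLFunction {
        guard let f = library.makeFunction(name: name) else {
            throw NSError(domain: "ShaderLoading", code: 1,
                          userInfo: [NSLocalizedDescriptionKey: "Missing shader “\(name)”"])
        }
        return f
    }

    // Hypothetical shader names; substitute the app's real ones.
    let vertex = try ["vertexMain"].map(function)
    let fragment = try ["fragmentMain"].map(function)
    return (vertex, fragment)
}
```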
0 replies · 0 boosts · 202 views · 4w
Game Porting Toolkit Crashing Playing HighEnd Games
I updated to GPT 2 with the app format and Whisky in October. I'm using Steam in GPT to play Windows-only games. Games like Liar's Bar and Phasmophobia consistently crash my computer into a restart, the latter even with its performance optimization update released today. Both of them do this when I load into the game room (the bar / ghost house), or even while loading. It ran Outpath properly though, which is a game with much lower quality requirements than LB and Phas. I don't think it's a simple RAM config issue, because Phasmophobia ran fine for me with older versions in GPT 1. I did experience occasional crashes back then, but I think those only crashed the game, not my Mac. I'm not sure what I'm doing wrong that I can't run those games, or if it's a GPT problem. I'm using a MacBook Pro M1 with 16GB memory on Sequoia 15.0.1.
1 reply · 0 boosts · 233 views · Oct ’24
Resolution for Games
Hi, When using a High Definition Display, is there a way to render at exactly the target resolution on the physical screen? My understanding is that the default behavior is to render to a backing store with a resolution (in pixels) which can be twice the size of the logical resolution (in points), and then let the OS handle the down-scaling to the actual target resolution on the screen. This is all fine for non-graphics-intensive apps, but it means that my game will render at a higher resolution than needed, which seems like an obvious loss of performance. My expectation is that, for graphics-intensive applications such as games, we should be able to query and render at the final resolution of the display. Can it / should it be done? Thank you for your help :) FYI I did find a document which explains how to set up your CAMetalLayer to render at a custom resolution. I suspect that this may be what I have to do?
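For reference, the CAMetalLayer approach hinted at above might look roughly like this; a sketch that assumes a layer-backed NSView on macOS and simply sets drawableSize to whatever pixel resolution you decide to render at:

```swift
import AppKit
import QuartzCore

// Sketch: size the Metal drawable to an explicit pixel resolution instead of
// letting the default backing store dictate it. Assumes `metalLayer` is the
// layer of a layer-backed NSView and the render pass uses drawableSize directly.
func updateDrawableSize(for view: NSView, metalLayer: CAMetalLayer, renderScale: CGFloat) {
    let backingScale = view.window?.backingScaleFactor ?? 1.0
    metalLayer.contentsScale = backingScale

    // Physical pixels the layer covers on screen, scaled by the quality factor
    // you choose (1.0 = native, < 1.0 = render below native and let the OS upscale).
    let pixelSize = CGSize(width: view.bounds.width * backingScale * renderScale,
                           height: view.bounds.height * backingScale * renderScale)
    metalLayer.drawableSize = pixelSize
}
```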
2 replies · 0 boosts · 407 views · Oct ’24
How to attach point cloud(or depth data) to heic?
I'm developing a 3D Scanner that works on iPad. I'm using AVCapturePhoto and PhotogrammetrySession. My photoCaptureDelegate looks like this:

```swift
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
            .auxiliaryDepth: true,
            .properties: photo.metadata
        ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
            .avDepthData: depthData
        ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
```

But the Photogrammetry session spits out warning messages:

Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!

The session creates a usdz 3D model, but the scale is not correct. I think the point cloud could help the Photogrammetry session find the right scale, but I don't know how to attach the point cloud.
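One avenue that may be worth exploring (an assumption, not a confirmed fix for the scale problem): instead of embedding depth into the HEIC, hand the depth map to RealityKit directly by building PhotogrammetrySample values in memory, on OS versions where the sample-sequence initializer of PhotogrammetrySession is available. A sketch, with illustrative names:

```swift
import AVFoundation
import RealityKit

// Sketch: build a PhotogrammetrySample from the captured pixel buffer and
// depth data instead of relying on auxiliary data embedded in the HEIC.
// Assumes you keep the AVCapturePhoto's pixel buffer and depth data around.
func makeSample(id: Int, photoPixelBuffer: CVPixelBuffer, depthData: AVDepthData) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: photoPixelBuffer)
    // Convert the depth map to DepthFloat32 before attaching it.
    sample.depthDataMap = depthData
        .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        .depthDataMap
    return sample
}

// Usage sketch: feed an array of samples straight to the session instead of a folder of HEICs.
// let session = try PhotogrammetrySession(input: samples, configuration: .init())
```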
3 replies · 2 boosts · 810 views · Nov ’23
RealityKit crashes randomly in the simulator but not on the device
I'm writing a RealityKit/ARKit app that runs on iOS. Starting with Xcode 16.0 beta 1, and at least through Xcode 16.1 beta 2 (16B5014f), in the iOS 18 simulator my app randomly crashes in about 20% of app sessions the first time it attempts to present an ARView. The crashes seem to occur at multiple points within RealityKit and Metal. Below, I've included screenshots of the call stacks of the crashes, which occur as a result of both EXC_BAD_ACCESS and assertion failures within RealityKit. The app only crashes in the iOS 18 simulator; it does not crash in the iOS 17 simulator or earlier, and it does not crash on a device running iOS 18. Before I investigate further, I'd appreciate it if an Apple engineer could give me a sense of whether these crashes are most likely the result of known issues within RealityKit and/or the simulator, or if your opinion is that there are probably bugs in my app's code. I've submitted several feedback issues in the past, and I'd love to submit this issue too, but I expect that I would spend many hours attempting to create a repro case in a sample app. Understandably, I'd rather not spend this time if an Apple engineer could tell me this is a known issue, for example. Thank you.
5 replies · 0 boosts · 443 views · Oct ’24