I'm displaying a GKGameCenterViewController after successfully authenticating, and on iOS 18.0 and 18.1 I get a black screen. As a sanity check, GKLocalPlayer.local.isAuthenticated also returns true. The same code works fine on iOS 17. Is there something that needs to be done differently on iOS 18 and above?
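For reference, the presentation path in question is essentially the following — a minimal sketch, where presentGameCenter(from:) is an illustrative helper name, not an API:

import GameKit
import UIKit

// Minimal sketch of the flow described above: authenticate first,
// then present the Game Center dashboard.
func presentGameCenter(from presenter: UIViewController & GKGameCenterControllerDelegate) {
    guard GKLocalPlayer.local.isAuthenticated else { return }
    let gameCenter = GKGameCenterViewController(state: .dashboard)
    gameCenter.gameCenterDelegate = presenter
    presenter.present(gameCenter, animated: true)
}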
I have multiple CAMetalLayers that I render content to, and I've noticed that the graphics overview HUD does not function properly when more than one CAMetalLayer is present. The reported values are very strange: for example, FPS may read 999 or some large negative value. Is the HUD simply not designed to work with multiple CAMetalLayers or MTKViews? When I disable all but one of my CAMetalLayers, the HUD works as expected.
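A sketch of what I'd try as a workaround, assuming the per-layer HUD opt-in via developerHUDProperties (iOS 16+/macOS 13+) behaves as documented — scope the HUD to a single layer instead of enabling it globally:

import QuartzCore

// Hypothetical workaround: opt only one CAMetalLayer into the Metal
// performance HUD so a single layer reports metrics.
func enableHUD(on layer: CAMetalLayer) {
    layer.developerHUDProperties = ["mode": "default"]
}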
I'm sure this question has been asked many times before, but I cannot find a good answer. In my case it just doesn't work.
Here's what I did:
1. Created a couple of test user accounts in App Store Connect under the Sandbox section.
2. Launched two different simulators via Xcode; on each simulator I logged in to a test iCloud account and Game Center accordingly.
3. Deployed and launched my code via Xcode.
What happens is that the GameKit code fails to find any peers no matter what I do. Sending an invite doesn't work either, because the simulator displays an error saying "you need to log-in into icloud account first" although it's already logged in. I tried the same code on two physical devices with two real iCloud accounts and it works as expected, but that's not a viable path for developing and debugging an app. I'm using the latest Xcode 16.1 running on macOS 15.1.
Does anybody have a clue how to solve this?
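For reference, the peer-finding code amounts to this — a minimal sketch using GKMatchmakerViewController (the helper name is illustrative):

import GameKit
import UIKit

// Minimal sketch: request a 2-player match and present the standard matchmaker UI.
func findPeers(from presenter: UIViewController & GKMatchmakerViewControllerDelegate) {
    let request = GKMatchRequest()
    request.minPlayers = 2
    request.maxPlayers = 2
    guard let matchmaker = GKMatchmakerViewController(matchRequest: request) else { return }
    matchmaker.matchmakerDelegate = presenter
    presenter.present(matchmaker, animated: true)
}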
I want to use SwiftUI views as RealityKit entities to display AR labels within a RealityKit scene. The labels could be more complicated than just text in a window, as they might include images, dynamic text, animations, WebViews, etc. visionOS enables this through RealityView attachments, and RealityView itself is supported on iOS 18.
I tried running RealityView attachments code samples from visionOS on iOS 18. However, the code below gives errors on iOS 18:
import SwiftUI
import RealityKit

struct PassportRealityView: View {
    let qrCodeCenter: SIMD3<Float>
    let assetID: String

    var body: some View {
        RealityView { content, attachments in
            // Set up the AR content, such as markers or 3D models
            if let qrAnchor = try? await Entity(named: "QRAnchor") {
                qrAnchor.position = qrCodeCenter
                content.add(qrAnchor)
            }
        } attachments: {
            Attachment(id: "passportTextAttachment") {
                Text(assetID)
                    .font(.title3)
                    .foregroundColor(.white)
                    .background(Color.black.opacity(0.7))
                    .padding(5)
                    .cornerRadius(5)
            }
        }
        .frame(width: 300, height: 400)
    }
}
When I remove the attachments parameter and its closure, the errors go away. That doesn't help me, though, as I want to attach SwiftUI views to anchor entities in RealityKit.
As I understand it, RealityView attachments are not supported on iOS 18. I wonder if there is any way of showing SwiftUI views as entities on iOS 18 at this point, or am I forced to use text meshes and 3D planes to build the UI? I checked out the RealityUI plugin, but it's too simple for my use case of building complex AR labels. Any advice would be appreciated. Thanks!
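One workaround I'm considering, assuming a rasterized (non-interactive) label is acceptable: render the SwiftUI view to an image with ImageRenderer and apply it as an unlit texture on a plane entity. A rough sketch, with illustrative names:

import SwiftUI
import RealityKit

// Hypothetical fallback for iOS: turn a SwiftUI label into a textured plane
// entity that can be parented to any anchor entity.
@MainActor
func makeLabelEntity(text: String) throws -> ModelEntity {
    let renderer = ImageRenderer(content:
        Text(text)
            .font(.title3)
            .foregroundColor(.white)
            .padding(5)
            .background(Color.black.opacity(0.7))
    )
    renderer.scale = 3 // oversample for a crisper texture
    guard let cgImage = renderer.cgImage else {
        throw NSError(domain: "LabelEntity", code: 1)
    }
    let texture = try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    // 1 pt ≈ 1 mm here; pick whatever physical size fits the scene.
    let width = 0.001 * Float(cgImage.width) / Float(renderer.scale)
    let height = 0.001 * Float(cgImage.height) / Float(renderer.scale)
    return ModelEntity(mesh: .generatePlane(width: width, height: height), materials: [material])
}

The obvious trade-offs: the label is a fixed rasterization, so animated or web content won't update, and there's no SwiftUI interactivity.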
Hello,
We are experimenting with Metal to accelerate some peculiar numerical computation. Our workloads are relatively small, so the ability to avoid moving data to and from GPU memory is very appealing. However, we are observing higher overhead compared to CUDA, which negates the benefit of avoiding the data transfer.
In our tests with an empty kernel, CUDA completes in 0.001 ms (Intel i7-10700K, RTX 3080), while Metal's waitUntilCompleted takes 0.12 ms (M2 Max). As we have no prior experience with Metal, we are wondering whether we are using the APIs correctly and this timing is expected, or whether there is a way to reduce it.
Thank you in advance for any comment!
test-metal.cpp
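For comparison, the measurement we're doing amounts to this — a minimal Swift sketch, assuming a no-op compute kernel named emptyKernel in the default library:

import Metal
import Dispatch

// Round-trip latency of an empty dispatch, including commit + waitUntilCompleted.
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "emptyKernel")!)
let queue = device.makeCommandQueue()!

let start = DispatchTime.now()
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
print("Empty dispatch: \(Double(DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds) / 1e6) ms")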
Once GKAccessPoint is active, entering an ARView page causes the ARView to lose its camera feed.
OS version: iOS 18.0.1, iOS 18.1
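A possible mitigation sketch, assuming it's acceptable to hide the access point while the AR view is on screen:

import GameKit

// Deactivate the Game Center access point around the AR experience.
GKAccessPoint.shared.isActive = false // before presenting the ARView
// ... run the AR experience ...
GKAccessPoint.shared.isActive = true  // after dismissing it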
Hi!
How do I define and call an inline function in Metal, or a simple function that returns some value?
Case:
inline uint index4D(constant _4D& shape,
                    constant uint& n,
                    constant uint& c,
                    constant uint& h,
                    constant uint& w) {
    return n * shape.C * shape.H * shape.W + c * shape.H * shape.W + h * shape.W + w;
}
When I call it in my kernel function I get a "No matching function for call" error.
Thanks in advance.
Hello,
I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs).
I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro, and 16 Pro) and not on any non-Pro iPhone.
My app does not use LiDAR, so that's not the problem.
I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (I could also mention the iPhone 13 Pro and iPhone 14, which both have the A15 Bionic, whereas only the first one is compatible).
Did I miss something that could explain these restrictions?
Is there any plan to make this class usable by every iPhone powerful enough to run it?
Thanks in advance for answering.
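As an aside, the restriction can at least be detected at runtime rather than hard-coded — a minimal sketch:

import RealityKit

// PhotogrammetrySession.isSupported reflects exactly these device
// restrictions, so gate the capture/reconstruction UI on it.
if PhotogrammetrySession.isSupported {
    // offer the 3D-objects-from-photos feature
} else {
    // hide or disable it on unsupported devices
}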
visionOS 2.0 touts keyboard and mouse support, and the Simulator can even forward keyboard/mouse events to the app, but there doesn't seem to be any sample code showing how to programmatically receive either of these. The game controller works fine (on device, not in the Simulator).
I tried using the GameController APIs for this, but they didn't seem to work: the notifications for mouse and keyboard connect/disconnect don't seem to be defined for visionOS. Is that the recommended API for handling keyboard/mouse?
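For reference, the GameController route I tried looks roughly like this (the pattern works on iOS/macOS; the connect notifications are the part that seems missing on visionOS):

import GameController

// Observe keyboard/mouse connections and read input via GCKeyboard/GCMouse.
NotificationCenter.default.addObserver(forName: .GCKeyboardDidConnect, object: nil, queue: .main) { note in
    guard let keyboard = note.object as? GCKeyboard else { return }
    keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
        print("key \(keyCode) pressed: \(pressed)")
    }
}
NotificationCenter.default.addObserver(forName: .GCMouseDidConnect, object: nil, queue: .main) { note in
    guard let mouse = note.object as? GCMouse else { return }
    mouse.mouseInput?.mouseMovedHandler = { _, deltaX, deltaY in
        print("mouse moved: \(deltaX), \(deltaY)")
    }
}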
I'm trying to make a magnifying glass that shows up when the user presses a button and follows the user's finger as it's dragged across the screen.
I came across a UIKit-based solution (https://github.com/niczyja/MagnifyingGlass-Swift), but when implemented in my SKScene, only the crosshairs are shown. Through experimentation I've found that the magnifiedView?.layer.render(in: context) line in:
public override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    removeFromSuperview()
    magnifiedView?.layer.render(in: context)
    magnifiedView?.addSubview(self)
}
can be removed without altering the result, suggesting that line is not working as it should. But this is where I hit a brick wall. The view below is shown but not offset or magnified, and any attempt to add something to the context results in a black magnifying glass.
Does anyone know why this is? I don't think it's an issue with the code, so I suspect it's something specific to SpriteKit or SKScene, likely related to how CALayers work.
Any pointers would be greatly appreciated.
Full code below:
import UIKit

public class MagnifyingGlassView: UIView {
    public weak var magnifiedView: UIView? = nil {
        didSet {
            removeFromSuperview()
            magnifiedView?.addSubview(self)
        }
    }

    public var magnifiedPoint: CGPoint = .zero {
        didSet {
            center = .init(x: magnifiedPoint.x + offset.x, y: magnifiedPoint.y + offset.y)
        }
    }

    public var offset: CGPoint = .zero

    public var radius: CGFloat = 50 {
        didSet {
            frame = .init(origin: frame.origin, size: .init(width: radius * 2, height: radius * 2))
            layer.cornerRadius = radius
            crosshair.path = crosshairPath(for: radius)
        }
    }

    public var scale: CGFloat = 2

    public var borderColor: UIColor = .lightGray {
        didSet {
            layer.borderColor = borderColor.cgColor
        }
    }

    public var borderWidth: CGFloat = 3 {
        didSet {
            layer.borderWidth = borderWidth
        }
    }

    public var showsCrosshair = true {
        didSet {
            crosshair.isHidden = !showsCrosshair
        }
    }

    public var crosshairColor: UIColor = .lightGray {
        didSet {
            crosshair.strokeColor = crosshairColor.cgColor
        }
    }

    public var crosshairWidth: CGFloat = 5 {
        didSet {
            crosshair.lineWidth = crosshairWidth
        }
    }

    private let crosshair: CAShapeLayer = CAShapeLayer()

    public convenience init(offset: CGPoint = .zero, radius: CGFloat = 50, scale: CGFloat = 2, borderColor: UIColor = .lightGray, borderWidth: CGFloat = 3, showsCrosshair: Bool = true, crosshairColor: UIColor = .lightGray, crosshairWidth: CGFloat = 0.5) {
        self.init(frame: .zero)
        layer.masksToBounds = true
        layer.addSublayer(crosshair)
        defer {
            self.offset = offset
            self.radius = radius
            self.scale = scale
            self.borderColor = borderColor
            self.borderWidth = borderWidth
            self.showsCrosshair = showsCrosshair
            self.crosshairColor = crosshairColor
            self.crosshairWidth = crosshairWidth
        }
    }

    public func magnify(at point: CGPoint) {
        guard magnifiedView != nil else { return }
        magnifiedPoint = point
        layer.setNeedsDisplay()
    }

    private func crosshairPath(for radius: CGFloat) -> CGPath {
        let path = CGMutablePath()
        path.move(to: .init(x: radius, y: 0))
        path.addLine(to: .init(x: radius, y: bounds.height))
        path.move(to: .init(x: 0, y: radius))
        path.addLine(to: .init(x: bounds.width, y: radius))
        return path
    }

    public override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.translateBy(x: radius, y: radius)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
        removeFromSuperview()
        magnifiedView?.layer.render(in: context)
        // If the line above is disabled, nothing changes.
        // Possible that nothing's being rendered into the context.
        // Could it be that the SKScene's view has no layer?
        magnifiedView?.addSubview(self)
    }
}
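One variation I plan to try, assuming the root cause is that layer.render(in:) can't capture a Metal-backed SKView: snapshot the hierarchy with drawHierarchy(in:afterScreenUpdates:) instead. A hypothetical replacement for draw(_:):

// Hypothetical variant: drawHierarchy(in:afterScreenUpdates:) can capture
// Metal-backed views that layer.render(in:) may miss.
public override func draw(_ rect: CGRect) {
    guard let magnifiedView, let context = UIGraphicsGetCurrentContext() else { return }
    context.translateBy(x: radius, y: radius)
    context.scaleBy(x: scale, y: scale)
    context.translateBy(x: -magnifiedPoint.x, y: -magnifiedPoint.y)
    isHidden = true // keep the magnifier itself out of the snapshot
    magnifiedView.drawHierarchy(in: magnifiedView.bounds, afterScreenUpdates: true)
    isHidden = false
}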
I would like to preload and use some images for both SpriteKit and SceneKit models (my game uses SceneKit with a SpriteKit overlay), and as far as I can see the only efficient way is to create and preload SKTexture objects, which can be supplied to SKSpriteNode(texture:) and SCNMaterial.diffuse.contents.
The problem is that SKTextures are rendered too bright in SceneKit, for some unknown reason. Here is a comparison between rendering an image (from a URL) and an SKTexture:
And the code that produces it:
let url = Bundle.main.url(forResource: "art.scnassets/texture.png", withExtension: nil)!
let plane1 = SCNPlane(width: 10, height: 10)
plane1.firstMaterial!.diffuse.contents = url.path
let node1 = SCNNode(geometry: plane1)
node1.position.x = -5
scene.rootNode.addChildNode(node1)
let plane2 = SCNPlane(width: 10, height: 10)
plane2.firstMaterial!.diffuse.contents = SKTexture(image: NSImage(byReferencing: url))
let node2 = SCNNode(geometry: plane2)
node2.position.x = 5
scene.rootNode.addChildNode(node2)
This issue was already mentioned in this other post, but since I wasn't notified of Quinn's reply asking for the feedback number I filed at the time, it didn't make any progress.
Can visionOS take screenshots other than by simultaneously pressing the buttons?
I want to know why there is no video frame data when ReplayKit enters the background and then returns to the foreground.
For some reason I can't disable the Graphics HUD.
Not really a problem for development, but it's also showing in TestFlight apps.
It appears, for example, when swiping down on the keyboard, but also in some other places.
Of course I tried disabling the toggle, but even when it's off the HUD is still showing. Even completely disabling Developer mode does not work.
Is this a known issue?
I already scrolled through possibly every Google search result but I can't figure out how to solve this.
I'm experiencing a strange issue where I'm seeing black in a Metal drawable where it should be a different color. When I capture the frame and inspect the value returned from the fragment function, it's correct, but the drawable isn't.
This screenshot hopefully illustrates the issue.
I've not found any references to similar issues. I saw something about out-of-bounds or NaN values being dropped to 0 (which would be black), but the debugger doesn't indicate this is happening.
The welcome banner appears off the top-left side of the screen instead of coming down from the center. This behavior is encountered when running my iOS application on macOS.
Hi, does anyone know if there is an easy way to determine the distance between the floor and ceiling in Vision Pro?
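A sketch of one possible approach, assuming ARKit plane detection on visionOS classifies a floor and a ceiling plane in the room (world-sensing permission required):

import ARKit

// Subtract the world-space heights of the detected floor and ceiling anchors.
func measureFloorToCeiling() async throws {
    let session = ARKitSession()
    let planes = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planes])

    var floorY: Float?
    var ceilingY: Float?
    for await update in planes.anchorUpdates {
        let y = update.anchor.originFromAnchorTransform.columns.3.y
        switch update.anchor.classification {
        case .floor: floorY = y
        case .ceiling: ceilingY = y
        default: break
        }
        if let floorY, let ceilingY {
            print("floor-to-ceiling distance: \(ceilingY - floorY) m")
            return
        }
    }
}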
When generating large arrays of random numbers, NaNs show up. They also show up at the same indices when using the same seed, leading me to believe that this is a bug with MPSMatrixRandom's normally distributed Float32 random number distribution.
Happens with both Philox and MTGP32.
Is this intentional, and how do I work around it?
See the original post for a MWE in Swift and Julia: https://github.com/JuliaGPU/Metal.jl/issues/474
Hi,
A user sent us a crash report that indicates an error occurring just after loading the default Metal library of our app.
Application Specific Information:
Crashing on exception: *** -[__NSArrayM objectAtIndex:]: index 0 beyond bounds for empty array
The report pointed me to these (simplified) lines of code in the library setup:
_vertexFunctions = [[NSMutableArray alloc] init];
_fragmentFunctions = [[NSMutableArray alloc] init];
id<MTLLibrary> library = [device newDefaultLibrary];
Two vertex shaders and five fragment shaders are then loaded and stored in these two arrays using this method:
- (BOOL)addShaderNamed:(NSString *)name library:(id<MTLLibrary>)library isFragment:(BOOL)isFragment {
    id shader = [library newFunctionWithName:name];
    if (!shader) {
        ALOG(@"Error : Unable to find the shader named : “%@”", name);
        return NO;
    }
    [(isFragment ? _fragmentFunctions : _vertexFunctions) addObject:shader];
    return YES;
}
As you can see, the arrays are not filled if the method fails... however, a few lines later they are used without checking whether they were actually filled, and that causes the crash.
But this coding error doesn't explain why no shader of a certain type (or of both types) was added to the arrays; in other words, why -newFunctionWithName: returned nil for all the given names (since the implicated array appears completely empty).
Clue
This error has only been detected once, by a user running the app on macOS 10.13 with an NVIDIA Web Driver instead of the default macOS graphics driver. Moreover, it wasn't possible to reproduce the problem on the same OS using the native macOS driver.
So my question is: are there known conflicts between NVIDIA web drivers and the use of Metal libraries? Or would this case require some specific options in the Metal implementation?
Any help appreciated, thanks!
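As an aside, the indexing crash itself can be prevented by failing fast during setup — a small Swift sketch of the idea (names are illustrative):

import Metal

// Fail with a descriptive error if any function is missing from the
// library, instead of indexing into a possibly-empty array later.
func loadFunctions(named names: [String], from library: MTLLibrary) throws -> [MTLFunction] {
    try names.map { name in
        guard let function = library.makeFunction(name: name) else {
            throw NSError(domain: "ShaderSetup", code: 1,
                          userInfo: [NSLocalizedDescriptionKey: "Missing shader: \(name)"])
        }
        return function
    }
}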
I have this minimal repro code:
import SpriteKit
import GameplayKit

class MyGameScene3D: SCNScene {
    weak var node3D: MyNode3D!

    override init() {
        super.init()

        background.contents = UIColor.green

        let playground = SCNNode()
        playground.boundingBox = (
            min: SCNVector3(x: 0, y: 0, z: 0),
            max: SCNVector3(x: 10, y: 10, z: 10))

        let box = SCNNode(geometry: SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0))
        box.position = SCNVector3(x: 5, y: 5, z: 5)
        playground.addChildNode(box)

        playground.position = SCNVector3(x: 0, y: 0, z: 0)
        rootNode.addChildNode(playground)

        let light = SCNLight()
        light.type = .ambient
        let lightNode = SCNNode()
        lightNode.light = light
        rootNode.addChildNode(lightNode)

        let camera = SCNCamera()
        let cameraNode = SCNNode()
        cameraNode.camera = camera
        cameraNode.eulerAngles = SCNVector3(x: -3.14/2, y: 0, z: 0)
        cameraNode.position = SCNVector3(x: 5, y: 11, z: 5)
        rootNode.addChildNode(cameraNode)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func handleTouchBegan(_ location: CGPoint) {
        let res = node3D.hitTest(location)
        print(res)
    }
}

class MyNode3D: SK3DNode {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first!
        let scene = scnScene as! MyGameScene3D
        let location = touch.location(in: self)
        print(location)
        scene.handleTouchBegan(location)
    }
}

class GameScene: SKScene {
    init() {
        super.init(size: CGSize(width: 500, height: 1000))
        self.backgroundColor = .red

        let node3D = MyNode3D()
        let scene3D = MyGameScene3D()
        node3D.scnScene = scene3D
        scene3D.node3D = node3D
        node3D.isUserInteractionEnabled = true
        node3D.viewportSize = CGSize(width: 100, height: 200)
        node3D.position = CGPoint(x: 50, y: 100)
        addChild(node3D)

        let up = SKSpriteNode(color: .blue, size: CGSize(width: 500, height: 10))
        up.anchorPoint = CGPoint(x: 0, y: 0)
        up.position = CGPoint(x: 0, y: 200)
        addChild(up)

        let right = SKSpriteNode(color: .gray, size: CGSize(width: 10, height: 500))
        right.anchorPoint = CGPoint(x: 0, y: 0)
        right.position = CGPoint(x: 100, y: 0)
        addChild(right)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
Basically, I have a SK3DNode of size 100x200, positioned at lower left corner of the screen (see screenshot below).
Then in this SK3DNode, I have an SCNScene, where I put a 10x10x10 Playground node at position (0, 0, 0). Then I put a camera node right at the top of the Playground at position (5, 11, 5), and the camera looks down along the -y axis, with euler angles (-90, 0, 0).
Then in this Playground, I put a small box of size 1x1x1, at the center of the Playground at (5, 5, 5).
The 2 long bars (gray & blue) are just there to indicate the boundary of the SK3DNode.
The resulting rendering is correct (see screenshot below). However, I can't get the hit test to work: I tap on the center 1x1x1 box on screen and the correct coordinate is printed, but the hit test result is empty. I want to get the center 1x1x1 box when tapping there. How can I do so?
Update:
I tried to loop through all the points from -2000 to 2000, and there is still no hit:
func handleTouchBegan(_ location: CGPoint) {
    for x in -2000...2000 {
        print("handling x: \(x)")
        for y in -2000...2000 {
            // Probe every point, not just the original touch location
            let res = node3D.hitTest(CGPoint(x: x, y: y))
            if !res.isEmpty {
                print("\(x), \(y), \(res)")
            }
        }
    }
    print("Done")
}