I would like to implement zoom functionality in my SceneKit game: when the user performs the pinch gesture on a point on the screen, the scene zooms in to make that point larger.
Until now I have simply changed SCNCamera.focalLength, but that just zooms in on the center of what is currently visible on screen. Is it somehow possible to implement the zoom functionality described above, perhaps by interactively rotating the camera towards the pinched point at the same time? Is there a formula for this? I would like to avoid suddenly rotating the camera to face the pinched point when the pinch gesture begins and then zooming in while the pinch is in progress.
Posts under the Graphics and Games tag
I have a very basic usdz file from this repo
I call loadTextures() after loading the usdz via MDLAsset. Inspecting the MDLTexture object, I can tell it is assigned a linear RGB color space instead of sRGB, although the image file in the usdz is sRGB.
This causes the textures to ultimately render oversaturated.
Later in the code I convert the MDLTexture to an MTLTexture via MTKTextureLoader, but if I set the sRGB option it seems to be ignored.
This significantly impacts the usefulness of Model I/O if it can't load a simple usdz texture correctly. Am I missing something?
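For reference, a trimmed sketch of the conversion step (assuming `device` is a valid MTLDevice and `mdlTexture` came from the loaded MDLAsset):

```swift
import MetalKit
import ModelIO

// Sketch of the MDLTexture -> MTLTexture conversion described above.
func makeTexture(from mdlTexture: MDLTexture, device: MTLDevice) throws -> MTLTexture {
    let loader = MTKTextureLoader(device: device)
    // .SRGB should make the loader treat the pixel data as sRGB-encoded,
    // yet the resulting texture still appears to be interpreted as linear.
    let options: [MTKTextureLoader.Option: Any] = [
        .SRGB: true,
        .textureUsage: MTLTextureUsage.shaderRead.rawValue
    ]
    return try loader.newTexture(texture: mdlTexture, options: options)
}
```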
Thanks!
Hi, I’m creating a game and I’m just wondering if I can integrate GCVirtualController in my SwiftUI app.
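A minimal sketch of the setup I have in mind, assuming iOS 15+ and a placeholder view standing in for the actual game content:

```swift
import SwiftUI
import GameController

struct GameContainerView: View {
    @State private var virtualController: GCVirtualController?

    var body: some View {
        Color.black                    // placeholder for the real game view
            .ignoresSafeArea()
            .onAppear {
                // Configure which on-screen elements the virtual controller shows.
                let configuration = GCVirtualController.Configuration()
                configuration.elements = [GCInputLeftThumbstick, GCInputButtonA, GCInputButtonB]
                let controller = GCVirtualController(configuration: configuration)
                controller.connect { error in
                    if let error { print("Virtual controller failed to connect: \(error)") }
                }
                virtualController = controller
            }
            .onDisappear {
                virtualController?.disconnect()
            }
    }
}
```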
I have developed an application using Kotlin and Swift, which has been installed and run on an iPhone; it can also be installed on an iPad. Do I need to go through a testing process to publish it on the App Store? Also, as a developer from China, are there convenient payment channels for developers?
Xcode: 16.0
MacBook Pro: M1 Pro, macOS Sonoma 14.5
Device: iPhone 16, iOS 18.0.1
Game: Unreal Engine 5.4.2 based
100% crash after GPU capture, even when it only renders a login UI.
Hi folks,
I'm working on a tile-based deferred renderer, similar to this Apple example. I'm wondering how to add MSAA to the renderer, and I see two choices:
Copy the single-sampled texture at the end of the GBuffer/Lighting render pass to a multi-sampled texture and resolve from that
Make all render targets (GBuffer) multi-sampled and deal with sampling/resolving all intermediate textures as well as the final, combined texture.
Which is the proper approach, and are there any examples of how to implement it?
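In case it helps clarify option 2, a sketch of what I imagine for the final pass: a memoryless multisampled color target that resolves straight into the drawable (names and the sample count are assumptions, and the GBuffer attachments would need the same treatment):

```swift
import Metal

// Sketch of option 2's final step: render the combined lighting result into a
// memoryless multisampled target and let the pass resolve it directly into the
// drawable. The GBuffer attachments would need the same sampleCount, plus a
// matching raster sample count on the pipeline states.
func makeMSAARenderPass(device: MTLDevice,
                        drawable: MTLTexture,
                        sampleCount: Int = 4) -> MTLRenderPassDescriptor? {
    let textureDescriptor = MTLTextureDescriptor()
    textureDescriptor.textureType = .type2DMultisample
    textureDescriptor.pixelFormat = drawable.pixelFormat
    textureDescriptor.width = drawable.width
    textureDescriptor.height = drawable.height
    textureDescriptor.sampleCount = sampleCount
    textureDescriptor.usage = .renderTarget
    textureDescriptor.storageMode = .memoryless   // lives only in tile memory on Apple GPUs

    guard let msaaTarget = device.makeTexture(descriptor: textureDescriptor) else { return nil }

    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = msaaTarget
    pass.colorAttachments[0].resolveTexture = drawable
    pass.colorAttachments[0].loadAction = .clear
    // Resolve from tile memory; the individual samples never touch system memory.
    pass.colorAttachments[0].storeAction = .multisampleResolve
    return pass
}
```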
Thanks!
I'm using the Apple RoomPlan sdk to generate a .usdz file, which works fine, and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    try finalResults?.export(to: destinationURL, exportOptions: .model)
    let newUsdz = destinationURL
    let asset = MDLAsset(url: newUsdz)
    let obj = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: obj)
} catch {
    // Handle/report the error so failures aren't silently swallowed.
    print("Export failed: \(error)")
}
Not sure what's wrong here. According to MDLAsset documentation, .obj is a supported format and exporting from .usdz to the other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
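One diagnostic I'm considering (sketch only, reusing `newUsdz` from the code above) is to inspect the meshes right after reading the usdz back, to see whether the geometry is already flattened before the .obj export step:

```swift
import ModelIO

// Sketch: dump the mesh inventory of the re-imported usdz to check
// where in the pipeline the 3D geometry gets lost.
let asset = MDLAsset(url: newUsdz)
for case let mesh as MDLMesh in asset.childObjects(of: MDLMesh.self) {
    print("mesh:", mesh.name,
          "vertices:", mesh.vertexCount,
          "submeshes:", mesh.submeshes?.count ?? 0)
}
```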
Hey,
Wondering how other developers have been able to determine the location of a mouse event or tap (i.e. NSEvent) when using a Metal view (MTKView) with SKRenderer and a SpriteKit scene (.sks scene) for a 2D game.
In the original SpriteKit scenario, we could use SKView's convertPoint(fromView:) to determine where in the scene the user tapped. But with SKRenderer we can no longer use convertPoint(fromView:), as it relies on an SKView being present, which makes this difficult to determine.
What I do have is:
locationInWindow: NSPoint, which tells me where in the MTKView the touch/click landed
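The best I've come up with so far is converting that manually (rough sketch, assuming the scene fills the MTKView with no aspect-ratio letterboxing):

```swift
import AppKit
import MetalKit
import SpriteKit

// Sketch: map an NSEvent's window location into SpriteKit scene coordinates by
// hand, assuming the SKScene fills the MTKView with no letterboxing.
func scenePoint(for event: NSEvent, in metalView: MTKView, scene: SKScene) -> CGPoint {
    // Window coordinates -> view coordinates (both bottom-left origin, like SpriteKit).
    let viewPoint = metalView.convert(event.locationInWindow, from: nil)
    // Normalize against the view size, then scale into the scene's size.
    return CGPoint(x: viewPoint.x / metalView.bounds.width * scene.size.width,
                   y: viewPoint.y / metalView.bounds.height * scene.size.height)
}
```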
Any ideas would be great.
Many thanks
Good day,
my project is simple: first I want to draw wireframe hexa-, tetra-, and octahedrons.
I drew a cube with Metal, but I couldn't find rotation, translation, and scale.
I have searched for help; the examples I found are too complicated for me.
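A minimal sketch of the three basic transforms with simd (generic math, not taken from any particular sample): build each transform as a 4x4 matrix, combine them, and hand the result to the vertex shader as a uniform.

```swift
import simd

// Translation: put the offset into the last column of an identity matrix.
func translationMatrix(_ t: SIMD3<Float>) -> float4x4 {
    var matrix = matrix_identity_float4x4
    matrix.columns.3 = SIMD4<Float>(t.x, t.y, t.z, 1)
    return matrix
}

// Scale: a diagonal matrix with the scale factors.
func scaleMatrix(_ s: SIMD3<Float>) -> float4x4 {
    float4x4(diagonal: SIMD4<Float>(s.x, s.y, s.z, 1))
}

// Rotation: build a quaternion from angle + axis and convert it to a matrix.
func rotationMatrix(angle: Float, axis: SIMD3<Float>) -> float4x4 {
    float4x4(simd_quatf(angle: angle, axis: normalize(axis)))
}

// Example: scale a unit cube, spin it around Y, and push it in front of the camera.
let modelMatrix = translationMatrix([0, 0, -5]) *
                  rotationMatrix(angle: .pi / 4, axis: [0, 1, 0]) *
                  scaleMatrix([2, 2, 2])
```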
Kind regards,
VanceRegnet
I tried to understand the view matrix.
The relevant part of the original code is below:
private func updateGameState() {
    /// Update any game state before rendering
    uniforms[0].projectionMatrix = projectionMatrix
    let rotationAxis = SIMD3<Float>(1, 1, 0)
    let modelMatrix = matrix4x4_rotation(radians: rotation, axis: rotationAxis)
    let viewMatrix = matrix4x4_translation(0.0, 0.0, -8.0)
    uniforms[0].modelViewMatrix = simd_mul(viewMatrix, modelMatrix)
    rotation += 0.01
}
If the view matrix is initialized with x = -0.5, as in let viewMatrix = matrix4x4_translation(-0.5, 0.0, -8.0), the cube in the MetalView moves to the left.
I think it should move to the right-hand side, because the view matrix is the camera position. Am I wrong?
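To check my intuition, I tried reasoning about it this way (please correct me if the premise is off); it reuses the matrix4x4_translation helper from the template code above:

```swift
// If the view matrix is the inverse of the camera's world transform, then a
// view translation of (-0.5, 0, -8) means the camera itself sits at (+0.5, 0, +8),
// so the fixed cube should appear shifted to the left on screen.
let cameraWorldTransform = matrix4x4_translation(0.5, 0.0, 8.0)
let viewMatrix = cameraWorldTransform.inverse   // equals matrix4x4_translation(-0.5, 0.0, -8.0)
```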
Hello,
I would like to know if anyone has used, or is still using, the GKVoiceChat capabilities in their apps. I wanted to use it for my online game, but I am running into issues with it and wondering if there are alternatives. The documentation mentions using SharePlay, but that won't be possible with random online players. Any help will be appreciated!
Hello, I am trying to invite a friend to play my app; however, when the friend taps the invite link component in iMessage, it shows "Retrieving" and then disappears, and nothing happens.
It doesn't redirect to my app. What am I missing or doing wrong? I can share part of my code:
import Foundation
import GameKit

extension RealTimeGame: GKLocalPlayerListener {
    /// Handles when the local player sends requests to start a match with other players.
    func player(_ player: GKPlayer, didRequestMatchWithRecipients recipientPlayers: [GKPlayer]) {
        print("\n\nSending invites to other players.")
    }

    /// Presents the matchmaker interface when the local player accepts an invitation from another player.
    func player(_ player: GKPlayer, didAccept invite: GKInvite) {
        // Present the matchmaker view controller in the invitation state.
        if let viewController = GKMatchmakerViewController(invite: invite) {
            viewController.matchmakerDelegate = self
            rootViewController?.present(viewController, animated: true) { }
        }
    }
}
Also, I don't have "<key>CFBundleURLTypes</key>" in my Info.plist; I don't know whether I need that or not...
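In case it's relevant, a sketch of how I understand the authentication prerequisite: the listener has to be registered after sign-in for player(_:didAccept:) to fire at all. RealTimeGame and rootViewController are the same names as in my code above.

```swift
import GameKit
import UIKit

// Sketch: invites are only delivered once authentication succeeds and the
// listener is registered with the local player.
extension RealTimeGame {
    func authenticateLocalPlayer() {
        GKLocalPlayer.local.authenticateHandler = { [weak self] viewController, error in
            guard let self else { return }
            if let viewController {
                // Game Center wants to show its sign-in UI.
                self.rootViewController?.present(viewController, animated: true)
                return
            }
            guard error == nil, GKLocalPlayer.local.isAuthenticated else { return }
            // Register so player(_:didRequestMatchWithRecipients:) and
            // player(_:didAccept:) are actually delivered.
            GKLocalPlayer.local.register(self)
        }
    }
}
```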
Hello,
Asking the following, as I was unable to find answers by searching the forum and the documentation:
Invitations sent via iMessage seem to work correctly with my custom image (GKMessageImage.png); however, notifications sent to Game Center friends via invites generated in Game Center do not include the custom image (GKMessageImage.png).
Questions:
Is this expected behavior? Is there a different way to customize the image in the notification? Note that the Game Center notification includes the app name correctly.
I also noticed in a 2016 WWDC session (I saw the video recently) that there was some mention of no longer adding friends via Game Center. Is that currently true?
Thanks in advance.
I am having a difficult time creating particle systems in Reality Composer Pro (visionOS beta 3). They tend to flicker: all particles disappear and reappear at semi-random intervals.
I can clearly see this happening with one effect that I put inside a small box consisting of 4 transparent walls and a solid floor. When I change the view angle, the particle system starts to flicker when viewed from below its emission height.
I tried all combinations of particle rendering (billboard->free, additive, etc.) and it does not change anything. I am using the default particle image.
Any help appreciated
I am trying to simulate a pinball game, and I want to use PhysicsBody & PhysicsMotion to achieve that. I tuned the parameters in PhysicsBodyComponent, but the result is not quite ideal so far.
Imagine a fully inflated basketball bouncing high off the ground (ground vs. basketball). I assign a PhysicsBodyComponent and a CollisionComponent to both the basketball and the ground.
For the basketball, I set:
dynamic mode
mass 1, inertia .one
Material.Restitution 1
Angular Damping and Linear Damping to 0
AddForce to make the basketball move to hit the ground
For the ground, I set:
static mode
mass 1, inertia .zero
Material.Restitution 1
Angular Damping and Linear Damping to 0
However, when the basketball hits the ground, it isn't very bouncy: it behaves as if it were hitting cotton, and the linear speed just dies off quickly. I wonder how I could achieve a bouncing effect like a real basketball hitting the ground.
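For reference, this is roughly how I'm configuring the ball (condensed sketch; the radius is a placeholder, and the exact bounce will also depend on the collision shapes and the simulation time step):

```swift
import RealityKit

// Condensed sketch of the ball setup described above: restitution 1, no damping.
func makeBouncyBall(radius: Float) -> ModelEntity {
    let ball = ModelEntity(mesh: .generateSphere(radius: radius))
    ball.collision = CollisionComponent(shapes: [.generateSphere(radius: radius)])
    var body = PhysicsBodyComponent(shapes: [.generateSphere(radius: radius)],
                                    mass: 1,
                                    material: .generate(friction: 0.5, restitution: 1.0),
                                    mode: .dynamic)
    body.linearDamping = 0
    body.angularDamping = 0
    ball.components.set(body)
    return ball
}
```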
I want to create an XCFramework for this repo: https://github.com/BradLarson/GPUImage, but I failed.
1. I downloaded the repo and ran the following:
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage"

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS Simulator" \
  -archivePath "archivessimulator/GPUImage"

xcodebuild -create-xcframework \
  -archive archives/GPUImage.xcarchive -framework GPUImage.framework \
  -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework \
  -output xcframeworks/GPUImage.xcframework
There is an error: 'cryptexDiskImage' is an unknown content type, and 'com.apple.platform.xros' is an unknown platform identifier.
I have code such as the following. The performance on the Vision Pro seems to get quite bad once I hit a few thousand of these models. It feels like I should be able to optimise this somehow, perhaps using instancing. Is that possible with RealityKit in visionOS 2?
let material = UnlitMaterial(color: .white)
let sphereModel = ModelEntity(
    mesh: .generateSphere(radius: 0.001),
    materials: [material])

for index in 0..<5000 {
    let point = generatedPoints[index]
    let model = sphereModel.clone(recursive: false)
    model.position = [point.x, point.y, point.z]
    parent.addChild(model)
}
I am having problems getting button input from an Xbox game controller.
I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView following the instructions from the WWDC session Explore game input in visionOS.
The game controller connection notification picks up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement.
buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}
At the end of RealityView, I have the modifier
RealityView { content in
// stuff
}
.handlesGameControllerEvents(matching: .gamepad)
But I am never seeing the log message appear in the console when I press the 'A' button (or any other button).
Any ideas what I might be doing wrong?
The Xbox controller is pretty old. Settings is reporting it as version 9.0.3
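One thing I'm planning to try as a cross-check (sketch, not yet verified on visionOS) is setting the handler on the whole extended gamepad profile via the connect notification, to see whether any element changes come through at all:

```swift
import GameController
import os

// Sketch: listen for controller connections and set the handler on the whole
// extendedGamepad profile, as a cross-check that element changes arrive at all.
func observeGameControllers() {
    NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                           object: nil,
                                           queue: .main) { notification in
        guard let controller = notification.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }
        gamepad.valueChangedHandler = { _, element in
            os_log("Gamepad element changed: %{public}@", String(describing: element))
        }
    }
}
```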
Hi all,
I am a UI/UX designer working on several commercial projects, and I have a few questions:
Can I use the icons from your SF Symbols set in my application? This is a SaaS application used on various platforms such as macOS and Windows.
Is SF Symbols only allowed for use in applications running on Apple platforms? Meaning, if my application is used on a MacBook or iPhone, am I allowed to use your icon set?
If usage is not permitted on platforms other than Apple’s, how can I legally use them on those platforms? Does Apple sell licenses for using SF Symbols on other platforms? If so, what is the cost?
Looking forward to your response.
I'd like to use the eye tracking feature in the latest iPadOS 18 update as more than an accessibility feature. i.e. another input modality that can be detected by event + enum checks similar to how we can detect and distinguish between touches and Apple pencil inputs. This might make it a lot easier to control and interact with iPad-based AR experiences that involve walking around, regardless of whether eye-tracking is enabled for accessibility. When walking, it's challenging to hold the device and interact with the screen with touch or pencil at all. Eye tracking + speech as input modalities could assist here.
Also, this would help us create non-immersive AR experiences that parallel visionOS experiences that use eye tracking.
I propose an API option for enabling eye tracking (and an optional calibration dialog within the app), as well as a specific UIControl class that simply detects when the eye looks at the control, using the standard (begin/changed/end) events.
My specific use case is that I'd like to treat eye-tracking-enabled UI elements or game objects differently depending on whether something is looked at with the eyes.
For example, to select game objects while using speech recognition, suppose we have 4 buttons with the same name in 4 corners of the screen. Call them "corner" buttons. If I have my proposed invisible UI element for gaze detection, I can create 4 large rectangular regions on the screen. Then if the user says "select the corner" the system could parse this command and disambiguate between the 4 corners by checking which of the rectangular regions I'm currently looking at. (Note: the idea would be to make the gaze regions rather large to compensate for error.)
The above is just a simple example, but the advantage over other methods like dwell is that it could be a lot faster.
Another simple example:
Using the same rectangular regions, instead of speech input, I could hold a button placed in just one spot on the screen and look around the screen with my gaze to produce a laser beam for some kind of game, or draw curves (which I might smooth out to reduce inaccuracy). It could also help someone who does not have their hands available.
This would require us to have the ability to get the coordinates of the eye gaze, but otherwise the other approach of just opting to trigger uicontrol elements might work for coarse selection.
Would other developers find this useful as well? I'd like to propose this feature in Feedback Assistant, but I'm also opening up a little discussion in case someone sees this.
In short, I propose:
a formal eye-tracking API for iPadOS 18+ that allows for turning on/off the tracking within the app, with the necessary user permissions
the API should produce begin/changed/ended events similar to the existing events in UIKit, including screen coordinates. There should be a way to identify that an event came from eye-tracking.
alternatively, we should have at minimum an invisible UIControl subclass that can detect when the eyes enter/leave the region.
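To make the proposal concrete, here is a purely hypothetical sketch of the control shape I'm imagining; none of this API exists today, and the names are made up:

```swift
import UIKit

// Purely hypothetical: illustrates the begin/changed/end shape described above.
final class GazeRegionControl: UIControl {
    /// Hypothetical callback fired when the user's gaze enters or leaves the region.
    var gazeDidChange: ((_ isLookedAt: Bool) -> Void)?

    // In the proposal, the system would call something like this whenever the
    // (permission-gated) eye-tracking input reports a new gaze location in
    // window coordinates.
    func handleGazeLocation(_ pointInWindow: CGPoint) {
        let inside = bounds.contains(convert(pointInWindow, from: nil))
        // Reuse existing UIControl events so target-action code keeps working.
        sendActions(for: inside ? .touchDragEnter : .touchDragExit)
        gazeDidChange?(inside)
    }
}
```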