I am trying to port SceneKit projects to Swift 6, and I just can't figure out how that's possible. I'm even starting to think that SceneKit and Swift 6 concurrency just don't work together, and that SceneKit projects should - hopefully only for the time being - stick to Swift 5.
The SCNSceneRendererDelegate methods are called in the SceneKit Thread.
If the delegate is a ViewController:
class GameViewController: UIViewController {
    let aNode = SCNNode()

    func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
        aNode.position.x = 10
    }
}
Then the compiler generates the error "Main actor-isolated instance method 'renderer(_:updateAtTime:)' cannot be used to satisfy nonisolated protocol requirement"
Which is fully understandable.
The compiler even tells you those methods can't be used for protocol conformance, unless:
The conformance is declared as @preconcurrency SCNSceneRendererDelegate, like this:
class GameViewController: UIViewController, @preconcurrency SCNSceneRendererDelegate {
But that just delays the check until runtime, and therefore the crash on the SceneKit thread happens at runtime...
Again, fully understandable.
or the delegate method is declared nonisolated like this:
nonisolated func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
    aNode.position.x = 10
}
Which generates the compiler error: "Main actor-isolated property 'position' can not be mutated from a nonisolated context".
Again fully understandable.
If the delegate is not a view controller but a nonisolated class, we still have the problem that SCNNode can't be used there.
Nearly 100% of the SCNSceneRendererDelegate implementations I've seen use SCNNode or similar MainActor-bound types, because that's what they are meant for.
So, where am I wrong? What is the solution for using SceneKit's SCNSceneRendererDelegate methods under full Swift 6 compilation? Is that even possible for now?
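For illustration, one shape that does satisfy the compiler is to keep the delegate method nonisolated and hop back to the main actor for the SCNNode mutation - a minimal sketch, assuming an asynchronous, possibly frame-late update is acceptable (which rather defeats the purpose of a per-frame callback):

import SceneKit
import UIKit

class GameViewController: UIViewController, SCNSceneRendererDelegate {
    let aNode = SCNNode()

    nonisolated func renderer(_ renderer: any SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // Hop back to the main actor before touching MainActor-isolated SceneKit state.
        // The mutation no longer happens synchronously inside the render callback.
        Task { @MainActor in
            self.aNode.position.x = 10
        }
    }
}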
SceneKit
Create 3D games and add 3D content to apps using high-level scene descriptions using SceneKit.
Posts under SceneKit tag
Hello dear forum,
I need to emit an exact number of particles from an SCNParticleSystem in a burst (this is for a UI effect). This worked perfectly for me in the scene editor by setting the birth rate to that number and the emission duration to 0.
Sadly, when I either load such a particle system from a scene or create it in code, the number of emitted particles is sometimes one less, or one more.
The first time I run it in the simulator it seems to work fine, but after that the number of particles varies as described.
video: https://youtube.com/shorts/MRzqWBy2ypA?feature=share
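For reference, the in-code setup is essentially this (a simplified sketch; the particle count, lifetime, and emitter node are placeholders):

import SceneKit

// Emit an exact number of particles in one burst:
// birth rate = desired count, emission duration = 0.
let burst = SCNParticleSystem()
burst.birthRate = 10           // the exact number of particles wanted
burst.emissionDuration = 0     // emit them all at once
burst.loops = false
burst.particleLifeSpan = 1.0

let emitterNode = SCNNode()
emitterNode.addParticleSystem(burst)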
Does anybody know how to make this predictable?
Thanks so much in advance,
Seb
There is a significant regression in Xcode 16.0 and 16.1 (16B5014f) in that it no longer allows asset files other than .SCN files to be added to a .scnassets group and folder. This really wrecks our ability to maintain our app, since we heavily depend on additional asset files (.USDZ, .JPEG, .PNG, etc.) being included in the group for on-demand resource tagging and loading.
I filed FB15239224 but I'm wondering if this is intentional and if anyone knows of a workaround.
In my app, I have an ARView that has cameraMode set to nonAR.
I occasionally hide the ARView when it is not needed and reveal it again later.
While the ARView is hidden, I'd like to pause the animation to save iPhone battery life. I'd also like to do this when I know that the animation in my scene has paused and the contents of the view, although still visible, are static.
This was possible using SceneKit, but I can't seem to find an equivalent way to do it using RealityKit.
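(For comparison, the SceneKit approach I have in mind is roughly the following - a sketch assuming an SCNView:)

import SceneKit

// SceneKit: stop driving the render loop while the view is hidden and only
// redraw on demand; restore both when the view becomes visible again.
func setSceneKitRendering(_ enabled: Bool, for scnView: SCNView) {
    scnView.isPlaying = enabled
    scnView.rendersContinuously = enabled
}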
At least as of iOS 18, a hidden ARView with an empty scene appears to use approximately 30% of the CPU.
How can I pause ARView so that it won't use the battery unnecessarily?
Thank you for considering this question.
Every now and then my SceneKit game app crashes and I have no idea why. The SCNView has an overlaySKScene, so it might also be SpriteKit's fault.
The stack trace is
#0 0x0000000241c1470c in jet_context::set_fragment_texture(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, jet_texture*) ()
#27 0x000000010572fd40 in _pthread_wqthread ()
Does anyone have an idea where I could start debugging this, without being able to consistently reproduce it?
After scanning the room, I use the .export method, passing a ModelProvider. Then I import the USDZ into an SCNView and continue processing the scene: I would like to apply a texture to the walls and floor, but I can't, because they don't contain the TextureCoordinates. Creating them from the USDZ file is not easy. I tried to combine the CapturedRoom data for the walls and floor only, adding the TextureCoordinates myself. I'm managing, but I'm struggling a lot. Isn't there an easier way to do it? Is a ModelProvider for surfaces planned for the future? If so, where can I access the RoomPlan beta documentation?
Hi everyone,
I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated.
I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either.
Thanks!
Hello,
I'm playing around with making a fully immersive multiplayer, air-to-air dogfighting game, but I'm having trouble figuring out how to attach a camera to an entity.
I have a plane that's controlled with a gamepad, and I want the camera's position to be pinned to that entity as it moves about space, while maintaining the user's ability to look around.
Is this possible?
--
From my understanding, the current state of SceneKit, ARKit, and RealityKit is a bit confusing regarding what can and cannot be done.
SceneKit
Full control of the camera.
Not sure if it can use RealityKit's ECS system.
2D window - missing full immersion.
ARKit
Full control of the camera* - but only on non-Vision Pro devices, since visionOS doesn't have an ARView.
Has RealityKit's ECS system.
2D window - missing full immersion.
RealityKit
Camera is pinned to the device's position and orientation.
Has RealityKit's ECS system.
Allows full immersion.
I'm trying to update my projects to use Swift 6. If I change the project settings to use Swift 6, then my app crashes when I add a closure to the SCNAnimation animationDidStop property. The error occurs inside the SceneKit renderingQueue and indicates that the callback is being called on the wrong queue.
Maybe I need to do something in code to fix this, but I can't seem to make it work. Maybe it's a SceneKit bug?
If you create a new game template in Xcode using SceneKit and replace the contents of GameViewController.swift with the following you will see the app crash after it is launched.
import UIKit
import SceneKit

class GameViewController: UIViewController {
    let player: SCNAnimationPlayer = {
        let a = CABasicAnimation(keyPath: "opacity")
        return SCNAnimationPlayer(animation: SCNAnimation(caAnimation: a))
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        let scnView = self.view as! SCNView
        scnView.scene = SCNScene()

        // Change the project settings to use Swift 6.
        // Setting this closure will then cause a _dispatch_assert_queue_fail
        // EXC_BREAKPOINT error in the scenekit.renderingQueue.SCNView queue;
        // the only thing on the stack is:
        // "%sBlock was %sexpected to execute on queue [%s (%p)]"
        player.animation.animationDidStop = { (a: SCNAnimation, b: SCNAnimatable, c: Bool) in
            print("stopped")
        }

        scnView.scene?.rootNode.addAnimationPlayer(player, forKey: nil)
        player.play()
    }
}
I have an AR game using ARKit with SceneKit that works just fine in iOS 17.
In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking and obviously the whole game is useless.
I narrowed down the issue to showing the Game Center Access Point.
My app has ViewController 1 (VC1) showing the main menu and that's where I want to show the GC Access Point. From there you open VC2 which shows a list of levels. Selecting any level will open VC3 which has the ARScene.
Following is the code I use to start Game Center in VC1:
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)

    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }

    if error != nil {
        print(error?.localizedDescription ?? "")
    }

    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}
When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.
If in VC1 I don't run GKAccessPoint.shared.isActive = true, so I don't activate the access point, the behavior is as follows:
If I wait until after the Game Center login animation completes and closes on its own, and then I proceed to VC2 and VC3, the camera image shows correctly.
If I quickly move to VC2 before the Game Center login animation has completed, so my code will close it by setting active = false, and then I continue to VC3, I will see the black background problem.
So it does look like activating the access point and then de-activating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.
Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground, the camera suddenly shows the real world correctly!
I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it also doesn't run the session start code.
But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same thing manually in my code?
I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well).
So could this be a bug in iOS 18?
As a workaround, I could check the iOS version and, if it's iOS 18, not activate the access point (hoping that the user will not jump to VC2 too quickly) and show my own button which will open Game Center. But I'd rather give the users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2 and that will be an awful experience.
Any help would be greatly appreciated. Thanks!
Hi guys,
I'm integrating the RoomPlan framework into my app.
I'm able to scan a room and extract the nodes from the CaptureStructure object. So far, I can rebuild the 3D object in the SceneView, but I can't render the openings and the windows correctly. I'm struggling to add these two objects correctly to the wall, in order to make the wall transparent where they are supposed to be.
If I export the CaptureStructure into a usda file and then load it directly in the SceneView, all the doors, windows, and openings are rendered correctly, so I do believe that I'm doing something wrong.
Could you please tell me what I'm doing wrong?
I added here a screenshot of my problem:
I have also a prototype, which you can run and see the problem I'm talking about: https://github.com/renanstig/3d-scenekit-prototype
I'm trying to figure out how to basically engrave some text into this ellipsoid mesh. So far the only thing I've found that comes close to what I'm looking for is SCNText, but it floats above the ellipsoid and doesn't conform to the angular shape.
import MetalKit
import ModelIO
import SceneKit
import SceneKit.ModelIO

let allocator = MTKMeshBufferAllocator(device: MTLCreateSystemDefaultDevice()!)
let disc = MDLMesh.newEllipsoid(
    withRadii: vector_float3(Float(discDiameter / 2), Float(discDiameter / 2), Float(discThickness / 2)),
    radialSegments: 64,
    verticalSegments: 64,
    geometryType: .triangles,
    inwardNormals: false,
    hemisphere: false,
    allocator: allocator)

let discGeometry = SCNGeometry(mdlMesh: disc)
let material = createIridescentMaterial()
discGeometry.materials = [material]
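One possible direction (a rough sketch, not true engraving: it just darkens the surface where the text is, assuming the ellipsoid mesh comes with usable texture coordinates) would be to draw the text into an image and apply it through the material - here `material` is the one created above:

import UIKit
import SceneKit

// Render the text into an image; applied via the material's multiply property it
// darkens the ellipsoid's surface wherever the text is drawn, following the mesh's
// texture coordinates instead of floating above the surface like SCNText.
func engravingImage(_ text: String, size: CGSize = CGSize(width: 1024, height: 512)) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        UIColor.white.setFill()
        UIRectFill(CGRect(origin: .zero, size: size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 96),
            .foregroundColor: UIColor(white: 0.4, alpha: 1.0)
        ]
        (text as NSString).draw(at: CGPoint(x: size.width * 0.25, y: size.height * 0.4),
                                withAttributes: attributes)
    }
}

material.multiply.contents = engravingImage("SOME TEXT")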
I want to convert CGPoint into SCNVector3. I am using ARFaceTrackingConfiguration for face tracking.
Below is my code to convert SCNVector3 to CGPoint
let point = faceAnchor.verticeAndProjection(to: sceneView, facePoint: faceAnchor.geometry.vertices[0])
print(point, faceAnchor.geometry.vertices[0])
which prints the values below:
CGPoint = (350.564453125, 643.4456787109375)
SIMD3<Float>(0.014480735, 0.01397189, 0.04508282)
extension ARFaceAnchor {
    // Struct to store the 3D vertex and the 2D projection point.
    struct VerticesAndProjection {
        var vertex: SIMD3<Float>
        var projected: CGPoint
    }

    // Return the 2D projection of a face vertex in the given view.
    func verticeAndProjection(to view: ARSCNView, facePoint: Int) -> CGPoint {
        let point = SCNVector3(geometry.vertices[facePoint])

        let col = SIMD4<Float>(SCNVector4())
        let pos = SIMD4<Float>(SCNVector4(point.x, point.y, point.z, 1))

        let pworld = transform * simd_float4x4(col, col, col, pos)
        let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))

        let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
        return p
    }
}

extension matrix_float4x4 {
    /// Get the position of the transform matrix.
    public var position: SCNVector3 {
        return SCNVector3(self[3][0], self[3][1], self[3][2])
    }
}
Now I want to convert the same CGPoint back to SCNVector3.
I tried the code below, but it does not give the expected value, which is SIMD3<Float>(0.014480735, 0.01397189, 0.04508282):
let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)
let unproject = sceneView.unprojectPoint(SCNVector3(point.x, point.y, CGFloat(projectedOrigin.z)))
let vector = SCNVector3(unproject.x, unproject.y, unproject.z)
Is there any way to convert CGPoint to SCNVector3? I cannot use hitTest because this CGPoint is not present on the node. It is present somewhere on the face area.
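One thing worth noting (my understanding, not verified): a 2D point alone does not determine a depth, so unprojectPoint can only give back the original vertex if it is fed that vertex's own screen-space depth, not the depth of the world origin. A sketch of the round trip, where `worldPoint` stands for the face vertex in world space (the value computed as pworld.position inside verticeAndProjection(to:facePoint:)):

// Reuse the depth of the *same* point, not of the world origin.
let projected = sceneView.projectPoint(worldPoint)       // x, y in screen points, z = depth
let recovered = sceneView.unprojectPoint(
    SCNVector3(Float(point.x), Float(point.y), projected.z))
// `recovered` should round-trip to worldPoint (up to floating-point error).
// An arbitrary CGPoint on the face has no single depth, so some independent
// depth estimate is still needed for points other than the projected vertices.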
As the title suggests, clicking the "Export to SceneKit" button indeed converts a USD to .scn but removes the normals in the process if the mesh has blendshapes.
When I export the same file without any blendshapes / morphtargets, the normals stay on as expected.
If I try to create normals in the SceneKit editor (adding them as a new geometry source), Xcode crashes (no matter whether there are blendshapes or not).
I've tried loading the resulting scene with
[SCNSceneSource.LoadingOption.createNormalsIfAbsent : true]
but this doesn't change anything either.
I suppose this is a bug?
My last resort is to load my character without any blendshapes and then add the targets from a different scene.
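(Another avenue that might be worth trying - purely a sketch, assuming Model I/O keeps the blend shape targets intact, which I haven't verified: load the USD with Model I/O, recompute the normals there, and only then bridge to SceneKit.)

import ModelIO
import SceneKit
import SceneKit.ModelIO

// Load the USD with Model I/O, add normals to every mesh, then bridge to SceneKit.
let asset = MDLAsset(url: usdURL)   // usdURL: URL of the exported .usd/.usdz (placeholder)
asset.loadTextures()
for case let mesh as MDLMesh in asset.childObjects(of: MDLMesh.self) {
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.0)
}
let scene = SCNScene(mdlAsset: asset)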
Thanks for any insight!
seb
Hi,
I am initializing a SCNNode from a OBJ file. Let's suppose the object is a sphere, and its pivot after loading it from the OBJ file is the bottom of the sphere (where it would rest on the floor). Its default position is the zero vector.
However, I must change the pivot to the center of the sphere. After doing so (based on its bounding box), since the position is still the zero vector, does that mean that the object was translated so that the new pivot lies at (0,0,0)? Or should I set its position to (0,0,0), which will now be based on the new pivot?
To test whether this is needed, I am using a separate button to change the node's position to (0,0,0) after changing its pivot, but I do not see any change visually, which leads me to believe that after changing the pivot, the object is automatically moved to (0,0,0) based on its new pivot. This is probably done faster than the scene renders, which is why I do not notice any difference between the two methods.
I cannot tell which of the two is correct, meaning that I do not know whether I should set the position again to (0,0,0) after changing the pivot or not. Right now it seems like it makes no difference. Any thoughts?
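For concreteness, the sketch below is what I mean by recentering the pivot (`node` is the node loaded from the OBJ). My understanding - not verified against documentation - is that setting pivot does not rewrite the position property; the geometry is simply rendered offset so that the new pivot coincides with the unchanged position, which would explain why re-setting position to (0,0,0) changes nothing:

import SceneKit

// Recenter the node's pivot on its bounding-box center. node.position keeps
// whatever value it had (still the zero vector here); only the point of the
// geometry that is placed at that position changes.
let (minBox, maxBox) = node.boundingBox
let center = SCNVector3((minBox.x + maxBox.x) / 2,
                        (minBox.y + maxBox.y) / 2,
                        (minBox.z + maxBox.z) / 2)
node.pivot = SCNMatrix4MakeTranslation(center.x, center.y, center.z)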
An SCNView loaded from a XIB is used to display a point-cloud 3D model. It fails to display on an iPhone 12 but displays correctly on an iPhone 13. The error messages are as follows: Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)
2024-07-10 11:01:22.403196+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)
2024-07-10 11:01:22.403458+0800 不愁物联网[26648:1375452] [SceneKit] Error: Resource command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=1 "Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)" UserInfo={NSLocalizedDescription=Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)}
(
)
2024-07-10 11:01:22.403539+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
2024-07-10 11:01:22.403556+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
2024-07-10 11:01:22.403586+0800 不愁物联网[26648:1375452] [SceneKit] Error: Main command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=3 "Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)" UserInfo={NSLocalizedDescription=Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)}
(
)
I have had an app on the App Store for many years that enables users to post text into clouds in augmented reality. Yet last week, abruptly, upon installing the app on the iPhone, the screen started going totally dark and a list of barely comprehensible logs came up, of the kind:
ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.
many times, then
RWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108
again several times and then:
ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }
and then again the pose issues several times.
I hoped the new beta version would have solved the issue, but that was not the case. Unfortunately, I do not know whether this depends on the beta version or on some other issue, given that the app cannot be installed on the Mac simulator.
So I am trying to create a certain number of spheres in a SceneKit scene based on the number of objects in a list. I think I would call addChildNode in a for loop, but how would I assign them all the same type of model? Especially because sometimes the list will be empty, so the model would not need to show up at all.
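A minimal sketch of that idea (names like items and scene are placeholders): clone one template node per list element, so every sphere shares the same geometry, and an empty list simply adds nothing:

import SceneKit

// One sphere per element: clone a template node so all spheres share the same
// geometry; an empty list means the loop body never runs and nothing is added.
let template = SCNNode(geometry: SCNSphere(radius: 0.1))

for (index, _) in items.enumerated() {            // `items` is the list (placeholder)
    let sphere = template.clone()
    sphere.position = SCNVector3(Float(index) * 0.3, 0, 0)
    scene.rootNode.addChildNode(sphere)           // `scene` is the SCNScene (placeholder)
}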
I am trying to make an app that uses SceneKit to display some 3D models, but I started it from the regular app template. When I try to build, I get this error: shell script build rule for "/somePath/Scene.scn" must declare at least one output file.
The .scn is being provided to a view; is that not an output? Or is there some formatting issue that I need to solve?
I have a visionOS app that utilizes DrawableQueue and CADisplayLink to update an Entity, a TextureResource tied to the drawable, and a Material that uses that TextureResource. The TextureResource gets updated when a video frame is ready. Material properties can be updated from the video or from other sources.
Current process: when each video frame is ready, we get the next drawable, render to it, present it, and make an Entity update (e.g. transform). However, I’m experiencing jitter in the rendered content where it seems that the updates to the entity and the drawable being presented are milliseconds off from each other.
Should I be using Drawable.presentOnSceneUpdate() to ensure all updates happen in the same update cycle? And if so, do you have any additional details on how to correctly use this function (the docs are unclear)?