Hello, I’m trying to move my app to visionOS. My app is used by pilots to study aircraft systems: it is a 3D airplane cockpit built with SceneKit, and I use SpriteKit scenes to animate the cockpit instruments.
SceneKit allows a SpriteKit scene to be applied as a material, so I can easily animate all the different instruments and indications there, but I can’t find this option in Reality Composer Pro. Is this possible? Any suggestions on what I can look into to animate and simulate the instruments?
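For reference, the SceneKit setup I’m coming from looks roughly like this (a trimmed sketch; the geometry, sizes, and animation are just placeholders):

import SceneKit
import SpriteKit

// Build a small SpriteKit scene that acts as a live, animated texture.
let instrumentScene = SKScene(size: CGSize(width: 512, height: 512))
instrumentScene.backgroundColor = .black
let needle = SKSpriteNode(color: .green, size: CGSize(width: 4, height: 200))
needle.position = CGPoint(x: 256, y: 256)
instrumentScene.addChild(needle)
needle.run(.repeatForever(.rotate(byAngle: .pi, duration: 2)))

// SceneKit accepts an SKScene as material contents, so the gauge animates on the 3D surface.
let gaugeMaterial = SCNMaterial()
gaugeMaterial.diffuse.contents = instrumentScene
let gauge = SCNNode(geometry: SCNPlane(width: 0.1, height: 0.1))
gauge.geometry?.firstMaterial = gaugeMaterial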
SceneKit
Create 3D games and add 3D content to apps using high-level scene descriptions using SceneKit.
Posts under SceneKit tag
62 Posts
We have a requirement to add a USDZ file to a UIView, show its content, archive the data, and save it to a file. When the user opens the file, we need to unarchive that USDZ content, bind it to the UIView, and show it to the user. Initially, we created an SCNScene object by passing the USDZ file URL, like below.
do {
usdzScene = try SCNScene(url: usdzUrl)
} catch let error as NSError {
print(error)
}
Since SCNScene supports the NSSecureCoding protocol, we directly archive that object, save it to a file, and later load it back and recreate the SCNScene object through the NSKeyedUnarchiver process.
But for some files, we noticed high memory consumption while archiving the SCNScene object in the line below.
func encode(with coder: NSCoder) {
coder.encode(self.scnScene, forKey: "scnScene")
}
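For completeness, the archive/unarchive round trip itself is straightforward; a trimmed sketch of what we do (the destination URL is a placeholder and error handling is reduced):

import SceneKit

func archiveRoundTrip(_ scene: SCNScene, to archiveFileUrl: URL) throws -> SCNScene? {
    // SCNScene adopts NSSecureCoding, so it can be archived directly.
    let data = try NSKeyedArchiver.archivedData(withRootObject: scene, requiringSecureCoding: true)
    try data.write(to: archiveFileUrl)

    // Later: read the file back and recreate the scene.
    let restoredData = try Data(contentsOf: archiveFileUrl)
    return try NSKeyedUnarchiver.unarchivedObject(ofClass: SCNScene.self, from: restoredData)
}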
File reference link: toy_drummer_idle.usdz
When we analysed the Apple documentation (see the Discussion section), it says the .scn file extension is the fastest format for processing, faster than .usdz.
So we used the SCNScene write(to:) feature to create a .scn file from the given .usdz file.
After that, when we archive the SCNScene object created from the .scn file URL, the archive process is much faster and no longer consumes a lot of memory. It really is much faster than the previous case.
But unfortunately, the SCNScene write method takes a lot of time for this conversion, memory usage also climbs, and it can even cause the app to crash.
I checked the output file sizes as well: the given .usdz file is 18 MB, while the generated .scn file is 483 MB. But the SCNScene archive process is fast.
Please analyse this case and provide some guidance on how we can optimise this behaviour. I really appreciate your feedback.
Full Code:
import UIKit
import SceneKit
class ViewController: UIViewController {
var scnView: SCNView?
var usdzScene: SCNScene?
var scnScene: SCNScene?
lazy var exportButton: UIButton = {
let btn = UIButton(type: UIButton.ButtonType.system)
btn.tag = 1
btn.backgroundColor = UIColor.blue
btn.addTarget(self, action: #selector(buttonPressed(_:)), for: .touchUpInside)
btn.setTitle("USDZ to SCN", for: .normal)
btn.setTitleColor(.white, for: .normal)
btn.layer.borderColor = UIColor.gray.cgColor
btn.titleLabel?.font = .systemFont(ofSize: 20)
btn.translatesAutoresizingMaskIntoConstraints = false
return btn
}()
func deleteTempDirectory(directoryName: String) {
let tempDirectoryUrl = URL(fileURLWithPath: NSTemporaryDirectory())
let tempDirectory = tempDirectoryUrl.appendingPathComponent(directoryName, isDirectory: true)
if FileManager.default.fileExists(atPath: URL(string: tempDirectory.absoluteString)!.path) {
do{
try FileManager.default.removeItem(at: tempDirectory)
}
catch let error as NSError {
print(error)
}
}
}
func createTempDirectory(directoryName: String) -> URL? {
let tempDirectoryUrl = URL(fileURLWithPath: NSTemporaryDirectory())
let toBeCreatedDirectoryUrl = tempDirectoryUrl.appendingPathComponent(directoryName, isDirectory: true)
if !FileManager.default.fileExists(atPath: URL(string: toBeCreatedDirectoryUrl.absoluteString)!.path) {
do{
try FileManager.default.createDirectory(at: toBeCreatedDirectoryUrl, withIntermediateDirectories: true, attributes: nil)
}
catch let error as NSError {
print(error)
return nil
}
}
return toBeCreatedDirectoryUrl
}
@IBAction func buttonPressed(_ sender: UIButton){
let scnFolderName = "SCN"
let scnFileName = "3D"
deleteTempDirectory(directoryName: scnFolderName)
guard let scnDirectoryUrl = createTempDirectory(directoryName: scnFolderName) else {return}
let scnFileUrl = scnDirectoryUrl.appendingPathComponent(scnFileName).appendingPathExtension("scn")
guard let usdzScene else {return}
let result = usdzScene.write(to: scnFileUrl, options: nil, delegate: nil, progressHandler: nil)
if (result) {
print("exporting process is success.")
} else {
print("exporting process is failed.")
}
}
override func viewDidLoad() {
super.viewDidLoad()
let usdzUrl: URL? = Bundle.main.url(forResource: "toy_drummer_idle", withExtension: "usdz")
guard let usdzUrl else {return}
do {
usdzScene = try SCNScene(url: usdzUrl)
} catch let error as NSError {
print(error)
}
guard let usdzScene else {return}
scnView = SCNView(frame: .zero)
guard let scnView else {return}
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
self.view.addSubview(exportButton)
NSLayoutConstraint.activate([
scnView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
scnView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
scnView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
scnView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -30),
exportButton.widthAnchor.constraint(equalToConstant: 200),
exportButton.heightAnchor.constraint(equalToConstant: 40),
exportButton.centerXAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerXAnchor),
exportButton.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
])
DispatchQueue.main.asyncAfter(deadline: .now() + 0.01) {[weak self] in
guard let self else {return}
loadModel(scene: usdzScene)
}
}
func loadModel(scene: SCNScene){
guard let scnView else {return}
scnView.autoenablesDefaultLighting = true
scnView.scene = scene
scnView.allowsCameraControl = true
}
}

Hello,
I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with:
Capturing Perspective Camera View: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?
Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering whether SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView.
I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?
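For reference, the SceneKit-only capture path I experimented with looks roughly like this (a sketch; `scene` and `cameraNode` are assumed to already exist, and the output size is arbitrary):

import SceneKit
import Metal

// Render the scene offscreen from its camera and grab the result as an image.
let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
renderer.scene = scene                 // the SCNScene containing the model
renderer.pointOfView = cameraNode      // an SCNNode with an SCNCamera attached
let image = renderer.snapshot(atTime: 0,
                              with: CGSize(width: 1920, height: 1080),
                              antialiasingMode: .multisampling4X)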
Any guidance, sample code, or references to documentation would be greatly appreciated.
Thank you in advance for your help!
Hi everyone,
I'm choosing a framework for developing a game that doesn't involve augmented reality (AR) and I'm unsure whether to use SceneKit or RealityKit. I would like to hear from Apple engineers on this matter. Which of these frameworks is better suited for creating non-AR games?
Additionally, is it possible to disable AR in RealityKit when using the updated RealityView? Thanks in advance for your insights and recommendations!
We’re experiencing incorrect SceneKit hit-testing results on iOS 17.2 compared with iOS 16.1, when using either the Metal or the OpenGL ES 2 rendering engine.
Tapping on a 3D model to place an SCNNode:
// pointInScene: tapped point
let hitResults = sceneView.hitTest(pointInScene, options: nil)
return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
Dear all,
I have several scenes, each with its own camera at a different position. The scenes are loaded with transitions.
If I set the pointOfView in every scene to that scene's camera, the transitions don't work properly: the active scene view jumps to the position of the camera of the scene that is fading in.
If I comment the pointOfView out, the transitions work fine, but the following error message appears:
Error: camera node already has an authoring node - skip
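For context, the transition call looks roughly like this (a sketch; the scene and node names are placeholders), with the incoming scene's camera passed as incomingPointOfView:

import SceneKit
import SpriteKit

let nextScene = SCNScene(named: "Scene2.scn")!
let nextCamera = nextScene.rootNode.childNode(withName: "camera", recursively: true)
sceneView.present(nextScene,
                  with: SKTransition.fade(withDuration: 1.0),
                  incomingPointOfView: nextCamera,   // camera of the scene that is fading in
                  completionHandler: nil)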
Has someone an idea to fix this?
Many Thanks,
Ray
Hi,
Since iOS 17, when setting a weight on an SCNMorpher, the normals become completely wrong. As you can see below, it only happens when there are vertices along an edge.
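The weight is applied in the usual way; a minimal sketch of the call that triggers it (the node is assumed to carry the morph targets):

if let morpher = modelNode.morpher {
    // Normals become wrong as soon as the weight is non-zero on iOS 17.
    morpher.setWeight(1.0, forTargetAt: 0)
}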
Has anyone encountered that problem and found a solution?
Thanks
Reported: FB13798652
Hello!
I need to display a .scnz 3D model in an iOS app. I tried converting the file to a .scn file so I could use it with SCNScene, but the file became corrupted.
I also tried to instantiate an SCNScene with the .scnz file directly, but that didn't work either (it crashes on instantiation).
After all this, what would be the best way to use this file, given that converting or exporting it to a .scn file with scntool hasn't worked?
Thank you!
I want to create a 3D video game in Xcode for macOS, iOS, iPadOS, tvOS, and visionOS. I have heard there are a few different ways to go about this, such as MetalKit or SceneKit. These libraries seem to have few examples and little documentation, so I am wondering:
Are they still being developed/supported?
Which platform should I make a game in?
Where are some resources to learn how to use these platforms?
Are there other better platforms that I am just not aware of?
Thanks!
I am trying to change the color of a USDZ asset provided by my designer, but I am unable to do it. Can someone help me with some sample code?
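The direction I've been exploring is roughly this (a sketch; the asset name and colour are placeholders), loading the USDZ into an SCNScene and recolouring every material's diffuse:

import SceneKit
import UIKit

if let url = Bundle.main.url(forResource: "Asset", withExtension: "usdz"),
   let scene = try? SCNScene(url: url) {
    scene.rootNode.enumerateHierarchy { node, _ in
        for material in node.geometry?.materials ?? [] {
            material.diffuse.contents = UIColor.red
        }
    }
}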
Dear all,
I'm new to coding, for maybe the fifth time :), and I hope I find the right words.
Right now I'm prototyping a 3D game with SceneKit for iOS devices. At the moment the prototype of MainScene.scn (the first game scene) is ready. The 3D objects are located in the asset catalog, and the game behavior, with object movements, gyro data, 3D text and so on, is coded in GameViewController.swift.
Now I'm at a point where I think I can do many things wrong, at the cost of days, weeks or months of work to restructure my app afterwards.
So I want to understand: what's the "sexiest" way to structure my project's scenes, views, controllers, etc.?
Simplified, my game will have a user interface to store the player's name, change gameplay settings and so on, and it will have multiple 3D scenes where the game takes place.
At first I thought it would be a good idea to have multiple scenes all controlled by one GameViewController, so I wouldn't have to duplicate recurring methods. But then I pictured the GameViewController file growing bigger and bigger, and I feared losing focus the more scenes I added.
The thought of a separate controller for each scene, or for a few scenes, isn't appealing either, because when a recurring method changes, I might have to change it in every controller.
I then thought of instancing the GameViewController, but in no case did I have the feeling "that's the way to go".
So, long story short: how would you structure a game project like this?
Thank you in advance,
Ray
Hi, I want to place an object in 3D world space without using hit testing or plane detection, in iOS Swift code. Please suggest the best method.
Currently, I take the camera's transform matrix and use simd_mul to place the object. It works, but the object gets placed at the centre of the screen. I want to pick an (x, y) position in 2D screen coordinates and place the object there.
I tried using the unprojectPoint function to get the AR world coordinate of the point I touch on the screen. I get x, y, z values, but they are very close to the values from the camera's transform. When I replace the camera-centre values with the unprojectPoint values, I don't see any difference in the location of the placed object.
The code below always places the object in front of the centre of the screen at the specified depth, but I need to place the object at a user-specified (x, y) position on the screen at a given depth, i.e. convert from the 2D pixel coordinate system of the renderer to the 3D world coordinate system of the scene.
/* Create a transform with a translation of 0.2 meters in front of the camera. */
var translation = matrix_identity_float4x4
translation.columns.3.z = -0.2
let transform = simd_mul(view.session.currentFrame.camera.transform, translation)
Reference: https://developer.apple.com/documentation/arkit/arskview/providing_2d_virtual_content_with_spritekit
The code I used to replace the camera-centre matrix values with the unprojectPoint result is:
let vpWithZ = SCNVector3(x: 100.0, y: 100.0, z: -1.0)
let worldPoint = sceneView.unprojectPoint(vpWithZ)
var translation = matrix_identity_float4x4
translation.columns.3.z = Float(Depth)
var translation2 = sceneView.session.currentFrame!.camera.transform
translation2.columns.3.x = worldPoint.x
translation2.columns.3.y = worldPoint.y
translation2.columns.3.z = worldPoint.z
let new_transform = simd_mul(translation2, translation)
/* add object name you wanted in your project*/
let sphere = SCNSphere(radius: 0.03)
let objectNode = SCNNode(geometry: sphere)
objectNode.position = SCNVector3(x: transform.columns.3.x, y: transform.columns.3.y, z: transform.columns.3.z)
The image below shows an outline of my idea.
Hello,
I am currently working on a project where I am creating a bookstore visualization with racks and shelves (a fully immersive view). I have an array of names, each representing a USDZ object present in my working directory.
Here’s the enum I am trying to iterate over:
enum AssetName: String, Codable, Hashable, CaseIterable {
case book1 = "B1"
case book2 = "B2"
case book3 = "B3"
case book4 = "B4"
}
and the code I wrote for adding the objects:
import SwiftUI
import RealityKit
struct LocalAssetRealityView: View {
let assetName: AssetName
var body: some View {
RealityView { content in
if let asset = try? await ModelEntity(named: assetName.rawValue) {
content.add(asset)
}
}
}
}
Now I get this error when I try to add multiple objects on a button click:
Unable to present another Immersive Space when one is already requested or connected
Please suggest any solutions. Also, please suggest whether the positions of the objects can be set programmatically as well.
I'm trying to add dynamic shadows by adding a directional light to the scene.
I implemented a POC based on the latest documentation.
Basically, the way shadows are rendered in RealityKit is by adding a ModelEntity into an AnchorEntity with a target of type planes.
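Roughly, my setup looks like this (a trimmed sketch; `modelEntity` and `arView` exist elsewhere, and the numeric values are placeholders):

import RealityKit

let planeAnchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
planeAnchor.addChild(modelEntity)        // the entity that should cast the shadow

let sun = DirectionalLight()
sun.light.intensity = 5_000
sun.shadow = DirectionalLightComponent.Shadow(maximumDistance: 2, depthBias: 1)
sun.look(at: .zero, from: [0, 2, 1], relativeTo: nil)
planeAnchor.addChild(sun)

arView.scene.addAnchor(planeAnchor)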
The result is that I'm getting shadows that are terribly flickering.
I'd add that in SceneKit, there are many more shadow-related properties that let you tweak the look and feel of the shadows, and it's not hard to get a decent shadow there.
I'm wondering if having accurate dynamic shadows is possible in RealityKit and if not, if there's a plan to fix it in the next RealityKit version.
I'm working on a project in Xcode where I need to use a 3D model with multiple morph targets (shape keys in Blender) for animations. The model, specifically the Wolf3D_Head node, contains dozens of morph targets which are crucial for my project. Here's what I've done so far:
I verified the morph targets in Blender (I can see all the morph targets correctly when opening both the original .glb file and the converted .dae file in Blender).
Given that Xcode does not support .glb file format directly, I converted the model to .dae format, aiming to use it in my Xcode project. After importing the .dae file into Xcode, I noticed that Xcode does not show any morph targets for the Wolf3D_Head node or any other node in the model.
I've already attempted using tools like ColladaMorphAdjuster and another version by JakeHoldom to adjust the .dae file, hoping Xcode would recognize the morph targets, but it didn't resolve the issue.
My question is: How can I make Xcode recognize and display the morph targets present in the .dae file exported from Blender? Is there a specific process or tool that I need to use to ensure Xcode properly imports all the morph target information from a .dae file?
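For what it's worth, this is how I check in code whether anything made it through after the import (a sketch; the file name is a placeholder, the node name is the one from my model):

import SceneKit

if let scene = SCNScene(named: "model.dae"),
   let head = scene.rootNode.childNode(withName: "Wolf3D_Head", recursively: true) {
    // Expected to be non-zero if the morph targets were imported.
    print("morph target count:", head.morpher?.targets.count ?? 0)
}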
Tools tried: https://github.com/JonAllee/ColladaMorphAdjuster, https://github.com/JakeHoldom/ColladaMorphAdjuster
Thanks in advance!
I am running a modified RoomPlan app in my test environment, and I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit, because it is related to the ARSCNView. Who controls that one, and what gets processed through it? I notice that I get a lot of session interruptions from sensor failures when I am doing world tracking, and the first one happens almost immediately.
When the room-capture delegates fire up, I start getting images via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when data comes through the delegates? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession being different is okay if you get your timing right. It wasn't clear from any of the documentation that there would be two or more ARSessions delivering data through the delegates. The books on using ARKit are not much help in determining the partition of responsibilities between the two origins, and the buffers arriving at the delegate methods carry no clear indication of which function is delivered through which delegate, at least not one discernible from the highly fragmented documentation in the developer library. Can someone give me some guidance here? Are there sources of clear documentation on what is delivered via which delegate for the various interfaces?
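The only differentiator I've found so far is the plain object-identity check mentioned above; a sketch, assuming I keep a reference to the ARSCNView:

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if session === arSCNView.session {
        // Frames coming from the session the ARSCNView owns.
    } else {
        // Frames coming from some other session, e.g. the one feeding the RoomCapture delegates.
    }
}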
Hi,
My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker are functional in volumetric scenes.
Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes.
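A trimmed sketch of the setup that crashes (the 3D content and binding are placeholders):

import SwiftUI
import RealityKit

@main
struct ModelViewerApp: App {
    @State private var materialColor = Color.white

    var body: some Scene {
        WindowGroup(id: "model") {
            VStack {
                RealityView { content in
                    // 3D model added here (elided)
                }
                // Crashes with the NSInternalInconsistencyException described above.
                ColorPicker("Material color", selection: $materialColor)
            }
        }
        .windowStyle(.volumetric)
    }
}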
This seems rather limiting. Is there a way either of these components can be used? I could build a separate "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open.
Thank you
Hi there -
Where would a dev go these days to get an initial understanding of SceneKit?
The WWDC videos linked in various places seem to be gone?!
For example, the SceneKit page at developer.apple.com features a session-videos link that comes up without any results: https://developer.apple.com/scenekit/
Any advice..?
Cheers,
Jan
We are attempting to update the texture on a node. The code below works correctly when we use a color, but it encounters issues when we attempt to use an image. The image is available in the bundle and displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please assist us with this issue.
for obj in Floor_grp[0].childNodes {
let node = obj.flattenedClone()
node.transform = obj.transform
let imageMaterial = SCNMaterial()
node.geometry?.materials = [imageMaterial]
node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown
obj.removeFromParentNode()
Floor_grp[0].addChildNode(node)
}
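For reference, the image variant that fails for us is simply swapping the colour line for something like this (the asset name is a placeholder); the same image loads fine elsewhere in the app:

if let image = UIImage(named: "floor_texture") {
    imageMaterial.diffuse.contents = image
}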
Hi there, I have recently started development in SwiftUI. I wanted to ask whether it is possible to design an AR app that generates and tracks a 3D model (.scn) based on a real-world 3D object, using that object's 3D data in .usdz format. For example, I want to generate and track the movement of an aeroplane in AR; I have the .scn file, but I want a real-world object, like a pen or pencil, as the anchor, and I want to use its 3D data in .usdz format. I know you can use AR objects and object tracking, but that uses the .arobject format and does not use LiDAR. The important thing is that I want to use LiDAR tracking, not a point cloud. Is this possible? Please point me in the right direction.
Thank you.
I am using Xcode 15 and the iOS 17 beta.