SceneKit


Create 3D games and add 3D content to apps using SceneKit's high-level scene descriptions.

SceneKit Documentation

Posts under SceneKit tag

85 Posts
Post not yet marked as solved
0 Replies
406 Views
Is anyone else having issues with game SCNView .dae and/or .scn files? The new beta release seems very incompatible with files that worked perfectly in previous Xcode releases, up to Xcode 14. I'm working on upgrading a simple stripped-down version of my chess game and ran into strange, bogus error messages and crashes: /Users/helmut/Desktop/schachGame8423/schach2023/scntool:1:1 failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]. All other tools work fine: Reality Converter and exporting the .dae file to other graphic formats behave as in prior Xcode releases, but something is missing in the current beta release of Xcode 15.
Posted Last updated
.
Post marked as solved
1 Reply
930 Views
I've been working on an app that combines CoreML and ARKit/SceneKit to detect and measure some objects, with success. Now I need to make it available to a React Native app, and I'm trying the approach here: https://github.com/riteshakya037/react-native-native-module where I can navigate to and instantiate the view controller. The problem occurs when my view gets called: I get errors because the sceneView is never loaded. Is there a way to use it without the Storyboard? For now they seem incompatible.
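A minimal sketch of creating the scene view fully in code, with no Storyboard outlet (the class name and the session configuration are assumptions, not the poster's actual project):

    import UIKit
    import ARKit

    final class MeasureViewController: UIViewController {
        // Created in code, so no Storyboard or nib is required
        private let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
            view.addSubview(sceneView)
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // Start a world-tracking session once the view is on screen
            sceneView.session.run(ARWorldTrackingConfiguration())
        }

        override func viewWillDisappear(_ animated: Bool) {
            super.viewWillDisappear(animated)
            sceneView.session.pause()
        }
    }

A controller built this way can be instantiated directly from the native-module bridge, with no Storyboard lookup involved.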
Posted Last updated
.
Post not yet marked as solved
3 Replies
1.7k Views
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect the facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the found observations, unproject each point into a SceneKit view (SCNView), and use those points as the vertices to draw a custom geometry that is textured with a material over each found face. Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions. In general, this works as expected, but only when my device is using the front camera in landscape-right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face the way they do with the front camera in landscape right.

Problem: When testing this code, the mesh appears properly (that is, affixed to the user's face), but again, only with the front camera in landscape right. While the code runs as expected (generating the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases. I believe the issue stems from my conversion of the face's bounding box (using VNImageRectForNormalizedRect, which I calculate using the width/height of my SCNView, not of my pixel buffer, which is typically much larger), though all modifications I have tried result in the same issue. Beyond that, I also believe this could be an issue with my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that is needed here.

Sample of the Vision request setup:

    // Setup Vision request options
    var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

    // Setup camera intrinsics
    let cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                              key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                              attachmentModeOut: nil)
    if cameraIntrinsicData != nil {
        requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
    }

    // Set EXIF orientation
    let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

    // Setup the Vision request handler
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: exifOrientation,
                                        options: requestHandlerOptions)

    // Setup the completion handler
    let completion: VNRequestCompletionHandler = { request, error in
        let observations = request.results as! [VNFaceObservation]
        // Draw faces
        DispatchQueue.main.async {
            drawFaceGeometry(observations: observations)
        }
    }

    // Setup the image request
    let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

    // Handle the request
    do {
        try handler.perform([request])
    } catch {
        print(error)
    }

Sample of the SCNView setup:

    // Setup SCNView
    let scnView = SCNView()
    scnView.translatesAutoresizingMaskIntoConstraints = false
    self.view.addSubview(scnView)
    scnView.showsStatistics = true
    NSLayoutConstraint.activate([
        scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
        scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
        scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
        scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
    ])

    // Setup the scene
    let scene = SCNScene()
    scnView.scene = scene

    // Setup the camera
    let cameraNode = SCNNode()
    let camera = SCNCamera()
    cameraNode.camera = camera
    scnView.scene?.rootNode.addChildNode(cameraNode)
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

    // Setup the light
    let ambientLightNode = SCNNode()
    ambientLightNode.light = SCNLight()
    ambientLightNode.light?.type = SCNLight.LightType.ambient
    ambientLightNode.light?.color = UIColor.darkGray
    scnView.scene?.rootNode.addChildNode(ambientLightNode)

Sample of the "face processing":

    func drawFaceGeometry(observations: [VNFaceObservation]) {
        // An array of face nodes, one SCNNode for each detected face
        var faceNode = [SCNNode]()

        // The origin point
        let projectedOrigin = scnView.projectPoint(SCNVector3Zero)

        // Iterate through each found face
        for observation in observations {
            // Setup a SCNNode for the face
            let face = SCNNode()

            // Setup the found bounds
            let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox,
                                                          Int(self.scnView.bounds.width),
                                                          Int(self.scnView.bounds.height))

            // Verify we have landmarks
            if let landmarks = observation.landmarks {
                // Landmarks are relative to, and normalized within, the face bounds
                let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                    .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

                // Add all points as vertices
                var vertices = [SCNVector3]()

                // Verify we have points
                if let allPoints = landmarks.allPoints {
                    // Iterate through each point
                    for point in allPoints.normalizedPoints {
                        // Apply the transform to convert each point to the face's bounding-box range
                        let normalizedPoint = point.applying(affineTransform)
                        let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                        let unprojected = scnView.unprojectPoint(projected)
                        vertices.append(unprojected)
                    }
                }

                // Setup indices
                var indices = [UInt16]()
                // Add indices
                // ... Removed for brevity ...

                // Setup texture coordinates
                var coordinates = [CGPoint]()
                // Add texture coordinates
                // ... Removed for brevity ...

                // Setup the texture image
                let imageWidth = 2048.0
                let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                    let x = coord.x / CGFloat(imageWidth)
                    let y = coord.y / CGFloat(imageWidth)
                    return CGPoint(x: x, y: y)
                }

                // Setup the sources
                let sources = SCNGeometrySource(vertices: vertices)
                let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

                // Setup the elements
                let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

                // Setup the geometry
                let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
                geometry.firstMaterial?.diffuse.contents = textureImage

                // Setup the node
                let customFace = SCNNode(geometry: geometry)
                scnView.scene?.rootNode.addChildNode(customFace)

                // Append the face to the face nodes array
                faceNode.append(face)
            }
        }

        // Iterate the face nodes and append them to the scene
        for node in faceNode {
            scnView.scene?.rootNode.addChildNode(node)
        }
    }
Posted Last updated
.
Post not yet marked as solved
0 Replies
540 Views
Hello everyone 👋 Occasional Apple developer yet first-time poster Flo here. I've had this idea floating around my head for a while now: to develop a little toy that would make use of Apple's XDR displays, i.e. the one in my MBP. So essentially, I'm trying to do real-time 3D graphics utilising the HDR colour space, but I don't have the motivation to learn the bare-metal Metal graphics API. SceneKit, so I figured, would allow me to explore the EDR rendering pipeline, since to my knowledge they all (SpriteKit, RealityKit, etc.) use Metal under the hood anyway. As per the WWDC '21 "Explore HDR rendering with EDR" presentation, all I had to do was set a few properties on my view's underlying CAMetalLayer to enable EDR rendering for my macOS app. However, the SceneKit template in Xcode seems to instantiate my view with a CALayer by default, and when I try to replace it with a CAMetalLayer, nothing gets rendered to the screen/window. Am I oversimplifying things? All I want to do is display a bunch of colours that are brighter than reference white :< If this is possible at all, I would appreciate any pointers. Thanks for reading 🙏
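For reference, a minimal sketch of the layer setup that the WWDC session describes, applied to a CAMetalLayer on macOS (whether SCNView will actually render into a swapped-in CAMetalLayer is exactly the open question here, so treat this as an assumption rather than a recipe):

    import SceneKit
    import QuartzCore

    func enableEDR(on layer: CAMetalLayer) {
        // Opt in to extended-dynamic-range content (values above 1.0)
        layer.wantsExtendedDynamicRangeContent = true
        // A float pixel format, so values brighter than reference white survive
        layer.pixelFormat = .rgba16Float
        // An extended-linear color space to match
        layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
    }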
Posted Last updated
.
Post not yet marked as solved
0 Replies
593 Views
Hi all. I am new to Swift and AR. I'm trying an AR project and ran into a problem: I can't change the material on my models. With geometry such as a sphere or a cube, everything is simple. Can you tell me what I am doing wrong? My simple code:

    @IBOutlet var sceneView: ARSCNView!
    var modelNode: SCNNode!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true

        // Load the model and find its node by name
        let scene = SCNScene(named: "art.scnassets/jacket.usdz")!
        modelNode = scene.rootNode.childNode(withName: "jacket", recursively: true)

        // Replace the material on the first child node's geometry
        let material = SCNMaterial()
        material.diffuse.contents = UIImage(named: "art.scnassets/58.png")
        modelNode.childNodes[0].geometry?.materials = [material]

        sceneView.scene = scene
    }
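A minimal sketch of one common variation, applying the material to every geometry in the model's subtree rather than only the first child (whether the USDZ nests its geometry deeper is an assumption):

    // USDZ models often nest geometry several levels deep,
    // so walk the whole hierarchy instead of childNodes[0]
    modelNode.enumerateHierarchy { node, _ in
        if node.geometry != nil {
            node.geometry?.materials = [material]
        }
    }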
Posted Last updated
.
Post not yet marked as solved
2 Replies
757 Views
We are trying to save a scene to USDZ by using the scene?.write method, which worked as expected until iOS 17. In iOS 17 we get the error: Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument", which seems to be a SceneKit issue (attaching a stack-trace screenshot for reference). We used the updated method for the URL in scene?.write(to: url, delegate: nil), where the url was generated using the .appending(path:) method.
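For reference, a minimal sketch of the export call being described, with the destination URL built so that it ends in .usdz (the file name is a placeholder; whether the extension handling is what changed in iOS 17 is the open question):

    import SceneKit

    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let url = documents.appending(path: "export.usdz")

    // write(to:options:delegate:progressHandler:) returns false on failure
    let success = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)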
Posted
by lanxinger.
Last updated
.
Post not yet marked as solved
0 Replies
384 Views
How do I trace a map on the floor as the user walks through their house, like a trail or heatmap, and then save this trail to Core Data? Would it be possible to load and view this map later in the same spot, or rescan the trail in the same area?
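A minimal sketch of one way such a trail could be recorded, sampling the camera position on each ARKit frame (the delegate wiring, the distance threshold, and the Core Data step are all assumptions about how this might be structured):

    import ARKit

    final class TrailRecorder: NSObject, ARSessionDelegate {
        // World-space positions sampled as the user walks
        private(set) var trail: [simd_float3] = []

        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // The camera's translation is the last column of its transform
            let t = frame.camera.transform.columns.3
            let position = simd_float3(t.x, t.y, t.z)
            // Record a point only after the user moves a few centimeters
            if trail.last == nil || simd_distance(trail.last!, position) > 0.05 {
                trail.append(position)
                // Persisting each point to Core Data would happen here
            }
        }
    }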
Posted Last updated
.
Post not yet marked as solved
0 Replies
516 Views
I am developing a game where users can use a finger to move objects; when the user releases their finger, the game checks for overlap between objects and moves the moved object back to its original position if there is overlap. I am using SCNPhysicsWorld.contactTest(with:) to check for overlap between my nodes. However, the method only works correctly when the nodes have physics bodies using .convexHull; when I change it to .concavePolyhedron, everything stops working and no contact is reported. I have set the physics bodies to be static, so I am at a loss for what to do. Here is my code configuring the physics body for each node:

    parentNode.physicsBody = SCNPhysicsBody(
        type: .static,
        shape: SCNPhysicsShape(node: parentNode,
                               options: [.type: SCNPhysicsShape.ShapeType.concavePolyhedron,
                                         .collisionMargin: 0.0,
                                         .scale: scaleVector]))

Here is my code calling the contact test:

    if let test = currentNode?.physicsBody {
        let list = view.scene!.physicsWorld.contactTest(with: test)
        ...
    }
Posted
by JustinZh.
Last updated
.
Post not yet marked as solved
0 Replies
704 Views
I am getting 4 warnings compiling the current SceneKit game template: /Users/helmut/Documents/spaceTime/scntool:1:1 Could not find bundle inside /Library/Developer/CommandLineTools. Any idea why the bundle is not installed? I did a fresh download twice now and am still stuck with these warnings about bundles not being installed or not found. Is there any way to correct the download/install so the base dependencies are present? Thank you.
Posted Last updated
.
Post not yet marked as solved
0 Replies
663 Views
A feature my app needs is to allow users to go through their room and mark specific positions, then navigate those positions with a floor-plan-like map. Think Google Maps, but for your room, showing the user's position. I know it is possible to make a floor plan with RoomPlan, which could act as the map, but would it be possible, after the plan is made, to track the user's location in the room and show it? Is this too complex for RoomPlan? If so, how would I tackle this problem?
Posted Last updated
.
Post not yet marked as solved
1 Reply
557 Views
Understanding SCNCameraController

TL;DR: I'm able to create my own subclassed camera controller, but it only works for rotation, not translation. I made a demo repo here: https://github.com/mortenjust/Camera-Control-Demo

Background: I want to use SceneKit's camera controller to drive my scene's camera. The reason I want to subclass it is that my camera is on a rig, where I apply rotation to the rig and translation to the camera. I do that because I animate the camera, and applying both translation and rotation to the camera node doesn't create the animation I want.

Setting up:
1. Instantiate my own SCNCameraController
2. Set its pointOfView to my scene's pointOfView (or its parent node, I guess)

Using the camera controller: We now want the new camera controller to drive the scene. When interactions begin (e.g. mouseDown), call beginInteraction(_ location: CGPoint, withViewport viewport: CGSize). When interactions update and end, call the corresponding functions on the camera controller.

Actual behavior: It works when I begin/update/end interactions from mouse-down events. It ignores any other event types, like magnification and scrollWheel, which work in e.g. the SceneKit editor in Xcode. See MySCNView.swift in the repo for a demo. By overriding the camera controller's rotate function, I can see that it is called with deltas. This is great. But when I override translateInCameraSpaceBy, my print statements don't appear and the scene doesn't translate.

Expected behavior: I expected SCNCameraController to also apply translations and rolls to the pointOfView by inspecting the currentEvent and figuring out what to do. I'm inclined to think that I'm supposed to call translateInCameraSpaceBy myself, but that seems inconsistent with how begin/continue/end interaction seems to call rotate.

Demo repo: https://github.com/mortenjust/Camera-Control-Demo
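A minimal sketch of the workaround being hinted at, forwarding scroll-wheel events to the controller manually (whether this is the intended usage is precisely the open question; the scale factor is an arbitrary assumption):

    import AppKit
    import SceneKit

    class MyScrollingSCNView: SCNView {
        override func scrollWheel(with event: NSEvent) {
            // SCNCameraController doesn't seem to pick up this event type
            // on its own, so translate the point of view in camera space directly
            let scale: Float = 0.01
            defaultCameraController.translateInCameraSpaceBy(
                x: Float(event.deltaX) * scale,
                y: Float(event.deltaY) * scale,
                z: 0)
        }
    }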
Posted Last updated
.
Post not yet marked as solved
0 Replies
487 Views
I'm trying to create an app similar to Polycam using LiDAR. I'm using SceneKit mesh reconstruction and am able to apply some random textures, but I need real-world textures in the generated output 3D model. The few examples I found relate to MetalKit and point clouds, which were not helpful. Can you help me with any references/steps/tutorials on how to achieve this?
Posted
by muqaddir.
Last updated
.
Post not yet marked as solved
0 Replies
386 Views
Hello, does anyone know of a way to learn Apple modules such as SceneKit, UIKit, RealityKit, etc.? I have seen tutorials on the Apple and Swift Playgrounds pages, but so far I have not found one that covers these more advanced modules. I have looked at the Apple documentation, but it is not the best way to learn on its own, so I wanted to ask if anyone knows of resources, such as a playground or something else, for learning how to use these advanced modules.
Posted Last updated
.
Post not yet marked as solved
2 Replies
805 Views
Hello, I am a bit stuck with a silly challenge I set myself: I want to have a node with a simple geometry (let's say a triangle) and render that triangle N times while passing in some information per triangle, say an offset to apply at shader time. I understand that maybe I should create a source geometry and create multiple nodes to reflect that, but the point here is to implement some Metal in the renderNode override (SCNNode.rendererDelegate / SCNNodeRendererDelegate). So I set up a vertex shader like this:

    vertex VertexOut brush_vertex_main(const VertexIn vertexIn [[stage_in]],
                                       constant BrushNodeBuffer& scn_node [[buffer(BufferNode)]],
                                       constant BrushInstances *instances [[buffer(BufferBrush)]],
                                       uint instanceID [[instance_id]]) {
        float4 vertexOffset = float4(instances[instanceID].offset.xyz, 1.0)
                            + float4(vertexIn.position.xyz, 1.0);
        // float4 vertexOffset = float4(vertexIn.position.xyz, 1.0);
        VertexOut out = {
            .position = scn_node.modelViewProjectionTransform * vertexOffset,
            .color = vertexIn.color
        };
        return out;
    }

I did the binding as well to create a render pipeline state, e.g.:

    let defLib = wd.device!.makeDefaultLibrary()
    let vertFunc = defLib?.makeFunction(name: vertexFunctionName)
    let fragFunc = defLib?.makeFunction(name: fragmentFunctionName)

    // Add a vertex descriptor (geometries should be doing that under the hood anyway)
    let vertexDescriptor = MTLVertexDescriptor()
    // Position (SCNVertexSemanticPosition)
    vertexDescriptor.attributes[0].format = .float3
    vertexDescriptor.attributes[0].bufferIndex = 0
    vertexDescriptor.attributes[0].offset = 0
    // Color (SCNVertexSemanticColor)
    vertexDescriptor.attributes[3].format = .float3
    vertexDescriptor.attributes[3].bufferIndex = 0
    vertexDescriptor.attributes[3].offset = MemoryLayout<simd_float3>.stride
    vertexDescriptor.layouts[0].stride = MemoryLayout<simd_float3>.stride * 2

    let pipelineDescriptor = MTLRenderPipelineDescriptor()
    pipelineDescriptor.vertexFunction = vertFunc
    pipelineDescriptor.fragmentFunction = fragFunc
    pipelineDescriptor.vertexDescriptor = vertexDescriptor

I created the buffers and set them properly in the rendering loop:

    rendererCmdEnc.setRenderPipelineState(brushRenderPipelineState!)
    rendererCmdEnc.setVertexBuffer(vertBuff!, offset: 0, index: 0)
    // Node info
    rendererCmdEnc.setVertexBuffer(nodeBuff!, offset: 0, index: Int(BufferNode.rawValue))
    // Per-instance info
    rendererCmdEnc.setVertexBuffer(primBuff!, offset: 0, index: Int(BufferBrush.rawValue))
    rendererCmdEnc.drawIndexedPrimitives(type: .triangle,
                                         indexCount: primitiveIdx.count,
                                         indexType: .uint16,
                                         indexBuffer: indexBuff!,
                                         indexBufferOffset: 0,
                                         instanceCount: 6)

And I keep banging my head when this executes: I have a mismatch between my render pipeline state and the render pass descriptor. Either the color attachment or the sample count / rasterSampleCount is invalid:

    -[MTLDebugRenderCommandEncoder setRenderPipelineState:]:1604: failed assertion
    `Set Render Pipeline State Validation
    For depth attachment, the texture sample count (1) does not match the renderPipelineState rasterSampleCount (4).
    The color sample count (1) does not match the renderPipelineState's color sample count (4)
    The raster sample count (1) does not match the renderPipelineState's raster sample count (4)

I used the pass descriptor's color attachment to be as close as possible when describing the render pipeline state, changed the rasterSampleCount, and tried without anything specific for the color attachment, etc. Alas, either the API validation tells me I have the wrong color attachment info when I set the render pipeline state in the render loop, or, once I fix the color attachment info at render-pipeline-state creation, an invalid sample count. In a nutshell: is there any way to do this kind of geometry instancing in a single node using SceneKit? Thanks in advance for any support you would find interesting to provide!
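A minimal sketch of one way to keep the two in sync, deriving the pipeline formats and sample count from the renderer SceneKit hands to the node renderer delegate instead of hard-coding them (on older SDKs rasterSampleCount is spelled sampleCount; the mapping from antialiasingMode is an assumption):

    import SceneKit
    import Metal

    func makePipelineDescriptor(renderer: SCNSceneRenderer, view: SCNView) -> MTLRenderPipelineDescriptor {
        let descriptor = MTLRenderPipelineDescriptor()
        // Match SceneKit's attachment formats exactly
        descriptor.colorAttachments[0].pixelFormat = renderer.colorPixelFormat
        descriptor.depthAttachmentPixelFormat = renderer.depthPixelFormat
        descriptor.stencilAttachmentPixelFormat = renderer.stencilPixelFormat
        // Match SceneKit's multisampling; the assertion above is a 4-vs-1 mismatch
        switch view.antialiasingMode {
        case .multisampling4X: descriptor.rasterSampleCount = 4
        case .multisampling2X: descriptor.rasterSampleCount = 2
        default: descriptor.rasterSampleCount = 1
        }
        return descriptor
    }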
Posted
by Pom73.
Last updated
.
Post not yet marked as solved
0 Replies
686 Views
I am writing to report an issue that I have been encountering while developing an iOS game using SceneKit. In my application, I have programmatically added various animations to my scene, including position, blend-shape, and rotation animations. When I attempt to export the scene to a USD-type file, the export process fails. The error I receive is as follows:

    2 SceneKit 0x1b19d3528 USDKitConverter::processBlendShapeAnimation(USKNode*, CAAnimation*, std::__1::vector<double, std::__1::allocator<double>>&, std::__1::vector<std::__1::vector<float, std::__1::allocator<float>>, std::__1::allocator<std::__1::vector<float, std::__1::allocator<float>>>>&) + 484

Given the nature of the error, I suspect there may be an issue with SceneKit's handling of blend-shape animations when converting to USD. As a test, I removed the blend-shape animations, keeping only the position animations, and attempted to export again. The export process succeeded; however, when I tried to play the resulting USD file, none of the animations were present. Furthermore, I attempted a workaround by first exporting to an SCN file, which also succeeded. However, when I then tried to open this SCN file with the scene editor and export it to USD using the application's menu, Xcode crashed. I am reaching out to request assistance in resolving this issue, or, if this is indeed a bug, to bring it to your attention for further investigation. Please let me know if you require any additional information from my end.
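For context, a minimal sketch of the export path being described, with a progress handler attached so any conversion error surfaces (file names are placeholders):

    import SceneKit

    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent("scene.usdz")

    // SceneKit converts to USD when the destination ends in .usd / .usdz
    let ok = scene.write(to: destination, options: nil, delegate: nil) { progress, error, _ in
        // progress runs 0.0 to 1.0; a conversion failure is reported here
        print("export progress:", progress, error ?? "")
    }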
Posted Last updated
.
Post not yet marked as solved
0 Replies
515 Views
I am attempting to take a screenshot of an SCNScene using SCNView's snapshot() function. However, when I take a snapshot, the image contains only the background color and none of the nodes.

    let tempScene = try SCNScene(url: SceneUrl!)
    let tempView = SCNView()
    tempView.scene = tempScene
    tempView.frame = CGRect(x: 0, y: 0, width: 1000, height: 1200)

    // Setup the camera
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.name = "Camera"
    tempScene.rootNode.addChildNode(cameraNode)

    // Place the camera
    cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)

    // Aim the camera at the first leaf node
    let list = tempView.scene!.rootNode.childNodes(passingTest: { (node, stop) -> Bool in
        return node.childNodes.count == 0
    })
    cameraNode.constraints = [SCNLookAtConstraint(target: list.first)]

    // Add a light to the scene
    let lightNode1 = SCNNode()
    lightNode1.light = SCNLight()
    lightNode1.light!.type = .ambient
    lightNode1.position = SCNVector3(x: 0, y: 30, z: 30)
    tempScene.rootNode.addChildNode(lightNode1)

    tempView.backgroundColor = UIColor.darkGray
    SCNTransaction.flush()
    let image = tempView.snapshot()
Posted
by JustinZh.
Last updated
.
Post not yet marked as solved
0 Replies
792 Views
Hello. With Unity, you can import an animation and apply it to any character: for example, I can import a walking animation and apply it to all my characters. Is there an equivalent in SceneKit? I would like to apply animations programmatically, without having to import one for each character specifically. Thanks
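A minimal sketch of the usual SceneKit pattern, loading an animation once from a reference file and attaching it to any character node (the URL, identifier, and node names are placeholders; this kind of retargeting only works when the characters' skeletons share joint names):

    import SceneKit

    // Load the animation from a dedicated animation file
    let source = SCNSceneSource(url: walkAnimationURL, options: nil)!
    let caAnimation = source.entryWithIdentifier("walk", withClass: CAAnimation.self)!

    // Attach it to any character whose skeleton matches
    let player = SCNAnimationPlayer(animation: SCNAnimation(caAnimation: caAnimation))
    characterNode.addAnimationPlayer(player, forKey: "walk")
    player.play()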
Posted Last updated
.
Post not yet marked as solved
0 Replies
549 Views
I can't post a video, I don't think. But in the screenshots, I'm slightly making a circle with the phone, and the green lines will disappear and then reappear. Those green lines are drawn via .addChildNode(). We're using RoomPlan to detect cabinets (.storage), and then we outline the cabinets with SCNNodes. We have other methods for capturing cabinets that don't use RoomPlan, and the lines for those cabinets do not wink in and out. Perhaps there is a bug with visibility culling? We're pretty dang sure the nodes are not disappearing because of any .hide() call on our side. Perhaps object detection from RoomPlan running in the background is interfering?
Posted Last updated
.
Post not yet marked as solved
2 Replies
1.6k Views
I am attempting to build an AR app using Storyboard and SceneKit. When I ran an existing app I had already used, it ran, but nothing would happen. I thought this behavior was odd, so I decided to start from scratch with a new project. I started with the default AR project for Storyboard and SceneKit, and upon running, it immediately fails with a nil-unwrapping error on the scene. The scene file is obviously there. I am also given four build-time warnings:

    Could not find bundle inside /Library/Developer/CommandLineTools
    failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]
    Conversion failed, will simply copy input to output.
    Copy failed file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn -> file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn error:Error Domain=NSCocoaErrorDomain Code=516 "“ship.scn” couldn’t be copied to “art.scnassets” because an item with the same name already exists." UserInfo={NSSourceFilePathErrorKey=/Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn, NSUserStringVariant=(

I am currently unsure how to fix these errors. They appear to be in the command line tools, because after moving the device-support files back to a stable version of Xcode, the same issue is present. Is anyone else having these issues?
Posted Last updated
.