We are trying to save a scene to .usdz using the scene?.write method, which worked as expected until iOS 17.
On iOS 17 we get the error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument", which seems to be a SceneKit issue; attaching a stack-trace screenshot for reference.
We are using the updated URL API: in scene?.write(to: url, delegate: nil), the url is generated with the .appending(path:) method.
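For reference, here is a minimal sketch of the call pattern described above. The documents directory and the "model.usdz" name are hypothetical stand-ins, and making the "usdz" extension explicit via appendingPathExtension is only a workaround worth trying, not a confirmed fix:

import SceneKit

let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
// How the destination is built today, with the iOS 16+ API:
let url = documentsURL.appending(path: "model.usdz")
// A variant worth trying: attach the extension explicitly.
let alternateURL = documentsURL.appending(path: "model").appendingPathExtension("usdz")
// The export call that crashes on iOS 17:
let success = scene?.write(to: url, options: nil, delegate: nil, progressHandler: nil)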
Hi, my app displays a video as a texture on a SceneKit SCNNode. I've just found that some videos look different in iOS 16.4 than in previous versions: they look paler than they should. I looked up some documents and set SCNDisableLinearSpaceRendering to true in the Info.plist, and those videos then looked exactly as they should, but the other videos, which already looked fine, now turned out different. In any case it seems related to linear versus gamma color spaces, per this answer (https://developer.apple.com/forums/thread/710643).
The problematic videos clearly have a different color-space setting of some kind. I am not really an expert in this field; where should I start digging?
Or how can I just make iOS 16.4 behave the same as previous versions? It worked well for all the videos then. What actually changed?
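If it helps to narrow down which color-space attribute differs, here is a minimal sketch (videoURL is a hypothetical stand-in) that dumps each video track's color metadata, so the problematic files can be compared against the ones that render correctly:

import AVFoundation

let asset = AVURLAsset(url: videoURL)
if let track = asset.tracks(withMediaType: .video).first {
    for case let desc as CMFormatDescription in track.formatDescriptions {
        // These extensions describe how the video's colors are encoded.
        let primaries = CMFormatDescriptionGetExtension(desc, extensionKey: kCMFormatDescriptionExtension_ColorPrimaries)
        let transfer = CMFormatDescriptionGetExtension(desc, extensionKey: kCMFormatDescriptionExtension_TransferFunction)
        let matrix = CMFormatDescriptionGetExtension(desc, extensionKey: kCMFormatDescriptionExtension_YCbCrMatrix)
        print("primaries:", primaries ?? "nil",
              "transfer:", transfer ?? "nil",
              "matrix:", matrix ?? "nil")
    }
}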
The Xcode template "iOS game swift SceneKit" runs with the error shown. It might be related to scntool. Any idea how to fix this?
SceneKit has started filling my console with this log message:
"Pass FloorPass is not linked to the rendering graph and will be ignored check it's input/output"
It feels like I'm the only one on the planet using SceneKit, but if anyone can guess what is happening, or the reason for this, I'd be thankful.
I need to create a 2D floor plan, with the dimensions labeled, from the 3D result generated by RoomPlan. Is there a reasonably easy-to-understand way to create it, or will it require elaborate manual coding?
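Not an authoritative answer, but a possible starting point: each wall in a CapturedRoom carries a transform and dimensions, so projecting the two ends of each wall onto the XZ (floor) plane yields 2D segments that can be drawn and labeled. A rough sketch under that assumption:

import RoomPlan
import simd
import CoreGraphics

// Turns each captured wall into a 2D line segment on the floor plane.
// The wall box is centered on its transform; offsetting by half the
// width along the wall's local x-axis gives the two endpoints.
func floorPlanSegments(from room: CapturedRoom) -> [(start: CGPoint, end: CGPoint)] {
    room.walls.map { wall in
        let halfWidth = wall.dimensions.x / 2
        let left = wall.transform * simd_float4(-halfWidth, 0, 0, 1)
        let right = wall.transform * simd_float4(halfWidth, 0, 0, 1)
        return (CGPoint(x: CGFloat(left.x), y: CGFloat(left.z)),
                CGPoint(x: CGFloat(right.x), y: CGFloat(right.z)))
    }
}

Each segment's length is the corresponding wall's dimensions.x, which covers the dimension labels; rendering the segments (Core Graphics, SpriteKit, or a SwiftUI Path) is then plain 2D drawing.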
Does anyone have a working example of how to play OGG files with Swift?
I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift and used it to parse an OGG file successfully. Then I was required to use Objective-C++ to fill the PCM buffer, because that part seems to be available only in C++; it hangs my app for a good 40 seconds to several minutes depending on the audio file, then plays for about 2 seconds, and then crashes.
I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac).
I also tried the Cricket Audio framework below.
https://github.com/sjmerel/ck
It has a Swift example and can play its proprietary soundbank format. It is also supposed to play OGG, but it just does nothing when asked to, as you can see in the posted issue:
https://github.com/sjmerel/ck/issues/3
Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++.
Anyway, any help or advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
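For what it's worth, a streaming approach should avoid the long pre-decode hang: pull small chunks out of libvorbisfile and hand each one to an AVAudioPlayerNode as soon as it is decoded. The sketch below is an assumption-heavy outline, not drop-in code: it presumes a bridging module (named CVorbisFile here purely for illustration) that exposes ov_fopen/ov_info/ov_read/ov_clear to Swift, and in a real app the engine and player must be retained beyond the function and the scheduling throttled:

import AVFoundation
import CVorbisFile // hypothetical module map over libvorbisfile

func playOgg(at path: String) throws {
    var vf = OggVorbis_File()
    guard ov_fopen(path, &vf) == 0, let info = ov_info(&vf, -1) else {
        throw NSError(domain: "ogg", code: -1)
    }
    defer { ov_clear(&vf) }
    let channels = AVAudioChannelCount(info.pointee.channels)
    let format = AVAudioFormat(standardFormatWithSampleRate: Double(info.pointee.rate),
                               channels: channels)!

    let engine = AVAudioEngine()     // retain these two as properties
    let player = AVAudioPlayerNode() // in real code, not as locals
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    try engine.start()
    player.play()

    var pcm = [Int16](repeating: 0, count: 4096 * Int(channels))
    var bitstream: Int32 = 0
    while true {
        // Decode one chunk of 16-bit little-endian signed PCM.
        let bytesRead = pcm.withUnsafeMutableBytes { raw in
            ov_read(&vf, raw.baseAddress!.assumingMemoryBound(to: CChar.self),
                    Int32(raw.count), 0, 2, 1, &bitstream)
        }
        if bytesRead <= 0 { break } // end of stream, or a decode error
        let frames = Int(bytesRead) / (2 * Int(channels))
        let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                      frameCapacity: AVAudioFrameCount(frames))!
        buffer.frameLength = AVAudioFrameCount(frames)
        // De-interleave and convert Int16 -> Float32 for the player node.
        for ch in 0..<Int(channels) {
            let dst = buffer.floatChannelData![ch]
            for f in 0..<frames {
                dst[f] = Float(pcm[f * Int(channels) + ch]) / 32768
            }
        }
        player.scheduleBuffer(buffer) // queues without blocking playback
    }
}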
Summary:
I am using the Vision framework, in conjunction with AVFoundation, to detect the facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the resulting observations, unproject each point into a SceneKit view (SCNView), and use those points as the vertices of a custom geometry that is textured with a material and drawn over each found face.
Effectively, I am working to recreate how ARFaceTrackingConfiguration functions. In general, this works as expected, but only when my device uses the front camera in landscape-right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face the way they do with the front camera in landscape right.
Problem:
When testing this code, the mesh appears properly (that is, affixed to a user's face), but again, only when using the front camera in landscape right. The code runs in all orientations (that is, it generates the face mesh for each found face), but the mesh is wildly misaligned in every other case.
My belief is that the issue stems from one of two places. It may be my conversion of the face's bounding box (using VNImageRectForNormalizedRect, which I compute from the width/height of my SCNView rather than of my pixel buffer, which is typically much larger), though every modification I have tried results in the same issue.
Otherwise, it could be an issue with my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that is needed here.
Sample of Vision Request Setup:
// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                          key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                          attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                    orientation: exifOrientation,
                                    options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]
    // Draw faces on the main thread
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}
Sample of SCNView Setup:
// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Setup scene
let scene = SCNScene()
scnView.scene = scene

// Setup camera
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)
scnView.scene?.rootNode.addChildNode(cameraNode)

// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = .ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)
Sample of "face processing"
func drawFaceGeometry(observations: [VNFaceObservation]) {
// An array of face nodes, one SCNNode for each detected face
var faceNode = [SCNNode]()
// The origin point
let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)
// Iterate through each found face
for observation in observations {
// Setup a SCNNode for the face
let face = SCNNode()
// Setup the found bounds
let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))
// Verify we have landmarks
if let landmarks = observation.landmarks {
// Landmarks are relative to and normalized within face bounds
let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
.scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)
// Add all points as vertices
var vertices = [SCNVector3]()
// Verify we have points
if let allPoints = landmarks.allPoints {
// Iterate through each point
for (index, point) in allPoints.normalizedPoints.enumerated() {
// Apply the transform to convert each point to the face's bounding box range
_ = index
let normalizedPoint = point.applying(affineTransform)
let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
let unprojected = sceneView.unprojectPoint(projected)
vertices.append(unprojected)
}
}
// Setup Indices
var indices = [UInt16]()
// Add indices
// ... Removed for brevity ...
// Setup texture coordinates
var coordinates = [CGPoint]()
// Add texture coordinates
// ... Removed for brevity ...
// Setup texture image
let imageWidth = 2048.0
let normalizedCoordinates = coordinates.map { coord -> CGPoint in
let x = coord.x / CGFloat(imageWidth)
let y = coord.y / CGFloat(imageWidth)
let textureCoord = CGPoint(x: x, y: y)
return textureCoord
}
// Setup sources
let sources = SCNGeometrySource(vertices: vertices)
let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)
// Setup elements
let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)
// Setup Geometry
let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
geometry.firstMaterial?.diffuse.contents = textureImage
// Setup node
let customFace = SCNNode(geometry: geometry)
sceneView.scene?.rootNode.addChildNode(customFace)
// Append the face to the face nodes array
faceNode.append(face)
}
// Iterate the face nodes and append to the scene
for node in faceNode {
sceneView.scene?.rootNode.addChildNode(node)
}
}
Hello! I have been using this piece of code to create a custom plane geometry with the SCNGeometryPrimitiveType option .polygon:

extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        // For the .polygon primitive, the index data starts with the number
        // of vertices in the polygon, followed by the vertex indices.
        var indices: [Int32] = [Int32(vertices.count)]
        var index: Int32 = 0
        for _ in vertices {
            indices.append(index)
            index += 1
        }
        let vertexSource = SCNGeometrySource(vertices: vertices)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .polygon,
                                         primitiveCount: 1,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blue
        material.isDoubleSided = true
        geometry.firstMaterial = material
        return geometry
    }
}

This works by sending in vertex coordinates in an order that outlines the desired plane. As an example, I might make an array of vertices representing a rectangle as follows: [lowerLeft, upperLeft, upperRight, lowerRight].

This method seems to work well for simpler shapes, but I sometimes get an error, whose cause I haven't been able to find, when using more complex shapes or vertex coordinates that are randomly scattered in a plane (e.g. when the method receives an array whose vertex order does not outline a shape; in the rectangle case it could look like this: [lowerLeft, upperRight, lowerRight, upperLeft]). The error seems more likely to occur as the number of vertices increases.

I'm using this method to let the user of my app "paint" an outline of the desired plane, and since I can't control how the user does so, I want the method to handle those cases as well.

This is the error print I receive after calling this method:

-[MTLDebugDevice validateNewBufferArgs:options:]:467: failed assertion `Cannot create buffer of zero length.'
(lldb)

And this is what appears in the debug navigator:

libsystem_kernel.dylib`__pthread_kill:
0x219cad0c4 <+0>: mov x16, #0x148
0x219cad0c8 <+4>: svc #0x80
-> 0x219cad0cc <+8>: b.lo 0x219cad0e4 ; <+32>
0x219cad0d0 <+12>: stp x29, x30, [sp, #-0x10]!
0x219cad0d4 <+16>: mov x29, sp
0x219cad0d8 <+20>: bl 0x219ca25d4 ; cerror_nocancel
0x219cad0dc <+24>: mov sp, x29
0x219cad0e0 <+28>: ldp x29, x30, [sp], #0x10
0x219cad0e4 <+32>: ret

where line 4 has the error: com.apple.scenekit.scnview-renderer (16): signal SIGABRT

Any help or explanation for this error would be greatly appreciated!
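One hedged guess, given that the assertion complains about a zero-length buffer: guard the method against empty or degenerate input, and give arbitrarily ordered points a consistent outline by sorting them by angle around their centroid before building the polygon. A sketch, assuming (as the painting use case suggests) that the vertices lie roughly in a plane parallel to XY and that SCNVector3 components are Float, as on iOS:

import SceneKit

extension SCNGeometry {
    static func orderedPolygonPlane(vertices: [SCNVector3]) -> SCNGeometry? {
        // Fewer than 3 points cannot form a polygon; bailing out here also
        // avoids handing Metal a zero-length vertex buffer.
        guard vertices.count >= 3 else { return nil }
        let cx = vertices.map { $0.x }.reduce(0, +) / Float(vertices.count)
        let cy = vertices.map { $0.y }.reduce(0, +) / Float(vertices.count)
        // Order the points by angle around the centroid so the array
        // outlines the shape consistently, however the user painted it.
        let sorted = vertices.sorted {
            atan2($0.y - cy, $0.x - cx) < atan2($1.y - cy, $1.x - cx)
        }
        return polygonPlane(vertices: sorted)
    }
}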