I managed to solve this issue. In my case it was caused by a race condition: the app was reading the keychain too early during launch. I was checking for an auth token in the keychain within SceneDelegate.scene(_:willConnectTo:options:).
Instead, I now wait for the keychain's protected data to become available, like so:
func refreshAuthFromKeychain(_ callback: @escaping (Bool) -> Void) {
    /// Avoid a race condition where the app tries to access keychain data before the device has decrypted it
    guard UIApplication.shared.isProtectedDataAvailable else {
        NotificationCenter
            .default
            .publisher(for: UIApplication.protectedDataDidBecomeAvailableNotification)
            .first()
            .sink { _ in
                self.refreshAuthFromKeychain(callback)
            }
            .store(in: &cancellables)
        return
    }

    // ....
    /// Then load from the keychain
}
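For context, a call site in the scene delegate might look roughly like the sketch below. The class shape is standard UIKit boilerplate; showInitialScreen is a hypothetical routing method, not something from my actual app:

```swift
import UIKit
import Combine

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?
    private var cancellables = Set<AnyCancellable>()

    func scene(_ scene: UIScene,
               willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        // Defer any keychain read until protected data is available,
        // instead of reading the token synchronously here
        refreshAuthFromKeychain { isAuthenticated in
            // showInitialScreen is a placeholder for your own routing logic
            self.showInitialScreen(loggedIn: isAuthenticated)
        }
    }
}
```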
I think I am also experiencing this issue, although it is difficult to verify. I store an auth token in the keychain, but the app has seemingly deleted it after a few hours. We are not seeing this on iOS 14.
Did you end up finding a way to do this, @pprovins? I'm also trying to access per-node uniforms in a DRAW_SCENE pass with custom shaders.
It's simple enough to bind a global uniform with symbols, but I can't find any details on how to do it per node.
Is setObject(_:forKeyedSubscript:) safe to call every frame? The documentation appears to suggest otherwise:
Use this method when you need to set a value infrequently or only once. To update a shader value every time SceneKit renders a frame, use the handleBinding(ofSymbol:using:) method instead.
I am using ARKit + SceneKit and am trying to pass the smoothedSceneDepth from the LiDAR scanner into a custom SCNTechnique. I convert the CVPixelBuffer into an MTLTexture, then pass that into my technique like so:
let mtlTex = PixelBufferToMTLTexture(pixelBuffer: pixelBuffer)
arscnView.technique?.setObject(SCNMaterialProperty(contents: mtlTex), forKeyedSubscript: "camera_depth" as NSCopying)
This appears to work correctly, however I have had the occasional crash when calling setObject every frame.
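The PixelBufferToMTLTexture helper above isn't shown; a typical implementation of that conversion uses a CVMetalTextureCache to wrap the buffer without copying. This is a common approach, not necessarily what the original poster does:

```swift
import CoreVideo
import Metal

final class DepthTextureConverter {
    private let textureCache: CVMetalTextureCache

    init?(device: MTLDevice) {
        var cache: CVMetalTextureCache?
        guard CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &cache) == kCVReturnSuccess,
              let cache = cache else { return nil }
        self.textureCache = cache
    }

    /// Wraps the depth pixel buffer in a Metal texture without copying.
    func texture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        // smoothedSceneDepth.depthMap is kCVPixelFormatType_DepthFloat32,
        // which corresponds to the .r32Float Metal pixel format
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, textureCache, pixelBuffer, nil,
            .r32Float, width, height, 0, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
        // Note: the returned MTLTexture is backed by the pixel buffer; keep the
        // CVMetalTexture alive until the GPU has finished reading from it
        return CVMetalTextureGetTexture(cvTexture)
    }
}
```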
Have there been any updates on this, or has anyone managed to find a workaround? It still appears to affect our ARKit app.
Nice investigation @lenk.
I copied what you did and also managed to track down the raw shader string that SceneKit is injecting. (I used a symbolic breakpoint on -[MTLDebugDevice newLibraryWithSource:options:error:].)
From what I can see, the same master shader is used every time (labeled Common Profile v2), but the preprocessorMacros options (https://developer.apple.com/documentation/metal/mtlcompileoptions/1516172-preprocessormacros?language=objc) passed in differ per node. The master shader is stripped down based on the preprocessor values and then, I guess, cached against those options?
I see someone has uploaded the Common Profile shader to Github - https://gist.github.com/warrenm/794e459e429daa8c75b5f17c000600cf - you can see some of the different preprocessor macros scattered throughout.
Thanks @jmousse. I really hope this gets addressed quickly. The issue is quite obviously present even in the Apple supplied sample project: https://developer.apple.com/documentation/arkit/world_tracking/placing_objects_and_handling_3d_interaction.
On my iPhone X the scene freezes each time an object is added.
Have there been any updates on this? We've noticed a big spike in the time spent compiling shaders for dynamic objects we add to our ARKit scene. The problem persists even when the logs are silenced with Courance's method.
Until this is fixed by Apple, does anyone know if it is possible to shift the compilation to a background thread? Currently the main ARKit thread is freezing when we import our models which causes the app to lose tracking and give an overall bad experience. I don't mind the extra time spent compiling, but the way it causes so much stuttering is really frustrating.
Alternatively, I've done a small test using a custom SCNProgram referencing some basic vertex and fragment shaders written in a Metal file. I'm guessing the shaders are compiled at build time when you do this, because it completely removes the issue. We need the SceneKit PBR shaders though, which would be a mammoth task to recreate.
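For reference, the SCNProgram test described above can be set up roughly like this. The Metal function names are placeholders for whatever is defined in your .metal file, which Xcode compiles into the default library at build time:

```swift
import SceneKit

// Replace SceneKit's generated shaders with precompiled Metal functions.
// "myVertex" and "myFragment" are placeholder names for functions in a
// .metal file in the app target.
let program = SCNProgram()
program.vertexFunctionName = "myVertex"
program.fragmentFunctionName = "myFragment"

let material = SCNMaterial()
material.program = program

let node = SCNNode(geometry: SCNSphere(radius: 0.1))
node.geometry?.firstMaterial = material
```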
Hi there, I also have a similar problem. Did you manage to find a solution?
I am trying to create a photo mode for my AR app that is able to capture images at a higher resolution than is natively shown in ARSCNView. The snapshot method available on SCNView works, but it does not allow me to render at a higher resolution.
My solution was to create a separate SCNRenderer, point it at the same scene as my AR scene, and render in the background using snapshot(atTime:with:antialiasingMode:).
This works great, except in landscape mode. The 3D content of the output rendered image is correct, but the background camera feed doesn't rotate as expected.
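For anyone trying the same approach, the offscreen renderer setup looks roughly like this. The sceneView variable stands for the existing ARSCNView, and the output size and antialiasing mode are illustrative choices, not values from my actual code:

```swift
import SceneKit
import ARKit

// Offscreen renderer sharing the AR scene
let device = MTLCreateSystemDefaultDevice()!
let renderer = SCNRenderer(device: device, options: nil)
renderer.scene = sceneView.scene              // same scene as the ARSCNView
renderer.pointOfView = sceneView.pointOfView  // match the AR camera

// Render at a higher resolution than the on-screen view
let highResSize = CGSize(width: 3840, height: 2160)
let image = renderer.snapshot(atTime: CACurrentMediaTime(),
                              with: highResSize,
                              antialiasingMode: .multisampling4X)
```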
I have tried a few solutions:
/* Applying a rotation transform to the background seems to have no effect */
renderer.scene?.background.contentsTransform
/* Drawing the SCNView directly - can't scale this up like I want to achieve */
sceneView.drawHierarchy(in: , afterScreenUpdates: )
/* Capturing the current frame's camera pixel buffer and rendering manually. Couldn't get this to work */
sceneView.session.currentFrame?.capturedImage
Any help would be greatly appreciated.