Posts

Post marked as solved
1 Replies
216 Views
Please see also the video demo of the problem I'm encountering: https://youtu.be/V0ZkF-tVgKE

I've noticed that the custom Systems I've been creating for my RealityKit/visionOS app do not get updated every frame as the documentation (and common sense) would suggest. Instead, they appear to tick for a time after each UI interaction and then "stall". The systems will be ticked again after some interaction with the UI, or sometimes with a large enough movement of the user. My understanding was that these Systems should not be tied to UI by default, so I'm a bit lost as to why this is happening.

I've reproduced this by starting from a template project and adding a very simple couple of systems. Here is the main System, which simply rotates the pair of spheres:

```swift
import RealityKit
import RealityKitContent
import SwiftUI

public struct RotationSystem: System {
    static let query = EntityQuery(where: .has(RealityKitContent.WobblyThingComponent.self))

    public init(scene: RealityKit.Scene) { }

    public func update(context: SceneUpdateContext) {
        print("system update, deltaTime: \(context.deltaTime)")
        let entities = context.scene.performQuery(Self.query).map({ $0 })
        for entity in entities {
            let newRotation = simd_quatf(angle: Float(context.deltaTime * 0.5), axis: [0, 1, 0]) * entity.transform.rotation
            entity.transform.rotation = newRotation
        }
    }
}
```

The component (WobblyThingComponent) is attached to a parent of the two spheres in Reality Composer Pro, and both the system and the component are registered on app start in the usual way. This system runs smoothly in the simulator, but not in the Xcode preview and not on the Vision Pro itself, which is kinda the whole point. Here is a video of the actual behaviour on the Vision Pro: https://youtu.be/V0ZkF-tVgKE

The log during this test confirms that the system is not being ticked often. You can see the very large deltaTime values, representing those long stalled moments:

```
system update, deltaTime: 0.2055550068616867
system update, deltaTime: 0.4999987483024597
```

I have not seen this problem when running the Diorama sample project, yet when comparing it side by side with my test projects I cannot for the life of me identify a difference that could account for this. If anyone could tell me where I'm going wrong it would be greatly appreciated, as I've been banging my head against this one for days.

Xcode: Version 15.3 (15E204a)
visionOS: 1.1 and 1.1.1
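For reference, "registered in the usual way" means roughly the following, done once at app launch. This is a sketch rather than my exact code; the app and view type names here are placeholders, but the registerComponent()/registerSystem() calls are what I mean:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

@main
struct WobblyTestApp: App {   // placeholder name, not the actual project name
    init() {
        // Register the custom component and system once at launch.
        RealityKitContent.WobblyThingComponent.registerComponent()
        RotationSystem.registerSystem()
    }

    var body: some Scene {
        WindowGroup {
            ContentView()     // the template's content view
        }
    }
}
```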
Post not yet marked as solved
0 Replies
214 Views
I'm trying to get started working with volumes for the Vision Pro, making use of the tutorials and provided assets. But in the simulator, on device, and in the Xcode preview the volumes are always strangely huge, roughly double what is required in each dimension. Even the initial project template is like this, looking quite different from what is shown in the tutorial videos. There is also a full panel backing the volume, where the tutorial suggests the background should appear just behind the buttons. Aside from changing the sphere to a cube and adding .previewLayout(.sizeThatFits) as per the tutorial, the project is the unmodified template. Did I miss something crucial here? I want/expect the volume to be roughly the size of the bounding box of the cube, plus a little for the button. (Usually a Unity dev, newb to Swift.)
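For context, the scene declaration is essentially the stock volumetric template, along these lines (a sketch from memory with placeholder type names, not the exact generated code). My understanding is that the defaultSize modifier is what should govern the volume's dimensions:

```swift
import SwiftUI

@main
struct VolumeTestApp: App {   // placeholder name
    var body: some Scene {
        WindowGroup {
            ContentView()     // cube plus the toggle button, per the tutorial
        }
        .windowStyle(.volumetric)
        // The size I would expect the volume to occupy, in real-world units.
        .defaultSize(width: 0.4, height: 0.4, depth: 0.4, in: .meters)
    }
}
```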
Post not yet marked as solved
2 Replies
1.4k Views
Trying the "People Occlusion" sample for ARKit 3 and I have to say that it's very impressive! And it's delightful to see that it can also occlude up-close hands. HOWEVER, I have to say that the depth assigned to hands is way out of whack.See attached image: The vase is supposed to be sitting on the table. The occlusion applied to the hand behaves as if the hand were gigantic (able to reach as far as the table and be larger than the vase itself), when in reality the hand is obviously much nearer and should be fully occluding still.https://ibb.co/kJYvNTQOn the plus side, the SHAPE of the occlusion for the hand seems great.
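For anyone wanting to reproduce the behaviour, the occlusion in question comes from the depth-aware person segmentation frame semantic. A minimal sketch of how I understand it is enabled (my own reduction, not the sample's exact code; arView is an assumed, already-configured ARView):

```swift
import ARKit
import RealityKit

// Enable people occlusion with depth on a world-tracking session.
func enablePeopleOcclusion(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    arView.session.run(configuration)
}
```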
Post not yet marked as solved
5 Replies
2.8k Views
I've been checking out the ARKit 3 beta examples and am specifically interested in tracking a person's body while ALSO using person segmentation. (Use case: a person moves among virtual objects, and those objects react to approximated collisions with the user's body. Body detection would generate the approximate collision shape for the user's body, and person segmentation would enforce the sense of the user moving between objects.)

However, my attempts so far to create an AR configuration which allows this have not been successful. Here is just creating a frame semantics option set with the desired features:

```swift
var semantics = ARConfiguration.FrameSemantics()
semantics.insert(.bodyDetection)
semantics.insert(.personSegmentationWithDepth)

guard ARWorldTrackingConfiguration.supportsFrameSemantics(semantics) else {
    fatalError("Desired semantics not available.")
}
```

This example trips the fatalError above, but if either body detection OR person segmentation is left out, it does not. Similarly, this returns false:

```swift
ARBodyTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
```

So:
1. Am I just doing this wrong? (Very possible, as I'm new to Swift and native iOS in general; I usually use Unity and C#.)
2. Or is this simply not supported? And if so, is this likely to change by the time of release?
3. Is there a document somewhere which indicates clearly which ARKit features can be used in conjunction with one another? (I could not find one.)
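In case it helps anyone reproduce, this is the kind of quick check I've been running to see which combinations are rejected on each configuration type (a sketch written against the ARKit 3 beta as I understand it):

```swift
import ARKit

// Print support for each frame-semantics combination on both configurations.
let combos: [(String, ARConfiguration.FrameSemantics)] = [
    ("bodyDetection only", [.bodyDetection]),
    ("personSegmentationWithDepth only", [.personSegmentationWithDepth]),
    ("bodyDetection + personSegmentationWithDepth", [.bodyDetection, .personSegmentationWithDepth]),
]

for (name, semantics) in combos {
    let world = ARWorldTrackingConfiguration.supportsFrameSemantics(semantics)
    let body = ARBodyTrackingConfiguration.supportsFrameSemantics(semantics)
    print("\(name): world-tracking=\(world), body-tracking=\(body)")
}
```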