I have a USDZ model with animation that I can preview in Reality Composer Pro (RCP). When I create a new base/example visionOS project in Xcode, it's set up to load the "Scene" and "Immersive" RealityKit content, but my models don't play their animation.
How do I fire off the contained animations in those files?
Is there a code snippet that someone can share that takes into account how the example project is set up?
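Here's roughly the shape of what I'm after, as a rough sketch against the default template (the "Immersive" scene name comes from the template; the view name and recursive helper are just illustrative):

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let immersive = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersive)
                // Animations live on the entities they were authored on,
                // not necessarily on the root, so walk the hierarchy.
                playAllAnimations(in: immersive)
            }
        }
    }

    private func playAllAnimations(in entity: Entity) {
        for animation in entity.availableAnimations {
            // .repeat() loops the clip indefinitely; drop it for one-shot playback.
            entity.playAnimation(animation.repeat(), transitionDuration: 0.3)
        }
        for child in entity.children {
            playAllAnimations(in: child)
        }
    }
}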
Does the PBR material setup in RCP support packed RGB channels from a single image?
Does the material graph support splitting the image node's RGB output into separate channels for a custom material setup?
Can anyone point me to an approach for handling drag, rotation, and scale on a 'targetedToAnyEntity' asset coming from a realityKitContentBundle?
I've looked through all of the code examples and have cobbled together something using PlacementGesturesModifier and DragRotationModifier from the Hello World code example, but I can't figure out how to make it work on individual assets; it only works on the root.
When I do something simple like this (outside the modifiers I mentioned above), I can make individual drag work, but I can't figure out how to apply the same approach to rotation and scale:
.gesture(DragGesture()
    .targetedToAnyEntity()
    .onChanged { value in
        // Reposition the hit entity in its parent's coordinate space.
        value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
    })
Are there any examples of a solution for drag, rotation and scale on an individual basis in the code examples? Any advice or hints would be appreciated. :)
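The closest I've gotten is caching the start transform in @State and composing the gesture value onto it, with RotateGesture3D and MagnifyGesture targeted the same way as the drag. A rough, untested sketch (depending on scene orientation the rotation axes may need flipping, as Apple's gesture sample does):

import SwiftUI
import RealityKit
import Spatial

struct IndividualTransformGestures: ViewModifier {
    // Baselines captured when a rotate/scale gesture begins, so each update
    // composes against the entity's transform at gesture start.
    @State private var startOrientation: Rotation3D? = nil
    @State private var startScale: SIMD3<Float>? = nil

    func body(content: Content) -> some View {
        content
            // Drag: reposition only the entity the gesture actually hit.
            .gesture(DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                })
            // Rotate: apply the gesture's rotation on top of the start orientation.
            .simultaneousGesture(RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    let start = startOrientation ?? Rotation3D(value.entity.orientation(relativeTo: nil))
                    startOrientation = start
                    value.entity.setOrientation(.init(start.rotated(by: value.rotation)), relativeTo: nil)
                }
                .onEnded { _ in startOrientation = nil })
            // Scale: multiply the scale captured at gesture start.
            .simultaneousGesture(MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let start = startScale ?? value.entity.scale
                    startScale = start
                    value.entity.scale = start * Float(value.magnification)
                }
                .onEnded { _ in startScale = nil })
    }
}

Attached to the RealityView with .modifier(IndividualTransformGestures()), value.entity resolves to whichever child was hit, so each asset should move independently as long as it has its own collision and input-target components.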
I'm trying to better understand how loading entities works. If I do this:
RealityView { content in
    // Add the initial RealityKit content.
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
It returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity, it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and still have individual control?
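For what it's worth, this is the direction I'm imagining: load the root as above, then look up the children by the names they have in RCP. A rough sketch (the sphere names come from my scene; HoverEffectComponent is just an example of per-entity configuration):

import SwiftUI
import RealityKit
import RealityKitContent

struct SpheresView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
                content.add(scene)
                // findEntity(named:) searches the whole hierarchy, so each
                // sphere can be configured individually even though only the
                // root was loaded.
                if let sphere1 = scene.findEntity(named: "sphere_01"),
                   let sphere2 = scene.findEntity(named: "sphere_02") {
                    sphere1.components.set(HoverEffectComponent())
                    sphere2.components.set(HoverEffectComponent())
                }
            }
        }
        .gesture(DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                // value.entity is the specific sphere that was hit, not the
                // root, because each sphere carries its own collision and
                // input-target components.
                value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
            })
    }
}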
In the code example provided, there is a Bool in the Video object that marks a video as 3D:
/// A Boolean value that indicates whether the video contains 3D content.
let is3D: Bool
I have a hosted spatial video that I know plays correctly in the AVP player. When I point the Videos.json file to this URL and set is3D=true, my 3D video doesn't show up and I get the following error:
iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]
Can anyone tell me what might be going on? The error says my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos using:
https://drive.google.com/uc?export=download&id= <file ID>
The above works great for my images and 2D videos. Is there something I need to do specifically when delivering MV-HEVC videos?
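For reference, this is the quick check I've been using to see whether a host honors HTTP Range requests, which AVFoundation relies on when streaming. It's a sketch assuming an async context, with a placeholder URL; a server that ignores the header answers 200 with the full payload, which matches the "should be length 2 is length 2434" mismatch above:

import Foundation

func checkRangeSupport(of url: URL) async throws {
    var request = URLRequest(url: url)
    // Ask for only the first two bytes; a range-aware server replies
    // 206 Partial Content with exactly 2 bytes.
    request.setValue("bytes=0-1", forHTTPHeaderField: "Range")
    let (data, response) = try await URLSession.shared.data(for: request)
    if let http = response as? HTTPURLResponse {
        print("status:", http.statusCode, "bytes returned:", data.count)
    }
}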
In the example code provided in the tutorial, an error is thrown when attempting to store actions in an animation library on the root, specifically when trying to add the actions. Is there another way to do this? The example code provided does not compile.
From my early testing, it seems like object tracking works best for static objects. For example, if I'm holding something in my hand, the object tracker is slow to update.
Is there anything that can be modified to decrease the tracking latency?
I noticed that the Enterprise APIs have some override features. Is reducing tracking latency something that can only be done with the Enterprise APIs?
I have a simple example of a motion-matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input.
I'm wondering if this error I'm seeing in the logs is what's causing it? And if so how do I fix it?
error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed
I'm using Xcode 16 beta 6
Unity 6000.0.17f1
visionOS 2.0 beta 9
I'm setting:
.immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))
in UnityVisionOSSettings.swift before building in Xcode.
I'm having an issue where this only works on occasion; it seems random. I'll either get no immersion level available (the crown dial is greyed out and no changes can be made), or it will only allow 0.5 to 1.0 immersion (the dial will go below 0.5 but springs back to 0.5 when released).
With no changes to my setup or to how I'm setting immersionStyle, I've been able to get this to work as I would expect on some runs, so I'm wondering if there's a bug causing it to fail on others. I've tested a simple native SDK progressive immersion style with the same custom settings and it works every time, so it seems to be something related to Unity.
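For comparison, this is the minimal native setup I tested against (a sketch of my test harness, not the generated Unity file), which honors the 0.1...1.0 range every time:

import SwiftUI
import RealityKit

@main
struct ProgressiveTestApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Test") {
            // Empty content; only the immersion behavior is being tested.
            RealityView { _ in }
        }
        .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)),
                        in: .progressive(0.1...1.0, initialAmount: 0.1))
    }
}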
Here is the entire UnityVisionOSSettings file that, as far as I can tell, controls this:
// GENERATED BY BUILD
import Foundation
import SwiftUI
import PolySpatialRealityKit
import UnityFramework

let unityStartInBatchMode = false

extension UnityPolySpatialApp {
    func initialWindowName() -> String { return "Unbounded" }

    func getAllAvailableWindows() -> [String] { return ["Bounded-0.500x0.500x0.500", "Unbounded"] }

    func getAvailableWindowsForMatch() -> [simd_float3] { return [] }

    func displayProviderParameters() -> DisplayProviderParameters {
        return .init(
            framebufferWidth: 1830,
            framebufferHeight: 1600,
            leftEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                               rotation: .init(x: 0, y: 0, z: 0, w: 1)),
            rightEyePose: .init(position: .init(x: 0, y: 0, z: 0),
                                rotation: .init(x: 0, y: 0, z: 0, w: 1)),
            leftProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1),
            rightProjectionHalfAngles: .init(left: -1, right: 1, top: 1, bottom: -1)
        )
    }

    @SceneBuilder
    var mainScenePart0: some Scene {
        ImmersiveSpace(id: "Unbounded", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(1.000, 1.000, 1.000), maxSize: .init(1.000, 1.000, 1.000))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Unbounded", .init(1.000, 1.000, 1.000)))
                .onImmersionChange() { oldContext, newContext in
                    PolySpatialWindowManagerAccess.onImmersionChange(oldContext.amount, newContext.amount)
                }
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .upperLimbVisibility(.automatic)
            .immersionStyle(selection: .constant(.progressive(0.1...1.0, initialAmount: 0.1)), in: .progressive(0.1...1.0, initialAmount: 0.1))

        WindowGroup(id: "Bounded-0.500x0.500x0.500", for: UUID.self) { uuid in
            PolySpatialContentViewWrapper(minSize: .init(0.100, 0.100, 0.100), maxSize: .init(0.500, 0.500, 0.500))
                .environment(\.pslWindow, PolySpatialWindow(uuid.wrappedValue, "Bounded-0.500x0.500x0.500", .init(0.500, 0.500, 0.500)))
            KeyboardTextField().frame(width: 0, height: 0).modifier(LifeCycleHandlerModifier())
        } defaultValue: { UUID() } .windowStyle(.volumetric).defaultSize(width: 0.500, height: 0.500, depth: 0.500, in: .meters).windowResizability(.contentSize) .upperLimbVisibility(.automatic) .volumeWorldAlignment(.gravityAligned)
    }

    @SceneBuilder
    var mainScene: some Scene {
        mainScenePart0
    }

    struct LifeCycleHandlerModifier: ViewModifier {
        func body(content: Content) -> some View {
            content
                .onOpenURL(perform: { url in
                    UnityLibrary.instance?.setAbsoluteUrl(url.absoluteString)
                })
        }
    }
}