Hi!
I'm trying to integrate SharePlay with Personas into a visionOS app. I'm still in the early stages and I'm following some of the first-party documentation and videos from WWDC:
https://developer.apple.com/videos/play/wwdc2023/10087
https://developer.apple.com/documentation/groupactivities/adding-spatial-persona-support-to-an-activity#Update-the-immersion-level-automatically-for-a-Full-Space
I'm using:
Xcode 15.3 (15E204a)
visionOS 1.1 (21O209)
Both of the examples linked above reference systemCoordinator.groupImmersionStyle, but that property doesn't seem to exist.
Also, even if groupImmersionStyle did exist and I could detect the full, mixed, and progressive styles, it's not clear how I would open or transition to the corresponding spaces.
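For what it's worth, here's a minimal sketch of how I'd expect the transitions to work using the standard SwiftUI scene APIs rather than anything SharePlay-specific (this is my own guess, not from the linked samples; the space ID and view names are placeholders):

import SwiftUI

@main
struct MyApp: App {
    // The selection binding drives which immersion style the space uses.
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "SharedSpace") {
            MyImmersiveView()
        }
        // Declare every style the space supports; updating the binding
        // transitions between mixed, progressive, and full.
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        VStack {
            Button("Enter Space") {
                Task { await openImmersiveSpace(id: "SharedSpace") }
            }
            Button("Exit Space") {
                Task { await dismissImmersiveSpace() }
            }
        }
    }
}

Is the intended pattern that I update the immersionStyle selection binding whenever the group's style changes?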
Am I missing something?
Was this removed or did I configure something incorrectly?
Is there a full sample for SharePlay + visionOS somewhere that I might have missed?
Thanks for any help!
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit.
As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be (sketched in code below):
1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init. (I believe Entity.load is being deprecated in Swift 6; it raises a compiler warning.) See https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content of the RealityView.
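Concretely, here's a minimal version of what I'm attempting (the helper name is my own, and error handling is kept minimal):

import Foundation
import RealityKit

// Downloads a USDZ file and loads it as a RealityKit Entity.
func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    // Step 1: download to a temporary file managed by URLSession.
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Entity(contentsOf:) appears to infer the format from the file
    // extension, so rename the download to give it a .usdz extension.
    let localURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.moveItem(at: tempURL, to: localURL)

    // Step 2: load the entity from the local file.
    return try await Entity(contentsOf: localURL)
}

Step 3 is then just calling content.add(_:) with the result inside the RealityView make closure.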
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity from a mesh and a material. I'm not sure if file size has an effect.
Is there any official guidance or a code sample for this use case?
Context: https://developer.apple.com/forums/thread/751036
I found some sample code here that implements the process I described in my other post using ModelEntity: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView, from Reality Composer Pro, with the robot model baked into the file (not remote; the asset is in the project).
2. A Model3D view that pulls the robot model in from the web URL.
3. A RemoteObjectView (a RealityView) that downloads the model to temp storage, creates a ModelEntity, and adds it to the content of the RealityView.
Method 1 above is fine, but Methods 2 and 3 load the model with a pure black texture for some reason.
The ideal state is for Methods 2 and 3 to look like the Method 1 result (see screenshot).
Am I doing something wrong? For example, should I not use multiple RealityViews at once?
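One theory I want to rule out: the runtime-loaded entities may not be receiving any image-based lighting, which could explain the pure black look. I'm planning to try reusing the IBL setup from the immersive scene on the downloaded entity, something like this (untested; remoteEntity stands in for the downloaded ModelEntity):

// Apply the same image-based light the baked-in scene uses, so the
// runtime-loaded entity isn't rendered without any lighting.
if let resource = try? await EnvironmentResource(named: "ImageBasedLight") {
    let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
    remoteEntity.components.set(iblComponent)
    remoteEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: remoteEntity))
}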
Screenshot
Code
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }

        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)

        SkyboxView()

        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
I have a bug I'm trying to resolve for an app that's in review on the App Store.
The basic flow is this:
1. The user presses a button and enters a fully immersive space.
2. While in the fully immersive space, the user presses the Digital Crown button to exit fully immersive mode and return to the Shared Space. (Note: this is pressing the button, not rotating the Digital Crown to control the immersion level.)
At this point I need an event or an onChange handler (or similar) to know the user has left immersive mode, so I can reset a flag I've been setting manually to track whether or not the user is currently viewing an immersive space.
I have an onChange handler watching scenePhase and printing the old/new values to the console, but it never triggers.
This seems like it might be an edge case, but I'm curious whether there's another way to detect whether or not a user is in an immersive scene.
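One workaround I'm considering is watching the lifecycle of the view inside the ImmersiveSpace itself, rather than the scenePhase of the window scene, since each scene has its own phase. A minimal sketch (the binding name is my own, and I haven't confirmed this fires for the Digital Crown path):

import SwiftUI
import RealityKit

struct ImmersiveContent: View {
    // Tracks whether the user is currently in the immersive space.
    @Binding var isImmersed: Bool

    var body: some View {
        RealityView { content in
            // ... build the fully immersive scene ...
        }
        .onAppear { isImmersed = true }
        // Should fire when the space is dismissed, including when the
        // user presses the Digital Crown to return to the Shared Space.
        .onDisappear { isImmersed = false }
    }
}

Can anyone confirm whether onAppear/onDisappear is reliable for the Digital Crown exit, or whether there's a supported API for this?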
Hi!
I'm trying to set up a test account in the App Store Connect sandbox for testing payments, and I'm getting this error:
"Something Went Wrong. Try again later."
Steps to reproduce:
1. Log in to App Store Connect.
2. Go to Users and Access.
3. Go to Sandbox.
4. Go to Test Accounts. (Note: I see the same error here before even starting to add an account.)
5. Click the Add Test Account button.
6. Fill out the form.
7. Click the Create button.
Result:
I receive the "Something Went Wrong. Try again later." error, and no test account is created.
Expected result:
A test account is created that I can use to test payment flows in the sandbox before submitting the app for review.
Any help here would be awesome so we can test before we submit this app! 🙏