How do I add a light source to a View in a visionOS app?
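The closest I've gotten so far is image-based lighting in RealityKit, along these lines (a minimal sketch; "Model" and the environment image "Sunlight" are placeholder resource names from my own bundle), but I don't know if this is the intended way to light a View:
import SwiftUI
import RealityKit

struct LitModelView: View {
    var body: some View {
        RealityView { content in
            // "Model" and "Sunlight" are placeholder resources in the app bundle.
            guard let model = try? await Entity(named: "Model"),
                  let environment = try? await EnvironmentResource(named: "Sunlight")
            else { return }

            // An entity that casts image-based light into the scene.
            let lightEntity = Entity()
            lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))

            // The model opts in to receiving light from that entity.
            model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))

            content.add(lightEntity)
            content.add(model)
        }
    }
}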
How do I play MP4 audio as Spatial Audio in visionOS, through SwiftUI or RealityKit?
Note: I can only test the app in the Simulator, so to verify that Spatial Audio plays from the correct position in space, please also tell me how to visualize the location of the Spatial Audio source, and how to remove that debug view after testing. Thank you!
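For reference, this is the sketch I'm working from (assumptions: an MP4 file named "Sound.mp4" in the app bundle, and a debug sphere as my own improvised way of marking the source position):
import SwiftUI
import RealityKit

struct SpatialAudioView: View {
    var body: some View {
        RealityView { content in
            // "Sound.mp4" is a placeholder audio file in the app bundle.
            guard let resource = try? await AudioFileResource(named: "Sound.mp4") else { return }

            let audioEntity = Entity()
            audioEntity.position = [0, 1.5, -2] // about 2 m in front of the user
            audioEntity.spatialAudio = SpatialAudioComponent()
            audioEntity.playAudio(resource)

            // Debug marker: a small sphere at the source position, so the
            // Simulator shows where the audio comes from. Remove this
            // ModelComponent after testing.
            audioEntity.components.set(ModelComponent(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            ))

            content.add(audioEntity)
        }
    }
}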
Question 1:
The manipulationState variable is used many times in the 3D-gestures part of the WWDC23 session 10113 video. What type am I expected to declare this variable with?
I tried to use the following code:
@State private var manipulationState: GestureState
Xcode reports the error "Reference to generic type 'GestureState' requires arguments in <...>" and suggests the fix-it "Insert '<Any>'", but I don't know what type argument to put inside the angle brackets.
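My current guess, which at least compiles, is to declare it with the @GestureState property wrapper and a custom value type; the ManipulationState struct below is my own invention, not the session's actual code:
import SwiftUI
import RealityKit
import Spatial

// My guess at the value type; the session's real definition may differ.
struct ManipulationState {
    var transform: AffineTransform3D = .identity
    var active: Bool = false
}

struct ManipulationView: View {
    // @GestureState (rather than @State) supplies the generic argument
    // implicitly: the wrapped value is a ManipulationState, and it resets
    // to its initial value whenever the gesture ends.
    @GestureState private var manipulationState = ManipulationState()

    var body: some View {
        Model3D(named: "Model") // "Model" is a placeholder asset name
            .gesture(
                DragGesture()
                    .updating($manipulationState) { _, state, _ in
                        state.active = true
                    }
            )
    }
}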
Question 2:
If you have the complete code (or something similar) for the 3D-gestures part of that session, please share it. Thank you.
Some features, such as hand tracking, cannot be tested in the visionOS Simulator, and I need to test them. Besides applying for a Developer Kit or attending a developer lab, how can I test these features?
I used Model3D to display a model:
Model3D(named: "Model", bundle: realityKitContentBundle) { phase in
    switch phase {
    case .empty:
        ProgressView()
    case .failure(let error):
        Text("Error \(error.localizedDescription)")
    case .success(let model):
        model.resizable()
    }
}
However, when I ran it, the model's width and height looked correct, but viewed from the side its depth was badly stretched. What should I do?
Note: For some reason, I can't use the frame modifier.
[Attached images: "width and length" (correct proportions) and "error depth" (the stretched depth).]
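What I'm effectively looking for is a fit modifier that constrains depth as well. I've read that newer visionOS SDKs add a scaledToFit3D() modifier; I can't verify it on my current beta, so treat this sketch as an assumption:
import SwiftUI
import RealityKit
import RealityKitContent

struct FittedModelView: View {
    var body: some View {
        Model3D(named: "Model", bundle: realityKitContentBundle) { phase in
            switch phase {
            case .empty:
                ProgressView()
            case .failure(let error):
                Text("Error \(error.localizedDescription)")
            case .success(let model):
                model
                    .resizable()
                    .scaledToFit3D() // assumption: preserves the depth ratio too
            }
        }
    }
}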
I have a USDZ model called 'GooseNModel' in my visionOS app project. I'm sure the model contains an animation, so I wrote the following code to display it with the animation playing:
import SwiftUI
import RealityKit

struct GooseView: View {
    var body: some View {
        RealityView { content in
            if let gooseNModel = try? await Entity(named: "GooseNModel"),
               let animation = gooseNModel.availableAnimations.first {
                gooseNModel.playAnimation(animation)
                content.add(gooseNModel)
            }
        }
    }
}
But when I ran it, the model appeared static, with no animation playing. How can I fix this?
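One variation I still plan to try is looping the clip explicitly with AnimationResource's repeat(); I don't know whether that changes anything, but here is the sketch for completeness:
import SwiftUI
import RealityKit

struct GooseLoopView: View {
    var body: some View {
        RealityView { content in
            if let gooseNModel = try? await Entity(named: "GooseNModel"),
               let animation = gooseNModel.availableAnimations.first {
                // repeat() loops the clip indefinitely instead of playing it once.
                gooseNModel.playAnimation(animation.repeat())
                content.add(gooseNModel)
            }
        }
    }
}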
I created a RealityKitContent package in the Packages folder of my visionOS app project. At first I added a USDA model directly to its .rkassets bundle, and Model3D(named: "ModelName", bundle: realityKitContentBundle) displayed the model normally. But after I created a folder inside .rkassets and moved the USDA model into it, the same Model3D(named: "ModelName", bundle: realityKitContentBundle) call no longer displays the model. What should I do?
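My working assumption, which I haven't confirmed, is that the folder inside .rkassets becomes a path prefix on the resource name, so the call would look like this ("MyFolder" is the hypothetical folder name):
import SwiftUI
import RealityKit
import RealityKitContent

struct FolderedModelView: View {
    var body: some View {
        // Assumption: a model at .rkassets/MyFolder/ModelName.usda is
        // addressed with its folder as a path prefix.
        Model3D(named: "MyFolder/ModelName", bundle: realityKitContentBundle)
    }
}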
If you know how to solve the problem above, please let me know, and please also take a look at the next question. If you know the answer, please tell me as well. Thank you!
The USDA model from the previous question contains an animation, but when I display it with Model3D(named: "ModelName", bundle: realityKitContentBundle), the animation does not play by default and apparently needs additional code. Is there any documentation, video, or sample code on this?
In Reality Composer Pro, when I import a USDZ model and insert it into the scene, Reality Composer Pro removes the model's own material by default, which I don't want. How can I stop Reality Composer Pro from removing the model's own material?
I accidentally deleted Xcode 15.1 beta 3. Now I want to download it again, but Xcode 15.1 Release Candidate has replaced Xcode 15.1 beta 3 at https://developer.apple.com/download/applications/, and the Release Candidate does not include the visionOS SDK, so I really need Xcode 15.1 beta 3. Does anyone know a way to download it again?
I successfully set a picture as the background of an ImmersiveSpace in the .full style with the following code:
import SwiftUI
import RealityKit

struct MainBackground: View {
    var body: some View {
        RealityView { content in
            guard let resource = try? await TextureResource(named: "Image_Name") else {
                fatalError("Error.")
            }
            var material = UnlitMaterial()
            material.color = .init(texture: .init(resource))
            let entity = Entity()
            entity.components.set(ModelComponent(
                mesh: .generateSphere(radius: 1000),
                materials: [material]
            ))
            // Invert the sphere on X so the texture faces inward, toward the user.
            entity.scale *= .init(x: -1, y: 1, z: 1)
            content.add(entity)
        }
    }
}
However, when running it I found that the background is not fixed in place: when the user moves, it follows them, which feels unrealistic. How can I anchor the background where it first appears, so that moving around feels like walking in the real world instead of the background following the user?
In visionOS, many SwiftUI modifiers gain 3D counterparts of their iOS versions, such as .padding(_:) becoming .padding3D(_:). On iOS, .offset(x:y:) only takes X and Y, but on visionOS a view lives in a 3D scene, so I want an offset with a Z component. Since .offset can't take Z, I tried .offset3D:
import SwiftUI
// Some View
.offset3D(x: Number, y: Number, z: Number)
Xcode reports an error:
Error: No member 'offset3D'
So, does anyone know of an offset-like modifier that can take a Z value on visionOS?
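The closest thing I've found so far is offset(z:), which visionOS adds alongside the familiar 2D offset; a minimal sketch of how I understand it:
import SwiftUI

struct OffsetExample: View {
    var body: some View {
        Text("Hello")
            .offset(x: 20, y: 40) // 2D offset, same as on iOS
            .offset(z: 60)        // visionOS-only: moves the view along Z, toward the user
    }
}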
In visionOS, inside an ImmersiveSpace, how do I let users drag a specific view around?
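For entities (rather than SwiftUI views) the pattern I've been experimenting with is a targeted DragGesture, sketched below; "Model" is a placeholder asset, and I'm unsure whether the same approach applies to attachment views:
import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            if let model = try? await Entity(named: "Model") {
                // Input-target and collision components are required
                // for gestures to hit-test the entity.
                model.components.set(InputTargetComponent())
                model.generateCollisionShapes(recursive: true)
                content.add(model)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the gesture's 3D location into the
                    // dragged entity's parent coordinate space.
                    if let parent = value.entity.parent {
                        value.entity.position = value.convert(
                            value.location3D, from: .local, to: parent)
                    }
                }
        )
    }
}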
I have some questions about visionOS:
Does Apple expose an eye-tracking API to developers? I'd like to know how to achieve this: when the user's gaze rests on a specific View, a Boolean becomes true, and when the gaze leaves that View, it becomes false.
In an ImmersiveSpace, when immersionStyle is .full or .progressive, a black background appears by default. How can I replace this background with a panorama of my own?
In an ImmersiveSpace, how do I make a View always follow the user? (My rough attempt is sketched after this list.)
In visionOS, do we need to write additional code to fully implement Group Activities?
Is TipKit compatible with visionOS?
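For the follow-the-user question above, the only idea I've sketched so far is anchoring content to the head (my assumption: AnchorEntity(.head) inside an ImmersiveSpace; I haven't confirmed whether this works for SwiftUI views rather than entities):
import SwiftUI
import RealityKit

struct FollowingContentView: View {
    var body: some View {
        RealityView { content in
            // Anchor to the user's head so the content moves with them.
            let headAnchor = AnchorEntity(.head)

            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.05),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.position = [0, 0, -1] // 1 m in front of the user's head
            headAnchor.addChild(sphere)

            content.add(headAnchor)
        }
    }
}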