How can I guide users to set their preferred language for my app in visionOS? The behavior currently seems to differ from iOS.
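One approach I'm aware of (assuming the standard per-app language mechanism, which appears once an app bundles more than one localization): deep-link the user to the app's page in Settings, where the Language option lives.

import UIKit

// Sketch: open this app's page in the Settings app. The per-app Language
// option appears there when the bundle declares multiple localizations.
// (Assumption: UIApplication.openSettingsURLString behaves on visionOS as
// it does on iOS.)
func openAppSettings() {
    guard let url = URL(string: UIApplication.openSettingsURLString),
          UIApplication.shared.canOpenURL(url) else { return }
    UIApplication.shared.open(url)
}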
I use an AVPlayer in a window view. When I move the window to different positions, the default behavior is that the sound changes according to the window position. In some cases I don't want this default behavior; I'd like the sound to stay the same.
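For discussion, a minimal sketch of what I am trying, assuming the visionOS-only intended-spatial-experience API on AVAudioSession, where .fixed pins the sound stage in place and .bypassed opts out of spatialization entirely:

import AVFoundation

// Sketch: stop the audio from re-anchoring as the window moves.
// (Assumption: the visionOS intended-spatial-experience API.)
func pinPlayerAudio() {
    do {
        // .fixed keeps the sound stage where it is; .bypassed disables
        // spatialization entirely.
        try AVAudioSession.sharedInstance()
            .setIntendedSpatialExperience(.fixed(soundStageSize: .automatic))
    } catch {
        print("Could not set spatial experience:", error)
    }
}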
I would like some guidance and discussion on how to implement this with RealityKit.
As the title states, this severely limits the flexibility of multi-window applications to create a good user experience.
Even effects like the ones shown below cannot be achieved.
I found that the system Messages app has a nice effect where content expands horizontally and smoothly on click. I wonder if I can add a similar effect to my own app. If so, are there any implementation ideas or examples I can refer to? Thanks!
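Something like this minimal SwiftUI sketch is the effect I have in mind (all names here are illustrative, not taken from Messages):

import SwiftUI

// Sketch: a card that expands horizontally with a smooth spring when tapped.
struct ExpandableCard: View {
    @State private var isExpanded = false

    var body: some View {
        HStack {
            Image(systemName: "photo")
            if isExpanded {
                Text("Expanded detail content")
                    .transition(.opacity.combined(with: .move(edge: .trailing)))
            }
        }
        .padding()
        .frame(width: isExpanded ? 360 : 120, alignment: .leading)
        .glassBackgroundEffect()
        .onTapGesture {
            withAnimation(.spring(duration: 0.4)) {
                isExpanded.toggle()
            }
        }
    }
}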
As you can see, it is a transparent spherical shell model with a ball inside. Everything looks normal from the front, but strange mesh triangles appear in the side and back views. I don't know whether this is expected, or what I need to do to remove these artifacts.
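One mitigation I want to try (an assumption on my part, not a confirmed fix): a shell drawn with both its inner and outer surfaces forces the renderer to sort two transparent layers, which can show up as stray triangles from the side and back, so culling the inward-facing triangles may help.

import RealityKit

// Sketch: render only the outward-facing triangles of the transparent shell.
// (Assumption: the faceCulling property exposed on RealityKit materials in
// recent releases.)
func makeShellMaterial() -> PhysicallyBasedMaterial {
    var material = PhysicallyBasedMaterial()
    material.blending = .transparent(opacity: .init(floatLiteral: 0.3))
    material.faceCulling = .back   // skip inward-facing triangles
    return material
}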
This restriction means I cannot use Metal to create images while simultaneously using Swift to add UI controls or RealityKit content (without using a window) in immersive mode.
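The workaround I am exploring is a sketch along these lines, assuming TextureResource.DrawableQueue, which lets a Metal render pass feed a texture that a RealityKit entity displays inside the immersive space:

import RealityKit
import Metal

// Sketch: route Metal output into a RealityKit texture, so UI controls and
// RealityKit content can coexist with Metal-generated images in immersive
// mode. `texture` is any existing TextureResource (e.g. loaded from the
// bundle) whose contents the queue will replace.
func attachMetalOutput(to texture: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1024, height: 1024,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    texture.replace(withDrawables: queue)
    return queue
}
// Per frame: draw into the texture returned by queue.nextDrawable() with a
// Metal render pass, then call present() on the drawable.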
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
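What I found while debugging, expressed as a sketch based on my understanding of Metal's feature tiers (the format lists below are partial): hardware advertises a read-write texture tier, and only certain pixel formats can be bound with access::read_write.

import Metal

// Sketch: query the GPU's read-write texture tier before using
// `access::read_write` in a compute shader. On Tier 1 hardware only
// r32Float, r32Uint, and r32Sint can be bound read-write; Tier 2 adds
// more formats. The simulator may report a different tier than a
// physical device.
func canUseReadWrite(_ format: MTLPixelFormat, device: MTLDevice) -> Bool {
    switch device.readWriteTextureSupport {
    case .tier1:
        return [.r32Float, .r32Uint, .r32Sint].contains(format)
    case .tier2:
        return [.r32Float, .r32Uint, .r32Sint,
                .rgba32Float, .rgba16Float, .rgba8Unorm].contains(format)
    default:
        return false
    }
}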
VStack(spacing: 8) {
}
.padding(20)
.frame(width: 320)
// Pass the shape to the glass effect; a cornerRadius applied after
// glassBackgroundEffect() does not round the glass background.
.glassBackgroundEffect(in: RoundedRectangle(cornerRadius: 10))
UI:
Attachment(id: "tooptip") {
    if isRecording {
        TooltipView {
            HStack(spacing: 8) {
                Image(systemName: "waveform")
                    .font(.title)
                    .frame(minWidth: 100)
            }
        }
        .transition(.opacity.combined(with: .scale))
    }
}
Trigger:
Button("Toggle") {
withAnimation{
isRecording.toggle()
}
}
The above code does not show the animation when run. When I use isRecording to drive an element in an ordinary SwiftUI view, the animation works.
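One workaround I am considering (an assumption on my part): keep the attachment itself always present, and let the transition run inside the attachment's own view hierarchy with a value-based animation.

import SwiftUI

// Sketch: the attachment hosts this view permanently; the if/transition
// lives inside it, so the animation runs within SwiftUI rather than being
// driven from outside the attachment. TooltipView is my own type from the
// snippet above.
struct RecordingTooltip: View {
    let isRecording: Bool

    var body: some View {
        ZStack {
            if isRecording {
                TooltipView {
                    HStack(spacing: 8) {
                        Image(systemName: "waveform")
                            .font(.title)
                            .frame(minWidth: 100)
                    }
                }
                .transition(.opacity.combined(with: .scale))
            }
        }
        .animation(.default, value: isRecording)
    }
}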
import Foundation
import CoreML

func testMLTensor() {
    // Generate one random value per element; [Float](repeating:count:)
    // would repeat a single random value across the whole tensor.
    let t1 = MLTensor(shape: [2000, 1], scalars: (0..<2000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: (0..<3000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    for _ in 0...50 {
        let t = Date()
        let x = t1 * t2   // broadcasts [2000, 1] x [1, 3000] -> [2000, 3000]
        _ = x
        // timeIntervalSinceNow is negative for a past date, so negate it.
        print("MLTensor", -t.timeIntervalSinceNow * 1000, "ms")
    }
}
testMLTensor()
The above code took more time than expected, especially in the early iterations.
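My current guess (an assumption, based on MLTensor dispatching work asynchronously): t1 * t2 may only enqueue the operation, so the loop above times dispatch rather than the multiply itself, and the first iterations additionally pay one-time setup cost. A sketch that awaits materialization and skips warm-up iterations:

import Foundation
import CoreML

// Hypothetical benchmark helper: awaits the result so the elapsed time
// covers the actual computation, not just enqueueing it. Call it with
// `await benchmarkMLTensor()` from an async context.
func benchmarkMLTensor() async {
    let t1 = MLTensor(shape: [2000, 1], scalars: (0..<2000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    let t2 = MLTensor(shape: [1, 3000], scalars: (0..<3000).map { _ in Float.random(in: 0...10) }, scalarType: Float.self)
    for i in 0..<51 {
        let start = Date()
        let x = t1 * t2
        // shapedArray(of:) materializes the tensor, forcing the compute.
        _ = await x.shapedArray(of: Float.self)
        let ms = -start.timeIntervalSinceNow * 1000
        // Treat the first few iterations as warm-up (one-time setup).
        if i >= 5 { print("iteration \(i):", ms, "ms") }
    }
}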