I am trying to position the share sheet that pops up from the ShareLink API on visionOS, but the share sheet is always anchored at the label position.
The Photos app already achieves what I want: its share sheet appears at the very center of the window, while the share button sits in a corner, inside a menu.
How can I achieve this?
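For context, this is roughly how the ShareLink is set up right now (a minimal sketch; the shared URL is just a placeholder for whatever item is actually being shared):
import SwiftUI

struct ShareButton: View {
    // Placeholder item; in the real app the shared content comes from elsewhere.
    private let url = URL(string: "https://example.com")!

    var body: some View {
        ShareLink(item: url) {
            Label("Share", systemImage: "square.and.arrow.up")
        }
        // The share sheet always appears anchored to this label,
        // not centered in the window like in the Photos app.
    }
}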
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection.
I am also having trouble sharing images over SharePlay with GroupSessionJournal. I am not able to get it to transfer any amount of data, or even get any indication on the other participants' side that an image is being sent. We have looked at all the information we can find online and are not able to establish a connection. I am not sure if I am missing a step, or if I am incorrectly sending the data through the GroupSessionJournal (a sketch of what I am attempting is at the end of this post).
Here are the steps I take to reproduce the issue:
FaceTime another person with the app.
Open the app and click the SharePlay button to SharePlay it with the other person.
Establish the SharePlay by making sure that the board states are synchronized across participants. If they are not, click the close button and then open the app again to rejoin the SharePlay. (This is one of the bugs I would like to fix; it is just a workaround we developed to establish the SharePlay. We would like it to work as soon as you click SharePlay and the other person joins the session.)
Once the SharePlay has been established, change the image by clicking change 1 image.
Select a jpg image.
The image that represents 1 should now be set. If you don't see the image, click on any of the X's in the squares and it will change to the image.
The image should appear for the other participant in the SharePlay. (This does not happen, and it is what we have not been able to figure out how to get working.)
Here are the classes for the example project I created:
Content View
Game Model Class
Activity Manager
Main Starter Class
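Roughly, the send/receive path I am attempting with the journal looks like this (a simplified sketch; I am assuming Data's Transferable conformance is enough for the attachment, and error handling is trimmed):
import GroupActivities
import UIKit

// Sending: upload the selected image as a journal attachment.
func shareImage(_ image: UIImage, journal: GroupSessionJournal) async {
    guard let data = image.jpegData(compressionQuality: 0.8) else { return }
    do {
        _ = try await journal.add(data)
    } catch {
        print("Failed to add attachment:", error)
    }
}

// Receiving: observe the journal's attachments and load the image data.
func observeAttachments(journal: GroupSessionJournal) async {
    for await attachments in journal.attachments {
        for attachment in attachments {
            do {
                let data = try await attachment.load(Data.self)
                // Update the local board state with the received image here.
                print("Received image data:", data.count, "bytes")
            } catch {
                print("Failed to load attachment:", error)
            }
        }
    }
}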
Hi, we have an immersive space in our app, and we thought the palm menu button was not available in immersive spaces, but when I look at my hand and tap, the menu button appears. Is it possible to keep it hidden? We have a hand-tracking feature with a button overlapping the palm; pressing that button triggers the menu button, and when the user then presses again by mistake, the application is sent to the background.
This is very important for us because we would like to release this hand-tracking feature as soon as possible.
Here is a link to a video showing the problem:
https://drive.google.com/file/d/1cfOcdzF19h_mbmpvkVNCJjXEBJecVeJL/view?usp=sharing
I have created an AnchorEntity for my index finger tip and then created a model entity (a sphere) as a child of it. This model entity has a collision component and a physics body component. I tried using both dynamic and kinematic modes for the physics body component.
I have created a plane from a cube that has a collision component and a static physics body.
I have subscribed to CollisionEvents.Began on this plane and stored the subscription in an EventSubscription state variable.
@State private var collisionSubscription: EventSubscription?
Then I subscribed as follows:
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self,
                                          on: self.boxTopCollision, { collisionEvent in
    print("something collided with the box top")
})
The collision event fires when I put the sphere directly above the plane and let gravity cause the collision, but when the sphere is a child of the anchor entity, the collision events don't happen.
I tried adding the collision and physics body components directly to the anchor entity, and that doesn't work either.
I created another sphere with a physics body, a collision component, and an input target component, and manipulated it with a drag gesture.
While the manipulation is happening, no events fire when the sphere touches the plane; but when the gesture ends and the sphere is in contact with the plane, the event fires. I am confused as to why this is happening.
All I want to do is have a collider on my fingertip and detect the collision with this plane. How can I make this work?
Is there some unstated rule that when a physics body is manipulated manually it cannot trigger collision events?
For more context: I am using SpatialTrackingSession with a tracking configuration of .hand. I am successfully able to track the fingertip.
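For reference, the fingertip collider setup described above looks roughly like this (a sketch; the radius, hand choice, and naming are illustrative, and the tracking session setup is omitted):
import RealityKit

func makeFingerTipCollider() -> AnchorEntity {
    let fingerTipAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip))

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 0.005),
        materials: [SimpleMaterial(color: .red, isMetallic: false)]
    )
    sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
    // Kinematic, because the sphere is moved by the anchor rather than by the simulation.
    sphere.components.set(PhysicsBodyComponent(
        massProperties: .default,
        material: .default,
        mode: .kinematic
    ))

    fingerTipAnchor.addChild(sphere)
    return fingerTipAnchor
}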
Hello everyone,
I'm developing an app for visionOS that utilizes HealthKit to query heart rate data. However, I'm encountering an issue where the app doesn't retrieve the latest heart rate values. Specifically, it fails to get live heart rate data even after the data has been saved to the Health app. The readings my app displays are outdated and do not match the current values shown in the Health app.
Here's what I've tried so far:
Fetching heart rate samples: I used HKSampleQuery and HKAnchoredObjectQuery to fetch the most recent heart rate samples (a sketch of the anchored query is below). Despite this, the data retrieved is still not up to date.
Checking Permissions: Ensured that all necessary HealthKit permissions are granted. The app has authorization to read heart rate data and write workout data.
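For reference, the anchored query looks roughly like this (a minimal sketch; the sample handling is simplified and the identifier/unit choices are the standard heart rate ones):
import HealthKit

let healthStore = HKHealthStore()

func startHeartRateQuery() {
    guard let heartRateType = HKObjectType.quantityType(forIdentifier: .heartRate) else { return }

    let query = HKAnchoredObjectQuery(
        type: heartRateType,
        predicate: nil,
        anchor: nil,
        limit: HKObjectQueryNoLimit
    ) { _, samples, _, _, _ in
        handleSamples(samples)
    }
    // Keep receiving samples as new ones are written to the Health store.
    query.updateHandler = { _, samples, _, _, _ in
        handleSamples(samples)
    }
    healthStore.execute(query)
}

func handleSamples(_ samples: [HKSample]?) {
    guard let quantitySamples = samples as? [HKQuantitySample] else { return }
    for sample in quantitySamples {
        let bpm = sample.quantity.doubleValue(for: .count().unitDivided(by: .minute()))
        print("Heart rate: \(bpm) bpm at \(sample.endDate)")
    }
}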
My questions are:
Is there a known issue or limitation with HealthKit on visionOS that prevents apps from accessing the latest heart rate data?
Are there additional steps or configurations required to access live heart rate data in visionOS apps?
Has anyone successfully implemented live heart rate monitoring on visionOS, and if so, could you share how you achieved it?
I'm getting the following error in my Swift build targeting visionOS 2.0:
"'defaultDisplay' is unavailable in visionOS"
TL;DR: how do I specify an initial window position in visionOS? The docs seem to be off; see below.
The docs say it is available, but it is not, or at least my Xcode (Version 16.0) is throwing errors on it:
https://developer.apple.com/documentation/swiftui/scene/defaultwindowplacement(_:)
I know Apple is opinionated about window placement in visionOS, and maybe it will never be available, but the docs say it is in visionOS 2.0+, and it sure would be nice to be able to specify a default position toward the bottom of one's FOV, etc.
Side note: the example code in that doc also has the issue that "Window" is not available in visionOS (WindowGroup is).
Example code, barely modified from the example in the doc:
var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        let displayBounds = context.defaultDisplay.visibleRect
        let size = content.sizeThatFits(.unspecified)
        let position = CGPoint(
            x: displayBounds.midX - (size.width / 2),
            y: displayBounds.maxY - size.height - 140)
        return WindowPlacement(position)
    }
}
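If it helps, a variant that avoids context.defaultDisplay entirely is to return one of the built-in positions instead (a sketch; I am assuming WindowPlacement.Position.utilityPanel is available on visionOS 2.0 and places the window low in the field of view):
WindowGroup("MyLilWindow", id: "MyLilWindow") {
    TestView()
}
.windowResizability(.contentSize)
.defaultWindowPlacement { content, context in
    // No display bounds on visionOS, so fall back to a built-in position.
    WindowPlacement(.utilityPanel)
}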
Images are not appearing in the tab bar on visionOS, even though they show up perfectly on iOS.
I tried the rendering mode API to make the original image visible, and it works fine on iOS. But on visionOS the image stays white, as if masked by the tab bar's default content color.
Has anyone managed to solve this problem? I might be able to create a custom ornament to make it look like a tab bar, but that seems like too much code for this.
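For context, the setup is roughly this (a minimal sketch; the asset name and tab content are placeholders):
import SwiftUI

struct RootTabs: View {
    var body: some View {
        TabView {
            Text("Home")
                .tabItem {
                    Label {
                        Text("Home")
                    } icon: {
                        // Shows the original colors on iOS, but on visionOS the
                        // image still renders as a white template.
                        Image("homeIcon")
                            .renderingMode(.original)
                    }
                }
        }
    }
}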
I am getting the error "Initializing hosting entity without a context" in the console when I build and run my game in Xcode 16.0 beta, targeting visionOS 2.0 (22N5252n).
Not sure where the error is originating.
Hello,
I keep running into the warning below when pushing a window of type volumetric. Although the window is pushed successfully, we always get the warning, regardless of whether we push it via the Attachment button or via the buttons in the ToolbarItemGroup.
Illustrated is all the code: the app file, the first volume, and the second volume. You can see in my app file that all volumetric windows are indeed in a WindowGroup.
What is wrong? How can I get rid of that warning?
Warning:
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
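For reference, the push itself looks roughly like this (a simplified sketch of the setup; the ids and view names are placeholders for my actual code):
import SwiftUI

@main
struct VolumesApp: App {
    var body: some Scene {
        WindowGroup(id: "FirstVolume") {
            FirstVolumeView()
        }
        .windowStyle(.volumetric)

        WindowGroup(id: "SecondVolume") {
            Text("Second volume")
        }
        .windowStyle(.volumetric)
    }
}

struct FirstVolumeView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push second volume") {
            // The push succeeds, but the PushWindowAction warning is still logged.
            pushWindow(id: "SecondVolume")
        }
    }
}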
I'm developing a visionOS app and I'm trying to load a ModelEntity from a USDZ file which is inside my custom RealityKit package called R2UVisionOficial. But it keeps giving me a resourceNotFound error.
import RealityKit
import R2UVisionOficial
import ARKit
/* more code */
do {
    let newEntity: Entity
    //...
    // Loads entity from USDZ inside package
    newEntity = try await ModelEntity(named: "Salas", in: r2UVisionOficialBundle)
    //...
    return newEntity
} catch {
    print("wtManager >>> **** FAILED to load entity:", error.localizedDescription)
    throw error
}
I'm sure the Salas.usdz file is in the root folder of my package and that I'm using the correct paths. However, I keep getting the error:
Failed to find resource with name "Salas" in bundle
It's funny because when I try to load a USDA (scene) from the same package, it works fine. So I guess it has something to do with ModelEntity or USDZ files.
Can you please help me?
P.S. This issue is similar to https://developer.apple.com/forums/thread/746842?answerId=780415022#780415022
I'm trying to downgrade my Vision Pro to visionOS 1.3. I downloaded the visionOS 1.3 IPSW file from the Apple Developer site (on September 25, 2024), but I'm unable to restore the device using this file.
After checking ipsw.me, I noticed that visionOS 1.3 is no longer signed. This makes me wonder if the 1.3 IPSW file, although available on the developer site, might no longer be usable.
Has anyone else encountered this issue? Is there any official confirmation on whether visionOS 1.3 can still be restored?
I am trying to apply a drag gesture only to entities that have a specific component. My entities have that component on them, along with the input target and collision components. The gesture works when I use the .targetedToAnyEntity() modifier, but the .targetedToEntity(where:) modifier fails.
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToEntity(where: .has(ToyComponent.self))
                .onChanged({ value in
                    value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
                })
        )
    }
}
What could be wrong here?
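For completeness, the component is defined in my RealityKitContent package roughly like this, and registered once at app launch (a sketch; ToyComponent is my own type):
import RealityKit

// Custom component attached to the draggable entities in the Reality Composer Pro scene.
public struct ToyComponent: Component, Codable {
    public init() {}
}

// Called once before any scene is loaded, e.g. in the App initializer:
// ToyComponent.registerComponent()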
Hello,
It is not entirely clear to me why, no matter how I position it, my attachment is always hidden/covered by my visionOS app window. I'm trying to display the attachment one layer above/in front of the window. When my head isn't directed towards the window I can see the attachment, but otherwise it is covered by the window.
I appreciate any help!
ContentView.swift
import SwiftUI
import RealityKit
struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    public var body: some View {
        VStack {
            Text("Hello World")
                .font(.largeTitle)
            Button("Start") {
                Task {
                    await openImmersiveSpace(id: "AppSpace")
                }
            }
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    var loader: EnvironmentLoader

    public var body: some View {
        RealityView { content, attachments in
            content.add(try! await loader.getEntity())

            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let text = attachments.entity(for: "at01") {
                text.position = [0, 0, -0.25]
                headEntity.addChild(text)
            }
        } attachments: {
            Attachment(id: "at01") {
                Text("Hello World!")
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}
App.swift
import SwiftUI
@main
private struct App: SwiftUI.App {
    @State var loader = EnvironmentLoader()

    public var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "AppSpace") {
            ImmersiveView(loader: loader)
        }
        .immersionStyle(selection: .constant(.progressive), in: .progressive)
    }
}
I am using RealityView for an iOS program.
Is it possible to turn off the camera passthrough, so only my virtual content is showing? I am looking to create a VR experience.
I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days, I think I used something like this:
arView.environment.background = .color(.black)
Is there something similar in RealityView for iOS?
Here are some snippets of my current workaround inside RealityView.
First create the sphere to surround the user:
// Create sphere
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)
Then turn off occlusion:
// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
    tracking: [],
    sceneUnderstanding: [],
    camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
Hi,
We are a team of students working on a project featuring the Vision Pro, and we'd simply like to know whether a third-party app can access the video stream of the front cameras.
From our tests, FaceTime, for example, is able to screen share the entire stream that the user is seeing (real world + app windows), but apps such as Discord are only able to share app windows; the real world is fully black.
Is this a privacy/security restriction, or is it just that third-party apps don't yet have access to the front camera stream?
To give some more context, we'd like to screenshot an area of the view (real world) with a pinch-and-drag gesture, and then access the screenshot to work on it. How would we be able to access the video stream?
Thanks in advance for your help,
MrCubic
HoverEffectComponent on macOS 15 and iOS 18 works fine using RealityView, but seems to be ignored when ARView (even with a SwiftUI UIViewRepresentable) is used.
Feedback ID: FB15080805
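For reference, the component is attached the same way in both cases (a minimal sketch; the entity itself is just a placeholder box):
import RealityKit

// An entity set up for hover: the hover effect works in RealityView,
// but the same setup appears to be ignored inside an ARView.
func makeHoverableEntity() -> ModelEntity {
    let entity = ModelEntity(
        mesh: .generateBox(size: 0.2),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)]
    )
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    entity.components.set(HoverEffectComponent())
    return entity
}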
Hi, in the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo/video capture with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30)
However, when I attempt to use AVFoundation and AVCaptureSession together with RoomPlan, I get the simple error "Cannot Record".
So I'm not sure if there is something wrong with my setup/code, or if these two frameworks are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach? Happy to share code if necessary. Thanks
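Roughly, the setup looks like this (a simplified sketch of what I am attempting; device selection and error handling are trimmed, and I am assuming a standalone RoomCaptureSession rather than RoomCaptureView's session):
import AVFoundation
import RoomPlan

let captureSession = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
let roomCaptureSession = RoomCaptureSession()

func startScanningAndRecording() throws {
    guard let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    let input = try AVCaptureDeviceInput(device: camera)
    if captureSession.canAddInput(input) { captureSession.addInput(input) }
    if captureSession.canAddOutput(movieOutput) { captureSession.addOutput(movieOutput) }
    captureSession.startRunning()

    // Recording to a file is started later via movieOutput.startRecording(to:recordingDelegate:),
    // which is where the "Cannot Record" error shows up.
    roomCaptureSession.run(configuration: RoomCaptureSession.Configuration())
}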
Hi,
I'm currently working on some messages that should appear in front of the user depending on the system state of my visionOS app. How can I change the distance of the appearing message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for the message and then apply .setPosition() and .relativeTo(), e.g. relative to the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space. But as I intend to display that view in my immersive space, it would be great if I could push the message a little further away from the user, as it is currently a bit too close in the user's view. If there is a solution without the use of entities, I would prefer that.
Thank you for your help!
Below is an example:
Feedback.swift
import SwiftUI
struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to the user in the UI?
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )

        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession.init()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
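In case it helps to see the direction I am considering: the same message could be placed at a chosen distance with a RealityView attachment instead of a plain overlay (a sketch; the 1.5 m offset is arbitrary):
RealityView { content, attachments in
    let headEntity = AnchorEntity(.head)
    content.add(headEntity)
    if let feedback = attachments.entity(for: "feedback") {
        // A negative z offset moves the attachment away from the user;
        // this is where the distance/depth would be adjusted.
        feedback.position = [0, 0, -1.5]
        headEntity.addChild(feedback)
    }
} attachments: {
    Attachment(id: "feedback") {
        Feedback(message: "Hello World")
    }
}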
I am trying the Vision framework on Vision Pro, but it does not work specifically on visionOS 2.0.
When I perform requests, they fail and the error below is thrown.
The same code works on visionOS 1.2 and iOS 18.0 beta.
I also tried the new beta API, e.g. GenerateForegroundInstanceMaskRequest, but it does not work either and produces the same error.
Do you have any idea? Is there any permission required to use the Vision framework on visionOS 2.0?
This is my try list:
With visionOS 2.0 beta 4:
GenerateForegroundInstanceMaskRequest (does not work, Error 1)
VNGenerateForegroundInstanceMaskRequest (does not work, Error 1)
VNRecognizeTextRequest (does not work, Error 2)
With visionOS 1.2:
VNRecognizeTextRequest (works)
With iOS 18 beta:
GenerateForegroundInstanceMaskRequest (works)
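For reference, this is roughly how I perform the text request (a minimal sketch using the VN API; the CGImage input is illustrative):
import Vision

func recognizeText(in cgImage: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    let observations = request.results ?? []
    return observations.compactMap { $0.topCandidates(1).first?.string }
}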
My Development Environment:
Env 1
Vision Pro: visionOS 2.0 beta 4
Xcode: 16.0 beta 4, 16.0 beta 2
macOS: 14.5 (23F79)
Env 2
Vision Pro: visionOS 1.2
Xcode: 15.4
macOS: 14.5 (23F79)
Error 1:
Error Domain=com.apple.Vision Code=9 "Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)" UserInfo={NSLocalizedDescription=Could not build inference plan - ANECF error: failed to load ANE model file:///System/Library/Frameworks/Vision.framework/subject_lifting_gen1_rev5_gv8dsz6vxu_multihead_int8.espresso.net Error= (DESIGN)}
Error 2:
Error Domain=com.apple.Vision Code=11 "VNRecognizeTextRequest produced an internal error" UserInfo={NSLocalizedDescription=VNRecognizeTextRequest produced an internal error, NSUnderlyingError=0x3001f6850 {Error Domain=CRImageReaderErrorDomain Code=-5 "Unknown error" UserInfo={NSLocalizedDescription=Unknown error}}}
Hi all,
I’m quite new to XR development in general and need some guidance.
I want to create a function that simply tells me if my palm is facing me or not (returning a bool), but I honestly have no idea where to start.
I saw a Reddit post from about 6 months ago that essentially asked for the same thing I need, but the only response was this:
Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user’s palm is showing or not.
While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I’m not asking for the entire implementation, but rather just enough to get me on the right track.
Here's basically all I have so far (no idea if this is correct or not):
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Get the wrist, thumb knuckle and little finger metacarpal positions as 3D vectors.
    // Note: anchorFromJointTransform is in the hand anchor's space, so devicePosition would
    // need to be expressed in that same space (or the joints converted to world space) first.
    let wristPos = SIMD3<Float>(hand.joint(.wrist).anchorFromJointTransform.columns.3.x,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.y,
                                hand.joint(.wrist).anchorFromJointTransform.columns.3.z)
    let thumbKnucklePos = SIMD3<Float>(hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.thumbKnuckle).anchorFromJointTransform.columns.3.z)
    let littleFingerPos = SIMD3<Float>(hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.x,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.y,
                                       hand.joint(.littleFingerMetacarpal).anchorFromJointTransform.columns.3.z)

    // Normal of the wrist / thumb-knuckle / little-finger-metacarpal triangle.
    let normal = normalize(cross(thumbKnucklePos - wristPos, littleFingerPos - wristPos))

    // The palm faces the user when the triangle's front side points toward the device.
    // The sign may need flipping depending on handedness (left vs. right hand).
    return dot(normal, normalize(devicePosition - wristPos)) > 0
}