Coordinate conversion is mentioned in https://developer.apple.com/wwdc24/10153 (the effect is demonstrated at 22:00): an entity jumps out of a volume into the immersive space. However, I don't understand the presenter's explanation very well. I hope someone can give me a basic solution. I would be very grateful!
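For reference, within a single RealityKit scene I can already convert positions with the Entity APIs; a minimal sketch (entity names are illustrative):

import RealityKit

// Express entityA's origin in world (scene) space, then in entityB's
// local space; passing nil means "world space".
func positionInB(of entityA: Entity, relativeTo entityB: Entity) -> SIMD3<Float> {
    let world = entityA.convert(position: .zero, to: nil)
    return entityB.convert(position: world, from: nil)
}

What I can't work out is the step shown in the session, where the entity's position has to survive the jump from the volume's scene to the immersive space's scene.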
This sample code:
https://developer.apple.com/documentation/visionos/incorporating-real-world-surroundings-in-an-immersive-experience
implements physical collisions between virtual objects and the real world by creating a mesh with physics components. However, I don't understand the documentation very well. Can anyone walk me through a solution? Thank you!
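My rough understanding of the sample, as a minimal sketch (assuming an ImmersiveSpace with world-sensing authorization; names are illustrative). Please correct me if this is the wrong approach:

import ARKit
import RealityKit

// Stream reconstructed meshes of the surroundings and give each one
// static collision and physics, so dynamic virtual objects can bounce
// off real surfaces.
@MainActor
final class SurroundingsCollision {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    let root = Entity() // add this to the RealityView content
    private var meshEntities: [UUID: Entity] = [:]

    func run() async throws {
        try await session.run([provider])
        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added, .updated:
                // Build a static collision shape from the mesh anchor.
                let shape = try await ShapeResource.generateStaticMesh(from: anchor)
                let entity = meshEntities[anchor.id] ?? Entity()
                entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
                entity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
                entity.components.set(PhysicsBodyComponent(mode: .static))
                if meshEntities[anchor.id] == nil {
                    meshEntities[anchor.id] = entity
                    root.addChild(entity)
                }
            case .removed:
                meshEntities[anchor.id]?.removeFromParent()
                meshEntities[anchor.id] = nil
            }
        }
    }
}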
In RealityView, I want to move entity A to the position of entity B, but I can't determine entity B's coordinates (for example, when entity B is tracking the hand). What's the solution?
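A sketch of the direct approach I tried, assuming both entities are in the same scene; with a hand-anchored entity B this doesn't give me a usable position:

import RealityKit

// Move entityA to entityB's current world-space position (nil = world space).
func snap(_ entityA: Entity, to entityB: Entity) {
    entityA.setPosition(entityB.position(relativeTo: nil), relativeTo: nil)
}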
Lights in a RealityView only affect virtual objects. I would like the light to be projected onto the real world as well. Is there an API that can do this?
Apple Intelligence is here, but I have some problems. First, the Settings page often shows that something is being downloaded; is this normal? Also, the Predictive Code Completion model in Xcode seems to have been suddenly deleted and needs to be re-downloaded, and the error "The operation couldn't be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)" occurred. Detailed log:
The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
User Info: {
DVTErrorCreationDateKey = "2024-08-27 14:42:54 +0000";
}
--
Failed to find asset: com.apple.fm.code.generate_small_v1.base - no asset
Domain: ModelCatalog.CatalogErrors.AssetErrors
Code: 1
--
System Information
macOS Version 15.1 (Build 24B5024e)
Xcode 16.0 (23049) (Build 16A5230g)
Timestamp: 2024-08-27T22:42:54+08:00
In RealityView I have two entities with tracking components and collision components, used to follow the hand and detect collisions. One of the entities has a Behaviors component with an OnCollision trigger that should execute an action. However, when I test, the action is not executed after they collide. Why is this?
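A simplified sketch of my setup, assuming AnchoringComponent is the tracking component in question (sizes and hand locations are illustrative):

import RealityKit

// Two hand-following spheres that should fire the OnCollision behavior
// (set up in Reality Composer Pro) when they touch.
func makeHandSpheres() -> (Entity, Entity) {
    let left = ModelEntity(mesh: .generateSphere(radius: 0.03))
    left.components.set(AnchoringComponent(.hand(.left, location: .palm)))
    left.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.03)]))

    let right = ModelEntity(mesh: .generateSphere(radius: 0.03))
    right.components.set(AnchoringComponent(.hand(.right, location: .palm)))
    right.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.03)]))
    return (left, right)
}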
Is there an action that can clone an entity in RealityView as many times as I want? If there is, please let me know. Thank you!
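If there is no built-in action for this, is cloning in code the intended approach? A minimal sketch of what I mean:

import RealityKit

// Make `count` copies of a template entity and add them to a parent.
func populate(_ parent: Entity, with template: Entity, count: Int) {
    for index in 0..<count {
        let copy = template.clone(recursive: true) // deep copy, children included
        copy.position.x = Float(index) * 0.1       // illustrative spacing
        parent.addChild(copy)
    }
}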
I can execute an action by having Xcode send a notification to Reality Composer Pro via NotificationCenter, or send a notification back to Xcode through the Notification action in Reality Composer Pro. However, I discovered that within my project neither side receives the other's notifications. To check whether there was an error in my code, I created a simple demo project, used the same code, and found that it worked normally there. It is perplexing that I am unable to resolve this issue. Do I need to make additional changes?
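For reference, the sending side looks like this in both projects; the notification name and userInfo keys follow my understanding of Apple's sample code, so please correct me if they are wrong:

import Foundation
import RealityKit

// Fire a timeline whose trigger is "Notification" in Reality Composer Pro.
// `scene` is the root entity loaded from the RCP bundle; the identifier
// must match the one set on the trigger.
func triggerTimeline(in scene: Entity) {
    NotificationCenter.default.post(
        name: NSNotification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": "MyNotification" // illustrative
        ]
    )
}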
In visionOS 2, there is a feature that lets users raise their hand to display the Home button. However, this functionality conflicts with the hand interaction my app uses in its mixed immersive space. Is there a way to disable this feature?
The entities in my RealityView have tracking components that let them track different parts of the hand. However, I found that apart from the index fingertip, the thumb tip, the palm, and the wrist, no other location tracks properly (for example, the middle fingertip). How can I solve this? (I suspect it may be a bug in the beta.)
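A simplified version of what I'm doing; I'm assuming the joint-based hand location (with joint names from ARKit's HandSkeleton) is the right way to target other fingertips:

import ARKit
import RealityKit

// Anchoring to built-in locations such as .palm or .indexFingerTip works,
// but a specific joint such as the middle fingertip does not track for me.
func makeJointMarker() -> Entity {
    let marker = ModelEntity(mesh: .generateSphere(radius: 0.008))
    marker.components.set(
        AnchoringComponent(.hand(.right, location: .joint(for: .middleFingerTip)))
    )
    return marker
}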
In my volume there is a RealityView that includes lighting effects. However, when the user drags the window back and forth, the farther the volume is from the user, the brighter the light effect becomes. (I believe this may be a bug in the beta.)
Note: the volume's WindowGroup has the .defaultWorldScaling(.dynamic) modifier.
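The scene configuration, roughly (view and identifier names are illustrative):

import SwiftUI

// The volume whose lighting gets brighter as it moves away from the user.
struct LightBugApp: App {
    var body: some Scene {
        WindowGroup(id: "Volume") {
            LitVolumeView() // contains the RealityView with the light
        }
        .windowStyle(.volumetric)
        .defaultWorldScaling(.dynamic)
    }
}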
In RealityView, I found that an entity cannot cast a shadow onto the real world. What configuration do I need to add to achieve this?
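A sketch of what I have tried, assuming GroundingShadowComponent is the relevant configuration:

import RealityKit

// Opt the entity into grounding shadows; I expected this to make it
// cast a shadow onto real-world surfaces in the mixed space.
func enableGroundingShadow(on entity: Entity) {
    entity.components.set(GroundingShadowComponent(castsShadow: true))
}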
I am attempting to execute actions after tapping an entity in RealityView using the Behaviors component. I have added an Input Target component and a tap gesture as follows:
.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // Forward the tap to the entity's Behaviors component.
            _ = value.entity.applyTapForBehaviors()
        }
)
However, during testing, I observed that the entity does not respond to the tap gesture. Could you kindly provide any relevant documentation or guidance on this matter?
Apple Intelligence is now available in macOS 15.1 and iOS 18.1 (beta). However, it is currently not supported on visionOS, despite the fact that Vision Pro runs on M2 silicon with 16 GB of unified memory. I would like to enable Apple Intelligence features in my visionOS app.
During testing, I encountered an issue with SharePlay. Since SharePlay requires multi-device testing, I intend to use my Mac and Vision Pro. However, since these two devices are also my primary devices, I am reluctant to switch Apple IDs just for testing; I would like to test with my original Apple ID. But because both devices are signed in to the same Apple ID and rely on the same phone number, they cannot FaceTime each other. I am at a loss as to how to proceed.