In RealityView, physics components seem to apply only to certain rigid solids. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
I'm developing an app in which I need to render pictures and display some models in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
I am seeking a comprehensive pathway for learning Metal programming on visionOS. The official documentation's Metal pathway is insufficient in this regard. I kindly request that someone put together a detailed pathway to assist me in this endeavor.
The pathway should encompass the following key areas:
Knowledge Base:
Understand the fundamental principles of Metal and other frameworks, as well as basic concepts, to prepare for future learning.
Metal 3 (very important):
Gain a deep understanding of Metal itself, the framework and shading language used to communicate with the device's GPU to render graphics. This knowledge forms the foundation for all Metal-related tasks.
Compositor Services and ARKit (important):
Learn how to display Metal scenes within the Vision device’s space and enable augmented reality (AR) and hand interaction. This knowledge is essential for creating interactive and immersive experiences.
Metal Performance Shaders:
Acquire expertise in optimizing material rendering to enhance performance.
MetalKit:
Learn how MetalKit simplifies the task of displaying your Metal content onscreen.
MetalFX:
Develop proficiency in using MetalFX to improve rendering efficiency and achieve visually stunning effects.
I would appreciate it if you could provide me with a detailed and comprehensive pathway, including the URLs of relevant documents, to guide my learning journey. Thank you for your assistance.
In visionOS, ARKit is used to integrate virtual content with reality. However, most of its functions can be easily implemented with RealityKit (except for scene reconstruction, room tracking, and the enterprise APIs), so do I still need to use ARKit? Is there any difference between them?
I have created a portal and attached it to a wall using the AnchorEntity. However, I am seeking guidance on how to determine the size of the wall so that the portal can fully occupy it. Initially, I attempted to locate relevant information within the demo code, but I encountered difficulties in comprehending certain sections. I would appreciate it if someone could provide a step-by-step explanation or a reference to the appropriate code. Thank you for your assistance.
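A minimal sketch of one way to get the wall's size, assuming ARKit's PlaneDetectionProvider with world-sensing authorization: read the width and height from the detected PlaneAnchor's extent (the function name below is a placeholder).

import ARKit

// Sketch: detect vertical planes and read each wall's bounding extent in meters.
// Requires an NSWorldSensingUsageDescription entry and a running immersive space.
let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.vertical])

func trackWallSize() async throws {
    try await session.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .wall else { continue }
        let width = anchor.geometry.extent.width
        let height = anchor.geometry.extent.height
        print("Wall \(anchor.id): \(width) m x \(height) m")
        // Use these values to scale the portal's plane mesh so it fills the wall.
    }
}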
In a visionOS app, I want to detect whether the user is in a room. My idea is as follows:
Check whether there are walls around the user
May I ask how to do it? Thanks!
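A rough sketch of that idea, assuming plane detection and world-sensing authorization: count the distinct wall planes detected and treat some threshold as "inside a room" (the threshold of 2 below is an arbitrary assumption; visionOS 2's RoomTrackingProvider may be a more direct option if it is available to you).

import ARKit

// Sketch: consider the user "in a room" once several distinct wall planes are detected.
let session = ARKitSession()
let planes = PlaneDetectionProvider(alignments: [.vertical])
var wallIDs = Set<UUID>()

func monitorWalls() async throws {
    try await session.run([planes])
    for await update in planes.anchorUpdates {
        guard update.anchor.classification == .wall else { continue }
        switch update.event {
        case .added, .updated:
            wallIDs.insert(update.anchor.id)
        case .removed:
            wallIDs.remove(update.anchor.id)
        }
        print("Walls detected: \(wallIDs.count); probably in a room: \(wallIDs.count >= 2)")
    }
}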
I have been concentrating on developing visionOS applications. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, although I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? I would also appreciate insights into the respective advantages of RealityKit and CompositorServices.
In WWDC24, visionOS hand tracking gained a new option that lets an entity track the hand faster (at the expense of some accuracy), but the video only explains how to implement it with ARKit. How can I implement this with an AnchorEntity in RealityView?
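For reference, a minimal sketch assuming the visionOS 2 AnchorEntity initializer that accepts a trackingMode (the faster, less accurate option mentioned in the session):

import SwiftUI
import RealityKit

// Sketch: an AnchorEntity that follows the right palm using predicted (lower-latency) tracking.
struct HandTrackedView: View {
    var body: some View {
        RealityView { content in
            let handAnchor = AnchorEntity(
                .hand(.right, location: .palm),
                trackingMode: .predicted   // faster, at the cost of some accuracy
            )
            handAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02)))
            content.add(handAnchor)
        }
    }
}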
In visionOS, virtual content is occluded by the hands by default. In mixed immersion, if an entity is located behind a real object, how can the real objects in the room occlude it, just as the hands occlude virtual content?
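One common approach, sketched below: run scene reconstruction and cover the resulting room mesh with OcclusionMaterial, so real-world geometry hides virtual entities behind it, similar to the built-in hand occlusion. The MeshResource(from:) initializer used here is an assumption (newer RealityKit); on SDKs without it you would build a MeshDescriptor from the anchor geometry instead.

import ARKit
import RealityKit

// Sketch: occlude virtual content with the reconstructed room mesh.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
let occlusionRoot = Entity()               // add this entity to your RealityView content
var occluders: [UUID: ModelEntity] = [:]

func runRoomOcclusion() async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added, .updated:
            // Assumed initializer; otherwise convert anchor.geometry to a MeshDescriptor manually.
            guard let mesh = try? await MeshResource(from: anchor) else { continue }
            let entity = occluders[anchor.id] ?? ModelEntity()
            entity.model = ModelComponent(mesh: mesh, materials: [OcclusionMaterial()])
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            if occluders[anchor.id] == nil {
                occluders[anchor.id] = entity
                occlusionRoot.addChild(entity)
            }
        case .removed:
            occluders[anchor.id]?.removeFromParent()
            occluders[anchor.id] = nil
        }
    }
}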
How can I detect whether two entities collide in RealityView and execute some instructions after the collision? Assume both entities have collision components, and one of them also has an anchor component (for example, it is bound to a hand).
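A minimal sketch of the usual pattern: subscribe to CollisionEvents.Began from the RealityView content (both entities need collision components, as noted above).

import SwiftUI
import RealityKit

// Sketch: run code when two collidable entities begin colliding.
struct CollisionDemoView: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // ... add your two entities (each with a CollisionComponent) to `content` here ...
            subscription = content.subscribe(to: CollisionEvents.Began.self) { event in
                print("\(event.entityA.name) collided with \(event.entityB.name)")
                // Execute your instructions here, e.g. filter by entity name or component.
            }
        }
    }
}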
How can I make a specified entity in RealityView graspable by the user?
The entity has physics and collision components and should stay unchanged while the user is not performing a grab gesture. When the user makes a grab gesture very close to the entity (a small deviation is allowed), an anchor component is enabled to bind the entity to the hand; when the user lets go, the entity should fall along the y-axis from its current position (affected by the physics component).
I hope you can help me. Thank you.
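A sketch of just the grab/release state change, assuming the grab gesture itself is detected elsewhere (for example from ARKit hand-tracking data): while held, anchor the entity to the hand and make its physics body kinematic; on release, drop the anchor and switch back to dynamic so gravity takes over. Note that reading the world transform of a hand-anchored entity may require hand-tracking authorization.

import RealityKit

// Sketch: toggle an entity between "held by the hand" and "free-falling".
func grab(_ entity: Entity) {
    entity.components.set(AnchoringComponent(.hand(.right, location: .palm)))
    if var body = entity.components[PhysicsBodyComponent.self] {
        body.mode = .kinematic          // stop physics from fighting the anchor
        entity.components.set(body)
    }
}

func release(_ entity: Entity) {
    // Keep the current world-space pose before removing the anchor
    // (may only be meaningful if hand tracking is authorized).
    let worldTransform = entity.transformMatrix(relativeTo: nil)
    entity.components.remove(AnchoringComponent.self)
    entity.setTransformMatrix(worldTransform, relativeTo: nil)
    if var body = entity.components[PhysicsBodyComponent.self] {
        body.mode = .dynamic            // gravity takes over and the entity falls
        entity.components.set(body)
    }
}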
I saw an "On Notification" trigger in the Behaviors configuration of Reality Composer Pro, which asks me to enter a notification name. In other words, Swift code in Xcode has to send a notification carrying that name. I hope you can show me how to use Swift in Xcode to send a notification containing the notification name. (There is an answer at https://developer.apple.com/forums/thread/756978, but it doesn't work.)
Also, in the timeline in Reality Composer Pro there is a Notification action, which is used to send messages to Swift. How can Swift detect whether the Notification action has sent a message? (There is an answer at https://developer.apple.com/videos/play/wwdc2024/10102/, but it doesn't work.)
I have asked this question before (https://developer.apple.com/forums/thread/756978). Those answers used to work, but they are all invalid on the latest system. I hope you can help me. Thank you.
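For reference only, this is the pattern from the linked thread and session; the notification name and userInfo keys below are assumptions taken from those references and may behave differently across OS versions (which may be why it stopped working).

import Foundation
import RealityKit

// Sending: trigger an "On Notification" behavior defined in Reality Composer Pro.
func sendTimelineNotification(named identifier: String, from entity: Entity) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": entity.scene as Any,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Receiving: observe notifications posted by a timeline's Notification action.
let observer = NotificationCenter.default.addObserver(
    forName: Notification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    queue: .main
) { notification in
    let identifier = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
    print("Timeline notification received: \(identifier ?? "unknown")")
}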
Given my limited knowledge of physics, I would appreciate it if those with a solid understanding of the subject could provide insights into this matter. I have added a physics component to an entity in Reality Composer Pro, and I am seeking guidance on how to achieve the following:
Make an object float in the air (with a slight downward motion reminiscent of the moon’s surface)
Enable the object to move at a slow pace
Implement a strong rebound force
I would be grateful if you could provide appropriate values for these parameters. Thank you for your assistance.
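A rough sketch with illustrative values to tune, assuming PhysicsSimulationComponent and linearDamping are available in your SDK (newer RealityKit): weak gravity for the floating, moon-like fall; damping for slow movement; high restitution for a strong rebound.

import RealityKit

// Sketch: low gravity, slow movement, strong rebound. All numbers are starting points to tune.
func makeLowGravitySetup() -> Entity {
    let simulationRoot = Entity()
    var simulation = PhysicsSimulationComponent()
    simulation.gravity = [0, -0.3, 0]        // much weaker than Earth's -9.8 m/s²
    simulationRoot.components.set(simulation)

    let floatingEntity = ModelEntity(mesh: .generateSphere(radius: 0.1))   // stand-in for your RCP entity
    let bouncy = PhysicsMaterialResource.generate(
        staticFriction: 0.5,
        dynamicFriction: 0.5,
        restitution: 0.9                     // close to 1.0 = strong rebound
    )
    var body = PhysicsBodyComponent(material: bouncy, mode: .dynamic)
    body.linearDamping = 2.0                 // higher damping = slower movement
    floatingEntity.components.set(body)
    floatingEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    simulationRoot.addChild(floatingEntity)
    return simulationRoot                    // add this to your RealityView content
}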
I am honored to have been accepted to the "Envision the future: Build great apps for visionOS" event (https://developer.apple.com/events/view/ZCH7ZUY24C/dashboard). Unfortunately, I am in China, and because of visa timing (a visa usually takes at least 2 months to obtain, but there were only about 10 days between receiving the notice and the event), I cannot travel to the United States to attend in person. I hope Apple could place an iPad on site and set up a FaceTime link so that I could call in to that iPad. I have already sent Apple this suggestion, but with only a few days left before the event, they have not replied. Apple has even sent me the ticket for the Developer Center and asked me to add it to the Apple Wallet app, which suggests my request has not been handled. I would be grateful for any advice or help from those who know about this. If possible, I hope Apple's internal engineers can contact the people who manage this event. I'm very grateful. Thank you.
In iOS, to display a RealityView, you can assign a value to the content.camera property:
content.camera = .virtual
However, how can this be implemented in macOS and tvOS?
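For context, a minimal sketch of the iOS-style usage; whether content.camera (RealityViewCameraContent) is exposed on macOS and tvOS is exactly the open question here and should be checked against the current SDK.

import SwiftUI
import RealityKit

// Sketch: a RealityView rendered with a virtual camera instead of the device camera feed.
struct VirtualCameraView: View {
    var body: some View {
        RealityView { content in
            content.camera = .virtual
            content.add(ModelEntity(mesh: .generateBox(size: 0.2)))
        }
    }
}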