Posts

Post not yet marked as solved
1 Replies
743 Views
Related to “what you can do in visionOS”: what are these camera-related functionalities for? (They aren't yet described in the documentation.)
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport
What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
Post marked as solved
1 Replies
596 Views
I thought that RealityKit's CustomMaterial didn't exist in visionOS, but it's listed here: https://developer.apple.com/documentation/realitykit/custommaterial Can it in fact be used in mixed/AR passthrough mode? Has something changed? What is the situation?
Post not yet marked as solved
0 Replies
868 Views
In regular Metal, I can do all sorts of tricks with texture masking to create composite objects and effects, similar to CSG. Since, for now, AR mode in visionOS requires RealityKit without the ability to use custom shaders, I'm a bit stuck. I'm fairly sure that what I want is impossible and requires a feature request, but here it goes.

Here's a 2D example: say I have some fake circular flashlights shining into the scene, depth-wise, and everything else is black except for some rectangles that are "lit" by the circles. The result:

How it works: in Metal, my per-instance data contain a texture index for a mask texture. The mask texture has an alpha of 0 wherever the instance should not be visible and an alpha of 1 otherwise. In an initial render pass, I draw the circular lights into this mask texture. In pass 2, I attach the full-screen mask texture (the circular lights) to every mesh instance I want hidden in the darkness. A custom fragment shader multiplies the color that would otherwise be output by the alpha of the full-screen mask sample at the given fragment, i.e. out_color *= mask.a. The way I have blending and the clear color set up, wherever the mask alpha is 0 an object is hidden, and the background clear color is black. The following is how the scene looks if I don't attach the masking texture; you can see that behind the scenes, the full rectangle is there.

In visionOS AR mode, the point is for the system to apply lighting, depth, and occlusion information to the world. For my effect to work, I need to be able to generate an intermediate representation of my world (after pass 2) that shows some of that world in darkness. I know I can use Metal separately from RealityKit to prepare a texture and apply it to a RealityKit mesh using DrawableQueue. However, as far as I know there is no way to supply a full-screen depth buffer for RealityKit to mix with whatever it's doing with the AR passthrough depth and occlusion behind the scenes, so my Metal texture would just be a flat quad in the scene rather than something mixed with the world. Furthermore, I don't see a way to apply a full-screen quad to the scene, period.

I think my use case is impossible in visionOS AR mode without customizable rendering in Metal (separate issue: I still think that in single full-app mode it should be possible to grant access to the camera and custom rendering more securely) and/or a RealityKit feature enabling mixing of depth and occlusion textures for compositing. I love these sorts of masking/texture effects because they're simple and elegant to pull off, and I can imagine creating several useful and fun experiences using this masking and custom depth info with AR passthrough.

Please advise on how I could achieve this effect in the meantime. In any case, my specific feature request is the ability to provide full-screen depth and occlusion textures to RealityKit, so it's easier to mix Metal rendering as a pre-pass with RealityKit as a final composition step.
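For context, here is a minimal sketch of the pass-2 fragment shader described above, in Metal Shading Language. All of the names (masked_fragment, VertexOut, the texture and buffer indices) are hypothetical and stand in for however the real pipeline binds its per-instance data and the full-screen mask:

```metal
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]]; // window-space position in the fragment stage
    float4 color;                 // per-instance color from the vertex stage
};

// Pass 2: modulate each instance's color by the alpha of the full-screen
// mask rendered in pass 1 (the circular "lights"). Where the mask alpha is
// 0, the fragment goes fully transparent and the black clear color shows.
fragment float4 masked_fragment(VertexOut in [[stage_in]],
                                texture2d<float> maskTexture [[texture(0)]],
                                constant float2 &screenSize [[buffer(0)]])
{
    constexpr sampler maskSampler(address::clamp_to_edge, filter::linear);
    float2 screenUV = in.position.xy / screenSize; // fragment position -> mask UV
    float mask = maskTexture.sample(maskSampler, screenUV).a;
    float4 out_color = in.color;
    out_color *= mask; // the "out_color *= mask.a" step from the description
    return out_color;
}
```

The open question above is precisely that there is currently no place to run a pass like this inside RealityKit's AR-mode pipeline.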
Post not yet marked as solved
4 Replies
1k Views
In ARKit for iPad, I could 1) build a mesh on top of the real world and 2) request a people occlusion map for use with my application, so people could move behind or in front of virtual content via compositing. However, in visionOS there is no ARFrame image to pass to the function that would generate the occlusion data. Is it possible to do people occlusion in visionOS? If so, how is it done: through a data provider, or automatically when passthrough is enabled? If it's not possible, is this something that might be addressed in future updates as the platform develops? Being able to combine virtual content and the real world, with people able to interact with that content convincingly, is a really important aspect of AR, so it would make sense for this to be possible.
Post marked as solved
16 Replies
1.8k Views
Apparently, shadows aren't generated for procedural geometry in RealityKit: https://codingxr.com/articles/shadows-lights-in-realitykit/ Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out. On a similar note, it used to be that ground shadows were not per-entity. I'd like to enable or disable them per entity. Is that possible? Since currently the only way to use passthrough AR in visionOS is RealityKit, more flexibility will be required; I can't simply apply my own preferences.
Post not yet marked as solved
1 Replies
1.5k Views
We've learned this week that Vision Pro in fully immersive mode does not allow you to move around. However, many of the most exciting use cases for immersive computing, such as medical rehabilitation and exercise, immersive classroom lab spaces for running experiments, virtual museum galleries and exhibits, and escape rooms, require free movement. It would be a shame for Vision Pro to limit fully immersive mode to stand-still tasks when so many interesting and beneficial use cases exist. Many existing products allow for mobility in VR by asking the user to define a safe, walkable zone in their environment, and the use cases above are controlled environments in which bumping into real-world objects is not a risk. Many existing VR solutions already work this way, and I think Vision Pro would be a great platform for extending the potential of these sorts of experiences, given its additional software and hardware capability. Is Apple interested in exploring potential solutions for enabling movement in full VR mode, and is this worth filing in Feedback Assistant? I understand this is a v1 of the hardware, that perhaps this problem is still being explored, and that future iterations might see major improvements. It just happens that almost all of the projects I'd like to pursue with Vision Pro require free mobility.
Post marked as solved
10 Replies
2.0k Views
I wanted to try structured logging with os_log in C++, but I found that it fails to print anything when given a format string and a variable, e.g.:

```cpp
void example(std::string& str) {
    os_log_info(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_debug(OS_LOG_DEFAULT, "%s", str.c_str());
    os_log_error(OS_LOG_DEFAULT, "%s", str.c_str());
}
```

Each of these prints a blank row in the console with no text. How is this meant to work with variables? As far as I can tell, it currently only works with literals and constants. I'm looking forward to getting this working.
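For what it's worth, a likely culprit here is the unified logging system's privacy redaction rather than the C++ call itself: a dynamic string (anything that isn't a constant) is hidden by default, so the message body shows up empty or as <private> in Console. A minimal sketch of the usual workaround, marking the value as public (note also that info and debug messages have to be enabled in Console's Action menu to appear at all):

```cpp
#include <os/log.h>
#include <string>

void example(const std::string &str) {
    // "%s" with a dynamic value is redacted by default by the unified
    // logging system; "%{public}s" opts the value into plain-text logging.
    os_log_info(OS_LOG_DEFAULT, "%{public}s", str.c_str());
    os_log_error(OS_LOG_DEFAULT, "%{public}s", str.c_str());
}
```

Public logging defeats the privacy protection, so it's best reserved for strings that contain no sensitive data.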
Post not yet marked as solved
0 Replies
331 Views
Whenever I create a Feedback Assistant request, all of my whitespace formatting disappears, which makes long posts unreadable. I'd like to structure requests with sections such as "context," "motivation," etc. so the reader can better understand them. Is it acceptable to put the body of the request in an attached text file instead, or does that risk the request being ignored or discarded? What is the proper etiquette here?
Post not yet marked as solved
0 Replies
982 Views
On visionOS, is a combination of full passthrough, unbounded volumes, and my own custom 3D rendering in Metal possible? Toward the end of the RealityKit and Unity visionOS talk, it's shown that an unbounded volume mode allows you to create full passthrough experiences with 3D graphics rendering (essentially full 3D AR in which you can move around the space), and that you can get occlusion for the graphics. This is all great; however, I don't want to use RealityKit or Unity in my case. I would like to render to an unbounded volume using my own custom Metal renderer and still get AR passthrough, the ability to walk around, and composition of virtual graphical content with the background. To reiterate, this is exactly what is shown in the video using Unity, but I'd like to use my own renderer instead of Unity or RealityKit. This doesn't require access to the video camera texture, which I know is unavailable.

Having the flexibility to create passthrough-mode content in a custom renderer is super important for making an AR experience in which I have control over rendering. One use case I have in mind is Wizard's Chess: you see the real world and can walk around a room-sized chessboard with virtual chess pieces mixed with the real world, and you can see the other player through passthrough as well. I'd also like to render graphics on my living room couches using scene reconstruction mesh anchors, for example, to change the atmosphere. The video already shows several nice use cases, like being able to interact with a tabletop fantasy world with characters. Is what I'm describing possible with Metal? Thanks!

EDIT: Also, if not volumes, then full spaces? I don't need access to the camera images that are off-limits. I would just like passthrough + composition with 3D Metal content + full ARKit tracking and occlusion features.
Post not yet marked as solved
1 Replies
1.3k Views
I notice new C++23 features such as the multidimensional subscript operator mentioned in the Xcode beta release notes, but I don't see a way to enable C++23 in the build settings. What is the correct flag, or is C++23 unusable in Apple Clang?
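In case it's useful, one workaround (assuming the bundled Clang accepts the draft-standard flag even though it isn't exposed in the C++ Language Dialect popup) is to pass -std=c++2b via "Other C++ Flags" (OTHER_CPLUSPLUSFLAGS) in Build Settings. A small C++23 snippet exercising the multidimensional subscript operator, with a hypothetical Grid type, can then confirm whether the toolchain actually supports it:

```cpp
#include <array>
#include <cstddef>

// Hypothetical example type; this compiles only if the compiler accepts
// C++23 (e.g. -std=c++2b), since operator[] takes more than one argument.
struct Grid {
    std::array<float, 16> data{};
    float &operator[](std::size_t row, std::size_t col) {
        return data[row * 4 + col];
    }
};

int main() {
    Grid g;
    g[2, 3] = 1.0f;                 // C++23 multidimensional subscript
    return g[2, 3] == 1.0f ? 0 : 1; // exit code 0 on success
}
```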
Post not yet marked as solved
1 Replies
1.4k Views
Regarding Stage Manager on iPadOS, I am hoping for the possibility of spreading the content of one app across the iPad screen AND an external screen. This would let a single running process put part of its content on the Apple Pencil/multitouch screen and the other part on a large monitor. However, it looks like app windows are restricted to one screen at a time. Alternatively, it would be good if one app could have multiple windows as on macOS, so I could have shared state across all the windows. To anyone's knowledge, are either of these scenarios planned, or are they completely out of scope? Thanks.
Post not yet marked as solved
1 Replies
995 Views
According to the release notes, Xcode 13.4 only provides an SDK for macOS 12.3. Can I build for macOS 12.4 using the lower point-version SDK? I wouldn't want to update the OS if I couldn't build for it yet. Thanks.
Post not yet marked as solved
7 Replies
3.1k Views
I updated Xcode to Xcode 13 and iPadOS to 15.0. Now my previously working application using SFSpeechRecognizer fails to start recognition, regardless of whether I'm using on-device mode. I use the delegate approach, and although the plist appears to be set up correctly (the authorization is successful and I get the orange circle indicating the microphone is on), the delegate method speechRecognitionTask(_:didFinishSuccessfully:) is always called with false, and there is no particular error message to go along with it. I also downloaded the official example from Apple's documentation pages: the SpokenWord SFSpeechRecognition sample project. Unfortunately, it no longer works either. I'm working on a time-sensitive project and don't know where to go from here. How can we troubleshoot this? If it's an issue with Apple's API update or something has changed in the initial setup, I really need to know as soon as possible. Thanks.