For my project, I would really benefit from continuous on-device speech recognition without the automatic timeout, or at least with a much longer one.
In the WebKit web speech implementation, it looks like there are some extra setters for SFSpeechRecognizer exposing exactly this functionality:
https://github.com/WebKit/WebKit/blob/8b1a13b39bbaaf306c9d819c13b0811011be55f2/Source/WebCore/Modules/speech/cocoa/WebSpeechRecognizerTask.mm#L105
Is there a chance Apple could expose a configurable recognition duration/timeout? If it’s available to WebSpeech, why not to native applications?
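In the meantime, the only public-API workaround I know of is to restart the recognition task whenever it ends. A minimal sketch of that idea (authorization and audio-session setup omitted; names are mine, and this assumes on-device recognition is supported for the locale):

```swift
import Speech
import AVFoundation

/// Sketch of the restart workaround: when the recognizer times out and
/// delivers a final result (or an error), tear the task down and start a
/// new one so recognition appears continuous.
final class ContinuousRecognizer {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private let audioEngine = AVAudioEngine()
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    func start() throws {
        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true
        request.requiresOnDeviceRecognition = true   // keep recognition on-device
        self.request = request

        let input = audioEngine.inputNode
        input.removeTap(onBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()

        task = recognizer.recognitionTask(with: request) { [weak self] result, error in
            if let result {
                print(result.bestTranscription.formattedString)
            }
            // The automatic timeout surfaces as a final result or an error;
            // restart so the session keeps going.
            if result?.isFinal == true || error != nil {
                self?.restart()
            }
        }
    }

    private func restart() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task?.cancel()
        try? start()
    }
}
```

This works, but it drops a moment of audio at every restart, which is exactly why a configurable timeout would be better.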
OSLog’s structured logging is nice, but its output length is limited compared with stdio’s. Currently, it looks like I’m forced to revert to stdio whenever I expect long, variable-length print-outs. Or is this just an Xcode 15 beta 2 bug (discussed in the release notes), and fixed versions will match what stdio gives me?
If not, could there be a way to configure OSLog to fall back to stdio dynamically based on whether the printout is too long? A custom fallback buffer allocator? Alternatively, what if I could still get the structured logging with the metadata, and use stdio for the rest of the message that doesn’t fit? That would be a nice way to guarantee the structured logging info without dropping the entire message.
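As a stopgap, something like the following keeps structured logging for short messages and falls back to stdio for long ones; the byte threshold and subsystem name are guesses on my part, not documented values:

```swift
import os

// Stopgap sketch: keep structured OSLog output for short messages and fall
// back to stdio when the rendered message is long enough to risk truncation.
// The 1024-byte threshold and the subsystem name are assumptions.
let logger = Logger(subsystem: "com.example.myapp", category: "general")

func logLong(_ message: String) {
    if message.utf8.count <= 1024 {
        logger.log("\(message, privacy: .public)")
    } else {
        // Keep a structured entry as a pointer, emit the full body via stdio.
        logger.log("long message follows on stdout (\(message.utf8.count) bytes)")
        print(message)
    }
}
```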
In passthrough mode on the Vision Pro, what is the maximum distance one can walk from the origin while keeping stable tracking?
For the MaterialX shadergraph, the given example hard-codes two textures for blending at runtime ( https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph )
Can I instead generate textures at runtime and set them as dynamic inputs to the material, or must all textures be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example shows a material with hard-coded textures?
EDIT: It looks like the answer is “yes,” since setParameter accepts TextureResources: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7
However, how do you turn an MTLTexture into a TextureResource?
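From what I can tell, the only route today is to back a TextureResource with a TextureResource.DrawableQueue and blit the MTLTexture into its drawables. A sketch of what I have in mind; the parameter name "DynamicTexture" and the placeholder resource are mine, and I haven’t verified every descriptor option:

```swift
import RealityKit
import Metal

// Sketch: copy an existing MTLTexture into a TextureResource by backing the
// resource with a DrawableQueue and blitting into each drawable.
// "DynamicTexture" is a placeholder name for an input on my own shader graph.
func bind(_ metalTexture: MTLTexture,
          to material: inout ShaderGraphMaterial,
          commandQueue: MTLCommandQueue,
          placeholder: TextureResource) throws {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: metalTexture.pixelFormat,
        width: metalTexture.width,
        height: metalTexture.height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Re-point an existing (placeholder) TextureResource at the queue,
    // then hand it to the material as a dynamic input.
    placeholder.replace(withDrawables: queue)
    try material.setParameter(name: "DynamicTexture",
                              value: .textureResource(placeholder))

    // Copy the Metal texture into the next drawable and present it.
    if let drawable = try? queue.nextDrawable(),
       let commandBuffer = commandQueue.makeCommandBuffer(),
       let blit = commandBuffer.makeBlitCommandEncoder() {
        blit.copy(from: metalTexture, to: drawable.texture)
        blit.endEncoding()
        commandBuffer.commit()
        drawable.present()
    }
}
```

If there is a more direct MTLTexture-to-TextureResource conversion, I’d love to know about it.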
In fully immersive (VR) mode on visionOS, if I want to use Compositor Services and a custom Metal renderer, can I still get the user’s hands texture so my hands appear as they do in reality? If so, how?
If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons. I’d like to see my own hands, even in immersive mode.
Related to “what you can do in visionOS,” what are all of these camera-related functionalities for? (They aren’t described in the documentation yet.)
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures
https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport
What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.
I thought that RealityKit’s CustomMaterial didn’t exist on visionOS, but it’s listed here: https://developer.apple.com/documentation/realitykit/custommaterial
Can it in fact be used in mixed/AR passthrough mode, and has something changed?
What is the current situation?
How do you enable WebXR support in visionOS's Safari in the simulator? Is there a hidden option or flag somewhere? I believe I've seen videos showcasing WebXR in the simulator, so I think it is possible.
In regular Metal, I can do all sorts of tricks with texture masking to create composite objects and effects, similar to CSG. Since AR mode in visionOS currently requires RealityKit without the ability to use custom shaders, I'm a bit stuck.
I'm pretty sure that what I want is impossible so far and requires a feature request, but here goes:
Here's a 2D example:
Say I have some fake circular flashlights shining into the scene, depthwise, and everything else is black except for some rectangles that are "lit" by the circles.
The result:
How it works:
In Metal, my per-instance data contain a texture index for a mask texture. The mask texture has an alpha of 0 for spots where the instance should not be visible, and an alpha of 1 otherwise.
So in an initial render pass, I draw the circular lights into this mask texture. In pass 2, I attach the full-screen mask texture (the circular lights) to all mesh instances that I want hidden in the darkness. A custom fragment shader multiplies the color that would otherwise be output by the alpha of the full-screen mask sample at the given fragment, i.e. out_color *= mask.a. The way I have blending and clear colors set up, an object is hidden wherever the mask alpha is 0. The background clear color is black.
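For reference, here is a minimal sketch of the pass-2 fragment logic described above, embedded as MSL source compiled at runtime; the struct fields, texture/buffer indices, and function names are illustrative, not from a real project:

```swift
import Metal

// Pass 1 renders the "lights" into the mask texture; in pass 2 every fragment
// multiplies its alpha by the mask sample at its screen position.
let maskedFragmentSource = """
#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 texcoord;
};

fragment float4 maskedFragment(VertexOut in            [[stage_in]],
                               texture2d<float> base   [[texture(0)]],
                               texture2d<float> mask   [[texture(1)]],
                               constant float2 &screen [[buffer(0)]])
{
    constexpr sampler s(address::clamp_to_edge, filter::linear);
    float2 screenUV = in.position.xy / screen;   // full-screen mask lookup
    float4 color = base.sample(s, in.texcoord);
    color.a *= mask.sample(s, screenUV).a;       // alpha 0 => hidden in darkness
    return color;
}
"""

func makeMaskedFragmentFunction(device: MTLDevice) throws -> MTLFunction {
    let library = try device.makeLibrary(source: maskedFragmentSource, options: nil)
    return library.makeFunction(name: "maskedFragment")!
}
```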
The following is how the scene looks if I don't attach the masking texture. You can see that behind the scenes, the full rectangle is there.
In visionOS AR mode, the point is for the system to apply lighting, depth, and occlusion information to the world.
For my effect to work, I need to be able to generate an intermediate representation of my world (after pass 2) that shows some of that world in darkness.
I know I can use Metal separately from RealityKit to prepare a texture and apply it to a RealityKit mesh using DrawableQueue.
However, as far as I know, there is no way to supply a full-screen depth buffer for RealityKit to mix with whatever it's doing with the AR passthrough depth and occlusion behind the scenes. So my Metal texture would just be a flat quad in the scene rather than something mixed with the world.
Furthermore, I don't see a way to apply a full-screen quad to the scene, period.
I think my use case is impossible in visionOS AR mode without either customizable Metal rendering (a separate issue: I still think that in single full-app mode, it should be possible to grant camera access and custom rendering more securely) or a RealityKit feature that enables mixing custom depth and occlusion textures during compositing.
I love these sorts of masking/texture effects because they're simple and elegant to pull off, and I can imagine creating several useful and fun experiences using this masking and custom depth info with AR passthrough.
Please advise on how I could achieve this effect in the meantime.
That said, I'll go ahead and make a specific feature request: the ability to provide full-screen depth and occlusion textures to RealityKit, so it's easier to mix Metal rendering as a pre-pass with RealityKit as the final composition step.
There is a project tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The setup is non-trivial.
Will a new example project be released to cover the new raytracing features in session WWDC2023-10128?
Whenever I create a Feedback Assistant request, all my whitespace formatting disappears, making long posts unreadable. I’d like to write requests with sections such as “context,” “motivation,” etc. so the reader can better understand them.
Is it acceptable to put the body of the request in an attached text file instead, or does that risk the request being ignored or discarded?
What is the proper etiquette for this?
Is full hand tracking on the Vision Pro available in passthrough AR (fully immersed with one application running), or only in fully immersive VR (no passthrough)?
Apparently, shadows aren’t generated for procedural geometry in RealityKit:
https://codingxr.com/articles/shadows-lights-in-realitykit/
Has this been fixed? My projects tend to involve a lot of procedurally generated meshes as opposed to imported models. This will be even more important when visionOS is out.
On a similar note, it used to be that ground shadows were not per-entity. I’d like to enable or disable them per-entity. Is that possible?
Since RealityKit is currently the only way to use passthrough AR on visionOS, more flexibility will be required; I can’t simply implement these preferences myself.
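If GroundingShadowComponent works the way its documentation reads on visionOS, a per-entity toggle would look something like this (untested on my part, and whether it covers procedurally generated meshes is exactly the open question above):

```swift
import RealityKit

// Assumed per-entity grounding-shadow opt-in/opt-out via GroundingShadowComponent.
func setGroundingShadow(_ enabled: Bool, for entity: ModelEntity) {
    entity.components.set(GroundingShadowComponent(castsShadow: enabled))
}
```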