Posts

Post not yet marked as solved
5 Replies
1.3k Views
What is the most efficient way to use an MTLTexture (created procedurally at runtime) as a RealityKit TextureResource? I update the MTLTexture per frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy? A specific example would be great. Thank you!
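For concreteness, here is the direction I’m currently looking at, assuming TextureResource.DrawableQueue is the intended mechanism for per-frame updates. The render closure stands in for my existing Metal pass, and the 1×1 placeholder image is only there because the resource needs initial contents; treat this as a sketch, not confirmed best practice.

```swift
import Metal
import RealityKit
import CoreGraphics

// Sketch, assuming TextureResource.DrawableQueue is the right route for per-frame updates.
func makeDynamicTextureResource(width: Int, height: Int) throws -> (TextureResource, TextureResource.DrawableQueue) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.renderTarget, .shaderRead],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)

    // The resource still needs initial contents; a 1x1 placeholder image is enough,
    // because the drawable queue replaces whatever is there.
    let context = CGContext(data: nil, width: 1, height: 1, bitsPerComponent: 8,
                            bytesPerRow: 4, space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    let placeholderImage = context.makeImage()!
    let resource = try TextureResource.generate(from: placeholderImage,
                                                options: .init(semantic: .color))
    resource.replace(withDrawables: queue)
    return (resource, queue)
}

// Per frame: render into the next drawable's MTLTexture, then present it.
func update(queue: TextureResource.DrawableQueue,
            commandQueue: MTLCommandQueue,
            render: (MTLTexture, MTLCommandBuffer) -> Void) {
    guard let drawable = try? queue.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    render(drawable.texture, commandBuffer)   // placeholder for the existing Metal pass
    commandBuffer.commit()
    drawable.present()
}
```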
Post not yet marked as solved
1 Replies
294 Views
Does Video Toolbox’s compression session yield data that I can decompress on a different device that doesn’t have Apple’s decompression APIs, i.e. so I can send the data over the network to devices that aren’t necessarily Apple’s? Or is the format proprietary rather than just regular H.264 (for example)? If it can be decompressed without Video Toolbox, could someone point me to examples of how to do this using cross-platform APIs? Maybe FFmpeg has something?
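For reference, this is the kind of session I mean. My understanding, which I’d like confirmed, is that with kCMVideoCodecType_H264 the output sample buffers are ordinary AVCC H.264 (SPS/PPS in the format description, length-prefixed NAL units), so converting to Annex B start codes should give a stream any compliant decoder, including FFmpeg’s h264 decoder, can handle. The dimensions and property choices below are just examples.

```swift
import Foundation
import CoreMedia
import VideoToolbox

// Sketch of an H.264 compression session. Assumption to confirm: the emitted
// CMSampleBuffers carry standard AVCC H.264 (parameter sets in the format description,
// length-prefixed NAL units), so nothing about the bitstream is Apple-specific.
func makeH264Session(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,               // using the per-frame output handler instead
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }

    // Plain H.264 settings; the profile/level is part of the standard, not a private format.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_H264_High_AutoLevel)
    return session
}
```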
Post not yet marked as solved
4 Replies
466 Views
Years ago, JSCore on non-macOS platforms disabled JIT, leading to much worse performance than could be achieved with JIT on. Has anything changed recently to permit greater optimizations for JSCore on mobile platforms (iPadOS, visionOS)? My guess is “no,” since the docs still list only macOS under the MAP_JIT flag, but as far as I know, Apple could still choose to enable JSCore optimizations behind the scenes even without exposing this option to developers.
Post not yet marked as solved
1 Replies
891 Views
In fully immersive (VR) mode on visionOS, if I use Compositor Services and a custom Metal renderer, can I still have the user’s hands rendered so that my hands appear as they do in reality? If so, how? If not, is this a valid feature request in the short term? It’s purely for aesthetic reasons: I’d like to see my own hands, even in immersive mode.
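For context, this is roughly the setup I mean. The upperLimbVisibility(_:) scene modifier is my guess at where such an option would live; whether it actually composites the real hands over a Compositor Services layer is exactly what I’m asking, so please treat this as a sketch.

```swift
import SwiftUI
import Metal
import CompositorServices

// Minimal configuration so the sketch is self-contained.
struct BasicConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.depthFormat = .depth32Float
    }
}

struct ImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "FullyImmersive") {
            CompositorLayer(configuration: BasicConfiguration()) { layerRenderer in
                // Placeholder: spawn the Metal render thread here.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
        // Assumption on my part: this asks the system to keep the real hands visible
        // over the fully immersive content. Unconfirmed; this is what I'm asking about.
        .upperLimbVisibility(.visible)
    }
}
```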
Post not yet marked as solved
1 Replies
278 Views
Does the Vision Pro allow USB peripherals like cameras, microphones, or video feeds from an iPhone or iPad? Can I use AVFoundation to access external camera feeds or microphones? Note that I am not asking about the internal cameras, which I am aware are off-limits. One use case is supporting multiple viewing angles, comparable to what we do with slide projectors. For example, draw on an iPad lying flat on your desk while wearing the Vision Pro in full passthrough mode, and simultaneously mirror the iPad’s screen onto multiple walls in real time at minimum latency (over a Thunderbolt connection), similar to how I can use QuickTime on macOS to mirror my iPad’s screen.
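For concreteness, this is the sort of AVFoundation discovery I have in mind; as I understand it, the .external device type works on iPadOS 17 and macOS, and whether it reports anything at all on visionOS is the question.

```swift
import AVFoundation

// What I'd like to do, assuming AVFoundation's external-device discovery behaves on
// visionOS as it does on iPadOS 17 / macOS; whether it does is the open question.
func findExternalCameras() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],          // USB/UVC cameras, capture cards, etc.
        mediaType: .video,
        position: .unspecified)
    return discovery.devices
}

func findExternalMicrophones() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.microphone],
        mediaType: .audio,
        position: .unspecified)
    return discovery.devices
}
```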
Post not yet marked as solved
4 Replies
1.8k Views
There is a project tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The set-up is non-trivial.
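In the meantime, here is roughly the skeleton I’ve pieced together from the article. The configuration and frame-loop calls reflect my best reading of the Compositor Services API, so treat it as a sketch rather than a verified sample; the actual render-pass encoding is omitted.

```swift
import SwiftUI
import Metal
import CompositorServices

// Configuration along the lines of the article's example.
struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
        configuration.depthFormat = .depth32Float
        configuration.isFoveationEnabled = capabilities.supportsFoveation
    }
}

struct FullyImmersiveMetalApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
                // The article spawns a dedicated render thread that owns the frame loop.
                let thread = Thread { renderLoop(layerRenderer) }
                thread.name = "Render Thread"
                thread.start()
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

// Minimal frame loop; a real renderer encodes its passes where noted below.
func renderLoop(_ layerRenderer: LayerRenderer) {
    guard let device = MTLCreateSystemDefaultDevice(),
          let commandQueue = device.makeCommandQueue() else { return }

    while layerRenderer.state != .invalidated {
        if layerRenderer.state == .paused {
            layerRenderer.waitUntilRunning()
            continue
        }
        guard let frame = layerRenderer.queryNextFrame() else { continue }

        frame.startUpdate()
        // Per-frame app and scene updates go here.
        frame.endUpdate()

        guard let timing = frame.predictTiming() else { continue }
        LayerRenderer.Clock().wait(until: timing.optimalInputTime)

        frame.startSubmission()
        guard let drawable = frame.queryDrawable(),
              let commandBuffer = commandQueue.makeCommandBuffer() else {
            frame.endSubmission()
            continue
        }
        // A real renderer would query a WorldTrackingProvider for a DeviceAnchor here,
        // assign it to drawable.deviceAnchor, and encode render passes into
        // drawable.colorTextures / drawable.depthTextures for each entry in drawable.views.
        drawable.encodePresent(commandBuffer: commandBuffer)
        commandBuffer.commit()
        frame.endSubmission()
    }
}
```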
Post not yet marked as solved
0 Replies
742 Views
In Xcode 15 beta 5, I’m noticing that the same breakpoint randomly seems to get duplicated multiple times. I have three targets in my project, and I wonder whether what I’m experiencing is a bug related to that. Similarly, I also see duplicates of the same symbol in the symbol navigator. I’ve attached a screenshot of several identical breakpoints (in this case placed in some Objective-C methods related to speech recognition). I haven’t seen this happen in Xcode 14, or at least not as often. Has anyone else experienced this and/or filed a bug report? I’ve tried deleting DerivedData and the usual tricks.
Post not yet marked as solved
1 Replies
551 Views
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet streaming use case. However, it’s impossible: as soon as ARKit is told to use one camera, the feed from the other camera freezes/doesn’t work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc A question for the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay? It’s possible to do this with plain camera initialization without ARKit (there’s an official example), but with ARKit it no longer works. It’s strange that I can’t access the front feed via one of the other frameworks, but I guess ARKit blocks that.
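To be concrete about the non-ARKit route that does work, here is a rough sketch along the lines of Apple’s multi-camera PiP sample; the open question is whether anything comparable can coexist with an ARSession doing world tracking. Delegate handling and preview layers are omitted.

```swift
import AVFoundation

// Sketch of simultaneous front + back capture without ARKit, roughly following
// Apple's multi-camera sample. Whether this can run alongside ARKit world tracking
// is the question; in practice ARKit appears to claim the camera hardware it needs.
func makeDualCameraSession() -> AVCaptureMultiCamSession? {
    guard AVCaptureMultiCamSession.isMultiCamSupported else { return nil }
    let session = AVCaptureMultiCamSession()
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    for position: AVCaptureDevice.Position in [.back, .front] {
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video,
                                                   position: position),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return nil }
        session.addInputWithNoConnections(input)

        let output = AVCaptureVideoDataOutput()
        guard session.canAddOutput(output) else { return nil }
        session.addOutputWithNoConnections(output)

        // Wire each camera to its own output explicitly.
        guard let port = input.ports(for: .video,
                                     sourceDeviceType: device.deviceType,
                                     sourceDevicePosition: position).first else { return nil }
        let connection = AVCaptureConnection(inputPorts: [port], output: output)
        guard session.canAddConnection(connection) else { return nil }
        session.addConnection(connection)
    }
    return session
}
```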
Post marked as solved
1 Replies
708 Views
I’m still a little unsure about the various spaces and capabilities. I’d like to make full use of hand tracking, joints and all. In the mode with passthrough and a single application present (not the Shared Space), is that available? (I’m pretty sure the answer is “yes,” but I’d like to confirm.) What is this mode called in the system: a mixed-immersion Full Space?
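For concreteness, this is the combination I have in mind: a single-app Full Space with the mixed immersion style, plus the hand-tracking data provider. The scene ID and joint lookup are just illustrative, and authorization prompts and error handling are omitted.

```swift
import SwiftUI
import RealityKit
import ARKit   // visionOS ARKit: ARKitSession, HandTrackingProvider

// Sketch of what I mean: passthrough with a single app present (.mixed immersion style
// in a Full Space) while receiving per-joint hand data.
struct HandTrackingApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MixedImmersive") {
            RealityView { _ in }
                .task {
                    let session = ARKitSession()
                    let hands = HandTrackingProvider()
                    do {
                        try await session.run([hands])
                        for await update in hands.anchorUpdates {
                            // update.anchor.handSkeleton?.joint(.indexFingerTip) gives joint transforms.
                            _ = update
                        }
                    } catch {
                        print("hand tracking unavailable: \(error)")
                    }
                }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}
```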
Post not yet marked as solved
0 Replies
645 Views
For my project, I would really benefit from continuous on-device speech recognition without the automatic timeout, or at least with a much longer one. In WebKit’s Web Speech implementation, it looks like there are some extra setters on SFSpeechRecognizer exposing exactly this functionality: https://github.com/WebKit/WebKit/blob/8b1a13b39bbaaf306c9d819c13b0811011be55f2/Source/WebCore/Modules/speech/cocoa/WebSpeechRecognizerTask.mm#L105 Is there a chance Apple could expose a programmable duration/timeout? If it’s available to Web Speech, why not to native applications?
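For reference, this is my current baseline: on-device recognition with partial results, fed from an AVAudioEngine tap. As far as I can tell there is no public knob for the automatic timeout, which is the point of the question. Authorization requests are omitted from the sketch.

```swift
import AVFoundation
import Speech

struct SpeechUnavailableError: Error {}

// Baseline continuous recognition. SFSpeechRecognizer.requestAuthorization and
// microphone permission handling are omitted; there is no public timeout setter here.
func startRecognition() throws -> (SFSpeechRecognitionTask, AVAudioEngine) {
    guard let recognizer = SFSpeechRecognizer(), recognizer.isAvailable else {
        throw SpeechUnavailableError()
    }
    let request = SFSpeechAudioBufferRecognitionRequest()
    request.shouldReportPartialResults = true
    request.requiresOnDeviceRecognition = true   // keep everything on device

    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)
    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    engine.prepare()
    try engine.start()

    let task = recognizer.recognitionTask(with: request) { result, error in
        if let result { print(result.bestTranscription.formattedString) }
        if error != nil || (result?.isFinal ?? false) {
            // This is where the session ends on its own after the built-in timeout.
            engine.stop()
            input.removeTap(onBus: 0)
        }
    }
    return (task, engine)
}
```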
Post not yet marked as solved
1 Replies
704 Views
OSLog’s structured logging is nice, but the output length is limited compared with stdio’s. Currently, it looks like if I expect long, variable-length printouts, I’m forced to revert to stdio. Or is this just an Xcode 15 beta 2 bug (discussed in the release notes), and fixed versions will match what stdio gives me? If not, could there be a way to configure OSLog to fall back to stdio dynamically based on whether the printout is too long? A custom fallback buffer allocator? Alternatively, what if I could still get the structured logging with the metadata, and use stdio for the rest of the message that doesn’t fit? That would be a nice way to guarantee the structured logging info without dropping the entire message.
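For what it’s worth, the caller-side version of the fallback I’m imagining is trivial; what it loses is the structured metadata on exactly the long messages, which is why something built-in would be nicer. The length threshold below is an arbitrary guess rather than a documented limit, and the subsystem/category names are placeholders.

```swift
import OSLog

let logger = Logger(subsystem: "com.example.myapp", category: "general")

// Caller-side sketch of the dynamic fallback described above. The 1024-character cutoff
// is a guess at the truncation point, not a documented limit.
func logLong(_ message: String) {
    if message.count <= 1024 {
        logger.info("\(message, privacy: .public)")
    } else {
        // Long payloads bypass OSLog and lose its metadata, which is the trade-off in question.
        print(message)
    }
}
```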
Post not yet marked as solved
1 Replies
783 Views
For the MaterialX shader graph, the given example hard-codes two textures for blending at runtime (https://developer.apple.com/documentation/visionos/designing-realitykit-content-with-reality-composer-pro#Build-materials-in-Shader-Graph). Can I instead generate textures at runtime and set them as dynamic inputs to the material, or must all textures be known when the material is created? If procedural texture-setting is possible, how is it done, given that the example shows a material with hard-coded textures? EDIT: It looks like the answer is “yes,” since setParameter accepts a textureResource value: https://developer.apple.com/documentation/realitykit/materialparameters/value/textureresource(_:)?changes=l_7 However, how do you turn an MTLTexture into a TextureResource?
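To make the EDIT concrete, this is the pattern I believe works. The scene name, material path, and input name are placeholders for whatever the Reality Composer Pro project actually defines, and the remaining question is how to produce the TextureResource from an MTLTexture in the first place.

```swift
import RealityKit
import RealityKitContent   // the Reality Composer Pro package's generated module; name varies by project

// Sketch of setting a texture input at runtime. "Scene.usda", "/Root/MyMaterial" and
// "DiffuseTexture" are placeholders for whatever the shader graph actually exposes.
func applyRuntimeTexture(to entity: ModelEntity, texture: TextureResource) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    try material.setParameter(name: "DiffuseTexture", value: .textureResource(texture))
    entity.model?.materials = [material]
}
```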
Post not yet marked as solved
1 Replies
728 Views
Related to “what you can do in visionOS”: what are all of these camera-related APIs for? (They are not yet described in the documentation.) https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/colortextures https://developer.apple.com/documentation/realitykit/realityrenderer/cameraoutput/relativeviewport What are the intended use cases? Is this the equivalent of render-to-texture? I also see some interop with raw Metal happening here.