visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

1,173 Posts
Post marked as solved
3 Replies
111 Views
I am developing a mixed immersive native app for Vision Pro. In my RealityView, I add my scene with content.add(mainGameScene). Normally the anchored origin should be at the device's position, projected down onto the ground (y == 0 at floor level); at least that is how I understand RealityViewContent to work. So if I place something at (0, 0, -1.0), the object should appear in front of me on the floor (the z axis points backwards). However, I recently loaded a different scene and added it with the same code, content.add(mainGameScene), and something has changed: my scene is anchored at random on the floor or the ceiling, depending on where I stand or sit. When I turn on anchor visualizations, I can see that the anchor point being used is on the ceiling, while the correct one (near my feet) sits unused. How can I switch to the correct anchor position? Or is there a setting that changes the default RealityViewContent behavior?
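One way around this, sketched minimally below and assuming a RealityKit immersive view: pin the scene to a detected floor plane with an AnchorEntity instead of relying on the RealityViewContent origin. GameImmersiveView is a placeholder name and mainGameScene stands in for the poster's scene; the minimum plane bounds are arbitrary.

import SwiftUI
import RealityKit

struct GameImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Stand-in for the poster's scene entity.
            let mainGameScene = Entity()

            // Anchor to a detected horizontal floor plane rather than the
            // RealityViewContent origin, so the scene cannot end up on the ceiling.
            let floorAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .floor,
                                                  minimumBounds: [0.5, 0.5]))

            // One metre in front of the anchor origin, resting on the floor.
            mainGameScene.position = [0, 0, -1.0]
            floorAnchor.addChild(mainGameScene)

            content.add(floorAnchor)
        }
    }
}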
Posted by milanowth. Last updated.
Post not yet marked as solved
0 Replies
41 Views
In my app I play HLS streams via AVPlayer, and it works well. However, when I try to download those same HLS URLs via makeAssetDownloadTask, I regularly come across this error:

Download error for identifier 21222: Error Domain=CoreMediaErrorDomain Code=-12938 "HTTP 404: File Not Found" UserInfo={NSDescription=HTTP 404: File Not Found, _NSURLErrorRelatedURLSessionTaskErrorKey=( "BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>" ), _NSURLErrorFailingURLSessionTaskErrorKey=BackgroundAVAssetDownloadTask <CE9B10ED-E749-49FF-9942-3F8728210B20>.<1>}

I have a feeling that AVPlayer has a way to resolve this that makeAssetDownloadTask lacks. Has anyone come across this or have any insight? Thank you! For reference, this is with Xcode 15.3 (15E204a), developing for visionOS 1.0.1.
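For comparison, here is a minimal sketch of the download setup, assuming the standard AVAssetDownloadURLSession API; the session identifier, the class name HLSDownloader, and the asset title are placeholders, not taken from the post.

import AVFoundation

final class HLSDownloader: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!

    override init() {
        super.init()
        let config = URLSessionConfiguration.background(withIdentifier: "hls-downloads")
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
    }

    func download(masterPlaylist url: URL) {
        let asset = AVURLAsset(url: url)
        // Passing nil options lets AVFoundation choose variants. One guess at the 404:
        // the downloader may pick a variant or rendition URL the server no longer serves,
        // which a live AVPlayer would simply skip over.
        let task = session.makeAssetDownloadTask(asset: asset,
                                                 assetTitle: "Stream",
                                                 assetArtworkData: nil,
                                                 options: nil)
        task?.resume()
    }

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist `location` (ideally as a bookmark) for offline playback later.
        print("Downloaded to \(location)")
    }
}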
Posted. Last updated.
Post not yet marked as solved
0 Replies
80 Views
Please treat me as a beginner with Unity. I want to learn to develop a visionOS VR app through Unity, and I'm trying to find a relatively complete route to start learning, but Unity's official website does not have much material on visionOS VR apps. I would appreciate a complete learning route. Thank you!
Posted by lijiaxu. Last updated.
Post not yet marked as solved
1 Reply
46 Views
import Combine
import RealityKit

extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}

Problem: the failure branch is hit:

case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Posted by big_white. Last updated.
Post not yet marked as solved
0 Replies
85 Views
I think it's essential to make eye tracking data available to apps in VR mode (with the user's permission). The biggest problem I've observed is that Unity can't implement dynamic foveated rendering without eye tracking data; without it, only fixed foveated rendering is possible. That still gives a performance boost, but it also means the image gets blurry if the user looks to the side without turning their head. I understand why it's a privacy issue to let apps track where the user is looking in the real world, but video passthrough is disabled in VR, so it should be OK to enable eye tracking in VR (with the user's permission). Unity already supports dynamic foveated rendering (with eye tracking) on other VR headsets, and Vision Pro has the best eye tracking, so Vision Pro should definitely have the best dynamic foveated rendering in VR.
Posted by Hawaiianz. Last updated.
Post not yet marked as solved
1 Reply
79 Views
import Combine
import RealityKit
import SwiftUI

extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }

    func updateRotation(for media: WRMedia) {
        let angle = Angle.degrees(0.0)
        let rotation = simd_quatf(angle: Float(angle.radians), axis: SIMD3<Float>(0, 0.0, 0))
        self.transform.rotation = rotation
    }

    struct WRSubscribeComponent: Component {
        var subscription: AnyCancellable
    }
}

The failure branch is hit:

case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Posted by big_white. Last updated.
Post not yet marked as solved
1 Reply
110 Views
When I use MFMailComposeViewController in visionOS, there is no Cancel button on the compose controller; the button at the bottom closes the app. Is anyone else experiencing this?

if ([MFMailComposeViewController canSendMail]) {
    MFMailComposeViewController *controller = [[MFMailComposeViewController alloc] init];
    controller.mailComposeDelegate = (id<MFMailComposeViewControllerDelegate>)view;
    [controller setToRecipients:toAddresses];
    [controller setSubject:subject];
    [controller setMessageBody:body isHTML:isHtml];
    [view presentViewController:controller animated:YES completion:nil];
}
Posted. Last updated.
Post not yet marked as solved
4 Replies
629 Views
I am an Apple Developer Program member signed in to my developer Apple ID on my Vision Pro, and I'm unable to access Beta Updates in Settings > General > Software Update; the option doesn't even show up. I've tried restarting a few times and signing out and back in on my Vision Pro. I've been able to successfully deploy builds from Xcode to my Vision Pro, and I'm able to access Beta Updates from my other Apple devices on the same Apple ID. I've also noticed that my Apple ID avatar isn't syncing: it shows the default initials in visionOS Settings, and updating it there does not seem to sync across devices. Does anyone have any ideas how I might fix this?
Posted by n8chur. Last updated.
Post not yet marked as solved
3 Replies
294 Views
I am running visionOS 1.0.3. On the Software Update page, there is no option to install a beta, and I don't see any setting that would enable or disable this. Does anyone have any suggestions?
Posted by JustMark. Last updated.
Post not yet marked as solved
0 Replies
61 Views
I'm currently developing an application where the models presented inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m). Because of this, as models move through the clipping boundaries, the interiors of the models become visible. If possible, I'd like to cap these interiors with a solid fill to make them more visually appealing. However, as far as I can tell, I'm quite limited in how I might achieve this with RealityKit on visionOS.

Some approaches I've seen for similar effects render the model geometry in multiple passes into a stencil buffer and use that to decide whether a cap should be drawn. However, AFAICT, once I have opted into RealityView and RealityKit, I don't have enough control over the render pipeline to render ModelEntities and then run additional passes over the contained entities into a stencil buffer that I could feed to a separate set of "capping planes" (how I currently imagine achieving this effect).

Alternatively (given the nature of my models), I considered using a height map to approximate a surface cap, but it seems similarly difficult to build a height map of rendered entities with a shader in the visionOS RealityView pipeline. It is not obvious how I could use a ShaderGraphMaterial to render into an arbitrary image buffer that I could then pass to other functions as an input; ShaderGraphMaterial seems built around the idea that image inputs and outputs are either literal files or the actual rendered buffer.

Has anyone already created an effect like this who might have some advice? Or can anyone correct any misunderstandings I have about accessing the Metal pipeline from RealityView, or about using ShaderGraphMaterial to construct a height map?
Posted by netshade. Last updated.
Post marked as solved
7 Replies
1.2k Views
Hello. When an iOS app runs on Vision Pro in compatibility mode, is there a flag such as isiOSAppOnVision to determine the underlying OS at runtime, just like ProcessInfo.isiOSAppOnMac? It would be useful for optimizing the app for visionOS. Already checked but not useful: #if os(xrOS) does not work in compatibility mode since no code is recompiled, and UIDevice.userInterfaceIdiom returns .pad instead of .reality. Thanks.
Posted by Gong. Last updated.
Post not yet marked as solved
0 Replies
68 Views
Hello, I have an app that uses WorldAnchorProvider, basically something similar to the object placement example. I'd like to show the user a specific UI when no anchors were loaded. However, no matter where I move within my house, they always load. So I'm wondering: how far do I need to go before the device can no longer load my placed world anchors? Thanks.
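On the app side, one way to detect the empty state is to wait briefly for the first relocalized anchor and fall back to a placeholder UI otherwise. A minimal sketch, assuming visionOS ARKit world tracking inside an immersive space; showNoAnchorsUI and the five-second window are placeholders, not from the post.

import ARKit
import Foundation

func checkForPersistedAnchors(showNoAnchorsUI: @escaping () -> Void) async throws {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()
    try await session.run([worldTracking])

    // Race the first relocalized anchor against a short timeout.
    let relocalized = await withTaskGroup(of: Bool.self) { group -> Bool in
        group.addTask {
            for await update in worldTracking.anchorUpdates where update.event == .added {
                return true
            }
            return false
        }
        group.addTask {
            try? await Task.sleep(for: .seconds(5))
            return false
        }
        let first = await group.next() ?? false
        group.cancelAll()
        return first
    }

    if !relocalized {
        showNoAnchorsUI()
    }
}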
Posted by elmotron. Last updated.
Post not yet marked as solved
0 Replies
77 Views
For example, can I place items in VR in my living room, then walk into my bedroom and no longer see them because they are hidden behind a wall? Could I place something inside a cupboard?
Posted by jinrui. Last updated.
Post not yet marked as solved
0 Replies
59 Views
Hi, I'm an indie developer trying to make a 2D prototype of a simple game where I have to drag and drop items from one box to another. So far I have implemented a working prototype with the .draggable modifier (https://developer.apple.com/documentation/swiftui/view/draggable(_:)), which works well in the simulator, but as soon as I use my Vision Pro, the finger pinch action doesn't register half the time; I can only select the object around 30% of the time. My code is as follows:

DiskView(size: diskSize, rod: rodIndex)
    .draggable(DiskView(size: diskSize, rod: rodIndex))
    .hoverEffect()

I have also registered DiskView as a UTType and made it transferable. The business logic works; only the pinch gesture fails half the time.
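One thing that sometimes helps on device is giving the gaze-and-pinch system a larger hit target than the visible view, since eye tracking is far less precise than the simulator's cursor. A minimal sketch, assuming the DiskView, diskSize, and rodIndex from the post; the padding and corner radius values are arbitrary.

DiskView(size: diskSize, rod: rodIndex)
    .padding(12)                                      // extra margin around the disk
    .contentShape(.interaction, .rect)                // hit-test the whole padded area
    .contentShape(.hoverEffect, .rect(cornerRadius: 16))
    .hoverEffect()                                    // visual feedback on the gaze target
    .draggable(DiskView(size: diskSize, rod: rodIndex))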
Posted. Last updated.
Post not yet marked as solved
1 Reply
103 Views
I am trying to add Sign in with Apple, but when I attempt to add the capability to my app, nothing happens in the capability list. Is this feature not yet available on visionOS, is there a bug, or am I missing something?
Posted. Last updated.
Post not yet marked as solved
5 Replies
1.6k Views
Hi community, I have a pair of stereo images, one for each eye. How should I render them on visionOS? I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs about 3D stereo images. I guess my question can be put more generally: is there any method to render different content for each eye? This could also be helpful to someone who only has sight in one eye.
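One route people use for per-eye content is a Reality Composer Pro shader graph with a Camera Index Switch node, which selects between two texture inputs depending on which eye is being rendered. The sketch below assumes such a material exists at "/Root/StereoMaterial" in a Scene.usda inside a RealityKitContent package, with "LeftImage" and "RightImage" texture parameters; all of those names are assumptions, not confirmed API.

import RealityKit
import RealityKitContent  // assumed Reality Composer Pro package

func makeStereoImageEntity(left: TextureResource, right: TextureResource) async throws -> ModelEntity {
    // Load the hand-authored stereo material and feed it one texture per eye.
    var material = try await ShaderGraphMaterial(named: "/Root/StereoMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    try material.setParameter(name: "LeftImage", value: .textureResource(left))
    try material.setParameter(name: "RightImage", value: .textureResource(right))

    // A simple quad to display the image pair; the size is arbitrary.
    let mesh = MeshResource.generatePlane(width: 1.0, height: 0.5)
    return ModelEntity(mesh: mesh, materials: [material])
}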
Posted. Last updated.
Post not yet marked as solved
0 Replies
96 Views
Hi, does anyone know how to capture audio input on visionOS? I tried the sample code from the official documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession), but it didn't work.
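A minimal microphone-capture sketch via AVAudioEngine, which is the usual route for audio-only input; it assumes an NSMicrophoneUsageDescription entry in Info.plist and that the user has granted microphone access. The class name MicCapture and the buffer size are arbitrary.

import AVFoundation

final class MicCapture {
    private let engine = AVAudioEngine()

    func start() throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Deliver raw microphone buffers to the app.
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, time in
            // Process `buffer` here (level metering, recording, speech, etc.).
            print("Got \(buffer.frameLength) frames at sample time \(time.sampleTime)")
        }

        try engine.start()
    }

    func stop() {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()
    }
}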
Posted by Kuoxen. Last updated.
Post not yet marked as solved
1 Reply
159 Views
Hello. I'm creating a fully immersive application and I need to use hand tracking. I've included the corresponding key (NSHandsTrackingUsageDescription) in the Info.plist, and everything works correctly: the application displays the corresponding permission prompt at launch. But here's the question: if I make a mistake and tap Don't Allow, the permission prompt won't appear again and the application stops working. How can I request permission again? Thanks.
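Once the user has denied access, the system generally won't show the prompt again, so the usual pattern is to detect the denial and direct the user to Settings. A minimal sketch, assuming visionOS ARKit; the function name and the flow around it are placeholders.

import ARKit
import UIKit

@MainActor
func checkHandTrackingAccess() async {
    let session = ARKitSession()
    let status = await session.queryAuthorization(for: [.handTracking])

    switch status[.handTracking] {
    case .allowed?:
        break // start the HandTrackingProvider as usual
    case .denied?:
        // The prompt will not reappear; offer a way into the app's Settings page instead.
        if let url = URL(string: UIApplication.openSettingsURLString) {
            _ = await UIApplication.shared.open(url)
        }
    default:
        // Not yet determined: requesting authorization shows the system prompt once.
        _ = await session.requestAuthorization(for: [.handTracking])
    }
}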
Posted. Last updated.