
visionOS: How to tap annotations on a map?
I am trying to create a Map with markers that can be tapped. It should also be possible to create a marker by tapping a location on the map. Adding a tap gesture to the map works. However, if I place an image as an annotation (marker) and add a tap gesture to it, the tap is not recognized; instead, the tap gesture of the underlying map fires. How can I a) react to annotation/marker taps, and b) prevent the underlying map from receiving the tap as well (i.e., how can I prevent event bubbling)?
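For reference, here is a stripped-down sketch of the approach I am experimenting with (assuming the iOS 17 / visionOS 1.0 MapKit-for-SwiftUI APIs; the view name and marker handling are placeholders). The idea is that a `highPriorityGesture` on the annotation content should win over the map's own tap handler:

```swift
import SwiftUI
import MapKit

struct MarkerMapView: View {
    @State private var markers: [CLLocationCoordinate2D] = []

    var body: some View {
        MapReader { proxy in
            Map {
                ForEach(Array(markers.enumerated()), id: \.offset) { index, coordinate in
                    Annotation("Marker \(index)", coordinate: coordinate) {
                        Image(systemName: "mappin.circle.fill")
                            .font(.largeTitle)
                            // highPriorityGesture should take precedence
                            // over the map's own tap handler below.
                            .highPriorityGesture(TapGesture().onEnded {
                                print("Tapped marker \(index)")
                            })
                    }
                }
            }
            // Taps that reach the map itself create a new marker.
            .onTapGesture { screenPoint in
                if let coordinate = proxy.convert(screenPoint, from: .local) {
                    markers.append(coordinate)
                }
            }
        }
    }
}
```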
Replies: 0 · Boosts: 0 · Views: 312 · Jan ’24
US prescription requirement for preordering Vision Pro?!
We're an AR startup in the US, but our founders live in Europe. We definitely want to order the Vision Pro once it becomes available in the States, but I just saw in an email that Apple requires a US prescription if you wear glasses. This is a bummer for us. We can forward the Vision Pro to Europe, but we won't be able to travel to the States just to get such a prescription. Why can't Apple just accept any prescription from an optician?!
Replies: 1 · Boosts: 0 · Views: 296 · Jan ’24
Lots of "garbage" in the Xcode logs, like "Decoding completed without errors"
Hi, when I run an app in the visionOS simulator, I get tons of "garbage" messages in the Xcode logs; please find some samples below. Because of these messages, I can hardly see the logs that are actually relevant. Is there any way to get rid of them?

```
[0x109015000] Decoding completed without errors
[0x1028c0000] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 11496
[0x1028c0000] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1021f3200] Releasing session
[0x1031dfe00] Options: 1x-1 [FFFFFFFF,FFFFFFFF] 00054060
[0x1058eae00] Releasing session
[0x10609c200] Decoding: C0 0x01000100 0x00003048 0x22111100 0x00000000 10901
[0x1058bde00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20910
[0x1028d5200] Releasing session
[0x1060b3600] Releasing session
[0x10881f400] Decoding completed without errors
[0x1058e2e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 9124
[0x1028d1e00] Decoding: C0 0x01000100 0x0000304A 0x22111100 0x00000000 20778
[0x1031dfe00] Decoding completed without errors
[0x1031fe000] Decoding completed without errors
[0x1058e2e00] Options: 256x256 [FFFFFFFF,FFFFFFFF] 00025060
```
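As a stopgap, I am routing my own messages through a dedicated OSLog subsystem so I can filter the Xcode 15 console on it (the subsystem string below is just a placeholder), but I would prefer to silence the noise at the source:

```swift
import OSLog

// Placeholder subsystem; filtering the Xcode console with
// "subsystem:com.example.myapp" hides unrelated system output.
let logger = Logger(subsystem: "com.example.myapp", category: "general")

logger.info("Immersive scene loaded")
```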
Replies: 0 · Boosts: 1 · Views: 312 · Jan ’24
No object detection on visionOS?
I recently had a chat with a company in the manufacturing business. They were asking whether the Vision Pro could be used to guide maintenance workers through maintenance processes, a use case that is already established on other platforms. I thought the Vision Pro would be perfect for this as well, until I read in this article from Apple that object detection is not supported: https://developer.apple.com/documentation/visionos/bringing-your-arkit-app-to-visionos#Update-your-interface-to-support-visionOS To me, this sounds like sacrificing a lot of potential for business scenarios just for the sake of data privacy. Is this really the case, i.e. is there no way to detect real-world objects and place content on top of them? Image recognition would not be enough in this use case.
Replies: 3 · Boosts: 0 · Views: 1.2k · Nov ’23
Taking photos or shooting videos on device: Possible?
In the WWDC23 sessions it was mentioned that the device won't support taking photos or recording videos through the cameras, which I think is a huge limitation. However, in another forum I read that it actually works using AVFoundation. So I went back into the docs, and they say it is not possible. Hence, I am pretty confused. Has anyone tried this out yet and can confirm whether camera access is blocked completely or not? For our app, it would be a bummer if it were.
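For what it's worth, here is the minimal probe I would run (standard AVFoundation discovery API; whether the simulator reflects device behavior here is an assumption on my part). If camera access is blocked completely, I would expect the device list to come back empty:

```swift
import AVFoundation

// Ask AVFoundation which video capture devices the app can see.
// An empty list would match the "no camera access" statement in the docs.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera],
    mediaType: .video,
    position: .unspecified
)
print("Devices visible to the app: \(discovery.devices.map(\.localizedName))")
```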
Replies: 1 · Boosts: 0 · Views: 308 · Nov ’23
Place content in arbitrary locations, not bound to walls, desks, etc.
Our iOS app relies heavily on the ability to place objects in arbitrary locations, and we would like to know if this is possible on visionOS as well. It should work like this: The user faces in a certain direction. We place an object approx. 5 m in front of the user. The object is then pinned to this position in mid-air and won't move any more; it should not be anchored to a real-world item like a wall, the floor, or a desk. Placing the object should even work if the user looks down while placing it: the object should then appear 5 m in front of them once they look up.

On iOS, we implemented this using Unity and AR Foundation. For visionOS, we haven't decided yet whether to go native instead. So if this is only possible with native code, that's also fine.
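In case we go native, my current understanding is an untested sketch along these lines (assuming RealityKit's `AnchorEntity(world:)` behaves on visionOS as it does on iOS, and that the code runs in an ImmersiveSpace whose origin sits roughly at the user's feet):

```swift
import SwiftUI
import RealityKit

struct PinnedObjectView: View {
    var body: some View {
        RealityView { content in
            // Pin an entity at a fixed world-space position: roughly eye
            // height (y = 1.5 m) and 5 m in front of the space's origin.
            // It is not attached to a wall, the floor, or a desk.
            let anchor = AnchorEntity(world: [0, 1.5, -5])
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.25),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            anchor.addChild(sphere)
            content.add(anchor)
        }
    }
}
```

To place relative to where the user is actually looking (rather than the fixed origin), I assume we would additionally need the device pose, e.g. via ARKit's WorldTrackingProvider, but I haven't verified that part.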
Replies: 1 · Boosts: 0 · Views: 333 · Nov ’23
visionOS simulator broken on Intel MacBook since upgrade to Sonoma
On my MacBook Pro (2019) with an Intel processor, I could run apps in the visionOS simulator without any problems while I was on macOS Ventura. But since I upgraded the Mac to Sonoma, the visionOS simulator seems to be broken: the display in Xcode sticks at "Loading visionOS 1.0", and the simulator page under "Devices and Simulators" says "No runtime". This is independent of which Xcode version I use; I started with Xcode 15 beta 2, but also tried more recent versions. Could it be that developing for visionOS on Intel Macs was dropped with macOS Sonoma without any notice? I can see that the Xcode 15.1 specs state you need an Apple Silicon Mac, but the Xcode 15 specs don't, and it worked for me, at least on Ventura. The "only" change I made since then was upgrading the OS to Sonoma.
Replies: 3 · Boosts: 2 · Views: 1.6k · Oct ’23