This solved the problem for me as well, thanks! Any potential consequences of this change to a new build system?
After watching the session, I believe the device has only Wi-Fi-based location access, but can improve this to precise location data when an iPhone is connected. However, I am wondering how this deals with heading, i.e. whether it also takes the compass into account. For location-based AR, this is crucial. Deriving compass data from the phone would be tricky, because you would also have to take the relative position between the Vision Pro and the phone into account. In real time.
You say: "Apps will not have access to cameras.", but the cited text says "many of these cameras are not available for app use." This leaves room for at least one camera to be usable. So my question is: Will it be possible to capture a photo or a video using the Vision Pro? I would mainly need this for capturing the real world, without any AR content.
How exactly would you re-instantiate the session in the relaunch case? Would you persist the session ID on disk so you can retrieve it then?
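For context, here is roughly the pattern I have in mind. This is just a sketch under my own assumptions; the identifier string and the `UploadDelegate` class are placeholders of mine, not anything from the docs:

```swift
import Foundation

// Assumption: a fixed, app-chosen identifier. My understanding is that the
// system uses it to reconnect a relaunched app to uploads that continued
// in the background.
let backgroundSessionID = "com.example.uploads" // hypothetical value

final class UploadDelegate: NSObject, URLSessionDataDelegate {
    // Called once per task when it finishes, successfully or not.
    func urlSession(_ session: URLSession, task: URLSessionTask,
                    didCompleteWithError error: Error?) {
        // Inspect `error` here and decide whether to retry the upload.
    }

    // Called after all queued events for a background session have been
    // delivered, e.g. when the app was relaunched to handle them.
    func urlSessionDidFinishEvents(forBackgroundURLSession session: URLSession) {
        // Here one would call the completion handler saved from
        // application(_:handleEventsForBackgroundURLSession:completionHandler:).
    }
}

// On every launch, recreate the session with the SAME identifier so the
// system can hand back any tasks that were still in flight.
let config = URLSessionConfiguration.background(withIdentifier: backgroundSessionID)
let session = URLSession(configuration: config,
                         delegate: UploadDelegate(),
                         delegateQueue: nil)
```

Is that roughly the intended pattern, or do you also need to persist per-task state yourself?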
If an upload breaks, what exactly will happen then? Will you receive an error to -URLSession:task:didCompleteWithError: or will the system just retry the full upload until it works?
I am doing the very same right now (replacing the S3 TransferUtility with my own implementation, since they buried it under the whole Amplify stuff). I am pretty far with my implementation, but I am wondering how to deal with app suspension and relaunches. Since this is not really documented anywhere: can you please share more details on this, especially what happens if uploads fail in these cases? Would I have to retry everything myself, or does the system handle that?
@Hamletlere I am trying to use the simulator on an Intel MacBook Pro 2019 as well, but since upgrading the MacBook to macOS Sonoma, it won't work anymore. Can you confirm it still works for you on the Sonoma release version?
@HeshamAtApple Is this still true for macOS Sonoma? I tried to run the visionOS simulator after upgrading my Intel Mac to the latest OS version, but it did not work anymore. I tried with Xcode 15 betas 2, 5, and 8. Xcode 15.1 states that visionOS requires an Apple Silicon Mac, but Xcode 15 doesn't state this limitation. Still, the simulator won't run there.
@CYannickNovelab Were you able to resolve this? I wasn't able to start the visionOS simulator either since I upgraded my (Intel) Mac to Sonoma.
Thanks, Quinn. The fun thing is: The visionOS simulator DID work on my Intel Mac, but not in combination with Unity. AND, it stopped working after I upgraded to macOS Sonoma, even when I used the very same configuration as on Ventura. I just wanted to try at least taking the "native route" (i.e. without Unity), but now I am blocked again. This is pretty disappointing. And I don't even know if Sonoma actually is the problem or not :/
Yes, it does not work in newer versions of Xcode. But the simulator in Xcode 15 beta 2 works just fine. I installed macOS Ventura on a separate drive again and was able to launch it there. Seems as if Apple has disabled it on macOS Sonoma. Not nice.
Just confirmed that it is a Sonoma thing after installing Ventura on a separate drive. Why didn't Apple even mention this limitation?! I wouldn't have upgraded my MacBook then.
I just learned that this technique only works if the canvas is open. Otherwise, the context menu items are missing. Why is this so? And is there a way to do this using keyboard shortcuts? I hate programming with the mouse.
Yeah, those error messages really suck.
Any updates on this? I have to restart the simulator quite often to make the models appear right. Sometimes, they are dark "forever".