A teacher from the visionOS team is needed.

As a 3D AR developer who has spent the past couple of years struggling to piece together a decent interface and quality 3D content in an AR format, I've really been looking forward to Apple's glasses or AR goggles.

There has been, and I can't imagine I'm alone in this feeling, a complete lack of help from Apple in getting people started in a meaningful way.

The WWDC content, while offering perfect snippets to explain new classes and functions, gives no help whatsoever in giving those snippets meaning, in letting us know what to do with them. The sample projects are great, but a few pages of written explanation walking us through the maze of functions, and what calls what, would turn those sample projects into gold instead of semi-useless eye candy. It's pure programmer hubris to toss us the box of puzzle pieces and expect miracles. I have world-changing app ideas that would make people scramble to buy the Vision Pro, but I keep hitting the usual wall.

There have been a few rays of light in the Apple AR world, though certainly not from Apple. Ryan Kopinsky, with his Reality School YouTube channel, is number one. A handful of others create helpful tutorials as well.

What if Apple put a pittance of its might into creating a comprehensive AR course? Hire a few Ryan Kopinskys, and put together a course that strings together all those gold nuggets on display at WWDC and puts them to actual use.

Now is the time, the exact time, to do this right. You have a world-changing product here that could languish in the relative doldrums of simple 3D windows. We need actual help so we can help you.

Answered by Saijanai in 756990022


Accepted Answer

There has been, and I can't imagine I'm alone in this feeling, a complete lack of help from Apple in getting people started in a meaningful way.

Look at my rants about the inability to stream 2D video out. There's been no official response from Apple, but numerous people have pointed to existing WWDC info and videos that suggest Apple doesn't WANT people to be able to share the process of creating their work, only the outcome.

If a programmer could record their programming process via Apple Vision Pro, or even live-stream it, that would go a long way toward making things more accessible to other programmers, but the only blessed way to let people look over your shoulder is the extremely limited SharePlay process. You can't demo how to program and record it, or stream it for the entire world to see, only for the limited audience of people who already own an AVP.
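For context on how limited that path is: even showing your work to other AVP owners means adopting the GroupActivities framework and starting a SharePlay session over FaceTime. A minimal sketch of that route (the activity name and identifier below are hypothetical) looks roughly like this:

```swift
import GroupActivities

// Hypothetical activity for sharing a coding walkthrough over SharePlay.
// Everyone who joins has to be in the FaceTime call on a supported device --
// there is no path from here to a flat 2D stream for the open web.
struct CodeWalkthroughActivity: GroupActivity {
    static let activityIdentifier = "com.example.code-walkthrough"

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Live coding walkthrough"
        meta.type = .generic
        return meta
    }
}

// Offer the activity to the current FaceTime call; participation stays
// limited to the people already in that call.
func startWalkthrough() async {
    let activity = CodeWalkthroughActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    case .activationDisabled, .cancelled:
        break
    @unknown default:
        break
    }
}
```

That's the whole blessed surface: a session among headset owners, not a recording or broadcast anyone else can watch.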

My impression is that they can't even do it internally.

Listen to the two VPs from Apple (the guy on the right is the one who started the visionOS/AVP projects).

It doesn't sound like they had an easy time producing the keynote video, and if the AVP could do what we were talking about, even only via a private API, it would have been trivial: use streams from two AVPs, one showing the woman sitting and the other showing what she is looking at, add a little bridge animation about putting the AVP on, and you're done. Instead, they make it sound like it was a difficult task:

https://youtu.be/DgLrBSQ6x7E?t=4163

Greg Joswiak: Right, look, because one of the challenges we had in making the [keynote] video is the fact that we have to take this incredible spatial experience and try to translate it onto a 2D screen. But all the UI you see, all the stuff that we were showing coming out of the device, was rendered on device. And it's out there, even in the third-person view, that's composited onto a scene. So this isn't like us having graphic artists with an M2 Ultra coming up with all this stuff; this is all coming off of...

Mike Rockwell: It's all rendered real time.

Greg Joswiak: Yeah. Realtime and then that's how we showed it in the film and that's important. That was not fake.


Of course, maybe this is a hint of a future feature that they'll unlock for the release, or for visionOS x.x, but this is THE killer app that enables all other killer apps, IMHO, and it should have been the first thing they showed.
