I just went through this. At first, multi-platform seemed the “right” way to go. But multi-platform means your SwiftUI code is heavily peppered with #if os(iOS), since so many of the view modifiers for navigation do not exist on the Mac. You are really working in AppKit and UIKit in multi-platform. For my use case it was WAY more work than Catalyst. The only reason I see for multi-platform is if you want to look like a Mac app and basically have two different UIs, or if you need something not available in Catalyst.
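To give a flavor of what I mean, here is a rough sketch (my own illustration, not from a real project) of the kind of conditional code that piles up:

```swift
import SwiftUI

// Illustrative only: navigation modifiers like this one are iOS-only,
// so a shared multi-platform view ends up branching per OS.
struct ItemListView: View {
    var body: some View {
        List { Text("Item") }
        #if os(iOS)
            // UIKit-backed; does not exist in the macOS SDK.
            .navigationBarTitleDisplayMode(.inline)
        #else
            // On the Mac you reach for AppKit-flavored alternatives instead.
            .frame(minWidth: 300)
        #endif
    }
}
```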
In SwiftUI, sheet visibility is controlled by a binding. You can certainly set the binding value in response to a button or other inputs (network task, timer, etc.).
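Something like this minimal sketch (names are mine):

```swift
import SwiftUI

// The sheet's visibility is driven entirely by a Bool binding; anything
// that flips the Bool (button, timer, network task) presents the sheet.
struct ContentView: View {
    @State private var showSheet = false

    var body: some View {
        Button("Show sheet") { showSheet = true }   // user-input path
            .task {
                // ...or programmatic, e.g. after async work finishes
                try? await Task.sleep(nanoseconds: 2_000_000_000)
                showSheet = true
            }
            .sheet(isPresented: $showSheet) {
                Text("Presented")
            }
    }
}
```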
Per the docs, lookupInfo is the email or phone number used to look up the identity. Since this is using the CKRecord.ID to do the lookup, these are blank.
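For comparison, when you do supply an email, the value round-trips. A sketch (untested, address is a placeholder):

```swift
import CloudKit

// Looking up an identity by email: the lookupInfo handed back to the
// block is the same email we supplied. Starting from a CKRecord.ID
// instead, there is no email/phone to echo, so those fields are blank.
let info = CKUserIdentity.LookupInfo(emailAddress: "user@example.com")
let op = CKDiscoverUserIdentitiesOperation(userIdentityLookupInfos: [info])
op.userIdentityDiscoveredBlock = { identity, lookupInfo in
    print(identity.nameComponents ?? "unknown", lookupInfo)
}
CKContainer.default().add(op)
```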
In StoreKit 2 you can attach an app account identifier to the transaction so you know which account in your system the transaction is for. If you generate a unique ID for each purchase and map that to your account, you can ensure two copies of the same transaction are detected as such.
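Roughly like this (a sketch; the UUID-to-account mapping on your server is assumed):

```swift
import StoreKit

// Attach a per-purchase UUID to the transaction. Your server maps the
// UUID to an account up front, so any copy of this transaction it sees
// later can be traced back to that account (and deduplicated).
func buy(_ product: Product, accountToken: UUID) async throws {
    let result = try await product.purchase(options: [.appAccountToken(accountToken)])
    if case .success(let verification) = result,
       case .verified(let transaction) = verification {
        print(transaction.appAccountToken as Any)  // same UUID, round-tripped
        await transaction.finish()
    }
}
```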
You can have a WindowGroup as your primary UI and then explicitly open the DocumentGroup to open a new or existing document. I have not found a way to create a document and save it to disk without the Apple UI, unless you use an exporter UI. In either case Apple will control the file dialog, so you are not able to access arbitrary files.
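The scene setup looks roughly like this (type names are mine; the newDocument environment action is macOS 13+ as I understand it):

```swift
import SwiftUI
import UniformTypeIdentifiers

@main
struct MyApp: App {
    var body: some Scene {
        WindowGroup {            // primary UI
            MainView()
        }
        DocumentGroup(newDocument: MyDocument()) { config in
            DocumentView(document: config.$document)
        }
    }
}

struct MainView: View {
    @Environment(\.newDocument) private var newDocument

    var body: some View {
        // Explicitly opens the DocumentGroup; Apple's save panel
        // still controls where the file actually lands on disk.
        Button("New Document") { newDocument(MyDocument()) }
    }
}

struct MyDocument: FileDocument {
    static var readableContentTypes: [UTType] { [.plainText] }
    var text = ""

    init() {}
    init(configuration: ReadConfiguration) throws {
        let data = configuration.file.regularFileContents ?? Data()
        text = String(decoding: data, as: UTF8.self)
    }
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: Data(text.utf8))
    }
}

struct DocumentView: View {
    @Binding var document: MyDocument
    var body: some View { TextEditor(text: $document.text) }
}
```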
I assume you logged in to your iCloud account? Did you restore from backup at the end of the iPadOS 17 install?
The Xcode 15 download produces an Xcode_beta.app, so there is no conflict. Whether the simulator and runtime are isolated is yet to be proven, though.
It will be Xcode 15 only once the SDK is available later this month. I doubt any other tools will be supported for quite some time. If those tools produce iOS or iPadOS apps you MAY be able to load them onto a device or simulator, but it is not clear how that would be done, and they would just run in a window with no access to visionOS features (I assume).
I am seeing this as well. I have iPad and Mac Catalyst enabled for the same target. In Xcode 14 it has macOS as the SDK for Mac Catalyst, but in Xcode 15 it has iOS as the SDK.
I would think an M2 is about the minimum to get a feel for performance on the Vision Pro. I use an M1 Max, which is mostly overkill, but I wanted the 64 GB of RAM so the machine stayed out of my way. But if you intend to build a graphics-heavy app, you may want an M2 Max or more.
You do not name the presentation, so it is hard to say.
I did ask them at WWDC since I have both of the conditions you list.
For nystagmus I was asking about eye tracking, but Optic ID is a good question also. For eye tracking they will have a fallback to a hand-based cursor, or a mouse/trackpad.
For lazy eye (or non-binocular vision), they said it might just work.
It seems like they have had many discussions, but what made it into v1 was unclear. This still seems a ways away from being finished; they are not done building it yet.
But the accessibility person I spoke with at least knew what nystagmus was.
They said that the only camera access is when the user explicitly takes a picture or captures video; there is no ambient capture.
It supports Bluetooth for things like MIDI interfaces, etc., but not wired connections.
If you mean taking your Mac mini to the cafe, maybe. If you mean from home, I would not expect that to work. I would think it would be short-distance Wi-Fi only.