It is way above the control panel...
The code I use to show the USDZ model is below:
```swift
RealityView { content in
    // Load the USDZ model asynchronously from the app bundle.
    let entity = try! await Entity(named: "idle")
    entity.setScale([0.15, 0.15, 0.15], relativeTo: entity)
    // Enable interactions on the entity.
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(shapes: [.generateBox(size: [2, 2, 2])]))
    content.add(entity)
}
```
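If the goal is to bring the model down toward the window, one small adjustment worth trying (the -0.3 m offset is an illustrative value to tune, not a known constant) is to reposition the entity before adding it:

```swift
// Lower the entity relative to the RealityView's origin;
// the -0.3 m offset is a guess to tune, not a known value.
entity.setPosition([0, -0.3, 0], relativeTo: nil)
```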
iPad and iOS apps on visionOS
Discussion about running existing iPad and iOS apps directly on Apple Vision Pro.
Hey, all!
I've been trying to upload a video preview to the AVP storefront for our app, but some of the export requirements seem to contradict one another.
For the AVP, a 4K resolution is required... which would call for H.264 Level 5.2.
Yet the H.264 level can't be any higher than 4... which tops out around 1080p.
It seems like a catch-22: either the H.264 level will be too high, or the resolution will be too low.
Does anyone have a fix or a way around this issue?
In a Flutter application, are there any FCM-related changes for the newer iOS versions? I couldn't find any solution for this. Notifications work fine on all Android versions, and on iOS they work fine up to version 16.
I have also verified in the app settings (iOS 17 and above) that notification permissions are enabled.
I also get a success response from this API - https://fcm.googleapis.com/fcm/send
response -
"multicast_id": 912044660XXXXXX, "success": 1, "failure": 0,
But no message is ever received. I tried debugging and checking the console log, but there is no message, log, or error.
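One thing worth double-checking on the iOS side (a sketch under the assumption that the APNs token is not being forwarded to Firebase, not a confirmed cause) is the AppDelegate wiring for FirebaseMessaging:

```swift
import Flutter
import UIKit
import UserNotifications
import FirebaseMessaging  // assumes the FirebaseMessaging pod is installed

@main
class AppDelegate: FlutterAppDelegate {
    override func application(
        _ application: UIApplication,
        didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
    ) -> Bool {
        // Request permission, then register with APNs once granted.
        UNUserNotificationCenter.current().requestAuthorization(
            options: [.alert, .badge, .sound]
        ) { granted, _ in
            guard granted else { return }
            DispatchQueue.main.async {
                application.registerForRemoteNotifications()
            }
        }
        return super.application(application, didFinishLaunchingWithOptions: launchOptions)
    }

    override func application(
        _ application: UIApplication,
        didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data
    ) {
        // FCM cannot deliver to iOS unless the APNs token reaches Firebase.
        Messaging.messaging().apnsToken = deviceToken
        super.application(application, didRegisterForRemoteNotificationsWithDeviceToken: deviceToken)
    }
}
```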
Is it possible to initiate Guest Mode and the Guest Mode setup from an application? If there was a demo application for example that would start by launching the Guest mode setup, instead of the new user being allowed to enter Guest Mode by the owner of the device? This way the application could be running, a new user could put on the headset then press some Guest Mode Setup button first to begin their session.
Hi,
I'm working on developing an app. I need my app to work in a crowd with multiple people (who have multiple hands).
From what I understand, the Vision Framework currently uses a heuristic of "largest hand" to assign as the detected hand. This won't work for my application since the largest hand won't always be the one that is of interest. In fact, the hand of interest will be the one that is pointing.
I know how to train a model using CreateML to identify a hand that is pointing, but where I'm running into issues is that there is no straightforward way to directly override the Vision framework's built-in heuristic of selecting the largest hand when you're solely relying on Swift and Create ML.
I would like my framework to be:
Request hand landmarks
Process image
CreateML reports which hand is pointing
We use the pointing hand to collect position data on the points of the index finger
But within the Vision framework, if you set the number of hands to collect data for to 1, it will just choose the largest hand and report position data for that hand only. Of course, the easy workaround here is to set it to X number of hands, but on an iOS device this is computationally intensive (since my app could be handling up to 10 hands at a time).
Has anyone come up with a simpler solution to this problem, or is anyone aware of something within visionOS that can do it?
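For context, a minimal sketch of the brute-force route described above: ask Vision for all hands, then filter with the custom classifier instead of Vision's built-in "largest hand" selection. `isPointing(_:)` is a hypothetical stand-in for the Create ML classifier, not a Vision API:

```swift
import Vision

// Hypothetical stand-in for the Create ML pointing classifier.
func isPointing(_ hand: VNHumanHandPoseObservation) -> Bool {
    // Feed hand.recognizedPoints(.all) into the trained model here.
    return false
}

// Request up to 10 hands and apply the custom heuristic ourselves.
func detectPointingHand(in cgImage: CGImage) throws -> VNHumanHandPoseObservation? {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 10  // the computationally expensive part

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    return request.results?.first(where: isPointing)
}
```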
Hello, we have a universal app that runs on iOS and iPadOS today but we're having an issue where it crashes on launch on visionOS.
When I try to run our app, I see messages like these in the console logs:
AMFI: constraint violation /private/var/containers/Bundle/Application/***/***.app/Frameworks/***.framework/*** has entitlements but is not a main binary
I see these for what seems to be all of our internal frameworks; we use CocoaPods for all of these.
The following output is from running: codesign -d --entitlements :- ***.framework
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "https://www.apple.com/DTDs/PropertyList-1.0.dtd"><plist version="1.0"><dict></dict></plist>
Why would this cause a crash on launch for visionOS, but not iOS or iPadOS?
Why does AMFI think there are entitlements for the framework when they are empty?
As the title already suggests: is it possible with the current Apple Vision Pro Simulator to recognize objects/humans, like it is currently possible on the iPhone? I am not even sure whether there is an API for accessing the cameras of the Vision Pro.
My goal is to recognize, for example, a human and attach a 3D object to them, for example a hat. Can this be done?
App's architecture (Intel 64-bit) include none that Apple Vision Pro can execute (arm64)
Facing this issue only for Apple Vision Pro 1.0 Simulator, works fine for iPhone simulators.
I have an iPad App that works/available on visionOS store.
However, TestFlight displays this as an iOS-only app, marked "Incompatible" on this Apple Vision Pro.
How do I enable my iPadOS app for TestFlight on visionOS?
P.S. A native visionOS app can appear there; I don't have any approved or released builds yet for visionOS.
I also see the same issue: "app not compatible" in TestFlight, with no visionOS section present. The same app is available in the App Store under visionOS/iPad apps.
Hello, community.
We are adding visionOS support to our application and have an issue without a solution because of a system UIKit bug.
There is a system bug where, after the first Return-button press (with textFieldShouldReturn returning true) and a call to resignFirstResponder, the text field gets stuck in a cycle of repeated textFieldShouldReturn calls, and there is no way to stop them.
Repro steps:
Enter text to UITextField
Press Return button on keyboard
textFieldShouldReturn called (return true after step 4)
call resignFirstResponder
call becomeFirstResponder or tap on UITextField
return to step 3
The same problem exists in system application Reminders.
Repro steps:
Create a new reminder with a title and description
Set pointer to title textField
Press Return button on keyboard
App will try to create a new reminder and, after less than a second, return to the first reminder
The bug exists only on visionOS; on iOS/iPadOS everything is OK.
We assume there is an internal flag for Return-button handling that is checked in the becomeFirstResponder logic, and that visionOS (the iPadOS adaptation) does not clear it after the button press is handled.
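Until Apple fixes it, a minimal sketch of a debouncing workaround (the guard flag and the async re-arm are assumptions on our side, not a verified fix):

```swift
import UIKit

final class DebouncingTextFieldDelegate: NSObject, UITextFieldDelegate {
    // Guard flag to swallow the spurious repeat calls.
    private var didHandleReturn = false

    func textFieldShouldReturn(_ textField: UITextField) -> Bool {
        guard !didHandleReturn else { return false }
        didHandleReturn = true
        textField.resignFirstResponder()
        // Re-arm on the next run-loop turn, after any spurious
        // repeat call has already been ignored.
        DispatchQueue.main.async { self.didHandleReturn = false }
        return true
    }
}
```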
Closure containing control flow statement cannot be used with result builder 'ViewBuilder'
Below is the code that produced it:
```swift
let feet = 9
let inch = 45

var body: some View {
    VStack {
        for number in 1...10 {
            print(number)
        }
    }
}
```
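For reference, ViewBuilder closures cannot contain imperative statements such as a for loop or print. A minimal sketch of the usual fix is to use ForEach with a view (Text here is just an illustrative choice):

```swift
var body: some View {
    VStack {
        // ForEach builds views declaratively, which is what
        // the ViewBuilder result builder expects.
        ForEach(1...10, id: \.self) { number in
            Text("\(number)")
        }
    }
}
```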
I've been trying to make a native version of my iPad app, which uses AVAudioPlayer. Everything works fine on iOS and iPadOS; however, when running on visionOS, the audio sounds like it's constantly skipping (both in the Simulator and on an actual device).
Does anyone know why this might be, or how to fix or work around it?
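One thing that might be worth ruling out (an assumption on my part, not a confirmed cause) is the audio session configuration; "clip" below is a placeholder resource name:

```swift
import AVFoundation

func makePlayer() throws -> AVAudioPlayer {
    // Configure the shared session for plain playback.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback)
    try session.setActive(true)

    let url = Bundle.main.url(forResource: "clip", withExtension: "mp3")!
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()
    return player
}
```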
Are there templates for coding on visionOS?
It would be nice to be able to copy and paste working code to build things out with ease, rather than reinventing the wheel so many times.
Hello!
I'm a new iOS developer, and I don't know how to remove my UDID and my account's App ID and name. I created a certificate, but when I open it I see my personal information.
Can anyone tell me how I can remove my personal information or the UDID?
I've been poring over the code for Apple's visionOS demo "Destination", trying to figure out something about its skybox (not really a box there, Apple).
The skybox contains a 360° panorama, but only one half shows; the other half fades out to nothing. I cannot find anywhere an alpha channel being set, or a lighting or material effect causing it. "Alpha" doesn't show up in the code, and the only gradients are used to make controls stand out a little more in their window.
I need a full panorama that shows, with no fading.
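In case it helps, here is a minimal sketch of a full, non-fading 360° skybox built by hand: an inside-out sphere with an unlit equirectangular texture. "skybox_texture" is a placeholder asset name, and the material setup is my assumption, not how the Destination sample does it:

```swift
import RealityKit

func makeSkybox() throws -> Entity {
    // Unlit material so no lighting can darken or fade the panorama.
    let texture = try TextureResource.load(named: "skybox_texture")
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    let sphere = ModelEntity(
        mesh: .generateSphere(radius: 1000),
        materials: [material]
    )
    // Flip the sphere inside out so the texture faces the viewer.
    sphere.scale = .init(x: -1, y: 1, z: 1)
    return sphere
}
```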
I have a free app on the App Store for iOS without any in-app purchase or subscription. If I initially decide to make my iOS app available on the Apple Vision Pro on the App Store, can I then also decide to remove it? How does that work?
These are two links that I found about it, but I can't find a clear answer in either:
https://developer.apple.com/help/app-store-connect/manage-your-apps-availability/manage-availability-of-iphone-and-ipad-apps-on-apple-vision-pro
https://developer.apple.com/support/universal-purchase/
I have an app with destinations for both visionOS and iOS. I created a new platform for my app in App Store Connect for visionOS. Then I selected "Any VisionOS Device (Designed for iPad, arm64)" for my destination in Xcode and archived it. The first red flag that I saw was when it finished archiving, the archive listed in the organizer window read "iOS App Archive." Then after I uploaded the build to App Store Connect and it finished processing, it did not appear in the builds section for the visionOS App, but it did appear in the iOS app section. I tried quitting Xcode and cleaning the build folder, but it still shows up as an iOS App Archive.
Am I doing something wrong or is this a bug? Thanks.
I have an iPad app on the App Store, and since recently every time I upload a new build to App Store Connect I receive an automated email with the following warning.
ITMS-90984: Apple Vision Pro support issue - The details associated with your Apple Developer Program membership indicate that you’re not eligible to publish apps on the App Store for Apple Vision Pro. For more information, contact us: https://developer.apple.com/contact.
At present, my app does not target visionOS APIs specifically, but I have checked the checkbox for availability on Apple Vision Pro. According to the warning, something makes me unable to distribute apps for Vision Pro, but lacking any details, I am not sure how I can solve this issue.
What specifically about my account makes me ineligible to distribute apps for Apple Vision Pro and triggers this warning?
I have contacted support at the provided URL, but they were unable to help me and redirected me to post in Developer Forums.
When migrating a document app to visionOS, I noticed that two windows are created for a doc, and the new ornament toolbars get obscured by the foremost window. This can be replicated easily by creating a SwiftUI doc app and opening it in the Vision simulator.
Has anyone found a workaround? The Feedback Assistant is not showing visionOS as an option for reporting bugs.
```swift
struct TestApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: TestDocument()) { file in
            ContentView(document: file.$document)
                .toolbar {
                    ToolbarItemGroup(placement: .bottomOrnament) {
                        Button {} label: { Image(systemName: "gearshape.fill") }
                        Button {} label: { Image(systemName: "photo") }
                        Button {} label: { Image(systemName: "paintbrush.pointed.fill") }
                    }
                }
        }
    }
}
```
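One possible workaround (a sketch, not verified against the double-window issue) is to attach the ornament explicitly with the .ornament modifier instead of the .toolbar placement in the example above:

```swift
ContentView(document: file.$document)
    // Attach the bar as a scene ornament directly, bypassing
    // the toolbar's .bottomOrnament placement.
    .ornament(attachmentAnchor: .scene(.bottom)) {
        HStack {
            Button {} label: { Image(systemName: "gearshape.fill") }
            Button {} label: { Image(systemName: "photo") }
            Button {} label: { Image(systemName: "paintbrush.pointed.fill") }
        }
        .glassBackgroundEffect()
    }
```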
For visionOS App Store listings, screenshots are supposed to be 3840 × 2160.
However, when I save screenshots from the Simulator, they are only 2732 × 2048.
Is there a setting to generate full-size screenshots from the simulator? Or is there a way to save screenshots of the app window without the scene background?
As the Apple Vision Pro is not being sold yet (and won't be outside the US for a while) taking screenshots on the device is not really an option.
Of course, we can add borders or scale up the Simulator screenshots, but it seems weird that the expected screenshot size does not match the Simulator output.