I'm working on a custom spatial video player that uses AVSampleBufferDisplayLayer as the render layer. When I feed it CMSampleBuffers output from a VTCompressionSession using the new encoding API, it displays normally, but I don't know whether it will work on Vision Pro. Does anyone have any idea?
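For context, a minimal sketch of the enqueue pattern in question is below; nextSampleBuffer() is a hypothetical stand-in for however the compressed CMSampleBuffers arrive from the VTCompressionSession output handler, not actual project code:

import AVFoundation

// Minimal sketch: feed compressed CMSampleBuffers into an AVSampleBufferDisplayLayer.
// `nextSampleBuffer` is a hypothetical closure supplying buffers from the
// VTCompressionSession output callback (returns nil when nothing is pending).
final class SampleBufferFeeder {
    let displayLayer = AVSampleBufferDisplayLayer()
    private let queue = DispatchQueue(label: "sample-buffer-feeder")

    func start(nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
        displayLayer.requestMediaDataWhenReady(on: queue) { [weak self] in
            guard let layer = self?.displayLayer else { return }
            while layer.isReadyForMoreMediaData {
                guard let buffer = nextSampleBuffer() else { return }
                layer.enqueue(buffer) // the layer decodes and displays the sample
            }
        }
    }
}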
iPad and iOS apps on visionOS
Discussion about running existing iPad and iOS apps directly on Apple Vision Pro.
Apple asks us to submit apps with Vision Pro support.
I submitted to App Store Connect with the destination Apple Vision (Designed for iPad), and it was rejected because of references to pre-release software or products.
What is Apple expecting us to do?
In Full immersion mode,
I create a sphere with radius 10 and add a CollisionComponent and an InputTargetComponent to it.
I then create a 0.2 cube and add the same two components to it.
I also add an attachment.
The code is as follows:
RealityView { content, attachments in
    let meshgenerate = MeshResource.generateSphere(radius: 10)
    let collisionShape = ShapeResource.generateSphere(radius: 10)
    let sp = ModelEntity(mesh: meshgenerate)
    sp.components.set(CollisionComponent(shapes: [collisionShape]))
    sp.components.set(InputTargetComponent())
    sp.transform.scale *= .init(-1, 1, 1)
    sp.name = "sp"
    content.add(sp)

    let ont = ModelEntity(mesh: MeshResource.generateBox(size: 0.2))
    ont.components.set(CollisionComponent(shapes: [ShapeResource.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]))
    ont.components.set(InputTargetComponent())
    ont.name = "ont"
    ont.position = .init(x: 0, y: 0, z: -2)
    content.add(ont)

    if let stack = attachments.entity(for: "aid") {
        stack.name = "sssssss"
        stack.setPosition(.init(x: 0, y: 1.5, z: -1), relativeTo: nil)
        // stack.generateCollisionShapes(recursive: false)
        // stack.components.set(InputTargetComponent())
        content.add(stack)
    }
} attachments: {
    let rotation = Rotation3D(angle: Angle2D(degrees: 30), axis: .x)
    Attachment(id: "aid") {
        Button {
            print("sss", "Button")
        } label: {
            Text("New Color")
                .font(.extraLargeTitle)
                .padding(40)
        }
        .background(.yellow)
    }
}
.gesture(TapGesture().targetedToAnyEntity().onEnded({ value in
    print("sss", "TapGesture", value.entity.name)
    // openWindow(id: "main")
}))
Only the sphere can trigger the gesture; the other model entity and the attachment cannot trigger it.
I know the problem is that the other entities are placed inside the sphere, and the sphere has an InputTargetComponent. Without removing the sphere's InputTargetComponent, how can I make the attachment trigger gestures as well?
Is there a way of integrating RealityKitContent into an app created with Xcode 12 using UIKit?
The non-AR parts work fine on visionOS; the AR parts need to be rewritten in SwiftUI. In order to do so, I need to access the RealityKit content and work with it seamlessly in Reality Composer Pro, but I'm unsure how to integrate RealityKitContent into such a pre-SwiftUI/visionOS project. I am using Xcode 15.
Thank you.
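For reference, one possible bridge might look like the minimal sketch below; it assumes the Reality Composer Pro package is added as the usual RealityKitContent Swift package dependency and that it contains a scene named "Scene", both of which are assumptions here:

import SwiftUI
import RealityKit
import RealityKitContent
import UIKit

// SwiftUI wrapper around the Reality Composer Pro content.
struct ImmersiveContentView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is a placeholder; use whatever the RCP scene is actually called.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}

// Presented from existing UIKit code via a hosting controller.
func presentRealityContent(from presenter: UIViewController) {
    let hosting = UIHostingController(rootView: ImmersiveContentView())
    presenter.present(hosting, animated: true)
}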
Hi all, I'm new to the Apple ads guides; my apologies if this topic has already been discussed or I missed it in the policies documentation. Please share a link if so.
I would like to know more about visionOS and the in-app rules for it. Are there any limitations on integrating ad-network code or sponsored content unrelated to the app? What are the options for additional monetization of AR/MR/VR apps?
My app name "Freepaystubnow" is one word and it's not indicate any price.
also with the same name i uploaded this app with different account 2 month ago and it was approved but some reason i delete this app and now i try to upload same app (i change app bundle id) with new account this app but app store reject my app below reason
Guideline 2.3.7 - Performance - Accurate Metadata
Your app name to be displayed on the App Store includes references to the price of your app or the service it provides, which is not considered a part of these metadata items.
Hi guys,
has any individual developer received a Vision Pro dev kit, or is it just aimed at big companies?
Basically, I would like to start with one or two of my apps that I have already removed from the store, just to get familiar with the visionOS platform and gain knowledge and skills on a small but real project.
After that I would like to use the dev kit on another project. I work on contract for a multinational communications company on a pilot project in a small country, and extending that project to visionOS could be a very interesting introduction of this new platform and could excite users of their services. However, I cannot reveal the details to Apple for reasons of confidentiality. After completing that contract (or during it, if I manage), I would like to start working on a great idea I have for Vision Pro (as many of you do).
Is it worth applying for the dev kit as an individual dev? I have read some posts where people were rejected.
Is it better to start in the simulator and just wait for the actual hardware to become available for purchase? I would prefer to simply buy the device rather than work with a dev kit that I may need to return in the middle of an unfinished project.
Any info on when pre-orders might be possible?
Any idea what Mac specs are recommended for developing for visionOS, especially for 3D scenes? I just got a MacBook Pro M3 Max with 96 GB RAM, and I'm wondering whether I should have maxed out the config. Is anybody using that config with the Vision Pro dev kit?
Thanks.
Can AR projects run on a visionOS simulator?
I submitted an application for the "Apple Vision Pro developer kit" about a month ago (submitted ~ 11/1/23). I have not heard anything back. If my application is rejected, will I ever receive a rejection notice for "Apple Vision Pro developer kit"? Or will there just be radio silence?
Part of me wonders if my application ever got submitted, or if the application form doesn't work dependably in Firefox, and whether I should submit again. If I haven't received a rejection, might this mean my application is still being considered? It would be great to know my application was received.
My device has an M2 Max chip, and I am using Xcode version 15.1 Beta 3. My app runs normally in iOS and iPad simulators, but when I attempt to run it in the Vision Pro simulator, even though the compilation is successful, a dialog box appears stating, 'AppName's architectures (Intel 64-bit) include none that Apple Vision Pro can execute (arm64).' Consequently, the app is not successfully installed in the Vision Pro simulator. Additionally, my project uses Cocoapods for dependency management. I would appreciate any help, thank you!
I'm creating an immersive experience with RealityView (think of it as a Fruit Ninja-like experience). Say I have some randomly generated fruits that are spawned by certain criteria in a System.update function, and I want to interact with these generated fruits using hand gestures.
Well, it simply doesn't work: the gesture's onChanged closure doesn't fire as I expected. I added both an InputTargetComponent and a CollisionComponent to make the fruit detectable in an immersive view. It works fine if I set up these fruits in the scene with Reality Composer Pro before the app runs.
Here is what I did.
First I load the fruit template:
let tempScene = try await Entity(named: "fruitPrefab.usda", in: realityKitContentBundle)
fruitTemplate = tempScene.findEntity(named: "fruitPrefab")
Then I clone it during the System.update(context) function. parent is an invisible entity placed at .zero in my loaded immersive scene.
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale
parent.addChild(fruitClone)
I attached my gesture to the RealityView like this:
.gesture(DragGesture(minimumDistance: 0.0)
    .targetedToAnyEntity()
    .onChanged { value in
        print("dragging")
    }
    .onEnded { tapEnd in
        print("dragging ends")
    }
)
I was wondering whether the runtime-generated entity isn't tracked by RealityView, but since I have added it as a child of a placeholder entity in the scene, it should be fine... right?
Or do I just need to put a new AnchorEntity there?
Thanks in advance for any advice. I've been trying to figure this out all day.
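For completeness, here is a minimal sketch of the spawning step with the collision and input components set explicitly on each clone, in case the prefab carries them on a child entity rather than on the cloned root; the sphere radius is just a placeholder:

// Sketch: re-apply collision + input target to each runtime clone.
let fruitClone = fruitTemplate!.clone(recursive: true)
fruitClone.position = pos
fruitClone.scale = scale

// Placeholder shape; a real app would size this to match the fruit mesh.
let shape = ShapeResource.generateSphere(radius: 0.1)
fruitClone.components.set(CollisionComponent(shapes: [shape]))
fruitClone.components.set(InputTargetComponent())

parent.addChild(fruitClone)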
Hello,
I am developing a visionOS-based application that uses various data providers such as image tracking, plane detection, and scene reconstruction, but these are not supported in the visionOS simulator. What is the workaround for this issue?
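For what it's worth, a minimal sketch of one workaround is below; it assumes the standard ARKit-for-visionOS provider types and simply skips whatever the current environment (for example, the simulator) does not support, so the app still runs with reduced functionality:

import ARKit

// Sketch: only run the providers the current device/simulator actually supports.
func makeSupportedProviders() -> [any DataProvider] {
    var providers: [any DataProvider] = []
    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }
    // Image tracking would be gated the same way with ImageTrackingProvider.isSupported.
    return providers
}

// Usage: run only what is available; in the simulator this may be an empty list,
// so the rest of the app has to provide placeholder behavior.
// let session = ARKitSession()
// try await session.run(makeSupportedProviders())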
Hello! Could someone please help me with this question: in Xcode I can see that there are two possible destinations for Apple Vision: Apple Vision, and Apple Vision (Designed for iPad). While testing my app, I noticed that it can crash on the Apple Vision destination built against the visionOS SDK, but it runs on the Designed for iPad version, and the two look different when launched.
Does this mean that if I want to make sure my app works correctly on a real device, I should test it on the Apple Vision destination with the visionOS SDK?