I found that the app AirDraw can export users' drawings to a USDZ file. How can I implement this feature using RealityKit?
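I'm not aware of a documented RealityKit API for writing USDZ directly, but one workaround is to rebuild the drawn geometry as a SceneKit scene and export it with SCNScene.write(to:), which accepts a .usdz destination. A rough sketch (the stroke geometry input is just a placeholder, not AirDraw's actual data model):

import SceneKit

// Minimal sketch: wrap each stroke's geometry in an SCNNode and export
// the whole scene. A ".usdz" URL makes SceneKit write a USDZ archive.
func exportDrawing(strokes: [SCNGeometry], to url: URL) -> Bool {
    let scene = SCNScene()
    for geometry in strokes {
        scene.rootNode.addChildNode(SCNNode(geometry: geometry))
    }
    // The export format is inferred from the file extension of `url`.
    return scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
}

Usage would be something like exportDrawing(strokes: myStrokes, to: documentsURL.appendingPathComponent("drawing.usdz")); converting RealityKit meshes into SCNGeometry is the part you would still have to design yourself.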
The project was developed in Unity, and the requirement is to place a virtual model in the real world. When the user leaves the environment, or the device is turned off and on again, the virtual model should still be in its original real-world position. I found that ARKit's world tracking feature looks useful, but I don't know how to use it in Unity. Are there any related example projects?
I am trying to make the immersive version of AVPlayerViewController bigger, but I can't find any information on how to go about it. It seems that if I want to change the immersive video viewing experience, the only thing I can do is use a VideoMaterial on a ModelEntity created with .generatePlane. Is there a way to change the video size in immersive mode for AVPlayerViewController?
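For reference, a minimal sketch of the VideoMaterial-on-a-plane approach mentioned above, assuming a local AVPlayer and a plane whose size you pick yourself (this resizes the plane, not AVPlayerViewController):

import RealityKit
import AVFoundation

// Minimal sketch: play a video on a plane whose dimensions you control.
// `videoURL` is a placeholder for your own asset URL.
func makeVideoPlane(videoURL: URL, width: Float, height: Float) -> ModelEntity {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)
    // generatePlane(width:height:) creates a vertically oriented plane.
    let mesh = MeshResource.generatePlane(width: width, height: height)
    let entity = ModelEntity(mesh: mesh, materials: [material])
    player.play()
    return entity
}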
Hello,
I want to play an mp4 file with a VideoMaterial and AVPlayer.
First I used Reality Composer Pro: I created a material on the sphere provided by default in Reality Composer Pro and exported it to USDZ. When I play the mp4 file on that sphere material, it plays fine.
But when I use a custom mesh (for example, a curved screen modeled in Shapr3D), it does not play well. I created the curved mesh in Shapr3D and exported it to USDZ, then placed it in a Reality Composer Pro scene and exported that to USDZ again. When I play the mp4 file on the curved mesh, the video does not adjust to the screen correctly.
How can I adjust and display the video correctly on a custom usda file?
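A minimal sketch of one way to attach a VideoMaterial to a custom USDZ mesh, assuming the asset is named "CurvedScreen" in the main bundle; note that how the video stretches across a curved surface depends on the UV mapping authored in the modeling tool, not on RealityKit:

import RealityKit
import AVFoundation

// Minimal sketch: load a custom USDZ mesh and swap its materials for a
// VideoMaterial. "CurvedScreen" and `videoURL` are placeholders.
func makeCurvedVideoScreen(videoURL: URL) async throws -> Entity {
    let screen = try await Entity(named: "CurvedScreen")
    let player = AVPlayer(url: videoURL)
    let videoMaterial = VideoMaterial(avPlayer: player)
    applyVideoMaterial(videoMaterial, to: screen)
    player.play()
    return screen
}

// Recursively replace every ModelComponent's materials with the video material.
func applyVideoMaterial(_ material: VideoMaterial, to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = [material]
        entity.components.set(model)
    }
    for child in entity.children {
        applyVideoMaterial(material, to: child)
    }
}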
For visionOS 2.0+, the object tracking feature has been announced. Is there any support for it in Unity's PolySpatial, or is it only available via Swift and Xcode?
Hello,
I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with:
Capturing Perspective Camera View: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?
Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering if SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView.
I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?
Any guidance, sample code, or references to documentation would be greatly appreciated.
Thank you in advance for your help!
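Not a RealityView/visionOS answer, but for comparison: on iOS and macOS, RealityKit's ARView can capture its rendered output via snapshot(saveToHDR:completion:). A minimal sketch, assuming a virtual PerspectiveCamera placed in the scene:

import RealityKit

// Minimal sketch (iOS/macOS ARView, not visionOS RealityView):
// render the scene through a PerspectiveCamera and capture the current frame.
func captureModelImage(from arView: ARView, completion: @escaping (ARView.Image?) -> Void) {
    let camera = PerspectiveCamera()
    camera.position = [0, 0.2, 1.0]          // place the virtual camera
    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(camera)
    arView.scene.addAnchor(anchor)

    // snapshot(saveToHDR:completion:) returns the currently rendered frame.
    arView.snapshot(saveToHDR: false) { image in
        completion(image)
    }
}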
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features.
However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only:
https://developer.apple.com/videos/play/wwdc2024/10139/
I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space.
IMHO Apple should decide whether they want to target consumers in the first place, or whether they want to go the HoloLens / Magic Leap route and mainly serve enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
Hello,
I am new to SwiftUI and visionOS, but I developed an app with a window and an ImmersiveSpace. I want the immersive space to be dismissed when the window/app is closed.
I have the code below using the ScenePhase state; it was working fine in visionOS 1.1, but it stopped working with visionOS 2.0.
Any idea what I am doing wrong? Is there another way to handle the dismissal of ImmersiveSpace when my main Window is closed?
@main
struct MyApp: App {
    @State private var viewModel = ViewModel()

    var body: some Scene {
        @Environment(\.scenePhase) var scenePhase
        @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

        WindowGroup {
            SideBarView()
                .environment(viewModel)
                .frame(width: 1150, height: 700)
                .onChange(of: scenePhase, { oldValue, newValue in
                    if newValue == .inactive || newValue == .background {
                        Task {
                            await dismissImmersiveSpace()
                            viewModel.immersiveSpaceIsShown = false
                        }
                    }
                })
        }.windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView(area: viewModel.currentModel)
                .environment(viewModel)
        }
    }
}
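One thing that may be worth trying (an assumption, not a confirmed fix): declare the environment values as stored properties of the App struct instead of locals inside body, so SwiftUI can observe scene-phase changes. A minimal sketch:

@main
struct MyApp: App {
    @State private var viewModel = ViewModel()
    // Declared at struct scope so SwiftUI tracks changes to them.
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some Scene {
        WindowGroup {
            SideBarView()
                .environment(viewModel)
                .onChange(of: scenePhase) { _, newValue in
                    guard newValue == .inactive || newValue == .background else { return }
                    Task {
                        await dismissImmersiveSpace()
                        viewModel.immersiveSpaceIsShown = false
                    }
                }
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView(area: viewModel.currentModel)
                .environment(viewModel)
        }
    }
}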
I tested the new visionOS object tracking and it worked really well.
I created a reference object using Create ML, and it detected the object reliably.
My question is: does this also work on iOS, and if not right now, is it planned to work on iOS in the future?
Hello,
I watched WWDC24 and saw the Ultra-Wide Mac Virtual Display.
I want my player to work like that ultra-wide display: playing an mp4 movie file in an ultra-wide (curved) mode.
Can I build that kind of ultra-wide experience with AVKit on visionOS 2?
When I checked the Apple documentation, I found AVExperienceController.Experience.expanded.
Is this the ultra-wide mode I'm thinking of?
(https://developer.apple.com/documentation/avkit/avexperiencecontroller/experience-swift.enum/expanded#discussion)
How can I remove or hide this part under a SwiftUI panel?
What Swift code should I write to control this?
Hi everyone,
I've read that USDZ supports LOD, so a model can have three meshes with high, medium, and low polygon detail that are shown depending on the distance from the user to the entity, but I don't know how to use it.
Does anyone have experience with this, or even better, a downloadable sample file?
Thanks a lot!
This week I was watching https://developer.apple.com/videos/play/wwdc2024/10105/ with the amazing "configuration" feature for changing the color or mesh directly in Quick Look. I tried a lot of workarounds, but none of them brought me success.
How do I write the usda files? Any time I overwrite the usda, even with just "{}" inside, Reality Composer Pro rejects the file and won't open it again.
Where is the developer in the tutorial writing the usda? How is the usda compressed into the usdz? (None of the compressors I tried accepted the modified usda file.)
This is the code suggested in the video:
#usda 1.0
(
    defaultPrim = "iPhone"
)

def Xform "iPhone" (
    variants = {
        string Color = "Black_Titanium"
    }
    prepend variantSets = ["Color"]
)
{
    variantSet "Color" = {
        "Black_Titanium" { }
        "Blue_Titanium" { }
        "Natural_Titanium" { }
        "White_Titanium" { }
    }
}
But I don't understand how to do this with my own files.
I would like to code some RealityViews to run on my Mac first (and then incorporate them in a visionOS project) so that my code/test loop is faster, but I have not been able to find a simple example that supports Mac.
Is it possible to have volumes on a Mac? Is there support for using a game controller to move around the RealityView, like in the visionOS simulator?
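As far as I know, the SwiftUI RealityView is also available on macOS 15 and later, so a small cross-platform test view can be built like the sketch below (the sphere content is just a placeholder):

import SwiftUI
import RealityKit

// Minimal sketch of a RealityView that compiles for macOS 15+ and visionOS,
// useful as a faster code/test loop before moving into a visionOS project.
struct TestSceneView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            content.add(sphere)
        } update: { content in
            // React to SwiftUI state changes here if needed.
        }
    }
}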
Has there been an adjustment in the maximum number of DrawableQueues that can be swapped in for textures in visionOS 2? Or an adjustment in the total amount of RAM allowed in a scene?
I have been having a difficult time getting more than one DrawableQueue to appear, even though this worked fine in visionOS 1.x.
When I use .mov video files to create video materials in RealityKit, they display correctly on my modelEntity. However, when I try a video file in .mp4 format, I get only a solid black material. Does AVKit support playing .mp4 video files on visionOS 1.2?
Hello, I'm getting this error whenever I try to run any new project; it keeps showing up.
Thanks
Zipzy Games
I followed the WWDC video to learn SharePlay. I understood the initial creation of seats, but I couldn't follow some of the content that comes after it very well, so I hope you can give me some sample code. The details are as follows:
I have already set up the seats.
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
I hope you can give me a SharePlay button: after pressing it, all users in the FaceTime call should be assigned to the seats defined in TeamSelectionTemplate. Thank you very much.
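A rough sketch of that flow, based on my reading of the WWDC session; the GroupActivity type, its metadata, and the exact wiring are assumptions rather than verified code:

import SwiftUI
import GroupActivities

// Hypothetical activity for this example.
struct TeamSelectionActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Team Selection"
        metadata.type = .generic
        return metadata
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Starts the activity for everyone in the FaceTime call.
                _ = try? await TeamSelectionActivity().activate()
            }
        }
        .task {
            // Listen for incoming sessions and apply the custom seating template.
            for await session in TeamSelectionActivity.sessions() {
                if let coordinator = await session.systemCoordinator {
                    var configuration = SystemCoordinator.Configuration()
                    configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
                    coordinator.configuration = configuration
                }
                session.join()
            }
        }
    }
}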
When I wanted to load a Reality Composer Pro scene containing object tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this is not enough. We need to add some configuration that enables object tracking for the RealityView. What do we need to add?
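My understanding from the WWDC24 object tracking session is that you also need to run an ARKitSession with an ObjectTrackingProvider built from your reference object; the sketch below reflects that assumption, with "MyObject.referenceobject" as a placeholder asset name:

import ARKit
import RealityKit

// Sketch: run object tracking alongside the Reality Composer Pro scene.
// The .referenceobject file name is a placeholder for your own asset.
func runObjectTracking() async throws {
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else {
        return
    }
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    let session = ARKitSession()
    try await session.run([provider])

    // React to anchor updates, e.g. reposition entities in your scene.
    for await update in provider.anchorUpdates {
        print("Object anchor update:", update.event, update.anchor.id)
    }
}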