I currently have an iOS app that streams H.264 video over Wi-Fi, decodes it with VideoToolbox, and displays it in an MTKView, and I want to implement the same functionality in visionOS. What should I do, given that MTKView is not available on visionOS?
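One possible replacement, as a minimal sketch: a plain UIView backed by CAMetalLayer (both available on visionOS), wrapped in UIViewRepresentable so it can sit in a SwiftUI window. The assumption is that the existing VideoToolbox pipeline already produces Metal-renderable frames; the old MTKView draw code would move to wherever decoded frames arrive and render into metalLayer.nextDrawable(). RealityKit's TextureResource.DrawableQueue is another route if the video should appear on an entity rather than in a flat window.

import SwiftUI
import UIKit
import QuartzCore
import Metal

// Minimal MTKView stand-in for visionOS windows: a UIView whose backing layer
// is a CAMetalLayer. Rendering of decoded frames is left to the existing Metal
// code, which would draw into metalLayer.nextDrawable().
final class MetalVideoView: UIView {
    override class var layerClass: AnyClass { CAMetalLayer.self }
    var metalLayer: CAMetalLayer { layer as! CAMetalLayer }

    override init(frame: CGRect) {
        super.init(frame: frame)
        metalLayer.device = MTLCreateSystemDefaultDevice()
        metalLayer.pixelFormat = .bgra8Unorm
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }
}

// SwiftUI wrapper so the view can be placed in a visionOS WindowGroup.
struct MetalVideoViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> MetalVideoView { MetalVideoView() }
    func updateUIView(_ uiView: MetalVideoView, context: Context) {}
}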
Reality Composer Pro
Leverage the all new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.
Posts under Reality Composer Pro tag
Hi,
I'm trying to have an entity (and some attachments to it) rotate.
If I add the entity to the content, add the attachments as child entities, and give the entity an InputTargetComponent, then when I add a gesture ONLY the entity rotates and NOT the attachments (added as child entities).
If I instead create a parent entity with let parentEntity = ModelEntity(), add my entity to the parentEntity, then add the attachments to my entity (which is now a child of the ModelEntity) and give the ModelEntity the InputTargetComponent, then the whole thing rotates (including attachments).
I'm sure there must be a bug; why would it only work with an added ModelEntity?
Anyway, bug or not, the problem I have now is that it rotates around the axes of the ModelEntity, not of my primary entity, which is what I want.
Is there a way to align the ModelEntity's axes with the axes of my primary child entity so it rotates the way I want?
What call should I use to move the axes, and where would I find the axes of the first child entity, which should be the focus of my app?
Here is my code:
    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let specimenentity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                let parentEntity = ModelEntity()
                parentEntity.addChild(specimenentity)
                content.add(parentEntity)
                let entityBounds = specimenentity.visualBounds(relativeTo: parentEntity)
                parentEntity.collision = CollisionComponent(shapes: [ShapeResource.generateBox(size: entityBounds.extents).offsetBy(translation: entityBounds.center)])
                parentEntity.generateCollisionShapes(recursive: true)
                parentEntity.components.set(InputTargetComponent())
                if let Left_Hemisphere = attachments.entity(for: "Left_Hemisphere") {
                    // 4. Position the Attachment and add it to the RealityViewContent
                    Left_Hemisphere.position = [-0.5, 1, 0]
                    specimenentity.addChild(Left_Hemisphere)
                }
            }
        } attachments: {
            Attachment(id: "Left_Hemisphere") {
                // 2. Define the SwiftUI View
                Text("Left_Hemisphere")
                    .font(.extraLargeTitle)
                    .padding()
                    .glassBackgroundEffect()
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    var orientation = Rotation3D(entity.orientation(relativeTo: nil))
                    var newOrientation: Rotation3D
                    if (value.location.x >= lastGestureValueX) {
                        newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
                    } else {
                        newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
                    }
                    entity.setOrientation(.init(newOrientation), relativeTo: nil)
                    lastGestureValueX = value.location.x
                    orientation = Rotation3D(entity.orientation(relativeTo: nil))
                    if (value.location.y >= lastGestureValueY) {
                        newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .x))
                    } else {
                        newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .x))
                    }
                    entity.setOrientation(.init(newOrientation), relativeTo: nil)
                    lastGestureValueY = value.location.y
                }
        )
    }
}
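One workaround, as a minimal sketch built on the code above: shift the child inside the parent so the child's visual center sits at the parent's origin. The parent's rotation axes then pass through the specimen rather than through wherever the parent happened to be.

// After adding specimenentity to parentEntity:
let bounds = specimenentity.visualBounds(relativeTo: parentEntity)
// Move the child so its bounds center coincides with the parent's origin;
// rotating parentEntity now pivots around the specimen's center.
specimenentity.position -= bounds.center
// Optionally move the parent so the specimen stays where it was in the scene.
parentEntity.position += bounds.center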
hello,
I want to play movie files (e.g. mp4, mov) in a Vision Pro app, and I want to display the video on a panoramic, curved surface (like the Albums app > panorama picture > panorama button) in my app.
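A possible starting point, as a minimal sketch: play the file with AVPlayer on a RealityKit VideoMaterial, and build a curved (partial-cylinder) screen with MeshDescriptor. The file name, radius, arc angle, and segment count below are all example values, not anything Apple prescribes.

import RealityKit
import AVFoundation

// Builds a partial-cylinder "screen" and plays a bundled movie on it.
func makeCurvedVideoScreen() throws -> ModelEntity {
    let radius: Float = 3.0        // distance of the screen from its origin
    let arc: Float = .pi / 2       // 90-degree horizontal span
    let height: Float = 2.0
    let segments = 32

    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = -arc / 2 + t * arc
        let x = radius * sin(angle)
        let z = -radius * cos(angle)          // screen curves around -Z, facing the origin
        positions.append([x, -height / 2, z]) // bottom edge
        positions.append([x,  height / 2, z]) // top edge
        uvs.append([t, 0])                    // may need flipping depending on the asset
        uvs.append([t, 1])
    }
    for i in 0..<UInt32(segments) {
        let a = i * 2, b = a + 1, c = a + 2, d = a + 3
        indices.append(contentsOf: [a, c, b, b, c, d]) // two triangles per segment
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    // "panorama.mp4" is a placeholder for a movie shipped in the app bundle.
    let player = AVPlayer(url: Bundle.main.url(forResource: "panorama", withExtension: "mp4")!)
    let screen = ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
    player.play()
    return screen
}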
I've got a couple of 2D PNG assets that I want to add to a scene made of a couple of other usdz files in RCP (picture adding a couple of 2D videogame characters to a simple 3D diorama).
When I try to drag the PNGs to the workspace or the file tree…nothing happens.
I found a walkthrough on Medium (called "Importing and Exporting Personalized Objects for Augmented Reality: Reality Composer and SwiftUI" for those curious as I can't link to Medium posts here) that makes it look like users could do this with simple drag-and-drop. The Medium post is from June 2023, and in the screenshots RCP visually looks a lot more like Reality Composer on iPad, so I'm assuming it's changed a lot since then?
Is there still a way to do this? I've tried adding the 2D elements to a scene with Blender's "import images as planes," but I'm getting weird halos around them and was hoping RCP could make the process a bit easier/cleaner.
I'm following the Meet Reality Composer Pro walkthrough and ran into something that didn't function as expected.
When I got to the step where I add five "Bird_With_Audio.usda" references to the scene, I found they did not play audio. After some trial and error, I found that Preview > Resource in each of their Spatial Audio items was set to "None." If I click the dropdown menu, I see several "Bird_Calls" groups to pick from.
I checked the original Bird_With_Audio.usda that I had created, and the "Bird_Calls" audio group was correctly assigned and worked. I tried dragging a sixth Bird_With_Audio into the scene and confirmed that the Spatial Audio item suddenly empties, rendering the bird silent.
I was able to go through each of the five birds and set their Spatial Audio Resource to Bird_Calls, and the group worked like the video demonstrates.
While this fixed the issue, as a beginner I'd like to know why this happened. It doesn't seem right that I would build an item and then have to re-attach its sounds when I place it in the main scene. So…where did I mess up?
I'm trying to make a simple demo of using ShaderGraphMaterial in a USDZ file that I can preview on Mac and VisionOS but I'm having trouble.
In Reality Composer Pro, I make a sphere, then assign a ShaderGraphMaterial to the material, with a simple diffuse color (green) input. When I save the file as .usda, it displays as a gray sphere on Mac rather than the green sphere shown in Reality Composer Pro. If I then convert to usdz using Reality Converter, I get a warning on import:
"Shader nodes must have “id” as the implementationSource, with id values that begin with “Usd”. Also, shader inputs with connections must each have a single, valid connection source."
And the exported .usdz also shows as a gray sphere.
Is there a simple demo of a .usda file using ShaderGraphMaterial that displays on Mac, iOS, and visionOS that I can look at to see how it looks internally?
My actual problem is creating usdz/usda files on visionOS for viewing on iOS/Mac/visionOS… but the first step is showing it's possible to even use ShaderGraphMaterial across all platforms.
Thanks
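As a runtime sanity check, a minimal sketch: ShaderGraphMaterial can also be loaded by its USD path on iOS 17, macOS 14, and visionOS 1, so the same graph can be verified in code on each platform. The material path and file name here are example assumptions about the file being authored above.

import RealityKit

// Loads the graph material straight from a bundled .usda and applies it,
// independent of whatever material the entity carries after export.
func applyGraphMaterial(to sphere: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(
        named: "/Root/GreenMaterial",
        from: "MyScene.usda",
        in: Bundle.main
    )
    sphere.model?.materials = [material]
}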
I would like to add text to a Reality Composer Pro scene and set the actual text via code. How can I achieve this? I haven't seen any "Text" element in the editor.
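If the editor really has no text primitive, one common workaround, sketched below, is to place an empty transform in the Reality Composer Pro scene as a marker and generate the text mesh in code with MeshResource.generateText. The scene and entity names here ("Immersive", "TextAnchor") are example assumptions.

// Inside a RealityView's make closure:
if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle),
   let anchor = scene.findEntity(named: "TextAnchor") {
    let textMesh = MeshResource.generateText(
        "Hello, visionOS",                 // the string set from code
        extrusionDepth: 0.005,
        font: .systemFont(ofSize: 0.1)
    )
    let textEntity = ModelEntity(
        mesh: textMesh,
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
    anchor.addChild(textEntity)
    content.add(scene)
}

If the text only needs to face the user, a SwiftUI Text view supplied through RealityView attachments is another option.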
I'm trying to better understand how loading entities works. If I do this:
RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
It returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and still have individual control?
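A minimal sketch of one way to get per-object control, assuming sphere_01 and sphere_02 each carry their own CollisionComponent and InputTargetComponent (and the root does not): a targeted gesture then reports the individual sphere that was hit in value.entity rather than the root.

RealityView { content in
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // value.entity is whichever sphere was actually hit, so each one
            // can be moved independently of the other and of the root.
            guard let parent = value.entity.parent else { return }
            value.entity.position = value.convert(value.location3D, from: .local, to: parent)
        }
)

Individual children can also be looked up directly with scene.findEntity(named: "sphere_01") to attach components or run animations on just one of them.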
Hi,
I am investigating how to achieve the emissive (glow) effects shown in the following links in my visionOS app.
https://www.hiroakit.com/archives/1432
https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/
Right now, I'm trying various things with Shader Graph in Reality Composer Pro, but I can't tell from the official documentation and the WWDC session videos what Reality Composer Pro's Shader Graph nodes do individually or in combination, so I'm having a hard time understanding their effects.
I have a feeling that such luminous materials and effects may simply not be possible in visionOS. If there is a way to achieve this, please let me know.
Thanks.
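In case a programmatic material is an acceptable alternative to the Shader Graph route, here is a minimal sketch using RealityKit's PhysicallyBasedMaterial, which exposes an emissive color and intensity; the geometry and colors are arbitrary examples.

import RealityKit
import UIKit

// Self-illuminated look: the emissive contribution is independent of scene lighting.
var material = PhysicallyBasedMaterial()
material.baseColor = .init(tint: .black)
material.emissiveColor = .init(color: .cyan)
material.emissiveIntensity = 2.0   // values above 1 exaggerate the glow

let glowingSphere = ModelEntity(
    mesh: .generateSphere(radius: 0.1),
    materials: [material]
)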
I have a custom material in Reality Composer.
When I attach it to a cube and try loading the scene in Xcode, the material cannot be cast to a ShaderGraphMaterial because it has been changed to a PhysicallyBasedMaterial.
The material was always a Custom material, I did not change the type in Reality Composer.
Does anyone know how to fix this?
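A minimal debugging sketch, assuming a scene named "Scene" and a cube named "Cube": load the entity and print the concrete material types it carries, which shows whether the Shader Graph material survived export or was replaced somewhere in the save/convert step.

// Inside a RealityView's make closure (or any async context):
if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle),
   let cube = scene.findEntity(named: "Cube") as? ModelEntity {
    // Print the concrete material types the cube actually carries after loading.
    for material in cube.model?.materials ?? [] {
        print(type(of: material))   // PhysicallyBasedMaterial vs. ShaderGraphMaterial
    }
    if let graph = cube.model?.materials.first as? ShaderGraphMaterial {
        // The cast succeeds only when the Shader Graph material survived export.
        print(graph.parameterNames)
    }
    content.add(scene)
}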
Hi,
I create an entity and add a bunch of attachments (code is based on the Diorama demo).
I can rotate the entity with this:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let orientation = Rotation3D(entity.orientation(relativeTo: nil))
            let newOrientation: Rotation3D
            if (value.location.x >= lastGestureValue) {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
            } else {
                newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
            }
            entity.setOrientation(.init(newOrientation), relativeTo: nil)
            lastGestureValue = value.location.x
        }
)
But the attachments stay still.
How can I rotate the entity AND the attachment at the same time?
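A minimal sketch of the parent-container approach described in an earlier post in this list: put the model and its attachments under one plain parent entity, give that parent the collision and input-target components, and let the gesture rotate the parent so the attachments follow. The entity and attachment names are example assumptions.

RealityView { content, attachments in
    if let model = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        let container = Entity()
        container.addChild(model)

        if let label = attachments.entity(for: "Label") {
            label.position = [-0.5, 1, 0]
            container.addChild(label)   // child of the rotated container, so it follows
        }

        // Collision + input target on the container so the gesture hits it.
        let bounds = model.visualBounds(relativeTo: container)
        container.components.set(CollisionComponent(
            shapes: [.generateBox(size: bounds.extents).offsetBy(translation: bounds.center)]))
        container.components.set(InputTargetComponent())

        content.add(container)
    }
} attachments: {
    Attachment(id: "Label") {
        Text("Label")
    }
}

The drag gesture from the post above can stay the same; value.entity will be the container, and everything under it rotates together.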
Hello,
I've been trying to render these models in a visionOS app using RealityKit's Model3D API. The heart always appears dark. Any thoughts on why this would happen?
Color.clear
    .overlay {
        Model3D(named: modelName, bundle: realityKitContentBundle) { model in
            model.resizable()
                .scaledToFit()
                .rotation3DEffect(
                    Rotation3D(
                        eulerAngles: .init(angles: orientation, order: .xyz)
                    )
                )
                .frame(depth: modelDepth)
                .offset(z: -modelDepth / 2)
                .accessibilitySortPriority(1)
        } placeholder: {
            ProgressView()
                .offset(z: -modelDepth * 0.75)
        }
    }
    .dragRotation(yawLimit: .degrees(120), pitchLimit: .degrees(20))
    .offset(z: modelDepth)
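One way to narrow it down, as a minimal sketch: load the same model in a RealityView and attach an explicit image-based light, which shows whether the darkness comes from the material itself or from missing lighting. "Sunlight" is a hypothetical image resource in the app bundle, and "HeartModel" stands in for the modelName used with Model3D above.

import SwiftUI
import RealityKit
import RealityKitContent

struct LitModelCheckView: View {
    var body: some View {
        RealityView { content in
            guard let model = try? await Entity(named: "HeartModel", in: realityKitContentBundle),
                  let environment = try? await EnvironmentResource(named: "Sunlight") else { return }

            // Light the model with the image and make it receive that light.
            model.components.set(ImageBasedLightComponent(source: .single(environment)))
            model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: model))

            content.add(model)
        }
    }
}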
objc[27000]: Class XROS1_1SimRuntime is implemented in both /Library/Developer/CoreSimulator/Volumes/xrOS_21O209/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 1.1.simruntime/Contents/MacOS/xrOS 1.1 (0x1025f80e0) and /Library/Developer/CoreSimulator/Volumes/xrOS_21O5181e/Library/Developer/CoreSimulator/Profiles/Runtimes/xrOS 1.1.simruntime/Contents/MacOS/xrOS 1.1 (0x1027c00e0). One of the two will be used. Which one is undefined.
error: Tool terminated by signal 'Segmentation fault: 11'
This build-failure issue occurs every time I build and run.
// swift-tools-version:5.9
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "RealityKitContent",
    // platforms check needed
    platforms: [
        .custom("xros", versionString: "1.0")
    ],
    products: [
        // Products define the executables and libraries a package produces, and make them visible to other packages.
        .library(
            name: "RealityKitContent",
            targets: ["RealityKitContent"]),
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
    ],
    targets: [
        // Targets are the basic building blocks of a package. A target can define a module or a test suite.
        // Targets can depend on other targets in this package, and on products in packages this package depends on.
        .target(
            name: "RealityKitContent",
            dependencies: []),
    ]
)
and here is the path
/Users/momo/b2db2d.github.io/B2D/Packages/RealityKitContent
Every time I build, it keeps showing 'build failed' with the same issue: Segmentation fault: 11.
I'm so annoyed by this. I updated to the latest package version, deleted the cache, resolved package versions, cleaned the build folder, etc. But I don't know why it happens.
Please fix this issue or tell me what to do!
Is it possible to use an image sequence, .mov or sprite sheet as a node source for a custom material in Reality Composer Pro?
I have noticed that in the particle emitter, the magic preset uses a 4x4 sprite sheet as a particle source. Can this be done within the shader graph for the diffuse or normal slot?
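Whether the graph itself can index into a sprite sheet is a separate question; one hedged alternative, sketched below, is to drive a flipbook from code: promote an image input on the material (here assumed to be named "FrameTexture") and swap the bound TextureResource each tick. The frame asset names are example assumptions.

import RealityKit

// Flipbook by texture swapping, driven from code rather than inside the graph.
final class FlipbookPlayer {
    private var frames: [TextureResource] = []
    private var index = 0

    func loadFrames() throws {
        // frame_0 ... frame_15 are assumed to be images in the app bundle.
        frames = try (0..<16).map { try TextureResource.load(named: "frame_\($0)") }
    }

    // Call this on a timer (or from a RealityKit System) to advance the animation.
    func tick(on entity: ModelEntity) {
        guard !frames.isEmpty,
              var material = entity.model?.materials.first as? ShaderGraphMaterial else { return }
        index = (index + 1) % frames.count
        try? material.setParameter(name: "FrameTexture", value: .textureResource(frames[index]))
        entity.model?.materials = [material]
    }
}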
I am struggling to figure out how to make a shader to animate each vertex of a model separately using noise. I watched a video on how to do this in Unity, but I think something must be different with how Reality Composer Pro handles the noise nodes?
For example, in this graph I just hooked up the noise node directly to the geometry modifier:
In my output you can see the plane is adjusted per-vertex using the noise node. My goal would be to animate this like waves by moving the noise.
So in this graph I use time with sin to adjust the UV of the noise. This seems to change the noise node to output a single value (I guess that makes sense: since I modify the UV, it results in a single value, at that UV, in the noise map). So then I take that as the Y value and put it back into the geometry modifier. But now it doesn't work per-vertex; it moves the whole model up and down (based on the single value coming out of the noise map).
How do I make this apply to each vertex of the model individually?
This is an example of the output I want in Unity: the plane is being adjusted per-vertex by a scrolling 2D noise node:
I have been digging into learning shader graphs by watching Unity shader graph content, because lots of the same concepts apply.
One thing I noticed was that in Unity, each node in the shader graph has a little preview. I don't think this exists in Reality Composer Pro, but is there any way to mimic it (like a node I can hook up that lets me debug the graph at that point)?
If not, I'm happy to just file a feedback about it, but just thought I'd ask!
We are building an AR experience for deployment on iPhones. We are using Unity, but it looks as if Reality Composer Pro has better features for spatial audio. I am not sure whether Reality Composer Pro can only be used for Vision Pro, or whether it can also be used for deployment on iPhone or iPad.
Hi guys,
if you've started using Vision Pro, I'm sure you've already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one person may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to create Feedback:
App windows can only be moved forward to a distance of about 20 ft/6 m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to still be legible. This would be very interesting, especially when using Environments and a 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.
When capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps opened, so there is no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, etc. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be watched on antique 2D computer screens, it may be useful to create 2D videos this way.
3D video recording is not very good: it's kind of shaky, not when Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple Camera app and wants to give developers a chance to create a better one, but it still would be nice to have something better out of the box.
I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected: when the user gets within 6 ft/2 m of an obstacle, it could first present a warning (there is already "You are close to an object") and then make the surroundings visible. But if there are no obstacles (the user can be in a large space and can place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.
We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.
Well, that's just a beginning and I could go on and on, but tell me what you guys think.
Regards, and enjoy the new virtual adventure!
Robert
Can anyone point me to an approach for handling drag, rotation, and scale on a 'targetedToAnyEntity' asset coming from a realityKitContentBundle?
I've looked through all of the code examples and have cobbled together something using PlacementGesturesModifier and DragRotationModifier from the Hello World code example, but I can't figure out how to make it work on individual assets -- it only works on the root.
When I do something simple like this (outside the modifiers I mentioned above) I can make individual drag work... but I can't figure out how to apply the same thing to rotation and scale.
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            value.entity.position = value.convert(value.location3D, from: .local, to: value.entity.parent!)
        }
)
Are there any examples of a solution for drag, rotation and scale on an individual basis in the code examples? Any advice or hints would be appreciated. :)
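A sketch of one way to get all three manipulations on a per-entity basis, combining the drag above with SwiftUI's RotateGesture3D and MagnifyGesture (both can be targeted to entities). It assumes every asset that should be manipulable carries its own CollisionComponent and InputTargetComponent so the targeted gestures resolve to that entity rather than the root; all names here are illustrative.

import SwiftUI
import RealityKit
import Spatial

struct ManipulationGestures: ViewModifier {
    // Transform captured at the start of each gesture so updates are relative.
    @State private var initialOrientation: Rotation3D?
    @State private var initialScale: SIMD3<Float>?

    func body(content: Content) -> some View {
        content
            .gesture(DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Follow the drag in the hit entity's parent space.
                    guard let parent = value.entity.parent else { return }
                    value.entity.position = value.convert(value.location3D, from: .local, to: parent)
                })
            .gesture(RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    if initialOrientation == nil {
                        initialOrientation = Rotation3D(value.entity.orientation(relativeTo: nil))
                    }
                    let newOrientation = initialOrientation!.rotated(by: value.rotation)
                    value.entity.setOrientation(.init(newOrientation), relativeTo: nil)
                }
                .onEnded { _ in initialOrientation = nil })
            .gesture(MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    if initialScale == nil { initialScale = value.entity.scale }
                    value.entity.scale = initialScale! * Float(value.magnification)
                }
                .onEnded { _ in initialScale = nil })
    }
}

extension View {
    // Apply to the RealityView: RealityView { ... }.manipulationGestures()
    func manipulationGestures() -> some View { modifier(ManipulationGestures()) }
}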