I want to drag EntityA and EntityB independently, at the same time.
I've tried separating the gestures by entity, but only the last drag gesture is recognized:
RealityView { content, attachments in
    ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            ...
        }
)
I also tried .simultaneously(with:), but that didn't work either; maybe I'm missing something:
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in
                    ...
                }
        )
)
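In case it matters, the fallback I'm considering is a single gesture targeted to any entity that branches on which entity was hit (untested sketch; EntityA and EntityB are the same entities as above). It only gives one drag at a time, but at least each entity responds:
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            // Branch on the entity the system resolved for this drag.
            if value.entity === EntityA {
                // move EntityA
            } else if value.entity === EntityB {
                // move EntityB
            }
        }
)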
I want to build a panorama sphere around the user. The idea is that the users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map.
So I set up a sphere that works like a skybox and inverted its normals so that the material faces inward, using this code I found online:
import Combine
import Foundation
import RealityKit
import SwiftUI

extension Entity {
    func addSkybox(for skybox: Skybox) {
        let subscription = TextureResource
            .loadAsync(named: skybox.imageName)
            .sink(receiveCompletion: { completion in
                switch completion {
                case .finished: break
                case let .failure(error): assertionFailure("\(error)")
                }
            }, receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                self.components.set(sphere)
                /// flip sphere inside out so the texture is inside
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, 1.0, 0.0)
            })
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }

    struct SubscriptionComponent: Component {
        var subscription: AnyCancellable
    }
}
This works fine and looks awesome.
However, I can't get a gesture to work on this.
If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:
import RealityKit
import SwiftUI

struct ImmersiveMap: View {
    @State private var rotationAngle: Float = 0.0

    var body: some View {
        RealityView { content in
            let rootEntity = Entity()
            rootEntity.addSkybox(for: .worldmap)
            rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
            rootEntity.generateCollisionShapes(recursive: true)
            rootEntity.components.set(InputTargetComponent())
            content.add(rootEntity)
        }
        .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
            log("drag gesture")
        }))
    }
}
But if the user drags it from the inside (i.e. the negative x scale is in place), I get no drag events.
Is there a way to achieve this?
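One idea I want to try next (untested sketch): keep the negative-scale sphere purely visual on a child entity, and put the collision shape and input target on a normally scaled parent, so the hit-testing geometry never sees the inverted scale:
RealityView { content in
    let rootEntity = Entity()

    // Visual sphere only; addSkybox applies the negative x scale to this child.
    let skyboxEntity = Entity()
    skyboxEntity.addSkybox(for: .worldmap)
    rootEntity.addChild(skyboxEntity)

    // Hit-testing happens against this normally scaled shape on the parent instead.
    rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
    rootEntity.components.set(InputTargetComponent())

    content.add(rootEntity)
}
.gesture(DragGesture().targetedToAnyEntity().onChanged { _ in
    log("drag gesture")
})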
I want to implement an immersive environment similar to Apple TV's Cinema environment for the video that plays in my app. Currently, I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as the image for an ImageBasedLightComponent, but the API for that class seems to restrict its input to an EnvironmentResource, which appears to be meant for an equirectangular still image that has to be part of the app bundle.
Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController's player is cast onto 3D objects in the RealityKit scene?
Is Apple TV doing something wild like combining an AVPlayerViewController and a VideoMaterial, where the VideoMaterial is layered onto the objects in the scene to simulate a light source?
Thanks in advance!
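For reference, this is the static setup I have working today inside my RealityView's make closure (sketch; "Sunlight" is a placeholder EnvironmentResource in my bundle, and modelEntity stands in for my lit content):
guard let environment = try? await EnvironmentResource(named: "Sunlight") else { return }

let iblEntity = Entity()
iblEntity.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
content.add(iblEntity)

// Entities that should be lit by that image point back at the light entity.
modelEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))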
We have a custom photo booth for taking photos of people for use with photogrammetry - the usual vertical cylinder of cameras with the human subject stood in the middle.
We've found that often the lower legs of the subject are missing - this is particularly likely if the subject is wearing dark pants.
The API for PhotogrammetrySession is really very limited, but we've tried all the combinations of detail, sensitivity, and object masking we can think of; nothing results in a reliable scan.
Personally I think this is related to the automatic isolation of the subject, rather than the photogrammetry itself. Often we get just the person, perfectly modelled. Occasionally we get everything the cameras can see - including the booth itself and the room it's in! But sometimes we get this footless result.
Is there anything we can try to improve the situation?
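For context, this is roughly how we drive the session (sketch; the paths are placeholders, and we've cycled through the commented alternatives):
var configuration = PhotogrammetrySession.Configuration()
configuration.featureSensitivity = .high        // also tried .normal
configuration.isObjectMaskingEnabled = true     // also tried false
configuration.sampleOrdering = .unordered

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/booth-images", isDirectory: true),
    configuration: configuration
)
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/output.usdz"), detail: .full)
])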
I was able to add a spotlight effect to my entities using ImageBasedLightComponent and the sample code. However, I noticed that whenever you set an ImageBasedLightComponent, the environmental lighting is completely turned off. Is it possible to merge them somehow?
So imagine you have a toy in the real world and you shine a flashlight on it. The environment light should still have an effect, right?
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but does not show the content of my room. The cylinder just appears black.
In progressive (or full?) ImmersiveSpace, is it possible to apply occlusion material (or something else), so I can see the room behind the virtual content?
Basically, I want to punch a hole through the virtual content and see the room behind it.
As a practical example, imagine being in a progressive ImmersiveSpace, but you have a plane with an occlusion mesh applied to it above your Apple Magic Keyboard so you can see your keyboard.
Is this possible?
Does RealityKit support a clipping plane, where I can define a plane and have all content on one side of the plane not rendered?
I've added a simple visionOS Portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace.
For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to turn on & off the ImmersiveSpace which has a portal in it.
So far, the portal isn't showing up.
Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
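For reference, this is the portal setup that works for me inside the WindowGroup's RealityView, and that I'm essentially trying to move into the ImmersiveSpace (sketch):
let world = Entity()
world.components.set(WorldComponent())    // content only visible through the portal
// ... add the portal world's skybox / models as children of `world` here ...

let portalPlane = Entity()
portalPlane.components.set(ModelComponent(
    mesh: .generatePlane(width: 1.0, height: 1.0, cornerRadius: 0.1),
    materials: [PortalMaterial()]
))
portalPlane.components.set(PortalComponent(target: world))

content.add(world)
content.add(portalPlane)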
I'm trying to get a video material to work on an imported 3D asset, a USDC file. There's actually an example of this in a WWDC video from Apple: you can see it running on the flag of the airplane, but there's no sample code for it, and I couldn't find any other examples on the internet. Does anybody know how to do this?
You can look at 10:34 in this video.
https://developer.apple.com/documentation/realitykit/videomaterial
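This is what I'm attempting, inside a RealityView make closure (sketch; "Airplane", "Flag", and videoURL are placeholders for my own asset and video):
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)

if let airplane = try? await Entity(named: "Airplane", in: realityKitContentBundle) {
    // Find the mesh that should show the video and swap in the video material.
    if let flag = airplane.findEntity(named: "Flag"),
       var model = flag.components[ModelComponent.self] {
        model.materials = [videoMaterial]
        flag.components.set(model)
    }
    content.add(airplane)
}
player.play()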
I am attempting to place images on wall anchors and move them using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, the logic stops working properly. The problem seems to be that the anchor and drag.location3D orientations don't coincide once you are dealing with wall anchors that are not facing the original user position. (Using Xcode beta 8)
Question:
How do I apply drag gestures to an image regardless of where the wall anchor is located in relation to the user's original facing direction?
Using the following code:
var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: convertedPos.y * (-1))
        }
}
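One variation I've been experimenting with (untested sketch): convert the drag location to world space first, then into the wall anchor's local space, so the anchor's own orientation is taken into account:
.onChanged { value in
    let entity = value.entity
    guard let anchor = entity.parent else { return }

    // Drag location in world (scene) space.
    let worldPos = value.convert(value.location3D, from: .local, to: .scene)
    // The same point expressed in the wall anchor's local space.
    let localPos = anchor.convert(position: worldPos, from: nil)
    entity.position = localPos
}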
Where's the Xcode project for the "World App" referenced in the "Build spatial experiences with RealityKit" session?
At 3 minutes in, "the world app" is shown with a 2D window, and it seems to be the expected starting place for the three-module series.
I see the code snippets below the video, which seem to intend adjustments to the original project.
I've searched a..
I found it by searching GitHub; maybe I'm missing an obvious link on the page.
It is available here: https://developer.apple.com/documentation/visionos/world under the documentation page.
Hope this helps someone.
I have a view attachment attached to a hand anchor. When the attachment is facing away I don't want it to render.
I might be missing something obvious, but I've made a System that runs on every render loop. In the update call I'm getting a reference to the Attachment using components.
And this is as far as I got. I can't figure out how to get the normal of an Entity I receive in the update function.
My plan was to take the head anchor normal and compare it to the entity normal. If they are facing each other I render the viewAttachment, otherwise not.
Is there a simpler way? And if not, how do I get the normal of an entity?
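To make the question concrete, this is the kind of check I have in mind (sketch): treating the entity's local -Z axis as its "normal" and comparing it with the direction toward the head position I get from the head anchor:
func isFacingHead(_ entity: Entity, headPosition: SIMD3<Float>) -> Bool {
    // The entity's forward axis (-Z in RealityKit) expressed in world space.
    let forward = entity.orientation(relativeTo: nil).act(SIMD3<Float>(0, 0, -1))
    // Direction from the entity toward the head.
    let toHead = simd_normalize(headPosition - entity.position(relativeTo: nil))
    // Facing the head when the two directions roughly agree.
    return simd_dot(forward, toHead) > 0
}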
I can't figure this one out. I've been able to load image textures from a struct model, but not from a class model, for my ModelEntity.
This, for example, works for me; it's what I have been using up to now, without SwiftData, using a struct to hold my model:
if let imageURL = model.imageURL {
    let picInBox2 = ModelEntity(mesh: .generateBox(size: simd_make_float3(0.6, 0.5, 0.075), cornerRadius: 0.01))
    picInBox2.position = simd_make_float3(0, 0, -0.8)
    if let texture = try? TextureResource.load(contentsOf: imageURL) {
        var unlitMaterial = UnlitMaterial()
        unlitMaterial.baseColor = MaterialColorParameter.texture(texture)
        // Use the material that actually has the texture applied.
        picInBox2.model?.materials = [unlitMaterial]
    }
}
However, when I try to use my SwiftData model, it doesn't work. I need to convert Data to a URL, and I am not able to do this.
This is what I would like to use for my image texture, from my SwiftData model
@Attribute(.externalStorage)
var image: Data?
When I try to do this, substituting
if let imageURL = item.image {
for the old
if let imageURL = model.imageURL {
in
if let imageURL = model.imageURL {
    if let texture = try? TextureResource.load(contentsOf: imageURL) {
        var unlitMaterial = UnlitMaterial()
        unlitMaterial.baseColor = MaterialColorParameter.texture(texture)
        picInBox2.model?.materials = [unlitMaterial]
    }
}
it doesn't work.
I get the error:
Cannot convert value of type 'Data' to expected argument type 'URL'
How can I convert the type 'Data' to the expected argument type 'URL'?
The original imageURL I am using here comes from the struct Model where it's saved as a variable
var imageURL: URL? = Bundle.main.url(forResource: "cat", withExtension: "png")
I am at my wit's end. Thank you for any pointers!
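One route I'm considering (untested sketch) is to skip the URL entirely and build the texture straight from the SwiftData bytes via CGImage; the alternative would be writing the Data to a temporary file and keeping TextureResource.load(contentsOf:):
if let data = item.image,
   let uiImage = UIImage(data: data),
   let cgImage = uiImage.cgImage,
   let texture = try? TextureResource.generate(from: cgImage, options: .init(semantic: .color)) {
    var unlitMaterial = UnlitMaterial()
    unlitMaterial.baseColor = MaterialColorParameter.texture(texture)
    picInBox2.model?.materials = [unlitMaterial]
}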
I've added the Starfield image from Apple's World sample code to the Progressive immersive project template, and I've experimented with a few other images I had around. I have a few questions:
(1) Lighter shots look fairly pixelated. Does Apple recommend any minimum/maximum resolutions for images used for the giant sphere? (I noticed Starfield is 4096x4096)
(2) I just put the other images in the 2x well for the image set. Should I put other images in their own 2x well no matter the DPI of the image?
(3) Apple's Starfield image is square, but skybox images I've used before tend to be much wider (with the top and bottom areas distorted). Is there a particular aspect ratio I should be using?
(4) In at least one case, I think the center of the image was rotated to the right by about 20 degrees. Is this expected? Could it have been an artifact of the image's size or aspect ratio?
Hey friends, I'm using a drag gesture to rotate a parent object that contains several child colliders. When I drag slowly, sometimes the child colliders don't rotate along with the parent. Any help would be appreciated, thanks!
.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let currentLocation = value.convert(value.location3D, from: .local, to: .scene)
            let delta = currentLocation - startLocation
            let spinX = Double(delta.y)
            let spinY = Double(delta.x)
            let pitch = Transform(pitch: Float(spinX * -1)).matrix
            let roll = Transform(roll: Float(spinY * -1)).matrix
            value.entity.transform.matrix = roll * pitch
        }
)
Version details:
Xcode Version 15.3 beta (15E5178i)
visionOS 1.0 (21N301) SDK + visionOS 1.0 (21N305) Simulator (Installed)
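One variation I've been testing (sketch; startOrientation is a new state variable I added): keep the orientation the entity had when the drag started and apply the spin on top of it with quaternions, instead of rebuilding the whole transform matrix:
@State private var startOrientation: simd_quatf? = nil

// ...

.gesture(
    DragGesture()
        .targetedToAnyEntity()
        .onChanged { value in
            if startOrientation == nil {
                startOrientation = value.entity.orientation(relativeTo: nil)
            }
            let start = value.convert(value.startLocation3D, from: .local, to: .scene)
            let current = value.convert(value.location3D, from: .local, to: .scene)
            let delta = current - start
            let pitch = simd_quatf(angle: -delta.y, axis: [1, 0, 0])
            let roll = simd_quatf(angle: -delta.x, axis: [0, 0, 1])
            // Only the rotation changes; translation and scale are left alone.
            value.entity.setOrientation(roll * pitch * startOrientation!, relativeTo: nil)
        }
        .onEnded { _ in startOrientation = nil }
)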
I'm trying to make a ModelEntity with a CustomMaterial.GeometryModifier for which I also created a metal shader file.
The said shader file is extremely simple at this time:
#include <metal_stdlib>
#include <RealityKit/RealityKit.h>
using namespace metal;
[[visible]]
void ExpandGeometryModifier(realitykit::geometry_parameters params)
{
// Nothing.
}
When trying to compile my project, I get the following error:
'RealityKit/RealityKit.h' file not found
Is this not supported on visionOS?
I'm rebuilding a Unity app in Swift because Unity's PolySpatial library doesn't support LineRenderers yet, and that's like 90% of my app.
So far I can draw 2D lines in the visionOS "Hello World" project using paths and CGPoints in the body view of the Globe.swift file. I don't really know what I'm doing; I just got some example lines from ChatGPT that work for a 2D line, but I can't make these 3D.
I haven't been able to find anything on drawing lines for the Vision Pro, and not just 2D lines: I need to draw helixes (helices?). Am I missing something? Thanks, Adam
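For what it's worth, here's what I've resorted to so far (sketch): modelling each line segment as a thin, stretched box entity oriented from one point to the next. For a helix I'd sample points along the curve and chain these segments:
func lineEntity(from a: SIMD3<Float>, to b: SIMD3<Float>, thickness: Float = 0.002) -> ModelEntity {
    let length = simd_distance(a, b)
    let mesh = MeshResource.generateBox(width: thickness, height: thickness, depth: length)
    let entity = ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .white, isMetallic: false)])
    // Center the segment between the two points.
    entity.position = (a + b) / 2
    // Rotate the box's local +Z axis onto the segment direction.
    entity.orientation = simd_quatf(from: SIMD3<Float>(0, 0, 1), to: simd_normalize(b - a))
    return entity
}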
How do I respond to a SpatialTapGesture in my RealityView when the tap is on no entity whatsoever? I tried just doing
RealityView { _ in }
.gesture(SpatialTapGesture().onEnded { _ in print("foo") })
but that doesn't get called.
All I can find searching is advice to add Collision and Input components to entities, but I don't want this on an entity; I want it when the user is not looking at any specific entity.
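The workaround I'm considering (untested sketch; I haven't verified that hit-testing works from inside the shape): a large invisible "catch-all" entity with only collision and input components, so taps that miss real content still land on something:
RealityView { content in
    let catchAll = Entity()
    catchAll.name = "catchAll"
    catchAll.components.set(CollisionComponent(shapes: [.generateSphere(radius: 50)]))
    catchAll.components.set(InputTargetComponent())
    content.add(catchAll)
}
.gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { value in
    if value.entity.name == "catchAll" {
        print("tap on no 'real' entity")
    }
})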
Hi!
I have a Flutter project that targets Web and iOS. Overall, our app works quite well on Vision Pro, with the only issue being that our UI elements do not highlight when the user looks at them. (Our UI will highlight on mouseover, however. We have tried tinkering with the mouseover visuals, but this did not help.)
We're considering writing some native Swift code to patch this hole in Flutter's visionOS support. However, after some amount of searching, the documentation doesn't provide any obvious solutions.
The HoverEffectComponent ( https://developer.apple.com/documentation/realitykit/hovereffectcomponent ) in RealityKit seems like the closest there is to adding focus-based behavior. However, if I understand correctly, this means adding an Entity for every Flutter UI element the user can interact with, and then rebuilding the list of Entities every time the UI is repainted... doesn't sound especially performant.
Is there some other method of capturing the user's gaze in the context of an iOS app?
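For reference, this is apparently what each interactive region would need on the RealityKit side (sketch), which is why the per-element bookkeeping worries me:
func hoverRegion(width: Float, height: Float) -> Entity {
    let region = Entity()
    region.components.set(CollisionComponent(shapes: [.generateBox(width: width, height: height, depth: 0.01)]))
    region.components.set(InputTargetComponent())
    region.components.set(HoverEffectComponent())   // system-drawn gaze highlight
    return region
}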
Hi,
I'm trying to display an STL model file in visionOS. I import the STL file using SceneKit's ModelIO extension, add it to an empty USDA scene, and then export the finished scene into a temporary USDZ file. From there I load the USDZ file as an Entity and add it onto the content.
However, the model in the resulting USDZ file has no lighting and appears as an unlit solid. Please see the screenshot below:
The top one is created by directly importing a USDA scene, with the model already added using Reality Composer, into an Entity; it works as expected.
The middle one is created by importing the STL model as an MDLAsset using ModelIO, adding it to the empty scene, and exporting it as USDZ, then importing the USDZ into an Entity. This is what I want to be able to do, and it is broken.
The bottom one is just for me to debug the USDZ import/export. It was added to the empty scene using Reality Composer and works as expected, so the USDZ export/import is not broken as far as I can tell.
Full code:
import SwiftUI
import ARKit
import SceneKit.ModelIO
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false
    @State private var showImmersiveSpace = false
    @State private var immersiveSpaceIsShown = false

    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var modelUrl: URL? = {
        if let url = Bundle.main.url(forResource: "Trent 900 STL", withExtension: "stl") {
            let asset = MDLAsset(url: url)
            asset.loadTextures()
            let object = asset.object(at: 0) as! MDLMesh
            let emptyScene = SCNScene(named: "EmptyScene.usda")!
            let scene = SCNScene(mdlAsset: asset)

            // Position node in scene and scale
            let node = SCNNode(mdlObject: object)
            node.position = SCNVector3(0.0, 0.1, 0.0)
            node.scale = SCNVector3(0.02, 0.02, 0.02)

            // Copy materials from the test model in the empty scene to our new object (doesn't really change anything)
            node.geometry?.materials = emptyScene.rootNode.childNodes[0].childNodes[0].childNodes[0].childNodes[0].geometry!.materials

            // Add new node to our empty scene
            emptyScene.rootNode.addChildNode(node)

            let fileManager = FileManager.default
            let appSupportDirectory = try! fileManager.url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
            let permanentUrl = appSupportDirectory.appendingPathComponent("converted.usdz")
            if emptyScene.write(to: permanentUrl, delegate: nil) {
                // We exported, now load and display
                return permanentUrl
            }
        }
        return nil
    }()

    var body: some View {
        VStack {
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(contentsOf: modelUrl!) {
                    // Displays middle and bottom models
                    content.add(scene)
                }
                if let scene2 = try? await Entity(named: "JetScene", in: realityKitContentBundle) {
                    // Displays top model using premade scene and exported as USDA.
                    content.add(scene2)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack(spacing: 12) {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .font(.title)
                Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
                    .font(.title)
            }
            .frame(width: 360)
            .padding(36)
            .glassBackgroundEffect()
        }
        .onChange(of: showImmersiveSpace) { _, newValue in
            Task {
                if newValue {
                    switch await openImmersiveSpace(id: "ImmersiveSpace") {
                    case .opened:
                        immersiveSpaceIsShown = true
                    case .error, .userCancelled:
                        fallthrough
                    @unknown default:
                        immersiveSpaceIsShown = false
                        showImmersiveSpace = false
                    }
                } else if immersiveSpaceIsShown {
                    await dismissImmersiveSpace()
                    immersiveSpaceIsShown = false
                }
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
To test this even further, I exported the generated USDZ and opened it in Reality Composer. The added model was still broken, while the test model in the scene was fine. This further proved that the import/export is fine and RealityKit is not doing something weird with the imported model.
I am convinced this has to be something with the way I'm using ModelIO to import the STL file.
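In the meantime, the workaround I've been sketching (untested; it assumes the issue is simply that the STL carries no material data) is to walk the loaded entity tree and assign a real PBR material to every mesh:
func applyDefaultMaterial(to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        var material = PhysicallyBasedMaterial()
        material.baseColor = .init(tint: .gray)
        material.roughness = .init(floatLiteral: 0.6)
        material.metallic = .init(floatLiteral: 0.0)
        // Replace every existing material slot with the PBR material.
        model.materials = model.materials.map { _ in material }
        entity.components.set(model)
    }
    for child in entity.children {
        applyDefaultMaterial(to: child)
    }
}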
Any help is appreciated. Thank you