UI:
Attachment(id: "tooltip") {
    if isRecording {
        TooltipView {
            HStack(spacing: 8) {
                Image(systemName: "waveform")
                    .font(.title)
                    .frame(minWidth: 100)
            }
        }
        .transition(.opacity.combined(with: .scale))
    }
}
Trigger:
Button("Toggle") {
    withAnimation {
        isRecording.toggle()
    }
}
The code above shows no animation effect when run. When I use isRecording to drive an element in an ordinary SwiftUI view, the animation works.
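One thing worth trying (a sketch, not a confirmed fix; it assumes isRecording is the same @State property shown above): attachments don't always pick up the transaction from withAnimation at the toggle site, so scoping an explicit .animation(_:value:) to the attachment's content can help:

```swift
// Sketch: key an explicit animation to the state change inside the
// attachment itself, instead of relying on withAnimation at the call site.
Attachment(id: "tooltip") {
    ZStack {
        if isRecording {
            TooltipView {
                HStack(spacing: 8) {
                    Image(systemName: "waveform")
                        .font(.title)
                        .frame(minWidth: 100)
                }
            }
            .transition(.opacity.combined(with: .scale))
        }
    }
    // The ZStack gives the transition a stable container to animate in.
    .animation(.easeInOut(duration: 0.3), value: isRecording)
}
```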
RealityKit
Hi, I'm trying to capture some images from a WKWebView on visionOS. I know there's a takeSnapshot() function that can get an image of the web page, but if drawHierarchy() can't work properly on WKWebView because of GPU content, are there any other methods I can call to capture images correctly?
Furthermore, since I've put my web view into an immersive space, is there any way I can get the texture of this UIView attachment? Thank you
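In case it helps, a minimal sketch of the takeSnapshot(with:) path (the configuration values are illustrative, not required):

```swift
import UIKit
import WebKit

// Sketch: WKWebView.takeSnapshot(with:completionHandler:) renders the page,
// including GPU-composited content, into a UIImage, whereas
// drawHierarchy(in:afterScreenUpdates:) often yields blank output for WKWebView.
func captureWebView(_ webView: WKWebView, completion: @escaping (UIImage?) -> Void) {
    let config = WKSnapshotConfiguration()
    config.rect = webView.bounds        // capture the full visible area
    config.afterScreenUpdates = true    // wait for pending rendering to land
    webView.takeSnapshot(with: config) { image, error in
        if let error {
            print("Snapshot failed: \(error)")
        }
        completion(image)
    }
}
```

For the texture-of-the-attachment question, one unverified route is to convert the resulting UIImage's CGImage into a TextureResource.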
Context being VisionOS development, I was trying to do something like
let root = ModelEntity()
let child1 = ModelEntity(...)
root.addChild(child1)
let child2 = ModelEntity(...)
root.addChild(child2)
only to find that, despite the entities seemingly being grouped together, I can only pick the individual child entities when I apply a DragGesture in visionOS. Any idea what's going on?
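Gestures hit-test against collision shapes, and a bare ModelEntity() root has no mesh, so only the children are pickable. A sketch of one way around that (the box and sphere meshes stand in for your real children):

```swift
import RealityKit

let root = ModelEntity()
let child1 = ModelEntity(mesh: .generateBox(size: 0.1), materials: [SimpleMaterial()])
root.addChild(child1)
let child2 = ModelEntity(mesh: .generateSphere(radius: 0.05), materials: [SimpleMaterial()])
root.addChild(child2)

// Build collision shapes for the whole subtree, then mark the root as the
// input target so gestures can resolve to it.
root.generateCollisionShapes(recursive: true)
root.components.set(InputTargetComponent())
```

With that in place, a DragGesture scoped with .targetedToEntity(root) should move the group as one; with .targetedToAnyEntity() you can still walk value.entity.parent up to the root yourself.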
I have a RealityView in my visionOS app. I can't figure out how to access RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't figure out how to access it for my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, Entities, and Components, I can't find it.
If you create a custom shader you get access to a collection of uniform values, one is the uniforms::time() parameter which is defined as "the number of seconds that have elapsed since RealityKit began rendering
the current scene" in this doc: https://developer.apple.com/metal/Metal-RealityKit-APIs.pdf
Is there some way to get this value from Swift code? I want to animate a value in my shader based on the time so I need to get the starting time value so I can interpolate the animation offset from that point. If I create a System in the update() function I get a SceneUpdateContext instance and that has a deltaTime property but not an elapsedTime property which I would assume would map to the shader time() value.
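I don't know of a Swift API that exposes the renderer's time() directly, but here is a sketch of the accumulate-deltaTime workaround. Note the accumulated value starts when the System first updates, which may differ slightly from when RealityKit began rendering:

```swift
import RealityKit

// Sketch: accumulate deltaTime in a System to approximate the shader's
// uniforms::time() value from Swift.
struct ElapsedTimeSystem: System {
    // Total seconds since this System first ran.
    static var elapsed: TimeInterval = 0

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        Self.elapsed += context.deltaTime
    }
}
```

Register it once at launch with ElapsedTimeSystem.registerSystem(), then read ElapsedTimeSystem.elapsed when you want to phase-align your shader animation.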
Is it possible with iOS 18 to use RealityView with world tracking but without the camera feed as background?
With content.camera = .worldTracking the background is always the camera feed, and with content.camera = .virtual the device's position and orientation don't affect the view point. Is there a way to make a mixture of both?
My use case is that my app "Encyclopedia GalacticAR" shows astronomical objects and a skybox (a huge sphere), like a VR view of planets, as you can see in the left image. Now that iOS 18 offers RealityView for iOS and iPadOS, I would like to make use of it, but I haven't found a way to display my skybox as environment, instead of the camera feed.
I filed the suggestion FB14734105 but hope that somebody knows a workaround...
Hello Guys,
I'm looking for a way to look at a 3D object around me, pinch it (from afar, with eyes and two fingers, not by touching it) and display a little information on top of it (like a small text label).
If you have the solution you will make me happy (and a bit less stupid too :) )
Cheers
Mathis
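A sketch of the gaze-and-pinch part (selectedInfo is a hypothetical @State String; positioning the attachment over the entity is elided):

```swift
// Sketch: a SpatialTapGesture targeted to entities fires on the visionOS
// look-and-pinch interaction; a tapped entity must carry both a
// CollisionComponent and an InputTargetComponent to be targetable.
RealityView { content, attachments in
    // ... add your entities here; for each pickable one:
    // entity.generateCollisionShapes(recursive: true)
    // entity.components.set(InputTargetComponent())
} attachments: {
    Attachment(id: "info") {
        Text(selectedInfo)
            .padding()
            .glassBackgroundEffect()
    }
}
.gesture(
    SpatialTapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            // value.entity is the entity that was looked at and pinched.
            selectedInfo = value.entity.name
        }
)
```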
Starting with Xcode beta 4 and later, any ModelEntity I load from a usdz that contains a skeletal pose has no pins. The pins used to be accessible from a ModelEntity, so you could use alignment with other pins.
Per the documentation, any ModelEntity with a skeletal pose should have automatically generated pins contained in the entity.pins object itself.
https://developer.apple.com/documentation/RealityKit/Entity/pins
Is this a bug with the later Xcode betas or is the documentation wrong?
I'm trying to clone an entity that's somewhere deeper in the hierarchy, and I want it together with a transform that takes its parents into account.
Initially I made something that would walk back through the parents, collect their transforms, and then reduce them to a single one. Then I realized that what I'm doing is the same as .transformMatrix(relativeTo: rootEntity), but to validate that my version gives the same results I started printing both, and I noticed that for some reason the last row, instead of a stable (0, 0, 0, 1), is sometimes (0, 0, 0, 0.9999...). I know that there are rounding errors, but I'd assume that 0 and 1 are "magical" in the floating-point world.
The only way I can try to explain it is that .transformMatrix uses some fancy accelerated matrix multiplication that produces somewhat bigger rounding errors. That would explain the slight differences in the other fields between my version and the function call, but still, the 1 seems weird.
Here's the function I'm using to compare:
func cloneFlattened(entity: Entity, withChildren recursive: Bool) -> Entity {
    let clone = entity.clone(recursive: recursive)
    var transforms = [entity.transform.matrix]
    var parent: Entity? = entity.parent
    var rootEntity: Entity = entity
    while parent != nil {
        rootEntity = parent!
        transforms.append(parent!.transform.matrix)
        parent = parent!.parent
    }
    if transforms.count > 1 {
        clone.transform.matrix = transforms.reversed().reduce(simd_diagonal_matrix(simd_float4(repeating: 1)), *)
        print("QWE CLONE FLATTENED: \(clone.transform.matrix)")
        print("QWE CLONE RELATIVE : \(entity.transformMatrix(relativeTo: rootEntity))")
    } else {
        print("QWE CLONE SINGLE   : \(clone.transform.matrix)")
    }
    return clone
}
Sometimes the last element is not 1:
QWE CLONE FLATTENED: [
[0.00042261832, 0.0009063079, 0.0, 0.0],
[-0.0009063079, 0.00042261832, 0.0, 0.0],
[0.0, 0.0, 0.0010000002, 0.0],
[-0.0013045187, -0.009559666, -0.04027118, 1.0]
]
QWE CLONE RELATIVE : [
[0.00042261826, 0.0009063076, -4.681872e-12, 0.0],
[-0.0009063076, 0.00042261826, 3.580335e-12, 0.0],
[3.4256328e-12, 1.8047965e-13, 0.0009999998, 0.0],
[-0.0013045263, -0.009559661, -0.040271178, 0.9999997]
]
Sometimes it is:
QWE CLONE FLATTENED: [
[0.0009980977, -6.1623556e-05, -1.7382005e-06, 0.0],
[-6.136851e-05, -0.0009958588, 6.707259e-05, 0.0],
[-5.8642554e-06, -6.683835e-05, -0.0009977464, 0.0],
[-1.761913e-06, -0.002, 0.0, 1.0]
]
QWE CLONE RELATIVE : [
[0.0009980979, -6.1623556e-05, -1.7382023e-06, 0.0],
[-6.136855e-05, -0.0009958589, 6.707254e-05, 0.0],
[-5.864262e-06, -6.6838256e-05, -0.0009977465, 0.0],
[-1.758337e-06, -0.0019999966, -3.7252903e-09, 1.0]
]
The 0s in the last row seem to be stable.
It happens both for entities that are a few levels deep and those that have only an anchor as a parent.
So far I've never seen any value that would not be "technically a 1", but my hierarchies are not very deep and it makes me wonder if this rounding could get worse.
Or is it just me doing something stupid? :)
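For comparison, a shorter sketch that sidesteps the manual reduce entirely: assign the world-space matrix via Transform(matrix:), whose decomposition into scale/rotation/translation should also discard any drift in the bottom row:

```swift
import RealityKit

// Sketch: let RealityKit compose the parent chain, then store the result as
// a decomposed Transform rather than a raw matrix.
func cloneFlattened(entity: Entity, withChildren recursive: Bool) -> Entity {
    let clone = entity.clone(recursive: recursive)
    // relativeTo: nil means "relative to world space", i.e. every ancestor
    // transform up to the root is already folded in.
    clone.transform = Transform(matrix: entity.transformMatrix(relativeTo: nil))
    return clone
}
```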
I'm trying to load up a virtual skybox, different from the built-in default, for a simple macOS rendering of RealityKit content.
I was following the detail at https://developer.apple.com/documentation/realitykit/environmentresource, and created a folder called "light.skybox" with a single file in it ("prairie.hdr"), and then I'm trying to load that and set it as the environment on the arView when it's created:
let ar = ARView(frame: .zero)
do {
    let resource = try EnvironmentResource.load(named: "prairie")
    ar.environment.lighting.resource = resource
} catch {
    print("Unable to load resource: \(error)")
}
The loading always fails when I launch the sample app, reporting "Unable to load resource ..." and when I look in the App bundle, the resource that's included there as Contents/Resources/light.realityenv is an entirely different size - appearing to be the default lighting.
I've tried making the folder "light.skybox" explicitly target the app bundle for inclusion, but I don't see it get embedded whether I toggle it that way or leave the default.
Is there anything I need to do to get Xcode to process and include the lighting I'm providing?
(This is inspired from https://stackoverflow.com/questions/77332150/realitykit-how-to-disable-default-lighting-in-nonar-arview, which shows an example for UIKit)
Hello,
I'm playing around with making a fully immersive multiplayer, air-to-air dogfighting game, but I'm having trouble figuring out how to attach a camera to an entity.
I have a plane that's controlled with a gamepad, and I want the camera's position to be pinned to that entity as it moves about space, while maintaining the user's ability to look around.
Is this possible?
--
From my understanding, the current state of SceneKit, ARKit, and RealityKit is a bit confusing as to what can and cannot be done.
SceneKit
Full control of the camera
Not sure if it can use RealityKit's ECS system.
2D window. - Missing full immersion.
ARKit
Full control of the camera* - but only for non-Vision Pro devices, since visionOS doesn't have an ARView.
Has RealityKit's ECS system
2D window. - Missing full immersion.
RealityKit
Camera is pinned to the device's position and orientation
Has RealityKit's ECS system
Allows full immersion
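On the non-Vision platforms, where ARView gives you camera control, a sketch of the pinning looks like this (a PerspectiveCamera is itself an Entity, so it can simply be parented to the plane; the mesh and offset values are made up):

```swift
import RealityKit

// Sketch (iOS/macOS, ARView in .nonAR mode): parenting the camera to the
// plane keeps it pinned as the plane flies around.
let plane = ModelEntity(mesh: .generateBox(size: 0.5), materials: [SimpleMaterial()])

let camera = PerspectiveCamera()
camera.position = [0, 0.2, 0.8]   // hypothetical chase-cam offset
plane.addChild(camera)
```

On visionOS the camera follows the device, so an equivalent trick isn't available there; moving the world inversely around the user is the workaround usually discussed.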
Hey everyone,
I'm working on an iOS app where I use AVPlayer to play videos, then process them through Metal to apply effects. The app has controls that let users tweak these effects in real time, and I want the final processed video to be streamed via AirPlay. I use a custom rendering layer backed by a Metal texture to display the processed video on screen, and that works as intended.
The problem is that when I try to AirPlay the video after feeding it the processed Metal frames, it just streams the original video from AVPlayer, not the version with all the Metal effects.
The final processed output is a Metal texture that gets rendered in an MTKView. I even tried capturing that texture and sending it through a new AVPlayer setup, but AirPlay still grabs the original, unprocessed video instead of the final, fully rendered output. It's also clear that the AirPlayed video has the full length of the original built in, so it's not even that it's "live streaming" the wrong feed.
I need help figuring out how to make AirPlay stream the live, processed video with all the effects, not just the raw video. Any ideas? Happy to share my code if that helps, but I'm not sure I have the right underlying approach yet.
Thanks!
Is it possible to animate a property on a RealityKit component? For example, OpacityComponent has an opacity property that modifies the opacity of the entities it's attached to. I would like to animate that property so the entity fades in and out.
I've been looking at the animation API for RealityKit, and it either assumes the animation comes from a USDZ (which this one doesn't), or it allows properties of the entities themselves to be animated using a BindTarget. I'm not sure how either can be adapted to modify component properties.
Am I missing something?
Thanks
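One workaround, if no BindTarget fits: drive the component from a custom System. This is a sketch, not the built-in animation API; FadeComponent and FadeSystem are names I made up, and both need registering (FadeComponent.registerComponent(), FadeSystem.registerSystem()) before use:

```swift
import RealityKit

// Hypothetical component describing the fade we want.
struct FadeComponent: Component {
    var target: Float        // opacity to move toward (0...1)
    var speed: Float = 1.0   // opacity units per second
}

// System that steps every faded entity's OpacityComponent toward its target.
struct FadeSystem: System {
    static let query = EntityQuery(where: .has(FadeComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard let fade = entity.components[FadeComponent.self] else { continue }
            var opacity = entity.components[OpacityComponent.self] ?? OpacityComponent(opacity: 1)
            let step = fade.speed * Float(context.deltaTime)
            // Move toward the target, clamped to one frame's worth of change.
            opacity.opacity += min(max(fade.target - opacity.opacity, -step), step)
            entity.components.set(opacity)
        }
    }
}
```

Fading out is then just entity.components.set(FadeComponent(target: 0)).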
I am trying to make a shader that resembles a laser like this:
I've been experimenting with a basic Fresnel shader to start, but the Fresnel shader has a problem at high viewing angles where the top has a very different color than the rest of the capsule.
This might work for a laser shader once inverted and fine tuned:
However, when viewed from the top, it doesn't look so good anymore:
Ideally, the purple edge is always ONLY on the edge, and the rest of the surface is the light pink color, no matter the viewing angle. How can I accomplish this to create something that looks like a laser?
I have some older code running an ARView in .nonAR mode with a perspective camera that is moved around. It seems that if I run this on an iPhone with the iOS 18 beta or the iOS 18.1 beta, it ignores the camera and the view looks incorrect.
Hi, I have a question: is it possible to create an occlusion material that hides only specific entities? For example, I would like to create a mask that hides a cube entity sitting in front of a sphere entity, so that through the occlusion I can see the sphere but not the cube.
Hi, I'm working on a project that uses RealityKit, including the placement of 3D objects.
However, I want to be able to run the background camera through Metal post-processing before it is rendered, but I haven't been able to find a working approach. I'm open to it rendering directly into the ARView, or into a separate MTKView or SwiftUI layer.
I've tried using Xcode's default "Augmented Reality App" project with Metal content. However, it seems to use a 1.33 aspect-ratio camera by default instead of the iPhone 15's standard ratio, which works by default when I use the regular RealityKit pathway, and the proper ratio doesn't seem to be available as an option.
Open to any approach that gets the job done here.
Thank you,
Any direction would be appreciated.
Hello,
I'm developing a RealityKit-based app. As part of this, I would like to have a material applied to 3D objects that essentially contains a texture which is the live camera feed from the ARSession.
I have the code below, which does apply a texture of the camera feed to the box, but it essentially only shows the camera snapshot from when the app loads and doesn't update continuously.
I think the issue might be that there is some problem with how the delegate is set up, and captureOutput is only called when the app loads instead of every frame. Open to any other approach or insight that gets the job done.
Thank you for the help!
class CameraTextureViewController: UIViewController {
    var arView: ARView!
    var captureSession: AVCaptureSession!
    var videoOutput: AVCaptureVideoDataOutput!
    var material: UnlitMaterial?
    var displayLink: CADisplayLink?
    var currentPixelBuffer: CVPixelBuffer?
    var device: MTLDevice!
    var commandQueue: MTLCommandQueue!
    var context: CIContext!
    var textureCache: CVMetalTextureCache!

    override func viewDidLoad() {
        super.viewDidLoad()
        setupARView()
        setupCaptureSession()
        setupMetal()
        setupDisplayLink()
    }

    func setupARView() {
        arView = ARView(frame: view.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(arView)
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal, .vertical]
        arView.session.run(configuration)
        arView.session.delegate = self
    }

    func setupCaptureSession() {
        captureSession = AVCaptureSession()
        captureSession.beginConfiguration()
        guard let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
              let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
              captureSession.canAddInput(videoDeviceInput) else { return }
        captureSession.addInput(videoDeviceInput)
        videoOutput = AVCaptureVideoDataOutput()
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "cameraQueue"))
        videoOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
        guard captureSession.canAddOutput(videoOutput) else { return }
        captureSession.addOutput(videoOutput)
        captureSession.commitConfiguration()
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            self?.captureSession.startRunning()
        }
    }

    func setupMetal() {
        device = MTLCreateSystemDefaultDevice()
        commandQueue = device.makeCommandQueue()
        context = CIContext(mtlDevice: device)
        CVMetalTextureCacheCreate(nil, nil, device, nil, &textureCache)
    }

    func setupDisplayLink() {
        displayLink = CADisplayLink(target: self, selector: #selector(updateFrame))
        displayLink?.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
        displayLink?.add(to: .main, forMode: .default)
    }

    @objc func updateFrame() {
        guard let pixelBuffer = currentPixelBuffer else { return }
        updateMaterial(with: pixelBuffer)
    }

    func updateMaterial(with pixelBuffer: CVPixelBuffer) {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        var tempPixelBuffer: CVPixelBuffer?
        let attrs = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferMetalCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary
        CVPixelBufferCreate(kCFAllocatorDefault, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer), kCVPixelFormatType_32BGRA, attrs, &tempPixelBuffer)
        guard let tempPixelBuffer = tempPixelBuffer else { return }
        context.render(ciImage, to: tempPixelBuffer)
        var textureRef: CVMetalTexture?
        let width = CVPixelBufferGetWidth(tempPixelBuffer)
        let height = CVPixelBufferGetHeight(tempPixelBuffer)
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, tempPixelBuffer, nil, .bgra8Unorm, width, height, 0, &textureRef)
        guard let metalTexture = CVMetalTextureGetTexture(textureRef!) else { return }
        let ciImageFromTexture = CIImage(mtlTexture: metalTexture, options: nil)!
        guard let cgImage = context.createCGImage(ciImageFromTexture, from: ciImageFromTexture.extent) else { return }
        guard let textureResource = try? TextureResource.generate(from: cgImage, options: .init(semantic: .color)) else { return }
        if material == nil {
            material = UnlitMaterial()
        }
        material?.baseColor = .texture(textureResource)
        guard let modelEntity = arView.scene.anchors.first?.children.first as? ModelEntity else {
            let mesh = MeshResource.generateBox(size: 0.2)
            let modelEntity = ModelEntity(mesh: mesh, materials: [material!])
            let anchor = AnchorEntity(world: [0, 0, -0.5])
            anchor.addChild(modelEntity)
            arView.scene.anchors.append(anchor)
            return
        }
        modelEntity.model?.materials = [material!]
    }
}

extension CameraTextureViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        currentPixelBuffer = pixelBuffer
    }
}

extension CameraTextureViewController: ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Handle AR frame updates if necessary
    }
}
This question is about USD animations playing correctly in macOS Preview but not with RealityKit on visionOS.
I have a USD file created with 3D Studio Max that contains mesh-based smoke animation:
https://drive.google.com/file/d/1L7Jophgvw0u0USSv-_0fPGuCuJtapmzo/view
(5.6 MB)
Apple's macOS 14.5 Preview app is able to play the animation correctly:
However, when a visionOS app uses RealityKit to load that same USD file in visionOS 2.0 beta 4, built with Xcode 16.0 beta 3 (16A5202i), and Entity/playAnimation is called, the animation does not play as expected:
This same app is able to successfully play animation of a hierarchy of solid objects read from a different USD file.
When I inspect the RealityKit entities loaded from the USD file, the ground plane entity is a ModelEntity, as expected, but the smoke entity type is Entity, with no associated geometry.
Why is it that macOS Preview can play the animation in the file, but RealityKit cannot?
Thank you for considering this question.
Hello,
is it possible to take a screenshot of the whole immersive view, including or excluding SwiftUI components? ARView has a snapshot method for this, but it seems there's no equivalent for RealityView.
I've tried to use ImageRenderer on a parent of RealityView, but I'm only getting plain white bitmap so far.
Thanks in advance,
Rlu