In my Metal-based app, I ray-march a 3D texture. I'd like to use RealityKit instead of my own code. I see there is a LowLevelTexture (beta) where I could specify a 3D texture. However on the Metal side, there doesn't seem to be any way to access a 3D texture (realitykit::texture::textures::custom returns a texture2d).
Any workarounds? Could I even do something icky like casting the texture2d to a texture3d in MSL? (Is that even possible?) Or could I encode the 3D texture into an argument buffer and pass it in that way?
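For reference, this is roughly the Swift-side setup I have in mind. It is only a sketch: I am assuming the beta LowLevelTexture.Descriptor exposes a 3D texture type and depth roughly like this, and the names may not match the current beta exactly.
import RealityKit
// Sketch only (beta API, assumed shape): describe a 3D texture and wrap it
// in a TextureResource that a custom material could reference.
var descriptor = LowLevelTexture.Descriptor()
descriptor.textureType = .type3D      // the 3D texture I'd like to ray-march
descriptor.pixelFormat = .r16Float
descriptor.width = 256
descriptor.height = 256
descriptor.depth = 256
descriptor.textureUsage = [.shaderRead]
let volumeTexture = try LowLevelTexture(descriptor: descriptor)
let volumeResource = try TextureResource(from: volumeTexture)
// Open question: how to sample this as a texture3d in the surface shader,
// given that realitykit::texture::textures::custom returns a texture2d.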
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappears when I remove the following code:
GKLocalPlayer.local.authenticateHandler = { viewController, error in
// ... some more code ...
}
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
I am trying to convert a ThreeJS project to Metal for the Vision Pro. The issue is that ThreeJS doesn't do any color space conversion: when I output a color in a fragment shader and then read it with the Digital Color Meter in sRGB mode, I get the same value I input in the fragment shader. This is not the case with Metal. When setting up my LayerRenderer I set the colorFormat to rgba16Unorm, since it is the only non-sRGB color format supported for Vision Pro apps. However, switching between bgra8Unorm_srgb and rgba16Unorm seems to have no effect.
When I set up the renderPassDescriptor, I use the drawable's color texture:
renderPassDescriptor.colorAttachments[0].texture = drawable.colorTextures[0]
and when I print its pixel format, it appears to be passed through from the configuration.
If there is any way to disable this behavior, or an inverse function I can apply so that I get the original value back out of the shader, that would be appreciated.
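For what it's worth, the relationship I'm trying to undo looks like the standard sRGB transfer function. A sketch of that math in Swift; the same formula could be applied at the end of the fragment shader to pre-compensate, assuming the compositor linearizes the output (an assumption on my part, not something confirmed by the LayerRenderer documentation):
import Foundation
// Standard sRGB OETF (linear -> encoded) and its inverse (encoded -> linear).
// Applying the encode at the end of the fragment shader would pre-compensate
// a later linearization, so the measured on-screen value matches the raw
// value written in the shader.
func linearToSRGB(_ x: Double) -> Double {
    x <= 0.0031308 ? 12.92 * x : 1.055 * pow(x, 1.0 / 2.4) - 0.055
}
func srgbToLinear(_ x: Double) -> Double {
    x <= 0.04045 ? x / 12.92 : pow((x + 0.055) / 1.055, 2.4)
}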
So, I've been messing around with SteamVR on Apple Silicon, and it runs as expected under Rosetta translation; I've even got a game to run. But for some reason SteamVR cannot detect a headset, even one that SteamVR has drivers for, such as the 2017 Vive headset. Is there any explanation for why this happens? Since SteamVR otherwise works as expected, I'm led to believe it's something with macOS.
UI:
Attachment(id: "tooptip") {
if isRecording {
TooltipView {
HStack(spacing: 8) {
Image(systemName: "waveform")
.font(.title)
.frame(minWidth: 100)
}
}
.transition(.opacity.combined(with: .scale))
}
}
Trigger:
Button("Toggle") {
withAnimation {
isRecording.toggle()
}
}
The above code does not show the animation when it runs. When I use isRecording to drive an element in a regular SwiftUI view, the animation works as expected.
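A workaround sketch (an assumption on my part, not a confirmed fix): keep the attachment's content mounted at all times and drive its opacity and scale from isRecording, instead of inserting and removing the view with a transition:
Attachment(id: "tooptip") {
    TooltipView {
        HStack(spacing: 8) {
            Image(systemName: "waveform")
                .font(.title)
                .frame(minWidth: 100)
        }
    }
    // Animate properties of an always-present view rather than relying on
    // a transition of conditionally-inserted attachment content.
    .opacity(isRecording ? 1 : 0)
    .scaleEffect(isRecording ? 1 : 0.8)
    .animation(.default, value: isRecording)
}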
Hi, I'm trying to capture images from a WKWebView on visionOS. I know there's a takeSnapshot() function that can capture the web page, but I wonder whether drawHierarchy() fails to work properly on WKWebView because of its GPU-rendered content. Is there any other method I can call to capture the images correctly?
Furthermore, since I put my web view into an immersive space, is there any way to get the texture of this UIView attachment? Thank you.
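For the first part, a minimal sketch of the takeSnapshot(with:) call mentioned above, which captures the rendered web content (unlike drawHierarchy(in:afterScreenUpdates:)); webView here is assumed to be your WKWebView instance:
import WebKit
let configuration = WKSnapshotConfiguration()
configuration.rect = webView.bounds  // capture the full web view
webView.takeSnapshot(with: configuration) { image, error in
    if let image {
        // Use the UIImage, e.g. convert it to a TextureResource for RealityKit.
    } else if let error {
        print("Snapshot failed: \(error)")
    }
}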
I'm trying to update my projects to use Swift 6. If I change the project settings to use Swift 6, my app crashes when I add a closure to the SCNAnimation animationDidStop property. The error occurs inside the SceneKit renderingQueue and indicates that the callback is being called on the wrong queue.
Maybe I need to do something in my code to fix this, but I can't seem to make it work. Maybe it's a SceneKit bug?
If you create a new game template in Xcode using SceneKit and replace the contents of GameViewController.swift with the following you will see the app crash after it is launched.
import UIKit
import SceneKit
class GameViewController: UIViewController {
let player: SCNAnimationPlayer = {
let a = CABasicAnimation(keyPath: "opacity")
return SCNAnimationPlayer(animation: SCNAnimation(caAnimation: a))
}()
override func viewDidLoad() {
super.viewDidLoad()
let scnView = self.view as! SCNView
scnView.scene = SCNScene()
// Change the project settings to use Swift6
// Setting this closure will then cause a _dispatch_assert_queue_fail
// EXC_BREAKPOINT error in the scenekit.renderingQueue.SCNView queue,
// the only thing on the stack is:
// "%sBlock was %sexpected to execute on queue [%s (%p)]"
player.animation.animationDidStop = { (a: SCNAnimation, b: SCNAnimatable, c: Bool) in
print("stopped")
}
scnView.scene?.rootNode.addAnimationPlayer(player, forKey: nil)
player.play()
}
}
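One workaround sketch, based on my assumption that the crash comes from the closure being inferred as main-actor isolated and then invoked on SceneKit's rendering queue: mark the closure @Sendable so it does not inherit the view controller's main-actor isolation, and hop back to the main actor explicitly for any UI work. Not verified in every Swift 6 configuration:
// Sketch (assumption): a @Sendable closure does not inherit main-actor
// isolation, so SceneKit can call it on its rendering queue without
// tripping the dispatch queue assertion.
player.animation.animationDidStop = { @Sendable (animation: SCNAnimation, receiver: SCNAnimatable, completed: Bool) in
    Task { @MainActor in
        // Do any main-actor/UI work here.
        print("stopped, completed: \(completed)")
    }
}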
Greetings! I have been battling with a bit of a tough issue. My use case is running a pixelwise regression model on a 2D array of images using CIImageProcessorKernel and a custom Metal Shader.
It mostly works great, but the issue that arises is that if the regression calculation in Metal takes too long, an error occurs and the resulting output texture has strange artifacts, for example:
The specific error is:
Error excuting command buffer = Error Domain=MTLCommandBufferErrorDomain Code=1 "Internal Error (0000000e:Internal Error)" UserInfo={NSLocalizedDescription=Internal Error (0000000e:Internal Error), NSUnderlyingError=0x60000320ca20 {Error Domain=IOGPUCommandQueueErrorDomain Code=14 "(null)"}} (com.apple.CoreImage)
There are multiple levels of concurrency: Swift Concurrency calling the Core Image code (which shouldn't have an impact) and of course the Metal command buffer.
Is there any way to ensure the compute command encoder can complete its work?
Here is the full implementation of my CIImageProcessorKernel subclass:
class ParametricKernel: CIImageProcessorKernel {
static let device = MTLCreateSystemDefaultDevice()!
override class var outputFormat: CIFormat {
return .BGRA8
}
override class func formatForInput(at input: Int32) -> CIFormat {
return .BGRA8
}
override class func process(with inputs: [CIImageProcessorInput]?, arguments: [String : Any]?, output: CIImageProcessorOutput) throws {
guard
let commandBuffer = output.metalCommandBuffer,
let images = arguments?["images"] as? [CGImage],
let mask = arguments?["mask"] as? CGImage,
let fillTime = arguments?["fillTime"] as? CGFloat,
let betaLimit = arguments?["betaLimit"] as? CGFloat,
let alphaLimit = arguments?["alphaLimit"] as? CGFloat,
let errorScaling = arguments?["errorScaling"] as? CGFloat,
let timing = arguments?["timing"],
let TTRThreshold = arguments?["ttrthreshold"] as? CGFloat,
let input = inputs?.first,
let sourceTexture = input.metalTexture,
let destinationTexture = output.metalTexture
else {
return
}
guard let kernelFunction = device.makeDefaultLibrary()?.makeFunction(name: "parametric") else {
return
}
guard let commandEncoder = commandBuffer.makeComputeCommandEncoder() else {
return
}
let imagesTexture = Texture.textureFromImages(images)
let pipelineState = try device.makeComputePipelineState(function: kernelFunction)
commandEncoder.setComputePipelineState(pipelineState)
commandEncoder.setTexture(imagesTexture, index: 0)
let maskTexture = Texture.textureFromImages([mask])
commandEncoder.setTexture(maskTexture, index: 1)
commandEncoder.setTexture(destinationTexture, index: 2)
var errorScalingFloat = Float(errorScaling)
let errorBuffer = device.makeBuffer(bytes: &errorScalingFloat, length: MemoryLayout<Float>.size, options: [])
commandEncoder.setBuffer(errorBuffer, offset: 0, index: 1)
// Other buffers omitted....
let threadsPerThreadgroup = MTLSizeMake(16, 16, 1)
let width = Int(ceil(Float(sourceTexture.width) / Float(threadsPerThreadgroup.width)))
let height = Int(ceil(Float(sourceTexture.height) / Float(threadsPerThreadgroup.height)))
let threadGroupCount = MTLSizeMake(width, height, 1)
commandEncoder.dispatchThreadgroups(threadGroupCount, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()
}
}
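Not a guaranteed fix for the watchdog-style error, but one thing worth checking is the threadgroup sizing: deriving it from the pipeline state instead of hard-coding 16 x 16 can shorten each dispatch. A sketch, reusing the names from the process implementation above:
// Size threadgroups from the pipeline's limits rather than a fixed 16x16.
let w = pipelineState.threadExecutionWidth
let h = pipelineState.maxTotalThreadsPerThreadgroup / w
let threadsPerThreadgroup = MTLSizeMake(w, h, 1)
let threadgroups = MTLSizeMake(
    (sourceTexture.width + w - 1) / w,   // round up to cover the full width
    (sourceTexture.height + h - 1) / h,  // round up to cover the full height
    1)
commandEncoder.dispatchThreadgroups(threadgroups, threadsPerThreadgroup: threadsPerThreadgroup)
commandEncoder.endEncoding()
If a single pass is still too long, splitting the regression across several smaller dispatches or tiles, so no single command buffer runs for too long, is another direction to try.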
The context is visionOS development. I was trying to do something like
let root = ModelEntity()
let child1 = ModelEntity(...)
root.addChild(child1)
let child2 = ModelEntity(...)
root.addChild(child2)
only to find that, despite the entities seemingly belonging together, I can only pick the child entities when I apply a DragGesture in visionOS. Any idea what's going on?
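If the intent is to drag the whole group, a sketch of what I would try (the box size below is just a placeholder): gestures resolve to the entity that owns the collision shape they hit, so give the root its own InputTargetComponent and CollisionComponent:
// Sketch (assumption): give the root its own input target and collision
// shape so a DragGesture can resolve to the parent rather than a child.
root.components.set(InputTargetComponent())
root.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])]))
// Alternatively, generate collision shapes for the whole hierarchy:
// root.generateCollisionShapes(recursive: true)
In the gesture handler you can also walk up value.entity.parent until you reach the root, and move that entity instead of the child that was hit.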
Hello,
I would like to know if anyone has used, or is still using, the GKVoiceChat capabilities in their apps. I wanted to use it for my online game, but I am running into issues with it and wondering whether there are alternatives. The documentation suggests using SharePlay, but that won't be possible with random online players. Any help will be appreciated!
I have a RealityView in my visionOS app, and I can't figure out how to access its RealityRenderer. According to the documentation (https://developer.apple.com/documentation/realitykit/realityrenderer) it is available on visionOS, but I can't find a way to get it from my RealityView. It is probably something obvious, but after reading through the documentation for RealityView, entities, and components, I haven't found it.
If you create a custom shader, you get access to a collection of uniform values. One of them is the uniforms::time() parameter, which is defined as "the number of seconds that have elapsed since RealityKit began rendering the current scene" in this doc: https://developer.apple.com/metal/Metal-RealityKit-APIs.pdf
Is there some way to get this value from Swift code? I want to animate a value in my shader based on time, so I need the starting time value in order to interpolate the animation offset from that point. If I create a System, its update() function receives a SceneUpdateContext instance, which has a deltaTime property but no elapsedTime property that would map to the shader's time() value.
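Since SceneUpdateContext only exposes deltaTime, a sketch of what I would do (an assumption, not a documented mapping to the shader's time() origin): accumulate the elapsed time in a System and feed that value to the material, so Swift and the shader share the same clock.
import Foundation
import RealityKit
// Sketch: keep a running total of deltaTime; this clock starts when the
// system starts updating, which may differ slightly from when RealityKit
// began rendering the scene.
final class ElapsedTimeSystem: System {
    private(set) static var elapsed: TimeInterval = 0

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        Self.elapsed += context.deltaTime
        // Pass Self.elapsed to the material here (e.g. as a custom shader
        // parameter) instead of relying on uniforms::time().
    }
}
// Register once, e.g. at app launch:
// ElapsedTimeSystem.registerSystem()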
The Metal feature set tables specify that, beginning with the Apple4 family, the "Maximum threads per threadgroup" is 1024. Given that a single threadgroup is guaranteed to run on the same GPU shader core, this means a shader core of any Apple GPU from the Apple4 family onward must be capable of running at least 1024/32 = 32 warps in parallel.
From the WWDC session "Scale compute workloads across Apple GPUs (6:17)":
For relatively complex kernels, 1K to 2K concurrent threads per shader core is considered a very good occupancy.
The cited sentence suggests that a single shader core is capable of running at least 2K (I assume this means 2048) threads in parallel, i.e. 2048/32 = 64 warps running in parallel.
However, I am curious what the maximum theoretical number of warps running in parallel on a single shader core is (it sounds like it is more than 64). The WWDC session calls 2K only "very good" occupancy. How many threads would be the best possible occupancy?
Our app encountered the following error:
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Is it possible with iOS 18 to use RealityView with world tracking but without the camera feed as background?
With content.camera = .worldTracking the background is always the camera feed, and with content.camera = .virtual the device's position and orientation don't affect the view point. Is there a way to make a mixture of both?
My use case is that my app "Encyclopedia GalacticAR" shows astronomical objects and a skybox (a huge sphere), like a VR view of planets, as you can see in the left image. Now that iOS 18 offers RealityView for iOS and iPadOS, I would like to make use of it, but I haven't found a way to display my skybox as environment, instead of the camera feed.
I filed the suggestion FB14734105 but hope that somebody knows a workaround...
A sample of some error messages that are presented in the Xcode log during execution of a program. There is nothing in the messages that helps identify a component as the origin of the message, nor is there a locatable definition for the various labels and fields of the text. What component, or even which framework, does this set of messages originate from? Your search engine is useless because it returns gibberish. It doesn't even follow the common behavior of SEARCH ENGINES, because it takes label strings compounded from common words and searches for the common words instead of using the concatenated string that is the internal variable name in the text.
2024-06-22 19:45:58.089943-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation isRegisteredSubsystem:category:]) Permission denied: GenerativeFunctionMetrics / ANEInferenceOperationPrepareForEncode.
I am looking for a definition of the error with a way to locate the context in which the error occurs.
2024-06-22 19:45:58.089967-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation sendEventWithIdentifier:payload:]) Invalid inputs: payload={
aneModelPath = "/System/Library/PrivateFrameworks/RoomScanCore.framework/PrecompiledModels/lcnn_floorplan_model.bundle/H14G.bundle/main/segment_0__ane/net.hwx";
bundleIdentfier = "com.example.apple-samplecode.RoomPlanExampleApp9QSS565686";
}
2024-06-22 19:45:58.094770-0500 RoomPlanExampleApp[733:30145] [ClientDonation] (+[PPSClientDonation sendEventWithIdentifier:payload:]) Invalid inputs: payload={
bundleIdentfier = "com.example.apple-samplecode.RoomPlanExampleApp9QSS565686";
e5FunctionName = main;
numSegments = 1;
}
How many 32-bit variables can I use concurrently in a single thread of a Metal compute kernel without worrying about the variables getting spilled into the device memory? Alternatively: how many 32-bit registers does a single thread have available for itself?
Let's say that each thread of my compute kernel needs to store and work with its own array of N float variables, where N can be 128, 256, 512, or more. To achieve the maximum possible performance, I do not want the local thread variables to get spilled into the slow device memory. I want all N variables to be stored "on-chip", in the thread memory space.
To make my question more concrete, let's say there is an array thread float localArray[N]. Assuming an unrealistic hypothetical scenario where localArray is the only variable in the whole kernel, what is the maximum value of N for which no portion of localArray would get spilled into the device memory?
I searched in the Metal feature set tables, but I could not find any details.
I am creating a 3D model from multiple images using a photogrammetry session. When the session generates an OBJ file and I measure the distance between two points, the distance is displayed in inconsistent units: sometimes meters, sometimes centimeters, or another unit altogether. How can I tell the photogrammetry session to always create the model in millimeters?
The Minecraft Launcher gives the message "minecraft launcher quit unexpectedly" when opened. This began happening after I updated to macOS Sequoia Beta 15.0 (24A5327a).
Does anyone know a fix?
Hello Guys,
I'm looking for a way to look at a 3D object around me, pinch it (from a distance, with my eyes and two fingers, without touching it), and display a little piece of information on top of it (like a small text label).
If you have a solution you will make me happy (and a bit less stupid too :) )
Cheers
Mathis
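For reference, a sketch of one possible direction (the names and sizes here are placeholders, and this is an assumption rather than a confirmed recipe): an entity needs both a CollisionComponent and an InputTargetComponent to receive the indirect look-and-pinch input; a SpatialTapGesture targeted to entities then tells you which one was pinched, and you can reveal a small label for it.
import SwiftUI
import RealityKit
struct PinchInfoView: View {
    @State private var pinchedEntityName: String?

    var body: some View {
        RealityView { content in
            // Placeholder object; replace with your own entities.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial()])
            sphere.name = "Planet"
            sphere.components.set(InputTargetComponent())
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // value.entity is the entity the user looked at and pinched.
                    pinchedEntityName = value.entity.name
                }
        )
        .overlay(alignment: .bottom) {
            if let pinchedEntityName {
                Text("You pinched \(pinchedEntityName)")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}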