Hi there, I have some existing Metal rendering / shader views that I would like to use to present stereoscopic content on the Vision Pro. Is there a Metal shader function or variable that tells me which eye we're currently rendering to inside my shader? Something like Unity's unity_StereoEyeIndex? I know RealityKit has GeometrySwitchCameraIndex, so I want something similar (but outside of a RealityKit context).
Many thanks,
Rich
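For reference, a minimal Metal sketch of how the per-eye index surfaces when rendering through Compositor Services (an assumption about the setup; the buffer layout and names are hypothetical): with layered rendering, each draw is amplified once per view, and the vertex stage receives an [[amplification_id]] that plays the role of unity_StereoEyeIndex and can be forwarded to the fragment stage.

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    ushort eyeIndex [[flat]];                    // forwarded per-eye index
    ushort layer [[render_target_array_index]];  // selects the texture slice
};

vertex VertexOut vertexShader(uint vid [[vertex_id]],
                              ushort ampID [[amplification_id]],  // typically 0 = left, 1 = right
                              constant float4x4 *viewProjection [[buffer(0)]],
                              constant packed_float3 *positions [[buffer(1)]])
{
    VertexOut out;
    out.position = viewProjection[ampID] * float4(float3(positions[vid]), 1.0);
    out.eyeIndex = ampID;
    out.layer = ampID;
    return out;
}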
It seems that metal-shaderconverter can build a metallib, but I need .air files, which I then link into a single metallib and metallibdsym file.
HLSL -> dxc -> DXIL -> metal-shaderconverter -> .metallib
But there's no way to link multiple metallibs into a single metallib, is there?
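For reference, the .air-based flow I'm trying to reproduce looks like this when compiling Metal source directly (SDK and file names are placeholders):

xcrun -sdk xros metal -c ShaderA.metal -o ShaderA.air
xcrun -sdk xros metal -c ShaderB.metal -o ShaderB.air
# metallib links any number of .air files into one library:
xcrun -sdk xros metallib ShaderA.air ShaderB.air -o Combined.metallib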
Hi, I have a small question. Is it possible to place the entities from a RealityView (immersive space) at eye level on the Y axis? Is it enough to set the position to (x, 0, z)?
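A hedged note plus sketch: in an immersive space the origin sits on the floor, so (x, 0, z) is floor level rather than eye level. One way to approximate eye level is to query the device pose through visionOS ARKit (a sketch under that assumption; entity, x and z are placeholders):

import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()
try await session.run([worldTracking])

// The device anchor's Y translation is roughly the headset's height above the floor.
if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    let eyeHeight = device.originFromAnchorTransform.columns.3.y
    entity.position = SIMD3<Float>(x, eyeHeight, z)
}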
Hi, I'm trying to adapt our project to run on visionOS and ran into the errors below while running this command:
xcrun --sdk xros metal --target=arm64-apple-xros1.0 input.metal -c -o output.air
The full command output looks like this:
While building module 'metal_types' imported from <built-in>:1:
In file included from <built-in>:1:
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_types:90:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_extended_vector:121:49: error: bfloat is not supported on this target
typedef __attribute__((__ext_vector_type__(2))) bfloat bfloat2;
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_extended_vector:122:49: error: bfloat is not supported on this target
typedef __attribute__((__ext_vector_type__(3))) bfloat bfloat3;
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_extended_vector:123:49: error: bfloat is not supported on this target
typedef __attribute__((__ext_vector_type__(4))) bfloat bfloat4;
^
While building module 'metal_types' imported from <built-in>:1:
In file included from <built-in>:1:
In file included from /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_types:91:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_packed_vector:121:52: error: bfloat is not supported on this target
typedef __attribute__((__packed_vector_type__(2))) bfloat packed_bfloat2;
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_packed_vector:122:52: error: bfloat is not supported on this target
typedef __attribute__((__packed_vector_type__(3))) bfloat packed_bfloat3;
^
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/lib/clang/32023.98/include/metal/metal_packed_vector:123:52: error: bfloat is not supported on this target
typedef __attribute__((__packed_vector_type__(4))) bfloat packed_bfloat4;
I'm using Xcode 15.2 (15C500b) on a 16-inch MacBook Pro (M1 Pro), and xcrun --sdk xros metal --version gives me this:
Apple metal version 32023.98 (metalfe-32023.98)
Target: air64-apple-darwin23.2.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/metal/ios/bin
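(A hedged observation: the --version output above reports the compiler's own target as air64-apple-darwin23.2.0, i.e. Metal compiles to AIR rather than arm64 machine code, so the arm64 triple in the failing command is suspect. A variant worth trying:)

xcrun --sdk xros metal --target=air64-apple-xros1.0 input.metal -c -o output.air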
In the Diorama project,
let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
viewModel.rootEntity = entity
content.add(entity)
viewModel.updateScale()
// Offset the scene so it doesn't appear underneath the user or conflict with the main window.
entity.position = SIMD3<Float>(0, 0, -2)
the object doesn't move with the camera: walking through the simulator with the WASD keys, I can move around the object.
But with a different Reality Composer file that I created:
let entity = try await Entity(named: "ImmersiveScene", in: realityKitContentBundle)
viewModel.rootEntity = entity
content.add(entity)
viewModel.updateScale()
with the WASD keys in the simulator, the model moves with the camera.
What configuration am I missing on the ImmersiveScene entity?
Hello Apple community,
I am currently working with Object Capture and would appreciate some guidance on extracting specific data from the scans. I have successfully scanned objects, but I am now looking to obtain the point cloud and facial measurements from these scans.
I have used https://developer.apple.com/documentation/RealityKit/guided-capture-sample as a reference for implementation.
Point Cloud:
How can I extract the point cloud data from my Object Capture scans?
Are there any specific tools or methods recommended for this purpose?
Facial Measurements:
Is there a way to extract facial measurements accurately using Object Capture?
Are there any built-in features or third-party tools that can assist with this?
I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated!
Thank you in advance.
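On the point-cloud question, a hedged sketch (this assumes the PhotogrammetrySession API used by Object Capture and that a recent OS release adds a .pointCloud request; the input folder is a placeholder):

import RealityKit

let inputFolder = URL(fileURLWithPath: "/path/to/captured/images")
let session = try PhotogrammetrySession(input: inputFolder)

Task {
    for try await output in session.outputs {
        if case let .requestComplete(_, .pointCloud(cloud)) = output {
            // cloud holds the reconstructed points; inspect or serialize as needed
            print("Point cloud ready: \(cloud)")
        }
    }
}
try session.process(requests: [.pointCloud])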
The CGImageSourceCreateThumbnailAtIndex function isn't generating a CGImage for the majority of images on iOS 17.4. It works if I pass the kCGImageSourceThumbnailMaxPixelSize option, but doesn't work if that key is missing. The function works with and without kCGImageSourceThumbnailMaxPixelSize on stable OS versions. Is this a new change in the iOS 17.4 beta versions?
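For anyone else hitting this, the workaround described above looks like this in full (imageURL and the 1024 limit are placeholders):

import Foundation
import ImageIO

func thumbnail(for imageURL: URL) -> CGImage? {
    guard let source = CGImageSourceCreateWithURL(imageURL as CFURL, nil) else { return nil }
    let options: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        // Passing an explicit max size is what makes 17.4 generate the thumbnail:
        kCGImageSourceThumbnailMaxPixelSize: 1024
    ]
    return CGImageSourceCreateThumbnailAtIndex(source, 0, options as CFDictionary)
}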
Hi,
I'm building a simple game for iOS. I have background music.
The ring/silent switch is not disabling sound when switched to silent.
So far I'm testing on devices through TestFlight (still internal testing, not beta).
Do I need to code this function myself or does iOS know it's a game and disable sound automatically?
and/or
would my game be rejected if the switch doesn't disable sound?
(I have an internal setting to enable/disable sounds in the game)
Due to the way it's coded (capacitor app), I can't access the ring/silent switch to disable/enable sound.
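A note on how this works (hedged; I haven't confirmed it against a Capacitor project): iOS does not infer that an app is a game, and whether the silent switch mutes you is decided entirely by the app's AVAudioSession category. A minimal native sketch that a Capacitor plugin would have to wrap:

import AVFoundation

// .ambient is silenced by the ring/silent switch; .playback ignores it.
try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default, options: [])
try AVAudioSession.sharedInstance().setActive(true)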
Thanks, this problem makes me feel like a preserved moose.
In my RealityKit-based app I was using DirectionalLightComponent and DirectionalLightComponent.Shadow to cast shadows.
As far as I can see, on visionOS only ImageBasedLightComponent is currently supported, so I transitioned from DirectionalLightComponent to ImageBasedLightComponent. The lighting is working fine, but I'm not able to cast shadows onto other entities (in my case, casting a shadow from a Moon onto a planet).
Looking at ImageBasedLightReceiverComponent, there's also GroundingShadowComponent, which isn't what I'm looking for.
Is there any way with ImageBasedLightComponent & ImageBasedLightReceiverComponent to cast shadows from an entity onto another entity?
Hello fellow developers,
here is something that I don't fully grasp:
1/ I have a toy SceneKit scene with two nodes, each carrying a light
2/ I have a small widget to explore those lights and tweak some parameters
-> in the small widget, I can't update a toggle item when a new light is selected, while the other parameters do update!
Here is a short sample that illustrates what I am trying to resolve:
import SwiftUI
import SceneKit
class ShortScene {
    var scene = SCNScene()
    var lightNodes: [SCNNode] {
        get { scene.rootNode.childNodes(passingTest: { current, stop in current.light != nil }) }
    }

    init() {
        let light1 = SCNLight()
        light1.castsShadow = false
        light1.type = .omni
        light1.intensity = 100
        let nodelight1 = SCNNode()
        nodelight1.light = light1
        nodelight1.name = "nodeLight1"
        scene.rootNode.addChildNode(nodelight1)

        let light2 = SCNLight()
        light2.castsShadow = false
        light2.type = .ambient
        light2.intensity = 300
        let nodelight2 = SCNNode()
        nodelight2.light = light2
        nodelight2.name = "nodeLight2"
        scene.rootNode.addChildNode(nodelight2)
    }
}
extension SCNLight : ObservableObject {}
extension SCNNode : ObservableObject {}
struct LightViewEx: View {
    @ObservedObject var lightParam: SCNLight
    @ObservedObject var lightNode: SCNNode
    var bindCol: Binding<Color>
    @State var castShadows: Bool

    init(_ _lightNode: SCNNode) {
        if let _light = _lightNode.light {
            lightParam = _light
            lightNode = _lightNode
            bindCol = Binding<Color>(
                get: { if let _lightcol = _lightNode.light!.color as! NSColor? { return Color(_lightcol) } else { return Color.red } },
                set: { newCol in _lightNode.light!.color = NSColor(newCol) })
            castShadows = _lightNode.light!.castsShadow
            print("For \(lightNode.name!) : CShadows \(castShadows)")
        } else {
            fatalError("No Light attached to Node")
        }
    }

    var body: some View {
        VStack(alignment: .leading) {
            Text("Light Params")
            Picker("Type", selection: $lightParam.type) {
                Text("IES").tag(SCNLight.LightType.IES)
                Text("Ambient").tag(SCNLight.LightType.ambient)
                Text("Directional").tag(SCNLight.LightType.directional)
                Text("Omni").tag(SCNLight.LightType.omni)
                Text("Probe").tag(SCNLight.LightType.probe)
                Text("Spot").tag(SCNLight.LightType.spot)
                Text("Area").tag(SCNLight.LightType.area)
            }
            ColorPicker("Light Color", selection: bindCol)
            Text("Intensity")
            TextField("Intensity", value: $lightParam.intensity, formatter: NumberFormatter())
            Divider()
            // Toggle("shadows", isOn: $lightParam.castsShadow).onChange(of: lightParam.castsShadow, { lightParam.castsShadow.toggle() })
            Toggle("CastShadows", isOn: $castShadows)
                .onChange(of: castShadows) { lightParam.castsShadow = castShadows; print("castsShadows changed to \(castShadows)") }
        }
    }
}
struct sceneView: View {
    @State var _lightIdx: Int = 0
    @State var shortScene = ShortScene()

    var body: some View {
        VStack(alignment: .leading) {
            if shortScene.lightNodes.isEmpty == false {
                Picker("Lights", selection: $_lightIdx) {
                    ForEach(0..<shortScene.lightNodes.count, id: \.self) { index in
                        Text(shortScene.lightNodes[index].name ?? "NoName").tag(index)
                    }
                }
                GridRow(alignment: .top) {
                    LightViewEx(shortScene.lightNodes[_lightIdx])
                }
            }
        }
    }
}

struct testUIView: View {
    var body: some View {
        sceneView()
    }
}

#Preview {
    testUIView()
}
Something is obviously not right! Does anyone have an idea?
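A hedged diagnosis of the toggle symptom (an assumption about SwiftUI state handling, not a confirmed fix): a @State value assigned in init() only takes effect when the view's identity is first created, so castShadows keeps its old value when a different light is selected, while the @ObservedObject-backed parameters keep updating. Giving the subview a per-light identity forces the state to be rebuilt:

GridRow(alignment: .top) {
    LightViewEx(shortScene.lightNodes[_lightIdx])
        .id(_lightIdx) // new identity per selection, so castShadows is re-initialized
}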
I have my Xbox controller plugged into my MacBook, but the controller won't stay on (no batteries are needed for this controller, and the plug I am using works). Also, in System Settings, when I scroll to Xbox Controllers it says no devices found. How do I fix this?
How do I convert a UIBezierPath.currentPoint to an SKSpriteNode.position?
Here are the appropriate code snippets:
func createTrainPath() {
    let startX = -tracksWidth/2,
        startY = tracksPosY
    savedTrainPosition = CGPoint(x: startX, y: startY!)
    trackRect = CGRect(x: savedTrainPosition.x,
                       y: savedTrainPosition.y,
                       width: tracksWidth,
                       height: tracksHeight)
    trainPath = UIBezierPath(ovalIn: trackRect)
    trainPath = trainPath.reversing() // makes myTrain move CW
} // createTrainPath
Followed by:
func startFollowTrainPath() {
    let theSpeed = Double(5*thisSpeed)
    var trainAction = SKAction.follow(
        trainPath.cgPath,
        asOffset: false,
        orientToPath: true,
        speed: theSpeed)
    trainAction = SKAction.repeatForever(trainAction)
    createPivotNodeFor(myTrain)
    myTrain.run(trainAction, withKey: runTrainKey)
} // startFollowTrainPath
So far, so good (I think?) ...
Within other places in my code, I call:
return trainPath.currentPoint
I need to convert trainPath.currentPoint to myTrain.position ...
When I insert the appropriate print statements, I see for example:
myTrain.position = (0.0, -295.05999755859375)
trainPath.currentPoint = (392.0, -385.0)
which obviously disqualifies a simple = , as in:
myTrain.position = trainPath.currentPoint
Since this = is not correct, what is?
After more investigation, my guess is that .currentPoint is in SKSpriteNode coordinates and .position is in SKScene coordinates.
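If that guess is right, SKNode's coordinate-conversion helpers are the usual tool. A hedged sketch, assuming the path was built in scene coordinates (as createTrainPath() suggests) and that scene refers to the SKScene:

// A node's position is expressed in its parent's coordinate space;
// convert(_:from:) maps the scene-space point into that space.
myTrain.position = myTrain.parent!.convert(trainPath.currentPoint, from: scene)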
Hello, I have a crash in the Metal framework under the Sonoma 14.4 public beta on a Mac mini M1 (2020):
Thread 1 crashed with ARM Thread State (64-bit):
x0: 0x0000000000000000 x1: 0x0000000000000000 x2: 0x0000000000000000 x3: 0x0000000000000000
x4: 0x0000000000000000 x5: 0x0000000000000000 x6: 0x0000000000000000 x7: 0x0000000000000000
x8: 0x17c2770b7ca20001 x9: 0x17c2770b7ca20001 x10: 0x0000000000000025 x11: 0x0000000000000001
x12: 0x000000016bb555b2 x13: 0x0000000000000000 x14: 0x0000000104acc7e9 x15: 0x0000000207c5c5b0
x16: 0xfffffffffffffff4 x17: 0x0000000211f42c48 x18: 0x0000000000000000 x19: 0x000000016bb55898
x20: 0x0000600002901180 x21: 0x0000600003cd0e20 x22: 0x0000000000000003 x23: 0x0000000277b7e040
x24: 0x00000000000002ec x25: 0x0000000000000001 x26: 0x0000000000000000 x27: 0x0000000000000000
x28: 0x0000000207c96b50 fp: 0x000000016bb55880 lr: 0x2d648001a439d394
sp: 0x000000016bb557b0 pc: 0x00000001a439d394 cpsr: 0x60001000
far: 0x0000000000000000 esr: 0xf2000001 (Breakpoint) brk 1
Binary Images:
0x139c00000 - 0x139c6bfff com.apple.AppleMetalOpenGLRenderer (1.0) <8b69c871-19c2-3d46-b8de-8dbc62e532cd> /System/Library/Extensions/AppleMetalOpenGLRenderer.bundle/Contents/MacOS/AppleMetalOpenGLRenderer
0x109b74000 - 0x109baffff libjogl_mobile.dylib () <9c3ef505-8828-36ab-a776-5ffdb9d4cd79> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libjogl_mobile.dylib
0x13b494000 - 0x13b50ffff libjogl_desktop.dylib () <543b42ae-90a4-325c-8850-84951b1fa6ee> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libjogl_desktop.dylib
0x108588000 - 0x10858ffff libnativewindow_macosx.dylib (*) <2c256988-735b-38b7-9712-0bfc58c3ff90> /Applications/scilab-2024.0.0.app/Contents/lib/thirdparty/libnativewindow_macosx.dylib
How can I get rid of it?
S.
Hi!
I have a Flutter project that targets Web and iOS. Overall, our app works quite well on Vision Pro, with the only issue being that our UI elements do not highlight when the user looks at them. (Our UI will highlight on mouseover, however. We have tried tinkering with the mouseover visuals, but this did not help.)
We're considering writing some native Swift code to patch this hole in Flutter's visionOS support. However, after some amount of searching, the documentation doesn't provide any obvious solutions.
The HoverEffectComponent ( https://developer.apple.com/documentation/realitykit/hovereffectcomponent ) in RealityKit seems like the closest there is to adding focus-based behavior. However, if I understand correctly, this means adding an Entity for every Flutter UI element the user can interact with, and then rebuilding the list of Entities every time the UI is repainted... doesn't sound especially performant.
Is there some other method of capturing the user's gaze in the context of an iOS app?
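One hedged avenue (an assumption about UIKit on visionOS; buttonFrame and flutterViewController are placeholders): the system never reports where the user is looking, but ordinary UIViews can opt into the system-drawn gaze highlight, so transparent overlays positioned over Flutter's hit regions might get the effect without rebuilding RealityKit entities on every repaint:

import UIKit

let overlay = UIView(frame: buttonFrame)
overlay.backgroundColor = .clear
// The system applies the highlight itself; the app never sees the gaze location.
overlay.hoverStyle = UIHoverStyle(effect: .highlight, shape: .capsule)
flutterViewController.view.addSubview(overlay)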
Hi,
I'm trying to display an STL model file in visionOS. I import the STL file using SceneKit's ModelIO extension, add it to an empty USDA scene, and then export the finished scene to a temporary USDZ file. From there I load the USDZ file as an Entity and add it onto the content.
However, the model in the resulting USDZ file has no lighting and appears as an unlit solid. Please see the screenshot below:
The top one is created by directly importing a USDA scene, with the model already added via Reality Composer, into an Entity; it works as expected.
The middle one is created by importing the STL model as an MDLAsset using ModelIO, adding it onto the empty scene, and exporting as USDZ, then importing the USDZ into an Entity. This is what I want to be able to do, and it is broken.
The bottom one is just for me to debug the USDZ import/export. It was added to the empty scene using Reality Composer and works as expected; therefore the USDZ export/import is not broken as far as I can tell.
Full code:
import SwiftUI
import ARKit
import SceneKit.ModelIO
import RealityKit
import RealityKitContent
struct ContentView: View {
    @State private var enlarge = false
    @State private var showImmersiveSpace = false
    @State private var immersiveSpaceIsShown = false

    @Environment(\.openImmersiveSpace) var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

    var modelUrl: URL? = {
        if let url = Bundle.main.url(forResource: "Trent 900 STL", withExtension: "stl") {
            let asset = MDLAsset(url: url)
            asset.loadTextures()
            let object = asset.object(at: 0) as! MDLMesh
            let emptyScene = SCNScene(named: "EmptyScene.usda")!
            let scene = SCNScene(mdlAsset: asset)

            // Position node in scene and scale
            let node = SCNNode(mdlObject: object)
            node.position = SCNVector3(0.0, 0.1, 0.0)
            node.scale = SCNVector3(0.02, 0.02, 0.02)

            // Copy materials from the test model in the empty scene to our new object (doesn't really change anything)
            node.geometry?.materials = emptyScene.rootNode.childNodes[0].childNodes[0].childNodes[0].childNodes[0].geometry!.materials

            // Add new node to our empty scene
            emptyScene.rootNode.addChildNode(node)

            let fileManager = FileManager.default
            let appSupportDirectory = try! fileManager.url(for: .applicationSupportDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
            let permanentUrl = appSupportDirectory.appendingPathComponent("converted.usdz")
            if emptyScene.write(to: permanentUrl, delegate: nil) {
                // We exported, now load and display
                return permanentUrl
            }
        }
        return nil
    }()

    var body: some View {
        VStack {
            RealityView { content in
                // Add the initial RealityKit content
                if let scene = try? await Entity(contentsOf: modelUrl!) {
                    // Displays middle and bottom models
                    content.add(scene)
                }
                if let scene2 = try? await Entity(named: "JetScene", in: realityKitContentBundle) {
                    // Displays top model using premade scene and exported as USDA.
                    content.add(scene2)
                }
            } update: { content in
                // Update the RealityKit content when SwiftUI state changes
                if let scene = content.entities.first {
                    let uniformScale: Float = enlarge ? 1.4 : 1.0
                    scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                }
            }
            .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                enlarge.toggle()
            })

            VStack(spacing: 12) {
                Toggle("Enlarge RealityView Content", isOn: $enlarge)
                    .font(.title)
                Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
                    .font(.title)
            }
            .frame(width: 360)
            .padding(36)
            .glassBackgroundEffect()
        }
        .onChange(of: showImmersiveSpace) { _, newValue in
            Task {
                if newValue {
                    switch await openImmersiveSpace(id: "ImmersiveSpace") {
                    case .opened:
                        immersiveSpaceIsShown = true
                    case .error, .userCancelled:
                        fallthrough
                    @unknown default:
                        immersiveSpaceIsShown = false
                        showImmersiveSpace = false
                    }
                } else if immersiveSpaceIsShown {
                    await dismissImmersiveSpace()
                    immersiveSpaceIsShown = false
                }
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
To test this even further, I exported the generated USDZ and opened it in Reality Composer. The added model was still broken, while the test model in the scene was fine. This further proved that the import/export is fine and that RealityKit is not doing something weird with the imported model.
I am convinced this has to be something with the way I'm using ModelIO to import the STL file.
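One hedged thing to try (my assumption: STL carries no normals or materials, which can leave the exported USD looking unlit): have ModelIO generate normals before building the node:

let object = asset.object(at: 0) as! MDLMesh
// Generate smooth normals if the STL didn't provide any (the creaseThreshold is a guess):
object.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)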
Any help is appreciated. Thank you
Hi, I am using metallib to generate shader-cache shaders offline, but I have noticed that for certain .air files metallib's behavior is unpredictable: sometimes it runs correctly, sometimes it crashes or generates an invalid .lib file.
// crash info
0x00007FF6705AA821 (0x000001C5F7218E30 0x00000084F9F8F089 0x000001C5F72B57E0 0x000001C5F7218E30)
0x00007FF6705A9062 (0x00007FF6709200E0 0x000001C5F7208CE0 0x0000000000000002 0x00007FF670920140)
0x00007FF6704C8FD6 (0x00007FF6709200E0 0x00007FF600000000 0x00007FF670920140 0x0000000000000000)
0x00007FF66FF7F1F4 (0x0000000000000000 0x000001C5F71EC210 0x000001C5F71EC210 0x0000000000000000)
0x00007FF66FF6C8D1 (0x0000000000000004 0x000001C5F71EC210 0x0000000000000000 0x0000000000000000)
0x00007FF670633974 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000)
0x00007FFB9C137614 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), BaseThreadInitThunk() + 0x14 bytes(s)
0x00007FFB9DB826A1 (0x0000000000000000 0x0000000000000000 0x0000000000000000 0x0000000000000000), RtlUserThreadStart() + 0x21 bytes(s)
I tried the latest version (Metal Tools for Windows 4.1), but the issue still exists.
I placed the input .air file at:
https://drive.google.com/file/d/1MQQRbwKi-bcEZ9jy_dimRjJBovjB0ru2/view?usp=drive_link
DS4macOS compiles (partially) and runs, but it has 30+ yellow warnings and doesn't show the Settings
Hi Apple and Swift friends.
I'm not really a fluent Swift programmer (or fluent in any language), just knowledgeable; I have only a vague sense of what's going on. This is a gamepad controller remapper similar to DS4Windows on the PC and DSX on Steam.
On macOS Sonoma, it successfully compiled and connected to the amazing Sony PlayStation DualSense, but because it has 33 yellow warnings, the Settings don't show. The warnings say something about an outdated API that needs to be replaced by the newer Network framework. They also say it can't use self:
It should look like these:
What syntax changes would eliminate the 30+ yellow warnings?
The file can be had here:
https://github.com/marcowindt/ds4macos
God bless.
Please clarify whether the latest guideline update allows HTML5 games without including them in the binary.
Does that mean gambling apps are no longer required to embed resources for real-money games in the app?
Hi, I'm displaying linear gray via a CAMetalLayer with the shader below.
fragment float4 fragmentShader(VertexOut in [[stage_in]],
                               texture2d<float, access::sample> BGRATexture [[texture(0)]])
{
    float color = in.texCoordinates.x;
    return float4(float3(color), 1.0);
}
And my CAMetalLayer has been set to linearSRGB.
metalLayer.colorspace = CGColorSpace(name: CGColorSpace.linearSRGB)
metalLayer.pixelFormat = .bgra8Unorm
Why does the display seem to add gamma? Apparently the middle gray is 187, not 128.
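A hedged explanation of where 187 could come from: tagging the layer as linearSRGB tells the compositor the buffer holds linear light, so it re-encodes the values with the sRGB transfer function for the display, and sRGB-encoding 0.5 lands almost exactly on the observed value:

import Foundation

// The standard sRGB encode (a sketch for checking the arithmetic, not layer code):
func srgbEncode(_ x: Double) -> Double {
    x <= 0.0031308 ? 12.92 * x : 1.055 * pow(x, 1.0 / 2.4) - 0.055
}
print(srgbEncode(0.5) * 255) // ~187.5, matching the observed 187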
I initialize OpenGL;
now I want to set
glPixelZoom(pixelSizeX, -pixelSizeY);
to display an image with a given width and height filling the entire window.
For this I do:
rect:= window.frame;
WindowBackingRect := window.convertRectToBacking(rect);
pixelSizeX := WindowBackingRect.size.width / width / NSScreen.mainScreen.backingScaleFactor;
pixelSizeY := WindowBackingRect.size.height / height / NSScreen.mainScreen.backingScaleFactor;
Under High Sierra 10.13.6 on an Intel Mac from 2011, NSScreen.mainScreen.backingScaleFactor returns 1 because there is no Retina display.
Under Sonoma 14.3 on an Intel Mac from 2020, NSScreen.mainScreen.backingScaleFactor returns 2 because of Retina.
This works correctly.
Under Ventura 13.6 on an AArch64 Mac from 2020, NSScreen.mainScreen.backingScaleFactor also returns 2, BUT the image is only half as big as it should be.
(If I leave the scale factor out on the new Intel Mac, the image is twice as big as it should be. On the new AArch64 Mac it is then correct.)
What should I do?
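A hedged observation (an assumption, since the window setup isn't shown): convertRectToBacking already applies the scale of the screen the window is actually on, so dividing by NSScreen.mainScreen.backingScaleFactor mixes in a second screen's factor whenever the window isn't on the main screen. Using the window's own factor keeps the two consistent:

pixelSizeX := WindowBackingRect.size.width / width / window.backingScaleFactor;
pixelSizeY := WindowBackingRect.size.height / height / window.backingScaleFactor;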