Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Post · Replies · Boosts · Views · Activity

How to Specify Pixel-Specific Depths of Views in xrOS?
With the advent of the third dimension, I wanted to know whether it's currently possible to display flat SwiftUI views with some thickness in xrOS. While .frame(depth: CGFloat?) does the job for views as a whole, I'm looking for more granular, per-pixel control. I was hoping there are lower-level APIs to achieve this, and I've looked into the fairly new layerEffect shader API, yet it seems incapable of setting per-pixel depth...
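For reference, a minimal sketch of the whole-view depth mentioned above, using the frame(depth:) modifier on visionOS; as noted, this applies one uniform depth to the entire view and offers no per-pixel control (the view contents are purely illustrative):

    import SwiftUI

    struct ThickCardView: View {
        var body: some View {
            RoundedRectangle(cornerRadius: 12)
                .fill(.blue)
                .frame(width: 200, height: 120)
                .frame(depth: 20)  // uniform depth in points for the whole view, not per-pixel
        }
    }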
Replies: 0 · Boosts: 0 · Views: 627 · Activity: Mar ’24
Gamma issue when displaying linear color
Hi, I'm displaying linear gray via CAMetalLayer with the shader below.

    fragment float4 fragmentShader(VertexOut in [[stage_in]],
                                   texture2d<float, access::sample> BGRATexture [[ texture(0) ]]) {
        float color = in.texCoordinates.x;
        return float4(float3(color), 1.0);
    }

And my CAMetalLayer has been set to linear sRGB:

    metalLayer.colorspace = CGColorSpace(name: CGColorSpace.linearSRGB)
    metalLayer.pixelFormat = .bgra8Unorm

Why does the display appear to add gamma? The middle gray measures 187 rather than the expected 128.
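A quick sanity check, offered as an assumption about what's happening rather than a confirmed answer: if the layer's contents are tagged as linear sRGB, the compositor converts them to the display's gamma-encoded space, and linear 0.5 encodes to roughly 188/255, which matches the ~187 observed; the pipeline may simply be honoring the tag rather than adding extra gamma.

    import Foundation

    // Standard sRGB transfer function (linear -> gamma-encoded)
    func srgbEncode(_ linear: Double) -> Double {
        linear <= 0.0031308 ? 12.92 * linear
                            : 1.055 * pow(linear, 1.0 / 2.4) - 0.055
    }

    print(Int((srgbEncode(0.5) * 255).rounded()))  // ~188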
Replies: 1 · Boosts: 0 · Views: 903 · Activity: Feb ’24
MPSNNGraph: use custom compute/render Metal during training?
Hello, I have been following the excellent/informative "Metal for Machine Learning" session from WWDC19 to learn how to do on-device training (I have a specific use case for this), and it is all working really well using MPSNNGraph. However, I would like to call my own Metal compute/render function/pipeline to transform the inference result before calculating the loss. Does anyone know if this is possible, and what it would look like in code? Please see my current code below; at the comment I need to call an intermediate compute/render function to transform the inference result image before passing it to the MPSNNForwardLossNode.

    let rgbImageNode = MPSNNImageNode(handle: nil)
    let inferGraph = makeInferenceGraph()
    let reshape = MPSNNReshapeNode(source: inferGraph.resultImage,
                                   resultWidth: 64,
                                   resultHeight: 64,
                                   resultFeatureChannels: 4)

    // Need to call a render or compute pipeline here to post-process the inference result image

    let rgbLoss = MPSNNForwardLossNode(source: reshape.resultImage,
                                       labels: rgbImageNode,
                                       lossDescriptor: lossDescriptor)
    let initGrad = MPSNNInitialGradientNode(source: rgbLoss.resultImage)
    let gradNodes = initGrad.trainingGraph(withSourceGradient: nil, nodeHandler: nil)

    guard let trainGraph = MPSNNGraph(device: device,
                                      resultImage: gradNodes![0].resultImage,
                                      resultImageIsNeeded: true) else {
        fatalError("Unable to get training graph.")
    }

Thanks
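One possible workaround, sketched under the assumption that the network can be split in two: build separate inference and training MPSNNGraphs and encode a custom compute pass on the intermediate MPSImage between the two encodes, all on the same command buffer. Everything named here (inferenceGraph, trainingGraph, postProcessPipeline, transformedImage, the thread counts) is a hypothetical placeholder, not an existing template symbol.

    let commandBuffer = commandQueue.makeCommandBuffer()!

    // 1. Encode the inference portion of the network.
    let inferenceOut = inferenceGraph.encode(to: commandBuffer, sourceImages: [inputImage])!

    // 2. Transform the intermediate result with a custom compute pipeline.
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(postProcessPipeline)    // hypothetical MTLComputePipelineState
    encoder.setTexture(inferenceOut.texture, index: 0)
    encoder.setTexture(transformedImage.texture, index: 1)  // hypothetical destination MPSImage
    encoder.dispatchThreadgroups(threadgroupCount, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()

    // 3. Feed the transformed image (plus labels) into the loss/training graph.
    _ = trainingGraph.encode(to: commandBuffer, sourceImages: [transformedImage, labelImage])
    commandBuffer.commit()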
Replies: 0 · Boosts: 0 · Views: 684 · Activity: Mar ’24
Detecting a touch on an SKSpriteNode within a touchesBegan event?
Detecting a touch on an SKSpriteNode within a touchesBegan event? My experience to date has focused on using game controllers with apps, not a touch-activated iOS app. Here are some short code snippets. Note: the error I am trying to correct is noted in the very first snippet (touchesBegan), at the comment <== shows "horse". Yes, there is a "horse", but it is nowhere near the "creditsInfo" SKSpriteNode within my .sks file. Please note that this "creditsInfo" SKSpriteNode is programmatically generated by my addCreditsButton(..) and is placed very near the top-left of my GameScene.

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        if let ourScene = GameScene(fileNamed: "GameScene") {
            if let touch: UITouch = touches.first {
                let location = touch.location(in: view)
                let node: SKNode = ourScene.atPoint(location)
                print("node.name = \(node.name!)")  // <== shows "horse"
                if (node.name == "creditsInfo") {
                    showCredits()
                }
            }
        } // if let ourScene
    } // touchesBegan

The above touchesBegan function is in an extension of GameViewController, which according to the docs is okay; namely, touchesBegan is a UIView method besides being a UIViewController method. Within my primary showScene() function, I have:

    if let ourScene = GameScene(fileNamed: "GameScene") {
        #if os(iOS)
        addCreditsButton(toScene: ourScene)
        #endif
    }

with:

    func addCreditsButton(toScene: SKScene) {
        if thisSceneName == "GameScene" {
            itsCreditsNode.name = "creditsInfo"
            itsCreditsNode.anchorPoint = CGPoint(x: 0.5, y: 0.5)
            itsCreditsNode.size = CGSize(width: 2*creditsCircleRadius, height: 2*creditsCircleRadius)
            itsCreditsNode.zPosition = 3
            creditsCirclePosY = roomHeight/2 - creditsCircleRadius - creditsCircleOffsetY
            creditsCirclePosX = -roomWidth/2 + creditsCircleRadius + creditsCircleOffsetX
            itsCreditsNode.position = CGPoint(x: creditsCirclePosX, y: creditsCirclePosY)
            toScene.addChild(itsCreditsNode)
        } // if thisSceneName
    } // addCreditsButton

To finish, I repeat what I stated at the very top: the error I am trying to correct is noted in the very first snippet (touchesBegan), at the comment <== shows "horse".
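A hedged guess at the cause, sketched below: the snippet builds a brand-new GameScene(fileNamed:) and hit-tests it with a view-coordinate point, instead of asking the scene that is actually being presented and converting the touch into scene coordinates. Handling the touch inside the presented scene sidesteps both issues; the names mirror the question's and are otherwise hypothetical.

    // Inside GameScene (the scene that is actually presented), not the view controller.
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        let location = touch.location(in: self)   // scene coordinates, not view coordinates
        let tappedNode = atPoint(location)
        if tappedNode.name == "creditsInfo" {
            // showCredits()   // hypothetical: forward to whatever presents the credits
        }
    }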
Replies: 5 · Boosts: 0 · Views: 1k · Activity: Feb ’24
PortalComponent – allow world content to peek out
Hello, I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?). I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it. If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this. Thanks for any hints!
Replies: 4 · Boosts: 0 · Views: 1k · Activity: Feb ’24
Unable to draw textures on SCNGeometry which is created from ARKit FaceAnchor points.
In the code below I have extracted face mesh vertices from ARKit face anchors and created a custom face mesh using SceneKit's SCNGeometry. This enabled me to stretch face mesh vertices as per my requirement. The problem I am now facing is as follows: I am trying to apply a lipstick texture material, which is of type SCNMaterial. Although ARSCNFaceGeometry lets me apply different textures through SCNMaterial and SCNNode, I am not able to do the same using my CustomFaceGeometry. When I apply a lipstick texture (which looks like the image attached below), the full face gets colored or modified; I want only the part of the face where the texture's transparency is > 0 to change, and I don't want the rest of the face to be modified. Can you give me a detailed solution using code?

    // ViewController.swift
    import UIKit
    import ARKit
    import SceneKit
    import simd

    class ViewController: UIViewController, ARSCNViewDelegate, ARSessionDelegate {
        @IBOutlet weak var sceneView: ARSCNView!
        let vertexIndicesOfInterest = [250]
        var customFaceGeometry: CustomFaceGeometry!
        var scnFaceGeometry: SCNGeometry!
        private var faceUvGenerator: FaceTextureGenerator!
        var faceGeometry: ARSCNFaceGeometry!

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.delegate = self
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            let configuration = ARFaceTrackingConfiguration()
            sceneView.session.run(configuration)
        }
    }

    extension ViewController {
        func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
            guard let faceAnchor = anchor as? ARFaceAnchor else { return }
            customFaceGeometry = CustomFaceGeometry(fromFaceAnchor: faceAnchor)
            let customGeometryNode = SCNNode(geometry: customFaceGeometry.geometry)
            customFaceGeometry.geometry.firstMaterial?.fillMode = .lines
            customFaceGeometry.geometry.firstMaterial?.transparency = 0.0
            customFaceGeometry.geometry.firstMaterial?.isDoubleSided = true
            node.addChildNode(customGeometryNode)
        }

        func renderer(_ renderer: SCNSceneRenderer, willUpdate node: SCNNode, for anchor: ARAnchor) {
            guard let faceAnchor = anchor as? ARFaceAnchor,
                  let faceMeshNode = node.childNodes.first else { return }
            DispatchQueue.main.async {
                self.customFaceGeometry.update(withFaceAnchor: faceAnchor, node: faceMeshNode)
            }
        }
    }

    class CustomFaceGeometry {
        var geometry: SCNGeometry
        let lipImage = UIImage(named: "Face.scnassets/lip_arks_y7.png")

        init(fromFaceAnchor faceAnchor: ARFaceAnchor) {
            self.geometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor)!
        }

        static func createCustomFaceGeometry(fromVertices vertices_o: [SCNVector3]) -> SCNGeometry {
            var vertices = vertices_o
            let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
            let vertexSource = SCNGeometrySource(data: vertexData,
                                                 semantic: .vertex,
                                                 vectorCount: vertices.count,
                                                 usesFloatComponents: true,
                                                 componentsPerVector: 3,
                                                 bytesPerComponent: MemoryLayout<Float>.size,
                                                 dataOffset: 0,
                                                 dataStride: MemoryLayout<SCNVector3>.stride)
            let indices: [Int32] = Array(0..<Int32(vertices.count))
            let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
            let element = SCNGeometryElement(data: indexData,
                                             primitiveType: .point,
                                             primitiveCount: vertices.count,
                                             bytesPerIndex: MemoryLayout<Int32>.size)
            return SCNGeometry(sources: [vertexSource], elements: [element])
        }

        static func createGeometry(fromFaceAnchor faceAnchor: ARFaceAnchor) -> SCNGeometry {
            let vertices = faceAnchor.geometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
            return CustomFaceGeometry.createCustomFaceGeometry(fromVertices: vertices)
        }

        func update(withFaceAnchor faceAnchor: ARFaceAnchor, node: SCNNode) {
            if let newGeometry = CustomFaceGeometry.createCustomSCNGeometry(from: faceAnchor) {
                node.geometry = newGeometry
                let lipstickNode = SCNNode(geometry: newGeometry)
                let lipstickTextureMaterial = SCNMaterial()
                lipstickTextureMaterial.diffuse.contents = lipImage
                lipstickTextureMaterial.transparency = 1.0
                lipstickNode.geometry?.firstMaterial = lipstickTextureMaterial
                node.geometry?.firstMaterial?.fillMode = .lines
                node.geometry?.firstMaterial?.transparency = 0.5
            }
        }

        static func createCustomSCNGeometry(from faceAnchor: ARFaceAnchor) -> SCNGeometry? {
            let faceGeometry = faceAnchor.geometry
            var vertices: [SCNVector3] = faceGeometry.vertices.map { SCNVector3($0.x, $0.y, $0.z) }
            print(vertices[250])

            let ll_ratio_y = Float(0.969999)
            for index in [290, 274, 265, 700, 730, 25, 709, 725, 710] {
                vertices[index] = SCNVector3(x: vertices[index].x,
                                             y: vertices[index].y * ll_ratio_y,
                                             z: vertices[index].z)
            }

            let vertexData = Data(bytes: vertices, count: vertices.count * MemoryLayout<SCNVector3>.size)
            let vertexSource = SCNGeometrySource(data: vertexData,
                                                 semantic: .vertex,
                                                 vectorCount: vertices.count,
                                                 usesFloatComponents: true,
                                                 componentsPerVector: 3,
                                                 bytesPerComponent: MemoryLayout<Float>.size,
                                                 dataOffset: 0,
                                                 dataStride: MemoryLayout<SCNVector3>.stride)
            let indices: [UInt16] = faceGeometry.triangleIndices.map(UInt16.init)
            let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<UInt16>.size)
            let element = SCNGeometryElement(data: indexData,
                                             primitiveType: .triangles,
                                             primitiveCount: indices.count / 3,
                                             bytesPerIndex: MemoryLayout<UInt16>.size)
            return SCNGeometry(sources: [vertexSource], elements: [element])
        }
    }
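One hedged observation that may explain the behavior: the custom SCNGeometry above is built from a vertex source only, with no texture-coordinate source, so SceneKit has no UV mapping and spreads the diffuse image across every triangle. A sketch of adding the texture coordinates that ARFaceGeometry already provides, and letting the texture's alpha leave the non-lip areas untouched, is below; treat it as a starting point rather than a verified fix.

    // Inside createCustomSCNGeometry(from:), alongside the existing vertex source (sketch).
    let uvs: [CGPoint] = faceGeometry.textureCoordinates.map {
        CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
    }
    let uvSource = SCNGeometrySource(textureCoordinates: uvs)

    // Build the geometry with both sources so the material has a UV mapping.
    let geometry = SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

    // Material: the lip texture's transparent pixels should leave the face unmodified.
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "Face.scnassets/lip_arks_y7.png")
    material.transparencyMode = .aOne   // use the texture's alpha channel
    material.isDoubleSided = true
    geometry.firstMaterial = material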
Replies: 2 · Boosts: 0 · Views: 737 · Activity: Feb ’24
Indoor skybox is displayed large and far away in the field of view in visionOS?
Why is the indoor skybox displayed so large and far away in the field of view on visionOS? The code is from https://developer.apple.com/documentation/visionos/destination-video:

    func addSkybox(for destination: Destination) {
        let subscription = TextureResource.loadAsync(named: destination.imageName).sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                // We flip the sphere inside out so the texture is shown inside.
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3<Float>(0.0, 1.0, 0.0)

                // Rotate the sphere to show the best initial view of the space.
                updateRotation(for: destination)
            }
        )
        components.set(Entity.SubscriptionComponent(subscription: subscription))
    }
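A hedged guess at why the space reads as large and far away: .generateSphere(radius: 1E3) puts the textured surface 1000 m from the viewer, so the imagery shows essentially no stereo parallax and looks infinitely distant. For an indoor panorama, a much smaller sphere keeps it closer; the sketch below reuses the names from the sample, with skyboxEntity and the radius being purely illustrative.

    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))   // `texture` as loaded in the code above
    skyboxEntity.components.set(ModelComponent(
        mesh: .generateSphere(radius: 8),              // e.g. ~8 m instead of 1E3
        materials: [material]
    ))
    skyboxEntity.scale *= .init(x: -1, y: 1, z: 1)     // keep the inside-out flip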
Replies: 3 · Boosts: 0 · Views: 596 · Activity: Feb ’24
SceneKit Hangs
I experience an issue with SceneKit that is driving me crazy ;( I get severe hangs when I disable Metal API Validation (which is the default when you don't run from Xcode). So is there any way to force-enable Metal API Validation for an App Store binary? (i.e., run with MTL_DEBUG_LAYER=1 for TestFlight or the App Store.) The hangs happen on Catalyst but also on iOS if I use lightingEnvironment...
Replies: 0 · Boosts: 0 · Views: 555 · Activity: Feb ’24
USDZ file not working
I'm working on a project in which RealityKit for iOS will be used to display 3D files (USDZ) in a real-world environment. The model also needs to animate differently depending on which button is pressed. When using models downloaded from various websites or from Apple Quick Look, the code works well: I can hold the model in place and tap a button to play its animation. Unfortunately, although the model my team provided (exported through Blender) animates in SceneKit, it does not play at all when placed in the real world, not even when a button is pressed. I checked it with a RealityKit USDZ tool, which reports that the file is not valid, but we can't figure out what's wrong. Could you please help me figure out what's wrong with my USDZ file? Working USDZ: https://developer.apple.com/augmented-reality/quick-look/models/drummertoy/toy_drummer_idle.usdz My file: https://drive.google.com/file/d/1UibIKBy2fx4q0XxSNodOwQZMLgktKiKF/view?usp=sharing
Replies: 1 · Boosts: 0 · Views: 881 · Activity: Feb ’24
Custom render pass texture maps with LayerRenderer pipeline
Hi, Re: WWDC2023-10089, I have a question about creating texture maps during pipeline setup. In traditional MTKView setups, it's easy to query the view size to know what the dimensions of a texture map should be. But after digging through all the documentation on the classes, I don't see any way to find this information. There's the drawable, and querying it, and then maybe getting the info from the default render texture maps; but I'm trying to set these textures up when I set up the pipelines, so I don't think that will work, because the render loop won't have started yet. Secondly, I'm wondering whether, with foveation, there's even more to consider when creating these kinds of auxiliary render passes. Basically, for example's sake, imagine you have a working visionOS Metal pipeline, but now you want to add a special render pass to do some effects. Typically you'd create a texture map to store that pass, calculate the work in a fragment shader, etc., and then use another pipeline state to mix that with the default rendering pipeline. Any help appreciated, thanks!
Replies: 1 · Boosts: 0 · Views: 505 · Activity: Feb ’24
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation?
How to integrate UIDevice rotation and creating a new UIBezierPath after rotation? My challenge here is to successfully integrate UIDevice rotation and create a new UIBezierPath every time the UIDevice is rotated. (Please accept my apologies for this post's length, but I can't seem to avoid it.)

As a preamble, I have bounced back and forth between

    NotificationCenter.default.addObserver(self,
                                           selector: #selector(rotated),
                                           name: UIDevice.orientationDidChangeNotification,
                                           object: nil)

called within my viewDidLoad() together with

    @objc func rotated() {
    }

and

    override func viewWillLayoutSubviews() {
        // please see code below
    }

My success was much better when I implemented viewWillLayoutSubviews() versus rotated(), so let me provide detailed code just for viewWillLayoutSubviews(). I have concluded that every time I rotate the UIDevice, a new UIBezierPath needs to be generated because the positions and sizes of my various SKSpriteNodes change. I am definitely not saying that I have to create a new UIBezierPath with every rotation, just saying I think I have to.

Start of Code

    // declared at the top of my GameViewController:
    var myTrain: SKSpriteNode!
    var savedTrainPosition: CGPoint?
    var trackOffset = 60.0
    var trackRect: CGRect!
    var trainPath: UIBezierPath!

My UIBezierPath creation and SKAction.follow code is as follows:

    // called within my setTrackPaths() – see way below
    func createTrainPath() {
        // savedTrainPosition initially set within setTrackPaths()
        // and later reset when stopping + resuming moving myTrain
        // via stopFollowTrainPath()
        trackRect = CGRect(x: savedTrainPosition!.x,
                           y: savedTrainPosition!.y,
                           width: tracksWidth,
                           height: tracksHeight)
        trainPath = UIBezierPath(ovalIn: trackRect)
        trainPath = trainPath.reversing()  // makes myTrain move CW
    } // createTrainPath

    func startFollowTrainPath() {
        let theSpeed = Double(5*thisSpeed)
        var trainAction = SKAction.follow(trainPath.cgPath,
                                          asOffset: false,
                                          orientToPath: true,
                                          speed: theSpeed)
        trainAction = SKAction.repeatForever(trainAction)
        createPivotNodeFor(myTrain)
        myTrain.run(trainAction, withKey: runTrainKey)
    } // startFollowTrainPath

    func stopFollowTrainPath() {
        guard myTrain == nil else {
            myTrain.removeAction(forKey: runTrainKey)
            savedTrainPosition = myTrain.position
            return
        }
    } // stopFollowTrainPath

Here is the detailed viewWillLayoutSubviews I promised earlier:

    override func viewWillLayoutSubviews() {
        super.viewWillLayoutSubviews()

        if (thisSceneName == "GameScene") {
            // code to pause moving game pieces
            setGamePieceParms()   // for GamePieces, e.g., trainWidth
            setTrackPaths()       // for trainPath
            reSizeAndPositionNodes()
            // code to resume moving game pieces
        } // if (thisSceneName == "GameScene")
    } // viewWillLayoutSubviews

    func setGamePieceParms() {
        if (thisSceneName == "GameScene") {
            roomScale = 1.0
            let roomRect = UIScreen.main.bounds
            roomWidth = roomRect.width
            roomHeight = roomRect.height
            roomPosX = 0.0
            roomPosY = 0.0

            tracksScale = 1.0
            tracksWidth = roomWidth - 4*trackOffset  // inset from screen edge
            #if os(iOS)
            if UIDevice.current.orientation.isLandscape {
                tracksHeight = 0.30*roomHeight
            } else {
                tracksHeight = 0.38*roomHeight
            }
            #endif

            // center horizontally
            tracksPosX = roomPosX
            // flush with bottom of UIScreen
            let temp = roomPosY - roomHeight/2
            tracksPosY = temp + trackOffset + tracksHeight/2

            trainScale = 2.8
            trainWidth = 96.0*trainScale   // original size = 96 x 110
            trainHeight = 110.0*trainScale
            trainPosX = roomPosX
            #if os(iOS)
            if UIDevice.current.orientation.isLandscape {
                trainPosY = temp + trackOffset + tracksHeight + 0.30*trainHeight
            } else {
                trainPosY = temp + trackOffset + tracksHeight + 0.20*trainHeight
            }
            #endif
        } // if (thisSceneName == "GameScene")
    } // setGamePieceParms

    // a work in progress
    func setTrackPaths() {
        if (thisSceneName == "GameScene") {
            if (savedTrainPosition == nil) {
                savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
            } else {
                savedTrainPosition = CGPoint(x: tracksPosX - tracksWidth/2, y: tracksPosY)
            }
            createTrainPath()
        } // if (thisSceneName == "GameScene")
    } // setTrackPaths

    func reSizeAndPositionNodes() {
        myTracks.size = CGSize(width: tracksWidth, height: tracksHeight)
        myTracks.position = CGPoint(x: tracksPosX, y: tracksPosY)
        // more Nodes here ..
    }

End of Code

My theory says that when I call setTrackPaths() with every UIDevice rotation, createTrainPath() is called. Nothing of significance happens visually as far as the UIBezierPath is concerned until I call startFollowTrainPath().

Bottom Line

It is then that I see for sure that a new UIBezierPath has not been created as it should have been when I called createTrainPath() on rotating the UIDevice. The "new" UIBezierPath is not new, but the old one. If you've made it this far through my long code, the question is: what do I need to do to make a new UIBezierPath that fits the resized and repositioned SKSpriteNode?
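A hedged reading of the symptom, with a small sketch: createTrainPath() does build a new UIBezierPath, but an SKAction.follow that is already running was created from the old path's cgPath and keeps using it. Removing the old action and starting a new follow after the path is rebuilt would be one way to confirm; handleRotationLayoutChange() below is a hypothetical helper, while the other names are reused from the question.

    func handleRotationLayoutChange() {   // e.g. called from viewWillLayoutSubviews()
        stopFollowTrainPath()             // remove the action built from the old path
        setGamePieceParms()
        setTrackPaths()                   // recomputes savedTrainPosition and calls createTrainPath()
        reSizeAndPositionNodes()
        startFollowTrainPath()            // new SKAction.follow built from the new trainPath
    }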
Replies: 8 · Boosts: 0 · Views: 1.2k · Activity: Feb ’24
Understanding Buffer Memory Alignment
In the project template for using ARKit with Metal, there's a definition for the memory alignment of the buffer that holds the SharedUniforms structure. It is defined like this:

    // The 16 byte aligned size of our uniform structures
    let kAlignedSharedUniformsSize: Int = (MemoryLayout<SharedUniforms>.size & ~0xFF) + 0x100

If I understood correctly, this line of code does the following:

    Calculates the size of the SharedUniforms structure in bytes
    Clears out the last 8 bits of the size representation
    Adds 256 bytes to the size

So if I'm not mistaken, this will round up the size of the SharedUniforms structure to 256 bytes, and not 16 bytes as the comment suggests. Is there something I've overlooked, since I can't wrap my head around how this would align the size to 16 bytes?
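A small worked check of that reading (a sketch; the sample sizes stand in for MemoryLayout<SharedUniforms>.size): the expression rounds up to a multiple of 256 bytes (0x100), which is stricter than, and therefore still satisfies, 16-byte alignment, so the comment simply promises less than the code delivers.

    // Same rounding expression as the template, applied to sample sizes.
    func alignedSize(_ size: Int) -> Int {
        (size & ~0xFF) + 0x100
    }

    print(alignedSize(200))   // 256 -> rounded up to the next multiple of 256
    print(alignedSize(256))   // 512 -> an exact multiple still gets a full extra 256 bytes
    print(alignedSize(300))   // 512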
Replies: 0 · Boosts: 1 · Views: 542 · Activity: Feb ’24
Poor precision with fract in MSL fast-math mode
Here is an example fragment shader (rendering a cube with texCoord in [0, 1]):

    colorSample.x = in.texCoord.x;

which produces this result:

However, if I make a small change to the code like this:

    colorSample.x = fract(ceil(0.1 + in.texCoord.x * 0.8) * 1000000) + in.texCoord.x;

then it produces this result:

If I disable fast-math in the second case, it produces the same image as in the first case. It seems that in fast-math mode, a large argument to fract() affects the precision of the other operand in the same expression. Is this a bug in fast-math mode? How should I work around this problem?
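If the goal is to turn fast math off only for this shader rather than project-wide, one option (a sketch using runtime compilation; shaderSource and the function name are placeholders) is to compile the library with MTLCompileOptions and compare the output:

    let options = MTLCompileOptions()
    options.fastMathEnabled = false   // compile just this library with precise math

    let library = try device.makeLibrary(source: shaderSource, options: options)
    let fragmentFn = library.makeFunction(name: "fragmentShader")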
Replies: 2 · Boosts: 0 · Views: 376 · Activity: Feb ’24
Help me understand the crash report
Help me understand this crash report; it only started happening since the last update.

Translated Report (Full Report Below)

    Process:               dota2 [7353]
    Path:                  /Users/USER/Library/Application Support/Steam/*/dota2.app/Contents/MacOS/dota2
    Identifier:            com.valvesoftware.dota2
    Version:               1.0.0
    Code Type:             X86-64 (Translated)
    Parent Process:        launchd [1]
    User ID:               501

    Date/Time:             2024-02-18 18:00:45.9766 -0500
    OS Version:            macOS 14.3.1 (23D60)
    Report Version:        12
    Anonymous UUID:        0F5E4D0D-9839-DF78-5C28-93F6D26A5763
    Sleep/Wake UUID:       52D18CB1-ADD8-4A75-B6A1-C0CF4CF2A306

    Time Awake Since Boot: 85000 seconds
    Time Since Wake:       1722 seconds

    System Integrity Protection: enabled

    Notes: PC register does not match crashing frame (0x0 vs 0x1032D1C08)

    Crashed Thread:        0  MainThrd  Dispatch queue: com.apple.main-thread

    Exception Type:        EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes:       KERN_INVALID_ADDRESS at 0x0000441f0f660002
    Exception Codes:       0x0000000000000001, 0x0000441f0f660002

    Termination Reason:    Namespace SIGNAL, Code 11 Segmentation fault: 11
    Terminating Process:   exc handler [7353]

    VM Region Info: 0x441f0f660002 is not in any region.
      Bytes after previous region: 48357375344643
      Bytes before following region: 65536781844478
          REGION TYPE          START - END                  [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
          Memory Tag 255       1823fb340000-1823fb380000    [  256K] rw-/rwx SM=PRV
    --->  GAP OF 0x67960cc80000 BYTES
          MALLOC_MEDIUM        7fba08000000-7fba10000000    [128.0M] rw-/rwx SM=PRV

    Error Formulating Crash Report: PC register does not match crashing frame (0x0 vs 0x1032D1C08)

    Kernel Triage: VM - (arg = 0x3) mach_vm_allocate_kernel failed within call to vm_map_enter
Replies: 1 · Boosts: 0 · Views: 1.4k · Activity: Feb ’24
Frame not rendered, too many frames in flight.
On startup I'm getting a "We reached more than 3 frames in flight. That's too many. Did you forget to call cp_frame_end_submission()?" error despite cp_frame_end_submission() being called when needed. Nothing is rendered in the 1 frame that does go through. Is there something I'm missing that would cause cp_frame_end_submission to not register?
Replies: 0 · Boosts: 1 · Views: 362 · Activity: Feb ’24
jax-metal error with jax.numpy.linalg.inv
Hi, I have an issue with jax.numpy.linalg.inv(a).

    import jax.numpy as jnp
    import jax.numpy.linalg as jnpl

    B = jnp.identity(2)
    jnpl.inv(B)

throws the following error:

    XlaRuntimeError: UNKNOWN: /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: error: failed to legalize operation 'mhlo.triangular_solve'
    /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: called from
    /var/folders/pw/wk5rfkjj6qggqp8r8zb2bw8w0000gn/T/ipykernel_34334/2572982404.py:9:0: note: see current operation: %120 = "mhlo.triangular_solve"(%42#4, %119) {left_side = true, lower = true, transpose_a = #mhlo<transpose NO_TRANSPOSE>, unit_diagonal = true} : (tensor<2x2xf32>, tensor<2x2xf32>) -> tensor<2x2xf32>

Any ideas what could be the issue or how to solve it?
Replies: 2 · Boosts: 0 · Views: 885 · Activity: Feb ’24
How To Resize An Image and Retain Wide Color Gamut
I'm trying to resize NSImages on macOS. I'm doing so with an extension like this:

    extension NSImage {
        // MARK: Resizing

        /// Resize the image to the given size.
        ///
        /// - Parameter size: The size to resize the image to.
        /// - Returns: The resized image.
        func resized(toSize targetSize: NSSize) -> NSImage? {
            let frame = NSRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
            guard let representation = self.bestRepresentation(for: frame, context: nil, hints: nil) else {
                return nil
            }
            let image = NSImage(size: targetSize, flipped: false, drawingHandler: { (_) -> Bool in
                return representation.draw(in: frame)
            })
            return image
        }
    }

The problem is that, as far as I can tell, the image that comes out of the drawing handler has lost the color profile of the original image rep. I'm testing it with a wide color gamut image (attached); it becomes pure red when examining the resulting image. If this were UIKit I guess I'd use UIGraphicsImageRenderer and select the right UIGraphicsImageRendererFormat.Range, so I suspect I need to use NSGraphicsContext here to do the rendering, but I can't see what I would set on it to make it use wide color, or how I'd use it.
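One possible approach, offered as a sketch rather than a confirmed answer: skip the drawingHandler-based NSImage and draw the source CGImage into a CGContext that explicitly carries the source's color space (falling back to Display P3 here as an assumption), so the wide-gamut tag survives the resize:

    import AppKit

    func resizedPreservingGamut(_ image: NSImage, to targetSize: NSSize) -> NSImage? {
        guard let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil) else { return nil }
        // Reuse the source's color space; assume Display P3 if it has none.
        let colorSpace = cgImage.colorSpace ?? CGColorSpace(name: CGColorSpace.displayP3)!
        guard let context = CGContext(data: nil,
                                      width: Int(targetSize.width),
                                      height: Int(targetSize.height),
                                      bitsPerComponent: 8,   // the color space tag is what matters for the hue shift
                                      bytesPerRow: 0,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
        context.interpolationQuality = .high
        context.draw(cgImage, in: CGRect(origin: .zero, size: targetSize))
        guard let scaled = context.makeImage() else { return nil }
        return NSImage(cgImage: scaled, size: targetSize)
    }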
Replies: 2 · Boosts: 0 · Views: 1.1k · Activity: Apr ’23