According to these Swift docs and this forum thread, keyboard shortcuts / key equivalents are expected to be declared in terms of a US English layout.
As an application developer, you may then rely on the automatic localization provided by allowsAutomaticKeyEquivalentLocalization (for menus), or localize your key equivalents manually (for menus and other controls with key equivalents).
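For example (an illustrative menu item; allowsAutomaticKeyEquivalentLocalization is the per-item NSMenuItem property, macOS 12+):

import AppKit

// Declare the key equivalent in US-English terms ("c")...
let item = NSMenuItem(title: "Copy", action: #selector(NSText.copy(_:)), keyEquivalent: "c")
item.keyEquivalentModifierMask = [.command]
// ...and opt in to letting AppKit remap it for the active keyboard layout.
item.allowsAutomaticKeyEquivalentLocalization = true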
But how does AppKit handle non-localized key equivalents when faced with a non-US English keyboard layout? In particular, with a Hebrew layout active, the C key unmodified produces "ב", but modified with ⌘ it produces "c".
Does AppKit compare the key equivalent ⌘c to the modified or the unmodified character of the event, or both? Or is there more to this logic? E.g. does it also try to match the incoming event against a US English layout, even if that layout is not active at the moment?
The use-case here is implementing performKeyEquivalent for a custom control, where the documentation says:
You should extract the characters for a key equivalent using the NSEvent method charactersIgnoringModifiers.
So would simply comparing the event's modifiers to the key equivalent's modifiers, and the event's charactersIgnoringModifiers to the key equivalent, give behavior similar to AppKit's own logic, e.g. in [NSEvent _matchesKeyEquivalent:modifierMask:]?
Based on observed behavior, pressing ⌘c with a Hebrew layout active does trigger an NSButton with a key equivalent of ⌘c, which doesn't seem to match the documented behavior of using the unmodified characters ("ב") as the basis.
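To make the question concrete, here is the naive comparison I have in mind, as a sketch. KeyEquivalentControl and its two stored properties are illustrative names for a hypothetical custom control, not NSControl API:

import AppKit

class KeyEquivalentControl: NSControl {
    // Illustrative storage; NSControl has no built-in key equivalent.
    var keyEquivalent = "c"
    var keyEquivalentModifierMask: NSEvent.ModifierFlags = [.command]

    override func performKeyEquivalent(with event: NSEvent) -> Bool {
        let relevant: NSEvent.ModifierFlags = [.command, .shift, .option, .control]
        guard event.modifierFlags.intersection(relevant) == keyEquivalentModifierMask else {
            return false
        }
        // The docs say to use charactersIgnoringModifiers, but with a Hebrew
        // layout that is "ב" for the C key; also checking the modified
        // characters ("c") is what seems needed to reproduce NSButton's
        // observed behavior. Shift/uppercase handling is ignored here.
        if event.charactersIgnoringModifiers == keyEquivalent ||
            event.characters == keyEquivalent {
            _ = sendAction(action, to: target)
            return true
        }
        return false
    }
}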
What's the right way to implement key equivalent matching that handles non-Roman/Latin layouts?
E.g. pressing Cmd+Option+C in a Greek layout produces an NSEvent with chars="ç" and unmodchars="ψ", neither of which will match a key equivalent of Cmd+Option+C by simple comparison, yet performKeyEquivalent on a button with that exact key equivalent returns YES and activates the button.
How would someone replicate that?
[NSEvent charactersByApplyingModifiers:] also reports "ç", and so does UCKeyTranslate. Yet the Keyboard Viewer shows a modifier layer with "c", not the "ç" that the event reports.
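My current guess, and it is only a guess, is that the matching additionally translates the event's hardware key code through the layout's Command layer (or through an ASCII-capable layout). A sketch of that idea using UCKeyTranslate:

import AppKit
import Carbon

// Sketch only: translate a key code through the Command layer of the
// current layout, which for many non-Latin layouts is where the Latin
// characters shown by the Keyboard Viewer live. Using
// TISCopyCurrentASCIICapableKeyboardLayoutInputSource() instead would
// test the "match against a US-style layout" hypothesis.
func commandLayerCharacters(for event: NSEvent) -> String? {
    guard let source = TISCopyCurrentKeyboardLayoutInputSource()?.takeRetainedValue(),
          let rawData = TISGetInputSourceProperty(source, kTISPropertyUnicodeKeyLayoutData)
    else { return nil }
    let layoutData = Unmanaged<CFData>.fromOpaque(rawData).takeUnretainedValue() as Data

    var deadKeyState: UInt32 = 0
    var length: UniCharCount = 0
    var chars = [UniChar](repeating: 0, count: 4)

    let status = layoutData.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) -> OSStatus in
        UCKeyTranslate(buffer.baseAddress!.assumingMemoryBound(to: UCKeyboardLayout.self),
                       event.keyCode,
                       UInt16(kUCKeyActionDisplay),
                       UInt32(cmdKey >> 8), // modifiers in UCKeyTranslate format
                       UInt32(LMGetKbdType()),
                       OptionBits(kUCKeyTranslateNoDeadKeysMask),
                       &deadKeyState,
                       UniCharCount(chars.count),
                       &length,
                       &chars)
    }
    guard status == noErr, length > 0 else { return nil }
    return String(utf16CodeUnits: chars, count: Int(length))
}

If the Command-layer hypothesis holds, this should yield "c" for the Greek layout's C key, matching the Keyboard Viewer, but I haven't verified it against AppKit's private matching logic.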
The "Debug View Hierarchy" feature of Xcode is very convenient to break down and debug a complex view hierarchy. But Metal content/layers/views do not render anything in this mode.
Is there some special flag that needs to be set on the layer to support the Xcode feature? Or some special callback that needs to be implemented?
I would naively assume that Xcode could do some kind of read-back from the window server or GPU, even for layers rendered via accelerated APIs like Metal or OpenGL, similar to what's done when recording/capturing the screen.
Filed as FB13509137
The Drawing fully immersive content using Metal guide describes how to use Metal for visionOS immersive experiences, but seemingly requires Swift to bring up the CompositorLayer.
@main
struct MyApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MyContent") {
            CompositorLayer { layerRenderer in
                let renderThread = Thread {
                    let engine = myEngineCreate(layerRenderer)
                    myEngineRenderLoop(engine)
                }
                renderThread.name = "Render Thread"
                renderThread.start()
            }
        }
    }
}
The ImmersiveSpace scene can presumably be replaced with a call to
[UIApplication.sharedApplication activateSceneSessionForRequest:[UISceneSessionActivationRequest requestWithRole:UISceneSessionRoleImmersiveSpaceApplication] errorHandler:nil]
But is there a replacement for CompositorLayer? Or some other way to produce a cp_layer_renderer?
Perhaps it would be possible to write a small Swift helper for this, but given the Swift interface for CompositorLayer, how would that be tied to an existing UIScene as created above?
@available(visionOS 1.0, *)
public struct CompositorLayer : SwiftUI.ImmersiveSpaceContent {
    public init(configuration: any _CompositorServices_SwiftUI.CompositorLayerConfiguration = .default, renderer: @escaping (CompositorServices.LayerRenderer) -> Swift.Void)
    public var body: Swift.Never {
        get
    }
    public typealias Body = Swift.Never
}
As mentioned in https://developer.apple.com/forums//thread/759955, I was having trouble on macOS 15 with a launch agent accessing local network resources, even though the local network permission dialog popped up and the Settings app showed the permission as granted.
The following was logged:
nehelper +[NEProcessInfo copyUUIDsForExecutable:]_block_invoke: failed to get UUIDs for /Users/foo/my-binary
It turned out that the problem was caused by the default Go toolchain not producing an LC_UUID load command, which seems to be critical for the network privacy subsystem to determine whether the binary is allowed access or not.
The issue has been reported upstream here: https://github.com/golang/go/issues/68678
To work around this I added -ldflags="-linkmode=external" when building the Go binary, so that the system linker (which does add LC_UUID) is invoked.
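For reference, the full workaround, plus a quick way to verify the load command is present (binary name hypothetical):

go build -ldflags="-linkmode=external" -o my-binary .
otool -l my-binary | grep -A2 LC_UUID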
The local network permission does not seem to be stored in the system or user TCC database, though?
Having a way to programmatically grant the permission to a given app without user interaction, for example when automatically provisioning a CI node for macOS testing (with SIP disabled, so full disk access is available), would be nice.
Filed as FB14878596
I'm running a launch agent in a CI node. The agent is responsible for launching CI build/test jobs. The agent, being the responsible process, has been granted kTCCServiceScreenCapture permission. With this in place I can run /usr/sbin/screencapture during CI test jobs, archiving the visual state of the CI machine if a test fails, which makes it easier to diagnose why the test failed.
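The capture step is essentially just (output path hypothetical):

/usr/sbin/screencapture -x /path/to/ci-artifacts/failed-test.png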
However, with macOS 15 I get weekly/monthly notifications about the agent being able to record the screen.
The general advice is that apps should migrate to ScreenCaptureKit, but I'm using a built-in macOS tool, /usr/sbin/screencapture, so how am I supposed to deal with that?
According to Technical Note TN2083 the Window Server advertises itself in the global bootstrap namespace, which is why you can launch GUI applications from SSH sessions, even if sshd/sshd-keygen-wrapper is launched as a launch daemon (in a non-GUI per-session bootstrap namespace).
As I understand it, this is also why SessionGetInfo() reports NO for sessionHasGraphicAccess (the SSH session is not an Aqua session type), while CGSessionCopyCurrentDictionary() does return a valid dictionary, because in practice you do have access to the window server.
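A small probe for comparing environments (my own sketch; it assumes AuthSession.h's SessionGetInfo is exposed through the Security module) can be run from an SSH session and from a Launch Agent:

import CoreGraphics
import Foundation
import Security

// Query the caller's security session attributes (AuthSession.h).
var attributes: SessionAttributeBits = 0
let status = SessionGetInfo(SecuritySessionId(callerSecuritySession), nil, &attributes)
let graphic = status == errSecSuccess &&
    attributes & SessionAttributeBits(sessionHasGraphicAccess) != 0
print("sessionHasGraphicAccess:", graphic)

// Non-nil when the process can talk to the window server.
if let dict = CGSessionCopyCurrentDictionary() as NSDictionary? {
    print("CGSessionCopyCurrentDictionary:", dict)
} else {
    print("CGSessionCopyCurrentDictionary: nil")
}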
However, the tech note advises against running GUI programs from SSH sessions, as other GUI services may not be exposed to the global or non-GUI per-session bootstrap namespaces. It uses com.apple.dock.server as an example of such a service, showing how Activity Monitor behaves differently when launched via SSH than via the UI.
Based on the advice of the tech note, articles like https://aahlenst.dev/blog/accessing-the-macos-gui-in-automation-contexts/ recommend running CI UI tests via a Launch Agent instead of SSH.
Now, I've tried to reproduce the Activity Monitor case on macOS 12 and macOS 15, and I cannot reproduce the missing Dock features. The Testing with Xcode documentation also says that:
By default, when you use ssh to login to an macOS system that has no active user session running, a command-line session is created. To ensure that an Aqua session is created for an ssh login, you must have a user logged in on the remote macOS host system. The existence of a user running on the remote system forces Aqua session for the ssh login. Once there is a user running on the host system, running xcodebuild from an ssh login works for all types of tests.
Which raises the question: do modern macOS versions expose GUI services to the global or non-GUI per-session bootstrap namespace, or otherwise enable UI testing from SSH sessions, so that UI tests can safely be run from SSH sessions (as long as a user is logged in to the remote system's UI)? Have things changed in this regard?