Post · Replies · Boosts · Views · Activity

Re-Visiting NSViewController.loadView's behaviour in 2024 and under macOS 15...
This is a post down memory lane for you AppKit developers and Apple engineers...

TL;DR:

When did the default implementation of NSViewController.loadView start making an NSView when there's no matching nib file? (I'm sure that used to return nil at some point way back when...)

If you override NSViewController.loadView and call [super loadView] to have that default NSView created, is it safe to then call self.view within loadView?

I'm refactoring some old Objective-C code that makes extensive use of NSViewController without any use of nibs. It overrides loadView, instantiates all properties that are views, then assigns a view to the view controller's view property. This seems in line with the documentation and related commentary in the header. I also (vaguely) recall this being a necessary pattern when not using nibs:

@interface MyViewController : NSViewController
// No nibs
// No nibName
@end

@implementation MyViewController

- (void)loadView {
    NSView *hostView = [[NSView alloc] initWithFrame:NSZeroRect];

    self.button = [NSButton alloc...];
    self.slider = [NSSlider alloc...];

    [hostView addSubview:self.button];
    [hostView addSubview:self.slider];

    self.view = hostView;
}

@end

While refactoring, I was surprised to find that if you don't override loadView and do all of the setup in viewDidLoad instead, then self.view on a view controller is non-nil, even though there was no nib file that could have provided the view. Clearly NSViewController has realized that there's no nib file matching nibName and that loadView is not overridden, and has created an empty NSView and assigned it to self.view anyway.

Has this always been the behaviour, or did it change at some point? I could have sworn that if there was no matching nib file and you didn't override loadView, then self.view would be nil. I realize some of this behaviour changed in 10.10, as noted in the header, but there's no mention of a default NSView being created.

Because there are some warnings in the header and documentation about being careful when overriding methods related to view loading, I'm curious whether the following pattern is considered "safe" in macOS 15:

- (void)loadView {
    // Have NSViewController create a default view.
    [super loadView];

    self.button = [NSButton...];
    self.slider = [NSSlider...];

    // Is it safe to call self.view within this method?
    [self.view addSubview:self.button];
    [self.view addSubview:self.slider];
}

Finally, if I can rely on NSViewController always creating an NSView for me, even when a nib is not present, is there any recommendation on whether one should continue using loadView or instead move the above code into viewDidLoad?

- (void)viewDidLoad {
    self.button = [NSButton...];
    self.slider = [NSSlider...];

    // Since self.view always seems to be non-nil, then what
    // does loadView offer over just using viewDidLoad?
    [self.view addSubview:self.button];
    [self.view addSubview:self.slider];
}

This application will have macOS 15 as a minimum requirement.
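For anyone following along in Swift, here is a minimal sketch of the second pattern I'm asking about; the button and slider are just placeholder controls, not my real view hierarchy:

import AppKit

final class MyViewController: NSViewController {
    // Placeholder subviews for illustration only.
    private let button = NSButton(title: "Run", target: nil, action: nil)
    private let slider = NSSlider(value: 0.5, minValue: 0, maxValue: 1,
                                  target: nil, action: nil)

    override func loadView() {
        // Let NSViewController create its default view (no nib present).
        super.loadView()

        // The question: is touching self.view here guaranteed to be safe?
        view.addSubview(button)
        view.addSubview(slider)
    }
}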
0 replies · 0 boosts · 140 views · 1w
Possibly Incorrect Statement in AppKit Release Notes for macOS 14.
While trying to debug some weird drawing issues under macOS 14, I remembered that there was a comment in the AppKit Release Notes related to drawing and NSView.clipsToBounds.

The AppKit Release Notes for macOS 14, under the section titled NSView, make the following statement:

"For applications linked against the macOS 14 SDK, the default value of this property is true. Apps linked against older SDKs default to false. Some classes, like NSClipView, continue to default to true."

Is this statement possibly backwards? From what I can tell, under macOS 14 NSView.clipsToBounds now defaults to false.

I came across this while trying to debug an issue where views that override drawRect with the intent of calling NSRectFill(self.bounds) with a solid color are, sometimes, briefly flickering because self.bounds is NSZeroRect, even though self.frame is not (nor is the dirtyRect).

This seems to be happening when views are added as subviews to a parent view. The subviews, which override drawRect, periodically "miss" a repaint and thus flicker. This seems to happen when views are frequently added or removed, as in a scrolling view that is "recycling" views as they go offscreen. Views that scroll into the viewport are added as subviews and, sometimes, briefly flicker.

Replacing calls to drawRect with wantsUpdateLayer and updateLayer eliminates the flickering, which makes me think something is going astray in drawRect and the various rects you can use.

This is with Xcode 15.4, linking against macOS 14.5 and running on macOS 14.6.1.
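For reference, this is roughly the layer-based replacement that eliminates the flicker for me; the solid blue fill is just an example color, not what the real views draw:

import AppKit

final class SolidColorView: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true   // layer-backed so updateLayer() is used
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        wantsLayer = true
    }

    // Tell AppKit to call updateLayer() instead of draw(_:).
    override var wantsUpdateLayer: Bool { true }

    override func updateLayer() {
        // No bounds-dependent drawing happens here, so the flicker goes away.
        layer?.backgroundColor = NSColor.systemBlue.cgColor
    }
}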
2 replies · 0 boosts · 279 views · Oct ’24
How do you allow an XPC service to create a new file based on an NSURL that the user selected from an NSSavePanel?
How do you send an NSURL representing a new file, as returned from an NSSavePanel, to an XPC service such that the service is granted permission to create the file?

I can successfully pass an NSURL to the XPC process if the NSURL represents an existing file. This is documented in Apple's documentation: "Share file access between processes with URL bookmarks". It involves creating bookmark data while passing 0 in as the options.

However, if you try to create bookmark data for an NSURL that represents a file that is not yet created, you do not get any bookmark data back and an error is returned instead:

Error Domain=NSCocoaErrorDomain Code=260 "The file couldn’t be opened because it doesn’t exist."

Simply passing the file path to the XPC process, by way of:

xpc_dictionary_set_string(message, "file_path", url.fileSystemRepresentation);

does not grant the XPC service create/write permissions.

Is there an API or trick I'm missing? Note that the user should be allowed to save and create new files anywhere of their choosing, so restricting URLs to only those within a group or container shared between the app and service isn't really viable.

Using the latest of everything on macOS with the xpc_session API...
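For context, this is roughly the pattern that works for an existing file; the function name and the "file_bookmark" key are placeholders of mine:

import Foundation
import XPC

// Sketch of the documented flow for an *existing* file: create minimal
// bookmark data (options: []) and ship the raw bytes across XPC.
func sendExistingFileURL(_ url: URL, in message: xpc_object_t) throws {
    let bookmark = try url.bookmarkData(options: [],
                                        includingResourceValuesForKeys: nil,
                                        relativeTo: nil)

    bookmark.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in
        // "file_bookmark" is a placeholder dictionary key.
        xpc_dictionary_set_data(message, "file_bookmark",
                                buffer.baseAddress!, buffer.count)
    }
}

As I understand the documented flow, the service side then resolves the received bytes back into a URL (URL(resolvingBookmarkData:...)) before touching the file, which is exactly the step that falls apart when the file doesn't exist yet.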
1 reply · 0 boosts · 582 views · Jun ’24
How do you encrypt an AppleArchive with a "human" password?
In WWDC21 session 10233, "Bring Encrypted Archives and Performance Improvements to Your App with Accelerate", there is an example of encrypting a directory using the AppleArchive framework. There is also accompanying sample code.

However, that sample code uses a SymmetricKey and the hkdf_sha256_aesctr_hmac__symmetric__none profile. The key is set by calling context.setSymmetricKey(encryptionKey).

How can you perform the same operation of encrypting a directory using AppleArchive but with a "human" password? (i.e. a password provided by the user from a prompt?)

Simply changing the profile to hkdf_sha256_aesctr_hmac__scrypt__none and then calling context.setPassword("MyPassword") produces the following output: "Error setting password (invalidValue)."

I also tried using the command line aea application, but received the output "Password is too short":

> aea encrypt -v -password-value "password" -profile 5 -i MyDirectory -o MyDirectory.aea
Operation: encrypt
input: FOO
output: FOO.aea
profile: hkdf_sha256_aesctr_hmac__scrypt__none
worker threads: 10
auth data (raw): 0 B
compression: lzfse 1 MB
Error 0xb9075800 Password is too short
Main key derivation failed (-2)
Main key derivation
Invalid encryption parameters

Finally, in the file AEAContext.h, there is a comment associated with the method AEAContextSetPassword() that states:

Set context password
Stores a copy of password in context.
Required to encrypt / decrypt a stream when encryption mode is SCRYPT.
An internal size range is enforced for the password. The caller is expected to enforce password strength policies.
@param context target object
@param password password (raw data)
@param password_size password size (bytes)
@return 0 on success, a negative error code on failure

I cannot find any other documentation that states what the password policy is. And if there is a password policy for Apple Encrypted Archives, does that mean AEA is not a good fit for encrypting personal directories where the user just wants to use "any old password", regardless of the password's strength?
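For clarity, this is essentially the scrypt-based attempt described above (a sketch only; the structure mirrors the WWDC sample and assumes setPassword throws on failure, which matches the invalidValue error I'm seeing):

import AppleArchive

// Same shape as the sample code, but a password instead of a SymmetricKey.
func makeEncryptionContext(password: String) throws -> ArchiveEncryptionContext {
    let context = ArchiveEncryptionContext(
        profile: .hkdf_sha256_aesctr_hmac__scrypt__none,
        compressionAlgorithm: .lzfse
    )
    // This is the call that currently fails with ArchiveError.invalidValue
    // for short passwords like "MyPassword".
    try context.setPassword(password)
    return context
}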
0 replies · 0 boosts · 901 views · Apr ’23
A VZMacGraphicsDisplayConfiguration with a large resolution causes macOS Ventura to incorrectly draw its menu bar items.
Filed as rdar://FB11975037

When macOS Ventura is run as a guest OS within the Virtualization framework, the main menu bar items will not be displayed correctly if VZMacGraphicsDisplayConfiguration defines a large resolution.

The menu bar titles appear to be using the same color as the menu bar itself. When the Appearance is set to Light, the menu bar items are effectively invisible. When the Appearance is set to Dark, the menu bar items are drawn in what looks like a disabled state.

This only affects the menu bar item titles on the left-hand side. The date-time and menu bar icons on the right side are always displayed in the correct color.

This appears to be a regression in macOS Ventura, as this issue is not present in macOS 12 running as a guest.

This bug can be easily reproduced using Apple's own Virtualization sample code titled "Running macOS in a Virtual Machine on Apple Silicon Macs".

Steps to reproduce:

1. Follow the sample code instructions for building and installing a VM.bundle.
2. Before running macOSVirtualMachineSampleApp, change the VZMacGraphicsDisplayConfiguration to use: width = 5120, height = 2880, ppi = 144.
3. Run macOSVirtualMachineSampleApp and notice that the menu bar titles on the left side of the screen are not correctly drawn in the guest instance.

This has been tested on:

Host: macOS 13.1
Guest: macOS 13.x (all versions)
Hardware: MBP 14" M1 Pro 32GB/2TB

Is there anything that can be done to resolve this issue?
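For reference, the display configuration change in step 2 looks roughly like this (the helper function name is mine; the values are the ones from the repro):

import Virtualization

// Display configuration used to reproduce the issue (5K at 144 PPI).
func makeGraphicsConfiguration() -> VZMacGraphicsDeviceConfiguration {
    let graphics = VZMacGraphicsDeviceConfiguration()
    graphics.displays = [
        VZMacGraphicsDisplayConfiguration(widthInPixels: 5120,
                                          heightInPixels: 2880,
                                          pixelsPerInch: 144)
    ]
    return graphics
}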
4 replies · 2 boosts · 1.4k views · Jan ’23
Correct way to hide sidebar after selection.
In a triple-column split view, it's common to hide the primary view when the user selects an item from a list. In some cases, the supplemental view is also hidden when an item is selected from that list, thus leaving only the detail view visible.

What is the correct way to hide the primary view on selection, and then to optionally hide the supplemental view on selection, using the new NavigationStack?

(Notes is a good example of the sidebars hiding after the user selects a folder in the sidebar.)
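To make the question concrete, here is a rough sketch of the kind of setup I mean; it drives a three-column NavigationSplitView's columnVisibility from the selection (Folder and Item are placeholder types, and I'm not at all sure this is the intended pattern, hence the question):

import SwiftUI

// Placeholder model types for illustration only.
struct Folder: Identifiable, Hashable { let id = UUID(); let name: String }
struct Item: Identifiable, Hashable { let id = UUID(); let title: String }

struct TripleColumnView: View {
    @State private var columnVisibility = NavigationSplitViewVisibility.all
    @State private var selectedFolder: Folder?
    @State private var selectedItem: Item?

    let folders: [Folder]
    let items: [Item]

    var body: some View {
        NavigationSplitView(columnVisibility: $columnVisibility) {
            List(folders, selection: $selectedFolder) { folder in
                Text(folder.name).tag(folder)
            }
        } content: {
            List(items, selection: $selectedItem) { item in
                Text(item.title).tag(item)
            }
        } detail: {
            Text(selectedItem?.title ?? "No selection")
        }
        // Is collapsing columns by hand like this the "correct" way?
        .onChange(of: selectedFolder) { _ in columnVisibility = .doubleColumn }
        .onChange(of: selectedItem) { _ in columnVisibility = .detailOnly }
    }
}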
1 reply · 2 boosts · 1.4k views · Jun ’22
Will the Virtualization Framework support iCloud accounts?
The new Virtualization framework (and sample code!) are great. It's a lot of fun to run the sample code and quickly fire up multiple VMs of macOS running as a guest.

However, the inability to authenticate with any iCloud services is a significant roadblock. Xcode, for example, is not allowing me to authenticate my developer account.

Are there any plans to resolve this issue so that iCloud accounts can be authenticated from within a VM?
9 replies · 4 boosts · 4.1k views · Jun ’22
How do you set the color of an NSImage symbol when used as the contents of a CALayer?
I'm trying to use an NSImage that represents an SF Symbol as the contents of a CALayer. NSImage has an API for this in the form of [NSImage layerContentsForContentsScale:]. On the NSImage documentation page, there are even a few paragraphs at the top dedicated to using this very method.

But how do you set the color you want the image to render as if the image is an SF Symbol? NSImageView has .contentTintColor, which works great, but CALayer has no such property.

final class SymbolLayer: CALayer {
    override func display() {
        // Just an example...
        let image = NSImage(systemSymbolName: "paperclip", accessibilityDescription: nil)!
        let actualScaleFactor = image.recommendedLayerContentsScale(contentsScale)

        // This obviously produces a black image because there's no color or tint information anywhere.
        contents = image.layerContents(forContentsScale: actualScaleFactor)
    }
}

Is there a way you can configure the CALayer or the NSImage itself to have some sort of color information when it generates the layer contents? I've attempted to play around with the SymbolConfiguration colors but without any success. (Even when wrapped inside NSAppearance.performAsCurrentDrawingAppearance.)

The best I can come up with is to use CALayer.draw(in:) and then use the old NSImage.cgImage(forProposedRect:...) API. I can then set the fill color on the CGContext and go from there. Is there a more efficient way?

override func draw(in context: CGContext) {
    let image = NSImage(systemSymbolName: "paperclip", accessibilityDescription: nil)!

    var rect = bounds
    image.size = bounds.size

    let cgImage = image.cgImage(
        forProposedRect: &rect,
        context: nil,
        hints: [.ctm: AffineTransform(scale: contentsScale)]
    )!

    NSApp.effectiveAppearance.performAsCurrentDrawingAppearance {
        // Draw using an appearance-sensitive color.
        context.clip(to: bounds, mask: cgImage)
        context.setFillColor(NSColor.labelColor.cgColor)
        context.fill(bounds)
    }
}

This is for macOS 12+.
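For completeness, this is roughly the SymbolConfiguration-based attempt that didn't work for me; paletteColors is just the variant shown here, and the function name is mine:

import AppKit

// Roughly what I tried: bake a palette color into the symbol image before
// asking it for layer contents. The result still renders black for me.
func makeTintedSymbolContents(scale: CGFloat) -> Any? {
    guard let base = NSImage(systemSymbolName: "paperclip",
                             accessibilityDescription: nil) else { return nil }

    let configuration = NSImage.SymbolConfiguration(paletteColors: [.labelColor])
    let tinted = base.withSymbolConfiguration(configuration) ?? base

    let actualScale = tinted.recommendedLayerContentsScale(scale)
    return tinted.layerContents(forContentsScale: actualScale)
}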
0 replies · 0 boosts · 1.4k views · Sep ’21
How to configure CoreService UCCollation to match NSString localizedStandardCompare?
NSString localizedStandardCompare can be used to sort strings similar to how the Finder sorts strings. However, for large sets of strings, the performance is quite poor.

Core Services offers collation support through the use of UCCreateCollator, which takes a mask of UCCollateOptions.

Is there a set of options that can be used such that a sorted set of strings produced using UCCompareCollationKeys produces the exact same ordering as localizedStandardCompare?

Bonus Round: When using UCCompareCollationKeys, the ordering of the two keys is returned as an SInt32. What is the correct way to convert that ordering into a boolean for use in algorithms like std::sort or Swift's sort, which require their implementations adhere to strict weak ordering?

SInt32 compareOrder = 0;
OSStatus statusResult = 0;

statusResult = UCCompareCollationKeys(
    lhsKeys.data(),
    lhsKeys.size(),
    rhsKeys.data(),
    rhsKeys.size(),
    NULL,
    &compareOrder);

// How should compareOrder be converted to a Bool that satisfies strict weak ordering?
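For the bonus question, my current understanding (an assumption on my part, not something I've found documented) is that a three-way result maps to the "areInIncreasingOrder" predicate by treating only a negative value as ordered-before, so equal keys report false:

// Sketch: turning a three-way comparison result (negative / zero / positive)
// into the predicate shape that Swift's sort(by:) and std::sort expect.
// Equal keys (order == 0) must return false for strict weak ordering.
func isOrderedBefore(threeWayResult order: Int32) -> Bool {
    order < 0
}

// Hypothetical usage, assuming threeWayCompare wraps the collation-key call:
// strings.sort { lhs, rhs in
//     isOrderedBefore(threeWayResult: threeWayCompare(lhs, rhs))
// }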
0 replies · 0 boosts · 879 views · Jun ’20
Revisiting the recommendations from WWDC 2017-706 regarding GCD queue hierarchies.
TL;DR: Why is a hierarchy of serial queues the recommended way for managing concurrency in a modern application?

Years later, the recommendations made in WWDC 2017-706 "Modernizing Grand Central Dispatch Usage" regarding the use of a hierarchy of serial queues to manage concurrency in an application remain unclear to me. Old posts on former Apple mailing lists, StackOverflow and Swift's Forums add to the confusion. Hopefully there's an opportunity for some clarity here. (I'm writing in the context of a macOS application developer.)

In the WWDC video, to improve concurrency performance, it's recommended that one should split up their application into sub-systems and back each sub-system by a serial queue. It's then recommended that those sub-systems should target a single, "root" queue that is also a serial queue.

The talk mentions that use of serial queues improved concurrency performance in many of Apple's own applications. But with laptops and desktops having so many cores, I'm struggling to reconcile how running everything through a single serial queue helps with concurrency. On the surface, it feels like you'd be seriously under-utilizing available cores.

For example, I have an application that has the following sub-systems:

Export Service - Used for exporting images or videos.
Rendering Service - Used for rendering thumbnail previews.
Caching Service - Used for caching random data.
Database Service - Used for reads and writes to the database.

With the exception of maybe the database service, I want each of the other services to run as many concurrent requests as is reasonable for the given machine. So each of those services is backed by a concurrent queue with an appropriate quality-of-service level. On a multi-core system, I should be able to render multiple thumbnails at once, so using a serial queue does not make any sense. Same goes for exporting files. An export of a small image should not have to wait for the export of a large video to finish in front of it. So a concurrent queue is used.

Along with using sub-systems, the WWDC talk recommends that all sub-systems target a single, root serial queue. This doesn't make too much sense to me either, because it implies there's no reason to use a concurrent queue anywhere in your tree of queues, since its concurrency is negated by the serial queue it targets, or at least that's how I understand it. So if I did back each service by a serial queue, then target a root serial queue, I'd be in the situation where a thumbnail request has to wait for an export request to complete, which is not what I would want at all. (The WWDC talk also makes heavy use of DispatchSources, but those are serial in execution as well.)

For the example sub-systems above, I actually use a hierarchy of concurrent queues that all target a root concurrent queue. Each sub-system runs at a different quality of service to help manage execution priority. In some cases, I manually throttle the number of concurrent requests in a given sub-system based on the available cores, as that seems to help a lot with performance. (For example, generating thumbnails of RAW files, where it's better for me to explicitly restrict that to a maximum limit rather than relying on GCD.)

As someone who builds a ton of concurrency into their apps, and as someone who felt that they had a reasonably good grasp on how to use GCD, I've never been able to understand why a hierarchy of serial queues is the recommended way for doing concurrency in a modern app.
Hopefully someone can shed a bit more light on that for me.
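To make the comparison concrete, here is a sketch of the two queue shapes I'm contrasting; the labels and QoS values are placeholders, and the serial version is only my reading of the talk's recommendation:

import Dispatch

// My reading of the talk: serial sub-system queues that all target one
// serial root queue.
let serialRoot = DispatchQueue(label: "app.root")
let exportQueueSerial = DispatchQueue(label: "app.export",
                                      qos: .utility,
                                      target: serialRoot)
let renderQueueSerial = DispatchQueue(label: "app.render",
                                      qos: .userInitiated,
                                      target: serialRoot)

// What I actually do today: concurrent sub-system queues targeting a
// concurrent root, with a QoS per sub-system.
let concurrentRoot = DispatchQueue(label: "app.root.concurrent",
                                   attributes: .concurrent)
let exportQueue = DispatchQueue(label: "app.export.concurrent",
                                qos: .utility,
                                attributes: .concurrent,
                                target: concurrentRoot)
let renderQueue = DispatchQueue(label: "app.render.concurrent",
                                qos: .userInitiated,
                                attributes: .concurrent,
                                target: concurrentRoot)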
7 replies · 0 boosts · 2.9k views · Apr ’20
Efficiently sending data from an XPC process to the host application.
Given an XPC process, what is the most efficient way to get data back to the host application?

I have an XPC process, primarily written in C++, and a host application written in Swift. The XPC process generates a bunch of data that it serializes into an std::vector<std::byte>. When the process is finished, I want to efficiently transfer that buffer of bytes back to the host application.

At the moment, I copy the std::vector's data() into an NSData, then encode that NSData into an object conforming to NSSecureCoding that is then sent back to the app. At a minimum this is creating two copies of the data (one in the vector and the other in the NSData), but I suspect that the XPC transporter might be creating another?

* When using NSData, can I use the bytesNoCopy version if I guarantee that the underlying vector is still alive when I initiate the XPC connection response? When that call returns, am I then free to deallocate the vector, even if the NSData is still in-flight back to the main app?
* In one of the WWDC videos, it is recommended to use DispatchData, as DispatchData might avoid making a copy when being transported across XPC. Does this apply when using NSXPCConnection, or only when using the lower-level C APIs?
* Is there a downside to using DispatchData that might increase the overhead?
* Finally, where does Swift's Data type fit into this? On the application side, I have Swift code that is reading the buffer as a stream of bytes, so I ideally want the buffer to be contiguous and in a format that doesn't require another copy in order for Swift to be able to read it.

(On average, the buffers tend to be small. Maybe only 1-2 megabytes, if that. But occasionally a buffer might balloon to 100-200 megabytes.)
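On the Swift side, this is roughly how I read the received buffer today; withUnsafeBytes is my attempt at avoiding yet another copy, and the checksum loop is just a placeholder for the real parsing:

import Foundation

// Sketch of the host-side read: walk the received bytes in place rather
// than copying them into another Array or Data first.
func consume(_ payload: Data) {
    payload.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in
        // `buffer` is contiguous for the duration of the closure, so a
        // parser can read it without another copy. Placeholder work only:
        var checksum: UInt8 = 0
        for byte in buffer {
            checksum ^= byte
        }
        print("read \(buffer.count) bytes, checksum \(checksum)")
    }
}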
6 replies · 0 boosts · 4.9k views · Dec ’19
Hosting a database in an XPC process.
In a macOS application I'm working on, I'm experimenting with the idea of moving all of my database code into an XPC process. The database in question is RocksDB, and much of the code related to that part of the application is written in C++ or Objective-C++.

However, in Apple's developer documentation and supporting WWDC videos, XPC processes are always presented as "stateless" processes that can be terminated at any time by macOS. Is that still the prevailing train of thought? Safari makes extensive use of XPC processes for each open tab, and those processes are never automatically terminated by macOS.

Outside of the architectural issues of moving this much code into an XPC process, I had the following questions:

* Is there a way to tell macOS that my XPC process is "long lived" and should not be terminated? Perhaps an Info.plist flag or something.
* If my main application crashes, does the XPC process shut down gracefully or is it abruptly terminated as well? It might not always have an open XPC connection back to the main application, otherwise I guess it could just install an interruption handler on the connection. But if such a connection isn't open, does the XPC process receive any notification (or time) to clean up? (I've struggled to figure out how to test this with Xcode.)
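For the case where a connection to the host is open, this is the kind of handler I have in mind on the service side (a sketch only; flushAndClose is a placeholder for the actual RocksDB teardown):

import Foundation

// Sketch: use the connection's handlers to trigger a graceful database
// shutdown when the host goes away while a connection is live.
func configure(_ connection: NSXPCConnection) {
    connection.interruptionHandler = {
        // Host crashed or was killed while the connection was open.
        flushAndClose()
    }
    connection.invalidationHandler = {
        // Connection is gone for good (host exited or invalidated it).
        flushAndClose()
    }
}

func flushAndClose() { /* placeholder for RocksDB cleanup */ }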
7 replies · 0 boosts · 2.3k views · Dec ’19