Posts

Post not yet marked as solved
4 Replies
1k Views
Filed as rdar://FB11975037

When macOS Ventura is run as a guest OS within the Virtualization framework, the main menu bar items are not displayed correctly if VZMacGraphicsDisplayConfiguration defines a large resolution. The menu bar titles appear to be using the same color as the menu bar itself. When the Appearance is set to Light, the menu bar items are effectively invisible. When the Appearance is set to Dark, the menu bar items are drawn in what looks like a disabled state.

This only affects the menu bar item titles on the left-hand side. The date-time and menu bar icons on the right side are always displayed in the correct color.

This appears to be a regression in macOS Ventura, as this issue is not present in macOS 12 running as a guest. This bug can be easily reproduced using Apple's own Virtualization sample code titled "Running macOS in a Virtual Machine on Apple Silicon Macs".

Steps to reproduce:
1. Follow the sample code instructions for building and installing a VM.bundle.
2. Before running 'macOSVirtualMachineSampleApp', change the VZMacGraphicsDisplayConfiguration to use: width = 5120, height = 2880, ppi = 144 (a configuration sketch follows below).
3. Run 'macOSVirtualMachineSampleApp' and notice that the menu bar titles on the left side of the screen are not correctly drawn in the guest instance.

This has been tested on:
Host: macOS 13.1
Guest: macOS 13.x (all versions)
Hardware: MBP 14" M1 Pro 32GB/2TB

Is there anything that can be done to resolve this issue?
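For reference, a minimal sketch of the display configuration change from step 2, using the standard VZMacGraphicsDisplayConfiguration initializer:

import Virtualization

// 5K display at 144 ppi, as used in the reproduction steps above.
let display = VZMacGraphicsDisplayConfiguration(
    widthInPixels: 5120,
    heightInPixels: 2880,
    pixelsPerInch: 144)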
Posted by kennyc.
Post not yet marked as solved
9 Replies
3.2k Views
The new Virtualization framework (and sample code!) is great. It's a lot of fun to run the sample code and quickly fire up multiple VMs of macOS running as a guest. However, the inability to authenticate with any iCloud services is a significant roadblock. Xcode, for example, will not let me authenticate my developer account. Are there any plans to resolve this issue so that iCloud accounts can be authenticated from within a VM?
Posted by kennyc.
Post not yet marked as solved
0 Replies
628 Views
In WWDC21 session 10233, "Bring Encrypted Archives and Performance Improvements to Your App with Accelerate", there is an example of encrypting a directory using the AppleArchive framework, along with accompanying sample code. However, that sample code uses a SymmetricKey and the hkdf_sha256_aesctr_hmac__symmetric__none profile. The key is set by calling context.setSymmetricKey(encryptionKey).

How can you perform the same operation of encrypting a directory using AppleArchive but with a "human" password? (i.e., a password provided by the user from a prompt.)

Simply changing the profile to hkdf_sha256_aesctr_hmac__scrypt__none and then calling context.setPassword("MyPassword") produces the following output: "Error setting password (invalidValue)."

I also tried using the command line aea tool, but received the output "Password is too short":

> aea encrypt -v -password-value "password" -profile 5 -i MyDirectory -o MyDirectory.aea
Operation: encrypt
input: FOO
output: FOO.aea
profile: hkdf_sha256_aesctr_hmac__scrypt__none
worker threads: 10
auth data (raw): 0 B
compression: lzfse 1 MB
Error 0xb9075800 Password is too short
Main key derivation failed (-2)
Main key derivation
Invalid encryption parameters

Finally, in the file AEAContext.h, there is a comment associated with the function AEAContextSetPassword() that states:

Set context password
Stores a copy of password in context. Required to encrypt / decrypt a stream when encryption mode is SCRYPT. An internal size range is enforced for the password. The caller is expected to enforce password strength policies.
@param context target object
@param password password (raw data)
@param password_size password size (bytes)
@return 0 on success, a negative error code on failure

I cannot find any other documentation that states what the password policy is. And if there is a password policy for Apple Encrypted Archives, does that mean AEA is not a good fit for encrypting personal directories where the user just wants to use "any old password", regardless of the password's strength?
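For reference, a minimal sketch of the password-based attempt described above, assuming the Swift overlay's ArchiveEncryptionContext(profile:compressionAlgorithm:) initializer and the setPassword(_:) call mentioned in the post; the password value is purely illustrative:

import AppleArchive

// Build an encryption context with the scrypt-based profile and attach a
// user-supplied password. With a short password such as "MyPassword", this is
// the call that fails with invalidValue, per the description above.
let context = ArchiveEncryptionContext(
    profile: .hkdf_sha256_aesctr_hmac__scrypt__none,
    compressionAlgorithm: .lzfse)

do {
    try context.setPassword("MyPassword") // presumably shorter than the undocumented minimum length
} catch {
    print("Error setting password (\(error))")
}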
Posted by kennyc.
Post not yet marked as solved
1 Reply
1.1k Views
In a triple-column split view, it's common to hide the primary view when the user selects an item from a list. In some cases, the supplemental view is also hidden when an item is selected from that list, thus leaving only the detail view visible. What is the correct way to hide the primary view on selection and then to optionally hide the supplemental view on selection using the new NavigationStack? (Notes is a good example of the sidebars hiding after the user selects a folder in the sidebar.)
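Not an authoritative answer, but here is a minimal sketch of one way this is commonly approached, assuming the columns are driven by NavigationSplitView's columnVisibility binding (the view and data names here are illustrative):

import SwiftUI

struct TripleColumnView: View {
    @State private var columnVisibility: NavigationSplitViewVisibility = .all
    @State private var selectedFolder: String?
    @State private var selectedItem: String?

    var body: some View {
        NavigationSplitView(columnVisibility: $columnVisibility) {
            List(["Folder A", "Folder B"], id: \.self, selection: $selectedFolder) { Text($0) }
        } content: {
            List(["Item 1", "Item 2"], id: \.self, selection: $selectedItem) { Text($0) }
        } detail: {
            Text(selectedItem ?? "Select an item")
        }
        .onChange(of: selectedFolder) { _ in
            // Hide the primary (sidebar) column once a folder is chosen.
            columnVisibility = .doubleColumn
        }
        .onChange(of: selectedItem) { _ in
            // Optionally collapse down to just the detail column.
            columnVisibility = .detailOnly
        }
    }
}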
Posted by kennyc.
Post not yet marked as solved
0 Replies
1.1k Views
I'm trying to use an NSImage that represents an SF Symbol as the contents of a CALayer. NSImage has an API for this in the form of [NSImage layerContentsForContentsScale:]. On the NSImage documentation page, there are even a few paragraphs at the top dedicated to using this very method. But how do you set the color you want the image to render as if the image is an SF Symbol? NSImageView has .contentTintColor, which works great, but CALayer has no such property.

final class SymbolLayer: CALayer {
    override func display() {
        // Just an example...
        let image = NSImage(systemSymbolName: "paperclip", accessibilityDescription: nil)!
        let actualScaleFactor = image.recommendedLayerContentsScale(contentsScale)
        // This obviously produces a black image because there's no color or tint information anywhere.
        contents = image.layerContents(forContentsScale: actualScaleFactor)
    }
}

Is there a way you can configure the CALayer or the NSImage itself to have some sort of color information when it generates the layer contents? I've attempted to play around with the SymbolConfiguration colors, but without any success. (Even when wrapped inside NSAppearance.performAsCurrentDrawingAppearance.)

The best I can come up with is to use CALayer.draw(in:) and then use the old NSImage.cgImage(forProposedRect:...) API. I can then set the fill color on the CGContext and go from there. Is there a more efficient way?

override func draw(in context: CGContext) {
    let image = NSImage(systemSymbolName: "paperclip", accessibilityDescription: nil)!
    var rect = bounds
    image.size = bounds.size
    let cgImage = image.cgImage(
        forProposedRect: &rect,
        context: nil,
        hints: [.ctm: AffineTransform(scale: contentsScale)]
    )!
    NSApp.effectiveAppearance.performAsCurrentDrawingAppearance {
        // Draw using an appearance-sensitive color.
        context.clip(to: bounds, mask: cgImage)
        context.setFillColor(NSColor.labelColor.cgColor)
        context.fill(bounds)
    }
}

This is for macOS 12+.
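One hypothetical workaround, not from the post, and whether it tracks appearance changes the way the question requires is an assumption: pre-tint the symbol into a new NSImage with a drawing handler and hand that to layerContents(forContentsScale:):

import AppKit

// Produce layer contents for a tinted copy of an SF Symbol. The function name and
// parameters are illustrative.
func tintedSymbolLayerContents(symbolName: String, color: NSColor, layer: CALayer) -> Any? {
    guard let symbol = NSImage(systemSymbolName: symbolName, accessibilityDescription: nil) else {
        return nil
    }
    let tinted = NSImage(size: symbol.size, flipped: false) { rect in
        // Draw the symbol, then composite the tint color over its opaque pixels.
        symbol.draw(in: rect)
        color.set()
        rect.fill(using: .sourceAtop)
        return true
    }
    let scale = tinted.recommendedLayerContentsScale(layer.contentsScale)
    return tinted.layerContents(forContentsScale: scale)
}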
Posted by kennyc.
Post not yet marked as solved
6 Replies
4.3k Views
Given an XPC process, what is the most efficient way to get data back to the host application?

I have an XPC process, primarily written in C++, and a host application written in Swift. It generates a bunch of data that it serializes into a std::vector<std::byte>. When the process is finished, I want to efficiently transfer that buffer of bytes back to the host application.

At the moment, I copy the std::vector's data() into an NSData, then encode that NSData into an object conforming to NSSecureCoding that is then sent back to the app. At a minimum this creates two copies of the data (one in the vector and the other in the NSData), but I suspect that the XPC transport might be creating another.

* When using NSData, can I use the bytesNoCopy version if I guarantee that the underlying vector is still alive when I initiate the XPC connection response? When that call returns, am I then free to deallocate the vector, even if the NSData is still in flight back to the main app?
* In one of the WWDC videos, it is recommended to use DispatchData, as DispatchData might avoid making a copy when being transported across XPC. Does this apply when using NSXPCConnection, or only when using the lower-level C APIs?
* Is there a downside to using DispatchData that might increase the overhead?
* Finally, where does Swift's Data type fit into this? On the application side, I have Swift code that is reading the buffer as a stream of bytes, so I ideally want the buffer to be contiguous and in a format that doesn't require another copy in order for Swift to be able to read it. (A small read-side sketch follows below.)

(On average, the buffers tend to be small, maybe only 1-2 megabytes, if that. But occasionally a buffer might balloon to 100-200 megabytes.)
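On the last question, here is a minimal Swift sketch of reading a received payload in place via withUnsafeBytes, assuming the reply ultimately arrives as Data; the 32-bit framing is purely illustrative:

import Foundation

// Hypothetical reply handler on the application side.
func handleReply(_ payload: Data) {
    payload.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        // Walk the buffer as a stream of 32-bit values without making another copy.
        var offset = 0
        while offset + MemoryLayout<UInt32>.size <= raw.count {
            let value = raw.loadUnaligned(fromByteOffset: offset, as: UInt32.self)
            offset += MemoryLayout<UInt32>.size
            _ = value // consume the value here
        }
    }
}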
Posted by kennyc.
Post not yet marked as solved
7 Replies
2.5k Views
TL;DR: Why is a hierarchy of serial queues the recommended way to manage concurrency in a modern application?

Years later, the recommendations made in WWDC 2017 session 709, "Modernizing Grand Central Dispatch Usage", regarding the use of a hierarchy of serial queues to manage concurrency in an application remain unclear to me. Old posts on former Apple mailing lists, Stack Overflow, and the Swift Forums add to the confusion. Hopefully there's an opportunity for some clarity here. (I'm writing in the context of a macOS application developer.)

In the WWDC video, to improve concurrency performance, it's recommended that one should split an application into sub-systems and back each sub-system with a serial queue. It's then recommended that those sub-systems target a single "root" queue that is also a serial queue. The talk mentions that the use of serial queues improved concurrency performance in many of Apple's own applications. But with laptops and desktops having so many cores, I'm struggling to reconcile how running everything through a single serial queue helps with concurrency. On the surface, it feels like you'd be seriously under-utilizing the available cores.

For example, I have an application that has the following sub-systems:

* Export Service - used for exporting images or videos.
* Rendering Service - used for rendering thumbnail previews.
* Caching Service - used for caching random data.
* Database Service - used for reads and writes to the database.

With the exception of maybe the database service, I want each of the other services to run as many concurrent requests as is reasonable for the given machine. So each of those services is backed by a concurrent queue with an appropriate quality-of-service level. On a multi-core system, I should be able to render multiple thumbnails at once, so using a serial queue does not make any sense. The same goes for exporting files: an export of a small image should not have to wait for the export of a large video to finish in front of it. So a concurrent queue is used.

Along with using sub-systems, the WWDC talk recommends that all sub-systems target a single, root serial queue. This doesn't make much sense to me either, because it implies that there's no reason to use a concurrent queue anywhere in your tree of queues, since its concurrency is negated by the serial queue it targets, or at least that's how I understand it. So if I did back each service with a serial queue and then target a root serial queue, I'd be in the situation where a thumbnail request has to wait for an export request to complete, which is not what I would want at all. (The WWDC talk also makes heavy use of DispatchSources, but those are serial in execution as well.)

For the example sub-systems above, I actually use a hierarchy of concurrent queues that all target a root concurrent queue (a sketch of this setup follows below). Each sub-system runs at a different quality of service to help manage execution priority. In some cases, I manually throttle the number of concurrent requests in a given sub-system based on the available cores, as that seems to help a lot with performance. (For example, when generating thumbnails of RAW files, it's better for me to explicitly restrict that work to a maximum limit rather than relying on GCD.)

As someone who builds a ton of concurrency into their apps, and who felt they had a reasonably good grasp on how to use GCD, I've never been able to understand why a hierarchy of serial queues is the recommended way to do concurrency in a modern app. Hopefully someone can shed a bit more light on that for me.
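For concreteness, a sketch of the concurrent-queue hierarchy described above; the labels, QoS assignments, and throttling value are illustrative, not a recommendation:

import Dispatch
import Foundation

// A root concurrent queue that every sub-system targets.
let rootQueue = DispatchQueue(label: "app.root", attributes: .concurrent)

// Each sub-system gets its own queue, at its own quality of service, targeting the root.
let exportQueue = DispatchQueue(label: "app.export", qos: .utility,
                                attributes: .concurrent, target: rootQueue)
let renderQueue = DispatchQueue(label: "app.render", qos: .userInitiated,
                                attributes: .concurrent, target: rootQueue)
let cacheQueue = DispatchQueue(label: "app.cache", qos: .utility,
                               attributes: .concurrent, target: rootQueue)
let databaseQueue = DispatchQueue(label: "app.database", qos: .userInitiated,
                                  target: rootQueue) // serial by default

// Manual throttling for expensive work (e.g. RAW thumbnail generation), capped to the
// number of active cores rather than letting GCD fan the work out unbounded.
let rawThumbnailSlots = DispatchSemaphore(value: ProcessInfo.processInfo.activeProcessorCount)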
Posted by kennyc.
Post not yet marked as solved
0 Replies
726 Views
NSString's localizedStandardCompare can be used to sort strings similar to how the Finder sorts strings. However, for large sets of strings, the performance is quite poor. Core Services offers collation support through UCCreateCollator, which takes a mask of UCCollateOptions. Is there a set of options that can be used such that a sorted set of strings produced using UCCompareCollationKeys has the exact same ordering as localizedStandardCompare?

Bonus round: when using UCCompareCollationKeys, the ordering of the two keys is returned as an SInt32. What is the correct way to convert that ordering into a boolean for use in algorithms like std::sort or Swift's sort, which require that their comparators adhere to strict weak ordering?

SInt32 compareOrder = 0;
OSStatus statusResult = 0;

statusResult = UCCompareCollationKeys(
    lhsKeys.data(),
    lhsKeys.size(),
    rhsKeys.data(),
    rhsKeys.size(),
    NULL,
    &compareOrder);

// How should compareOrder be converted to a Bool that satisfies strict weak ordering?
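On the bonus question, the usual convention for a strict-weak-ordering comparator (stated here as an assumption about the general idiom, not as documented Core Services behavior) is to return true only when the first key orders strictly before the second:

// compareOrder < 0  => lhs orders before rhs (return true)
// compareOrder == 0 => keys are equivalent   (must return false)
// compareOrder > 0  => lhs orders after rhs  (return false)
func precedes(_ compareOrder: Int32) -> Bool {
    return compareOrder < 0
}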
Posted by kennyc.
Post not yet marked as solved
7 Replies
1.9k Views
In a macOS application I'm working on, I'm experimenting with the idea of moving all of my database code into an XPC process. The database in question is RocksDB, and much of the code related to that part of the application is written in C++ or Objective-C++.

However, in Apple's developer documentation and supporting WWDC videos, XPC processes are always presented as "stateless" processes that can be terminated at any time by macOS. Is that still the prevailing train of thought? Safari makes extensive use of XPC processes for each open tab, and those processes are never automatically terminated by macOS.

Outside of the architectural issues of moving this much code into an XPC process, I have the following questions:

* Is there a way to tell macOS that my XPC process is "long lived" and should not be terminated? Perhaps an Info.plist flag or something.
* If my main application crashes, does the XPC process shut down gracefully, or is it abruptly terminated as well? It might not always have an open XPC connection back to the main application; otherwise I guess it could just install an interruption handler on the connection. But if such a connection isn't open, does the XPC process receive any notification (or time) to clean up? (I've struggled to figure out how to test this with Xcode.)
Posted by kennyc.
Post not yet marked as solved
0 Replies
870 Views
I'm trying to understand what the reference coordinate system is when making a VNCoreMLRequest that has a regionOfInterest set. The results I'm getting do not appear to be consistent with other parts of Vision, nor do they appear to be consistently valid bounding boxes in general.

For example:

var humanRequest = VNDetectHumanRectanglesRequest()
humanRequest.regionOfInterest = CGRect(x: 0.5, y: 0, width: 0.5, height: 1)

This request instructs Vision to only consider the right half of the image when looking for humans. However, the bounding boxes of any observations I get are defined within the identity region of interest (0, 0, 1, 1). In other words, an observation whose boundingBox has an x-value of 0.5 starts in the middle of the image, not at the 3/4 mark, which would be the case if the boundingBox were defined within the coordinate system of the regionOfInterest. In short, most Vision observations do not appear to return the bounding box within the region of interest's coordinate system.

However, when I make a VNCoreMLRequest with a custom-trained detection model (trained using Create ML), the coordinate system does appear to be based on the region of interest, sort of. I get relatively stable results if I use VNNormalizedIdentityRect as the regionOfInterest, but when I specify a custom region, the boundingBox values don't really line up with much. Attempting to denormalize them back to the identity rect sort of works, but not really. And when doing so, sometimes the width and height values would be invalid for Vision. For example:

CGRect(x: 0.75, y: 0, width: 0.95, height: 1)

Such a bounding box implies that the x-value starts at 0.75, but then the width should be 1.0 - 0.75 at most; otherwise the box is defined to extend well off the screen.

Given that, how do I interpret the boundingBox of a VNRecognizedObjectObservation generated by a VNCoreMLRequest whose regionOfInterest has been set to something other than the identity rect?
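For reference, the denormalization attempt mentioned above usually amounts to something like the following sketch; that the boundingBox is expressed relative to the regionOfInterest is the assumption being tested here, not behavior confirmed by the Vision documentation:

import CoreGraphics

// Map a bounding box that is presumed to be relative to the region of interest back
// into full-image normalized coordinates.
func denormalize(_ box: CGRect, from regionOfInterest: CGRect) -> CGRect {
    CGRect(x: regionOfInterest.origin.x + box.origin.x * regionOfInterest.width,
           y: regionOfInterest.origin.y + box.origin.y * regionOfInterest.height,
           width: box.width * regionOfInterest.width,
           height: box.height * regionOfInterest.height)
}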
Posted by kennyc.
Post not yet marked as solved
1 Reply
799 Views
Updated to 10.15 beta 5 (19A526h) when it came out. Ever since, the pkd and *** processes have been using 100% CPU. Rebooting made no difference.

The pkd process appears to be preventing some apps from showing an Open or Save panel. When attempting to show an Open panel, the application in question just gets the spinning beach ball. If I use Activity Monitor to kill pkd, then the application is immediately able to show the dialog.

Neither of these processes presented a problem in earlier betas. As it stands, beta 5 is almost unusable for me, whereas I had been using beta 4 and beta 3 for active, daily development.
Posted by kennyc.