I'd like to support multiple windows in my macOS app, which provides previews of cameras in the system, using the SwiftUI app life cycle, on macOS 13.5.2 and later.
I can make multiple windows without any problem, using the default behavior of WindowGroup and the File/New menu item.
WindowGroup(id: "main-viewer", for: String.self) { $cameraUniqueID in
    ContentView(cameraUniqueID: cameraUniqueID)
}
I can open a window on a specific camera using the openWindow environment action:
openWindow(id: "main-viewer", value: someSpecificCameraID)
What I would like to be able to do is change the 'value' of my window at run time. When a user chooses "New Window", they get a window with a view of the first (or default) camera in it. They can then choose another camera to show in that window. I would like to be able to persist the chosen camera and the position and size of that window (originally opened with File/New Window).
Windows opened with New Window are always opened with a nil value.
Windows opened with .openWindow have their size and content saved, but I don't want to add UI to open specific windows. I want to open a generic window, then specify what camera it is looking at, move and resize it, and I'd like to save that window state.
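To make the goal concrete, here's the direction I've considered, sketched under assumptions: @SceneStorage persists small per-window values across window restoration, so the chosen camera ID might live there. The storage key and the ContentView body below are invented for illustration, not taken from my app.

```swift
import SwiftUI

@main
struct CameraViewerApp: App {
    var body: some Scene {
        WindowGroup(id: "main-viewer", for: String.self) { $cameraUniqueID in
            ContentView(cameraUniqueID: cameraUniqueID)
        }
    }
}

struct ContentView: View {
    var cameraUniqueID: String?

    // Persisted per-window, and restored when the window is reopened.
    // "chosenCameraID" is a key I made up for this sketch.
    @SceneStorage("chosenCameraID") private var chosenCameraID: String?

    var body: some View {
        // Fall back to the value the window was opened with, then a default.
        Text("Camera: \(chosenCameraID ?? cameraUniqueID ?? "default")")
    }
}
```

The open question is whether a window opened via File/New Window (with a nil value) restores its @SceneStorage at all.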
Is this possible, or am I holding SwiftUI wrong?
How does one get the list of controls which a CMIOObject has to offer?
How do the objects in the CMIO hierarchy map to CMIOExtension objects?
I expected the hierarchy to be something like this:
the system has owned objects of type:
'aplg' (`kCMIOPlugInClassID`), which has owned objects of type
'adev' (`kCMIODeviceClassID`), which may have owned objects of type
'actl' (`kCMIOControlClassID`) and has at least one owned object of type
'astr' (`kCMIOStreamClassID`), each of which may have owned objects of type
'actl' (`kCMIOControlClassID`)
Instead, when I recursively traverse the object hierarchy, I find the devices and the plug-ins at the same level (under the system object). Only some of the devices in my system have owned streams, although they all have a kCMIODevicePropertyStreams ('stm#') property.
None of the devices or streams appear to have any controls, and none of the streams have any owned objects. I'm not using the qualifier when searching for owned objects, because the documentation implies that it may be nil if I'm not interested in narrowing my search.
Should I expect to find any devices or streams with controls? And if so, how do I get a list of them? CMIOHardwareObject.h says that "Wildcards... are especially useful... for querying an CMIOObject's list of CMIOControls", but there's no example of how to do this.
My own device (from my camera extension) has no owned objects of type stream. I don't see any API call to convey ownership of the stream I create by the device it belongs to. How does the OS decide that a stream is 'owned' by a device?
I've tried various scopes and elements - kCMIOObjectPropertyScopeGlobal, kCMIOObjectPropertyScopeWildcard, kCMIOControlPropertyScope, and kCMIOObjectPropertyElementMain, kCMIOObjectPropertyElementWildcard and kCMIOControlPropertyElement. I can't get a list of controls using any of these.
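For reference, here's the shape of the owned-objects query I'm making, with no qualifier. This is a sketch based on my reading of CMIOHardwareObject.h, not verified against a device that actually reports controls.

```swift
import CoreMediaIO

func ownedObjects(of objectID: CMIOObjectID) -> [CMIOObjectID] {
    var address = CMIOObjectPropertyAddress(
        mSelector: CMIOObjectPropertySelector(kCMIOObjectPropertyOwnedObjects),
        mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
        mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMain))

    // First ask how many bytes of object IDs this object owns.
    var dataSize: UInt32 = 0
    guard CMIOObjectGetPropertyDataSize(objectID, &address, 0, nil,
                                        &dataSize) == 0,  // kCMIOHardwareNoError
          dataSize > 0 else { return [] }

    // Then fetch the IDs themselves.
    let count = Int(dataSize) / MemoryLayout<CMIOObjectID>.size
    var objectIDs = [CMIOObjectID](repeating: 0, count: count)
    var dataUsed: UInt32 = 0
    guard CMIOObjectGetPropertyData(objectID, &address, 0, nil,
                                    dataSize, &dataUsed,
                                    &objectIDs) == 0 else { return [] }
    return objectIDs
}
```

Starting from kCMIOObjectSystemObject and recursing with this function is how I'm walking the hierarchy.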
Ultimately, I'm trying to find my provider, my devices and my streams using the CMIO interface, so that I can set and query properties on them. Is it reasonable to assume that the CMIOObject of type 'aplg' is the one corresponding to a CMIOExtensionProviderSource?
This is on Ventura 13.4.1 on M1.
I am developing a CMIO Camera Extension on macOS Ventura.
Initially, I based this on the template camera extension (which creates its own frames). Later, I added a sink stream so that I could send the extension video from an app. That all works.
Recently, I added the ability for the extension itself to initiate a capture session, so that it can augment the video from any available AVCaptureDevice without running its controlling app. That works, but I have to add the Camera capability to the extension's sandbox configuration, and add a camera usage string.
This caused the OS to put up the user permission dialog, asking for permission to use the camera. However, the dialog uses the extension's bundle ID for its name, which is long and not user friendly. Furthermore, the extension isn't visible to the user (it is packaged inside the app which installs and controls it), so even a user-friendly name doesn't make that much sense to the end user.
I tried adding a CFBundleDisplayName to the extension's plist, but the OS didn't use it in the permissions dialog.
Is there a way to get the OS to present a more user-friendly name?
Should I expect to see a permissions dialog pertaining to the extension at all?
Where does the OS get the name from?
After the changes (Camera access, adding a camera usage string), I noticed that the extension's icon (the generic extension icon) showed up in the dock, with its name equal to its bundle ID.
Also, in Activity Monitor, the extension's process is displayed, using its CFBundleDisplayName (good). But about 30s after activation, the name is displayed in red, with " (not responding)" appended, although it is still working.
The extension does respond to the requests I send it over the CMIO interface, and it continues to process video, but it isn't handling user events, while the OS thinks that it should, probably because of one or more of the changes to the plist that I have had to make.
To get the icon out of the dock, I added LSUIElement=true to its plist. To get rid of the red "not responding", I changed the code in its main.swift from the template. It used to simply call CFRunLoopRun(). I commented out that call and instead make this call
_ = NSApplicationMain(CommandLine.argc, CommandLine.unsafeArgv)
That appears to work, but has the unfortunate side effect of increasing the CPU usage of the extension when it is idle from 0.3% to 1.0%.
I do want the extension to be able to process Intents, so there is a price to be paid for that. But it doesn't need to do so until it is actively dealing with video.
Is there a way to reduce the CPU usage of a background app, perhaps dynamically, making a tradeoff between CPU usage and response latency?
Is it to be expected that a CMIOExtension shows up in the Dock, ever?
My goal is to implement a moving background in a virtual camera, implemented as a Camera Extension, on macOS 13 and later. The moving background is available to the extension as a H.264 file in its bundle.
I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, attach the item to an AVQueuePlayer, then attach an AVPlayerLooper to the queue player.
I make an AVPlayerItemVideoOutput, add it to each of the looper's items, and set a delegate on the video output.
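The setup described above looks roughly like this (a sketch; "background.mp4" is a stand-in for the movie in the extension's bundle, and the pixel format is my choice):

```swift
import AVFoundation

let url = Bundle.main.url(forResource: "background", withExtension: "mp4")!
let asset = AVAsset(url: url)
let item = AVPlayerItem(asset: asset)
let player = AVQueuePlayer(playerItem: item)
let looper = AVPlayerLooper(player: player, templateItem: item)

let attrs = [kCVPixelBufferPixelFormatTypeKey as String:
             kCVPixelFormatType_32BGRA]
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: attrs)
for loopItem in looper.loopingPlayerItems {
    loopItem.add(videoOutput)
}
player.play()

// Later, in the render loop:
// let t = videoOutput.itemTime(forHostTime: CACurrentMediaTime())
// if videoOutput.hasNewPixelBuffer(forItemTime: t) {
//     let pb = videoOutput.copyPixelBuffer(forItemTime: t,
//                                          itemTimeForDisplay: nil)
// }
```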
This works in a normal app, which I use as a convenient environment to debug my extension code. In my camera video rendering loop, I check self.videoOutput.hasNewPixelBuffer(forItemTime:), which returns true at regular intervals, and I can fetch video frames with the video output's copyPixelBuffer(forItemTime:itemTimeForDisplay:) and composite those frames with the camera frames.
However, it doesn't work in an extension - hasNewPixelBuffer is never true. The looping player returns 'failed', with an error which simply says "the operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using an AVPlayer instead of an AVQueuePlayer, so the movie would only play once through. But still, I never get any frames in the extension.
Could this be a sandbox thing, because an AVPlayer usually renders to a user interface, and camera extensions don't have UIs?
My fallback solution is to use an AVAssetImageGenerator, which I attempt to drive by firing off a Task for each frame; each time I want to render one, I ask for another frame to keep the pipeline full. Unfortunately the Tasks don't finish in the same order they are started, so I have to build frame-reordering logic into the frame buffer (something which a player would handle for me). I'm also not sure whether the AVAssetImageGenerator takes advantage of any hardware acceleration, and it seems inefficient because each Task is for one frame only and cannot maintain any state from previous frames.
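One batched alternative I've considered: AVAssetImageGenerator can be driven with a single generateCGImagesAsynchronously(forTimes:) request, whose completion handler reports both the requested and actual times, which might remove my hand-rolled reordering. A sketch, with placeholder asset name and frame times:

```swift
import AVFoundation

let asset = AVAsset(url: Bundle.main.url(forResource: "background",
                                         withExtension: "mp4")!)
let generator = AVAssetImageGenerator(asset: asset)
// Ask for exact frames rather than nearby keyframes.
generator.requestedTimeToleranceBefore = .zero
generator.requestedTimeToleranceAfter = .zero

// Placeholder: ten seconds of 30fps timestamps.
let times = stride(from: 0.0, to: 10.0, by: 1.0 / 30.0).map {
    NSValue(time: CMTime(seconds: $0, preferredTimescale: 600))
}
generator.generateCGImagesAsynchronously(forTimes: times) {
    requestedTime, image, actualTime, result, error in
    if result == .succeeded, let image = image {
        // Enqueue `image` for compositing at `actualTime`.
        _ = image
    }
}
```

Whether the handler is called strictly in request order for every asset is something I'd want to verify before relying on it.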
Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?
When I'm not yet logged in to the forums, some text blocks look like this:
once I'm logged in, the same text block looks like this:
This is on Ventura 13.2.1 with Safari Version 16.3 (18614.4.6.1.6)
Does anyone else experience this or is just me?
I would like to use a DisclosureGroup in a VStack on macOS, but I'd like it to look like a DisclosureGroup in a List. (I need to do this to work around a crash when I embed a particular control in a List).
I'll append some code below, and a screenshot.
You can see that the List background is white, not grey. The horizontal alignment of the disclosure control itself also differs. In a List, the control hangs to the left of the disclosure group's content, so the content is all aligned on its leading edge. In a VStack with .leading horizontal alignment, the DisclosureGroup is placed so that its leading edge (the leading edge of the disclosure control) is aligned with the leading edge of the other elements in the VStack. The List takes account of the geometry of the disclosure arrow, while the VStack does not.
The vertical alignment of the disclosure triangle is also different - in a VStack, the control is placed too high.
And finally, in a VStack, the disclosure triangle lacks contrast (its RGB value is about 180, while the triangle in the List has an RGB value of 128).
Does anyone know how to emulate the appearance of a DisclosureGroup in a List when that DisclosureGroup is embedded in a VStack?
here's my ContentView.swift
struct ContentView: View {
    var body: some View {
        HStack {
            List {
                Text("List")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
            }
            VStack(alignment: .leading) {
                Text("VStack")
                DisclosureGroup(content: {
                    Text("content")
                }, label: {
                    Text("some text")
                })
                Spacer()
            }
            .padding()
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
Does anyone know why this crashes, or could anyone tell me how to restructure this code so it doesn't crash?
(this is FB11917078)
I have a view which displays two nested rectangles of a given aspect ratio (here 1:1). The inner rectangle is a fixed fraction of the outer rectangle's size.
When embedded in a List, if I rapidly resize the window, the app crashes.
If the View is not in a List, there's no crash (and the requested aspect ratio is not respected, which I don't yet know how to fix).
Here's the code for the ContentView.swift file. Everything else is a standard macOS SwiftUI application template code from Xcode 14.2.
import SwiftUI

struct ContentView: View {
    @State var zoomFactor = 1.2

    var body: some View {
        // Rapid resizing of the window causes a crash;
        // if the TwoRectanglesView is not embedded in a
        // List, there is no crash.
        List {
            ZStack {
                Rectangle()
                TwoRectanglesView(zoomFactor: $zoomFactor)
            }
        }
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}

struct TwoRectanglesView: View {
    @State private var fullViewWidth: CGFloat?
    @Binding var zoomFactor: Double
    private let aspectRatio = 1.0

    var body: some View {
        ZStack {
            Rectangle()
                .aspectRatio(aspectRatio, contentMode: .fit)
            GeometryReader { geo in
                ZStack {
                    Rectangle()
                        .fill(.black)
                        .border(.blue)
                    Rectangle()
                        .fill(.red)
                        .frame(width: geo.size.width / zoomFactor,
                               height: geo.size.height / zoomFactor)
                }
            }
        }
    }
}

struct TwoRectanglesView_Previews: PreviewProvider {
    @State static var zoomFactor = 3.1
    static var previews: some View {
        TwoRectanglesView(zoomFactor: $zoomFactor)
    }
}
The "deployment target" for a DEXT is a number like 19.0 or 21.4. Xcode seems to pick the latest version on the machine you are creating the target on as a default - so if I make a new Driver target on Xcode 14 and Ventura, the Deployment Target for the driver will be 21.4. If I'm targeting macOS 12 (for example), what version of DriverKit should I choose, and where is this documented?
I'm trying to implement an app Shortcut (Custom Intent) for a macOS app on Monterey. Shortcuts.app finds the shortcut, but when I run it, the progress bar goes to 50% and stops. My handler and resolution code is never called. I'm implementing the handling in-app (not in an extension).
I'm following instructions from the WWDC 2021 video "Meet Shortcuts for macOS" and this link https://developer.apple.com/documentation/sirikit/adding_user_interactivity_with_siri_shortcuts_and_the_shortcuts_app?language=objc
If I filter on "shortcuts" in the Console app, and press the run button in Shortcuts.app for my Shortcut,
I see this message (amongst others)
-[WFAction processParameterStates:withInput:skippingHiddenParameters:askForValuesIfNecessary:workQueue:completionHandler:]_block_invoke Action <WFHandleCustomIntentAction: 0x15c1305b0, identifier: finished processing parameter states. Values:
which looks sort of promising
but I also see this
Sandbox: Shortcuts(9856) deny(1) file-read-data /Users/stu/Library/Developer/Xcode/DerivedData/-hghdaydxzeamopexvfsgfeuvsejw/Build/Products/Debug/.app
I've tried moving my app to /Applications and launching it from there, I see a similar message in the log, but the path leads to the app in /Applications.
I've tried deleting all copies of my app aside from the one I'm currently building and debugging. I've tried deleting the derived data folder, restarting the Mac, re-launching the Shortcuts app. I've tried sandboxing my app. Other Shortcuts (for other apps) work on this machine.
I'm probably missing something extremely simple - does anyone have a suggestion?
Some related questions:
At WWDC 2022, Apple introduced "App Intents", without adequately explaining how these differ from the intents described in the WWDC 2021 video. Can anyone tell me what the difference is?
In the Xcode editor for the .intentdefinition file, there's a button "Convert to App Intent". Clicking it produces some new Swift files in my app, but the thing was already an intent handled by an app, and now it is an App Intent, so what's the difference? Is one better than the other? Do I have to click the convert button again if I subsequently modify the .intentdefinition file, or is this conversion process intended to replace the .intentdefinition file with those .swift files?
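For what it's worth, the converted form is defined entirely in Swift code; a minimal App Intent looks roughly like this (the intent and parameter names here are invented, not from my project):

```swift
import AppIntents

// Available on macOS 13 and later.
struct SelectCameraIntent: AppIntent {
    static var title: LocalizedStringResource = "Select Camera"

    @Parameter(title: "Camera Name")
    var cameraName: String

    func perform() async throws -> some IntentResult {
        // Switch the app to the named camera here.
        return .result()
    }
}
```

My understanding is that because App Intents live entirely in code, the conversion is meant to replace the .intentdefinition file rather than stay in sync with it, but I'd welcome confirmation.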
I have an app with a dext, which I developed using Xcode 13.4.1. I used to sign it manually using our Developer ID Distribution certificate and profile, because Xcode 13 didn't support automatic dext signing, and most of my problems stemmed from signing or entitlement configuration problems, not coding problems, so I never used 'sign to run locally'.
I tried to build the same app & extension with Xcode 14; the build fails with this helpful error:
Xcode 14 and later requires a DriverKit development profile enabled for IOS and macOS. Visit the developer website to create or download a DriverKit profile
So I went to the portal, selected the DriverKit App Development profile type, selected my App ID, selected my development certificate, selected all my test devices, selected my entitlements, named it, clicked Generate - and nothing happens. The "Generate" button title briefly changes to "Processing...", but I can't see how to get to the Download stage.
Anyone have any idea what I'm doing wrong?
I'm trying to make a DEXT target within my project. It compiles and links fine if I build just its own scheme.
However, if I build my app's target, which includes the DEXT as a dependency, the build fails when linking the DEXT.
The linker commands are different in the two cases. When built as part of the larger project, the DEXT linker command includes -fsanitize=undefined. This flag is absent when I build using the DEXT's scheme alone. I searched the .pbxproj for "sanitize" - it doesn't appear, so it looks like Xcode is adding this flag.
The linker failure is this:
File not found: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib
The only files with "driverkit" in their name in that directory are these two:
libclang_rt.cc_kext_driverkit.a
libclang_rt.driverkit.a
The successful link command includes this directive:
-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.driverkit.a
while the unsuccessful link command includes this one:
-lc++ /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/13.1.6/lib/darwin/libclang_rt.ubsan_driverkit_dynamic.dylib
I tried adding -fno-sanitize=undefined to the OTHER_LINKER_FLAGS for the DEXT target, hoping that this would cancel the effect of the previous -fsanitize, but then I get undefined symbol errors:
Undefined symbol: ___ubsan_handle_shift_out_of_bounds
Undefined symbol: ___ubsan_handle_type_mismatch_v1
These appear to be referred to by the macros used in the iig magic.
I'm using Xcode 13.4.1 (13F100).
Does anyone know how I can fix this?
I built an app which hosts a CMIOExtension. The app works, and it can activate the extension. The extension loads in e.g. Photo Booth and shows the expected video (a white horizontal line which moves down the picture).
I have a couple of questions about this though.
The sample Camera Extension is built with a CMIOExtension dictionary with just one entry, CMIOExtensionMachServiceName which is $(TeamIdentifierPrefix)$(PRODUCT_BUNDLE_IDENTIFIER)
This Mach service name won't work though. When attempting to activate the extension, sysextd says that the extension has an invalid mach service name or is not signed; the value must be prefixed with one of the App Groups in the entitlement.
So in order to get the sample extension to activate from my app, I have to change its CMIOExtensionMachServiceName to
<my team ID>.com.mycompany.my-app-group.<myextensionname>
Is this to be expected?
The template CMIOExtension generates its own video using a timer. My app is intended to capture video from a source, filter that video, then feed it to the CMIOExtension, somehow. The template creates an app group called "$(TeamIdentifierPrefix)com.example.app-group", which suggests that it might be possible to use XPC to send frames from the app to the extension.
However, I've been unable to do so. I've tried
NSXPCConnection *connection = [[NSXPCConnection alloc] initWithMachServiceName:options:], using the CMIOExtensionMachServiceName, both with no options and with the NSXPCConnectionPrivileged option. I've also tried NSXPCConnection *connection = [[NSXPCConnection alloc] initWithServiceName:], using the extension's bundle identifier. In all cases, when I send the first message I get an error in the remote object proxy's handler:
Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named <whatever name I try> was invalidated: failed at lookup with error 3 - No such process."
According to the "Daemons and Services Programming Guide" an XPC service should have a CFBundlePackageType of XPC!, but a CMIOExtension is of type SYSX. It can't be both.
Does the CMIOExtension loading apparatus cook up a synthetic name for the XPC service, and if so, what is it? If none, how is one expected to get pixel buffers into the camera extension?
I added a Camera Extension to my app, using the template in Xcode 13.3.1. codesign tells me that the app and its embedded system extension are correctly signed, their entitlements seem to be okay. But when I submit an activation request for the extension, it returns with this failure:
error: Error Domain=OSSystemExtensionErrorDomain Code=9 "(null)"
localized failure reason: (null)
localizedDescription: The operation couldn’t be completed. (OSSystemExtensionErrorDomain error 9.)
localizedRecoverySuggestion: (null)
What could be the reason? code 9 appears to mean a "validation error", but how do I figure out what is invalid?
In my keychain, I have one Developer ID Application certificate, with a private key, for my Team.
In Xcode's Accounts/Manage Certificates dialog, there are three Developer ID Application certificates, two of which have a red 'x' badge and the status 'missing private key'.
I can right click on any of those three entries and my only enabled choice is "Export". Email creator or Delete are disabled. Why?
In my Team's account, there are indeed three Developer ID Application certificates, with different expiration dates, but I only have the private key for one of them.
By choosing Manual signing, I can choose a specific certificate from my keychain, but Xcode 13.2.1 tells me that this certificate is missing its private key - but I can see that private key in my keychain!
For some time I've been sharing an internal macOS app with my colleagues by simply building it locally, zipping it up and emailing, or sharing on Slack or Teams.
In the Target Settings in Xcode, Signing and Capabilities, the Team is set to my company, the Signing Certificate is set to Development (not "Sign to run locally").
This has worked for some time. None of the recipients complained that they couldn't run the app. Of course it is not notarized so they need to right-click and select Open the first time around.
When I examine the signature of the app I distribute this way, using `codesign -dvvv`, the signing authority is me (not my company).
One of my colleagues recently migrated to a new Mac Mini M1. On this Mac, when attempting to open the app, he saw the "you do not have permission to open the application" alert. He's supposed to consult his sys admin (himself).
I fixed the problem by Archiving a build and explicitly choosing to sign it using the company's Developer ID certificate. The version produced this way has a signing authority of my company, not me, and my colleague can run it.
Does anyone know why my previous builds work on other machines for other users? It appears that the locally-built app was actually signed by my personal certificate, even though Xcode's UI said it would be signed by my company. So why did it work for users other than me, and not only for me?
What is the expected behavior if you try to open an app signed with a personal certificate on a machine owned by a different person? Should Security & Privacy offer the option of approving that particular personal certificate?