Posts

Post not yet marked as solved
9 Replies
2.8k Views
I'm new to Xcode Cloud, working with a macOS app, and the build is working great. Now I am trying to add a Test action; the testing target builds but won't run, and the error indicates it can't find the testing bundle in the expected build output. There's also mention of a code-signing error, but I have automatic code signing enabled with the same settings on the test target as on the app. I am only running the unit test (XCTest) scheme, not the UI tests. When I run it locally from the IDE it works fine, either selecting the test scheme explicitly or as the test step of the app scheme.

I noticed the XCTest target's scheme setup uses Debug builds and expects the test output to be in the Debug .app bundle. I thought perhaps that was the problem (in case only the Release app bundle actually gets built in the Xcode Cloud environment), so I created a duplicate scheme and set the build targets to Release. Again, I can run this fine locally (after creating a Release build), but it fails with the same error in Xcode Cloud. I also tried changing the code-signing certificate from "Development" to "Sign to run locally" to see if that made a difference, but I get the same error. (It's using my developer account's Team, with "Automatically manage signing".)

Can anyone relate the proper way to set up an XCTest scheme so that the tests will actually run in a macOS Xcode Cloud workflow? I'm using Xcode 14.0.1.

Here's the full error output, with [AppName] and [TestTargetName] substituted for the actual names:

[AppName] (....) encountered an error (Failed to load the test bundle. If you believe this error represents a bug, please attach the result bundle at /Volumes/workspace/resultbundle.xcresult. (Underlying Error: The bundle “[TestTargetName]” couldn’t be loaded. The bundle couldn’t be loaded. Try reinstalling the bundle. dlopen(/Volumes/workspace/TestProducts/Debug/[AppName].app/Contents/PlugIns/[TestTargetName].xctest/Contents/MacOS/[TestTargetName], 0x0109): tried: '/Volumes/workspace/TestProducts/Debug/[TestTargetName]' (no such file), '/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/usr/lib/[TestTargetName]' (no such file), '/Volumes/workspace/TestProducts/Debug/[AppName].app/Contents/PlugIns/[TestTargetName].xctest/Contents/MacOS/[TestTargetName]' (code signature in <....> '/Volumes/workspace/TestProducts/Debug/[AppName].app/Contents/PlugIns/[TestTargetName].xctest/Contents/MacOS/[TestTargetName]' not valid for use in process: mapped file has no Team ID and is not a platform binary (signed with custom identity or adhoc?))))

Thanks!
Posted by ccorbell. Last updated.
Post marked as solved
7 Replies
7.2k Views
I'm getting an error building my Mac app for both Apple Silicon and Intel, related to a Swift package dependency. The dependency is a pure Swift package (SwiftyXMLParser), and I'm bringing it into my app as a package dependency from the git repository URL. My app is configured to build standard architectures so it runs natively on both Apple Silicon and Intel.

The build fails because the package manager apparently provides only the active architecture, not a universal library, when the latter is configured. It happens when I set "Build Active Architecture Only" to NO (if it's set to YES the build succeeds, but then the app won't run on the other architecture).

In particular, if I build on my M1 mini, the error is:

Could not find module 'SwiftyXMLParser' for target 'x86_64-apple-macos'; found: arm64, arm64-apple-macos

If I build on my x86 MBP, it's the same failure but the error is reversed (no arm64; found: x86_64).

So clearly SPM+Xcode can provide either architecture. Is it possible for it to provide a universal lib? Or do I need to download the source of the package and build it through a more manual process?
Posted by ccorbell. Last updated.
Post not yet marked as solved
0 Replies
992 Views
Customers of an app I work on are having some issues on MacBook Pros with external monitors, which don't occur if Automatic Graphics Switching is turned off. This is on more recent OS versions (Big Sur, Monterey). Is there a reliable way to check programmatically whether this preference setting is turned on, so the app could at least alert the user?

I've looked at the com.apple.PowerManagement files in /Library/Preferences; however, I don't see any key-values in those files changing when I change this setting (I tried rebooting the system after changing it, etc.). Online I've seen some references to using the pmset command-line tool to check this, but I also do not see its output reflecting any change, and I see conflicting notes on which field might correspond to this setting.

Is there any definitive answer or documentation around this setting? Thanks in advance!
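For reference, here's the kind of check I've been experimenting with. It's a minimal sketch that shells out to pmset and scans the live settings. Note that the gpuswitch field name, and its reported meaning (0 = integrated only, 1 = discrete only, 2 = automatic), is an assumption drawn from those conflicting notes, not anything I've found documented.

```swift
import Foundation

// Minimal sketch: run `pmset -g live` and look for a GPU-switching field.
// ASSUMPTION: the field is named "gpuswitch" (reportedly 0 = integrated,
// 1 = discrete, 2 = automatic); I have not found this documented.
func automaticGraphicsSwitchingValue() -> Int? {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/pmset")
    process.arguments = ["-g", "live"]

    let pipe = Pipe()
    process.standardOutput = pipe
    do {
        try process.run()
        process.waitUntilExit()
    } catch {
        return nil
    }

    let data = pipe.fileHandleForReading.readDataToEndOfFile()
    guard let output = String(data: data, encoding: .utf8) else { return nil }

    for line in output.split(separator: "\n") {
        // Split each settings line into whitespace-separated fields.
        let fields = line.split(whereSeparator: { $0 == " " || $0 == "\t" })
        if fields.first?.lowercased() == "gpuswitch", fields.count > 1 {
            return Int(fields[1])
        }
    }
    return nil
}
```

As noted above, I don't actually see pmset's output change when I toggle the preference, which is exactly the problem.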
Posted by ccorbell. Last updated.
Post marked as Apple Recommended
739 Views
As someone who learned to program on Mac OS 7.5.5, I don't mind the return of Clarus the dogcow! However, others on my team are wondering if there will be a way for our app to still use the system Page Setup dialog but not display Clarus (basically retaining the appearance of Monterey's Page Setup). Does anyone know what the plan is for Clarus in the official Ventura release? Will it be possible to hide the dogcow, or substitute a different preview graphic?
Posted by ccorbell. Last updated.
Post not yet marked as solved
0 Replies
683 Views
This is fairly bizarre behavior, and I wonder if anyone with deep knowledge of NSView layout internals has a clue.

I have a view that hosts a number of toolbar-like buttons, some subclasses of NSButton and others subclasses of NSPopUpButton. The latter are used in pull-down mode and do not display a title (just a pull-down indicator and icon); they are similar in size and appearance to the NSButtons. When the app is switched between light mode and dark mode, some layout action is triggered which always adds 5 pixels of width to the pull-down buttons. When I override setFrame and set a breakpoint, this is happening below NSApplication nextEventMatchingMask in an undocumented function called NSViewActuallyUpdateFromLayoutEngine. This method adds 5 pixels to the width on every light/dark mode change. It does so on a pull-down even if it has already done so before, so these controls just keep growing, and this does not seem to be related to any obvious intrinsic content.

Some things I would note in trying to track it down:

- It only happens to the popup buttons, not the regular NSButtons in the same superviews.
- I have some of these in a horizontal stack view and some positioned statically in a regular NSView; both scenarios get this 5-pixel width resize.
- I've tried overriding both intrinsicContentSize and fittingSize to return the current frame size, as a way to tell the layout system not to change the size; that didn't work.
- There are no explicit constraints created for these views; they are created in code, not in Interface Builder.
- I've tried using the autoresizing mask value NSViewNotSizable with the translates flag set to YES; that does not prevent the issue.
- I've tried setting the superview's autoresizesSubviews flag to false, and also making sure the translates flags were false so no unwanted constraints are created; these also do not prevent the issue.
- The only values in my code related to 5 pixels were a spacing setting and a left inset on the horizontal stack view, so I removed those as a check, but as I expected that did not change the behavior.

The only way I've been able to defeat it is to add a .sizeLocked property to my subclass and, if set, replace the incoming frame.size with the current one, preventing the resize while still allowing the origin to change (see the sketch below). Obviously that's not ideal, but as a brute-force workaround it works for this specific issue. However, I'm seeing other weird layout creep as well on repeated light/dark changes.

I wonder why this happens, and what the correct fix might be. Why does dark/light mode switching resize anything in layout, when the sizes of windows and views have not changed?
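For reference, here's a minimal sketch of that brute-force workaround. The sizeLocked property is my own, not AppKit API.

```swift
import AppKit

// Brute-force workaround sketch: a pull-down button that can refuse
// size changes coming from the layout engine while still allowing
// its origin to move. `sizeLocked` is my own property, not AppKit API.
class SizeLockedPopUpButton: NSPopUpButton {
    var sizeLocked = false

    override var frame: NSRect {
        get { super.frame }
        set {
            var newFrame = newValue
            if sizeLocked {
                // Keep the current size; accept only the new origin.
                newFrame.size = super.frame.size
            }
            super.frame = newFrame
        }
    }
}
```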
Posted by ccorbell. Last updated.
Post not yet marked as solved
1 Reply
2.4k Views
My team is working on the arm64/universal conversion of a large application with lots of third-party dependencies, but while that work is in progress we are trying to support customers running the app under Rosetta on M1 Macs. To that end it would be helpful if we could actually debug the x86_64 build running under Rosetta on M1, but we see lots of instability when we try this. The app will not even launch if the thread sanitizer setting is turned off. Sometimes the app still fails to launch and/or the debugger fails to attach. It will sometimes show the error message "LLDB provided no error string". No breakpoints are hit during attempted debugging. Note the app is a plugin host, and we sometimes see a main-queue crash in ImageLoader::recursiveInitialization when we attempt to debug.

Is debugging a non-trivial x86_64 app under Rosetta on an M1 machine supported by Xcode? Are there any special configuration steps to get it to work? (We can debug a trivial Hello World x86_64 app fine this way, so it seems to be related to more complex app architecture.)

Also, FWIW, we tried launching Xcode itself under Rosetta on the M1 machine to see if that would resolve any debugging conflicts, but it didn't correct these issues. Thanks for any guidance!
Posted by ccorbell. Last updated.
Post not yet marked as solved
1 Reply
1.8k Views
I'm implementing a horizontal NSStackView with child views of varying size. The stack view should resize horizontally with its parent view, which in turn resizes with the window. I expect the stack view's arranged distribution (fill proportionally) to adjust the distribution of child views as its width changes.

I keep hitting bizarre behavior where the window can't be resized horizontally at all: the horizontal/diagonal resize cursors no longer appear, and the window can only be resized vertically. The problem is not that the child views don't resize with the window; the child views actually prevent resizing of the window!

I first saw this when creating an empty NSStackView and adding it to its parent view in code, with layout constraints (top, leading, trailing; I also tried width). I tried adjusting constraint priorities, but that did not correct the issue. (With the stack view not there, its parent view resizes with the window just fine.) When I created the NSStackView in the .xib, this problem went away; the empty NSStackView resizes horizontally along with its parent, so that's what I'm doing now. However, when I then add a set of arranged subviews in code, it happens again: they are distributed perfectly, but the window again cannot be horizontally resized!

The subviews are minimal NSView implementations (drawing their bounds); they do not have intrinsic size. They do each have child NSTextField labels, with autolayout constraints to horizontally resize along with their parents. There are no constraints besides those internal to each child view and its label subview.

What am I not understanding about Auto Layout and NSStackView? I cannot imagine an intentional design in which anything an NSStackView does would break the user's ability to resize the window in one direction, but that's exactly what happens.
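In case it helps reproduce or rule things out, this is roughly the configuration, together with the priority adjustments I've experimented with. It's a sketch only; the lowered priority values are guesses, not a confirmed fix.

```swift
import AppKit

// Sketch of the setup: a horizontal stack view that fills
// proportionally, with hugging and clipping resistance lowered so
// the stack view itself shouldn't pin the window's width. The
// specific priority values here are guesses, not a confirmed fix.
func configure(stackView: NSStackView) {
    stackView.orientation = .horizontal
    stackView.distribution = .fillProportionally

    stackView.setHuggingPriority(.defaultLow, for: .horizontal)
    stackView.setClippingResistancePriority(.defaultLow, for: .horizontal)

    for view in stackView.arrangedSubviews {
        // Let each child stretch and compress with the window.
        view.setContentHuggingPriority(.defaultLow, for: .horizontal)
        view.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
    }
}
```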
Posted by ccorbell. Last updated.
Post marked as solved
1 Reply
981 Views
I'm investigating some font/typography issues in a creative application. I am hoping for some information from Apple developers who know how font activation works in Core Text, and how/whether auto-activation is still supported. I'm running on Big Sur in an Obj-C Cocoa app.

In applicationDidFinishLaunching the app calls CTFontManagerSetAutoActivationSetting(NULL, kCTFontManagerAutoActivationEnabled); After this, what is the correct API to request auto-activation of a font that may not be active but could be activated, e.g. by a third-party font manager?

I have tried using CTFontCreateWithNameAndOptions. For options, I see there is a CTFontOptions value kCTFontOptionsPreventAutoActivation; I assume that as long as I don't set this option, the call will attempt auto-activation. However, I don't see any auto-activation happening, either from the third-party font manager or the OS. Instead, CTFontCreateWithNameAndOptions always returns a substitute font (Helvetica).

The font I am attempting to auto-activate is an OTF. It is stored in a third-party font manager, and I also have copies of it on the desktop. I have also tried having it present but disabled in Font Book, which also does not auto-activate it in this case. I've also cleaned font caches after making changes.

How is kCTFontManagerAutoActivationEnabled supposed to work in Big Sur? Is there a different Core Text call used to auto-activate a font, or is this behavior no longer supported? Thanks!
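To make the sequence concrete, here's a sketch of the calls I'm describing, in Swift for brevity. "SomeInactiveFont" is a placeholder for the inactive OTF's PostScript name.

```swift
import CoreText

// In applicationDidFinishLaunching: opt in to auto-activation
// (the Obj-C call in the app is the same API).
CTFontManagerSetAutoActivationSetting(nil, .enabled)

// Later: request the font *without* kCTFontOptionsPreventAutoActivation,
// which I assumed would permit auto-activation.
// "SomeInactiveFont" is a placeholder name.
let font = CTFontCreateWithNameAndOptions("SomeInactiveFont" as CFString,
                                          14.0,
                                          nil,
                                          [])

// In my testing this always comes back as a substitute (Helvetica)
// rather than triggering activation of the requested font.
print(CTFontCopyPostScriptName(font))
```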
Posted by ccorbell. Last updated.
Post not yet marked as solved
3 Replies
1.6k Views
I've got an iOS app with an add-on non-consumable. In sandbox testing, the purchase attempt *always* shows an extra Buy prompt, and then fails if the sandbox account is new. Then it always succeeds if you just try again. Is this normal?

Details:

- The test begins with a brand new sandbox tester account, in the USA region and verified via email invitation. The device's iTunes & App Store SANDBOX ACCOUNT is logged in with this new account.
- The app is completely deleted from the device, then built/deployed with Xcode. Note that launching the app initially always prompts to enter the password if this is the first time I've used this sandbox test account. This prompt doesn't say anything about sandbox, but it uses the sandbox email as the Apple ID, and the sandbox login works.
- Receipt retrieval and local validation work in the app, as does initial retrieval of the add-on product information with SKProductsRequest.
- Requesting purchase of the add-on with SKPaymentQueue.add() correctly puts up the "Confirm Your In-App Purchase" modal, showing [Environment: Sandbox]. A paymentQueue:updatedTransactions invocation happens, with transactionState .purchasing.
- After entering the correct password and clicking Buy, there's a brief pause, then a second purchase confirmation modal appears exactly the same way, again with [Environment: Sandbox] and again requiring the password.
- When I enter the password and click Buy again, the transaction fails. I get a .failed transaction paymentQueue update, and the failure alert comes up saying to try again later.
- At this point, however, all I have to do is try again, immediately or after quitting the app and restarting, and everything works perfectly: the purchase succeeds.

The success in the second case makes me think the code is correct, and the problem with the first attempt is just some known sandbox or configuration issue. The double prompt seems especially weird since there's no transaction update between them.

Do others see this behavior? Is there any way to make a sandbox purchase attempt succeed on the first attempt in this kind of environment? (The purchase path is sketched below.) Xcode 12.2, running on Intel 11.0.1; device running iOS 13.7.
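For reference, the purchase path itself is the standard StoreKit 1 flow, roughly the sketch below. The class and method names here are mine, and it assumes the SKProduct was already fetched via SKProductsRequest.

```swift
import StoreKit

// Sketch of the purchase flow in question (StoreKit 1).
// Assumes the SKProduct was already fetched via SKProductsRequest.
class PurchaseController: NSObject, SKPaymentTransactionObserver {

    override init() {
        super.init()
        // Observe the queue for transaction updates.
        SKPaymentQueue.default().add(self)
    }

    func buyAddOn(_ product: SKProduct) {
        let payment = SKPayment(product: product)
        SKPaymentQueue.default().add(payment)
    }

    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions {
            switch transaction.transactionState {
            case .purchasing:
                // First callback arrives while the Buy prompt is up.
                break
            case .purchased:
                queue.finishTransaction(transaction)
            case .failed:
                // With a brand new sandbox account, the first attempt
                // lands here; an immediate retry then succeeds.
                queue.finishTransaction(transaction)
            default:
                break
            }
        }
    }
}
```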
Posted by ccorbell. Last updated.
Post not yet marked as solved
0 Replies
934 Views
Our Mac app targets 10.13 and later. A number of minor but hard-to-fix appearance issues showed up in an NSOutlineView after we updated to Xcode 12 and the latest SDK. The outline view is created in code (not from a .xib) and is view-based.

When the outline view has top-level parents toggled open and you scroll down, the topmost parent 'freezes' in place in the first row of the table to show what container you are in. This still functions properly; however, under 10.15.x and earlier our outline view apparently no longer draws an opaque control background on that table row, so you see the child item views' contents scrolling up behind the frozen parent row, making it illegible. This bug does not occur on Big Sur, where it still appears correctly. It also appeared correct on older OSes right up until we updated to the latest Xcode/SDK.

There have been a few other appearance changes in NSOutlineView that were solely caused by this update, which makes me think this is due to some change in the NSOutlineView implementation. The Big Sur release notes mention only a change in default row height (which we did see and were able to fix easily).

Can anyone offer guidance on how to make a table row (not a cell) explicitly opaque in the latest SDK's NSOutlineView, and have it work properly on earlier OSes? Also, is there any release note at this granularity of SDK change that might offer guidance, apart from the Big Sur release notes? Thank you!
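One approach we're experimenting with is sketched below: return our own row view from the delegate and paint an opaque background before the normal row drawing. This is only a sketch, not a confirmed fix, and the delegate class name here is hypothetical.

```swift
import AppKit

// Sketch: a row view that paints an opaque background before the
// normal row drawing, so content scrolling beneath the floating
// group row can't show through. Not a confirmed fix.
class OpaqueRowView: NSTableRowView {
    override func drawBackground(in dirtyRect: NSRect) {
        NSColor.windowBackgroundColor.setFill()
        dirtyRect.fill()
        super.drawBackground(in: dirtyRect)
    }
}

// Hypothetical delegate class; ours is more involved.
class OutlineDelegate: NSObject, NSOutlineViewDelegate {
    func outlineView(_ outlineView: NSOutlineView,
                     rowViewForItem item: Any) -> NSTableRowView? {
        return OpaqueRowView()
    }
}
```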
Posted by ccorbell. Last updated.
Post marked as solved
3 Replies
2.8k Views
I'm working on getting a large app to Universal Binary; it uses a number of libraries and frameworks with third-party dependencies, a few of which are not yet available as arm64.

If a framework is marked as Optional in the app's build settings, the app will still link when the framework is absent. I expected that if the framework had only one of the architectures needed for UB (e.g. just x86_64), we should be able to optionally link it successfully, and it would not matter that one architecture was missing. However, this isn't how Xcode handles it: we get a linker error for the missing architecture even if the framework is optional.

Is there a way to do what I'm expecting? That is, include an optional framework with only one architecture in a UB app, with the app taking responsibility for correct handling of the framework at runtime (just as it would for a missing optional framework in a single-architecture app)?
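For context, the runtime handling we already do for optional frameworks is along these lines, which is why I expected the per-architecture case could work the same way. The class name is a placeholder.

```swift
import Foundation

// Sketch of the runtime check we use for optionally linked
// frameworks: probe for a class before touching the framework.
// "ThirdPartyEntryPoint" is a placeholder class name.
if let entryClass = NSClassFromString("ThirdPartyEntryPoint") {
    // Framework is present (and loadable for this architecture);
    // safe to use its functionality.
    print("Optional framework available: \(entryClass)")
} else {
    // Framework missing at runtime; degrade gracefully.
    print("Optional framework not available")
}
```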
Posted by ccorbell. Last updated.
Post not yet marked as solved
0 Replies
913 Views
I'm working on a substantial Mac AppKit app that has a large number of third-party libraries and frameworks, mostly C++. I'm getting a crash on initialization of some boost::asio code that I'm fairly sure indicates different versions of boost referenced by one or more linked libs. However, I can't tell from the stack trace exactly which library is causing the issue; the caller of the boost code is just ::__cxx_global_var_init.10().

I see some other forum threads about crashes in ImageLoaderMachO, but they tend to focus on fixing a specific crash; I have a slightly more general question. Are there advanced techniques for debugging and analyzing the pre-main() work done by ImageLoaderMachO? I'd like to fix this crash (which I'll explore through some trial and error), but more than that I'd like a deterministic way to see, and ideally even breakpoint, all code that gets executed before main(), to understand this app's startup/load behaviors. Thanks for any wizardry!
Posted by ccorbell. Last updated.
Post not yet marked as solved
3 Replies
1k Views
Is anyone able to view the Platforms SOTU 2021 video? In both the Developer app and online it just shows a static image. (The Keynote video works fine.) In the Developer app it says Monday 2 - 3:30 PM, and it's now 4:34 PM... is it not available for replay? Also, FWIW, even though it hasn't shown the video, the Developer app actually now says "Rewatch SOTU" under the link. Online, the link with no video: https://developer.apple.com/videos/play/wwdc2021/102/
Posted by ccorbell. Last updated.
Post marked as solved
1 Reply
2.6k Views
I'm working on migrating some image-drawing code away from NSImage lockFocus() to a bitmap CGContext. This is intended to compose image content for export to formats that support alpha, like PNG and TIFF, with precise control over raster resolution, etc. (not solely or primarily for window/device content).

I'm trying to create a generic 32-bit RGBA color space for the bitmap, which I thought would be straightforward, but Core Graphics rejects the CGImageAlphaInfo.last info value for a generic RGB color space (it only allows none, or premultiplied options). There is no generic "RGBA" color space constant/name. Is there a way to do this?

Attempt:

```swift
if let colorspace = CGColorSpace(name: CGColorSpace.genericRGBLinear) {
    if let cgc = CGContext(data: nil,
                           width: Int(pixelWidth),
                           height: Int(pixelHeight),
                           bitsPerComponent: 8,
                           bytesPerRow: 0,
                           space: colorspace,
                           bitmapInfo: CGImageAlphaInfo.last.rawValue) {
        // use cgc...
    }
}
```

Error logged by Core Graphics:

CGBitmapContextCreate: unsupported parameter combination:
8 bits/component; integer; 32 bits/pixel; RGB color space model; kCGImageAlphaLast; default byte order; 320 bytes/row.
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.

TIA, Christopher
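Follow-up in case it's useful: going by the valid-combination list above, the closest working configuration I've found is premultiplied alpha. Here's a sketch; choosing sRGB is my own preference (genericRGBLinear should also fit the listed combinations), and the pixel dimensions are arbitrary.

```swift
import CoreGraphics

// Sketch: a bitmap context that Core Graphics accepts for RGB with
// alpha. Per the valid-combination list, straight (non-premultiplied)
// alpha isn't offered for 8-bit RGB, so this uses premultipliedLast.
// The sRGB space is my choice here; pixel dimensions are arbitrary.
let pixelWidth = 800
let pixelHeight = 600

if let colorspace = CGColorSpace(name: CGColorSpace.sRGB),
   let cgc = CGContext(data: nil,
                       width: pixelWidth,
                       height: pixelHeight,
                       bitsPerComponent: 8,
                       bytesPerRow: 0,
                       space: colorspace,
                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
    // Draw into cgc, then snapshot it for export.
    if let image = cgc.makeImage() {
        // Hand `image` to an image destination for PNG/TIFF export.
        print("Created \(image.width)x\(image.height) image")
    }
}
```

Premultiplied alpha still exports to PNG/TIFF with transparency intact; it just isn't the straight (unpremultiplied) alpha I was originally after.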
Posted by ccorbell. Last updated.
Post not yet marked as solved
1 Reply
966 Views
Pretty sure this is just a bug, and looking at the forums it seems like GKNoise/GKNoiseMap has a number of known issues and not much support...

I have a Perlin noise source and map it to a GKNoiseMap with an arbitrary square model coordinate system: size 20.0 x 20.0, origin (-10.0, -10.0). I can map this to a square area to render, with the sample size matching my output, and it all draws fine.

Then I want to shift the noise incrementally, say 10% up in the +Y axis direction. So I change the origin by adding 2.0 to Y, leave X and size the same, and make a new GKNoiseMap with the same (or an identical) noise source and these values (see the sketch below). The result is that instead of shifting the slice, it stretches it: the noise is compressed or warped rather than moved. It's weird.

I was able to get slightly better results trying to use GKNoise move() instead, but now that isn't working at all! Not to mention that GKNoise is documented as being an 'infinite' noise object, so what do move() or scale() even mean there? I would think the GKNoiseMap coordinate bounds would be the definitive way to do this: use the same noise data but shift around your window or 'slice'.

I'm really close to scrapping my use of the GKNoise API completely, but maybe if there's a developer who worked on this API, you can share the secret of its behavior.
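To make the repro concrete, here's the kind of construction I mean. The noise parameters and sample counts are arbitrary stand-ins for my actual values.

```swift
import GameplayKit

// Sketch of the repro: same noise source, same size, origin shifted
// +2.0 in Y, which I'd expect to slide the sampled window, not
// stretch it. Parameters and sample counts here are arbitrary.
let source = GKPerlinNoiseSource(frequency: 1.0,
                                 octaveCount: 6,
                                 persistence: 0.5,
                                 lacunarity: 2.0,
                                 seed: 12345)
let noise = GKNoise(source)

let original = GKNoiseMap(noise,
                          size: vector_double2(20.0, 20.0),
                          origin: vector_double2(-10.0, -10.0),
                          sampleCount: vector_int2(256, 256),
                          seamless: false)

let shifted = GKNoiseMap(noise,
                         size: vector_double2(20.0, 20.0),
                         origin: vector_double2(-10.0, -8.0),
                         sampleCount: vector_int2(256, 256),
                         seamless: false)

// Expectation: `shifted` samples the same noise field 10% further
// along +Y; observed: the output looks stretched instead.
_ = original.value(at: vector_int2(0, 0))
_ = shifted.value(at: vector_int2(0, 0))
```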
Posted by ccorbell. Last updated.