Thanks, my feedback report # is FB11845080.
Update: back to square one... After switching the code signing setting to Run Locally and back to Developer, and cleaning, the xcodebuild test command succeeds from Terminal with this target. It still fails on Xcode Cloud, however, including after starting from a clean environment.
Additional info: I discovered that this is not just an issue with Xcode Cloud; I can reproduce it locally by running the testing target from the command line with
xcodebuild -scheme (XCTest-scheme-name) test
I get the same error shown above, which does not occur when I run the Test action from the IDE. In this case it's looking for the test file in the Debug .app build in derived data - and the file is actually there. Any suggestions? Is this a code-signing issue?
Many of us would like this! I'm mainly interested in using it on the Mac with NSImage, but it's the same issue - NSImage clearly has the capability, since it can be created from an asset, so why isn't it possible from a loose .svg resource or SVG NSData?
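To illustrate the asymmetry, here's a rough sketch ("Gear" and the file name are just placeholders, not from my project):

import AppKit

// Loading an SVG from an asset catalog works:
let fromAsset = NSImage(named: "Gear")

// ...but initializing from the same SVG as loose data does not (this is the gap):
if let url = Bundle.main.url(forResource: "Gear", withExtension: "svg"),
   let data = try? Data(contentsOf: url) {
    let fromData = NSImage(data: data)      // typically nil for SVG data
    print(fromAsset != nil, fromData != nil)
}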
I think I answered my own question by searching for "macos sandbox entitlement font auto-activation" online. I came across a troubleshooting article on the limitations of Suitcase auto-activation, which won't work for Sandboxed apps (including Pages etc.) - unfortunately this forum won't let me post a link to it for some reason. (Can't post the search link either, gah.)
So it would seem that auto-activation is unsupported for sandboxed apps. I will file a feedback request to ask for an entitlement for this feature; it's an important use case for designers who have large collections of fonts, and Sandboxed apps shouldn't be denied the ability to leverage it. A global font auto-activation entitlement would make sense for apps that use typography in significant ways.
Also a follow-up: could there be an entitlement needed for auto-activation to work from a Sandboxed app?
Oh I also tried CTFontManagerSetAutoActivationSetting(kCTFontManagerBundleIdentifier, kCTFontManagerAutoActivationEnabled); which is supposed to set "global auto-activation", but no difference in behavior.
Thanks @edford for the answer - I was able to get this working as expected with my existing project just by adding the guards you mention to my source, so the symbols are not referenced for the not-yet-supported arm64 architecture. I used #if defined(__x86_64__).
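For anyone doing the same in Swift sources instead, I believe the equivalent guard is #if arch(x86_64) - a rough sketch, with OptionalKit and doWork() as placeholder names for the x86_64-only framework:

#if arch(x86_64)
import OptionalKit
#endif

func useOptionalFeatureIfAvailable() {
    #if arch(x86_64)
    OptionalKit.doWork()   // only compiled into the Intel slice
    #else
    // arm64 slice: behave as if the optional framework is absent
    #endif
}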
After experimenting a bit I'm concluding that this just isn't a use case the weak-linking mechanism supports. It looks to me like any time you're embedding a framework in your bundle, the Xcode build sequence is going to try to link its symbols, and for a universal app that means both architectures. There's no workaround here even if you do things like set different search paths and bundle locations for x86_64 vs. arm64. I think making this work would require a 'stub library' implementation of the framework for arm64 to get past the link errors, but I'm still not confident the right thing would happen at runtime (which is: x86_64 links with the x86_64 framework, and arm64 behaves as if the optional framework is missing).
If anyone from Apple or with expertise can confirm this conclusion, that would be appreciated! And then the question becomes, more generally: what's the approach for having a universal app make use of a library/framework that's only available for one architecture - is XPC the only way?
Posting a reply to my own question here in case others encounter the same... While I'm not certain about the underlying design, or when a non-precomposed RGBA setting would be possible or valid for a bitmap CGContext, that configuration turned out not to be needed.
I'm now using this code to generate my CGContext - it uses the sRGB color space, and the alpha info is set to premultiplied. I was concerned this would mean I'd somehow need to do premultiplied calculations myself when using semi-opaque colors in my drawing code, but that turns out not to be the case - I can obtain an image with a transparent background, and also set partial alpha values on fill or stroke colors while drawing, and everything works properly.
if let colorspace = CGColorSpace(name: CGColorSpace.sRGB) {
    if let cgc = CGContext(data: nil,
                           width: Int(pixelWidth),
                           height: Int(pixelHeight),
                           bitsPerComponent: 16,
                           bytesPerRow: 0,          // 0 lets CG compute the row stride
                           space: colorspace,
                           // 16-bit float components, little-endian, premultiplied alpha last
                           bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue |
                                       CGBitmapInfo.byteOrder16Little.rawValue |
                                       CGBitmapInfo.floatComponents.rawValue) {
        ...
So my answer in a nutshell is: if you want a bitmap CGContext for use in a Mac app with an RGB color space that includes transparency, you can just use the sRGB color space and a premultiplied alpha setting in the bitmapInfo parameter.
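For completeness, here's a rough usage sketch continuing inside that if let cgc block (the color and rect values are arbitrary, not from my actual drawing code):

cgc.setFillColor(CGColor(srgbRed: 0.2, green: 0.4, blue: 0.9, alpha: 0.5))   // 50% alpha fill
cgc.fill(CGRect(x: 0, y: 0, width: 100, height: 100))
if let image = cgc.makeImage() {
    // image keeps the transparent background plus the semi-opaque fill
    let nsImage = NSImage(cgImage: image, size: .zero)   // .zero adopts the CGImage's pixel size
    _ = nsImage
}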
With more investigation I think I've solved this issue! Hope this helps others who are investigating performance with XCTest and SIMD or other Accelerate technologies...
XCTest performance tests can work great for benchmarking and investigating alternate implementations, even for micro-benchmarks, but the trick is to make sure you're not measuring code built for Debug or running under the debugger.
I now have XCTest running the performance tests from my original post and showing meaningful (and actionable) results. On my current machine, the 100000 regular Double calculation block has an average measurement of 0.000328 s, while the simd_double2 test block has an average measurement of 0.000257 s, which is about 78% of the non-SIMD time, very close to the difference I measured in my release build. So now I can reliably measure what performance gains I'll get from SIMD and other Accelerate APIs as I decide whether to adopt.
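For reference, the two test blocks are shaped roughly like this (a sketch - the arithmetic here is illustrative only, not my actual test code):

import XCTest
import simd

final class PerformanceTests: XCTestCase {

    func testScalarMultiplyAdd() {
        var x = 0.0, y = 0.0
        measure {
            for i in 0..<100_000 {
                x = 10.0 + Double(i) * 0.5
                y = 20.0 + Double(i) * 0.25
            }
        }
        XCTAssertGreaterThan(x + y, 0)   // keep the results "used"
    }

    func testSIMDMultiplyAdd() {
        var v = simd_double2(0, 0)
        measure {
            for i in 0..<100_000 {
                v = simd_double2(10.0, 20.0) + simd_double2(repeating: Double(i)) * simd_double2(0.5, 0.25)
            }
        }
        XCTAssertGreaterThan(v.x + v.y, 0)
    }
}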
Here's the approach I recommend:
1. Put all of your performance XCTests in separate files from your functional tests, so a separate target can compile them.
2. Create a separate Performance Test target in the Xcode project. If you already have a Unit Test target, it's easy to just duplicate it and rename it.
3. Split your tests between these targets, with the functional tests only in the original Unit Test target and the performance tests in the Performance Test target.
4. Create a new Performance Test scheme associated with the Performance Test target.
5. THE IMPORTANT PART: Edit the Performance Test scheme's Test action, set its Build Configuration to Release, uncheck Debug Executable, and uncheck everything under Diagnostics. This makes sure that when you run Product > Test, it's Release-optimized code that gets run for your performance measurements.
There are a couple of additional steps if you want to be able to run performance tests ad hoc from the editor, with your main app set as the current scheme. First you'll need to add the Performance Test target to your main app scheme's Test section.
The problem now is that your main app's scheme only has one setting for test configuration (Debug vs. Release), so assuming it's set to Debug, running your performance tests ad hoc will show the behavior from my OP, with SIMD code especially being orders of magnitude slower.
I do want my main app's test configuration to remain Debug for working with functional unit tests. So to make performance tests work tolerably in this scenario, I edited the build settings of the Performance Test target (only) so that its Debug settings are more like Release - the key setting being Swift Compiler - Code Generation > Optimization Level, changing the Debug value to Optimize for Speed [-O]. While I don't think this is quite as accurate as running under the Performance Test scheme with the Release configuration and all other debug options disabled, I'm now able to run the performance tests under my main app's scheme and see reasonable results - it again shows the SIMD time measurement in the 75-80% range compared to non-SIMD for the test in question.
Thanks for the reply OOPer.
OK, since XCTest isn't useful for this, I've approached it by adopting SIMD in some production code and comparing actual performance at runtime - please see the new code and results below. I now see about a 15% improvement in release-build performance; I'm wondering if that's about what one would expect.
The code here is converting a set of x,y Double values from model coordinates to screen coordinates, so there's a multiply and an add for every x and y. I'm pre-populating the output array with zeros and passing it in, to keep allocation out of the picture:
Regular implementation:
final func xyPointsToPixels(points: [(Double, Double)],
                            output: inout [CGPoint]) {
    if output.count < points.count {
        Swift.print("ERROR: output array not pre-allocated")
        return
    }
    let start = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    for n in 0..<points.count {
        output[n].x = CGFloat(self._opt_xAdd + points[n].0 * self._opt_xfactorDouble)
        output[n].y = CGFloat(self._opt_yAdd + points[n].1 * self._opt_yfactorDouble)
    }
    let end = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    let time = Double(end - start) / 1_000_000_000
    os_log(.info, "=== regular conversion of %d points took %g", points.count, time)
}
SIMD implementation:
final func xyPointsToPixels_simd(points: [simd_double2],
                                 output: inout [CGPoint]) {
    if output.count < points.count {
        Swift.print("ERROR: output array not pre-allocated")
        return
    }
    let start = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    for n in 0..<points.count {
        let xyVec = self._opt_simd_add + points[n] * self._opt_simd_factor
        output[n].x = CGFloat(xyVec.x)
        output[n].y = CGFloat(xyVec.y)
    }
    let end = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    let time = Double(end - start) / 1_000_000_000
    os_log(.info, "=== simd conversion of %d points took %g", points.count, time)
}
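For context, the cached factors used above pair up the same scalars the regular path uses - roughly like this (placeholder values; the real ones come from the view's model-to-screen transform):

import simd
import CoreGraphics

let xFactor = 100.0            // pixels per model unit in x (placeholder)
let yFactor = -100.0           // pixels per model unit in y, flipped (placeholder)
let xAdd = 50.0                // pixel position of the model origin (placeholder)
let yAdd = 400.0

// The SIMD path just packs the same scalars into simd_double2 pairs:
let simdFactor = simd_double2(xFactor, yFactor)
let simdAdd = simd_double2(xAdd, yAdd)

// One point converted both ways gives the same result:
let p = simd_double2(1.5, 2.0)
let scalar = CGPoint(x: xAdd + p.x * xFactor, y: yAdd + p.y * yFactor)
let vector = simdAdd + p * simdFactor
assert(scalar == CGPoint(x: vector.x, y: vector.y))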
A debug build run in the debugger is just as misleading with this code as the XCTest was - again SIMD is an order of magnitude slower there, so in general it seems bad for debugging, though maybe there are build settings that could improve that.
Changing the build configuration to Release and launching the app normally, with the console log level set to info, I was finally able to get more reasonable-looking data. Here the SIMD implementation was slightly faster than the regular implementation, confirming that the slower execution was a debug-build issue. The SIMD average time is around 85% of the regular average - is that about what would be expected? (I was actually hoping for a little better, considering we're executing only one instruction where we were executing two, especially for the multiply.)
My outputs:
info 11:38:27.658463-0800 MathPaint === simd conversion of 4122 points took 4.741e-06
info 11:38:28.303478-0800 MathPaint === simd conversion of 4123 points took 5.876e-06
info 11:38:28.724909-0800 MathPaint === simd conversion of 4122 points took 5.793e-06
info 11:38:31.132216-0800 MathPaint === simd conversion of 4122 points took 7.305e-06
info 11:38:31.675180-0800 MathPaint === simd conversion of 4123 points took 6.942e-06
info 11:38:32.186911-0800 MathPaint === simd conversion of 4123 points took 5.849e-06
info 11:38:34.185091-0800 MathPaint === simd conversion of 4122 points took 5.832e-06
info 11:38:34.603739-0800 MathPaint === simd conversion of 4122 points took 5.425e-06
info 11:38:37.465219-0800 MathPaint === simd conversion of 4123 points took 7.502e-06
info 11:38:38.840133-0800 MathPaint === simd conversion of 4123 points took 8.319e-06
simd average: 6.356e-06
info 11:49:35.332700-0800 MathPaint === regular conversion of 4123 points took 7.058e-06
info 11:49:36.014312-0800 MathPaint === regular conversion of 4122 points took 5.488e-06
info 11:49:38.079446-0800 MathPaint === regular conversion of 4122 points took 7.05e-06
info 11:49:39.658169-0800 MathPaint === regular conversion of 4122 points took 9.533e-06
info 11:49:41.327541-0800 MathPaint === regular conversion of 4122 points took 8.659e-06
info 11:49:42.779920-0800 MathPaint === regular conversion of 4122 points took 8.923e-06
info 11:49:43.286273-0800 MathPaint === regular conversion of 4122 points took 5.422e-06
info 11:49:43.847928-0800 MathPaint === regular conversion of 4122 points took 7.464e-06
info 11:49:49.293082-0800 MathPaint === regular conversion of 4123 points took 8.986e-06
info 11:49:49.793853-0800 MathPaint === regular conversion of 4122 points took 5.573e-06
regular average: 7.516e-06
Just thought I'd post a follow up on this, though the root cause is still unresolved.
As Etresoft guessed, I *was* making some improper StoreKit calls. The error(173) documentation and sample code seemed older to me than the more recent StoreKit documentation, which advertises itself as supported on the Mac - see for example SKReceiptRefreshRequest, which says "Mac OS 9.0+" in its Availability - https://developer.apple.com/documentation/storekit/skreceiptrefreshrequest - similarly for many other StoreKit APIs that are actually iOS-only. It's a little confusing to have a framework called 'available' on an OS where it isn't actually supported.
At any rate, my Mac app (which has no IAP) now does not include any StoreKit calls at all, nor does it link the framework; it only validates the receipt locally when present, using openssl calls to decrypt it and the Apple cert to validate it, and exits with code 173 if the receipt is missing or invalid.
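For what it's worth, the missing-receipt path boils down to the standard check - a sketch (my actual openssl-based validation is much longer and omitted here):

import Foundation

// If no receipt is on disk, exit with code 173 so the App Store fetches one.
func ensureReceiptPresent() {
    guard let url = Bundle.main.appStoreReceiptURL,
          FileManager.default.fileExists(atPath: url.path) else {
        exit(173)
    }
    // ...decrypt the receipt and validate its signature and contents here...
}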
This still triggers the 'app damaged' error in App Store test review, but nowhere else. Working with DTS we saw it occur with the app installed via the App Store .pkg, but I have never reproduced it with my own notarized build of the app, tested on various machines through a simple drag-install to /Applications - there everything works as expected, including the prompt for an Apple ID, download of the sandbox receipt, validation of the receipt's signing and contents, etc.
My DTS ticket is still open; this looks like a bug in the App Store. The only workaround for me is to not validate my receipt (just ensure its presence), but obviously that's not a secure solution if I'm going to prevent free use of the app on unauthorized machines. According to DTS the error is caused by the App Store treating my bundle ID as unregistered, but it's clearly registered (an earlier version of the app has already shipped), so it's being investigated as an App Store / StoreKit issue.
You might post a little code, and also clarify whether you're using AppKit or SwiftUI.
I'm guessing your background calculation is happening in a background DispatchQueue, and when it completes, you notify the window to redraw its contents using DispatchQueue.main.async to call some UI methods or post a notification?
If so, check this post-calculation UI update code - what does it do? Does it just call the view's setNeedsDisplay? Or does it do other things, like show the window? If you want the window to stay in the background, you should only call the view's setNeedsDisplay - that should make its content redraw without changing anything about the window's ordering or activation status.
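In other words, something shaped roughly like this (a sketch with placeholder names, assuming AppKit and a GCD background queue):

import AppKit

// performLongCalculation() and resultView are stand-ins for your own work
// function and NSView subclass.
DispatchQueue.global(qos: .userInitiated).async {
    let result = performLongCalculation()        // heavy work stays off the main queue
    DispatchQueue.main.async {
        resultView.latestResult = result         // hand the data to the view
        resultView.needsDisplay = true           // redraw only; no orderFront/makeKey calls
    }
}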
Just adding a note that this also happens when testing the app under iOS 14, and it happens with a production-build app deployed through TestFlight. Not good!