Just posting a follow-up on this - we did confirm that the "in use" name belonged to another developer's unreleased app, and we were able to get it resolved through developer support since my company holds the name as a registered trademark. So it wasn't our earlier free-account prototype that had marked the app name as "in use", as it had for the bundle ID.
I'm having this same issue. I was on Xcode 11.5, and my device that updated to iOS 13.6.1 was flagged as unsupported, so I updated to Xcode 11.7. I've tried cleaning the build folder, deleting the intermediates folder, and restarting both Xcode and the device, but I'm still stuck and can't debug on the phone. I did download the newer simulators, and Xcode is fine with 13.6 (and 13.7) in the simulator. Is there any solution to get un-stuck from this error?
When you say "Once it has been trusted and recognized by Xcode"... how does one trigger that? When I connect, it still shows up immediately as an available device in Xcode, even though it's marked as unsupported in the Devices window. I see no way to make Xcode itself 'forget' and re-trust a device. I have tried resetting & renaming one of the phones, and it prompts me again to trust the computer, but there's no new prompt to trust the device. (And I think it's true that I can't delete the device from my Developer account - that only happens at renewal?) So... this doesn't fix it.
Also FWIW I was momentarily on 11.6 and had the same issue, which is why I upgraded to 11.7, hoping it would fix it. I can try downgrading but this seems rather iffy.
Should I file a bug report?
I tried creating a new project using the same setup (company Team profile).
There is still an error preventing debugging, though I'll note a few differences from the existing project:
The message given by Xcode on the Run attempt now says "Could not locate device support files." and otherwise displays basically the same message as the Devices and Simulators window ("running iOS 13.6 (17G80), which may not be supported by this version of Xcode").
With the new project, Xcode prompts for permission before debugging on the simulator for the first time (it didn't prompt on the same updated Xcode version when using the old project).
The new project by default lists a much smaller set of devices as available simulators.
Also looks like many others are seeing this problem, viz. the thread tagged "Xcode" right after this one, "What's a non-beta version of Xcode that works with iOS 13.6.1?"
It does seem to start working for some people, so while I believe this is a bug, there must be some magic workaround. I wonder if there are Xcode preference files etc. that one can delete to force a re-pairing with devices, or if completely uninstalling and reinstalling Xcode would work, or something else. If anyone on the Xcode team is reproducing this and finds a fix, it would help - iOS development at my company is blocked.
https://developer.apple.com/forums/thread/658774
Final update: the only fix that worked for me was similarly upgrading the device to iOS 13.7, once I saw that was available. iOS 13.6.1 seems to just be broken with Xcode 11.7, at least on my system & devices.
I'm just adding a reply to bump the question; ridged noise seems broken to me too - I can't get any results like those shown in the documentation (the lightning-style branching). I've tried all combinations of frequency, octave count and lacunarity, and the result ranges from something like particle-board texture to fine pixel static.
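For reference, this is essentially the setup I'm testing (a minimal sketch - the parameter values here are just one example of the many combinations I tried):

import GameplayKit
import SpriteKit

// One example combination; none of the values I've tried produce the branching look from the docs.
let source = GKRidgedNoiseSource(frequency: 2.0,
                                 octaveCount: 6,
                                 lacunarity: 2.0,
                                 seed: 1)
let noise = GKNoise(source)
let map = GKNoiseMap(noise,
                     size: vector_double2(1.0, 1.0),
                     origin: vector_double2(0.0, 0.0),
                     sampleCount: vector_int2(512, 512),
                     seamless: false)
let texture = SKTexture(noiseMap: map)   // rendered output looks like static, not ridged noise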
Just posting a follow-up here, I did eventually decide to jettison the use of SKTexture because of this issue - there does not appear to be any workaround.
I'm now using the other GKNoiseMap() initializer that just takes the noise, size, origin, sampleCount and seamless params, calling noiseMap.value() to get the generated values, and using my own drawing code (supporting NSColors with alpha in the gradients I calculate) to render the output. It's slower so far (I'll be working on optimization), but the lack of alpha support in the SKTexture output was a deal-breaker. On the plus side, I can now also generate noise with more than two colors across the -1.0 to 1.0 value output; SKTexture only supported two opaque colors.
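Roughly, the replacement approach looks like this (a simplified sketch - my real drawing code maps each value through multi-color gradients with alpha):

import GameplayKit

let noise = GKNoise(GKPerlinNoiseSource())   // any GKNoiseSource
let map = GKNoiseMap(noise,
                     size: vector_double2(1.0, 1.0),
                     origin: vector_double2(0.0, 0.0),
                     sampleCount: vector_int2(256, 256),
                     seamless: false)

// Sample every position into a row-major buffer; value(at:) returns roughly -1.0...1.0.
var samples = [Float]()
samples.reserveCapacity(Int(map.sampleCount.x) * Int(map.sampleCount.y))
for y in 0..<map.sampleCount.y {
    for x in 0..<map.sampleCount.x {
        samples.append(map.value(at: vector_int2(x, y)))
    }
}
// The drawing code then maps each sample through a gradient (NSColors with alpha) and renders it.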
Solved: in the debugger output, Xcode warned about min and max sizes for toolbar items being deprecated.
After changing all toolbar item sizes to "Automatic", the toolbar items appear again, and the issues with the Customize panel are fixed.
Additionally, I was able to retain the existing separator-spacing of items by changing the toolbar type of the Window from "Automatic" to "Expanded".
Also a little more info - the package in question does have its Xcode project's macOS target configured to build universal binaries, and, for Release builds, Build Active Architecture Only is set to NO.
https://github.com/yahoojapan/SwiftyXMLParser
Okay, good news, after trying several approaches I finally found a very simple fix/workaround...
In the app build settings under Architectures, get rid of $(ARCHS_STANDARD) (click the minus to delete it) and add both architectures as explicit values, arm64 and x86_64. Apparently there's an issue with that default setting not really conveying that both architectures are needed when the package dependency is built, completely regardless of the "active architecture" setting.
Everything works great with this change, I was able to build and notarize the app on M1 and verify it now also runs on x86.
Just adding a note that this also happens when testing the app under iOS 14, and it happens with a production-build app deployed through TestFlight. Not good!
You might post a little code, and also clarify whether you're using AppKit or SwiftUI.
I'm guessing your background calculation is happening in a background DispatchQueue, and when it completes, you notify the window to redraw its contents using DispatchQueue.main.async to call some UI methods or post a notification?
If so, check this post-calculation UI update code - what does it do? Does it just call the view's setNeedsDisplay? Or does it do some other things like show the window? If you want to keep the window in the background, you should only call the view's setNeedsDisplay - that should make its content redraw but not change anything with regard to the window order or activation status.
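Assuming AppKit, the pattern I'd expect looks roughly like this (a sketch with placeholder names - ResultView and the fake calculation are just illustrations):

import AppKit

final class ResultView: NSView {
    var result: [Double] = [] {
        didSet { needsDisplay = true }   // only marks the view dirty; doesn't touch window order or activation
    }
    override func draw(_ dirtyRect: NSRect) {
        // ... draw using `result` ...
    }
}

// In whatever kicks off the work (placeholder names):
func startCalculation(for view: ResultView) {
    DispatchQueue.global(qos: .userInitiated).async {
        let computed = (0..<1_000_000).map { sin(Double($0)) }   // stand-in for the real calculation
        DispatchQueue.main.async {
            // Hand the result back on the main queue; no makeKeyAndOrderFront or NSApp.activate here.
            view.result = computed
        }
    }
}

If, instead, that completion path also calls something like makeKeyAndOrderFront or NSApp.activate, that would explain the window coming forward.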
Just thought I'd post a follow up on this, though the root cause is still unresolved.
As Etresoft guessed, I *was* making some improper StoreKit calls. The error(173) documentation and sample code seemed older to me than the more recent StoreKit documentation, which advertises itself as being supported on the Mac - see for example SKReceiptRefreshRequest, which lists macOS 10.9+ in its Availability - https://developer.apple.com/documentation/storekit/skreceiptrefreshrequest - and similarly for many other StoreKit APIs that are actually iOS-only. It's a little confusing to have a framework marked 'available' on an OS where it isn't actually supported.
At any rate, my Mac app (which has no IAP) now does not include any StoreKit calls at all, nor does it link the framework; it only validates the receipt when it's present locally, using openssl calls to decrypt it and the Apple certificate to validate it, and exits with error(173) if the receipt is missing or invalid.
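The presence check itself is just the standard mechanism, something along these lines (a minimal sketch; the openssl-based signature and content validation happens separately once the receipt exists):

import Foundation

// Minimal sketch: if there's no receipt at all, exit with code 173 so the
// App Store / sandbox provisions one. Full validation only runs when it exists.
func ensureReceiptPresent() {
    guard let receiptURL = Bundle.main.appStoreReceiptURL,
          FileManager.default.fileExists(atPath: receiptURL.path) else {
        exit(173)
    }
}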
This still triggers the 'app damaged' error in App Store test review, but not anywhere else. Working with DTS we saw it occur with the app installed via the App Store .pkg, but I have never reproduced it with my own notarized build of the app tested on various machines through simple drag-install to /Applications - there everything works as expected including prompt for Apple ID, download of sandbox receipt, validation of receipt signing and contents, etc.
My DTS ticket is still open; this looks like a bug in the App Store. The only workaround for me is to not validate my receipt (just ensure its presence), but obviously that's not a secure solution if I want to prevent free use of the app on unauthorized machines. According to DTS, the error is being caused by the App Store treating my bundle ID as unregistered, but it's clearly registered (an earlier version of the app has already shipped), so it's being investigated as an App Store / StoreKit issue.
Thanks for the reply OOPer.
OK, since XCTest isn't useful for this, I've approached it by adopting SIMD in some production code and comparing actual performance at runtime - please see the new code and results below. I now see about a 15% improvement in release-build performance, and I'm wondering if that's about what one would expect.
The code here is converting a set of x,y Double values from model coordinates to screen coordinates, so there's a multiply and an add for every x and y. I'm pre-populating the output array with zeros and passing it in, to keep allocation out of the picture:
Regular implementation:
final func xyPointsToPixels(points: [(Double, Double)],
                            output: inout [CGPoint]) {
    if output.count < points.count {
        Swift.print("ERROR: output array not pre-allocated")
        return
    }
    let start = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    for n in 0..<points.count {
        output[n].x = CGFloat(self._opt_xAdd + points[n].0 * self._opt_xfactorDouble)
        output[n].y = CGFloat(self._opt_yAdd + points[n].1 * self._opt_yfactorDouble)
    }
    let end = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    let time = Double(end - start) / 1_000_000_000
    os_log(.info, "=== regular conversion of %d points took %g", points.count, time)
}
SIMD implementation:
final func xyPointsToPixels_simd(points: [simd_double2],
                                 output: inout [CGPoint]) {
    if output.count < points.count {
        Swift.print("ERROR: output array not pre-allocated")
        return
    }
    let start = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    for n in 0..<points.count {
        let xyVec = self._opt_simd_add + points[n] * self._opt_simd_factor
        output[n].x = CGFloat(xyVec.x)
        output[n].y = CGFloat(xyVec.y)
    }
    let end = clock_gettime_nsec_np(CLOCK_UPTIME_RAW)
    let time = Double(end - start) / 1_000_000_000
    os_log(.info, "=== simd conversion of %d points took %g", points.count, time)
}
A debug build run in the debugger with this is just as misleading as the XCTest - and again SIMD is an order of magnitude slower there, so in general it seems to be bad for debugging, though maybe there are some build settings that could improve that.
Changing the build scheme to Release and launching the app normally with the console log level set to info, I was finally able to get more reasonable-looking data. Here the SIMD implementation was slightly faster than the normal implementation, confirming that the slower execution was a debug-build issue. The SIMD average time is around 85% of the regular average - is that about what would be expected? (I was actually hoping for a little better, considering we're only executing one instruction where we were executing two, especially for the multiply.)
My outputs:
info 11:38:27.658463-0800 MathPaint === simd conversion of 4122 points took 4.741e-06
info 11:38:28.303478-0800 MathPaint === simd conversion of 4123 points took 5.876e-06
info 11:38:28.724909-0800 MathPaint === simd conversion of 4122 points took 5.793e-06
info 11:38:31.132216-0800 MathPaint === simd conversion of 4122 points took 7.305e-06
info 11:38:31.675180-0800 MathPaint === simd conversion of 4123 points took 6.942e-06
info 11:38:32.186911-0800 MathPaint === simd conversion of 4123 points took 5.849e-06
info 11:38:34.185091-0800 MathPaint === simd conversion of 4122 points took 5.832e-06
info 11:38:34.603739-0800 MathPaint === simd conversion of 4122 points took 5.425e-06
info 11:38:37.465219-0800 MathPaint === simd conversion of 4123 points took 7.502e-06
info 11:38:38.840133-0800 MathPaint === simd conversion of 4123 points took 8.319e-06
simd average: 6.356e-06
info 11:49:35.332700-0800 MathPaint === regular conversion of 4123 points took 7.058e-06
info 11:49:36.014312-0800 MathPaint === regular conversion of 4122 points took 5.488e-06
info 11:49:38.079446-0800 MathPaint === regular conversion of 4122 points took 7.05e-06
info 11:49:39.658169-0800 MathPaint === regular conversion of 4122 points took 9.533e-06
info 11:49:41.327541-0800 MathPaint === regular conversion of 4122 points took 8.659e-06
info 11:49:42.779920-0800 MathPaint === regular conversion of 4122 points took 8.923e-06
info 11:49:43.286273-0800 MathPaint === regular conversion of 4122 points took 5.422e-06
info 11:49:43.847928-0800 MathPaint === regular conversion of 4122 points took 7.464e-06
info 11:49:49.293082-0800 MathPaint === regular conversion of 4123 points took 8.986e-06
info 11:49:49.793853-0800 MathPaint === regular conversion of 4122 points took 5.573e-06
regular average: 7.516e-06
With more investigation I think I've solved this issue! Hope this helps others who are investigating performance with XCTest and SIMD or other Accelerate technologies...
XCTest performance tests can work great for benchmarking and investigating alternate implementations, even at the micro-performance level, but the trick is to make sure you're not testing code that was built for debug or running under the debugger.
I now have XCTest running the performance tests from my original post and showing meaningful (and actionable) results. On my current machine, the 100000 regular Double calculation block has an average measurement of 0.000328 s, while the simd_double2 test block has an average measurement of 0.000257 s, which is about 78% of the non-SIMD time, very close to the difference I measured in my release build. So now I can reliably measure what performance gains I'll get from SIMD and other Accelerate APIs as I decide whether to adopt.
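For context, the performance tests themselves are just standard XCTest measure blocks, along these lines (a simplified sketch - the transform constants and point count here are placeholders for my real test values):

import XCTest
import simd
import CoreGraphics

final class ConversionPerformanceTests: XCTestCase {

    func testSIMDConversionPerformance() {
        // Placeholder transform constants and point count, standing in for the real values.
        let addVec = simd_double2(10.0, 20.0)
        let factorVec = simd_double2(2.0, -2.0)
        let points = (0..<100_000).map { _ in
            simd_double2(Double.random(in: -1...1), Double.random(in: -1...1))
        }
        var output = [CGPoint](repeating: .zero, count: points.count)

        // Under the Release/optimized configuration described below this reports
        // meaningful averages; under a plain Debug configuration it's wildly misleading.
        measure {
            for n in 0..<points.count {
                let xy = addVec + points[n] * factorVec
                output[n].x = CGFloat(xy.x)
                output[n].y = CGFloat(xy.y)
            }
        }
        XCTAssertEqual(output.count, points.count)
    }
}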
Here's the approach I recommend: Put all of your performance XCTests in separate files from functional tests, so you can have a separate target compile them.
Create a separate Performance Test target in the Xcode project. If you already have a UnitTest target, it's easy just to duplicate it and rename.
Separate your tests between these targets, with the functional tests only in the original Unit Test target, and the performance tests in the Performance Test target.
Create a new Performance Test Scheme associated with the Performance Test Target.
THE IMPORTANT PART: Edit the Performance Test Scheme, Test action, and set its Build Configuration to Release, uncheck Debug Executable, and uncheck everything under Diagnostics. This will make sure that when you run Project->Test, it's Release-optimized code that's getting run for your performance measurements.
There are a couple of additional steps if you want to be able to run performance tests ad hoc from the editor, with your main app set as the current scheme. First you'll need to add the Performance Test target to your main app scheme's Test section.
The problem now is that your main app's scheme has only one Build Configuration setting for the Test action (Debug vs. Release), so assuming it's set to Debug, running the performance tests ad hoc will show the behavior from my OP, with SIMD code in particular orders of magnitude slower.
I do want my main app's test configuration to remain Debug for working with functional unit test code. So to make performance tests work tolerably in this scenario, I edited the build settings of the Performance Test target (only) so that its Debug settings are more like Release - the key setting being Swift Compiler - Code Generation, changing Debug to Optimize for Speed [-O]. While I don't think this will be quite as accurate as running under the Performance Test scheme with the Release configuration and all other debug options disabled, I'm now able to run the performance tests under my main app's scheme and see reasonable results - it again shows SIMD time measurements in the 75-80% range compared to non-SIMD for the test in question.