I'm working on implementing printing for my graphics app. This is an NSDocument-based app written in Swift. I'm setting printInfo fields as follows, on the .printInfo member of the document class, and passing that instance into NSPrintOperation:

self.printInfo.topMargin = 0.0
self.printInfo.rightMargin = 0.0
self.printInfo.leftMargin = 0.0
self.printInfo.bottomMargin = 0.0
self.printInfo.isHorizontallyCentered = true
self.printInfo.isVerticallyCentered = true
self.printInfo.orientation = .landscape
let newPrintOp = NSPrintOperation(view: self.printView!, printInfo: self.printInfo)
newPrintOp.showsPrintPanel = true
return newPrintOp

Now, I know that the *actual* printer page bounds I get from the system will depend on what printer is selected in Page Setup. The buggy behavior I'm seeing occurs when I select the "Any" default system printer. It sends back (and enforces) the following imageablePageBounds:

(18.0, 18.0, 734.0, 576.0)

This is a page with 1/4" margins on the top, left, and bottom, and a 40pt margin on the right! (792 - 734 - 18 = 40). So not only is it not centered as requested, it doesn't even permit a centered image with a 1/2" margin to be printed without clipping.

I think this is a bug. The "Any" printer should give a reasonable 1/4" minimum margin - particularly for printing to PDF.

Note this error does not happen if I select my actual printer in Page Setup, which provides both normal and borderless options. The normal option gives me back a reasonable, centered imageablePageBounds (with a consistent small margin), while the borderless option gives me the expected (0.0, 0.0, 792.0, 612.0) for borderless letter/landscape.

Is this 40pt right margin a known bug in the system default "Any" printer page setup?

Also FWIW I've posted this issue on Stack Overflow with some screenshots of the print panel showing the issue:
https://stackoverflow.com/questions/59691305/cocoa-printing-with-swift-any-printer-setup-has-irregular-margins-page-size

Thanks for any tips or workarounds, or replies if you've seen this as well.

Christopher
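Addendum: for anyone comparing numbers, here's a small pure-Swift sketch of the arithmetic relating the paper size to imageablePageBounds. The helper name and plain-tuple types are my own, purely for illustration - it reproduces the lopsided 40pt right margin from the "Any" printer values above:

```swift
// Given a paper size and imageablePageBounds (x, y, width, height, all in
// points), compute the four margins those bounds imply. Hypothetical helper,
// plain tuples so it can be checked outside AppKit.
func marginsForImageableArea(paperWidth: Double, paperHeight: Double,
                             boundsX: Double, boundsY: Double,
                             boundsWidth: Double, boundsHeight: Double)
    -> (top: Double, left: Double, bottom: Double, right: Double) {
    let left = boundsX
    let bottom = boundsY
    let right = paperWidth - (boundsX + boundsWidth)
    let top = paperHeight - (boundsY + boundsHeight)
    return (top, left, bottom, right)
}

// Landscape letter (792 x 612) with the "Any" printer's reported bounds:
let m = marginsForImageableArea(paperWidth: 792.0, paperHeight: 612.0,
                                boundsX: 18.0, boundsY: 18.0,
                                boundsWidth: 734.0, boundsHeight: 576.0)
// m.left, m.top, m.bottom come out to 18.0 (1/4"), but m.right is 40.0
```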
I'm currently working with an app that was initially created for local development only using a free account, and the company account holder is now setting up the app store record.
Because I don't usually create apps with free accounts I was unaware that Xcode will actually (without any notification) reserve the bundle ID for an app created this way. This is referenced in a few forum threads, e.g. https://developer.apple.com/forums/thread/80294
We encountered this and can't create the bundle ID in the paid account because of it, which is a bummer, but easy to work around by changing the bundle ID.
My question: there isn't anything similar done with the app name, is there? All my experience and the docs I've read say the app name must be explicitly reserved with a paid account, but since I was surprised by this auto-registration of the bundle ID I want to confirm.
Xcode never automatically reserves app names, correct? So if we're getting an "in use" error when creating the app record and the name isn't published, it can't be our free-account project that reserved it - it must be another paid developer account?
Thanks,
Christopher
I'm using GKNoise with a gradient map to generate color noise, getting a CGImage via SKTexture, on macOS.
The GKNoise gradientColors: init parameter takes a dictionary mapping NSNumber values to NSColor instances. I'm setting two colors at the min and max noise values (-1.0 and 1.0), with a GKPerlinNoiseSource.
It works as expected if the two colors are opaque. If one or both colors has an alpha component less than 1.0, I expect the output to have transparency. However, it looks to me like the alpha is completely ignored in GKNoise's gradient color input (treated as if it's always 1.0).
Is there anything I overlooked to make GKNoise/SKTexture support alpha components of gradient input colors, corresponding to transparency in the output CGImage? Or is this 'as designed' or a known bug?
Below is test code that reproduces it - in the view, both CGImages draw identically; I expect the one drawn with the red alpha=0.5 to be darker in the red parts when the background is black, lighter when it's white, etc.
import AppKit
import SpriteKit
import GameplayKit

class GKNoiseGradientIssue {
    var noiseSource: GKNoiseSource
    var color0_opaque = NSColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 1.0)
    var color0_halfAlpha = NSColor(red: 1.0, green: 0.0, blue: 0.0, alpha: 0.5)
    var color1_opaque = NSColor(red: 0.0, green: 0.0, blue: 1.0, alpha: 1.0)
    var opaqueNoise: GKNoise
    var halfAlphaNoise: GKNoise
    var opaqueNoiseMap: GKNoiseMap
    var halfAlphaNoiseMap: GKNoiseMap
    var opaqueImage: CGImage
    var halfAlphaImage: CGImage

    init() {
        let source = GKPerlinNoiseSource(frequency: 0.15,
                                         octaveCount: 7,
                                         persistence: 1.25,
                                         lacunarity: 0.5,
                                         seed: 12345)
        self.noiseSource = source
        let opaqueGradient: [NSNumber: NSColor] = [-1.0: color0_opaque, 1.0: color1_opaque]
        self.opaqueNoise = GKNoise(source, gradientColors: opaqueGradient)
        let halfAlphaGradient: [NSNumber: NSColor] = [-1.0: color0_halfAlpha, 1.0: color1_opaque]
        self.halfAlphaNoise = GKNoise(source, gradientColors: halfAlphaGradient)
        self.opaqueNoiseMap = GKNoiseMap(self.opaqueNoise,
                                         size: [200.0, 200.0],
                                         origin: [0.0, 0.0],
                                         sampleCount: [200, 200],
                                         seamless: false)
        self.halfAlphaNoiseMap = GKNoiseMap(self.halfAlphaNoise,
                                            size: [200.0, 200.0],
                                            origin: [0.0, 0.0],
                                            sampleCount: [200, 200],
                                            seamless: false)
        let opaqueTexture = SKTexture(noiseMap: self.opaqueNoiseMap)
        self.opaqueImage = opaqueTexture.cgImage()
        let halfAlphaTexture = SKTexture(noiseMap: self.halfAlphaNoiseMap)
        self.halfAlphaImage = halfAlphaTexture.cgImage()
    }
}

class GradientIssueView: NSView {
    var issue: GKNoiseGradientIssue?

    override func awakeFromNib() {
        self.issue = GKNoiseGradientIssue()
    }

    override func draw(_ dirtyRect: NSRect) {
        NSColor.black.setFill()
        self.bounds.fill()
        if let cgc = NSGraphicsContext.current?.cgContext {
            cgc.draw(self.issue!.opaqueImage,
                     in: CGRect(origin: CGPoint(x: 10.0, y: 10.0),
                                size: CGSize(width: 200.0, height: 200.0)))
            cgc.draw(self.issue!.halfAlphaImage,
                     in: CGRect(origin: CGPoint(x: 10.0, y: 220.0),
                                size: CGSize(width: 200.0, height: 200.0)))
        }
    }
}
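For reference, this is the behavior I'd expect from the gradient mapping, sketched as a plain function (my own stand-in, not GameplayKit's actual implementation): linear interpolation of all four RGBA components between the two stops, so a 0.5 alpha at one stop should show up in the output.

```swift
// Expected gradient mapping: map the noise value from [-1, 1] to [0, 1],
// then linearly interpolate r, g, b, AND a between the two stops.
// Plain tuples so the expectation can be checked outside GameplayKit.
func expectedGradientColor(noise: Double,
                           c0: (r: Double, g: Double, b: Double, a: Double),
                           c1: (r: Double, g: Double, b: Double, a: Double))
    -> (r: Double, g: Double, b: Double, a: Double) {
    let t = (noise + 1.0) / 2.0
    func lerp(_ a: Double, _ b: Double) -> Double { a + (b - a) * t }
    return (lerp(c0.r, c1.r), lerp(c0.g, c1.g), lerp(c0.b, c1.b), lerp(c0.a, c1.a))
}
```

With color0_halfAlpha and color1_opaque from the code above, a noise value of 0.0 should produce alpha 0.75 - but the actual SKTexture output behaves as if alpha were always 1.0.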
There are similar questions in the forums, but none of the solutions there are working for me, and I'm not seeing new solutions offered in those threads.
I was on Xcode 11.5 and my iPhones updated to 13.6.1, at which point they stopped working for debugging - error alert "(Device-name) not available. Please reconnect the device" from Xcode on run attempt, and "The current device configuration is unsupported" error in the Devices tab of Devices and Simulators.
I've updated Xcode to 11.7 and have tried every combination you can imagine of:
cleaning the project
deleting derived data
disconnecting/reconnecting device (I'm connected by USB cable)
restarting Xcode, the phone, and my Mac
running an iOS 13.6 simulator build (saw a stackoverflow post that said doing this before running on the device seemed to help)
None of this has solved the problem so I'm totally blocked now.
Does anyone know if this is a known issue & can you recommend a fix?
Also one more point: some other threads talk about making changes in Settings/Developer on the phone but I'm actually no longer seeing Developer in Settings at all.
Thanks for help!
I can't tell if I am misunderstanding the expected behavior of WKNavigationDelegate or if something is misconfigured or broken.
My goal is to load the initial web content in my WKWebView, but then intercept link-clicks within that content and (at least sometimes) open those in Safari, as described in these forum posts:
WKWebView - Open external link websites in Safari
https://developer.apple.com/forums/thread/125641
Open WKWebView Links in Safari
https://developer.apple.com/forums/thread/68427?answerId=199512022#199512022
Here's my simplified delegate method code. Because it gets called on the initial request load, it checks against a copy of the original URL and if it matches calls decisionHandler(.allow).
(I'd hoped to be able to use .navigationType instead of checking the URL, but it comes in as .other on first load, which seems too vague to always allow).
If it's any other URL, I call decisionHandler(.cancel) and attempt to open with UIApplication open().
func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
    Swift.print(#function)
    if let navActionURL = navigationAction.request.url {
        if navActionURL == self.originalURL {
            decisionHandler(.allow)
        } else {
            decisionHandler(.cancel)
            UIApplication.shared.open(navActionURL, options: [:], completionHandler: nil)
        }
    } else {
        decisionHandler(.allow) // decisionHandler must be called on every code path
    }
}
This doesn't work, however, because the delegate method is only called once, on the WKWebView's initial load() - on that call the URLs match and .allow is returned. The delegate method is not called at all for the click on the link. (FWIW the link is a relative href in the source HTML; it goes to a different page, not just a hashtag on the same page.)
Can anyone clarify how this is supposed to work? It makes no sense to me to have the delegate control the initial load() - if I didn't want to show the content in the WebView, I wouldn't load the URLRequest.
It makes sense for me to have the delegate callback invoked for subsequent navigation, but that isn't happening.
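For what it's worth, here's the decision policy I'm trying to implement, factored into a pure function. The enums are local stand-ins for WebKit's types, purely for illustration: link clicks should be cancelled and handed to Safari, while .other (which is what the initial load arrives as) is allowed only when the URL matches the original.

```swift
// Local stand-ins for WKNavigationType / WKNavigationActionPolicy.
enum NavType { case linkActivated, other }
enum Policy { case allow, cancel }

// The policy I want the delegate to apply: the initial load (which arrives
// as .other) is allowed only for the original URL; link clicks are
// cancelled so they can be opened in Safari instead.
func navigationPolicy(for navType: NavType, url: String, originalURL: String) -> Policy {
    switch navType {
    case .linkActivated:
        return .cancel
    case .other:
        return url == originalURL ? .allow : .cancel
    }
}
```

One thing I still need to rule out (an assumption, not something I've confirmed for this content): if a link targets a new frame or window (e.g. target="_blank"), WebKit routes it through the WKUIDelegate createWebViewWith callback rather than decidePolicyFor, which would explain the delegate never firing for clicks.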
I'm releasing a Mac app through the app store. It's a one-time purchase app; no in-app purchases, no subscriptions.
I'm doing local validation of the receipt on application launch.
The Receipt Validation Programming Guide - https://developer.apple.com/library/archive/releasenotes/General/ValidateAppStoreReceipt/Chapters/ValidateLocally.html#//apple_ref/doc/uid/TP40010573-CH1-SW3 says "When an application is installed from the App Store, it contains an application receipt..."
I've done validation in an iOS app before which has in-app purchases, so there, in addition to checking the receipt on launch, I would send an SKReceiptRefreshRequest to get additional in-app purchase data etc. - and retry the refresh request periodically if needed.
Is it correct that there's no need for me to send a refresh request in this simpler case, with no subscription or in-app purchase? I.e., can I always just rely on the receipt in the bundle, or should one still make some refresh-receipt attempts before assuming an invalid receipt in this basic case?
Pretty sure this is just a bug, and looking at the forums it seems like GKNoise/GKNoiseMap has a number of known issues and not much support...
I have a perlin noise source and map it to a GKNoiseMap with some arbitrary square model coordinate system, size 20.0 x 20.0, origin (-10.0, -10.0). I can map this to a square area to render with the sample size matching my output and it all draws fine.
Then I want to shift the noise incrementally, say 10% up in the Y+ axis direction. So I change the origin by adding 2.0 to Y and leave X and size the same, and a make a new GKNoiseMap with the same (or identical) noise source and these values.
The result is that instead of shifting the slice, it stretches it. The noise is compressed or warped rather than moved. It's weird.
I was able to get slightly better results trying to use GKNoise move() instead, but now it isn't working at all! Not to mention that GKNoise is documented as being an 'infinite' noise object, so what does move() or scale() even mean there? I would think the GKNoiseMap coordinate bounds would be the definitive way to do this - to use the same noise data but shift your window or 'slice' around.
I am really close to scrapping my use of GKNoise API completely, but maybe if there's a developer who worked on this API you can share the secret to its behavior.
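For reference, this is the windowing model I'm assuming (a plain-function sketch with my own names, not GameplayKit API): the map is a fixed-size slice into the infinite noise, so shifting the slice by a fraction of its size should change only the origin, never the spacing of the samples.

```swift
// A noise-map "window" is defined by an origin and a size in model
// coordinates; shifting by a fraction of the window moves the origin only.
func shiftedOrigin(origin: (x: Double, y: Double),
                   size: (w: Double, h: Double),
                   fractionX: Double, fractionY: Double) -> (x: Double, y: Double) {
    return (origin.x + size.w * fractionX, origin.y + size.h * fractionY)
}

// The 10% upward shift from the post: origin (-10, -10), size 20 x 20.
let newOrigin = shiftedOrigin(origin: (x: -10.0, y: -10.0),
                              size: (w: 20.0, h: 20.0),
                              fractionX: 0.0, fractionY: 0.1)
// newOrigin is (-10.0, -8.0): same window size, same sampleCount, only the
// origin changes - yet GKNoiseMap stretches the noise instead of shifting it.
```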
I have an NSDocument-based Mac app built for 10.14+, written in Swift.
Just updated my (intel) OS to Big Sur to test, and while the app builds and generally functions fine the document window toolbar is totally messed up when I run the app.
In particular: only one toolbar item (Save) shows up on the toolbar in icon+text mode, the rest are all shoved into the menu, regardless of window size
same is true if I switch to Icon only mode
when I right-click on the toolbar to customize it in the app, everything is haywire - tons of blank space between titles and icons, some icons as you scroll down are huge, and the default set looks like it only contains one button
If I turn off icons and show only text toolbar items, they all show up properly on the toolbar.
In Xcode (12.2), the toolbar in the document .xib looks totally fine - all buttons and icons in the right place, default set looks great, etc.
There's nothing very custom about my toolbar - it's just using standard toolbar buttons with icons set from image assets, and a few separators between items. Default is visible, regular size, Icon+Text. All works perfectly under 10.14 and 10.15.
Anyone else seen this bizarre toolbar behavior under Big Sur? This is my app's last release-blocking issue!
I'm getting an error building my Mac app for both Apple Silicon and Intel, related to a Swift package dependency.
The dependency is a pure Swift package (SwiftyXMLParser), and I'm bringing it into my app with a package dependency from the git repository URL.
My app is configured to build standard architectures to run natively on both Apple Silicon and Intel.
The build fails because the package manager appears to provide only the active architecture, not a universal library, even when a universal build is configured. The failure happens when I set "Build Active Architecture Only" to NO (if it's set to YES the build succeeds, but then the app won't run on the other architecture).
In particular, if I build on my M1 mini, the error is
Could not find module 'SwiftyXMLParser' for target 'x86_64-apple-macos'; found: arm64, arm64-apple-macos
If I build on my x86 MBP, same failure but the error is opposite (no arm64, found: x86_64).
So clearly SPM+Xcode can provide either architecture - is it possible for it to provide a universal lib? Or do I need to download the source of the package and build it through a more manual process?
I've got an iOS app with a non-consumable add-on. In sandbox testing, the purchase attempt *always* shows an extra Buy prompt, and then fails if the sandbox account is new. Then it always succeeds if I just try again. Is this normal?
Details: Test begins with a brand new Sandbox tester account, in USA region and verified via email invitation. Device iTunes & App Store SANDBOX ACCOUNT is logged in with this new account. App is completely deleted from device, then build/deployed with Xcode.
Note that launching the app initially always prompts to enter the password if this is the first time I've used this sandbox test account. This prompt doesn't say anything about sandbox, but it uses the sandbox email as Apple ID and sandbox login works.
Receipt retrieval and local validation works in the app, as does initial retrieval of the add-on product information with SKProductsRequest
Requesting purchase of add-on with SKPaymentQueue.add() correctly puts up the "Confirm Your In-App Purchase" modal, showing [Environment: Sandbox].
A paymentQueue(_:updatedTransactions:) invocation happens, with transactionState .purchasing.
After entering the correct password and clicking Buy, there's a brief pause, then a second purchase confirmation modal appears exactly the same way, again with [Environment: Sandbox] and again requiring the password.
When I enter the password and click Buy again, the transaction fails. I get a .failed transaction paymentQueue update and the failure alert comes up saying try again later.
At this point, however, all I have to do is try again - immediately, or after quitting the app and restarting - and everything works perfectly and the purchase succeeds.
The success in the second case makes me think that the code is correct, and the problem with the first attempt is just some known sandbox or configuration issue. The double prompt seems especially weird since there's no transaction update between them. Do others see this behavior? Is there any way to make a sandbox purchase attempt succeed on the first attempt in this kind of environment?
Xcode 12.2, running on Intel 11.0.1; device running iOS 13.7.
This is mainly a Mac app issue, though it's perhaps related to a similar issue with iOS app receipt validation on App Store review testing.
I'm stuck in App Store rejections because the reviewers see a 'damaged app' error when trying to launch my app in some of their test environments. I cannot reproduce it despite numerous attempts.
Has anyone else been stuck this way (esp. recently) and if so what did you do to resolve?
My app runs perfectly if I compile it without the App Store code and notarize it myself, on a variety of systems. It's universal (Apple Silicon and Intel), tested on all of the above with both admin and standard accounts, so that makes me think the issue is only in the StoreKit / receipt-validation code.
The app does local receipt validation in a very standard way, requesting the receipt if it's absent. I've tested this with the exact app submitted (and rejected) to app store review, exported from the Xcode organizer, and in my testing the app successfully refreshes or validates the receipt. At worst (e.g. if offline) it prompts to log in with the apple ID. I'm using a sandbox App Store account to test.
I've never seen a 'damaged app' error, even when I manually delete the receipt. The worst I've seen is a notarization error if I copy the app to a different machine (the App Store archived build is not notarized; the docs state that the App Store will notarize it).
Since there's no Mac TestFlight, I can't download the actual binary that App Store review is testing, which presumably has modifications (at least notarization). And their error reports are very vague - just a screenshot of the error and generic testing advice.
I can't be the only one who has been in this situation, and although I've used a DTS support ticket I don't know how soon they'll get back to me. Any helpful knowledge is appreciated!
I have an app in Swift that does a lot of numerical processing on ordered pairs and vectors, so I'm looking into some ways to improve performance including adopting SIMD from the Accelerate framework for some calculations, but I'm not seeing performance improve.
There are some related posts on the forums that seem inconclusive and also a bit more complex. I pared my testing down to a couple of brief XCTests with self.measure blocks on repeated add and multiply operations of two double values. The tests set random initial values to ensure there's no compiler optimization of loop calculations based on constants. There's also no big collection of fixture data, so there's no chance allocations or vector index dereference or similar issues could be involved.
The regular multiple-instruction code runs an order of magnitude faster than the SIMD code! I don't understand why this is - would the SIMD code be faster in C++, could it be some Swift conversion? Or is there some aspect of my SIMD code that is incurring some known penalty? Curious if anyone out there is using SIMD in Swift in production and if you see anything in my test code that explains the difference.
func testPerformance_double() {
    var xL = Double.random(in: 0.0...1.0)
    var yL = Double.random(in: 0.0...1.0)
    let xR = Double.random(in: 0.0...1.0)
    let yR = Double.random(in: 0.0...1.0)
    let increment = Double.random(in: 0.0...0.1)
    Swift.print("xL: \(xL), xR: \(xR), increment: \(increment)")
    var result: Double = 0.0
    self.measure {
        for _ in 0..<100000 {
            result = xL + xR
            result = yL + yR
            result = xL * xR
            result = yL * yR
            xL += increment
            yL += increment
        }
    }
    Swift.print("last result: \(result)") // read from result
}

func testPerformance_simd() {
    var vL = simd_double2(Double.random(in: 0.0...1.0), Double.random(in: 0.0...1.0))
    let vR = simd_double2(Double.random(in: 0.0...1.0), Double.random(in: 0.0...1.0))
    let increment = Double.random(in: 0.0...0.1)
    let vIncrement = simd_double2(increment, increment)
    var result = simd_double2(0.0, 0.0)
    Swift.print("vL.x: \(vL.x), vL.y: \(vL.y), increment: \(increment)")
    self.measure {
        for _ in 0..<100000 {
            result = vL + vR
            result = vL * vR
            vL = vL + vIncrement
        }
    }
    Swift.print("last result: \(String(describing: result))")
}
The measurements show the block with SIMD operations taking an order of magnitude more time than the multiple operations!
...testPerformance_double measured [Time, seconds] average: 0.049, relative standard deviation: 3.059%, values: [0.049262, 0.049617, 0.048499, 0.047859, 0.048270, 0.048564, 0.047529, 0.052578, 0.047267, 0.047432], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
...testPerformance_simd measured [Time, seconds] average: 0.579, relative standard deviation: 5.932%, values: [0.626196, 0.605790, 0.635180, 0.611197, 0.553179, 0.548163, 0.552648, 0.549264, 0.552745, 0.551465], performanceMetricID:com.apple.XCTPerformanceMetric_WallClockTime, baselineName: "", baselineAverage: , maxPercentRegression: 10.000%, maxPercentRelativeStandardDeviation: 10.000%, maxRegression: 0.100, maxStandardDeviation: 0.100
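One variable I should flag (an assumption on my part, not confirmed): the measure blocks run in the test configuration, which is typically an unoptimized debug build, and the simd operator wrappers may simply not be inlined there. As a platform-independent sanity check that the two loop bodies compute the same values, here's the comparison with a local stand-in for simd_double2:

```swift
// Minimal stand-in for simd_double2 so this runs anywhere; the real simd
// type only compiles down to vector instructions in optimized (-O) builds.
struct Double2: Equatable {
    var x: Double
    var y: Double
    static func + (l: Double2, r: Double2) -> Double2 { Double2(x: l.x + r.x, y: l.y + r.y) }
    static func * (l: Double2, r: Double2) -> Double2 { Double2(x: l.x * r.x, y: l.y * r.y) }
}

// Scalar form of the loop body from testPerformance_double (multiplies only).
func scalarLoop(xL: Double, yL: Double, xR: Double, yR: Double, inc: Double, n: Int) -> (Double, Double) {
    var xL = xL, yL = yL
    var rx = 0.0, ry = 0.0
    for _ in 0..<n {
        rx = xL * xR
        ry = yL * yR
        xL += inc
        yL += inc
    }
    return (rx, ry)
}

// Vector form of the loop body from testPerformance_simd.
func vectorLoop(vL: Double2, vR: Double2, inc: Double2, n: Int) -> Double2 {
    var vL = vL
    var result = Double2(x: 0.0, y: 0.0)
    for _ in 0..<n {
        result = vL * vR
        vL = vL + inc
    }
    return result
}
```

The two forms produce bit-identical results, so any timing difference comes from code generation rather than semantics - which is why I'd like to know whether the debug-build penalty fully accounts for the 10x gap.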
My team is working on the arm64/universal conversion of a large application with lots of third-party dependencies, but while that work is in progress we are trying to support customers running the app under Rosetta on M1 macs.
To that end it would be helpful if we could actually debug the x86_64 build running under Rosetta on M1, but we see lots of instability when we try this. The app will not even launch if the thread sanitizer setting is turned off. Sometimes the app still fails to launch and/or the debugger fails to attach. It will sometimes show the error message "LLDB provided no error string". No breakpoints are hit during attempted debugging.
Note the app is a plugin host, and we sometimes see a main queue crash on ImageLoader::recursiveInitialization when we attempt to debug.
Is debugging a non-trivial x86_64 app under Rosetta on an M1 machine supported by Xcode? Are there any special configuration steps to get it to work? (We can debug a trivial Hello World x86_64 app fine this way, so it seems to be related to more complex app architecture.)
Also FWIW we tried launching Xcode itself under Rosetta on the M1 machine as well to see if that would resolve any debugging conflicts, but it didn't correct these issues.
Thanks for any guidance!
I'm working on migrating some image-drawing code away from NSImage lockFocus() to a bitmap CGContext. This is intended to compose image content for export to formats that support alpha like PNG and TIFF, with precise control over raster resolution etc. (not solely or primarily for window/device content).
I'm trying to create a generic 32-bit RGBA color space for the bitmap, which I thought would be straightforward, but Core Graphics rejects the CGImageAlphaInfo.last info value for a generic RGB color space (it only allows none, or premultiplied options). There is no generic "RGBA" color space constant/name. Is there a way to do this?
Attempt:
if let colorspace = CGColorSpace(name: CGColorSpace.genericRGBLinear) {
    if let cgc = CGContext(data: nil,
                           width: Int(pixelWidth),
                           height: Int(pixelHeight),
                           bitsPerComponent: 8,
                           bytesPerRow: 0,
                           space: colorspace,
                           bitmapInfo: CGImageAlphaInfo.last.rawValue) {
        // use cgc...
    }
}
Error logged by Core Graphics:
CGBitmapContextCreate: unsupported parameter combination:
8 bits/component; integer;
32 bits/pixel;
RGB color space model; kCGImageAlphaLast;
default byte order;
320 bytes/row.
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
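Based on the table in the error output, my working assumption is that the closest supported combination is premultiplied alpha (Core Graphics stores non-opaque bitmap data premultiplied). Here's a minimal sketch of the change - the same parameters with premultipliedLast instead of .last:

```swift
import Foundation
#if canImport(CoreGraphics)
import CoreGraphics

// Sketch: same generic linear RGB space and 8-bit components, but with
// premultiplied alpha, which is in the supported list above. Alpha still
// reaches exported PNG/TIFF; only the in-memory buffer is premultiplied.
func makeRGBAContext(pixelWidth: Int, pixelHeight: Int) -> CGContext? {
    guard let colorspace = CGColorSpace(name: CGColorSpace.genericRGBLinear) else { return nil }
    return CGContext(data: nil,
                     width: pixelWidth,
                     height: pixelHeight,
                     bitsPerComponent: 8,
                     bytesPerRow: 0,
                     space: colorspace,
                     bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
}
#endif
```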
TIA,
Christopher
Is anyone able to view the Platforms SOTU 2021 video?
In both the Developer app and online it just shows a static image. (The Keynote video works fine.) In the Developer app it says Monday 2 - 3:30 PM, and it's now 4:34 PM... is it not available for replay?
Also FWIW, even though it hasn't shown the video, the Developer app actually now says "Rewatch SOTU" under the link.
Online, link with no video:
https://developer.apple.com/videos/play/wwdc2021/102/