A lot of apps use undocumented App-prefs URLs to take users to the specific iOS Settings screen needed to set up the app. In iOS 18, it seems these have all stopped working.
Here are the ones I currently use:
App-prefs:MESSAGES - broken in iOS 18
Used for SMS Protection.
App-prefs:Phone - broken in iOS 18
Used for Live Voicemail, Silence Unknown Callers, and SMS Reporting.
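For context, here's roughly how these URLs get opened (the helper is my own sketch, not an Apple API):

import UIKit

// Attempts to open an undocumented App-prefs URL (e.g. "App-prefs:MESSAGES")
func openSettingsPane(_ urlString: String) {
    guard let url = URL(string: urlString) else { return }
    UIApplication.shared.open(url) { success in
        print("open \(urlString) succeeded: \(success)")
    }
}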
A few paths have specific documented replacements. E.g. for Call Blocking & Identification you can use CXCallDirectoryManager.sharedInstance.openSettings(), which still works in iOS 18. But I don't see direct replacements for the others.
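For reference, that documented replacement looks like this:

import CallKit

// Opens the Call Blocking & Identification settings screen;
// still works in iOS 18
CXCallDirectoryManager.sharedInstance.openSettings { error in
    if let error = error {
        print("Failed to open Call Directory settings: \(error)")
    }
}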
Apple probably doesn't consider this a bug but I filed FB14378568 anyway.
I consider this an accessibility issue, because many users who are older, less experienced, or have disabilities have trouble finding the right Settings screen from a textual description alone.
As of a few weeks ago, I'm finding it hard to test my apps in the simulator or on test devices. When I try to sign into iCloud (my apps use CloudKit), or sometimes even make a sandbox StoreKit purchase, I get an "unlock account" alert that refers me to one of my real devices, which then prompts me to change my password. I did that once, but the next day it wanted me to change the password again.
Is this expected, and if so how are developers expected to handle this?
I'm having an issue with a TextKit 2 NSTextView (AppKit). In my subclass's keyDown, I do a bit of manipulation of the textStorage, but it seems that if I alter any part of the textStorage where a text paragraph and accompanying layout fragment have already been created, I get duplicate paragraphs and fragments.
For example, say I want to insert new text at the end of an existing paragraph. I've tried:
Inserting a new attr string (with attrs matching the preceding text) at the end of the existing paragraph text
Replacing the existing paragraph text with a new attr string that includes the new text
First deleting the existing paragraph text and then inserting the new full attr string including the new text
I am wrapping those edits in a textContentStorage.performEditingTransaction closure.
But in all cases, it seems that TextKit 2 creates new NSTextParagraph and NSTextLayoutFragment objects without removing the old ones, resulting in duplicate elements in my UI.
Some sample code:
let editLocation = editRange.location
guard editLocation > 0 else {
    break // inside a switch in keyDown(with:)
}

let attrs = textStorage.attributes(at: editLocation, effectiveRange: nil)

// paragraphStartLocation is in an NSAttributedString extension not shown
guard let paragraphStartLoc = textStorage.paragraphStartLocation(from: editLocation) else {
    assertionFailure(); return
}

// The existing paragraph text, through the edit location
let paragraphRange = NSRange(location: paragraphStartLoc, length: editLocation - paragraphStartLoc + 1)

// Rebuild the paragraph with the new text appended, matching attributes
var fullParagraph = textStorage.attributedSubstring(from: paragraphRange).string
fullParagraph += newText
let newAttrStr = NSAttributedString(string: fullParagraph, attributes: attrs)

// Swap the old paragraph for the new one inside a single transaction
textContentStorage.performEditingTransaction {
    textStorage.deleteCharacters(in: paragraphRange)
    textStorage.insert(newAttrStr, at: paragraphStartLoc)
}
Today someone added me to their team with the Admin role. I accepted, and now I can see their team on iTunes Connect, i.e. it's listed in the top-right popup menu. However, if I go to developer.apple.com (where you manage certs, profiles, etc.) signed in as the same account, that team is not listed in the top-right account popup menu. I've tried signing out and back in, and checked to make sure that the account is all paid up.

Why wouldn't a team that I'm an admin member of show up on developer.apple.com?
I recently refactored a project to break out common code into a Swift Package Manager (SPM) package. The code itself references macros such as DEBUG that are defined by the host application via a Swift Active Compilation Condition.
However from what I can tell, modules outside of the main app are unable to access these macros. E.g. if the SPM package does this:
#if DEBUG
print("debug mode!")
#endif
…the print() will never execute.
Is there any way for an SPM package to access these macro definitions? If not, is there a recommended best practice for including "debug only" code in an SPM package?
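One possibility I've found (a sketch, not confirmed as best practice): have the package define its own DEBUG condition for debug configurations in Package.swift, rather than trying to inherit the app's setting:

// swift-tools-version:5.5
import PackageDescription

// Package and target names are placeholders
let package = Package(
    name: "CommonCode",
    targets: [
        .target(
            name: "CommonCode",
            swiftSettings: [
                // Defines DEBUG whenever the package builds in a
                // debug configuration
                .define("DEBUG", .when(configuration: .debug))
            ]
        )
    ]
)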
Thanks!
My NEDNSProxyProvider subclass basically does this for each incoming UDPFlow:

open()
while moreDataComing {
    read()
    filter() // modify EDNS0
    write()
}
closeWrites()
closeReads()

What I sometimes see, though, is that after the first read/write sequence, iOS appears to close the UDPFlow for me. I see these in the logs:

default 16:12:09.186963 -0800 MyProxy (2552644817): Closing reads, not closed by plugin
default 16:12:09.187134 -0800 MyProxy (2552644817): Closing writes, not sending close

Then later, when I attempt to close the flow explicitly, I get:

default 16:12:09.273912 -0800 MyProxy writeDatagrams finished with error: Optional(Error Domain=NEAppProxyFlowErrorDomain Code=1 "The operation could not be completed because the flow is not connected" UserInfo={NSLocalizedDescription=The operation could not be completed because the flow is not connected})

Sometimes I also see:

error 17:01:55.396243 -0800 MyProxy (3009813469): flow is closed for writes, cannot write 111 bytes of data

I'm not 100% sure whether this actually affects the functioning of the proxy, but I'd like to understand why it's happening. Is there something I'm doing or not doing that's causing iOS (or CFNetwork, or whatever) to close the socket before I can do so explicitly? My class has a strong reference to the NEAppProxyUDPFlow object, so it's not getting deallocated early.
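In real code the loop looks roughly like this (a sketch; filter(_:) is a stand-in for my EDNS0 rewriting, and error handling is elided):

import NetworkExtension

// Placeholder for the EDNS0-modifying filter
func filter(_ datagram: Data) -> Data {
    return datagram
}

func handle(_ flow: NEAppProxyUDPFlow) {
    flow.open(withLocalEndpoint: nil) { error in
        guard error == nil else { return }
        readLoop(flow)
    }
}

func readLoop(_ flow: NEAppProxyUDPFlow) {
    flow.readDatagrams { datagrams, endpoints, error in
        // A nil or empty read means no more data is coming
        guard error == nil,
              let datagrams = datagrams, let endpoints = endpoints,
              !datagrams.isEmpty else {
            flow.closeWrite(withError: nil)
            flow.closeRead(withError: nil)
            return
        }
        flow.writeDatagrams(datagrams.map(filter), sentBy: endpoints) { writeError in
            guard writeError == nil else { return }
            readLoop(flow) // keep reading until the flow is drained
        }
    }
}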
I'm trying to verify receipts with Xcode 12 and a StoreKit config file, but keep getting a 21002 error. According to the docs - https://developer.apple.com/documentation/xcode/setting_up_storekit_testing_in_xcode - I need to use a different cert. I generated the cert, but it's not clear what to do with it. The docs just show this snippet:
#if DEBUG
let certificate = "StoreKitTestCertificate"
#else
let certificate = "AppleIncRootCertificate"
#endif
That's great, but what uses certificate?
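As far as I understand it, certificate is meant to be used by your local, on-device receipt validation code: it picks which root certificate to validate the receipt's PKCS #7 signature against. A sketch, assuming both certs are bundled as .cer resources:

#if DEBUG
let certificateName = "StoreKitTestCertificate"
#else
let certificateName = "AppleIncRootCertificate"
#endif

// Load the chosen root certificate from the app bundle; certData is
// then fed to whatever verifies the receipt signature (e.g. OpenSSL-
// based validation code).
guard let certURL = Bundle.main.url(forResource: certificateName, withExtension: "cer"),
      let certData = try? Data(contentsOf: certURL) else {
    fatalError("Missing \(certificateName).cer in the app bundle")
}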
If I use AVAssetDownloadTask to download an encrypted audio-only HLS stream, is it possible to get access to the raw audio data, for the purposes of integrating with AVAudioEngine, AVAudioPlayerNode, and associated effects? If so, what's the process like for doing that?
Thanks!
I have an iOS music game (-ish) app that requires some basic synchronization between audio playback and other non-video events.
To keep it simple, let's say I have some sound effects in files and I want to show a dot on the screen when a sound effect is played with a high degree of accuracy. Playing through the built-in speaker is fine, but users will often be using AirPods and the added latency is too noticeable.
I don't need this to be super-low latency, I just need to know what the latency is so that I can show the dot exactly when the user hears the sound.
I've mostly been experimenting in the AVAudioEngine/AVPlayerNode space. Those provide a variety of latency/ioBufferDuration properties (see below), but unfortunately they seem pretty inaccurate. Are they supposed to be?
Is there a better way to sync non-video things with audio playback?
func printLatencyInfo() {
		print("audioSession.inputLatency: \(audioSession.inputLatency)")
		print("audioSession.outputLatency: \(audioSession.outputLatency)")
		print("audioSession.ioBufferDuration: \(audioSession.ioBufferDuration)")
		print("engine.mainMixerNode.auAudioUnit.latency: \(engine.mainMixerNode.auAudioUnit.latency)")
		print("engine.inputNode.auAudioUnit.latency: \(engine.inputNode.auAudioUnit.latency)")
		print("engine.outputNode.auAudioUnit.latency: \(engine.outputNode.auAudioUnit.latency)")
		print("engine.mainMixerNode.outputPresentationLatency: \(engine.mainMixerNode.outputPresentationLatency)")
		print("engine.inputNode.outputPresentationLatency: \(engine.inputNode.outputPresentationLatency)")
		print("engine.outputNode.outputPresentationLatency: \(engine.outputNode.outputPresentationLatency)")
		print("engine.inputNode.auAudioUnit.latency: \(engine.inputNode.auAudioUnit.latency)")
		print("engine.outputNode.auAudioUnit.latency: \(engine.outputNode.auAudioUnit.latency)")
		print("playerNode.auAudioUnit.latency: \(playerNode.auAudioUnit.latency)")
}
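For concreteness, here's roughly what I want to be able to do (a sketch; showDot() is a placeholder, and outputLatency is the value whose accuracy I'm asking about):

func playToneAndShowDot(_ toneFile: AVAudioFile) {
    playerNode.scheduleFile(toneFile, at: nil)
    playerNode.play()
    // Delay the visual by the route's reported output latency so the
    // dot appears when the user actually hears the sound
    DispatchQueue.main.asyncAfter(deadline: .now() + audioSession.outputLatency) {
        self.showDot()
    }
}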
The CloudKit Dashboard has been down for more than a day, yet Apple's System Status board - https://developer.apple.com/system-status/ - shows a green circle (available).
Hopefully someone on the CloudKit teams is aware of this, but could you a) update the system status page and b) reboot that server? 😀
Thanks
Our DNSProxyProvider network extension is currently provisioned for customers by a "siteKey" that they include in their MDM config. This works fine: when the app launches, it pulls the siteKey from the providerConfiguration vended by NEDNSProxyManager.

Of course, some of our customers would like the DNS proxy to run without making their users open the app even once. This is proving to be challenging. The proxy itself can't get the providerConfiguration from NEDNSProxyManager because, as we see in the console:

NEDNSProxyManager objects cannot be instantiated from NEProvider processes

On macOS you can force settings into an app via the mobileconfig plist, which are then accessible via the "com.apple.configuration.managed" standard user defaults key. But as far as I can tell, this doesn't work on iOS. (Or maybe it does but the format is different? I can't find any documentation on this for iOS.)

Is there any way to get this bit of information at the proxy level without ever launching the main app? Thanks!
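For reference, here's the working path in the app versus the macOS fallback (a sketch; "siteKey" is our own key name):

import NetworkExtension

// In the main app this works: load preferences, then read the siteKey
// out of the provider configuration.
let manager = NEDNSProxyManager.shared()
manager.loadFromPreferences { error in
    guard error == nil else { return }
    let config = manager.providerProtocol?.providerConfiguration
    print("siteKey: \((config?["siteKey"] as? String) ?? "none")")
}

// On macOS, MDM-pushed settings also show up under this standard
// user defaults key; on iOS this appears to come back nil for us.
let managed = UserDefaults.standard.dictionary(forKey: "com.apple.configuration.managed")
print("managed config: \(String(describing: managed))")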
I'm trying to convert an AVAudioPCMBuffer with a 44100 sample rate to one with a 48000 sample rate, but I always get an exception (-50 error) when converting. Here's the code:

guard let deviceFormat = AVAudioFormat(standardFormatWithSampleRate: 48000.0, channels: 1) else {
    preconditionFailure()
}

// This file is saved as mono 44100
guard let lowToneURL = Bundle.main.url(forResource: "Tone220", withExtension: "wav") else {
    preconditionFailure()
}
guard let audioFile = try? AVAudioFile(forReading: lowToneURL) else {
    preconditionFailure()
}

let tempBuffer = AVAudioPCMBuffer(pcmFormat: audioFile.processingFormat,
                                  frameCapacity: AVAudioFrameCount(audioFile.length))!
tempBuffer.frameLength = tempBuffer.frameCapacity
do { try audioFile.read(into: tempBuffer) } catch {
    assertionFailure("*** Caught: \(error)")
}

guard let converter = AVAudioConverter(from: audioFile.processingFormat, to: deviceFormat) else {
    preconditionFailure()
}
guard let convertedBuffer = AVAudioPCMBuffer(pcmFormat: deviceFormat,
                                             frameCapacity: AVAudioFrameCount(audioFile.length)) else {
    preconditionFailure()
}
convertedBuffer.frameLength = tempBuffer.frameCapacity
do { try converter.convert(to: convertedBuffer, from: tempBuffer) } catch {
    assertionFailure("*** Caught: \(error)")
}

Any ideas?