On the Apple developer site, in the news section, you can find two announcements about the renewal of the USERTrust RSA Certification Authority certificate.
Context:
Now, I have an app delivered via in-house distribution through the Apple Developer Enterprise Program. My app uses push notifications, and we authenticate with APNs using auth tokens.
Should I do something in the app?
Should I advise my backend colleagues to check or do something server-side?
Below you can find the two announcements:
Sandbox link:
APNs Certificate Update Begins January 20, 2025
The Apple Push Notification service (APNs) will be updated with a new server certificate in sandbox on January 20, 2025. Update your application’s Trust Store to include the new server certificate: SHA-2 Root: USERTrust RSA Certification Authority certificate.
and
Production link:
APNs Certificate Update Begins February 24, 2025
The Apple Push Notification service (APNs) will be updated with a new server certificate in production on February 24, 2025. Update your application’s Trust Store to include the new server certificate: SHA-2 Root: USERTrust RSA Certification Authority certificate.
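For my own understanding: my assumption is that this only matters if the app or our provider server pins the APNs certificate chain, rather than relying on the platform trust store. As a rough sketch (purely illustrative: a hypothetical Swift client that pins the APNs root, with a made-up bundled file name), "updating the Trust Store to include the new root" would look something like this:

import Foundation
import Security

// Hypothetical sketch: only relevant if the APNs connection pins server
// certificates. The bundled file name "USERTrustRSA.der" is illustrative.
final class PinnedAPNsDelegate: NSObject, URLSessionDelegate {
    private let pinnedRoot: SecCertificate? = {
        guard let url = Bundle.main.url(forResource: "USERTrustRSA", withExtension: "der"),
              let data = try? Data(contentsOf: url) else { return nil }
        return SecCertificateCreateWithData(nil, data as CFData)
    }()

    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let root = pinnedRoot else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Evaluate the APNs server chain against the pinned root only.
        let status = SecTrustSetAnchorCertificates(trust, [root] as CFArray)
        guard status == errSecSuccess, SecTrustEvaluateWithError(trust, nil) else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        completionHandler(.useCredential, URLCredential(trust: trust))
    }
}

If neither the app nor the backend pins certificates and both rely on the operating system's trust store, my assumption is that no change is needed as long as the USERTrust RSA root is already trusted there, but I'd like that confirmed.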
My macOS app is developed using SwiftUI, SwiftData, and CloudKit. In the development environment, CloudKit works well. Locally added models can be quickly viewed in the CloudKit Console. The macOS app and the iOS app with the same bundle ID can also synchronize data normally when developing locally. However, in the production environment, the macOS app cannot synchronize data with iCloud, but the iOS app can. Models added in the production environment are only saved locally and cannot be viewed in the CloudKit Console Production environment.
I am sure I have configured everything correctly and have deployed the container schema changes to the Production environment. I think there may be a problem with CloudKit on macOS.
Please help troubleshoot the problem. I can provide you with any information you need.
var body: some Scene {
    WindowGroup {
        MainView()
            .frame(minWidth: 640, minHeight: 480)
            .environment(mainViewModel)
    }
    .modelContainer(for: [NoteRecord.self])
}
I didn't do anything special; I just used SwiftData backed by CloudKit.
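For reference, a variant I'm considering (a sketch only; the container identifier below is illustrative, not my real one) is to make the CloudKit database explicit instead of relying on the automatic mapping, to help rule out a container/entitlement mismatch in the macOS target:

import SwiftData

// Sketch: build a container pinned to an explicit CloudKit private database.
// The container identifier is illustrative; in the real app this would be
// done once, e.g. in the App initializer.
func makeExplicitCloudKitContainer() throws -> ModelContainer {
    let schema = Schema([NoteRecord.self])
    let configuration = ModelConfiguration(
        schema: schema,
        cloudKitDatabase: .private("iCloud.com.example.MyApp")
    )
    return try ModelContainer(for: schema, configurations: [configuration])
}

// Then attach it in the Scene:
// WindowGroup { MainView() }.modelContainer(try! makeExplicitCloudKitContainer())

Would that be expected to behave any differently from .modelContainer(for: [NoteRecord.self]) in production?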
We want to resolve DNS for predefined sets of private app domains.
We've added this rule:
NENetworkRule(destinationHost: NWHostEndpoint(hostname: "example.com" /* private domain 1 */, port: "53"), protocol: .UDP)
As per Apple documentation: "A rule that matches all DNS queries/responses for hosts in the example.com domain."
Do you think it will work, i.e., will it forward the UDP flows for DNS requests to the transparent proxy provider in all cases?
Or do you think the text is a bit misleading and it should instead say: "A rule that matches all DNS queries/responses for nameservers in the example.com domain"?
This rule, which looks for port 53 on that domain, only works if the system really asks a nameserver in that specific domain, right?
So, what if a local DNS server or a different nameserver is taking care of the resolution?
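To make the question concrete, this is roughly how we attach the rules in the provider, plus a catch-all port-53 rule we're considering for testing which interpretation is correct (addresses and rules below are placeholders, sketch only):

import NetworkExtension

// Sketch of the provider-side settings (placeholders throughout).
// Rule 1 is the documented "DNS for hosts in example.com" form quoted above.
// Rule 2 matches any outbound UDP flow to port 53 (IPv4 wildcard shown; an
// analogous "::" rule would be needed for IPv6), regardless of which resolver
// the system actually asks. Comparing the two in practice should show whether
// local or third-party resolvers are covered by rule 1.
let domainDNSRule = NENetworkRule(
    destinationHost: NWHostEndpoint(hostname: "example.com", port: "53"),
    protocol: .UDP
)
let anyDNSRule = NENetworkRule(
    remoteNetwork: NWHostEndpoint(hostname: "0.0.0.0", port: "53"),
    remotePrefix: 0,
    localNetwork: nil,
    localPrefix: 0,
    protocol: .UDP,
    direction: .outbound
)

let settings = NETransparentProxyNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
settings.includedNetworkRules = [domainDNSRule, anyDNSRule]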
Our product (rockhawk.ca) uses the Multipeer Connectivity framework for peer-to-peer communication between multiple iOS/iPadOS devices. My understanding is that the MC framework communicates via three methods: 1) infrastructure wifi (i.e. multiple iOS/iPadOS devices are connected to the same wifi network), 2) peer-to-peer wifi, or 3) Bluetooth. In my experience, I don't believe I've seen MC use Bluetooth. With wifi turned off on the devices, and Bluetooth turned on, no connection is established. With wifi on and Bluetooth off, MC works and I presume either infrastructure wifi (if available) or peer-to-peer wifi is used.
I'm trying to overcome two issues:
1. Over time (since iOS 9.x), the radio transmit strength for MC over peer-to-peer wifi has decreased to the point that range is unacceptable for our use case. We need at least 150 feet range.
2. We would like to extend this support to watchOS, and the MC framework is not available there.
Regarding #1, I'd like to confirm that if infrastructure wifi is available, MC uses it. If infrastructure wifi is not available, MC uses peer-to-peer wifi. If this is true, then we can assure our customers that if infrastructure wifi is available at the venue, then with all devices connected to it, range will be adequate.
If infrastructure wifi is not available at the venue, perhaps a mobile wifi router (battery operated) could be set up, devices connected to it, then range would be adequate. We are about to test this. Reasonable?
Can we be assured that if infrastructure wifi is available, MC uses it?
Regarding #2, given we are targeting minimum watchOS 7.0, would the available networking APIs and frameworks be adequate to implement our own equivalent of the MC framework so our app on iOS/iPadOS and watchOS devices could communicate? How much work? Where would I start? I'm new to implementing networking but experienced in using the MC framework. I'm assuming that I would write the networking code to use infrastructure wifi to achieve acceptable range.
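For what it's worth, here is the kind of starting point I'm imagining on the iOS/iPadOS side (a sketch only; the Bonjour service type is made up, and I still need to verify how much of the Network framework is actually usable on watchOS 7, where low-level networking is more restricted):

import Network

// Sketch: advertise and discover a Bonjour service over whatever network the
// devices share (infrastructure Wi-Fi in our case). The service type
// "_rockhawk._tcp" is illustrative.
final class InfrastructureLink {
    private var listener: NWListener?
    private var browser: NWBrowser?
    private var connection: NWConnection?

    // One device advertises a TCP service.
    func startAdvertising() throws {
        let listener = try NWListener(using: .tcp)
        listener.service = NWListener.Service(name: "MyDevice", type: "_rockhawk._tcp")
        listener.newConnectionHandler = { [weak self] newConnection in
            self?.connection = newConnection
            newConnection.start(queue: .main)
        }
        listener.start(queue: .main)
        self.listener = listener
    }

    // Peers browse for the service and connect to the first result found.
    func startBrowsing() {
        let browser = NWBrowser(for: .bonjour(type: "_rockhawk._tcp", domain: nil), using: .tcp)
        browser.browseResultsChangedHandler = { [weak self] results, _ in
            guard let result = results.first else { return }
            let connection = NWConnection(to: result.endpoint, using: .tcp)
            connection.start(queue: .main)
            self?.connection = connection
        }
        browser.start(queue: .main)
        self.browser = browser
    }
}

Is that the right general direction, and would it carry over to watchOS?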
Many thanks!
Tim
I'm using iPadOS 17.5.1. When I try to use a socket to connect to an IPv6 address created by a PacketTunnelProvider on my iOS device, an error occurs. Here is the code to create the socket server and client:
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
int dx_create_ipv6_server(const char *ipv6_address, int port) {
    int server_fd;
    struct sockaddr_in6 server_addr;

    server_fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (server_fd == -1) {
        perror("socket() failed");
        return -1;
    }

    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin6_family = AF_INET6;
    server_addr.sin6_port = htons(port);
    if (inet_pton(AF_INET6, ipv6_address, &server_addr.sin6_addr) <= 0) {
        perror("inet_pton() failed");
        close(server_fd);
        return -1;
    }

    if (bind(server_fd, (struct sockaddr *)&server_addr, sizeof(server_addr)) == -1) {
        perror("bind() failed");
        close(server_fd);
        return -1;
    }

    if (listen(server_fd, 5) == -1) {
        perror("listen() failed");
        close(server_fd);
        return -1;
    }

    printf("Server is listening on [%s]:%d\n", ipv6_address, port);
    return server_fd;
}
int dx_accept_client_connection(int server_fd) {
    int client_fd;
    struct sockaddr_in6 client_addr;
    socklen_t client_addr_len = sizeof(client_addr);

    client_fd = accept(server_fd, (struct sockaddr *)&client_addr, &client_addr_len);
    if (client_fd == -1) {
        perror("accept() failed");
        return -1;
    }

    char client_ip[INET6_ADDRSTRLEN];
    inet_ntop(AF_INET6, &client_addr.sin6_addr, client_ip, sizeof(client_ip));
    printf("Client connected: [%s]\n", client_ip);
    return client_fd;
}
int dx_connect_to_ipv6_server(const char *ipv6_address, int port) {
    int client_fd;
    struct sockaddr_in6 server_addr;

    client_fd = socket(AF_INET6, SOCK_STREAM, 0);
    if (client_fd == -1) {
        perror("socket() failed");
        return -1;
    }

    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin6_family = AF_INET6;
    server_addr.sin6_port = htons(port);
    if (inet_pton(AF_INET6, ipv6_address, &server_addr.sin6_addr) <= 0) {
        perror("inet_pton() failed");
        close(client_fd);
        return -1;
    }

    if (connect(client_fd, (struct sockaddr *)&server_addr, sizeof(server_addr)) == -1) {
        perror("connect() failed");
        close(client_fd);
        return -1;
    }

    printf("Connected to server [%s]:%d\n", ipv6_address, port);
    close(client_fd);
    return 0;
}
@implementation SocketTest

+ (void)startSever:(NSString *)addr port:(int)port {
    [[NSOperationQueue new] addOperationWithBlock:^{
        int server_fd = dx_create_ipv6_server(addr.UTF8String, port);
        if (server_fd == -1) {
            return;
        }
        int client_fd = dx_accept_client_connection(server_fd);
        if (client_fd == -1) {
            close(server_fd);
            return;
        }
        close(client_fd);
        close(server_fd);
    }];
}

+ (void)clientConnect:(NSString *)addr port:(int)port {
    [[NSOperationQueue new] addOperationWithBlock:^{
        dx_connect_to_ipv6_server(addr.UTF8String, port);
    }];
}

@end
PacketTunnelProvider code:
override func startTunnel(options: [String : NSObject]?, completionHandler: @escaping (Error?) -> Void) {
    let settings = NEPacketTunnelNetworkSettings(tunnelRemoteAddress: "fd84:306d:fc4e::1")
    let ipv6 = NEIPv6Settings(addresses: ["fd84:306d:fc4e::1"], networkPrefixLengths: [64])
    settings.ipv6Settings = ipv6
    setTunnelNetworkSettings(settings) { error in
        if error == nil {
            self.readPackets()
        }
        completionHandler(error)
    }
}
private func readPackets() {
    // Read packets from the tunnel and write them straight back.
    packetFlow.readPackets { [self] packets, protocols in
        self.packetFlow.writePackets(packets, withProtocols: protocols)
        self.readPackets()
    }
}
In the main target, in the view controller's viewDidAppear, after starting the VPN, I executed the following code:
[SocketTest startSever:@"fd84:306d:fc4e::1" port:12345];
sleep(3);
[SocketTest clientConnect:@"fd84:306d:fc4e::1" port:12345];
The startSever method executes correctly, but when executing:
connect(client_fd, (struct sockaddr *)&server_addr, sizeof(server_addr))
in clientConnect, the code is blocked until it times out and returns -1.
**Even if I use GCDAsyncSocket or BlueSocket, I get the same error. The strange thing is that if I use an IPv4 address in the PacketTunnelProvider, change the above code to the IPv4 version and connect to the IPv4 address, or use GCDAsyncSocket to perform the corresponding operation, it executes correctly.**
I tried searching Google for problems with iOS-related IPv6 addresses, but I still couldn't find a solution. Is this a bug in iOS, or is there something wrong with my code? I hope to get your help!
Stack Overflow URL: iOS Socket cannot connect ipv6 address when use PacketTunnelProvider
I have a dedicated render thread with a run loop that has a CADisplayLink added to it (that's the only input source attached). The render thread has this loop in it:
while (_continueRunLoop)
{
    [runLoop runMode:NSDefaultRunLoopMode beforeDate:[NSDate distantFuture]];
}
I have some code to stop the render thread that sets _continueRunLoop to false in a block, and then does a pthread_join on the render thread:
[_renderThreadRunLoop performBlock:^{
    self->_continueRunLoop = NO;
}];
pthread_join(_renderThread, NULL);
I have noticed recently (iOS 18?) that if the Display Link is paused or invalidated before trying to stop the loop then the pthread_join blocks forever and the render thread is still sitting in the runMode:beforeDate: method. If the display link is still active then it does exit the loop, but only after one more turn of the display link callback.
The most likely explanation I can think of is that there has been a behaviour change to performBlock: - I believe this used to "consume" a turn of the run loop and exit the runMode:beforeDate: call, but now the block runs without leaving that function.
I can't find specific mention in the docs of the expected behaviour for performBlock - just that other RunLoop input sources cause the run method to exit, and timer sources do not. Is it possible that the behaviour has changed here?
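One workaround I'm experimenting with (a sketch in Swift with illustrative names, not a confirmed fix) is to wake the run loop explicitly after queueing the block, since CFRunLoopPerformBlock only enqueues the block and does not itself wake the target loop:

import Foundation

// Sketch of a possible workaround; names are stand-ins for the ivars in the
// Objective-C code above.
func stopRenderThread(_ thread: pthread_t,
                      runLoop: RunLoop,
                      setStopFlag: @escaping () -> Void) {
    let cfLoop = runLoop.getCFRunLoop()
    CFRunLoopPerformBlock(cfLoop, CFRunLoopMode.defaultMode.rawValue) {
        setStopFlag()   // e.g. _continueRunLoop = NO
    }
    // CFRunLoopPerformBlock does not wake the loop by itself; wake it so
    // runMode:beforeDate: can return even when the display link (the only
    // input source) is paused or invalidated.
    CFRunLoopWakeUp(cfLoop)
    pthread_join(thread, nil)
}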
(Also have a case ID, 9879068)
We have an app that users use to check in/out from work, for example. We have a button in-app to do this. Now I'm trying to add buttons to our widgets and our new live activity so that users don't have to open the app.
It's crucial that the live activity and widgets always show the exact same state.
Otherwise it'll look pretty bad if a user has both a live activity and a widget showing at the same time.
However, we have noticed that sometimes pressing the button in the live activity (running the app intent) will not make the widget update (we call reloadAllTimelines()). The other way around, i.e. pressing the button on the widget to update the live activity, always works (they both call the same app intent).
When running it in debug mode on a phone from Xcode, it always works, but when running it just on the phone it's unreliable.
My first thought was, of course, that it's related to the widget "budget", but according to the docs, the budget should not apply when interacting with a widget via an app intent.
My question: HOW can I make my widget reliably refresh using an app intent invoked from a live activity?
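For reference, our intent is roughly shaped like this (a simplified sketch; CheckInStore is an illustrative stand-in for our App Group-backed state):

import AppIntents
import WidgetKit

// Illustrative stand-in for the shared, App Group-backed check-in state.
struct CheckInStore {
    static let shared = CheckInStore()
    func toggle() async throws { /* write the new state to the shared container */ }
}

// The same intent is invoked from the widget button and the live activity button.
struct CheckInOutIntent: AppIntent {
    static var title: LocalizedStringResource = "Check in or out"

    func perform() async throws -> some IntentResult {
        // Persist the new state before returning, so a reloaded widget
        // timeline sees it.
        try await CheckInStore.shared.toggle()
        WidgetCenter.shared.reloadAllTimelines()
        return .result()
    }
}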
I have a ready small project with simple buttons and trace labels that display this issue that I'm happy to supply to someone.
A few user-space applications are available on the market, for example xnvme, but they don't have any interaction with the Admin Submission/Completion queues.
Also, IOCTLs are not very prominent. Is there any way to get access to the native NVMe Mac driver source code?
Thanks, hopefully we will get some positive response here.
I'm testing an auto-renewable subscription, specifically using Xcode testing (not sandbox). It seems that subscriptions are automatically cancelled after some time. I haven't found any documentation on how long this takes, so does anyone know?
I'm working on an app that uses EventKit to access calendar events. For users with external calendars like Google Calendar, they can sync these by adding the account through iOS Calendar settings. Once added, the events appear in my app as expected.
However, if a user adds a new event in Google Calendar, there’s often a delay before it appears in my app, since the iOS Calendar doesn't sync with external sources like Google in real time.
Currently, users can manually trigger a sync by opening the Apple Calendar app and using the pull-to-refresh feature under the "Calendars" tab. This works reliably but isn’t an ideal solution.
I tried using the EventKit method refreshSourcesIfNecessary() to minimize the delay, as it claims to "[Pull] new data from remote sources, if necessary" (link to docs). I trigger this method when the app returns to the foreground. But, I'm not seeing the expected results. Here’s a typical sequence:
1. Open my app and send it to the background.
2. Add an event in Google Calendar.
3. Return to my app.
Despite invoking refreshSourcesIfNecessary(), the new event doesn’t appear in Apple Calendar (or accordingly in my app), until some random delay (30 seconds to several minutes). In contrast, the Apple Calendar app’s pull-to-refresh fetches the event immediately, every time.
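For completeness, the trigger itself is minimal (sketch; the observer wiring and type name are illustrative):

import EventKit
import UIKit

// Sketch: ask EventKit to pull new data from remote sources (Google, etc.)
// whenever the app returns to the foreground.
final class CalendarRefresher {
    private let store = EKEventStore()
    private var observer: NSObjectProtocol?

    func startObserving() {
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { [weak self] _ in
            // "Pulls new data from remote sources, if necessary."
            self?.store.refreshSourcesIfNecessary()
        }
    }
}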
Am I misinterpreting how refreshSourcesIfNecessary() is intended to work? Or is there another way to achieve a faster sync with external calendars?
I have two DockKit anomalies to report. Hoping a DTS person has seen these and/or can comment.
First, my setup: I am controlling the accessory by making repeated calls to set the angular velocity. And the first thing I do is make a call
dockManager.setSystemTrackingEnabled(false)
because I'm doing my own tracking.
I would note that I tried calling track() on my own, with a bunch of observation rectangles (or even just one) but it didn't work well, even though I was calling at the correct rate. Instead, I measure the angular deviation to where I wish my camera was pointed, and set the angular velocity proportional to the error.
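(In case it clarifies the setup, the control step is essentially the sketch below, with an illustrative gain and clamp; the actual DockKit velocity call is left out.)

// Proportional control sketch: command an angular velocity proportional to
// the error between where the camera points and where I want it to point,
// clamped to a safe maximum. Gain and limit values are illustrative.
func commandedYawVelocity(errorRadians: Double,
                          gain: Double = 1.5,
                          maxRadiansPerSecond: Double = 2.0) -> Double {
    let velocity = gain * errorRadians
    return min(max(velocity, -maxRadiansPerSecond), maxRadiansPerSecond)
}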
First issue: in normal operation, the green tracking light is on, on the hardware (the Instaflow Pro 360 motorized dock). Squeezing the trigger toggles the green light on/off; only when the light is on will the dock accept my calls to set the angular velocity. Fine.
But sometimes squeezing the trigger won't reactivate the green light. In this case, the ONLY thing that seems to work is switching to the Instaflow Pro 360 app, and activating the camera. Immediately the green light turns on, and I'm good (and can return to my own app, with the green light still on).
So what hidden API call does Instaflow have, that I don't, that can make this happen? Sure, it's their own app, but I imagine they don't have access to calls I don't have, so how does their app manage to get the green light back on?
It doesn't always happen. Would love to know how to snap out of this.
Second issue: While I usually use rectangles from running the Vision system to guide my camera position, sometimes I let the user control the angular "yaw" velocity (rotation around the vertical axis) directly (by issuing commands over the network).
Sometimes, when the user sets a non-zero velocity and then sets a zero velocity a short time later, the camera doesn't immediately respond and stop. (It's not a network issue. I can verify the API sends a call to set the angular velocity to zero, and the camera keeps rotating for a good fraction of a second.) Most times the camera stops immediately, but sometimes it doesn't.
Oddly, I never see this issue when letting the user set the angular velocity in the "pitch up/down" axis. Just the yaw axis.
Anybody else seen this? I feel like it wasn't a problem till I got to iOS18 but I won't swear to it.
Any advice/assistance/discussion greatly appreciated.
I'm trying to develop software that can connect an iPhone to an HMI box in order to use CarPlay and test a CarPlay app.
Since the starting point is to ask the device, i.e. the iPhone, whether it supports CarPlay, I have to write a USB vendor-specific request that the accessory sends to query this capability.
I would like to know the specific parameters to include in the control transfer request from accessory to device, especially bRequest, wValue, and wIndex. I've studied the whole Accessory Interface Specification CarPlay Addendum, but I couldn't find anything.
Thanks in advance for your support.
We are experiencing an infrequent issue with the handoff between our Siri intent, our iOS app, and our CarPlay extension. Siri correctly understands the request, and the
handler(for intent: INIntent)
method is called. In the final step, we respond using:
INStartCallIntentResponse(code: .continueInApp, userActivity: userActivity)
with an instance of NSUserActivity initialized as:
NSUserActivity(activityType: "our.unique.StartCallIntent")
This "our.unique.StartCallIntent" type is included in the app’s NSUserActivityTypes attribute within the Info.plist.
The callback is handled in the main view of the app through:
view.onContinueUserActivity("our.unique.StartCallIntent", perform: handleSiriIntent)
Additionally, we handle the callback in the CarPlay extension using:
func scene(_: UIScene, continue userActivity: NSUserActivity)
This is necessary because when Siri is invoked while CarPlay is active, the CarPlay extension should receive the callback.
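For completeness, the handler path is roughly this (sketch; the userInfo payload is illustrative):

import Intents

// Sketch of the INStartCallIntent handling described above.
final class StartCallIntentHandler: NSObject, INStartCallIntentHandling {
    func handle(intent: INStartCallIntent,
                completion: @escaping (INStartCallIntentResponse) -> Void) {
        let activity = NSUserActivity(activityType: "our.unique.StartCallIntent")
        activity.userInfo = ["handle": intent.contacts?.first?.personHandle?.value ?? ""]
        completion(INStartCallIntentResponse(code: .continueInApp, userActivity: activity))
    }
}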
Most of the time, both callbacks are triggered as expected. However, on rare occasions, the handoff fails, and neither onContinueUserActivity nor scene(_: UIScene, continue userActivity:) receives a callback from the Siri intent.
Is this a known issue? If so, are there any guidelines or best practices for ensuring that our Siri intent handoff consistently triggers the callbacks?
I am really hoping somebody can help. I'm in the process of having our app relaunched with CarPlay and a few other features. However, after nearly 4 weeks I've still not had confirmation of CarPlay being accepted. I've submitted several times without any response. When I've contacted Apple Support I simply get a generic reply (see below):
Hello Gareth,
CarPlay apps are editorially selected, you will be contacted if your app is selected to proceed.
If you have already submitted your request to have your app support CarPlay, there is no actions needed. Estimates and status updates are not available.
Please let us know if you have any questions or need further assistance.
WWDC videos suggest that existing apps should continue using the old SiriKit domains, such as INPlayMediaIntent. But what about new apps for playing audio? Should we implement Siri functionality for audio playback using the old SiriKit domains, or should we create our own AppEntities and trigger them via custom AudioPlaybackIntent implementations?
Interactive widgets require an AppIntent and don’t support the old INPlayMediaIntent. To achieve the same functionality as the Music app widgets, it seems logical to adopt the new AudioPlaybackIntent. However, I can't find any information about this in the documentation.
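To make the question concrete, the AppIntents route would look roughly like this (sketch; AudioPlayer is an illustrative stand-in for the app's player):

import AppIntents

// Illustrative stand-in for the app's audio playback engine.
final class AudioPlayer {
    static let shared = AudioPlayer()
    func play() { /* start AVPlayer / MusicKit playback */ }
}

// My understanding is that AudioPlaybackIntent is the AppIntents protocol
// meant for playback actions, and that it can back an interactive widget
// button without foregrounding the app.
struct PlayAudioIntent: AudioPlaybackIntent {
    static var title: LocalizedStringResource = "Play"

    func perform() async throws -> some IntentResult {
        AudioPlayer.shared.play()
        return .result()
    }
}

Is this the recommended direction for a new audio app, or should we still adopt INPlayMediaIntent for Siri support?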
Hello, I'm currently configuring Universal Links and I'm getting error SWCERR00201 from Apple CDN.
$ curl -I -v https://app-site-association.cdn-apple.com/a/v1/pamestoixima.gr
...
< Apple-Failure-Details: {"location":"http://www.pamestoixima.gr/.well-known/apple-app-site-association/"}
Apple-Failure-Details: {"location":"http://www.pamestoixima.gr/.well-known/apple-app-site-association/"}
< Apple-Failure-Reason: SWCERR00201 Insecure (non-https) redirects forbidden
Apple-Failure-Reason: SWCERR00201 Insecure (non-https) redirects forbidden
< Apple-From: https://pamestoixima.gr/.well-known/apple-app-site-association
Apple-From: https://pamestoixima.gr/.well-known/apple-app-site-association
...
I cannot understand why it mentions http, as the AASA is hosted at pamestoixima.gr, which uses https, not http. I can get it by accessing https://www.pamestoixima.gr/.well-known/apple-app-site-association/.
I would greatly appreciate any help on this.
Thank you
So I have a small homebuilt device that has a simple Arduino-like chip with wifi capabilities (to be precise, the Xiao Seeed ESP32C, for anyone who cares), and I need my iOS app to talk to this device.
Using the CoreBluetooth framework, we've had no problems --- except that in "noisy" environments sometimes we have disconnects. So we want to try wifi.
We assume that there is no public wifi network available. We'd love to do peer-to-peer networking using Network, but that's only if both devices are from Apple. They're not.
Now, the Xiao device can act as an access point, and presumably I could put my iPhone on that network and use regular TCP calls to talk to it. The problem is that my app wants to both talk to this home-built device and ALSO make HTTP calls to my server on Amazon.
So: how do I let my iOS app talk over wifi to this simple chip, while not losing the ability to also have my app reach a general server (and receive push notifications, etc.)?
To be more concrete, imagine that my app needs to be able to discover the access point provided by my device and use low-level TCP socket calls to talk to this local wifi device, all without losing the ability to also make general HTTP calls and be just as accessible to push notifications as it was before connecting to this purely local (and very short range, i.e. no more than 30 meters distant) device.
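To be even more concrete, this is the rough shape of what I'm imagining (a sketch; the SSID, passphrase, address, and port are placeholders, and I haven't yet verified how well other traffic keeps flowing while joined to an access point that has no upstream internet):

import NetworkExtension
import Network

// Sketch: programmatically join the accessory's access point, then open a
// TCP connection to it over Wi-Fi, leaving other traffic (HTTPS to our
// server, push notifications) to whatever route the system picks.
func joinAccessoryNetwork() {
    let config = NEHotspotConfiguration(ssid: "XIAO-SETUP", passphrase: "secret", isWEP: false)
    config.joinOnce = true
    NEHotspotConfigurationManager.shared.apply(config) { error in
        if let error = error {
            print("Failed to join accessory network: \(error)")
        }
    }
}

func connectToAccessory() {
    let params = NWParameters.tcp
    params.requiredInterfaceType = .wifi   // keep this flow on Wi-Fi
    // 192.168.4.1 is a typical ESP32 soft-AP address; placeholder here.
    let connection = NWConnection(host: "192.168.4.1", port: 80, using: params)
    connection.stateUpdateHandler = { state in print("Accessory connection state: \(state)") }
    connection.start(queue: .main)
}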
Does this make sense? Have I explained it well enough?
Hello!
I am wondering about the status of nested Hyper-V support for VMs.
This is specifically regarding this issue with Parallels Desktop, which claims the issue is on Apple's side:
Parallels Article: https://kb.parallels.com/en/116239
Within the article, there is a link to a previous Apple discussion post for this issue that is no longer accessible (at least I cannot access it): https://discussions.apple.com/thread/255546412
Is this something that will be fixed and supported soon?
Thank you!
(If this should be posted somewhere else please just let me know where!)
Whenever I'm working on my content filter for macOS, I usually keep SIP disabled and with developer mode on (systemextensionsctl) as a convenience.
The issue: the content filter stopped receiving any kind of traffic when SIP is disabled. I don't see any log lines in Console for new flows, and the filter can't block anything since it doesn't get any flows. The issue started yesterday.
I tried several things and did some investigation, here are some findings:
Reboot: rebooting did not fix the issue (while keeping SIP disabled).
Reenabling SIP fixes the issue for both App Store and Xcode builds.
Code: latest published version also stopped working with SIP disabled. This version is stable and confirmed to work as reported by users.
Clean Xcode + rebuild did not fix the issue.
Lastly, I inspected the logs and did not see any errors standing out. I noticed the filter does get started (startFilter is called) and registered, but after that there are no errors/new flows or anything, just silence (logs below).
com.apple.networkextension default 15:22:22.270746-0300 : Calling startFilterWithCompletionHandler
com.extension.MyExtension info 15:22:22.270998-0300 Success applying filter settings
com.apple.networkextension debug 15:22:22.272705-0300 NESMFilterSession[My Extension:B9F3F30E-E0E0-4E53-8B32-EFC285E3CF6A]: Checking providerBundleIdentifier com.extension.MyExtension for pluginClass 4
com.apple.networkextension debug 15:22:22.272717-0300 Checking for com.extension.MyExtension - com.apple.networkextension.filter-data
com.apple.networkextension default 15:22:22.272728-0300 Found 1 registrations for com.extension.MyExtension (com.apple.networkextension.filter-data)
com.apple.networkextension debug 15:22:22.272778-0300 NESMFilterSession[My Extension:B9F3F30E-E0E0-4E53-8B32-EFC285E3CF6A]: com.extension.MyExtension is registered for pluginClass 4
Here is some additional info about my system:
macOS 15.1
Between yesterday and today, the only new installation is XProtectPlistConfigData at 12:10 AM.
Thanks!
Hello.
In an AppleScript block like:
tell application "Microsoft Excel"
where can I find complete help for all the Excel commands?