Hi,
thanks for your answer and the video recommendation.
Yes, I have the same impression that people are using GCD for everything. The sad part is that Apple didn't stop the trend, or at least come up with some videos, maybe something like "GCD in depth", to make devs aware of the pros and cons.
One result of this is that more and more forums have accepted answers like "Never use NSLock, always use queue.sync" for any synchronization, without any explanation.
I've read that os_unfair_lock is faster than pthread_mutex/NSLock, but as you mention it is a little problematic in Swift. In the past I've used pthread_mutex, or std::mutex in C++, and they performed OK, plus they are available across the UNIX-like world.
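For example, this is the kind of lock-based synchronization I mean, a minimal NSLock sketch (the type and names are only illustrative):
import Foundation

// Wrap a value and serialize all access to it with an NSLock.
final class Protected<Value> {
    private let lock = NSLock()
    private var value: Value

    init(_ value: Value) { self.value = value }

    // Run a closure while holding the lock and return its result.
    func withLock<T>(_ body: (inout Value) throws -> T) rethrows -> T {
        lock.lock()
        defer { lock.unlock() }
        return try body(&value)
    }
}

// Usage: let counter = Protected(0); counter.withLock { $0 += 1 }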
Regarding the video, I think it is interesting.
Please correct me if I'm wrong. I think the approach of this queue "tree" is a result of overusing dispatch queues, with too many threads created or blocked. They also emphasize trying to have queues per subsystem and not per class.
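This is how I understood the queue-per-subsystem idea, just a rough sketch with made-up labels:
import Dispatch

// One root queue per subsystem; the other queues in that subsystem target it,
// so the whole subsystem funnels into a single serial queue instead of
// creating a thread per class.
let networkingRoot = DispatchQueue(label: "com.example.networking")
let parsingQueue = DispatchQueue(label: "com.example.networking.parsing",
                                 target: networkingRoot)
let ioQueue = DispatchQueue(label: "com.example.networking.io",
                            target: networkingRoot)

// Work submitted to parsingQueue and ioQueue is serialized on networkingRoot.
parsingQueue.async { /* parse */ }
ioQueue.async { /* read/write */ }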
Have a nice day :)
Hi,
thanks for the suggestion, it looks nice.
But is there a difference between invalidationHandler/interruptionHandler and the remoteObjectProxyWithErrorHandler?
In case of an error I saw they are both called.
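Just so we are talking about the same thing, this is where I set each of them (the protocol/service names are only placeholders):
import Foundation

@objc protocol SomeProtocol {
    func ping()
}

let connection = NSXPCConnection(serviceName: "com.example.service")
connection.remoteObjectInterface = NSXPCInterface(with: SomeProtocol.self)

// Connection-level handlers, set once on the connection itself:
connection.interruptionHandler = { /* ... */ }
connection.invalidationHandler = { /* ... */ }
connection.resume()

// Per-call error handler, attached to the proxy used for a remote call:
let proxy = connection.remoteObjectProxyWithErrorHandler { error in
    // handle the error for this call
} as? SomeProtocol
proxy?.ping()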
Thanks
Hi,
Yes, thanks that is what I need.
Unfortunately I need to support macOS 10.x and up, so I'll have to use SecCodeCreateWithXPCMessage for 11 and up, and the private method for 10.
I was using NSXPCConnection, but so far I didn't see any way to get to the xpc_object_t (except maybe the private method _xpcConnection). So I'll have to rewrite it using the C API, roughly like the sketch below.
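Roughly what I have in mind for the 11-and-up path, assuming the Swift import of SecCodeCreateWithXPCMessage looks like the other SecCode* functions (the helper and names below are just a sketch):
import Security
import XPC

// Called from the C-API connection's event handler with the received message.
func peerCode(from message: xpc_object_t) -> SecCode? {
    var code: SecCode?
    // macOS 11+: derive the sender's code object directly from the XPC message.
    guard SecCodeCreateWithXPCMessage(message, SecCSFlags(), &code) == errSecSuccess else {
        return nil
    }
    return code
}

// Then e.g. SecRequirementCreateWithString + SecCodeCheckValidity on the result.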
Thanks for your help
Yes, it will be on the App Store. That's why I try to avoid private APIs and will switch to the C API instead of NSXPCConnection.
For version 10, I've found xpc_connection_get_audit_token. If that doesn't work, I'll find something else to avoid a possible rejection.
Actually I was wrong, that part is not distributed on the App Store.
Yes, you're right, I was mixing them up, now it is clear.
Thanks.
Thanks for the suggestion, unfortunately it is a little complicated to change to that API, because it would require changing some internals of openvpn.
I did give it a try and it worked with createTCPConnection, but I see in the kernel the same problem for the socket created for NWTCPConnection.
Meaning when the socket for NWTCPConnection is created by the kernel, necp matches it on the "broken" rule and it is bound to the utun interface, but later, when it is in the connected state, it is bound to the en0 interface. So what I think happens, maybe I'm wrong, is that createTCPConnection's socket is created bound to the utun interface because of necp, and then createTCPConnection internally rebinds it to the en0 interface, because a socket created in the NE app cannot be bound to the utun interface.
Below is the matched policy (I'm connecting to a remote server on port 80):
error 15:32:36.187216+0100 kernel necp_socket_find_policy_match_with_info_locked: DATA-TRACE <SOCKET>: EXAMINING - policy id=16174 session_order=2002 policy_order=10806 result=IP_TUNNEL (cond_policy_id 0)
error 15:32:36.187220+0100 kernel necp_socket_check_policy: DATA-TRACE <SOCKET>: ------ matching <NECP_KERNEL_CONDITION_BOUND_INTERFACE> <value (22 / 0x16) (0 / 0x0) (0 / 0x0) input (22 / 0x16) (0 / 0x0) (0 / 0x0)>
error 15:32:36.187223+0100 kernel necp_socket_check_policy: DATA-TRACE <SOCKET>: ------ matching <NECP_KERNEL_CONDITION_APP_ID> <value (66373 / 0x10345) (0 / 0x0) (0 / 0x0) input (66373 / 0x10345) (0 / 0x0) (0 / 0x0)>
error 15:32:36.187227+0100 kernel necp_socket_check_policy: DATA-TRACE <SOCKET>: ------ matching <NECP_KERNEL_CONDITION_PID> <value (45946 / 0xB37A) (0 / 0x0) (0 / 0x0) input (45946 / 0xB37A) (0 / 0x0) (0 / 0x0)>
error 15:32:36.187231+0100 kernel necp_socket_find_policy_match_with_info_locked: DATA-TRACE <SOCKET 0>: MATCHED POLICY - proto 6 port <local 59836/59836 remote 80/80> <drop-all order 11001> <pid=45946 Application 66373 Real Application 0 BoundInterface 22> (policy id=16174 session_order=2002 policy_order=10806 result=IP_TUNNEL)
Thanks again for the suggestion. I would have used it, but we have too many 3rd party libraries, not only openvpn, that work with sockets.
I'm sorry, but I'm not quite sure what you mean.
Today I did some more testing and got some more interesting results.
This is how I've tested the NE app:
The NE app connects to the VPN server (using socket/connect). Then setTunnelNetworkSettings is called. In the callback from setTunnelNetworkSettings, 2 connections are created after 0.5 seconds (to be sure the OS finished setting up everything):
a socket with the C API (socket/connect) to an IP (not the VPN server) and port 80, and
a createTCPConnection, waiting for it to connect to another IP (not the VPN server) and port 80. I'm using addObserver to know when it is connected.
Example code
setTunnelNetworkSettings(networkSettings) { error in
    DispatchQueue.main.async {
        complete_from_start_tunnel(error)
        self.reasserting = false
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            // create socket with C API for 85.120.19.5:80
            // create tcp connection with createTCPConnection(85.120.19.250, 80)
        }
    }
}
At the same time I'm checking the connections with lsof.
In both cases a socket is created in the end, and both sockets use the utun interface.
IPv4 0xb394555908552715 0t0 TCP 192.168.0.163:52596->194.233.50.248:1231 (ESTABLISHED) <--- VPN server
IPv4 0xb394555908506715 0t0 TCP 10.7.0.7:52617->85.120.19.5:80 (ESTABLISHED) <-- this is the IP for C-API (socket/connect)
IPv4 0xb394555908443225 0t0 TCP 10.7.0.7:52618->85.120.19.250:80 (SYN_SENT) <-- this is the IP for createTCPConnection
Both connections work fine, they can send/read data as long as the VPN socket is alive.
If for some reason I need to recreate the VPN socket, nothing works anymore, because any socket created from this point on is using the utun interface (this part is made with the C API and I cannot change that...).
If I add a delay in the setTunnelNetworkSettings completion block, everything works fine and all the sockets are on en0:
setTunnelNetworkSettings(networkSettings) { error in
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
        // ... same code as above, just delayed by 0.5 seconds
    }
}
Thanks
In this case port 80 is just an example, so I can check if the sockets are working or not.
In our production application we do have cases where we need to connect to more servers, and those sockets from the tunnel must not go through the VPN server, they have their own packet encryption.
But my original question is why are the sockets bound to the utun interface from the NE app, no matter what port or IP they are connecting to?
Is there a new limitation on Ventura that an NE app must create only one socket for the entire app lifetime? Even if in some cases I need to recreate the socket to connect to the VPN server, it will not work? This used to work fine in previous OS versions.
And why does everything work fine if I add a delay of 0.5 seconds to the setTunnelNetworkSettings completion?
Thanks
Hi,
Did you manage to find a solution for this problem?
I don't know if it helps, but we had the same issue when includeAllNetworks is used.
We saw that adding a delay in the setTunnelNetworkSettings completion "fixes" the problem with sockets. We are using the C API, socket(). As far as I saw this is not working with NWConnection, but it works with createTCPConnection too. And for us it was reproduced only starting with macOS 13.x.
The idea is something like:
setTunnelNetworkSettings(networkSettings) { error in
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { // <--- without this 0.5 delay it is not working
        completeBlockFromStartTunelImpl(error)
    }
}
This is the link with our bug: https://developer.apple.com/forums/thread/723314
Another issue that we still have is that after a reboot we cannot connect, even with the above delay. So the other "fix" is to delete the VPN profile from the system. After that, everything works fine until the next reboot, when it breaks again :).
Hope it helps.
Bug was fixed in macOS 13.3.
Fixed with macOS 13.3.
Creating a users group with an ID smaller than 500 means it will not be displayed in the System Settings list.
Regarding includeAllNetworks, if we reproduce this on e.g. iOS 14 or 15, what do you recommend, does it make sense to create a ticket for it, or will those versions not get fixed?
DNS leak = DNS query requests that don't go through the tunnel.
From what I've seen, when setTunnelNetworkSettings(_:completionHandler:) is called, from the call point until almost when its completion block is executed, the route to utun is deleted from the system and then recreated. Because of this, requests made in this short window will not be able to go through the tunnel and will mostly escape on e.g. en0. At the same time, mDNS will fire lots of DNS queries on every network configuration change, and some of those requests will manage to go around the tunnel until the route is recreated.
Thanks for the info, very helpful.
Actually we have this case in our app: the keys were created without kSecUseDataProtectionKeychain, and in a later app update kSecUseDataProtectionKeychain was added. In this case, if the keys were created without kSecUseDataProtectionKeychain and the app is then updated, will they use kSecUseDataProtectionKeychain, or must they be deleted and recreated so they are moved to the data protection keychain?
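Just for context, a minimal sketch of the kind of query I mean after the update (the class/tag here are only examples, not our real ones):
import Security

// Look up an existing key, explicitly scoping the query to the
// data protection keychain.
let query: [String: Any] = [
    kSecClass as String: kSecClassKey,
    kSecAttrApplicationTag as String: "com.example.mykey".data(using: .utf8)!,
    kSecUseDataProtectionKeychain as String: true,
    kSecReturnRef as String: true
]

var item: CFTypeRef?
let status = SecItemCopyMatching(query as CFDictionary, &item)
// If status is errSecItemNotFound, does that mean the old (file-based)
// items must be deleted and recreated?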
Regarding the random creation place: for me, I always delete the keys from the keychain using Keychain Access, then I run the app and the keys are recreated (always with kSecUseDataProtectionKeychain). I'll spend some more time to reproduce the issue, maybe from another test app.
Thanks