I am in the process of transitioning from using Network Kernel Extensions to Network Extensions for socket-level filtering, the 10.15.4 API additions having removed the showstoppers for our app (verdict changes on open flows and data-volume accounting). However, I'm seeing some other limitations while testing and comparing against the existing KEXT-based solution.

Namely, no FaceTime traffic (AV streams from identityservicesd) ever seems to be seen, likely due to the specific networking setup FaceTime uses. The NEFilterDataProvider is set up to grab all traffic, all directions, all protocols, and sees all other TCP and UDP networking just fine. According to FB7665551, not seeing FaceTime traffic is as designed. This is disappointing compared to KEXTs, which saw that traffic like any other and enabled accounting of the data volume used by FaceTime. From my limited testing, it seems that a transparent proxy does not catch that traffic either. Note that nettop and other similar tools do account for that traffic.

Has anyone encountered similar limitations?
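For reference, the provider is configured to see everything with settings along these lines. This is a minimal sketch of the configuration, not the exact code from our extension; the single all-traffic rule with a `.filterData` action is what I mean by "all traffic / all directions / all protocols":

```swift
import NetworkExtension

class FilterDataProvider: NEFilterDataProvider {
    override func startFilter(completionHandler: @escaping (Error?) -> Void) {
        // Match every flow: any remote network, any protocol, both directions.
        let allTraffic = NENetworkRule(remoteNetwork: nil,
                                       remotePrefix: 0,
                                       localNetwork: nil,
                                       localPrefix: 0,
                                       protocol: .any,
                                       direction: .any)
        // Ask the system to deliver matching flows to handleNewFlow(_:).
        let rule = NEFilterRule(networkRule: allTraffic, action: .filterData)
        // Anything not matched by a rule is allowed through unfiltered.
        let settings = NEFilterSettings(rules: [rule], defaultAction: .allow)
        apply(settings) { error in
            completionHandler(error)
        }
    }
}
```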
Hi,

I am currently working on moving the Network Kernel Extensions-based socket filtering in our application to the newly available NetworkExtension-based filtering in Catalina. There are a few points where I would like some more details, especially on how some of our NKE-implemented requirements can be translated to NetworkExtension, more precisely:

- Ability to monitor traffic volume in (near) real time
- Ability to disconnect / block a stream that is currently transferring data, i.e. one that was previously allowed to start

Both of those enable our app to act as a kill switch when users are tethering and about to go over their data limit during a big file download, for example.

This steered me towards using the new additions to Content Filter Providers with NEFilterDataProvider, but seeing some issues during testing I want to be sure this is the correct choice, or whether I'm misusing the API and so would need to evaluate other options like a transparent proxy.

In order to get the data count as the flow progresses, I'm returning NEFilterNewFlowVerdict.filterDataVerdict() from the NEFilterDataProvider subclass's handleNewFlow(), and then NEFilterDataVerdict(passBytes:peekBytes:) until the complete() methods are called. This seems to fulfil the requirements, as I can use readBytes.count to do the accounting and eventually return a .block() verdict in case I have to stop the flow mid-transfer. But:

1. The readBytes object seems to contain an unspecified header, apparently a size followed by something similar to the auditToken, in the first call when offset is 0. Its size seems to have changed during the beta cycle and is currently 136 bytes.
Are there any constants and/or definitions so that I can skip over the header or subtract its size? The developer documentation at https://developer.apple.com/documentation/networkextension/nefilterdataverdict/1619005-init?changes=latest_minor states that 'the system does not pass the bytes to the destination until a "final" (i.e. allow/drop/remediate) verdict is returned'. This does not seem to be the case on macOS, at least in some test cases such as the one in the code below: using two nc processes to establish and keep open a TCP stream on the local machine, one can observe that bytes are transferred immediately from one to the other, even with the (passBytes:, peekBytes:) verdict. The handler methods do appear to be called only once a sufficient number of bytes have accumulated, though (experimentally 136 in this case, the same as the initial header size).

2. Is this use case of returning a filterData verdict for all packets until the end of the stream an intended use of the API (it fits our requirements well), or am I misusing a mechanism that should be reserved only for the first packets, until a final decision is made? In the latter case, that would compromise our app's ability to stop an open stream, and we would need some other API for data accounting, if available.

3. During the above usage, networking performance takes a heavy hit in the beta on high-speed links (above 100 Mbps), with a very large amount of time spent in NetworkExtension, CPU usage spiking to 100% and above for the filter extension, and 30-second pauses in traffic. Is this a known beta issue that should be solved once release / optimized builds arrive, or is it due to my specific usage?
Our current NKE solution has negligible measured impact in the same conditions, as it does not need copying to user space. Partial call stack for reference; all of this runs in NetworkExtension in the filter app extension:

```
30.16 s  100.0%  70.00 ms  -[NEFilterDataExtensionProviderContext handleSocketSourceEventWithSocket:]
26.04 s   86.3%  11.00 ms  -[NEFilterDataSavedMessageHandler enqueueWithFlow:]
25.93 s   85.9%  16.00 ms  -[NEFilterDataSavedMessageHandler executeWithFlow:]
18.30 s   60.6%   6.00 ms  __74-[NEFilterDataExtensionProviderContext handleSocketSourceEventWithSocket:]_block_invoke_2.520
18.13 s   60.0%  12.00 ms  __74-[NEFilterDataExtensionProviderContext handleSocketSourceEventWithSocket:]_block_invoke.516
17.83 s   59.1%  28.00 ms  __74-[NEFilterDataExtensionProviderContext handleSocketSourceEventWithSocket:]_block_invoke.515
17.75 s   58.8%  35.00 ms  -[NEFilterDataExtensionProviderContext socketContentFilterWriteMessageWithControlSocket:socketID:drop:inboundPassOffset:inboundPeekOffset:outboundPassOffset:outboundPeekOffset:]
17.71 s   58.7%  17.71 s   write
```

4. Is there some more documentation about the lifecycle of NEFilterFlow objects? I tried using their hash value to key a dictionary of flow decisions (allow and continue to monitor = filterData verdict, or a block verdict if not) based on our app- and process-level rules, to avoid re-evaluating the rules for each packet in an open flow, more or less the way I've been using the cookie in KEXT socket filters. But it seems that NEFilterFlow objects are somehow reused for other flows of unrelated processes, which defeats the hash-based keying.
Note that this is not a showstopping issue, as it can apparently be mitigated by adding the auditToken to the hash.

I'm most concerned about 2 & 3, as those determine the rest of the implementation, so it would be great if I could have some hints on that matter.

Thanks

```swift
/*
 Use two Terminal windows with
     'nc [IP on local network] -l 1234' for listening
     'nc [IP on local network] 1234' for sending, then type text and press return.
 Note that lo0 traffic does not seem to appear, hence the usage of local network addresses.
 */

override func handleNewFlow(_ flow: NEFilterFlow) -> NEFilterNewFlowVerdict {
    if let f = flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        NSLog("\(path) New flow \(f.direction) \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
    // Can't use NEFilterFlowBytesMax below (UInt64 vs. Int required)
    return NEFilterNewFlowVerdict.filterDataVerdict(withFilterInbound: true,
                                                    peekInboundBytes: Int.max,
                                                    filterOutbound: true,
                                                    peekOutboundBytes: Int.max)
}

override func handleOutboundData(from flow: NEFilterFlow,
                                 readBytesStartOffset offset: Int,
                                 readBytes: Data) -> NEFilterDataVerdict {
    if let f = flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        let headerSize = offset == 0 ? 136 : 0
        NSLog("\(path) \(f.direction) out:\(readBytes.count - headerSize) offset:\(offset) \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
    return NEFilterDataVerdict(passBytes: readBytes.count, peekBytes: Int.max)
}

override func handleInboundData(from flow: NEFilterFlow,
                                readBytesStartOffset offset: Int,
                                readBytes: Data) -> NEFilterDataVerdict {
    if let f = flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        let headerSize = offset == 0 ? 136 : 0
        NSLog("\(path) \(f.direction) in:\(readBytes.count - headerSize) offset:\(offset) \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
    return NEFilterDataVerdict(passBytes: readBytes.count, peekBytes: Int.max)
}

override func handleOutboundDataComplete(for flow: NEFilterFlow) -> NEFilterDataVerdict {
    if let f = flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        NSLog("\(path) \(f.direction) Out complete \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
    let verdict = NEFilterDataVerdict.allow()
    verdict.shouldReport = true
    return verdict
}

override func handleInboundDataComplete(for flow: NEFilterFlow) -> NEFilterDataVerdict {
    if let f = flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        NSLog("\(path) \(f.direction) In complete \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
    let verdict = NEFilterDataVerdict.allow()
    verdict.shouldReport = true
    return verdict
}

override func handle(_ report: NEFilterReport) {
    if report.event == .flowClosed,
       let f = report.flow as? NEFilterSocketFlow,
       let l = f.localEndpoint as? NWHostEndpoint,
       let r = f.remoteEndpoint as? NWHostEndpoint,
       let path = Pid.processPathForPid(f.pid), path.contains("/usr/bin/nc") {
        NSLog("\(path) \(f.direction) Flow closed \(l.hostname):\(l.port)<->\(r.hostname):\(r.port)")
    }
}

// An extension has been defined on NEFilterFlow to get the pid from the audit token.
```
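For what it's worth, the bookkeeping side of this (skipping the undocumented header at offset 0, keying flows by audit token rather than hash, and deciding when to cut a flow off at a data limit) can be isolated from the NetworkExtension callbacks and sketched as plain Swift. `FlowKey`, `FlowAccountant`, and the 136-byte header size are my own names and observations, not anything guaranteed by the API:

```swift
import Foundation

// Observed (undocumented) header size at offset 0; changed during the beta
// cycle, so this is an empirical value that may change again.
let observedHeaderSize = 136

// Key combining the audit token with the endpoints, since NEFilterFlow
// hash values appear to be reused across flows of unrelated processes.
struct FlowKey: Hashable {
    let auditToken: Data
    let local: String
    let remote: String
}

final class FlowAccountant {
    private var bytesPerFlow: [FlowKey: Int] = [:]
    let dataLimit: Int

    init(dataLimit: Int) {
        self.dataLimit = dataLimit
    }

    // Payload size of one readBytes chunk, skipping the header seen at offset 0.
    func payloadCount(readBytes count: Int, offset: Int) -> Int {
        let header = offset == 0 ? min(observedHeaderSize, count) : 0
        return count - header
    }

    // Accumulates a chunk for the flow and reports whether the flow has
    // now exceeded the data limit and should be blocked.
    func record(_ key: FlowKey, readBytes count: Int, offset: Int) -> Bool {
        let total = (bytesPerFlow[key] ?? 0) + payloadCount(readBytes: count, offset: offset)
        bytesPerFlow[key] = total
        return total > dataLimit
    }
}
```

In handleOutboundData / handleInboundData, one would then return .block() when record(...) reports the limit was exceeded, and NEFilterDataVerdict(passBytes:peekBytes:) otherwise, which is the kill-switch behaviour described above.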