Don't need it, obviously! (Of possible interest, though: I only need information about our own processes, so I assume there's a way to get a task handle for those, even if it involves setting up some XPC between them.)
As always, @DTS Engineer is extremely helpful.
D'oh! There are no man pages, so I wasn't sure what the flavor was supposed to be -- I ended up looking at some XNU code but clearly didn't check all the cases. Thanks!
Sorry to hijack, but that didn't work for me. I'm trying a command-line utility, doing:
```c
#include <mach/mach.h>
#include <stdio.h>
#include <sys/types.h>

static size_t
get_thread_count(pid_t pid)
{
    mach_port_t me = mach_task_self();
    mach_port_t task;
    kern_return_t res;
    thread_array_t threads;
    mach_msg_type_number_t n_threads;

    res = task_for_pid(me, pid, &task);
    if (res != KERN_SUCCESS) {
        fprintf(stderr, "Unable to get task for pid %d: %d\n", pid, res);
        return 0;
    }
    res = task_threads(task, &threads, &n_threads);
    if (res != KERN_SUCCESS) {
        fprintf(stderr, "Could not get threads: %d\n", res);
        return 0;
    }
    // Release the thread ports, the array they arrived in, and the task port;
    // errors are ignored here.
    for (mach_msg_type_number_t i = 0; i < n_threads; i++) {
        (void)mach_port_deallocate(me, threads[i]);
    }
    (void)vm_deallocate(me, (vm_address_t)threads, n_threads * sizeof(*threads));
    (void)mach_port_deallocate(me, task);
    return n_threads;
}
```
and using an entitlements plist of
and using `codesign --sign - --entitlements ./ent.plist --deep ./t3 --force` to get it in there, but it fails with error 5. (Even when run as root.)
This could be how I'm codesigning it, of course; I was just doing a simple CLI tool test first.
Yay that works!
But how can I get thread count? I can't seem to use task_for_pid() (even with an Info.plist that has SecTaskAccess set to true).
I didn't really solve it -- it's a hacky work-around. When I call dio?.close()... it should close. It doesn't. And because it's busy doing one byte of I/O at a time, there was pending data; I had to change it to close it after a delay (0.5 seconds).
I don't know how much data the other side is sending. My read length is 1024 (my go-to length when reading from a network); my sample case is 18 bytes in my unit test, written all at once, but because of the water marks the read handler is called 18 times. Well, 19, because of the delayed `.close(flags: .stop)`.
If I set the water marks to something other than 1, then it always waits until that length is available, which doesn't work when the connection is interactive.
Thus:
Also, I really wish it would deliver all of the data that's available in a single read, rather than one byte at a time.
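To make the water-mark point concrete, here's a minimal sketch (the descriptor, queue, lengths, and handler names are just illustrative, not from my actual code): with `setLimit(lowWater: 1)` the read handler can fire for every byte that arrives, while a larger low-water mark makes the channel wait until that many bytes are buffered before calling back.

```swift
import Foundation

// Sketch only: `fd` stands in for whatever descriptor the channel wraps.
func makeChannel(fd: Int32, queue: DispatchQueue) -> DispatchIO {
    let channel = DispatchIO(type: .stream, fileDescriptor: fd, queue: queue) { error in
        if error != 0 { print("cleanup error: \(error)") }
    }
    // lowWater of 1 delivers data as soon as any byte arrives (so the handler
    // may run once per byte); a larger lowWater makes read(...) wait until
    // that many bytes are available.
    channel.setLimit(lowWater: 1)
    return channel
}

func readSome(from channel: DispatchIO, queue: DispatchQueue) {
    channel.read(offset: 0, length: 1024, queue: queue) { done, data, error in
        if let data = data, !data.isEmpty {
            print("got \(data.count) byte(s)")
        }
        if done {
            // The hacky workaround described above: delay the close so any
            // pending one-byte-at-a-time I/O drains first.
            queue.asyncAfter(deadline: .now() + 0.5) {
                channel.close(flags: .stop)
            }
        }
    }
}
```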
And yes, that seems to have fixed it. It's still very weird.
As far as I can tell, this is a conflict between the nlohmann JSON library and a set of formatter headers added in Xcode 16. But since it's C++, I can't figure out what exactly is going on, or how to work around it.
But... something that is compiled with Xcode 15 for macOS 14, with a minimum deployment of 12.0, really should still work with Xcode 16 & macOS 15, shouldn't it?
See, I always KNEW Xcode was out to get me; it's just taken TWENTY-FIVE YEARS to prove it!
The thing is, I do have the right entitlements in it -- see my output. But I still have to manually notarize in this case? (Our actual product is all command-line and I wrote scripts for all of this, but then promptly forgot everything I wrote, as is of course expected.)
Well, that's the best way for Apple. It doesn't help me at all, although I guess there's no point in filing a TSI.
My question at the beginning was whether anyone else has run into this, and if they have, whether they had any mitigations (other than not including UDP in the includedRules set). I'd still love to hear from anyone who has, especially about the mitigations part.
I went into far more detail, analysis, and question-asking in "Transparent Proxy Provider, UDP, mbufs, and inevitable panics".
Ok. I just took my simple extension test program (which sets up a TPP that returns false from both handleNew*Flow methods) and it exhibits the same behaviour. Again, only with UDP.
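For reference, a minimal sketch of that kind of do-nothing provider, assuming a standard NETransparentProxyProvider subclass; the network settings and the UDP-only rule in startProxy are illustrative assumptions, not the exact configuration from the test program:

```swift
import NetworkExtension

// Sketch of a do-nothing transparent proxy provider: both flow handlers
// return false, so every flow is handed back to the system unhandled.
class TestProxyProvider: NETransparentProxyProvider {

    override func startProxy(options: [String: Any]? = nil,
                             completionHandler: @escaping (Error?) -> Void) {
        // Illustrative settings only: claim all outbound UDP so that
        // handleNewUDPFlow(_:initialRemoteEndpoint:) gets called.
        let settings = NETransparentProxyNetworkSettings(tunnelRemoteAddress: "127.0.0.1")
        settings.includedNetworkRules = [
            NENetworkRule(remoteNetwork: nil, remotePrefix: 0,
                          localNetwork: nil, localPrefix: 0,
                          protocol: .UDP, direction: .outbound)
        ]
        setTunnelNetworkSettings(settings) { error in
            completionHandler(error)
        }
    }

    override func stopProxy(with reason: NEProviderStopReason,
                            completionHandler: @escaping () -> Void) {
        completionHandler()
    }

    // Returning false means "I'm not handling this flow".
    override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
        return false
    }

    override func handleNewUDPFlow(_ flow: NEAppProxyUDPFlow,
                                   initialRemoteEndpoint remoteEndpoint: NWEndpoint) -> Bool {
        return false
    }
}
```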
About 2.5 hours later:
```
50175/784366 mbufs in use:
    50173 mbufs allocated to data
    2 mbufs allocated to packet headers
    734191 mbufs allocated to caches
```
Yeah, based on all of this, I think I'm going to take a stab at refactoring all of it to just use DispatchIO. If I end up choosing to pass it over XPC, I could just use the FileHandle and turn it into a DispatchIO on the other side.
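A rough sketch of that hand-off idea, assuming an NSXPCConnection-style service -- the protocol and method names here are hypothetical -- but a FileHandle can be passed over XPC, and the receiving side can wrap its descriptor in a DispatchIO:

```swift
import Foundation

// Hypothetical XPC protocol: the name and method shape are just for illustration.
@objc protocol ConnectionHandoff {
    func take(handle: FileHandle)
}

// Receiving side: wrap the transferred descriptor in a DispatchIO channel.
final class HandoffReceiver: NSObject, ConnectionHandoff {
    private var channel: DispatchIO?

    func take(handle: FileHandle) {
        let queue = DispatchQueue(label: "handoff.io")
        // dup() so the channel owns its own descriptor, independent of the
        // FileHandle's lifetime (no error handling in this sketch).
        let fd = dup(handle.fileDescriptor)
        channel = DispatchIO(type: .stream, fileDescriptor: fd, queue: queue) { _ in
            _ = close(fd)
        }
        channel?.setLimit(lowWater: 1)
    }
}
```

On the sending side you'd just hand the FileHandle to the remote object proxy; as I understand it, XPC takes care of transferring the underlying descriptor to the other process.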