Not too hopeful that anyone can explain this, but here it goes.
I have some C code being used from an iOS app in Swift.
Log messages from the C code are passed via a callback to Swift and put on a serial queue using Log.serialQueue.async {}.
So, the C function could look like:
int do_some_c_stuff(void) {
    log("Do some logging");
    return 0;
}
And in Swift we have something like this to process the log that came through the callback:
class func log(_ message: String, logInfo: LogInfo = appLogInfo, type: OSLogType = .default) {
    Log.serialQueue.async {
        os.os_log("%@", log: logInfo.log, type: type, message)
    }
}
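For context, the bridge between the C library and that Swift log function is an ordinary function-pointer callback. The names below (log_callback_t, set_log_callback, g_log_callback, log_message) are placeholders standing in for the real plumbing, just to show its shape; on the Swift side a @convention(c) function is passed to set_log_callback and forwards the message to Log.log(...):

#include <stddef.h>

// Sketch of the assumed bridge, not the actual project code.
typedef void (*log_callback_t)(const char *message);

static log_callback_t g_log_callback = NULL;

// Called once from Swift with a @convention(c) function that forwards
// the message to the Swift Log.log(...) shown above.
void set_log_callback(log_callback_t cb) {
    g_log_callback = cb;
}

// Corresponds to the log() call in the C snippets (renamed here only to
// avoid clashing with math.h's log in a standalone sketch).
static void log_message(const char *message) {
    if (g_log_callback != NULL) {
        g_log_callback(message);
    }
}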
This works perfectly in all cases except one (Intel iPhone simulator only).
Now, some C functions declare a large, fixed-size local (stack) buffer to parse incoming messages, like this:
int do_some_c_stuff(void) {
    log("Do some logging");
    char buf[100000];
    return 0;
}
And here is the interesting part: if this buffer exceeds exactly 249440 bytes, any call to Log.serialQueue.async in the Swift layer gets an EXC_BAD_ACCESS (code=2), but only when running on the Intel simulator. Running on a device or on the M1 simulator works just fine.
So on the Intel simulator, this will crash when calling Log.serialQueue.async:
int do_some_c_stuff(void) {
    log("Do some logging"); // This triggers the callback inside log, which ends up in the Swift layer.
    char buf[249441];       // The buffer exceeds 249440 bytes.
    return 0;
}
Also note that it is the mere presence of this allocation that causes issues on Intel. Returning before the declaration does not help: if the allocation is present anywhere in the C function, the call to Log.serialQueue.async crashes. Further, it is not the logging in the Swift layer that causes the problem; simply calling Log.serialQueue.async with an empty closure crashes.
So the example below still crashes on Intel when calling serialQueue.async, which makes me assume the large chunk of stack is reserved when the function's frame is set up on entry, not when the buf declaration is reached.
int do_some_c_stuff(void) {
    log("Do some logging");
    return 0;
    char buf[249441];
}
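If that assumption is right, one experiment that should confirm it is moving the large buffer into its own helper function, so that the big frame only exists after the callback has returned. This is purely hypothetical (parse_with_big_buffer is a made-up name) and I have not verified it:

// Hypothetical variation: keep the 249441-byte frame out of the function
// that is on the stack while the Swift callback runs.
static void parse_with_big_buffer(void) {
    char buf[249441]; // big frame is live only for the duration of this call
    (void)buf;
}

int do_some_c_stuff(void) {
    log("Do some logging");   // callback runs with only a small frame below it
    parse_with_big_buffer();  // large stack frame comes and goes afterwards
    return 0;
}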
It only happens in the Intel simulator and only in Debug mode. It is 100% reproducible in various places in the codebase, all of them using C functions that declare a local buffer larger than 249440 bytes.
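If this is stack exhaustion (or a guard page being hit), one way to check whether the 249440-byte threshold lines up with the remaining stack would be to print the headroom right before calling log(). I have not run this here; it is just a sketch using Darwin's pthread_get_stackaddr_np/pthread_get_stacksize_np, and print_stack_headroom is a made-up helper:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

// Rough diagnostic: approximately how many bytes of stack remain on the
// current thread? On Darwin, pthread_get_stackaddr_np() returns the high
// end of the stack, which grows downward toward (stackaddr - stacksize).
static void print_stack_headroom(const char *where) {
    pthread_t self   = pthread_self();
    uintptr_t top    = (uintptr_t)pthread_get_stackaddr_np(self);
    size_t    size   = pthread_get_stacksize_np(self);
    uintptr_t bottom = top - size;
    char      marker;            // lives near the current stack pointer
    printf("%s: ~%lu bytes of stack left\n",
           where, (unsigned long)((uintptr_t)&marker - bottom));
}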
I do not have a minimal example at this time; I'm hoping someone might have an idea of why this happens, but if anyone is interested, maybe I can whip something up. In general, just having the C function allocate this large block on the stack, call back into Swift from the same function, and use DispatchQueue.async should do the trick. Is there some sort of memory swapping, paging, etc. that would cause problems in a scenario like this, mixing C and a DispatchQueue (on Intel only)?
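For reference, the C side of such a reproducer would probably be no more than this (reusing the hypothetical log_message/set_log_callback plumbing sketched earlier; the Swift side just registers a @convention(c) callback that calls Log.serialQueue.async):

// Minimal C side of a would-be reproducer: a big stack frame plus a
// callback into Swift from the same function.
int reproduce_crash(void) {
    char buf[249441];            // anything above 249440 bytes
    (void)buf;
    log_message("trigger Log.serialQueue.async via the callback");
    return 0;
}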
Since the solution is to reduce the stack allocation or use heap memory, this is not critical. However, if anyone knows why this is happening on Intel CPUs, it would be super interesting to know.
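For completeness, the heap version of that workaround is just the obvious malloc/free; a sketch, not the exact code from the project:

#include <stdlib.h>

int do_some_c_stuff(void) {
    char *buf = malloc(249441);  // the big buffer now lives on the heap, not the stack
    if (buf == NULL) {
        return -1;
    }
    log("Do some logging");      // callback into Swift runs with a small stack frame
    free(buf);
    return 0;
}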