Different numbers of threads on different MacBooks

Hi,

I have a Python program that uses libwx_osx_cocoau-3.0.0.3.0.dylib and explicitly creates 3 threads in the source code. I tried running it on 4 MacBooks, with OS versions from 10.14.6 to 10.15.1, and the performance is really bad on two of them. Using Activity Monitor, I found that on the two bad-performance MacBooks my program has 14-18 threads and uses about 400 MB of real memory, while on the other two it has 8-9 threads and uses less than 200 MB of real memory.
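From inside the program itself, the most I can do is enumerate the Python-level threads, roughly like this (a minimal sketch; native threads created by the dylib won't show up here, only in Activity Monitor):

    import threading

    def dump_threads():
        # Only threads known to the threading module appear here;
        # threads spawned natively by libraries such as libwx are
        # invisible at this level.
        for t in threading.enumerate():
            print(t.name, t.ident, "daemon" if t.daemon else "non-daemon")

    dump_threads()

That reports the same 3 threads everywhere, so the extra threads are coming from somewhere below my code.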

I am very new to macOS and only have experience with Linux. My understanding is that the thread count and real memory usage should be roughly in the same range across different machines.

(BTW, not sure whether this provides more information, but this program worked very well for years before 10.14.)

Can anyone give me a hint on why the number of threads differs across these MacBooks, and how I should debug, fix the source code, or change the configuration to make all of the MacBooks work well?


Thanks a lot.

Replies

You would have to ask the developers of that library.

To expand on john daniel’s response, there are various ways that this library could decide to spin up more threads, and it’s pretty much impossible to know from the ‘outside’ how they chose to do this. Most likely they’re doing something with the number of CPUs, but in a world of multiple cores, NUMA, hyperthreading, big.LITTLE [1], and so on, a simple CPU count makes little sense.
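For example (a generic sketch, not a claim about what libwx actually does), code that sizes a worker pool off the logical CPU count will create a different number of threads on every machine it runs on:

    import os
    from concurrent.futures import ThreadPoolExecutor

    # os.cpu_count() reports logical CPUs, hyperthreads included, so a
    # 2-core machine sees 4 and a 4-core machine sees 8 -- the same
    # code ends up with different numbers of threads on each Mac.
    workers = os.cpu_count() or 1
    pool = ThreadPoolExecutor(max_workers=workers)

Python’s own ThreadPoolExecutor does something similar in recent versions, defaulting to min(32, os.cpu_count() + 4) workers when you don’t pass max_workers, which is exactly the kind of CPU-count heuristic I’m describing.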

Share and Enjoy

Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware

let myEmail = "eskimo" + "1" + "@apple.com"

[1] This is an Arm thing, so you won’t see it on the Mac, but it’s all part of the same problem.

You said "Python", but that is a loaded word. It reminds me of a funny story. I once worked at a big "data science" place where they used "Python" extensively. But by "Python" I mean a horribly complex system based on Anaconda, with innumerable different versions of tools and libraries. They had one particular program that kept crashing. They convinced themselves that this was because their data was too complex and they were running out of RAM. Against my objections, they set about buying a server with 1 TB of RAM to run this app while I analyzed the algorithm to determine just how much the data needed to be subdivided to fit into 1 TB of RAM.


I bring this up because one of the problems I found was an uncontrollable number of threads, very much like what you are seeing. I also discovered that the process’s RAM use was actually quite conservative. The real problem was that their environment was completely out of control and they had no idea which versions of software and libraries they were using. The whole time they had been running an obsolete version of a library, and that was what caused the crash. Of course, I was then left with the awkward task of finding something useful to do with five 1 TB Linux servers.


So I suggest you carefully review exactly which code and libraries are running. You may need to emit version numbers in your output logs.
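Something like this minimal sketch at startup would do it (assuming the wxPython bindings are what sit on top of that dylib in your program):

    import sys
    import platform
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger(__name__)

    # Record exactly which interpreter and OS this run is using.
    log.info("Python %s on %s", sys.version.split()[0], platform.platform())

    # Record the wxPython/wxWidgets version actually loaded at runtime.
    try:
        import wx
        log.info("wxPython %s", wx.version())
    except ImportError:
        log.info("wxPython not importable in this environment")

Comparing that output across your four MacBooks should tell you quickly whether they are really running the same code and libraries.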