Ian Bicking: the old part of his blog


I take issue with the "it doesn't matter to me, therefore it isn't important" attitude of the original post, which smacks of denial. If I had to make a list of things I would like fixed in Python, the GIL would top that list. Sure, it can be worked around. But the fact that there are work-arounds does not make it any less of a wart on an otherwise fine language. On some platforms, notably Win32, multiple processes incur significantly higher overhead than threads, so pushing developers towards multiprocessing reduces overall system capacity; the GIL acts as a kind of glass ceiling on performance.

We use CORBA servers (via omniORB) at my company and had to develop all sorts of ugly load-balancing hacks to get around the concurrency issues. Multi-threading a single process is the way the standard is meant to work. Many applications maintain caches, and splitting them across multiple processes lowers cache hit ratios. We also have to maintain multiple CORBA connection pools, which is less efficient than a single pool due to queueing-theory effects. Sure, the caches can be kept in shared memory, but I don't think anybody can maintain with a straight face that Python handles objects in shared memory particularly elegantly or well.
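The queueing-theory effect mentioned above can be illustrated with the standard Erlang C formula for M/M/c queues: one shared pool of 2n servers gives lower mean waiting time than two isolated pools of n servers each carrying half the traffic. This is only an illustrative sketch, not the poster's actual setup; the pool sizes and arrival rates below are hypothetical.

```python
import math

def erlang_c(c, a):
    """Probability that an arriving job must wait in an M/M/c queue,
    where c is the number of servers and a = lambda/mu is the offered
    load (requires a < c for stability)."""
    series = sum(a**k / math.factorial(k) for k in range(c))
    tail = (a**c / math.factorial(c)) * (c / (c - a))
    return tail / (series + tail)

def mean_wait(c, lam, mu):
    """Mean queueing delay W_q = ErlangC / (c*mu - lambda)."""
    return erlang_c(c, lam / mu) / (c * mu - lam)

# Hypothetical numbers: each connection serves mu = 1 request/s.
# Split: two pools of 2 connections, each seeing 1.5 requests/s.
# Pooled: one pool of 4 connections seeing all 3 requests/s.
split_wait = mean_wait(2, 1.5, 1.0)   # ~1.29 s
pooled_wait = mean_wait(4, 3.0, 1.0)  # ~0.51 s
print(split_wait, pooled_wait)
```

At the same total load, the single pool more than halves the mean wait, which is the efficiency loss the comment attributes to being forced into multiple per-process pools.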

Finally, multiprocessors are far from exotic, and with hyperthreading, they are actually becoming the rule rather than the exception.
Comment on GIL of Doom!
by Fazal Majid


FWIW, you can resolve many of the performance issues you are running into with your partitioned workload by assigning work to idle engines in LIFO order. This ensures that, at the most common workload levels, the engines that stay idle end up paged out rather than detracting from the processes you are actually keeping loaded.
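The idea can be sketched with a stack of idle engines: a pop/push (LIFO) discipline keeps reusing the most recently released "hot" engine, while rarely used engines sink to the bottom and stay idle long enough to be paged out. This is a minimal illustration, assuming a hypothetical `HotEngineScheduler` wrapper around whatever the engines actually are.

```python
class HotEngineScheduler:
    """Dispatch work to idle engines in LIFO order ("hot engine scheduling")."""

    def __init__(self, engine_ids):
        # Top of the stack (end of the list) = most recently released engine.
        self.idle = list(engine_ids)

    def acquire(self):
        # LIFO: reuse the hottest engine; cold engines stay untouched.
        return self.idle.pop()

    def release(self, engine_id):
        self.idle.append(engine_id)

sched = HotEngineScheduler([1, 2, 3, 4])
engine = sched.acquire()   # engine 4, the top of the stack
sched.release(engine)
engine = sched.acquire()   # engine 4 again; engines 1-3 remain cold
```

With a FIFO discipline every engine would be touched in rotation, keeping all of them resident; LIFO concentrates the working set on as few engines as the current load requires.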

This technique is called "hot engine scheduling".


# Terry Lambert