I used to be a serious python 'head'.. (pythonista, whateva).. I still appreciate python, but this 'defending the GIL' and 'just fork' business is a load of crap and continually makes me -not- want to use python, even though I still find its minimalism, clarity, and large ecosystem of extensions wonderful.
First of all, as to the question of 'who needs threads' - I hope this is a joke.
If you need performance, which some people do; if threading gives you that performance along with clear programming abstractions, which it would; and if it wouldn't hurt non-threaded code too much, which it wouldn't if designed properly - then real threads would remove a barrier for people considering the language seriously. So why would you -not- want them?
There are at least 2 interpreted (or should I say incrementally compiled) lisp/scheme systems with native threading that exist, for real, today: Chez Scheme and SBCL on x86. And then of course there is java's bytecoded VM, which is threaded in some cases as well.. as far as I can remember perl threads are native too (when they work).. and parrot definitely has an interpreter framework for multithreading - a direction I hope python moves in, instead of towards PyPy 'language feature obsession'.
R. Kent Dybvig, the author of Chez Scheme, has a wonderfully detailed paper (http://www.cs.indiana.edu/~dyb/pubs/hocs.pdf) about the development of Chez Scheme over the past 20+(!) years, with plenty of technical detail (conceptually anyway - it's still a closed-source scheme implementation) about the VM and RTTI in dynamic languages. The section on the implementation of version 7 deals specifically with adding threading to the system - not trivial, but manageable if you want to do it. The article took me many passes to even comprehend, so don't be afraid :)
As for the 'it might have a performance hit' and 'what about the broken extension modules' crowds: even if those objections were true, they could be dealt with. A proper design could keep the performance hit minimal, and you could always lock around every entry point into a non-thread-safe module. And nothing says true threads couldn't be a compile-time option, so those who prefer the single-threaded approach could still build the interpreter "with less overhead" (or the choice could even be made at start-up time via some kind of system check).
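The 'lock around all entry to broken modules' idea is easy to sketch in Python itself. This is a minimal illustration, not a real fix - `legacymod` is a hypothetical non-thread-safe extension, and the proxy below simply serializes every call into it behind one lock:

```python
import threading

class LockedModule:
    """Proxy that serializes all callable access to a module
    that isn't thread-safe (a hypothetical 'legacymod')."""

    def __init__(self, module):
        self._module = module
        self._lock = threading.Lock()

    def __getattr__(self, name):
        attr = getattr(self._module, name)
        if callable(attr):
            # Wrap each function so only one thread is ever inside
            # the wrapped module at a time.
            def locked(*args, **kwargs):
                with self._lock:
                    return attr(*args, **kwargs)
            return locked
        return attr

# usage sketch (assuming 'legacymod' is the broken extension):
#   import legacymod
#   legacymod = LockedModule(legacymod)
```

Of course this is exactly the kind of big-lock hack I complain about below - it trades all parallelism inside that module for safety - but it shows the workaround is cheap to build.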
Not to mention all the arguments to the negative about giant-lock SMP in the FreeBSD world (the dispute that led DragonFly BSD to break off) and, as mentioned in this thread, in the NPTL-implementing-Linux world - all of which say that 'locking with too much underneath the lock' is bad.
Big locks are bad long-term design decisions (the lock-around-broken-modules trick I mentioned above would be a hack, for instance). Working around them is very difficult, but not impossible - and defending a bad design decision is an excuse, not a reason.
For those that say 'just fork' I pose a simple question:
"If threads are unnecessary when fork() exists, why were they invented for unix in the first place?"
It's a unix-specific argument, but it's a good example of a place where fork wasn't good enough and threading was added to overcome its limitations. It doesn't necessarily settle whether threading is necessary, but no one on this thread is arguing that 'real threads' shouldn't exist in Unix or Unix-like systems.. and probably wouldn't dare.
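The fork-vs-threads difference is easy to demonstrate in Python itself. A minimal sketch (Unix-only, since it uses os.fork()): a thread's write to shared state is visible to the parent, while a forked child only mutates its own copy of memory:

```python
import os
import threading

counter = {"value": 0}

def bump():
    counter["value"] += 1

# A thread shares the parent's address space: its write is visible here.
t = threading.Thread(target=bump)
t.start()
t.join()
assert counter["value"] == 1

# fork() gives the child a *copy* of the address space, so the child's
# write never reaches the parent.
pid = os.fork()
if pid == 0:            # child process
    bump()              # increments the child's copy only
    os._exit(0)
os.waitpid(pid, 0)      # parent waits for the child
print(counter["value"])  # still 1: the child's increment was lost
```

Sharing state between forked processes means bolting on pipes, sockets, or shared-memory segments - which is exactly the clear-abstraction gap that threads fill.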
I do understand that a big-lock strategy is often a good first step towards a better implementation, but sometimes it's time to take a -second- step.. (and a third, etc.) to get an even better one.
And I don't want it bad enough to code it - even if that makes me a whiner.