Ian Bicking: the old part of his blog

The challenge of metaprogramming comment 000

"I am sure you're dead wrong on tail-call elimination. It's a compiler/interpreter optimization that removes a penalty for recursive expression. I'm at a loss for why you think it makes the language harder, unless you feel that the forms people use to take advantage of it are obfuscating. Certainly, nobody's code is going to stop working because of it."

There are two problems I see with tail-call elimination:

First, it messes up tracebacks; you lose frames that, for debugging, contain useful information. This was the justification for not adding tail-call elimination to Squeak Smalltalk, which is where I originally became suspicious of it (since I very much respect the opinions and guidance of the Squeak Smalltalk people; e.g., Dan Ingalls and Alan Kay).

Second, it is unpredictable unless you understand it well. Tail-call elimination can be used to create functions that simply won't work in its absence, because of stack limits. Why something will or will not work is somewhat confusing; adding a seemingly-innocuous function call to the end of a recursive function can break the function. For example:

def munge_items(lst, done=[]):
    if not lst:
        return done+lst
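    # munge() is assumed defined elsewhere; the recursive call below
    # is the last thing the function does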
    return munge_items(lst[1:], done+[munge(lst[0])])

That's okay with tail-call elimination. But this isn't:

def munge_items(lst):
    if not lst:
        return lst
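    # the list concatenation runs after the recursive call returns,
    # so the call is not in tail position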
    return [munge(lst[0])] + munge_items(lst[1:])

Heck, even this isn't:

def munge_items(lst, done=[]):
    if not lst:
        return done+lst
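    # the call's result is bound to a name, then returned unchanged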
    result = munge_items(lst[1:], done+[munge(lst[0])])
    return result

How could you understand that without understanding the underlying process? In isolation, are you really going to know that tail-call elimination is an essential part of an algorithm someone else wrote? Little things can break tail-call elimination, and that's no good. OK, maybe it's just bad programming to depend on it; that's par for the course in Lisps, but let's agree it wouldn't be in Python. But if it isn't something you depend upon, will the performance gains really be worth it?

I used to do a fair amount of Scheme programming, in a functional style that made a lot of use of recursion. I didn't dislike it at the time, but I don't miss it either. I just don't see a compelling need for something that enables more use of recursion.

Comment on Re: The Challenge Of Metaprogramming
by Ian Bicking

Comments:

I realized that this was your objection after I made my post. It doesn't change my opinion, but it does give me something concrete to prepare an argument against. Your example hinged on a few "not quite a tail-call" cases and the idea that you would have to understand the underlying implementation to be able to figure out why one runs out of memory and the other doesn't. I doubt it would take long to find several examples like those ALREADY IN THE LANGUAGE. The fact that Python is iteration-friendly and recursion-hostile is something that developers have to understand today.
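For instance, deep recursion already fails in today's CPython whether or not the call is in tail position, because every call consumes a stack frame up to sys.getrecursionlimit(). A minimal sketch (the function name is invented for illustration):

import sys

def count_down(n):
    # every call consumes a frame; CPython never eliminates tail calls
    if n == 0:
        return 0
    return count_down(n - 1)  # in tail position, but a frame is kept anyway

print(sys.getrecursionlimit())  # typically 1000
count_down(500)                 # fine
# count_down(10**6)             # exceeds the limit and raises RecursionError
#                               # (a RuntimeError on older Pythons)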

Every language implementation has correct programs that run so poorly that they fail or effectively fail due to running out of space or time (an "infinite loop" is merely one that exceeds the patience of the developer/user). By your logic, any optimization that would make one of these correct programs run acceptably is suspect.

The funny thing is, I'm opposed to cluttering up Python. I'm not sure it was a good idea to add decorators to the language. It's certainly going to make for more challenging code than tail-call elimination ever will.

BY THE WAY: The stack trace problem is not intractable:

In debugging mode, when the interpreter makes a tail-call, it could push a debugging frame onto the stack. The debugging frame contains a pointer to a circular list in which each entry represents a parameter set for a tail call. Subsequent tail-calls would use the existing frame (as long as it is at the top of the stack) and place their parameter sets on the list. The circular list is of finite size and wraps around when that limit is reached, incrementing a counter of abridged calls.

If a non-tail-call is made, it makes use of the stack as normal. Note that the debugging frame is now no longer at the top of the stack. A subsequent tail-call will push a new debugging frame onto the stack (as above).

The return through the debugging frame "returns" to code which cleans up the circular list and then returns to the context that called the context that made the tail-call, just like it should.

The stack trace display function needs to know about the debugging frames and how to read the circular lists. If the circular list has not wrapped, the trace will be no different from one for non-optimized tail-calls. If it has, it will need to say something like " ... 95 calls abridged ... ". Frankly, I think this will make the stack traces MORE readable for deeply recursive code.

Note 1: Due to co-recursion, the circular list has to support multiple parameter set signatures (A tcalls B tcalls A tcalls B ...). Note 2: I would keep the initial parameter set for the tail-call creating the frame, overwriting tail calls 2 through N (where N is the maximum size of the circular list). This would have the advantage that the "abridgement" would be bracketed nicely.
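A rough user-level sketch of that bookkeeping, with a bounded deque playing the role of the interpreter's circular list (all of the names here are invented for illustration; none of this is a real CPython structure):

from collections import deque

class TailCallLog:
    # Stands in for the proposed debugging frame's circular list: keep
    # the initial parameter set, overwrite the middle, count what's lost.
    def __init__(self, limit=4):
        self.first = None                    # initial parameter set, always kept
        self.recent = deque(maxlen=limit)    # wraps around like the circular list
        self.abridged = 0                    # counter of overwritten entries

    def record(self, func_name, args):
        # each entry carries its own function name, so co-recursive
        # chains (A tcalls B tcalls A ...) mix freely in one list
        entry = (func_name, args)
        if self.first is None:
            self.first = entry
        else:
            if len(self.recent) == self.recent.maxlen:
                self.abridged += 1           # oldest entry is about to fall off
            self.recent.append(entry)

    def render(self):
        entries = [e for e in [self.first] + list(self.recent) if e is not None]
        lines = ["%s%r" % e for e in entries]
        if self.abridged:
            lines.insert(1, " ... %d calls abridged ... " % self.abridged)
        return "\n".join(lines)

log = TailCallLog(limit=3)
for i in range(8):                           # simulate A tcalls B tcalls A ...
    log.record("A" if i % 2 == 0 else "B", (i,))
print(log.render())

The output keeps the first call, an "... N calls abridged ..." marker, and the most recent parameter sets, which is exactly the bracketing that Note 2 describes.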

# Reilly Hayes

I forgot one major point. I don't really care about tail-recursion from a performance point of view. I care because there is a large set of problems for which the recursive code is MUCH clearer than the iterative version.
# Reilly Hayes

Why shouldn't the third definition be tail recursive? I don't think an assignment to a local variable would prevent the compiler from noticing the tail call.
# anonymous

Good point. The third example is eligible for tail-call elimination.
# Reilly Hayes

"The distance between lisp code and the AST is very small, which limits the textual/structural disonance."

Very well put; as I like to say, in Lisp the syntax is the semantics. This feature of Lisp is profound, and it makes Lisp unlike any other language. Lisp and her pretty sister Scheme are essentially sugared lambda calculus. They blur the distinction between program and data. Since a program is itself just data, this makes metaprogramming easy. Using these for DSLs is imho the best approach to higher-level programming.

# Bob Dionne

I think Tcl has a surprisingly unified syntactic model, based on strings and quoting rules instead of lists and symbols.
# Ian Bicking

Yes, but it's also the source of some of its biggest problems. Parameter passing is a MESS. I used Tcl heavily at one point (starting when it was pretty much the only game in town for embedding into C). We built some neat stuff that was easily extensible.

One measure of a language is what happens when a program/system approaches the complexity limits of the language. In most languages, development gradually gets more difficult. Each feature takes more time to develop. In Tcl, users hit the wall HARD when those limits get near. This is mostly because of the string substitution model and its impact on program structure.

I'd still use Tcl for some things, but only when I KNEW that Tcl would only be used to express very simple programs.

# Reilly Hayes

Personally, I understood how to make proper tail calls long before I understood the underlying mechanism, or at least before I understood it well. In Scheme/Lisp, all you have to know is that, to make things efficient, the recursive call must not be wrapped in any other function call. In Python it's just as simple: make the thing after return be a recursive call and nothing else, and don't operate on the result of that recursive call.
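A minimal restatement of that rule, with factorial standing in for any recursive function (illustrative only):

# tail call: the recursive call's result is returned untouched
def fact_tail(n, acc=1):
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

# not a tail call: the multiplication runs after the call returns,
# so the caller's frame has to stay alive
def fact(n):
    if n <= 1:
        return 1
    return n * fact(n - 1)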
# Daniel Ehrenberg