Ian Bicking: the old part of his blog

The challenge of metaprogramming comment 000

Dylan's macros were a kludge, sadly. I doubt more than a dozen people ever actually understood how the standard Dylan 'for'-loop macro actually worked.

Basically, I've never seen really good macros in an infix language, and I sincerely doubt good macros can be retrofitted to an existing infix language. (I strongly suspect that a new infix language could have good macros, but you would need to make a few simplifying assumptions when designing the language's grammar.)

Macros are far easier to understand than C++ template metaprogramming, and plenty of popular C++ libraries make extensive use of template metaprogramming. So I don't think the difficulty of writing macros is a barrier to including them in a language.

Using macros, on the other hand, can be extremely easy, even for novice programmers. At work, we have scripters who routinely build complex, nearly-bug-free model-view-controller (MVC) systems using a few tiny macros. MVC and many other high-level architectural patterns are punishingly difficult for novice programmers. But write a few simple macros and a short tutorial, and novice programmers can reliably use tricky design patterns.
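To give a flavor of how a tiny macro can hide a pattern's wiring (a hypothetical sketch using Rust's macro_rules!, not the actual macros we use at work; `builder!` and `Window` are invented names), here a novice writes one declaration and gets a builder pattern generated for them:

```rust
// Hypothetical sketch: a macro that generates builder-pattern
// boilerplate from a one-line description of the fields.
macro_rules! builder {
    ($name:ident { $($field:ident : $ty:ty = $default:expr),* $(,)? }) => {
        #[derive(Debug, Clone, PartialEq)]
        struct $name { $($field: $ty),* }
        impl $name {
            // constructor filling in the declared defaults
            fn new() -> Self { Self { $($field: $default),* } }
            // one fluent setter per field, generated automatically
            $(fn $field(mut self, v: $ty) -> Self { self.$field = v; self })*
        }
    };
}

builder!(Window {
    width: u32 = 800,
    height: u32 = 600,
    title: String = String::from("untitled"),
});

fn main() {
    let w = Window::new().width(1024).title("demo".to_string());
    assert_eq!(w.width, 1024);
    assert_eq!(w.height, 600); // untouched fields keep their defaults
}
```

The scripter never sees the pattern's moving parts; they only see the declaration.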

I think that programmatic macros are a simpler and more powerful abstraction than C++ template metaprogramming. But I don't expect to see them in a mainstream language for quite a few years, because of the difficulty of implementing them well in an infix language.

Comment on Re: The Challenge Of Metaprogramming
by EK

Comments:

Dylan macros aren't as much of a kludge as is suggested above. They're in fact quite powerful, no harder to write and read than, say, CL macros, and tremendously useful. What's missing in the standard is a procedural macro facility, but there's a draft available that provides this (google for DEXPRs).

The reason Dylan hasn't become successful is probably that the implementations suffered severe blows just before they were done (Apple downsized the Cambridge Labs, Harlequin went bankrupt). Now that the former Harlequin implementation has gone open source, there's hope Dylan will gain momentum. In fact, we have about 200 users of the pre-alpha snapshot right now.

# Andreas Bogk

Hi, Andreas! I worked with you in the early years of the open-source Gwydion Dylan project, and I'm glad to see you're making so much progress. Dylan is still one of my favorite languages.

I stand by the argument that Dylan macros are a kludge. They work fine for relatively simple source-to-source transformations--largely thanks to a lot of "do what I mean" magic in the spec--but as soon as you try to do anything complicated, they're at least an order of magnitude harder than creating LALR(1) grammars. Look at how the pattern "var :: type = expr" actually binds; it defaults in helpful ways, but it's incredibly arbitrary and ad-hoc. These individual kludges add up, and by the time you're building a large, novel macro, they become pretty overwhelming.

(I'll have to take a look at the dexpr proposal. It sounds quite promising, but I imagine it still has a few kludgy issues with Dylan's low-level pattern grammar.)

I think the best macro system, to date, is the combination of Scheme R(n)RS high-level macros and the Chez Scheme low-level macro system. It has all the hygiene Common Lisp lacks, and it's only slightly harder to write programmatic macros in Chez Scheme than in Common Lisp.
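As a rough illustration of what hygiene buys you (sketched here in Rust's macro_rules!, which is hygienic in the same spirit as Scheme's high-level macros; the `twice!` macro is invented for the example): a temporary introduced inside the macro cannot capture a variable of the same name at the call site.

```rust
// Hedged sketch: `twice!` binds a temporary named `tmp`.
// In an unhygienic system, calling it on an expression that
// mentions the caller's own `tmp` would silently capture it.
macro_rules! twice {
    ($e:expr) => {{
        let tmp = $e; // hygienic: distinct from any caller's `tmp`
        tmp + tmp
    }};
}

fn main() {
    let tmp = 10; // the caller's own `tmp`
    // Expands safely: `tmp + 1` refers to the caller's binding (11),
    // and the macro's internal `tmp` doesn't shadow it mid-expansion.
    assert_eq!(twice!(tmp + 1), 22);
    assert_eq!(tmp, 10); // caller's binding is untouched
}
```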

I think the answer for truly good infix macros requires an extensible grammar system. However, every extensible grammar system I've examined to date has been ugly. That's because they all allow programmers to extend an underlying LALR(1)-style grammar (or something similar). LALR(1)-style grammars, unfortunately, are tricky and--most importantly--completely non-modular. You can't easily combine multiple macros from different modules, you can't produce good error messages, and you can't formally explain why certain things work and other things don't.

Basically, I think the key to good infix macros is to replace LALR(1) with a different, more-modular parsing model, and then adapt as much of the LISP/Scheme approach as possible. LALR(1), quite frankly, is a clever kludge, and you can almost certainly get by with something a lot simpler if you make a couple of assumptions. I'd be delighted to discuss those assumptions with you in e-mail.
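One way to see what "more modular" could mean (my illustration, not necessarily the model Eric has in mind): in a recursive-descent/combinator style, each grammar rule is an ordinary function, so rules defined in different modules compose directly, without the global table conflicts an LALR(1) generator produces. A minimal sketch in Rust, for the toy grammar `expr ::= number ('+' number)*`:

```rust
// Each rule is a plain function from input to (value, remaining input).
// Another module could define its own rules and call these freely --
// there is no shared parse table to conflict with.

// number ::= digit+
fn number(input: &str) -> Option<(i64, &str)> {
    let end = input
        .find(|c: char| !c.is_ascii_digit())
        .unwrap_or(input.len());
    if end == 0 {
        return None; // no leading digits: this rule simply fails
    }
    Some((input[..end].parse().ok()?, &input[end..]))
}

// expr ::= number ('+' number)*
fn expr(input: &str) -> Option<(i64, &str)> {
    let (mut acc, mut rest) = number(input)?;
    while let Some(after_plus) = rest.strip_prefix('+') {
        let (n, r) = number(after_plus)?;
        acc += n;
        rest = r;
    }
    Some((acc, rest))
}

fn main() {
    assert_eq!(expr("1+2+3"), Some((6, "")));
    assert_eq!(number("abc"), None);
}
```

Whether this particular style scales to a full macro-bearing language is exactly the open question, but it shows the kind of composability LALR(1) lacks.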

# Eric Kidd

Hi Eric!

Didn't hear from you for quite a while... If you have the time, please visit us at IRC channel #dylan, on irc.freenode.net. I'm sure we have much to discuss.

But yes, implementing infix macros by adding rules to the grammar is a thought that has crossed my mind. This would be much more straightforward to implement, and would give better error reporting.

In case you haven't seen it yet, take a look at Peter Housel's Monday project (http://monday.sf.net). It contains a nice extensible parser framework.

# Andreas Bogk