November 29, 2006
Jarrett Billingsley wrote:
> "Brad Anderson" <brad@dsource.org> wrote in message news:ekhh7s$2e7$1@digitaldaemon.com...
> 
>> Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and ready
>> to go, while the imperative languages try to inch closer over the decades.
> 
> In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;)
> 
> I really can't get past the parentheses.  I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren.  I'm sure bracket highlighting in a code editor can help, but why should that be necessary?  I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax. 

I completely agree. Lisp has a terrible "Hello, World" problem. The first Lisp program I saw had a mass of parentheses, and introduced the functions 'car' and 'cdr' (The year is 1952, apparently). For a newbie, Lisp debugging involves counting parentheses.
The failure of Lisp to gain traction is a great demonstration of the importance of syntactic sugar. Poor old Lisp.

Forth was another great language for metaprogramming. Even the language primitives were written in Forth, except for a few dozen lines of asm.
Completely unmaintainable, though -- asm is much easier.



November 29, 2006
Jarrett Billingsley wrote:
> "Brad Anderson" <brad@dsource.org> wrote in message news:ekhh7s$2e7$1@digitaldaemon.com...
> 
>> Poor Lisp.  It just sits there, 50 years old, debugged, optimized, and
>> ready
>> to go, while the imperative languages try to inch closer over the decades.
> 
> In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;)

But the existing prefix notation is exactly why Lisp can be extended in so many ways with macros.  Change that and you lose most of, or at least a lot of, the metaprogramming facilities (see Dylan); you don't even have Lisp anymore.  That's why I'm skeptical of how far imperative languages can go with metaprogramming before the result turns into an awful beast.
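
To make that concrete, here's a toy sketch of my own (not from any library) of why the uniform prefix syntax matters: a macro receives the code it wraps as plain lists, so it can rearrange them however it likes.

# ;; a toy 'while' built out of the primitives -- possible because the
# ;; body arrives as ordinary list data the macro can splice around
# (defmacro while (test &body body)
#   `(do ()
#        ((not ,test))
#      ,@body))
#
# ;; and then it's used exactly like a built-in:
# (let ((i 0))
#   (while (< i 3)
#     (print i)
#     (incf i)))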

I kind of hope I'm wrong and D can pull a lot of it off.  As Georg said, Lisp is scary in its breadth and capability.  As Steve Horne lamented, there are no high-level standard libs, because it's so easy to roll your own.  Any standard lib that's come along has not served everyone's needs, so people roll their own anyway.  Those two things have slowed the adoption of Lisp.  Potentially syntax, too; see below.

My original post in this thread was more of an observation that languages keep getting more sophisticated, because the power users drive the compiler writers further and further.  D is here because C++ can't grow as nimbly as it used to, and Walter has some cool ideas to add to it.  And here's Lisp, sitting there, arguably the most powerful language ever created, and people are afraid of its power, its syntax, its (insert excuse here) so they choose to reimplement.  I'd hate for people to get pretty far, hit a wall on something, and then look and see how easy it is to do in Lisp.  On the other hand, I would never try to quash the efforts people in D-land make.  It's just an observation and a curiosity on my part.  Maybe Lisp isn't all that I'm giving it credit for.

> 
> I really can't get past the parentheses.  I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren.  I'm sure bracket highlighting in a code editor can help, but why should that be necessary?  I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax.

That's probably because you're comfortable with the imperative langs and C-style syntax.  Once people get over the hump of switching to Dvorak keyboards, they claim the difference is amazing.  S-expressions and prefix notation really are powerful once you see all the benefits.  You don't have to do quoted blocks like Nemerle does.

And most people get past their difficulty with the parens by leaning on indentation:

#(defun save-db (filename)
#  (with-open-file (out filename
#                   :direction :output
#                   :if-exists :supersede)
#    (with-standard-io-syntax
#      (print *db* out))))

The parens at the end don't really matter; what matters is where the indenting starts.  And of course, good editors take care of the parens and the indenting.  Even without the metaprogramming facilities the prefix notation allows, you end up with programs that speak more directly to the problem you're trying to solve.  That may just be functional languages in general, though: you end up with code that reads like a language designed for your domain.

For the uninformed, check out this great online book.  I've linked only two chapters: a whirlwind tour and the one on macros.  Even if you're hooked on D, it makes you think differently about programming; it got me thinking of cool uses for delegates in D, and much more.

Quick Intro: http://gigamonkeys.com/book/practical-a-simple-database.html

Macros: http://gigamonkeys.com/book/macros-defining-your-own.html

For D to approach some of this extensibility would be phenomenal, and as Georg suggested, we may be closer than anyone suspects..

BA
November 29, 2006
Don Clugston wrote:
> Jarrett Billingsley wrote:
>> "Brad Anderson" <brad@dsource.org> wrote in message news:ekhh7s$2e7$1@digitaldaemon.com...
>>
>>> Poor Lisp.  It just sits there, 50 years old, debugged, optimized,
>>> and ready
>>> to go, while the imperative languages try to inch closer over the
>>> decades.
>>
>> In that case.. it'd be another interesting experiment to try to come up with a new syntax for Lisp that appeals to more programmers than it does now ;)
>>
>> I really can't get past the parentheses.  I know Georg said it's an excuse, but I really, truly cannot understand most Lisp code because I can't tell which right paren out of a group of six is closing which left paren.  I'm sure bracket highlighting in a code editor can help, but why should that be necessary?  I'm sure a good deal of those parens can be stripped out, or replaced by other brackets, or just moved around to get a more algebraic syntax.
> 
> I completely agree. Lisp has a terrible "Hello, World" problem.

I understand I'm reaching fanboi status here, and I'll stop soon.  But:

# (print "Hello, World!")

doesn't seem too awful.

> The failure of Lisp to gain traction is a great demonstration of the importance of syntactic sugar. Poor old Lisp.

I don't think this is the primary reason.  As mentioned before, syntax is a part of it, but so is the total power given to the programmer.  This power leads to a lack of standard or cohesive libs, b/c it's so easy to make it exactly the way you want it.  I imagine that if some of the D power users wrapped themselves in Lisp for a while, they'd be able to do for themselves what they beg Walter to do for them in D.

BA

November 29, 2006
On Wed, 29 Nov 2006 11:25:43 -0500, Brad Anderson <brad@dsource.org> wrote:

>But the existing prefix notation is exactly why it can be extended so many ways with macros.  Change that and you lose most of, or at least a lot of, the metaprogramming facilities (see Dylan).

So don't change it. Just add a standard syntax-sugar library on top for expressions with precedence and associativity (which sadly Lisp - or at least Scheme - macros can't handle, but which can be handled by using a more Von Neumann approach).

Nemerle has been mentioned recently, and I've been reading up on it a bit today; my impression is very positive. You start out, from the beginning, using real-world, high-level, practical tools. There's a heavy functional flavour, so it helps to have played with something like Haskell in the past, but get past the "this is different" feeling and there is real workhorse stuff.

And of course it goes beyond the fact that this is a usable high-level language out of the box. Making it a .NET language is both an obvious plus point and my main reservation. It means there is a solid set of libraries to use, without the need for a whole bunch of Nemerle-specific porting. The obvious downside is that it is limited to the .NET platform - no systems-level coding, etc.

Anyway, it's not until you've got the tools to do 99% of your work that the 'by the way, if/else, for loops, etc. are just standard library macros - you can do them differently if you really need to' part becomes an issue.


Some people have mentioned a key problem with metaprogramming/code generation in terms of tools (e.g. the debugging issue). Well, I'm glad I've picked up the 'concept oriented' terminology from that XLR link because it helps me say this more easily...

It doesn't matter whether a concept is implemented directly in the compiler or in a library. What matters is whether the tools understand the concept. If you have a standard set of concepts in a library that handle 99% of all requirements, tools like debuggers can be written to be aware of them, and so the problem only relates to the 1% of code. The principle is not so different from having source-level debugging instead of assembler-level debugging. And even for that 1%, the alternatives are all IMO just as bad as generated code anyway. Code that has been force-fitted to badly matched language concepts is hard to understand and maintain, just like the generated code.

Of course if the library that describes a new concept could also give special instructions to the debugger on how to present it, along with perhaps documentation handling instructions etc etc, then that would be a very good thing. It would mean that you could treat a mature metaprogramming library much as you would a built-in compiler feature - so long as the library itself is working, you only worry about what you are doing with it, not the internals of how it works.

November 29, 2006
On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad@dsource.org> wrote:

>I don't think this is the primary reason.  As mentioned before, syntax is a part of it, but so is the total power given to the programmer.  This power leads to a lack of standard or cohesive libs, b/c it's so easy to make it exactly the way you want it.  I imagine that if some of the D power users wrapped themselves in Lisp for a while, they'd be able to do for themselves what they beg Walter to do for them in D.

Not really.

There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.

That means you have to deal with parsing to ASTs, manipulating ASTs, and back-end code generation. In short, you have to design a language and write a compiler. And you have to do it without the benefit of those high level tools, since you haven't written them yet - the bootstrap thing.

You can get Scheme libraries for parsing and so on, so you're not quite working from scratch, but you are working from a level that's not substantially different from using Yacc and C. Except, of course, that in C you've already got some of those higher-level tools, and if you need something higher-level than that, you could always use C++ or some other language that has parsing tools available for it.

I shouldn't need to point out that designing a language and writing a compiler from scratch isn't everyone's favorite pastime.

Having a standard one as part of the library, maybe with support for extending the dialect it provides - that sounds promising. And while Lisp implemented in D (as in dLisp) is a good thing if you have the need, the way to make me really sit up and take notice is to show me D implemented in Scheme.

November 29, 2006
Steve Horne wrote:
> On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad@dsource.org> wrote:
> 
>> I don't think this is the primary reason.  As mentioned before, syntax is a part of it, but so is the total power given to the programmer.  This power leads to a lack of standard or cohesive libs, b/c it's so easy to make it exactly the way you want it.  I imagine that if some of the D power users wrapped themselves in Lisp for a while, they'd be able to do for themselves what they beg Walter to do for them in D.
> 
> Not really.
> 
> There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.

I'm not following.  Do you have definitions or examples of these?  I did find this...

http://lambda-the-ultimate.org/node/1605

Not trying to be thick,
BA
November 29, 2006
On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad@dsource.org> wrote:

>Steve Horne wrote:

>> There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
>
>I'm not following.  Do you have definitions or examples of these?

No, but it's implicit in the subset of the Scheme language that I understand.

The term to look up is 'quoting'. A quoted expression may look like code, but to Scheme it is just a list of tokens. You pass that list as a parameter to a function that can make sense of it, and you have a new language extension. And the translation should happen at compile time, though at this point we are running into the limits of my knowledge of Scheme.
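
Roughly, and within those limits, the idea is something like this:

# ; quoting turns the expression into ordinary list data
# (define expr '(1 + 2 * 3))
# (car expr)      ; => 1
# (cadr expr)     ; => the symbol +
# ; a compile-time function can then walk that list and produce
# ; whatever prefix code it decides the notation should mean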

>http://lambda-the-ultimate.org/node/1605

I can't seem to access that ATM, but I'll give it another go later.

Going purely on the URL, though, lambdas (first class functions) aren't really the issue here. It's a powerful tool - one that's widely imitated these days - but it isn't a metaprogramming thing.

November 29, 2006
Steve Horne wrote:
> On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad@dsource.org> wrote:
> 
>> Steve Horne wrote:
> 
>>> There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
>> I'm not following.  Do you have definitions or examples of these?
> 
> No, but it's implicit in the subset of the Scheme language that I understand.

Okay, I have worked with Common Lisp, but not much with Scheme, although I did a bit of it while reading Structure and Interpretation of Computer Programs, an excellent book.

> 
> The term to look up is 'quoting'. A quoted expression may look like code, but to Scheme it is just a list of tokens. You pass that list as a parameter to a function that can make sense of it, and you have a new language extension. And the translation should happen at compile time, though at this point we are running into the limits of my knowledge of Scheme.
> 
>> http://lambda-the-ultimate.org/node/1605
> 
> I can't seem to access that ATM, but I'll give it another go later.
> 
> Going purely on the URL, though, lambdas (first class functions) aren't really the issue here. It's a powerful tool - one thats widely imitated these days - but it isn't a metaprogramming thing.
> 

Understood.  lambda-the-ultimate.org is a programming language discussion site, iirc.  Here's the google cache:

http://216.239.51.104/search?q=cache:o2sGhHoc57cJ:lambda-the-ultimate.org/node/1605+lisp+associativity&hl=en&gl=us&ct=clnk&cd=6

BA
November 29, 2006
On Wed, 29 Nov 2006 14:11:01 -0500, Brad Anderson <brad@dsource.org> wrote:

>Steve Horne wrote:
>> On Wed, 29 Nov 2006 11:47:10 -0500, Brad Anderson <brad@dsource.org> wrote:
>> 
>>> I don't think this is the primary reason.  As mentioned before, syntax is a part of it, but so is the total power given to the programmer.  This power leads to a lack of standard or cohesive libs, b/c it's so easy to make it exactly the way you want it.  I imagine that if some of the D power users wrapped themselves in Lisp for a while, they'd be able to do for themselves what they beg Walter to do for them in D.
>> 
>> Not really.
>> 
>> There are things you just can't do with Scheme macros. Associativity and precedence, for instance. This means that if you want to do these things, you have to go the Von Neumann route - treat code as data and manipulate it at compile time using Scheme functions.
>
>I'm not following.  Do you have definitions or examples of these?  I did find this...

Sorry, I'm being stupid. On reflection, you're asking for examples of things you can't do with Scheme macros.

Well, it's hard to provide examples of things that can't be done beyond listing them. Disproof by example is much stronger than proof by example. If your link gives examples of precedence and associativity using macros, well, that just makes me twice stupid.

The claim about associativity and precedence, though, just fell out of my reading of the Scheme manual. At the time, I could see no way to do it.

I'm aware that it is possible to build them in by creating an unambiguous set of BNF rules for a grammar (as opposed to the more usual approach of using disambiguating rules), but I couldn't see a way to do either with Scheme macros. I had the distinct impression that the matching is always from left to right.

Based on that, you can write (1 + 2 * 3) if you want, but the result will be 9, not 7.
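
Just to make it concrete, a naive left-to-right walker over the quoted list (names entirely made up, purely for illustration):

# ; no precedence: (1 + 2 * 3) folds as ((1 + 2) * 3) = 9,
# ; rather than 1 + (2 * 3) = 7
# (define (op sym)
#   (case sym ((+) +) ((-) -) ((*) *) ((/) /)))
# (define (infix-eval lst)
#   (if (null? (cdr lst))
#       (car lst)
#       (infix-eval (cons ((op (cadr lst)) (car lst) (caddr lst))
#                         (cdddr lst)))))
# (infix-eval '(1 + 2 * 3))   ; => 9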

November 29, 2006
I have the definite feeling that I'm confusing myself at the moment :-(


On Wed, 29 Nov 2006 15:57:03 -0500, Brad Anderson <brad@dsource.org> wrote:

>Understood.  lambda-the-ultimate.org is a programming language discussion site, iirc.  Here's the google cache:
>
>http://216.239.51.104/search?q=cache:o2sGhHoc57cJ:lambda-the-ultimate.org/node/1605+lisp+associativity&hl=en&gl=us&ct=clnk&cd=6

OK.

On a quick scan through that, there doesn't seem to be anything to say that Scheme macros can do associativity and precedence. That's fine by me, as I don't feel quite as stupid as I did a minute ago ;-)

It's an interesting link. There are a lot of languages mentioned that I have only a very superficial knowledge of - e.g. I've played with Prolog a bit, but although I knew there was parsing stuff there, I never used it. The 'take a look at prolog' bit of your link makes it look interesting, though.

Defining Haskell operators seemed easy, but there was something that worried me about it - I can't remember what.

A major issue mentioned on that link is having different precedence and associativity in different bits of the code. In Scheme, using quoting, that's not a problem of course - any more than it would be if quoting meant text strings as opposed to lists of tokens. The area where a particular syntax applies is delimited. A related issue *may* have been one of my Haskell concerns, though even if it was, there's a danger that I was reasoning from ignorance.
