January 22, 2014
On Wednesday, 22 January 2014 at 10:38:40 UTC, bearophile wrote:
> Walter Bright:
>
>> http://www.reddit.com/r/programming/comments/1vtm2l/so_you_want_to_write_your_own_language_dr_dobbs/
>
> Thank you for the simple nice article.
>
>
>>The poisoning approach. [...] This is the approach we've been using in the D compiler, and are very pleased with the results.<
>
> Yet, even in D, most of the error messages after the first few are often not very useful to me. So perhaps I'd like a compiler switch that shows only the first few error messages and then stops the compiler.

Could you give an example? We've tried very hard to avoid useless error messages; there should be only one error message for each bug in the code.
Parser errors still generate a cascade of junk, and the "cannot deduce function from argument types" message is still painful -- is that what you mean? Or something else?
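(Editor's aside: the "poisoning" approach from the article can be sketched in a few lines of Python. The idea is that a failed subexpression is assigned a special ERROR type, and any node that sees ERROR in an operand stays silent instead of piling on. The names and the toy type checker here are illustrative, not taken from DMD.)

```python
# Poisoning in miniature: a toy type checker where one real bug
# produces exactly one message, no matter how deeply it is nested.

ERROR = "<error>"  # the poison value; illustrative name

def typecheck(expr, errors):
    # expr is an int literal, a str literal, or ("+", lhs, rhs)
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, str):
        return "str"
    op, lhs, rhs = expr
    lt = typecheck(lhs, errors)
    rt = typecheck(rhs, errors)
    if ERROR in (lt, rt):
        return ERROR          # poisoned operand: report nothing further
    if lt != rt:
        errors.append(f"cannot add {lt} and {rt}")
        return ERROR          # report once, then poison the result
    return lt

errors = []
# (1 + "a") + (2 + 3): one real bug, two levels deep
typecheck(("+", ("+", 1, "a"), ("+", 2, 3)), errors)
assert errors == ["cannot add int and str"]   # exactly one message
```

Without the ERROR short-circuit, the outer `+` would also complain about its mismatched operands, which is exactly the cascade the article describes avoiding.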
January 22, 2014
On 1/22/2014 3:40 AM, Chris wrote:
> Syntax is getting simplified due to the fact that the listener "knows what we
> mean", e.g. "buy one get one free". I wonder to what extent languages will be
> simplified one day. But this is a topic for a whole book ...

There was this article recently:

http://www.onthemedia.org/story/yesterday-internet-solved-20-year-old-mystery/

about how English is so redundant that one can write sentences using just the first letter of each word and still be understood.
January 22, 2014
On 1/22/2014 4:53 AM, Don wrote:
> On Wednesday, 22 January 2014 at 04:29:05 UTC, Walter Bright wrote:
>> http://www.reddit.com/r/programming/comments/1vtm2l/so_you_want_to_write_your_own_language_dr_dobbs/
>>
>
> Great article. I was surprised that you mentioned lowering positively, though.
>
> I think from DMD we have enough experience to say that although lowering sounds
> good, it's generally a bad idea. It gives you a mostly-working prototype very
> quickly, but you pay a heavy price for it. It destroys valuable semantic
> information. You end up with poor quality error messages, and
> counter-intuitively, you can end up with _more_ special cases (eg, lowering
> ref-foreach in DMD means ref local variables can spread everywhere). And it
> reduces possibilities for the optimizer.
>
> In DMD, lowering has caused *major* problems with AAs, foreach, and
> builtin functions, and with some of the transformations that the inliner makes. It's
> also caused problems with postincrement and exponentiation. Probably there are
> other examples.
>
> It seems to me that what does make sense is to perform lowering as the final
> step before passing the code to the backend. If you do it too early, you're
> shooting yourself in the foot.

On the other hand, the lowering of loops to for uncovered numerous bugs, and the lowering of scope to try-finally made it actually implementable and fairly bug-free.
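(Editor's aside: the scope-to-try-finally lowering Walter mentions has a close, documented analogue in Python, where the `with` statement is specified as sugar over try/finally. The sketch below is that analogy, not D code; the two functions are behaviorally equivalent.)

```python
# Lowering in miniature: Python's "with" is defined in terms of
# try/finally, much as D lowers scope(exit) to try/finally.

class Resource:
    def __init__(self, log):
        self.log = log
    def __enter__(self):
        self.log.append("open")
        return self
    def __exit__(self, *exc):
        self.log.append("close")
        return False

def high_level(log):
    # The surface form the programmer writes.
    with Resource(log):
        log.append("work")

def lowered(log):
    # What the compiler conceptually emits for the "with" above.
    mgr = Resource(log)
    mgr.__enter__()
    try:
        log.append("work")
    finally:
        mgr.__exit__(None, None, None)

log1, log2 = [], []
high_level(log1)
lowered(log2)
assert log1 == log2 == ["open", "work", "close"]
```

Because the lowered form is the specification, the cleanup semantics come for free; the cost, as Don notes, is that diagnostics and optimizations see try/finally rather than the construct the programmer wrote.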
January 22, 2014
Don:

> Could you give an example? We've tried very hard to avoid useless error messages; there should be only one error message for each bug in the code.
> Parser errors still generate a cascade of junk, and the "cannot deduce function from argument types" message is still painful -- is that what you mean? Or something else?

There are situations where I see lots and lots of error messages caused by some detail that breaks the instantiability of some function from std.algorithm.

While trying to find you an example, I have found and filed this instead :-)
https://d.puremagic.com/issues/show_bug.cgi?id=11971

Bye,
bearophile
January 22, 2014
On 22.01.2014 14:28, Dejan Lekic wrote:
> On Wednesday, 22 January 2014 at 10:38:40 UTC, bearophile wrote:
>>
>> In Haskell the GHC compiler goes one step further: it translates all
>> the Haskell code into an intermediate language named Core. Core is not
>> the language of a virtual machine; it's still a functional language,
>> only simpler, with the many distinct surface constructs reduced to a
>> small number of mostly functional forms.
>>
>
> Same story with Erlang, as far as I know.

Most likely due to its Prolog influence, which also does it.
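(Editor's aside: the Core idea bearophile describes can be shown in miniature. The toy below is nothing like GHC's real Core; it only illustrates the shape of such a lowering: a `let` form in the surface language is rewritten into lambda application, so the core evaluator needs fewer constructs.)

```python
# Toy lowering to a smaller core: "let x = e1 in e2" is rewritten as
# the application (lambda x: e2)(e1), so the evaluator below has no
# case for "let" at all.

def desugar(expr):
    # Surface forms: ("lit", n), ("var", x), ("lam", x, body),
    # ("app", f, a), ("add", a, b), and the surface-only ("let", x, e1, e2).
    tag = expr[0]
    if tag in ("lit", "var"):
        return expr
    if tag == "lam":
        return ("lam", expr[1], desugar(expr[2]))
    if tag in ("app", "add"):
        return (tag, desugar(expr[1]), desugar(expr[2]))
    if tag == "let":                      # the lowering itself
        _, x, e1, e2 = expr
        return ("app", ("lam", x, desugar(e2)), desugar(e1))
    raise ValueError(tag)

def evaluate(expr, env=None):
    # The core evaluator: literals, variables, lambdas, application, add.
    env = env or {}
    tag = expr[0]
    if tag == "lit":
        return expr[1]
    if tag == "var":
        return env[expr[1]]
    if tag == "add":
        return evaluate(expr[1], env) + evaluate(expr[2], env)
    if tag == "lam":
        x, body = expr[1], expr[2]
        return lambda v: evaluate(body, {**env, x: v})
    if tag == "app":
        return evaluate(expr[1], env)(evaluate(expr[2], env))
    raise ValueError(tag)

# let x = 21 in x + x
prog = ("let", "x", ("lit", 21), ("add", ("var", "x"), ("var", "x")))
assert evaluate(desugar(prog)) == 42
```

The payoff is the one bearophile points at: every later pass (here, just the evaluator) deals with fewer constructs. The cost is Don's point: by the time `let` has become an application, any diagnostic about it can no longer mention `let`.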
January 22, 2014
On 1/22/14 4:53 AM, Don wrote:
> On Wednesday, 22 January 2014 at 04:29:05 UTC, Walter Bright wrote:
>> http://www.reddit.com/r/programming/comments/1vtm2l/so_you_want_to_write_your_own_language_dr_dobbs/
>>
>
> Great article. I was surprised that you mentioned lowering positively,
> though.
>
> I think from DMD we have enough experience to say that although lowering
> sounds good, it's generally a bad idea. It gives you a mostly-working
> prototype very quickly, but you pay a heavy price for it. It destroys
> valuable semantic information. You end up with poor quality error
> messages, and counter-intuitively, you can end up with _more_ special
> cases (eg, lowering ref-foreach in DMD means ref local variables can
> spread everywhere). And it reduces possibilities for the optimizer.
>
> In DMD, lowering has caused *major* problems with AAs, foreach, and
> builtin functions, and with some of the transformations that the inliner
> makes. It's also caused problems with postincrement and exponentiation.
> Probably there are other examples.
>
> It seems to me that what does make sense is to perform lowering as the
> final step before passing the code to the backend. If you do it too
> early, you're shooting yourself in the foot.

There's a lot of value in defining a larger complex language in terms of a much simpler core. This technique has been applied successfully by a variety of languages (Java and Haskell come to mind).

For us, I opine that the scope statement would've had a million subtle issues if it weren't defined in terms of try/catch/finally.

My understanding is that your concern is related to the stage at which lowering is performed, which I'd agree with.


Andrei

January 22, 2014
On 1/22/2014 3:21 PM, Andrei Alexandrescu wrote:
> My understanding is that your concern is related to the stage at which lowering
> is performed, which I'd agree with.

I also think we did a slap-dash job of it, not that the concept is wrong.

January 23, 2014
On Wednesday, 22 January 2014 at 18:46:06 UTC, Walter Bright wrote:
> On 1/22/2014 3:40 AM, Chris wrote:
>> Syntax is getting simplified due to the fact that the listener "knows what we
>> mean", e.g. "buy one get one free". I wonder to what extent languages will be
>> simplified one day. But this is a topic for a whole book ...
>
> There was this article recently:
>
> http://www.onthemedia.org/story/yesterday-internet-solved-20-year-old-mystery/
>
> about how english is so redundant one can write sentences using just the first letter of each word, and it is actually understandable.

These examples are more about context than redundancy in the grammar. This is very interesting, because the burden shifts more and more onto the listener and away from the speaker. The speaker can omit things, relying on the listener's common sense or knowledge of the world (or "you know what I mean" skills). In the beginning, languages were quite complicated (8 or more cases, inflections), but over the centuries things have been simplified, probably because humans are experienced enough that they can now trust the "interpreter" in the listener's head.
A good example is headlines. A classic is "Driver refused license". Everybody will assume that it was not the driver who refused the license (the default assumption, or the _unmarked case_). If it had in fact been the driver who refused the license, the headline would have been phrased differently; some sort of linguistic flag would have been raised. This goes into the realm of pragmatics, a very interesting discipline. Some of the concepts found in natural languages can also be found in programming languages. I find it extremely interesting how the human mind (not just language) is reflected in programming languages.

January 23, 2014
On 1/23/2014 5:24 AM, Chris wrote:
> I find it extremely interesting how the human
> mind (not just language) is reflected in programming languages.
>

The way I usually see it is that the human mind HAS to be reflected in programming languages, as that's the whole point.

We already knew how to program computers back with manual switches, Altair-style. Every programming tool since then (and *including* Altair-style) has fundamentally been about bridging the gap between the way humans work and the way computers work. That naturally requires that the tool (ex. programming language) reflects a lot about the core nature of both humans and computers, because the language's whole job is to interface with both.

January 24, 2014
On Thursday, 23 January 2014 at 20:11:15 UTC, Nick Sabalausky wrote:
> On 1/23/2014 5:24 AM, Chris wrote:
>> I find it extremely interesting how the human
>> mind (not just language) is reflected in programming languages.
>>
>
> The way I usually see it is that the human mind HAS to be reflected in programming languages, as that's the whole point.
>
> We already knew how to program computers back with manual switches, Altair-style. Every programming tool since then (and *including* Altair-style) has fundamentally been about bridging the gap between the way humans work and the way computers work. That naturally requires that the tool (ex. programming language) reflects a lot about the core nature of both humans and computers, because the language's whole job is to interface with both.

Yes, there is no other way. Humans cannot create anything that is not based on the human mind. However, it is interesting to see how it is done: man against machine (or rather, man in machine), making a computer work the way we work. Even the simplest things like

x++;
x += 5;

are fascinating. This is already reflected in the development of writing systems, long before there was any talk of computers. It is also interesting to see how different human ways of tackling problems are enshrined in programming languages, e.g. the ever-patronizing Python style vs. the C style (";"). One could write a book about it.