May 01, 2022
On 5/1/2022 12:37 AM, Paulo Pinto wrote:
> On Saturday, 30 April 2022 at 17:14:24 UTC, Walter Bright wrote:
>> On 4/30/2022 1:02 AM, Paulo Pinto wrote:
>>> As if all those people doing CS research in programming languages needed D's existence to notice what has been known in academia for decades.
>>
>> Bjarne Stroustrup has a PhD in CS. Why didn't C++ have it? Why does every iteration of C++ make CTFE work more like D's?
>>
>> Why didn't any of the other mainstream native compiled languages have it? Fortran? Ada? Pascal? Modula 2? (The latter two by CS academic researcher Niklaus Wirth.)
>>
>> CTFE is a *huge* win. Why was this well-known thing languishing in complete obscurity?
> 
> C++ doesn't pretend to have invented features that preceded it by decades, other than what it might have taken from D.
> 
> Other programming languages had other design goals, which is not the same as claiming to have invented something.

That doesn't explain why C and C++ did not implement this well-known feature, nor why any of the other major natively compiled languages I mentioned didn't.

Instead, C went the route of macros. C++ went the route of the Turing-complete template metaprogramming language. I remember the articles saying how great C++ templates were because you could calculate factorials with them at compile time. I don't recall any of them saying "geez, why not just interpret a function at compile time?"
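(For contrast, a minimal sketch of the CTFE approach in present-day D, not anything those articles showed: the factorial is just an ordinary function, and the compiler interprets it wherever a constant is required.)

```d
// An ordinary function; nothing marks it as "metacode".
int factorial(int n)
{
    return n <= 1 ? 1 : n * factorial(n - 1);
}

// A context that requires a constant, so the compiler interprets the call.
enum f10 = factorial(10); // 3628800, computed at compile time

void main()
{
    import std.stdio : writeln;
    writeln(factorial(12)); // the very same function also runs at run time
}
```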

But if you can find one that did, I'll buy you a beer at DConf, too!

May 01, 2022
On 5/1/2022 12:33 AM, Paulo Pinto wrote:
> First Lisp compiler was in 1960's....

I know. And Lisp 1 was an interpreter, page 9 of:

http://jmc.stanford.edu/articles/lisp/lisp.pdf

I know perfectly well that interpreters have long evolved to generate native code. I did one myself (Symantec's Java) in the 1990s. I considered it for the Javascript interpreter I wrote around 2000.

I've also seen C interpreters in the 1980s. Why native C compilers never added compile-time function interpretation is a mystery. The UCSD P-System had interpreting compilers for C, Pascal, and Fortran in the 1980s.

***** Note that even the C interpreters would reject things like: `int a[foo()];` i.e. CTFE was not part of the *language* semantics. *****
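(To make that concrete: the D analogue of that declaration is legal, precisely because CTFE is part of the language semantics. A minimal sketch:)

```d
int foo() { return 3; }

// Legal in D: the array length must be a compile-time constant,
// so the compiler interprets foo().
int[foo()] a; // a static array of length 3
```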

After D did it, suddenly the other native languages moved in that direction. If you have another explanation for the timing, I'd like to hear it.

If you have a reference to a natively compiled language specification that had compile-time constant-expressions that could interpret a function at compile time, I'd appreciate it. No, not an interpreted language that JITs whatever it can.

Thanks!
May 01, 2022
On Sunday, 1 May 2022 at 09:04:11 UTC, Walter Bright wrote:
> [...]
>
> After D did it, suddenly the other native languages moved in that direction. If you have another explanation for the timing, I'd like to hear it.

At the end of the day, the busy programmer doesn't care who invented what, or when. He wants to get something done with as little friction as possible, and chooses a tool appropriately.

He might have a look at D and get seduced by its syntax, which is quite similar to C, C++, C#, and Java, but more pleasant and succinct. As he spends more time with the language, he realises it's even better than expected, as he learns the less obvious features: CTFE, static if, templates, ranges/algorithms...

His next step is to try to use D for everything. Why not? D is amazing as a language, and the compiler is faster than C++'s, he tells himself...

However, as the size of the project grows, compile times are no longer that great compared to Java, C#, or C.
As he starts encountering friction, he realises that a lot of it derives from issues in the language/compiler, that some of those issues have been there for a long time, and that the community is not big enough and currently has no interest in addressing them.

He realises the community's direction is not to improve the busy programmer's life, but to increase its own enjoyment and the usability of the features it uses, and to find ways to claim D is better than other languages: an illusion based on the belief that more (unique) features are what make a programming language better, despite very obvious real-world evidence to the contrary.

The D community is talented and above average in knowledge and skills. But that is not representative of a small company building a simple SaaS product, website, or web application. So even if the community (largely made up of enthusiasts) is comfortable with the status quo, that does not mean the "real world" is.

D has a lot of good things, but the focus has never been on making D popular; sometimes it felt the goal was to keep D as "elite" as possible.

The reason I dropped D after over six years of using it exclusively was a loss of belief in the direction, and the fact that after many years not a lot had improved with regard to friction for the busy programmer. It really felt like an enthusiasts' project, not an effort to build a widely adopted tool.

This might be an outdated view.
May 01, 2022

On Sunday, 1 May 2022 at 08:26:50 UTC, Walter Bright wrote:


I think it is obvious to the casual observer that D had an enormous influence.
After D, it became rare for a new native language to ship without CTFE, fast build times, static if, or unittest blocks.
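(As a concrete instance of one item on that list, a minimal sketch of a D unittest block; the function here is hypothetical:)

```d
int twice(int x) { return 2 * x; }

// Tests live next to the code they exercise; compile with -unittest to run them.
unittest
{
    assert(twice(2) == 4);
    assert(twice(-3) == -6);
}
```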

My reading of the prior art on CTFE, from some quick research:

2003:
The EDG presentation from 2003 does indeed describe the same thing:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1471.pdf

What is apparent there is that "metacode" is seen as code separate from normal code, whereas one could say the D innovation is to avoid this distinction, bringing it much closer in spirit to LISP. And indeed, for D programmers, reflecting on the structure of the program progressively becomes second nature after a few years. Possibly core.reflect would enhance this effect.
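(A minimal sketch of what "code as data" feels like in D; the names here are illustrative, the point is that the reflecting code is ordinary D, not a separate metalanguage:)

```d
struct Point { int x; int y; }

// Ordinary-looking code that inspects a type and builds a string,
// all interpreted at compile time.
string fieldList(T)()
{
    string s;
    foreach (name; __traits(allMembers, T))
        s ~= name ~ " ";
    return s;
}

static assert(fieldList!Point() == "x y ");
```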

The EDG authors probably knew very well what they were getting into, and envisioned all it could do for native programming. Yet C++ didn't get constexpr until C++11, eight years later.

2007:
CTFE was in DMD 1.006 (2007). Critically, with no committee or other implementers to convince, it could be adopted immediately.

One could say D's merit was to push the implementation through, making CTFE so ubiquitous and easy that code never written with metaprogramming in mind often works at compile time on the first try, and to push other languages to adopt it.

When you are not the first ever to implement something, but do it in a way that has better UX, you do more to popularize the feature than merely inventing it would. It is a bit strange that the obvious syntax didn't take off likewise, but maybe we can attribute that to programmers' "new thing => loud syntax" bias.

Also, D tends to implement CTFE more completely (with objects, reference types, pointers, floats, exp/pow/log...); compare: https://nim-lang.org/docs/manual.html#restrictions-on-compileminustime-execution
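(A minimal sketch of that completeness: class allocation and floating-point math inside CTFE, which works in current DMD; the names are illustrative.)

```d
class Accumulator
{
    double total = 0;
    void add(double x) { total += x; }
}

// Allocates a class instance and does floating-point arithmetic,
// entirely inside the compile-time interpreter.
double sumAtCompileTime()
{
    auto acc = new Accumulator;
    foreach (x; [1.5, 2.5, 4.0])
        acc.add(x);
    return acc.total;
}

static assert(sumAtCompileTime() == 8.0);
```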

(Bonus: the point of view of the Nim designer: https://www.mail-archive.com/digitalmars-d@puremagic.com/msg88688.html. Which of course is possible, as some ideas "float in the air", waiting to be independently (re)discovered.)

static if:
The sanctioned way before static if / template constraints was "policy-based design" and the techniques popularized by Alexandrescu in Modern C++ Design (2001). From then on, "meta-programming" in a C++ context primarily looked like "traits" + template specializations + type lists (AliasSeq, but recursive) + SFINAE, with enormous damage done to C++ build times across the world: templates are much slower to compile than CTFE. Such a style is very much non-LISPy, requiring extra data and headaches instead of treating code as data. D codebases have relatively few external code generators, while they are common in C++ contexts.

All in all, meta-programming in D is a very different beast than it used to be in C++, and doesn't require expert knowledge, making it quite a cultural change coming from C++.
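(A minimal sketch of the D side of that contrast: one template, with static if choosing the implementation, where C++ would have used specializations or SFINAE; the function is hypothetical.)

```d
import std.traits : isFloatingPoint, isIntegral;

T midpoint(T)(T a, T b)
{
    static if (isFloatingPoint!T)
        return (a + b) / 2;      // fine for floats
    else static if (isIntegral!T)
        return a + (b - a) / 2;  // avoids integer overflow
    else
        static assert(0, "unsupported type " ~ T.stringof);
}

static assert(midpoint(2, 8) == 5);
static assert(midpoint(1.0, 2.0) == 1.5);
```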

slices:
Go and Rust had them from the start, Nimrod got them, etc.
I unfortunately lack the time to do complete research on prior art, because it seems surprising to me that no other native language had them before D. I have a strong feeling that, like other successful D features, they strongly influenced other designs.
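(For readers coming from elsewhere, a minimal sketch of what a D slice is: a bounds-checked (pointer, length) view into an array.)

```d
void main()
{
    int[] a = [10, 20, 30, 40, 50];
    int[] s = a[1 .. 4]; // view of [20, 30, 40]; no copy is made
    s[0] = 99;           // writes through to a[1]
    assert(a[1] == 99);
    assert(s.length == 3);
}
```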

In sharp contrast, there are less impressive ideas that, like it or not, were left behind:

  • pure
  • TLS by default
  • shared
  • transitive immutability
  • insert features you hate here
May 01, 2022

On Sunday, 1 May 2022 at 13:35:46 UTC, Guillaume Piolat wrote:

Rust is copying D.
C++ is copying D.

D is very good.

May 01, 2022

On Sunday, 1 May 2022 at 13:52:56 UTC, zjh wrote:

And D is very creative and elegant.
I believe: yes, D can!

May 01, 2022

On Sunday, 1 May 2022 at 14:04:04 UTC, zjh wrote:

> And D is very creative and elegant.
> I believe: yes, D can!

I like and respect those who keep the original spirit of a language.
Those plagiarists are born ugly.

May 01, 2022

On Sunday, 1 May 2022 at 08:10:28 UTC, Walter Bright wrote:

> I.e. if you write portable code it is portable. But that wasn't your complaint - which was about getting portability wrong. C++ offers many more options for that.

The code wasn't wrong; it was C code that I ported to C++ and modified to be allocation-free, so the delay lengths had to be known at compile time. If I had ported it to D, maybe it would have saved me a few keystrokes, but I would not have realized that the delay length was computed both at compile time and at runtime. So I prefer consteval over constexpr for such cases, because it gives stronger typing.

I like strong typing. I also don't like the implicit conversions from int to float and from float to double, but it seems the C family is stuck on them.

Strong typing is very useful when you port or modify code written by others, or when you refactor your own code. A modern language ought to provide gradual typing IMHO so that you can increase the rigidity of the model as it evolves.
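(A minimal sketch of the hazard, in D, which shares the C-family conversion rule:)

```d
void main()
{
    int big = 16_777_217; // 2^24 + 1: not exactly representable in a 32-bit float
    float f = big;        // accepted implicitly, no warning
    assert(cast(long) f == 16_777_216); // the value was silently rounded
}
```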

> I haven't made it clear. There is no ambiguity in D about when a function is run at compile time or run time. None. Zero. It is entirely unnecessary to add a keyword for that.

There is no ambiguity for the compiler, but that is not the same as programmers having a full overview of what goes on in a complex code base that they might not even have written themselves. consteval is just a stronger version of constexpr; what D does is roughly equivalent to making everything constexpr in C++.
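(For reference, a minimal sketch of the D rule being discussed: the evaluation context alone decides, with no annotation on the function itself.)

```d
int f() { return 42; }

enum a = f();     // a constant is required: always evaluated at compile time
int g = f();      // module-scope initializer: also compile time

void main()
{
    int b = f();  // ordinary local initializer: always run time
}
```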

> Many times obvious things are obvious only in retrospect, and we get so comfortable with them we can no longer imagine otherwise.

But in this case it is obvious. It is so obvious that people added macro languages to their builds to get similar effects.

> I implemented modules in 10 lines of code for C. It seems so obvious - why didn't I do it 40 years ago? I can't explain it. Why did it take C++ 35 years to come up with modules?

Probably because C++ started as an addition to C, and they got around it in a more generic way by introducing namespaces. I like namespaces, btw.

The only reason to add modules to C++ is that people are undisciplined and #include everything rather than just what they need. There are no technical reasons to add modules to C++, IMHO.

> I kick myself about Autotune. It's so obvious even the inventor didn't think of it. His wife, a singer, casually mentioned to him that it would be nice to have a device that fixed her pitch.

Autotune is more about fashion and marketing, being picked up by influential producers; by the time it became a plague, the music market was accustomed to an "electronic sound" on the radio. Fairly advanced uses of phase vocoders and pitch trackers existed in music before it. There is a difference between existing and becoming fashionable. The original Autotune effect sounds bad in terms of musical quality. You could say the same thing about bit crushers (basically taking a high-fidelity signal and zeroing the lower bits), which create aliasing in the sound. Things that "objectively" sound bad can become fashionable for a limited time period (or become a musical style and linger on).

May 01, 2022

On Sunday, 1 May 2022 at 14:36:12 UTC, Ola Fosheim Grøstad wrote:

> Autotune is more about fashion and marketing
>
> Things that "objectively" sound bad can become fashionable for a limited time period (or become a musical style and linger on).

It has been just a fad for over 23 years now.

May 01, 2022

On Sunday, 1 May 2022 at 14:36:12 UTC, Ola Fosheim Grøstad wrote:

> Autotune is more about fashion and marketing, being picked up by influential producers; by the time it became a plague, the music market was accustomed to an "electronic sound" on the radio. Fairly advanced uses of phase vocoders and pitch trackers existed in music before it.

There was no automatic pitch correction before Autotune. There were pitch shifters and offline editing, but nothing automatic and real-time, as far as I remember. I'm not even sure phase vocoders would have been feasible on the DSP hardware of those days.