February 18, 2016
On 2/18/16 8:11 AM, Márcio Martins wrote:
> On Thursday, 18 February 2016 at 12:05:12 UTC, Jonathan M Davis wrote:
>> On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
>>> On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins wrote:
>>>> I suppose it's a lot easier to address the compilation speed issue
>>>> in LDC/GDC than to improve and maintain DMD's backend to the
>>>> expected levels, right?
>>>
>>> LLVM has about 2.5 million lines of code. I am anything but sure
>>> that it is easy to improve compilation speed.
>>
>> On some level, I would expect compilation speed and generating
>> well-optimized binaries to be mutually exclusive. To get those extra
>> optimizations, you usually have to do more work, and that takes more
>> time. I'm sure that some optimizations can be added to dmd without
>> particularly compromising compilation speed, and gdc and ldc can
>> probably be made to compile faster without losing out on
>> optimizations, but you can only go so far without either losing out on
>> compilation speed or on optimizations. And obviously, it's not
>> necessarily easy to make improvements to either, regardless of whether
>> it comes at the cost of the other.
>>
>> - Jonathan M Davis
>
> I agree with that. It also means that it would be considerably easier to
> have a setting in LDC/GDC that generates slightly worse code and
> compiles slightly faster... perhaps never reaching the speed of DMD, but
> compilation speed is not the only factor, is it?
>
> GCC/LLVM have many more supported platforms and architectures, produce
> faster code, and have large communities behind them, constantly
> optimizing and modernizing, backed by IT giants like Google, Apple, ...
>
> I cannot say for GCC, but LDC also has considerably better tooling,
> such as the sanitizers.
>
> LDC also seems to be the closest to supporting all major platforms and
> architectures, including iOS and Android, which are huge markets. It
> supports Win64/Win32 (experimental) out-of-the-box.
>
> Both LDC and GDC have no weird legal strings attached.
> Both can be distributed with major Linux distros.

Which of these advantages cannot be taken advantage of today?

> All that DMD has going for it is its compilation speed.

Walter does most of the feature implementation work. Having a familiar back-end codebase is a big asset. Compilation speed is a big asset, too, probably not as big.

> These are all big points towards having more users experience and enjoy
> D as we do!
>
> To get more contributors, more people have to use and believe in the
> language. DMD has a lot of clear barriers for this.
>
> Really, not a lot has to change to start with, just fix the installers
> and slap the official tag on either LDC or GDC.

A step everybody would agree is good would be to make it easy for the three compilers to stay in sync.

Thanks for your work on the GC!


Andrei

February 18, 2016
Lots of programmers out there use and love languages that are far slower than any code DMD produces (think JavaScript, Python, Ruby). So I see no point here. If someone is learning D and knows there are different compilers available, they will find out what the differences are. OpenJDK's JVM is not the best JVM in the world, yet millions of people use it.

What I find useful about having DMD as the *reference compiler* is having a compiler that always carries the latest language changes.
February 18, 2016
On Thursday, 18 February 2016 at 13:23:34 UTC, Andrei Alexandrescu wrote:

> Which of these advantages cannot be taken advantage of today?
>

I suppose that if you combine the feature sets of all the compilers, you can to some degree get the best of all worlds. But the compiler *representing* the language in the wild and in benchmarks could be the one whose offering fits the largest number of potential users with the least possible friction towards adoption, could it not?
Is it optimal that the compiler labelled *official* offers the fewest "advantages" of all?

There is "Strong optimization" under LDC and GDC in the downloads page, however, we still see people downloading DMD and benchmarking with it, don't we? Yes, people don't read a lot on the web, as soon as they see "official" most people pick that and stop reading.

> Walter does most of the feature implementation work. Having a familiar back-end codebase is a big asset. Compilation speed is a big asset, too, probably not as big.
>

I agree, but I don't see why this would have to change. It shouldn't change. Frontend development could happen on DMD as the *reference* compiler.

> A step everybody would agree is good would be to make it easy for the three compilers to stay in sync.
>

That would be the cherry on top.
February 18, 2016
On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins wrote:
> I agree, but I don't see why this would have to change. It shouldn't change. Frontend development could happen on DMD as the *reference* compiler.

And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be?

- Jonathan M Davis
February 18, 2016
On Thursday, 18 February 2016 at 13:05:53 UTC, bachmeier wrote:
> "If you're careful, as fast as C++" isn't by itself the most compelling sales pitch.

That's never going to be a good sales pitch, even if it were true 100% of the time. If that's all someone cares about, they're just going to continue to use C++. Why bother switching? It may matter that their performance isn't going to take a huge hit by switching to D, but it's all of the other stuff about D that's going to actually get someone interested - the stuff that it does better than other languages, not the stuff that it's "as good at" as other languages. And as long as we have alternative compilers which produce code on par with the big C++ compilers, I really don't think that we have an issue here.

- Jonathan M Davis
February 18, 2016
On 02/18/2016 09:22 AM, Dejan Lekic wrote:
> Lots of programmers out there use and love languages that are far slower
> than any code DMD produces (think JavaScript, Python, Ruby). So I see no
> point here.

While that's true, my impression is that most of the users and fans of those languages use them *because* they're fundamentally dynamic (unlike D), deliberately lack every feature they possibly CAN lack (unlike D), and lack all the compile-time safety that D promotes as features.

So regarding those languages' reduced speed, while many of their users don't care one bit ("why aren't you a consumer whore like me? go buy a new machine, you dinosaur!"), there seems to also be a large population that merely *tolerates* the lower speed for the sake of the dynamicness and the lack-of-compile-time-anything that they love. The first group will likely never be tempted by D regardless, but for the second group, a language that's fast like C/C++ but not nearly as unproductive IS appealing, and even seems to be something they're often on the lookout for.


February 18, 2016
On Thursday, 18 February 2016 at 15:36:42 UTC, Jonathan M Davis wrote:
> On Thursday, 18 February 2016 at 14:23:12 UTC, Márcio Martins wrote:
>> I agree, but I don't see why this would have to change. It shouldn't change. Frontend development could happen on DMD as the *reference* compiler.
>
> And what exactly is the difference between the "official" compiler and the "reference" compiler supposed to be?
>
> - Jonathan M Davis

"official" carries a connotation of endorsement doesn't it? In other words, if you are given a choice of 3 and you know very little about each, which would you predict would give you a better user experience?

The *reference* compiler in this case is the one that most closely follows the bleeding edge of the language spec, which new users don't benefit much from. In this case it's also where all the frontend development would happen. But what we call it doesn't really matter to end users.

What I have been defending thus far is that we could entertain the possibility that end users would be better off if we "suggested" they try out one of the other compilers before they try DMD. The easiest way to suggest that is to stamp "official" on one of the stronger alternatives. Once installers for LDC and GDC are on par with DMD's, is there still a pragmatic reason to suggest DMD to new users, given that all DMD has going for it, from a new user's perspective, is compilation speed?

For everyone else nothing would change; we'd go about our daily lives, using our favorite compiler as always. But meanwhile, people exploring and looking to try D could try out its amazing features and get first-hand proof that these awesome features come at no efficiency cost, as advertised.
February 18, 2016
On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
> On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins wrote:
>> I suppose it's a lot easier to address the compilation speed issue in LDC/GDC than to improve and maintain DMD's backend to the expected levels, right?
>
> LLVM has about 2.5 million lines of code. I am anything but sure that it is easy to improve compilation speed.
>
> Regards,
> Kai

Sorry for being off topic,

Rustc (which uses LLVM) has a parallel codegen mode that trades some optimization for a (major, AFAIK?) decrease in compilation time when compiling multiple files. Would it be possible for LDC to offer the same thing without a major rewrite? I'm unfamiliar with the LDC codebase, which is why I ask.
Probably worth noting that even with parallel codegen, rustc is still far slower than ldc.
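
For context, rustc exposes this mode through its codegen-units option; a minimal invocation (the file name here is purely illustrative) would look like:

    # Split codegen into 4 units that LLVM can process in parallel,
    # trading some cross-unit optimization for faster compiles.
    rustc -C codegen-units=4 main.rs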

reference:
https://internals.rust-lang.org/t/default-settings-for-parallel-codegen/519
February 18, 2016
On Thursday, 18 February 2016 at 12:16:49 UTC, Radu wrote:
> As a casual user of the language, I see a fragmentation and waste of resources here, with people developing in mainline and then some of you LDC guys catching up.

As Iain already pointed out, the main problem is (undocumented or weird) AST changes. These sometimes make a merge painful. This can (and will) get better.

This is IMHO the only "waste". Nobody on the LDC team does frontend development; we are all focused on the glue layer.

> My simple assumption is that if the dmd backend is presumably no longer maintained, a lot of the core dmd people can focus on fixing whatever problems the frontend or glue layers have.

As far as I know only Walter (and Daniel I think) work on the backend. This is not "a lot of the core dmd people".

> This could only mean that you core LDC guys could focus on LLVM backend optimizations (both codegen- and performance-related). I'm going to assume that those kinds of optimizations are also constantly being done by upstream LLVM, so more wins here.

By chance I am an LLVM committer, too. But the LDC team only focuses on getting the glue library and the runtime library right. Adding new useful optimizations is hard work. The people working on it are either researchers or backed by a big company.

> Users will not magically turn into contributors if their perception is that there is always going to be a catch-up game to play somewhere. Not to mention that if one wants to get something into LDC, one has to commit it to mainline, which is DMD; you just multiplied the know-how someone needs to do some useful work...

It depends on the feature you want. If you want a new language feature, then yes. But then you do not change LDC; you change the language specification and therefore the reference compiler.

You can add a lot of features without ever touching DMD frontend code. The sanitizers, for example. Or the not-yet-merged PR for profile-guided optimizations.

> And finally, just pointing people to ldc/gdc (always a version or two behind, another grief) each time dmd performance is poor looks awfully wrong.

I really find this "speed" argument doubtful. My experience is that if you really need performance, you must *know* what you are doing. Just picking some code from a web site, compiling it, and then complaining that the resulting binary is slower than that of language xy is not a serious approach.

For a novice user, LDC can be discouraging: just type ldc2 -help-hidden. But you may need to know about these options to, e.g., enable the right auto-vectorizer for your problem.
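
For instance, something like the following; I'm assuming the flag spellings of LLVM's hidden vectorizer options here, so verify them against your ldc2 -help-hidden output before relying on them:

    # -O3 enables the standard optimization pipeline; the hidden LLVM
    # options explicitly force the loop and SLP vectorizers on.
    ldc2 -O3 -vectorize-loops -vectorize-slp main.d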

I once wrote an MD5 implementation in pure Java which was substantially faster than the reference implementation in C from RFC 1321 (compiled with gcc -O3). C is not faster than Java if you know Java but not C. The same is true for D.

I really like the compiler diversity. What I miss (hint!) is a program to verify compiler/backend correctness: just generate a random D program, compile it with all three compilers, and compare the outputs. IMHO we could find a lot of backend bugs this way. This would help all D compilers.
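
A minimal sketch of what such a harness might look like, assuming a separate generator has already emitted a random program into gen.d (the file names, output names, and exact output-file flags are illustrative and may need adjusting per compiler version):

    import std.process : execute;
    import std.stdio : writeln;

    void main()
    {
        // gen.d is assumed to come from the random-program generator.
        immutable src = "gen.d";
        string[string] outputs;
        foreach (cc; ["dmd", "ldc2", "gdc"])
        {
            immutable exe = "gen_" ~ cc;
            // gdc names its output with -o; dmd and ldc2 use -of.
            auto args = cc == "gdc"
                ? [cc, src, "-o", exe]
                : [cc, src, "-of" ~ exe];
            execute(args);                               // compile
            outputs[cc] = execute(["./" ~ exe]).output;  // run, capture stdout
        }
        if (outputs["dmd"] != outputs["ldc2"] || outputs["dmd"] != outputs["gdc"])
            writeln("mismatch: the three binaries disagree on ", src);
    }

Each run that prints "mismatch" gives a self-contained reproduction case to minimize and file against the offending backend.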

Regards,
Kai
February 18, 2016
On Thursday, 18 February 2016 at 17:23:09 UTC, rsw0x wrote:
> On Thursday, 18 February 2016 at 11:41:26 UTC, Kai Nacke wrote:
>> On Thursday, 18 February 2016 at 10:45:54 UTC, Márcio Martins wrote:
>>> I suppose it's a lot easier to address the compilation speed issue in LDC/GDC than to improve and maintain DMD's backend to the expected levels, right?
>>
>> LLVM has about 2.5 million lines of code. I am anything but sure that it is easy to improve compilation speed.
>>
>> Regards,
>> Kai
>
> Sorry for being off topic,
>
> Rustc (which uses LLVM) has a parallel codegen mode that trades some optimization for a (major, AFAIK?) decrease in compilation time when compiling multiple files. Would it be possible for LDC to offer the same thing without a major rewrite? I'm unfamiliar with the LDC codebase, which is why I ask.
> Probably worth noting that even with parallel codegen, rustc is still far slower than ldc.
>
> reference:
> https://internals.rust-lang.org/t/default-settings-for-parallel-codegen/519

From time to time I dream about compiling modules in parallel. :-)

This needs some investigation, but I think it could be possible to spawn a thread per module being compiled (after the frontend passes). I never dug deeper into this...
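
As a rough sketch of the idea (Module and runCodegen below are hypothetical stand-ins, not actual LDC API), std.parallelism would make the fan-out itself almost trivial; the real work is making codegen state thread-safe, e.g. giving each module its own LLVMContext:

    import std.parallelism : parallel;
    import std.stdio : writefln;

    // Hypothetical stand-in for a module that finished the frontend passes.
    struct Module { string name; }

    // Hypothetical per-module codegen pass. In LDC this would lower the
    // module to LLVM IR and emit an object file; each module would need
    // its own LLVMContext so threads never share mutable LLVM state.
    void runCodegen(ref Module m)
    {
        writefln("codegen: %s", m.name);
    }

    void main()
    {
        auto modules = [Module("a"), Module("b"), Module("c")];
        // The frontend stays single-threaded; only the per-module backend
        // work is fanned out across the default task pool.
        foreach (ref m; parallel(modules))
            runCodegen(m);
    }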

Regards,
Kai