May 30, 2015
On 30 May 2015 1:41 pm, "weaselcat via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
>
> On Saturday, 30 May 2015 at 03:24:45 UTC, Rikki Cattermole wrote:
>>
>>
>> Both you and ketmar are evil.
>> I'm liking these ideas...
>>
>> Now we just need some nice, pretty packages, e.g. for Windows, for LDC with full debugger support, and we will be good.
>> Last time I looked, LLVM still needs a lot of work for Windows, unfortunately. It may be time to direct some people to help them out ;)
>
>
> LDC seemed to work for the author of the blog on Windows after fixing a path issue. After a quick look in the LDC NG, it seems to be mostly(?) working.

There's a big difference between compiling a few lines of code and building a project with particular requirements, dependencies on various foreign libs, cross-language linkage, etc.

LDC makes a valiant effort, but there are still quite a lot of gaps. I can't hold that against them; the whole DMD community needs to treat GDC/LDC as first-class considerations.


May 30, 2015
On Friday, 29 May 2015 at 19:04:05 UTC, weaselcat wrote:
> Maybe this should be brought up on LDC's issue tracker(that is, slower compilation times compared to dmd.)
> Although it might have already been discussed.

We are aware of this: https://github.com/ldc-developers/ldc/issues/830

Regards,
Kai
May 30, 2015
On Friday, 29 May 2015 at 19:04:05 UTC, weaselcat wrote:
> Not to mention that GDC and LDC benefit heavily from GCC and LLVM respectively, these aren't exactly one man projects(e.g, Google, Redhat, Intel, AMD etc contribute heavily to GCC and LLVM is basically Apple's baby.)

Google, Intel, AMD, Imagination, ... also contribute to LLVM. I think most companies contributing to GCC contribute to LLVM, too.

Regards,
Kai
May 30, 2015
Honestly, I've never taken DMD to be "the production compiler". I've always left that to the GNU compilers. GDC has all the magic and years of work in its backend, so I'm not sure how DMD can compare. As others have said, it's really the frontend that DMD provides that matters; once you have that, you can more or less stick it onto whichever backend works for you. Though DMD is definitely not entirely useless, I use it all the time, mainly for prototypes, quick builds, and testing libraries.

Also, if someone does speed tests to see how powerful D is, how clueless would they have to be to check only DMD? You don't compile C++ with only MSVC and then say, "Welp, it looks like C++ is just slow and shitty". :P
You can probably safely dismiss any speed test that shows you only one compiler.


So personally I vote that speed optimizations on DMD are a waste of time at the moment.
May 30, 2015
On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:

> So personally I vote that speed optimizations on DMD are a waste of time at the moment.

it's not only a waste of time, it's unrealistic to make the DMD backend's quality comparable to GDC/LDC. it would require a complete rewrite of the backend and many man-years of work. and GDC/LDC will not simply sit frozen all this time.

May 30, 2015
On Saturday, 30 May 2015 at 14:29:56 UTC, ketmar wrote:
> On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:
>
>> So personally I vote that speed optimizations on DMD are a waste of time
>> at the moment.
>
> it's not only waste of time, it's unrealistic to make DMD backend's
> quality comparable to GDC/LDC. it will require complete rewrite of backend
> and many man-years of work. and GDC/LDC will not simply sit frozen all
> this time.

+1 for LDC as first class!

D would become a lot more appealing if it could take advantage of the LLVM tooling already available!

Regarding the speed problem - one could always give LDC a "nitro" switch that simply runs fewer of the expensive passes, reducing codegen quality but improving speed. Would that work? I'm assuming the "slowness" in LLVM comes from the optimization passes.
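For what it's worth, something close to that trade-off is already expressible with the existing optimization levels. A rough sketch (assuming `ldc2` is on the PATH and `app.d` is some hypothetical entry point; not an official workflow):

```shell
# "Nitro" development build: skip the expensive LLVM optimization
# passes entirely, trading codegen quality for compile speed.
time ldc2 -O0 app.d -of=app_dev

# Release build: the full optimization pipeline, slower to compile.
time ldc2 -O3 -release app.d -of=app_release
```

A dedicated switch could presumably go further by also skipping non-optimization work, but comparing the timings of these two commands would show how much of the slowness really comes from the passes.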

Would clang's thread-sanitizer and address-sanitizer be adaptable and usable with D as well?
May 30, 2015
On Saturday, 30 May 2015 at 17:00:18 UTC, Márcio Martins wrote:
> Would clang's thread-sanitizer and address-sanitizer be adaptable and usable with D as well?

These are already usable from LDC.
Make sure you use the -gcc=clang flag.
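For the record, a minimal sketch of what such an invocation might look like. The `-gcc=clang` flag is from the post above; the sanitizer flag name is an assumption on my part (check `ldc2 -help`), and `app.d` is a hypothetical source file:

```shell
# Build with AddressSanitizer instrumentation, linking through clang
# so that the sanitizer runtime library gets pulled in.
ldc2 -fsanitize=address -gcc=clang app.d -of=app_asan

# Running the binary aborts with an ASan report on a bad memory access.
./app_asan
```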
May 30, 2015
On 30 May 2015 19:05, "via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
>
> On Saturday, 30 May 2015 at 14:29:56 UTC, ketmar wrote:
>>
>> On Sat, 30 May 2015 12:00:57 +0000, Kyoji Klyden wrote:
>>
>>> So personally I vote that speed optimizations on DMD are a waste of time at the moment.
>>
>>
>> it's not only waste of time, it's unrealistic to make DMD backend's quality comparable to GDC/LDC. it will require complete rewrite of backend and many man-years of work. and GDC/LDC will not simply sit frozen all this time.
>
>
> +1 for LDC as first class!
>
> D would become a lot more appealing if it could take advantage of the LLVM tooling already available!
>
> Regarding the speed problem - One could always have LDC have a nitro switch, where it simply runs less of the expensive passes, thus reducing the codegen quality, but improving speed. Would that work? I'm assuming the "slowness" in LLVM comes from the optimization passes.
>

I'd imagine the situation is similar with GDC.  For large compilations, it's the optimizer; for small compilations, it's the linker.  The small-compilation case is at least solved by switching to shared libraries.  For larger compilations, using only -O1 optimisations should be fine for most programs that aren't trying to beat some sort of benchmark.


May 30, 2015
On 30/05/15 03:57, Steven Schveighoffer wrote:

> I saw the slide from Liran that shows your compiler requirements :) I
> can see why it's important to you.

Then you misunderstood Liran's slides.

Our compile-resources problem isn't with GDC. It's with DMD. Compiling a single object requires more RAM than most developers' machines have, resulting in a complicated "rsync to AWS, run script there, compile, fetch results" cycle that adds quite a while to the compilation time.

Conversely, our problem with GDC is that IT !@$#%&?!@# PRODUCES ASSEMBLY THAT DOES NOT MATCH THE SOURCE.

I have not tried LDC myself, but according to Liran, the situation there is even worse: the compiler simply does not finish compilation without crashing.
>
> But compiled code outlives the compiler execution. It's the wart that
> persists.
So does algorithmic code that, due to a compiler bug, produces assembly that does not implement the correct algorithm.

When doing RAID parity calculation, it is imperative that the correct bit gets to the correct location with the correct value. If that doesn't happen, compilation speed is the least of your problems.
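The property being described is easy to state: in a RAID-5-style scheme the parity block is the XOR of the data blocks, so any single lost block is recoverable, but only if every bit of the parity was computed and stored correctly. A toy sketch of that invariant (plain Python for illustration, nothing specific to any real RAID implementation):

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR together byte strings of equal length."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data = [b"disk0data", b"disk1data", b"disk2data"]
parity = xor_blocks(*data)  # what the RAID layer stores

# Losing disk 1: XOR of the survivors and the parity recovers it.
recovered = xor_blocks(data[0], data[2], parity)
assert recovered == data[1]

# A single flipped bit in the stored parity (a miscompiled loop could
# do far worse) silently corrupts the "recovered" block.
bad_parity = bytearray(parity)
bad_parity[0] ^= 0x01
assert xor_blocks(data[0], data[2], bytes(bad_parity)) != data[1]
```

Nothing downstream can detect that last case from the parity math alone, which is why "the compiler emitted the wrong instruction" is a catastrophic failure mode here.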

Like Liran said in the lecture, we are currently faster than all of our competition. Still, in a correctly functioning storage system, the RAID part needs to take a considerable amount of the total processing time under load (say, 30%). If we're losing 3x speed because we don't have compiler optimizations, the system as a whole is losing about half of its performance.

> But I don't see how speed of compiler should sacrifice runtime performance.
Our plan was to compile with DMD during development and then switch to GDC for code intended for deployment. This plan simply cannot work if, each time we try to make that switch, Liran has to spend two months yanking a different developer away from the work that developer should be doing, in order to figure out which line of source gets compiled incorrectly.

>
> -Steve

Shachar
May 30, 2015
On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
>
> When he says Windows, he means MSVC, gcc backend will never support
> interfacing that ABI (at least I see no motivation as of writing).
>
I thought that's what MinGW was: a GCC backend that interfaces with the Windows ABI. Isn't it?

Shachar