January 14, 2013
> On 1/14/13, Walter Bright <newshound2@digitalmars.com> wrote:
>> Have fun!
>> https://github.com/DigitalMars/optlink

I'm getting a build failure when running the root build.bat:

Fatal error: unable to open input file 'scio.h'

I tried with both VC9's and VC10's nmake. I can't find this file in DMC's, optlink's, or VC's include dirs.
January 14, 2013
On 1/13/13 10:22 PM, David Nadlinger wrote:
> On Monday, 14 January 2013 at 02:24:38 UTC, Walter Bright wrote:
>> I don't really want to get into a long back-and-forth thing where we
>> just keep repeating our statements.
>
> Neither do I. You listed your reasons for sticking with the DMC backend,
> and I tried to outline how I would judge things differently and why. I
> guess discussions generally work like this. :)
>
> Anyway, thanks for responding to my post in the first place. I very much
> appreciate that, as I don't think you ever commented on the topic in the
> last few years in any detail at all, at least as far as the public
> forums are concerned.

I'll candidly mention that David's request was unclear to me. The discussion has been winding enough that I missed the key sentence. Was it that we switch dmd to using LLVM as the backend?

Andrei
January 14, 2013
On 1/13/2013 8:00 PM, Andrej Mitrovic wrote:
>> On 1/14/13, Walter Bright <newshound2@digitalmars.com> wrote:
>>> Have fun!
>>> https://github.com/DigitalMars/optlink
>
> I'm getting a build failure when running the root build.bat:
>
> Fatal error: unable to open input file 'scio.h'
>
> I tried with both VC9's and VC10's nmake. I can't find this file in
> DMC's, optlink's, or VC's include dirs.
>

That's part of the dmc source code.


January 14, 2013
On Monday, 14 January 2013 at 04:42:33 UTC, Andrei Alexandrescu wrote:
> I'll candidly mention that David's request was unclear to me. The discussion has been winding enough that I missed the key sentence. Was it that we switch dmd to using LLVM as the backend?

If you want, yes - but not in the form of an actionable proposal yet. I was trying to argue that the benefits of using an existing solution like GCC or LLVM are large enough (and, correspondingly, the costs of using a custom backend high enough) that we should seriously consider doing so - especially since it looks as if the amount of work needed to keep the DMD backend, and thus the reference D compiler, competitive is going to increase further as other backends are gaining things like auto-vectorization to light up modern CPUs and ARM is gaining in importance.

David
January 14, 2013
On 01/14/2013 12:33 AM, David Nadlinger wrote:
> On Monday, 14 January 2013 at 04:42:33 UTC, Andrei Alexandrescu wrote:
>> I'll candidly mention that David's request was unclear to me. The
>> discussion has been winding enough that I missed the key sentence.
>> Was it that we switch dmd to using LLVM as the backend?
>
> If you want, yes - but not in the form of an actionable proposal yet. I
> was trying to argue that the benefits of using an existing solution
> like GCC or LLVM are large enough (and, correspondingly, the costs of
> using a custom backend high enough) that we should seriously consider
> doing so - especially since it looks as if the amount of work needed to
> keep the DMD backend, and thus the reference D compiler, competitive is
> going to increase further as other backends are gaining things like
> auto-vectorization to light up modern CPUs and ARM is gaining in
> importance.
>
> David

"gaining" he says ;)

D just missed out on the leading edge of smartphone games.  That is a HUGE market packed with small developers that can easily adopt new tooling.  We got the invite and we stood them up.  IMO, sticking to an x86-centric toolset cost D one of its perfect opportunities for being a killer tool.  That makes me kinda sad.

Sorry for the downer.  I bring it up in the hope that we can learn from it.  I like to think that we'll see more opportunities in the future.
January 14, 2013
On Mon, Jan 14, 2013 at 06:33:44AM +0100, David Nadlinger wrote:
> On Monday, 14 January 2013 at 04:42:33 UTC, Andrei Alexandrescu wrote:
> >I'll candidly mention that David's request was unclear to me. The discussion has been winding enough that I missed the key sentence. Was it that we switch dmd to using LLVM as the backend?
> 
> If you want, yes - but not in the form of an actionable proposal yet. I was trying to argue that the benefits of using an existing solution like GCC or LLVM are large enough (and, correspondingly, the costs of using a custom backend high enough) that we should seriously consider doing so - especially since it looks as if the amount of work needed to keep the DMD backend, and thus the reference D compiler, competitive is going to increase further as other backends are gaining things like auto-vectorization to light up modern CPUs and ARM is gaining in importance.
[...]

I have to say, as a bystander to this discussion, that I *have* felt the dilemma of whether to use dmd (I want to try out the latest D features) vs. gdc/ldc (I want maximal performance for compute-heavy apps).

I don't mean to downplay dmd in any way, but IME code generated by gdc -O3 has been *consistently* 30-40% faster than code generated by dmd -O, sometimes even 50% or more. I have compared the assembly code generated by each compiler, and I have to say that, at least IME, the code produced by gdc is indisputably superior to the code produced by dmd. Because of this, I have wanted to use only gdc for my D projects, and I'd expect that any project where performance is a concern would feel the same way. But I was forced to use dmd because I wanted to try out the newest features and get the latest bugfixes as well. So now I'm torn between competitive performance on the one hand and the bugfixes I need or the latest and greatest features on the other.
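
For anyone who wants to reproduce this kind of comparison, here is the sort of trivial compute-heavy micro-benchmark I mean - a sketch only; the flags in the comments are the usual optimization switches for each compiler, and the exact numbers will of course vary by machine:

// bench.d - a trivial compute-heavy loop for comparing codegen.
// Typical builds:
//   dmd -O -release -inline bench.d
//   gdc -O3 -frelease -o bench bench.d
//   ldc2 -O3 -release bench.d
import std.datetime;
import std.stdio;

void main()
{
    enum N = 10_000_000;
    auto a = new double[N];
    foreach (i, ref x; a) x = i;

    StopWatch sw;
    sw.start();
    double sum = 0;
    foreach (x; a) sum += x * x;
    sw.stop();

    // Print the result so the compiler can't elide the loop.
    writefln("sum = %s, took %s ms", sum, sw.peek().msecs);
}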

This is far from ideal, IMHO. Users shouldn't have to choose between up-to-date language support and performance. Having the reference D compiler match gdc/ldc in performance would be a big factor in D adoption IMO (how many people would spend the time to look up gdc, or even know it exists, rather than just compile a C/C++ program with gcc/g++, compare it to dmd's output for a comparable D program, and walk away?).


T

-- 
WINDOWS = Will Install Needless Data On Whole System -- CompuMan
January 14, 2013
On 01/13/2013 05:23 AM, Jonathan M Davis wrote:
> On Sunday, January 13, 2013 04:58:16 Chad J wrote:
>> On 01/13/2013 02:28 AM, Jonathan M Davis wrote:
>>> I really should ask Andrei why he made length require O(log n) instead
>>> of O(1)...
>>>
>>> - Jonathan M Davis
>>
>> Are there cases where it can't be O(1)?
>
> Most definitely. Take a doubly linked list for example. Either length can be
> O(1) and splicing is then O(n), or length is O(n) and splicing is then O(1).
> That's because if you want to keep track of the length, you have to count the
> number of elements being spliced in. For instance, it's a relatively common
> mistake in C++ to use std::list's size function and compare it with 0 to see
> whether the list is empty, because size is O(n). The correct thing to do is to
> call empty and check its result.
>
> You _could_ make std::list's size function be O(1), but that would mean that
> splicing becomes O(n), which C++98 definitely did not do (though I hear that
> C++11 made the interesting choice of changing how std::list works so that size
> is O(1) and splicing is O(n); I don't know if that's good or not).
>
> std.container.slist and std.container.dlist don't define length precisely
> because it can't be O(1) given their design.
>
> - Jonathan M Davis

Thanks!

That's a good example.  It made sense once I realized that you could be splicing in arbitrary sections of another list, not just an entire other list.

If I were to use cached lengths and splice in another list with a cached length, then I would just add the lengths of the two lists and do an O(1) splice. I can see how this wouldn't cover all cases, though.
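
To make that concrete, here is a rough sketch - hypothetical types, nothing to do with std.container - of a doubly linked list with a cached length. Splicing in a whole list stays O(1) because the cached lengths just add, but splicing in an arbitrary sub-range forces a walk to count the nodes:

import std.stdio;

// A minimal list, just enough to show the tradeoff.
class Node { int value; Node prev, next; this(int v) { value = v; } }

struct DList
{
    Node head, tail;
    size_t length;  // cached, so .length is O(1)

    void pushBack(int v)
    {
        auto n = new Node(v);
        if (tail is null) head = tail = n;
        else { tail.next = n; n.prev = tail; tail = n; }
        ++length;
    }

    // Splice the inclusive range [first .. last] out of other and
    // append it here. The pointer surgery is O(1); keeping both cached
    // lengths correct is what forces the O(n) walk over the range.
    void spliceRange(ref DList other, Node first, Node last)
    {
        size_t n = 1;  // counting the nodes: the unavoidable O(n) part
        for (auto p = first; p !is last; p = p.next) ++n;

        // Detach [first .. last] from other - O(1).
        if (first.prev !is null) first.prev.next = last.next;
        else other.head = last.next;
        if (last.next !is null) last.next.prev = first.prev;
        else other.tail = first.prev;
        first.prev = null;
        last.next = null;

        // Append to this list - O(1).
        if (tail is null) head = first;
        else { tail.next = first; first.prev = tail; }
        tail = last;

        length += n;
        other.length -= n;
    }
}

void main()
{
    DList a, b;
    foreach (i; 0 .. 3) a.pushBack(i);
    foreach (i; 10 .. 15) b.pushBack(i);
    a.spliceRange(b, b.head.next, b.head.next.next.next);
    writefln("a.length = %s, b.length = %s", a.length, b.length);  // 6 and 2
}

Splicing in all of b, by contrast, would just be length += b.length plus the same O(1) relinking - which is exactly why the cached length only stays free in the whole-list case.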

January 14, 2013
On Sunday, January 13, 2013 22:46:21 H. S. Teoh wrote:
> On Mon, Jan 14, 2013 at 06:33:44AM +0100, David Nadlinger wrote:
> > On Monday, 14 January 2013 at 04:42:33 UTC, Andrei Alexandrescu wrote:
> > >I'll candidly mention that David's request was unclear to me. The discussion has been winding enough that I missed the key sentence. Was it that we switch dmd to using LLVM as the backend?
> > 
> > If you want, yes - but not in the form of an actionable proposal yet. I was trying to argue that the benefits of using an existing solution like GCC or LLVM are large enough (and, correspondingly, the costs of using a custom backend high enough) that we should seriously consider doing so - especially since it looks as if the amount of work needed to keep the DMD backend, and thus the reference D compiler, competitive is going to increase further as other backends are gaining things like auto-vectorization to light up modern CPUs and ARM is gaining in importance.
> 
> [...]
> 
> I have to say, as a bystander to this discussion, that I *have* felt the dilemma of whether to use dmd (I want to try out the latest D features) vs. gdc/ldc (I want maximal performance for compute-heavy apps).
> 
> I don't mean to downplay dmd in any way, but IME code generated by gdc -O3 has been *consistently* 30-40% faster than code generated by dmd -O, sometimes even 50% or more. I have compared the assembly code generated by each compiler, and I have to say that, at least IME, the code produced by gdc is indisputably superior to the code produced by dmd. Because of this, I have wanted to use only gdc for my D projects, and I'd expect that any project where performance is a concern would feel the same way. But I was forced to use dmd because I wanted to try out the newest features and get the latest bugfixes as well. So now I'm torn between competitive performance on the one hand and the bugfixes I need or the latest and greatest features on the other.
> 
> This is far from ideal, IMHO. Users shouldn't have to choose between up-to-date language support and performance. Having the reference D compiler match gdc/ldc in performance would be a big factor in D adoption IMO (how many people would spend the time to look up gdc, or even know it exists, rather than just compile a C/C++ program with gcc/g++, compare it to dmd's output for a comparable D program, and walk away?).

If you want to use the latest master, then you're going to be stuck with dmd. If you're willing to use the latest release, then you have a choice. And assuming that you don't need the latest master, I would argue that if performance is critical, what you'll want to do is develop with dmd (because of its fast compilation times) and then release using gdc or ldc so that you actually get fast programs.

Given that Walter is rightly focused on language features rather than on optimizing dmd's backend, and that almost no one else seems likely to spend time optimizing it, it's a foregone conclusion at this point that dmd is going to generate slower programs than gdc and ldc for the foreseeable future. On the other hand, it wouldn't surprise me if having Walter switch to another backend for the reference compiler cost us so much of his development time (as he learns the new backend) that it wouldn't be even vaguely worth it.

- Jonathan M Davis
January 14, 2013
On 1/13/2013 9:33 PM, David Nadlinger wrote:
> as other backends are gaining things
> like auto-vectorization to light up modern CPUs and ARM is gaining in importance.

I've done some research on auto-vectorization, e.g. reading "The Software Vectorization Handbook" by Bik.

My conclusion (Manu, our resident SIMD expert, independently came to the same one) is that auto-vectorization is a disaster.

What it amounts to is:

1. Reverse engineer a loop into a higher level construct
2. Recompile that construct using vector instructions

It's a disaster because (2) often fails in ways that are utterly mysterious to 99% of programmers. The failure mode is to not auto-vectorize the loop. Hence, the failure is silent and the user just sees poor performance, if he notices it at all.
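
For illustration - a made-up example, not from any particular compiler - here are two loops that look equally vectorizable to the programmer. A typical auto-vectorizer handles the first, but the second usually fails its analysis and silently stays scalar:

void scale(float[] a, float s)
{
    foreach (ref x; a)
        x *= s;    // unit stride, no aliasing: usually vectorized
}

void gatherScale(float[] a, size_t[] idx, float s)
{
    foreach (i; idx)
        a[i] *= s; // indexed access, and idx may repeat an index:
                   // typically rejected, with no diagnostic - the loop
                   // just runs scalar
}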

D's intended approach is completely different, and much more straightforward. The SIMD types are exposed to the front end, and you build vector operations like you would any other arithmetic operation. If a particular vector operation is not available for a particular SIMD type, then the compilation fails with an appropriate error message.

I.e. D's approach is to fix the language to support vector semantics, rather than trying to back-asswards fit it into the optimizer.
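
Concretely, with core.simd it looks something like this - a minimal sketch that assumes a target with SSE support (e.g. x86_64); on a target without it, the float4 declaration itself fails to compile:

import core.simd;
import std.stdio;

void main()
{
    float4 a = [1.0f, 2.0f, 3.0f, 4.0f];
    float4 b = [5.0f, 6.0f, 7.0f, 8.0f];
    float4 c = a + b;  // a single SIMD add (e.g. ADDPS on x86), or a
                       // compile error if the target can't do it -
                       // never a silent fallback to scalar code
    writeln(c.array);  // [6, 8, 10, 12]
}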

January 14, 2013
On 2013-01-14 00:57, Walter Bright wrote:

> Doing this all requires a fairly comprehensive understanding of the
> innards of the back end. If LLVM is lacking one or more of 1..5, then I
> wouldn't be surprised a bit if it took me considerably *longer* to do it
> for LLVM

Sure, but if you had switched to LLVM, say, three years ago, you would know the innards of the back end by now.

-- 
/Jacob Carlborg