May 12, 2013
On 5/11/2013 7:30 PM, Jonathan M Davis wrote:
> But in theory, the way to solve the problem of your program not compiling with
> the new compiler is to compile with the compiler it was developed with in the
> first place, and then if you want to upgrade your code, you upgrade your code
> and use it with the new compiler. The big problem is when you need to compile
> the compiler. You have a circular dependency due to the compiler depending on
> itself, and have to break it somehow. As long as newer compilers can compile
> older ones, you're fine, but that's bound to fall apart at some point unless
> you freeze everything. But even bug fixes could make the old compiler not
> compile anymore, so unless the language and compiler (and anything they depend
> on) is extremely stable, you risk not being able to compile older compilers,
> and it's hard to guarantee that level of stability, especially if the compiler
> is not restricted in what features it uses or in what it uses from the
> standard library.

It isn't just compiling the older compiler; it is compiling it and verifying that it works.

At least for dmd, we keep all the old binaries up and downloadable for that reason.

May 12, 2013
On Saturday, May 11, 2013 19:56:00 Walter Bright wrote:
> At least for dmd, we keep all the old binaries up and downloadable for that reason.

That helps considerably, though if the compiler is old enough, that won't work on Linux due to glibc changes and whatnot. I expect that my particular situation is quite abnormal, but I thought it was worth raising the point that if your compiler has to compile itself, then changes to the language (and anything else the compiler depends on) can be that much more costly, so it may be worth minimizing what the compiler depends on (as Daniel is suggesting).

As we increase our stability, the likelihood of problems will drop, but we'll probably never eliminate them entirely. Haskell's case is as bad as it is because they released a new standard and did it in such a way that building the old compiler doesn't necessarily work anymore (and when it does, it tends to be a pain). It would be akin to dmd building itself when we went from D1 to D2, with the new compiler only able to compile D1 when certain flags were used, and those flags overly complicated to boot. So it's much worse than simply going from one version of the compiler to the next.

- Jonathan M Davis
May 12, 2013
"Jonathan M Davis" <jmdavisProg@gmx.com> wrote in message news:mailman.1222.1368325870.4724.digitalmars-d@puremagic.com...
> The big problem is when you need to compile the compiler. You have a circular dependency due to the compiler depending on itself, and have to break it somehow. As long as newer compilers can compile older ones, you're fine, but that's bound to fall apart at some point unless you freeze everything. But even bug fixes could make the old compiler not compile anymore, so unless the language and compiler (and anything they depend on) is extremely stable, you risk not being able to compile older compilers, and it's hard to guarantee that level of stability, especially if the compiler is not restricted in what features it uses or in what it uses from the standard library.
>
> - Jonathan M Davis

My thought was that you ensure (for the foreseeable future) that all D versions of the compiler compile with the most recent C++ version of the compiler.


May 12, 2013
On Sat, 11 May 2013 23:51:36 +0100, Iain Buclaw <ibuclaw@ubuntu.com> wrote:

> 
> I am more concerned from GDC's perspective of things.  Especially when it comes to building from hosts that may have phobos disabled (this is a configure switch).
> 

Indeed. Right now we can compile and run GDC on every system which has a C++ compiler. We can compile D code on all those platforms even if we don't have druntime or phobos support there.

Using phobos means that we would always need a complete & working phobos port (at least some GC work, platform-specific headers, TLS, ...) on the host machine, even if we:
* Only want to compile D code which doesn't use phobos / druntime at all.
* Create a compiler which runs on A but generates code for B. Now we also need a working phobos port on A. (Think of an sh4 -> x86 cross compiler. This works now; it won't work once the frontend has been ported to D / phobos.)

(I do understand why it would be nice to use phobos, though. Hacking on some include path code right now, I wish I could use std.path...)
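
Something like this is what std.path would buy us — a minimal sketch with made-up paths, just to illustrate, not GDC's actual include handling:

import std.path : buildPath, dirName;
import std.stdio : writeln;

void main()
{
    // Join include path components portably instead of hand-rolling
    // the separator logic in the C++ frontend.
    auto inc = buildPath("/usr", "include", "d");
    writeln(inc);           // prints: /usr/include/d
    writeln(dirName(inc));  // prints: /usr/include
}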
May 12, 2013
On 5/11/2013 10:25 PM, Daniel Murphy wrote:
> My thought was that you ensure (for the foreseeable future) that all D
> versions of the compiler compile with the most recent C++ version of the
> compiler.

That would likely mean that the D compiler sources must be compilable with 2.063.
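
Roughly, the bootstrap chain that implies — a minimal sketch, assuming hypothetical binary names (a dmd-2.063 on the PATH, staged outputs) and source layout, not the actual build scripts:

import std.process : executeShell;
import std.stdio : writeln;

void main()
{
    // Stage 1: the last C++-built release compiles the D sources
    // of the compiler.
    auto stage1 = executeShell("dmd-2.063 -ofdmd-stage1 src/*.d");
    assert(stage1.status == 0, stage1.output);

    // Stage 2: the freshly built compiler rebuilds itself, which both
    // breaks the circular dependency and verifies the new binary works.
    auto stage2 = executeShell("./dmd-stage1 -ofdmd-stage2 src/*.d");
    assert(stage2.status == 0, stage2.output);

    writeln("bootstrap ok");
}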

May 12, 2013
On 2013-05-12 05:50, Jonathan M Davis wrote:

> That helps considerably, though if the compiler is old enough, that won't work
> for Linux due to glibc changes and whatnot.

My experience is the other way around. Binaries built on newer versions of Linux don't work on older ones, but binaries built on older versions usually work on newer ones.

-- 
/Jacob Carlborg
May 12, 2013
On 12 May 2013 10:39, Jacob Carlborg <doob@me.com> wrote:

> On 2013-05-12 05:50, Jonathan M Davis wrote:
>
>> That helps considerably, though if the compiler is old enough, that won't work for Linux due to glibc changes and whatnot.
>
> My experience is the other way around. Binaries built on newer versions of Linux don't work on older ones, but binaries built on older versions usually work on newer ones.
>
> --
> /Jacob Carlborg

Depends... statically linked binaries will probably always work on the latest version; dynamically link and you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


May 12, 2013
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
> Depends... statically linked binaries will probably always work on the
> latest version; dynamically link and you've got yourself a 'this
> libstdc++v5 doesn't exist anymore' problem.

I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.
May 12, 2013
On 12 May 2013 11:08, w0rp <devw0rp@gmail.com> wrote:

> On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
>
>> Depends... statically linked binaries will probably always work on the latest version; dynamically link and you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.
>>
>
> I am picturing a Linux workstation with the Post-It note "DO NOT UPDATE" stuck to it.
>

:D

The only reason you'd have for that Post-It note is if you were running some application that you built yourself, obtained from a third-party vendor, or that otherwise isn't part of the distribution's repository.

For instance, I've had some Linux ports of games break on me once after an upgrade. And I've even got a company gcc that does not work on Debian/Ubuntu. There's nothing wrong with its binary compatibility; it's just that they implemented a multi-arch directory structure, so everything is in a different place from where vanilla gcc expects it.  ;)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


May 12, 2013
On Sunday, 12 May 2013 at 09:48:58 UTC, Iain Buclaw wrote:
> On 12 May 2013 10:39, Jacob Carlborg <doob@me.com> wrote:
>
>> On 2013-05-12 05:50, Jonathan M Davis wrote:
>>
>>> That helps considerably, though if the compiler is old enough, that won't work for Linux due to glibc changes and whatnot.
>>
>> My experience is the other way around. Binaries built on newer versions of Linux don't work on older ones, but binaries built on older versions usually work on newer ones.
>>
>> --
>> /Jacob Carlborg
>
> Depends... statically linked binaries will probably always work on the latest version; dynamically link and you've got yourself a 'this libstdc++v5 doesn't exist anymore' problem.

So surely we can just offer a full history of statically linked binaries, problem solved?