February 26, 2016
On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
> On 2/26/2016 1:47 AM, Radu wrote:
>> Please don't get me wrong, we all appreciate what you offered to the D community,
>> but all these legal arguments are strongly tied to you, and less so to the
>> community.
>
> Didn't Google get hung out to dry over 6 lines of Java code or something like that? And I don't know how long you've been around here, but we DID have precisely these sorts of problems during the Phobos/Tango rift. Ignoring licensing issues can have ugly consequences.
>

I've been around here since 2004, not as vocal as I am now, but yes, I remember those ugly times.
Due diligence is mandatory when dealing with software licenses, agreed, but we can't extrapolate your experience with the backend to whatever is used in LDC or any other compiler. I'm sure LDC is not at risk in this regard.

>
>> Your LLVM license nitpick is hilarious; you can't do that when the "official" D
>> compiler has a non-liberally-licensed backend, you just can't.
>
> That's not under my control, and is one of the reasons why D gravitated towards the Boost license for everything we could.
>

Yes, agreed, Boost FTW, but that still doesn't solve the backend issue.

>
>> But setting things aside, we all need to acknowledge that the current setup is
>> not fair to motivated and proven third party compilers, their contributors, and
>> their users.
>
> I don't see anything unfair. gdc, ldc, and dmd are each as good as their respective teams make them.
>

The lack of fairness comes from the way the ecosystem is set up: you have the reference compiler released first, then everybody else needs to catch up with it. Why not have the others be part of the official release? This would undoubtedly increase the quality of the frontend and the glue layer, and probably the runtime, just because they would be tested on more architectures each release.

No matter how you put it, both LDC and GDC are limited in manpower, and also caught in the merge game with mainline. This is a bottleneck when they need to attract more talent. Right off the bat you need to do a lot of grunt work handling different repos, each at its own revision, plus all the knowledge about the build and testing environments.

>
>> The D ecosystem must create and foster a friendly environment for anyone wanting
>> to have a good compiler that is current with the language/runtime/phobos
>> developments.
>
> And that's what we do. It's why we have 3 major compilers.

See above: just having 3 compilers (could be 5 for that matter) is not enough. We would be better off with just one that works great, but if that is not possible, at least give me the option to use the latest and greatest D on my Linux embedded ARM boards.

February 26, 2016
On 2/26/16 7:02 AM, Radu wrote:
> On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
>> I don't see anything unfair. gdc, ldc, and dmd are each as good as
>> their respective teams make them.
>>
>
> The lack of fairness comes from the way the ecosystem is set up: you
> have the reference compiler released first, then everybody else needs
> to catch up with it. Why not have the others be part of the official
> release? This would undoubtedly increase the quality of the frontend
> and the glue layer, and probably the runtime, just because they would
> be tested on more architectures each release.
>
> No matter how you put it, both LDC and GDC are limited in manpower,
> and also caught in the merge game with mainline. This is a bottleneck
> when they need to attract more talent. Right off the bat you need to
> do a lot of grunt work handling different repos, each at its own
> revision, plus all the knowledge about the build and testing
> environments.

The issue here is the front end, not the back end. Daniel has already stated this was a goal (to make the front end shared code). So it will happen (I think Daniel has a pretty good record of following through; we do have a D-based front end now, after all).

Any effort to make both LDC and GDC part of the "official" release would be artificial -- instead of LDC and GDC getting released "faster", they would simply hold up dmd's release until they caught up. And this is probably more pressure than their developers need.

When the front end is shared, then the releases will be quicker, and you can be happier with it.

-Steve
February 26, 2016
On 2/26/16 6:04 AM, Russel Winder via Digitalmars-d wrote:
> On Fri, 2016-02-26 at 02:52 -0800, Walter Bright via Digitalmars-d
> wrote:
>> […]
>> I'm not aware of any, either, that is specific to github. But given
>> how digital records in general (such as email, social media posts,
>> etc.) are routinely accepted as evidence, I'd be very surprised if
>> github wasn't.
>
> Be careful about making assumptions of admissibility as evidence. I have
> been expert witness in three cases regarding email logs and it is not
> always so simple to have them treated as a matter of record. Of course
> the USA is not the UK, rules and history are different in every
> jurisdiction – and the USA has more than one!
>

I think it's much stronger when the email/logs are maintained by a disinterested third party.

For example, I'd say emails that were maintained on a private server by one of the parties in the case would be less reliable than logs stored on Yahoo's servers that neither party has access to.

There would also be no shortage of witnesses: "Yes, I remember the day Walter added feature X, and GitHub's logs are correct."

I think Walter is on solid ground there.

-Steve
February 26, 2016
On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer wrote:
> On 2/26/16 7:02 AM, Radu wrote:
>> On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
>>> I don't see anything unfair. gdc, ldc, and dmd are each as good as
>>> their respective teams make them.
>>>
>>
>> The lack of fairness comes from the way the ecosystem is set up: you
>> have the reference compiler released first, then everybody else needs
>> to catch up with it. Why not have the others be part of the official
>> release? This would undoubtedly increase the quality of the frontend
>> and the glue layer, and probably the runtime, just because they would
>> be tested on more architectures each release.
>>
>> No matter how you put it, both LDC and GDC are limited in manpower,
>> and also caught in the merge game with mainline. This is a bottleneck
>> when they need to attract more talent. Right off the bat you need to
>> do a lot of grunt work handling different repos, each at its own
>> revision, plus all the knowledge about the build and testing
>> environments.
>
> The issue here is the front end, not the back end. Daniel has already stated this was a goal (to make the front end shared code). So it will happen (I think Daniel has a pretty good record of following through; we do have a D-based front end now, after all).
>
> Any effort to make both LDC and GDC part of the "official" release would be artificial -- instead of LDC and GDC getting released "faster", they would simply hold up dmd's release until they caught up. And this is probably more pressure than their developers need.
>
> When the front end is shared, then the releases will be quicker, and you can be happier with it.
>
> -Steve

OK, a shared front end will be great!

My main concern is that if they are not integrated within the daily pull-merge-auto-test loop, they will always tend to drift and get out of sync while trying to fix whatever breaks.

If the author of a pull request gets automated feedback on the test results of his changes from both DMD and LDC, then he will be aware of potential problems he might create.

The integration doesn't necessarily need to be tightly coupled; e.g. LDC can keep its infrastructure and automatically sync and run any merges from mainline. The issue is what to do with breaking changes.

Ideally, no breakage should be allowed when fixing regressions or bugs, and any breaking change in the front end or glue layers should at least be discussed with the LDC/GDC guys.

All of the above needs steering from the leadership to follow through.

And BTW, I'm happy with what D has become :) There is always room for improvement, of course. Thank you!
February 26, 2016
On Friday, 26 February 2016 at 11:50:27 UTC, Russel Winder wrote:
> On Fri, 2016-02-26 at 11:12 +0000, BBasile via Digitalmars-d wrote:
>> […]
>> BTW, malicious people can cheat and commit in the past; according to
>> 
>> https://github.com/gelstudios/gitfiti
>> 
>> the commit date is not reliable.
>
> Indeed, which is why Mercurial is a much better system, though it is far from perfect.

"hg commit" knows the "--date" option just as well. Can we please keep this out of here?

 — David
February 26, 2016
On Thursday, 25 February 2016 at 23:06:43 UTC, H. S. Teoh wrote:
> Are there any low-hanging fruit left that could make dmd faster?

A big one would be overhauling the template mangling scheme so that it no longer generates mangled names a few hundred kilobytes (!) in size for code that uses templates and voldemort types. For an example, see http://forum.dlang.org/post/n96k3g$ka5$1@digitalmars.com, although the problem can get much worse in big code bases. I've seen just the handling of the mangle strings (generation, ...) make up a significant part of the time profile.
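To make this concrete, here is a minimal, hypothetical sketch of how the blowup arises ("wrap" is a made-up name; the exact growth factor depends on the mangling scheme, but each level of wrapping embeds the mangled name of the previous voldemort type more than once, so the length grows multiplicatively with nesting depth):

    // Hypothetical example: each call wraps the previous return type in a
    // new voldemort struct nested inside a fresh template instance.
    auto wrap(R)(R r)
    {
        struct Voldemort { R r; }
        return Voldemort(r);
    }

    void main()
    {
        auto a = wrap(0);
        auto b = wrap(a);
        auto c = wrap(b);
        pragma(msg, typeof(a).mangleof.length); // printed at compile time
        pragma(msg, typeof(c).mangleof.length); // already far larger
    }

A handful of further nestings is enough to reach the sizes mentioned above.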

 — David
February 26, 2016
On 2/26/16 9:26 AM, Radu wrote:
> On Friday, 26 February 2016 at 13:11:11 UTC, Steven Schveighoffer wrote:
>> On 2/26/16 7:02 AM, Radu wrote:
>>> On Friday, 26 February 2016 at 11:01:46 UTC, Walter Bright wrote:
>>>> I don't see anything unfair. gdc, ldc, and dmd are each as good as
>>>> their respective teams make them.
>>>>
>>>
>>> The lack of fairness comes from the way the ecosystem is set up: you
>>> have the reference compiler released first, then everybody else needs
>>> to catch up with it. Why not have the others be part of the official
>>> release? This would undoubtedly increase the quality of the frontend
>>> and the glue layer, and probably the runtime, just because they would
>>> be tested on more architectures each release.
>>>
>>> No matter how you put it, both LDC and GDC are limited in manpower,
>>> and also caught in the merge game with mainline. This is a bottleneck
>>> when they need to attract more talent. Right off the bat you need to
>>> do a lot of grunt work handling different repos, each at its own
>>> revision, plus all the knowledge about the build and testing
>>> environments.
>>
>> The issue here is the front end, not the back end. Daniel has already
>> stated this was a goal (to make the front end shared code). So it
>> will happen (I think Daniel has a pretty good record of following
>> through; we do have a D-based front end now, after all).
>>
>> Any effort to make both LDC and GDC part of the "official" release
>> would be artificial -- instead of LDC and GDC getting released
>> "faster", they would simply hold up dmd's release until they caught
>> up. And this is probably more pressure than their developers need.
>>
>> When the front end is shared, then the releases will be quicker, and
>> you can be happier with it.
>>
>
> OK, a shared front end will be great!
>
> My main concern is that if they are not integrated within the daily
> pull-merge-auto-test loop, they will always tend to drift and get out
> of sync while trying to fix whatever breaks.

I think the intention is to support all of the compilers with some reasonable form of CI (not sure whether all PRs would be tested this way, because that might be too much of a burden on the test servers).

The idea is that ldc and gdc will get plenty of warning if something breaks.

-Steve
February 26, 2016
On 26 Feb 2016 9:45 am, "Walter Bright via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
>
> On 2/26/2016 12:20 AM, Iain Buclaw via Digitalmars-d wrote:
>>
>> I thought that multithreaded I/O did not change anything, or slowed
>> compilation down in some cases?
>>
>> Or I recall seeing a slight slowdown when I first tested it in gdc
>> all those years ago. So left it disabled - probably for the best too.
>
> Running one test won't really give much useful information. I also wrote:
>
> "On a machine with local disk and running nothing else, no speedup.
> With a slow filesystem, like an external, network, or cloud (!) drive,
> yes. I would also expect it to speed up when the machine is running a
> lot of other stuff."

Ah ha. Yes I can sort of remember that comment.
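For what it's worth, the kind of overlap being discussed might look roughly like this (a hypothetical sketch; "readAll" is a made-up name, and as noted above, any win depends entirely on the filesystem):

    import std.file : read;
    import std.parallelism : taskPool;

    // Read every source file on the shared task pool so that waits on a
    // slow filesystem overlap instead of adding up serially.
    void[][] readAll(string[] files)
    {
        return taskPool.amap!((string f) => read(f))(files);
    }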

One interesting line of development (though it would be difficult to implement) would be to do all three semantic passes asynchronously using fibers.

If I understand correctly, sdc already does this, though with many cases that still need ironing out.
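In sketch form, the core mechanism is just cooperative suspension (hypothetical; names are made up, and a real compiler would need a scheduler and dependency tracking on top):

    import core.thread : Fiber;
    import std.stdio : writeln;

    void main()
    {
        bool dependencyDone = false;

        // One fiber per analysis unit; it parks itself whenever something
        // it depends on has not finished the previous pass yet.
        auto pass = new Fiber({
            while (!dependencyDone)
                Fiber.yield();      // suspend; retried by the scheduler
            writeln("semantic pass finished");
        });

        pass.call();                // runs until the yield
        dependencyDone = true;      // e.g. another module's pass completed
        pass.call();                // resumes and completes
    }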


February 26, 2016
On Friday, 26 February 2016 at 18:19:57 UTC, Steven Schveighoffer wrote:
> The idea is that ldc and gdc will get plenty of warning if something breaks.

As stated, this in itself would be utterly useless. Right now, you can be absolutely certain that the AST semantics will change between any two DMD releases. Sometimes in obvious ways, because fields are removed and so on, but much more often silently and in a hard-to-track-down fashion, because the structure of the AST or the interpretation of certain node properties changes.

In other words, we don't need any warning that something breaks, because we already know it will. The people that need the warning are the authors of the breaking front-end commits, so that they can properly document the changes and make sure they are acceptable for the other backends (right now, you typically have to reverse-engineer that from the DMD glue layer changes). Ideally, of course, no such changes would be merged without making sure that all the backends have already been adapted for them first.

 — David
February 26, 2016
On 02/26/2016 09:50 AM, David Nadlinger wrote:
> Can we please keep this out of here?

Thank you!! -- Andrei