April 18, 2010
Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:

> Maybe true, but I was thinking of it from a different angle -- why the
> main D2 development does not switch backends.

So which do you suggest be used instead - the one that doesn't work on
Windows (no exceptions) or the one that requires Walter to give away
all rights to his work?

-- 
Simen
April 18, 2010
Simen kjaeraas wrote:
> Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:
> 
>> Maybe true, but I was thinking of it from a different angle -- why the main D2 development does not switch backends.
> 
> So which do you suggest be used instead - the one that doesn't work on Windows (no exceptions) or the one that requires Walter to give away all rights to his work?
> 
	Actually, gcc doesn't require Walter to give away all rights to his
work just to use it as a backend (or else the gdc project wouldn't
exist). It would require ceding the rights only if Walter wanted D
to be part of the official gcc distribution.

		Jerome
-- 
mailto:jeberger@free.fr
http://jeberger.free.fr
Jabber: jeberger@jabber.fr



April 18, 2010
Jérôme M. Berger <jeberger@free.fr> wrote:

> 	Actually, gcc doesn't require Walter to give away all rights to his
> work just to use it as a backend (or else the gdc project wouldn't
> exist). It would require ceding the rights only if Walter wanted D
> to be part of the official gcc distribution.

You're right, I'm sorry.

-- 
Simen
April 20, 2010
Jérôme M. Berger wrote:
> Simen kjaeraas wrote:
>> Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:
>>
>>> Maybe true, but I was thinking of it from a different angle -- why the
>>> main D2 development does not switch backends.
>> So which do you suggest be used instead - the one that doesn't work on
>> Windows (no exceptions) or the one that requires Walter to give away
>> all rights to his work?
>>
> 	Actually, gcc doesn't require Walter to give away all rights to his
> work just to use it as a backend (or else the gdc project wouldn't
> exist). It would require ceding the rights only if Walter wanted D
> to be part of the official gcc distribution.
> 
> 		Jerome

No, the problem is that it potentially makes him give away the rights to the dmd backend. Which I think he can't legally do, even if he wanted to.
April 20, 2010
On Tue, 20 Apr 2010 10:50:37 -0400, Don <nospam@nospam.com> wrote:

> Jérôme M. Berger wrote:
>> Simen kjaeraas wrote:
>>> Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:
>>>
>>>> Maybe true, but I was thinking of it from a different angle -- why the
>>>> main D2 development does not switch backends.
>>> So which do you suggest be used instead - the one that doesn't work on
>>> Windows (no exceptions) or the one that requires Walter to give away
>>> all rights to his work?
>>>
>> 	Actually, gcc doesn't require Walter to give away all rights to his
>> work just to use it as a backend (or else the gdc project wouldn't
>> exist). It would require ceding the rights only if Walter wanted D
>> to be part of the official gcc distribution.
>>  		Jerome
>
> No, the problem is that it potentially makes him give away the rights to the dmd backend. Which I think he can't legally do, even if he wanted to.

I don't think there is any danger of this; it would be well established that Walter wrote all his proprietary backend code before he viewed gcc source.  The danger is for future code he writes.

Personally, I am not too concerned about the backend performance; it's not critical to D at this time.  Someone, somewhere, will make this better, and then any code written in D magically gets faster :)  We're talking about decreasing the constant factor of the code the compiler produces, not decreasing its complexity.  Code built with dmd runs plenty fast for me (save the GC performance; maybe we can focus on that first?).

-Steve
April 23, 2010
Steven Schveighoffer wrote:
>> No, the problem is that it potentially makes him give away the rights to the dmd backend. Which I think he can't legally do, even if he wanted to.
> 
> I don't think there is any danger of this; it would be well established that Walter wrote all his proprietary backend code before he viewed gcc source.  The danger is for future code he writes.

I can see the concern here, certainly.

> Personally, I am not too concerned about the backend performance, it's not critical to D at this time.  Someone, somewhere, will make this better, and then any code written in D magically gets faster :)  We're talking about decreasing the constant for the D compiler complexity, not decreasing the complexity.  Code built with dmd runs plenty fast for me (save the GC performance, maybe we can focus on that first?).

I'm looking forward to seeing gdc released for D2 -- I think it will be interesting to compare.  From what I understand part of the motivation for reawakening it was a comparison of performance of code generated by llvm and gcc respectively.

Part of my original issue over speed was that I'd heard D described as 'performance already as good as C++'.  So, I was coming with expectations about what I'd be able to achieve ... :-)
April 23, 2010
On Fri, 23 Apr 2010 11:55:42 -0400, Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:

> Steven Schveighoffer wrote:
>> Personally, I am not too concerned about the backend performance; it's
>> not critical to D at this time.  Someone, somewhere, will make this
>> better, and then any code written in D magically gets faster :)  We're
>> talking about decreasing the constant factor of the code the compiler
>> produces, not decreasing its complexity.  Code built with dmd runs plenty
>> fast for me (save the GC performance; maybe we can focus on that first?).
>
> I'm looking forward to seeing gdc released for D2 -- I think it will be
> interesting to compare.  From what I understand part of the motivation
> for reawakening it was a comparison of performance of code generated by
> llvm and gcc respectively.

ATM, the bottleneck almost always is the GC.  And the gdc GC is the same as the dmd GC, meaning you'll get the same relative performance.  Like I said, you are decreasing the constant, not the complexity.  Creating a better GC algorithm would be a more effective speedup.  I think LLVM has its own GC, so that might be significantly different.
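
If you want to see how big the GC share is in your own runs, something along these lines works (a rough, untested sketch of mine; the append loop is just a stand-in for allocation-heavy work, and I'm using the current core.memory / std.datetime.stopwatch modules):

    import core.memory : GC;
    import std.datetime.stopwatch : StopWatch, AutoStart;
    import std.stdio : writeln;

    // Stand-in for allocation-heavy work: every growth step goes through the GC heap.
    int[] work()
    {
        int[] a;
        foreach (i; 0 .. 1_000_000)
            a ~= i;
        return a;
    }

    void main()
    {
        auto sw = StopWatch(AutoStart.yes);
        work();
        writeln("with collections:    ", sw.peek);

        GC.disable();    // allocations still happen, collections are suppressed
        sw.reset();
        work();
        writeln("without collections: ", sw.peek);
        GC.enable();
    }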

> Part of my original issue over speed was that I'd heard D described as
> 'performance already as good as C++'.  So, I was coming with
> expectations about what I'd be able to achieve ... :-)

As long as you discount the vast differences in allocation performance, the code generated should be just as good as code generated by a C++ compiler.  Your interpretation of performance did not focus on the right part :)  Your test application heavily used allocation and reallocation, things that have nothing to do with how fast the code compiled by the compiler is, but are based more on the algorithms behind the allocation.  An equivalent C++-based GC would probably show similar performance (in fact, I think D's GC was based on a C++ GC).
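
To make the distinction concrete, here is a toy sketch (my own, untested, with arbitrary sizes): both loops store the same values, but only the first one pays for allocation:

    import std.stdio : writeln;

    void main()
    {
        enum N = 1_000_000;

        // Allocation-bound: the runtime repeatedly grows the array.
        int[] grown;
        foreach (i; 0 .. N)
            grown ~= i;

        // Codegen-bound: one allocation up front, then plain indexed stores.
        auto filled = new int[](N);
        foreach (i; 0 .. N)
            filled[i] = i;

        writeln(grown.length, " ", filled.length);
    }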

This is all taken with a grain of salt of course, the perception is often more important than the technical details.  This thread being a prime example of it.

How I would characterize D when talking about performance is that it is possible to make it as high-performing as C++, but often favors memory safety over performance.  As far as syntax, D wins that battle hands down IMO.  And syntax is way more important to me than performance, especially at this stage of D's life.  Performance can always be tweaked and improved with few changes to the source code, but syntax changes can force you to have to modify an entire program.

-Steve
April 23, 2010
Steven Schveighoffer wrote:
> As long as you discount the vast differences in allocation performance, the code generated should be just as good as code generated by a C++ compiler.  Your interpretation of performance did not focus on the right part :)  Your test application heavily used allocation and reallocation, things that have nothing to do with how fast the code compiled by the compiler is, but are based more on the algorithms behind the allocation.  An equivalent C++-based GC would probably show similar performance (in fact, I think D's GC was based on a C++ GC).
> 
> This is all taken with a grain of salt of course, the perception is often more important than the technical details.  This thread being a prime example of it.

I do see the point about allocation and reallocation -- what was bothering me a bit was that even taking those aspects out of the code and preallocating everything, I could write C++ code that _didn't_ preallocate and still ran (much) faster ... :-)

> How I would characterize D when talking about performance is that it is possible to make it as high-performing as C++, but often favors memory safety over performance.  As far as syntax, D wins that battle hands down IMO.  And syntax is way more important to me than performance, especially at this stage of D's life.  Performance can always be tweaked and improved with few changes to the source code, but syntax changes can force you to have to modify an entire program.

Certainly agree about syntax -- it was not quite love at first sight, but close.  In my case performance matters a lot, safety somewhat less -- since I'm used to taking responsibility for it myself, and my programming is quite small-scale.

As for perception, my perception is that I like D a lot and will surely be using it more in future... :-)
April 23, 2010
On Fri, 23 Apr 2010 12:28:55 -0400, Joseph Wakeling <joseph.wakeling@webdrake.net> wrote:

> Steven Schveighoffer wrote:
>> As long as you discount the vast differences in allocation performance,
>> the code generated should be just as good as code generated by a C++
>> compiler.  Your interpretation of performance did not focus on the right
>> part :)  Your test application heavily used allocation and reallocation,
>> things that have nothing to do with how fast the code compiled by the
>> compiler is, but are based more on the algorithms behind the
>> allocation.  An equivalent C++-based GC would probably show similar
>> performance (in fact, I think D's GC was based on a C++ GC).
>>
>> This is all taken with a grain of salt of course, the perception is
>> often more important than the technical details.  This thread being a
>> prime example of it.
>
> I do see the point about allocation and reallocation -- what was
> bothering me a bit was that even taking those aspects out of the code
> and preallocating everything, I could write C++ code that _didn't_
> preallocate and still ran (much) faster ... :-)

If you are comparing vector's push_back to D's array append after pre-allocating, you are still not comparing apples to apples...

Array appending works without context -- the array has no idea who owns the data or how big the allocation is until it does a GC query.  vector owns the data and knows exactly how big it is, so no expensive lookup is needed.  The benefit of D arrays is that you can pass them, or slices of them, around very cheaply, without copying or worrying about them being deallocated.
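
For example (a made-up toy, but it shows both halves of the trade-off):

    import std.stdio : writeln;

    void main()
    {
        int[] a = new int[](1000);

        int[] window = a[10 .. 20];   // just pointer + length: nothing is copied
        window[0] = 42;
        writeln(a[10]);               // 42 -- the slice shares a's memory

        // But the array itself doesn't carry its allocated size around;
        // asking for it has to go through the runtime/GC:
        writeln(a.capacity);
    }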

D's standard library should have a construct that duplicates the performance of vector; I'm not sure if there is anything right now.  I thought Appender would do it, but you have said in the past it is slow.  This needs to be remedied.
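
For reference, the usage I have in mind is roughly this (sizes arbitrary; whether it actually matches vector is exactly what needs measuring):

    import std.array : appender;
    import std.stdio : writeln;

    void main()
    {
        auto app = appender!(int[])();
        app.reserve(1_000_000);    // one up-front allocation
        foreach (i; 0 .. 1_000_000)
            app.put(i);            // capacity tracked locally, no GC lookup per append
        int[] result = app.data;   // the assembled array
        writeln(result.length);
    }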

-Steve
April 23, 2010
Steven Schveighoffer wrote:
>> I do see the point about allocation and reallocation -- what was bothering me a bit was that even taking those aspects out of the code and preallocating everything, I could write C++ code that _didn't_ preallocate and still ran (much) faster ... :-)
> 
> If you are comparing vector's push_back to D's array append after pre-allocating, you are still not comparing apples to apples...

No ... !  That was true in the original code I posted, but, following bearophile's kind example, that part of the code was updated to a form along the lines of:

    x.length = 5_000_000;            // preallocate the full buffer once

    for(uint i=0;i<100;++i) {
        size_t pos = 0;              // reuse the same buffer on every pass
        for(uint j=0;j<5_000;++j) {
            for(uint k=0;k<1_000;++k) {
                x[pos++] = j*k;      // plain indexed store, no allocation
            }
        }
    }

... which in itself indeed runs about the same speed as C++.  But that's not the main cause of the difference in running time of the codes.

> D's standard library should have a construct that duplicates the performance of vector; I'm not sure if there is anything right now.  I thought Appender would do it, but you have said in the past it is slow. This needs to be remedied.

It would be great -- but by itself it's not responsible for the timing differences I'm observing.

Best wishes,

    -- Joe