July 25, 2012
On Wed, 25 Jul 2012 18:46:58 +0200, ixid <nuaccount@gmail.com> wrote:

>> beautiful ideas Andrei developed on policy class design
>
> Where would one find these ideas?
>

http://www.amazon.com/Modern-Design-Generic-Programming-Patterns/dp/0201704315

-- 
Simen
July 25, 2012
On 7/25/12 1:24 PM, Walter Bright wrote:
> On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
>> Yes, and both debug and release build times are important.
>
> Optimized build time comparisons are less relevant - are you really
> willing to trade off faster optimization times for less optimization?
>
> I think it's more the time of the edit-compile-debug loop, which would
> be the unoptimized build times.

There are systems that only work in release mode (e.g. performance is part of the acceptability criteria) and for which debugging means watching logs.

So the problem is not faster optimization times for less optimization (though that's possible, too), but instead build times for a given level of optimization.


Andrei
July 25, 2012
On 7/25/2012 10:50 AM, Andrei Alexandrescu wrote:
> On 7/25/12 1:24 PM, Walter Bright wrote:
>> On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
>>> Yes, and both debug and release build times are important.
>>
>> Optimized build time comparisons are less relevant - are you really
>> willing to trade off faster optimization times for less optimization?
>>
>> I think it's more the time of the edit-compile-debug loop, which would
>> be the unoptimized build times.
>
> There are systems that only work in release mode (e.g. performance is part of
> the acceptability criteria) and for which debugging means watching logs.
>
> So the problem is not faster optimization times for less optimization (though
> that's possible, too), but instead build times for a given level of optimization.

The easy way to improve optimized build times is to do less optimization.

I'm saying be careful what you ask for - you might get it!


July 25, 2012

On 25.07.2012 19:24, Walter Bright wrote:
> On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
>> Yes, and both debug and release build times are important.
>
> Optimized build time comparisons are less relevant - are you really
> willing to trade off faster optimization times for less optimization?
>
> I think it's more the time of the edit-compile-debug loop, which would
> be the unoptimized build times.
>
>

The "edit-compile-debug loop" is a use case where the D module system does not shine so well. Compare build times when only editing a single source file:
With the help of incremental linking, building a large C++ project only takes seconds.
In contrast, the D project usually recompiles everything from scratch with every little change.
July 25, 2012
On 7/25/12 4:53 PM, Rainer Schuetze wrote:
>
>
> On 25.07.2012 19:24, Walter Bright wrote:
>> On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
>>> Yes, and both debug and release build times are important.
>>
>> Optimized build time comparisons are less relevant - are you really
>> willing to trade off faster optimization times for less optimization?
>>
>> I think it's more the time of the edit-compile-debug loop, which would
>> be the unoptimized build times.
>>
>>
>
> The "edit-compile-debug loop" is a use case where the D module system
> does not shine so well. Compare build times when only editing a single
> source file:
> With the help of incremental linking, building a large C++ project only
> takes seconds.
> In contrast, the D project usually recompiles everything from scratch
> with every little change.

The same dependency management techniques can be applied to large D projects as to large C++ projects. (And of course there are a few new ones.) What am I missing?
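A minimal make-style sketch of what I mean (file and module names are made up): each module compiles to its own object file, so editing one source triggers one recompile plus a link, not a full rebuild.

```make
# Hypothetical layout: "app" built from three D modules.
SRCS = main.d util.d net.d
OBJS = $(SRCS:.d=.o)

app: $(OBJS)
	dmd $(OBJS) -ofapp

# A module is recompiled only when its source changes.
%.o: %.d
	dmd -c $<
```

In practice you would also generate import dependencies (e.g. from dmd's -deps output) so that editing a commonly imported module rebuilds its importers as well.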

Andrei
July 25, 2012
On Wednesday, July 25, 2012 22:53:08 Rainer Schuetze wrote:
> On 25.07.2012 19:24, Walter Bright wrote:
> > On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
> >> Yes, and both debug and release build times are important.
> > 
> > Optimized build time comparisons are less relevant - are you really willing to trade off faster optimization times for less optimization?
> > 
> > I think it's more the time of the edit-compile-debug loop, which would be the unoptimized build times.
> 
> The "edit-compile-debug loop" is a use case where the D module system
> does not shine so well. Compare build times when only editing a single
> source file:
> With the help of incremental linking, building a large C++ project only
> takes seconds.
> In contrast, the D project usually recompiles everything from scratch
> with every little change.

D should actually compile _faster_ if you compile everything at once - certainly for smaller projects - since it then only has to lex and parse each module once. Incremental builds avoid having to fully compile each module every time, but there's still plenty of extra lexing and parsing which goes on.

I don't know how much it shifts with large projects (maybe incremental builds actually end up being better then, because you have enough files which aren't related to one another that the amount of code which needs to be relexed and reparsed is minimal in comparison to the number of files), but you can do incremental building with dmd if you want to. It's just more typical to do it all at once, because for most projects, that's faster. So, I don't see how there's a complaint against D here.
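To make the two styles concrete, here's a dry-run sketch (module names made up; the echo just prints the commands - drop it to actually run dmd):

```shell
# Dry run of the two build styles for a hypothetical three-module app.
# Remove the "echo" to invoke dmd for real.
{
  # All at once: every module is lexed and parsed exactly once.
  echo dmd -ofapp main.d util.d net.d

  # Incremental: each invocation re-parses whatever it imports,
  # but after an edit only the changed module is recompiled.
  for m in main util net; do
    echo dmd -c "$m.d"
  done
  echo dmd main.o util.o net.o -ofapp
} > build.log
cat build.log
```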

- Jonathan M Davis
July 25, 2012
On Wed, 25 Jul 2012 17:31:10 -0400
Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:

> On 7/25/12 4:53 PM, Rainer Schuetze wrote:
> >
> > The "edit-compile-debug loop" is a use case where the D module
> > system does not shine so well. Compare build times when only
> > editing a single source file:
> > With the help of incremental linking, building a large C++ project
> > only takes seconds.
> > In contrast, the D project usually recompiles everything from
> > scratch with every little change.
> 
> The same dependency management techniques can be applied to large D projects, as to large C++ projects. (And of course there are a few new ones.) What am I missing?
> 

Aren't there still issues with what object files DMD chooses to store instantiated templates into? Or has that all been fixed?

The xfbuild developers wrestled a lot with this and AIUI eventually
gave up. The symptom is that you'll eventually start getting linker
errors related to template instantiations, which go away once you do
a complete rebuild.

July 25, 2012
On 7/25/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> D should actually compile _faster_ if you compile everything at once -
> certainly for smaller projects - since it then only has to lex and parse
> each
> module once. Incremental builds avoid having to fully compile each module
> every time, but there's still plenty of extra lexing and parsing which goes on.

That's assuming that lexing/parsing is the bottleneck for DMD. For example: a full build of WindowsAPI takes 14.6 seconds on my machine, but when compiling one module at a time and using parallelism it takes 7 seconds instead. And all it takes is a simple parallel loop.
July 25, 2012
On Thursday, July 26, 2012 00:34:07 Andrej Mitrovic wrote:
> On 7/25/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> > D should actually compile _faster_ if you compile everything at once -
> > certainly for smaller projects - since it then only has to lex and parse
> > each
> > module once. Incremental builds avoid having to fully compile each module
> > every time, but there's still plenty of extra lexing and parsing which
> > goes on.
> That's assuming that the lexing/parsing is the bottleneck for DMD.

Not necessarily. The point is that there's extra work that has to be done when compiling separately. So, whether it takes more or less time depends on how much other work you're avoiding by doing an incremental build. Certainly, I'd expect a full incremental build from scratch to take longer than one which was not incremental.

> For example: a full build of  WindowsAPI takes 14.6 seconds on my machine.
> But when compiling one module at a time and using parallelism it takes
> 7 seconds instead. And all it takes is a simple parallel loop.

Parallelism? How on earth do you manage that? dmd has no support for running on multiple threads AFAIK. Do you run multiple copies of dmd at once? Certainly, compiling files in parallel changes things. You've got multiple cores working on it at that point, so the equation is completely different.

- Jonathan M Davis
July 25, 2012
On 7/26/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> Parallelism? How on earth do you manage that? dmd has no support for running
> on multiple threads AFAIK.
> You've got multiple
> cores working on it at that point, so the equation is completely different.

That's exactly my point: you can take advantage of parallelism externally, if you compile module-by-module, simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?
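For instance, something along these lines (a dry-run sketch with made-up module names; xargs -P does the fan-out, and you'd drop the echo to really compile):

```shell
# Compile each module in its own dmd process, at most 4 in flight.
# "echo" makes this a dry run; remove it to actually build.
printf '%s\n' winuser.d wingdi.d winbase.d shellapi.d \
  | xargs -n 1 -P 4 echo dmd -c > parallel.log
# xargs waits for all its children, so the log is complete here.
sort parallel.log
```

After that, one final dmd invocation links the object files together.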