July 25, 2012
On 7/26/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> Certainly, I'd expect a full incremental build from scratch to take longer than one which was not incremental.

Well, that would probably only be done once. With full builds, you pay that cost every time.
July 25, 2012
On Thursday, July 26, 2012 00:44:14 Andrej Mitrovic wrote:
> On 7/26/12, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> > Parallelism? How on earth do you manage that? dmd has no support for
> > running on multiple threads AFAIK.
> > You've got multiple
> > cores working on it at that point, so the equation is completely
> > different.
> 
> That's exactly my point: if you compile module-by-module, you can take advantage of parallelism externally simply by invoking multiple DMD processes. And who doesn't own a multicore machine these days?

Well, regardless, my point, and Andrei's, was that C++ has nothing on us here. We can do incremental builds just fine. The fact that most people just build the whole program from scratch every time is irrelevant. It just means that build times are fast enough for most people not to care about doing incremental builds, not that they can't do them.
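For illustration, a minimal per-module build might look like this (file names are hypothetical; -c compiles without linking, so each invocation is an independent process that can run in parallel, and only edited modules need recompiling):

    dmd -c a.d               # produces a.obj (a.o on Posix)
    dmd -c b.d               # separate process; can run concurrently with the one above
    dmd a.obj b.obj -ofapp   # hand the object files back to dmd to link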

- Jonathan M Davis
July 26, 2012
On 7/25/2012 1:53 PM, Rainer Schuetze wrote:
> The "edit-compile-debug loop" is a use case where the D module system does not
> shine so well. Compare build times when only editing a single source file:
> With the help of incremental linking, building a large C++ project only takes
> seconds.
> In contrast, the D project usually recompiles everything from scratch with every
> little change.


I suspect that's one of two possibilities:

1. everything is passed on one command line to dmd. This, of course, requires dmd to recompile everything.

2. modules are not separated into .d and .di files. Hence every module that imports a .d file has to at least parse and semantically analyze the whole thing, even though it won't optimize or generate code for it.
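For what it's worth, dmd can generate the interface file itself; a minimal sketch with a hypothetical module foo.d:

    dmd -c -H foo.d   # emits foo.di alongside foo.obj; importers then read the slim foo.di instead of foo.d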


As for incremental linking, optlink has always been faster at doing a full link than the Microsoft linker is at doing an incremental link.
July 26, 2012

On 25.07.2012 23:31, Andrei Alexandrescu wrote:
> On 7/25/12 4:53 PM, Rainer Schuetze wrote:
>>
>>
>> On 25.07.2012 19:24, Walter Bright wrote:
>>> On 7/25/2012 8:13 AM, Andrei Alexandrescu wrote:
>>>> Yes, and both debug and release build times are important.
>>>
>>> Optimized build time comparisons are less relevant - are you really
>>> willing to trade off faster optimization times for less optimization?
>>>
>>> I think it's more the time of the edit-compile-debug loop, which would
>>> be the unoptimized build times.
>>>
>>>
>>
>> The "edit-compile-debug loop" is a use case where the D module system
>> does not shine so well. Compare build times when only editing a single
>> source file:
>> With the help of incremental linking, building a large C++ project only
>> takes seconds.
>> In contrast, the D project usually recompiles everything from scratch
>> with every little change.
>
> The same dependency management techniques can be applied to large D
> projects, as to large C++ projects. (And of course there are a few new
> ones.) What am I missing?

Incremental compilation does not work so well because:

- with declaration and implementation combined in the source, you pull in the full dependencies even when you only need a short declaration
- even with .di files, imports are viral: you must be very careful when trying to remove them from .di files, because you might break the runtime module initialization order
- .di file generation has other known problems (e.g. function bodies that CTFE needs can be missing; see the sketch below)
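To sketch the CTFE problem (hypothetical module names): if generation strips a function body from the .di file, a compile-time call of that function from an importing module has nothing to evaluate:

    // sq.d - the full module
    module sq;
    int square(int x) { return x * x; }

    // sq.di - generated interface file with the body stripped
    module sq;
    int square(int x);

    // user.d - compiled against sq.di
    import sq;
    enum nine = square(3); // CTFE needs square's body; fails with only the declaration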

I thought about implementing incremental builds for Visual D, but soon gave up when I noticed that compiling a single file in a medium-sized project (Visual D itself) takes almost as long as recompiling the whole thing.

I suspect the problem is that dmd fully analyzes all the imported files and only skips code generation for them. It could be much faster if it did the analysis lazily (though this might slightly change evaluation order and skip error messages in unused code blocks).

July 26, 2012

On 26.07.2012 03:48, Walter Bright wrote:
> On 7/25/2012 1:53 PM, Rainer Schuetze wrote:
>> The "edit-compile-debug loop" is a use case where the D module system
>> does not
>> shine so well. Compare build times when only editing a single source
>> file:
>> With the help of incremental linking, building a large C++ project
>> only takes
>> seconds.
>> In contrast, the D project usually recompiles everything from scratch
>> with every
>> little change.
>
>
> I suspect that's one of two possibilities:
>
> 1. everything is passed on one command line to dmd. This, of course,
> requires dmd to recompile everything.
>
> 2. modules are not separated into .d and .di files. Hence every module
> that imports a .d file has to, at least, parse and semantically analyze
> the whole thing, although it won't optimize or generate code for it.
>

I think working with .di files is too painful. A lot of the analysis in imported files could be skipped.

>
> As for incremental linking, optlink has always been faster at doing a
> full link than the Microsoft linker does for an incremental link.

Agreed, incremental linking is just a workaround for the linker's slowness.

July 26, 2012
On 25/07/12 16:13, Andrei Alexandrescu wrote:
> Yes, and both debug and release build times are important.

If you can advise some flag combinations (for D and C++) you'd like to see tested, I'll happily do them.
July 26, 2012
On 2012-07-26 00:17, Nick Sabalausky wrote:

> Aren't there still issues with what object files DMD chooses to store
> instantiated templates into? Or has that all been fixed?
>
> The xfbuild developers wrestled a lot with this and AIUI eventually
> gave up. The symptoms are that you'll eventually start getting linker
> errors related to template instantiations, which will be
> fixed when you then do a complete rebuild.

I'm pretty sure nothing has changed. But Walter said that if you use the -lib flag, it will output the templates to all object files. That complicates things a bit, but it should still be possible to make it work.
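Roughly, and with hypothetical file names, that would be:

    dmd -lib -ofmylib.lib a.d b.d   # emit a library; template instantiations go into its object modules
    dmd main.d mylib.lib            # link against it

Whether that plays well with a per-module incremental scheme is exactly the complication mentioned above.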

-- 
/Jacob Carlborg
July 26, 2012
On 2012-07-25 23:56, Jonathan M Davis wrote:

> D should actually compile _faster_ if you compile everything at once -
> certainly for smaller projects - since it then only has to lex and parse each
> module once. Incremental builds avoid having to fully compile each module
> every time, but there's still plenty of extra lexing and parsing which goes
> on.
>
> I don't know how much it shifts with large projects (maybe incremental builds
> actually end up being better then, because you have enough files which aren't
> related to one another that the amount of code which needs to be relexed and
> reparsed is minimal in comparison to the number of files), but you can do
> incremental building with dmd if you want to. It's just more typical to do it
> all at once, because for most projects, that's faster. So, I don't see how
> there's a complaint against D here.

Incremental builds don't have to mean "pass a single file to the compiler". You can start by passing all the files to the compiler at once, and later you just pass all the files that have changed, at once. But I don't know how much of a difference that makes compared to recompiling the whole project.
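A minimal sketch, assuming only two hypothetical modules were edited since the last build:

    dmd -c changed1.d changed2.d                       # recompile just the edited modules, in one invocation
    dmd a.obj b.obj changed1.obj changed2.obj -ofapp   # relink with the untouched objects from the previous build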

-- 
/Jacob Carlborg
July 26, 2012
On 2012-07-26 00:42, Jonathan M Davis wrote:

> I'd expect a full incremental build from scratch to take longer than one which was
> not incremental.

Why? Just pass all the files to the compiler at once. Nothing says an incremental build needs to pass a single file to the compiler.

-- 
/Jacob Carlborg
July 26, 2012
On 7/26/12 4:15 AM, Joseph Rushton Wakeling wrote:
> On 25/07/12 16:13, Andrei Alexandrescu wrote:
>> Yes, and both debug and release build times are important.
>
> If you can advise some flag combinations (for D and C++) you'd like to
> see tested, I'll happily do them.

The classic ones are: (a) no flags at all, (b) -O -release -inline, (c) -O -release -inline -noboundscheck.
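That is, something like (with a hypothetical app.d):

    dmd app.d
    dmd -O -release -inline app.d
    dmd -O -release -inline -noboundscheck app.d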

You can skip the latter as it won't impact build time.


Andrei