April 03, 2015
On Friday, 3 April 2015 at 19:08:58 UTC, weaselcat wrote:
>> I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing, which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well.
>
> change one file and see which one is faster with an incremental build.

I don't care if an incremental build is 10x faster if the full build still stays at ~1 second. However, I do care (and consider it unacceptable) if support for incremental builds makes the full build 10 seconds long.
April 04, 2015
On Friday, 3 April 2015 at 19:54:09 UTC, Dicebot wrote:
> On Friday, 3 April 2015 at 19:08:58 UTC, weaselcat wrote:
>>> I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing, which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well.
>>
>> change one file and see which one is faster with an incremental build.
>
> I don't care if an incremental build is 10x faster if the full build still stays at ~1 second. However, I do care (and consider it unacceptable) if support for incremental builds makes the full build 10 seconds long.

I'm of the opposite opinion. I don't care if full builds take 1h as long as incremental builds are as fast as possible. Why would I keep doing full builds? That's like git cloning multiple times. What for?

What's clear is that I need to try Andrei's per-package idea, at least as an option, if not the default. Having a large D codebase to test it on would be nice as well, but I don't know of anything bigger than Phobos.

Atila
April 04, 2015
On 2015-04-03 19:54, Dicebot wrote:

> 2)
>
> Ironically, it is just very slow. Those who come from the C world got used
> to using separate compilation to speed up rebuilds, but it doesn't work
> that way in D. It may look better if you change only 1 or 2 modules, but
> as the number of modified modules grows, an incremental rebuild quickly
> becomes _slower_ than a full program build with all files processed in one
> go. It can sometimes result in an order of magnitude slowdown (personal
> experience).

BTW, are all the issues with incremental rebuilds solved? I.e., templates not being emitted to all object files (see the sketch below) and other problems I can't remember right now.

-- 
/Jacob Carlborg
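
A minimal sketch of the kind of problem being referred to; the module names and commands below are made up for illustration. Historically, when several modules were compiled in one invocation, dmd would emit a given template instantiation into only one of the resulting object files, and an incremental rebuild of a single module could then leave no up-to-date object file carrying that symbol, failing at link time with an undefined reference. dmd's -allinst switch (generate code for all template instantiations) is commonly used as a workaround.

// shapes.d - defines a template used by more than one module
module shapes;
T area(T)(T w, T h) { return w * h; }

// a.d - instantiates area!double
module a;
import shapes;
double roomArea() { return area(3.0, 4.0); }

// b.d - instantiates the same area!double
module b;
import shapes;
double plotArea() { return area(10.0, 20.0); }

// Compiled together (dmd -c shapes.d a.d b.d), area!double may be emitted
// into only one of the object files; recompiling just the other module later
// can leave the instantiation out of every up-to-date object file.
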
April 04, 2015
On Friday, 3 April 2015 at 19:45:38 UTC, Andrei Alexandrescu wrote:
> On 4/3/15 10:10 AM, Dicebot wrote:
>> On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote:
>>> . Separate compilation. One file changes, only one file gets rebuilt
>>
>> This immediately caught my eye as a huge "no" in the description. We
>> must ban C-style separate compilation; there is simply no way to move
>> forward otherwise. At the very least, we should not endorse it in any way.
>
> Agreed. D build style should be one invocation per package. -- Andrei

Just to clarify, reggae has:

1. Low-level building blocks that can be used for pretty much anything
2. High-level convenience rules

There's nothing about #1 that forces per-module compilation. It doesn't force anything; it's just a data definition.

The current implementations of #2, namely dExe and the dub integration, spit out build systems that compile per module, but that can easily be changed or even made configurable.

Even now it's perfectly possible to define a build system for a D project with per-package compilation; it'll just take more typing (see the sketch below).

Atila
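
To make the low-level layer concrete, here is a rough sketch of what a per-package build description along these lines might look like using reggae's Target primitive. The exact API (Target, the build mixin, the $in/$out placeholders) is an approximation from memory of reggae's documentation rather than verified code, and the package layout is invented:

// reggaefile.d - hypothetical per-package build description
import reggae;

// One compiler invocation per package: each Target compiles all of a
// package's modules into a single object file.
const fooObj  = Target("foo.o",
                       "dmd -c -of$out $in",
                       [Target("src/foo/bar.d"), Target("src/foo/baz.d")]);

const utilObj = Target("util.o",
                       "dmd -c -of$out $in",
                       [Target("src/util/io.d"), Target("src/util/maths.d")]);

// Link the per-package objects into the final binary.
const app = Target("myapp", "dmd -of$out $in", [fooObj, utilObj]);

mixin build!(app);

The same data could just as easily describe per-module or whole-program compilation; the granularity lives entirely in how the Targets are written, which is the point about #1 not forcing anything.
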
April 04, 2015
On Friday, 3 April 2015 at 19:49:04 UTC, Andrei Alexandrescu wrote:
> On 4/3/15 11:06 AM, Atila Neves wrote:
>>
>> It's true that it's not always faster to compile each module separately;
>> I already knew that. It seems to me, however, that when that's actually
>> the case, the practical difference is negligible. Even if it's 10x slower,
>> the linker will take longer anyway, because it'll all still be under a
>> second. That's been my experience, anyway, i.e. it's either faster or it
>> doesn't make much of a difference.
>
> Whoa. The difference is much larger (= day and night) on at least a couple of projects at work.

Even when only one file has changed?

Atila
April 04, 2015
On 4/4/15 1:30 AM, Atila Neves wrote:
> On Friday, 3 April 2015 at 19:49:04 UTC, Andrei Alexandrescu wrote:
>> On 4/3/15 11:06 AM, Atila Neves wrote:
>>>
>>> It's true that it's not always faster to compile each module separately;
>>> I already knew that. It seems to me, however, that when that's actually
>>> the case, the practical difference is negligible. Even if it's 10x slower,
>>> the linker will take longer anyway, because it'll all still be under a
>>> second. That's been my experience, anyway, i.e. it's either faster or it
>>> doesn't make much of a difference.
>>
>> Whoa. The difference is much larger (= day and night) on at least a
>> couple of projects at work.
>
> Even when only one file has changed?

Yes; due to interdependencies, it's rare that only one file gets compiled. -- Andrei

April 04, 2015
On Friday, 3 April 2015 at 17:55:00 UTC, Dicebot wrote:
> Complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that.

Development builds are usually not whole-program optimized. And proper optimizers work with IR and see no problem with separate compilation; it's all transparent. Separate compilation is nice for RAM too - good in a virtualized environment like a CI service.
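
As a sketch of why IR-carrying object files make whole-program optimization transparent, consider a case where the caller cannot see the callee's body, using a D .di interface file to stand in for the general situation (file names are made up). With classic object files the call stays opaque; object files containing compiler IR would let a link-time optimizer inline it even though the modules were compiled separately.

// counter.di - interface file: importers see only the signature
module counter;
int bump(int x);

// counter.d - implementation, compiled separately into counter.o
module counter;
int bump(int x) { return x + 1; }

// app.d - compiled against counter.di, so the compiler cannot inline bump.
// If counter.o carried IR rather than finished machine code, an optimizer
// running at link time could still inline and simplify this loop.
module app;
import counter;
int run(int n)
{
    int total = 0;
    foreach (i; 0 .. n)
        total = bump(total);
    return total;
}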

> This also applies to attribute inference, which has become a really important development direction for handling the growing attribute hell.

Depends on code style.
April 04, 2015
On Saturday, 4 April 2015 at 07:44:12 UTC, Atila Neves wrote:
> I'm of the opposite opinion. I don't care if full builds take 1h as long as incremental builds are as fast as possible. Why would I keep doing full builds? That's like git cloning multiple times. What for?

A full build is important when you do it only once, e.g. if you want to try a new version of a program and it's not precompiled: you'll need to compile it from source once and never recompile it.
April 04, 2015
On Saturday, 4 April 2015 at 07:44:12 UTC, Atila Neves wrote:
> On Friday, 3 April 2015 at 19:54:09 UTC, Dicebot wrote:
>> On Friday, 3 April 2015 at 19:08:58 UTC, weaselcat wrote:
>>>> I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing, which compiles everything in one go. The makefile takes 5.3 seconds, not including linking since it builds a library. The shell script takes 1.3 seconds, which includes compiling unit tests and linking as well.
>>>
>>> change one file and see which one is faster with an incremental build.
>>
>> I don't care if an incremental build is 10x faster if the full build still stays at ~1 second. However, I do care (and consider it unacceptable) if support for incremental builds makes the full build 10 seconds long.
>
> I'm of the opposite opinion. I don't care if full builds take 1h as long as incremental builds are as fast as possible. Why would I keep doing full builds? That's like git cloning multiple times. What for?
>
> What's clear is that I need to try Andrei's per-package idea, at least as an option, if not the default. Having a large D codebase to test it on would be nice as well, but I don't know of anything bigger than Phobos.

At work I often switch between a dozen different projects a day, with a small chunk of changes for each. That means incremental builds are never of any value.

Even if you consistently work on the same project, it is incredibly rare for a changeset to be contained in a single module. And once there are at least 5 changed modules (including inter-dependencies), the build becomes long enough already.

As for a test codebase - I know that Martin has been testing his GC improvements on Higgs (https://github.com/higgsjs/Higgs); it could be a suitable test subject for you too.
April 04, 2015
On Saturday, 4 April 2015 at 16:58:23 UTC, Kagamin wrote:
> On Friday, 3 April 2015 at 17:55:00 UTC, Dicebot wrote:
>> Complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that.
>
> Development builds are usually not whole-program optimized. And proper optimizers work with IR and see no problem with separate compilation; it's all transparent. Separate compilation is nice for RAM too - good in a virtualized environment like a CI service.

We need solutions that can reasonably be implemented with existing resources, not perfect solutions. Storing IR in object files and using a custom linker is the "correct" approach for WPO, but it is currently unaffordable. Add the compilation-time problems and there seems to be no compelling reason to go that route for now.

>> This also applies to attribute inference, which has become a really important development direction for handling the growing attribute hell.
>
> Depends on code style.

I am not aware of any solutions based on coding style. Can you elaborate?
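
One possible reading of the "depends on code style" remark (a guess, not necessarily what was meant): D infers pure, nothrow, @safe and @nogc for templated functions by analysing their bodies, so code written mostly as templates (or with auto return types) needs few explicit attributes, whereas ordinary functions have to spell them out. A minimal sketch:

import std.stdio : writeln;

// Attributes are inferred for template functions: this instance ends up
// pure, nothrow, @safe and @nogc because its body qualifies.
T twice(T)(T x) { return x + x; }

// A regular function gets no inference; the attributes must be explicit
// for strictly attributed callers to be able to use it.
int twiceExplicit(int x) pure nothrow @safe @nogc { return x + x; }

// A strictly attributed caller: both calls compile, one via inference,
// the other via explicit annotation.
int caller(int x) pure nothrow @safe @nogc
{
    return twice(x) + twiceExplicit(x);
}

void main()
{
    writeln(caller(21)); // 84
}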