April 03, 2015
On Friday, 3 April 2015 at 17:59:22 UTC, Atila Neves wrote:
> Well, I took your advice (and one of my acceptance tests is based off of your simplified real-work example) and started with the low-level any-command-will-do API first. I built the high-level ones on top of that. It doesn't seem crazy to me that certain builds can only be done by certain backends. The fact that the make backend can track C/C++/D dependencies wasn't a given and the implementation is quite ugly.
>
> In any case, the Target structs aren't high-level abstractions, they're just data. Data that can be generated by any code. Your example is basically how the `dExe` rule works: run dmd at run-time, collect dependencies and build all the `Target` instances. You could have a D backend that outputs (then compiles and runs) your example. The "only" problem I can see is execution speed.
>
> Maybe I didn't include enough examples.
>
> I also need to think of your example a bit more.

I may have misunderstood how it works, judging only by the provided examples. Give me a bit more time to investigate the actual sources and I may reconsider :)
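
For reference, here is my current reading of the low-level API, judging purely from the description above - a hypothetical sketch only, the exact names and placeholders may not match the actual reggae sources:

import reggae;

// Hypothetical sketch: Target is plain data describing an output, the shell
// command that produces it, and its dependencies. $in and $out are assumed
// to be placeholders expanded by the chosen backend.
const mainObj  = Target("main.o",  "dmd -c $in -of$out", [Target("src/main.d")]);
const mathsObj = Target("maths.o", "dmd -c $in -of$out", [Target("src/maths.d")]);
const app      = Target("myapp",   "dmd -of$out $in",    [mainObj, mathsObj]);

// Assumed entry point: hand the top-level target to the build system.
mixin build!(app);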
April 03, 2015
On Friday, 3 April 2015 at 17:55:00 UTC, Dicebot wrote:
> On Friday, 3 April 2015 at 17:25:51 UTC, Ben Boeckel wrote:
>> On Fri, Apr 03, 2015 at 17:10:31 +0000, Dicebot via Digitalmars-d-announce wrote:
>>> On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote:
>>> > . Separate compilation. One file changes, only one file gets rebuilt
>>> 
>>> This immediately caught my eye as a huge "no" in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least, we should not endorse it in any way.
>>
>> Why? Other than the -fversion=... stuff, what is really blocking this? I
>> personally find unity builds to not be worth it, but I don't see
>> anything blocking separate compilation for D if dependencies are set up
>> properly.
>>
>> --Ben
>
> There are 2 big problems with C-style separate compilation:
>
> 1)
>
> It complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that. This also applies to attribute inference, which has become a really important development direction to handle the growing attribute hell.
>
> During the last D Berlin Meetup we had an interesting conversation about attribute inference with Martin Nowak, and dropping legacy C-style separate compilation seemed to be recognized as unavoidable for implementing anything decent in that domain.
>
> 2)
>
> Ironically, it is just very slow. Those who come from the C world are used to using separate compilation to speed up rebuilds, but it doesn't work that way in D. It may look better if you change only 1 or 2 modules, but as the number of modified modules grows, an incremental rebuild quickly becomes _slower_ than a full program build with all files processed in one go. It can sometimes result in an order-of-magnitude slowdown (personal experience).
>
> The difference from C is that repeated imports are very cheap in D (you don't copy-paste the module content again and again like with headers), but at the same time semantic analysis of an imported module is more expensive (because D semantics are more complicated). When you do separate compilation you discard the already-processed imports and repeat that analysis from the very beginning for each newly compiled file, accumulating a huge slowdown for the application in total.
>
> To get the best compilation speed in D you want to process as many modules with shared imports in one go as possible. At the same time, for really big projects that stops being feasible at some point, especially if CTFE is heavily used and memory consumption explodes. In that case the best approach is partial separate compilation - decoupling parts of the program into static libraries and compiling those libraries in parallel - but still compiling each library in one go. That gives you parallelization without doing the same costly work again and again.

Interesting.

It's true that it's not always faster to compile each module separately; I already knew that. It seems to me, however, that when that's actually the case, the practical difference is negligible. Even if it's 10x slower, the linker will take longer anyway, because it'll all still be under a second. That's been my experience, anyway: it's either faster or it doesn't make much of a difference.

All I know is I've seen a definite improvement in my edit-compile-unittest cycle by compiling modules separately.

How would the decoupling happen? Is the user supposed to partition the binary into suitable static libraries? Or is the system supposed to be smart enough to figure that out?
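
If I've understood the partial separate compilation idea correctly, a build along those lines would be driven roughly like the sketch below (the package layout, file names, and driver code are made up for illustration; only the dmd flags are real):

import std.algorithm : map;
import std.array : array;
import std.file : SpanMode, dirEntries;
import std.path : buildPath;
import std.process : spawnProcess, wait;

// Compile every module of a package in a single dmd invocation and emit a
// static library, so shared imports are analysed once per package rather
// than once per module.
void buildPackageAsLibrary(string pkgDir, string outLib)
{
    auto sources = dirEntries(pkgDir, "*.d", SpanMode.depth).map!(e => e.name).array;
    wait(spawnProcess(["dmd", "-lib", "-of" ~ outLib] ~ sources));
}

void main()
{
    // Each package is an independent compilation unit; a real build tool
    // could run these in parallel.
    buildPackageAsLibrary(buildPath("source", "foo"), "foo.a");
    buildPackageAsLibrary(buildPath("source", "bar"), "bar.a");

    // Link the application against the per-package libraries.
    wait(spawnProcess(["dmd", "-ofapp", "main.d", "foo.a", "bar.a"]));
}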

Atila


April 03, 2015
On Friday, 3 April 2015 at 17:55:00 UTC, Dicebot wrote:
> On Friday, 3 April 2015 at 17:25:51 UTC, Ben Boeckel wrote:
>> On Fri, Apr 03, 2015 at 17:10:31 +0000, Dicebot via Digitalmars-d-announce wrote:
>>> On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote:
>>> > . Separate compilation. One file changes, only one file gets rebuilt
>>> 
>>> This immediately caught my eye as a huge "no" in the description. We must ban C-style separate compilation; there is simply no way to move forward otherwise. At the very least, we should not endorse it in any way.
>>
>> Why? Other than the -fversion=... stuff, what is really blocking this? I
>> personally find unity builds to not be worth it, but I don't see
>> anything blocking separate compilation for D if dependencies are set up
>> properly.
>>
>> --Ben
>
> There are 2 big problems with C-style separate compilation:
>
> 1)
>
> It complicates whole-program optimization possibilities. Old-school object files are simply not good enough to preserve the information necessary to produce optimized builds, and we are not in a position to create our own metadata + linker combo to circumvent that. This also applies to attribute inference, which has become a really important development direction to handle the growing attribute hell.

Not sure about other people, but I do not care about whole program optimization during an edit-compile-run cycle. I just want it to compile as fast as possible, and if I change one or two files I don't want to have to recompile an entire codebase.
April 03, 2015
On 2015-04-03 20:06, Atila Neves wrote:

> Interesting.
>
> It's true that it's not always faster to compile each module separately;
> I already knew that. It seems to me, however, that when that's actually
> the case, the practical difference is negligible. Even if it's 10x slower,
> the linker will take longer anyway, because it'll all still be under a
> second. That's been my experience, anyway: it's either faster or it
> doesn't make much of a difference.

I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing, which compiles everything in one go. The makefile takes 5.3 seconds, not including linking, since it builds a library. The shell script takes 1.3 seconds, which includes compiling the unit tests and linking as well.

-- 
/Jacob Carlborg
April 03, 2015
On Friday, 3 April 2015 at 19:07:09 UTC, Jacob Carlborg wrote:
> On 2015-04-03 20:06, Atila Neves wrote:
>
>> Interesting.
>>
>> It's true that it's not always faster to compile each module separately;
>> I already knew that. It seems to me, however, that when that's actually
>> the case, the practical difference is negligible. Even if it's 10x slower,
>> the linker will take longer anyway, because it'll all still be under a
>> second. That's been my experience, anyway: it's either faster or it
>> doesn't make much of a difference.
>
> I just tried compiling one of my projects. It has a makefile that does separate compilation and a shell script I use for unit testing, which compiles everything in one go. The makefile takes 5.3 seconds, not including linking, since it builds a library. The shell script takes 1.3 seconds, which includes compiling the unit tests and linking as well.

Change one file and see which one is faster with an incremental build.
April 03, 2015
On 2015-04-03 19:03, Atila Neves wrote:
> I wanted to work on this a little more before announcing it, but it
> seems I'm going to be busy working on trying to get unit-threaded into
> std.experimental so here it is:
>
> http://code.dlang.org/packages/reggae

One thing I noticed immediately (unless I'm mistaken) is that compiling a D project without dependencies is too complicated. It should just be:

$ cd my_d_project
$ reggae

-- 
/Jacob Carlborg
April 03, 2015
On 4/3/15 10:10 AM, Dicebot wrote:
> On Friday, 3 April 2015 at 17:03:35 UTC, Atila Neves wrote:
>> . Separate compilation. One file changes, only one file gets rebuilt
>
> This immediately caught my eye as a huge "no" in the description. We
> must ban C-style separate compilation; there is simply no way to move
> forward otherwise. At the very least, we should not endorse it in any way.

Agreed. D build style should be one invocation per package. -- Andrei
April 03, 2015
On 4/3/15 11:06 AM, Atila Neves wrote:
>
> It's true that it's not always faster to compile each module separately;
> I already knew that. It seems to me, however, that when that's actually
> the case, the practical difference is negligible. Even if it's 10x slower,
> the linker will take longer anyway, because it'll all still be under a
> second. That's been my experience, anyway: it's either faster or it
> doesn't make much of a difference.

Whoa. The difference is much larger (= day and night) on at least a couple of projects at work.

> All I know is I've seen a definite improvement in my
> edit-compile-unittest cycle by compiling modules separately.
>
> How would the decoupling happen? Is the user supposed to partition the
> binary into suitable static libraries? Or is the system supposed to be
> smart enough to figure that out?

Smarts would be nice, but as a first approximation, one package = one compilation unit is a great policy.


Andrei

April 03, 2015
On 4/3/15 12:07 PM, Jacob Carlborg wrote:
> On 2015-04-03 20:06, Atila Neves wrote:
>
>> Interesting.
>>
>> It's true that it's not always faster to compile each module separately;
>> I already knew that. It seems to me, however, that when that's actually
>> the case, the practical difference is negligible. Even if it's 10x slower,
>> the linker will take longer anyway, because it'll all still be under a
>> second. That's been my experience, anyway: it's either faster or it
>> doesn't make much of a difference.
>
> I just tried compiling one of my projects. It has a makefile that does
> separate compilation and a shell script I use for unit testing, which
> compiles everything in one go. The makefile takes 5.3 seconds, not
> including linking, since it builds a library. The shell script takes 1.3
> seconds, which includes compiling the unit tests and linking as well.

Truth be told, that's 5.3 seconds for an entire build, so the comparison is only partially relevant. -- Andrei

April 03, 2015
On Friday, 3 April 2015 at 18:06:42 UTC, Atila Neves wrote:
> All I know is I've seen a definite improvement in my edit-compile-unittest cycle by compiling modules separately.
>
> How would the decoupling happen? Is the user supposed to partition the binary into suitable static libraries? Or is the system supposed to be smart enough to figure that out?

Ideally both. The build system should be smart enough to group modules into static libraries automatically if the user doesn't care (Andrei's suggestion of one package per library makes sense), but an option to define compilation units explicitly is of course still necessary.
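
As a rough illustration, explicitly defining the compilation units in a Target-style build description could look something like this - entirely hypothetical, reusing the sketch notation from earlier in the thread, with one static library per package and a final link step:

import reggae;

// Hypothetical: each package becomes one static-library target built in a
// single dmd invocation, and the executable links against those libraries.
const fooLib = Target("foo.a", "dmd -lib -of$out $in",
                      [Target("source/foo/a.d"), Target("source/foo/b.d")]);
const barLib = Target("bar.a", "dmd -lib -of$out $in",
                      [Target("source/bar/c.d")]);
const app    = Target("app",   "dmd -of$out $in",
                      [Target("source/main.d"), fooLib, barLib]);

mixin build!(app);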