April 05, 2015
On Saturday, 4 April 2015 at 19:56:28 UTC, Dicebot wrote:
> On Saturday, 4 April 2015 at 07:44:12 UTC, Atila Neves wrote:
>> On Friday, 3 April 2015 at 19:54:09 UTC, Dicebot wrote:
>>> On Friday, 3 April 2015 at 19:08:58 UTC, weaselcat wrote:
>>>>> I just tried compiling one of my projects. It has a makefile that does separate compilation, and a shell script I use for unit testing which compiles everything in one go. The makefile takes 5.3 seconds, not including linking, since it builds a library. The shell script takes 1.3 seconds, which includes compiling the unit tests and linking as well.
>>>>
>>>> change one file and see which one is faster with an incremental build.
>>>
>>> I don't care if an incremental build is 10x faster as long as the full build stays at ~1 second. However, I do care (and consider it unacceptable) if support for incremental builds makes the full build 10 seconds long.
>>
>> I'm of the opposite opinion. I don't care if full builds take an hour as long as incremental builds are as fast as possible. Why would I keep doing full builds? That's like running git clone multiple times. What for?
>>
>> What's clear is that I need to try Andrei's per-package idea, at least as an option, if not the default. Having a large D codebase to test it on would be nice as well, but I don't know of anything bigger than Phobos.
>
> At work I often switch between a dozen different projects a day, with a small chunk of changes for each. That means incremental builds are never of any value to me.
>
> Even if you consistently work on the same project, it is incredibly rare for a changeset to be contained in a single module. And once there are at least 5 changed modules (including inter-dependencies), the build becomes long enough already.
>
> As for a test codebase - I know that Martin has been testing his GC improvements on Higgs (https://github.com/higgsjs/Higgs); it could be a suitable test subject for you too.

It seems our workflows are very different. Half of the time I make changes to a file that only contains unit tests. That's always self-contained, and doing anything other than recompiling that one file and relinking is going to be slower.

It seems to me that different projects might benefit from different compilation strategies. It might just come down to whether unit tests live alongside production code or in separate files. As mentioned before, in my experience per-module compilation was usually faster, but I'm going to change the default to per-package.
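
To make the comparison concrete, here is what the three strategies look like as plain dmd invocations (a sketch with made-up file names):

    # per module: one object per source file, maximal incrementality
    dmd -c foo/a.d -offoo_a.o
    dmd -c foo/b.d -offoo_b.o
    dmd -c bar/c.d -ofbar_c.o
    dmd foo_a.o foo_b.o bar_c.o -ofapp

    # per package: one invocation (and one object file) per package
    dmd -c foo/a.d foo/b.d -offoo.o
    dmd -c bar/c.d -ofbar.o
    dmd foo.o bar.o -ofapp

    # everything in one go
    dmd foo/a.d foo/b.d bar/c.d -ofapp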

Another cool thing about using reggae to build itself was building the unit-test and production binaries at the same time. I couldn't really do that with dub alone.
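
For the curious, the build description for that is tiny. A minimal sketch using reggae's dub integration helpers (check the reggae README for the exact current API):

    // reggaefile.d
    import reggae;
    alias app = dubDefaultTarget!(); // the production binary from dub.json
    alias ut  = dubTestTarget!();    // the unittest binary
    mixin build!(app, ut);           // both get built in one invocation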
April 05, 2015
On Saturday, 4 April 2015 at 19:59:46 UTC, Dicebot wrote:
> We need solutions that can be reasonably implemented with existing resources, not perfect solutions. Storing IR in object files and using a custom linker is the "correct" approach for WPO, but it is currently unaffordable.

Works for me with the LLVM toolchain.
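
Link-time optimization there is exactly the "IR in object files" scheme: the objects carry LLVM bitcode and the optimizer runs again at link time. A hypothetical invocation, assuming an ldc2 build with -flto support:

    # objects contain LLVM IR; cross-module optimization happens at link time
    ldc2 -O2 -flto=full -c a.d
    ldc2 -O2 -flto=full -c b.d
    ldc2 -O2 -flto=full a.o b.o -ofapp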

> Add the compilation time problems and there seems to be no compelling reason to go that route for now.

A compelling reason is memory consumption and exhaustion.

> I am not aware of any solutions based on coding style.

Not sure what you mean; reliance on attribute hell is a coding style. You can look at any language that has no such problem.
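
By "attribute hell" I mean the litany you end up repeating on every declaration. A contrived sketch:

    struct Stack(T)
    {
        private T[] data;

        // every member repeats the same attribute soup
        bool empty() const @safe pure nothrow @nogc { return data.length == 0; }
        inout(T) top() inout @safe pure nothrow @nogc { return data[$ - 1]; }
    }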
April 05, 2015
On Sunday, 5 April 2015 at 12:17:09 UTC, Kagamin wrote:
> On Saturday, 4 April 2015 at 19:59:46 UTC, Dicebot wrote:
>> We need solutions that can be reasonably implemented with existing resources, not perfect solutions. Storing IR in object files and using a custom linker is the "correct" approach for WPO, but it is currently unaffordable.
>
> Works for me with the LLVM toolchain.

Unless LDC does some D-specific WPO magic I am not aware of, this is not what your original statement was about.

>> I am not aware of any solutions based on coding style.
>
> Not sure what you mean; reliance on attribute hell is a coding style. You can look at any language that has no such problem.

Erm. Either it is a coding style issue or a language issue. Pick one. The only coding style for D I am aware of that deals with attribute hell is "ignore most attributes", which is hardly a solution. Please give a specific example to back up your point.
April 05, 2015
On Sunday, 5 April 2015 at 12:22:15 UTC, Dicebot wrote:
> Unless LDC does some D-specific WPO magic I am not aware of, this is not what your original statement was about.

LLVM does normal WPO, in the sense that compiled code is not opaque.

> Erm. Either it is a coding style issue or a language issue. Pick one. The only coding style for D I am aware of that deals with attribute hell is "ignore most attributes", which is hardly a solution.

The problem can't be solved for coding styles that rely on attribute hell; I only said that the problem depends on the coding style.
April 05, 2015
On 4/4/15 12:56 PM, Dicebot wrote:
>
> Even if you consistently work on the same project, it is incredibly
> rare for a changeset to be contained in a single module. And once
> there are at least 5 changed modules (including inter-dependencies),
> the build becomes long enough already.

That's my experience as well. -- Andrei
April 06, 2015
On Sunday, 5 April 2015 at 00:22:35 UTC, Atila Neves wrote:
> It seems to me that different projects might benefit from different compilation strategies. It might just come down to whether unit tests live alongside production code or in separate files. As mentioned before, in my experience per-module compilation was usually faster, but I'm going to change the default to per-package.

I also want to share my experience in that regard.

When I was writing a vibe.d based application, I used dub as the build system, which compiles everything in one go. My application was just a couple of files, so in practice I was just rebuilding vibe.d every time.

I was developing the application on a desktop with 4 GB of RAM and everything was fine (though I did miss the per-file "progress bar" that ninja/make provide).

But then it was time to deploy the app, and I bought a 1 GB RAM virtual node from Linode. After executing dub, it told me "Out of memory" and exited. And there was nothing I could do about it.

So I took the only option I saw: I switched to CMake (modified to work with D) to get a separate-compilation, ninja-based build, and swore never again.
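
The point of that setup is that each compiler process only ever holds one module's worth of state. Roughly, the generated ninja rules boil down to this (file names made up for illustration):

    # one short-lived dmd process per source file: peak memory is bounded
    # by the most expensive single module, not by the whole program
    dmd -c source/app.d -ofapp.o
    dmd -c deps/vibe/http/server.d -ofvibe_http_server.o
    # ... one rule per module ...
    dmd *.o -ofapp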

I understand the reasoning behind both the separate and the "throw everything in" compilation strategies. And I also understand the appeal of a middle-ground solution (like the per-package one), which is probably the way D will go. But this area seems kind of gray to me (for instance, if I understand it correctly, the per-package solution wouldn't have worked in my case either).

So, personally, I will probably stick to separate compilation, until I see that:

- The pros of "batch" compilation are clear and, ideally, obvious. At the moment it seems to me (and it is just how it seems to me) that faster compilation and attribute inference don't have a significant impact.
- There's a way to fine-tune between "separate" and "throw everything in" compilation if necessary.

Thanks!

April 07, 2015
On Monday, 6 April 2015 at 11:29:20 UTC, Sergei Nosov wrote:
> On Sunday, 5 April 2015 at 00:22:35 UTC, Atila Neves wrote:
>> It seems to me that different projects might benefit from different compilation strategies. It might just come down to whether unit tests live alongside production code or in separate files. As mentioned before, in my experience per-module compilation was usually faster, but I'm going to change the default to per-package.
>
> I also want to share my experience in that regard.
>
> ...

See, the problem with this approach is that you can trivially run out of 1 GB of memory with DMD even when compiling a single module; all it takes is enough compile-time magic. Separate compilation delays the issue but does not actually solve it.
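
A contrived sketch of what I mean (exact numbers vary by compiler version) - one module, no imports, and the compiler still eats hundreds of megabytes, because the CTFE interpreter never frees its intermediate values:

    // evaluated entirely at compile time; every ~= copies the string
    // inside the interpreter, and none of the copies are ever freed
    enum hog = {
        string s;
        foreach (i; 0 .. 10_000)
            s ~= "0123456789";
        return s.length;
    }();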

If any effort is to be put into supporting this scenario (compiling on the server), it is better spent on reducing the compiler's actual memory consumption than on supporting yet another workaround.

Also, you can still achieve a similar memory profile by splitting your project into small enough static libraries, so it is not completely out of the question.
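
Something along these lines (package names made up):

    # build each package as a static library once, link cheaply at the end
    dmd -lib net/*.d  -oflibnet.a
    dmd -lib util/*.d -oflibutil.a
    dmd app.d libnet.a libutil.a -ofapp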
April 07, 2015
On Sunday, 5 April 2015 at 12:50:52 UTC, Kagamin wrote:
> On Sunday, 5 April 2015 at 12:22:15 UTC, Dicebot wrote:
>> Unless LDC does some D-specific WPO magic I am not aware of, this is not what your original statement was about.
>
> LLVM does normal WPO, in the sense that compiled code is not opaque.

And I have never been talking about "normal WPO", only about WPO specific to D semantics.

>> Erm. Either it is a coding style issue or a language issue. Pick one. The only coding style for D I am aware of that deals with attribute hell is "ignore most attributes", which is hardly a solution.
>
> The problem can't be solved for coding styles that rely on attribute hell; I only said that the problem depends on the coding style.

This sentence probably means something, but I was not able to figure it out even after re-reading it several times. A "coding style that relies on attribute hell" - what kind of weird beast is that?
April 07, 2015
On Tuesday, 7 April 2015 at 08:28:08 UTC, Dicebot wrote:
> And I have never been talking about "normal WPO", only about WPO specific to D semantics.

AFAIK, the hypothetical D-specific optimizations were never implemented (such as elision of repeated pure calls and optimizations based on immutable data). But they work at the signature level, so they shouldn't be affected by separate compilation in any way.
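
For example (a sketch; as far as I know no D compiler performs this today):

    int expensive(int x) pure; // body may live in another object file

    void caller()
    {
        // `pure` is part of the signature, so even under separate
        // compilation the compiler is allowed to evaluate the call
        // once and reuse the result
        auto a = expensive(42);
        auto b = expensive(42); // legally foldable into `a`
    }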

> This sentence probably means something, but I was not able to figure it out even after re-reading it several times. A "coding style that relies on attribute hell" - what kind of weird beast is that?

I suppose your own coding style could be an example; you wouldn't be interested in attribute hell otherwise.
April 07, 2015
On Tuesday, 7 April 2015 at 08:25:02 UTC, Dicebot wrote:
> See, the problem with this approach is that you can trivially run out of 1 GB of memory with DMD even when compiling a single module; all it takes is enough compile-time magic. Separate compilation delays the issue but does not actually solve it.

Yeah, I absolutely agree. But at the moment separate compilation is the most "forgiving" strategy: if it doesn't work, nothing else will work either. And since I personally don't consider the (possibly) increased compilation time an issue, it's the solution that works for me.

> If any effort is to be put into supporting this scenario (compiling on the server), it is better spent on reducing the compiler's actual memory consumption than on supporting yet another workaround.

Agreed, too. The whole "forget about frees" approach sounds a little too controversial to me, especially now that I have faced its dark side. So I'm all for improvements in that regard. But it doesn't seem to be recognized as a (high-priority) issue at the moment, so we (the users) have to live with it.

> Also, you can still achieve a similar memory profile by splitting your project into small enough static libraries, so it is not completely out of the question.

As I described, my own project was just a couple of files; building vibe.d was the actual problem. I don't think it is reasonable to expect a library's users to start splitting it into "small enough libraries" when faced with this problem. A more structured approach is needed.
