Any takers for http://d.puremagic.com/issues/show_bug.cgi?id=9673?
March 10, 2013
I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 is a great, relatively confined project of good utility. We've preapproved it; if anyone wants to snatch it, please come forward.

Also, any comments to the design are welcome.


Thanks,

Andrei
March 10, 2013
On Sunday, 10 March 2013 at 04:29:34 UTC, Andrei Alexandrescu wrote:
> I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 is a great, relatively confined project of good utility. We've preapproved it; if anyone wants to snatch it, please come forward.
>
> Also, any comments to the design are welcome.

I've thought about this before. Here are my thoughts:

1. Querying the dependencies of one module, and compiling it, should be done in one go (one dmd execution).

The idea is that if we need a module's dependencies, it is because we have never compiled the module before, or because the module itself or one of its previously-known dependencies has changed.

2. Object files (and their .deps) should be cached independently of the entry point module.

This will allow speeding up incremental compilation of multiple programs that share some source files.
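Both points above can be sketched together: if cached object files are keyed on a content hash of the module plus its known dependencies, a never-compiled or changed module simply misses the cache, and the key is independent of which program (entry point) pulled the module in. A hypothetical Python sketch, not rdmd's actual mechanism; all names are illustrative:

```python
import hashlib
import os

# Hypothetical object cache. The key hashes the module's source plus the
# sources of its known dependencies, so:
#  - a never-compiled or changed module misses the cache (point 1), and
#  - the key is independent of the entry-point module, letting multiple
#    programs share cached objects (point 2).
def cache_key(module_src, dep_srcs):
    h = hashlib.sha1()
    for path in [module_src] + sorted(dep_srcs):
        h.update(path.encode())
        with open(path, "rb") as f:
            h.update(f.read())
    return h.hexdigest()

def cached_object(module_src, dep_srcs, cache_dir, compile_fn):
    """Return the object file path, invoking compile_fn only on a cache miss."""
    os.makedirs(cache_dir, exist_ok=True)
    obj = os.path.join(cache_dir, cache_key(module_src, dep_srcs) + ".o")
    if not os.path.exists(obj):
        compile_fn(module_src, obj)  # e.g. run `dmd -c` here
    return obj
```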
March 10, 2013

On 10.03.2013 05:29, Andrei Alexandrescu wrote:
> I figure http://d.puremagic.com/issues/show_bug.cgi?id=9673 it's a great
> relatively confined project of good utility. We preapproved it, if
> anyone wants to snatch it please come forward.
>
> Also, any comments to the design are welcome.
>
>
> Thanks,
>
> Andrei

In my experience, single-file compilation of medium-sized projects is unacceptably slow. Much slower than what you are used to with similarly sized C++ projects. I think this is because without using .di files, a lot more code has to be analyzed for each compilation unit.

Another problem with single-file compilation is that dependencies cover not only changes to declarations (as in C++) but also changes to the implementation, so the import chain can easily explode. A small change to the implementation of a function can trigger the rebuilding of a lot of other files.

The better option would be to pass all source files that need updating in one invocation of dmd, so it won't get slower than a full rebuild, but this has been plagued by linker errors in the past (undefined and duplicate symbols). If it worked, it could identify the independent groups of files which you currently have to separate into libraries by hand.
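For illustration, the "one invocation" scheme could just collect every outdated module and hand the whole list to a single compiler run, so shared imports are analyzed once rather than once per file. dmd's `-c` and `-od` flags are real; the helper and the notion of "outdated" here are illustrative:

```python
# Build a single dmd command line for all outdated modules (sketch only).
def build_command(outdated, obj_dir="obj"):
    if not outdated:
        return None  # everything is up to date
    return ["dmd", "-c", "-od" + obj_dir] + sorted(outdated)
```

The resulting object files would then be linked in a second step, as usual.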
March 10, 2013
On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
> In my experience single file compilation of medium sized projects is unacceptably slow. Much slower than what you are used to by similar sized C++ projects.

Even when taking advantage of multiple CPU cores?
March 10, 2013

On 10.03.2013 11:32, Vladimir Panteleev wrote:
> On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
>> In my experience single file compilation of medium sized projects is
>> unacceptably slow. Much slower than what you are used to by similar
>> sized C++ projects.
>
> Even when taking advantage of multiple CPU cores?

I don't have support for building on multiple cores, but trying it on visuald itself (48 files) yields

- combined compilation    6s
- single file compilation 1min4s

You'd need a lot of cores to be better off with single file compilation.

These are only the plugin files, not anything in the used libraries (about 300 more files). Using dmd compiled with dmc instead of cl makes these times 17s and 1min39s, respectively.

Almost any change causes a lot of files to be rebuilt (just tried one, took 49s to build).
March 10, 2013
On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
>
>
> On 10.03.2013 11:32, Vladimir Panteleev wrote:
>> On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
>>> In my experience single file compilation of medium sized projects is
>>> unacceptably slow. Much slower than what you are used to by similar
>>> sized C++ projects.
>>
>> Even when taking advantage of multiple CPU cores?
>
> I don't have support for building on multiple cores, but trying it on visuald itself (48 files) yields
>
> - combined compilation    6s
> - single file compilation 1min4s
>
> You'd need a lot of cores to be better off with single file compilation.
>
> These are only the plugin files, not anything in the used libraries (about 300 more files). Using dmd compiled with dmc instead of cl makes these times 17s and 1min39s, respectively.
>
> Almost any change causes a lot of files to be rebuilt (just tried one, took 49s to build).

Do you think it has much to do with Windows having a larger overhead for process creation?

I've run some tests on Linux:

~$ git clone git://github.com/CyberShadow/DFeed.git
~$ cd DFeed
~/DFeed$ git submodule init
~/DFeed$ time rdmd --force --build-only dfeed
real    0m2.290s
user    0m1.960s
sys     0m0.304s
~/DFeed$ dmd -o- -v dfeed.d | grep '^import ' | sed 's/.*(\(.*\))/\1/g' | grep -v '^/' > all.txt
~/DFeed$ time bash -c 'cat all.txt | xargs -n1 dmd -c'
real    0m16.935s
user    0m13.837s
sys     0m2.812s
~/DFeed$ time bash -c 'cat all.txt | xargs -n1 -P8 dmd -c'
real    0m3.703s
user    0m23.005s
sys     0m4.412s

(deprecation messages omitted)

I think 2.2s vs. 3.7s is a pretty good result. This was on a 4-core i7 - results should be even better with the new 8-cores on the horizon.
March 10, 2013
On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
> - combined compilation    6s
> - single file compilation 1min4s
>
> Using dmd compiled with dmc instead of cl makes these times 17s and 1min39s, respectively.

Holy smokes! Are you saying that I can speed up compilation of D programs by almost 3 times just by building DMD with Microsoft's C++ compiler instead of the DigitalMars one?
March 10, 2013

On 10.03.2013 12:54, Vladimir Panteleev wrote:
> On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
>>
>>
>> On 10.03.2013 11:32, Vladimir Panteleev wrote:
>>> On Sunday, 10 March 2013 at 10:27:38 UTC, Rainer Schuetze wrote:
>>>> In my experience single file compilation of medium sized projects is
>>>> unacceptably slow. Much slower than what you are used to by similar
>>>> sized C++ projects.
>>>
>>> Even when taking advantage of multiple CPU cores?
>>
>> I don't have support for building on multiple cores, but trying it on
>> visuald itself (48 files) yields
>>
>> - combined compilation    6s
>> - single file compilation 1min4s
>>
>> You'd need a lot of cores to be better off with single file compilation.
>>
>> These are only the plugin files, not anything in the used libraries
>> (about 300 more files). Using dmd compiled with dmc instead of cl
> makes these times 17s and 1min39s, respectively.
>>
>> Almost any change causes a lot of files to be rebuilt (just tried one,
>> took 49s to build).
>
> Do you think it has much to do with that Windows has a larger overhead
> for process creation?

I doubt that causes a significant part of it. I think it's related to some files importing the translated Windows SDK and VS SDK header files (about 8 MB of declarations), and these get imported (indirectly) by almost every other file.

>
> I've ran some tests on Linux:
>
> ~$ git clone git://github.com/CyberShadow/DFeed.git
> ~$ cd DFeed
> ~/DFeed$ git submodule init
> ~/DFeed$ time rdmd --force --build-only dfeed
> real    0m2.290s
> user    0m1.960s
> sys     0m0.304s
> ~/DFeed$ dmd -o- -v dfeed.d | grep '^import ' | sed 's/.*(\(.*\))/\1/g'
> | grep -v '^/' > all.txt
> ~/DFeed$ time bash -c 'cat all.txt | xargs -n1 dmd -c'
> real    0m16.935s
> user    0m13.837s
> sys     0m2.812s
> ~/DFeed$ time bash -c 'cat all.txt | xargs -n1 -P8 dmd -c'
> real    0m3.703s
> user    0m23.005s
> sys     0m4.412s
>
> (deprecation messages omitted)
>
> I think 2.2s vs. 3.7s is a pretty good result. This was on a 4-core i7 -
> results should be even better with the new 8-cores on the horizon.

Looks pretty OK, but considering the number of modules in DFeed (I count about 24) and that they are not very large, that makes the compilation speed about 1 second per module. It will only be faster if the number of modules to compile does not exceed twice the number of available cores. I think it does not scale well as the number of modules increases.
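The scaling argument can be made concrete with a crude model: assume each single-file compilation costs roughly the same and parallelizes perfectly (both optimistic assumptions), so wall time is the number of sequential "waves" of jobs times the per-module cost. This is a back-of-the-envelope sketch, not a measurement:

```python
import math

# Idealized wall time for per-file compilation: ceil(n / cores) waves of
# equally expensive jobs. Real builds parallelize worse than this.
def per_file_wall_time(n_modules, cores, secs_per_module=1.0):
    return math.ceil(n_modules / cores) * secs_per_module
```

Whatever the per-module constant turns out to be, wall time grows linearly in the module count once it exceeds the core count, while a combined build amortizes the analysis of shared imports across all modules.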
March 10, 2013

On 10.03.2013 13:11, Vladimir Panteleev wrote:
> On Sunday, 10 March 2013 at 11:25:13 UTC, Rainer Schuetze wrote:
>> - combined compilation    6s
>> - single file compilation 1min4s
>>
>> Using dmd compiled with dmc instead of cl makes these times 17s and
> 1min39s, respectively.
>
> Holy smokes! Are you saying that I can speed up compilation of D
> programs by almost 3 times just by building DMD with Microsoft's C++
> compiler instead of the DigitalMars one?

My usual estimate is about twice as fast, but it depends on what you compile. It doesn't have a huge effect on running the test suite; my guess is that the runtime initialization of the MS build is slightly slower than that of the dmc build, and there is a large number of small files to compile there.
Also, it's quite difficult to get accurate and reproducible benchmark numbers these days, with (mobile) processors continuously changing their performance.
March 10, 2013
On Sunday, 10 March 2013 at 13:35:34 UTC, Rainer Schuetze wrote:
> Looks pretty ok, but considering the number of modules in dfeed (I count about 24) and them being not very large, that makes compilation speed for each module about 1 second. It will only be faster if the number of modules to compile does not exceed twice the number of cores available.

~/DFeed$ cat all.txt | wc -l
62

> I think it does not scale well with increasing numbers of modules.

Why? Wouldn't it scale linearly? Or do you mean due to the increased number of graph edges as the number of graph nodes grows?

Anyway, the programmer can take steps to lessen inter-module dependencies and thus reduce incremental build times. That's not an option when compiling everything at once, unless you split the code into libraries manually.
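One way to see why lessening dependencies helps: given a map from each module to the modules it imports, an edit dirties every module that transitively imports the changed one, so fewer import edges means fewer rebuilds. A hypothetical sketch (module names are made up):

```python
# Find all modules needing a rebuild after `changed` is edited, by walking
# the import graph in reverse (who imports whom), transitively.
def modules_to_rebuild(imports, changed):
    """imports: {module: set of modules it imports}; changed: edited module."""
    importers = {m: set() for m in imports}
    for m, deps in imports.items():
        for d in deps:
            importers.setdefault(d, set()).add(m)
    dirty, stack = {changed}, [changed]
    while stack:
        for m in importers.get(stack.pop(), ()):
            if m not in dirty:
                dirty.add(m)
                stack.append(m)
    return dirty
```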