October 19, 2016  Comparing compilation time of random code in C++, D, Go, Pascal and Rust

This was posted on twitter a while ago:

Comparing compilation time of random code in C++, D, Go, Pascal and Rust
http://imgur.com/a/jQUav

D was doing well, but in the larger examples the D compiler crashed: "Error: more than 32767 symbols in object file".
October 19, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Gary Willoughby

On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby wrote:
> crashed: "Error: more than 32767 symbols in object file".
Will that many symbols ever happen in real applications?
Anyway, nice!
October 19, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Gary Willoughby

On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby wrote:

> D was doing well but in the larger examples the D compiler crashed: "Error: more than 32767 symbols in object file".

A related bug: https://issues.dlang.org/show_bug.cgi?id=14315
October 20, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Gary Willoughby

On 10/19/2016 10:05 AM, Gary Willoughby wrote:
> D was doing well but in the larger examples the D compiler crashed: "Error: more
> than 32767 symbols in object file".
The article didn't say it crashed.

That message only occurs for Win32 object files - it's a limitation of the OMF file format. We could change the object file format, but:

1. that means changing optlink, too, which is a more formidable task
2. the source file was a machine-generated, contrived one with 100,000 functions in it - not terribly likely to happen in a real case
3. I don't think Win32 has much of a future, so it is unlikely to be worth the investment
October 20, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Walter Bright

On Thursday, 20 October 2016 at 08:19:21 UTC, Walter Bright wrote:

could you give facts that on linux it is ok?
October 20, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to eugene

On 10/20/2016 9:20 AM, eugene wrote:
> could you give facts that on linux it is ok?
You can find out by writing a program to generate 100,000 functions and compile the result on linux.
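The experiment Walter suggests can be sketched with a short generator script. This is an illustrative sketch, not anything from the thread: the file name, function shape, and count are arbitrary choices, and it assumes a D compiler such as dmd is available to run against the result.

```python
# Sketch: generate a D source file with 100,000 trivial functions,
# to test whether the symbol-count limit is hit on a given platform.
# File name and function bodies are invented for illustration.
N = 100_000

with open("many_functions.d", "w") as f:
    for i in range(N):
        f.write(f"int func{i}() {{ return {i}; }}\n")
    # A main() that references one function so the file builds as a program.
    f.write("void main() { assert(func0() == 0); }\n")

# Then, from a shell:
#   dmd many_functions.d
```

On Linux, ELF object files have no 16-bit symbol-count limit, so this is expected to compile; on Win32 with OMF output it reproduces the error from the original post.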
October 27, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Gary Willoughby

On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby wrote:

> This was posted on twitter a while ago:
>
> Comparing compilation time of random code in C++, D, Go, Pascal and Rust
>
> http://imgur.com/a/jQUav

Very interesting, thanks for sharing!

From the article:

> Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow.

These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.

(However, then why do C++ standard committee members believe that replacing text-based #includes with C++ modules ("import") will speed up compilation by an order of magnitude?)

Working simultaneously on equally sized C++ projects and D projects, I believe that a "dcache" (using hashes of the AST?) might be useful. The average project build time in my company is lower for C++ projects than for D projects (we're using "ccache g++ -O3" and "gdc -O3").
October 27, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Sebastien Alaiwan

On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan wrote:

> From the article:
>> Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow.
>
> These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.

See https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html

I also have a working dcache implementation in LDC but it still needs some polishing.

-Johan
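The caching approach described in the linked LDC post keys object files on a hash of the compiler's intermediate output, so an unchanged module skips code generation entirely. A minimal sketch of that idea, with an invented cache layout and a stand-in `codegen` callback in place of a real compiler backend:

```python
import hashlib
import os
import shutil

CACHE_DIR = "objcache"  # hypothetical cache location, for illustration only

def compile_cached(ir_bytes: bytes, out_path: str, codegen) -> bool:
    """Reuse a cached object file keyed by a hash of the IR.

    Returns True on a cache hit, False when codegen actually ran.
    """
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha256(ir_bytes).hexdigest()
    cached = os.path.join(CACHE_DIR, key + ".o")
    if os.path.exists(cached):
        shutil.copyfile(cached, out_path)   # hit: skip code generation
        return True
    codegen(ir_bytes, out_path)             # miss: run the expensive step
    shutil.copyfile(out_path, cached)       # populate the cache
    return False

# Stand-in for the expensive optimization + code generation phase.
def fake_codegen(ir_bytes, out_path):
    with open(out_path, "wb") as f:
        f.write(b"OBJ:" + ir_bytes)

compile_cached(b"module a; int f() { return 1; }", "a.o", fake_codegen)  # miss
compile_cached(b"module a; int f() { return 1; }", "a.o", fake_codegen)  # hit
```

Hashing the post-frontend IR rather than the source text is what makes this robust against edits that don't change the generated code (comments, whitespace, unrelated modules).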
October 27, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Sebastien Alaiwan

On 10/27/2016 02:43 AM, Sebastien Alaiwan wrote:

> From the article:
>> Surprise: C++ without optimizations is the fastest! A few other surprises: Rust also seems quite competitive here. D starts out comparatively slow.
>
> These benchmarks seem to support the idea that it's not the parsing which is slow, but the code generation phase. If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.
>
> (However, then why do C++ standard committee members believe that the replacement of text-based #includes with C++ modules ("import") will speed up the compilation by one order of magnitude?)

How many source files are used? If all the functions are always packed into one large source file, or just a small handful, then that would mean the tests are accidentally working around C++'s infamous #include slowdowns.
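The #include amplification being described can be demonstrated with another small generator: C++ compile time scales with the total preprocessed text, so N translation units that each include a common header force the compiler to reparse that header N times, while a single-file benchmark parses it once. A sketch, with invented file names and a deliberately padded header:

```python
# Sketch: generate N translation units that each include one shared header,
# so the header text is re-parsed N times; a single-file layout avoids that.
# File names and contents are invented for illustration.
N = 50

with open("common.h", "w") as f:
    f.write("#pragma once\n")
    f.write("int helper(int x);\n" * 200)  # stand-in for a heavy header

for i in range(N):
    with open(f"unit{i}.cpp", "w") as f:
        f.write('#include "common.h"\n')
        f.write(f"int func{i}() {{ return {i}; }}\n")

# Then compare, from a shell:
#   time g++ -c unit0.cpp unit1.cpp ...   (header parsed N times)
#   cat unit*.cpp > all.cpp && time g++ -c all.cpp
```

If the original benchmark used one generated file per language, C++ never pays this per-file header cost, which would flatter its numbers relative to real multi-file projects.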
October 28, 2016  Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

Posted in reply to Johan Engelen

On Thursday, 27 October 2016 at 12:11:09 UTC, Johan Engelen wrote:
> On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan
>> If code generation/optimization is the bottleneck, a "ccache-for-D" ("dcache"?) tool might be very beneficial.
>
> See https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html
>
> I also have a working dcache implementation in LDC but it still needs some polishing.
Hashing the LLVM bitcode ... how come I didn't think about this before!
Unless someone manages to do the same thing with gdc + GIMPLE, this could very well be the "killer" feature of LDC ...
Having the fastest compiler on earth still doesn't provide scalability; interestingly, when I build a full LLVM+LDC toolchain, the longest step is the compilation of the dmd frontend. It's the only part that is:

1) not cached: all the other source files from LLVM are ccache'd.
2) sequential: my CPU load drops to 12.5%, although it's near 100% for LLVM.
Copyright © 1999-2021 by the D Language Foundation