July 25, 2013 Article: Increasing the D Compiler Speed by Over 75%

Vote up!
http://www.reddit.com/r/programming/comments/1j1i30/increasing_the_d_compiler_speed_by_over_75/
https://news.ycombinator.com/item?id=6103883

Andrei

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Andrei Alexandrescu

Andrei Alexandrescu:
> http://www.reddit.com/r/programming/comments/1j1i30/increasing_the_d_compiler_speed_by_over_75/
Where is the 75% value coming from?
Regarding the hashing, maybe a different hashing scheme, like Python dict hashing, could be better.
Regarding Don's problems with memory used by dmd, is it a good idea to add a compilation switch like "-cgc" that switches on a garbage collector for the compiler (disabled by default)?
Bye,
bearophile

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to bearophile

On 7/25/2013 11:21 AM, bearophile wrote:
> Andrei Alexandrescu:
>
>> http://www.reddit.com/r/programming/comments/1j1i30/increasing_the_d_compiler_speed_by_over_75/
>
> Where is the 75% value coming from?

Not sure what you mean. Numbers at the end of the article.

> Regarding the hashing, maybe a different hashing scheme, like Python dict
> hashing, could be better.

It's not the hashing that's slow. It's the lookup that is.

> Regarding Don's problems with memory used by dmd, is it a good idea to add a
> compilation switch like "-cgc" that switches on a garbage collector for the
> compiler (disabled by default)?

It might be.

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Andrei Alexandrescu

The biggest compile time killer in my experience is actually running out of memory and hitting the swap.

My work app used to compile in about 8 seconds (on Linux btw). Then we added more and more stuff and it went up to about 20 seconds. It uses a fair amount of CTFE and template stuff, looping over almost every function in the program to generate code.

Annoying... but then we added a little bit more and it skyrocketed to about 90 seconds to compile! That's unbearable.

The cause was the build machine had run out of physical memory at the peak of the compile process, and started furiously swapping to disk.

I "fixed" it by convincing them to buy more RAM, and now we're back to ~15 second compiles, but at some point the compiler will have to address this. I know donc has a dmd fork where he's doing a lot of work, completely re-engineering CTFE, so it is coming, but that will probably be the next speed increase, and we could be looking at as much as 5x in cases like mine!

BTW apparently a dmd built with Microsoft's compiler does the nasty in about 11 seconds rather than 30 for the std.algorithm build - comparable to the linux version with gcc. I really like dmc too, but a 3x speed increase is really significant for something that's relatively easy to do.

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Adam D. Ruppe

On 7/25/2013 11:30 AM, Adam D. Ruppe wrote:
> The biggest compile time killer in my experience is actually running out of
> memory and hitting the swap.
>
> My work app used to compile in about 8 seconds (on Linux btw). Then we added
> more and more stuff and it went up to about 20 seconds. It uses a fair amount of
> CTFE and template stuff, looping over almost every function in the program to
> generate code.
>
> Annoying... but then we added a little bit more and it skyrocketed to about 90
> seconds to compile! That's unbearable.
>
> The cause was the build machine had run out of physical memory at the peak of
> the compile process, and started furiously swapping to disk.
>
> I "fixed" it by convincing them to buy more RAM, and now we're back to ~15
> second compiles, but at some point the compiler will have to address this. I
> know donc has a dmd fork where he's doing a lot of work, completely
> re-engineering CTFE, so it is coming, but that will probably be the next speed
> increase, and we could be looking at as much as 5x in cases like mine!

I know the memory consumption is a problem, but it's much harder to fix.

> BTW apparently a dmd built with Microsoft's compiler does the nasty in about 11
> seconds rather than 30 for the std.algorithm build - comparable to the linux
> version with gcc. I really like dmc too, but a 3x speed increase is really
> significant for something that's relatively easy to do.

An interesting project would be to research the specific cause of the difference.

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Walter Bright

On Thursday, 25 July 2013 at 19:07:02 UTC, Walter Bright wrote:
> On 7/25/2013 11:30 AM, Adam D. Ruppe wrote:
>> The biggest compile time killer in my experience is actually running out of
>> memory and hitting the swap.
>>
>> My work app used to compile in about 8 seconds (on Linux btw). Then we added
>> more and more stuff and it went up to about 20 seconds. It uses a fair amount of
>> CTFE and template stuff, looping over almost every function in the program to
>> generate code.
>>
>> Annoying... but then we added a little bit more and it skyrocketed to about 90
>> seconds to compile! That's unbearable.
>>
>> The cause was the build machine had run out of physical memory at the peak of
>> the compile process, and started furiously swapping to disk.
>>
>> I "fixed" it by convincing them to buy more RAM, and now we're back to ~15
>> second compiles, but at some point the compiler will have to address this. I
>> know donc has a dmd fork where he's doing a lot of work, completely
>> re-engineering CTFE, so it is coming, but that will probably be the next speed
>> increase, and we could be looking at as much as 5x in cases like mine!
>
> I know the memory consumption is a problem, but it's much harder to fix.
Obstacks are a popular approach in compilers. Allocation is a simple pointer bump, so it should maintain the new speed. Deallocation can be done blockwise. It works great if you know the lifetime of the objects.
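
A minimal sketch of the obstack/arena idea qznc describes, written in D purely for illustration (dmd itself was C++ at the time; the Arena, alloc, and releaseAll names here are hypothetical, not any compiler's real API): allocation is a pointer bump into a large block, and deallocation drops whole blocks at once.

```d
import core.stdc.stdlib : malloc, free;

// Hypothetical bump-pointer ("obstack"-style) arena. Not dmd's allocator;
// just a sketch of the technique.
struct Arena
{
    enum blockSize = 1 << 20;        // 1 MiB per block
    ubyte*[] blocks;                 // every block obtained so far
    ubyte* cur;                      // bump pointer into the current block
    size_t left;                     // bytes remaining in the current block

    void* alloc(size_t n)
    {
        n = (n + 15) & ~cast(size_t) 15;   // keep results 16-byte aligned
        assert(n <= blockSize, "oversized requests are out of scope for this sketch");
        if (n > left)                      // current block exhausted: grab a new one
        {
            auto b = cast(ubyte*) malloc(blockSize);
            blocks ~= b;
            cur = b;
            left = blockSize;
        }
        auto p = cur;                      // the allocation itself is just a pointer bump
        cur += n;
        left -= n;
        return p;
    }

    // Deallocation is blockwise: everything allocated from the arena
    // dies together, so there is no per-object bookkeeping.
    void releaseAll()
    {
        foreach (b; blocks)
            free(b);
        blocks = null;
        cur = null;
        left = 0;
    }
}

unittest
{
    Arena a;
    auto x = cast(int*) a.alloc(int.sizeof);
    *x = 42;
    a.releaseAll();   // one sweep frees every object the arena handed out
}
```

The catch is exactly the lifetime assumption: nothing allocated this way can be freed individually, so the scheme only pays off when whole groups of objects die together.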

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to qznc

On 7/25/2013 12:26 PM, qznc wrote:
> if you know the lifetime of the objects.
Aye, there's the rub!
And woe to you if you get that wrong.

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Walter Bright

Walter Bright:
> It's not the hashing that's slow. It's the lookup that is.
By "different hashing scheme" I meant different strategies in resolving hash collisions, likes double hashing, internal hashing, cuckoo hashing, and so on and on. Maybe one of such alternative strategies is more fit for the needs of dmd compilation. (I think that currently the Python dicts are using a hashing strategy different from the built-in dictionaries of D. The Python style of hashing was implemented in D some months ago, but I don't remember what happened to that project later).
Bye,
bearophile
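
For illustration only, here is a sketch in D of one of the collision-resolution strategies bearophile names: double hashing in an open-addressed table. It assumes a power-of-two table size and uses 0 as the "empty slot" marker; it is neither dmd's symbol table nor Python's dict, just the shape of the idea.

```d
// Open addressing with double hashing: when a slot is taken, step through
// the table by a second, key-dependent stride instead of scanning linearly.
// Returns a pointer to the matching or first empty slot, or null if the
// table is full and the key is absent. Assumes keys.length is a power of
// two and that 0 marks an empty slot.
size_t* probe(size_t[] keys, size_t key)
{
    immutable mask = keys.length - 1;
    size_t slot = key & mask;                    // primary hash picks the start slot
    immutable stride = ((key >> 16) & mask) | 1; // secondary hash picks the step;
                                                 // forcing it odd keeps it coprime
                                                 // with the power-of-two table size
    foreach (_; 0 .. keys.length)
    {
        if (keys[slot] == key || keys[slot] == 0)
            return &keys[slot];
        slot = (slot + stride) & mask;           // next slot in this key's sequence
    }
    return null;
}

unittest
{
    auto table = new size_t[16];
    auto p = probe(table, 12345);
    assert(p !is null && *p == 0);               // lands on an empty slot
    *p = 12345;                                  // "insert"
    assert(probe(table, 12345) is p);            // lookup finds the same slot again
}
```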

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to Walter Bright

On Thu, Jul 25, 2013 at 2:25 PM, Walter Bright <newshound2@digitalmars.com> wrote:
> On 7/25/2013 11:21 AM, bearophile wrote:
>> Andrei Alexandrescu:
>>
>>> http://www.reddit.com/r/programming/comments/1j1i30/increasing_the_d_compiler_speed_by_over_75/
>>
>> Where is the 75% value coming from?
>
> Not sure what you mean. Numbers at the end of the article.

I am also confused by the numbers. What I see at the end of the article is "21.56 seconds, and the latest development version does it in 12.19", which is really a 43% improvement. (Which is really great too.)

>> Regarding the hashing, maybe a different hashing scheme, like Python dict
>> hashing, could be better.
>
> It's not the hashing that's slow. It's the lookup that is.
>
>> Regarding Don's problems with memory used by dmd, is it a good idea to add a
>> compilation switch like "-cgc" that switches on a garbage collector for the
>> compiler (disabled by default)?
>
> It might be.

July 25, 2013 Re: Article: Increasing the D Compiler Speed by Over 75%
Posted in reply to bearophile

On 7/25/2013 1:00 PM, bearophile wrote:
> Walter Bright:
>
>> It's not the hashing that's slow. It's the lookup that is.
>
> By "different hashing scheme" I meant different strategies in resolving hash
> collisions, likes double hashing, internal hashing, cuckoo hashing, and so on
> and on. Maybe one of such alternative strategies is more fit for the needs of
> dmd compilation. (I think that currently the Python dicts are using a hashing
> strategy different from the built-in dictionaries of D. The Python style of
> hashing was implemented in D some months ago, but I don't remember what happened
> to that project later).
Hash collisions are not the problem - I sized the hash bucket array to make it fairly sparse. Neither is the hash algorithm.
The slowness was in the frackin' "convert the hash to an index into the bucket array", which is a modulus operation.
Also, computing the hash is done exactly once, in the lexer. Thereafter, every identifier is known only by its handle, which is (not coincidentally) the pointer to the identifier and is by its very nature unique.
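
A small sketch in D of the two points above (dmd was C++ at the time, and none of these names are dmd's). Reducing a hash to a bucket index with % requires an integer division; a power-of-two bucket count would reduce it to a single mask, shown here only as a common alternative rather than as what the article's fix actually did. And once the lexer has interned an identifier, its handle is the pointer itself, so later comparisons are pointer identity with no re-hashing.

```d
// Hash -> bucket index. The modulus needs an integer divide; with a
// power-of-two bucket count it collapses to a mask.
size_t bucketByMod(size_t hash, size_t nBuckets)
{
    return hash % nBuckets;                  // general, but relatively expensive
}

size_t bucketByMask(size_t hash, size_t nBuckets)
{
    assert((nBuckets & (nBuckets - 1)) == 0, "nBuckets must be a power of two");
    return hash & (nBuckets - 1);            // a single AND
}

// Identifier interning: the hash is computed once, in the lexer. After
// that, the handle is the unique pointer, so equality is identity.
class Identifier
{
    string name;
    this(string n) { name = n; }
}

Identifier[string] internTable;              // hypothetical intern table

Identifier intern(string name)
{
    if (auto found = name in internTable)
        return *found;                       // same spelling => same object
    auto id = new Identifier(name);
    internTable[name] = id;
    return id;
}

unittest
{
    assert(bucketByMod(12345, 64) == bucketByMask(12345, 64));
    auto a = intern("foo");
    auto b = intern("foo");
    assert(a is b);                          // compared by pointer, no re-hashing
}
```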