May 29, 2012
I've been trying to work out why my compile times have gone to hell recently.

I have a lib, it takes 3.5 seconds to compile.
I add one CTFE heavy module, it's not huge, certainly much smaller than the
rest of the app, and it blows out to 18 seconds. I've done some experiments
removing bits and pieces of code, I can isolate the bits that add seconds
to the compile time, but the big offenders are one-line mixins which use
CTFE fairly aggressively to generate the strings they mix in.

Can anyone comment on CTFE as implemented? Why is it so slow? It's
certainly not executing a lot of code. I can imagine executing the same
routine in an interpreted language like lua would take milliseconds or
less, not multiple seconds.
What are the bottlenecks? Is there any way to improve it?
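For reference, the kind of pattern I mean looks something like this (a minimal, hypothetical sketch; the function and field names are made up, not taken from my actual code):

```d
// CTFE function that builds a code string at compile time.
string makeFields(string[] names)
{
    string code;
    foreach (n; names)
        code ~= "float " ~ n ~ ";\n"; // repeated string concatenation in CTFE
    return code;
}

struct Vec3
{
    // A one-line mixin: the compiler evaluates makeFields via CTFE
    // and then parses the resulting string as declarations.
    mixin(makeFields(["x", "y", "z"]));
}
```

Each such mixin triggers a full CTFE evaluation of the generator function, even though the amount of work it's doing looks trivial.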


May 29, 2012
On 2012-05-29 10:25:54 +0000, Manu <turkeyman@gmail.com> said:

> What are the bottlenecks? Is there any way to improve it?

The answer to those questions is usually found by profiling. Asking people what they think is slow is almost certain to give you wrong answers.

-- 
Michel Fortin
michel.fortin@michelf.com
http://michelf.com/

May 29, 2012
On 29 May 2012 14:28, Michel Fortin <michel.fortin@michelf.com> wrote:

> On 2012-05-29 10:25:54 +0000, Manu <turkeyman@gmail.com> said:
>
>> What are the bottlenecks? Is there any way to improve it?
> The answer to those questions is usually found by profiling. Asking people what they think is slow is almost certain to give you wrong answers.


I'm not in a hurry. I'm mainly asking out of curiosity, and wondering if others are thinking the same thing, or if there are any efforts underway to improve it.


May 29, 2012
On 2012-05-29 12:25, Manu wrote:
> I've been trying to work out why my compile times have gone to hell
> recently.
>
> I have a lib, it takes 3.5 seconds to compile.
> I add one CTFE heavy module, it's not huge, certainly much smaller than
> the rest of the app, and it blows out to 18 seconds. I've done some
> experiments removing bits and pieces of code, I can isolate the bits
> that add seconds to the compile time, but the big offenders are one-line
> mixins which use CTFE fairly aggressively to generate the strings they
> mix in.
>
> Can anyone comment on CTFE as implemented? Why is it so slow? It's
> certainly not executing a lot of code. I can imagine executing the same
> routine in an interpreted language like lua would take milliseconds or
> less, not multiple seconds.
> What are the bottlenecks? Is there any way to improve it?

Many small string mixins are slow, even if they're string literals and not generated. If possible, it's better to use one huge string mixin.
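To illustrate the difference (a hypothetical sketch, not Derelict's actual code):

```d
// Hypothetical generator for a single declaration.
string decl(string name)
{
    return "void function() " ~ name ~ ";\n";
}

// Slow: one mixin per declaration, each a separate CTFE + parse pass.
// mixin(decl("glBegin"));
// mixin(decl("glEnd"));

// Faster: concatenate everything in CTFE, then mix in a single string.
string allDecls(string[] names)
{
    string code;
    foreach (n; names)
        code ~= decl(n);
    return code;
}

struct GLFunctions
{
    mixin(allDecls(["glBegin", "glEnd", "glVertex3f"]));
}
```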

-- 
/Jacob Carlborg
May 29, 2012
On 29 May 2012 15:10, Jacob Carlborg <doob@me.com> wrote:

> On 2012-05-29 12:25, Manu wrote:
>
>> I've been trying to work out why my compile times have gone to hell recently.
>>
>> I have a lib, it takes 3.5 seconds to compile.
>> I add one CTFE heavy module, it's not huge, certainly much smaller than
>> the rest of the app, and it blows out to 18 seconds. I've done some
>> experiments removing bits and pieces of code, I can isolate the bits
>> that add seconds to the compile time, but the big offenders are one-line
>> mixins which use CTFE fairly aggressively to generate the strings they
>> mix in.
>>
>> Can anyone comment on CTFE as implemented? Why is it so slow? It's
>> certainly not executing a lot of code. I can imagine executing the same
>> routine in an interpreted language like lua would take milliseconds or
>> less, not multiple seconds.
>> What are the bottlenecks? Is there any way to improve it?
>>
>
> Many small string mixins are slow, even if they're string literals and not generated. If possible, it's better to use one huge string mixin.


That's interesting. I can probably give that a shot.
So you think that's a bigger cost than the CTFE code that generates the
strings?


May 29, 2012
On 29/05/12 12:25, Manu wrote:
> I've been trying to work out why my compile times have gone to hell
> recently.
>
> I have a lib, it takes 3.5 seconds to compile.
> I add one CTFE heavy module, it's not huge, certainly much smaller than
> the rest of the app, and it blows out to 18 seconds. I've done some
> experiments removing bits and pieces of code, I can isolate the bits
> that add seconds to the compile time, but the big offenders are one-line
> mixins which use CTFE fairly aggressively to generate the strings they
> mix in.

>
> Can anyone comment on CTFE as implemented? Why is it so slow?

You really don't want to know. What it's actually doing is horrific. Bug 6498.

The reason why it's still like that is that CTFE bugs have kept cropping up (mostly related to pointers and especially AAs), which have prevented me from doing anything on the performance issue.

> It's
> certainly not executing a lot of code. I can imagine executing the same
> routine in an interpreted language like lua would take milliseconds or
> less, not multiple seconds.
> What are the bottlenecks?

It was originally based on the const-folding code used by the optimizer, so most of the code was written with totally different goals (which didn't include performance).

> Is there any way to improve it?

Oh yeah. Orders of magnitude, easily. The slowness is not in any way inherent to CTFE. The experience will be completely different, once I have some time to work on it -- I know exactly how to do it.


May 29, 2012
On 2012-05-29 14:37, Manu wrote:

> That's interesting. I can probably give that a shot.
> So you think that's a bigger cost than the CTFE code that generates the
> strings?

I don't know. I just did a test with Derelict that needed to be compatible with D1 and D2 and therefore used string mixins for things like __gshared.

For example:

http://www.dsource.org/projects/derelict/browser/branches/Derelict2/DerelictGL/derelict/opengl/glfuncs.d#L699

Putting all those declarations in their own string mixins makes a difference.

-- 
/Jacob Carlborg
May 29, 2012
On 29 May 2012 15:52, Don Clugston <dac@nospam.com> wrote:

> On 29/05/12 12:25, Manu wrote:
>
>> Is there any way to improve it?
>
>
> Oh yeah. Orders of magnitude, easily. The slowness is not in any way inherent to CTFE. The experience will be completely different, once I have some time to work on it -- I know exactly how to do it.
>

Alright, well I've got a case of beer with your name on it if you can pull it off! ;)


May 29, 2012
>
>
> Alright, well I've got a case of beer with your name on it if you can pull it off! ;)
>

+1. I too am waiting for CTFE improvements. I am working on a DSL, and with the present limitations it is impractically slow and memory-hungry while compiling.


May 29, 2012
On Tue, May 29, 2012 at 2:52 PM, Don Clugston <dac@nospam.com> wrote:

>> Is there any way to improve it?
>
>
> Oh yeah. Orders of magnitude, easily.

!

>The slowness is not in any way
> inherent to CTFE. The experience will be completely different, once I have some time to work on it -- I know exactly how to do it.

Did 2.058 or 2.059 see any new code for CTFE? Like the OP, I've the impression CTFE/mixins suddenly became far slower. I'm not complaining, I understand it's a difficult part of DMD, but I wondered if what I see is real or imaginary.