December 05, 2016
On Monday, 5 December 2016 at 08:07:11 UTC, ketmar wrote:
> On Monday, 5 December 2016 at 07:55:32 UTC, deadalnix wrote:
>> On Monday, 5 December 2016 at 04:26:35 UTC, Stefan Koch wrote:
>>> I just improved the handling of void initializations.
>>> Now the code is less pessimistic and will allow them if they are assigned to before use.
>>> However using void variables at ctfe will not result in any performance wins.
>>
>> Void initialization are allowed at CTFE ?
>
> not now, but it looks like needless limitation. any void initialization can be converted to "fill the things with zeroes" in CTFE engine.

On the contrary. If something is NOT initialized, it uses its default (e.g. zeroes), but if it is =void it should be an error to use it before any assignment.
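
A minimal sketch of the distinction, assuming the engine enforces the rule above (names are mine; the second assert is expected to be rejected, not to pass):

int useDefault()
{
    int a;        // default-initialized to 0
    return a;     // fine: reads the default value
}

int useVoid()
{
    int b = void; // explicitly uninitialized
    return b;     // read before any assignment: should be a CTFE error
}

static assert(useDefault() == 0); // OK
// static assert(useVoid() == 0); // expected to be rejected at CTFE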

December 05, 2016
On Monday, 5 December 2016 at 07:55:32 UTC, deadalnix wrote:
> On Monday, 5 December 2016 at 04:26:35 UTC, Stefan Koch wrote:
>> I just improved the handling of void initializations.
>> Now the code is less pessimistic and will allow them if they are assigned to before use.
>> However using void variables at ctfe will not result in any performance wins.
>
> Void initialization are allowed at CTFE ?

The following code will compile just fine:

uint fn(uint a)
{
    int b = void;   // explicitly uninitialized
    if (a == 2)
    {
        b = 1;
    }

    return b; // only fine if a was 2
}

static assert(fn(2) == 1);

December 05, 2016
On Monday, 5 December 2016 at 04:26:35 UTC, Stefan Koch wrote:
> I just improved the handling of void initializations.
> Now the code is less pessimistic and will allow them if they are assigned to before use.
> However using void variables at ctfe will not result in any performance wins.

Oh, it's broken.
It does not detect the ... maybe-not-void case (where a variable is only assigned on some paths).

I found out why certain switches are broken.
Essentially this is fixable by putting a guarding jump around the braces that hold the cases (see the sketch below).
That is also the reason why code in a switch that does not belong to any case is unreachable :)
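
Purely as an illustration of that control flow (this is not the generator's actual output, just the shape of the guarding jump written with explicit gotos):

uint lowered(uint x)
{
    uint r;
    if (x == 1) goto case1;
    goto defaultCase;  // the guarding jump around the case block
    // a statement placed here, belonging to no case, could never run
case1:
    r = 10;
    goto end;
defaultCase:
    r = 0;
end:
    return r;
}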


December 05, 2016
On Monday, 5 December 2016 at 07:48:31 UTC, Stefan Koch wrote:
>
> I found an easily fixable performance problem inside the byte-code generator,
> causing it to allocate 800K per discovery of a new type.
> Reducing this will probably make IR generation 10 times faster in the average case.
> Clearing 800K takes quite some time.

I just fixed this.
As predicted, the time taken to generate byte-code is now greatly reduced.
This has a really huge impact.
It looks like the performance wins brought by the new ctfe engine might be higher than I predicted.
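
For illustration only (the real generator's data structures differ): the kind of change this amounts to is reusing one scratch buffer instead of allocating and clearing a fresh 800K block for every newly discovered type.

struct TypeScratch
{
    ubyte[] buffer;   // grown once, reused for every type

    // hypothetical helper: hand out a zeroed slice without a fresh allocation
    ubyte[] acquire(size_t n)
    {
        if (buffer.length < n)
            buffer.length = n;  // grow only when needed
        buffer[0 .. n] = 0;     // clear only what this type actually uses
        return buffer[0 .. n];
    }
}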
December 05, 2016
On 12/05/2016 11:28 AM, Stefan Koch wrote:
> It looks like the performance wins brought by the new ctfe engine might
> be higher than I predicted.

That's awesome!! -- Andrei
December 05, 2016
On Monday, 5 December 2016 at 16:47:33 UTC, Andrei Alexandrescu wrote:
> On 12/05/2016 11:28 AM, Stefan Koch wrote:
>> It looks like the performance wins brought by the new ctfe engine might
>> be higher than I predicted.
>
> That's awesome!! -- Andrei

After discovering this performance bottleneck, I have changed my mind about how I will tackle concatenation of arrays and strings in particular.
All concat operations should be done as intrinsic calls, because tight cooperation with the CTFE-Memory-Management-Subsystem is needed.
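
Purely as an illustration of why an intrinsic helps (the name is invented, not the engine's API): the interpreter can route the result allocation through its own heap, so the memory-management subsystem always knows where the new array lives.

// hypothetical intrinsic the interpreter would call for `a ~ b`
uint[] ctfeConcat(const uint[] a, const uint[] b)
{
    auto result = new uint[](a.length + b.length); // CTFE-heap allocation
    result[0 .. a.length] = a[];
    result[a.length .. $] = b[];
    return result;
}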

December 06, 2016
On Monday, 5 December 2016 at 18:47:13 UTC, Stefan Koch wrote:
> On Monday, 5 December 2016 at 16:47:33 UTC, Andrei Alexandrescu wrote:
>> On 12/05/2016 11:28 AM, Stefan Koch wrote:
>>> It looks like the performance wins brought by the new ctfe engine might
>>> be higher than I predicted.
>>
>> That's awesome!! -- Andrei
>
> After discovering this performance bottleneck I have now changed my mind about how I will tackle concatenation of arrays and strings in particular.
> All concat operations should be done as intrinsic calls.
> Because tight cooperation with the CTFE-Memory-Management-Subsystem is needed.

I just implemented a bytecode cache; however, the bytecode generation is now so fast that we only save a couple of microseconds by using the cache.
It is not noticeable at all :P.
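
For the curious, a minimal sketch of what such a cache boils down to (types and names are invented for illustration; the real engine keys on its own function representation):

struct ByteCodeCache
{
    immutable(uint)[][string] cache;  // generated bytecode, keyed by mangled name

    immutable(uint)[] getOrGenerate(string mangledName,
                                    immutable(uint)[] delegate() generate)
    {
        if (auto hit = mangledName in cache)
            return *hit;              // reuse previously generated bytecode
        auto bc = generate();
        cache[mangledName] = bc;
        return bc;
    }
}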

December 06, 2016
On 12/06/2016 11:27 AM, Stefan Koch wrote:
> On Monday, 5 December 2016 at 18:47:13 UTC, Stefan Koch wrote:
>> On Monday, 5 December 2016 at 16:47:33 UTC, Andrei Alexandrescu wrote:
>>> On 12/05/2016 11:28 AM, Stefan Koch wrote:
>>>> It looks like the performance wins brought by the new ctfe engine might
>>>> be higher than I predicted.
>>>
>>> That's awesome!! -- Andrei
>>
>> After discovering this performance bottleneck I have now changed my
>> mind about how I will tackle concatenation of arrays and strings in
>> particular.
>> All concat operations should be done as intrinsic calls.
>> Because tight cooperation with the CTFE-Memory-Management-Subsystem is
>> needed.
>
> I just implemented a bytecode cache; however, the bytecode generation is
> now so fast that we only save a couple of microseconds by using the cache.
> It is not noticeable at all :P.

Just give us time :o). -- Andrei

December 06, 2016
On Tuesday, 6 December 2016 at 16:27:38 UTC, Stefan Koch wrote:
> I just implemented a bytecode cache; however, the bytecode generation is now so fast that we only save a couple of microseconds by using the cache.
> It is not noticeable at all :P.

I can't wait to try this out.
December 06, 2016
On Tuesday, 6 December 2016 at 21:40:47 UTC, Nordlöw wrote:
> On Tuesday, 6 December 2016 at 16:27:38 UTC, Stefan Koch wrote:
>> I just implemented a bytecode cache; however, the bytecode generation is now so fast that we only save a couple of microseconds by using the cache.
>> It is not noticeable at all :P.
>
> I can't wait to try this out.

Go ahead.
Many features should already be working correctly.

The important ones still missing are slices and concat.