April 16, 2017
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
> [ ... ]

Hi Guys,

I just fixed default initialization of structs.
So now a larger portion of code will be compiled and executed by newCTFE.

MyStruct my_struct;
will now work; before, it would have triggered a bailout.
NOTE: this will create bogus results if the struct contains complex initializers, i.e. anything other than integers.
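For illustration, here is the kind of code that should now run under newCTFE (a hypothetical example, not taken from the test suite): a struct whose members are plain integers, default-initialized inside a CTFE-evaluated function.

```d
// Hypothetical example: a struct with integer-only members,
// so the new default initialization handles it correctly.
struct Pair
{
    int a;       // default-initialized to 0
    int b = 42;  // explicit integer initializer
}

int readDefaults()
{
    Pair p;              // previously triggered a bailout in newCTFE
    return p.a + p.b;    // 0 + 42
}

// Forces CTFE evaluation of readDefaults.
static assert(readDefaults() == 42);
```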

Complex type support will come after dconf.
April 27, 2017
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
> [ ... ]
Hi Guys,

As you probably already know, some work has been done in the past week to get an x86 JIT rolling.

It is designed to produce very simple code with _any_ optimization at all.

Optimization introduces heavy complexity down the road, even if at first it looks very affordable. My opinion is: "_any_ optimization is too much."

This stance should make it possible to get some _really_ shiny performance numbers for dconf.

Cheers,
Stefan
April 26, 2017
On Thu, Apr 27, 2017 at 02:15:30AM +0000, Stefan Koch via Digitalmars-d wrote:
> On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
> > [ ... ]
> Hi Guys,
> 
> As you already probably know some work has been done in the past week to get an x86 jit rolling.
> 
> It is designed to produce very simple code with _any_ optimization at all.
> 
> Since optimization introduces heavy complexity down the road, even if at first it looks very affordable. My opinion is : "_any_ optimization too much."
> 
> This stance should make it possible to get some _really_ shiny performance numbers for dconf.
[...]

Is it possible at all to reuse any of the backend (in particular the parts of the optimizer that are pertinent), or is the API not conducive to this?


T

-- 
It always amuses me that Windows has a Safe Mode during bootup. Does that mean that Windows is normally unsafe?
April 27, 2017
On Thursday, 27 April 2017 at 03:33:03 UTC, H. S. Teoh wrote:
>
> Is it possible at all to use any of the backend (in particular what parts of the optimizer that are pertinent), or is the API not conducive for this?
>
>
> T
It is of course possible to use dmd's backend, but not very desirable: dmd's backend works on an expression tree, which would be expensive to build from the linear IR newCTFE uses.
dmd's backend is also very hard to debug for anyone who is not Walter.

CTFE in the common case will be fastest if executed without any optimizer interfering.
Modern x86 chips do a very fine job indeed of executing crappy code fast,
which makes it possible to get away with very simple and fast codegen
(where fast means code-generation speed rather than code-execution speed).
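To make the distinction concrete, here is a minimal sketch (names and layout are invented, not newCTFE's actual encoding) of a linear, three-address style IR with a trivial evaluator. A backend can walk such an array in a single linear pass, whereas dmd's backend would first need an expression tree rebuilt from it.

```d
// Hypothetical sketch of a linear IR; not newCTFE's actual encoding.
// Each instruction names a destination register and two operands.
enum Op { LoadImm, Add, Mul }

struct Instr
{
    Op op;
    int dst;
    int a; // source register, or the immediate for LoadImm
    int b;
}

int eval(const Instr[] code)
{
    int[8] regs;
    foreach (i; code)
    {
        final switch (i.op)
        {
            case Op.LoadImm: regs[i.dst] = i.a;                   break;
            case Op.Add:     regs[i.dst] = regs[i.a] + regs[i.b]; break;
            case Op.Mul:     regs[i.dst] = regs[i.a] * regs[i.b]; break;
        }
    }
    return regs[code[$ - 1].dst];
}

// (2 + 3) * 4 lowered to the linear form:
static immutable prog = [
    Instr(Op.LoadImm, 0, 2, 0),
    Instr(Op.LoadImm, 1, 3, 0),
    Instr(Op.Add,     2, 0, 1),
    Instr(Op.LoadImm, 3, 4, 0),
    Instr(Op.Mul,     4, 2, 3),
];

static assert(eval(prog) == 20);
```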

April 27, 2017
On 4/27/17 4:15 AM, Stefan Koch wrote:
> On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
>> [ ... ]
> Hi Guys,
>
> As you already probably know some work has been done in the past week to
> get an x86 jit rolling.
>
> It is designed to produce very simple code with _any_ optimization at all.
>
> Since optimization introduces heavy complexity down the road, even if at
> first it looks very affordable. My opinion is : "_any_ optimization too
> much."

There is also a trade-off in spending too much time doing an optimization.
That being said, simple peephole optimizations may be well worth the effort.

>
> This stance should make it possible to get some _really_ shiny
> performance numbers for dconf.
>
> Cheers,
> Stefan

April 27, 2017
On Thursday, 27 April 2017 at 08:51:17 UTC, Dmitry Olshansky wrote:
> On 4/27/17 4:15 AM, Stefan Koch wrote:
>> On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
>>> [ ... ]
>> Hi Guys,
>>
>> As you already probably know some work has been done in the past week to
>> get an x86 jit rolling.
>>
>> It is designed to produce very simple code with _any_ optimization at all.
>>
>> Since optimization introduces heavy complexity down the road, even if at
>> first it looks very affordable. My opinion is : "_any_ optimization too
>> much."
>
> There is also trade-off of spending too much time doing an optimization.
> That being said simple peep-hole optimizations may be well worth the effort.
>
>>
>> This stance should make it possible to get some _really_ shiny
>> performance numbers for dconf.
>>
>> Cheers,
>> Stefan

I should probably clarify; I made a typo.
I meant to write "without _any_ optimization at all."
Peephole optimization would be worth it if we wanted to squeeze out the last drop of performance;
however, in the specific case of newCTFE, even the crappiest JIT will already be much faster than an optimized interpreter would be.

Even small peephole optimizations quickly turn into an endless source of bugs.
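As an aside, here is the flavor of rewrite being deliberately left out (a hypothetical strength-reduction rule, not code from newCTFE): folding a multiply by a power of two into a shift. Each such rule is tiny, but the edge conditions are exactly where the bugs breed.

```d
// Hypothetical peephole rule, not part of newCTFE: strength-reduce
// "x * 2^n" into "x << n". It is only valid for the power-of-two
// case, which is the kind of edge condition that breeds bugs.
struct MulInstr { int operand; int factor; }

bool isPowerOfTwo(int v)
{
    return v > 0 && (v & (v - 1)) == 0;
}

// Returns the shift amount if the rewrite applies, or -1 if the
// instruction must be left alone.
int shiftFor(MulInstr m)
{
    if (!isPowerOfTwo(m.factor))
        return -1;
    int n = 0;
    while ((1 << n) != m.factor)
        ++n;
    return n;
}

static assert(shiftFor(MulInstr(0, 8)) == 3);   // x * 8  -> x << 3
static assert(shiftFor(MulInstr(0, 12)) == -1); // no rewrite applies
```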

April 28, 2017
On Thursday, 16 February 2017 at 21:05:51 UTC, Stefan Koch wrote:
> [ ... ]

After a little exploration of the JIT, I have now determined that a simple RISC-style architecture is still the best.
(Codegen for scaled loads is hard :p)

I am now back to fixing non-compiling code, such as:

struct S
{
    uint[] slice;
}

uint fn()
{
    S s;
    s.slice.length = 12;
    return cast(uint)s.slice.length;
}

static assert(fn() == 12);

This simple test does not compile because,
ahm well ...
somewhere along the road we lose the type of s.slice, and then we cannot tell where to get .length from.
April 28, 2017
On Friday, 28 April 2017 at 08:47:43 UTC, Stefan Koch wrote:
> After a little of exploration of the JIT, I have now determined that a simple risc architecture is still the best.
> (codegen for scaled loads is hard :p)

Do you mean no Jit?
April 28, 2017
On Friday, 28 April 2017 at 13:03:42 UTC, Nordlöw wrote:
> On Friday, 28 April 2017 at 08:47:43 UTC, Stefan Koch wrote:
>> After a little of exploration of the JIT, I have now determined that a simple risc architecture is still the best.
>> (codegen for scaled loads is hard :p)
>
> Do you mean no Jit?

Of course there will be a JIT.

But currently I am busy fixing bugs in the generated IR.
So the implementation of the JIT will have to wait a little.
April 28, 2017
On Friday, 28 April 2017 at 13:13:16 UTC, Stefan Koch wrote:
>> Do you mean no Jit?
>
> Of course there will be a JIT.

Ah, I misunderstood your formulation.

> But currently I am fixing busy bugs in the generated IR.
> So the implementation of jit will have to wait a little.

Ok. Thanks.