{OT} Youtube Video: newCTFE: Starting to write the x86 JIT
April 20, 2017
Hi Guys,

I just began work on the x86 JIT backend.

Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ JIT-compatible x86 codegen is structured.

Since I do believe that this is an interesting topic, I will give you an over-the-shoulder perspective on it.

At the time of posting the video is still uploading, but you should be able to see it soon.

https://www.youtube.com/watch?v=pKorjPAvhQY

Cheers,
Stefan

April 20, 2017
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I just began work on the x86 JIT backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ JIT-compatible x86 codegen is structured.
>
> Since I do believe that this is an interesting topic, I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

Actual code-gen starts at 34:00 something.
April 20, 2017
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I just began work on the x86 JIT backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ JIT-compatible x86 codegen is structured.
>
> Since I do believe that this is an interesting topic, I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

Could you explain where it can be helpful?
April 20, 2017
On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>
> Could you explain where it can be helpful?

It's helpful for newCTFE's development. :)
I estimate the JIT will easily be 10 times faster than my bytecode interpreter,
which will make it about 100-1000x faster than the current CTFE.
April 20, 2017
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
> It's helpful for newCTFE's development. :)
> I estimate the JIT will easily be 10 times faster than my bytecode interpreter,
> which will make it about 100-1000x faster than the current CTFE.

Wow.
April 22, 2017
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>
>> Could you explain where it can be helpful?
>
> It's helpful for newCTFE's development. :)
> I estimate the JIT will easily be 10 times faster than my bytecode interpreter,
> which will make it about 100-1000x faster than the current CTFE.

Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else chains for type special cases.
April 22, 2017
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
> On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
>> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>>
>>> Could you explain where it can be helpful?
>>
>> It's helpful for newCTFE's development. :)
>> I estimate the JIT will easily be 10 times faster than my bytecode interpreter,
>> which will make it about 100-1000x faster than the current CTFE.
>
> Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else chains for type special cases.

No, it most likely will not.
However, I am planning to work on speeding up templates after newCTFE is done.

April 22, 2017
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
> On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
>> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>>
>>> Could you explain where it can be helpful?
>>
>> It's helpful for newCTFE's development. :)
>> I estimate the JIT will easily be 10 times faster than my bytecode interpreter,
>> which will make it about 100-1000x faster than the current CTFE.
>
> Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else chains for type special cases.

If you could share the code, it would be appreciated.
If you cannot share it publicly, come to IRC sometime;
I am Uplink|DMD there.
April 22, 2017
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I just began work on the x86 JIT backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ JIT-compatible x86 codegen is structured.
>
> Since I do believe that this is an interesting topic, I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

Is there not some way that you could get the current interpreter-based implementation into dmd sooner, and then modify the design later if necessary when you do the x86 JIT? The benefits of having just *fast* CTFE sooner are perhaps larger than the benefits of having *even faster* CTFE later. Faster templates are also something that might be a higher priority - assuming it will be you who does the work there.

Obviously it's your time and you're free to do whatever you like whenever you like, but I was just wondering what your reasoning for the order of your plan is.
April 22, 2017
On Saturday, 22 April 2017 at 14:22:18 UTC, John Colvin wrote:
> On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
>> Hi Guys,
>>
>> I just began work on the x86 JIT backend.
>>
>> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ JIT-compatible x86 codegen is structured.
>>
>> Since I do believe that this is an interesting topic, I will give you an over-the-shoulder perspective on it.
>>
>> At the time of posting the video is still uploading, but you should be able to see it soon.
>>
>> https://www.youtube.com/watch?v=pKorjPAvhQY
>>
>> Cheers,
>> Stefan
>
> Is there not some way that you could get the current interpreter-based implementation into dmd sooner, and then modify the design later if necessary when you do the x86 JIT? The benefits of having just *fast* CTFE sooner are perhaps larger than the benefits of having *even faster* CTFE later. Faster templates are also something that might be a higher priority - assuming it will be you who does the work there.
>
> Obviously it's your time and you're free to do whatever you like whenever you like, but I was just wondering what your reasoning for the order of your plan is.

newCTFE is currently at a phase where high-level features have to be implemented.
For that reason I am looking to extend the interface to support, for example, scaled loads and the like.
Otherwise you end up with 1000 temporaries that add offsets to pointers.
Also, and perhaps more importantly, I am sick and tired of hearing "why don't you use ldc/llvm?" all the time...
