Thread overview
{OT} Youtube Video: newCTFE: Starting to write the x86 JIT
Stefan Koch (6 days ago)
Stefan Koch (6 days ago)
Suliman (6 days ago)
Stefan Koch (6 days ago)
Nordlöw (6 days ago)
evilrat (4 days ago)
Stefan Koch (4 days ago)
Stefan Koch (4 days ago)
evilrat (3 days ago)
Stefan Koch (3 days ago)
John Colvin (4 days ago)
Stefan Koch (4 days ago)
Stefan Koch (2 days ago)
Jonathan Marler (2 days ago)
jmh530 (2 days ago)
Jonathan Marler (2 days ago)
6 days ago
Hi Guys,

I have just begun work on the x86 jit backend.

Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured.

Since I believe this is an interesting topic,
I will give you an over-the-shoulder perspective on it.

At the time of posting the video is still uploading, but you should be able to see it soon.

https://www.youtube.com/watch?v=pKorjPAvhQY

Cheers,
Stefan
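
[To make the topic concrete: a minimal, self-contained D sketch of what a JIT does at the lowest level -- emit raw x86-64 bytes into an executable buffer and call them as a function. This is only an illustration of the general mechanism, not Stefan's actual backend, and it assumes an x86-64 POSIX system.]

import core.sys.posix.sys.mman;
import std.stdio;

void main()
{
    // mov eax, 42 ; ret  -- encoded as raw x86-64 machine code bytes
    ubyte[] code = [0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3];

    // Allocate a page that is both writable and executable, then copy the code in.
    void* mem = mmap(null, code.length,
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    assert(mem != MAP_FAILED);
    (cast(ubyte*) mem)[0 .. code.length] = code[];

    // Reinterpret the buffer as a function pointer and call it.
    alias Fn = int function();
    auto jitted = cast(Fn) mem;
    writeln(jitted()); // prints 42

    munmap(mem, code.length);
}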

6 days ago
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I have just begun work on the x86 jit backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured.
>
> Since I believe this is an interesting topic,
> I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

The actual code-gen starts at around 34:00.
6 days ago
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I have just begun work on the x86 jit backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured.
>
> Since I believe this is an interesting topic,
> I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

Could you explain where it can be helpful?
6 days ago
On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>
> Could you explain where it can be helpful?

It's helpful for newCTFE's development. :)
I estimate the jit will easily be 10 times faster than my bytecode interpreter,
which will make it about 100-1000x faster than the current CTFE.
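
[For readers unfamiliar with CTFE: the speedup matters for code the compiler itself has to execute. Here is a tiny illustrative D example -- not taken from the video, names are my own -- of the kind of workload involved:]

// A naive recursive Fibonacci; nothing special about it except that
// we force the compiler to evaluate it.
ulong fib(ulong n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// 'enum' forces compile-time evaluation, so this call runs inside the
// compiler's CTFE engine (the AST interpreter today, newCTFE's
// bytecode interpreter or JIT in the future).
enum ulong fib25 = fib(25);
static assert(fib25 == 75_025);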
6 days ago
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
> It's helpful for newCTFE's development. :)
> I estimate the jit will easily be 10 times faster than my bytecode interpreter,
> which will make it about 100-1000x faster than the current CTFE.

Wow.
4 days ago
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>
>> Could you explain where it can be helpful?
>
> It's helpful for newCTFE's development. :)
> I estimate the jit will easily be 10 times faster than my bytecode interpreter,
> which will make it about 100-1000x faster than the current CTFE.

Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else for type special cases.
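
[Not evilrat's actual code, which is not shown in the thread -- just a hypothetical D template of the shape he describes: a handful of instantiations with static if/else special cases per type.]

// Picks a description per type at compile time via static if chains.
string describe(T)()
{
    static if (is(T == int) || is(T == long))
        return "integral";
    else static if (is(T == float) || is(T == double))
        return "floating point";
    else static if (is(T == string))
        return "text";
    else
        return "other";
}

// Each distinct T is a separate template instantiation the compiler must analyze.
static assert(describe!int() == "integral");
static assert(describe!double() == "floating point");
static assert(describe!string() == "text");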
4 days ago
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
> On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
>> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>>
>>> Could you explain where it can be helpful?
>>
>> It's helpful for newCTFE's development. :)
>> I estimate the jit will easily be 10 times faster than my bytecode interpreter,
>> which will make it about 100-1000x faster than the current CTFE.
>
> Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else for type special cases.

No, it most likely will not.
However, I am planning to work on speeding up templates after newCTFE is done.

4 days ago
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
> On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
>> On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
>>>
>>> Could you explain where it can be helpful?
>>
>> It's helpful for newCTFE's development. :)
>> I estimate the jit will easily be 10 times faster than my bytecode interpreter,
>> which will make it about 100-1000x faster than the current CTFE.
>
> Does this apply to templates too? I recently tried some code, and a templated version with about 10 instantiations for 4-5 types increased compile time from about 1 sec up to 4! The template itself was straightforward, just a bunch of static if-else-else for type special cases.

If you could share the code, it would be appreciated.
If you cannot share it publicly, come by IRC sometime.
I am Uplink|DMD there.
4 days ago
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
> Hi Guys,
>
> I have just begun work on the x86 jit backend.
>
> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured.
>
> Since I believe this is an interesting topic,
> I will give you an over-the-shoulder perspective on it.
>
> At the time of posting the video is still uploading, but you should be able to see it soon.
>
> https://www.youtube.com/watch?v=pKorjPAvhQY
>
> Cheers,
> Stefan

Is there not some way that you could get the current interpreter-based implementation into dmd sooner and then modify the design later if necessary when you do the x86 jit? The benefits of having just *fast* ctfe sooner are perhaps larger than the benefits of having *even faster* ctfe later. Faster templates are also something that might be higher priority - assuming it will be you who does the work there.

Obviously it's your time and you're free to do whatever you like whenever you like, but I was just wondering what your reasoning for the order of your plan is?
4 days ago
On Saturday, 22 April 2017 at 14:22:18 UTC, John Colvin wrote:
> On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
>> Hi Guys,
>>
>> I have just begun work on the x86 jit backend.
>>
>> Right now I am at a stage where further design decisions need to be made, and those decisions need to be informed by how a _fast_ jit-compatible x86 codegen is structured.
>>
>> Since I believe this is an interesting topic,
>> I will give you an over-the-shoulder perspective on it.
>>
>> At the time of posting the video is still uploading, but you should be able to see it soon.
>>
>> https://www.youtube.com/watch?v=pKorjPAvhQY
>>
>> Cheers,
>> Stefan
>
> Is there not some way that you could get the current interpreter-based implementation into dmd sooner and then modify the design later if necessary when you do the x86 jit? The benefits of having just *fast* ctfe sooner are perhaps larger than the benefits of having *even faster* ctfe later. Faster templates are also something that might be higher priority - assuming it will be you who does the work there.
>
> Obviously it's your time and you're free to do whatever you like whenever you like, but I was just wondering what your reasoning for the order of your plan is?

newCTFE is currently at a phase where high-level features have to be implemented.
For that reason I am looking to extend the interface to support, for example, scaled loads and the like.
Otherwise you end up with 1000 temporaries that add offsets to pointers.
Also, and perhaps more importantly, I am sick and tired of hearing "why don't you use ldc/llvm?" all the time...
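
[A small illustration of the scaled-load point above, in D. This is not newCTFE's actual IR or interface -- just the same array access with comments showing the two ways a backend can lower it:]

// One indexed array access; the interesting part is what the backend emits for it.
int pick(const int[] arr, size_t i)
{
    // With a scaled-load primitive the backend can emit roughly one instruction:
    //     mov eax, [base + index*4]
    // Without one, the IR has to spell out the address arithmetic with temporaries:
    //     t0 = index * 4;  t1 = base + t0;  result = load(t1)
    // and every indexed access adds another batch of temporaries.
    return arr[i];
}

unittest
{
    assert(pick([10, 20, 30], 1) == 20);
}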
