April 26, 2021

On Monday, 26 April 2021 at 13:17:49 UTC, FeepingCreature wrote:
> On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:
>> On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote:
>>> Native CTFE and macros are a beautiful thing though.
>> What did you mean by native?
> When cx needs to execute a function at compile time, it links it into a shared object and loads it back with dlopen/dlsym. So while you get slower startup (until the cache is filled), any further calls to a CTFE function run at native performance.
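For illustration, the dlopen/dlsym round trip described above looks roughly like this in D; the shared-object path, symbol name, and signature here are hypothetical, not cx's actual ones:

```d
// Minimal sketch of the described round trip (Posix only).
// The .so path and symbol name are invented for illustration.
import core.sys.posix.dlfcn : dlopen, dlsym, RTLD_NOW;
import std.exception : enforce;
import std.string : toStringz;

alias CtfeFn = extern (C) int function(int);

int runNatively(string soPath, string symbol, int arg)
{
    // Load the freshly compiled shared object...
    void* handle = dlopen(soPath.toStringz, RTLD_NOW);
    enforce(handle !is null, "dlopen failed");
    // ...look up the compiled function by its exported name...
    auto fn = cast(CtfeFn) dlsym(handle, symbol.toStringz);
    enforce(fn !is null, "dlsym failed");
    // ...and every call from here on runs at native speed.
    return fn(arg);
}
```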

Ah okay, but can't D functions be called at compile time with native performance anyway, too?
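For context, D does evaluate functions at compile time whenever a compile-time value is required, but its CTFE engine interprets the function body rather than running native code, e.g.:

```d
// Standard D CTFE: fib is evaluated by the compiler's interpreter,
// not compiled to native code first.
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

enum f10 = fib(10);       // `enum` forces compile-time evaluation
static assert(f10 == 55); // checked before the program ever runs

void main() {}
```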

So generally, cx first parses the program, then filters out what is a macro, then compiles all macro/CTFE functions into a shared lib and executes these macros from that lib?

Wouldn't it be better to use the cx compiler as a service at compile time and compile the code in memory into an executable segment (some kind of JITing, I think) and execute it there?
I think the cling REPL does it like that.

And how does cx pass type objects?

April 26, 2021
On Thursday, 15 April 2021 at 04:01:23 UTC, Ali Çehreli wrote:
> We will talk about compile time function execution (CTFE).
>
> Although this is announced on Meetup[1] as well, you can connect directly at
>
>   https://us04web.zoom.us/j/2248614462?pwd=VTl4OXNjVHNhUTJibms2NlVFS3lWZz09
>
> April 15, 2021
> Thursday
> 19:00 Pacific Time
>
> Ali
>
> [1] https://www.meetup.com/D-Lang-Silicon-Valley/events/kmqcvqyccgbtb/

What was the outcome of this meeting?
April 27, 2021

On Monday, 26 April 2021 at 14:01:37 UTC, sighoya wrote:
> On Monday, 26 April 2021 at 13:17:49 UTC, FeepingCreature wrote:
>> On Sunday, 25 April 2021 at 21:27:55 UTC, sighoya wrote:
>>> On Monday, 19 April 2021 at 06:37:03 UTC, FeepingCreature wrote:
>>>> Native CTFE and macros are a beautiful thing though.
>>> What did you mean by native?
>> When cx needs to execute a function at compile time, it links it into a shared object and loads it back with dlopen/dlsym. So while you get slower startup (until the cache is filled), any further calls to a CTFE function run at native performance.
> Ah okay, but can't D functions be called at compile time with native performance anyway, too?
>
> So generally, cx first parses the program, then filters out what is a macro, then compiles all macro/CTFE functions into a shared lib and executes these macros from that lib?

Sorta: when we hit a macro declaration, "the module at this point" (plus transitive imports) is compiled as a complete unit. This is necessary because parser macros can change the interpretation of later code. Then the generated macro object is added to the module state going forward, and that way it can be imported by other modules.
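A toy model of that ordering, to make it concrete; every name here is invented for illustration and none of it is cx's real internals:

```d
// Toy model: hitting a macro declaration compiles the module-so-far,
// and the loaded macro then changes how later lines are interpreted.
import std.stdio;
import std.string : toUpper;

struct SharedLib { string function(string) entry; }
struct Module { string[] decls; string function(string)[] macros; }

// Stand-in for "compile the module so far into a shared object":
// here the "compiled macro" just upper-cases later source lines.
SharedLib compileToSharedObject(Module m)
{
    return SharedLib((string line) => line.toUpper);
}

void main()
{
    Module m;
    foreach (line; ["int x;", "macro upcase;", "int y;"])
    {
        if (line == "macro upcase;")
        {
            // Macro declaration hit: compile the module at this point
            // as a complete unit and load the macro back in.
            m.macros ~= compileToSharedObject(m).entry;
        }
        else
        {
            // Later lines pass through every loaded macro first, which
            // is why compilation must happen at exactly that point.
            foreach (mac; m.macros) line = mac(line);
            m.decls ~= line;
        }
    }
    writeln(m.decls); // ["int x;", "INT Y;"]
}
```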

> Wouldn't it be better to use the cx compiler as a service at compile time and compile the code in memory into an executable segment (some kind of JITing, I think) and execute it there?
> I think the cling REPL does it like that.

That would also work; I just went the path of least resistance. I already had an LLVM backend, so I just reused it. Adding a JIT backend would be fairly easy, except for the part of writing and debugging a JIT. :P

> And how does cx pass type objects?

By reference. :) Since the compiler is in the search path, you can just import cx.base and get access to the same Type class that the compiler uses internally. In that sense, macros have complete parity with the compiler itself. There's no attempt to provide a special interface for macros that wouldn't also be used by compiler-internal functions. (There's some class gymnastics to prevent module loops: i.e. cx.base defines an interface for the compiler as a whole, which is implemented in main but is indeed also used by the compiler's internal modules themselves.)
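That interface-in-the-base-module trick is the classic way to break an import cycle; a minimal D sketch of the shape, with all names invented:

```d
// Sketch of the pattern (invented names). In cx this would be split
// across modules: the interface in cx.base, the class in main.

class Type { string name; this(string n) { name = n; } }

// cx.base's side: the interface macros and internal modules code against.
interface ICompiler
{
    Type lookupType(string name);
}

// main's side: the single concrete implementation.
class Compiler : ICompiler
{
    Type[string] types;
    Type lookupType(string name) { return types[name]; }
}

void main()
{
    auto impl = new Compiler;
    impl.types["int"] = new Type("int");
    ICompiler c = impl;
    // A macro handed `c` shares the very same Type objects by reference.
    assert(c.lookupType("int") is impl.types["int"]);
}
```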

The downside of all this is that you need to parse and process the entire compiler just to handle a macro import. But DMD gives me hope that this too can be made fast. (Right now, compiling anything that pulls in a macro takes about a second, even with a warm object cache.)

April 27, 2021

On Tuesday, 27 April 2021 at 08:12:57 UTC, FeepingCreature wrote:
> [...]

Nice, thanks.

Generally, I think providing a metaprogramming framework in the language/compiler is, like any design decision, a matter of tradeoffs.

Technically, more power is better than a deliberately limited language whose successive upgrades fragment it more and more over time.

However, too much power exceeds what humans, and compilers, can reason about. For instance, allowing macros to mutate AST nodes non-locally across the whole project, or even in downstream code, is a powerful yet horrible way to build software productively.

May 03, 2021

On Tuesday, 27 April 2021 at 08:12:57 UTC, FeepingCreature wrote:
> [...]
>
> That would also work; I just went the path of least resistance. I already had an LLVM backend, so I just reused it. Adding a JIT backend would be fairly easy, except for the part of writing and debugging a JIT. :P

With regards to the JIT backend: it is better to use an existing JIT framework (LLVM itself ships JIT support, for instance) than to build one on your own.

-Alex
