Thread overview: Using core.reflect to rewrite code

  Aug 10, 2021  Stefan Koch
  Aug 11, 2021  Stefan Koch
  Aug 12, 2021  Brian Tiffin
  Aug 12, 2021  max haughton
  Aug 12, 2021  Stefan Koch
  Aug 13, 2021  Adam D Ruppe
  Aug 14, 2021  Stefan Koch
  Aug 14, 2021  max haughton
  Aug 14, 2021  Stefan Koch
  Aug 12, 2021  Stefan Koch
  Aug 12, 2021  surlymoor
  Aug 12, 2021  Stefan Koch
August 10, 2021

Good evening everyone,

before going to bed I have been playing with my new project again.

This time I wanted to generate a D wrapper struct.

core.reflect can be used to transform

extern (C)
{
    struct FwdCType;
    void setText(FwdCType* ctx, const char* text);
    void setInt(FwdCType* c, const int value);

    int unrelated_function();

    ubyte* getBlob(const FwdCType* ctx);
}

into

struct FwdCTypeWrapper {
    FwdCType* ctx;

    void setText (const(char*) text) {
        .setText(ctx, text);
    }

    void setInt (const(int) value) {
        .setInt(ctx, value);
    }

    ubyte* getBlob () {
        return .getBlob(ctx);
    }

}

Note that the wrapper generation was smart enough to omit the unrelated function, because no pointer to the context struct was detected in its signature.
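For illustration (my addition, not part of the original post), client code would then drive the C API through the wrapper instead of threading the context pointer through every call. `createContext` below is a hypothetical factory function, not one of the declarations above.

extern (C) FwdCType* createContext(); // hypothetical factory, assumed for the example

void example()
{
    auto w = FwdCTypeWrapper(createContext());
    w.setText("hello".ptr); // forwards to setText(ctx, ...)
    w.setInt(42);           // forwards to setInt(ctx, 42)
    ubyte* blob = w.getBlob();
}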

The code that does this is just over 80 lines (omitting the boilerplate), and it can be seen here:
https://gist.github.com/UplinkCoder/93cb06e4921ab4c96752c6325e03e42d

August 11, 2021

On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
> Good evening everyone,
>
> before going to bed I have been playing with my new project again.

I've managed to improve the performance of using core.reflect by roughly 2x because I realized that class literals created by core.reflect don't need "scrubbing".

Scrubbing in this case means walking the expression tree of an expression created by CTFE and removing "CTFE-only features" from it, thereby making it a proper AST node that can be inserted without "damaging" the tree.

To be honest I am surprised that I was able to get a 2x performance win for core.reflect without major architectural changes.

I guess sometimes one is lucky :)

Now that core.reflect is relatively fast, it's time to play around with it :)

Please let me know what you think about the examples I have posted so far.

Cheers,

Stefan

August 12, 2021

On Wednesday, 11 August 2021 at 23:58:21 UTC, Stefan Koch wrote:
> On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
>> Good evening everyone,
>>
>> before going to bed I have been playing with my new project again.
>
> I've managed to improve the performance of using core.reflect by roughly 2x because I realized that class literals created by core.reflect don't need "scrubbing".
> ...
> Cheers,
>
> Stefan

I'm too new here, and probably speaking out of turn, not at a level to profit from the core reflections you are working on, Stefan, but I just want to add a cheering response. Sounds very cool, and I look forward to the day I can write some D that takes advantage.

Well done. Even though I'm talking out a hole that doesn't know anything about the subject matter.

Have good, make well.

August 12, 2021

On Wednesday, 11 August 2021 at 23:58:21 UTC, Stefan Koch wrote:
> On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
>> [...]
>
> I've managed to improve the performance of using core.reflect by roughly 2x because I realized that class literals created by core.reflect don't need "scrubbing".
>
> [...]

Can you profile it vs. a template metaprogramming solution?

August 12, 2021

On Thursday, 12 August 2021 at 02:12:23 UTC, max haughton wrote:
> On Wednesday, 11 August 2021 at 23:58:21 UTC, Stefan Koch wrote:
>> On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
>>> [...]
>>
>> I've managed to improve the performance of using core.reflect by roughly 2x because I realized that class literals created by core.reflect don't need "scrubbing".
>>
>> [...]
>
> Can you profile it vs. a template metaprogramming solution?

Yes but I am still not quite sure how to write the equivalent template ;)

August 12, 2021

On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
> [...]

I enjoy your work, but what is the probability, assuming this functionality reaches completion, of core.reflect being added to druntime? I remember type functions weren't received well by Walter (and Andrei?).

August 12, 2021

On Thursday, 12 August 2021 at 02:12:23 UTC, max haughton wrote:
> On Wednesday, 11 August 2021 at 23:58:21 UTC, Stefan Koch wrote:
>> On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
>>> [...]
>>
>> I've managed to improve the performance of using core.reflect by roughly 2x because I realized that class literals created by core.reflect don't need "scrubbing".
>>
>> [...]
>
> Can you profile it vs. a template metaprogramming solution?

TL;DR: The template solution [1] takes 60% more time and 80% more memory
compared to the core.reflect solution [2], while also being more difficult
to write and therefore less useful.

I have written a template solution, although it does not offer the same functionality.
It should provide a reasonable lower bound for one.
The template only collects structs and functions.
Writing the correct filtering and constructing the string is a task for another day :)

Here are the results:

uplink@uplink-black:~/d/dmd(core_reflect)$ hyperfine "generated/linux/release/64/dmd -c testTemplate.d" "generated/linux/release/64/dmd -c testCollector.d"
Benchmark #1: generated/linux/release/64/dmd -c testTemplate.d
  Time (mean ± σ):      28.5 ms ±   2.3 ms    [User: 21.9 ms, System: 6.8 ms]
  Range (min … max):    21.1 ms …  33.2 ms    95 runs

Benchmark #2: generated/linux/release/64/dmd -c testCollector.d
  Time (mean ± σ):      21.7 ms ±   2.7 ms    [User: 16.5 ms, System: 5.4 ms]
  Range (min … max):    13.8 ms …  27.6 ms    147 runs

Summary
  'generated/linux/release/64/dmd -c testCollector.d' ran
    1.31 ± 0.19 times faster than 'generated/linux/release/64/dmd -c testTemplate.d'

And here is a little proxy for memory use, as well as the output:

uplink@uplink-black:~/d/dmd(core_reflect)$ /usr/bin/time generated/linux/release/64/dmd -c testTemplate.d
tuple(getName, getOrdinal, setName, unrelated)
(Ctx)
0.02user 0.00system 0:00.03elapsed 96%CPU (0avgtext+0avgdata 25824maxresident)k
0inputs+8outputs (0major+5191minor)pagefaults 0swaps
uplink@uplink-black:~/d/dmd(core_reflect)$ /usr/bin/time generated/linux/release/64/dmd -c testCollector.d
class CtxWrapper {
    const(char*) getName () {
        getName(ctx);
    }

    uint getOrdinal () {
        getOrdinal(ctx);
    }

    void setName (const(char*) name) {
        setName(ctx, name);
    }

}

0.02user 0.00system 0:00.02elapsed 104%CPU (0avgtext+0avgdata 14364maxresident)k
0inputs+40outputs (0major+2245minor)pagefaults 0swaps

template solution benchmarked:
[1] https://gist.github.com/UplinkCoder/c2838252c55c9fdf4fc526e2a8c5ce7e

core.reflect solution benchmarked:
[2] https://gist.github.com/UplinkCoder/93cb06e4921ab4c96752c6325e03e42d

August 12, 2021

On Thursday, 12 August 2021 at 05:53:58 UTC, surlymoor wrote:
> On Tuesday, 10 August 2021 at 23:51:29 UTC, Stefan Koch wrote:
>> [...]
>
> I enjoy your work, but what is the probability, assuming this functionality reaches completion, of core.reflect being added to druntime? I remember type functions weren't received well by Walter (and Andrei?).

I do not know. I have not spoken to them yet.

core.reflect is conceptually, and in terms of implementation, much simpler than type functions were, so I would hope that it will be accepted.

Regards,
Stefan

August 13, 2021
On Thursday, 12 August 2021 at 04:47:52 UTC, Stefan Koch wrote:
> Yes but I am still not quite sure how to write the equivalent template ;)

You'd just loop through the members, see if there's the struct pointer as the argument, and if so, generate a forwarder function with the &this passed to it.

Though I wouldn't generate an actual wrapper here, since UFCS does the same thing in practice. I have done something similar in my script wrapper for UFCS:

http://arsd-official.dpldocs.info/source/arsd.jsvar.d.html#L2480
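For illustration (my addition, not from the thread): with UFCS, the extern(C) functions from the first post can already be called with method syntax directly on the raw pointer, so no generated wrapper is strictly needed. `createContext` is again a hypothetical factory.

extern (C) FwdCType* createContext(); // hypothetical factory

void example()
{
    FwdCType* ctx = createContext();
    ctx.setText("hello".ptr); // UFCS: lowered to setText(ctx, ...)
    ctx.setInt(42);           // UFCS: lowered to setInt(ctx, 42)
    ubyte* blob = ctx.getBlob();
}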
August 14, 2021
On Friday, 13 August 2021 at 12:51:21 UTC, Adam D Ruppe wrote:
> On Thursday, 12 August 2021 at 04:47:52 UTC, Stefan Koch wrote:
>> Yes but I am still not quite sure how to write the equivalent template ;)
>
> You'd just loop through the members, see if there's the struct pointer as the argument, and if so, generate a forwarder function with the &this passed to it.
>
> Though I wouldn't generate an actual wrapper here since UFCS does the same thing in practice, I have done similar in my script wrapper for ufcs
>
> http://arsd-official.dpldocs.info/source/arsd.jsvar.d.html#L2480

Thanks for your input.
In the meantime I have been able to reproduce the behavior in a template: see `WrapperString` in the following gist: https://gist.github.com/UplinkCoder/e60584a2c8f46ae4a490117b878ecec1

The performance of the template is still roughly 2 times worse than that of the `core.reflect` solution.
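For readers who do not want to open the gists, here is a minimal sketch of what such a forwarder-generating template might look like. It is my own illustration under the thread's assumptions (the extern(C) declarations from the first post are visible in the module passed as `mod`); it is not the actual `WrapperString` implementation, and the names `makeForwarders`, `Wrapper`, and `cbindings` are made up.

import std.traits : Parameters;

// Build the source of one forwarder per free function whose first
// parameter is a pointer to Ctx; anything else (e.g. unrelated_function)
// is skipped, mirroring the core.reflect example.
string makeForwarders(alias mod, Ctx)()
{
    string code;
    static foreach (name; __traits(allMembers, mod))
    {
        static if (is(typeof(__traits(getMember, mod, name)) == function))
        {
            static if (Parameters!(__traits(getMember, mod, name)).length >= 1)
            {
                static if (is(Parameters!(__traits(getMember, mod, name))[0] : const(Ctx)*))
                {
                    // the leading dot forces module-level lookup, so the
                    // forwarder does not call itself recursively
                    code ~= "auto " ~ name ~ "(Args...)(Args args) { return ." ~ name ~ "(ctx, args); }\n";
                }
            }
        }
    }
    return code;
}

struct Wrapper(alias mod, Ctx)
{
    Ctx* ctx;
    mixin(makeForwarders!(mod, Ctx)());
}

// usage, assuming the extern(C) declarations live in a module `cbindings`:
// alias FwdCTypeWrapper = Wrapper!(cbindings, FwdCType);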
