May 20, 2021

On Thursday, 20 May 2021 at 11:40:24 UTC, Ola Fosheim Grostad wrote:

> But delegates have to work without a GC too

Well they do... sort of.

You can always take the address of a struct member function and now you have your nogc delegate.

Of course, the difficulty is that the receiving function has no way of knowing whether that void* it received is a struct, an automatically captured variable set, or something else entirely.

And the capture list takes a little work, but there are tricks, like putting it all in a struct.

I wrote about this not too long ago:
http://dpldocs.info/this-week-in-d/Blog.Posted_2021_03_01.html#tip-of-the-week
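
Roughly, the trick looks like this (just a minimal sketch with made-up names): the "captures" are ordinary struct fields, and the delegate is nothing more than the address of a member function, so no GC allocation happens.

struct Capture
{
    int base;    // the explicitly "captured" state lives here instead of in a GC closure

    bool isAboveBase(int value) @nogc nothrow
    {
        return value > base;
    }
}

void example() @nogc nothrow
{
    auto c = Capture(10);
    bool delegate(int) @nogc nothrow dg = &c.isAboveBase;  // context pointer is &c, nothing allocated
    assert(dg(12));
    assert(!dg(3));
    // note: dg is only valid while c is alive
}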

However, the delegate itself is less useful than a functor or interface unless you must pass it to existing code. And then, unless it is a scope receiver, you're asking for a leak again because of that void* being unknown to the caller (which is why this is possible at all, but it also leaves you a bit stuck).

It would be kinda cool if the compiler would magically pack small types into that void* sometimes. Since it is opaque to the caller it could actually pack in a captured int or two right there and be a by-value delegate.

May 20, 2021
On Thursday, 20 May 2021 at 01:21:34 UTC, Walter Bright wrote:
> On 5/19/2021 1:08 PM, Petar Kirov [ZombineDev] wrote:
>> Make it work as expected. If it turns out to be a performance bottleneck for some applications, they can always work around it, as obviously they had done until now.
>
> The point was that people will not realize they will have created a potentially very large performance bottleneck with an innocuous bit of code. This is a design pattern that should be avoided.
>

I don't expect this to be a huge deal in practice. There are many reasons I could go over, but the strongest argument is that just about every other language out there does it this way, and it doesn't seem to be a major issue.

>> Being conscious about performance trade-offs is important in language design, but at the same time, just because someone can create a fork bomb with just a few lines of code doesn't mean that we should disallow every type of dynamic memory allocation. For every misuse of a sound language feature, there are plenty more valid usages.
>
> Yeah, well, I tend to bear the brunt of the unhappiness when these things go wrong. A fair amount of D's design decisions grew from discussions with programming lead engineers having problems with their less experienced devs making poor tradeoffs.
>
> I'm sure you've heard some of my rants against macros, version conditionals being simple identifiers instead of expressions, etc.
>
> D has many ways of getting past guardrails, but those need to be conscious decisions. Having no guardrails is not good design.

You are making a categorical error here.

The delegate thing is fundamentally different. The tradeoff being discussed is between something that might be slow in some cases and something that is outright broken.

While we can all agree that possibly slow or confusing is bad, and that the argument stands for macros and the like, it simply does not hold when the alternative is to produce something broken.

Possibly slow can be useful nevertheless, depending on the specifics of the situation. Broken is never useful.

May 20, 2021
On 5/19/21 9:02 AM, Steven Schveighoffer wrote:

> Of course, with Walter's chosen fix, only allowing capture of non-scoped variables, all of this is moot. I kind of feel like that's a much simpler (even if less convenient) solution.

After reading a lot of this discussion, I have changed my mind. We should implement the "correct" thing even if it performs poorly. While Walter's solution absolves the compiler of responsibility, it doesn't square with the fact that closures are already hidden allocations, so consistency dictates that we handle inner allocations the same way.

We need one heap block per scope that has captured variables. Expensive, but I don't see a way around it. Hopefully optimizers and scope delegates can alleviate performance issues.
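
For concreteness, a sketch of the kind of loop this is about (assuming the familiar behavior where the whole function frame gets a single closure):

int delegate()[] makeDelegates()
{
    int delegate()[] dgs;
    foreach (i; 0 .. 3)
    {
        int x = i;        // one x per iteration, conceptually
        dgs ~= () => x;   // today: all three delegates share one closure over the frame,
                          // so they all see the same (last) x; per-scope heap blocks
                          // would give each delegate its own copy
    }
    return dgs;
}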

-Steve
May 20, 2021

On Thursday, 20 May 2021 at 12:10:31 UTC, Adam D. Ruppe wrote:

> It would be kinda cool if the compiler would magically pack small types into that void* sometimes. Since it is opaque to the caller it could actually pack in a captured int or two right there and be a by-value delegate.

Yes, what C++ lacks is a way to give a type to a lambda (function object) before it is defined. A solution could be a way to say: this delegate should be able to hold 2 ints and 1 double. Then it would have buffer space for that, and there would be no need to allocate. Libraries could provide shorter aliases, obviously.
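
Something like that could even be prototyped in library code today; here is a rough, hypothetical version (all names made up) where the capture is copied into a fixed-size inline buffer and invoked through a type-erased function pointer:

struct InlineDelegate(R, size_t capacity, Args...)
{
    private align(8) ubyte[capacity] buffer;    // fixed inline storage (alignment glossed over)
    private R function(void*, Args) thunk;      // type-erased call through the buffer

    static InlineDelegate of(Callable)(Callable c)
        if (Callable.sizeof <= capacity)
    {
        InlineDelegate d;
        *cast(Callable*) d.buffer.ptr = c;      // copy the functor's state into the buffer
        d.thunk = function R(void* ctx, Args args) {
            return (*cast(Callable*) ctx)(args);
        };
        return d;
    }

    R opCall(Args args) { return thunk(buffer.ptr, args); }
}

// Usage: the "captured" state is an explicit functor that fits in the buffer.
struct AddBias
{
    int a, b;
    int opCall(int x) { return x + a + b; }
}

void demo()
{
    auto dg = InlineDelegate!(int, 16, int).of(AddBias(2, 40));
    assert(dg(0) == 42);                        // no GC allocation involved
}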

May 20, 2021

On Thursday, 20 May 2021 at 12:42:51 UTC, Ola Fosheim Grostad wrote:

> On Thursday, 20 May 2021 at 12:10:31 UTC, Adam D. Ruppe wrote:
>
>> It would be kinda cool if the compiler would magically pack small types into that void* sometimes. Since it is opaque to the caller it could actually pack in a captured int or two right there and be a by-value delegate.
>
> Yes, what C++ lacks is a way to give a type to a lambda (function object) before it is defined. A solution could be a way to say: this delegate should be able to hold 2 ints and 1 double. Then it would have buffer space for that, and there would be no need to allocate. Libraries could provide shorter aliases, obviously.

You can do this with functors.

May 20, 2021

On Thursday, 20 May 2021 at 12:53:08 UTC, deadalnix wrote:

> You can do this with functors.

Yes, but the point is to declare a delegate with an internal closure buffer without knowing what it receives.

May 20, 2021

On Thursday, 20 May 2021 at 13:12:30 UTC, Ola Fosheim Grostad wrote:

> On Thursday, 20 May 2021 at 12:53:08 UTC, deadalnix wrote:
>
>> You can do this with functors.
>
> Yes, but the point is to declare a delegate with an internal closure buffer without knowing what it receives.

It is trickier, but some implementations of std::function do that: if what's captured is small enough, they store it in place; if it is larger, they allocate.

It is not mandated by the standard, and the size threshold beyond which they allocate is implementation-defined, not under the user's control.
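
In D terms, that policy boils down to roughly the following (a sketch only; the threshold and names are made up):

enum smallCapacity = 2 * size_t.sizeof;

void* store(Callable)(ref ubyte[smallCapacity] inlineBuf, Callable c)
{
    static if (Callable.sizeof <= smallCapacity)
    {
        *cast(Callable*) inlineBuf.ptr = c;    // small capture: kept in the object itself
        return inlineBuf.ptr;
    }
    else
    {
        import core.memory : GC;
        auto p = GC.malloc(Callable.sizeof);   // large capture: falls back to the heap
        *cast(Callable*) p = c;
        return p;
    }
}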

May 20, 2021

On Thursday, 20 May 2021 at 13:48:01 UTC, deadalnix wrote:

> It is trickier, but some implementations of std::function do that: if what's captured is small enough, they store it in place; if it is larger, they allocate.

Yes, but sizeof std::function is always the same?

> It is not mandated by the standard, and the size threshold beyond which they allocate is implementation-defined, not under the user's control.

True, but D could be smarter and do something similar, while allowing the size to vary so that you can save memory. And if you only assign once, the cost of having a larger buffer in the delegate is smaller than if you do many assignments.

D can be smarter than C++ because it can generate IR for all D files, I think?

May 21, 2021

On Thursday, 20 May 2021 at 12:31:00 UTC, Steven Schveighoffer wrote:

> On 5/19/21 9:02 AM, Steven Schveighoffer wrote:
>
>> Of course, with Walter's chosen fix, only allowing capture of non-scoped variables, all of this is moot. I kind of feel like that's a much simpler (even if less convenient) solution.
>
> After reading a lot of this discussion, I have changed my mind. We should implement the "correct" thing even if it performs poorly. While Walter's solution absolves the compiler of responsibility, it doesn't square with the fact that closures are already hidden allocations, so consistency dictates that we handle inner allocations the same way.
>
> We need one heap block per scope that has captured variables. Expensive, but I don't see a way around it. Hopefully optimizers and scope delegates can alleviate performance issues.
>
> -Steve

Yeah... Honestly, that getting-around-immutable thing seems like the nail in the coffin for the current behavior. Hopefully making it work "correctly" won't be too painful...

The delegate-related thing I really want improved is being able to capture local variables in places like:

int i = 3;
someRange.map!(x => x.thing == i).each!writeln;

...without needing the GC, since we "know" that i doesn't escape. Dunno if that's a pipe dream, though.
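
For comparison, the escape hatch that exists today is a scope delegate parameter, which should let the compiler skip the closure allocation because the delegate cannot escape; a sketch with a hypothetical helper (printMatches is not a Phobos function, unlike the map/each form above, which stores the predicate in the returned range):

import std.stdio : writeln;

// Hypothetical helper: because `pred` is a scope parameter, the lambda below
// that captures `i` should not force a GC closure allocation.
void printMatches(const(int)[] values, scope bool delegate(int) pred)
{
    foreach (v; values)
        if (pred(v))
            writeln(v);
}

void demo()
{
    int i = 3;
    static immutable int[4] data = [1, 3, 3, 7];
    printMatches(data[], x => x == i);   // captures i without heap-allocating a closure
}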

May 21, 2021

On Friday, 21 May 2021 at 00:31:52 UTC, TheGag96 wrote:

> On Thursday, 20 May 2021 at 12:31:00 UTC, Steven Schveighoffer wrote:
>
>> On 5/19/21 9:02 AM, Steven Schveighoffer wrote:
>>
>>> Of course, with Walter's chosen fix, only allowing capture of non-scoped variables, all of this is moot. I kind of feel like that's a much simpler (even if less convenient) solution.
>>
>> After reading a lot of this discussion, I have changed my mind. We should implement the "correct" thing even if it performs poorly. While Walter's solution absolves the compiler of responsibility, it doesn't square with the fact that closures are already hidden allocations, so consistency dictates that we handle inner allocations the same way.
>>
>> We need one heap block per scope that has captured variables. Expensive, but I don't see a way around it. Hopefully optimizers and scope delegates can alleviate performance issues.
>>
>> -Steve
>
> Yeah... Honestly, that getting-around-immutable thing seems like the nail in the coffin for the current behavior. Hopefully making it work "correctly" won't be too painful...
>
> The delegate-related thing I really want improved is being able to capture local variables in places like:
>
> int i = 3;
> someRange.map!(x => x.thing == i).each!writeln;
>
> ...without needing the GC, since we "know" that i doesn't escape. Dunno if that's a pipe dream, though.

This has to be an aim. It's simply stupid that using map the way it's intended to be used results in a GC allocation (there are workarounds, but they miss the point entirely). I know it won't be easy, but quite frankly, if it's not possible, that's a knock on us and our infrastructure; if we can't do big and important things properly, we need to change that too.