May 19, 2021
On 5/19/2021 4:08 AM, deadalnix wrote:
> The notion of loops, immutability, and closures are colliding with each other. They do not compose, because they don't propose a set of invariants which are independent from each other but instead step on each other.

I don't really know what you mean by invariants in this context. Can you enumerate what invariants you propose for delegates?
May 19, 2021
On 5/19/2021 10:26 AM, deadalnix wrote:
> On Wednesday, 19 May 2021 at 13:02:59 UTC, Steven Schveighoffer wrote:
>> 2. We need one allocation PER loop. If we do this the way normal closures are done (i.e. allocate before the scope is entered), this would be insanely costly for a loop.
> 
> This is costly, but also the only way to ensure other invariants in the language are respected (immutability, no access after destruction, ...).
> 
> This is also consistent with what other languages do.

Languages like D also need to be useful, not just correct. Having a hidden allocation per loop will be completely unexpected for such a simple looking loop for a lot of people. That includes pretty much all of *us*, too.

I doubt users will be happy when they eventually discover this is the reason their D program runs like sludge on Pluto and consumes all the memory in their system. If they discover the reason at all, and don't just dismiss D as unusable.

The workaround, for the users, is to simply move that referenced variable from an inner scope to function scope.
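
Roughly, the shape of that workaround (a minimal sketch; the variable names are illustrative, not from any real program under discussion):

import std.stdio;

void main()
{
    int delegate() dg;

    // Captured variable declared in an inner scope: giving each iteration
    // its own copy is what would require a fresh closure allocation per
    // iteration.
    foreach (i; 0 .. 4)
    {
        int x = i * 10;
        if (i == 2)
            dg = () => x;
    }

    // The workaround: hoist the captured variable to function scope, so the
    // single per-function closure allocation suffices. All captures then
    // share the one variable.
    int y;
    foreach (i; 0 .. 4)
    {
        y = i * 10;
        if (i == 2)
            dg = () => y;
    }

    writeln(dg()); // prints 30: the delegate sees y's final value
}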

It's best to just return a compile error for such cases rather than go to very expensive efforts to make every combination of features work.

It's similar to the decision to give an error for certain operations on vectors if the hardware won't support them, rather than emulating them. Emulation would necessarily be very, very slow. Give users the opportunity to fix hidden and extreme slowdowns in their code rather than hiding those slowdowns from them. A systems programming language ought to behave this way.
May 19, 2021
On Wednesday, 19 May 2021 at 18:43:24 UTC, Steven Schveighoffer wrote:
> You mean opApply? Not necessarily, if the delegate parameter is scope (and it should be).
>

In all cases, if the closure doesn't escape, it can stay on the stack. This is what compiler optimizations do.

May 19, 2021
On Wednesday, 19 May 2021 at 18:43:24 UTC, Steven Schveighoffer wrote:
> I don't think a can of worms is opened, but it's not easy to implement for sure. I'm not suggesting that we follow this path. I'm just thinking about "What's the most performant way we can implement closures used inside loops". If a loop *rarely* allocates a closure (i.e. only one element actually allocates a closure), then allocating defensively seems super-costly.
>

There are going to be a ton of situations where the address of the variable becomes visible in some fashion.
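
For instance, something along these lines (a sketch; registerCallback is a made-up declaration standing in for any code the compiler can't see into):

// Declaration only -- the body lives in some other, opaque compilation unit.
void registerCallback(int delegate() cb);

void scan()
{
    foreach (i; 0 .. 100)
    {
        int x = i;
        // The delegate (and with it the address of x) is handed to code the
        // compiler cannot see, and the parameter isn't marked scope, so the
        // optimizer has to assume it escapes and keep the allocation.
        registerCallback(() => x);
    }
}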

May 19, 2021
On Wednesday, 19 May 2021 at 19:01:59 UTC, Walter Bright wrote:
> Having a hidden allocation per loop will be completely unexpected for such a simple looking loop for a lot of people. That includes pretty much all of *us*, too.

Citation needed.

It is fairly well known that closures and objects are pretty interchangeable, so the allocation should surprise nobody. This is a very common pattern in several languages. And even ones that don't do this have workarounds - a function returning a function that gets called to capture the arguments (this works in D as well btw) - since the allocation is kinda the point of a closure.
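
For example, the returning-a-function form looks something like this in D (a minimal sketch with made-up names):

import std.stdio;

// A function returning a function: each call allocates a closure holding
// its own copy of n, which is exactly the point.
int delegate() capture(int n)
{
    return () { return n; };
}

void main()
{
    int delegate()[] dgs;
    foreach (i; 0 .. 3)
        dgs ~= capture(i);

    foreach (dg; dgs)
        writeln(dg()); // 0, 1, 2 -- each delegate has its own copy
}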

Whereas the current behavior surprises most everybody AND is pretty useless.
May 19, 2021
On 5/19/21 3:05 PM, deadalnix wrote:
> On Wednesday, 19 May 2021 at 18:43:24 UTC, Steven Schveighoffer wrote:
>> You mean opApply? Not necessarily, if the delegate parameter is scope (and it should be).
>>
> 
> In all cases, if the closure doesn't escape, it can stay on the stack. This is what compiler optimizations do.
> 

This results in code that only compiles when optimized.

-Steve
May 19, 2021
On 5/19/21 3:29 PM, Adam D. Ruppe wrote:
> On Wednesday, 19 May 2021 at 19:01:59 UTC, Walter Bright wrote:
>> Having a hidden allocation per loop will be completely unexpected for such a simple looking loop for a lot of people. That includes pretty much all of *us*, too.
> 
> Citation needed.
> 
> It is fairly well known that closures and objects are pretty interchangeable, so the allocation should surprise nobody. This is a very common pattern in several languages. And even ones that don't do this have workarounds - a function returning a function that gets called to capture the arguments (this works in D as well btw) - since the allocation is kinda the point of a closure.


e.g.:

foreach(i; someLargeThing)
{
   if (Clock.currTime.year == 2020) // i.e. never
     dg = { return i; };
}

If we defensively allocate for the delegate, this is going to allocate every iteration of someLargeThing, even though it's very rare that it will need to.

> 
> Whereas the current behavior surprises most everybody AND is pretty useless.

Nobody disagrees. The disagreement here is whether we should make the behavior work as expected at all costs, or invalidate the behavior completely because it's too costly.

-Steve
May 19, 2021
On Wednesday, 19 May 2021 at 19:48:52 UTC, Steven Schveighoffer wrote:
> If we defensively allocate for the delegate, this is going to allocate every iteration of someLargeThing, even though it's very rare that it will need to.

Yeah, it could just allocate when the assignment is made for cases like that, which is what the current

     dg = ((i) => () { return i; })(i);

pattern does.


Which I actually don't mind at all myself.
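
Applied to Steve's example, that would look roughly like this (a sketch; someLargeThing and dg are stand-ins as before), with the closure allocation happening only on the iterations where the branch is actually taken:

import std.datetime.systime : Clock;

int delegate() dg;

void scan(int[] someLargeThing)
{
    foreach (i; someLargeThing)
    {
        if (Clock.currTime.year == 2020) // i.e. never
            dg = ((int n) => () { return n; })(i); // allocates only when taken
    }
}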
May 19, 2021
On Wednesday, 19 May 2021 at 19:48:52 UTC, Steven Schveighoffer wrote:
> e.g.:
>
> foreach(i; someLargeThing)
> {
>    if (Clock.currTime.year == 2020) // i.e. never
>      dg = { return i; };
> }
>
> If we defensively allocate for the delegate, this is going to allocate every iteration of someLargeThing, even though it's very rare that it will need to.

Why not just use a backend intrinsic? The frontend does not have to know what the backend will do. Leave it to the implementation...


May 19, 2021
On Wednesday, 19 May 2021 at 19:48:52 UTC, Steven Schveighoffer wrote:
>
> Nobody disagrees. What the disagreement here is, whether we should make the behavior work as expected at all costs, or invalidate the behavior completely because it's too costly.
>
> -Steve

Make it work as expected. If it turns out to be a performance bottleneck for some applications, they can always work around it, as they obviously have done until now.

Being conscious of performance trade-offs is important in language design, but at the same time, just because someone can create a fork bomb in just a few lines of code doesn't mean we should disallow every type of dynamic memory allocation. For every misuse of a sound language feature, there are plenty more valid usages.