May 19, 2021
On Wednesday, 19 May 2021 at 20:08:02 UTC, Petar Kirov [ZombineDev] wrote:
> Being conscious about performance trade-offs is important in language design, but at the same time, just because someone can create a fork bomb with just several lines of code doesn't mean that we should disallow every type of dynamic memory allocation. For every misuse of a sound language feature, there are plenty more valid usages.

A trade-off is to issue a warning and provide a warning silencer.


May 19, 2021
On Wednesday, 19 May 2021 at 19:01:59 UTC, Walter Bright wrote:
> On 5/19/2021 10:26 AM, deadalnix wrote:
>> On Wednesday, 19 May 2021 at 13:02:59 UTC, Steven Schveighoffer wrote:
>>> 2. We need one allocation PER loop. If we do this the way normal closures are done (i.e. allocate before the scope is entered), this would be insanely costly for a loop.
>> 
>> This is costly, but also the only way to ensure other invariants in the language are respected (immutability, no access after destruction, ...).
>> 
>> This is also consistent with what other languages do.
>
> Languages like D also need to be useful, not just correct. Having a hidden allocation per loop will be completely unexpected for such a simple looking loop for a lot of people. That includes pretty much all of *us*, too.
>

It is not surprising that taking a closure would allocate on the heap if the closure escapes. This is done for functions, and it is done in every single programming language out there but D, and the compiler can remove the allocation if it detects that things don't escape.
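
For reference, here is a minimal D sketch of the loop case being debated (the names are made up; the delegates escape into an array, so their closures must outlive the loop body):

import std.stdio;

void main()
{
    void delegate()[] funs;
    foreach (i; 0 .. 10)
        funs ~= () { writeln(i); };   // the delegate captures `i` and escapes

    // With a single closure per function frame, every delegate shares the
    // same `i`; one allocation per iteration would give each delegate its
    // own copy, so this loop would print 0 through 9.
    foreach (f; funs)
        f();
}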

In fact, even in C++, you'll find yourself with an allocation per loop if you do:

std::vector<std::function<void()>> funs;
for (int i = 0; i < 10; i++) {
    funs.push_back([i]() { printf("%d\n", i); });
}

The instantiation of std::function here will allocate.
May 19, 2021
On Wednesday, 19 May 2021 at 19:48:05 UTC, Steven Schveighoffer wrote:
> On 5/19/21 3:05 PM, deadalnix wrote:
>> On Wednesday, 19 May 2021 at 18:43:24 UTC, Steven Schveighoffer wrote:
>>> You mean opApply? Not necessarily, if the delegate parameter is scope (and it should be).
>>>
>> 
>> In all cases, if the closure doesn't escape, it can stay on heap. This is what compiler optimizations do.
>> 
>
> This results in code that only compiles when optimized.
>
> -Steve

No, that results in code that looks like it's always allocating on the heap, but in fact doesn't if it doesn't need to.
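
For the opApply case mentioned above, here is a rough sketch (the type name is made up) of why a scope delegate parameter means the caller's closure never needs to be heap-allocated:

struct IntList
{
    int[] data;

    // `scope` promises that `dg` does not escape opApply, so the foreach
    // body below can be passed in without a GC-allocated closure.
    int opApply(scope int delegate(ref int) dg)
    {
        foreach (ref x; data)
            if (auto stop = dg(x))
                return stop;
        return 0;
    }
}

void main()
{
    int sum;
    foreach (x; IntList([1, 2, 3]))  // the body captures `sum` but doesn't escape
        sum += x;
}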
May 19, 2021
On Wednesday, 19 May 2021 at 20:14:18 UTC, Ola Fosheim Grostad wrote:
> On Wednesday, 19 May 2021 at 20:08:02 UTC, Petar Kirov [ZombineDev] wrote:
>> Being conscious about performance trade-offs is important in language design, but at the same time, just because someone can create a fork bomb with just several lines of code doesn't mean that we should disallow every type of dynamic memory allocation. For every misuse of a sound language feature, there are plenty more valid usages.
>
> A trade-off is to issue a warning and provide a warning silencer.

I see no point in having closure allocations cause compiler warnings. That's what profilers are for. Every application has different characteristics. Just because a newbie can write code that ends up generating a ton of GC garbage doesn't mean that closure allocations would even register on the performance radar of many applications.

May 19, 2021
On Wednesday, 19 May 2021 at 20:27:59 UTC, Petar Kirov [ZombineDev] wrote:
> On Wednesday, 19 May 2021 at 20:14:18 UTC, Ola Fosheim Grostad wrote:
>> On Wednesday, 19 May 2021 at 20:08:02 UTC, Petar Kirov [ZombineDev] wrote:
>>> Being conscious about performance trade-offs is important in language design, but at the same time, just because someone can create a fork bomb with just several lines of code doesn't mean that we should disallow every type of dynamic memory allocation. For every misuse of a sound language feature, there are plenty more valid usages.
>>
>> A trade-off is to issue a warning and provide a warning silencer.
>
> I see no point in having closure allocations cause compiler warnings. That's what profilers are for. Every application has different characteristics. Just because a newbie can write code that ends up generating a ton of GC garbage doesn't mean that closure allocations would even register on the performance radar of many applications.

That said, there's the -vgc compiler switch, which prints during compilation all parts of the program that may cause a GC allocation. My point is that GC allocations shouldn't cause errors/warnings outside of @nogc code, as we have plenty of tools to diagnose performance bugs.
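
For example (the file and function names are made up, and the exact diagnostic text depends on the compiler version), compiling the following with dmd -vgc should report the closure allocation:

// counter.d -- build with: dmd -vgc counter.d
int delegate() makeCounter()
{
    int n;
    // `n` is captured by a delegate that escapes the function, so this
    // return is a GC allocation site that -vgc points out at compile time.
    return () => ++n;
}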

May 19, 2021
On 5/19/21 4:20 PM, deadalnix wrote:
> On Wednesday, 19 May 2021 at 19:48:05 UTC, Steven Schveighoffer wrote:
>> On 5/19/21 3:05 PM, deadalnix wrote:
>>> On Wednesday, 19 May 2021 at 18:43:24 UTC, Steven Schveighoffer wrote:
>>>> You mean opApply? Not necessarily, if the delegate parameter is scope (and it should be).
>>>>
>>>
>>> In all cases, if the closure doesn't escape, it can stay on heap. This is what compiler optimizations do.
>>>
>>
>> This results in code that only compiles when optimized.
>>
> 
> No that result in code that looks like it's always allocating on the heap, but in fact doesn't if it doesn't need to.

Sorry, I misread that; it looked like you were saying that in all cases it could stay on the stack (you did mean to write stack, right?), but I missed the qualifier "if the closure doesn't escape".

-Steve
May 19, 2021
On Wednesday, 19 May 2021 at 19:01:59 UTC, Walter Bright wrote:
> Languages like D also need to be useful, not just correct. Having a hidden allocation per loop will be completely unexpected for such a simple looking loop for a lot of people. That includes pretty much all of *us*, too.

If closures causing "hidden" allocations is problematic from a language-design perspective, then it's problematic whether it occurs inside a loop or not. We should either (a) deprecate and remove GC-allocated closures entirely, or (b) make them work correctly in all cases.

> It's best to just return a compile error for such cases rather than go to very expensive efforts to make every combination of features work.

This is the worst of both worlds: we still pay the price of having "hidden" allocations in our code, but we do not even get the benefit of having properly-implemented closures in return.
May 19, 2021
On Wednesday, 19 May 2021 at 20:27:59 UTC, Petar Kirov [ZombineDev] wrote:
> I see no point in having closure allocations cause compiler warnings. That's what profilers are for. Every application has different characteristics. Just because a newbie can write code

Ok, but if the alternative is an error...

May 19, 2021
On Wednesday, 19 May 2021 at 20:19:22 UTC, deadalnix wrote:
> In fact, even in C++, you'll find yourself with an allocation per loop if you do:
>
> std::vector<std::function<void()>> funs;
> for (int i = 0; i < 10; i++) {
>     funs.push_back([i]() { printf("%d\n", i); });
> }
>
> The instantiation of std::function here will allocate.

I think it is implementation-defined how large the internal buffer in std::function is? So it will allocate if the capture is too large to fit? But yeah, it is ugly, I never use it. Usually one can avoid it.


May 19, 2021
On Wednesday, 19 May 2021 at 21:56:19 UTC, Ola Fosheim Grostad wrote:
> On Wednesday, 19 May 2021 at 20:19:22 UTC, deadalnix wrote:
>> In fact, even in C++, you'll find yourself with an allocation per loop if you do:
>>
>> std::vector<std::function<void()>> funs;
>> for (int i = 0; i < 10; i++) {
>>     funs.push_back([i]() { printf("%d\n", i); });
>> }
>>
>> The instantiation of std::function here will allocate.
>
> I think it is implementation-defined how large the internal buffer in std::function is? So it will allocate if the capture is too large to fit? But yeah, it is ugly, I never use it. Usually one can avoid it.

The allocation is as large as it needs to be, because you can capture arbitrarily large objects.

If the capture is small enough, some implementations of std::function can apply the small object optimization and store it in place. That is only guaranteed for raw function pointers.