August 09, 2022
On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
> dmd not freeing by default is/was a bad idea. The memory usage

Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
Why isn't the natural conclusion that it looks like it worked out; just correct?
August 09, 2022

On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>> dmd not freeing by default is/was a bad idea. The memory usage
>
> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
> Why isn't the natural conclusion that it looks like it worked out; just correct?

Exactly

-lowmem
    Enable the garbage collector for the compiler, reducing the compiler memory requirements but increasing compile times.

Having control over your memory allocation strategy is what's important

Hence forcing one on the users is a bad idea when you need that little performance boost that ends up being your killer feature (fast compile speed)

August 09, 2022
On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>> dmd not freeing by default is/was a bad idea. The memory usage
>
> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
> Why isn't the natural conclusion that it looks like it worked out; just correct?

Your "natural" conclusion is based off a biased sample
August 10, 2022

On Tuesday, 9 August 2022 at 23:41:04 UTC, ryuukk_ wrote:
> On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
>> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>>> dmd not freeing by default is/was a bad idea. The memory usage
>>
>> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
>> Why isn't the natural conclusion that it looks like it worked out; just correct?
>
> Exactly
>
> -lowmem
>     Enable the garbage collector for the compiler, reducing the compiler memory requirements but increasing compile times.
>
> Having control over your memory allocation strategy is what's important
>
> Hence forcing one on the users is a bad idea when you need that little performance boost that ends up being your killer feature (fast compile speed)

As if Go, Java, Common Lisp, Eiffel, C#, F#, OCaml, ... were molasses-slow because their compilers use a GC.

August 10, 2022

On Monday, 8 August 2022 at 15:25:40 UTC, Paul Backus wrote:
> This is possible using the GC API in core.memory:
>
> {
>     import core.memory: GC;
>
>     GC.disable();
>     scope(exit) GC.enable();
>
>     foreach (...)
>         // hot code goes here
> }
>
> void load_assets()
> {
>     import core.memory: GC;
>
>     // allocate, load stuff, etc..
>     GC.collect();
> }

I'll join this week's coffee corner talk about GC.

At ASML, Julia is now used on (part of) the machine. The machine is a time-critical production system (you all want more chips, right? ;), and GC was apparently one of the main concerns. They solved it using the manual GC.disable / GC.collect approach.
https://pretalx.com/juliacon-2022/talk/GUQBSE/

I work on hardware at ASML and am not involved with software development for the scanner, so I do not know any details, but I found it quite interesting to see that Julia is used in this way.

-Johan

August 10, 2022
On Monday, 8 August 2022 at 22:32:23 UTC, Ethan wrote:
> So tl;dr is that there's tactical usage of non-GC _AND_ GC memory in Doom.

Calling this garbage collection is watering down the term to near uselessness. This is just manual memory management. I haven't read the code, but from your description it sounds exactly like a bump allocator plus a tweaked general-purpose allocator that can reuse allocated memory regions when heap fragmentation becomes an issue.

These days, these techniques should be unremarkable. I have no doubt that this was innovative in '93. But 30 years later, creating a bump allocator for memory that doesn't change much after init and has a known lifetime of use should be common practice for programmers, especially when most PCs have > 8 GB of RAM.
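For anyone unfamiliar with the term: a bump allocator just advances an offset into a preallocated buffer and frees everything at once by resetting that offset. A minimal D sketch of the idea (illustrative only, not the Doom code; all names and sizes are made up):

import core.stdc.stdlib : free, malloc;

// Minimal bump ("arena") allocator: allocation just advances an offset into a
// preallocated buffer; reset() frees everything at once by rewinding the offset.
struct BumpAllocator
{
    ubyte* buffer;
    size_t capacity;
    size_t used;

    static BumpAllocator withCapacity(size_t bytes)
    {
        // error handling for a failed malloc omitted in this sketch
        return BumpAllocator(cast(ubyte*) malloc(bytes), bytes, 0);
    }

    void* allocate(size_t bytes, size_t alignment = 16)
    {
        // round the current offset up to the requested alignment
        const aligned = (used + alignment - 1) & ~(alignment - 1);
        if (aligned + bytes > capacity)
            return null;            // arena exhausted; caller decides what to do
        used = aligned + bytes;
        return buffer + aligned;
    }

    void reset() { used = 0; }      // e.g. once per render frame, or never
    void release() { free(buffer); buffer = null; capacity = used = 0; }
}

// Hypothetical per-frame usage: everything allocated here dies at reset().
void renderFrame(ref BumpAllocator frameArena)
{
    scope(exit) frameArena.reset();
    auto scratch = cast(int*) frameArena.allocate(1024 * int.sizeof);
    // ... fill and use scratch for this frame only ...
}

The point is the one made above: per-frame or init-time scratch data never touches a general-purpose heap, so there is nothing to fragment and nothing to free individually.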
August 10, 2022
On Wednesday, 10 August 2022 at 17:09:50 UTC, Jack Stouffer wrote:
> I haven't read the code

Well there's the problem right there.

You can compare the code previously linked to an actual bump allocator I wrote for my own branch of the Doom code (that resets every render frame) at https://github.com/GooberMan/rum-and-raisin-doom/blob/master/src/doom/r_main.h#L182-L218
August 10, 2022
On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
> On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
>> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>>> dmd not freeing by default is/was a bad idea. The memory usage
>>
>> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
>> Why isn't the natural conclusion that it looks like it worked out; just correct?
>
> Your "natural" conclusion is based off a biased sample

That's a fairly self-serving take.

If it was as bad as you're implying, even a good team would have fucked it up; no?
August 10, 2022
On Wednesday, 10 August 2022 at 19:15:46 UTC, monkyyy wrote:
> On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
>> On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
>>> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>>>> dmd not freeing by default is/was a bad idea. The memory usage
>>>
>>> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
>>> Why isn't the natural conclusion that it looks like it worked out; just correct?
>>
>> Your "natural" conclusion is based off a biased sample
>
> That's a fairly self-serving take.
>
> If it was as bad as you're implying, even a good team would have fucked it up; no?

It's easy to get wrong, but I think you can avoid most of the "bloat" (most of this isn't so much bloat as in wasted space as wasted cycles, which I think requires a slightly more nuanced discussion) by having cycle counting and hard upper bounds on runtimes in CI.
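As an illustration only (this is not dmd's actual CI; the workload, names, and budget below are made up), such a gate can be as simple as timing a fixed workload and failing the build when it blows a hard budget; real cycle counting would read hardware performance counters rather than wall-clock time:

import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : stderr;

// Hypothetical stand-in for "compile a known test case"; a real gate would
// invoke the compiler on a fixed corpus.
void benchmarkedWorkload()
{
    long sum = 0;
    foreach (i; 0 .. 10_000_000)
        sum += i;
}

int main()
{
    enum budgetMsecs = 500;             // hard upper bound enforced by CI

    auto sw = StopWatch(AutoStart.yes);
    benchmarkedWorkload();
    sw.stop();

    const elapsedMsecs = sw.peek.total!"msecs";
    if (elapsedMsecs > budgetMsecs)
    {
        stderr.writefln("runtime regression: %s ms exceeds the %s ms budget",
                        elapsedMsecs, budgetMsecs);
        return 1;                       // non-zero exit fails the CI job
    }
    return 0;
}

The same kind of gate works for peak memory (the original complaint in this thread) by checking the process's peak RSS instead of a timer.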
August 10, 2022
On Wednesday, 10 August 2022 at 20:23:17 UTC, max haughton wrote:
> On Wednesday, 10 August 2022 at 19:15:46 UTC, monkyyy wrote:
>> On Tuesday, 9 August 2022 at 23:46:57 UTC, max haughton wrote:
>>> On Tuesday, 9 August 2022 at 23:12:33 UTC, monkyyy wrote:
>>>> On Tuesday, 9 August 2022 at 16:32:09 UTC, max haughton wrote:
>>>>> dmd not freeing by default is/was a bad idea. The memory usage
>>>>
>>>> Hmmmm; isn't the D compiler pretty quick and fairly good about not crashing despite having a small team?
>>>> Why isn't the natural conclusion that it looks like it worked out; just correct?
>>>
>>> Your "natural" conclusion is based off a biased sample
>>
>> That's a fairly self-serving take.
>>
>> If it was as bad as you're implying, even a good team would have fucked it up; no?
>
> It's easy to get wrong, but I think you can avoid most of the "bloat" (most of this isn't so much bloat as in wasted space as wasted cycles, which I think requires a slightly more nuanced discussion) by having cycle counting and hard upper bounds on runtimes in CI.

If I look at some of the old std code, the only explanation for why some of it is so terrible is that it was written before there were good D programmers.
Presumably parts of dmd were also written before there were good programmers; and if they still managed to make a fast compiler (when most compilers are terrible), I would think maybe some of its unique design decisions deserve some credence.