January 13, 2022
On Wednesday, 12 January 2022 at 16:17:02 UTC, H. S. Teoh wrote:
> On Wed, Jan 12, 2022 at 03:41:03PM +0000, Adam D Ruppe via Digitalmars-d-announce wrote:
>> On Wednesday, 12 January 2022 at 15:25:37 UTC, H. S. Teoh wrote:
>> > 	However it turns out that unless you are writing a computer
>> > 	game, a high frequency trading system, a web server
>> 
>> Most computer games and web servers use GC too.
> [...]
>
> Depends on what kind of games, I guess. If you're writing a 60fps real-time raytraced 3D FPS running at 2048x1152 resolution, then *perhaps* you might not want a GC killing your framerate every so often.
>
> (But even then, there's always GC.disable and @nogc... so it's not as if you *can't* do it in D. It's more a psychological barrier triggered by the word "GC" than anything else, IMNSHO.)
>
>
> T

Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine. That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window.
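For anyone who wants to reproduce the measurement, a minimal sketch using druntime's `core.memory.GC` and `std.datetime.stopwatch` (numbers will of course vary with heap size and hardware):

```d
import core.memory : GC;
import std.datetime.stopwatch : StopWatch, AutoStart;
import std.stdio : writeln;

void main()
{
    // Create some garbage first so the collection has real work to do.
    int[][] junk;
    foreach (i; 0 .. 100_000)
        junk ~= new int[16];
    junk = null;  // drop the references; the arrays are now garbage

    auto sw = StopWatch(AutoStart.yes);
    GC.collect();
    sw.stop();
    writeln("GC.collect() took ", sw.peek.total!"usecs", " µs");
}
```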

In other words, it's only acceptable if you have natural pauses (loading screens, transitions, etc.) with limited resource consumption between them OR if you can afford to e.g. halve your FPS for a while. The alternative is to collect every frame, which means sacrificing a quarter of runtime. No, thanks.

Thing is, "limited resource consumption" means you're preallocating anyway, at which point one has to question why use the GC in the first place. The majority of garbage created per frame can be trivially allocated from an arena and "deallocated" in one `mov` instruction (or a few of them). And things that can't be allocated in an arena, i.e. things with destructors - you *can't* reliably delegate to the GC anyway - which means your persistent state is more likely to be manually managed.

TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.
January 13, 2022
On Thursday, 13 January 2022 at 03:10:14 UTC, zjh wrote:
> I'm a GC phobia.

January 13, 2022
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov wrote:
> Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine. That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window.
>

But the time it takes depends on the number of threads it has to stop and the amount of live memory in your heap. If it took 4 ms regardless of these factors it wouldn't be bad, but that's not how D's GC works... And the language design of D isn't all that friendly to better GC implementations. That is the real problem here; that is why it keeps coming up.
January 13, 2022
On Wednesday, 12 January 2022 at 02:37:47 UTC, Walter Bright wrote:
> "Why I like D" is on the front page of HackerNews at the moment at number 11.
>
> https://news.ycombinator.com/news

I enjoyed reading the article.
January 13, 2022
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov wrote:
> On Wednesday, 12 January 2022 at 16:17:02 UTC, H. S. Teoh wrote:
>> [...]
>
> Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine. That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window.
>
> In other words, it's only acceptable if you have natural pauses (loading screens, transitions, etc.) with limited resource consumption between them OR if you can afford to e.g. halve your FPS for a while. The alternative is to collect every frame, which means sacrificing a quarter of runtime. No, thanks.
>
> Thing is, "limited resource consumption" means you're preallocating anyway, at which point one has to question why use the GC in the first place. The majority of garbage created per frame can be trivially allocated from an arena and "deallocated" in one `mov` instruction (or a few of them). And things that can't be allocated in an arena, i.e. things with destructors - you *can't* reliably delegate to the GC anyway - which means your persistent state is more likely to be manually managed.
>
> TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.

You collect when it matters less, like when loading a level. Some loads take so long that people have even written mini-games to play during loading screens; no one will notice a couple of ms more.

Hardly any different from having an arena throw away the whole set of frame data during loading.

Unless we start talking about DirectStorage and similar.

January 13, 2022
On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov wrote:
> TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.

"Time it" isn't really useful for someone starting on a project, as it is too late by the time you have something worth measuring. The reason for this is that it gets worse and worse as your application grows. Then you end up either giving up on the project or going through a very expensive and bug-prone rewrite. There is no trivial upgrade path for code relying on the D GC.

And quite frankly, 4 ms is not a realistic worst-case scenario for the D GC. You have to wait for all threads to stop under the worst possible OS/old-budget-hardware/program-state configuration.

It is better to start with a solution that is known to scale well if you are writing highly interactive applications. For D that could be ARC.

January 13, 2022
On Thursday, 13 January 2022 at 11:57:41 UTC, Araq wrote:
> But the time it takes depends on the number of threads it has to stop and the amount of live memory in your heap. If it took 4 ms regardless of these factors it wouldn't be bad, but that's not how D's GC works...

Sadly, fast scanning is still bad unless you are on an architecture where you can scan without touching the caches. If you burst through gigabytes of memory, you have a negative effect on real-time threads that expect their lookup tables to be in cache. That means you need more headroom in real-time threads, so you sacrifice the quality of the work they do by saturating the memory bus.

It would be better to have a concurrent collector that slowly crawls the heap, or to just take the predictable overhead of ARC, which is distributed fairly evenly in time (unless you do something silly).

January 13, 2022
On Thursday, 13 January 2022 at 15:44:33 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 13 January 2022 at 10:21:12 UTC, Stanislav Blinov wrote:
>> TLDR: it's pointless to lament on irrelevant trivia. Time it! Any counter-arguments from either side are pointless without that.
>
> "Time it" isn't really useful for someone starting on a project, as it is too late by the time you have something worth measuring. The reason for this is that it gets worse and worse as your application grows. Then you end up either giving up on the project or going through a very expensive and bug-prone rewrite. There is no trivial upgrade path for code relying on the D GC.
>
> And quite frankly, 4 ms is not a realistic worst-case scenario for the D GC. You have to wait for all threads to stop under the worst possible OS/old-budget-hardware/program-state configuration.
>
> It is better to start with a solution that is known to scale well if you are writing highly interactive applications. For D that could be ARC.

Just leaving this here from a little well known company.

https://developer.arm.com/solutions/internet-of-things/languages-and-libraries/go

ARC, tracing GC, whatever, but make up your mind. Otherwise, other languages that know what they want to be will get the spotlight with such vendors.

January 13, 2022
On Thursday, 13 January 2022 at 16:33:59 UTC, Paulo Pinto wrote:
> ARC, tracing GC, whatever, but make up your mind. Otherwise, other languages that know what they want to be will get the spotlight with such vendors.

Go has a concurrent collector, so I would assume it is reasonably well-behaved with respect to other system components (e.g. it does not sporadically saturate the data bus for a long time). Go's runtime also appears to be fairly limited, so it does not surprise me that people want to use it on microcontrollers.

We had some people in these forums who were interested in using D for embedded, but they seemed to give up, as modifying the runtime was more work than it was worth to them. That is at least my interpretation of what they stated when they left.

So well, D has not made a point of capturing embedded programmers in the past, and there are no plans for a strategic change in that regard AFAIK.

January 13, 2022
On Thu, Jan 13, 2022 at 10:21:12AM +0000, Stanislav Blinov via Digitalmars-d-announce wrote: [...]
> Oh there is a psychological barrier for sure. On both sides of the, uh, "argument". I've said this before but I can repeat it again: time it. 4 milliseconds. That's how long a single GC.collect() takes on my machine.  That's a quarter of a frame. And that's a dry run. Doesn't matter if you can GC.disable or not, eventually you'll have to collect, so you're paying that cost (more, actually, since that's not going to be a dry run). If you can afford that - you can befriend the GC. If not - GC goes out the window.

?? That was exactly my point. If you can't afford it, you use @nogc. That's what it's there for!

And no, if you don't GC-allocate, you won't eventually have to collect 'cos there'd be nothing to collect. Nobody says you HAVE to use the GC. You use it when it fits your case; when it doesn't, you GC.disable or write @nogc, and manage your own allocations, e.g., with an arena allocator, etc.
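That split can even be enforced by the compiler; a rough sketch of the shape (names like `runFrame` and `loadAssets` are made up for illustration):

```d
import core.memory : GC;

// Setup code is free to use the GC heap.
string[] loadAssets()
{
    return ["player.png", "level1.map"];  // GC-allocated array
}

// @nogc: the compiler rejects any GC allocation inside this function.
@nogc void runFrame(int frameNo)
{
    // simulation + rendering, using only preallocated buffers
}

void main()
{
    auto assets = loadAssets();
    GC.collect();           // sweep setup garbage once, before the loop
    GC.disable();           // no implicit collections from here on
    scope(exit) GC.enable();

    foreach (frame; 0 .. 3)
        runFrame(frame);
}
```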

Outside of your game loop you can still use GC allocations freely. You just collect before entering the main loop, then GC.disable or just enter @nogc code. You can even use GC memory to pre-allocate your arena allocator buffers, then run your own allocator on top of that. E.g., allocate a 500MB buffer (or however big you need it to be) before the main loop, then inside the main loop a per-frame arena allocator hands out pointers into this buffer. At the end of the frame, reset the pointer. That's a single-instruction collection.  After you exit your main loop, call GC.collect to collect the buffer itself.
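To make that concrete, here is a hand-rolled sketch of such a per-frame arena (illustrative code, not a druntime API; `Region` in `std.experimental.allocator` is the closest library building block):

```d
struct FrameArena
{
    ubyte[] buf;    // backing storage, e.g. GC-allocated once before the main loop
    size_t offset;

    // Bump-pointer allocation: O(1), no GC involvement.
    void* alloc(size_t bytes, size_t alignment = 16) @nogc nothrow
    {
        size_t aligned = (offset + alignment - 1) & ~(alignment - 1);
        if (aligned + bytes > buf.length)
            return null;                 // arena exhausted
        void* p = buf.ptr + aligned;
        offset = aligned + bytes;
        return p;
    }

    // The "single-instruction" collection: frees everything from this frame at once.
    void reset() @nogc nothrow { offset = 0; }
}

unittest
{
    auto arena = FrameArena(new ubyte[1024 * 1024]); // GC-allocated up front

    // Inside a frame: scratch allocations with no GC involvement.
    auto verts = cast(float*) arena.alloc(256 * float.sizeof);
    assert(verts !is null && arena.offset >= 256 * float.sizeof);

    arena.reset();                       // end of frame
    assert(arena.offset == 0);
}
```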

This isn't Java where every allocation must come from the GC. D lets you work with raw pointers for a reason.


> In other words, it's only acceptable if you have natural pauses (loading screens, transitions, etc.) with limited resource consumption between them OR if you can afford to e.g. halve your FPS for a while. The alternative is to collect every frame, which means sacrificing a quarter of runtime. No, thanks.

Nobody says you HAVE to use the GC in your main loop.


> Thing is, "limited resource consumption" means you're preallocating anyway, at which point one has to question why use the GC in the first place.

You don't have to use the GC. You can malloc your preallocated buffers. Or GC-allocate them but call GC.disable before entering your main loop.


> The majority of garbage created per frame can be trivially
> allocated from an arena and "deallocated" in one `mov` instruction (or
> a few of them). And things that can't be allocated in an arena, i.e.
> things with destructors - you *can't* reliably delegate to the GC
> anyway - which means your persistent state is more likely to be
> manually managed.
[...]

Of course. So don't use the GC for those things. That's all. The GC is still useful for things outside the main loop, e.g., setup code, loading resources in between levels, etc. The good thing about D is that you *can* make this choice. It's not like Java where you're forced to use the GC whether you like it or not. There's no reason to clamor to *remove* the GC from D, like some appear to be arguing for.


T

-- 
The only difference between male factor and malefactor is just a little emptiness inside.