January 14, 2021
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
> I've always heard programmers complain about the Garbage Collector (GC). But I never understood why they complain. What's bad about GC?

Languages where GC usage is unavoidable (JavaScript, Java) have created a lot of situations where there is a GC pause in a real-time program and the cause is dynamically allocated memory. So a lot of people formed their opinion of GC using a setup where they couldn't really avoid it.

For example, in the JavaScript of 10 years ago, just using a closure or an array literal could make your web game stutter.
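
The same trap is easy to reproduce in D. A minimal sketch (the frame functions are made up for illustration):

    // Allocates a fresh GC-managed array on every call; the garbage
    // piles up until the collector pauses the program to reclaim it.
    void frame()
    {
        int[] scratch = [1, 2, 3]; // heap allocation each frame
        // ... use scratch ...
    }

    // Reusing a preallocated buffer avoids the allocation entirely.
    int[3] scratchBuf; // fixed-size, no GC involvement
    void frameNoAlloc()
    {
        scratchBuf[] = 0; // reset in place
        // ... use scratchBuf ...
    }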
January 14, 2021
On Thursday, 14 January 2021 at 10:05:51 UTC, Guillaume Piolat wrote:
> On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
>> I've always heard programmers complain about the Garbage Collector (GC). But I never understood why they complain. What's bad about GC?
>
> Languages where GC usage is unavoidable (JavaScript, Java) have created a lot of situations where there is a GC pause in a real-time program and the cause is dynamically allocated memory. So a lot of people formed their opinion of GC using a setup where they couldn't really avoid it.

Indeed, but I don't think we should underestimate the perceived value of having a minimal runtime. Like, if D had a better GC solution that involved an even heavier runtime, it would still be a big issue for people interested in low-level systems programming.

Transparency is an issue. System-level programming means you want a clear picture of what is going on in the system at all levels, all the way down to the hardware. If you cannot understand how the runtime works, you also cannot fix issues... so a simple runtime is more valuable than a feature-rich, complex runtime.

That is kinda what defines system-level programming: you know exactly what every subsystem is doing so that you can anticipate performance/resource issues. And that is the opposite of high-level programming, where you make no assumptions about the underlying machinery and only care about the abstract descriptions of language semantics.

January 14, 2021
On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
> I've always heard programmers complain about the Garbage Collector (GC). But I never understood why they complain. What's bad about GC?

Semi-serious answer:

In the domain of hobbyism and small companies, programmers who work with statically typed languages all believe they are superheroes in the domain of memory management. When they see "GC" they think they are being treated as second-grade students ^^

It's basically snobbery.
January 14, 2021
On Thursday, 14 January 2021 at 10:28:13 UTC, Basile B. wrote:
> Semi-serious answer:
>
> In the domain of hobbyism and small companies, programmers who work with statically typed languages all believe they are superheroes in the domain of memory management. When they see "GC" they think they are being treated as second-grade students ^^
>
> It's basically snobbery.

I know your response is *tongue in cheek*, but I actually find it easier to use C++11-style memory management across the board than mixing two models.

C-style memory management, on the other hand, is pretty horrible, and you'll end up spending much of your time debugging "unexplainable" crashes. I don't experience that much in C++ when staying within its standard regime.

When you want more performance than standard C++ memory management, things can go wrong, e.g. with manual emplacement strategies and forgetting to call destructors, but that is the same in D. And frankly, you seldom need that; maybe in 2-3 critical places in your program (e.g. graphics/audio).
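
A minimal D sketch of that failure mode, with a made-up Resource type (emplace lives in core.lifetime):

    import core.lifetime : emplace;
    import core.stdc.stdlib : malloc, free;

    struct Resource
    {
        int handle;
        ~this() { /* release the handle */ }
    }

    void demo()
    {
        // Manual placement: grab raw memory, construct in place.
        auto mem = cast(Resource*) malloc(Resource.sizeof);
        auto res = emplace(mem, 42);

        // Forgetting either line below is exactly the kind of bug
        // described above: no GC and no RAII runs the dtor for us.
        destroy(*res);
        free(mem);
    }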

January 14, 2021
On Wednesday, 13 January 2021 at 20:06:51 UTC, H. S. Teoh wrote:
> On Wed, Jan 13, 2021 at 06:58:56PM +0000, Marcone via Digitalmars-d-learn wrote:
>> I've always heard programmers complain about the Garbage Collector (GC). But I never understood why they complain. What's bad about GC?
>
> It's not merely a technical issue, but also a historical and sociological one.  The perception of many people, esp. those with C/C++ background, is heavily colored by the GC shipped with early versions of Java, which was stop-the-world, inefficient, and associated with random GUI freezes and jerky animations.  This initial bad impression continues to persist today esp. among the C/C++ crowd, despite GC technology having made great advances since those early Java days.
>
> Aside from skewed impressions, there's still these potential concerns with the GC:
>
> (1) Stop-the-world GC pauses (no longer a problem with modern
> generational collectors, but still applies to D's GC);
>
> (2) Non-deterministic destruction of objects (D's dtors are not even
> guaranteed to run if it's a GC'd object) -- you cannot predict when an
> object will be collected;
>
> (3) GC generally needs more memory than the equivalent manual memory
> management system.
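
To make point (2) concrete in D terms, a minimal sketch:

    class File
    {
        ~this()
        {
            // Runs only if/when the GC finalizes this object --
            // possibly long after scope exit, possibly never.
        }
    }

    void use()
    {
        auto f = new File;
        // f's destructor is NOT guaranteed to run here, or at all;
        // only an explicit destroy(f) makes destruction deterministic.
    }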

I think you also have to consider that the GC you get with D is not state of the art, and if the opinions expressed on the newsgroup are accurate, it's not likely to get any better. So while you can find examples of high-performance applications, AAA games, or whatever that use a GC, I doubt any of them would be feasible with D's GC. And given the design choices D has made as a language, a high-performance GC is not really possible.

So the GC is actually a poor fit for D as a language. It's like a convertible with a roof that is only safe up to 50 mph; go over that and it's likely to be torn off. So if you want to drive fast, you have to put the roof down.

January 14, 2021
As other people have already mentioned, garbage collection incurs some amount of non-determinism, and people working in low-level areas prefer to handle things deterministically because they think non-deterministic handling of memory makes code slow.

For example, rendering in a game gets paused by the GC, prolonging the whole rendering process.
The conclusion drawn is that the code runs slowly, but it doesn't. In fact, a tracing GC tries to do the opposite: in order to make the whole process faster, it releases memory in batches, often yielding faster execution in the long run, but not in the short run, where it is mandatory not to introduce any kind of pause.
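
In D you can at least choose when to take that short-run pause. A sketch, where running and renderFrame are placeholder stubs:

    import core.memory : GC;

    bool running = true;    // placeholder loop condition
    void renderFrame() { }  // placeholder frame work

    void gameLoop()
    {
        GC.disable(); // suppress collections (the runtime may still
                      // collect if memory is truly exhausted)
        while (running)
            renderFrame();
        GC.enable();
        GC.collect(); // pay the pause at a moment we choose
    }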

So, in the end, it isn't a question of performance but rather one of determinism vs. non-determinism.

Non-determinism has the potential to let code run faster.
For instance, no one really uses the proposed threading model in Rust with owned values, where one thread can work mutably on an owned value only while other threads wanting to mutate it wait. This creates a deterministic order of thread execution, where each thread can only work after the previous one releases its work.
This model is often praised in Rust, but its potential seems to be only theoretical.
As a result, you most often see atomic reference counting (Arc) cluttering Rust codebases.

The other point is the increased memory footprint: you have a runtime memory manager taking responsibility for (de)allocation, which is impossible in some limited-memory systems.

However, why provide just a one-size-fits-all solution when there are plenty of GC algorithms for different kinds of problem domains?
Why not offer more than one, as is the case in Java?
The advantage is that you can pick the GC algorithm after the program has been compiled, so you can reuse the same program with different GC algorithms.
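
HotSpot does exactly that: the same compiled program can be launched under different collectors with real JVM flags (MyApp is just a placeholder name):

    java -XX:+UseSerialGC MyApp   # simple stop-the-world collector
    java -XX:+UseG1GC MyApp       # the default collector since JDK 9
    java -XX:+UseZGC MyApp        # low-pause collector (JDK 15+)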

January 14, 2021
On Thursday, 14 January 2021 at 11:11:58 UTC, Ola Fosheim Grøstad wrote:

> I know your response is *tongue in cheek*, but I actually find it easier to use C++11-style memory management across the board than mixing two models.

But this is already the case for C++ and Rust. Remembering my days developing in C++, there were a huge number of memory deallocation side effects because OpenCV's memory management differs from Qt's memory management.
Suffice it to say, it was hell.

Personally, I find it better to encapsulate manual memory management and not let it leak outside.
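
A minimal D sketch of that encapsulation, with a made-up Buffer type: the raw pointer never escapes the struct, and the destructor frees it deterministically.

    struct Buffer
    {
        import core.stdc.stdlib : malloc, free;

        private ubyte* data;
        private size_t length;

        this(size_t n)
        {
            data = cast(ubyte*) malloc(n);
            length = n;
        }

        ~this() { free(data); }

        @disable this(this); // forbid copies, so no double-free
    }
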
January 14, 2021
On Thursday, 14 January 2021 at 13:01:04 UTC, sighoya wrote:
> Why not offer more than one, as is the case in Java?
> The advantage is that you can pick the GC algorithm after the program has been compiled, so you can reuse the same program with different GC algorithms.

Because Java has a well-defined virtual machine with lots of restrictions.

January 14, 2021
On Thursday, 14 January 2021 at 13:08:06 UTC, Ola Fosheim Grøstad wrote:

> Because Java has a well-defined virtual machine with lots of restrictions.

So you're insisting this isn't possible in D?
January 14, 2021
On Thursday, 14 January 2021 at 13:10:14 UTC, sighoya wrote:
> On Thursday, 14 January 2021 at 13:08:06 UTC, Ola Fosheim Grøstad wrote:
>
>> Because Java has a well-defined virtual machine with lots of restrictions.
>
> So you're insisting this isn't possible in D?

It isn't possible in a meaningful way.

In system-level programming languages you have to manually uphold the invariants needed to avoid breaking the GC's collection algorithm.

So if you change to a significantly different collection model, the needed invariants will change.

The possible alternatives are:

1. Use "shared" to prevent GC-allocated memory from entering other threads, and switch to a thread-local GC. Then use ARC for shared (see the sketch after this list).

2. Redefine the language semantics/type system for a different GC model. This will break existing code.
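
A minimal sketch of the thread-local/shared split that option 1 builds on, using a distinction D already makes:

    int[] perThread;          // D globals are thread-local by default,
                              // so a per-thread GC could scan them alone.

    shared int[] crossThread; // "shared" marks data visible to other
                              // threads -- this is what would move to ARC.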