January 02, 2013
On Wednesday, 2 January 2013 at 11:41:33 UTC, DythroposTheImposter wrote:
>  I'm interested in how the new LuaJIT GC ends up performing. But overall I can't say I have much hope for GC right now.
>
>  GC/D = Generally Faster allocation. Has a cost associated with every living object.
>

True. However, GC + immutability allows new idioms that are simply impossible in a non-GC world, and that are really efficient at reducing copies and allocations (and are used in many high-performance D libs).

The comparison you present here fails to take this into account.
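
A small D sketch of the kind of idiom meant here (the function is made up for illustration): because the data is immutable and GC-managed, substrings can be returned as slices of the original buffer instead of copies, and the GC keeps the storage alive as long as any slice still refers to it.

// Immutable, GC-managed data can be shared by slicing instead of copying.
string firstLine(string text) pure
{
    foreach (i, c; text)
        if (c == '\n')
            return text[0 .. i];   // a slice: no allocation, no copy
    return text;
}

void main()
{
    string doc = "header\nbody\nfooter";
    string head = firstLine(doc); // shares memory with doc
    assert(head == "header");
}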

>  C++ = Generally Slower allocation, but while it is alive there is no cost.
>

Reference counting has a cost. Plus, as the heap grows, the number of objects that die together tends to grow as well, which causes pauses.

Latency-constrained programs, like video games, tend to avoid allocation altogether because of this. That obviously works with both GC and reference counting.
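
A minimal sketch of that pattern in D, assuming a simple per-frame update loop (the names are invented for illustration): all storage is allocated once up front and only reused in the hot loop, so neither the GC nor malloc is exercised while latency matters.

// Preallocate once, reuse every frame: no allocation in the hot path.
struct FrameScratch
{
    float[] buffer;

    this(size_t capacity)
    {
        buffer = new float[capacity]; // one GC allocation, up front
        buffer[] = 0.0f;
    }

    void update(float dt)
    {
        foreach (ref v; buffer)
            v += dt;                  // works on existing storage only
    }
}

void main()
{
    auto scratch = FrameScratch(4096);
    foreach (frame; 0 .. 600)
        scratch.update(1.0f / 60.0f); // 600 frames, zero allocations
}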

>  So as the heap grows, the GC language falls behind.
>

It is more subtle than that. Reconsider what I wrote above, for instance that GC + immutability can be used to reduce allocations (and thus the live heap size). You also have to consider that a GC can be concurrent, whereas reference counting usually cannot (or it loses some of its benefits compared to a GC).

>  This seems to be the case in every language I've looked at that uses a GC.

Most GC languages lack proper memory management, and some have design choices that are a nightmare for the GC (Java has no value types, for instance). You are comparing here far more than simply GC vs. other memory management.

Considering the pros and cons of each, a hybrid approach seems like the most reasonable thing to do in any high-performance program. That is possible in D because of GC.free.
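
As a concrete sketch of that hybrid approach (the sizes and names are chosen arbitrarily for illustration): core.memory.GC.free lets you release a GC allocation eagerly once you know nothing else references it, instead of waiting for a collection.

import core.memory : GC;

void process()
{
    // Large, short-lived scratch buffer allocated from the GC heap.
    ubyte[] scratch = new ubyte[16 * 1024 * 1024];

    // ... use scratch ...

    // Hybrid management: hand the block back immediately. This is only
    // safe when no other reference to the buffer remains.
    GC.free(scratch.ptr);
    scratch = null;
}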
January 02, 2013
On Wednesday, 2 January 2013 at 12:32:01 UTC, deadalnix wrote:
> Most GC languages lack proper memory management, and some have design choices that are a nightmare for the GC (Java has no value types, for instance).

Surely Java's primitive types (byte, short, int, long, float, double, boolean, and char) count as value types?
January 02, 2013
On Tuesday, 1 January 2013 at 16:31:55 UTC, Stewart Gordon wrote:
> But there is something called packratting, which is a mistake at the code level of keeping a pointer hanging around for longer than necessary and therefore preventing whatever it's pointing to from being GC'd.


I've heard it called "midlife crisis" :)
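
For illustration, a hypothetical D sketch of that mistake (the class and field names are invented): a long-lived object that keeps a reference it no longer needs prevents the GC from reclaiming the data until the reference is dropped.

class Report
{
    ubyte[] rawData;   // large input, only needed to build the summary
    string summary;

    void finish()
    {
        summary = buildSummary(rawData);
        rawData = null;  // drop the reference; keeping it around would
                         // be packratting and pin the buffer in memory
    }

    private string buildSummary(const(ubyte)[] data)
    {
        import std.conv : to;
        return "bytes: " ~ data.length.to!string;
    }
}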
January 02, 2013
On Wednesday, 2 January 2013 at 17:18:52 UTC, Thiez wrote:
> On Wednesday, 2 January 2013 at 12:32:01 UTC, deadalnix wrote:
>> Most GC languages lack proper memory management, and some have design choices that are a nightmare for the GC (Java has no value types, for instance).
>
> Surely Java's primitive types (byte, short, int, long, float, double, boolean, and char) count as value types?

I think he meant non-primitives.
January 02, 2013
On Wednesday, 2 January 2013 at 17:18:52 UTC, Thiez wrote:
> On Wednesday, 2 January 2013 at 12:32:01 UTC, deadalnix wrote:
>> Most GC languages lack proper memory management, and some have design choices that are a nightmare for the GC (Java has no value types, for instance).
>
> Surely Java's primitive types (byte, short, int, long, float, double, boolean, and char) count as value types?

Yes, but they are irrelevant for the GC.
January 02, 2013
On Wednesday, 2 January 2013 at 11:41:33 UTC, DythroposTheImposter wrote:
>  I'm interested in how the new LuaJIT GC ends up performing. But overall I can't say I have much hope for GC right now.
>
>  GC/D = Generally Faster allocation. Has a cost associated with every living object.
>
>  C++ = Generally Slower allocation, but while it is alive there is no cost.
>
>  So as the heap grows, the GC language falls behind.
>
>  This seems to be the case in every language I've looked at that uses a GC.

As a former user of Native Oberon and BlueBottle OS back in the late '90s, I can attest that the GC was fast enough to power a single-user desktop operating system.

Surely there are cases where it is an issue, mainly due to hardware constraints.

While I am a GC supporter, I wouldn't like to fly in an airplane that depends on a GC-based system, but a few missile radar systems are controlled by GC-based systems (Ground Master 400). So who knows?!

--
Paulo
January 31, 2014
On Monday, 31 December 2012 at 14:05:01 UTC, Kiith-Sa wrote:
> All you have to do is care about how you allocate and, if
> GC seems to be an issue, profile to see _where_ the GC is being
> called most and optimize those allocations.

Bumping old thread:

There is an ever-increasing number of newcomers in #d on IRC who end up asking how to avoid the GC, or at least how to determine where implicit allocations happen. I think we should work on creating a wiki page and gathering as much advice as possible on dealing with the GC, implicit allocations, real-time constraints, etc.
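
For reference, a few of the patterns such a page would presumably cover, sketched in D (this is only an illustration, not proposed wiki content): several common operations allocate from the GC implicitly.

int[] numbers;

void examples(string a, string b)
{
    numbers ~= 42;              // appending may (re)allocate
    string joined = a ~ b;      // concatenation allocates a new string
    int[3] fixed = [1, 2, 3];   // static array: no GC allocation
    int[] dynamic = [1, 2, 3];  // array literal as a slice: allocates
}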
January 31, 2014
On Wednesday, 2 January 2013 at 22:46:20 UTC, pjmlp wrote:
> On Wednesday, 2 January 2013 at 11:41:33 UTC, DythroposTheImposter wrote:
>> I'm interested in how the new LuaJIT GC ends up performing. But overall I can't say I have much hope for GC right now.
>>
>> GC/D = Generally Faster allocation. Has a cost associated with every living object.
>>
>> C++ = Generally Slower allocation, but while it is alive there is no cost.
>>
>> So as the heap grows, the GC language falls behind.
>>
>> This seems to be the case in every language I've looked at that uses a GC.
>
> As a former user of Native Oberon and BlueBottle OS back in the late '90s, I can attest that the GC was fast enough to power a single-user desktop operating system.
>
> Surely there are cases where it is an issue, mainly due to hardware constraints.
>
> While I am a GC supporter, I wouldn't like to fly in an airplane that depends on a GC-based system, but a few missile radar systems are controlled by GC-based systems (Ground Master 400). So who knows?!
>
> --
> Paulo


The problem isn't stability, as manual management can be just as error-prone, or even more so. The problem is real time. If the GC must stop the world, then it might become a stability issue with respect to the design. Audio and gaming are the critical cases. In some instances the damage is only superficial, such as in video games, but it is absolutely a deal breaker for professional audio. Even though in a sense these are not as critical as, say, a missile system, they are used much more often and so are put to the test much more. There is also redundancy built into critical systems such as military defense... not so in a digital mixer for a PA, which can't afford to pop and click because the GC has to compact the heap. Your audience is going to be quite annoyed when they hear a loud pop (due to the audio cutting off and potentially jumping from a high value to a low value, which produces a very loud click) every 30 seconds or every minute. Not to mention what that would do to their hearing over the course of a night.

What if the GC decides to pause at just the wrong time, when you only needed a little bit more time to get the audio buffer filled? You are screwed. Manual management, or ARC, or whatever, is at least much more predictable. Sure, you can disable the GC, but when you do that you may just make things worse (it can take even longer to catch up later).
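
As a sketch of the workaround being discussed, assuming a hypothetical buffer-fill callback: core.memory.GC lets you disable and re-enable collections around a critical section, but as said above this only postpones the work, so the real fix is not to allocate in the callback at all.

import core.memory : GC;

void fillAudioBuffer(float[] buffer)
{
    GC.disable();             // no automatic collection will run here...
    scope(exit) GC.enable();  // ...but the pending work still piles up

    foreach (ref sample; buffer)
        sample = 0.0f;        // placeholder for the actual DSP work
}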

I think with proper management these things can be avoided, but as of now, since the D core relies so much on the GC, it's kind of pointless unless you avoid all the goodies it provides. The mindset that the GC is the ultimate way to deal with memory management has led to the issues with D and the GC, and to why some people avoid D. If it had been designed as optional, we wouldn't be talking about these problems.

January 31, 2014
+1.

What it boils down to for me, then, is this:

If I want to write applications without low-latency requirements, D is the perfect fit for me. D's templates, slices, and CTFE instead of a nasty macro or preprocessor language, plus a compiler producing machine code, are enough for me to switch from C# or Java.

If, on the other hand, I do have low-latency requirements, I simply don't want to jump through hoops, i.e. implement all critical memory management C++-style by myself to prevent the GC from kicking in. And if I even have to use GC profiling on top of that, D's cleaner language concepts just don't cut it for me. I'll take a look at Rust for that (admittedly not as well-thought-out a language as D).
January 31, 2014
On Friday, 31 January 2014 at 19:08:10 UTC, Andrej Mitrovic wrote:
> On Monday, 31 December 2012 at 14:05:01 UTC, Kiith-Sa wrote:
>> All you have to do is care about how you allocate and, if
>> GC seems to be an issue, profile to see _where_ the GC is being
>> called most and optimize those allocations.
>
> Bumping old thread:
>
> There is an ever-increasing number of newcomers in #d on IRC who end up asking how to avoid the GC, or at least how to determine where implicit allocations happen. I think we should work on creating a wiki page and gathering as much advice as possible on dealing with the GC, implicit allocations, real-time constraints, etc.

We should concentrate on implementing scope. scope could prevent many such things.
E.g.:

{
     scope int[] arr = new int[count];
}

Since scope guarantees that the marked variable does not leave its scope, the compiler could safely assume that arr doesn't need to be handled by the GC.
With that information the code could safely be rewritten to:

{
     import core.stdc.stdlib : calloc, free;

     int* tmp_arr_ptr = cast(int*) calloc(count, int.sizeof);
     scope(exit) free(tmp_arr_ptr);
     int[] arr = tmp_arr_ptr[0 .. count];
}

So no need to invoke the GC.
Since many objects have a short lifetime, it is preferable to avoid the GC for them. With scope the user could mark such objects... and everyone wins. It is not perfect and does not cover all use cases, but it would be a start.

Sadly, I doubt that scope will get implemented (soon). And since scope classes were replaced with an ugly library solution, I seriously doubt that this nice dream will come true. But it would be so nice. :)