October 03, 2015
On Friday, 2 October 2015 at 07:32:02 UTC, Kagamin wrote:
> Low latency (also a synonym for fast) is required by interactive applications like client and server software

Isn't a typical collection cycle's duration negligible compared to typical network latency?
October 03, 2015
On Saturday, 3 October 2015 at 13:35:19 UTC, Vladimir Panteleev wrote:
> On Friday, 2 October 2015 at 07:32:02 UTC, Kagamin wrote:
>> Low latency (also a synonym for fast) is required by interactive applications like client and server software
>
> Isn't a typical collection cycle's duration negligible compared to typical network latency?

Not really, especially if you have to block all threads, meaning hundreds of requests.
October 03, 2015
On Saturday, 3 October 2015 at 18:21:55 UTC, deadalnix wrote:
> On Saturday, 3 October 2015 at 13:35:19 UTC, Vladimir Panteleev wrote:
>> On Friday, 2 October 2015 at 07:32:02 UTC, Kagamin wrote:
>>> Low latency (also a synonym for fast) is required by interactive applications like client and server software
>>
>> Isn't a typical collection cycle's duration negligible compared to typical network latency?
>
> Not really, especially if you have to block all threads, meaning hundreds of requests.

I don't understand how that is relevant.

E.g. how is making 1% of requests take 100ms longer worse than making 100% of requests take 10ms longer?

October 03, 2015
On Saturday, 3 October 2015 at 18:26:32 UTC, Vladimir Panteleev wrote:
> On Saturday, 3 October 2015 at 18:21:55 UTC, deadalnix wrote:
>> On Saturday, 3 October 2015 at 13:35:19 UTC, Vladimir Panteleev wrote:
>>> On Friday, 2 October 2015 at 07:32:02 UTC, Kagamin wrote:
>>>> Low latency (also a synonym for fast) is required by interactive applications like client and server software
>>>
>>> Isn't a typical collection cycle's duration negligible compared to typical network latency?
>>
>> Not really, especially if you have to block all threads, meaning hundreds of requests.
>
> I don't understand how that is relevant.
>
> E.g. how is making 1% of requests take 100ms longer worse than making 100% of requests take 10ms longer?

Let's say you have capacity on your server to serve 100 concurrent requests, and each request takes 100ms to process. Then you need to dimension your infrastructure for your servers to absorb 1 request per ms per server.

Now you need to stop operation for 100ms to do a GC cycle. In the meantime, requests keep arriving. By the end of the GC cycle, you have 100 more requests to process. Twice as many.

The problem is that you are creating a peak demand and need to be able to absorb it.

This is a serious problem. In fact, Twitter has a fairly complex system to take machines that are GCing out of the load balancer and put them back at the end of the cycle. This is one way to do it, but it is far from ideal, as reconfiguring the load balancer is now part of the GC cycle.

TL;DR: it is not bad for any user individually; it is bad because it creates peaks of demand on your server that you need to absorb.

October 03, 2015
D gives users tools to avoid heap allocations, and when you do need to allocate heap memory, you have scoped memory management or reference counting, so your GC heap stays small or nonexistent. People fear manual memory management because they hear horror stories about C, but for the most part it can be easy and safe.
October 04, 2015
On Saturday, 3 October 2015 at 19:01:33 UTC, welkam wrote:
> D gives users tools to avoid heap allocations, and when you do need to allocate heap memory, you have scoped memory management or reference counting, so your GC heap stays small or nonexistent. People fear manual memory management because they hear horror stories about C, but for the most part it can be easy and safe.

These tools are not very good, and they don't help when the standard library (or built-in language features) use the GC and tie your hands.
October 04, 2015
On Saturday, 3 October 2015 at 07:49:35 UTC, Iain Buclaw wrote:
> On 2 Oct 2015 1:32 pm, "Tourist via Digitalmars-d" < digitalmars-d@puremagic.com> wrote:
>>
>> On Friday, 2 October 2015 at 06:53:56 UTC, Iain Buclaw wrote:
>>>
>>> On 1 Oct 2015 11:35 am, "Tourist via Digitalmars-d" <
> digitalmars-d@puremagic.com> wrote:
>>>>
>>>> [...]
>>>
>>> good GC. And they keep working on it, e.g.
> https://github.com/golang/proposal/blob/master/design/12800-sweep-free-alloc.md
>>>>
>>>> [...]
>>>
>>> https://github.com/golang/go/blob/master/LICENSE
>>>>
>>>> [...]
>>>
>>> Wouldn't it largely benefit D? I guess that I'm not the first one to
> think about it. Thoughts?
>>>
>>> Why do you think Go's GC might be better than D's?  Is it because we
> lack the PR when changes/innovations are done to the GC in druntime?  Do you *know* about anything new that has changed or improved in D's GC over the last two years?
>>>
>>> I'd be interested to hear about this.
>>
>>
>> I know that it has the reputation of being of the simplest kind. Haven't
> looked at the code actually (and I wouldn't understand much even if I did).
>
> So I doubt you've looked at Go's GC code either.  In which case it is a matter of PR which led to your suggestion.

That's basically true, but isn't it a good approximation of the real state of affairs? My comment about D's GC being of the simple kind is something I read here on the forums, not on e.g. Reddit or the Go forums, so it's probably approximately true (why would you falsely bash yourself?). And Google, being a huge company, can invest a lot in Go, which includes the GC; the fact that there are articles about its improvements here and there suggests that they do invest a lot.
October 04, 2015
On Sunday, 4 October 2015 at 12:40:00 UTC, rsw0x wrote:
> These tools are not very good, and they don't help when the standard library (or built-in language features) use the GC and tie your hands.

IMO the tools for memory management in D are way better than those of other languages. Game developers who use C++ don't use all of its features (templates, exceptions) because they care about performance; the same can be said about D. Yes, some features use the GC heap, but you can simply not use them. Or you can use so little that a GC collection won't even kick in. With Go you have no real option but to use the GC.
October 04, 2015
On Friday, 2 October 2015 at 11:27:12 UTC, Tourist wrote:
> I know that it has the reputation of being of the simplest kind. Haven't looked at the code actually (and I wouldn't understand much even if I did).

Go has a very simple GC itself. It's concurrent, so it trades performance (write barriers) and throughput for low latency.
We wouldn't want to force write barriers on everybody, but in D you can avoid creating garbage much more easily (e.g. map), and we're improving support for deterministic memory management. So while we keep improving D's GC as well, GC performance is less of a problem in D because you have a smaller GC heap.
October 04, 2015
On Sunday, 4 October 2015 at 17:22:52 UTC, Martin Nowak wrote:
> On Friday, 2 October 2015 at 11:27:12 UTC, Tourist wrote:
>> I know that it has the reputation of being of the simplest kind. Haven't looked at the code actually (and I wouldn't understand much even if I did).
>
> Go has a very simple GC itself. It's concurrent, so it trades performance (write barriers) and throughput for low latency.
> We wouldn't want to force write barriers on everybody, but in D you can avoid creating garbage much more easily (e.g. map), and we're improving support for deterministic memory management. So while we keep improving D's GC as well, GC performance is less of a problem in D because you have a smaller GC heap.

I still say it's worth investigating a thread-local GC, taking advantage of the fact that shared has never really been properly fleshed out. This would play heavily to D's TLS by default and its preference for message passing over shared data.

Bye.