January 21, 2015
On Wednesday, 21 January 2015 at 01:07:59 UTC, weaselcat wrote:
> On Wednesday, 21 January 2015 at 01:05:28 UTC, weaselcat wrote:
>> there is no silver bllet for memory and
> Sorry, meant "silver bullet for memory management", bit tired : )

Yeah, but the silver bullets for memory management in real-time applications are (see the sketch below):

1. preallocation
2. O(1) allocation with upper bounds on total memory usage

Then you can use a GC if you
1. know when it runs and have headroom for it
2. can put an upper bound on what it scans
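
To make (1) and (2) concrete, something like the following (a minimal illustrative sketch, not production code): a preallocated pool with O(1) allocate/free and a hard upper bound fixed at compile time.

// Illustrative only: preallocated storage, O(1) allocate/free, hard upper
// bound on memory known up front. Not safe to copy or move after
// initialize(), since it stores internal pointers.
struct FixedPool(T, size_t capacity)
{
    private union Slot { T value; Slot* next; }

    private Slot[capacity] slots; // preallocated, never grows
    private Slot* freeList;

    void initialize() @nogc nothrow
    {
        // Thread every slot onto the free list once, at startup.
        foreach (i; 0 .. capacity - 1)
            slots[i].next = &slots[i + 1];
        slots[capacity - 1].next = null;
        freeList = &slots[0];
    }

    T* allocate() @nogc nothrow // O(1); fails fast at the preset bound
    {
        if (freeList is null)
            return null;
        auto slot = freeList;
        freeList = slot.next;
        return &slot.value;
    }

    void deallocate(T* p) @nogc nothrow // O(1)
    {
        auto slot = cast(Slot*) p; // value is the first union member
        slot.next = freeList;
        freeList = slot;
    }
}

void main()
{
    FixedPool!(int, 1024) pool; // upper bound known before the program runs
    pool.initialize();
    auto p = pool.allocate();
    *p = 42;
    pool.deallocate(p);
}

Once the pool is exhausted you fail fast instead of paying an unbounded allocation cost, which is the point for real-time code.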

Implicit concurrent GC collection leads to:

1. less efficient codegen + restrictions related to FFI
2. higher memory usage with limited ability to put upper bounds
3. intermittent pauses
4. random cache pollution

Inconvenient if you try to get the most out of the hardware.
January 21, 2015
On Wednesday, 21 January 2015 at 00:47:43 UTC, deadalnix wrote:
> On Tuesday, 20 January 2015 at 23:47:44 UTC, Ola Fosheim Grøstad wrote:
>> On Tuesday, 20 January 2015 at 23:17:28 UTC, deadalnix wrote:
>>>> Concurrent GC is too expensive for a proper system level language.
>>>>
>>>
>>> That is an unsubstantiated claim.
>>
>> And so is «pigs can't fly».
>>
>
> That is the old "prove a negative" bullshit.

No. You can make a pig fly, but the landing is going to be ugly compared to the competition.

The status quo is that GC is not desirable in typical real time / system level programming. And if it is to be used, it should be predictable and not require special code gen.

So that is hypothesis h0. Then it is up to you to replace it with hypothesis h1: "concurrent GC is fine for typical system level programming".

All you have to do is to provide a documented proof-of-concept, or at least a link to a solid paper that describes a concurrent GC that has the desired properties:

1. no negative effect on code gen
2. predictable upper bounds all around (time, latency and memory)
3. no bad pollution of caches at the wrong time
4. no intermittent pauses at random times

> You assert that something is too expensive for system language, which is not a negative, so the burden of proof is on you, substantiate your claim or you are just generating noise.

Oh... But you are the one that is challenging the status quo. Not me. The excitement created by Rust is proof enough for what the status quo is.

> Relying on cheap rhetorical trick is not going to fly (and you could add, no more than a pig).

GC is an unnecessary pig to begin with. If you want to use a pig to eat your leftovers, then you need to know how hungry it is and how much and how fast it can eat. I think the metaphor is apt. :-)
January 21, 2015
On Wednesday, 21 January 2015 at 09:03:32 UTC, Ola Fosheim Grøstad wrote:
>> You assert that something is too expensive for system language, which is not a negative, so the burden of proof is on you, substantiate your claim or you are just generating noise.
>
> Oh... But you are the one that is challenging the status quo. Not me. The excitement created by Rust is proof enough for what the status quo is.
>

That's irrelevant. Once again, you are giving in to rhetorical bullshit. Or, to quote Coluche, « C'est pas parce qu'ils sont nombreux à avoir tort qu'ils ont raison », which I would translate as "It is not because many of them are wrong that they are right."

What the status quo is, is irrelevant. You make a claim, you substantiate it.
January 21, 2015
On Wednesday, 21 January 2015 at 08:51:08 UTC, Ola Fosheim Grøstad wrote:
> Implicit concurrent GC collection leads to:
>
> 1. less efficient codegen + restrictions related to FFI

No.

> 2. higher memory usage with limited ability to put upper bounds

Yes. The memory allocated during a collection will be collected in the next cycle, and you can even get into a thrashing scenario if you allocate faster than you can collect.

> 3. intermittent pauses

I think you don't quite get what concurrent is about.

> 4. random cache pollution

If you have a multicore machine (and you have one), no.

> Inconvenient if you try to get the most out of the hardware.

It is quite the contrary. As you may have noticed, most memory-allocation-intensive benchmarks in Java tend to outperform their C++ counterparts. This is because Java can collect concurrently, giving some level of parallelism for free. Nowadays, multicore machines are all over the place, and getting part of the workload offloaded onto another core for free is something you want in the general case.
January 21, 2015
On Wednesday, 21 January 2015 at 09:54:32 UTC, deadalnix wrote:
> What is the status quo is irrelevant. You make a claim, you substantiate.

You need to take a course called "the philosophy of science".

You've made a claim that concurrent GC is what is needed for competitive system level programming support. "Competitive" meaning you can get close to peak performance.

A peer reviewed reference is all you need to provide.

That can't be too much to ask for, given that GC is a well researched field in computer science.
January 21, 2015
On Wednesday, 21 January 2015 at 10:00:21 UTC, deadalnix wrote:
>> 1. less efficient codegen + restrictions related to FFI
>
> No.

How would that work? You cannot trace pointers that stay in registers. You also cannot do precise unwinding of FFI stacks.

>> 3. intermittent pauses
>
> I think you don't quite get what concurrent is about.

Parallel: running simultaneously

Concurrent: parallel and/or interleaving

C# uses intermittent pauses/parallel collection:

https://msdn.microsoft.com/en-us/library/ee787088(v=vs.110).aspx#concurrent_garbage_collection

>> 4. random cache pollution
>
> If you have a multicore (and you have one) no.

Multicore won't help with the level 3 cache. (Hyper-threaded cores share caches too.)

>> Inconvenient if you try to get the most out of the hardware.
>
> It is quite the contrary. As you may have noticed, most memory-allocation-intensive benchmarks in Java tend to outperform their C++ counterparts.

Malloc is not fast. If you want performance you use O(1) allocators/deallocators. Bogus benchmarks to make Java look good are not convincing.
January 21, 2015
Ok listen, I'm gonna put a full stop to this conversation:
 - You provide no evidence of your claims.
 - You discard any evidence that contradicts your claims without a solid argument.
 - You are constantly shifting the goalposts, using more and more ridiculous claims (like that malloc should not be used).

There is no point in continuing this discussion.

If you have a point, the burden of proof is on you. I remind you that, before you brought all kinds of topics into the discussion to avoid having to substantiate anything, your claim was:
 - A concurrent GC is not suitable for system languages.
January 21, 2015
On 1/21/15 3:37 AM, Paolo Invernizzi wrote:
> On Wednesday, 21 January 2015 at 03:02:53 UTC, Steven Schveighoffer wrote:
>> On 1/20/15 9:04 PM, ketmar via Digitalmars-d wrote:
>>> On Tue, 20 Jan 2015 20:51:34 -0500
>>> Steven Schveighoffer via Digitalmars-d <digitalmars-d@puremagic.com>
>>> wrote:
>>>
>>>>>> You can always put @nogc on the dtor if you want.
>>>>> seems that you completely missing my point. (sigh)
>>>>
>>>> Nope, not missing it. The mechanics are there. You just have to
>>>> annotate.
>>> that is where you missing it. your answer is like "hey, C has all
>>> mechanics for doing OOP with virtual methods and type checking, you
>>> just have to write the code!"
>>
>> No, actually it's not. Adding @nogc to a function is as hard as
>> writing "class" when you want to do OOP.
>>
>>>
>>> the whole point of my talk was "free programmer from writing the
>>> obvious and setup some red tapes for beginners".
>>
>> If he does it wrong, it gives him a stack trace on where to look. What
>> is different here than any other programming error?
>
> Are you suggesting that newcomers should learn D by discovering it day
> by day from stack traces?

No, I was saying if something causes an exception/error, it is a programming error, and there just isn't any way for a compiler to prevent people from making *any* mistakes.

But calling a sometimes-allocating function inside a dtor, when it doesn't actually allocate on *that* particular call, shouldn't be banned by the compiler.
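
For context, a small hypothetical sketch of both sides of this: the runtime failure mode (a dtor that allocates while it is being run by a collection), and the "@nogc on the dtor" annotation mentioned upthread, which bans allocation statically instead.

class Risky
{
    // If this destructor runs as part of a GC collection, the allocation
    // below throws core.exception.InvalidMemoryOperationError at runtime.
    ~this()
    {
        auto msg = new char[](64);
    }
}

class Checked
{
    int[] buffer;

    // The annotation discussed upthread: the compiler now rejects any GC
    // allocation inside the destructor at compile time.
    ~this() @nogc
    {
        buffer = null;   // fine: no allocation
        // buffer ~= 1;  // would be rejected: cannot allocate in @nogc code
    }
}

void main() {}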

>
> Actually there's nothing on the documentation about class destructors
> [1] that warns about that specific issue of the current (and default) GC.
>
> [1] http://dlang.org/class.html#destructors
>

I think the docs are in need of updating there. It's not meant to be a secret that you cannot allocate inside a GC collection. I'll try to put together a PR.

-Steve
January 21, 2015
On Wednesday, 21 January 2015 at 20:15:53 UTC, deadalnix wrote:
> Ok listen, I'm gonna put a full stop to this conversation:
>  - You provide no evidence of your claims.

HAHAHA…

>  - You discard any evidence that contradicts your claims without a solid argument.

What evidence?

>  - You are constantly shifting the goalposts, using more and more ridiculous claims (like that malloc should not be used).

You can batch-allocate, but you need to use an O(1) allocator if you want fast memory handling.
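
For example (an illustrative sketch, the names are made up): batch allocation as a bump-pointer arena carved out of one block obtained up front, so each allocation is O(1) and the whole batch is released at once.

// Illustrative only: one up-front block, O(1) bump allocation, O(1) reset.
struct Arena
{
    ubyte[] buffer; // the single batch, obtained once
    size_t used;

    void[] allocate(size_t size) @nogc nothrow
    {
        enum size_t alignment = 16;
        immutable padded = (size + alignment - 1) & ~(alignment - 1);
        if (used + padded > buffer.length)
            return null; // hard upper bound, no hidden allocation
        auto slice = buffer[used .. used + size];
        used += padded;
        return slice;
    }

    void reset() @nogc nothrow { used = 0; } // release the whole batch in O(1)
}

void main()
{
    auto arena = Arena(new ubyte[1024 * 1024]); // the one batch allocation
    auto a = arena.allocate(256);
    auto b = arena.allocate(4096);
    arena.reset(); // both released at once
}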

> There is no point in continuing this discussion.

That's right...

>  - A concurrent GC is not suitable for system languages.

And there is nothing that suggests that it is. Go is not a system programming language.
January 21, 2015
On Wednesday, 21 January 2015 at 20:32:14 UTC, Steven Schveighoffer wrote:
> On 1/21/15 3:37 AM, Paolo Invernizzi wrote:
>> On Wednesday, 21 January 2015 at 03:02:53 UTC, Steven Schveighoffer wrote:
>>> On 1/20/15 9:04 PM, ketmar via Digitalmars-d wrote:
>>> If he does it wrong, it gives him a stack trace on where to look. What
>>> is different here than any other programming error?
>>
>> Are you suggesting that newcomers should learn D by discovering it day
>> by day from stack traces?
>
> No, I was saying if something causes an exception/error, it is a programming error, and there just isn't any way for a compiler to prevent people from making *any* mistakes.
>
> But calling a sometimes-allocating function inside a dtor, when it doesn't actually allocate on *that* particular call, shouldn't be banned by the compiler.

You can't ban them either, even now with an annotated @nogc destructor: see SetFunctionAttributes.
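
For reference, here is the escape hatch being alluded to, as a hedged sketch adapted from the std.traits documentation (the assumeNogc helper name is made up, not a library symbol):

import std.traits : FunctionAttribute, SetFunctionAttributes,
    functionAttributes, functionLinkage, isDelegate, isFunctionPointer;

// Cast a function pointer or delegate so the type system treats it as @nogc.
// This only changes the type: if the call really does allocate, the usual
// runtime behaviour (and failure modes) are unchanged.
auto assumeNogc(T)(T fn) if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.nogc;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) fn;
}

void sometimesAllocates(bool doAllocate)
{
    if (doAllocate)
    {
        auto tmp = new int[8];
        tmp[0] = 1;
    }
}

void main() @nogc
{
    // Even with @nogc enforced here, the call can be smuggled through the
    // type system, which is why the annotation alone cannot truly ban it.
    assumeNogc(&sometimesAllocates)(false);
}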