August 18, 2014
On 18/08/2014 13:55, Peter Alexander wrote:
> I'm not sure what the status of std.typecons.Unique is. Last I heard it
> had some issues, but I haven't tried it much myself.

It should work, but it is bug-prone without the @disable this(this) fix in master. Unfortunately that won't make it into 2.066.

Also, the web docs lack a concrete example - I have a pull which fixes that:
https://github.com/D-Programming-Language/phobos/pull/2346
August 18, 2014
> A good reason is the ability to write lock-free algorithms, which are very hard to implement without GC support. This is the main reason why C++11 has a GC API and Herb Sutter will be discussing about GC in C++ at CppCon.

 *Some* lock-free algorithms benefit from GC; there is still plenty you can do without GC, just look at TBB.

> Reference counting is only a win over GC with compiler support for reducing increment/decrement operations via dataflow analysis.
>
> C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr are slower than other languages with GC support, because those classes are plain library types without compiler support. Of course, compiler vendors can have blessed library types, but the standard does not require it.

 Not really accurate. First of all, don't include unique_ptr as if it had the same overhead as the other two, it doesn't.

 With RC you pay a price during creation/deletion/sharing, but not while it is alive.
 With GC you pay almost no cost during allocation/deletion, but a constant cost while it is alive. You allocate enough objects and the summed cost isn't so small.

 Besides that, in C++ it works like this.
 90% of objects: value types, on stack or embedded into other objects
 9% of objects: unique types, use unique_ptr, no overhead
 ~1% of objects: shared, use shared_ptr/weak_ptr etc.

 With GC you give up deterministic behavior, which is *absolutely* not worth giving up for 1% of objects.

 I think most people simply haven't worked in an environment that supports unique/linear types. So everyone assumes that you need a GC. Rust is showing that this is nonsense, as C++ has already done for people using C++11.




August 18, 2014
Am 18.08.2014 20:56, schrieb b:
>> A good reason is the ability to write lock-free algorithms, which are
>> very hard to implement without GC support. This is the main reason why
>> C++11 has a GC API and Herb Sutter will be discussing about GC in C++
>> at CppCon.
>
>   *some* lock free algorithms benefit from GC, there is still plenty you
> can do without GC, just look at TBB.

Sure, but you need to be a very good expert to pull them off.

>
>> Reference counting is only a win over GC with compiler support for
>> reducing increment/decrement operations via dataflow analysis.
>>
>> C++ programs with heavy use of unique_ptr/shared_ptr/weak_ptr are
>> slower than other languages with GC support, because those classes are
>> plain library types without compiler support. Of course, compiler
>> vendors can have blessed library types, but the standard does not
>> require it.
>
>   Not really accurate. First of all, don't include unique_ptr as if it
> had the same overhead as the other two, it doesn't.

Yes it does, when you do cascade destruction of large data structures.

>
>   With RC you pay a price during creation/deletion/sharing, but not
> while it is alive.
>   With GC you pay almost no cost during allocation/deletion, but a
> constant cost while it is alive. You allocate enough objects and the
> summed cost isn't so small.
>
>   Besides that, in C++ it works like this.
>   90% of objects: value types, on stack or embedded into other objects
>   9% of objects: unique types, use unique_ptr, no overhead
>   ~1% of objects: shared, use shared_ptr/weak_ptr etc.

It is more than 1% I would say, because in many cases where you have a unique_ptr, you might need a shared_ptr instead, or go unsafe and give direct access to the underlying pointer.

For example, parameters and temporaries, where you can be sure no one else is using the pointer, but afterwards as a consequence of destructor invocation the data is gone.

>
>   With GC you give up deterministic behavior, which is *absolutely* not
> worth giving up for 1% of objects.

Being a GC-enabled systems programming language does not preclude support for deterministic memory management, for the use cases that really need it.


>
>   I think most people simply haven't worked in an environment that
> supports unique/linear types. So everyone assumes that you need a GC.
> Rust is showing that this is nonsense, as C++ has already done for
> people using C++11.
>

I know C++ pretty well (using it since 1993), like it a lot, but I also think we can get better than it.

Especially since I had the luck to get to know systems programming languages with GC like Modula-3 and Oberon(-2). The Oberon OS had quite
a few nice concepts that go way back to Mesa/Cedar at Xerox PARC.

Rust is also showing how complex a type system needs to be to handle all memory management cases. Not sure how many developers will jump into it.

For example, currently you can only concatenate strings if they are both heap allocated.

There are still some issues being sorted out with operations that mix lifetimes.


--
Paulo
August 18, 2014
On Monday, 18 August 2014 at 18:56:42 UTC, b wrote:
>  With RC you pay a price during creation/deletion/sharing, ...

Are you sure that there is even a cost for creation? I mean, sure, we have to allocate memory on the heap, but the GC has to do the same thing.

And with C++11 unique_ptr moves by default, so I think that the creation cost is exactly the same as with a GC.

I also think that deletion is cheaper for a unique_ptr compared to a GC, because all you have to do is call the destructor when it goes out of scope.


My initial question was why D uses the GC for everything. Personally it would make more sense to me if D used the GC as a library. Let the user decide what he wants, something like

shared_ptr(GC) ptr;
shared_ptr(RC) ptr;




August 18, 2014
On Monday, 18 August 2014 at 19:43:14 UTC, maik klein wrote:
> My initial question was why D uses the GC for everything.

It isn't supposed to with @nogc?

> Personally it would make more sense to me if D would use the GC as a library. Let the user decide what he wants, something like
>
> shared_ptr(GC) ptr;
> shared_ptr(RC) ptr;

Memory allocation should be in the compiler/runtime to get proper
optimizations. But D has apparently gone with non-GC allocators as a library feature.


August 18, 2014
On Mon, 18 Aug 2014 19:43:13 +0000
maik klein via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> My initial question was why D uses the GC for everything.
to avoid unnecessary complications in user source code. GC is necessary for some cool D features (as was noted earlier), and GC is a first-class citizen in D.

> Personally it would make more sense to me if D would use the GC as a library.
D is not C++. Besides, you can write your own allocators (oh, we need more documentation on this) and avoid GC usage, so the user *can* avoid GC if he wants to. Take a look at std.typecons and its 'scoped', for example.


August 18, 2014
Various reasons:
1. Memory safety. A whole class of bugs are eliminated by the use
of a GC.
2. It is faster on multithreaded systems than RC (as the
reference count must be synchronized properly).
3. Having all the heap under GC control is important so that the
GC won't delete live objects.

If you are willing to give up 1., D allows you to free objects
from the GC explicitly via GC.free, which will bring you back to
the safety and performance level of manual memory management, but
will still provide you with a protection net against memory leaks.
August 18, 2014
On 8/18/14, 8:51 AM, bearophile wrote:
> Jonathan M Davis:
>
>> The biggest reason is memory safety. With a GC, it's possible to make
>> compiler guarantees about memory safety, whereas with
>> manual memory management, it isn't.
>
> Unless you have a very smart type system and you accept some compromises
> (Rust also uses a reference counter in some cases, but I think most
> allocations don't need it).
>
> Bye,
> bearophile

It's very smart, yes. But it takes half an hour to compile the compiler itself. And you have to put all those unwraps and types everywhere; I don't think it's fun or productive that way.
August 19, 2014
Ary Borenszweig:

> It's very smart, yes. But it takes half an hour to compile the compiler itself.

I think this is mostly a back-end issue. How much time does it take to compile ldc2? Can't they create a Rust with dmc back-end? :o)


> And you have to put all those unwrap and types everywhere, I don't think it's fun or productive that way.

I've never written Rust programs longer than twenty lines, so I don't know. But I think Rust code is acceptable to write; I have seen code written in far worse situations (think about a program one million lines of code long written in MUMPS).

Apparently the main Rust designer is willing to accept anything to obtain memory safety of Rust in parallel code. And the language is becoming simpler to use and less noisy.

Bye,
bearophile
August 19, 2014
On Monday, 18 August 2014 at 23:48:24 UTC, Ary Borenszweig wrote:
> On 8/18/14, 8:51 AM, bearophile wrote:
>> Jonathan M Davis:
>>
>>> The biggest reason is memory safety. With a GC, it's possible to make
>>> compiler guarantees about memory safety, whereas with
>>> manual memory management, it isn't.
>>
>> Unless you have a very smart type system and you accept some compromises
>> (Rust also uses a reference counter in some cases, but I think most
>> allocations don't need it).
>>
>> Bye,
>> bearophile
>
> It's very smart, yes. But it takes half an hour to compile the compiler itself. And you have to put all those unwraps and types everywhere; I don't think it's fun or productive that way.

Initially all that type wrapping, scoping, and lifetime machinery was expressed as single-character annotations. That gave me the impression that the idea was: once you get comfortable with Rust's type system and syntax, you can use all that fine-grained control over the scope and lifetime of data to get superior compile-time error checking and to give the compiler better cues for performance, without too much effort and without hindering readability (again, once you are comfortable with the type system).

Now, though, when they move more and more syntax out of the language and into the library in an attempt to reach the elegance and simplicity of modern C++, I'm no longer sure that was the true goal of the language...