May 13, 2014
On Monday, 12 May 2014 at 04:22:21 UTC, Marco Leise wrote:
> On the positive side the talk about Rust, in particular how
> reference counted pointers decay to borrowed pointers made me
> think the same could be done for our "scope" args. A reference
> counted slice with 3 machine words could decay to a 2 machine
> word "scoped" slice. Most of my code at least just works on the
> slices and doesn't keep a reference to them.

I wouldn't mind banning slices on the heap, but what D needs is to ban pointers to internal data that outlive the allocation's base pointer. I think that is bad practice anyway and consider it a bug. If you can establish that constraint, then you can avoid tracing pointers to non-aligned addresses:

if ((addr & MASK) == 0) trace...

You could also statically annotate pointers as "guaranteed allocation base pointer" or "known to be traced already" (e.g. borrowed pointers in Rust).
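For illustration, the alignment test above could be sketched as follows (in Rust, since it is the comparison point in this thread; the 16-byte alignment and the function name are purely hypothetical assumptions, not anything D's GC actually does):

```rust
// Hypothetical sketch: if every allocation base is aligned to a known
// boundary (assumed here: 16 bytes), a tracer can skip interior
// pointers entirely with a single mask test.
const ALIGN_MASK: usize = 0xF; // assumes 16-byte-aligned allocation bases

fn is_allocation_base(addr: usize) -> bool {
    addr & ALIGN_MASK == 0
}

fn main() {
    assert!(is_allocation_base(0x1000));  // aligned: candidate base pointer, traced
    assert!(!is_allocation_base(0x1003)); // interior pointer: skipped
}
```

The cost is that any pointer that really is interior must be reachable through its (traced) base pointer, which is exactly the constraint proposed above.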

> A counter example
> is when you have something like an XML parser - a use case
> that D traditionally (see Tango) excelled in. The GC
> environment and slices make it possible to replace string
> copies with cheap slices into the original XML string.

As pointed out by others, this won't work for XML. It will work for some binary formats, but you usually want to map a struct onto the data (or copy it) anyway.

I have little need for slices on the heap... I'd much rather have them limited to registers (conceptually) if that means a faster GC.


May 13, 2014
On Tuesday, 13 May 2014 at 07:12:02 UTC, Marco Leise wrote:
> Am Mon, 12 May 2014 08:44:51 +0000
> schrieb "Marc Schütz" <schuetzm@gmx.net>:
>
>> On Monday, 12 May 2014 at 04:22:21 UTC, Marco Leise wrote:
>> > On the positive side the talk about Rust, in particular how
>> > reference counted pointers decay to borrowed pointers made me
>> > think the same could be done for our "scope" args. A reference
>> > counted slice with 3 machine words could decay to a 2 machine
>> > word "scoped" slice. Most of my code at least just works on the
>> > slices and doesn't keep a reference to them. A counter example
>> > is when you have something like an XML parser - a use case
>> > that D traditionally (see Tango) excelled in. The GC
>> > environment and slices make it possible to replace string
>> > copies with cheap slices into the original XML string.
>> 
>> Rust also has a solution for this: They have lifetime annotations. D's scope could be extended to support something similar:
>> 
>>      scope(input) string getSlice(scope string input);
>> 
>> or with methods:
>> 
>>      struct Xml {
>>          scope(this) string getSlice();
>>      }
>> 
>> scope(symbol) means, "this value references/aliases (parts of) the value referred to by <symbol>". The compiler can then make sure it is never assigned to variables with longer lifetimes than <symbol>.
>
> Crazy shit, now we are getting into concepts that I have no
> idea how well they play out in real code. There are no globals,
> but threads all create their own call stacks with independent
> lifetimes. So at that point lifetime annotations become
> interesting.

I don't really know a lot about Rust, but I believe this is not an issue with Rust, as its variables are only thread-local. You can send things to other threads, but then they become inaccessible in the current thread. In general, lifetime annotations can only be used for "simple" relationships. It's also not a way to keep objects alive as long as they are referenced, but rather a way to disallow references to exist longer than the objects they point to.
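In Rust, the restriction described here looks roughly like the following sketch (the function name `get_slice` mirrors the hypothetical `scope(input)` example quoted above; it is made up for illustration):

```rust
// The returned slice borrows from `input`, so its lifetime 'a is tied
// to the input's lifetime: the compiler rejects any attempt to keep
// the slice alive longer than the string it points into.
fn get_slice<'a>(input: &'a str) -> &'a str {
    &input[0..3]
}

fn main() {
    let owned = String::from("hello");
    let slice = get_slice(&owned);
    assert_eq!(slice, "hel");
    // drop(owned);            // uncommenting this is a compile error:
    // println!("{}", slice);  // `slice` must not outlive `owned`
}
```

Note that this does nothing to keep `owned` alive; it only makes it impossible for `slice` to escape past it, which is exactly the "disallow references to exist longer than the objects they point to" behavior described above.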
May 13, 2014
On Tuesday, 13 May 2014 at 07:42:26 UTC, Manu via Digitalmars-d wrote:
> The other topic is still relevant to me too however (and many others).
> We still need to solve the problem with destructors.
> I agree with Andrei, they should be removed from the language as they
> are. You can't offer destructors if they don't get called.

Andrei only said they are sometimes not called, not that they are never called; so we can still guarantee destructor calls in the cases where that can be guaranteed.

> And the usefulness of destructors is seriously compromised if you can't
> rely on them being executed eagerly. Without eagerly executed destructors,
> in many situations you end up effectively releasing the object manually
> anyway (*cough* C#), and that implies manually tracking lifetime/end of
> life and calling some release function.

I use finalizers in C#; they're useful. It's a popular misunderstanding that the GC must work like RAII. But the GC manages only its own resources, not yours: it can manage memory without RAII, and it does. As for eager resource management, we have Unique and RefCounted in Phobos; in fact, files are already managed that way. What's the problem?
May 13, 2014
On Tuesday, 13 May 2014 at 07:42:26 UTC, Manu via Digitalmars-d wrote:
> Do we ARC just those objects that have destructors like Andrei
> suggested? It's a possibility, I can't think of any other solution. In
> lieu of any other solution, it sounds like we could very well end up
> with ARC tech available one way or another, even if it's not
> pervasive, just applied implicitly to things with destructors.

BTW, I don't see how ARC would be any more able to call destructors than the GC. If ARC can call a destructor, so can the GC. Where's the difference?
May 13, 2014
On 13 May 2014 21:42, Kagamin via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On Tuesday, 13 May 2014 at 07:42:26 UTC, Manu via Digitalmars-d wrote:
>>
>> The other topic is still relevant to me too however (and many others).
>> We still need to solve the problem with destructors.
>> I agree with Andrei, they should be removed from the language as they
>> are. You can't offer destructors if they don't get called.
>
>
> Andrei only said they are sometimes not called, not that they are never called; so we can still guarantee destructor calls in the cases where that can be guaranteed.

... what?

>> And the usefulness of destructors is seriously compromised if you can't
>> rely on them being executed eagerly. Without eagerly executed destructors,
>> in many situations you end up effectively releasing the object manually
>> anyway (*cough* C#), and that implies manually tracking lifetime/end of
>> life and calling some release function.
>
>
> I use finalizers in C#; they're useful. It's a popular misunderstanding that the GC must work like RAII. But the GC manages only its own resources, not yours: it can manage memory without RAII, and it does. As for eager resource management, we have Unique and RefCounted in Phobos; in fact, files are already managed that way. What's the problem?

It completely undermines the point. If you're prepared to call
finalise, then you might as well call free... Every single detail
required to perform full manual memory management is required to use
finalise correctly.
I see absolutely no point in a GC when used with objects that require
you to manually call finalise anyway.

>> Do we ARC just those objects that have destructors like Andrei suggested? It's a possibility, I can't think of any other solution. In lieu of any other solution, it sounds like we could very well end up with ARC tech available one way or another, even if it's not pervasive, just applied implicitly to things with destructors.
>
>
> BTW, I don't see how ARC would be any more able to call destructors than the GC. If ARC can call a destructor, so can the GC. Where's the difference?

ARC release is eager. It's extremely common that destructors either expect to be called eagerly, or rely on proper destruction ordering. Otherwise you end up with finalise again, read: unsafe manual memory management :/
May 13, 2014
On 13/05/14 13:46, Kagamin wrote:

> BTW, I don't see how ARC would be any more able to call destructors
> than the GC. If ARC can call a destructor, so can the GC. Where's the difference?

The GC will only call destructors when it deletes an object, i.e. when it runs a collection. There's no guarantee that a collection will happen. With ARC, as soon as a reference goes out of scope it's decremented. If the reference count then goes to zero it will call the destructor and delete the object.

-- 
/Jacob Carlborg
May 13, 2014
On Tuesday, 13 May 2014 at 13:21:04 UTC, Jacob Carlborg wrote:
> The GC will only call destructors when it deletes an object, i.e. when it runs a collection. There's no guarantee that a collection will happen.

Ah, so when the GC collects an object, it calls the destructor. It sounded as if that wasn't guaranteed at all.
May 13, 2014
On Tuesday, 13 May 2014 at 12:18:06 UTC, Manu via Digitalmars-d wrote:
> It completely undermines the point. If you're prepared to call
> finalise, then you might as well call free... Every single detail
> required to perform full manual memory management is required to use
> finalise correctly.
> I see absolutely no point in a GC when used with objects that require
> you to manually call finalise anyway.

Well, the GC doesn't run immediately, so you can't do eager resource management with it. The GC manages memory, not other resources, and lots of people do see a point in it: Java and C# are industrial-quality technologies in wide use.

> ARC release is eager. It's extremely common that destructors either
> expect to be called eagerly, or rely on proper destruction ordering.
> Otherwise you end up with finalise again, read: unsafe manual memory
> management :/

No language will figure out all algorithms for you, but this looks like a rare scenario: kernel objects, for example, don't require ordered destruction.
The finalizer will be called when the GC collects the object; it's a last-resort cleanup, but it's not as unsafe as it used to be.
May 13, 2014
On Tuesday, 13 May 2014 at 14:46:18 UTC, Kagamin wrote:
> On Tuesday, 13 May 2014 at 13:21:04 UTC, Jacob Carlborg wrote:
>> The GC will only call destructors when it deletes an object, i.e. when it runs a collection. There's no guarantee that a collection will happen.
>
> Ah, so when GC collects an object, it calls destructor. It sounded as if it's not guaranteed at all.

Currently it isn't, because the GC sometimes lacks type information, e.g. for dynamic arrays.
May 13, 2014
On Tuesday, 13 May 2014 at 14:59:42 UTC, Kagamin wrote:
> On Tuesday, 13 May 2014 at 12:18:06 UTC, Manu via Digitalmars-d wrote:
>> It completely undermines the point. If you're prepared to call
>> finalise, when you might as well call free... Every single detail
>> required to perform full manual memory management is required to use
>> finalise correctly.
>> I see absolutely no point in a GC when used with objects that require
>> you to manually call finalise anyway.
>
> Well, the GC doesn't run immediately, so you can't do eager resource management with it. The GC manages memory, not other resources, and lots of people do see a point in it: Java and C# are industrial-quality technologies in wide use.
>
>> ARC release is eager. It's extremely common that destructors either
>> expect to be called eagerly, or rely on proper destruction ordering.
>> Otherwise you end up with finalise again, read: unsafe manual memory
>> management :/
>
> No language will figure out all algorithms for you, but this looks like a rare scenario: kernel objects, for example, don't require ordered destruction.
> The finalizer will be called when the GC collects the object; it's a last-resort cleanup, but it's not as unsafe as it used to be.

It's not (memory) unsafe because you cannot delete live objects accidentally, but it's "unsafe" because it leaks resources. Imagine a file object that relies on the destructor closing the file descriptor. You will quickly run out of FDs...

I only see two use cases for finalizers (as opposed to destructors):

1.) Release manually allocated objects (or even ARC objects) that belong to the finalized object, i.e. releasing dependent objects. This, of course _must not_ involve critical external resources like FDs or temporary files.

2.) Implement weak references.