On 17 April 2014 03:37, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
On 4/16/2014 4:50 AM, Manu via Digitalmars-d wrote:
I am convinced that ARC would be acceptable,

ARC has very serious problems with bloat and performance.

This is the first I've heard of it, and I've been going on about it for ages.


Every time a copy is made of a pointer, the ref count must be dealt with, engendering bloat and slowdown. C++ deals with this by providing all kinds of ways to bypass doing this, but the trouble is that doing so is totally unsafe.

Obviously, a critical part of ARC is the compiler's ability to eliminate redundant inc/dec sequences. At which point your 'every time' assertion is false. C++ can't do ARC, so it's not comparable.
With proper elimination, transferring ownership costs nothing; the cost falls only on duplication and destruction, and those are moments where I've deliberately committed to creating or destroying an instance of something, at which point I'm happy to pay for an inc/dec. Creation/destruction are rarely high-frequency operations.
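To make the cost points concrete, something like this is the shape I have in mind (a rough sketch only; 'RC' is a made-up wrapper, not an existing D type). The count is only touched on copy (postblit) and destruction, and returning a local is a move, so transferring ownership out of a function costs nothing:

    import core.stdc.stdlib : malloc, free;

    // Illustrative only -- not a real library type.
    struct RC(T)
    {
        private T*   ptr;
        private int* count;

        static RC create(T value)
        {
            RC r;
            r.ptr    = cast(T*) malloc(T.sizeof);
            *r.ptr   = value;
            r.count  = cast(int*) malloc(int.sizeof);
            *r.count = 1;
            return r;            // moved out to the caller: no inc/dec here
        }

        ref T get() { return *ptr; }

        this(this)               // postblit runs on every copy: the inc
        {
            if (count) ++*count;
        }

        ~this()                  // the dec; the last owner frees
        {
            if (count && --*count == 0)
            {
                free(ptr);
                free(count);
            }
        }
    }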

Have you measured the impact? I can say that in realtime code and embedded code in general, I'd be much happier to pay a regular inc/dec cost (a known, constant quantity) than commit to unknown costs at unknown times.
I've never heard of Obj-C users complaining about the inc/dec costs.

If inc/dec becomes a limiting factor in hot loops, there are lots of things you can do to eliminate it from your loops. I just don't buy that this is a significant performance penalty, but I can't say so experimentally... can you?
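For example, building on the RC sketch above: iterate by ref and no copies are made in the loop body, so the count isn't touched per iteration at all; the only inc/dec left is wherever the elements were created.

    // Sketch: no ref traffic inside the hot loop.
    long total(RC!long[] values)
    {
        long sum = 0;
        foreach (ref v; values)   // borrow in place: no postblit, no inc/dec
            sum += v.get;
        return sum;
    }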

How often does ref fiddling occur in reality? My guess is that with redundancy elimination, it would be surprisingly rare, and insignificant.
I can imagine that I would be happy with this known, controlled, and controllable cost. It comes with a whole bunch of benefits for realtime/embedded use (immediate destruction, works in little-to-no-free-memory environments, predictable costs, etc).


A further problem with ARC is the inability to mix ARC references with non-ARC references, which seriously hampers generic code.

That's why the only workable solution is that all references are ARC references.
The obvious complication is reconciling malloc pointers, but I'm sure this can be addressed with some creativity.

I imagine it would look something like:
By default, pointers are fat: struct ref { void* ptr; ref_t* rc; }
malloc pointers could conceivably just have a null entry for 'rc' and therefore interact comfortably with rc pointers.
I imagine that a 'raw-pointer' type would be required to refer to a thin pointer. Raw pointers would implicitly cast to fat pointers, and a fat->thin cast may throw if the fat pointer's rc is non-null, or be a compile error if that can be known at compile time.
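Roughly this shape, for illustration (FatPtr/RefCount are made-up names):

    struct RefCount { size_t count; }

    // A fat pointer carries its count, or null if the memory isn't counted.
    struct FatPtr(T)
    {
        T*        ptr;
        RefCount* rc;    // null => plain malloc/stack memory, nothing to count

        // stands in for the implicit raw -> fat conversion
        this(T* raw) { ptr = raw; rc = null; }

        // fat -> thin: only allowed when there is no count to maintain
        T* toThin()
        {
            if (rc !is null)
                throw new Exception("can't strip the count from an rc pointer");
            return ptr;
        }
    }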

Perhaps a solution is possible where an explicit rc record is not required (such that all pointers remain 'thin' pointers)...
A clever hash of the pointer itself can look up the rc?
Perhaps the rc can be found at ptr[-1]? But then how do you know if the pointer is rc allocated or not? An unlikely sentinel value at ptr[-1]? Perhaps the virtual memory page can imply whether pointers allocated in that region are ref counted or not? Some clever method of assigning the virtual address space so that recognition of rc memory can amount to testing a couple of bits in pointers?
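A rough sketch of the ptr[-1] flavour, with the reserved-range recognition test (the range itself is pure invention):

    import core.stdc.stdlib : malloc;

    // Hypothetical reserved address range for rc allocations.
    enum size_t rcRegionStart = 0x4000_0000;
    enum size_t rcRegionEnd   = 0x8000_0000;

    bool isRefCounted(const void* p)
    {
        auto a = cast(size_t) p;
        return a >= rcRegionStart && a < rcRegionEnd;
    }

    // The count lives in the word immediately before the user pointer.
    size_t* refCountOf(void* p)
    {
        return cast(size_t*) p - 1;
    }

    void* rcAlloc(size_t size)
    {
        // a real allocator would carve this out of the reserved region;
        // plain malloc is used here only to keep the sketch short
        auto block = cast(size_t*) malloc(size + size_t.sizeof);
        block[0] = 1;        // initial count at ptr[-1]
        return block + 1;
    }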

I'm just making things up, but my point is, there are lots of creative possibilities, and I have never seen any work to properly explore the options.

and I've never heard anyone suggest
any proposal/fantasy/imaginary GC implementation that would be acceptable...

Exactly.

So then consider ARC seriously. If it can't work, articulate why. I still don't know; nobody has told me.
It works well in other languages, and as far as I can tell, it has the potential to produce acceptable results for _all_ D users.
iOS is a competent realtime platform; Apple are well known for their commitment to silky-smooth, jitter-free UI and general feel. Android, on the other hand, is a perfect example of why GC is not acceptable.


In the complete absence of a path towards an acceptable GC implementation, I'd prefer to see people who know what they're talking about explore how refcounting could be used instead.
GC-backed ARC sounds like it would acceptably automate the circular-reference catching that people fuss about, while still providing a workable solution for embedded/realtime users: disable (or don't link) the backing GC, and make sure you mark weak references properly.
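By 'mark weak references properly' I mean the usual back-pointer discipline; a sketch (using the illustrative RC from earlier, with the weak side shown as a plain uncounted pointer):

    struct Doc
    {
        RC!Page page;   // strong reference: the Doc keeps its Page alive
    }

    struct Page
    {
        Doc* owner;     // weak back-reference: uncounted, so there's no
                        // Doc <-> Page cycle for the refcounts to leak
    }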

I have, and I've worked with a couple others here on it, and have completely failed at coming up with a workable, safe, non-bloated, performant way of doing pervasive ARC.

Okay. Where can I read about that? It doesn't seem to have surfaced; at least, it was never presented in response to the many times I've raised the topic.
What are the impasses?

I'm very worried about this. ARC is the only imaginary solution I have left. Failing that, we make a long-term commitment to a total fracturing of memory allocation techniques, just like C++ today, where interaction between libraries is always a massive pain in the arse. It's one of the most painful things about C/C++, and perhaps one of the primary causes of incompatibility between libraries and frameworks. This will transfer into D, but it's much worse in D because of the relatively high number of implicit allocations ('~', closures, etc).
Frameworks and libraries become incompatible with each other, which is a problem in C/C++ that modern languages (Java, C#, Python, etc) typically don't suffer from.

My feeling is that if D doesn't transcend these fundamental troubles we wrestle with in C++, then D is a stepping stone rather than a salvation. @nogc, while seemingly simple and non-destructive, feels kinda like a commitment, or at least an acceptance, of fracturing allocation paradigms between codebases. Like I said before, I kinda like the idea of @nogc, but I'm seriously concerned about what it implies...