April 21, 2014
Here are two very good reasons to avoid extensive ref-counting:

1. Transactional memory (you don't want a lock on two reads)

2. Cache coherency (you don't want barriers everywhere)

Betting everything on ref counting is the same as saying no to upcoming CPUs.
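
For point 2, here is a purely illustrative D sketch (not taken from any real ARC implementation) of what a thread-safe reference count forces on every copy and destruction:

// Purely illustrative: a shared count turns every copy into an atomic
// read-modify-write, which implies a memory barrier (a lock-prefixed
// instruction on x86) even when no other thread ever touches the object.
import core.atomic : atomicOp;

void addRef(shared(int)* count)
{
    atomicOp!"+="(*count, 1);
}

void releaseRef(shared(int)* count)
{
    // atomicOp returns the updated value, so 0 means this was the last reference
    if (atomicOp!"-="(*count, 1) == 0)
    {
        // destroy and free the object here
    }
}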

IMO that means ARC is DOA. It might be useful for some high level objects… but I don't understand why one would think that it is a good low level solution. It is a single-threaded solution.
April 21, 2014
On Sunday, 20 April 2014 at 18:48:49 UTC, John Colvin wrote:
>> The way I understood your idea, was that a template could be marked @nogc, and yet still allow template arguments that themselves may gc.
>>
>> This can be accomplished by creating a unit test that passes non-allocating template parameters, and then verifying the instantiation is @nogc.
>
> The only way that works is if the unittest has coverage of all possible currently non-GC-using instantiations of all templates all the way down the call-tree.*
>
> Imagine the case where some function deep down the call-tree has a `static if(T == NastyType) doGCStuff();`.
>
> In order to protect against this, you have to check the internals of the entire call-tree in order to write the required unittest, and verify manually that you haven't missed a case every time anything changes.
>
> *alright, technically only those that can be instantiated by the function you're testing, but this still blows up pretty fast.

Looks like John has a similar thinking pattern for this specific case :P

Also, your proposal does not add any hygiene checks to non-template functions _and_ requires creating boilerplate output range mocks (for all possible duck types) plus extra static asserts for every function you may want to mark as weak @nogc. I don't see that as a clean solution that can actually be used in a library.
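
For illustration, the kind of per-function boilerplate I mean looks roughly like this, with made-up names and just one mock for one duck type (an attributed unittest standing in for the static assert):

// Hypothetical sketch of the per-function boilerplate under discussion.
// NullSink is a mock output range that performs no GC allocation.
struct NullSink
{
    void put(in char[]) @nogc nothrow {}
}

// The "weak @nogc" template we would like to vouch for.
void formatInto(Sink)(ref Sink sink, int value)
{
    // formatting code that must not allocate when the sink does not allocate
}

// One such test is needed per function and per mocked duck type; it fails
// to compile if this particular instantiation performs GC allocations.
@nogc unittest
{
    NullSink sink;
    formatInto(sink, 42);
}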

The real marketing value of @nogc is not to show that something like it is possible (it already is), but to show "hey, look how easy and clean it is!"
April 21, 2014
On 4/21/2014 1:29 PM, Steven Schveighoffer wrote:
> I think you are misunderstanding something. This is not for a pervasive
> ARC-only, statically guaranteed system. The best example he gives (and I agree
> with him) is iOS. Just look at the success of iOS, where the entire OS API is
> based on ARC (actually RC, with an option for both ARC and manual, but the
> latter is going away). If ARC was "so bad", the iOS experience would show it.
> You may have doubts, but I can assure you I can build very robust and performant
> code with ARC in iOS.

The thing is, iOS ARC cannot be statically guaranteed to be memory safe. This makes it simply not acceptable for D in the general case. It "works" with iOS because iOS allows all kinds of (unsafe) ways to escape ARC, and it must offer those ways because ARC on its own is not performant enough.

Kinda sorta memory safe, mostly memory safe, etc., is not a static guarantee.

There is JUST NO WAY that:

    struct RefCount {
        T* data;
        int* count;
    }

is going to be near as performant as:

    T*

1. A dereference requires two indirections. Cache performance, poof!

2. A copy requires two indirections to inc, two indirections to dec, and an exception unwind handler for dec.

3. Those two-word structs add to memory consumption.

As you pointed out, performant code is going to have to cache the data* value. That cannot be guaranteed memory safe.
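
To spell out points 1 and 2 in code, here is a schematic sketch (not an actual library type) of such a wrapper:

// Schematic sketch of the costs in points 1 and 2; not an actual library type.
struct RefCount(T)
{
    T*   data;
    int* count;

    // Point 1: reaching the value goes through the struct, then through data.
    ref T get() { return *data; }

    // Point 2: every copy reaches the counter through an extra indirection
    // and bumps it...
    this(this) { if (count) ++*count; }

    // ...and every scope exit, including exception unwinding, has to run
    // this decrement and the conditional free.
    ~this()
    {
        if (count && --*count == 0)
        {
            // free *data and the counter here
        }
    }
}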


>> I can't reconcile agreeing that ARC isn't good enough to be pervasive with
>> compiler technology eliminates unnecessary ARC overhead.
> It's pretty pervasive on iOS. ARC has been around since iOS 4.3 (circa 2011).

Pervasive means "for all pointers". This is not true of iOS. It's fine for iOS to do a half job of it, because the language makes no pretensions about memory safety. It is not fine for D to replace a guaranteed memory safe system with an unsafe, hope-your-programmers-get-it-right, solution.

April 22, 2014
On Monday, 21 April 2014 at 23:02:54 UTC, Walter Bright wrote:
> There is JUST NO WAY that:
>
>     struct RefCount {
>         T* data;
>         int* count;
>     }
>

This is actually quite efficient compared to the standard NSObject which uses a hashtable for refcounting:

http://www.opensource.apple.com/source/objc4/objc4-551.1/runtime/NSObject.mm
http://www.opensource.apple.com/source/objc4/objc4-551.1/runtime/llvm-DenseMap.h

This is how Core Foundation does it:

http://www.opensource.apple.com/source/CF/CF-855.11/CFRuntime.c

Pretty longwinded:

CFTypeRef CFRetain(CFTypeRef cf) {
    if (NULL == cf) { CRSetCrashLogMessage("*** CFRetain() called with NULL ***"); HALT; }
    if (cf) __CFGenericAssertIsCF(cf);
    return _CFRetain(cf, false);
}

static CFTypeRef _CFRetain(CFTypeRef cf, Boolean tryR) {
    uint32_t cfinfo = *(uint32_t *)&(((CFRuntimeBase *)cf)->_cfinfo);
    if (cfinfo & 0x800000) { // custom ref counting for object
        ...stuff deleted…
        refcount(+1, cf);
        return cf;
    }
    …lots of stuff deleted…
    return cf;
}
April 22, 2014
On 21/04/14 19:49, Frustrated wrote:

> Not quite. AST macros simply transform code. Attributes attach meta data
> to code. While I'm sure there is some overlap they are not the same.
>
> Unless AST macros have the ability to arbitrarily add additional
> contextual information to meta code, they can't emulate attributes.

I'm not saying we should emulate attributes; we already have those.

BTW, I'm pretty sure they could be implemented with macros. Just return the exact same AST that was passed in, but replace the top AST node with a node that is a subclass that adds the data for the UDA.

> E.g., Suppose you have D with AST macros but not attributes, how can you
> add them?
>
> In the dip, you have
>
> macro attr (Context context, Declaration decl)
> {
>      auto attrName = decl.name;
>      auto type = decl.type;
>
>      return <[
>          private $decl.type _$decl.name;
>
>          $decl.type $decl.name ()
>          {
>              return _$decl.name;
>          }
>
>          $decl.type $decl.name ($decl.type value)
>          {
>              return _$decl.name = value;
>          }
>      ]>;
> }
>
> class Foo
> {
>      @attr int bar;
> }
>
> but attr is not an attribute. It is a macro. @attr converts the "int
> bar" field into a private setter and getter. This has nothing to do with
> attributes.

Sure it does. @nogc could be implemented with AST macros.

@nogc void foo ()
{
    new Object;
}

macro nogc (Context context, Declaration decl)
{
    if (containsGCAllocation(decl))
        context.compiler.error(decl.name ~ " marked with @nogc performs GC allocations");

    return decl;
}

> (just cause you use the attr word and the @ symbol doesn't make it an
> attribute)
>
>
> I don't see how you could ever add attributes to D using AST macros
> above unless the definition of an AST macro is modified. [Again,
> assuming D didn't have attributes in the first place]
>
> This does not mean that AST macros could not be used to help define the
> generalized attributes though.
>
>
> What I am talking about is instead of hard coding attributes in the
> compiler, one abstracts and generalizes the code so that any attribute
> could be added in the future with minimal work.
>
> It would simply require one to add the built in attributes list, add the
> attribute grammar(which is used to reduce compound attributes), add any
> actions that happen when the attribute is used in code.
>
> e.g.,
>
> builtin_attributes = {
>
>      {pureness, pure, !pure/impure,
>          attr = any(attr, impure) => impure
>          attr = all(attr, pure) => pure
>      }
>
>      {gc, gc, !gc/nogc,
>          attr = any(attr, gc) => gc
>          attr = all(attr, nogc) => nogc
>      }
>      etc... }
>
> notices that pureness and gc have the same grammatical rules. Code would
> be added to handle the pureness and gc attributes when they are come
> across for optimization purposes.
>
> The above syntax is just made up and pretty bad but hopefully not too
> difficult to get the bigger picture.
>
> Every new built in attribute would just have to be added to the list
> above(easy) and code that uses it for whatever purpose would be added in
> the code where it belongs.
>
> User-defined attributes would essentially make the attributes list above
> dynamic, allowing the user to add to it. The compiler would only be told
> how to simplify the attributes using the grammar and would do so but
> would not have any code inserted because there is no way for the user to
> hook into the compiler properly(I suppose it could be done if the
> compiler was written in an oop like way).

The AST macros would provide a way to hook into the compiler. We already have a way to define attributes, that is, UDAs. What is missing is a way to add semantic meaning to the UDAs; that is where macros come in.

-- 
/Jacob Carlborg
April 22, 2014
On 22/04/14 01:02, Walter Bright wrote:

> The thing is, with iOS ARC, it cannot be statically guaranteed to be
> memory safe. This makes it simply not acceptable for D in the general
> case. It "works" with iOS because iOS allows all kinds of (unsafe) ways
> to escape it, and it must offer those ways because it is not performant.

So does D. That's why there is @safe, @trusted and @system. What is the unsafe part of ARC anyway?


-- 
/Jacob Carlborg
April 22, 2014
On 4/21/2014 11:51 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
> This is actually quite efficient compared to the standard NSObject which uses a
> hashtable for refcounting:

It's not efficient compared to pointers.

April 22, 2014
On 4/22/2014 12:11 AM, Jacob Carlborg wrote:
> On 22/04/14 01:02, Walter Bright wrote:
>
>> The thing is, with iOS ARC, it cannot be statically guaranteed to be
>> memory safe. This makes it simply not acceptable for D in the general
>> case. It "works" with iOS because iOS allows all kinds of (unsafe) ways
>> to escape it, and it must offer those ways because it is not performant.
>
> So does D. That's why there is @safe, @trusted and @system. What is the unsafe
> part of ARC anyway?

As I said, it is when it is bypassed for performance reasons.

April 22, 2014
On Tuesday, 22 April 2014 at 09:02:21 UTC, Walter Bright wrote:
> On 4/22/2014 12:11 AM, Jacob Carlborg wrote:
>> On 22/04/14 01:02, Walter Bright wrote:
>>
>>> The thing is, with iOS ARC, it cannot be statically guaranteed to be
>>> memory safe. This makes it simply not acceptable for D in the general
>>> case. It "works" with iOS because iOS allows all kinds of (unsafe) ways
>>> to escape it, and it must offer those ways because it is not performant.
>>
>> So does D. That's why there is @safe, @trusted and @system. What is the unsafe
>> part of ARC anyway?
>
> As I said, it is when it is bypassed for performance reasons.

A system that is automatically safe but can be manually managed for extra performance. That sounds very D-ish.
April 22, 2014
On Tuesday, 22 April 2014 at 09:01:20 UTC, Walter Bright wrote:
> On 4/21/2014 11:51 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
>> This is actually quite efficient compared to the standard NSObject which uses a
>> hashtable for refcounting:
>
> It's not efficient compared to pointers.

It isn't efficient compared to plain pointers if you use a blind ARC implementation. If you use ARC to track ownership of regions, then it can be efficient.

The real culprit is multithreading, but that can be resolved by putting the counters on cache lines that are local to each thread, either by offset or in TLS.

E.g., pseudocode for 8 refcounted pointers shared by 4 threads, with 32-byte cache lines, could be something along the lines of:

struct {
    func* destructor[8];   // cacheline -1
    void* ptr[8];          // cacheline 0
    uint  bitmask[8];      // cacheline 1: per-object bitmask, one bit per thread
    int   refcount[8*4];   // cachelines 2-5: one cacheline of counters per thread, initialized to -1
}

THREADOFFSET = (THREADID + 2) * 32

retain(ref) {   // ref is a pointer into cacheline 0
    if (increment(ref + THREADOFFSET) == 0) {            // first retain by this thread
        if (CAS_SET_BIT(ref + 32, THREADID) == THREADID) {
            HALT_DESTRUCTED()                             // object was already destructed
        }
    }
}

release(ref) {
    if (decrement(ref + THREADOFFSET) < 0) {              // this thread dropped its last reference
        if (CAS_CLR_BIT(ref + 32, THREADID) == 0) {       // no thread holds a reference any more
            call_destructor(ref - 32, *ref)
        }
    }
}