October 28, 2015
On Wednesday, 28 October 2015 at 01:13:16 UTC, Walter Bright wrote:
>
> It's not just safety. If the compiler knows that reference counting is going on, it can potentially elide a lot of the overhead. If it is faced with an arbitrary library solution, it only has a worm's eye view of it, and cannot do higher level optimizations.

I don't think the compiler can do that much more, but before I address that point, let me mention that intrinsics can be added for inc and dec, which would be a much more lightweight addition to the language at large.

Now, as to why I think this wouldn't gain that much. First, if exceptions can be thrown, then all bets are pretty much off, as inc and dec no longer come in pairs. So we are down to the no-exception situation. In that case, the pairs are fairly visible to the compiler and can be optimized away or combined; that is already the kind of thing optimizers are good at.

But if so, how do you explain the C++ situation (or ObjC's), where nothing is elided?

Well, there is a major difference with these languages: they share by default. It means that inc and dec must be atomic and ordered, or synchronized, which means that, as far as the compiler is concerned, all bets are off and the optimizer can't do its job.

This doesn't really apply to D, so I don't expect it to be a problem. And even if it is, intrinsics can save the day by hinting the optimizer; there is no need for a heavyweight language addition.
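
For concreteness, the kind of visible pair meant here looks roughly like this (a minimal sketch of a hypothetical RC wrapper, not actual library code):

    struct RC(T)
    {
        private T* payload;
        private size_t* count;

        this(this)  // postblit: the inc
        {
            if (count) ++*count;
        }

        ~this()     // destructor: the dec
        {
            if (count && --*count == 0)
                destroy(*payload);  // deallocation omitted for brevity
        }
    }

    void consume(RC!int r) { /* uses r, does not store it */ }

    void caller(RC!int r)
    {
        // Passing by value copies r: one inc here, one dec when consume's
        // parameter is destroyed. With no exceptions and no sharing, both
        // halves of the pair are visible and can be combined or elided.
        consume(r);
    }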

Now, let's get back to the exception case, as it is IMO the most interesting one. What if one is willing to accept leakage when an exception is thrown? That would get the optimizer back into the game and remove a lot of the "dark matter", as Andrei calls it, which has a real cost in terms of icache pressure and exception-unwinding cost (one no longer has to resume each frame just to maintain refcounts).

If I had to go about this, I'd rather see the introduction of a scope(exit/success/failure)-like mechanism for destructors than something refcounting-specific.
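
For reference, a statement-level version of that mechanism already exists in the language; what I have in mind is something analogous that the compiler could use when lowering destructor calls (sketch only, acquire/release/commit/rollback are hypothetical):

    void work()
    {
        auto r = acquire();            // hypothetical resource
        scope(exit)    release(r);     // runs on every exit path
        scope(success) commit(r);      // runs only on normal exit
        scope(failure) rollback(r);    // runs only if an exception escapes
        // ... use r ...
    }

With such a lowering, the dec could be attached to scope(success) rather than scope(exit), so the failure path does no refcount maintenance at all; that is the "accept leakage on throw" trade-off described above.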

October 28, 2015
On Wednesday, 28 October 2015 at 03:55:25 UTC, deadalnix wrote:
> If I had to go about this, I'd rather see the introduction of a scope(exit/success/failure)-like mechanism for destructors than something refcounting-specific.

Can you expand upon this?
October 28, 2015
On Wednesday, 28 October 2015 at 03:55:25 UTC, deadalnix wrote:
> [...]
>
> But if so, how do you explain the C++ situation (or ObjC's), where nothing is elided?

Objective-C does elide refcounting; there are a few WWDC ARC sessions where it is mentioned. The same applies to Swift.

However, their exceptions work in a more RC-friendly way.
October 28, 2015
On 2015-10-27 22:50, Andrei Alexandrescu wrote:

> You can safely ignore the C++ part, the views are unsafe. I'd appreciate
> if you backed up your claim on Rust. -- Andrei

Rust is unsafe as well, when you interface with unsafe code.

-- 
/Jacob Carlborg
October 28, 2015
On 2015-10-28 07:07, Paulo Pinto wrote:

> However their exceptions work in a more RC friendly way.

Swift doesn't support exceptions. And in Objective-C, exceptions are like Errors in D: they should not be caught, and the program should terminate.

The error handling support added in Swift 2.0 is syntactic sugar for the Objective-C pattern of using NSError out parameters.

-- 
/Jacob Carlborg
October 28, 2015
On 2015-10-27 22:19, Andrei Alexandrescu wrote:

> That doesn't seem to be the case at all. -- Andrei

I'm not a C++ or Rust expert, but I think that in Rust, and with the new C++ guidelines, the idea is to use reference-counted pointers only for owning resources. If you want to pass the data to some part of the code that does not need to own the resource, a raw pointer should be used.
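
By way of illustration, a rough D analogue of that convention (the type and function names here are made up) would be to keep the refcounted handle at the owner and hand out plain references to code that doesn't own anything:

    import std.typecons : RefCounted;

    struct Node { int value; }

    // Non-owning: takes a reference, so no refcount traffic.
    // (The callee must not escape it, which is where escape
    // analysis would help.)
    void inspect(ref const Node n)
    {
        // read n.value ...
    }

    void owner()
    {
        auto handle = RefCounted!Node(42);  // owning, reference-counted
        inspect(handle.refCountedPayload);  // pass a non-owning view
    }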

-- 
/Jacob Carlborg
October 28, 2015
On Wednesday, 28 October 2015 at 08:07:40 UTC, Jacob Carlborg wrote:
> On 2015-10-28 07:07, Paulo Pinto wrote:
>
>> However, their exceptions work in a more RC-friendly way.
>
> Swift doesn't support exceptions. And in Objective-C, exceptions are like Errors in D: they should not be caught, and the program should terminate.
>
> The error handling support added in Swift 2.0 is syntactic sugar for the Objective-C pattern of using NSError out parameters.

Hence why I mentioned they are more RC-friendly.

Swift, because it doesn't have them.

Objective-C, because termination is the only option, so there is no need to worry about preserving counters.

I was typing on the phone, so I didn't want to provide the full explanation.

--
Paulo
October 28, 2015
On Wednesday, 28 October 2015 at 06:07:12 UTC, Paulo Pinto wrote:
> Objective-C does elide refcounting; there are a few WWDC ARC sessions where it is mentioned. The same applies to Swift.

Indeed, John McCall from Apple has already described how ARC works on these forums (astonishingly, nobody felt like thanking him for the input... :-/):

http://forum.dlang.org/post/hgmhgirfervrsvcghchw@forum.dlang.org

To what extent you can elide inc/dec depends on how you define and track ownership and whether you do whole program analysis, of course.

October 28, 2015
On 28 October 2015 at 11:13, Walter Bright via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 10/27/2015 11:10 AM, deadalnix wrote:
>>
>> I've made the claim that we should implement reference counting as a library many times, so I think I should make my position explicit. Indeed, RC requires some level of compiler support to be safe. That being said, the support does not need to be specific to RC. In fact, my position is that the language should provide some basic mechanism on top of which safe RC can be implemented, as a library.
>
> It's not just safety. If the compiler knows that reference counting is going on, it can potentially elide a lot of the overhead. If it is faced with an arbitrary library solution, it only has a worm's eye view of it, and cannot do higher level optimizations.

I just want to drop in that I strongly feel both points here; they are not at odds. I've been arguing for years now that D needs effective escape analysis, which would allow all sorts of safe allocation and lifetime patterns; and while it may enable some improved library solutions to refcounting, I think the key advantage is actually in making better and safer use of stack allocation. I think that is a much better focus when considering the need for comprehensive escape analysis tools.

That has little to do with the language also benefiting from RC primitives, such that the compiler is able to do a quality job of optimising ref-counting. It is a spectacularly prevalent pattern, particularly when used with libraries where the inc/dec functions are opaque indirect calls into some foreign lib and can't be optimised (this is the majority case in my experience). If they can't be wrapped by a language primitive whose calling pattern the compiler knows it can optimise, then the compiler has no power to optimise such opaque calls at all.

As an anecdote, since I operate almost exclusively via practical experience: my current project would heavily benefit from both, and they would each contribute to a strong case for migration to D. These 2 issues alone represent, by far, the greatest trouble we face with C++ currently.

RC is okay-ish in C++11 (with rvalue references), although it could be much better. For instance, the type mangling/wrapping induced by this sort of library solution always leads to awkward situations, e.g. the 'this' pointer in a method is not an RC object anymore! Methods can't give out pointers to themselves (e.g. signalling events where it's conventional to pass a 'sender' to the subscribers). Pretty massive fail!

But what we completely fail at is making good use of stack allocation; we fall back conservatively to heap allocations because we have no proof mechanism for containing temporary ownership. We need expressive escape analysis.

This is a very heavily object-oriented codebase, rife with shared pointers, with a strong focus on the external API and user extensibility. Focus on the public API implies conservative allocation habits; i.e. RC is prevalent because we don't want to place complex restrictions on users, and we must also be safe. If we had an effective escape analysis mechanism, we would gain a lot of opportunities to revert RC to stack allocations, because we could statically prove via the API that the user won't escape pointers.
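
A small existing example of that kind of API-level guarantee in D: marking a delegate parameter scope promises that the callee won't escape it, which allows the compiler to keep the caller's closure on the stack (sketch only, the names are hypothetical):

    // The library promises not to escape the delegate...
    void forEachItem(scope void delegate(int) visit)
    {
        foreach (i; 0 .. 10)
            visit(i);
    }

    void user()
    {
        int sum;
        // ...so the closure capturing 'sum' need not be heap-allocated.
        forEachItem((int i) { sum += i; });
    }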

The program consists of a typical hierarchical ownership structure, an arbitrarily shared generalised resource pool, a highly interactive scene graph with runtime datasets scaling to tens of gigabytes, and ridiculously abstract APIs (lots of painful C++ meta-magic). It is also realtime. GC was considered and rejected on the grounds that the program is realtime and operates on truly gargantuan working datasets.

It's the most ambitious thing I've ever written, and I am dying inside a little bit more every single day that I remain stuck with C++. I want to start writing front-end plugin code in D as soon as possible, which means, at the very least, comprehensive RC interaction.
October 28, 2015
On Wednesday, 28 October 2015 at 11:21:17 UTC, Manu wrote:
> RC is okay-ish in C++11 (with rvalue references), although it could be much better. For instance, the type mangling/wrapping induced by this sort of library solution always leads to awkward situations, e.g. the 'this' pointer in a method is not an RC object anymore! Methods can't give out pointers to themselves (e.g. signalling events where it's conventional to pass a 'sender' to the subscribers). Pretty massive fail!

Did you look into doing something like std::enable_shared_from_this? I use it pretty routinely in networking code (boost.asio), and while it is not as pretty as it could be, it does the trick.

 – David