February 26, 2015
Am Thu, 26 Feb 2015 01:55:14 +1000
schrieb Manu via Digitalmars-d <digitalmars-d@puremagic.com>:

> >> I agree. I would suggest if ARC were proven possible, we would
> >> like, switch.
> >> 
> >
> > I'd like to see ARC support in D, but I do not think it makes sense as a default.
> 
> Then we will have 2 distinct worlds. There will be 2 kinds of D code, and they will be incompatible... I think that's worse.

Excuse my ignorance, but I'm no longer sure what everybody in this thread is actually arguing about:

Andrei's WIP DIP74 [1] adds compiler-recognized AddRef/Release calls to classes. The compiler will _automatically_ call these. Of course the compiler can then also detect and optimize dead AddRef/Release pairs.

All the exception issues Walter described also apply here and they also apply for structs with destructors in general.

So I'd say DIP74 is basically ARC. Ironically, structs, which are better suited for RC right now, won't benefit from the optimizations. However, it'd be simple to also recognize Release/AddRef in structs to gain the necessary information for optimization. It's not even necessary to call these automatically; they can be called manually from the struct dtor as usual.

So what exactly is the difference between ARC and DIP74? Simply that it's not the default memory management method? I do understand that this makes a huge difference in practice; OTOH, switching all D classes to ARC seems very unrealistic, as it could break code in subtle ways (cycles).

But even if one wants ARC as the default, DIP74 is a huge step forward. If library maintainers are convinced that ARC works well, it will become a quasi-default. I'd expect most game-related libraries (SFMLD, Derelict) to switch to RC classes.

[1] http://wiki.dlang.org/DIP74
February 26, 2015
On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
> https://developer.apple.com/news/?id=02202015a
>
> Interesting...
>
> Apple is dropping GC in favor of automatic reference counting. What are the benefits of ARC over GC? Is it just about predictability of resource freeing? Would ARC make sense in D?

I'm not here to be liked, so let's throw the bomb:

http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html

The guy, despite... well, you know, is right: smart pointers are for beginners. The more experience you have, the more you can manage memory on your own.


February 26, 2015
On Thursday, 26 February 2015 at 12:06:53 UTC, Baz wrote:
> On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
>> https://developer.apple.com/news/?id=02202015a
>>
>> Interesting...
>>
>> Apple is dropping GC in favor of automatic reference counting. What are the benefits of ARC over GC? Is it just about predictability of resource freeing? Would ARC make sense in D?
>
> I'm not here to be liked, so let's throw the bomb:
>
> http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html
>
> The guy, despite... well, you know, is right: smart pointers are for beginners. The more experience you have, the more you can manage memory on your own.

I am behind a firewall so I cannot read it; still, here goes my take on it.

Only people working on their own, with full control of 100% of the application code and no third-party libraries, can manage memory on their own.

Add a third-party library whose code you cannot change, teams with more than 10 developers, distributed teams, teams with varied skill sets, teams with member rotation, and overtime, and I don't believe anyone in that context can keep the whole ownership graph of heap-allocated memory in their head.

I have seen it happen lots of times in enterprise projects.

Besides, the daily CVE exploit notifications are proof of that.


--
Paulo
February 26, 2015
On Thursday, 26 February 2015 at 11:28:16 UTC, ponce wrote:
> That's the problem with futures/promises: you spend your time explaining who waits for what instead of just writing what things do.

There are many ways to do futures, but I don't think it is all that complicated for the end user in most cases. E.g.

auto a = request_db1_async();
auto b = request_db2_async();
auto c = request_db3_async();
auto d = compute_stuff_async();
auto r = wait_for_all(a, b, c, d);
if( has_error(r) ) return failure;

> No. If I can't open a file I'd better not create a File object in an invalid state. Invalid states defeats RAII.

This is the attitude I don't like, because it means you have to use pointers when you could just embed the file handle. That leads to more allocations and more cache misses.

> So you can't re-enter that mutex as you asked, so I will grant you a scopedLock, but it is in an errored state so you'd better check that it is valid!

A file can always enter an errored state. So can OpenGL. That doesn't mean you have to react immediately in all cases.

When you mmap a file you write to the file indirectly without any function calls. The disk could die... but how do you detect that? You wait until you msync() and detect it late. It is more efficient.
February 26, 2015
On 2/25/2015 8:51 PM, weaselcat wrote:
> Is this implying you've begun work on a compacting D collector,

No.

> or are you relating to your Java experiences?

Yes.

February 26, 2015
On 2/25/2015 11:01 PM, Benjamin Thaut wrote:
> What you are describing is a compacting GC and not a generational GC. Please
> just describe in words how you would do a generational GC without write
> barriers. Because just as deadalnix wrote, the problem is tracking pointers
> within the old generation that point to the new generation.

It was a generational GC; I described earlier how it used page faults instead of write barriers. I eventually removed the page-fault system because it was faster without it.
February 26, 2015
On 2/25/2015 11:05 PM, Russel Winder via Digitalmars-d wrote:
> Have you studied the G1 GC?

Nope.

> Any "data" from the 1990s regarding Java,

1998 or so. I wrote D's GC some years later.

> and indeed any other programming language with a lifespan of 20+years, is
> suspect, to say the least. Also what is said above is not data, it is
> opinion.

The day C++ compilers routinely generate write barriers is the day I believe they don't have runtime cost.
February 26, 2015
On 2/25/2015 9:01 PM, deadalnix wrote:
> On Thursday, 26 February 2015 at 04:11:42 UTC, Walter Bright wrote:
>> On 2/25/2015 6:57 PM, deadalnix wrote:
>>> You seem to avoid the important part of my message: write barriers tend to be
>>> very cheap on immutable data. Because, as a matter of fact, you don't write to
>>> immutable data (in fact you do to some extent, but the amount of writing is
>>> minimal).
>>>
>>> There is no reason not to leverage this for D, and Java comparisons are
>>> irrelevant on the subject, as Java does not have the concept of immutability.
>>>
>>> In the same way, we can use the fact that TL data are not supposed to refer to
>>> each other.
>>
>> Of course, you don't pay a write barrier cost when you don't write to data;
>> whether it is immutable or not is irrelevant.
>
> It DOES matter, as users tend to write way more mutable data than immutable data.
> Pretending otherwise is ridiculous.

I don't really understand your point. Write barriers are emitted for code that is doing a write. This doesn't happen for code that doesn't do writes. For example:

   x = 3;     // write barrier emitted for write to x!
   y = x + 5; // no write barrier emitted for read of x!

How would making x immutable make (x + 5) faster?

February 26, 2015
On 2/25/2015 11:50 PM, Paulo Pinto wrote:
> Maybe it failed the goal of having C++ developers fully embrace .NET, but it
> achieved its goal of providing an easier way to integrate existing C++ code into
> .NET applications, instead of the P/Invoke dance.

I wasn't referring to technical success. There is no doubt that multiple pointer types technically works. I was referring to acceptance by the community.

Back in the old DOS days, there were multiple pointer types (near and far). Programmers put up with that because it was the only way, but they HATED HATED HATED it.
February 26, 2015
On Thursday, 26 February 2015 at 12:06:53 UTC, Baz wrote:
> On Saturday, 21 February 2015 at 19:20:48 UTC, JN wrote:
>> https://developer.apple.com/news/?id=02202015a
>>
>> Interesting...
>>
>> Apple is dropping GC in favor of automatic reference counting. What are the benefits of ARC over GC? Is it just about predictability of resource freeing? Would ARC make sense in D?
>
> I'm not here to be liked, so let's throw the bomb:
>
> http://www.codingwisdom.com/codingwisdom/2012/09/reference-counted-smart-pointers-are-for-retards.html
>
> The guy, despite... well, you know, is right: smart pointers are for beginners. The more experience you have, the more you can manage memory on your own.

Programmers who think they are so smart they can handle everything are the very first to screw up, because of a lack of self-awareness. Good, now that we've established that the author is the retard here, how seriously do we take his rant?