April 17, 2014
On Thursday, 17 April 2014 at 09:55:38 UTC, Ola Fosheim Grøstad wrote:
> On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
>> Any iOS device runs circles around those systems, hence why I always like to make clear it was Apple's failure to make a workable GC in a C based language and not the virtues of pure ARC over pure GC.
>
> I am not making an argument for pure ARC. Objective-C allows you to mix, and OS X is most certainly not pure ARC based.
>
> If we go back in time to the timeslot you point to even C was considered waaaay too slow for real time graphics.
>
> On the C64 and the Amiga you wrote in assembly and optimized for the hardware. E.g. using hardware scroll register on the C64 and the copperlist (a specialized scanline triggered processor writing to hardware registers) on the Amiga. No way you could do real time graphics in a GC backed language back then without a dedicated engine with HW support. Real time audio was done with DSPs until the mid 90s.

Sure, old demoscener here.
April 17, 2014
On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via Digitalmars-d wrote:
> Yes, the current attribute situation in D is a mess.

A more coherent D syntax would make the language more approachable. I find the current syntax to be somewhat annoying.

I'd also like to see coherent naming conventions for attributes etc., e.g.:

@nogc    // assert/prove no gc (for compiled code)
@is_nogc // assume/guarantee no gc (for linked code, or "unprovable" code)
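
For illustration, a rough sketch of how the two might sit side by side. Note that @is_nogc is only the naming proposal above, not an existing compiler attribute; in the sketch it is declared as an ordinary UDA purely so the example compiles, and nothing about it is actually checked:

    // Hypothetical marker standing in for the proposed "assume/guarantee" form.
    enum is_nogc;

    @nogc int sum(const int[] a)   // compiler proves: no GC allocation inside
    {
        int s = 0;
        foreach (x; a)
            s += x;
        return s;
    }

    // Programmer asserts that the linked C code never touches the GC;
    // nothing is verified here, which is exactly the distinction proposed.
    @is_nogc extern(C) void c_audio_callback();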
April 17, 2014
On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via Digitalmars-d wrote:
> On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:
>> On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
>>> On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
>>>> http://wiki.dlang.org/DIP60
>>>>
>>>> Start on implementation:
>>>>
>>>> https://github.com/D-Programming-Language/dmd/pull/3455
>>>
>>> This is a good start, but I am sure I am not the only person who thought "maybe we should have this on a module level". This would allow people to nicely group pieces of the application that should not use GC.
>> 
>> Sure it does.
>> 
>> module mymodule;
>> @nogc:
>> 
>>      void myfunc(){}
>> 
>>      class MyClass {
>>          void mymethod() {}
>>      }
>> 
>> 
>> Everything in the above code has @nogc applied to it.
>> Nothing special about it; you can do it for most attributes, like
>> static, final and UDAs.
>
> It does not work like that. User-defined attributes only apply to
> the current scope, i.e. your MyClass.mymethod() would *not* have the
> attribute. With built-in attributes it becomes more "interesting" -
> for example '@safe' will include child scopes, but 'nothrow' won't.
>
> Yes, the current attribute situation in D is a mess. No, attribute
> inference isn't the answer.
>
> artur

Good point, yes: in the case of a class/struct, its methods won't have
the attribute applied to them.
Other than manually adding it to the start of those declarations, I don't
see what can be done. Either that, or we need language changes, e.g.

@nogc
module mymodule;

@("something")
module mymodule;

Well, it's a possible option for improvement. Either way, I'm not going to advocate for it.
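
In the meantime, assuming Artur is right that a module-level attribute doesn't reach member functions, the manual workaround would look roughly like this (sketch only):

    module mymodule;
    @nogc:                      // covers the module-level declarations below

    void myfunc() {}            // gets @nogc from the label above

    class MyClass
    {
        @nogc:                  // repeated by hand inside the aggregate
        void mymethod() {}
    }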
April 17, 2014
On 17 April 2014 18:22, Paulo Pinto via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".
>

The argument is that GC is not appropriate for various classes of software;
for those it is simply unacceptable. No GC that anyone has yet
imagined/proposed will address this fact.
ARC offers a solution that is usable by all parties. We're not making
comparisons between contestants or their implementation quality here; GC is
not in the race.


April 17, 2014
I'm not convinced that any automatic memory management scheme will buy much for real-time applications. Generally, with real-time processes, you need to pre-allocate. I think GC could be feasible for a real-time application if the GC is precise and collections are scheduled instead of run at arbitrary times. Scoped memory also helps.
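
To illustrate the pre-allocation point, a minimal sketch (the Mixer type and the processing it does are made up for the example):

    // Allocate everything before entering the real-time path,
    // then keep the hot loop free of any allocation.
    struct Mixer
    {
        float[] scratch;

        this(size_t maxFrames)
        {
            scratch = new float[](maxFrames);   // one-time, non-real-time setup
        }

        @nogc nothrow
        void process(const float[] input, float[] output)
        {
            // Real-time callback: only touches pre-allocated storage.
            foreach (i, sample; input)
                output[i] = sample * 0.5f + scratch[i];
        }
    }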
April 17, 2014
On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:
> ARC offers a solution that is usable by all parties.

Is this a proven statement?

If that paper is right then ARC with cycle management is in fact equivalent to Garbage Collection.
Do we have evidence to the contrary?


My very vague reasoning on the topic:

Sophisticated GCs use various methods to avoid scanning the whole heap, and by doing so they in fact implement something equivalent to ARC, even if it doesn't appear that way on the surface. In the other direction, ARC ends up implementing a GC to deal with cycles. I.e.

Easy work (normal data): A clever GC effectively implements ARC. ARC does what it says on the tin.

Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*. ARC effectively implements a GC.

*in the normal sense, not GC-jargon.

Ergo they aren't really any different?
April 17, 2014
On Wed, 16 Apr 2014 13:39:36 -0400, Walter Bright <newshound2@digitalmars.com> wrote:

> On 4/16/2014 1:49 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang@gmail.com> wrote:
>> Btw, I think you should add @noalloc also which prevents both new and malloc. It
>> would be useful for real time callbacks, interrupt handlers etc.
>
> Not practical. malloc() is only one way of allocating memory - user defined custom allocators are commonplace.

More practical:

Mechanism for the compiler to apply arbitrary "transitive" attributes to functions.

In other words, some mechanism by which you can tell the compiler "all the functions this @someattribute function calls must have @someattribute attached to them", and which also applies the attribute automatically for templates.

Then, you can come up with whatever restrictive schemes you want.

Essentially, this is the same as @nogc, except the compiler has special hooks to the GC (e.g. new) that need to be handled. The compiler has no such hooks for C malloc, or whatever allocation scheme you use, so it's all entirely up to the library and user code.
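
As a rough taste of the idea, a library-only approximation is possible today with UDAs, though it is neither transitive nor compiler-enforced, which is the whole point of the proposal (sketch; the @noalloc marker and names are made up):

    import std.traits : hasUDA;

    enum noalloc;   // made-up marker attribute, not a language feature

    // Only accepts callees that carry the @noalloc tag; the proposed compiler
    // mechanism would additionally check everything *they* call, transitively.
    auto callChecked(alias fun, Args...)(Args args)
        if (hasUDA!(fun, noalloc))
    {
        return fun(args);
    }

    @noalloc int twice(int x) { return 2 * x; }

    void main()
    {
        auto y = callChecked!twice(21);   // OK: twice is tagged @noalloc
        // callChecked!someUntaggedFn(1); // would be rejected by the constraint
    }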

-Steve
April 17, 2014
On 17 April 2014 18:52, via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
>
>> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC is better than the crappy GC implementation we have done".
>>
>
> I have never seen a single instance of a GC based system doing anything smooth in the realm of audio/visual real time performance without being backed by a non-GC engine.
>
> You can get decent performance from GC backed languages on the higher level constructs on top of a low level engine. IMHO the same goes for ARC. ARC is a bit more predictable than GC. GC is a bit more convenient and less predictable.
>
> I think D has something to learn from this:
>
> 1. Support for manual memory management is important for low level engines.
>
> 2. Support for automatic memory management is important for high level code on top of that.
>
> The D community is torn because there is some idea that libraries should assume point 2 above and then be retrofitted to point 1. I am not sure if that will work out.
>

See, I just don't find managed memory incompatible with 'low level'
realtime or embedded code, even on tiny microcontrollers in principle.
ARC would be fine in low level code, assuming the language supported it to
the fullest of its abilities. I'm confident that programmers would learn
its performance characteristics and be able to work effectively with it in
very little time.
It's well understood, and predictable. You know exactly how it works, and
precisely what the costs are. There are plenty of techniques to move any
ref fiddling out of your function if you identify that to be the source of
a bottleneck.
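For instance, something as simple as borrowing a plain reference for the hot
path (rough sketch, using Phobos' RefCounted as a stand-in for compiler ARC):

    import std.typecons : RefCounted;

    struct Image { float[] pixels; }

    // Hot path borrows a plain ref: no count traffic per iteration,
    // or even per call.
    float total(ref Image img) @nogc nothrow
    {
        float s = 0;
        foreach (p; img.pixels)
            s += p;
        return s;
    }

    void main()
    {
        auto rc = RefCounted!Image(Image(new float[](1024)));
        // The count only moves when rc is copied or destroyed,
        // not inside total's inner loop.
        auto sum = total(rc.refCountedPayload);
    }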

I think with some care and experience, you could use ARC just as
effectively as full manual memory management in the inner loops, but also
gain the conveniences it offers on the periphery where the performance
isn't critical.
_Most_ code exists in this periphery, and therefore the importance of that
convenience shouldn't be underestimated.


> Maybe it is better to just say that structs are bound to manual memory
> management and classes are bound to automatic memory management.
>
> Use structs for low level stuff with manual memory management.
> Use classes for high level stuff with automatic memory management.
>
> Then add language support for "union-based inheritance" in structs with a special construct for programmer-specified subtype identification.
>
> That is at least conceptually easy to grasp and the type system can more easily safeguard code than in a mixed model.
>

No. It misses basically everything that compels the change. Strings, '~',
closures. D largely depends on its memory management. That's the entire
reason why library solutions aren't particularly useful.
I don't want to see D evolve into another C++, where libraries/frameworks
are separated or excluded by allocation practice.

Auto memory management in D is a reality. Unless you want to build yourself into a fully custom box (I don't!), then you have to deal with it. Any library that wasn't written by a gamedev will almost certainly rely on it, and games are huge complex things that typically incorporate lots of libraries. I've spent my entire adult lifetime dealing with these sorts of problems.


> Most successful frameworks that allow high-level programming have two
> layers:
> - Python/heavy duty c libraries
> - Javascript/browser engine
> - Objective-C/C and Cocoa / Core Foundation
> - ActionScript / c engine
>
> etc
>
> I personally favour the more integrated approach that D appears to be aiming for, but I am somehow starting to feel that for most programmers that model is going to be difficult to grasp in real projects, conceptually. Because they don't really want the low level stuff. And they don't want to have their high level code bastardized by low level requirements.
>
> As far as I am concerned D could just focus on the structs and the low level stuff, and then later try to work in the high level stuff. There is no efficient GC in sight and the language has not been designed for it either.
>
> ARC with whole-program optimization fits better into the low-level paradigm than GC. So if you start from low-level programming and work your way up to high-level programming then ARC is a better fit.
>

The thing is, D is not particularly new, it's pretty much 'done', so there
will be no radical change in direction like you seem to suggest.
But I generally agree with your final points.

The future is not manual memory management. But D seems to be pushing us
back into that box without a real solution to this problem.
Indeed, it is agreed that there is no fantasy solution via GC on the
horizon... so what?

Take this seriously. I want to see ARC absolutely killed dead rather than dismissed.


April 17, 2014
On 2014-04-17 03:13:48 +0000, Manu via Digitalmars-d <digitalmars-d@puremagic.com> said:

> Obviously, a critical part of ARC is the compiler's ability to reduce
> redundant inc/dec sequences. At which point your 'every time' assertion is
> false. C++ can't do ARC, so it's not comparable.
> With proper elimination, transferring ownership results in no cost, only
> duplication/destruction, and those are moments where I've deliberately
> committed to creation/destruction of an instance of something, at which
> point I'm happy to pay for an inc/dec; creation/destruction are rarely
> high-frequency operations.

You're right that transferring ownership does not cost anything with ARC. What does cost is return values and temporary local variables.

While it's nice to have a compiler that'll elide redundant retain/release pairs, function boundaries can often make this difficult. Take this first example:

	Object globalObject;

	Object getObject()
	{
		return globalObject; // implicit: retain(globalObject)
	}

	void main()
	{
		auto object = getObject();
		writeln(object);
		// implicit: release(object)
	}

It might not be obvious, but here the getObject function *has to* increment the reference count by one before returning. There's no other convention that'll work because another implementation of getObject might return a temporary object. Then, at the end of main, globalObject's reference counter is decremented. Only if getObject gets inlined can the compiler detect the increment/decrement cycle is unnecessary.

But wait! If writeln isn't pure (and surely it isn't), then it might change the value of globalObject (you never know what's in Object.toString, right?), which will in turn release object. So main *has to* increment the reference counter if it wants to make sure its local variable object is valid until the end of the writeln call. Can't elide here.

Let's take this other example:

	Object globalObject;
	Object otherGlobalObject;

	void main()
	{
		auto object = globalObject; // implicit: retain(globalObject)
		foo(object);
		// implicit: release(object)
	}

Here you can elide the increment/decrement cycle *only if* foo is pure. If foo is not pure, then it might set another value to globalObject (you never know, right?), which will decrement the reference count and leave the "object" variable in main the sole owner of the object. Alternatively, if foo is not pure but instead gets inlined it might be provable that it does not touch globalObject, and elision might become a possibility.
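
For contrast, here is the pure case (sketch). A weakly pure foo cannot read or assign globalObject, so it can never drop the last reference behind main's back, and the retain/release pair around the call becomes provably redundant:

	Object globalObject;

	void foo(Object o) pure
	{
		// pure: no access to mutable globals, so globalObject stays put
	}

	void main()
	{
		auto object = globalObject; // implicit: retain(globalObject)
		foo(object);
		// implicit: release(object)  (this pair could now be elided)
	}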

I think ARC needs to be practical without eliding of redundant calls. It's a good optimization, but a difficult one unless everything is inlined. Many such elisions that would appear to be safe at first glance aren't provably safe for the compiler because of function calls.

-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

April 17, 2014
On 17 April 2014 21:57, John Colvin via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:
>
>> ARC offers a solution that is usable by all parties.
>>
>
> Is this a proven statement?
>
> If that paper is right then ARC with cycle management is in fact
> equivalent to Garbage Collection.
> Do we have evidence to the contrary?
>

People who care would go to the effort of manually marking weak references.
If you make a commitment to that in your software, you can eliminate the
backing GC. Turn it off, or don't even link it.
The backing GC is so that 'everyone else' would be unaffected by the shift.
They'd likely see an advantage too, in that the GC would have a lot less
work to do, since the ARC would clean up most of the memory (falling
generally into the realm you refer to below).
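
To be concrete about "manually marking weak references", a sketch; D has no
weak storage class today, so the annotation here is only a comment:

    class Document { Page[] pages; }   // strong: the document owns its pages

    class Page
    {
        // Marked weak by hand: this back-reference must not keep the Document
        // alive, or Document and Page would form a cycle plain ARC never frees.
        /* weak */ Document owner;
    }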


My very vague reasoning on the topic:
>
> Sophisticated GCs use various methods to avoid scanning the whole heap, and by doing so they in fact implement something equivalent to ARC, even if it doesn't appear that way on the surface. In the other direction, ARC ends up implementing a GC to deal with cycles. I.e.
>
> Easy work (normal data): A clever GC effectively implements ARC. ARC does
> what it says on the tin.
>
> Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*. ARC effectively implements a GC.
>
> *in the normal sense, not GC-jargon.
>
> Ergo they aren't really any different?


Nobody has proposed a 'sophisticated' GC for D. As far as I can tell, it's
considered impossible by the experts.
It also doesn't address the fundamental issue with the nature of a GC,
which is that it expects plenty of free memory. You can't use a GC in a
low-memory environment, no matter how it's designed. It allocates until it
can't, then spends a large amount of time re-capturing unreferenced memory.
As free memory decreases, this becomes more and more frequent.