September 12, 2014
On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
> On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto wrote:
>> Am 11.09.2014 20:32, schrieb Daniel Alves:
>
>> It is incredible how Objective-C's ARC became a symbol for reference counting, instead of the living proof of Apple's failure to produce
>> a working GC for Objective-C that didn't crash every couple of seconds.
>
> I think I fail to grasp something here. For me, ARC is something that is managed at runtime: you have a counter on a chunk of memory and you increase it with each new reference to that memory, then you decrement it when a reference is dropped. In the end, when the counter reaches 0, you drop the chunk.
>
> OTOH, code analysis and automatically inserting free/delete where the programmer would/should have done it is not really that. It's a compile-time approach, no different from manual memory management.
>
> Which one is, in the end, the approach taken by Apple, and which one is the "true" ARC?...

ARC was a term popularized by Apple when they introduced the said feature in Objective-C.

In the GC literature it is plain reference counting.

ARC in Objective-C is a mix of both approaches that you mention.

It only applies to Objective-C classes that follow the retain/release patterns in use since the NeXTStep days. For structs, malloc(), or classes that don't follow the Cocoa patterns, only manual memory management is possible.

The compiler inserts the retain/release calls that a programmer would write manually, at the locations one would expect from the said patterns.

Then a second pass, via dataflow analysis, removes the retain/release pairs that are superfluous because the object's lifetime is confined to a single method/function block.

This way you get automatic reference counting, as long as those classes use the said patterns correctly. As a plus the code gets to interact with libraries that are clueless about ARC.
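
In hand-written form, the pattern the compiler automates looks roughly like this (a sketch in D rather than Objective-C; Obj, retain, release and useLocally are made-up names standing in for the Cocoa machinery):

import core.stdc.stdlib : malloc, free;

struct Obj { int refs; }

Obj* create()
{
    auto p = cast(Obj*) malloc(Obj.sizeof);
    p.refs = 1;
    return p;
}

Obj* retain(Obj* p)  { if (p) ++p.refs; return p; }
void release(Obj* p) { if (p && --p.refs == 0) free(p); }

void useLocally(Obj* o)
{
    retain(o);   // pass 1: inserted around the use, following the pattern
    // ... o is only read here and never escapes the function ...
    release(o);  // pass 2: dataflow analysis proves the pair redundant and drops it
}

void main()
{
    auto o = create();  // refs == 1
    useLocally(o);
    release(o);         // refs == 0, freed
}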

Now, having said this, when Apple introduced GC in Objective-C it was very fragile, only worked with Objective-C classes, was full of "take care of X when you do Y" and required all Frameworks on the project to have compatible build settings.

Of course, more often than not, the result was random crashes when using third-party libraries, something Apple never sorted out.

So ARC in Objective-C ended up being a better solution, due to interoperability issues, and not just because RC is better than GC.

--
Paulo

September 12, 2014
On Friday, 12 September 2014 at 12:39:51 UTC, Paulo  Pinto wrote:
> On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
>> On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto wrote:
>>> Am 11.09.2014 20:32, schrieb Daniel Alves:
>>

> ARC was a term popularized by Apple when they introduced the said feature in Objective-C.

Many thanks.
September 12, 2014
On Friday, 12 September 2014 at 08:50:17 UTC, po wrote:
>  But using modern C++11/14 + TBB it really isn't hard at all. It is fairly trivial to scale to N cores using a task-based approach. Smart pointers are rarely used; most C++ stuff is done by value.

Strings too?

>  For instance, I work on a game engine, almost everything is either by value or unique.
>
> The only stuff that is "shared" and thus requires ref counting are external assets (shaders, models, sounds, some GPU resources). These objects are also closed, and thus incapable of circular references.

For closed external resources one can often figure out ownership, and once that's done you don't even need smart pointers, since you already know where to destroy the object. The difficult task is doing that for all allocated memory everywhere. Duplication is certainly possible, but it kind of goes against efficiency.
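
A rough sketch of that idea in D (AssetCache, Shader and the method names are invented for illustration): with a single known owner, the destruction point is fixed and no counting is needed.

import core.stdc.stdlib : malloc, free;

struct Shader { int gpuHandle; }

struct AssetCache
{
    Shader*[] shaders;

    // The cache is the sole owner; callers only borrow the pointer.
    Shader* loadShader(int handle)
    {
        auto s = cast(Shader*) malloc(Shader.sizeof);
        s.gpuHandle = handle;
        shaders ~= s;
        return s;
    }

    // The one known destruction point; no reference counting needed.
    void unloadAll()
    {
        foreach (s; shaders)
            free(s);
        shaders = null;
    }
}

void main()
{
    AssetCache cache;
    auto shader = cache.loadShader(42);
    // ... render with shader ...
    cache.unloadAll();
}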
September 12, 2014
Am Fri, 12 Sep 2014 13:45:45 +0200
schrieb Jacob Carlborg <doob@me.com>:

> On 12/09/14 08:59, Daniel Kozak via Digitalmars-d wrote:
> 
> > toUpperInPlace could help little, but still not perfect
> 
> Converting text to uppercase doesn't work in-place in some cases. For example the German double S will take two letters in uppercase form.

The German "double S", I see ... Let me help you out of this.

The letter ß, named SZ, Eszett, sharp S, hunchback S, backpack S, Dreierles-S, curly S or double S in Switzerland, has become SS in upper case since 1967, because it never appears at the start of a word and thus has no upper case representation of its own. Before that, from 1926 on, the mapping was to SZ. So a very old Unicode library might give you incorrect results.

The uppercase letter I, on the other hand, depends on the locale. E.g. in English the lower case version is i, whereas in Turkish it is the dotless ı, because Turkish also has a dotted İ, which becomes i.
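
To make that concrete in D (a small sketch, assuming std.uni's toUpper applies the full case mapping here, which is exactly why toUpperInPlace cannot always stay in place):

import std.stdio : writeln;
import std.uni : toUpper;

void main()
{
    auto s = "Straße";
    // One code point (ß) maps to two (SS), so the uppercased string
    // cannot be guaranteed to fit into the original buffer.
    writeln(s.toUpper());  // expected: STRASSE
    // Locale-dependent rules like the Turkish ı/İ pair are a separate
    // issue; this is only the locale-independent mapping.
}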

;)

-- 
Marco

September 12, 2014
Am Thu, 11 Sep 2014 13:44:09 +0000
schrieb "Adam D. Ruppe" <destructionator@gmail.com>:

> On Thursday, 11 September 2014 at 12:38:54 UTC, Andrey Lifanov wrote:
> > And I think of idea of complete extraction of GC from D.
> 
> You could also recompile the runtime library without the GC. Heck, with the new @nogc on your main, the compiler (rather than the linker) should even give you nicish error messages if you try to use it, but I've done it before that was an option.
> 
> Generally though, GC fear is overblown. Use it in most places and just don't use it where it makes things worse.

The Higgs JIT compiler running 3x faster just because you call
GC.reserve(1024*1024*1024); shows how much fear is appropriate
(with this GC implementation).
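
For reference, the whole trick is a one-liner up front (a minimal sketch; the size is just the one quoted above, nothing magic about it):

import core.memory : GC;

void main()
{
    // Ask the runtime to set aside ~1 GiB for the GC heap up front, so
    // the allocation-heavy phase doesn't trigger repeated collections
    // while the heap grows.
    GC.reserve(1024*1024*1024);

    // ... run the allocation-heavy workload (e.g. a compiler pass) here ...
}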

-- 
Marco

September 12, 2014
On Thursday, 11 September 2014 at 20:55:43 UTC, Andrey Lifanov wrote:
> Everyone tells about greatness and safety of GC, and that it is hard to live without it... But, I suppose, you all do know the one programming language in which 95% of AAA-quality popular desktop software and OS is written. And this language is C/C++.

Because, due to the way the market changed over the last 20 years, compiler vendors focused on native code compilers for C and C++, while the others faded away.

>
> How do you explain this? Just because we are stubborn and silly people, we use terrible old C++? No. The real answer: there is no alternative.

There used to be alternatives.

I am old enough to remember when C only mattered if you were coding on UNIX.

>
> Stop telling fairy tales that it is not possible to program safely in C++. Every experienced programmer can easily handle parallel programming and memory management in C++. Yes, it requires certain work and knowledge, but it is possible, and many of us do it on an everyday basis (at my current work we use reference counting, though the overall quality of the code is terrible, I must admit).

Of course, it is possible to do safe coding in C++, but you need good coders on the team.

I always try to apply the safe practices from the Algol world, as well as many good practices I have learned since I first got in touch with C++ back in 1993.

My pure C programming days were confined to the Turbo Pascal -> C++ transition, university projects and my first job. I never liked its unsafe design.

Now the thing is, I could only make use of safe programming practices like compiler-specific collections (later the STL) and RAII when coding on my own or in small teams composed of good C++ developers.

More often than not, the C++ codebases I have come across on my projects looked like either C compiled with a C++ compiler or OOP gone wild. With lots of nice macros as well.

When the teams had high turnover, the code quality was even worse.

A pointer goes boom and no one knows which module is responsible for doing what in terms of memory management.

We stopped using C++ on our consulting projects back in 2005, as we started to focus mostly on JVM and .NET projects.

I still use it for my hobby coding, or for some jobs on the side, where I can control the code quality, though.

However, I am also fond of systems programming languages with GC, having had the opportunity to use the Oberon OS back in the mid-90's.

--
Paulo

September 12, 2014
On Friday, 12 September 2014 at 12:39:51 UTC, Paulo  Pinto wrote:
> On Friday, 12 September 2014 at 07:46:03 UTC, eles wrote:
>> On Thursday, 11 September 2014 at 19:56:17 UTC, Paulo Pinto wrote:
>>> Am 11.09.2014 20:32, schrieb Daniel Alves:
>>
>>> It is incredible how Objective-C's ARC became a symbol for reference counting, instead of the living proof of Apple's failure to produce
>>> a working GC for Objective-C that didn't crash every couple of seconds.
>>
>> I think I fail to grasp something here. For me, ARC is something that is managed at runtime: you have a counter on a chunk of memory and you increase it with each new reference to that memory, then you decrement it when a reference is dropped. In the end, when the counter reaches 0, you drop the chunk.
>>
>> OTOH, code analysis and automatically inserting free/delete where the programmer would/should have done it is not really that. It's a compile-time approach, no different from manual memory management.
>>
>> Which one is, in the end, the approach taken by Apple, and which one is the "true" ARC?...
>
> ARC was a term popularized by Apple when they introduced the said feature in Objective-C.
>
> In the GC literature it is plain reference counting.
>
> ARC in Objective-C is a mix of both approaches that you mention.
>
> It only applies to Objective-C classes that follow the retain/release patterns in use since the NeXTStep days. For structs, malloc(), or classes that don't follow the Cocoa patterns, only manual memory management is possible.
>
> The compiler inserts the retain/release calls that a programmer would write manually, at the locations one would expect from the said patterns.
>
> Then a second pass, via dataflow analysis, removes the retain/release pairs that are superfluous because the object's lifetime is confined to a single method/function block.
>
> This way you get automatic reference counting, as long as those classes use the said patterns correctly. As a plus the code gets to interact with libraries that are clueless about ARC.
>
> Now, having said this, when Apple introduced GC in Objective-C it was very fragile, only worked with Objective-C classes, was full of "take care of X when you do Y" and required all Frameworks on the project to have compatible build settings.
>
> Of course, more often than not, the result was random crashes when using third-party libraries, something Apple never sorted out.
>
> So ARC in Objective-C ended up being a better solution, due to interoperability issues, and not just because RC is better than GC.
>
> --
> Paulo

[Caveat: I'm no expert]
I once read a manual that explained the GC in Objective-C (years ago). It said that some objects never get collected although they're dead and the garbage collector can no longer reach them. But maybe that's true of other GC implementations too (Java?). ARC definitely makes more sense for Objective-C than what they had before. But that's for Objective-C with its retain-release mechanism. Also, I wonder: is ARC really "automatic"? Sure, the compiler inserts retain-release automatically (what the programmer would have done manually in the "old days"). But that's not really a GC algorithm that scans and collects during runtime. Isn't it cheating? Also, does anyone know what problems Apple programmers have encountered with ARC?
September 12, 2014
Am Fri, 12 Sep 2014 15:43:14 +0000
schrieb "Chris" <wendlec@tcd.ie>:

> [Caveat: I'm no expert]
> I once read a manual that explained the GC in Objective-C (years
> ago). It said that some objects never get collected although
> they're dead and the garbage collector can no longer reach them.
> But maybe that's true of other GC implementations too (Java?).

With only ARC, if two objects reference each other, they keep
each other alive indefinitely unless one of the references is a
"weak" reference, which doesn't count as a real reference
and thus doesn't keep the object alive.
Other than that, in the case of Java or D it is just a question of
how you define "never", I guess. Since a tracing GC only runs
every now and then, there might be uncollected dead objects
floating around at program termination.
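
A tiny sketch of such a cycle in D (Node is a made-up name):

class Node { Node other; }

void main()
{
    auto a = new Node;
    auto b = new Node;
    a.other = b;   // a keeps b alive
    b.other = a;   // b keeps a alive: a cycle
    a = null;
    b = null;
    // The pair is now unreachable. A tracing GC (like D's) can reclaim it
    // on a later collection; with only retain/release each node would
    // still count one incoming reference and leak, unless one of the
    // links were a weak (non-owning) reference.
}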

> [...] But that's not really a GC algorithm that scans and collects during runtime. Isn't it cheating?

A GC algorithm that scans and collects at runtime is called a "tracing GC". ARC nonetheless collects garbage; you, the programmer, don't need to do it.

-- 
Marco

September 12, 2014
On Friday, 12 September 2014 at 20:41:53 UTC, Marco Leise wrote:
> Am Fri, 12 Sep 2014 15:43:14 +0000
> schrieb "Chris" <wendlec@tcd.ie>:

> With only ARC, if two objects reference each other, they keep
> each other alive indefinitely unless one of the references is a
> "weak" reference, which doesn't count as a real reference

But do we need more than that? Translating the question into C++:

what use case wouldn't be covered by unique_ptr and shared_ptr?

Cycles like that could happen in manual memory management, too. There is Valgrind for that...
September 13, 2014
Am 13.09.2014 01:52, schrieb eles:
> On Friday, 12 September 2014 at 20:41:53 UTC, Marco Leise wrote:
>> Am Fri, 12 Sep 2014 15:43:14 +0000
>> schrieb "Chris" <wendlec@tcd.ie>:
>
>> With only ARC, if two objects reference each other, they keep
>> each other alive indefinitely unless one of the references is a
>> "weak" reference, which doesn't count as a real reference
>
> But do we need more than that? Translating the question into C++:
>
> what use case wouldn't be covered by unique_ptr and shared_ptr?

Cycles. That is why weak_ptr also exists.

>
> Cycles like that could happen in manual memory management, too. There is
> Valgrind for that...

For those that can compile their code under GNU/Linux.

There are lots of OSes where Valgrind does not run.

--
Paulo