February 04, 2014
On Mon, 03 Feb 2014 22:51:39 -0800, Adam Wilson <flyboynw@gmail.com> wrote:

> On Mon, 03 Feb 2014 22:39:02 -0800, Walter Bright <newshound2@digitalmars.com> wrote:
>
>> On 2/3/2014 7:03 PM, Adam Wilson wrote:
>>> Note that ObjC has special syntax to handle weak pointers. It's not well
>>> understood by many.
>>
>> Sounds like explicitly managed memory is hardly worse.
>>
>
> Well, special syntax is needed to make ARC work because of the requirement to explicitly mark a pointer as a weak reference. It would also mean that all existing D code would subtly break (leak memory) in any case where a weak reference is required and was previously handled transparently by the GC. I won't say that manually managed memory is worse, because it does automatically clean up most of the trash, but ARC still allows you to shoot yourself in the foot easily enough.
>

Oops, I meant: I won't say that ARC is worse than manual memory management.
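
To make the weak-reference point concrete, here is a minimal D sketch (the @weak qualifier is hypothetical; D has no such syntax today, which is exactly the kind of language change ARC would require):

class Node
{
    Node child;              // strong reference: keeps its target alive
    /* @weak */ Node parent; // must NOT own its target, or the pair
                             // keeps itself alive forever under ARC
}

void main()
{
    auto p = new Node;
    p.child = new Node;
    p.child.parent = p; // a cycle: today's GC collects the pair anyway;
                        // pure ARC only does so if 'parent' is truly weak
}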

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
"Ola Fosheim Gr?stad" <ola.fosheim.grostad+dlang@gmail.com>"> On Tuesday, 4 February 2014 at 00:49:04 UTC, Adam Wilson wrote:
>> Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC related problems. I've been working with WPF since 2005, I can say the biggest performance problem with it by far is the naive rendering of rounded corners, the GC has NEVER caused a hitch.
>
> According to Wikipedia:
>
> "While the majority of WPF is in managed code, the composition engine which renders the WPF applications is a native component. It is named Media Integration Layer (MIL) and resides in milcore.dll. It interfaces directly with DirectX and provides basic support for 2D and 3D surfaces, timer-controlled manipulation of contents of a surface with a view to exposing animation constructs at a higher level, and compositing the individual elements of a WPF application into a final 3D "scene" that represents the UI of the application and renders it to the screen."
>
> So, Microsoft does not think that GC is suitable for real time interactive graphics. And they are right.

As long as the rest of the code is managed, a GC is running in the
background; no matter how high-performance the language your own code
is written in, it will be affected by the GC anyway. So "Microsoft does
not think that GC is suitable for real time interactive graphics. And
they are right." is only your opinion; you don't know the real reason
behind MS's decision. Some resources need to be released ASAP, so you
can't rely on the GC. Or maybe it is because of the overhead of calling
low-level APIs from managed code.
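
For example, in D a scarce resource can already be released deterministically with scope(exit), without waiting for a collection (a minimal sketch):

import std.stdio : File;

void writeLog(string path)
{
    auto f = File(path, "w"); // an OS file handle: scarce, release ASAP
    scope(exit) f.close();    // released at scope end, deterministically,
                              // regardless of when the GC next runs
    f.writeln("done");
}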


February 04, 2014
On Mon, 03 Feb 2014 23:05:35 -0800, Eric Suen <eric.suen.tech@gmail.com> wrote:

>
> "Ola Fosheim Gr?stad" <ola.fosheim.grostad+dlang@gmail.com>"> On Tuesday, 4 February 2014 at 00:49:04 UTC, Adam Wilson
> wrote:
>>> Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC related
>>> problems. I've been working with WPF since 2005, I can say the biggest performance problem with it by far is the
>>> naive rendering of rounded corners, the GC has NEVER caused a hitch.
>>
>> According to Wikipedia:
>>
>> "While the majority of WPF is in managed code, the composition engine which renders the WPF applications is a native
>> component. It is named Media Integration Layer (MIL) and resides in milcore.dll. It interfaces directly with DirectX
>> and provides basic support for 2D and 3D surfaces, timer-controlled manipulation of contents of a surface with a view
>> to exposing animation constructs at a higher level, and compositing the individual elements of a WPF application into
>> a final 3D "scene" that represents the UI of the application and renders it to the screen."
>>
>> So, Microsoft does not think that GC is suitable for real time interactive graphics. And they are right.
>
> As long as the rest of the code is managed, a GC is running in the
> background; no matter how high-performance the language your own code
> is written in, it will be affected by the GC anyway. So "Microsoft does
> not think that GC is suitable for real time interactive graphics. And
> they are right." is only your opinion; you don't know the real reason
> behind MS's decision. Some resources need to be released ASAP, so you
> can't rely on the GC. Or maybe it is because of the overhead of calling
> low-level APIs from managed code.
>

Actually, the reason is that DirectX is a specialized native COM-like API that is NOT compatible with normal COM and therefore not compatible with .NET COM Interop. Milcore is a low-level immediate-mode API. Most of the legwork in WPF is done in managed code. And yes, the GC would affect performance regardless of whether or not the rendering API is native, because so much of WPF is managed code.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
On Tuesday, 4 February 2014 at 02:05:07 UTC, Nick Sabalausky wrote:
> On 2/3/2014 4:13 PM, H. S. Teoh wrote:
>> I've seen real-life
>> examples of ARCs gone horribly, horribly wrong, whereas had a GC been
>> used in the first place things wouldn't have gone down that route.
>>
>
> I'm curious to hear more about this.

An example is when you have a huge graph and the root's reference count reaches 0.

The time taken by the cascading delete of the whole structure is similar to a stop-the-world GC pause.
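
A hand-rolled D sketch of the effect (explicit counts for illustration, not a real ARC implementation):

import core.stdc.stdlib : malloc, free;

struct Node { Node* next; size_t count; }

void release(Node* n)
{
    // One decrement on the root frees the entire chain, right here,
    // on this thread: a pause proportional to the structure's size.
    while (n !is null && --n.count == 0)
    {
        auto dead = n;
        n = n.next;
        free(dead);
    }
}

void main()
{
    Node* head = null;
    foreach (i; 0 .. 1_000_000)
    {
        auto n = cast(Node*) malloc(Node.sizeof);
        *n = Node(head, 1);
        head = n;
    }
    release(head); // the "ARC pause": a million frees in one go
}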


--
Paulo
February 04, 2014
On Mon, 03 Feb 2014 23:31:25 -0800, Paulo Pinto <pjmlp@progtools.org> wrote:

> On Tuesday, 4 February 2014 at 02:05:07 UTC, Nick Sabalausky wrote:
>> On 2/3/2014 4:13 PM, H. S. Teoh wrote:
>>> I've seen real-life
>>> examples of ARCs gone horribly, horribly wrong, whereas had a GC been
>>> used in the first place things wouldn't have gone down that route.
>>>
>>
>> I'm curious to hear more about this.
>
> An example is when you have a huge graph and the root's reference count reaches 0.
>
> The time taken by the cascading delete of the whole structure is similar to a stop-the-world GC pause.
>
>
> --
> Paulo

That's a generic example, but ARC proponents steadfastly maintain that it doesn't happen often enough in practice to be relevant. Mr. Teoh's examples of it would be most helpful. And there is more than one way ARC can go horribly wrong, like cyclic references inside a huge graph. Wouldn't THAT be fun! :-) I could design a huge graph that never gets freed under ARC in my sleep; I've built them in C# before without trying hard, most of the time quite by accident, but not always...
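
The classic two-node case, hand-rolled in D purely for illustration:

import core.stdc.stdlib : malloc;

struct Node { Node* other; size_t count; }

void main()
{
    auto a = cast(Node*) malloc(Node.sizeof); *a = Node(null, 1);
    auto b = cast(Node*) malloc(Node.sizeof); *b = Node(null, 1);
    a.other = b; ++b.count; // a keeps b alive
    b.other = a; ++a.count; // b keeps a alive: a cycle
    --a.count; --b.count;   // drop the external references:
    // both counts are still 1 and will never reach 0, so pure
    // reference counting leaks the pair; a tracing GC reclaims it.
}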

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
On Tuesday, 4 February 2014 at 01:13:09 UTC, Manu wrote:
> On 4 February 2014 06:52, Adam Wilson <flyboynw@gmail.com> wrote:
>
>>
>> Java and the .NET GCs are consistent examples of just how good GCs can
>> actually be.
>
>
> Also, neither language is a systems language, nor practical/appropriate on
> embedded or memory-limited systems. What is your argument? That D not be a
> systems language?
> Additionally, if you are going to beat that drum, you need to prove that
> either language's GC technology is even possible in the context of D;
> otherwise it's a red herring.

Well, there are quite a few systems with a usable GC, it seems:

Singularity research OS
http://research.microsoft.com/en-us/projects/Singularity/

Netduino hardware boards
http://www.netduino.com/netduino/

Cortex M3 boards with Oberon-07
http://www.astrobe.com/boards.htm

MirageOS
http://www.openmirage.org/


--
Paulo
February 04, 2014
On Mon, 03 Feb 2014 22:12:18 -0800, Manu <turkeyman@gmail.com> wrote:

> On 4 February 2014 15:23, Adam Wilson <flyboynw@gmail.com> wrote:
>
>> On Mon, 03 Feb 2014 18:57:00 -0800, Andrei Alexandrescu <
>> SeeWebsiteForEmail@erdani.org> wrote:
>>
>>  On 2/3/14, 5:36 PM, Adam Wilson wrote:
>>>
>>>> You still haven't dealt with the cyclic reference problem in ARC. There
>>>> is absolutely no way ARC can handle that without programmer input,
>>>> therefore, it is simply not possible to switch D to ARC without adding
>>>> some language support to deal with cyclic-refs. Ergo, it is simply not
>>>> possible to seamlessly switch D to ARC without creating all kinds of
>>>> havoc as people now have memory leaks where they didn't before. In order
>>>> to support ARC the D language will necessarily have to grow/change to
>>>> accommodate it. Apple devs constantly have trouble with cyclic-refs to
>>>> this day.
>>>>
>>>
>>> The stock response: weak pointers. But I think the best solution is to
>>> allow some form of automatic reference counting backed up by the GC, which
>>> will lift cycles.
>>>
>>> Andrei
>>>
>>>
>> The immediate problem that I can see here is you're now paying for TWO GC
>> algorithms. There is no traditional GC without a Mark phase (unless it's a
>> copying collector, which will scare off the Embedded guys), and the mark
>> phase is actually typically the longer portion of the pause. If you have
>> ARC backed up by a GC you'll still have to mark+collect which means the GC
>> still has to track ARC memory and then when a collection is needed, mark
>> and collect. This means that you might reduce the total number of pauses,
>> but you won't eliminate them. That in turn makes it an invalid tool for
>> RT/Embedded purposes. And of course we still have the costs of ARC. Manu
>> still can't rely on pause-free memory management (though ARC isn't
>> pause-free either), and the embedded guys still have to pay the costs in
>> heap size to support the GC.
>>
>
> So, the way I see this working in general is that, because in the majority
> case ARC would release memory immediately, freeing up memory regularly, an
> alloc that would usually have triggered a collect will happen far, far
> less often.
> Practically, this means that the mark phase, which you say is the longest
> phase, would be performed far less often.
>

Well, if you want the ARC memory to share the heap with the GC the ARC memory will need to be tracked and marked by the GC. Otherwise the GC might try to allocate over the top of ARC memory and vice versa. This means that every time you run a collection you're still marking all ARC+GC memory, that will induce a pause. And the GC will still STW-collect on random allocations, and it will still have to Mark all ARC memory to make sure it's still valid. So yes, there will be fewer pauses, but they will still be there.
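
In today's D terms, the mechanism would look something like core.memory's addRange (a sketch of the idea, not a proposal):

import core.memory : GC;
import core.stdc.stdlib : malloc, free;

// An ARC-style allocation that may hold pointers into the GC heap.
void* arcAlloc(size_t size)
{
    auto p = malloc(size);
    GC.addRange(p, size); // the GC must now scan this block during
                          // every mark phase, which is why ARC blocks
                          // still add to the length of each pause
    return p;
}

void arcFree(void* p)
{
    GC.removeRange(p);
    free(p);
}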

> For me and my kind, I think the typical approach would be to turn off the
> backing GC, and rely on marking weak references correctly.
> This satisfies my requirements, and I also lose nothing in terms of
> facilities in Phobos or other libraries (assuming that those libraries have
> also marked weak references correctly, which I expect phobos would
> absolutely be required to do).
>
> This serves both worlds nicely, I retain access to libraries since they use
> the same allocator, the GC remains (and is run less often) for those that
> want care-free memory management, and for RT/embedded users, they can
> *practically* disable the GC, and take responsibility for weak references
> themselves, which I'm happy to do.
>
>
> Going the other way, GC is default with ARC support on the side, is not as
>> troublesome from an implementation standpoint because the GC does not have
>> to be taught about the ARC memory. This means that ARC memory is free of
>> being tracked by the GC and the GC has less overall memory to track which
>> makes collection cycles faster. However, I don't think that the RT/Embedded
>> guys will like this either, because it means you are still paying for the
>> GC at some point, and they'll never know for sure if a library they are
>> using is going to GC-allocate (and collect) when they don't expect it.
>>
>
> It also means that phobos and other libraries will use the GC because it's
> the default. Correct, I don't see this as a valid solution. In fact, I
> don't see it as a solution at all.
> Where would implicit allocations like strings, concatenations, closures be
> allocated?
> I might as well just use RefCounted, I don't see this offering anything
> much more than that.
>
>> The only way I can see to make the ARC crowd happy is to completely replace
>> the GC entirely, along with the attendant language changes (new keywords,
>> etc) that are probably along the lines of Rust. I strongly believe that the
>> reason we've never seen a GC-backed ARC system is because in practice it
>> doesn't completely solve any of the problems with either system but costs
>> quite a bit more than either system on its own.
>
>
> Really? [refer to my first paragraph in the reply]
> It seems to me like ARC in front of a GC would result in the GC running far
> fewer collect cycles. And the ARC opposition would be absolved of having to
> tediously mark weak references. Also, the GC opposition can turn the GC
> off, and everything will still work (assuming they take care of their
> cycles).
> I don't really see the disadvantage here, except that the
> only-GC-at-all-costs-I-won't-even-consider-ARC crowd would gain a
> ref-count, but they would also gain the advantage of the GC running
> fewer collect cycles. That would probably balance out.
>
> I'm certain it would be better than what we have, and in theory, everyone
> would be satisfied.

I'm not convinced, mostly because it's not likely to be good news for the GC crowd. First, there are now two GC algorithms running unpredictably at different times, so while you *might* see a perf win in ARC-only mode, we'll probably pay for it in GC-backed ARC mode: you still have the chance of non-deterministic pause lengths with ARC, and you have the GC pauses, and they happen at different times (GC pause on allocate, ARC pause on delete). Each individual pause *might* be shorter, but there is no guarantee of that, and you end up paying more time on the whole than you would otherwise, remembering that with the GC on, the slow part of the collection has to be performed on all memory, not just the GC memory. So yes, you might delete a bit less, but you're marking just as much, and you've still got those pesky ARC pauses to deal with. And in basically everything but games, you measure time spent on resource management as a portion of CPU cycles over time, not time spent per frame.

That ref-count you hand-wave away can actually cost quite a lot. Implementing ARC properly can have serious perf implications for pointer-ops and count-ops, due to the need to make sure that everything is performed atomically. And since this is a compiler thing, you can't say "Don't operate atomically here, because I will never do anything that might race": the compiler has to assume that at some point you will; it cannot detect which mode is needed, or whether a library will ruin your day. The only way around this would be yet more syntax hints for the compiler, like '@notatomic'.
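
Concretely, in D terms (the '@notatomic' attribute is hypothetical):

import core.atomic : atomicOp;

shared size_t sharedCount;
size_t localCount;

void retain()
{
    // What the compiler must emit unless it can prove the object never
    // crosses threads: an atomic read-modify-write, far more expensive
    // than a plain increment on contended cache lines.
    atomicOp!"+="(sharedCount, 1);
}

void retainLocal() // what a hypothetical '@notatomic' would permit
{
    ++localCount;  // plain increment, safe only if truly thread-local
}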

Very quickly ARC starts needing a lot of specialized syntax to make it work efficiently. And that's not good for "Modern Convenience".

However, with a GC you don't have to perform everything atomically: the collect phase can always be performed concurrently on a separate thread, the mark phase can in most cases do stop-the-thread instead of stop-the-world, and in some cases it never stops anything at all. That can very easily result in pause times that are less than ARC's on average. So now I've got a marginal improvement in the speed of ARC over the GC at best, and I still haven't paid for the backing GC.

And if we disable the GC to get the speed back, we now require that everyone on the team learn the specialized rules and syntax for cyclic refs. That might be relatively easy for someone coming from C++, but it will be difficult to teach someone coming from C#/Java, who is statistically the more likely person to come to D. And indeed it would be more than enough to stop my company moving to D.

I've seen you say more than once that you can't bond with the GC, and believe me, I understand; if you search back through the forums, you'll find one of the first things I did when I got here was complain about the GC. But what you're saying is "I can't bond with this horrible GC, so we need to throw it out and rebuild the compiler to support ARC." All I am saying is "I can't bond with the horrible GC, so why don't we make a better one that doesn't ruin responsiveness, because I've seen it done in other places and there is no technical reason D can't do the same, or better." Now that I've started to read the GC Handbook, I am starting to suspect that with D there might be a way to create a completely pause-less GC. Don't hold me to it, I don't know enough yet, but D has some unique capabilities that might just make it possible.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
On Tuesday, 4 February 2014 at 02:57:00 UTC, Andrei Alexandrescu wrote:
> On 2/3/14, 5:36 PM, Adam Wilson wrote:
>> You still haven't dealt with the cyclic reference problem in ARC. There
>> is absolutely no way ARC can handle that without programmer input,
>> therefore, it is simply not possible to switch D to ARC without adding
>> some language support to deal with cyclic-refs. Ergo, it is simply not
>> possible to seamlessly switch D to ARC without creating all kinds of
>> havoc as people now have memory leaks where they didn't before. In order
>> to support ARC the D language will necessarily have to grow/change to
>> accommodate it. Apple devs constantly have trouble with cyclic-refs to
>> this day.
>
> The stock response: weak pointers. But I think the best solution is to allow some form of automatic reference counting backed up by the GC, which will lift cycles.
>
> Andrei

This is the approach taken by Cedar on the Mesa system.

"On Adding Garbage Collection and
Runtime Types to a Strongly-Typed,
Statically-Checked, Concurrent Language"

http://www.textfiles.com/bitsavers/pdf/xerox/parc/techReports/CSL-84-7_On_Adding_Garbage_Collection_and_Runtime_Types_to_a_Strongly-Typed_Statically-Checked_Concurrent_Language.pdf
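
In D terms, the approach might be sketched like this (simplified and hypothetical): counts give eager release in the acyclic case, while the payload lives on the GC heap so cycles that never reach zero are still reclaimed.

struct GcBackedRc(T)
{
    private static struct Payload { T value; size_t count; }
    private Payload* p; // GC-allocated, so the collector can see it

    static GcBackedRc make(T value)
    {
        return GcBackedRc(new Payload(value, 1));
    }

    this(this) { if (p) ++p.count; }

    ~this()
    {
        if (p && --p.count == 0)
            destroy(p.value); // eager, deterministic release; cyclic
                              // garbage falls through to the backing GC
    }
}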

--
Paulo
February 04, 2014
On Tuesday, 4 February 2014 at 00:15:34 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 4 February 2014 at 00:08:28 UTC, Frank Bauer wrote:
>> How cool would that be: a language with the memory allocation features of Rust but with stronger syntax and semantics roots in the C / C++ world?
>
> It is really tempting to write a new parser for Rust, isn't it? :-)

Us ML folks don't see a need for it.
February 04, 2014
On Tuesday, 4 February 2014 at 03:30:58 UTC, Manu wrote:
> On 4 February 2014 12:59, Andrei Alexandrescu
> <SeeWebsiteForEmail@erdani.org> wrote:
>
>> On 2/3/14, 5:51 PM, Manu wrote:
>>
>>> I'd have trouble disagreeing more; Android is the essence of why Java
>>> should never be used for user-facing applications.
>>> Android is jerky and jittery, has random pauses and lockups all the
>>> time, and games on android always jitter and drop frames. Most high-end
>>> games on android now are written in C++ as a means to mitigate that
>>> problem, but then you're back writing C++. Yay!
>>> iOS is silky smooth by comparison to Android.
>>>
>>
>> Kinda difficult to explain the market success of Android.
>
>
> I think it's easy to explain.
> 1. It's aggressively backed by the biggest technology company in the world.
> 2. It's free for product vendors.
> 3. For all product vendors riding the curve of Android's success, it
> presented realistic and well-supported (by Google) competition to Apple,
> who were
> running away with the industry. Everybody had to compete with Apple, but
> didn't have the resources to realistically compete on their own. Nokia for
> instance were certainly in the best position to produce serious
> competition, but they fumbled multiple times. I suspect Google won because
> they're Google, and it is free.
>
> Even Microsoft failed. C# is, for all intents and purposes, the same as
> Java, except it's even better. If Java was a significant factor in
> Android's success, I'd argue that WindowsMobile should have been equally
> successful.
> I think it's safe to say, that's not the case.
> Personally, I suspect that Java was actually a barrier to entry in early
> Android, and even possibly the reason that it took as long as it did for
> Android to
> take root.
> It's most certainly the reason that Android had absolutely no games on it
> for so many years. They eventually released the NDK, and games finally
> appeared. There were years between Angry Birds' success on the iPhone, and any
> serious games appearing on Android.
> There were years when, if you wanted to play games on your mobile device,
> you had to get an iDevice. It's finally levelled now that the indisputable
> success of Android is absolute (and the NDK is available).

You forgot to mention that the NDK has a very restricted set of APIs.

If you want to interact with the OS beyond audio and OpenGL, it is JNI ad nauseam, because Google sees the NDK as a minor inconvenience and all APIs are Java-based.

Even Java native methods need to be called via their Java class and are not accessible to NDK code.

--
Paulo