February 04, 2014
On Mon, 03 Feb 2014 16:24:52 -0800, NoUseForAName <no@spam.com> wrote:

> On Monday, 3 February 2014 at 23:00:23 UTC, woh wrote:
>>  ur right I never thought of that, I bet all them game devs never thought of it either, they so dumb.  I bet they never tried to use a GC, what fools!  Endless graphs of traced objects, oh yes oh yes!  It only runs when I allocate, oh what a fool I've been, please castigate me harder!
>
> Also people should consider that Apple (unlike C++ game devs) did not have a tradition of contempt for GC. In fact they tried GC *before* they switched to ARC. The pro-GC camp always likes to pretend that the anti-GC one is just ignorant, rejecting GC based on prejudice not experience but Apple rejected GC based on experience.
>
> GCed Objective-C did not allow them to deliver the user experience they wanted (on mobile), because of the related latency issues. So they switched to automated ref counting. It is not in question that ref counting sacrifices throughput (compared to an advanced GC) but for interactive, user facing applications latency is much more important.
>

That may be the case, but StackOverflow shows that ARC hasn't been a panacea in Apple land either. Way too many people don't understand ARC or how to use it, and subsequently beg for help understanding heisenleaks and weak references. ARC places a higher cognitive load on the programmer than a GC does. And Android runs just fine with GC'ed apps, but the ARC guys don't want to talk about Google's successes there.

> You can do soft-real time with GC as long as the GC is incremental (D's is not) and you heavily rely on object reuse. That is what I am doing with LuaJIT right now and the frame rates are nice and constant indeed. However, you pay a high price for that. Object reuse means writing additional code, makes things more complex and error-prone, which is why your average app developer does not do it.. and should not have to do it.
>
> Apple had to come up with a solution which does not assume that the developers will be careful about allocations. The performance of the apps in the iOS app store are ultimately part of the user experience so ARC is the right solution because it means that your average iOS app written by Joe Coder will not have latency issues or at least less latency issues compared to any GC-based solution.
>
> I think it is an interesting decision for the D development team to make. Do you want a language which can achieve low latency *if used carefully* or one which sacrifices maximal throughput performance for less latency issues in the common case.
>
> I see no obvious answer to that. I have read D has recently been used for some server system at Facebook, ref counting usually degrades performance in that area. It is no coincidence that Java shines on the server as a high performance solution while Java is a synonym for dog slow memory hog on the desktop and mighty unpopular there because of that. The whole Java ecosystem from the VM to the libraries is optimized for enterprise server use cases, for throughput, scalability, and robustness, not for making responsive GUIs (and low latency in general) or for memory use.
>

Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC-related problems. I've been working with WPF since 2005, and I can say that its biggest performance problem by far is the naive rendering of rounded corners; the GC has NEVER caused a hitch.

> If D wants to be the new Java, GC is the way to go, but no heap-allocation-happy GCed language will ever challenge C/C++ on the desktop.
>

So that's why nearly every desktop app (for Windows at least, but that's the overwhelming majority) that started development since .NET came out is written in C#?

> Which reminds me of another major company who paddled back on GC based on experience: Microsoft. Do you remember the talk back then .NET/C# were new? Microsoft totally wanted that to be the technology stack of the future "managed code" everywhere, C/C++ becoming "legacy". However, C# ended up being nothing more than Microsoft Java, shoveling enterprise CRUD in the server room. Microsoft is hosting "Going Native" conferences nowadays, declaring their present and future dedication to C++ (again) and they based the new WinRT on ref counting not GC.
>

This is primarily due to internal Microsoft politics rather than any desire of the Microsoft developer community to jettison .NET. I won't dive into those right now; there are plenty of places on the web that tell the story better than I could.

I remember sitting in the Build 2011 keynotes thinking "They just cost themselves two years, minimum." Turns out my estimate was low. They have tried in vain for over two years to get devs to write WinRT apps using C++/CX, and they've failed miserably. I forget the exact percentage, but the overwhelming majority of WinRT apps are written in ... C#, most of the rest are in VB.NET, and only something like 3% of all apps in the Windows Store are C++/CX. Server apps are written almost universally in .NET languages, to the point that they didn't even bother making C++/CX capable of building servers. Microsoft recently had to do a mea culpa and reinvest in .NET (Build 2013).

Microsoft might be trying to get us all to move to C++ with Hats, but they are failing miserably in real terms, despite what the rah-rah section of Microsoft would have you believe.

At the end of the day, the overwhelming majority of the MS Dev Community is quite happy with the performance of the GC. They even gave us more performance tweaking options in the latest release. You'll not find any ARC fans in that world. And given that the MS Dev Community eclipses Apple's in real terms, that's important to note.



-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
On Tuesday, 4 February 2014 at 00:49:04 UTC, Adam Wilson wrote:
> Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC related problems. I've been working with WPF since 2005, I can say the biggest performance problem with it by far is the naive rendering of rounded corners, the GC has NEVER caused a hitch.

According to Wikipedia:

«While the majority of WPF is in managed code, the composition engine which renders the WPF applications is a native component. It is named Media Integration Layer (MIL) and resides in milcore.dll. It interfaces directly with DirectX and provides basic support for 2D and 3D surfaces, timer-controlled manipulation of contents of a surface with a view to exposing animation constructs at a higher level, and compositing the individual elements of a WPF application into a final 3D "scene" that represents the UI of the application and renders it to the screen.»

So, Microsoft does not think that GC is suitable for real time interactive graphics. And they are right.
February 04, 2014
On Tuesday, 4 February 2014 at 00:19:53 UTC, Adam Wilson wrote:
> On Mon, 03 Feb 2014 15:26:22 -0800, Frustrated <c1514843@drdrb.com> wrote:
>
>> On Monday, 3 February 2014 at 21:42:59 UTC, Shammah Chancellor
>> wrote:
>>>
>>> You can always force the GC to run between cycles in your game, and
>>> turn off automatic sweeps.  This is how most games operate nowadays.
>>> It's also probably possible to create a drop-in replacement for the GC
>>> to do something else.   I could see it being *VERY* useful to make the
>>> GC take a compile-time parameter to select which GC engine is used.
>>
>>
>> This is just non-sense. Maybe this is why modern games suck then?
>> How do you guarantee that the GC won't kick in at the most
>> inopportune times? Oh, you manually trigger it? When? Right at
>> the moment when the player is about to down the boss after a 45
>> min fight?
>>
>> Oh, right.. you just segfault cause there is no memory left.
>>
>> On Monday, 3 February 2014 at 22:51:50 UTC, Frank Bauer wrote:
>>
>>> I'm not quite sure that I understand what you mean by GC avoidance being a major focus of 2014 though. In the long term, can I look forward to writing an absolutely, like in 100 %, like in guaranteed, GC free D app with all of current D's and Phobos' features if I choose to? Or does it mean: well for the most part it will avoid the GC, but if you're unlucky the GC might still kick in if you don't pay attention and when you least expect it?
>>
>> It's either got to be 100% or nothing. The only issue of the GC
>> is the non-determinism.... or if you do corner it and trigger it
>> manually you end up with exactly the types of problems Mr.
>> Chancellor thinks doesn't exist... i.e., the longer you have to
>> put off the GC the worse it becomes(the more time it takes to run
>> or the less memory you have to work with).
>>
>
> Why is this myth of non-determinism still alive? The only truly non-deterministic GC's are concurrent collectors, but concurrent collectors don't routinely stop the world either, so there really aren't any pauses to complain about. D's Mark-Sweep GC is *perfectly* deterministic. It can *only* pause on allocation. Ergo you can determine exactly which allocation caused the problem. You might not expect the function you called to GC-allocate, but that doesn't make it non-deterministic, just not what you expected. Please, stop blaming your missed expectations on the GC. This non-determinism thing is a red herring that is repeated over and over by people who obviously have no idea what they are talking about.
>

What I want you to do then is tell me exactly which allocation
will stop the world. You claim it is deterministic so it should
be obvious... hell, it shouldn't even require any code to monitor
what the GC is doing. With manual allocation and deallocation I
can see exactly when the memory will be allocated and freed.
That is deterministic.
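
To make the contrast concrete, here is a minimal D sketch of what I mean
(nothing beyond core.stdc.stdlib and 'new'):

    import core.stdc.stdlib : malloc, free;

    struct Particle { float x, y, z; }

    void update()
    {
        // Manual: the allocation and the release are both explicit and visible.
        auto p = cast(Particle*) malloc(Particle.sizeof);
        scope(exit) free(p);

        // GC: 'new' may trigger a collection pause right here, and the memory
        // is reclaimed at some later, unspecified point.
        auto q = new Particle;
    }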

Just because you want to use an all-encompassing definition that is
meaningless doesn't mean you are right. RNGs are deterministic by your
definition, but that is a useless definition.
February 04, 2014
On 4 February 2014 06:21, Adam Wilson <flyboynw@gmail.com> wrote:

> On Mon, 03 Feb 2014 12:02:29 -0800, Andrei Alexandrescu < SeeWebsiteForEmail@erdani.org> wrote:
>
>  On 2/3/14, 6:57 AM, Frank Bauer wrote:
>>
>>> Anyone asking for the addition of ARC or owning pointers to D, gets pretty much ignored. The topic is "Smart pointers instead of GC?", remember? People here seem to be more interested in diverting to nullable, scope and GC optimization. Telling, indeed.
>>>
>>
>> I thought I made it clear that GC avoidance (which includes considering built-in reference counting) is a major focus of 2014.
>>
>> Andrei
>>
>>
> Andrei, I am sorry to report that anything other than complete removal of the GC and replacement with compiler generated ARC will be unacceptable to a certain, highly vocal, subset of D users. No arguments can be made to otherwise, regardless of validity. As far as they are concerned the discussion of ARC vs. GC is closed and decided. ARC is the only path forward to the bright and glorious future of D. ARC most efficiently solves all memory management problems ever encountered. Peer-Reviewed Research and the Scientific Method be damned! ALL HAIL ARC!
>
> Sadly, although written as hyperbole, I feel that the above is fairly close to the actual position of the ARC crowd.


Don't be a dick.
I get the impression you don't actually read my posts. And I also feel like
you're a lot more dogmatic about this than you think I am.

I'm absolutely fine with GC in most applications, I really couldn't give
any shits if most people want a GC. I'm not dogmatic about it, and I've
**honestly** tried to love the GC for years now.
What I'm concerned about is that I have _no option_ to use D uninhibited
when I need to not have the GC.

These are the problems:
 * GC stalls for long periods of time at completely unpredictable moments.
 * GC stalls become longer *and* more frequent as memory becomes less
available, and the working pool becomes larger (what a coincidence).
 * Memory footprint is unknowable; what if you don't have a virtual memory
manager? What if your total memory is measured in megabytes?
 * It's not possible to know when destruction of an object will happen,
which has known workarounds (like in C#) but is also annoying in many
cases, and supports the prior point.

Conclusion:
  GC is unfit for embedded systems, one of the most significant remaining
and compelling uses for a native systems language.

The only realistic path I am aware of is to use ARC, which IS a form of GC,
and allows a lot more flexibility in the front-end.
GC forces one very particular paradigm upon you.
ARC is a GC, but it has some complex properties __which can be addressed in
various ways__, unlike a tracing GC, which is entirely inflexible.

You're not happy with ARC cleaning objects up on the spot? That is something
many people WANT, though I understand that doing zero cleanup work in the
running context is, in other situations, a strength of GC; fine, just stick
the pointer on a dead list and free it later, either during idle time or on
another thread. By contrast, I haven't heard any proposal for a GC that
would allow it to operate in carefully controlled time-slices, or strictly
during idle time.
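
A rough sketch of that dead-list idea (hypothetical names; a real
implementation would use a non-GC container and some locking):

    import core.stdc.stdlib : free;

    __gshared void*[] deadList;   // sketch only: a real one wouldn't grow a GC array

    // Instead of freeing on the spot when a count hits zero, park the pointer.
    void deferredRelease(void* obj, ref size_t refCount)
    {
        if (--refCount == 0)
            deadList ~= obj;
    }

    // Called during idle time, or from a background thread.
    void flushDeadList()
    {
        foreach (p; deadList)
            free(p);
        deadList.length = 0;
    }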
Cycles are a problem with ARC? True; so how much effort are you willing to
spend to mitigate the problem? None: run a secondary GC in the background
to collect cycles (yes, there is still a GC, but it has much less work to
do). Some: disable the background GC and manually specify weak references
and the like. Note: a user-preferred combination of the two could severely
reduce the workload of the background GC if it is still desired to handle
some complex situations, or user errors.
Are there any other disadvantages to ARC? I don't know of them if there are.

As far as I can tell, an ARC collector could provide the same convenience as the existing GC for anyone who simply doesn't care. It would also seem that it could provide significantly more options and control for those who do.

I am _yet to hear anyone present a realistic path forwards using any form
of GC_, so what else do I have to go with? Until I know of any other path
forward, I'll stand behind the only one I can see.
You're just repeating "I don't care about something that a significant
subset of D developers do care about, and I don't think any changes should
be made to support them".
As far as I know, a switch to ARC could be done in a way that 'regular'
users don't lose anything, or even notice... why is that so offensive?


February 04, 2014
On 4 February 2014 06:52, Adam Wilson <flyboynw@gmail.com> wrote:

> On Mon, 03 Feb 2014 12:40:20 -0800, Dmitry Olshansky < dmitry.olsh@gmail.com> wrote:
>
>  04-Feb-2014 00:21, Adam Wilson wrote:
>>
>>> On Mon, 03 Feb 2014 12:02:29 -0800, Andrei Alexandrescu <SeeWebsiteForEmail@erdani.org> wrote:
>>>
>>>  On 2/3/14, 6:57 AM, Frank Bauer wrote:
>>>>
>>>>> Anyone asking for the addition of ARC or owning pointers to D, gets pretty much ignored. The topic is "Smart pointers instead of GC?", remember? People here seem to be more interested in diverting to nullable, scope and GC optimization. Telling, indeed.
>>>>>
>>>>
>>>> I thought I made it clear that GC avoidance (which includes considering built-in reference counting) is a major focus of 2014.
>>>>
>>>> Andrei
>>>>
>>>>
>>>  ...
>>
>>> Sadly, although written as hyperbole, I feel that the above is fairly close to the actual position of the ARC crowd.
>>>
>>>
>> I won't be surprised if half of the current GC problems are because it's simply too stupid as an allocator. I had some numbers that I'd need to dig up where GC.malloc was ~2x slower than malloc, even with garbage collection disabled.
>>
>> With that said I'd focus on ref-counting somehow coexisting with tracing GC. It doesn't go well at the moment - try creating dynamic arrays of std.stdio.File (that is ref-counted).
>>
>>
> I will not defend the current GC, it is about as stupid as you can make one and still have a functioning memory management system. It implements exactly none of the optimizations recommended for Mark-Sweep GC's in the GC Handbook.
>
> That said, I firmly believe that wholesale replacement of the GC is throwing the baby out with the bathwater. Effectively, the poor D GC implementation has become an excuse to launch a crusade against all GC's everywhere, never mind that Java and the .NET GC's are consistent examples of just how good GC's can actually be.


Point me at a proposal for a GC that addresses the problems in my prior email and is realistically implementable in D, and I'll shut up.


February 04, 2014
On 4 February 2014 06:52, Adam Wilson <flyboynw@gmail.com> wrote:

>
>
> Java and the .NET GC's are consistent examples of just how good GC's can
> actually be.


Also, neither language is a systems language, nor practical/appropriate on
embedded or memory-limited systems. What is your argument? That D should
not be a systems language?
Additionally, if you are going to beat that drum, you need to show that
either language's GC technology is even possible in the context of D;
otherwise it's a red herring.


February 04, 2014
On 4 February 2014 10:19, Adam Wilson <flyboynw@gmail.com> wrote:

> On Mon, 03 Feb 2014 15:26:22 -0800, Frustrated <c1514843@drdrb.com> wrote:
>
>  On Monday, 3 February 2014 at 21:42:59 UTC, Shammah Chancellor
>> wrote:
>>
>>>
>>> You can always force the GC to run between cycles in your game, and turn off automatic sweeps.  This is how most games operate nowadays. It's also probably possible to create a drop-in replacement for the GC to do something else.   I could see it being *VERY* useful to make the GC take a compile-time parameter to select which GC engine is used.
>>>
>>
>>
>> This is just non-sense. Maybe this is why modern games suck then? How do you guarantee that the GC won't kick in at the most inopportune times? Oh, you manually trigger it? When? Right at the moment when the player is about to down the boss after a 45 min fight?
>>
>> Oh, right.. you just segfault cause there is no memory left.
>>
>> On Monday, 3 February 2014 at 22:51:50 UTC, Frank Bauer wrote:
>>
>>  I'm not quite sure that I understand what you mean by GC avoidance being
>>> a major focus of 2014 though. In the long term, can I look forward to writing an absolutely, like in 100 %, like in guaranteed, GC free D app with all of current D's and Phobos' features if I choose to? Or does it mean: well for the most part it will avoid the GC, but if you're unlucky the GC might still kick in if you don't pay attention and when you least expect it?
>>>
>>
>> It's either got to be 100% or nothing. The only issue of the GC is the non-determinism.... or if you do corner it and trigger it manually you end up with exactly the types of problems Mr. Chancellor thinks doesn't exist... i.e., the longer you have to put off the GC the worse it becomes(the more time it takes to run or the less memory you have to work with).
>>
>>
> Why is this myth of non-determinism still alive? The only truly non-deterministic GC's are concurrent collectors, but concurrent collectors don't routinely stop the world either, so there really aren't any pauses to complain about. D's Mark-Sweep GC is *perfectly* deterministic. It can *only* pause on allocation. Ergo you can determine exactly which allocation caused the problem. You might not expect the function you called to GC-allocate, but that doesn't make it non-deterministic, just not what you expected. Please, stop blaming your missed expectations on the GC. This non-determinism thing is a red herring that is repeated over and over by people who obviously have no idea what they are talking about.


Your assertion assumes that people who write huge, complex programs have
complete control over their code and/or depend on zero libraries, which is
a ridiculous notion.
I'm quite sick of people making claims like that. If the default is to do
something incompatible with my use case, and I depend on any libraries
(including Phobos), then it's safe to say I'm running code that is
incompatible with my use case.
What are my options then?

Additionally, in D you don't have to type 'new' to allocate memory; it happens all the time... a closure here, a concatenation there. These are core language features, and to say that I'm meant to avoid them in my million-line program, authored by perhaps hundreds of programmers, because I need to be certain where every alloc is being issued from, is quite unrealistic to say the least.
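
For example, both of these allocate from the GC heap without a single 'new'
in sight:

    int[] append(int[] xs)
    {
        return xs ~ 42;           // concatenation allocates a new GC array
    }

    int delegate() makeCounter()
    {
        int count;
        return () => ++count;     // closure: 'count' escapes, so its frame is GC-allocated
    }

    void main()
    {
        auto ys = append([1, 2, 3]);   // the literal [1, 2, 3] allocates too
        auto next = makeCounter();
        next();
    }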


February 04, 2014
On Mon, 03 Feb 2014 17:04:08 -0800, Manu <turkeyman@gmail.com> wrote:

> On 4 February 2014 06:21, Adam Wilson <flyboynw@gmail.com> wrote:
>
>> On Mon, 03 Feb 2014 12:02:29 -0800, Andrei Alexandrescu <
>> SeeWebsiteForEmail@erdani.org> wrote:
>>
>>  On 2/3/14, 6:57 AM, Frank Bauer wrote:
>>>
>>>> Anyone asking for the addition of ARC or owning pointers to D, gets
>>>> pretty much ignored. The topic is "Smart pointers instead of GC?",
>>>> remember? People here seem to be more interested in diverting to
>>>> nullable, scope and GC optimization. Telling, indeed.
>>>>
>>>
>>> I thought I made it clear that GC avoidance (which includes considering
>>> built-in reference counting) is a major focus of 2014.
>>>
>>> Andrei
>>>
>>>
>> Andrei, I am sorry to report that anything other than complete removal of
>> the GC and replacement with compiler generated ARC will be unacceptable to
>> a certain, highly vocal, subset of D users. No arguments can be made to
>> otherwise, regardless of validity. As far as they are concerned the
>> discussion of ARC vs. GC is closed and decided. ARC is the only path
>> forward to the bright and glorious future of D. ARC most efficiently solves
>> all memory management problems ever encountered. Peer-Reviewed Research and
>> the Scientific Method be damned! ALL HAIL ARC!
>>
>> Sadly, although written as hyperbole, I feel that the above is fairly
>> close to the actual position of the ARC crowd.
>
>
> Don't be a dick.
> I get the impression you don't actually read my posts. And I also feel like
> you're a lot more dogmatic about this than you think I am.
>
> I'm absolutely fine with GC in most applications, I really couldn't give
> any shits if most people want a GC. I'm not dogmatic about it, and I've
> **honestly** tried to love the GC for years now.
> What I'm concerned about is that I have _no option_ to use D uninhibited
> when I need to not have the GC.
>
> These are the problems:
>  * GC stalls for long periods time at completely un-predictable moments.
>  * GC stalls become longer *and* more frequent as memory becomes less
> available, and the working pool becomes larger (what a coincidence).
>  * Memory footprint is unknowable, what if you don't have a virtual memory
> manager? What if your total memory is measured in megabytes?
>  * It's not possible to know when destruction of an object will happen,
> which has known workarounds (like in C#) but is also annoying in many
> cases, and supports the prior point.
>
> Conclusion:
>   GC is unfit for embedded systems. One of the most significant remaining
> and compelling uses for a native systems language.
>
> The only realistic path I am aware of is to use ARC, which IS a form of GC,
> and allows a lot more flexibility in the front-end.
> GC forces one very particular paradigm upon you.
> ARC is a GC, but it has some complex properties __which can be addressed in
> various ways__. Unlike a GC which is entirely inflexible.
>
> You're not happy with ARC's cleaning objects up on the spot? Something that
> many people WANT, but I understand zero cleanup times in the running
> context is in other occasions a strength of GC; fine, just stick the
> pointer on a dead list, and free it either later during idle time, or on
> another thread. On the contrary, I haven't heard any proposal for a GC that
> would allow it to operate in carefully controlled time-slices, or strictly
> during idle-time.
> Cycles are a problem with ARC? True, how much effort are you willing to
> spend to mitigate the problem? None: run a secondary GC in the background
> to collect cycles (yes, there is still a GC, but it has much less work to
> do). Some: Disable background GC, manually require user specified weak
> references and stuff. Note: A user-preferred combination of the 2 could
> severely mitigate the workload of the background GC if it is still desired
> to handle some complex situations, or user errors.
> Are there any other disadvantages to ARC? I don't know of them if there are.
>
> Is far as I can tell, an ARC collector could provide identical convenience
> as the existing GC for anyone that simply doesn't care. It would also seem
> that it could provide significantly more options and control for those that
> do.
>
> I am _yet to hear anyone present a realistic path forwards using any form
> of GC_, so what else do I have to go with? Until I know of any other path
> forward, I'll stand behind the only one I can see.
> You're just repeating "I don't care about something that a significant
> subset of D developers do care about, and I don't think any changes should
> be made to support them".
> As far as I know, a switch to ARC could be done in a way that 'regular'
> users don't lose anything, or even notice... why is that so offensive?

I am not trying to be a dick. But I do feel like a small number of people are trying to gang up on me for daring to point out that the solution they've proposed might have bigger problems for other people than they care to admit.

You still haven't dealt with the cyclic reference problem in ARC. There is absolutely no way ARC can handle that without programmer input; therefore, it is simply not possible to switch D to ARC without adding some language support to deal with cyclic refs. Ergo, it is simply not possible to seamlessly switch D to ARC without creating all kinds of havoc as people now have memory leaks where they didn't before. In order to support ARC, the D language will necessarily have to grow/change to accommodate it. Apple devs constantly have trouble with cyclic refs to this day.
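
For concreteness, a hand-rolled counting sketch of the cycle problem
(hypothetical, not a real ARC implementation; the nodes are GC-allocated
here purely to illustrate the counting logic):

    class Node
    {
        size_t refCount = 1;   // owned by whoever created it
        Node other;            // a *strong* back-reference: this is what creates the cycle
    }

    Node retain(Node n) { if (n !is null) ++n.refCount; return n; }

    void release(Node n)
    {
        if (n !is null && --n.refCount == 0)
        {
            release(n.other);  // drop our strong reference
            // free n here in a real implementation
        }
    }

    void main()
    {
        auto parent = new Node;          // parent: 1
        auto child  = new Node;          // child:  1
        parent.other = retain(child);    // child:  2
        child.other  = retain(parent);   // parent: 2
        release(parent);                 // parent: 2 -> 1, never reaches 0
        release(child);                  // child:  2 -> 1, never reaches 0: both leak
        // Only marking one direction as weak (no retain) lets the counts reach zero.
    }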

I am not against supporting ARC side-by-side with the GC (I'm actually quite for it; I would love the flexibility), but it is unrealistic to make ARC the default option in D, as that would subtly break all existing D code, something that Walter has point-blank refused to do in much smaller, easier-to-find-and-fix cases. You can't grep for a missing weak-ref. So if that is what you are asking for, then yes, it will never happen in D.

Also, I don't think you've fully considered what the perf penalty actually is for a *good* ARC implementation. I just leafed through the P-Code in the GC Handbook for their ARC implementation; it's about 4x longer than their best Mark-Sweep P-Code.

I would also like to point out that the GC Handbook lists six scientifically confirmed problems with RC (see page 59):

1. RC imposes a time overhead on mutators in order to manipulate the counter.
2. Both the counter manipulation and pointer load/store operations MUST be atomic to prevent races (see the sketch after this list).
3. Naive RC turns read ops into store ops to update the count.
4. No RC can reclaim cyclic data structures, which are much more common than is typically understood. [Bacon and Rajan 2001]
5. Counter must be the same size as the pointer, which can result in significant overhead for small objects.
6. RC can still pause. When the last reference to a large pointer structure is deleted, RC MUST delete each descendant node.

Note that these are paraphrases of the book, not me talking. And these apply equally to ARC and vanilla RC.
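
As a rough illustration of points 1 and 2, this is roughly what a
thread-safe pointer assignment turns into under RC (a hand-written sketch
with hypothetical names, not the book's pseudocode):

    import core.atomic : atomicOp;

    struct RCObject
    {
        shared size_t count;
        // ... payload ...
    }

    // Every pointer write becomes a retain of the new target plus a release
    // of the old one, and both count updates must be atomic.
    void assign(ref RCObject* dst, RCObject* src)
    {
        if (src !is null)
            atomicOp!"+="(src.count, 1);
        auto old = dst;
        dst = src;   // under sharing, this store itself also needs to be atomic
        if (old !is null && atomicOp!"-="(old.count, 1) == 0)
        {
            // free(old): reclaim, possibly cascading into descendants (point 6)
        }
    }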

Boehm demonstrated in one of his papers (2004) that thread-safe ARC may even lead to longer maximum pause times than a standard Tracing GC.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 04, 2014
On Tuesday, 4 February 2014 at 00:49:04 UTC, Adam Wilson wrote:
> That may be the case, but StackOverflow shows that ARC hasn't been panacea in Apple land either. Way to many people don't understand ARC and how to use it, and subsequently beg for help understanding heisenleaks and weak references.

Your point? ARC addressed the latency issues; I never said it was without challenges of its own.

>ARC places a higher cognitive load on the programmer than a GC does.

Yes, it does. But the whole "not thinking about allocations" thing comes at the often unacceptable cost of unresponsive apps.

>And
> Android runs just fine with GC'ed apps, but ARC guys don't want to talk about Google's successes there.

Google's success there is that demanding apps are written using the NDK.

> Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC related problems.

Silverlight is dead and was an utter failure. WinForms and WPF have an uncertain future. Neither has ever been used much in end user applications.

I would also like to say that the typical .NET or Java developer has lost all sense of what an efficient app feels like. E.g. someone who works with Eclipse all day will of course consider just about everything else lightweight and snappy.

> So that's why nearly every desktop app (for Windows at least, but that's the overwhelming majority) that started development since .NET came out is written C#?

That is simply not true. The set of widely popular Windows desktop applications is basically .NET-free. However, maybe you misunderstood me because - I admit - my phrasing was unclear. When I said "desktop" I meant end-user desktop applications and games, not enterprise/government desktop CRUD apps which are forced upon office workers who cannot reject them despite their horrible performance. I would not be surprised if most of those are indeed written in .NET (if not in Java).

> only something like 3% of all apps in the Windows Store are C++/CX.

Does anybody actually use Windows Store? Frankly I do not know anyone who does.

> Server apps are written almost universally in .NET languages

Eh.. yes. I said myself that Java and Java-likes rule that domain.

Again, if D wants to compete with Java (or Microsoft's version of it) there is nothing wrong with GC.
February 04, 2014
On 4 February 2014 10:49, Adam Wilson <flyboynw@gmail.com> wrote:

> On Mon, 03 Feb 2014 16:24:52 -0800, NoUseForAName <no@spam.com> wrote:
>
>  On Monday, 3 February 2014 at 23:00:23 UTC, woh wrote:
>>
>>>  ur right I never thought of that, I bet all them game devs never
>>> thought of it either, they so dumb.  I bet they never tried to use a GC,
>>> what fools!  Endless graphs of traced objects, oh yes oh yes!  It only runs
>>> when I allocate, oh what a fool I've been, please castigate me harder!
>>>
>>
>> Also people should consider that Apple (unlike C++ game devs) did not have a tradition of contempt for GC. In fact they tried GC *before* they switched to ARC. The pro-GC camp always likes to pretend that the anti-GC one is just ignorant, rejecting GC based on prejudice not experience but Apple rejected GC based on experience.
>>
>> GCed Objective-C did not allow them to deliver the user experience they wanted (on mobile), because of the related latency issues. So they switched to automated ref counting. It is not in question that ref counting sacrifices throughput (compared to an advanced GC) but for interactive, user facing applications latency is much more important.
>>
>>
> That may be the case, but StackOverflow shows that ARC hasn't been panacea in Apple land either. Way to many people don't understand ARC and how to use it, and subsequently beg for help understanding heisenleaks and weak references. ARC places a higher cognitive load on the programmer than a GC does. And Android runs just fine with GC'ed apps, but ARC guys don't want to talk about Google's successes there.


I'd have trouble disagreeing more; Android is the essence of why Java
should never be used for user-facing applications.
Android is jerky and jittery, has random pauses and lockups all the time,
and games on Android always jitter and drop frames. Most high-end games on
Android are now written in C++ as a means to mitigate that problem, but
then you're back to writing C++. Yay!
iOS is silky smooth by comparison to Android.
I'm sure this isn't entirely attributable to the GC, or to Java in general,
but it can't possibly be used as an example of success. Precisely the
opposite, if anything. Games on Android make the brains of gamedevs who
care about smooth interactivity bleed.

>> You can do soft-real time with GC as long as the GC is incremental (D's is
>> not) and you heavily rely on object reuse. That is what I am doing with LuaJIT right now and the frame rates are nice and constant indeed. However, you pay a high price for that. Object reuse means writing additional code, makes things more complex and error-prone, which is why your average app developer does not do it.. and should not have to do it.
>>
>> Apple had to come up with a solution which does not assume that the developers will be careful about allocations. The performance of the apps in the iOS app store are ultimately part of the user experience so ARC is the right solution because it means that your average iOS app written by Joe Coder will not have latency issues or at least less latency issues compared to any GC-based solution.
>>
>> I think it is an interesting decision for the D development team to make. Do you want a language which can achieve low latency *if used carefully* or one which sacrifices maximal throughput performance for less latency issues in the common case.
>>
>> I see no obvious answer to that. I have read D has recently been used for some server system at Facebook, ref counting usually degrades performance in that area. It is no coincidence that Java shines on the server as a high performance solution while Java is a synonym for dog slow memory hog on the desktop and mighty unpopular there because of that. The whole Java ecosystem from the VM to the libraries is optimized for enterprise server use cases, for throughput, scalability, and robustness, not for making responsive GUIs (and low latency in general) or for memory use.
>>
>>
> Ahem. Wrong. See: WinForms, WPF, Silverlight. All extremely successful GUI toolkits that are not known for GC related problems. I've been working with WPF since 2005, I can say the biggest performance problem with it by far is the naive rendering of rounded corners, the GC has NEVER caused a hitch.


On a modern multi-GHz PC with many cores, many GB of RAM (most of which is unallocated), a hardware virtual memory manager, and a mature RTOS. Computers come in all shapes and sizes. D is positioned as a systems language, last time I checked... or else I don't know what I'm doing here.

>> If D wants to be the new Java, GC is the way to go, but no heap-allocation-happy
>> GCed language will ever challenge C/C++ on the desktop.
>>
>>
> So that's why nearly every desktop app (for Windows at least, but that's the overwhelming majority) that started development since .NET came out is written C#?


I don't think people write C# because it has a GC. People write C# because
it is productive and awesome: it has an amazing dev infrastructure and dev
environment, well-integrated GUI toolkits, a debugger that works great,
excellent docs, etc.
Correlation does not imply causation.
I know lots of people who love C#, even write games in it, but criticise
the GC as its biggest flaw.

I'm not saying there aren't lots of people who love it, and for C#'s intended market it makes perfect sense. But I don't think D's market is C#'s market. If it were, I would be a happy C# developer, and I never would have given D a moment's notice.