February 01, 2014
On 2014-02-01 09:27:18 +0000, "Adam Wilson" <flyboynw@gmail.com> said:

> That's kind of my point. You're asking for massive changes throughout the entire compiler to support what is becoming more of an edge case, not less of one. For the vast majority of use cases, a GC is the right call and D has to cater to the majority if it wants to gain any significant mindshare at all. You don't grow by increasing specialization...

Keep in mind that improving the GC we have now is probably going to require pretty much the same changes in the compiler, perhaps even more drastic ones than ARC would.

To make it short, to implement a concurrent GC the compiler must add some code for each pointer assignment and each pointer move. To implement ARC, the compiler must add some code only to pointer assignments.
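
To make that concrete, here's a rough sketch of the lowering the compiler would have to emit in each case. gcWriteBarrier, retain and release are hypothetical hooks for illustration only, not actual druntime symbols:

// Hypothetical runtime hooks -- illustration only, not druntime APIs.
void gcWriteBarrier(void** slot) { /* remember the mutated slot so the collector can rescan it */ }
void retain(void* p)  { /* atomically increment p's reference count */ }
void release(void* p) { /* decrement the count; free p when it reaches zero */ }

// Under a concurrent GC, every pointer assignment (and every move of a
// pointer-holding value) needs a barrier so the collector never loses a
// live reference while it runs alongside the mutator:
void assignWithGC(ref void* slot, void* p)
{
    gcWriteBarrier(&slot);
    slot = p;
}

// Under ARC, only assignments need instrumentation; a plain move doesn't
// change any object's reference count:
void assignWithARC(ref void* slot, void* p)
{
    if (p) retain(p);        // retain the new value first (safe for self-assignment)
    auto old = slot;
    slot = p;
    if (old) release(old);   // release the old value, possibly freeing it
}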

The point is, unless we're satisfied with the current GC implementation, those massive changes to the compiler will need to be done anyway. From the compiler's point of view ARC or a better concurrent GC have pretty similar requirements. And given current D semantics (where stack variables are movable at will) ARC is easier to implement than a concurrent GC.

-- 
Michel Fortin
michel.fortin@michelf.ca
http://michelf.ca

February 01, 2014
On 1/31/14, 8:33 PM, Frank Bauer wrote:
> @Andrei (who I believe is the go-to (!) guy for all things memory
> allocation related right now):
> IIRC you mentioned that it was convenient to have the GC around for
> implementing the D language features. Would it just be a minor
> inconvenience to drop that dependency in the generated compiler output
> and replace it with new/delete or something equivalent to owning
> pointers, say over the next one or two years? Or would it be a major
> roadblock that would require too much redesign work?

It would be a major effort. It's possible we have to do it.

> Maybe you could
> test the concept of owning and borrowed pointers internally for some
> compiler components before actually bringing them "to the surface" for
> us to play with, if it turns out useful. But only of course, if we could
> leave the rest of D as good as it is today.
>
> I would really like to see the GC-free requesting crowd be taken a little
> more seriously, without asking them to forego D features or do manual
> C++-style memory management. Java's and .NET's garbage collectors had
> enough time to mature and are still unsatisfactory for many applications.

I consider memory allocation a major issue to focus on in 2014.


Andrei

February 01, 2014
Am 01.02.2014 13:35, schrieb develop32:
> On Saturday, 1 February 2014 at 12:29:10 UTC, Nick Sabalausky wrote:
>> Come to think of it, I wonder what language Frostbite 3 uses for
>> game-level "scripting" code. From what I've heard about them,
>> Frostbite 3 and Unreal Engine 4 both sound like they have some notable
>> similarities with Unity3D (from the standpoint of the user-experience
>> for game developers), although AIUI Unreal Engine 4 still uses C++ for
>> game code (unless the engine user wants to do Lua or something on
>> their own). I imagine Frostbite's probably the same, C++, but I
>> haven't actually heard anything.
>
> Frostbite has an option for both Lua and C++. C++ is the preferred one.
>
> Unreal Engine went from Unrealscript to C++ and everyone applauded that.

Do you have any experience with Project Anarchy?

--
Paulo
February 01, 2014
On Saturday, 1 February 2014 at 16:37:23 UTC, Paulo Pinto wrote:

> Do you have any experience with Project Anarchy?
>
> --
> Paulo

No I have not. I haven't got a smartphone yet, so I guess that explains it.
February 01, 2014
On Saturday, 1 February 2014 at 11:40:37 UTC, Frustrated wrote:
>
> And why does Phobos/runtime require the GC in so many cases when
> it could just use an internal buffer? So much effort has been put
> in to make the GC work that it simply has neglected all those
> that can't use it as it is.

This is the crux of the problem.  It's not so much that D uses garbage collection as that Phobos is built in a way that prevents the reuse of existing buffers.  It becomes much harder to sell the language to a GC-averse group if it turns out they can't use the standard library as well.  Though I guess Tango is an option here as a replacement.
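
Just to illustrate what "reuse of existing buffers" means in practice (a sketch only -- label and FixedSink are made-up names, not Phobos APIs): a convenience function that returns a string forces a GC allocation on every call, while the same functionality written against an output range lets the caller decide where the bytes go.

import std.conv   : to;
import std.format : formattedWrite;

// Convenience style: every call allocates a fresh string on the GC heap.
string label(int id, double score)
{
    return "entry " ~ id.to!string ~ ": " ~ score.to!string;
}

// Sink style: the caller passes any output range, so an existing buffer
// can be reused and the library forces no GC allocation of its own.
void label(Writer)(ref Writer sink, int id, double score)
{
    sink.formattedWrite("entry %s: %s", id, score);
}

// A trivial fixed-buffer sink for the example.
struct FixedSink
{
    char[] buf;
    size_t len;
    void put(const(char)[] s)
    {
        buf[len .. len + s.length] = s[];
        len += s.length;
    }
}

unittest
{
    char[64] storage;
    auto sink = FixedSink(storage[]);
    label(sink, 42, 3.5);   // writes into `storage`; nothing is GC-allocated
    assert(sink.buf[0 .. sink.len] == "entry 42: 3.5");
}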
February 01, 2014
On Saturday, 1 February 2014 at 17:30:54 UTC, Sean Kelly wrote:
> On Saturday, 1 February 2014 at 11:40:37 UTC, Frustrated wrote:
>>
>> And why does Phobos/runtime require the GC in so many cases when
>> it could just use an internal buffer? So much effort has been put
>> in to make the GC work that it simply has neglected all those
>> that can't use it as it is.
>
> This is the crux of the problem.  It's not so much that D uses garbage collection as that Phobos is built in a way that prevents the reuse of existing buffers.  It becomes much harder to sell the language to a GC averse group if it turns out they can't use the standard library as well.  Though I guess Tango is an option here as a replacement.

Right, and because of the mentality that the GC is one size that
fits all, Phobos got this way. That mentality is still pervasive.
I call it laziness.

I think that at some point Phobos will need to be rewritten...
maybe more .NET-like (their sense of hierarchy and presentation is
excellent). Maybe D needs a proper allocator subsystem built in
to get to that point?
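
Something along these lines would be enough to start with -- a minimal sketch with hypothetical names, not an existing druntime/Phobos API:

// A minimal sketch of an allocator interface -- hypothetical names, not an
// existing druntime/Phobos API.
interface Allocator
{
    void[] allocate(size_t bytes);
    void   deallocate(void[] block);
}

// Library code written against the interface never touches the GC directly;
// the caller decides whether memory comes from the GC, malloc, a pool, etc.
class MallocAllocator : Allocator
{
    import core.stdc.stdlib : malloc, free;

    void[] allocate(size_t bytes)
    {
        auto p = malloc(bytes);
        if (p is null) return null;
        return p[0 .. bytes];
    }

    void deallocate(void[] block)
    {
        free(block.ptr);
    }
}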
February 01, 2014
On 1 February 2014 18:20, Paulo Pinto <pjmlp@progtools.org> wrote:

> Am 01.02.2014 06:29, schrieb Manu:
>
>> On 26 December 2012 00:48, Sven Over <dlang@svenover.de> wrote:
>>
>>         std.typecons.RefCounted!T
>>
>>         core.memory.GC.disable();
>>
>>
>>     Wow. That was easy.
>>
>>     I see, D's claim of being a multi-paradigm language is not false.
>>
>>
>> It's not a realistic suggestion. Everything you want to link uses the GC, and the language itself also uses the GC. Unless you write software in complete isolation and forego many valuable features, it's not a solution.
>>
>>
>>         Phobos does rely on the GC to some extent. Most algorithms and
>>         ranges do not though.
>>
>>
>>     Running (library) code that was written with GC in mind and turning
>>     GC off doesn't sound ideal.
>>
>>     But maybe this allows me to familiarise myself more with D. Who
>>     knows, maybe I can learn to stop worrying and love garbage collection.
>>
>>     Thanks for your help!
>>
>>
>> I've been trying to learn to love the GC for as long as I've been around
>> here. I really wanted to break that mental barrier, but it hasn't
>> happened.
>> In fact, I am more than ever convinced that the GC won't do. My current
>> #1 wishlist item for D is the ability to use a reference counted
>> collector in place of the built-in GC.
>> You're not alone :)
>>
>> I write realtime and memory-constrained software (console games), and for me, I think the biggest issue that can never be solved is the non-deterministic nature of the collect cycles, and the unknowable memory footprint of the application. You can't make any guarantees or predictions about the GC, which is fundamentally incompatible with realtime software.
>>
>
>
> Meanwhile Unity and similar engines are becoming widespread, with C++ being pushed all the way to the bottom of the stack.
>
> At least from what I hear in the gaming communities I hop around.
>
> What is your experience there?
>

Unity is indeed popular, for casual/indie games.
AAA/'big games' show no signs of moving away from C++. The 'next gen' has
enough memory for GC (still can't afford the time though), but handhelds
and small devices are a bigger market these days.
It's true that there are fewer 'big games' on handhelds, which are the
future of resource-limited devices, but I think that rift is closing
quickly.


February 01, 2014
On Sat, 01 Feb 2014 09:57:41 -0800, Frustrated <c1514843@drdrb.com> wrote:

> On Saturday, 1 February 2014 at 17:30:54 UTC, Sean Kelly wrote:
>> On Saturday, 1 February 2014 at 11:40:37 UTC, Frustrated wrote:
>>>
>>> And why does Phobos/runtime require the GC in so many cases when
>>> it could just use an internal buffer? So much effort has been put
>>> in to make the GC work that it simply has neglected all those
>>> that can't use it as it is.
>>
>> This is the crux of the problem.  It's not so much that D uses garbage collection as that Phobos is built in a way that prevents the reuse of existing buffers.  It becomes much harder to sell the language to a GC averse group if it turns out they can't use the standard library as well.  Though I guess Tango is an option here as a replacement.
>
> Right, and because of the mentality that the GC is one size that
> fits all, Phobos got this way. That mentality is still pervasive.
> I call it laziness.
>
> I think that at some point Phobos will need to be rewritten...
> maybe more .NET-like (their sense of hierarchy and presentation is
> excellent). Maybe D needs a proper allocator subsystem built in
> to get to that point?

You do realize that in .NET the BCL is not GC-optional, right? It GC-allocates like a fiend. What they have is a monumentally better GC than D's. .NET is actually proof that GCs can work quite well in all but the most extreme edge cases.

Effectively what you are saying is ".NET does it well with a GC, so we need to rewrite Phobos to not use a GC" ... Um, wait, what?

I am not saying that Phobos needs to allocate as much as it does, just that the argument itself fails to meet a basic standard of logical coherency.

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 01, 2014
On Sat, 01 Feb 2014 04:04:54 -0800, JR <zorael@gmail.com> wrote:

> On Saturday, 1 February 2014 at 05:36:44 UTC, Manu wrote:
>> I write realtime and memory-constrained software (console games), and for
>> me, I think the biggest issue that can never be solved is the
>> non-deterministic nature of the collect cycles, and the unknowable memory
>> footprint of the application. You can't make any guarantees or predictions
>> about the GC, which is fundamentally incompatible with realtime software.
> (tried to manually fix ugly linebreaks here, so apologies if it turns out even worse.)
>
> (Maybe this would be better posted in D.learn; if so I'll crosspost.)
>
> In your opinion, of how much value would deadlining be? As in, "okay handyman, you may sweep the floor now BUT ONLY FOR 6 MILLISECONDS; whatever's left after that you'll have to take care of next time, your pride as a professional Sweeper be damned"?
>
> It obviously doesn't address memory footprint, but you would get the illusion of determinism in cases similar to where race-to-idle approaches work. Inarguably, this wouldn't apply if the goal is to render as many frames per second as possible, such as for non-console shooters where tearing is not a concern but latency is very much so.
>
> I'm very much a layman in this field, but I'm trying to soak up as much knowledge as possible, and most of it from threads like these. To my uneducated eyes, an ARC collector does seem like the near-ideal solution -- assuming, as always, the code is written with the GC in mind. But am I right in gathering that it solves two-thirds of the problem? You don't need to scan the managed heap, but when memory is actually freed is still non-deterministic and may incur pauses, though not necessarily a program-wide stop. Aye?
>

It would only not be a program-wide stop if you had multiple threads running; otherwise, yes, ARC can still stop the world for a non-deterministic period of time, because you, the programmer, have no idea how long that collection cycle will last. Also note that this just shuffles where the collection happens. In D's GC a collection can happen any time you attempt to allocate, whereas with ARC you collect eagerly when you delete, because if you don't collect eagerly you'll have a memory leak. Also, you can't make ARC concurrent.
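
To put the eager-collection point in code (a toy sketch, nothing like a production ARC runtime): when the last reference to the head of a long chain is released, the whole chain is freed right there, on the releasing thread.

import core.stdc.stdlib : malloc, free;

// A toy manually reference-counted node -- illustration only.
struct Node
{
    size_t refCount;
    Node*  next;
}

Node* makeNode(Node* next)
{
    auto n = cast(Node*) malloc(Node.sizeof);
    *n = Node(1, next);
    return n;
}

// Dropping the last reference frees the node immediately; if that node held
// the last reference to `next`, it is freed too, and so on down the chain.
// Releasing the head of a million-node list means a million frees, right
// now, on this thread: eager and leak-free, but not pause-free.
void release(Node* n)
{
    while (n !is null && --n.refCount == 0)
    {
        auto next = n.next;
        free(n);
        n = next;
    }
}

unittest
{
    auto chain = makeNode(makeNode(makeNode(null)));
    release(chain);   // frees all three nodes in one call
}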

> At the same time, Lucarella's DConf slides were very, very attractive. I gather that allocations themselves may become slower with a concurrent collector, but collection times in essence become non-issues. Technically parallelism doesn't equate to free CPU time; but it more or less *is*, assuming there are cores/threads to spare. Wrong?
>

Essentially yes, concurrency does get rid of MOST of the STW aspects of GCs. However, most modern GCs are generational, and typically there are one or two generations that are not collected concurrently. In .NET, Generations 0 and 1 are not collected concurrently because they can be collected very quickly, more quickly than the cost of enabling concurrent collection support on allocation.

For example, I use WPF for almost every project I do at work. WPF is a retained-mode GUI API based on DirectX 9, and it has a 60FPS render speed requirement. The only time I have seen the rendering bog down due to the GC is when there are a LOT of animations starting and stopping. Otherwise it's almost always because of WPF's horrifically naive rendering code.

> Lastly, am I right in understanding precise collectors as identical to the stop-the-world collector we currently have, but with a smarter allocation scheme resulting in a smaller managed heap to scan? With the additional bonus of fewer false pointers. If so, this would seem like a good improvement to the current implementation, with the next increment in the same direction being a generational GC.
>

Correct, precision won't change the STW nature of the GC, it just means there is much less to scan/collect in the first place, and believe it or not, the difference can be huge. See Rainer Schuetze's DConf talk for more information on a precise collector in D: http://dconf.org/2013/talks/schuetze.html
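
For anyone who hasn't run into the false-pointer problem that precision removes, a contrived example (the names are made up):

// With a conservative (imprecise) scan, the collector cannot tell data from
// pointers, so `id` below is scanned as if it might be a reference.  If its
// integer value happens to fall inside a GC pool, whatever block it "points"
// at is kept alive spuriously.  A precise collector knows only `payload`
// needs to be followed.
struct Record
{
    size_t  id;       // plain data, but scanned as a potential pointer
    ubyte[] payload;  // an actual reference the collector must follow
}

On 32-bit targets, values that accidentally look like heap pointers are common enough to pin a surprising amount of garbage, which is a big part of why precision pays off.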

> I would *dearly* love to have concurrency in whatever we end up with, though. For a multi-core personal computer threads are free lunches, or close enough so. Concurrentgate and all that jazz.

You and me both. This is the way all GCs are headed; I can't think of a major GC language that doesn't have a concurrent-generational-incremental GC. :-)

-- 
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator
February 01, 2014
On 1 February 2014 19:27, Adam Wilson <flyboynw@gmail.com> wrote:

> On Fri, 31 Jan 2014 23:35:44 -0800, Manu <turkeyman@gmail.com> wrote:
>
>> On 1 February 2014 16:26, Adam Wilson <flyboynw@gmail.com> wrote:
>>
>>> On Fri, 31 Jan 2014 21:29:04 -0800, Manu <turkeyman@gmail.com> wrote:
>>>
>>>> On 26 December 2012 00:48, Sven Over <dlang@svenover.de> wrote:
>>>>
>>>>> std.typecons.RefCounted!T
>>>>>
>>>>> core.memory.GC.disable();
>>>>>
>>>>> Wow. That was easy.
>>>>>
>>>>> I see, D's claim of being a multi-paradigm language is not false.
>>>>
>>>> It's not a realistic suggestion. Everything you want to link uses the
>>>> GC, and the language itself also uses the GC. Unless you write software
>>>> in complete isolation and forego many valuable features, it's not a
>>>> solution.
>>>>
>>>>> Phobos does rely on the GC to some extent. Most algorithms and ranges
>>>>> do not though.
>>>>>
>>>>> Running (library) code that was written with GC in mind and turning GC
>>>>> off doesn't sound ideal.
>>>>>
>>>>> But maybe this allows me to familiarise myself more with D. Who knows,
>>>>> maybe I can learn to stop worrying and love garbage collection.
>>>>>
>>>>> Thanks for your help!
>>>>
>>>> I've been trying to learn to love the GC for as long as I've been around
>>>> here. I really wanted to break that mental barrier, but it hasn't
>>>> happened. In fact, I am more than ever convinced that the GC won't do.
>>>> My current #1 wishlist item for D is the ability to use a reference
>>>> counted collector in place of the built-in GC.
>>>> You're not alone :)
>>>>
>>>> I write realtime and memory-constrained software (console games), and
>>>> for me, I think the biggest issue that can never be solved is the
>>>> non-deterministic nature of the collect cycles, and the unknowable
>>>> memory footprint of the application. You can't make any guarantees or
>>>> predictions about the GC, which is fundamentally incompatible with
>>>> realtime software.
>>>> Language-level ARC would probably do quite nicely for the miscellaneous
>>>> allocations. Obviously, bulk allocations are still usually best handled
>>>> in a context-sensitive manner; ie, regions/pools/freelists/whatever, but
>>>> the convenience of the GC paradigm does offer some interesting and
>>>> massively time-saving features to D.
>>>> Everyone will always refer you to RefCounted, which mangles your types
>>>> and pollutes your code, but aside from that, for ARC to be useful, it
>>>> needs to be supported at the language level, such that the
>>>> language/optimiser is able to optimise out redundant incref/decref
>>>> calls, and also that it is compatible with immutable (you can't manage a
>>>> refcount if the object is immutable).
>>>
>>> The problem isn't GCs per se, but D's horribly naive implementation.
>>> Games are written in GC languages all the time now (Unity/.NET). And
>>> let's be honest, games are kind of a speciality; games do things most
>>> programs will never do.
>>>
>>> You might want to read the GC Handbook. GCs aren't bad, but most, like
>>> the D GC, are just too simplistic for common usage today.
>>
>> Maybe a sufficiently advanced GC could address the performance
>> non-determinism to an acceptable level, but you're still left with the
>> memory non-determinism, and the conundrum that when your heap approaches
>> full (which is _always_ on a games console), the GC has to work harder
>> and harder, and more often, to try and keep the tiny little bit of
>> overhead available.
>> A GC heap by nature expects you to have lots of memory, and also lots of
>> FREE memory.
>>
>> No serious console game I'm aware of has ever been written in a language
>> with a GC. Casual games, or games that don't attempt to raise the bar,
>> may get away with it, but that's not the industry I work in.
>
> That's kind of my point. You're asking for massive changes throughout the
> entire compiler to support what is becoming more of an edge case, not less
> of one. For the vast majority of use cases, a GC is the right call and D
> has to cater to the majority if it wants to gain any significant mindshare
> at all. You don't grow by increasing specialization...


Why is ARC any worse than GC? Why is it even a compromise at the high level? Major players have been moving away from GC to ARC in recent years. It's still a perfectly valid method of garbage collection, and it has the advantage that it's intrinsically real-time compatible.

I don't think realtime software is becoming an edge case by any means, maybe 'extreme' realtime is, but that doesn't change the fact that the GC still causes problems for all realtime software.

I personally believe latency and stuttering is one of the biggest usability hindrances in modern computing, and it will become a specific design focus in software of the future. People are becoming less and less tolerant of latency in all forms; just consider the success of the iPhone compared to the competition, almost entirely attributable to its silky smooth UI experience. It may also be a telling move that Apple switched to ARC around the same time, but I don't know the details.

I also firmly believe that D - a natively compiled language familiar to
virtually all low-level programmers - should have the ambition to service
the embedded space in the future; if it doesn't, what will? And why not?
The GC is the only thing inhibiting D from being a successful match in that
context. ARC is far more appropriate, and would see it enter a lot more
fields.
What's the loss?

I think it's also telling that newcomers constantly raise it as a massive
concern, or even a deal-breaker. Would they feel the same about ARC? I
seriously doubt it. I wonder if a poll is in order...
Conversely, would any of the newcomers who are pro-GC feel any less happy
if it were ARC instead? $100 says they probably wouldn't even know, and
almost certainly wouldn't care.