April 28, 2012
On Saturday, 28 April 2012 at 01:09:25 UTC, H. S. Teoh wrote:
> On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote:
> [...]
>> The other thing that would make it attractive among the C++
>> developers, would be the development of a lightweight, high
>> performance, minimal library that doesn't use the GC at all.  Ideally,
>> it would be compatible with Phobos. I bet if such a library existed,
>> flocks of C++ developers would suddenly switch to D.
>
> I know the current GC leaves much room for improvement, but what's the
> hangup about the GC anyway? If -- and yes this is a very big if -- the
> GC has real-time guarantees, would that make it more palatable to C++
> devs? Or is it just because they have trouble with the idea of having a
> GC in the first place?
>
>
> T

Real time guarantees on a GC are not something we are going to offer anytime soon anyway. But a minimal library, loosely based on the C standard library with some more bells and whistles borrowed from Phobos, is a goal that is achievable in the foreseeable future. And both game developers and embedded programmers would be interested.
April 28, 2012
On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
>
> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.

Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
April 28, 2012
On Saturday, April 28, 2012 11:12:21 SomeDude wrote:
> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.

If what you want is the C standard library, then use the C standard library. There's nothing stopping you, and trying to replicate it in D would be pointless.

The main problems with the GC in Phobos are likely arrays and containers. You can't fix the array problem. If you do _anything_ which involves slicing or any array functions which could allocate, you're going to need the GC. The only way to avoid the problem completely is to restrict the functions that you use with arrays to those which won't append to an array or otherwise allocate memory for an array. The container problem should be resolved via custom allocators once they've been added. The custom allocators will also help reduce GC issues for classes in general.

But in general, by minimizing how much you do which would require the GC, the little that does shouldn't be a big deal. Still, due to how arrays work, there's really no way to get away from the GC completely without restricting what you do with them, which in some cases means not using Phobos. I don't think that there's really any way around that.
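A minimal sketch of that distinction (illustrative code, with made-up buffer names): slicing never allocates, while appending and concatenation go through the GC.

```d
void main()
{
    int[] buf = new int[](1024); // one up-front GC allocation

    // Slicing is just a pointer + length pair: no allocation.
    int[] window = buf[16 .. 32];
    window[0] = 42;
    assert(buf[16] == 42);       // the slice aliases the original

    // Appending may reallocate through the GC...
    buf ~= 7;

    // ...and concatenation always allocates a new array.
    int[] joined = buf[0 .. 4] ~ window;
    assert(joined.length == 20);
}
```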

- Jonathan M Davis
April 28, 2012
On Saturday, 28 April 2012 at 09:22:35 UTC, Jonathan M Davis wrote:
> On Saturday, April 28, 2012 11:12:21 SomeDude wrote:
>> Real time guarantees on a GC is not something we are going to
>> offer anytime soon anyway. While a minimal library, loosely based
>> on the C standard library, with some more bells and whistles that
>> could be borrowed from Phobos, this is a goal that is achievable
>> in a foreseeable future. And both game developers and embedded
>> programmers would be interested.
>
> If what you want is the C standard library, then use the C standard library.
> There's nothing stopping you, and trying to replicate it in D would be
> pointless.
>
> The main problems with the GC in Phobos are likely arrays and containers. You
> can't fix the array problem. If you do _anything_ which involves slicing or any
> array functions which could allocate, you're going to need the GC. The only
> way to avoid the problem completely is to restrict the functions that you use
> with arrays to those which won't append to an array or otherwise allocate
> memory for an array. The container problem should be resolved via custom
> allocators once they've been added. The custom allocators will also help
> reduce GC issues for classes in general.
>
> But in general, by minimizing how much you do which would require the GC, the
> little that does shouldn't be a big deal. Still, due to how arrays work,
> there's really no way to get away from the GC completely without restricting
> what you do with them, which in some cases means not using Phobos. I don't
> think that there's really any way around that.
>
> - Jonathan M Davis

Right, I understand the situation better now. So basically, what's needed is custom allocators, and the GC would be relieved of much of the work. That would still not work for hard real-time embedded, but those applications have lots of restrictions on memory anyway (no dynamic allocation, for one), so it wouldn't change much.

April 28, 2012
On Saturday, April 28, 2012 11:35:19 SomeDude wrote:
> Right, I understand the situation better now. So basically, what's needed is custom allocators, and the GC would be relieved of much of the work. That would still not work for hard real-time embedded, but those applications have lots of restrictions on memory anyway (no dynamic allocation, for one), so it wouldn't change much.

With custom allocators and/or shared pointers/references, you can pretty much avoid the GC entirely for classes as well as any structs that you put on the heap. So, you'd be in essentially the same place that C++ is for that.

It's just arrays that you can't really fix. If you restrict yourself to what C/C++ can do with arrays (plus taking advantage of the length property), then you're fine, but if you do much beyond that, then you need the GC or you're going to have problems.

So, as long as you're careful with arrays, you should be able to have the memory situation be pretty much identical to what it is in C/C++. And, of course, if you can afford to use the GC in at least some of your code, then it's there to use.

I believe that the typical approach however is to use the GC unless profiling indicates that it's causing you performance problems somewhere, and then you optimize that code so that it minimizes its GC usage or so that it avoids the GC entirely. That way, your program as a whole can reap the benefits granted by the GC, but your performance-critical code can still be performant.

Actually, now that I think about it, delegates would be another area where you'd have to be careful, since they generally end up having closures allocated for them when you pass them to a function, unless that function takes them as scope parameters. But it's easy to avoid using delegates if you want to. And if you want to program in a subset of the language that's closer to C, then you probably wouldn't be using them anyway.
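A small sketch of the difference (the `each` functions here are hypothetical, purely for illustration): a `scope` delegate parameter tells the compiler the delegate can't escape, so no closure needs to be heap-allocated for the captured variables.

```d
// 'scope' promises the delegate won't outlive the call,
// so the compiler can skip allocating a closure for it.
void each(scope void delegate(int) dg)
{
    foreach (i; 0 .. 3)
        dg(i);
}

// Without 'scope', a delegate capturing locals generally
// forces a GC-allocated closure, since it might escape.
void eachEscaping(void delegate(int) dg)
{
    dg(0);
}

void main()
{
    int sum = 0;
    each((int i) { sum += i; }); // captures 'sum'; no heap closure needed
    assert(sum == 3);            // 0 + 1 + 2
}
```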

- Jonathan M Davis
April 28, 2012
On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:
> On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
>>
>> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.
>
> Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.

I also write games in D.

My approach is this: use the GC all you want during loading or other non-interactive parts of the game and then just make sure that you don't use it during gameplay.

GC vs. manual memory allocation is a non-issue for real-time guarantees. The simple fact of the matter is that you should be using neither. I also don't use malloc/free during runtime because it has the same non-real-time problems as using the GC. A single malloc can stall for tens of milliseconds or more, and that's simply too much.

Just learn how to write code that doesn't allocate memory.

A bigger problem with GC for games is memory management, i.e. controlling how much memory is currently allocated and what systems are using what memory. Deterministic memory usage is preferable in those cases because I know that as soon as I delete something, the memory is available for something else. I don't get that guarantee with a GC.
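As a rough illustration of that style (a hypothetical fixed-size pool; the names are made up): all storage is allocated with the game, and gameplay only recycles slots, so freeing a slot makes it reusable immediately and deterministically.

```d
struct Bullet { float x, y, dx, dy; bool alive; }

struct BulletPool
{
    Bullet[256] bullets; // fixed storage: no allocation after startup

    // Reuse a dead slot; no GC, no malloc.
    Bullet* spawn()
    {
        foreach (ref b; bullets)
            if (!b.alive) { b.alive = true; return &b; }
        return null; // pool exhausted: a budget decision, not a pause
    }

    // The slot is available for something else immediately.
    void kill(Bullet* b) { b.alive = false; }
}

void main()
{
    BulletPool pool;
    auto b = pool.spawn();
    assert(b !is null && b.alive);
    pool.kill(b);
    assert(!b.alive);
}
```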
April 29, 2012
"SomeDude" <lovelydear@mailmetrash.com> wrote in message news:zmlqmuhznaynwtcyplof@forum.dlang.org...
> On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
>>
>> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.
>
> Note that Kenta Cho, who wrote fast games in D1,

Actually, I think it was pre-D1.

(They were fantastic games, too.)

> used this approach, and it worked very well for him.

Interesting, I had wondered about that. I never dug quite that deep into the code, so I never knew he had done it that way.


April 29, 2012
On 28 April 2012 04:10, H. S. Teoh <hsteoh@quickfur.ath.cx> wrote:

> On Sat, Apr 28, 2012 at 01:31:32AM +0200, SomeDude wrote: [...]
> > The other thing that would make it attractive among the C++ developers, would be the development of a lightweight, high performance, minimal library that doesn't use the GC at all.  Ideally, it would be compatible with Phobos. I bet if such a library existed, flocks of C++ developers would suddenly switch to D.
>
> I know the current GC leaves much room for improvement, but what's the hangup about the GC anyway? If -- and yes this is a very big if -- the GC has real-time guarantees, would that make it more palatable to C++ devs? Or is it just because they have trouble with the idea of having a GC in the first place?
>

If the GC guarantees to behave in a deterministic and predictable way, I
have no problem with it. And even if it doesn't, as long as it's lightning
fast, and I can control the sweeps.
One major concern to me is invisible allocations. I want to know when I'm
allocating, I like allocate operations to be clearly visible. There are a
lot of operations that cause invisible allocations in D, but they are
avoidable.
Games are embedded and realtime code at the same time, which unites the
strict requirements of both worlds into a system that demands very tight
control of these things. Fragmentation is the enemy, and so is losing 1ms
(the GC currently takes WAY longer than this) at random moments.

There is a problem right now where the GC doesn't actually seem to work,
and I'm seeing D apps allocating gigabytes and never releasing the memory.
A great case study for the GC is VisualD, if any GC experts would like to
check it out. It shows a use case where the GC utterly fails, and makes the
software borderline unusable as a result. It seems to 'leak' memory, and
collects can take 5-10 seconds at a time (manifested by locking up the
entire application).
VisualD has completely undermined my faith and trust in the GC, and I've
basically banned using it. I can't afford to run into that situation a few
months down the line.


April 29, 2012
On 28 April 2012 18:16, Peter Alexander <peter.alexander.au@gmail.com>wrote:

> On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:
>
>> On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
>>
>>>
>>> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.
>>>
>>
>> Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
>>
>
> I also write games in D.
>
> My approach is this: use the GC all you want during loading or other non-interactive parts of the game and then just make sure that you don't use it during gameplay.
>
> GC vs. manual memory allocation is a non-issue for real-time guarantees. The simple fact of the matter is that you should be using neither. I also don't use malloc/free during runtime because it has the same non-real-time problems as using the GC. A single malloc can stall for tens of milliseconds or more, and that's simply too much.
>
> Just learn how to write code that doesn't allocate memory.
>
> A bigger problem with GC for games is memory management i.e. controlling how much memory is currently allocated, and what systems are using what memory. Having deterministic memory usage is preferable for those cases because I know that as soon as I delete something that the memory is available for something else. I don't get that guarantee with a GC.
>

I think that basically sums it up.

What I'm interested to know is whether using a new precise GC will
guarantee that ALL unreferenced memory will be cleaned up on any given sweep.
I can imagine a model in games where I could:
 1. Use the GC to allocate as much as I like during initialisation.
 2. During runtime you never allocate anyway, so disable the GC (this is
when it is important to know about hidden allocations).
 3. During clean-up, first run the logic to de-reference all things
that are no longer required.
 4. Finally, force a precise GC scan, which should guarantee that all
no-longer-referenced memory is cleaned up at that time.

This would actually be a very convenient working model for games. But it only works if I know everything that was released will definitely be cleaned, otherwise I may not be able to allocate the next level (games often allocate all memory a machine has within 100k or so).
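The four steps above could be sketched with the `core.memory` GC interface (a sketch only; with the current conservative collector there is no guarantee that every unreferenced block gets freed):

```d
import core.memory : GC;

void main()
{
    // 1. Load: allocate freely through the GC during initialisation.
    auto level = new int[](1 << 16);

    // 2. Gameplay: no allocation happens anyway, so suspend collections
    //    to rule out random pauses.
    GC.disable();
    // ... frame loop that allocates nothing ...
    GC.enable();

    // 3. Tear-down: drop the last references to the level data...
    level = null;

    // 4. ...and force a collection at a moment we control.
    GC.collect();
}
```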


April 29, 2012
On Apr 29, 2012, at 2:38 AM, Manu <turkeyman@gmail.com> wrote:

> On 28 April 2012 18:16, Peter Alexander <peter.alexander.au@gmail.com> wrote:
> On Saturday, 28 April 2012 at 09:14:51 UTC, SomeDude wrote:
> On Saturday, 28 April 2012 at 09:12:23 UTC, SomeDude wrote:
> 
> Real time guarantees on a GC is not something we are going to offer anytime soon anyway. While a minimal library, loosely based on the C standard library, with some more bells and whistles that could be borrowed from Phobos, this is a goal that is achievable in a foreseeable future. And both game developers and embedded programmers would be interested.
> 
> Note that Kenta Cho, who wrote fast games in D1, used this approach, and it worked very well for him.
> 
> I also write games in D.
> 
> My approach is this: use the GC all you want during loading or other non-interactive parts of the game and then just make sure that you don't use it during gameplay.
> 
> GC vs. manual memory allocation is a non-issue for real-time guarantees. The simple fact of the matter is that you should be using neither. I also don't use malloc/free during runtime because it has the same non-real-time problems as using the GC. A single malloc can stall for tens of milliseconds or more, and that's simply too much.
> 
> Just learn how to write code that doesn't allocate memory.
> 
> A bigger problem with GC for games is memory management i.e. controlling how much memory is currently allocated, and what systems are using what memory. Having deterministic memory usage is preferable for those cases because I know that as soon as I delete something that the memory is available for something else. I don't get that guarantee with a GC.
> 
> I think that basically sums it up.
> 
> I'm interested to know is whether using a new precise GC will guarantee ALL unreferenced stuff will be cleaned on any given sweep.
> I can imagine a model in games where I could:
>  1 Use the GC to allocate as much as I like during initialisation
>  2 During runtime you never allocate anyway, so disable the GC (this is when it is important to know about hidden allocations)
>  3 During some clean-up, first run the logic to de-reference all things that are no longer required
>  4 Finally, force a precise GC scan, which should guarantee that all no-longer referenced memory would be cleaned up at that time.
> 
> This would actually be a very convenient working model for games. But it only works if I know everything that was released will definitely be cleaned, otherwise I may not be able to allocate the next level (games often allocate all memory a machine has within 100k or so).

For a use pattern like this, one thing that may work is to add a GC proxy immediately before loading a level. To unload the level, terminate that GC.