January 08, 2013
On Tue, Jan 08, 2013 at 10:29:26AM +0100, Paulo Pinto wrote:
> On Monday, 7 January 2013 at 23:13:13 UTC, H. S. Teoh wrote:
> >...
> >
> >Crippling the language to cater to the 10% crowd who want to squeeze every last drop of performance from the hardware is the wrong approach IMO.
[...]
> Agreed.
> 
> Having used GC languages for the last decade, I think the cases where manual memory management is really required are very few.
> 
> Even if one is forced to do manual memory management instead of using the GC, it is still better to have the GC around than to do everything manually.

Yes, hence my idea of splitting up the performance-critical core of a game engine vs. the higher-level application stuff (like scripting, etc.) that isn't as performance-critical. The latter would be greatly helped by a GC -- it makes the scripting people's job easier, whereas writing GC-less code demands a certain level of rigor and requires more effort and care than is necessary for the most part.


> But this is based on my experience doing business applications, desktop and server side or services/daemons.
[...]

Well, business applications and server-side stuff (I assume it's web-based stuff) are exactly the kind of applications that benefit the most from a GC. In my mind, they are just modern incarnations of batch processing applications, where instant response isn't critical, and so the occasional GC pause is acceptable and, indeed, mostly unnoticeable.

Game engines, OTOH, are a step away from hard real-time applications, where pause-the-world GCs are unacceptable. While it isn't fatal for a game engine to pause every now and then, it is very noticeable, and detrimental to the players' experience, so game devs generally shy away from anything that needs to pause the world. For real-time apps, though, it's not only noticeable, it can mean the difference between life and death (e.g., in controllers for medical equipment -- pausing for half a second while the GC runs can mean the laser burns off parts of the patient's body that it shouldn't).

But then again, considering the bulk of all software being written today, how much code is actually mission-critical real-time apps or game engine cores? I suspect real-time apps are <5% of all software, and while games are a rapidly growing market, I daresay less than 30-40% of game code actually needs to be pauseless (mainly just video-rendering code -- code that handles monster AI, for example, wouldn't fail horribly if it had to take a few extra frames to decide what to do next -- in fact, it may even be more realistic that way). Which, in my estimation, probably doesn't account for more than 10% of all software out there. The bulk of software being written today doesn't really need to be GC-less.


T

-- 
The richest man is not he who has the most, but he who needs the least.
January 08, 2013
On 08.01.2013 16:25, H. S. Teoh wrote:
> On Tue, Jan 08, 2013 at 10:29:26AM +0100, Paulo Pinto wrote:
>> On Monday, 7 January 2013 at 23:13:13 UTC, H. S. Teoh wrote:
>>> ...
>>>
>>> Crippling the language to cater to the 10% crowd who want to squeeze
>>> every last drop of performance from the hardware is the wrong
>>> approach IMO.
> [...]
>> Agreed.
>>
>> Having used GC languages for the last decade, I think the cases
>> where manual memory management is really required are very few.
>>
>> Even if one is forced to do manual memory management instead of using
>> the GC, it is still better to have the GC around than to do everything
>> manually.
>
> Yes, hence my idea of splitting up the performance-critical core of a
> game engine vs. the higher-level application stuff (like scripting,
> etc.) that isn't as performance-critical. The latter would be greatly
> helped by a GC -- it makes the scripting people's job easier, whereas
> writing GC-less code demands a certain level of rigor and requires
> more effort and care than is necessary for the most part.
>
>
>> But this is based on my experience doing business applications,
>> desktop and server side or services/daemons.
> [...]
>
> Well, business applications and server-side stuff (I assume it's
> web-based stuff) are exactly the kind of applications that benefit the
> most from a GC. In my mind, they are just modern incarnations of batch
> processing applications, where instant response isn't critical, and so
> the occasional GC pause is acceptable and, indeed, mostly unnoticeable.
>
> Game engines, OTOH, are a step away from hard real-time applications,
> where pause-the-world GCs are unacceptable. While it isn't fatal for a
> game engine to pause every now and then, it is very noticeable, and
> detrimental to the players' experience, so game devs generally shy away
> from anything that needs to pause the world. For real-time apps, though,
> it's not only noticeable, it can mean the difference between life and
> death (e.g., in controllers for medical equipment -- pausing for half
> a second while the GC runs can mean the laser burns off parts of the
> patient's body that it shouldn't).
>
> But then again, considering the bulk of all software being written
> today, how much code is actually mission-critical real-time apps or game
> engine cores? I suspect real-time apps are <5% of all software, and
> while games are a rapidly growing market, I daresay less than 30-40% of
> game code actually needs to be pauseless (mainly just video-rendering
> code -- code that handles monster AI, for example, wouldn't fail
> horribly if it had to take a few extra frames to decide what to do next
> -- in fact, it may even be more realistic that way). Which, in my
> estimation, probably doesn't account for more than 10% of all software
> out there. The bulk of software being written today doesn't really need to
> be GC-less.
>
>
> T
>

So how much experience do you have with game engine programming to make such statements?

Kind Regards
Benjamin Thaut
January 08, 2013
On Tue, Jan 08, 2013 at 04:31:45PM +0100, Benjamin Thaut wrote:
> On 08.01.2013 16:25, H. S. Teoh wrote:
[...]
> >Game engines, OTOH, are a step away from hard real-time applications, where pause-the-world GCs are unacceptable. While it isn't fatal for a game engine to pause every now and then, it is very noticeable, and detrimental to the players' experience, so game devs generally shy away from anything that needs to pause the world. For real-time apps, though, it's not only noticeable, it can mean the difference between life and death (e.g., in controllers for medical equipment -- pausing for half a second while the GC runs can mean the laser burns off parts of the patient's body that it shouldn't).
> >
> >But then again, considering the bulk of all software being written today, how much code is actually mission-critical real-time apps or game engine cores? I suspect real-time apps are <5% of all software, and while games are a rapidly growing market, I daresay less than 30-40% of game code actually needs to be pauseless (mainly just video-rendering code -- code that handles monster AI, for example, wouldn't fail horribly if it had to take a few extra frames to decide what to do next -- in fact, it may even be more realistic that way). Which, in my estimation, probably doesn't account for more than 10% of all software out there. The bulk of software being written today doesn't really need to be GC-less.
> >
> >
> >T
> >
> 
> So how much experience do you have with game engine programming to make such statements?
[...]

Not much, I'll admit. So maybe I'm just totally off here. But the last two sentences weren't specific to game code; I was just making a statement about software in general. (It would be a gross misrepresentation to claim that only 10% of a game is performance critical!)


T

-- 
Many open minds should be closed for repairs. -- K5 user
January 08, 2013
On 08.01.2013 16:46, H. S. Teoh wrote:
>> So how much experience do you have with game engine programming to
>> make such statements?
> [...]
>
> Not much, I'll admit. So maybe I'm just totally off here. But the last
> two sentences weren't specific to game code; I was just making a
> statement about software in general. (It would be a gross
> misrepresentation to claim that only 10% of a game is performance
> critical!)
>
>
> T
>

So, to give a little background about me: I'm currently doing my master's degree in informatics, focused on media-related programming (e.g. games, applications with other visual output, mobile apps, etc.).

Besides my studies, I work at Havok, the biggest middleware company in the gaming industry; I've been working there for about a year. I also have some contacts with people working at Crytek.

My impression so far: no one writing a triple-A game title or engine is even remotely interested in using a GC. Game engine programmers will do almost anything to get better performance on a given platform; really elaborate work is done just to get 1% more performance. Because of that, a GC is the very first thing every serious game engine programmer will kick out. You have to keep in mind that most games run at 30 FPS. That means you only have 33 ms to do everything: rendering, simulating physics, doing the game logic, handling network input, playing sounds, streaming data, and so on.
Some games even aim for 60 FPS, which makes it even harder, as you only have 16 ms to compute everything. Everything is performance-critical if you try to achieve that.
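
To make the budget concrete, here is a minimal sketch of the kind of frame-time accounting this implies (the subsystem functions and the 60 FPS budget are placeholders of mine, not code from any real engine):

    import core.time : MonoTime, msecs;

    enum frameBudget = 16.msecs;  // 60 FPS target: ~16 ms per frame

    // Placeholder subsystems -- stand-ins, not real engine code.
    void simulatePhysics() {}
    void runGameLogic() {}
    void render() {}

    void frame()
    {
        auto start = MonoTime.currTime;

        simulatePhysics();
        runGameLogic();
        render();

        auto elapsed = MonoTime.currTime - start;
        if (elapsed > frameBudget)
        {
            // Deadline missed: in a shipping engine this is a dropped
            // frame. A single unpredictable GC pause anywhere above
            // can blow the entire budget on its own.
        }
    }

    void main() { frame(); }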

I also know that Crytek used Lua for game scripting in Crysis 1. It was one of the reasons they never managed to get it onto the consoles (PS3, Xbox 360). In Crysis 2 they removed all the Lua game logic and rewrote everything in C++ to get better performance.

Doing pooling with a GC enabled still wastes a lot of time, because when pooling is used almost all data survives a collection anyway (most of it lives in pools). So when the GC runs, most of the work it does is wasted: it's scanning instances that are going to survive regardless. Pooling is just another form of manual memory management, and I don't find it a valid argument for using a GC.
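
For illustration, here is a minimal sketch of the kind of pool I mean (made-up names, nothing to do with Havok code) -- note that everything in it stays reachable, so a collector scanning it accomplishes nothing:

    // A trivial fixed-capacity pool: slots are recycled, never freed.
    // Every slot stays reachable for the pool's whole lifetime, so any
    // GC scan over it is pure overhead -- it all "survives" anyway.
    struct Pool(T, size_t capacity)
    {
        private T[capacity] slots;
        private size_t used;

        T* acquire()
        {
            assert(used < capacity, "pool exhausted");
            return &slots[used++];
        }

        void releaseAll() { used = 0; }  // recycle, e.g. at frame end
    }

    void main()
    {
        Pool!(float, 1024) particles;
        auto p = particles.acquire();
        *p = 3.14f;
        particles.releaseAll();
    }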

Also, my own little test case (a game I wrote for university) showed a 300% improvement from not using a GC. When I started writing the game, I was convinced that one could make a game work with a GC at only a small performance cost (5-10%). I heavily optimized the game using some background knowledge about how the GC works, and I even did some manual memory management for memory blocks that were guaranteed not to contain any pointers to GC data. Despite all this, I got a 300% performance improvement after switching to pure manual memory management and removing the GC from druntime.
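
The basic pattern I mean by pure manual memory management looks roughly like this (Particle is a made-up example type, not from my game):

    import core.stdc.stdlib : malloc, free;
    import std.conv : emplace;

    // Contains no pointers to GC data, so the block never needs to be
    // registered with the collector (no GC.addRange necessary).
    struct Particle { float x = 0, y = 0, vx = 0, vy = 0; }

    Particle* createParticle()
    {
        auto p = cast(Particle*) malloc(Particle.sizeof);
        assert(p !is null, "out of memory");
        return emplace(p);  // construct in place; the GC heap is never touched
    }

    void destroyParticle(Particle* p)
    {
        free(p);  // deterministic release, no collection pause
    }

    void main()
    {
        auto p = createParticle();
        p.vx = 1.5f;
        destroyParticle(p);
    }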

If D wants to get into the gaming space, there has to be a GC-free option. Otherwise D will not even be considered when programming languages are evaluated.

Kind Regards
Benjamin Thaut

January 08, 2013
On Tuesday, 8 January 2013 at 15:27:21 UTC, H. S. Teoh wrote:
> But then again, considering the bulk of all software being written
> today, how much code is actually mission-critical real-time apps or game
> engine cores?

You also need to consider the market for D. Performance is one of D's key selling points. If it had the performance of Python then D would be a much less interesting language, and I honestly doubt anyone would even look at it.

Whether or not the bulk of software written is critically real-time is irrelevant. The question is whether the bulk of software written *in D* is critically real-time. I don't know what the percentage is, but I'd assume it's much higher than for software in general.
January 08, 2013
On 01/08/2013 03:43 PM, ixid wrote:
> Just speaking as a bystander but I believe it is becoming apparent that a good
> guide to using D without the GC is required.

I'd second that.  I've tried on a couple of occasions to use D with a minimal-to-no GC approach (e.g. using std.container.Array in place of built-in arrays, etc.) and ran into difficulties.  It would be very useful to have a carefully written guide or tutorial on GC-less D.

January 08, 2013
On Monday, January 07, 2013 23:26:02 Rob T wrote:
> Is this a hard fact, or can there be a way to make it work? For example what about the custom allocator idea?
> 
> From a marketing POV, if the language can be made 100% free of the GC it would at least not be a deterrent to those who cannot accept having to use one. From a technical POV, there are definitely many situations where not using a GC is desirable.

It's a hard fact. Some features (e.g. appending to an array) require the GC and will always require the GC. There may be features which currently require the GC but shouldn't necessarily require it (e.g. AAs may fall in that camp), but some features absolutely require it, and there's no way around that. You can limit your use of the GC or outright not use it at all, but it comes with the cost of not being able to use certain features (mostly with regards to arrays).
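
For instance (a quick sketch, not an exhaustive list), all of these go through the GC runtime:

    void main()
    {
        int[] a;
        a ~= 1;            // appending allocates on the GC heap
        a.length = 100;    // so does growing an array's length
        auto b = a ~ [2];  // and concatenation

        // By contrast, slicing and indexing allocate nothing, so they
        // work fine without a collector:
        auto s = a[0 .. 10];
        s[0] = 7;
    }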

- Jonathan M Davis
January 08, 2013
On 01/08/2013 08:09 PM, Jonathan M Davis wrote:
> It's a hard fact. Some features (e.g. appending to an array) require the GC
> and will always require the GC. There may be features which currently require
> the GC but shouldn't necessarily require it (e.g. AAs may fall in that camp),
> but some features absolutely require it, and there's no way around that.

... but there is also std.container.Array which, if I understand right, does its own memory management and does not require the GC, no?

Which leads to the question, to what extent is it possible to use built-in arrays and std.container.Array interchangeably?  What are the things you can't do with a std.container.Array that you can with a built-in one?
January 08, 2013
On 2013-01-08 17:12, Benjamin Thaut wrote:

> So, to give a little background about me: I'm currently doing my
> master's degree in informatics, focused on media-related programming
> (e.g. games, applications with other visual output, mobile apps, etc.).
>
> Besides my studies, I work at Havok, the biggest middleware company in
> the gaming industry; I've been working there for about a year. I also
> have some contacts with people working at Crytek.

Impressive.

> If D wants to get into the gaming space, there has to be a GC-free
> option. Otherwise D will not even be considered when programming
> languages are evaluated.

It seems you have already made great progress in this area, with your fork of druntime and Phobos.

-- 
/Jacob Carlborg
January 08, 2013
On Tuesday, January 08, 2013 20:32:29 Joseph Rushton Wakeling wrote:
> On 01/08/2013 08:09 PM, Jonathan M Davis wrote:
> > It's a hard fact. Some features (e.g. appending to an array) require the
> > GC
> > and will always require the GC. There may be features which currently
> > require the GC but shouldn't necessarily require it (e.g. AAs may fall in
> > that camp), but some features absolutely require it, and there's no way
> > around that.
> ... but there is also std.container.Array which, if I understand right, does its own memory management and does not require the GC, no?
> 
> Which leads to the question, to what extent is it possible to use built-in arrays and std.container.Array interchangeably? What are the things you can't do with a std.container.Array that you can with a built-in one?

std.container.Array and built-in arrays are _very_ different. Array is a container, not a range. You can slice it to get a range and operate on that, but it's not a range itself. On the other hand, built-in arrays aren't true containers. They don't own or manage their own memory in any way, shape, or form, and they're ranges. The fact that an array _is_ a slice has a _huge_ impact on arrays, and that's not the case with Array. And of course, the APIs for the two are quite different. They're not really interchangeable at all.

True, you can use Array in a lot of places that you can use built-in arrays, but they're fundamentally different, and one is definitely _not_ a drop-in replacement for the other.
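
A small sketch of the difference (illustrative only):

    import std.container : Array;

    void main()
    {
        int[] builtin = [1, 2, 3, 4];
        auto slice = builtin[1 .. 3];  // a built-in array *is* a slice
        slice[0] = 99;                 // and it aliases the original memory
        assert(builtin[1] == 99);

        auto arr = Array!int(1, 2, 3, 4);  // a container owning its memory
        auto r = arr[];                    // slice it to get a range
        r.front = 42;
        assert(arr[0] == 42);
        // arr itself is not a range; algorithms take arr[], not arr.
    }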

- Jonathan M Davis