April 07, 2013
On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
> Am 08.04.2013 00:27, schrieb Timon Gehr:
>> Every time a variable is reassigned, its old value is destroyed.
>
> I do have a functional and logic programming background and still fail to see how that is manual memory management.

Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.
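
A small D sketch of that distinction (illustrative only):

void example()
{
  // Reassignment reuses the variable's storage: the old value is
  // overwritten in place, with no new allocation.
  int counter = 1;
  counter = 2;

  // The functional/immutable alternative builds a fresh value instead:
  // each "update" is a new GC allocation, and the old value becomes
  // garbage that something (here, the GC) must eventually reclaim.
  string s = "ab";
  s = s ~ "c";
}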
April 08, 2013
On 6 April 2013 19:51, Peter Alexander <peter.alexander.au@gmail.com> wrote:

> You can switch off the GC, but then things will leak as the core language, druntime, and phobos all use the GC in many cases.
>
> What I do is just avoid the functions that allocate, and rewrite the ones I need. I also use a modified druntime that prints callstacks when a GC allocation occurs, so I know if it happens by accident.
>

This needs to be a feature in the standard library that you can turn on... or a compiler option (version?) that will make it complain at compile time when you call functions that may produce hidden allocations.
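
A runtime approximation is possible in the meantime. A minimal sketch, assuming a druntime that exposes a per-thread allocation counter (newer druntimes have GC.allocatedInCurrentThread; treat its availability as an assumption):

import core.memory : GC;

// Runs the given block and asserts that no GC allocation happened inside it.
void assertNoAlloc(scope void delegate() block)
{
  immutable before = GC.allocatedInCurrentThread();
  block();
  assert(GC.allocatedInCurrentThread() == before,
         "hidden GC allocation detected");
}

// Usage: assertNoAlloc({ criticalWork(); });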


April 08, 2013
> That's the critical missing piece of the puzzle. In effect we need to be able to use a subset of D that is 100% GC-free.


That's it actually - spot on.
If only we could write 100% GC-free D code... that would be it.
April 08, 2013
On 7 April 2013 20:59, Paulo Pinto <pjmlp@progtools.org> wrote:

> I am not giving up speed. It just happens that I have been coding since 1986 and I am a polyglot programmer who started doing systems programming in the Pascal family of languages before moving into C and C++ land.
>
> Except for some cases, it does not matter whether you get an answer in 1s or 2ms; however, most single-language C and C++ developers care about the 2ms case even before starting to code, which is what I don't approve of.
>

Bear in mind, most remaining C/C++ programmers are realtime programmers,
and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
realtime software (a 16ms frame at 60Hz).
If I chose not to care about 2ms only 8 times, I'd have no time left. I
would cut off my left nut for 2ms most working days!
I typically measure execution times in 10s of microseconds; if something
measures in milliseconds, it's a catastrophe that needs to be urgently
addressed... and you're correct: as a C/C++ programmer, I DO design with
consideration for sub-ms execution times before I write a single line of
code.
Consequently, I have seen the GC burn well into the ms on occasion, and as
such, it is completely unacceptable in realtime software.

The GC really needs to be addressed in terms of performance; it can't stop
the world for milliseconds at a time. I'd be happy to give it ~150us every
16ms, but NOT 2ms every 200ms.
Alternatively, some urgency needs to be invested in tools to help
programmers track accidental GC allocations.

I cope with D in realtime software by carefully avoiding excess GC usage,
which, sadly, means basically avoiding the standard library at all costs.
People use concatenations all through the std lib, in the strangest places;
I just can't trust it at all anymore.
I found a weird one just a couple of days ago in the function
toUpperInPlace() (!! it allocates !!), but only when it encounters a
multi-byte UTF-8 sequence, which means I didn't even notice while working
in my own language! >_<
Imagine it: I would have gotten a bug like "game runs slow in Russian", and
I would have been SOOOO "what the ****!?" while crunching to ship the
product...

That isn't to say I don't appreciate the idea of the GC; I would, if it were efficient enough for me to use. I do use it, but very carefully. If there are only a few GC allocations it's okay at the moment, but I almost always run into trouble when I call virtually any std library function within loops. That's the critical danger in my experience.

Walter's claim is that D's inefficient GC is mitigated by the fact that D
produces less garbage than other languages, and this is true to an extent.
But given that is the case, to be reliable, it is of critical importance
that:
a) the programmer is aware of every allocation they are making; they can't
be hidden inside benign-looking library calls like toUpperInPlace.
b) all allocations should be deliberate.
c) helpful messages/debugging features need to be available to track where
allocations are coming from; standardised statistical output would be most
helpful.
d) alternatives need to be available for the functions that allocate by
nature, or an option for user-supplied allocators, like STL, so one can
allocate from a pool instead (see the sketch after this list).
e) D is not very good at reducing localised allocations to the stack; this
needs some attention. (array initialisation is particularly dangerous)
f) the GC could do with budgeting controls. I'd like to assign it 150us per
16ms, and it would defer excess workload to later frames.
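
For (d), a minimal sketch of the pool idea; all names here are invented for
illustration, not an existing API:

import core.stdc.stdlib : free, malloc;

// A trivial bump allocator over one preallocated buffer: an allocation is
// just a pointer increment, and a frame-based program resets the whole
// pool once per frame instead of freeing individual blocks.
struct FramePool
{
  void* base;
  size_t capacity;
  size_t used;

  static FramePool create(size_t bytes)
  {
    return FramePool(malloc(bytes), bytes, 0);
  }

  // Hands out an aligned slice; returns null when the pool is exhausted.
  void[] alloc(size_t bytes)
  {
    enum size_t alignment = 16;
    immutable start = (used + alignment - 1) & ~(alignment - 1);
    if (start + bytes > capacity)
      return null;
    used = start + bytes;
    return base[start .. start + bytes];
  }

  void reset() { used = 0; }  // e.g. called once per frame
  void destroy() { free(base); }
}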

> Of course I think given time D compilers will be able to achieve C++-like
> performance, even with GC or who knows, a reference counted version.
>
> Nowadays the only place I do manual memory management is when writing Assembly code.
>

Apparently you don't write realtime software. I get so frustrated on this
forum by how few people care about realtime software, or any architecture
other than x86 (no offense to you personally, it's a general observation).
Have you ever noticed how smooth and slick the iPhone UI feels? It runs at
60Hz and doesn't miss a beat. It wouldn't work in D.
Video games can't stutter, audio/video processing can't stutter. These are
all important tasks in modern computing.
The vast majority of personal computers in the world today are in people's
pockets, running relatively weak ARM processors, and virtually every user of
these devices appreciates the smooth operation of their interfaces.
People tend to complain when their device is locking up or stuttering.
These small, weak devices are increasingly becoming responsible for _most_
personal computing tasks these days, and apart from the web, most personal
computing tasks are realtime in some way (music/video, Skype, etc).
It's not a small industry. It is, perhaps, the largest computing industry,
and sadly D is not yet generally deployable for the average engineer... only
for the D enthusiast prepared to take the time to hold its hand until this
important issue is addressed.


April 08, 2013
"Minas Mina" <minas_mina1990@hotmail.co.uk> Wrote in message:
> I agree that language support for disabling the GC should exist. D, as I understand it, is targeting C++ programmers (primarily). Those people are concerned about performance. If D, as a systems programming language, can't deliver that, they aren't going to use it just because it has better templates (to name something).
> 


Very well put.

I want to be able to write programs as fast as C++ ones... in D.
D is it for me... I just need to not be hampered by a GC, particularly when its implementation is somewhat lagging.
April 08, 2013
On 8 April 2013 04:41, Rob T <alanb@ucora.com> wrote:

> Ideally, I think what we need is 1) a better GC, since the pros of using one are very significant, and 2) the ability to selectively mark sections of code as "off limits" to all GC-dependent code. What I mean by this is that the compiler will refuse to compile any code that makes use of automated memory allocation in a @noheap-marked section of code.
>

I wonder if UDAs could be leveraged to implement this in a library?
UDAs cannot alter the type, so I guess it's impossible to implement
something like @noalloc that behaves like nothrow in a library...
I wonder what it would take; it would be generally interesting to move some
of the built-in attributes to UDAs if the system is rich enough to express
it.

As a side thought, though: the information about whether a function can
allocate could be known implicitly by the compiler if it chose to track
that detail. I wonder if functions could gain a constant property so you
can assert on that detail in your own code? e.g.:

void myFunction()
{
  // does stuff...
}

void caller()
{
  // ...code that I expect not to allocate...

  // hypothetical property, tracked by the compiler:
  static assert(!myFunction.canAllocate);

  myFunction();
}

This way, I know for sure my code is good, and if I modify the body of myFunction at some later time (or one of its sub-calls is modified), for instance to make an allocating library call, then I'll know about it the moment I make the change.

Then again, I wonder if a formal attribute @noalloc would be useful in the same way as nothrow? The std library would be enriched with that information... issues like the one where toUpperInPlace() was allocating (which is clearly a bug; it's not 'in place' if it's allocating) should ideally have been caught at the time of authoring the function. Eliminating common sources of programmer error, and thus reducing bug counts, is always an interesting prospect... and it would offer a major tool towards this thread's topic :)
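
For illustration, a hypothetical use of such an attribute; the enforcement does not exist in D and is sketched purely by analogy with nothrow (here the UDA is declared in a library, carrying intent but no checking):

// A library UDA carrying no enforcement, purely to mark intent:
enum noalloc;

@noalloc void mixSamples(float[] dst, const(float)[] src)
{
  foreach (i, s; src)
    dst[i] += s;  // fine: writes into caller-provided memory

  // dst ~= 0.0f;  // under a compiler-enforced @noalloc this would be
                   // a compile-time error
}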


April 08, 2013
On Sunday, 7 April 2013 at 22:59:37 UTC, Peter Alexander wrote:
> On Sunday, 7 April 2013 at 22:33:04 UTC, Paulo Pinto wrote:
>> Am 08.04.2013 00:27, schrieb Timon Gehr:
>>> Every time a variable is reassigned, its old value is destroyed.
>>
>> I do have a functional and logic programming background and still fail to see how that is manual memory management.
>
> Mutable state is essentially an optimisation that reuses the same memory for a new value (as opposed to heap allocating the new value). In that sense, mutable state is manual memory management because you have to manage what has access to that memory.

If you as a developer don't explicitly call any language API to acquire/release resources, and it is actually done on your behalf by the runtime, then it is not manual memory management.
April 08, 2013
On Monday, 8 April 2013 at 03:13:00 UTC, Manu wrote:
> On 7 April 2013 20:59, Paulo Pinto <pjmlp@progtools.org> wrote:
>
>> I am not giving up speed. It just happens that I have been coding since
>> 1986 and I am a polyglot programmer who started doing systems programming
>> in the Pascal family of languages before moving into C and C++ land.
>>
>> Except for some cases, it does not matter whether you get an answer in 1s
>> or 2ms; however, most single-language C and C++ developers care about the
>> 2ms case even before starting to code, which is what I don't approve of.
>>
>
> Bear in mind, most remaining C/C++ programmers are realtime programmers,
> and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
> realtime software (a 16ms frame at 60Hz).
> If I chose not to care about 2ms only 8 times, I'd have no time left. I
> would cut off my left nut for 2ms most working days!
> I typically measure execution times in 10s of microseconds; if something
> measures in milliseconds, it's a catastrophe that needs to be urgently
> addressed... and you're correct: as a C/C++ programmer, I DO design with
> consideration for sub-ms execution times before I write a single line of
> code.
> Consequently, I have seen the GC burn well into the ms on occasion, and as
> such, it is completely unacceptable in realtime software.


I do understand that. The thing is, I have been coding since 1986, and I remember people complaining that C and Turbo Pascal were too slow: let's code everything in assembly. Then C became alright, but C++ and Ada were too slow; god forbid you call virtual methods or, in C++'s case, do any operator calls.

Afterwards the same discussion came around with the JVM and .NET environments, which, while making GC widespread, also had the sad side-effect of making younger generations think that safe languages require a VM, when that is not true.

Nowadays template-based code beats C, systems programming is moving to C++ in mainstream OSes, leaving C behind, while some security-conscious areas are adopting Ada and SPARK.

So when someone makes claims about the speed benefits that C and C++ currently have, I smile, as I remember having this same kind of discussion back when C played the role of the too-slow language.


>
> Walter's claim is that D's inefficient GC is mitigated by the fact that D
> produces less garbage than other languages, and this is true to an extent.
> But given that is the case, to be reliable, it is of critical importance
> that:
> a) the programmer is aware of every allocation they are making, they can't
> be hidden inside benign looking library calls like toUpperInPlace.
> b) all allocations should be deliberate.
> c) helpful messages/debugging features need to be available to track where
> allocations are coming from. standardised statistical output would be most
> helpful.
> d) alternatives need to be available for the functions that allocate by
> nature, or an option for user-supplied allocators, like STL, so one can
> allocate from a pool instead.
> e) D is not very good at reducing localised allocations to the stack, this
> needs some attention. (array initialisation is particularly dangerous)
> f) the GC could do with budgeting controls. I'd like to assign it 150us per
> 16ms, and it would defer excess workload to later frames.


No doubt D's GC needs to be improved, but I doubt that making D a manually memory-managed language would improve the language's adoption, given that all new systems programming languages use either GC or reference counting as their default memory management.

What you need is a way to do controlled allocations for the few cases where there is no way around it, but this should be reserved for modules with system code, not scattered everywhere.

>
>> Of course I think given time D compilers will be able to achieve C++-like
>> performance, even with GC or who knows, a reference counted version.
>>
>> Nowadays the only place I do manual memory management is when writing
>> Assembly code.
>>
>
> Apparently you don't write realtime software. I get so frustrated on this
> forum by how few people care about realtime software, or any architecture
> other than x86 (no offense to you personally, it's a general observation).
> Have you ever noticed how smooth and slick the iPhone UI feels? It runs at
>> 60Hz and doesn't miss a beat. It wouldn't work in D.
> Video games can't stutter, audio/video processing can't stutter. ....

I am well aware of that, and actually I do follow the game industry quite closely, it being my second interest after systems/distributed computing. And I was an IGDA member for quite a few years.

However, I do see a lot of games being pushed out the door in Java and C#, with local optimizations done in C and C++.

Yeah, most of them are not AAA, but that does make them less enjoyable.

I also had the pleasure of using the Native Oberon and AOS operating systems back in the late 90's at university: desktop operating systems written in GC-enabled systems programming languages. Sure, you could do manual memory management, but only via the SYSTEM pseudo-module.

One of the applications was a video player; just the decoder was written in assembly.

http://ignorethecode.net/blog/2009/04/22/oberon/


In the end the question is: what would a D version with just manual memory management have as a compelling feature against C++1y and Ada, already-established languages with industry standards?

Then again my lack of experience in the embedded world invalidates what I think might be the right way.

--
Paulo
April 08, 2013
On Monday, 8 April 2013 at 06:35:27 UTC, Paulo Pinto wrote:
> [...]
> Yeah, most of them are not AAA, but that does make them less enjoyable.

Correction:

Yeah, most of them are not AAA, but that does not make them less
enjoyable.
April 08, 2013
On 2013-04-08 05:12, Manu wrote:

> Bear in mind, most remaining C/C++ programmers are realtime programmers,
> and that 2ms is 12.5% of the ENTIRE AMOUNT OF TIME that you have to run
> realtime software (a 16ms frame at 60Hz).
> If I chose not to care about 2ms only 8 times, I'd have no time left. I
> would cut off my left nut for 2ms most working days!
> I typically measure execution times in 10s of microseconds; if something
> measures in milliseconds, it's a catastrophe that needs to be urgently
> addressed... and you're correct: as a C/C++ programmer, I DO design with
> consideration for sub-ms execution times before I write a single line of
> code.
> Consequently, I have seen the GC burn well into the ms on occasion, and
> as such, it is completely unacceptable in realtime software.
>
> The GC really needs to be addressed in terms of performance; it can't
> stop the world for milliseconds at a time. I'd be happy to give it
> ~150us every 16ms, but NOT 2ms every 200ms.
> Alternatively, some urgency needs to be invested in tools to help
> programmers track accidental GC allocations.

An easy workaround is to remove the GC; then, wherever you use the GC, you'll get linker errors. Not pretty, but it could work.
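
A hedged sketch of a related trick: rather than relying on link errors, link your own stub for the GC's allocation entry point so any stray allocation fails loudly at run time. The symbol name below follows druntime's C-linkage GC interface, but treat the exact signature as an assumption:

// Linked in place of the runtime's GC implementation (assumed signature).
extern (C) void* gc_malloc(size_t size, uint bits = 0)
{
  assert(0, "unexpected GC allocation");
}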

> I cope with D in realtime software by carefully avoiding excess GC
> usage, which, sadly, means basically avoiding the standard library at
> all costs. People use concatenations all through the std lib, in the
> strangest places; I just can't trust it at all anymore.
> I found a weird one just a couple of days ago in the function
> toUpperInPlace() (!! it allocates !!), but only when it encounters a
> multi-byte UTF-8 sequence, which means I didn't even notice while
> working in my own language! >_<
> Imagine it: I would have gotten a bug like "game runs slow in Russian",
> and I would have been SOOOO "what the ****!?" while crunching to ship
> the product...

To address this particular case (without having looked at the code): the length of a Unicode string can change when converting between upper and lower case in some languages. With that in mind, it might not be a good idea to have an in-place version of toUpper/Lower at all.
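
A concrete instance, assuming Phobos' full Unicode case mapping: German 'ß' uppercases to "SS", so the code-point count grows.

import std.range : walkLength;
import std.stdio : writeln;
import std.uni : toUpper;

void main()
{
  string s = "straße";     // 6 code points; 'ß' is one of them
  string u = s.toUpper();  // "STRASSE": 'ß' maps to "SS"
  writeln(s.walkLength, " -> ", u.walkLength);  // prints: 6 -> 7
}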

> d) alternatives need to be available for the functions that allocate by
> nature, or an option for user-supplied allocators, like STL, so one can
> allocate from a pool instead.

Have you seen this? Links at the bottom:

http://3d.benjamin-thaut.de/?p=20

-- 
/Jacob Carlborg