On 11 December 2011 20:34, Paulo Pinto <pjmlp@progtools.org> wrote:
On 11.12.2011 19:18, Manu wrote:
On 11 December 2011 15:15, maarten van damme <maartenvd1994@gmail.com> wrote:


   2011/12/11 Paulo Pinto <pjmlp@progtools.org>


        On 10.12.2011 21:35, Andrei Alexandrescu wrote:

           On 12/10/11 2:22 PM, maarten van damme wrote:

                Just for fun I wanted to create a program as small as
                possible, compiled without the garbage
                collector/Phobos/..., and it turned out that compiling
                without the garbage collector is pretty much impossible
                (memory leaks all over the place in druntime). Dynamic
                linking for the D standard library would be a great new
                option for a dmd release in the future :).


            Using D without a GC is an interesting direction, and
            dynamic linking should be available relatively soon.


           Andrei


        As a long time believer in systems programming languages with
        GC support (Modula-3, Oberon, Sing#, ...), I think allowing
        this in D is the wrong direction.

        Sure, providing APIs to control GC behavior makes sense, but
        not turning it off; we already have enough languages to do
        systems programming without a GC.


    I was only trying it "for the fun of it", not to be used seriously.
    D should always have its GC support built in and have some
    functions to control its behaviour (core.memory). But I think that
    D, being a systems programming language, should also be usable
    without the GC. I don't mean Phobos should be written without a GC
    in mind, but druntime should be compilable with something like a
    -nogc flag that makes it usable without the GC.

    There are a lot of users out there who think that a GC produces
    terribly slow programs and big hangs while collecting (thank Java
    for that; by now the Java GC has been improved and is extremely
    good, but the reputation stays :p).
    Letting them know that D can be run without a GC can be a good
    selling point. If they don't like the GC, they can turn it off.
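
(For reference, druntime's core.memory already exposes this much
control today; GC.disable, GC.enable, GC.collect and GC.minimize are
real APIs, while the frame loop below is just a hypothetical
latency-sensitive section:)

```d
import core.memory;

void main()
{
    // Suspend automatic collections for a latency-sensitive section.
    GC.disable();
    scope(exit) GC.enable();  // re-enabled however we leave this scope

    foreach (frame; 0 .. 100)
    {
        // ... allocation-heavy but pause-free work per frame ...
    }

    // Collect explicitly at a point where a pause is acceptable
    // (an explicit GC.collect() runs even while automatic collection
    // is disabled -- disable() only suppresses allocation-triggered
    // collections).
    GC.collect();
    GC.minimize();  // return freed pages to the OS where possible
}
```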


That's got nothing to do with it. People who seriously NEED to be able
to use the language without the GC enabled are probably working on
small embedded systems with extremely limited resources. It's also
possible that various resource types need to be allocated/located in
different places.
Also, in many cases you need to be able to have confidence in strictly
deterministic allocation patterns. You can't have that with a GC
enabled.
I'm all for having a GC in D, obviously, but I certainly couldn't
consider the language for universal adoption in many of my projects
without the option to control/disable it at times.
If I can't write some small programs with the GC completely disabled,
then I basically can't work on microprocessors. It's fair to give up
the standard library when working in this environment, but druntime,
the fundamental library, probably still needs to work. In fact, I'd
personally like it if it were designed in such a way that it never
used the GC under any circumstances. No library FORCED on me should
restrict my usage of the language in such a way.

In my experience, programming embedded systems in highly constrained
environments usually means assembly, or at most a C compiler with lots
of compiler-specific extensions for the target environment.

I fail to see how D without a GC could be a better tool in such
environments.

The best current example I can think of is the PS3: there are two
separate architectures in use in that machine, PPC and SPU. The PPC
has two separate heaps, and each SPU has its own 256k micro-heap.
The PPC side would probably use the GC in the main heap and manually
manage resources in the secondary video heap. Each SPU, however, is a
self-contained processor with 256k of memory.
Not only is there no room to waste, the PROGRAM needs to fit in there
too. Interaction between the main program and the SPU micro-programs
is FAR more painless if they share the same language. With the source
code shared, it's very easy to copy memory between the processors,
which directly share the structure definitions.
Obviously, any code running on the SPU may not use the GC... period.
The program code itself must also be as small as possible, with no
space to wastefully link pointless libraries.

It's hard to say a PS3 isn't a fair target platform for a language like D. It's not going away either.
This is just one example; almost every games console works like this:
 * PS2 had 4 coprocessors, each with its own micro-memory, one of
   which would certainly share source code with the main app (even
   back then, this approach was the norm).
 * PSP only has 32mb of RAM; explicit management is mandatory on
   systems with such small memory.
 * Wii has an ARM coprocessor with a tiny memory bank, and it also has
   bugger-all main memory (24mb), so you probably want to manage
   memory explicitly a lot more.
 * Nintendo DS has a secondary small coprocessor.
 * Larrabee, CELL, or any architecture like them will require running
   code with explicit management of micro-heaps.

Signal processors, DSPs, etc. are usually coprocessors. These aren't
tiny slave chips with micro-programs written in assembly anymore; they
are powerful, sophisticated coprocessors with full floating-point
support (usually SIMD arithmetic, PLEASE PLEASE ADD A 128bit SIMD
TYPE). They would benefit from D and its libraries, but they still
have bugger-all memory and require strict control over it.

It's nice to write the master program and the coprocessor programs in
the same language, at the very least for the ability to directly share
data structs.
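
(A hypothetical sketch of what that sharing buys you: if the host
program and the coprocessor micro-program both compile the same struct
definition, a raw memory transfer between the two heaps stays
layout-compatible. The struct, its fields, and upload() are all
invented for illustration; memcpy is the real C binding from
core.stdc.string:)

```d
import core.stdc.string : memcpy;

// Compiled into BOTH the host program and the micro-program, so the
// two sides agree on size, alignment and field offsets -- no
// serialisation layer, no duplicated C headers to keep in sync.
align(16) struct ParticleBatch
{
    float[4][64] positions;  // SIMD-friendly: x, y, z, w per particle
    uint count;
}

// Host side: push a batch into a coprocessor's local store
// (localStore stands in for a DMA target address).
void upload(ref ParticleBatch batch, void* localStore)
{
    memcpy(localStore, &batch, ParticleBatch.sizeof);
}
```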

I hope I have made an indisputable case for this. It's come up here a
lot of times, and it's pretty annoying when everyone suggests the
reasons for wanting it are nothing more than maximising performance.
I'd suggest that in 95% of cases, performance has nothing to do with
it.