November 22, 2006
Kyle Furlong wrote:

> If this is talking about my first post in this

I wasn't ranting at you in particular, no. :-) It's more of a general vibe around here sometimes.

> thread, thats not what I said. I merely said that
> trying to apply the conventional wisdom of C++
> to D is misguided.
> Is that incorrect?

I can't remember the context of your original comment (and the web interface to this NG doesn't do threading), so I'm not sure what "conventional wisdom of C++" you were talking about. If it was some specific rote-learned rule like "every new must have a delete" then you're right, that's clearly daft. If it was a general statement then I disagree. They're two different languages, of course, but aimed at similar niches and subject to similar constraints.

The design of D was in large part driven by "conventional C++ wisdom", both in terms of better built-in support for the useful idioms that have evolved in C++ and of avoiding widely accepted what-the-hell-were-they-thinking howlers. (I don't know many if any C++ "fans", in the sense that D or Ruby has fans; it's usually more of a pragmatic "better the devil you know" attitude.)

Also, there's not yet any experience using D in big (mloc) software systems, or maintaining it over many many years, or guaranteeing ridiculous levels of uptime. Those raise issues - build scalability, dependency management, source and binary compatibility, heap fragmentation - that just don't come up in small exploratory projects. I fully agree with you that such experience doesn't port across directly, but as a source of flags for problems that *might* come up it's better than nothing.

cheers
Mike
November 22, 2006
John Reimer wrote:

> Huh?  I'm not following.  I said it's unfair that
> C++ users frequently see D as GC-only.  Your
> response seems to indicate that this is not unfair,
> but I can't determine your line of reasoning.

I may be being naive.

There's a difference between "a D program CAN ONLY allocate memory on the GC heap" and "a D program WILL allocate memory on the GC heap".

The first statement is plain wrong, and once you point out that malloc is still available there's not much to discuss, so I can't believe that this is what C++ users have a problem with.
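To be concrete, calling malloc from D looks like this (a minimal sketch; std.c.stdlib is the Phobos module of the day, spelled core.stdc.stdlib in later D):

```d
import std.c.stdlib;    // Phobos's C runtime bindings (core.stdc.stdlib in later D)

void main()
{
    // A buffer straight from the C heap; the collector never sees it.
    byte* p = cast(byte*) malloc(1024);
    assert(p !is null);
    p[0] = 42;
    assert(p[0] == 42);
    free(p);            // deterministic release, no GC involved
}
```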

The second statement is, strictly speaking, wrong too -- but only if you carefully avoid certain language features and don't use the standard library.

Hence, if you don't want GC (because of concerns about pausing, or working-set footprint, or whatever) then you're not using the language as it was intended to be used. GC and GC-less approaches are not on an equal footing.

> I'm sorry, Mike.  What post are you saying I'm
> applauding?  I can't see how relating my applauding
> to the conclusion in that sentence makes any
> sense.  Is there something implied or did you mean
> to point something out?

Steve Horne's post -
http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=44644
- which I thought you were agreeing with. If I misread or got muddled about
attribution, apologies.

My point was just this: regardless of whether GC-less D is a reasonable thing to want, if you take the attitude that it's not worth supporting then it's hard to see why a C++ user's perception of D as "GC whether you want it or not" is unfair.
November 22, 2006
On Wed, 22 Nov 2006 14:17:26 -0800, Mike Capp <mike.capp@gmail.com> wrote:

> John Reimer wrote:
>
>> Huh?  I'm not following.  I said it's unfair that
>> C++ users frequently see D as GC-only.  Your
>> response seems to indicate that this is not unfair,
>> but I can't determine your line of reasoning.
>
> I may be being naive.
>
> There's a difference between "a D program CAN ONLY allocate memory on the GC heap"
> and "a D program WILL allocate memory on the GC heap".
>
> The first statement is plain wrong, and once you point out that malloc is still
> available there's not much to discuss, so I can't believe that this is what C++
> users have a problem with.
>
> The second statement is, strictly speaking, wrong too -- but only if you carefully
> avoid certain language features and don't use the standard library.
>

Avoid new/delete, dynamic arrays, array slice operations, and array concatenation.  I think that's it... Further, new/delete can be reimplemented to provide custom allocators.  You can use stack-based variables if necessary (soon with the scope attribute, I hope).  For those special cases, knowing how to do this would be important anyway: careful programming knowledge and practice are a requirement regardless.  It is clearly possible in D.  You do not need to use the GC if you feel the situation warrants such avoidance.
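A sketch of the scope attribute John is anticipating (class and function names made up; in the DMD of the time the same effect was spelled with `auto`):

```d
int live = 0;   // counts outstanding Lock objects, for demonstration

class Lock
{
    this()  { ++live; }   // stand-in for acquiring a resource
    ~this() { --live; }   // ... and releasing it
}

void critical()
{
    scope l = new Lock(); // allocated on the stack, not the GC heap
    assert(live == 1);
    // ... work under the lock ...
}                         // dtor runs deterministically here; no collector involved

void main()
{
    critical();
    assert(live == 0);    // released without any GC run
}
```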

What I find strange is that some C++ users, who do not use D, complain about D in this area, then fall into a debate about C++ versus D memory management (when you mention the possibility of malloc or otherwise).  The argument doesn't end there: once you prove the point that D is flexible here, they digress into a discussion of how they can improve on C++'s default memory management by implementing a /custom/ solution in their own C++ programs.  How is that different from D?  I think D makes it even easier to do this.  These guys would likely be serious D wizards if they ever tried it out.

They do find much to discuss whatever point is made!  That's why I said that it's not all about D; it's more about being entrenched in something they are comfortable with.

> Hence, if you don't want GC (because of concerns about pausing, or working-set
> footprint, or whatever) then you're not using the language as it was intended to
> be used. GC and GC-less approaches are not on an equal footing.
>

D is used however someone wants to use it.  For D-based kernels, you must avoid using a GC and interface with a custom memory management system of some sort.  Does that mean kernels should not be programmed in D because they avoid using a GC (apparently an intended part of the D language)?  It's no problem avoiding the GC, so that tells me that's part of the intended workings of D as well.  I believe D was /intended/ to work in both situations (though it's not a language that was intended to work without a gc /all the time/).

It takes a different mindset to learn how to do this -- when and when not to use a gc, even in different parts of a program.  D developers working on 3D games/libraries seem to realize this (and have worked successfully with the gc).  Some C++ users seem to want to argue from their C++ experience only.  So what we have is two groups debating from different perspectives.  Neither side is necessarily wrong.  But the argument is rather lame.  And C++ users that don't use D don't have the perspective of a D user -- which, yes, makes their accusations unfair.

GC is indeed the preferred way to go. But as a systems language, D cannot promote that as the one and only way.  It wisely leaves room for other options.  I support that.  And D has so many powerful features that subtracting a precious few of them to accommodate non-gc based programming is hardly damaging to D's expressiveness.


>> I'm sorry, Mike.  What post are you saying I'm
>> applauding?  I can't see how relating my applauding
>> to the conclusion in that sentence makes any
>> sense.  Is there something implied or did you mean
>> to point something out?
>
> Steve Horne's post -
> http://www.digitalmars.com/pnews/read.php?server=news.digitalmars.com&group=digitalmars.D&artnum=44644
> - which I thought you were agreeing with. If I misread or got muddled about
> attribution, apologies.
>


I supported what I thought was his conclusion.  Maybe I was confused?  I supported his view that D is optimal because of its support for both gc-based and manual memory management at the same time, and that neither dimension should be forced over the other.  I agreed with that.


> My point was just this: regardless of whether GC-less D is a reasonable thing to
> want, if you take the attitude that it's not worth supporting then it's hard to
> see why a C++ user's perception of D as "GC whether you want it or not" is unfair.

It appears my "unfair" statement is slowly being dragged into broader realms.  I'm not so sure it's maintaining its original application anymore.  Whether a GC-less standard library was worth supporting was debated here; that was another discussion.  A standard non-gc based library is not likely to meet with massive approval from those already in the community who have experienced D for themselves, especially if it exists merely to attract the attention of a skeptical C++ crowd (although I admitted that it would be an interesting experiment, and Walter seems to have stated that it wasn't even necessary).  I doubt it would work anyway.

If it is "unfair" for C++ skeptics to be told that there isn't support for a non-gc library, given your expression of how they perceive the message ("GC way or the highway"), I'd have to disagree.  C++ skeptics are still operating from a different perspective -- one they are quite committed to.  If they are spreading misinformation about how D works because they are unwilling to test a different perspective, then it's kind of hard to feel sorry for them if they don't get what they want from a language they have most certainly decided they will never use anyway.  Naturally, the result for the D apologist is that he'll never convince the C++ skeptic. But as others have mentioned in other posts, that goal is futile anyway.  The positive side-effect of the debate does appear to be the education of others who might be interested in D -- it's an opportunity to intercept misinformation.

In summary, the GC is there and available for use if you want it. But you don't have to use it.  A good mix of GC and manual memory management is fully supported and recommended as the situation requires.  There is no reason to fear being stuck with the gc in D.

I do have my own misgivings about some things in D, but, as you probably guessed, this is not one of them.  And for the most part, D is a very enjoyable language to use. :)

-JJR
November 23, 2006
John Reimer wrote:

> Avoid new/delete, dynamic arrays, array slice
> operations, and array concatenation.  I think
> that's it...

Also associative arrays. I'm not convinced, though, that slices require GC. I'd expect GC-less slicing to be safe so long as the programmer ensured that the lifetime of the whole array exceeded that of the slice. In many cases that's trivial to do.
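A minimal sketch of the safe case:

```d
void main()
{
    int[8] buf;                  // fixed-size array on the stack; no allocation
    int[] window = buf[2 .. 6];  // a slice is just (pointer, length); still no allocation
    window[0] = 7;
    assert(buf[2] == 7);         // the slice aliases the stack array
    // Safe here because buf trivially outlives window. Returning window
    // from this function would be the unsafe case.
}
```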

I don't like the fact that the same piece of code that initializes a stack array in C will create a dynamic array in D. Porting accident waiting to happen.
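The hazard, roughly (using D's array-literal syntax as it later settled; D also accepts C-style array declarations, which is what makes the trap so quiet):

```d
// C:   int a[] = {1, 2, 3};   three ints on the stack, sized by the initializer
void main()
{
    int a[] = [1, 2, 3];  // same-looking declaration in D, but "int a[]" is a
                          // *dynamic* array, and the literal lands on the GC heap
    int[3] b = [1, 2, 3]; // this one really is a fixed-size (stack) array
    assert(a.length == 3);
    assert(b.length == 3);
}
```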

> What I find strange is that some C++ users [...]
> then digress into a discussion on how they can
> improve on C++ default memory management by
> implementing a /custom/ solution in their own C++
> programs.  How is this different than in D?

Dunno; I'd never argue otherwise. I've never found a need to implement a custom allocator in C++ (especially STL allocators, which turned out to be a complete waste of paper).

> D makes it even easier to do this.  These guys
> likely would be serious D wizards if they ever
> tried it out.

Yeah. Similarly, I think Walter's too pessimistic when he doubts that the "career" C++ people will ever switch.

C++ has been a peculiar language lately; for all practical intents and purposes it forked a few years back into two diverging dialects. One is fairly conservative and is used to write production code that needs to be maintained by people not named Andrei. The other is a cloud-cuckoo-land research language used to see what can be done with template metaprogramming.

The people drawn to the latter dialect, from Alexandrescu all the way back to Stepanov, enjoy pushing a language's features beyond what the original designer anticipated. Given D's different and richer feature set, I'd be amazed if they didn't have fun with it.

> It's no problem avoiding the GC,

No? Hypothetical: your boss dumps a million lines of D code in your lap and says, "Verify that this avoids the GC in all possible circumstances". What do you do? What do you grep for? What tests do you run?

That's not rhetorical; I'm not saying there isn't an answer. I just don't see what it is.

> It appears my "unfair" statement is slowly being
> dragged into broader realms. I'm not so sure it's
> maintaining its original application anymore.

Fair enough; I don't mean to drag you kicking and screaming out of context.

> A standard non-gc based library is likely not
> going to meet with massive approval from those
> already in the community

I should clarify. I'm not proposing that the entire standard library should have a GC-less implementation, just that it should be as useful as possible without introducing GC usage when the user is trying to avoid it.

Crude analogy: imagine everything in Phobos being wrapped in a version(GC) block. Phobos is normally built with -version=GC, so no change for GC users. GC-less users use Phobos built without GC, so none of the library is available, which is where they were anyway.

Now, a GC-less user feels the need for std.foo.bar(). They look at the source, and
find that std.foo.bar() won't ever use GC because all it does is add two ints, so
they take it out of the version(GC) block and can now use it. Or they write a less
efficient but GC-less implementation in a version(NOGC) block. Either way, they
get what they want without affecting GC users. It's incremental, and the work is
done by the people (if any) who care.
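Concretely, the scheme might look like this (function and buffer names are made up for illustration):

```d
version (GC)
{
    // The easy version: ~ concatenation allocates from the GC heap.
    char[] label(char[] name, char[] value)
    {
        return name ~ "=" ~ value;
    }
}
else
{
    // GC-less variant: the caller supplies the destination buffer.
    char[] label(char[] name, char[] value, char[] dest)
    {
        size_t n = name.length + 1 + value.length;
        assert(dest.length >= n);
        dest[0 .. name.length] = name;
        dest[name.length] = '=';
        dest[name.length + 1 .. n] = value;
        return dest[0 .. n];
    }
}

// Built with   dmd -version=GC app.d   -- GC users see no change.
// Built plain  dmd app.d               -- only the GC-less variant exists.
```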

cheers
Mike
November 23, 2006
Mike Capp wrote:
> John Reimer wrote:
> 
>> Avoid new/delete, dynamic arrays, array slice
>> operations, and array concatenation.  I think
>> that's it...
> 
> Also associative arrays. I'm not convinced, though, that slices require GC. I'd
> expect GC-less slicing to be safe so long as the programmer ensured that the
> lifetime of the whole array exceeded that of the slice. In many cases that's
> trivial to do.
> 
> I don't like the fact that the same piece of code that initializes a stack array
> in C will create a dynamic array in D. Porting accident waiting to happen.
> 
>> What I find strange is that some C++ users [...]
>> then digress into a discussion on how they can
>> improve on C++ default memory management by
> implementing a /custom/ solution in their own C++
>> programs.  How is this different than in D?
> 
> Dunno; I'd never argue otherwise. I've never found a need to implement a custom
> allocator in C++ (especially STL allocators, which turned out to be a complete
> waste of paper).

STL allocators can be useful for adapting the containers to work with shared memory.  And I created a debug allocator I use from time to time.  But the complexity allocators add to container implementation is significant, for arguably little return.

>> D makes it even easier to do this.  These guys
>> likely would be serious D wizards if they ever
>> tried it out.
> 
> Yeah. Similarly, I think Walter's too pessimistic when he doubts that the "career"
> C++ people will ever switch.

I suppose it depends on how you define "career."  If it's simply that a person uses the language because it is the best fit for their particular problem domain and they are skilled at using it, then I think such a user would consider D (I certainly have).  But if you define it as someone who is clinging to their (possibly limited) knowledge of C++ and is not interested in learning new skills, then probably not.  Frankly, I think the greatest obstacle D will have is gaining traction in existing projects.  With a decade or two of legacy C++ code to support, switching to D is simply not feasible.  Training is another issue.  A small development shop could switch languages relatively easily, but things are much more difficult if a build team, QA, various levels of programmers, etc, all need to learn a new language or tool set.  At that point the decision has more to do with short-term cost than anything.

> C++ has been a peculiar language lately; for all practical intents and purposes it
> forked a few years back into two diverging dialects. One is fairly conservative
> and is used to write production code that needs to be maintained by people not named
> Andrei. The other is a cloud-cuckoo-land research language used to see what can be
> done with template metaprogramming.

LOL.  Pretty much.  Professionally, it's uncommon that I'll meet C++ programmers that have much experience with STL containers, let alone knowledge of template metaprogramming.  Even techniques that I consider commonplace, like RAII, seem to be unknown to many/most C++ programmers.  I think the reality is that most firms still write C++ code as if it were 1996, not 2006.

>> It's no problem avoiding the GC,
> 
> No? Hypothetical: your boss dumps a million lines of D code in your lap and says,
> "Verify that this avoids the GC in all possible circumstances". What do you do?
> What do you grep for? What tests do you run?

I'd probably begin by hooking the GC collection routine and dumping data on what was being cleaned up non-deterministically.  This should at least point out specific types of objects which may need to be managed some other way.  But it still leaves bits of dynamic arrays discarded by resizes, AA segments, etc, to fall through the cracks.  A more complete solution would be to hook both allocation and collection and generate reports, or rely on a tool like Purify.  Probably pretty slow work in most cases.

> I should clarify. I'm not proposing that the entire standard library should have a
> GC-less implementation, just that it should be as useful as possible without
> introducing GC usage when the user is trying to avoid it.

This is really almost situational.  Personally, I try to balance elegance and intended usage with optional destination buffer parameters and such.  In the algorithm code I've written, for example, the only routines that allocate are pushHeap, unionOf (set union), and intersectionOf (set intersection).  It would be possible to allow for optional destination buffers for the set operations, but with optional comparison predicates already in place, I'm faced with deciding whether to put the buffer argument before or after the predicate... and neither is ideal.  I guess a bunch of overloads could solve the problem, but what a mess.


Sean
November 23, 2006
Mike Capp wrote:
> John Reimer wrote:
>>A standard non-gc based library is likely not
>>going to meet with massive approval from those
>>already in the community
> 
> I should clarify. I'm not proposing that the entire standard library should have a
> GC-less implementation, just that it should be as useful as possible without
> introducing GC usage when the user is trying to avoid it.
> 
> Crude analogy: imagine everything in Phobos being wrapped in a version(GC) block.
> Phobos is normally built with -version=GC, so no change for GC users. GC-less
> users use Phobos built without GC, so none of the library is available, which is
> where they were anyway.
> 
> Now, a GC-less user feels the need for std.foo.bar(). They look at the source, and
> find that std.foo.bar() won't ever use GC because all it does is add two ints, so
> they take it out of the version(GC) block and can now use it. Or they write a less
> efficient but GC-less implementation in a version(NOGC) block. Either way, they
> get what they want without affecting GC users. It's incremental, and the work is
> done by the people (if any) who care.

If DMD were an old-fashioned shrink-wrapped product, this would be solved by having a little note in the library documentation next to each Phobos function, stating whether GC {will|may|won't} get used.


(The last statement is not strictly correct, it should rather be something like "allocate from the heap", "GC-safe", or whatever, but the phrasing serves the point here.)


If somebody were to actually check Phobos, the obvious first thing to do is to grep for "new". But what's the next thing?
November 23, 2006
Sean Kelly wrote:
> Mike Capp wrote:

>> Dunno; I'd never argue otherwise. I've never found a need to implement a custom
>> allocator in C++ (especially STL allocators, which turned out to be a complete
>> waste of paper).
> 
> 
> STL allocators can be useful for adapting the containers to work with shared memory.  And I created a debug allocator I use from time to time.  But the complexity allocators add to container implementation is significant, for arguably little return.

I ended up doing just that thing for a project once.
That is, I used STL custom allocators to allocate the memory from SGI's shmem shared memory pools.

It was not at all fun, though.  Especially since the allocator comes at the very end of every parameter list, so using allocators means you have to specify *all* the parameters of every STL container you use.

--bb
November 23, 2006
On Tue, 21 Nov 2006 22:55:27 -0800, "John Reimer" <terminal.node@gmail.com> wrote:

>It was too long, but with good points.  If it were pared down, it would read easier and the points might hit home even harder.

That's my writing style, normally - except when no-one agrees with the 'good points' bit, anyway.

Trouble is, if I were to go through and try to pare it down, it would get longer. I'd worry that actually there is a narrow range of platforms and applications where non-GC might work but GC not - those that are right on the edge of coping with malloc/free and unable to bear any GC overhead.

It's an Asperger's thing. People misunderstand what you say, so you get more verbose to try and avoid the misunderstandings. You have no common sense yourself, so you can't know what can be left to the common sense of others. Besides, odd non-verbals tend to trigger mistrust, and that triggers defensiveness in the form of nit-picking every possible gap in your reasoning.

>If you really take an honest look at OSNEWS posts and others, you will realize that some of these people are literally annoyed at D and D promoters for a reason deeper and unrelated to the language.  You can't argue with that.

D is openly embracing something that people have stereotyped as a feature of scripting languages. Sure, some of those scripting languages are good for full applications, and sure, Java is much more aimed at applications, and sure, there are all those 'academic' languages too, but a lot of systems-level programmers had GC tagged as something for 'lesser' programmers who might manage the odd high-level app, or academic geeks who never write a line of real-world code.

Stereotypes. Status. In-groups. Non-GC has become a symbol, really.

When I first encountered D and read that it is a systems-level language with GC, at first I laughed, and then all the 'reasons' why that's bad went through my head. Looking back, that sounds like a defence mechanism to me. Why should I need a defence mechanism? Perhaps I felt under attack?

This is just me, of course, and I got over it, but anyone care to bet that I'm the only one?

Of course putting this kind of thing out as advocacy is a bad idea. When people feel under attack, the worst thing you can do is accuse them of being irrational.

-- 
Remove 'wants' and 'nospam' from e-mail.
November 23, 2006
On Wed, 22 Nov 2006 09:20:17 +0000 (UTC), Boris Kolar
<boris.kolar@globera.com> wrote:

>== Quote from Steve Horne (stephenwantshornenospam100@aol.com)'s article
>> Most real world code has a mix of
>> high-level and low-level.
>
>True. It feels so liberating when you at least have the option to cast a reference to an int, mirror the internal structure of another class, or mess with stack frames. Those are all ugly hacks, but the ability to use them makes programming much more fun.
>
>The ideal solution would be to have a safe language with optional unsafe features, so hacks like that would have to be explicitly marked as unsafe. Maybe that's a good idea for D 2.0 :) If D's popularity keeps rising, there will eventually be people who want a Java or .NET backend. With unsafe features, you can really put a lot of extra power in the language (opAssign, opIdentity, ...) that may or may not work as intended - but it's the programmer's error if it doesn't (intimate knowledge of compiler internals is assumed).

Hmmm

C# does that safe vs. unsafe thing, doesn't it? My reaction was basically that I never used the 'unsafe' stuff at all. I learned it, but for anything that would need 'unsafe' I avoided .NET altogether.

Why?

Well, what I didn't learn is exactly what impact it has on users. As soon as I realised there is an impact on users, I felt very nervous.

If code could be marked as unsafe, and then be allowed to use some subset of unsafe features, I'd say that could be a good thing. But it should be an issue for developers to deal with, not users.

D is in a good position for this, since basically unsafe blocks should be highlighted in generated documentation (to ensure they get more attention in code reviews etc).

Also, possibly there should be a white-list of unsafe blocks to be allowed during compilation - something that each developer can hack for his own testing builds, but for the main build comes from some central source. Unsafe modules that aren't on the list should trigger either errors or warnings, depending on compiler options.

November 23, 2006
On Thu, 23 Nov 2006 21:25:29 +0000, Steve Horne <stephenwantshornenospam100@aol.com> wrote:

>Well, what I didn't learn is exactly what impact it has on users. As soon as I realised there is an impact on users, I felt very nervous.

Just to expand on it...

If you use D's 'deprecated' keyword, your users don't get a "this application uses deprecated code!!!" message every time they start it. The keyword triggers warnings for developers, not users.
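A minimal sketch of how that plays out in D (the keyword is spelled `deprecated`; function name made up):

```d
deprecated int oldApi() { return 42; }

void main()
{
    // Compiling this call fails unless the developer passes dmd -d
    // ("allow deprecated features"). A user running the finished
    // binary never sees any message at all.
    int x = oldApi();
    assert(x == 42);
}
```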
