February 28, 2005
In article <cvvuqm$id4$1@digitaldaemon.com>, Kris says...
>
>In article <cvvqp9$d89$1@digitaldaemon.com>, Walter says...
>>
>>There's a lot there, and I just want to respond to a couple of points.
>>
>>1) You ask what's the bug risk with having a lot of casts. The point of a cast is to *escape* the typing rules of the language. Consider the extreme case where the compiler inserted a cast wherever there was any type mismatch. Type checking would pretty much go out the window.
>>
>>2) You said you use C++ whenever you want speed. That implies you feel that C++ is inherently more efficient than D (or anything else). That isn't my experience comparing D with C++. D is faster. DMDScript in D is faster than the C++ version. The string program at www.digitalmars.com/d/cppstrings.html is much faster in D. The dhrystone benchmark is faster in D.
>>
>>And this is using an optimizer and back end that is designed to efficiently optimize C/C++ code, not D code. What can be done with a D optimizer hasn't even been explored yet.
>>
>>There's a lot of conventional wisdom that because C++ offers low level control, C++ code therefore executes more efficiently. The emperor has no clothes, Matthew! I can offer detailed explanations of why, if you like.
>>
>>I challenge you to take the C++ program in www.digitalmars.com/d/cppstrings.html and make it faster than the D version. Use any C++ technique, hack, trick, you need to.
>
>
>There is no doubt in my mind that array slicing is a huge factor in making D faster than C in very specific areas. I'm a big proponent of this.
>
>Having said that; making sweeping statements regarding "low level control" and so on does nobody any favours. I would very much like to hear this detailed explanation you offer:
>

So would I (just for the educational aspects), but I'd rather have Walter use
that time to continue work on the reference compiler ;-)

As to the 'performance via low-level control' argument, the recent thread on how higher-level array and vector expressions can be optimized by a compiler (but are very difficult, if not impossible, to optimize as well with hand-tuned, low-level C code) offers a good general example of what I think Walter had in mind with his statement.

- Dave


February 28, 2005
On Mon, 28 Feb 2005 20:22:04 +0000 (UTC), Kris wrote:

> In article <cvvp0e$b66$1@digitaldaemon.com>, Walter says...
>>
>>
>>"Matthew" <admin.hat@stlsoft.dot.org> wrote in message news:cvv05a$2gnu$1@digitaldaemon.com...
>>> Sorry, but that's only required if all integral expressions must be promoted to (at least) int.
>>
>>Right.
>>
>>> Who says that's the one true path?
>>
>>Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
>>
>>> Why is int the level at which it's drawn?
>>
>>Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different from how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.) CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.
> 
> 
> This appears to be thoroughly misleading. We're talking about what the compiler enforces as the model of correctness; /not/ how that is translated into machine code. Do you see the distinction, Walter?

Kris has a damn good point or two here. DMD is a compiler, not an assembler.

>>Note that when the initial C standard was being drawn up, there was an unfortunate reality that there were two main branches of default integral promotions - the "sign preserving" and "value preserving" camps. They were different in some edge cases. One way had to be picked, the newly wrong compilers had to be fixed, and some old code would break. There was a lot of wailing and gnashing and a minor earthquake about it, but everyone realized it had to be done. That was a *minor* change compared to throwing out default integral promotions.
> 
> 
> Great stuff for the talk show, but unrelated. D is not C ~ you've already broken that concept in so many ways.

D has deliberately broken away from some aspects of C and C++ in order to
come up with a better language. But not all excess baggage seems to have
been left behind.

> 
>>> Why can there not a bit more contextual information applied? Is there no smarter way of dealing with it?
>>
>>Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.
> 
> 
> On the face of it, the contrary would be true. However, that hardly answers Matthew's question about whether there's another approach. Instead it seems to indicate that you don't know of one, or are not open to one. Or am I misreading this?
> 
> 
>>CPUs are designed to execute C semantics
>>efficiently. That pretty much nails down accepting C's operator precedence
>>and default integral promotion rules as is.
> 
> 
> Misleading and invalid. Your argument fully binds the language model to the machine-code model. There's no need for them to be as tightly bound as you claim. Sure, there's tension between the two ~ but you are claiming it's all or nothing. It may have been a number of years since I wrote a compiler, but I don't buy this nonsense. And neither should anyone else.

With Walter's reasoning here, it would seem that DMD would be better off outputting C source code and getting that compiled with a decent C compiler.

-- 
Derek
Melbourne, Australia
February 28, 2005
Derek wrote:

> I'm also looking forward to the day when somebody designs the language
> which learns from *all* the mistakes of C, C++ and D.

++D, right? (Since the green E seems to be taken already,
which would otherwise be a natural progression from red D)

> Why E?
> 
> Java, Perl, Python, C++, TCL, and on and on. Don't we have enough
> languages already? How could we justify bringing yet another
> programming language into the world?

(from http://www.skyhunter.com/marcs/ewalnut.html, with apologies)

--anders
February 28, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cvvun0$i78$1@digitaldaemon.com...
> 'cause if it's not part of the language spec, it might as well not exist.

Is this an argument for making certain warnings part of the spec? I don't understand the context.

> Several commentators (including IIRC, the original authors of the language; must trip off and check my copy of The Art Of UNIX Programming ...) have observed that the biggest single mistake in the development of the C language was having, for reasons of program size, the lint functionality as a separate program. Most people don't use it.
>
> Consider the case of C++, which is simply too complicated for an effective lint.

Plus the preprocessor makes linting harder. I run the MATLAB editor and the Emacs mode for MATLAB with mlint turned on and it lints code on load and save. It totally rocks. Load a file and you immediately see the "problem areas". MATLAB is interpreted so the concept of compiler warnings is moot but the work-flow is similar enough that integrated linting should have obvious benefits. Plus the linting interface can be more customizable than the warning interface on a compiler. I'd rather have Walter fix compiler bugs than work on a fancy warning infrastructure (or even a non-fancy warning infrastructure).


> And anyway, there's the more fundamental thing that if any item's going to be effectively mandatory in lint, it should be in the compiler in the first place. As I've said before, which competent C++ engineer does not set warnings to max?

A compiler's verboseness about warnings is entirely up to the compiler writer. If one doesn't like the lack of warnings in one compiler, run (or make) another. GDC is a good candidate for a starting point. If all D has to worry about is the lack of warnings from the reference compiler, then we are in great shape.

> <brad@domain.invalid> wrote in message news:cvvrs8$e8v$1@digitaldaemon.com...
>> Just a comment from the sidelines - feel free to ignore.
>> To me it doesn't look like Walter is in any hurry to flag narrowing casts
>> as warnings (or add any warnings into D at all).  And it doesn't look
>> like everyone else is willing to give up without a fight.  Why not use
>> this as an opportunity to get started on a D lint program? There are
>> enough skilled people with a stake in this particular issue alone to make
>> it worth while.
>> Walter - how hard, in your opinion, is it to add lint like checking into
>> the GPL D frontend?  Can it be some in a painless way so that future
>> updates to the frontend have low impact on the lint code?
>>
>> Brad
>
> 


February 28, 2005
Matthew wrote:

> IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems), but it scores pretty poorly on correctness.
> 
> Since Correctness is the sine qua non of software - there's precisely zero use for a piece of incorrect software to a client; ask 'em! - it doesn't matter if D programs perform 100x faster than C/C++ programs on all possible architectures. If the language promotes the writing of buggy software, I ain't gonna use it.

Also, there's a lot of interesting new stuff coming up on the C/C++
front, like system vendors' heavy optimizations and auto-vectorization,
that the D implementation will be missing out on and thus could lose...

I like that performance is a constant D focus. But it can't be *all*?

--anders
February 28, 2005
>> And anyway, there's the more fundamental thing that if any item's going to be effectively mandatory in lint, it should be in the compiler in the first place. As I've said before, which competent C++ engineer does not set warnings to max?
>
> A compiler's verboseness about warnings is entirely up to the compiler writer. If one doesn't like the lack of warnings in one compiler run or make another. GDC is a good candidate for a starting point. If all D has to worry about is the lack of warnings from the reference compiler then we are in great shape.

But isn't the whole point to avoid dialecticism?

At the rate it's going, DMD is going to be like Borland, an easy-to-use toy compiler that people learn on, but which is simply not good enough for any serious development. I can't believe that's in line with Walter's ambitions for Digital Mars (as opposed to D).

Anyway, I tire of this, mainly my own voice if I'm honest. I'll try my best to keep my word and shut up.



February 28, 2005
"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:422311B7.3010701@nospam.org...
>
>
> Matthew wrote:
>> But the example itself is bogus. Let's look at it again, with a bit more flesh:
>>
>>     byte b;
>>
>>     b = 255
>>
>>     b = b + 1;
>>
>> Hmmm .... do we still want that implicit behaviour? I think not!!
>
> int big;
> big = A_BIG_LITERAL_VALUE;
> big = big + 1; // or any value at all, actually.
>
> Where's the difference?

In principle, none.

In practice:
    - the larger a type, the less likely it is to overflow with expressions involving literals
    - the larger a type, the fewer types there are larger than it
    - there has to be some line drawn.

> If somebody typed b as byte, then it's his own decision. Anyone using a small integral type has to know they can overflow -- this is no place for nannying.

That's just total crap. Calling it nannying doesn't make it any less important, it just shows that your argument is in need of some emotional bolstering.

It comes back to the basic point: a language that allows the following:

    long  l = 12345678901234;
    byte  b = l;
    short s = 1;

without allowing *any* feedback from a standards-compliant compiler is a non-starter.

> Heck, any integral type might overflow.

True. There are limits to all principles, and imperfections to all notions. However, saying all integral=>integral conversions are *by definition* valid is not the pragmatic answer to our theoretical limitations. It's ostrich stuff.

> (What I'd love is a switch for non-release code to check for overflow!)

Why non-release?



February 28, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cvvtk1$ggn$1@digitaldaemon.com...
> > 1) You ask what's the bug risk with having a lot of casts. The point
> > of a
> > cast is to *escape* the typing rules of the language. Consider the
> > extreme
> > case where the compiler inserted a cast whereever there was any type
> > mismatch. Typechecking pretty much will just go out the window.
>
> This point is lost on me. Why would the compiler insert casts at every mismatch?

It's just a hypothetical "what if" to illustrate the point.

> AFAICS, the compiler already is inserting casts wherever there is an integral type mismatch.

Implicit casting is a form of this, but a very restricted form. Full casting is unrestricted, and so will negate most of the advantages of type checking. I agree that implicit casting *does* negate some of the features of static type checking. But it's worth it.


> I believe that software engineering has a pretty much linear order of concerns:
>
>     Correctness           - does the chosen language/libraries/techniques promote software that actually does the right thing?
>     Robustness            - does the language/libraries/techniques support writing software that can cope with erroneous environmental conditions _and_ with design violations (CP)?
>     Efficiency            - does the language/libraries/techniques support writing software that can perform with all possible speed (in circumstances where that's a factor)?
>     Maintainability       - does the language/libraries/techniques support writing software that is maintainable?
>     Reusability           - does the language/libraries/techniques support writing software that may be reused?

It's a good list, but I think one serious omission is programmer productivity in the language. Programmer productivity is one of the strong appeals of Perl and Python, enough that it often justifies the performance hit. Trying to improve productivity is a major goal of D; it's why, for example, you don't have to write .h files and forward reference declarations.

> Given that list, correct use of C++ scores (I'm using scores out of five since this is entirely subjective and not intended to be a precise scale)
>
>     Correctness           - 4/5: very good, but not perfect; you have to
> know what you're doing

I don't agree with this assessment at all. I'd give it a 2, maybe a 3. I'll give some examples where C++ makes it very, very difficult to write correct code. This is based on many years of real experience:

1) No default initialization of variables. This leads to code that appears to work, but sometimes fails mysteriously. When you try to insert debugging code, the failure shifts away or disappears. Sometimes it doesn't show up until you port to another platform. It's a rich source of erratic, random bugs, which are bugs of the WORST sort.

2) No default initialization of class members. I can't tell you how many times I've had mysterious bugs because I've added a member to a class with many constructors, and forgot to add an initializer to one of them.

3) Overriding a non-virtual member function in a base class, and forgetting to go back and make the earliest appearance of it in the hierarchy virtual. Programmers tend to make such functions non-virtual for efficiency.

4) 'Holes' in between struct members being filled with random garbage.

5) Namespace pollution caused by macros, making it very hard to integrate diverse libraries and ensure they do not step on each other.

6) No array bounds checking for builtin arrays. Yes, I know about std::vector<>, and how sometimes it does bounds checking and sometimes it doesn't. We all know how well this works in practice, witness the endless buffer overrun security bugs.

7) Overreliance on pointers to do basic chores. Pointers have no bounds checking, or really much of any checking at all.

8) The old bugaboos of memory leaks, dangling pointers, double deletion of memory, etc. STL claims to mitigate this, but I am unconvinced. The more STL tries to lock this up, the more tempted programmers are to go around it for efficiency.

9) C++ code is not portable 'out of the box'. One has to learn how to write portable code by having lots of experience with it. This is due to all the "undefined" and "implementation defined" behaviors in it. It is very hard to examine a piece of code and determine if it is portable or not.

10) Static constructors are run in an "undefined" order.

11) Lovely syntactical traps like:
    for (i = 0; i < 10; i++);
    {
        ...
    }

12) Undefined order of evaluation of expression side effects.

13) Name lookup rules. How many C++ programmers really understand dependent and non-dependent lookups? Or how forward references work in class scope but not global scope? Or ADL? Or how about some of those wretchedly bizarre rules like the base class not being in scope in a template class? How is correctness achieved when not just programmers fail to understand the lookup rules, but compiler developers often get them wrong too (resulting in variation among compilers)? The 'length' D issue is a real one, but it pales in comparison to C++'s problems.

14)
Then there are the portability issues, like implementation defined int
sizes.

The only reason you and I are able to write 'correct' code in C++ successfully is because we have many, many, many years of practice running into the potholes and learning to avoid them. I suspect you gave C++ such a high score because you are *so used* to driving around those potholes that you don't see them anymore. Watch some of the newbie code posts on comp.lang.c++ sometimes <g>. Even I was surprised when I ported some of my C++ code over to D, code I thought was thoroughly debugged, field tested, and correct. Bink! Array overflows!

>     Robustness            - 3/5: good, but could be much better

Agree with 3, doubt it can be improved. I know that Boost claims to resolve much of this, but Boost is far too heavily reliant on standards compliance at the very edges of the language, pushing compilers past their limits. I also feel that what Boost is doing is inventing a language on top of C++, sort of using the C++ compiler as a back end, and is that new language actually C++ or does it merit being called a different language?

>     Efficiency              - 4/5: very good, but not perfect.

Agree. C++'s overreliance on pointers, and the aliasing problem, are significant impediments.

>     Maintainability       - 2/5: yes, but it's hard yakka. If you're
> disciplined, you can get 4/5, but man! that's some rare discipline

Agree.

>     Reusability            - 2/5: pretty ordinary; takes a lot of
> effort/skill/experience/luck

Agree.

>   I also think there's an ancillary trait, of importance purely to the
> programmer
>     Challenge/Expressiveness/Enjoyment            -    4/5

I'd give it a 3. I find D a lot more fun to program in, as it frees me of much of the tedium required by C++.

> IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems),

I've used gc's on large-scale high throughput systems. I'm confident D's will perform well. It's also possible to do a much better gc than the rather old-fashioned one it has now. I know how to do a better one, and have carefully left the door open for it in the semantics of the language.

> but it scores pretty poorly on correctness.

I disagree strongly on this. Unless you have something else in mind I'm forgetting, in your posts you've focused in on two or three issues and have assigned major importance to them. None of them, even if they go awry in the manner you predict, will cause the kinds of erratic, random, awful bugs that the C++ holes mentioned above will and do cause. All of them, as I've argued (and you and Kris disagree, fair enough), will open the door to other kinds of correctness bugs if they are fixed per your suggestions.


> Since Correctness is the sine qua non of software - there's precisely zero use for a piece of incorrect software to a client; ask 'em! - it doesn't matter if D programs perform 100x faster than C/C++ programs on all possible architectures. If the language promotes the writing of buggy software, I ain't gonna use it.

But I think you already do <g>. See my list above.

> D probably scores really highly on Robustness, with unittests, exceptions, etc. But it's pretty damn unmaintainable when we have things like the 'length' pseudo keyword, and the switch-default/return thing. I demonstrated only a couple of days ago how easy it was to introduce broken switch/case code with nary a peep from the compiler. All we got from Walter was a "I don't want to be nagged when I'm inserting debugging code". That's just never gonna fly in commercial development.

Try the following code with g++ -Wall :

    int foo()
    {
        return 3;
        return 4;
    }

and this:

    int foo(int x)
    {
        switch (x)
        {
            case 3: x = 4; break;
                    x = 5;      // unreachable, yet no warning
            case 4: break;
        }
        return x;
    }

I understand how you feel about it, but I don't agree that it is a showstopper of an issue.

> So, I'd give two scores for D. What I think it is now:
>
>     Correctness           - 2/5:
>     Robustness            - 3/5:

I'm going to argue about the robustness issue. I've gotten programs to work reliably in much less time in D than in C++.

>     Efficiency              - ~5/5:
>     Maintainability       - 1/5:

I don't understand your basis for 1/5. Even a small thing like 'deprecated' is a big improvement for any maintainer wanting to upgrade a library.

>     Reusability            - 2/5:

This remains to be seen. Certainly, D doesn't suffer from the macro problems that seriously impede C++ reusability. Just that should move it up to a 3.

>     Challenge/Expressiveness/Enjoyment            -    3/5


February 28, 2005
To Matthew and Anders:

Maybe you are right in some points but ...

Performance is still an important issue nowadays, when everybody says
"want speed? buy a bigger machine"...
(Try to buy X, 2*X, X^2 or 2^X size machines while X keeps growing :-)

Most software developers (company and individual managers too) do not
care about theoretical "correctness".
If they did, Windows would never have been born.
Windows is what managers plan and people buy. It works, but it is not correct.

I personally like D because it is much more correct than C/C++
in some respects. In D the binary code does (roughly) what I want.
A C++ binary does not (the code is unreadable compared to D).
A C binary does what is written in the source, but it is
very painful to explain every small thing to the compiler.

In C/C++ there are traps everywhere. I have to watch each and every step.
If I forget to delete something I'll be punished with a crash where
I do not expect it... There is no (or not much) help from the
language (compiler) itself.

D helps a lot _while_ developing. To me, that is Correctness. I agree that the compiler is not finished yet.

Tamas Nagy


In article <d004m9$pbo$2@digitaldaemon.com>, Anders F Björklund says...
>
>Matthew wrote:
>
>> IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems), but it scores pretty poorly on correctness.
>> 
>> Since Correctness is the sine qua non of software - there's precisely zero use for a piece of incorrect software to a client; ask 'em! - it doesn't matter if D programs perform 100x faster than C/C++ programs on all possible architectures. If the language promotes the writing of buggy software, I ain't gonna use it.
>
>Also, there's a lot of interesting new stuff coming up on the C/C++ front, like system vendors' heavy optimizations and auto-vectorization, that the D implementation will be missing out on and thus could lose...
>
>I like that performance is a constant D focus. But it can't be *all*?
>
>--anders


February 28, 2005
"Walter" <newshound@digitalmars.com> wrote in message news:d006j3$s40$1@digitaldaemon.com...
>
> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cvvtk1$ggn$1@digitaldaemon.com...
>> > 1) You ask what's the bug risk with having a lot of casts. The
>> > point
>> > of a
>> > cast is to *escape* the typing rules of the language. Consider the
>> > extreme
>> > case where the compiler inserted a cast whereever there was any
>> > type
>> > mismatch. Typechecking pretty much will just go out the window.
>>
>> This point is lost on my. Why would the compiler insert casts at
>> every
>> mismatch?
>
> It's just a hypothetical what if to illustrate the point.

Sigh. You just keep moving the goalposts

If it was just hypothetical, then a furphy indeed.

>> AFACS, the compiler already is inserting casts wherever there is an integral type mismatch.
>
> Implicit casting is a form of this, but a very restricted form. Full
> casting
> is unrestricted, and so will negate most of the advantages of type
> checking.
> I agree that implicit casting *does* negate some of the features of
> static
> type checking. But it's worth it.
>
>
>> I believe that software engineering has a pretty much linear order of concerns:
>>
>>     Correctness           - does the chosen
>> language/libraries/techniques promote the software actually do the
>> right
>> thing?
>>     Robustness            - does the language/libraries/techniques
>> support writing software that can cope with erroneous environmental
>> conditions _and_ with design violations (CP)
>>     Efficiency              - does the language/libraries/techniques
>> support writing software that can perform with all possible speed (in
>> circumstances where that's a factor)
>>     Maintainability       - does the language/libraries/techniques
>> support writing software that is maintainable
>>     Reusability            - does the language/libraries/techniques
>> support writing software that may be reused
>
> It's a good list, but I think a serious issue that matters is
> programmer
> productivity in the language. Programmer productivity is one of the
> strong
> appeals of Perl and Python, enough that it often justifies the
> performance
> hit. Trying to improve productivity is a major goal of D, it's why,
> for
> example, you don't have to write .h files and forward reference
> declarations.
>
>> Given that list, correct use of C++ scores (I'm using scores out of
>> five
>> since this is entirely subjective and not intended to be a precise
>> scale)
>>
>>     Correctness           - 4/5: very good, but not perfect; you have
>> to
>> know what you're doing
>
> I don't agree with this assessment at all. I'd give it a 2, maybe a 3.
> I'll
> give some examples where C++ makes it very, very difficult to write
> correct
> code. This is based on many years of real experience:
>
> 1) No default initialization of variables. This leads to code that
> appears
> to work, but sometimes fails mysteriously. When you try to insert
> debugging
> code, the failure shifts away or disappears. Sometimes it doesn't show
> up
> until you port to another platform. It's a rich source of erratic,
> random
> bugs, which are bugs of the WORST sort.
>
> 2) No default initialization of class members. I can't tell you how
> many
> times I've had mysterious bugs because I've added a member to a class
> with
> many constructors, and forgot to add an initializer to one of them.
>
> 3) Overriding a non-virtual member function in base class, and
> forgetting to
> go back and make the earliest appearance of it in the heirarchy
> virtual.
> Programmers tend to make such functions non-virtual for efficiency.
>
> 4) 'Holes' in between struct members being filled with random garbage.
>
> 5) Namespace pollution caused by macros, making it very hard to
> integrate
> diverse libraries and ensure they do not step on each other.
>
> 6) No array bounds checking for builtin arrays. Yes, I know about
> std::vector<>, and how sometimes it does bounds checking and sometimes
> it
> doesn't. We all know how well this works in practice, witness the
> endless
> buffer overrun security bugs.
>
> 7) Overreliance on pointers to do basic chores. Pointers have no
> bounds
> checking, or really much of any checking at all.
>
> 8) The old bugaboos of memory leaks, dangling pointers, double
> deletion of
> memory, etc. STL claims to mitigate this, but I am unconvinced. The
> more STL
> tries to lock this up, the more tempted programmers are to go around
> it for
> efficiency.
>
> 9) C++ code is not portable 'out of the box'. One has to learn how to
> write
> portable code by having lots of experience with it. This is due to all
> the
> "undefined" and "implementation defined" behaviors in it. It is very
> hard to
> examine a piece of code and determine if it is portable or not.
>
> 10) Static constructors are run in an "undefined" order.
>
> 11) Lovely syntactical traps like:
>    for (i = 0; i < 10; i++);
>    {
>        ...
>    }
>
> 12) Undefined order of evaluation of expression side effects.
>
> 13) Name lookup rules. How many C++ programmers really understand
> dependent
> and non-dependent lookups? Or how forward references work in class
> scope but
> not global scope? Or ADL? Or how about some of those wretchedly
> bizarre
> rules like the base class not being in scope in a template class? How
> is
> correctness achieved when not just programmers don't understand the
> lookup
> rules, but the compiler developers get it wrong often too (resulting
> in
> variation among compilers)? The 'length' D issue is a real one, but it
> pales
> in comparison to C++'s problems.
>
> 14)
> Then there are the portability issues, like implementation defined int
> sizes.
>
> The only reason you and I are able to be successful writing 'correct'
> code
> in C++ is because we have many, many, many years of practice running
> into
> the potholes and learning to avoid them. I suspect you gave C++ such a
> high
> score on this is because you are *so used* to driving around those
> potholes,
> you don't see them anymore. Watch some of the newbie code posts on
> comp.lang.c++ sometimes <g>. Even I was surprised when I ported some
> of my
> C++ code over to D, code I thought was thoroughly debugged, field
> tested,
> and correct. Bink! Array overflows!
>
>>     Robustness            - 3/5: good, but could be much better
>
> Agree with 3, doubt it can be improved. I know that Boost claims to
> resolve
> much of this, but Boost is far too heavilly reliant on standards
> compliance
> at the very edges of the language, pushing compilers past their
> limits. I
> also feel that what Boost is doing is inventing a language on top of
> C++,
> sort of using the C++ compiler as a back end, and is that new language
> actually C++ or does it merit being called a different language?
>
>>     Efficiency              - 4/5: very good, but not perfect.
>
> Agree. C++'s overreliance on pointers, and the aliasing problem, are significant impediments.
>
>>     Maintainability       - 2/5: yes, but it's hard yacka. If you're
>> disciplined, you can get 4/5, but man! that's some rare discipline
>
> Agree.
>
>>     Reusability            - 2/5: pretty ordinary; takes a lot of
>> effort/skill/experience/luck
>
> Agree.
>
>>   I also think there's an ancillary trait, of importance purely to the
>> programmer
>>     Challenge/Expressiveness/Enjoyment            -    4/5
>
> I'd give it a 3. I find D a lot more fun to program in, as it frees me
> of much of the tedium required by C++.
>
>> IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems),
>
> I've used gc's on large-scale high throughput systems. I'm confident
> D's will perform well. It's also possible to do a much better gc than
> the rather old-fashioned one it has now. I know how to do a better one,
> and have carefully left the door open for it in the semantics of the
> language.
>
>> but it scores pretty poorly on correctness.
>
> I disagree strongly on this. Unless you have something else in mind I'm
> forgetting, in your posts you've focussed in on two or three issues and
> have assigned major importance to them. None of them, even if they go
> awry in the manner you predict, will cause the kinds of erratic, random,
> awful bugs that the C++ holes mentioned above will and do cause. All of
> them, as I've argued (and you and Kris disagree, fair enough), will open
> the door to other kinds of correctness bugs if they are fixed per your
> suggestions.
>
>
>> Since Correctness is the sine qua non of software - there's precisely
>> zero use for a piece of incorrect software to a client; ask 'em! - it
>> doesn't matter if D programs perform 100x faster than C/C++ programs
>> on all possible architectures. If the language promotes the writing of
>> buggy software, I ain't gonna use it.
>
> But I think you already do <g>. See my list above.
>
>> D probably scores really highly on Robustness, with unittests,
>> exceptions, etc. But it's pretty damn unmaintainable when we've things
>> like the 'length' pseudo keyword, and the switch-default/return thing.
>> I demonstrated only a couple of days ago how easy it was to introduce
>> broken switch/case code with nary a peep from the compiler. All we got
>> from Walter was a "I don't want to be nagged when I'm inserting
>> debugging code". That's just never gonna fly in commercial development.
>
> Try the following code with g++ -Wall :
>
>    int foo()
>    {
>        return 3;
>        return 4;
>    }
>
> and this:
>
>    int foo(int x)
>    {
>        switch (x)
>        {
>            case 3: x = 4; break;
>                    x = 5;    // unreachable, yet no warning
>            case 4: break;
>        }
>        return x;
>    }
>
> I understand how you feel about it, but I don't agree that it is a showstopper of an issue.
>
>> So, I'd give two scores for D. What I think it is now:
>>
>>     Correctness           - 2/5:
>>     Robustness            - 3/5:
>
> I'm going to argue about the robustness issue. I've gotten programs to
> work reliably in much less time in D than in C++.
>
>>     Efficiency              - ~5/5:
>>     Maintainability       - 1/5:
>
> I don't understand your basis for 1/5. Even a small thing like
> 'deprecated' is a big improvement for any maintainer wanting to upgrade
> a library.
>
>>     Reusability            - 2/5:
>
> This remains to be seen. Certainly, D doesn't suffer from the macro
> problems that seriously impede C++ reusability. Just that should move it
> up to a 3.
>
>>     Challenge/Expressiveness/Enjoyment            -    3/5

In Steve Krug's rather excellent book "Don't Make Me Think!", he discusses the notion of "satisficing".

He says "we tend to assume that [people] ... consider all of the available options, and choose the best one. In reality, though, most of the time we don't choose the *best* option - we choose the *first reasonable option*, a strategy known as _satisficing_. As soon as we find [something] that [leads] to what we're looking for, there's a very good chance we'll [choose] it."

(The heavy editing is because he's talking about usage patterns for Web sites. However, he goes on to show how this is based on drawing data from a variety of fields.)

My point is that D may very well solve some of the Big Scary Issues in C++, but that doesn't matter. I, along with all the readers of Imperfect C++ <g>, already know how to obviate/ameliorate/avoid those problems. That D solves them is therefore of little consequence when there are other issues, perhaps seemingly trivial to some but far more fundamental, that it gets completely wrong. I repeat: I have no confidence in using D as it stands now to develop commercial software.

<Aside>
My interest in D ascribes precisely 0.0 to the fact that D initialises
variables, or other such stuff that you claim is so important. (I'm not
saying this is not important to others, or in principle, mind.) What I'm
interested in is:
    - slicing
    - foreach
    - intelligent and efficient class loading (a post 1.0 feature, I know)
    - an auto type mechanism (another post 1.0)
    - a full and proper (and therefore unique!) handling of threading
      (a post 2.0 feature, I suspect!!)
    - doing a truly beautiful+powerful+efficient+EASY_TO_UNDERSTAND
template library that surpasses all the arcane nonsense of C++ (whether
Boost, or STLSoft or whatever)
    - a "portable C++"
    - a language that has good libraries of broad scope
    - this community
    - the incredibly fortuitous mix of features that makes D's memory
management "Not Your Father's Resource Management"

But if those things are layered atop something that I believe is
fundamentally fragile/flawed, I'm not gonna use it (and I'm also gonna
cry).
</Aside>

I rely on Java to forbid invalid conversions.
I rely on Ruby to make conversions not matter (so much).
I rely on C/C++ compilers to warn me about dodgy conversions.
But in D I would have to rely on code reviews, and nothing else.
Consequent confidence factor: 0%.

But instead of all this ultimately fruitless and wearying back and forth, why not simply provide us with flags - "--pre1.0-only--check-narrowing", "--pre1.0-only--check-switch-default", "--pre1.0-only--check-missing-return", "--pre1.0-only--check-boolean-subexpression", etc. - so we can test these things out? Let's run them over Mango's impressive code base. If I'm wrong, I'll be happy to admit it. I'm sure Kris is of similar mettle. As it stands, I'm *never* going to be convinced I'm wrong through talk alone, because I have _real experience_ in other, similar languages where such things _have_ caused bugs when left unchecked.