February 05, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1@digitaldaemon.com...
> Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!

NASA uses C, C++, Ada and assembler for space hardware.

http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html

That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that:

1) make it impossible to ignore situations the programmer did not think of

2) the bias is to force bugs to show themselves in an obvious manner

3) not making it easy for the programmer to insert dead code to "shut up the compiler"

This is why the return and the switch defaults are the way they are. The illustrative example of why this is a superior approach is the Java compiler's insistence on function signatures listing every exception they might raise. Sounds like a great idea to create robust code. Unfortunately, the opposite happens. Java programmers get used to just inserting catch all statements just to get the compiler to shut up. The end result is that critical errors get SILENTLY IGNORED rather than dealt with.
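The idiom, transliterated into C++ for illustration (hypothetical names; C++ doesn't force the catch the way Java does, but the silent-swallow result is identical):

```cpp
#include <stdexcept>
#include <string>

// Stand-in for code whose failures a Java signature would have to
// declare (hypothetical example).
int parsePort(const std::string& s) {
    int p = std::stoi(s); // throws std::invalid_argument on junk input
    if (p < 1 || p > 65535) throw std::out_of_range("bad port");
    return p;
}

// The "shut the compiler up" idiom: swallow everything and return a
// default. The critical error is now silently ignored.
int parsePortQuiet(const std::string& s) {
    try { return parsePort(s); }
    catch (...) { return 0; } // paper over junk input
}
```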

The ABSOLUTELY WORST thing a critical software app can do is silently ignore errors.

I've talked to a couple NASA probe engineers. They insert "deadman" switches in the computers. If the code crashes, locks, or has an unhandled exception, the deadman trips and the computer resets. The other approach I've seen in critical systems is "shut me down, notify the pilot, and engage the backup" upon crash, lock, or unhandled exception.

This won't happen if the error is silently ignored.

Having the compiler complain about lack of a return statement will encourage the programmer to just throw in a return 0; statement. Compiler is happy, the potential bug is NOT fixed, maintenance programmer is left wondering why there's a statement there that never gets executed, visual inspection of the code will not reveal anything obviously wrong, and testing will likely not reveal the bug since the function returns normally. Testers who use code coverage analyzers (an excellent QA technique) will have dead code sticking in their craws.

However, if the runtime exception does throw, the programmer knows he has a REAL bug, not a HYPOTHETICAL bug, and it's something that needs fixing, not an annoyance that needs shutting up. Testing will likely reveal it. If it happens in the field in critical software, the deadman or backup can be engaged. A return 0; will paper over the bug, potentially causing far worse things to happen.
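To make the contrast concrete, a small C++ sketch (hypothetical enum and functions; D's runtime check plays the role of the throw here):

```cpp
#include <stdexcept>

enum class Mode { Read, Write };

// The "shut the compiler up" version: the dead `return 0;` keeps the
// build quiet, but a Mode value added later is silently mapped to 0 --
// the bug is papered over.
int flagsQuiet(Mode m) {
    switch (m) {
        case Mode::Read:  return 1;
        case Mode::Write: return 2;
    }
    return 0; // dead code today; a silent wrong answer tomorrow
}

// The "fail loudly" version: an unexpected path throws, so testing
// (or the field deadman/backup logic) sees a REAL bug immediately.
int flagsLoud(Mode m) {
    switch (m) {
        case Mode::Read:  return 1;
        case Mode::Write: return 2;
    }
    throw std::logic_error("flagsLoud: unhandled Mode value");
}
```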

The same comments apply to the switch default issue.

Correct me if I'm wrong, but your position is that the compiler issuing an error will ensure that the programmer will correct the hypothetical error by inserting dead code, thereby making it correct. This may happen most of the time, but I worry about the cases where the shut the compiler up dead code is inserted instead, as what happens in Java even by Java experts who KNOW BETTER yet do it anyway. (I know this because they've told me they do this even knowing they shouldn't.)

I've used compilers that insisted that I insert dead code. I usually add a comment saying it's dead code to shut the compiler up. It doesn't look good <g>.

I want to comment on the idea that having an unhandled exception happening to the customer makes the app developer look bad. Yep, it makes the developer look bad. Bugs always make the developer look bad. Silently ignoring bugs doesn't make them go away. At least with the exception you have a good chance of being able to reproduce the problem and get it fixed. That's much better for the customer than having a silent papered over bug insidiously corrupt his expensive database he didn't back up.

In short, I strongly believe that inserting dead code (code that will never be executed) is not the answer to writing bug free code. Having such code in there is misleading at best, and at worst will cause critical errors to be silently ignored.


February 05, 2005
I'd just like to say how much I agree with this.  Of course, this is an open source concept, really, but in practice it is something I have found to polish software much more robustly.

As an example, I write forum software (in PHP - I also do stuff in D, and even C#, but those are much lower scale...) which uses a relational database.  One of the primary causes of bugs is database errors - meaning, syntax or data errors created by unexpected input (for example, not selecting any items but then clicking "delete selected".)

Of course, showing such database errors to end users is a bad idea.  In the worst case, showing detailed information about these errors can more easily expose a security hole which might otherwise be patched by the time we hear of the error.  Instead, database errors are shown to administrators (that is, people with privilege to see them) and also logged in the database for later retrieval.

Now, this may not seem to translate directly, but to me it does.  In previous versions of the software, database error messages were neither logged nor shown to anyone.  After the change, fixing bugs became much easier... especially for third party add-on developers.  It was then possible to fix bugs much more quickly and easily for all involved (including the users, who were sometimes programmers themselves), only increasing productivity and stability.

Moreover, sometimes relying on the compiler to detect your errors makes you soft.  By this, I don't mean you just stick in dead code - I mean that you expect the compiler to tell you if there are any paths that lead to a missing return (as an admittedly bad example.)  If the compiler, for any reason, mistakenly ignores a possibility... you will ignore it too.  Yes, this could be considered a bug in the compiler... but that only compounds the number of bugs.  IMHO, one of the best ways to make software stable is to make it so that if there ARE bugs, they won't do as much damage as they might otherwise.

Some people think they can dream up some way to rid the world of bugs. You can't do it - they can live through nuclear blasts, darn it! Prevention and a good strong boot are the only things that work, and thinking otherwise is only going to cause infestation not salvation. For those who don't like metaphors, I only mean to emphasize what I said above; there is no catch all solution to software bugs - even misplaced returns.

-[Unknown]

> I want to comment on the idea that having an unhandled exception happening
> to the customer makes the app developer look bad. Yep, it makes the
> developer look bad. Bugs always make the developer look bad. Silently
> ignoring bugs doesn't make them go away. At least with the exception you
> have a good chance of being able to reproduce the problem and get it fixed.
> That's much better for the customer than having a silent papered over bug
> insidiously corrupt his expensive database he didn't back up.
February 05, 2005
This is all tired old ground, and I know I'm not going to prevail. However, the fact that my comment's got your back up sufficiently to post a long and erudite response must indicate that you realise that I'm not the sole barking-mad dog, howling at the wind. So, I'll bite. Just a little.

Before I kick off, I must say I find a disappointing lack of weight to your list of points, which I think reflects the lack of cogency to the state of D around this area:

> That said, you and I have different ideas on what constitutes support
> for
> writing reliable code. I think it's better to have mechanisms in the
> language that:
>
> 1) make it impossible to ignore situations the programmer did not think of

So do I. So does any sane person.

But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they are, they may surface in a context, and at a time, that renders them useless as an aid to improving the program.

> 2) the bias is to force bugs to show themselves in an obvious manner.

So do I.

But this statement is too bland to be worth anything. What is "obvious"? *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?

Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.

> 3) not making it easy for the programmer to insert dead code to "shut
> up the
> compiler"

I completely agree.

But you're hamstringing 100% of all developers for the carelessness/unprofessionalism/ineptitude of a few. Do you really think it's worth it? Will that handful of better-employed-working-in-the-spam-industry types find no other way to screw up their systems? Is this really going to answer all the issues attendant on a lack of skill/learning/professionalism/adequate quality mechanisms (incl. design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc.)?


But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little-tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)


My position is simply that compile-time error detection is better than runtime error detection. Further, where compile-time detection is not possible, runtime protection should be to the MAX: practically, this means that I *strongly* believe that contract violations mean death for an application, without exception. (So, FTR, the last several paragraphs of your post most certainly don't apply to this position. I'm highly confident you already know I hold this position, so I assume they're in there for wider pedagogical purposes, and will not comment on them further.)
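For concreteness, here is roughly what "contract violations mean death" looks like, sketched in C++ (the ENFORCE macro is made up for illustration, not any particular library's API; unlike assert, it stays active in release builds):

```cpp
#include <cstdio>
#include <cstdlib>

// Hypothetical ENFORCE macro: on violation, report and die -- no
// recovery path, no way to "shut it up" and continue.
#define ENFORCE(cond, msg)                                        \
    do {                                                          \
        if (!(cond)) {                                            \
            std::fprintf(stderr, "contract violated: %s\n", msg); \
            std::abort();                                         \
        }                                                         \
    } while (0)

double divide(double num, double den) {
    ENFORCE(den != 0.0, "divide: zero denominator");
    return num / den;
}
```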

Your position now - or maybe it's just expressed altogether in a single place for the first time - seems to be that having a compiler detect potential errors opens up the door for programmers to shut the compiler up with dead code. This is indeed true. You seem to argue that, as a consequence, it's better to prevent the compiler from giving (what you admit would be: "This may happen most of the time ...") very useful help in the majority of cases. I disagree.

Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again. Depending on the vagaries of its operating environment, it may well just keep going bad, in the same (hopefully very short) amount of time, again and again and again. The system's not being (further) corrupted, but it's not getting anything done either.

It's clear, or so it seems to me, that this issue, at least as far as the
strictures of D are concerned, is a balance between the likelihoods of:
    1.    producing a non-violating program, and
    2.    preventing a violating program from continuing its execution
and, therefore, potentially wrecking a system.

You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so.

However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.)

Walter, I think that you've hung D on the petard of 'absolutism in the name of simplicity', on this and other issues. For good reasons, you won't conscience warnings, or pragmas, or even switch/function decorator keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }"). Indeed, as I think most participants will acknowledge, there are good reasons for all the decisions made for D thus far. But there are also good reasons against most/all of those decisions. (Except for slices. Slices are *the best thing* ever, and coupled with auto+GC, will eventually stand D out from all other mainstream languages.<G>).

Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?

It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?

I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)

One last word: I recall a suggestion a year or so ago that would require the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction ....

[My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]

Matthew

February 05, 2005
"Vathix" <vathix@dprogramming.com> wrote in message news:opslo861ihkcck4r@esi...
>>> Guys, if we persist with the mechanism of no compile-time detection
>>> of
>>> return paths
>>
>> "and switch cases"
>>
>>> , and rely on the runtime exceptions, do we really think NASA would
>>> use
>>> D? Come on!
>>
>
> Would you fly to Mars in debug mode?

Well, to seriously answer your question: I think production code, at least for 'important commercial' systems, should be shipped with contract programming enforcement on.

I recently worked on a large-scale multi-protocol, (multi-threaded) multi-process, non-stop system, and used a lot of contract programming (CP) in it. It's now humming away happily with all that good contract enforcement, and suicidal servers.

I have to tell you, I had a devil of a time persuading the project managers of the utility of CP, and even the techie guy had his qualms.

Like all commercial projects, this one started system testing the day it went into production. And, do you know, it has only had two bugs so far. One of these had an invariant condition ready for it, so it killed itself informatively and the bug was fixed in 10 minutes. The second did not have an invariant coded for it - much to my chagrin - and took over a week to find.

So, the lesson to me is that CP should always be on, and the more complex the system the more important it is that that be so. Although I've worked on a few complex large-scale systems in the past that did not have it (and which have run without flaw for years), I will not do so in the future. CP all the way!
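A minimal sketch of that always-on style, in C++ for illustration (the class and its invariant are hypothetical; the point is that the check runs after every mutation and kills the process informatively, close to the bug):

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Illustrative only: a queue whose class invariant is re-checked after
// every mutating operation. On violation the process kills itself
// informatively (a "suicidal server") instead of serving corrupt state.
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) { invariant(); }
    void push(int v) {
        items_.push_back(v);
        invariant(); // violated => die now, close to the bug's source
    }
    int pop() {
        if (items_.empty()) die("pop on empty queue");
        int v = items_.front();
        items_.erase(items_.begin());
        invariant();
        return v;
    }
    std::size_t size() const { return items_.size(); }
private:
    void invariant() const {
        // A correct caller never overfills; if one does, that's a bug
        // we want surfaced immediately, not papered over.
        if (items_.size() > cap_) die("capacity invariant violated");
    }
    [[noreturn]] static void die(const char* what) {
        std::fprintf(stderr, "invariant: %s\n", what);
        std::abort();
    }
    std::size_t cap_;
    std::vector<int> items_;
};
```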

btw, we're going to write this up as a case-study for an instalment of Bjorn Karlsson and my Smart Pointers column, called: "The Nuclear Reactor and the Deep Space Probe". It's mostly written, including some excellent quotes from big-W, and we hope to get it out sometime this month. (The column's on Artima.com, and available free for anyone; no sign-up required.)

Cheers

Matthew



February 05, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1@digitaldaemon.com...
> > 1) make it impossible to ignore situations the programmer did not think of
>
> So do I. So does any sane person.
>
> But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system . If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.

If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.

> > 2) the bias is to force bugs to show themselves in an obvious manner.
>
> So do I.
>
> But this statement is too bland to be worth anything. What is "obvious"?

Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.

> *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?

As early as possible. Putting in the return 0; means the showing will be late.

> Frankly, one might argue that the notion that the language and its premier compiler actively work to _prevent_ the programmer from detecting bugs at compile-time, forcing a wait of an unknowable amount of testing (or, more horribly, deployment time) to find them, is simply crazy.

I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.

> But you're hamstringing 100% of all developers for the careless/unprofessional/inept of a few.

I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do" about.

> Do you really think it's worth it?

Absolutely.

> Will those handful % of better-employed-working-in-the-spam-industry find no other way to screw up their systems? Is this really going to answer all the issues attendant with a lack of skill/learning/professionalism/adequate quality mechanisms (incl, design reviews, code reviews, documentation, refactoring, unit testing, system testing, etc. etc. )?

D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.

> But I'm not going to argue point by point with your post, since you lost me at "Java's exceptions". The analogy is specious, and thus unconvincing. (Though I absolutely concur that they were a little tried 'good idea', like C++'s exception specifications or, in fear of drawing unwanted venom from my friends in the C++ firmament, export.)

I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic of booting exception specifications out of C++ completely came up. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.

There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.

> My position is simply that compile-time error detection is better than runtime error detection.

In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.

> Further, where compile-time detection is not
> possible, runtime protection should be to the MAX: practically, this
> means that I *strongly* believe that contract violations mean death for
> an application, without exception. (So, FTR, the last several paragraphs
> of your post most certainly don't apply to this position. I'm highly
> confident you already know I hold this position, so I assume they're in
> there for wider pedegagical purposes, and will not comment on them
> further.)
>
> Your position now - or maybe it's just expressed altogether in a single place for the first time - seems to be that having a compiler detect potential errors opens up the door for programmers to shut the compiler up with dead code. This is indeed true. You seem to argue that, as a consequence, it's better to prevent the compiler from giving (what you admit would be: "This may happen most of the time ...") very useful help in the majority of cases. I disagree.

I know we disagree. <g>

> Now you're absolutely correct that an invalid state throwing an exception, leading to application/system reset is a good thing. Absolutely. But let's be honest. All that achieves is to prevent a bad program from continuing to function once it is established to be bad. It doesn't make that program less bad, or help it run well again.

Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.

> Depending
> on the vagaries of its operating environment, it may well just keep
> going bad, in the same (hopefully very short) amount of time, again and
> again and again. The system's not being (further) corrupted, but it's
> not getting anything done either.

One of the Mars landers went silent for a couple days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.

On airliners, the self detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen.

> It's clear, or so it seems to me, that this issue, at least as far as the
> strictures of D are concerned, is a balance between the likelihoods of:
>     1.    producing a non-violating program, and
>     2.    preventing a violating program from continuing its execution
> and, therefore, potentially wrecking a system.

There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.

> You seem to be of the opinion that the current situation of missing return/case handling (MRCH) minimises the likelihood of 2. I agree that it does so.
>
> However, contrarily, I assert that D's MRCH minimises the likelihood of producing a non-violating program in the first place. The reasons are obvious, so I'll not go into them. (If anyone cares to disagree, I ask you to write a non-trivial C++ program in a hurry, disable *all* warnings, and go straight to production with it.)
>
> Walter, I think that you've hung D on the petard of 'absolutism in the
> name of simplicity', on this and other issues. For good reasons, you
> won't conscience warnings, or pragmas, or even switch/function
> decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
> return -1; }"). Indeed, as I think most participants will acknowledge,
> there are good reasons for all the decisions made for D thus far. But
> there are also good reasons against most/all of those decisions. (Except
> for slices. Slices are *the best thing* ever, and coupled with auto+GC,
> will eventually stand D out from all other mainstream languages.<G>).

Jan Knepper came up with the slicing idea. Sheer genius!

> Software engineering hasn't yet found a perfect language. D is not perfect, and it'd be surprising to hear anyone here say that it is. That being the case, how can the policy of absolutism be deemed a sensible one?

Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)

> It cannot be sanely argued that throwing on missing returns is a perfect solution, any more than it can be argued that compiler errors on missing returns is. That being the case, why has D made manifest in its definition the stance that one of these positions is indeed perfect?

I don't believe it is perfect. I believe it is the best balance of competing factors.

> I know the many dark roads that await once the tight control on the language is loosened, but the real world's already here, batting on the door. I have an open mind, and willing fingers to all kinds of languages. I like D a lot, and I want it to succeed a *very great deal*. But I really cannot imagine recommending use of D to my clients with these flaws of absolutism. (My hopeful guess for the future is that other compiler variants will arise that will, at least, allow warnings to detect such things at compile time, which may alter the commercial landscape markedly; D is, after all, full of a great many wonderful things.)

I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.

> One last word: I recall a suggestion a year or so ago that would require the programmer to explicitly insert what is currently inserted implicitly. This would have the compiler report errors to me if I missed a return. It'd have the code throw errors to you if an unexpected code path occurred. Other than screwing over people who prize typing one less line over robustness, what's the flaw? And yet it got no traction ....

Essentially, that means requiring the programmer to insert:
    assert(0);
    return 0;
It just seems that requiring some fixed boilerplate to be inserted means
that the language should do that for you. After all, that's what computers
are good at!
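To make that concrete, here is a minimal sketch (in D, with a hypothetical function name) of the boilerplate the proposal would require. The two trailing lines are exactly what D currently supplies implicitly:

```d
// The compiler cannot prove the two branches are exhaustive, so
// under the rejected proposal the programmer would have to end
// every such function with the same two fixed lines by hand.
int sign(int x)
{
    if (x < 0)
        return -1;
    if (x >= 0)
        return 1;
    assert(0);   // unreachable in practice, but written out explicitly
    return 0;    // dead code, present only to satisfy the return check
}
```

Since the two lines are the same fixed boilerplate every time, this is Walter's point: a task a computer can do for you.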

> [My goodness! That was way longer than I wanted. I guess we'll still be arguing about this when the third edition of DPD's running hot through the presses ...]

I don't expect we'll agree on this anytime soon.


February 05, 2005
On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:

> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cu15pb$jqf$1@digitaldaemon.com...
>> Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!
> 
> NASA uses C, C++, Ada and assembler for space hardware.
> 
> http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html
> 
> That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that:
> 
> 1) make it impossible to ignore situations the programmer did not think of
> 
> 2) the bias is to force bugs to show themselves in an obvious manner
> 
> 3) not making it easy for the programmer to insert dead code to "shut up the compiler"

I come from the position that a compiler's job (apart from compiling) is to help the coder write correct programs. Of course, it can't do this to the Nth degree, because how does the compiler 'know' what is correct or not? However, a compiler is often able to detect things that are *probably* incorrect or that have a high probability of causing the application to function incorrectly. Thus I think that a good compiler is one that is allowed to point these situations out to the code writer. (The compiler should also allow coders to tell it that the coder knows what they are doing in this instance and just let me get on with it, okay?!)

Now, what to do though if the code writer chooses to ignore the compiler's observations? I would suggest that the compiler should insert runtime code that prevents the application from continuing if it tries to execute past the code that the compiler thinks might (i.e. is highly likely to) cause bad results.

You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them. Of course, some coders are just that immature. They either grow up or wither. As a coder matures, they will begin to take the compiler seriously and add in code that makes sense in the context.

I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ...

  Abort("Logic Error #nnn. If you see this, a mistake was made by the
        programmer. This should never be seen. Inform your supplier
        about this message.");

You might regard this as superfluous 'dead code', however a 'nice' message from the coder to the user is better than a compiler-generated 'jargon' message that the user must decode. Thus my switch constructs always have a default clause, and any 'if' statement in which an unhandled false branch would cause problems gets an 'else' clause. I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run it keeps the users better informed and, *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things.
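A short sketch of that defensive style follows; the Abort stand-in is an assumption, since Derek's actual routine and its signature are not shown in the thread:

```d
import std.stdio;

// Stand-in for Derek's Abort routine (his real signature is unknown).
void Abort(string msg)
{
    stderr.writeln(msg);
    assert(0);
}

// Defensive style: every switch has a default clause, and the
// function still ends with a return that should never execute.
int classify(int code)
{
    switch (code)
    {
        case 1:  return 10;
        case 2:  return 20;
        default:
            Abort("Logic Error #nnn. If you see this, a mistake was "
                  ~ "made by the programmer. Inform your supplier.");
    }
    return 0;   // never reached if all goes well; documents intent
}
```

The trailing return is the deliberate 'overkill': it tells a future maintainer that the author considered the fall-through path, rather than leaving them to wonder.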

Currently, D is way too dogmatic and unreasonably unhelpful to the coder.

It is mostly still better than C/C++ though.

-- 
Derek
Melbourne, Australia
February 05, 2005
> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1@digitaldaemon.com...
>> > 1) make it impossible to ignore situations the programmer did not think of
>>
>> So do I. So does any sane person.
>>
>> But it's a question of level, context, time. You're talking about two
>> measures that are small-scale, whose effects may or may not ever be
>> seen
>> in a running system. If they do, they may or may not be in a
>> context,
>> and at a time, which renders them useless as an aid to improving the
>> program.
>
> If the error is silently ignored, it will be orders of magnitude
> harder to
> find. Throwing in a return 0; to get the compiler to stop squawking is
> not
> helping.

I'm not arguing for that!

You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)

>> > 2) the bias is to force bugs to show themselves in an obvious manner.
>>
>> So do I.
>>
>> But this statement is too bland to be worth anything. What is "obvious"?
>
> Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.

Man oh man! Have you taken up politics?

My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error.

Come on.

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right?
A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.

>> *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?
>
> As early as possible. Putting in the return 0; means the showing will
> be
> late.

Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?

>> Frankly, one might argue that the notion that the language and its
>> premier compiler actively work to _prevent_ the programmer from
>> detecting bugs at compile-time, forcing a wait of an unknowable
>> amount
>> of testing (or, more horribly, deployment time) to find them, is
>> simply
>> crazy.
>
> I understand your point, but for this case, I do not agree for all the
> reasons stated here. I.e. there are other factors at work, factors
> that will
> make the bugs harder to find, not easier, if your approach is used. It
> is
> recognition of how programmers really write code, rather than the way
> they
> are exhorted to write code.

Disagree.

>> But you're hamstringing 100% of all developers for the carelessness/unprofessionalism/ineptitude of a few.
>
> I don't believe it is a few. It is enough that Java was forced to
> change
> things, to allow unchecked exceptions. People who look at a lot of
> Java code
> and work with a lot of Java programmers tell me it is a commonplace
> practice, *even* among the experts. When even the experts tend to
> write code
> that is wrong even though they know it is wrong and tell others it is
> wrong,
> is a very strong signal that the language requirement they are dealing
> with
> is broken. I don't want to design a language that the experts will say
> "do
> as I say, not as I do."

Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.

>> Will those handful % of better-employed-working-in-the-spam-industry
>> find no other way to screw up their systems? Is this really going to
>> answer all the issues attendant with a lack of
>> skill/learning/professionalism/adequate quality mechanisms (incl,
>> design
>> reviews, code reviews, documentation, refactoring, unit testing,
>> system
>> testing, etc. etc. )?
>
> D is based on my experience and that of many others on how programmers
> actually write code, rather than how we might wish them to.
> (Supporting a
> compiler means I see an awful lot of real world code!) D shouldn't
> force
> people to insert dead code into their source. It's tedious, it looks
> wrong,
> it's misleading, and it entices bad habits even from expert
> programmers.

Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.

>> But I'm not going to argue point by point with your post, since you
>> lost
>> me at "Java's exceptions". The analogy is specious, and thus
>> unconvincing. (Though I absolutely concur that they were a little
>> tried
>> 'good idea', like C++'s exception specifications or, in fear of
>> drawing
>> unwanted venom from my friends in the C++ firmament, export.)
>
> I believe it is an apt analogy as it shows how forcing programmers to
> do
> something unnatural leads to worse problems than it tries to solve.
> The best
> that can be said for it is "it seemed like a good idea at the time". I
> was
> at the last C++ standard committee meeting, and the topic came up on
> booting
> exception specifications out of C++ completely. The consensus was that
> it
> was now recognized as a worthless feature, but it did no harm (since
> it was
> optional), so leave it in for legacy compatibility.

All of this is of virtually no relevance to the topic under discussion

> There's some growing thought that even static type checking is an
> emperor
> without clothes, that dynamic type checking (like Python does) is more
> robust and more productive. I'm not at all convinced of that yet <g>,
> but
> it's fun seeing the conventional wisdom being challenged. It's good
> for all
> of us.

I'm with you there.

>> My position is simply that compile-time error detection is better
>> than
>> runtime error detection.
>
> In general, I agree with that statement. I do not agree that it is
> always
> true, especially in this case, as it is not necessarily an error. It
> is
> hypothetically an error.

Nothing is *always* true. That's kind of one of the bases of my thesis.

>> Now you're absolutely correct that an invalid state throwing an
>> exception, leading to application/system reset is a good thing.
>> Absolutely. But let's be honest. All that achieves is to prevent a
>> bad
>> program from continuing to function once it is established to be bad.
>> It
>> doesn't make that program less bad, or help it run well again.
>
> Oh, yes it does make it less bad! It enables the program to notify the
> system that it has failed, and the backup needs to be engaged. That
> can make
> the difference between an annoyance and a catastrophe. It can help it
> run
> well again, as the error is found closer to the source of it,
> meaning it
> will be easier to reproduce, find and correct.

Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth.

So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time.

Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.

>> Depending
>> on the vagaries of its operating environment, it may well just keep
>> going bad, in the same (hopefully very short) amount of time, again
>> and
>> again and again. The system's not being (further) corrupted, but it's
>> not getting anything done either.
>
> One of the Mars landers went silent for a couple days. Turns out it
> was a
> self detected fault, which caused a reset, then the fault, then the
> reset,
> etc. This resetting did eventually allow JPL to wrest control of it
> back. If
> it had simply locked, oh well.

Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking?

At no time have I ever said such a thing.

> On airliners, the self detected faults trigger a dedicated circuit
> that
> disables the faulty computer and engages the backup. The last, last,
> last
> thing you want the autopilot on an airliner to do is execute a return
> 0;
> some programmer threw in to shut the compiler up. An exception thrown,
> shutting down the autopilot, engaging the backup, and notifying the
> pilot is
> what you'd much rather happen.

Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.

>> It's clear, or seems to to me, that this issue, at least as far as
>> the
>> strictures of D is concerned, is a balance between the likelihoods
>> of:
>>     1.    producing a non-violating program, and
>>     2.    preventing a violating program from continuing its
>> execution
>> and, therefore, potentially wreck a system.
>
> There's a very, very important additional point - that of not enticing
> the
> programmer into inserting "shut up" code to please the compiler that
> winds
> up masking a bug.

Absolutely. But that is not, in and of itself, sufficient justification for ditching compile detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.

>> You seem to be of the opinion that the current situation of missing
>> return/case handling (MRCH) minimises the likelihood of 2. I agree
>> that
>> it does so.
>>
>> However, contrarily, I assert that D's MRCH minimises the likelihood
>> of
>> producing a non-violating program in the first place. The reasons are
>> obvious, so I'll not go into them. (If anyone's cares to disagree, I
>> ask
>> you to write a non-trivial C++ program in a hurry, disable *all*
>> warnings, and go straight to production with it.)
>>
>> Walter, I think that you've hung D on the petard of 'absolutism in
>> the
>> name of simplicity', on this and other issues. For good reasons, you
>> won't conscience warnings, or pragmas, or even switch/function
>> decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
>> return -1; }"). Indeed, as I think most participants will
>> acknowledge,
>> there are good reasons for all the decisions made for D thus far. But
>> there are also good reasons against most/all of those decisions.
>> (Except
>> for slices. Slices are *the best thing* ever, and coupled with
>> auto+GC,
>> will eventually stand D out from all other mainstream languages.<G>).
>
> Jan Knepper came up with the slicing idea. Sheer genius!

Truly

>> Software engineering hasn't yet found a perfect language. D is not
>> perfect, and it'd be surprising to hear anyone here say that it is.
>> That
>> being the case, how can the policy of absolutism be deemed a sensible
>> one?
>
> Now that you set yourself up, I can't resist knocking you down with
> "My
> position is simply that compile-time error detection is better than
> runtime
> error detection." :-)

?

If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?

>> It cannot be sanely argued that throwing on missing returns is a
>> perfect
>> solution, any more than it can be argued that compiler errors on
>> missing
>> returns is. That being the case, why has D made manifest in its
>> definition the stance that one of these positions is indeed perfect?
>
> I don't believe it is perfect. I believe it is the best balance of
> competing
> factors.

I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.

>> I know the many dark roads that await once the tight control on the
>> language is loosened, but the real world's already here, batting on
>> the
>> door. I have an open mind, and willing fingers to all kinds of
>> languages. I like D a lot, and I want it to succeed a *very great
>> deal*.
>> But I really cannot imagine recommending use of D to my clients with
>> these flaws of absolutism. (My hopeful guess for the future is that
>> other compiler variants will arise that will, at least, allow
>> warnings
>> to detect such things at compile time, which may alter the commercial
>> landscape markedly; D is, after all, full of a great many wonderful
>> things.)
>
> I have no problem at all with somebody making a "lint" for D that will
> explore other ideas on checking for errors. One of the reasons the
> front end
> is open source is so that anyone can easily make such a tool.

I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(

>> One last word: I recall a suggestion a year or so ago that would
>> have required the programmer to explicitly insert what is currently
>> inserted
>> implicitly. This would have the compiler report errors to me if I
>> missed
>> a return. It'd have the code throw errors to you if an unexpected
>> code
>> path occurred. Other than screwing over people who prize typing one
>> less
>> line over robustness, what's the flaw? And yet it got no traction
>> ....
>
> Essentially, that means requiring the programmer to insert:
>    assert(0);
>    return 0;

That is not the suggested syntax, at least not to the best of my recollection.

> It just seems that requiring some fixed boilerplate to be inserted
> means
> that the language should do that for you. After all, that's what
> computers
> are good at!

LOL! Well, there's no arguing with you there, eh?

You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do.

Just in case anyone's missed the extreme illogic of that position, I'll reiterate.

    Camp A want behaviour X to be done automatically by the compiler.
    Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically.
    By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.)

    Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear whether there's any on-point response to this.

>> [My goodness! That was way longer than I wanted. I guess we'll still
>> be
>> arguing about this when the third edition of DPD's running hot
>> through
>> the presses ...]
>
> I don't expect we'll agree on this anytime soon.

Agreed


February 05, 2005
Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with.

Walter says: if it's compile time, programmers will patch it without thinking.  That's bad.  So let's use runtime.

You say: Runtime checking is bad.  Let's use compile time, that fixes everything!

You and Derek, who posted earlier, have implied that the runtime checking can still supplement the compile time checking.  Perhaps I've missed something crucial here, but I don't understand how - either there is a return there, or there isn't.  Example:

int main()
{
   return 0;
}

I do not see any space for runtime checking there.  None.  Not a single bit.  So, by that, we can logically come to the conclusion that if compile time checking is used, runtime checking is impossible, because it makes no sense.

Walter, to my reckoning, is saying that the problem is this:

int main(char[][] args)
{
   if (args[1] != "--help")
   {
      doStuff();
      return 0;
   }
   else
      showHelp();
}

Oops.  Forgot the "return 1;".  His argument is that, in a more complicated function (with many lines and possibly different return values...) it may be difficult to tell what should be returned here.
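The maintainer's dilemma can be made concrete. A sketch of the same function, renamed `run` and with stubbed helpers so it stands alone (the stubs and the chosen return value are assumptions, which is exactly the problem):

```d
void doStuff()  {}   // stubs standing in for the thread's
void showHelp() {}   // unspecified helpers

// A maintainer silencing a hypothetical compile-time
// missing-return error has to pick a value here; nothing in
// the code says whether the original author meant 0 or 1.
int run(char[][] args)
{
    if (args[1] != "--help")
    {
        doStuff();
        return 0;
    }
    else
    {
        showHelp();
        return 1;   // plausible guess; return 0 is equally plausible
    }
}
```

Either guess compiles cleanly, and only the original author knows which one is right.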

Tell me, if you're working on a group project, using CVS or otherwise, and you are testing some code you've just added which you are about to check in... but someone else has checked in some code which no longer compiles because of said return warning - what is your instinct?  To sit on it until the return is fixed?  Maybe.  Or, maybe you want to fix it.

Being that you didn't write the code, you might say... well, it looks like if it gets to here it should return a 0.  Maybe you're right. Maybe you're wrong.  Maybe if you're wrong, the original author will notice and fix it.  Maybe not.  I hate maybes, they mean bugs.

Now, I'm sure I'm misrepresenting you.  We're all good patient programmers, and we'll wait for the guy on vacation who wrote this to come back and add his return.  Then we'll all break his bones for checking in code that doesn't even compile.

Here's another example.  Someone might argue that the compiler should give errors/warnings for the following:

if (true)
   1;
else if (var == 4)
   2;

Obviously, 2 will never happen.  Unreachable code detected, yes?  But what if it's this:

if (true) //var == 3)
   1;
else if (var == 4)
   2;

Suddenly, the obviousness of this error is gone.  It's no longer an error, it's testing.  2 isn't unreachable at all, it's only "commented out" so to speak!

What about this...?

version (1)
   1;
else version(2)
   2;

Is that an error?  No else for the versions... shouldn't there (probably) be a static assert there or similar?  Yes, maybe.  Obviously that can't be relied on, because sometimes it won't be true.  But, should you be forced to do this?

version (1)
   1;
else version(2)
   2;
else
   1 == 1;

Okay.  Let me reformat this example.  Should you be forced to do this?

int doIt(int var)
{
   if (var == 1)
      return 1;
   else if (var == 2)
      return 2;
   else
      return 0;
}

Same thing.  You'll say no, though.  These are different.  One's returning things, the other isn't, you'll say.

-[Unknown]
February 05, 2005
> Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with.

Does it? How?

> Walter says: if it's compile time, programmers will patch it without thinking.  That's bad.  So let's use runtime.
>
> You say: Runtime checking is bad.  Let's use compile time, that fixes everything!

I didn't say that. You appear to have caught Walter's disease.



February 05, 2005
Unknown W. Brackets wrote:
> Matthew, this response makes it sound like you're ignoring Walter's primary argument, which you earlier stated you disagree with.
> 
> Walter says: if it's compile time, programmers will patch it without thinking.  That's bad.  So let's use runtime.
> 
> You say: Runtime checking is bad.  Let's use compile time, that fixes everything!
> 

I better stay out of this... but Matthew's last post did clarify that he was /not/ against runtime checking.  He states that quite clearly.

<Ducks away again>

- John R.