February 07, 2005
Sounds good to me.

But I suspect Walter will argue that giving the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.

"Regan Heath" <regan@netwin.co.nz> wrote in message news:opslstuwzg23k2f5@ally...
> Disclaimer: Please correct me if I have misrepresented anyone; I
> apologise in advance for doing so, it was not my intent.
>
> The following is my impression of the points/positions in this argument:
>
> 1. Catching things at compile time is better than at runtime.
>  - all parties agree
>
> 2. If it cannot be caught at compile time, then a hard failure at
> runtime  is desired.
>  - all parties agree
>
> 3. An error which causes the programmer to add code to 'shut the
> compiler  up' causes hidden bugs
>  - Walter
>
> Matthew?
>
> 4. Programmers should take responsibility for the code they add to
> 'shut the compiler up' by adding an assert/exception.
>  - Matthew
>
> Walter?
>
> 5. The language/compiler should, where it can, make it hard for the
> programmer to write bad code.
>  - Walter
>
> Matthew?
>
>
> IMO it seems to be a disagreement about what happens in the "real
> world": Matthew has an optimistic view, Walter a pessimistic view, eg.
>
> Matthew: If it were a warning, programmers would notice immediately,
> consider the error, fix it or add an assert for protection; thus the
> error would be caught immediately or at runtime.
>
> It seems to me that Matthew's position is that warning the programmer
> at compile time about the situation gives them the opportunity to fix
> it at compile time, and I agree.
>
> Walter: If it were a warning, programmers might add 'return 0;',
> causing the error to remain undetected for longer.
>
> It seems to me that Walter's position is that if it were a warning
> there is potential for the programmer to do something stupid, and I
> agree.
>
> So why can't we have both?
>
> To explore this, an imaginary situation:
>
> - Compiler detects problem.
> - Adds code to handle it (hard-fail at runtime).
> - Gives notification of the potential problem.
> - Programmer either:
>
>  a. cannot see the problem, adds code to shut the compiler up
> (causing removal of the auto hard-fail code).
>
>  b. cannot see the problem, adds an assert (hard-fail) and code to
> shut the compiler up.
>
>  c. sees the problem, fixes it.
>
> If (a), the bug could remain undetected for longer.
> If (b), the bug is caught at runtime.
> If (c), the bug is avoided.
>
> Without the notification (a) is impossible, so it seems Walter's
> position removes the worst-case scenario; BUT without the
> notification (c) is impossible, so it seems Walter's position removes
> the best-case scenario also.
>
> Of course, for any programmer who would choose (b) over (a) 'all the
> time', Matthew's position is clearly the superior one. However...
>
> The real question is: in the real world, are there more programmers
> who choose (a), as Walter imagines, or more choosing (b), as Matthew
> imagines?
>
> Those that choose (a), do they do so out of ignorance, impatience, or
> stupidity? (or some other reason)
>
> If stupidity, there is no cure for stupidity.
>
> If impatience (as Walter has suggested), what do we do? Can we do
> anything?
>
> If ignorance, then how do we teach them? Does auto-inserting the hard
> fail and giving no warning do so? Would giving the warning do a
> better/worse job?
>
> eg.
>
> "There is the potential for undefined behaviour here; an exception
> has been added automatically. Please consider the situation and
> either: A. add your own exception, or B. fix the bug."
>
> Regan
>
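Regan's options (a)-(c) above can be sketched concretely. The following is a hedged illustration in C++ (the thread is about D, whose compiler inserts the hard fail automatically; all function names here are hypothetical):

```cpp
#include <cassert>
#include <stdexcept>

// The kind of function under debate: not every path returns a value,
// so the compiler complains about a missing return.
//
//     int sign_label(int i) {
//         if (i > 0) return 1;
//         if (i < 0) return -1;
//     }   // error: missing return statement
//
// Option (a): shut the compiler up, masking the forgotten i == 0 case.
int sign_label_a(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    return 0;  // unconsidered value: silently wrong if 0 ever matters
}

// Option (b): shut the compiler up, but hard-fail if ever reached.
int sign_label_b(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    throw std::logic_error("sign_label_b: unhandled case");
}

// Option (c): see the problem and fix it deliberately.
int sign_label_c(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    return 0;  // i == 0 is now a considered, documented case
}
```

Note that (a) and (c) are textually identical; only the programmer's reasoning differs, which is exactly why the two camps disagree about what a warning would provoke.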
> On Sat, 5 Feb 2005 20:26:43 +1100, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:cu1pe6$15ks$1@digitaldaemon.com...
>>>> > 1) make it impossible to ignore situations the programmer did not think of
>>>>
>>>> So do I. So does any sane person.
>>>>
>>>> But it's a question of level, context, time. You're talking about
>>>> two measures that are small-scale, whose effects may or may not
>>>> ever be seen in a running system. If they do, they may or may not
>>>> be in a context, and at a time, which renders them useless as an
>>>> aid to improving the program.
>>>
>>> If the error is silently ignored, it will be orders of magnitude
>>> harder to find. Throwing in a return 0; to get the compiler to stop
>>> squawking is not helping.
>>
>> I'm not arguing for that!
>>
>> You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
>>
>>>> > 2) the bias is to force bugs to show themselves in an obvious manner.
>>>>
>>>> So do I.
>>>>
>>>> But this statement is too bland to be worth anything. What is "obvious"?
>>>
>>> Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
>>
>> Man oh man! Have you taken up politics?
>>
>> My problem is that you're forcing issues that can be dealt with at
>> compile time to be runtime. Your response: exceptions are the best
>> way to indicate runtime error.
>>
>> Come on.
>>
>> Q: Do you think driving on the left-hand side of the road is more or
>> less sensible than driving on the right?
>> A: When driving on the left-hand side of the road, be careful to
>> monitor junctions from the left.
>>
>>>> *Who decides* what is obvious? How does/should the bug show itself? When should the showing be done: early, or late?
>>>
>>> As early as possible. Putting in the return 0; means the showing
>>> will be late.
>>
>> Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
>>
>>>> Frankly, one might argue that the notion that the language and its
>>>> premier compiler actively work to _prevent_ the programmer from
>>>> detecting bugs at compile-time, forcing a wait of an unknowable
>>>> amount of testing (or, more horribly, deployment time) to find
>>>> them, is simply crazy.
>>>
>>> I understand your point, but for this case, I do not agree for all
>>> the reasons stated here. I.e. there are other factors at work,
>>> factors that will make the bugs harder to find, not easier, if your
>>> approach is used. It is recognition of how programmers really write
>>> code, rather than the way they are exhorted to write code.
>>
>> Disagree.
>>
>>>> But you're hamstringing 100% of all developers for the careless/unprofessional/inept few.
>>>
>>> I don't believe it is a few. It is enough that Java was forced to
>>> change things, to allow unchecked exceptions. People who look at a
>>> lot of Java code and work with a lot of Java programmers tell me it
>>> is a commonplace practice, *even* among the experts. When even the
>>> experts tend to write code that is wrong even though they know it
>>> is wrong and tell others it is wrong, that is a very strong signal
>>> that the language requirement they are dealing with is broken. I
>>> don't want to design a language of which the experts will say "do
>>> as I say, not as I do."
>>
>> Yet again, you are broad-brushing your arbitrary (or at least
>> partial) absolute decisions with a complete furphy. This is not an
>> analogy, it's a mirror with some smoke machines behind it.
>>
>>>> Will those handful % of better-employed-working-in-the-spam-industry
>>>> find no other way to screw up their systems? Is this really going
>>>> to answer all the issues attendant with a lack of
>>>> skill/learning/professionalism/adequate quality mechanisms (incl.
>>>> design reviews, code reviews, documentation, refactoring, unit
>>>> testing, system testing, etc. etc.)?
>>>
>>> D is based on my experience and that of many others on how
>>> programmers actually write code, rather than how we might wish them
>>> to. (Supporting a compiler means I see an awful lot of real world
>>> code!) D shouldn't force people to insert dead code into their
>>> source. It's tedious, it looks wrong, it's misleading, and it
>>> entices bad habits even from expert programmers.
>>
>> Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
>>
>>>> But I'm not going to argue point by point with your post, since
>>>> you lost me at "Java's exceptions". The analogy is specious, and
>>>> thus unconvincing. (Though I absolutely concur that they were a
>>>> little-tried 'good idea', like C++'s exception specifications or,
>>>> in fear of drawing unwanted venom from my friends in the C++
>>>> firmament, export.)
>>>
>>> I believe it is an apt analogy, as it shows how forcing programmers
>>> to do something unnatural leads to worse problems than it tries to
>>> solve. The best that can be said for it is "it seemed like a good
>>> idea at the time". I was at the last C++ standard committee
>>> meeting, and the topic came up of booting exception specifications
>>> out of C++ completely. The consensus was that it was now recognized
>>> as a worthless feature, but it did no harm (since it was optional),
>>> so leave it in for legacy compatibility.
>>
>> All of this is of virtually no relevance to the topic under discussion.
>>
>>> There's some growing thought that even static type checking is an
>>> emperor without clothes, that dynamic type checking (like Python
>>> does) is more robust and more productive. I'm not at all convinced
>>> of that yet <g>, but it's fun seeing the conventional wisdom being
>>> challenged. It's good for all of us.
>>
>> I'm with you there.
>>
>>>> My position is simply that compile-time error detection is better
>>>> than runtime error detection.
>>>
>>> In general, I agree with that statement. I do not agree that it is
>>> always true, especially in this case, as it is not necessarily an
>>> error. It is hypothetically an error.
>>
>> Nothing is *always* true. That's kind of one of the bases of my thesis.
>>
>>>> Now you're absolutely correct that an invalid state throwing an
>>>> exception, leading to application/system reset, is a good thing.
>>>> Absolutely. But let's be honest. All that achieves is to prevent a
>>>> bad program from continuing to function once it is established to
>>>> be bad. It doesn't make that program less bad, or help it run well
>>>> again.
>>>
>>> Oh, yes it does make it less bad! It enables the program to notify
>>> the system that it has failed, and that the backup needs to be
>>> engaged. That can make the difference between an annoyance and a
>>> catastrophe. It can help it run well again, as the error is found
>>> closer to the source of it, meaning it will be easier to reproduce,
>>> find and correct.
>>
>> Sorry, but this is totally misleading nonsense. Again, you're
>> arguing against me as if I think runtime checking is invalid or
>> useless. Nothing could be further from the truth.
>>
>> So, again, my position is: checking for an invalid state at runtime,
>> and acting on it in a non-ignorable manner, is the absolute best
>> thing one can do. Except when that error can be detected at compile
>> time.
>>
>> Please stop arguing against your demons on this, and address my
>> point. If an error can be detected at compile time, then it is a
>> mistake to detect it at runtime. Please address this specific point,
>> and stop general carping at the non-CP adherents. I'm not one of 'em.
>>
>>>> Depending on the vagaries of its operating environment, it may
>>>> well just keep going bad, in the same (hopefully very short)
>>>> amount of time, again and again and again. The system's not being
>>>> (further) corrupted, but it's not getting anything done either.
>>>
>>> One of the Mars landers went silent for a couple of days. Turns out
>>> it was a self-detected fault, which caused a reset, then the fault,
>>> then the reset, etc. This resetting did eventually allow JPL to
>>> wrest control of it back. If it had simply locked, oh well.
>>
>> Abso-bloody-lutely spot-on behaviour. What: you think I'm arguing
>> that the lander should have all its checking done at compile time
>> (as if that's even possible) and eschew runtime checking?
>>
>> At no time have I ever said such a thing.
>>
>>> On airliners, the self-detected faults trigger a dedicated circuit
>>> that disables the faulty computer and engages the backup. The last,
>>> last, last thing you want the autopilot on an airliner to do is
>>> execute a return 0; some programmer threw in to shut the compiler
>>> up. An exception thrown, shutting down the autopilot, engaging the
>>> backup, and notifying the pilot is what you'd much rather happen.
>>
>> Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.
>>
>>>> It's clear, or so it seems to me, that this issue, at least as far
>>>> as the strictures of D is concerned, is a balance between the
>>>> likelihoods of:
>>>>     1.    producing a non-violating program, and
>>>>     2.    preventing a violating program from continuing its
>>>> execution and, therefore, potentially wrecking a system.
>>>
>>> There's a very, very important additional point - that of not
>>> enticing the programmer into inserting "shut up" code to please the
>>> compiler that winds up masking a bug.
>>
>> Absolutely. But that is not, in and of itself, sufficient
>> justification for ditching compile-time detection in favour of
>> runtime detection. Yet again, we're having to swallow absolutism -
>> dare I say dogma? - instead of coming up with a solution that
>> handles all requirements to a healthy degree.
>>
>>>> You seem to be of the opinion that the current situation of
>>>> missing return/case handling (MRCH) minimises the likelihood of 2.
>>>> I agree that it does so.
>>>>
>>>> However, contrarily, I assert that D's MRCH minimises the
>>>> likelihood of producing a non-violating program in the first
>>>> place. The reasons are obvious, so I'll not go into them. (If
>>>> anyone cares to disagree, I ask you to write a non-trivial C++
>>>> program in a hurry, disable *all* warnings, and go straight to
>>>> production with it.)
>>>>
>>>> Walter, I think that you've hung D on the petard of 'absolutism in
>>>> the name of simplicity', on this and other issues. For good
>>>> reasons, you won't conscience warnings, or pragmas, or even
>>>> switch/function decorator keywords (e.g. "int allcases func(int i)
>>>> { if (i < 0) return -1; }"). Indeed, as I think most participants
>>>> will acknowledge, there are good reasons for all the decisions
>>>> made for D thus far. But there are also good reasons against
>>>> most/all of those decisions. (Except for slices. Slices are *the
>>>> best thing* ever, and coupled with auto+GC, will eventually stand
>>>> D out from all other mainstream languages. <G>)
>>>
>>> Jan Knepper came up with the slicing idea. Sheer genius!
>>
>> Truly
>>
>>>> Software engineering hasn't yet found a perfect language. D is not
>>>> perfect, and it'd be surprising to hear anyone here say that it
>>>> is. That being the case, how can the policy of absolutism be
>>>> deemed a sensible one?
>>>
>>> Now that you set yourself up, I can't resist knocking you down with
>>> "My position is simply that compile-time error detection is better
>>> than runtime error detection." :-)
>>
>> ?
>>
>> If you're trying to say that I've implied that compile-time
>> detection can handle everything, leaving nothing to be done at
>> runtime, you're either kidding, sly, or mental. I'm assuming
>> kidding, from the smiley, but it's a bit disingenuous at this level
>> of the debate, don't you think?
>>
>>>> It cannot be sanely argued that throwing on missing returns is a
>>>> perfect solution, any more than it can be argued that compiler
>>>> errors on missing returns are. That being the case, why has D made
>>>> manifest in its definition the stance that one of these positions
>>>> is indeed perfect?
>>>
>>> I don't believe it is perfect. I believe it is the best balance of
>>> competing factors.
>>
>> I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.
>>
>>>> I know the many dark roads that await once the tight control on
>>>> the language is loosened, but the real world's already here,
>>>> batting on the door. I have an open mind, and willing fingers, to
>>>> all kinds of languages. I like D a lot, and I want it to succeed a
>>>> *very great deal*. But I really cannot imagine recommending use of
>>>> D to my clients with these flaws of absolutism. (My hopeful guess
>>>> for the future is that other compiler variants will arise that
>>>> will, at least, allow warnings to detect such things at compile
>>>> time, which may alter the commercial landscape markedly; D is,
>>>> after all, full of a great many wonderful things.)
>>>
>>> I have no problem at all with somebody making a "lint" for D that
>>> will explore other ideas on checking for errors. One of the reasons
>>> the front end is open source is so that anyone can easily make such
>>> a tool.
>>
>> I'm not talking about lint. I confidently predict that the least
>> badness that will happen will be the general use of non-standard
>> compilers and the general un-use of DMD. But I realistically think
>> that D'll splinter as a result of making the same kinds of mistakes,
>> albeit for different reasons, as C++. :-(
>>
>>>> One last word: I recall a suggestion a year or so ago that would
>>>> have required the programmer to explicitly insert what is
>>>> currently inserted implicitly. This would have the compiler report
>>>> errors to me if I missed a return. It'd have the code throw errors
>>>> to you if an unexpected code path occurred. Other than screwing
>>>> over people who prize typing one less line over robustness, what's
>>>> the flaw? And yet it got no traction...
>>>
>>> Essentially, that means requiring the programmer to insert:
>>>    assert(0);
>>>    return 0;
>>
>> That is not the suggested syntax, at least not to the best of my recollection.
>>
>>> It just seems that requiring some fixed boilerplate to be inserted
>>> means that the language should do that for you. After all, that's
>>> what computers are good at!
>>
>> LOL! Well, there's no arguing with you there, eh?
>>
>> You don't want the compiler to automate the bits I want. I don't
>> want it to automate the bits you want. I suggest a way to resolve
>> this, by requiring more of the programmer - fancy that! - and you
>> discount that because it's something the compiler should do.
>>
>> Just in case anyone's missed the extreme illogic of that position,
>> I'll reiterate.
>>
>>     Camp A want behaviour X to be done automatically by the compiler.
>>     Camp B want behaviour Y to be done automatically by the compiler.
>> X and Y are incompatible, when done automatically.
>>     By having Z done manually, X and Y are moot, and everything
>> works well. (To the degree that D will, then, and only then, achieve
>> resultant robustnesses undreamt of.)
>>
>>     Walter reckons that Z should be done automatically by the
>> compiler. Matthew auto-defolicalises and goes to wibble his frimble
>> in the back drim-drim with the other nimpins.
>>
>> Less insanely, I'm keen to hear if there's any on-point response to this?
>>
>>>> [My goodness! That was way longer than I wanted. I guess we'll
>>>> still be arguing about this when the third edition of DPD's
>>>> running hot through the presses ...]
>>>
>>> I don't expect we'll agree on this anytime soon.
>>
>> Agreed
>>
>>
>
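The "fixed boilerplate" Walter describes at the end of the exchange above (`assert(0); return 0;`) can be written out in full. A hedged C++ sketch of the suggestion (in D, the equivalent hard fail is inserted by the compiler instead; the function here is hypothetical):

```cpp
#include <cassert>

// The suggestion under discussion: where the compiler would otherwise
// auto-insert a hard failure on a missing return path, require the
// programmer to spell it out.  A genuinely forgotten return then stays
// a compile-time error, while a path the programmer believes
// unreachable still fails loudly at runtime if it is ever taken.
int classify(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    // Explicitly mark the path believed to be unreachable:
    assert(0 && "unreachable: i == 0 was thought impossible");
    return 0;  // dead code, present only to satisfy the compiler
}
```

This is the "Z done manually" of Matthew's Camp A / Camp B summary: the compiler errors for those who want compile-time detection, and the assert hard-fails for those who want runtime detection.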



February 07, 2005
"Derek Parnell" <derek@psych.ward> wrote in message news:cu66qc$29vn$1@digitaldaemon.com...
> On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:
>
>> "sai" <sai_member@pathlink.com> wrote in message news:cu65me$27i4$1@digitaldaemon.com...
>>> "Anders F Björklund" says...
>>>> Okay, so -release is "intuitive" and "self-explanatory" - but
>>>> you'll have to read the docs to find out what it does? Does not
>>>> compute :-)
>>>> I find "-debug -release" to be a rather weird combination of
>>>> DFLAGS?
>>>
>>> Yes, -release means it is a release version with all contracts
>>> (including pre-conditions, invariants, post-conditions and
>>> assertions) turned off; quite self-explanatory to me!!
>>
>> I think the original issue under debate, sadly largely ignored
>> since, is whether contracts (empty clauses elided, of course) should
>> be included in a 'default' release build. I'm inexorably moving over
>> to the opinion that they should, and I was hoping for opinions from
>> people, considering the long-term desire to turn D into a major
>> player in systems engineering.
>
> I'm thinking as I write here, so I could be way off ...
>
> Isn't the idea of contracts just a mechanism to assist *coders* in
> locating bugs during testing?

Well, yes and no. Yes, in the sense of a literal interpretation of that sentence; no, in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!

As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.

> And by 'bugs', I mean behaviour that is not documented in
> the program's (business) requirements specifications. As distinct from
> runtime handling of bad data or unexpected situations.

Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing. A lot of this depends on which term one wishes to use for what concept. Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.

> If so, then by the time you build a final production version of the application, all the testing is completed.

As I said, this can never be asserted with 100% confidence.

> And thus contracts can be
> removed from the final release.

So this conclusion may not be drawn.

> However, you might keep them in for a beta
> release.

Most certainly. Again, the only reasonable exception is if a decision is made to elide them on performance grounds.

> Bad data and unexpected situations should be still addressed by
> exceptions
> and/or simple messages, designed to be read by an *end* user and not
> only
> the developers.

Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
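The trade-off being debated here - whether contract checks should survive into a release build - can be sketched with a hedged C++ analogue of D's contracts. The build flag, function, and checks below are hypothetical illustrations, not D's actual mechanism:

```cpp
#include <stdexcept>

// Analogue of building with or without D's -release flag: when
// KEEP_CONTRACTS is 0, the pre- and post-condition checks compile
// away, trading the chance of catching a latent bug for performance.
#ifndef KEEP_CONTRACTS
#define KEEP_CONTRACTS 1
#endif

int isqrt(int n) {
#if KEEP_CONTRACTS
    // in-contract: reject bad input before doing any work
    if (n < 0) throw std::invalid_argument("isqrt: n must be >= 0");
#endif
    int r = 0;
    while ((r + 1) * (r + 1) <= n) ++r;
#if KEEP_CONTRACTS
    // out-contract: r must be the floor of the square root of n
    if (!(r * r <= n && (r + 1) * (r + 1) > n))
        throw std::logic_error("isqrt: postcondition violated");
#endif
    return r;
}
```

Matthew's position above amounts to leaving KEEP_CONTRACTS at 1 by default, since testing is never complete, and turning it off only for a measured performance reason.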


February 07, 2005
On Mon, 7 Feb 2005 13:27:06 +1100, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
> Sounds good to me.
>
> But I suspect Walter will argue that given the programmer any hint of
> the problem will result in them putting something in to shut the
> compiler up.

He already has; it's the (a) option from my earlier message, the 'worst case' scenario.

I did note that you mentioned it would be possible for people to start using a "return 0;" to avoid the auto assert. I agree it's possible; I just don't think it's very likely.

> At which point I'll have to smash myself in the head with
> my laptop.

I do hope it's one of those new ones that isn't very big and/or solid, not like the one I used to have which survived being run over by a car.

Regan

> "Regan Heath" <regan@netwin.co.nz> wrote in message
> news:opslstuwzg23k2f5@ally...
>> Disclaimer: Please correct me if I have miss-represented anyone, I
>> appologise in advance for doing so, it was not my intent.
>>
>> The following is my impression of the points/positions in this
>> argument:
>>
>> 1. Catching things at compile time is better than at runtime.
>>  - all parties agree
>>
>> 2. If it cannot be caught at compile time, then a hard failure at
>> runtime  is desired.
>>  - all parties agree
>>
>> 3. An error which causes the programmer to add code to 'shut the
>> compiler  up' causes hidden bugs
>>  - Walter
>>
>> Matthew?
>>
>> 4. Programmers should take responsibilty for the code they add to
>> 'shut  the compiler up' by adding an assert/exception.
>>  - Matthew
>>
>> Walter?
>>
>> 5. The language/compiler should where it can make it hard for the
>> programmer to write bad code
>>  - Walter
>>
>> Matthew?
>>
>>
>> IMO it seems to be a disagreement about what happens in the "real
>> world",  IMO Matthew has an optimistic view, Walter a pessimistic
>> view, eg.
>>
>> Matthew: If it were a warning, programmers would notice immediately,
>> consider the error, fix it or add an assert for protection, thus the
>> error  would be caught immediately or at runtime.
>>
>> It seems to me that Matthews position is that warning the programmer
>> at  compile time about the situation gives them the opportunity to fix
>> it at  compile time, and I agree.
>>
>> Walter: If it were a warning, programmers might add 'return 0;'
>> causing  the error to remain un-detected for longer.
>>
>> It seems to me that Walters position is that if it were a warning
>> there is  potential for the programmer to do something stupid, and I
>> agree.
>>
>> So why can't we have both?
>>
>> To explore this, an imaginary situation:
>>
>> - Compiler detects problem.
>> - Adds code to handle it (hard-fail at runtime).
>> - Gives notification of the potential problem.
>> - Programmer either:
>>
>>  a. cannot see the problem, adds code to shut the compiler up.
>> (causing  removal of auto hard-fail code)
>>
>>  b. cannot see the problem, adds an assert (hard-fail) and code to
>> shut  the compiler up.
>>
>>  c. sees the problem, fixes it.
>>
>> if a then the bug could remain undetected for longer.
>> if b then the bug is caught at runtime.
>> if c then the bug is avoided.
>>
>> Without the notification (a) is impossible, so it seems Walters
>> position  removes the worst case scenario, BUT, without the
>> notification (c) is  impossible, so it seems Walters position removes
>> the best case scenario  also.
>>
>> Of course for any programmer who would choose (b) over (a) 'all the
>> time'  Matthews position is clearly the superior one, however...
>>
>> The real question is. In the real world are there more programmers who
>> choose (a), as Walter imagines, or are there more choosing (b) as
>> Matthew  imagines?
>>
>> Those that choose (a), do they do so out of ignorance, impatience, or
>> stupidity? (or some other reason)
>>
>> If stupidity, there is no cure for stupidity.
>>
>> If impatience (as Walter has suggested) what do we do, can we do
>> anything.
>>
>> If ignorance, then how do we teach them? does auto-inserting the hard
>> fail  and giving no warning do so? would giving the warning do a
>> better/worse  job?
>>
>> eg.
>>
>> "There is the potential for undefined behaviour here, an exception has
>> been added automatically please consider the situation and either: A.
>> add  your own exception or B. fix the bug."
>>
>> Regan
>>
>> On Sat, 5 Feb 2005 20:26:43 +1100, Matthew
>> <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>>> "Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message
>>>> news:cu1pe6$15ks$1@digitaldaemon.com...
>>>>> > 1) make it impossible to ignore situations the programmer did not
>>>>> > think of
>>>>>
>>>>> So do I. So does any sane person.
>>>>>
>>>>> But it's a question of level, context, time. You're talking about
>>>>> two
>>>>> measures that are small-scale, whose effects may or may not ever be
>>>>> seen
>>>>> in a running system . If they do, they may or may not be in a
>>>>> context,
>>>>> and at a time, which renders them useless as an aid to improving
>>>>> the
>>>>> program.
>>>>
>>>> If the error is silently ignored, it will be orders of magnitude
>>>> harder to
>>>> find. Throwing in a return 0; to get the compiler to stop squawking
>>>> is
>>>> not
>>>> helping.
>>>
>>> I'm not arguing for that!
>>>
>>> You have the bad habit of attributing positions to me that are either
>>> more extreme, or not representative whatsoever, in order to have
>>> something against which to argue more strongly. (You're not unique in
>>> that, of course. I'm sure I do it as well sometimes.)
>>>
>>>>> > 2) the bias is to force bugs to show themselves in an obvious
>>>>> > manner.
>>>>>
>>>>> So do I.
>>>>>
>>>>> But this statement is too bland to be worth anything. What's is
>>>>> "obvious"?
>>>>
>>>> Throwing an uncaught exception is designed to be obvious and is the
>>>> preferred method of being obvious about a runtime error.
>>>
>>> Man oh man! Have you taken up politics?
>>>
>>> My problem is that you're forcing issues that can be dealt with at
>>> compile time to be runtime. Your response: exceptions are the best
>>> way
>>> to indicate runtime error.
>>>
>>> Come on.
>>>
>>> Q: Do you think driving on the left-hand side of the road is more or
>>> less sensible than driving on the right?
>>> A: When driving on the left-hand side of the road, be careful to
>>> monitor
>>> junctions from the left.
>>>
>>>>> *Who decides* what is obvious? How does/should the bug show
>>>>> itself? When should the showing be done: early, or late?
>>>>
>>>> As early as possible. Putting in the return 0; means the showing
>>>> will
>>>> be
>>>> late.
>>>
>>> Oh? And that'd be later than the compiler preventing it from even
>>> getting to object code in the first place?
>>>
>>>>> Frankly, one might argue that the notion that the language and its
>>>>> premier compiler actively work to _prevent_ the programmer from
>>>>> detecting bugs at compile-time, forcing a wait of an unknowable
>>>>> amount
>>>>> of testing (or, more horribly, deployment time) to find them, is
>>>>> simply
>>>>> crazy.
>>>>
>>>> I understand your point, but for this case, I do not agree for all
>>>> the
>>>> reasons stated here. I.e. there are other factors at work, factors
>>>> that will
>>>> make the bugs harder to find, not easier, if your approach is used.
>>>> It
>>>> is
>>>> recognition of how programmers really write code, rather than the
>>>> way
>>>> they
>>>> are exhorted to write code.
>>>
>>> Disagree.
>>>
>>>>> But you're hamstringing 100% of all developers for the
>>>>> careless/unprofessional/inept of a few.
>>>>
>>>> I don't believe it is a few. It is enough that Java was forced to
>>>> change
>>>> things, to allow unchecked exceptions. People who look at a lot of
>>>> Java code
>>>> and work with a lot of Java programmers tell me it is a commonplace
>>>> practice, *even* among the experts. When even the experts tend to
>>>> write code
>>>> that is wrong even though they know it is wrong and tell others it
>>>> is
>>>> wrong,
>>>> is a very strong signal that the language requirement they are
>>>> dealing
>>>> with
>>>> is broken. I don't want to design a language that the experts will
>>>> say
>>>> "do
>>>> as I say, not as I do."
>>>
>>> Yet again, you are broad-brushing your arbitrary (or at least
>>> partial)
>>> absolute decisions with a complete furphy. This is not an analogy,
>>> it's
>>> a mirror with some smoke machines behind it.
>>>
>>>>> Will those handful % of
>>>>> better-employed-working-in-the-spam-industry
>>>>> find no other way to screw up their systems? Is this really going
>>>>> to
>>>>> answer all the issues attendant with a lack of
>>>>> skill/learning/professionalism/adequate quality mechanisms (incl,
>>>>> design
>>>>> reviews, code reviews, documentation, refactoring, unit testing,
>>>>> system
>>>>> testing, etc. etc. )?
>>>>
>>>> D is based on my experience and that of many others on how
>>>> programmers
>>>> actually write code, rather than how we might wish them to.
>>>> (Supporting a
>>>> compiler means I see an awful lot of real world code!) D shouldn't
>>>> force
>>>> people to insert dead code into their source. It's tedious, it looks
>>>> wrong,
>>>> it's misleading, and it entices bad habits even from expert
>>>> programmers.
>>>
>>> Sorry, but wrong again. As I mentioned in the last post, there's a
>>> mechanism for addressing both camps, yet you're still banging on with
>>> this all-or-nothing position.
>>>
>>>>> But I'm not going to argue point by point with your post, since you
>>>>> lost
>>>>> me at "Java's exceptions". The analogy is specious, and thus
>>>>> unconvincing. (Though I absolutely concur that they were a
>>>>> well-intentioned
>>>>> 'good idea', like C++'s exception specifications or, in fear of
>>>>> drawing
>>>>> unwanted venom from my friends in the C++ firmament, export.)
>>>>
>>>> I believe it is an apt analogy as it shows how forcing programmers
>>>> to
>>>> do
>>>> something unnatural leads to worse problems than it tries to solve.
>>>> The best
>>>> that can be said for it is "it seemed like a good idea at the time".
>>>> I
>>>> was
>>>> at the last C++ standard committee meeting, and the topic came up on
>>>> booting
>>>> exception specifications out of C++ completely. The consensus was
>>>> that
>>>> it
>>>> was now recognized as a worthless feature, but it did no harm (since
>>>> it was
>>>> optional), so leave it in for legacy compatibility.
>>>
>>> All of this is of virtually no relevance to the topic under
>>> discussion
>>>
>>>> There's some growing thought that even static type checking is an
>>>> emperor
>>>> without clothes, that dynamic type checking (like Python does) is
>>>> more
>>>> robust and more productive. I'm not at all convinced of that yet
>>>> <g>,
>>>> but
>>>> it's fun seeing the conventional wisdom being challenged. It's good
>>>> for all
>>>> of us.
>>>
>>> I'm with you there.
>>>
>>>>> My position is simply that compile-time error detection is better
>>>>> than
>>>>> runtime error detection.
>>>>
>>>> In general, I agree with that statement. I do not agree that it is
>>>> always
>>>> true, especially in this case, as it is not necessarily an error.
>>>> It
>>>> is
>>>> hypothetically an error.
>>>
>>> Nothing is *always* true. That's kind of one of the bases of my
>>> thesis.
>>>
>>>>> Now you're absolutely correct that an invalid state throwing an
>>>>> exception, leading to application/system reset is a good thing.
>>>>> Absolutely. But let's be honest. All that achieves is to prevent a
>>>>> bad
>>>>> program from continuing to function once it is established to be
>>>>> bad.
>>>>> It
>>>>> doesn't make that program less bad, or help it run well again.
>>>>
>>>> Oh, yes it does make it less bad! It enables the program to notify
>>>> the
>>>> system that it has failed, and the backup needs to be engaged. That
>>>> can make
>>>> the difference between an annoyance and a catastrophe. It can help
>>>> it
>>>> run
>>>> well again, as the error is found closer to the source of it,
>>>> meaning it
>>>> will be easier to reproduce, find and correct.
>>>
>>> Sorry, but this is totally misleading nonsense. Again, you're arguing
>>> against me as if I think runtime checking is invalid or useless.
>>> Nothing
>>> could be further from the truth.
>>>
>>> So, again, my position is: Checking for an invalid state at runtime,
>>> and
>>> acting on it in a non-ignorable manner, is the absolute best thing
>>> one
>>> can do. Except when that error can be detected at compile time.
>>>
>>> Please stop arguing against your demons on this, and address my
>>> point.
>>> If an error can be detected at compile time, then it is a mistake to
>>> detect it at runtime. Please address this specific point, and stop
>>> general carping at the non-CP adherents. I'm not one of 'em.
>>>
>>>>> Depending
>>>>> on the vaguaries of its operating environment, it may well just
>>>>> keep
>>>>> going bad, in the same (hopefully very short) amount of time, again
>>>>> and
>>>>> again and again. The system's not being (further) corrupted, but
>>>>> it's
>>>>> not getting anything done either.
>>>>
>>>> One of the Mars landers went silent for a couple days. Turns out it
>>>> was a
>>>> self detected fault, which caused a reset, then the fault, then the
>>>> reset,
>>>> etc. This resetting did eventually allow JPL to wrest control of it
>>>> back. If
>>>> it had simply locked, oh well.
>>>
>>> Abso-bloody-lutely spot on behaviour. What: you think I'm arguing
>>> that
>>> the lander should have all its checking done at compile time (as if
>>> that's even possible) and eschew runtime checking.
>>>
>>> At no time have I ever said such a thing.
>>>
>>>> On airliners, the self detected faults trigger a dedicated circuit
>>>> that
>>>> disables the faulty computer and engages the backup. The last, last,
>>>> last
>>>> thing you want the autopilot on an airliner to do is execute a
>>>> return
>>>> 0;
>>>> some programmer threw in to shut the compiler up. An exception
>>>> thrown,
>>>> shutting down the autopilot, engaging the backup, and notifying the
>>>> pilot is
>>>> what you'd much rather happen.
>>>
>>> Same as above. Please address my thesis, not the more conveniently
>>> down-shootable one you seem to have been addressing.
>>>
>>>>> It's clear, or seems to to me, that this issue, at least as far as
>>>>> the
>>>>> strictures of D is concerned, is a balance between the likelihoods
>>>>> of:
>>>>>     1.    producing a non-violating program, and
>>>>>     2.    preventing a violating program from continuing its
>>>>> execution
>>>>> and, therefore, potentially wrecking a system.
>>>>
>>>> There's a very, very important additional point - that of not
>>>> enticing
>>>> the
>>>> programmer into inserting "shut up" code to please the compiler that
>>>> winds
>>>> up masking a bug.
>>>
>>> Absolutely. But that is not, in and of itself, sufficient
>>> justification
>>> for ditching compile detection in favour of runtime detection. Yet
>>> again, we're having to swallow absolutism - dare I say dogma? -
>>> instead
>>> of coming up with a solution that handles all requirements to a
>>> healthy
>>> degree.
>>>
>>>>> You seem to be of the opinion that the current situation of missing
>>>>> return/case handling (MRCH) minimises the likelihood of 2. I agree
>>>>> that
>>>>> it does so.
>>>>>
>>>>> However, contrarily, I assert that D's MRCH minimises the
>>>>> likelihood
>>>>> of
>>>>> producing a non-violating program in the first place. The reasons
>>>>> are
>>>>> obvious, so I'll not go into them. (If anyone's cares to disagree,
>>>>> I
>>>>> ask
>>>>> you to write a non-trivial C++ program in a hurry, disable *all*
>>>>> warnings, and go straight to production with it.)
>>>>>
>>>>> Walter, I think that you've hung D on the petard of 'absolutism in
>>>>> the
>>>>> name of simplicity', on this and other issues. For good reasons,
>>>>> you
>>>>> won't conscience warnings, or pragmas, or even switch/function
>>>>> decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
>>>>> return -1; }"). Indeed, as I think most participants will
>>>>> acknowledge,
>>>>> there are good reasons for all the decisions made for D thus far.
>>>>> But
>>>>> there are also good reasons against most/all of those decisions.
>>>>> (Except
>>>>> for slices. Slices are *the best thing* ever, and coupled with
>>>>> auto+GC,
>>>>> will eventually stand D out from all other mainstream
>>>>> languages.<G>).
>>>>
>>>> Jan Knepper came up with the slicing idea. Sheer genius!
>>>
>>> Truly
>>>
>>>>> Software engineering hasn't yet found a perfect language. D is not
>>>>> perfect, and it'd be surprising to hear anyone here say that it is.
>>>>> That
>>>>> being the case, how can the policy of absolutism be deemed a
>>>>> sensible
>>>>> one?
>>>>
>>>> Now that you set yourself up, I can't resist knocking you down with
>>>> "My
>>>> position is simply that compile-time error detection is better than
>>>> runtime
>>>> error detection." :-)
>>>
>>> ?
>>>
>>> If you're trying to say that I've implied that compile-time detection
>>> can handle everything, leaving nothing to be done at runtime, you're
>>> either kidding, sly, or mental. I'm assuming kidding, from the
>>> smiley,
>>> but it's a bit disingenuous at this level of the debate, don't you
>>> think?
>>>
>>>>> It cannot be sanely argued that throwing on missing returns is a
>>>>> perfect
>>>>> solution, any more than it can be argued that compiler errors on
>>>>> missing
>>>>> returns is. That being the case, why has D made manifest in its
>>>>> definition the stance that one of these positions is indeed
>>>>> perfect?
>>>>
>>>> I don't believe it is perfect. I believe it is the best balance of
>>>> competing
>>>> factors.
>>>
>>> I know you do. We all know that you do. It's just that many disagree
>>> that it is. That's one of the problems.
>>>
>>>>> I know the many dark roads that await once the tight control on the
>>>>> language is loosened, but the real world's already here, batting on
>>>>> the
>>>>> door. I have an open mind, and willing fingers to all kinds of
>>>>> languages. I like D a lot, and I want it to succeed a *very great
>>>>> deal*.
>>>>> But I really cannot imagine recommending use of D to my clients
>>>>> with
>>>>> these flaws of absolutism. (My hopeful guess for the future is that
>>>>> other compiler variants will arise that will, at least, allow
>>>>> warnings
>>>>> to detect such things at compile time, which may alter the
>>>>> commercial
>>>>> landscape markedly; D is, after all, full of a great many wonderful
>>>>> things.)
>>>>
>>>> I have no problem at all with somebody making a "lint" for D that
>>>> will
>>>> explore other ideas on checking for errors. One of the reasons the
>>>> front end
>>>> is open source is so that anyone can easily make such a tool.
>>>
>>> I'm not talking about lint. I confidently predict that the least
>>> badness
>>> that will happen will be the general use of non-standard compilers
>>> and
>>> the general un-use of DMD. But I realistically think that D'll
>>> splinter
>>> as a result of making the same kinds of mistakes, albeit for
>>> different
>>> reasons, as C++. :-(
>>>
>>>>> One last word: I recall a suggestion a year or so ago that would
>>>>> required the programmer to explicitly insert what is currently
>>>>> inserted
>>>>> implicitly. This would have the compiler report errors to me if I
>>>>> missed
>>>>> a return. It'd have the code throw errors to you if an unexpected
>>>>> code
>>>>> path occured. Other than screwing over people who prize typing one
>>>>> less
>>>>> line over robustness, what's the flaw? And yet it got no traction
>>>>> ....
>>>>
>>>> Essentially, that means requiring the programmer to insert:
>>>>    assert(0);
>>>>    return 0;
>>>
>>> That is not the suggested syntax, at least not to the best of my
>>> recollection.
>>>
>>>> It just seems that requiring some fixed boilerplate to be inserted
>>>> means
>>>> that the language should do that for you. After all, that's what
>>>> computers
>>>> are good at!
>>>
>>> LOL! Well, there's no arguing with you there, eh?
>>>
>>> You don't want the compiler to automate the bits I want. I don't want
>>> it
>>> to automate the bits you want. I suggest a way to resolve this, by
>>> requiring more of the programmer - fancy that! - and you discount
>>> that
>>> because it's something the compiler should do.
>>>
>>> Just in case anyone's missed the extreme illogic of that position,
>>> I'll
>>> reiterate.
>>>
>>>     Camp A want behaviour X to be done automatically by the compiler
>>>     Camp B want behaviour Y to be done automatically by the compiler.
>>> X
>>> and Y are incompatible, when done automatically.
>>>     By having Z done manually, X and Y are moot, and everything works
>>> well. (To the degree that D will, then, and only then, achieve
>>> resultant
>>> robustnesses undreamt of.)
>>>
>>>     Walter reckons that Z should be done automatically by the
>>> compiler.
>>> Matthew auto-defolicalises and goes to wibble his frimble in the back
>>> drim-drim with the other nimpins.
>>>
>>> Less insanely, I'm keen to hear if there's any on-point response to
>>> this?
>>>
>>>>> [My goodness! That was way longer than I wanted. I guess we'll
>>>>> still
>>>>> be
>>>>> arguing about this when the third edition of DPD's running hot
>>>>> through
>>>>> the presses ...]
>>>>
>>>> I don't expect we'll agree on this anytime soon.
>>>
>>> Agreed
>>>
>>>
>>
>
>
>
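The "return 0; some programmer threw in to shut the compiler up" pattern under debate can be made concrete. A minimal sketch follows, in C++ rather than D purely for illustration; the function names are hypothetical. The first function shows the silent fix that masks a logic error; the second shows the fail-fast alternative, where the supposedly unreachable path traps immediately (roughly what D's implicit behaviour does at runtime):

```cpp
#include <stdexcept>

// The "shut the compiler up" fix: a dead return added only to silence a
// missing-return diagnostic. If the unreachable path is ever taken, a
// bogus value silently propagates and the bug goes undetected.
int classify_quiet(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    return 0;  // dead code inserted to placate the compiler
}

// The fail-fast fix: every intended path returns explicitly, and the
// path the author believes unreachable throws, so a logic error surfaces
// at the point of failure instead of corrupting downstream state.
int classify_loud(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    if (i == 0) return 0;
    throw std::logic_error("unreachable path taken");  // analogous to D's implicit trap
}
```

For valid logic the trap never fires; its only job is to turn a hidden wrong answer into a loud, reproducible failure.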

February 07, 2005
I think it's getting clear that we're going to need fine grained control of what stays in a final production build, and what does not.

I also think that we should probably either:
    1. make "-release" mean "*all* CP stuff stays in", or
    2. make "-release" mean "*all* CP stuff is elided".

Any halfway house is just likely to lead to confusion.

I, personally, think that 2 is a bad thing, but I strongly suspect there'll be objections to 1, for obverse reasons.

Maybe the answer might be to drop "-release" entirely. Can someone with a more detailed understanding than me describe what this might entail, i.e. what the diff between "-debug" and "" might be?

Vaguely, yours

Matthew

"Anders F Björklund" <afb@algonet.se> wrote in message news:cu66t7$29pc$1@digitaldaemon.com...
> Matthew wrote:
>
>> I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
>
> I just don't think contracts have anything to do with release vs. debug ?
>
> For instance, I had to build a non-release version of the Phobos lib
> just to make it check the runtime contracts in my own debugging
> builds.
> I thought that pure debugging code was to be put in debug {} blocks ?
> And that the contracts *could* remain, even in released versions...
>
> Array-bounds and switch-default are probably OK to strip for release. Maybe stripping asserts and contracts in release builds is standard procedure, but it would be more straight-forward if called -contract ? (which could be a "subflag" that is triggered to 0 by -release, but)
>
> Or maybe I am just mixing up exceptions versus contracts, as usual... http://research.remobjects.com/blogs/mh/archive/2005/01/11/232.aspx
>
> Even so, having a libphobos-debug.a version has helped me catch a few. (i.e. for debugging builds I use -lphobos-debug, -lphobos for release)
>
> --anders
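The distinction Anders draws, between pure debugging code in debug {} blocks and contracts that *could* remain in a released build, has a rough C++ analogue, sketched below. The `DEBUG_TRACE` macro and `midpoint` function are hypothetical; the point is only that the two kinds of checking code have different lifetimes:

```cpp
#include <cassert>
#include <iostream>

// Contract-style check vs. debug{}-style code, in C++ terms.
int midpoint(int lo, int hi) {
    // Contract-style check: a precondition that could reasonably survive
    // into a release build (in C++ it is stripped by defining NDEBUG;
    // in D, by -release).
    assert(lo <= hi);

#ifdef DEBUG_TRACE
    // debug{}-style code: trace output useful only to developers,
    // stripped from every shipped build regardless of contract policy.
    std::cerr << "midpoint(" << lo << ", " << hi << ")\n";
#endif

    return lo + (hi - lo) / 2;
}
```

Keeping the two categories under separate switches is exactly what would let a "-contract" subflag make sense independently of -release.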


February 07, 2005
Matthew wrote:
> Sounds good to me.
> 
> But I suspect Walter will argue that given the programmer any hint of
> the problem will result in them putting something in to shut the
> compiler up. At which point I'll have to smash myself in the head with my laptop.
> 

If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)

February 07, 2005
"John Reimer" <brk_6502@yahoo.com> wrote in message news:cu6mt4$785$1@digitaldaemon.com...
> Matthew wrote:
>> Sounds good to me.
>>
>> But I suspect Walter will argue that given the programmer any hint of the problem will result in them putting something in to shut the compiler up. At which point I'll have to smash myself in the head with my laptop.
>>
>
> If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)

Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the desk, with its miserable little broken hinge.



February 07, 2005
On Mon, 7 Feb 2005 13:36:48 +1100, Matthew wrote:

> "Derek Parnell" <derek@psych.ward> wrote in message news:cu66qc$29vn$1@digitaldaemon.com...
>> On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:
>>
>>> "sai" <sai_member@pathlink.com> wrote in message news:cu65me$27i4$1@digitaldaemon.com...
>>>> Anders F Björklund says...
>>>>>Okay, so -release is "intuitive" and "self-explanatory" - but you'll
>>>>>have to read the docs to find out what it does ? Does not compute
>>>>>:-)
>>>>>I find "-debug -release" to be a rather weird combination of DFLAGS
>>>>>?
>>>>
>>>> Yes, -release means ..... it is a release version with all contracts
>>>> (including
>>>> pre-conditions, invariants, post-conditions and assertions) etc etc
>>>> turned off,
>>>> quite self-explanatory to me !!
>>>
>>> I think the original issue under debate, sadly largely ignored since,
>>> is
>>> whether contracts (empty clauses elided, of course), should be
>>> included
>>> in a 'default' release build. I'm inexorably moving over to the
>>> opinion
>>> that they should, and I was hoping for opinions from people,
>>> considering
>>> the long-term desire to turn D into a major player in systems
>>> engineering
>>
>> I'm thinking as I write here, so I could be way off ...
>>
>> Isn't the idea of contracts just a mechanism to assist *coders* locate
>> bugs
>> during testing.
> 
> Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!

Of course. In the same sense that nothing is ever perfect. By 'testing' I was referring to the formal development process. And I was thinking more about *who* was doing the testing (as a formal process). The contract code, as I see it, is designed to interact with a developer and not an end user.

> 
> As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.

'stay in' what? The executable shipped to the end user? Well of course you could, as in the end, it's really a matter of style. I'm using the model that says that "contract code" is that portion of the source code that is only examining stuff so that it can detect specification errors. Other sorts of errors, such as bad data, and such as (illogical?) situations that *have not been specified*, are being tested by different portions of source code at run-time. So it's just a matter of definition, I guess.

I'm just segregating the types of errors being tested based on who will be getting the messages about said errors. "Contract" code assumes its audience for its messages is the development team, "Other Error Testing" code assumes its audience for its messages is both development people *and* end users.


"Contract" code checks for bad output using good input, bad input caused by coding errors (i.e. not user entered data), illogical process flows, etc...

"Other Error Testing" code checks for bad inputs, bad environments (eg.
missing files), temporal anomalies (eg. a file which was open, is suddenly
found to be closed), etc...

>> And by 'bugs', I mean behaviour that is not documented in
>> the program's (business) requirements specifications. As distinct from
>> runtime handling of bad data or unexpected situations.
> 
> Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing.

Sorry. They are two (of many) distinct classes of errors. 'bad data' is one type of error. 'unexpected situations' is another type of error.


> A lot of this depends on which term one wishes to use for what concept. Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.

I meant 'unexpected' in the sense that it is a situation that was not documented in the requirements specification, but happened anyway. It could be seen as a bug in the spec, rather than the code.

>> If so, then by the time you build a final production version of the application, all the testing is completed.
> 
> As I said, this can never be asserted with 100% confidence.

Again it's a definition thing. "testing is completed" means that the formal testing process for release candidate X is completed and the source code for that candidate is frozen. A production build is produced from that and a 'gold disk' created for the marketing/sales group.

Of course, the test builds for the code still exist, but they are only used in house and by beta testers.

But yes, I agree that end users are also involuntary gamma testers ;-)

>> And thus contracts can be
>> removed from the final release.
> 
> So this conclusion may not be drawn.
> 
>> However, you might keep them in for a beta
>> release.
> 
> Most certainly. Again, only if a decision is made on performance grounds.
> 
>> Bad data and unexpected situations should be still addressed by
>> exceptions
>> and/or simple messages, designed to be read by an *end* user and not
>> only
>> the developers.
> 
> Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.

I'm thinking about the *cause* of invariant violations. When caused by coding errors, then they should be tested for by contract code. When caused by inputting bad data, then they should be handled by non-contract testing code.

I say this, just because I can conceive that some testing code is not suitable for shipping to unsuspecting customers, and should really just be handled in-house. Such code needs to be removed from production versions and the DMD -release switch is the current mechanism for doing that.

-- 
Derek
Melbourne, Australia
7/02/2005 1:51:12 PM
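Derek's split, "Contract" code whose audience is the development team versus "Other Error Testing" code whose audience includes end users, can be sketched as follows. The example is in C++ rather than D purely for illustration, and all names (`parse_percentage`, `scale`) are hypothetical:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// "Other Error Testing": validates external input (user data, files,
// environment). Always compiled in, and its failure messages are meant
// to be seen by end users as well as developers.
int parse_percentage(const std::string& s) {
    int value = std::stoi(s);                       // throws on non-numeric data
    if (value < 0 || value > 100)
        throw std::out_of_range("percentage must be 0..100");
    return value;
}

// "Contract" code: checks for coding errors on already-validated input.
// Its audience is the development team only, so it may be compiled out
// of a production build (NDEBUG in C++, -release in D).
int scale(int percentage, int max) {
    assert(percentage >= 0 && percentage <= 100);   // caller's bug if violated
    assert(max >= 0);
    return max * percentage / 100;
}
```

Under this model, stripping the asserts for release loses nothing the end user was ever meant to see, while the input validation keeps protecting them.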
February 07, 2005
"Derek Parnell" <derek@psych.ward> wrote in message news:cu6835$2cmj$1@digitaldaemon.com...
> It appears to me then that you now have DMD informing the pilot at 30,000 feet that the lines are crossed and the system is shutting down, rather than letting maintenance people on the ground know before the plane takes off.

Actually, the point of that story was that one cannot assume away "bad" programmers, one must assume their existence and design to prevent errors or mitigate the damage they can cause. I view the compiler error in this case as akin to "Hey, hydraulic fluid was leaking. A couple of the lines weren't hooked up, so I just screwed them into a couple of ports nearby. It doesn't leak anymore!" It's a mistake to assume that the mechanic will go read the documentation and hook them to the correct port. Sooner or later, some mechanic will do the easiest "fix" possible, so it's very, very important to design so that the easiest fix is the correct one.

The solution you use, while very correct, is not the easiest one. I wish all programmers were as careful as you obviously are.


February 07, 2005
"Regan Heath" <regan@netwin.co.nz> wrote in message news:opslsuv5qc23k2f5@ally...
> AOP is cool, I wish it was possible to use it in D.

I looked at Kris' reference, but AOP is one of those things I don't understand at all.


February 07, 2005
In article <w34t3lnducuh$.1cbudn9r87umu.dlg@40tude.net>, Derek says...
>
>On Sat, 5 Feb 2005 17:46:37 -0800, Walter wrote:
>
>> What you're advocating sounds very much like how compile time warnings work in typical C/C++ compilers. Is this what you mean?
>
>Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching', whatever... messages, I don't care. I don't care what you call the messages. I am asking for better (useful, helpful, detailed) information to be passed from the compiler to the coder. As I know you have some deep-seated hang-up with the concept of 'warning message', what say we call them Transitory Information/Problem Status messages (TIPS for short).
>
>Secondly, we are only talking about two or three distinct situations, not all the hundreds of possible constructs out there. Currently DMD *already* takes special action in these situations, so its not a big difference. DMD already has all the information at its fingertips, so to speak, all it needs to do is pass this information on to the coder.
>
>If the coder decides to ignore them, or adds stupid code, or tells DMD to shut up, then there is nothing more you can do. It's not your fault! It's okay, really. You did your best to help. In the long run, one cannot protect oneself, or others, from idiots. A fool-proof system just causes the universe to come up with a better class of fool.
>
>-- 
>Derek
>Melbourne, Australia

A statement about inserting code could be made in verbose mode (-v) since that is a flag to the compiler to get all the details about what it is doing. Non-verbose mode should be... non-verbose.