April 12, 2005
"Georg Wrede" <georg.wrede@nospam.org> wrote in message news:425C57E3.7050409@nospam.org...
> Matthew wrote:
>> Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is.
>>
>> It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard-work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
>
> Your metaphor is just riveting!

Woof!

> Seriously, I get the feeling that you're not getting through, not at least with the current tack.

Ya think?!? :-)

> Folks have a hard time understanding that the entire application should be shut down just because a single contract has failed. Especially if we are talking about plugins. After all, the same plugin may have other bugs, etc., and in those cases the mandatory shut-down never becomes an issue as long as no contract traps fire.

I know. I had this same feeling when I started getting into it. FTR: it was a combination of discussions with Walter, reading "The Pragmatic Programmer" and inductive reasoning that got me to this point. But now I'm here I can't go back, in part because no-one's ever offered an answer to what can be expected of software once it's reached an invalid state.

> As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong (and whose fault it was!) that this simply forces the bug to be fixed in "no time".

It's twofold. The philosophical is "why do you want software to act outside its design, and what good do you think can come of that?" The practical is that this methodology acts like a knife through butter when cutting out bugs. In every application in which I've strictly enforced irrecoverability I've seen significant reductions in the time it takes to get stable. It's almost insanely effective. (To experiment, try, for two weeks, pressing Abort every time you're presented with Abort, Retry, Ignore.)

> We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software.

Amen to all of the above.

> But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.

Indeed. This is one of those where I shall have to think. Instinct tells me strict-CP will win out, but I need to think about it. :)

> This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and especially the pointy haired bosses, have a hard time becoming motivated to do things Right.

True. When I plugged in the irrecoverability to my client's comms systems, the technical manager - a very smart cookie with very wide experience - really cacked himself, and instructed me not to tell the management anything about it. This was during dev/pre-system testing. He and the prime engineer turned round within a day: they thought it was marvellous that the processes would detect design violations, tell you what and where the problem was and stop dead, and that I'd have them fixed and up and running on to the next one within minutes. When you're dealing with several multi-threaded comms processes, involving different comms protocols, such behaviour was previously unheard of: to them and to me.

We still didn't tell the management that we were using this methodology for a considerable time, however, not until it was working without problems for a couple of weeks. <g>



April 12, 2005
"Regan Heath" <regan@netwin.co.nz> wrote in message news:opso47vqdb23k2f5@nrage.netwin.co.nz...
> On Wed, 13 Apr 2005 02:21:07 +0300, Georg Wrede <georg.wrede@nospam.org>  wrote:
>> Matthew wrote:
>>> Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is.
>>>
>>> It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard-work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
>>
>> Your metaphor is just riveting!
>>
>> Seriously, I get the feeling that you're not getting through, not at least with the current tack.
>>
>> Folks have a hard time understanding that the entire application should be shut down just because a single contract has failed. Especially if we are talking about plugins. After all, the same plugin may have other bugs, etc., and in those cases the mandatory shut-down never becomes an issue as long as no contract traps fire.
>>
>> As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong (and whose fault it was!) that this simply forces the bug to be fixed in "no time".
>>
>> We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software.
>>
>> But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.
>
> Or they, like you, Georg, can see how it happens in the real world, despite it not being "Right".

I take that point, and am in sympathy with it (practically, not in principle).

The answer to "the real world" is how effective the methodology is when used. I've been involved with all kinds of different approaches over the years, and I'm telling you I've seen nothing anywhere near as effective as Informative Zero Tolerance (IZT - did I just invent a new acronym? <g>) for producing good code fast. (Informative because it tells you as soon as possible what is wrong and where it is.)
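The IZT idea reduces to very little code. Here is a minimal sketch, in Python rather than D purely for illustration; the `enforce` helper, the `withdraw` example, and the "bank.d(42)" location string are all invented for this sketch (`SystemExit` stands in for a true process halt, which in a strict-CP build would be something uncatchable like `os.abort()`):

```python
import sys

def enforce(condition, message, where):
    """Contract check: report what went wrong and where, then halt.

    SystemExit keeps this sketch testable; a production strict-CP
    runtime would use an uncatchable halt so no handler gets a say.
    """
    if not condition:
        print(f"{where}: VIOLATION: {message}", file=sys.stderr)
        raise SystemExit(1)

def withdraw(balance, amount):
    # Hypothetical precondition: never withdraw more than the balance.
    enforce(amount <= balance, "amount exceeds balance", "bank.d(42)")
    return balance - amount
```

The "informative" half is the report format: the process tells you the site and the violated condition at the moment of failure, rather than crashing somewhere downstream.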

>> This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and  especially the pointy haired bosses, have a hard time becoming motivated  to do things Right.
>
> Amen. (to Bob)
>
> The real world so often intrudes on purity of design. I can understand Matthew's position, where he's coming from. For the most part I agree with his points/concerns. I just don't think it's the right thing for D to enforce; it's not flexible enough for real world situations. Perhaps Matthew is right, and we should beat the world into submission, but I think a better tack is to subvert it slowly to our design. You don't throw a frog into boiling water, it will jump out; instead you heat it slowly.

Well, politically, I can agree with that. For me, then, the slow boiling is the "-no-cp-violations" flag (or absence of the -debug flag, if you will). If we do not have irrecoverable support for the contract violation class(es) within D, then there's no way to ever get it to boiling point. It'll just be warm.




April 13, 2005
"Regan Heath" <regan@netwin.co.nz> wrote in message news:opso47kto123k2f5@nrage.netwin.co.nz...
> On Wed, 13 Apr 2005 09:04:15 +1000, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>> "Regan Heath" <regan@netwin.co.nz> wrote in message news:opso45k5x523k2f5@nrage.netwin.co.nz...
>>> On Wed, 13 Apr 2005 08:32:05 +1000, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>>> Let's turn it round:
>>>>     1. Why do you want to use a software component contrary to
>>>> its  design?
>>>
>>> I dont.
>>>
>>>>     2. What do you expect it to do for you in that
>>>> circumstance?
>>>
>>> Nothing.
>>>
>>> I expect to be able to disable/stop using a component that fails (in  whatever fashion) and continue with my primary purpose whatever that  happens to be.
>>
>> But you can't, don't you see? Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
>
> The module/plugin won't execute a single instruction past the point at which it violates its design. It will be killed. My program hasn't violated its design at all.

Without executing any instructions beyond that point, how does it (or we) know it's invalid?

> The only problem I can see occurs when the module/plugin corrupts my program's memory space.

Which it may have done before anyone, including it, knows it's invalid.

>>> Under what circumstances do you see the goal above to be impossible?
>>
>> It is theoretically impossible in all circumstances. (FTR: We're only talking about contract violations here. I'm going to keep saying that, just to be clear)
>
> Can you give me an example of the sort of contract violation you're referring to? I'm seeing...
>
> class Foo {
>   int a;
>
>   invariant {
>     assert(a == 5);  // contract violation
>   }
> }
>
> which can be caused by any number of things:
>  - buggy algorithm
>  - unexpected input, without an assertion
>  - memory corruption

Alas, I'm really not smart enough to work on partial examples. Can you flesh out a small but complete example which will demonstrate what you're after, and I'll do my best to prove my case on it?
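One way to flesh out Regan's fragment, transliterated into Python for illustration (the class, the `frob` method, and its bug are all invented), is a complete example of the very point under dispute: a buggy algorithm trips the invariant, and swallowing the violation leaves the object in a state its design never allowed:

```python
class Foo:
    """The D fragment fleshed out: the invariant requires a == 5."""

    def __init__(self):
        self.a = 5

    def _invariant(self):
        assert self.a == 5, "contract violation"

    def frob(self, delta):
        self._invariant()   # object valid on entry
        self.a += delta     # buggy algorithm: nothing restores a to 5
        self._invariant()   # fires on exit once a != 5

foo = Foo()
swallowed = False
try:
    foo.frob(2)
except AssertionError:
    swallowed = True        # "recovering" is precisely what is in dispute

print(swallowed, foo.a)     # True 7 -- the object is outside its design
```

Whether the cause is a buggy algorithm, unchecked input, or memory corruption, the post-catch state is the same: `foo` no longer satisfies its own invariant.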

>> And therefore if the application is written to attempt recovery of invalid behaviour it is going to get you. One day you'll lose a very important piece of user data, or contents of your harddrive will be scrambled.
>
> I don't want the plugin/module that has asserted to continue, I want it to  die. I want my main program to continue, the only situation I can see  where this is likely to cause a problem is memory corruption and in that  case, yes it's possible it will have the effects you describe.

I agree that that's the desirable situation. Alas, it's impossible (in principle; as I've said, it's possible some Heisenbergian proportion of the time in practice).

> As you say above, the probability is very small, thus each application  needs to make the decision about whether to continue or not, for some the  risk might be acceptable, for others it might not.

Yeah, it sounds persuasive. But that'd only be valid if an application were to pop a dialog that said:

    "The third-party component ReganGroovy.dll has encountered a
condition outside the bounds of its design, and cannot be used
further. You are strongly advised to shut down the application
immediately to ensure your work is not lost. If you do not follow
this advice there is a non-negligible chance of deleterious effects,
ranging from the loss of your unsaved work or deletion of the
file(s) you are working with, to your system being rendered
inoperable or damage to the integrity of your corporate network. Do
you wish to continue?"

Now *maybe* if that was the case, then the programmers of that application can argue that using the "-no-cp-violations" flag is valid. But can you see a manager agreeing to that message?

Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away. (But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)

>>> However, how can we  detect that situation?
>>
>> We cannot. The only person that's got the faintest chance of specifying the conditions for which it's not going to happen (at least not by design <g>) is the author of the particular piece of code. And the way they specify it is to reify the contracts of their code in CP constructs: assertions, invariants, etc.
>>
>> The duty of the programmer of the application that hosts that code is to acknowledge the _fact_ that their application is now in an invalid state and the _likelihood_ that something bad will happen, and to shut down in as timely and graceful a manner as possible.
>
> If that is what they want to do, they could equally decide the risk was  small (as it is) and continue.

Who decides? The programmer, or the user?

>> Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
>
> I'd prefer an optional library solution, for reasons expressed in this post/thread.

So would I, but D does not have the mechanisms to support that, so it needs language support.

>>> further how can I even be sure my program is going to terminate how I intend/want it to; more likely it crashes somewhere random.
>>
>> If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
>
> It _might_ crash. If the assertion was due to memory corruption, and even then it might be localised to the module in which the assertion was raised; if so it has no effect on the main program.

Indeed, it might. In many cases it will. But you'll never know for sure. It might have corrupted your stack such that the next file you open is C:\boot.ini, and the next time you reboot your machine it doesn't start. If the programmer makes that decision for an uninformed user, they deserve to be sued, IMO.

>> , and in practice it is likely to do so an uncomfortable/unacceptable proportion of the time.
>
> unacceptable to whom? you? me? the programmer of application X 10 years in  the future?

The user, of course. The end victim of all such invalid behaviour is the user, whether it's a bank losing millions of dollars because the comms services sent messages the wrong way, or Joe Image the graphic designer who's lost 14 hours of work 2 hours before he has to present it to his major client, who'll terminate his contract and put his company under.

> I think you stand a very good chance of annoying the hell out of a future program author by forcing him/her into a design methodology that they do not aspire to, whether it's correct or not.
>
> For the record, I do agree failing hard and fast is usually the best practice. I just don't believe people should be forced into it, all the time.

Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?" and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?"

There's simply no getting away from the first, and many good reasons to embrace the second.


April 13, 2005
"Matthew" <admin@stlsoft.dot.dot.dot.dot.org> wrote in message news:d3hlb1$dqa$1@digitaldaemon.com...
>>> Matthew: Nonetheless, I do have serious doubts that irrecoverability will be incorporated into D, since Walter tends to favour "good enough" solutions rather than aiming for strict/theoretical best, and because the principles and advantages of irrecoverability are not yet sufficiently mainstream. It's a pity though, because it'd really lift D's head above its peers. (And without it, there'll be another area in which C++ will continue to be able to claim supremacy, because D cannot support it in library form.)
>>
>> Ben: I think Walter has made the right choices - except the hierarchy has gotten out of whack. Robust, fault-tolerant software is easier to write with D than C++.
>
> Bold statement. Can you back it up?
>
> I'm not just being a prick, I am genuinely interested in why people vaunt this sentiment so readily. In my personal experience I encounter bugs in the code *far* more in D than I do in C++. Now of course that's at least in part because I've done a lot of C++ over the last 10+ years, but that being the case does not, in and of itself, act as a supportive argument for your proposition. (FYI: I don't make the same kinds of bugs in C# or in Python or in Ruby, and I'm less experienced in those languages than I am in D)

ok.

> Take a couple of cases:
>
> 1. D doesn't have pointers. Sounds great. Except that one can get null-pointer violations when attempting value comparisons. I've had those a fair amount in D. Not had such a thing in C++ in as long as I can remember. C++ cleanly delineates between references (as aliases for instances) and pointers. When I type x == y in C++ I _know_ - Machiavelli wielding int &x=*(int*)0; aside - that I'm not going to have an access violation. I do _not_ know that in D.

D does have pointers. But if you want to ignore that wrinkle most pointer errors are due to dangling pointers (and those are squashed by garbage collection). A null pointer violation is the easiest pointer error to debug IMO. In terms of == you are comparing apples and oranges since you well know using == on object references in D is very different than pointer ==. I've chased plenty of dangling pointers and one of the joys of using Java (and I hope D) is not having to worry about that anymore.

> 2. Take the current debate about irrecoverability. As I've said I've been using irrecoverable CP in the real world in a pretty high-stakes project - lots of AU$ to be lost! - over the last year, and its effect has been to save time and increase robustness, to a surprising (including to me) degree: system testing/production had only two bugs. One was diagnosed within minutes of a halted process with "file(line): VIOLATION: <details here>", and was fixed and running in less than two hours. The other took about a week of incredibly hard debugging, walk throughs, arguments, and rants and raves, because, ironically, I'd _not_ added some contract enforcements I'd deemed never-going-to-happen!!

I don't see what a missing assert has to do with recoverable vs. irrecoverable. Or did you have an assert that was swallowed? I can't tell.

> So, unless and until I hear from people with _practical experience_ of these techniques that they've had bad experiences - and the only things I read about from people such as the Pragmatic Programmers is in line with my experience - I cannot be anything but convinced of their power to increase robustness and aid development and testing effectiveness and efficiency.

Hard failures during debugging is fine. My own experience in code robustness comes from working with engineering companies (who use MATLAB to generate code for cars and planes) where an unrecoverable error means your car shuts down when you are doing 65 on the highway. Or imagine if that happens with an airplane. That is not acceptable. They have drilled over and over into our heads that a catastrophic error means people die. I don't mean to be overly dramatic but it is a fact.

With the current D AssertError subclassing Error I agree it is too easy to catch assertion failures. That is why in my proposal AssertionFailure subclasses Object directly. When a newbie is tempted to be lazy and swallow errors without thinking they will most likely swallow Exception. Only the truly unfortunate will think "oh - I can catch Object and swallow OutOfMemory and AssertionFailure, too!" My experience with Java is that newbies catch Exception and not Throwable or Error.
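Ben's "subclass Object directly" idea has a direct analogue in other languages. A Python sketch (the class and `checked` function are invented for illustration): put the assertion type outside the everyday `Exception` hierarchy, so the lazy blanket handler cannot swallow it, while a deliberate outer handler still can:

```python
class AssertionFailure(BaseException):
    """Deliberately outside Exception -- like subclassing Object directly."""

def checked(value):
    # Hypothetical precondition: value must be non-negative.
    if value < 0:
        raise AssertionFailure("precondition violated: value >= 0")
    return value

slipped_past = False
try:
    try:
        checked(-1)
    except Exception:        # the lazy newbie handler
        pass                 # never runs: AssertionFailure is no Exception
except AssertionFailure:
    slipped_past = True      # only a deliberate outer handler sees it

print(slipped_past)          # True
```

This mirrors Ben's Java observation: newbies catch `Exception`, not `Throwable`, so placing contract failures outside that hierarchy makes accidental swallowing much less likely without making it impossible.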

> Now C++ has deterministic destruction, which means I was easily able to create an unrecoverable exception type - found in <stlsoft/unrecoverable.hpp> for anyone interested - to get the behaviour I need. D does not support deterministic destruction of thrown exceptions, so it is not possible to provide irrecoverability in D.
>
> Score 0 for 2. And so it might go on.

I'd score 2 for 0.

> Naturally, this is my perspective, and I don't seek to imply that my perspective is any more absolute than anyone else's. But that being the case, such blanket statements about D's superiority, when used as a palliative in debates concerning improvements to D, are not worth very much.
>
> I'm keen to hear from others from all backgrounds, including C++, their take on Ben's statement (with accompanying rationale, of course).
>
> Cheers
>
> Matthew
>
> 


April 13, 2005
>> Take a couple of cases:
>>
>> 1. D doesn't have pointers. Sounds great. Except that one can get null-pointer violations when attempting value comparisons. I've had those a fair amount in D. Not had such a thing in C++ in as long as I can remember. C++ cleanly delineates between references (as aliases for instances) and pointers. When I type x == y in C++ I _know_ - Machiavelli wielding int &x=*(int*)0; aside - that I'm not going to have an access violation. I do _not_ know that in D.
>
> D does have pointers. But if you want to ignore that wrinkle most pointer errors are due to dangling pointers (and those are squashed by garbage collection).

Indeed. Don't know why I said it like that.

>A null pointer violation is the easiest pointer error to debug IMO. In terms of == you are comparing apples and oranges since you well know using == on object references in D is very different than pointer ==.

I'm talking about the value comparison of references. C++ has references as aliases, which cannot be NULL unless someone's done something deliberately wrong. D has faux references, which are really just pointers with a different syntax, and which can be null. That's the point I was making.

>I've chased plenty of dangling pointers and one of the joys of using Java (and I hope D) is not having to worry about that anymore.
>
>> 2. Take the current debate about irrecoverability. As I've said I've been using irrecoverable CP in the real world in a pretty high-stakes project - lots of AU$ to be lost! - over the last year, and its effect has been to save time and increase robustness, to a surprising (including to me) degree: system testing/production had only two bugs. One was diagnosed within minutes of a halted process with "file(line): VIOLATION: <details here>", and was fixed and running in less than two hours. The other took about a week of incredibly hard debugging, walk throughs, arguments, and rants and raves, because, ironically, I'd _not_ added some contract enforcements I'd deemed never-going-to-happen!!
>
> I don't see what a missing assert has to do with recoverable vs. irrecoverable. Or did you have an assert that was swallowed? I can't tell.

I was saying the presence of a contract violation assertion detected a design error, and facilitated a very rapid fix. And the absence of one resulted in a *lot* of effort that'd've been spared had I not been so stupid as to think they'd never happen.

>> So, unless and until I hear from people with _practical experience_ of these techniques that they've had bad experiences - and the only things I read about from people such as the Pragmatic Programmers is in line with my experience - I cannot be anything but convinced of their power to increase robustness and aid development and testing effectiveness and efficiency.
>
> Hard failures during debugging is fine. My own experience in code robustness comes from working with engineering companies (who use MATLAB to generate code for cars and planes) where an unrecoverable error means your car shuts down when you are doing 65 on the highway. Or imagine if that happens with an airplane. That is not acceptable. They have drilled over and over into our heads that a catastrophic error means people die. I don't mean to be overly dramatic but it is a fact.

They sound powerfully persuasive on first reading. But it's still wrong, I'm afraid. What would you expect of your car/plane when it's operating outside its design? That is truly frightening!

I think the real issue is that the examples you've described are not pure software engineering challenges. Frankly, I don't want to drive a car, or be in a plane, where one computer is in total control and it's going to be allowed to carry on when it's operating outside its design. From what I know of the space shuttle, it has three identical systems, and a supervisory controller that ignores an errant member of the triumvirate. Of course, that gets us to who monitors the controller. I don't know anything about that, but I would hope it's something that can effect an NMI and reboot itself within a few ms. All such things are a risk balance, naturally, and it may be that the risk estimate in such circumstances is that operating out of bounds is better than rebooting. In which case, build with "-no-cp-violations". Just don't crack on that these special circumstances are somehow exempt from the question of whether continuing is better than stopping.

What's wrong with having a car reboot at 65 on the highway? Why does reboot on such an embedded system have to take more than a ms or two? Why cannot such an embedded system start up seamlessly within a moving vehicle, and take over from where its previous incarnation had correctly steered it until its apoptosis? That'd be the car I'd trust.
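That reboot-and-resume behaviour can be mocked up in a few lines. In this Python sketch (everything here - the step function, the "scratch" corruption, the tick counts - is invented for illustration) a supervisor holds the last known-good checkpoint; when the control step violates its contract, the incarnation is halted and a fresh one resumes from the checkpoint:

```python
class ContractViolation(Exception):
    pass

def control_step(persistent, scratch):
    """One tick of a hypothetical controller.

    `persistent` is checkpointed state; `scratch` is the incarnation's
    working memory, which a (simulated) bug corrupts after a few ticks.
    """
    scratch["noise"] = scratch.get("noise", 0) + 1
    if scratch["noise"] > 2:            # transient corruption builds up
        raise ContractViolation("scratch state out of bounds")
    persistent["ticks"] += 1
    return persistent

def supervise(target_ticks=5):
    checkpoint = {"ticks": 0}
    scratch = {}                        # incarnation-local state
    restarts = 0
    while checkpoint["ticks"] < target_ticks:
        try:
            checkpoint = control_step(checkpoint, scratch)
        except ContractViolation:
            restarts += 1               # halt dead; the fresh incarnation
            scratch = {}                # resumes from the checkpoint
    return checkpoint["ticks"], restarts

print(supervise())                      # (5, 2)
```

The design point is that the violating incarnation never executes another instruction: recovery happens by replacement from known-good state, not by the corrupted incarnation carrying on.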

> With the current D AssertError subclassing Error I agree it is too easy to catch assertion failures. That is why in my proposal AssertionFailure subclasses Object directly. When a newbie is tempted to be lazy and swallow errors without thinking they will most likely swallow Exception. Only the truly unfortunate will think "oh - I can catch Object and swallow OutOfMemory and AssertionFailure, too!" My experience with Java is that newbies catch Exception and not Throwable or Error.

Why not just make it impossible?

>> Now C++ has deterministic destruction, which means I was easily able to create an unrecoverable exception type - found in <stlsoft/unrecoverable.hpp> for anyone interested - to get the behaviour I need. D does not support deterministic destruction of thrown exceptions, so it is not possible to provide irrecoverability in D.
>>
>> Score 0 for 2. And so it might go on.
>
> I'd score 2 for 0..

You mean 2 for 2, I think.

The point I was trying to raise is that you and Walter and others trot out these blanket statements that D is better for writing robust software, and I want to know why that should be so. Is it just because of GC? I mean, a great many people have proposed a great many changes with the intent of improving D's ability to write robust software, and yet they've fallen on fallow ground. Is D's 'ethos' of good enough applying here, i.e. is having a GC that makes the dangling pointer problem irrelevant such a big gain that we needn't care about anything else? I'm not (just) being sarcastic, I really want to know!

Cheers


April 13, 2005
Matthew wrote:
> "Georg Wrede" <georg.wrede@nospam.org> wrote in message news:425C57E3.7050409@nospam.org...
>>But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.
> 
> Indeed. This is one of those where I shall have to think. Instinct tells me strict-CP will win out, but I need to think about it. :)

I, for one, am 100% for strict here. The idea of CP gets diluted if you can have all kinds of methods for deferring shutdown. Or if you somehow can choose when and where it does what.

Either compile with all contracts, or compile without. (And while I'm at it, Phobos should either come as source code only, or (hopefully) rather as several binaries precompiled with different switches: contracts, debugging, optimized, for example, automatically chosen by the compiler.)

> He and the prime engineer turned round within a day: they thought it was marvellous that the processes would detect design violations, tell you what and where the problem was and stop dead, and that I'd have them fixed and up and running on to the next one within minutes. When you're dealing with several multi-threaded comms processes, involving different comms protocols, such behaviour was previously unheard of: to them and to me.

The problem with "ordinary" (non-critical) software development is that using CP _looks_ harder and more laborious at the outset. Probably because you then can't "see" the massive amounts of unnecessary work done when not using CP.

Heh, I've caught myself more than once taking shortcuts through fields, woods, bushes, or hills, only to find that it took more energy, ruined my clothes -- and worst of all, took more time than the regular road.

April 13, 2005
On Wed, 13 Apr 2005 10:07:40 +1000, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>>> I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.
>>>
>>> But you can't, don't you see? Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
>>
>> The module/plugin won't execute a single instruction past the point at which it violates its design. It will be killed. My program hasn't violated its design at all.
>
> Without executing any instructions beyond that point, how does it (or we) know it's invalid?

OK, misunderstanding: the 'point' in my mind was the assert statement, but you're saying it's where the erroneous 'thing' was carried out, at some stage before the assert, correct? If so, agreed.

>> The only problem I can see occurs when the module/plugin corrupts my program's memory space.
>
> Which it may have done before anyone, including it, knows it's invalid.

Yep.

>>>> Under what circumstances do you see the goal above to be
>>>> impossible?
>>>
>>> It is theoretically impossible in all circumstances. (FTR: We're
>>> only talking about contract violations here. I'm going to keep
>>> saying that, just to be clear)
>>
>> Can you give me an example of the sort of contract violation you're referring to? I'm seeing...
>>
>> class Foo {
>>   int a;
>>
>>   invariant {
>>     assert(a == 5);  // contract violation
>>   }
>> }
>>
>> which can be caused by any number of things:
>>  - buggy algorithm
>>  - unexpected input, without an assertion
>>  - memory corruption
>
> Alas, I'm really not smart enough to work on partial examples. Can
> you flesh out a small but complete example which will demonstrate
> what you're after, and I'll do my best to prove my case on it?

I was asking *you* for an example, the above is half-formed because I am trying to guess what you mean. Feel free to modify it, and/or start from scratch.

Basically I'm asking:
1- What are the causes of contract violations?
2- How many of those would corrupt the "main program" if they occurred in a plugin/module?

The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash; the rest can be logged, the bad code disabled/not used, and execution can continue in a perfectly valid and normal fashion.

In other words, only in a small subset of contract violations would the main program start "operating outside its design".

The choice about whether to continue or not lies in the hands of the programmer of that application.
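Regan's position can be sketched concretely. In this Python illustration (the plugin names, the registry, and the `ContractViolation` type are all invented) the host catches a plugin's contract violation, disables that plugin, and carries on; Matthew's counter, also from this thread, is that corruption may already have leaked into the host before the check fired:

```python
class ContractViolation(Exception):
    """Stands in for a contract check firing inside a plugin."""

def shout(text):
    return text.upper()

def broken(text):
    raise ContractViolation("invariant failed in broken plugin")

# Hypothetical host: run every plugin, disable any that violates its
# contract, and keep serving the rest.
plugins = {"shout": shout, "broken": broken}
disabled = set()
results = {}

for name, plugin in sorted(plugins.items()):
    try:
        results[name] = plugin("hello")
    except ContractViolation:
        disabled.add(name)   # kill the offender, keep the host alive

print(results, disabled)     # {'shout': 'HELLO'} {'broken'}
```

The sketch is only safe to the extent that the plugin boundary actually isolates state; with in-process plugins sharing one address space, that isolation is an assumption, which is exactly the disputed point.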

>>> And therefore if the application is written to attempt recovery of invalid behaviour it is going to get you. One day you'll lose a very important piece of user data, or contents of your harddrive will be scrambled.
>>
>> I don't want the plugin/module that has asserted to continue, I want it to die. I want my main program to continue, the only situation I can see where this is likely to cause a problem is memory corruption and in that case, yes it's possible it will have the effects you describe.
>
> I agree that that's the desirable situation. Alas, it's impossible (in principle; as I've said, it's possible some Heisenbergian proportion of the time in practice).

Some _large_ proportion of the time in practice, as far as I can see.

>> As you say above, the probability is very small, thus each
>> application  needs to make the decision about whether to continue
>> or not, for some the  risk might be acceptable, for others it
>> might not.
>
> Yeah, it sounds persuasive. But that'd only be valid if an
> application were to pop a dialog that said:
>
>     "The third-party component ReganGroovy.dll has encountered a
> condition outside the bounds of its design, and cannot be used
> further. You are strongly advised to shutdown the application
> immediately to ensure your work is not lost. If you do not follow
> this advice there is a non-negligble chance of deleterious effects,
> ranging from the loss of your unsaved work or deletion of the
> file(s) you are working with, to your system being rendered
> inoperable or damage to the integrity of your corporate network. Do
> you wish to continue?"

That would be nice, but it's not required. The choice is in the hands of the programmer, not the user. If the user doesn't like the choice made by the programmer, they'll stop using the program.

> Now *maybe* if that was the case, then the programmers of that
> application can argue that using the "-no-cp-violations" flag is
> valid. But can you see a manager agreeing to that message?

That depends on the manager. I vaguely recall receiving a Windows error message very much like the one shown above, on several occasions. Most of those programs continued to run, albeit with other errors; some died shortly thereafter.

> Of course we live in a world of mendacity motivated by greed, so
> your manager's going to have you water down that dialog faster than
> you can "gorporate creed". In which case, we should all just violate
> away. (But I believe that most engineers care about their craft, and
> would have trouble sleeping in such circumstances.)

Principle/ideal vs reality/practice is a fine line/balance, one that is unique to each situation and application. That is why we cannot mandate program termination. But, by all means, provide it in the library.

>>>> However, how can we  detect that situation?
>>>
>>> We cannot. The only person that's got the faintest chance of
>>> specifying the conditions for which it's not going to happen (at
>>> least not by design <g>) is the author of the particular peice of
>>> code. And the way they specify it is to reify the contracts of
>>> their
>>> code in CP constructs: assertions, invariants, etc.
>>>
>>> The duty of the programmer of the application that hosts that
>>> code
>>> is to acknowledge the _fact_ that their application is now in an
>>> invalid state and the _likelihood_ that something bad will
>>> happen,
>>> and to shutdown in as timely and graceful a manner as possible.
>>
>> If that is what they want to do, they could equally decide the
>> risk was  small (as it is) and continue.
>
> Who decides? The programmer, or the user?

The programmer.

>>> Since D is (i) new and open to improvement and (ii) currently not
>>> capable of supporting irrecoverability by library, I am
>>> campaigning
>>> for it to have it built in.
>>
>> I'd prefer an optional library solution. For reasons expressed in
>> this  post/thread.
>
> So would I, but D does not have the mechanisms to support that, so
> it needs language support.

I assume you're referring to the fact that you can catch Object. IMO catching Object is an advanced technique. Once the Exception tree is sorted out, people will be catching "Exception" if they want to catch "everything", and that won't include asserts and other contract violations.

This will leave the _possibility_ of catching Object if desired and all will be happy.
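The split being proposed here has a close analogue in Python's exception hierarchy, where `except Exception` deliberately misses `SystemExit` and `KeyboardInterrupt` (both derive from `BaseException` directly). A sketch of the idea, using Python purely as an illustration of the principle, not as D code:

```python
# Python analogy: `except Exception` does not catch everything.
# SystemExit derives from BaseException directly, so an ordinary
# handler lets it propagate -- much like a D handler that catches
# Exception but not contract-violation objects.

def run_with_ordinary_handler(fn):
    """Run fn, swallowing only 'ordinary' errors."""
    try:
        fn()
        return "ok"
    except Exception:
        return "caught ordinary error"

def raises_value_error():
    raise ValueError("bad input")

def raises_system_exit():
    raise SystemExit(1)  # stands in for a contract violation

print(run_with_ordinary_handler(raises_value_error))  # caught ordinary error

# SystemExit sails past `except Exception`; only a deliberate
# `except BaseException` (the analogue of catching Object) stops it.
try:
    run_with_ordinary_handler(raises_system_exit)
except BaseException as e:
    print("escaped to outer handler:", type(e).__name__)  # SystemExit
```

The parallel: `Exception` is what ordinary handlers catch, while `BaseException` plays the role Object would play in D, caught only rarely and deliberately.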

>>>> further how can I even be sure my program is going  to terminate
>>>> how I intend/want it to, more likely it crashes somewhere
>>>> random.
>>>
>>> If it experiences a contract violation then, left to its own
>>> devices, in principle it _will_ crash randomly
>>
>> I _might_ crash. If the assertion was due to memory corruption,
>> and even  then it might be localised to the module in which the
>> assertion was  raised, if so it has no effect on the main program.
>
> Indeed, it might. In many cases it will. But you'll never know for
> sure. It might have corrupted your stack such that the next file you
> open is C:\boot.ini, and the next time you reboot your machine it
> doesn't start. If the programmer makes that decision for an
> uninformed user, they deserve to be sued, IMO.

In which case the user will move on to another program. The programmer will hopefully learn from the mistake and improve. Either way, it's not ours to mandate.

>>> , and in practice it
>>> is likely to do so an uncomfortable/unacceptable proportion of
>>> the
>>> time.
>>
>> unacceptable to whom? you? me? the programmer of application X 10
>> years in  the future?
>
> The user, of course. The end victim of all such invalid behaviour is
> the user, whether it's a bank losing millions of dollars because the
> comms services sent messages the wrong way, or Joe Image the graphic
> designer who's lost 14 hours of work 2 hours before he has to
> present it to his major client, who'll terminate his contract and
> put his company under.

(as above) The user will choose, the programmer will learn.. or not.

>> I think you stand a very good chance of annoying the hell out of a
>> future  program author by forcing him/her into a design
>> methodology that they do  not aspire to, whether it's correct or
>> not.
>>
>> For the records I do agree failing hard and fast is usually the
>> best  practice. I just don't believe people should be forced into
>> it, all the  time.
>
> Again it boils down to two things, the theoretical "what do you
> expect of your software once it's operating outside its design?"

If it *is* operating outside its design you cannot expect anything from it.

However, (assuming the plugin/main context) you cannot know that it (the main program) *is* operating outside its design; it only *might* be (if the plugin has corrupted it).

> and
> the practical "wouldn't you like to use software that's been subject
> to a highly fault-intolerant design/development/testing
> methodology?"

Of course. But that does not mean I agree with mandatory program termination.

> There's simply no getting away from the first

Indeed.

> , and many good reasons
> to embrace the second.

Agreed. And we can/will with a revised exception tree and a clear description of expected practices, i.e. catching Exception but not Object (unless you're doing x, y, z), and why the latter is frowned upon.

Regan
April 13, 2005
>>> Can you give me an example of the sort of contract violation you're  referring to. I'm seeing...
>>>
>>> class Foo {
>>>   int a;
>>>
>>>   invariant {
>>>     if (a != 5) assert(false);  // contract violation
>>>   }
>>> }
>>>
>>> which can be caused by any number of things:
>>>  - buggy algorithm
>>>  - unexpected input, without an assertion
>>>  - memory corruption
>>
>> Alas, I'm really not smart enough to work on partial examples.
>> Can
>> you flesh out a small but complete example which will demonstrate
>> what you're after, and I'll do my best to prove my case on it?
>
> I was asking *you* for an example, the above is half-formed because I am  trying to guess what you mean. Feel free to modify it, and/or start from  scratch.

Ah. More work for me. :-)

I'll give you the real example from the comms system. One component, call it B, serves as a bridge between two others, translating messages on TCP connections from the upstream component into message queue entries to be dispatched to the downstream. It effects a reverse translation from downstream MQ back through to upstream TCP. The invariant that was violated was that the channel's internal message container for receiving from the MQ should not contain any messages when the upstream (TCP) is not connected to its peer. The reason this fired is that the downstream process, under some circumstances, did indeed send up messages. Because the upstream entities maintain transaction state based on connectivity, any messages from a previous TCP connection are by definition from a previous transaction, and so to pass them up would violate the protocol, and would therefore be entirely unreasonable. The only reasonable action is to drop them.

Because the program as originally designed did not expect to encounter this scenario, that assumption was codified in the class invariant for the channel type. Thus, when it was encountered in practice - i.e. the program now encountered a condition that violated its design - a contract violation fired, and the program did an informative suicide. As soon as this happened we were able to infer that our design assumptions were wrong, and we corrected the design such that stale messages are now expected, and are dealt with by a visit to the bit bucket.

As I said in another post, imagine if that CP violation had not happened. The stale messages could have been encountered every few days/weeks/months, and may well have been treated as some emergent and unknown behaviour of the overall system complexity, or may have been masked by other errors encountered in this large and diverse infrastructure. Who knows? What I think we can say with high certainty is that it would not have been immediately diagnosed and rapidly rectified.
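The invariant described here might be codified roughly as follows. This is a hypothetical Python approximation (all names invented; D would express the check in an `invariant` block on the channel class), showing both the original codified assumption and the corrected drop-stale-messages design:

```python
# Hypothetical sketch (invented names) of the bridge channel above.
# The original design assumed the inbound container is empty whenever
# the upstream TCP side is disconnected; the invariant codifies that
# assumption so any violation fails loudly and immediately.

class BridgeChannel:
    def __init__(self):
        self.tcp_connected = False
        self.mq_inbox = []          # messages received from the MQ

    def _check_invariant(self):
        # The original (ultimately wrong) design assumption, codified:
        assert self.tcp_connected or not self.mq_inbox, \
            "contract violation: MQ messages held while TCP disconnected"

    def receive_from_mq(self, msg):
        # Original design: accept the message, then verify the invariant.
        self.mq_inbox.append(msg)
        self._check_invariant()     # fires the "informative suicide"

    def receive_from_mq_corrected(self, msg):
        # Corrected design: stale messages (no TCP peer) are expected,
        # and dealt with by a visit to the bit bucket.
        if not self.tcp_connected:
            return
        self.mq_inbox.append(msg)
```

The point of the first version is precisely that it dies noisily at the moment the design assumption is falsified, rather than silently forwarding stale transactions.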


If you want, I'll have a root around my code base for another in a bit ...


> Basically I'm asking:
> 1- What are the causes of contract violations?

Whatever the programmer dictates.

> 2- How many of those would corrupt the "main program" if they occured in a  plugin/module.

Impossible to determine. In principle all do. In practice probably a minority. (Bear in mind that just because the incidence is likely to be low, it's not valid to infer that the ramifications of each corruption will be similarly low.)

> The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash,

May I ask on what you base this statement?

Further, remember that crashing is but one of many deleterious consequences of contract violation. In some ways a crash is the best one might hope for. What if your code-editing program appears to be working perfectly fine, but saves everything in lowercase, or deletes its configuration files when you finally close it down?

Much more nastily, what if you're doing a financial system, and letting it continue results in good uptime, but transactions are corrupted? When the client finally finds out it's your responsibility, you're going to wish you'd gone for those cosmetically unappealing shutdowns, rather than staying up in error.

> the rest can be logged, the bad code disabled/not used and execution can  continue to operate in a perfectly valid and normal fashion.
>


>>     "The third-party component ReganGroovy.dll has encountered a
>> condition outside the bounds of its design, and cannot be used
>> further. You are strongly advised to shutdown the application
>> immediately to ensure your work is not lost. If you do not follow
>> this advice there is a non-negligble chance of deleterious
>> effects,
>> ranging from the loss of your unsaved work or deletion of the
>> file(s) you are working with, to your system being rendered
>> inoperable or damage to the integrity of your corporate network.
>> Do
>> you wish to continue?"
>
> That would be nice, but it's not required. The choice is in the hands of  the programmer, not the user. If the user doesn't like the choice made by  the programmer, they'll stop using the program.

They sure will. But they may've been seriously inconvenienced by it, to the detriment of



>> Of course we live in a world of mendacity motivated by greed, so
>> your manager's going to have you water down that dialog faster
>> than
>> you can "gorporate creed". In which case, we should all just
>> violate
>> away. (But I believe that most engineers care about their craft,
>> and
>> would have trouble sleeping in such circumstances.)
>
> Principle/Ideal vs Reality/Practice it's a fine line/balance, one that is  unique to each situation and application. Thus why we cannot mandate  program termination. But, by all means, provide one in the library.

We can't, because D does not have the facilities to do so.



>>>> Since D is (i) new and open to improvement and (ii) currently
>>>> not
>>>> capable of supporting irrecoverability by library, I am
>>>> campaigning
>>>> for it to have it built in.
>>>
>>> I'd prefer an optional library solution. For reasons expressed
>>> in
>>> this  post/thread.
>>
>> So would I, but D does not have the mechanisms to support that,
>> so
>> it needs language support.
>
> I assume you're referring to the fact that you can catch Object. IMO  catching object is an advanced technique. Once the Exception tree is  sorted out people will be catching "Exception" if they want to catch  "everything" and that won't include asserts and other contract violations.
>
> This will leave the _possibility_ of catching Object if desired and all  will be happy.

What's the merit in catching Object? I just don't get it.

Does anyone have a motivating case for doing so?

Does anyone have any convincing experience in throwing fundamental types in C++? IIRC, the only reason for allowing it in the first place was so that one could throw literal strings to avoid issues of stack existentiality in early exception-handling infrastructures. Nowadays, no-one talks of throwing anything other than a std::exception-derived type, and with good reason.

Leaving it as Object-throw/catchable is just the same as the situation with the opApply return value. It's currently not being abused, so it's "good enough"! :-(




>>>>> further how can I even be sure my program is going  to
>>>>> terminate
>>>>> how I intend/want it to, more likely it crashes somewhere
>>>>> random.
>>>>
>>>> If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
>>>
>>> I _might_ crash. If the assertion was due to memory corruption, and even  then it might be localised to the module in which the assertion was  raised, if so it has no effect on the main program.
>>
>> Indeed, it might. In many cases it will. But you'll never know
>> for
>> sure. It might have corrupted your stack such that the next file
>> you
>> open is C:\boot.ini, and the next time you reboot your machine it
>> doesn't start. If the programmer makes that decision for an
>> uninformed user, they deserve to be sued, IMO.
>
> In which case the user will move on to another program. The programmer  will hopefully learn from the mistake and improve. Either way, it's not  ours to mandate.

You don't think that user would rather have received a dialog telling him that something's gone wrong and that his work has been saved for him and that he must shut down? He'd rather have his machine screwed?

You don't think that developer would rather receive a mildly irritated complaint from a user with an accompanying auto-generated error report, which he can use to fix the problem forthwith? He'd rather have his reputation trashed in newsgroups, field very irate email, be sued?



>> Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?"
>
> If it *is* operating outside it's design you cannot expect anything from  it.
>
> However, (assuming plugin/main context) you cannot know that it
> (main  program) *is* operating outside it's design, it only
> *might* be (if the  plugin has corrupted it).

I don't think it's divisible. If the program has told you it might be operating outside its design, then it is operating outside its design. After all, a program cannot, by definition, be designed to work with a part of it that is operating outside its design. How could the interaction with that component be spelt out in design, never mind codified?

>> and
>> the practical "wouldn't you like to use software that's been
>> subject
>> to a highly fault-intolerant design/development/testing
>> methodology?"
>
> Of course. But that does not mean I agre with mandatory program termination.
>
>> There's simply no getting away from the first
>
> Indeed.


btw, I think we've covered most of the stuff now. I'm happy to continue if you are, but equally happy to let the group decide (that irrecoverability is not worth having ;< )

Cheers

Matthew



April 13, 2005
> As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong, (and whose fault it was!) that this simply forces the bug fixed in "no time".

That's a tough sell. I'm not in sales but I imagine they wouldn't like saying "yeah unlike our competitors who recover gracefully from errors - when we have something go wrong we exit the whole app and create this nifty file called 'core' that has all your data. It's good for you, or so say the experts."

> We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software.
>
> But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.

Customers don't like programs crashing. In practice many errors - even many asserts - are not fatal and can be recovered from.

> This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and especially the pointy haired bosses, have a hard time becoming motivated to do things Right.

I assume you are talking about Windows. There are many pressures on software companies. I won't defend Microsoft's decisions but I doubt that if they had made Windows crash "harder" than it did that they would have been more motivated to fix bugs. They had a different measure of release criteria than we do today.


April 13, 2005
On Wed, 13 Apr 2005 11:49:25 +1000, Matthew <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>>> Can you give me an example of the sort of contract violation
>>>> you're  referring to. I'm seeing...
>>>>
>>>> class Foo {
>>>>   int a;
>>>>
>>>>   invariant {
>>>>     if (a != 5) assert(false);  // contract violation
>>>>   }
>>>> }
>>>>
>>>> which can be caused by any number of things:
>>>>  - buggy algorithm
>>>>  - unexpected input, without an assertion
>>>>  - memory corruption
>>>
>>> Alas, I'm really not smart enough to work on partial examples.
>>> Can
>>> you flesh out a small but complete example which will demonstrate
>>> what you're after, and I'll do my best to prove my case on it?
>>
>> I was asking *you* for an example, the above is half-formed
>> because I am  trying to guess what you mean. Feel free to modify
>> it, and/or start from  scratch.
>
> Ah. More work for me. :-)
>
> I'll give you the real example from the comms system. One component,
> call it B, serves as a bridge between two others translating
> messages on TCP connections from the upstream component to message
> queue entries to be dispatched to the downstream. It effects a
> reverse translation from downstream MQ back through to upstream TCP.
> The invariant that was violated was that the channel's internal
> message container for receiving from the MQ should not contain any
> messages when the upstream (TCP) is not connected to its peer. The
> reason this fired is that the downstream process, under some
> circumstances, did indeed send up messages. Because the upstream
> entities maintain transaction state based on connectivity, any
> messages from a previous TCP connection are by defintion from a
> previous transaction, and so to pass them up would violate the
> protocol, and therefore entirely unreasonable. The only reasonable
> action is to drop them.
>
> Because the program as originally designed did not expect to
> encounter this scenario, that assumption was codified in the class
> invariant for the channel type. Thus, when it was encountered in
> practice - i.e. the program now encountered a condition that
> violated its design - a contract violation fired, and the program
> did an informative suicide. As soon as this happened we were able to
> infer that our design assumptions were wrong, and we corrected the
> design such that stale messages are now expected, and are dealt with
> by a visit to the bit bucket.
>
> As I said in another post, imagine if that CP violation had not
> happened. The stale messages could have been encountered every few
> days/weeks/months, and may well have been treated as some emergent
> and unknown behaviour of the overall system complexity, or may have
> been masked in other errors encountered in this large and diverse
> insfrastructure. Who knows? What I think we can say with a high
> certainty is that it would not have been immediately diagnosed and
> rapidly rectified.
>
>
> If you want, I'll have a root around my code base for another in a
> bit ...

No need, this is fine, thank you.

One important point is that I'm not recommending the removal of the CP violation, quite the opposite, but, I believe that the programmer should be able to make the informed decision about whether it's terminal or not.

In your example, the best course was/is to terminate, as its main goal/purpose cannot be achieved without significant risk of corruption.

In another example, the plugin one, the main program can continue; its main goal/purpose can still be achieved without significant risk of corruption (assuming here the plugin is non-essential to that main goal).

It comes down to the priority of the task that has asserted; if it's low priority then, given the circumstances, it's conceivable that the programmer may want to log it and continue, so as to achieve his/her main priority. There is risk involved, but it's admittedly small.
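The log-and-continue policy being argued for here might look roughly like the following sketch (hypothetical host and plugin names; Python used for illustration). Note that it embodies exactly the assumption under dispute in this thread: that the violation has not corrupted the host itself.

```python
# Hypothetical host that disables a plugin on its first contract
# violation and carries on, rather than terminating the whole process.
import logging

class PluginHost:
    def __init__(self):
        self.plugins = {}
        self.disabled = set()

    def register(self, name, fn):
        self.plugins[name] = fn

    def call(self, name, *args):
        if name in self.disabled:
            return None  # component disabled -> never use it again
        try:
            return self.plugins[name](*args)
        except AssertionError as e:
            # The plugin violated its contract: log, disable, continue.
            logging.error("plugin %s violated a contract: %s", name, e)
            self.disabled.add(name)
            return None

host = PluginHost()
host.register("good", lambda x: x * 2)

def bad(x):
    assert False, "contract violation in plugin"

host.register("bad", bad)

print(host.call("good", 21))  # 42
print(host.call("bad", 1))    # None; plugin disabled, host survives
print(host.call("good", 5))   # 10 -- the host keeps running
```

Matthew's counter-argument applies unchanged: nothing in this sketch can prove the host's own state is still valid after the assert fires, which is why he argues the only safe policy is shutdown.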

>> Basically I'm asking:
>> 1- What are the causes of contract violations?
>
> Whatever the programmer dictates.

Not contract conditions; those the programmer dictates. I want to know what can cause a contract to be violated. I was thinking:
 - buggy algorithm
 - memory corruption
..

>> 2- How many of those would corrupt the "main program" if they
>> occured in a  plugin/module.
>
> Impossible to determine. In principle all do. In practice probably a
> minority. (Bear in mind that just because the numbers of incidence
> are likely to be low, its not valid to infer that the ramifications
> of each corruption will be similarly low.)

I agree. Incidence is low, the ramifications may be large. But they may not be, in which case let the programmer decide.

>> The point I am driving at is that a very small subset of contract
>> violations corrupt the main program in such a way as to cause it
>> to crash,
>
> May I ask on what you base this statement?

On the comments you and I have made about the likelihood of memory corruption, which, so far, appears to be the only cause that produces an undetectable/unhandleable crash.

> Further, remember that crashing is but one of many deleterious
> consequences of contract violation. In some ways a crash is the best
> one might hope for. What if you code editing program appears to be
> working perfectly fine, but it saves everything in lowercase, or
> when you finally close it down it deletes its configuration files.
>
> Much more nastily, what if you're doing a financial system, and
> letting it continue results in good uptime, but transactions are
> corrupted. When the client finally finds out it's your
> responsibility, you're going to wish you'd gone for those
> cosmetically unappealing shutdowns, rather than staying up in error.

Sure, in these circumstances the programmer should "choose" to crash.
I just want to retain the option.

>>>     "The third-party component ReganGroovy.dll has encountered a
>>> condition outside the bounds of its design, and cannot be used
>>> further. You are strongly advised to shutdown the application
>>> immediately to ensure your work is not lost. If you do not follow
>>> this advice there is a non-negligble chance of deleterious
>>> effects,
>>> ranging from the loss of your unsaved work or deletion of the
>>> file(s) you are working with, to your system being rendered
>>> inoperable or damage to the integrity of your corporate network.
>>> Do
>>> you wish to continue?"
>>
>> That would be nice, but it's not required. The choice is in the
>> hands of  the programmer, not the user. If the user doesn't like
>> the choice made by  the programmer, they'll stop using the
>> program.
>
> They sure will. But they may've been seriously inconvenienced by it,
> to the detriment of

... ? their business, health, bank account.

>>> Of course we live in a world of mendacity motivated by greed, so
>>> your manager's going to have you water down that dialog faster
>>> than
>>> you can "gorporate creed". In which case, we should all just
>>> violate
>>> away. (But I believe that most engineers care about their craft,
>>> and
>>> would have trouble sleeping in such circumstances.)
>>
>> Principle/Ideal vs Reality/Practice it's a fine line/balance, one
>> that is  unique to each situation and application. Thus why we
>> cannot mandate  program termination. But, by all means, provide
>> one in the library.
>
> We can't, because D does not have the facilities to do so.

You cannot "enforce" or "mandate" it, but you can "provide" it.

>>>>> Since D is (i) new and open to improvement and (ii) currently
>>>>> not
>>>>> capable of supporting irrecoverability by library, I am
>>>>> campaigning
>>>>> for it to have it built in.
>>>>
>>>> I'd prefer an optional library solution. For reasons expressed
>>>> in
>>>> this  post/thread.
>>>
>>> So would I, but D does not have the mechanisms to support that,
>>> so
>>> it needs language support.
>>
>> I assume you're referring to the fact that you can catch Object.
>> IMO  catching object is an advanced technique. Once the Exception
>> tree is  sorted out people will be catching "Exception" if they
>> want to catch  "everything" and that won't include asserts and
>> other contract violations.
>>
>> This will leave the _possibility_ of catching Object if desired
>> and all  will be happy.
>
> What's the merit in catching Object? I just don't get it.
>
> Does anyone have a motivating case for doing so?
>
> Does anyone have any convincing experience in throwing fundamental
> types in C++? IIRC, the only reason for allowing it in the first
> place was so that one could throw literal strings to avoid issues of
> stack existentiality in early exception-handling infrastructures.
> Nowadays, no-one talks of throwing anything other than a
> std::exception-derived type, and with good reason.
>
> Leaving it as Object-throw/catchable is just the same as the
> situation with the opApply return value. It's currently not being
> abused, so it's "good enough"! :-(

Well, personally I don't care what it's called; I just want to be able to catch everything, including Assertions etc. I'm happy with it being uncommon, e.g.

Object <- not throw/catch-able
  Throwable
    Assertion
    Exception
       ..etc..

You don't catch Throwable, generally speaking, only Exception.
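Modelled in Python (class names hypothetical, mirroring the tree above), the proposal amounts to rooting contract violations outside the branch that ordinary handlers catch:

```python
# Model of the proposed tree: Throwable at the root, with Assertion
# (contract violations) and an ordinary exception type as siblings,
# so that `except Exception` does not swallow contract violations.

class Throwable(BaseException):
    """Root of all throwables; not normally caught directly."""

class Assertion(Throwable):
    """Contract violation: escapes ordinary `except Exception` handlers."""

class AppException(Throwable, Exception):
    """Ordinary recoverable error; caught by the common case."""

def handler(fn):
    try:
        fn()
        return "ok"
    except Exception:            # the common case: AppException only
        return "recovered"

def fails_contract():
    raise Assertion("invariant violated")

def ordinary_failure():
    raise AppException("file not found")

print(handler(ordinary_failure))  # recovered

caught_at_top = None
try:
    handler(fails_contract)      # Assertion sails past `except Exception`
except Throwable as e:           # the rare, deliberate catch-all
    caught_at_top = type(e).__name__
print(caught_at_top)             # Assertion
```

This gives both behaviours Regan asks for: the everyday handler never accidentally suppresses an assertion, yet a top-level handler that knows what it's doing can still catch everything.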

>>>>>> further how can I even be sure my program is going  to
>>>>>> terminate
>>>>>> how I intend/want it to, more likely it crashes somewhere
>>>>>> random.
>>>>>
>>>>> If it experiences a contract violation then, left to its own
>>>>> devices, in principle it _will_ crash randomly
>>>>
>>>> I _might_ crash. If the assertion was due to memory corruption,
>>>> and even  then it might be localised to the module in which the
>>>> assertion was  raised, if so it has no effect on the main
>>>> program.
>>>
>>> Indeed, it might. In many cases it will. But you'll never know
>>> for
>>> sure. It might have corrupted your stack such that the next file
>>> you
>>> open is C:\boot.ini, and the next time you reboot your machine it
>>> doesn't start. If the programmer makes that decision for an
>>> uninformed user, they deserve to be sued, IMO.
>>
>> In which case the user will move on to another program. The
>> programmer  will hopefully learn from the mistake and improve.
>> Either way, it's not  ours to mandate.
>
> You don't think that user would rather have received a dialog
> telling him that something's gone wrong and that his work has been
> saved for him and that he must shut down? He'd rather have his
> machine screwed?

Sure.

> You don't think that developer would rather receive a mildly
> irrirated complaint from a user with an accompanying auto-generated
> error-report, which he can use to fix the problem forthwith? He'd
> rather have his reputation trashed in newsgroups, field very irate
> email, be sued?

Sure.

But again, it's his/her choice.

>>> Again it boils down to two things, the theoretical "what do you
>>> expect of your software once it's operating outside its design?"
>>
>> If it *is* operating outside it's design you cannot expect
>> anything from  it.
>>
>> However, (assuming plugin/main context) you cannot know that it
>> (main  program) *is* operating outside it's design, it only
>> *might* be (if the  plugin has corrupted it).
>
> I don't think it's divisible. If the program has told you it might
> be operating outside its design, then it is operating outside its
> design.

Not if its design includes trying to handle that situation.

> After all, a program cannot, by definition, be designed to
> work with a part of it that is operating outside its design.

Why not?

> How
> could the interaction with that component be spelt out in design,
> never mind codified?

if (outside_design) component = disabled;
if (component == disabled) return;

>>> and
>>> the practical "wouldn't you like to use software that's been
>>> subject
>>> to a highly fault-intolerant design/development/testing
>>> methodology?"
>>
>> Of course. But that does not mean I agre with mandatory program
>> termination.
>>
>>> There's simply no getting away from the first
>>
>> Indeed.
>
>
> btw, I think we've covered most of the stuff now. I'm happy to
> continue if you are, but equally happy to let the group decide (that
> irrecoverability is not worth having ;< )

OK, let's leave it here then. I'll leave the comments I've just made; feel free to ignore them :)

Regan