April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
On Tue, 12 Apr 2005 22:23:56 +1000, Matthew  
<admin@stlsoft.dot.dot.dot.dot.org> wrote:
>> Do you know the design of a program someone is going to write in
>> the  future?
>>
>> How can you say with utmost surety that under circumstance X that
>> program has violated its design and *must* terminate?
>
> Somewhere along the way we've had a gigantic disconnect. Maybe this
> stuff's new to you and I've assumed too much?

> A programmer uses CP constructs - assertions, pre/postconditions,
> invariants - to assert truths about the logic of their program, i.e.
> that a given truth will hold if the program is behaving according to
> its design. So it's not the case that _I_ say/know anything about the
> program's design, or any such fantastic thing, but that the
> programmer(s) know(s) the design as they're writing, and they
> _assert_ the truths about that design within the code.

Assertions, pre/post conditions, etc. are removed in a release build. I
assume from the above that you leave them in?
Regardless, let's apply the above to the plugin example posted in this
thread in several places by several people.
If a plugin asserts, should the main program be forced to terminate? IMO no.

Assuming assertions are removed, all you're left with is exceptions. Should
a program be forced to terminate on an exception? IMO no.

The reasoning for my opinions above is quite simple: the programmer (of
the program) is the only person in a position to decide what the design
and behaviour of the program is, not the writer of a plugin, not the
writer of the std library or language.

<snip>

>> So as to clear any confusion, under what circumstance would you
>> enforce  program termination?
>
> When a program violates its design, as detected by the assertions
> inserted into it by its creator(s). It must always do so.

Sure, but if I write a program that uses your plugin, should your plugin
(your creation) dictate to my program (my creation) what its design is?
By enforcing program termination, you do so.

<snip>

Regan
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
>> 1. As soon as your editor encounters a CP violation, it is, in 
>> principle and in practice, capable of doing anything, including 
>> losing your work. The only justifiable action, once you've saved 
>> (if possible) and shut down, is to disable the offending plug-in. 
>> (I know this because I use an old version of Visual Studio. <G>)
>
> But why would it be mandatory to shut down? If the core of the app 
> is able to disable the plugin without shutting down, I'd say 
> that's better (assuming, of course, that internal consistency can 
> be ensured, which it can be in many cases, and can't be in many 
> other cases; if it is not clear whether the app is in a consistent 
> state, I agree shutting down completely is the best thing to do)

Alas, this is quite wrong and misses the point, although you do hint 
at it. If the plug-in has violated its design, then any further 
action performed by it or by any other part of the process is, in 
principle, indeterminate and outside the bounds of correct 
behaviour. (Now, of course it is true that in many cases you can 
carry on for a while, even a long while, but you run a non-zero risk 
of ending in a nasty crash.)

I get the strong impression that people keep expanding the scope of 
this principle into 'normal' exceptions. If a plug-in runs out of 
memory, or can't open a file, or any other normal runtime error, 
then that's *not* a contract violation, and in no way implies that 
the hosting process must shut down in a timely fashion. It is only 
the case when a contract violation has occurred, because only that is 
a signal from the code's designer to the code's user (or rather the 
runtime) that the plug-in is now invalid.

So, looking back at your para "If the core of the app is able to 
disable the plugin without shutting down" applies to Exceptions (and 
Exhaustions), whereas "if it is not clear whether the app is in a 
consistent state, I agree shutting down completely is the best thing 
to do" applies to Errors. These two things hold, of course, since 
they are the definitions of Exceptions and Errors.

>> 2. The picture you've painted fails to take into account the 
>> effect of the extremely high intolerance of bugs in "irrecovering 
>> code". Basically, they're blasted out of existence in an 
>> extremely short amount of time - the code doesn't violate its 
>> design, or it doesn't get used, period - and the result is 
>> high-quality systems.
>
> I don't understand the first sentence, sorry..

CP violations are never tolerated, which means they get fixed _very_ 
quickly.

>>>I mean, when I do something like inspect a value in a debugger, 
>>>and the value is 150MB and the debugger runs out of memory, I 
>>>don't want it to stop, just because it can't show me a var (which 
>>>can just as easily manifest as an assertion error or whatever)...
>>
>> Out of memory is not an error, it's an exception, so that's just 
>> not an issue.
>
> Like I said, out of memory can just as well manifest itself later 
> as a broken contract or whatever.

No, it cannot.

Actually, there are occasions where contracts are used to assert the 
availability of memory - e.g. where you've preallocated a large 
block which you _know_ is large enough and are then allocating from 
it - but that's more an expedient (ab)use of CP, rather than CP 
itself.
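
For concreteness, here's roughly what that (ab)use looks like in D.
This is only a sketch - the FixedArena name and its layout are made up
for the example, not taken from any real library:

class FixedArena
{
    private byte[] m_block;     // preallocated once, up front
    private size_t m_used;

    this(size_t capacity)
    {
        m_block = new byte[capacity];
        m_used  = 0;
    }

    // The precondition asserts that the block we _know_ is big enough
    // really does have room; exhausting it is a design violation, not
    // a normal runtime exhaustion.
    byte[] allocate(size_t n)
    in
    {
        assert(m_used + n <= m_block.length);
    }
    body
    {
        byte[] result = m_block[m_used .. m_used + n];
        m_used += n;
        return result;
    }
}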

> That is completely off the point, though, I'm trying to address 
> your claim that some errors should force the app to shut down.

All errors should force the app to shut down. No exceptions should, 
in principle, force the app to shut down, although in practice it's 
appropriate to do so (e.g. when you've got no memory left).

>> btw, at no time have I _ever_ said that processes should just 
>> stop. There should always be some degree of logging of the flaw 
>> and, where appropriate, an attempt made to shut down gracefully 
>> and with as little collateral damage as possible.
>
> I've seen in your other responses that you don't mean uncatchable 
> when you say unrecoverable. I'm not quite sure what you do mean, 
> then. I guess it would be a throwable that can be caught, but that 
> doesn't stop it from propagating ahead?

I mean that one uses catch clauses in the normal way, to effect logging 
and even, perhaps, to try and save one's work, but at the end of that 
catch clause the error is rethrown if you've not done so manually, or 
another Error type is thrown in its place.
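
In code, the shape I have in mind is something like this - a D sketch
only; runPlugin() and saveWorkBestEffort() are stand-ins I've invented,
and the logging is deliberately crude:

import std.c.stdio;     // printf

void runPlugin()
{
    // stand-in for a plug-in whose contract has been violated
    throw new Error("plug-in violated its contract");
}

void saveWorkBestEffort()
{
    // best-effort attempt to save the user's work; may itself fail
}

void hostPlugin()
{
    try
    {
        runPlugin();
    }
    catch (Error e)
    {
        // log the violation and try to save work, but don't swallow it:
        // rethrow so that the process still goes down
        printf("contract violation: %.*s\n", e.toString());
        saveWorkBestEffort();
        throw e;
    }
}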

>> So, given that:
>>     - irrecoverability applies only to contract violations, i.e. 
>> code that is detected to have violated its design via runtime 
>> constructs inserted by its author(s) for that purpose
>>     - an invalid process cannot, by definition, perform validly. 
>> It can only stop, or perform against its design.
>
> True, but a faulty part of the application should not be taken as 
> if the whole application is faulty.

Dead wrong. Since 'parts' of the application share an address space, 
a faulty part of an application is the very definition of the whole 
application being faulty!

This is a crucial point, and a sine qua non for discussions on this 
topic.

If a part of a C/C++/D/C#/C++.NET program has violated its design, 
then it's possible for it to do anything, including writing an 
arbitrary number of bytes to an arbitrary memory location. The 
veracity of this cannot be denied, otherwise how would we be in the 
middle of an epidemic of viruses and worms?

But other languages that don't have pointers are just as dead in the 
water. If you've a comms server written in Java - I know, I know, 
but let's assume for pedagogical purposes that you might - and a 
plug-in violates its contract, then it can still do anything, like 
kill threads, write out security information to the console, email 
James Gosling a nasty letter, delete a crucial file, corrupt a 
database.

There's a good reason that modern operating systems separate memory 
spaces, so that corrupted applications have minimal impact on each 
other. Now that largely solves the memory corruption problem, 
although not completely, but it still doesn't isolate all possible 
effects from violating programs. In principle a contract violation 
in any thread should cause the shutdown of the process, the machine, 
and the entire population of machines connected to it through any 
and all networks. The reason we don't is not, as might be suggested, 
because that's stupid - after all, virus propagation demonstrates 
why this is a real concern - but because (i) contract violation 
support is inadequate and/or elided from release builds, and, more 
importantly, (ii) the cost benefit of shutting down several billion 
computers millions of times a day is obviously not a big winner.

Similarly, in most cases of contract violation, it's not sensible to 
shutdown the system when a violation in one process happens. But 
no-one should be fooled into thinking that that means that we're 
proof against such actions. If you have a Win9x system, and you do 
debugging on it, I'm sure you'll experience with reasonable 
regularity the unwanted effects of broken processes on each other 
through that tiny little 2GB window of shared memory. <g> Even on NT 
systems, which I've used rather than 9x for many years now, I 
experience corruptions between processes every week or two. (I tend 
to have a *lot* of applications going, each representing the current 
thread of work of a particular project ... not sensible I know, but 
still not outside the bounds of what a multitasking OS should eat 
for breakfast.)

No, it is at the level of the process that things should be shut 
down, because all the parts of a process have an extreme level of 
intimacy in that they share memory. All classes, mem-maps, data 
blocks, pointers, stack, variables, singletons - you name it! - they 
all share the same memory space, and an errant piece of code in any 
part of that process can screw up any other part. So (hopefully) you 
see that it is impossible to ever state (with even practical 
certainty) that "a faulty part of the application should not be 
taken as if the whole application is faulty".

>>     - "Crashing Early" in practice results in extremely high 
>> quality code, and rapid turn around of bug diagnosis and fixes
>>     - D cannot support opt-in/library-based irrecoverability; to 
>> have it, it must be built in
>> do you still think it would be a mistake?
>>
>> If so, can you explain why?
>
> OK, it is obviously desired in some cases, so I agree it should be 
> supported by the language, BUT, none of the built-in exceptions 
> should then be unrecoverable.

Well again, I get the feeling that you think I've implied that a 
wide range of things should be unrecoverable. I have not, and 
perhaps I should say so explicitly now: AFAIK, only contract 
violations should be Errors (i.e. unrecoverable), since only they 
are a message from the author(s) of the code to say when it's become 
invalid. Only the author can know. Not the users of the libraries, 
not the users of any programs, not you or me, not even the designer 
of the language can make that assertion.

All other exceptions, including those already in D that are called 
errors, should *not* be unrecoverable.
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
"Regan Heath" <regan@netwin.co.nz> wrote in message 
news:opso43bibk23k2f5@nrage.netwin.co.nz...
> On Tue, 12 Apr 2005 22:23:56 +1000, Matthew 
> <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>> Do you know the design of a program someone is going to write in
>>> the  future?
>>>
>>> How can you say with utmost surety that under circumstance X 
>>> that
>>> program has violated its design and *must* terminate?
>>
>> Somewhere along the way we've had a gigantic disconnect. Maybe 
>> this
>> stuff's new to you and I've assumed too much?
>
>> A programmer uses CP constructs - assertions, pre/postconditions,
>> invariants - to assert truths about the logic of their program, 
>> i.e.
>> that a given truth will hold if the program is behaving according 
>> to
>> its design. So it's not the case that _I_ say/know anything about 
>> the
>> program's design, or any such fantastic thing, but that the
>> programmer(s) know(s) the design as they're writing, and they
>> _assert_ the truths about that design within the code.
>
> Assertions, pre/post conditions, etc. are removed in a release build.
> I assume from the above that you leave them in?

This is something we've not covered for some weeks. I'm moving in my 
own work towards leaving them in more and more, and, as I've said, 
in a recent high-risk project they are in, and doing their job very 
nicely (which is to say, they've said nothing at all after the first few days in 
system testing, which gives me a very nice calm feeling, given the 
amount of money travelling through those components each day!).

But, I think in D there should be an option to have them elided, 
yes. Again, I'd err on the side of them always being in absent 
a -no_cp_violations flag, but I can live with them being opt-in 
rather than opt-out.

The crucial thing we need is language support for irrecoverability, 
whether it can then be disabled on the command-line or not, since 
there is no means within D to provide it by library.

> Regardless, let's apply the above to the plugin example posted in 
> this thread in several places by several people.
> If a plugin asserts, should the main program be forced to 
> terminate? IMO no.
>
> Assuming assertions are removed, all you're left with is 
> exceptions. Should  a program be forced to terminate on an 
> exception? IMO no.

I've never said a program should be forced to terminate on an 
exception. In principle there is no need. In practice one might well 
do so, of course, but that's beside the/this point.

> The reasoning for my opinions above is quite simple: the 
> programmer (of  the program) is the only person in a position to 
> decide what the design  and behaviour of the program is, not the 
> writer of a plugin, not the  writer of the std library or 
> language.

Sorry, again this is completely wrong. Once the programmer is using 
a plug-in outside the bounds of its correctness *it is impossible* 
for that programmer to decide what the behaviour of his/her program 
is.

It really amazes me that people don't get this. Is it some 
derring-do attitude that 'we shall overcome' all obstacles by dint 
of hard-work and resolve? No-one binds together nuts and bolts with 
a hammer, no matter how hard they hit them.

> <snip>
>
>>> So as to clear any confusion, under what circumstance would you
>>> enforce  program termination?
>>
>> When a program violates its design, as detected by the assertions
>> inserted into it by its creator(s). It must always do so.
>
> Sure, but if I write a program that uses your plugin should your 
> plugin  (your creation) dictate to my program (my creation) what 
> its design is? By enforcing program termination, you do so.

If you use my plug-in counter to its design, it might do anything, 
including scrambling your entire hard disk. Once you've stepped outside 
the bounds of its contract, all bets are off. Not to mention the fact 
that it cannot possibly be expected to work.

Let's turn it round:
   1. Why do you want to use a software component contrary to its 
design?
   2. What do you expect it to do for you in that circumstance?
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
On Wed, 13 Apr 2005 08:32:05 +1000, Matthew  
<admin@stlsoft.dot.dot.dot.dot.org> wrote:
> Let's turn it round:
>     1. Why do you want to use a software component contrary to its  
> design?

I don't.

>     2. What do you expect it to do for you in that circumstance?

Nothing.

I expect to be able to disable/stop using a component that fails (in  
whatever fashion) and continue with my primary purpose whatever that  
happens to be.

Under what circumstances do you see the goal above to be impossible?

You've mentioned scrambled memory, and I agree that if your plugin has scrambled
my memory there is no way I can continue sanely. However, how can we
detect that situation? Further, how can I even be sure my program is going
to terminate how I intend/want it to? More likely it crashes somewhere
random.

Regan
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
>>> Why shouldn't Object be throwable? It has useful methods like 
>>> toString() and print() (I'm starting to think print() should 
>>> stay). What would Throwable provide that Object doesn't? It would 
>>> make it harder to throw the wrong thing, I suppose:
>>>  throw new Studebacker()
>>
>> Exactly that. Is that not adequate motivation?
>
> Not to me - but then I haven't seen much newbie code involving 
> exception handling.

Let's just leave that as a philosophical difference then. Nothing to 
be gained here by further debate.

>>> OutOfMemory is special because in typical usage if you run out 
>>> of memory you can't even allocate another exception safely.
>>
>> Indeed, although that's easily obviated by having the exception 
>> ready, on a thread-specific basis. (I'm sure you know this Ben, 
>> but it's worth saying for purposes of general edification.)
>
> Easily? Preallocating exceptions or other objects in case an 
> OutOfMemory is thrown is an advanced maneuver IMO.

Sure, but easy for the designer of a language's core library. 
(They're gonna have a lot harder tasks than that, methinks.)
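
Something along these lines, say - a sketch only, with names of my own
invention, and a real runtime would keep one such instance per thread
rather than one per module, but the idea is the same:

import std.c.stdlib;    // malloc

class NoMemory : Exception
{
    this() { super("out of memory"); }
}

// created in advance, while memory is still plentiful
private NoMemory g_noMemory;

static this()
{
    g_noMemory = new NoMemory;
}

void* checkedAlloc(size_t n)
{
    void* p = malloc(n);
    if (p is null)
        throw g_noMemory;   // nothing needs allocating at the point of failure
    return p;
}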

>>> Memory is the one resource programs have a very very hard time 
>>> running without.
>>
>> "the one"? Surely not. I've already mentioned TSS keys. They're 
>> at least as hard to live without. And what about stack? (Assuming 
>> you mean heap, as I did.)
>
> I view running out of TSS keys as running out of gas in a car - a 
> pain but expected. I view running out of memory as running out of 
> oxygen in the atmosphere - a bigger pain and unexpected.

Hmm. Again, we probably need to agree that this is fatuous to 
debate. (It might be that out-of-mem in C++ never troubles too 
painfully because by the time the exception's caught - usually in 
main()/do_main() - many destructors have been deterministically 
invoked and memory released; especially if you're using memory 
parachutes. In Java, I guess it's different, and the same will apply 
to D, save for a few auto-instances.)
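
For anyone unfamiliar with the term: a "memory parachute" is just a
spare block grabbed at startup and let go of when memory runs out, so
that the error-handling path has room to work. Very roughly, in D, with
invented names:

private byte[] g_parachute;

static this()
{
    g_parachute = new byte[256 * 1024];     // reserve ~256KB up front
}

void releaseParachute()
{
    // called from the out-of-memory handling path: frees the reserve
    // so that logging and a graceful shutdown have room to work
    delete g_parachute;
    g_parachute = null;
}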

>>> It must be catchable because otherwise there's no way to tell if 
>>> a large allocation failed or not.
>>
>> Well, no-one's debating whether or not that, or any other 
>> error/exception, is catchable. (It's worrying me that there are 
>> now two people talking about catchability, as if the possibility 
>> of uncatchability has been raised.)
>
> yup - I just wanted to be clear.

Ok, we're agreed. All things are catchable. Some few will be 
rethrown by the language if you don't do it yourself. Hence, they're 
unquenchable.

>>> Running out of threads is much less catastrophic than running 
>>> out of memory.
>>
>> I agree, but who's talked about threads? I mentioned TSS keys, 
>> but they're not the same thing at all. Maybe someone else has 
>> discussed threads, and I've missed it.
>
> TSS means "thread-specific storage", correct?

It does. And to use it one needs to allocate slots, the keys for 
which are well-known values shared between all threads that act as 
indexes into tables of thread-specific data. One gets at one's TSS 
data by specifying the key, and the TSS library works out the slot 
for the calling thread, and gets/sets the value for that slot for 
you.

TSS underpins multi-threaded libraries - e.g. errno / GetLastError() 
per thread is one of the simpler uses - and running out of TSS 
keys is a catastrophic event. If you run out before you've built the 
runtime structures for your C runtime library, there's really nothing 
you can do, and precious little you can say about it.
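
Schematically, driving the POSIX flavour of it from D - the
declarations and the key name here are mine, and pthread_key_t is
typically an unsigned int on Linux, but check your platform's headers:

extern (C)
{
    alias uint pthread_key_t;
    int   pthread_key_create(pthread_key_t* key, void function(void*) dtor);
    int   pthread_setspecific(pthread_key_t key, void* value);
    void* pthread_getspecific(pthread_key_t key);
}

// a well-known key, shared by all threads
pthread_key_t g_lastErrorKey;

static this()
{
    // if this fails before the runtime structures exist, there's
    // precious little that can be done about it
    int rc = pthread_key_create(&g_lastErrorKey, null);
    assert(0 == rc);
}

void setLastError(int code)
{
    // each thread sees only the value it stored against this key
    pthread_setspecific(g_lastErrorKey, cast(void*) code);
}

int lastError()
{
    return cast(int) pthread_getspecific(g_lastErrorKey);
}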

> I was guessing that was what it meant but maybe I was wrong. 
> Personally

Sure, I think you mentioned problems allocating threads, which is a 
different matter, and I wanted to make clear the distinction, 
otherwise our conversation might look woolly to someone else.

>> But I fear this is what people, through ignorance or laziness or 
>> whatever, are portraying the irrecoverability / "Crashing Early" 
>> debate to be, and it's quite disingenuous. Although there can be 
>> no guarantees in what's supportable behaviour after a contract 
>> violation occurs, practically speaking there's always scope for 
>> creating a log to screen/file, and often for saving current work 
>> (in GUI contexts).
>
> Let me give another example besides BSOD why "unrecoverable" is 
> application-specific. Let's say I'm writing an application like 
> Photoshop or GIMP (or, say, MATLAB) that has a concept of 
> plug-ins. Now if a plug-in asserts and gets itself into a bad 
> state the controlling application must be able to catch that and 
> recover. Any decent application would print out some message like 
> "the plugin Foo had an internal error and has been unloaded" and 
> unload the offending plug-in. It would be unacceptable for the 
> language/run-time to force the controlling application to quit 
> because of a faulty plug-in.

Everyone keeps using vague terms - "a bad state" - which helps their 
cause, I think. ;)

If the plug-in has violated its contract, then the process within 
which it resides must shut down. Naturally, an application that has 
user interaction, and may support user data, should make good 
efforts to save that user's data, otherwise it'll be pretty 
unpopular.

To not unload raises the questions:
   1. Why do you want to use a software component contrary to its 
design?
   2. What do you expect it to do for you in that circumstance?

Instead of thinking of this as an intrusion into one's freedom, why 
not look at it as what it is intended to be, and what it proves to 
be in practice: a very sharp tool for cutting out bugs.

Imagine that applications did as I'm suggesting (and as some 
actually do in reality). In that case, buggy plug-ins would not be 
tolerated. The bugs would be filtered back to their creators 
rapidly. And they'd be well armed to fix them because (i) less time 
would elapse because people wouldn't live with that bug as they 
might otherwise do, and (ii) the bug would manifest close to its 
source: rather than waiting for a crash (in which you could lose 
your data!) the bug would report its context, e.g.

   "./GIMP/Plugins/Xyz/Abc.d;1332: Contract Violation: MattRenderer 
contains active overlays without primary images"

I know from personal experience that this kind of thing leads to 
near-instant diagnosis and very rapid correction.

>  In the same way modern OSes don't quit when an application has an 
> internal error.

This is an exceedingly specious analogy, and I'm surprised at you, 
Ben. You know full well that modern OSs isolate applications from 
each other's address spaces. This is pure misinformation, and makes 
it hard to have a serious debate. People reading the thread for whom 
the subject is new will be unduly influenced by such 
misrepresentations.
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
"Regan Heath" <regan@netwin.co.nz> wrote in message 
news:opso45k5x523k2f5@nrage.netwin.co.nz...
> On Wed, 13 Apr 2005 08:32:05 +1000, Matthew 
> <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>> Let's turn it round:
>>     1. Why do you want to use a software component contrary to 
>> its  design?
>
> I don't.
>
>>     2. What do you expect it to do for you in that circumstance?
>
> Nothing.
>
> I expect to be able to disable/stop using a component that fails 
> (in  whatever fashion) and continue with my primary purpose 
> whatever that  happens to be.

But you can't, don't you see? Once it's experienced a single 
instruction past the point at which it's violated its design, all 
bets are off.

> Under what circumstances do you see the goal above to be 
> impossible?

It is theoretically impossible in all circumstances. (FTR: We're 
only talking about contract violations here. I'm going to keep 
saying that, just to be clear)

It is practically impossible in probably a minority of cases. In, say, 
80% of cases you could carry on quite happily. Maybe it's 90%? Maybe 
even 99%. But the point is that it is absolutely never 100%, and you 
_cannot know_ when that 1%, or 10%, or 20% is going to bite. And 
therefore, if the application is written to attempt recovery of 
invalid behaviour, it is going to get you. One day you'll lose a very 
important piece of user data, or the contents of your hard drive will 
be scrambled.

> You've mentioned scrambled memory and I agree if your plugin has 
> scrambled  my memory there is no way I can continue sanely.

Cool! :-)

> However, how can we  detect that situation?

We cannot. The only person that's got the faintest chance of 
specifying the conditions for which it's not going to happen (at 
least not by design <g>) is the author of the particular piece of 
code. And the way they specify it is to reify the contracts of their 
code in CP constructs: assertions, invariants, etc.
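
Schematically, in D - the class and its rules are invented; it's the
shape that matters:

class Account
{
    private int m_balance;

    invariant
    {
        // the author's statement of what must always hold
        assert(m_balance >= 0);
    }

    void withdraw(int amount)
    in
    {
        // precondition: the caller's side of the bargain
        assert(amount > 0);
        assert(amount <= m_balance);
    }
    out
    {
        // postcondition: the author's side of the bargain
        assert(m_balance >= 0);
    }
    body
    {
        m_balance -= amount;
    }
}

If any of those asserts fires, the code's author is telling the host 
that the component is no longer operating within its design.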

The duty of the programmer of the application that hosts that code 
is to acknowledge the _fact_ that their application is now in an 
invalid state and the _likelihood_ that something bad will happen, 
and to shut down in as timely and graceful a manner as possible.

Since D is (i) new and open to improvement and (ii) currently not 
capable of supporting irrecoverability by library, I am campaigning 
for it to have it built in.

> Further, how can I even be sure my program is going to terminate 
> how I intend/want it to? More likely it crashes somewhere random.

If it experiences a contract violation then, left to its own 
devices, in principle it _will_ crash randomly, and in practice it 
is likely to do so an uncomfortable/unacceptable proportion of the 
time.

If the application is designed (or forced by the language) to 
respect the detection of its invalid state as provided by the code's 
author(s), then you stand a very good chance in practice of being 
able to effect a graceful shutdown. (Even though in principle you 
cannot be sure that you can do so.)
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
Matthew wrote:
> Sorry, again this is completely wrong. Once the programmer is using 
> a plug-in outside the bounds of its correctness *it is impossible* 
> for that programmer to decide what the behaviour of his/her program 
> is.
> 
> It really amazes me that people don't get this. Is it some 
> derring-do attitude that 'we shall overcome' all obstacles by dint 
> of hard-work and resolve? No-one binds together nuts and bolts with 
> a hammer, no matter how hard they hit them.

Your metaphor is just riveting!

Seriously, I get the feeling that you're not getting through, at 
least not with the current tack.

Folks have a hard time understanding that the entire application should 
be shut down just because a single contract has failed. Especially if we 
are talking about plugins. After all, the same plugin may have other 
bugs, etc., and in those cases the mandatory shut-down never becomes an 
issue as long as no contract traps fire.

As I see it, the value in mandatory shutdown with CP is in making it so 
painfully obvious to all concerned (customer, main contractor, 
subcontractor, colleagues, etc.) what went wrong (and whose fault it 
was!) that this simply forces the bug to be fixed in "no time".

We must also remember that not every program is written for the same 
kind of environment. Moving gigabucks is where absolute correctness is a 
must. Another might be hospital equipment. Or space ship software.

But (alas) most programmers are forced to work in environments where you 
debug only enough for the customer to accept your bill. They might find 
the argumentation seen so far in this thread, er, opaque.

This is made even worse with people getting filthy rich peddling 
blatantly inferior programs and "operating systems". Programmers, and 
especially the pointy haired bosses, have a hard time becoming motivated 
to do things Right.
April 12, 2005
Robust, fault-tolerant software is easier to write with D than C++. Or is it? [was: Re: recoverable and unrecoverable errors or exceptions]
>> Matthew: Nonetheless, I do have serious doubts that 
>> irrecoverability will be incorporated into D, since Walter tends 
>> to favour "good enough" solutions rather than aiming for 
>> strict/theoretical best, and because the principles and 
>> advantages of irrecoverability are not yet sufficiently 
>> mainstream. It's a pity though, because it'd really lift D's head 
>> above its peers. (And without it, there'll be another area in 
>> which C++ will continue to be able to claim supremacy, because D 
>> cannot support it in library form.)
>
> Ben: I think Walter has made the right choices - except the 
> hierarchy has gotten out of whack. Robust, fault-tolerant software 
> is easier to write with D than C++.

Bold statement. Can you back it up?

I'm not just being a prick, I am genuinely interested in why people 
vaunt this sentiment so readily. In my personal experience I 
encounter bugs in the code *far* more in D than I do in C++. Now of 
course that's at least in part because I've done a lot of C++ over 
the last 10+ years, but that being the case does not, in and of 
itself, act as a supportive argument for your proposition. (FYI: I 
don't make the same kinds of bugs in C# or in Python or in Ruby, and 
I'm less experienced in those languages than I am in D)

Take a couple of cases:

1. D doesn't have pointers. Sounds great. Except that one can get 
null-pointer violations when attempting value comparisons. I've had 
those a fair amount in D. Not had such a thing in C++ in as long as 
I can remember. C++ cleanly delineates between references (as 
aliases for instances) and pointers. When I type x == y in C++ I 
_know_ - Machiavelli wielding int &x=*(int*)0; aside - that I'm not 
going to have an access violation. I do _not_ know that in D. (There's 
a small snippet of what I mean after these two cases.)

2. Take the current debate about irrecoverability. As I've said I've 
been using irrecoverable CP in the real world in a pretty 
high-stakes project - lots of AU$ to be lost! - over the last year, 
and its effect has been to save time and increase robustness, to a 
surprising (including to me) degree: system testing/production had 
only two bugs. One was diagnosed within minutes of a halted process 
with "file(line): VIOLATION: <details here>", and was fixed and 
running in less than two hours. The other took about a week of 
incredibly hard debugging, walk-throughs, arguments, and rants and 
raves, because, ironically, I'd _not_ added some contract 
enforcements I'd deemed never-going-to-happen!! So, unless and until 
I hear from people with _practical experience_ of these techniques 
that they've had bad experiences - and the only things I read about 
from people such as the Pragmatic Programmers is in line with my 
experience - I cannot be anything but convinced of their power to 
increase robustness and aid development and testing effectiveness 
and efficiency. Now C++ has deterministic destruction, which means I 
was easily able to create an unrecoverable exception type - found in 
<stlsoft/unrecoverable.hpp> for anyone interested - to get the 
behaviour I need. D does not support deterministic destruction of 
thrown exceptions, so it is not possible to provide irrecoverability 
in D.

Score 0 for 2. And so it might go on.
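
(The snippet promised above, re case 1 - contrived, but exactly the
kind of thing I mean:)

class Point
{
    int x, y;
}

void main()
{
    Point a;                // class references default to null in D
    Point b = new Point;

    if (a == b)             // == on class objects is a call to a.opEquals(b),
    {                       // and with a null that is an access violation
        // ...
    }
}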

Naturally, this is my perspective, and I don't seek to imply that my 
perspective is any more absolute than anyone else's. But that being 
the case, such blanket statements about D's superiority, when used 
as a palliative in debates concerning improvements to D, are not 
worth very much.

I'm keen to hear from others from all backgrounds, including C++, 
their take on Ben's statement (with accompanying rationale, of 
course).

Cheers

Matthew
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
On Wed, 13 Apr 2005 09:04:15 +1000, Matthew  
<admin@stlsoft.dot.dot.dot.dot.org> wrote:
> "Regan Heath" <regan@netwin.co.nz> wrote in message
> news:opso45k5x523k2f5@nrage.netwin.co.nz...
>> On Wed, 13 Apr 2005 08:32:05 +1000, Matthew
>> <admin@stlsoft.dot.dot.dot.dot.org> wrote:
>>> Let's turn it round:
>>>     1. Why do you want to use a software component contrary to
>>> its  design?
>>
>> I don't.
>>
>>>     2. What do you expect it to do for you in that circumstance?
>>
>> Nothing.
>>
>> I expect to be able to disable/stop using a component that fails
>> (in  whatever fashion) and continue with my primary purpose
>> whatever that  happens to be.
>
> But you can't, don't you see. Once it's experienced a single
> instruction past the point at which it's violated its design, all
> bets are off.

The module/plugin won't execute a single instruction past the point at
which it violates its design. It will be killed. My program hasn't
violated its design at all.

The only problem I can see occurs when the module/plugin corrupts my
program's memory space.

>> Under what circumstances do you see the goal above to be
>> impossible?
>
> It is theoretically impossible in all circumstances. (FTR: We're
> only talking about contract violations here. I'm going to keep
> saying that, just to be clear)

Can you give me an example of the sort of contract violation you're
referring to? I'm seeing...

class Foo {
  int a;

  invariant {
    assert(a == 5);    // fires when the contract is violated
  }
}

which can be caused by any number of things:
 - buggy algorithm
 - unexpected input, without an assertion
 - memory corruption

so if this occurs in a plugin/module, only the last one can possibly
corrupt the main program.

> It is practically impossible in probably a minority of cases. In, say,
> 80% of cases you could carry on quite happily. Maybe it's 90%? Maybe
> even 99%. But the point is that it is absolutely never 100%, and you
> _cannot know_ when that 1%, or 10% or 20% is going to bite.

Agreed.

> And
> therefore if the application is written to attempt recovery of
> invalid behaviour it is going to get you. One day you'll lose a very
> important piece of user data, or the contents of your hard drive will be
> scrambled.

I don't want the plugin/module that has asserted to continue, I want it to
die. I want my main program to continue. The only situation I can see
where this is likely to cause a problem is memory corruption, and in that
case, yes, it's possible it will have the effects you describe.

As you say above, the probability is very small; thus each application
needs to make the decision about whether to continue or not. For some the
risk might be acceptable, for others it might not.

>> However, how can we  detect that situation?
>
> We cannot. The only person that's got the faintest chance of
> specifying the conditions for which it's not going to happen (at
> least not by design <g>) is the author of the particular piece of
> code. And the way they specify it is to reify the contracts of their
> code in CP constructs: assertions, invariants, etc.
>
> The duty of the programmer of the application that hosts that code
> is to acknowledge the _fact_ that their application is now in an
> invalid state and the _likelihood_ that something bad will happen,
> and to shut down in as timely and graceful a manner as possible.

If that is what they want to do, they could equally decide the risk was  
small (as it is) and continue.

> Since D is (i) new and open to improvement and (ii) currently not
> capable of supporting irrecoverability by library, I am campaigning
> for it to have it built in.

I'd prefer an optional library solution. For reasons expressed in this  
post/thread.

>> Further, how can I even be sure my program is going to terminate
>> how I intend/want it to? More likely it crashes somewhere random.
>
> If it experiences a contract violation then, left to its own
> devices, in principle it _will_ crash randomly

It _might_ crash, if the assertion was due to memory corruption, and even
then the corruption might be localised to the module in which the
assertion was raised; if so, it has no effect on the main program.

> , and in practice it
> is likely to do so an uncomfortable/unacceptable proportion of the
> time.

Unacceptable to whom? You? Me? The programmer of application X, 10 years in
the future?

> If the application is designed (or forced by the language) to
> respect the detection of its invalid state as provided by the code's
> author(s), then you stand a very good chance in practice of being
> able to effect a graceful shutdown. (Even though in principle you
> cannot be sure that you can do so.)

I think you stand a very good chance of annoying the hell out of a future  
program author by forcing him/her into a design methodology that they do  
not aspire to, whether it's correct or not.

For the record, I do agree that failing hard and fast is usually the best
practice. I just don't believe people should be forced into it, all the
time.

Regan
April 12, 2005
Re: recoverable and unrecoverable errors or exceptions
On Wed, 13 Apr 2005 02:21:07 +0300, Georg Wrede <georg.wrede@nospam.org>  
wrote:
> Matthew wrote:
>> Sorry, again this is completely wrong. Once the programmer is using a  
>> plug-in outside the bounds of its correctness *it is impossible* for  
>> that programmer to decide what the behaviour of his/her program is.
>>  It really amazes me that people don't get this. Is it some derring-do  
>> attitude that 'we shall overcome' all obstacles by dint of hard-work  
>> and resolve? No-one binds together nuts and bolts with a hammer, no  
>> matter how hard they hit them.
>
> Your metaphor is just riveting!
>
> Seriously, I get the feeling that you're not getting through, at  
> least not with the current tack.
>
> Folks have a hard time understanding that the entire application should  
> be shut down just because a single contract has failed. Especially if we  
> are talking about plugins. After all, the same plugin may have other  
> bugs, etc., and in those cases the mandatory shut-down never becomes an  
> issue as long as no contract traps fire.
>
> As I see it, the value in mandatory shutdown with CP is in making it so  
> painfully obvious to all concerned (customer, main contractor,  
> subcontractor, colleagues, etc.) what went wrong (and whose fault it  
> was!) that this simply forces the bug to be fixed in "no time".
>
> We must also remember that not every program is written for the same  
> kind of environment. Moving gigabucks is where absolute correctness is a  
> must. Another might be hospital equipment. Or space ship software.
>
> But (alas) most programmers are forced to work in environments where you  
> debug only enough for the customer to accept your bill. They might find  
> the argumentation seen so far in this thread, er, opaque.

Or they, like you Georg, can see how it happens in the real world, despite  
it not being "Right".

> This is made even worse with people getting filthy rich peddling  
> blatantly inferior programs and "operating systems". Programmers, and  
> especially the pointy haired bosses, have a hard time becoming motivated  
> to do things Right.

Amen. (to Bob)

The real world so often intrudes on purity of design. I can understand
Matthew's position, where he's coming from. For the most part I agree with
his points/concerns. I just don't think it's the right thing for D to
enforce; it's not flexible enough for real-world situations. Perhaps
Matthew is right, and we should beat the world into submission, but I
think a better tack is to subvert it slowly to our design. You don't throw
a frog into boiling water - it will jump out; instead you heat it slowly.

Regan