April 13, 2005
> Well, what you said also misses the point.. In principle, any action performed by a plug-in (or core) is indeterminate and whatever, it doesn't matter whether it has already produced a CP error or not.

The difference is that the designer of the code has designated some things to be design violations. Of course other bugs can and will occur. No-one's ever claimed that using CP magically ensures total coverage of all possible errors.

> I fail to see what the big difference is between
>
> assert(a!==null)
>
> and
>
> if (a is null)
>     throw new IllegalArgumentException()
>
> They both prevent the code that follows from running in the case where the supplied parameter is null, so you can't run it with invalid parameters, which is the whole purpose of those two statements.

You're correct. There is no difference.
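For concreteness, the two forms might be written in D as follows (just a sketch; IllegalArgumentException is your name for it, not a Phobos class):

class IllegalArgumentException : Exception
{
    this(string msg) { super(msg); }
}

// Contract style: a failed check throws an AssertError.
void useContract(int* a)
in { assert(a !is null); }
do
{
    // ... a may safely be dereferenced here
}

// Defensive style: a failed check throws a catchable exception.
void useDefensive(int* a)
{
    if (a is null)
        throw new IllegalArgumentException("a must not be null");
    // ... a may safely be dereferenced here
}

Either way, the code following the check never runs with a null argument.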

> Why is it so hard for you to admit that it is possible to have CP errors in code that cannot corrupt your app?

    In principle it is not so.
    In practice it often is (albeit you cannot know).

I've said that a hundred times.

If you want me to say that, in principle, there are CP errors that cannot corrupt your app, then I won't because it ain't so. There's a beguiling thought that preconditions can be classed in that way, but it doesn't pan out. (And I'm not going to blather on about that because I think everyone's heartily sick of the debate by now. I know I am <g>)

> I mean, if you give a plugin what is basically a read-only view of your data, it shouldn't be able to corrupt it. Sure it can, if it wants to, but that has nothing to do with CP, exceptions, errors or whatever.

At the risk of being called an arrogant whatever by fragile types, I don't think you're understanding the concept correctly.

The case you give is an interesting one. I've recently added a string_view type to the STLSoft libraries, which effectively gives slices to C++ (albeit one must needs be aware of the lifetime of the source memory, unlike in D).

So, a string_view is a length + pointer.

One of the invariants we would have in such a class is that if length is non-zero, the pointer must not be NULL.

Now, if we used that in a plug-in, we might see something like:

class IPlugIn
{
public:
    virtual int findNumStringsInBlock(char const *s) const = 0;

    virtual void release() = 0;
};

class MyPlugIn
    : public IPlugIn
{
public:
    MyPlugIn(char const *pMem, size_t cbMem)
        : m_view(pMem, cbMem)   // hold a view over the host's memory
    {}
    virtual int findNumStringsInBlock(char const *s) const;
    virtual void release();
private:
    ::stlsoft::basic_string_view<char>    m_view;
};

bool MyPlugIn_Entry( < some initialisation parameters > ,
                     char const *pMemToManipulate,
                     size_t cbMemToManipulate,
                     IPlugIn **ppPlugIn)
{
    . . .

    *ppPlugIn = new MyPlugIn(pMemToManipulate, cbMemToManipulate);

    . . .
}


So the container application loads the plug-in via an entry point MyPlugIn_Entry() in its dynamic lib, and gets back an object expressing the IPlugIn interface.

At some point it'll then ask the plug-in to count the number of strings in the block with which it was initialised, and on which it is holding a view via the m_view instance.

Let's say, for argument's sake, that the ctor for string_view didn't do the argument check, but that the find() method does. Let's further say that pMemToManipulate was passed a NULL by the containing application, but cbMemToManipulate was passed 10. Hence, when the containing app calls findNumStringsInBlock(), the invariant will fire.


int MyPlugIn::findNumStringsInBlock(char const *s) const
{
    . . .

    m_view.find(s); // Invariant fires here!

    . . .
}

Now MyPlugIn is not changing any of the memory it's been asked to work with. But the design of one of the components has been violated. (Of course, we'd have a check in the plug-in entry point, but that's just implementation detail. In principle the violation is meaningful at whatever level it happens.)
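For reference, the same rule can be sketched directly in D's own contract syntax (illustrative names only; this is not STLSoft's actual code):

struct StringView
{
    size_t length;
    const(char)* ptr;

    invariant()
    {
        // the designated design rule: a non-empty view must point at memory
        assert(length == 0 || ptr !is null);
    }

    ptrdiff_t find(const(char)[] needle) const
    {
        if (needle.length > length)
            return -1;
        foreach (i; 0 .. length - needle.length + 1)
            if (ptr[i .. i + needle.length] == needle)
                return cast(ptrdiff_t) i;
        return -1;
    }
}

// auto v = StringView(10, null); // the ctor does no check, as in the scenario
// v.find("abc");                 // the invariant fires on entry to find()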


>> So, looking back at your para "If the code of the app is able to disable the plugin without shutting down" applies to Exceptions (and Exhaustions), whereas "if it is not clear whether the app is in a consistent state, I agree shutting down completely is the best thing to do" applies to Errors. These two things hold, of course, since they are the definitions of Exceptions and Errors.
>
> Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.

That's its definition. Nothing to do with me.

There's no choice about it. If you're outside the design, you're outside the design. Just because the program _may_ operate, in a given instance, correctly, from a particular perspective, does not make it in-design.

This is a crucial point, and there's no going forward without it.

>>>>>I mean, when I do something like inspect a value in a debugger, and the value is 150MB and the debugger runs out of memory, I don't want it to stop, just because it can't show me a var (which can just as easily manifest as an assertion error or whatever)...
>>>>
>>>>Out of memory is not an error, it's an exception, so that's just not an issue.
>>>
>>>Like I said, out of memory can just as well manifest itself later as a broken contract or whatever.
>>
>>
>> No, it cannot.
>
> Sure it can:
>
> int[] allocIfYouCan(int size)
> {
>     try {
>         return new int[size];
>     } catch (OutOfMemory ignored) {
>         return null;
>     }
> }
>
> void doSomething(int[] arr)
> {
>    assert(arr!==null);
> }
>
> doSomething(allocIfYouCan(10000000));
>
> Obviously, allocIfYouCan() is not a good idea, but it can still happen.

No, what that actually amounts to is a programmer mistake in applying functions with incompatible contracts.


>> [snip]
>> So (hopefully) you see that it is impossible to ever state
>> (with even practical certainty) that "a faulty part of the
>> application should not be taken as if the whole application
>> is faulty".
>
> No, I don't.. There are parts and there are parts. You're saying that even the tiniest error (but only if it was specified in a contract) should abort the app, I'm just saying that in some cases it's possible for that to be overreacting.

Sigh. As I've said a hundred times, in principle: no, in practice: sometimes. But there's no determinism about the sometimes. That's the point.

>>>OK, it is obviously desired in some cases, so I agree it should be supported by the language, BUT, none of the built-in exceptions should then be unrecoverable.
>>
>> Well again, I get the feeling that you think I've implied that a wide range of things should be unrecoverable. I have not, and perhaps I should say so explicitly now: AFAIK, only contract violations should be Errors (i.e. unrecoverable), since only they are a message from the author(s) of the code to say when it's become invalid.
>
> How has the code become invalid exactly? The code doesn't change when it violates a contract.

It is invalid because it is operating, or has been asked to operate, outside its design. And we know that *only* because the author of the program has said so, by putting in the tests.

> For another example, let's say you have a DB-handling class with an invariant that the connection is always open to the DB (and that is a totally normal invariant). If you reboot the DB server, which is better - to kill all applications that use such class out there, or for them to reconnect?

If that is the design, then yes. However, I would say that's a bad design. If the DB server can be rebooted, then that's a realistic runtime condition, and should therefore be dealt with as an exception.
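To illustrate, that alternative design could be sketched like so (hypothetical types throughout; not prescriptive):

// A dropped connection is a realistic runtime condition, so it is modelled
// as a recoverable exception at the boundary, not as a class invariant.
class DbDisconnected : Exception
{
    this(string msg) { super(msg); }
}

class DbConnection   // stub standing in for a real driver handle
{
    bool isAlive = true;
    void run(string sql) { /* talk to the server */ }
    static DbConnection open() { return new DbConnection; }
}

class DbClient
{
    private DbConnection conn;

    this() { conn = DbConnection.open(); }

    void query(string sql)
    {
        if (!conn.isAlive)
            throw new DbDisconnected("connection lost; caller may reconnect");
        conn.run(sql);
    }

    void reconnect() { conn = DbConnection.open(); }
}

A caller that catches DbDisconnected can call reconnect() and retry; nothing in the class's design is violated by the server going away.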

>> Only the author can know. Not the users of the libraries,
>> not the users of any programs, not you or me, not even the designer of the language can make that assertion.
>
> Yup, and the author should have the option of using D's CP constructs without making his app die on every single error that happens.

Which author? Library, or client code?

That's going to be the tricky challenge for us, if we go for separate +CP / -CP libraries.



April 13, 2005
>>I fail to see what the big difference is between
>>
>>assert(a!==null)
>>
>>and
>>
>>if (a is null)
>>    throw new IllegalArgumentException()
>>
>>They both prevent the code that follows from running in the case where the supplied parameter is null, so you can't run it with invalid parameters, which is the whole purpose of those two statements.
> 
> You're correct. There is no difference.

So why should the first one be treated differently than the second one?


>>Why is it so hard for you to admit that it is possible to have CP errors in code that cannot corrupt your app?
> 
> 
>     In principle it is not so.
>     In practice it often is (albeit you cannot know).

What's with the principle/practice distinction you're constantly making? What percentage of the apps on your computer were coded "in principle"? I believe the answer is 0%.

And my whole point is that you can know in some cases. For the umpteenth time, I'm just saying the coder should have the choice on how to handle errors. That includes both having unstoppable throwables and the option to handle CP violations, so he is able to choose whichever he thinks is better.

If you ask me, instead of this whole argument, we should be persuading Walter to include a critical_assert() that throws an unstoppable something, and we'll both be happy? (and BTW, as soon as it's catchable, one can start a new thread and never exit the catch block in the first one, so there goes your unrecoverability anyway)
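Something like this, say (entirely hypothetical; neither name exists in D):

class CriticalError : Error
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}

void critical_assert(bool cond, string msg = "critical contract violated",
                     string file = __FILE__, size_t line = __LINE__)
{
    if (!cond)
        throw new CriticalError(msg, file, line);
}

Ordinary assert() failures could then be handled by policy, while critical_assert() marks the checks whose failure must take the app down.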


>>I mean, if you give a plugin what is basically a read-only view of your data, it shouldn't be able to corrupt it. Sure it can, if it wants to, but that has nothing to do with CP, exceptions, errors or whatever.
> 
> [snip]

> Now MyPlugIn is not changing any of the memory it's been asked to work with. But the design of one of the components has been violated. (Of course, we'd have a check in the plug-in entry point, but that's just implementation detail. In principle the violation is meaningful at whatever level it happens.)

Didn't you just prove my point? Even though a CP violation occurred, the state of the app is exactly the same as before, except that after the violation, the app knows the plugin is faulty and can disable it, so in a way, the state is actually better than before.
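In other words, the containing app could do something like this (a sketch, assuming the violation surfaces as a catchable object; IPlugIn mirrors the interface from the earlier post):

interface IPlugIn
{
    int findNumStringsInBlock(const(char)* s);
    void release();
}

int countStringsGuarded(IPlugIn plugin, out bool faulty)
{
    try
        return plugin.findNumStringsInBlock("needle".ptr);
    catch (Error)   // a contract violation inside the plug-in
    {
        faulty = true;     // the app now *knows* this component is bad
        plugin.release();  // quarantine it; the host's data was read-only
        return 0;
    }
}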


>>Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
> 
> That's its definition. Nothing to do with me.

Well, OK, in the strictest sense it is by definition inconsistent.


> There's no choice about it. If you're outside the design, you're outside the design. Just because the program _may_ operate, in a given instance, correctly, from a particular perspective, does not make it in-design.

True.

> This is a crucial point, and there's no going forward without it.

I agree again, if you're outside the design, you're definitely not inside the design.


>> [snip]
> No, what that actually amounts to is a programmer mistake in applying functions with incompatible contracts.

Of course it is, but it's still an OutOfMemory manifesting as a CP violation.


> Sigh. As I've said a hundred times, in principle: no, in practice: sometimes. But there's no determinism about the sometimes. That's the point.

In principle, there is no determinism, but in practice there can be. You're claiming there can never be determinism and that's what I don't agree with.


>>>Only the author can know. Not the users of the libraries,
>>>not the users of any programs, not you or me, not even the designer of the language can make that assertion.
>>
>>Yup, and the author should have the option of using D's CP constructs without making his app die on every single error that happens.
> 
> Which author? Library, or client code?

At least the one that wrote the contracts, I guess?

And, for a few more examples:

I. Say you have some image processing app: you give it the name of a directory, and it goes through all the .tif files and processes them in some manner. There are two components involved: one scans the dir for all files ending with .tif and passes each to the other one, which does the actual processing. Say the processing takes a long time, so you run it before you go home from work. You see that it processed a few files and leave. The processing component has a contract that only valid .tif files can be passed to it. The 15th of 5000 files is corrupt. (A sketch of how option a) might be built follows these examples.) Will you be happier in the morning if
a) the app processed 4999 files and left you a note about the error
b) the app processed 14 files and left you a note about the error

II. You have a drawing app. You draw a single image for weeks and weeks and the day before the deadline, you're done. You make a backup copy and, finally, go home. You leave your computer switched on, but there is a fire later, and the sprinkler system destroys your computer, so you only have the backup. Each shape in the image is stored in a simple format - TYPE_OF_SHAPE NUMBER_OF_COORDS COORD[]. But, on the CD, a single byte was written wrong. The app is made of several components, one of which reads files, and another one draws them. The second one has a contract that the TYPE_OF_SHAPE can only be 1 or 2, but the wrong byte has the value 3. The reader doesn't care, because it is still able to read the file. Will you be happier if
a) the app will display/print your image, except for the wrong shape
b) you'll get to see that there is a bug (in principle, the reader should check the type, but it doesn't), but you'll have to wait 2 months for an update, before you can open your image again (of course, missing the deadline along the way, and wasting those weeks of effort)

III. You're using a spreadsheet and type in numbers for a few hours. There's an FDIV bug in your processor. Somehow you manage to hit it, causing the contract of the division code to fail, because it checks the results. Will you be happier if you
a) get an error message, and the only thing you can do is to save the data, including the expression that causes the fault (so the next time you open it, the same thing will happen again), or
b) get an error message stating that a result may be wrong, but are able to undo the last thing you typed?
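For scenario I, option a) just means isolating each file at the component boundary. A sketch (processTif is hypothetical; a corrupt file is rejected there as a catchable exception, so one bad input costs one file, not the batch):

import std.file : dirEntries, SpanMode;
import std.stdio : stderr;

void processAll(string dir, void delegate(string path) processTif)
{
    foreach (entry; dirEntries(dir, "*.tif", SpanMode.shallow))
    {
        try
            processTif(entry.name);
        catch (Exception e)
            stderr.writeln("skipped ", entry.name, ": ", e.msg);
    }
}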


xs0
April 14, 2005
"Maxime Larose" <mlarose@broadsoft.com> wrote in message news:d3jd0v$1s8q$1@digitaldaemon.com...
> Your points about throwing Objects are well noted. I didn't realize that
> the toString and print functions would solve most of the problems I
> mentioned. I still believe throwing a specific class (or interface) is
> better in the
> case more stuff creeps in (like the stack traces). I mean, that's the
> whole idea behind specializing (a class) in the first place right? In
> fact, the
> main idea behind OO inheritance in general. Why would an Object be
> throwable, when you can have a specialized Throwable class (or whatever)
> that offers specialized services (like take a snapshot of the stack trace
> at construction). IMO, it is better to make these kinds of
> used-all-over-the-place-and-then-some classes/constructs (exceptions,
> strings, etc.) thinking well into the future. Obviously, not all future
> cases can be thought of now. However, if you foresee a possible change and
> if, all other things being equal, a design better accomodates the change
> than another, why not use the better accomodating design?

Exception is, practically speaking, the root of the exception tree, so it can get the stack trace capabilities. One issue that makes me nervous about getting the stack trace for all exceptions is what to do with OutOfMemory, since there might not be space for the stack trace. Currently the OutOfMemory exception is a singleton (it throws the classinfo's .init value), so attaching a stack trace to it would be troublesome. Presumably then OutOfMemory would not save or print any stack trace (not that it would be a huge loss). In my initial proposed hierarchy, since OutOfMemory wouldn't subclass Exception (just like today), it wouldn't get the stack trace API. Introducing a class between Object and Exception would be fine with me as long as it served a practical purpose - but that depends on the details of the exception inheritance tree.
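By a singleton I mean roughly the following pattern (a sketch of the general idea, not the actual mechanism, which throws the classinfo's .init value directly):

// The error object is created up front, so raising it allocates nothing at
// the moment memory is already exhausted. That is also why attaching a
// per-throw stack trace to it would be troublesome.
class OutOfMemorySketch : Error
{
    private static OutOfMemorySketch instance;

    static this() { instance = new OutOfMemorySketch; }

    private this() { super("out of memory"); }

    static void raise() { throw instance; }
}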

> You ask about examples using checked exceptions... Hmmm... I guess you could say that checked exceptions are a very useful part of contract programming.

I am not saying they aren't - I'm just saying if you want to convince Walter I would recommend reading his past posts about checked exceptions and address his concerns. I vaguely remember his concerns are that many times Java coders (just for example) don't pay enough attention to checked exceptions and in the process of shutting up the compiler the coder winds up doing more harm than good. In a perfect world checked exceptions are wonderful - but we don't live in a perfect world, unfortunately.

[snip rest of checked exceptions paragraph to shorten reply post]


April 14, 2005
> I think the Object base class violates this to some people because it
> violates the "what the hell is this?" principle (which I just made up).
> If an application throws something that is not an exception (ie. that
> doesn't describe itself in some way) then the client has no idea what
> the error was.

And throwing something that subclasses Exception instead of Object will add
more information about what the error was? It has the same toString and
print methods as Object. Checking strings for information is not i18n-safe,
so no code should start parsing strings or messages to drive program logic.
The only reasonable thing for code to do is look at inheritance by
dynamically casting to what it knows how to deal with. Or it can just catch
what it knows how to deal with in the first place :-)

Or when you say client do you mean the human client? I assumed you meant
client code.

> Sure I could throw a ClientAccount, but what was the error that caused
> the problem in the first place? Printing the ClientAccount won't help
> anyone in determining that. It would make much more sense to wrap it in
> an Exception with a bit of descriptive information.

So don't throw a ClientAccount :-)


April 14, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message
> And throwing something that subclasses Exception instead of Object will
> add more information about what the error was? It has the same toString
> and print methods as Object.

So why not move print() to the Exception root? Or why have print() at all,
if you can call toString() on it? Just exactly how often does a programmer
invoke Exception.print()? And why can't they just call print(exception)
instead? There's something about tight coupling at this level that really
troubles me.

> Checking strings for information is not i18n-safe, so no code should start parsing strings or messages to drive program logic.

And print()/printf() handles i18n? If the exception root /must/ have a
print() method, surely it should be made pluggable by calling some
externally defined function dedicated to the task? Hard-coding printf(), or
anything else, anywhere in there is just totally bogus for all kinds of
reasons; in my terribly humble and grouchy opinion :{

All the more reason for calling print(exception) instead, where that function is defined somewhere in the IO layer.
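Something as simple as this would do (a sketch; this free print() is hypothetical, not an existing API):

import std.stdio : File, stderr;

// The exception root then needs nothing beyond toString(), and the IO
// layer owns the formatting and destination (and could be swapped for an
// i18n-aware implementation).
void print(Throwable t, File sink = stderr)
{
    sink.writeln(t.toString());
}

Client code then writes print(e) rather than e.print().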

2c.

- Kris


April 14, 2005
"Kris" <fu@bar.com> wrote in message news:d3kjqb$2sk4$1@digitaldaemon.com...
> "Ben Hinkle" <ben.hinkle@gmail.com> wrote in message
>> And throwing something that subclasses Exception instead of Object will
>> add more information about what the error was? It has the same toString
>> and print methods as Object.
>
> So why not move print() to the Exception root? Or why have print() at
> all, if you can call toString() on it? Just exactly how often does a
> programmer invoke Exception.print()? And why can't they just call
> print(exception) instead? There's something about tight coupling at
> this level that really troubles me.

Hear, hear.

>> Checking strings for information is not i18n-safe, so no code should start parsing strings or messages to drive program logic.
>
> And print()/printf() handles i18n? If the exception root /must/ have a
> print() method, surely it should be made pluggable by calling some
> externally defined function dedicated to the task? Hard-coding printf(),
> or anything else, anywhere in there is just totally bogus for all kinds
> of reasons; in my terribly humble and grouchy opinion :{
>
> All the more reason for calling print(exception) instead, where that
> function is defined somewhere in the IO layer.

And again


April 14, 2005
I'm a newbie to D here, but as far as I have been able to tell from this thread, there are various overlapping arguments here.

Whilst talking of Eclipse and its plugin-based design, we are looking at a loosely coupled system. This by definition would have contracts with a greater tolerance for errors (NOT exceptions). In strictly contract-bound designs, the contracts would generate more irrecoverable errors because the contract would bind "harder".

Again, not knowing much about D, I would say modifiers or some other mechanism could be found to make sure that contracts can be "tagged" with the degree of violation of the contract that the system can sustain. The point here is that it is up to the library or framework designer to decide what part of a contract is critical for his audience, and what part can be accepted. Although there is still a great deal of argument over C++'s const modifiers, they were a solution that enabled this kind of tagging of contracts (at least up to some level).
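For instance, something along these lines (all names hypothetical; just the shape of the idea):

// Each check carries the degree of violation the system can sustain, and
// the host decides the policy per severity, instead of the language
// hard-wiring a single response.
enum Severity { tolerable, critical }

class ContractViolation : Error
{
    Severity severity;

    this(Severity s, string msg)
    {
        super(msg);
        severity = s;
    }
}

void require(bool cond, Severity s, string msg)
{
    if (!cond)
        throw new ContractViolation(s, msg);
}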

Just my 2 cents and hope it helps


April 14, 2005
All right.

I'm not sure I want to give myself the trouble of sifting through tons of old threads to try and convince Walter about the benefits of checked exceptions. They are the same as CP, so that he disagrees is a bit strange, but... and because people shut up the compiler?!? Let them shut up the compiler if they so desire. Their app is their own business... For all practical purposes, it is better to receive a compiler error that some few people will then shut up than not to receive such errors in the first place... Anyway... In fact, I *am* sure I don't want to give myself that trouble.

He has his biases, like any of us do, and I guess he's the one making the calls for now. ("For now" is not meant to patronize or imply he is not doing a good job. I mean it in the sense that as D gets more accepted - something we all hope - there will come a point where moving from one-man-decides-all to a committee kind of arrangement will be necessary. It is the way of life and will have to be done for the good of D. On the other hand, from his point of view it must be awful to see all sorts of weird proposals, everyone trying to pull in their own direction. I totally agree with the fact that it is entirely his endeavour and he has every right to choose what to put in the language and what not to put in the language. D is like any other: it has advantages and disadvantages, in large part brought in by the language designer!)

That some Exceptions not be stack traced is quite OK with me. Either you make the non-stack-traced exceptions separate from the inheritance tree, or you remove the stack tracing method from the non-stack-traced Exceptions. The latter is preferable, because it is a difference in *behavior*, not in *is-a*. In other words, a non-stack-traced exception is an Exception, but has a different behavior. That's the whole point behind sub-classing: the sub-class, while "being a" super-class, has a different behavior.

So, in fact, the best design to me seems to be very similar to what Java has done (with the *very* important distinction that Throwable is unchecked):

Throwable (has stack tracing abilities)
  |
  -> Exception (usually caught)
  |     |
  |     -> CheckedException (obviously, the compiler has to enforce checked semantics)
  |           |
  |           -> User-defined classes (*no* system exceptions here)
  |
  -> Error (usually only caught by main for reporting before exiting - if caught at all)
        |
        -> OutOfMemoryError (overrides stack tracing)

- OR -

...
  -> Error
        |
        -> NonStackTracedError
              |
              -> OutOfMemoryError

I believe #1 is better, as there shouldn't be other non-stack-traced throwables...

Anyway, I would love to sink my teeth into your proposal. I don't care at all if you don't agree with me on a few points. We agree on a lot of points. (Parenthesis: After having been away from newsgroups and such for a very long time, coming back here feels eerie... You'd expect less dogma from people supposed to be "thinking-men"...)

My offer to implement stack tracing still stands. The more I think about it, the more it seems to me like the way to go. I am still waiting on Walter's reply on that issue (hoping the email address I had was good.)

Have a nice day,

Max




"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3khfn$2r30$1@digitaldaemon.com...
>
> "Maxime Larose" <mlarose@broadsoft.com> wrote in message news:d3jd0v$1s8q$1@digitaldaemon.com...
> > Your points about throwing Objects are well noted. I didn't realize that
> > the toString and print functions would solve most of the problems I
> > mentionned.I still believe throwing a specific class (or interface) is
> > better in the
> > case more stuff creeps in (like the stack traces). I mean, that's the
> > whole idea behind specializing (a class) in the first place right? In
> > fact, the
> > main idea behind OO inheritance in general. Why would an Object be
> > throwable, when you can have a specialized Throwable class (or whatever)
> > that offers specialized services (like take a snapshot of the stack
trace
> > at construction). IMO, it is better to make these kinds of used-all-over-the-place-and-then-some classes/constructs (exceptions, strings, etc.) thinking well into the future. Obviously, not all future cases can be thought of now. However, if you foresee a possible change
and
> > if, all other things being equal, a design better accomodates the change than another, why not use the better accomodating design?
>
> Exception is practically speaking the root of the exception tree. One
issue
> that makes me nervous about getting the stack trace for all exceptions is what to do with OutOfMemory since there might not be space for the stack trace. Currently the OutOfMemory exception is a singleton (it throws the classinfo's .init value) so attaching a stack trace to it would be troublesome. Presumably then OutOfMemory would not save or print any stack trace (not that it would be a huge loss). Practically speaking Exception
is
> the root of the exception tree so it can get the stack trace capabilities. In my initial proposed hierarchy since OutOfMemory wouldn't subclass Exception (just like today) then it wouldn't get the stack trace API. Introducing a class between Object and Exception would be fine with me as long as it served a practical purpose - but that depends on the details of the exception inheritance tree.
>
> > You ask about examples using checked exceptions... Hmmm... I guess you could say that checked exceptions are a very useful part of contract programming.
>
> I am not saying they aren't - I'm just saying if you want to convince
Walter
> I would recommend reading his past posts about checked exceptions and address his concerns. I vaguely remember his concerns are that many times Java coders (just for example) don't pay enough attention to checked exceptions and in the process of shutting up the compiler the coder winds
up
> doing more harm than good. In a perfect world checked exceptions are wonderful - but we don't live in a perfect world, unfortunately.
>
> [snip rest of checked exceptions paragraph to shorten reply post]
>
>


April 15, 2005
"Ben Hinkle" <ben.hinkle@gmail.com> wrote in message news:d3hnug$flk$1@digitaldaemon.com...
> Hard failures during debugging is fine. My own experience in code
robustness
> comes from working with engineering companies (who use MATLAB to generate code for cars and planes) where an unrecoverable error means your car
shuts
> down when you are doing 65 on the highway. Or imagine if that happens with an airplane. That is not acceptable. They have drilled over and over into our heads that a catastrophic error means people die. I don't mean to be overly dramatic but it is a fact.

The reason airliners are safe is because they are designed to be tolerant of any single failure. Computer systems are very unreliable, and the first thing the designer thinks of is "assume the computer system goes berserk and does the worst thing possible, how do I design the system to prevent that from bringing down the airliner?"

Having worked on airliner design, I know how computer controlled subsystems handle self-detected faults. They do it by shutting themselves down and switching to the backup system. They don't try to soldier on. To do so would be to, by definition, be operating in an undefined, untested, and unknown configuration. I wouldn't want to bet my life on that.

Even if the software was perfect, which it never is, the chips themselves are both prone to random failure and are uninspectable. Therefore, in my opinion from having worked on systems that must be safe, a system that cannot stand a catastrophic failure of a computer system is an inherently unsafe design to begin with. Making the computer more reliable does not solve the problem.

What CP provides is another layer of security offering the capability of a program to self-detect a fault. The only reasonable thing it can do then is shut itself down and engage the backup.

But if you're writing, say, a word processor, one might decide to attempt to save the user's data upon a CP violation and hope for the best. In a word processor, safety and security aren't the top priority.

There isn't one size that fits all applications, the engineer writing the program will have to decide. Therefore, having a class of errors that is not catchable at all would be a mistake.
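For the word processor case, that choice might look like this (a sketch only; both delegates are hypothetical):

// Because the Error is catchable, the engineer gets to choose the policy.
void runGuarded(void delegate() editorLoop,
                void delegate() saveRecoveryFile)
{
    try
        editorLoop();
    catch (Error)   // a self-detected fault, i.e. a CP violation
    {
        saveRecoveryFile();   // attempt to save the user's data
        // The engineer decides here: shut down and engage a backup, or,
        // for a word processor, hope for the best and carry on.
    }
}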


April 15, 2005
> What CP provides is another layer of security offering the capability of
> a program to self-detect a fault. The only reasonable thing it can do
> then is shut itself down and engage the backup.
>
> But if you're writing, say, a word processor, one might decide to
> attempt to save the user's data upon a CP violation and hope for the
> best. In a word processor, safety and security aren't the top priority.
>
> There isn't one size that fits all applications, the engineer writing
> the program will have to decide. Therefore, having a class of errors
> that is not catchable at all would be a mistake.

Do you mean Catchable, or Quenchable? They have quite different implications. AFAIK, only Sean has mentioned anything even uncatchable-like. What I've been proposing is that CP violations should be unquenchable. This in no way prevents the word processor doing its best to save your work.
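That is, something like the following (a sketch that approximates unquenchable semantics by convention; a truly unquenchable throwable would need language support):

// scope(failure) lets the program observe the Error on the way out (save
// work, log, and so on), but the Error itself still propagates: it is
// never quenched.
void editSession(void delegate() work, void delegate() saveWork)
{
    scope (failure) saveWork();  // best-effort save as the violation unwinds
    work();                      // a CP violation here still terminates
}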