May 30, 2012
On May 30, 2012, at 7:21 AM, deadalnix <deadalnix@gmail.com> wrote:

> On 30/05/2012 12:59, Jonathan M Davis wrote:
>>> And it's very valuable to log it properly.
>> 
>> Yes, which is why it's better to have an Error thrown rather than a halt instruction be executed. But that doesn't mean that any attempt at cleanup is any more valid.
>> 
> 
> Sorry, but that is bullshit. What can be the benefit of not trying to clean things up?
> 
> Do you really consider that corrupted files, clients waiting forever at the other end of a connection, or any similar outcome is a good thing? Because that is what you are advocating.
> 
> It may sound good on paper, but in real life, systems DO fail. It isn't a question of if, but of when and how often, and what to do about it.

I'd certainly at least want to be given the option of cleaning up when an Error is thrown.  If not, I have a feeling that in circumstances where I really wanted it I'd do something horrible to make sure it happened in some other way.
May 30, 2012
On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac@nospam.com> wrote:

> On 30/05/12 10:40, Jonathan M Davis wrote:
>> On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
>>> The fact that errors don't trigger scope statements and everything else is nonsensical.
>>
>> If an Error is truly unrecoverable (as they're generally supposed to be), then
>> what does it matter? Something fatal occurred in your program, so it
>> terminates. Because it's an Error, you can get a stack trace and report
>> something before the program actually terminates, but continuing execution
>> after an Error is considered to be a truly _bad_ idea, so in general, why does
>> it matter whether scope statements, finally blocks, or destructors get
>> executed? It's only in rarer cases, where you're trying to do something like
>> create a unit test framework on top of assert, that you would need to catch an
>> Error, and that's questionable enough as it is. In normal program execution,
>> an error is fatal, so cleanup is irrelevant and even potentially dangerous,
>> because your program is already in an invalid state.
>
> That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.

There's also no reason to assume that orderly cleanup *doesn't* cause any damage.  In fact, it's not reasonable to assume *anything*.

Which is the point.  If you want to recover from an error, you have to do it manually.  It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want).

But there is no reasonable *default* for handling an error that the runtime can assume.

I'd classify errors/exceptions into three categories:

1. corruption/segfault -- not recoverable under any reasonable circumstances.  Special cases exist (such as a custom paging mechanism).
2. program invariant errors (i.e. assert errors) --  Recovery is not defined by the runtime, so you must do it manually.  Any decision the runtime makes will be arbitrary, and could be wrong.
3. try/catch exceptions -- these are planned for and *expected* to occur because the program cannot control its environment.  e.g. EOF when none was expected.

The largest problem with the difference between 2 and 3 is that the decision of whether an exceptional case is categorized as 2 or 3 can be decoupled from the code that has to check for it.

For example:

double invert(double x)
{
   assertOrEnforce?(x != 0); // which should it be?
   return 1.0/x;
}

case 1:

void main()
{
    writeln(invert(0)); // clearly a program error
}

case 2:

int main(string[] args)
{
   writeln(invert(to!double(args[1]))); // clearly a catchable error
}

I don't know of a good way to solve that...

-Steve
May 30, 2012
On May 30, 2012, at 8:05 AM, "Steven Schveighoffer" <schveiguy@yahoo.com> wrote:

> On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac@nospam.com> wrote:
> 
>> On 30/05/12 10:40, Jonathan M Davis wrote:
>>> On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
>>>> The fact that errors don't trigger scope statements and everything else is nonsensical.
>>> 
>>> If an Error is truly unrecoverable (as they're generally supposed to be), then what does it matter? Something fatal occurred in your program, so it terminates. Because it's an Error, you can get a stack trace and report something before the program actually terminates, but continuing execution after an Error is considered to be a truly _bad_ idea, so in general, why does it matter whether scope statements, finally blocks, or destructors get executed? It's only in rarer cases, where you're trying to do something like create a unit test framework on top of assert, that you would need to catch an Error, and that's questionable enough as it is. In normal program execution, an error is fatal, so cleanup is irrelevant and even potentially dangerous, because your program is already in an invalid state.
>> 
>> That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.
> 
> There's also no reason to assume that orderly cleanup *doesn't* cause any damage.  In fact, it's not reasonable to assume *anything*.
> 
> Which is the point.  If you want to recover from an error, you have to do it manually.  It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want).
> 
> But there is no reasonable *default* for handling an error that the runtime can assume.
> 
> I'd classify errors/exceptions into three categories:
> 
> 1. corruption/segfault -- not recoverable under any reasonable circumstances.  Special cases exist (such as a custom paging mechanism).
> 2. program invariant errors (i.e. assert errors) --  Recovery is not defined by the runtime, so you must do it manually.  Any decision the runtime makes will be arbitrary, and could be wrong.
> 3. try/catch exceptions -- these are planned for and *expected* to occur because the program cannot control its environment.  e.g. EOF when none was expected.
> 
> The largest problem with the difference between 2 and 3 is that the decision of whether an exceptional case is categorized as 2 or 3 can be decoupled from the code that has to check for it.
> 
> For example:
> 
> double invert(double x)
> {
>   assertOrEnforce?(x != 0); // which should it be?
>   return 1.0/x;
> }
> 
> case 1:
> 
> void main()
> {
>    writeln(invert(0)); // clearly a program error
> }
> 
> case 2:
> 
> int main(string[] args)
> {
>   writeln(invert(to!double(args[1]))); // clearly a catchable error
> }
> 
> I don't know of a good way to solve that...

Sounds like a good argument for the assert handler in core.runtime.
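
Something along these lines, say (a rough sketch; if I remember right, the hook is setAssertHandler in core.exception these days, and the handler body here is just an illustration):

import core.exception : setAssertHandler;
import core.stdc.stdio : printf;
import core.stdc.stdlib : abort;

void myAssertHandler(string file, size_t line, string msg) nothrow
{
    // one central policy: report the failure, then terminate without unwinding
    printf("assert failed: %.*s(%llu): %.*s\n",
           cast(int) file.length, file.ptr,
           cast(ulong) line,
           cast(int) msg.length, msg.ptr);
    abort();
}

void main()
{
    setAssertHandler(&myAssertHandler);
    assert(false, "demo"); // now routed through myAssertHandler
}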
May 30, 2012
On 30/05/12 12:59, Jonathan M Davis wrote:
> On Wednesday, May 30, 2012 11:32:00 Don Clugston wrote:
>> On 30/05/12 10:40, Jonathan M Davis wrote:
>>> On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
>>>> The fact that errors don't trigger scope statements and everything else is nonsensical.
>>>
>>> If an Error is truly unrecoverable (as they're generally supposed to be),
>>> then what does it matter? Something fatal occurred in your program, so it
>>> terminates. Because it's an Error, you can get a stack trace and report
>>> something before the program actually terminates, but continuing
>>> execution after an Error is considered to be a truly _bad_ idea, so in
>>> general, why does it matter whether scope statements, finally blocks, or
>>> destructors get executed? It's only in rarer cases, where you're trying to do
>>> something like create a unit test framework on top of assert, that you
>>> would need to catch an Error, and that's questionable enough as it is. In
>>> normal program execution, an error is fatal, so cleanup is irrelevant and
>>> even potentially dangerous, because your program is already in an invalid
>>> state.
>>
>> That's true for things like segfaults, but in the case of an
>> AssertError, there's no reason to believe that cleanup would cause any
>> damage.
>> In fact, generally, the point of an AssertError is to prevent the
>> program from entering an invalid state.
>
> An assertion failure really isn't all that different from a segfault. By
> definition, if an assertion fails, the program is in an invalid state, because the
> whole point of the assertion is to guarantee something about the program's
> state.

There's a big difference. A segfault is a machine error. The integrity of the machine model has been violated, and the machine is in an out-of-control state. In particular, the stack may be corrupted, so stack unwinding may not be successful.

But, in an assert error, the machine is completely intact; the error is at a higher level, which does not interfere with stack unwinding.

Damage is possible only if you've written your destructors/finally code extremely poorly. Note that, unlike C++, it's OK to throw a new Error or Exception from inside a destructor.
But with (say) a stack overflow, you don't necessarily know what code is being executed. It could do anything.
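
To illustrate the destructor point above (a trivial sketch):

struct Resource
{
    ~this()
    {
        // legal in D, unlike C++: report a failed release by throwing
        throw new Exception("failed to release resource");
    }
}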


> Now, if a segfault occurs (particularly if it's caused by something
> other than a null pointer), the program is likely to be in a _worse_ state,
> but it's in an invalid state in either case. In neither case does it make any
> sense to try and recover, and in both cases, there's a definite risk in
> executing any further code - including cleanup code.

> Yes, the segfault is
> probably worse but not necessarily all that much worse. A logic error can be
> just as insidious to the state of a program as memory corruption, depending on
> what it is.

I'm surprised by your response; I didn't think this was controversial.
We could just as easily have said assert() throws an AssertException.
(Or have two kinds of assert, one which is an Error and the other merely an Exception).
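
Something like this, say (a hypothetical sketch; assertEx and AssertException are invented names, not existing druntime types):

class AssertException : Exception
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
    }
}

// the "merely an Exception" flavor of assert
void assertEx(bool condition, lazy string msg = "assertion failed",
              string file = __FILE__, size_t line = __LINE__)
{
    if (!condition)
        throw new AssertException(msg, file, line);
}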

May 30, 2012
Steven Schveighoffer wrote:
> On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac@nospam.com> wrote:
> 
> >On 30/05/12 10:40, Jonathan M Davis wrote:
> >>On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
> >>>The fact that errors don't trigger scope statements and everything else is nonsensical.
> >>
> >>If an Error is truly unrecoverable (as they're generally
> >>supposed to be), then
> >>what does it matter? Something fatal occurred in your program, so it
> >>terminates. Because it's an Error, you can get a stack trace and report
> >>something before the program actually terminates, but continuing
> >>execution
> >>after an Error is considered to be a truly _bad_ idea, so in
> >>general, why does
> >>it matter whether scope statements, finally blocks, or destructors get
> >>executed? It's only rarer cases where you're trying to do something like
> >>create a unit test framework on top of assert that you would
> >>need to catch an
> >>Error, and that's questionable enough as it is. In normal
> >>program execution,
> >>an error is fatal, so cleanup is irrelevant and even potentially
> >>dangerous,
> >>because your program is already in an invalid state.
> >
> >That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.
> 
> There's also no reason to assume that orderly cleanup *doesn't* cause any damage.  In fact, it's not reasonable to assume *anything*.
> 
> Which is the point.  If you want to recover from an error, you have to do it manually.  It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want).
> 
> But there is no reasonable *default* for handling an error that the runtime can assume.
> 
> I'd classify errors/exceptions into three categories:
> 
> 1. corruption/segfault -- not recoverable under any reasonable
> circumstances.  Special cases exist (such as a custom paging
> mechanism).
> 2. program invariant errors (i.e. assert errors) --  Recovery is not
> defined by the runtime, so you must do it manually.  Any decision
> the runtime makes will be arbitrary, and could be wrong.
> 3. try/catch exceptions -- these are planned for and *expected* to
> occur because the program cannot control its environment.  e.g. EOF
> when none was expected.
> 
> The largest problem with the difference between 2 and 3 is that the decision of whether an exceptional case is categorized as 2 or 3 can be decoupled from the code that has to check for it.
> 
> For example:
> 
> double invert(double x)
> {
>    assertOrEnforce?(x != 0); // which should it be?
>    return 1.0/x;
> }

It's a logic error. Thus,

double invert(double x)
in { assert(x != 0); }
body
{
   return 1.0/x;
}

> case 1:
> 
> void main()
> {
>     writeln(invert(0)); // clearly a program error
> }

Obviously a logic error.

> case 2:
> 
> int main(string[] args)
> {
>    writeln(invert(to!double(args[1]))); // clearly a catchable error
> }

This should be
int main(string[] args)
{
   auto arg = to!double(args[1]);
   enforce(arg != 0);
   writeln(invert(arg));
}

The enforce is needed because args[1] is user input. If the programmer controlled the value of arg and believes arg != 0 always holds, then no enforce would be needed.

Doesn't this make sense?

Jens

PS
For the record, I think (like most) that Errors should, like Exceptions,
work with scope, etc. The only argument against is the theoretical
possibility of causing more damage while cleaning up. I say theoretical
because no practical example was given. It seems that it may cause
more damage, but it does not need to. Of course, if damage happens it's
the programmer's fault, but it's also the programmer's fault if he does
not try to do a graceful shutdown, i.e. closing sockets, sending a crash
report, or similar.
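
Something like this top-level handler is what I have in mind (a rough sketch; run, reportCrash, and closeSocketsBestEffort are invented names):

int main(string[] args)
{
    try
    {
        return run(args); // the actual application
    }
    catch (Error e)
    {
        reportCrash(e);           // e.g. send message + stack trace somewhere
        closeSocketsBestEffort(); // release external resources as best we can
        throw e;                  // still terminate abnormally
    }
}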
May 30, 2012
On 30.05.2012 19:05, Steven Schveighoffer wrote:
> On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac@nospam.com> wrote:
>
>> On 30/05/12 10:40, Jonathan M Davis wrote:
>>> On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
>>>> The fact that errors don't trigger scope statements and everything else is nonsensical.
>>>
>>> If an Error is truly unrecoverable (as they're generally supposed to
>>> be), then
>>> what does it matter? Something fatal occurred in your program, so it
>>> terminates. Because it's an Error, you can get a stack trace and report
>>> something before the program actually terminates, but continuing
>>> execution
>>> after an Error is considered to be a truly _bad_ idea, so in general,
>>> why does
>>> it matter whether scope statements, finally blocks, or destructors get
>>> executed? It's only in rarer cases, where you're trying to do something like
>>> create a unit test framework on top of assert, that you would need to
>>> catch an
>>> Error, and that's questionable enough as it is. In normal program
>>> execution,
>>> an error is fatal, so cleanup is irrelevant and even potentially
>>> dangerous,
>>> because your program is already in an invalid state.
>>
>> That's true for things like segfaults, but in the case of an
>> AssertError, there's no reason to believe that cleanup would cause any
>> damage.
>
> There's also no reason to assume that orderly cleanup *doesn't* cause
> any damage. In fact, it's not reasonable to assume *anything*.
>
> Which is the point. If you want to recover from an error, you have to do
> it manually. It should be doable, but the default handling should not
> need to be defined (i.e. implementations should be free to do whatever
> they want).
>
> But there is no reasonable *default* for handling an error that the
> runtime can assume.
>

I'd say that running scope statements, destructors, etc. when an Error is thrown is the most _useful_ thing in all cases. If you're really, really afraid of memory corruption killing sensitive data, taking control of the OS, and so on - you just catch Errors early on inside such sensitive functions. And call C's abort(). And that's it.
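
For example (a minimal sketch; updateCriticalStore is an invented name):

import core.stdc.stdlib : abort;

void updateCriticalStore()
{
    try
    {
        // ... mutate sensitive on-disk state ...
    }
    catch (Error)
    {
        abort(); // no unwinding, no cleanup code over possibly corrupt state
    }
}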

Let's make the common and hard case the default and automatic, please.

-- 
Dmitry Olshansky
May 30, 2012
On Wed, 30 May 2012 11:47:34 -0400, Jens Mueller <jens.k.mueller@gmx.de> wrote:

> Steven Schveighoffer wrote:

>> case 2:
>>
>> int main(string[] args)
>> {
>>    writeln(invert(to!double(args[1]))); // clearly a catchable error
>> }
>
> This should be
> int main(string[] args)
> {
>    auto arg = to!double(args[1]);
>    enforce(arg != 0);
>    writeln(invert(arg));
> }
>
> The enforce is needed because args[1] is user input. If the programmer
> controlled the value of arg and believes arg != 0 always holds, then no
> enforce would be needed.
>
> Doesn't this make sense?

Yes and no.  Yes, the ultimate result of what you wrote is the desired functionality.  But no, I don't think you have properly solved the problem.

Consider that user data, or environment data, can come from anywhere, and at any time.  Consider also that you have decoupled the function parameter validation from the function itself!  Ideally, invert should be the one deciding whether the original data is valid or not.  In order to write correct code, I, as the writer of main, must "know" what the contents of invert are.  I'd rather do something like:

int main(string[] args)
{
   auto argToInvert = to!double(args[1]);
   validateInvertArgs(argToInvert); // uses enforce
   invert(argToInvert);
}

Note that even *this* isn't ideal, because now the author of invert has to write and maintain a separate function for validating its arguments, even though invert is *already* validating its arguments.

It's almost as if I want to re-use the same code inside invert that validates its arguments, but use a different mechanism to flag an error, depending on the source of the arguments.

It can get even more tricky, if say a function has two parameters, and one is hard-coded and the other comes from user input.
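
Something in that direction, perhaps (a rough sketch; checkNonZero and invertUserInput are invented names):

import std.conv : to;
import std.exception : enforce;

// one copy of the validation logic; 'fail' picks the failure policy
void checkNonZero(alias fail)(double x)
{
    fail(x != 0, "argument must be non-zero");
}

double invert(double x)
{
    // internal/trusted input: a violation is a program bug
    checkNonZero!((cond, msg) => assert(cond, msg))(x);
    return 1.0 / x;
}

double invertUserInput(string s)
{
    auto x = to!double(s);
    // external/untrusted input: a violation is an expected, catchable error
    checkNonZero!((cond, msg) => enforce(cond, msg))(x);
    return invert(x);
}

The check itself lives in one place; only the consequence of failing it changes with the call site.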

> PS
> For the record, I think (like most) that Errors should, like Exceptions,
> work with scope, etc. The only argument against is the theoretical
> possibility of causing more damage while cleaning up. I say theoretical
> because no practical example was given. It seems that it may cause
> more damage, but it does not need to. Of course, if damage happens it's
> the programmer's fault, but it's also the programmer's fault if he does
> not try to do a graceful shutdown, i.e. closing sockets, sending a crash
> report, or similar.

Indeed, it's all up to the programmer to handle the situation properly.  If an assert occurs, the program may already be in an invalid state, and *trying to save* files or close/flush databases may corrupt the data.

My point is, it's impossible for the runtime to know whether your code is properly handling the error, or that running all the finally/scope blocks will not be worse than not running them.

-Steve
May 30, 2012
Steven Schveighoffer wrote:
> On Wed, 30 May 2012 11:47:34 -0400, Jens Mueller <jens.k.mueller@gmx.de> wrote:
> 
> >Steven Schveighoffer wrote:
> 
> >>case 2:
> >>
> >>int main(string[] args)
> >>{
> >>   writeln(invert(to!double(args[1]))); // clearly a catchable error
> >>}
> >
> >This should be
> >int main(string[] args)
> >{
> >   auto arg = to!double(args[1]);
> >   enforce(arg != 0);
> >   writeln(invert(arg));
> >}
> >
> >The enforce is needed because args[1] is user input. If the programmer controlled the value of arg and believes arg != 0 always holds, then no enforce would be needed.
> >
> >Doesn't this make sense?
> 
> Yes and no.  Yes, the ultimate result of what you wrote is the desired functionality.  But no, I don't think you have properly solved the problem.
> 
> Consider that user data, or environment data, can come from anywhere, and at any time.  Consider also that you have decoupled the function parameter validation from the function itself! Ideally, invert should be the one deciding whether the original data is valid or not.  In order to write correct code, I, as the writer of main, must "know" what the contents of invert are.  I'd rather do something like:
> 
> int main(string[] args)
> {
>    auto argToInvert = to!double(args[1]);
>    validateInvertArgs(argToInvert); // uses enforce
>    invert(argToInvert);
> }
> 
> Note that even *this* isn't ideal, because now the author of invert has to write and maintain a separate function for validating its arguments, even though invert is *already* validating its arguments.
> 
> It's almost as if I want to re-use the same code inside invert that validates its arguments, but use a different mechanism to flag an error, depending on the source of the arguments.
> 
> It can get even more tricky, if say a function has two parameters, and one is hard-coded and the other comes from user input.

Why should invert validate its arguments? invert just states: if the
input has this and that property, then I will return the inverse of the
argument. And it makes sure that its assumptions actually hold. These
assumptions are so fundamental that failing to verify them is an
error. Why should it do more than that? Actually, it can't do more than
that, because it does not know what to do. Assuming the user passed 0,
different recovery approaches are possible.

> >PS
> >For the record, I think (like most) that Errors should, like Exceptions,
> >work with scope, etc. The only argument against is the theoretical
> >possibility of causing more damage while cleaning up. I say theoretical
> >because no practical example was given. It seems that it may cause
> >more damage, but it does not need to. Of course, if damage happens it's
> >the programmer's fault, but it's also the programmer's fault if he does
> >not try to do a graceful shutdown, i.e. closing sockets, sending a crash
> >report, or similar.
> 
> Indeed, it's all up to the programmer to handle the situation properly.  If an assert occurs, the program may already be in an invalid state, and *trying to save* files or close/flush databases may corrupt the data.
> 
> My point is, it's impossible for the runtime to know whether your code is properly handling the error, or that running all the finally/scope blocks will not be worse than not running them.

I thought this was the only argument for not executing finally/scope blocks: running these in case of an Error may actually be worse than not running them. We have no example of such code, but that's the theoretical issue brought up against executing finally/scope in case of an Error.

Jens
May 30, 2012
On Wednesday, May 30, 2012 15:28:22 Jacob Carlborg wrote:
> On 2012-05-30 12:59, Jonathan M Davis wrote:
> > Yes, which is why it's better to have an Error thrown rather than a halt instruction be executed. But that doesn't mean that any attempt at cleanup is any more valid.
> 
> If you're not supposed to be able to catch Errors then what's the difference?

You can catch them to print out additional information or whatever is useful to generate more information about the Error. In fact, just what the Error gives you is already more useful: message, file, line number, stack trace, etc. That alone makes an Error more useful than a halt instruction.

You can catch them to attempt explicit cleanup that absolutely must be done for whatever reason (with the knowledge that it's potentially dangerous to do that cleanup due to the Error).

You can catch them in very controlled circumstances where you know that continuing is safe (obviously this isn't the sort of thing that you do in @safe code). For instance, in some restricted cases, that could be done with an OutOfMemoryError. But when you do that sort of thing you have to catch the Error _very_ close to the throw point and be sure that there's no cleanup code in between. It only works when you can guarantee yourself that the program state is not being compromised by the Error, and you're able to guarantee that continuing from the catch point is safe. That works in some cases with AssertError in unit test code but becomes problematic as such code becomes more complex.
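
For instance, something like this (a rough sketch; tryHugeBuffer is an invented name):

import core.exception : OutOfMemoryError;

ubyte[] tryHugeBuffer(size_t n)
{
    try
    {
        return new ubyte[n]; // may throw OutOfMemoryError
    }
    catch (OutOfMemoryError)
    {
        // the catch sits right at the throw point and no cleanup code
        // runs in between, so continuing here is safe
        return null; // caller falls back to a smaller allocation
    }
}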

- Jonathan M Davis
May 30, 2012
On Wednesday, May 30, 2012 17:29:30 Don Clugston wrote:
> On 30/05/12 12:59, Jonathan M Davis wrote:
> > On Wednesday, May 30, 2012 11:32:00 Don Clugston wrote:
> >> On 30/05/12 10:40, Jonathan M Davis wrote:
> >>> On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
> >>>> The fact that errors don't trigger scope statements and everything else is nonsensical.
> >>> 
> >>> If an Error is truly unrecoverable (as they're generally supposed to
> >>> be),
> >>> then what does it matter? Something fatal occurred in your program, so it
> >>> terminates. Because it's an Error, you can get a stack trace and report
> >>> something before the program actually terminates, but continuing
> >>> execution after an Error is considered to be a truly _bad_ idea, so in
> >>> general, why does it matter whether scope statements, finally blocks, or
> >>> destructors get executed? It's only in rarer cases, where you're trying to
> >>> do
> >>> something like create a unit test framework on top of assert, that you
> >>> would need to catch an Error, and that's questionable enough as it is.
> >>> In
> >>> normal program execution, an error is fatal, so cleanup is irrelevant
> >>> and
> >>> even potentially dangerous, because your program is already in an
> >>> invalid
> >>> state.
> >> 
> >> That's true for things like segfaults, but in the case of an
> >> AssertError, there's no reason to believe that cleanup would cause any
> >> damage.
> >> In fact, generally, the point of an AssertError is to prevent the
> >> program from entering an invalid state.
> > 
> > An assertion failure really isn't all that different from a segfault. By definition, if an assertion fails, the program is in an invalid state, because the whole point of the assertion is to guarantee something about the program's state.
> 
> There's a big difference. A segfault is a machine error. The integrity of the machine model has been violated, and the machine is in an out-of-control state. In particular, the stack may be corrupted, so stack unwinding may not be successful.
> 
> But, in an assert error, the machine is completely intact; the error is at a higher level, which does not interfere with stack unwinding.
> 
> Damage is possible only if you've written your destructors/finally code
> extremely poorly. Note that, unlike C++, it's OK to throw a new Error or
> Exception from inside a destructor.
> But with (say) a stack overflow, you don't necessarily know what code is
> being executed. It could do anything.

There is definitely a difference in severity. Clearly memory corruption is more severe than a logic error in your code. However, in the general case, if you have a logic error in your code which is caught by an assertion, there's no way to know without actually examining the code how valid the state of the program is at that point. It's in an invalid state _by definition_, because the assertion was testing the validity of the state of the program, and it failed. So, at that point, it's only a question of degree. _How_ invalid is the state? Since there's no way for the program to know how severe the logic error was, it has no way of knowing whether it's safe to run any cleanup code (the same as the program has no way of knowing whether a segfault is relatively minor - e.g. a null pointer - or absolutely catastrophic - e.g. memory is horribly corrupted).

If you got an OutOfMemoryError rather than one specifically indicating a logic error (as with Errors such as AssertError or RangeError), then that's specifically telling you that your program has run out of a particular resource (i.e. memory), which means that any code which assumes that that resource is available (which in the case of memory is pretty much all code) will fail. Running cleanup code could be very precarious at that point if it allocates any memory (which a lot of cleanup code wouldn't, but I'm sure that it would be very easy to find cleanup code which did). Any further attempts at allocation would result in more OutOfMemoryErrors and leave the cleanup code only partially run, thereby possibly making things even worse, depending on what the cleanup code does.

Running cleanup code is _not_ safe when an Error is thrown, because the program is definitely in an invalid state at that point, even if it's not as bad as a segfault can be.

Now, it may be that that risk is worth it, especially since a lot of the time, cleanup code won't be invalidated in the least by whatever caused Errors elsewhere in the program, and there are definitely plenty of cases where at least attempting to cleanup everything is better than skipping it all due of an Error somewhere else in the program. But it's still not safe. It's a question of whether we think that the risks posed by trying to run cleanup code after the program is in an invalid enough state that an Error was thrown are too great to attempt cleanup or whether we think that the problems caused by skipping that cleanup are greater.

> > Now, if a segfault occurs (particularly if it's caused by something
> > other than a null pointer), the program is likely to be in a _worse_
> > state,
> > but it's in an invalid state in either case. In neither case does it make
> > any sense to try and recover, and in both cases, there's a definite risk
> > in executing any further code - including cleanup code.
> > 
> > Yes, the segfault is
> > probably worse but not necessarily all that much worse. A logic error can
> > be just as insidious to the state of a program as memory corruption,
> > depending on what it is.
> 
> I'm surprised by your response; I didn't think this was controversial. We could just as easily have said assert() throws an AssertException. (Or have two kinds of assert, one which is an Error and the other merely an Exception).

In general, a segfault is definitely worse, but logic errors _can_ be just as bad in terms of the damage that they can do (especially in comparison with segfaults caused by null pointers as opposed to those caused by memory corruption). It all depends on what the logic error is and what happens if you try and continue with the program in such a state.

- Jonathan M Davis