June 05, 2012
On 05.06.2012 15:57, Don Clugston wrote:
> On 05/06/12 09:07, Jonathan M Davis wrote:
>> On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
>>> On 04/06/12 21:29, Steven Schveighoffer wrote:
>>>> On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac@nospam.com> wrote:
>>>>> 1. There exist cases where you cannot know why the assert failed.
>>>>> 2. Therefore you never know why an assert failed.
>>>>> 3. Therefore it is not safe to unwind the stack from a nothrow
>>>>> function.
>>>>>
>>>>> Spot the fallacies.
>>>>>
>>>>> The fallacy in moving from 2 to 3 is more serious than the one from 1
>>>>> to 2: this argument is not in any way dependent on the assert occurring
>>>>> in a nothrow function. Rather, it's an argument for not having
>>>>> AssertError at all.
>>>>
>>>> I'm not sure that is the issue here at all. What I see is that the
>>>> unwinding of the stack is optional, based on the assumption that
>>>> there's
>>>> no "right" answer.
>>>>
>>>> However, there is an underlying driver for not unwinding the stack --
>>>> nothrow. If nothrow results in the compiler optimizing out whatever
>>>> hooks a function needs to properly unwind itself (my limited
>>>> understanding is that this helps performance), then there *is no
>>>> choice*, you can't properly unwind the stack.
>>>>
>>>> -Steve
>>>
>>> No, this whole issue started because the compiler currently does do
>>> unwinding whenever it can. And Walter claimed that's a bug, and it
>>> should be explicitly disabled.
>>>
>>> It is, in my view, an absurd position. AFAIK not a single argument has
>>> been presented in favour of it. All arguments have been about "you
>>> should never unwind Errors".
>>
>> It's quite clear that we cannot completely, correctly unwind the stack
>> in the
>> face of Errors.
>
> Well that's a motherhood statement. Obviously in the face of extreme
> memory corruption you can't guarantee *any* code is valid.
> The *main* reason why stack unwinding would not be possible is if
> nothrow intentionally omits stack unwinding code.
>
>> As such, no one should be relying on stack unwinding when an
>> Error is thrown.
>
> This conclusion DOES NOT FOLLOW. And I am getting so sick of the number
> of times this fallacy has been repeated in this thread.

Finally, a voice of reason. My prayers must have touched somebody up above...

>
> These kinds of generalizations are completely invalid in a systems
> programming language.
>
>> Regardless, I think that there are a number of people in this thread
>> who are
>> mistaken in how recoverable they think Errors and/or segfaults are,
>> and they
>> seem to be the ones pushing the hardest for full stack unwinding on
>> the theory
>> that they could somehow ensure safe recovery and a clean shutdown when an
>> Error occurs, which is almost never possible, and certainly isn't
>> possible in
>> the general case.
>>
>> - Jonathan M Davis
>
> Well I'm pushing it because I implemented it (on Windows).
>
> I'm less knowledgeable about what happens on other systems, but know
> that on Windows, the whole system is far, far more robust than most
> people on this thread seem to think.
>

Exactly, hence the whole idea about SEH in the OS.

> I can't see *any* problem with executing catch(Error) clauses. I cannot
> envisage a situation where that can cause a problem. I really cannot.
>
> And catch(Exception) clauses won't be run, because of the exception
> chaining scheme we have implemented.
>
> The only difficult case is 'finally' clauses, which may be expecting an
> Exception.


-- 
Dmitry Olshansky
June 05, 2012
On Tuesday, June 05, 2012 13:57:14 Don Clugston wrote:
> On 05/06/12 09:07, Jonathan M Davis wrote:
> > On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
> >> On 04/06/12 21:29, Steven Schveighoffer wrote:
> >>> On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac@nospam.com>  wrote:
> >>>> 1. There exist cases where you cannot know why the assert failed.
> >>>> 2. Therefore you never know why an assert failed.
> >>>> 3. Therefore it is not safe to unwind the stack from a nothrow
> >>>> function.
> >>>> 
> >>>> Spot the fallacies.
> >>>> 
> >>>> The fallacy in moving from 2 to 3 is more serious than the one from 1 to 2: this argument is not in any way dependent on the assert occurring in a nothrow function. Rather, it's an argument for not having AssertError at all.
> >>> 
> >>> I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer.
> >>> 
> >>> However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack.
> >>> 
> >>> -Steve
> >> 
> >> No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled.
> >> 
> >> It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
> > 
> > It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors.
> 
> Well that's a motherhood statement. Obviously in the face of extreme
> memory corruption you can't guarantee *any* code is valid.
> The *main* reason why stack unwinding would not be possible is if
> nothrow intentionally omits stack unwinding code.

It's not possible precisely because of nothrow.

> > As such, no one should be relying on stack unwinding when an Error is thrown.
> 
> This conclusion DOES NOT FOLLOW. And I am getting so sick of the number of times this fallacy has been repeated in this thread.
> 
> These kinds of generalizations are completely invalid in a systems programming language.

If nothrow prevents the stack from being correctly unwound, then no, you shouldn't be relying on stack unwinding when an Error is thrown, because it's _not_ going to work properly.

> > Regardless, I think that there are a number of people in this thread who are mistaken in how recoverable they think Errors and/or segfaults are, and they seem to be the ones pushing the hardest for full stack unwinding on the theory that they could somehow ensure safe recovery and a clean shutdown when an Error occurs, which is almost never possible, and certainly isn't possible in the general case.
> > 
> > - Jonathan M Davis
> 
> Well I'm pushing it because I implemented it (on Windows).
> 
> I'm less knowledgeable about what happens on other systems, but know that on Windows, the whole system is far, far more robust than most people on this thread seem to think.
> 
> I can't see *any* problem with executing catch(Error) clauses. I cannot envisage a situation where that can cause a problem. I really cannot.

In many cases, it's probably fine, but if the program is in a bad enough state that an Error is thrown, then you can't know for sure that any particular such block will execute properly (memory corruption being the extreme case), and if it doesn't run correctly, then it could make things worse (e.g. writing invalid data to a file, corrupting that file). Also, if the stack is not unwound perfectly (as nothrow prevents), then the program's state will become increasingly invalid the farther that the program gets from the throw point, which will increase the chances of cleanup code functioning incorrectly, as any assumptions that it has made about the program state are increasingly likely to be wrong (and it is increasingly likely that the variables it operates on are no longer valid).

A lot of it comes down to worst case vs typical case. In the typical case, the code causing the Error is isolated enough and the code doing the cleanup is self-contained enough that trying to unwind the stack as much as possible will result in more correct behavior than skipping it all. But in the worst case, you can't rely on running any code being safe, because the state of the program is very much invalid, in which case, it's better to kill the program ASAP. Walter seems to subscribe to the approach that it's best to assume the worst case (e.g. that an assertion failure indicates horrible memory corruption), and always have Errors function that way, whereas others subscribe to the approach that things are almost never that bad, so we should just assume that they aren't, since skipping all of that cleanup causes other problems.

And it's not that the error-handling system isn't robust, it's that if the program state is invalid, then you can't actually assume that _any_ of it is valid, no matter how well it's written, in which case, you _cannot_ know whether running the cleanup code is better or worse than skipping it. Odds are that it's just fine, but you have no such guarantee, because there's no way for the program to know how severe or isolated an Error is when it occurs. It just knows that something went horribly wrong.

- Jonathan M Davis
June 05, 2012
On Jun 5, 2012, at 8:44 AM, Jonathan M Davis <jmdavisProg@gmx.com> wrote:
> 
> In many cases, it's probably fine, but if the program is in a bad enough state that an Error is thrown, then you can't know for sure that any particular such block will execute properly (memory corruption being the extreme case), and if it doesn't run correctly, then it could make things worse (e.g. writing invalid data to a file, corrupting that file). Also, if the stack is not unwound perfectly (as nothrow prevents), then the program's state will become increasingly invalid the farther that the program gets from the throw point, which will increase the chances of cleanup code functioning incorrectly, as any assumptions that they've made about the program state are increasingly likely to be wrong (as well as it being increasingly likely that the variables that they operate on no longer being valid).

Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?

I think an argument could be made that the current behavior of stack unwinding should continue and a hook should be added to let the user call abort or whatever instead. But we couldn't make abort the default and let the user disable that.
June 05, 2012
On 04/06/2012 21:29, Steven Schveighoffer wrote:
> On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston <dac@nospam.com> wrote:
>
>> 1. There exist cases where you cannot know why the assert failed.
>> 2. Therefore you never know why an assert failed.
>> 3. Therefore it is not safe to unwind the stack from a nothrow function.
>>
>> Spot the fallacies.
>>
>> The fallacy in moving from 2 to 3 is more serious than the one from 1
>> to 2: this argument is not in any way dependent on the assert occurring
>> in a nothrow function. Rather, it's an argument for not having
>> AssertError at all.
>
> I'm not sure that is the issue here at all. What I see is that the
> unwinding of the stack is optional, based on the assumption that there's
> no "right" answer.
>
> However, there is an underlying driver for not unwinding the stack --
> nothrow. If nothrow results in the compiler optimizing out whatever
> hooks a function needs to properly unwind itself (my limited
> understanding is that this helps performance), then there *is no
> choice*, you can't properly unwind the stack.
>
> -Steve

It changes nothing in terms of performance as long as you don't throw. And when you do throw, performance is not your main problem.
June 06, 2012
On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
> On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:
>
>> I don't agree that OutOfMemory is critical:
>> 	--> make it an exception ?
>
> No.  What we need is a non-throwing version of malloc that returns NULL.  (throwing version can wrap this).  If you want to throw an exception, then throw it there (or use enforce).

With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.

IMO, failing assertions and out-of-bounds errors should just abort(), or, as Sean suggests, call a special handler.

-Lars
June 06, 2012
On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
> On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
> 
> wrote:
> > On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
> > 
> > <dmitry.olsh@gmail.com> wrote:
> >> I don't agree that OutOfMemory is critical:
> >> 	--> make it an exception ?
> > 
> > No.  What we need is a non-throwing version of malloc that
> > returns NULL.  (throwing version can wrap this).  If you want
> > to throw an exception, then throw it there (or use enforce).
> 
> With some sugar:
> 
>      auto a = nothrow new Foo; // Returns null on OOM
> 
> Then, ordinary new can be disallowed in nothrow code.

But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated. I'd hate to see regular new not be allowed in nothrow functions. Having a way to allocate and return null on failure would definitely be a good feature for those trying to handle running out of memory, but for 99.9999999% of programs, it's just better to throw the Error thereby killing the program and making it clear what happened.

- Jonathan M Davis
June 06, 2012
On Wed, 06 Jun 2012 05:13:39 -0400, Lars T. Kyllingstad <public@kyllingen.net> wrote:

> On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
>> On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky <dmitry.olsh@gmail.com> wrote:
>>
>>> I don't agree that OutOfMemory is critical:
>>> 	--> make it an exception ?
>>
>> No.  What we need is a non-throwing version of malloc that returns NULL.  (throwing version can wrap this).  If you want to throw an exception, then throw it there (or use enforce).
>
> With some sugar:
>
>      auto a = nothrow new Foo; // Returns null on OOM
>
> Then, ordinary new can be disallowed in nothrow code.

That doesn't work: new conflates memory allocation with construction.  What if the constructor throws?

-Steve
June 06, 2012
On 05/06/12 17:44, Jonathan M Davis wrote:
> On Tuesday, June 05, 2012 13:57:14 Don Clugston wrote:
>> On 05/06/12 09:07, Jonathan M Davis wrote:
>>> On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
>>>> On 04/06/12 21:29, Steven Schveighoffer wrote:
>>>>> On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac@nospam.com>   wrote:
>>>>>> 1. There exist cases where you cannot know why the assert failed.
>>>>>> 2. Therefore you never know why an assert failed.
>>>>>> 3. Therefore it is not safe to unwind the stack from a nothrow
>>>>>> function.
>>>>>>
>>>>>> Spot the fallacies.
>>>>>>
>>>>>> The fallacy in moving from 2 to 3 is more serious than the one from 1
> >>>>>> to 2: this argument is not in any way dependent on the assert occurring
>>>>>> in a nothrow function. Rather, it's an argument for not having
>>>>>> AssertError at all.
>>>>>
>>>>> I'm not sure that is the issue here at all. What I see is that the
>>>>> unwinding of the stack is optional, based on the assumption that there's
>>>>> no "right" answer.
>>>>>
>>>>> However, there is an underlying driver for not unwinding the stack --
>>>>> nothrow. If nothrow results in the compiler optimizing out whatever
>>>>> hooks a function needs to properly unwind itself (my limited
>>>>> understanding is that this helps performance), then there *is no
>>>>> choice*, you can't properly unwind the stack.
>>>>>
>>>>> -Steve
>>>>
>>>> No, this whole issue started because the compiler currently does do
>>>> unwinding whenever it can. And Walter claimed that's a bug, and it
>>>> should be explicitly disabled.
>>>>
>>>> It is, in my view, an absurd position. AFAIK not a single argument has
>>>> been presented in favour of it. All arguments have been about "you
>>>> should never unwind Errors".
>>>
>>> It's quite clear that we cannot completely, correctly unwind the stack in
>>> the face of Errors.
>>
>> Well that's a motherhood statement. Obviously in the face of extreme
>> memory corruption you can't guarantee *any* code is valid.
>> The *main* reason why stack unwinding would not be possible is if
>> nothrow intentionally omits stack unwinding code.
>
> It's not possible precisely because of nothrow.


nothrow only means 'does not throw Exceptions'. It doesn't mean 'does not throw Errors'.
Therefore, given:

int foo() nothrow { ... }

try
{
    foo();
}
catch (Error e)
{
    ...
}

even though there are no throw statements inside foo(), the compiler is NOT permitted to remove the catch(Error), whereas it could remove catch(Exception).

The problem is 'finally' clauses. Are they called only on Exception, or on Exception and Error?

>>> Regardless, I think that there are a number of people in this thread who
>>> are mistaken in how recoverable they think Errors and/or segfaults are,
>>> and they seem to be the ones pushing the hardest for full stack unwinding
>>> on the theory that they could somehow ensure safe recovery and a clean
>>> shutdown when an Error occurs, which is almost never possible, and
>>> certainly isn't possible in the general case.
>>>
>>> - Jonathan M Davis
>>
>> Well I'm pushing it because I implemented it (on Windows).
>>
>> I'm less knowledgeable about what happens on other systems, but know
>> that on Windows, the whole system is far, far more robust than most
>> people on this thread seem to think.
>>
>> I can't see *any* problem with executing catch(Error) clauses. I cannot
>> envisage a situation where that can cause a problem. I really cannot.
>
> In many cases, it's probably fine, but if the program is in a bad enough state
> that an Error is thrown, then you can't know for sure that any particular such
> block will execute properly (memory corruption being the extreme case), and if
> it doesn't run correctly, then it could make things worse (e.g. writing
> invalid data to a file, corrupting that file). Also, if the stack is not unwound
> perfectly (as nothrow prevents), then the program's state will become
> increasingly invalid the farther that the program gets from the throw point,
> which will increase the chances of cleanup code functioning incorrectly, as
> any assumptions that they've made about the program state are increasingly
> likely to be wrong (as well as it being increasingly likely that the variables
> that they operate on no longer being valid).
>
> A lot of it comes down to worst case vs typical case. In the typical case, the
> code causing the Error is isolated enough and the code doing the cleanup is
> self-contained enough that trying to unwind the stack as much as possible will
> result in more correct behavior than skipping it all. But in the worst case,
> you can't rely on running any code being safe, because the state of the
> program is very much invalid, in which case, it's better to kill the program
> ASAP. Walter seems to subscribe to the approach that it's best to assume the
> worst case (e.g. that an assertion failure indicates horrible memory
> corruption), and always have Errors function that way, whereas others
> subscribe to the approach that things are almost never that bad, so we should
> just assume that they aren't, since skipping all of that cleanup causes other
> problems.

I believe I now understand the root issue behind this dispute.

Consider:

if (x) throw new FileError;

if (x) throw new FileException;

What is the difference between these two, from the point of view of the compiler? Practically nothing. Only the name is different.
There is absolutely no difference in the validity of the machine state when executing the first, rather than the second.

In both cases it is possible that something has gone horribly wrong; it's also possible that it's a superficial problem.

The difference between Error and Exception is a matter of *convention*.

Now, what people have been pointing out is that *even with things like null pointer exceptions* there are still cases where the machine state has remained valid.

Now, we can say that when an Error is thrown, the machine is in an invalid state *by definition*, regardless of whether it really is or not. If we do this, then Walter's statements about catching AssertErrors become valid, but for a different reason.

When you have thrown an Error, you've told the compiler that the machine is in an invalid state. Catching it and continuing is wrong not because the machine is unstable (which might be true, or might not); rather it's wrong because it's logically inconsistent: by throwing an Error you've told the compiler that it is not recoverable, but by catching it, you've also told it that it is recoverable!

If we chose to say that Error means that the machine is in an invalid state, there are still a couple of issues:

(1) How to deal with cases where the compiler generates an Error, but you know that the machine state is still valid, and you want to suppress the Error and continue.

I think Steven's point near the start of the thread was excellent: in the cases where recovery is possible, it is almost always extremely close to the point where the Error was generated.

(2) Does it make sense to run finally clauses on Error, if we are saying that the machine state is invalid?

I.e., at present they are finally_Throwable clauses; should they instead be finally_Exception clauses?

I cannot see any way in which it makes sense to run them if they're in a throwable function, but not if they are in a nothrow function.

If we take the view that Error by definition implies an invalid machine state, then I don't think they should run at all.

But noting that in a significant fraction of cases, the machine state isn't actually invalid, I think it can be reasonable to provide a mechanism to make them be run.

BTW it's worth noting that cleanup is not necessarily performed even in the Exception case. If an exception is thrown while processing a finally clause (e.g., inside a destructor) then the destructor didn't completely run. C++ just aborts the program if this happens. We've got exception chaining so that case is well defined and can be detected; nonetheless it's got a lot of similarities to the Error case.
June 06, 2012
On 06/06/2012 11:13, Lars T. Kyllingstad wrote:
> On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
>> On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
>> <dmitry.olsh@gmail.com> wrote:
>>
>>> I don't agree that OutOfMemory is critical:
>>> --> make it an exception ?
>>
>> No. What we need is a non-throwing version of malloc that returns
>> NULL. (throwing version can wrap this). If you want to throw an
>> exception, then throw it there (or use enforce).
>
> With some sugar:
>
> auto a = nothrow new Foo; // Returns null on OOM
>
> Then, ordinary new can be disallowed in nothrow code.
>
> IMO, failing assertions and out-of-bounds errors should just abort(),
> or, as Sean suggests, call a special handler.
>
> -Lars

Let's see what Andrei proposes for custom allocators.
June 06, 2012
On 05/06/2012 18:21, Sean Kelly wrote:
> On Jun 5, 2012, at 8:44 AM, Jonathan M Davis<jmdavisProg@gmx.com>  wrote:
>>
>> In many cases, it's probably fine, but if the program is in a bad enough state
>> that an Error is thrown, then you can't know for sure that any particular such
>> block will execute properly (memory corruption being the extreme case), and if
>> it doesn't run correctly, then it could make things worse (e.g. writing
>> invalid data to a file, corrupting that file). Also, if the stack is not unwound
>> perfectly (as nothrow prevents), then the program's state will become
>> increasingly invalid the farther that the program gets from the throw point,
>> which will increase the chances of cleanup code functioning incorrectly, as
>> any assumptions that they've made about the program state are increasingly
>> likely to be wrong (as well as it being increasingly likely that the variables
>> that they operate on no longer being valid).
>
> Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?
>

Yes, either we consider that the environment may have been compromised, in which case it doesn't even make sense to throw an Error, or we consider that the environment is still consistent and we have a logic bug. If so, scope statements (especially scope(failure)) should run when the stack is unwound.

As the need depends on the software (an office suite should try its best to fail gracefully; a plane autopilot should crash ASAP and give control back to the pilot), what is needed here is a compiler switch.