June 01, 2012
Le 01/06/2012 14:16, Tobias Pankrath a écrit :
> On Friday, 1 June 2012 at 12:03:15 UTC, deadalnix wrote:
>> Le 01/06/2012 12:29, Walter Bright a écrit :
>>> On 6/1/2012 1:15 AM, Jens Mueller wrote:
>>>> Since the current implementation does not follow the specification
>>>> regarding scope and finally block being executed in case of Error will
>>>> try ... catch (...Error) keep working?
>>>
>>> No. The reason for this is the implementation was not updated after the
>>> split between Error and Exception happened. It was overlooked.
>>>
>>>> I have code that uses
>>>> assertThrows!AssertError to test some in contracts. Will this code
>>>> break?
>>>
>>> I don't know exactly what your code is, but if you're relying on scope
>>> to unwind in the presence of Errors, that will break.
>>>
>>
>> If you have an error, it is already broken in some way.
>>
>> But this is unreasonable to think that the whole program is broken,
>> except in very specific cases (stack corruption for instance) but in
>> such a case, you can't throw an error anyway.
>
> I agree. It should be possible to have a plugin system where not every
> null pointer dereference in a plugin screws up the whole program,
> without using different processes for the plugin.
>
> 90% of null pointer dereferences are simple bugs not memory corruption.
>

You want to crash an airplane, or what?
June 01, 2012
On 06/01/12 19:59, Steven Schveighoffer wrote:
> On Fri, 01 Jun 2012 13:50:16 -0400, Walter Bright <newshound2@digitalmars.com> wrote:
> 
>> On 6/1/2012 5:29 AM, Steven Schveighoffer wrote:
>>> No. What we need is a non-throwing version of malloc that returns NULL.
>>
>> We have it. It's called "malloc"!
> 
> Oh sorry, I meant *GC.malloc* :)

auto GC_malloc(size_t s) nothrow
{
    void* p;
    try p = GC.malloc(s);
    catch {}   // swallows the OutOfMemoryError that GC.malloc throws on failure
    return p;
}

Which isn't ideal, but probably good enough - it's not like OOM will happen
often enough that the exception overhead matters.
The various implicit allocations will be more problematic, once GC.malloc
starts to fail.
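A self-contained sketch of the wrapper in use, called from `nothrow` code the way one would use C's malloc (the caller shown is illustrative):

```d
import core.memory : GC;

// artur's wrapper: turn GC.malloc's OutOfMemoryError into a null return.
void* GC_malloc(size_t s) nothrow
{
    void* p;
    try p = GC.malloc(s);
    catch (Throwable) {}   // swallow the OOM Error
    return p;
}

void fillBuffer() nothrow
{
    auto buf = GC_malloc(4096);
    if (buf is null)
    {
        // C-style handling: degrade gracefully instead of unwinding.
        return;
    }
    // ... use buf ...
}
```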

artur
June 01, 2012
Le 01/06/2012 12:15, Walter Bright a écrit :
> On 6/1/2012 12:45 AM, Jens Mueller wrote:
>> This is perfectly valid when developing such critical systems. But
>> limiting D to effectively only allow developing such particular systems
>> cannot be the appropriate response. There are plenty of other systems
>> that do not operate in such a constrained environment.
>
> You can catch thrown asserts if you want, after all, D is a systems
> programming language. But that isn't a valid way to write robust software.
>

No, you can't, because this is planned to be half broken with Error, based on the fallacy that the program may be in a corrupted state.

If your program is in a corrupted state, throwing an Error is already a stupid idea. Good luck unwinding a corrupted stack.

What is needed here is a flag to HALT on Error, for use in critical systems.
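Something like that flag can be approximated today: druntime lets you replace the assert handler, so a critical-system build can stop dead instead of unwinding. A sketch, assuming the `core.exception.assertHandler` hook (the handler body is illustrative):

```d
import core.exception : assertHandler;
import core.stdc.stdlib : abort;

shared static this()
{
    // On a failed assert: no unwinding, no scope/finally blocks - halt.
    assertHandler = (string file, size_t line, string msg) {
        abort();
    };
}
```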
June 01, 2012
On Friday, June 01, 2012 14:00:01 deadalnix wrote:
> Le 01/06/2012 12:26, Walter Bright a écrit :
> > Except that you do not know why the arithmetic turned out wrong - it could be the result of memory corruption.
> 
> Yes. wrong calculation often comes from memory corruption. Almost never from programmer having screwed up in the said calculation.
> 
> It is so perfectly reasonable and completely match my experience. I'm sure everybody here will agree.
> 
> Not to mention that said memory corruption obviously come from compiler bug. As always. What programmer does mistakes in his code ? We write programs, not bugs !

I'd have to agree that the odds of an arithmetic error being caused by memory corruption are generally quite low, but the problem is that when an assertion fails, there's _no_ way for the program to know how bad things really are or what the cause is. The programmer would have to examine the entire program state (which probably still isn't enough in many cases, since you don't have the whole history of the program state - only its current state) and the code that generated the assertion in order to figure out what really happened.

When an assertion fails, the program has to assume the worst case scenario, because it doesn't have the information required to figure out how bad the situation really is. When you use an assertion, you're saying that if it fails, there is a bug in your program, and it must be terminated. If you want to recover from whatever the assertion is testing, then _don't use an assertion_.

- Jonathan M Davis
June 01, 2012
On 01.06.2012 23:38, Jonathan M Davis wrote:
> On Friday, June 01, 2012 14:00:01 deadalnix wrote:
>> Le 01/06/2012 12:26, Walter Bright a écrit :
>>> Except that you do not know why the arithmetic turned out wrong - it
>>> could be the result of memory corruption.
>>
>> Yes. wrong calculation often comes from memory corruption. Almost never
>> from programmer having screwed up in the said calculation.
>>
>> It is so perfectly reasonable and completely match my experience. I'm
>> sure everybody here will agree.
>>
>> Not to mention that said memory corruption obviously come from compiler
>> bug. As always. What programmer does mistakes in his code ? We write
>> programs, not bugs !
>
> I'd have to agree that the odds of an arithmetic error being caused by memory
> corruption are generally quite low, but the problem is that when an assertion
> fails, there's _no_ way for the program to know how bad things really are or
> what the cause is.

Indeed it's quite bad to assume either extreme - "oh, my god, everything is corrupted" or "blah, whatever, keep going". I thought D was trying to strike reasonable compromises where possible.

By the way, memory corruption is detectable, and even recoverable; one just needs certain precautions, like adding checksums, or better yet ECC codes, to _every_ important data structure (including, of course, stack integrity hashes). This seems very useful for compiler-generated code under the '-debug' switch. The runtime could even ask the GC to recheck the ECC of every GC data structure. Run that memory check on each throw of an Error, or trust me to do the thing manually - I don't know. Just provide some options, damn it.
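A manual version of that checksum idea is easy to sketch: an important structure carries a hash of its payload and can be verified before use (the struct and the use of druntime's generic `hashOf` are illustrative, not a proposed runtime feature):

```d
// Illustrative: guard an important data structure with a checksum.
struct Checked(T)
{
    private T payload;
    private size_t sum;

    void set(T value)
    {
        payload = value;
        sum = hashOf(value);   // druntime's generic hash, in scope via object
    }

    // False means the stored bytes no longer match the checksum,
    // i.e. the memory was likely stomped on behind our back.
    bool intact() const
    {
        return hashOf(payload) == sum;
    }
}
```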

For all I care, the program is flawless; it's the cosmic rays that are funky in this area.

Certain compilers, by the way, already do something like that on each stack entry/exit in debug mode (stack checksums).

P.S. Pouring on more and more "generally impossible", "can't do this", "can't do that" and ya-da-ya-da doesn't help solve problems.


-- 
Dmitry Olshansky
June 01, 2012
On 02.06.2012 0:06, Dmitry Olshansky wrote:
> On 01.06.2012 23:38, Jonathan M Davis wrote:
>> On Friday, June 01, 2012 14:00:01 deadalnix wrote:
>>> Le 01/06/2012 12:26, Walter Bright a écrit :
>>>> Except that you do not know why the arithmetic turned out wrong - it
>>>> could be the result of memory corruption.
>>>
>>> Yes. wrong calculation often comes from memory corruption. Almost never
>>> from programmer having screwed up in the said calculation.
>>>
>>> It is so perfectly reasonable and completely match my experience. I'm
>>> sure everybody here will agree.
>>>
>>> Not to mention that said memory corruption obviously come from compiler
>>> bug. As always. What programmer does mistakes in his code ? We write
>>> programs, not bugs !
>>
>> I'd have to agree that the odds of an arithmetic error being caused by
>> memory
>> corruption are generally quite low, but the problem is that when an
>> assertion
>> fails, there's _no_ way for the program to know how bad things really
>> are or
>> what the cause is.
>
> Indeed it's quite bad to assume both extremes - either "oh, my god
> everything is corrupted" or "blah, whatever, keep going". I thought D
> was trying to hold keep reasonable compromises where possible.
>
> By the way memory corruption is checkable. And even recoverable, one
> just needs to have certain precautions like adding checksums or better
> yet ECC codes to _every_ important data structure (including of course
> stack security hashes). Seems very useful for compiler generated code
> with '-debug' switch. It even can ask GC to recheck ECC on every GC
> datastructure. Do that memory check on each throw of error dunno. Trust
> me to do the thing manually I dunno. Provide some options, damn it.
>
> For all I care the program is flawless it's cosmic rays that are funky
> in this area.
>
> Certain compilers by the way already do something like that on each
> stack entry/leave in debug mode (stack hash sums).
>
> P.S. Trying to pour more and more of "generally impossible" "can't do
> this", "can't do that" and ya-da-ya-da doesn't help solving problems.
>
>

Ah, forgot the most important thing: I'm not convinced that throwing an Error that _loosely_ _unwinds_ the stack is better than either a straight abort on the spot or _proper_ stack unwinding. nothrow is not an argument by itself; I've yet to see an argument for what it gives us that is so important.
(C++ didn't have proper nothrow for ages, and it worked somehow.)

BTW, the stack can be corrupted (in fact it quite often is, and that's the most dangerous case). Even C's runtime abort can be corrupted. Mission-critical software should just use a straight HALT instruction*.

So the point is: don't get too paranoid by default, please.

* Yet it will leave the things this program operates on in an undefined state, like a half-open airlock.

-- 
Dmitry Olshansky
June 01, 2012
On 6/1/2012 11:14 AM, deadalnix wrote:
> We are talking about running scope statements and finally blocks when unwinding
> the stack, not trying to continue the execution of the program.

Which will be running arbitrary code not anticipated by the assert failure, and code that is highly unlikely to be desirable for shutdown.

> This is, most of the time, the point of error/exceptions. You rarely recover
> from them in real life.

I believe this is a misunderstanding of what exceptions are for. "File not found" exceptions, and other errors detected in inputs, are routine and routinely recoverable.

This discussion has come up repeatedly in the last 30 years. Its root is always the same - conflating the handling of input errors with the handling of bugs in the logic of the program.

The two are COMPLETELY different, and dealing with them follows completely different philosophies, goals, and strategies.

Input errors are not bugs, and vice versa. There is no overlap.
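In D terms, this is the split between `std.exception.enforce` (input errors: routine, recoverable, reported as Exceptions) and `assert` (logic bugs: the program is wrong and must die). A minimal sketch (the function and its invariant are illustrative):

```d
import std.algorithm.searching : maxElement, minElement;
import std.exception : enforce;

double averageOf(double[] samples)
{
    // Input error: an empty input is the caller's problem - routine
    // and recoverable via try/catch. Use an exception.
    enforce(samples.length > 0, "need at least one sample");

    double sum = 0;
    foreach (s; samples) sum += s;
    auto avg = sum / samples.length;

    // Logic bug: the average must lie within the sample range; if it
    // doesn't, this function itself is broken. Not recoverable.
    assert(avg >= samples.minElement && avg <= samples.maxElement);
    return avg;
}
```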
June 01, 2012
On 6/1/2012 1:06 PM, Dmitry Olshansky wrote:
> Indeed it's quite bad to assume both extremes - either "oh, my god everything is
> corrupted" or "blah, whatever, keep going". I thought D was trying to hold keep
> reasonable compromises where possible.

D has a lot of bias towards being able to mechanically guarantee as much as possible, while, of course, allowing the programmer to circumvent those guarantees if he so desires.

For example, you can XOR pointers in D.
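A sketch of what XOR-ing pointers looks like, as in an XOR linked list (purely illustrative - and note that a pointer stored this way is invisible to the GC, which is part of why the practice needs justifying):

```d
struct Node
{
    int value;
    Node* link;   // holds prev XOR next, as in an XOR linked list
}

// XOR two pointers; XOR-ing the result with either operand recovers the other.
Node* xorPtr(Node* a, Node* b)
{
    return cast(Node*)(cast(size_t)a ^ cast(size_t)b);
}
```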

But if I were your program manager, you'd need an extraordinary justification to allow such a practice.

My strongly held opinion on how to write reliable software is based on decades of experience by others in the aviation business on how to do it. And the proof that this works is obvious.

It's also obvious to me that the designers of the Deepwater Horizon rig and the Fukushima plant did not follow these principles, and a heavy price was paid.

D isn't going to make anyone follow these principles - but it is going to make it more difficult to violate them. I believe D should be promoting, baked into the design of the language, proven successful best practices.

In programming courses and curriculum I've seen, very little attention is paid to this, and programmers are left to discover it the hard way.


> By the way memory corruption is checkable. And even recoverable, one just needs
> to have certain precautions like adding checksums or better yet ECC codes to
> _every_ important data structure (including of course stack security hashes).
> Seems very useful for compiler generated code with '-debug' switch. It even can
> ask GC to recheck ECC on every GC datastructure. Do that memory check on each
> throw of error dunno. Trust me to do the thing manually I dunno. Provide some
> options, damn it.
>
> For all I care the program is flawless it's cosmic rays that are funky in this
> area.
>
> Certain compilers by the way already do something like that on each stack
> entry/leave in debug mode (stack hash sums).
>
> P.S. Trying to pour more and more of "generally impossible" "can't do this",
> "can't do that" and ya-da-ya-da doesn't help solving problems.

It doesn't even have to be memory corruption that puts your program in an invalid state where it cannot reliably continue. The assert would have detected a logic bug, and the invalid state of the program is not at all necessarily memory corruption. Invalid does not imply corruption, though corruption does imply invalid.

June 01, 2012
On 6/1/2012 1:14 PM, Dmitry Olshansky wrote:
> nothrow is not argument of itself. I've yet to see argument for
> what it gives us that so important.

What nothrow gives is mechanically checkable documentation on the possible results of a function.
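The mechanical check in question is simple to demonstrate: the compiler rejects any `nothrow` function from which an Exception could escape, so the signature is documentation that cannot go stale (a minimal sketch):

```d
import std.exception : enforce;

// Fine: the compiler verifies no Exception can escape.
int clampToZero(int x) nothrow
{
    return x < 0 ? 0 : x;
}

// Won't compile: enforce may throw an Exception, and nothrow promises
// it cannot. (Errors may still propagate - nothrow only covers Exceptions.)
// int checked(int x) nothrow { enforce(x >= 0); return x; }
```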


> (C++ didn't have proper nothrow for ages, it did worked somehow)

C++ is infamous for not being able to look at the signature of a function and glean much useful information on what its inputs, outputs, and side effects are. This makes it highly resistant to analysis.
June 01, 2012
On 02.06.2012 0:52, Walter Bright wrote:
> On 6/1/2012 1:06 PM, Dmitry Olshansky wrote:
>> Indeed it's quite bad to assume both extremes - either "oh, my god
>> everything is
>> corrupted" or "blah, whatever, keep going". I thought D was trying to
>> hold keep
>> reasonable compromises where possible.
>
> D has a lot of bias towards being able to mechanically guarantee as much
> as possible, with, of course, allowing the programmer to circumvent
> these if he so desires.
>
> For example, you can XOR pointers in D.
>

Thanks. Could come in handy one day :)


> But if I were your program manager, you'd need an extraordinary
> justification to allow such a practice.

http://en.wikipedia.org/wiki/XOR_linked_list

>
> My strongly held opinion on how to write reliable software is based on
> decades of experience by others in the aviation business on how to do
> it. And the proof that this works is obvious.
>
No argument here. Planes are in fact a surprisingly safe mode of transport.

> It's also obvious to me that the designers of the Deep Water Horizon rig
> and the Fukishima plant did not follow these principles, and a heavy
> price was paid.
>
> D isn't going to make anyone follow these principles - but it is going
> to make it more difficult to violate them. I believe D should be
> promoting, baked into the design of the language, proven successful best
> practices.
>

I just hope there is some provision to customize a bit what exactly to do on Error. There might be a few things to try before dying - like closing the airlock.

> In programming courses and curriculum I've seen, very little attention
> is paid to this, and programmers are left to discover it the hard way.
>

That's something I'll agree on. I've had a little exposure to in-house firmware design. It's not just the people or their skills; the development process itself is wrong. These days I won't trust a toaster with a "smart" MCU in it. Better the old analog stuff.

>> Certain compilers by the way already do something like that on each stack
>> entry/leave in debug mode (stack hash sums).
>>
>> P.S. Trying to pour more and more of "generally impossible" "can't do
>> this",
>> "can't do that" and ya-da-ya-da doesn't help solving problems.
>
> It doesn't even have to be memory corruption that puts your program in
> an invalid state where it cannot reliably continue.

The point was that memory corruption is hard to check for, yet it's quite checkable. Logical invariants are in fact easier to check. There are various other techniques to make sure the global state is intact, to determine which parts of it can be saved, and so on. Trying to cover all of it with a single cure is not good.
Again, I'm speaking of options here; much like XORing a pointer, they are surely not an everyday thing.

> The assert would
> have detected a logic bug, and the invalid state of the program is not
> at all necessarily memory corruption. Invalid does not imply corruption,
> though corruption does imply invalid.

Correct.

-- 
Dmitry Olshansky