November 02, 2013
Walter Bright:

> I think you're missing the point. Improving the quality of the software is not the answer to making fail safe systems.

To make high-integrity software you have to start with reliable tools, then use the right testing methodologies, sometimes write down proofs, add redundancy, adopt the right policies in the company that writes the software, and so on. Improving the quality of the language is not enough, but it's useful. You have to attack the reliability problem from all sides at once.

All subsystems can fail, but to make a reliable system you don't start by building it out of the least reliable components you can find on the market. You use "good" components and good strategies at every level.

Bye,
bearophile
November 02, 2013
On 11/02/2013 10:55 AM, bearophile wrote:
>
>
>> I think you're missing the point. Improving the quality of the
>> software is not the answer to making fail safe systems.
>
> To make high integrity software you have to start with reliable tools,
> and then use the right testing methodologies, sometimes you have to
> write down proofs,

Well, if there is a formal proof of correctness, checking for overflow at runtime is as pointless as limiting oneself to a language without undefined behaviour in its basic semantics.
November 02, 2013
On 02/11/13 10:55, bearophile wrote:
> To make high integrity software you have to start with reliable tools

I know what you're saying, but there is an inherent assumption in the concept of "reliable tools".  So far as I can see the important thing is to assume that _nothing_ in the system is reliable, and that anything can fail.

If you rely on the language or on the compiler to detect integral overflows, you're not necessarily safer -- your safety rests on the assumption that the compiler will implement these things correctly, and will ALWAYS do so regardless of circumstances.  How can you tell if the automated integral overflow checking is working as it should?  And even if it is a high-quality implementation, how do you protect yourself against extreme pathological cases which may arise in very rare circumstances?

"Necessary but not sufficient" seems a good phrase to use here.
November 02, 2013
Timon Gehr:

> Well, if there is a formal proof of correctness, checking for overflow at runtime is as pointless as limiting oneself to a language without undefined behaviour in its basic semantics.

Perhaps you remember the funny quote by Knuth: "Beware of bugs in the above code; I have only proved it correct, not tried it."

Proofs of correctness are usually limited to certain subsystems, because writing a proof for the whole system is usually too hard, takes too much time/money, or because you have to interface with modules written by subcontractors. Often there is an "outer world" that you don't include in your proof. This is a common strategy for systems written in Spark-Ada, and it's done even by NASA (and indeed it was the cause of one famous crash of a NASA probe). So you rely on the language and on simpler means to assure the safety of the interfaces between the sections of the program.

Another purpose of the run-time tests is to guard against random bit flips, caused by radiation, cosmic rays, interference, thermal noise, etc. Such run-time tests are present in the probes on Mars, because even space-hardened electronics sometimes errs (and relying on other backup means alone is not enough, as I explained in my previous post).
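
To give a rough idea of the kind of run-time test I mean (just an illustrative sketch in D, not code from any real probe): a critical value can be stored redundantly and the copies compared on every read, so that a single flipped bit is detected instead of propagating silently:

// Illustrative only: a value kept in two copies, verified on each read.
struct Redundant(T)
{
    private T a, b; // two independent copies of the same value

    void set(T v) { a = v; b = v; }

    T get() const
    {
        // If radiation or noise corrupts one copy, the mismatch is
        // caught here instead of silently propagating.
        assert(a == b, "stored copies disagree: possible memory corruption");
        return a;
    }
}

unittest
{
    Redundant!int altitude;
    altitude.set(1200);
    assert(altitude.get() == 1200); // passes under normal operation
}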

And in general, in high-integrity systems you don't want to use a language with undefined behaviour in its basic constructs, because such a language is harder to test for the compiler writer too. If you take a look at the blogs that discuss the semantics of C programs today, you'll see there are cases where the C standard is ambiguous and the GCC/Intel/Clang compilers behave differently. That's a fragile foundation you don't want to build high-integrity software on. It took decades to write a certified C compiler.

Bye,
bearophile
November 02, 2013
On 11/02/2013 01:56 PM, bearophile wrote:
> Timon Gehr:
>
>> Well, if there is a formal proof of correctness, checking for overflow
>> at runtime is as pointless as limiting oneself to a language without
>> undefined behaviour in its basic semantics.
>
> Perhaps you remember the funny quote by Knuth: "Beware of bugs in the
> above code; I have only proved it correct, not tried it."
>
> Proofs of correctness are usually limited to certain subsystems, ...

As long as additional ad-hoc techniques for error avoidance are fruitful, the formal proof of correctness does not cover enough cases. (Of course, one may not be able to construct such a proof.)

> This is a common strategy for systems written in Spark-Ada, and it's
> done even by NASA (and indeed it was the cause of one famous crash of a NASA probe).

Well, I think it is funny to consider a methodology adequate in hindsight if it has resulted in a crash. Have the techniques advocated by Walter been thoroughly applied in this project?

> And in general, in high-integrity systems you don't want to use a
> language with undefined behaviour in its basic constructs, because such
> a language is harder to test for the compiler writer too. If you take a
> look at the blogs that discuss the semantics of C programs today, you'll
> see there are cases where the C standard is ambiguous and the
> GCC/Intel/Clang compilers behave differently.

None of those compilers is proven correct.

> This is a fragile foundation you don't want to build high integrity software on.

Therefore, obviously.

> It took decades to write a certified C compiler.
>

No, it took four years. CompCert was started in 2005.
November 02, 2013
On 11/1/2013 8:47 AM, Chris wrote:
>
> I still think that, although software can help to make things safer,
> common sense should not be turned off while driving. If it's raining
> heavily, slow down. Sounds simple, but many drivers don't do it, with or
> without software.

I've noticed that the worse the driving conditions are, the more tailgating jackasses I have to put up with.

November 02, 2013
On 11/2/2013 8:09 AM, Joseph Rushton Wakeling wrote:
> On 02/11/13 10:55, bearophile wrote:
>> To make high integrity software you have to start with reliable tools
>
> I know what you're saying, but there is an inherent assumption in the
> concept of "reliable tools".  So far as I can see the important thing is
> to assume that _nothing_ in the system is reliable, and that anything
> can fail.
>

"Reliable" of course simply meaning "less unreliable".

> If you rely on the language or on the compiler to detect integral
> overflows, you're not necessarily safer -- your safety rests on the
> assumption that the compiler will implement these things correctly, and
> will ALWAYS do so regardless of circumstances.

It still helps and is therefore worthwhile. Nobody's claiming that runtime overflow checks were sufficient to ensure reliability, only that *not* having them can be a bad idea.

November 02, 2013
On 11/1/13 8:03 AM, bearophile wrote:
> Walter Bright:
>
>> ...
>
> Everyone who writes safety critical software should really avoid
> languages unable to detect integral overflows (at compile-time or
> run-time) in all normal numerical operations,

I'm unclear on why you seem so eager to grind that axe. The matter seems to be rather trivial - disallow statically the use of built-in integrals, and prescribe the use of library types that do the verification. A small part of the codebase that's manually verified (such as the library itself) could use the primitive types. Best of all worlds. In even a medium project, the cost of the verifier and maintaining that library is negligible.
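
To make that concrete, here is a minimal sketch of what such a checked library type could look like in D (the CheckedInt name and its API are hypothetical, not an existing Phobos type):

// Hypothetical checked integral: like int, but addition throws
// instead of silently wrapping on overflow.
struct CheckedInt
{
    int value;

    CheckedInt opBinary(string op : "+")(CheckedInt rhs) const
    {
        import std.exception : enforce;
        // Do the arithmetic in a wider type, then verify the result fits.
        immutable long wide = cast(long) value + rhs.value;
        enforce(wide >= int.min && wide <= int.max, "integer overflow");
        return CheckedInt(cast(int) wide);
    }
}

unittest
{
    import std.exception : assertThrown;
    auto a = CheckedInt(int.max);
    assertThrown(a + CheckedInt(1)); // the overflow is caught at run time
}

Ordinary application code would then be restricted to CheckedInt and friends, and only the small, manually verified part of the codebase (such as the library itself) would be allowed to touch the raw built-in types.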

> and languages that have undefined operations in their basic
> semantics.

We need to get SafeD up to snuff!

> So the Ada language is OK; C and D are not OK for safety-critical software.

Well that's Ada's claim to fame. But I should hope D would have a safety edge over C.


Andrei

November 02, 2013
Andrei Alexandrescu:

> I'm unclear on why you seem so eager to grind that axe.

Because I've tried the alternative: I've seen it catch bugs (unwanted integral overflows) in code of mine that I assumed was "good", so I will never again trust languages that ignore overflows. And if we're talking about high-integrity software, D's handling of integrals is not good enough.


> The matter seems to be rather trivial - disallow statically the use of built-in integrals, and prescribe the use of library types that do the verification. A small part of the codebase that's manually verified (such as the library itself) could use the primitive types. Best of all worlds. In even a medium project, the cost of the verifier and maintaining that library is negligible.

How many C++ programs do this? Probably very few (even though Clang is now able to catch some of these cases in C code). How many Ada programs perform those run-time tests? Most of them.


> A small part of the codebase that's manually verified (such as the library itself) could use the primitive types.

In some cases you want to keep the run-time tests even in verified code, to guard against hardware errors caused by radiation, noise, etc.


> We need to get SafeD up to snuff!

At the moment SafeD means "memory-safe D"; it's not "safe" with regard to other kinds of problems.

"Undefined operations" are lines of code like this, some of them are supposed to become defined in future D:

a[i++] = i++;          // order of the two side effects is not defined
foo(bar(), baz());     // order of evaluation of bar() and baz() is not defined
auto x = int.max + 1;  // signed overflow wraps around silently

and so on.


> But I should hope D would have a safety edge over C.

Of course :-) Idiomatic D code is much easier to write correctly than C code (but in D you sometimes write more complex code, so there is a little equalization of bug-proneness. That's part of human nature).

There are many levels of safety; high-integrity software is at the top of those levels (and probably even high-integrity software has various levels: the guidance software of some submersibles may not be as bug-free as the Space Shuttle's guidance software) and it's a very small percentage of all the code written today.

Bye,
bearophile
November 02, 2013
On 11/2/13 9:49 AM, bearophile wrote:
> Andrei Alexandrescu:
>
>> I'm unclear on why you seem so eager to grind that axe.
>
> Because I've tried the alternative: I've seen it catch bugs (unwanted
> integral overflows) in code of mine that I assumed was "good", so I will
> never again trust languages that ignore overflows. And if we're talking
> about high-integrity software, D's handling of integrals is not good
> enough.

I disagree.

>> The matter seems to be rather trivial - disallow statically the use of
>> built-in integrals, and prescribe the use of library types that do the
>> verification. A small part of the codebase that's manually verified
>> (such as the library itself) could use the primitive types. Best of
>> all worlds. In even a medium project, the cost of the verifier and
>> maintaining that library is negligible.
>
> How many C++ programs do this? Probably very few (even though Clang is
> now able to catch some of these cases in C code). How many Ada programs
> perform those run-time tests? Most of them.

That has to do with the defaults chosen. I don't think "mission critical" is an appropriate default for D. I do think that D offers better speed than Ada by default, and better abstraction capabilities than both C++ and Ada, which afford it good library checked integrals. I don't see a necessity to move out of that optimum.

>> A small part of the codebase that's manually verified (such as the
>> library itself) could use the primitive types.
>
> In some cases you want to keep the run-time tests even in verified code,
> to guard against hardware errors caused by radiation, noise, etc.

How is this a response to what I wrote?

>> We need to get SafeD up to snuff!
>
> At the moment SafeD means "memory-safe D"; it's not "safe" with regard
> to other kinds of problems.
>
> "Undefined operations" are lines of code like this, some of them are
> supposed to become defined in future D:
>
> a[i++] = i++;
> foo(bar(), baz());
> auto x = int.max + 1;
>
> and so on.

Agreed, this is not part of the charter of SafeD. We need to ensure left-to-right evaluation semantics through and through.

>> But I should hope D would have a safety edge over C.
>
> Of course :-) Idiomatic D code is much easier to write correctly than C
> code (but in D you sometimes write more complex code, so there is a
> little equalization of bug-proneness. That's part of human nature).
>
> There are many levels of safety; high-integrity software is at the top
> of those levels (and probably even high-integrity software has various
> levels: the guidance software of some submersibles may not be as
> bug-free as the Space Shuttle's guidance software) and it's a very small
> percentage of all the code written today.

And to take that to its logical conclusion, we don't want the defaults in the D language to cater to such applications.


Andrei