Re: Null references redux
September 26, 2009
Walter Bright:

> I used to work at Boeing designing critical flight systems. Absolutely the WRONG failure mode is to pretend nothing went wrong and happily return default values and show lovely green lights on the instrument panel. The right thing is to immediately inform the pilot that something went wrong and INSTANTLY SHUT THE BAD SYSTEM DOWN before it does something really, really bad, because now it is in an unknown state. The pilot then follows the procedure he's trained to, such as engage the backup.

Today we think this design is not the best one, because the pilot suddenly goes from a situation seen as safe where the autopilot does most things, to a situation where the pilot has to do everything. It causes panic. A human needs time to understand the situation and act correctly. So a better solution is to fail gracefully, giving back the control to the human in a progressive way, with enough time to understand the situation.
Some of the things you have seen at Boeing today can be done better, there's some progress in the design of human interfaces too. That's why I suggest you program in dotnet C# for a few days.


> You could think of null exceptions like pain - sure it's unpleasant, but people who feel no pain constantly injure themselves and don't live very long. When I went to the dentist as a kid for the first time, he shot my cheek full of novocaine. After the dental work, I went back to school. I found to my amusement that if I chewed on my cheek, it didn't hurt.
> 
> Boy was I sorry about that later <g>.

Oh my :-(

Bye,
bearophile
September 27, 2009
Sat, 26 Sep 2009 19:27:51 -0400, bearophile thusly wrote:

> Some of the things you have seen at Boeing
> today can be done better, there's some progress in the design of human
> interfaces too. That's why I suggest you program in dotnet C# for a few
> days.

That is a really good suggestion. To me it seems that several well-known language authors have experimented with various kinds of languages before settling down. But Walter has only done assembler/C/C++/D/Java/Pascal? There are so many other important languages, such as Self, Eiffel, Scala, Scheme, SML, Haskell, Prolog, etc. It is not by any means harmful to know about their internals. There are many CS concepts that can be learned only by studying the language cores.
September 27, 2009
bearophile wrote:
> Walter Bright:
> 
>> I used to work at Boeing designing critical flight systems.
>> Absolutely the WRONG failure mode is to pretend nothing went wrong
>> and happily return default values and show lovely green lights on
>> the instrument panel. The right thing is to immediately inform the
>> pilot that something went wrong and INSTANTLY SHUT THE BAD SYSTEM
>> DOWN before it does something really, really bad, because now it is
>> in an unknown state. The pilot then follows the procedure he's
>> trained to, such as engage the backup.
> 
> Today we think this design is not the best one, because the pilot
> suddenly goes from a situation seen as safe where the autopilot does
> most things, to a situation where the pilot has to do everything. It
> causes panic.

I've never seen any suggestion that Boeing (or Airbus, or the FAA) has changed its philosophy on this. Do you have a reference?

I should also point out that this strategy has been extremely successful. Flying is inherently dangerous, yet is statistically incredibly safe. Boeing is doing a LOT right, and I would be extremely cautious of changing the philosophy that so far has delivered spectacular results.

BTW, shutting off the autopilot does not cause the airplane to suddenly nosedive. Airliner aerodynamics are designed to be stable and to seek straight and level flight if the controls are not touched. Autopilots do shut themselves off now and then, and the pilot takes command.

Computers control a lot of systems besides the autopilot, too.


> A human needs time to understand the situation and act
> correctly. So a better solution is to fail gracefully, giving back
> the control to the human in a progressive way, with enough time to
> understand the situation. Some of the things you have seen at Boeing
> today can be done better,

Please give an example. I'll give one. How about that crash in the Netherlands recently where the autopilot decided to fly the airplane into the ground? As I recall it was getting bad data from the altimeters. I have a firm conviction that if there's a fault in the altimeters, the pilot should be informed and get control back immediately, as opposed to thinking about a sandwich (or whatever) while the autopilot soldiered on. An emergency can escalate very, very fast when you're going 600 mph.

There have been cases of faults in the autopilot causing abrupt, bizarre maneuvers. This is why the autopilot must STOP IMMEDIATELY upon any fault which implies that the system is in an unknown state.

Failing gracefully is done by shutting down the failed system and engaging a backup, not by trying to convince yourself that a program in an unknown state is capable of continuing to function. Software simply does not work that way - one bit wrong and anything can happen.


> there's some progress in the design of
> human interfaces too. That's why I suggest you program in dotnet
> C# for a few days.
September 27, 2009
Walter Bright Wrote:

> As I recall it was getting bad data from the altimeters. I have a firm conviction that if there's a fault in the altimeters, the pilot should be informed and get control back immediately, as opposed to thinking about a sandwich (or whatever) while the autopilot soldiered on.

Walter, in the heat of this thread I hope you haven't missed the correlation with discussion on "Dispatching on a variant" and noting:

"Further, and worth mentioning given another raging thread on this forum at the moment,
it turns out the ensuring type-safety of my design means that NPE's are a thing of the
past (for me at least).  This is due to strong static type checking together with runtime type
validation all for a pretty reasonable cost."

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=96847

Regards
Justin Johansson


September 27, 2009
Justin Johansson wrote:
> Walter, in the heat of this thread I hope you haven't missed the correlation with discussion
> on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. I just don't see them as a replacement for *all* reference types.
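
For instance, a rough, untested sketch of such a type (the name NonNull is purely illustrative, not an existing or proposed library API):

struct NonNull(T) if (is(T == class))
{
    private T payload;

    this(T value)
    {
        assert(value !is null, "NonNull constructed from null");
        payload = value;
    }

    T get() { return payload; }
    alias get this;   // lets a NonNull!T be used wherever a T is expected
}

auto f = NonNull!Foo(new Foo);   // Foo is a placeholder class
f.someMethod();                  // no null check needed at the call site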
September 27, 2009
Walter Bright Wrote:

> Justin Johansson wrote:
> > Walter, in the heat of this thread I hope you haven't missed the correlation with discussion on "Dispatching on a variant" and noting:
> 
> Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. I just don't see them as a replacement for *all* reference types.

What you just said made me think that much of this thread is talking at cross-purposes.

Perhaps the problem should be re-framed.

The example

T bar;
bar.foo();    // new magic in hypothetical D doesn't kill the canary just yet

is a bad example to base this discussion on.

Something like

T bar;
mar.foo(bar);

is a better example to consider.

Forgetting about reference types for a moment, consider the following statements:

"An int type is an indiscriminate union of negativeIntegerType, nonNegativeIntegerType, positiveIntegerType and other range-checked integer types.  Passing around int's to
functions that take int arguments, unless full 32 bits of int is what you really mean, is
akin to passing around an indiscriminate union value, which is a no no."

Pondering this might well shed some light and set useful context for the overall discussion.

In other words, it's not so much an argument about calling a method on a reference type;
it's more about how to treat any type, value or reference, in a type-safe, discriminated manner.
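
For example, here is a rough sketch of one such range-checked type in D (the names are mine, just for illustration):

struct PositiveInt
{
    private int value;

    this(int v)
    {
        assert(v > 0, "PositiveInt requires a value greater than zero");
        value = v;
    }

    int get() { return value; }
    alias get this;   // a PositiveInt reads as an int wherever one is expected
}

// A function taking PositiveInt instead of a raw int states its precondition
// in the type itself, instead of re-checking the argument inside the body.
int half(PositiveInt n) { return n / 2; }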

Just a thought (?)

September 27, 2009
Walter Bright wrote:
> Justin Johansson wrote:
>> Walter, in the heat of this thread I hope you haven't missed the correlation with discussion
>> on "Dispatching on a variant" and noting:
> 
> Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. 

No. There is no means to disable default construction.

> I just don't see them as a replacement for *all* reference types.

Non-nullable references should be the default.


Andrei
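
To illustrate the loophole Andrei is pointing at, assuming a wrapper along the lines of the NonNull sketch above (Foo again being a placeholder class):

NonNull!Foo n;    // no constructor runs; the payload is Foo.init, i.e. null
n.someMethod();   // dereferences null despite the "non-null" type

As long as the language default-constructs the struct, the wrapper's invariant can be bypassed without a single cast.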
September 27, 2009
Andrei Alexandrescu wrote:
> Walter Bright wrote:
>> Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. 
> 
> No. There is no means to disable default construction.
> 
>> I just don't see them as a replacement for *all* reference types.
> 
> Non-nullable references should be the default.
> 
> 
> Andrei

Like I said in another post of this thread, I believe the issue here is more about initializer semantics than about null/non-null references. That is what's causing most of the errors anyway.

Can't the compiler just throw a warning if a variable is used before initialization, and allow "= null" to bypass this ("= void" would still be considered uninitialized)? Same thing for fields.

It would be much more convenient than new type variants, both to implement and to use.

It could even be used for any type; the default initializer in D is a cute idea, but not a performance-friendly one. I would much prefer the compiler to allow "int a" and warn me if I use it before assigning anything to it, rather than have it assign zero and then assign the value I wanted. "= void" is nice, but I'm pretty sure I'm way over a thousand uses of it so far.
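
For reference, the current D behaviour under discussion looks roughly like this:

int a;          // default-initialized to int.init, i.e. 0, at runtime cost
int b = void;   // explicitly left uninitialized; reading it before assigning is a bug
Object o;       // references default to null, same idea

// The proposal: keep "int a;" legal, but have the compiler warn when a is
// read before any assignment, with an explicit initializer silencing the warning.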

Jeremie
September 27, 2009
Jeremie Pelletier wrote:
> It could even be used for any type; the default initializer in D is a cute idea, but not a performance-friendly one. I would much prefer the compiler to allow "int a" and warn me if I use it before assigning anything to it, rather than have it assign zero and then assign the value I wanted. "= void" is nice, but I'm pretty sure I'm way over a thousand uses of it so far.

The compiler, when -O is used, should remove nearly all the redundant initializations.
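
That is, for a pattern like the following (compute is just a placeholder), the store of zero becomes a dead store that the optimizer can drop:

int a;           // language-mandated initialization to 0
a = compute();   // -O can eliminate the redundant 0-store above as a dead store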
September 27, 2009
Andrei Alexandrescu wrote:
> Walter Bright wrote:
>> Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. 
> 
> No. There is no means to disable default construction.

Ack, I remember we talked about this, I guess I don't remember the resolution.