
digitalmars.D - Re: Null references redux

bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I've never seen any suggestion that Boeing (or Airbus, or the FAA) has 
 changed its philosophy on this. Do you have a reference?

I like to read many books; I read about this in the chapter "Coffee Cups in the Cockpit" of the book "Turn Signals Are the Facial Expressions of Automobiles" by Donald Norman. It discusses the "strong silent type" of computer automation, an idea from "In the Age of the Smart Machine: The Future of Work and Power" by Zuboff. An example of such a problem is explained in the NTSB 1986 aircraft accident report on China Airlines 747-SP, N4522V, 300 nautical miles northwest of San Francisco, California, February 19, 1985 (Report NTSB/AAR-86/03), Washington, D.C.: http://libraryonline.erau.edu/online-full-text/ntsb/aircraft-accident-reports/AAR86-03.pdf
It shows a problem that a better autopilot interface design could have avoided (things have probably improved since 1985).
 I should also point out that this strategy has been extremely 
 successful. Flying is inherently dangerous, yet is statistically 
 incredibly safe. Boeing is doing a LOT right, and I would be extremely 
 cautious of changing the philosophy that so far has delivered 
 spectacular results.

They keep improving all the time.
 BTW, shutting off the autopilot does not cause the airplane to suddenly 
 nosedive. Airliner aerodynamics are designed to be stable and to seek 
 straight and level flight if the controls are not touched. Autopilots do 
 shut themselves off now and then, and the pilot takes command.

See the example I gave above, which shows why what you have said can be dangerous anyway. A sudden automatic switch-off of the autopilot can be dangerous, because people need time to understand what is happening when the situation goes from a mostly automatic one to a mostly manual one.
 Please give an example. I'll give one. How about that crash in the 
 Netherlands recently where the autopilot decided to fly the airplane 
 into the ground? As I recall it was getting bad data from the 
 altimeters. I have a firm conviction that if there's a fault in the 
 altimeters, the pilot should be informed and get control back 
 immediately, as opposed to thinking about a sandwich (or whatever) while 
 the autopilot soldiered on. An emergency can escalate very, very fast 
 when you're going 600 mph.

Of course the pilot must have a button to immediately regain manual control whenever he or she wants. My point was different: a sudden, automatic, full disabling of the autopilot can be dangerous.
 Failing gracefully is done by shutting down the failed system and 
 engaging a backup, not by trying to convince yourself that a program in 
 an unknown state is capable of continuing to function. Software simply 
 does not work that way - one bit wrong and anything can happen.

Software used to work that way in the past, but this is not set in stone. The famous Ariane crash was caused by an ultra-rigid reaction to errors in the software.

Do you know fuzzy logic? One of the purposes of fuzzy logic is to design control systems (usable in washing machines, cameras, missiles, etc.) that work and fail gracefully. They don't operate in the two binary modes of perfect/totally wrong. A graceful failure might have prevented the Ariane from crashing and going boom.

Today people are studying software systems based on fuzzy logic, neural networks, support vector machines, and more, that are designed to keep working despite some small problems and faults.

In some situations (like a CAT scanner in a hospital) you may want the machine to switch off completely when a problem is found, instead of failing gracefully. On the other hand, if you put such a scanner in a poor hospital in Africa, you may want something that keeps working even when some small trouble is present, because a less-than-perfect machine is going to be the standard situation where there is very little money, and a reduced-functionality scanner may save a lot of people anyway (though if it emits too many X-rays, it is better to switch it off).

Bye,
bearophile
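
P.S. A tiny D sketch of what I mean by graded control, where the output degrades smoothly instead of jumping between perfect and totally wrong (the membership centers, widths, and rule outputs are invented, just to illustrate the idea):

import std.math : abs;
import std.algorithm : max;
import std.stdio : writefln;

// Triangular membership function: the degree (0.0 .. 1.0) to which x
// belongs to the fuzzy set centered at c with half-width w.
double membership(double x, double c, double w)
{
    return max(0.0, 1.0 - abs(x - c) / w);
}

// Toy controller: maps a sensor error to a correction.  The output
// weakens gradually as the input drifts; there is no single threshold
// where behavior jumps from "perfect" to "totally wrong".
double correction(double error)
{
    immutable neg  = membership(error, -1.0, 1.0); // "error is negative"
    immutable zero = membership(error,  0.0, 1.0); // "error is near zero"
    immutable pos  = membership(error,  1.0, 1.0); // "error is positive"
    immutable total = neg + zero + pos;
    if (total == 0)
        return 0; // far out of range: no opinion, apply no correction
    // Weighted average of the rule outputs (centroid defuzzification).
    return (neg * 0.5 + zero * 0.0 + pos * -0.5) / total;
}

void main()
{
    foreach (e; [-1.5, -0.5, 0.0, 0.5, 1.5])
        writefln("error %5.2f -> correction %5.2f", e, correction(e));
}

With the binary style, correction() would be exactly right up to some threshold and then completely wrong past it; here the answers just get gradually weaker as the input leaves the range the rules cover.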
Sep 27 2009
BCS <none anon.com> writes:
Hello bearophile,

 Do you know fuzzy logic? One of the purposes of fuzzy logic is to
 design control systems (usable in washing machines, cameras,
 missiles, etc.) that work and fail gracefully. They don't operate in
 the two binary modes of perfect/totally wrong. A graceful failure
 might have prevented the Ariane from crashing and going boom.
 
 Today people are studying software systems based on fuzzy logic,
 neural networks, support vector machines, and more, that are designed
 to keep working despite some small problems and faults.

But this still assumes some degree of reliability of the code doing the fuzzy logic. If I had to guess, I'd expect that the systems you mention are designed to function under external faults (some expected input vanishes, or some other component in a distributed system fails). It would be almost impossible to make a program that can work correctly once it has had an internal fault. Once that has happened, I think Walter is correct and the only thing to do is shut down. In the autopilot case, this could amount to killing off the current autopilot process and booting up a very simple fly-straight-and-level program to take over while the pilot reacts to a nice loud klaxon.
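
A little D sketch of the pattern I have in mind (everything here is invented for illustration; a real system would use separate processes and redundant hardware, not a try/catch inside one program):

import std.stdio : writeln;

// Toy supervisor: the complex controller runs until it reports a fault;
// then it is discarded and a trivial, separately written fallback takes
// over while the pilot is alerted.
interface Controller
{
    double command(double sensor);
}

class ComplexAutopilot : Controller
{
    double command(double sensor)
    {
        if (sensor != sensor) // NaN: bad data from the sensor
            throw new Exception("sensor fault");
        return -0.8 * sensor; // pretend control law
    }
}

class FlyStraightAndLevel : Controller
{
    // Deliberately dumb: ignore the input, hold neutral controls.
    double command(double sensor) { return 0.0; }
}

void main()
{
    Controller active = new ComplexAutopilot;
    foreach (reading; [0.1, -0.2, double.nan, 0.3])
    {
        double cmd;
        try
        {
            cmd = active.command(reading);
        }
        catch (Exception e)
        {
            writeln("KLAXON: autopilot fault (", e.msg, "); engaging fallback");
            active = new FlyStraightAndLevel; // shut down, swap in the backup
            cmd = active.command(reading);
        }
        writeln("command = ", cmd);
    }
}

The important part is that FlyStraightAndLevel shares no code with ComplexAutopilot, so whatever put the first one into an unknown state cannot have done the same to its replacement.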
Sep 27 2009
bearophile <bearophileHUGS lycos.com> writes:
BCS:

 But this still assumes some degree of reliability of the code doing the fuzzy 
 logic.

Fuzzy logic can also be "run" by hardware: fuzzy-logic engine chips. Such chips are usually cheap; you can find them in some consumer products. The fuzzy-logic rules can also be converted into correct programs by automatic software generators. (Of course, mistakes etc. are always possible.)

Bye,
bearophile
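
P.S. A small D example of the "generator" idea: the rules are plain data, and the compiler's compile-time function evaluation stands in for the generator, turning them into a fixed lookup table, roughly the kind of thing a cheap fuzzy-engine chip holds in ROM (the rule curve and table size are made up for illustration):

import std.stdio : writefln;

enum tableSize = 16;

// Made-up rule: a smooth S-shaped response curve on [0, 1].
double rule(double x)
{
    return x < 0.5 ? 2.0 * x * x : 1.0 - 2.0 * (1.0 - x) * (1.0 - x);
}

double[tableSize] makeTable()
{
    double[tableSize] t;
    foreach (i; 0 .. tableSize)
        t[i] = rule(cast(double) i / (tableSize - 1));
    return t;
}

// The initializer runs at compile time: the table is generated
// automatically from the rule definition, before the program ever runs.
immutable double[tableSize] lut = makeTable();

void main()
{
    foreach (i, v; lut)
        writefln("lut[%2d] = %.3f", i, v);
}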
Sep 27 2009