
digitalmars.D - Embedded software DbC

reply bearophile <bearophileHUGS lycos.com> writes:
A nice article found through Reddit, "Design by Contract (DbC) for Embedded
Software" by Miro Samek, 2009:
http://www.netrino.com/Embedded-Systems/How-To/Design-by-Contract-for-Embedded-Software

It says nothing new, but it says those old things in a nice way, and I like how
it compares assertions to fuses and how it contrasts DbC with a different kind
of defensive programming that is more or less its opposite.

A quotation from the article:

In contrast, every successful test run of code peppered with assertions builds
much more confidence in the software. I don't know exactly what the critical
density of assertions must be, but at some point the tests stop producing
undefined behavior, segmentation faults, or system hangs--all bugs manifest
themselves as assertion failures. This effect of DbC is truly amazing. The
integrity checks embodied in assertions prevent the code from "wandering
around" and even broken builds don't crash-and-burn but rather end up hitting
an assertion.<
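To show what those "fuses" look like in D, here is a minimal sketch (a made-up function, using D2 contract syntax):

import std.math: sqrt;

double safeSqrt(double x)
in {
    // Precondition: the caller must not pass a negative number.
    assert(x >= 0, "safeSqrt: negative input");
} out (result) {
    // Postcondition: the result must be a non-negative number.
    assert(result >= 0);
} body {
    return sqrt(x);
}

void main() {
    assert(safeSqrt(9.0) == 3.0);
    // safeSqrt(-1.0) would blow the precondition "fuse" right away,
    // instead of letting the error wander around the program.
}

Each assert acts like a fuse: the program stops right where the contract is violated, instead of producing undefined behavior later.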
I have seen this effect with the DMD compiler itself: around half of its bugs are found by its assertions. Another quotation:
You can no longer design a system without accounting for testing overhead right
from the start. Assuming that all the CPU cycles, the RAM, and all the ROM will
be devoted strictly to the job at hand simply won't get the job done.<
A third quotation; this seems different from D's strategy (here the author is talking about normal PCs; see the sketch after the quotation):
As an example, consider dynamic memory allocation. In any type of system,
memory allocation with malloc() (or the C++ new operator) can fail. In a
general-purpose computer, a failed malloc() merely indicates that, at this
instant the operating system cannot supply the requested memory. This can
happen easily in a highly dynamic, general-purpose computing environment. When
it happens, you have options to recover from the situation. One option might be
for the application to free up some memory that it allocated and then retry the
allocation. Another choice could be to prompt the user that the problem exists
and encourage them to exit other applications so that the current application
can gather more memory. Yet another option is to save data to the disk and
exit. Whatever the choice, handling this situation requires some drastic
actions, which are clearly off the mainstream behavior of your application.
Nevertheless, you should design and implement such actions because in a desktop
environment, a failed malloc() must be considered an exceptional condition.<
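For comparison, D's strategy is roughly the opposite: a failed allocation with new doesn't return null, it throws an OutOfMemoryError, and ordinary D code isn't really expected to recover from it. A small sketch (the error type lives in core.exception in druntime):

import core.exception: OutOfMemoryError;
import std.stdio;

void main() {
    try {
        // Try to allocate an absurdly large array to force a failure.
        auto huge = new ubyte[](size_t.max / 2);
        writeln("allocated ", huge.length, " bytes");
    } catch (OutOfMemoryError e) {
        // Catching an Error like this is possible, but discouraged:
        // the runtime treats exhausted memory as a fatal condition.
        writeln("allocation failed");
    }
}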
Eventually (thanks to Sean too) D will have readable stack traces on all OSes, but to understand better what the program was doing when an assertion fires you have to run the code in a debugger or add a lot of print statements (inside the contracts!). As an alternative, if the program keeps runtime information about the types present in all the stack frames (this can be done with LLVM, which has support for precise stack scanning for certain kinds of GCs), then when the stack trace gets written it can also optionally print the state of all the variables in all the stack frames. This logged text can later help debug the program even if it was running on a production machine (with stack tracing and precise stack types activated).

This is a paper that shows why DbC may not be enough in some situations, by Ken Garlington, 1998:
http://home.flash.net/~kennieg/ariane.html

In a situation like the one of the Ariane I think the good solution is to introduce a fuzzy control system whose effectiveness degrades as conditions go out of its specs, but which avoids a total failure. This is what biological designs do too. It's a kind of 'defensive programming'.

Bye,
bearophile
Aug 01 2010
parent reply Kagamin <spam here.lot> writes:
bearophile Wrote:

 This is a paper that shows why DbC may not be enough in some situations, by Ken Garlington, 1998:
 http://home.flash.net/~kennieg/ariane.html
 
 In a situation like the one of the Ariane I think the good solution is to introduce a fuzzy control system whose effectiveness degrades as conditions go out of its specs, but which avoids a total failure. This is what biological designs do too. It's a kind of 'defensive programming'.
 
From what I heard, the software on the Ariane was physically unable to handle the rocket, so no matter what assertions you put into it, it would crash.
Aug 01 2010
parent reply bearophile <bearophileHUGS lycos.com> writes:
Kagamin:

 In a situation like the one of the Ariane I think the good solution is to introduce a fuzzy control system whose effectiveness degrades as conditions go out of its specs, but which avoids a total failure. This is what biological designs do too. It's a kind of 'defensive programming'.
 
From what I heard, the software on the Ariane was physically unable to handle the rocket, so no matter what assertions you put into it, it would crash.
In that last paragraph I was talking about something that doesn't use assertions, something like:
http://en.wikipedia.org/wiki/Fuzzy_Control_System

If well designed, such systems show a graceful degradation of functionality even when you step out of their specs. Systems like this are used today in critical applications like the brake control systems of subway trains, where a sharp shutdown like the one on the Ariane can cause hundreds of deaths. When well designed, such fuzzy systems work very well. All this is kind of the opposite of the design strategy behind DbC :-)

My theory is that DbC is good for designing and testing critical systems, because it allows you to spot and fix design bugs efficiently. But when the critical system is running, it's better to put beside it another system that degrades gracefully and doesn't just stop working abruptly when some parameter goes outside its designed specs (see the P.S. below for a toy sketch of the contrast). This is how most control systems in vertebrate brains are designed: they generally never just shut down.

Bye,
bearophile
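P.S. A toy sketch of the contrast I mean (this is just clamping/saturation, nothing like a real fuzzy controller, and the numbers are made up):

import std.algorithm: min, max;
import std.stdio;

// DbC style: an out-of-spec sensor reading is a bug, so the "fuse" blows.
double commandDbC(double sensor)
in {
    assert(sensor >= -10.0 && sensor <= 10.0, "sensor reading out of spec");
} body {
    return sensor * 0.5;  // simplistic control law
}

// Degrading style: clamp the reading into the designed range and keep
// producing some output, instead of halting the whole system.
double commandDegrading(double sensor) {
    auto clamped = max(-10.0, min(10.0, sensor));
    return clamped * 0.5;
}

void main() {
    writeln(commandDegrading(12.3));  // still gives a (degraded) command: 5
    // commandDbC(12.3) would stop the program with an assertion failure
    // (in a non-release build).
}

The first style is great while you design and test; the second is what I'd want running beside a critical system in production.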
Aug 02 2010
parent reply Kagamin <spam here.lot> writes:
bearophile Wrote:

 If well designed, such systems show a graceful degradation of functionality even when you step out of their specs. Systems like this are used today in critical applications like the brake control systems of subway trains, where a sharp shutdown like the one on the Ariane can cause hundreds of deaths. When well designed, such fuzzy systems work very well. All this is kind of the opposite of the design strategy behind DbC :-)
 
I doubt that degradation is acceptable in a rocket launch. Certainly there's some window for errors, but a rocket during launch already operates at its limit, and you don't have the free resources needed to compensate for degradation. And the window is small: a couple of degrees off and you crash, and if the software is inadequate you easily go out of that window. There can be other factors too: with a train you have to control only acceleration, while a rocket has many more parameters to control - more bugs in the logic.
Aug 02 2010
parent bearophile <bearophileHUGS lycos.com> writes:
Kagamin:
 I doubt that degradation is acceptable in a rocket launch.
Then it's the hardware that needs a more flexible design :-) Like something closer to an unmanned shuttle that starts its flight from a 30-degree "ski-jump" ramp, etc. Just as the Space Shuttle has shown that the reentry of a spaceship doesn't need to be designed like a stone with parachutes, better designs can be found for the takeoff too, making it closer to an aeroplane, with more room for errors and corrections and a smaller amount of fuel burned.
 There can be other factors too: with a train you have to control only acceleration, while a rocket has many more parameters to control - more bugs in the logic.
Bugs are always possible, and there is no way to be sure of avoiding them all. But fuzzy logic is used in control systems more complex than a simple controller for missiles; it is used inside a large number of the electronic gadgets coming from Japan, whose control systems can be quite complex, and they generally work :-)

Bye,
bearophile
Aug 02 2010