
digitalmars.D - Making alloca more safe

reply "Denis Koroskin" <2korden gmail.com> writes:
The C standard library alloca function has undefined behavior when the
requested size is large enough to cause a stack overflow, but many (good)
implementations return null instead. So does DMD's, for example. I believe
it would be even better to go ahead and require the D implementation to
return a GC-allocated chunk of memory instead of null in that case. It
would not incur any performance hit in 99.9% of cases and would prevent a
bug in the rest. It would also make code that uses it easier (and safer)
to write, since you don't have to worry about a possible stack
overflow or null dereference.
Nov 16 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 The C standard library alloca function has undefined behavior when
 the requested size is large enough to cause a stack overflow, but many
 (good) implementations return null instead. So does DMD's, for example.
 I believe it would be even better to go ahead and require the D
 implementation to return a GC-allocated chunk of memory instead of null
 in that case. It would not incur any performance hit in 99.9% of cases
 and would prevent a bug in the rest. It would also make code that uses
 it easier (and safer) to write, since you don't have to worry about a
 possible stack overflow or null dereference.
I'm a little reluctant to do this because alloca is supposed to be a low-level routine, not one that has a dependency on the rather large and complex GC. A person using alloca is expecting stack allocation, and that it goes away after the function exits. Switching arbitrarily to the GC will not be detected and may hide a programming error (asking for a gigantic piece of memory is not anticipated for alloca, and could be caused by an overflow or logic error in calculating its size).

And secondly, I wish to emphasize that a null pointer seg fault is not an unsafe thing. It does not lead to memory corruption. It simply stops the program.
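For instance, the byte count can wrap before alloca ever sees it. A hypothetical sketch of that failure mode (function and sizes made up):

import core.stdc.stdlib : alloca;

void process(size_t n)
{
    // If n comes from untrusted input, n * double.sizeof can overflow:
    // on 32-bit, n = 0x2000_0001 gives 0x2000_0001 * 8 == 8 (mod 2^32),
    // so alloca "succeeds" with a tiny buffer instead of failing, and a
    // GC fallback would silently mask the same logic error.
    double* buf = cast(double*) alloca(n * double.sizeof);
    // writing buf[0 .. n] would now smash the stack
}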
Nov 16 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 A person using alloca is expecting stack allocation, and 
 that it goes away after the function exits. Switching arbitrarily to the 
 gc will not be detected and may hide a programming error (asking for a 
 gigantic piece of memory is not anticipated for alloca, and could be 
 caused by an overflow or logic error in calculating its size).
There's another solution, one that I'd like to see used more often in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca), that does what Denis Koroskin asks for (it's a very simple function).

Bye,
bearophile
Nov 16 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Walter Bright:
 
 A person using alloca is expecting stack allocation, and 
 that it goes away after the function exits. Switching arbitrarily to the 
 gc will not be detected and may hide a programming error (asking for a 
 gigantic piece of memory is not anticipated for alloca, and could be 
 caused by an overflow or logic error in calculating its size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
Nov 16 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Andrei Alexandrescu (SeeWebsiteForEmail erdani.org)'s article
 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and
 that it goes away after the function exits. Switching arbitrarily to the
 gc will not be detected and may hide a programming error (asking for a
 gigantic piece of memory is not anticipated for alloca, and could be
 caused by an overflow or logic error in calculating its size).
 There's another solution, one that I'd like to see used more often in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca), that does what Denis Koroskin asks for (it's a very simple function).
 Can't be written. Try it.
 Andrei
As a side note, my TempAlloc allocator was intended all along to be a safer and more flexible allocation scheme that is almost as efficient as call stack allocation, and it does fall back on heap allocation, or on creating a new non-contiguous chunk, when it runs out of space. Also, I think I'll be able to fix the GC scanning issue by fiddling with pointer offset info if/when my precise heap scanning patch gets into druntime. If/when TempAlloc can be made both safe and efficient w.r.t. GC scanning, I'd nominate it for inclusion in Phobos.
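Roughly, the scheme is a region with a bump pointer and a heap fallback. A stripped-down sketch, not the real TempAlloc code, just the shape of the fast path:

// Minimal region allocator: bump a pointer within a fixed chunk, and
// fall back on the GC heap when the chunk runs out of space.
struct Region
{
    private void[] chunk;
    private size_t used;

    this(size_t capacity) { chunk = new void[capacity]; }

    void* allocate(size_t n)
    {
        n = (n + 15) & ~cast(size_t) 15;   // keep 16-byte alignment
        if (used + n <= chunk.length)
        {
            void* p = chunk.ptr + used;    // fast path: pointer bump
            used += n;
            return p;
        }
        return (new void[n]).ptr;          // fallback: plain GC heap
    }
}

TempAlloc, as described above, instead chains a new non-contiguous chunk when it runs out of space; the sketch simply punts each oversized request to the GC.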
Nov 16 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it goes  
 away after the function exits. Switching arbitrarily to the gc will  
 not be detected and may hide a programming error (asking for a  
 gigantic piece of memory is not anticipated for alloca, and could be  
 caused by an overflow or logic error in calculating its size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
Nov 16 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Denis Koroskin wrote:
 On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it 
 goes away after the function exits. Switching arbitrarily to the gc 
 will not be detected and may hide a programming error (asking for a 
 gigantic piece of memory is not anticipated for alloca, and could be 
 caused by an overflow or logic error in calculating its size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
The problem of salloca is that alloca's memory gets released when salloca returns. Andrei
Nov 16 2009
next sibling parent reply Yigal Chripun <yigal100 gmail.com> writes:
Andrei Alexandrescu wrote:
 Denis Koroskin wrote:
 On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it 
 goes away after the function exits. Switching arbitrarily to the gc 
 will not be detected and may hide a programming error (asking for a 
 gigantic piece of memory is not anticipated for alloca, and could 
 be caused by an overflow or logic error in calculating its size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
The problem of salloca is that alloca's memory gets released when salloca returns. Andrei
template salloca(alias ptr, alias size) // horrible name, btw
{
    ptr = alloca(size);
    if (ptr is null)
        ptr = (new void[size]).ptr;
}

// use:
void foo()
{
    int size = 50;
    void* ptr;
    mixin salloca!(ptr, size);
    // ...
}

wouldn't that work?
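A template body may only contain declarations, not statements like the assignment above, so the same idea probably needs a string mixin that pastes the code straight into the caller. A rough, untested sketch, assuming alloca comes from core.stdc.stdlib:

import core.stdc.stdlib : alloca;

// String-mixin variant: the alloca call is textually inserted into the
// caller's body, so the stack memory lives as long as the caller's frame.
template salloca(string ptr, string size)
{
    enum string salloca =
        ptr ~ " = alloca(" ~ size ~ ");"
        ~ " if (" ~ ptr ~ " is null) "
        ~ ptr ~ " = (new void[" ~ size ~ "]).ptr;";
}

void foo()
{
    size_t size = 50;
    void* ptr;
    mixin(salloca!("ptr", "size"));
    // ptr[0 .. size] is usable here (stack memory, or GC on fallback)
}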
Nov 16 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Yigal Chripun wrote:
 Andrei Alexandrescu wrote:
 Denis Koroskin wrote:
 On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it 
 goes away after the function exits. Switching arbitrarily to the 
 gc will not be detected and may hide a programming error (asking 
 for a gigantic piece of memory is not anticipated for alloca, and 
 could be caused by an overflow or logic error in calculating its 
 size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
The problem of salloca is that alloca's memory gets released when salloca returns. Andrei
template salloca(alias ptr, alias size) // horrible name, btw
{
    ptr = alloca(size);
    if (ptr is null)
        ptr = (new void[size]).ptr;
}

// use:
void foo()
{
    int size = 50;
    void* ptr;
    mixin salloca!(ptr, size);
    // ...
}

wouldn't that work?
mixin? Interesting. Probably it works. Andrei
Nov 16 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 16 Nov 2009 20:39:57 +0300, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Denis Koroskin wrote:
 On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it  
 goes away after the function exits. Switching arbitrarily to the gc  
 will not be detected and may hide a programming error (asking for a  
 gigantic piece of memory is not anticipated for alloca, and could be  
 caused by an overflow or logic error in calculating its size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
The problem of salloca is that alloca's memory gets released when salloca returns. Andrei
You missed the point of my post. I know it can't be implemented, and I said just that. I also mentioned two possible solutions to this issue.
Nov 16 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Denis Koroskin wrote:
 On Mon, 16 Nov 2009 20:39:57 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Denis Koroskin wrote:
 On Mon, 16 Nov 2009 19:27:41 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 bearophile wrote:
 Walter Bright:

 A person using alloca is expecting stack allocation, and that it 
 goes away after the function exits. Switching arbitrarily to the 
 gc will not be detected and may hide a programming error (asking 
 for a gigantic piece of memory is not anticipated for alloca, and 
 could be caused by an overflow or logic error in calculating its 
 size).
There's another solution, that I'd like to see more often used in Phobos: you can add another function to Phobos, let's call it salloca (safe alloca) that does what Denis Koroskin asks for (it's a very simple function).
Can't be written. Try it. Andrei
It's tricky. It can't be written *without compiler support*, because alloca is treated specially by the compiler (the call to it is always inlined). It could be written otherwise. I was thinking about proposing either an inline keyword in the language (one that would enforce function inlining, rather than merely suggesting it to the compiler), or always inlining any function that makes use of alloca. Without either of them, it is impossible to create wrappers around alloca (for example, one that creates arrays on the stack type-safely and without casts):

T[] array_alloca(T)(size_t size) { ... }

or one that would return GC-allocated memory when stack allocation fails:

void* salloca(size_t size)
{
    void* ptr = alloca(size);
    if (ptr is null)
        return (new void[size]).ptr;
    return ptr;
}
The problem of salloca is that alloca's memory gets released when salloca returns. Andrei
 You missed the point of my post. I know it can't be implemented, and I said just that. I also mentioned two possible solutions to this issue.
I see now. Apologies. Andrei
Nov 16 2009
prev sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Denis Koroskin wrote:
 The C standard library alloca function has undefined behavior when
 the requested size is large enough to cause a stack overflow, but many
 (good) implementations return null instead. So does DMD's, for example.
 I believe it would be even better to go ahead and require the D
 implementation to return a GC-allocated chunk of memory instead of null
 in that case. It would not incur any performance hit in 99.9% of cases
 and would prevent a bug in the rest. It would also make code that uses
 it easier (and safer) to write, since you don't have to worry about a
 possible stack overflow or null dereference.
I'm a little reluctant to do this because alloca is supposed to be a low-level routine, not one that has a dependency on the rather large and complex GC. A person using alloca is expecting stack allocation, and that it goes away after the function exits. Switching arbitrarily to the GC will not be detected and may hide a programming error (asking for a gigantic piece of memory is not anticipated for alloca, and could be caused by an overflow or logic error in calculating its size).

And secondly, I wish to emphasize that a null pointer seg fault is not an unsafe thing. It does not lead to memory corruption. It simply stops the program.
Yes, but it stops the program in such a way that it's very hard to figure out why/where it died. The solution, which I've wanted for a while and I think others have proposed, is for DMD to implicitly assert that every pointer is non-null before dereferencing it when in debug mode.
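In the meantime, a library helper can approximate the check. A small sketch (the deref helper is hypothetical, not an existing Phobos function; it relies on __FILE__/__LINE__ default arguments being filled in at the call site):

import std.conv : text;

// Stand-in for the proposed compiler behavior: assert non-null before
// each dereference, reporting the caller's file and line on failure.
ref T deref(T)(T* p, string file = __FILE__, size_t line = __LINE__)
{
    assert(p !is null,
           text("Null pointer dereference at ", file, ":", line));
    return *p;
}

void demo()
{
    int x = 42;
    int* p = &x;
    deref(p) = 1;   // fine: p is non-null, assigns through the ref
    p = null;
    // deref(p) would now fail with the message and the caller's line
}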
Nov 16 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 16 Nov 2009 17:01:32 +0300, dsimcha <dsimcha yahoo.com> wrote:

 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Denis Koroskin wrote:
 The C standard library alloca function has undefined behavior when
 the requested size is large enough to cause a stack overflow, but many
 (good) implementations return null instead. So does DMD's, for example.
 I believe it would be even better to go ahead and require the D
 implementation to return a GC-allocated chunk of memory instead of null
 in that case. It would not incur any performance hit in 99.9% of cases
 and would prevent a bug in the rest. It would also make code that uses
 it easier (and safer) to write, since you don't have to worry about a
 possible stack overflow or null dereference.
I'm a little reluctant to do this because alloca is supposed to be a low-level routine, not one that has a dependency on the rather large and complex GC. A person using alloca is expecting stack allocation, and that it goes away after the function exits. Switching arbitrarily to the GC will not be detected and may hide a programming error (asking for a gigantic piece of memory is not anticipated for alloca, and could be caused by an overflow or logic error in calculating its size).

And secondly, I wish to emphasize that a null pointer seg fault is not an unsafe thing. It does not lead to memory corruption. It simply stops the program.
Yes, but it stops the program in such a way that it's very hard to figure out why/where it died. The solution, which I've wanted for a while and I think others have proposed, is for DMD to implicitly assert that every pointer is non-null before dereferencing it when in debug mode.
... or use the type system to enforce all/most of the pointers to be non-null.
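For illustration, a minimal sketch of what such a library type could look like (the name NonNull is hypothetical, and a full design would also have to forbid default construction, which this sketch doesn't):

// A pointer wrapper that can only be built from a non-null pointer,
// so the null test happens once, at construction, instead of at
// every dereference.
struct NonNull(T)
{
    private T* p;

    this(T* q)
    {
        assert(q !is null, "NonNull constructed from null");
        p = q;
    }

    // Dereference with no per-use null check.
    ref T get() { return *p; }
}

void consume(NonNull!int n)
{
    n.get() = 1;   // no null test needed here
}

void demo()
{
    int x;
    consume(NonNull!int(&x));
}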
Nov 16 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Denis Koroskin (2korden gmail.com)'s article
 On Mon, 16 Nov 2009 17:01:32 +0300, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Denis Koroskin wrote:
 The C standard library alloca function has undefined behavior when
 the requested size is large enough to cause a stack overflow, but many
 (good) implementations return null instead. So does DMD's, for example.
 I believe it would be even better to go ahead and require the D
 implementation to return a GC-allocated chunk of memory instead of null
 in that case. It would not incur any performance hit in 99.9% of cases
 and would prevent a bug in the rest. It would also make code that uses
 it easier (and safer) to write, since you don't have to worry about a
 possible stack overflow or null dereference.
I'm a little reluctant to do this because alloca is supposed to be a low-level routine, not one that has a dependency on the rather large and complex GC. A person using alloca is expecting stack allocation, and that it goes away after the function exits. Switching arbitrarily to the GC will not be detected and may hide a programming error (asking for a gigantic piece of memory is not anticipated for alloca, and could be caused by an overflow or logic error in calculating its size).

And secondly, I wish to emphasize that a null pointer seg fault is not an unsafe thing. It does not lead to memory corruption. It simply stops the program.
Yes, but it stops the program in such a way that it's very hard to figure out why/where it died. The solution, which I've wanted for a while and I think others have proposed, is for DMD to implicitly assert that every pointer is non-null before dereferencing it when in debug mode.
... or use the type system to enforce all/most of the pointers to be non-null.
Yes, but this would greatly increase the complexity of the type system, which is already getting too complex. A simple implicit assert would solve the problem 90% as well for 10% of the complexity.
Nov 16 2009
prev sibling next sibling parent Frank Benoit <keinfarbton googlemail.com> writes:
dsimcha schrieb:
 Yes, but it stops the program in such a way that it's very hard to figure
 out why/where it died. The solution, which I've wanted for a while and I
 think others have proposed, is for DMD to implicitly assert that every
 pointer is non-null before dereferencing it when in debug mode.
This would be great. The compiler could also optimize those checks away in cases of repeated access without a reassignment of the reference in between:

void foo(Object o)
{
    o.toString(); // o !is null checked
    o.toHash();   // o !is null not checked, because already done
}
Nov 16 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 Yes, but it stops the program in such a way that it's very hard to figure out
 why/where it died.
I don't want to get into another loooong thread about whether pointers should be nullable or not; I just wished to point out that it was not a *safety* issue.
Nov 16 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I don't want to get into another loooong thread about should pointers be 
 nullable or not,
It was a good thread with good ideas.
I just wished to point out that it was not a *safety* issue.<
A safe system is not a program that switches itself off as soon as there's a small problem. One Ariane rocket self-destructed (and destroyed an extremely important scientific satellite it was carrying, whose mission I still miss) because of this silly behaviour combined with the inflexibility of the Ada language.

A reliable system is a system that keeps working correctly despite everything. If this is not possible, in real life you usually want "good enough" behaviour. For example, take your CT medical scanner: in Africa, if the machine switches itself off at the smallest problem, they force the machine to start again, because they don't have money for a 100% perfect fix. So for them a machine that shows slow and graceful degradation is better. That's a reliable system, something that looks more like your liver, which doesn't totally switch off as soon as it has a small problem (killing you quickly).

A program that stops working at a random moment because of a null is not safe. (And even if you accept this, in safer languages like Java and C# you get null pointer exceptions that show a stack trace, and the type system is smart enough to remove most of those tests to improve performance.) A safer program is a program that avoids null pointer exceptions because the type system has formally verified that the program has no nulls.

Bye,
bearophile
Nov 16 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from bearophile (bearophileHUGS lycos.com)'s article
 Walter Bright:
 I don't want to get into another loooong thread about should pointers be
 nullable or not,
It was a good thread with good ideas.
I just wished to point out that it was not a *safety* issue.<
 A safe system is not a program that switches itself off as soon as there's a small problem.

 One Ariane rocket self-destructed (and destroyed an extremely important scientific satellite it was carrying, whose mission I still miss) because of this silly behaviour combined with the inflexibility of the Ada language.

 A reliable system is a system that keeps working correctly despite everything. If this is not possible, in real life you usually want "good enough" behaviour. For example, take your CT medical scanner: in Africa, if the machine switches itself off at the smallest problem, they force the machine to start again, because they don't have money for a 100% perfect fix. So for them a machine that shows slow and graceful degradation is better. That's a reliable system, something that looks more like your liver, which doesn't totally switch off as soon as it has a small problem (killing you quickly).

 A program that stops working at a random moment because of a null is not safe. (And even if you accept this, in safer languages like Java and C# you get null pointer exceptions that show a stack trace, and the type system is smart enough to remove most of those tests to improve performance.) A safer program is a program that avoids null pointer exceptions because the type system has formally verified that the program has no nulls.
 Bye,
 bearophile
In a way you're right. However, there is no universal answer for what to do about a null pointer except die **with a good error message explaining what went wrong**. This is the part that's missing. Right now you get an access violation. I'd like an assert failure with a line number and a "Null pointer dereference" error message when I'm not in release mode.
Nov 16 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
dsimcha wrote:
 In a way you're right.  However, there is no universal answer for what to
 do about a null pointer except die **with a good error message explaining
 what went wrong**.  This is the part that's missing.  Right now you get
 an access violation.  I'd like an assert failure with a line number and a
 "Null pointer dereference" error message when I'm not in release mode.
You do get just that if you run the program under a debugger. There was a patch for Phobos a while back which would use the debug data to print a stack trace on such an exception without needing a debugger. It has languished because nobody has spent the time to verify that it is correctly done and that it won't negatively impact anything else. I can forward it to you if you like and want to take a look at it.
Nov 16 2009
parent grauzone <none example.net> writes:
Walter Bright wrote:
 dsimcha wrote:
 In a way you're right.  However, there is no universal answer for what to
 do about a null pointer except die **with a good error message explaining
 what went wrong**.  This is the part that's missing.  Right now you get
 an access violation.  I'd like an assert failure with a line number and a
 "Null pointer dereference" error message when I'm not in release mode.
You do get just that if you run the program under a debugger. There was a patch for Phobos a while back which would use the debug data to print a stack trace on such an exception without needing a debugger. It has languished because nobody has spent the time to verify that it is correctly done and that it won't negatively impact anything else. I can forward it to you if you like and want to take a look at it.
It's in Tango and it works. Both on Linux and Windows. (Needs the Tango svn version.)
Nov 16 2009
prev sibling next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Walter Bright:
 
 I don't want to get into another loooong thread about should
 pointers be nullable or not,
It was a good thread with good ideas.
 I just wished to point out that it was not a *safety* issue.<
A safe system is not a program that switches itself off as soon as there's a small problem. One Ariane rocket self-destructed (and destroyed an extremely important scientific satellite it was carrying, whose mission I still miss) because of this silly behaviour combined with the inflexibility of the Ada language.

A reliable system is a system that keeps working correctly despite everything. If this is not possible, in real life you usually want "good enough" behaviour. For example, take your CT medical scanner: in Africa, if the machine switches itself off at the smallest problem, they force the machine to start again, because they don't have money for a 100% perfect fix. So for them a machine that shows slow and graceful degradation is better. That's a reliable system, something that looks more like your liver, which doesn't totally switch off as soon as it has a small problem (killing you quickly).

A program that stops working at a random moment because of a null is not safe. (And even if you accept this, in safer languages like Java and C# you get null pointer exceptions that show a stack trace, and the type system is smart enough to remove most of those tests to improve performance.) A safer program is a program that avoids null pointer exceptions because the type system has formally verified that the program has no nulls.

Bye,
bearophile
I think it all has to do with definitions. If you define safety as a yes/no property, then there can be no discussion about "safer".

The classic safety definition involves progress and preservation. If those are satisfied for all programs, the language is safe. All languages in actual use cannot satisfy progress for all programs, which makes the definition too restrictive. So people use more relaxed definitions, e.g. allow for stuck states (e.g. use of a null pointer is a stuck state) or work in terms of trapped vs. untrapped errors.

D's definition of safety is "no undefined behavior", which is as far as I can understand a tight superset of "no untrapped errors". If we go by that definition we can't talk about safer or less safe. You either have UB or you don't.

That being said, I like non-null references. The problem is that as soon as someone tries to enumerate safety among the desirable behaviors of non-null references, Walter's regard for the argument completely shuts down, taking down with it all good arguments too.

Andrei
Nov 16 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 I just wished to point out that it was not a *safety* issue.<
A safe system is not a program that switches itself off as soon as there's a small problem.
Computers cannot know whether a problem is "small" or not.
 One Ariane rocket self-destructed (and destroyed an extremely
 important scientific satellite it was carrying, whose mission I still
 miss) because of this silly behaviour combined with the inflexibility
 of the Ada language.
 
 A reliable system is a system that keeps working correctly despite
 everything. If this is not possible, in real life you usually want
 "good enough" behaviour. For example, take your CT medical scanner:
 in Africa, if the machine switches itself off at the smallest problem,
 they force the machine to start again, because they don't have money
 for a 100% perfect fix. So for them a machine that shows slow and
 graceful degradation is better. That's a reliable system, something
 that looks more like your liver, which doesn't totally switch off as
 soon as it has a small problem (killing you quickly).
This is how you make reliable systems:

http://dobbscodetalk.com/index.php?option=com_myblog&show=Safe-Systems-from-Unreliable-Parts.html&Itemid=29
http://dobbscodetalk.com/index.php?option=com_myblog&show=Designing-Safe-Software-Systems-Part-2.html&Itemid=29

Pretending a program hasn't failed when it has, and just "soldiering on", is completely unacceptable behavior in a system that must be reliable.

The Ariane 5 had a backup system which was engaged, but the backup system had the same software in it, so it failed in the same way. That is not how you make reliable systems.
 A program that stops working at a random moment because of a null is
 not safe. (And even if you accept this, in safer languages like Java and
 C# you get null pointer exceptions that show a stack trace, and the
 type system is smart enough to remove most of those tests to improve
 performance.) A safer program is a program that avoids null pointer
 exceptions because the type system has formally verified that the
 program has no nulls.
You're using two different definitions of the word "safe". Program safety is about not corrupting memory. System safety (i.e. reliability) is a completely different thing.

If you've got a system that relies on the software continuing to function after an unexpected null seg fault, you have a VERY BADLY DESIGNED and COMPLETELY UNSAFE system. I really cannot emphasize this enough.

P.S. I worked for Boeing for years on flight critical systems. Normally I eschew credentialism, but I feel very strongly about this issue and wish to point out that my knowledge on this is based on decades of real world experience by aviation companies who take this issue extremely seriously.
Nov 16 2009
next sibling parent Derek Parnell <derek psych.ward> writes:
On Mon, 16 Nov 2009 12:48:51 -0800, Walter Bright wrote:

 bearophile wrote:
 Walter Bright:
 I just wished to point out that it was not a *safety* issue.<
A safe system is not a program that switches itself off as soon as there's a small problem.
Computers cannot know whether a problem is "small" or not.
But designers who make the system can.
 Pretending a program hasn't failed when it has, and just "soldiering 
 on", is completely unacceptable behavior in a system that must be reliable.
...
 If you've got a system that relies on the software continuing to 
 function after an unexpected null seg fault, you have a VERY BADLY 
 DESIGNED and COMPLETELY UNSAFE system. I really cannot emphasize this 
 enough.
What is the 'scope' of "system"? Is it that if any component in a system fails, then all other components are also in an unknown, and therefore potentially unsafe, state too?

For example, can one describe the scenario below as a single system or multiple systems: "A software failure causes the cabin lights to be permanently turned on, so should the 'system' also assume that the toilets must no longer be flushed?"

Is the "system" the entire aircraft, i.e. all its components, or is there a set of systems involved here? In the "set of systems" concept, is it possible that a failure of one system can have no impact on another system in the set, or must it be assumed that every system is reliant on all other systems in the same set?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Nov 16 2009
prev sibling next sibling parent reply Tomas Lindquist Olsen <tomas.l.olsen gmail.com> writes:
On Mon, Nov 16, 2009 at 9:48 PM, Walter Bright
<newshound1 digitalmars.com> wrote:
 bearophile wrote:
 Walter Bright:
 I just wished to point out that it was not a *safety* issue.<
A safe system is not a program that switches itself off as soon as there's a small problem.
Computers cannot know whether a problem is "small" or not.
 One Ariane rocket self-destructed (and destroyed an extremely
 important scientific satellite it was carrying, whose mission I still
 miss) because of this silly behaviour combined with the inflexibility
 of the Ada language.
 
 A reliable system is a system that keeps working correctly despite
 everything. If this is not possible, in real life you usually want
 "good enough" behaviour. For example, take your CT medical scanner:
 in Africa, if the machine switches itself off at the smallest problem,
 they force the machine to start again, because they don't have money
 for a 100% perfect fix. So for them a machine that shows slow and
 graceful degradation is better. That's a reliable system, something
 that looks more like your liver, which doesn't totally switch off as
 soon as it has a small problem (killing you quickly).
This is how you make reliable systems:
You sure got all the answers...
Nov 16 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Tomas Lindquist Olsen wrote:
 You sure got all the answers...
I had it beaten into my head by people who had 50 years of experience designing reliable airliners - what worked and what didn't work. The consensus on what constitutes best practices for software reliability is steadily improving, but I still think the airliner companies are more advanced in that regard. Even your car has a dual path design (for the brakes)!
Nov 16 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
I am sorry for having mixed global reliability of a system with the discussion
about non-nullable class references. It's my fault. Those are two very
different topics, as Walter says. Here I give a few comments, but please try to
keep the two things separated. If that's not possible, feel free to ignore this
post...

Adam D. Ruppe:

Would you have preferred it to just randomly do its own thing and potentially
end up landing on people?<
Your mind is now working in terms of 0/1, but that's not how most things in the universe work. A guidance system designed with different design principles might have guided it safely, with a small error in the trajectory that could have been fixed later in orbit.
Even expensive, important pieces of equipment can always be replaced.<
The scientific equipment it was carrying is lost; no one has replaced it so far. It was very complex.
What would you have it do? Carry on in the error state, doing Lord knows what?
That's clearly unsafe.<
My idea was to have a type system that avoids such errors :-)
Hospitals know their medical machines might screw up, so they keep a nurse on
duty at all times who can handle the situation - restart the failed machine, or
bring in a replacement before it kills someone.<
This is not how things are.
I wouldn't say safer, though I will concede that it is easier to debug.<
A program that doesn't break in the middle of its run is safer if you have to use it for something more important than a video game :-)

-------------------

Walter Bright:
Computers cannot know whether a problem is "small" or not.<
The system designer can explain to the computer what "small" means in the specific situation.
This is how you make reliable systems:<
I'm a biologist, and I like biology-inspired designs. There is not just one way to design reliable systems that must work in the real world; biology shows several other ways. Today people are starting to copy nature in this regard too, for example by designing swarms of very tiny robots that are able to perform a task even if some of the tiny robots get damaged or stuck, etc.
Pretending a program hasn't failed when it has, and just "soldiering on", is
completely unacceptable behavior in a system that must be reliable.<
Well, it's often a matter of degree. On Windows I have amateur-level image editing programs that sometimes have a bug, and one of their windows "dies" or gets stuck. I can usually keep working a little with that program and then save the work, and then restart the program.
The Ariane 5 had a backup system which was engaged, but the backup system had
the same software in it, so failed in the same way. That is not how you make
reliable systems.<
I have read enough about that case. I agree that it was badly designed. But in our universe there is more than one true way to design a reliable system.
You're using two different definitions of the word "safe". Program safety is
about not corrupting memory. System safety (i.e. reliability) is a completely
different thing.<
I'd like my programs to be safer in the system safety way.
If you've got a system that relies on the software continuing to function after
an unexpected null seg fault, you have a VERY BADLY DESIGNED and COMPLETELY
UNSAFE system. I really cannot emphasize this enough.<
My idea was to introduce ways to avoid nulls in the first place.
by aviation companies who take this issue extremely seriously.<
There are wonderful birds (albatrosses) that keep flying across thousands of kilometers (and singing and loving each other and laying large eggs) after 50+ years:
http://news.nationalgeographic.com/news/2003/04/0417_030417_oldestbird.html
They are biological systems far more complex than a modern aeroplane, and they are made of subsystems (like the cells in their brain) that are not very reliable. They use a different design strategy to be so reliable.

Sorry for mixing two such unrelated topics, my second stupid mistake of today.

Bye,
bearophile
Nov 16 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 They use a different design strategy to
 be so reliable.
My understanding (I am no biologist) is that biology achieves reliability by using redundancy, not by requiring individual components to be perfect. The redundancy goes down to the DNA level, even. Another way is that it uses quantity rather than quality: many organisms produce millions of offspring in the hope that one or two survive.

Software, on the other hand, is notorious for one bit being wrong out of a billion rendering it completely useless. A strategy of independent redundancy is appropriate here.

For example, how would you write a program that would be expected to survive having a random bit in it flipped at random intervals?
Nov 16 2009
parent bearophile writes:
Walter Bright:

is that biology achieves reliability by using redundancy, not by requiring
individual components to be perfect. The redundancy goes down to the DNA level,
even. Another way is it uses quantity, rather than quality. Many organisms
produce millions of offspring in the hope that one or two survive.<
Quantity is a form of redundancy. In biology reliability is achieved in many ways. Your genetic code is degenerate, so many single-letter mutations lead to no mutated proteins; this leads to neutral mutations. Bones and tendons use redundancy too to be resilient, but they aren't flat organizations; they are hierarchies of structures inside larger structures, at all levels, from the molecular level up. This allows for a different kind of failure, like in earthquakes (many small ones, few large ones, with a power-law distribution).

A protein is a chain of small parts, and its function is partially determined by its form. This form is mostly self-created, but once in a while a few other proteins help shape up the other proteins, especially when the temperature is too high. Most biological systems are able to self-repair; that usually means cells that die and duplicate, and sometimes they build harder structures like bones. This happens at a sub-cellular level too: cells have many systems to repair and clean themselves, and they keep destroying and rebuilding their parts at all levels. You can see it among neurons too: your memory is encoded (among other things) by the connections between neurons, but neurons die. So new connections among even very old neurons can be created, and they replace the missing wiring, keeping the distributed memory functional even 100 years after the events, in very old people.

Genetic information is encoded in multiple copies, and in bacteria it is sometimes distributed across the population. Reliability is necessary when you copy or read the genetic information; this comes from a balance between the energy used to copy, how reliable you want such a read/copy to be, and how fast you want it (actually ribosomes and DNA polymerase are about at the theoretical minimum of this 3-variable optimization; you can't do better even in theory).

Control systems, like those in the brain, seek reliability in several different ways. One of them is encoding vectors in a small population of neurons. The final direction of where your finger points is found by such a vectorized average. Parkinson's disease can kill 90% of the cells in certain zones, yet I can keep being able to move my hand to grab a glass of water (a little shakily, because the average is computed over many fewer vectors).

There is enough stuff here to write more than one science popularization article :-)
how would you write a program that would be expected to survive having a random
bit in it flipped at random intervals?<
That's a nice question. The program and all its data are stored somewhere, usually in RAM, caches, and registers. How can you use a program if bits in your RAM can flip at random with a certain (low) probability? There are error-correcting RAM memories, based on redundancy codes like Reed-Solomon; ECC memory is common enough today. Similar error-correction schemes can be added to the inner parts of the CPU too (and probably someone has done it, for example in CPUs that must work in space on satellites, where solar radiation is not shielded by the Earth's atmosphere). I am sure related schemes can be used to test whether a CPU instruction has served its purpose or whether something has gone wrong during its execution. You can fix such things in hardware too.

But there are other solutions besides fixing all errors. Today chips keep getting smaller, and the power for each transistor keeps going down. Eventually noise and errors will start to grow. Recently some people have realized that on the screen of a mobile telephone you can tolerate a few wrongly decompressed pixels from a video, if this allows the chip to use only 1/10 of the normal energy. Sometimes you want a few wrong pixels here and there if they allow you to keep watching videos on your mobile telephone twice as long.

In the future CPUs will probably become less reliable, so the software (mostly the operating system, I think) will need to invent ways to fix those errors. This will allow programs to stay globally reliable even with fast, low-powered CPUs. Molecular-scale adders will need software to fix their errors. Eventually this is going to become more and more like cellular biochemistry, with all its active redundancy :-)

There's no end to the amount of things you can say on this topic.

Bye,
bearophile
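To show the idea in its simplest software form, here is a toy sketch of triple modular redundancy with a bitwise majority vote (nothing like a real Reed-Solomon code, just an illustration of voting redundancy):

// Toy triple-modular-redundancy cell: three copies of a value, and a
// bitwise majority vote on read, so any single flipped bit is out-voted.
struct Tmr(T)
{
    T a, b, c;   // left public so the demo below can corrupt one copy

    this(T v) { a = b = c = v; }

    T read()
    {
        // A result bit is set iff at least two of the three copies agree.
        T v = cast(T)((a & b) | (a & c) | (b & c));
        a = b = c = v;   // scrub: repair the corrupted copy
        return v;
    }
}

void demo()
{
    auto cell = Tmr!uint(0xDEADBEEF);
    cell.b ^= 1u << 7;                   // simulate a random bit flip
    assert(cell.read() == 0xDEADBEEF);   // the two good copies out-vote it
}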
Nov 16 2009
prev sibling parent reply Max Samukha <spambox d-coding.com> writes:
On Mon, 16 Nov 2009 12:48:51 -0800, Walter Bright
<newshound1 digitalmars.com> wrote:

If you've got a system that relies on the software continuing to 
function after an unexpected null seg fault, you have a VERY BADLY 
DESIGNED and COMPLETELY UNSAFE system. I really cannot emphasize this 
enough.
I have an example of such software:
http://www.steinberg.net/en/products/audiopostproduction_product/nuendo4.html
It loads third-party plugins into the host process's address space, and consequently it may fail at any moment. The software's design is not the best ever, but it gives the user a last chance to save his work in case of a fatal error. This feature has saved my back a couple of times.
P.S. I worked for Boeing for years on flight critical systems. Normally 
I eschew credentialism, but I feel very strongly about this issue and 
wish to point out that my knowledge on this is based on decades of real 
world experience by aviation companies who take this issue extremely 
seriously.
Then, instead of sticking with Windows and the like, you may want to think about porting dmd to a more serious environment specifically designed for developing such systems. What about a real-time microkernel OS like this one: http://www.qnx.com/products/neutrino_rtos/ ?
Nov 17 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Max Samukha wrote:
 On Mon, 16 Nov 2009 12:48:51 -0800, Walter Bright
 <newshound1 digitalmars.com> wrote:
 
 If you've got a system that relies on the software continuing to 
 function after an unexpected null seg fault, you have a VERY BADLY 
 DESIGNED and COMPLETELY UNSAFE system. I really cannot emphasize this 
 enough.
I have an example of such software: http://www.steinberg.net/en/products/audiopostproduction_product/nuendo4.html It loads third-party plugins into the host process's address space, and consequently it may fail at any moment. The software's design is not the best ever, but it gives the user a last chance to save his work in case of a fatal error. This feature has saved my back a couple of times.
I suppose nobody much cares if it writes out a corrupted audio file. People care very much if their airplane suddenly dives into the ground.

Be that as it may, it is certainly possible to catch seg faults in an exception handler and write files out. That would be an unacceptable behavior, though, in a system that needs to be safe.
 
 P.S. I worked for Boeing for years on flight critical systems. Normally 
 I eschew credentialism, but I feel very strongly about this issue and 
 wish to point out that my knowledge on this is based on decades of real 
 world experience by aviation companies who take this issue extremely 
 seriously.
Then, instead of sticking with Windows and the likes, you may want to think about porting dmd to a more serious environment specifically designed for developing such systems. What about a real-time microkernel OS like this one: http://www.qnx.com/products/neutrino_rtos/ ?
dmd targets Windows because that's where probably half the programmers are. I'd certainly like to do embedded systems, too, but realistically that's going to be the purview of gdc or ldc.
Nov 17 2009
next sibling parent reply Tomas Lindquist Olsen <tomas.l.olsen gmail.com> writes:
On Tue, Nov 17, 2009 at 11:51 AM, Walter Bright
<newshound1 digitalmars.com> wrote:
 Max Samukha wrote:
 On Mon, 16 Nov 2009 12:48:51 -0800, Walter Bright
 <newshound1 digitalmars.com> wrote:

 If you've got a system that relies on the software continuing to function
 after an unexpected null seg fault, you have a VERY BADLY DESIGNED and
 COMPLETELY UNSAFE system. I really cannot emphasize this enough.
I have an example of such a software: http://www.steinberg.net/en/products/audiopostproduction_product/nuendo4.html It loads third-party plugins into the host process's address space, an consequently it may fail at any moment. The software's design is not the best ever but it gives the user last chance to save his work in case of fatal error. This feature has saved my back a couple of times.
I suppose nobody much cares if it writes out a corrupted audio file. People care very much if their airplane suddenly dives into the ground. Be that as it may, it is certainly possible to catch seg faults in an exception handler and write files out. That would be an unacceptable behavior, though, in a system that needs to be safe.
You spent quite a bit of effort explaining that segfaults never cause memory corruption, so it seems fairly reasonable to assume that some parts of the application state could still be valid and useful not to throw away.
 P.S. I worked for Boeing for years on flight critical systems. Normally I
 eschew credentialism, but I feel very strongly about this issue and wish to
 point out that my knowledge on this is based on decades of real world
 experience by aviation companies who take this issue extremely seriously.
Then, instead of sticking with Windows and the likes, you may want to think about porting dmd to a more serious environment specifically designed for developing such systems. What about a real-time microkernel OS like this one: http://www.qnx.com/products/neutrino_rtos/ ?
dmd targets Windows because that's where probably half the programmers are. I'd certainly like to do embedded systems, too, but realistically that's going to be the purview of gdc or ldc.
I'm not sure if LDC will ever support D2 (at least it won't be by my hand)
Nov 17 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Tomas Lindquist Olsen (tomas.l.olsen gmail.com)'s article
 I'm not sure if LDC will ever support D2 (at least it won't be by my hand)
What is it about D2 that makes this unlikely? I thought after LDC D1 support was stable and the D2 spec and front end were stable, the natural progression of things would be for LDC to support D2.
Nov 17 2009
parent reply Tomas Lindquist Olsen <tomas.l.olsen gmail.com> writes:
On Tue, Nov 17, 2009 at 4:45 PM, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Tomas Lindquist Olsen (tomas.l.olsen gmail.com)'s article
 I'm not sure if LDC will ever support D2 (at least it won't be by my hand)
What is it about D2 that makes this unlikely?  I thought after LDC D1 support was stable and the D2 spec and front end were stable, the natural progression of things would be for LDC to support D2.
LDC requires a lot of changes to the frontend.

* DMD is not written as a cross compiler
* The runtime interfaces are hardcoded into the frontend semantics
* The AST rewrites DMD does are destructive and buggy
* The DMD codegen is all over the frontend code; it wasn't meant to be used with another backend

But most of all: someone has to do it. Keeping the two in sync is a major PITA. The original merge of our frontend changes into the D2 frontend was done in an error-prone way, which introduced a lot of bugs: codegen that worked in D1 still compiles, and generates code that compiles, but the code doesn't run. This requires time-consuming debugging/reviewing and of course fixing.

So most of all, like most things in D, it's about a lack of manpower. I personally no longer have the time to maintain LDC besides critical bugfixes now and then, and other devs are in similar situations.

Another factor may be more ideological (or something): not everyone is happy about how D evolved, and who wants to implement something they don't really care about?

-Tomas
Nov 17 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Tomas Lindquist Olsen:

 LDC requires a lot of changes to the frontend.
 
 * DMD is not written as a cross compiler
 * The runtime interfaces are hardcoded into the frontend semantics
 * The ast rewrites dmd does are destructive and buggy
 * The dmd codegen is all over the frontend code, it wasn't meant to be
 used with another backend
LLVM is one of the best things that has happened to D1, so maybe Walter can improve the situation to allow a simpler/better attachment of the D2 front-end to LLVM. If you keep your mouth shut, things will never improve. Maybe someone can write a list of all the points where D2 causes such porting problems, so Walter may improve/fix some of them. This is quite important, more than most of the syntax details discussed in the last weeks.

Bye,
bearophile
Nov 17 2009
parent Tomas Lindquist Olsen <tomas.l.olsen gmail.com> writes:
On Tue, Nov 17, 2009 at 5:58 PM, bearophile <bearophileHUGS lycos.com> wrote:
 Tomas Lindquist Olsen:

 LDC requires a lot of changes to the frontend.

 * DMD is not written as a cross compiler
 * The runtime interfaces are hardcoded into the frontend semantics
 * The ast rewrites dmd does are destructive and buggy
 * The dmd codegen is all over the frontend code, it wasn't meant to be
 used with another backend
LLVM is one of the best things that has happened to D1, so maybe Walter can improve the situation to allow a simpler/better attachment of the D2 front-end to LLVM. If you keep your mouth shut, things will never improve. Maybe someone can write a list of all the points where D2 causes such porting problems, so Walter may improve/fix some of them.

Walter seems to be fairly rigid to work with. While the latest
improvements, like DMD in an SVN repo, are a great step, I just don't
have as much time for LDC as I used to, so the little difference they
make doesn't really help me that much.

I agree it would be good to have more developer documentation. But LDC is
really not implemented very cleanly and writing such would be a huge
amount of work. When I started on LDC I had no idea how DMD worked,
and I made a lot of mistakes along the way.

Now, since then, a lot of people have joined, and helped out. But it
still suffers from some of the issues introduced very early.

Another point is motivation. Personally, I've achieved what I
originally planned for LDC, and quite a lot more. New projects await
out there.

Don't get me wrong. I *use* D (1.0 + Tango). And I need a x86-64
compatible D compiler, so I'm not abandoning LDC. But other people
will have to step in for D2 support. Unless of course I somehow
magically convert to liking D2. But I doubt that's going to happen.

-Tomas

P.S. LDC is an open source project, and while I started it, many other
people now have write access to the repository. I'm not holding anyone
back from making the changes needed for D2 support. And in case
someone out there wants it, I'll be happy to give you access as well
(after seeing a first patch).
Nov 17 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Tomas Lindquist Olsen wrote:
 LDC requires a lot of changes to the frontend.
If you send me the changes, I can incorporate at least some of them, making subsequent versions easier to port to LDC.
Nov 17 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Tomas Lindquist Olsen Wrote:

 On Tue, Nov 17, 2009 at 11:51 AM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 I suppose nobody much cares if it writes out a corrupted audio file. People
 care very much if their airplane suddenly dives into the ground.

 Be that as it may, it is certainly possible to catch seg faults in an
 exception handler and write files out. That would be an unacceptable
 behavior, though, in a system that needs to be safe.
You spent quite a bit of effort explaining that segfaults never cause memory corruption, so it seems fairly reasonable to assume that some parts of the application state could still be valid and useful not to throw away.
At the moment the segfault occurs, sure. But if the process eats the segfault and continues, what happens? If an app is programmed in such a way that segfaults are a part of normal processing (I worked on a DB that performed dynamic loading this way) that's one thing. But other apps are almost definitely going to try and write data near 0x00 after such an occurrence.
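To make the "near 0x00" point concrete, here is a minimal D sketch (class and names invented for illustration): a field store through a null class reference writes at the field's small offset from address zero, so a process that eats the fault and resumes will typically just fault again.

    // hypothetical example: why post-segfault writes land near address 0
    class Connection
    {
        int id;         // fields sit at small offsets from the object base
        int retries;
    }

    void bump(Connection c)
    {
        c.retries += 1; // if c is null, this writes at 0 + retries.offsetof
    }

    void main()
    {
        Connection c;   // class references default to null in D
        bump(c);        // faults; resuming here would only fault again
    }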
Nov 17 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Tomas Lindquist Olsen Wrote:
 
 On Tue, Nov 17, 2009 at 11:51 AM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 I suppose nobody much cares if it writes out a corrupted audio file. People
 care very much if their airplane suddenly dives into the ground.

 Be that as it may, it is certainly possible to catch seg faults in an
 exception handler and write files out. That would be an unacceptable
 behavior, though, in a system that needs to be safe.
You spent quite a bit of effort explaining that segfaults never cause memory corruption, so it seems fairly reasonable to assume that some parts of the application state could still be valid and useful not to throw away.
At the moment the segfault occurs, sure. But if the process eats the segfault and continues, what happens? If an app is programmed in such a way that segfaults are a part of normal processing (I worked on a DB that performed dynamic loading this way) that's one thing. But other apps are almost definitely going to try and write data near 0x00 after such an occurrence.
I think throwing an Error object instead of failing immediately would be occasionally useful. (Same goes about other trapped errors such as integral division by zero.) There are applications out there that want to partially recover from a null pointer error. I wrote a few, so it's difficult to convince me they don't exist. Andrei
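A minimal sketch of the pattern Andrei describes, assuming a platform where the runtime already surfaces the fault as a throwable Error (Win32 DMD does this via SEH); riskyWork and the recovery message are invented for illustration:

    import std.stdio;

    void riskyWork()
    {
        int* p = null;
        *p = 42; // access violation; arrives as a thrown Error on Win32 DMD
    }

    void main()
    {
        try
        {
            riskyWork();
        }
        catch (Throwable t) // Error is not an Exception, so catch Throwable
        {
            // last-chance work only: save a recovery file, then terminate;
            // resuming normal processing after this would be unsafe
            writeln("fatal error, saving recovery file: ", t.msg);
        }
    }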
Nov 17 2009
parent Sean Kelly <sean invisibleduck.org> writes:
Andrei Alexandrescu Wrote:

 Sean Kelly wrote:
 Tomas Lindquist Olsen Wrote:
 
 On Tue, Nov 17, 2009 at 11:51 AM, Walter Bright
 <newshound1 digitalmars.com> wrote:
 I suppose nobody much cares if it writes out a corrupted audio file. People
 care very much if their airplane suddenly dives into the ground.

 Be that as it may, it is certainly possible to catch seg faults in an
 exception handler and write files out. That would be an unacceptable
 behavior, though, in a system that needs to be safe.
You spent quite a bit of effort explaining that segfaults never cause memory corruption, so it seems fairly reasonable to assume that some parts of the application state could still be valid and useful not to throw away.
At the moment the segfault occurs, sure. But if the process eats the segfault and continues, what happens? If an app is programmed in such a way that segfaults are a part of normal processing (I worked on a DB that performed dynamic loading this way) that's one thing. But other apps are almost definitely going to try and write data near 0x00 after such an occurrence.
I think throwing an Error object instead of failing immediately would be occasionally useful. (Same goes about other trapped errors such as integral division by zero.) There are applications out there that want to partially recover from a null pointer error. I wrote a few, so it's difficult to convince me they don't exist.
I'd love to! And this is how Windows works. But throwing an exception from a signal handler invokes undefined behavior. Last time I googled this I saw as many accounts of it failing horribly as working.
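For reference, the conservative POSIX route is a handler that does only async-signal-safe work and then terminates instead of throwing. A minimal Linux sketch against druntime's C bindings (newer runtimes may additionally want nothrow/@nogc on the handler):

    import core.stdc.signal;                      // signal(), SIGSEGV
    import core.sys.posix.unistd : write, _exit;  // async-signal-safe calls

    extern (C) void onSegv(int sig)
    {
        // only async-signal-safe work here: no GC, no buffered I/O, no throwing
        enum msg = "segfault: marker written, terminating\n";
        write(2, msg.ptr, msg.length);
        _exit(1); // returning would re-execute the faulting instruction
    }

    void main()
    {
        signal(SIGSEGV, &onSegv);
        int* p = null;
        *p = 1; // invokes onSegv instead of the default core dump
    }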
Nov 17 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Tomas Lindquist Olsen wrote:
 You spent quite a bit of effort explaining that segfaults never cause
 memory corruption, so it seems fairly reasonable to assume that some
 parts of the application state could still be valid and useful not to
 throw away.
When a seg fault occurs, it is because your program is in a state that you, the programmer, never anticipated. Therefore, you cannot know what state your data is in. Therefore, your data is unreliable. While it may not be in a bad state from memory corruption, it could very well be in a bad state from your program's logic being wrong. Do you want to bet your life on assuming your program and its data is still valid?
Nov 17 2009
parent BCS <none anon.com> writes:
Hello Walter,

 Tomas Lindquist Olsen wrote:
 
 You spent quite a bit of effort explaining that segfaults never cause
 memory corruption, so it seems fairly reasonable to assume that some
 parts of the application state could still be valid and useful not to
 throw away.
 
When a seg fault occurs, it is because your program is in a state that you, the programmer, never anticipated. Therefore, you cannot know what state your data is in. Therefore, your data is unreliable. While it may not be in a bad state from memory corruption, it could very well be in a bad state from your program's logic being wrong. Do you want to bet your life on assuming your program and its data is still valid?
No, at that point I wouldn't count on the program doing anything correctly. But that is a long way from trying to get it to do something useful on the way down, like trying to save off what data it can and generating a crash log with whatever it can salvage. If either of these fails, I'm, at worst, in exactly the same position I was in before I attempted them and, at best, they work. And before you say it: if the system is truly critical, I'd have the crash handler in ROM, a hardware lockout to stop the system from mucking with anything external, and a watchdog timer to reset it if the crash handler hangs.
Nov 19 2009
prev sibling next sibling parent Max Samukha <spambox d-coding.com> writes:
On Tue, 17 Nov 2009 02:51:13 -0800, Walter Bright
<newshound1 digitalmars.com> wrote:

I suppose nobody much cares if it writes out a corrupted audio file. 
People care very much if their airplane suddenly dives into the ground.

Be that as it may, it is certainly possible to catch seg faults in an 
exception handler and write files out. That would be an unacceptable 
behavior, though, in a system that needs to be safe.
Yeah, you are right. It was just one example where continuing the execution after failure makes sense.
 Then, instead of sticking with Windows and the likes, you may want to
 think about porting dmd to a more serious environment specifically
 designed for developing such systems. What about a real-time
 microkernel OS like this one:
 http://www.qnx.com/products/neutrino_rtos/ ?
dmd targets Windows because that's where probably half the programmers are. I'd certainly like to do embedded systems, too, but realistically that's going to be the purview of gdc or ldc.
Ok.
Nov 17 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 Max Samukha wrote:
 
 On Mon, 16 Nov 2009 12:48:51 -0800, Walter Bright
 <newshound1 digitalmars.com> wrote:
 If you've got a system that relies on the software continuing to
 function after an unexpected null seg fault, you have a VERY BADLY
 DESIGNED and COMPLETELY UNSAFE system. I really cannot emphasize
 this enough.
 
I have an example of such software: http://www.steinberg.net/en/products/audiopostproduction_product/nuendo4.html It loads third-party plugins into the host process's address space, and consequently it may fail at any moment. The software's design is not the best ever, but it gives the user a last chance to save his work in case of a fatal error. This feature has saved my back a couple of times.
Be that as it may, it is certainly possible to catch seg faults in an exception handler and write files out. That would be an unacceptable behavior, though, in a system that needs to be safe.
For some systems, once you hit a seg-v, things can't get any worse, so why not try to make things better by saving what you can?
Nov 19 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 For some systems, once you hit a seg-v, things can't get any worse
Oh, yes they can! You could now be executing a virus. *Anything* the software is connected to can now do anything wrong or malicious. (On my car, I installed an oil pressure switch that shuts off the electric fuel pump if the pressure drops. I also pried a switch off of a junkyard Mustang that shuts off if it gets hit hard, I also plan on installing that to shut off the fuel pump. Think of those like a "seg fault" <g>)
 so why not try to make things better by saving what you can?
Sure, you can try saving things, but you'd better hope that there was already a reasonably recent clean copy of your data. To write safe & reliable software, approach it from "what can go wrong, will go wrong", not "I won't worry about that case, because it's unlikely."
Nov 19 2009
parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 For some systems, once you hit a seg-v, things can't get any worse
 
Oh, yes they can!
For some cases they can, for others they can't.
 You could now be executing a virus. *Anything* the
 software is connected to can now do anything wrong or malicious.
 (On my car, I installed an oil pressure switch that shuts off the
 electric fuel pump if the pressure drops.
It might not translate to CS, but there are good reasons that such a device doesn't come standard on cars; the first time one killed a car in rush-hour traffic and set off a 50-car pile-up, someone (GM?) would go bankrupt.
 I also pried a switch off of
 a junkyard Mustang that shuts off if it gets hit hard, I also plan on
 installing that to shut off the fuel pump. Think of those like a "seg
 fault" <g>)
That one might even be worse because it only comes into play when you know things are going wrong; "as soon as things go wrong, my car quits working".
 
 so why not try to make things better by saving what you can?
 
Sure, you can try saving things, but you'd better hope that there was already a reasonably recent clean copy of your data.
that or make a very robust dump system (core dump?)
 To write safe & reliable software, approach it from "what can go
 wrong, will go wrong", not "I won't worry about that case, because
 it's unlikely." 
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 (On my car, I installed an oil pressure switch that shuts off the
 electric fuel pump if the pressure drops.
It might not translate to CS, but there are good reasons that such a device doesn't come standard on cars; the first time one killed a car in rush-hour traffic and set off a 50-car pile-up, someone (GM?) would go bankrupt.
With the pump shut off, you have a few seconds of fuel left in the carb. With no oil pressure, your engine is going to seize anyway.
 I also pried a switch off of
 a junkyard Mustang that shuts off if it gets hit hard, I also plan on
 installing that to shut off the fuel pump. Think of those like a "seg
 fault" <g>)
That one might even be worse because it only comes into play when you know things are going wrong; "as soon as things go wrong, my car quits working".
You *really* don't want your fuel pump to keep on pumping if you're in an accident. That's the purpose of the inertial switch. With older mechanical pumps, the pump would stop whenever the engine did. The gasoline is safer remaining in the tank than being pumped all over the road, the hot engine, and your trapped body.
 so why not try to make things better by saving what you can?
Sure, you can try saving things, but you'd better hope that there was already a reasonably recent clean copy of your data.
that or make a very robust dump system (core dump?)
A core dump, no matter how robust, will not fix your data if it is randomized by an errant program before it seg faulted.
 To write safe & reliable software, approach it from "what can go
 wrong, will go wrong", not "I won't worry about that case, because
 it's unlikely." 
Nov 20 2009
parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 (On my car, I installed an oil pressure switch that shuts off the
 electric fuel pump if the pressure drops.
 
It might not translate to CS, but there are good reasons that such a device doesn't come standard on cars; the first time one killed a car in rush-hour traffic and set off a 50-car pile-up, someone (GM?) would go bankrupt.
With the pump shut off, you have a few seconds of fuel left in the carb. With no oil pressure, your engine is going to seize anyway.
In a few minutes, yes (and it will still run for some time after it's damaged beyond repair), more than long enough to get off the road. I'd put a big buzzer in and let the driver decide when it is safe to shut down the engine. In some situations, I'd gladly cook the engine to get to safety.
 
 I also pried a switch off of
 a junkyard Mustang that shuts off if it gets hit hard, I also plan
 on
 installing that to shut off the fuel pump. Think of those like a
 "seg
 fault" <g>)
That one might even be worse because it only comes into play when you know things are going wrong; "as soon as things go wrong, my car quits working".
You *really* don't want your fuel pump to keep on pumping if you're in an accident. That's the purpose of the inertial switch. With older mechanical pumps, the pump would stop whenever the engine did. The gasoline is safer remaining in the tank than being pumped all over the road, the hot engine, and your trapped body.
So tie it into the ignition system or a tilt switch (some 4x4s do that).
 so why not try to make things better by saving what you can?
 
Sure, you can try saving things, but you'd better hope that there was already a reasonably recent clean copy of your data.
that or make a very robust dump system (core dump?)
A core dump, no matter how robust, will not fix your data if it is randomized by an errant program before it seg faulted.
Who said anything about fixing stuff? I've been thinking only of logging and a recover-your-work-maybe file, that kind of thing. I agree, anything more than that won't work.
Nov 20 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 With the pump shut off, you have a few seconds of fuel left in the
 carb. With no oil pressure, your engine is going to seize anyway.
In a few minutes, yes (and it will still run for some time after it's damaged beyond repair), more than long enough to get off the road. I'd put a big buzzer in and let the driver decide when it is safe to shut down the engine. In some situations, I'd gladly cook the engine to get to safety.
There is an override on the switch to turn the pump on anyway, but it's a push button you have to hold down.
 I also pried a switch off of
 a junkyard Mustang that shuts off if it gets hit hard, I also plan
 on
 installing that to shut off the fuel pump. Think of those like a
 "seg
 fault" <g>)
That one might even be worse because it only comes into play when you know things are going wrong; "as soon as things go wrong, my car quits working".
You *really* don't want your fuel pump to keep on pumping if you're in an accident. That's the purpose of the inertial switch. With older mechanical pumps, the pump would stop whenever the engine did. The gasoline is safer remaining in the tank than being pumped all over the road, the hot engine, and your trapped body.
So tie it into the ignition system or a tilt switch (some 4x4s do that).
It is tied to the ignition system already. The problem is, the ignition doesn't automatically turn off when you crash your car. If you race cars, you are required to install a battery cutoff switch on the outside in an obvious location. This is so emergency personnel running up to save your a** can shut off the power first thing so no spark or whatever will set things on fire. I have a quick disconnect on my battery. Whenever I work on the car, the first thing is always to disconnect it.
Nov 20 2009
parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 With the pump shut off, you have a few seconds of fuel left in the
 carb. With no oil pressure, your engine is going to seize anyway.
 
In a few minutes, yes (and it will still run for some time after it's damaged beyond repair), more than long enough to get off the road. I'd put a big buzzer in and let the driver decide when it is safe to shut down the engine. In some situations, I'd gladly cook the engine to get to safety.
There is an override on the switch to turn the pump on anyway, but it's a push button you have to hold down.
You're driving down the road talking about programming language design, and suddenly an 18-wheeler starts tailgating and another pulls out to pass. In the middle of that, your engine starts to splutter, something it has never done before. What is your reaction? I'll give 10:1 odds that it takes you a few seconds to recognize that the fuel has been cut, remember that there is a switch to override it, find said switch, and push it. Now add in that you didn't install the switch (it comes standard) and you have never taken the manual out of its shrink wrap. You're starting to see why it will never come standard.
 So tie it into the ignition system or a tilt switch (some 4x4s do that).
 
It is tied to the ignition system already. The problem is, the ignition doesn't automatically turn off when you crash your car.
Yes, the ignition (as in the key) doesn't turn off, but when the engine quits running, the ignition system (as in the magneto, or that block of epoxy and silicon under the hood) quits triggering the spark. Tie into that.
 If you race cars, you are required to install a battery cutoff switch
 on the outside in an obvious location. This is so emergency personnel
 running up to save your a** can shut off the power first thing so no
 spark or whatever will set things on fire.
 
 I have a quick disconnect on my battery. Whenever I work on the car,
 the first thing is always to disconnect it.
 
Nov 21 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 You're driving down the road talking about programming language design,
 and suddenly an 18-wheeler starts tailgating and another pulls out to
 pass. In the middle of that, your engine starts to splutter, something
 it has never done before. What is your reaction? I'll give 10:1 odds
 that it takes you a few seconds to recognize that the fuel has been cut,
 remember that there is a switch to override it, find said switch, and
 push it. Now add in that you didn't install the switch (it comes
 standard) and you have never taken the manual out of its shrink wrap.
 You're starting to see why it will never come standard.
There's also a large red light that comes on when oil pressure drops, and a large green light that goes out when the fuel pump is not getting power. Me, I like the interface style where a row of green lights says everything is good, and red lights say things are bad. As for anyone else, I designed this system for my own use. It's typical for performance cars; you can get the switch at any good speed shop, and it is in fact required if you're on the track. Watch drag racing on TV. Sooner or later, probably sooner, you'll see an engine blow up. Turning off the fuel pump automatically only makes sense. Nobody wants to blow fuel at 60 psi onto an engine fire.
 So tie it into the ignition system or a tilt switch (some 4x4s do that).
It is tied to the ignition system already. The problem is, the ignition doesn't automatically turn off when you crash your car.
Yes the ignition (as the the key) doesn't turn off but when the engine quits running the ignition system (as in the magneto or that block of epoxy and silicon under the hood) quits triggering the spark. Tie into that.
Trying to determine if the distributor is no longer turning is a non-trivial circuit. Best to stick with simple things when dealing with safety issues. The inertial switch is pretty darned simple, it's just a ball stuck on the end of a magnet. Knock it hard, it falls off the magnet, opening the circuit.
Nov 22 2009
parent reply BCS <none anon.com> writes:
Hello Walter,

 BCS wrote:
 
 Yes, the ignition (as in the key) doesn't turn off, but when the
 engine quits running, the ignition system (as in the magneto, or that
 block of epoxy and silicon under the hood) quits triggering the
 spark. Tie into that.
 
Trying to determine if the distributor is no longer turning is a non-trivial circuit. Best to stick with simple things when dealing with safety issues. The inertial switch is pretty darned simple, it's just a ball stuck on the end of a magnet. Knock it hard, it falls off the magnet, opening the circuit.
If you can find the right spot to tie into, all you need is an event failure alarm: http://www.elektropage.com/default.asp?page=sub&bid=1&sid=18 A voltage divider across the points for a magneto should work. A more modern system should be even easier, as they probably have a low-voltage wire somewhere that will work. Of course, all the motivations change if you're driving in a race.
Nov 23 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 Hello Walter,
 
 BCS wrote:

 Yes, the ignition (as in the key) doesn't turn off, but when the
 engine quits running, the ignition system (as in the magneto, or that
 block of epoxy and silicon under the hood) quits triggering the
 spark. Tie into that.
Trying to determine if the distributor is no longer turning is a non-trivial circuit. Best to stick with simple things when dealing with safety issues. The inertial switch is pretty darned simple, it's just a ball stuck on the end of a magnet. Knock it hard, it falls off the magnet, opening the circuit.
If you can find the right spot to tie into, all you need is an event failure alarm: http://www.elektropage.com/default.asp?page=sub&bid=1&sid=18
No, I'm not going with complex electronics for a fail-safe circuit. A simple fail-open pressure switch is pretty bulletproof. (Integrated circuits can often have problems in a car - heat, vibration, and spiky power supplies. Using off-the-shelf electronics under the hood can be a problem because of that; you need more robust parts.) That circuit also has 3 mechanical moving parts, a transistor, a 555 IC, capacitors, and numerous connections, any of which can fail. A pressure switch is much simpler.
 A voltage divider across the points for a magneto should work. A more
 modern system should be even easier, as they probably have a low-voltage
 wire somewhere that will work.
 
 Of course, all the motivations change if you're driving in a race.
Nov 24 2009
prev sibling parent "Adam D. Ruppe" <destructionator gmail.com> writes:
On Mon, Nov 16, 2009 at 03:19:06PM -0500, bearophile wrote:
 One Ariane missile has self-destroyed (and destroyed an extremely important
scientific satellite it was carrying whose mission I miss still) because of
this silly behaviour united with the inflexibility of the Ada language.
Would you have preferred it to just randomly do its own thing and potentially end up landing on people? Blowing it up over the ocean or the launch site is something they would be prepared for anyway, so it is relatively safe to people, which is what ultimately matters. Even expensive, important pieces of equipment can always be replaced.
 A program that stops working at a random moment because of a null is not safe.
What would you have it do? Carry on in the error state, doing Lord knows what? That's clearly unsafe. Terminating it is a completely predictable situation - one you can design the safe system as a whole around. The rocket scientists know their rocket might blow up at launch, so they build the launch pad out far enough from people and schedule lift-off on a day with favourable weather, so if it does explode, the odds of someone getting hurt are low. Hospitals know their medical machines might screw up, so they keep a nurse on duty at all times who can handle the situation - restart the failed machine, or bring in a replacement before it kills someone. Similarly, if your program simply must not fail, null pointer problems don't preclude this. You can predict the eventuality of termination, and set up an external process to restart the dead program: while true ; do ./buggy-program ; done It might not be convenient all the time, but it is safe. Certainly safer than the alternative of carrying on in an unknown state.
 A safer program is a program that avoids null pointer exceptions because the
type system has formally verified the program has no nulls.
I wouldn't say safer, though I will concede that it is easier to debug. -- Adam D. Ruppe http://arsdnet.net
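For the record, here is roughly what the library end of such a guarantee can look like in D. This NonNull wrapper is hypothetical (name and design invented for illustration); it does its one null check at construction, so every later use is known to be non-null:

    struct NonNull(T)
    {
        private T* p;

        @disable this(); // forbid the default-constructed (null) state

        this(T* ptr)
        {
            assert(ptr !is null, "NonNull constructed from null");
            p = ptr;
        }

        ref T get() { return *p; }
        alias get this; // use a NonNull!T wherever a T is expected
    }

    unittest
    {
        int x = 41;
        auto n = NonNull!int(&x);
        n.get() = 42;   // no null check needed at the use site
        assert(x == 42);
    }

Note the check has not vanished; it has moved to construction time, which is exactly the trade the type-system approach makes.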
Nov 16 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 Can't be written. Try it.
Thank you for being gentle with me still :-) Almost every day I say something stupid in this newsgroup... Bye, bearophile
Nov 16 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 
 Can't be written. Try it.
Thank you for being gentle with me still :-) Almost every day I say something stupid in this newsgroup... Bye, bearophile
Sorry, I was just lacking the time. I also tried to encapsulate alloca once. It becomes obvious once you sit down and try to write the code. Andrei
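For readers wondering why: alloca carves its block out of the stack frame of its immediate caller, which inside a wrapper is the wrapper itself, so the memory dies the moment the wrapper returns. A sketch of the doomed attempt, using druntime's core.stdc.stdlib binding:

    import core.stdc.stdlib : alloca;

    // the obvious wrapper, and why it cannot work: the bytes live in
    // salloca's own stack frame, which is torn down on return
    void* salloca(size_t n)
    {
        void* p = alloca(n); // allocates inside salloca's frame
        return p;            // p dangles as soon as salloca returns
    }

    void caller()
    {
        auto buf = salloca(128); // points into a dead frame; unusable
    }

This is why the fallback logic has to be generated at the call site (for example, via a string mixin) or built into the compiler; no ordinary function can wrap alloca.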
Nov 16 2009