
digitalmars.D - Null references (oh no, not again!)

reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Just noticed this hit Slashdot, and thought I might repost the abstract
here.

http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake

 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner

Serendipitous, since I just spent today trying to track down an
(expletives deleted) obscure null dereference problem. I figure I must be
in good company if even the guy who invented null doesn't like it...

It also made me look up this old thing; it's several years old now, but I
still think it's got some good points in it.

http://www.st.cs.uni-saarland.de/edu/seminare/2005/advanced-fp/docs/sweeny.pdf
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
Tim Sweeney isn't an amateur; he's responsible, at least in part, for one
of the most commercially successful game engines ever. I figure if even he
has trouble with these things, it's worth trying to fix them.

-- Daniel
Mar 03 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Daniel Keep:
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables
...
Nice to see another person appreciate my desire to avoid those bugs as
much as possible. Eventually I hope to see D avoid all four of them.

There are other bugs too, like ones derived from out-of-bounds pointer
arithmetic, but they are less common (Cyclone and other languages are able
to avoid them too, but it costs some). Do you remember the ptr==null thing?

As soon as D2 has full integral overflow checks, people will find several
bugs in their "large" D2 programs. I have seen this many times when
switching on such overflow checks in Delphi programs :-)

Bye,
bearophile
Mar 03 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Daniel Keep wrote:
 Just noticed this hit Slashdot, and thought I might repost the abstract
 here.
 
 http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake
 
 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
I suggested to Walter an idea he quite took to: offering the ability of
disabling the default constructor. This is because at root any null
pointer was a pointer created with its default constructor. The feature
has some interesting subtleties to it but is nothing out of the ordinary
and the code must be written anyway for typechecking invariant
constructors.

That, together with the up-and-coming alias this feature, will allow the
creation of the "perfect" NonNull!(T) type constructor (along with many
other cool things). I empathize with those who think non-null should be
the default, but probably that won't fly with Walter.


Andrei
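A rough sketch of what such a NonNull!(T) might look like, assuming the
proposed disabled default constructor and the upcoming alias this syntax
(the names and details below are guesses for illustration, not the actual
design):

struct NonNull(T)
{
    private T payload;

    // The only way in checks for null; default construction would be
    // disabled by the proposed feature, so no NonNull can start out null.
    this(T value)
    {
        assert(value !is null);
        payload = value;
    }

    // With alias this, a NonNull!(T) converts implicitly to T, so it can
    // be passed anywhere a T is expected after construction.
    alias payload this;
}

Once constructed, a NonNull!(Foo) can be handed to any code expecting a
Foo, and the null check has provably already happened.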
Mar 03 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Tue, 03 Mar 2009 21:59:16 +0300, Andrei Alexandrescu
<SeeWebsiteForEmail erdani.org> wrote:

 Daniel Keep wrote:
 Just noticed this hit Slashdot, and thought I might repost the abstract
 here.
   
 http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake

 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
I suggested to Walter an idea he quite took to: offering the ability of
disabling the default constructor. This is because at root any null
pointer was a pointer created with its default constructor. The feature
has some interesting subtleties to it but is nothing out of the ordinary
and the code must be written anyway for typechecking invariant
constructors.

That, together with the up-and-coming alias this feature, will allow the
creation of the "perfect" NonNull!(T) type constructor (along with many
other cool things). I empathize with those who think non-null should be
the default, but probably that won't fly with Walter.


Andrei
If nullable is the default and NonNull!(T) has no syntactic sugar, I bet
it won't be used at all. I know I wouldn't use it, even though I'm one of
the biggest advocates of introducing non-nullable types in D.

In my opinion, you should teach novices safe practices first, and
dangerous tricks last. Not vice-versa. If nullable types are easier to use
than non-nullable ones, non-nullable types won't be widely used. The
syntax ought to be less verbose and more clear to get attention.

I hope that this great idea won't get spoiled by a broken implementation...
Mar 03 2009
next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Mar 3, 2009 at 3:08 PM, Denis Koroskin <2korden gmail.com> wrote:
 If nullable is the default and NonNull!(T) has no syntactic sugar, I bet it
 won't be used at all. I know I wouldn't, even though I'm one of the biggest
 advocates of introducing non-nullable types in D.

 In my opinion, you should teach novices safe practices first, and dangerous
 tricks last. Not vice-versa.
Exactly. I thought one of the ideas behind D was to have "safe" defaults.
Yeah, I know, null references can't actually do damage to your computer
because of virtual memory, but neither can concurrent access to shared
data, or accessing uninitialized variables, yet they're taken care of.

If non-null types were the default, Nullable!(T) would be implementable as
an Algebraic type, just like in Haskell. One more potential bragging
point ;)
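For what it's worth, here is a rough sketch of that idea on top of
std.variant's Algebraic (None and the names are made up for illustration;
and since today's D references are still nullable, this only shows the
shape of the idea, not a watertight implementation):

import std.variant;

struct None {}   // empty tag meaning "no value here"

// Nullable as a sum type: the variable holds either an Object or None,
// and the holder must check which one before using it.
alias Algebraic!(Object, None) NullableObject;

void main()
{
    NullableObject n = None();
    assert(n.peek!(Object) is null);    // not holding an Object yet

    n = new Object;
    assert(n.peek!(Object) !is null);   // now it is; safe to use
}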
Mar 03 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Jarrett Billingsley wrote:
 Exactly.  I thought one of the ideas behind D was to have "safe"
 defaults.  Yeah, I know, null references can't actually do damage to
 your computer because of virtual memory, but neither can concurrent
 access to shared data, or accessing uninitialized variables, but
 they're taken care of.
Those last two *are* unsafe, memory corrupting problems.
Mar 03 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Denis Koroskin wrote:
 On Tue, 03 Mar 2009 21:59:16 +0300, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Daniel Keep wrote:
 Just noticed this hit Slashdot, and thought I might repost the abstract
 here.
 http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake


 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
I suggested to Walter an idea he quite took to: offering the ability of
disabling the default constructor. This is because at root any null
pointer was a pointer created with its default constructor. The feature
has some interesting subtleties to it but is nothing out of the ordinary
and the code must be written anyway for typechecking invariant
constructors.

That, together with the up-and-coming alias this feature, will allow the
creation of the "perfect" NonNull!(T) type constructor (along with many
other cool things). I empathize with those who think non-null should be
the default, but probably that won't fly with Walter.


Andrei
If nullable is the default and NonNull!(T) has no syntactic sugar, I bet
it won't be used at all. I know I wouldn't use it, even though I'm one of
the biggest advocates of introducing non-nullable types in D.

In my opinion, you should teach novices safe practices first, and
dangerous tricks last. Not vice-versa. If nullable types are easier to use
than non-nullable ones, non-nullable types won't be widely used. The
syntax ought to be less verbose and more clear to get attention.

I hope that this great idea won't get spoiled by a broken implementation...
I did some more research and found a study:

http://users.encs.concordia.ca/~chalin/papers/TR-2006-003.v3s-pub.pdf

Very interestingly (and exactly the kind of info I was looking for), the
study measures how references are meant to be used in a real application
of medium-large size. Turns out in 2/3 of cases, references are really
meant to be non-null... not really a landslide but a comfortable majority.


Andrei
Mar 03 2009
next sibling parent Brad Roberts <braddr bellevue.puremagic.com> writes:
On Tue, 3 Mar 2009, Andrei Alexandrescu wrote:

 I did some more research and found a study:
 
 http://users.encs.concordia.ca/~chalin/papers/TR-2006-003.v3s-pub.pdf
 
 Very interestingly (and exactly the kind of info I was looking for), the study
 measures how references are meant to be in a real application of medium-large
 size.
 
 Turns out in 2/3 of cases, references are really meant to be non-null... not
 really a landslide but a comfortable majority.
 
 
 Andrei
I'd love to see a similar study for smart pointers. Are they more like
pointers or references? My assumption is references, leading to them being
poorly named. :)

Later,
Brad
Mar 03 2009
prev sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 I did some more research and found a study:
 http://users.encs.concordia.ca/~chalin/papers/TR-2006-003.v3s-pub.pdf
 ...
 Turns out in 2/3 of cases, references are really meant to be non-null... 
 not really a landslide but a comfortable majority.
Thank you for bringing real data to this debate. Note that 2/3 is relative
to nonlocal variables only:

 In Java programs, at least 2/3 of declarations (other than local
 variables) that are of reference types are meant to be non-null, based on
 design intent. We exclude local variables because their non-nullity can
 be inferred by intra-procedural analysis.

So the total percentage may be different (higher?). Anyway, nonnullable by
default seems the way to go if such a feature is added.

Bye,
bearophile
Mar 04 2009
prev sibling next sibling parent Jason House <jason.james.house gmail.com> writes:
Andrei Alexandrescu Wrote:

 Daniel Keep wrote:
 Just noticed this hit Slashdot, and thought I might repost the abstract
 here.
 
 http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake
 
 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
I suggested to Walter an idea he quite took to: offering the ability of
disabling the default constructor. This is because at root any null
pointer was a pointer created with its default constructor. The feature
has some interesting subtleties to it but is nothing out of the ordinary
and the code must be written anyway for typechecking invariant
constructors.

That, together with the up-and-coming alias this feature, will allow the
creation of the "perfect" NonNull!(T) type constructor (along with many
other cool things). I empathize with those who think non-null should be
the default, but probably that won't fly with Walter.


Andrei
Alias this?
Mar 03 2009
prev sibling next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Andrei Alexandrescu wrote:
 I suggested to Walter an idea he quite took to: offering the ability of
 disabling the default constructor. This is because at root any null
 pointer was a pointer created with its default constructor. The feature
 has some interesting subtleties to it but is nothing out of the ordinary
 and the code must be written anyway for typechecking invariant
 constructors.
 
 That, together with the up-and-coming alias this feature, will allow the
 creation of the "perfect" NonNull!(T) type constructor (along with many
 other cool things). I empathize with those who think non-null should be
 the default, but probably that won't fly with Walter.
 
 
 Andrei
If Walter can get this working so that we can create NonNull as a wrapper
struct that guarantees it will always be non-null, that'll be a big step
forward. Yes, I'd prefer it be the default, but better possible than
not. :)

-- Daniel
Mar 03 2009
prev sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-03-03 13:59:16 -0500, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 I suggested to Walter an idea he quite took to: offering the ability of 
 disabling the default constructor. This is because at root any null 
 pointer was a pointer created with its default constructor. The feature 
 has some interesting subtleties to it but is nothing out of the 
 ordinary and the code must be written anyway for typechecking invariant 
 constructors.
 
 That, together with the up-and-coming alias this feature, will allow 
 the creation of the "perfect" NonNull!(T) type constructor (along with 
 many other cool things). I empathize with those who think non-null 
 should be the default, but probably that won't fly with Walter.
That'd be great, really. But even then, NonNull!(T) will probably be to D
what auto_ptr<T> is to C++: a very good idea with a very bad syntax that
only expert programmers use. C++ makes the safest pointer types the least
known; please convince Walter we shouldn't repeat that error in D.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Mar 03 2009
prev sibling next sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Daniel Keep wrote:
 Just noticed this hit Slashdot, and thought I might repost the abstract
 here.
 
 http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake
 
 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years. [...] More
 recent programming languages like Spec# have introduced declarations
 for non-null references. This is the solution, which I rejected in
 1965.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner

 Serendipitous, since I just spent today trying to track down an
 (expletives deleted) obscure null dereference problem. I figure I must be
 in good company if even the guy who invented null doesn't like it...
There are issues shoe-horning non-nullables into a nullable world:
 - preallocating arrays (or static arrays)
 - structs with non-nullable fields
 - pointers to non-nullables

These were enough that I gave up on my attempts to implement it (a rough
sketch of the first two follows below). If it were implemented,
non-nullable absolutely must be the default.

I'm still sad about mutable being the default in d2.
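For concreteness, here is a rough sketch of the first two sticking points,
using a hypothetical NonNull wrapper like the one discussed elsewhere in
this thread (the semantics are assumed, not an actual implementation):

class Node {}

struct NonNull(T)
{
    private T payload;
    this(T value) { assert(value !is null); payload = value; }
    alias payload this;
}

void main()
{
    // Preallocation: a static array fills every slot with the type's
    // .init value, and NonNull's .init has a null payload, so the
    // guarantee is silently broken before any element is assigned.
    NonNull!(Node)[16] pool;

    // Struct fields: the same .init problem infects any aggregate that
    // embeds a non-nullable member.
    struct Pair
    {
        NonNull!(Node) a;
        NonNull!(Node) b;
    }
    Pair p;   // p.a and p.b both hold null despite the wrapper
}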
Mar 03 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Christopher Wright:
I'm still sad about mutable being the default in d2.
Maybe D3 will move more in that direction, I don't know. It's a really big
jump from C/C++, quite a bit bigger than nonnullable by default :-)

Bye,
bearophile
Mar 03 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Daniel Keep wrote:
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
Tim Sweeney isn't an amateur; he's responsible, at least in part, for one
of the most commercially successful game engines ever. I figure if even he
has trouble with these things, it's worth trying to fix them.
1 and 4 are pernicious, memory corrupting, hard to find problems. 2 is easy to find, does not corrupt memory. It isn't even in the same continent as 1 and 4 are. 3 is a problem, but fortunately it tends to be rare.
Mar 03 2009
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
Tim Sweeney isn't an amateur; he's responsible, at least in part, for one
of the most commercially successful game engines ever. I figure if even he
has trouble with these things, it's worth trying to fix them.
1 and 4 are pernicious, memory corrupting, hard to find problems. 2 is easy to find, does not corrupt memory. It isn't even in the same continent as 1 and 4 are. 3 is a problem, but fortunately it tends to be rare.
The point was that these were identified as being responsible for the
bulk of the bugs in Unreal.

-- Daniel
Mar 03 2009
parent reply Rainer Deyke <rainerd eldwood.com> writes:
Daniel Keep wrote:
 The point was that these were identified as being responsible for the
 bulk of the bugs in Unreal.


A sample size of one doesn't mean much. In my experience, none of those
four factors accounts for a significant number of bugs, since all of them
(except integer overflow) can be caught without too much effort through
the copious use of assertions.

I'd still prefer non-nullable references to be the default though. Writing
an assertion for every non-nullable reference argument for every function
is tedious.

-- 
Rainer Deyke - rainerd eldwood.com
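To make the tedium concrete, this is what the pattern looks like written
out by hand (Widget and the functions are invented examples):

class Widget {}

void render(Widget w, Widget parent)
{
    assert(w !is null);        // the same boilerplate, repeated at
    assert(parent !is null);   // the top of every function...
    // ... actual rendering ...
}

void layout(Widget w)
{
    assert(w !is null);        // ...and again, everywhere else
    // ...
}

Non-nullable parameter types would turn every one of these runtime checks
into a compile-time guarantee instead.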
Mar 03 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is. The hardware won't help you with array overflows or uninitialized variables, however.
Mar 04 2009
next sibling parent Don <nospam nospam.com> writes:
Walter Bright wrote:
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is. The hardware won't help you with array overflows or uninitialized variables, however.
The worst case for an uninitialized variable is when it's a pointer; the
best value (by far) that such an uninitialized pointer can have is null.

OTOH I find that null references are an order of magnitude more common in
D than in C++. I think that's significant, because in my experience, it's
the only category of bugs which is worse in D. It's far too easy to
declare a class and forget to 'new' it.
Mar 04 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gole1d$23v4$1 digitalmars.com...
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is.
Yes...at run-time. And even then only if you're lucky enough to hit all of
the code paths that lead to a null-reference during testing. It might not
cause data-corruption, but it does cause a crash.

A crash might not typically be as bad as data-corruption, but both are
still unnaceptable in professional software. Plus, a crash *can* be nearly
as bad, if not equally bad, as data-corruption when it occurs in something
mission-critical. This is not a problem to be taken lightly.
Mar 04 2009
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Nick Sabalausky" <a a.a> wrote in message 
news:golfdk$267h$1 digitalmars.com...
 ...but both are still unnaceptable in professional software.
And so is my terrible spelling ;)
Mar 04 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:gole1d$23v4$1 digitalmars.com...
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is.
Yes...at run-time.
Asserts only fire at run-time, too. This is why I said the asserts are pointless.
 And even then only if you're lucky enough to hit all of 
 the code paths that lead to a null-reference during testing. It might not 
 cause data-corruption, but it does cause a crash.
It's not *remotely* as bad as data corruption. Back in the bad old DOS
days, a data corruption problem could, and often *did*, completely
scramble your hard disk. Having protection against this in hardware was an
enormous improvement.

Things were so bad on DOS with this that I'd develop code on a different
system entirely that had memory protection, then only afterwards port it
to DOS as a last step.
 A crash might not 
 typically be as bad as data-corruption, but both are still unnaceptable in 
 professional software. Plus, a crash *can* be nearly as bad, if not equally 
 bad, as data-corruption when it occurs in something mission-critical. This 
 is not a problem to be taken lightly.
I've worked with mission-critical software. You absolutely do NOT rely on
it never failing. You design it so that when it fails, and it WILL fail,
it does not bring down your critical system.

I started my career doing flight critical mechanical designs for Boeing
airliners. I had it pounded into me that no matter how perfectly you
designed the parts, the next step is "assume it fails. Now what?" That is
why Boeing airliners have incredible safety records.

Assume the parts break. Assume the hydraulics are connected backwards.
Assume all the fluid runs out of the hydraulics. Assume it is struck by
lightning. Assume it is encased in ice and frozen solid. Assume the cables
break. Assume a stray wrench jams the mechanism. Assume it rusts away.
Assume nobody lubricates it for years. Assume it was assembled with a bad
batch of bolts. Etc.

If software is in your flight critical systems, the way one proceeds is to
*assume skynet takes it over* and will attempt to do everything possible
to crash the airplane.
Mar 04 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 13:55:57 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message  
 news:gole1d$23v4$1 digitalmars.com...
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for  
 every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is.
Yes...at run-time.
Asserts only fire at run-time, too. This is why I said the asserts are pointless.
 And even then only if you're lucky enough to hit all of the code paths  
 that lead to a null-reference during testing. It might not cause  
 data-corruption, but it does cause a crash.
It's not *remotely* as bad as data corruption. Back in the bad old DOS days, a data corruption problem could, and often *did*, completely scramble your hard disk. Having protection against this in hardware was an enormous improvement. Things were so bad on DOS with this I'd develop code on a different system entirely that had memory protection, then only afterwards port it to DOS as a last step.
 A crash might not typically be as bad as data-corruption, but both are  
 still unnaceptable in professional software. Plus, a crash *can* be  
 nearly as bad, if not equally bad, as data-corruption when it occurs in  
 something mission-critical. This is not a problem to be taken lightly.
I've worked with mission-critical software. You absolutely do NOT rely on it never failing. You design it so that when it fails, and it WILL fail, it does not bring down your critical system. I started my career doing flight critical mechanical designs for Boeing airliners. I had it pounded into me that no matter how perfect you designed the parts, the next step is "assume it fails. Now what?" That is why Boeing airliners have incredible safety records. Assume the parts break. Assume the hydraulics are connected backwards. Assume all the fluid runs out of the hydraulics. Assume it is struck by lightning. Assume it is encased in ice and frozen solid. Assume the cables break. Assume a stray wrench jams the mechanism. Assume it rusts away. Assume nobody lubricates it for years. Assume it was assembled with a bad batch of bolts. Etc. If software is in your flight critical systems, the way one proceeds is to *assume skynet takes it over* and will attempt to do everything possible to crash the airplane.
Assume you got a null-dereference under Linux. How are you going to
recover from it? You can't catch the NullPointerException, so your program
will fail and bring down the whole system *anyway*.
Mar 04 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 On Wed, 04 Mar 2009 13:55:57 +0300, Walter Bright 
 If software is in your flight critical systems, the way one proceeds 
 is to *assume skynet takes it over* and will attempt to do everything 
 possible to crash the airplane.
Assume you got a null-dereference under Linux. How are you going to
recover from it? You can't catch the NullPointerException, so your program
will fail and bring down the whole system *anyway*.
You design your critical system so it is not vulnerable to the failure of
a subsystem of it, even if that subsystem is powered by linux.

For example, you might have two computer systems controlling the process.
They vote, and if they disagree, they both are removed and the backup is
engaged. The two systems use different operating systems, say one linux
and the other windows, and they use different software written with
different algorithms in different languages. The space shuttle, for
example, had 4 independent flight control computers voting, and a 5th
(with reduced capability) that could be manually brought online in case
the 4 primaries all failed.

Google did an interesting design for their Chrome browser. Each tab in it
was powered by a separate process, meaning the hardware isolated it from
the operation of the other tabs. So if the browser crashed in one tab, it
wouldn't affect the other ones.

I've read elsewhere that if you want to create a robust system, you break
it up into different modules and run those modules as separate processes
(not just separate threads) that communicate via interprocess
communication. Any particular module dying could then be restarted without
affecting the rest of the modules. The wrong way to do it is to lump
everything into one gigantic process. Then, any failure brings everything
down.
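A deliberately tiny sketch of that module-per-process idea
(std.process.system is the real Phobos call; the restart policy is made
up, and a real supervisor would add rate limiting and logging):

import std.process;
import std.stdio;

// Run one module as its own OS process. If it dies, restart it instead
// of letting the failure take down the whole system.
void supervise(string command)
{
    for (;;)
    {
        auto status = system(command);  // blocks until the process exits
        if (status == 0)
            break;                      // clean exit; stop supervising
        writefln("module '%s' died (status %s); restarting",
                 command, status);
    }
}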
Mar 04 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 14:40:58 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Denis Koroskin wrote:
 On Wed, 04 Mar 2009 13:55:57 +0300, Walter Bright
 If software is in your flight critical systems, the way one proceeds  
 is to *assume skynet takes it over* and will attempt to do everything  
 possible to crash the airplane.
 Assume you got a null-dereference under Linux. How are you going to
 recover from it? You can't catch the NullPointerException, so your
 program will fail and bring down the whole system *anyway*.
You design your critical system so it is not vulnerable to the failure of a subsystem of it, even if that subsystem is powered by linux. For example, you might have two computer systems controlling the process. They vote, and if they disagree, they both are removed and the backup is engaged. The two systems use different operating systems - say one linux the other windows, they use different software written with different algorithms in different languages. The space shuttle, for example, had 4 independent flight control computers voting, and a 5th (with reduced capability) that could be manually brought online in case the 4 primaries all failed. Google did an interesting design for their Chrome browser. Each tab in it was powered by a separate process, meaning the hardware isolated it from the operation of the other tabs. So if the browser crashed in one tab, it wouldn't affect the other ones. I've read elsewhere that if you want to create a robust system, you break it up into different modules and run those modules as separate processes (not just separate threads) that communicate via interprocess communication. Any particular module dying could then be restarted without affecting the rest of the modules. The wrong way to do it is to lump everything into one gigantic process. Then, any failure brings everything down.
Most people can't afford to run their applications on several computers
just in case one of them fails. Besides, as you yourself pointed out, NPEs
are often repeatable, so if you re-run the task on another PC, chances are
it will fail, too.

No doubt, Google Chrome is a beautiful piece of software. It doesn't crash
the whole browser when something is null-dereferenced. But the message
I've been writing for half an hour is *lost* anyway when the host process
fails.

The way you suggest writing software is like a doctor who suggests
curing/hiding symptoms rather than the cause of an illness. You shouldn't
rely on exception recovery when you may avoid the whole class of bugs
altogether.
Mar 04 2009
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
Denis Koroskin wrote:
 
 Most people can't afford to run their applications on several computers
 just in case one of them fails.
Maybe not, but everyone can run a multi-process application, particularly now that multi-core computers are the norm.
 No doubt, Google Chrome is a beautiful piece of software. It doesn't 
 crash the whole browser when something is null-dereferenced. But the 
 message I've been writing for half an hour is *lost* anyway when the 
 host process fails.
Not necessarily. The host process might have been logging your actions or performing periodic backups and recover automatically when restarted. If the app isn't designed this way then the programmer clearly didn't think your message was important enough to try and save :-p
 The way you suggest writing software is like a doctor who suggests 
 curing/hiding symptoms rather than the cause of an illness. You 
 shouldn't rely on exception recovery when you may avoid the whole class 
 of bugs altogether.
This is a fair point, but I think the issue is more the cost of this avoidance... and I suppose whether the avoidance really is avoidance or whether it's a placebo.
Mar 04 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 Most people can't afford to run their applications on several computers
 just in case one of them fails.
Then you cannot afford to run *critical* systems on them.
 No doubt, Google Chrome is a beautiful piece of software. It doesn't 
 crash the whole browser when something is null-dereferenced. But the 
 message I've been writing for half an hour is *lost* anyway when the 
 host process fails.
That's annoying, sure, but it is not a disaster, and often editors have an "auto-save" feature. After all, power failures happen, too. They happen around here a lot, as I'm at the end of a long road that is always having problems with the wires.
 The way you suggest writing software is like a doctor who suggests 
 curing/hiding symptoms rather than the cause of an illness. You 
 shouldn't rely on exception recovery when you may avoid the whole class 
 of bugs altogether.
It is not hiding the symptom, it is recognizing the reality that you cannot write perfect software, so to require perfect software *and* depend on it being perfect is a recipe for inevitable disaster. The way to have reliable systems is not to assume perfection in every component, but to be tolerant of failure of *any* component.
Mar 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 It is not hiding the symptom, it is recognizing the reality that you 
 cannot write perfect software, so to require perfect software *and* 
 depend on it being perfect is a recipe for inevitable disaster.
This discussion is getting a bit silly. You argue that no system is
perfect, bugs exist, and you have to put in ways to save the situation
when a problem has occurred. What almost everyone else is saying is that
you are right, but if there are simple ways to avoid a whole class of
bugs, then it may be positive to consider trying such ways out.

At Boeing they write redundant code and use redundant CPUs and all you
want, but they also use very rigorous ways to test things before they can
fail. You have to attack bugs from every side and then be prepared to
fail. And still, sometimes it doesn't suffice.

Bye,
bearophile
Mar 04 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 It is not hiding the symptom, it is recognizing the reality that
 you cannot write perfect software, so to require perfect software
 *and* depend on it being perfect is a recipe for inevitable
 disaster.
This discussion is getting a bit silly. You argue that no system is
perfect, bugs exist, and you have to put in ways to save the situation
when a problem has occurred. What almost everyone else is saying is that
you are right, but if there are simple ways to avoid a whole class of
bugs, then it may be positive to consider trying such ways out. At Boeing
they write redundant code and use redundant CPUs and all you want, but
they also use very rigorous ways to test things before they can fail. You
have to attack bugs from every side and then be prepared to fail. And
still, sometimes it doesn't suffice.
I agree, but I wanted to be sure and stamp out the implicit assumption in the antecedent that failure cannot be tolerated in mission critical software, because that implies it is possible to write perfect software.
Mar 04 2009
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
Denis Koroskin wrote:
 On Wed, 04 Mar 2009 13:55:57 +0300, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 If software is in your flight critical systems, the way one proceeds 
 is to *assume skynet takes it over* and will attempt to do everything 
 possible to crash the airplane.
Assume you got a null-dereference under Linux. How are you going to
recover from it? You can't catch the NullPointerException, so your program
will fail and bring down the whole system *anyway*.
Every process is monitored and backed up by one or more other processes,
so the system is resilient through preemptive failover to back-up systems.
It's also common for monitor processes to run every operation in parallel
through more than one equivalent sub-process and compare results. If a
discrepancy occurs, either a failover is triggered or the "correct" result
is determined by consensus.

In every case though, attempting in-process error recovery in
mission-critical code is a bad idea.
Mar 04 2009
prev sibling next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:golmnp$2i2p$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:gole1d$23v4$1 digitalmars.com...
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for 
 every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is.
Yes...at run-time.
Asserts only fire at run-time, too. This is why I said the asserts are pointless.
 And even then only if you're lucky enough to hit all of the code paths 
 that lead to a null-reference during testing. It might not cause 
 data-corruption, but it does cause a crash.
It's not *remotely* as bad as data corruption. Back in the bad old DOS days, a data corruption problem could, and often *did*, completely scramble your hard disk. Having protection against this in hardware was an enormous improvement. Things were so bad on DOS with this I'd develop code on a different system entirely that had memory protection, then only afterwards port it to DOS as a last step.
 A crash might not typically be as bad as data-corruption, but both are 
 still unnaceptable in professional software. Plus, a crash *can* be 
 nearly as bad, if not equally bad, as data-corruption when it occurs in 
 something mission-critical. This is not a problem to be taken lightly.
I've worked with mission-critical software. You absolutely do NOT rely on it never failing. You design it so that when it fails, and it WILL fail, it does not bring down your critical system. I started my career doing flight critical mechanical designs for Boeing airliners. I had it pounded into me that no matter how perfect you designed the parts, the next step is "assume it fails. Now what?" That is why Boeing airliners have incredible safety records. Assume the parts break. Assume the hydraulics are connected backwards. Assume all the fluid runs out of the hydraulics. Assume it is struck by lightning. Assume it is encased in ice and frozen solid. Assume the cables break. Assume a stray wrench jams the mechanism. Assume it rusts away. Assume nobody lubricates it for years. Assume it was assembled with a bad batch of bolts. Etc. If software is in your flight critical systems, the way one proceeds is to *assume skynet takes it over* and will attempt to do everything possible to crash the airplane.
You're dodging the point. Just because these failsafes might exist does
*NOT* excuse the processes from being lax about their reliability in the
first place. What would Boeing have said if you designed a bolt with a
fatal flaw and excused it with "It's ok, we have failsafes!"?

A failsafe is good, but even with one, it's far better to not even have
the failure in the first place. Like Denis pointed out with the Chrome
example, there's only so much that a failsafe can actually do. So, ok,
even if data-corruption is much worse than a null-reference crash, a
null-reference crash is *still* a major issue.
Mar 04 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote:
 I started my career doing flight critical mechanical designs for Boeing 
 airliners. I had it pounded into me that no matter how perfect you 
 designed the parts, the next step is "assume it fails. Now what?" That is 
 why Boeing airliners have incredible safety records.
Yup. That's what McDonnell didn't do with the DC-10. They were crashing
mysteriously in mid-flight, and nobody survived to tell.

The DC-10 had three entirely separate steering systems: a mechanical one
(as in wires from cockpit to ailerons), a hydraulic one, and an electrical
system.

After a superior pilot(1) actually brought his plane home after disaster
struck, it was found out that the reason for all the crashes was a cargo
door lock, which could be shut carelessly and then, if the ground guy was
strong enough, lock the latch by force, leaving it only partly locked.
Once in the air, the air pressure blew the door open, resulting in the
passenger floor collapsing, and shredding the steering systems.

The "non-Boeing" designers had drawn all three steering systems next to
each other, above the cargo door, below the passenger floor.
 Assume the parts break. Assume the hydraulics are connected backwards. 
 Assume all the fluid runs out of the hydraulics. Assume it is struck by 
 lightning. Assume it is encased in ice and frozen solid. Assume the cables 
 break. Assume a stray wrench jams the mechanism. Assume it rusts away. 
 Assume nobody lubricates it for years. Assume it was assembled with a bad 
 batch of bolts. Etc.
My father was an airline pilot, who had participated in crash
investigations. Ever since I was a kid I got it hammered into my head that
things break, period. And people make mistakes. Double period!

For example, it happens that car tires blow. In the old days, a front tire
blowing usually meant you ended up in the ditch or a tree. Volkswagen
designed the first car not to veer off the road when that happens, the
Golf. The front suspension geometry was such that you didn't even have to
have your hands on the steering wheel when the tire blows. No problem.
(But the funny thing is, the average driver shouldn't know about that, or
he will compensate for it with even sloppier driving.)
 If software is in your flight critical systems, the way one proceeds is to 
 *assume skynet takes it over* and will attempt to do everything possible 
 to crash the airplane.
You're dodging the point. Just because these failsafes might exist does *NOT* excuse the processes from being lax about their reliability in the first place. What would Boeing have said if you designed a bolt with a fatal flaw and excused it with "It's ok, we have failsafes!".
Recently, in Sweden, it became known that supervisors in an ultra-safe
nuclear power plant regularly drank beer on duty. "Why stay alert when
nothing ever happens, and even if it does, this plant will shut itself
down in an orderly manner." Homer Simpson, anyone?

(1) A superior pilot: he learns more than the teachers force him to. He
tries to Understand the mechanics and machinery, as opposed to just using
it by the manual. He constantly conjures up disaster scenarios and figures
out how to deal with them (methods). He also "preloads" such methods in
his brain during the various phases of flight. At sudden danger, it is
much more efficient to have the preloads at hand, rather than having to
start inventing graceful exits when the cockpit is full of hands on the
wheel and the knobs. These practices have saved my car from being totalled
more than once.

While it may look difficult to apply this to software development,
especially in one-man projects, the value of this shouldn't be
underestimated. When made a habit and team practice, it helps
productivity. Design by contract is but one example in this direction.

PS: it turned out that the DC-10 can be flown without flight controls.
Since the three engines make a triangle (as looked at from the front), one
can control the plane enough. The engine controls were not drawn next to
the cargo door.
Mar 05 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 Yup. That's what McDonnell didn't do with the DC-10. They were crashing 
 mysteriously in mid-fligt, and nobody survived to tell.
 
 The DC-10 had three entirely separate steering systems: a mechanical (as 
  in wires from cockpit to ailerons), a hydraulic one, and an electrical 
 system.
 
 After a superior pilot(1) actually brought his plane home after disaster 
 struck, it was found out that the reason to all the crashes was a cargo 
 door lock, which could be shut carelessly and then, if the ground guy 
 was strong enough, lock the latch by force, leaving it only partly 
 locked. Once in the air, the airpressure blew the door open, resulting 
 in the passenger floor collapsing, and shredding the steering systems.
 
 The "non-Boeing" designers had drawn all three steering systems next to 
 each other, above the cargo door, below the passenger floor.
I started at Boeing soon after that incident. Boeing was very proud that
they ran one set of controls under the floor, and the other overhead. Such
a failure mode wouldn't happen to our plane.

This kind of thing is called "coupling", where a single problem could
bring down both supposedly independent systems. It's a hard thing to
avoid. For example, in the recent Hudson crash, the engines are designed
to be thoroughly independent, so one failure won't propagate to the other.
But criminy, who'd have thought birds would be sucked into *both* engines
at the same time?
 My father was an airline pilot, who had participated in crash investigations.
How ironic, my dad was a military pilot who also did crash investigations!
 PS: it turned out that the DC-10 can be flown without flight controls. 
 Since the three engines make a triangle (as looked at from the front), 
 one can control the plane enough. The engine controls were not drawn 
 next to the cargo door.
The Sioux City crash, which was a DC-10, amply demonstrated that it was
possible even with only 2 of the 3 engines working! The tail engine failed
and took out the hydraulics and the flight controls - another coupling
point it shouldn't have had.

There's a case of an L-1011 that lost all flight controls (ice) where the
crew landed the thing by manipulating engine thrust. After the S.C. crash,
controlling the airplane via the engines was added to the autopilot, I
believe, so the pilot could just use the joystick and the autopilot would
translate that to engine throttle changes.

Related to this is the idea of checklists. Checklists dominate flying, and
they have a well-proven efficacy in improving safety. Recent trials in
hospitals with checklists have shown dramatic improvements in results.
Mar 05 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 The "non-Boeing" designers had drawn all three steering systems next 
 to each other, above the cargo door, below the passenger floor.
I started at Boeing soon after that incident. Boeing was very proud that they ran one set of controls under the floor, and the other overhead. Such a failure mode wouldn't happen to our plane.
I bet!
 This kind of thing is called "coupling", where a single problem could 
 bring down both supposedly independent systems. It's a hard thing to 
 avoid. For example, in the recent Hudson crash, the engines are designed 
 to be thoroughly independent, so one failure won't propagate to the 
 other. But criminy, who'd have thought birds would be sucked into *both* 
 engines at the same time?
Yeah. Things happen. Period. (Although, looking at a jet engine, one would
think you can throw a pig into it, with no effect.)

A similar thing happened with a DC-9 in Sweden, a few years ago. Both
engines broke shortly after takeoff because of ice. The crew did a
Hudson-like thing and landed on a field. Pretty well done with low-lying
clouds and darkness. The fuselage broke in three, but nobody died! It was
Christmas.
 My father was an airline pilot, who had participated in crash 
 investigations.
How ironic, my dad was a military pilot who also did crash investigations!
Cool! In the old days both jobs were filled with glamour. Even at 75 he
flew an old Dakota filled with enthusiasts.

I glued together a Revell model airplane, and for extra detail I painted
it the same matte metal as the original. The next year he had folks polish
it to a nickel-plated look. I never bothered to repaint the miniature... I
still remember the id OH-LCH, which was hard to make because the transfer
decals had some other id.

http://www.airliners.net/search/photo.search?regsearch=OH-LCH&distinct_entry=true
 The Sioux City crash, which was a DC-10, amply demonstrated that it was 
 possible even with only 2 of the 3 engines working! The tail engine 
 failed and took out the hydraulics and the flight controls - another 
 coupling point it shouldn't have had.
 
 There's a case of an L-1011 that lost all flight controls (ice) and 
 landed the thing by manipulating engine thrust.
Another TriStar crashed in Florida at night. All three of the crew were so busy wondering why the gear-down lamp didn't light that they crashed into a swamp. 75 survived and more than 100 died. An example of Inferior Pilots. Turned out the light bulb was burnt out.
 Related to this is the idea of checklists. Checklists dominate
 flying, and they have a well-proven efficacy in improving safety.
 Recent trials in hospitals with checklists have shown dramatic
 improvements in results.
Dad used to give a hard time to others who didn't aspire to become
Superior Pilots. Sometimes, during pre-takeoff checks (one reads the list
aloud, ticking done entries, and another does the actual checking), he
used to switch to gibberish when reading an item. If the other guy didn't
notice, he gave him hell for it. It was all about Respect for regulations,
Focus, and due Diligence. Not all young pilots understood that *every
single word* in air regulations is the result of someone already dead.

Checklists are an underutilised resource in software development. There
ought to be checklists "on paper" for pre-release checks for the staff,
for example. Also, since computers are good at mundane and repetitive
tasks, simple shell scripts that go through systems checking things would
be economical.

Contract Programming can be viewed as checklists on the micro level. When
you call a function, it goes through a list of things to check before
actually doing its job.
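A small illustration of that checklist reading, in D's contract syntax
(the function and its checks are invented for the example, and
std.math.isnan is assumed):

import std.math;

// The in contract is the pre-flight checklist, run before the body;
// the out contract is the post-flight check on the result.
double airspeedRatio(double speed, double limit)
in
{
    assert(!isnan(speed));   // item 1: the instrument gave a real reading
    assert(limit > 0);       // item 2: the limit makes sense
}
out (result)
{
    assert(result >= 0);     // post-check: a ratio can't be negative
}
body
{
    return (speed < 0 ? 0.0 : speed) / limit;
}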
Mar 06 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Georg Wrede:

Thank you for your interesting notes on such aviation topics.

 Contract Programming can be viewed as checklists on the micro level. 
 When you call a function, it goes through a list of things to check 
 before actually doing its job.
In D such contracts are usually used during training flights only; in the
real flight such checklists aren't used anymore :-)

Bye,
bearophile
Mar 06 2009
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
bearophile wrote:
 Georg Wrede:
 
 Thank you for your interesting notes on such aviation topics.
 
 Contract Programming can be viewed as checklists on the micro level. 
 When you call a function, it goes through a list of things to check 
 before actually doing its job.
In D such contracts are usually used during training flights only, in the real flight such check lists aren't used anymore :-)
I really wish the -release flag could be renamed to something else. -unsafe or -nocontracts or something.
Mar 06 2009
parent Georg Wrede <georg.wrede iki.fi> writes:
Sean Kelly wrote:
 bearophile wrote:
 Georg Wrede:

 Thank you for your interesting notes on such aviation topics.

 Contract Programming can be viewed as checklists on the micro level. 
 When you call a function, it goes through a list of things to check 
 before actually doing its job.
In D such contracts are usually used during training flights only, in the real flight such check lists aren't used anymore :-)
I really wish the -release flag could be renamed to something else. -unsafe or -nocontracts or something.
Yes. A finer grain would be prudent.
Mar 06 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
bearophile wrote:
 Georg Wrede:
 
 Thank you for your interesting notes on such aviation topics.
 
 Contract Programming can be viewed as checklists on the micro
 level. When you call a function, it goes through a list of things
 to check before actually doing its job.
In D such contracts are usually used during training flights only, in the real flight such check lists aren't used anymore :-)
Excellent point! That's why we have input validation. :-) And a good
implementation validates /all/ data and other input to the program.

It's like not only closing, but locking every door and window before you
leave home. Some people should have a checklist for it (I know I do), too.
Same with input validation. Leave one entrypoint unchecked, and that's
where the storm comes in, guaranteed.
Mar 06 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 Cool! In the old days both jobs were filled with glamour.
So true. When my dad joined the Army Air Corps, one of the big reasons he
wanted to be a pilot, instead of one of the other crew positions, was that
the pilots got the pick of the girls.
 http://www.airliners.net/search/photo.search?regsearch=OH-LCH&distinct_entry=true
My dad flew gooney birds too.
 Dad used to give a hard time to others who didn't aspire to become 
 Superior Pilots. Sometimes, during pre-takeoff checks (one reads the 
 list aloud, ticking done entries, and another does the actual checking), 
 he used to switch to gibberish when reading an item. If the other guy 
 didn't notice, he gave hell for it. It was all about Respect for 
 regulations, Focus, and due Diligence. Not all young pilots understood 
 that *every single word* in air regulations, is the result of someone 
 already dead.
Yup. My dad has endless stories about overconfident pilots skipping something on the checklist and dying. It's becoming increasingly clear that the recent Turkish Airlines crash was because the pilots were asleep at the switch - they didn't even look out the window during landing approach.
 Checklists are an underutilised resource in software development. There 
 ought to be checklists "on paper" for pre-release checks for the staff, 
 for example. Also, since computers are good at mundane and repetitive 
 tasks, simple shell scripts that go through systems checking things 
 would be economical.
 
 Contract Programming can be viewed as checklists on the micro level. 
 When you call a function, it goes through a list of things to check 
 before actually doing its job.
It's interesting that a lot of my experience at a seemingly unrelated discipline at Boeing has found its way into D's language design!
Mar 06 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 
 It's interesting that a lot of my experience at a seemingly unrelated 
 discipline at Boeing has found its way into D's language design!
Of course. What does programming language/compiler development have in
common with aerospace? You need to make _reliable_ systems and subsystems,
and you need to have an understanding of what people actually do (as
opposed to what they say or promise they do).

That's something where you, Walter, beat academia, institutions,
corporations, and crazy individualists at this game. Hands down.
Mar 06 2009
parent "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gos5lc$cf9$1 digitalmars.com...
 Walter Bright wrote:
 It's interesting that a lot of my experience at a seemingly unrelated 
 discipline at Boeing has found its way into D's language design!
Of course. What does programming language/compiler development have in
common with aerospace? You need to make _reliable_ systems and subsystems,
and you need to have an understanding of what people actually do (as
opposed to what they say or promise they do). That's something where you,
Walter, beat academia, institutions, corporations, and crazy
individualists at this game. Hands down.
In other words, someone needed to put the "engineering" back into
"software engineering". And who better to do that than an engineer?

A few months ago I wrote up a little thing that was along these lines
(fair warning: dynamic-programming fans might not like parts of it,
although it's not overtly insulting or profane or anything. Also fair
warning: I'm not much of a writer ;) (and more fair warning, if you
venture to the other posts, I do a *lot* of... let's just say "venting" on
that site)):

http://blog.dev-scene.com/abscissa/2008/09/16/putting-the-engineering-back-into-software-engineering/
Mar 06 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Georg Wrede wrote:
 Checklists are an underutilised resource in software development. There 
 ought to be checklists "on paper" for pre-release checks for the staff, 
 for example. Also, since computers are good at mundane and repetitive 
 tasks, simple shell scripts that go through systems checking things 
 would be economical.
A good example of these automated checklists is the ./configure script that comes with most applications that are distributed in source form.
Mar 06 2009
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Georg Wrede wrote:
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote:
 I started my career doing flight critical mechanical designs for 
 Boeing airliners. I had it pounded into me that no matter how perfect 
 you designed the parts, the next step is "assume it fails. Now what?" 
 That is why Boeing airliners have incredible safety records.
Yup. That's what McDonnell didn't do with the DC-10. They were crashing mysteriously in mid-flight, and nobody survived to tell. The DC-10 had three entirely separate steering systems: a mechanical one (as in wires from cockpit to ailerons), a hydraulic one, and an electrical system. After a superior pilot(1) actually brought his plane home after disaster struck, it was found out that the reason for all the crashes was a cargo door lock, which could be shut carelessly and then, if the ground guy was strong enough, the latch could be forced closed, leaving it only partly locked. Once in the air, the air pressure blew the door open, resulting in the passenger floor collapsing and shredding the steering systems.
At Newark Airport in New Jersey, the Air Control Tower's network is linked to radar and such via redundant cables, as expected. However, these cables are run right next to one another, eliminating any benefit that the redundancy might provide. Funny how things change between the design requirements and implementation.
Mar 06 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Georg Wrede wrote:
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote:
 I started my career doing flight critical mechanical designs for 
 Boeing airliners. I had it pounded into me that no matter how 
 perfect you designed the parts, the next step is "assume it fails. 
 Now what?" That is why Boeing airliners have incredible safety records.
Yup. That's what McDonnell didn't do with the DC-10. They were crashing mysteriously in mid-flight, and nobody survived to tell. The DC-10 had three entirely separate steering systems: a mechanical one (as in wires from cockpit to ailerons), a hydraulic one, and an electrical system. After a superior pilot(1) actually brought his plane home after disaster struck, it was found out that the reason for all the crashes was a cargo door lock, which could be shut carelessly and then, if the ground guy was strong enough, the latch could be forced closed, leaving it only partly locked. Once in the air, the air pressure blew the door open, resulting in the passenger floor collapsing and shredding the steering systems.
At Newark Airport in New Jersey, the Air Control Tower's network is linked to radar and such via redundant cables, as expected. However, these cables are run right next to one another, eliminating any benefit that the redundancy might provide. Funny how things change between the design requirements and implementation.
That's why the bad guys almost succeeded in that Die Hard movie set at an airport! Andrei
Mar 06 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Sean Kelly wrote:
 At Newark Airport in New Jersey, the Air Control Tower's network is 
 linked to radar and such via redundant cables, as expected.  However, 
 these cables are run right next to one another, eliminating any benefit 
 that the redundancy might provide.  Funny how things change between the 
 design requirements and implementation.
We had a major phone outage in Finland about ten years ago. Everything except the capital metro area. All national operators, both land lines and cellular, even the military. One single fiber bunch ten miles north of Helsinki, next to a railroad track. A train car derailed, broke a pylon, and the cables were severed in the ground. I could never have imagined that cellular tower cables were buried in the same trench as land line trunk cables. Such a thought would never even have occurred to me. One of the biggest points of having a cell phone at the time was to have a separate system.
Mar 06 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Things were so bad on DOS with this I'd develop code on a different 
 system entirely that had memory protection, then only afterwards port it 
 to DOS as a last step.
Oh, those days... Back before we had hard disks, computers had two floppy drives; you had the operating system and a copy of the current application (word processor, spreadsheet, database, compiler, etc.) in one disk drive, *physically write protected*, and your data in the other. http://www.classiccmp.org/dunfield/kaypro/h/k2frontl.jpg The need to actually, physically, write protect the programs was exactly that. Usually when a program crashed, the havoc was devastating. Instead of getting a GPF or segfault (they didn't exist because there was no hardware memory protection), the program ran around "randomly" in the memory space. It was like a movie where the robot goes insane, yelling "grbl grbl grbl, destroy, destroy!" and starts throwing people, furniture and machines into the walls. Too often this resulted in unwanted writes onto the data disk. Bits were spewing all over. http://www.classiccmp.org/dunfield/kaypro/index.htm I've still got this computer, in mint condition! The floppies were 192k; compared to a 4.7GB (single-sided single-layer) DVD, you could fill 24000 floppies from the DVD. (With my house keys I've got a 4GB memory stick, too.) Those floppies would literally *fill* a normal size bedroom. They were expensive, too. I remember paying more than a dollar a piece. It was usual for shops to sell them one by one! The computer was good enough to run book keeping, budgeting, correspondence, customer database, personalised snail-mail spam, all that I needed for my 100+ staff company of the time. And of course recreational programming. On another of my computers I had to physically install (as in drill, screwdriver, soldering iron) a reset button. This let me create programs that inspected the computer state after a crash. Kind of like what Thompson and Ritchie (the latter of C fame) wrote for UNIX. They made UNIX dump the memory and processor state at crash into a hidden file (for some reason in the current directory). One of the first things I wrote when I became a UNIX operator was a cron script that regularly harvested and deleted them.
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Mar 05 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Alas, they are! Today kids won't do that with a quad-core... Incidentally, I just realized why I've had such a problem with hello.d at ~200k. It doesn't fit on a Kaypro floppy!!! In the old days one had the OS and dozens of programs on one. I need to start thinking outside the floppy ^h^h^h^h^h box. (Oh, the last D2 makes a much smaller hello, thanks!)
Mar 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Fri, 06 Mar 2009 13:20:22 +0300, Georg Wrede <georg.wrede iki.fi> wrote:

 Walter Bright wrote:
 Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Alas, they are! Today kids won't do that with a quad-core... Incidentally, I just realized why I've had such a problem with hello.d at ~200k. It doesn't fit on a Kaypro floppy!!! In the old days one had the OS and dozens of programs on one. I need to start thinking outside the floppy ^h^h^h^h^h box. (Oh, the last D2 makes a much smaller hello, thanks!)
What are you talking about? Is Kaypro some kind of oldish Flash USB stick? :)
Mar 06 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Denis Koroskin escribió:
 On Fri, 06 Mar 2009 13:20:22 +0300, Georg Wrede <georg.wrede iki.fi> wrote:
 
 Walter Bright wrote:
 Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Alas, they are! Today kids won't do that with a quad-core... Incidentally, I just realized why I've had such a problem with hello.d at ~200k. It doesn't fit on a Kaypro floppy!!! In the old days one had the OS and dozens of programs on one. I need to start thinking outside the floppy ^h^h^h^h^h box. (Oh, the last D2 makes a much smaller hello, thanks!)
What are you talking about? Is Kaypro some kind of oldish Flash USB stick? :)
What's a floppy? :-P
Mar 06 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Ary Borenszweig wrote:
 Denis Koroskin escribió:
 On Fri, 06 Mar 2009 13:20:22 +0300, Georg Wrede wrote:
 Walter Bright wrote:
 Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Alas, they are! Today kids won't do that with a quad-core... Incidentally, I just realized why I've had such a problem with hello.d at ~200k. It doesn't fit on a Kaypro floppy!!! In the old days one had the OS and dozens of programs on one. I need to start thinking outside the floppy ^h^h^h^h^h box. (Oh, the last D2 makes a much smaller hello, thanks!)
What are you talking about? Is Kaypro some kind of oldish Flash USB stick? :)
What's a floppy? :-P
Heh, at the time, every once in a while I kept thinking what life would be like after the awaited year 2000. I imagined a time when the computer I'm sitting at would be considered antique, and floppies and many concepts totally foreign to that day's youth. Mobile phones, satellite dishes on rooftops, the "entire knowledge of humankind" literally at your fingertips, picture telephony (skype), genuine laptop size computers for everyone, on-demand TV (hulu.com), satnav for all. And naturally I thought that with all this, people would stroll the streets with an eternal smile of techno-bliss and love. I did not imagine computer viruses, phishing, spam, global recession, 9/11, oil prices, the demise of the Soviet Union (not that that's bad like everything else here), global warming, pandemics (bird flu, HIV), a tsunami killing more people than the Hiroshima nuke. And all of a sudden, I'm here. Time flies all too quickly. I wonder what the world will be like 25 years from now. In the old days we used to think about this a lot more than today, because the year 2000 was such a big deal. ("A historical change of millennium, that only one in 50 generations get to even see!") At my company we even had a data entry machine that used 8 inch floppies (and not the 5 1/4 ones). Bet not many of you guys have even touched one. :-)
Mar 06 2009
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Georg Wrede wrote:
 
 At my company we even had a data entry machine that used 8 inch floppies 
 (and not the 5 1/4 ones). Bet not many of you guys have even touched 
 one. :-)
We had a couple of Wang Word Processors (I think) in our school that used those, but I never got to touch one. Only the PETs that used an external audio cassette drive for data storage. Probably took a full 60 seconds to load a program from one of those things... not too shabby for a whole 4k.
Mar 06 2009
parent reply BCS <ao pathlink.com> writes:
Reply to Sean,

 We had a couple of Wang Word Processors (I think) in our school that
 used those, but I never got to touch one.  Only the PETs that used an
 external audio cassette drive for data storage.  Probably took a full
 60 seconds to load a program from one of those things... not too
 shabby for a whole 4k.
 
Computers now have literally orders (many orders) of magnitude more space and power, and the load times for real programs haven't improved by even a single order of magnitude. :b
Mar 06 2009
next sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
BCS wrote:
 Reply to Sean,
 
 We had a couple of Wang Word Processors (I think) in our school that
 used those, but I never got to touch one.  Only the PETs that used an
 external audio cassette drive for data storage.  Probably took a full
 60 seconds to load a program from one of those things... not too
 shabby for a whole 4k.
Computers now have literally orders (many orders) of magnitude more space and power, and the load times for real programs haven't improved by even a single order of magnitude. :b
You're joking, right? They've got worse, by orders of magnitude. A 5 second boot with the Kaypro, versus (don't even know how long) for Vista on an average computer. Or firing up OpenOffice. (I've got a quad-core screamer now that I got fed up with using only old hardware, and OO still is slow to start.) My VIC-20 booted up in less than 2 seconds. ---- Ok, they're trying. Windows-7 boots up a lot faster than Vista. And Fedora-10 much faster than my Ubuntu or an older Fedora. And Fedora has promised the next release to "really boot fast". But still... With Windows 3.11 on a crappy 386 with a slow hard disk, Excel and Word were on almost as soon as you double clicked the icon. If one wants truly Blazing Speed, get Wine and install W95 on it with Office-95. And for compilation and programming, Borland Pascal from the same era.
Mar 06 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gos7ma$h4b$1 digitalmars.com...
 BCS wrote:
 Reply to Sean,

 We had a couple of Wang Word Processors (I think) in our school that
 used those, but I never got to touch one.  Only the PETs that used an
 external audio cassette drive for data storage.  Probably took a full
 60 seconds to load a program from one of those things... not too
 shabby for a whole 4k.
Computers now have literally orders (many orders) of magnitude more space and power, and the load times for real programs haven't improved by even a single order of magnitude. :b
You're joking, right? They've got worse, by orders of magnitude. A 5 second boot with the Kaypro, versus (don't even know how long) for Vista on an average computer. Or firing up OpenOffice. (I've got a quad-core screamer now that I got fed up with using only old hardware, and OO still is slow to start.) My VIC-20 booted up in less than 2 seconds. ---- Ok, they're trying. Windows-7 boots up a lot faster than Vista. And Fedora-10 much faster than my Ubuntu or an older Fedora. And Fedora has promised the next release to "really boot fast". But still... With Windows 3.11 on a crappy 386 with a slow hard disk, Excel and Word were on almost as soon as you double clicked the icon. If one wants truly Blazing Speed, get Wine and install W95 on it with Office-95. And for compilation and programming, Borland Pascal from the same era.
And then there's shut-down speeds. It amazes me how long it takes to flush some IO, kill a few connections, and do god-knows-what with a bunch of RAM that's just going to get cleared anyway.
Mar 06 2009
parent BCS <ao pathlink.com> writes:
Reply to Nick,


 And then there's shut-down speeds. It amazes me how long to takes to
 flush some IO, kill a few connections, and do god-knows-what with a
 bunch of ram that's just going to get cleared anyway.
 
http://dslab.epfl.ch/pubs/crashonly/crashonly.pdf
Mar 06 2009
prev sibling parent BCS <ao pathlink.com> writes:
Reply to Georg,

 BCS wrote:
 
 Reply to Sean,
 
 We had a couple of Wang Word Processors (I think) in our school that
 used those, but I never got to touch one.  Only the PETs that used
 an external audio cassette drive for data storage.  Probably took a
 full 60 seconds to load a program from one of those things... not
 too shabby for a whole 4k.
60 seconds for a program load vs 10-30 now for VS or Eclipse
 You're joking, right?
 
 They've got worse, by orders of magnitude. A 5 second boot with the
 Kaypro, versus (don't even know how long) for Vista on an average
 computer.
Mar 06 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 Computers now have literally orders (many orders) of magnitude more
 space and power, and the load times for real programs haven't improved
 by even a single order of magnitude. :b
And they take even longer to boot up!
Mar 06 2009
parent BCS <ao pathlink.com> writes:
Reply to Walter,

 BCS wrote:
 
 Computers now have literally orders (many orders) of magnitude more
 space and power, and the load times for real programs haven't improved
 by even a single order of magnitude. :b
 
And they take even longer to boot up!
I got to play around with a system a few years ago (as in like 5) that could reboot in almost zero time. It could actually be rebooted before you could get your finger off the reset button. OTOH it was a ROM based system that had been upgraded to a whole 4kB of RAM. (It was cutting edge when I was born and dead tech by the time I was eating solid foods.)
Mar 06 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"BCS" <ao pathlink.com> wrote in message 
news:78ccfa2d3b9ad8cb6c7444cdbb0a news.digitalmars.com...
 Reply to Sean,

 We had a couple of Wang Word Processors (I think) in our school that
 used those, but I never got to touch one.  Only the PETs that used an
 external audio cassette drive for data storage.  Probably took a full
 60 seconds to load a program from one of those things... not too
 shabby for a whole 4k.
Computers now have literally orders (many orders) of magnitude more space and power, and the load times for real programs haven't improved by even a single order of magnitude. :b
With video games it's actually been getting steadily worse. Atari VCS games could go from power-on to gameplay in mere milliseconds (at least, if you were quick enough with the "reset" button). On the NES, many games were just as fast, but some third party ones added a few seconds of unskippable "legalese" upon boot (although it wasn't really a "technical" restriction at that point). But then disc-based systems came along and granted enormous storage, but at the price of noticeably worse bootup and load times (ten or so seconds, if not more). Then many XBox1 games started caching stuff to the HDD before any actual gameplay, but that can sometimes take upwards of a few minutes (like in Splinter Cell 3 and 4). Now with the PS3, there are certain games that require an upfront installation of nearly half an hour.
Mar 06 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 At my company we even had a data entry machine that used 8 inch floppies 
 (and not the 5 1/4 ones). Bet not many of you guys have even touched 
 one. :-)
I still have a box of them, for my long-gone PDP-11. I saw "2001" when it first came out. I was in total awe.
Mar 06 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 At my company we even had a data entry machine that used 8 inch 
 floppies (and not the 5 1/4 ones). Bet not many of you guys have even 
 touched one. :-)
I still have a box of them, for my long-gone PDP-11. I saw "2001" when it first came out. I was in total awe.
Me too. I was under age, but Mom talked me into the theatre with her. The coolest shot was of this guy walking in the round corridor, and then turning upside down to go in the next corridor. Of course it was obvious (to a technically minded geek, at least) how they shot it, but I really thought they were cool for having got the idea. The ending was a bit too abstract for me at the time. But the opening scene with the apes and the monolith, it was actually religious. I never got the technophobia that I believe the flick was trying to instill. Heh, I can stay up alone at night with only computers around. I've got a box of original unused punch cards. Maybe I'll start selling them. $5 a piece, anyone? Or maybe $30 in 5 years. Better not sell them yet.
Mar 06 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gos8d3$imj$1 digitalmars.com...
 Walter Bright wrote:
 Georg Wrede wrote:
 At my company we even had a data entry machine that used 8 inch floppies 
 (and not the 5 1/4 ones). Bet not many of you guys have even touched 
 one. :-)
I still have a box of them, for my long-gone PDP-11. I saw "2001" when it first came out. I was in total awe.
Me too. I was under age, but Mom talked me into the theatre with her. The coolest shot was of this guy walking in the round corridor, and then turning upside down to go in the next corridor. Of course it was obvious (to a technically minded geek, at least) how they shot it, but I really thought they were cool for having got the idea. The ending was a bit too abstract for me at the time. But the opening scene with the apes and the monolith, it was actually religious. I never got the technophobia that I believe the flick was trying to instill. Heh, I can stay up alone at night with only computers around.
Sometimes, when I'm in a particularly pessimistic mood, I get the feeling that regardless of its intent, 2001's biggest contribution to society was the creation of what was to become one of science fiction's biggest plot cliches: People being put into danger by their own creations (2001, Matrix, Terminator, Battlestar Galactica...and probably a whole ton of others I can't think of right now.)
Mar 06 2009
next sibling parent BCS <ao pathlink.com> writes:
Reply to Nick,

 Sometimes, when I'm in a particularly pessimistic mood, I get the
 feeling that regardless of its intent, 2001's biggest contribution to
 society was the creation of what was to become one of science fiction's
 biggest plot cliches: People being put into danger by their own
 creations (2001, Matrix, Terminator, Battlestar Galactica...and
 probably a whole ton of others I can't think of right now.)
 
Sorry, that's not 2001: Frankenstein?
Mar 06 2009
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Fri, 6 Mar 2009 17:42:23 -0500, Nick Sabalausky wrote:

 Sometimes, when I'm in a particularly pessimistic mood, I get the feeling
 that regardless of its intent, 2001's biggest contribution to society was
 the creation of what was to become one of science fiction's biggest plot
 cliches: People being put into danger by their own creations (2001, Matrix,
 Terminator, Battlestar Galactica...and probably a whole ton of others I
 can't think of right now.)
Predated by "Prometheus" from Greek mythology and by Mary Shelly's "Frankenstein" -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Mar 06 2009
parent "Nick Sabalausky" <a a.a> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:p3z1y7v9zg88.z3yecw0xphen.dlg 40tude.net...
 On Fri, 6 Mar 2009 17:42:23 -0500, Nick Sabalausky wrote:

 Sometimes, when I'm in a particularly pessimistic mood, I get the feeling
 that regardless of its intent, 2001's biggest contribution to society was
 the creation of what was to become one of science fiction's biggest plot
 cliches: People being put into danger by their own creations (2001,
 Matrix, Terminator, Battlestar Galactica...and probably a whole ton of
 others I can't think of right now.)
Predated by "Prometheus" from Greek mythology and by Mary Shelly's "Frankenstein"
I don't remember Prometheus (been a very long time since I last studied Greek mythology), but I've always seen Frankenstein much more as a tale about prejudice, fear of the unfamiliar, fear-based mob mentality, etc., rather than technology-gone-wrong. Edward Scissorhands captures the theme of Frankenstein much more closely than the other movies/shows do. But regardless, I wouldn't be surprised if "people endangered by technology" existed in plenty of fiction long before 2001. Perhaps in my earlier post "popularized" would have been more accurate than "created".
Mar 06 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:goqtd8$sdl$1 digitalmars.com...
 Walter Bright wrote:
 Georg Wrede wrote:
 I've still got this computer, in mint condition!
I designed, soldered, and wire-wrapped together my first computer around a 6800 microprocessor, wrote the rom software, etc. Those days are long gone.
Alas, they are! Today kids won't do that with a quad-core... Incidentally, I just realized why I've had such a problem with hello.d at ~200k. It doesn't fit on a Kaypro floppy!!! In the old days one had the OS and dozens of programs on one. I need to start thinking outside the floppy ^h^h^h^h^h box. (Oh, the last D2 makes a much smaller hello, thanks!)
Don't mean to spam or anything, but the guys over at xgamestation.com have been trying to keep that alive with hobbyist kits. Disclaimer/Bragging: I wrote some of the demo software and drivers that comes with their Hydra (which has 8 cores :) ).
Mar 06 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 Don't mean to spam or anything, but the guys over at xgamestation.com have 
 been trying to keep that alive with hobbyist kits. Disclaimer/Bragging: I 
 wrote some of the demo software and drivers that comes with their Hydra 
 (which has 8 cores :) ). 
This is the kind of thing you should be bragging about!
Mar 06 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 Don't mean to spam or anything, but the guys over at xgamestation.com have 
 been trying to keep that alive with hobbyist kits. Disclaimer/Bragging: I 
 wrote some of the demo software and drivers that comes with their Hydra 
 (which has 8 cores :) ). 
Gee, I bookmarked the site on sight! That stuff has got a lot better since I did a survey 2 years ago. Hmm. Kid's birthday is coming up... Too bad mine is too, and his mother will never believe it's for the kid. I could've used that when I was 12. Damn. ----- And now the inevitable $1M question: when can we get D on it????
Mar 06 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gosa0h$mdd$1 digitalmars.com...
 Nick Sabalausky wrote:
 Don't mean to spam or anything, but the guys over at xgamestation.com 
 have been trying to keep that alive with hobbyist kits. 
 Disclaimer/Bragging: I wrote some of the demo software and drivers that 
 comes with their Hydra (which has 8 cores :) ).
Gee, I bookmarked the site on sight! That stuff has got a lot better since I did a survey 2 years ago. Hmm. Kid's birthday is coming up... Too bad mine is too, and his mother will never believe it's for the kid. I could've used that when I was 12. Damn. ----- And now the inevitable $1M question: when can we get D on it????
As soon as we get a fully-working C-backend for one of the D compilers ;) Although, last time I used the Hydra/Propeller there wasn't actually a C compiler for it, but that was a while ago and I *think* I've heard that there's a C compiler for it now.
Mar 06 2009
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Nick Sabalausky wrote:
 "Georg Wrede" <georg.wrede iki.fi> wrote in message 
 news:gosa0h$mdd$1 digitalmars.com...
 And now the inevitable $1M question: when can we get D on it????
As soon as we get a fully-working C-backend for one of the D compilers ;)
LLVM has a C backend, so as soon as LDC is mature enough...
Mar 06 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Frits van Bommel" <fvbommel REMwOVExCAPSs.nl> wrote in message 
news:got4c1$2dhi$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Georg Wrede" <georg.wrede iki.fi> wrote in message 
 news:gosa0h$mdd$1 digitalmars.com...
 And now the inevitable $1M question: when can we get D on it????
As soon as we get a fully-working C-backend for one of the D compilers ;)
LLVM has a C backend, so as soon as LDC is mature enough...
Now that I think about it more, it would probably require more than just a D->C compiler. Storage on the Hydra's microcontroller (ie, Parallax's Propeller chip) is very tight (512 32-bit words per core (aka "cog"), plus a few tens of k of shared RAM that can't be used to store code unless you reserve some of the non-shared memory to page code in and out). Plus, of course, there are no streams, filesystem, or OS, so because of that and the low memory, large chunks of phobos/tango would probably need to be removed, possibly including the GC. If there is indeed a C compiler for it, I'm not sure if or how it handles code paging or other such low-mem concerns. If anyone's really interested, the forums over there would be a good place to ask.
Mar 07 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 "Frits van Bommel" <fvbommel REMwOVExCAPSs.nl> wrote in message 
 news:got4c1$2dhi$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Georg Wrede" <georg.wrede iki.fi> wrote in message 
 news:gosa0h$mdd$1 digitalmars.com...
 And now the inevitable $1M question: when can we get D on it????
As soon as we get a fully-working C-backend for one of the D compilers ;)
LLVM has a C backend, so as soon as LDC is mature enough...
Now that I think about it more, it would probably require more than just a D->C compiler. Storage on the Hydra's microcontroller (ie, Parallax's Propeller chip) is very tight (512 32-bit words per core (aka "cog"), plus a few tens of k of shared RAM that can't be used to store code unless you reserve some of the non-shared memory to page code in and out). Plus, of course, there are no streams, filesystem, or OS, so because of that and the low memory, large chunks of phobos/tango would probably need to be removed, possibly including the GC. If there is indeed a C compiler for it, I'm not sure if or how it handles code paging or other such low-mem concerns.
http://en.wikipedia.org/wiki/Parallax_Propeller "There is also a C compiler available from ImageCraft, the ICCV7 for Propeller. It supports the 32K Large Memory Model, to bypass the 2K limitation per cog, and is typically 5 to 10 times as fast as standard SPIN code."
Mar 07 2009
parent Georg Wrede <georg.wrede iki.fi> writes:
Georg Wrede wrote:
 Nick Sabalausky wrote:
 "Frits van Bommel" <fvbommel REMwOVExCAPSs.nl> wrote in message 
 news:got4c1$2dhi$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Georg Wrede" <georg.wrede iki.fi> wrote in message 
 news:gosa0h$mdd$1 digitalmars.com...
 And now the inevitable $1M question: when can we get D on it????
As soon as we get a fully-working C-backend for one of the D compilers ;)
LLVM has a C backend, so as soon as LDC is mature enough...
Now that I think about it more, it would probably require more than just a D->C compiler. Storage on the Hydra's microcontroller (ie, Parallax's Propeller chip) is very tight (512 32-bit words per core (aka "cog"), plus a few tens of k of shared RAM that can't be used to store code unless you reserve some of the non-shared memory to page code in and out). Plus, of course, there are no streams, filesystem, or OS, so because of that and the low memory, large chunks of phobos/tango would probably need to be removed, possibly including the GC. If there is indeed a C compiler for it, I'm not sure if or how it handles code paging or other such low-mem concerns.
http://en.wikipedia.org/wiki/Parallax_Propeller "There is also a C compiler available from ImageCraft, the ICCV7 for Propeller. It supports the 32K Large Memory Model, to bypass the 2K limitation per cog, and is typically 5 to 10 times as fast as standard SPIN code."
So, yes, let's wait for LDC. They also sell a Propeller QuadRover Robot. (Weight 89 lbs, 40.4 Kilos, gasoline powered. Top speed 12mph.) http://www.youtube.com/watch?v=j1GK00oe170
Mar 07 2009
prev sibling parent Sean Kelly <sean invisibleduck.org> writes:
Nick Sabalausky wrote:
 Plus, a crash *can* be nearly as bad, if not equally 
 bad, as data-corruption when it occurs in something mission-critical. This 
 is not a problem to be taken lightly. 
Mission-critical software is typically designed to terminate the instant an error is detected so that a back-up system can take over. In fact, Erlang (designed for telephone switches) is designed specifically with this in mind--any exception terminates the app.
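A minimal sketch of that fail-fast style in D (the request handler and its invariant are invented for illustration):

import std.stdio;
import core.stdc.stdlib : abort;

// Fail fast: on any detected inconsistency, terminate immediately and
// let an external supervisor or backup system take over, rather than
// limping along with possibly corrupt state.
void handleRequest(int sequence)
{
    if (sequence < 0)
    {
        stderr.writeln("invariant violated; terminating so the backup takes over");
        abort();
    }
    writeln("handled request ", sequence);
}

void main()
{
    handleRequest(1);
    handleRequest(2);
}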
Mar 04 2009
prev sibling next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Walter Bright wrote:
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is.
You're missing the point. It's not the moment at which the dereference happens that's the problem. As you point out, we have a hardware trap for that. It's when a null *gets assigned* to a variable that isn't ever supposed to *be* null that's the problem. That's when the "asserts up the posterior" issue comes in, because it's the only mechanism we have for defending against this. I need to know when that null gets stored, not when my code trips over it and explodes later down the line. Non-nullable types (or proxy struct or whatever) means the code won't even compile if there's an untested path. And if we do try to assign a null, we get an exception at THAT moment, so we can trace back to find out where it came from. -- Daniel
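A minimal sketch of such a proxy struct (this NonNull is hypothetical, not an existing library type); the point is that the assert fires at the store, which is exactly where the trace should end:

// Hypothetical NonNull proxy: storing a null fails at the assignment
// site instead of at some later dereference.
struct NonNull(T)
{
    private T payload;

    this(T value)
    {
        assert(value !is null, "null stored HERE, not downstream");
        payload = value;
    }

    void opAssign(T value)
    {
        assert(value !is null, "null stored HERE, not downstream");
        payload = value;
    }

    T get() { return payload; }
    alias get this; // forwards member access to the wrapped reference
}

class Foo { void member() {} }

void main()
{
    auto f = NonNull!(Foo)(new Foo);
    f.member();   // fine
    // f = null;  // would assert right here, at the store
    // Caveat: NonNull.init still holds a null payload; a complete
    // design would also disable default construction.
}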
Mar 04 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
 Non-nullable types (or proxy struct or whatever) means the code won't
 even compile if there's an untested path.  And if we do try to assign a
 null, we get an exception at THAT moment, so we can trace back to find
 out where it came from.
Yes, I understand that detecting bugs at compile time is better. But there's a downside to this. Every reference type will have two subtypes - a nullable and a non-nullable. We already have const, immutable and shared. Throwing another attribute into the mix is not insignificant. Each one exponentially increases the combinations of types, their conversions from one to the other, overloading rules, etc. Andrei suggests making a library type work for this rather than a language attribute, but it's still an extra thing that will have to be specified everywhere it's used. There are a lot of optional attributes that can be applied to reference types. At what point is the additional complexity not worth the gain?
Mar 04 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 14:04:33 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
 Non-nullable types (or proxy struct or whatever) means the code won't
 even compile if there's an untested path.  And if we do try to assign a
 null, we get an exception at THAT moment, so we can trace back to find
 out where it came from.
Yes, I understand that detecting bugs at compile time is better. But there's a downside to this. Every reference type will have two subtypes - a nullable and a non-nullable. We already have const, immutable and shared. Throwing another attribute into the mix is not insignificant. Each one exponentially increases the combinations of types, their conversions from one to the other, overloading rules, etc. Andrei suggests making a library type work for this rather than a language attribute, but it's still an extra thing that will have to be specified everywhere it's used. There are a lot of optional attributes that can be applied to reference types. At what point is the additional complexity not worth the gain?
Nullable types may and should be implemented as a library type, with a little syntax sugar. Just the same as the Object class. Besides, I believe introducing non-nullables will make the T.init feature obsolete, because you will have to initialize all the values explicitly, thus solving one more problem - use of uninitialized values. This time - correctly. And please don't tell me that T.init is a valid initializer and using it is fine and the variable is not considered uninitialized anymore. It's a bad workaround that often doesn't work. So essentially this is a feature swap: you add one and remove another. Another question is the feature's usability. It might turn out to be not very handy to have all variables initialized, but we can't know until we really start using it, right?
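For reference, D's T.init defaults are deliberately conspicuous values rather than random garbage; a quick self-contained check (these asserts pass, since the language defines the values):

import std.stdio;

void main()
{
    int i;     // int.init == 0
    char c;    // char.init == 0xFF, an invalid UTF-8 code unit
    double d;  // double.init is NaN
    Object o;  // object references default to null

    assert(i == 0);
    assert(c == 0xFF);
    assert(d != d);    // NaN is the only value unequal to itself
    assert(o is null);
    writeln("T.init defaults verified");
}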
Mar 04 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Denis Koroskin wrote:
 Nullable types may and should be implemented as a library type, with a
 little syntax sugar.
Too much syntactic sugar causes cancer of the semicolon. This is exactly the kind of thing that goes forever and never stops. Changing the default to non-null is one good thing. Adding sugar in the mix just spoils everything. Andrei
Mar 04 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 Another question is the feature's usability. It might turn out to be not 
 very handy to have all variables initialized, but we can't know until we 
 really start using it, right?
I've used compilers that required explicit initializers for all variables. Sounds great in theory, in practice it *causes* bugs. What happens is the compiler dings the user with "initializer required." The user wants to get on with things, so he just throws in an initializer, any initializer, to get the compiler to shut up. The maintenance programmer then finds himself looking at:

int x = some_random_value;

and wondering why that value is there, since it is never used and makes no sense, because the rest of the logic assigns x a proper value before it gets used anyway. So he wastes time figuring out why that dead initializer is used. Then the next maintenance programmer, with a poor understanding of the code, changes the logic so now x's some_random_value actually gets used, and a bug happens. In general, it's a bad idea to force the user to throw in dead code to shut the compiler up. A particularly illustrative case of this is Java's exception specification, which they loosened up after it became clear that this was not a good idea.
Mar 04 2009
next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 21:28:17 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Denis Koroskin wrote:
 Another question is the feature's usability. It might turn out to be  
 not very handy to have all variables initialized, but we can't know  
 until we really start using it, right?
I've used compilers that required explicit initializers for all variables. Sounds great in theory, in practice it *causes* bugs. What happens is the compiler dings the user with "initializer required." The user wants to get on with things, so he just throws in an initializer, any initializer, to get the compiler to shut up. The maintenance programmer then finds himself looking at: int x = some_random_value; and wondering why that value is there, since it is never used and makes no sense, because the rest of the logic assigns x a proper value before it gets used anyway. So he wastes time figuring out why that dead initializer is used. Then the next maintenance programmer, with a poor understanding of the code, changes the logic so now x's some_random_value actually gets used, and a bug happens. In general, it's a bad idea to force the user to throw in dead code to shut the compiler up. A particularly illustrative case of this is Java's exception specification, which they loosened up after it became clear that this was not a good idea.
In fact, that's what happens right now. It's just T.init instead of some_random_value, which is, in fact, "some random value" that is different for different T (0 for ints, 0xFF for char, NaN for floats etc). C#, for example, allows local variables to remain uninitialized as long as they are not accessed:

int main()
{
    int i;       // not an error
    //return i;  // error: use of uninitialized variable
    i = 0;
    return i;    // fine
}

This effectively eliminates the need for stupid dummy initializers.
Mar 04 2009
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright wrote:
 Denis Koroskin wrote:
 Another question is the feature's usability. It might turn out to be 
 not very handy to have all variables initialized, but we can't know 
 until we really start using it, right?
I've used compilers that required explicit initializers for all variables. Sounds great in theory, in practice it *causes* bugs. What happens is the compiler dings the user with "initializer required." The user wants to get on with things, so he just throws in an initializer, any initializer, to get the compiler to shut up. The maintenance programmer then finds himself looking at: int x = some_random_value; and wondering why that value is there, since it is never used and makes no sense, because the rest of the logic assigns x a proper value before it gets used anyway. So he wastes time figuring out why that dead initializer is used. Then the next maintenance programmer, with a poor understanding of the code, changes the logic so now x's some_random_value actually gets used, and a bug happens. In general, it's a bad idea to force the user to throw in dead code to shut the compiler up. A particularly illustrative case of this is Java's exception specification, which they loosened up after it became clear that this was not a good idea.
It's not like that. They don't require you to initialize a variable in its initializer, but just before you read it for the first time. That's very different. If you force the user to do that, you are never wrong: if the user didn't initialize the variable before reading its value, then it's an error; if the variable is never read, then it's dead code and the compiler can point that out. What's wrong with that? You always assign a value to a variable *a first time*! See also Denis Koroskin's answer.
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Mar 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Doing it conservatively still is 100% reliable but has the user occasionally add code that's not needed. Andrei
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Doing it conservatively still is 100% reliable but has the user occasionally add code that's not needed.
That:

1. still has the user adding dead code

2. will generate endless bug reports (*) on why sometimes the compiler asks for an initialization and sometimes it doesn't

3. (2) will behave in an implementation dependent manner, depending on how thorough the flow analysis is, making for non-portable source code

P.S. I've been doing tech support for compilers for 25 years, and I wish to reduce the workload by designing out things that confuse people!
Mar 04 2009
parent reply Jason House <jason.james.house gmail.com> writes:
Walter Bright Wrote:

 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Doing it conservatively still is 100% reliable but has the user occasionally add code that's not needed.
That: 1. still has the user adding dead code
I'll gladly add assert(0) if my complex logic confuses the compiler. Actually, it pisses me off when the current dmd compiler complains about an assert(0) being unreachable.
 2. will generate endless bug reports (*) on why sometimes the compiler 
 asks for an initialization and sometimes it doesn't
IMHO, this type of thing is easy to understand. Huge (recurring) threads like this one are a sign of a bigger language issue than ill-informed bug reports.
 3. (2) will behave in an implementation dependent manner, depending on 
 how thorough the flow analysis is, making for non-portable source code
I think the only way to do this is to define simplistic flow analysis in the spec and then stick to it. After all, complex flow can confuse humans reading the code too.
 P.S. I've been doing tech support for compilers for 25 years, and I wish 
 to reduce the workload by designing out things that confuse people!
Mar 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jason House wrote:
 Walter Bright Wrote:
 
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Doing it conservatively still is 100% reliable but has the user occasionally add code that's not needed.
That: 1. still has the user adding dead code
I'll gladly add assert(0) if my complex logic confuses the compiler. Actually, it pisses me off when the current dmd compiler complains about an assert(0) being unreachable.
 2. will generate endless bug reports (*) on why sometimes the compiler 
 asks for an initialization and sometimes it doesn't
IMHO, this type of thing is easy to understand. Huge (recurring) threads like this one are a sign of a bigger language issue than ill-informed bug reports.
 3. (2) will behave in an implementation dependent manner, depending on 
 how thorough the flow analysis is, making for non-portable source code
 I think the only way to do this is to define simplistic flow analysis in the spec and then stick to it. After all, complex flow can confuse humans reading the code too.
 P.S. I've been doing tech support for compilers for 25 years, and I wish 
 to reduce the workload by designing out things that confuse people!
I agree with Jason that it's relatively easy to properly define the behavior of a flow-informed feature. I also agree with bearophile (I think) about the fact that flow analyses are the norm in today's modern compilers, and D is behind in that regard. I also agree with Walter that he has a rather overflowing plate already and he instinctively starts by disagreeing with anything that would have him do more work. Andrei
Mar 04 2009
parent Jason House <jason.james.house gmail.com> writes:
Andrei Alexandrescu Wrote:

 Jason House wrote:
 Walter Bright Wrote:
 
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Doing it conservatively still is 100% reliable but has the user occasionally add code that's not needed.
That: 1. still has the user adding dead code
I'll gladly add assert(0) if my complex logic confuses the compiler. Actually, it pisses me off when the current dmd compiler complains about an assert(0) being unreachable.
 2. will generate endless bug reports (*) on why sometimes the compiler 
 asks for an initialization and sometimes it doesn't
IMHO, this type of thing is easy to understand. Huge (recurring) threads like this one are a sign of a bigger language issue than ill-informed bug reports.
 3. (2) will behave in an implementation dependent manner, depending on 
 how thorough the flow analysis is, making for non-portable source code
 I think the only way to do this is to define simplistic flow analysis in the spec and then stick to it. After all, complex flow can confuse humans reading the code too.
 P.S. I've been doing tech support for compilers for 25 years, and I wish 
 to reduce the workload by designing out things that confuse people!
 I agree with Jason that it's relatively easy to properly define the behavior of a flow-informed feature. I also agree with bearophile (I think) about the fact that flow analyses are the norm in today's modern compilers, and D is behind in that regard. I also agree with Walter that he has a rather overflowing plate already and he instinctively starts by disagreeing with anything that would have him do more work. Andrei
I doubt anyone would complain if Walter said a feature was worthwhile but he has too many things on his plate to get to it in the near future. There's nothing wrong with delegating and empowering. If nobody steps up to update the front end, then maybe it's not as valuable as we all say it is...
Mar 04 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jason House wrote:
 IMHO, this type of thing is easy to understand.
Yeah, well, I still get regular emails (for the last 20 years at least) from the gamut of professional programmers at all levels of expertise who do not understand what "undefined symbol" from the linker means. It happens so often I am forced to consider the idea that the defect lies with me <g>. If I could figure a way to design *that* out of a linker, I would.
Mar 04 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Jason House wrote:
 IMHO, this type of thing is easy to understand.
Yeah, well, I still get regular emails (for the last 20 years at least) from the gamut of professional programmers at all levels of expertise who do not understand what "undefined symbol" from the linker means. It happens so often I am forced to consider the idea that the defect lies with me <g>. If I could figure a way to design *that* out of a linker, I would.
It's easy. Demangle the name of the symbol that you print an error about. I swear that that is the entire problem, right there. Andrei
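For what it's worth, the D runtime does ship a demangler nowadays, so the fix is mechanical; a sketch (the mangled name below is a constructed example, not output from a real link failure):

import std.stdio;
import core.demangle : demangle;

void main()
{
    // What a linker could print instead of the raw symbol.
    // demangle() returns its input unchanged when the name is not a
    // valid D mangling, so it is safe to apply unconditionally.
    auto sym = "_D3foo3barFiZi"; // int foo.bar(int)
    writefln("undefined symbol: %s  (raw: %s)", demangle(sym), sym);
}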
Mar 04 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Yeah, well, I still get regular emails (for the last 20 years at 
 least) from the gamut of professional programmers at all levels of 
 expertise who do not understand what "undefined symbol" from the 
 linker means. It happens so often I am forced to consider the idea 
 that the defect lies with me <g>.

 If I could figure a way to design *that* out of a linker, I would.
It's easy. Demangle the name of the symbol that you print an error about. I swear that that is the entire problem, right there.
No, the same thing happens with C. It still happened when there was a name demangler in the linker.
Mar 04 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Jason House wrote:
 I'll gladly add assert(0) if my complex logic confuses the compiler.
 Actually, it pisses me off when the current dmd compiler complains
 about an assert(0) being unreachable.
I'd love it if an unreachable assert(0) were a special case. At the start of programming, the code is in serious flux, and many times it would be really handy to have a few extra asserts around. Once the code crystallises you remove the most blatant asserts. I do this in shell scripting, too. For example, in the original rdmd there still is a check that essentially says "if we get here, abort with a message 'logic failure, check code'." Of course, there are people who simply have such a clear head that they essentially have the entire source file figured out before they start typing. I have some such friends. They would not need this feature. But there's some code even in Phobos that looks like it could use this.
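A minimal sketch of that style of "can't happen" assert in D (the sign function is invented for illustration):

// Mark branches believed impossible with assert(0); if a refactoring
// ever makes them reachable, the program dies loudly instead of
// silently doing the wrong thing.
int sign(int x)
{
    if (x > 0) return 1;
    if (x < 0) return -1;
    if (x == 0) return 0;
    assert(0, "can't happen: trichotomy violated");
}

void main()
{
    assert(sign(-7) == -1 && sign(0) == 0 && sign(42) == 1);
}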
Mar 05 2009
parent "Joel C. Salomon" <joelcsalomon gmail.com> writes:
Georg Wrede wrote:
 Jason House wrote:
 I'll gladly add assert(0) if my complex logic confuses the compiler.
 Actually, it pisses me off when the current dmd compiler complains
 about an assert(0) being unreachable.
I'd love it if an unreachable assert(0) were a special case. At the start of programming, the code is in serious flux, and many times it would be really handy to have a few extra asserts around. Once the code crystallises you remove the most blatant asserts.
Don’t be so quick to remove them; when you refactor your code you leave yourself open to these same bugs biting you. Remember “Can’t Happen or /* NOTREACHED */ or Real Programs Dump Core” by Ian Darwin & Geoff Collyer <http://www.literateprogramming.com/canthappen.pdf>. —Joel Salomon
Mar 05 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time. Ah, I forgot to say, this is only done for local variables. For member variables the default initializer is used. If it's done only for local variables then you don't need to instrument the running code.
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
 Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this:

Foo f;
if (x < 1) f = new Foo(1);
else if (x >= 1) f = new Foo(2);
f.member();

? (You might ask who would write such, but sometimes the conditions are much more complex, and/or are generated by generic code.)
 If it's done only for local variables then you don't need to instrument 
 the running code.
How about this:

Foo f;
bar(&f);

? Or in another form:

bar(ref Foo f);

Foo f;
bar(f);

Java doesn't have ref parameters.
Mar 04 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in
 its initializer, but just before you read it for the first time. That's
 very different.
The only way to do that 100% reliably is to instrument the running code.
 Java does it at compile time.
 Java is a severely constrained language. Even so, how does it do with this:

 Foo f;
 if (x < 1) f = new Foo(1);
 else if (x >= 1) f = new Foo(2);
 f.member();
Whenever there are branches in code and a variable still doesn't have a value at that point:
- if all branches assign a value to that variable, from now on the variable has a value
- if not, at the end of the branches the variable still doesn't have a value
 
 ? (You might ask who would write such, but sometimes the conditions are 
 much more complex, and/or are generated by generic code.)
 
 If it's done only for local variables then you don't need to 
 instrument the running code.
 How about this:

 Foo f;
 bar(&f);

 ? Or in another form:

 bar(ref Foo f);

 Foo f;
 bar(f);

 Java doesn't have ref parameters.
I just tried it and it says a parameter can't be passed by reference if it doesn't have a value assigned. So your first example should be an error. The same should be applied for &references. (in your first example, if you want to pass f by reference so that bar creates an instance of f, then it should be an out parameter).
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable 
in its initializer, but just before you read it for the first time. 
 That's very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this: Foo f; if (x < 1) f = new Foo(1); else if (x >= 1) f = new Foo(2); f.member();
Whenever there are branches in code and a variable still doesn't have a value at that point: - if all branches assign a value to that variable, from now on the variable has a value - if not, at the end of the branches the variable still doesn't have a value
That rule gets the wrong answer in the above case. Consider that in order to get where you want to go with this, the flow analysis has to always work, not merely work most of the time. Otherwise you get bug reports with phrases like "seems to", "sometimes", "somehow", "I can't figure it out", "I can't reproduce the problem", etc.

Here's another lovely case:

Foo f;
if (x < 1) f = new Foo();
... lots of code ...
if (x < 1) f.member();

The code is quite correct and bug-free, but flow analysis will tell you that f in f.member() is "possibly uninitialized".
 ? (You might ask who would write such, but sometimes the conditions 
 are much more complex, and/or are generated by generic code.)

 If it's done only for local variables then you don't need to 
 instrument the running code.
How about this: Foo f; bar(&f); ? Or in another form: bar(ref Foo f); Foo f; bar(f); Java doesn't have ref parameters.
It cannot do it and still support separate compilation.
 I just tried it and it says a parameter can't be passed by reference if it 
 doesn't have a value assigned.
I'll bet that they added this constraint when they got a bug report about that hole <g>.
 So your first example should be an error.
 
 The same should be applied for &references.
 
 (in your first example, if you want to pass f by reference so that bar 
 creates an instance of f, then it should be an out parameter).
Doesn't work if the function conditionally initializes it (that is not uncommon; consider the API case where, if the function executes correctly, the reference arg is initialized and filled in with the results). So you're left with either the dummy-initializer copout or getting rid of large chunks of the language, like Java does.
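A sketch of the conditionally-initializing API pattern Walter describes, with hypothetical names (tryLookup, table): on failure the ref argument is deliberately left untouched, which is exactly what an out parameter would forbid.

class Foo { void member() {} }

Foo[string] table;

bool tryLookup(string key, ref Foo result)
{
    if (auto p = key in table)   // p is null when key is absent
    {
        result = *p;
        return true;             // result is filled in only on success
    }
    return false;                // result deliberately left untouched
}

void use()
{
    Foo f;
    if (tryLookup("answer", f))
        f.member();              // only reached when f was filled in
}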
Mar 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Ary Borenszweig wrote:
 Foo f;
 if (x < 1) f = new Foo(1);
 else if (x >= 1) f = new Foo(2);
 f.member();
Whenever there are branches in code and a variable still doesn't have a value at that point: - if all branches assign a value to that variable, from now on the variable has a value - if not, at the end of the branches the variable still doesn't have a value
That rule gets the wrong answer in the above case. Consider that in order to get where you want to go with this, the flow analysis has to always work, not most of the time work. Otherwise you get bug reports with phrases like "seems to", "sometimes", "somehow", "I can't figure it out", "I can't reproduce the problem", etc.
Java has done this kind of definite-assignment checking for many years, and I haven't heard of gotchas involving it. Here are the first two hits on searching java gotchas:

http://mindprod.com/jgloss/gotchas.html
http://www.firstsql.com/java/gotchas/

The first actually discusses construction issues, but the flow thing is not among them.
 Here's another lovely case:
 
 Foo f;
 if (x < 1) f = new Foo();
 ... lots of code ...
 if (x < 1) f.member();
 
 The code is quite correct and bug-free, but flow analysis will tell you 
 that f in f.member() is "possibly uninitialized".
The code is bug-ridden. It's exactly the kind of maintenance nightmare where you change one line and 1000 lines below something crashes.
 ? (You might ask who would write such, but sometimes the conditions 
 are much more complex, and/or are generated by generic code.)

 If it's done only for local variables then you don't need to 
 instrument the running code.
How about this: Foo f; bar(&f); ? Or in another form: bar(ref Foo f); Foo f; bar(f); Java doesn't have ref parameters.
It cannot do it and still support separate compilation.
Listen to the man. The point is to use non-null as default in function signatures, but relax that rule non-locally so programmers don't feel constrained. It's the best of all worlds.
 I just tried it and it says a parameter can't be passed by reference 
 if it doesn't have a value assigned.
I'll bet that they added this constraint when they got a bug report about that hole <g>.
Link? Evidence?
 So your first example should be an error.

 The same should be applied for &references.

 (in your first example, if you want to pass f by reference so that bar 
 creates an instance of f, then it should be an out parameter).
Doesn't work if the function conditionally initializes it (that is not uncommon, consider the API case where if the function executes correctly, the reference arg is initialized and filled in with the results). copout or get rid of large chunks of the language like Java does.
No. This is exactly the point at which the useless "out" storage class can make itself useful. You can pass an uninitialized variable as an "out" (not ref) parameter, and the callee has the modularly checked obligation to assign to it. It's perfect. You have no case. Andrei
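A sketch of that out-parameter discipline. Note that in today's D an out parameter is merely default-initialized on entry; the statically checked obligation to assign it is what is being proposed here, not existing behavior.

class Foo { void member() {} }

void bar(out Foo f)
{
    f = new Foo;    // the callee's obligation: f is assigned on every path
}

void use()
{
    Foo f;          // never read before the call
    bar(f);
    f.member();     // safe: bar was required to assign f
}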
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:

Java has done this kind of definite-assignment checking for many years, and I haven't heard of gotchas involving it. Here are the first two hits 
 on searching java gotchas:
 
 http://mindprod.com/jgloss/gotchas.html
 http://www.firstsql.com/java/gotchas/
 
 The first actually discusses construction issues, but the flow thing is 
 not among them.
Disallowing certain combinations makes it hard to compose things using metaprogramming.
 The code is bug-ridden. It's exactly the kind of maintenance nightmare 
 where you change one line and 1000 lines below something crashes.
It does occur in various forms. I know this from experimenting with flow analysis. The problem with saying it's "buggy" is that the programmer then throws in a dead assignment to "fix" it rather than refactor it.
 Listen to the man. The point is to use non-null as default in function 
 signatures, but relax that rule non-locally so programmers don't feel 
 constrained. It's the best of all worlds.
But much of this thread is about tracking where a null assignment to a field is coming from. By doing only local analysis, that case is not dealt with at all.
 I'll bet that they added this constraint when they got a bug report 
 about that hole <g>.
Link? Evidence?
No evidence. That's just how these things usually work <g>.
 No. This is exactly the point at which the useless "out" storage class 
 can make itself useful. You can pass an uninitialized variable as an 
 "out" (not ref) parameter, and the callee has the modularly checked 
 obligation to assign to it. It's perfect.
I'll concede that one.
 You have no case.
Not so fast :-)
Mar 04 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gonnf4$2mnj$1 digitalmars.com...
 The code is bug-ridden. It's exactly the kind of maintenance nightmare 
 where you change one line and 1000 lines below something crashes.
It does occur in various forms. I know this from experimenting with flow analysis. The problem with saying it's "buggy" is that the programmer then throws in a dead assignment to "fix" it rather than refactor it.
If someone has code like that, then the main issue is that it needs to be refactored. Neither the current status of D nor "perfect" flow-analysis would do anything to force, or even nudge, the programmer into doing that either, so what? At least attention will get called to it, and the programmer will at least have the *opportunity* to choose to fix it. And if they choose to do the dead-assignment hack, well, the code's already crap anyway; it's not like they're really all that much worse off.
 Listen to the man. The point is to use non-null as default in function 
 signatures, but relax that rule non-locally so programmers don't feel 
 constrained. It's the best of all worlds.
But much of this thread is about tracking where a null assignment to a field is coming from. By doing only local analysis, that case is not dealt with at all.
Before I address that, let me address one other thing first: I've spent plenty of time with compilers that do this kind of conservative checking, and while that occasionally meant trivial workarounds, not once has the conservatively-biased flow-analysis ever been one of the things that caused me non-trivial work. In fact, I would argue that a "perfect" flow-analysis would be much worse, because it would cause code to constantly flip-flop between valid and invalid at the tiniest seemingly-insignificant change. (In other words, if you think the "symbol unresolved" emails you get are bad, it's nothing compared to what would happen with a perfect "zero reject-valid" flow-analysis.)

With that out of the way: Yes, for "perfect"-style flow-analysis, both local and non-local analysis would be needed in order to track the origin of a bad value, and only instrumenting the running code could fully accomplish that. But with a conservative analysis that rejects some theoretically-valid code, you get two benefits over the "perfect"-style analysis:

1. Far easier for the programmer to *know* when something will or won't be accepted by initialization-checks, and minor changes don't cause surprises.

2. The initialization-checks can catch all uses of uninited vars with simple local-only analysis.
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:gonnf4$2mnj$1 digitalmars.com...
 The code is bug-ridden. It's exactly the kind of maintenance nightmare 
 where you change one line and 1000 lines below something crashes.
It does occur in various forms. I know this from experimenting with flow analysis. The problem with saying it's "buggy" is that the programmer then throws in a dead assignment to "fix" it rather than refactor it.
If someone has code like that, then the main issue is that it needs to be refactored. Neither the current status of D nor "perfect" flow-analysis would do anything to force, or even nudge, the programmer into doing that either, so what? At least attention will get called to it, and the programmer will at least have the *opportunity* to choose to fix it. And if they choose to do the dead-assignment hack, well, the code's already crap anyway; it's not like they're really all that much worse off.
This is the same argument Java had for mandatory exception specifications. Bruce Eckel wrote a fascinating article about it, showing that even the experts who denigrated the "bad style" of the quick-and-dirty fix used it themselves. The excuse was always "I'll fix it later", and of course it never got fixed. We could debate what good style is and is not, but I think we can agree that the ideal language should make it easy to do good style and more effort to do bad style, because programmers will naturally follow the path of least resistance - even the ones who know better. That's why php is so popular, yet reviled <g>.
Mar 04 2009
parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gonthc$2ump$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:gonnf4$2mnj$1 digitalmars.com...
 The code is bug-ridden. It's exactly the kind of maintenance nightmare 
 where you change one line and 1000 lines below something crashes.
It does occur in various forms. I know this from experimenting with flow analysis. The problem with saying it's "buggy" is that the programmer then throws in a dead assignment to "fix" it rather than refactor it.
If someone has code like that, then the main issue is that it needs to be refactored. Neither the current status of D nor "perfect" flow-analysis would do anything to force, or even nudge, the programmer into doing that either, so what? At least attention will get called to it, and the programmer will at least have the *opportunity* to choose to fix it. And if they choose to do the dead-assignment hack, well, the code's already crap anyway; it's not like they're really all that much worse off.
This is the same argument Java had for mandatory exception specifications. Bruce Eckel wrote a fascinating article about it, showing that even the experts who denigrated the "bad style" of the quick-and-dirty fix used it themselves. The excuse was always "I'll fix it later", and of course it never got fixed. We could debate what good style is and is not, but I think we can agree that the ideal language should make it easy to do good style and more effort to do bad style, because programmers will naturally follow the path of least resistance - even the ones who know better. That's why php is so popular, yet reviled <g>.
First of all, Java's checked exception system is a far cry from what we're advocating here. That thing basically amounted to a mandatory, complete, redundant manual documentation of all exception behavior of any code that ever touched any other code that used the checked exception system. It's no wonder people side-stepped it. What we're talking about here is an occasional tweak to the kinds of code that are already prone to hiding "use-of-uninited-ref/var" errors anyway, with no cascading effects. That's not even in the same ballpark.

Secondly, in languages that already have these checks, when people need an occasional kludge to circumvent proper initialization, they just don't seem to mind: it takes deliberate effort from the programmer to actively create a kludge, but in D all it takes is a trivial initialization, so the burden would be even lower in D.

As far as the idea of "easy to do good style, harder to do bad style", that only serves to support my argument. Omitting compiler checks for "potential-use-of-uninited-ref/var" makes it *very* easy to use a variable/reference that you haven't properly initialized (and like someone else said, T.init is not always the proper initial value for whatever you're doing). But when you put those checks in, even if the programmer kludges around them, it's not as if they're any worse off.
Mar 04 2009
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a 
variable in its initializer, but just before you read it for the 
first time. That's very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this: Foo f; if (x < 1) f = new Foo(1); else if (x >= 1) f = new Foo(2); f.member();
Whenever there are branches in code and a variable still doesn't have a value at that point: - if all branches assign a value to that variable, from now on the variable has a value - if not, at the end of the branches the variable still doesn't have a value
That rule gets the wrong answer in the above case. Consider that in order to get where you want to go with this, the flow analysis has to always work, not most of the time work. Otherwise you get bug reports with phrases like "seems to", "sometimes", "somehow", "I can't figure it out", "I can't reproduce the problem", etc. Here's another lovely case: Foo f; if (x < 1) f = new Foo(); ... lots of code ... if (x < 1) f.member(); The code is quite correct and bug-free, but flow analysis will tell you that f in f.member() is "possibly uninitialized".
That's exactly an example of code with a possible bug that'll produce a null pointer access. Imagine that in "... lots of code ..." you use f and the compiler complains, but you are sure "x < 1" will be true! Then you add an else with an assert(false) or a thrown exception, and that's it. If the second condition is "if (x >= 1)" and you use f in the next line, then yes, you are definitely sure that "f" is initialized, provided you didn't touch "x" in "... lots of code ...". Well... not really: if some other thread changed "x" in the middle of the code, then you are not really sure about that. So again, it's an error.

If you are really, really sure that this won't happen, you write:

Foo f = null;

and that's it. You make the compiler help you only when you want it to, and that'll be most of the time. The same goes for:

Foo f;
bar(&f);

If you are sure that bar will assign a value to f if it doesn't have one, go ahead and initialize "f" with null.
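Concretely, the rewrite Ary suggests might look like this (a sketch; it assumes the checker treats assert(false) as a guaranteed halt, which D's release-mode semantics for assert(0) support):

class Foo { this(int) {} void member() {} }

void use(int x)
{
    Foo f;
    if (x < 1)       f = new Foo(1);
    else if (x >= 1) f = new Foo(2);
    else             assert(false); // unreachable, but proves every path assigns f
    f.member();                     // a definite-assignment checker accepts this
}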
Mar 05 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a 
variable in its initializer, but just before you read it for the 
first time. That's very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this: Foo f; if (x < 1) f = new Foo(1); else if (x >= 1) f = new Foo(2); f.member();
Whenever there are branches in code and a variable still doesn't have a value at that point: - if all branches assign a value to that variable, from now on the variable has a value - if not, at the end of the branches the variable still doesn't have a value
That rule gets the wrong answer in the above case. Consider that in order to get where you want to go with this, the flow analysis has to always work, not most of the time work. Otherwise you get bug reports with phrases like "seems to", "sometimes", "somehow", "I can't figure it out", "I can't reproduce the problem", etc. Here's another lovely case: Foo f; if (x < 1) f = new Foo(); ... lots of code ... if (x < 1) f.member(); The code is quite correct and bug-free, but flow analysis will tell you that f in f.member() is "possibly uninitialized".
That's exactly the example of a code with a possible bug that'll get an null pointer access. Imagine in "... lots of code ..." you use f and the compiler complains. But you are sure "x < 1" will be true! Then you need to add an else and some assert(false) or throw an exception and that's it. If the second condition is "if (x >= 1)" and in the next line you use f, then yes, you are definitely sure that "f" is initialized if you didn't touch "x" in "... lots of code ...". Well... not really, if some other thread changed "x" in the middle of the code, then you are not really sure about that. So again, it's an error. If you are really, really sure that this won't happen, you do: Foo f = null; and that's it. You make the compiler help you only when you want it to, and that'll be most of the time. The same goes with: Foo f; bar(&f); If you are sure that bar will assign a value to f if it doesn't have one, go ahead and initialize "f" with null.
By the way, I'm defending this functionality because I've received the "variable might not be initialized" error many times now, and I know my coworkers have received it many times too. In those cases, we were always making a mistake. I can't remember even one time when I had to put a dummy initializer in a variable just to make the compiler happy.
Mar 05 2009
prev sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Thu, 05 Mar 2009 06:54:05 +0300, Walter Bright <newshound1 digitalmars.com>
wrote:

 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable  
in its initializer, but just before you read it for the first time.  
 That's very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this: Foo f; if (x < 1) f = new Foo(1); else if (x >= 1) f = new Foo(2); f.member();
[snip]
 Here's another lovely case:

 Foo f;
 if (x < 1) f = new Foo();
 ... lots of code ...
 if (x < 1) f.member();

 The code is quite correct and bug-free, but flow analysis will tell you  
 that f in f.member() is "possibly uninitialized".
These are examples of spaghetti code that the compiler would discourage. I believe this will lead to code which is easier to read and maintain.
Mar 06 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
Denis Koroskin wrote:
These are examples of spaghetti code that the compiler would 
discourage. I believe this will lead to code which is easier to 
read and maintain.
It's not the compiler's place to tell me that my code is unmaintainable.
Mar 06 2009
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Christopher Wright escribió:
 Denis Koroskin wrote:
These are examples of spaghetti code that the compiler would 
discourage. I believe this will lead to code which is easier to 
read and maintain.
It's not the compiler's place to tell me that my code is unmaintainable.
Yes it is!
Mar 06 2009
parent Christopher Wright <dhasenan gmail.com> writes:
Ary Borenszweig wrote:
 Christopher Wright escribió:
 Denis Koroskin wrote:
These are examples of spaghetti code that the compiler would 
discourage. I believe this will lead to code which is easier to 
read and maintain.
It's not the compiler's place to tell me that my code is unmaintainable.
Yes it is!
Hm. I might not mind optional warnings about that sort of thing, but if they were mandatory errors, I'd have to remove them from the compiler or stop using the language. The compiler isn't smart enough to determine that what I am doing is safe and necessary, even if it is ugly. The compiler must produce an executable that faithfully executes any valid input to the compiler. Style and maintainability don't affect validity.
Mar 07 2009
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Fri, 06 Mar 2009 18:18:26 -0500, Christopher Wright wrote:

 Denis Koroskin wrote:
These are examples of spaghetti code that the compiler would 
discourage. I believe this will lead to code which is easier to 
read and maintain.
It's not the compiler's place to tell me that my code is unmaintainable.
The compiler is your friend, and friends sometimes have to be brutally honest.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Mar 06 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gon0i2$1nsp$1 digitalmars.com...
 Ary Borenszweig wrote:
Walter Bright escribió:
 Ary Borenszweig wrote:
 It's not like that. They don't require you to initialize a variable in 
its initializer, but just before you read it for the first time. That's 
 very different.
The only way to do that 100% reliably is to instrument the running code.
Java does it at compile time.
Java is a severely constrained language. Even so, how does it do with this: Foo f; if (x < 1) f = new Foo(1); else if (x >= 1) f = new Foo(2); f.member(); ? (You might ask who would write such, but sometimes the conditions are much more complex, and/or are generated by generic code.)
I can't think of a single example of metaprogramming that would generate that without also happily generating things that would leave f potentially uninited.
Mar 04 2009
prev sibling parent reply "Joel C. Salomon" <joelcsalomon gmail.com> writes:
Walter Bright wrote:
 I've used compilers that required explicit initializers for all
 variables. Sounds great in theory, in practice it *causes* bugs.
 
 What happens is the compiler dings the user with "initializer required."
 The user wants to get on with things, so he just throws in an
 initializer, any initializer, to get the compiler to shut up.
Then the compiler is giving the wrong warning; it ought only to warn that the variable is used before it has been set. —Joel Salomon
Mar 04 2009
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Joel C. Salomon" <joelcsalomon gmail.com> wrote in message 
news:gomli7$15sj$2 digitalmars.com...
 Walter Bright wrote:
 I've used compilers that required explicit initializers for all
 variables. Sounds great in theory, in practice it *causes* bugs.

 What happens is the compiler dings the user with "initializer required."
 The user wants to get on with things, so he just throws in an
 initializer, any initializer, to get the compiler to shut up.
Then the compiler is giving the wrong warning; it ought only to warn that the variable is used before it has been set.
Agreed. But let me pre-emptively address the inevitable "That would require flow analysis" that is scheduled for this point in the discussion: I've been thinking more and more that flow analysis is something that really needs to just get done for D at some point (I sympathetically realize that there are plenty of other things D also needs addressed). I'm no expert at this, but from prior discussions it sounds like it's fairly well understood. That, together with the fact that the lack of it seems to be becoming more and more of a stumbling block for various "very good things", leads me to believe it's something that just needs to get done.
Mar 04 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Joel C. Salomon wrote:
 Then the compiler is giving the wrong warning; it ought only to warn
 that the variable is used before it has been set.
That doesn't cover all the cases, nor does it do anything for tracking where a null might come from when going between modules. And yes, it does require full-blown data flow analysis.
Mar 04 2009
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-03-04 06:04:33 -0500, Walter Bright <newshound1 digitalmars.com> said:

 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that.
While I can't contradict your personal experience, it doesn't seem to be universal.
 Non-nullable types (or proxy struct or whatever) means the code won't
 even compile if there's an untested path.  And if we do try to assign a
 null, we get an exception at THAT moment, so we can trace back to find
 out where it came from.
Yes, I understand that detecting bugs at compile time is better. But there's a downside to this. Every reference type will have two subtypes - a nullable and a non-nullable. We already have const, immutable and shared. Throwing another attribute into the mix is not insignificant. Each one exponentially increases the combinations of types, their conversions from one to the other, overloading rules, etc. Andrei suggests making a library type work for this rather than a language attribute, but it's still an extra thing that will have to be specified everywhere it is used.
Well, if you care about the extra work for users of the language who would have to specify whether each pointer can be null or not, I disagree that it's extra work. When you design an API, you have to tell users of that API whether you accept null pointers or not. You should specify it in the documentation ("this argument must not be null", "this struct member must not be null", etc.) and add contracts in the code to enforce that in debug builds. That's a lot more extra work than adding an attribute to the pointer, where it'll be available both to the compiler and the documentation.

Where I work, we do C++ programming. All the time we use things like std::auto_ptr and boost::scoped_ptr to enforce proper ownership and deletion of everything (boost::shared_ptr and intrusive_ptr also help for shared pointers). We only rarely use raw pointers. The reason for this? Because using the more verbose version ensures correctness and expresses the intent: how that pointer is to be used. It's true that auto_ptr and co. in C++ prevent a more dangerous problem than null dereferences - they make sure the memory isn't deallocated prematurely or never at all, preventing corruption and leaks...
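For illustration, this is roughly what that documentation-plus-contract discipline looks like in today's D, before any nullability annotation exists (Image and draw are made-up names; the in-contract runs only in debug builds):

class Image {}

/// Draws img at (x, y). img must not be null.
void draw(Image img, int x, int y)
in
{
    assert(img !is null, "draw: img must not be null");
}
body
{
    // ... actual drawing ...
}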
 There are a lot of optional attributes that can be applied to reference 
 types. At what point is the additional complexity not worth the gain?
Pretty good question. You are the judge of that, and apparently you don't like adding complexity for the user. Well, I'm with you on that. The thing is, I think we're simplifying things by adding non-nullable types.

With nullability annotations, you always know whether you have to check for null or not (normally you keep track of that in your mind anyway). And with static enforcement of nullability checks prior to a dereference, you don't have to be extra careful before dereferencing: the compiler will tell you if you've forgotten something, so you can free your mind of these details and concentrate on the task at hand.

The cost: you must annotate all your nullable pointers. But considering the study Andrei has dug out, most pointers shouldn't be nullable. And as I pointed out above, annotating is something you should do anyway in documentation and contracts. And I think that having non-nullable by default would make that cost negative: you gain static null dereference checks everywhere, and for each of these pointers you have less documentation and fewer contracts to write; instead, only one third of them need to be annotated as nullable (hopefully with just one character to type).

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Mar 04 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
The world is more diverse than one environment. When I run 200 jobs on a cluster, failure from null pointer usage comes back with no file and line information, no debugger nicely starting, no nothing. And it's not easy to reproduce that on a small machine with GUI and all.
 Non-nullable types (or proxy struct or whatever) means the code won't
 even compile if there's an untested path.  And if we do try to assign a
 null, we get an exception at THAT moment, so we can trace back to find
 out where it came from.
Yes, I understand that detecting bugs at compile time is better. But there's a downside to this. Every reference type will have two subtypes - a nullable and a non-nullable. We already have const, immutable and shared. Throwing another attribute into the mix is not insignificant. Each one exponentially increases the combinations of types, their conversions from one to the other, overloading rules, etc. Andrei suggests making a library type work for this rather than a language attribute, but it's still an extra thing that will have to be specified everywhere where used. There are a lot of optional attributes that can be applied to reference types. At what point is the additional complexity not worth the gain?
The added language complexity is very small and not visible to the user. You change the default behavior. The library takes care of the rest. Non-null is not const, not invariant, not shared, and should not be compared in cost and benefits with them. And there is no reference type with two subtypes. It's one type in the language and one in the library. Maybe-null (the library) is a supertype of non-null (the default). Again: there is no "adding" to the language. It's changing. You don't need to even look at overloading, combinations etc. All you need to look at is constructors. (Oh, Bartosz and I found a couple of other bugs in constructors last night too.) Andrei
Mar 04 2009
next sibling parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
The world is more diverse than one environment. When I run 200 jobs on a cluster, failure from null pointer usage comes back with no file and line information, no debugger nicely starting, no nothing. And it's not easy to reproduce that on a small machine with GUI and all.
 Non-nullable types (or proxy struct or whatever) means the code won't
 even compile if there's an untested path.  And if we do try to assign a
 null, we get an exception at THAT moment, so we can trace back to find
 out where it came from.
Yes, I understand that detecting bugs at compile time is better. But there's a downside to this. Every reference type will have two subtypes - a nullable and a non-nullable. We already have const, immutable and shared. Throwing another attribute into the mix is not insignificant. Each one exponentially increases the combinations of types, their conversions from one to the other, overloading rules, etc. Andrei suggests making a library type work for this rather than a language attribute, but it's still an extra thing that will have to be specified everywhere where used. There are a lot of optional attributes that can be applied to reference types. At what point is the additional complexity not worth the gain?
The added language complexity is very small and not visible to the user. You change the default behavior. The library takes care of the rest. Non-null is not const, not invariant, not shared, and should not be compared in cost and benefits with them. And there is no reference type with two subtypes. It's one type in the language and one in the library. Maybe-null (the library) is a supertype of non-null (the default).
One problem I can see is with extern(C) and extern(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.

Incidentally, uses of maybe-null in immutable types must be _extremely_ rare. The usage statistics for D are likely to be skewed towards non-null types, compared to Java. I'd say the 66% is a very conservative lower bound.
 Again: there is no "adding" to the language. It's changing. You don't 
 need to even look at overloading, combinations etc. All you need to look 
 at is constructors. (Oh, Bartosz and I found a couple of other bugs in 
 constructors last night too.)
 
 
 Andrei
Mar 04 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in the 
 language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then:

extern(C) MaybeNull!(void*) malloc(size_t s);

will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
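A minimal sketch of what such a MaybeNull could look like, under Andrei's stated assumptions: it is exactly pointer-sized, so it comes back in a register like a bare pointer. The isNull and get names are illustrative, not a settled API.

struct MaybeNull(T)
{
    private T payload;       // one word: no size overhead over a raw T

    bool isNull() { return payload is null; }

    T get()                  // the only way out is a checked extraction
    {
        assert(payload !is null, "MaybeNull: null dereference");
        return payload;
    }
}

extern(C) MaybeNull!(void*) malloc(size_t s);

void use()
{
    auto p = malloc(16);
    if (!p.isNull())
    {
        void* q = p.get();   // forced through the null check
        // ... use q ...
    }
}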
 Incidentally, uses of maybe-null in immutable types must be _extremely_ 
 rare. The usage statistics for D are likely to be skewed towards 
 non-null types, compared to Java. I'd say the 66% is a very conservative 
 lower bound.
Very good point. With a mutable null pointer, there's at least hope it will become something interesting later :o). Andrei
Mar 04 2009
next sibling parent reply Don <nospam nospam.com> writes:
Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in 
 the language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function:

--------
int GetTextCharsetInfo(
  HDC hdc,                // handle to DC
  LPFONTSIGNATURE lpSig,  // data buffer
  DWORD dwFlags           // reserved; must be zero
);

lpSig
  [out] Pointer to a FONTSIGNATURE data structure that receives
  font-signature information. The lpSig parameter can be NULL if you
  do not need the FONTSIGNATURE information.
---------

How do you do this?

Don.
Mar 04 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 19:39:06 +0300, Don <nospam nospam.com> wrote:

 Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in  
 the language and one in the library. Maybe-null (the library) is a  
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function -------- int GetTextCharsetInfo( HDC hdc, // handle to DC LPFONTSIGNATURE lpSig, // data buffer DWORD dwFlags // reserved; must be zero ); lpSig [out] Pointer to a FONTSIGNATURE data structure that receives font-signature information. The lpSig parameter can be NULL if you do not need the FONTSIGNATURE information. --------- How do you do this? Don.
extern(System) int GetTextCharsetInfo(
    HDC hdc,
    MaybeNull!(FONTSIGNATURE*) lpSig,  // or whatever
    DWORD dwFlags);

GetTextCharsetInfo(hdc, null, flags); // fine
GetTextCharsetInfo(hdc, &sig, flags); // also ok
Mar 04 2009
parent reply Don <nospam nospam.com> writes:
Denis Koroskin wrote:
 On Wed, 04 Mar 2009 19:39:06 +0300, Don <nospam nospam.com> wrote:
 
 Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in 
 the language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function -------- int GetTextCharsetInfo( HDC hdc, // handle to DC LPFONTSIGNATURE lpSig, // data buffer DWORD dwFlags // reserved; must be zero ); lpSig [out] Pointer to a FONTSIGNATURE data structure that receives font-signature information. The lpSig parameter can be NULL if you do not need the FONTSIGNATURE information. --------- How do you do this? Don.
extern(System) int GetTextCharsetInfo( HDC hdc, MaybeNull!(FONTSIGNATURE*) lpSig, // or whatever DWORD dwFlags); GetTextCharsetInfo(hdc, null, flags); // fine GetTextCharsetInfo(hdc, &sig, flags); // also ok
But it needs to have the type name mangled as LPFONTSIGNATURE, not as MaybeNull!(FONTSIGNATURE*). Otherwise it can't link to Windows.
Mar 05 2009
parent reply Max Samukha <samukha voliacable.com.removethis> writes:
On Thu, 05 Mar 2009 09:09:42 +0100, Don <nospam nospam.com> wrote:

Denis Koroskin wrote:
 On Wed, 04 Mar 2009 19:39:06 +0300, Don <nospam nospam.com> wrote:
 
 Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in 
 the language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function -------- int GetTextCharsetInfo( HDC hdc, // handle to DC LPFONTSIGNATURE lpSig, // data buffer DWORD dwFlags // reserved; must be zero ); lpSig [out] Pointer to a FONTSIGNATURE data structure that receives font-signature information. The lpSig parameter can be NULL if you do not need the FONTSIGNATURE information. --------- How do you do this? Don.
extern(System) int GetTextCharsetInfo( HDC hdc, MaybeNull!(FONTSIGNATURE*) lpSig, // or whatever DWORD dwFlags); GetTextCharsetInfo(hdc, null, flags); // fine GetTextCharsetInfo(hdc, &sig, flags); // also ok
But it needs to have the type name mangled as LPFONTSIGNATURE, not as MaybeNull!(FONTSIGNATURE*). Otherwise it can't link to Windows.
Parameter names are not mangled for stacall, only their total size. IIRC, the above will get mangled into _GetTextCharsetInfo@12, just like it would without MaybeNull.
Mar 05 2009
parent reply Don <nospam nospam.com> writes:
Max Samukha wrote:
 On Thu, 05 Mar 2009 09:09:42 +0100, Don <nospam nospam.com> wrote:
 
 Denis Koroskin wrote:
 On Wed, 04 Mar 2009 19:39:06 +0300, Don <nospam nospam.com> wrote:

 Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in 
 the language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function -------- int GetTextCharsetInfo( HDC hdc, // handle to DC LPFONTSIGNATURE lpSig, // data buffer DWORD dwFlags // reserved; must be zero ); lpSig [out] Pointer to a FONTSIGNATURE data structure that receives font-signature information. The lpSig parameter can be NULL if you do not need the FONTSIGNATURE information. --------- How do you do this? Don.
extern(System) int GetTextCharsetInfo( HDC hdc, MaybeNull!(FONTSIGNATURE*) lpSig, // or whatever DWORD dwFlags); GetTextCharsetInfo(hdc, null, flags); // fine GetTextCharsetInfo(hdc, &sig, flags); // also ok
But it needs to have the type name mangled as LPFONTSIGNATURE, not as MaybeNull!(FONTSIGNATURE*). Otherwise it can't link to Windows.
Parameter names are not mangled for stacall, only their total size. IIRC, the above will get mangled into _GetTextCharsetInfo@12, just like it would without MaybeNull.
Cool. So it'd only be an issue with extern(C++).
Mar 05 2009
parent Max Samukha <samukha voliacable.com.removethis> writes:
On Thu, 05 Mar 2009 10:06:14 +0100, Don <nospam nospam.com> wrote:

Max Samukha wrote:
 On Thu, 05 Mar 2009 09:09:42 +0100, Don <nospam nospam.com> wrote:
 
 Denis Koroskin wrote:
 On Wed, 04 Mar 2009 19:39:06 +0300, Don <nospam nospam.com> wrote:

 Andrei Alexandrescu wrote:
 Don wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in 
 the language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
One problem I can see is with extern(C),(Windows) functions, since pointers are maybe-null in C. The name-mangling has to work out. I can't see how this can be done without the compiler knowing SOMETHING about both nullable and non-nullable types. At the bare minimum, you need to deal with maybe-null returns and reference parameters from C functions.
Walter is thinking of making only references non-null and leaving pointers as they are. (I know, cry of horror.) But say pointers are also non-null. Then: extern(C) MaybeNull!(void*) malloc(size_t s); will work, provided that MaybeNull has no size overhead and that word-sized structs are returned in the same register as word returns (I seem to remember Walter told me that's the case already).
Here's a typical annoying Windows API function -------- int GetTextCharsetInfo( HDC hdc, // handle to DC LPFONTSIGNATURE lpSig, // data buffer DWORD dwFlags // reserved; must be zero ); lpSig [out] Pointer to a FONTSIGNATURE data structure that receives font-signature information. The lpSig parameter can be NULL if you do not need the FONTSIGNATURE information. --------- How do you do this? Don.
extern(System) int GetTextCharsetInfo( HDC hdc, MaybeNull!(FONTSIGNATURE*) lpSig, // or whatever DWORD dwFlags); GetTextCharsetInfo(hdc, null, flags); // fine GetTextCharsetInfo(hdc, &sig, flags); // also ok
But it needs to have the type name mangled as LPFONTSIGNATURE, not as MaybeNull!(FONTSIGNATURE*). Otherwise it can't link to Windows.
Parameter names are not mangled for stacall, only their total size. IIRC, the above will get mangled into _GetTextCharsetInfo@12, just like it would without MaybeNull.
Parameter names -> parameter types
stacall -> stdcall
Sorry
 
Cool. So it'd only be an issue with extern(C++).
You're right, but the C++ interface is so limited that I don't think anybody uses it.
Mar 05 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 extern(C) MaybeNull!(void*) malloc(size_t s);
Probably I have missed part of the discussion, but didn't people like the idea of using "?" to denote nullables? Something like:

extern(C) void*? malloc(size_t s);

Bye, bearophile
Mar 04 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 extern(C) MaybeNull!(void*) malloc(size_t s);
Probably I have missed part of the discussion, because wasn't it appreciated by people to use "?" to denote nullables? Something like: extern(C) void*? malloc(size_t s); Bye, bearophile
Please no more syntax. Andrei
Mar 04 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 Please no more syntax.
Well, the good thing about a library solution is that sugar/syntax can be added later if enough people want it :-) Bye, bearophile
Mar 04 2009
parent "Denis Koroskin" <2korden gmail.com> writes:
On Wed, 04 Mar 2009 22:08:45 +0300, bearophile <bearophileHUGS lycos.com> wrote:

 Andrei Alexandrescu:
 Please no more syntax.
Well, the good thing about a library solution is that sugar/syntax can be added later if enough people want it :-) Bye, bearophile
That's what I hope, too. :)
Mar 04 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
Andrei Alexandrescu wrote:
 bearophile wrote:
 Andrei Alexandrescu:
 extern(C) MaybeNull!(void*) malloc(size_t s);
Probably I have missed part of the discussion, because wasn't it appreciated by people to use "?" to denote nullables? Something like: extern(C) void*? malloc(size_t s); Bye, bearophile
Please no more syntax. Andrei
It seems to me that making references non-nullable and pointers maybe-nullable covers all the use-cases.
Mar 04 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
bearophile wrote:
 Andrei Alexandrescu:
 extern(C) MaybeNull!(void*) malloc(size_t s);
Probably I have missed part of the discussion, because wasn't it appreciated by people to use "?" to denote nullables? Something like: extern(C) void*? malloc(size_t s); Bye, bearophile
In C#, "?" marks nullable value types. This usage is vaguely similar but sufficiently different that I would not use the same symbols, even in a discussion.
Mar 04 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
The world is more diverse than one environment. When I run 200 jobs on a cluster, failure from null pointer usage comes back with no file and line information, no debugger nicely starting, no nothing. And it's not easy to reproduce that on a small machine with GUI and all.
At least Linux might produce a core dump that can be used to re-start the process. Windows users are pretty much out of luck. That said, I agree with you completely. Sean
Mar 04 2009
parent The Anh Tran <trtheanh gmail.com> writes:
Sean Kelly wrote:
 At least Linux might produce a core dump that can be used to re-start 
 the process.  Windows users are pretty much out of luck.  That said, I 
 agree with you completely.
 
 
 Sean
On a _simple_ application, you can get a complete process memory dump: http://msdn.microsoft.com/en-us/library/ms680360(VS.85).aspx

On a cluster application using OpenMP, MPI..., it is extremely hard to reproduce.
Mar 04 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in the 
 language and one in the library. Maybe-null (the library) is a supertype 
 of non-null (the default).
I don't know how it cannot wind up being two types. One nullable, the other not. Whether it is a library type or language type, there are still two versions.
Mar 04 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 And there is no reference type with two subtypes. It's one type in the 
 language and one in the library. Maybe-null (the library) is a 
 supertype of non-null (the default).
I don't know how it cannot wind up being two types. One nullable, the other not. Whether it is a library type or language type, there are still two versions.
You said there are three. Andrei
Mar 04 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 You said there are three.
I meant two. Glad we cleared that up!
Mar 04 2009
prev sibling next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 I need to know when that null gets stored, not when my code trips over
 it and explodes later down the line.
Ok, I see the difference, but I've rarely had any trouble finding out where the assignment happened. In fact, I can't remember ever having a problem finding that. That's because the null pointer exception is nearly always repeatable, so it isn't hard to work backwards. The non-repeatable ones have been due to memory corruption, which is a different issue entirely.
In a language like Java where basically every variable is a reference, it can be a lot more difficult to figure out where a null came from. I've never had this problem in code I've written either, but I've had to maintain some Java code that was nearly impenetrable and this was absolutely an issue. Sean
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Sean Kelly wrote:
 In a language like Java where basically every variable is a reference, 
 it can be a lot more difficult to figure out where a null came from. 
 I've never had this problem in code I've written either, but I've had to 
 maintain some Java code that was nearly impenetrable and this was 
 absolutely an issue.
I've found that variables are rarely written to, but often read. A grep finds the handful of places where a variable is written to (a good IDE should also provide this information, much better than grep). There just won't be that many places to look.

(I tend to use grep a lot on code, and so tend to write variable names that are amenable to grep. If you use globals named "i", hope you have a decent IDE!)
Mar 04 2009
parent Sean Kelly <sean invisibleduck.org> writes:
Walter Bright wrote:
 Sean Kelly wrote:
 In a language like Java where basically every variable is a reference, 
 it can be a lot more difficult to figure out where a null came from. 
 I've never had this problem in code I've written either, but I've had 
 to maintain some Java code that was nearly impenetrable and this was 
 absolutely an issue.
I've found that variables are rarely written to, but often read. A grep finds the handful of places where it is written to (a good IDE should also provide this information much better than grep). There just won't be that many places to look.
Say one variable is initialized at 4 points by different variables, some are the same type and some are different types with an explicit downcast. Now I have to consider a possible cast failure (the easy case) and I also have to look to see where those 4 variables were initialized, possibly introducing another 4 initialization points per variable, and so on. It's entirely possible to do this, but I'm looking at either inserting checks at an exponential number of locations and then running once, or running and finding the offending initialization then running again to find the preceding offending initialization, etc. Even worse is when a container is involved, since they tend to have a ton more points in the code where their contents are being altered or rearranged. There's one instance in a Java program I've worked on where I've seen a variable be null that, from code inspection, should logically never be null at that point. I'm sure I missed something subtle, but I'll be darned if I know what it is. This is drastically different from a C/C++ application where references are the exception rather than the norm. D stands somewhere between the two depending on programming style.
 (I tend to use grep a lot on code, and so tend to write variable names 
 that are amenable to grep. If you use globals named "i", hope you have a 
 decent IDE!)
Same here. Without find/grep I'd have given up on programming long ago.
Mar 04 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 Andrei suggests making a library type work for this rather than a
 language attribute, but it's still an extra thing that will have to be
 specified everywhere where used.
 
I've considered trying to make a template that enforces non-null usage at compile time but can, with a version flag, be switched to a simple alias/typedef of the wrapped type. Near-zero overhead, and possibly just as strong a guard.
Mar 04 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
BCS wrote:
 Hello Walter,
 
 Andrei suggests making a library type work for this rather than a
 language attribute, but it's still an extra thing that will have to be
 specified everywhere where used.
I've considered trying to make a template that compile time enforces Non-null usage but can, with a version flag, be switched to a simple alias/typedef of the internal type. Near zero overhead and possibly just as strong a guard.
It's impossible. You can create a struct that will throw an exception if you use it uninitialized, or if you try assigning null to it. But the struct cannot require that it is initialized. You can add a contract that requires the struct be initialized, and put the contract and declaration in a template.
Mar 04 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Christopher Wright wrote:
 BCS wrote:
 Hello Walter,

 Andrei suggests making a library type work for this rather than a
 language attribute, but it's still an extra thing that will have to be
 specified everywhere where used.
I've considered trying to make a template that compile time enforces Non-null usage but can, with a version flag, be switched to a simple alias/typedef of the internal type. Near zero overhead and possibly just as strong a guard.
It's impossible. You can create a struct that will throw an exception if you use it uninitialized, or if you try assigning null to it. But the struct cannot require that it is initialized. You can add a contract that requires the struct be initialized, and put the contract and declaration in a template.
It is possible (I have one hanging out in my code somewhere). It works like this:

struct NonNull(T) if (is(T == class))
{
    T value = cast(T) &NullObject!(T);
    ...
    alias value this; // does not compile yet
}

Constructing a null object is not hard - you can do it with static this() code, I forgot exactly how I did it. But this kind of code is not terribly useful: it initializes "null" objects to correct objects, so instead of getting a hard error, you end up with odd behavior because you're using that stand-in object. One step forward would be to assign all functions in T's vtable to throw-everything (make T a white hole object). Fields would still be changeable, though.

Andrei
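Assuming disabled default construction and alias this both eventually work, a fuller sketch might look like this - untested, with the check done at construction and assignment instead of via a null object:

struct NonNull(T) if (is(T == class))
{
    private T _value;

    @disable this();  // no default construction, so no hidden null inside

    this(T t)
    {
        if (t is null) throw new Exception("null given to NonNull");
        _value = t;
    }

    void opAssign(T t)
    {
        if (t is null) throw new Exception("null given to NonNull");
        _value = t;
    }

    T get() { return _value; }
    alias get this;  // lets a NonNull!(Foo) be used wherever a Foo is
}

With that, auto f = NonNull!(Foo)(new Foo); f.member(); works as expected, while any attempt to smuggle a null in fails loudly at the assignment site rather than at some later dereference.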
Mar 04 2009
prev sibling parent reply BCS <none anon.com> writes:
Hello Christopher,

 BCS wrote:
 
 Hello Walter,
 
 Andrei suggests making a library type work for this rather than a
 language attribute, but it's still an extra thing that will have to
 be specified everywhere where used.
 
I've considered trying to make a template that compile time enforces Non-null usage but can, with a version flag, be switched to a simple alias/typedef of the internal type. Near zero overhead and possibly just as strong a guard.
It's impossible. You can create a struct that will throw an exception if you use it uninitialized, or if you try assigning null to it. But the struct cannot require that it is initialized. You can add a contract that requires the struct be initialized, and put the contract and declaration in a template.
You're right, but if you switch to a class and factory with no public constructor, you can make it work. The problem of perf going down the drain is avoidable if you can (in that mode) enforce compile time checking of most cases and require calls to do run time checks for the rest. If the template works right, then flipping back to alias/typedef mode leaves the run time checks in place and leaves the unchecked code correct while doing away with the perf problems.
Mar 04 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
BCS wrote:
 You're right, but if you switch to a class and factory with no public 
 constructor, you can make it work. The problem of perf going down the 
 drain is avoidable if you can (in that mode) enforce compile time 
 checking of most cases and require calls to do run time checks for the 
 rest. If the template works right, then flipping back to alias/typedef 
 mode leaves the run time checks in place and leaves the unchecked code 
 correct while doing away with the perf problems.
If you use a class, you're begging the question. It's just that you'll have a null NotNull!(T) rather than a null T. Granted, you can use opAssign(T) instead, but you still need contracts.
Mar 05 2009
parent reply BCS <none anon.com> writes:
Hello Christopher,

 BCS wrote:
 
 You're right, but if you switch to a class and factory with no public
 constructor, you can make it work. The problem of perf going down the
 drain is avoidable if you can (in that mode) enforce compile time
 checking of most cases and require calls to do run time checks for
 the rest. If the template works right, then flipping back to
 alias/typedef mode leaves the run time checks in place and leaves the
 unchecked code correct while doing away with the perf problems.
 
If you use a class, you're begging the question. It's just that you'll have a null NotNull!(T) rather than a null T. Granted, you can use opAssign(T) instead, but you still need contracts.
That won't be an issue because it's a run time concern, and I am proposing that the class based version never even run. In fact it could even not be runnable. All it does is enforce usage patterns that do work correctly with a different set of definitions.
Mar 06 2009
parent reply Christopher Wright <dhasenan gmail.com> writes:
BCS wrote:
 Hello Christopher,
 
 BCS wrote:

 You're right, but if you switch to a class and factory with no public
 constructor, you can make it work. The problem of perf going down the
 drain is avoidable if you can (in that mode) enforce compile time
 checking of most cases and require calls to do run time checks for
 the rest. If the template works right, then flipping back to
 alias/typedef mode leaves the run time checks in place and leaves
 the unchecked code correct while doing away with the perf problems.
If you use a class, you're begging the question. It's just that you'll have a null NotNull!(T) rather than a null T. Granted, you can use opAssign(T) instead, but you still need contracts.
That won't be an issue because it's a run time concern and I am proposing that the class based version never even run. In fact it could even not be runable. All it does is enforce usage patterns that do work correctly with a different set of definitions.
I don't understand what you are saying. If you can't run a program that uses NotNull, who in their right mind would use it?
Mar 06 2009
parent BCS <ao pathlink.com> writes:
Reply to Christopher,

 BCS wrote:
 
 Hello Christopher,
 
 BCS wrote:
 
 You're right, but if you switch to a class and factory with no public
 constructor, you can make it work. The problem of perf going down
 the drain is avoidable if you can (in that mode) enforce compile
 time checking of most cases and require calls to do run time
 checks for the rest. If the template works right, then flipping
 back to alias/typedef mode leaves the run time checks in place and
 leaves the unchecked code correct while doing away with the perf problems.
 
If you use a class, you're begging the question. It's just that you'll have a null NotNull!(T) rather than a null T. Granted, you can use opAssign(T) instead, but you still need contracts.
That won't be an issue because it's a run time concern and I am proposing that the class based version never even run. In fact it could even not be runable. All it does is enforce usage patterns that do work correctly with a different set of definitions.
I don't understand what you are saying. If you can't run a program that uses NotNull, who in their right mind would use it?
something like this:

version(Enforce)
{
    NotNull!(T) MakeNotNull(T)(T) { assert(false); }
    bool MakeNotNull(T)(T, out NotNull!(T)) { assert(false); }

    class NotNull(T)
    {
        static this() { assert(false); }
        // overload to make valid stuff work and invalid stuff not work
    }
}
else
{
    template NotNull(T)  // not quite right for classes but oh well
    {
        alias T* NotNull;
    }

    NotNull!(T) MakeNotNull(T)(T t)
    {
        if (t is null) throw new Something();
        return t;
    }

    bool MakeNotNull(T)(T t, out NotNull!(T) tout)
    {
        if (t is null) return false;
        tout = t;
        return true;
    }
}

compile it with -version=Enforce. If it compiles, recompile without that version and run it; if not, fix it and try again
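The intended call sites would then look something like this (assuming the alias mode resolves NotNull!(T) to plain T for class types, rather than the pointer alias above):

void consume(NotNull!(Object) o)
{
    // in here o is taken to be non-null; no checks needed
}

void caller(Object maybeNull)
{
    NotNull!(Object) sure;
    if (MakeNotNull(maybeNull, sure))  // explicit check at the boundary
        consume(sure);

    consume(MakeNotNull(new Object)); // or the throwing flavor
}

In Enforce mode this only type-checks if every conversion from a plain T goes through MakeNotNull; in the normal mode it runs with just those two explicit checks and no other overhead.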
Mar 06 2009
prev sibling parent reply "Joel C. Salomon" <joelcsalomon gmail.com> writes:
Daniel Keep wrote:
 You're missing the point.  It's not the moment at which the dereference
 happens that's the problem.  As you point out, we have a hardware trap
 for that.
 
 It's when a null *gets assigned* to a variable that isn't ever supposed
 to *be* null that's the problem.  That's when the "asserts up the
 posterior" issue comes in, because it's the only mechanism we have for
 defending against this.
I like this formulation of the problem, because it points to related issues. E.g., out-of-bounds checks. In this code:

char[5] arr;
int idx;
…
idx = 7;
arr[idx] = 'd';

the actual bug is in the expression “idx = 7”, although it’s only the indexing in the next line that triggers diagnostics.

To avoid this class of bug, you need a simple way to declare what the acceptable values for a variable are. For a pointer, it ought never to be null, or to point outside the address space, or…

Can contracts be applied to individual objects, or only to types?

—Joel Salomon
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Joel C. Salomon wrote:
 To avoid this class of bug, you need a simple way to declare what the
 acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
Mar 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Joel C. Salomon:
 To avoid this class of bug, you need a simple way to declare what the
 acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
What? We use ranges of integers in Delphi at work today :-) I have even proposed something similar for D twice in the past. (But to be precise, I often don't use ranged integral numbers for the purpose discussed here). Bye, bearophile
Mar 04 2009
next sibling parent "Joel C. Salomon" <joelcsalomon gmail.com> writes:
bearophile wrote:
 Walter Bright:
 Joel C. Salomon:
 To avoid this class of bug, you need a simple way to declare what the
 acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
What? We use ranges of integers in Delphi at work today :-) I have even proposed something similar for D twice in the past. (But to be precise, I often don't use ranged integral numbers for the purpose discussed here).
I was actually thinking of something more ambitious: being able to declare object invariants (vs. class invariants) that may not be simple, e.g., “this variable will always be a valid index (or one-past-the-end) for that array, even when the array is resized”. Something like,

char[5] arr;
int idx invariant() { assert((0 <= idx) && (idx <= arr.length)); };

for example. OK, this syntax is clumsy. But some version of this, perhaps amenable to some mixins for common scenarios like not nullable, within some range, valid index to some array, &c., could be usable.

—Joel Salomon
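Something close to the array-index case is already expressible by wrapping the index in a struct with an invariant (a rough sketch; BoundedIndex is a made-up name, and the check only runs when invariants are compiled in):

struct BoundedIndex(size_t length)
{
    private size_t idx;

    invariant()
    {
        assert(idx <= length);  // valid index or one-past-the-end
    }

    void opAssign(size_t i) { idx = i; }  // invariant runs right after this

    size_t value() { return idx; }
    alias value this;
}

void main()
{
    char[5] arr;
    BoundedIndex!(arr.length) idx;
    idx = 4;
    arr[idx] = 'd';
    // idx = 7;  // would fail the invariant at the assignment, not at the use
}

It doesn't follow a resized array around, though, which is what a true per-object invariant would buy.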
Mar 04 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 Joel C. Salomon:
 To avoid this class of bug, you need a simple way to declare what
 the acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
What? We use ranges of integers in Delphi at work today :-) I have even proposed something similar for D twice in the past. (But to be precise, I often don't use ranged integral numbers for the purpose discussed here).
I didn't mean nobody liked them or used them. I mean they have not caught on, despite having been around for 3 decades at least.
Mar 04 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gomvg0$1m3b$1 digitalmars.com...
 bearophile wrote:
 Walter Bright:
 Joel C. Salomon:
 To avoid this class of bug, you need a simple way to declare what
 the acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
What? We use ranges of integers in Delphi at work today :-) I have even proposed something similar for D twice in the past. (But to be precise, I often don't use ranged integral numbers for the purpose discussed here).
I didn't mean nobody liked them or used them. I mean they have not caught on, despite having been around for 3 decades at least.
There have been precedents where things sat in relative obscurity for decades before finally catching on. Such as many functional-programming concepts, like map/reduce and no-side-effect programming, which have only recently become highly in-vogue. Or Babbage's analytical engine. Of course, I'm not saying it would be easy, just that I wouldn't necessarily rule it out on that alone.
Mar 04 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 There have been precedents where things sat in relative obscurity for 
 decades before finally catching on. Such as many functional-programming 
 concepts, like map/reduce and no-side-effect programming, which have only 
 recently become highly in-vogue. Or Babbage's analytical engine. Of course, 
 I'm not saying it would be easy, just that I wouldn't necessarily rule it 
 out on that alone. 
Sure, but it does give one pause.
Mar 06 2009
prev sibling parent Georg Wrede <georg.wrede iki.fi> writes:
Walter Bright wrote:
 bearophile wrote:
 Walter Bright:
 Joel C. Salomon:
 To avoid this class of bug, you need a simple way to declare what
 the acceptable values for a variable are.
Languages have had this capability, but it never caught on. People found it just too tedious.
What? We use ranges of integers in Delphi at work today :-) I have even proposed something similar for D twice in the past. (But to be precise, I often don't use ranged integral numbers for the purpose discussed here).
I didn't mean nobody liked them or used them. I mean they have not caught on, despite having been around for 3 decades at least.
Integer ranges are closely related to contract programming.
Mar 05 2009
prev sibling next sibling parent Christopher Wright <dhasenan gmail.com> writes:
Walter Bright wrote:
 Rainer Deyke wrote:
 Writing an assertion for every non-nullable reference argument for every
 function is tedious.
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is. The hardware won't help you with array overflows or uninitialized variables, however.
The hardware will catch it when you try to use a null pointer. It does nothing for you when you store null in a pointer that's supposed to be valid. In order for you to make sure you don't store null somewhere you can't have null, you need to add an invariant to that effect, make that field private, and always access it via public properties. And you can't compile with -release. This is only a problem with complex data structures, usually, I think. I'd like to try non-nullable by default, but there are a number of issues that would have to be resolved that seriously affect their usability. Also, I don't like the opportunity cost of asking Walter to implement this feature.
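For reference, the guard pattern just described looks like this in practice (a sketch; Widget and Painter are made-up names, and all of the checks below evaporate under -release, which is exactly the problem):

class Painter { }

class Widget
{
    private Painter _painter;  // must never be null

    invariant() { assert(_painter !is null); }

    this(Painter p)
    {
        assert(p !is null);
        _painter = p;
    }

    // public properties are the only way in or out of the field
    Painter painter() { return _painter; }

    void painter(Painter p)
    {
        assert(p !is null, "null stored into Widget.painter");
        _painter = p;  // so the failure happens at the store, not at a later use
    }
}

That's a fair amount of ceremony for one field, which is the usability issue mentioned above.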
Mar 04 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Walter,

 Rainer Deyke wrote:
 
 Writing an assertion for every non-nullable reference argument for
 every function is tedious.
 
It's also quite unnecessary. The hardware will do it for you, and the debugger will tell you where it is. The hardware won't help you with array overflows or uninitialized variables, however.
Even the best debugger I've ever seen doesn't help if the invalid null assignment is 10 calls deep in a call branch that has already returned. Throw in member variables (or worse, globals) and it gets even worse. I don't care where a null causes a seg-v, I want to know when that variable got assigned to null.
Mar 04 2009
parent "Joel C. Salomon" <joelcsalomon gmail.com> writes:
BCS wrote:
 I don't care where a null causes a seg-v, I want to know when that
 variable got assigned to null.
Per-object invariants? —Joel Salomon
Mar 04 2009
prev sibling parent reply Rainer Deyke <rainerd eldwood.com> writes:
Walter Bright wrote:
 It's also quite unnecessary. The hardware will do it for you, and the
 debugger will tell you where it is.
By the same reasoning, unit tests are unnecessary. The end user tells you that there's a bug, and manually stepping through the whole program with a debugger tells you where. I use assertions because they make my life easier. -- Rainer Deyke - rainerd eldwood.com
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Rainer Deyke wrote:
 I use assertions because they make my life easier.
Yeah, but you also said they were tedious <g>.
Mar 04 2009
parent Rainer Deyke <rainerd eldwood.com> writes:
Walter Bright wrote:
 Rainer Deyke wrote:
 I use assertions because they make my life easier.
Yeah, but you also said they were tedious <g>.
Tedious, but better than the same code without the assertions. Non-nullable types would remove the need for manual assertions. -- Rainer Deyke - rainerd eldwood.com
Mar 04 2009
prev sibling parent reply Georg Wrede <georg.wrede iki.fi> writes:
Rainer Deyke wrote:
 Daniel Keep wrote:
 The point was that these were identified as being responsible for the
 bulk of the bugs in Unreal.

A sample size of one doesn't mean much. In my experience, none of those four factors account for a significant amount of bugs, since all of them (except integer overflow) can be caught without too much effort through the copious use of assertions.
Well, you got it backwards. Sample size one is your experience. Sample size N is Unreal's huge source, which has been written by N guys.
Mar 04 2009
next sibling parent Rainer Deyke <rainerd eldwood.com> writes:
Georg Wrede wrote:
 Rainer Deyke wrote:
 A sample size of one doesn't mean much.  In my experience, none of those
 four factors account for a significant amount of bugs, since all of them
 (except integer overflow) can be caught without too much effort through
 the copious use of assertions.
Well, you got it backwards. Sample size one is your experience. Sample size N is Unreal's huge source, which has been written by N guys.
It's one anecdote versus another anecdote. Neither has a significant sample size. (And you're not examining the whole body of work created by those N programmers. You're examining one project.) -- Rainer Deyke - rainerd eldwood.com
Mar 04 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 Sample size one is your experience.
 Sample size N is Unreal huge source, which has been written by N guys.
I've been involved with a lot of projects over the years with a lot of people involved; I've also done compiler tech support for 25 years.
Mar 04 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 3 [Integer overflow] is a problem, but fortunately it tends to be rare.
The C# and LLVM designers don't agree with you. A nice small post on the topic: http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html Bye, bearophile
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 3 [Integer overflow] is a problem, but fortunately it tends to be
 rare.
 The C# and LLVM designers don't agree with you. A nice small post on the topic: http://googleresearch.blogspot.com/2006/06/extra-extra-read-all-about-it-nearly.html
There is a SafeInt class built for C++. It should be quite doable for D without needing any particular language support. That kind of thing is precisely what operator overloading is for. The nice thing about it is anyone can write and use such a class - no need to convince anyone else of its merits.

Or you could change the compiler to throw an exception on any integer arithmetic overflow. Sounds great, right? Consider that there's no hardware support for this, so the following would have to happen:

regular code:
    add EAX,EBX

checked code:
    add EAX,EBX
    jc Overflow

This is going to slow things down and bloat up the code generation. But wait, it gets worse. The x86 has a lot of complex addressing modes that are used for fast addition, such as:

    lea EAX,[EBX*8][ESI]

None of these optimizations could be used if checking is desired. So, to keep the performance, you'll have to be able to select which one you want, either by a separate parallel set of integer types (doubling the number of types), or by having special code blocks, such as:

checked
{
    x = a + b;
}

I just don't see that being very popular. Code is full of arithmetic, and adding checked all over the place will not only uglify the code, chances are nearly certain that it will get omitted here and there for operations that might overflow.
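For illustration, a bare-bones SafeInt-style type in D needs nothing but a struct and operator overloading (a sketch using the newer opBinary spelling; only + is shown, and the test is done in software precisely because there's no cheap hardware hook):

struct SafeInt
{
    int value;

    SafeInt opBinary(string op : "+")(SafeInt rhs)
    {
        int r = value + rhs.value;
        // signed overflow happened iff both operands have the same sign
        // and the result's sign differs
        if (((value ^ r) & (rhs.value ^ r)) < 0)
            throw new Exception("integer overflow");
        return SafeInt(r);
    }
}

SafeInt(int.max) + SafeInt(1) then throws instead of silently wrapping to int.min. Those who want the checks opt in by using the type; everyone else pays nothing.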
Mar 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

Thank you for your answer.
I think you have programmed plenty in Pascal-like languages that support arithmetic overflow checks, so all/some of the things I write below may sound obvious to you. I base what I say on many years of experience of programming in languages that allow me to switch on such arithmetic overflow checks, so your experience may differ. And given that languages like C#, Clojure, Python and so on have ways to avoid the arithmetic bugs I was talking about, I think I'm not alone.


There is a SafeInt class built for C++. It should be quite doable for D without
needing any particular language support. That kind of thing is precisely what
operator overloading is for. The nice thing about it is anyone can write and
use such a class - no need to convince anyone else of its merits.<
You have to modify the code in many places to use it. I'm sure there are Safe-something classes for array bounds too, but they can't avoid most of the out-of-bound errors, because very few people use them everywhere in programs; programmers are lazy. So I think a SafeInt class isn't very useful. Most or all C++ programs I see around don't use it. Google code search lists 347 usages of the word 'SafeInt' in C++ code: http://www.google.com/codesearch?hl=en&sa=N&q=SafeInt++lang:c%2B%2B&ct=rr&cs_r=lang:c%2B%2B
Or you could change the compiler to throw an exception on any integer
arithmetic overflow. Sounds great, right? Consider that there's no hardware
support for this, so the following would have to happen:<
This is going to slow things down and bloat up the code generation. But wait,
it gets worse. The x86 has a lot of complex addressing modes that are used for
fast addition, such as:<
None of these optimizations could be used if checking is desired.<
LLVM will have intrinsics to support such things. LDC may use them with a small amount of extra code in the front-end. Some of those checks can be avoided, because sometimes you can infer the operation can't overflow.

I have turned on such checks hundreds of times in Pascal, TurboPascal, Delphi, and FreePascal programs, and it has allowed me to spot bugs that are far worse than some slowdown during debugging. I have many times written prototypes of programs in Python because it avoids such overflow bugs.

I like D also because it allows me to write fast programs, but for most programs most of the code isn't performance-critical, so a lot of code isn't so damaged by such checks. That's why a large percentage of programs can today be written in managed languages or scripting languages that are slower or way slower than good C/C++/D programs.

Note that such code bloat and slowdown can be limited to debug time only too, disabling such checks locally or globally in release versions, if the programmer wants so. You can take some C# programs, compile them with and without such overflow checks, and time the performance difference. (The C# toolchain is quite mature: its compilers are far faster than the current D ones, its GC is way more refined, it adapts itself to 32 and 64 bit CPUs, it's able to use multicores in easy ways, see parallel LINQ, etc.). If you want I can perform some benchmarks later.
So, to keep the performance, you'll have to be able to select which one you
want, either by a separate parallel set of integer types (doubling the number
of types),<
A parallel set of integer types doesn't solve the problem; it's just a way to make the situation more complex and messy. (A compiler switch could turn off the arithmetic overflow checks for such a second set of integral types, but that sounds strange and not nice.)
or by having special code blocks, such as:
 checked { x = a + b; }
 I just don't see that being very popular. Code is full of arithmetic, and adding checked all over the place will not only uglify the code, chances are nearly certain that it will get omitted here and there for operations that might overflow.<

Most times you want to switch such checks on and off for the whole program, and maybe switch them off for high-performance modules or for some functions. This is quick and easy to do, and you don't risk omitting some operations. In past posts I have also proposed a local syntax like the following, which is less important than the more global switches:

safe(overflow, ...)
{
    ...
}

(Like a "static if", it doesn't create a new scope.)

Bye,
bearophile
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 I'm sure there are Safe-something classes for array bounds too, but
 they can't avoid most of the out-of-bound errors, because very few
 people use them everywhere in programs; programmers are lazy.
I agree that people simply aren't going to use that class, for the reasons you mentioned.
 So I think a SafeInt class isn't very useful. Most or all C++
 programs I see around don't use it. Google code search lists 347
 usages of the word 'SafeInt' in C++ code: 
 http://www.google.com/codesearch?hl=en&sa=N&q=SafeInt++lang:c%2B%2B&ct=rr&cs_r=lang:c%2B%2B
I'm not in the least surprised how little penetration it has.
 Or you could change the compiler to throw an exception on any
 integer arithmetic overflow. Sounds great, right? Consider that
 there's no hardware support for this, so the following would have
 to happen:< This is going to slow things down and bloat up the code
 generation. But wait, it gets worse. The x86 has a lot of complex
 addressing modes that are used for fast addition, such as:< None of
 these optimizations could be used if checking is desired.<
LLVM will have intrinsics to support such things. LDC may use them with a small amount of extra code in the front-end.
LLVM cannot change the underlying hardware, which does not easily support it.
 Some of those checks can be avoided, because sometimes you can infer
 the operation can't overflow.
I've done data flow analysis to try and prove such things - the cases where you can avoid the checks are a small minority. You have to check even if you're just doing a ++. On the plus side, you rarely need an overflow check on pointer arithmetic if you've already got array bounds checking or assume things are in bounds.
 I have turned on such checks hundreds of times in Pascal, TurboPascal,
 Delphi, and FreePascal programs, and it has allowed me to spot bugs
 that are far worse than some slowdown during debugging. I have
 many times written prototypes of programs in Python because it avoids
 such overflow bugs.
 
 I like D also because it allows me to write fast programs, but for
 most programs most of the code isn't performance-critical, so a lot of
 code isn't so damaged by such checks. That's why a large percentage
 of programs can today be written in managed languages or scripting
 languages that are slower or way slower than good C/C++/D
 programs.
 
 Note that such code bloat and slowdown can be limited to debug time
 only too, disabling such checks locally or globally in release
 versions, if the programmer wants so.
That's a good point. Global switches are not appropriate for anything other than debugging, however, because some libraries may depend on overflow arithmetic.
Mar 04 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:
 Global switches are not appropriate for anything other than debugging, 
 however, because some libraries may depend on overflow arithmetic.
Right. If a library needs wraparound arithmetic to work correctly, it has to contain a command to locally switch off overflow checks; such annotations override the global behavior of the compiler. This is how things work in Pascal-like languages. And it also shows why you need both global and local ways to switch them on and off. Bye, bearophile
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 Global switches are not appropriate for anything other than
 debugging, however, because some libraries may depend on overflow
 arithmetic.
Right. If a library needs wraparound arithmetic to work correctly, it has to contain a command to locally switch off overflow checks; such annotations override the global behavior of the compiler. This is how things work in Pascal-like languages. And it also shows why you need both global and local ways to switch them on and off.
I just find:

unchecked
{
    ... code ...
}

awfully ugly. I doubt it would be used where it would need to be. Global switches that change the behavior of the language are bad, bad ideas. It makes code unverifiable and hence untrustable.
Mar 04 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gomj88$1253$1 digitalmars.com...
 I just find:

 unchecked
 {
     ... code ...
 }

 awfully ugly. I doubt it would be used where it would need to be. Global 
 switches that change the behavior of the language are bad, bad ideas. It 
 makes code unverifiable and hence untrustable.
That strikes me as a weak argument (particularly since I just don't see the ugliness). Maybe that could be somewhat ugly for very small sections of code, but for those cases there is also this:

int x = unchecked(...expr...);

Plus, if you're not going to have a global switch for this sort of thing, then being able to turn it on locally becomes that much more important. As far as the SafeInt-style proposal, the problem I see with it is that the need vs lack-of-need for overflow checks tends to be based more on what you're doing with the variables rather than the actual variables themselves. (Plus, weren't you just saying in the null/nonnull discussion that you didn't want more variations on types?)
 I doubt it would be used where it would need to be.
It would certainly be better than nothing. And besides, I could say the same about the SafeInt-style proposal.
 Global switches that change the behavior of the language are bad, bad 
 ideas. It makes code unverifiable and hence untrustable.
Aren't you already doing that with things like bounds checking? I've been under the impression that, when built with "-release", an out-of-bounds access will result in undefined behavior, instead of an exception/assert, just as in C.
Mar 04 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Nick Sabalausky wrote:
 As far as the SafeInt-style proposal, the problem I see with it is that the 
 need vs lack-of-need for overflow checks tends to be based more on what 
 you're doing with the variables rather than the actual variables themselves. 
 (Plus, weren't you just saying in the null/nonnull discussion that you 
 didn't want more variations on types?)
This would be the user's choice. Those that don't care for it, needn't use it. That's the advantage of the SafeInt class.
 Global switches that change the behavior of the language are bad, bad 
 ideas. It makes code unverifiable and hence untrustable.
Aren't you already doing that with things like bounds checking? I've been under the impression that, when built with "-release", an out-of-bounds access will result in undefined behavior, instead of an exception/assert, just as in C.
In Java, you can rely on bounds checking to always be on, so you could, for example:

try
{
    for (int i = 0; ; i++)
        array[i] = ...
}
catch (ArrayBoundsException a)
{
}

which is perfectly legitimate code in Java. It is dead wrong in D, because the language behavior is defined to not necessarily throw such exceptions. With overflow, there are legitimate uses of overflow arithmetic. You'd be hard pressed to make a statement like "overflow arithmetic is illegal in D" and have a useful systems programming language.
Mar 04 2009
next sibling parent "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:gomoc6$1aoa$1 digitalmars.com...
 Nick Sabalausky wrote:
 As far as the SafeInt-style proposal, the problem I see with it is that 
 the need vs lack-of-need for overflow checks tends to be based more on 
 what you're doing with the variables rather than the actual variables 
 themselves. (Plus, weren't you just saying in the null/nonnull discussion 
 that you didn't want more variations on types?)
This would be the user's choice. Those that don't care for it, needn't use it. That's the advantage of the SafeInt class.
First, the same could be said of non-null types: those who don't care for them can simply not use them if they don't need them. Second, this doesn't address the original problem with SafeInt that I pointed out. Third, those who do choose to use SafeInt are still back to the issue of two different versions of a type (Not that I have a big problem with this point, but it just seems to conflict with the similar issue you had with null/nonnull).
 Global switches that change the behavior of the language are bad, bad 
 ideas. It makes code unverifiable and hence untrustable.
Aren't you already doing that with things like bounds checking? I've been under the impression that, when built with "-release", an out-of-bounds access will result in undefined behavior, instead of an exception/assert, just as in C.
In Java, you can rely on bounds checking to always be on, so you could, for example:

try
{
    for (int i = 0; ; i++)
        array[i] = ...
}
catch (ArrayBoundsException a)
{
}

which is perfectly legitimate code in Java. It is dead wrong in D, because the language behavior is defined to not necessarily throw such exceptions.
Isn't that what I was just saying? D's behavior on such code is, depending on how you look at it, either undefined, or dependent upon the "-release" / "-debug" compile switches.
 With overflow, there are legitimate uses of overflow arithmetic. You'd be 
 hard pressed to make a statement like "overflow arithmetic is illegal in 
 D" and have a useful systems programming language.
I don't think anyone is suggesting that. Sometimes you want overflow allowed, sometimes you want it disallowed (and sometimes, for better or worse, you don't give a rat's ass). We just want to be able to make that choice when we need to make it. I.e., "Overflow behavior is controllable in D".
Mar 04 2009
prev sibling parent Michiel Helvensteijn <nomail please.com> writes:
Walter Bright wrote:

 With overflow, there are legitimate uses of overflow arithmetic. You'd
 be hard pressed to make a statement like "overflow arithmetic is illegal
 in D" and have a useful systems programming language.
Seems to me that in the general case, programmers want 'normal' integer arithmetic. Overflow arithmetic would be a special case. I would suggest the following:

* Overflow checking should be on by default.
* It may optionally be turned off for release mode.
* Introduce a special type for overflow arithmetic: mod_int!(lower, upper).

Static code analysis is getting better and better, and as such, in more and more cases it becomes possible to prove that an overflow is impossible. In those cases, the runtime check can be left out even in debug mode. I myself am doing research in that direction.

-- 
Michiel
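A mod_int might be approximated along these lines (a hypothetical sketch: values wrap into [lower, upper), so the wrapping intent is visible in the type, while plain int keeps the checked semantics):

struct ModInt(int lower, int upper) if (lower < upper)
{
    private int val = lower;

    this(long v) { val = wrap(v); }

    ModInt opBinary(string op : "+")(int rhs)
    {
        return ModInt(cast(long) val + rhs);
    }

    int value() { return val; }
    alias value this;

    private static int wrap(long v)
    {
        enum long span = cast(long) upper - lower;
        long m = (v - lower) % span;
        if (m < 0) m += span;
        return cast(int)(m + lower);
    }
}

So ModInt!(0, 256)(250) + 10 wraps to 4 by definition, and there is no overflow to report.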
Mar 06 2009
prev sibling next sibling parent reply Alex Burton <alexibu mac.com> writes:
bearophile Wrote:

 Andrei Alexandrescu:
 I did some more research and found a study:
 http://users.encs.concordia.ca/~chalin/papers/TR-2006-003.v3s-pub.pdf
 ...
 Turns out in 2/3 of cases, references are really meant to be non-null... 
 not really a landslide but a comfortable majority.
Thank you for bringing real data to this debate. Note that 2/3 is relative to nonlocal variables only:
In Java programs, at least 2/3 of declarations (other than local variables) that are of reference types are meant to be non-null, based on design intent. We exclude local variables because their non-nullity can be inferred by intra-procedural analysis.<

So the total percentage may be different (higher?). Anyway, non-nullable by default seems the way to go if such a feature is added.
I think there is some faulty logic here. People are writing code that has a design intention of nullable (as shown in the study) precisely because that is the default reference type in the language. Inferring that the default for a new language should be nullable based on these statistics would be a logical error. Implementing default non-nullable would have the effect of reducing the amount of intentionally nullable code. It would also greatly increase the quality and maintainability of the code, as all references not specifically marked as nullable could be safely dereferenced. Alex
Mar 05 2009
parent reply Alex Burton <alexibu mac.com> writes:
Alex Burton Wrote:

 bearophile Wrote:
 
 Andrei Alexandrescu:
 I did some more research and found a study:
 http://users.encs.concordia.ca/~chalin/papers/TR-2006-003.v3s-pub.pdf
 ...
 Turns out in 2/3 of cases, references are really meant to be non-null... 
 not really a landslide but a comfortable majority.
Thank you for bringing real data to this debate. Note that 2/3 is relative to nonlocal variables only:
In Java programs, at least 2/3 of declarations (other than local variables) that are of reference types are meant to be non-null, based on design intent. We exclude local variables because their non-nullity can be inferred by intra-procedural analysis.<

So the total percentage may be different (higher?). Anyway, non-nullable by default seems the way to go if such a feature is added.
I think there is some faulty logic here. People are writing code that has a design intention of nullable (as shown in the study) precisely because that is the default reference type in the language. Inferring that the default for a new language should be nullable based on these statistics would be a logical error. Implementing default non-nullable would have the effect of reducing the amount of intentionally nullable code. It would also greatly increase the quality and maintainability of the code, as all references not specifically marked as nullable could be safely dereferenced. Alex
Oops, I'm wrong: the 2/3 is NON nullable. My brain seems to have trouble reading all this 'non null' stuff.
Mar 05 2009
parent reply Alex Burton <alexibu mac.com> writes:
Alex Burton Wrote:


 
 Oops, I'm wrong: the 2/3 is NON nullable. My brain seems to have trouble reading all this 'non null' stuff.
 
Actually "non nullable" is a double negative. What we really want in the D language, and in the language of the discussions about D, is simple:

1) Types.
2) Nullable Types - optional.

There is no need for Non Nullable types. These are Types.

Alex
Mar 05 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Alex Burton escribió:
 Alex Burton Wrote:
 
 
 Oops, I'm wrong: the 2/3 is NON nullable. My brain seems to have trouble reading all this 'non null' stuff.

 Actually "non nullable" is a double negative. What we really want in the D language, and in the language of the discussions about D, is simple:

 1) Types.
 2) Nullable Types - optional.

 There is no need for Non Nullable types. These are Types.
I think nullable types can't be optional. How do you implement a linked list without them? You use a dummy value for the "no next node"? Naaah...
 
 Alex
Mar 05 2009
parent Rainer Deyke <rainerd eldwood.com> writes:
Ary Borenszweig wrote:
 I think nullable types can't be optional. How do you implement a linked
 list without them? You use a dummy value for the "no next node"? Naaah...
A nullable type is, conceptually, a container that can contain either zero or one element. If there were no language or library support for nullable types, you could always use a dynamic array or some other container instead. -- Rainer Deyke - rainerd eldwood.com
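Spelled out, that container view is tiny (a sketch; std.typecons in today's Phobos provides a Nullable along these lines):

struct Optional(T)
{
    private T payload;
    private bool present;

    this(T value) { payload = value; present = true; }

    bool empty() { return !present; }

    T front()
    {
        assert(present, "empty Optional dereferenced");
        return payload;
    }
}

struct Node
{
    int value;
    Optional!(Node*) next;  // "no next node" is an empty container,
                            // not a magic null value
}

The linked-list objection then answers itself: the list ends wherever next.empty is true.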
Mar 05 2009
prev sibling next sibling parent Alex Burton <alexibu mac.com> writes:
Walter Bright Wrote:

 Daniel Keep wrote:
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
Tim Sweeny isn't an amateur; he's responsible, at least in part, for one of the most commercially successful game engines ever. I figure if even he has trouble with these things, it's worth trying to fix them.
1 and 4 are pernicious, memory corrupting, hard to find problems. 2 is easy to find, does not corrupt memory. It isn't even in the same continent as 1 and 4 are.
That is a very programmer-centric statement: you are rating the value of the problems by the cost to your own time once the bug report has been received. The full cost to an enterprise of a catastrophic bug (which all 4 of these are) in a released piece of software is almost certainly many orders of magnitude larger than the cost of the programmer time to find and fix it. Alex
Mar 05 2009
prev sibling next sibling parent Alex Burton <alexibu mac.com> writes:
Walter Bright Wrote:

When I see code like this I see bugs.

 
 Foo f;
 if (x < 1) f = new Foo(1);
 else if (x >= 1) f = new Foo(2);
 f.member();
 
This should not compile; IMHO default non-nullable is necessary. If using a language with default nullable, I would write this as:

Foo generateFoo()
{
    if (x < 1) return new Foo(1);
    else if (x >= 1) return new Foo(2);
}

This way the compiler has to check that there is a returned value for each path. As the conditions become more complex, the compiler enforcing a return value prevents the result from being null (unless of course you return 0 just to prove a point)
 Foo f;
 bar(&f);
 
 ? Or in another form:
 
 bar(ref Foo f);
 Foo f;
 bar(f);
 
 Java doesn't have ref parameters.
Same problem. The prototype of bar should be Foo bar() if the intent of bar is to return a reference to an instance of Foo. Returning what is conceptually the result of a function through ref parameters is a really nasty way to code, IMHO. Alex
Mar 05 2009
prev sibling next sibling parent reply Burton Radons <burton.radons gmail.com> writes:
Walter Bright Wrote:

 Jason House wrote:
 IMHO, this type of thing is easy to understand.
Yeah, well, I still get regular emails (for the last 20 years at least) from the gamut of professional programmers at all levels of expertise who do not understand what "undefined symbol" from the linker means. It happens so often I am forced to consider the idea that the defect lies with me <g>. If I could figure a way to design *that* out of a linker, I would.
For every extern generate a weak symbol that does nothing but assert out with an error message; if it's properly resolved it goes away, if not then it's executed when the symbol is called. Now the linker isn't giving any errors. I actually remember doing that once! What the hell was I doing that for? Some kind of late binding malarkey maybe.
Mar 05 2009
parent "Nick Sabalausky" <a a.a> writes:
"Burton Radons" <burton.radons gmail.com> wrote in message 
news:gopodl$dne$1 digitalmars.com...
 Walter Bright Wrote:

 Jason House wrote:
 IMHO, this type of thing is easy to understand.
Yeah, well, I still get regular emails (for the last 20 years at least) from the gamut of professional programmers at all levels of expertise who do not understand what "undefined symbol" from the linker means. It happens so often I am forced to consider the idea that the defect lies with me <g>. If I could figure a way to design *that* out of a linker, I would.
For every extern generate a weak symbol that does nothing but assert out with an error message; if it's properly resolved it goes away, if not then it's executed when the symbol is called. Now the linker isn't giving any errors. I actually remember doing that once! What the hell was I doing that for? Some kind of late binding malarkey maybe.
That's sort of cheating: the error's still there, it just gets shoved from build-time to run-time.
Mar 05 2009
prev sibling parent reply Kagamin <spam here.lot> writes:
 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
I doubt that blunt non-null forcing will solve this problem. If you're forced to use non-null, you'll invent a means to fool the compiler, some analogue of a null reference - a stub object, whose use will result in the same bug, with the difference that the application won't crash immediately, but will behave in an unpredictable way, at some point causing some other exception, so eventually you'll get your crash. The profit will be infinitesimal, if any.
Mar 06 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Kagamin" <spam here.lot> wrote in message 
news:goqoup$jta$1 digitalmars.com...
 I call it my billion-dollar mistake. It was the invention of the null
 reference in 1965. [...] This has led to innumerable errors,
 vulnerabilities, and system crashes, which have probably caused a
 billion dollars of pain and damage in the last forty years.
-- Sir Charles Hoare, Inventor of QuickSort, Turing Award Winner
 * Accessing arrays out-of-bounds
 * Dereferencing null pointers
 * Integer overflow
 * Accessing uninitialized variables

 50% of the bugs in Unreal can be traced to these problems!
I doubt that blunt non-null forcing will solve this problem. If you're forced to use non-null, you'll invent a means to fool the compiler, some analogue of a null reference - a stub object, whose use will result in the same bug, with the difference that the application won't crash immediately, but will behave in an unpredictable way, at some point causing some other exception, so eventually you'll get your crash. The profit will be infinitesimal, if any.
The idea is that non-null would not be forced, but rather be the default with an optional nullable for the times when it really is needed.
Mar 06 2009
parent reply Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 "Kagamin" <spam here.lot> wrote in message 
 I doubt that blunt non-null forcing will solve this problem. If you're 
 forced to use non-null, you'll invent a means to fool the compiler, some 
 analogue of a null reference - a stub object, whose use will result in the 
 same bug, with the difference that the application won't crash immediately, 
 but will behave in an unpredictable way, at some point causing some other 
 exception, so eventually you'll get your crash. The profit will be 
 infinitesimal, if any.
The idea is that non-null would not be forced, but rather be the default with an optional nullable for the times when it really is needed.
This is interesting. I wonder what the practical result of non-null as default will be. Do programmers bother to specify nullable when needed, or will they "try to do the [perceived] Right Thing" by assigning stupid default values? If the latter happens, then we really are worse off than with nulls. Then searching for the elusive bug will be much more work.
Mar 06 2009
parent reply "Nick Sabalausky" <a a.a> writes:
"Georg Wrede" <georg.wrede iki.fi> wrote in message 
news:gor5ft$1d6c$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Kagamin" <spam here.lot> wrote in message
 I doubt that blunt non-null forcing will solve this problem. If you're 
 forced to use non-null, you'll invent a means to fool the compiler, some 
 analogue of a null reference - a stub object, whose use will result in 
 the same bug, with the difference that the application won't crash 
 immediately, but will behave in an unpredictable way, at some point causing 
 some other exception, so eventually you'll get your crash. The profit will 
 be infinitesimal, if any.
The idea is that non-null would not be forced, but rather be the default with an optional nullable for the times when it really is needed.
This is interesting. I wonder what the practical result of non-null as default will be. Do programmers bother to specify nullable when needed, or will they "try to do the [perceived] Right Thing" by assigning stupid default values? If the latter happens, then we really are worse off than with nulls. Then searching for the elusive bug will be much more work.
Interesting point. We should probably keep an eye on the languages that use the "Foo" vs "Foo?" syntax for non-null vs nullable to see what usage patterns arise. Although, I generally have little more than contempt for programmers who blindly do what they were taught (by other amateurs) is usually "the right thing" without considering whether it really is appropriate for the situation at hand. Then again, I would think that there must be plenty of examples of things we already use that could make things worse if people used them improperly.
Mar 06 2009
parent Georg Wrede <georg.wrede iki.fi> writes:
Nick Sabalausky wrote:
 "Georg Wrede" <georg.wrede iki.fi> wrote in message 
 news:gor5ft$1d6c$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Kagamin" <spam here.lot> wrote in message
 I doubt that blunt non-null forcing will solve this problem. If you're 
 forced to use non-null, you'll invent a means to fool the compiler, some 
 analogue of a null reference - a stub object, whose use will result in 
 the same bug, with the difference that the application won't crash 
 immediately, but will behave in an unpredictable way, at some point causing 
 some other exception, so eventually you'll get your crash. The profit will 
 be infinitesimal, if any.
The idea is that non-null would not be forced, but rather be the default with an optional nullable for the times when it really is needed.
This is interesting. I wonder what the practical result of non-null as default will be. Do programmers bother to specify nullable when needed, or will they "try to do the [perceived] Right Thing" by assigning stupid default values? If the latter happens, then we really are worse off than with nulls. Then searching for the elusive bug will be much more work.
Interesting point. We should probably keep an eye on the languages that use the "Foo" vs "Foo?" syntax for non-null vs nullable to see what usage patterns arise. Although, I generally have little more than contempt for programmers who blindly do what they were taught (by other amateurs) is usually "the right thing" without considering whether it really is appropriate for the situation at hand. Then again, I would think that there must be plenty of examples of things we already use that could make things worse if people used them improperly.
An interesting thought occurred to me just now. IIRC, Walter's argument for always zeroing memory at allocation was to give "sensible starting values" and to "more easily see if data is uninitialised". If assignment before use is compulsory, then we don't need to zero out memory anymore. This ought to speed up data-intensive tasks.
Mar 06 2009