
digitalmars.D - Re: Null references redux

reply bearophile <bearophileHUGS lycos.com> writes:
Walter Bright:

 I used to work at Boeing designing critical flight systems. Absolutely 
 the WRONG failure mode is to pretend nothing went wrong and happily 
 return default values and show lovely green lights on the instrument 
 panel. The right thing is to immediately inform the pilot that something 
 went wrong and INSTANTLY SHUT THE BAD SYSTEM DOWN before it does 
 something really, really bad, because now it is in an unknown state. The 
 pilot then follows the procedure he's trained to, such as engage the backup.

Today we think this design is not the best one, because the pilot suddenly goes from a situation seen as safe, where the autopilot does most things, to a situation where the pilot has to do everything. It causes panic. A human needs time to understand the situation and act correctly. So a better solution is to fail gracefully, giving control back to the human progressively, with enough time to understand the situation. Some of the things you saw at Boeing can today be done better; there's been some progress in the design of human interfaces too. That's why I suggest you program in .NET C# for a few days.
 You could think of null exceptions like pain - sure it's unpleasant, but 
 people who feel no pain constantly injure themselves and don't live very 
 long. When I went to the dentist as a kid for the first time, he shot my 
 cheek full of novocaine. After the dental work, I went back to school. I 
 found to my amusement that if I chewed on my cheek, it didn't hurt.
 
 Boy was I sorry about that later <g>.

Oh my :-( Bye, bearophile
Sep 26 2009
next sibling parent language_fan <foo bar.com.invalid> writes:
Sat, 26 Sep 2009 19:27:51 -0400, bearophile thusly wrote:

 Some of the things you have seen at Boeing
 today can be done better, there's some progress in the design of human
 interfaces too. That's why I suggest you to program in dotnet C# for few
 days.

That is a really good suggestion. To me it seems that several known language authors have experimented with various kinds of languages before settling down. But Walter has only done assembler/C/C++/D/Java/Pascal? There are so many other important languages, such as Self, Eiffel, Scala, Scheme, SML, Haskell, Prolog, etc. It is not by any means harmful to know about their internals. There is a great deal of CS concepts to be learned only by studying the language cores.
Sep 26 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
bearophile wrote:
 Walter Bright:
 
 I used to work at Boeing designing critical flight systems.
 Absolutely the WRONG failure mode is to pretend nothing went wrong
 and happily return default values and show lovely green lights on
 the instrument panel. The right thing is to immediately inform the
 pilot that something went wrong and INSTANTLY SHUT THE BAD SYSTEM
 DOWN before it does something really, really bad, because now it is
 in an unknown state. The pilot then follows the procedure he's
 trained to, such as engage the backup.

Today we think this design is not the best one, because the pilot suddenly goes from a situation seen as safe where the autopilot does most things, to a situation where the pilot has to do everything. It causes panic.

I've never seen any suggestion that Boeing (or Airbus, or the FAA) has changed its philosophy on this. Do you have a reference?

I should also point out that this strategy has been extremely successful. Flying is inherently dangerous, yet is statistically incredibly safe. Boeing is doing a LOT right, and I would be extremely cautious of changing the philosophy that so far has delivered spectacular results.

BTW, shutting off the autopilot does not cause the airplane to suddenly nosedive. Airliner aerodynamics are designed to be stable and to seek straight and level flight if the controls are not touched. Autopilots do shut themselves off now and then, and the pilot takes command. Computers control a lot of systems besides the autopilot, too.
 A human needs time to understand the situation and act
 correctly. So a better solution is to fail gracefully, giving back
 the control to the human in a progressive way, with enough time to
 understand the situation. Some of the things you have seen at Boeing
 today can be done better,

Please give an example.

I'll give one. How about that crash in the Netherlands recently where the autopilot decided to fly the airplane into the ground? As I recall it was getting bad data from the altimeters. I have a firm conviction that if there's a fault in the altimeters, the pilot should be informed and get control back immediately, as opposed to thinking about a sandwich (or whatever) while the autopilot soldiered on. An emergency can escalate very, very fast when you're going 600 mph.

There have been cases of faults in the autopilot causing abrupt, bizarre maneuvers. This is why the autopilot must STOP IMMEDIATELY upon any fault which implies that the system is in an unknown state.

Failing gracefully is done by shutting down the failed system and engaging a backup, not by trying to convince yourself that a program in an unknown state is capable of continuing to function. Software simply does not work that way - one bit wrong and anything can happen.
 there's some progress in the design of
 human interfaces too. That's why I suggest you to program in dotnet
 C# for few days.

Sep 26 2009
next sibling parent reply Justin Johansson <procode adam-dott-com.au> writes:
Walter Bright Wrote:

 bearophile wrote:
 Walter Bright:
 
 I used to work at Boeing designing critical flight systems.
 Absolutely the WRONG failure mode is to pretend nothing went wrong
 and happily return default values and show lovely green lights on
 the instrument panel. The right thing is to immediately inform the
 pilot that something went wrong and INSTANTLY SHUT THE BAD SYSTEM
 DOWN before it does something really, really bad, because now it is
 in an unknown state. The pilot then follows the procedure he's
 trained to, such as engage the backup.

Today we think this design is not the best one, because the pilot suddenly goes from a situation seen as safe where the autopilot does most things, to a situation where the pilot has to do everything. It causes panic.

I've never seen any suggestion that Boeing (or Airbus, or the FAA) has changed its philosophy on this. Do you have a reference?

I should also point out that this strategy has been extremely successful. Flying is inherently dangerous, yet is statistically incredibly safe. Boeing is doing a LOT right, and I would be extremely cautious of changing the philosophy that so far has delivered spectacular results.

BTW, shutting off the autopilot does not cause the airplane to suddenly nosedive. Airliner aerodynamics are designed to be stable and to seek straight and level flight if the controls are not touched. Autopilots do shut themselves off now and then, and the pilot takes command. Computers control a lot of systems besides the autopilot, too.
 A human needs time to understand the situation and act
 correctly. So a better solution is to fail gracefully, giving back
 the control to the human in a progressive way, with enough time to
 understand the situation. Some of the things you have seen at Boeing
 today can be done better,

Please give an example.

I'll give one. How about that crash in the Netherlands recently where the autopilot decided to fly the airplane into the ground? As I recall it was getting bad data from the altimeters. I have a firm conviction that if there's a fault in the altimeters, the pilot should be informed and get control back immediately, as opposed to thinking about a sandwich (or whatever) while the autopilot soldiered on. An emergency can escalate very, very fast when you're going 600 mph.

There have been cases of faults in the autopilot causing abrupt, bizarre maneuvers. This is why the autopilot must STOP IMMEDIATELY upon any fault which implies that the system is in an unknown state.

Failing gracefully is done by shutting down the failed system and engaging a backup, not by trying to convince yourself that a program in an unknown state is capable of continuing to function. Software simply does not work that way - one bit wrong and anything can happen.
 there's some progress in the design of
 human interfaces too. That's why I suggest you to program in dotnet
 C# for few days.


Re:
 As I recall it was getting bad data from the 
 altimeters. I have a firm conviction that if there's a fault in the 
 altimeters, the pilot should be informed and get control back 
 immediately, as opposed to thinking about a sandwich (or whatever) while 
 the autopilot soldiered on.

Walter, in the heat of this thread I hope you haven't missed the correlation with the discussion on "Dispatching on a variant" and noting:

"Further, and worth mentioning given another raging thread on this forum at the moment, it turns out that ensuring type-safety of my design means that NPE's are a thing of the past (for me at least). This is due to strong static type checking together with runtime type validation, all for a pretty reasonable cost."

http://www.digitalmars.com/webnews/newsgroups.php?art_group=digitalmars.D&article_id=96847

Regards, Justin Johansson
Sep 26 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the correlation
with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. I just don't see them as a replacement for *all* reference types.
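The kind of non-nullable type one can build as a library can be sketched like this (shown here in Python rather than D, and only as an illustration; the `NonNull` name and API are ours, not Walter's):

```python
class NonNull:
    """Wrapper that guarantees its payload is never None.

    The check happens once, at construction; afterwards every
    holder of a NonNull can dereference without testing.
    """

    __slots__ = ("_value",)

    def __init__(self, value):
        if value is None:
            raise ValueError("NonNull cannot wrap None")
        self._value = value

    def get(self):
        # No null test needed here: the constructor already ruled it out.
        return self._value
```

The design choice is that the single validity check is paid up front, which is exactly why such a wrapper suits some designs but not all reference uses.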
Sep 26 2009
next sibling parent Justin Johansson <procode adam-dott-com.au> writes:
Walter Bright Wrote:

 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the correlation
with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. I just don't see them as a replacement for *all* reference types.

What you just said made me think that much of this thread is talking at cross-purposes. Perhaps the problem should be re-framed.

The example

    T bar;
    bar.foo();  // new magic in hypothetical D doesn't kill the canary just yet

is a bad example to base this discussion on. Something like

    T bar;
    mar.foo(bar);

is a better example to consider.

Forgetting about reference types for a moment, consider the following statements: "An int type is an indiscriminate union of negativeIntegerType, nonNegativeIntegerType, positiveIntegerType and other range-checked integer types. Passing around int's to functions that take int arguments, unless the full 32 bits of int is what you really mean, is akin to passing around an indiscriminate union value, which is a no-no."

Pondering this might well shed some light and set useful context for the overall discussion. In other words, it's not so much an argument about calling a method on a reference type; it's more about how to treat any type, value or reference, in a type-safe, discriminate manner.

Just a thought (?)
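Justin's range-checked integer idea can be sketched as a validating wrapper (in Python for illustration; the type and function names here are ours, echoing his hypothetical nonNegativeIntegerType):

```python
class NonNegativeInt:
    """Range-checked integer: construction fails for negatives,
    so any function taking a NonNegativeInt needs no range test."""

    def __init__(self, n):
        if not isinstance(n, int) or n < 0:
            raise ValueError("expected a non-negative integer")
        self.n = n


def advance(base, offset):
    # 'offset' is a NonNegativeInt, so no defensive check is required:
    # the type already discriminates the valid range.
    return base + offset.n
```

This mirrors the point about indiscriminate unions: once the range is part of the type, callees stop re-validating what the type system already promised.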
Sep 26 2009
prev sibling next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-09-26 22:07:00 -0400, Walter Bright <newshound1 digitalmars.com> said:

 [...] The facilities in D enable one to construct a non-nullable type, 
 and they are appropriate for many designs. I just don't see them as a 
 replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it.

So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Sep 26 2009
next sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-09-26 23:28:30 -0400, Michel Fortin <michel.fortin michelf.com> said:

 On 2009-09-26 22:07:00 -0400, Walter Bright <newshound1 digitalmars.com> said:
 
 [...] The facilities in D enable one to construct a non-nullable type, 
 and they are appropriate for many designs. I just don't see them as a 
 replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I just want to add: some people here are suggesting that the compiler add code to check for null and throw exceptions... I believe, like you, that this is the wrong approach because, like you said, it makes people add dummy try/catch statements to ignore the error. What you want a programmer to do is check for null and properly handle the situation before the error occurs, and this is exactly what the static analysis approach I suggest forces.

Take this example where "a" is non-nullable and "b" is nullable:

    string test(Object a, Object? b)
    {
        auto x = a.toString();
        auto y = b.toString();
        return x ~ y;
    }

This should result in a compiler error on line 4 with a message telling you that "b" needs to be checked for null prior to use. The programmer must then fix his error with an if (or some other control structure), like this:

    string test(Object a, Object? b)
    {
        auto result = a.toString();
        if (b)
            result ~= b.toString();
        return result;
    }

And now the compiler will let it pass. This is what I'd like to see. What do you think?

I'm not totally against throwing exceptions in some cases, but the above approach would be much more useful. Unfortunately, throwing exceptions is the best you can do with a library type approach.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Sep 26 2009
next sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 27/09/2009 05:45, Michel Fortin wrote:
 On 2009-09-26 23:28:30 -0400, Michel Fortin <michel.fortin michelf.com>
 said:

 On 2009-09-26 22:07:00 -0400, Walter Bright
 <newshound1 digitalmars.com> said:

 [...] The facilities in D enable one to construct a non-nullable
 type, and they are appropriate for many designs. I just don't see
 them as a replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I just want to add: some people here are suggesting that the compiler add code to check for null and throw exceptions... I believe, like you, that this is the wrong approach because, like you said, it makes people add dummy try/catch statements to ignore the error. What you want a programmer to do is check for null and properly handle the situation before the error occurs, and this is exactly what the static analysis approach I suggest forces.

Take this example where "a" is non-nullable and "b" is nullable:

    string test(Object a, Object? b)
    {
        auto x = a.toString();
        auto y = b.toString();
        return x ~ y;
    }

This should result in a compiler error on line 4 with a message telling you that "b" needs to be checked for null prior to use. The programmer must then fix his error with an if (or some other control structure), like this:

    string test(Object a, Object? b)
    {
        auto result = a.toString();
        if (b)
            result ~= b.toString();
        return result;
    }

And now the compiler will let it pass. This is what I'd like to see. What do you think?

I'm not totally against throwing exceptions in some cases, but the above approach would be much more useful. Unfortunately, throwing exceptions is the best you can do with a library type approach.

If you refer to my posts then I want to clarify: I fully agree with you that in the above this can and should be compile-time checked. This is a stricter approach and might seem annoying to some programmers but is far safer. Using non-null references by default will also restrict writing these checks to only the places where they are actually needed.

In my posts I was simply answering Walter's claim. Walter was saying that returning null is a valid, and in fact better, way to indicate errors instead of returning some "default" value which will cause the program to generate bad output. My response to that was that if there's an error, the function should instead throw an exception, which provides more information and better error handling. null is a bad way to indicate errors precisely because of the point you make: the compiler does not force the programmer to explicitly handle the null case, unlike the Option type in FP languages.
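The Option type Yigal mentions can be sketched as follows (in Python, with names of our choosing); the only accessor demands a handler for *both* cases, so the empty case can never be silently forgotten:

```python
class Option:
    """FP-style option type: the only way to reach the payload is
    match(), which requires a handler for BOTH the present and the
    empty case, so neither can be ignored."""

    def __init__(self, value=None):
        self._value = value

    def match(self, some, none):
        # Dispatch to exactly one of the two mandatory handlers.
        if self._value is not None:
            return some(self._value)
        return none()


def describe(opt):
    # The caller cannot even compile-in a "forgot the empty case" bug:
    # match() won't run without both callbacks.
    return opt.match(some=lambda v: f"got {v}", none=lambda: "empty")
```

This is the structural difference from a nullable reference: with null, handling the absent case is optional; with Option, it is part of the access protocol itself.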
Sep 27 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Michel Fortin wrote:
 On 2009-09-26 23:28:30 -0400, Michel Fortin <michel.fortin michelf.com> 
 said:
 
 On 2009-09-26 22:07:00 -0400, Walter Bright 
 <newshound1 digitalmars.com> said:

 [...] The facilities in D enable one to construct a non-nullable 
 type, and they are appropriate for many designs. I just don't see 
 them as a replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I just want to add: some people here are suggesting that the compiler add code to check for null and throw exceptions... I believe, like you, that this is the wrong approach because, like you said, it makes people add dummy try/catch statements to ignore the error. What you want a programmer to do is check for null and properly handle the situation before the error occurs, and this is exactly what the static analysis approach I suggest forces.

Take this example where "a" is non-nullable and "b" is nullable:

    string test(Object a, Object? b)
    {
        auto x = a.toString();
        auto y = b.toString();
        return x ~ y;
    }

This should result in a compiler error on line 4 with a message telling you that "b" needs to be checked for null prior to use. The programmer must then fix his error with an if (or some other control structure), like this:

    string test(Object a, Object? b)
    {
        auto result = a.toString();
        if (b)
            result ~= b.toString();
        return result;
    }

And now the compiler will let it pass. This is what I'd like to see. What do you think?

I'm not totally against throwing exceptions in some cases, but the above approach would be much more useful. Unfortunately, throwing exceptions is the best you can do with a library type approach.

I don't think this would fly. One good thing about nullable references is that they are dynamically checked for validity at virtually zero cost. Non-nullable references, therefore, would not add value in that respect, but would add value by reducing the cases when programmers forgot to initialize references properly. Andrei
Sep 27 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:

 One good thing about nullable references 
 is that they are dynamically checked for validity at virtually zero 
 cost. Non-nullable references, therefore, would not add value in that 
 respect, but would add value by reducing the cases when programmers 
 forgot to initialize references properly.

nonnullable references can also reduce the total amount of code a little, because you don't need to write the null tests as often (the points where you use objects are more numerous than the points where you instantiate them). Bye, bearophile
Sep 27 2009
prev sibling parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-09-27 09:41:03 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 Michel Fortin wrote:
 On 2009-09-26 23:28:30 -0400, Michel Fortin <michel.fortin michelf.com> said:
 
 On 2009-09-26 22:07:00 -0400, Walter Bright <newshound1 digitalmars.com> said:
 
 [...] The facilities in D enable one to construct a non-nullable type, 
 and they are appropriate for many designs. I just don't see them as a 
 replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I just want to add: some people here are suggesting that the compiler add code to check for null and throw exceptions... I believe, like you, that this is the wrong approach because, like you said, it makes people add dummy try/catch statements to ignore the error. What you want a programmer to do is check for null and properly handle the situation before the error occurs, and this is exactly what the static analysis approach I suggest forces.

Take this example where "a" is non-nullable and "b" is nullable:

    string test(Object a, Object? b)
    {
        auto x = a.toString();
        auto y = b.toString();
        return x ~ y;
    }

This should result in a compiler error on line 4 with a message telling you that "b" needs to be checked for null prior to use. The programmer must then fix his error with an if (or some other control structure), like this:

    string test(Object a, Object? b)
    {
        auto result = a.toString();
        if (b)
            result ~= b.toString();
        return result;
    }

And now the compiler will let it pass. This is what I'd like to see. What do you think?

I'm not totally against throwing exceptions in some cases, but the above approach would be much more useful. Unfortunately, throwing exceptions is the best you can do with a library type approach.

I don't think this would fly.

You want me to add wings? Please explain.
 One good thing about nullable references is that they are dynamically 
 checked for validity at virtually zero cost.

When you say they are dynamically checked, do you mean it throws an exception when you assign null? I'm not totally against this idea, but I find the above a superior solution because it forces the programmer to handle the problem where it occurs and it doesn't require any runtime check.
 Non-nullable references, therefore, would not add value in that 
 respect, but would add value by reducing the cases when programmers 
 forgot to initialize references properly.

To me it looks like you're supporting an inferior concept for non-nullable references because the better one will not "fly" (whatever that means). Well, I support both concepts of non-nullable because they both can be very useful, but I believe static checking is a better way than throwing exceptions.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Sep 27 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Michel Fortin wrote:
 On 2009-09-27 09:41:03 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> said:
 
 Michel Fortin wrote:
 On 2009-09-26 23:28:30 -0400, Michel Fortin 
 <michel.fortin michelf.com> said:

 On 2009-09-26 22:07:00 -0400, Walter Bright 
 <newshound1 digitalmars.com> said:

 [...] The facilities in D enable one to construct a non-nullable 
 type, and they are appropriate for many designs. I just don't see 
 them as a replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I just want to add: some people here are suggesting that the compiler add code to check for null and throw exceptions... I believe, like you, that this is the wrong approach because, like you said, it makes people add dummy try/catch statements to ignore the error. What you want a programmer to do is check for null and properly handle the situation before the error occurs, and this is exactly what the static analysis approach I suggest forces.

Take this example where "a" is non-nullable and "b" is nullable:

    string test(Object a, Object? b)
    {
        auto x = a.toString();
        auto y = b.toString();
        return x ~ y;
    }

This should result in a compiler error on line 4 with a message telling you that "b" needs to be checked for null prior to use. The programmer must then fix his error with an if (or some other control structure), like this:

    string test(Object a, Object? b)
    {
        auto result = a.toString();
        if (b)
            result ~= b.toString();
        return result;
    }

And now the compiler will let it pass. This is what I'd like to see. What do you think?

I'm not totally against throwing exceptions in some cases, but the above approach would be much more useful. Unfortunately, throwing exceptions is the best you can do with a library type approach.

I don't think this would fly.

You want me to add wings? Please explain.

I did explain. You suggest that we replace an automated, no-cost checking with a manual, compulsory, conservative, and costly scheme. That pretty much summarizes its disadvantages too :o). Andrei
Sep 27 2009
prev sibling parent reply Christopher Wright <dhasenan gmail.com> writes:
Michel Fortin wrote:
 On 2009-09-26 22:07:00 -0400, Walter Bright <newshound1 digitalmars.com> 
 said:
 
 [...] The facilities in D enable one to construct a non-nullable type, 
 and they are appropriate for many designs. I just don't see them as a 
 replacement for *all* reference types.

As far as I understand this thread, no one here is arguing that non-nullable references/pointers should replace *all* reference/pointer types. The argument made is that non-nullable should be the default and nullable can be specified explicitly any time you need it. So if you need a reference you use "Object" as the type, and if you want that reference to be nullable you write "Object?". The static analysis can then assert that your code properly checks for null prior to dereferencing a nullable type and issue a compilation error if not.

I dislike these forced checks.

Let's say you're dealing with a compiler frontend. You have a semantic node that just went through some semantic pass and is guaranteed, by control flow and contracts, to have a certain property initialized that was not initialized prior to that point. The programmer knows the value isn't null. The compiler shouldn't force checks. At most, it should have automated checks that disappear with -release.

Also, it introduces more nesting.

Also, unless the compiler's flow analysis is great, it's a nuisance -- you can see that the error is bogus and have to insert extra checks.

It should be fine to provide a requireNotNull template and leave it at that.
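A sketch of what such a requireNotNull helper could look like (in Python for illustration; the D template Christopher has in mind is not shown in the thread):

```python
def require_not_null(value, name="value"):
    """Assert-style check: fail loudly at the call site instead of
    letting a null propagate to some distant dereference later on."""
    if value is None:
        raise AssertionError(f"{name} must not be null")
    return value
```

Returning the value makes the check composable, e.g. `node = require_not_null(lookup(key), "node")`, and in a language with a -release mode the body could compile away entirely.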
Sep 27 2009
parent reply Michel Fortin <michel.fortin michelf.com> writes:
On 2009-09-27 07:38:59 -0400, Christopher Wright <dhasenan gmail.com> said:

 I dislike these forced checks.
 
 Let's say you're dealing with a compiler frontend. You have a semantic 
 node that just went through some semantic pass and is guaranteed, by 
 flow control and contracts, to have a certain property initialized that 
 was not initialized prior to that point.
 
 The programmer knows the value isn't null. The compiler shouldn't force 
 checks. At most, it should have automated checks that disappear with 
 -release.

If the programmer knows a value isn't null, why not put the value in a non-nullable reference in the first place?
 Also, it introduces more nesting.

Yes and no. It introduces an "if" statement for null checking, but only for nullable references. If you know your reference can't be null it should be non-nullable, and then you don't need to check.
 Also, unless the compiler's flow analysis is great, it's a nuisance -- 
 you can see that the error is bogus and have to insert extra checks.

First, you're right: if the feature is implemented it should be well implemented. Second, if in a few places you don't want an "if" clause, you can always cast your nullable reference to a non-nullable one, explicitly bypassing the safeties. If you write a cast, you are making a conscious decision of not checking for null, which is much better than the current situation where it's very easy to forget to check for null.
 It should be fine to provide a requireNotNull template and leave it at that.

It's fine to have such a template. But it's not nearly as useful.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Sep 27 2009
parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Michel Fortin wrote:
 On 2009-09-27 07:38:59 -0400, Christopher Wright <dhasenan gmail.com> said:
 
 I dislike these forced checks.

 Let's say you're dealing with a compiler frontend. You have a semantic 
 node that just went through some semantic pass and is guaranteed, by 
 flow control and contracts, to have a certain property initialized 
 that was not initialized prior to that point.

 The programmer knows the value isn't null. The compiler shouldn't 
 force checks. At most, it should have automated checks that disappear 
 with -release.

If the programmer knows a value isn't null, why not put the value in a non-nullable reference in the first place?

It may not be nonnull for the entire lifetime of the reference.
 Also, it introduces more nesting.

Yes and no. It introduces an "if" statement for null checking, but only for nullable references. If you know your reference can't be null it should be non-nullable, and then you don't need to check.

I much prefer explicit null checks than implicit ones I can't control.
 Also, unless the compiler's flow analysis is great, it's a nuisance -- 
 you can see that the error is bogus and have to insert extra checks.

First, you're right: if the feature is implemented it should be well implemented. Second, if in a few places you don't want an "if" clause, you can always cast your nullable reference to a non-nullable one, explicitly bypassing the safeties. If you write a cast, you are making a conscious decision of not checking for null, which is much better than the current situation where it's very easy to forget to check for null.

That's just adding useless verbosity to the language.
 It should be fine to provide a requireNotNull template and leave it at 
 that.

It's fine to have such a template. But it's not nearly as useful.

It definitely is. The whole point is about reference initializations, not what they can or can't initialize to. What about non-NaN floats? Or non-invalid characters? I fear nonnull references are a first step in the wrong direction. The focus should be on adding variable initialization checks to the compiler, since that solves the issue for any variable, not just references. The flow analysis can also be reused for many other optimizations.
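To illustrate what such a definite-assignment check would flag, here is a sketch; the warnings in the comments are hypothetical (today's D default-initializes these variables instead):

```d
void main()
{
    int n;                // today: default-initialized to 0
    double d;             // today: default-initialized to NaN
    bool cond = true;

    if (cond)
        n = 1;
    // A definite-assignment pass would warn here:
    //   'n' may be used before assignment (only set on one path)
    //   'd' used before assignment
    // auto bad = n + d;

    n = 0;                // now definitely assigned on every path
    d = 0.0;
    auto sum = n + d;     // fine: both proven initialized
    assert(sum == 0.0);
}
```

The same analysis would cover references, floats, and characters alike, which is the argument being made above.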
Sep 27 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jeremie Pelletier:

 The focus should be 
 about implementing variable initialization checks to the compiler, since 
 this solves the issue with any variable, not just references. The flow 
 analysis can also be reused for many other optimizations.

Are you willing to give your help to implement about 5-10% of this feature? :-) Bye, bearophile
Sep 27 2009
parent Jeremie Pelletier <jeremiep gmail.com> writes:
bearophile wrote:
 Jeremie Pelletier:
 
 The focus should be 
 about implementing variable initialization checks to the compiler, since 
 this solves the issue with any variable, not just references. The flow 
 analysis can also be reused for many other optimizations.

Are you willing to give your help to implement about 5-10% of this feature? :-) Bye, bearophile

Sure, I would love to help implement flow analysis. I don't know enough of the current dmd semantic analysis internals yet, but I'm slowly getting there. Jeremie
Sep 27 2009
prev sibling parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Jarrett Billingsley wrote:
 On Sun, Sep 27, 2009 at 2:07 PM, Jeremie Pelletier <jeremiep gmail.com> wrote:
 
 Yes and no. It introduces an "if" statement for null checking, but only
 for nullable references. If you know your reference can't be null it should
 be non-nullable, and then you don't need to check.


Nonnull types do not create implicit null checks. Nonnull types DO NOT need to be checked. And nullable types WOULD force explicit null checks.

Forcing checks on nullables is just as bad; not all nullables need to be checked every time they're used.
 What about non-nan floats? Or non-invalid characters? I fear nonnull
 references are a first step in the wrong direction. The focus should be
 about implementing variable initialization checks to the compiler, since
 this solves the issue with any variable, not just references. The flow
 analysis can also be reused for many other optimizations.

    hash_t foo(Object o) { return o.toHash(); }
    foo(null); // bamf, I just killed your function.

Forcing initialization of locals does NOT solve all the problems that nonnull references would.

You didn't kill my function, you shot yourself in the foot. Something trivial to debug.
Sep 27 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Jarrett Billingsley:

 And if you have a nullable reference that you know is not null for the
 rest of the function? Just put "assert(x !is null)" and everything
 that follows will assume it's not null.

Asserts tend to vanish in release mode, so it may be better to use something different. A possibility is to use the enforce() some people have shown here. Another possibility is the very strange assume() of Visual C++, which I may appreciate for other purposes too: http://msdn.microsoft.com/en-us/library/1b3fsfxw(loband).aspx Bye, bearophile
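The difference between the two can be sketched in a few lines (the `Config`/`load` names are made up for illustration; `enforce` is the real one from Phobos' std.exception):

```d
import std.exception : enforce;

class Config { string name = "cfg"; }

// Hypothetical loader that may fail and return null.
Config load(bool ok) { return ok ? new Config : null; }

void main()
{
    auto c = load(true);

    // Compiled out with -release: useful as a debugging aid and an
    // optimizer hint, useless as a safety net in shipped code.
    assert(c !is null);

    // Survives -release: throws a catchable exception on null instead
    // of letting the program dereference it.
    enforce(c !is null, "load() returned null");

    assert(c.name == "cfg");
}
```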
Sep 27 2009
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the 
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs.

No. There is no means to disable default construction.
 I just 
 don't see them as a replacement for *all* reference types.

Non-nullable references should be the default. Andrei
Sep 26 2009
next sibling parent reply Jeremie Pelletier <jeremiep gmail.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the 
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs.

No. There is no means to disable default construction.
 I just don't see them as a replacement for *all* reference types.

Non-nullable references should be the default. Andrei

Like I said in another post of this thread, I believe the issue here is more about initializer semantics than null/non-null references. This is what's causing most of the errors anyway.

Can't the compiler just emit a warning if a variable is used before initialization, and allow "= null" to bypass this ("= void" would still be considered uninitialized)? Same thing for fields. It would be much more convenient than new type variants, both to implement and to use.

It could even be used for any type; the default initializer in D is a cute idea, but not a performance-friendly one. I would much prefer the compiler to allow "int a" but warn me if I use it before assigning anything to it, rather than assigning it to zero and then assigning it the value I wanted. "= void" is nice, but I'm pretty sure I'm way over a thousand uses of it so far.

Jeremie
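For readers unfamiliar with the two initialization modes being contrasted, a short sketch of actual D behavior:

```d
import std.math : isNaN;

void main()
{
    int a;          // default-initialized: always 0 in D
    double d;       // default-initialized: NaN, so misuse is visible
    assert(a == 0);
    assert(isNaN(d));

    // '= void' skips the default initializer entirely; reading it before
    // writing is undefined, so it is purely a performance escape hatch.
    int b = void;
    b = 10;         // must assign before any read
    assert(b == 10);
}
```

The complaint above is that without flow analysis, the only choices are the always-on default write or the completely unchecked `= void`.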
Sep 26 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Jeremie Pelletier wrote:
 It could even be used for any type, the default initializer in D is a 
 cute idea, but not a performance friendly one. I would much prefer the 
 compiler to allow "int a" but warn me if I use it before assigning 
 anything to it than assigning it to zero, and then assigning it to the 
 value I wanted. "= void" is nice but I'm pretty sure I'm way over a 
 thousand uses of it so far.

The compiler, when -O is used, should remove nearly all the redundant initializations.
Sep 26 2009
parent BCS <none anon.com> writes:
Hello Walter,

 Jeremie Pelletier wrote:
 
 It could even be used for any type, the default initializer in D is a
 cute idea, but not a performance friendly one. I would much prefer
 the compiler to allow "int a" but warn me if I use it before
 assigning anything to it than assigning it to zero, and then
 assigning it to the value I wanted. "= void" is nice but I'm pretty
 sure I'm way over a thousand uses of it so far.
 

 The compiler, when -O is used, should remove nearly all the redundant initializations.

Sweet, so you already have a bunch of the logic needed to make sure non-null references get initialized.
Sep 27 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the 
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs.

No. There is no means to disable default construction.

Ack, I remember we talked about this, I guess I don't remember the resolution.
Sep 26 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the 
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs.

No. There is no means to disable default construction.

Ack, I remember we talked about this, I guess I don't remember the resolution.

The resolution was that the language will allow delete'ing the unwanted constructor:

    struct NonNull(T) if (is(T == class))
    {
        delete this();
        ...
    }

Andrei
Sep 27 2009
prev sibling parent Christopher Wright <dhasenan gmail.com> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the 
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs.

No. There is no means to disable default construction.

I looked into this slightly. You'd have to mark non-nullable fields as requiring ctor initialization, prevent reallocating arrays of non-nullables, and a few other things. At the time I wasn't considering struct constructors; without them, you'd have to forbid structs that contain non-nullable fields, but with them, it's okay.
 I just don't see them as a replacement for *all* reference types.

Non-nullable references should be the default. Andrei

Sep 27 2009
prev sibling parent Yigal Chripun <yigal100 gmail.com> writes:
On 27/09/2009 04:07, Walter Bright wrote:
 Justin Johansson wrote:
 Walter, in the heat of this thread I hope you haven't missed the
 correlation with discussion
 on "Dispatching on a variant" and noting:

Thanks for pointing it out. The facilities in D enable one to construct a non-nullable type, and they are appropriate for many designs. I just don't see them as a replacement for *all* reference types.

No one was claiming that. To reiterate: non-null references are *not* a replacement for *all* reference types, they are just a better, safer *default*. You can use nullable references when needed (that's what all the T? code snippets are about); it just isn't the default.
Sep 26 2009
prev sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Sun, Sep 27, 2009 at 3:42 PM, Jeremie Pelletier <jeremiep gmail.com> wrote:
 Jarrett Billingsley wrote:
 Nonnull types do not create implicit null checks. Nonnull types DO NOT
 need to be checked. And nullable types WOULD force explicit null
 checks.

Forcing checks on nullables is just as bad, not all nullables need to be checked every time they're used.

You don't get it, do you. If you have a reference that doesn't need to be checked every time it's used, you make it a *nonnull reference*. You *only* use nullable types for references where the nullness of the reference should change the program logic.

And if you're talking about things like:

    Foo? x = someFunc();
    if (x is null)
    {
        // one path
    }
    else
    {
        // use x here
    }

and you're expecting the "use x here" clause to force you to write (cast(Foo)x) every time you want to use x? That's not the case. The condition of the "if" has *proven* x to be nonnull in the else clause, so no null checks - at compile time or at runtime - have to be performed there, nor does it have to be cast to a nonnull reference.

And if you have a nullable reference that you know is not null for the rest of the function? Just put "assert(x !is null)" and everything that follows will assume it's not null.
 hash_t foo(Object o) { return o.toHash(); }
 foo(null); // bamf, I just killed your function.

 Forcing initialization of locals does NOT solve all the problems that
 nonnull references would.

You didn't kill my function, you shot yourself in the foot. Something trivial to debug.

You're dodging. You claim that forcing variable initialization solves the same problem that nonnull references do. It doesn't.
Sep 27 2009
prev sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Sun, Sep 27, 2009 at 2:07 PM, Jeremie Pelletier <jeremiep gmail.com> wrote:

 Yes and no. It introduces an "if" statement for null checking, but only
 for nullable references. If you know your reference can't be null it should
 be non-nullable, and then you don't need to check.

I much prefer explicit null checks than implicit ones I can't control.

Nonnull types do not create implicit null checks. Nonnull types DO NOT need to be checked. And nullable types WOULD force explicit null checks.
 What about non-nan floats? Or non-invalid characters? I fear nonnull
 references are a first step in the wrong direction. The focus should be
 about implementing variable initialization checks to the compiler, since
 this solves the issue with any variable, not just references. The flow
 analysis can also be reused for many other optimizations.

    hash_t foo(Object o) { return o.toHash(); }
    foo(null); // bamf, I just killed your function.

Forcing initialization of locals does NOT solve all the problems that nonnull references would.
Sep 27 2009