
digitalmars.D - Introducing Nullable Reference Types in C#. Is there hope for D, too?

reply Michael V. Franklin <slavo5150 yahoo.com> writes:
I ran into this blog post today:  
https://blogs.msdn.microsoft.com/dotnet/2017/11/15/nullable-reference-types-in-csharp/

It piqued my interest, because when I first started studying D, the lack of any warning or error for this trivial case surprised me.

// Example A
class Test
{
     int Value;
}

void main(string[] args)
{
     Test t;
     t.Value++;  // No compiler error, or warning.  Runtime error!
}
https://run.dlang.io/is/naTgHC



// Example B
class Test
{
     public int Value;
}
					
public class Program
{
     public static void Main()
     {
         Test t;
         t.Value++;  // Error: Use of unassigned local variable 't'
     }
}
https://dotnetfiddle.net/8diEiG

But, it's not perfect:

// Example C
class Test
{
     private static Test _instance;
     public static Test Instance
     {
         get { return _instance; }
     }

     public int Value;
}
					
public class Program
{	
     public static void Main()
     {
         Test t = Test.Instance;
         t.Value++;  // No compiler error, or warning.  Runtime error!
     }
}
https://dotnetfiddle.net/GEv2fh

With Microsoft's proposed change, the compiler will emit a 
warning for Example C.  If you want to opt out of the warning, 
you'll need to declare `_instance` as `Test? _instance` (see the 
'?' there).
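
For comparison, the closest D gets today is spelling the optional state out with std.typecons.Nullable - a sketch only, since nothing forces the isNull check:

// Making "a Test or nothing" explicit in today's D; unlike the C#
// proposal, the compiler does not require the check before .get.
import std.typecons : Nullable;

class Test
{
    int Value;
}

Nullable!Test maybeInstance;

void main()
{
    if (!maybeInstance.isNull)
        maybeInstance.get.Value++; // dereference only after the check
}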


Even so, Microsoft still considers it an improvement to the language worth pursuing. Is there hope for D, too?

Mike
Nov 16 2017
next sibling parent codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin 
wrote:
 // Example A
 class Test
 {
     int Value;
 }

 void main(string[] args)
 {
     Test t;
     t.Value++;  // No compiler error, or warning.  Runtime error!
 }
//Test t;
Test t = new Test;
Nov 16 2017
prev sibling next sibling parent reply codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin 
wrote:
 It piqued my interest, because when I first started studying 
 D, the lack of any warning or error for this trivial case 
 surprised me.

 // Example A
 class Test
 {
     int Value;
 }

 void main(string[] args)
 {
     Test t;
     t.Value++;  // No compiler error, or warning.  Runtime error!
 }
Also, if you start with nothing, and add 1 to it, you still end up with nothing, 'cause you started with nothing. That makes complete sense to me. So why should that be invalid?
Nov 16 2017
next sibling parent reply rumbu <rumbu rumbu.ro> writes:
On Friday, 17 November 2017 at 02:25:21 UTC, codephantom wrote:
 On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. 
 Franklin wrote:
 It piqued my interest, because when I first started studying 
 D, the lack of any warning or error for this trivial case 
 surprised me.

 // Example A
 class Test
 {
     int Value;
 }

 void main(string[] args)
 {
     Test t;
     t.Value++;  // No compiler error, or warning.  Runtime error!
 }
Also, if you start with nothing, and add 1 to it, you still end up with nothing, 'cause you started with nothing. That makes complete sense to me. So why should that be invalid?
You are not ending with nothing, you are ending with a runtime error. Something ending for sure in an error at run time must be caught at compile-time.
Nov 16 2017
parent reply codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 05:50:24 UTC, rumbu wrote:
 You are not ending with nothing, you are ending with a runtime 
 error. Something ending for sure in an error at run time must be 
 caught at compile-time.
Well.. sometimes it's just nice...to do nothing, and I'm glad D lets me do that. And the runtime should just stay out of it. It's always interfering in something - even when it's nothing.
Nov 16 2017
parent reply rumbu <rumbu rumbu.ro> writes:
On Friday, 17 November 2017 at 06:32:03 UTC, codephantom wrote:
 On Friday, 17 November 2017 at 05:50:24 UTC, rumbu wrote:
 You are not ending with nothing, you are ending with a runtime 
 error. Something ending for sure in an error at run time must be 
 caught at compile-time.
Well.. sometimes it's just nice...to do nothing, and I'm glad D lets me do that. And the runtime should just stay out of it. It's always interfering in something - even when it's nothing.
I don't even imagine how the runtime can stay out of dereferencing a null pointer. This is about safety. And safety is one of the D taglines.
Nov 17 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 09:44:01 UTC, rumbu wrote:

 This is about safety. And safety is one of the D taglines.
Well the nice thing about that discussion (at: https://blogs.msdn.microsoft.com/dotnet/2017/11/15/nullable-reference-types-in-csharp/ ) was that not everyone agreed. So maybe there is hope. Good points were made for 'not' making the change.

Sounds to me like too many people are writing incorrect code in the first place, and want to offload responsibility onto something other than themselves. This is why we have bloated runtime checks these days. I've always thought writing the correct code was the better option anyway. Maybe even (god forbid).. test it.

As for safety in D....it's already there.

.....

module test;

import std.stdio;

static immutable Exception TheYouIdiot_Exception = new Exception("You idiot!");

class Test
{
    int Value;
}

void main()
{
    Test t; // let's assume this was intended. Otherwise the fault starts here.
    // ..
    if (!t) throw TheYouIdiot_Exception; // problem solved.

    t.Value++;
}
Nov 17 2017
next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Friday, 17 November 2017 at 10:45:13 UTC, codephantom wrote:
 Sounds to me like too many people are writing incorrect code 
 in the first place,
Also known as "writing code".
 and want to offload responsibility onto something other than 
 themself.
That's the whole point of using a safe language, otherwise we'd be fine with C.
 This is why we have bloated runtime checks these days.
The post was talking about compile-time checks. I'll take bloated runtime checks over memory corruption any day of the week and twice on Sundays.

Atila
Nov 17 2017
parent codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 12:18:47 UTC, Atila Neves wrote:
 That's the whole point of using a safe language, otherwise we'd 
 be fine with C.
Personally, I would prefer to teach new students to program in C first - precisely because it's an unsafe language - or at least, one that can be used unsafely. (That's how I first learnt to program - and actually, I taught myself.) Because of C, I 'had to' learn how to write code in a defensive manner.

These days people often start with a safe language instead, and often use it within an overly sophisticated IDE (a bit like having your mother hold your hand every time you cross the road). I think that encourages laziness, in terms of defensive programming/thinking. Programmers become complacent and leave too much stuff up to compile-time checks.

I think people can write more correct code in the beginning, by simply changing the way they think about the code and how it might interact in the wider ecosystem...and maybe even by not relying on sophisticated IDEs (at least at the early stages).

Of course compile-time checks are needed. But they should not be at the expense of writing code correctly in the first place. They should come in at the latter stage of defensive programming, not the first stage. If you check the validity of an object before going on to reference/modify it, then no compile-time check is ever needed.

There's a nice Dr. Dobb's article about defensive programming here:
http://www.drdobbs.com/defensive-programming/184401915
Nov 17 2017
prev sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Friday, 17 November 2017 at 10:45:13 UTC, codephantom wrote:
 I've always thought writing the correct code was the better 
 option anyway.
It is interesting that you mention this. Our product manager was talking to our senior developer about this very thing. He explained that it was a method of development that an employee at his previous company came up with, and that the approach was very effective once implemented.

Our senior developer has really taken charge on this and is really pushing the other developers to just stop writing bugs into the program (it really hasn't been helping the company make money). It's been a little rocky start, but what new policy isn't? I really think this is going to be a savior to our company and that others should adopt it.

codephantom, go forth and spread the knowledge that we should stop writing bugs into our programs and instead start with correct code; you won't lose your job over it.
Nov 17 2017
parent codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 15:27:06 UTC, Jesse Phillips wrote:
 It is interesting that you mention this. Our product manager 
 was talking to our senior developer about this very thing. He 
 explained that it was a method of development that an employee 
 at his previous company came up with, and that the approach was 
 very effective once implemented.
Just give it a nice 'product' name, like "Conscientious Defensive Programming". Now it's a lot easier to get people on board, even senior management ;-)
Nov 17 2017
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:

 This is about safety. And safety is one of the D taglines.
Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue. If you dereference a null pointer or reference, your program will segfault. No memory is corrupted, and no memory that should not be accessed is accessed.

If dereferencing a null pointer or reference in a program were a memory safety issue, then we'd either have to make it illegal to dereference references or pointers in @safe code or add additional runtime null checks beyond what already happens with segfaults, since aside from having non-nullable pointers/references, in the general case, we can't guarantee that a pointer or reference isn't null. At best, the compiler can detect it in certain instances (e.g. when a variable was initialized to null or assigned null, and it wasn't passed to anything else before it was used), but in most cases, it can't know.

So, this is purely about the compiler detecting a certain class of bug in programs and giving a warning or error when it does, not about memory safety.
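
To illustrate (a minimal sketch): this compiles under @safe today, and the dereference fails cleanly at run time:

// Dereferencing null is allowed in @safe D code because the result
// is a clean segfault, not corruption.
@safe void main()
{
    int* p = null;
    *p = 42; // accepted by the compiler; crashes with SIGSEGV when run
}

- Jonathan M Davis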
Nov 17 2017
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.11.2017 12:22, Jonathan M Davis wrote:
 On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:

 This is about safety. And safety is one of the D taglines.
Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue.
Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.
 If
 you dereference a null pointer or reference, your program will segfault. No
 memory is corrupted, and no memory that should not be accessed is accessed.
 If dereferencing a null pointer or reference in a program were a memory
 safety issue, then we'd either have to make it illegal to dereference
 references or pointers in @safe code or add additional runtime null checks
 beyond what already happens with segfaults, since aside from having
 non-nullable pointers/references, in the general case, we can't guarantee
 that a pointer or reference isn't null.
There are type systems that do that, which is what is being proposed for C#. It's pretty straightforward: If I have a variable of class reference type C, it actually contains a reference to a class instance of type C.
Nov 17 2017
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, November 17, 2017 15:05:48 Timon Gehr via Digitalmars-d wrote:
 On 17.11.2017 12:22, Jonathan M Davis wrote:
 On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:

 This is about safety. And safety is one of the D taglines.
Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue.
Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.
This is definitely not how it is viewed in D. Walter has repeatedly stated that dereferencing a null pointer is considered @safe, because doing so will not corrupt memory or access memory that it should not access - and that's all that @safe cares about. Whether there's an object of the correct type at that location or not is irrelevant, because it's null.

You do have a memory safety issue if you somehow make the pointer or reference refer to an object of a different type than the reference or pointer is allowed to point to, but doing that requires getting around the type system via casting, which would not be allowed in @safe code, and badly written @trusted code can always screw up @safe code. Regardless, given that dereferencing null will segfault, it does not present an @safety problem.

The only issue with dereferencing a null pointer in @safe code is that if the type is sufficiently large (larger than a page of memory IIRC), you don't actually get a segfault, and that hole does need to be plugged by having the compiler add runtime checks where needed. But most null pointers/references do not have that problem.
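
A hedged sketch of that hole (the sizes here are illustrative assumptions):

// If a field's offset is pushed past the unmapped guard region at
// address 0, accessing it through a null reference may not fault at
// all - which is the hole that needs the runtime check.
class Huge
{
    ubyte[8192] pad; // pushes `value` well past a typical 4 KiB page
    int value;       // ends up at an offset of roughly 8 KiB
}

void main()
{
    Huge h = null;
    // h.value = 1; // writes to address null + ~8192; whether this
    //              // segfaults depends on what the OS maps near 0
}

- Jonathan M Davis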
Nov 17 2017
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Friday, 17 November 2017 at 14:53:40 UTC, Jonathan M Davis 
wrote:
 [snip] Regardless, given that dereferencing null will segfault, 
 it does not present an @safety problem.
@safe is really more of @memorysafe. Null safety is orthogonal to memory safety.

I don't really use null much in D currently, so this isn't all that important to me ATM. Regardless, I could imagine that if one were writing a language from scratch, you could have a default of @nullsafe where there is a compile-time error if you violate some null safety rules. You could then have a @nullunsafe where these compile-time errors are disabled and you throw an exception at run-time if you do. @nullunsafe is effectively the default in most languages.
Nov 17 2017
prev sibling next sibling parent reply codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 14:53:40 UTC, Jonathan M Davis 
wrote:
 Regardless, given that dereferencing null will segfault, it 
 does not present an @safety problem.
"A notion of safety is always relative to some criterion". If your code dereferences a null pointer, and the program segfaults, and that program is supplying me with the oxygen i need to survive...then its probably not safe ;-)
Nov 17 2017
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 18 November 2017 at 03:33:00 UTC, codephantom wrote:
 If your code dereferences a null pointer, and the program 
 segfaults, and that program is supplying me with the oxygen i 
 need to survive...then its probably not safe ;-)
If you have a life-essential system that can't survive a single part randomly failing, including a process terminating abnormally, you're an incompetent engineer.
Nov 17 2017
parent reply codephantom <me noyb.com> writes:
On Saturday, 18 November 2017 at 03:39:50 UTC, Adam D. Ruppe 
wrote:
 If you have a life-essential system that can't survive a single 
 part randomly failing, including a process terminating 
 abnormally, you're an incompetent engineer.
First semester, programming course. Write a life-essential system in C, and simulate it. If patient dies, you fail.
Nov 17 2017
parent codephantom <me noyb.com> writes:
On Saturday, 18 November 2017 at 04:56:19 UTC, codephantom wrote:
 First semester, programming course.

 Write a life-essential system in C, and simulate it.

 If patient dies, you fail.
Second semester. Find a vulnerability in another student's semester 1 project. If you succeed, they fail the whole year and you pass second semester.

I bet we'd see a lot of conscientious defensive programming in semester 1 ;-)
Nov 17 2017
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 17.11.2017 15:53, Jonathan M Davis wrote:
 On Friday, November 17, 2017 15:05:48 Timon Gehr via Digitalmars-d wrote:
 On 17.11.2017 12:22, Jonathan M Davis wrote:
 On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:

 This is about safety. And safety is one of the D taglines.
Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue.
Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.
This is definitely not how it is viewed in D. Walter has repeatedly stated that dereferencing a null pointer is considered @safe, because doing so will not corrupt memory or access memory that it should not access - and that's all that @safe cares about.
The current discussion is about how safety *should* be viewed in D in the future, as in, potentially /changing/ how it is viewed. This means rehashing the status quo without giving justification for it is not useful.

Why *should* @safe only mean "does not corrupt memory" or "does not access memory that it should not access"? Why can't it also mean "does not attempt to dereference null pointers"? Note that it is up to the language to _define_ what @safe does and does not mean. If the language evolves, that meaning may evolve too.
Nov 18 2017
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Saturday, November 18, 2017 15:24:49 Timon Gehr via Digitalmars-d wrote:
 On 17.11.2017 15:53, Jonathan M Davis wrote:
 On Friday, November 17, 2017 15:05:48 Timon Gehr via Digitalmars-d 
wrote:
 On 17.11.2017 12:22, Jonathan M Davis wrote:
 On Friday, November 17, 2017 09:44:01 rumbu via Digitalmars-d wrote:

 This is about safety. And safety is one of the D taglines.
Completely aside from whether having the compile-time checks would be good or not, I would point out that this isn't actually a memory safety issue.
Memory safety is not the only kind of safety. Also, memory safety is usually formalized as (type) preservation which basically says that every memory location actually contains a value of the correct type. Hence, as soon as you have non-nullable pointers in the type system, this _becomes_ a memory safety issue.
This is definitely not how it is viewed in D. Walter has repeatedly stated that dereferencing a null pointer is considered @safe, because doing so will not corrupt memory or access memory that it should not access - and that's all that @safe cares about.
The current discussion is about how safety *should* be viewed in D in the future, as in, potentially /changing/ how it is viewed. This means rehashing the status quo without giving justification for it is not useful. Why *should* @safe only mean "does not corrupt memory" or "does not access memory that it should not access"? Why can't it also mean "does not attempt to dereference null pointers"? Note that it is up to the language to _define_ what @safe does and does not mean. If the language evolves, that meaning may evolve too.
@safe as it stands provides a way to segregate code that potentially introduces memory corruption and invalid memory accesses. Trying to add anything related to null pointer dereferencing would just increase the amount of code that would have to be @system, making it harder to deal with problems actually related to memory corruption.

And really, to avoid the possibility of dereferencing null would require introducing non-nullable pointers and references, because you're never going to be able to guarantee it for nullable pointers or references, and honestly, I think that it would destroy @safe to treat dereferencing nullable pointers or references as @system. Too much code would be @system, making it far more difficult to segregate code that dealt with actual memory safety issues. Perhaps there would be some value in having non-nullable pointers or references, but they're an issue orthogonal to memory safety.

And honestly, I'm not even vaguely convinced that null pointers or references are much of a real problem. Personally, pretty much the only time I run into problems with them is when I forget to initialize a class reference. It's incredibly rare that I end up with a program that ends up dereferencing null. And when it does, you get a segfault and potentially a core dump, and the problem is usually easy to fix.

Really, I don't think that dealing with potentially dereferencing null is all that different from dealing with potentially dividing by zero. It sucks when it happens, because it kills your program, but it's really not all that hard to avoid. And at least when it does happen, it's obvious rather than introducing subtle and hard to track down problems like you frequently get with code that isn't memory safe.

- Jonathan M Davis
Nov 18 2017
parent RomanZ <zrrole ya.ru> writes:
Is it possible to somehow change the concept of uninitialized 
values to something like 'zeroOrOne' instead of 'null'?


   Instrument instrument1 = new Instrument();
   Instrument instrument2 = new Instrument();

   zeroOrOne!Instrument getInstrument() {
	  zeroOrOne!Instrument instrument;
	  if( instrument1.power > 10 ) instrument = instrument1;
	  else if( instrument2.power > 5 ) instrument = instrument2;
	  return instrument;
   }

   //zeroOrOne!Instrument instead of 'InstrumentOrNull'
   auto instrument = getInstrument();

   /*1*/ instrument.setPower( 20.4f );
   /*2*/ instrument.doit();

   /*3*/ instrument1.setPower = 20;

   instrument = getInstrument();
   /*4*/ instrument.doit();
   /*5*/ instrument.setPower( 20.4f );

   /*
   1: opDispatch: setPower
   2: opDispatch: doit

   3: Instrument.setPower: 20

   4: opDispatch: doit
   4: Instrument.doit;
   5: opDispatch: setPower
   5: Instrument.setPower: 20.4
   */
Nov 18 2017
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/17/2017 6:05 AM, Timon Gehr wrote:

 It's pretty straightforward: If I have a variable of class reference 
 type C, it actually contains a reference to a class instance of type C.
One of the difficulties with this is you'll still need an "empty" instance of C for the non-null reference to point to. Any attempts to use a method on the empty instance should throw. Which is pretty much what a null reference does. (It's also more or less what floating point NaNs do, where every operation on a NaN produces a NaN as a result.)
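
A sketch of what that looks like (essentially the Null Object pattern; the names here are made up):

// One shared "empty" instance that non-null references can point to;
// any attempt to actually use it throws.
class C
{
    void doWork() { /* normal behavior */ }
}

final class EmptyC : C
{
    override void doWork()
    {
        throw new Exception("method called on the empty instance");
    }
}

C emptyInstance; // the single "no object" value

static this() { emptyInstance = new EmptyC; }

void main()
{
    C c = emptyInstance; // never null...
    // c.doWork();       // ...but using it throws, much as null would crash
}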
Nov 17 2017
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.11.2017 05:05, Walter Bright wrote:
 On 11/17/2017 6:05 AM, Timon Gehr wrote:
 There are type systems that do that, which is what is being proposed 
 for C#. It's pretty straightforward: If I have a variable of class 
 reference type C, it actually contains a reference to a class instance 
 of type C.
One of the difficulties with this is you'll still need an "empty" instance of C for the non-null reference to point to.
Why would you need an empty instance? Just use a Nullable!C instead of a C if a special 'null' state is actually required.
 Any attempts to use a method on the empty instance should throw. 
The idea is that the type system makes such potential attempts explicit while verifying that they don't occur in most of the cases. Then you can grep for potential null dereferences.
Nov 18 2017
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/18/2017 6:16 AM, Timon Gehr wrote:
 On 18.11.2017 05:05, Walter Bright wrote:
 On 11/17/2017 6:05 AM, Timon Gehr wrote:

 It's pretty straightforward: If I have a variable of class reference type C, 
 it actually contains a reference to a class instance of type C.
One of the difficulties with this is you'll still need an "empty" instance of C for the non-null reference to point to.
Why would you need an empty instance?
Consider the ClassDeclaration in the DMD source. Each ClassDeclaration has a pointer to its base class, `baseClass`. Except for `Object`, which doesn't have a base class. This is represented by assigning `null` to `baseClass`.

So I can run up the base class list by:

    for (b = c; b; b = b.baseClass) { ... }

If it cannot be null, I just have to invent something else that does the same thing:

    for (b = c; b != nullInstanceOfClass; b = b.baseClass) { ... }

and nothing really has changed.
 Just use a Nullable!C instead of a C if a 
 special 'null' state is actually required.
What should the default initializer for a type do?
 Any attempts to use a method on the empty instance should throw. 
The idea is that the type system makes potential such attempts explicit while verifying that they don't occur in most of the cases. Then you can grep for potential null dereferences.
There are cases where the actual path taken guarantees initialization, but the graph of all paths does have uninitialized edges. Figuring out which paths are never taken is the halting problem. I found this out when testing my DFA (data flow analysis) algorithms.

   void test(int i) {
      int* p = null;
      if (i) p = &i;
      ...
      if (i) *p = 3;
      ...
   }

Note that the code is correct, but DFA says the *p could be a null dereference. (Replace (i) with any complex condition that there's no way the DFA can prove always produces the same result the second time.)
Nov 18 2017
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.2017 01:07, Walter Bright wrote:
 On 11/18/2017 6:16 AM, Timon Gehr wrote:
 On 18.11.2017 05:05, Walter Bright wrote:
 On 11/17/2017 6:05 AM, Timon Gehr wrote:
 There are type systems that do that, which is what is being proposed 
 for C#. It's pretty straightforward: If I have a variable of class 
 reference type C, it actually contains a reference to a class 
 instance of type C.
One of the difficulties with this is you'll still need an "empty" instance of C for the non-null reference to point to.
Why would you need an empty instance?
Consider the ClassDeclaration in the DMD source. Each ClassDeclaration has a pointer to its base class, `baseClass`. Except for `Object`, which doesn't have a base class. This is represented by assigning `null` to `baseClass`. ...
I.e., baseClass should have type Nullable!ClassDeclaration. This does not in any form imply that ClassDeclaration itself needs to have a null value.
 So I can run up the base class list by:
 
      for (b = c; b; b = b.baseClass) { ... }
 
 If it cannot be null, I just have to invent something else that does the 
 same thing:
 
      for (b = c; b != nullInstanceOfClass; b = b.baseClass) { ... }
 
 and nothing really has changed.
 ...
Nullable!ClassDeclaration can be null, so this is not relevant.
 Just use a Nullable!C instead of a C if a special 'null' state is 
 actually required.
What should the default initializer for a type do? ...
There should be none for non-nullable types.
 
 Any attempts to use a method on the empty instance should throw. 
The idea is that the type system makes potential such attempts explicit while verifying that they don't occur in most of the cases. Then you can grep for potential null dereferences.
There are cases where the actual path taken guarantees initialization, but the graph of all paths does have uninitialized edges. Figuring out which are paths never taken is the halting problem.
The same applies to all other errors prevented by a type system. It's just not a useful argument.

The halting problem is undecidable mostly because it is possible to write ridiculous programs. The ones we write in practice are often easier to understand (especially when they come with some useful documentation), because they were /designed/ to serve a particular purpose. Note that the undecidability of the halting problem is not something that applies exclusively to programs, it also applies to programmers.
 I found this out 
 when testing my DFA (data flow analysis) algorithms.
 
    void test(int i) {
      int* p = null;
      if (i) p = &i;
      ...
      if (i) *p = 3;
      ...
    }
 
 Note that the code is correct, but DFA says the *p could be a null 
 dereference. (Replace (i) with any complex condition that there's no way 
 the DFA can prove always produces the same result the second time.)
Yes, there is a way. Put in an assertion. Of course, at that point you are giving up, but this is not the common case. Also, you can often just write the code in a way that the DFA will understand. We are doing this all the time in statically-typed programming languages.
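
Applied to the example, the sketch would be:

// The assert states (and checks at run time) the invariant that the
// data flow analysis cannot prove on its own.
void test(int i)
{
    int* p = null;
    if (i) p = &i;
    // ...
    if (i)
    {
        assert(p !is null); // documents why the dereference is fine
        *p = 3;
    }
    // ...
}

void main() { test(1); }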
Nov 18 2017
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/18/2017 6:25 PM, Timon Gehr wrote:
 I.e., baseClass should have type Nullable!ClassDeclaration. This does not in any form imply that ClassDeclaration itself needs to have a null value.
Converting back and forth between the two types doesn't sound appealing.
 What should the default initializer for a type do?
There should be none for non-nullable types.
I suspect you'd wind up needing to create an "empty" object just to satisfy that requirement. Such as for arrays of objects, or objects with a cyclic graph.

Interestingly, `int` isn't nullable, and we routinely use rather ugly hacks to fake it being nullable, like reserving a bit pattern like 0, -1 or 0xDEADBEEF and calling it INVALID_VALUE, or carrying around some other separate flag that says if it is valid or not. These are often rich sources of bugs.

As you can guess, I happen to like null, because there are no hidden bugs from pretending it is a valid value - you get an immediate program halt - rather than subtly corrupted results. Yes, my own code has produced seg faults from erroneously assuming a value was not null. But it wouldn't have been better with non-nullable types, since the logic error would have been hidden and may have been much, much harder to recognize and track down.

I wish there was a null for int types. At least we sort of have one for char types, 0xFF. And there's that lovely NaN for floating point! Too bad it's woefully underused.
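
The sentinel hacks above look roughly like this (a sketch; apart from INVALID_VALUE, the names are made up):

// Variant 1: a reserved bit pattern standing in for "no value".
enum int INVALID_VALUE = -1;

int indexOf(const int[] haystack, int needle)
{
    foreach (i, v; haystack)
        if (v == needle)
            return cast(int) i;
    return INVALID_VALUE; // every caller must remember to check for this
}

// Variant 2: a separate validity flag, just as easy to misuse.
struct MaybeInt
{
    int value;
    bool valid; // forgetting to test this silently uses a meaningless 0
}

void main()
{
    assert(indexOf([10, 20, 30], 20) == 1);
    assert(indexOf([10, 20, 30], 99) == INVALID_VALUE);
}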
 I found this out when testing my DFA (data flow analysis) algorithms.

    void test(int i) {
      int* p = null;
      if (i) p = &i;
      ...
      if (i) *p = 3;
      ...
    }

 Note that the code is correct, but DFA says the *p could be a null 
 dereference. (Replace (i) with any complex condition that there's no way the 
 DFA can prove always produces the same result the second time.)
Yes, there is a way. Put in an assertion. Of course, at that point you are giving up, but this is not the common case.
An assertion can work, but doesn't it seem odd to require adding a runtime check in order to get the code to compile? (This is subtly different from the current use of assert(0) to flag unreachable code.)
 Also, you can often just write the 
 code in a way that the DFA will understand. We are doing this all the time in 
 statically-typed programming languages.
I didn't invent this case. I found it in real code; it happens often enough. The cases are usually much more complex, I just posted the simplest reduction. I was not in a position to tell the customer to restructure his code, though :-)
Nov 18 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright wrote:
 I wish there was a null for int types. At least we sort of have 
 one for char types, 0xFF. And there's that lovely NaN for 
 floating point! Too bad it's woefully underused.
"I wish there was a null for int types." +1000
Nov 18 2017
parent Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Sunday, 19 November 2017 at 04:19:32 UTC, codephantom wrote:
 On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright 
 wrote:
 I wish there was a null for int types. At least we sort of 
 have one for char types, 0xFF. And there's that lovely NaN for 
 floating point! Too bad it's woefully underused.
"I wish there was a null for int types." +1000
Yes. The only value that can sometimes be used as an invalid value is int.min.
Nov 19 2017
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.2017 05:04, Walter Bright wrote:
 On 11/18/2017 6:25 PM, Timon Gehr wrote:
 I.e., baseClass should have type Nullable!ClassDeclaration. This does 
 not in any form imply that ClassDeclaration itself needs to have a 
 null value.
Converting back and forth between the two types doesn't sound appealing. ...
I can't see the problem. You go from nullable to non-nullable by checking for null, and the other direction happens implicitly.
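
Modeled with Phobos' std.typecons.Nullable, since D has no built-in non-nullable references (a sketch; the function names are made up):

import std.typecons : Nullable, nullable;

void takeNonNull(int x) { /* x is a definite value here */ }

void demo(Nullable!int maybe)
{
    if (!maybe.isNull)
        takeNonNull(maybe.get); // nullable -> non-nullable, via the check

    Nullable!int m = nullable(42); // non-nullable -> nullable, trivially
}

void main() { demo(nullable(1)); }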
 
 What should the default initializer for a type do?
There should be none for non-nullable types.
I suspect you'd wind up needing to create an "empty" object just to satisfy that requirement. Such as for arrays of objects, or objects with a cyclic graph. ...
The change makes the type system strictly more expressive. There is nothing that cannot be done after the change that was possible before; it's just that the language allows you to document and verify intent better.
 Interestingly, `int` isn't nullable, and we routinely use rather ugly 
 hacks to fake it being nullable, like reserving a bit pattern like 0, -1 
 or 0xDEADBEEF and calling it INVALID_VALUE, or carrying around some 
 other separate flag that says if it is valid or not. These are often 
 rich sources of bugs.
 
 As you can guess, I happen to like null, because there are no hidden 
 bugs from pretending it is a valid value - you get an immediate program 
 halt - rather than subtly corrupted results.
 ...
Making null explicit in the type system is compatible with liking null. (In fact, it is an endorsement of null. There are other options to accommodate optional values in your language.)
 Yes, my own code has produced seg faults from erroneously assuming a 
 value was not null. But it wouldn't have been better with non-nullable 
 types, since the logic error would have been hidden
It was your own design decision to hide the error. This is not something that a null-aware type system promotes, and I doubt this is what you would be promoting if mainstream type systems had gone that route earlier.
 and may have been 
 much, much harder to recognize and track down.
No, it would have been better because you would have been used to the more explicit system from the start and you would have just written essentially the same code with a few more compiler checks in those cases where they apply, and perhaps you would have suffered a handful fewer null dereferences.

Being able to document intent across programmers in a compiler-checked way is also useful, even if one manages to remember all assumptions that are valid about one's own code. Note that the set of valid assumptions may change as the code base evolves.

The point of types is to classify values into categories such that types in the same category support the same operations. It is not very clean to have a special null value in all those types that does not support any of the operations that references are supposed to support. Decoupling the two concepts into references and optionality gets rid of this issue, cleaning up both concepts.
 I wish there was a null 
 for int types.
 At least we sort of have one for char types, 0xFF. And 
 there's that lovely NaN for floating point! Too bad it's woefully 
 underused.
 ...
It can also be pretty annoying. It really depends on the use case. Also this is in direct contradiction with your earlier points. NaNs don't usually blow up.
 
 I found this out when testing my DFA (data flow analysis) algorithms.

    void test(int i) {
      int* p = null;
      if (i) p = &i;
      ...
      if (i) *p = 3;
      ...
    }

 Note that the code is correct, but DFA says the *p could be a null 
 dereference. (Replace (i) with any complex condition that there's no 
 way the DFA can prove always produces the same result the second time.)
Yes, there is a way. Put in an assertion. Of course, at that point you are giving up, but this is not the common case.
An assertion can work, but doesn't it seem odd to require adding a runtime check in order to get the code to compile? ...
Not really. The runtime check is otherwise just implicit in every pointer dereference (though here there happens to be hardware support for that check).
 (This is subtly different from the current use of assert(0) to flag 
 unreachable code.)
 ...
It's adding a runtime check in order to get the code to compile. ;)
 
 Also, you can often just write the code in a way that the DFA will 
 understand. We are doing this all the time in statically-typed 
 programming languages.
I didn't invent this case. I found it in real code; it happens often enough. The cases are usually much more complex, I just posted the simplest reduction. I was not in a position to tell the customer to restructure his code, though :-)
I don't doubt that this happens. I'm just saying that often enough it does not. (Especially if the check is in the compiler.) I'm not fighting for explicit nullable in D by the way. I'm mostly trying to dispel wrong notions of what it is.
Nov 19 2017
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/19/2017 11:36 AM, Timon Gehr wrote:
 On 19.11.2017 05:04, Walter Bright wrote:
 On 11/18/2017 6:25 PM, Timon Gehr wrote:
 I.e., baseClass should have type Nullable!ClassDeclaration. This does not in any form imply that ClassDeclaration itself needs to have a null value.
I can't see the problem. You go from nullable to non-nullable by checking for null, and the other direction happens implicitly.
Implicit conversions have their problems with overloading, interactions with const, template argument deduction, surprising edge cases, probably breaking a lot of Phobos, etc. It's best not to layer on more of this stuff. Explicit casting is a problem, too. There's also an issue of how to derive a class from a base class.
 What should the default initializer for a type do?
There should be none for non-nullable types.
I suspect you'd wind up needing to create an "empty" object just to satisfy that requirement. Such as for arrays of objects, or objects with a cyclic graph.
 The change makes the type system strictly more expressive. There is nothing that cannot be done after the change that was possible before; it's just that the language allows you to document and verify intent better.
This implies one must know all the use cases of a type before designing it.
 Yes, my own code has produced seg faults from erroneously assuming a value was 
 not null. But it wouldn't have been better with non-nullable types, since the 
 logic error would have been hidden
It was your own design decision to hide the error.
No, it was a bug. Nobody makes design decisions to insert bugs :-) The issue is how easy the bug is to have, and how difficult it would be to discover it.
 and may have been much, much harder to recognize and track down.
No, it would have been better because you would have been used to the more explicit system from the start and you would have just written essentially the same code with a few more compiler checks in those cases where they apply, and perhaps you would have suffered a handful fewer null dereferences.
I'm just not convinced of that.
 The point of types is to classify values into categories such that types in the 
 same category support the same operations. It is not very clean to have a 
 special null value in all those types that does not support any of the 
 operations that references are supposed to support. Decoupling the two concepts 
 into references and optionality gets rid of this issue, cleaning up both concepts.
I do understand that point. But I'm not at all convinced that non-nullable types in aggregate result in cleaner, simpler code, for reasons already mentioned.
 I wish there was a null for int types.
Implemented as a pointer to int? That is indeed one way to do it, but rather costly.
 It can also be pretty annoying.
Yes, it can be annoying, so much better to have a number that looks like it might be right, but isn't, because 0.0 was used as a default initializer when it should have been 1.6. :-)
 It really depends on the use case. Also this is 
 in direct contradiction with your earlier points. NaNs don't usually blow up.
"blow up" - as I've said many times, I find the visceral aversion to seg faults puzzling. Why is that worse than belatedly discovering a NaN in your output, which you now have to back search it to its source? My attitude towards programming bugs is to have them immediately halt the program as soon as possible, so: 1. I know an error has occurred, i.e. I don't get corrupt results that I assumed were correct, leading to more adverse consequences 2. The detection of the error is as close as possible to where things went wrong Having floats default initialize to 0.0 is completely anti-ethical to (1) and (2), and NaN at least addresses (1). There have been many long threads on this topic in this forum. Yes, I understand that it's better for game programs to ignore bugs because gamers don't care about corrupt results, they only care that the program continues to run and do something. For the rest of us, are we ready to be done with malware inserted via exploitable bugs? By the way, I was initially opposed to having seg faults produce stack traces, saying it was the debugger's job to do that. I've since changed my mind. I do like very much the convenience of the stack trace dump, and rely on it all the time. I even insert code to force a seg fault to get a stack trace. I was wrong about its utility.
 I'm not fighting for explicit nullable in D by the way.
Thanks for clarifying that.
Nov 19 2017
next sibling parent Nick Treleaven <nick geany.org> writes:
On Sunday, 19 November 2017 at 22:54:38 UTC, Walter Bright wrote:
 There's also an issue of how to derive a class from a base 
 class.
If you want null, use a nullable type:

    Base b = ...;
    Derived? d = cast(Derived?) b;
    if (d !is null) d.method;
 This implies one must know all the use cases of a type before 
 designing it.
Start off with a non-nullable reference. If later you need null, change to T?. T? is implicitly convertible to T where flow analysis can tell that it is not null (e.g. after it is assigned a non-nullable T).
 It was your own design decision to hide the error.
No, it was a bug. Nobody makes design decisions to insert bugs :-) The issue is how easy the bug is to have, and how difficult it would be to discover it.
The compiler would nag you when you try to dereference nullable types; you have to act to confirm you didn't forget to check null, a common mistake in reference-heavy APIs.
 No, it would have been better because you would have been used 
 to the more explicit system from the start and you would have 
 just written essentially the same code with a few more 
 compiler checks in those cases where they apply, and perhaps 
 you would have suffered a handful fewer null dereferences.
I'm just not convinced of that.
Maybe you use nullable types a lot and rarely use references that aren't meant to have null as a valid value. Most programmers have plenty of statements where a reference is not supposed to be null, and would appreciate having the compiler enforce this. Popular new programming languages make nullable opt-in, not the default, to reduce the surface area for null dereference bugs.
 I'm not fighting for explicit nullable in D by the way.
Thanks for clarifying that.
There is a way to avoid breaking existing code, however. We would instead have a sigil (such as '$') for non-nullable references:

    T nullable;
    T$ nonNullable = new T;

For full compatibility, typeof(new T) would still be T, but the compiler would know it safely converts to T$.
Nov 20 2017
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 19.11.2017 23:54, Walter Bright wrote:
 There's also an issue of how to derive a class from a base class.
 ...
How so? While we are talking applicability to D: The main issue is to ensure that fields of objects are initialized properly before being accessed. D already needs to do this, but fails, which means references to immutable data are not guaranteed to yield consistent results between dereferences.
 
 What should the default initializer for a type do?
There should be none for non-nullable types.
I suspect you'd wind up needing to create an "empty" object just to satisfy that requirement. Such as for arrays of objects, or objects with a cyclic graph.
 The change makes the type system strictly more expressive. There is nothing that cannot be done after the change that was possible before; it's just that the language allows you to document and verify intent better.
This implies one must know all the use cases of a type before designing it. ...
No. The opposite is the case: you can just change the type once the requirements change. Then the type system shows you precisely where you need to update your code. In D, such a change is quite hard to make.
 
 Yes, my own code has produced seg faults from erroneously assuming a 
 value was not null. But it wouldn't have been better with 
 non-nullable types, since the logic error would have been hidden
It was your own design decision to hide the error.
No, it was a bug. Nobody makes design decisions to insert bugs :-) The issue is how easy the bug is to have, and how difficult it would be to discover it. ...
You added a special invalid instance that does not blow up on dereference. That was a conscious design decision. If your point is that there can be bugs unrelated to null, well that's unrelated to null.
 
 and may have been much, much harder to recognize and track down.
No, it would have been better because you would have been used to the more explicit system from the start and you would have just written essentially the same code with a few more compiler checks in those cases where they apply, and perhaps you would have suffered a handful fewer null dereferences.
I'm just not convinced of that. ...
I'm confident that you would be able to use null safe languages properly if that is what had been available for most of your career.
 
 The point of types is to classify values into categories such that 
 types in the same category support the same operations. It is not very 
 clean to have a special null value in all those types that does not 
 support any of the operations that references are supposed to support. 
 Decoupling the two concepts into references and optionality gets rid of 
 this issue, cleaning up both concepts.
I do understand that point. But I'm not at all convinced that non-nullable types in aggregate results in cleaner, simpler code, for reasons already mentioned. ...
I guess that depends on one's definition of clean and simple. Using nullable references for passing around references known to be non-null is not clean in my book.
 I wish there was a null for int types.
Implemented as a pointer to int? That is indeed one way to do it, but rather costly. ...
It lowers to Nullable<int>, which is a struct with a boolean flag.
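
That is, roughly this minimal sketch:

// An "optional int" is just an int plus a boolean flag - what C#'s
// Nullable<int> is, and roughly what std.typecons.Nullable!int does.
struct OptionalInt
{
    private int payload;
    private bool hasValue;

    this(int v) { payload = v; hasValue = true; }

    bool isNull() const { return !hasValue; }

    int get() const
    {
        assert(hasValue, "read of an empty OptionalInt");
        return payload;
    }
}

void main()
{
    OptionalInt none;
    assert(none.isNull);
    auto some = OptionalInt(7);
    assert(!some.isNull && some.get == 7);
}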
 
 It can also be pretty annoying.
Yes, it can be annoying, so much better to have a number that looks like it might be right, but isn't, because 0.0 was used as a default initializer when it should have been 1.6. :-) ...
That's not what I was saying.
 
 It really depends on the use case. Also this is in direct 
 contradiction with your earlier points. NaNs don't usually blow up.
"blow up" - as I've said many times, I find the visceral aversion to seg faults puzzling.
This is a misinterpretation. My language does not carry implicit emotional context during technical discussions. Given that you have a bug, blowing up is often the ideal outcome. It is even better to not have the bug in the first place. That's what explicit null (often) does for you. (Also, I'd posit the reason why you don't understand why segfaults can be very painful to some is that you are in the compiler business.)
 Why is that worse than belatedly discovering a NaN in 
 your output, which you now have to back search to its source?
 ...
That is the opposite of the point I was making. You said: "bugs should terminate the program" and then "NaNs are underused". If my program calls 'std.math.log' with an argument of '-123.4', then that's probably a bug, so there seemed to be an inconsistency.
 My attitude towards programming bugs is to have them immediately halt 
 the program as soon as possible, so:
 ...
My attitude is that ideally they are caught even sooner, during design time or compilation. You are contrasting bugs that produce incorrect outputs and bugs that crash your program. It's the wrong comparison to make. It does not help to point the finger at an unrelated issue. Null safety turns bugs that crash the program into compilation errors. (Note that there are type system features that allow you to automatically verify the program logic during compilation, but this is even further away from what I consider to be realistic to expect to see in D.)
 1. I know an error has occurred, i.e. I don't get corrupt results that I 
 assumed were correct, leading to more adverse consequences
 2. The detection of the error is as close as possible to where things 
 went wrong
 
 Having floats default initialize to 0.0 is completely antithetical to 
 (1) and (2),
Agreed.
 and NaN at least addresses (1).
 ...
It does not, really. Comparison of NaNs yields a standard boolean value.
Nov 21 2017
parent reply codephantom <me noyb.com> writes:
On Tuesday, 21 November 2017 at 20:02:06 UTC, Timon Gehr wrote:
 I'm confident that you would be able to use null safe languages 
 properly if that is what had been available for most of your 
 career.
You do realise that all of the issues you mention can just be handled by coding correctly in the first place.

If your program calls 'std.math.log' with an argument of '-123.4', then that's probably NOT a bug. It's more likely to be incorrect code. Why not bounds-check the argument before passing it to the function? (See the sketch below.)

If you access a field of an invalid instance of an object, that's probably NOT a bug. It's more likely to be incorrect code. Before you access a field of an object, check that the object is valid.

It seems to me that you prefer to rely on the type system, during compilation, for safety. This is very unwise.

btw. what was the last compiler you wrote?
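
The sketch mentioned above - reject the bad argument up front instead of discovering NaN later (checkedLog is a made-up name):

import std.exception : enforce;
import std.math : log;

double checkedLog(double x)
{
    enforce(x > 0, "log requires a positive argument");
    return log(x);
}

void main()
{
    assert(checkedLog(2.718281828459045) > 0.999);
    // checkedLog(-123.4) throws instead of quietly producing NaN
}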
Nov 21 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
 It seems to me that you prefer to rely on the type system, 
 during compilation, for safety. This is very unwise.
(i.e. should this compile?)

// --------------------------------------------------
using System;

public class Program
{
    public static int Main()
    {
        Foo();
        return 0;
    }

    static void Foo()
    {
        const object x = null;
        //if (x != null)
        //{
            Console.WriteLine(x.GetHashCode());
        //}
    }
}
// --------------------------------------------------
Nov 21 2017
parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:39:21 UTC, codephantom wrote:
 On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom 
 wrote:
 It seems to me that you prefer to rely on the type system, 
 during compilation, for safety. This is very unwise.
(i.e. should this compile?)

// --------------------------------------------------
using System;

public class Program
{
    public static int Main()
    {
        Foo();
        return 0;
    }

    static void Foo()
    {
        const object x = null;
        //if (x != null)
        //{
            Console.WriteLine(x.GetHashCode());
        //}
    }
}
// --------------------------------------------------
Here is another demonstration of why you can trust your compiler:

(i.e. should this compile?)

// -------------------------------------
using System;
using System.IO;

public class Program
{
    public static int Main()
    {
        Console.WriteLine( divInt(Int32.MinValue, -1) );
        return 0;
    }

    static int divInt(int a, int b)
    {
        int ret = 0;
        //if ( (b != 0) && (!((a == Int32.MinValue) && (b == -1))) )
        //{
            ret = a / b;
        //}
        //else
        //{
        //    throw new InvalidOperationException("Sorry.. no can do!");
        //}
        return ret;
    }
}
// -------------------------------------------------------
Nov 22 2017
parent reply codephantom <me noyb.com> writes:
On Thursday, 23 November 2017 at 06:32:30 UTC, codephantom wrote:
 Here is another demonstration of why you can trust your compiler:
Why you "can't" ... is what i meant to say. I love not being able to edit posts. It's so convenient.
Nov 22 2017
next sibling parent Meta <jared771 gmail.com> writes:
On Friday, 24 November 2017 at 20:29:23 UTC, codephantom wrote:
 On Friday, 24 November 2017 at 12:10:28 UTC, Nick Treleaven 
 wrote:
 On Thursday, 23 November 2017 at 06:35:17 UTC, codephantom 
 wrote:
 I love not being able to edit posts. It's so convenient.
It's not as much of a problem as not being able to hide all posts by a user who repeats arguments, derails the conversation onto irrelevant side discussions and judges individuals instead of the idea they are conveying.
So...you've just described your own post...you moron. Fuck you.
This is going too far. This mailing list is for civil discourse.
Nov 24 2017
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 24.11.2017 13:10, Nick Treleaven wrote:
 On Thursday, 23 November 2017 at 06:35:17 UTC, codephantom wrote:
 ... not being able to edit posts. ...
It's not as much of a problem as not being able to hide all posts by a user ...
Given that it can be accomplished on the client side, it is actually easy to not display posts from specific users.

About editing: I'd posit we can simply ask people to think about the news group differently and to just write their posts correctly in one shot.
Nov 24 2017
parent reply codephantom <me noyb.com> writes:
On Saturday, 25 November 2017 at 01:00:43 UTC, Timon Gehr wrote:
 Given that it can be accomplished on the client side, it is 
 actually easy to not display posts from specific users.
Civility returns. Hooray...

And thank you. This is a much more constructive option for users that disagree with something I say, i.e. now they can just hide me, instead of attacking me.
 About editing: I'd posit we can simply ask people to think 
 about the news group differently and to just write their posts 
 correctly in one shot.
I'll give that a go next time.. otherwise people will start wanting the forum to implement a spell checker...and a thesuras (how do you spell that anyway??).
Nov 24 2017
parent Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 November 2017 at 01:23:03 UTC, codephantom wrote:
 And thankyou. This a much more constructive option for users 
 that disagree with something I say. i.e. Now they can just hide 
 me, instead of attacking me.
Don't worry, both Walter and Andrei have done far worse in these fora over the years than you do... Or "forums" as the English quite incorrectly spells it.
 I'll give that a go next time.. otherwise people will start 
 wanting the forum to implement a spell checker...and a thesuras 
 (how do you spell that anyway??).
Thesauri ?
Nov 24 2017
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 22, 2017 00:19:51 codephantom via Digitalmars-d 
wrote:
 On Tuesday, 21 November 2017 at 20:02:06 UTC, Timon Gehr wrote:
 I'm confident that you would be able to use null safe languages
 properly if that is what had been available for most of your
 career.
You do realise that all of the issues you mention can just be handled by coding correctly in the first place.
While I definitely don't think that it's generally very hard to avoid bugs with null pointers/references, telling someone to code correctly in the first place isn't very useful. Of course, it's better to do that, but people make mistakes all the time. The real question is whether the problem is big enough in general or bad enough when it happens to add something to the language to mitigate it - e.g. no one should be failing to initialize variables, but it happens sometimes, and default-initializing variables like D does helps prevent a certain class of bugs. The programmer still needs to make sure that they deal with initialization correctly, but the problems that they have when they screw it up are less drastic than they are in C/C++, where variables don't get default-initialized unless they're classes with default constructors.

Personally, I don't think that null pointer dereferencing is enough of a problem to start insisting on non-nullable pointers or references (especially at this point in D's development), and when it happens, it's very clear what went wrong, so you avoid subtle problems like you'd get with something like initializing a variable to garbage. So, I don't think that there's enough value in having non-nullable pointers or references to add them. In my experience, it just isn't hard to avoid problems with null.

But at the same time, I think that it's perfectly legitimate to be looking to mitigate a source of bugs, and if you have a pointer or reference that really never should be null, having that guaranteed by the type system prevents mistakes, which is useful.
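
For instance (a small sketch of D's defaults):

// D gives every variable a defined initial value; several of them
// fail loudly if used by mistake, instead of corrupting anything.
void main()
{
    int i;    // 0
    double d; // double.nan
    int* p;   // null

    assert(i == 0);
    assert(d != d);    // NaN is unequal even to itself
    assert(p is null); // dereferencing would segfault, not corrupt memory
}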
 Its seems to be, that you prefer to rely on the type system,
 during compilation, for safety. This is very unwise.
Any time the type system can prevent a bug, it's useful. I don't see why that would be a problem or unwise. That's part of why many of us prefer statically typed languages to dynamically typed languages. The compiler catches more bugs for us that way. The question isn't whether we should use the type system to prevent bugs. The question is which set of problems really make sense to prevent with the type system. - Jonathan M Davis
Nov 21 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis 
wrote:
 While I definitely don't think that it's generally very hard to 
 avoid bugs with null pointers/references, telling someone to 
 code correctly in the first place isn't very useful.
Fair enough...perhaps I'm being too explicit with my argument. However, my point is, that one should not overly rely on some magical compiler for telling you what is 'true'. How can a compiler know that G is true if it cannot prove that G is true? You need to take this into account during your coding. Otherwise the runtime system is your last line of defence.
Nov 21 2017
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 22.11.2017 02:09, codephantom wrote:
 On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis wrote:
 While I definitely don't think that it's generally very hard to avoid 
 bugs with null pointers/references, telling someone to code correctly 
 in the first place isn't very useful.
Fair enough...perhaps I'm being too explicit with my argument. However, my point is, that one should not overly rely on some magical compiler for telling you what is 'true'. ...
That is not the role of the compiler here. The task of the compiler in this circumstance is to tell you what is obvious, not what is true.
 How can a compiler know that G is true if it cannot prove that G is true?
 ...
Because you proved it to the compiler.
 You need to take this into account during your coding. Otherwise the 
 runtime system is your last line of defence.
 
You seem to assume that Rice's theorem applies to compilers, but not programmers. Why is that?
Nov 22 2017
prev sibling next sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis 
wrote:
 The question isn't whether we should use the type system to 
 prevent bugs. The question is which set of problems really make 
 sense to prevent with the type system.

 - Jonathan M Davis
Those that can be proven.
Nov 21 2017
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 22, 2017 01:25:48 codephantom via Digitalmars-d 
wrote:
 On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis

 wrote:
 The question isn't whether we should use the type system to
 prevent bugs. The question is which set of problems really make
 sense to prevent with the type system.

 - Jonathan M Davis
Those that can be proven.
Sure. If it can't be proven that something is a bug, then the compiler shouldn't be giving an error in that case (and IMHO, it shouldn't be warning about it either, since any good programmer doesn't leave warnings in their project, effectively making warnings errors).

In the case of null, you _can_ prove it if you have non-nullable types. If it's not legal for a pointer or reference to be null, then the compiler can guarantee that it's not null. But then you either have the extra complication of having both nullable and non-nullable pointers/references in the language, or you force all pointers/references to use something like std.typecons.Nullable to treat them as nullable or use a construct in the language which does the same, and that arguably doesn't make a lot of sense given that underneath the hood, all pointers or references are going to be nullable, even if you're not allowed to make them null by the type system. But it would reduce the amount of code where you would have to worry about potentially having null values.

However, if you don't have non-nullable pointers/references, then you really can't prove that a pointer or reference is non-null in the general case. You can prove it under certain circumstances, but ultimately you're going to end up with an algorithm that only works part of the time. So, best case, it gives you an error when it definitively knows that you're trying to dereference null, but that would likely generally be in the cases where you would very quickly find it yourself as soon as you ran your code. So, while the compiler check might be useful, I doubt that it would ultimately help much with preventing bugs in practice.
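For illustration, here's a rough sketch of what a library-level non-nullable reference could look like (NonNull and Test are made-up names, and this is nowhere near a complete design):

struct NonNull(T) if (is(T == class))
{
    private T _value;

    @disable this();   // no default construction - a value must be supplied

    this(T value)
    {
        assert(value !is null, "NonNull constructed from null");
        _value = value;
    }

    inout(T) get() inout { return _value; }
    alias get this;    // forward member access to the wrapped reference
}

class Test { int value; }

void main()
{
    auto t = NonNull!Test(new Test);
    t.value++;   // no null check needed at the use site
}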
Nov 21 2017
parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 01:48:55 UTC, Jonathan M Davis 
wrote:
 In the case of null, you _can_ prove it if you have 
 non-nullable types.
True (well...you can at least 'assert' it anyway). But if the intention is to 'assist the compiler towards knowing the truth/correctness about your statement', then this can be easily done without introducing a new nullable reference type - i.e. if (object != null) use it; Either way, checks are made. So I still don't see the point of adding a new nullable reference type to a language, unless one is asserting that it is ok to not already be checking for null (which seems to be the case for a lot of C# programmers).
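In D terms, something like this (Test is just a stand-in class; note that D uses `is`/`!is` rather than `==` to compare class references against null):

class Test { int value; }

void useIt(Test t)
{
    if (t !is null)  // the check gives both the reader and the compiler context
        t.value++;
}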
Nov 21 2017
prev sibling next sibling parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis 
wrote:
 While I definitely don't think that it's generally very hard to 
 avoid bugs with null pointers/references, telling someone to 
 code correctly in the first place isn't very useful.
By 'correct code', I mean code that assists the compiler, so that it can determine what the truth is (or is meant to be).
Nov 21 2017
prev sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 00:49:02 UTC, Jonathan M Davis 
wrote:
 Any time the type system can prevent a bug, it's useful. I 
 don't see why that would be a problem or unwise.
That is not unwise. What is 'unwise' is what I said was unwise... that is, putting your trust in the compiler's capacity to always know what the truth is. That is unwise.

Consider the Goldbach Conjecture, that every even positive integer greater than 2 is the sum of two (not necessarily distinct) primes. According to the principle of bivalence, this should be either true or false. But where is the proof that this is either true, or false? There is a fundamental error in assuming that something can only be either true or false. Some things require too much effort to prove, or may simply be unprovable. How much time should the compiler spend trying to prove something?
 The question isn't whether we should use the type system to 
 prevent bugs. The question is which set of problems really make 
 sense to prevent with the type system.
No, the question should be, what can the compiler prove to be true/false, correct/incorrect about your code, and what effort have you made in your code to assist the compiler to make that determination.

If you've made no effort to provide the compiler with the context it needs to make a useful determination, then don't complain when the compiler gets it wrong. That is my first point.

My second point, is that it is already possible to provide such context to the compiler, without having to make reference types non nullable, and therefore having to introduce a new nullable reference type.

Which makes more sense? Knowing that a reference type could potentially be null, and therefore check for null, or dealing with all the flow-on consequences of making a reference type non nullable by default?

Even with such a change, the Goldbach Conjecture still cannot be resolved.
Nov 21 2017
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 22.11.2017 05:55, codephantom wrote:
 ... >> The question isn't whether we should use the type system to prevent
 bugs. The question is which set of problems really make sense to 
 prevent with the type system.
No, the question should be, what can the compiler prove to be true/false, correct/incorrect about your code, and what effort have you made in your code to assist the compiler to make that determination. If you've made no effort to provide the compiler with the context it needs to make a useful determination, then don't complain when the compiler gets it wrong. That is my first point. My second point, is that it is already possible to provide such context to the compiler, without having to make reference types non nullable, and therefore having to introduce a new nullable reference type. ...
It's really not.
 Which makes more sense? Knowing that a reference type could potentially 
 be null, and therefore check for null,
You are saying this as if there was always a reasonable thing to do if the reference is in fact null. This is just not the case. I.e. this option sometimes makes no sense. Also, if checking for null is always required, why wouldn't the compiler complain if it is missing?
 or dealing with all the flow-on 
 consequences of making a reference type non nullable by default?
 
 Even with such a change, the Goldbach Conjecture still cannot be resolved.
 
If the correctness of a program depends on the Goldbach Conjecture, that's still something one might want to know about. We could then just add the correctness of the Goldbach conjecture as an assumption, and then verify that under the given assumption, the program is actually correct. Once the Goldbach conjecture gets resolved, we can get rid of the assumption.
Nov 22 2017
parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 13:47:19 UTC, Timon Gehr wrote:
 On 22.11.2017 05:55, codephantom wrote:
 No, the question should be, what can the compiler prove to be 
 true/false, correct/incorrect about your code, and what effort 
 have you made in your code to assist the compiler to make that 
 determination.
 
 If you've made no effort to provide the compiler with the 
 context it needs to make a useful determination, then don't 
 complain when the compiler gets it wrong. That is my first 
 point.
 
 My second point, is that it is already possible to provide 
 such context to the compiler, without having to make reference 
 types non nullable, and therefore having to introduce a new 
 nullable reference type.
 ...
It's really not.
Your arguments need a little more work.
Nov 22 2017
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Wednesday, 22 November 2017 at 04:55:39 UTC, codephantom wrote:
 Consider the Goldbach Conjecture, that every even positive 
 integer greater than 2 is the sum of two (not necessarily 
 distinct) primes. According to the principle of bivalence, this 
 should be either true or false.
«The Goldbach conjecture verification project reports that it has computed all primes below 4×10^18» Which is more than you'll ever need in any regular programming context. Next problem?
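For the record, the bounded check itself is just a brute-force loop - a naive D sketch, nothing like the sieve the verification project actually uses:

import std.stdio;

bool isPrime(ulong n)
{
    if (n < 2) return false;
    for (ulong d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// does `even` split into two primes?
bool goldbachHolds(ulong even)
{
    foreach (p; 2UL .. even)
        if (isPrime(p) && isPrime(even - p))
            return true;
    return false;
}

void main()
{
    foreach (n; 4UL .. 10_000UL)
        if (n % 2 == 0 && !goldbachHolds(n))
            writeln("counterexample: ", n);  // never fires below this bound
}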
Nov 22 2017
parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 22:02:11 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 22 November 2017 at 04:55:39 UTC, codephantom 
 wrote:
 Consider the Goldbach Conjecture, that every even positive 
 integer greater than 2 is the sum of two (not necessarily 
 distinct) primes. According to the principle of bivalence, 
 this should be either true or false.
«The Goldbach conjecture verification project reports that it has computed all primes below 4×10^18» Which is more than you'll ever need in any regular programming context. Next problem?
Come on. Really? "It's true as far as we know" != "true"

true up to a number < n ... does not address the conjecture correctly.

Where is the 'proof' that the conjecture is 'true'?

Hint: it's not a problem that mathematics can solve.
Nov 22 2017
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 November 2017 at 00:06:49 UTC, codephantom wrote:
 true up to a number < n  ... does not address the conjecture 
 correctly.
So what? We only need a proof up to N for regular programming, if at all.
 Hint: it's not a problem that mathematics can solve.
By what proof? And what do you mean by mathematics?
Nov 22 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim 
Grostad wrote:
 On Thursday, 23 November 2017 at 00:06:49 UTC, codephantom 
 wrote:
 true up to a number < n  ... does not address the conjecture 
 correctly.
So what? We only need a proof up to N for regular programming, if at all.
That's really the point I was making. It's the reason you'll never be able to put your complete trust in a compiler. The compiler can only ever know something, about something, up to a point. That's why we have the concept of 'undefined behaviour'.
Nov 22 2017
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 November 2017 at 01:16:59 UTC, codephantom wrote:
 That's why we have the concept of 'undefined behaviour'.
Errr, no. High level programming languages don't have undefined behaviour. That is a C concept related to the performance of the executable. C tries to get as close to machine language as possible.
Nov 22 2017
parent reply codephantom <me noyb.com> writes:
On Thursday, 23 November 2017 at 07:20:41 UTC, Ola Fosheim 
Grostad wrote:
 On Thursday, 23 November 2017 at 01:16:59 UTC, codephantom 
 wrote:
 That's why we have the concept of 'undefined behaviour'.
Errr, no. High level programming languages don't have undefined behaviour. That is a C concept related to the performance of the executable. C tries to get as close to machine language as possible.
Many high level languages let you use 'unsafe' code, where you can write erroneous operations - and then you're back in the world of undefined behaviour. Are you saying, that a high level language can trap *all* errors? As per the Goldbach conjecture... where is the proof?
Nov 23 2017
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Thursday, 23 November 2017 at 08:47:43 UTC, codephantom wrote:
 Many high level languages let you use 'unsafe' code, where you 
 can write erroneous operations - and then you're back in the 
 world of undefined behaviour.
Not many, but many allow interfacing with C; then it is up to those users to verify the correctness of their C code.
 Are you saying, that a high level language can trap *all* 
 errors?
Not sure what you mean by trap, they use static or runtime checks to uphold the language specification. Whether something is an error or not beyond that is highly subjective. I.e. we cannot talk about errors unless we have a specification to judge the actual behaviour by.
Nov 23 2017
prev sibling parent reply codephantom <me noyb.com> writes:
On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim 
Grostad wrote:
 By what proof? And what do you mean by mathematics?
A mathematical claim that cannot be proven or disproven is neither true nor false. What you are left with is just a possibility. Thus, it will always remain an open question as to whether the conjecture is true, or not.
Nov 22 2017
parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Thursday, 23 November 2017 at 01:33:39 UTC, codephantom wrote:
 On Thursday, 23 November 2017 at 00:15:56 UTC, Ola Fosheim 
 Grostad wrote:
 By what proof? And what do you mean by mathematics?
A mathematical claim that cannot be proven or disproven is neither true nor false. What you are left with is just a possibility.
And how is this a problem? If your program relies upon the unbounded version you will have to introduce it explicitly as an axiom. But you don't have to; you can use bounded quantifiers. What you seem to be saying is that one should accept all unproven statements as axioms implicitly. Why have a type system at all then?
 Thus, it will always remain an open question as to whether the 
 conjecture is true, or not.
Heh, has the Goldbach conjecture been proven undecidable?
Nov 22 2017
parent codephantom <me noyb.com> writes:
On Thursday, 23 November 2017 at 07:13:37 UTC, Ola Fosheim 
Grostad wrote:
 Heh, has the Goldbach conjecture been proven undecidable?
Not to my knowledge ;-) At best, it's a possibility - which can go either way. No human or computer will ever make it anything more than that. Ever. Someone saying it's true, up to < n, is not addressing the problem. Someone trying to address the problem, does not even understand the problem ;-)
Nov 23 2017
prev sibling next sibling parent reply Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom wrote:
 btw. what was the last compiler you wrote?
https://github.com/eth-srl/psi https://github.com/tgehr/d-compiler
Nov 22 2017
parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 08:55:03 UTC, Petar Kirov 
[ZombineDev] wrote:
 On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom 
 wrote:
 btw. what was the last compiler you wrote?
https://github.com/eth-srl/psi https://github.com/tgehr/d-compiler
touché ;-) Nonetheless, I stand by my arguments.
Nov 22 2017
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 22, 2017 09:28:47 codephantom via Digitalmars-d 
wrote:
 On Wednesday, 22 November 2017 at 08:55:03 UTC, Petar Kirov

 [ZombineDev] wrote:
 On Wednesday, 22 November 2017 at 00:19:51 UTC, codephantom

 wrote:
 btw. what was the last compiler you wrote?
https://github.com/eth-srl/psi https://github.com/tgehr/d-compiler
touché ;-)
LOL. I assumed that you were legitimately asking what the name of his compiler was, because I knew that he was writing a D compiler, whereas you were questioning his knowledge/credentials. Timon is a very smart guy. He knows a lot and has lots of great things to say. I certainly don't always agree with him, but he generally knows what he's talking about. - Jonathan M Davis
Nov 22 2017
parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 10:20:49 UTC, Jonathan M Davis 
wrote:
 LOL. I assumed that you were legitimately asking what the name 
 of his compiler was, because I knew that he was writing a D 
 compiler, whereas you were questioning his 
 knowledge/credentials. Timon is a very smart guy. He knows a 
 lot and has lots of great things to say. I certainly don't 
 always agree with him, but he generally knows what he's talking 
 about.

 - Jonathan M Davis
I thought he was becoming a little confrontational with the Master Wizard (W), so I sought to check his credentials ;-)
Nov 23 2017
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 22.11.2017 01:19, codephantom wrote:
 On Tuesday, 21 November 2017 at 20:02:06 UTC, Timon Gehr wrote:
 I'm confident that you would be able to use null safe languages 
 properly if that is what had been available for most of your career.
You do realise, that all of the issues you mention can just be handled by coding correctly in the first place. ...
Yes, just like everyone else, I realize that if correct code is written, we end up with correct code, but thanks for pointing it out. BTW of course you must realize that you can make the compiler brutally obsolete by just quickly writing down the most efficient possible correct machine code in a hex editor, so I'm not too sure why you participate in a discussion on the forums of a compiled language at all.
 If your program calls 'std.math.log' with an argument of '-123.4', then 
 that's probably NOT a bug. It's more likely to be incorrect code.
https://en.wikipedia.org/wiki/Software_bug
 Why not bounds-check the argument before passing it to the function?
 ...
Walter said NaN is underused, not me.
 If you access a field of an invalid instance of an object, that's 
 probably NOT a bug. It's more likely to be incorrect code.
https://en.wikipedia.org/wiki/Software_bug
 Before you 
 access a field of an object, check that the object is valid.
 ...
If I know that it is valid, I might not want to check it. Then, if, let's say, you come along and read my code, I do not need you to point out that I didn't check the field access. If you still do, I can now either explain to you why it is unnecessary, which will waste my time and does not guarantee that you will buy it, or I can write the code in a language that requires me to provide the proof up front, such that you will not have to bother me. And even if you still doubt that the proof is actually correct, it will not be my problem, but instead you'll need to take it to the guy who wrote the compiler. This is one of the reasons why Walter does not like non-null types. ;o)
 Its seems to be,
Spelling mistakes can be avoided by just spelling correctly.
 that you prefer to rely on the type system, during 
 compilation, for safety.
No, I ideally want the type system to point out when the code is not obviously correct. That does not mean I assume that the code is correct when it compiles (given that I'm using a language that does not require me to prove absence of all bugs, and even if it did I'd at most assume that either the language implementation is incorrect or my code is correct, with a certain margin of error due to undetected hardware failures).
 This is very unwise.
 ...
Thanks for pointing that out.
 btw. what was the last compiler you wrote?
 
Embarrassing questions can be avoided by just coming up with the correct answer yourself.
Nov 22 2017
next sibling parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
 On 22.11.2017 01:19, codephantom wrote:

 No, I ideally want the type system to point out when the code 
 is not obviously correct. That does not mean I assume that the 
 code is correct when it compiles (given that I'm using a 
 language that does not require me to prove absence of all bugs, 
 and even if it did I'd at most assume that either the language 
 implementation is incorrect or my code is correct, with a 
 certain margin of error due to undetected hardware failures).

 This is very unwise.
 ...
Thanks for pointing that out.
You're welcome.
Nov 22 2017
prev sibling next sibling parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
 
 You do realise, that all of the issues you mention can just be 
 handled by coding correctly in the first place.
 ...
Yes, just like everyone else, I realize that if correct code is written, we end up with correct code, but thanks for pointing it out.
You're welcome.
Nov 22 2017
prev sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 13:21:05 UTC, Timon Gehr wrote:
 BTW of course you must realize that you can make the compiler 
 brutally obsolete by just quickly writing down the most 
 efficient possible correct machine code in a hex editor, so I'm 
 not too sure why you participate in a discussion on the forums 
 of a compiled language at all.
I've participated in order to counter the proposition put forward in the subject of this thread. D does not need these C# changes; that is my view. If, over time, a large number of D programmers have the same mindset as C# programmers, then maybe they'll start demanding the same thing - but even then, I'll argue the same points I've argued thus far.

I also think that relying too much on sophisticated IDEs and AI-like compilers really changes the way you think about and write code. I don't rely on either. Perhaps that's why I've never considered nulls to be an issue. I take proactive steps to protect my code, before the compiler ever sees it. And actually, I cannot recall any null related error in any code I've deployed. It's just never been an issue.

And that's another reason why this topic interests me - why is C# making this change? It seems to be because they're just not doing null checks. And so the language designers are being forced to step in. If that's not the reason, then I've misunderstood, and await the correct explanation.
Nov 22 2017
parent reply Wyatt <wyatt.epp gmail.com> writes:
On Wednesday, 22 November 2017 at 14:51:02 UTC, codephantom wrote:

 D does not need these C# changes; that is my view.
"Need"? Perhaps not. But so far, I haven't seen any arguments that refute the utility of mitigating patterns of human error.
 If, over time, a large number of D programmers have the same 
 mindset as C# programmers, then maybe they'll start demanding 
 the same thing - but even then, I'll argue the same points I've 
 argued thus far.
Null references have been a problem in every language that has them. Just because D is much nicer than its predecessors (and contemporaries, IMO) doesn't mean the "bad old days" (still in progress) of C and C++ didn't happen or that we cannot or should not learn from the experience. Tony Hoare doesn't call null his sin and "billion dollar mistake" as just a fit of pique. In other words, "Well don't do that, silly human!" ends up being an appeal to tradition.
 Perhaps that's why I've never considered nulls to be an issue. 
 I take proactive steps to protect my code, before the compiler 
 ever sees it. And actually, I cannot recall any null related 
 error in any code I've deployed. It's just never been an issue.
Oh, that explains it. He's a _robot_! ;) (The IDE thing is entirely irrelevant to this discussion; why did you bring that up?)
 And that's another reason why this topic interests me - why is 
 C# making this change? It seems to be because they're just not 
 doing null checks. And so the language designers are being 
 forced to step in. If that's not the reason, then I've 
 misunderstood, and await the correct explanation.
That C# is old enough to vote in general elections but they're only just now finally doing this should be telling. (And I fully expect this conversation has been going for at least half of that time.) It's probably galvanised by the recent proliferation of languages that hold safety to a higher standard and the community realising that the language can and _should_ share the burden of mitigating patterns of human error. -Wyatt
Nov 22 2017
next sibling parent reply codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 18:16:16 UTC, Wyatt wrote:
 "Need"?  Perhaps not.  But so far, I haven't seen any arguments 
 that refute the utility of mitigating patterns of human error.
Ok, that's a good point. But there is more than one way to address human error without having to further regulate human behaviour. How about we change the way we think... for example: I 'expect' bad people to try to do 'bad stuff' using my code. It's the first thing I think about when I start typing. This perspective alone really changes the way I write code. It's not perfect, but it's a lot better than if I didn't have that perspective. And all it required was to think differently. No language change, no further regulation.

So yeah, you can change the language.. or you can change the way people think about their code. When they think differently, their code will change accordingly.

My point about sophisticated IDEs and AI-like compilers is that they don't seem to have addressed the real issue - that is, changing the way people think about their code. If anything, they've introduced so many distractions and so much automation, that people are just not thinking about their code anymore. So now, language designers are being forced to step in and start regulating programmer behaviour. I don't like that approach.

You rarely hear anything about defensive programming these days, but it's more important now than it ever was. I'd make it the number one priority for new developers. But you won't even find the concept being taught at our universities. They're too busy teaching students to program in Python ..hahha...the future is looking pretty bleak ;-(

Where are the 'Secure Coding Guidelines for Programming in D'? (I'm not saying they don't exist. I'm just not aware of them.)

What if I did a security audit on DMD or PHOBOS. What would I discover?

What if I did a security audit on all the D code at github. What would I discover?

Sophisticated IDEs and AI-like compilers have not rescued us from this inherent flaw in programming. The flaw is a human flaw. A flaw in the way we think.
Nov 22 2017
parent rjframe <dlang ryanjframe.com> writes:
On Thu, 23 Nov 2017 01:08:45 +0000, codephantom wrote:

 So yeah, you can change the language.. or you can change the way people
 think about their code. When they think differently, their code will
 change accordingly.
 
 My point about sophisticated IDEs and AI-like compilers is that they
 don't seem to have addressed the real issue - that is, changing the way
 people think about their code. If anything, they've introduced so many
 distractions and so much automation, that people are just not thinking
 about their code anymore. So now, language designers are being forced to
 step in and start regulating programmer behaviour. I don't like that
 approach.
 
 You rarely hear anything about defensive programming these days, but
 it's more important now than it ever was. I'd make it the number one
 priority for new developers. But you won't even find the concept being
 taught at our universities. They're too busy teaching students to
 program in Python ..hahha...the future is looking pretty bleak ;-(
It's easier to write better tools than it is to change people. That seems to me to be a big part of the D language design. The sophisticated IDEs and compilers exist to help developers write better code; large projects are too complex, and open source projects especially receive contributions from people that don't know the code, so if the compiler can help, it should. I left Python for D mostly because of variable annotations[1]. The following is valid in Python 3.6:
    myvar : int = "some string"
    print(myvar)    # prints: some string

If my compiler/interpreter won't tell me if I do something stupid like that, I don't want to waste my time with it. If your language gives me explicit types, it needs to give me some sort of type safety with them; otherwise your language is a hack. Static analysis will catch this, but I shouldn't need to run a static analysis tool or use an IDE to find an error like that.
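For contrast, D rejects the equivalent assignment outright at compile time:

void main()
{
    int myvar = "some string";
    // Error: cannot implicitly convert expression "some string"
    // of type string to int
}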
 What if I did a security audit on DMD or PHOBOS. What would I discover?
 
 What if I did a security audit on all the D code at github. What would I
 discover?
If you have the skills, this would (in my opinion) be an amazing use of your time. I'd recommend just auditing the core tools and popular libraries, rather than all code unless it's a hobby of yours though. [1]: https://docs.python.org/3.6/whatsnew/3.6.html#whatsnew36-pep526
Nov 23 2017
prev sibling parent codephantom <me noyb.com> writes:
On Wednesday, 22 November 2017 at 18:16:16 UTC, Wyatt wrote:
 Perhaps that's why I've never considered nulls to be an issue. 
 I take proactive steps to protect my code, before the compiler 
 ever sees it. And actually, I cannot recall any null related 
 error in any code I've deployed. It's just never been an issue.
Oh, that explains it. He's a _robot_! ;)
Actually, you touch on an important point, which is implicit in my argument - (i.e changing the way you think, will change the way you write code). We are programmable too ;-) But who's doing the programming...
Nov 23 2017
prev sibling parent Nick Treleaven <nick geany.org> writes:
On Sunday, 19 November 2017 at 22:54:38 UTC, Walter Bright wrote:
 I can't see the problem. You go from nullable to non-nullable 
 by checking for null, and the other direction happens 
 implicitly.
Implicit conversions have their problems with overloading, but apparently C# avoids any semantic impact:
"There is no semantic impact of the nullability annotations, other than the warnings. They don’t affect overload resolution or runtime behavior, and generate the same IL output code. They only affect type inference insofar as it passes them through and keeps track of them in order for the right warnings to occur on the other end."
 interactions with const, template argument deduction, 
 surprising edge cases, probably breaking a lot of Phobos, etc. 
 It's best not to layer on more of this stuff. Explicit casting 
 is a problem, too.
Maybe this can be mitigated by having the compiler just do the job of tracking null tests and making this information available to a NotNull user defined type.
Nov 29 2017
prev sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
Timon Gehr <timon.gehr gmx.ch> wrote:
 I wish there was a null 
 for int types.
The new thing is explicitly nullable classes (reference types). I'm really looking forward to using those.
Nov 19 2017
parent reply rumbu <rumbu rumbu.ro> writes:
On Monday, 20 November 2017 at 06:24:31 UTC, Tobias Müller wrote:
 Timon Gehr <timon.gehr gmx.ch> wrote:
 I wish there was a null for int types.
 The new thing is explicitly nullable classes (reference types). 
 I'm really looking forward to using those.
int? is just syntactic sugar for Nullable<int>. It has been around since 2005. Nullable<T> is just a struct with an implementation similar to Nullable!T from D's std.typecons. What's new is how C# will treat reference types under "nullable reference types":

1. If you declare SomeClass x, x is assumed to *not hold null values*; that means that when you try "x = null" or "x = somepossiblenullvalue", this will result in a compiler warning: "Warning, x is supposed to hold a value". The warning can be avoided by using "x = null!" or "x = somepossiblenullvalue!".

2. If you declare SomeClass? x, x is allowed to *hold null values*, meaning that if you try "x.someFunction()", this will result in a compiler warning: "Warning, x can be null". The warning can be avoided in two ways:

2a. Test for null: "if (x != null) { x.someFunction(); }"

2b. Show the compiler that you know better: "x!.someFunction()"

In fact, this is the introduction of a new operator "!", probably named the "I know better" operator.
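For comparison, a minimal sketch of the D library counterpart of int? (this only covers the value-type side; D has no equivalent of the reference-type annotations):

import std.stdio;
import std.typecons : Nullable;

void main()
{
    Nullable!int x;       // starts out empty
    assert(x.isNull);

    x = 42;               // now holds a value
    assert(!x.isNull);
    writeln(x.get);       // prints 42; calling get while empty would error out
}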
Nov 20 2017
parent reply Biotronic <simen.kjaras gmail.com> writes:
On Monday, 20 November 2017 at 08:49:41 UTC, rumbu wrote:
 In fact, this is the introduction of a new operator "!", 
 probably named "I know better" operator.
It's called the "bang" operator, because of how things blow up when you're wrong.
Nov 20 2017
parent codephantom <me noyb.com> writes:
On Monday, 20 November 2017 at 08:55:54 UTC, Biotronic wrote:
 On Monday, 20 November 2017 at 08:49:41 UTC, rumbu wrote:
 In fact, this is the introduction of a new operator "!", 
 probably named "I know better" operator.
It's called the "bang" operator, because of how things blow up when you're wrong.
aka the 'dig your own grave' operator.
Nov 20 2017
prev sibling next sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright wrote:
 On 11/18/2017 6:25 PM, Timon Gehr wrote:
 I.e., baseClass should have type Nullable!ClassDeclaration. 
 This does not in any form imply that ClassDeclaration itself 
 needs to have a null value.
Converting back and forth between the two types doesn't sound appealing.
Converting isn't necessary - one can instead map over nullable types, with the mapped function not actually being called when it is indeed null. i.e.

struct Nullable(T)
{
    //...
    auto map(alias F)()
    {
        import std.traits : ReturnType;
        return isNull ? ReturnType!F.init : F(_value);
    }
}
 What should the default initializer for a type do?
There shouldn't be one - any usage of a non-nullable type that hasn't been initialised should be a compile-time error. Similar to using a non-initialised reference in C++, but relaxed to allow assignment at a place other than the declaration.
 Interestingly, `int` isn't nullable, and we routinely use 
 rather ugly hacks to fake it being nullable, like reserving a 
 bit pattern like 0, -1 or 0xDEADBEEF and calling it 
 INVALID_VALUE, or carrying around some other separate flag that 
 says if it is valid or not. These are often rich sources of 
 bugs.
Nullable!int.
 As you can guess, I happen to like null, because there are no 
 hidden bugs from pretending it is a valid value - you get an 
 immediate program halt - rather than subtly corrupted results.
The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
 Yes, my own code has produced seg faults from erroneously 
 assuming a value was not null. But it wouldn't have been better 
 with non-nullable types, since the logic error would have been 
 hidden and may have been much, much harder to recognize and 
 track down.
No, it would have been a compile-time error instead. Atila
Nov 20 2017
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Monday, 20 November 2017 at 10:07:08 UTC, Atila Neves wrote:
 The problem with null as seen in C++/Java/D is that it's a 
 magical value that different types may have. It breaks the type 
 system.
Not sure if it breaks the type system, but it would be cleaner to construct types with null - "int|null", "float|null" etc. - though then you would have a high-level language. And there are many NaN values (two semantic NaN values, but many encodings that might be used for conveying extra information).
 assuming a value was not null. But it wouldn't have been 
 better with non-nullable types, since the logic error would 
 have been hidden and may have been much, much harder to 
 recognize and track down.
No, it would have been a compile-time error instead.
Yes, but you don't need non-nullable types; you could have subtyping of nullable types instead. For floats that would be very useful, e.g. constraining a float to the range [0.0, 1.0> or integers, or not-infinity/not-NaN, etc.
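E.g. a hypothetical sketch of such a constrained float as a library type (all names made up):

// Invariant: 0.0 <= value < 1.0, and never NaN.
struct UnitFloat
{
    private float _value = 0;

    this(float v)
    {
        assert(v >= 0 && v < 1, "UnitFloat out of range");  // NaN fails both tests
        _value = v;
    }

    float get() const { return _value; }
    alias get this;
}

void main()
{
    auto half = UnitFloat(0.5);
    float f = half;                // implicit conversion out is fine
    assert(f == 0.5);
    // auto bad = UnitFloat(1.5);  // would fail the assert at run time
}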
Nov 20 2017
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 20.11.2017 11:07, Atila Neves wrote:
 
 
 As you can guess, I happen to like null, because there are no hidden 
 bugs from pretending it is a valid value - you get an immediate 
 program halt - rather than subtly corrupted results.
The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
Nov 20 2017
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Monday, 20 November 2017 at 11:27:15 UTC, Timon Gehr wrote:
 On 20.11.2017 11:07, Atila Neves wrote:
 
 
 As you can guess, I happen to like null, because there are no 
 hidden bugs from pretending it is a valid value - you get an 
 immediate program halt - rather than subtly corrupted results.
The problem with null as seen in C++/Java/D is that it's a magical value that different types may have. It breaks the type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
Are you thinking about this? https://dl.acm.org/citation.cfm?id=2984004 I don't think it says that it is unsound because of null, but that later features came in conflict with it?
Nov 20 2017
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/20/2017 3:27 AM, Timon Gehr wrote:
 On 20.11.2017 11:07, Atila Neves wrote:
 The problem with null as seen in C++/Java/D is that it's a magical value that 
 different types may have. It breaks the type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
I'm curious. Can you expand on this, please? (In D, casting null to any other pointer type is marked as unsafe.)
Nov 20 2017
parent reply Mark <smarksc gmail.com> writes:
On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright wrote:
 On 11/20/2017 3:27 AM, Timon Gehr wrote:
 On 20.11.2017 11:07, Atila Neves wrote:
 The problem with null as seen in C++/Java/D is that it's a 
 magical value that different types may have. It breaks the 
 type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
I'm curious. Can you expand on this, please? (In D, casting null to any other pointer type is marked as unsafe.)
This blog post seems to summarize the paper he linked to: https://dev.to/rosstate/java-is-unsound-the-industry-perspective
Nov 20 2017
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/20/2017 5:03 PM, Mark wrote:
 On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright wrote:
 On 11/20/2017 3:27 AM, Timon Gehr wrote:
 On 20.11.2017 11:07, Atila Neves wrote:
 The problem with null as seen in C++/Java/D is that it's a magical value 
 that different types may have. It breaks the type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
I'm curious. Can you expand on this, please? (In D, casting null to any other pointer type is marked as unsafe.)
This blog post seems to summarize the paper he linked to: https://dev.to/rosstate/java-is-unsound-the-industry-perspective
Thank you.
Nov 20 2017
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Tuesday, 21 November 2017 at 01:03:36 UTC, Mark wrote:
 On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright 
 wrote:
 On 11/20/2017 3:27 AM, Timon Gehr wrote:
 On 20.11.2017 11:07, Atila Neves wrote:
 The problem with null as seen in C++/Java/D is that it's a 
 magical value that different types may have. It breaks the 
 type system.
In Java, quite literally so. The Java type system is /unsound/ because of null. (I.e. Java is only memory safe because it runs on the JVM.)
I'm curious. Can you expand on this, please? (In D, casting null to any other pointer type is marked as unsafe.)
This blog post seems to summarize the paper he linked to: https://dev.to/rosstate/java-is-unsound-the-industry-perspective
And, like clockwork, the very first post is someone complaining that he insulted JavaScript with an offhand example, with a thread going 10 posts deep. I'm not clear on whether he means that Java's type system is unsound, or that the type checking algorithm is unsound. From what I can tell, he's asserting the former but describing the latter.
Nov 20 2017
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Tuesday, 21 November 2017 at 06:03:33 UTC, Meta wrote:
 On Tuesday, 21 November 2017 at 01:03:36 UTC, Mark wrote:
 On Monday, 20 November 2017 at 22:56:44 UTC, Walter Bright 
 wrote:
 On 11/20/2017 3:27 AM, Timon Gehr wrote:
This blog post seems to summarize the paper he linked to: https://dev.to/rosstate/java-is-unsound-the-industry-perspective
I'm not clear on whether he means that Java's type system is unsound, or that the type checking algorithm is unsound. From what I can tell, he's asserting the former but describing the latter.
The spec describes an unsound language; the holes in the type system are plugged at the VM level by run-time checks.

Also this jewel:

Cat[] cats = new Cat[3];
...
Animal[] animals = cats; // the same array

animals[0] = new Dog();

cats[0].smth(); // ClassCast exception or some such
Nov 20 2017
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 21.11.2017 07:46, Dmitry Olshansky wrote:
 
 The spec describes an unsound language; the holes in the type system are 
 plugged at the VM level by run-time checks.
 
 Also this jewel:
 
 Cat[] cats = new Cat[3];
 ...
 Animal[] animals = cats; // the same array
 
 animals[0] = new Dog();
 
 cats[0].smth(); // ClassCast exception or some such
 
Actually, the "java.lang.ArrayStoreException" will be thrown already when you attempt to add the dog to the cat array. This is by design though (and explicitly supported by the JVM). The reason why the null-related Java type system hole does not lead to memory corruption is that the JVM does not support generics. (It's all translated to explicit type casts that are expected to always succeed, but the JVM still checks them.)
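For contrast, the same code does not even compile in D, which simply disallows covariant assignment of mutable arrays, so no run-time store check is needed:

class Animal {}
class Cat : Animal {}
class Dog : Animal {}

void main()
{
    Cat[] cats = new Cat[3];
    // Animal[] animals = cats;
    // Error: cannot implicitly convert expression `cats` of type `Cat[]` to `Animal[]`
}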
Nov 21 2017
prev sibling parent reply Ola Fosheim Grostad <ola.fosheim.grostad gmail.com> writes:
On Tuesday, 21 November 2017 at 06:03:33 UTC, Meta wrote:
 I'm not clear on whether he means that Java's type system is 
 unsound, or that the type checking algorithm is unsound. From 
 what I can tell, he's asserting the former but describing the 
 latter.
He claims that type systems with existential rules, hierarchical relations between types, and null can potentially be unsound. His complaint is that if Java had been correctly implemented to the letter of the spec then this issue could have led to heap corruption if exploited by a malicious programmer. Runtime checks are part of the type system though, so it isn't unsound as implemented, since the generated JVM bytecode does runtime type checks upon assignment. AFAIK the complaint assumes that information from generic constraints isn't kept on a separate level. It is a worst case analysis of the spec...
Nov 21 2017
next sibling parent reply Meta <jared771 gmail.com> writes:
On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim Grostad 
wrote:
 On Tuesday, 21 November 2017 at 06:03:33 UTC, Meta wrote:
 I'm not clear on whether he means that Java's type system is 
 unsound, or that the type checking algorithm is unsound. From 
 what I can tell, he's asserting the former but describing the 
 latter.
He claims that type systems with existential rules, hierarchical relations between types and null can potentially be unsound. His complaint is that if Java had been correctly implemented to the letter of the spec then this issue could have led to heap corruption if exploited by a malicious programmer. Runtime checks are part of the type system though, so it isn't unsound as implemented as generated JVM does runtime type checks upon assignment. AFAIK the complaint assumes that information from generic constraints isn't kept on a separate level. It is a worst case analysis of the spec...
I don't quite understand the logic here, because it seems to be backwards reasoning. Constrain<U, ? super T> is a valid type because null inhabits it? That doesn't make sense to me. He also cites the "implicit constraint" that X extends U where X is ? super T, but X does not meet that constraint (Constrain<U, X extends U>), so how can the type checker deduce that X extends U?
Nov 21 2017
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Tuesday, 21 November 2017 at 18:00:37 UTC, Meta wrote:
 I don't quite understand the logic here, because it seems to be 
 backwards reasoning. Constrain<U,? super T> is a valid type 
 because null inhabits it? That doesn't make sense to me. He 
 also cites the "implicit constraint" that X extends U where X 
 is ? super T, but X does not meet that constraint (Constrain<U, 
 X extends U>) so how can  the type checker deduce that X 
 extends U?
I haven't dug into the details of the paper as I think the authors didn't try to appear neutral, e.g. quoting null as the billion dollar mistake, and made their finding seem more spectacular than it is… What I get from skimming over it is this. You get a call:

  upcast(constrain, x) -> String

where:

- constrain is of type Constrain<String, X>
- x is of type X (X is an unspecified supertype of Integer)
- the return type is String, which has X as subclass

So you get String <: X <: Integer. The deduction that String is a superclass of Integer could come from:

  Constrain<String, X> where X = ? super Integer = unknown type that is supertype of Integer
Nov 21 2017
prev sibling parent reply Mark <smarksc gmail.com> writes:
On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim Grostad 
wrote:
 Runtime checks are part of the type system though
I wouldn't say that, particularly if we are talking about a statically typed language (which Java is).
Nov 22 2017
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Wednesday, 22 November 2017 at 17:17:07 UTC, Mark wrote:
 On Tuesday, 21 November 2017 at 09:12:25 UTC, Ola Fosheim 
 Grostad wrote:
 Runtime checks are part of the type system though
I wouldn't say that, particularly if we are talking about a statically typed language (which Java is).
Very few imperative programming languages are fully statically typed.
Nov 22 2017
prev sibling parent reply Dukc <ajieskola gmail.com> writes:
On Sunday, 19 November 2017 at 04:04:04 UTC, Walter Bright wrote:
 Interestingly, `int` isn't nullable, and we routinely use 
 rather ugly hacks to fake it being nullable, like reserving a 
 bit pattern like 0, -1 or 0xDEADBEEF and calling it 
 INVALID_VALUE, or carrying around some other separate flag that 
 says if it is valid or not. These are often rich sources of 
 bugs.

 As you can guess, I happen to like null, because there are no 
 hidden bugs from pretending it is a valid value - you get an 
 immediate program halt - rather than subtly corrupted results.
I don't deny these. Null is an excellent way to denote "empty" or "invalid". That's just what std.typecons.Nullable!T is for. Granted, it is not quite as elegant as naturally nullable types. But that does not mean nullables are always good. Consider:

struct TimeOfDay
{
    byte hours;
    byte minutes;
    byte seconds;
}

While it might make sense to make the TimeOfDay nullable as a whole, you definitely do not want all the fields to have a null value each. You know statically that if the struct is valid, then all its members are valid. It would be only a performance slowdown to check for null with them. You could skip those null-checks by convention, but for sure you would not always remember, causing sub-optimal performance. Ideally you would want to leave it up to the type user whether the type can be null or not - whether that's worth its weight is a different question though.

About the question what should be the default-initialized value for an abstract type were it non-nullable, I think the type definer should decide that. A library solution here would sound credible to me. A type that wraps a reference type behaving like a value type. Default initialized value and what to do on copy would be passed as template parameters. Perhaps I should try...
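Roughly along these lines, perhaps (ValueRef and everything in it are hypothetical; copy behaviour is left out of the sketch):

// Hypothetical sketch: a class reference that behaves like a value,
// falling back to a supplied default instead of null.
struct ValueRef(T, alias makeDefault) if (is(T == class))
{
    private T _ref;

    ref T get()
    {
        if (_ref is null) _ref = makeDefault();
        return _ref;
    }

    alias get this;   // forward member access to the wrapped reference
}

class Test { int value; }

void main()
{
    ValueRef!(Test, () => new Test) t;   // usable without an explicit `new`
    t.value++;                           // no null dereference possible here
    assert(t.value == 1);
}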
Nov 20 2017
parent Dukc <ajieskola gmail.com> writes:
On Monday, 20 November 2017 at 10:45:20 UTC, Dukc wrote:
 A type that wraps a reference type behaving like a value type. 
 Default initialized value and what to do on copy would be 
 passed as template parameters. Perhaps I should try...
Just realized Unique!T is already pretty close. A few (non-breaking) modifications on it could do the trick.
Nov 20 2017
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.11.2017 03:25, codephantom wrote:
 On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin wrote:
 It peeked my interested, because when I first started studying D, the 
 lack of any warning or error for this trivial case surprised me.

 // Example A
 class Test
 {
     int Value;
 }

 void main(string[] args)
 {
     Test t;
     t.Value++;  // No compiler error, or warning.  Runtime error!
 }
Also, if you start with nothing, and add 1 to it, you still end up with nothing, cause you started with nothing. That makes completed sense to me. So why should that be invalid?
Because, for example, 'int' does not have a special null value, and we don't want it to have one. The code starts with nothing, and tries to increment an 'int' Value that is associated to nothing. What is this value? There is no null in int. And anyway, the code does not say that t is nothing, it says that t is a Test. Then it does not say what kind of Test it is. The new features allow you to specify that t may be nothing, and they add a type int? that carries the cost of a special null value for those who are into that kind of thing.
Nov 17 2017
prev sibling next sibling parent codephantom <me noyb.com> writes:
On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin 
wrote:


 // Example B
 class Test
 {
     public int Value;
 }
 					
 public class Program
 {
     public static void Main()
     {
         Test t;
         t.Value++;  // Error: Use of unassigned local variable 
 't'
     }
 }
 https://dotnetfiddle.net/8diEiG
Let's try reversing that question: why won't C# let me do something with nothing? D lets me do that ;-)

The problem, though, seems to be the runtime environment... My code specifically said I wanted to do something with nothing, the compiler said, sure, go ahead... and then it crashed at execution? Fix the runtime so it does what I want.
Nov 16 2017
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Friday, November 17, 2017 01:47:01 Michael V. Franklin via Digitalmars-d 
wrote:
 With Microsoft's proposed change, the compiler will emit a
 warning for Example C.  If you want to opt out of the warning,
 you'll need to declare `_instance` as `Test? _instance` (see the
 '?' there).
Personally, I'm flat out against anything that would generate a warning rather than an error. If the compiler can't guarantee that your code is wrong, then that check should be left up to a linter. As soon as something is a warning in the compiler, you're going to be forced to fix it whether it makes sense to fix it or not, because it's not appropriate to leave warnings in a project. Having a way to tell the compiler to shut up improves things, but it would be really annoying and is generally the sort of thing that would be left to a linter.

If we can add something to the compiler which finds bugs 100% correctly and generates an error for them, then I have no problem with that. But I don't want to see any warnings about things that "might" be wrong. I think that it was a huge mistake for Walter to add warnings to the compiler (which he only did after a lot of nagging, and I suspect that he agrees that it was a mistake; I'm sure that he's not entirely happy about it regardless). The compiler should only ever complain about something that is guaranteed to be wrong. Warnings are just errors in disguise but where what they're complaining about isn't necessarily bad code.

 still considers it an improvement to the language worth pursuing.
   Is there hope for D, too?
In general, Walter doesn't like stuff that requires code flow analysis. I believe that dmd does do _some_ code flow analysis, so I don't think that that's a deal breaker, but it's the sort of thing that I think tends to get shot down, because it gets complicated fast, and in general, D's solution to this problem was to default initialize everything to values that were either perfectly legitimate or as close as possible to error values so that the problem would be caught quickly (and is simpler than code flow analysis).

If it can be implemented in a way that the compiler generates an error only when it's completely sure that what you're doing is wrong, then I see no problem with it, but it has to be an error, and the compiler can't be generating errors when what you're doing could actually be fine.

Personally, I'm inclined to think that not initializing a variable which defaults to null is a complete non-issue. Sure, it would be nice if the compiler caught it and told you so that you didn't have to even run your unit tests to find the problem (that's always nice), but it's also the sort of thing that's immediately obvious as soon as you hit that piece of code (which is very early on in the process if you're unit testing your code), and it doesn't even require writing additional unit tests to catch it - any tests that test the code properly will find the problem immediately. So, in practice, it's a mistake that's quickly found and fixed. As such, while I have no problem with the compiler giving an error when you've definitively screwed up and are calling a member function on a null object, I question that it actually fixes much, and I definitely don't want to be required to do something to my code to make the compiler shut up, because it's incorrectly decided that what I'm doing is wrong.

Another thing to consider with things like this though is generic code. It's pretty trivial to have generic code run afoul of anything warning that what you're doing might be wrong when it's actually just fine and pretty annoying to write in a way that makes the compiler shut up about it (a warning for unused variables would be a prime example where we'd have serious problems like that). At the moment, I can't think of why that would be a problem in this case (presuming that the compiler only complained when it definitively knew that the code was calling a member function on a null reference or pointer), but it's something that would have to be considered.

- Jonathan M Davis
Nov 17 2017
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 17.11.2017 11:19, Jonathan M Davis wrote:
   If the compiler can't guarantee that your code is
 wrong, then that check should be left up to a linter.
I.e., you think the following code should compile:

class C{}
void main(){
    size_t a = 2;
    C b = a;
    size_t c = b;
    import std.stdio;
    writeln(c); // "2"
}
Nov 17 2017
prev sibling next sibling parent Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 17 November 2017 at 01:47:01 UTC, Michael V. Franklin 
wrote:
 It peeked my interested, because when I first started studying 
 D, the lack of any warning or error for this trivial case 
 surprised me.
You wanna get freaked out? Try that very same trivial example with the `-O` option to dmd.

$ dmd -O pp
pp.d(20): Error: null dereference in function _Dmain

Yes, the optimizer has a compile-time null check... but the main compiler doesn't. Walter has explained it is because the optimizer does some flow analysis that the semantic step doesn't. But still, sooooo weird.
Nov 17 2017
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/16/2017 5:47 PM, Michael V. Franklin wrote:
 the lack of any warning or error for this trivial case surprised me.
Consider the following code:

void test()
{
    int* p;
    *p = 3;
}

Compiling it with -O gives:

Error: null dereference in function _D5test54testFNaZv

The -O is necessary because the optimizer uses data flow analysis, which makes the error easy to pick up. Note that if the code were written:

void test(int i)
{
    int* p;
    if (i)
        p = &i;
    *p = 3;
}

no error is diagnosed. Data flow analysis determines all paths, and some of those paths may (legitimately) never happen, and so giving an error would be spurious.
Nov 17 2017