
digitalmars.D - Suggestion: "fix" assert(obj)

"Kristian Kilpi" <kjkilpi gmail.com> writes:
This issue has been proposed before. Well, I think it's time to suggest
it again...

The problem is that

   assert(obj);

does not first check if 'obj' is null. It just executes the object's
invariants.
So, one should usually write instead:

   assert(obj !is null);
   assert(obj);


In addition, if you write something like this

   assert(val > 0 && obj);

, then it's checked that 'obj' is not null (instead of running its
invariants).


I propose that the two previous cases should be combined.
This won't break any existing code. Actually, it should make it more
bug free.

That is, if an object is used inside an assert (anywhere inside it),
then first it's checked that the object is not null, and then its
invariants are run:

   assert(obj);  //== "assert(obj !is null && obj.runInvariants());"

   assert(val > 0 && obj);  //== "assert(val > 0 && obj !is null && obj.runInvariants());"
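As a rough sketch of the difference (the class and function names here are invented for illustration, assuming a D1-era compiler):

```d
import std.stdio;

// Hypothetical class: its invariant is what assert(obj) runs today.
class Account
{
    int balance;

    invariant
    {
        assert(balance >= 0);
    }
}

void withdraw(Account obj, int val)
{
    // Today assert(obj) goes straight for the invariant, dereferencing
    // obj in the process, so a null obj segfaults instead of asserting.
    // Under the proposal it would behave like this pair:
    assert(obj !is null);  // explicit null check first...
    assert(obj);           // ...then the invariant runs safely
    obj.balance -= val;
}

void main()
{
    withdraw(new Account, 0);
    writefln("ok");
}
```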
Jun 14 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Kristian Kilpi wrote:
 That is, if an object is used inside an assert (anywhere inside it), 

I'd change that to "if an object is used *as a boolean value* inside an assert". That way the behavior won't change when comparing objects to each other and calling member functions on them.
 then first it's checked that the object is not null, and then its 
 invariants are run:

Other than that, the only problem I'd have with this is that object references in asserts still have different semantics than object references in normal boolean expressions (like if/while/whatever conditions). That's still better than what we have currently, though.
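A minimal sketch of that distinction (hypothetical class, invented for illustration):

```d
// Hypothetical class to illustrate which asserts would change meaning.
class Node
{
    int id;

    invariant
    {
        assert(id >= 0);
    }
}

void check(Node a, Node b)
{
    assert(a);         // reference used *as a boolean*: would gain the null check
    assert(a !is b);   // comparison of references: unaffected by the proposal
    assert(a.id >= 0); // member access: unaffected (still crashes if a is null)
}
```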
Jun 14 2007
"Kristian Kilpi" <kjkilpi gmail.com> writes:
On Thu, 14 Jun 2007 12:13:10 +0300, Frits van Bommel  
<fvbommel REMwOVExCAPSs.nl> wrote:
 Kristian Kilpi wrote:
 That is, if an object is used inside an assert (anywhere inside it),

I'd change that to "if an object is used *as a boolean value* inside an assert". That way the behavior won't change when comparing objects to each other and calling member functions on them.

Good revision, that's what I meant.
 then first it's checked that the object is not null, and then its  
 invariants are run:

Other than that, the only problem I'd have with this is that object references in asserts still have different semantics than object references in normal boolean expressions (like if/while/whatever conditions). That's still better than what we have currently, though.

Yes, there is that. But I don't see that causing any trouble, even if that's (a bit?) inconsistent. It just makes asserts more strict (which is a good thing(tm)). And, by their nature, invariants must always pass.
Jun 14 2007
Xinok <xnknet gmail.com> writes:
Kristian Kilpi wrote:
 So, one should usually write instead:
 
   assert(obj !is null);
   assert(obj);

Until we do get a fix for this problem, wouldn't it be easiest to put this check in the invariant itself?

   invariant { assert(this !is null); }
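Spelled out as a sketch (hypothetical class, invented for illustration):

```d
// Sketch of the suggested workaround: put the null check in the invariant.
class Widget
{
    int w, h;

    invariant
    {
        assert(this !is null);   // intended to catch a null reference first
        assert(w >= 0 && h >= 0);
    }
}
```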
Jun 14 2007
Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Xinok wrote:
 Kristian Kilpi wrote:
 So, one should usually write instead:

   assert(obj !is null);
   assert(obj);

Until we do get a fix for this problem, wouldn't it be easiest to put this check in the invariant itself? invariant{ assert(this !is null); }

It would be, except that's too late. The code that gets called on assert(obj) goes like this:

---
void _d_invariant(Object o)
{
    ClassInfo c;
    //printf("__d_invariant(%p)\n", o);

    // BUG: needs to be filename/line of caller, not library routine
    assert(o !is null); // just do null check, not invariant check

    c = o.classinfo;
    do
    {
        if (c.classInvariant)
        {
            (*c.classInvariant)(o);
        }
        c = c.base;
    } while (c);
}
---

(Note: The assert is usually not compiled-in since this code is in Phobos and the distributed binary version is compiled with -release :( )

The actual invariant code is called in the innermost nested block. The last statement before the loop accesses the vtable pointer of 'o', which segfaults/causes an access violation if "o is null". Before the invariant even gets a chance to run...
Jun 14 2007
Walter Bright <newshound1 digitalmars.com> writes:
Kristian Kilpi wrote:
 The problem is that
 
   assert(obj);
 
 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.
Jun 14 2007
"Kristian Kilpi" <kjkilpi gmail.com> writes:
On Thu, 14 Jun 2007 21:14:38 +0300, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Kristian Kilpi wrote:
 The problem is that
    assert(obj);
  does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Heh, well, it would be nicer if the assert would fail instead. ;) I think usually that's what programmers expect.
Jun 14 2007
Lionello Lunesu <lio lunesu.remove.com> writes:
Walter Bright wrote:
 Kristian Kilpi wrote:
 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

You're right, but the problem is that you need a debugger attached to catch that exception, whereas asserts help you with debugging without any debugger. This is part of the strength of assert. Many problems will cause some kind of exception that's theoretically catchable in a debugger, but we write asserts nonetheless, right? L.
Jun 15 2007
Georg Wrede <georg nospam.org> writes:
Walter Bright wrote:
 Kristian Kilpi wrote:
 
 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

That sounds like "this is not negotiable".

IMHO, assert(o) should mean a check for existence. When it doesn't, people write if(o !is null) all over the place, *simply* because of the symmetry. It's easier to remember one single idiom for the test, rather than having to first learn and then remember where to use which. That litters code, is cumbersome, and is more verbose than needed, in a Practical language.

Having assert(o) mean assert(o.integrity()), or whatever unobvious, saves ink, no question. How much work it saves when we ordinary mortals start using it is another question. Ink savings are not an argument when there are at most a couple of such asserts in an entire application. But where ink savings would be an argument, i.e. lots of assert(o)s in the source tree, it is unusable since segfaults don't tell the *source line number*. Thus, again no ink savings.

The current assert behavior is unexpected, and it surprises one exactly when one least needs it: when the Boss says you gotta fix the bugs, tonight. At any rate, I suspect D programmers are hardly relying on the current behavior, simply because it smells like it can change any day. So a careful coder would write assert(o.checkInvariants()) anyhow. Besides, the object won't have any checkInvariants() to begin with, unless the coder is one of the careful ones!

---

What would your reaction be if you found the following line inside a method?

   assert();

What's the first thought? Apart from that it must cause a compiler error since there's no expression at all, what would you expect it to do? If I told you we'd fixed gdc so that an empty assert within a method causes immediate assertion of the current instance's consistency, would you approve?

---

Unexpected behavior introduces exceptions (as in normal English) into the language that you must remember. They immediately make the coder's burden heavier, and later stand in the way of future improvements or changes. This assert thing certainly isn't worth the current and future cost, IMHO.
Jun 15 2007
Tristam MacDonald <swiftcoder gmail.com> writes:
Actually, I would expect it to be a no-op, since the fewer conditions that must
be satisfied, the less likely the assert is to fail. Think of the nullary logic
ops in Scheme, etc. (and, or)...

Georg Wrede Wrote:
 What would your reaction be if you find the following line inside a method?
 
    assert();
 
 What's the first thought? Apart from that it must cause a compiler error 
 since there's no expression at all. What would you expect it to do? If I 
 told you we've fixed gdc so that an empty assert within a method causes 
 immediate assertion of the current instance's consistency, would you 
 approve?

Jun 15 2007
Georg Wrede <georg nospam.org> writes:
(( Top-posting makes it harder to follow for the rest of us...))

Tristam MacDonald wrote:
 Georg Wrede Wrote:

 Actually, I would expect it to be a no-op, since the fewer conditions
 that must be satisfied, the less likely the assert is to fail. Think
 of a nullary logic ops in scheme, etc. (and, or)...

See way below.

---

   if(){writefln("foo");}

hardly produces a no-op. It gives "expression expected, not ')'". And it should. (Academically:) if the above were not an error, then the next thing would be to return false, because the Truth of Nothing is False.
 What would your reaction be if you find the following line inside a
 method?
 
 assert();
 
 What's the first thought? Apart from that it must cause a compiler
 error since there's no expression at all. What would you expect it
 to do? If I told you we've fixed gdc so that an empty assert within
 a method causes immediate assertion of the current instance's
 consistency, would you approve?


So, logically, assert() should fail. Nothing (as in ()) equals false. Thus, it should fail. An assertion fails (even by the grammar of natural languages, as in English) when it does not produce something true. (Assert victim has pulse. He has, ok. He has no pulse: dead, assertion failed.)

   assert(null) should produce false.
   assert(0)    should produce false.
   assert(i)    should produce true if i is non-null, else false.
   assert(o)    should produce true if o exists, false otherwise.

Few expect, and even fewer hope, that testing for existence (as in assert(o)) actually decides to run some canonicalized method of o "to check if the instance is happy".
Jun 15 2007
Georg Wrede <georg nospam.org> writes:
Walter Bright wrote:
 Kristian Kilpi wrote:
 
 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.
Jun 15 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:

 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

What I find odd is that Walter often argues that things in D that *look* like C++ should *act* like C++ as much as possible. As a C++ user I continually find it bizarre and hard to remember that assert() has these special cases.

There's something unusual about assert(0), too. It stays enabled in release mode. And it's not a seg fault. But assert on a 0 that happens to have a certain type is. Odd. Not making my life any easier, I don't think.

I'm sure if I keep poking my brain and telling it "assert(object) is different!", I'll get it internalized eventually, but I really just don't find the "feature" of running invariants all that compelling.

--bb
Jun 15 2007
Georg Wrede <georg nospam.org> writes:
Bill Baxter wrote:
 Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:
 The problem is that
 
 assert(obj);
 
 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

What I find odd is that Walter often argues that things in D that *look* like C++ should *act* like C++ as much as possible.

He should. This is a C family language, designed to woo the C(++) crowd out of their misery. So, things that "look" the same should act unsurprisingly. And things that don't act like those folks expect should look different.
 As a C++ user I continually find it bizarre and hard to remember that
 assert() has these special cases.

Exactly my point.
 There's also something unusual about assert(0), too.  It stays
 enabled in release mode.  And it's not a seg fault.  But assert on a
 0 that happens to have a certain type is.  Odd.  Not making my life 
 any easier I don't think.  I'm sure if I keep poking my brain and 
 telling it "assert(object) is different!", I'll get it internalized 
 eventually, but I really just don't find the "feature" of running 
 invariants all that compelling.

Precisely why I got my blood pressure up over this.

Actually, this particular case doesn't bother me that much since I personally don't use asserts that much. Instead I use writefln(whatever) for my debugging, and either delete those lines when that particular bug is fixed, or leave them as comments. But I can imagine a whole lot of circumstances where it would be wise to leave the assertion in place. The most obvious example would probably be creating a library of red-black trees.

So, this is an issue that carries much weight as a matter of _principle_. If we don't close the door for mice right now, then we might as well leave the entire wall open. The end result five years from now won't be that different. (I've lived in the countryside, I know.)

Currently, msg 54517 shows that the entire "assert show" is broken. It might be argued that the whole concept of calling assert and having it sort out for itself whether there's a classInvariant method is questionable. (In other threads folks have been dying for a compelling example of the need for compile time introspection (as in, call this method if it exists or do something else reasonable instead). Well, here it is.)

Even if it weren't broken, the question is, should it at all be possible to simply call assert and have it _implicitly_ run methods in the object instance? Such would heavily change the semantics of assert, as compared with any other languages (read: people's expectations).
Jun 15 2007
Bill Baxter <dnewsgroup billbaxter.com> writes:
Georg Wrede wrote:
 Bill Baxter wrote:
 Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:
 The problem is that

 assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

What I find odd is that Walter often argues that things in D that *look* like C++ should *act* like C++ as much as possible.

He should. This is a C family language, designed to woo the C(++) crowd out of their misery. So, things that "look" the same should act unsurprisingly. And things that don't act like those folks expect should look different.

Right. I was agreeing with you. My point was that Walter *usually* advocates "things that look like C should act like C", but in this case he's going against his own advice. I didn't mean to say that trying to make C constructs do the same thing in D was odd. --bb
Jun 16 2007
Walter Bright <newshound1 digitalmars.com> writes:
Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:

 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

I don't know when assert first appeared. But I first encountered them in the 80's, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.

Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, the debugger would pop you right to where the problem was.

It's not only asserts done in hardware, it's asserts with:

1) zero code size cost
2) zero runtime cost
3) they're there for every pointer dereference
4) they work with the debugger to let you know exactly where the problem is

Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.
Jun 15 2007
sambeau <spam_sambeau mac.com> writes:
Walter Bright Wrote:
 It's not only asserts done in hardware, it's asserts with:
 
 1) zero code size cost
 2) zero runtime cost
 3) they're there for every pointer dereference
 4) they work with the debugger to let you know exactly where the problem is
 
 Seg faults are not an evil thing, they're there to help you. In fact, 
 I'll often *deliberately* code them in so the debugger will pop up when 
 it hits them.

</lurk>

While I get your point completely, I believe this style of debugging is only really useful to the hard-core professional.

For the novice, where the debugger is a scary place of last resort, it would be more useful to have an assert too. All most of us want is a line number: and segfaults don't give us that.

If we also have an assert, hard-core programmers can choose whether to assert or segfault, surely?

<lurk>
Jun 16 2007
Walter Bright <newshound1 digitalmars.com> writes:
sambeau wrote:
 While I get your point completely I believe this style of debugging is only
really useful to the hard-core professional.
 
 For the novice, where the debugger is a scary place of last resort, it would
be more useful to have an assert too. All most of us want is a line number: and
segfaults don't give us that.

They do when combined with symbolic debug info - that is how the debugger is able to display your source code with the highlighted line where it seg faulted.
 
 If we also have an assert, hard-core programers can choose whether to assert
or segfault, surely?.
 <lurk>

Jun 16 2007
Ary Manzana <ary esperanto.org.ar> writes:
But, anyway, an "Assert failed in line 178" is much more informative 
than "Program segfault".

You'll waste less time fixing the bug if you know which line causes 
the trouble, instead of doing these three steps: compiling with 
symbolic debug info, launching the program again, and seeing where it crashes.

You could even fix the bug by just looking at line 178, without debugging.

Walter Bright wrote:
 sambeau wrote:
 While I get your point completely I believe this style of debugging is 
 only really useful to the hard-core professional.

 For the novice, where the debugger is a scary place of last resort, it 
 would be more useful to have an assert too. All most of us want is a 
 line number: and segfaults don't give us that.

They do when combined with symbolic debug info - that is how the debugger is able to display your source code with the highlighted line where it seg faulted.

Jun 16 2007
Jascha Wetzel <firstname mainia.de> writes:
Ary Manzana wrote:
 You'll waste less time to fix the bug if you know which is the line that 
 causes trouble, instead of doing these three steps: compiling with 
 symbolic debug info, launching the program again and see where it crashes.
 
 You could even fix the bug by just looking at line 178, without debugging.

there is no way access violations can have line numbers without adding debug symbols (read: line numbers) to the executable. imho, using -g should be the default during development and testing.
Jun 16 2007
Ary Manzana <ary esperanto.org.ar> writes:
The problem in question is that assert(someObject) doesn't first check 
that someObject is not null, so if it's null, calling its invariant 
results in a segmentation fault. In code:

assert(someObject);

becomes

assert(invariantOf(someObject));

(where invariantOf is a fictitious function that is the invariant of the 
object)

You can avoid segmentation fault by making the compiler rewrite

assert(someObject);

to

assert(someObject !is null, "someObject is null");
assert(invariantOf(someObject));

Why not do it? In the first case you get a segfault, in the second you 
get a clear "Assert failed in line ..., someObject is null".
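Until the compiler does that rewrite, something close can be had by hand; a minimal sketch, assuming a D1-era compiler (the helper name assertValid is invented):

```d
// Hypothetical helper that approximates the proposed rewrite:
// fail with a message instead of segfaulting on a null reference.
void assertValid(T)(T obj, char[] msg = "object is null")
{
    assert(obj !is null, msg);  // clear assert failure, with line number
    assert(obj);                // now safe: runs the class invariant
}
```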

Jascha Wetzel wrote:
 Ary Manzana wrote:
 You'll waste less time to fix the bug if you know which is the line 
 that causes trouble, instead of doing these three steps: compiling 
 with symbolic debug info, launching the program again and see where it 
 crashes.

 You could even fix the bug by just looking at line 178, without 
 debugging.

there is no way how access violations can have line numbers without adding debug symbols (read: line numbers) to the executable. imho, using -g should be default during development and testing.

Jun 16 2007
Ary Manzana <ary esperanto.org.ar> writes:
But... thinking a little more, it may actually be the programmer's fault 
to assume that someObject is not null. If you call 
assert(someObject.property) then you still get a segfault if someObject 
is null, and here it doesn't seem reasonable for the compiler to check 
if someObject is null.

Ary Manzana wrote:
 The problem in question is that assert(someObject) doesn't first check 
 that someObject is not null, so if it's null, calling its invariant 
 results in a segmentation fault. In code:
 
 assert(someObject);
 
 becomes
 
 assert(invariantOf(someObject));
 
 (where invariantOf is a fictitious function that is the invariant of the 
 object)
 
 You can avoid segmentation fault by making the compiler rewrite
 
 assert(someObject);
 
 to
 
 assert(someObject !is null, "someObject is null");
 assert(invariantOf(someObject));
 
 Why not do it? In the first case you get a segfault, in the second you 
 get a clear "Assert failed in line ..., someObject is null".
 
 Jascha Wetzel wrote:
 Ary Manzana wrote:
 You'll waste less time to fix the bug if you know which is the line 
 that causes trouble, instead of doing these three steps: compiling 
 with symbolic debug info, launching the program again and see where 
 it crashes.

 You could even fix the bug by just looking at line 178, without 
 debugging.

there is no way how access violations can have line numbers without adding debug symbols (read: line numbers) to the executable. imho, using -g should be default during development and testing.


Jun 17 2007
Jascha Wetzel <firstname mainia.de> writes:
Ary Manzana wrote:
  If you call
 assert(someObject.property) then you still get a segfault if someObject 
 is null, and here it doesn't seem reasonable for the compiler to check 
 if someObject is null.

yep, the assert can wrap an expression and therefore an arbitrary amount of code. it's the same problem as with checking for AVs in the whole program in general.
Jun 17 2007
Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Walter Bright wrote:
 sambeau wrote:
 While I get your point completely I believe this style of debugging is 
 only really useful to the hard-core professional.

 For the novice, where the debugger is a scary place of last resort, it 
 would be more useful to have an assert too. All most of us want is a 
 line number: and segfaults don't give us that.

They do when combined with symbolic debug info - that is how the debugger is able to display your source code with the highlighted line where it seg faulted.

Uh... I just don't feel like teaching all my potential testers what a debugger is, and that they should always run the game from within the debugger, or I won't be able to help when something goes wrong :/

-- 
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode
Jun 16 2007
Jascha Wetzel <firstname mainia.de> writes:
Tom S wrote:
 Uh... I just don't feel like teaching all my potential testers what a 
 debugger is, and that they should always run the game from within the 
 debugger, or I won't be able to help when something goes wrong :/

you don't need to. just write a batch file that runs your program in the debugger and make it the default way to start the game for the testers. you'll have

Unhandled Exception: EXCEPTION_ACCESS_VIOLATION(0xc0000005) at _Dmain debugees\debuggee.d:237 (0x00402768)

instead of

Error: Access Violation

The problem with access violations is that the CPU raises them, the OS handles them and throws the exception. In order to have the same info in those exceptions as we have in D exceptions, they need to be intercepted and decorated with filenames and source line numbers - data that is available at runtime only from debug symbols. That means that there needs to be a runtime that can interpret debug symbols, which basically is shipping half a debugger with the executable.
Jun 16 2007
Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Jascha Wetzel wrote:
 Tom S wrote:
 Uh... I just don't feel like teaching all my potential testers what a 
 debugger is, and that they should always run the game from within the 
 debugger, or I won't be able to help when something goes wrong :/

you don't need to. just write a batch file that runs your program in the debugger and make it the default way to start the game for the testers. you'll have Unhandled Exception: EXCEPTION_ACCESS_VIOLATION(0xc0000005) at _Dmain debugees\debuggee.d:237 (0x00402768) instead of Error: Access Violation The problem with access violations is that the CPU raises them, the OS handles them and throws the exception. In order to have the same info in those exceptions as we have in D exceptions, they need to be intercepted and decorated with filenames and source line numbers - data that is available at runtime only from debug symbols. That means, that there needs to be a runtime that can interpret debug symbols and that basically is shipping half a debugger with the executable.

Ok, point. Thanks! :) But now a tougher question.. What happens if the crash occurs inside a dynamically (DDL) loaded module? Can I somehow tell the debugger where to look for symbols? Right now I'm using the famous Phobos hack for backtraces. I modified it so it provides an interface for feeding new symbols, e.g. from DDLs.

-- 
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode
Jun 16 2007
Jascha Wetzel <firstname mainia.de> writes:
Tom S wrote:
 Ok, point. Thanks! :) But now a tougher question.. What happens if the 
 crash occurs inside a dynamically (DDL) loaded module? Can I somehow 
 tell the debugger where to look for symbols? Right now I'm using the 
 famous Phobos hack for backtraces. I modified it so it provides an 
 interface for feeding new symbols, e.g. from DDLs.

generally the debugger loads symbols from DLLs when they are loaded into the process' address space. i don't know enough about DDL's loading mechanism, but i'm sure it can be handled similarly. besides that, as far as Ddbg is concerned, it doesn't support loading debug symbols from DLLs yet, but that'll change...
Jun 16 2007
Deewiant <deewiant.doesnotlike.spam gmail.com> writes:
Walter Bright wrote:
 sambeau wrote:
 While I get your point completely I believe this style of debugging is
 only really useful to the hard-core professional.

 For the novice, where the debugger is a scary place of last resort, it
 would be more useful to have an assert too. All most of us want is a
 line number: and segfaults don't give us that.

They do when combined with symbolic debug info - that is how the debugger is able to display your source code with the highlighted line where it seg faulted.

Yes, but you still need the debugger to see it. The fact remains that many novices and stubborn professionals don't use debuggers, no matter how useful they are and how many times they are praised by others. They're happy with their printf debugging, even though they would probably be happier using a debugger, at least in some cases.

The situation is a lot like that of many C++ zealots who would like D a lot but simply refuse to try it for some reason or other. Invalidate that reason and you get more users. In this case, either remove all asserts and force everyone to use a debugger in all cases, or add some asserts to make people generally happier and more productive when coding.

Personally, I find it annoying that most crashing bugs I can catch easily with a normal run of the program, but when it's "Access Violation" or "Win32 Exception" or "Segmentation fault", I have to recompile with -g and fire up the debugger to find out exactly what happened and where.

(OT: A similar case could be made for the Phobos call stack backtracing patch.)

-- 
Remove ".doesnotlike.spam" from the mail address.
Jun 16 2007
Georg Wrede <georg nospam.org> writes:
Deewiant wrote:
 Walter Bright wrote:
sambeau wrote:

While I get your point completely I believe this style of debugging is
only really useful to the hard-core professional.



Right.
For the novice, where the debugger is a scary place of last resort, it
would be more useful to have an assert too. All most of us want is a
line number: and segfaults don't give us that.

They do when combined with symbolic debug info - that is how the debugger is able to display your source code with the highlighted line where it seg faulted.

Yes, but you still need the debugger to see it. The fact remains that many novices and stubborn professionals don't use debuggers, no matter how useful they are and how many times they are praised by others. They're happy with their printf debugging, even though they would probably be happier using a debugger, at least in some cases.

Actually, the debugger and asserts (or even printf statements) all serve different purposes.

The debugger is what I'd use when the situation is complex, that is, when there is much to inspect in the program state at one time. For example a tree structure, or some other complex problem that I'm clueless about. (And a debugger is of course essential for the trial-and-error type of coder. :-) Sadly, that is the kind of coding that too easy access to [gui]debuggers tends to foster! :-( )

Printfs are real handy for much of the rest, since I usually have a pretty good guess of the one or two variables that might contain wrong values at some point. (Actually, even loop debugging gets easier since I can put a few printfs inside the loop and then redirect the output to a file, so I have a hard copy of the entire situation that I can refer to repeatedly, instead of only the momentary situation in the debugger. Less'ing, grepping, head'ing or tail'ing the file are much faster than "creating the situation with single stepping" in the debugger.)

Asserts I mainly use in code that's ok at the time, but which might end up with wrong values when I'm debugging or developing another part of the routine or program. (Oh, and obviously and embarrassingly, I often use asserts instead of proper precondition tests.) And, finally, asserts are, IMHO, the easiest way of ensuring I get reasonable info when the program is run by others and it misbehaves.
 In this case, either remove all asserts and force everyone to use a debugger in
 all cases, or add some asserts to make people generally happier and more
 productive when coding.

Exactly.
 Personally, I find it annoying that most crashing bugs I can catch easily with a
 normal run of the program, but when it's "Access Violation" or "Win32 Exception"
 or "Segmentation fault", I have to recompile with -g and fire up the debugger to
 find out exactly what happened and where.

Ditto.
Jun 17 2007
prev sibling parent reply "Martin Howe" <martinhowe myprivacy.ca> writes:
Having only discovered that D even exists about a month ago, I've been 
lurking until now, but have to respond to this one:

"Deewiant" <deewiant.doesnotlike.spam gmail.com> wrote in message
news:f51ebj$15e1$1 digitalmars.com...

 for the novice, where the debugger is a scary place of last resort



 would be more useful to have an assert



 all most of us want is a line number



 stubborn professionals don't use debuggers

 they're happy with their printf debugging

 happier using a debugger, at least in some cases

Speaking as a "stubborn professional" who is "happy with my printf debugging" and happier to use a debugger "at least in some cases" (:P), I would say this:

To the novice, a debugger is indeed a scary place of last resort; however, to the professional the debugger can also be a place of excessive effort that can often be shown to be unnecessary; thus a debugger is still a last resort, but with "scary" replaced with "over-complicated"; you get the idea :)

The object of any commercial exercise is to successfully deal with a problem for as little time/work/cost as possible. This goes double when working in a very-tight-deadlines sector (as I have for most of my career). The metric of "zealously use a debugger *in all* circumstances" **fails** this test, because for many, many cases "printf debugging" and line numbers are sufficient. IMO, zealous use of debuggers is potentially as bad as zealous non-use of debuggers; the need to use a debugger depends on the job and the obviousness (or not) of the problem location.

I'd suggest this as an analogy: debuggers can be viewed as being to programming what in-patient invasive surgery is to medicine: the more operations that can be performed with a local, an endoscope and an out-patient clinic, the better. Save "cut him open and get your hands covered in blood going right inside his guts" surgery for when it is *absolutely* necessary.

Where *unexpected* failures are concerned, it can, as more than one person has already touched on, also depend on practicalities/legalities *over which you have no control*, or on ephemeral data such as that from a temperature sensor. In these cases, knowing *exactly* where the program failed, *the very first time it did so*, is essential. Asserts are a good way to do this where a physical possibility of failure exists but is unlikely (you hope!) to be triggered.
I'm sure we've all written things like:

    case 3:
        do_something();
        break;
    default:
        // Should be impossible, but at least trap it if so
        assert((y<=6 || x>=5), "X<5 but Y>6", __LINE__, "x=", x, "y=", y);
        break;
    }

It's annoying to have to assert() that the object exists first, but better than having to tell your client's overnight low-tech personnel in a country 3,000 miles away how to compile a suite whose source code they do not have, with a compiler they do not own, and then run it with the debugger that they do not understand on data whose trigger conditions no longer exist, all over the phone at 3am your time.
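In real D the extra diagnostic arguments of Martin's pseudo-assert would be folded into the message string, e.g. with `std.string.format`; `assert` attaches the file and line itself. A sketch of the same trap (`handle` and `do_something` are illustrative names, not from the thread):

```d
import std.string;

void handle(int x, int y)
{
    switch (x)
    {
        case 3:
            // do_something();
            break;
        default:
            // Should be impossible, but at least trap it if so;
            // on failure, assert reports file and line automatically
            assert(y <= 6 || x >= 5,
                   format("x<5 but y>6: x=%s y=%s", x, y));
            break;
    }
}
```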
Jun 21 2007
parent reply Jascha Wetzel <firstname mainia.de> writes:
Martin Howe wrote:
 To the novice, a debugger is indeed a scary place of last resort; however, 
 to the professional the debugger can also be a place of excessive effort 
 that can often be shown to be unnecessary; thus a debugger is stll a last 
 resort, but with "scary" replaced with "over complicated"; you get the idea 
 :)

to a novice, many things might be scary, including D itself, so i won't argue here.

to an experienced programmer a debugger should be a tool that saves you from applying multiple add-printf/recompile/restart cycles, by letting you intercept the program when it crashes and inspect the stack and heap instantly. it is there to complement asserts and exceptions, not as an alternative to them.

that surgery analogy doesn't quite work here, since printf's are even more invasive: you're basically implanting a piece of a debugger into the organism.
 It's annoying to have to assert() that the object exists first, but better 
 than having to tell your client's overnight low-tech personnel in country 
 3,000 miles away how to compile a suite whose source code they do not have, 
 with a compiler they do not own and then run with the debugger that they do 
 not understand on data whose trigger conditions no longer exist, all over 
 the phone at 3am your time.

and that's what memory dumps and post-mortem debugging is for. the client sends you the dump and you can debug the exact instance of the crash.
Jun 21 2007
parent reply "Martin Howe" <martinhowe myprivacy.ca> writes:
"Jascha Wetzel" <firstname mainia.de> wrote in message 
news:f5ecms$2ejq$1 digitalmars.com...
 ........

I won't argue with the rest, but
 that surgery analogy doesn't quite work here

Well, FWIW, IMO it does -- the printf is endoscope surgery: printfs are targeted at the area you know is the likely source of the trouble, and involve minimum trouble to implement; thus for simple cases it's quicker than the alternative. I use a debugger whenever programs crash for **no obvious reason**, because a debugger is the ONLY way to avoid multiple edit/printf/run cycles in such cases.

I must admit, the *routine* use of a debugger sounds like something that with a bit of discipline might be worth adopting; it still feels like overkill, but then I guess I just haven't worked in a sector where impenetrable errors are common daily occurrences.
 and that's what memory dumps and post-mortem debugging is for.
 the client sends you the dump and you can debug the exact instance of the 
 crash.

That sounds like something for bleeding-edge expert programmers (which I freely admit to not being one of) to do; it certainly wasn't covered by even 3rd-year undergrad stuff; any good web references you can point me to?
Jun 22 2007
parent Jascha Wetzel <firstname mainia.de> writes:
Martin Howe wrote:
 Well, FWIW, IMO it does -- the printf is endoscope surgery...

oh, ok, sorry - i (mis)understood it the other way around.
 I must admit, the *routine* use of debugger sounds like something that with 
 a bit of discipline might be worth adopting; it still feels like overkill, 
 but then I guess I just haven't worked in a sector where impenetrable errors 
 are common daily occurences.

for native executables, some information is provided exclusively by the debugger (or other tools) that VMs give you out of the box, namely stack traces. therefore it's plausible, that debuggers are more commonly used in native programming.
 and that's what memory dumps and post-mortem debugging is for.
 the client sends you the dump and you can debug the exact instance of the 
 crash.

That sounds like something for bleeding-edge expert programmers (which I freely admit to not being one of) to do; it certainly wasn't covered by even 3rd-year undergrad stuff; any good web references you can point me to?

actually it's less scary for the client than helping the developer to find the problem using log information or Java stacktraces. in most cases it's also easier than trying to describe how to reproduce the problem.

every WinXP user knows the "bla.exe has encountered a problem and needs to close" dialog with the "Send Error Report" and "Don't Send" buttons. what it does is create a minidump and send it to MS's crash statistics server. instead of letting the default dialog pop up, you can replace it with your own or simply save the dump to a file. Here is more info on that:
http://www.codeproject.com/debug/postmortemdebug_standalone1.asp

as of now, D applications catch all exceptions and print their message. therefore no D application will trigger that crash handler. upcoming releases of Ddbg will allow you to deal with minidumps properly, though.
Jun 22 2007
prev sibling next sibling parent reply Georg Wrede <georg nospam.org> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:
 The problem is that assert(obj); does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating systems, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.

Ok, sloppy wording on my part. "Pointer disasters" would've been more accurate.
 Seg faults are not an evil thing, they're there to help you. In fact, 
 I'll often *deliberately* code them in so the debugger will pop up when 
 it hits them.

That's okay. But what you do with D is hardly the average programming task. And not everybody uses a debugger. Many have their code run by others, and a phone call about a segfault is not an option. This is where asserts come in handy. Sure, we can learn to write if(o) and assert(o !is null). Maybe this discussion should be deferred until nothing big is going on in D.
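The two-step idiom Georg mentions, written out as a minimal sketch (class and function names assumed for illustration):

```d
class Foo
{
    int x;
    invariant { assert(x >= 0); }  // some class invariant
}

void use(Foo o)
{
    assert(o !is null);  // explicit null check: fails cleanly with file/line
    assert(o);           // runs the invariant -- but segfaults if o is null
}
```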
Jun 16 2007
parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Georg Wrede wrote:

 Sure, we can learn to write if(o) and assert(o !is null). 
 Maybe this
 discussion should be deferred until nothing big is going on in D.

Aww. But we have to keep ourselves entertained *somehow* while Walter is off implementing const. :-) --bb
Jun 16 2007
parent Georg Wrede <georg nospam.org> writes:
Bill Baxter wrote:
 Georg Wrede wrote:
 
 Sure, we can learn to write if(o) and assert(o !is null). 

 Maybe this discussion should be deferred until nothing big is going on in D.

Aww. But we have to keep ourselves entertained *somehow* while Walter is off implementing const. :-)

:-) Besides, it's not insurmountable to learn to "use this there and that otherwise". My main problem was that we should strive to keep the language clean and consistent. And this was a prime example of frivolously destroying that for a truly third-rate reason.

---

PS, if I ever start seeing code like

    if (o) assert(o);

then I will puke.
Jun 17 2007
prev sibling next sibling parent reply Don Clugston <dac nospam.com.au> writes:
Walter Bright wrote:
 Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:

 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

I don't know when assert first appeared. But I first encountered them in the 80's, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.

Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, the debugger would pop you right to where the problem was.

It's not only asserts done in hardware, it's asserts with:

1) zero code size cost
2) zero runtime cost
3) they're there for every pointer dereference
4) they work with the debugger to let you know exactly where the problem is

Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.

True, but forgetting to 'new' a class is an extremely common mistake. The first time I ever used classes in D, I didn't 'new' it (I bet this will happen to almost everyone from a C++ background!). Getting an AV with no line number is pretty off-putting. This remains the #1 situation where I use a debugger. And I hate using debuggers to find silly typos. Getting an assert failure with a line number would be enormously more productive. BTW, the same 'it segfaults anyway' argument could be used to some extent for array bounds checking.
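The mistake Don describes looks like this in practice (a sketch; the class name is invented for illustration):

```d
class Account
{
    int balance;
    void deposit(int amount) { balance += amount; }
}

void main()
{
    Account a;             // C++ habit: looks like an object, is a null reference
    // a.deposit(10);      // would dereference null: AV/segfault, no line number
    auto b = new Account;  // in D the object must be new'ed explicitly
    b.deposit(10);
    assert(b.balance == 10);
}
```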
Jun 19 2007
next sibling parent BCS <ao pathlink.com> writes:
Reply to Don,
 
 BTW, the same 'it segfaults anyway' argument could be used to some
 extent for array bounds checking.
 

    char[] buff = new char[10];
    buff[$] = 'c'; // one past the end, and (I think) more often than not won't AV
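With bounds checking enabled (the default outside -release builds), that same write is caught at once with a file and line rather than silently scribbling past the block (a sketch; the exact error class name has varied between runtimes):

```d
void main()
{
    char[] buff = new char[10];
    buff[9] = 'c';        // last valid index: fine
    // buff[10] = 'c';    // checked build: throws a bounds error with file/line;
                          // -release build: writes just past the end and, as BCS
                          // notes, more often than not won't AV
}
```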
Jun 19 2007
prev sibling parent Yoni Lavi <l_yoni yahoo.com> writes:
Don Clugston Wrote:

 Walter Bright wrote:
 Georg Wrede wrote:
 Walter Bright wrote:
 Kristian Kilpi wrote:

 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

Asserts were INVENTED to *avoid segfaults*.

I don't know when assert first appeared. But I first encountered them in the 80's, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.

Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, the debugger would pop you right to where the problem was.

It's not only asserts done in hardware, it's asserts with:

1) zero code size cost
2) zero runtime cost
3) they're there for every pointer dereference
4) they work with the debugger to let you know exactly where the problem is

Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.

True, but forgetting to 'new' a class is an extremely common mistake. The first time I ever used classes in D, I didn't 'new' it (I bet this will happen to almost everyone from a C++ background!). Getting an AV with no line number is pretty off-putting. This remains the #1 situation where I use a debugger. And I hate using debuggers to find silly typos. Getting an assert failure with a line number would be enormously more productive. BTW, the same 'it segfaults anyway' argument could be used to some extent for array bounds checking.

I whole-heartedly agree on the line number complaint; I can't imagine debugging not at source level in this day and age. Though it's always fun to see how well the optimizer handles 8-10 levels of inlining in template methods/classes in my C++ code :) I guess I'm spoiled by MSVC 8, which generates line number info even in release builds.
Jun 20 2007
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 15 de junio a las 21:33 me escribiste:
 Georg Wrede wrote:
Walter Bright wrote:
Kristian Kilpi wrote:

The problem is that

  assert(obj);

does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.


I don't know when assert first appeared. But I first encountered them in the 80's, when the most popular machine for programming was the x86. The x86 had no hardware protection. When you wrote through a NULL pointer, you scrambled the operating system, and all kinds of terrible, unpredictable things ensued. Asserts were used a lot to try and head off these problems.

Enter the 286. What a godsend it was to develop in protected mode, when if you accessed a NULL pointer you got a seg fault instead of a scrambled system. Nirvana! What was even better, the debugger would pop you right to where the problem was.

It's not only asserts done in hardware, it's asserts with:

1) zero code size cost
2) zero runtime cost
3) they're there for every pointer dereference
4) they work with the debugger to let you know exactly where the problem is

Seg faults are not an evil thing, they're there to help you. In fact, I'll often *deliberately* code them in so the debugger will pop up when it hits them.

OTOH, wasn't assert included as a language construct so it can throw an exception, giving the user the option to continue with the execution and try to fix the situation? I find this very useful for applications where high availability is crucial; I can always catch any exception (even an assertion) and try to continue working.

Sure you can handle a segfault with a signal handler, but the handling code must be separated from the point of failure, making it almost impossible to take a meaningful action.

--
LUCA - Leandro Lucarella - Usando Debian GNU/Linux Sid - GNU Generation
------------------------------------------------------------------------
E-Mail / JID: luca lugmen.org.ar
GPG Fingerprint: D9E1 4545 0F4B 7928 E82C 375D 4B02 0FE0 B08B 4FB2
GPG Key: gpg --keyserver pks.lugmen.org.ar --recv-keys B08B4FB2
------------------------------------------------------------------------
Peperino teaches us that we must make offerings of wine if we want to
obtain the reward of the middle part of the void.
	-- Peperino Pómoro
Jun 21 2007
parent Jascha Wetzel <firstname mainia.de> writes:
Leandro Lucarella wrote:
 Sure you can handle a segfault with a signal handler, but the handling
 code must be separated from the point of failure, making almost impossible
 to take a meaningful action.

you can also catch an AV with an exception handler.
Jun 21 2007
prev sibling parent reply "Stewart Gordon" <smjg_1998 yahoo.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:f4s0i8$1vk8$1 digitalmars.com...
 Kristian Kilpi wrote:
 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

There are two problems with this.

Firstly, this'll only work if the class actually has an invariant, and this invariant actually accesses object members (as opposed to merely checking static stuff).

Secondly, given a segfault or AV, how's one supposed to pinpoint the cause? An assert error, OTOH, has a filename and line number attached. It's the same with the more general request of null reference checking - people want an error class that unambiguously indicates dereferencing a null pointer and which gives an immediate clue of where the problem is.

Stewart.
Jun 17 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Stewart Gordon wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:f4s0i8$1vk8$1 digitalmars.com...
 Kristian Kilpi wrote:
 The problem is that

   assert(obj);

 does not first check if 'obj' is null.

Yes it does, it's just that the hardware does the check, and gives you a seg fault exception if it is null.

There are two problems with this. Firstly, this'll only work if the class actually has an invariant, and this invariant actually accesses object members (as opposed to merely checking static stuff).

Not true. The mere act of looking up whether an object's class has an invariant requires access to the .classinfo, causing a segfault. (I posted the code earlier in this thread)
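The code Frits refers to isn't quoted here, but the effect he describes is easy to reproduce (a sketch; the exact runtime entry point that performs the lookup differs between compiler releases):

```d
class Empty { }   // note: no invariant declared at all

void main()
{
    Empty e;      // null reference
    assert(e);    // still a segfault, not an assert failure: before any
                  // invariant could run, the runtime reads e.classinfo
                  // through the null reference to see if one exists
}
```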
 Secondly, given a segfault or AV, how's one supposed to pinpoint the 
 cause? An assert error, OTOH, has a filename and line number attached.  
 It's the same with the more general request of null reference checking - 
 people want an error class that unambiguously indicates dereferencing a 
 null pointer and which gives an immediate clue of where the problem is.

According to Walter: compile with debug info and run in a debugger. However, I dislike this option.

The first reason I have for disliking this is that it requires the person seeing the error to have a debug build and run it in a debugger (requiring a debugger be installed). I can see this being troublesome in situations where the person experiencing the error isn't the developer (or even a tester). I'd say segfaults in deployed applications would be the worst kind, yet those are least likely to be run as a debug build in a debugger...

Secondly, what about applications where the segfault doesn't occur deterministically? (e.g. a multithreaded application) Or if the exact situation is just hard to repeat? (e.g. interaction with other systems like databases whose state changes between runs[1])

I'd very much prefer it if the first time something like this went wrong the application would display as much information as possible, without requiring debug info + debugger and without requiring the fault be duplicated in a new run of the application.

[1]: And might be hard to roll back to the exact state that caused the error, for instance because they have multiple clients (that must perhaps continue to run during fault isolation?) and/or are multithreaded themselves.
Jun 17 2007
prev sibling next sibling parent "Kristian Kilpi" <kjkilpi gmail.com> writes:
On Thu, 14 Jun 2007 11:55:39 +0300, Kristian Kilpi <kjkilpi gmail.com> wrote:
 This issue has been proposed before. Well, I think it's the time suggest
 it again...

 The problem is that

    assert(obj);

 does not first check if 'obj' is null. It just executes the object's
 invariants.
 So, one should usually write instead:

    assert(obj !is null);
    assert(obj);


 In addition, if you write something like this

    assert(val > 0 && obj);

 , then it's checked that 'obj' is not null (instead of running its
 invariants).


 I propose that the two previous cases should be combined.
 This won't break any existing code. Actually, it should make it more bug
 free.

 That is, if an object is used inside an assert (anywhere inside it),
 then first it's checked that the object is not null, and then its
 invariants are run:

    assert(obj);  //== "assert(obj !is null && obj.runInvariants());"

    assert(val > 0 && obj);  //== "assert(val > 0 && obj !is null &&
 obj.runInvariants());"

Well, actually, if the object's invariants will be executed whenever the object is used as a boolean value inside an assert, it *could* break existing code: some asserts could fail when they shouldn't. Ok, that should not happen commonly though. Hm, maybe some 'obj.invariantAssert()' syntax should/could be used instead. I don't know.
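The `obj.invariantAssert()` Kristian muses about is his hypothetical syntax, but something close can be written as a library helper today (a sketch; note the file/line reported on failure would point into the helper rather than at the call site):

```d
void invariantAssert(T)(T obj)
{
    assert(obj !is null);  // explicit null check first
    assert(obj);           // then run the class invariant
}
```

It would be called as `invariantAssert(obj)` wherever the null-check-plus-invariant combination is wanted.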
Jun 14 2007
prev sibling parent reply Ary Manzana <ary esperanto.org.ar> writes:
Why would you want to check an object's invariant? Isn't the compiler 
doing that before and after you execute a public method? What's the use 
for it?

Kristian Kilpi escribió:
 
 This issue has been proposed before. Well, I think it's the time suggest 
 it again...
 
 The problem is that
 
   assert(obj);
 
 does not first check if 'obj' is null. It just executes the object's 
 invariants.

Jun 15 2007
parent reply Georg Wrede <georg nospam.org> writes:
((Reversing another top-post))

Ary Manzana wrote:
 Kristian Kilpi escribió:
 
 This issue has been proposed before. Well, I think it's the time 
 suggest it again...
 
 The problem is that
 
 assert(obj);
 
 does not first check if 'obj' is null. It just executes the
 object's invariants.


 Why would you want to check an object's invariant? Isn't the compiler
 doing that before and after you execute a public method? What's the
 use for it?

First: checking invariants is (most normally) a runtime exercise. Normally the situations that lead to a risk of "breaking" the invariants are due to circumstances the programmer didn't think of at first hand, which naturally suggests a runtime check, usually with Real Use Case Data.

(Ahh, not for Ary, but for general explaining: object (or more to the point, instance) invariants are not variables that stay the same. Not even variables that stay the same before and after method calls. Rather, they are _logical_ properties of the instance that the instance should want to maintain, even if not explicitly mentioned in the algorithms or source code.

An example: We have a class that tracks hours and minutes charged from customers. Hours worked are added, but for convenience, you can add the whole day and then subtract coffee breaks and lunches. The invariant for the class HoursWorked would then be

    !(minutes < 0 || minutes >= 60)

The invariant is kind of a consistency check between the values of the instance variables. So, an invariant is any function (of one or more of the variables of the instance) that has to resolve to True _whenever_ evaluated between method calls. E.g., a class of carpet sizes would have width and length and area, and the invariant would be width*length==area, and possibly also (width>=0 && length>=0).)

Second: invariant checking should be run both before and after any method is run, which makes it a property of non-release code. For that purpose, the invariant section exists in D. (Or will be.)

---

Now, having assert(o) check for the invariants makes for sloppy coding. If the coder is lazy enough to not want to write

    assert(myFancyObject.checkInvariants())

and wants to instead only write

    assert(myFancyObject)

then it should be expected he'd be stupid enough to make un-called-for shortcuts elsewhere too.
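Georg's HoursWorked example, written as a D class with an invariant block (a sketch; the field and method names are assumed for illustration):

```d
import std.stdio;

class HoursWorked
{
    int hours;
    int minutes;

    // Logical consistency condition, checked (in non-release builds)
    // on entry to and exit from every public method.
    invariant
    {
        assert(minutes >= 0 && minutes < 60);
        assert(hours >= 0);
    }

    void add(int h, int m)
    {
        minutes += m;
        hours += h + minutes / 60;
        minutes %= 60;
    }
}

void main()
{
    auto w = new HoursWorked;
    w.add(7, 45);
    w.add(0, 30);
    writefln("%s:%02d", w.hours, w.minutes);
}
```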
At the same time the rest of us have (as it seems) learned to never write

    if (o) {act;}

and instead write

    if (o !is null) {act;}

which not only contains a double negation, it also is unnecessarily verbose and prone to logical errors. Not to mention, it violates C family heritage -- thanks to a trivial detail, not even related to the issue at hand!
Jun 15 2007
parent reply Ary Manzana <ary esperanto.org.ar> writes:
What I meant was: if the compiler runs the invariants before and after 
each method call... when would you like to explicitly check the 
invariant using assert?

---
SomeObject o = new SomeObject;

// Here the invariant is called automaticaly
o.someMethod();
// Here the invariant is called automaticaly

assert(o); // what's the point? It was done just a second ago.
---

Unless... Unless you modify a field, and then call the invariant to 
check if everything is in place. But that's a smell of bad design.

Georg Wrede escribió:
 ((Reversing another top-post))
 
 Ary Manzana wrote:
 Kristian Kilpi escribió:

 This issue has been proposed before. Well, I think it's the time 
 suggest it again...

 The problem is that

 assert(obj);

 does not first check if 'obj' is null. It just executes the
 object's invariants.


 Why would you want to check an object's invariant? Isn't the compiler
 doing that before and after you execute a public method? What's the
 use for it?

First: checking invariants is (most normally) a runtime excercise. Normally the situations that lead to a risk of "breaking" the invariants are due to circumstances the programmer didn't think of at first hand, which naturally suggest a runtime check, usually with Real Use Case Data.

Jun 15 2007
parent Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Ary Manzana wrote:
 What I meant was: if the compiler runs the invariants before and after 
 each method call... when would you like to explicitly check the 
 invariant using assert?

In the middle of a method called on it?
 ---
 SomeObject o = new SomeObject;
 
 // Here the invariant is called automaticaly
 o.someMethod();
 // Here the invariant is called automaticaly
 
 assert(o); // what's the point? It was done just a second.
 ---
 
 Unless... Unless you modify a field, and then call the invariant to 
 check if everything is in place. But that's a smell of bad design.

If the field was modified externally, I agree. But if the field was modified inside a method of the object, that method might want to check consistency before continuing with other stuff (that presumably doesn't start with calling a public method, or it would still be redundant).
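The mid-method check Frits describes, sketched using Georg's carpet invariant from earlier in the thread (`resize` is an assumed method name; `assert(this)` is the D idiom for running one's own invariant explicitly):

```d
class Carpet
{
    int width, length, area;

    invariant
    {
        assert(width >= 0 && length >= 0);
        assert(area == width * length);
    }

    void resize(int w, int l)
    {
        width = w;
        length = l;
        area = w * l;
        assert(this);  // re-check consistency mid-method, before the work
                       // below -- no public call happens here, so the
                       // automatic entry/exit checks wouldn't cover it
        // ... continue with other stuff using the fields ...
    }
}
```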
Jun 16 2007