
digitalmars.D - recoverable and unrecoverable errors or exceptions

reply "Ben Hinkle" <bhinkle mathworks.com> writes:
I'm looking into the Error/Exception situation in phobos and previous posts
by Walter and others generally argued that Exceptions are recoverable and
Errors are not. I believe there isn't an application-independent definition
of what recoverable means and I would like to pursue an exception class
hierarchy. The distinction in Java is poorly designed and can be covered in D
by subclassing Object directly. The existing class hierarchy in phobos and
user code also seemingly randomly subclasses Error or Exception.

For example the class hierarchy I have in mind looks like:
Object
  OutOfMemory
  AssertionFailure
  Exception
    FileException
    StreamException
    ... etc, all the other exceptions and subclasses ...

where Exception is
class Exception {
    char[] msg;
    Object cause;
    this(char[] msg, Object cause = null);
    void print(); // print this exception and any causes
    char[] toString(); // returns a string summarizing this exception
}
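
For instance, chaining a cause through that constructor might look like this
minimal sketch (FileException as in the tree above; readConfig and the thrown
message are hypothetical, and it assumes the Exception sketch above is fleshed
out with real method bodies):

class FileException : Exception {
    this(char[] msg, Object cause = null) { super(msg, cause); }
}

void readConfig(char[] path) {
    try {
        // ... open and parse the file; suppose a lower-level
        // Exception is thrown somewhere in here ...
        throw new Exception("read past end of stream");
    } catch (Exception e) {
        // rethrow with added context, preserving the original as the cause
        throw new FileException("can't load config " ~ path, e);
    }
}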

For reference see
http://www.digitalmars.com/d/archives/digitalmars/D/6049.html
http://www.digitalmars.com/d/archives/digitalmars/D/9556.html
http://www.digitalmars.com/d/archives/digitalmars/D/10415.html

comments?
Apr 11 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 11 Apr 2005 19:37:21 -0400, Ben Hinkle <bhinkle mathworks.com>  
wrote:
 I'm looking into the Error/Exception situation in phobos and previous  
 posts
 by Walter and others generally argued that Exceptions are recoverable and
 Errors are not.
That seemed to me to be the general consensus.
 I believe there isn't an application-independent definition
 of what recoverable means and I would like to pursue an exception class

I agree.
 The distinction in Java is poorly designed and can be covered in D by  
 subclassing Object
 directly.
So if you want an un-recoverable error you subclass Object and never catch Object directly?
 The existing class heirarchy in phobos and user code also
 seemingly randomly subclasses Error or Exception.
That's the way it seems to me also. That and the docs should note the exceptions/errors thrown by each method/function, but that's a task for after this one.
 For example the class hierachy I have in mind looks like
 Object
   OutOfMemory
   AssertionFailure
   Exception
     FileException
     StreamException
     ... etc, all the other exceptions and subclasses ...
Where does "missing/incorrect parameter" fit in? My feeling is that it's a subclass of Object and not Exception? or in fact should it be handled with an assert statement and thus be an AssertionFailure? But you probably didn't want specifics at this point, so I'll be quiet now. ;)
 where Exception is
 class Exception {
     char[] msg;
     Object cause;
     this(char[] msg, Object cause = null);
     void print(); // print this exception and any causes
     char[] toString(); // string summarizes this exception
 }

 For reference see
 http://www.digitalmars.com/d/archives/digitalmars/D/6049.html
 http://www.digitalmars.com/d/archives/digitalmars/D/9556.html
 http://www.digitalmars.com/d/archives/digitalmars/D/10415.html

 comments?
I think it's a great idea, I believe it needs to be done before D 1.0 and I'd like to help in any way I can. Regan
Apr 11 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 The distinction in Java is poorly designed and can be covered in D by 
 subclassing Object
 directly.
So if you want an un-recoverable error you subclass Object and never catch Object directly?
What do you have in mind as a user-defined unrecoverable error? I was expecting users to subclass Exception (or some subclass of Exception) by convention. The only case that comes to mind where I could see subclassing something else would be to mimic the built-in AssertionFailure by subclassing it and adding some custom behavior. From a technical point of view anyone can throw or catch any Object.
 For example the class hierachy I have in mind looks like
 Object
   OutOfMemory
   AssertionFailure
   Exception
     FileException
     StreamException
     ... etc, all the other exceptions and subclasses ...
Where does "missing/incorrect parameter" fit in? My feeling is that it's a subclass of Object and not Exception? or in fact should it be handled with an assert statement and thus be an AssertionFailure?
I see assertion failures as different from incorrect parameters. An assertion is a statement that must *always* be true no matter how the user has called your function or used your object. If an assertion fails, the internal state of your part of the system is in doubt. By contrast, an incorrect parameter is to be expected. It can get confusing when one starts to consider the user code as part of the same system as the function being called, but then the user code can have asserts to make sure its own internal state is consistent.

To be concrete, I was thinking that ArgumentException would subclass Exception and follow the .Net hierarchy:

  Exception
    ArgumentException
      ArgumentNullException
      ArgumentOutOfRangeException
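
In code, the distinction might look like this minimal sketch (ArgumentException
here is the proposed class, not an existing phobos type, and Counter is a
made-up example):

class ArgumentException : Exception {
    this(char[] msg) { super(msg); }
}

class Counter {
    private int count;

    void add(int n) {
        // a bad argument is *expected* misuse by the caller:
        // throw a recoverable exception
        if (n < 0)
            throw new ArgumentException("n must be non-negative");
        count += n;
        // internal consistency must *always* hold: assert
        assert(count >= 0);
    }
}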
 But you probably didn't want specifics at this point, so I'll be quiet 
 now. ;)
please ask any/all questions. The more the better.
Apr 11 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 11 Apr 2005 21:01:40 -0400, Ben Hinkle <ben.hinkle gmail.com>  
wrote:
 The distinction in Java is poorly designed and can be covered in D by
 subclassing Object
 directly.
So if you want an un-recoverable error you subclass object and never catch Object directly?
What do you have in mind as a user-defined unrecoverable error?
Nothing new, I was looking at:

  OutOfMemory
  AssertionFailure

and thought, for most applications these are treated as unrecoverable errors.
 I was
 expecting users to subclass Exception (or some subclass of Exception) by
 convention.
Agreed, 99% (maybe 100%) of the time. I wondered however whether it was possible for a user to want to create an unrecoverable error, i.e. for their application "Foo" is unrecoverable, if so they can do so, by subclassing Object, and ensuring they never catch(Object), or catch it in main and exit. Thus their application will always fail hard on "Foo".
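
A minimal sketch of that convention (FooFailure and doFoo are hypothetical
names; D lets any class object be thrown):

class FooFailure {             // deliberately NOT an Exception subclass
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

void doFoo() {
    // "Foo" has failed in a way this application treats as unrecoverable
    throw new FooFailure("Foo is unrecoverable");
}

void main() {
    try {
        doFoo();
    } catch (Exception e) {
        // never reached: FooFailure is not an Exception, so ordinary
        // handlers don't intercept it and the uncaught object
        // terminates the program.
    }
}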
 The only case that comes to mind where I could see subclassing
 something else would be to mimic the built-in AssertionFailure by
 subclassing it and adding some custom behavior.
Yeah, to add their own custom unrecoverable error of some sort.
 From a technical point of view anyone can throw or catch any Object.
Yep. I was thinking in terms of a style/convention which would be used to add custom unrecoverable errors.
 For example the class hierachy I have in mind looks like
 Object
   OutOfMemory
   AssertionFailure
   Exception
     FileException
     StreamException
     ... etc, all the other exceptions and subclasses ...
Where does "missing/incorrect parameter" fit in? My feeling is that it's a subclass of Object and not Exception? or in fact should it be handled with an assert statement and thus be an AssertionFailure?
I see assertion failures as different from incorrect parameters. An assertion is a statement that must *always* be true no matter how the user has called your function or used your object. If an assertion fails, the internal state of your part of the system is in doubt. By contrast, an incorrect parameter is to be expected. It can get confusing when one starts to consider the user code as part of the same system as the function being called, but then the user code can have asserts to make sure its own internal state is consistent.
What you say makes sense to me. I've never really used assertions. Where I was getting confused was that I was looking at it backwards, eg.

  foo(5)
  if '5' is an invalid parameter, this will *always* fail.

  foo(<convert input from user to int>);
  this will fail sometimes.

is there any difference in how the calls above should be handled? or should both simply be "ArgumentOutOfRangeException"?
 To be concrete I was thinking that ArgumentException
 would subclass Exception and have the .Net hierarchy
   Exception
     ArgumentException
       ArgumentNullException
       ArgumentOutOfRangeException
Call me lazy but "ArgumentOutOfRangeException" seems like a really long name to use. Apart from that, sounds good. Regan
Apr 11 2005
next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 I was
 expecting users to subclass Exception (or some subclass of Exception) by
 convention.
Agreed, 99% (maybe 100%) of the time. I wondered however whether it was possible for a user to want to create an unrecoverable error, i.e. for their application "Foo" is unrecoverable, if so they can do so, by subclassing Object, and ensuring they never catch(Object), or catch it in main and exit. Thus their application will always fail hard on "Foo".
ok. Seems fine to me.
 The only case that comes to mind where I could see subclassing
 something else would be to mimic the built-in AssertionFailure by
 subclassing it and adding some custom behavior.
Yeah, to add their own custom unrecoverable error of some sort.
 From a technical point of view anyone can throw or catch any Object.
Yep. I was thinking in terms of a style/convention which would be used to add custom unrecoverable errors.
I think if we have a convention something is wrong :-P. Why have a convention for something that is discouraged?
 foo(5)
 if '5' is an invalid parameter, this will *always* fail.

 foo(<convert input from user to int>);
 this will fail sometimes.

 is there any difference in how the calls above should be handled? or 
 should both simply be "ArgumentOutOfRangeException"
Personally I would prefer that ArgumentOutOfRangeException be used for arguments to functions that get checked at the start of the function. User input should get treated more gracefully than throwing an exception IMO:

class InputOutOfRangeException {...}

void bar() {
  barstart:
    int x = ... prompt the user ...;
    if (x == 5) { printf("try again doofus"); goto barstart; }
    foo(x);
}

void foo(int x) {
    if (x == 5)
        throw new ArgumentOutOfRangeException("Caller is a doofus");
}
 To be concrete I was thinking that ArgumentException
 would subclass Exception and have the .Net hierarchy
   Exception
     ArgumentException
       ArgumentNullException
       ArgumentOutOfRangeException
Call me lazy but "ArgumentOutOfRangeException" seems like a really long name to use.
Yeah. I agree. I'm tempted to suggest using Error instead of Exception because it is easier to read and type, but I have a feeling that would send confusing signals to people used to "exceptions" and Exception. There was an earlier thread about names but I only vaguely remember that Exception "won".

Maybe we could shorten these ArgumentExceptions to ArgException or ParamException since it's obvious what Arg and Param mean. So how about:

  ParamException
    ParamNullException
    ParamRangeException
 Apart from that, sounds good.
ok
Apr 11 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 11 Apr 2005 22:52:46 -0400, Ben Hinkle <ben.hinkle gmail.com>  
wrote:
 The only case that comes to mind where I could see subclassing
 something else would be to mimic the built-in AssertionFailure by
 subclassing it and adding some custom behavior.
Yeah, to add their own custom unrecoverable error of some sort.
 From a technical point of view anyone can throw or catch any Object.
Yep. I was thinking in terms of a style/convention which would be used to add custom unrecoverable errors.
I think if we have a convention something is wrong :-P. Why have a convention for something that is discouraged?
Because unless we address the issue (of defining a custom unrecoverable error) people will do it anyway and might do it in some other weird way. At least this way we can explain why it's discouraged, when it might be okay to use it, and how to use it. I'm not suggesting we encourage it, just that we acknowledge its existence and give a suggestion.
 foo(5)
 if '5' is an invalid parameter, this will *always* fail.

 foo(<convert input from user to int>);
 this will fail sometimes.

 is there any difference in how the calls above should be handled? or
 should both simply be "ArgumentOutOfRangeException"
 Personally I would prefer that ArgumentOutOfRangeException be used for
 arguments to functions that get checked at the start of the function. User
 input should get treated more gracefully than throwing an exception IMO:

 class InputOutOfRangeException {...}

 void bar() {
   barstart:
     int x = ... prompt the user ...;
     if (x == 5) { printf("try again doofus"); goto barstart; }
     foo(x);
 }

 void foo(int x) {
     if (x == 5)
         throw new ArgumentOutOfRangeException("Caller is a doofus");
 }
Thanks, that makes sense.
 To be concrete I was thinking that ArgumentException
 would subclass Exception and have the .Net hierarchy
   Exception
     ArgumentException
       ArgumentNullException
       ArgumentOutOfRangeException
Call me lazy but "ArgumentOutOfRangeException" seems like a really long name to use.
Yeah. I agree. I'm tempted to suggest using Error instead of Exception because it is easier to read and type but I have a feeling that would send confusing signals to people used to "exceptions" and Exception. There was an earlier thread about names but I only vaguely remember Exception "won".
I think you're right (on both points).
 Maybe we could shorten these ArgumentExceptions to ArgExceptions or
 ParamException since it's obvious what Arg and Param mean. So how about
      ParamException
        ParamNullException
        ParamRangeException
Just thinking about this, null and out of range are effectively the same thing, as null is "out of range" for a parameter that "cannot be null". So we *could* just combine them. But then, of course, the reason for not combining them is so we can catch them separately:

void foo(int a){}

try {
  foo(5);
} catch(ParamNullException e) {
} catch(ParamRangeException e) {
}

Otherwise we could just say:

class ParamException {
  this(char[] param, char[] problem) {
  }
}

And have it print "ParamException: (%s) is %s", eg:

"ParamException: (a) is out of range"
"ParamException: (a) is null"
"ParamException: (a) is not my favourite number"
"ParamException: (a) is unlucky for some"

<Warning: wacky idea>
It's a pity the class tree isn't more 'obvious'; then we could simply drop the redundant 'Exception' altogether. eg.

Exception
  Parameter
    Null
    OutOfRange

Is this:

try {
} catch(OutOfRange e) {
}

really that opaque that it requires this:

try {
} catch(ParamRangeException e) {
}

?

Of course, it then becomes possible to have collisions within the tree.

Exception
  Aaaaa
    Here
  Bbbbb
    Here

Exception.Aaaaa.Here
Exception.Bbbbb.Here

are both called "Here".
</wacky idea>

Regan
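
A minimal sketch of that combined ParamException (assuming the Exception base
proposed earlier in the thread; the "ParamException:" prefix would come from
the class name when printed):

class ParamException : Exception {
    char[] param;    // offending parameter name
    char[] problem;  // what's wrong with it
    this(char[] param, char[] problem) {
        super("(" ~ param ~ ") is " ~ problem);
        this.param = param;
        this.problem = problem;
    }
}

// throw new ParamException("a", "out of range");
// would print something like: ParamException: (a) is out of range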
Apr 11 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 Maybe we could shorten these ArgumentExceptions to ArgExceptions or
 ParamException since it's obvious what Arg and Param mean. So how about
      ParamException
        ParamNullException
        ParamRangeException
Just thinking about this, null and out of range are effectively the same thing, as null is "out of range" for a parameter that "cannot be null'. So we *could* just combine them.
The .Net doc explains that Range is for things that are non-null but still illegal (or for primitive types out of range). For example if a function takes a pointer to an int and the int must not be 5. I think it would be handy to have both. A null check is probably 75% of all input checking anyway - at least with a project with lots of classes.
 But then, of course, the reason for not combining them is so we can catch 
 them separately:

 void foo(int a){}

 try {
   foo(5);
 } catch(ParamNullException e) {
 } catch(ParamRangeException e) {
 }

 Otherwise we could just say:

 class ParamException {
   this(char[] param, char[] problem) {
   }
 }

 And have it print "ParamException: (%s) is %s", eg:
 "ParamException: (a) is out of range"
 "ParamException: (a) is null"
 "ParamException: (a) is not my favourite number"
 "ParamException: (a) is unlucky for some"
If one doesn't care about the distinction between Null and Range then one can catch ParamException. It's true the string and printed exception would have the more detailed class name.
 <Warning: wacky idea>
 It's a pity the class tree isn't more 'obvious'; then we could simply drop
 the redundant 'Exception' altogether. eg.

 Exception
   Parameter
     Null
     OutOfRange

 Is this:

 try {
 } catch(OutOfRange e) {
 }

 really that opaque that it requires this:

 try {
 } catch(ParamRangeException e) {
 }

 ?

 Of course, it then becomes possible to have collisions within the tree.

 Exception
   Aaaaa
     Here
   Bbbbb
     Here

 Exception.Aaaaa.Here
 Exception.Bbbbb.Here

 are both called "Here".
 </wacky idea>
This was suggested in the naming thread. It isn't that bad since the Here's can be distinguished by package and module names. Now that I think about it, the Exception was used to indicate that it subclasses Exception, which seems useful.

ps - I mistakenly printed out those old threads and now I can sympathize with Walter's statement that removing "dead quotes" from replies is useful - probably 75% or more of the printouts were lazy quoting that added nothing to the thread. One printout was 73 pages long.
Apr 12 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 08:10:10 -0400, Ben Hinkle <ben.hinkle gmail.com>  
wrote:
 Maybe we could shorten these ArgumentExceptions to ArgExceptions or
 ParamException since it's obvious what Arg and Param mean. So how about
      ParamException
        ParamNullException
        ParamRangeException
Just thinking about this, null and out of range are effectively the same thing, as null is "out of range" for a parameter that "cannot be null'. So we *could* just combine them.
The .Net doc explains that Range is for things that are non-null but still illegal (or for primitive types out of range). For example if a function takes a pointer to an int and the int must not be 5. I think it would be handy to have both. A null check is probably 75% of all input checking anyway - at least with a project with lots of classes.
Agreed. Could/should we then make Null a subclass of OutOfRange?
 <Warning: wacky idea>
 It's a pity the class tree isn't more 'obvious' then we could simply  
 drop
 the redundant 'Exception' alltogether. eg.
<snip>
 This was suggested in the naming thread. It isn't that bad since the  
 Here's
 can be distinguished by package and module names. Now that I think about  
 it
 the Exception was used to indicate that it subclasses Exception, which  
 seems
 useful.
Why? I mean if we only have "Exception" and no "Error" class then we can probably say that anything you throw/catch will be a subclass of "Exception".
 ps - I mistakenly printed out those old threads and now I can sympathize
 with
 Walter's statement that removing "dead quotes" from replies is useful -
 probably 75% or more of the printouts were lazy quoting that added  
 nothing
 to the thread. One printout was 73 pages long.
Yeah.. I try to cut replies down, but like to leave anything to which I refer, or respond. Regan
Apr 12 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message
news:opso3kiga523k2f5 nrage.netwin.co.nz...
 On Mon, 11 Apr 2005 21:01:40 -0400, Ben Hinkle
 <ben.hinkle gmail.com>  wrote:
 The distinction in Java is poorly designed and can be covered
 in D by
 subclassing Object
 directly.
So if you want an un-recoverable error you subclass object and never catch Object directly?
What do you have in mind as a user-defined unrecoverable error?
Nothing new, I was looking at:

  OutOfMemory
  AssertionFailure

and thought, for most applications these are treated as unrecoverable errors.
They may be, but that's quite wrong. OutOfMemory is practically unrecoverable, but should not be classed as an unrecoverable exception. Conversely, AssertionFailure is practically recoverable, but most certainly should be classed as unrecoverable. (And the language should mandate and enforce the irrecoverability.)
Apr 11 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 15:16:55 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opso3kiga523k2f5 nrage.netwin.co.nz...
 On Mon, 11 Apr 2005 21:01:40 -0400, Ben Hinkle
 <ben.hinkle gmail.com>  wrote:
 The distinction in Java is poorly designed and can be covered
 in D by
 subclassing Object
 directly.
So if you want an un-recoverable error you subclass object and never catch Object directly?
What do you have in mind as a user-defined unrecoverable error?
Nothing new, I was looking at:

  OutOfMemory
  AssertionFailure

and thought, for most applications these are treated as unrecoverable errors.
They may be, but that's quite wrong. OutOfMemory is practically unrecoverable, but should not be classed as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
 Conversely, AssertionFailure is practically recoverable, but most
 certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
 (And the language
 should mandate and enforce the irrecoverability.)
Disagree. Regan
Apr 11 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 OutOfMemory is practically unrecoverable, but should not be 
 classed
 as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
 Conversely, AssertionFailure is practically recoverable, but most
 certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
Because there has to be a common type that's catchable, and I don't think Object is the appropriate choice.
 (And the language
 should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
Apr 11 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 16:23:06 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 OutOfMemory is practically unrecoverable, but should not be
 classed
 as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
 Conversely, AssertionFailure is practically recoverable, but most
 certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
Because there has to be a common type that's catchable, and I don't think Object is the appropriate choice.
So.. you want:

  Catchable
    Recoverable
      OutOfMemory
    NonRecoverable
      AssertionFailure

?
 (And the language
 should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
Sarcasm is the lowest form of wit.

Why do you need to force a program to terminate? If the programmer wants to continue and can do so, they will, if not, they won't. I see no need to enforce it.

Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opso3x5rhn23k2f5 nrage.netwin.co.nz...
 On Tue, 12 Apr 2005 16:23:06 +1000, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 OutOfMemory is practically unrecoverable, but should not be
 classed
 as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
 Conversely, AssertionFailure is practically recoverable, but 
 most
 certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
Because there has to be a common type that's catchable, and I don't think Object is the appropriate choice.
So.. you want:

  Catchable
    Recoverable
      OutOfMemory
    NonRecoverable
      AssertionFailure
You've changed the names of things - Catchable<=>Throwable, NonRecoverable <=> Unrecoverable <=> Irrecoverable - which is perfectly fine. I'm not allied to them. You're close to what I'm thinking. Using your terms:

  Catchable
    Recoverable
      FileNotFoundException
    NonRecoverable
      AssertionFailure

But, as I said in a previous post, I think the jury's still out on OutOfMemory. I've a strong suspicion it should fall out as being an exception, but I think there's some mileage in discussing whether it might have another 'type', e.g. ResourceExhaustion. Reason being that, though memory exhaustion is in principle recoverable, in practice it is not recoverable (since getting memory to throw/handle is tricky, and may require workarounds) and is also often so unlikely as to make it not worth worrying about.

So, the conversation I'm interested in seeing is whether these characteristics of OutOfMemory are shared by any other resource exhaustion. If so, it might be worth our while to do something different. After all, a file not being present can be a 'normal' (from a user's perspective) runtime condition, whereas memory exhaustion is likely to be seen otherwise, even though both are, in principle, recoverable.

One that comes to mind is threading keys and/or slots. Whether Win32 TLS or PTHREADS TSD, inability to allocate a TSS key, or acquire a slot for an allocated key, is a pretty terminal condition. More so than running out of memory, in fact, although the shutdown might be more graceful.

So, I see the taxonomy as being either (now using my/Ben's names):

  Object             <= not throwable, btw
    Throwable
      Error          <= Unrecoverable exceptions
        ContractViolation
          Assertion
      Exception
        FileNotFoundException
        XMLParseException
      Exhaustion
        MemoryExhaustion
        TSSKeyExhaustion

or, if we just lump exhaustions in with exceptions:

  Object             <= not throwable, btw
    Throwable
      Error          <= Unrecoverable exceptions
        ContractViolation
          Assertion
      Exception
        FileNotFoundException
        XMLParseException
        MemoryExhaustionException
        TSSKeyExhaustionException
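
Under a taxonomy like this, a top-level handler might look like the following
sketch (run() is a hypothetical stand-in for the real program, and Error and
Exception are the proposed classes, not necessarily today's phobos ones):

import std.c.stdio;

void run() {
    // hypothetical application body
    throw new Exception("demo failure");
}

int main() {
    try {
        run();
    } catch (Error e) {
        // contract violation: the program has violated its design.
        // Log what we can and terminate; do not attempt to continue.
        printf("fatal: %.*s\n", e.toString());
        return 1;
    } catch (Exception e) {
        // ordinary runtime failure: report it and recover as appropriate
        printf("error: %.*s\n", e.toString());
    }
    return 0;
}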
 (And the language
 should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
Sarcasm is the lowest form of wit.
Maybe so, but unsubstantiated opinion is worth precisely nothing. It's an inconsiderate waste of other people's time.
 Why do you need to force a program to terminate? If the programmer 
 wants  to continue and can do so, they will, if not, they wont. I 
 see no need to  enforce it.
There are several flaws suggested in your understanding of the issue by just those two sentences.

First, "Why do you need to force a program to terminate?". This one's simple: The program must be forced to terminate because it has violated its design. (Let me digress for a moment and confess that before I made the leap into CP-grok-ville this never seemed simple, or at least never cut and dried.)

What's at issue here is not that the program has encountered a runtime error, or even (the very poorly termed) "unexpected situation". What a contract violation states is _precisely_ and _absolutely_ that a program has violated its design.

An example from my recent work was that one of the server components, which served as an intermediate between three other components, should never have data in its input queues when not in a fully connected state. That is the expression of the design. (FYI: two of the other components with which it communicated were sockets based, whereas the third was a message queue.) When we put it into production all was well, until one component outside our control (another, much bigger (<g>) company was responsible for that) started sending up information through the message queue when it shouldn't. What happened was that our process immediately cacked itself, and printed out a nice lot of information about "VIOLATION: downstream XYZ queue not empty in state ABC". Thus, this exposed that the implementation of the component was violating the design assumptions. The result of that immediate failure was an almost immediate realisation of the problem - in about 2 minutes - and a very rapid fix - about 90 mins work, IIRC.

Now consider if the violation had been recoverable, which would certainly have been possible, since there was no memory corruption and, indeed, no corruption of any of the internals of the component which violated. These systems produce a *lot* of logging output, and it's perfectly possible that a non-terminating log entry would have been missed. Furthermore, the error would have manifested considerably later, after the now out-of-date messages from the downstream queue were passed up through our components and out into the outside world. Given the fact that several companies were frantically involved in updating networking protocols and architectures simultaneously, that bug could have lain dormant for days, weeks, or even months. That would have cost both customers and the client a lot of money, and also cost us some reputation.

Second: "If the programmer wants to continue and can do so, they will, if not, they won't". Who's the programmer? Do you mean the user? Do you think that the end users of (even a substantial proportion of) systems are programmers? That suggests a significantly skewed perspective and, I'm afraid to say, implies that your experience is of programming for programmers, rather than more "commercial" activities. There's nothing wrong with that, to be sure, but it can't provide a base of experience suitable for making fundamental decisions on language design issues with very wide ramifications.

But let's give you the benefit of the doubt, and assume you misspoke, and actually meant: "If the [user] wants to continue and can do so, they will, if not, they won't". This one's even more simple than the first: There often isn't a user, or the user is significantly divorced from the error, separated by threads/processes/machines/networks.

Even when there is a user, I suggest it's highly undesirable to leave it to them, highly ignorant of the specifics of your program (and, probably, of computers / software engineering in general), to have to attempt to handle something that's now operating outside the bounds of its design. It's analogous to selling someone a car and, when its brakes fail, telling them to drive on at their own risk.

Notwithstanding all the foregoing, there's a much more fundamental, albeit little recognised, issue. Computers make a strict interpretation of what we, the programmers, instruct them to do. Now a contract violation, as I've said, is pure and simple a statement that your instructions to the computer (or to the compiler, if you will) are fundamentally flawed. Since computers do not have redundancy, intuition, instincts, sixth-sense, a friend to call, or any other "higher order" functioning, there are exactly two things a process can do in that circumstance. It can operate in a fundamentally flawed manner, or it can stop. THERE ARE NO OTHER OPTIONS.

And _that_ is the crux of the matter. Unless and until a programmer grasps that concept, they can't grok CP. There's no shame in not having done so. I did 10 years of commercial programming, much of it very successful, thank you very much, before I saw the light (and I could name you some super famous chaps who're still a bit wooly on it). But I can absolutely aver that programming using strict CP results in faster development time, *far* more robust product, and happier clients and developers alike.

Matthew
Apr 12 2005
next sibling parent reply xs0 <xs0 xs0.com> writes:
Not that I disagree, in principle, but there are still cases where you'd 
want to recover from an "unrecoverable" error. Consider an environment 
like Eclipse, which is built totally out of plugins, and there can be 
hundreds of them. What you're saying is that whenever one of those 
faults, the whole system should abort, which is definitely not what I'd 
want it to do.

I mean, when I do something like inspect a value in a debugger, and the 
value is 150MB and the debugger runs out of memory, I don't want it to 
stop, just because it can't show me a var (which can just as easily 
manifest as an assertion error or whatever)...

Enforcing unrecoverability would be a mistake, if you ask me, although I 
totally support an exception hierarchy that would make "unrecoverable" 
errors something people wouldn't try to catch normally.


xs0


Matthew wrote:
<snip>
Apr 12 2005
next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 11:52:53 +0200, xs0 <xs0 xs0.com> wrote:
 Not that I disagree, in principle, but there are still cases where you'd  
 want to recover from an "unrecoverable" error. Consider an environment  
 like Eclipse, which is built totally out of plugins, and there can be  
 hundreds of them. What you're saying is that whenever one of those  
 faults, the whole system should abort, which is definitely not what I'd  
 want it to do.

 I mean, when I do something like inspect a value in a debugger, and the  
 value is 150MB and the debugger runs out of memory, I don't want it to  
 stop, just because it can't show me a var (which can just as easily  
 manifest as an assertion error or whatever)...

 Enforcing unrecoverability would be a mistake, if you ask me, although I  
 totally support an exception hierarchy that would make "unrecoverable"  
 errors something people wouldn't try to catch normally.
My sentiments exactly. Regan
 Matthew wrote:
 <snip>
Apr 12 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"xs0" <xs0 xs0.com> wrote in message 
news:d3g5os$27cj$1 digitaldaemon.com...
 Not that I disagree, in principle, but there are still cases where 
 you'd want to recover from an "unrecoverable" error. Consider an 
 environment like Eclipse, which is built totally out of plugins, 
 and there can be hundreds of them. What you're saying is that 
 whenever one of those faults, the whole system should abort, which 
 is definitely not what I'd want it to do.
With respect - I'm going to use "with respect" a lot in this thread, and I'm going to mean it, because I fear I am going to insult/patronise - I think you miss the point. It's easy to do, of course, since this irrecoverability is challenging stuff: when I read this paragraph I thought, "ah yes, that is an exception". But of course it's not, nor are any of the many other examples I've been given over the last year or so to counter the principle.

1. As soon as your editor encounters a CP violation, it is, in principle and in practice, capable of doing anything, including losing your work. The only justifiable action, once you've saved (if possible) and shut down, is to disable the offending plug-in. (I know this because I use an old version of Visual Studio. <G>)

2. The picture you've painted fails to take into account the effect of the extremely high intolerance of bugs in "irrecovering code". Basically, they're blasted out of existence in an extremely short amount of time - the code doesn't violate its design, or it doesn't get used, period - and the result is high-quality systems.
 I mean, when I do something like inspect a value in a debugger, 
 and the value is 150MB and the debugger runs out of memory, I 
 don't want it to stop, just because it can't show me a var (which 
 can just as easily manifest as an assertion error or whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.

btw, at no time have I _ever_ said that processes should just stop. There should always be some degree of logging of the flaw and, where appropriate, an attempt made to shut down gracefully and with as little collateral damage as possible.
 Enforcing unrecoverability would be a mistake, if you ask me, 
 although I totally support an exception hierarchy that would make 
 "unrecoverable" errors something people wouldn't try to catch 
 normally.
I don't understand the second half of that sentence, so let's deal with the first. You say enforcing irrecoverability would be a mistake, but don't suggest any reason. Therefore I must assume reasons based on previous parts of the post, which I've shown are either not taking into account the principles of invalid software, or appear to mistake runtime exceptions for contract violations.

So, given that:

  - irrecoverability applies only to contract violations, i.e. code that is detected to have violated its design via runtime constructs inserted by its author(s) for that purpose
  - an invalid process cannot, by definition, perform validly. It can only stop, or perform against its design.
  - "Crashing Early" in practice results in extremely high quality code, and rapid turnaround of bug diagnosis and fixes
  - D cannot support opt-in/library-based irrecoverability; to have it, it must be built in

do you still think it would be a mistake? If so, can you explain why?
Apr 12 2005
parent reply xs0 <xs0 xs0.com> writes:
Matthew wrote:
 "xs0" <xs0 xs0.com> wrote in message 
 news:d3g5os$27cj$1 digitaldaemon.com...
 
Not that I disagree, in principle, but there are still cases where 
you'd want to recover from an "unrecoverable" error. Consider an 
environment like Eclipse, which is built totally out of plugins, 
and there can be hundreds of them. What you're saying is that 
whenever one of those faults, the whole system should abort, which 
is definitely not what I'd want it to do.
With respect - I'm going to use "with respect" a lot in this thread, and I'm going to mean it, because I fear I am going to insult/patronise - I think you miss the point. It's easy to do, of course, since this irrecoverability is challenging stuff: when I read this paragraph I thought, "ah yes, that is an exception". But of course it's not, nor are any of the many other examples I've been given over the last year or so to counter the principle.
Well, to clear things up, I was mainly responding to your statement:
 First, "Why do you need to force a program to terminate?".
 This one's simple: The program must be forced to terminate because
 it has violated its design.
 1. As soon as your editor encounters a CP violation, it is, in 
 principle and in practice, capable of doing anything, including 
 using your work. The only justifiable action, once you've saved (if 
 possible) and shut down, is to disable the offending plug-in. (I 
 know this because I use an old version of Visual Studio. <G>)
But why would it be mandatory to shut down? If the core of the app is able to disable the plugin without shutting down, I'd say that's better (assuming, of course, that internal consistency can be ensured, which it can be in many cases, and can't be in many other cases; if it is not clear whether the app is in a consistent state, I agree shutting down completely is the best thing to do)
 2. The picture you've painted fails to take into account the effect 
 of the extremely high-intolerance of bugs in "irrecovering code". 
 Basically, they're blasted out of existence in an extremely short 
 amount of time - the code doesn't violate its design, or it doesn't 
 get used. period - and the result is high-quality systems.
I don't understand the first sentence, sorry..
I mean, when I do something like inspect a value in a debugger, 
and the value is 150MB and the debugger runs out of memory, I 
don't want it to stop, just because it can't show me a var (which 
can just as easily manifest as an assertion error or whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.
Like I said, out of memory can just as well manifest itself later as a broken contract or whatever. That is completely off the point, though; I'm trying to address your claim that some errors should force the app to shut down.
 btw, at no time have I _ever_ said that processes should just stop. 
 There should always be some degree of logging of the flaw and, where 
 appropriate, an attempt made to shutdown gracefully and with as 
 little collateral damage as possible.
I've seen in your other responses that you don't mean uncatchable when you say unrecoverable. I'm not quite sure what you do mean, then. I guess it would be a throwable that can be caught, but that doesn't stop it from propagating ahead?
 So, given that:
     - irrecoverability applies only to contract violations, i.e. 
 code that is detected to have violated its design via runtime 
 constructs inserted by its author(s) for that purpose
     - an invalid process cannot, by definition, perform validly. It 
 can only stop, or perform against its design.
True, but a faulty part of the application should not be taken as if the whole application is faulty. Like I said, if one is able to disable just that part of the app, I think that is better than forcing the app to shut down.
     - "Crashing Early" in practice results in extremely high quality 
 code, and rapid turn around of bug diagnosis and fixes
     - D cannot support opt-in/library-based irrecoverability; to 
 have it, it must be built in
 do you still think it would be a mistake?
 
 If so, can you explain why?
OK, it is obviously desired in some cases, so I agree it should be supported by the language, BUT, none of the built-in exceptions should then be unrecoverable. xs0
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 1. As soon as your editor encounters a CP violation, it is, in 
 principle and in practice, capable of doing anything, including 
 using your work. The only justifiable action, once you've saved 
 (if possible) and shut down, is to disable the offending plug-in. 
 (I know this because I use an old version of Visual Studio. <G>)
But why would it be mandatory to shut down? If the core of the app is able to disable the plugin without shutting down, I'd say that's better (assuming, of course, that internal consistency can be ensured, which it can be in many cases, and can't be in many other cases; if it is not clear whether the app is in a consistent state, I agree shutting down completely is the best thing to do)
Alas, this is quite wrong and misses the point, although you do hint at it. If the plug-in has violated its design, then any further action performed by it or by any other part of the process is, in principle, indeterminate and outside the bounds of correct behaviour. (Now, of course it is true that in many cases you can carry on for a while, even a long while, but you run a non-zero risk of ending in a nasty crash.)

I get the strong impression that people keep expanding the scope of this principle into 'normal' exceptions. If a plug-in runs out of memory, or can't open a file, or any other normal runtime error, then that's *not* a contract violation, and in no way implies that the hosting process must shut down in a timely fashion. It is only the case when a contract violation has occurred, because only that is a signal from the code's designer to the code's user (or rather the runtime) that the plug-in is now invalid.

So, looking back at your para: "If the code of the app is able to disable the plugin without shutting down" applies to Exceptions (and Exhaustions), whereas "if it is not clear whether the app is in a consistent state, I agree shutting down completely is the best thing to do" applies to Errors. These two things hold, of course, since they are the definitions of Exceptions and Errors.
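To make the distinction concrete, here's a minimal sketch in D of how a host might treat the two kinds of throwable. PlugIn, disablePlugin and ContractViolation are made-up names, and it assumes the D of the time, where any Object subclass can be thrown:

import std.c.stdio;

class ContractViolation         // hypothetical: subclasses Object directly
{
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

abstract class PlugIn           // hypothetical plug-in interface
{
    abstract void process();
}

void runPlugin(PlugIn p)
{
    try
    {
        p.process();
    }
    catch (Exception e)
    {
        // a normal runtime failure: log it, disable the plug-in, carry on
        printf("plug-in failed: %.*s\n", e.toString());
        // disablePlugin(p);    // hypothetical
    }
    // note: no catch for ContractViolation - it unwinds past this
    // frame and takes the process down, which is the point being made
}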
 2. The picture you've painted fails to take into account the 
 effect of the extremely high intolerance of bugs in "irrecovering 
 code". Basically, they're blasted out of existence in an 
 extremely short amount of time - the code doesn't violate its 
 design, or it doesn't get used, period - and the result is 
 high-quality systems.
I don't understand the first sentence, sorry..
CP violations are never tolerated, which means they get fixed _very_ quickly.
I mean, when I do something like inspect a value in a debugger, 
and the value is 150MB and the debugger runs out of memory, I 
don't want it to stop, just because it can't show me a var (which 
can just as easily manifest as an assertion error or whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.
Like I said, out of memory can just as well manifest itself later as a broken contract or whatever.
No, it cannot. Actually, there are occasions where contracts are used to assert the availability of memory - e.g. where you've preallocated a large block which you _know_ is large enough and are then allocating from it - but that's more an expedient (ab)use of CP, rather than CP itself.
 That is completely off the point, though, I'm trying to address 
 your claim that some errors should force the app to shut down.
All errors should force the app to shut down. No exceptions should, in principle, force the app to shut down, although in practice it's appropriate to do so (e.g. when you've got no memory left).
 btw, at no time have I _ever_ said that processes should just 
 stop. There should always be some degree of logging of the flaw 
 and, where appropriate, an attempt made to shutdown gracefully 
 and with as little collateral damage as possible.
I've seen in your other responses that you don't mean uncatchable when you say unrecoverable. I'm not quite sure what you do mean, then. I guess it would be a throwable that can be caught, but that doesn't stop it from propagating ahead?
I mean, one uses catch clauses in the normal way, to effect logging, and even, perhaps, to try and save one's work, but at the end of that catch clause the error is rethrown if you've not done so manually, or another Error type is rethrown.
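In code, the shape being described might look like this; AssertionFailure here is a hypothetical error class (thrown as a plain Object subclass), and the save step is elided:

import std.c.stdio;

class AssertionFailure
{
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

void doWork()
{
    throw new AssertionFailure("invariant violated");
}

void example()
{
    try
    {
        doWork();
    }
    catch (AssertionFailure e)
    {
        printf("fatal: %.*s\n", e.msg);  // log the flaw
        // saveWork();                   // hypothetical: salvage what we can
        throw e;                         // then let the error keep unwinding
    }
}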
 So, given that:
     - irrecoverability applies only to contract violations, i.e. 
 code that is detected to have violated its design via runtime 
 constructs inserted by its author(s) for that purpose
     - an invalid process cannot, by definition, perform validly. 
 It can only stop, or perform against its design.
True, but a faulty part of the application should not be taken as if the whole application is faulty.
Dead wrong. Since 'parts' of the application share an address space, a faulty part of an application is the very definition of the whole application being faulty! This is a crucial point, and a sine qua non for discussions on this topic.

Once any part of the process has violated its design, then it's possible for it to do anything, including writing an arbitrary number of bytes to an arbitrary memory location. The veracity of this cannot be denied, otherwise how would we be in the middle of an epidemic of viruses and worms?

But other languages that don't have pointers are just as dead in the water. If you've a comms server written in Java - I know, I know, but let's assume for pedagogical purposes that you might - and a plug-in violates its contract, then it can still do anything, like kill threads, write out security information to the console, email James Gosling a nasty letter, delete a crucial file, corrupt a database.

There's a good reason that modern operating systems separate memory spaces, so that corrupted applications have minimal impact on each other. Now that largely solves the memory corruption problem, although not completely, but it still doesn't isolate all possible effects from violating programs. In principle a contract violation in any thread should cause the shutdown of the process, the machine, and the entire population of machines connected to it through any and all networks. The reason we don't is not, as might be suggested, because that's stupid - after all, virus propagation demonstrates why this is a real concern - but because (i) contract violation support is inadequate and/or elided from release builds, and, more importantly, (ii) the cost benefit of shutting down several billion computers millions of times a day is obviously not a big winner. Similarly, in most cases of contract violation, it's not sensible to shut down the system when a violation in one process happens. But no-one should be fooled into thinking that that means that we're proof against such actions.

If you have a Win9x system, and you do debugging on it, I'm sure you'll experience with reasonable regularity the unwanted effects of broken processes on each other through that tiny little 2GB window of shared memory. <g> Even on NT systems, which I've used rather than 9x for many years now, I experience corruptions between processes every week or two. (I tend to have a *lot* of applications going, each representing the current thread of work of a particular project ... not sensible I know, but still not outside the bounds of what a multitasking OS should eat for breakfast.)

No, it is at the level of the process that things should be shut down, because all the parts of a process have an extreme level of intimacy in that they share memory. All classes, mem-maps, data blocks, pointers, stack, variables, singletons - you name it! - they all share the same memory space, and an errant piece of code in any part of that process can screw up any other part.

So (hopefully) you see that it is impossible to ever state (with even practical certainty) that "a faulty part of the application should not be taken as if the whole application is faulty".
     - "Crashing Early" in practice results in extremely high 
 quality code, and rapid turn around of bug diagnosis and fixes
     - D cannot support opt-in/library-based irrecoverability; to 
 have it, it must be built in
 do you still think it would be a mistake?

 If so, can you explain why?
OK, it is obviously desired in some cases, so I agree it should be supported by the language, BUT, none of the built-in exceptions should then be unrecoverable.
Well again, I get the feeling that you think I've implied that a wide range of things should be unrecoverable. I have not, and perhaps I should say so explicitly now: AFAIK, only contract violations should be Errors (i.e. unrecoverable), since only they are a message from the author(s) of the code to say when it's become invalid. Only the author can know. Not the users of the libraries, not the users of any programs, not you or me, not even the designer of the language can make that assertion. All other exceptions, including those already in D that are called errors, should *not* be unrecoverable.
Apr 12 2005
parent reply xs0 <xs0 xs0.com> writes:
But why would it be mandatory to shut down? If the core of the app 
is able to disable the plugin without shutting down, I'd say 
that's better (assuming, of course, that internal consistency can 
be ensured, which it can be in many cases, and can't be in many 
other cases; if it is not clear whether the app is in a consistent 
state, I agree shutting down completely is the best thing to do)
Alas, this is quite wrong and misses the point, although you do hint at it. If the plug-in has violated its design, then any further action performed by it or by any other part of the process is, in principle, indeterminate and outside the bounds of correct behaviour. (Now, of course it is true that in many cases you can carry on for a while, even a long while, but you run a non-zero risk of ending in a nasty crash.)
Well, what you said also misses the point.. In principle, any action performed by a plug-in (or core) is indeterminate and whatever, it doesn't matter whether it has already produced a CP error or not. I fail to see what the big difference is between assert(a!==null) and if (a is null) throw new IllegalArgumentException() They both prevent the code that follows from running in the case where the supplied parameter is null, so you can't run it with invalid parameters, which is the whole purpose of those two statements. Why is it so hard for you to admit that it is possible to have CP errors in code that cannot corrupt your app? I mean, if you give a plugin what is basically a read-only view of your data, it shouldn't be able to corrupt it. Sure it can, if it wants to, but that has nothing to do with CP, exceptions, errors or whatever.
 It is only 
 the case when a contract violation has occured, because only that is 
 a signal from the code's designer to the code's user (or rather the 
 runtime) that the plug-in is now invalid.
That's true. However, if I have a plugin that displays tooltips on some objects, I totally don't care if it encountered a CP violation or not, if it works, great, if it doesn't, too bad, no tooltips anymore in this session, I'll restart when I want to. But I really don't see a case for forcing the app, which otherwise works perfectly, to shut down.

I mean, if you don't trust your plugins go ahead and abort. But if you take measures to prevent the plugins from corrupting your data in the first place (like a read-only view), and are able to disable them in runtime, and they're not critical, what's the point? You act as if code was written like

int thrownCP = 0;

void someFunc()
{
    if (thrownCP) {
        // delete all files
        // overwrite all memory, except this function
        // try to launch all US nukes
    } else {
        if (something) {
            thrownCP = 1;
            throw new CPError();
        }
        // do normal stuff
    }
}
 So, looking back at your para "If the code of the app is able to 
 disable the plugin without shutting down" applies to Exceptions (and 
 Exhaustions), whereas "if it is not clear whether the app is in a 
 consistent state, I agree shutting down completely is the best thing 
 to do" applies to Errors. These two things hold, of course, since 
 they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
 CP violations are never tolerated, which means they get fixed _very_ 
 quickly.
Great, I just don't think everybody should be forced to work that way.
I mean, when I do something like inspect a value in a debugger, 
and the value is 150MB and the debugger runs out of memory, I 
don't want it to stop, just because it can't show me a var (which 
can just as easily manifest as an assertion error or whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.
Like I said, out of memory can just as well manifest itself later as a broken contract or whatever.
No, it cannot.
Sure it can:

int[] allocIfYouCan(int size)
{
    try {
        return new int[size];
    } catch (OutOfMemory ignored) {
        return null;
    }
}

void doSomething(int[] arr)
{
    assert(arr!==null);
}

doSomething(allocIfYouCan(10000000));

Obviously, allocIfYouCan() is not a good idea, but it can still happen.
 Dead wrong. Since 'parts' of the application share an address space, 
 a faulty part of an application is the very definition of the whole 
 application being faulty!
 
 This is a crucial point, and a sine qua non for discussions on this 
 topic.
 
 [snip]
 
 But other languages that don't have pointers are just as dead in the 
 water. If you've a comms server written in Java - I know, I know, 
 but let's assumefor pedagogical purposes that you might - and a 
 plug-in violates its contract, then it can still do anything, like 
 kill threads, write out security information to the console, email 
 James Gosling a nasty letter, delete a crucial file, corrupt a 
 database.
Bah, this is just ridiculous. It can kill threads only if it was written to kill threads. If that is the case, no CP will help you. Same goes for everything else. A CP violation doesn't somehow magically transform the code into something that sends out email..
  [snip]
 So (hopefully) you see that it is impossible to ever state
 (with even practical certainty) that "a faulty part of the
 application should not be taken as if the whole application
 is faulty".
No, I don't.. There are parts and there are parts. You're saying that even the tiniest error (but only if it was specified in a contract) should abort the app, I'm just saying that in some cases it's possible for that to be overreacting.
OK, it is obviously desired in some cases, so I agree it should be 
supported by the language, BUT, none of the built-in exceptions 
should then be unrecoverable.
Well again, I get the feeling that you think I've implied that a wide range of things should be unrecoverable. I have not, and perhaps I should say so explicitly now: AFAIK, only contract violations should be Errors (i.e. unrecoverable), since only they are a message from the author(s) of the code to say when it's become invalid.
How has the code become invalid exactly? The code doesn't change when it violates a contract. For another example, let's say you have a DB-handling class with an invariant that the connection is always open to the DB (and that is a totally normal invariant). If you reboot the DB server, which is better - to kill all applications that use such a class out there, or for them to reconnect?
 Only the author can know. Not the users of the libraries,
 not the users of any programs, not you or me, not even the designer 
 of the language can make that assertion.
Yup, and the author should have the option of using D's CP constructs without making his app die on every single error that happens. xs0
Apr 13 2005
next sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Hmmm... just glancing over this, I have to wonder.  Is there more than one
interpretation of "CP" at work here?  I see some talk of what sounds like
contract programming violations, and other talk of what could pass as
co-processor errors or code page faults, or what-ever.  Maybe a little
clarification is in order... then again, maybe I'm mistaken.

TZ

"xs0" <xs0 xs0.com> wrote in message news:d3itaa$1dtc$1 digitaldaemon.com...
But why would it be mandatory to shut down? If the core of the app
is able to disable the plugin without shutting down, I'd say
that's better (assuming, of course, that internal consistency can
be ensured, which it can be in many cases, and can't be in many
other cases; if it is not clear whether the app is in a consistent
state, I agree shutting down completely is the best thing to do)
Alas, this is quite wrong and misses the point, although you do hint at it. If the plug-in has violated its design, then any further action performed by it or by any other part of the process is, in principle, indeterminate and outside the bounds of correct behaviour. (Now, of course it is true that in many cases you can carry on for a while, even a long while, but you run a non-zero risk of ending in a nasty crash.)
Well, what you said also misses the point.. In principle, any action performed by a plug-in (or core) is indeterminate and whatever, it doesn't matter whether it has already produced a CP error or not. I fail to see what the big difference is between

assert(a!==null)

and

if (a is null)
    throw new IllegalArgumentException()

They both prevent the code that follows from running in the case where the supplied parameter is null, so you can't run it with invalid parameters, which is the whole purpose of those two statements.

Why is it so hard for you to admit that it is possible to have CP errors in code that cannot corrupt your app? I mean, if you give a plugin what is basically a read-only view of your data, it shouldn't be able to corrupt it. Sure it can, if it wants to, but that has nothing to do with CP, exceptions, errors or whatever.
 It is only
 the case when a contract violation has occured, because only that is
 a signal from the code's designer to the code's user (or rather the
 runtime) that the plug-in is now invalid.
That's true. However, if I have a plugin that displays tooltips on some objects, I totally don't care if it encountered a CP violation or not, if it works, great, if it doesn't, too bad, no tooltips anymore in this session, I'll restart when I want to. But I really don't see a case for forcing the app, which otherwise works perfectly, to shut down.

I mean, if you don't trust your plugins go ahead and abort. But if you take measures to prevent the plugins from corrupting your data in the first place (like a read-only view), and are able to disable them in runtime, and they're not critical, what's the point? You act as if code was written like

int thrownCP = 0;

void someFunc()
{
    if (thrownCP) {
        // delete all files
        // overwrite all memory, except this function
        // try to launch all US nukes
    } else {
        if (something) {
            thrownCP = 1;
            throw new CPError();
        }
        // do normal stuff
    }
}
 So, looking back at your para "If the code of the app is able to
 disable the plugin without shutting down" applies to Exceptions (and
 Exhaustions), whereas "if it is not clear whether the app is in a
 consistent state, I agree shutting down completely is the best thing
 to do" applies to Errors. These two things hold, of course, since
 they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
*snip*
Apr 13 2005
prev sibling next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Well, what you said also misses the point.. In principle, any 
 action performed by a plug-in (or core) is indeterminate and 
 whatever, it doesn't matter whether it has already produced a CP 
 error or not.
The difference is that the designer of the code has designated some things to be design violations. Of course other bugs can and will occur. No-one's ever claimed that using CP magically ensures total coverage of all possible errors.
 I fail to see what the big difference is between

 assert(a!==null)

 and

 if (a is null)
     throw new IllegalArgumentException()

 They both prevent the code that follows from running in the case 
 where the supplied parameter is null, so you can't run it with 
 invalid parameters, which is the whole purpose of those two 
 statements.
You're correct. There is no difference.
 Why is it so hard for you to admit that it is possible to have CP 
 errors in code that cannot corrupt your app?
In principle it is not so. In practice it often is (albeit you cannot know). I've said that a hundred times. If you want me to say that, in principle, there are CP errors that cannot corrupt your app, then I won't because it ain't so. There's a beguiling thought that preconditions can be classed in that way, but it doesn't pan out. (And I'm not going to blather on about that because I think everyone's heartily sick of the debate by now. I know I am <g>)
 I mean, if you give a plugin what is basically a read-only view of 
 your data, it shouldn't be able to corrupt it. Sure it can, if it 
 wants to, but that has nothing to do with CP, exceptions, errors 
 or whatever.
At the risk of being called an arrogant whatever by fragile types, I don't think you're understanding the concept correctly.

The case you give is an interesting one. I've recently added a string_view type to the STLSoft libraries, which effectively gives slices to C++ (albeit one must needs be aware of the lifetime of the source memory, unlike in D). So, a string_view is a length + pointer. One of the invariants we would have in such a class is that if length is non-zero, the pointer must not be NULL. Now, if we used that in a plug-in, we might see something like:

class IPlugIn
{
    virtual int findNumStringsInBlock(char const *s) const = 0;
    virtual void release() = 0;
};

class MyPlugIn
    : public IPlugIn
{
private:
    ::stlsoft::basic_string_view<char> m_view;
};

bool MyPlugIn_Entry( < some initialisation parameters >
                   , char const *pMemToManipulate
                   , size_t cbMemToManipulate
                   , IPlugIn **ppPlugIn)
{
    . . .
    *ppPlugIn = new MyPlugIn(pMemToManipulate, cbMemToManipulate);
    . . .
}

So the container application loads the plug-in via an entry point MyPlugIn_Entry() in its dynamic lib, and gets back an object expressing the IPlugIn interface. At some point it'll then ask the plug-in to count the number of strings in the block with which it was initialised, and on which it is holding a view via the m_view instance.

Let's say, for argument's sake, that the ctor for string_view didn't do the argument check, but that the find() method does. Let's further say that pMemToManipulate was passed a NULL by the containing application, but cbMemToManipulate was passed 10. Hence, when the containing app calls findNumStringsInBlock(), the invariant will fire:

int MyPlugIn::findNumStringsInBlock(char const *s) const
{
    . . .
    m_view.find(s); // Invariant fires here!
    . . .
}

Now MyPlugIn is not changing any of the memory it's been asked to work with. But the design of one of the components has been violated. (Of course, we'd have a check in the plug-in entry point, but that's just implementation detail. In principle the violation is meaningful at whatever level it happens.)
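For comparison, roughly the same invariant expressed with D's built-in contract support. This is a hypothetical sketch, not the STLSoft code; in D a slice carries its own length, and the class invariant is checked on entry/exit of public members, so the (null, 10) case fires at construction rather than waiting for find():

class CharView
{
    char[] data;

    invariant
    {
        // if the view is non-empty, the pointer must not be null
        assert(data.length == 0 || data.ptr !is null);
    }

    this(char* p, size_t len)
    {
        data = p[0 .. len];
        // the invariant runs on exit from this ctor, so passing
        // (null, 10) is caught here, before find() is ever called
    }

    int find(char c)
    {
        foreach (int i, char ch; data)
            if (ch == c)
                return i;
        return -1;
    }
}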
 So, looking back at your para "If the code of the app is able to 
 disable the plugin without shutting down" applies to Exceptions 
 (and Exhaustions), whereas "if it is not clear whether the app is 
 in a consistent state, I agree shutting down completely is the 
 best thing to do" applies to Errors. These two things hold, of 
 course, since they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
That's its definition. Nothing to do with me. There's no choice about it. If you're outside the design, you're outside the design. Just because the program _may_ operate, in a given instance, correctly, from a particular perspective, does not make it in-design. This is a crucial point, and there's no going forward without it.
I mean, when I do something like inspect a value in a debugger, 
and the value is 150MB and the debugger runs out of memory, I 
don't want it to stop, just because it can't show me a var 
(which can just as easily manifest as an assertion error or 
whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.
Like I said, out of memory can just as well manifest itself later as a broken contract or whatever.
No, it cannot.
Sure it can:

int[] allocIfYouCan(int size)
{
    try {
        return new int[size];
    } catch (OutOfMemory ignored) {
        return null;
    }
}

void doSomething(int[] arr)
{
    assert(arr!==null);
}

doSomething(allocIfYouCan(10000000));

Obviously, allocIfYouCan() is not a good idea, but it can still happen.
No, what that actually is, is a programmer mistake in applying functions with incompatible contracts.
  [snip]
 So (hopefully) you see that it is impossible to ever state
 (with even practical certainty) that "a faulty part of the
 application should not be taken as if the whole application
 is faulty".
No, I don't.. There are parts and there are parts. You're saying that even the tiniest error (but only if it was specified in a contract) should abort the app, I'm just saying that in some cases it's possible for that to be overreacting.
Sigh. As I've said a hundred times, in principle: no, in practice: sometimes. But there's no determinism about the sometimes. That's the point.
OK, it is obviously desired in some cases, so I agree it should 
be supported by the language, BUT, none of the built-in 
exceptions should then be unrecoverable.
Well again, I get the feeling that you think I've implied that a wide range of things should be unrecoverable. I have not, and perhaps I should say so explicitly now: AFAIK, only contract violations should be Errors (i.e. unrecoverable), since only they are a message from the author(s) of the code to say when it's become invalid.
How has the code become invalid exactly? The code doesn't change when it violates a contract.
It is invalid because it is operating, or has been asked to operate, outside its design. And we know that *only* because the author of the program has said so, by putting in the tests.
 For another example, let's say you have a DB-handling class with 
 an invariant that the connection is always open to the DB (and 
 that is a totally normal invariant). If you reboot the DB server, 
 which is better - to kill all applications that use such class out 
 there, or for them to reconnect?
If that is the design, then yes. However, I would say that's a bad design. If the DB server can be rebooted, then that's a realistic runtime condition, and should therefore be dealt with as an exception.
 Only the author can know. Not the users of the libraries,
 not the users of any programs, not you or me, not even the 
 designer of the language can make that assertion.
Yup, and the author should have the option of using D's CP constructs without making his app die on every single error that happens.
Which author? Library, or client code? That's going to be the tricky challenge for us, if we go for separate +CP / -CP libraries.
Apr 13 2005
next sibling parent reply xs0 <xs0 xs0.com> writes:
I fail to see what the big difference is between

assert(a!==null)

and

if (a is null)
    throw new IllegalArgumentException()

They both prevent the code that follows from running in the case 
where the supplied parameter is null, so you can't run it with 
invalid parameters, which is the whole purpose of those two 
statements.
You're correct. There is no difference.
So why should the first one be treated differently than the second one?
Why is it so hard for you to admit that it is possible to have CP 
errors in code that cannot corrupt your app?
In principle it is not so. In practice it often is (albeit you cannot know).
What's with the principle/practice distinction you're constantly making? What percentage of the apps in your computer were coded in principle? I believe the result is 0%. And my whole point is that you can know in some cases.

For the umpteenth time, I'm just saying the coder should have the choice on how to handle errors. That includes both having unstoppable throwables and the option to handle CP violations, so he is able to choose whichever he thinks is better. If you ask me, instead of this whole argument, we should be persuading Walter to include a critical_assert() that throws an unstoppable something - then we'll both be happy. (And BTW, as soon as it's catchable, one can start a new thread and never exit the catch block in the first one, so there goes your unrecoverability anyway.)
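A library-level sketch of that suggestion, with made-up names throughout; as noted above, true unquenchability would still need language support, since any catch (Object) can quench this:

class CriticalError            // hypothetical: meant never to be quenched
{
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

void critical_assert(bool cond, char[] msg = "critical contract violation")
{
    if (!cond)
        throw new CriticalError(msg);
}

// usage: behaves like assert, but signals "do not recover" by convention
// critical_assert(x == 1, "x must be 1 here");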
I mean, if you give a plugin what is basically a read-only view of 
your data, it shouldn't be able to corrupt it. Sure it can, if it 
wants to, but that has nothing to do with CP, exceptions, errors 
or whatever.
[snip]
 Now MyPlugIn is not changing any of the memory it's been asked to 
 work with. But the design of one of the components has been 
 violated. (Of course, we'd have a check in the plug-in entry point, 
 but that's just implementation detail. In principle the violation is 
 meaningful at whatever level it happens.)
Didn't you just prove my point? Even though a CP violation occurred, the state of the app is exactly the same as before, except that after the violation, the app knows the plugin is faulty and can disable it, so in a way, the state is actually better than before.
Well, who are you to decide that a CP error means an inconsistent 
state? Sure, it does in many cases, but not necessarily always. 
All I'm saying is that one should have a choice.
That's its definition. Nothing to do with me.
Well, OK, in the strictest sense it is by definition inconsistent.
 There's no choice about it. If you're outside the design, you're 
 outside the design. Just because the program _may_ operate, in a 
 given instance, correctly, from a particular perspective, does not 
 make it in-design.
True.
 This is a crucial point, and there's no going forward without it.
I agree again, if you're outside the design, you're definitely not inside the design.
 [snip]
No, what that actually is, is a programmer mistake in applying functions with incompatible contracts.
Of course it is, but it's still an OutOfMemory manifesting as a CP violation.
 Sigh. As I've said a hundred times, in principle: no, in practice: 
 sometimes. But there's no determinism about the sometimes. That's 
 the point.
In principle, there is no determinism, but in practice there can be. You're claiming there can never be determinism and that's what I don't agree with.
Only the author can know. Not the users of the libraries,
not the users of any programs, not you or me, not even the 
designer of the language can make that assertion.
Yup, and the author should have the option of using D's CP constructs without making his app die on every single error that happens.
Which author? Library, or client code?
At least the one that wrote the contracts, I guess?

And, for a few more examples:

I. Say you have some image processing app, you give it the name of a directory, and it goes through all the .tif files and processes them in some manner. There are two components involved, one scans the dir for all files ending with .tif and passes each to the other one, that does the actual processing. Say the processing takes long, so you run it before you go home from work. You see that it processed a few files and leave. The processing component has a contract that only valid .tif files can be passed to it. The 15th of 5000 files is corrupt. Will you be happier in the morning if

a) the app processed 4999 files and left you a note about the error
b) the app processed 14 files and left you a note about the error

II. You have a drawing app. You draw a single image for weeks and weeks and the day before the deadline, you're done. You make a backup copy and, finally, go home. You leave your computer switched on, but there is a fire later, and the sprinkler system destroys your computer, so you only have the backup. Each shape in the image is stored in a simple format - TYPE_OF_SHAPE NUMBER_OF_COORDS COORD[]. But, on the CD, a single byte was written wrong. The app is made of several components, one of which reads files, and another one draws them. The second one has a contract that the TYPE_OF_SHAPE can only be 1 or 2, but the wrong byte has the value 3. The reader doesn't care, because it is still able to read the file. Will you be happier if

a) the app will display/print your image, except for the wrong shape
b) you'll get to see that there is a bug (in principle, the reader should check the type, but it doesn't), but you'll have to wait 2 months for an update, before you can open your image again (of course, missing the deadline along the way, and wasting those weeks of effort)

III. You're using a spreadsheet and type in numbers for a few hours. There's an FDIV bug in your processor. Somehow you manage to hit it, causing the contract of the division code to fail, because it checks the results. Will you be happier if you

a) get an error message, and the only thing you can do is to save the data, including the expression that causes the fault (so the next time you open it, the same thing will happen again)
b) you get an error message stating that a result may be wrong, but are able to undo the last thing you typed?

xs0
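Example II in code, under the thread's assumptions: the reader is where the bad byte should be rejected, as ordinary bad data, so the drawer's contract never comes into it. FileFormatException and the one-byte field sizes are made up for the sketch:

class FileFormatException
{
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

struct Shape
{
    ubyte type;     // TYPE_OF_SHAPE: only 1 and 2 are defined
    int[] coords;
}

Shape readShape(ubyte[] data)
{
    Shape s;
    s.type = data[0];
    if (s.type != 1 && s.type != 2)
        // recoverable: the app can skip this shape and draw the rest
        throw new FileFormatException("bad TYPE_OF_SHAPE");
    // ... decode NUMBER_OF_COORDS and COORD[] here ...
    return s;
}

void drawShape(Shape s)
in
{
    assert(s.type == 1 || s.type == 2);  // the drawer's contract still stands
}
body
{
    // ...
}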
Apr 13 2005
parent "Mayank Sondhi" <msondhi gmail.com> writes:
I'm a newbie to D here, but as far as I have been able to tell from this 
thread, there are various overlapping arguments here.
Whilst talking of Eclipse and its plugin based design, we are looking at a 
loosely coupled system. This by definition would have contracts with a 
greater tolerance for errors (NOT exceptions).
On strictly contract bound designs the contracts would generate more 
irrecoverable errors because the contract would bind "harder".
Again not knowing much about D, I would say modifiers or some other way can
be found to make sure that contracts can be "tagged" with the degree of
violation of the contract that the system can sustain.
The point being here that it is up to the library or framework designer to
decide what part of a contract is critical for his audience, and what part
can be accepted. Although there is still a great deal of argument about C++'s
const modifiers, they were a solution that enabled this kind of tagging of
contracts (at least up to some level).

Just my 2 cents and hope it helps
Apr 14 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 Why is it so hard for you to admit that it is possible to have CP errors 
 in code that cannot corrupt your app?
In principle it is not so. In practice it often is (albeit you cannot know). I've said that a hundred times. If you want me to say that, in principle, there are CP errors that cannot corrupt your app, then I won't because it ain't so. There's a beguiling thought that preconditions can be classed in that way, but it doesn't pan out. (And I'm not going to blather on about that because I think everyone's heartily sick of the debate by now. I know I am <g>)
I'm not sure how to differentiate principle and practice. For example:

int return1() { return 0; } // written by an intern

void user_code()
{
    int x = return1();
    assert( x == 1 );
    printf("I got a 1.\n");
}

Now the code above will assert every time, and obviously no real code would look exactly like that, but it's not uncommon to see asserts check something that, if it fails, can be easily recovered from. Is returning 0 from a function that says it returns 1 a contract violation? Yes. Is it a big deal? It depends.
Apr 16 2005
parent reply Sean Kelly <sean f4.ca> writes:
In article <d3s81s$isa$1 digitaldaemon.com>, Ben Hinkle says...
I'm not sure how to differentiate principle and practice. For example
  int return1(){ return 0; } // written by an intern
  void user_code() {
    int x = return1();
    assert( x == 1 );
    printf("I got a 1.\n");
  }
Now the code above will assert every time and obviously no real code would 
look exactly like that but it's not uncommon to see asserts check something 
that if it fails can be easily recovered from. Is returning 0 from a 
function that says it returns 1 a contract violation? yes. Is it a big deal? 
it depends. 
This is somewhat of a slippery slope. To me, preconditions guarantee the criteria for which a function should succeed, and postconditions guarantee that nothing unpredictable has happened in generating that result. One of the terrific things about contracts is that I don't need to wrap every function call in an error recovery framework to have a reasonable guarantee that it has done what it's supposed to.

Can many contract violations be recovered from in practice? Certainly. But to build a language on that assumption, IMO, violates the core principles of DBC. At the very least, I would like to have the option that contract violations are unrecoverable (this would be trivial with my suggestion of throwing auto classes, as it's largely a library solution anyway). Then it would be up to the client to specify recoverability based on project requirements. Though this has me leaning towards specifying recovery on a per-thread basis... perhaps a property of the Thread class that defaults based on a global flag?

Sean
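A rough sketch of that per-thread idea, with entirely made-up names (nothing like this exists in phobos); the policy object would hang off each Thread and be seeded from a global default:

bool defaultContractsFatal = true;   // the global flag

class ContractPolicy
{
    bool fatal;
    this() { fatal = defaultContractsFatal; }
}

class RecoverableViolation   { char[] msg; this(char[] m) { msg = m; } }
class UnrecoverableViolation { char[] msg; this(char[] m) { msg = m; } }

// what the contract-failure hook might do, given the current thread's policy
void onContractViolation(ContractPolicy p, char[] msg)
{
    if (p.fatal)
        throw new UnrecoverableViolation(msg);  // meant to unwind to the top
    else
        throw new RecoverableViolation(msg);    // client code may catch this
}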
Apr 17 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Perhaps there should be something more allowed as a body to the assert
statement.  For example, if the statement is simply...

assert(x==1);

then the compiler could consider that to be a case where if x is not equal to 1,
the program should crash and tell you what went wrong rather than risk
running under unexpected conditions.

On the other hand, if it said...

assert(x==1){x=2; writeln("assigning the value ",2," to the variable named
x.");}

This way, if the code was compiled as a release version, no error message would
be displayed at all (unless a compiler option specified to show "all" assert
errors in the release version) but it would be displayed in the debug version.
Then the assert would cause x to get a new value and a line of text written to
the current or default output device.

This would allow a sort of "soft assert" where the error handling would
actually be the body of the assert statement.

Okay, did that make any sense at all?  Sorry if it didn't.
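(For what it's worth, that idea can be approximated in library code today with a delegate carrying the recovery body. softAssert is a made-up name, and a real language feature could of course elide the check entirely in release builds:

import std.c.stdio;

void softAssert(bool cond, void delegate() recover)
{
    if (!cond)
    {
        debug printf("soft assert failed\n");  // only noisy in debug builds
        recover();                             // run the recovery body
    }
}

void example()
{
    int x = 0;
    softAssert(x == 1, delegate void() {
        x = 2;
        printf("assigning the value 2 to the variable named x.\n");
    });
}
)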

TZ

"Sean Kelly" <sean f4.ca> wrote in message
news:d3udnj$29cd$1 digitaldaemon.com...
 In article <d3s81s$isa$1 digitaldaemon.com>, Ben Hinkle says...
I'm not sure how to differentiate principle and practice. For example
  int return1(){ return 0; } // written by an intern
  void user_code() {
    int x = return1();
    assert( x == 1 );
    printf("I got a 1.\n");
  }
Now the code above will assert every time and obviously no real code would
look exactly like that but it's not uncommon to see asserts check something
that if it fails can be easily recovered from. Is returning 0 from a
function that says it returns 1 a contract violation? yes. Is it a big deal?
it depends.
This is somewhat of a slippery slope. To me, preconditions guarantee the criteria for which a function should succeed, and postconditions guarantee that nothing unpredictable has happened in generating that result. One of the terrific things about contracts is that I don't need to wrap every function call in an error recovery framework to have a reasonable guarantee that it has done what it's supposed to. Can many contract violations be recovered from in practice? Certainly. But to build a language on that assumption, IMO, violates the core principles of DBC. At the very least, I would like to have the option that contract violations are unrecoverable (this would be trivial with my suggestion of throwing auto classes, as it's largely a library solution anyway). Then it would be up to the client to specify recoverability based on project requirements. Though this has me leaning towards specifying recovery on a per-thread basis... perhaps a property of the Thread class that defaults based on a global flag? Sean
Apr 19 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Wed, 13 Apr 2005 12:46:36 +0200, xs0 wrote:


[snip]
 I fail to see what the big difference is between
 
 assert(a!==null)
 
 and
 
 if (a is null)
      throw new IllegalArgumentException()
 
 They both prevent the code that follows from running in the case where 
 the supplied parameter is null, so you can't run it with invalid 
 parameters, which is the whole purpose of those two statements.
The first is designed to detect mistakes made by the *programmer*, and the second is designed to detect mistakes in the (user-provided) data.
 Why is it so hard for you to admit that it is possible to have CP errors 
 in code that cannot corrupt your app? 
It could be just a matter of interpretation, but Contract Programming is a technique whose purpose is to detect mistakes made by the programmer rather than mistakes made outside of the program - such as invalid data or environmental constraints (e.g. out of RAM).
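That distinction in miniature, as a sketch; it assumes std.file's exists() and FileException roughly as in the phobos of the time:

import std.file;

void process(char[] userPath, int internalIndex)
{
    // a mistake in user-provided data: an ordinary runtime condition, throw
    if (!std.file.exists(userPath))
        throw new FileException(userPath, "no such file");

    // a mistake by the programmer: a broken assumption, assert
    assert(internalIndex >= 0);

    // ...
}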
 I mean, if you give a plugin what 
 is basically a read-only view of your data, it shouldn't be able to 
 corrupt it. Sure it can, if it wants to, but that has nothing to do with 
 CP, exceptions, errors or whatever.
 
 
 It is only 
 the case when a contract violation has occured, because only that is 
 a signal from the code's designer to the code's user (or rather the 
 runtime) that the plug-in is now invalid.
That's true. However, if I have a plugin that displays tooltips on some objects, I totally don't care if it encountered a CP violation or not, if it works, great, if it doesn't, too bad, no tooltips anymore in this session, I'll restart when I want to. But I really don't see a case for forcing the app, which otherwise works perfectly, to shut down.
I've gotten confused a bit by your use of application and plug-in. I would have thought that if a plug-in fails (CP error) then *only* the plug-in should fail and not the application that is hosting it. [snip]
 So, looking back at your para "If the code of the app is able to 
 disable the plugin without shutting down" applies to Exceptions (and 
 Exhaustions), whereas "if it is not clear whether the app is in a 
 consistent state, I agree shutting down completely is the best thing 
 to do" applies to Errors. These two things hold, of course, since 
 they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
Who should have the choice? The user of the application or the designer of the application? If the designer says "if such-and-such occurs then the application must stop", then why should the user always have the right to override the design?
 
 CP violations are never tolerated, which means they get fixed _very_ 
 quickly.
Great, I just don't think everybody should be forced to work that way.
Let's pretend that a bridge designer has placed a detection mechanism in the bridge such that it reports an error if the load on the bridge before a vehicle crosses is different from the load after the vehicle crosses. If it then reports such an error, should you as a user of the bridge, have the right to disregard the warning of a possibly unsafe bridge, or should you be forced to wait until the bridge is declared safe again?
I mean, when I do something like inspect a value in a debugger, 
and the value is 150MB and the debugger runs out of memory, I 
don't want it to stop, just because it can't show me a var (which 
can just as easily manifest as an assertion error or whatever)...
Out of memory is not an error, it's an exception, so that's just not an issue.
Like I said, out of memory can just as well manifest itself later as a broken contract or whatever.
No, it cannot.
Sure it can:

int[] allocIfYouCan(int size)
{
    try {
        return new int[size];
    } catch (OutOfMemory ignored) {
        return null;
    }
}

void doSomething(int[] arr)
{
    assert(arr!==null);
}

doSomething(allocIfYouCan(10000000));

Obviously, allocIfYouCan() is not a good idea, but it can still happen.
But by using the 'assert', the designer/coder is saying that *any time* a null is returned, the application is broken and thus needs to be fixed before it can be used. If the coder does not mean that, then the coder needs to avoid using 'assert' in this type of situation. Use a simple runtime 'if' statement instead.

[snip]
  [snip]
> So (hopefully) you see that it is impossible to ever state
> (with even practical certainty) that "a faulty part of the
> application should not be taken as if the whole application
> is faulty".

No, I don't.. There are parts and there are parts. You're saying that even the tiniest error (but only if it was specified in a contract) should abort the app, I'm just saying that in some cases it's possible for that to be overreacting.
Yes, it may be possible for an application to continue running to the user's satisfaction after a program contract has been broken. However, one can never be absolutely sure in every case. And because it is a broken program contract, it means that the designer of the application *wants* the program to stop. It is her program after all ;-)
OK, it is obviously desired in some cases, so I agree it should be 
supported by the language, BUT, none of the built-in exceptions 
should then be unrecoverable.
Well again, I get the feeling that you think I've implied that a wide range of things should be unrecoverable. I have not, and perhaps I should say so explicitly now: AFAIK, only contract violations should be Errors (i.e. unrecoverable), since only they are a message from the author(s) of the code to say when it's become invalid.
How has the code become invalid exactly? The code doesn't change when it violates a contract.
Because the designer said that if such a situation ever happens, then it is because the programmer has made a mistake. And thus, the rest of the program may contain errors that are not so obvious. It is a red-flag that says that the application is suspect and needs to be corrected or at least re-validated before continuing. -- Derek Parnell Melbourne, Australia http://www.dsource.org/projects/build 17/04/2005 8:50:45 AM
Apr 16 2005
parent reply xs0 <xs0 xs0.com> writes:
Derek Parnell wrote:
 On Wed, 13 Apr 2005 12:46:36 +0200, xs0 wrote:
 
 
 [snip]
 
I fail to see what the big difference is between

assert(a!==null)

and

if (a is null)
     throw new IllegalArgumentException()

They both prevent the code that follows from running in the case where 
the supplied parameter is null, so you can't run it with invalid 
parameters, which is the whole purpose of those two statements.
The first is designed to detect mistakes made by the *programmer*, and the second is designed to detect mistakes in the (user-provided) data.
Hmm, I'd disagree. If a function does such a check, I'd say its contract is obviously that it doesn't work with nulls. It doesn't matter whether that's in the in block or start of body, and it doesn't matter whether it's an assert or an if(). So, both are mistakes by the programmer, because the function shouldn't be called with null..
Why is it so hard for you to admit that it is possible to have CP errors 
in code that cannot corrupt your app? 
It could be just a matter of interpretation, but Contract Programming is a technique whose purpose is to detect mistakes made by the programmer rather than mistakes made outside of the program - such as invalid data or environmental constraints (e.g. out of RAM).
I agree with that, I just don't agree that in case of _any_ contract violation, the app should be forced to terminate (which is what Matthew wants).
 I've gotten confused a bit by your use of application and plug-in. I would
 have thought that if a plug-in fails (CP error) then *only* the plug-in
 should fail and not the application that is hosting it.
Me too, that's my whole point :) But Matthew seems to think otherwise..
So, looking back at your para "If the code of the app is able to 
disable the plugin without shutting down" applies to Exceptions (and 
Exhaustions), whereas "if it is not clear whether the app is in a 
consistent state, I agree shutting down completely is the best thing 
to do" applies to Errors. These two things hold, of course, since 
they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
Who should have the choice? The user of the application or the designer of the application? If the designer says "if such-and-such occurs then the application must stop", then why should the user always have the right to override the design?
Hmm, now I see how that sentence can be confusing :) "you" refers to Matthew, while "one" refers to "any coder". The answer is definitely designer.
 Let's pretend that a bridge designer has placed a detection mechanism in
 the bridge such that it reports an error if the load on the bridge before a
 vehicle crosses is different from the load after the vehicle crosses. If it
 then reports such an error, should you as a user of the bridge, have the
 right to disregard the warning of a possibly unsafe bridge, or should you
 be forced to wait until the bridge is declared safe again?
Perhaps I should be, but if everybody was forced to stay off it (for example, by some shield from Star Trek, so you really can't get on), how will they ever fix it? :)
int[] allocIfYouCan(int size)
{
     try {
         return new int[size];
     } catch (OutOfMemory ignored) {
         return null;
     }
}

void doSomething(int[] arr)
{
    assert(arr!==null);
}

doSomething(allocIfYouCan(10000000));

Obviously, allocIfYouCan() is not a good idea, but it can still happen.
But by using the 'assert', the designer/coder is saying that *any time* a null is returned, the application is broken and thus needs to be fixed before it can be used.
I don't think all asserts assert that the application is broken. For example:

Image i = ImageLoader.load("c:\\foo.tif");
assert(i);
processImage(i);

I'd say that the assertion is not "if i is ever null, kill the whole application", it's just "if i is ever null, I don't want to continue".

But that was not the point. I tried to make the point that not all CP violations should mean that the application should be forced to terminate. A consequence, if one agrees, is that assert should not throw exceptions that cannot be "quenched", because all CP support in D depends on assert. OTOH, I totally support critical_assert, a new type of exception, or whatever, to make the apps, that actually do require unquenchable exceptions, implementable.
 If the coder does not mean that, then the coder needs to avoid using
 'assert' in this type of situation. Use a simple runtime 'if' statement
 instead.
Perhaps, but like I said, I don't see any difference between assert(a) and if(!a) throw.. Obviously, they will throw different exceptions, but both should be quenchable.
No, I don't.. There are parts and there are parts. You're saying that 
even the tiniest error (but only if it was specified in a contract) 
should abort the app, I'm just saying that in some cases it's possible 
for that to be overreacting.
Yes, it may be possible for an application to continue running to the user's satisfaction after a program contract has been broken. However, one can never be absolutely sure in every case. And because it is a broken program contract, it means that the designer of the application *wants* the program to stop. It is her program after all ;-)
If the designer wants the app to stop, that's ok. It's also ok if she doesn't want the app to stop. Matthew says it's never ok to not stop, and I disagree..
How has the code become invalid exactly? The code doesn't change when it 
violates a contract.
Because the designer said that if such a situation ever happens, then it is because the programmer has made a mistake. And thus, the rest of the program may contain errors that are not so obvious. It is a red-flag that says that the application is suspect and needs to be corrected or at least re-validated before continuing.
It may or it may not contain errors. Sometimes you don't know, but sometimes you also do. Especially when dealing with user data (which is almost always :), it's often hard to predict all the ways that data can be wrong (I work in GIS; you wouldn't believe how many _different_ types of errors 600 million polygons can contain :). If that principle is acknowledged throughout the code (by making data structures robust, updating transactionally, always handling exceptions, and whatnot), one can implement the app so that it is totally able to recover from any 'error' it detects later on. Now, I don't see why such an app should be forced to terminate, if someone decided to assert (a<100000000) somewhere and that happens to not be true every now and then? xs0
Apr 16 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 17 Apr 2005 07:36:44 +0200, xs0 wrote:

 Derek Parnell wrote:
 On Wed, 13 Apr 2005 12:46:36 +0200, xs0 wrote:
 
 
 [snip]
 
I fail to see what the big difference is between

assert(a!==null)

and

if (a is null)
     throw new IllegalArgumentException()

They both prevent the code that follows from running in the case where 
the supplied parameter is null, so you can't run it with invalid 
parameters, which is the whole purpose of those two statements.
The first is designed to detect mistakes made by the *programmer*, and the second is designed to detect mistakes in the (user-provided) data.
Hmm, I'd disagree.
So let's see if I got it straight. You disagree that the purpose of the 'assert' mechanism is to detect mistakes made by the programmer, as opposed to detecting bad data.
If a function does such a check, I'd say its contract 
 is obviously that it doesn't work with nulls. It doesn't matter whether 
 that's in the in block or start of body, and it doesn't matter whether 
 it's an assert or an if(). So, both are mistakes by the programmer, 
 because the function shouldn't be called with null..
It may be true that the function should never be called with a null. However, if that is the case then assert should also not be used as the detection/reporting mechanism. This is because, as a general rule, assert should be used on output data rather than on input data. Assert should be used to detect errors in *logic* rather than errors in *data*. Assert should be used to validate that the programmer's implementation of an algorithm is correct. Argument validation is looking at the inputs to an algorithm and not its results. I say 'assert' should be used because if there is a logic error then the program should shut down as safely as it can and not continue, and the 'assert' is one such mechanism that has that behaviour.
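A small example of that rule in D's contract syntax: the input is screened with an ordinary test, while the contract (here an out-contract) checks the algorithm's *result*. isqrt is a made-up function, written in the era's syntax with body:

int isqrt(int n)
out (r)
{
    // the contract validates the output of the algorithm
    assert(r * r <= n && (r + 1) * (r + 1) > n);
}
body
{
    if (n < 0)
        throw new Exception("isqrt: negative input");  // bad data: recoverable

    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}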
 
Why is it so hard for you to admit that it is possible to have CP errors 
in code that cannot corrupt your app? 
It could be just a matter of interpretation, but Contract Programming is a technique whose purpose is to detect mistakes made by the programmer rather than mistakes made outside of the program - such as invalid data or environmental constraints (e.g. out of RAM).
I agree with that, I just don't agree that in case of _any_ contract violation, the app should be forced to terminate (which is what Matthew wants).
By definition, a contract failure means that we have a situation in which somebody needs to revalidate things. I suggest that you have seen evidence of people using contracts when contracts ought not to have been used. Instead, a run-time test other than a contract should have been used.
 I've gotten confused a bit by your use of application and plug-in. I would
 have thought that if a plug-in fails (CP error) then *only* the plug-in
 should fail and not the application that is hosting it.
Me too, that's my whole point :) But Matthew seems to think otherwise..
I believe that Matthew was saying that if a plug-in fails in such a manner that one cannot guarantee that the hosting application is untouched, then the application should shutdown too.
So, looking back at your para "If the code of the app is able to 
disable the plugin without shutting down" applies to Exceptions (and 
Exhaustions), whereas "if it is not clear whether the app is in a 
consistent state, I agree shutting down completely is the best thing 
to do" applies to Errors. These two things hold, of course, since 
they are the definitions of Exceptions and Errors.
Well, who are you to decide that a CP error means an inconsistent state? Sure, it does in many cases, but not necessarily always. All I'm saying is that one should have a choice.
Who should have the choice? The user of the application or the designer of the application? If the designer says "if such-and-such occurs then the application must stop", then why should the user always have the right to override the design?
Hmm, now I see how that sentence can be confusing :) "you" refers to Matthew, while "one" refers to "any coder". The answer is definitely the designer.
In which case, the assert should shut down the application unconditionally.
 Let's pretend that a bridge designer has placed a detection mechanism in
 the bridge such that it reports an error if the load on the bridge before a
 vehicle crosses is different from the load after the vehicle crosses. If it
 then reports such an error, should you as a user of the bridge, have the
 right to disregard the warning of a possibly unsafe bridge, or should you
 be forced to wait until the bridge is declared safe again?
Perhaps I should be, but if everybody was forced to stay off it (for example, by some shield from Star Trek, so you really can't get on), how will they ever fix it? :)
I don't know, as that is not my field, but I guess they won't be asking you either.
int[] allocIfYouCan(int size)
{
     try {
         return new int[size];
     } catch (OutOfMemory ignored) {
         return null;
     }
}

void doSomething(int[] arr)
{
    assert(arr!==null);
}

doSomething(allocIfYouCan(10000000));

Obviously, allocIfYouCan() is not a good idea, but it can still happen.
But by using the 'assert', the designer/coder is saying that *any time* a null is returned, the application is broken and thus needs to be fixed before it can be used.
I don't think all asserts assert that the application is broken. For example:

Image i = ImageLoader.load("c:\\foo.tif");
assert(i);
processImage(i);

I'd say that the assertion is not "if i is ever null, kill the whole application", it's just "if i is ever null, I don't want to continue".
Then don't use an 'assert'! This is really a simple thing, honestly.

** If you want to shut down an application when you detect an error in logic, use an assert.

** If you do not want to shut down an application when you detect an error (of any sort), do not use an assert.
 But that was not the point. I tried to make the point that not all CP 
 violations should mean that the application should be forced to 
 terminate.  A consequence, if one agrees, is that assert should not throw 
 exceptions that cannot be "quenched", because all CP support in D 
 depends on assert. OTOH, I totally support critical_assert, a new type 
 of exception, or whatever, to make the apps that actually do require 
 unquenchable exceptions implementable.
 
 
 If the coder does not mean that, then the coder needs to avoid using
 'assert' in this type of situation. Use a simple runtime 'if' statement
 instead.
Perhaps, but like I said, I don't see any difference between assert(a) and if(!a) throw.. Obviously, they will throw different exceptions, but both should be quenchable.
Using D today, how would you code something to throw an unquenchable exception?
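(For reference, as far as I can tell you can't, which is rather the point. Everything thrown derives from Object, and a catch(Object) quenches the lot. A minimal sketch, assuming the writefln of current phobos' std.stdio:)

import std.stdio;

void main()
{
    try
    {
        assert(false);       // a contract violation...
    }
    catch (Object o)         // ...but *anything* thrown can be caught
    {
        writefln("quenched: ", o.toString());
    }
    // and execution just carries on
}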
 
No, I don't.. There are parts and there are parts. You're saying that 
even the tiniest error (but only if it was specified in a contract) 
should abort the app, I'm just saying that in some cases it's possible 
for that to be overreacting.
Yes, it may be possible for an application to continue running to the user's satisfaction after a program contract has been broken. However, one can never be absolutely sure in every case. And because it is a broken program contract, it means that the designer of the application *wants* the program to stop. It is her program after all ;-)
If the designer wants the app to stop, that's ok. It's also ok if she doesn't want the app to stop. Matthew says it's never ok to not stop, and I disagree..
If the designer wants it to stop, she uses an assert. If she allows the possibility of it continuing, then she does not use an assert.
How has the code become invalid exactly? The code doesn't change when it 
violates a contract.
Because the designer said that if such a situation ever happens, then it is because the programmer has made a mistake. And thus, the rest of the program may contain errors that are not so obvious. It is a red flag that says that the application is suspect and needs to be corrected or at least re-validated before continuing.
It may or may not contain errors. Sometimes you don't know, but sometimes you also do. Especially when dealing with user data (which is almost always :), it's often hard to predict all the ways that data can be wrong (I work in GIS; you wouldn't believe how many _different_ types of errors 600 million polygons can contain :).

If that principle is acknowledged throughout the code (by making data structures robust, updating transactionally, always handling exceptions, and whatnot), one can implement the app so that it is totally able to recover from any 'error' it detects later on. Now, I don't see why such an app should be forced to terminate, if someone decided to assert (a<100000000) somewhere and that happens not to be true every now and then?
Don't use an assert then. Use a different mechanism.

-- 
Derek Parnell
Melbourne, Australia
17/04/2005 4:09:39 PM
Apr 16 2005
parent reply xs0 <xs0 xs0.com> writes:
 [snip, summary:]
 Then don't use an 'assert'! This is really a simple thing, honestly.
 
 ** If you want to shut down an application when you detect an error in
 logic, use an assert.
 
 ** If you do not want to shut down an application when you detect an error
 (of any sort), do not use an assert.
 
 [snip]
Don't use an assert then. Use a different mechanism.
But I want to use assert, even if perhaps my examples were not the best cases for using it (if nothing else, "assert(a)" is far fewer characters than any sort of "if(!a) throw ...", and it also leaves no doubt on what is and isn't expected). I just fail to see why using assert would have to mean that my app needs to shut down whenever one fails.

I mean, if you want to shut down, you can do it easily - don't quench assertion failures, how hard is that? OTOH, if assert was unquenchable, there would be no way not to shut down, and I'm always pro-choice :)

BTW, why shouldn't assert be used to check data? If you look at

http://www.digitalmars.com/d/dbc.html

it would seem that I'm not the only one that thinks it's ok to do so (and to catch those errors, too).. I mean, an error in data somewhere means an error in logic somewhere else, anyway..

xs0
Apr 17 2005
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
xs0 wrote:
 Don't use an assert then. Use a different mechanism.
But I want to use assert, even if
Assert exists for the very purpose of halting the entire program when the assertion fails. One might even say that "assert" is a concept. And that concept is then implemented in quite a few languages. D should not be the one language which destroys the entire concept.
Apr 17 2005
parent xs0 <xs0 xs0.com> writes:
Georg Wrede wrote:
 xs0 wrote:
 
 Don't use an assert then. Use a different mechanism.
But I want to use assert, even if
Assert exists for the very purpose of halting the entire program when the assertion fails. One might even say that "assert" is a concept. And that concept is then implemented in quite a few languages. D should not be the one language which destroys the entire concept.
Java throws a catchable/quenchable exception (or error or whatever); you're advised not to catch it, but you still can. There is a choice.

PHP issues a warning by default; you can define a callback and do whatever you want, or you can choose to abort on errors. There is a choice.

D throws an exception that can currently be quenched. If you don't want to do that, you have all the power you need not to. There is a choice.

What some of you seem to want would take that choice away, and I (still) fail to see the benefit; really, you can abort if you want to, what's the big deal??

xs0
Apr 17 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 17 Apr 2005 11:24:21 +0200, xs0 wrote:

  > [snip, summary:]
 Then don't use an 'assert'! This is really a simple thing, honestly.
 
 ** If you want to shut down an application when you detect an error in
 logic, use an assert.
 
 ** If you do not want to shut down an application when you detect an error
 (of any sort), do not use an assert.
 
 [snip]
 Don't use an assert then. Use a different mechanism.
But I want to use assert
Well that's a shame then.
, even if perhaps my examples were not the best 
 cases for using it (if nothing else, "assert(a)" is far fewer characters 
 than any sort of "if(!a) throw ...", and it also leaves no doubt on what 
 is and isn't expected). I just fail to see why using assert would have 
 to mean that my app needs to shut down whenever one fails.
Hang around a bit longer, it'll come to you eventually.
 I mean, if you want to shut down, you can do it easily - don't quench 
 assertion failures, how hard is that?
 
 OTOH, if assert was unquenchable, there would be no way not to shut down, 
Exactly! Now you're getting it.
 and I'm always pro-choice :)
Like ... fclose() shouldn't always close the file if I don't want it to? ... sometimes I might not want function X to return any value but sometimes I might?

The purpose of assert, the reason it exists, is to shut down programs when the programmer chooses to.
 BTW, why shouldn't assert be used to check data? If you look at
 http://www.digitalmars.com/d/dbc.html
 it would seem that I'm not the only one that thinks it's ok to do so 
 (and to catch those errors, too).. I mean, an error in data somewhere 
 means an error in logic somewhere else, anyway..
Duh?! Of course it checks data, but it ought to check *output* data and not *input* data. The output data is a result of *your* coding and you need to check to see if you implemented the algorithm correctly. That is what assert is designed for.

The input data may or may not have come from your code. It could have come from external sources such as users of your program. You shouldn't crash a program with assert if the input data is bad. You may crash it by some other mechanism though.

Assert is a debugging mechanism; that is, it is used to find bugs (specifically your coding errors) before people get to use your program. A production program, one compiled with the -release switch, will not execute any assert statements in your code.

/me shakes head and walks away.

-- 
Derek Parnell
Melbourne, Australia
17/04/2005 9:03:07 PM
Apr 17 2005
parent reply xs0 <xs0 xs0.com> writes:
I mean, if you want to shut down, you can do it easily - don't quench 
assertion failures, how hard is that?

OTOH, if assert was unquenchable, there would be no way not to shut down, 
Exactly! Now you're getting it.
and I'm always pro-choice :)
Like ... fclose() shouldn't always close the file if I don't want it to? ... sometimes I might not want function X to return any value but sometimes I might?
wtf?? Of course assert should always do the same thing (throw something), I'm just saying I want a choice whether to terminate the app or not.
 The purpose of assert, the reason it exists, is to shut down programs when
 the programmer chooses to.
Couldn't have said it better myself. But, you don't want to give the programmer a choice of when to shut down..
BTW, why shouldn't assert be used to check data? If you look at
http://www.digitalmars.com/d/dbc.html
it would seem that I'm not the only one that thinks it's ok to do so 
(and to catch those errors, too).. I mean, an error in data somewhere 
means an error in logic somewhere else, anyway..
Duh?! Of course it checks data, but it ought to check *output* data and not *input* data. The output data is a result of *your* coding and you need to check to see if you implemented the algorithm correctly. That is what assert is designed for.
Where did you get that (output data only)?

http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.1.html checks input data

http://www.digitalmars.com/d/dbc.html checks input and output data

http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html checks input and output data..

I mean, a pre-condition is by definition an input data check, I think?
 The input data may or may not have come from your
 code. It could have come from external sources such as users of your
 program. You shouldn't crash a program with assert if the input data is
 bad. You may crash it by some other mechanism though. Assert is a debugging
 mechanism; that is, it is used to find bugs (specifically your coding
 errors) before people get to use your program. A production program, one
 compiled with the -release switch, will not execute any assert statements
 in your code.
Duh?! But, if it was possible to detect all bugs, this whole conversation would be pointless, as assertion errors would never occur anyway. So, obviously, they can occur even after a lot of testing when used in production (assuming you don't disable them, which is a good idea, if you ask me).

When they do, I totally agree that the default thing should be to abort. OTOH, I totally don't agree that one should not have the choice at all of trying to recover. I know that theoretically, that defeats the purpose of asserts, but I still want to be able to do it. For example,

void main(char[][] args)
{
     for (int a=0; a<args.length; a++) {
         try {
             (new Processor()).process(args[a].dup);
         } catch (Object o) {
              // an error occurred, display it to user
         }
     }
}

Now, if I know that Processor and all its libraries don't use any global variables at all (and it is possible to be 100% sure of this), I am in effect restarting the whole application for each file, even when there aren't any errors. So, there is nothing to be gained by aborting. Get it?

I can just see how you'll say that it's stupid to continue with an application that has bugs. But just recently, I had an app that aborted on errors. I was asked by the client to change how it behaves and to continue on errors. The reason there were errors was that double doesn't have enough precision to handle the data they had, so in some cases it was not possible to satisfy some post-conditions (it was not even a bug as such, it simply turned out that some data wasn't possible to handle at all). Still, they wanted to continue processing other data. Now, what am I supposed to say? "Here's the new version, sorry, but I had to disable all internal error checking, because D is such a terrific language it doesn't allow me to continue, so please double-check all the output.." ???

xs0
Apr 18 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 18 Apr 2005 09:15:22 +0200, xs0 wrote:

I mean, if you want to shut down, you can do it easily - don't quench 
assertion failures, how hard is that?

OTOH, if assert was unquenchable, there would be no way not to shut down, 
Exactly! Now you're getting it.
and I'm always pro-choice :)
Like ... fclose() shouldn't always close the file if I don't want it to? ... sometimes I might not want function X to return any value but sometimes I might?
wtf?? Of course assert should always do the same thing (throw something), I'm just saying I want a choice whether to terminate the app or not.
Then don't use assert.
 The purpose of assert, the reason it exists, is to shut down programs when
 the programmer chooses to.
Couldn't have said it better myself. But, you don't want to give the programmer a choice of when to shut down..
By "programmer" I was referring to the person who wrote the assert statement, not the person who is attempting to catch it. In other words, The purpose of assert, the reason it exists, is to shutdown programs when the programmer who coded the assert statement has decided to. That is, the assert statement does the shutting down because its coder has already decided that's the proper thing to do.
BTW, why shouldn't assert be used to check data? If you look at
http://www.digitalmars.com/d/dbc.html
it would seem that I'm not the only one that thinks it's ok to do so 
(and to catch those errors, too).. I mean, an error in data somewhere 
means an error in logic somewhere else, anyway..
Duh?! Of course it checks data, but it ought to check *output* data and not *input* data. The output data is a result of *your* coding and you need to check to see if you implemented the algorithm correctly. That is what assert is designed for.
Where did you get that (output data only)?

http://www.acm.uiuc.edu/webmonkeys/book/c_guide/2.1.html checks input data

http://www.digitalmars.com/d/dbc.html checks input and output data

http://java.sun.com/j2se/1.4.2/docs/guide/lang/assert.html checks input and output data..
Just because people have used assert to validate input data doesn't mean that this is best practice. Remember, asserts only apply to debug editions of the application. So if you are not running a debug edition, and you only use asserts for input data validation, then you have no input data validation. Not a good idea, IMNSHO.

Input data really ought to be validated by mechanisms other than asserts, so that the validation can at least run in production editions of the application.
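A sketch of what I mean (assuming std.conv.toInt; the function and its limit are made up for the example):

import std.conv;

int parseAge(char[] input)
{
    int age = toInt(input);   // malformed digits already throw here
    // assert(age < 150);     // useless as validation: gone with -release
    if (age >= 150)
        throw new Exception("implausible age");   // survives -release
    return age;
}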
 I mean, a pre-condition is by definition an input data check, I think?
Not always, and a pre-condition does not always have to use asserts. Also, "in" blocks and "out" blocks are stripped out of production editions of the application (-release switch). In other words, they are only used in debug editions of your application.

Not all pre-conditions inspect input data. They could be used to inspect the environment, or the state of global variables, etc...

But the important consideration is that they are used to catch mistakes *during* the time the application is being developed. Once the application is shipped as a production edition, you must rely on other ways to detect runtime errors; do not rely on contract programming techniques to catch mistakes in production code.
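The shape of it, borrowing the square_root example from the dbc page cited earlier; with -release both contract blocks below vanish:

import std.math;

long square_root(long x)
in
{
    assert(x >= 0);                          // pre-condition
}
out (result)
{
    assert(result * result <= x);            // post-conditions
    assert((result + 1) * (result + 1) > x);
}
body
{
    return cast(long) sqrt(cast(real) x);
}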
 The input data may or may not have come from your
 code. It could have come from external sources such as users of your
 program. You shouldn't crash a program with assert if the input data is
 bad. You may crash it by some other mechanism though. Assert is a debugging
 mechanism; that is, it is used to find bugs (specifically your coding
 errors) before people get to use your program. A production program, one
 compiled with the -release switch, will not execute any assert statements
 in your code.
Duh?! But, if it was possible to detect all bugs, this whole conversation would be pointless, as assertion errors would never occur anyway. So, obviously, they can occur even after a lot of testing when used in production (assuming you don't disable them, which is a good idea, if you ask me).
I never mentioned the possibility of detecting *all* bugs. I said that use of asserts is designed to catch logic errors. I did not say that using asserts will catch every logic error in your code. One can detect logic errors by validating the output of functions/algorithms against known values or complementary algorithms. You cannot always detect logic errors by inspecting input data, as the input data could have come from sources outside of your application's control.
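For instance (doCleverSort is hypothetical; the point is the check on the *output*):

int[] doCleverSort(int[] a);   // hypothetical implementation, elided

int[] sortedCopy(int[] data)
{
    int[] result = doCleverSort(data.dup);
    // validate the output against the complementary, obvious check
    assert(result.length == data.length);
    for (int i = 1; i < result.length; i++)
        assert(result[i - 1] <= result[i]);
    return result;
}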
 When they do, I totally agree that the default thing should 
 be to abort. OTOH, I totally don't agree that one should not have the 
 choice at all of trying to recover. I know that theoretically, that 
 defeats the purpose of asserts, but I still want to be able to do it. 
 For example,
 
 void main(char[][] args)
 {
      for (int a=0; a<args.length; a++) {
          try {
              (new Processor()).process(args[a].dup);
          } catch (Object o) {
               // an error occurred, display it to user
          }
      }
 }
 
 Now, if I know that Processor and all its libraries don't use any global 
 variables at all (and it is possible to be 100% sure of this), I am in 
 effect restarting the whole application for each file, even when there 
 aren't any errors. So, there is nothing to be gained by aborting. Get it?
If asserts have been used in a proper manner, that is, used to trap logic errors, and the application catches an error "catch(Object o) ..." then you can be sure of at least two things: you are still debugging the application, and a logic error somewhere in the application has been found. There is something useful to be gained in aborting the application at this point; namely that it can be repaired while still in development. Get it?
 I can just see how you'll say that it's stupid to continue with an 
 application that has bugs. But just recently, I had an app that aborted 
 on errors. I was asked by the client to change how it behaves and to 
 continue on errors. The reason there were errors was that double doesn't 
 have enough precision to handle the data they had, so in some cases it 
 was not possible to satisfy some post-conditions (it was not even a bug 
 as such, it simply turned out that some data wasn't possible to handle 
 at all). Still, they wanted to continue processing other data. Now, what 
 am I supposed to say? "Here's the new version, sorry, but I had to 
 disable all internal error checking, because D is such a terrific 
 language it doesn't allow me to continue, so please double-check all the 
 output.." ???
To use your own words, "it was not even a bug as such". What you have experienced is a client-sponsored change or clarification in the business rules of the application. So no, you don't disable all internal error checking. Instead, you implement the new business specification for the application. I hope you charged them for the upgrade ;-)

-- 
Derek
Melbourne, Australia
18/04/2005 5:44:21 PM
Apr 18 2005
parent reply xs0 <xs0 xs0.com> writes:
Of course assert should always do the same thing (throw 
something), I'm just saying I want a choice whether to terminate the app 
or not.
Then don't use assert.
We're going in circles here.. I want to use assert, because assert is exactly the thing that should be used for checking the stuff I want to check. For example:

if (a < b) {
    c = (a + b) * 0.5;
    // can still fail: when a and b are adjacent doubles,
    // c may round to a or b
    assert(a < c && c < b);
} else {
    throw ..;
}

I guess you agree that an assert is valid here, and that the assertion is not buggy, nor is the code? It is still possible that the assert triggers an error, though. Now, I don't see why I need to use something else, just because I want to give others that use this code a choice of whether to shut down or not.

Can't you see that if assert errors are unquenchable, you can't ever leave them in code after debugging, because it is always theoretically possible that you (or someone else using your code) will want not to abort, and they don't have that choice? For the umpteenth time - don't quench assert errors, if you don't want to. If you want to abort anyway, why should it also be enforced in the language?
 By "programmer" I was referring to the person who wrote the assert
 statement, not the person who is attempting to catch it. In other words,
 the purpose of assert, the reason it exists, is to shut down programs when
 the programmer who coded the assert statement has decided to. That is, the
 assert statement does the shutting down because its coder has already
 decided that's the proper thing to do.
That's a bit simplistic, if you ask me.. Would you use a library that formatted your disk if you ever passed it invalid parameters? Just killing one's app can do even more damage.. I think you assume too much; we're not living in some theoretical ideal world where aborting is always the right thing to do.
 Just because people have used assert to validate input data doesn't mean
 that this is best practice. Remember, asserts only apply to debug editions
 of the application. So if you are not running a debug edition, and you only
 use asserts for input data validation, then you have no input data
 validation. Not a good idea, IMNSHO.
Whatever you use asserts for, if you disable them, you don't have that validation, so it's not a good idea to disable them in the first place. And if I do leave them on, I want control over my app's lifecycle, not the language. (OTOH, if you do want to abort, there's nothing stopping you from doing so)
 Input data really ought to be validated by mechanisms other than asserts,
 so that the validation can at least run in production editions of the application.
I thought the idea was to use other mechanisms where you expect invalid data to be possible, and to use asserts where you don't?
I mean, a pre-condition is by definition an input data check, I think?
Not always, and a pre-condition does not always have to use asserts. Also, "in" blocks and "out" blocks are stripped out of production editions of the application (-release switch). In other words, they are only used in debug editions of your application.
Not always, not always. I agree. It's you that is claiming that something is always true, like that an assert error should abort the app. Generally yes, but not always!
 Not all pre-conditions inspect input data. They could be used to inspect
 the environment, or the state of global variables, etc... 
How are those not input data? (and if they're really not, why are they being checked?)
 But the important
 consideration is that they are used to catch mistakes *during* the time the
 application is being developed. Once the application is shipped as a
 production edition, you must rely on other ways to detect runtime errors;
 do not rely on contract programming techniques to catch mistakes in
 production code.
So what would you have me do?

version(DEBUG) {
    assert(...);
} else {
    if (...) throw ...;
}

What is wrong with using CP in production code? If there is a contract, shouldn't it be obeyed even after you decide to label the code "production"? I mean, since when is more checking a bad thing? The only reason to disable it at all is for performance, which may or may not be what one prefers over the other.
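Spelled out with D's actual debug condition, that duplication looks like this (a sketch; the check itself is made up):

void check(int x)
{
    debug
    {
        assert(x > 0);          // debug builds: abort hard
    }
    else
    {
        if (x <= 0)             // release builds: same contract,
            throw new Exception("x must be positive");  // but quenchable
    }
}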
 I never mentioned the possibility of detecting *all* bugs. I said that use
 of asserts is designed to catch logic errors. I did not say that using
 asserts will catch every logic error in your code.
So, is it not better to leave them on in production, too? That way, the remaining errors will at least get detected.. I mean, your reasoning is weird:

- you want debug editions to be forced to abort on assert errors, and
- you want to disable assert in production, which is the only place where it matters whether aborting is done or not
 If asserts have been used in a proper manner, that is, used to trap logic
 errors, and the application catches an error "catch(Object o) ..." then you
 can be sure of at least two things: you are still debugging the
 application, and a logic error somewhere in the application has been found.
 There is something useful to be gained in aborting the application at this
 point; namely that it can be repaired while still in development.
Perhaps, but perhaps not. As I tried to demonstrate with the example that followed, there was nothing to repair (and it obviously was not still in development). At least, there was nothing to repair in the code that threw the assert error, nor in any code that used it. All I did was add that catch(Object), so processing would continue with other files.

If you had it your way, I'd have to rewrite hundreds of asserts, and the only thing I'd gain is more bloat (like 80 characters per assert) and a lot of wasted time.

xs0
Apr 18 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 18 Apr 2005 12:52:11 +0200, xs0 wrote:

[snip]
 I mean, your reasoning is weird:
Whatever ... We obviously don't agree with each other and we are unable to convince the other of our 'rightness', so let's stop wasting bandwidth and our colleagues' patience. Instead, we really ought to agree to disagree and get on with our work.

-- 
Derek Parnell
Melbourne, Australia
http://www.dsource.org/projects/build
18/04/2005 9:26:56 PM
Apr 18 2005
parent reply xs0 <xs0 xs0.com> writes:
Derek Parnell wrote:
 On Mon, 18 Apr 2005 12:52:11 +0200, xs0 wrote:
 
 [snip]
 
I mean, your reasoning is weird:
Whatever ... We obviously don't agree with each other and we are unable to convince the other of our 'rightness', so let's stop wasting bandwidth and our colleagues' patience. Instead, we really ought to agree to disagree and get on with our work.
Well, a few posts ago you said:
 I would have thought that if a plug-in fails (CP error) then
 *only* the plug-in should fail and not the application that is
 hosting it.
so I don't agree we disagree :) It would just seem that we moved from what we agreed on to assert specifics, which we don't seem to agree on.

I would like to get to the bottom of our disagreement and resolve it, but of course, feel free not to continue our discussion, if you don't want to. Or, we can move it to e-mail, mine is included with all my posts.

xs0
Apr 18 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 18 Apr 2005 14:25:17 +0200, xs0 wrote:

Or, we can move it to e-mail, mine is included with all my posts.
I just tried your email address but got the message "Could not resolve domain". I sent it to 'xs0' using the domain of 'xs0.com'.

-- 
Derek Parnell
Melbourne, Australia
18/04/2005 11:23:00 PM
Apr 18 2005
parent xs0 <xs0 xs0.com> writes:
That's odd. I think I need to change my domain provider, some other 
people had problems visiting my site, too..

Try with mslenc at Google's mail domain, which starts with g, ends 
with mail and there's nothing in between, while its TLD is .com :)


xs0

Derek Parnell wrote:
 On Mon, 18 Apr 2005 14:25:17 +0200, xs0 wrote:
 
 
Or, we can move it to e-mail, mine is included with all my posts.
I just tried your email address but got the message "Could not resolve domain". I sent it to 'xs0' using the domain of 'xs0.com'.
Apr 18 2005
prev sibling next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 18:57:10 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opso3x5rhn23k2f5 nrage.netwin.co.nz...
 On Tue, 12 Apr 2005 16:23:06 +1000, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
<snip>
 Because there has to be a common type that's catchable, and I
 don't
 think Object is the appropriate choice.
So.. you want:

Catchable
    Recoverable
        OutOfMemory
    NonRecoverable
        AssertionFailure
You've changed the names of things - Catchable<=>Throwable, NonRecoverable <=> Unrecoverable <=> Irrecoverable - which is perfectly fine. I'm not allied to them.
I was trying to reflect your statement above about wanting something 'catchable'. Food for thought - Catchable (sorta) implies throwable, throwable doesn't (necessarily) imply catchable. <snip>
 though memory exhaustion is in principle recoverable, in
 practice it is not recoverable (since getting memory to throw/handle
 is tricky, and may require workarounds) and also often so unlikely
 as to make it not worth worrying about.
The point Ben made in his original post (to which I agree) is that this is a per-application decision; it cannot be made "in general".
 So, the conversation I'm interesting in seeing is whether these
 characteristics of OutOfMemory are shared by any other resource
 exhaustion.
Like running out of operating system handles, file handles, socket handles... <snip>
 So, I see the taxonomy as being either (now using my/Ben's names):

 Object <= not throwable, btw
     Throwable
         Error <= Unrecoverable exceptions
             ContractViolation
             Assertion
         Exception
             FileNotFoundException
             XMLParseException
         Exhaustion
             MemoryExhaustion
             TSSKeyExhaustion

 or, if we just lump exhaustions in with exceptions

 Object <= not throwable, btw
     Throwable
         Error <= Unrecoverable exceptions
             ContractViolation
             Assertion
         Exception
             FileNotFoundException
             XMLParseException
             MemoryExhaustionException
             TSSKeyExhaustionException
So, in essence you just want to insert "throwable" in between object and the rest of the tree.
 Why do you need to force a program to terminate? If the programmer
 wants  to continue and can do so, they will, if not, they wont. I
 see no need to  enforce it.
<snip>
 First, "Why do you need to force a program to terminate?". This
 one's simple: The program must be forced to terminate because it has
 violated its design.
Correction: The program *must* terminate *if* it has violated its design. Do you know the design of a program someone is going to write in the future? How can you say with utmost surety that under circumstance X that program has violated its design and *must* terminate? <snip>
 Second: "If the programmer wants to continue and can do so, they
 will, if not, they wont". Who's the programmer? Do you mean the
 user?
No. <snip>
 But let's give you the benefit of the doubt, and assume you
 misspoke, and actually meant: "If the [user] wants to continue and
 can do so, they will, if not, they wont".
No, I did not mean that. <snip>
 Notwithstanding all the foregoing, there's a much more fundamental,
 albeit little recognised, issue. Computers make a strict
 interpretation of what we, the programmers, instruct them to do. Now
 a contract violation, as I've said, is pure and simple a statement
 that your instructions to the computer (or to the compiler, if you
 will) are fundamentally flawed.
Correct, and if the program violates its design it should terminate (in the most sensible way possible). However, none of this addresses the issue that you cannot know a future program's design nor whether it should terminate under any given 'exception'.
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 So, I see the taxonomy as being either (now using my/Ben's 
 names):

 Object <= not throwable, btw
     Throwable
         Error <= Unrecoverable exceptions
             ContractViolation
             Assertion
         Exception
             FileNotFoundException
             XMLParseException
         Exhaustion
             MemoryExhaustion
             TSSKeyExhaustion

 or, if we just lump exhaustions in with exceptions

 Object <= not throwable, btw
     Throwable
         Error <= Unrecoverable exceptions
             ContractViolation
             Assertion
         Exception
             FileNotFoundException
             XMLParseException
             MemoryExhaustionException
             TSSKeyExhaustionException
So, in essence you just want to insert "throwable" in between object and the rest of the tree.
Yes and, importantly, not allow anything not derived from Throwable to be thrown/caught.
 Do you know the design of a program someone is going to write in 
 the  future?

 How can you say with utmost surety that under circumstance X that 
 program has violated its design and *must* terminate?
Somewhere along the way we've had a gigantic disconnect. Maybe this stuff's new to you and I've assumed too much?

A programmer uses CP constructs - assertions, pre/postconditions, invariants - to assert truths about the logic of their program, i.e. that a given truth will hold if the program is behaving according to its design. So it's not the case that _I_ say/know anything about the program's design, or any such fantastic thing, but that the programmer(s) know(s) the design as they're writing, and they _assert_ the truths about that design within the code.
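For instance, a class invariant reifies such a truth (a contrived sketch):

class Account
{
    private int balance;

    invariant
    {
        // the design says an account is never overdrawn; if this ever
        // fires, the *program* is wrong, whatever the input was
        assert(balance >= 0);
    }

    void withdraw(int amount)
    {
        balance -= amount;   // a logic bug here trips the invariant
    }
}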
 Notwithstanding all the foregoing, there's a much more 
 fundamental,
 albeit little recognised, issue. Computers make a strict
 interpretation of what we, the programmers, instruct them to do. 
 Now
 a contract violation, as I've said, is pure and simple a 
 statement
 that your instructions to the computer (or to the compiler, if 
 you
 will) are fundamentally flawed.
Correct, and if the program violates its design it should terminate (in the most sensible way possible). However, none of this addresses the issue that you cannot know a future program's design nor whether it should terminate under any given 'exception'.
Hopefully I've now made that clear.
 So as to clear any confusion, under what circumstance would you 
 enforce  program termination?
When a program violates its design, as detected by the assertions inserted into it by its creator(s). It must always do so.
 in other words, which exception or error would you  make 
 un-catchable?
Sigh. I'm trying my best not to be rude, but it seems that you've really not bothered to read the posts in this thread before replying to them.

1. At no point have I _ever_ said anything was to be uncatchable. Indeed, I've specifically, and regularly, discussed the action(s) to be taken when catching an irrecoverable exception.

2. As long as this debate's been going on, which, IIHIC, is the best part of a year, the distinction between errors and exceptions has been (ir)recoverability. By convention in D, an Exception is recoverable and an Error is irrecoverable, but the latter is not enforced, and, as I've shown, is not possible in D by library extension.

So, no exceptions would be irrecoverable. All errors would be irrecoverable. The only place, IMO, where there's a grey area is with resource exhaustion, which is theoretically recoverable (since it doesn't mean the program has violated its design) but often practically irrecoverable (by virtue of the ramifications of the problem that caused it).
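In code terms, the convention amounts to no more than this (a sketch of the proposed distinction; none of it is enforced):

class Throwable { }
class Error : Throwable { }      // irrecoverable -- by convention only
class Exception : Throwable { }  // recoverable

void g() { throw new Error(); }

void f()
{
    try { g(); }
    catch (Error e)
    {
        // nothing in the language stops this today, which is the problem:
        // we can 'recover' from the irrecoverable and carry on regardless
    }
}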
Apr 12 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 12 Apr 2005 22:23:56 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Do you know the design of a program someone is going to write in
 the  future?

 How can you say with utmost surety that under circumstance X that
 program has violated its design and *must* terminate?
Somewhere along the way we've had a gigantic disconnect. Maybe this stuff's new to you and I've assumed too much?
 A programmer uses CP constructs - assertions, pre/postconditions,
 invariants - to assert truths about the logic of their program, i.e.
 that a given truth will hold if the program is behaving according to
 its design. So it's not the case that _I_ say/know anything about the
 program's design, or any such fantastic thing, but that the
 programmer(s) know(s) the design as they're writing, and they
 _assert_ the truths about that design within the code.
Assertions, pre/post conditions, etc. are removed in a release build. I assume from the above you leave them in?

Regardless, let's apply the above to the plugin example posted in this thread in several places by several people. If a plugin asserts, should the main program be forced to terminate? IMO no.

Assuming assertions are removed, all you're left with is exceptions. Should a program be forced to terminate on an exception? IMO no.

The reasoning for my opinions above is quite simple: the programmer (of the program) is the only person in a position to decide what the design and behaviour of the program is, not the writer of a plugin, not the writer of the std library or language.
 So as to clear any confusion, under what circumstance would you
 enforce  program termination?
When a program violates its design, as detected by the assertions inserted into it by its creator(s). It must always do so.
Sure, but if I write a program that uses your plugin, should your plugin (your creation) dictate to my program (my creation) what its design is? By enforcing program termination you do so.

<snip>

Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opso43bibk23k2f5 nrage.netwin.co.nz...
 On Tue, 12 Apr 2005 22:23:56 +1000, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Do you know the design of a program someone is going to write in
 the  future?

 How can you say with utmost surety that under circumstance X 
 that
 program has violated its design and *must* terminate?
Somewhere along the way we've had a gigantic disconnect. Maybe this stuff's new to you and I've assumed too much?
 A programmer uses CP constructs - assertions, pre/postconditions,
 invariants - to assert truths about the logic of their program, 
 i.e.
 that a given truth will hold if the program is behaving according 
 to
 its design. So it's not the case that _I_ say/know anything about 
 the
 program's design, or any such fantastic thing, but that the
 programmer(s) know(s) the design as they're writing, and they
 _assert_ the truths about that design within the code.
 Assertions, pre/post conditions, etc. are removed in a release build. I  assume from the above you leave them in?
This is something we've not covered for some weeks. I'm moving in my own work towards leaving them in more and more, and, as I've said, in a recent high-risk project they are in, and doing their job very nicely (which is to say nothing at all after the first few days in system testing, which gives me a very nice calm feeling, given the amount of money travelling through those components each day!). But, I think in D there should be an option to have them elided, yes. Again, I'd err on the side of them always being in absent a -no_cp_violations flag, but I can live with them being opt-in rather than opt-out.

The crucial thing we need is language support for irrecoverability, whether it can then be disabled on the command-line or not, since there is no means within D to provide it by library.
 Regardless, let's apply the above to the plugin example posted in 
 this  thread in several places by several people.
 If a plugin asserts, should the main program be forced to 
 terminate? IMO no.

 Assuming assertions are removed, all you're left with is 
 exceptions. Should  a program be forced to terminate on an 
 exception? IMO no.
I've never said a program should be forced to terminate on an exception. In principle there is no need. In practice one might well do so, of course, but that's beside the/this point.
 The reasoning for my opinions above is quite simple: the 
 programmer (of  the program) is the only person in a position to 
 decide what the design  and behaviour of the program is, not the 
 writer of a plugin, not the  writer of the std library or 
 language.
Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is. It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
 <snip>

 So as to clear any confusion, under what circumstance would you
 enforce  program termination?
When a program violates its design, as detected by the assertions inserted into it by its creator(s). It must always do so.
Sure, but if I write a program that uses your plugin, should your plugin (your creation) dictate to my program (my creation) what its design is? By enforcing program termination you do so.
If you use my plug-in counter to its design, it might do anything, including scrambling your entire hard disk. Once you've stepped outside the bounds of its contract all bets are off. Not to mention the fact that it cannot possibly be expected to work.

Let's turn it round:
    1. Why do you want to use a software component contrary to its design?
    2. What do you expect it to do for you in that circumstance?
Apr 12 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 08:32:05 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Let's turn it round:
     1. Why do you want to use a software component contrary to its  
 design?
I don't.
     2. What do you expect it to do for you in that circumstance?
Nothing. I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.

Under what circumstances do you see the goal above as impossible?

You've mentioned scrambled memory and I agree if your plugin has scrambled my memory there is no way I can continue sanely. However, how can we detect that situation? Further, how can I even be sure my program is going to terminate how I intend/want it to? More likely it crashes somewhere random.

Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opso45k5x523k2f5 nrage.netwin.co.nz...
 On Wed, 13 Apr 2005 08:32:05 +1000, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Let's turn it round:
     1. Why do you want to use a software component contrary to 
 its  design?
I don't.
     2. What do you expect it to do for you in that circumstance?
Nothing. I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.
But you can't, don't you see. Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
 Under what circumstances do you see the goal above as 
 impossible?
It is theoretically impossible in all circumstances. (FTR: We're only talking about contract violations here. I'm going to keep saying that, just to be clear.)

It is practically impossible in probably a minority of cases. In, say, 80% of cases you could carry on quite happily. Maybe it's 90%? Maybe even 99%. But the point is that it is absolutely never 100%, and you _cannot know_ when that 1%, or 10% or 20% is going to bite. And therefore if the application is written to attempt recovery of invalid behaviour it is going to get you. One day you'll lose a very important piece of user data, or contents of your hard drive will be scrambled.
 You've mentioned scrambled memory and I agree if your plugin has 
 scrambled  my memory there is no way I can continue sanely.
Cool! :-)
 However, how can we  detect that situation?
We cannot. The only person that's got the faintest chance of specifying the conditions for which it's not going to happen (at least not by design <g>) is the author of the particular piece of code. And the way they specify it is to reify the contracts of their code in CP constructs: assertions, invariants, etc.

The duty of the programmer of the application that hosts that code is to acknowledge the _fact_ that their application is now in an invalid state and the _likelihood_ that something bad will happen, and to shut down in as timely and graceful a manner as possible.

Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
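Something like this is the shape of that duty (Plugin is invented for the sketch; the exit is the point):

import std.c.stdlib;   // for exit()

interface Plugin { void execute(); }   // hypothetical plug-in interface

void hostPlugin(Plugin p)
{
    try
    {
        p.execute();
    }
    catch (Object o)    // today even a CP violation lands here
    {
        // flush user data, close logs, tell the user -- then no heroics:
        exit(1);        // acknowledge the invalid state and get out
    }
}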
 Further, how can I even be sure my program is going to terminate 
 how I intend/want it to? More likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly, and in practice it is likely to do so an uncomfortable/unacceptable proportion of the time. If the application is designed (or forced by the language) to respect the detection of its invalid state as provided by the code's author(s), then you stand a very good chance in practice of being able to effect a graceful shutdown. (Even though in principle you cannot be sure that you can do so.)
Apr 12 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 09:04:15 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opso45k5x523k2f5 nrage.netwin.co.nz...
 On Wed, 13 Apr 2005 08:32:05 +1000, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Let's turn it round:
     1. Why do you want to use a software component contrary to
 its  design?
I don't.
     2. What do you expect it to do for you in that circumstance?
Nothing. I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.
But you can't, don't you see. Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
The module/plugin won't execute a single instruction past the point at which it violates its design. It will be killed. My program hasn't violated its design at all. The only problem I can see occurs when the module/plugin corrupts my program's memory space.
 Under what circumstances do you see the goal above as
 impossible?
It is theoretically impossible in all circumstances. (FTR: We're only talking about contract violations here. I'm going to keep saying that, just to be clear.)
Can you give me an example of the sort of contract violation you're referring to? I'm seeing...

class Foo
{
    int a;
    invariant
    {
        if (a != 5) assert(false);   // contract violation
    }
}

which can be caused by any number of things:
- buggy algorithm
- unexpected input, without an assertion
- memory corruption

so if this occurs in a plugin/module only that last one can possibly corrupt the main program.
 It is practically impossible in probably a minority of cases. In, say,
 80% of cases you could carry on quite happily. Maybe it's 90%? Maybe
 even 99%. But the point is that it is absolutely never 100%, and you
 _cannot know_ when that 1%, or 10% or 20% is going to bite.
Agreed.
 And
 therefore if the application is written to attempt recovery of
 invalid behaviour it is going to get you. One day you'll lose a very
 important piece of user data, or contents of your hard drive will be
 scrambled.
I don't want the plugin/module that has asserted to continue, I want it to die. I want my main program to continue, the only situation I can see where this is likely to cause a problem is memory corruption and in that case, yes it's possible it will have the effects you describe. As you say above, the probability is very small, thus each application needs to make the decision about whether to continue or not, for some the risk might be acceptable, for others it might not.
 However, how can we  detect that situation?
We cannot. The only person that's got the faintest chance of specifying the conditions for which it's not going to happen (at least not by design <g>) is the author of the particular piece of code. And the way they specify it is to reify the contracts of their code in CP constructs: assertions, invariants, etc. The duty of the programmer of the application that hosts that code is to acknowledge the _fact_ that their application is now in an invalid state and the _likelihood_ that something bad will happen, and to shut down in as timely and graceful a manner as possible.
If that is what they want to do, they could equally decide the risk was small (as it is) and continue.
 Since D is (i) new and open to improvement and (ii) currently not
 capable of supporting irrecoverability by library, I am campaigning
 for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
 Further, how can I even be sure my program is going to terminate 
 how I intend/want it to? More likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
It _might_ crash, if the assertion was due to memory corruption. And even then the corruption might be localised to the module in which the assertion was raised; if so, it has no effect on the main program.
 , and in practice it
 is likely to do so an uncomfortable/unacceptable proportion of the
 time.
unacceptable to whom? you? me? the programmer of application X 10 years in the future?
 If the application is designed (or forced by the language) to
 respect the detection of its invalid state as provided by the code's
 author(s), then you stand a very good chance in practice of being
 able to effect a graceful shutdown. (Even though in principle you
 cannot be sure that you can do so.)
I think you stand a very good chance of annoying the hell out of a future program author by forcing him/her into a design methodology that they do not aspire to, whether it's correct or not.

For the record, I do agree failing hard and fast is usually the best practice. I just don't believe people should be forced into it, all the time.

Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opso47kto123k2f5 nrage.netwin.co.nz...
 On Wed, 13 Apr 2005 09:04:15 +1000, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opso45k5x523k2f5 nrage.netwin.co.nz...
 On Wed, 13 Apr 2005 08:32:05 +1000, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Let's turn it round:
     1. Why do you want to use a software component contrary to
 its  design?
I don't.
     2. What do you expect it to do for you in that 
 circumstance?
Nothing. I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.
But you can't, don't you see. Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
 The module/plugin won't execute a single instruction past the point at which it violates its design. It will be killed. My program hasn't violated its design at all.
Without executing any instructions beyond that point, how does it (or do we) know it's invalid?
 The only problem I can see occurs when the module/plugin corrupts 
 my program's memory space.
Which it may have done before anyone, including it, knows it's invalid.
 Under what circumstances do you see the goal above as
 impossible?
It is theoretically impossible in all circumstances. (FTR: We're only talking about contract violations here. I'm going to keep saying that, just to be clear.)
Can you give me an example of the sort of contract violation you're referring to? I'm seeing...

class Foo
{
    int a;
    invariant
    {
        if (a != 5) assert(false);   // contract violation
    }
}

which can be caused by any number of things:
- buggy algorithm
- unexpected input, without an assertion
- memory corruption
Alas, I'm really not smart enough to work on partial examples. Can you flesh out a small but complete example which will demonstrate what you're after, and I'll do my best to prove my case on it?
 And
 therefore if the application is written to attempt recovery of
 invalid behaviour it is going to get you. One day you'll lose a 
 very
 important piece of user data, or contents of your hard drive will 
 be
 scrambled.
I don't want the plugin/module that has asserted to continue, I want it to die. I want my main program to continue, the only situation I can see where this is likely to cause a problem is memory corruption and in that case, yes it's possible it will have the effects you describe.
I agree that that's the desirable situation. Alas, it's impossible (in principle; as I've said, it's possible some Heisenbergian proportion of the time in practice).
 As you say above, the probability is very small, thus each 
 application  needs to make the decision about whether to continue 
 or not, for some the  risk might be acceptable, for others it 
 might not.
Yeah, it sounds persuasive. But that'd only be valid if an application were to pop a dialog that said:

"The third-party component ReganGroovy.dll has encountered a condition outside the bounds of its design, and cannot be used further. You are strongly advised to shut down the application immediately to ensure your work is not lost. If you do not follow this advice there is a non-negligible chance of deleterious effects, ranging from the loss of your unsaved work or deletion of the file(s) you are working with, to your system being rendered inoperable or damage to the integrity of your corporate network. Do you wish to continue?"

Now *maybe* if that was the case, then the programmers of that application can argue that using the "-no-cp-violations" flag is valid. But can you see a manager agreeing to that message? Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away.

(But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)
 However, how can we  detect that situation?
We cannot. The only person that's got the faintest chance of specifying the conditions for which it's not going to happen (at least not by design <g>) is the author of the particular piece of code. And the way they specify it is to reify the contracts of their code in CP constructs: assertions, invariants, etc. The duty of the programmer of the application that hosts that code is to acknowledge the _fact_ that their application is now in an invalid state and the _likelihood_ that something bad will happen, and to shut down in as timely and graceful a manner as possible.
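To be concrete about what I mean by reifying contracts in CP constructs, here's a minimal D sketch (the function is invented purely for illustration):

int divide(int a, int b)
in {
    assert(b != 0);                       // precondition: the caller's side of the bargain
}
out (result) {
    assert((result * b) + (a % b) == a);  // postcondition: our side of the bargain
}
body {
    return a / b;
}

(Compile with -release and these checks melt away, which is the "-no-cp-violations" behaviour in miniature.)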
If that is what they want to do, they could equally decide the risk was small (as it is) and continue.
Who decides? The programmer, or the user?
 Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
So would I, but D does not have the mechanisms to support that, so it needs language support.
 further how can I even be sure my program is going to terminate how I intend/want it to, more likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
It _might_ crash, if the assertion was due to memory corruption; and even then the corruption might be localised to the module in which the assertion was raised, in which case it has no effect on the main program.
Indeed, it might. In many cases it will. But you'll never know for sure. It might have corrupted your stack such that the next file you open is C:\boot.ini, and the next time you reboot your machine it doesn't start. If the programmer makes that decision for an uninformed user, they deserve to be sued, IMO.
 , and in practice it is likely to do so an uncomfortable/unacceptable proportion of the time.
unacceptable to whom? you? me? the programmer of application X 10 years in the future?
The user, of course. The end victim of all such invalid behaviour is the user, whether it's a bank losing millions of dollars because the comms services sent messages the wrong way, or Joe Image the graphic designer who's lost 14 hours of work 2 hours before he has to present it to his major client, who'll terminate his contract and put his company under.
 I think you stand a very good chance of annoying the hell out of a future program author by forcing him/her into a design methodology that they do not aspire to, whether it's correct or not.

 For the record I do agree failing hard and fast is usually the best practice. I just don't believe people should be forced into it, all the time.
Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?" and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?" There's simply no getting away from the first, and many good reasons to embrace the second.
Apr 12 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 10:07:40 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 I expect to be able to disable/stop using a component that fails (in whatever fashion) and continue with my primary purpose whatever that happens to be.
But you can't, don't you see. Once it's experienced a single instruction past the point at which it's violated its design, all bets are off.
The module/plugin won't execute a single instruction past the point at which it violates its design. It will be killed. My program hasn't violated its design at all.
Without executing any instructions beyond that point, how does it (or we) know it's invalid?
Ok, a misunderstanding: the 'point' in my mind was the assert statement, but you're saying it's where the erroneous 'thing' was carried out, at some stage before the assert, correct? If so, agreed.
 The only problem I can see occurs when the module/plugin corrupts my program's memory space.
Which it may have done before anyone, including it, knows it's invalid.
Yep.
 Under what circumstances do you see the goal above to be impossible?
It is theoretically impossible in all circumstances. (FTR: We're only talking about contract violations here. I'm going to keep saying that, just to be clear)
Can you give me an example of the sort of contract violation you're referring to. I'm seeing...

class Foo {
  int a;

  invariant {
    if (a != 5) assert(false);  // contract violation
  }
}

which can be caused by any number of things:
 - buggy algorithm
 - unexpected input, without an assertion
 - memory corruption
Alas, I'm really not smart enough to work on partial examples. Can you flesh out a small but complete example which will demonstrate what you're after, and I'll do my best to prove my case on it?
I was asking *you* for an example, the above is half-formed because I am trying to guess what you mean. Feel free to modify it, and/or start from scratch. Basically I'm asking:
1- What are the causes of contract violations?
2- How many of those would corrupt the "main program" if they occurred in a plugin/module.
The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash; the rest can be logged, the bad code disabled/not used, and execution can continue in a perfectly valid and normal fashion. In other words, only in a small subset of contract violations would the main program start operating outside its design. The choice about whether to continue or not lies in the hands of the programmer of that application.
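To make the scenario concrete, here's the sort of host I have in mind, as a minimal D sketch (the Plugin class and every name in it are hypothetical):

import std.stdio;

abstract class Plugin {
    bool disabled = false;             // set once the plugin violates its contract
    abstract char[] name();
    abstract void step();              // one unit of the plugin's work
}

void runAll(Plugin[] plugins) {
    foreach (Plugin p; plugins) {
        if (p.disabled)
            continue;                  // a failed plugin stays dead
        try {
            p.step();
        } catch (Object o) {           // as things stand, contract violations land here
            // log it, disable the offender, and let the main program carry on
            writefln("plugin '", p.name(), "' violated its contract: ", o.toString());
            p.disabled = true;
        }
    }
}

Whether that catch (Object) should even be legal is, of course, exactly what we're arguing about.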
 And therefore if the application is written to attempt recovery of invalid behaviour it is going to get you. One day you'll lose a very important piece of user data, or contents of your harddrive will be scrambled.
I don't want the plugin/module that has asserted to continue, I want it to die. I want my main program to continue, the only situation I can see where this is likely to cause a problem is memory corruption and in that case, yes it's possible it will have the effects you describe.
I agree that that's the desirable situation. Alas, it's impossible (in principle; as I've said, it's possible some Heisenbergian proportion of the time in practice).
Some _large_ proportion of the time in practice, as far as I can see.
 As you say above, the probability is very small, thus each application needs to make the decision about whether to continue or not, for some the risk might be acceptable, for others it might not.
Yeah, it sounds persuasive. But that'd only be valid if an application were to pop a dialog that said: "The third-party component ReganGroovy.dll has encountered a condition outside the bounds of its design, and cannot be used further. You are strongly advised to shut down the application immediately to ensure your work is not lost. If you do not follow this advice there is a non-negligible chance of deleterious effects, ranging from the loss of your unsaved work or deletion of the file(s) you are working with, to your system being rendered inoperable or damage to the integrity of your corporate network. Do you wish to continue?"
That would be nice, but it's not required. The choice is in the hands of the programmer, not the user. If the user doesn't like the choice made by the programmer, they'll stop using the program.
 Now *maybe* if that was the case, then the programmers of that application can argue that using the "-no-cp-violations" flag is valid. But can you see a manager agreeing to that message?
That depends on the manager. I vaguely recall receiving a Windows error message very much like the one shown above, on several occasions; most of those programs continued to run, albeit with other errors, and some died shortly thereafter.
 Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away. (But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)
Principle/Ideal vs Reality/Practice: it's a fine line/balance, one that is unique to each situation and application. That is why we cannot mandate program termination. But, by all means, provide one in the library.
 However, how can we  detect that situation?
We cannot. The only person that's got the faintest chance of specifying the conditions for which it's not going to happen (at least not by design <g>) is the author of the particular piece of code. And the way they specify it is to reify the contracts of their code in CP constructs: assertions, invariants, etc. The duty of the programmer of the application that hosts that code is to acknowledge the _fact_ that their application is now in an invalid state and the _likelihood_ that something bad will happen, and to shut down in as timely and graceful a manner as possible.
If that is what they want to do, they could equally decide the risk was small (as it is) and continue.
Who decides? The programmer, or the user?
The programmer.
 Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
So would I, but D does not have the mechanisms to support that, so it needs language support.
I assume you're referring to the fact that you can catch Object. IMO catching Object is an advanced technique. Once the Exception tree is sorted out people will be catching "Exception" if they want to catch "everything", and that won't include asserts and other contract violations. This will leave the _possibility_ of catching Object if desired and all will be happy.
 further how can I even be sure my program is going to terminate how I intend/want it to, more likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
It _might_ crash, if the assertion was due to memory corruption; and even then the corruption might be localised to the module in which the assertion was raised, in which case it has no effect on the main program.
Indeed, it might. In many cases it will. But you'll never know for sure. It might have corrupted your stack such that the next file you open is C:\boot.ini, and the next time you reboot your machine it doesn't start. If the programmer makes that decision for an uninformed user, they deserve to be sued, IMO.
In which case the user will move on to another program. The programmer will hopefully learn from the mistake and improve. Either way, it's not ours to mandate.
 , and in practice it is likely to do so an uncomfortable/unacceptable proportion of the time.
unacceptable to whom? you? me? the programmer of application X 10 years in the future?
The user, of course. The end victim of all such invalid behaviour is the user, whether it's a bank losing millions of dollars because the comms services sent messages the wrong way, or Joe Image the graphic designer who's lost 14 hours of work 2 hours before he has to present it to his major client, who'll terminate his contract and put his company under.
(as above) The user will choose, the programmer will learn.. or not.
 I think you stand a very good chance of annoying the hell out of a future program author by forcing him/her into a design methodology that they do not aspire to, whether it's correct or not.

 For the record I do agree failing hard and fast is usually the best practice. I just don't believe people should be forced into it, all the time.
Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?"
If it *is* operating outside its design you cannot expect anything from it. However, (assuming plugin/main context) you cannot know that it (main program) *is* operating outside its design, it only *might* be (if the plugin has corrupted it).
 and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?"
Of course. But that does not mean I agree with mandatory program termination.
 There's simply no getting away from the first
Indeed.
 , and many good reasons to embrace the second.
Agreed. And we can/will with a revised exception tree and a clear description of expected practices, i.e. catching Exception but not Object (unless you're doing x, y, z) and why that is frowned upon. Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Can you give me an example of the sort of contract violation you're referring to. I'm seeing...

 class Foo {
   int a;

   invariant {
     if (a != 5) assert(false);  // contract violation
   }
 }

 which can be caused by any number of things:
  - buggy algorithm
  - unexpected input, without an assertion
  - memory corruption
Alas, I'm really not smart enough to work on partial examples. Can you flesh out a small but complete example which will demonstrate what you're after, and I'll do my best to prove my case on it?
I was asking *you* for an example, the above is half-formed because I am trying to guess what you mean. Feel free to modify it, and/or start from scratch.
Ah. More work for me. :-) I'll give you the real example from the comms system. One component, call it B, serves as a bridge between two others, translating messages on TCP connections from the upstream component to message queue entries to be dispatched to the downstream. It effects a reverse translation from downstream MQ back through to upstream TCP. The invariant that was violated was that the channel's internal message container for receiving from the MQ should not contain any messages when the upstream (TCP) is not connected to its peer. The reason this fired is that the downstream process, under some circumstances, did indeed send up messages. Because the upstream entities maintain transaction state based on connectivity, any messages from a previous TCP connection are by definition from a previous transaction, and so to pass them up would violate the protocol, and would therefore be entirely unreasonable. The only reasonable action is to drop them. Because the program as originally designed did not expect to encounter this scenario, that assumption was codified in the class invariant for the channel type. Thus, when it was encountered in practice - i.e. the program now encountered a condition that violated its design - a contract violation fired, and the program did an informative suicide. As soon as this happened we were able to infer that our design assumptions were wrong, and we corrected the design such that stale messages are now expected, and are dealt with by a visit to the bit bucket. As I said in another post, imagine if that CP violation had not happened. The stale messages could have been encountered every few days/weeks/months, and may well have been treated as some emergent and unknown behaviour of the overall system complexity, or may have been masked in other errors encountered in this large and diverse infrastructure. Who knows? What I think we can say with high certainty is that it would not have been immediately diagnosed and rapidly rectified. If you want, I'll have a root around my code base for another in a bit ...
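In rough D terms (every name here is invented for illustration), the codified assumption was along these lines:

class Message { /* payload elided */ }

class Channel {
    private Message[] mqInbox;     // messages received from the MQ side
    private bool tcpConnected;     // upstream TCP connection state

    invariant {
        // the design assumption (wrong, as it turned out): no MQ messages
        // may be buffered while the upstream TCP peer is disconnected
        assert(tcpConnected || mqInbox.length == 0);
    }

    void onTcpDisconnect() { tcpConnected = false; }
    void onMqMessage(Message m) { mqInbox ~= m; }
}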
 Basically I'm asking:
 1- What are the causes of contract violations?
Whatever the programmer dictates.
 2- How many of those would corrupt the "main program" if they occurred in a plugin/module.
Impossible to determine. In principle all do. In practice probably a minority. (Bear in mind that just because the numbers of incidence are likely to be low, it's not valid to infer that the ramifications of each corruption will be similarly low.)
 The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash,
May I ask on what you base this statement? Further, remember that crashing is but one of many deleterious consequences of contract violation. In some ways a crash is the best one might hope for. What if your code editing program appears to be working perfectly fine, but it saves everything in lowercase, or when you finally close it down it deletes its configuration files? Much more nastily, what if you're doing a financial system, and letting it continue results in good uptime, but transactions are corrupted? When the client finally finds out it's your responsibility, you're going to wish you'd gone for those cosmetically unappealing shutdowns, rather than staying up in error.
 the rest can be logged, the bad code disabled/not used and execution can continue to operate in a perfectly valid and normal fashion.
     "The third-party component ReganGroovy.dll has encountered a
 condition outside the bounds of its design, and cannot be used
 further. You are strongly advised to shutdown the application
 immediately to ensure your work is not lost. If you do not follow
 this advice there is a non-negligble chance of deleterious
 effects,
 ranging from the loss of your unsaved work or deletion of the
 file(s) you are working with, to your system being rendered
 inoperable or damage to the integrity of your corporate network.
 Do
 you wish to continue?"
That would be nice, but it's not required. The choice is in the hands of the programmer, not the user. If the user doesn't like the choice made by the programmer, they'll stop using the program.
They sure will. But they may've been seriously inconvenienced by it, to the detriment of
 Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away. (But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)
Principle/Ideal vs Reality/Practice: it's a fine line/balance, one that is unique to each situation and application. That is why we cannot mandate program termination. But, by all means, provide one in the library.
We can't, because D does not have the facilities to do so.
 Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
So would I, but D does not have the mechanisms to support that, so it needs language support.
I assume you're referring to the fact that you can catch Object. IMO catching Object is an advanced technique. Once the Exception tree is sorted out people will be catching "Exception" if they want to catch "everything", and that won't include asserts and other contract violations. This will leave the _possibility_ of catching Object if desired and all will be happy.
What's the merit in catching Object? I just don't get it. Does anyone have a motivating case for doing so? Does anyone have any convincing experience in throwing fundamental types in C++? IIRC, the only reason for allowing it in the first place was so that one could throw literal strings to avoid issues of stack existentiality in early exception-handling infrastructures. Nowadays, no-one talks of throwing anything other than a std::exception-derived type, and with good reason. Leaving it as Object-throw/catchable is just the same as the situation with the opApply return value. It's currently not being abused, so it's "good enough"! :-(
 further how can I even be sure my program is going to terminate how I intend/want it to, more likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
It _might_ crash, if the assertion was due to memory corruption; and even then the corruption might be localised to the module in which the assertion was raised, in which case it has no effect on the main program.
Indeed, it might. In many cases it will. But you'll never know for sure. It might have corrupted your stack such that the next file you open is C:\boot.ini, and the next time you reboot your machine it doesn't start. If the programmer makes that decision for an uninformed user, they deserve to be sued, IMO.
In which case the user will move on to another program. The programmer will hopefully learn from the mistake and improve. Either way, it's not ours to mandate.
You don't think that user would rather have received a dialog telling him that something's gone wrong and that his work has been saved for him and that he must shut down? He'd rather have his machine screwed? You don't think that developer would rather receive a mildly irritated complaint from a user with an accompanying auto-generated error-report, which he can use to fix the problem forthwith? He'd rather have his reputation trashed in newsgroups, field very irate email, be sued?
 Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?"
If it *is* operating outside its design you cannot expect anything from it. However, (assuming plugin/main context) you cannot know that it (main program) *is* operating outside its design, it only *might* be (if the plugin has corrupted it).
I don't think it's divisible. If the program has told you it might be operating outside its design, then it is operating outside its design. After all, a program cannot, by definition, be designed to work with a part of it that is operating outside its design. How could the interaction with that component be spelt out in design, never mind codified?
 and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?"
Of course. But that does not mean I agree with mandatory program termination.
 There's simply no getting away from the first
Indeed.
btw, I think we've covered most of the stuff now. I'm happy to continue if you are, but equally happy to let the group decide (that irrecoverability is not worth having ;< ) Cheers Matthew
Apr 12 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 11:49:25 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Can you give me an example of the sort of contract violation you're referring to. I'm seeing...

 class Foo {
   int a;

   invariant {
     if (a != 5) assert(false);  // contract violation
   }
 }

 which can be caused by any number of things:
  - buggy algorithm
  - unexpected input, without an assertion
  - memory corruption
Alas, I'm really not smart enough to work on partial examples. Can you flesh out a small but complete example which will demonstrate what you're after, and I'll do my best to prove my case on it?
I was asking *you* for an example, the above is half-formed because I am trying to guess what you mean. Feel free to modify it, and/or start from scratch.
Ah. More work for me. :-) I'll give you the real example from the comms system. One component, call it B, serves as a bridge between two others, translating messages on TCP connections from the upstream component to message queue entries to be dispatched to the downstream. It effects a reverse translation from downstream MQ back through to upstream TCP. The invariant that was violated was that the channel's internal message container for receiving from the MQ should not contain any messages when the upstream (TCP) is not connected to its peer. The reason this fired is that the downstream process, under some circumstances, did indeed send up messages. Because the upstream entities maintain transaction state based on connectivity, any messages from a previous TCP connection are by definition from a previous transaction, and so to pass them up would violate the protocol, and would therefore be entirely unreasonable. The only reasonable action is to drop them. Because the program as originally designed did not expect to encounter this scenario, that assumption was codified in the class invariant for the channel type. Thus, when it was encountered in practice - i.e. the program now encountered a condition that violated its design - a contract violation fired, and the program did an informative suicide. As soon as this happened we were able to infer that our design assumptions were wrong, and we corrected the design such that stale messages are now expected, and are dealt with by a visit to the bit bucket. As I said in another post, imagine if that CP violation had not happened. The stale messages could have been encountered every few days/weeks/months, and may well have been treated as some emergent and unknown behaviour of the overall system complexity, or may have been masked in other errors encountered in this large and diverse infrastructure. Who knows? What I think we can say with high certainty is that it would not have been immediately diagnosed and rapidly rectified. If you want, I'll have a root around my code base for another in a bit ...
No need, this is fine, thank you. One important point is that I'm not recommending the removal of the CP violation, quite the opposite, but I believe that the programmer should be able to make the informed decision about whether it's terminal or not. In your example, the best course was/is to terminate, as its main goal/purpose cannot be achieved without significant risk of corruption. In another example, the plugin one, the main program can continue; its main goal/purpose can still be achieved without significant risk of corruption (assuming here the plugin is non-essential to its main goal). It comes down to the priority of the task that has asserted: if it's low priority then, given circumstances, it's conceivable that the programmer may want to log it and continue, so as to achieve his/her main priority. There is risk involved, but it's admittedly small.
 Basically I'm asking:
 1- What are the causes of contract violations?
Whatever the programmer dictates.
Not contract conditions, those the programmer dictates. I want to know what can cause a contract to be violated. I was thinking:
 - buggy algorithm
 - memory corruption
 - ..
 2- How many of those would corrupt the "main program" if they occurred in a plugin/module.
Impossible to determine. In principle all do. In practice probably a minority. (Bear in mind that just because the numbers of incidence are likely to be low, it's not valid to infer that the ramifications of each corruption will be similarly low.)
I agree. Incidence is low, the ramifications may be large. But they may not be, in which case let the programmer decide.
 The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash,
May I ask on what you base this statement?
On the comments you and I have made about the likelihood of memory corruption, which, so far, appears to be the only one that causes an undetectable/unhandleable crash.
 Further, remember that crashing is but one of many deleterious consequences of contract violation. In some ways a crash is the best one might hope for. What if your code editing program appears to be working perfectly fine, but it saves everything in lowercase, or when you finally close it down it deletes its configuration files?

 Much more nastily, what if you're doing a financial system, and letting it continue results in good uptime, but transactions are corrupted? When the client finally finds out it's your responsibility, you're going to wish you'd gone for those cosmetically unappealing shutdowns, rather than staying up in error.
Sure, in these circumstances the programmer should "choose" to crash. I just want to retain the option.
     "The third-party component ReganGroovy.dll has encountered a
 condition outside the bounds of its design, and cannot be used
 further. You are strongly advised to shutdown the application
 immediately to ensure your work is not lost. If you do not follow
 this advice there is a non-negligble chance of deleterious
 effects,
 ranging from the loss of your unsaved work or deletion of the
 file(s) you are working with, to your system being rendered
 inoperable or damage to the integrity of your corporate network.
 Do
 you wish to continue?"
That would be nice, but it's not required. The choice is in the hands of the programmer, not the user. If the user doesn't like the choice made by the programmer, they'll stop using the program.
They sure will. But they may've been seriously inconvenienced by it, to the detriment of
... ? their business, health, bank account.
 Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away. (But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)
Principle/Ideal vs Reality/Practice: it's a fine line/balance, one that is unique to each situation and application. That is why we cannot mandate program termination. But, by all means, provide one in the library.
We can't, because D does not have the facilities to do so.
You cannot "enforce" or "mandate" it, but you can "provide" it.
 Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
So would I, but D does not have the mechanisms to support that, so it needs language support.
I assume you're referring to the fact that you can catch Object. IMO catching Object is an advanced technique. Once the Exception tree is sorted out people will be catching "Exception" if they want to catch "everything", and that won't include asserts and other contract violations. This will leave the _possibility_ of catching Object if desired and all will be happy.
What's the merit in catching Object? I just don't get it. Does anyone have a motivating case for doing so? Does anyone have any convincing experience in throwing fundamental types in C++? IIRC, the only reason for allowing it in the first place was so that one could throw literal strings to avoid issues of stack existentiality in early exception-handling infrastructures. Nowadays, no-one talks of throwing anything other than a std::exception-derived type, and with good reason. Leaving it as Object-throw/catchable is just the same as the situation with the opApply return value. It's currently not being abused, so it's "good enough"! :-(
Well, personally I don't care what it's called, I just want to be able to catch everything, including Assertions etc. I'm happy with it being uncommon, eg.

Object <- not throw/catch-able
  Throwable
    Assertion
    Exception
       ..etc..

you don't catch Throwable generally speaking, only Exception.
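As a code sketch of that shape (illustrative only; the Exception here would shadow phobos' real one, so treat it as a shape, not a patch):

class Throwable {                  // the only thing 'throw' would accept
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

class Assertion : Throwable {      // contract violations: catchable, but rarely caught
    this(char[] msg) { super(msg); }
}

class Exception : Throwable {      // the recoverable family
    this(char[] msg) { super(msg); }
}

// everyday code catches only the recoverable family:
//     try { ... } catch (Exception e) { ... }
// host/advanced code can still opt in to everything:
//     try { ... } catch (Throwable t) { ... }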
 further how can I even be sure my program is going to terminate how I intend/want it to, more likely it crashes somewhere random.
If it experiences a contract violation then, left to its own devices, in principle it _will_ crash randomly
It _might_ crash, if the assertion was due to memory corruption; and even then the corruption might be localised to the module in which the assertion was raised, in which case it has no effect on the main program.
Indeed, it might. In many cases it will. But you'll never know for sure. It might have corrupted your stack such that the next file you open is C:\boot.ini, and the next time you reboot your machine it doesn't start. If the programmer makes that decision for an uninformed user, they deserve to be sued, IMO.
In which case the user will move on to another program. The programmer will hopefully learn from the mistake and improve. Either way, it's not ours to mandate.
You don't think that user would rather have received a dialog telling him that something's gone wrong and that his work has been saved for him and that he must shut down? He'd rather have his machine screwed?
Sure.
 You don't think that developer would rather receive a mildly irritated complaint from a user with an accompanying auto-generated error-report, which he can use to fix the problem forthwith? He'd rather have his reputation trashed in newsgroups, field very irate email, be sued?
Sure. But again, it's his/her choice.
 Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?"
If it *is* operating outside its design you cannot expect anything from it. However, (assuming plugin/main context) you cannot know that it (main program) *is* operating outside its design, it only *might* be (if the plugin has corrupted it).
I don't think it's divisible. If the program has told you it might be operating outside its design, then it is operating outside its design.
Not if its design includes trying to handle that situation.
 After all, a program cannot, by definition, be designed to
 work with a part of it that is operating outside its design.
Why not?
 How could the interaction with that component be spelt out in design, never mind codified?
if (outside_design) component = disabled;
if (component == disabled) return;
 and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?"
Of course. But that does not mean I agree with mandatory program termination.
 There's simply no getting away from the first
Indeed.
btw, I think we've covered most of the stuff now. I'm happy to continue if you are, but equally happy to let the group decide (that irrecoverability is not worth having ;< )
Ok, let's leave it here then. I'll leave the comments I've just made, feel free to ignore them :) Regan
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 If you want, I'll have a root around my code base for another in a bit ...
No need, this is fine, thank you. One important point is that I'm not recommending the removal of the CP violation, quite the opposite, but I believe that the programmer should be able to make the informed decision about whether it's terminal or not.
Ok. My understanding of your position is that that decision is rightly within the purview of the programmer of a component's client code, whereas I'm saying that (at least in principle) it can only be within the purview of the component's programmer. Is that a fair characterisation of our respective positions?
 In your example, the best course was/is to terminate, as its main goal/purpose cannot be achieved without significant risk of corruption.
Yes, but I'd rephrase to say "only" course, since there is not, in principle, any other valid course.
 In another example, the plugin one, the main program can continue; its main goal/purpose can still be achieved without significant risk of corruption (assuming here the plugin is non-essential to its main goal).
In practice, likely in most cases. In principle no. [Sorry to keep pointing out both parts, but I feel that it's necessary to keep that distinction out there.]
 It comes down to the priority of the task that has asserted: if it's low priority then, given circumstances, it's conceivable that the programmer may want to log it and continue, so as to achieve his/her main priority. There is risk involved, but it's admittedly small.
By priority do you mean the 'importance' of the user's activities? If so, I agree that that has merit, but it once again takes us back to the user. I use GVIM when I'm not plodding along inside VS98. Let's say that GVIM has a bug that, once encountered, causes it to wink off into the aether every 100-200 runs. Most times I am writing very small scripts (usually a file manipulation in Ruby using recls/Ruby), and could certainly live with them disappearing mid-save every few hundred runs. However, some times I'm writing very large makefile generator templates, and were GVIM to lose that, even at such a low frequency, it would be a massive hit. [NOTE: GVIM's never had any kind of crash, or any other bug, in all the time I've used it. It's great! (Although I wish I could find out how to get it to remember the last frame size, so I don't have to resize it every time)] Now, if the decision on how to deal with bugs were with the GVIM programmer, and they opted for quenching those errors, then it's quite likely that I'd lose some very important work. Conversely, if IZT were in use then it's _not_ the case that I'd have to shut down when I don't want to all the time: the bug wouldn't exist! GVIM users would have fed back the behaviour to the GVIM author and it'd've been fixed, and I'd never see it. (Since I never see it, I suspect GVIM might be using IZT already <g>) If the decision was to leave it to the user, then we wouldn't lose work, but the problem may well persist, and we'd be clicking Continue buttons every few hours. Not terrible, but not good either. And it'd still kill us with big complex edits anytime we instinctively clicked Continue followed by the 1/200 wink. :-(
 Basically I'm asking:
 1- What are the causes of contract violations?
Whatever the programmer dictates.
Not contract conditions, those the programmer dictates. I want to know what can cause a contract to be violated. I was thinking:
 - buggy algorithm
 - memory corruption
 - ..
No, it really _is_ just whatever the programmer decides. Only they can know (or help to define) what constitutes a buggy algorithm, a class invariant, even a memory corruption. (Although the OS can determine some memory corruptions, such as accessing an unmapped page, or attempting to write to a read-only page, or what not. But in general there's no overarching help from hardware or anything to be had for compiled languages.)
 2- How many of those would corrupt the "main program" if they occurred in a plugin/module.
Impossible to determine. In principle all do. In practice probably a minority. (Bear in mind that just because the numbers of incidence are likely to be low, it's not valid to infer that the ramifications of each corruption will be similarly low.)
I agree. Incidence is low, the ramifications may be large. But they may not be, in which case let the programmer decide.
Wearing the hat of flexibility for a moment, I still don't see how it's the choice of the programmer (of the application, not the plug-in). What's happened is that the programmer of a plug-in has reified, in executable tests, the design of their code. When one of those tests fires, it communicates to the process within which it resides that it has now acted outside of its design. There are now several options:
 - the plug-in calls abort() and terminates the process
 - the plug-in throws an exception/error
With that error/exception:
 - the process calls abort()
 - the process shuts down, taking appropriate graceful steps, e.g. telling the user it's saved their work / offering the user the option to save the work / dumping server context to a log
 - the process, if a GUI app, gives the user the choice of what to do
 - the process unloads the offending plug-in and continues executing
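A sketch of the host's side of that menu (every name below is invented, and 'HaltOnViolation' is a made-up version identifier, not a real dmd switch):

import std.c.stdlib;   // abort()
import std.stdio;

void hostStep(void delegate() pluginStep) {
    try {
        pluginStep();
    } catch (Object o) {               // a contract violation escaped the plug-in
        version (HaltOnViolation) {
            abort();                   // the hard-line policy: die on the spot
        } else {
            // the lenient policies: save work, ask the user, or just
            // unload the plug-in and carry on -- the choice being debated
            writefln("plug-in failed: ", o.toString());
        }
    }
}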
 The point I am driving at is that a very small subset of contract violations corrupt the main program in such a way as to cause it to crash,
May I ask on what you base this statement?
On the comments you and I have made about the likelihood of memory corruption, which, so far, appears to be the only one that causes an undetectable/unhandleable crash.
Ok. There are lots more, of course, such as premature release of system resources (window handles, TSS keys, etc. etc.)
 That would be nice, but it's not required. The choice is in the hands of the programmer, not the user. If the user doesn't like the choice made by the programmer, they'll stop using the program.
They sure will. But they may've been seriously inconvenienced by it, to the detriment of
... ? their business, health, bank account.
Eep. Lost my thread. :$
 Of course we live in a world of mendacity motivated by greed, so your manager's going to have you water down that dialog faster than you can say "gorporate creed". In which case, we should all just violate away. (But I believe that most engineers care about their craft, and would have trouble sleeping in such circumstances.)
Principle/Ideal vs Reality/Practice: it's a fine line/balance, one that is unique to each situation and application. That is why we cannot mandate program termination. But, by all means, provide one in the library.
We can't, because D does not have the facilities to do so.
You cannot "enforce" or "mandate" it, but you can "provide" it.
Not sure what you mean? Legally/morally/technically? It's all doable.
 Since D is (i) new and open to improvement and (ii) currently not capable of supporting irrecoverability by library, I am campaigning for it to have it built in.
I'd prefer an optional library solution. For reasons expressed in this post/thread.
So would I, but D does not have the mechanisms to support that, so it needs language support.
I assume you're referring to the fact that you can catch Object. IMO catching Object is an advanced technique. Once the Exception tree is sorted out people will be catching "Exception" if they want to catch "everything", and that won't include asserts and other contract violations. This will leave the _possibility_ of catching Object if desired and all will be happy.
What's the merit in catching Object? I just don't get it. Does anyone have a motivating case for doing so? Does anyone have any convincing experience in throwing fundamental types in C++? IIRC, the only reason for allowing it in the first place was so that one could throw literal strings to avoid issues of stack existentiality in early exception-handling infrastructures. Nowadays, no-one talks of throwing anything other than a std::exception-derived type, and with good reason. Leaving it as Object-throw/catchable is just the same as the situation with the opApply return value. It's currently not being abused, so it's "good enough"! :-(
Well, personally I don't care what it's called, I just want to be able to catch everything, including Assertions etc.
Again, no-one's talking about anything not being catchable. It's whether the caught exception is then rethrown or not. I assume you mean you want to be able to catch and quench everything.
 I'm happy with it being  uncommon, eg.

 Object <- not throw/catch-able
   Throwable
     Assertion
     Exception
        ..etc..

 you don't catch Throwable generally speaking, only Exception.
yes, that's along the lines I'm thinking. Being able to catch Object just seems like a loophole with no upside, just like C++'s catching of fundamental types.
 Again it boils down to two things, the theoretical "what do you expect of your software once it's operating outside its design?"
If it *is* operating outside its design you cannot expect anything from it. However, (assuming plugin/main context) you cannot know that it (main program) *is* operating outside its design, it only *might* be (if the plugin has corrupted it).
I don't think it's divisible. If the program has told you it might be operating outside its design, then it is operating outside its design.
Not if its design includes trying to handle that situation.
 After all, a program cannot, by definition, be designed to
 work with a part of it that is operating outside its design.
Why not?
 How could the interaction with that component be spelt out in design, never mind codified?
if (outside_design) component = disabled;
if (component == disabled) return;
The program itself cannot be designed to work with something it's not designed to work with. It's axiomatic.
 and the practical "wouldn't you like to use software that's been subject to a highly fault-intolerant design/development/testing methodology?"
Of course. But that does not mean I agree with mandatory program termination.
 There's simply no getting away from the first
Indeed.
btw, I think we've covered most of the stuff now. I'm happy to continue if you are, but equally happy to let the group decide (that irrecoverability is not worth having ;< )
Ok, let's leave it here then. I'll leave the comments I've just made, feel free to ignore them :)
LOL. I should have read to the end first. Well, hrumph, I'm not wasting all that typing, so here it is. [If you wish, just post a "Last Word" and I promise not to top it. ;) ]
Apr 12 2005
next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 13:46:47 +1000, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 If you want, I'll have a root around my code base for another in a bit ...
No need, this is fine, thank you. One important point is that I'm not recommending the removal of the CP violation, quite the opposite, but I believe that the programmer should be able to make the informed decision about whether it's terminal or not.
Ok. My understanding of your position is that that decision is rightly within the purview of the programmer of a component's client code, whereas I'm saying that (at least in principle) it can only be within the purview of the component's programmer. Is that a fair characterisation of our respective positions?
Yep.
 btw, I think we've covered most of the stuff now. I'm happy to continue if you are, but equally happy to let the group decide (that irrecoverability is not worth having ;< )
Ok, let's leave it here then. I'll leave the comments I've just made, feel free to ignore them :)
LOL. I should have read to the end first. Well, hrumph, I'm not wasting all that typing, so here it is. [If you wish, just post a "Last Word" and I promise not to top it. ;) ]
Resisting the urge to post something controversial to test this promise .. done! Regan
Apr 12 2005
prev sibling next sibling parent reply zwang <nehzgnaw gmail.com> writes:
Matthew wrote:
 [NOTE: GVIM's never had any kind of crash, or any other bug, in all the time I've used it. It's great! (Although I wish I could find out how to get it to remember the last frame size, so I don't have to resize it every time)]
 
[snip] I wrote the following lines in $VIM/_vimrc to set up the window size:

winpos 0 0
set lines=53
set columns=166

This is of course not a remember-the-last-frame-size solution, but you might find it useful.
Apr 12 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"zwang" <nehzgnaw gmail.com> wrote in message 
news:d3i61c$p7r$1 digitaldaemon.com...
 Matthew wrote:
 [NOTE: GVIM's never had any kind of crash, or any other bug, in all the time I've used it. It's great! (Although I wish I could find out how to get it to remember the last frame size, so I don't have to resize it every time)]
[snip] I wrote the following lines in $VIM/_vimrc to set up the window size:

winpos 0 0
set lines=53
set columns=166

This is of course not a remember-the-last-frame-size solution, but you might find it useful.
You're now officially my best friend. :-) Many thanks
Apr 12 2005
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
In article <d3i4nm$o0r$1 digitaldaemon.com>, Matthew says...
 One important point is that I'm not recommending the removal of the CP violation, quite the opposite, but I believe that the programmer should be able to make the informed decision about whether it's terminal or not.
Ok. My understanding of your position is that that decision is rightly within the purview of the programmer of a component's client code, whereas I'm saying that (at least in principle) it can only be within the purview of the component's programmer.
At the risk of muddying the waters a bit, I think one might reasonably argue that violation of preconditions is recoverable while violation of postconditions is not. In the first case, the client could theoretically detect the error, fix the parameters and call the function again, while in the second case the client has been stuck in a bad application state and there's little he can do about it. Might it make sense to separate these concerns rather than just throwing AssertErrors in all cases? Sean
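P.S. In D's contract syntax the distinction looks like this (a sketch; the function is invented, and it assumes contract violations remain catchable by the client):

import std.stdio;

int percent(int part, int whole)
in {
    // precondition: a violation here is the *caller's* bug; the caller
    // could catch it, fix its arguments and call again
    assert(whole > 0 && part >= 0 && part <= whole);
}
out (result) {
    // postcondition: a violation here is *our* bug, and the client is
    // now stuck in a bad application state
    assert(result >= 0 && result <= 100);
}
body {
    return (part * 100) / whole;
}

void main() {
    try {
        writefln(percent(150, 100));   // violates the precondition
    } catch (Object o) {               // today an AssertError, caught as Object
        writefln(percent(100, 100));   // fix the inputs and retry: recoverable
    }
}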
Apr 13 2005
parent Sean Kelly <sean f4.ca> writes:
In article <d3jd4b$1sb6$1 digitaldaemon.com>, Sean Kelly says...
At the risk of muddying the waters a bit, I think one might reasonably argue
that violation of preconditions is recoverable while violation of postconditions
is not.  In the first case, the client could theoretically detect the error, fix
the parameters and call the function again, while in the second case the client
has been stuck in a bad application state and there's little he can do about it.
Might it make sense to separate these concerns rather than just throwing
AssertErrors in all cases?
This has me wondering (and I'm posting this to remind myself to test it): if an exception is thrown out of a public class method with a postcondition, is that postcondition evaluated? What if the postcondition has an assertion failure as well? Also, is the class invariant evaluated if an exception is thrown? What if that also has an assert failure? I know that auto classes are not currently handled properly if execution leaves scope because of a thrown exception; what about these other cases where we could theoretically have two or more exceptions in flight at the same time? Note that I'm ignoring finally blocks, as exceptions thrown there are ignored and just cause them to complete early (this is an issue in itself, and one might argue that finally blocks could violate application integrity depending on what's in them). Sean
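P.S. A sketch of the first test (the printed trace is the answer; the behaviour is exactly what's in doubt, so no claims here about what it prints):

import std.stdio;

class C {
    int ok = 1;

    invariant {
        writefln("invariant ran");     // does this appear when f() exits via a throw?
        assert(ok == 1);
    }

    void f() {
        throw new Exception("leaving f() abnormally");
    }
}

void main() {
    C c = new C();
    try {
        c.f();
    } catch (Exception e) {
        writefln("caught: ", e.toString());
    }
}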
Apr 13 2005
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is.
 
 It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard-work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
Your metaphor is just riveting! Seriously, I get the feeling that you're not getting through, at least not with the current tack. Folks have a hard time understanding that the entire application should be shut down just because a single contract has failed. Especially if we are talking about plugins. After all, the same plugin may have other bugs, etc., and in those cases the mandatory shut-down never becomes an issue as long as no contract traps fire. As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong, (and whose fault it was!) that this simply forces the bug to be fixed in "no time". We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software. But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque. This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and especially the pointy haired bosses, have a hard time becoming motivated to do things Right.
Apr 12 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 13 Apr 2005 02:21:07 +0300, Georg Wrede <georg.wrede nospam.org>  
wrote:
 Matthew wrote:
 Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is.
 It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard-work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
Your metaphor is just riveting! Seriously, I get the feeling that you're not getting through, at least not with the current tack. Folks have a hard time understanding that the entire application should be shut down just because a single contract has failed. Especially if we are talking about plugins. After all, the same plugin may have other bugs, etc., and in those cases the mandatory shut-down never becomes an issue as long as no contract traps fire. As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong, (and whose fault it was!) that this simply forces the bug to be fixed in "no time". We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software. But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.
Or they, like you Georg, can see how it happens in the real world, despite it not being "Right".
 This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and especially the pointy haired bosses, have a hard time becoming motivated to do things Right.
Amen. (to Bob) The real world so often intrudes on purity of design. I can understand Matthew's position, where he's coming from. For the most part I agree with his points/concerns. I just don't think it's the right thing for D to enforce, it's not flexible enough for real world situations. Perhaps Matthew is right, and we should beat the world into submission, but I think a better tack is to subvert it slowly to our design. You don't throw a frog into boiling water (it will jump out); instead you heat it slowly. Regan
Apr 12 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message
news:opso47vqdb23k2f5 nrage.netwin.co.nz...
 On Wed, 13 Apr 2005 02:21:07 +0300, Georg Wrede
 <georg.wrede nospam.org>  wrote:
 Matthew wrote:
 Sorry, again this is completely wrong. Once the programmer is using a plug-in outside the bounds of its correctness *it is impossible* for that programmer to decide what the behaviour of his/her program is.
 It really amazes me that people don't get this. Is it some derring-do attitude that 'we shall overcome' all obstacles by dint of hard-work and resolve? No-one binds together nuts and bolts with a hammer, no matter how hard they hit them.
Your metaphor is just riveting! Seriously, I get the feeling that you're not getting through, at least not with the current tack. Folks have a hard time understanding that the entire application should be shut down just because a single contract has failed. Especially if we are talking about plugins. After all, the same plugin may have other bugs, etc., and in those cases the mandatory shut-down never becomes an issue as long as no contract traps fire. As I see it, the value in mandatory shutdown with CP is in making it so painfully obvious to all concerned (customer, main contractor, subcontractor, colleagues, etc.) what went wrong, (and whose fault it was!) that this simply forces the bug to be fixed in "no time". We must also remember that not every program is written for the same kind of environment. Moving gigabucks is where absolute correctness is a must. Another might be hospital equipment. Or space ship software. But (alas) most programmers are forced to work in environments where you debug only enough for the customer to accept your bill. They might find the argumentation seen so far in this thread, er, opaque.
Or they, like you Georg, can see how it happens in the real world, despite it not being "Right".
I take that point, and am in sympathy with it (practically, not in principle). The answer to "the real world" is how effective the methodology is when used. I've been involved with all kinds of different approaches over the years, and I'm telling you I've seen nothing anywhere near as effective as Informative Zero Tolerance (IZT - did I just invent a new acronym? <g>) for producing good code fast. (Informative because it tells you as soon as possible what is wrong and where it is)
 This is made even worse with people getting filthy rich peddling blatantly inferior programs and "operating systems". Programmers, and especially the pointy haired bosses, have a hard time becoming motivated to do things Right.
Amen. (to Bob) The real world so often intrudes on purity of design. I can understand Matthew's position, where he's coming from. For the most part I agree with his points/concerns. I just don't think it's the right thing for D to enforce; it's not flexible enough for real world situations. Perhaps Matthew is right, and we should beat the world into submission, but I think a better tack is to subvert it slowly to our design. You don't throw a frog into boiling water (it will jump out); instead you heat it slowly.
Well, politically, I can agree with that. For me, then, the slow boiling is the "-no-cp-violations" flag (or absence of the -debug flag, if you will). If we do not have irrecoverable support for the contract violation class(es) within D, then there's no way to ever get it to boiling point. It'll just be warm.
Apr 12 2005
prev sibling next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:425C57E3.7050409 nospam.org...
 Matthew wrote:
 Sorry, again this is completely wrong. Once the programmer is 
 using a plug-in outside the bounds of its correctness *it is 
 impossible* for that programmer to decide what the behaviour of 
 his/her program is.

 It really amazes me that people don't get this. Is it some 
 derring-do attitude that 'we shall overcome' all obstacles by 
 dint of hard-work and resolve? No-one binds together nuts and 
 bolts with a hammer, no matter how hard they hit them.
Your metaphor is just riveting!
Woof!
 Seriously, I get the feeling that you're not getting through, not 
 at least with the current tack.
Ya think?!? :-)
 Folks have a hard time understanding that the entire application 
 should be shut down just because a single contract has failed. 
 Especially if we are talking about plugins. After all, the same 
 plugin may have other bugs, etc., and in those cases the mandatory 
 shut-down never becomes an issue as long as no contract traps 
 fire.
I know. I had this same feeling when I started getting into it. FTR: it was a combination of discussions with Walter, reading "The Pragmatic Programmer" and inductive reasoning that got me to this point. But now I'm here I can't go back, in part because no-one's ever offered an answer to what can be expected of software once it's reached an invalid state.
 As I see it, the value in mandatory shutdown with CP is in making 
 it so painfully obvious to all concerned (customer, main 
 contractor, subcontractor, colleagues, etc.) what went wrong, (and 
 whose fault it was!) that this simply forces the bug to be fixed in "no 
 time".
It's twofold. The philosophical is "why do you want software to act outside its design, and what good do you think can come of that?" The practical is that this methodology acts like a knife through butter when cutting out bugs. In every application in which I've strictly enforced irrecoverability I've seen significant reductions in the amount of time it takes to get stable. It's almost insanely effective. (To experiment, try, for two weeks, pressing Abort every time you're presented with Abort, Retry, Ignore.)
 We must also remember that not every program is written for the 
 same kind of environment. Moving gigabucks is where absolute 
 correctness is a must. Another might be hospital equipment. Or 
 space ship software.
Amen to all of the above.
 But (alas) most programmers are forced to work in environments 
 where you debug only enough for the customer to accept your bill. 
 They might find the argumentation seen so far in this thread, er, 
 opaque.
Indeed. This is one of those where I shall have to think. Instinct tells me strict-CP will win out, but I need to think about it. :)
 This is made even worse with people getting filthy rich peddling 
 blatantly inferior programs and "operating systems". Programmers, 
 and especially the pointy haired bosses, have a hard time becoming 
 motivated to do things Right.
True. When I plugged in the irrecoverability to my client's comms systems, the technical manager - a very smart cookie with very wide experience - really cacked himself, and instructed me not to tell the management anything about it. This was during dev/pre-system testing.

He and the prime engineer turned round within a day: they thought it was marvellous that the processes would detect design violations, tell you what and where the problem was and stop dead, and that I'd have them fixed and up and running on to the next one within minutes. When you're dealing with several multi-threaded comms processes, involving different comms protocols, such behaviour was previously unheard of: to them and to me.

We still didn't tell the management that we were using this methodology for a considerable time, however, not until it was working without problems for a couple of weeks. <g>
Apr 12 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:425C57E3.7050409 nospam.org...
But (alas) most programmers are forced to work in environments 
where you debug only enough for the customer to accept your bill. 
They might find the argumentation seen so far in this thread, er, 
opaque.
Indeed. This is one of those where I shall have to think. Instinct tells me strict-CP will win out, but I need to think about it. :)
I, for one, am 100% for strict here. The idea of CP gets diluted if you can have all kinds of methods for deferring shutdown. Or if you somehow can choose when and where it does what. Either compile with all contracts, or compile without. (And while I'm at it, Phobos should either come as source code only, or (hopefully) rather as several binaries precompiled with different switches: contracts, debugging, optimized, for example, automatically chosen by the compiler.)
 He and the prime engineer turned round within a day: they 
 thought it was marvellous that the processes would detect design 
 violations, tell you what and where the problem was and stop dead, 
 and that I'd have them fixed and up and running on to the next one 
 within minutes. When you're dealing with several multi-threaded 
 comms processes, involving different comms protocols, such behaviour 
 was previously unheard of: to them and to me.
The problem with "ordinary" (non-critical) software development is that using CP _looks_ harder and more laborious at the outset. Probably because you then can't "see" the massive amounts of unnecessary work done when not using CP. Heh, I've caught myself more than once taking shortcuts through fields, woods, bushes, or hills, only to find that it took more energy, ruined my clothes -- and worst of all, took more time than the regular road.
Apr 12 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 As I see it, the value in mandatory shutdown with CP is in making it so 
 painfully obvious to all concerned (customer, main contractor, 
 subcontractor, colleagues, etc.) what went wrong, (and whose fault it 
 was!) that this simply forces the bug to be fixed in "no time".
That's a tough sell. I'm not in sales but I imagine they wouldn't like saying "yeah unlike our competitors who recover gracefully from errors - when we have something go wrong we exit the whole app and create this nifty file called 'core' that has all your data. It's good for you, or so say the experts."
 We must also remember that not every program is written for the same kind 
 of environment. Moving gigabucks is where absolute correctness is a must. 
 Another might be hospital equipment. Or space ship software.

 But (alas) most programmers are forced to work in environments where you 
 debug only enough for the customer to accept your bill. They might find 
 the argumentation seen so far in this thread, er, opaque.
Customers don't like programs crashing. In practice many errors - even many asserts - are not fatal and can be recovered from.
 This is made even worse with people getting filthy rich peddling blatantly 
 inferior programs and "operating systems". Programmers, and especially the 
 pointy haired bosses, have a hard time becoming motivated to do things 
 Right.
I assume you are talking about Windows. There are many pressures on software companies. I won't defend Microsoft's decisions, but I doubt that, had they made Windows crash "harder" than it did, they would have been more motivated to fix bugs. They had a different measure of release criteria than we do today.
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Customers don't like programs crashing. In practice many errors - 
 even many asserts - are not fatal and can be recovered from.
Do you mean pure crashes? Or do you mean crashes and/or orderly shutdowns? If the latter, I'd suggest that that might still be due to the fact that they're not used to them.
 This is made even worse with people getting filthy rich peddling 
 blatantly inferior programs and "operating systems". Programmers, 
 and especially the pointy haired bosses, have a hard time 
 becoming motivated to do things Right.
I assume you are talking about Windows. There are many pressures on software companies. I won't defend Microsoft's decisions, but I doubt that, had they made Windows crash "harder" than it did, they would have been more motivated to fix bugs. They had a different measure of release criteria than we do today.
Very true. I too doubt that some sectors of our industry would benefit from IZT. But that's politics and commerce, which is *way* outside my area of expertise. Seriously, though, don't we think it'd be nice if D made, or supported, serious advances in such areas? Kind of like a beam of light ... ;)
Apr 12 2005
next sibling parent reply "Kris" <fu bar.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org>
 Seriously, though, don't we think it'd be nice if D made, or
 supported, serious advances in such areas? Kind of like a beam of
 light ... ;)
It would. But please get in line behind "reduction of maintenance costs". That would also be a beam of light; and one dispensing lots of dollar bills :-) Pet peeves aside: I suspect the issue here is 'Forced Adoption', and not the IZT concept itself (although one might argue they are hand-in-glove?). I'm told you can't force religion upon anyone, anymore. Is that true?
Apr 12 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Kris" <fu bar.com> wrote in message 
news:d3i31n$n1s$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org>
 Seriously, though, don't we think it'd be nice if D made, or
 supported, serious advances in such areas? Kind of like a beam of
 light ... ;)
It would. But please get in line behind "reduction of maintenance costs". That would also be a beam of light; and one dispensing lots of dollar bills :-)
Another worthy cause. No argument from me there.
 Pet peeves aside: I suspect the issue here is 'Forced Adoption', 
 and not the
 IZT concept itself (although one might argue they are 
 hand-in-glove?).
Yes, I very much suspect the same thing.
 I'm told you can't force religion upon anyone, anymore. Is that 
 true?
Perhaps one more adroit in conflict resolution than me can see a way through this. (Of course, even when we've battled ourselves through leagues of wolf-infested forests we will then have to swim through the infinite sea of WhatMess? and then throw ourselves on the impassive walls of Walter's keep of GoodEnough.)

Charon the Invaliant
Apr 12 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:d3i1p2$m2b$1 digitaldaemon.com...
 Customers don't like programs crashing. In practice many errors - even 
 many asserts - are not fatal and can be recovered from.
Do you mean pure crashes? Or do you mean crashes and/or orderly shutdowns? If the latter, I'd suggest that that might still be due to the fact that they're not used to them.
I wasn't making a distinction - though if an orderly shutdown includes simply printing the warning "you might want to save relevant data and quit" then I'm all for that :-)
 This is made even worse with people getting filthy rich peddling 
 blatantly inferior programs and "operating systems". Programmers, and 
 especially the pointy haired bosses, have a hard time becoming motivated 
 to do things Right.
I assume you are talking about Windows. There are many pressures on software companies. I won't defend Microsoft's decisions, but I doubt that, had they made Windows crash "harder" than it did, they would have been more motivated to fix bugs. They had a different measure of release criteria than we do today.
Very true. I too doubt that some sectors of our industry would benefit from IZT. But that's politics and commerce, which is *way* outside my area of expertise. Seriously, though, don't we think it'd be nice if D made, or supported, serious advances in such areas? Kind of like a beam of light ... ;)
yup. I wonder ... if Walter writes a D++ compiler in D would it take as long as writing a D compiler in C++ ;-)
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d3i392$n89$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:d3i1p2$m2b$1 digitaldaemon.com...
 Customers don't like programs crashing. In practice many 
 errors - even many asserts - are not fatal and can be recovered 
 from.
Do you mean pure crashes? Or do you mean crashes and/or orderly shutdowns? If the latter, I'd suggest that that might still be due to the fact that they're not used to them.
I wasn't making a distinction - though if an orderly shutdown includes simply printing the warning "you might want to save relevant data and quit" then I'm all for that :-)
Includes or comprises? Certainly it would include that (for GUI apps). But it would comprise that + shutting down. I'm not 100% sure that's what you meant.
 This is made even worse with people getting filthy rich 
 peddling blatantly inferior programs and "operating systems". 
 Programmers, and especially the pointy haired bosses, have a 
 hard time becoming motivated to do things Right.
I assume you are talking about Windows. There are many pressures on software companies. I won't defend Microsoft's decisions, but I doubt that, had they made Windows crash "harder" than it did, they would have been more motivated to fix bugs. They had a different measure of release criteria than we do today.
Very true. I too doubt that some sectors of our industry would benefit from IZT. But that's politics and commerce, which is *way* outside my area of expertise. Seriously, though, don't we think it'd be nice if D made, or supported, serious advances in such areas? Kind of like a beam of light ... ;)
yup. I wonder ... if Walter writes a D++ compiler in D would it take as long as writing a D compiler in C++ ;-)
Not clear. Are you being cute? If you're pointing out that a D++ compiler written in D would take less time (and presumably be more robust) than in C++, then I'd have to say: (i) that's conjecture, albeit I agree it's probably right; (ii) that's but one small application area. If D is _only_ superior as a development technology to C++ in writing compilers, it's not going to set many new standards.

D's promise is great but its problems are considerable, and it seems like suggestions for improvements - not just talking about my own, mind - are rarely embraced. It's like contentment rules, and yet, despite so much to hope for, there's little to be content about.
Apr 12 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 I wasn't making a distinction - though if an orderly shutdown includes 
 simply printing the warning "you might want to save relevant data and 
 quit" then I'm all for that :-)
Includes or comprises? Certainly it would include that (for GUI apps). But it would comprise that + shutting down. I'm not 100% sure that's what you meant.
Forcing shutdown is annoying. As long as you tell the user what happened and how serious it could be, who are we to quit their app? (though honestly I hope we don't start this whole thread over again) In some sense for the user it's like when Windows tells you at some point you have to reboot after an install (well... not all installs but you know what I mean). Windows doesn't pop up a dialog that says "Reboot?" that just has an OK button. Error recovery should inform the user but let them decide what to do - or worst case let the application developer decide.
 Seriously, though, don't we think it'd be nice if D made, or supported, 
 serious advances in such areas? Kind of like a beam of light ... ;)
yup. I wonder ... if Walter writes a D++ compiler in D would it take as long as writing a D compiler in C++ ;-)
Not clear. Are you being cute?
Tried to be cute (hence the wink). I was poking fun at Walter for taking so long to finish D.
Apr 12 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d3i5dn$oda$1 digitaldaemon.com...
 I wasn't making a distinction - though if an orderly shutdown 
 includes simply printing the warning "you might want to save 
 relevant data and quit" then I'm all for that :-)
Includes or comprises? Certainly it would include that (for GUI apps). But it would comprise that + shutting down. I'm not 100% sure that's what you meant.
Forcing shutdown is annoying. As long as you tell the user what happened and how serious it could be, who are we to quit their app? (though honestly I hope we don't start this whole thread over again)
Agreed.
 In some sense for the user it's like when Windows tells you at 
 some point you have to reboot after an install (well... not all 
 installs but you know what I mean). Windows doesn't pop up a 
 dialog that says "Reboot?" that just has an OK button. Error 
 recovery should inform the user but let them decide what to do - 
 or worst case let the application developer decide.
Actually, some things do. I think Norton Anti Virus just puts an OK button and nothing else. Needless to say I have a program running in the background that detects this and kills the window without rebooting. Damn! Did I just shoot down my own thesis? :-)
Apr 12 2005
parent reply J C Calvarese <jcc7 cox.net> writes:
Matthew wrote:
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
 news:d3i5dn$oda$1 digitaldaemon.com...
 
...
In some sense for the user it's like when Windows tells you at 
some point you have to reboot after an install (well... not all 
installs but you know what I mean). Windows doesn't pop up a 
dialog that says "Reboot?" that just has an OK button. Error 
recovery should inform the user but let them decide what to do - 
or worst case let the application developer decide.
Actually, some things do. I think Norton Anti Virus just puts an OK button and nothing else. Needless to say I have a program running in the background that detects this and kills the window without rebooting. Damn! Did I just shoot down my own thesis? :-)
Actually, I've seen something like this, too (it must have been back when I used Win95 -- yuck!). Since I don't like to follow directions, I'd move the OK box under the taskbar if I wasn't ready to reboot yet.

--
jcc7
http://jcc_7.tripod.com/d/
Apr 17 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"J C Calvarese" <jcc7 cox.net> wrote in message 
news:d3uirs$2ecv$1 digitaldaemon.com...
 Matthew wrote:
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
 news:d3i5dn$oda$1 digitaldaemon.com...
...
In some sense for the user it's like when Windows tells you at 
some point you have to reboot after an install (well... not all 
installs but you know what I mean). Windows doesn't pop up a 
dialog that says "Reboot?" that just has an OK button. Error 
recovery should inform the user but let them decide what to do - 
or worst case let the application developer decide.
Actually, some things do. I think Norton Anti Virus just puts an OK button and nothing else. Needless to say I have a program running in the background that detects this and kills the window without rebooting. Damn! Did I just shoot down my own thesis? :-)
Actually, I've seen something like this, too (it must have been back when I used Win95 -- yuck!). Since I don't like to follow directions, I'd move the OK box under the taskbar if I wasn't ready to reboot yet.
I do that for arbitrary things, but it leaves you open to inadvertently pressing it, since it's still in the top-window Z-order, and if you're a mad Alt-TAB-ber like me you can find that you've pressed the OK button by switching too quickly. That's _very_ annoying. :-)
Apr 17 2005
prev sibling parent Sean Kelly <sean f4.ca> writes:
In article <d3i5dn$oda$1 digitaldaemon.com>, Ben Hinkle says...
Forcing shutdown is annoying. As long as you tell the user what happened and 
how serious it could be, who are we to quit their app? (though honestly I 
hope we don't start this whole thread over again) In some sense for the user 
it's like when Windows tells you at some point you have to reboot after an 
install (well... not all installs but you know what I mean). Windows doesn't 
pop up a dialog that says "Reboot?" that just has an OK button. Error 
recovery should inform the user but let them decide what to do - or worst 
case let the application developer decide.
The only instance where this seems reasonable to me is in kernel code, and in that case I expect the programmer would avoid the use of asserts (in release mode) if he wants to ensure the code never halts. In other cases my personal preference is to terminate the application and use a process monitor to restart it if necessary.

Sean
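P.S. A toy sketch of the monitor loop I mean - "./worker" is a hypothetical executable, and I'm assuming Phobos' std.process.system here:

import std.c.stdio;  // printf
import std.process;  // system(char[] command)

int main() {
    while (true) {
        int rc = system("./worker");  // run the app to completion
        if (rc == 0)
            break;                    // clean exit: stop restarting
        printf("worker exited with status %d; restarting\n", rc);
    }
    return 0;
}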
Apr 13 2005
prev sibling next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 So, I see the taxonomy as being either (now using my/Ben's names):

 Object <= not throwable, btw
    Throwable
        Error <= Unrecoverable exceptions
            ContractViolation
            Assertion
        Exception
            FileNotFoundException
            XMLParseException
        Exhaustion
            MemoryExhaustion
            TSSKeyExhaustion
Why shouldn't Object be throwable? It has useful methods like toString() and print() (I'm starting to think print() should stay). What would Throwable provide that Object doesn't? It would make it harder to throw the wrong thing, I suppose: throw new Studebacker()

OutOfMemory is special because in typical usage if you run out of memory you can't even allocate another exception safely. Memory is the one resource programs have a very very hard time running without. It must be catchable because otherwise there's no way to tell if a large allocation failed or not. Running out of threads is much less catastrophic than running out of memory. Similarly I would treat assertion failures as more catastrophic than other exceptions. But as I indicated in my original post I believe there is no such thing as an unrecoverable (but catchable) error/exception.
 (And the language
 should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
Sarcasm is the lowest form of wit.
Maybe so, but unsubstantiated opinion is worth precisely nothing. It's an inconsiderate waste of other people's time.
 Why do you need to force a program to terminate? If the programmer wants 
 to continue and can do so, they will; if not, they won't. I see no need to 
 enforce it.
Those two sentences alone suggest several flaws in your understanding of the issue.
<sarcasm> Windows used to have this feature where if you tried hard enough the screen would go blue. It was called the "blue screen of death" because some people died of joy when they saw it. Fearing lawsuits, though, Microsoft had to remove the feature, much to the dismay of Windows users worldwide. </sarcasm>
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d3gff9$2gvc$1 digitaldaemon.com...
 So, I see the taxonomy as being either (now using my/Ben's 
 names):

 Object <= not throwable, btw
    Throwable
        Error <= Unrecoverable exceptions
            ContractViolation
            Assertion
        Exception
            FileNotFoundException
            XMLParseException
        Exhaustion
            MemoryExhaustion
            TSSKeyExhaustion
Why shouldn't Object be throwable? It has useful methods like toString() and print() (I'm starting to think print() should stay). What would Throwable provide that Object doesn't? It would make it harder to throw the wrong thing, I suppose: throw new Studebacker()
Exactly that. Is that not adequate motivation?
 OutOfMemory is special because in typical usage if you run out of 
 memory you can't even allocate another exception safely.
Indeed, although that's easily obviated by having the exception ready, on a thread-specific basis. (I'm sure you know this Ben, but it's worth saying for purposes of general edification.)
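A minimal sketch of what I mean - names hypothetical, and a real runtime would keep one such instance per thread rather than a single global:

import std.c.stdlib;  // malloc

class OutOfMemory {
    char[] msg;
    this(char[] m) { msg = m; }
}

OutOfMemory oomReady;  // built up front, while memory is plentiful

static this() {
    oomReady = new OutOfMemory("out of memory");
}

void* allocate(size_t n) {
    void* p = malloc(n);
    if (p is null)
        throw oomReady;  // the failure path itself allocates nothing
    return p;
}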
 Memory is the one resource programs have a very very hard time 
 running without.
"the one"? Surely not. I've already mentioned TSS keys. They're at least as hard to live without. And what about stack? (Assuming you mean heap, as I did.)
 It must be catchable because otherwise there's no way to tell if a 
 large allocation failed or not.
Well, no-one's debating whether or not that, or any other error/exception, is catchable. (It's worrying me that there are now two people talking about catchability, as if the possibility of uncatchability has been raised.)
 Running out of threads is much less catastrophic than running out 
 of memory.
I agree, but who's talked about threads? I mentioned TSS keys, but they're not the same thing at all. Maybe someone else has discussed threads, and I've missed it.
 Similarly I would treat assertion failures as more catastrophic 
 than other exceptions.
Naturally. They're terminal.
 But as I indicated in my original post I believe there is no such 
 thing as an unrecoverable (but catchable) error/exception.
Do you mean you believe there is no such thing in D now? If so, you're quite right.

Do you mean that you believe there will never be such a thing in D? If so, I think you're likely to be proved right.

Do you mean there's no such thing as an irrecoverable exception _anywhere_? If so, you're wrong. I've written, and make good use of, one in C++.

Do you mean you don't believe there is a motivating case for irrecoverability? If so, can you explain why, and address the points I've just made to xs0:

- irrecoverability applies only to contract violations, i.e. code that is detected to have violated its design via runtime constructs inserted by its author(s) for that purpose
- an invalid process cannot, by definition, perform validly. It can only stop, or perform against its design.
- "Crashing Early" in practice results in extremely high quality code, and rapid turnaround of bug diagnosis and fixes
- D cannot support opt-in/library-based irrecoverability; to have it, it must be built in

Specifically, I'm intrigued to hear of a case that shows irrecoverability to be a bad idea (excepting debugging of course, which I've already recognised).
 <sarcasm>
 Windows used to have this feature where if you tried hard enough 
 the screen would go blue. It was called the "blue screen of death" 
 because some people died of joy when they saw it. Fearing 
 lawsuits, though, Microsoft had to remove the feature, much to the 
 dismay of Windows users worldwide.
 </sarcasm>
But I fear this is what people, through ignorance or laziness or whatever, are portraying the irrecoverability / "Crashing Early" debate to be, and it's quite disingenuous. Although there can be no guarantees in what's supportable behaviour after a contract violation occurs, practically speaking there's always scope for creating a log to screen/file, and often for saving current work (in GUI contexts).

Since I've had nothing but marked success using the technique over the last year, and it's lauded by people such as The Pragmatic Programmers, I'd be very interested to hear from anyone who has a negative experience using it. Otherwise, if there's nothing but unsubstantiated opinion on one side, versus positive supportive evidentiary experience on the other, it's kind of pointless trying to have a balanced debate, don't you think?

Matthew
Apr 12 2005
parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:d3ggkh$2i42$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
 news:d3gff9$2gvc$1 digitaldaemon.com...
 So, I see the taxonomy as being either (now using my/Ben's names):

 Object <= not throwable, btw
    Throwable
        Error <= Unrecoverable exceptions
            ContractViolation
            Assertion
        Exception
            FileNotFoundException
            XMLParseException
        Exhaustion
            MemoryExhaustion
            TSSKeyExhaustion
Why shouldn't Object be throwable? It has useful methods like toString() and print() (I'm starting to think print() should stay). What would Throwable provide that Object doesn't? It would make it harder to throw the wrong thing, I suppose: throw new Studebacker()
Exactly that. Is that not adequate motivation?
Not to me - but then I haven't seen much newbie code involving exception handling.
 OutOfMemory is special because in typical usage if you run out of memory 
 you can't even allocate another exception safely.
Indeed, although that's easily obviated by having the exception ready, on a thread-specific basis. (I'm sure you know this Ben, but it's worth saying for purposes of general edification.)
Easily? Preallocating exceptions or other objects in case an OutOfMemory is thrown is an advanced maneuver IMO.
 Memory is the one resource programs have a very very hard time running 
 without.
"the one"? Surely not. I've already mentioned TSS keys. They're at least as hard to live without. And what about stack? (Assuming you mean heap, as I did.)
I view running out of TSS keys as running out of gas in a car - a pain but expected. I view running out of memory as running out of oxygen in the atmosphere - a bigger pain and unexpected.
 It must be catchable because otherwise there's no way to tell if a large 
 allocation failed or not.
Well, no-one's debating whether or not that, or any other error/exception, is catchable. (It's worrying me that there are now two people talking about catchability, as if the possibility of uncatchability has been raised.)
yup - I just wanted to be clear.
 Running out of threads is much less catastrophic than running out of 
 memory.
I agree, but who's talked about threads? I mentioned TSS keys, but they're not the same thing at all. Maybe someone else has discussed threads, and I've missed it.
TSS means "thread-specific storage", correct? I was guessing that was what it meant but maybe I was wrong. Personally
 Similarly I would treat assertion failures as more catastrophic than 
 other exceptions.
Naturally. They're terminal.
 But as I indicated in my original post I believe there is no such thing 
 as an unrecoverable (but catchable) error/exception.
Do you mean you believe there is no such thing in D now? If so, you're quite right.

Do you mean that you believe there will never be such a thing in D? If so, I think you're likely to be proved right.

Do you mean there's no such thing as an irrecoverable exception _anywhere_? If so, you're wrong. I've written, and make good use of, one in C++.

Do you mean you don't believe there is a motivating case for irrecoverability? If so, can you explain why, and address the points I've just made to xs0:

- irrecoverability applies only to contract violations, i.e. code that is detected to have violated its design via runtime constructs inserted by its author(s) for that purpose
- an invalid process cannot, by definition, perform validly. It can only stop, or perform against its design.
- "Crashing Early" in practice results in extremely high quality code, and rapid turnaround of bug diagnosis and fixes
- D cannot support opt-in/library-based irrecoverability; to have it, it must be built in

Specifically, I'm intrigued to hear of a case that shows irrecoverability to be a bad idea (excepting debugging of course, which I've already recognised).
 <sarcasm>
 Windows used to have this feature where if you tried hard enough the 
 screen would go blue. It was called the "blue screen of death" because 
 some people died of joy when they saw it. Fearing lawsuits, though, 
 Microsoft had to remove the feature, much to the dismay of Windows users 
 worldwide.
 </sarcasm>
But I fear this is what people, through ignorance or laziness or whatever, are portraying the irrecoverability / "Crashing Early" debate to be, and it's quite disingenuous. Although there can be no guarantees in what's supportable behaviour after a contract violation occurs, practically speaking there's always scope for creating a log to screen/file, and often for saving current work (in GUI contexts).
Let me give another example besides the BSOD of why "unrecoverable" is application-specific. Let's say I'm writing an application like Photoshop or GIMP (or, say, MATLAB) that has a concept of plug-ins. Now if a plug-in asserts and gets itself into a bad state, the controlling application must be able to catch that and recover. Any decent application would print out some message like "the plugin Foo had an internal error and has been unloaded" and unload the offending plug-in. It would be unacceptable for the language/run-time to force the controlling application to quit because of a faulty plug-in. In the same way, modern OSes don't quit when an application has an internal error.
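Something along these lines - Plugin and unload() are hypothetical; the catch-and-carry-on shape is the point:

import std.c.stdio;  // printf

interface Plugin {
    char[] name();
    void run();
}

void unload(Plugin p) {
    // hypothetical: release the plug-in's resources and drop it from the list
}

void invokePlugin(Plugin p) {
    try {
        p.run();
    } catch (Object o) {  // whatever the plug-in throws, asserts included
        printf("the plugin %.*s had an internal error and has been unloaded\n",
               p.name());
        unload(p);  // the controlling application keeps running
    }
}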
Apr 12 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Why shouldn't Object be throwable? It has useful methods like 
 toString() and print() (I'm starting to think print() should 
 stay). What would Throwable provide that Object doesn't? It would 
 make it harder to throw the wrong thing, I suppose:
  throw new Studebacker()
Exactly that. Is that not adequate motivation?
Not to me - but then I haven't seen much newbie code involving exception handling.
Let's just leave that as a philosophical difference then. Nothing to be gained here by further debate.
 OutOfMemory is special because in typical usage if you run out 
 of memory you can't even allocate another exception safely.
Indeed, although that's easily obviated by having the exception ready, on a thread-specific basis. (I'm sure you know this Ben, but it's worth saying for purposes of general edification.)
Easily? Preallocating exceptions or other objects in case an OutOfMemory is thrown is an advanced maneuver IMO.
Sure, but easy for the designer of a language's core library. (They're gonna have a lot harder tasks than that, methinks.)
 Memory is the one resource programs have a very very hard time 
 running without.
"the one"? Surely not. I've already mentioned TSS keys. They're at least as hard to live without. And what about stack? (Assuming you mean heap, as I did.)
I view running out of TSS keys as running out of gas in a car - a pain but expected. I view running out of memory as running out of oxygen in the atmosphere - a bigger pain and unexpected.
Hmm. Again, we probably need to agree that this is fatuous to debate. (It might be that out-of-mem in C++ never troubles too painfully because by the time the exception's caught - usually in main()/do_main() - many death tractors have been deterministically invoked and memory released; especially if you're using memory parachutes. In Java, I guess it's different, and the same will apply to D, save for a few auto-instances.)
 It must be catchable because otherwise there's no way to tell if 
 a large allocation failed or not.
Well, no-one's debating whether or not that, or any other error/exception, is catchable. (It's worrying me that there are now two people talking about catchability, as if the possibility of uncatchability has been raised.)
yup - I just wanted to be clear.
Ok, we're agreed. All things are catchable. Some few will be rethrown by the language if you don't do it yourself. Hence, they're unquenchable.
 Running out of threads is much less catastrophic than running 
 out of memory.
I agree, but who's talked about threads? I mentioned TSS keys, but they're not the same thing at all. Maybe someone else has discussed threads, and I've missed it.
TSS means "thread-specific storage", correct?
It does. And to use it one needs to allocate slots, the keys for which are well-known values shared between all threads that act as indexes into tables of thread-specific data. One gets at one's TSS data by specifying the key, and the TSS library works out the slot for the calling thread, and gets/sets the value for that slot for you. TSS underpins multi-threaded libraries - e.g. errno / GetLastError() per thread is one of the simpler uses - and running out of TSS keys is a catastrophic event. If you run out before you've built the runtime structures for your C runtime library, there's really nothing you can do, and precious little you can say about it.
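If it helps, here's a toy, single-threaded model of the mechanics - hypothetical names, no real API implied:

const int MAX_KEYS = 16;

int nextKey = 0;

bool tssAlloc(out int key) {  // hand out a fresh well-known key
    if (nextKey == MAX_KEYS)
        return false;         // key exhaustion: the catastrophic case
    key = nextKey++;
    return true;
}

// Imagine one such table per thread; the same key indexes each of them.
void*[MAX_KEYS] slots;

void tssSet(int key, void* value) { slots[key] = value; }
void* tssGet(int key) { return slots[key]; }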
 I was guessing that was what it meant but maybe I was wrong. 
 Personally
Sure, I think you mentioned problems allocating threads, which is a different matter, and I wanted to make the distinction clear; otherwise our conversation might look woolly to someone else.
 But I fear this is what people, through ignorance or laziness or 
 whatever, are portraying the irrecoverability / "Crashing Early" 
 debate to be, and it's quite disingenuous. Although there can be 
 no guarantees in what's supportable behaviour after a contract 
 violation occurs, practically speaking there's always scope for 
 creating a log to screen/file, and often for saving current work 
 (in GUI contexts).
Let me give another example besides BSOD why "unrecoverable" is application-specific. Let's say I'm writing an application like Photoshop or GIMP (or, say, MATLAB) that has a concept of plug-ins. Now if a plug-in asserts and gets itself into a bad state the controlling application must be able to catch that and recover. Any decent application would print out some message like "the plugin Foo had an internal error and has been unloaded" and unload the offending plug-in. It would be unacceptable for the language/run-time to force the controlling application to quit because of a faulty plug-in.
Everyone keeps using vague terms - "a bad state" - which helps their cause, I think. ;)

If the plug-in has violated its contract, then the process within which it resides must shut down. Naturally, an application that has user interaction, and may support user data, should make good efforts to save that user's data, otherwise it'll be pretty unpopular. To not unload raises the questions:

1. Why do you want to use a software component contrary to its design?
2. What do you expect it to do for you in that circumstance?

Instead of thinking of this as an intrusion into one's freedom, why not look at it as what it is intended to be, and what it proves to be in practice: a very sharp tool for cutting out bugs.

Imagine that applications did as I'm suggesting (and as some actually do in reality). In that case, buggy plug-ins would not be tolerated. The bugs would be filtered back to their creators rapidly. And they'd be well armed to fix them because (i) less time would elapse because people wouldn't live with that bug as they might otherwise do, and (ii) the bug would manifest close to its source: rather than waiting for a crash (in which you could lose your data!) the bug would report its context, e.g. "./GIMP/Plugins/Xyz/Abc.d;1332: Contract Violation: MattRenderer contains active overlays without primary images". I know from personal experience that this kind of thing leads to near-instant diagnosis and very rapid correction.
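To make that concrete, the contract that would have fired there might look like this in D - the types are hypothetical:

class Image {}
class Overlay {}

void render(Overlay[] overlays, Image primary)
in {
    // active overlays require a primary image; a violation halts here,
    // with module and line reported, instead of corrupting the render
    assert(overlays.length == 0 || primary !is null);
}
body {
    // ... the actual rendering ...
}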
  In the same way modern OSes don't quit when an application has an 
 internal error.
This is an exceedingly specious analogy, and I'm surprised at you, Ben. You know full well that modern OSs isolate applications from each other's address spaces. This is pure misinformation, and makes it hard to have a serious debate. People reading the thread for whom the subject is new will be unduly influenced by such misrepresentations.
Apr 12 2005
prev sibling parent Sean Kelly <sean f4.ca> writes:
In article <d3g2ho$2477$1 digitaldaemon.com>, Matthew says...
But, as I said in a previous post, I think the jury's still out on 
OutOfMemory. I've a strong suspicion it should fall out as being an 
exception, but I think there's some mileage in discussing whether it 
might have another 'type', e.g. ResourceExhaustion. Reason being 
that, though memory exhaustion is in principle recoverable, in 
practice it is not recoverable (since getting memory to throw/handle 
is tricky, and may require workarounds) and also often so unlikely 
as to make it not worth worrying about.
Tricky perhaps, but that's why this sort of thing is taken care of by a standard library. This should already work in Phobos (there's a static OutOfMemory exception somewhere IIRC), and I was considering creating an OutOfMemoryError per thread in Ares, though that's an issue that likely warrants GC discussion to see if it's necessary.
So, I see the taxonomy as being either (now using my/Ben's names):

Object <= not throwable, btw
This would be nice, but it's quite obviously a language issue, not a runtime issue. So Walter will have to chime in here.
    Throwable
        Error <= Unrecoverable exceptions
            ContractViolation
            Assertion
        Exception
            FileNotFoundException
            XMLParseException
        Exhaustion
            MemoryExhaustion
            TSSKeyExhaustion

or, if we just lump exhaustions in with exceptions

Object <= not throwable, btw
    Throwable
        Error <= Unrecoverable exceptions
            ContractViolation
            Assertion
        Exception
            FileNotFoundException
            XMLParseException
            MemoryExhaustionException
            TSSKeyExhaustionException
I kind of like the first method only because exhaustion errors tend to need special handling and I'm not sure I like that they can be ignored simply by catching an Exception object. Then again, if there's still no memory available when handling an OutOfMemory error then another OutOfMemory error will be thrown, so there's little danger in ending up in an invalid state.
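To illustrate, under the first taxonomy a blanket catch doesn't swallow exhaustion - the classes are sketched locally here with the proposed names:

import std.c.stdio;  // printf

class Exhaustion {
    char[] msg;
    this(char[] m) { msg = m; }
}

class MemoryExhaustion : Exhaustion {
    this() { super("out of memory"); }
}

void f() {
    throw new MemoryExhaustion;
}

void main() {
    try {
        f();
    } catch (Exception e) {   // FileException and friends land here...
        printf("ordinary failure\n");
    } catch (Exhaustion e) {  // ...but exhaustion must be named explicitly
        printf("exhaustion: %.*s\n", e.msg);
    }
}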
First, "Why do you need to force a program to terminate?". This 
one's simple: The program must be forced to terminate because it has 
violated its design. (Let me digress for a moment and confess that 
before I made the leap into CP-grok-ville this never seemed simple, 
or at least never cut and dried.)
I agree. Frankly, preventing application failure on contract violation is as simple as compiling without DBC anyway. I don't like the idea of being able to situationally handle contract violations by catching AssertError.
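With dmd that's already just a compile switch (illustrative file name):

dmd myapp.d            // contracts active: a violation throws AssertError
dmd -release myapp.d   // asserts, in/out contracts and invariants compiled out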
Notwithstanding all the foregoing, there's a much more fundamental, 
albeit little recognised, issue. Computers make a strict 
interpretation of what we, the programmers, instruct them to do. Now 
a contract violation, as I've said, is pure and simple a statement 
that your instructions to the computer (or to the compiler, if you 
will) are fundamentally flawed. Since computers do not have 
redundancy, intuition, instincts, sixth-sense, a friend to call, or 
any other "higher order" functioning, there are exactly two things a 
process can do in that circumstance. It can operate in a 
fundamentally flawed manner, or it can stop. THERE ARE NO OTHER 
OPTIONS.
It would be an interesting feature of a programming language to have a support staff that it consults in the event of unexpected errors. Perhaps the next iteration of Basic could have this feature? (sorry, the mental image was just too entertaining to pass up)

Sean
Apr 12 2005
prev sibling parent reply J C Calvarese <jcc7 cox.net> writes:
Matthew wrote:
OutOfMemory is practically unrecoverable, but should not be 
classed
as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
Conversely, AssertionFailure is practically recoverable, but most
certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
Because there has to be a common type that's catchable, and I don't think Object is the appropriate choice.
(And the language
should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
IMO mandating this, that, and the other thing isn't necessarily the D-style. If someone for whatever crazy (and/or genius) reason wants to try to recover from what would usually be considered an irrecoverable error, why does the compiler need to stop them?

Here's a possibly insane example. I'm trying to imagine a situation where it might be useful to try to make an irrecoverable error recoverable. Let's say you're using someone else's library (no source provided except headers). The library's pretty useful in some respects, but it has a stupid assert that doesn't make any sense and halts the program. Well, if the compiler won't let you try to recover after the assert then you might have to give up (it was a long shot anyways). Sure, it's not "proper programming practice" to subvert the library author's asserts, but sometimes "it kind of works" is all you need. D should be the kind of language that allows creativity.

--
jcc7
http://jcc_7.tripod.com/d/
Apr 12 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"J C Calvarese" <jcc7 cox.net> wrote in message 
news:d3fspo$1u0j$1 digitaldaemon.com...
 Matthew wrote:
OutOfMemory is practically unrecoverable, but should not be 
classed
as an unrecoverable exception.
Agreed, in part, why "class" it as anything but what it is?
Conversely, AssertionFailure is practically recoverable, but 
most
certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
Because there has to be a common type that's catchable, and I don't think Object is the appropriate choice.
(And the language
should mandate and enforce the irrecoverability.)
Disagree.
For reasons so blindingly obvious/insightful that you needn't specify them, I guess.
IMO mandating this, that, and the other thing isn't necessarily the D-style. If someone for whatever crazy (and/or genius) reason wants to try to recover from what would usually be considered an irrecoverable error, why does the compiler need to stop them?
There are two parts to this:

1. Implicit in your argument is that the "programmer knows best in *all* possible circumstances". I can tell you that, as someone who's worked for over 10 years, written 1.5 books, and worked on some very successful high risk projects, the one thing I'm sure of is that I do _not_ know everything, and I do not always know best. I learn something new from more experienced people at least once a month. I understand that D has come from the C school where freedom of expression (both micro and macro) is valued highly, and is also attracting Linux types by whom freedom in all its facets is prized most high. In fact I share a lot of that instinct and sentiment. But the fact of the matter is that there are very good reasons - espoused by people loftier than I - for irrecoverability, and no reasons for not having it save for some ephemeral notion of "freedom".

2. As I've demonstrated with the STLSoft library's unrecoverable class, implementing irrecoverability can be done easily and effectively in C++ in the form of a library which one may opt to use or not use at one's own discretion. If we could do the same in D, there'd be some reason for not having it built into the language. However, the reason it can work in C++ is that one can (and should) throw by value - and catch by reference, of course - and the death tractors of exception objects are invoked in a deterministic manner. In D, by contrast, exceptions are thrown by reference, and their destructors are useless. There is, therefore, no mechanism for a library solution to irrecoverability in D. Therefore, we need a language-based solution.
 Here's a possibly insane example. I'm trying to imagine a 
 situation where it might be useful to try to make an irrecoverable 
 error recoverable. Let's say you're using someone else's library 
 (no source provided except headers). The library's pretty useful 
 in some respects, but it has a stupid assert that doesn't make 
 any sense and halts the program. Well, if the compiler won't let 
 you try to recover after the assert then you might have to give up 
 (it was a long shot anyways). Sure, it's not "proper programming 
 practice" to subvert the library author's asserts, but sometimes 
 "it kind of works" is all you need. D should be the kind of 
 language that allows creativity.
It's wrongheaded, I'm afraid. If you fire an assert in a library then you're using it counter to its design. If (the expression of) its design is wrong, i.e. the assert shouldn't be there, then you've got even bigger problems, and any use of it you make which is counter to its design cannot be characterised as anything else. Two wrongs cannot make a right.

AFAIK - and I've been discussing this subject with friends and colleagues in various settings for a long while - there's only one scenario where recoverability (for CP violations) makes sense: debuggers. I absolutely want D to support debuggers correctly, but I don't accept that the real and significant advantages of irrecoverability have to be sacrificed for that.

Nonetheless, I do have serious doubts that irrecoverability will be incorporated into D, since Walter tends to favour "good enough" solutions rather than aiming for the strict/theoretical best, and because the principles and advantages of irrecoverability are not yet sufficiently mainstream. It's a pity though, because it'd really lift D's head above its peers. (And without it, there'll be another area in which C++ will continue to be able to claim supremacy, because D cannot support it in library form.)

Cheers
Matthew
Apr 12 2005
next sibling parent Sean Kelly <sean f4.ca> writes:
In article <d3g49q$261c$1 digitaldaemon.com>, Matthew says...
Nonetheless, I do have serious doubts that irrecoverability will be 
incorporated into D, since Walter tends to favour "good enough" 
solutions rather than aiming for strict/theoretical best, and 
because the principles and advantages of irrecoverability are not 
yet sufficiently mainstream.
It may be possible to do this in the standard library, though the lack of deterministic destruction makes things a bit more difficult. It's an idea worth considering even without Walter's approval--I know some applications where I'd like this behavior.

Sean
Apr 12 2005
prev sibling parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
 Nonetheless, I do have serious doubts that irrecoverability will be 
 incorporated into D, since Walter tends to favour "good enough" solutions 
 rather than aiming for strict/theoretical best, and because the principles 
 and advantages of irrecoverability are not yet sufficiently mainstream. 
 It's a pity though, because it'd really lift D's head above its peers. 
 (And without it, there'll be another area in which C++ will continue to be 
 able to claim supremacy, because D cannot support it in library form.)
I think Walter has made the right choices - except the hierarchy has gotten out of whack. Robust, fault-tolerant software is easier to write with D than C++.
Apr 12 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Matthew: Nonetheless, I do have serious doubts that 
 irrecoverability will be incorporated into D, since Walter tends 
 to favour "good enough" solutions rather than aiming for 
 strict/theoretical best, and because the principles and 
 advantages of irrecoverability are not yet sufficiently 
 mainstream. It's a pity though, because it'd really lift D's head 
 above its peers. (And without it, there'll be another area in 
 which C++ will continue to be able to claim supremacy, because D 
 cannot support it in library form.)
Ben: I think Walter has made the right choices - except the hierarchy has gotten out of whack. Robust, fault-tolerant software is easier to write with D than C++.
Bold statement. Can you back it up? I'm not just being a prick, I am genuinely interested in why people vaunt this sentiment so readily.

In my personal experience I encounter bugs in the code *far* more in D than I do in C++. Now of course that's at least in part because I've done a lot of C++ over the last 10+ years, but that being the case does not, in and of itself, act as a supportive argument for your proposition. (FYI: I also encounter fewer bugs in Python or in Ruby, and I'm less experienced in those languages than I am in D.)

Take a couple of cases:

1. D doesn't have pointers. Sounds great. Except that one can get null-pointer violations when attempting value comparisons. I've had those a fair amount in D. Not had such a thing in C++ in as long as I can remember. C++ cleanly delineates between references (as aliases for instances) and pointers. When I type x == y in C++ I _know_ - Machiavelli wielding int &x=*(int*)0; aside - that I'm not going to have an access violation. I do _not_ know that in D.

2. Take the current debate about irrecoverability. As I've said, I've been using irrecoverable CP in the real world in a pretty high-stakes project - lots of AU$ to be lost! - over the last year, and its effect has been to save time and increase robustness, to a surprising (including to me) degree: system testing/production had only two bugs. One was diagnosed within minutes of a halted process with "file(line): VIOLATION: <details here>", and was fixed and running in less than two hours. The other took about a week of incredibly hard debugging, walk throughs, arguments, and rants and raves, because, ironically, I'd _not_ added some contract enforcements I'd deemed never-going-to-happen!!

So, unless and until I hear from people with _practical experience_ of these techniques that they've had bad experiences - and the only things I read about from people such as the Pragmatic Programmers is in line with my experience - I cannot be anything but convinced of their power to increase robustness and aid development and testing effectiveness and efficiency.

Now C++ has deterministic destruction, which means I was easily able to create an unrecoverable exception type - found in <stlsoft/unrecoverable.hpp> for anyone interested - to get the behaviour I need. D does not support deterministic destruction of thrown exceptions, so it is not possible to provide irrecoverability in D.

Score 0 for 2. And so it might go on.

Naturally, this is my perspective, and I don't seek to imply that my perspective is any more absolute than anyone else's. But that being the case, such blanket statements about D's superiority, when used as a palliative in debates concerning improvements to D, are not worth very much.

I'm keen to hear from others from all backgrounds, including C++, their take on Ben's statement (with accompanying rationale, of course).

Cheers
Matthew
Apr 12 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:d3hlb1$dqa$1 digitaldaemon.com...
 Matthew: Nonetheless, I do have serious doubts that irrecoverability 
 will be incorporated into D, since Walter tends to favour "good enough" 
 solutions rather than aiming for strict/theoretical best, and because 
 the principles and advantages of irrecoverability are not yet 
 sufficiently mainstream. It's a pity though, because it'd really lift 
 D's head above its peers. (And without it, there'll be another area in 
 which C++ will continue to be able to claim supremacy, because D cannot 
 support it in library form.)
Ben: I think Walter has made the right choices - except the hierarchy has gotten out of whack. Robust, fault-tolerant software is easier to write with D than C++.
Bold statement. Can you back it up? I'm not just being a prick, I am genuinely interested in why people vaunt this sentiment so readily. In my personal experience I encounter bugs in the code *far* more in D than I do in C++. Now of course that's at least in part because I've done a lot of C++ over the last 10+ years, but that being the case does not, in and of itself, act as a supportive argument for your proposition. (FYI: I also encounter fewer bugs in Python or in Ruby, and I'm less experienced in those languages than I am in D.)
ok.
 Take a couple of cases:

 1. D doesn't have pointers. Sounds great. Except that one can get 
 null-pointer violations when attempting value comparisons. I've had those 
 a fair amount in D. Not had such a thing in C++ in as long as I can 
 remember. C++ cleanly delineates between references (as aliases for 
 instances) and pointers. When I type x == y in C++ I _know_ - Machiavelli 
 wielding int &x=*(int*)0; aside - that I'm not going to have an access 
 violation. I do _not_ know that in D.
D does have pointers. But if you want to ignore that wrinkle, most pointer errors are due to dangling pointers (and those are squashed by garbage collection). A null-pointer violation is the easiest pointer error to debug, IMO. In terms of ==, you are comparing apples and oranges, since you well know that using == on object references in D is very different from pointer ==. I've chased plenty of dangling pointers, and one of the joys of using Java (and, I hope, D) is not having to worry about that anymore.
 2. Take the current debate about irrecoverability. As I've said I've been 
 using irrecoverable CP in the real world in a pretty high-stakes project - 
 lots of AU$ to be lost! - over the last year, and its effect has been to 
 save time and increase robustness, to a surprising (including to me) 
 degree: system testing/production had only two bugs. One was diagnosed 
 within minutes of a halted process with "file(line): VIOLATION: <details 
 here>", and was fixed and running in less than two hours. The other took 
 about a week of incredibly hard debugging, walk throughs, arguments, and 
 rants and raves, because, ironically, I'd _not_ added some contract 
 enforcements I'd deemed never-going-to-happen!!
I don't see what a missing assert has to do with recoverable or irrecoverable. Or did you have an assert that was swallowed? I can't tell.
 So, unless and until I hear from people with _practical experience_ of 
 these techniques that they've had bad experiences - and the only things I 
 read about from people such as the Pragmatic Programmers is in line with 
 my experience - I cannot be anything but convinced of their power to 
 increase robustness and aid development and testing effectiveness and 
 efficiency.
Hard failures during debugging are fine. My own experience in code robustness comes from working with engineering companies (who use MATLAB to generate code for cars and planes), where an unrecoverable error means your car shuts down when you are doing 65 on the highway. Or imagine if that happens with an airplane. That is not acceptable. They have drilled over and over into our heads that a catastrophic error means people die. I don't mean to be overly dramatic, but it is a fact.

With the current D AssertError subclassing Error, I agree it is too easy to catch assertion failures. That is why in my proposal AssertionFailure subclasses Object directly. When a newbie is tempted to be lazy and swallow errors without thinking, they will most likely swallow Exception. Only the truly unfortunate will think "oh - I can catch Object and swallow OutOfMemory and AssertionFailure, too!" My experience with Java is that newbies catch Exception and not Throwable or Error.
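To make the mechanics concrete - a minimal sketch against the proposed hierarchy, where AssertionFailure subclasses Object directly (the class body and the risky() helper are illustrative, not existing phobos code):

class AssertionFailure          // subclasses Object directly, per the proposal
{
    char[] msg;
    this(char[] msg) { this.msg = msg; }
}

void risky()
{
    throw new AssertionFailure("invariant violated");
}

void main()
{
    try
    {
        risky();
    }
    catch (Exception e)
    {
        // never reached: AssertionFailure is not an Exception,
        // so the lazy catch-all cannot swallow it
    }
    // the failure propagates out of main() and terminates the program
}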
 Now C++ has deterministic destruction, which means I was easily able to 
 create an unrecoverable exception type - found in 
 <stlsoft/unrecoverable.hpp> for anyone interested - to get the behaviour I 
 need. D does not support deterministic destruction of thrown exceptions, 
 so it is not possible to provide irrecoverability in D.

 Score 0 for 2. And so it might go on.
I'd score 2 for 0.
 Naturally, this is my perspective, and I don't seek to imply that my 
 perspective is any more absolute than anyone else's. But that being the 
 case, such blanket statements about D's superiority, when used as a 
 palliative in debates concerning improvements to D, are not worth very 
 much.

 I'm keen to hear from others from all backgrounds, including C++, their 
 take on Ben's statement (with accompanying rationale, of course).

 Cheers

 Matthew

 
Apr 12 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Take a couple of cases:

 1. D doesn't have pointers. Sounds great. Except that one can get 
 null-pointer violations when attempting value comparisons. I've 
 had those a fair amount in D. Not had such a thing in C++ in as 
 long as I can remember. C++ cleanly delineates between references 
 (as aliases for instances) and pointers. When I type x == y in 
 C++ I _know_ - Machiavelli wielding int &x=*(int*)0; aside - that 
 I'm not going to have an access violation. I do _not_ know that 
 in D.
D does have pointers. But if you want to ignore that wrinkle most pointer errors are due to dangling pointers (and those are squashed by garbage collection).
Indeed. Don't know why I said it like that.
A null pointer violation is the easiest pointer error to debug IMO. 
In terms of == you are comparing apples and oranges since you well 
know using == on object references in D is very different than 
pointer ==.
I'm talking about the value comparison of references. C++ has references as aliases, which cannot be NULL unless someone's done something deliberately wrong. D has faux references, which are really just pointers with a different syntax, and which can be null. That's the point I was making.
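To make the hazard concrete, a minimal sketch (the Point class is illustrative): == on class references is rewritten as an opEquals call on the left-hand operand, so a null left-hand side gets dereferenced, whereas 'is' compares the references themselves.

class Point { int x, y; }

void main()
{
    Point a;                // a D reference: starts out null
    Point b = new Point;

    if (a is b) {}          // identity comparison: safe even though a is null

    if (a == b) {}          // value comparison: becomes a.opEquals(b),
                            // which dereferences the null a -- access
                            // violation at runtime
}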
I've chased plenty of dangling pointers and one of the joys of 
using Java (and I hope D) is not having to worry about that 
anymore.

 2. Take the current debate about irrecoverability. As I've said 
 I've been using irrecoverable CP in the real world in a pretty 
 high-stakes project - lots of AU$ to be lost! - over the last 
 year, and its effect has been to save time and increase 
 robustness, to a surprising (including to me) degree: system 
 testing/production had only two bugs. One was diagnosed within 
 minutes of a halted process with "file(line): VIOLATION: <details 
 here>", and was fixed and running in less than two hours. The 
 other took about a week of incredibly hard debugging, walk 
 throughs, arguments, and rants and raves, because, ironically, 
 I'd _not_ added some contract enforcements I'd deemed 
 never-going-to-happen!!
I don't see what a missing assert has to do with recoverable or irrecoverable. Or did you have an assert that was swallowed? I can't tell.
I was saying the presence of a contract violation assertion detected a design error, and facilitated a very rapid fix. And the absence of one resulted in a *lot* of effort that'd've been spared had I not been so stupid as to think they'd never happen.
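In D, the built-in contract support expresses exactly this kind of check. A minimal sketch (contracts enabled, i.e. not compiled with -release; the unitPrice function is illustrative):

double unitPrice(double total, int quantity)
in
{
    assert(quantity > 0);   // the "never going to happen" case
}
body
{
    return total / quantity;
}

void main()
{
    unitPrice(99.0, 0);     // halts with a file/line AssertError,
                            // rather than quietly returning infinity
}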
 So, unless and until I hear from people with _practical 
 experience_ of these techniques that they've had bad 
 experiences - and the only things I read about from people such 
 as the Pragmatic Programmers is in line with my experience - I 
 cannot be anything but convinced of their power to increase 
 robustness and aid development and testing effectiveness and 
 efficiency.
Hard failures during debugging are fine. My own experience in code robustness comes from working with engineering companies (who use MATLAB to generate code for cars and planes), where an unrecoverable error means your car shuts down when you are doing 65 on the highway. Or imagine if that happens with an airplane. That is not acceptable. They have drilled over and over into our heads that a catastrophic error means people die. I don't mean to be overly dramatic, but it is a fact.
They sound powerfully persuasive on first reading. But it's still wrong, I'm afraid. What would you expect of your car/plane when it's operating outside its design? That is truly frightening!

I think the real issue is that the examples you've described are not pure software engineering challenges. Frankly, I don't want to drive a car, or be in a plane, where one computer is in total control and is going to be allowed to carry on when it's operating outside its design. From what I know of the space shuttle, it has three identical systems, and a supervisory controller that ignores an errant member of the triumvirate. Of course, that gets us to who monitors the controller. I don't know anything about that, but I would hope it's something that can effect an NMI and reboot itself within a few ms.

All such things are a risk balance, naturally, and it may be that the risk estimate in such circumstances is that operating out of bounds is better than rebooting. In which case, build with "-no-cp-violations". Just don't crack on that these special circumstances are somehow exempt from the possibility that continuing is worse than stopping.

What's wrong with having a car reboot at 65 on the highway? Why does a reboot on such an embedded system have to take more than a ms or two? Why cannot such an embedded system start up seamlessly within a moving vehicle, and take over from where its previous incarnation had correctly steered it until its apoptosis? That'd be the car I'd trust.
 With the current D AssertError subclassing Error I agree it is too 
 easy to catch assertion failures. That is why in my proposal 
 AssertionFailure subclasses Object directly. When a newbie is 
 tempted to be lazy and swallow errors without thinking they will 
 most likely swallow Exception. Only the truly unfortunate will 
 think "oh - I can catch Object and swallow OutOfMemory and 
 AssertionFailure, too!" My experience with Java is that newbies 
 catch Exception and not Throwable or Error.
Why not just make it impossible?
 Now C++ has deterministic destruction, which means I was easily 
 able to create an unrecoverable exception type - found in 
 <stlsoft/unrecoverable.hpp> for anyone interested - to get the 
 behaviour I need. D does not support deterministic destruction of 
 thrown exceptions, so it is not possible to provide 
 irrecoverability in D.

 Score 0 for 2. And so it might go on.
I'd score 2 for 0.
You mean 2 for 2, I think.

The point I was trying to raise is that you and Walter and others trot out these blanket statements that D is better for writing robust software, and I want to know why that should be so. Is it just because of GC? I mean, a great many people have proposed a great many changes with the intent of improving D's ability to write robust software, and yet they've fallen on fallow ground. Is D's 'ethos' of good enough applying here, i.e. is having a GC that makes the dangling-pointer problem irrelevant such a big gain that we needn't care about anything else?

I'm not (just) being sarcastic, I really want to know!

Cheers
Apr 12 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:d3hnug$flk$1 digitaldaemon.com...
 Hard failures during debugging is fine. My own experience in code
robustness
 comes from working with engineering companies (who use MATLAB to generate
 code for cars and planes) where an unrecoverable error means your car
shuts
 down when you are doing 65 on the highway. Or imagine if that happens with
 an airplane. That is not acceptable. They have drilled over and over into
 our heads that a catastrophic error means people die. I don't mean to be
 overly dramatic but it is a fact.
The reason airliners are safe is that they are designed to be tolerant of any single failure. Computer systems are very unreliable, and the first thing the designer thinks of is "assume the computer system goes berserk and does the worst thing possible; how do I design the system to prevent that from bringing down the airliner?"

Having worked on airliner design, I know how computer-controlled subsystems handle self-detected faults. They do it by shutting themselves down and switching to the backup system. They don't try to soldier on. To do so would be, by definition, to operate in an undefined, untested, and unknown configuration. I wouldn't want to bet my life on that. Even if the software were perfect, which it never is, the chips themselves are both prone to random failure and uninspectable.

Therefore, in my opinion from having worked on systems that must be safe, a system that cannot stand a catastrophic failure of a computer system is an inherently unsafe design to begin with. Making the computer more reliable does not solve the problem.

What CP provides is another layer of security offering the capability of a program to self-detect a fault. The only reasonable thing it can do then is shut itself down and engage the backup.

But if you're writing, say, a word processor, one might decide to attempt to save the user's data upon a CP violation and hope for the best. In a word processor, safety and security aren't the top priority.

There isn't one size that fits all applications; the engineer writing the program will have to decide. Therefore, having a class of errors that is not catchable at all would be a mistake.
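In code, that word-processor policy might look like this - a sketch only, with hypothetical saveRecoveryFile/runEditor helpers standing in for the real work:

void saveRecoveryFile() { /* dump the document buffer to disk */ }
void runEditor()        { /* the main loop; may throw anything */ }

int main()
{
    try
    {
        runEditor();
    }
    catch (Object o)          // in D, every thrown class derives from Object
    {
        saveRecoveryFile();   // best-effort rescue of the user's data...
        return 1;             // ...but still terminate: no soldiering on
    }
    return 0;
}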
Apr 15 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 What CP provides is another layer of security offering the 
 capability of a
 program to self-detect a fault. The only reasonable thing it can 
 do then is
 shut itself down and engage the backup.

 But if you're writing, say, a word processor, one might decide to 
 attempt to
 save the user's data upon a CP violation and hope for the best. In 
 a word
 processor, safety and security aren't the top priority.

 There isn't one size that fits all applications, the engineer 
 writing the
 program will have to decide. Therefore, having a class of errors 
 that is not
 catchable at all would be a mistake.
Do you mean Catchable, or Quenchable? They have quite different implications. AFAIK, only Sean has mentioned anything even uncatchable-like. What I've been proposing is that CP violations should be unquenchable. This in no way prevents the word processor doing its best to save your work.
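The distinction in code - a sketch with illustrative names: unquenchable means any handler that catches the violation must pass it on, not that no handler may see it.

void doWork()           { throw new Exception("CP violation stand-in"); }
void releaseResources() { /* close files, flush logs, ... */ }

void process()
{
    try
    {
        doWork();
    }
    catch (Object o)
    {
        releaseResources();  // catching for cleanup: fine
        throw o;             // quenching: not fine -- always re-throw,
                             // so the error still reaches main()
    }
}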
Apr 15 2005
parent reply Sean Kelly <sean f4.ca> writes:
In article <d3pegp$1be7$1 digitaldaemon.com>, Matthew says...
Do you mean Catchable, or Quenchable? They have quite different 
implications. AFAIK, only Sean has mentioned anything even 
uncatchable-like.
And my suggestion was only that Errors should be unrecoverable (unquenchable by your terminology?)--they can be re-thrown six ways to Sunday.
What I've been proposing is that CP violations 
should be unquenchable. This in no way prevents the word processor 
doing its best to save your work.
I think Walter meant quenchable. That assertion failures currently throw an exception rather than halting the app immediately tells me that Walter isn't against tidying the cabin before going down with the ship.

Sean
Apr 15 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Sean Kelly" <sean f4.ca> wrote in message 
news:d3pgmo$1ctk$1 digitaldaemon.com...
 In article <d3pegp$1be7$1 digitaldaemon.com>, Matthew says...
Do you mean Catchable, or Quenchable? They have very quite 
different
implications. AFAIK, only Sean has mentioned anything even
uncatchable-like.
And my suggestion was only that Errors should be unrecoverable (unquenchable by your terminology?)--they can be re-thrown six ways to Sunday.
I wasn't trying to say that you'd said that things _should_ be uncatchable, merely that you'd entertained a solution whereby irrecoverability could be achieved within the current definition of the language, by 'throwing' auto classes.
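For reference, here is that idea transliterated into D syntax - and, as noted earlier in the thread, it is exactly what D cannot support, because the destructor of a thrown-and-caught object does not run deterministically when a handler lets it die:

import std.c.stdlib;   // abort()

class Unrecoverable : Exception
{
    bool rethrown;
    this(char[] msg) { super(msg); }
    ~this()
    {
        // In C++ this destructor would run as soon as a handler swallowed
        // the exception, turning a quenched violation into a halt. In D it
        // runs whenever the GC gets around to it, so the scheme fails.
        if (!rethrown)
            abort();
    }
}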
What I've been proposing is that CP violations
should be unquenchable. This in no way prevents the word processor
doing its best to save your work.
I think Walter meant quenchable.
Then if he did, he would need to supply a motivating example for "having a class of errors that is not catchable at all would be a mistake", because the word processor does not represent such. It can (do its best to) save the data without violating irrecoverability.

Anyway, this has been debated ad nauseam, and I'm quite content that people should have the options to make weighted decisions of risk/severity. My only contention is that the Principle of Irrecoverability has no exceptions, and its consequences should be weighed when writing software.

Further, I am now almost completely convinced, for practical reasons, by Georg's proposition that D libraries (incl. runtime libs) should be in either CP or no-CP form, and that one should build an app one way or the other. I humbly suggest that the only remaining point worthy of debate on the issue is what happens in a process consisting of dynamic link-units when some are built with CP and some without.
Apr 15 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d3po2o$1iek$2 digitaldaemon.com...
 Then if he did, he would need to supply a motivating example for
 "having a class of errors that is not catchable at all would be a
 mistake", because the word processor does not represent such. It can
 (do its best to) save the data without violating irrecoverability.
Here's one. One garbage collection strategy is to set pages in the gc heap as 'read only'. Then set up an exception handler to catch the GP faults from writes to those pages. Internally mark those pages as 'modified', and then turn off the read-only protection on the page. Restart the instruction that caused the GP fault.

There are other things you can do by manipulating the page protections and intercepting the generated faults. The operating system, for example, catches stack overflow faults and extends the stack.
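Concretely, the scheme looks something like this - a hedged sketch using assumed C bindings and POSIX constant values; production code would use sigaction and real error handling, and the restart-on-return behaviour is platform-dependent:

extern (C)
{
    // minimal assumed bindings; the real declarations live in the C headers
    alias void function(int) sighandler_t;
    sighandler_t signal(int signum, sighandler_t handler);
    int   mprotect(void* addr, size_t len, int prot);
    void* valloc(size_t size);
}

const int  PROT_READ  = 1;     // POSIX values; platform-dependent
const int  PROT_WRITE = 2;
const int  SIGSEGV    = 11;
const uint PAGE       = 4096;

void* page;    // stands in for one page of the gc heap
bool  dirty;   // the collector's modified-page record

extern (C) void onFault(int sig)
{
    dirty = true;                                  // note the write
    mprotect(page, PAGE, PROT_READ | PROT_WRITE);  // lift the protection
    // returning restarts the faulting store, which now succeeds
}

void main()
{
    page = valloc(PAGE);
    signal(SIGSEGV, &onFault);
    mprotect(page, PAGE, PROT_READ);    // arm the write barrier

    (cast(int*) page)[0] = 42;          // faults once, is restarted, succeeds
}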
Apr 15 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3pt4i$1lb6$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3po2o$1iek$2 digitaldaemon.com...
 Then if he did, he would need to supply a motivating example for
 "having a class of errors that is not catchable at all would be a
 mistake", because the word processor does not represent such. It 
 can
 (do its best to) save the data without violating 
 irrecoverability.
Here's one. One garbage collection strategy is to set pages in the gc heap as 'read only'. Then set up an exception handler to catch the GP faults from writes to those pages. Internally mark those pages as 'modified' and then turn off the read only protection on the page. Restart the instruction that caused the GP fault.
Sorry to be so brusque, but that's pure piffle. In such a case, the programmer who is designing their software is designating the (processing of the) hardware exception as a part of the normal processing of their application. It in no way represents a violation of the design, since it's part of the design. As I've said innumerable times throughout this thread, what does and does not contradict the design of a component is entirely within the purview of the author of that component.

By the way, stack expansion is carried out by exactly the process you describe on many operating systems, as I mention in Chapter 32 of Imperfect C++. Given your implication, every program run on, say, Win32 is violating its design. (Here's where everyone can plug in their winks ;)
 There are other things you can do by manipulating the page 
 protections and
 intercepting the generated faults. The operating system, for 
 example,
 catches stack overflow faults and extends the stack.
Bugger, I just said that above. Nice to see you contradict your own point within proximity of the next paragraph, anyway. <g>
Apr 15 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d3pu39$1lvp$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message
 news:d3pt4i$1lb6$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3po2o$1iek$2 digitaldaemon.com...
 Then if he did, he would need to supply a motivating example for
 "having a class of errors that is not catchable at all would be a
 mistake", because the word processor does not represent such. It
 can
 (do its best to) save the data without violating
 irrecoverability.
Here's one. One garbage collection strategy is to set pages in the gc heap as 'read only'. Then set up an exception handler to catch the GP faults from writes to those pages. Internally mark those pages as 'modified' and then turn off the read only protection on the page. Restart the instruction that caused the GP fault.
Sorry to be so brusque, but that's pure piffle. In such a case, the programmer who is designing their software is designating the (processing of the) hardware exception as a part of the normal processing of their application. It in no way represents a violation of the design, since it's part of the design.
I thought we were talking about the D programming language specifying a class as being not catchable. The example I gave is about a program design where the programmer needs to catch such exceptions. D, being a system programming language, has to allow any exception to be caught at the programmer's discretion. I obviously have no idea what you're talking about when you asked for an example.
Apr 15 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d3q0ll$1nlj$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3pu39$1lvp$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message
 news:d3pt4i$1lb6$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3po2o$1iek$2 digitaldaemon.com...
 Then if he did, he would need to supply a motivating example
 for
 "having a class of errors that is not catchable at all would
 be a
 mistake", because the word processor does not represent such.
 It
 can
 (do its best to) save the data without violating
 irrecoverability.
Here's one. One garbage collection strategy is to set pages in the gc heap as 'read only'. Then set up an exception handler to catch the GP faults from writes to those pages. Internally mark those pages as 'modified' and then turn off the read only protection on the page. Restart the instruction that caused the GP fault.
Sorry to be so brusque, but that's pure piffle. In such a case, the programmer who is designing their software is designating the (processing of the) hardware exception as a part of the normal processing of their application. It in no way represents a violation of the design, since it's part of the design.
I thought we were talking about the D programming language specifying a class as being not catchable. The example I gave is about a program design where the programmer needs to catch such exceptions. D, being a system programming language, has to allow any exception to be caught at the programmer's discretion. I obviously have no idea what you're talking about when you asked for an example.
Well, the heat of the recent debate has been that some people have argued that there are exceptions to the Principle of Irrecoverability. I've conceded - and, of course, make use of every day - that there are _practical_ exceptions to the rule, but they are based on the knowledge that one operates a piece of software, in such circumstances, _outside_ the parameters of its design. Some have argued that the Principle of Irrecoverability does not hold in all cases, which has been the primary source of the heat.

[FWIW, I've been busy finalising the draft of "The Nuclear Reactor and the Deep Space Probe", which I started in October, over the last few days. It's had a number of improvements as a result of the scorching analysis of my ideas, and has some new sections, including "The Fallacy of the Recoverable Precondition", which I believe proves that point completely. I shall have it to Bjorn in a couple of days, and then hopefully it'll be out on TCS within about a week.]

The practical side of the debate has been that some people have argued that D does not need to support irrecoverability. Being dogmatic about the _principle_ - in secret I'm eminently reasonable about the practice - I've argued that D should make a strict interpretation of the principle, such that a new 'exception' type, ContractViolation (or whatever), would indeed be irrecoverable (although it would be infinitely catchable (and rethrown, explicitly or implicitly) all the way up to main()). One of the reasons for the strictness of my position has been that there is no way to supply irrecoverability in D - other than to just call some exitWithReason() function at the point of violation.

Over the last few days, however, I've moved in behind Georg's idea that D should support precisely two modes of compilation/execution: with-CP and without-CP. This seems the only practicable measure that will satisfy all parties. The only area of doubt as to the viability of this solution is what would happen with heterogeneous mixes of dynamically merged link units. That is, what happens if a CP process loads a (violating) non-CP plug-in? It just crashes without invoking the graceful shutdown / logging stuff, I guess. Conversely, if a non-CP process loads a (violating) CP plug-in? Same thing happens, I guess. I'm interested in people's thoughts on this.

As for my asking for an example: you'd said "There isn't one size that fits all applications, the engineer writing the program will have to decide. Therefore, having a class of errors that is not catchable at all would be a mistake.". Assuming by "not catchable" you meant "not recoverable", I think you're wrong, and was asking for a motivating example, since no-one - not in this debate, nor the one you and I had with the TCS people in October, nor in any others - has ever been able to provide one that violates the principle. That's all. :-)
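For what it's worth, a sketch of how the with-CP/without-CP split might look at the source level - the version identifier and the enforce/exitWithReason-style helper are purely illustrative, not a proposal for phobos:

import std.c.stdio;    // fprintf, stderr
import std.c.stdlib;   // exit

void enforce(bool ok, char[] msg, char[] file, int line)
{
    version (ContractChecks)   // the with-CP build: -version=ContractChecks
    {
        if (!ok)
        {
            fprintf(stderr, "%.*s(%d): VIOLATION: %.*s\n",
                    file, line, msg);
            exit(1);   // halt at the point of violation: no unwinding,
                       // no handler gets a chance to quench it
        }
    }
    // in the without-CP build the check compiles away entirely
}

void main()
{
    enforce(false, "impossible state", __FILE__, __LINE__);
}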
Apr 15 2005
parent Sean Kelly <sean f4.ca> writes:
In article <d3q20d$1oag$1 digitaldaemon.com>, Matthew says...
The only area of doubt as to the viability of this solution is what
would happen with heterogeneous mixes of dynamically merged link
units. That is, what happens if a CP process loads a (violating)
non-CP plug-in? It just crashes without invoking the graceful
shutdown / logging stuff, I guess. Conversely, if a non-CP process
loads a (violating) CP plug-in? Same thing happens, I guess. I'm
interested in people's thoughts on this.
Perhaps it would be worthwhile to have some metadata built into libraries? It might be useful to be able to query this kind of thing at load-time. Sean
Apr 17 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3otcn$sja$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3hnug$flk$1 digitaldaemon.com...
 Hard failures during debugging is fine. My own experience in code
robustness
 comes from working with engineering companies (who use MATLAB to generate
 code for cars and planes) where an unrecoverable error means your car
shuts
 down when you are doing 65 on the highway. Or imagine if that happens 
 with
 an airplane. That is not acceptable. They have drilled over and over into
 our heads that a catastrophic error means people die. I don't mean to be
 overly dramatic but it is a fact.
The reason airliners are safe is because they are designed to be tolerant of any single failure. Computer systems are very unreliable, and the first thing the designer thinks of is "assume the computer system goes beserk and does the worst thing possible, how do I design the system to prevent that from bringing down the airliner?" Having worked on airliner design, I know how computer controlled subsystems handle self-detected faults. They do it by shutting themselves down and switching to the backup system. They don't try to soldier on. To do so would be to, by definition, be operating in an undefined, untested, and unknown configuration. I wouldn't want to bet my life on that. Even if the software was perfect, which it never is, the chips themselves are both prone to random failure and are uninspectable. Therefore, in my opinion from having worked on systems that must be safe, a system that cannot stand a catastrophic failure of a computer system is an inherently unsafe design to begin with. Making the computer more reliable does not solve the problem. What CP provides is another layer of security offering the capability of a program to self-detect a fault. The only reasonable thing it can do then is shut itself down and engage the backup. But if you're writing, say, a word processor, one might decide to attempt to save the user's data upon a CP violation and hope for the best. In a word processor, safety and security aren't the top priority. There isn't one size that fits all applications, the engineer writing the program will have to decide. Therefore, having a class of errors that is not catchable at all would be a mistake.
My analogy with systems on an airplane is that the code that realized a subsystem had failed, and decided to shut that particular subsystem down, effectively "caught" the problem in the subsystem. The same approach applies to a piece of software that catches a problem in a subsystem and deals with it. Just as an error in a subsystem on an airplane doesn't shut down the entire airplane, so an error in a subsystem in a piece of software shouldn't shut down the entire software. I agree the level of safety and redundancy is much higher in an airplane, but my argument was that *something* on that airplane realized the subsystem was in error and dealt with it.
Apr 15 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that realized a
 subsystem failed and decided to shut that particular subsystem down
 effectively "caught" the problem in the subsystem. The same approach
applies
 to a piece of software that catches a problem in a subsystem and deals
with
 it. Just as an error in a subsystem on an airplane doesn't shut down the
 entire airplane, so an error in a subsystem in a piece of software shouldn't
 shut down the entire software. I agree the level of safety and redundancy
is
 much higher in an airplane but my argument was that *something* on that
 airplane realized the subsystem was in error and dealt with it.
Ok. We're on the same page, then.
Apr 15 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Looks like a good page to be on too.  :)

TZ

"Walter" <newshound digitalmars.com> wrote in message
news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that realized a
 subsystem failed and decided to shut that particular subsystem down
 effectively "caught" the problem in the subsystem. The same approach
applies
 to a piece of software that catches a problem in a subsystem and deals
with
 it. Just as an error in a subsystem on an airplane doesn't shut down the
 entire airplane so an error in a subsystem in a piece of software should
 shut down the entire software. I agree the level of safety and reduncancy
is
 much higher in an airplane but my argument was that *something* on that
 airplane realized the subsystem was in error and dealt with it.
Ok. We're on the same page, then.
Apr 19 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message 
news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message 
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that 
 realized a
 subsystem failed and decided to shut that particular subsystem 
 down
 effectively "caught" the problem in the subsystem. The same 
 approach
applies
 to a piece of software that catches a problem in a subsystem 
 and deals
with
 it. Just as an error in a subsystem on an airplane doesn't shut 
 down the
 entire airplane so an error in a subsystem in a piece of 
 software should
 shut down the entire software. I agree the level of safety and 
 reduncancy
is
 much higher in an airplane but my argument was that *something* 
 on that
 airplane realized the subsystem was in error and dealt with it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold.

Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catastrophic failure, but that is, I suggest, within the limit of acceptable risk. ;)

Anyway, I've forsworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 19 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Well, that is one way of looking at it. What I see though, is that they agree on the principle of what "should" be done, to the extent that it is possible to do. I also agree with the way Ben put it, because there is a definite parallel there.

For example, just like parts of a running program share the same address space, the parts of an aircraft in flight also share the same sky. For that matter, different programs on the same computer also share the same address space at least on a hardware level, and likewise any number of distinct pieces of flying hardware may be carrying passengers and such around in the same sky at the same time.

It is, however, possible to designate a "portion" of the sky for a specific aircraft, and a "portion" of the computer's address space for a specific program. In both cases, there exists the possibility of the assigned boundaries being violated, and in either case, the results can potentially lead to a crash.

With this in mind, remember that while the programmer has little or no control over the nature of the hardware that the program will run on, the aircraft designer also has little or no control over the atmosphere in which their creation will fly. As such, both the software engineer and the aircraft engineer are restricted in their ability to add fault tolerance, but there are still things that can be done even within such restrictions.

Microprocessors make mistakes all the time... and I don't mean simply that they make mistakes every so many billion operations. The whole reason it's so hard for microprocessor designers to keep finding ways to make them smaller is because their very design is based on the idea that electrons will go where they want whether we like it or not.

Make it a little easier for electrons to go one way than another way, and more of them will take the easier way. Meanwhile, a few will take the harder way because they were repelled by those taking the easier way, a few will take the harder way because their particular alignment made that way easier for them specifically, and a few will take the harder way simply because "they can" - but the chip is designed to take advantage of the "average" action by using excessive redundancy. As chips get smaller, the amount of redundancy possible in a single electron path decreases substantially, and compensations have to be made.

In other words, if every stray electron that went "the wrong way" crashed the microprocessor that it was traveling through, there wouldn't be time to execute a single hardwired microcode instruction like "fetch" before the chip crashed.

Extreme fault tolerance requires sacrifice though. You could use hashes and checksums all over the place, and redundantly write every piece of data to multiple memory locations, and periodically back up every piece of data to external storage along with the processor state and so on... but don't expect your program to get anything done while you're still alive to see it, because it would be so slow it would pass for frozen.

Looks to me like Walter and Ben have the right idea. Yes, it may be foreign to you, but that doesn't mean it won't work. Give it a chance... see how it works out.

TZ

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d44q3s$2cmj$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that
 realized a
 subsystem failed and decided to shut that particular subsystem
 down
 effectively "caught" the problem in the subsystem. The same
 approach
applies
 to a piece of software that catches a problem in a subsystem
 and deals
with
 it. Just as an error in a subsystem on an airplane doesn't shut
 down the
 entire airplane so an error in a subsystem in a piece of
 software should
 shut down the entire software. I agree the level of safety and
 reduncancy
is
 much higher in an airplane but my argument was that *something*
 on that
 airplane realized the subsystem was in error and dealt with it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold. Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catatrophic failure, but that is, I suggest, within the limit of acceptable risk. ;) Anyway, I've foresworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 19 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I applaud your dedication to painting a picture of unalloyed 
fraternity, but your argument is plasmaware.

    "It is, however, possible to designate a "portion" of the sky 
for a specific aricraft, and a "portion" of the computer's addrerss 
space for a specific program"

You're not even correctly addressing my point, since the analogous 
construct would be:

    "a "portion" of the computer's address space for a 
_subcomponent_ of a specific program"

That is _at best_, as any programmer will tell you, 'not currently 
realistic'.

Anyway, I'm not getting into it any further, as being insulting 
isn't good for anyone, and constantly going round the same old 
houses of misapprehension is tiring, to say the least.


"TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message 
news:d44uj9$2fqg$1 digitaldaemon.com...
 Well, that is one way of looking at it.  What I see though, is 
 that they agree on the princible of what "should" be done, to the 
 extent that it is possible to do.  I also agree with the way Ben 
 put it, because there is a definate parallel there.

 For example, just like parts of a running program share the same 
 address space, the parts of  an aircraft in flight also share the 
 same sky.  For that matter, different programs on the same 
 computer also share the same address space at least on a hardware 
 level, and likewise any number of distinct pieces of flying 
 hardware may be carrying passangers and such around in the same 
 sky at the same time.

 It is, however, possible to designate a "portion" of the sky for a 
 specific aricraft, and a "portion" of the computer's addrerss 
 space for a specific program.  In both cases, there exists the 
 possibility of the assigned boundaries being violated, and in 
 either case, the results can potentially lead to a crash.

 With this in mind, remember that while the programmer has little 
 or no control over the nature of the hardware that the program 
 will run on, the aircraft designer also has little or no control 
 over the atmosphere in which thier creation will fly.  As such, 
 both the software engineer and the aricraft engineer are 
 restricted in their ability to add fault tollerance, but there are 
 still things that can be done even within such restrictions.

 Microprocessors make mistakes all the time... and I don't mean 
 simply that they make mistakes every so many billion operations. 
 The whole reason it's so hard for microprocessor designers to keep 
 finding ways to make them smaller, is because their very design is 
 based on the idea that electrons will go where they want whether 
 we like it or not.

 Make it a little easier for electrons to go one way than another 
 way, and more of them will take the easier way.  Meanwhile, a few 
 will take the harder way because those they were repelled by those 
 taking the easier way, and a few will take the harder way because 
 their particular allignment made that way easier for them 
 specifically, and a few will take the harder way simply because 
 "they can" but the chip is designed to take advantage of the 
 "average" action by using excessive redundancy.  As chips get 
 smaller, the amount of redundancy possible in a single electron 
 path decreases substantially, and compensations have to be made.

 In other words, if every stray electron that went "the wrong way" 
 crashed the microprocessor that it was traveling through, there 
 wouldn't be time to execute a single hardwired microcode 
 instruction like "fetch" before the chip crashed.

 Extreme fault tollerance requires sacrafice though.  You could use 
 hashes and checksums all over the place, and redundantly write 
 every piece of data to multiple memory locations, and periodically 
 back up every piece of data to external storage along with the 
 processor state and so on... but don't expect your program to get 
 anything done while you're still alive to see it because it would 
 be so slow it would pass for frozen.

 Looks to me like Walter and Ben have the right idea.  Yes, it may 
 be foriegn to you, but that doesn't mean it won't work.  Give it a 
 chance... see how it works out.

 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:d44q3s$2cmj$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that
 realized a
 subsystem failed and decided to shut that particular 
 subsystem
 down
 effectively "caught" the problem in the subsystem. The same
 approach
applies
 to a piece of software that catches a problem in a subsystem
 and deals
with
 it. Just as an error in a subsystem on an airplane doesn't 
 shut
 down the
 entire airplane so an error in a subsystem in a piece of
 software should
 shut down the entire software. I agree the level of safety 
 and
 reduncancy
is
 much higher in an airplane but my argument was that 
 *something*
 on that
 airplane realized the subsystem was in error and dealt with 
 it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold. Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catatrophic failure, but that is, I suggest, within the limit of acceptable risk. ;) Anyway, I've foresworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 20 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
It's not meant as an argument at all, but rather as a statement of opinion from
my perspective.
My point is exactly that...
there are different perspectives.
Yours is one.

As for your statement that I was incorrectly addressing your point by speaking
of airplanes and programs,
you may want to consider the fact that I was attempting to address the bigger
picture, since the smaller sub-picture is obviously a part of it.
I didn't think it necessary to mention that within each airplane are its parts,
and each part occupies a subset of the space that the airplane occupies,
or the fact that those parts are made of materials and/or sub-parts which are
in turn made of materials and/or sub-parts or how such parts and sub-parts
interact.
The fact remains that it is a reasonable analogy whether or not any specific
person understands the connection or sees the parallel.

You're obviously an intelligent person, but...
you don't know "everything", so your argument that my argument is plasmaware
can't have taken into account my knowledge or experiences that you lack
awareness of.

TZ

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d450cf$2il5$1 digitaldaemon.com...
 I applaud your dedication to painting a picture of unalloyed
 fraternity, but your argument is plasmaware.

     "It is, however, possible to designate a "portion" of the sky
 for a specific aricraft, and a "portion" of the computer's addrerss
 space for a specific program"

 You're not even correctly addressing my point, since the analogous
 construct would be:

     "a "portion" of the computer's address space for a
 _subcomponent_ of a specific program"

 That is _at best_, as any programmer will tell you, 'not currently
 realistic'.

 Anyway, I'm not getting into it any further, as being insulting
 isn't good for anyone, and constantly going round the same old
 houses of misapprehension is tiring, to say the least.


 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44uj9$2fqg$1 digitaldaemon.com...
 Well, that is one way of looking at it.  What I see though, is
 that they agree on the princible of what "should" be done, to the
 extent that it is possible to do.  I also agree with the way Ben
 put it, because there is a definate parallel there.

 For example, just like parts of a running program share the same
 address space, the parts of  an aircraft in flight also share the
 same sky.  For that matter, different programs on the same
 computer also share the same address space at least on a hardware
 level, and likewise any number of distinct pieces of flying
 hardware may be carrying passangers and such around in the same
 sky at the same time.

 It is, however, possible to designate a "portion" of the sky for a
 specific aricraft, and a "portion" of the computer's addrerss
 space for a specific program.  In both cases, there exists the
 possibility of the assigned boundaries being violated, and in
 either case, the results can potentially lead to a crash.

 With this in mind, remember that while the programmer has little
 or no control over the nature of the hardware that the program
 will run on, the aircraft designer also has little or no control
 over the atmosphere in which thier creation will fly.  As such,
 both the software engineer and the aricraft engineer are
 restricted in their ability to add fault tollerance, but there are
 still things that can be done even within such restrictions.

 Microprocessors make mistakes all the time... and I don't mean
 simply that they make mistakes every so many billion operations.
 The whole reason it's so hard for microprocessor designers to keep
 finding ways to make them smaller, is because their very design is
 based on the idea that electrons will go where they want whether
 we like it or not.

 Make it a little easier for electrons to go one way than another
 way, and more of them will take the easier way.  Meanwhile, a few
 will take the harder way because those they were repelled by those
 taking the easier way, and a few will take the harder way because
 their particular allignment made that way easier for them
 specifically, and a few will take the harder way simply because
 "they can" but the chip is designed to take advantage of the
 "average" action by using excessive redundancy.  As chips get
 smaller, the amount of redundancy possible in a single electron
 path decreases substantially, and compensations have to be made.

 In other words, if every stray electron that went "the wrong way"
 crashed the microprocessor that it was traveling through, there
 wouldn't be time to execute a single hardwired microcode
 instruction like "fetch" before the chip crashed.

 Extreme fault tollerance requires sacrafice though.  You could use
 hashes and checksums all over the place, and redundantly write
 every piece of data to multiple memory locations, and periodically
 back up every piece of data to external storage along with the
 processor state and so on... but don't expect your program to get
 anything done while you're still alive to see it because it would
 be so slow it would pass for frozen.

 Looks to me like Walter and Ben have the right idea.  Yes, it may
 be foriegn to you, but that doesn't mean it won't work.  Give it a
 chance... see how it works out.

 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d44q3s$2cmj$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that
 realized a
 subsystem failed and decided to shut that particular
 subsystem
 down
 effectively "caught" the problem in the subsystem. The same
 approach
applies
 to a piece of software that catches a problem in a subsystem
 and deals
with
 it. Just as an error in a subsystem on an airplane doesn't
 shut
 down the
 entire airplane so an error in a subsystem in a piece of
 software should
 shut down the entire software. I agree the level of safety
 and
 reduncancy
is
 much higher in an airplane but my argument was that
 *something*
 on that
 airplane realized the subsystem was in error and dealt with
 it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold. Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catatrophic failure, but that is, I suggest, within the limit of acceptable risk. ;) Anyway, I've foresworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 20 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message 
news:d459k7$2q6s$1 digitaldaemon.com...
 It's not meant as an argument at all, but rather as a statement of 
 opinion from my perspective.
Opinion is worthless without argument.
 My point is exactly that...
 there are different perspectives.
 Yours is one.
Naturally.
 As for your statement that I was incorrectly addressing your point 
 by speaking of airplains and programs,
 you may want to consider the fact that I was attempting to address 
 the bigger picture, since the smaller sub-picture is obviously a 
 part of it.
So what? You appear to assume it as a given truth that such 'magnification' is meaningful. Why? On what grounds? Is that some universal truth? Focusing in is appropriate and meaningful in some cases; focusing out is in others; and zooming in any direction is irrelevant to others. (Surely that's obvious almost to the point of axiom.)
 I didn't think it necessary to mention that within each airplane 
 are it's parts,
 and each part occupies a subset of the space that the airplane 
 occupies,
 or the fact that those parts are made of materials and/or 
 sub-parts which are in turn made of materials and/or sub-parts or 
 how such parts and sub-parts interact.
 The fact remains that it is a reasonable analogy whether or not 
 any specific person understands the connection or sees the 
 parallel.
It is absolutely _not_ a reasonable analogy. The constituent parts of a process may not be distinguished as separate in terms of safety, since they have an intimate relationship in terms of shared access to process resources, principally, but not exclusively, memory. To put forward the notion that separate aircraft, with independent control systems and, most importantly, sentient human beings, are as intimately linked is simply wrong.

For example, it is not the case that one aircraft flying some 100s or 1000s or even 10s of km away from another will suddenly find itself touching the airspace of another, and therefore suffering from jetwash. But such proximity is not only possible and likely between intra-process entities, it is their very function. To deny this, or rather to aver that its converse is true, is either uninformed, mistaken or mendacious.
 You're obviously an intelligent person, but...
Assumption. But a nice one. Are you trying to disarm me with flattery? :-)

Far more important than intelligence - which is no guarantee against ignorance, inexperience, vanity or criticism - I have a large and most certainly healthy portion of skepticism about my own wisdom _and_ that of others, which is why I always back up my opinions with reasoned argument, and why I am unable to give credit to those who do not. And I'm not afraid to say so. _And_ I don't see why one should be. Ignorance is not something to fear.
 you don't know "everything"
Naturally
so your argument that my argument is plasmaware can't have taken 
into account my knowledge or experiences that you lack awareness 
of.
That's irrelevant. I do not know what you do or do not know. To imply that that's the foundation of my argument is specious. I was commenting on an _argument_ which you advanced. If you wish to argue with that then do so, but please don't dress up my post and ascribe knowledge and intent of which you have no knowledge. That's, er, hypocrisy, no?

As to the arguments, which I did indeed comment on: when you make statements that are plainly - at least to me - nonsensical, I can't help but notice. Perhaps the wise, or at least the purposeful, course is to just ignore such things. But then what's the point of this newsgroup? It's already 90% vacuous congratulation. For my part, I believe advancement is made through honest criticism, as readily of oneself as of others, and action to answer it, but it just doesn't seem to be something that people are able/willing/interested in doing in general.

Don't worry. I'm just about out of reserves of energy for swimming against the D stream. Which I think is a shame, but I suspect most others won't.

Meet me where the G.O.M. hang out. :-)

Matthew

P.S. Since it's become necessary for me to do so in respect of disagreement: None of the foregoing is meant as personal insult or slight to you, or as an attempt to make _you_ feel unwelcome to the newsgroup. Your enthusiasm and experience are welcome. (It may be mine that aren't <g>)
 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:d450cf$2il5$1 digitaldaemon.com...
 I applaud your dedication to painting a picture of unalloyed
 fraternity, but your argument is plasmaware.

     "It is, however, possible to designate a "portion" of the sky
 for a specific aricraft, and a "portion" of the computer's 
 addrerss
 space for a specific program"

 You're not even correctly addressing my point, since the analogous
 construct would be:

     "a "portion" of the computer's address space for a
 _subcomponent_ of a specific program"

 That is _at best_, as any programmer will tell you, 'not currently
 realistic'.

 Anyway, I'm not getting into it any further, as being insulting
 isn't good for anyone, and constantly going round the same old
 houses of misapprehension is tiring, to say the least.


 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44uj9$2fqg$1 digitaldaemon.com...
 Well, that is one way of looking at it.  What I see though, is
 that they agree on the principle of what "should" be done, to the
 extent that it is possible to do.  I also agree with the way Ben
 put it, because there is a definite parallel there.

 For example, just like parts of a running program share the same
 address space, the parts of an aircraft in flight also share the
 same sky.  For that matter, different programs on the same
 computer also share the same address space at least on a hardware
 level, and likewise any number of distinct pieces of flying
 hardware may be carrying passengers and such around in the same
 sky at the same time.

 It is, however, possible to designate a "portion" of the sky for a
 specific aircraft, and a "portion" of the computer's address
 space for a specific program.  In both cases, there exists the
 possibility of the assigned boundaries being violated, and in
 either case, the results can potentially lead to a crash.

 With this in mind, remember that while the programmer has little
 or no control over the nature of the hardware that the program
 will run on, the aircraft designer also has little or no control
 over the atmosphere in which their creation will fly.  As such,
 both the software engineer and the aircraft engineer are
 restricted in their ability to add fault tolerance, but there are
 still things that can be done even within such restrictions.

 Microprocessors make mistakes all the time... and I don't mean
 simply that they make mistakes every so many billion operations.
 The whole reason it's so hard for microprocessor designers to keep
 finding ways to make them smaller, is because their very design is
 based on the idea that electrons will go where they want whether
 we like it or not.

 Make it a little easier for electrons to go one way than another
 way, and more of them will take the easier way.  Meanwhile, a few
 will take the harder way because they were repelled by those
 taking the easier way, and a few will take the harder way because
 their particular alignment made that way easier for them
 specifically, and a few will take the harder way simply because
 "they can", but the chip is designed to take advantage of the
 "average" action by using excessive redundancy.  As chips get
 smaller, the amount of redundancy possible in a single electron
 path decreases substantially, and compensations have to be made.

 In other words, if every stray electron that went "the wrong way"
 crashed the microprocessor that it was traveling through, there
 wouldn't be time to execute a single hardwired microcode
 instruction like "fetch" before the chip crashed.

 Extreme fault tolerance requires sacrifice though.  You could use
 hashes and checksums all over the place, and redundantly write
 every piece of data to multiple memory locations, and periodically
 back up every piece of data to external storage along with the
 processor state and so on... but don't expect your program to get
 anything done while you're still alive to see it because it would
 be so slow it would pass for frozen.

 Looks to me like Walter and Ben have the right idea.  Yes, it may
 be foreign to you, but that doesn't mean it won't work.  Give it a
 chance... see how it works out.

 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d44q3s$2cmj$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code that
 realized a subsystem failed and decided to shut that particular
 subsystem down effectively "caught" the problem in the subsystem.
 The same approach applies to a piece of software that catches a
 problem in a subsystem and deals with it. Just as an error in a
 subsystem on an airplane doesn't shut down the entire airplane,
 so an error in a subsystem in a piece of software shouldn't shut
 down the entire software. I agree the level of safety and
 redundancy is much higher in an airplane, but my argument was
 that *something* on that airplane realized the subsystem was in
 error and dealt with it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold.

Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catastrophic failure, but that is, I suggest, within the limit of acceptable risk. ;)

Anyway, I've forsworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 20 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Opinion without argument is no more meaningless than disagreement without war.
I've stated my opinion, and you obviously disagree with it.
I can accept that, without having to choose between asserting that I am "still
right" and conceding that I must have been mistaken.
I am sure that what I stated contains valuable and accurate information.
You can either find the value and benefit from it, or insist that it doesn't
exist because you can't see it.
That choice is yours.
I have no wish to argue.

TZ

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d45djj$2tv2$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d459k7$2q6s$1 digitaldaemon.com...
 It's not meant as an argument at all, but rather as a statement of
 opinion from my perspective.
Opinion is worthless without argument.
 My point is exactly that...
 there are different perspectives.
 Yours is one.
Naturally.
 As for your statement that I was incorrectly addressing your point
 by speaking of airplains and programs,
 you may want to consider the fact that I was attempting to address
 the bigger picture, since the smaller sub-picture is obviously a
 part of it.
So what? You appear to assume a given truth that such 'magnification' is meaningful. Why? On what grounds? Is that some universal truth? Focusing in is appropriate and meaningful in some cases; focusing out is in others; and zooming in any direction irrelevant to others. (Surely that's obvious almost to the point of axiom.)
 I didn't think it necessary to mention that within each airplane
 are it's parts,
 and each part occupies a subset of the space that the airplane
 occupies,
 or the fact that those parts are made of materials and/or
 sub-parts which are in turn made of materials and/or sub-parts or
 how such parts and sub-parts interact.
 The fact remains that it is a reasonable analogy whether or not
 any specific person understands the connection or sees the
 parallel.
It is absolutely _not_ a reasonable analogy. The constituent parts of a process may not be distinguised as separate in terms of safety, since they have an intimate relationship in terms of shared access to process resources, principally, but not exclusively, memory. To put forward the notion that separate aircraft, with independent control systems and, most importantly, sentient human beings, are as intimately linked is simply wrong. For example, it is not the case that one aircraft flying some 100s or 1000s or even 10s of km away from another will suddenly find itself touching the same airspace of another, and therefore suffering from jetwash. But such proximity is not only possible and likely between intra-process entities, it is their very function. To deny this, or rather to aver that its converse is true, is either uninformed, mistaken or mendacious.
 You're obviously an intellegent person, but...
Assumption. But a nice one. Are you trying to disarm me with flattery? :-) Far more important than intelligence - which not guarantee against ignorance, inexperience, vanity or criticism - I have a large and most certainly healthy portion of skepticism about my own wisdom _and_ that of others, which is why I always back up my opinions with reasoned argument, and why I am unable to give credit to those who do not. And I'm not afraid to say so. _And_, I don't see why one should be. Ignorance is not something to fear.
 you don't know "everything"
Naturally
so your argument that my argument is plasmaware can't have taken
into account my knowledge or experiences that you lack awareness
of.
That's irrelevant. I do not know what you do or do not know. To imply that that's the foundation of my argument is specious. I was commenting on an _argument_ which you advanced. If you wish to argue with that then do so, but please don't dress up my post and ascribe knowledge and intent of which you have no knowledge. That's, er, hypocrisy, no? As to the arguments, which I did indeed comment on: When you make statements that are plainly - at least to me - nonsensical, I can't help but notice. Perhaps the wise, or at least the purposeful, course is to just ignore such things. But then what's the point of this newsgroup? It's already 90% vacuous congratulation. For my part, I believe advancement is made through honest criticism, as readily of oneself as of others, and action to answer it, but it just doesn't seem to be something that people are able/willing/interested in doing in general. Don't worry. I'm just about out of reserves of energy for swimming against the D stream. Which I think is a shame, but I suspect most others won't. Meet me where the G.O.M. hang out. :-) Matthew P.S. Since it's become necessary for me to do so in respect of disagreement: None of the foregoing is meant as personal insult or slight to you, or as an attempt to make _you_ feel unwelcome to the newsgroup. Your enthusiasm and experience are welcome. (It may be mine that aren't <g>)
 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d450cf$2il5$1 digitaldaemon.com...
 I applaud your dedication to painting a picture of unalloyed
 fraternity, but your argument is plasmaware.

     "It is, however, possible to designate a "portion" of the sky
 for a specific aricraft, and a "portion" of the computer's
 addrerss
 space for a specific program"

 You're not even correctly addressing my point, since the
 analogous
 construct would be:

     "a "portion" of the computer's address space for a
 _subcomponent_ of a specific program"

 That is _at best_, as any programmer will tell you, 'not
 currently
 realistic'.

 Anyway, I'm not getting into it any further, as being insulting
 isn't good for anyone, and constantly going round the same old
 houses of misapprehension is tiring, to say the least.


 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44uj9$2fqg$1 digitaldaemon.com...
 Well, that is one way of looking at it.  What I see though, is
 that they agree on the princible of what "should" be done, to
 the
 extent that it is possible to do.  I also agree with the way
 Ben
 put it, because there is a definate parallel there.

 For example, just like parts of a running program share the
 same
 address space, the parts of  an aircraft in flight also share
 the
 same sky.  For that matter, different programs on the same
 computer also share the same address space at least on a
 hardware
 level, and likewise any number of distinct pieces of flying
 hardware may be carrying passangers and such around in the same
 sky at the same time.

 It is, however, possible to designate a "portion" of the sky
 for a
 specific aricraft, and a "portion" of the computer's addrerss
 space for a specific program.  In both cases, there exists the
 possibility of the assigned boundaries being violated, and in
 either case, the results can potentially lead to a crash.

 With this in mind, remember that while the programmer has
 little
 or no control over the nature of the hardware that the program
 will run on, the aircraft designer also has little or no
 control
 over the atmosphere in which thier creation will fly.  As such,
 both the software engineer and the aricraft engineer are
 restricted in their ability to add fault tollerance, but there
 are
 still things that can be done even within such restrictions.

 Microprocessors make mistakes all the time... and I don't mean
 simply that they make mistakes every so many billion
 operations.
 The whole reason it's so hard for microprocessor designers to
 keep
 finding ways to make them smaller, is because their very design
 is
 based on the idea that electrons will go where they want
 whether
 we like it or not.

 Make it a little easier for electrons to go one way than
 another
 way, and more of them will take the easier way.  Meanwhile, a
 few
 will take the harder way because those they were repelled by
 those
 taking the easier way, and a few will take the harder way
 because
 their particular allignment made that way easier for them
 specifically, and a few will take the harder way simply because
 "they can" but the chip is designed to take advantage of the
 "average" action by using excessive redundancy.  As chips get
 smaller, the amount of redundancy possible in a single electron
 path decreases substantially, and compensations have to be
 made.

 In other words, if every stray electron that went "the wrong
 way"
 crashed the microprocessor that it was traveling through, there
 wouldn't be time to execute a single hardwired microcode
 instruction like "fetch" before the chip crashed.

 Extreme fault tollerance requires sacrafice though.  You could
 use
 hashes and checksums all over the place, and redundantly write
 every piece of data to multiple memory locations, and
 periodically
 back up every piece of data to external storage along with the
 processor state and so on... but don't expect your program to
 get
 anything done while you're still alive to see it because it
 would
 be so slow it would pass for frozen.

 Looks to me like Walter and Ben have the right idea.  Yes, it
 may
 be foriegn to you, but that doesn't mean it won't work.  Give
 it a
 chance... see how it works out.

 TZ

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d44q3s$2cmj$1 digitaldaemon.com...
 "TechnoZeus" <TechnoZeus PeoplePC.com> wrote in message
 news:d44p6j$2c2a$1 digitaldaemon.com...
 Looks like a good page to be on too.  :)

 TZ

 "Walter" <newshound digitalmars.com> wrote in message
 news:d3q3jv$1q48$2 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d3pt4s$1lbb$1 digitaldaemon.com...
 My analogy with systems on an airplane is that the code
 that
 realized a
 subsystem failed and decided to shut that particular
 subsystem
 down
 effectively "caught" the problem in the subsystem. The
 same
 approach
applies
 to a piece of software that catches a problem in a
 subsystem
 and deals
with
 it. Just as an error in a subsystem on an airplane
 doesn't
 shut
 down the
 entire airplane so an error in a subsystem in a piece of
 software should
 shut down the entire software. I agree the level of
 safety
 and
 reduncancy
is
 much higher in an airplane but my argument was that
 *something*
 on that
 airplane realized the subsystem was in error and dealt
 with
 it.
Ok. We're on the same page, then.
I don't think they are on the same page, because that would assume that the barriers between valid and invalid code within a process are analogous to those between separate redundant computer systems with external arbitration mechanisms in an aircraft control system. Since the former have the extreme intimacy of a shared address space and thread(s) of execution, and the latter are (as far as I can guess, as I do not know) connected only by physical actuators and a shared power supply (maybe?), this analogy cannot hold. Yes, it's conceivable that one of the redundant computer systems on the aircraft could, in principle, cause, in its death throes, sufficient perturbation of the power supply of one of the other ones as to effect a catatrophic failure, but that is, I suggest, within the limit of acceptable risk. ;) Anyway, I've foresworn further comment on the issue until my article's published, so I'll shut up again. Sorry. ;-)
Apr 20 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <bhinkle mathworks.com> wrote in message
news:d3h2sd$40v$1 digitaldaemon.com...
 I think Walter has made the right choices - except the hierarchy has
 gotten out of whack. Robust, fault-tolerant software is easier to
 write with D than C++.
D is easier to write robust code in than C++ because it is more fault
*intolerant* than C++ is. The idea is not to paper over bugs and soldier
on, which would be fault-tolerant, but to set things up so that any faults
cannot be ignored. Here's an example:

C++:

    class Foo { ... void func(); ... };
    ...
    void bar()
    {
        Foo *f;    // oops, forgot to initialize it
        ...
        f->func();
    }

f is initialized with garbage. The f->func() may or may not fail. If it
doesn't fail, the bug may proceed unnoticed. Does this kind of problem
happen in the wild? Happens all the time.

D:

    class Foo { ... void func(); ... }
    ...
    void bar()
    {
        Foo f;    // oops, forgot to initialize it
        ...
        f.func();
    }

D will provide a default initialization of f to null. Then, if f is
dereferenced as in f.func(), you'll get a null pointer exception. Every
time, in the same place. This helps with two main needs for writing robust
code: 1) flushing out the bugs so they appear and 2) having the symptoms
of the bug be repeatable.

Here's another one:

C++:

    Foo *f = new Foo;
    Foo *g = f;
    ...
    delete f;
    ...
    g->func();

That'll appear to 'work' most of the time, but will rarely fail, and such
problems are typically very hard to track down.

D:

    Foo f = new Foo;
    Foo g = f;
    ...
    delete f;
    ...
    g.func();

You're much more likely to get a null pointer exception with D, because
when delete deletes a class reference, it nulls out the vptr. It's not
perfect, as in the meantime a new class object could be allocated using
that same chunk of memory, but in my own experience it has been very
helpful in exposing bugs, much more so than C++'s method.
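The D half of the first example can be made into a complete program; this sketch (class and method names are invented) fails identically on every run, which is exactly the repeatability being claimed:

    import std.stdio;

    class Foo
    {
        void func() { writefln("Foo.func called"); }
    }

    void bar()
    {
        Foo f;      // oops, forgot to initialize it: f defaults to null
        f.func();   // fails here, every run, at the same place
    }

    void main()
    {
        bar();
    }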
Apr 15 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3phdv$1dkv$1 digitaldaemon.com...
 "Ben Hinkle" <bhinkle mathworks.com> wrote in message
 news:d3h2sd$40v$1 digitaldaemon.com...
 I think Walter has made the right choices - except the hierarchy 
 has
gotten
 out of whack. Robust, fault-tolerant software is easier to write 
 with D
than
 C++.
D is easier to write robust code in than C++ because it is more fault *intolerant* than C++ is. The idea is not to paper over bugs and soldier on, which would be fault-tolerant, but to set things up so that any faults cannot be ignored. Here's an example: C++: class Foo { ... void func(); ... }; ... void bar() { Foo *f; // oops, forgot to initialize it ... f->func(); } f is initialized with garbage. The f->foo() may or may not fail. If it doesn't fail, the bug may proceed unnoticed. Does this kind of problem happen in the wild? Happens all the time. D: class Foo { ... void func(); ... } ... void bar() { Foo f; // oops, forgot to initialize it ... f.func(); } D will provide a default initialization of f to null. Then, if f is dereference as in f.func(), you'll get a null pointer exception. Every time, in the same place. This helps with two main needs for writing robust code: 1) flushing out the bugs so they appear and 2) having the symptoms of the bug be repeatable. Here's another one: C++: Foo *f = new Foo; Foo *g = f; ... delete f; ... g->func(); That'll appear to 'work' most of the time, but will rarely fail, and such problems are typically very hard to track down. D: Foo f = new Foo; Foo g = f; ... delete f; ... g.func(); You're much more likely to get a null pointer exception with D, because when delete deletes a class reference, it nulls out the vptr. It's not perfect, as in the meantime a new class object could be allocated using that same chunk of memory, but in my own experience it has been very helpful in exposing bugs, much more so than C++'s method.
True. But I believe I did say that my skepticism was from a personal perspective, as, I believe, is the proposition itself. Simply put, I think D does not fulfil the promise to me. I don't doubt that it will do so for others. But my suspicion is that it addresses the easy/trivial/neophytic gotchas - which is a great thing, to be sure - while failing to address the larger/deeper/harder problems and in some cases exacerbating them. The non-auto nature of exceptions is one, albeit quite arcane. The use of root-level potentially meaningless methods - opCmp() et al - is another. These, to me personally, present a much less robust programming environment than the one I am experienced in. I don't say that that's going to be a common perspective, much less the majority one, but it is a valid perspective, and one which is shared by others. Thus, my skepticism about D's ability to make such claims.

And to be clear, and to forestall any misrepresentation of my position: I think the above examples show that D is making welcome advances in facilitating good coding. I just happen to think it's also taking some backwards steps, and that those areas are the ones which present a far greater challenge to people of all levels of experience, skill and diligence.
Apr 15 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d3po2n$1iek$1 digitaldaemon.com...
 True. But I believe I did say that my skepticisim was from a
 personal perspective, as, I believe, is the proposition itself.
 Simply put, I think D does not fulfil the promise to me. I don't
 doubt that it will do so to others. But my suspicion is that it
 addresses the easy/trivial/neophytic gotchas - which is a great
 thing, to be sure - while failing to address the
 larger/deeper/harder problems and in some cases exacerbating them.
 The non-auto nature of exceptions is one, albeit quite arcane. The
 use of root-level potentially meaningless methods - opCmp() et al -
 is another. These, to me personally, present a much less robust
 programming environment than the one I am experienced in. I don't
 say that that's going to be a common perspective, much less the
 majority one, but it is a valid perspective, and one which is shared
 by others. Thus, my skepticism about D's ability to make such
 claims.

 And to be clear and forestall any mirepresentation my position: I
 think the above examples show that D is making welcome advances in
 facilitating good coding. I just happen to think it's also taking
 some backwards steps, and that those areas are the ones which
 present a far greater challenge to people of all levels of
 experience, skill and diligence.
I'm going to posit that your 10+ years of professional experience in C++ might be skewing your perceptions here. I have nearly 20 years experience writing C++, and I rarely make mistakes any more that are not algorithmic (rather than dumb things like forgetting to initialize). I am so *used* to C++ that I've learned to compensate for its faults, potholes, and goofball error prone issues, such that they aren't a personal issue with me anymore. I suspect the same goes for you.

But then you work with D, and it's different. The comfortable old compensations are inapplicable, and even when it's the same, assume it's different (like the &this issue you had in another thread). Until you've programmed enough in D to build up a comfortable mental model of how it works, I wouldn't be surprised at all if *initially* you're going to be writing buggier code than you would in C++.

To me it's like driving a manual transmission all my life. I am so used to its quirks. Then, I try to drive an automatic. When stopping for a red light, my left foot comes down hard on the clutch. There is no clutch, and it hits the extended power brake pedal instead. The car slows down suddenly. My reflex is to hit the clutch harder so the engine won't stall. The car stands on its nose, much to the amusement of my passengers. It's not that the auto transmission is badly designed, it just requires a different mental model to operate.

P.S. I said these C++ problems rarely cause me trouble anymore in my own code. So why fix them? Because I remember the trouble that they used to cause me, and see the trouble they cause other people. Being in the compiler business, I wind up helping a lot of people with a very wide variety of code, so I tend to see what kinds of things bring on the grief.

P.P.S. I believe the C++ uninitialized variable problem is a far deeper and costlier one than you seem willing to credit it for. It just stands head, shoulders, and trunk above most of the other problems, with the exception of goofing up managing memory allocations. It's one of those things that I have lost enormous time to, and as a consequence have learned a lot of compensations for. I suspect you have, too.

P.P.P.S. I know for a fact that I spend significantly less time writing an app in D than I would the same thing in C++, and that's including debugging time. There's no question about it. When I've translated working & debugged apps from C++ to D, D's extra error checking exposed several bugs that had gone undetected by C++ (array overflows was a big one), even one with the dreaded C++ implicit default break; in the switch statement <g>.
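The two checks named in the P.P.P.S. are easy to demonstrate; in this sketch (values invented), the commented-out line, if enabled, throws an array-bounds error with bounds checking on, and the default-less switch (legal in the D of this era) throws a switch error at runtime rather than silently doing nothing the way C++'s implicit default does:

    import std.stdio;

    void main()
    {
        int[3] a;
        // a[3] = 1;    // with bounds checking on: array-bounds error

        int code = 5;
        switch (code)
        {
            case 1: writefln("one"); break;
            case 2: writefln("two"); break;
            // no default: D of this era throws a SwitchError here
            // at runtime instead of falling off the end as C++ does
        }
    }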
Apr 15 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3prc1$1k77$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3po2n$1iek$1 digitaldaemon.com...
 True. But I believe I did say that my skepticisim was from a
 personal perspective, as, I believe, is the proposition itself.
 Simply put, I think D does not fulfil the promise to me. I don't
 doubt that it will do so to others. But my suspicion is that it
 addresses the easy/trivial/neophytic gotchas - which is a great
 thing, to be sure - while failing to address the
 larger/deeper/harder problems and in some cases exacerbating 
 them.
 The non-auto nature of exceptions is one, albeit quite arcane. 
 The
 use of root-level potentially meaningless methods - opCmp() et 
 al -
 is another. These, to me personally, present a much less robust
 programming environment than the one I am experienced in. I don't
 say that that's going to be a common perspective, much less the
 majority one, but it is a valid perspective, and one which is 
 shared
 by others. Thus, my skepticism about D's ability to make such
 claims.

 And to be clear and forestall any mirepresentation my position: I
 think the above examples show that D is making welcome advances 
 in
 facilitating good coding. I just happen to think it's also taking
 some backwards steps, and that those areas are the ones which
 present a far greater challenge to people of all levels of
 experience, skill and diligence.
I'm going to posit that your 10+ years of professional experience in C++ might be skewing your perceptions here. I have nearly 20 years experience writing C++, and I rarely make mistakes any more that are not algorithmic (rather than dumb things like forgetting to initialize). I am so *used* to C++ that I've learned to compensate for its faults, potholes, and goofball error prone issues, such that they aren't a personal issue with me anymore. I suspect the same goes for you. But then you work with D, and it's different. The comfortable old compensations are inapplicable, and even when it's the same, assume it's different (like the &this issue you had in another thread). Until you've programmed enough in D to build up a comfortable mental model of how it works, I wouldn't be surprised at all if *initially* you're going to be writing buggier code than you would in C++. To me it's like driving a manual transmission all my life. I am so used to its quirks. Then, I try to drive an automatic. When stopping for a red light, my left foot comes down hard on the clutch. There is no clutch, and it hits the extended power brake pedal instead. The car slows down suddenly. My reflex is to hit the clutch harder so the engine won't stall. The car stands on its nose, much to the amusement of my passengers. It's not that the auto transmission is badly designed, it just requires a different mental model to operate. P.S. I said these C++ problems rarely cause me trouble anymore in my own code. So why fix them? Because I remember the trouble that they used to cause me, and see the trouble they cause other people. Being in the compiler business, I wind up helping a lot of people with a very wide variety of code, so I tend to see what kinds of things bring on the grief. P.P.S. I believe the C++ uninitialized variable problem is a far deeper and costlier one than you seem willing to credit it for. It just stands head, shoulders, and trunk above most of the other problems with the exception of goofing up managing memory allocations. It's one of those things that I have lost enormous time to, and as a consequence have learned a lot of compensations for. I suspect you have, too. P.P.P.S. I know for a fact that I spend significantly less time writing an app in D than I would the same thing in C++, and that's including debugging time. There's no question about it. When I've translated working & debugged apps from C++ to D, D's extra error checking exposed several bugs that had gone undetected by C++ (array overflows was a big one), even one with the dreaded C++ implicit default break; in the switch statement <g>.
But, my friend, you've just done your usual and not answered the point I made. I disagree with none of the contents of this post, save for the implication that it addresses my point.

The issue I have with many languages that appear, on the surface, to be 'better' than C++ is that the limits they place on the experienced programmer are usually very hard. An example we've seen with D in recent days is that we need compiler support for irrecoverability.

Now whether as a result of the singular genius of Dr Stroustrup, or as an emergent property of its complexity, C++ has, for the ingenious/intrepid, almost infinite capacity for providing fixes to the language by dint of libraries. Look at my fast string concatenation, at Boost's Lambda, etc. etc. Other languages are left for dead in this regard.

In this regard, D is a second class citizen to C++, probably by design, and probably for good reason. After all, most of the ingenious solutions in C++ are not accessible to most programmers, even very experienced ones, as it's just so complex and susceptible to dialecticism. But if D proscribes this (arguably semi-insane) level of ingenuity, then, IMO, it should address the fundamental "big" issues more seriously than it does. Otherwise, people who can reach answers for their biggest questions in other languages are going to, as I currently do, find it impossible to digest blanket statements about D's superiority.
Apr 15 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d3ptmg$1llb$1 digitaldaemon.com...
 But, my friend, you've just done your usual and not answered the
 point I made. I disagree with none of the contents of this post,
 save for the implication that it addresses my point.
Your point, as I saw it, was that D doesn't address the "larger/deeper/harder" problems. I addressed it by disagreeing with your classification about what were the hard problems.
 The issue I have with many languages that appear, on the surface, to
 be 'better' than C++ is that the limits they place on the
 experienced programmer are usually very hard. An example we've seen
 with D in recent days is that we need compiler support for
 irrecoverability.
I'm at a loss understanding what you mean by irrecoverability. Perhaps a pointer to the thread?
 Now whether as a result of the singular genius of Dr Stroustrup, or
 as an emergent property of its complexity, C++ has, for the
 ingenious/intrepid, almost infinite capacity for providing fixes to
 the language by dint of libraries.
I know of no C++ capacity to fix the:

    Foo *p;

uninitialized pointer problem. This is a large, hard, deep problem in my opinion, based on my experience helping others debug code over the years (and it is largely addressed by moving to automatic memory management).
 Look at my fast string
 concatenation, at Boost's Lambda, etc. etc. Other languages are left
 for dead in this regard.

 In this regard, D is a second class citizen to C++, probably by
 design, and probably for good reason. After all, most of the
 ingenious solutions in C++ are not accessible to most programmers,
 even very experienced ones, as it's just so complex and susceptible
 to dialecticism. But if D proscribes this (arguably semi-insane)
 level of ingenuity, then, IMO, it should address the fundamental
 "big" issues more seriously than it does. Otherwise, people who can
 reach answers for their biggest questions in other languages are
 going to, as I currently do, find it impossible to digest blanket
 statements about D's superiority.
There's no way to do CP in C++ in the general case. I know you're sold on what CP can do for program robustness.

Can't do foreach. No nested functions. No function literals. No UTF. These things are simple and easy to use in D, and accessible to ordinary programmers. The Boost stuff is generally neither. A lot of Boost is devoted to solving problems with C++ that aren't problems in D. That said, once D gets implicit template function instantiation, it'll leave C++ metaprogramming behind.

P.S. As you know, I converted DMDScript from C++ to D. The source code shrunk by about 1/3, line for line, even *excluding* the gc I wrote for C++ and all the other support code C++ needed. The code runs faster, took me a whole week to get debugged and through the test suite, is easier to understand, and is much less brittle. As a challenge to you, take the D version of DMDScript and convert it back to C++. Use Boost, whatever you want. I'd love to see the results. Heck, it'd make a great case history for an article series you can do!
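For anyone who hasn't met them, the features in that list fit in a few lines of D; everything here (names, data) is invented for the sketch:

    import std.stdio;

    void main()
    {
        int[] data = [3, 1, 4, 1, 5];

        // nested function: sees the enclosing scope directly
        int total;
        void accumulate(int x) { total += x; }

        // foreach: no index bookkeeping to get wrong
        foreach (int x; data)
            accumulate(x);

        // function literal: a delegate written in place
        void each(void delegate(int) f)
        {
            foreach (int x; data)
                f(x);
        }
        each(delegate void(int x) { writef("%d ", x); });

        writefln("\ntotal = %d", total);
    }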
Apr 15 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3q3jv$1q48$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d3ptmg$1llb$1 digitaldaemon.com...
 But, my friend, you've just done your usual and not answered the
 point I made. I disagree with none of the contents of this post,
 save for the implication that it addresses my point.
Your point, as I saw it, was that D doesn't address the "larger/deeper/harder" problems. I addressed it by disagreeing with your classification about what were the hard problems.
 The issue I have with many languages that appear, on the surface, 
 to
 be 'better' than C++ is that the limits they place on the
 experienced programmer are usually very hard. An example we've 
 seen
 with D in recent days is that we need compiler support for
 irrecoverability.
I'm at a loss understanding what you mean by irrecoverability. Perhaps a pointer to the thread?
 Now whether as a result of the singular genius of Dr Stroustrup, 
 or
 as an emergent property of its complexity, C++ has, for the
 ingenious/intrepid, almost infinite capacity for providing fixes 
 to
 the language by dint of libraries.
I know of no C++ capacity to fix the: Foo *p; uninitialized pointer problem. This is a large, hard, deep problem in my opinion. Based on my experience helping others debug code over the years, (and is largely addressed by moving to automatic memory management).
 Look at my fast string
 concatenation, at Boost's Lambda, etc. etc. Other languages are 
 left
 for dead in this regard.

 In this regard, D is a second class citizen to C++, probably by
 design, and probably for good reason. After all, most of the
 ingenious solutions in C++ are not accessible to most 
 programmers,
 even very experienced ones, as it's just so complex and 
 susceptible
 to dialecticism. But if D proscribes this (arguably semi-insane)
 level of ingenuity, then, IMO, it should address the fundamental
 "big" issues more seriously than it does. Otherwise, people who 
 can
 reach answers for their biggest questions in other languages are
 going to, as I currently do, find it impossible to digest blanket
 statements about D's superiority.
There's no way to do CP in C++ in the general case. I know you're sold on what CP can do for program robustness. Can't do foreach. No nested functions. No function literals. No UTF. These things are simple and easy to use in D, and accessible to ordinary programmers. The Boost stuff is generally neither. A lot of Boost is devoted to solving problems with C++ that aren't problems in D. That said, once D gets implicit template function instantiation, it'll leave C++ metaprogramming behind. P.S. As you know, I converted DMDScript from C++ to D. The source code shrunk by about 1/3, line for line, even *excluding* the gc I wrote for C++ and all the other support code C++ needed. The code runs faster, took me a whole week to get debugged and through the test suite, is easier to understand, and is much less brittle. As a challenge to you, take the D version of DMDScript and convert it back to C++. Use Boost, whatever you want. I'd love to see the results. Heck, it'd make a great case history for an article series you can do!
Sigh. This is just more "D has all these great features" propaganda again. You're not answering my points, just telling me that D is really good at chewing much of the existing low-hanging fruit, and _some_ of the high. My thesis is that there is precious little (if any) evidence for reasons not to chew most/all of the high, and also that it's introduced some of its own new low. But I know you just don't see this, so I yield.
Apr 15 2005
parent reply Kevin Bealer <Kevin_member pathlink.com> writes:
GC, auto-init of data, checking of switch defaults, array bounds checking, and especially null checking have been quite valuable to me for reliability.

GC and auto-init in particular are useful, not just because they provide reliability, but because they unclutter the code. For me a top contender for bug incidence in C++ is simply that so much has to be specified, that the extra syntax provides cover for broken code.

On another note, recently I converted an AI toy program from Java to D, then tightened up the performance (by many orders of magnitude). I simply renamed the files to ".d", and was amazed to discover that after fixing the syntax errors and implementing a few missing parts, there was only one code error: argc was off by one (language difference). After that, it -just ran-.

In other words, the syntax is very good at guiding your hand. In C++ many, many things are legal code but don't do what is expected. The D version caught virtually everything at compile time.

Thanks, Walter.

Kevin

(P.S. If anyone wants a program to solve Sokoban puzzles in D...)
Apr 16 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Kevin Bealer" <Kevin_member pathlink.com> wrote in message
news:d3qdtc$23js$1 digitaldaemon.com...
 GC, auto-init of data, checking of switch defaults, array bounds
 checking, and especially null checking have been quite valuable to me
 for reliability.

 GC and auto-init in particular are useful, not just because they
 provide reliability, but because they unclutter the code.  For me a
 top contender for bug incidence in C++ is simply that so much has to
 be specified, that the extra syntax provides cover for broken code.
Yes. For example, most declarations in C++ have to be done twice. Differences can inadvertently creep in. This is not what one would expect to see in a modern language. Just being able to express directly what you want to be done improves the reliability of the code, as you wrote.
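A tiny illustration of the point, in the thread's usual C++/D pairing (file and member names invented). The C++ signature exists twice, and the two copies can drift apart; the D one exists exactly once:

C++:

    // foo.h
    class Foo
    {
        int count;
    public:
        int func(int x);
    };

    // foo.cpp
    #include "foo.h"
    int Foo::func(int x) { return count + x; }

D:

    // foo.d
    class Foo
    {
        int count;
        int func(int x) { return count + x; }
    }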
 On another note, recently I converted an AI toy program from Java to
 D, then tightened up the performance (by many orders of magnitude).
 I simply renamed the files to ".d", and was amazed to discover that
 after fixing the syntax errors and implementing a few missing parts,
 there was only one code error: argc was off by one (language
 difference).  After that, it -just ran-.
Cool. I hadn't tried converting any Java programs. Converting C++ code involves hitting the delete key a lot <g>. The real algorithm emerges from the baggage. That's one of the reasons why the D version of DMDScript is faster than the C++. I could see the algorithm better, and so it was easier to tweak.
 In other words, the syntax is very good at guiding your hand.  In C++
 many, many things are legal code but don't do what is expected.  The D
 version caught virtually everything at compile time.

 Thanks, Walter.
You're welcome.
Apr 16 2005
parent reply Sean Kelly <sean f4.ca> writes:
Related question. Would you suggest relying on the default initializers when the default value is acceptable, or is always initializing variables the preferred method? I ask this because of comments you've made in the past that you'd prefer if variables were all default initialized to trap values rather than "usable" values. Basically, I'm finding myself beginning to rely on integral variables default initializing to 0 and not bothering to explicitly initialize them, and I don't want to fall into this habit if there's any chance of the default values changing.

Sean
Apr 17 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Sean Kelly" <sean f4.ca> wrote in message
news:d3u7os$240m$1 digitaldaemon.com...
 Related question.  Would you suggest relying on the default initializers
when
 the default value is acceptable or is always initializing variables the
 preferred method?  I ask this because of comments you've made in the past
that
 you'd prefer if variables were all default initialized to trap values
rather
 than "usable" values.  Basically, I'm finding myself beginning to rely on
 integral variables default initializing to 0 and not bothering to
explicitly
 initialize them, and I don't want to fall into this habit if there's any
chance
 of the default values changing.
That's a very good question. If there was a 'trap' value for, say, ints, D would use it as the default initializer. If one does appear in the future sometime, would D be changed to use it? No, as that would break about every D program in existence, including yours and mine <g>.

Stylistically, however, I think it is better style to put in an explicit initializer when that value will be used. That way, the maintenance programmer knows it's intentional. Also:

    void foo(out int x) { ... }
    ...
    int i = 0;
    foo(i);

I would argue is bad style, as i is intentionally set to a value that is never used. (Let's dub these things 'dead initializers'.) This kind of thing is common in C/C++ programs to get the compiler to quit squawking about "possible uninitialized use of 'i'". Dead initializers and dead code are sources of confusion for maintenance programming, and a language should not force programmers to put them in.
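For contrast, the same call without the dead initializer; foo here is the hypothetical function from the example above, and because x is an out parameter, D gives it a defined value on entry and the callee sets it for the caller:

    import std.stdio;

    void foo(out int x)
    {
        x = 42;     // out parameter: the callee is expected to set it
    }

    void main()
    {
        int i;                   // no '= 0' needed just to placate a compiler
        foo(i);
        writefln("i = %d", i);   // prints 42
    }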
Apr 17 2005
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 "Sean Kelly" <sean f4.ca> wrote in message 
 news:d3u7os$240m$1 digitaldaemon.com...
 
 Related question.  Would you suggest relying on the default
 initializers
when
 the default value is acceptable or is always initializing variables
 the preferred method?  I ask this because of comments you've made
 in the past
that
 you'd prefer if variables were all default initialized to trap
 values
rather
 than "usable" values.  Basically, I'm finding myself beginning to
 rely on integral variables default initializing to 0 and not
 bothering to
explicitly
 initialize them, and I don't want to fall into this habit if
 there's any
chance
 of the default values changing.
That's a very good question. If there was a 'trap' value for, say, ints, D would use it as the default initializer. If one does appear in the future sometime, would D be changed to use it? No, as that would break about every D program in existence, including yours and mine <g>.
They wouldn't, since these trap values will never come to 32bit ints. But 64bit is different.
 Stylistically, however, I think it is better style to put in an
 explicit initializer when that value will be used. That way, the
 maintenance programmer knows it's intentional.
 
 Also: void foo(out int x) { ... } ... int i = 0; foo(i); I would
 argue is bad style, as i is intentionally set to a value that is 
 never used. (Let's dub these things 'dead initializers'.) This kind
 of thing is common in C/C++ programs to get the compiler to quit
 squawking about "possible uninitialized use of 'i'". Dead
 initializers and dead code are sources of confusion for maintenance
 programming, and a language should not force programmers to put them
 in.
In the "bad old days" of ints, we only had 16 bits to work with. That meant, that we regularly did need to use the entire gamut of values of the data type. The day we get "64bit ints" as default (which is 12 months from now), we'll have "ints" that have such a latitude of values that we can spare some for Exceptional Conditions. (Just as is today done with floats.) In other words: that day, we could stipulate, that one particular value of "int" is illegal. For any purpose. (!!!) The same could be stipulated for signed "int64s". ------ Say we decide that (int64_max_value - 1) is an official NAN-INT. And for the signed, (int64_max_value/2 -1) would be the value. Questions: 1) Can anybody give a thoroughly solid reason why we _cannot_ skip using these particular values as "NAN"s? (In other words, can someone pretend that we really (even in theory) will actually need _all_ 64bit integer values in production code?) 2) Could it be possible to persuade the "C99 kings" to adopt this practice? 3) If so, could we/they persuade to have chip makers implement hardware traps for these values? (As today null checks and of/uf are done?) 4) Is it possible at all to have the Old Farts, who hold the keys to Programming World, to become susceptible to our demands? ----- What's so different today? In the old days the amount of memory was more than the width of the processor. (8 bits, and 64k of memory.) Today, they're about equal. (32 bits and 4gigs.) Tomorrow, we'll have 64bit processors, and it'll take a while before we get to 18 Terabytes of mainboard ram. (Likely we'll have 128bit machines before 18TB of memory.) Plus, 64bit integers can cover stuff like the World Budget counted in cents. So, the time has arisen, when we actually can spare a single int value for "frivolous purposes", i.e. NAN functionality. ---- Actually, I could make a deal with Walter: the day you're going to talk with The White Bearded Men, I'll tag along, with a baseball bat on my shoulder, wearing dark Wayfarers.
Apr 17 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:4262E94B.400 nospam.org...
 Say we decide that (int64_max_value - 1) is an official NAN-INT. And for
 the signed, (int64_max_value/2 -1) would be the value.

 Questions:

 1) Can anybody give a thoroughly solid reason why we _cannot_ skip using
 these particular values as "NAN"s? (In other words, can someone pretend
 that we really (even in theory) will actually need _all_ 64bit integer
 values in production code?)
We can't. 64 bit ints are often used, for example, to implement fixed point arithmetic. Furthermore, NANs work best when the hardware is set up to throw an exception if they are used.
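A two-line illustration of the fixed point case (figures invented): when a long holds money as a scaled integer, every bit pattern encodes a legitimate amount, so there is no value to spare for a NAN:

    import std.stdio;

    // dollars held as a count of cents in a long: fixed point with a
    // scale of 100, where every bit pattern is a legitimate amount
    long toCents(long dollars, long cents) { return dollars * 100 + cents; }

    void main()
    {
        long balance = toCents(19, 99);   // $19.99 -> 1999
        balance += toCents(0, 5);         // five cents more -> 2004
        writefln("%d cents", balance);
    }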
 2) Could it be possible to persuade the "C99 kings" to adopt this
 practice?

I just think that no way would that ever happen.
 3) If so, could we/they persuade the chip makers to implement hardware
 traps for these values? (As today null checks and of/uf are done?)
The hardware people could do it by adding an "uninitialized" bit for each byte, much like the parity bit. Then, a hardware fault could be generated when an "uninitialized" memory location is read. This won't break existing practice.
 4) Is it possible at all to have the Old Farts, who hold the keys to
 the Programming World, become susceptible to our demands?
LOL.
Apr 17 2005
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
(Disclaimer: I'm not going to start a war about this. :-) I am however
interested in pursuing this for a small while.)

Walter wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:4262E94B.400 nospam.org...
 
 Say we decide that (int64_max_value - 1) is an official NAN-INT.
 And for the signed, (int64_max_value/2 -1) would be the value.
 
 Questions:
 
 1) Can anybody give a thoroughly solid reason why we _cannot_ skip
 using these particular values as "NAN"s? (In other words, can
 someone pretend that we really (even in theory) will actually need
 _all_ 64bit integer values in production code?)
We can't. 64 bit ints are often used, for example, to implement fixed point arithmetic. Furthermore, NANs work best when the hardware is set up to throw an exception if they are used.
One could argue that if someone uses (64bit) fixed point arithmetic in a field where there is a risk of approaching (our newly invented NAN, i.e. int64_max_value - 1) illegal values, then one should be using 128bit for that particular purpose instead anyway. (Similar holds for signed, too.)

And that actually is the whole problem with my suggestion. I expect it to be monstrously difficult to get "them" to see that this is a valid idea -- now that we suddenly have "very big" integer types at our disposal. One could use the exact same arguments against "wasting" the bit patterns of floating NANs. (And there are truckloads of individual bit patterns wasted there!)

----

To implement this, all we need are two things:

1) A new fetch instruction that traps this particular bit pattern in hardware. (I estimate, on a 64bit CPU, this should take less than 500 gates -- pretty small compared to some of the other "new" (and IMHO less pressing) features.)

2) The compiler industry to agree to use this whenever we are fetching this as an rvalue. (The "regular" fetch would still exist, for all other purposes, i.e. when the 64bit entity represents whatever else than a 64bit int.)

And while we're at it, of course the same for 128bit ints. The compiler would use the "new" fetch only where it does not know for sure that the variable has been initialised. So, for example, loops would be free to wrap around. (Not that I'd want them to.)

-----

So, technically this would be trivial and cheap to implement. All we need is the will to do it. That will has to be aroused somehow. Maybe Walter or Matthew could take this up in CUJ, or Dr. Dobb's? -- I'd even be happy to assist in the writing, or in countering the counter arguments you two might get! (The baseball bat.)

<pr-talk-mode> You'd get some extra fame for raising hell about this, too. No matter which side wins at the end. And all that fame is needed the day we start pushing D with a vengeance. Or maybe I should get Andy Koenig raving about this instead? </pr-talk-mode>

This is an important issue, and (as so many times in history) when the means to do something about it become only gradually possible, it often is unduly hard to get everybody to see that "hey, we can already do it".
 2) Could it be possible to persuade the "C99 kings" to adopt this
 practice?
I just think that no way would that ever happen.
 3) If so, could we/they persuade to have chip makers implement
 hardware traps for these values? (As today null checks and of/uf
 are done?)
Oh, and incidentally, I've figured out how to extend this to 128bit, 256bit, etc. on 64bit machines.
 The hardware people could do it by adding an "uninitialized" bit for
 each byte, much like the parity bit. Then, a hardware fault could be
 generated when an "uninitialized" memory location is read. This won't
 break existing practice.
Way too complicated.

Oh, and about practice: we wouldn't break any existing code
_that_is_not_broken_already_. For this new setup to break a single
program: first, somebody has to be un-rigorous about initialising his
variables; second, the guy has to be operating on values pretty near
wrap-around. Now, I do see that used, but only for chip testing, or other
very special purposes. (And somebody doing uint math relying on underflow
for home-made "temporary" signed math ought to be shot anyway.)

Besides, this would of course be trappable and quenchable. :-) If not
(gasp) also switchable with a pragma.
 4) Is it possible at all to have the Old Farts, who hold the keys
 to Programming World, to become susceptible to our demands?
LOL.
I've had ministers of the Cabinet do what I want. It's just a matter of
making them see. And the thing to see here, in its entirety, is this:
64bit is suddenly so big that we really and honestly can actually _afford_
to waste a single value. -- Which has never happened before (with 32, 16,
or 8 bits).
Apr 18 2005
next sibling parent Thomas Kuehne <thomas-dloop kuehne.thisisspam.cn> writes:
Georg Wrede wrote on Mon, 18 Apr 2005 12:13:25 +0300:
 (Disclaimer: I'm not going to start a war about this. :-) I am however
 interested in pursuing this for a small while.)
 1) Can anybody give a thoroughly solid reason why we _cannot_ skip
 using these particular values as "NAN"s? (In other words, can
 someone pretend that we really (even in theory) will actually need
 _all_ 64bit integer values in production code?)
We can't. 64 bit ints are often used, for example, to implement fixed point arithmetic. Furthermore, NANs work best when the hardware is set up to throw an exception if they are used.
One could argue that if someone uses (64bit) fixed point arithmetic in a field where there is a risk of approaching (our newly invented NAN, i.e. int64_max_value - 1) illegal values, then one should be using 128bit for that particular purpose instead anyway.
<snip>

I can only wholeheartedly agree and support anyone trying to add NAN-int
support. The lack of NAN-int has caused me some really hard to trace bugs.

Thomas
Apr 18 2005
prev sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
You're overlooking the fact that a 2^x bit integer isn't arbitrary the way
a floating point representation is. There is no "unused" or "redundant"
value within the range of possible bit patterns, because the range of bit
patterns "is" the numeric range. In fact, the signed and unsigned
representations are exactly the same, except that in the signed
representation the addition of the place value of the high order bit is
replaced by subtraction of that same value.

For example, a 2^1 bit signed range has a -2's place and a 1's place,
giving it a range of -2 to +1, while a 2^1 bit unsigned range has a 2's
place and a 1's place, giving it a range of 0 to 3. Double the number of
data bits, for a 2^2 bit signed range, and you get a -8's place, a 4's
place, a 2's place, and a 1's place, giving it a range of -8 to +7, while
a 2^2 bit unsigned range has an 8's place, a 4's place, a 2's place, and a
1's place, giving it a range of 0 to 15. The same simple math holds true
for any number of data bits, even if it's not an exact power of two, but
increasing by anything other than a power of two leaves the representation
of bit positions non-uniform. Likewise, one of the advantages of the
straight binary unsigned and 2's complement binary signed data types is
that they are uniform. No extra information is necessary to interpret
them, and no extra steps are necessary before they can be used.
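That rule is easy to check mechanically. A small sketch that spells it out
for all 4-bit patterns (purely illustrative):

import std.stdio;

// the signed value is the unsigned value with the high-order place (8)
// subtracted instead of added
void main()
{
    for (int pattern = 0; pattern < 16; pattern++)
    {
        int unsignedVal = pattern;                        // 8,4,2,1 places
        int signedVal   = (pattern & 7) - (pattern & 8);  // -8,4,2,1 places
        writefln("%04b  unsigned = %2d  signed = %2d",
                 pattern, unsignedVal, signedVal);
    }
}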

TZ

"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:42637A35.50407 nospam.org...
 (Disclaimer: I'm not going to start a war about this. :-) I am however
 interested in pursuing this for a small while.)
<snip>
Apr 20 2005
prev sibling next sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Yes, 64 bit ints were standardized long before processors with a 64 bit data
bus became common, and there are many reasons why it would be a bad idea to
try to "replace" that standard at this time.

That said, however, I don't think there would be anything wrong with adding a
new "enhanced" integer data type that includes things like a "not an integer"
value, as well as other enhancements.

The trouble, I think, would be getting enough people to agree on a "standard"
enhanced integer type.  I have, for example, made data types with only integer
magnitude, but a floating point angle from the "positive" vector direction to
the data's location within the complex number plane.  Very useful, but not
something "everyone" would want.

I've also made "nearly integer" data types, such as a large range of integer
values with a small range of floating point positions, to expand the range of
valid numbers within the available precision.

For example, using an exponent value in the range of -3 to 4, representing
multiplication of the integer mantissa by 1000 raised to the exponent value,
can change a signed 16 bit range (-32768..32767) into -0.000008192 ..
0.000008191, -0.008192 .. 0.008191, -8.192 .. 8.191, -8192 .. 8191,
-8192000 .. 8191000, -8192000000 .. 8191000000, -8192000000000 ..
8191000000000, or -8192000000000000 .. 8191000000000000, which can be handy
with certain types of statistical data, but probably isn't much use for
anything else.  Change the base of the exponent from 1000 to 100, or 16, and
which uses are reasonable also changes... so what is best turns out to be
strictly a matter of opinion.
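A toy rendering of that "nearly integer" scheme, with invented names and the
base-1000 exponent described above (a sketch, not a proposal):

import std.stdio;

// an integer mantissa scaled by 1000^exponent, exponent in -3 .. 4
struct Scaled
{
    short mantissa;   // e.g. -8192 .. 8191
    byte  exponent;   // -3 .. 4

    double value()
    {
        double v = mantissa;
        for (int i = 0; i < exponent; i++) v *= 1000.0;
        for (int i = 0; i > exponent; i--) v /= 1000.0;
        return v;
    }
}

void main()
{
    Scaled s;
    s.mantissa = 8191;
    s.exponent = -2;
    writefln(s.value());   // prints 0.008191
}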

Yes, a useful extended int, or pseudo-int, could be agreed on by enough people
that it could become "a standard" eventually, but I don't see it as likely
that it would ever replace the current standard, because simpler is usually
more universal... and the existing standard for integer representations of any
power-of-two number of binary digits is so simple that it's almost "generic"
in nature.

TZ


"Walter" <newshound digitalmars.com> wrote in message
news:d3v6kb$2ucq$1 digitaldaemon.com...
 "Georg Wrede" <georg.wrede nospam.org> wrote in message
 news:4262E94B.400 nospam.org...
 Say we decide that (int64_max_value - 1) is an official NAN-INT. And for
 the signed, (int64_max_value/2 -1) would be the value.
<snip>
Apr 19 2005
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 Is it possible at all to have the Old Farts, who hold the keys to
 Programming World, to become susceptible to our demands?
LOL.
Suppose the "over 60" bit type was actually 63 bits. How many would
seriously start crying about 63 bits being seriously deficient? Their
opposition would surely suggest 127 bits for those purposes.

Well, now we would have 2^64 - 1 values. We lose one single value. A
national Disaster. And we'd lose this value _only_ in situations where
we're dealing with a possibly uninitialized value. Not a huge loss, IMHO.

The specific issue here is that "they" are so old that they'd have a hard
time noticing all of a sudden that we actually do have values to spare.
Extending this thought, it should be equally hard to have them designate
one single value for NAN even with 1024 bit integers. (Seriously.)

I can see the worries of these guys. Traditionally a lot of code has been
dependent on "integer" overflow hardware traps, which have been
extensively used when doing integer arithmetic wider than the accumulator
in the processor. And even today, with all kinds of encryption getting
increasingly important, that is not going to go away.

I think this would not be hampered in any way by a compiler that
initialises ints with this NAN value (in the cases where it can't
immediately see for sure that the variable is assigned a value before
first use). By the same token, at reading the value in all places where
the compiler "knows" that the variable is already initialised, the NAN
checks would not be done.

-----

While I'm not suggesting or demanding, I still write the following: (Oh,
and don't anybody even think of D 1.0 here. This is purely academic at
this stage.)

There are several alternative things one could do here:

1) Suppose we create a new data type, called GuardedInt64. This type would
always be checked against the NAN value upon retrieval. (A single machine
instruction comparing with an immediate value. Not Enormously Slow.)

2) Or, we could decide that non-release builds check against NAN if there
is reason to be unsure whether the variable is already initialised. If the
compiler can see that there is no danger of that, then it would not bother
to check. (This NAN thing would also be switchable on the command line,
and/or with a pragma.)

3) Or, in non-release code, for every int there could be a "shadow
variable", whose purpose would only be to know whether the real variable
was already initialised. (This would be slow, but hey, non-release code in
D is already slower -- which is OK.) (The naïve implementation being that
every time you write to your int, the shadow int gets written a 1. And
every time you read your int, the shadow int gets checked for null. Null
denoting that we have a read before any write.) A library sketch of this
option appears below.

4-300) And probably hundreds of other variations. :-)

-----------

It's a shame that we don't yet have the Micro Forks releases (those
"Everyone Tinker Yourself Instead of Crowding the Newsgroup with Demands
and Dreams" packages). It would be so easy for someone to create a (for
example) GuardedInt64 type, and let others use that particular compiler
for a while, for some real-world experience. With the same effort,
probably anybody doing that would implement the same for 32bit ints, both
signed and unsigned. This would give us even more real world experience.

(( Aahh, the Micro Forks releases! Man, we could speed up D research and
development! We'd multiply the speed, instead of merely adding. Why do you
think open source runs circles around Bill, and even Larry E? It's not
that there are thousands of developers. It's that everyone gets the code
to tinker with, doing his own pet peeve, or his own dream. A lot of tiny
springs make the Amazon. We need them now, not when D is at 5.5. Ha, the
C++ Beards wouldn't even know what hit them; by that time we'd be on
Jupiter! ))

Actually, this could be done with my (still in planning ;-( ) "D
preprocessor / D language lab / D meta language compiler". But, alas,
that's at least months away (if not Quarters). I wish I were 30 years
younger!

But a more practical, sooner, and for the common good a lot better
solution would really be the Micro Forks tinkering distro. (Be it
Walter's, Anders', David's or somebody else's.)

-----------

A language that has the audacity to screw up thoroughly established
practices ("bit" booleans, a serious language with GC, etc. (disclaimer:
IMHO the latter is good)) could very well be the one forerunner with
int-NANs. _Somebody_ has to take the first step.
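Option 3 needs no compiler support to try out. A minimal library sketch,
with invented names (a real implementation would be compiler-generated and
debug-only):

import std.stdio;

// each int carries a shadow flag: writes set it, reads assert on it,
// so a read-before-write fails loudly in a debug build
struct ShadowedInt
{
    private int  val;
    private bool written = false;

    void set(int v) { val = v; written = true; }

    int get()
    {
        assert(written);   // a read before any write trips this
        return val;
    }
}

void main()
{
    ShadowedInt x;
    x.set(3);
    writefln(x.get());   // prints 3

    ShadowedInt y;
    y.get();             // assertion failure in a debug build
}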
Apr 20 2005
parent reply "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
I think you missed the point.  The number 64 isn't arbitrary.  It's a function
of the binary effect of the hardware's wiring.  Yes, you can do what ever you
want with the data that fits in those 64 bits, but you need a way to know what
they are first... and the standard 64 bit integer representations are there for
that purpose.

TZ

"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:42669804.2060705 nospam.org...
 Walter wrote:
 Is it possible at all to have the Old Farts, who hold the keys to
 Programming World, to become susceptible to our demands?
LOL.
Suppose the "over 60" bit type was actually 63 bits. How many would seriously start crying about 63 bits being seriously deficient? <snip>
Apr 20 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
TechnoZeus wrote:
 I think you missed the point.  The number 64 isn't arbitrary.  It's a
 function of the binary effect of the hardware's wiring.  Yes, you can
 do what ever you want with the data that fits in those 64 bits, but
 you need a way to know what they are first... and the standard 64 bit
 integer representations are there for that purpose.
Oh, ok, I must be getting old and dumb.
 "Georg Wrede" <georg.wrede nospam.org> wrote in message
 news:42669804.2060705 nospam.org...
 
 Walter wrote:
 
 Is it possible at all to have the Old Farts, who hold the keys
 to Programming World, to become susceptible to our demands?
LOL.
Suppose the "over 60" bit type was actually 63 bits. How many would seriously start crying about 63 bits being seriously deficient? <snip>
Apr 20 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
Actually, I would say just the opposite.

Fact is, there's no reason an "additional" data type couldn't become
a new standard representation for use in higher level languages and such...
as long as the underlying "base standard" is still supported.

As I mentioned, there are many such derived data types possible.  I think the
real challenge would be to come up with one that gets used enough to be
considered a potential standard.

Hmmm... I like challenges.  Maybe some day, if I get the time.  :)  Would you
mind?

TZ


"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:4266E2BD.2070804 nospam.org...
 TechnoZeus wrote:
 I think you missed the point.  The number 64 isn't arbitrary.  It's a
 function of the binary effect of the hardware's wiring.  Yes, you can
 do what ever you want with the data that fits in those 64 bits, but
 you need a way to know what they are first... and the standard 64 bit
 integer representations are there for that purpose.
Oh, ok, I must be getting old and dumb.
 "Georg Wrede" <georg.wrede nospam.org> wrote in message
 news:42669804.2060705 nospam.org...

 Walter wrote:

 Is it possible at all to have the Old Farts, who hold the keys
 to Programming World, to become susceptible to our demands?
LOL.
Suppose the "over 60" bit type was actually 63 bits. How many would seriously start crying about 63 bits being seriously deficient? Their opposition would surely suggest 127 bits for those purposes. Well now we would have 2^64 - 1 values. We lose one single value. A national Disaster. And we'd lose this value _only_ in situations where we're dealing with a possibly uninitialized value. Not a huge loss, IMHO. The specific issue here is that "they" are so old, that they'd have a hard time noticing all of a sudden that we actually do have values to spare. Extending this thought, it should be equally hard to have them designate one single value for NAN even with 1024 bit integers. (Seriously.) I can see the worries of these guys. Traditionally a lot of code has been dependent on "integer" overflow hardware traps, wich have been extensively used when doing integer arithmetic wider than the accumulator in the processor. And even today, with all kinds of encryption getting increasingly important, that is not going to go away. I think this would not be hampered in any way with a compiler that initialises ints with this NAN value (in the cases where it can't immediately see for sure that the variable is assigned a value before first use). By the same token, at reading the value in all places where the compiler "knows" that the variable is already initialised, the NAN checks would not be done. ----- While I'm not suggesting or demanding, I still write the following: (Oh, and don't anybody even think of D 1.0 here. This is purely academic at this stage.) There are several alternative things one could do here: 1) Suppose we create a new data type, called GuardedInt64. This type would always be checked against the NAN value upon retrieval. (A single machine instruction comparing with an immediate value. Not Enormously Slow.) 2) Or, we could decide that non-release builds check against NAN if there is reason to be unsure whether the variable is already initialised. If the compiler can see that there is no danger of that, then it would not bother to check. (This NAN thing would also be switchable on the command line, and/or with a pragma.) 3) Or, in non-release code, for every int there could be a "shadow variable", whose purpose would only be to know whether the real variable was already initialised. (This would be slow, but hey, non-release code in D is already slower -- which is OK.) (The naïve implementation being that every time you write to your int, the shadow int gets written a 1. And every time you read yuor int, the shadow int gets checked for null. Null denoting that we have a read before any write.) 4-300) And probably hundreds of other variations. :-) ----------- It's a shame that we don't yet have the Micro Forks releases (those "Everyone Tinker Yourself Instead of Crowding the Newsgroup with Demands and Dreams" packages.) It would be so easy for someone to create a (for example) GuardedInt64 type, and let others use that particular compiler for a while, for some real-world experiences. With the same effort, probably anybody doing that would implement the same for 32bit ints, both signed and unsigned. This would give us even more real world experience. (( Aahh, the Micro Forks releases! Man, we could speed up D research and development! We'd multiply the speed, instead of merely adding. Why do you think open source runs circles around Bill, and even Larry E? It's not that there are thousands of developers. 
It's that everyone gets the code to tinker with, doing his on pet peeve, or his own dream. A lot of tiny springs make the Amazon. We need them now, not when D is at 5.5. Ha, the C++ Beards wouldn't even know what hit them, by that time we'd be in Jupiter! )) Actually, this could be done with my (still in planning ;-( ) "D preprocessor / D language lab / D meta language compiler". But, alas, that's at least months away (if not Quarters). I wish I were 30 years younger! But more practical, sooner, and for the common good a lot better solution would really be the Micro Forks tinkering distro. (Be it Walter's, Anders', David's or somebody else's.) ----------- A language that has the audacity to screw up thoroughly established practices ("bit" booleans, a serious language with GC, etc. (disclaimer: IMHO the latter is good)), could very well be the one forerunner with int-NANs. _Somebody_ has to take the first step.
Apr 20 2005
prev sibling parent reply Kevin Bealer <Kevin_member pathlink.com> writes:
In article <d3u9t4$25qd$1 digitaldaemon.com>, Walter says...

<snip>
That's a very good question. If there was a 'trap' value for, say, ints, D
would use it as the default initializer. If one does appear in the future
sometime, would D be changed to use it? No, as that would break about every
D program in existence, including yours and mine <g>.

Stylistically, however, I think it is better style to put in an explicit
initializer when that value will be used. That way, the maintenance
programmer knows it's intentional.
I like this approach somewhat, but let me offer a counter argument: I've
heard a rule that if you have a lot of code that is constantly checking for
empty loops, like this:

    if (x.size != 0)
    {
        for (size_t i = 0; i < x.size; i++)
        {
            // ...
        }
    }

.. then you are probably coding your end conditions or containers wrong.

Similarly, I've found that the value of zero seems to be the correct
starting value for integer types in the overwhelming majority of cases. For
example, the "straightforward" implementation of almost any class in the STL
can initialize all its fields to 0 or null.

    /// (fake) vector class that tracks statistics on its contents
    /// ("deflt" rather than "default", which is a keyword)
    class statistical_vector(T)
    {
    private:
        // zeroes look okay here...
        size_t size, capacity, max_index, min_index, mean_index;

        // (.init) looks okay here...
        T deflt, total, max, min, mean;

        // explicit values; not terrible, but not pulitzer material...
        //size_t size=0, capacity=0, max_index=0, min_index=0, mean_index=0;
        //T deflt=T.init, total=T.init, max=T.init, min=T.init, mean=T.init;
    }
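For reference, this is what D's auto-init actually gives you today --
integers start at zero, floats at NAN, chars at an invalid code unit,
references at null:

import std.stdio;

void main()
{
    int    i;   // 0
    float  f;   // nan
    char   c;   // 0xFF
    writefln(i, " ", f, " ", cast(int) c);
    assert(i == int.init);
    assert(c == char.init);
}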
Also:
    void foo(out int x) { ... }
    ...
    int i = 0;
    foo(i);
I would argue is bad style, as i is intentionally set to a value that is
never used. (Let's dub these things 'dead initializers'.) This kind of thing
is common in C/C++ programs to get the compiler to quit squawking about
"possible uninitialized use of 'i'". Dead initializers and dead code are
sources of confusion for maintenance programming, and a language should not
force programmers to put them in.
Particularly when it comes to warnings and default-generated methods, C++
tends to have many cases where you need to overspecify or disable things
manually. In some cases the design decision is understandable, but it does
produce clutter.

If warnings do become a standard part of D, let's have a default set of them
defined by the language and not left up to the implementor. I've spent a lot
of time (at work) compiling my C++ code and then fixing (for correct code)
separate sets of warnings on ia32, ia64, Solaris, Windows and OS/X...

Kevin
Apr 17 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Kevin Bealer" <Kevin_member pathlink.com> wrote in message
news:d3v3hp$2rge$1 digitaldaemon.com...
 I've spent a lot of time (at work) compiling my C++ code and then fixing
(for
 correct code) seperate sets of warnings on ia32, ia64, Solaris, Windows
and
 OS/X...
Yes, that's a very common problem. It gets really annoying when two compilers each issue warnings for the complement of the way the other one insists the code should be written.
Apr 17 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d3v6kc$2ucq$2 digitaldaemon.com...
 "Kevin Bealer" <Kevin_member pathlink.com> wrote in message
 news:d3v3hp$2rge$1 digitaldaemon.com...
 I've spent a lot of time (at work) compiling my C++ code and then 
 fixing
(for
 correct code) seperate sets of warnings on ia32, ia64, Solaris, 
 Windows
and
 OS/X...
Yes, that's a very common problem. It gets really annoying when two compilers each issue warnings for the complement of the way the other one insists the code should be written.
Hmph! Tell me about it!
Apr 17 2005
prev sibling parent reply zwang <nehzgnaw gmail.com> writes:
Kevin Bealer wrote:
 GC, auto-init of data, checking of switch defaults, array bounds checking, and
 especially null checking have been quite valuable to me for reliability.
 
 GC and auto-init in particular are useful, not just because they provide
 reliability, but because they unclutter the code.  For me a top contender for
 bug incidence in C++ is simply that so much has to be specified, that the extra
 syntax provides cover for broken code.
 
 On another note, recently I converted an AI toy program from Java to D, then
 tightened up the performance (by many orders of magnitude).  I simply renamed
 the files to ".d", and was amazed to discover that after fixing the syntax
 errors and implementing a few missing parts, there was only one code error:
 argc was off by one (language difference).  After that, it -just ran-.
 
 In other words, the syntax is very good at guiding your hand.  In C++ many many
 things are legal code but don't do what is expected.  The D version caught
 virtually everything at compile time.
 
 Thanks, Walter.
 
 Kevin
 
 (P.S. If anyone wants a program to solve Sokoban puzzles in D...)
Hi Kevin, I'm interested in the pruning algorithm of your Sokoban solver.
Apr 16 2005
parent Kevin Bealer <Kevin_member pathlink.com> writes:
In article <d3qovm$2gl3$1 digitaldaemon.com>, zwang says...
<snip>
Hi Kevin, I'm interested in the pruning algorithm of your Sokoban solver.
It's an A* search. (I would like to do SMA*, but haven't gotten there yet.)
This means that for every board position I need to compute a cost of
completion. The higher the number it returns, the better, as long as it
never overestimates. The algorithm produces the optimal solution.

Pruning was originally done by checking for "stuck boxes". Now I compute a
table of the minimal costs from every location to every other location,
using only legal "pushes" (boxes cannot move north unless the player can
stand on the square to the south of the box). This is done once at the
beginning for the entire board (it's a quick computation).

"Pruning" only happens in two ways. First, boxes will not be pushed onto
squares that have infinite cost (in the above table) to all target squares.
Secondly, because positions are expanded in least-cost-first order (using
the cost calculation function + existing costs), the first time you see a
position is essentially the lowest cost to get the board into that state.
So once I expand a board position, I never expand it again. I keep a hash
of all previously seen board positions.

There are also several ways to relax the optimality requirement, to find a
solution sequence faster.

Kevin
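A skeleton of that expansion loop, assuming a string encoding of the board
and a stub heuristic (all names invented; the real solver surely differs in
detail):

import std.stdio;

// expand board positions in least (cost so far + estimate) order, and
// never expand the same position twice
struct Node { char[] board; int cost; }

// admissible heuristic stub -- returning 0 never overestimates, but
// degenerates to uniform-cost search
int estimate(char[] board) { return 0; }

void search(Node start)
{
    Node[] open;
    open ~= start;
    bool[char[]] seen;   // hash of already-expanded positions

    while (open.length)
    {
        // pull out the open node with the smallest cost + estimate
        size_t best = 0;
        for (size_t i = 1; i < open.length; i++)
            if (open[i].cost + estimate(open[i].board) <
                open[best].cost + estimate(open[best].board))
                best = i;
        Node n = open[best];
        open[best] = open[open.length - 1];
        open = open[0 .. open.length - 1];

        if (n.board in seen)
            continue;         // the first expansion was already the cheapest
        seen[n.board] = true;

        // ... check for the goal here, else append each legal push to open ...
    }
}

void main()
{
    Node start;
    start.board = "initial board".dup;
    start.cost = 0;
    search(start);
}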
Apr 16 2005
prev sibling parent xs0 <xs0 xs0.com> writes:
 An example we've seen 
 with D in recent days is that we need compiler support for 
 irrecoverability.
 Now whether as a result of the singular genius of Dr Stroustrup, or 
 as an emergent property of its complexity, C++ has, for the 
 ingenious/intrepid, almost infinite capacity for providing fixes to 
 the language by dint of libraries. Look at my fast string 
 concatenation, at Boost's Lambda, etc. etc. Other languages are left 
 for dead in this regard.
What kind of argument is that? C++'s flaws can be fixed by writing some code,
but you don't allow the same for D. If you don't want to recover from CP
violations, then don't, and there you have irrecoverability. Isn't that a far
cleaner way of not recovering, compared to the language-assisted purist
library writer shooting your app in the head (and I don't mean you personally
here)?

Furthermore, considering that D has support for inline assembler, you can
code absolutely anything you want. So, make us all happy, like you did for
C++, and write a function that does whatever an irrecoverable exception
proceeding down the call stack would do, and we'll all be able to use it when
we want.

xs0
Apr 15 2005
prev sibling parent Sean Kelly <sean f4.ca> writes:
In article <d3prc1$1k77$1 digitaldaemon.com>, Walter says...
I'm going to posit that your 10+ years of professional experience in C++
might be skewing your perceptions here. I have nearly 20 years experience
writing C++, and I rarely make mistakes any more that are not algorithmic
(rather than dumb things like forgetting to initialize). I am so *used* to
C++ that I've learned to compensate for its faults, potholes, and goofball
error prone issues, such that they aren't a personal issue with me anymore.
I suspect the same goes for you.
Same here. But I assume Matthew is suggesting fixes for problems he's had in
the past.
P.S. I said these C++ problems rarely cause me trouble anymore in my own
code. So why fix them? Because I remember the trouble that they used to
cause me, and see the trouble they cause other people. Being in the compiler
business, I wind up helping a lot of people with a very wide variety of
code, so I tend to see what kinds of things bring on the grief.
As I assume Matthew is doing with his suggestions.
P.P.S. I believe the C++ uninitialized variable problem is a far deeper and
costlier one than you seem willing to credit it for. It just stands head,
shoulders, and trunk above most of the other problems with the exception of
goofing up managing memory allocations. It's one of those things that I have
lost enormous time to, and as a consequence have learned a lot of
compensations for. I suspect you have, too.
Personally, this has never been much of an issue for me, as I've always been diligent about initializing variables. At the same time, I do support your efforts here as when these bugs do occur they're a nightmare to track down.
P.P.P.S. I know for a fact that I spend significantly less time writing an
app in D than I would the same thing in C++, and that's including debugging
time. There's no question about it. When I've translated working & debugged
apps from C++ to D, D's extra error checking exposed several bugs that had
gone undetected by C++ (array overflows was a big one), even one with the
dreaded C++ implicit default break; in the switch statement <g>.
Agreed :) This is one reason I have so much faith in D--I have far less experience with it than with C++ and yet I still find myself more productive with D. Sean
Apr 16 2005
prev sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Regan Heath wrote:
 On Tue, 12 Apr 2005 15:16:55 +1000, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 
 "Regan Heath" <regan netwin.co.nz> wrote in message 
 news:opso3kiga523k2f5 nrage.netwin.co.nz...
 
 On Mon, 11 Apr 2005 21:01:40 -0400, Ben Hinkle 
 <ben.hinkle gmail.com>  wrote:
 
 The distinction in Java is poorly designed and can be
 covered in D by subclassing Object directly.
So if you want an un-recoverable error you subclass object and never catch Object directly?
What do you have in mind as a user-defined unrecoverable error?
Nothing new, I was looking at:
  OutOfMemory
  AssertionFailure
and thought, for most applications these are treated as unrecoverable errors.
They may be, but that's quite wrong. OutOfMemory is practically unrecoverable, but should not be classed as an unrecoverable exception.
I want to allocate memory. I could first check if there is enough, but
since I'm on a Real Operating System, there are other users, who may also
allocate memory between my check and my actual allocation. So I let my
program allocate, and if it runs out of memory, it then deallocates some,
and tries to get by with what it got.

Another scenario: In 1982 I bought my first computer. It wrote "ready" as
the prompt, and you could do arithmetic right on the command line. I
remember the first time "it spoke to me": I was trying out increasingly
more complicated functions, and suddenly it wrote "formula too complex".
Man, I sat quiet for a good while, staring in awe at the unbelievable
intelligence of a piece of silicon. Now I know the interpreter had run out
of stack space. But instead of crashing, it recovered gracefully.

Trying to load a too-big picture or movie in an editing program should not
terminate the program. It should inform you, and then let you choose some
other file.

---------

In general, I think the responsibility for terminating the program should
lie with the particular piece of code that first becomes aware of the
unrecoverability of a situation. Typically, a server program should
terminate if it can't open the log file. A filter program should terminate
if it can't open stdout. A hd-formatter should terminate if there is no
hd. But there are no hard rules like "can't open file, so terminate".

And the "recognition of a terminate situation" may not always be at the
first site of "error detection". It may just as well be several
throws/catches up in the call stack. An example would be a routine that
can't do something. It throws an exception, and somewhere higher up
there's another routine that knows that this exception, in these
circumstances, means we have an unrecoverable situation. And then _that_
piece of code throws an uncatchable.

--------

Many languages offer programmers the ability to write program termination
handlers. These are usually for situations where the program has to do
some unusual tidying up before actual termination. In Real Operating
Systems the programmer can additionally catch _any_ signal (except the one
uncatchable "shot in the neck") that would normally lead to program
termination, and do as he pleases instead.

Ultimately we (as language and compiler people) should let the programmer
decide what to do, and not force a particular chain of events upon him.
That said, the Error/Exception hierarchy, combined with
throw/catch/finally, is there to help the programmer sort out this mess.
To make this possible, the libraries should give a good and solid
foundation for doing this. But nothing more.

----------

Done right, this means that most not-hoped-for situations and events throw
something. This need not necessarily be caught at the next level, if that
level wouldn't know what to do with it. And if it is not caught even in
main(), then the runtime should terminate the program (of course
displaying the reason). The programmer may do catch-alls (if he is
lazy/stupid), and it is not for us to prevent this.

On the other hand, there should exist non-catchable throwables. These are
for situations where the code (be it library code, or top level user-code,
or whatever) notices we have to terminate. An example would be a program
that operates a robot arm. Certain combinations of inputs from the robot
arm sensors may denote an unacceptable or unhandlable condition. The piece
of code that notices this should throw an uncatchable error. (Lame
example, but "it's the thought that counts". :-) )

Whether we intend to use non-catchable exceptions should not prevent us
from incorporating them in the hierarchy. We should have both.

-------------

The _sole_ point of having a hierarchy is to let catch clauses catch
several different kinds of errors at a time. (For example, someone might
want to catch all file errors.) A good hierarchy has this as its main
goal. In other words, it is not sufficient to just classify errors "in a
logical way". What is needed is diligent study of how and when such errors
might be thrown and caught, and how this could be made as simple and
natural as possible to use in user code.

-------------

Personally I think the language need not define more than two throwables:
Error and Exception. The former being uncatchable, and the latter being
catchable. The runtime and standard libraries can then define the
hierarchy as they want. I'd like the hierarchy to be of tight granularity
(a big and thick tree). This would make it convenient and natural for the
programmer to actually get into the habit of using exceptions, catching
precisely what is needed, and gradually start creating his own exceptions.
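In code, the intended usage would look something like this sketch
(FileException is illustrative here, not a reference to any particular
phobos class):

import std.stdio;

// catch, as precisely as possible, only what we can actually handle;
// anything on the (uncatchable) Error branch keeps unwinding and takes
// the program down, reason displayed by the runtime
class FileException : Exception
{
    this(char[] msg) { super(msg); }
}

void openConfig()
{
    throw new FileException("cannot open config file");
}

void main()
{
    try
    {
        openConfig();
    }
    catch (FileException e)   // precisely what we know how to handle
    {
        writefln("falling back to built-in defaults: ", e.toString());
    }
    // deliberately no catch-all here
}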
 Agreed, in part, why "class" it as anything but what it is?
 
 Conversely, AssertionFailure is practically recoverable, but most 
 certainly should be classed as unrecoverable.
As above, why "class" it as anything but what it is?
 (And the language should mandate and enforce the irrecoverability.)
Disagree.
Apr 12 2005
prev sibling next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <bhinkle mathworks.com> wrote in message 
news:d3f1nh$rj4$1 digitaldaemon.com...
 I'm looking into the Error/Exception situation in phobos and 
 previous posts
 by Walter and others generally argued that Exceptions are 
 recoverable and
 Errors are not. I believe there isn't an application-independent 
 definition
 of what recoverable means
An exception that may not be quenched, or in whose quenching the runtime
takes over to terminate the process. Naturally, the system should provide an
opportunity for cleanup of arbitrary sophistication, but the infrastructure
should not support resuming normal processing. However meandering the path -
as a consequence of the requirements of the application (e.g. dumping
unsaved data in as fine a form as possible) - it must lead to process
termination.

Off the top of my head, only CP violations are unrecoverable, so it's not
exactly
and I would like to pursue an exception class

 distinction in
 Java is poorly designed and can be covered in D by subclassing 
 Object
 directly. The existing class heirarchy in phobos and user code 
 also
 seemingly randomly subclasses Error or Exception.
All of that speaks to the likelihood that those hierarchies were
ill-considered, rather than that there's an intrinsic problem in segregating
the exceptions/errors.

In my C++ work, I use exception classes (always derived directly/indirectly
from std::exception) for exceptions, and types derived from
::stlsoft::unrecoverable for, er, unrecoverable errors. Works a treat, and
has never caused any probs. (I've had this stuff, coupled with release-mode
contract enforcement and verbose, but immediate, application termination, in
a real-time multi-protocol financial system running for the last 4-5 months.
It caught two fundamental design oversights in the first week, and hasn't
uttered a peep since. All is working tickety-boo.)
 For example the class hierachy I have in mind looks like
 Object
  OutOfMemory
  AssertionFailure
  Exception
    FileException
    StreamException
    ... etc, all the other exceptions and subclasses ...

 where Exception is
 class Exception {
    char[] msg;
    Object cause;
    this(char[] msg, Object cause = null);
    void print(); // print this exception and any causes
    char[] toString(); // string summarizes this exception
 }

 For reference see
 http://www.digitalmars.com/d/archives/digitalmars/D/6049.html
 http://www.digitalmars.com/d/archives/digitalmars/D/9556.html
 http://www.digitalmars.com/d/archives/digitalmars/D/10415.html

 comments?
Problems with the above: 1. I want to see the ability to throw/catch an Object removed. (Or, if someone can possibly proffer a justification for wanting to do that - what has it ever benefited anyone in C++ to be able to throw a double? - incur a warning.) What's wrong with something Throwable? It's not like anyone will (or at least should!) be mixing an exception's duties with some other functionality, so there's no need to fear the constraints of a strict single-inheritance hierarchy. 2. I grok that you've identified the strange nature of OutOfMemory, in so far as it's theoretically recoverable but practically irrecoverable (except when one is supplying custom allocation). I agree that it's not a simple straightforward issue. I'd suggest that this is the one area that needs more thinking (including how other resource exhaustion might relate). 3. AssertionFailure - which I presume is the parent to a more detailed CP hierarchy - should be an Error, and irrecoverable (as per discussion above). I'd started work some months ago on suggestions for how Errors should be intrinsically handled within the language. I'll try and dig it out and post it. But, AFAICR, I posit that they should be analogous to function-try blocks in C++: it's fine (in fact professionally questionable not!) to catch them and exercise the application-specific level of informing of the user / dumping data / logging to SysLog/event-log/file/whatever, but the runtime, *in all threads*, will always rethrow any caught error, and will also provide intrinsic "catch(Error) { exit(EXIT_FAILURE); }" at the topmost level. The way I achieve this in the ::stlsoft::unrecoverable type is by ref-counting (similar to the shared_ptr mechanism) and the last one (copy, that is) out turns the lights off; it works a treat, as I mentioned above, and is not the least constricting, since it kindly waits until you've finished all your cleanup, even if you do several levels of cleanup by rethrowing the error. But since D holds exception objects by reference, all that jazz is unnecessary. In any case, such things are far better encapsulated in the language given that we (as I assert we do) want to avoid throwing of arbitrary types. I know this is going to stir the same hornet's nest of people who feel like it's big-brother-ism, or who don't understand that a program that has violated its design is, from then on, invalid and open to *any* behaviour (obviously the more exotic things, like bringing down the internet or signalling to aliens that we want to be liberated by a benevolent megamilitarist, are somewhat less likely than the more prosaic). Killing it is the only right thing to do. And it's also the effective and convenient thing to do. As I've said, I've been running this stuff in the real world for some time, and clients always shit themselves at the prospect, but then completely come around when they see the speed at which the code reaches a point where it's no longer violating its design. In the words of the Pragmatic Programmers (section "DBC and Crashing Early", pp115, of TPP): "It's much easier to find and diagnose the problem by crashing early, at the site of the problem." There is one valid scenario where errors should be quenchable: debuggers. For that, I have no immediate prescription, but I'm sure we can find a way to facilitate that without breaking the clean (and easily understand) delineation between exceptions and errors. 
There is one valid scenario where errors should be quenchable: debuggers. For that, I have no immediate prescription, but I'm sure we can find a way to facilitate that without breaking the clean (and easily understood) delineation between exceptions and errors. (One idea might be that a base library function to turn on error quenching could be tied to the security infrastructure used by debuggers. Those with more experience than me in writing debuggers can no doubt make some sense of this.)

Charon
Apr 11 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Off the top of my head, only CP violations are unrecoverable, so
 it's not exactly onerous or limiting
Apr 11 2005
prev sibling parent reply "Maxime Larose" <mlarose broadsoft.com> writes:
"Ben Hinkle" <bhinkle mathworks.com> wrote in message
news:d3f1nh$rj4$1 digitaldaemon.com...
 I'm looking into the Error/Exception situation in phobos and previous posts
 by Walter and others generally argued that Exceptions are recoverable and
 Errors are not. I believe there isn't an application-independent definition
 ...
Ben,

There is a lot of stuff in this thread, and quite understandably so, as this is a somewhat touchy subject. care about. I suppose it is the same for the vast majority of people out there: they will use whatever is available, as long as it meets their needs.

Below is what I believe an error-handling design should address. I unfortunately do not have time to dwell on each point for very long, but I doubt anyone would disagree with them. (And let me know if you do!)

The error-handling design:

- shall allow recoverability from normal exceptional cases...
... without checking every line for a return code. Any throw-catch design has this.

- shall allow specifying a common exit procedure whether an exception occurred or not
A try-catch-finally construct allows this.

- shall allow the ability to programmatically analyze exceptions
It is important that the exception-handling code be able to look at the exception. It should be able to print out an error message, display something in the logs, send an SNMP alarm, do some correlation, send it through a socket, etc. Some of this is achieved by simply catching exceptions of a specific type, and some is not (for instance, the human-readable error message). This leads to this first axiom:
*Every throwable object in the system should implement a basic interface* (or derive from a base class).
This must be so for every library, every method, every class in the system.
*Allowing just any objects to be thrown specifically violates this.*

- shall allow all possible exceptional cases to be caught
This is more or less a sub-point of the above. It may be important for reporting purposes that all possible errors be caught and reported. This can be allowed with a catch(...) statement, but honestly, after (...) has been caught, what can you possibly do with it? This is one of the most stupid 'features' of C++. I usually report: "Unknown exception caught". There is then a 50% chance that I will find the root cause if I am in the lab and 0% if it happened on a customer's system. (More on debugging later.) So, this comes back to the point above that all exceptions should derive from the same interface/base class with (at least) a method to get a human-readable error message.
Note: some have argued that throwing an object would allow the application to crash hard. Whereas I am not sure I understand why anyone would want to do this (they haven't worked with critical systems, for sure), not catching an error (or rethrowing it) will lead to the same result.

- shall allow for checked and unchecked exceptions
Depending where you come from, I guess you can be all for checked exceptions, or all against them. The truth of the matter is that both are necessary in given circumstances. Checked exceptions are exceptions that clients should be *explicitly* made aware of and should deal with (if only to rethrow them). I know, I know, this can be abused and lead to a lot of code simply rethrowing exceptions. It is true that client code often cannot recover if the callee was not able to. No matter, these exceptions are useful in certain circumstances. To mitigate abuses, the default "Exception" class should probably be unchecked (and not the other way around, like was done in Java). Methods should list (as per their definitions) all checked exceptions they throw, and compilers should enforce that no other checked exceptions could be thrown.

- the language/default library shall define most (all) of its exceptions as unchecked
In other words, it should assume the application logic ensures error-free operation. For instance, an array list throwing IndexNotFoundExceptions should not force clients to catch it.
- shall allow exceptions to be cascaded
In some situations, cascading exceptions is the best way to convey what happened exactly. Also, it allows midware to specify aggregate exceptions (that could have many causes) and lets clients get to the exact root cause if they so desire (most probably for error reporting). Cascading is usually done by adding a parameter to the constructor and adding a getter to the base exception class/interface (a rough sketch follows after this list):
this(..., Exception causedBy = null);
Exception causedBy();

- shall help debugging as much as possible
This is probably the most important point and can't be stressed enough. An error-handling scheme that allows me to know exactly where the error occurred, by whom the function was called (stack trace) and what the parameters were, even when the error happened only once in the field, is an incredible blessing. It is great being able to tell a customer: "we know what the problem is and a patch will be ready for you tomorrow". Even when the root problem is an obscure race condition that never occurred in your gazillion tests in the lab. This saves a lot of people's asses and makes the programmer's job a lot more enjoyable (and a hell of a lot less stressful when you work with critical systems). You may think I exaggerate, but I don't... (speaking from experience here). And bugs are a reality in software development, even in critical systems. No matter how good or witty you are. More than anything else, stack tracing leads to trustable code and rapid development. Java does this well enough, but I understand that it may not be possible to extract all the run-time information available to a JVM in compiled code like D (and C++). However, every effort should be made toward that goal. For instance, if it is possible, at a performance cost, to have stack traces, etc. available in the exception, I believe it should be made available. In debug mode only perhaps, but I would let the programmer decide if he still wants that info in production code. Performance is only one factor in production code, but customers are often a lot more interested in stability. If having stack traces costs 10% in performance but allows me to iron out hard-to-find bugs in a day vs a week (or never), I'd choose the 10% performance cost every time. Buying hardware 10% faster is cheaper than paying a programmer (me!) to find hard race-condition bugs. I am no compiler writer though, so I don't know exactly what that would entail.
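Here is the promised rough sketch of a base class with the cascading constructor parameter and getter, following the signatures just given (the midware fragment at the end is hypothetical):

class Exception
{
    char[] msg;
    private Exception m_causedBy;

    this(char[] msg, Exception causedBy = null)
    {
        this.msg = msg;
        m_causedBy = causedBy;
    }

    // getter for the root-cause chain
    Exception causedBy()
    {
        return m_causedBy;
    }

    char[] toString()
    {
        return msg;
    }
}

// midware aggregating a low-level failure into a higher-level one:
//   catch (SocketException e)
//       throw new Exception("request failed", e);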
Hoping I am not forgetting anything...

BTW, thanks for all your hard work on D. I can honestly say D is the most exciting thing I have come across in a looonnnngggg time. I spent all (last) weekend checking it out, and I couldn't get to sleep... Went to bed at 5 or 6 AM every morning, to the dismay of my SO... I am now tired as hell, but very, very excited. Anyway, I am about to send a few bugs and glitches here and there (and some suggestions perhaps). D has a few rough edges (like exception handling, in fact), but I will program in D in my spare time from now on. Got a project up my sleeve, and I believe I found a language to grace it with!  ;)

Max

Apr 13 2005
parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
comments inline

"Maxime Larose" <mlarose broadsoft.com> wrote in message 
news:d3j3m2$1jsu$1 digitaldaemon.com...
 "Ben Hinkle" <bhinkle mathworks.com> wrote in message
 news:d3f1nh$rj4$1 digitaldaemon.com...
 I'm looking into the Error/Exception situation in phobos and previous posts
 by Walter and others generally argued that Exceptions are recoverable and
 Errors are not. I believe there isn't an application-independent definition
 ...
 Ben,

 There is a lot of stuff in this thread, and quite understandably so, as this is a somewhat touchy subject. care about. I suppose it is the same for the vast majority of people out there: they will use whatever is available, as long as it meets their needs.

 Below is what I believe an error-handling design should address. I unfortunately do not have time to dwell on each point for very long, but I doubt anyone would disagree with them. (And let me know if you do!)

 The error-handling design:

 - shall allow recoverability from normal exceptional cases...
 ... without checking every line for a return code. Any throw-catch design has this.

 - shall allow specifying a common exit procedure whether an exception occurred or not
 A try-catch-finally construct allows this.

 - shall allow the ability to programmatically analyze exceptions
 It is important that the exception-handling code be able to look at the exception. It should be able to print out an error message, display something in the logs, send an SNMP alarm, do some correlation, send it through a socket, etc. Some of this is achieved by simply catching exceptions of a specific type, and some is not (for instance, the human-readable error message). This leads to this first axiom:
 *Every throwable object in the system should implement a basic interface* (or derive from a base class).
 This must be so for every library, every method, every class in the system.
 *Allowing just any objects to be thrown specifically violates this.*
Why does having a base class of Object violate this? It has two useful methods: toString() and print(). To me that's what I would want in an exception base class. It would be nice to have methods for getting or printing a stack trace like Java and .Net but I'm not too worried about that and besides it wouldn't show up in D for a while (if it does show up it can be an interface). Is there something in particular that you'd like to see in the exception tree base class that isn't in Object?
 - shall allow all possible exceptional cases to be caught
 This is more or less a sub-point of the above. It may be important for
 reporting purposes that all possible errors be caught and reported. This 
 can
 be allowed with a catch(...) statement, but honestly, after (...) has been
 caught, what can you possibly do with it? This is one of the most stupid
 'features' of C++. I usually report: "Unknown exception caught". There is
 then a 50% chance that I will find the root cause if I am in the lab and 
 0%
 if it happened on a customer's system. (More on debugging later.) So, this
 comes back to the point above that all exceptions should derive the same
 interface/base class with (at least) a method to get a human readable 
 error
 message.
 Note: some have argued that throwing an object would allow the application
 to crash hard. Whereas I am not sure I understand why anyone would want to
 do this (they haven't worked with critical systems for sure), not catching
 an error (or rethrowing it) will lead to the same result.
D doesn't have catch(...). Instead one can catch(Object obj) and query obj. Technically the doc says you can catch without supplying a variable and it will catch everything and swallow the exception. Hopefully the compiler will warn if someone does that because that seems like an extreme catch.
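For illustration, the catch-and-query idiom looks something like this (riskyOperation and useBackupFile are made-up names; FileException is the phobos file exception):

import std.c.stdio;
import std.file;

void example()
{
    try
    {
        riskyOperation();        // hypothetical
    }
    catch (FileException fe)     // catch what we know how to deal with
    {
        useBackupFile();         // hypothetical recovery
    }
    catch (Object o)             // last resort: anything that was thrown
    {
        printf("unexpected: %.*s\n", o.toString());
        throw o;                 // rethrow; we cannot recover here
    }
}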
 - shall allow for checked and unchecked exceptions
 Depending where you come from, I guess you can be all for checked
 exceptions, or all against it. The truth of the matter is that both are
 necessary in given circumstances.
 Checked exceptions are exceptions that clients should be *explicitly* made
 aware of and should deal with (if only to rethrow it). I know, I know,
 this
 can be abused and lead to a lot of code simply rethrowing exceptions. It 
 is
 true that client code often cannot recover if the callee was not able to. 
 No
 matter, these exceptions are useful in certain circumstances.
 To mitigate abuses, the default "Exception" class should probably be
 unchecked (and not the other way around, like was done in Java).
 Methods should list (as per their definitions) all checked exceptions they
 throw and compilers should enforce that no other checked could be thrown.
I think there are archived threads about checked/unchecked exceptions. Walter didn't seem very keen on them but that might be because of a problem with Java's approach and maybe you are right that a slightly different approach would make them more attractive to Walter. Do you have a set of checked exceptions in mind?
 - the language/default library shall define most (all) of its exceptions 
 as
 unchecked
 In other words, it should assume the application logic ensures error-free
 operation. For instance, an array list throwing IndexNotFoundExceptions
 should not force clients to catch it.
agreed.
 - shall allow exceptions to be cascaded
 In some situations, cascading exceptions is the best way to convey what
 happened exactly. Also, it allows midware to specify aggregate exceptions
 (that could have many causes) and let clients get to the exact root cause 
 if
 they so desire (most probably for error reporting). Cascading is usually
 done by adding a parameter to the constructor and adding a getter to the
 base exception class/interface:
 this(..., Exception causedBy = null);
 Exception causedBy();
agreed. There have been recent (and probably archived, too) threads about this kind of thing and using it more often. For example, Regan had a nice idea about cascading a SystemError to supply platform-specific error information, and I've been looking at the various "foo not supported" exceptions in phobos with an eye towards cascading a NotSupportedException.
 - shall help debugging as much as possible
 This is probably the most important point and can't be stressed enough. An
 error-handling scheme that allows me to know exactly where the error
 occurred, by whom the function was called (stack trace) and what were the
 parameters, even when the error happened only once in the field is an
 incredible blessing. It is great being able to tell a customer: "we know
 what the problem is and a patch will be ready for you tomorrow". Even when
 the root problem is an obscure race condition that never occured in your
 gazillion tests in the lab. This saves a lot of people's asses and makes 
 the
 programmer's job a lot more enjoyable (and a hell of a lot less stressful
 when you work with critical systems). You may think I exaggerate, but I
 don't... (speaking from experience here). And bugs are a reality in 
 software
 development, even in critical systems. No matter how good or witty you 
 are.
 More than anything else stack tracing leads to trustable code and rapid
 development. Java does this well enough, but I understand that it may not 
 be
 possible to extract all the run-time information available to a JVM in
 compiled code like D (and C++). However, every effort should be made
 toward
 that goal. For instance, if it is possible, at a performance cost, to have
 stack traces, etc. available in the exception, I believe it should be made
 available. In debug mode only perhaps, but I would let the programmer 
 decide
 if he still wants that info in production code. Performance is only one
 factor in production code, but customers are often a lot more interested 
 in
 stability. If having stack traces costs 10% in performance but allows me 
 to
 iron out hard-to-find bugs in a day vs a week (or never), I'd choose the
 10%
 performance cost every time. Buying hardware 10% faster is cheaper than
 paying a programmer (me!) finding hard race-conditions bugs. I am no
 compiler writer though, so I don't know exactly what that would entail.
Agreed that stack traces are very very useful - especially when a report comes in from the field that is not reproducible. If anyone knows of implementations or knows how to implement a "get the stack trace" function I'm sure it would be appreciated. I'm guessing it is compiler dependent, though. I know something exists since MATLAB does it for its C code and of course Java and .Net do it.
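For what it's worth, on x86 with frame pointers intact, the core of such a function can be sketched as a walk up the saved-EBP chain. This is rough and platform-specific (DMD-style inline asm, assumes code compiled with frame pointers), and it leaves out the hard part - mapping the raw return addresses back to function names:

uint[] captureReturnAddresses(int maxDepth = 16)
{
    uint[] trace;
    uint* frame;
    asm
    {
        mov frame, EBP;               // current frame pointer
    }
    for (int i = 0; i < maxDepth && frame; i++)
    {
        trace ~= frame[1];            // saved return address sits just above the saved EBP
        frame = cast(uint*) frame[0]; // follow the chain to the caller's frame
    }
    return trace;
}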
 Hoping I am not forgetting anything...
Thanks for the thoughtful post. Hearing fresh voices is very important.
 BTW, thanks for all your hard work on D. I can honestly say D is the most
 exciting thing I have come across in a looonnnngggg time. I spent all 
 (last)
 weekend checking it out, and I couldn't get to sleep... Went to bed at 5 
 or
 6 AM every morning to the dismay of my SO... I am now tired as hell, but
 very, very excited. Anyway, I am about to send a few bugs and glitches 
 here
 and there (and some suggestions perhaps). D has a few rough edges (like
 exception-handling in fact), but I will program in D in my spare time from
 now on. Got a project up my sleeve, and I believe I found a language to
 grace it with!  ;)
cool. welcome!
Apr 13 2005
next sibling parent reply "Maxime Larose" <mlarose broadsoft.com> writes:
Your points about throwing Objects are well noted. I didn't realize that the
toString and print functions would solve most of the problems I mentioned.
I still believe throwing a specific class (or interface) is better in the
case more stuff creeps in (like the stack traces). I mean, that's the whole
idea behind specializing (a class) in the first place right? In fact, the
main idea behind OO inheritance in general. Why would an Object be
throwable, when you can have a specialized Throwable class (or whatever)
that offers specialized services (like take a snapshot of the stack trace at
construction). IMO, it is better to make these kinds of
used-all-over-the-place-and-then-some classes/constructs (exceptions,
strings, etc.) thinking well into the future. Obviously, not all future
cases can be thought of now. However, if you foresee a possible change and
if, all other things being equal, a design better accommodates the change
than another, why not use the better accommodating design?

You ask about examples using checked exceptions... Hmmm... I guess you could
say that checked exceptions are a very useful part of contract programming.
Checked exceptions are useful when the exceptional case is part of the
possible operation of a given function, but rare enough that you wouldn't
want checking the return result at every call. And/or when the exceptional
case completely changes your program flow. That is, you want to *assume*
things went a certain way, because they will indeed go that way 99% of the
time. However you don't want clients to simply forget to implement that
case, because it is a very possible case, one they should deal with. A side
benefit is that the code is easier to read/maintain as the human mind works
in terms of normal cases vs exceptional cases - even if the exceptional case
is "normal" for the program itself.

A contrived example: let's say you have part of a program that parses a file
with an arbitrary number of lines in it. One method parses a single line. It
is possible the line is malformed, but very unlikely. How should you define
that method?
1. You return a value saying if the line was successfully parsed or not.
Welcome back to the old days.
2. You throw an exception. Throwing an unchecked exception gives you no
guarantee that the client will deal with the case. This is especially true if
the method is in a library. Good programmers read the docs, true?
The real solution in these cases is throwing a checked exception. Clients
will *have* to act upon it. In our case, an error message could be
displayed that line n is malformed, and parsing continues/stops/whatever.
You have some assurance that the exception will not be caught three callers
up, said caller having absolutely no idea about what went wrong and having
no choice but to simply display the exception text as is.
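A sketch of that contrived example, pretending D had checked exceptions (it doesn't - the "throws" clause in the comment is imaginary, and MalformedLineException is made up):

import std.c.stdio;

class MalformedLineException : Exception
{
    int lineNumber;

    this(char[] msg, int lineNumber)
    {
        super(msg);
        this.lineNumber = lineNumber;
    }
}

// imagined as: void parseLine(char[] line, int n) throws MalformedLineException
void parseLine(char[] line, int n)
{
    if (line.length == 0)            // stand-in for real validation
        throw new MalformedLineException("empty line", n);
    // ... actual parsing ...
}

void parseFile(char[][] lines)
{
    foreach (int i, char[] line; lines)
    {
        try
        {
            parseLine(line, i + 1);
        }
        catch (MalformedLineException e)
        {
            // the compiler would force us to write this catch (or rethrow)
            printf("line %d is malformed, skipping\n", e.lineNumber);
        }
    }
}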

Again, these checked exceptions are not for the D core library nor any
default language constructs (at all?). At any rate, unchecked exceptions
should be the default. Checked ones are for applications and "higher"
libraries, to help code and maintain contracts between components.

Thanks,

Max



"Ben Hinkle" <bhinkle mathworks.com> wrote in message
news:d3j9av$1oso$1 digitaldaemon.com...
 [full quote of the previous post snipped]
Apr 13 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Maxime Larose" <mlarose broadsoft.com> wrote in message
news:d3jd0v$1s8q$1 digitaldaemon.com...
 Your points about throwing Objects are well noted. I didn't realize that
 the toString and print functions would solve most of the problems I
 mentioned. I still believe throwing a specific class (or interface) is
 better in the
 case more stuff creeps in (like the stack traces). I mean, that's the
 whole idea behind specializing (a class) in the first place right? In 
 fact, the
 main idea behind OO inheritance in general. Why would an Object be
 throwable, when you can have a specialized Throwable class (or whatever)
 that offers specialized services (like take a snapshot of the stack trace
 at construction). IMO, it is better to make these kinds of
 used-all-over-the-place-and-then-some classes/constructs (exceptions,
 strings, etc.) thinking well into the future. Obviously, not all future
 cases can be thought of now. However, if you foresee a possible change and
 if, all other things being equal, a design better accommodates the change
 than another, why not use the better accommodating design?
One issue that makes me nervous about getting the stack trace for all exceptions is what to do with OutOfMemory, since there might not be space for the stack trace. Currently the OutOfMemory exception is a singleton (it throws the classinfo's .init value), so attaching a stack trace to it would be troublesome. Presumably then OutOfMemory would not save or print any stack trace (not that it would be a huge loss). Practically speaking, Exception is the root of the exception tree, so it can get the stack trace capabilities. In my initial proposed hierarchy, since OutOfMemory wouldn't subclass Exception (just like today), it wouldn't get the stack trace API. Introducing a class between Object and Exception would be fine with me as long as it served a practical purpose - but that depends on the details of the exception inheritance tree.
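For reference, the singleton throw looks roughly like this (paraphrasing phobos' std.outofmemory from memory; details may differ):

class OutOfMemoryException : Object
{
    char[] toString()
    {
        return "Out of memory";
    }
}

void _d_OutOfMemory()
{
    // no allocation at throw time: the thrown object is the class's
    // static .init image, so the throw itself can't fail for lack of
    // memory - but there is also no per-throw instance to hang a
    // stack trace on
    throw cast(OutOfMemoryException) cast(void*) OutOfMemoryException.classinfo.init;
}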
 You ask about examples using checked exceptions... Hmmm... I guess you
 could say that checked exceptions are a very useful part of contract 
 programming.
I am not saying they aren't - I'm just saying if you want to convince Walter I would recommend reading his past posts about checked exceptions and address his concerns. I vaguely remember his concerns are that many times Java coders (just for example) don't pay enough attention to checked exceptions and in the process of shutting up the compiler the coder winds up doing more harm than good. In a perfect world checked exceptions are wonderful - but we don't live in a perfect world, unfortunately. [snip rest of checked exceptions paragraph to shorten reply post]
Apr 13 2005
parent reply "Maxime Larose" <mlarose broadsoft.com> writes:
All right.

I'm not sure I want to give myself the trouble of sifting through tons of
old threads to try and convince Walter about the benefits of checked
exceptions. They are the same as CP, so that he disagrees is a bit strange,
but... and because people shut up the compiler?!? Let them shut up the
compiler if they so desire. Their app is their own business... For all
practical purposes, it is better to receive a compiler error that some few
people will then shut up than to not receive such errors in the first
place... Anyway...  In fact, I *am* sure I don't want to give myself that
trouble. He has his biases, like any of us do, and I guess he's the one
making the calls for now. ("For now" not to patronize or imply he is not
doing a good job. I mean it in the sense that as D gets more accepted -
something we all hope - there will be a point where moving from
one-man-decides-all to a committee kind of thing will be necessary. It is
the way of life and will have to be done for the good of D. On the other
hand, from his point of view it must be awful to see all sorts of weird
proposals, everyone trying to pull in their own directions. I totally agree
with the fact that it is entirely his endeavour and he has every right to
choose what to put in the language and what not to put in the language. D is
like any other: it has advantages and disadvantages, in large part brought
in by the language designer!)

That some Exceptions not be stack traced is quite OK with me. Either you set
the non-stack traced exceptions apart from the inheritance tree or you
remove the stack tracing method from the non-stack traced Exceptions. The
latter is preferable, because it is a difference in *behavior*, not in
*is-a*. In other words, a non-stack traced exception is an Exception, but
has a different behavior. That's the whole point behind sub-classing: the
sub-class, while "being a" super-class, has a different behavior.

So, in fact, the best design to me seems to be very similar to what java has
done (with the *very* important distinction that Throwable is unchecked):
Throwable (has stack tracing abilities)
  |
  -> Exception (usually caught)
  |      |
  |      -> CheckedException (obviously, compiler has to enforce checked semantics)
  |             |
  |             -> User-defined classes (*no* system exceptions here)
  |
  -> Error (usually only caught by main for reporting before exiting - if caught at all)
         |
         -> OutOfMemoryError (overrides stack tracing)

- OR -

...
  -> Error
         |
         -> NonStackTracedError
                |
                -> OutOfMemoryError


throwables...
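Spelled as D declarations, the first variant would look something like this (all names hypothetical):

class Throwable                         // stack-trace machinery lives here
{
    char[][] stackTrace() { return m_trace; }
    private char[][] m_trace;           // captured at construction
}

class Exception : Throwable { }         // usually caught
class CheckedException : Exception { }  // compiler would enforce checked semantics

class Error : Throwable { }             // usually only caught by main, if at all

class OutOfMemoryError : Error
{
    // overrides the tracing so that throwing allocates nothing
    char[][] stackTrace() { return null; }
}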

Anyway, I would love to sink my teeth into your proposal. I don't care at all
if you don't agree with me on a few points. We agree on a lot of points.
(Parenthesis: after having been away from newsgroups and such for a very long
time, coming back here feels eerie... You'd expect less dogma from people
supposed to be "thinking-men"...)

My offer to implement stack tracing still stands. The more I think about it,
the more it seems to me like the way to go. I am still waiting on Walter's
reply on that issue (hoping the email address I had was good.)

Have a nice day,

Max




"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:d3khfn$2r30$1 digitaldaemon.com...
 "Maxime Larose" <mlarose broadsoft.com> wrote in message
 news:d3jd0v$1s8q$1 digitaldaemon.com...
 Your points about throwing Objects are well noted. I didn't realize that
 the toString and print functions would solve most of the problems I
 mentionned.I still believe throwing a specific class (or interface) is
 better in the
 case more stuff creeps in (like the stack traces). I mean, that's the
 whole idea behind specializing (a class) in the first place right? In
 fact, the
 main idea behind OO inheritance in general. Why would an Object be
 throwable, when you can have a specialized Throwable class (or whatever)
 that offers specialized services (like take a snapshot of the stack
trace
 at construction). IMO, it is better to make these kinds of
 used-all-over-the-place-and-then-some classes/constructs (exceptions,
 strings, etc.) thinking well into the future. Obviously, not all future
 cases can be thought of now. However, if you foresee a possible change
and
 if, all other things being equal, a design better accomodates the change
 than another, why not use the better accomodating design?
Exception is practically speaking the root of the exception tree. One
issue
 that makes me nervous about getting the stack trace for all exceptions is
 what to do with OutOfMemory since there might not be space for the stack
 trace. Currently the OutOfMemory exception is a singleton (it throws the
 classinfo's .init value) so attaching a stack trace to it would be
 troublesome. Presumably then OutOfMemory would not save or print any stack
 trace (not that it would be a huge loss). Practically speaking Exception
is
 the root of the exception tree so it can get the stack trace capabilities.
 In my initial proposed hierarchy since OutOfMemory wouldn't subclass
 Exception (just like today) then it wouldn't get the stack trace API.
 Introducing a class between Object and Exception would be fine with me as
 long as it served a practical purpose - but that depends on the details of
 the exception inheritance tree.

 You ask about examples using checked exceptions... Hmmm... I guess you
 could say that checked exceptions are a very useful part of contract
 programming.
I am not saying they aren't - I'm just saying if you want to convince
Walter
 I would recommend reading his past posts about checked exceptions and
 address his concerns. I vaguely remember his concerns are that many times
 Java coders (just for example) don't pay enough attention to checked
 exceptions and in the process of shutting up the compiler the coder winds
up
 doing more harm than good. In a perfect world checked exceptions are
 wonderful - but we don't live in a perfect world, unfortunately.

 [snip rest of checked exceptions paragraph to shorten reply post]
Apr 14 2005
parent "Walter" <newshound digitalmars.com> writes:
"Maxime Larose" <mlarose broadsoft.com> wrote in message
news:d3llc3$pas$1 digitaldaemon.com...
 My offer to implement stack tracing still stands. The more I think about it,
 the more it seems to me like the way to go. I am still waiting on Walter's
 reply on that issue (hoping the email address I had was good.)
If you can make it work in a reasonable fashion, I'll add it in.
Apr 15 2005
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
In article <d3j9av$1oso$1 digitaldaemon.com>, Ben Hinkle says...
comments inline

"Maxime Larose" <mlarose broadsoft.com> wrote in message 
news:d3j3m2$1jsu$1 digitaldaemon.com...
 - shall allow the ability to programmatically analyze exceptions
 It is important that the exception-handling code be able to look at the
 exception. It should be able to print out an error message, display
 something in the logs, send a SNMP alarm, do some correlation, send it
 through a socket, etc. Some of this is achieved by simply catching
 exceptions of a specific type, and some is not (for instance, the
 human-readable error message). This leads to this first axiom:
 *Every throwable object in the system should implement a basic interface*
 (or derive from a base class).
 This must be so for every library, every method, every class in the 
 system.
 *Allowing just any objects to be thrown specifically violates this.*
Why does having a base class of Object violate this? It has two useful methods: toString() and print(). To me that's what I would want in an exception base class. It would be nice to have methods for getting or printing a stack trace like Java and .Net but I'm not too worried about that and besides it wouldn't show up in D for a while (if it does show up it can be an interface). Is there something in particular that you'd like to see in the exception tree base class that isn't in Object?
I think the Object base class violates this to some people because it violates the "what the hell is this?" principle (which I just made up). If an application throws something that is not an exception (i.e. something that doesn't describe itself in some way) then the client has no idea what the error was. Sure, I could throw a ClientAccount, but what was the error that caused the problem in the first place? Printing the ClientAccount won't help anyone in determining that. It would make much more sense to wrap it in an Exception with a bit of descriptive information.
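In code, that reads something like this (AccountException is made up; ClientAccount stands in for the domain type from the example):

class ClientAccount { }                 // stand-in domain type

class AccountException : Exception
{
    ClientAccount account;              // the object that used to be thrown bare

    this(char[] msg, ClientAccount account)
    {
        super(msg);
        this.account = account;
    }
}

// throw new AccountException("balance update rejected", acct);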
Sean

Apr 13 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 I think the Object base class violates this to some people because it violates
 the "what the hell is this?" principle (which I just made up). If an
 application throws something that is not an exception (i.e. that doesn't
 describe itself in some way) then the client has no idea what the error was.
And throwing something that subclasses Exception instead of Object will add more information about what the error was? It has the same toString and print methods as Object. Checking strings for information is not safe to i18n so no code should start parsing strings or messages to drive program logic. The only reasonable thing for code to do is look at inheritance by dynamically casting to what it knows how to deal with. Or it can just catch what it knows how to deal with in the first place :-) Or when you say client do you mean the human client? I assumed you meant client code.
 Sure I could throw a ClientAccount, but what was the error that caused the
 problem in the first place? Printing the ClientAccount won't help anyone in
 determining that. It would make much more sense to wrap it in an Exception
 with a bit of descriptive information.
So don't throw a ClientAccount :-)
Apr 13 2005
parent reply "Kris" <fu bar.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 And throwing something that subclasses Exception instead of Object will add
 more information about what the error was? It has the same toString and
 print methods as Object.
So why not move print() to the Exception root? Or why have print() at all, if you can call toString() on it? Just exactly how often does a programmer invoke Exception.print()? And why can't they just call print(exception) instead? There's something about tight coupling at this level that really troubles me.
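Something along these lines, say - a free function living in the IO layer, assuming the cascaded causedBy() accessor proposed earlier in the thread:

import std.c.stdio;

void print(Exception e)
{
    // walk the cause chain; causedBy() is the proposed accessor,
    // not something phobos has today
    for (; e; e = e.causedBy())
        printf("%.*s\n", e.toString());
}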
 Checking strings for information is not safe to i18n so no code
 should start parsing strings or messages to drive program
 logic.
And print()/printf() handles i18n? If the exception root /must/ have a print() method, surely it should be made pluggable by calling some externally defined function dedicated to the task? Hard-coding printf(), or anything else, anywhere in there is just totally bogus for all kinds of reasons; in my terribly humble and grouchy opinion :{

All the more reason for calling print(exception) instead, where that function is defined somewhere in the IO layer.

2c. - Kris
Apr 13 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Kris" <fu bar.com> wrote in message 
news:d3kjqb$2sk4$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
  And throwing something that subclasses Exception instead of Object will add
  more information about what the error was? It has the same toString and
  print methods as Object.
So why not move print() to the Exception root? Or why have print() at all, if you can call toString() on it? Just exactly how often does a programmer invoke Exception.print()? And why can't they just call print(exception) instead? There's something about tight coupling at this level that really troubles me.
Hear, hear.
 Checking strings for information is not safe to i18n so no code
 should start parsing strings or messages to drive program
 logic.
 And print()/printf() handles i18n? If the exception root /must/ have a
 print() method, surely it should be made pluggable by calling some
 externally defined function dedicated to the task? Hard-coding printf(),
 or anything else, anywhere in there is just totally bogus for all kinds
 of reasons; in my terribly humble and grouchy opinion :{

 All the more reason for calling print(exception) instead, where that
 function is defined somewhere in the IO layer.
And again
Apr 13 2005