
digitalmars.D - Exception/Error division in D

reply Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
Let's talk about an abstract situation without caring about breaking 
existing code, current docs, implementation etc.

Definitions:
* an Exception is something that triggers scope guards and executes 
catch/finally blocks if thrown;
* an Error is something that doesn't do it.

As a result _we can't do any clean-up if an Error is thrown_ because 
scope guards and catch/finally blocks aren't executed and the program is 
in an invalid state because of this. Of course it's theoretically 
possible to code without scope guards and catch/finally blocks, but that 
isn't applicable to a real project. E.g. in an editor, if an Error is 
thrown there is no ability to save opened documents.
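
To make the two definitions concrete, here is a minimal D sketch. The
Error branch shows the no-cleanup semantics defined above, i.e. the
hypothetical behavior under discussion, not necessarily what the current
runtime does:

import std.stdio;

void work(bool fatal)
{
    // Per the definitions: runs for an Exception, skipped for an Error.
    scope(exit) writeln("cleanup ran");
    if (fatal)
        throw new Error("unrecoverable");   // no scope guards, no finally
    throw new Exception("recoverable");     // scope guards and finally run
}

void main()
{
    try
    {
        work(false);
    }
    catch (Exception e)
    {
        writeln("caught: ", e.msg);  // printed after "cleanup ran"
    }
}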


Main Question: What do you think can be an Error?

Can "Integer Divide by Zero" be an Error? Definitely not.

Can "Access Violation" be an Error? No, because it's very common to 
access a field/virtual member function of a null object.

Can "Out of memory" be an Error? No, because e.g. if I read a user file 
that requires me to create a large array (> 100 MiB, e.g.) I don't want 
to crash, but just to tell the user: "Dear user, the file can't be 
opened because it requires..."


So what do I think can be an Error? IMHO, nothing. Because anything that 
is thrown can indicate that the program is in either a perfectly good or 
a terribly bad state, and the compiler doesn't know which. And because a 
fatal error is fatal, the program should just try to print the error and 
close instead of throwing something.


Let's now return to the real D world. The current implementation treats 
Errors as Exceptions for now. The documentation keeps silent. All of the 
cases listed above as "can't be an Error" are Errors (and it's terrible).

So why do we have the Exception/Error division in D? Because of nothrow. 
Personally, I don't need nothrow at the high cost of making D unusable 
for me. Let's recognize and solve the Exception/Error problem first and 
deal with nothrow second.


Related links:
http://forum.dlang.org/thread/1566418.J7qGkEti3s lyonel
http://d.puremagic.com/issues/show_bug.cgi?id=8135
http://d.puremagic.com/issues/show_bug.cgi?id=8136
http://d.puremagic.com/issues/show_bug.cgi?id=8137


P.S.
By the way, the only problem I see in the current implementation is the 
lack of an "Object finalized" assertion (see the "True disposable objects 
(add "Finalized!" assertion)" NG thread, which didn't interest anybody).

-- 
Денис В. Шеломовский
Denis V. Shelomovskij
May 24 2012
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On May 24, 2012, at 3:27 AM, Denis Shelomovskij wrote:

 Let's talk about an abstract situation without caring about breaking
 existing code, current docs, implementation etc.

 Definitions:
 * an Exception is something that triggers scope guards and executes
 catch/finally blocks if thrown;
 * an Error is something that doesn't do it.

 As a result _we can't do any clean-up if an Error is thrown_ because
 scope guards and catch/finally blocks aren't executed and the program
 is in an invalid state because of this. Of course it's theoretically
 possible to code without scope guards and catch/finally blocks, but
 that isn't applicable to a real project. E.g. in an editor, if an
 Error is thrown there is no ability to save opened documents.

 Main Question: What do you think can be an Error?

 Can "Integer Divide by Zero" be an Error? Definitely not.

 Can "Access Violation" be an Error? No, because it's very common to
 access a field/virtual member function of a null object.

 Can "Out of memory" be an Error? No, because e.g. if I read a user
 file that requires me to create a large array (> 100 MiB, e.g.) I
 don't want to crash, but just to tell the user: "Dear user, the file
 can't be opened because it requires..."

 So what do I think can be an Error? IMHO, nothing. Because anything
 that is thrown can indicate that the program is in either a perfectly
 good or a terribly bad state, and the compiler doesn't know which.
 And because a fatal error is fatal, the program should just try to
 print the error and close instead of throwing something.

I agree. However, the benefit to having Error is so that nothrow can
exist. If every exception were considered recoverable then we'd have to
throw out nothrow as well, since basically anything can generate an
access violation, for example. Or we could weaken nothrow so that it
didn't even allow memory allocations, which would render it largely
useless.

For what it's worth, the D runtime does do clean-up for both Errors and
Exceptions today. The only difference is codegen for scope statements
and such inside nothrow functions--instead of being rewritten as
try/finally, the on-exit code is just inserted at the proper scope exit
points.
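
A rough sketch of that lowering difference (the commented expansions are
illustrative only, not actual compiler output):

void step1() nothrow {}
void step2() nothrow {}
void cleanup() nothrow {}

void f() nothrow
{
    scope(exit) cleanup();
    step1();
    step2();
    // Because f can never throw, the body can be emitted simply as:
    //     step1(); step2(); cleanup();
    // In a throwing function, the same scope(exit) is rewritten as:
    //     try { step1(); step2(); } finally { cleanup(); }
}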
 Let's now return to the real D world. The current implementation
 treats Errors as Exceptions for now. The documentation keeps silent.
 All of the cases listed above as "can't be an Error" are Errors (and
 it's terrible).

 So why do we have the Exception/Error division in D? Because of
 nothrow. Personally, I don't need nothrow at the high cost of making D
 unusable for me. Let's recognize and solve the Exception/Error problem
 first and deal with nothrow second.

Seems you already know this. Oops. I'm inclined to agree, personally.
nothrow is less useful in D than in C++ because it's safe to throw from
dtors in D (problems related to throwing from a finalizer during a GC
collection aside--that's more an exception safety issue for the GC than
anything else).
May 24 2012
prev sibling next sibling parent Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
I'd say removing nothrow and Error from D would be a good idea.
Everybody throws Exceptions from anywhere anyway. What would be the
practical reason not to do that (besides potentially breaking code)?

On Thu, May 24, 2012 at 9:51 PM, Sean Kelly <sean invisibleduck.org> wrote:

 [...]
-- Bye, Gor Gyolchanyan.
May 24 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 24 May 2012 06:27:12 -0400, Denis Shelomovskij  
<verylonglogin.reg gmail.com> wrote:

 Let's talk about an abstract situation without caring about breaking  
 existing code, current docs, implementation etc.

 Definitions:
 * an Exception is something that triggers scope guards and executes  
 catch/finally blocks if thrown;
 * an Error is something that doesn't do it.
I'll give you a different definition:

* an Exception is something that can be handled far away from the
context of the error, because the system can safely unwind the stack.
* an Error must be handled at the point the error occurred, or the
program state is by definition invalid.
 As a result _we can't do any clean-up if an Error is thrown_ because  
 scope guards and catch/finally blocks aren't executed and the program is  
 in an invalid state because of this. Of course it's theoretically possible  
 to code without scope guards and catch/finally blocks, but that isn't  
 applicable to a real project. E.g. in an editor, if an Error is  
 thrown there is no ability to save opened documents.


 Main Question: What do you think can be an Error?

 Can "Integer Divide by Zero" be an Error? Definitely not.

I agree with you there. However, the problem isn't nearly as bad as you
say. The runtime doesn't actually deal with SIGFPE on Unix based
systems, and most places where an "Error" is thrown, it's within
asserts, which are compiled out in release code. If you get one of
these, you can handle the signal. I think on Windows it's handled for
you, but there should be a way to intercept this and do recovery.
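
For example, on Posix you can install your own SIGFPE handler; a minimal
sketch (note that resuming normally after SIGFPE is undefined behavior,
so a real handler would log and terminate, or siglongjmp to a known-safe
point):

import core.sys.posix.signal;
import core.stdc.stdlib : _Exit;

extern(C) void onFpe(int sig)
{
    // Only async-signal-safe work belongs here; we just terminate.
    _Exit(1);
}

void installFpeHandler()
{
    sigaction_t sa;
    sa.sa_handler = &onFpe;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGFPE, &sa, null);
}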
 Can "Access Violation" be an Error? No, because it's very common to  
 access a field/virtual member function of a null object.
I'd argue this is unrecoverable. Access Violation results from
corruption, and the damage has already been done. Even a null pointer
access can be the cause of previous corruption. There is no "recovery"
from memory corruption. There is no way to plan for it.

Now, if you want to handle it specifically for your program, that should
be doable, at your own risk. But there's no way throwing an Exception is
a valid result.
 Can "Out of memory" be an Error? No, because e.g. if I read a user file  
 that requires me to create a large array (> 100 MiB, e.g.) I don't want  
 to crash, but just to tell the user: "Dear user, the file can't be opened  
 because it requires..."
Right, out of memory is only an error if your program's invariant
depends on the memory allocation. You can plan for the above easily
enough, but not realistically for all tasks and all library code that
require allocation.

For example, let's say you are restructuring a hash table, and you
reallocate some nodes. You have half transferred over the old structure
to the new structure, and you run out of memory. How to recover from
this?

In summary, I think we can adjust Windows divide by zero errors to allow
you to handle them manually, the Posix version is fine (no Error is
generated), access violations are unequivocally Errors, and out of
memory should be fixable by making allocation routines that do not throw
(just return null).

An interesting idea would be to allow a global "handler" for all errors.
So if you throw an Exception, it unwinds the stack like normal. If you
throw an Error, it checks a handler to see if you want to handle that
error (via registering a function/delegate), and if not handled throws
Error without properly unwinding the stack.

-Steve
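
A purely hypothetical sketch of what registering such a handler could
look like. No such druntime API exists; every name below is invented for
illustration:

alias ErrorHandler = bool delegate(Error e) nothrow;

__gshared ErrorHandler errorHandler;   // hypothetical global hook

void setErrorHandler(ErrorHandler h)
{
    errorHandler = h;
}

// What the runtime's throw path might do with it (sketch only):
void onErrorThrown(Error e)
{
    if (errorHandler !is null && errorHandler(e))
        return;   // handled at the throw point, execution continues
    throw e;      // otherwise propagate, without running cleanup
}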
May 24 2012
next sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
On May 24, 2012, at 11:39 AM, Steven Schveighoffer wrote:

 On Thu, 24 May 2012 06:27:12 -0400, Denis Shelomovskij
 <verylonglogin.reg gmail.com> wrote:

  Let's talk about an abstract situation without caring about breaking
  existing code, current docs, implementation etc.

  Definitions:
  * an Exception is something that triggers scope guards and executes
  catch/finally blocks if thrown;
  * an Error is something that doesn't do it.

 I'll give you a different definition:

 * an Exception is something that can be handled far away from the
 context of the error, because the system can safely unwind the stack.
 * an Error must be handled at the point the error occurred, or the
 program state is by definition invalid.

This is a good point. OutOfMemory conditions aside, the only time I'd
want to recover from an Error condition was at the point the event
occurred, not somewhere up the stack.
  Can "Access Violation" be an Error? No, because it's very common to
  access a field/virtual member function of a null object.

 I'd argue this is unrecoverable. Access Violation results from
 corruption, and the damage has already been done. Even a null pointer
 access can be the cause of previous corruption. There is no "recovery"
 from memory corruption. There is no way to plan for it.

 Now, if you want to handle it specifically for your program, that
 should be doable, at your own risk. But there's no way throwing an
 Exception is a valid result.

Right. Recovery is potentially valid at the point of failure, not
somewhere up the stack.
  Can "Out of memory" be an Error? No, because e.g. if I read a user
  file that requires me to create a large array (> 100 MiB, e.g.) I
  don't want to crash, but just to tell the user: "Dear user, the file
  can't be opened because it requires..."

 Right, out of memory is only an error if your program's invariant
 depends on the memory allocation. You can plan for the above easily
 enough, but not realistically for all tasks and all library code that
 require allocation.

 For example, let's say you are restructuring a hash table, and you
 reallocate some nodes. You have half transferred over the old
 structure to the new structure, and you run out of memory. How to
 recover from this?

I think it's fair to expect code that allocates to be exception-safe in
the face of allocation errors. I know I'm always very careful with
containers so that an allocation failure doesn't result in corruption,
for example.
 In summary, I think we can adjust Windows divide by zero errors to
 allow you to handle them manually, the Posix version is fine (no Error
 is generated), access violations are unequivocally Errors, and out of
 memory should be fixable by making allocation routines that do not
 throw (just return null).

It would be kind of cool if there were some sort of unified way to
handle system-generated errors, though I don't know that this is
practical. Signals sort of work on Windows, but I'm pretty sure the
contextual sigaction stuff does not.
 An interesting idea would be to allow a global "handler" for all
 errors. So if you throw an Exception, it unwinds the stack like
 normal. If you throw an Error, it checks a handler to see if you want
 to handle that error (via registering a function/delegate), and if not
 handled throws Error without properly unwinding the stack.

^^ this
May 24 2012
next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 24 May 2012 15:33:07 -0400, Sean Kelly <sean invisibleduck.org>  
wrote:

 On May 24, 2012, at 11:39 AM, Steven Schveighoffer wrote:
  Can "Out of memory" be an Error? No, because e.g. if I read a user
  file that requires me to create a large array (> 100 MiB, e.g.) I
  don't want to crash, but just to tell the user: "Dear user, the file
  can't be opened because it requires..."

 Right, out of memory is only an error if your program's invariant
 depends on the memory allocation. You can plan for the above easily
 enough, but not realistically for all tasks and all library code that
 require allocation.

 For example, let's say you are restructuring a hash table, and you
 reallocate some nodes. You have half transferred over the old
 structure to the new structure, and you run out of memory. How to
 recover from this?

 I think it's fair to expect code that allocates to be exception-safe
 in the face of allocation errors. I know I'm always very careful with
 containers so that an allocation failure doesn't result in corruption,
 for example.

I don't think it's fair to expect *all* code to be able to safely
recover from an out of memory exception. I pretty much *never* write
code that worries about out of memory errors. One cannot always expect
an operation involving hundreds of allocations to be atomic.

That being said, we should provide a mechanism so you can handle it, as
it's reliably detectable and very recoverable in many situations.

-Steve
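
One way to get that mechanism today: route the single large,
user-driven allocation around the GC and check the result explicitly. A
minimal sketch (the function and message are made up for illustration):

import core.stdc.stdlib : malloc, free;
import std.stdio;

bool loadLargeFile(size_t needed)
{
    auto buf = cast(ubyte*) malloc(needed);  // returns null, never throws
    if (buf is null)
    {
        writeln("Dear user, the file can't be opened because it requires ",
                needed, " bytes of memory.");
        return false;                        // recover gracefully
    }
    scope(exit) free(buf);
    // ... read the file into buf[0 .. needed] ...
    return true;
}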
May 24 2012
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-05-24 21:33, Sean Kelly wrote:

 This is a good point. OutOfMemory conditions aside, the only time I'd
 want to recover from an Error condition was at the point the event
 occurred, not somewhere up the stack.

You never feel you want to catch at the top level, print a sensible
error message and then exit the application? Instead of the application
just disappearing for the user. If the application can't print the
error message, it just fails in the same way as if you had not caught
the error.

--
/Jacob Carlborg
May 24 2012
parent reply Sean Kelly <sean invisibleduck.org> writes:
On May 24, 2012, at 11:50 PM, Jacob Carlborg wrote:

 On 2012-05-24 21:33, Sean Kelly wrote:

  This is a good point. OutOfMemory conditions aside, the only time
  I'd want to recover from an Error condition was at the point the
  event occurred, not somewhere up the stack.

 You never feel you want to catch at the top level, print a sensible
 error message and then exit the application? Instead of the
 application just disappearing for the user.

Well sure, but I wouldn't consider this recovery.
May 29 2012
parent deadalnix <deadalnix gmail.com> writes:
On 29/05/2012 18:53, Sean Kelly wrote:
 On May 24, 2012, at 11:50 PM, Jacob Carlborg wrote:

 On 2012-05-24 21:33, Sean Kelly wrote:

 This is a good point.  OutOfMemory conditions aside, the only time I'd want to
recover from an Error condition was at the point the event occurred, not
somewhere up the stack.
 You never feel you want to catch at the top level, print a sensible
 error message and then exit the application? Instead of the
 application just disappearing for the user.
Well sure, but I wouldn't consider this recovery.
As said, recovery isn't the only point of exceptions. For problems as
bad as this, you often want to fail cleanly, perhaps printing an error
message or something.

Exception handling discussions are often very focused on recovery, but
the fact is that recovery is A use case, not THE use case. It is very
common in real code that you cannot recover from some problems, and you
just want to fail without messing everything up.
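
In code, that fail-cleanly pattern looks something like the sketch below
(runApplication is a stand-in for the real entry point; catching
Throwable is deliberate, and only defensible because nothing but
reporting and exiting happens afterwards):

import std.stdio;

void runApplication()
{
    // ... the actual program ...
}

int main()
{
    try
    {
        runApplication();
    }
    catch (Throwable t)   // both Exception and Error
    {
        stderr.writeln("fatal: ", t.msg);  // report before dying
        return 1;                          // fail cleanly, no recovery
    }
    return 0;
}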
May 30 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 24/05/2012 21:33, Sean Kelly wrote:
 On May 24, 2012, at 11:39 AM, Steven Schveighoffer wrote:

 On Thu, 24 May 2012 06:27:12 -0400, Denis
Shelomovskij<verylonglogin.reg gmail.com>  wrote:

 Let's talk about an abstract situation without caring about breaking existing
code, current docs, implementation etc.

 Definitions:
 * an Exception is something that triggers scope guards and executes
catch/finally blocks if thrown;
 * an Error is something that doesn't do it.
 I'll give you a different definition:

 * an Exception is something that can be handled far away from the
 context of the error, because the system can safely unwind the stack.
 * an Error must be handled at the point the error occurred, or the
 program state is by definition invalid.
 This is a good point. OutOfMemory conditions aside, the only time I'd
 want to recover from an Error condition was at the point the event
 occurred, not somewhere up the stack.
Often, the point of Exceptions isn't to recover, but to fail as cleanly
as possible. To do so, an Error must trigger finally blocks and scope
statements. They probably shouldn't be catchable in @safe code because
of the possible invalid state of the program.

But still, often recovering isn't the point when it comes to problems
as bad as Errors.
May 30 2012
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 30.05.2012 12:33, deadalnix wrote:
 On 24/05/2012 21:33, Sean Kelly wrote:
 On May 24, 2012, at 11:39 AM, Steven Schveighoffer wrote:

 On Thu, 24 May 2012 06:27:12 -0400, Denis
 Shelomovskij<verylonglogin.reg gmail.com> wrote:

 Let's talk about an abstract situation without caring about breaking
 existing code, current docs, implementation etc.

 Definitions:
 * an Exception is something that triggers scope guards and executes
 catch/finally blocks if thrown;
 * an Error is something that doesn't do it.
 I'll give you a different definition:

 * an Exception is something that can be handled far away from the
 context of the error, because the system can safely unwind the stack.
 * an Error must be handled at the point the error occurred, or the
 program state is by definition invalid.

 This is a good point. OutOfMemory conditions aside, the only time I'd
 want to recover from an Error condition was at the point the event
 occurred, not somewhere up the stack.

 Often, the point of Exceptions isn't to recover, but to fail as
 cleanly as possible. To do so, an Error must trigger finally blocks
 and scope statements.
Yes, finally the voice of wisdom!
 They probably shouldn't be catchable in @safe code because of the
 possible invalid state of the program.
Interesting point btw.
 But still, often recovering isn't the point when it comes to problems
 as bad as Errors.
-- Dmitry Olshansky
May 30 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 24/05/2012 20:39, Steven Schveighoffer wrote:
 I'd argue this is unrecoverable. Access Violation results from
 corruption, and the damage has already been done. Even a null pointer
 access can be the cause of previous corruption. There is no "recovery"
 from memory corruption. There is no way to plan for it.

 Now, if you want to handle it specifically for your program, that should
 be doable, at your own risk. But there's no way throwing an Exception is
 a valid result.
https://github.com/D-Programming-Language/druntime/pull/187

I think this is a mandatory patch due to nullable types by default.
Arguably, nullable by default is the problem.
May 30 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 24/05/2012 12:27, Denis Shelomovskij wrote:
 [...]
The fact that Errors don't trigger scope and everything is nonsensical.

Today, we know how to implement exceptions with NO RUNTIME COST when no
exception is thrown. There is no reason not to do it, except executable
size. As this is a specific constraint, we may want to enable the
alternative via a compiler switch, but not by default.

I see an Error as a problem that can occur anywhere in any piece of
code. I think D has some flaws in Exception management. See the Segfault
vs NullPointerException discussions in this very forum. A segfault may
be OK for some applications, but not for a server app that needs to be
robust.

Error exists because of nothrow. As some exceptions can be thrown
ANYWHERE, we need a way to separate what is expected to fail (and has to
be handled for code to be nothrow) from what can go wrong anywhere
(basically, when druntime cannot do its job for some reason).
May 30 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
If an Error is truly unrecoverable (as they're generally supposed to
be), then what does it matter? Something fatal occurred in your program,
so it terminates. Because it's an Error, you can get a stack trace and
report something before the program actually terminates, but continuing
execution after an Error is considered to be a truly _bad_ idea, so in
general, why does it matter whether scope statements, finally blocks, or
destructors get executed?

It's only in rarer cases where you're trying to do something like create
a unit test framework on top of assert that you would need to catch an
Error, and that's questionable enough as it is. In normal program
execution, an error is fatal, so cleanup is irrelevant and even
potentially dangerous, because your program is already in an invalid
state.

- Jonathan M Davis
May 30 2012
parent reply Don Clugston <dac nospam.com> writes:
On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
 If an Error is truly unrecoverable (as they're generally supposed to
 be), then what does it matter? Something fatal occurred in your
 program, so it terminates. Because it's an Error, you can get a stack
 trace and report something before the program actually terminates, but
 continuing execution after an Error is considered to be a truly _bad_
 idea, so in general, why does it matter whether scope statements,
 finally blocks, or destructors get executed? It's only in rarer cases
 where you're trying to do something like create a unit test framework
 on top of assert that you would need to catch an Error, and that's
 questionable enough as it is. In normal program execution, an error is
 fatal, so cleanup is irrelevant and even potentially dangerous, because
 your program is already in an invalid state.
That's true for things like segfaults, but in the case of an
AssertError, there's no reason to believe that cleanup would cause any
damage. In fact, generally, the point of an AssertError is to prevent
the program from entering an invalid state. And it's very valuable to
log it properly.
May 30 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, May 30, 2012 11:32:00 Don Clugston wrote:
 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
 If an Error is truly unrecoverable (as they're generally supposed to
 be), then what does it matter? Something fatal occurred in your
 program, so it terminates. Because it's an Error, you can get a stack
 trace and report something before the program actually terminates, but
 continuing execution after an Error is considered to be a truly _bad_
 idea, so in general, why does it matter whether scope statements,
 finally blocks, or destructors get executed? It's only in rarer cases
 where you're trying to do something like create a unit test framework
 on top of assert that you would need to catch an Error, and that's
 questionable enough as it is. In normal program execution, an error is
 fatal, so cleanup is irrelevant and even potentially dangerous, because
 your program is already in an invalid state.
 That's true for things like segfaults, but in the case of an
 AssertError, there's no reason to believe that cleanup would cause any
 damage. In fact, generally, the point of an AssertError is to prevent
 the program from entering an invalid state.
An assertion failure really isn't all that different from a segfault. By
definition, if an assertion fails, the program is in an invalid state,
because the whole point of the assertion is to guarantee something about
the program's state.

Now, if a segfault occurs (particularly if it's caused by something
other than a null pointer), the program is likely to be in a _worse_
state, but it's in an invalid state in either case. In neither case does
it make any sense to try and recover, and in both cases, there's a
definite risk in executing any further code - including cleanup code.
Yes, the segfault is probably worse, but not necessarily all that much
worse. A logic error can be just as insidious to the state of a program
as memory corruption, depending on what it is.
 And it's very valuable to log it properly.
Yes, which is why it's better to have an Error thrown rather than a halt
instruction be executed. But that doesn't mean that any attempt at
cleanup is any more valid.

- Jonathan M Davis
May 30 2012
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-05-30 12:59, Jonathan M Davis wrote:

 Yes, which is why it's better to have an Error thrown rather than a halt
 instruction be executed. But that doesn't mean that any attempt at cleanup is
 any more valid.
If you're not supposed to be able to catch Errors then what's the
difference?

--
/Jacob Carlborg
May 30 2012
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 30.05.2012 17:28, Jacob Carlborg wrote:
 On 2012-05-30 12:59, Jonathan M Davis wrote:

 Yes, which is why it's better to have an Error thrown rather than a halt
 instruction be executed. But that doesn't mean that any attempt at
 cleanup is
 any more valid.
If you're not supposed to be able to catch Errors then what's the difference?
Having a half-flushed/synced database file is no good. I've had the
pleasure of restoring such things by hand. Trust me, you DON'T want it.

A common technique that can kick-start a half-flushed binary file is
appending a certain amount of zeros until it "fits". Depending on the
structure it may need more than that.

--
Dmitry Olshansky
May 30 2012
prev sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, May 30, 2012 15:28:22 Jacob Carlborg wrote:
 On 2012-05-30 12:59, Jonathan M Davis wrote:
 Yes, which is why it's better to have an Error thrown rather than a halt
 instruction be executed. But that doesn't mean that any attempt at cleanup
 is any more valid.
If you're not supposed to be able to catch Errors then what's the difference?
You can catch them to print out additional information or whatever is
useful to generate more information about the Error. In fact, just what
the Error gives you is already more useful: message, file, line number,
stack trace, etc. That alone makes an Error more useful than a halt
instruction.

You can catch them to attempt explicit cleanup that absolutely must be
done for whatever reason (with the knowledge that it's potentially
dangerous to do that cleanup due to the Error).

You can catch them in very controlled circumstances where you know that
continuing is safe (obviously this isn't the sort of thing that you do
in @safe code). For instance, in some restricted cases, that could be
done with an OutOfMemoryError. But when you do that sort of thing, you
have to catch the Error _very_ close to the throw point and be sure that
there's no cleanup code in between. It only works when you can guarantee
yourself that the program state is not being compromised by the Error,
and you're able to guarantee that continuing from the catch point is
safe. That works in some cases with AssertError in unit test code but
becomes problematic as such code becomes more complex.

- Jonathan M Davis
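
The unit-test case mentioned above might look like the sketch below: the
AssertError is caught immediately at the throw point, with no cleanup
code in between (passes is a made-up helper, not a real framework API):

import core.exception : AssertError;

bool passes(void delegate() test)
{
    try
    {
        test();
        return true;
    }
    catch (AssertError e)
    {
        // Caught right where the assert failed; we assume no program
        // state was compromised between the throw and this catch.
        return false;
    }
}

unittest
{
    assert( passes({ assert(1 + 1 == 2); }));
    assert(!passes({ assert(1 + 1 == 3); }));
}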
May 30 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-05-30 23:16, Jonathan M Davis wrote:

 You can catch them to print out additional information or whatever is useful
 to generate more information about the Error. In fact, just what the Error
 gives you is already more useful: message, file, line number, stack trace, etc.
 That alone makes an Error more useful than a halt instruction.
If I recall correctly you have been arguing that Errors shouldn't be
catchable, as they are today, and that this needed to be fixed. Hmm, or
was that Steven.
 You can catch them to attempt explicit cleanup that absolutely must be done
 for whatever reason (with the knowledge that it's potentially dangerous to do
 that cleanup due to the Error).

 You can catch them in very controlled circumstances where you know that
 continuing is safe (obviously this isn't the sort of thing that you do in
 @safe code). For instance, in some restricted cases, that could be done with
 an OutOfMemoryError. But when you do that sort of thing you have to catch the
 Error _very_ close to the throw point and be sure that there's no cleanup code
 in between. It only works when you can guarantee yourself that the program
 state is not being compromised by the Error, and you're able to guarantee that
 continuing from the catch point is safe. That works in some cases with
 AssertError in unit test code but becomes problematic as such code becomes
 more complex.
I'm mostly interested in letting the user know something went wrong and
then exit the application.

--
/Jacob Carlborg
May 30 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Thursday, May 31, 2012 08:26:18 Jacob Carlborg wrote:
 On 2012-05-30 23:16, Jonathan M Davis wrote:
 You can catch them to print out additional information or whatever is
 useful to generate more information about the Error. In fact, just what
 the Error gives you is already more useful: message, file, line number,
 stack trace, etc. That alone makes an Error more useful than a halt
 instruction.
 If I recall correctly you have been arguing that Errors shouldn't be
 catchable, as they are today, and that this needed to be fixed. Hmm, or
 was that Steven.
No. I haven't been arguing that. It's not particularly safe to catch
Errors, but they're catchable on purpose. It's catching and _handling_
an Error that's generally a bad idea - that and continuing to execute
after catching the Error rather than letting the program terminate. It
may be appropriate in very rare examples where the programmer knows what
they're doing, but in general, catching them to do much beyond print out
additional information or maybe do some absolutely critical cleanup is a
bad idea.

The real question is whether any cleanup should be attempted on Error
(i.e. destructors, scope statements, and finally blocks). Running that
code is risky when an Error occurs, because the program is in an invalid
state. It's possible that running it could actually do damage of some
variety, depending on the state of the program and what the cleanup
does. But skipping all of that cleanup can be a problem too, since that
cleanup generally needs to be done. Depending on what the Error was, the
cleanup may actually work and therefore leave the program in a less
invalid state. So, it's a question of whether attempting cleanup in an
invalid state or skipping cleanup in an invalid state is riskier. Per
Walter, there's no guarantee that that cleanup will occur, but with the
current implementation it almost always does.
 You can catch them to attempt explicit cleanup that absolutely must be
 done
 for whatever reason (with the knowledge that it's potentially dangerous to
 do that cleanup due to the Error).
 
 You can catch them in very controlled circumstances where you know that
 continuing is safe (obviously this isn't the sort of thing that you do in
 @safe code). For instance, in some restricted cases, that could be done
 with an OutOfMemoryError. But when you do that sort of thing you have to
 catch the Error _very_ close to the throw point and be sure that there's
 no cleanup code in between. It only works when you can guarantee yourself
 that the program state is not being compromised by the Error, and you're
 able to guarantee that continuing from the catch point is safe. That
 works in some cases with AssertError in unit test code but becomes
 problematic as such code becomes more complex.
I'm mostly interested in letting the user know something went wrong and then exit the application.
That would be the most typical use case for catching an Error and
certainly is the least risky of the reasons that you might do it.

- Jonathan M Davis
May 30 2012
parent Jacob Carlborg <doob me.com> writes:
On 2012-05-31 08:39, Jonathan M Davis wrote:

 No. I haven't been arguing that. It's not particularly safe to catch Errors,
 but they're catchable on purpose. It's catching and _handling_ an Error that's
 generally a bad idea - that and continuing to execute after catching the Error
 rather than letting the program terminate. It may be appropriate in very rare
 examples where the programmer knows what they're doing, but in general,
 catching them to do much beyond print out additional information or maybe do
 some absolutely critical cleanup is a bad idea.

 The real question is whether any cleanup should be attempted on Error (i.e.
 destructors, scope statements, and finally blocks). Running that code is risky
 when an Error occurs, because the program is in an invalid state. It's
 possible that running it could actually do damage of some variety, depending
 on the state of the program and what the cleanup does. But skipping all of
 that cleanup can be a problem too, since that cleanup generally needs to be
 done. Depending on what the Error was, the cleanup may actually work and
 therefore leave the program in a less invalid state. So, it's a question of
 whether attempting cleanup in an invalid state or skipping cleanup in an
 invalid state is riskier. Per Walter, there's no guarantee that that cleanup
 will occur, but with the current implementation it almost always does.
Ok, I'm sorry if I misunderstood you or confused you with someone else.

--
/Jacob Carlborg
May 31 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 30/05/2012 12:59, Jonathan M Davis wrote:
 And it's very valuable to log it properly.
 Yes, which is why it's better to have an Error thrown rather than a
 halt instruction be executed. But that doesn't mean that any attempt at
 cleanup is any more valid.
Sorry, but that is bullshit. What can be the benefit of not trying to
clean things up?

Do you really consider that corrupted files, clients waiting forever at
the other end of a connection, or any similar stuff is a good thing?
Because that is what you are advocating.

It may sound good on paper, but in real life, systems DO fail. It isn't
a question of if, but a question of when and how often, and what to do
about it.
May 30 2012
parent Sean Kelly <sean invisibleduck.org> writes:
On May 30, 2012, at 7:21 AM, deadalnix <deadalnix gmail.com> wrote:

 On 30/05/2012 12:59, Jonathan M Davis wrote:

  And it's very valuable to log it properly.

  Yes, which is why it's better to have an Error thrown rather than a
  halt instruction be executed. But that doesn't mean that any attempt
  at cleanup is any more valid.

 Sorry, but that is bullshit. What can be the benefit of not trying to
 clean things up?

 Do you really consider that corrupted files, clients waiting forever
 at the other end of a connection, or any similar stuff is a good
 thing? Because that is what you are advocating.

 It may sound good on paper, but in real life, systems DO fail. It
 isn't a question of if, but a question of when and how often, and what
 to do about it.

I'd certainly at least want to be given the option of cleaning up when
an Error is thrown. If not, I have a feeling that in circumstances where
I really wanted it I'd do something horrible to make sure it happened in
some other way.
May 30 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 30/05/12 12:59, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 11:32:00 Don Clugston wrote:
 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
 If an Error is truly unrecoverable (as they're generally supposed to
 be), then what does it matter? Something fatal occurred in your
 program, so it terminates. Because it's an Error, you can get a stack
 trace and report something before the program actually terminates, but
 continuing execution after an Error is considered to be a truly _bad_
 idea, so in general, why does it matter whether scope statements,
 finally blocks, or destructors get executed? It's only in rarer cases
 where you're trying to do something like create a unit test framework
 on top of assert that you would need to catch an Error, and that's
 questionable enough as it is. In normal program execution, an error is
 fatal, so cleanup is irrelevant and even potentially dangerous, because
 your program is already in an invalid state.
 That's true for things like segfaults, but in the case of an
 AssertError, there's no reason to believe that cleanup would cause any
 damage. In fact, generally, the point of an AssertError is to prevent
 the program from entering an invalid state.
 An assertion failure really isn't all that different from a segfault.
 By definition, if an assertion fails, the program is in an invalid
 state, because the whole point of the assertion is to guarantee
 something about the program's state.
There's a big difference. A segfault is a machine error. The integrity
of the machine model has been violated, and the machine is in an
out-of-control state. In particular, the stack may be corrupted, so
stack unwinding may not be successful.

But, in an assert error, the machine is completely intact; the error is
at a higher level, which does not interfere with stack unwinding.

Damage is possible only if you've written your destructors/finally code
extremely poorly. Note that, unlike C++, it's OK to throw a new Error or
Exception from inside a destructor. But with (say) a stack overflow, you
don't necessarily know what code is being executed. It could do
anything.
 Now, if a segfault occurs (particularly if it's caused by something
 other than a null pointer), the program is likely to be in a _worse_ state,
 but it's in an invalid state in either case. In neither case does it make any
 sense to try and recover, and in both cases, there's a definite risk in
 executing any further code - including cleanup code.
 Yes, the segfault is
 probably worse but not necessarily all that much worse. A logic error can be
 just as insidious to the state of a program as memory corruption, depending on
 what it is.
I'm surprised by your response, I didn't think this was controversial.

We could just as easily have said that assert() throws an
AssertException. (Or have two kinds of assert, one which is an Error and
the other merely an Exception.)
May 30 2012
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, May 30, 2012 17:29:30 Don Clugston wrote:
 On 30/05/12 12:59, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 11:32:00 Don Clugston wrote:
 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
 If an Error is truly unrecoverable (as they're generally supposed to
 be), then what does it matter? Something fatal occurred in your
 program, so it terminates. Because it's an Error, you can get a stack
 trace and report something before the program actually terminates, but
 continuing execution after an Error is considered to be a truly _bad_
 idea, so in general, why does it matter whether scope statements,
 finally blocks, or destructors get executed? It's only in rarer cases
 where you're trying to do something like create a unit test framework
 on top of assert that you would need to catch an Error, and that's
 questionable enough as it is. In normal program execution, an error is
 fatal, so cleanup is irrelevant and even potentially dangerous, because
 your program is already in an invalid state.
 That's true for things like segfaults, but in the case of an
 AssertError, there's no reason to believe that cleanup would cause any
 damage. In fact, generally, the point of an AssertError is to prevent
 the program from entering an invalid state.
 An assertion failure really isn't all that different from a segfault.
 By definition, if an assertion fails, the program is in an invalid
 state, because the whole point of the assertion is to guarantee
 something about the program's state.
 There's a big difference. A segfault is a machine error. The integrity
 of the machine model has been violated, and the machine is in an
 out-of-control state. In particular, the stack may be corrupted, so
 stack unwinding may not be successful.

 But, in an assert error, the machine is completely intact; the error
 is at a higher level, which does not interfere with stack unwinding.

 Damage is possible only if you've written your destructors/finally
 code extremely poorly. Note that, unlike C++, it's OK to throw a new
 Error or Exception from inside a destructor. But with (say) a stack
 overflow, you don't necessarily know what code is being executed. It
 could do anything.
There is definitely a difference in severity. Clearly memory corruption
is more severe than a logic error in your code. However, in the general
case, if you have a logic error in your code which is caught by an
assertion, there's no way to know without actually examining the code
how valid the state of the program is at that point. It's in an invalid
state _by definition_, because the assertion was testing the validity of
the state of the program, and it failed. So, at that point, it's only a
question of degree. _How_ invalid is the state? Since there's no way for
the program to know how severe the logic error was, it has no way of
knowing whether it's safe to run any cleanup code (the same as the
program has no way of knowing whether a segfault is relatively minor -
e.g. a null pointer - or absolutely catastrophic - e.g. memory is
horribly corrupted).

If you got an OutOfMemoryError rather than one specifically indicating a
logic error (as with Errors such as AssertError or RangeError), then
that's specifically telling you that your program has run out of a
particular resource (i.e. memory), which means that any code which
assumes that that resource is available (which in the case of memory is
pretty much all code) will fail. Running cleanup code could be very
precarious at that point if it allocates any memory (which a lot of
cleanup code wouldn't, but I'm sure that it would be very easy to find
cleanup code which did). Any further attempts at allocation would result
in more OutOfMemoryErrors and leave the cleanup code only partially run,
thereby possibly making things even worse, depending on what the cleanup
code does.

Running cleanup code is _not_ safe when an Error is thrown, because the
program is definitely in an invalid state at that point, even if it's
not as bad as a segfault can be. Now, it may be that that risk is worth
it, especially since a lot of the time, cleanup code won't be
invalidated in the least by whatever caused Errors elsewhere in the
program, and there are definitely plenty of cases where at least
attempting to clean up everything is better than skipping it all due to
an Error somewhere else in the program. But it's still not safe. It's a
question of whether we think that the risks posed by trying to run
cleanup code after the program is in an invalid enough state that an
Error was thrown are too great to attempt cleanup, or whether we think
that the problems caused by skipping that cleanup are greater.
 Now, if a segfault occurs (particularly if it's caused by something
 other than a null pointer), the program is likely to be in a _worse_
 state,
 but it's in an invalid state in either case. In neither case does it make
 any sense to try and recover, and in both cases, there's a definite risk
 in executing any further code - including cleanup code.
 
 Yes, the segfault is
 probably worse but not necessarily all that much worse. A logic error can
 be just as insidious to the state of a program as memory corruption,
 depending on what it is.
 I'm surprised by your response, I didn't think this was controversial.
 We could just as easily have said assert() throws an AssertException.
 (Or have two kinds of assert, one which is an Error and the other
 merely an Exception).
In general, a segfault is definitely worse, but logic errors _can_ be
just as bad in terms of the damage that they can do (especially in
comparison with segfaults caused by null pointers as opposed to those
caused by memory corruption). It all depends on what the logic error is
and what happens if you try and continue with the program in such a
state.

- Jonathan M Davis
May 30 2012
parent "Regan Heath" <regan netmail.co.nz> writes:
On Wed, 30 May 2012 22:36:50 +0100, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:
 In general, a segfault is definitely worse, but logic errors _can_ be
 just as bad in terms of the damage that they can do (especially in
 comparison with segfaults caused by null pointers as opposed to those
 caused by memory corruption). It all depends on what the logic error
 is and what happens if you try and continue with the program in such a
 state.
In fact, a logic error - or rather, an assert triggered by a bad
argument or similar - may be the result of memory corruption which went
undetected, i.e. a buffer overflow on the stack overwriting an integer
passed to a function which asserts it is within a range, and it isn't.
So you can't even be sure that logic errors aren't caused by memory
corruption.

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
May 31 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 30/05/2012 17:29, Don Clugston wrote:
 There's a big difference. A segfault is a machine error. The integrity
 of the machine model has been violated, and the machine is in an
 out-of-control state. In particular, the stack may be corrupted, so
 stack unwinding may not be successful.

 But, in an assert error, the machine is completely intact; the error is
 at a higher level, which does not interfere with stack unwinding.

 Damage is possible only if you've written your destructors/finally code
 extremely poorly. Note that, unlike C++, it's OK to throw a new Error or
 Exception from inside a destructor.
 But with (say) a stack overflow, you don't necessarily know what code is
 being executed. It could do anything.
Most segfaults are null dereferences or uninitialized pointer
dereferences. Both are recoverable.
May 30 2012
next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, May 31, 2012 00:01:16 deadalnix wrote:
 On 30/05/2012 17:29, Don Clugston wrote:
 There's a big difference. A segfault is a machine error. The integrity
 of the machine model has been violated, and the machine is in an
 out-of-control state. In particular, the stack may be corrupted, so
 stack unwinding may not be successful.
 
 But, in an assert error, the machine is completely intact; the error is
 at a higher level, which does not interfere with stack unwinding.
 
 Damage is possible only if you've written your destructors/finally code
 extremely poorly. Note that, unlike C++, it's OK to throw a new Error or
 Exception from inside a destructor.
 But with (say) a stack overflow, you don't necessarily know what code is
 being executed. It could do anything.
 Most segfaults are null dereferences or uninitialized pointer
 dereferences. Both are recoverable.
If you dereferenced a null pointer, it's a bug in your code. Your code
is assuming that the pointer was non-null, which was obviously
incorrect, because it was null. That's _not_ recoverable in the general
case. Your code was obviously written with the assumption that the
pointer was non-null, so your code is wrong, and so continuing to
execute it makes no sense, because it's in an invalid state and could do
who-knows-what. If there's any possibility of a pointer being null, the
correct thing to do is to check it before dereferencing it. If you
don't, it's a bug.

Now, it's perfectly possible to design code which never checks for null
pointers and, if a null pointer is dereferenced, throws an Exception and
attempts to recover from it (assuming that it's possible to detect the
dereference and throw at that point, which AFAIK is impossible with
segfaults - maybe it could be done on Windows with its Access
Violations, but segfaults trigger a signal handler, and you're screwed
at that point). But writing code which just assumes that pointers are
non-null and will throw if they are null is incredibly sloppy. It means
that you're treating pointers like you'd treat user input rather than
as part of your code.

- Jonathan M Davis
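
The distinction drawn here, as a sketch: validate what you don't control
with Exceptions, and assert what your own code guarantees (Document,
tryLoad, render, and open are all hypothetical names):

struct Document { /* ... */ }

Document* tryLoad(string path);     // hypothetical loader

void render(Document* doc)
{
    // A null doc is a bug in the caller: assert it, don't "handle" it.
    assert(doc !is null, "render called with a null document");
    // ... use *doc ...
}

Document* open(string path)
{
    // Failure here is an expected runtime condition: throw an Exception.
    auto doc = tryLoad(path);
    if (doc is null)
        throw new Exception("could not open " ~ path);
    return doc;
}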
May 30 2012
next sibling parent reply "Kapps" <opantm2+spam gmail.com> writes:
On Wednesday, 30 May 2012 at 22:22:01 UTC, Jonathan M Davis wrote:
 On Thursday, May 31, 2012 00:01:16 deadalnix wrote:
 Most segfaults are null dereferences or uninitialized pointer
 dereferences.
 Both are recoverable.
 [...]
All code has bugs in it. It's nice being notified about it and all, but if you release a server application, and it crashes every single time it encounters any bug... well, your customers will not have a fun time.
May 30 2012
next sibling parent "Kapps" <opantm2+spam gmail.com> writes:
On Thursday, 31 May 2012 at 00:58:42 UTC, Kapps wrote:
 All code has bugs in it. It's nice being notified about it and 
 all, but if you release a server application, and it crashes 
 every single time it encounters any bug... well, your customers 
 will not have a fun time.
Note that this assumes you're in a part of code where you can isolate and revert anything the error may cause. Roll back a database transaction, disconnect a client, abort their web page request, etc.
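[A minimal sketch of that isolation in D, with stand-in types - Connection and handleRequest are hypothetical: an Exception is confined to one request, while an Error is deliberately left uncaught so the process dies and a supervisor can restart it.]

class Connection // stand-in for a real client connection
{
    void close() {}
}

void handleRequest(Connection c) // stand-in; may throw
{
    throw new Exception("bad request");
}

void serveAll(Connection[] clients)
{
    foreach (client; clients)
    {
        try
        {
            handleRequest(client);
        }
        catch (Exception e)
        {
            // undo this request's work only (roll back, etc.),
            // then drop the client; the server keeps running
            client.close();
        }
        // Errors are deliberately not caught here
    }
}

void main()
{
    serveAll([new Connection, new Connection]);
}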
May 30 2012
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On May 30, 2012, at 5:58 PM, "Kapps" <opantm2+spam gmail.com> wrote:

 On Wednesday, 30 May 2012 at 22:22:01 UTC, Jonathan M Davis wrote:
 On Thursday, May 31, 2012 00:01:16 deadalnix wrote:
 Most segfaults are null dereferences or uninitialized pointer dereferences.
 Both are recoverable.

 If you dereferenced a null pointer, it's a bug in your code. Your code is
 assuming that the pointer was non-null, which was obviously incorrect, because
 it was null. That's _not_ recoverable in the general case. Your code was
 obviously written with the assumption that the pointer was non-null, so your
 code is wrong, and so continuing to execute it makes no sense, because it's in
 an invalid state and could do who-knows-what. If there's any possibility of a
 pointer being null, the correct thing to do is to check it before
 dereferencing it. If you don't, it's a bug.

 Now, it's perfectly possible to design code which never checks for null
 pointers and, if a null pointer is dereferenced, throws an Exception and
 attempts to recover from it (assuming that it's possible to detect the
 dereference and throw at that point, which AFAIK is impossible with segfaults
 - maybe it could be done on Windows with its Access Violations, but segfaults
 trigger a signal handler, and you're screwed at that point). But writing code
 which just assumes that pointers are non-null and will throw if they are null
 is incredibly sloppy. It means that you're treating pointers like you'd treat
 user input rather than as part of your code.

 - Jonathan M Davis

 All code has bugs in it. It's nice being notified about it and all, but if
 you release a server application, and it crashes every single time it
 encounters any bug... well, your customers will not have a fun time.

It's worth noting that Google apps abort on error. For server apps, a process
crash is often designed to be invisible to the client, as much to allow
seamless code upgrades as anything.
May 30 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/30/2012 5:58 PM, Kapps wrote:
 All code has bugs in it. It's nice being notified about it and all, but if you
 release a server application, and it crashes every single time it encounters
 any bug... well, your customers will not have a fun time.
The correct response to a server app crashing is to restart it, not attempt to keep a program in an invalid state running. Attempting to continue normal operation of a program that has entered an invalid state is bad news from front to back. It is wrong wrong wrong wrong.
May 30 2012
parent reply deadalnix <deadalnix gmail.com> writes:
Le 31/05/2012 04:10, Walter Bright a écrit :
 On 5/30/2012 5:58 PM, Kapps wrote:
 All code has bugs in it. It's nice being notified about it and all,
 but if you
 release a server application, and it crashes every single time it
 encounters any
 bug... well, your customers will not have a fun time.
The correct response to a server app crashing is to restart it, not attempt to keep a program in an invalid state running.
Should I mention to restart it AFTER A GRACEFUL SHUTDOWN?
May 31 2012
parent "Jouko Koski" <joukokoskispam101 netti.fi> writes:
"deadalnix" <deadalnix gmail.com> wrote:
 Le 31/05/2012 04:10, Walter Bright a écrit :
 The correct response to a server app crashing is to restart it, not
 attempt to keep a program in an invalid state running.
Should I mention to restart it AFTER A GRACEFUL SHUTDOWN?
No. Aborting with a crash dump is a good way to do a controlled shutdown. If there is a need for cleanup, it is better to do it in the upcoming startup, where the program state is valid.
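[A sketch of that startup-side cleanup in D, assuming a purely hypothetical lock-file convention; the file name and the recovery step are made up.]

import std.file : exists, remove, write;

void main()
{
    enum lockFile = "app.lock";
    if (exists(lockFile))
    {
        // the previous run aborted without a graceful shutdown;
        // recover journals / discard temporaries here, in a valid state
        remove(lockFile);
    }
    write(lockFile, ""); // mark this run as live
    // ... normal operation; remove the lock file again on graceful exit
}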
Jun 01 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
Le 31/05/2012 00:21, Jonathan M Davis a écrit :
 On Thursday, May 31, 2012 00:01:16 deadalnix wrote:
 Le 30/05/2012 17:29, Don Clugston a écrit :
 There's a big difference. A segfault is a machine error. The integrity
 of the machine model has been violated, and the machine is in an
 out-of-control state. In particular, the stack may be corrupted, so
 stack unwinding may not be successful.

 But, in an assert error, the machine is completely intact; the error is
 at a higher level, which does not interfere with stack unwinding.

 Damage is possible only if you've written your destructors/finally code
 extremely poorly. Note that, unlike C++, it's OK to throw a new Error or
 Exception from inside a destructor.
 But with (say) a stack overflow, you don't necessarily know what code is
 being executed. It could do anything.
Most segfaults are null dereferences or uninitialized pointer dereferences. Both are recoverable.
If you dereferenced a null pointer, it's a bug in your code. Your code is assuming that the pointer was non-null, which was obviously incorrect, because it was null. That's _not_ recoverable in the general case. Your code was obviously written with the assumption that the pointer was non-null, so your code is wrong, and so continuing to execute it makes no sense, because it's in an invalid state and could do who-knows-what. If there's any possibility of a pointer being null, the correct thing to do is to check it before dereferencing it. If you don't, it's a bug.
I want to remind you that the subject is knowing whether scope and finally blocks should be triggered by Errors. Such blocks are perfectly safe to execute in such a situation. Additionally, I may want to abort the current operation, but not the whole program. This is doable on such an error.
 Now, it's perfectly possible to design code which never checks for null
 pointers and if a null pointer is dereferenced throws an Exception and
 attempts to recover from it (assuming that it's possible to detect the
 dereference and throw at that point, which AFAIK is impossible with segfaults
 - maybe it could be done on Windows with its Access Violations, but segfaults
 trigger a signal handler, and you're screwed at that point).
Well, I have a pull request in druntime that does exactly what you claim is impossible.
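[A sketch of the "abort the operation, not the program" idea from above, in D, assuming a debug build where assert throws and assuming - as argued here - that cleanup runs; whether catching Error is wise is exactly the point of contention.]

import std.stdio : writeln;

void runTask(void delegate() task)
{
    writeln("resource acquired");
    scope(exit) writeln("resource released"); // relies on cleanup running

    try
    {
        task();
    }
    catch (Error e)
    {
        // only this task is abandoned; the caller carries on
        writeln("task aborted: ", e.msg);
    }
}

void main()
{
    runTask({ assert(0, "task invariant broken"); });
    writeln("still alive");
}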
May 31 2012
prev sibling parent reply Artur Skawina <art.08.09 gmail.com> writes:
On 05/31/12 00:21, Jonathan M Davis wrote:
 Now, it's perfectly possible to design code which never checks for null 
 pointers and if a null pointer is dereferenced throws an Exception and 
 attempts to recover from it (assuming that it's possible to detect the 
 dereference and throw at that point, which AFAIK is impossible with segfaults 
 - maybe it could be done on Windows with its Access Violations, but segfaults 
 trigger a signal handler, and you're screwed at that point). But writing code 
No, it's easily recoverable. That does not mean however that it would be a good idea to map segfaults to exceptions as a language feature. And dereferencing a null pointer is *not* guaranteed to trap; all you need is a large enough offset and you will get silent data corruption.

   int i = 42;
   auto j = cast(size_t)&i;
   ubyte* p = null;
   p[j] = 13;
   assert(i != 42); // oops

artur
May 30 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 31/05/2012 03:06, Artur Skawina a écrit :
 On 05/31/12 00:21, Jonathan M Davis wrote:
 Now, it's perfectly possible to design code which never checks for null
 pointers and if a null pointer is dereferenced throws an Exception and
 attempts to recover from it (assuming that it's possible to detect the
 dereference and throw at that point, which AFAIK is impossible with segfaults
 - maybe it could be done on Windows with its Access Violations, but segfaults
 trigger a signal handler, and you're screwed at that point). But writing code
 No, it's easily recoverable. That does not mean however that it would be a
 good idea to map segfaults to exceptions as a language feature. And
 dereferencing a null pointer is *not* guaranteed to trap; all you need is a
 large enough offset and you will get silent data corruption.

    int i = 42;
    auto j = cast(size_t)&i;
    ubyte* p = null;
    p[j] = 13;
    assert(i != 42); // oops

 artur
Most systems protect at least the first page (the first 4 KiB). For others, it is doable within druntime with page protection. For bigger offsets, such accesses have to be forbidden in safe code, or a runtime check should be added.
May 31 2012
prev sibling next sibling parent deadalnix <deadalnix gmail.com> writes:
Le 30/05/2012 11:32, Don Clugston a écrit :
 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
If an Error is truly unrecoverable (as they're generally supposed to be), then what does it matter? Something fatal occurred in your program, so it terminates. Because it's an Error, you can get a stack trace and report something before the program actually terminates, but continuing execution after an Error is considered to be a truly _bad_ idea, so in general, why does it matter whether scope statements, finally blocks, or destructors get executed? It's only in rarer cases where you're trying to do something like create a unit test framework on top of assert that you would need to catch an Error, and that's questionable enough as it is. In normal program execution, an error is fatal, so cleanup is irrelevant and even potentially dangerous, because your program is already in an invalid state.
That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage. In fact, generally, the point of an AssertError is to prevent the program from entering an invalid state. And it's very valuable to log it properly.
For segfaults, recovery has been proven to be useful in other languages.
May 30 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac nospam.com> wrote:

 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
If an Error is truly unrecoverable (as they're generally supposed to be), then what does it matter? Something fatal occurred in your program, so it terminates. Because it's an Error, you can get a stack trace and report something before the program actually terminates, but continuing execution after an Error is considered to be a truly _bad_ idea, so in general, why does it matter whether scope statements, finally blocks, or destructors get executed? It's only in rarer cases where you're trying to do something like create a unit test framework on top of assert that you would need to catch an Error, and that's questionable enough as it is. In normal program execution, an error is fatal, so cleanup is irrelevant and even potentially dangerous, because your program is already in an invalid state.
That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.
There's also no reason to assume that orderly cleanup *doesn't* cause any damage. In fact, it's not reasonable to assume *anything*. Which is the point. If you want to recover from an error, you have to do it manually. It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want). But there is no reasonable *default* for handling an error that the runtime can assume.

I'd classify errors/exceptions into three categories:

1. corruption/segfault -- not recoverable under any reasonable circumstances.
Special cases exist (such as a custom paging mechanism).
2. program invariant errors (i.e. assert errors) -- Recovery is not defined by
the runtime, so you must do it manually. Any decision the runtime makes will
be arbitrary, and could be wrong.
3. try/catch exceptions -- these are planned for and *expected* to occur
because the program cannot control its environment. e.g. EOF when none was
expected.

The largest problem with the difference between 2 and 3 is that the decision of whether an exceptional case is categorized as 2 or 3 can be decoupled from the code that makes that decision. For example:

double invert(double x)
{
    assertOrEnforce?(x != 0); // which should it be?
    return 1.0/x;
}

case 1:

void main()
{
    writeln(invert(0)); // clearly a program error
}

case 2:

int main(string[] args)
{
    writeln(invert(to!double(args[1]))); // clearly a catchable error
}

I don't know of a good way to solve that...

-Steve
May 30 2012
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On May 30, 2012, at 8:05 AM, "Steven Schveighoffer" <schveiguy yahoo.com> wrote:

 On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac nospam.com> wrote:

 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.

 If an Error is truly unrecoverable (as they're generally supposed to be),
 then what does it matter? Something fatal occurred in your program, so it
 terminates. Because it's an Error, you can get a stack trace and report
 something before the program actually terminates, but continuing execution
 after an Error is considered to be a truly _bad_ idea, so in general, why
 does it matter whether scope statements, finally blocks, or destructors get
 executed? It's only in rarer cases where you're trying to do something like
 create a unit test framework on top of assert that you would need to catch
 an Error, and that's questionable enough as it is. In normal program
 execution, an error is fatal, so cleanup is irrelevant and even potentially
 dangerous, because your program is already in an invalid state.

 That's true for things like segfaults, but in the case of an AssertError,
 there's no reason to believe that cleanup would cause any damage.

 There's also no reason to assume that orderly cleanup *doesn't* cause any
 damage. In fact, it's not reasonable to assume *anything*.

 Which is the point. If you want to recover from an error, you have to do it
 manually. It should be doable, but the default handling should not need to
 be defined (i.e. implementations should be free to do whatever they want).

 But there is no reasonable *default* for handling an error that the runtime
 can assume.

 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable
 circumstances. Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery is not defined
 by the runtime, so you must do it manually. Any decision the runtime makes
 will be arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and *expected* to occur
 because the program cannot control its environment. e.g. EOF when none was
 expected.

 The largest problem with the difference between 2 and 3 is that the decision
 of whether an exceptional case is categorized as 2 or 3 can be decoupled
 from the code that makes that decision.

 For example:

 double invert(double x)
 {
     assertOrEnforce?(x != 0); // which should it be?
     return 1.0/x;
 }

 case 1:

 void main()
 {
     writeln(invert(0)); // clearly a program error
 }

 case 2:

 int main(string[] args)
 {
     writeln(invert(to!double(args[1]))); // clearly a catchable error
 }

 I don't know of a good way to solve that...

Sounds like a good argument for the assert handler in core.runtime.
May 30 2012
prev sibling next sibling parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Steven Schveighoffer wrote:
 On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac nospam.com> wrote:
 
On 30/05/12 10:40, Jonathan M Davis wrote:
On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.

If an Error is truly unrecoverable (as they're generally supposed to be), then what does it matter? Something fatal occurred in your program, so it terminates. Because it's an Error, you can get a stack trace and report something before the program actually terminates, but continuing execution after an Error is considered to be a truly _bad_ idea, so in general, why does it matter whether scope statements, finally blocks, or destructors get executed? It's only in rarer cases where you're trying to do something like create a unit test framework on top of assert that you would need to catch an Error, and that's questionable enough as it is. In normal program execution, an error is fatal, so cleanup is irrelevant and even potentially dangerous, because your program is already in an invalid state.

That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.

There's also no reason to assume that orderly cleanup *doesn't* cause any damage. In fact, it's not reasonable to assume *anything*. Which is the point. If you want to recover from an error, you have to do it manually. It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want). But there is no reasonable *default* for handling an error that the runtime can assume.

I'd classify errors/exceptions into three categories:

1. corruption/segfault -- not recoverable under any reasonable circumstances.
Special cases exist (such as a custom paging mechanism).
2. program invariant errors (i.e. assert errors) -- Recovery is not defined
by the runtime, so you must do it manually. Any decision the runtime makes
will be arbitrary, and could be wrong.
3. try/catch exceptions -- these are planned for and *expected* to occur
because the program cannot control its environment. e.g. EOF when none was
expected.

The largest problem with the difference between 2 and 3 is that the decision of whether an exceptional case is categorized as 2 or 3 can be decoupled from the code that makes that decision. For example:

double invert(double x)
{
    assertOrEnforce?(x != 0); // which should it be?
    return 1.0/x;
}
It's a logic error. Thus,

double invert(double x)
in
{
    assert(x != 0);
}
body
{
    return 1.0/x;
}
 case 1:
 
 void main()
 {
     writeln(invert(0)); // clearly a program error
 }
Obviously a logic error.
 case 2:
 
 int main(string[] args)
 {
    writeln(invert(to!double(args[1]))); // clearly a catchable error
 }
This should be

int main(string[] args)
{
    auto arg = to!double(args[1]);
    enforce(arg != 0);
    writeln(invert(arg));
}

The enforce is needed because args[1] is user input. If the programmer controlled the value of arg and believed arg != 0 always holds, then no enforce would be needed.

Doesn't this make sense?

Jens

PS
For the record, I think (like most) that Errors should, like Exceptions, work with scope, etc. The only argument against is the theoretical possibility of causing more damage while cleaning up. I say theoretical because there was no practical example given. It seems that it may cause more damage, but it does not need to. Of course, if damage happens it's the programmer's fault, but it's also the programmer's fault if he does not try to do a graceful shutdown, i.e. closing sockets, sending a crash report, or similar.
May 30 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 30 May 2012 11:47:34 -0400, Jens Mueller <jens.k.mueller gmx.de>  
wrote:

 Steven Schveighoffer wrote:
 case 2:

 int main(string[] args)
 {
    writeln(invert(to!double(args[1]))); // clearly a catchable error
 }
 This should be

 int main(string[] args)
 {
     auto arg = to!double(args[1]);
     enforce(arg != 0);
     writeln(invert(arg));
 }

 The enforce is needed because args[1] is user input. If the programmer
 controlled the value of arg and believed arg != 0 always holds, then no
 enforce would be needed.

 Doesn't this make sense?
Yes and no. Yes, the ultimate result of what you wrote is the desired functionality. But no, I don't think you have properly solved the problem.

Consider that user data, or environment data, can come from anywhere, and at any time. Consider also that you have decoupled the function parameter validation from the function itself! Ideally, invert should be the one deciding whether the original data is valid or not. In order to write correct code, I must "know" what the contents of invert are as the writer of main.

I'd rather do something like:

int main(string[] args)
{
    auto argToInvert = to!double(args[1]);
    validateInvertArgs(argToInvert); // uses enforce
    invert(argToInvert);
}

Note that even *this* isn't ideal, because now the author of invert has to write and maintain a separate function for validating its arguments, even though invert is *already* validating its arguments.

It's almost as if I want to re-use the same code inside invert that validates its arguments, but use a different mechanism to flag an error, depending on the source of the arguments. It can get even more tricky, if say a function has two parameters, and one is hard-coded and the other comes from user input.
 PS
 For the record, I think (like most) that Errors should like Exceptions
 work with scope, etc. The only arguments against is the theoretical
 possibility of causing more damage while cleaning up. I say theoretical
 because there was no practical example given. It seems that it may cause
 more damage but it does not need to. Of course, if damage happens it's
 the programmers fault but it's also the programmer's fault if he does
 not try to do a graceful shutdown, i.e. closing sockets, sending a crash
 report, or similar.
Indeed, it's all up to the programmer to handle the situation properly. If an assert occurs, the program may already be in an invalid state, and *trying to save* files or close/flush databases may corrupt the data. My point is, it's impossible for the runtime to know whether your code is properly handling the error or not, and that running all the finally/scope blocks will not be worse than not doing it.

-Steve
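[One possible shape for that reuse, sketched in D with a compile-time flag; validateInvertArgs and the flag are hypothetical names, not anyone's actual proposal. The same check either enforces (catchable) or asserts (a bug), depending on where the value came from.]

import std.exception : enforce;

void validateInvertArgs(bool fromUserInput)(double x)
{
    static if (fromUserInput)
        enforce(x != 0, "argument must be non-zero"); // catchable Exception
    else
        assert(x != 0); // a zero here is a program bug
}

double invert(double x)
{
    validateInvertArgs!false(x); // internal callers: zero is a logic error
    return 1.0 / x;
}

A caller holding user input would run validateInvertArgs!true(value) before calling invert, so one piece of validation code serves both categories.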
May 30 2012
parent Jens Mueller <jens.k.mueller gmx.de> writes:
Steven Schveighoffer wrote:
 On Wed, 30 May 2012 11:47:34 -0400, Jens Mueller
 <jens.k.mueller gmx.de> wrote:
 
Steven Schveighoffer wrote:
case 2:

int main(string[] args)
{
   writeln(invert(to!double(args[1]))); // clearly a catchable error
}
This should be

int main(string[] args)
{
    auto arg = to!double(args[1]);
    enforce(arg != 0);
    writeln(invert(arg));
}

The enforce is needed because args[1] is user input. If the programmer controlled the value of arg and believed arg != 0 always holds, then no enforce would be needed.

Doesn't this make sense?

Yes and no. Yes, the ultimate result of what you wrote is the desired functionality. But no, I don't think you have properly solved the problem.

Consider that user data, or environment data, can come from anywhere, and at any time. Consider also that you have decoupled the function parameter validation from the function itself! Ideally, invert should be the one deciding whether the original data is valid or not. In order to write correct code, I must "know" what the contents of invert are as the writer of main.

I'd rather do something like:

int main(string[] args)
{
    auto argToInvert = to!double(args[1]);
    validateInvertArgs(argToInvert); // uses enforce
    invert(argToInvert);
}

Note that even *this* isn't ideal, because now the author of invert has to write and maintain a separate function for validating its arguments, even though invert is *already* validating its arguments. It's almost as if I want to re-use the same code inside invert that validates its arguments, but use a different mechanism to flag an error, depending on the source of the arguments. It can get even more tricky, if say a function has two parameters, and one is hard-coded and the other comes from user input.
Why should invert validate its arguments? invert just states: if the input has this and that property, then I will return the inverse of the argument. And it makes sure that its assumptions actually hold. And these assumptions are so fundamental that failing to verify them is an error.

Why should it do more than that? Actually, it can't do more than that, because it does not know what to do. Assuming the user passed 0, different recovery approaches are possible.
PS
For the record, I think (like most) that Errors should, like Exceptions, work
with scope, etc. The only argument against is the theoretical possibility of
causing more damage while cleaning up. I say theoretical because there was no
practical example given. It seems that it may cause more damage, but it does
not need to. Of course, if damage happens it's the programmer's fault, but
it's also the programmer's fault if he does not try to do a graceful shutdown,
i.e. closing sockets, sending a crash report, or similar.
Indeed, it's all up to the programmer to handle the situation properly. If an assert occurs, the program may already be in an invalid state, and *trying to save* files or close/flush databases may corrupt the data. My point is, it's impossible for the runtime to know whether your code is properly handling the error or not, and that running all the finally/scope blocks will not be worse than not doing it.
I thought this was the only argument for not executing finally/scope blocks: because running these in case of an Error may actually be worse than not running them. Not that we have an example of such code, but that's the theoretical issue brought up against executing finally/scope in case of an Error.

Jens
May 30 2012
prev sibling next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 30.05.2012 19:05, Steven Schveighoffer wrote:
 On Wed, 30 May 2012 05:32:00 -0400, Don Clugston <dac nospam.com> wrote:

 On 30/05/12 10:40, Jonathan M Davis wrote:
 On Wednesday, May 30, 2012 10:26:36 deadalnix wrote:
 The fact that Errors don't trigger scope and everything is nonsensical.
If an Error is truly unrecoverable (as they're generally supposed to be), then what does it matter? Something fatal occurred in your program, so it terminates. Because it's an Error, you can get a stack trace and report something before the program actually terminates, but continuing execution after an Error is considered to be a truly _bad_ idea, so in general, why does it matter whether scope statements, finally blocks, or destructors get executed? It's only in rarer cases where you're trying to do something like create a unit test framework on top of assert that you would need to catch an Error, and that's questionable enough as it is. In normal program execution, an error is fatal, so cleanup is irrelevant and even potentially dangerous, because your program is already in an invalid state.
That's true for things like segfaults, but in the case of an AssertError, there's no reason to believe that cleanup would cause any damage.
There's also no reason to assume that orderly cleanup *doesn't* cause any damage. In fact, it's not reasonable to assume *anything*. Which is the point. If you want to recover from an error, you have to do it manually. It should be doable, but the default handling should not need to be defined (i.e. implementations should be free to do whatever they want). But there is no reasonable *default* for handling an error that the runtime can assume.
I'd say that calling scope, destructors etc. on an Error being thrown is the most _useful_ thing in all cases.

If you're really, really afraid of memory corruption killing sensitive data, taking control of the OS and so on - you just catch Errors early on inside such sensitive functions. And call C's abort(). And that's it.

Let's make the common and hard case the default and automatic, please.

-- Dmitry Olshansky
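[A sketch of that opt-in hard stop in D: catch the Error at the boundary of the sensitive region and call C's abort() yourself. The function name is made up.]

import core.stdc.stdlib : abort;

void adjustCoolantLevel() // hypothetical sensitive operation
{
    try
    {
        // ... the actual work, with its asserts and invariants ...
    }
    catch (Error e)
    {
        // no cleanup is trustworthy here; stop on the spot
        abort();
    }
}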
May 30 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/30/2012 8:05 AM, Steven Schveighoffer wrote:
 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable circumstances.
 Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery is not defined by
 the runtime, so you must do it manually. Any decision the runtime makes will be
 arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and *expected* to occur
 because the program cannot control its environment. e.g. EOF when none was
 expected.
A recoverable exception is NOT a logic bug in your program, which is why it is recoverable. If there is recovery possible from a particular assert error, then you are using asserts incorrectly. Assert errors occur because your program has entered an unanticipated, invalid state. There's no such thing as knowing how to put it back into a valid state, because you don't know where it went wrong and how much is corrupted, etc.
May 30 2012
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 31/05/2012 04:17, Walter Bright a écrit :
 On 5/30/2012 8:05 AM, Steven Schveighoffer wrote:
 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable
 circumstances.
 Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery is not
 defined by
 the runtime, so you must do it manually. Any decision the runtime
 makes will be
 arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and *expected* to
 occur because
 the program cannot control its environment. e.g. EOF when none was
 expected.
A recoverable exception is NOT a logic bug in your program, which is why it is recoverable. If there is recovery possible from a particular assert error, then you are using asserts incorrectly. Assert errors occur because your program has entered an unanticipated, invalid state. There's no such thing as knowing how to put it back into a valid state, because you don't know where it went wrong and how much is corrupted, etc.
A failure in the database component shouldn't prevent me from closing a network connection properly in an unrelated part of the program. This is called failing gracefully. And this is highly recommended, and you KNOW that the system will fail at some point.
May 31 2012
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 31.05.2012 13:06, deadalnix wrote:
 Le 31/05/2012 04:17, Walter Bright a écrit :
 On 5/30/2012 8:05 AM, Steven Schveighoffer wrote:
 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable
 circumstances.
 Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery is not
 defined by
 the runtime, so you must do it manually. Any decision the runtime
 makes will be
 arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and *expected* to
 occur because
 the program cannot control its environment. e.g. EOF when none was
 expected.
A recoverable exception is NOT a logic bug in your program, which is why it is recoverable. If there is recovery possible from a particular assert error, then you are using asserts incorrectly. Assert errors occur because your program has entered an unanticipated, invalid state. There's no such thing as knowing how to put it back into a valid state, because you don't know where it went wrong and how much is corrupted, etc.
A failure in the database component shouldn't prevent me from closing a network connection properly in an unrelated part of the program. This is called failing gracefully. And this is highly recommended, and you KNOW that the system will fail at some point.
Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.

Because crashing is easy (heck, catch(Error) abort(42);), but not calling scope/destructors lays an awful burden on the programmer to do graceful clean-up. Then it's granted that nobody will ever bother doing it.

The fans of "crash immediately" should just call a special runtime hook: crashOnError(true);

Or make it the default, but do _allow_ proper stack unwinding (= usual cleanup) after a call to crashOnError(false).

-- Dmitry Olshansky
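[crashOnError is proposed here, not an existing druntime hook. The "crashing is easy" half can be sketched in D today with a top-level catch, assuming a debug build where assert throws:]

import core.stdc.stdlib : exit;
import std.stdio : writeln;

void main()
{
    try
    {
        // ... application ...
        assert(0, "invariant broken");
    }
    catch (Error e)
    {
        writeln("fatal: ", e.msg); // log, then die without further ado
        exit(42);
    }
}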
May 31 2012
next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Thursday, 31 May 2012 at 10:22:04 UTC, Dmitry Olshansky wrote:
 Or make it the default, but do _allow_ proper stack
 unwinding (= usual cleanup) after a call to crashOnError(false).
This would render nothrow useless, at least if the current semantics of it (throwing Errors is acceptable) are kept. David
May 31 2012
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 31.05.2012 20:30, David Nadlinger wrote:
 On Thursday, 31 May 2012 at 10:22:04 UTC, Dmitry Olshansky wrote:
 Or make it the default, but do _allow_ proper stack unwinding (=
 usual cleanup) after a call to crashOnError(false).
This would render nothrow useless, at least if the current semantics of it (throwing Errors is acceptable) are kept. David
What are the benefits of nothrow, aside from probably saving on codegen and documenting intent?

-- Dmitry Olshansky
May 31 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
 On 31.05.2012 13:06, deadalnix wrote:
 This is called failing gracefully. And this is highly recommended, and you
 KNOW that the system will fail at some point.
Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state. This is a completely wrong assumption. It might be ok if the program is not critical and has no control over important things like delivering insulin, executing million dollar trades, or adjusting the coolant levels in a nuclear reactor. If the code controls anything that matters, then it is not the best thing to do, not at all. The right thing to do is to take the shortest path to stopping the program. A critical system would be monitoring those programs, and will restart them if they so fail, or will engage the backup system. [When I worked on flight critical airplane systems, the only acceptable response for a self-detected fault was to IMMEDIATELY stop the system, physically DISENGAGE it from the flight controls, and inform the pilot.]
May 31 2012
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 5/31/12 6:16 PM, Walter Bright wrote:
 On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
 On 31.05.2012 13:06, deadalnix wrote:
 This is called failing gracefully. And this is highly recommended, and you
 KNOW that the system will fail at some point.
 Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state. This is a completely wrong assumption. It might be ok if the program is not critical and has no control over important things like delivering insulin, executing million dollar trades, or adjusting the coolant levels in a nuclear reactor. If the code controls anything that matters, then it is not the best thing to do, not at all. The right thing to do is to take the shortest path to stopping the program. A critical system would be monitoring those programs, and will restart them if they so fail, or will engage the backup system. [When I worked on flight critical airplane systems, the only acceptable response for a self-detected fault was to IMMEDIATELY stop the system, physically DISENGAGE it from the flight controls, and inform the pilot.]
I wonder how we could work into this enthusiasm fixing bugs created by comparing enumerated values of distinct types... It did happen in a program of mine to confuse a UserID with a CityID. If the corrupt CityID would be subsequently entered into the navigation system of an airplane, it would lead the airplane on the wrong path. Andrei
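[The usual remedy for that class of bug is a distinct wrapper type per ID, so the confusion becomes a compile error; a minimal sketch in D, with made-up types:]

struct UserID { int value; }
struct CityID { int value; }

void navigateTo(CityID city) { /* ... */ }

void main()
{
    auto user = UserID(7);
    auto city = CityID(7);
    navigateTo(city);         // fine
    // navigateTo(user);      // compile error: a UserID is not a CityID
    // auto b = user == city; // compile error: incomparable types
}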
May 31 2012
prev sibling next sibling parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Walter Bright wrote:
 On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
On 31.05.2012 13:06, deadalnix wrote:
This is called failing gracefully. And this is highly recommended, and you
KNOW that the system will fail at some point.
Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state. This is a completely wrong assumption. It might be ok if the program is not critical and has no control over important things like delivering insulin, executing million dollar trades, or adjusting the coolant levels in a nuclear reactor. If the code controls anything that matters, then it is not the best thing to do, not at all. The right thing to do is to take the shortest path to stopping the program. A critical system would be monitoring those programs, and will restart them if they so fail, or will engage the backup system. [When I worked on flight critical airplane systems, the only acceptable response for a self-detected fault was to IMMEDIATELY stop the system, physically DISENGAGE it from the flight controls, and inform the pilot.]
This is perfectly valid when developing such critical systems. But limiting D to effectively only allow developing such particular systems cannot be the appropriate response. There are plenty of other systems that do not operate in such a constrained environment. Jens
Jun 01 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 12:45 AM, Jens Mueller wrote:
 This is perfectly valid when developing such critical systems. But
 limiting D to effectively only allow developing such particular systems
 cannot be the appropriate response. There are plenty of other systems
 that do not operate in such a constrained environment.
You can catch thrown asserts if you want; after all, D is a systems programming language. But that isn't a valid way to write robust software.
Jun 01 2012
parent deadalnix <deadalnix gmail.com> writes:
Le 01/06/2012 12:15, Walter Bright a écrit :
 On 6/1/2012 12:45 AM, Jens Mueller wrote:
 This is perfectly valid when developing such critical systems. But
 limiting D to effectively only allow developing such particular systems
 cannot be the appropriate response. There are plenty of other systems
 that do not operate in such a constrained environment.
You can catch thrown asserts if you want, after all, D is a systems programming language. But that isn't a valid way to write robust software.
No, you can't, because this is planned to be half broken with Errors, on the fallacy that the program can be in a corrupted state. If your program is in a corrupted state, throwing an Error is already a stupid idea. Good luck unwinding a corrupted stack.

What is needed here is a flag to HALT on error, that is used for critical systems.
Jun 01 2012
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 01.06.2012 5:16, Walter Bright wrote:
 On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
 On 31.05.2012 13:06, deadalnix wrote:
 This is called failing gracefully. And this is highly recommended, and you
 KNOW that the system will fail at some point.
 Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state. This is a completely wrong assumption.
To be frank, a "completely wrong assumption" is a flat-out exaggeration. The only problem that can make it "completely wrong" is memory corruption. Others just depend on the specifics of the system, e.g. wrong arithmetic in medical software == critical, while an arithmetic bug in a "refracted light color component" in, say, a 3-D game is no problem: just log it and recover. Or better - save the game and then crash gracefully. Keep in mind both of the above are likely to be assert(smth), even though the last arguably shouldn't be. But it is a logic invariant check.

@safe D code should be enough to avoid memory corruption. So in @safe D code an AssertError is not memory corruption. Being able to do some logging and graceful teardown in this case would be awesome. I mean an OPTION to do so.

Wrong values don't always corrupt "the whole program" state. That's too conservative a point of view. It is a reasonable DEFAULT, not a rule. (Just look at all these PHP websites; I'd love them to crash on critical errors, yet they still crawl after cascade failures with their DBs, LOL.)

BTW OutOfMemory is not an Error. To me it's like "can't open file". Yeah, it could be critical if your app depends on this particular file, but not in general.

To summarize:

I agree there are irrecoverable errors:
--> call abort immediately.

I agree there are some where I don't know if they are critical:
--> call a user hook to do some logging/attempt to save data, then abort, or
---> provide stack unwinding so that the thing cleans up itself (more dangerous)

I don't agree that OutOfMemory is critical:
--> make it an exception ?

-- Dmitry Olshansky
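[Catching core.exception.OutOfMemoryError is in fact possible in D today, which is roughly what the file-loading example at the top of the thread wants; a hedged sketch - whether doing so is *sound* is exactly what is being debated here:]

import core.exception : OutOfMemoryError;
import std.stdio : writeln;

ubyte[] tryLoad(size_t size)
{
    try
    {
        return new ubyte[](size); // throws OutOfMemoryError on failure
    }
    catch (OutOfMemoryError e)
    {
        writeln("the file can't be opened because it requires ",
                size, " bytes of memory");
        return null;
    }
}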
Jun 01 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:
 On 01.06.2012 5:16, Walter Bright wrote:
 On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
 On 31.05.2012 13:06, deadalnix wrote:
 This is called failing gracefully. And this is highly recommended, and you
 KNOW that the system will fail at some point.
 Exactly. + The point I tried to argue but it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state.
> This is a completely wrong assumption.

To be frank, a "completely wrong assumption" is a flat-out exaggeration. The only problem that can make it "completely wrong" is memory corruption. Others just depend on the specifics of the system, e.g. wrong arithmetic in medical software == critical, while an arithmetic bug in a "refracted light color component" in, say, a 3-D game is no problem: just log it and recover. Or better - save the game and then crash gracefully.
Except that you do not know why the arithmetic turned out wrong - it could be the result of memory corruption.
 @safe D code should be enough to avoid memory corruption. So in @safe D code
 an AssertError is not memory corruption. Being able to do some logging and
 graceful teardown in this case would be awesome. I mean an OPTION to do so.
You do have the option of catching assert errors in D, but such cannot be represented as a correct or robust way of doing things.
 Wrong values don't always corrupt "the whole program" state.
Right, but since you cannot know how those values got corrupt, you cannot know that the rest of the program is in a valid state. In fact, you reliably know nothing about the state of a program after an assert fail.
 That's too conservative a point of view. It is a reasonable DEFAULT, not a rule.
It's a rule. Break it at your peril :-) I am not going to pretend that it is a reasonable thing to do to try and keep running the program.
 (just look at all these PHP websites, I'd love them to crash on critical errors
 yet they still crawl after cascade failures with their DBs, LOL)
Other people writing crappy, unreliable software is no excuse for us.
 BTW OutOfMemory is not an Error. To me it's like "can't open file". Yeah, it
 could be critical if your app depends on this particular file, but not in
 general.
OOM is a special case. I agree that that isn't a corruption error. But I've almost never seen a program that could recover from OOM, even if it was designed to. (For one reason, the recovery logic for such is almost never tested, and so when it is tripped, it fails.)
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
The reason it is made non-recoverable is so that pure functions can do something useful.
Jun 01 2012
next sibling parent reply deadalnix <deadalnix gmail.com> writes:
Le 01/06/2012 12:26, Walter Bright a écrit :
 Except that you do not know why the arithmetic turned out wrong - it
 could be the result of memory corruption.
Yes, wrong calculations often come from memory corruption. Almost never from the programmer having screwed up in the said calculation. It is so perfectly reasonable and completely matches my experience. I'm sure everybody here will agree.

Not to mention that said memory corruption obviously comes from a compiler bug. As always. What programmer makes mistakes in his code? We write programs, not bugs!
Jun 01 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, June 01, 2012 14:00:01 deadalnix wrote:
 Le 01/06/2012 12:26, Walter Bright a écrit :
 Except that you do not know why the arithmetic turned out wrong - it
 could be the result of memory corruption.
Yes, wrong calculations often come from memory corruption. Almost never from the programmer having screwed up in the said calculation. It is so perfectly reasonable and completely matches my experience. I'm sure everybody here will agree. Not to mention that said memory corruption obviously comes from a compiler bug. As always. What programmer makes mistakes in his code? We write programs, not bugs!
I'd have to agree that the odds of an arithmetic error being caused by memory corruption are generally quite low, but the problem is that when an assertion fails, there's _no_ way for the program to know how bad things really are or what the cause is. The programmer would have to examine the entire program state (which probably still isn't enough in many cases, since you don't have the whole history of the program state - only its current state) and the code that generated the assertion in order to figure out what really happened. When an assertion fails, the program has to assume the worst case scenario, because it doesn't have the information required to figure out how bad the situation really is. When you use an assertion, you're saying that if it fails, there is a bug in your program, and it must be terminated. If you want to recover from whatever the assertion is testing, then _don't use an assertion_. - Jonathan M Davis
Jun 01 2012
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 01.06.2012 23:38, Jonathan M Davis wrote:
 On Friday, June 01, 2012 14:00:01 deadalnix wrote:
 Le 01/06/2012 12:26, Walter Bright a écrit :
 Except that you do not know why the arithmetic turned out wrong - it
 could be the result of memory corruption.
Yes, wrong calculations often come from memory corruption. Almost never from the programmer having screwed up in the said calculation. It is so perfectly reasonable and completely matches my experience. I'm sure everybody here will agree. Not to mention that said memory corruption obviously comes from a compiler bug. As always. What programmer makes mistakes in his code? We write programs, not bugs!
I'd have to agree that the odds of an arithmetic error being caused by memory corruption are generally quite low, but the problem is that when an assertion fails, there's _no_ way for the program to know how bad things really are or what the cause is.
Indeed it's quite bad to assume both extremes - either "oh, my god, everything is corrupted" or "blah, whatever, keep going". I thought D was trying to keep reasonable compromises where possible.

By the way, memory corruption is checkable. And even recoverable; one just needs certain precautions, like adding checksums, or better yet ECC codes, to _every_ important data structure (including, of course, stack security hashes). Seems very useful for compiler-generated code with the '-debug' switch. It even can ask the GC to recheck ECC on every GC data structure. Do that memory check on each throw of Error? Dunno. Trust me to do the thing manually? I dunno. Provide some options, damn it.

For all I care the program is flawless; it's cosmic rays that are funky in this area.

Certain compilers, by the way, already do something like that on each stack entry/leave in debug mode (stack hash sums).

P.S. Trying to pour on more and more "generally impossible", "can't do this", "can't do that" and ya-da-ya-da doesn't help solve problems.
Jun 01 2012
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 02.06.2012 0:06, Dmitry Olshansky wrote:
 On 01.06.2012 23:38, Jonathan M Davis wrote:
 On Friday, June 01, 2012 14:00:01 deadalnix wrote:
 Le 01/06/2012 12:26, Walter Bright a écrit :
 Except that you do not know why the arithmetic turned out wrong - it
 could be the result of memory corruption.
Yes, wrong calculations often come from memory corruption. Almost never from the programmer having screwed up in the said calculation. It is so perfectly reasonable and completely matches my experience. I'm sure everybody here will agree. Not to mention that said memory corruption obviously comes from a compiler bug. As always. What programmer makes mistakes in his code? We write programs, not bugs!
I'd have to agree that the odds of an arithmetic error being caused by memory corruption are generally quite low, but the problem is that when an assertion fails, there's _no_ way for the program to know how bad things really are or what the cause is.
Indeed it's quite bad to assume both extremes - either "oh, my god, everything is corrupted" or "blah, whatever, keep going". I thought D was trying to keep reasonable compromises where possible. By the way, memory corruption is checkable. And even recoverable; one just needs certain precautions, like adding checksums, or better yet ECC codes, to _every_ important data structure (including, of course, stack security hashes). Seems very useful for compiler-generated code with the '-debug' switch. It even can ask the GC to recheck ECC on every GC data structure. Do that memory check on each throw of Error? Dunno. Trust me to do the thing manually? I dunno. Provide some options, damn it. For all I care the program is flawless; it's cosmic rays that are funky in this area. Certain compilers, by the way, already do something like that on each stack entry/leave in debug mode (stack hash sums). P.S. Trying to pour on more and more "generally impossible", "can't do this", "can't do that" and ya-da-ya-da doesn't help solve problems.
Ah, forgot the most important thing: I'm not convinced that throwing an Error that _loosely_ _unwinds_ the stack is better than a straight abort on the spot or _proper_ stack unwinding.

nothrow is not an argument by itself; I've yet to see an argument for what it gives us that is so important. (C++ didn't have proper nothrow for ages; it worked somehow.)

BTW the stack can be corrupted (in fact it quite often is, and that's the most dangerous thing). Even C's runtime abort can be corrupted. Mission-critical software should just use a straight HALT instruction*. So the point is: don't get too paranoid by default, please.

* Yet it will leave the things this program operates on in an undefined state, like a half-open airlock.

-- Dmitry Olshansky
Jun 01 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 1:14 PM, Dmitry Olshansky wrote:
 nothrow is not an argument by itself. I've yet to see an argument for
 what it gives us that is so important.
What nothrow gives is mechanically checkable documentation on the possible results of a function.
 (C++ didn't have proper nothrow for ages; it worked somehow)
C++ is infamous for not being able to look at the signature of a function and glean much useful information on what its inputs, outputs, and side effects are. This makes it highly resistant to analysis.
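[Concretely, what nothrow lets the compiler check mechanically - and what it still allows - sketched in D: Exceptions are rejected at compile time, while Errors such as a failed assert may still propagate.]

int parsePositive(string s) nothrow
{
    // import std.conv : to;
    // return to!int(s); // would not compile: to!int may throw an Exception

    assert(s.length > 0); // fine: AssertError is an Error, not an Exception
    int n = 0;
    foreach (c; s)
        n = n * 10 + (c - '0');
    return n;
}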
Jun 01 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 1:06 PM, Dmitry Olshansky wrote:
 Indeed it's quite bad to assume both extremes - either "oh, my god, everything
 is corrupted" or "blah, whatever, keep going". I thought D was trying to
 keep reasonable compromises where possible.
D has a lot of bias towards being able to mechanically guarantee as much as possible, with, of course, allowing the programmer to circumvent these if he so desires. For example, you can XOR pointers in D. But if I were your program manager, you'd need an extraordinary justification to allow such a practice.

My strongly held opinion on how to write reliable software is based on decades of experience by others in the aviation business on how to do it. And the proof that this works is obvious. It's also obvious to me that the designers of the Deepwater Horizon rig and the Fukushima plant did not follow these principles, and a heavy price was paid.

D isn't going to make anyone follow these principles - but it is going to make it more difficult to violate them. I believe D should be promoting, baked into the design of the language, proven successful best practices.

In programming courses and curriculum I've seen, very little attention is paid to this, and programmers are left to discover it the hard way.
 By the way, memory corruption is checkable. And even recoverable; one just
 needs certain precautions, like adding checksums, or better yet ECC codes, to
 _every_ important data structure (including, of course, stack security
 hashes). Seems very useful for compiler-generated code with the '-debug'
 switch. It even can ask the GC to recheck ECC on every GC data structure. Do
 that memory check on each throw of Error? Dunno. Trust me to do the thing
 manually? I dunno. Provide some options, damn it.

 For all I care the program is flawless; it's cosmic rays that are funky in
 this area.

 Certain compilers, by the way, already do something like that on each stack
 entry/leave in debug mode (stack hash sums).

 P.S. Trying to pour on more and more "generally impossible", "can't do this",
 "can't do that" and ya-da-ya-da doesn't help solve problems.
It doesn't even have to be memory corruption that puts your program in an invalid state where it cannot reliably continue. The assert would have detected a logic bug, and the invalid state of the program is not at all necessarily memory corruption. Invalid does not imply corruption, though corruption does imply invalid.
Jun 01 2012
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 02.06.2012 0:52, Walter Bright wrote:
 On 6/1/2012 1:06 PM, Dmitry Olshansky wrote:
 Indeed it's quite bad to assume both extremes - either "oh, my god,
 everything is corrupted" or "blah, whatever, keep going". I thought D was
 trying to keep reasonable compromises where possible.
D has a lot of bias towards being able to mechanically guarantee as much as possible, with, of course, allowing the programmer to circumvent these if he so desires. For example, you can XOR pointers in D.
Thanks. Could come in handy one day :)
 But if I were your program manager, you'd need an extraordinary
 justification to allow such a practice.
http://en.wikipedia.org/wiki/XOR_linked_list
 My strongly held opinion on how to write reliable software is based on
 decades of experience by others in the aviation business on how to do
 it. And the proof that this works is obvious.
No argument here. Planes are in fact surprisingly safe transport.
 It's also obvious to me that the designers of the Deep Water Horizon rig
 and the Fukishima plant did not follow these principles, and a heavy
 price was paid.

 D isn't going to make anyone follow these principles - but it is going
 to make it more difficult to violate them. I believe D should be
 promoting, baked into the design of the language, proven successful best
 practices.
I just hope there is some provision to customize a bit what exactly to do on Error. There might be a few things to try before dying. Like closing the airlock.
 In programming courses and curriculum I've seen, very little attention
 is paid to this, and programmers are left to discover it the hard way.
That's something I'll agree on. I've had a little exposure to in-house firmware design. It's not just people or skills but the development process that is just wrong. These days I won't trust a toaster with a "smart" MCU in it. Better the old analog stuff.
 Certain compilers, by the way, already do something like that on each stack
 entry/exit in debug mode (stack hash sums).

 P.S. Trying to pour more and more of "generally impossible", "can't do
 this",
 "can't do that" and ya-da-ya-da doesn't help solve problems.
It doesn't even have to be memory corruption that puts your program in an invalid state where it cannot reliably continue.
My point was that while memory corruption is hard to check for, it is still quite checkable. Logical invariants are in fact easier to check. There are various other techniques to make sure the global state is intact, which parts of it can be saved, and so on. Trying to cut all of it down with a single cure is not good. Again, I'm speaking of options here; pretty much like XORing a pointer, they are surely not an everyday thing.

 The assert would
 have detected a logic bug, and the invalid state of the program is not
 at all necessarily memory corruption. Invalid does not imply corruption,
 though corruption does imply invalid.
Correct. -- Dmitry Olshansky
Jun 01 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 01/06/12 12:26, Walter Bright wrote:
 On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:
 On 01.06.2012 5:16, Walter Bright wrote:
 On 5/31/2012 3:22 AM, Dmitry Olshansky wrote:
 On 31.05.2012 13:06, deadalnix wrote:
 This is called failing gracefully. And this highly recommended, and
 you
 KNOW that the system will fail at some point.
Exactly. + The point I tried to argue, though it was apparently lost: doing stack unwinding and cleanup on most Errors (some Errors, like stack overflow, might not be recoverable) is the best thing to do.
This is all based on the assumption that the program is still in a valid state after an assert fail, and so any code executed after that and the data it relies on is in a workable state. This is a completely wrong assumption.
To be frank, a "completely wrong assumption" is a flat-out exaggeration. The only problem that can make it "completely wrong" is memory corruption. Others just depend on the specifics of the system: e.g. wrong arithmetic in medical software is critical, while an arithmetic bug in the "refracted light color component" of, say, a 3-D game is no problem - just log it and recover. Or better - save the game and then crash gracefully.
Except that you do not know why the arithmetic turned out wrong - it could be the result of memory corruption.
This argument seems to be: 1. There exist cases where you cannot know why the assert failed. 2. Therefore you never know why an assert failed. 3. Therefore it is not safe to unwind the stack from a nothrow function. Spot the fallacies. The fallacy in moving from 2 to 3 is more serious than the one from 1 to 2: this argument is not in any way dependent on the assert occuring in a nothrow function. Rather, it's an argument for not having AssertError at all.
Jun 04 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston <dac nospam.com> wrote:

 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1 to  
 2: this argument is not in any way dependent on the assert occuring in a  
 nothrow function. Rather, it's an argument for not having AssertError at  
 all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
Jun 04 2012
next sibling parent reply Don Clugston <dac nospam.com> writes:
On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston <dac nospam.com> wrote:

 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
Jun 04 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
 On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston <dac nospam.com> wrote:
 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow function.
 
 Spot the fallacies.
 
 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors. As such, no one should be relying on stack unwinding when an Error is thrown. The implementation may manage it in some cases, but it's going to be unreliable in the general case regardless of how desirable it may or may not be.

The question is whether it's better to skip stack unwinding entirely when an Error is thrown. There are definitely cases where that would be better, since running cleanup code could just make things worse, corrupting even more stuff (including files and the like which may persist past the termination of the program). On the other hand, there's a lot of cleanup code which would execute just fine when most Errors are thrown, and not running cleanup code causes its own set of problems. There's no way for the program to know which of the two situations it's in when an Error is thrown. So, we have to pick one or the other.

I really don't know which is the better way to go. I'm very tempted to go with Walter on this one, since it would avoid making the worst case scenario worse, and if you have cleanup which _must_ be done, you're going to have to find a different way to handle it, because even perfect stack unwinding won't protect you from everything (e.g. power loss killing the computer). But arguably, the general case is cleaner if we do as much stack unwinding as we can.

Regardless, I think that there are a number of people in this thread who are mistaken in how recoverable they think Errors and/or segfaults are, and they seem to be the ones pushing the hardest for full stack unwinding on the theory that they could somehow ensure safe recovery and a clean shutdown when an Error occurs, which is almost never possible, and certainly isn't possible in the general case.

- Jonathan M Davis
Jun 05 2012
parent reply Don Clugston <dac nospam.com> writes:
On 05/06/12 09:07, Jonathan M Davis wrote:
 On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
 On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac nospam.com>  wrote:
 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors.
Well that's a motherhood statement. Obviously in the face of extreme memory corruption you can't guarantee *any* code is valid. The *main* reason why stack unwinding would not be possible is if nothrow intentionally omits stack unwinding code.
 As such, no one should be relying on stack unwinding when an
 Error is thrown.
This conclusion DOES NOT FOLLOW. And I am getting so sick of the number of times this fallacy has been repeated in this thread. These kinds of generalizations are completely invalid in a systems programming language.
 Regardless, I think that there are a number of people in this thread who are
 mistaken in how recoverable they think Errors and/or segfaults are, and they
 seem to be the ones pushing the hardest for full stack unwinding on the theory
 that they could somehow ensure safe recovery and a clean shutdown when an
 Error occurs, which is almost never possible, and certainly isn't possible in
 the general case.

 - Jonathan M Davis
Well I'm pushing it because I implemented it (on Windows). I'm less knowledgeable about what happens on other systems, but know that on Windows, the whole system is far, far more robust than most people on this thread seem to think.

I can't see *any* problem with executing catch(Error) clauses. I cannot envisage a situation where that can cause a problem. I really cannot. And catch(Exception) clauses won't be run, because of the exception chaining scheme we have implemented.

The only difficult case is 'finally' clauses, which may be expecting an Exception.
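A small sketch of the chaining scheme Don refers to, assuming current druntime behavior (work() and the messages are invented): an Exception thrown during unwinding is appended to the in-flight Error's chain rather than landing in a catch(Exception) clause.

import std.stdio;

void work()
{
    scope(exit) throw new Exception("collateral cleanup failure");
    throw new Error("primary failure");
}

void main()
{
    try
    {
        work();
    }
    catch (Error e)
    {
        // Walk the chain: the Error stays primary, the Exception hangs off it.
        for (Throwable t = e; t !is null; t = t.next)
            writeln(typeid(t), ": ", t.msg);
    }
}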
Jun 05 2012
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 05.06.2012 15:57, Don Clugston wrote:
 On 05/06/12 09:07, Jonathan M Davis wrote:
 On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
 On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac nospam.com> wrote:
 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow
 function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors.
Well that's a motherhood statement. Obviously in the face of extreme memory corruption you can't guarantee *any* code is valid. The *main* reason why stack unwinding would not be possible is if nothrow intentionally omits stack unwinding code.
 As such, no one should be relying on stack unwinding when an
 Error is thrown.
This conclusion DOES NOT FOLLOW. And I am getting so sick of the number of times this fallacy has been repeated in this thread.
Finally, a voice of reason. My prayers must have touched somebody up above...
 These kinds of generalizations are completely invalid in a systems
 programming language.

 Regardless, I think that there are a number of people in this thread
 who are
 mistaken in how recoverable they think Errors and/or segfaults are,
 and they
 seem to be the ones pushing the hardest for full stack unwinding on
 the theory
 that they could somehow ensure safe recovery and a clean shutdown when an
 Error occurs, which is almost never possible, and certainly isn't
 possible in
 the general case.

 - Jonathan M Davis
Well I'm pushing it because I implemented it (on Windows). I'm less knowledgeable about what happens on other systems, but know that on Windows, the whole system is far, far more robust than most people on this thread seem to think.
Exactly, hence the whole idea about SEH in the OS.
 I can't see *any* problem with executing catch(Error) clauses. I cannot
 envisage a situation where that can cause a problem. I really cannot.

 And catch(Exception) clauses won't be run, because of the exception
 chaining scheme we have implemented.

 The only difficult case is 'finally' clauses, which may be expecting an
 Exception.
-- Dmitry Olshansky
Jun 05 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, June 05, 2012 13:57:14 Don Clugston wrote:
 On 05/06/12 09:07, Jonathan M Davis wrote:
 On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
 On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac nospam.com>  wrote:
 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow
 function.
 
 Spot the fallacies.
 
 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors.
Well that's a motherhood statement. Obviously in the face of extreme memory corruption you can't guarantee *any* code is valid. The *main* reason why stack unwinding would not be possible is if nothrow intentionally omits stack unwinding code.
It's not possible precisely because of nothrow.
 As such, no one should be relying on stack unwinding when an
 Error is thrown.
This conclusion DOES NOT FOLLOW. And I am getting so sick of the number of times this fallacy has been repeated in this thread. These kinds of generalizations are completely invalid in a systems programming language.
If nothrow prevents the stack from being correctly unwound, then no, you shouldn't be relying on stack unwinding when an Error is thrown, because it's _not_ going to work properly.
 Regardless, I think that there are a number of people in this thread who
 are mistaken in how recoverable they think Errors and/or segfaults are,
 and they seem to be the ones pushing the hardest for full stack unwinding
 on the theory that they could somehow ensure safe recovery and a clean
 shutdown when an Error occurs, which is almost never possible, and
 certainly isn't possible in the general case.
 
 - Jonathan M Davis
Well I'm pushing it because I implemented it (on Windows). I'm less knowledgeable about what happens on other systems, but know that on Windows, the whole system is far, far more robust than most people on this thread seem to think. I can't see *any* problem with executing catch(Error) clauses. I cannot envisage a situation where that can cause a problem. I really cannot.
In many cases, it's probably fine, but if the program is in a bad enough state that an Error is thrown, then you can't know for sure that any particular such block will execute properly (memory corruption being the extreme case), and if it doesn't run correctly, then it could make things worse (e.g. writing invalid data to a file, corrupting that file). Also, if the stack is not unwound perfectly (as nothrow prevents), then the program's state will become increasingly invalid the farther that the program gets from the throw point, which will increase the chances of cleanup code functioning incorrectly, as any assumptions that they've made about the program state are increasingly likely to be wrong (as well as it being increasingly likely that the variables they operate on are no longer valid).

A lot of it comes down to worst case vs typical case. In the typical case, the code causing the Error is isolated enough and the code doing the cleanup is self-contained enough that trying to unwind the stack as much as possible will result in more correct behavior than skipping it all. But in the worst case, you can't rely on running any code being safe, because the state of the program is very much invalid, in which case, it's better to kill the program ASAP. Walter seems to subscribe to the approach that it's best to assume the worst case (e.g. that an assertion failure indicates horrible memory corruption), and always have Errors function that way, whereas others subscribe to the approach that things are almost never that bad, so we should just assume that they aren't, since skipping all of that cleanup causes other problems.

And it's not that the error-handling system isn't robust, it's that if the program state is invalid, then you can't actually assume that _any_ of it is valid, no matter how well it's written, in which case, you _cannot_ know whether running the cleanup code is better or worse than skipping it. Odds are that it's just fine, but you have no such guarantee, because there's no way for the program to know how severe or isolated an Error is when it occurs. It just knows that something went horribly wrong.

- Jonathan M Davis
Jun 05 2012
parent Don Clugston <dac nospam.com> writes:
On 05/06/12 17:44, Jonathan M Davis wrote:
 On Tuesday, June 05, 2012 13:57:14 Don Clugston wrote:
 On 05/06/12 09:07, Jonathan M Davis wrote:
 On Tuesday, June 05, 2012 08:53:16 Don Clugston wrote:
 On 04/06/12 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston<dac nospam.com>   wrote:
 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow
 function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
No, this whole issue started because the compiler currently does do unwinding whenever it can. And Walter claimed that's a bug, and it should be explicitly disabled. It is, in my view, an absurd position. AFAIK not a single argument has been presented in favour of it. All arguments have been about "you should never unwind Errors".
It's quite clear that we cannot completely, correctly unwind the stack in the face of Errors.
Well that's a motherhood statement. Obviously in the face of extreme memory corruption you can't guarantee *any* code is valid. The *main* reason why stack unwinding would not be possible is if nothrow intentionally omits stack unwinding code.
It's not possible precisely because of nothrow.
nothrow only means 'does not throw Exceptions'. It doesn't mean 'does not throw Errors'. Therefore, given:

int foo() nothrow { ... }

try {
    foo();
} catch (Error e) {
    ...
}

even though there are no throw statements inside foo(), the compiler is NOT permitted to remove the catch(Error), whereas it could remove catch(Exception).

The problem is 'finally' clauses. Are they called only on Exception, or on Exception and Error?
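A runnable elaboration of the snippet above (foo's body is invented; behavior as Don describes it, compiled without -release):

import std.stdio;

void foo() nothrow
{
    assert(false, "logic bug");   // throws AssertError - an Error, so nothrow permits it
}

void main()
{
    try
    {
        foo();
    }
    catch (Error e)   // must not be elided: nothrow rules out Exceptions only
    {
        writeln("caught: ", e.msg);
    }
}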
 Regardless, I think that there are a number of people in this thread who
 are mistaken in how recoverable they think Errors and/or segfaults are,
 and they seem to be the ones pushing the hardest for full stack unwinding
 on the theory that they could somehow ensure safe recovery and a clean
 shutdown when an Error occurs, which is almost never possible, and
 certainly isn't possible in the general case.

 - Jonathan M Davis
Well I'm pushing it because I implemented it (on Windows). I'm less knowledgeable about what happens on other systems, but know that on Windows, the whole system is far, far more robust than most people on this thread seem to think. I can't see *any* problem with executing catch(Error) clauses. I cannot envisage a situation where that can cause a problem. I really cannot.
In many cases, it's probably fine, but if the program is in a bad enough state that an Error is thrown, then you can't know for sure that any particular such block will execute properly (memory corruption being the extreme case), and if it doesn't run correctly, then it could make things worse (e.g. writing invalid data to a file, corrupting that file). Also, if the stack is not unwound perfectly (as nothrow prevents), then the program's state will become increasingly invalid the farther that the program gets from the throw point, which will increase the chances of cleanup code functioning incorrectly, as any assumptions that they've made about the program state are increasingly likely to be wrong (as well as it being increasingly likely that the variables they operate on are no longer valid).

A lot of it comes down to worst case vs typical case. In the typical case, the code causing the Error is isolated enough and the code doing the cleanup is self-contained enough that trying to unwind the stack as much as possible will result in more correct behavior than skipping it all. But in the worst case, you can't rely on running any code being safe, because the state of the program is very much invalid, in which case, it's better to kill the program ASAP. Walter seems to subscribe to the approach that it's best to assume the worst case (e.g. that an assertion failure indicates horrible memory corruption), and always have Errors function that way, whereas others subscribe to the approach that things are almost never that bad, so we should just assume that they aren't, since skipping all of that cleanup causes other problems.
I believe I now understand the root issue behind this dispute. Consider:

if (x) throw new FileError;

if (x) throw new FileException;

What is the difference between these two, from the point of view of the compiler? Practically nothing. Only the name is different. There is absolutely no difference in the validity of the machine state when executing the first, rather than the second. In both cases it is possible that something has gone horribly wrong; it's also possible that it's a superficial problem. The difference between Error and Exception is a matter of *convention*.

Now, what people have been pointing out is that *even with things like null pointer exceptions* there are still cases where the machine state has remained valid.

Now, we can say that when an Error is thrown, the machine is in an invalid state *by definition*, regardless of whether it really is, or not. If we do this, then Walter's statements about catching AssertErrors become valid, but for a different reason. When you have thrown an Error, you've told the compiler that the machine is in an invalid state. Catching it and continuing is wrong not because the machine is unstable (which might be true, or might not); rather it's wrong because it's logically inconsistent: by throwing an Error you've told the compiler that it is not recoverable, but by catching it, you've also told it that it is recoverable!

If we choose to say that Error means that the machine is in an invalid state, there are still a couple of issues:

(1) How to deal with cases where the compiler generates an Error, but you know that the machine state is still valid, and you want to suppress the Error and continue. I think Steven's point near the start of the thread was excellent: in the cases where recovery is possible, it is almost always extremely close to the point where the Error was generated.

(2) Does it make sense to run finally clauses on Error, if we are saying that the machine state is invalid? I.e., at present they are finally_Throwable clauses; should they instead be finally_Exception clauses? I cannot see any way in which it makes sense to run them if they're in a throwable function, but not if they are in a nothrow function. If we take the view that Error by definition implies an invalid machine state, then I don't think they should run at all. But noting that in a significant fraction of cases, the machine state isn't actually invalid, I think it can be reasonable to provide a mechanism to make them be run.

BTW it's worth noting that cleanup is not necessarily performed even in the Exception case. If an exception is thrown while processing a finally clause (e.g., inside a destructor) then the destructor didn't completely run. C++ just aborts the program if this happens. We've got exception chaining so that case is well defined, and can be detected; nonetheless it's got a lot of similarities to the Error case.
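A hedged sketch of issue (2), with Error and Exception standing in for the FileError/FileException pair above (inner() and the messages are invented); whether the finally clause runs in the Error case is exactly the open question:

import std.stdio;

void inner(bool fatal)
{
    try
    {
        if (fatal) throw new Error("unrecoverable by convention");
        else       throw new Exception("recoverable by convention");
    }
    finally
    {
        // Issue (2): should this cleanup run while an Error, rather than
        // an Exception, is unwinding through here?
        writeln("finally clause ran");
    }
}

void main()
{
    try
    {
        inner(true);
    }
    catch (Throwable t)
    {
        writeln("caught ", typeid(t), ": ", t.msg);
    }
}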
Jun 06 2012
prev sibling parent reply Sean Kelly <sean invisibleduck.org> writes:
On Jun 5, 2012, at 8:44 AM, Jonathan M Davis <jmdavisProg gmx.com> wrote:

 In many cases, it's probably fine, but if the program is in a bad enough state
 that an Error is thrown, then you can't know for sure that any particular such
 block will execute properly (memory corruption being the extreme case), and if
 it doesn't run correctly, then it could make things worse (e.g. writing
 invalid data to a file, corrupting that file). Also, if the stack is not unwound
 perfectly (as nothrow prevents), then the program's state will become
 increasingly invalid the farther that the program gets from the throw point,
 which will increase the chances of cleanup code functioning incorrectly, as
 any assumptions that they've made about the program state are increasingly
 likely to be wrong (as well as it being increasingly likely that the variables
 they operate on are no longer valid).

Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?

I think an argument could be made that the current behavior of stack unwinding should continue and a hook should be added to let the user call abort or whatever instead. But we couldn't make abort the default and let the user disable that.
Jun 05 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 05/06/2012 18:21, Sean Kelly wrote:
 On Jun 5, 2012, at 8:44 AM, Jonathan M Davis<jmdavisProg gmx.com>  wrote:
 In many cases, it's probably fine, but if the program is in a bad enough state
 that an Error is thrown, then you can't know for sure that any particular such
 block will execute properly (memory corruption being the extreme case), and if
 it doesn't run correctly, then it could make things worse (e.g. writing
 invalid data to a file, corrupting that file). Also, if the stack is not unwound
 perfectly (as nothrow prevents), then the program's state will become
 increasingly invalid the farther that the program gets from the throw point,
 which will increase the chances of cleanup code functioning incorrectly, as
 any assumptions that they've made about the program state are increasingly
 likely to be wrong (as well as it being increasingly likely that the variables
 they operate on are no longer valid).
Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?
Yes, either we consider the environment may have been compromised and it doesn't even make sense to throw an Error, or we consider this environment is still consistent, and we have a logic bug. If so, scope (especially scope(failure)) should run when the stack is unwound.

As the need depends on the software (an office suite should try its best to fail gracefully, a plane autopilot should crash ASAP and give control back to the pilot), what is needed here is a compiler switch.
Jun 06 2012
parent reply Sean Kelly <sean invisibleduck.org> writes:
On Jun 6, 2012, at 9:45 AM, deadalnix wrote:

 On 05/06/2012 18:21, Sean Kelly wrote:
 On Jun 5, 2012, at 8:44 AM, Jonathan M Davis<jmdavisProg gmx.com> wrote:

 In many cases, it's probably fine, but if the program is in a bad enough state
 that an Error is thrown, then you can't know for sure that any particular such
 block will execute properly (memory corruption being the extreme case), and if
 it doesn't run correctly, then it could make things worse (e.g. writing
 invalid data to a file, corrupting that file). Also, if the stack is not unwound
 perfectly (as nothrow prevents), then the program's state will become
 increasingly invalid the farther that the program gets from the throw point,
 which will increase the chances of cleanup code functioning incorrectly, as
 any assumptions that they've made about the program state are increasingly
 likely to be wrong (as well as it being increasingly likely that the variables
 they operate on are no longer valid).

 Then we should really just abort on Error. What I don't understand is the
 assertion that it isn't safe to unwind the stack on Error and yet that
 catch(Error) clauses should still execute. If the program state is really so
 bad that nothing can be done safely then why would the user attempt to log
 the error condition or anything else?

 Yes, either we consider the environment may have been compromised and it
 doesn't even make sense to throw an Error, or we consider this environment
 is still consistent, and we have a logic bug. If so, scope (especially
 scope(failure)) should run when the stack is unwound.

 As the need depends on the software (an office suite should try its best to
 fail gracefully, a plane autopilot should crash ASAP and give control back
 to the pilot), what is needed here is a compiler switch.

I think a runtime hook is reasonable instead. But the default case has to be the more permissive case. It's safe to tighten the rules but never to loosen them, since external code will be written assuming the default behavior.
Jun 06 2012
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On 06.06.2012 20:53, Sean Kelly wrote:
 On Jun 6, 2012, at 9:45 AM, deadalnix wrote:

 On 05/06/2012 18:21, Sean Kelly wrote:
 On Jun 5, 2012, at 8:44 AM, Jonathan M Davis<jmdavisProg gmx.com>   wrote:
 In many cases, it's probably fine, but if the program is in a bad enough state
 that an Error is thrown, then you can't know for sure that any particular such
 block will execute properly (memory corruption being the extreme case), and if
 it doesn't run correctly, then it could make things worse (e.g. writing
 invalid data to a file, corrupting that file). Also, if the stack is not unwound
 perfectly (as nothrow prevents), then the program's state will become
 increasingly invalid the farther that the program gets from the throw point,
 which will increase the chances of cleanup code functioning incorrectly, as
 any assumptions that they've made about the program state are increasingly
 likely to be wrong (as well as it being increasingly likely that the variables
 they operate on are no longer valid).
Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?
 Yes, either we consider the environment may have been compromised and it doesn't even make sense to throw an Error, or we consider this environment is still consistent, and we have a logic bug. If so, scope (especially scope(failure)) should run when the stack is unwound. As the need depends on the software (an office suite should try its best to fail gracefully, a plane autopilot should crash ASAP and give control back to the pilot), what is needed here is a compiler switch.
I think a runtime hook is reasonable instead. But the default case has to be the more permissive case. It's safe to tighten the rules but never to loosen them, since external code will be written assuming the default behavior.
Yes, that's what I had in mind when I renamed the topic to "runtime hook for Crash on Error". The default should be to treat Error as just another type of Throwable that is logically not caught by catch(Exception). -- Dmitry Olshansky
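A minimal sketch of what such a hook might look like; this is entirely hypothetical (druntime had no such API at the time), and every name below is invented:

import core.stdc.stdlib : abort;

// Hypothetical hook type: inspect the Error, then decide whether to die.
alias ErrorHook = void function(Error e) nothrow;

private __gshared ErrorHook onUncaughtError = null;

void setErrorHook(ErrorHook h) nothrow { onUncaughtError = h; }

// Imagined call site inside the runtime's throw machinery: before any
// unwinding begins for an Error, give the application a chance to crash.
void dispatchError(Error e)
{
    if (onUncaughtError !is null)
        onUncaughtError(e);   // e.g. log minimally, then abort()
    throw e;                  // default: unwind like any other Throwable
}

An autopilot-style program would install a handler that calls abort() immediately; an office suite would leave the permissive default in place - matching the split deadalnix describes above.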
Jun 06 2012
parent deadalnix <deadalnix gmail.com> writes:
On 06/06/2012 18:56, Dmitry Olshansky wrote:
 On 06.06.2012 20:53, Sean Kelly wrote:
 On Jun 6, 2012, at 9:45 AM, deadalnix wrote:

 On 05/06/2012 18:21, Sean Kelly wrote:
 On Jun 5, 2012, at 8:44 AM, Jonathan M Davis<jmdavisProg gmx.com>
 wrote:
 In many cases, it's probably fine, but if the program is in a bad
 enough state
 that an Error is thrown, then you can't know for sure that any
 particular such
 block will execute properly (memory corruption being the extreme
 case), and if
 it doesn't run correctly, then it could make things worse (e.g.
 writing
 invalid data to a file, corrupting that file). Also, if the stack
 is not unwound
 perfectly (as nothrow prevents), then the program's state will become
 increasingly invalid the farther that the program gets from the
 throw point,
 which will increase the chances of cleanup code functioning
 incorrectly, as
 any assumptions that they've made about the program state are
 increasingly
 likely to be wrong (as well as it being increasingly likely that
 the variables they operate on are no longer valid).
Then we should really just abort on Error. What I don't understand is the assertion that it isn't safe to unwind the stack on Error and yet that catch(Error) clauses should still execute. If the program state is really so bad that nothing can be done safely then why would the user attempt to log the error condition or anything else?
 Yes, either we consider the environment may have been compromised and it doesn't even make sense to throw an Error, or we consider this environment is still consistent, and we have a logic bug. If so, scope (especially scope(failure)) should run when the stack is unwound. As the need depends on the software (an office suite should try its best to fail gracefully, a plane autopilot should crash ASAP and give control back to the pilot), what is needed here is a compiler switch.
I think a runtime hook is reasonable instead. But the default case has to be the more permissive case. It's safe to tighten the rules but never to loosen them, since external code will be written assuming the default behavior.
 Yes, that's what I had in mind when I renamed the topic to "runtime hook for Crash on Error". The default should be to treat Error as just another type of Throwable that is logically not caught by catch(Exception).
A better alternative is the condition system, inspired by LISP and proposed by H. S. Teoh. It definitely makes sense in this case, and is a better solution than both your proposal and mine.
Jun 06 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 04/06/2012 21:29, Steven Schveighoffer wrote:
 On Mon, 04 Jun 2012 06:20:56 -0400, Don Clugston <dac nospam.com> wrote:

 1. There exist cases where you cannot know why the assert failed.
 2. Therefore you never know why an assert failed.
 3. Therefore it is not safe to unwind the stack from a nothrow function.

 Spot the fallacies.

 The fallacy in moving from 2 to 3 is more serious than the one from 1
 to 2: this argument is not in any way dependent on the assert occuring
 in a nothrow function. Rather, it's an argument for not having
 AssertError at all.
I'm not sure that is the issue here at all. What I see is that the unwinding of the stack is optional, based on the assumption that there's no "right" answer. However, there is an underlying driver for not unwinding the stack -- nothrow. If nothrow results in the compiler optimizing out whatever hooks a function needs to properly unwind itself (my limited understanding is that this helps performance), then there *is no choice*, you can't properly unwind the stack. -Steve
It changes nothing in terms of performance as long as you don't throw. And when you throw, performance is not your main problem.
Jun 05 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky  
<dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL (the throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce). Then you call that when you are using it in a recoverable way.

OutOfMemory is critical if you did not write code to handle it. It's impossible for the compiler to know this, since it has no idea if the error will be caught. A vast majority of code does *not* recover from out of memory, so the default should be: throw an Error.

-Steve
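A minimal sketch of that layering (tryAlloc and alloc are invented names, with C's malloc standing in for the non-throwing primitive):

import core.stdc.stdlib : malloc;
import std.conv : to;
import std.exception : enforce;

// The non-throwing primitive: returns null on failure, throws nothing.
void* tryAlloc(size_t n) nothrow
{
    return malloc(n);
}

// The throwing convenience layered on top of it, via enforce.
void* alloc(size_t n)
{
    return enforce(tryAlloc(n), "allocation of " ~ n.to!string ~ " bytes failed");
}

Recoverable code calls tryAlloc and checks for null; everything else calls alloc and lets the exception propagate.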
Jun 01 2012
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 5:29 AM, Steven Schveighoffer wrote:
 No. What we need is a non-throwing version of malloc that returns NULL.
We have it. It's called "malloc"! :-)
Jun 01 2012
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 01 Jun 2012 13:50:16 -0400, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 6/1/2012 5:29 AM, Steven Schveighoffer wrote:
 No. What we need is a non-throwing version of malloc that returns NULL.
We have it. It's called "malloc"!
Oh sorry, I meant *GC.malloc* :) -Steve
Jun 01 2012
parent Artur Skawina <art.08.09 gmail.com> writes:
On 06/01/12 19:59, Steven Schveighoffer wrote:
 On Fri, 01 Jun 2012 13:50:16 -0400, Walter Bright <newshound2 digitalmars.com>
wrote:
 
 On 6/1/2012 5:29 AM, Steven Schveighoffer wrote:
 No. What we need is a non-throwing version of malloc that returns NULL.
We have it. It's called "malloc"!
Oh sorry, I meant *GC.malloc* :)
import core.memory : GC;

auto GC_malloc(size_t s) nothrow
{
    void* p;
    try p = GC.malloc(s);
    catch {}    // swallows the OutOfMemoryError; p stays null
    return p;
}

Which isn't ideal, but probably good enough - it's not like OOM will happen often enough that the exception overhead matters. The various implicit allocations will be more problematic, once GC.malloc starts to fail.

artur
Jun 01 2012
prev sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer 
wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky 
 <dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.

IMO, failing assertions and out-of-bounds errors should just abort(), or, as Sean suggests, call a special handler.

-Lars
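A library-level sketch of what that sugar might lower to; tryNew is a hypothetical helper, not anything proposed in this thread:

import core.exception : OutOfMemoryError;

// Hypothetical helper: construct a class instance, yielding null on OOM.
T tryNew(T, Args...)(Args args) if (is(T == class))
{
    try
    {
        return new T(args);
    }
    catch (OutOfMemoryError)
    {
        return null;
    }
}

// Usage mirroring the sugar above (Foo is a stand-in class):
//     auto a = tryNew!Foo();   // null on OOM

For this to be callable from nothrow code, T's constructor would itself have to be nothrow - the same constraint the sugar would impose.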
Jun 06 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated. I'd hate to see regular new not be allowed in nothrow functions. Having a way to allocate and return null on failure would definitely be a good feature for those trying to handle running out of memory, but for 99.9999999% of programs, it's just better to throw the Error, thereby killing the program and making it clear what happened. - Jonathan M Davis
Jun 06 2012
next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated. [...]
I agree; it would make nothrow an "advanced feature", which kind of sucks. -Lars
Jun 06 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, June 06, 2012 19:22:13 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated. [...]
I agree; it would make nothrow an "advanced feature", which kind of sucks.
Which makes the suggestion DOA IMHO. - Jonathan M Davis
Jun 06 2012
parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 18:11:42 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 19:22:13 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis 
 wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad 
 wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated. [...]
I agree; it would make nothrow an "advanced feature", which kind of sucks.
Which makes the suggestion DOA IMHO.
I'm not so sure it's worse than the current situation.

Newbie: "This nothrow thing looks kinda cool. So, if I use it, I can be sure nothing gets thrown from this function, right?"

Community: "Right. Unless you run out of memory, or an assertion fails, or something like that. Then you get an Error, and nothrow doesn't prevent those."

Newbie: "Ok, I guess that makes sense. Luckily, it looks like Error is just another kind of exception, so at least I know that all my destructors are run and my program terminates gracefully. Right?"

Community: "Yeah, about that..."
Jun 06 2012
prev sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated.
"nothrow new" is easily greppable, though. That would be the first course of action upon getting a segfault. -Lars
Jun 06 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, June 06, 2012 19:40:03 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated.
"nothrow new" is easily greppable, though. That would be the first course of action upon getting a segfault.
But unless you got a core dump, you have _no_ idea where in the program the segfault occurred. So, that really isn't helpful. - Jonathan M Davis
Jun 06 2012
parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 18:11:36 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 19:40:03 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis 
 wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad 
 wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated.
"nothrow new" is easily greppable, though. That would be the first course of action upon getting a segfault.
But unless you got a core dump, you have _no_ idea where in the program the segfault occurred. So, that really isn't helpful.
Of course it's helpful. If you were using nothrow new (or malloc, for that matter), you should *always* check its return value. If you get a segfault, you simply locate all uses of nothrow new in your program and ensure you have checked the return value of each and every one of them. If it turns out they are all checked, well, then the problem isn't OOM. -Lars
Jun 06 2012
parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, June 06, 2012 22:47:55 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 18:11:36 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 19:40:03 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis
 
 wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad
 
 wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated.
"nothrow new" is easily greppable, though. That would be the first course of action upon getting a segfault.
But unless you got a core dump, you have _no_ idea where in the program the segfault occurred. So, that really isn't helpful.
Of course it's helpful. If you were using nothrow new (or malloc, for that matter), you should *always* check its return value. If you get a segfault, you simply locate all uses of nothrow new in your program and ensure you have checked the return value of each and every one of them. If it turns out they are all checked, well, then the problem isn't OOM.
But I do _not_ want to have to care about OOM. Having the program throw an Error and kill the program when it occurs is _perfect_ IMHO. So, if it were changed so that you could only allocate memory in a nothrow function through a mechanism which did not throw OutOfMemoryError when you tried to allocate and failed due to a lack of free memory, then nothrow would _suck_. It would be extremely annoying to use, and completely cripple it IMHO. Having a mechanism which allows you to allocate without throwing OOM is great for the cases where someone actually needs it, but I'm _completely_ against requiring it anywhere. - Jonathan M Davis
Jun 06 2012
parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 21:05:51 UTC, Jonathan M Davis wrote:
 On Wednesday, June 06, 2012 22:47:55 Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 18:11:36 UTC, Jonathan M Davis 
 wrote:
 On Wednesday, June 06, 2012 19:40:03 Lars T. Kyllingstad 
 wrote:
 On Wednesday, 6 June 2012 at 09:38:35 UTC, Jonathan M Davis
 
 wrote:
 On Wednesday, June 06, 2012 11:13:39 Lars T. Kyllingstad
 
 wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven 
 Schveighoffer
 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 
 <dmitry.olsh gmail.com> wrote:
 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar: auto a = nothrow new Foo; // Returns null on OOM Then, ordinary new can be disallowed in nothrow code.
But then instead of getting a nice, clear, OutOfMemoryError, you get a segfault - and that's assuming that it gets dereferenced anywhere near where it's allocated.
"nothrow new" is easily greppable, though. That would be the first course of action upon getting a segfault.
But unless you got a core dump, you have _no_ idea where in the program the segfault occurred. So, that really isn't helpful.
Of course it's helpful. If you were using nothrow new (or malloc, for that matter), you should *always* check its return value. If you get a segfault, you simply locate all uses of nothrow new in your program and ensure you have checked the return value of each and every one of them. If it turns out they are all checked, well, then the problem isn't OOM.
But I do _not_ want to have to care about OOM. Having the program throw an Error and kill the program when it occurs is _perfect_ IMHO. So, if it were changed so that you could only allocate memory in a nothrow function through a mechanism which did not throw OutOfMemoryError when you tried to allocate and failed due to a lack of free memory, then nothrow would _suck_. It would be extremely annoying to use, and completely cripple it IMHO. Having a mechanism which allows you to allocate without throwing OOM is great for the cases where someone actually needs it, but I'm _completely_ against requiring it anywhere.
You're probably right. Besides, I just browsed through core/exception.d, and it seems that my list of Errors was far from exhaustive. In addition to OutOfMemoryError, AssertError and RangeError, there is FinalizeError, HiddenFuncError, InvalidMemoryOperationError and SwitchError. Working around nothrow for all of those would be painful. SwitchError should probably be deprecated, by the way. -Lars
Jun 06 2012
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 06/07/2012 12:11 AM, Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 21:05:51 UTC, Jonathan M Davis wrote:
 ...
 Having a mechanism which allows you to allocate without throwing OOM
 is great
 for the cases where someone actually needs, it but I'm _completely_
 against
 requiring it anywhere.
You're probably right. Besides, I just browsed through core/exception.d, and it seems that my list of Errors was far from exhaustive. In addition to OutOfMemoryError, AssertError and RangeError, there is FinalizeError, HiddenFuncError, InvalidMemoryOperationError and SwitchError. Working around nothrow for all of those would be painful. SwitchError should probably be deprecated, by the way. -Lars
HiddenFuncError should be deprecated as well.
Jun 07 2012
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 06 Jun 2012 05:13:39 -0400, Lars T. Kyllingstad  
<public kyllingen.net> wrote:

 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky  
 <dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.
That doesn't work, new conflates memory allocation with construction. What if the constructor throws? -Steve
Jun 06 2012
parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 13:26:09 UTC, Steven Schveighoffer 
wrote:
 On Wed, 06 Jun 2012 05:13:39 -0400, Lars T. Kyllingstad 
 <public kyllingen.net> wrote:

 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer 
 wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky 
 <dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 	--> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.
That doesn't work, new conflates memory allocation with construction. What if the constructor throws?
The constructor would have to be marked nothrow as well. Actually, that is currently the case:

    class Foo
    {
        this() { }
    }

    void bar() nothrow
    {
        auto f = new Foo;
        // Error: constructor this is not nothrow
        // Error: function 'bar' is nothrow yet may throw
    }

The only difference between "new" and "nothrow new" is that the latter would return null on allocation failure instead of throwing an OutOfMemoryError.

Based on this discussion, I gather that one of the big problems is that the compiler is free to elide stack unwinding code for nothrow functions, despite the fact that they can in fact throw Errors. One solution would therefore be to disallow *all* throwing from nothrow functions, including Errors.

Besides OutOfMemoryError, I can only think of two other Errors that would make this a hassle: AssertError and RangeError. However, both of these signify problems with the program logic, and unwinding the stack is probably a bad idea anyway, so why not simply make these abort()?

-Lars
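A library-level equivalent of that sugar could look roughly like this (nothrowNew is a hypothetical name, not an existing druntime/Phobos API; a sketch only):

    import core.exception : OutOfMemoryError;

    // Returns null on allocation failure instead of throwing
    // OutOfMemoryError. As described above, T's constructor must itself
    // be nothrow, or this template will fail to compile when instantiated.
    T nothrowNew(T, Args...)(Args args) nothrow
        if (is(T == class))
    {
        try
            return new T(args);
        catch (OutOfMemoryError)
            return null;
    }

    // usage:
    // auto f = nothrowNew!Foo(); // null on OOM instead of an Error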
Jun 06 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06/06/2012 07:18 PM, Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 13:26:09 UTC, Steven Schveighoffer wrote:
 On Wed, 06 Jun 2012 05:13:39 -0400, Lars T. Kyllingstad
 <public kyllingen.net> wrote:

 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 <dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.
That doesn't work, new conflates memory allocation with construction. What if the constructor throws?
The constructor would have to be marked nothrow as well. Actually, that is currently the case:

    class Foo
    {
        this() { }
    }

    void bar() nothrow
    {
        auto f = new Foo;
        // Error: constructor this is not nothrow
        // Error: function 'bar' is nothrow yet may throw
    }

The only difference between "new" and "nothrow new" is that the latter would return null on allocation failure instead of throwing an OutOfMemoryError.

Based on this discussion, I gather that one of the big problems is that the compiler is free to elide stack unwinding code for nothrow functions, despite the fact that they can in fact throw Errors. One solution would therefore be to disallow *all* throwing from nothrow functions, including Errors.

Besides OutOfMemoryError, I can only think of two other Errors that would make this a hassle: AssertError and RangeError. However, both of these signify problems with the program logic, and unwinding the stack is probably a bad idea anyway, so why not simply make these abort()?

-Lars
In the current implementation, in contract checking may catch AssertErrors and resume the program normally.
Jun 06 2012
parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 19:27:31 UTC, Timon Gehr wrote:
 On 06/06/2012 07:18 PM, Lars T. Kyllingstad wrote:
 Besides OutOfMemoryError, I can only think of two other Errors 
 that
 would make this a hassle: AssertError and RangeError. However, 
 both of
 these signify problems with the program logic, and unwinding 
 the stack
 is probably a bad idea anyway, so why not simply make these 
 abort()?

 -Lars
In the current implementation, in contract checking may catch AssertErrors and resume the program normally.
I'm not sure I understand what you mean. Are you suggesting catching AssertErrors thrown by in contracts, thereby using contracts as a means of checking input? If so, I would say you are using contracts in the wrong way. -Lars
Jun 06 2012
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 06/06/2012 11:04 PM, Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 19:27:31 UTC, Timon Gehr wrote:
 On 06/06/2012 07:18 PM, Lars T. Kyllingstad wrote:
 Besides OutOfMemoryError, I can only think of two other Errors that
 would make this a hassle: AssertError and RangeError. However, both of
 these signify problems with the program logic, and unwinding the stack
 is probably a bad idea anyway, so why not simply make these abort()?

 -Lars
In the current implementation, in contract checking may catch AssertErrors and resume the program normally.
I'm not sure I understand what you mean. Are you suggesting catching AssertErrors thrown by in contracts, thereby using contracts as a means of checking input? If so, I would say you are using contracts in the wrong way. -Lars
I was not describing a usage pattern. This is built-in behaviour:

    import std.stdio : writeln;
    import core.exception : AssertError;

    int throwAssertError()
    {
        writeln("now throwing an AssertError");
        throw new AssertError("!");
    }

    class C
    {
        void foo() in { throwAssertError(); } body { }
    }

    class D : C
    {
        override void foo() in { } body { }
    }

    void main()
    {
        C c = new D;
        c.foo(); // assert error thrown and caught
    }
Jun 06 2012
parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 6 June 2012 at 21:35:15 UTC, Timon Gehr wrote:
 On 06/06/2012 11:04 PM, Lars T. Kyllingstad wrote:
 On Wednesday, 6 June 2012 at 19:27:31 UTC, Timon Gehr wrote:
 On 06/06/2012 07:18 PM, Lars T. Kyllingstad wrote:
 Besides OutOfMemoryError, I can only think of two other 
 Errors that
 would make this a hassle: AssertError and RangeError. 
 However, both of
 these signify problems with the program logic, and unwinding 
 the stack
 is probably a bad idea anyway, so why not simply make these 
 abort()?

 -Lars
In the current implementation, in contract checking may catch AssertErrors and resume the program normally.
I'm not sure I understand what you mean. Are you suggesting catching AssertErrors thrown by in contracts, thereby using contracts as a means of checking input? If so, I would say you are using contracts in the wrong way. -Lars
I was not describing a usage pattern. This is built-in behaviour:

    int throwAssertError()
    {
        writeln("now throwing an AssertError");
        throw new AssertError("!");
    }

    class C
    {
        void foo() in { throwAssertError(); } body { }
    }

    class D : C
    {
        override void foo() in { } body { }
    }

    void main()
    {
        C c = new D;
        c.foo(); // assert error thrown and caught
    }
Ah, now I see what you mean. I didn't know that. -Lars
Jun 06 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 06/06/2012 11:13, Lars T. Kyllingstad wrote:
 On Friday, 1 June 2012 at 12:29:27 UTC, Steven Schveighoffer wrote:
 On Fri, 01 Jun 2012 04:48:27 -0400, Dmitry Olshansky
 <dmitry.olsh gmail.com> wrote:

 I don't agree that OutOfMemory is critical:
 --> make it an exception ?
No. What we need is a non-throwing version of malloc that returns NULL. (throwing version can wrap this). If you want to throw an exception, then throw it there (or use enforce).
With some sugar:

    auto a = nothrow new Foo; // Returns null on OOM

Then, ordinary new can be disallowed in nothrow code.

IMO, failing assertions and out-of-bounds errors should just abort(), or, as Sean suggests, call a special handler.

-Lars
Let's see what Andrei proposes for custom allocators.
Jun 06 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:
 Or better - save game and then crash gracefully.
That can result in saving a corrupted game state, which then will not load, or worse, load and then cause another crash. I would suggest instead implementing an auto-save feature which automatically saves the game state at regular intervals.
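A minimal sketch of what that might look like (GameState, saveToDisk, and the slot file names are invented for this example, not a real API):

    import core.thread : Thread;
    import core.time : minutes;

    interface GameState
    {
        bool running();
        void saveToDisk(string path);
    }

    // Save on a fixed interval, alternating between two slots so that a
    // crash in the middle of a write can never clobber the only good save.
    void autosaveLoop(GameState state)
    {
        size_t slot = 0;
        while (state.running)
        {
            state.saveToDisk(slot == 0 ? "autosave0.dat" : "autosave1.dat");
            slot = 1 - slot;
            Thread.sleep(5.minutes);
        }
    }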
Jun 01 2012
next sibling parent reply "Simen Kjaeraas" <simen.kjaras gmail.com> writes:
On Fri, 01 Jun 2012 19:43:06 +0200, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:
 Or better - save game and then crash gracefully.
That can result in saving a corrupted game state, which then will not load, or worse, load and then cause another crash. I would suggest instead implementing an auto-save feature which automatically saves the game state at regular intervals.
Autosave is a good idea. Autoloading the last save upon resuming the game is not. But saving the game before crashing, then explicitly telling the player that something went wrong and that the save might be corrupted and not to expect too much from it, I think that's pretty good.

Of course, the crashing part may be deeply buried and only trigger under obscure circumstances 100 hours down the road, in which case the autosave is definitely the correct solution.

Basically it's about provability and consequence. If you can check the save file for corruption upon next start, no problem! If the result is two hours lost, no problem! If you can't check the save file, and the result may be 100 hours lost, don't save a potentially corrupted file (I'm looking at you, Bethesda).
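As a sketch of the "check the save file upon next start" idea, a CRC32 trailer is enough (the file layout here is invented for the example, using today's std.digest.crc module):

    import std.digest.crc : crc32Of;
    import std.file : read, write;

    // Store the payload followed by a 4-byte CRC32 trailer.
    void saveChecked(string path, const(ubyte)[] payload)
    {
        auto crc = crc32Of(payload);
        write(path, payload ~ crc[]);
    }

    // Returns false (and hands back nothing) if the save is corrupted.
    bool loadChecked(string path, out ubyte[] payload)
    {
        auto data = cast(ubyte[]) read(path);
        if (data.length < 4)
            return false;
        auto candidate = data[0 .. $ - 4];
        auto crc = crc32Of(candidate);
        if (crc[] != data[$ - 4 .. $])
            return false; // corrupted: refuse to load it
        payload = candidate;
        return true;
    }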
Jun 01 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 3:25 PM, Simen Kjaeraas wrote:
 Autosave is a good idea. Autoloading last save upon resuming the game is not.
 But saving the game before crashing, then explicitly telling the player that
 something went wrong and the save might be corrupted and not to expect too much
 from it, I think that's pretty good.
An awful lot of game players are not computer professionals and will not understand such instructions nor tolerate them.
Jun 01 2012
prev sibling parent Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
On 01.06.2012 21:43, Walter Bright wrote:
 On 6/1/2012 1:48 AM, Dmitry Olshansky wrote:
 Or better - save game and then crash gracefully.
That can result in saving a corrupted game state, which then will not load, or worse, load and then cause another crash. I would suggest instead implementing an auto-save feature which automatically saves the game state at regular intervals.
Even in my old D1 3D editor I implemented auto-saving because it was hard to recover from internal errors and I decided to log-and-crash. And it still works perfectly; the user can lose at most a minute of work! I'd like every program to do the same thing.

More than that, my favourite game (Windows XP SP3) sometimes fails to start (you know, the "system32\config\<reg file> is corrupted" error) and I have to launch my restore script, and almost every time the last restore point is broken (and its creation time is often a few hours before the last shutdown)! So trying to save after a program logic error is definitely a mistake.

-- Денис В. Шеломовский Denis V. Shelomovskij
Jun 02 2012
prev sibling parent reply "Jacob Carlborg" <doob me.com> writes:
On Friday, 1 June 2012 at 01:16:28 UTC, Walter Bright wrote:

 [When I worked on flight critical airplane systems, the only 
 acceptable response for a self-detected fault was to 
 IMMEDIATELY stop the system, physically DISENGAGE it from the 
 flight controls, and inform the pilot.]
Plane/computer: ERROR ERROR, I just wanted to inform you that I've detected an error with the landing gear. I will now disengage the landing gear from the plane, I hope you do not need to land. :) -- /Jacob Carlborg
Jun 01 2012
parent Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 6:25 AM, Jacob Carlborg wrote:
 On Friday, 1 June 2012 at 01:16:28 UTC, Walter Bright wrote:

 [When I worked on flight critical airplane systems, the only acceptable
 response for a self-detected fault was to IMMEDIATELY stop the system,
 physically DISENGAGE it from the flight controls, and inform the pilot.]
Plane/computer: ERROR ERROR, I just wanted to inform you that I've detected an error with the landing gear. I will now disengage the landing gear from the plane, I hope you do not need to land. :)
I know you're joking, but the people who design these things have a lot of experience with things that fail on aircraft, why they fail, and how to design a system to survive failure. And the record of airline safety speaks for itself - it is astonishingly, unbelievably, good. (I don't know the landing gear system in detail, but I do know it has multiple *independent* subsystems to get it down and locked.)
Jun 01 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 5/31/2012 2:06 AM, deadalnix wrote:
 A failure in a database component shouldn't prevent me from closing a network
 connection properly in an unrelated part of the program.
It *is* related if the process space is shared. The correct way to do what you're proposing is to make each component a separate process. Then, you are guaranteed that the failure of one component is separable, and restartable.
 This is called failing gracefully. And this is highly recommended, and you
 KNOW that the system will fail at some point.
In a shared memory space, you have no guarantees whatsoever that the failure in the component is not a failure in the rest of the program. You cannot tell if it is related or not.
May 31 2012
prev sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Thursday, 31 May 2012 at 02:18:22 UTC, Walter Bright wrote:
 On 5/30/2012 8:05 AM, Steven Schveighoffer wrote:
 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable 
 circumstances.
 Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery 
 is not defined by
 the runtime, so you must do it manually. Any decision the 
 runtime makes will be
 arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and 
 *expected* to occur because
 the program cannot control it's environment. e.g. EOF when 
 none was expected.
A recoverable exception is NOT a logic bug in your program, which is why it is recoverable. If there is recovery possible from a particular assert error, then you are using asserts incorrectly.
I think this is a key point. Asserts are there to verify and debug program logic, they are not part of the logic itself. They are a useful tool for the programmer, nothing more. Specifically, asserts are NOT an error handling mechanism!

If you compile the code with -release (which is often the case for production code), the asserts won't even be included. Therefore, when designing the error handling mechanisms for a program, one should just pretend the asserts aren't there. There is no point in writing code which shuts down "gracefully" when it is compiled without -release, but trudges on in an invalid state when compiled in release mode. Then you should have been using enforce() instead.

-Lars
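To make the distinction concrete, a small sketch (averageAge is a made-up example):

    import std.exception : enforce;

    double averageAge(const(int)[] ages)
    {
        // Input validation: this data comes from the outside world, so
        // check it with enforce, which throws an Exception and is still
        // compiled in with -release.
        enforce(ages.length > 0, "need at least one age");

        long sum = 0;
        foreach (a; ages)
        {
            enforce(a >= 0, "ages cannot be negative"); // also input checking
            sum += a;
        }

        immutable result = cast(double) sum / ages.length;

        // Sanity check on our own arithmetic: if this fires, the logic
        // above is buggy. It vanishes under -release; it is NOT error
        // handling.
        assert(result >= 0);

        return result;
    }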
May 31 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
On 31/05/2012 11:23, Lars T. Kyllingstad wrote:
 On Thursday, 31 May 2012 at 02:18:22 UTC, Walter Bright wrote:
 On 5/30/2012 8:05 AM, Steven Schveighoffer wrote:
 I'd classify errors/exceptions into three categories:

 1. corruption/segfault -- not recoverable under any reasonable
 circumstances.
 Special cases exist (such as a custom paging mechanism).
 2. program invariant errors (i.e. assert errors) -- Recovery is not
 defined by
 the runtime, so you must do it manually. Any decision the runtime
 makes will be
 arbitrary, and could be wrong.
 3. try/catch exceptions -- these are planned for and *expected* to
 occur because
 the program cannot control it's environment. e.g. EOF when none was
 expected.
A recoverable exception is NOT a logic bug in your program, which is why it is recoverable. If there is recovery possible from a particular assert error, then you are using asserts incorrectly.
I think this is a key point. Asserts are there to verify and debug program logic, they are not part of the logic itself. They are a useful tool for the programmer, nothing more. Specifically, asserts are NOT an error handling mechanism! If you compile the code with -release (which is often the case for production code), the asserts won't even be included. Therefore, when designing the error handling mechanisms for a program, one should just pretend the asserts aren't there. There is no point in writing code which shuts down "gracefully" when it is compiled without -release, but trudges on in an invalid state when compiled in release mode. Then you should have been using enforce() instead. -Lars
You are lost in the details of that specific case; assert is a very specific issue. What is discussed here is much broader.
May 31 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/31/2012 2:23 AM, Lars T. Kyllingstad wrote:
 On Thursday, 31 May 2012 at 02:18:22 UTC, Walter Bright wrote:
 A recoverable exception is NOT a logic bug in your program, which is why it is
 recoverable.

 If there is recovery possible from a particular assert error, then you are
 using asserts incorrectly.
I think this is a key point. Asserts are there to verify and debug program logic, they are not part of the logic itself. They are a useful tool for the programmer, nothing more. Specifically, asserts are NOT an error handling mechanism!
Right. And I'd like to amplify that the asserts are also there to detect program faults, hopefully before damage is done. If a program must continue even after it has failed, then you have a WRONGLY designed system. It is extremely important to understand this point if you are implementing any sort of critical software.
May 31 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 01/06/2012 02:57, Walter Bright wrote:
 On 5/31/2012 2:23 AM, Lars T. Kyllingstad wrote:
 On Thursday, 31 May 2012 at 02:18:22 UTC, Walter Bright wrote:
 A recoverable exception is NOT a logic bug in your program, which is
 why it is
 recoverable.

 If there is recovery possible from a particular assert error, then
 you are
 using asserts incorrectly.
I think this is a key point. Asserts are there to verify and debug program logic, they are not part of the logic itself. They are a useful tool for the programmer, nothing more. Specifically, asserts are NOT an error handling mechanism!
Right. And I'd like to amplify that the asserts are also there to detect program faults hopefully before damage is done. If a program must continue even after it has failed, then you have a WRONGLY designed system. It is extremely important to understand this point if you are implementing any sort of critical software.
We are talking about running scope statements and finally blocks when unwinding the stack, not trying to continue the execution of the program. This is, most of the time, the point of errors/exceptions. You rarely recover from them in real life.
Jun 01 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 11:14 AM, deadalnix wrote:
 We are talking about running scope statements and finally blocks when
 unwinding the stack, not trying to continue the execution of the program.
Which will be running arbitrary code not anticipated by the assert failure, and code that is highly unlikely to be desirable for shutdown.
 This is, most of the time, the point of error/exceptions. You rarely recover
 from them in real life.
I believe this is a misunderstanding of what exceptions are for. "File not found" exceptions, and other errors detected in inputs, are routine and routinely recoverable. This discussion has come up repeatedly in the last 30 years. Its root is always the same - conflating the handling of input errors with the handling of bugs in the logic of the program. The two are COMPLETELY different, and dealing with them follows completely different philosophies, goals, and strategies. Input errors are not bugs, and vice versa. There is no overlap.
Jun 01 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
On 01/06/2012 22:35, Walter Bright wrote:
 On 6/1/2012 11:14 AM, deadalnix wrote:
 We are talking about running scope statements and finally blocks when
 unwinding the stack, not trying to continue the execution of the program.
Which will be running arbitrary code not anticipated by the assert failure, and code that is highly unlikely to be desirable for shutdown.
 This is, most of the time, the point of error/exceptions. You rarely
 recover
 from them in real life.
I believe this is a misunderstanding of what exceptions are for. "File not found" exceptions, and other errors detected in inputs, are routine and routinely recoverable. This discussion has come up repeatedly in the last 30 years. Its root is always the same - conflating the handling of input errors with the handling of bugs in the logic of the program. The two are COMPLETELY different, and dealing with them follows completely different philosophies, goals, and strategies. Input errors are not bugs, and vice versa. There is no overlap.
I'm pretty sure I understand what you are saying here. We have in fact 3 cases:

1/ A problem during the execution of an operation (Exceptions).
2/ A logically invalid state in the program (Errors).
3/ The program environment is broken (an Error too ATM).

Case 1/ is out of the scope of the current discussion. In case 3/, it doesn't even make sense to throw an Error as we do now, because it isn't even sure that this is possible (the stack may be corrupted), or that the information provided is correct. This leaves case 2/ on the table.

Programs are usually an aggregate of several smaller components that interact with each other. Let's say, as this is a very common case, I have a program that has a network component and another that performs some calculations. If an assert fails in the component that does the calculations, it indicates a malfunction there. Whatever I do in that module, it is likely to make no sense. However, unless I consider that I may be in case 3/ (but then it isn't even a good response to throw an Error, so we consider we aren't), I'm sure that the network module is still in good shape and can close the connection, as the sketch below illustrates.
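A sketch of that argument in code, for a non-release build (the components here are stand-ins invented for the example, and whether catching an Error like this is safe is exactly what is disputed in this thread):

    import std.stdio : writeln;

    struct Connection { void close() { writeln("connection closed"); } }
    Connection openConnection() { return Connection(); }
    void runCalculations() { assert(0, "logic bug in the calculation module"); }

    void main()
    {
        auto net = openConnection();  // stand-in for the network component
        scope(exit) net.close();      // the cleanup we want to run

        try
            runCalculations();        // fails with an AssertError
        catch (Error e)
        {
            // Case 2/: the calculation module is in an unknown state, but
            // (unless we are really in case 3/) the network module is fine
            // and can still close its connection cleanly.
            writeln("calculation component died: ", e.msg);
        }
        // scope(exit) runs here, then the program terminates normally
    }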
Jun 03 2012
prev sibling parent Don Clugston <dac nospam.com> writes:
On 01/06/12 22:35, Walter Bright wrote:
 On 6/1/2012 11:14 AM, deadalnix wrote:
 We are talking about running scope statements and finally blocks when
 unwinding the stack, not trying to continue the execution of the program.
Which will be running arbitrary code not anticipated by the assert failure, and code that is highly unlikely to be desirable for shutdown.
Sorry, Walter, that's complete bollocks.

    try
    {
        assert(x == 2);
    }
    catch (AssertError e)
    {
        foo();
    }

is exactly equivalent to:

    version (release) { } else
    {
        if (x != 2) foo();
    }

Bad practice, sure. But it's not running arbitrary, unanticipated code.
Jun 04 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/30/2012 2:32 AM, Don Clugston wrote:
 In fact, generally, the point of an AssertError is to prevent the program from
 entering an invalid state.
It's already in an invalid state when the assert fails, by definition. It is not recoverable. The only option is a more or less graceful shutdown.
May 30 2012
parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Walter Bright wrote:
 On 5/30/2012 2:32 AM, Don Clugston wrote:
In fact, generally, the point of an AssertError is to prevent the program from
entering an invalid state.
It's already in an invalid state when the assert fails, by definition. It is not recoverable. The only option is a more or less graceful shutdown.
How do I do a graceful shutdown if finally and scope blocks are not guaranteed to be executed? onAssertError etc. are of no use here, because I need to perform different shutdowns for different cases, or because I have defined my own Error, let's say for some device. Jens
May 31 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/31/2012 12:40 AM, Jens Mueller wrote:
 How do I do a graceful shutdown if finally and scope is not guaranteed
 to be executed? Assuming onAssertError, etc. is of no use because I need
 to perform different shutdowns due to having different cases or if I
 defined my own Error, let's say for some device.
There's no way to guarantee a graceful shutdown. No way. If you must have such, then the way to do it is to divide your application into separate processes that communicate via interprocess communication, then when one component fails the rest of your app can restart it or do what's necessary, as the rest is not in an invalid state.
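For illustration, a minimal sketch of such a supervisor using today's std.process API (the worker path and restart policy are invented for the example):

    import std.process : spawnProcess, wait;
    import std.stdio : writeln;

    // Restart the worker whenever it dies abnormally. The supervisor's
    // own state is never touched by the worker's failure.
    void supervise(string workerExe)
    {
        for (;;)
        {
            auto pid = spawnProcess([workerExe]);
            const status = wait(pid); // blocks until the worker exits
            if (status == 0)
                break;                // clean shutdown, stop supervising
            writeln("worker exited with ", status, "; restarting");
        }
    }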
May 31 2012
next sibling parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Walter Bright wrote:
 On 5/31/2012 12:40 AM, Jens Mueller wrote:
How do I do a graceful shutdown if finally and scope is not guaranteed
to be executed? Assuming onAssertError, etc. is of no use because I need
to perform different shutdowns due to having different cases or if I
defined my own Error, let's say for some device.
There's no way to guarantee a graceful shutdown. No way. If you must have such, then the way to do it is to divide your application into separate processes that communicate via interprocess communication, then when one component fails the rest of your app can restart it or do what's necessary, as the rest is not in an invalid state.
Okay, let's assume I have separate processes, maybe even processes on different machines. In one process I get an error. Let's say I want to trigger the other process so that it restarts the failed process or just logs the event, whatever makes sense. How do I do this if it is not guaranteed that finally/scope blocks are being executed? I don't need a guarantee that the shutdown will work in each and every case. All I need is the possibility to perform a more graceful shutdown. Jens
May 31 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 5/31/2012 1:05 PM, Jens Mueller wrote:
 Okay, let's assume I have separate processes maybe even processes on
 different machines. In one process I get an error. Let's say I want to
 trigger the other process that it restarts the process or just logs the
 event whatever makes sense.
 How do I do this if it not guaranteed that finally/scope blocks are
 being executed?
Presumably the operating system provides a means to tell when a process is no longer running as part of its inter-process communication api.
May 31 2012
parent reply Jens Mueller <jens.k.mueller gmx.de> writes:
Walter Bright wrote:
 On 5/31/2012 1:05 PM, Jens Mueller wrote:
Okay, let's assume I have separate processes maybe even processes on
different machines. In one process I get an error. Let's say I want to
trigger the other process that it restarts the process or just logs the
event whatever makes sense.
How do I do this if it not guaranteed that finally/scope blocks are
being executed?
Presumably the operating system provides a means to tell when a process is no longer running as part of its inter-process communication api.
My point is that you may want to access some state of your invalid program, state that is lost otherwise. But maybe just having the core dump is actually enough, i.e. there is no other interesting state. You are probably right that you can always recover from the error when a new process is started. At least I cannot come up with a convincing case. Since the current implementation does not follow the specification regarding scope and finally blocks being executed in case of an Error, will try ... catch (...Error) keep working? I have code that uses assertThrows!AssertError to test some in contracts. Will this code break? Jens
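For reference, a sketch of that kind of contract test using std.exception.assertThrown (the assertThrows helper mentioned above is presumably similar); it relies on contracts being compiled in, i.e. a non-release build:

    import core.exception : AssertError;
    import std.exception : assertThrown;

    int inverse(int x)
    in { assert(x != 0); }
    body { return 1000 / x; }

    unittest
    {
        // The in contract must reject 0 by throwing an AssertError.
        assertThrown!AssertError(inverse(0));
    }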
Jun 01 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 6/1/2012 1:15 AM, Jens Mueller wrote:
 Since the current implementation does not follow the specification
 regarding scope and finally block being executed in case of Error will
 try ... catch (...Error) keep working?
No. The reason for this is the implementation was not updated after the split between Error and Exception happened. It was overlooked.
 I have code that uses
 assertThrows!AssertError to test some in contracts. Will this code
 break?
I don't know exactly what your code is, but if you're relying on scope to unwind in the presence of Errors, that will break.
Jun 01 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 01/06/2012 12:29, Walter Bright wrote:
 On 6/1/2012 1:15 AM, Jens Mueller wrote:
 Since the current implementation does not follow the specification
 regarding scope and finally block being executed in case of Error will
 try ... catch (...Error) keep working?
No. The reason for this is the implementation was not updated after the split between Error and Exception happened. It was overlooked.
 I have code that uses
 assertThrows!AssertError to test some in contracts. Will this code
 break?
I don't know exactly what your code is, but if you're relying on scope to unwind in the presence of Errors, that will break.
If you have an error, the program is already broken in some way. But it is unreasonable to think that the whole program is broken, except in very specific cases (stack corruption, for instance); and in such a case, you can't throw an error anyway.
Jun 01 2012
parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
On Friday, 1 June 2012 at 12:03:15 UTC, deadalnix wrote:
 On 01/06/2012 12:29, Walter Bright wrote:
 On 6/1/2012 1:15 AM, Jens Mueller wrote:
 Since the current implementation does not follow the 
 specification
 regarding scope and finally block being executed in case of 
 Error will
 try ... catch (...Error) keep working?
No. The reason for this is the implementation was not updated after the split between Error and Exception happened. It was overlooked.
 I have code that uses
 assertThrows!AssertError to test some in contracts. Will this 
 code
 break?
I don't know exactly what your code is, but if you're relying on scope to unwind in the presence of Errors, that will break.
If you have an error, the program is already broken in some way. But it is unreasonable to think that the whole program is broken, except in very specific cases (stack corruption, for instance); and in such a case, you can't throw an error anyway.
I agree. It should be possible to have a plugin system where not every null pointer dereference in a plugin screws up the whole program, without using different processes for the plugin. 90% of null pointer dereferences are simple bugs, not memory corruption.
Jun 01 2012
next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 6/1/12, Tobias Pankrath <tobias pankrath.net> wrote:
 I agree. It should be possible to have a plugin system where not
 every null pointer dereference in a plugin screws up the whole
 program, without using different processes for the plugin.
It should also be possible to correctly release hardware handles regardless of what happened to the app itself. I hate it when apps lock up and can't be killed (e.g. ones using ASIO hardware).
Jun 01 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 01/06/2012 14:16, Tobias Pankrath wrote:
 On Friday, 1 June 2012 at 12:03:15 UTC, deadalnix wrote:
 On 01/06/2012 12:29, Walter Bright wrote:
 On 6/1/2012 1:15 AM, Jens Mueller wrote:
 Since the current implementation does not follow the specification
 regarding scope and finally block being executed in case of Error will
 try ... catch (...Error) keep working?
No. The reason for this is the implementation was not updated after the split between Error and Exception happened. It was overlooked.
 I have code that uses
 assertThrows!AssertError to test some in contracts. Will this code
 break?
I don't know exactly what your code is, but if you're relying on scope to unwind in the presence of Errors, that will break.
If you have an error, it is already broken in some way. But this is unreasonable to think that the whole program is broken, except in very specific cases (stack corruption for instance) but in such a case, you can't throw an error anyway.
I agree. It should be possible to have a plugin system where not every null pointer dereference in a plugin screws up the whole program, without using different processes for the plugin. 90% of null pointer dereferences are simple bugs, not memory corruption.
You want to crash an airplane or what ???
Jun 01 2012
prev sibling next sibling parent Sean Kelly <sean invisibleduck.org> writes:
On May 31, 2012, at 1:05 PM, Jens Mueller <jens.k.mueller gmx.de> wrote:

 Walter Bright wrote:
 On 5/31/2012 12:40 AM, Jens Mueller wrote:
 How do I do a graceful shutdown if finally and scope is not guaranteed
 to be executed? Assuming onAssertError, etc. is of no use because I need
 to perform different shutdowns due to having different cases or if I
 defined my own Error, let's say for some device.
There's no way to guarantee a graceful shutdown. No way. If you must have such, then the way to do it is to divide your application into separate processes that communicate via interprocess communication, then when one component fails the rest of your app can restart it or do what's necessary, as the rest is not in an invalid state.
Okay, let's assume I have separate processes, maybe even processes on different machines. In one process I get an error. Let's say I want to trigger the other process so that it restarts the failed process or just logs the event, whatever makes sense. How do I do this if it is not guaranteed that finally/scope blocks are being executed? I don't need a guarantee that the shutdown will work in each and every case. All I need is the possibility to perform a more graceful shutdown.
You pretty much need a local process monitor. This is needed anyway, since not every failure may throw. Say SIGBUS on Posix, for example.
May 31 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 31/05/2012 21:47, Walter Bright wrote:
 On 5/31/2012 12:40 AM, Jens Mueller wrote:
 How do I do a graceful shutdown if finally and scope is not guaranteed
 to be executed? Assuming onAssertError, etc. is of no use because I need
 to perform different shutdowns due to having different cases or if I
 defined my own Error, let's say for some device.
There's no way to guarantee a graceful shutdown. No way. If you must have such, then the way to do it is to divide your application into separate processes that communicate via interprocess communication, then when one component fails the rest of your app can restart it or do what's necessary, as the rest is not in an invalid state.
There is no way to ensure that an IP packet will go through the internet. Let's just shut down that silly thing the internet is, right now.
Jun 01 2012