
digitalmars.D - DIP33: A standard exception hierarchy

reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
It's time to clean up this mess.

http://wiki.dlang.org/DIP33
Apr 01 2013
next sibling parent reply "monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

A quick comment about your "Error" section. You say: "In general, Errors should not be caught, primarily because they indicate that the program logic is compromised, and that the program may therefore be in an invalid state from which there is no recovery".

It is actually much worse than that: Errors bypass the entire exception handling mechanism, blasting through code that would run destructors, and even flying through functions that are nothrow. They don't just indicate a "potential" invalid state, they actually *put* the program in an invalid state, from which there is no recovery. That is the main mechanical difference between an "error" and an "exception"; it is not just a philosophical "logic vs runtime" distinction.

--------

Under this situation, I'm wondering how "OutOfMemory" is dealt with (you don't explain). The only logical explanation I can see is:
- It is not an exception, so it is not caught by "catch(Exception)".
- But it is not an error either, so it does not corrupt the program state.
=> Goal: It is hard to catch, but you *can* recover from it.
Is this correct? Is this what we are going for?

--------

Other than that, I think it would be beneficial to clean up our exception architecture.
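To illustrate the mechanical difference, here is a minimal sketch (whether the destructor actually runs while an Error unwinds the stack is implementation-dependent, which is exactly the point):

```d
import core.exception : AssertError;
import std.stdio : writeln;

struct Resource
{
    // Whether this runs while an Error unwinds the stack is
    // implementation-dependent -- that is the crux of the complaint.
    ~this() nothrow
    {
        import core.stdc.stdio : printf;
        printf("destructor ran\n");
    }
}

void work() nothrow
{
    Resource r;
    // nothrow only restricts Exceptions; an Error still flies through:
    throw new AssertError("logic error");
}

void main()
{
    try
    {
        work();
    }
    catch (AssertError e)
    {
        // Catching compiles, but the program state is now suspect.
        writeln("caught: ", e.msg);
    }
}
```

Compile without -release so the assert machinery is active; the catch in main compiles fine, but nothing guarantees the cleanup in between ever ran.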
Apr 01 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
01-Apr-2013 20:00, John Colvin writes:
 On Monday, 1 April 2013 at 12:12:56 UTC, Lars T. Kyllingstad wrote:

 But if all cleanup code is bypassed, what is the point in using the
 exception mechanism in the first place?  Why not just abort() and be
 done with it?

 I can think of two reasons for throwing an Error rather than aborting
 directly:
 1. You want a kind of "graceful" shutdown, in which destructors *are*
 called and make their best attempt at cleaning things up.
 2. You want to catch it at some point, and perform some manual cleanup.

 But if (1) does not happen, can you even hope to do something useful
 with (2)?  Your program is in the worst possible state it can be!

I'm no expert on these things, but:

Any chance of being in an invalid state -> undefined behaviour.
Undefined behaviour -> your destructors/cleanup routines could in theory do anything.

While that's a solid point, I'd argue the opposite is more applicable. The proponents of "undefined behaviour means anything can happen, so let's just die" fall flat on two counts:

1. They label all "bad things" as undefined, where it's more often system-defined or implementation-defined. Out of memory is one example; processor-dependent behavior is another (e.g. a shift beyond the word width).

2. They conclude from "anything can happen" that we should skip destructors and cleanup and just call abort. In fact, if you escalate the point of "anything", there is no guarantee that the abort call will ...e-hm... actually reach the process termination routine (or that the C run-time is intact).
 Therefore, you're better off not trying to cleanup if program state
 could be invalid.

Data is corrupted no matter whether you simply fail to write it in a consistent state (a sudden assertion in some third-party library) or corrupt it accidentally with a bad write (during cleanup on corrupted RAM). Therefore you should always attempt an orderly cleanup, but not rely on it actually working under all circumstances (thus backups, commits/save points, watchdogs and whatnot).

-- 
Dmitry Olshansky
Apr 01 2013
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/01/2013 12:44 PM, Dmitry Olshansky wrote:> 01-Apr-2013 20:00, John 
Colvin writes:

 Therefore, you're better off not trying to cleanup if program state
 could be invalid.

Data is corrupted no matter whether you simply fail to write it in a consistent state (a sudden assertion in some third-party library) or corrupt it accidentally with a bad write (during cleanup on corrupted RAM).

The failed assertion may be the moment when the program detects that something is wrong. A safe program should stop doing anything else.
 Therefore you should always try to orderly cleanup but do not rely on it
 to actually work at all circumstances (thus backups, commits/save
 points, watchdogs and whatnot).

A safe program must first guarantee that the cleanup itself is harmless, which is not possible when the program is in an invalid state. Imagine sending an almost infinite number of "cleanup" commands to a device that can harm people who are around it.

Ali
Apr 01 2013
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 00:34, Ali Çehreli writes:
 On 04/01/2013 12:44 PM, Dmitry Olshansky wrote:> 01-Apr-2013 20:00, John
 Colvin writes:

  >> Therefore, you're better off not trying to cleanup if program state
  >> could be invalid.
  >
  > Data is corrupted no matter if you just fail to write it in a consistent
  > state (sudden assertion in some third-party library) or corrupt
  > accidentally by bad write (during cleanup on corrupted RAM).

 The failed assertion may be the moment when the program detects that
 something is wrong. A safe program should stop doing anything else.

And that's precisely the interesting moment. It should stop, but the definition of "stop" really depends on many factors. Just pretending that calling abort is a panacea is totally wrong IMO. BTW, what exactly do you mean by a "safe" program?
  > Therefore you should always try to orderly cleanup but do not rely on it
  > to actually work at all circumstances (thus backups, commits/save
  > points, watchdogs and whatnot).

 A safe program must first guarantee that that cleanup is harmless, which
 is not possible when the program is in an invalid state.

There could be a lot of ways to do that even in an "illegal" state. Restoration and conservative reconstruction of a valid state is often possible (like restoring files on a faulty hard drive), and with the help of certain tricks you can get a high estimate that these actions will succeed. Once you have resigned yourself to black and white (valid/invalid), there is indeed nowhere to go.
 Imagine sending
 almost infinite number of "cleanup" commands to a device that can harm
 people who are around it.

These cases are far better served by built-in hardware fail-safe switches and redundant circuitry. Just suddenly stopping the program in control of a spinning blade that is cutting through somebody's tissue is not good enough.
 Ali

-- Dmitry Olshansky
Apr 01 2013
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/01/2013 02:01 PM, Dmitry Olshansky wrote:> 02-Apr-2013 00:34, Ali 
Çehreli writes:

 The failed assertion may be the moment when the program detects that
 something is wrong. A safe program should stop doing anything else.

And that's precisely the interesting moment. It should stop but the definition of "stop" really depends on many factors. Just pretending that calling abort is a panacea is totally wrong IMO. BTW what do you exactly mean by "safe" program?

I meant a program that wants to produce correct results. I was indeed thinking about the Therac-25 that Simen Kjærås mentioned. I agree that there must be hardware fail-safe switches as well, but they could not protect people from every kind of software failure in that example.

Having said that, I can see the counter argument as well: We are in an inconsistent state, so trying to do something about it could be better than not running any cleanup code.

But I also remember that an AssertError may be thrown by an assert() call, telling us that a programmer put it in there explicitly, meaning that the program cannot continue. If there were any chance of recovery, the programmer could have thrown an Exception or remedied the situation right then.

Ali
Apr 01 2013
next sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 14:23, deadalnix writes:
 On Monday, 1 April 2013 at 22:46:49 UTC, Ali Çehreli wrote:

 Not running cleanup code can transform a small issue into a big disaster,
 just as running it can make the problem worse.

 I think that wiring into the language the fact that errors don't run the
 cleanup code is rather dangerous.

 If I had to propose something, it would be to handle errors the same way
 exceptions are handled, but provide a callback that is run before the
 error is thrown, in order to allow for a complete program stop based on
 user logic.

That's exactly what I have in mind, as removing the exception handling is something the user can't easily recreate. On the other hand, "die on first signs of corruption" is as easy as a hook that calls abort before the unwinding of an Error. Time to petition Walter ;)

-- 
Dmitry Olshansky
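A sketch of what such a hook could look like (the names onErrorThrown and notifyErrorThrown are made up for illustration; nothing like this exists in druntime today):

```d
import core.stdc.stdlib : abort;

// Hypothetical API: a user-settable callback that druntime would invoke
// before an Error begins unwinding. Neither name exists in druntime.
alias ErrorHook = void function(Throwable) nothrow;
__gshared ErrorHook onErrorThrown;

// The runtime's throw path would do something like this for Errors:
void notifyErrorThrown(Throwable t) nothrow
{
    if (onErrorThrown !is null)
        onErrorThrown(t);
}

void main()
{
    // "Die on first signs of corruption" becomes an opt-in one-liner:
    onErrorThrown = function(Throwable t) nothrow { abort(); };
}
```

The default (no hook registered) would keep today's behaviour of simply propagating the Error.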
Apr 02 2013
prev sibling parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/02/2013 12:01 PM, Jesse Phillips wrote:> On Monday, 1 April 2013 
at 22:46:49 UTC, Ali Çehreli wrote:
 But I also remember that an AssertError may be thrown by an assert()
 call, telling us that a programmer put it in there explicitly, meaning
 that the program cannot continue. If there was any chance of recovery,
 then the programmer could have thrown an Exception or remedy the
 situation right then.

 Ali

I don't think assert/Error makes any statement on the ability to recover.

Agreed. Error says that the program is in an unknown state, while Exception says that we could not complete our task.
 What it usually means is you need to fix this because I won't
 be checking this condition when you throw on that release flag.

Aside: That's why the release flag should be used only when one is sure that all possible runtime scenarios have been tested. I don't understand how anybody can be sure of that, though. :)
 If you are doing input validation you should be throwing an exception.

Agreed.
 We can still throw exceptions in production, I don't tend to use this,
 but maybe this would be a time to say "invalid state stop." But then how
 do you distinguish it from "fix your program?"

(I am not sure that I understand that comment correctly.)

The benefit of exceptions is that a low-level function throws an exception if it cannot achieve its task. This is very different from incorrect program state. Then a higher-level function (usually all the way up at the user-interaction layer) catches this exception and reports "I could not do it because of such and such." The program state is still good because we have cleaned everything up as a result of stack unwinding.
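A small sketch of that division of labour, using enforce() for the low-level failure (the helper name parsePort is made up for illustration):

```d
import std.exception : enforce;
import std.stdio : writeln;

// Low level: report failure to achieve a task via an Exception.
int parsePort(string s)
{
    import std.conv : to;
    int port = s.to!int;                       // may throw ConvException
    enforce(port > 0 && port < 65536, "port out of range: " ~ s);
    return port;
}

void main()
{
    // High level (user-interaction layer): catch and report.
    try
        writeln("using port ", parsePort("70000"));
    catch (Exception e)
        writeln("could not do it: ", e.msg);
    // Stack unwinding has run all cleanup; the program state is still good.
}
```

Note that neither side ever touches Error: the low level could not do its job, but nothing about the program's logic is compromised.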
 I've mostly enjoyed having temporary files being cleaned up upon some
 range access error which has no effect on my removing files that are no
 longer valid.

The problem is, the runtime cannot know that it will be doing what you really wanted. The incorrect program state may result in deleting the wrong file.

Ali
Apr 02 2013
parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/02/2013 08:16 PM, Jesse Phillips wrote:> On Tuesday, 2 April 2013 
at 19:10:47 UTC, Ali Çehreli wrote:

 I've mostly enjoyed having temporary files being cleaned up

 range access error which has no effect on my removing files

 longer valid.

The problem is, the runtime cannot know that it will be doing what you really wanted. The incorrect program state may result in deleting the wrong file.

See above, the state isn't invalid. The error is thrown which is stating, "hey, buddy, good thing you didn't flip that release switch as I'm about to do something I shouldn't be."

An assert() is for situations that the programmer knows (!) can never exist. If my foo() function called my bar() function with 42, that is an invalid state. That is when assert() failed and threw an Error. The Error is not thrown before entering the invalid state but at the first point where the error has been discovered. In that regard, the program is already in a bad state.
 However D does allow nothrow functions to throw Errors. I wouldn't say
 this would cause the program enter into an invalid state (memory
 corruption) but it would be a bad state (contract violations).

There are some issues with contract programming in D. For example, some people think that whether the pre-conditions of a module are compiled in should be determined by the user of that module, not by the library (e.g. Phobos).

Because that is not the case today, if I write a function, I cannot put the function's pre-conditions in 'in' blocks, because I don't know whether my function is being called as part of the implementation of my module or as an API function. The API function foo() may also be used as part of the implementation of the same module. (Maybe the same pre-condition checks should be repeated in the 'in' block and in the body, as asserts and corresponding enforces.)

For that reason, in general, I think that pre-conditions should by default be implemented as enforce() calls in the function body, not assert() calls in the 'in' blocks.

With that aside, a failed assert() in an 'in' block should still point at an invalid program state.
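A sketch of the two styles side by side (function names made up for illustration; 'body' as written in the D of the day):

```d
import std.exception : enforce;

// Precondition as an 'in' contract: an assert, compiled out with -release,
// and (today) compiled with the library rather than the caller.
double sqrtContract(double x)
in { assert(x >= 0, "negative input"); }
body
{
    import std.math : sqrt;
    return sqrt(x);
}

// Precondition as enforce() in the body: always checked, and throws an
// Exception the caller can recover from -- suitable for API boundaries.
double sqrtEnforce(double x)
{
    enforce(x >= 0, "negative input");
    import std.math : sqrt;
    return sqrt(x);
}

void main()
{
    import std.stdio : writeln;
    writeln(sqrtEnforce(2.25)); // checked at runtime even in release builds
}
```

The first version documents a programmer obligation; the second validates input, which is exactly the Exception-vs-Error split being discussed.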
 Take the RangeError thrown when you pop an empty range. Under what
 scenario would receiving one of these would indicate that my file isn't
 correct for deletion (any more so than say a ConvException from the same
 range).

      auto myFile = "some.tmp";
      scope(exit) remove(myFile);

      // setup code here
      manipulateFileRange(range);

We are in agreement that it would be impossible to prove one way or the other whether removing the file would be the right thing to do or whether it will succeed. The difference in thinking is whether that should be attempted at all when some part of the code has determined that something is wrong with the program.

Ali
Apr 03 2013
next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 04/03/2013 10:01 AM, H. S. Teoh wrote:
 On Wed, Apr 03, 2013 at 09:19:24AM -0700, Ali Çehreli wrote:

 Because the above is not the case today, if I write a function, I
 cannot put the function pre-conditions in 'in' blocks because I don't
 know whether my function is being called as an implementation of my
 module or as an API function. The API function foo() may also be used
 as part of the implementation of the same module. (Maybe the same
 pre-condition checks should be repeated in the 'in' block and in the
 body; as asserts and corresponding enforces.)

This is very bad. It greatly diminishes the value of DbC in D. What are the obstacles preventing us from fixing DMD so that contracts are compiled with user code instead of library code?

The following thread is relevant, but I don't remember whether it touches on issues with dmd: http://forum.dlang.org/thread/kf19eh$14tv$1 digitalmars.com

Ali
Apr 03 2013
prev sibling parent reply Ali Çehreli <acehreli yahoo.com> writes:
On 04/04/2013 08:47 AM, Jesse Phillips wrote:

 On Wednesday, 3 April 2013 at 16:19:25 UTC, Ali Çehreli wrote:
      auto myFile = "some.tmp";
      scope(exit) remove(myFile);

      // setup code here
      manipulateFileRange(range);

We are in agreement that it would be impossible to prove one way or the other whether removing the file would be the right thing to do or whether it will succeed.

All you need is one example where it would remove the wrong file,

$ dmd deneme.d -ofdeneme -I~/deneme/d -O -inline -m32
$ ./deneme

import std.stdio;
import std.string;
import std.array;

void main()
{
    auto myFile = "some.tmp";
    scope(exit) writeln(format("removing %s", myFile));

    writeln("myFile.ptr ", myFile.ptr);

    void manipulateElement(E)(ref E e)
    {
        size_t local;
        // Playing with pointers (BUG HERE)
        *(&local + 10) = 4;
        *(&local - 1) = 100;
        writeln(&local - 1);
        writeln("myFile ", &myFile);
        writeln("e ", e.ptr);
    }

    void manipulateFileRange(R)(R range)
    {
        for (size_t i = 0; i != range.length; ++i) {
            writeln("&i ", &i);
            writeln("i ", i);
            manipulateElement(range[i]);
        }
    }

    manipulateFileRange([ myFile ]);
}

Note that the RangeError below is caused by a bug in the program. Once that happens, we cannot say anything about the state of the program. It may be 99% correct but it is still in an invalid state.

Here is the output of the program (arrow and comment added manually by me):

myFile.ptr 806C0C4
&i FFFCE5DC
i 0
FFFCE5DC
myFile FFFCE608
e 806C0C4
&i FFFCE5DC
i 101
removing some  <-- WRONG FILE! (not "some.tmp")
core.exception.RangeError deneme(125887): Range violation
 I just
 requested that it have higher accuracy than Exception since what you're
 claiming as invalid state is the same invalid state exceptions check for
 (I didn't expect this).

Unfortunately, "exception" is too general a term, and unfortunately both Exception and Error use the same mechanism in D.

A thrown Exception does *not* indicate invalid program state; Error does. A thrown Exception means that some task could not be accomplished. Error is different: It means that an assertion failed. A failed assertion means that the fundamental truths the programmer built the program on have been shattered. As simple as that.

The runtime cannot assess whether the program is 1% or 100% correct. The only sensible thing to do is to stop executing so that no more harm is done.

Again, a failed assert means that the program has gone out of line. It did something wrong. It is in an invalid state.

Ali
Apr 04 2013
next sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 04/04/2013 12:16 PM, Ali Çehreli wrote:

 core.exception.RangeError deneme(125887): Range violation

I have realized something: Maybe some of the confusion here is due to range violation being an Error. I think that it should be an Exception.

The rationale is, some function is told to provide the element at index 100 and there is no such element. The function cannot accomplish that task, so it throws an Exception. (Same story for popFront() on an empty range.)

My earlier comments about invalid program state apply to Error conditions. (Come to think of it, perhaps even more specifically to AssertError.)

Ali
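A minimal sketch of the case in question (with bounds checking on, i.e. without -release):

```d
import core.exception : RangeError;
import std.stdio : writeln;

void main()
{
    int[] a;
    try
    {
        // Today an out-of-range index raises RangeError, an Error,
        // so by the language's rules it should not really be caught.
        auto x = a[0];
        writeln(x);
    }
    catch (RangeError e)
    {
        // Under the proposal this would instead be an Exception:
        // "I cannot give you an element that does not exist."
        writeln("no such element");
    }
}
```

With -release the bounds check disappears entirely, which is another reason the current Error classification is awkward for callers who want to handle the condition.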
Apr 04 2013
prev sibling parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
04-Apr-2013 23:16, Ali Çehreli writes:
  > All you need is one example where it would remove the wrong file,

 $ dmd deneme.d -ofdeneme -I~/deneme/d -O -inline -m32
 $ ./deneme

 import std.stdio;
 import std.string;
 import std.array;

 void main()
 {
      auto myFile = "some.tmp";
      scope(exit) writeln(format("removing %s", myFile));

      writeln("myFile.ptr ", myFile.ptr);

      void manipulateElement(E)(ref E e)
      {
          size_t local;
          // Playing with pointers (BUG HERE)
          *(&local + 10) = 4;
          *(&local - 1) = 100;
          writeln(&local - 1);
          writeln("myFile ", &myFile);
          writeln("e ", e.ptr);
      }

      void manipulateFileRange(R)(R range)
      {
          for (size_t i = 0; i != range.length; ++i) {
              writeln("&i ", &i);
              writeln("i ", i);
              manipulateElement(range[i]);
          }
      }

      manipulateFileRange([ myFile ]);
 }

 Note that RangeError below is caused by a bug in the program.

Obviously, regardless of whether or not the RangeError happened, you have still corrupted memory. From there to the point of eventual abort/recovery, anything could potentially happen. Data loss or corruption can happen no matter what, and even way before the assert triggers.
 Once that
 happens, we cannot say anything about the state of the program. It may
 be 99% correct but it is still in an invalid state.

 Here is the output of the program (arrow and comment are added manually
 by me):

 myFile.ptr 806C0C4
 &i FFFCE5DC
 i 0
 FFFCE5DC
 myFile FFFCE608
 e 806C0C4
 &i FFFCE5DC
 i 101
 removing some  <-- WRONG FILE! (not "some.tmp")
 core.exception.RangeError deneme(125887): Range violation

The neat thing about your example is that it doesn't matter if you choose to unwind or abort right away, or even whether you use asserts at all! The program was compromised and could destroy data and wreak havoc in any one of the possible ways that the OS allows it.

Think again: what you are looking at is a failed or successful exploit of a program (like a buffer overflow overwriting some internal pointer, say a return address). An assert aborting on something fishy won't help you an inch here:

a) A successful exploit will blow its way past any and all high-level safeguards once it gets in control. The only things that are true obstacles to it are anti-stack-corruption measures, some heap protection, ASLR and related techniques. These operate on "the same" lower level.

b) Even an unsuccessful exploit corrupts things a level deeper than the language guarantees or constructs operate. In the end, simply returning from a call could cause a segfault (overwritten return address).

What I want to underline here is that regardless of assertion policy, you can get anything you can imagine by corrupting memory in a certain way. An assertion failure may or may not indicate corruption, but regardless, it happens too late to make any kind of judgment based on it, and in particular about what to do next. Claiming that you protect against memory corruption via an assert that calls abort by default is as silly as it gets.

Bottom line, what I'd suggest is:
a) allow Errors to propagate the usual way, as Exceptions do, with the notion that these are generally fatal and can be thrown in nothrow code;
b) add more options for protection against memory corruption aside from safe D;
c) add a hook to the runtime that allows people to get "abort on Error thrown" behavior.

-- 
Dmitry Olshansky
Apr 05 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/1/2013 2:20 PM, Simen Kjærås wrote:
 I am reminded of Therac-25[1]. Though the situation there was slightly
 different, similar situations could arise from not turning off hardware.

Relying on a program running correctly in order to avoid disaster is a terrible design. Even mathematically proving a program to be correct is in no way, shape, or form sufficient to deal with this.
Apr 01 2013
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 03, 2013 at 03:28:10PM -0700, Ali Çehreli wrote:
 On 04/03/2013 10:01 AM, H. S. Teoh wrote:
 On Wed, Apr 03, 2013 at 09:19:24AM -0700, Ali Çehreli wrote:

 Because the above is not the case today, if I write a function, I
 cannot put the function pre-conditions in 'in' blocks because I
 don't know whether my function is being called as an implementation
 of my module or as an API function. The API function foo() may also
 be used as part of the implementation of the same module. (Maybe
 the same pre-condition checks should be repeated in the 'in' block
 and in the body; as asserts and corresponding enforces.)

This is very bad. It greatly diminishes the value of DbC in D. What are the obstacles preventing us from fixing DMD so that contracts are compiled with user code instead of library code?

The following thread is relevant but I don't remember whether it touches issues with dmd: http://forum.dlang.org/thread/kf19eh$14tv$1 digitalmars.com

Alright. Apparently Jonathan touched on some of the issues in the above thread (quoted below): [...]
 Unfortunately, while that's how it really _should_ work, AFAIK,
 there's no way with D's linking model to make things work that way.
 You can link against functions without any access to their bodies.

 Function pointers make it trivial to use a function without the
 compiler knowing what function you're using (meaning that it couldn't
 insert the contracts at the call point).  Etc.  Etc. The contracts
 would have to be passed around with the functions in a manner which
 made it so that the caller could always insert them if it's being
 compiled with assertions enabled, and that just won't work.

This is not impossible to overcome. One approach is to define contracts as separate (sub)functions that wrap around the real function according to some well-known scheme; say the contract is mangled as mangle(funcname)~"__contract" or something like that. The contract wrapper has exactly the same arguments/return value as the real function, simply forwards them to the real function, and passes the real function's return value back.

Then, when compiling in non-release mode, DMD links all calls/references to the function to the contract wrapper instead, and when compiling for release, these calls/references go directly to the "real" function.

In essence, this code:

    RetType myFunc(Args...)(Args args)
    in { assert(inContract(args)); }
    out(RetType ret) { assert(outContract(ret)); }
    body
    {
        return dotDotDotMagic(args);
    }

    void main()
    {
        auto x = myFunc(1,2,3);
        auto fp = &myFunc;
    }

gets lowered to:

    RetType myFunc__contract(Args...)(Args args)
    {
        assert(inContract(args));
        auto retVal = myFunc(args);
        assert(outContract(retVal));
        return retVal;
    }

    RetType myFunc(Args...)(Args args)
    {
        return dotDotDotMagic(args);
    }

    void main()
    {
        version(release)
        {
            auto x = myFunc(1,2,3);
            auto fp = &myFunc;
        }
        else
        {
            auto x = myFunc__contract(1,2,3);
            auto fp = &myFunc__contract;
        }
    }

The *__contract wrappers are always shipped with the library, so library users can always choose whether to link to the contracted version or not. (Even if the library is compiled in release mode, the contract wrappers are still there; they are just bypassed by internal library calls. They can still be enforced when compiling user code in non-release mode, since the compiler will then route all calls through the wrappers.)

T

-- 
Real Programmers use "cat > a.out".
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 11:23:50 UTC, monarch_dodra wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

A quick comment about your "Error" section. You say: "In general, Errors should not be caught, primarily because they indicate that the program logic is compromised, and that the program may therefore be in an invalid state from which there is no recovery". It is actually much worse than that: errors bypass the entire exception handling mechanism, blasting through code that would handle destructors, and even flying through functions that are nothrow. They don't just indicate a "potential" invalid state, they actually *put* the program in an invalid state, from which there is no recovery. That is the main mechanical difference between an "error" and an "exception", it is not just a philosophical "logic vs runtime".

I had forgotten about that. Personally, I think that is crazy. Errors ought to propagate just like Exceptions, they just shouldn't be a part of your "normal" error-handling mechanism.
 --------

 Under this situation, I'm wondering how the "OutOfMemory" is 
 dealt with (you don't explain). The only logical explanation I 
 can see is:
 - It is not an exception and is not caught by 
 "catch(Exception)".
 - But it is not an error either, so does not corrupt the 
 program state.
 => Goal: It is hard to catch, but you *can* recover from it.
 Is this correct? Is this what we are going for?

That was the idea, yes.
Apr 01 2013
prev sibling next sibling parent "monarch_dodra" <monarchdodra gmail.com> writes:
On Monday, 1 April 2013 at 11:33:57 UTC, Lars T. Kyllingstad 
wrote:
 On Monday, 1 April 2013 at 11:23:50 UTC, monarch_dodra wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

A quick comment about your "Error" section. You say: "In general, Errors should not be caught, primarily because they indicate that the program logic is compromised, and that the program may therefore be in an invalid state from which there is no recovery". It is actually much worse than that: errors bypass the entire exception handling mechanism, blasting through code that would handle destructors, and even flying through functions that are nothrow. They don't just indicate a "potential" invalid state, they actually *put* the program in an invalid state, from which there is no recovery. That is the main mechanical difference between an "error" and an "exception", it is not just a philosophical "logic vs runtime".

I had forgotten about that. Personally, I think that is crazy. Errors ought to propagate just like Exceptions, they just shouldn't be a part of your "normal" error-handling mechanism.

I don't know, I find the approach kind of genius, personally. If it is a logic error and your program is going to die, then why pay for cleanup? You get the radical efficiency of a normal assert, but with the opportunity to die like with an exception.

The nicest part (IMO) is that, thanks to this, you can assert in a nothrow function without violating its nothrow-ness (which we do all over Phobos, and even with built-in arrays).
 --------

 Under this situation, I'm wondering how the "OutOfMemory" is 
 dealt with (you don't explain). The only logical explanation I 
 can see is:
 - It is not an exception and is not caught by 
 "catch(Exception)".
 - But it is not an error either, so does not corrupt the 
 program state.
 => Goal: It is hard to catch, but you *can* recover from it.
 Is this correct? Is this what we are going for?

That was the idea, yes.

I like the idea, but it would be a particularly breaking change for nothrow functions, though:

void foo() nothrow
{
    try
    {
        throw new OutOfMemory();
    }
    catch (Exception /+e+/)
    {}
}

"Error: foo is nothrow yet may throw"

...what...? But I caught the Exception!

The bypass would be giving OutOfMemory the same semantics as an Error, but then it would just be an actual Error...
Apr 01 2013
prev sibling next sibling parent reply "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 11:52:47 UTC, monarch_dodra wrote:
 On Monday, 1 April 2013 at 11:33:57 UTC, Lars T. Kyllingstad 
 wrote:
 On Monday, 1 April 2013 at 11:23:50 UTC, monarch_dodra wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

A quick comment about your "Error" section. You say: "In general, Errors should not be caught, primarily because they indicate that the program logic is compromised, and that the program may therefore be in an invalid state from which there is no recovery". It is actually much worse than that: errors bypass the entire exception handling mechanism, blasting through code that would handle destructors, and even flying through functions that are nothrow. They don't just indicate a "potential" invalid state, they actually *put* the program in an invalid state, from which there is no recovery. That is the main mechanical difference between an "error" and an "exception", it is not just a philosophical "logic vs runtime".

I had forgotten about that. Personally, I think that is crazy. Errors ought to propagate just like Exceptions, they just shouldn't be a part of your "normal" error-handling mechanism.

I don't know, I find the approach kind of genius personally. If it is a logic error, and your program is going to die, then why pay for cleanup? You are getting the radical efficiency of a normal assert, but with the opportunity to die like with an exception.

But if all cleanup code is bypassed, what is the point in using the exception mechanism in the first place? Why not just abort() and be done with it?

I can think of two reasons for throwing an Error rather than aborting directly:
1. You want a kind of "graceful" shutdown, in which destructors *are* called and make their best attempt at cleaning things up.
2. You want to catch it at some point, and perform some manual cleanup.

But if (1) does not happen, can you even hope to do something useful with (2)? Your program is in the worst possible state it can be!
 The nicest part (IMO), is that thanks to this, you can assert 
 in a nothrow function, without violating its nothrow-ness 
 (which we do all over phobos, and even with built-in arrays).

Well, there are two benefits to nothrow:

1. It makes a guarantee to the programmer that a function does not throw, and the programmer consequently does not need to worry about exception handling.
2. The compiler can elide exception handling code altogether, which improves performance.

Personally I think (1) is the most important, and we could maintain this guarantee even if Error and OutOfMemory propagated just like Exception. We'd just redefine 'nothrow' to mean "does not throw Exception". Then, we could introduce a new attribute, e.g. __hard_nothrow, to allow for (2). This would require the programmer to handle Error and OutOfMemory too, and importantly, we could apply it to most C functions.
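To illustrate the current semantics being discussed (a sketch, not part of the proposal): in today's D, `nothrow` only forbids Exceptions, so an Error such as a bounds-check RangeError can still escape a nothrow function:

```d
// Sketch: today's `nothrow` bans Exceptions, not Errors.
int pick(int[] arr, size_t i) nothrow
{
    // Throwing an Exception here would not compile, but an
    // out-of-bounds access still throws a RangeError (an Error),
    // which `nothrow` permits to propagate.
    return arr[i];
}

void main()
{
    auto a = [1, 2, 3];
    assert(pick(a, 1) == 2);
}
```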
 Under this situation, I'm wondering how the "OutOfMemory" is 
 dealt with (you don't explain). The only logical explanation 
 I can see is:
 - It is not an exception and is not caught by 
 "catch(Exception)".
 - But it is not an error either, so does not corrupt the 
 program state.
 => Goal: It is hard to catch, but you *can* recover from it.
 Is this correct? Is this what we are going for?

That was the idea, yes.

I like the idea, but it would be a particularly breaking change for nothrow functions, though:

void foo() nothrow
{
    try {
        throw new OutOfMemory();
    } catch (Exception /+e+/) {}
}

"Error: foo is nothrow yet may throw"

...what ...? But I caught the Exception! The bypass would be giving OutOfMemory the same semantics as an Error, but then, it would just be an actual Error...

I think OutOfMemory should not be restricted by nothrow, and I propose to solve it as described above. Lars
Apr 01 2013
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 03, 2013 at 09:19:24AM -0700, Ali Çehreli wrote:
 On 04/02/2013 08:16 PM, Jesse Phillips wrote:> On Tuesday, 2 April

 However D does allow nothrow functions to throw Errors. I wouldn't
 say this would cause the program to enter an invalid state (memory
 corruption) but it would be a bad state (contract violations).

There are some issues with contract programming in D. For example, some people think that whether the pre-conditions of a module are compiled in should be determined by the user of that module, not the library (e.g. Phobos). Because the above is not the case today, if I write a function, I cannot put the function's pre-conditions in 'in' blocks, because I don't know whether my function is being called as part of the implementation of my module or as an API function. The API function foo() may also be used as part of the implementation of the same module. (Maybe the same pre-condition checks should be repeated in the 'in' block and in the body; as asserts and corresponding enforces.)

This is very bad. It greatly diminishes the value of DbC in D. What are the obstacles preventing us from fixing DMD so that contracts are compiled with user code instead of library code?
 For that reason, in general, I think that pre-conditions should by
 default be implemented as enforce() calls in the function body, not
 assert() calls in the 'in' blocks.

Yes, this is the result of the wrong implementation of putting contracts in the called function rather than the caller. It makes 'in' blocks useless when you don't have ready access to library source code. T -- Computers shouldn't beep through the keyhole.
Apr 03 2013
prev sibling next sibling parent Denis Shelomovskij <verylonglogin.reg gmail.com> writes:
01.04.2013 15:08, Lars T. Kyllingstad writes:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Personally I like the current "Error = die" approach because of the opportunities it gives for `nothrow` in release builds. About `FormatError`: I'd like to make formatting an error, but... there is no `isValidFormat` in Phobos yet. -- Денис В. Шеломовский Denis V. Shelomovskij
Apr 01 2013
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-01 13:08, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

In general I think it looks good. I also think it's really needed. Don't IOException and ProcessException need a "kind" field as well? -- /Jacob Carlborg
Apr 01 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-01 21:22, Lars T. Kyllingstad wrote:

 Maybe.  The ones that have such fields will probably need more Kind enum
 members too.  What's in the DIP are simply the ones that I could think
 of when I wrote it.

I was thinking of what you wrote in the text: "failure to start a process, failure to wait for a process". -- /Jacob Carlborg
Apr 01 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Monday, 1 April 2013 at 12:12:56 UTC, Lars T. Kyllingstad 
wrote:

 But if all cleanup code is bypassed, what is the point in using 
 the exception mechanism in the first place?  Why not just 
 abort() and be done with it?

 I can think of two reasons for throwing an Error rather than 
 aborting directly:
 1. You want a kind of "graceful" shutdown, in which destructors 
 *are* called and make their best attempt at cleaning things up.
 2. You want to catch it at some point, and perform some manual 
 cleanup.

 But if (1) does not happen, can you even hope to do something 
 useful with (2)?  Your program is in the worst possible state 
 it can be!

I'm no expert on these things, but: Any chance of being in an invalid state -> undefined behaviour. Undefined behaviour -> your destructors/cleanup routine could in theory do anything. Therefore, you're better off not trying to clean up if program state could be invalid. Anything that doesn't signal a possible invalid state should be sensibly catchable and run destructors etc.; anything that does should cut through the program like a knife and is catchable at your own risk.
Apr 01 2013
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 01, 2013 at 01:08:15PM +0200, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

I'd prefer "NetworkException" instead of "NetworkingException" (long name with no added advantage). About the use of enums in FilesystemException, NetworkingException, etc.: I understand the rationale for them, but this also makes them closed for extension. Is there anything fundamentally wrong with creating subclasses of these exceptions instead of attempting to cover *all* possible problems in a single enum? I like the use of chaining to attach errno or windows system errors to exceptions. This solves the problem of errno's not being easily mapped to one of the standard exception classes. It's sort of the reverse of what chaining was intended for (another exception being thrown while the first one was in transit), but I haven't actually seen any real use case for the latter, so we might as well use it for the purpose here. The only thing is, this makes library code a bit trickier to write. Maybe Phobos needs to provide some kind of standard (though system-specific) way of mapping errno, windows error codes, etc., into one of the standard exception types, so that this mapping won't have to be duplicated all over the place. Obviously it can't be completely automatic, since some errno's may map to different exceptions depending on context, but *some* amount of mapping would be highly desirable to avoid code duplication. Another nice thing to have (not sure how practical it will be) is to add more information to the exceptions under Exception. To truly free us from the tendency to just invent GetoptException, XMLException, RegexException, etc., we need to consider that sometimes you *do* want to know where the exception came from. For example, you could be calling std.getopt from some generic initialization code that does other stuff too, both of which may throw a ConversionException, say. 
Sometimes you need to distinguish between them (display a command-line syntax help in one case, just display an error in the other case, depending on where the exception came from). One solution is to add a locus field to Exception:

	class Exception : Error {
		...
		/// Where this exception came from
		/// E.g., "std.getopt", "std.xml",
		/// "my.program.init.complexconv", etc..
		string locus;
	}

This way the catching code doesn't have to downcast, guess, or do some ugly non-portable hacking to figure out what to do. This field should probably be automatically filled by Exception's ctor, so that it doesn't require additional burden on the programmer. I'm not 100% sure the module name should be used in this field, but the idea is that it should contain some way of identifying the origin of the exception that can be programmatically identified. T -- Two wrongs don't make a right; but three rights do make a left...
Apr 01 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-02 17:56, Lars T. Kyllingstad wrote:

 In my experience, most of the time, you don't even bother distinguishing
 between the finer categories.  If you can't open a file, well, that's
 that.  Tell the user why and ask them to try another file.  (I realise
 that this is highly arguable, of course.)

I would say that there's a big difference if a file exist or if you don't have permission to access it. Think of the command line, you can easily misspell a filename, or forget to use "sudo". -- /Jacob Carlborg
Apr 02 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-02 22:15, Lars T. Kyllingstad wrote:

 This illustrates my point nicely!  What does the shell do in this case?
 It treats both errors the same:  It prints an error message and returns
 to the command line.  It does not magically try to guess the filename,
 find a way to get you permission, etc.

No, but you do know the difference. It doesn't just say "can't open file <filename>". It will say either "file <filename> doesn't exist" or "don't have permission to access <filename>". It's a huge difference. I know _what_ went wrong with that file, not just that _something_ went wrong. -- /Jacob Carlborg
Apr 03 2013
prev sibling next sibling parent "Jesse Phillips" <Jessekphillips+D gmail.com> writes:
On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

I like the idea; I think some specifics may need to be worked out (already being discussed). You're likely purposely avoiding this, but when the subject of an "Exception Hierarchy" comes up, thoughts of having specific modules to house the exceptions come up. I don't think this is the primary concern, but my position is to keep the exceptions in their std implementation/use.
Apr 01 2013
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
01-Apr-2013 15:08, Lars T. Kyllingstad writes:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Overall neat.

1. Where to define this hierarchy. Captain the obvious says std.exception, but can we fit all of them there?

2. ProcessException is IMHO a system exception. Plus some RTOSes may not have processes in the usual POSIX sense. It needs some thought.

3. I assume that NetworkingException should rather be NetworkException. Not only that, but I truly hope it deals with network-only events (like resolver errors, host unreachable etc.), not with normal I/O exceptions happening on sockets. It could get tricky to separate the two on some OSes.

4. A quiz as an extension of 3 - where would e.g. serial port exceptions fit in this hierarchy? Including parity errors, data overrun etc. Let's think a bit ahead and pick names wisely. Maybe turn NetworkException --> Comm(unication)Exception, I dunno.

5. Continuing the above - a failed call (in-system) of a general purpose IPC library would be a ... SystemException? A NetworkException? Should it matter to the user?

6. I like ParseException. Wondering if it could be generalized a bit - in a sense it's a "bad format"/"malformed" exception. For instance, a corrupt DB file reporting a ParseException (= bad format) is rather wacky.

7. DocParseException ---> TextParseException: there is a notion of a binary "document" (e.g. BSON databases like Mongo DB).

8. For IOException we might consider adding an indication of which file handle the problem happened on, and/or whether it's closed/invalid/"mostly fine".

That's it so far. We'll see if there is more ;) -- Dmitry Olshansky
Apr 01 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
01-Apr-2013 23:46, Lars T. Kyllingstad writes:
 On Monday, 1 April 2013 at 18:40:48 UTC, Dmitry Olshansky wrote:
 01-Apr-2013 15:08, Lars T. Kyllingstad writes:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Overall neat. 1. Where to define this hierarchy. Captain the obvious says std.exception, but can we fit all of them there?

Some of them have to be in core.exception. Personally, I think the rest of the base hierarchy should be in std.exception. In fact, I think the "standard hierarchy" should be defined as "what's in core.exception and std.exception". If we identify cases where modules absolutely *must* extend these classes for extremely specific purposes, this can be done within the module.

Good. We are mostly on the same page here and later.
 2. ProcessException is IMHO a system exception. Plus some RTOSes
 may not have processes in the usual POSIX sense. It needs some
 thought.

Well, I *don't* think ProcessException is a SystemException. :) And if druntime/Phobos is to be used on OSes that don't have processes, there are other adaptations that have to be made that are probably more fundamental.

Okay. I guess all of this goes to an "Embedded D"/"D Lite" kind of spec. One last thing - why separate ThreadException from ProcessException, and should they have some common base class? Just asking to see the rationale behind it, which is currently missing from the DIP.
 4. A quiz as an extension of 3 - where a e.g. serial port exceptions
 would fit in this hierarchy? Including Parity errors, data overrun etc.
 Let's think a bit ahead and pick names wisely.
 Maybe turn NetworkException --> Comm(unication)Exception, I dunno.

Good question, I dunno either. :) I agree we should think about it.
 5. Continuing the above - a failed call (in-system) of a general
 purpose IPC library would be a ... SystemException? A network
 exception? Should it matter to the user?

Note that I am only saying that the standard exceptions should cover *most* error categories. 100% is not feasible, and we shouldn't even try. This may be one of the exceptions (haha) to the rule.

Well, maybe we could just extend it later on. As it's easier to add than to remove, let's defer the hard ones :)
 6. I like ParseException. Wondering if could be generalized a bit - in
 a sense it's a "bad format"/"malformed" exception. For instance
 corrupt DB file reporting ParseException (= bad format) is rather wacky.

I agree, and welcome all suggestions for better names.

 8. For IOExcpetion we might consider adding an indication on which
 file handle the problem happened and/or if it's closed/invalid/"mostly
 fine" that.

Which form do you suggest that such an indicator should take?

That's the trick - I hoped somebody would just say "aha!" and add one :) The internal handle is hard to represent other than as ... some platform-specific integer value. There goes generality... Other than this, there is a potential to stomp on the feet of a higher-level abstraction on top of that handle. That last bit makes me reconsider the idea. While I see some potential use for it, I suspect it's too niche to fit in the general hierarchy.
 Bear in
 mind that it should be general enough to cover all, or at least most,
 kinds of I/O exceptions.

Adding a Kind that states one of:

- out-of-data (read empty file)
- illegalOp (reading closed file, writing read-only file/socket)
- interrupted (operation was canceled by OS, connection forcibly closed, disk ejected etc.)
- hardFault (OS reports hardware failure)
- ... etc.

shouldn't hurt. Need to think this through better, of course, and consult OS manuals again. -- Dmitry Olshansky
Apr 01 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 20:39, H. S. Teoh writes:
 On Tue, Apr 02, 2013 at 05:47:30PM +0200, Lars T. Kyllingstad wrote:
 [...]
 I have thought some more about it, and a basic serial comms error
 should probably be an IOException.  An error in a higher-level serial
 protocol, on the other hand, would be a NetworkException, and then the
 name doesn't suck so much.  CommException may still be better though,
 or maybe ProtocolException.

ProtocolException sounds like a low-level TCP or IP exception. I think NetworkException is still the best name, not overly specific, not overly generic. CommException sounds a bit too vague to me.

Yeah, I've come to the conclusion that NetworkException beats CommException in 90% of cases. The remaining 10% can live with Network being a sane name. -- Dmitry Olshansky
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 12:12:56 UTC, Lars T. Kyllingstad 
wrote:
 I think OutOfMemory should not be restricted by nothrow, and I 
 propose to solve it as described above.

More precisely: In principle, I think OutOfMemory *should* be restricted by nothrow, but it would break too much code, and be far too annoying, to be feasible. :) Lars
Apr 01 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 14:58:42 UTC, Jacob Carlborg wrote:
 On 2013-04-01 13:08, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

In general I think it looks good. I also think it's really needed. Don't IOException and ProcessException need a "kind" field as well?

Maybe. The ones that have such fields will probably need more Kind enum members too. What's in the DIP are simply the ones that I could think of when I wrote it. Lars
Apr 01 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 17:28:11 UTC, H. S. Teoh wrote:
 On Mon, Apr 01, 2013 at 01:08:15PM +0200, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

I'd prefer "NetworkException" instead of "NetworkingException" (long name with no added advantage).

Agreed. I have changed it now.
 About the use of enums in FilesystemException, 
 NetworkingException,
 etc.: I understand the rationale for them, but this also makes 
 them
 closed for extension. Is there anything fundamentally wrong with
 creating subclasses of these exceptions instead of attempting 
 to cover
 *all* possible problems in a single enum?

Phobos/druntime devs will have the opportunity to add enum fields to cover every error category in those libraries, and the users themselves can still extend the classes the "normal" way. As you may have noticed, the first member of each enum is "unknown", which is meant for errors that do not fall into any of the other categories. (There may be a better name for this, like "other", and maybe there should be a separate enum member called "userDefined"?) My problem with subclasses is that they are a rather heavyweight addition to an API. If they bring no substance (such as extra data), I think they are best avoided.
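As a sketch of the enum-based design described here (the class and member names are illustrative, not the DIP's final API):

```d
// Illustrative sketch of the "kind" field idea; names are hypothetical.
class FilesystemException : Exception
{
    enum Kind
    {
        unknown,          // errors not covered by any other member
        fileNotFound,
        permissionDenied,
    }

    Kind kind;

    this(string msg, Kind kind = Kind.unknown)
    {
        super(msg);
        this.kind = kind;
    }
}

void main()
{
    auto e = new FilesystemException("no such file",
                                     FilesystemException.Kind.fileNotFound);
    assert(e.kind == FilesystemException.Kind.fileNotFound);
}
```

The point is that a catch block on the base class can switch on `kind` without the API growing a subclass per error category.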
 I like the use of chaining to attach errno or windows system 
 errors to
 exceptions. This solves the problem of errno's not being easily 
 mapped
 to one of the standard exception classes. It's sort of the 
 reverse of
 what chaining was intended for (another exception being thrown 
 while the
 first one was in transit), but I haven't actually seen any real 
 use case
 for the latter, so we might as well use it for the purpose here.

 The only thing is, this makes library code a bit trickier to 
 write.
 Maybe Phobos needs to provide some kind of standard (though
 system-specific) way of mapping errno, windows error codes, 
 etc., into
 one of the standard exception types, so that this mapping won't 
 have to
 be duplicated all over the place. Obviously it can't be 
 completely
 automatic, since some errno's may map to different exceptions 
 depending
 on context, but *some* amount of mapping would be highly 
 desirable to
 avoid code duplication.

This is a good idea.
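Such a mapping helper might look roughly like this (a sketch only; the exception names and the helper itself are hypothetical, not an existing Phobos API):

```d
// Hypothetical sketch of an errno-to-exception mapping helper.
import core.stdc.errno : EACCES, ENOENT;

class IOException : Exception
{
    this(string msg) { super(msg); }
}

class FileNotFoundException : IOException
{
    this(string msg) { super(msg); }
}

class PermissionDeniedException : IOException
{
    this(string msg) { super(msg); }
}

// Map an errno value to one of the standard exception types.
// Context-dependent errno values would still need handling at the call site.
IOException exceptionFromErrno(int err, string msg)
{
    switch (err)
    {
        case ENOENT: return new FileNotFoundException(msg);
        case EACCES: return new PermissionDeniedException(msg);
        default:     return new IOException(msg);
    }
}
```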
 Another nice thing to have (not sure how practical it will be) 
 is to add
 more information to the exceptions under Exception. To truly 
 free us
 from the tendency to just invent GetoptException, XMLException,
 RegexException, etc., we need to consider that sometimes you 
 *do* want
 to know where the exception came from. For example, you could 
 be calling
 std.getopt from some generic initialization code that does 
 other stuff
 too, both of which may throw a ConversionException, say. 
 Sometimes you
 need to distinguish between them (display a command-line syntax 
 help in
 one case, just display an error in the other case, depending on 
 where
 the exception came from). One solution is to add a locus field 
 to
 Exception:

 	class Exception : Error {
 		...
 		/// Where this exception came from
 		/// E.g., "std.getopt", "std.xml",
 		/// "my.program.init.complexconv", etc..
 		string locus;
 	}

 This way the catching code doesn't have to downcast, guess, or 
 do some
 ugly non-portable hacking to figure out what to do.

 This field should probably be automatically filled by 
 Exception's ctor,
 so that it doesn't require additional burden on the programmer.

 I'm not 100% sure the module name should be used in this field, 
 but the
 idea is that it should contain some way of identifying the 
 origin of the
 exception that can be programmatically identified.

My first thought was: "Isn't this what the stack trace is for?", but then again, that's rather bothersome to parse programmatically. I'm not completely sold on the idea of tagging exceptions with their modules of origin, but I don't have any good alternatives, either. Lars
Apr 01 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 18:40:48 UTC, Dmitry Olshansky wrote:
 01-Apr-2013 15:08, Lars T. Kyllingstad writes:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Overall neat. 1. Where to define this hierarchy. Captain the obvious says std.exception, but can we fit all of them there?

Some of them have to be in core.exception. Personally, I think the rest of the base hierarchy should be in std.exception. In fact, I think the "standard hierarchy" should be defined as "what's in core.exception and std.exception". If we identify cases where modules absolutely *must* extend these classes for extremely specific purposes, this can be done within the module.
 2. ProcessException is IMHO a system exception. Plus some RTOSes
 may not have processes in the usual POSIX sense.
 It needs some thought.

Well, I *don't* think ProcessException is a SystemException. :) And if druntime/Phobos is to be used on OSes that don't have processes, there are other adaptations that have to be made that are probably more fundamental.
 3. I assume that NetworkingException should rather be 
 NetworkException. Not only that but I truly hope it deals with 
 network-only events (like resolver errors, host unreachable 
 etc.) not with normal I/O Exceptions happening on sockets. It 
 could get tricky to separate the two on some OSes.

I have changed the name. I have the same view of the NetworkException/IOException separation as you.
 4. A quiz as an extension of 3 - where a e.g. serial port 
 exceptions would fit in this hierarchy? Including Parity 
 errors, data overrun etc.
 Let's think a bit ahead and pick names wisely.
 Maybe turn NetworkException --> Comm(unication)Exception, I 
 dunno.

Good question, I dunno either. :) I agree we should think about it.
 5. Continuing the above - a failed call (in-system) of a 
 general purpose IPC library would be a ... SystemException? A 
 network exception? Should it matter to the user?

Note that I am only saying that the standard exceptions should cover *most* error categories. 100% is not feasible, and we shouldn't even try. This may be one of the exceptions (haha) to the rule.
 6. I like ParseException. Wondering if could be generalized a 
 bit - in a sense it's a "bad format"/"malformed" exception. For 
 instance corrupt DB file reporting ParseException (= bad 
 format) is rather wacky.

I agree, and welcome all suggestions for better names.
 7. DocParseException ---> TextParseException: there is a notion 
 of a binary "document" (e.g. BSON databases like Mongo DB).

Agreed.
 8. For IOException we might consider adding an indication on 
 which file handle the problem happened and/or if it's 
 closed/invalid/"mostly fine" that.

Which form do you suggest that such an indicator should take? Bear in mind that it should be general enough to cover all, or at least most, kinds of I/O exceptions. Lars
Apr 01 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 17:42:07 UTC, Jesse Phillips wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

I like the idea; I think some specifics may need to be worked out (already being discussed). You're likely purposely avoiding this, but when the subject of an "Exception Hierarchy" comes up, thoughts of having specific modules to house the exceptions come up. I don't think this is the primary concern, but my position is to keep the exceptions in their std implementation/use.

You're right, I was purposely avoiding it in the DIP. :) But I have given the matter some thought; see my answer to Dmitry. Lars
Apr 01 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, April 01, 2013 21:17:27 Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 12:12:56 UTC, Lars T. Kyllingstad
 
 wrote:
 I think OutOfMemory should not be restricted by nothrow, and I
 propose to solve it as described above.

More precisely: In principle, I think OutOfMemory *should* be restricted by nothrow, but it would break too much code, and be far too annoying, to be feasible. :)

There's really no point in making it so that OutOfMemory prevents nothrow, as it's pretty much assumed by the runtime that OutOfMemory means that you're toast (hence why it's not an Exception). If anything, making it prevent nothrow would be a disaster, rendering nothrow nigh on unusable in many situations where it should work just fine. The whole point of nothrow is to indicate that no Exceptions can be thrown (and therefore anything that's required by exceptions doesn't have to be done). Errors don't fall in that category at all, especially in light of the fact that there's no guarantee that any clean-up will be done when a non-Exception is thrown. - Jonathan M Davis
Apr 01 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, April 01, 2013 13:23:49 monarch_dodra wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad
 
 wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

A quick comment about your "Error" section. You say: "In general, Errors should not be caught, primarily because they indicate that the program logic is compromised, and that the program may therefore be in an invalid state from which there is no recovery". It is actually much worse than that: errors bypass the entire exception handling mechanism, blasting through code that would handle destructors, and even flying through functions that are nothrow. They don't just indicate a "potential" invalid state, they actually *put* the program in an invalid state, from which there is no recovery. That is the main mechanical difference between an "error" and an "exception"; it is not just a philosophical "logic vs runtime".

--------

Under this situation, I'm wondering how the "OutOfMemory" is dealt with (you don't explain).

It's not an Exception, so no clean-up is guaranteed. _Any_ Throwable which is not an Exception risks no clean-up being done. Error really has nothing to do with it beyond the fact that it's not an Exception. Now, the reality of the matter is that the current implementation _does_ generally do clean-up when non-Exceptions are thrown (the major exception being code dealing with nothrow functions IIRC), but according to Walter, there's no guarantee that that's the case (and I'm not sure that he's particularly happy that any clean-up happens at all for non-Exceptions like it does right now). - Jonathan M Davis
Apr 01 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

About out-of-memory errors
--------------------------

These are considered non-recoverable exceptions for the following reasons:

1. I've almost never seen a program that could successfully recover from out of memory errors, even ones that purport to.
2. Much effort is expended trying to make them recoverable, yet it doesn't work, primarily because the recovery paths are never tested.
3. There are an awful lot of instances where memory is allocated - almost as many as allocating stack space. (Running out of stack space doesn't even throw an Error exception, the program is just unceremoniously aborted.)
4. Making it recoverable means that pure functions now have side effects. Function purity, rather than a major feature of D, would become a little-used sideshow of marginal utility.
5. Although a bad practice, destructors in the unwinding process can also allocate memory, causing double-fault issues.
6. Memory allocation happens a lot. This means that very few function hierarchies could be marked 'nothrow'. This throws a lot of valuable optimizations under the bus.
7. With the multiple gigs of memory available these days, if your program runs out of memory, it's a good sign there is something seriously wrong with it (such as a persistent memory leak).
8. If you must recover from specific out of memory possibilities, you can still use malloc() or some other allocation scheme that does not rely on the GC.
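A sketch of point 8: handling a specific allocation failure via malloc() instead of relying on a catchable OutOfMemory (illustrative only):

```d
// Sketch: bypass the GC for the rare allocation that must be recoverable.
import core.stdc.stdlib : malloc, free;

void main()
{
    auto p = cast(ubyte*) malloc(1024);
    if (p is null)
    {
        // Recover here instead of catching OutOfMemory.
        return;
    }
    scope (exit) free(p);
    p[0] = 42; // ... use the buffer ...
}
```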
Apr 01 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/2/2013 3:40 AM, deadalnix wrote:
 On Monday, 1 April 2013 at 20:58:00 UTC, Walter Bright wrote:
 On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 5. Although a bad practice, destructors in the unwinding process can also
 allocate memory, causing double-fault issues.


In C++, such a case aborts the program, as the runtime can't handle it. In general, though, it's a hard problem to reason about.
 6. Memory allocation happens a lot. This means that very few function
 hierarchies could be marked 'nothrow'. This throws a lot of valuable
 optimizations under the bus.

How much gain do you have from them in general? Actual data is always better when discussing optimization.

For Win32 in particular, getting rid of EH frames results in a significant shortening of the code generated (try it and see). In general, a finally block defeats lots of flow analysis optimizations, and defeats enregistering of variables.
 7. With the multiple gigs of memory available these days, if your program runs
 out of memory, it's a good sign there is something seriously wrong with it
 (such as a persistent memory leak).

DMD regularly does.

I know DMD does, and I regard that as a problem with DMD - one that cannot be solved by catching out-of-memory exceptions.
Apr 02 2013
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

As for why finally blocks are not executed for Error exceptions, the idea is to minimize cases where the original error would now cause an abort during the unwinding process. Catching an Error is useful for things like:

1. throw the whole plugin away and restart it
2. produce a log of what happened before aborting
3. engage the backup before aborting
4. alert the operator that the system has failed and why before aborting

Unwinding is not necessary for these, and can even get in the way by causing other failures and aborting the program by attempting cleanups when the code is in an invalid state.
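A sketch of use case 2 (logging before aborting); the failing function here is invented for illustration:

```d
// Sketch: catch an Error only to log it, then abort without further cleanup.
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

void runPlugin()
{
    assert(false, "plugin state corrupted"); // throws AssertError (non-release builds)
}

void main()
{
    try
        runPlugin();
    catch (Error e)
    {
        // Produce a log of what happened before aborting.
        fprintf(stderr, "fatal: %.*s\n", cast(int) e.msg.length, e.msg.ptr);
        abort();
    }
}
```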
Apr 01 2013
parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/2/2013 4:42 AM, Don wrote:
 I think that view is reasonable, but then I don't understand the reason to have
 Error in the first place! Why not just call some kind of abort() function, and
 provide the ability to hook into it?

It would seem to be a fair amount of bookkeeping code to make a hook that would work with many different pieces of code dealing with Errors in different ways.
Apr 02 2013
prev sibling next sibling parent Simen Kjærås <simen.kjaras gmail.com> writes:
On Mon, 01 Apr 2013 22:34:39 +0200, Ali Çehreli <acehreli yahoo.com> wrote:

 A safe program must first guarantee that that cleanup is harmless, which
 is not possible when the program is in an invalid state. Imagine sending
 an almost infinite number of "cleanup" commands to a device that can harm
 people who are around it.

Of course. But the opposite is also the case - failure to turn off dangerous hardware, or leaving hardware in a dangerous state when the program fails, is just as bad as putting it in an unknown state. The decision must be made on a case-by-case basis.

I am reminded of Therac-25[1]. Though the situation there was slightly different, similar situations could arise from not turning off hardware.

[1]: http://en.wikipedia.org/wiki/Therac-25

-- 
Simen
Apr 01 2013
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, April 01, 2013 21:33:22 Lars T. Kyllingstad wrote:
 My problem with subclasses is that they are a rather heavyweight
 addition to an API. If they bring no substance (such as extra
 data), I think they are best avoided.

Except that they're extremely valuable when you need to catch them. Being able to do something like

try
{
    ...
}
catch(FileNotFoundException fnfe)
{
    //handling specific to missing files
}
catch(PermissionsDeniedException pde)
{
    //handling specific to lack of permissions
}
catch(FileException fe)
{
    //generic file handling error
}
catch(Exception e)
{
    //generic error handling
}

can be very valuable. In general, I'd strongly suggest having subclasses for the various "Kind"s in addition to the kind field. That way, you have the specific exception types if you want to have separate catch blocks for different error types, and you have the kind field if you just want to catch the base exception. If anything, exceptions are exactly the place where you want to have derived classes with next to nothing in them, precisely because it's the type that the catch mechanism uses to distinguish them.

- Jonathan M Davis
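The hybrid approach suggested here can be sketched as follows; the class and member names are made up for illustration and are not the DIP's:

```d
// Base type carries a coarse "kind"; derived types allow specific catches.
class FileException : Exception
{
    enum Kind { generic, notFound, permissionDenied }
    Kind kind;

    this(string msg, Kind kind = Kind.generic,
         string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, file, line);
        this.kind = kind;
    }
}

class FileNotFoundException : FileException
{
    this(string msg, string file = __FILE__, size_t line = __LINE__)
    {
        super(msg, Kind.notFound, file, line);
    }
}
```

A caller can then catch FileNotFoundException directly, or catch FileException and inspect kind when one handler covers several cases.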
Apr 01 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-02 01:52, Steven Schveighoffer wrote:

 I admit I haven't read the DIP yet, but I was responding to the general
 debate.  I agree with Lars that exceptions that add no data are hard to
 justify.

 But I also hate having to duplicate catch blocks.  The issue is that
 class hierarchies are almost never expressive enough.

 contrived example:

 class MyException : Exception {}
 class MySpecificException1 : MyException {}
 class MySpecificException2 : MyException {}
 class MySpecificException3 : MyException {}

 try
 {
     foo(); // can throw exception 1, 2, or 3 above
 }
 catch(MySpecificException1 ex)
 {
     // code block a
 }
 catch(MySpecificException2 ex)
 {
     // code block b
 }

 What if code block a and b are identical?  What if the code is long and
 complex?  Sure, I can put it in a function, but this seems superfluous
 and verbose -- exceptions are supposed to SIMPLIFY error handling, not
 make it more complex or awkward.  Basically, catching exceptions is like
 having an if statement which has no boolean operators.

 Even if I wanted to write one block, and just catch MyException, then
 check the type (and this isn't pretty either), it's not exactly what I
 want -- I will still catch Exception3.  If this is the case, I'd rather
 just put an enum in MyException and things will be easier to read and
 write.

The obvious solution to that would be to be able to specify multiple exception types for a single catch block:

catch (MySpecificException1, MySpecificException2 ex)
{
}

-- 
/Jacob Carlborg
Apr 01 2013
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-02 15:02, H. S. Teoh wrote:

 But what type will ex be, inside the catch block?

I was thinking the closest common base type of the two exception types. It's basically what one would do if one called a common function from multiple catch blocks.

-- 
/Jacob Carlborg
Apr 02 2013
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-02 15:44, Steven Schveighoffer wrote:

 Yes, this could help.  But it's still not great.  One must still store a
 "type identifier" in the ex, or have to deal with casting to figure out
 what type ex is.

You would have the same problem if you used a common function for handling multiple exceptions.
 It also promotes creating a new type for every single catchable situation.

 Consider that we could create one exception type that contains an
 'errno' member, and if we have the ability to run extra checks for
 catching you could do:

 catch(ErrnoException ex) if (ex.errno == EBADF || ex.errno == EBADPARAM)
 {
     ...
 }

Is that so much better than:

catch (ErrnoException ex)
{
    if (ex.errno == EBADF || ex.errno == EBADPARAM)
    {
        /* handle exception */
    }
    else
        throw ex;
}
 But if we must do it with types, we need:

 class ErrnoException(uint e) : Exception
 {
     enum errno = e;
 }

 or worse, to make things easier to deal with we have:

 class ErrnoException : Exception
 {
     int errno;
     this(int errno) { this.errno = errno; }
 }

 class ErrnoExceptionT(uint e) : ErrnoException
 {
     this() { super(e); }
 }

 which would be easier to deal with, but damn what a waste!  Either way,
 every time you catch another errno exception, we are talking about
 instantiating another type.

Does that matter? It still needs to create a new instance for every exception thrown. Or are you planning on changing the "errno" field and rethrowing the exception?

-- 
/Jacob Carlborg
Apr 02 2013
parent reply Jacob Carlborg <doob me.com> writes:
On 2013-04-02 19:02, Steven Schveighoffer wrote:

 Right, but what I see here is that the language uses one set of criteria
 to determine whether it should catch, but it's difficult to use that
 same criteria in order to process the exception.  It's not easy to
 switch on a class type, in fact it's downright ugly (maybe we need to
 come up with a way to do that in normal code too).

That's kind of breaking the whole point of OO and virtual functions: you should not need to know the exact type.
 Yes.  I won't forget to re-throw the exception.  Plus, it seems that you
 are saying "catch this, but if it's also that, then *really* catch it".
 I think the catch is a one-shot deal, and should be the final
 disposition of the exception, you should rarely have to re-throw.
 Re-throwing has it's own problems too, consider this possibility:

 catch(ErrnoException ex) if(ex.errno == EBADF || ex.errno == EBADPARAM)
 {
     // handle these specifically
 }
 catch(Exception ex)
 {
     // handle all other exceptions
 }

 I think you would have to have nested try/catch statements to do that
 without something like this.

I'm just trying to exhaust all existing language constructs first, before adding new ones.
 I mean it's a waste of code space and template bloat.  Not a waste to
 create the exception.

I see. -- /Jacob Carlborg
Apr 02 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-03 05:14, Steven Schveighoffer wrote:

 The example dictates we determine why the exception is thrown, yet that
 we can't catch the exact type, because we would have to duplicate the
 code block.

 So we somehow have to catch the base type and then manually verify it's
 one of the types we want, and rethrow otherwise.  If there is a better
 idea, I'd love to hear it.

My objection was mostly to "maybe we need to come up with a way to do that in normal code too". Exceptions are kind of weird in that you break what OO is all about, not having to know the exact type. You may not need that with exceptions either, depending on what you do with them. -- /Jacob Carlborg
Apr 02 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 01 Apr 2013 18:26:22 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, April 01, 2013 21:33:22 Lars T. Kyllingstad wrote:
 My problem with subclasses is that they are a rather heavyweight
 addition to an API. If they bring no substance (such as extra
 data), I think they are best avoided.

 Except that they're extremely valuable when you need to catch them. Being able to do something like

 try
 {
     ...
 }
 catch(FileNotFoundException fnfe)
 {
     //handling specific to missing files
 }
 catch(PermissionsDeniedException pde)
 {
     //handling specific to lack of permissions
 }
 catch(FileException fe)
 {
     //generic file handling error
 }
 catch(Exception e)
 {
     //generic error handling
 }

 can be very valuable. In general, I'd strongly suggest having subclasses for the various "Kind"s in addition to the kind field. That way, you have the specific exception types if you want to have separate catch blocks for different error types, and you have the kind field if you just want to catch the base exception. If anything, exceptions are exactly the place where you want to have derived classes with next to nothing in them, precisely because it's the type that the catch mechanism uses to distinguish them.

In general, this is not enough.  Imagine having an exception type for each errno number.  Someone may want that!

Note that there are two categories of code dealing with thrown exceptions:

1. whether to catch
2. what to do

Right now, we have the super-basic Java/C++ model of matching the type for item 1.  D could be much better than that:

catch(SystemException e) if(e.errno == EBADF)
{
    ...
}

For item 2, once you have the caught exception, you have mechanisms to deal with the various fields of the exception.  So even without improvements to #1, you can rethrow the exception if it's not what you wanted.  It's just that the code isn't cleaner:

catch(SystemException e)
{
    if(e.errno != EBADF)
        throw e;
}

-Steve
Apr 01 2013
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, April 01, 2013 13:08:15 Lars T. Kyllingstad wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

The basic idea is good, but of course, some details need to be sorted out. As I explained in another response, I really think that we should have derived exceptions for many of the "kind"s so that they can be explicitly caught. And of course, in some cases, extra information can be put into the subclasses (e.g. UTFException contains extra data specific to it), so having derived exceptions is very valuable. Having the kinds is great, but I don't think that that should preclude having derived classes which can be explicitly caught, thereby allowing users to choose whether they want to catch the base type (and possibly use the kind field, possibly not) or to catch the derived types when they want error handling specific to the errors that those types represent. Using only kind fields rather than derived exception types makes it harder to use try-catch blocks to separate out error handling code like they were designed to do. Another concern I have is InvalidArgumentError. There _are_ cases where it makes sense for invalid arguments to be an error, but there are also plenty of cases where it should be an exception (TimeException is frequently used in that way), so we may or may not want an InvalidArgumentException, but if you do that, you run the risk of making it too easy to confuse the two, thereby causing nasty bugs. And most of the cases where InvalidArgumentError makes sense could simply be an assertion in an in contract, so I don't know that it's really needed or even particularly useful. In general, I think that having a variety of Exception types is valuable, because you catch exceptions based on their type, but with Errors, you're really not supposed to catch them, so having different Error types is of questionable value. That doesn't mean that we shouldn't ever do it, but they need a very good reason to exist given the relative lack of value that they add. Also, if you're suggesting that these be _all_ of the exception types in Phobos, I don't agree. 
I think that there's definite value in having specific exception types for specific sections of code (e.g. TimeException for time-related code or CSVException in std.csv). It's just that they should be put in a proper place in the hierarchy so that users of the library can choose to catch either the base class or the derived class depending on how specific their error handling needs to be and on whatever else their code is doing. We _do_ want to move away from simply declaring module-specific exception types, but sometimes modules _should_ have specific exception types. The focus needs to be on creating a hierarchy that aids in error handling, so what exceptions we have should be based on what types of things it makes sense to catch in order to handle those errors specifically rather than them being treated as a general error, or even a general error of a specific category.

Having a solid hierarchy is great and very much needed, but I fear that your DIP is too focused on getting rid of exception types rather than shifting them into their proper place in the exception hierarchy. In some cases, that _does_ mean getting rid of exception types, but I think that on the whole, it's more of a case of creating new base classes for existing exceptions so that we have key base classes in the hierarchy. The DIP focuses on those base classes but seems to want to get rid of the rest, and I think that that's a mistake.

One more thing that I would point out is that your definition of DocParseException won't fly. file and line already exist in Exception with different meanings, so you'd need different names for them in DocParseException.

- Jonathan M Davis
Apr 01 2013
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/2/2013 1:00 PM, Lars T. Kyllingstad wrote:
 I definitely don't think we need an IllegalArgumentException. IMO, passing
 illegal arguments is 1) a simple programming error, in which case it should be
 an Error, or 2) something the programmer can not avoid, in which case it
 requires a better description of why things went wrong than just "illegal
 argument".  "File not found", for example.

 I didn't really consider contracts when I wrote the DIP, and of course there
 will always be the problem of "should this be a contract or a normal input
 check?"  The problem with contracts, though, is that they go away in release
 mode, which is certainly not safe if that is your error handling mechanism.

A bit of philosophy here: Contracts are not there to validate user input. They are only there to check for logic bugs in the program itself. It's a very clear distinction, and should not be a problem. To reiterate, if a contract fails, that is a BUG in the program, and the program is then considered to be in an undefined state and is not recoverable. CONTRACTS ARE NOT AN ERROR HANDLING MECHANISM.
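The distinction can be sketched with assert (a contract-style bug check, compiled out with -release) versus std.exception.enforce (input validation that always runs). Both functions below are made-up examples:

```d
import std.conv : to;
import std.exception : enforce;

// Logic-bug check: if this assert fires, the *caller* is broken.
// The check vanishes under -release, like contracts do.
int daysInMonth(int month)
{
    assert(month >= 1 && month <= 12, "bug: month out of range");
    immutable int[12] days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31];
    return days[month - 1];
}

// Input validation: user data can legitimately be bad, so this check
// stays in release builds and throws a catchable Exception.
int parseMonth(string userInput)
{
    immutable m = userInput.to!int;
    enforce(m >= 1 && m <= 12, "month out of range: " ~ userInput);
    return m;
}
```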
Apr 03 2013
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 4/2/2013 10:42 PM, Lars T. Kyllingstad wrote:
 The -release switch practically screams "use me in
 production code", but what's the point of bounds checking if it is only ever
 used while developers are testing their code?

For one thing, you can turn it on and off and actually measure how much the extra checks are costing. For another, it's up to the programmer how he wants to play it.
Apr 03 2013
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2013-04-03 16:33, Steven Schveighoffer wrote:

 I wish there was a way to say "this data is unchecked" or "this data is
 checked and certified to be correct" when you call a function.  That way
 you could run the in contracts on user-specified data, even with asserts
 turned off, and avoid the checks in release code when the data has
 already proven valid.

Scott Meyers had a good talk about this: http://www.youtube.com/watch?v=Jfu9Kc1D-gQ -- /Jacob Carlborg
Apr 03 2013
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 04/03/2013 07:42 AM, Lars T. Kyllingstad wrote:
 ...

 I personally think that, as a general rule, Errors should stay in
 production code.  I thought we had already separated -noboundscheck from
 -release, but I just tested now and that doesn't seem to be the case:
 -release still implies -noboundscheck.
 ...

It is more subtle than that. -release disables bounds checks in system code. -noboundscheck disables bounds checks in safe code.
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Monday, April 01, 2013 19:02:52 Steven Schveighoffer wrote:
 On Mon, 01 Apr 2013 18:26:22 -0400, Jonathan M Davis <jmdavisProg gmx.com>
 
 wrote:
 On Monday, April 01, 2013 21:33:22 Lars T. Kyllingstad wrote:
 My problem with subclasses is that they are a rather heavyweight
 addition to an API. If they bring no substance (such as extra
 data), I think they are best avoided.

 Except that they're extremely valuable when you need to catch them. Being able to do something like

 try
 {
     ...
 }
 catch(FileNotFoundException fnfe)
 {
     //handling specific to missing files
 }
 catch(PermissionsDeniedException pde)
 {
     //handling specific to lack of permissions
 }
 catch(FileException fe)
 {
     //generic file handling error
 }
 catch(Exception e)
 {
     //generic error handling
 }

 can be very valuable. In general, I'd strongly suggest having subclasses for the various "Kind"s in addition to the kind field. That way, you have the specific exception types if you want to have separate catch blocks for different error types, and you have the kind field if you just want to catch the base exception. If anything, exceptions are exactly the place where you want to have derived classes with next to nothing in them, precisely because it's the type that the catch mechanism uses to distinguish them.

In general, this is not enough. Imagine having an exception type for each errno number. Someone may want that!

Obviously, there are limits. You don't want exceptions for absolutely every possible error condition under the sun, but a lot of errnos are quite rare and likely wouldn't be caught explicitly very often anyway. And something like FileNotFoundException _would_ be caught and handled differently from other file exceptions often enough to merit its own exception IMHO. What's being presented in this DIP is very sparse, and at least some portion of the kinds should be represented by derived types in addition to the kinds.
 Note that there are two categories of code dealing with thrown exceptions:
 
 1. whether to catch
 2. what to do
 
 Right now, we have the super-basic java/c++ model of matching the type for
 item 1. D could be much better than that:
 
 catch(SystemException e) if(e.errno == EBADF)
 {
 ...
 }
 
 For item 2, once you have the caught exception, you have mechanisms to
 deal with the various fields of the exception. So even without
 improvements to #1, you can rethrow the exception if it's not what you
 wanted. Just the code isn't cleaner:
 
 catch(SystemException e)
 {
 if(e.errno != EBADF)
 throw e;
 }

Adding new features to the language would change things a bit, but without that, having specific exception types is generally the way to go. Otherwise, you get stuck doing things like putting switch statements in your catch blocks when we already have a perfectly good catch mechanism for separating out error types by the type of the exception being caught.

- Jonathan M Davis
Apr 01 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Apr 01, 2013 at 03:25:48PM -0700, Walter Bright wrote:
 On 4/1/2013 2:20 PM, Simen Kjærås wrote:
I am reminded of Therac-25[1]. though the situation there was
slightly different, similar situations could arise from not turning
off hardware.

Relying on a program running correctly in order to avoid disaster is a terrible design. Even mathematically proving a program to be correct is in no way, shape, or form sufficient to deal with this.

"Beware of bugs in the above code; I have only proved it correct, not tried it." -- Donald Knuth T -- Кто везде - тот нигде.
Apr 01 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Mon, 01 Apr 2013 19:19:31 -0400, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, April 01, 2013 19:02:52 Steven Schveighoffer wrote:

 In general, this is not enough. Imagine having an exception type for  
 each
 errno number. Someone may want that!

Obviously, there are limits. You don't want exceptions for absolutely every possible error condition under the sun, but a lot of errnos are quite rare and likely wouldn't be caught explicitly very often anyway. And something like FileNotFoundException _would_ be caught and handled differently from other file exceptions often enough to merit its own exception IMHO. What's being presented in this DIP is very sparse, and at least some portion of the kinds should be represented by derived types in addition to the kinds.

I admit I haven't read the DIP yet, but I was responding to the general debate.  I agree with Lars that exceptions that add no data are hard to justify.

But I also hate having to duplicate catch blocks.  The issue is that class hierarchies are almost never expressive enough.

contrived example:

class MyException : Exception {}
class MySpecificException1 : MyException {}
class MySpecificException2 : MyException {}
class MySpecificException3 : MyException {}

try
{
    foo(); // can throw exception 1, 2, or 3 above
}
catch(MySpecificException1 ex)
{
    // code block a
}
catch(MySpecificException2 ex)
{
    // code block b
}

What if code block a and b are identical?  What if the code is long and complex?  Sure, I can put it in a function, but this seems superfluous and verbose -- exceptions are supposed to SIMPLIFY error handling, not make it more complex or awkward.  Basically, catching exceptions is like having an if statement which has no boolean operators.

Even if I wanted to write one block, and just catch MyException, then check the type (and this isn't pretty either), it's not exactly what I want -- I will still catch Exception3.  If this is the case, I'd rather just put an enum in MyException and things will be easier to read and write.

That being said, this is the mechanism we have, and the standard library shouldn't fight that.  I will have to read the DIP before commenting further on that.

-Steve
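For completeness, the "put it in a function" workaround mentioned above reads something like this. It is a sketch reusing the contrived types, with the constructors D requires added, plus a made-up counter so the effect is observable:

```d
class MyException : Exception
{
    this(string msg) { super(msg); }
}
class MySpecificException1 : MyException
{
    this(string msg) { super(msg); }
}
class MySpecificException2 : MyException
{
    this(string msg) { super(msg); }
}

// Count handled exceptions, purely so this sketch has a visible effect.
int handled;

// The shared code block lives in one function...
void handleSpecific(MyException ex)
{
    ++handled; // common recovery code for both specific types
}

void callFoo(void delegate() foo)
{
    try
    {
        foo();
    }
    // ...but each type still needs its own (now trivial) catch block.
    catch (MySpecificException1 ex) { handleSpecific(ex); }
    catch (MySpecificException2 ex) { handleSpecific(ex); }
}
```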
Apr 01 2013
prev sibling next sibling parent "Chris Nicholson-Sauls" <ibisbasenji gmail.com> writes:
On Monday, 1 April 2013 at 23:52:52 UTC, Steven Schveighoffer 
wrote:
 contrived example:

 class MyException : Exception {}
 class MySpecificException1 : MyException {}
 class MySpecificException2 : MyException {}
 class MySpecificException3 : MyException {}

 try
 {
    foo(); // can throw exception 1, 2, or 3 above
 }
 catch(MySpecificException1 ex)
 {
    // code block a
 }
 catch(MySpecificException2 ex)
 {
    // code block b
 }

 What if code block a and b are identical?

I was thinking about this too. And the most obvious answer in D is not that great.

try
{
    foo(); // can throw 1, 2, or 3
}
catch ( Exception ex )
{
    if ( cast( Exception1 ) ex !is null || cast( Exception2 ) ex !is null )
    {
        // recovery code
    }
    else
        throw ex;
}

Ew. The first thing that comes to mind is separating the variable from the condition, thus allowing multiple matches.

catch ex ( Exception1, Exception2 )
{
    // recovery code
}

The necessary semantic caveat being that the type of 'ex' would be the nearest common base type to the named exception types. (The syntax is similar to some languages that have built-in error types and the like.)

Combined with the previous proposal of being able to attach an if-constraint to catch blocks, I suppose it could be rather elaborate (powerful though?).
Apr 01 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 1 April 2013 at 22:46:49 UTC, Ali Çehreli wrote:
On 04/01/2013 02:01 PM, Dmitry Olshansky wrote:
 02-Apr-2013 00:34, Ali Çehreli пишет:

 The failed assertion may be the moment when the program


 something is wrong. A safe program should stop doing


 And that's precisely the interesting moment. It should stop

 definition of "stop" really depends on many factors. Just

 that calling abort is a panacea is totally wrong IMO.

 BTW what do you exactly mean by "safe" program?

I meant a program that wants to produce correct results. I was indeed thinking about the Therac-25 that Simen Kjærås mentioned. I agree that there must be hardware fail-safe switches as well, but they could not protect people from every kind of software failure in that example.

Having said that, I can see the counter argument as well: We are in an inconsistent state, so trying to do something about it could be better than not running cleanup code.

But I also remember that an AssertError may be thrown by an assert() call, telling us that a programmer put it in there explicitly, meaning that the program cannot continue. If there was any chance of recovery, then the programmer could have thrown an Exception or remedied the situation right then.

Yes, this is definitely a per-case issue. Not running cleanup code can turn a small issue into a big disaster, just as running it can make the problem worse. I think wiring into the language the fact that Errors don't run cleanup code is rather dangerous.

If I had to propose something, it would be to handle Errors the same way Exceptions are handled, but provide a callback that is run before the Error is thrown, in order to allow for a complete program stop based on user logic.
Apr 02 2013
prev sibling next sibling parent reply "deadalnix" <deadalnix gmail.com> writes:
On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Several things.

First, the usage of enums isn't the right path. This makes it hard to extend in general, and it is a poor man's replacement for subclasses. As a rule of thumb, when you use switch in OOP code, you are likely doing something wrong.

Second, many of your errors are recoverable here. It isn't quite satisfying.

RangeError is a very bad thing IMO. It completely hides why the range fails in the first place. Trying to access front when not possible, for instance, is an error for a reason (which is range dependent). That reason must be the source of the error/exception.

In general the hierarchy is weird. Why isn't NetworkingException (why not NetworkException?) a subclass of IOException?

OutOfMemoryError on its own isn't good IMO. The Error hierarchy is made for errors that aren't recoverable (or may not be recoverable). It includes a whole class of problems, and OOM is only one of them (another example is stack overflow errors).
Apr 02 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 14:37, deadalnix пишет:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Several things. First the usage of enums isn't the right path. This makes it hard to extend in general, and it is a poor man replacement for sub classes in general.

And using sub-classing just to tag something is equally bad. We don't have multiple inheritance, so adding, say, 2 independent pieces of info can get devilishly hard (and lead to a combinatorial explosion of classes). What could be better is indeed "tags" that transport an arbitrary set of interesting info about an exception (that's what enums do, but in a more hard-wired way).

See also what the C++ gurus came up with:
http://www.boost.org/doc/libs/1_53_0/libs/exception/doc/boost-exception.html
 As a rule of thumb, when you use switch in OOP code, you are likely to
 do something wrong.

And why is that? BTW, D doesn't have pattern matching and a type-switch like some OOP languages have. You might want to add a Visitor pattern to Exceptions, but it's darn messy to deal with and is overkill most of the time.
 Second, many of you error are recoverable here. It isn't quite satisfying.

 RangeError is a very bad thing IMO. It completely hides why the range
 fails in the first place. Trying to access front when not possible for
 instance, is an error for a reason (which is range dependent). That
 reason must be the source of the error/exception.

An I/O error when in fact you just ran out of range during some (bogus) algorithm, and that range happened to be backed by an MM-File? Think again.
 In general the hierarchy is weird. Why isn't NetworkingException (why
 not NetworkException ?) a subclass of IOException ?

No, no and no. I/O is read/write etc. calls. Network is broad stuff dealing at a conceptually lower level (like discovering hosts, which then would produce end-points that in turn can be connected to, and only *then* does I/O come). "Unreachable host" is not an I/O error.
 OutOfMemoryError on its own isn't good IMO. The Error hierarchy is made
 for error that aren't recoverable (or may not be recoverable). It
 include a whole class of problem, and OOM is only one of them (another
 example is Stack overflow errors).

-- Dmitry Olshansky
Apr 02 2013
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
 You might want to add Visitor pattern to Exceptions but it's darn messy
 to deal with and is an overkill most of the time.

Actually I think that's a good thing to do. Andrei
Apr 02 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 15:35, Andrei Alexandrescu пишет:
 On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
 You might want to add Visitor pattern to Exceptions but it's darn messy
 to deal with and is an overkill most of the time.

Actually I think that's a good thing to do.

Why would that be? It doesn't solve the key problem of "try clause plus a ton of semi-identical catches" used just to perform a mapping of X handlers to Y subsets of errors. Plus, visitor does the same dispatch that is already addressed by exception handlers (or partly so).

If somebody comes up with a reasonable Visitor pattern for Exceptions that is flexible and fast, then sure, let's see it. I just doubt it'll help anything on its own in any case.

-- 
Dmitry Olshansky
Apr 02 2013
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 4/2/13 7:59 AM, Dmitry Olshansky wrote:
 02-Apr-2013 15:35, Andrei Alexandrescu пишет:
 On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
 You might want to add Visitor pattern to Exceptions but it's darn messy
 to deal with and is an overkill most of the time.

Actually I think that's a good thing to do.

Why would be that? It doesn't solve the key problem of "try clause plus a ton of semi-identical catches" used just to perform a mapping of X handlers to Y subsets of errors. Plus visitor does the same dispatch that is already addressed by exception handlers (or partly so).

Visitor allows centralized and flexible handling of exceptions.
 If somebody comes up with a reasonable Visitor pattern for Exceptions
 that is flexible and fast then sure let's see it. I just doubt it'll
 help anything on its own in any case.

Well I think exceptions + factory + visitor is quite the connection. Andrei
Apr 02 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 17:43, Andrei Alexandrescu пишет:
 On 4/2/13 7:59 AM, Dmitry Olshansky wrote:
 02-Apr-2013 15:35, Andrei Alexandrescu пишет:
 On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
 You might want to add Visitor pattern to Exceptions but it's darn messy
 to deal with and is an overkill most of the time.

Actually I think that's a good thing to do.

Why would that be? It doesn't solve the key problem of "try clause plus a ton of semi-identical catches" used just to perform a mapping of X handlers to Y subsets of errors. Plus visitor does the same dispatch that is already addressed by exception handlers (or partly so).

Visitor allows centralized and flexible handling of exceptions.
 If somebody comes up with a reasonable Visitor pattern for Exceptions
 that is flexible and fast then sure let's see it. I just doubt it'll
 help anything on its own in any case.

Well I think exceptions + factory + visitor is quite the connection.

The only place where visitor would fit nicely that I can think of is walking an exception chain. Other than that, for the moment let me stick with the "talk is cheap, show me the code" position :)

--
Dmitry Olshansky
Apr 02 2013
prev sibling next sibling parent Michel Fortin <michel.fortin michelf.ca> writes:
On 2013-04-02 11:59:39 +0000, Dmitry Olshansky <dmitry.olsh gmail.com> said:

 02-Apr-2013 15:35, Andrei Alexandrescu wrote:
 On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
 You might want to add Visitor pattern to Exceptions but it's darn messy
 to deal with and is an overkill most of the time.

Actually I think that's a good thing to do.

Why would that be? It doesn't solve the key problem of "try clause plus a ton of semi-identical catches" used just to perform a mapping of X handlers to Y subsets of errors. Plus visitor does the same dispatch that is already addressed by exception handlers (or partly so).

What would be nice is some syntactic sugar for the following pattern:

    void handler(CommonExceptionType e)
    {
        // do something with exception
    }

    try { … }
    catch (FooException e) { handler(e); }
    catch (BarException e) { handler(e); }
    catch (BazException e) { handler(e); }

That could become:

    try { … }
    catch (CommonExceptionType e in FooException, BarException, BazException)
    {
        // do something with exception
    }

I've been secretly wishing for something like this, and not just in D.

--
Michel Fortin
michel.fortin michelf.ca
http://michelf.ca/
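In current D, the sugar wished for above can be approximated by catching the common base type and testing the dynamic type, forwarding anything else. A rough sketch, using the hypothetical exception names from the example:

```d
class CommonExceptionType : Exception { this(string m) { super(m); } }
class FooException : CommonExceptionType { this(string m) { super(m); } }
class BarException : CommonExceptionType { this(string m) { super(m); } }
class BazException : CommonExceptionType { this(string m) { super(m); } }

void handler(CommonExceptionType e)
{
    // shared handling for the three specific types
}

void example()
{
    try
    {
        throw new BarException("oops");
    }
    catch (CommonExceptionType e)
    {
        // accept only the listed types; anything else propagates
        if (cast(FooException) e || cast(BarException) e || cast(BazException) e)
            handler(e);
        else
            throw e;
    }
}
```

The downside, as noted elsewhere in this thread, is that the rethrow is easy to forget, which is exactly what the proposed syntax would eliminate.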
Apr 02 2013
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 4/2/2013 4:59 AM, Dmitry Olshansky wrote:
 If somebody comes up with a reasonable Visitor pattern for Exceptions that is
 flexible and fast then sure let's see it.

It doesn't really need to be fast. If you need performance out of Exceptions, you're misusing the idiom.
Apr 02 2013
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 21:48, Walter Bright wrote:
 On 4/2/2013 4:59 AM, Dmitry Olshansky wrote:
 If somebody comes up with a reasonable Visitor pattern for Exceptions
 that is
 flexible and fast then sure let's see it.

It doesn't really need to be fast. If you need performance out of Exceptions, you're misusing the idiom.

Exceptions are slowish already, but that hardly justifies adding an extra amount of overhead on top of that. This might well push people to avoid them even where it makes perfect sense to use exceptions.

That being said, let's see that beast and then measure, then optimize, and then judge it.

--
Dmitry Olshansky
Apr 02 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
02-Apr-2013 22:26, Jonathan M Davis wrote:
 On Tuesday, April 02, 2013 22:16:30 Dmitry Olshansky wrote:
 02-Apr-2013 21:48, Walter Bright wrote:
 On 4/2/2013 4:59 AM, Dmitry Olshansky wrote:
 If somebody comes up with a reasonable Visitor pattern for Exceptions
 that is
 flexible and fast then sure let's see it.

It doesn't really need to be fast. If you need performance out of Exceptions, you're misusing the idiom.

The exceptions are slowish but that hardly justifies adding an extra amount of overhead on top of that. This might as well push people to avoid them even where it makes perfect sense to use exceptions. That being said let's see that beast and then measure, then optimize and then judge it.

D's exceptions are ridiculously slow ( http://d.puremagic.com/issues/show_bug.cgi?id=9584 ).

Make sure you did the magic setStackTraceHandler(null), or something to that effect; I need to peruse the druntime API to pinpoint the exact name. It seems the debug hook dealing with traces is abhorrently slow and is called regardless of whether you need to print the trace or not. Otherwise they are exceptions like any other.
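The druntime hook being reached for here is presumably the trace-handler property in core.runtime; a minimal sketch of disabling stack-trace collection before timing exception throws (treat the exact API name as something to verify against your druntime version):

```d
import core.runtime;

void main()
{
    // Disable stack-trace generation for thrown objects;
    // collecting the trace is what makes each throw expensive.
    Runtime.traceHandler = null;

    foreach (i; 0 .. 100_000)
    {
        try
            throw new Exception("benchmark");
        catch (Exception e)
        {
            // swallow; we only care about throw/catch cost
        }
    }
}
```

With the handler nulled out, each throw skips trace generation entirely, which is the overhead the bug report above is mostly measuring.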
 Granted, in general, you
 shouldn't be relying on exceptions being efficient (they _are_ the error code
 path after all), but we really should do better than we're currently doing,
 and adding extra overhead obviously wouldn't help.

 The main area that I find exception speed to be a real problem is in unit
 testing. Solid unit tests will test error conditions, which generally means
 using assertThrown to verify that the correct exception was thrown for bad
 input, but with how slow D's exceptions are, it becomes _very_ expensive to do
 many tests like that, which is very frustrating when you're trying to do
 thorough unit tests.

Indeed, exceptions vs error codes shouldn't be an orders-of-magnitude deal. If it is, we need to do a better job in the implementation quality department.

--
Dmitry Olshansky
Apr 02 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Monday, 1 April 2013 at 20:58:00 UTC, Walter Bright wrote:
 On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 5. Although a bad practice, destructors in the unwinding 
 process can also allocate memory, causing double-fault issues.

Why is double fault such a big issue ?
 6. Memory allocation happens a lot. This means that very few 
 function hierarchies could be marked 'nothrow'. This throws a 
 lot of valuable optimizations under the bus.

Can we have an overview of the optimization that are thrown under the bus and how much gain you have from them is general ? Actual data are always better when discussing optimization.
 7. With the multiple gigs of memory available these days, if 
 your program runs out of memory, it's a good sign there is 
 something seriously wrong with it (such as a persistent memory 
 leak).

DMD regularly does.
Apr 02 2013
prev sibling next sibling parent "Don" <turnyourkidsintocash nospam.com> writes:
On Monday, 1 April 2013 at 20:58:26 UTC, Walter Bright wrote:
 On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

As for why finally blocks are not executed for Error exceptions: the idea is to minimize cases where the original error would now cause an abort during the unwinding process. Catching an Error is useful for things like:

1. throw the whole plugin away and restart it
2. produce a log of what happened before aborting
3. engage the backup before aborting
4. alert the operator that the system has failed and why before aborting

Unwinding is not necessary for these, and can even get in the way by causing other failures and aborting the program by attempting cleanups when the code is in an invalid state.

I think that view is reasonable, but then I don't understand the reason to have Error in the first place! Why not just call some kind of abort() function, and provide the ability to hook into it?

BTW, I actually went to quite a lot of trouble to make stack unwinding work correctly for Errors on Windows. It really wasn't easy.
Apr 02 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Tuesday, 2 April 2013 at 11:42:01 UTC, Don wrote:
 I think that view is reasonable, but then I don't understand 
 the reason to have Error in the first place! Why not just call 
 some kind of abort() function, and provide the ability to hook 
 into it?

I actually support something like this. While adding hooks for non-recoverable error conditions is a valid use case, it is better to keep them explicitly separate from the usual error (exception) handling framework, to make clear that this is an advanced tool and hardly safe.
Apr 02 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 4/1/13, Walter Bright <newshound2 digitalmars.com> wrote:
 On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

As for why finally blocks are not executed for Error exceptions

They seem to be executed:

    void main()
    {
        try
        {
            throw new Error("");
        }
        finally
        {
            assert(0);
        }
    }

This will throw an AssertError.
Apr 02 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 02, 2013 at 08:55:43AM +0200, Jacob Carlborg wrote:
 On 2013-04-02 01:52, Steven Schveighoffer wrote:

But I also hate having to duplicate catch blocks.  The issue is that
class hierarchies are almost never expressive enough.

contrived example:

class MyException : Exception {}
class MySpecificException1 : MyException {}
class MySpecificException2 : MyException {}
class MySpecificException3 : MyException {}

try
{
    foo(); // can throw exception 1, 2, or 3 above
}
catch(MySpecificException1 ex)
{
    // code block a
}
catch(MySpecificException2 ex)
{
    // code block b
}

What if code block a and b are identical?  What if the code is long
and complex?  Sure, I can put it in a function, but this seems
superfluous and verbose -- exceptions are supposed to SIMPLIFY error
handling, not make it more complex or awkward.  Basically, catching
exceptions is like having an if statement which has no boolean
operators.

Even if I wanted to write one block, and just catch MyException, then
check the type (and this isn't pretty either), it's not exactly what
I want -- I will still catch Exception3.  If this is the case, I'd
rather just put an enum in MyException and things will be easier to
read and write.

The obvious solution to that would be to be able to specify multiple exception types for a single catch block:

    catch (MySpecificException1, MySpecificException2 ex)
    {
    }

But what type will ex be, inside the catch block?

T

--
The state pretends to pay us a salary, and we pretend to work.
Apr 02 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 02 Apr 2013 02:55:43 -0400, Jacob Carlborg <doob me.com> wrote:

 The obvious solution to that would be to be able to specify multiple  
 exception types for a single catch block:

 catch (MySpecificException1, MySpecificException2 ex)
 {
 }

Yes, this could help. But it's still not great. One must still store a "type identifier" in the ex, or have to deal with casting to figure out what type ex is. It also promotes creating a new type for every single catchable situation.

Consider that we could create one exception type that contains an 'errno' member, and if we have the ability to run extra checks for catching you could do:

    catch(ErrnoException ex) if (ex.errno == EBADF || ex.errno == EBADPARAM)
    {
        ...
    }

But if we must do it with types, we need:

    class ErrnoException(uint e) : Exception
    {
        enum errno = e;
    }

or worse, to make things easier to deal with we have:

    class ErrnoException : Exception
    {
        int errno;
        this(int errno) { this.errno = errno; }
    }

    class ErrnoExceptionT(uint e) : ErrnoException
    {
        this() { super(e); }
    }

which would be easier to deal with, but damn what a waste! Either way, every time you catch another errno exception, we are talking about instantiating another type.

I think the solution with a categorical errno exception, with a simple errno stored as a variable (along with the ability to catch based on it) is much cleaner IMO, and I wonder if testing members instead of typeids might be more efficient in the stack unwinding code.

-Steve
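Absent such a catch-with-condition feature, the closest working form today is a catch of std.exception.ErrnoException with an early rethrow. A sketch (EINVAL is used in place of the post's EBADPARAM, which is not a standard errno constant; the failing OS call is simulated):

```d
import core.stdc.errno : EBADF, EINVAL, errno;
import std.exception : ErrnoException;

void useDescriptor()
{
    try
    {
        errno = EBADF;                           // simulate a failing OS call
        throw new ErrnoException("read failed"); // captures the current errno
    }
    catch (ErrnoException ex)
    {
        if (ex.errno != EBADF && ex.errno != EINVAL)
            throw ex;                            // not ours; propagate unchanged
        // handle bad-descriptor / bad-argument cases here
    }
}
```

This inverts the logic (rethrow early instead of handle-or-rethrow at the end), which makes the forgotten-rethrow mistake harder, but it still cannot fall through to a later catch block the way the proposed syntax would.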
Apr 02 2013
prev sibling next sibling parent "Jesse Phillips" <Jessekphillips+d gmail.com> writes:
On Tuesday, 2 April 2013 at 12:18:00 UTC, Andrej Mitrovic wrote:
 On 4/1/13, Walter Bright <newshound2 digitalmars.com> wrote:
 On 4/1/2013 4:08 AM, Lars T. Kyllingstad wrote:
 It's time to clean up this mess.

As for why finally blocks are not executed for Error exceptions

They seem to be executed:

    void main()
    {
        try
        {
            throw new Error("");
        }
        finally
        {
            assert(0);
        }
    }

This will throw an AssertError.

The spec makes no such guarantee, DMD just doesn't bother to not execute it. It is one of the many things which will break code when DMD decides to follow the spec.
Apr 02 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 2 April 2013 at 11:04:06 UTC, Dmitry Olshansky wrote:
 02-Apr-2013 14:23, deadalnix wrote:
 On Monday, 1 April 2013 at 22:46:49 UTC, Ali Çehreli wrote:

 Not running cleanup code can transform a small issue in a big 
 disaster
 as running can make the problem worse.

 I don't think wiring in the language the fact that error don't 
 run the
 cleanup code is rather dangerous.

 If I had to propose something, it would be to handle error the 
 same way
 exception are handled, but propose a callback that is ran 
 before the
 error is throw, in order to allow for complete program stop 
 based on
 user logic.

It's exactly what I have in mind, as removing the exception handling is something the user can't recreate easily. On the other hand, "die on first signs of corruption" is as easy as a hook that calls abort before the unwind of an Error.

It is possible to propose, as a default, a hook that fails everything and can be overridden.
 Time to petition Walter ;)

Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 21:39:18 UTC, Dmitry Olshansky wrote:
 01-Apr-2013 23:46, Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 18:40:48 UTC, Dmitry Olshansky 
 wrote:
 2. ProcessException is IMHO a system exception. Plus some 
 RTOses
 systems may not have processes in the usaul POSIX sense. It 
 needs some
 thought.

Well, I *don't* think ProcessException is a SystemException. :) And if druntime/Phobos is to be used on OSes that don't have processes, there are other adaptations that have to be made that are probably more fundamental.

Okay. I guess all of this goes to "Embedded D"/"D Lite" kind of spec. Last thing - why separating ThreadException vs ProcessException and should they have some base class? Just asking to see the rationale behind it that is missing from DIP currently.

Well, I hadn't even considered that the same exception could be used for both. I see the similarity between processes and threads, but I also think there is a difference in how/when/where you want to deal with errors related to them. Say you have a function that starts a new process and then spawns a new thread to monitor it, and then the function fails for some reason. This could be something you'd want to deal with on different levels, as a ProcessException could simply be that you got the wrong executable name, while a ThreadException would usually point to a much deeper issue.
 4. A quiz as an extension of 3 - where a e.g. serial port 
 exceptions
 would fit in this hierarchy? Including Parity errors, data 
 overrun etc.
 Let's think a bit ahead and pick names wisely.
 Maybe turn NetworkException --> Comm(unication)Exception, I 
 dunno.

Good question, I dunno either. :) I agree we should think about it.


I have thought some more about it, and a basic serial comms error should probably be an IOException. An error in a higher-level serial protocol, on the other hand, would be a NetworkException, and then the name doesn't suck so much. CommException may still be better though, or maybe ProtocolException.
 8. For IOExcpetion we might consider adding an indication on 
 which
 file handle the problem happened and/or if it's 
 closed/invalid/"mostly
 fine" that.

Which form do you suggest that such an indicator should take?

That's the trick - I hoped somebody would just say "aha!" and add one :) The internal handle is hard to represent other than as ... some platform-specific integer value. There goes generality... Beyond that, there is a potential to stomp on the feet of a higher-level abstraction built on top of that handle. That last bit makes me reconsider the idea. While I see some potential use for it, I suspect it's too niche to fit in the general hierarchy.
 Bear in
 mind that it should be general enough to cover all, or at 
 least most,
 kinds of I/O exceptions.

Adding a Kind that states one of:

- out-of-data (read empty file)
- illegalOp (reading closed file, writing read-only file/socket)
- interrupted (operation was canceled by OS, connection forcibly closed, disk ejected etc.)
- hardFault (OS reports hardware failure)

These are good suggestions, and seem general enough. Lars
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 22:26:39 UTC, Jonathan M Davis wrote:
 On Monday, April 01, 2013 21:33:22 Lars T. Kyllingstad wrote:
 My problem with subclasses is that they are a rather 
 heavyweight
 addition to an API. If they bring no substance (such as extra
 data), I think they are best avoided.

Except that they're extremely valuable when you need to catch them. [...]

    try
    {
        ...
    }
    catch(FileNotFoundException fnfe)
    {
        //handling specific to missing files
    }
    catch(PermissionsDeniedException pde)
    {
        //handling specific to lack of permissions
    }
    catch(FileException fe)
    {
        //generic file handling error
    }
    catch(Exception e)
    {
        //generic error handling
    }

can be very valuable.

Well, personally, I don't think this is much better than a switch statement.
 In general, I'd strongly suggest having subclasses for
 the various "Kind"s in addition to the kind field. That way, 
 you have the
 specific exception types if you want to have separate catch 
 blocks for different
 error types, and you have the kind field if you just want to 
 catch the base
 exception.

Then you'd have two points of maintenance when you wish to add or remove an error category.
 If anything, exceptions are exactly the place where you want to 
 have derived
 classes with next to nothing in them, precisely because it's 
 the type that the
 catch mechanism uses to distinguish them.

catch is *one* mechanism, switch is another. I propose to use catch for coarse-level handling (plus the cases where exceptions carry extra data), and switch for fine-level handling. In my experience, most of the time, you don't even bother distinguishing between the finer categories. If you can't open a file, well, that's that. Tell the user why and ask them to try another file. (I realise that this is highly arguable, of course.) Lars
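A sketch of the coarse-catch-plus-switch style described above, using hypothetical IOException/kind names (the DIP's actual names may differ):

```d
enum IOErrorKind { fileNotFound, permissionDenied, diskFull, other }

// Hypothetical exception carrying a kind field, as proposed in the DIP discussion.
class IOException : Exception
{
    IOErrorKind kind;
    this(string msg, IOErrorKind kind)
    {
        super(msg);
        this.kind = kind;
    }
}

void openReport(string path)
{
    try
    {
        // imagine an open() that throws IOException on failure
        throw new IOException("cannot open " ~ path, IOErrorKind.fileNotFound);
    }
    catch (IOException e)        // coarse level: one catch for all I/O errors
    {
        final switch (e.kind)    // fine level: switch on the kind field
        {
        case IOErrorKind.fileNotFound:
        case IOErrorKind.permissionDenied:
            // tell the user why and ask them to try another file
            break;
        case IOErrorKind.diskFull:
        case IOErrorKind.other:
            // report and give up
            break;
        }
    }
}
```

The `final switch` also gives one point of maintenance: adding a kind to the enum makes the compiler flag every switch that doesn't yet handle it.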
Apr 02 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 02, 2013 at 05:47:30PM +0200, Lars T. Kyllingstad wrote:
[...]
 I have thought some more about it, and a basic serial comms error
 should probably be an IOException.  An error in a higher-level serial
 protocol, on the other hand, would be a NetworkException, and then the
 name doesn't suck so much.  CommException may still be better though,
 or maybe ProtocolException.

ProtocolException sounds like a low-level TCP or IP exception. I think NetworkException is still the best name: not overly specific, not overly generic. CommException sounds a bit too vague to me.

T

--
What are you when you run out of Monet? Baroque.
Apr 02 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 02, 2013 at 09:43:20AM -0400, Andrei Alexandrescu wrote:
 On 4/2/13 7:59 AM, Dmitry Olshansky wrote:
02-Apr-2013 15:35, Andrei Alexandrescu wrote:
On 4/2/13 7:24 AM, Dmitry Olshansky wrote:
You might want to add Visitor pattern to Exceptions but it's darn
messy to deal with and is an overkill most of the time.

Actually I think that's a good thing to do.

Why would be that? It doesn't solve the key problem of "try clause plus a ton of semi-identical catches" used just to perform a mapping of X handlers to Y subsets of errors. Plus visitor does the same dispatch that is already addressed by exception handlers (or partly so).

Visitor allows centralized and flexible handling of exceptions.
If somebody comes up with a reasonable Visitor pattern for Exceptions
that is flexible and fast then sure let's see it. I just doubt it'll
help anything on its own in any case.

Well I think exceptions + factory + visitor is quite the connection.

Hmm. So one could have something like this, perhaps?

    void main()
    {
        try
        {
            dotDotDotMagic();
        }
        catch(Exception e)
        {
            e.handle(new ExceptionHandler());
        }
    }

    class ExceptionHandler
    {
        void handle(IOException e) { ... }
        void handle(ParseException e) { ... }
        ...
    }

Of course, this would be the lowered version of some nice syntactic sugar that the compiler would translate into.

But I'm not sure how this offers an advantage over the current state of things. How would you handle the case where some exception types aren't handled by ExceptionHandler? How would you handle accepting multiple exception types with the same handler code? How would you map something like this to nicer syntax that isn't worse than the current way of just manually defining a common function (or overloaded functions) that handles the exception?

Or perhaps I didn't quite understand how you intend to implement the visitor pattern for exceptions.

T

--
Knowledge is that area of ignorance that we arrange and classify. -- Ambrose Bierce
Apr 02 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 02 Apr 2013 12:34:25 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-02 15:44, Steven Schveighoffer wrote:

 Yes, this could help.  But it's still not great.  One must still store a
 "type identifier" in the ex, or have to deal with casting to figure out
 what type ex is.

You would have the same problem if you used a common function for handling multiple exceptions.

Right, but what I see here is that the language uses one set of criteria to determine whether it should catch, but it's difficult to use that same criteria in order to process the exception. It's not easy to switch on a class type, in fact it's downright ugly (maybe we need to come up with a way to do that in normal code too).
 It also promotes creating a new type for every single catchable  
 situation.

 Consider that we could create one exception type that contains an
 'errno' member, and if we have the ability to run extra checks for
 catching you could do:

 catch(ErrnoException ex) if (ex.errno == EBADF || ex.errno == EBADPARAM)
 {
     ...
 }

Is that so much better than:

    catch (ErrnoException ex)
    {
        if (ex.errno == EBADF || ex.errno == EBADPARAM)
        {
            /* handle exception */
        }
        else
            throw ex;
    }

Yes. I won't forget to re-throw the exception.

Plus, it seems that you are saying "catch this, but if it's also that, then *really* catch it". I think the catch is a one-shot deal, and should be the final disposition of the exception; you should rarely have to re-throw.

Re-throwing has its own problems too, consider this possibility:

    catch(ErrnoException ex) if(ex.errno == EBADF || ex.errno == EBADPARAM)
    {
        // handle these specifically
    }
    catch(Exception ex)
    {
        // handle all other exceptions
    }

I think you would have to have nested try/catch statements to do that without something like this.
 Either way,
 every time you catch another errno exception, we are talking about
 instantiating another type.

Does that matter? It still need to create a new instance for every exception thrown. Or are you planning on changing the "errno" field and rethrow the exception?

I mean it's a waste of code space and template bloat. Not a waste to create the exception. -Steve
Apr 02 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, April 02, 2013 17:56:07 Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 22:26:39 UTC, Jonathan M Davis wrote:
 In general, I'd strongly suggest having subclasses for
 the various "Kind"s in addition to the kind field. That way,
 you have the
 specific exception types if you want to have separate catch
 blocks for different
 error types, and you have the kind field if you just want to
 catch the base
 exception.

Then you'd have two points of maintenance when you wish to add or remove an error category.

True, but it's also trivial to do. But if we had to decide between basically putting error codes on exceptions and using sub-classes, I'd vote for subclasses in most cases - though errno would need to go the error code in the exception route, since it has different meanings in different contexts and would risk an absolute explosion of exception types anyway; though it should probably be translated to a more meaningful exception based on context with the errno exception being chained to it - which is what you suggest in the DIP IIRC. In general though, I'd favor subclasses, and I don't think that it's all that big a deal to give them each specific error codes when you want the base class to have an error code like you're suggesting. - Jonathan M Davis
Apr 02 2013
prev sibling next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Apr 02, 2013 at 01:29:00PM -0400, Jonathan M Davis wrote:
 On Tuesday, April 02, 2013 17:56:07 Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 22:26:39 UTC, Jonathan M Davis wrote:
 In general, I'd strongly suggest having subclasses for the various
 "Kind"s in addition to the kind field. That way, you have the
 specific exception types if you want to have separate catch blocks
 for different error types, and you have the kind field if you just
 want to catch the base exception.

Then you'd have two points of maintenance when you wish to add or remove an error category.

True, but it's also trivial to do. But if we had to decide between basically putting error codes on exceptions and using sub-classes, I'd vote for subclasses in most cases - though errno would need to go the error code in the exception route, since it has different meanings in different contexts and would risk an absolute explosion of exception types anyway; though it should probably be translated to a more meaningful exception based on context with the errno exception being chained to it - which is what you suggest in the DIP IIRC. In general though, I'd favor subclasses, and I don't think that it's all that big a deal to give them each specific error codes when you want the base class to have an error code like you're suggesting.

IMO, errno should be stored as-is in a dedicated ErrnoException. Any interpretation of errno should wrap this ErrnoException inside another hierarchy-appropriate exception. For example:

    void lowLevelIORoutine(...)
    {
        if (osRead(...) < 0)
        {
            throw new ErrnoException(errno);
        }
        ...
    }

    void libraryRoutine(...)
    {
        try
        {
            lowLevelIORoutine(...);
        }
        catch(ErrnoException e)
        {
            if (e.errno == ENOENT)
            {
                // Chain ErrnoException to FileNotFoundException
                throw new FileNotFoundException(e.msg, e);
            }
            else if (e.errno == ENOSPC)
            {
                // Chain ErrnoException to DiskFullException
                throw new DiskFullException(e.msg, e);
            }
            else
            {
                // etc.
                ...
            }
        }
    }

This way, user code can catch IOException rather than ErrnoException, but errno is still accessible via .next should the user code want to deal directly with errno:

    void userCode()
    {
        try
        {
            auto f = File("/some/path/to/file");
        }
        catch(IOException e)
        {
            if (auto ee = cast(ErrnoException) e.next)
            {
                handleErrno(ee.errno);
            }
            ...
        }
    }

I think this would be a good use of the current .next field in Exception.

T
Apr 02 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, April 02, 2013 10:41:04 H. S. Teoh wrote:
 On Tue, Apr 02, 2013 at 01:29:00PM -0400, Jonathan M Davis wrote:
 On Tuesday, April 02, 2013 17:56:07 Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 22:26:39 UTC, Jonathan M Davis wrote:
 In general, I'd strongly suggest having subclasses for the various
 "Kind"s in addition to the kind field. That way, you have the
 specific exception types if you want to have separate catch blocks
 for different error types, and you have the kind field if you just
 want to catch the base exception.

Then you'd have two points of maintenance when you wish to add or remove an error category.

True, but it's also trivial to do. But if we had to decide between basically putting error codes on exceptions and using sub-classes, I'd vote for subclasses in most cases - though errno would need to go the error code in the exception route, since it has different meanings in different contexts and would risk an absolute explosion of exception types anyway; though it should probably be translated to a more meaningful exception based on context with the errno exception being chained to it - which is what you suggest in the DIP IIRC. In general though, I'd favor subclasses, and I don't think that it's all that big a deal to give them each specific error codes when you want the base class to have an error code like you're suggesting.

[...] IMO, errno should be stored as-is in a dedicated ErrnoException. Any interpretation of errno thereof should wrap this ErrnoException inside another hierarchy-appropriate exception. For example: void lowLevelIORoutine(...) { if (osRead(...) < 0) { throw ErrnoException(errno); } ... } void libraryRoutine(...) { try { lowLevelIORoutine(...); } catch(ErrnoException e) { if (e.errno == ENOENT) { // Chain ErrnoException to FileNotFoundException throw new FileNotFoundException(e.msg, e); } else if (e.errno == ENOSPC) { // Chain ErrnoException to DiskFullException throw new DiskFullException(e.msg, e); } else { // etc. ... } } } This way, user code can catch IOException rather than ErrnoException, but errno is still accessible via .next should the user code want to deal directly with errno: void userCode() { try { auto f = File("/some/path/to/file"); } catch(IOException e) { if ((auto f = cast(ErrnoException) e.next) !is null) { handleErrno(f.errno); } ... } I think this would be a good use of the current .next field in Exception.

Yes. That seems like a good approach and is essentially what I meant. - Jonathan M Davis
Apr 02 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, April 02, 2013 22:16:30 Dmitry Olshansky wrote:
 02-Apr-2013 21:48, Walter Bright wrote:
 On 4/2/2013 4:59 AM, Dmitry Olshansky wrote:
 If somebody comes up with a reasonable Visitor pattern for Exceptions
 that is
 flexible and fast then sure let's see it.

It doesn't really need to be fast. If you need performance out of Exceptions, you're misusing the idiom.

The exceptions are slowish but that hardly justifies adding an extra amount of overhead on top of that. This might as well push people to avoid them even where it makes perfect sense to use exceptions. That being said let's see that beast and then measure, then optimize and then judge it.

D's exceptions are ridiculously slow ( http://d.puremagic.com/issues/show_bug.cgi?id=9584 ). Granted, in general, you shouldn't be relying on exceptions being efficient (they _are_ the error code path after all), but we really should do better than we're currently doing, and adding extra overhead obviously wouldn't help.

The main area where I find exception speed to be a real problem is in unit testing. Solid unit tests will test error conditions, which generally means using assertThrown to verify that the correct exception was thrown for bad input, but with how slow D's exceptions are, it becomes _very_ expensive to do many tests like that, which is very frustrating when you're trying to do thorough unit tests.

- Jonathan M Davis
Apr 02 2013
prev sibling next sibling parent "Jesse Phillips" <Jessekphillips+D gmail.com> writes:
On Monday, 1 April 2013 at 22:46:49 UTC, Ali Çehreli wrote:
 But I also remember that an AssertError may be thrown by an 
 assert() call, telling us that a programmer put it in there 
 explicitly, meaning that the program cannot continue. If there 
 was any chance of recovery, then the programmer could have 
 thrown an Exception or remedy the situation right then.

 Ali

I don't think assert/Error makes any statement on the ability to recover. What it usually means is: you need to fix this, because I won't be checking this condition when you throw on that release flag. If you are doing input validation you should be throwing an exception.

We can still throw exceptions in production. I don't tend to use this, but maybe this would be a time to say "invalid state, stop." But then how do you distinguish it from "fix your program"?

I've mostly enjoyed having temporary files cleaned up upon some range-access error, which has no effect on my removing files that are no longer valid.
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Monday, 1 April 2013 at 23:03:56 UTC, Jonathan M Davis wrote:
 On Monday, April 01, 2013 13:08:15 Lars T. Kyllingstad wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

Another concern I have is InvalidArgumentError. There _are_ cases where it makes sense for invalid arguments to be an error, but there are also plenty of cases where it should be an exception (TimeException is frequently used in that way), so we may or may not want an InvalidArgumentException, but if you do that, you run the risk of making it too easy to confuse the two, thereby causing nasty bugs. And most of the cases where InvalidArgumentError makes sense could simply be an assertion in an in contract, so I don't know that it's really needed or even particularly useful. In general, I think that having a variety of Exception types is valuable, because you catch exceptions based on their type, but with Errors, you're really not supposed to catch them, so having different Error types is of questionable value. That doesn't mean that we shouldn't ever do it, but they need a very good reason to exist given the relative lack of value that they add.

I definitely don't think we need an IllegalArgumentException. IMO, passing illegal arguments is 1) a simple programming error, in which case it should be an Error, or 2) something the programmer can not avoid, in which case it requires a better description of why things went wrong than just "illegal argument". "File not found", for example. I didn't really consider contracts when I wrote the DIP, and of course there will always be the problem of "should this be a contract or a normal input check?" The problem with contracts, though, is that they go away in release mode, which is certainly not safe if that is your error handling mechanism.
 Also, if you're suggesting that these be _all_ of the exception 
 types in
 Phobos, I don't agree. I think that there's definite value in 
 having specific
 exception types for specific sections of code (e.g. 
 TimeException for time-
 related code or CSVException in std.csv). It's just that they 
 should be put in
 a proper place in the hierarchy so that users of the library 
 can choose to
 catch either the base class or the derived class depending on 
 how specific
 their error handling needs to be and on whatever else their 
 code is doing. We
 _do_ want to move away from simply declaring module-specific 
 exception types,
 but sometimes modules _should_ have specific exception types.

There may of course be, and probably are, a need for more exceptions than what I've listed in the DIP. The idea was to make a pattern, a system, to which more exceptions can be added if strictly necessary. I do think, however, that we should try to keep the number at a minimum, and that we should NOT create new classes for every little detail that can go wrong.
 The focus needs to be on creating a hierarchy that aids in 
 error handling, so
 what exceptions we have should be based on what types of things 
 it makes sense
 to catch in order to handle those errors specifically rather 
 than them being
 treated as a general error, or even a general error of a 
 specific category.
 Having a solid hierarchy is great and very much needed, but I 
 fear that your
 DIP is too focused on getting rid of exception types rather 
 than shifting them
 into their proper place in the exception hierarchy. In some 
 cases, that _does_
 mean getting rid of exception types, but I think that on the 
 whole, it's more
 of a case of creating new base classes for existing exceptions 
 so that we have
 key base classes in the hierarchy. The DIP focuses on those 
 base classes but
 seems to want to get rid of the rest, and I think that that's a 
 mistake.

It aims to get rid of the ones that don't add any value.
 One more thing that I would point out is that your definition of
 DocParseException won't fly. file and line already exist in 
 Exception with
 different meanings, so you'd need different names for them in 
 DocParseException.

True. Lars
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Tuesday, 2 April 2013 at 10:37:08 UTC, deadalnix wrote:
 On Monday, 1 April 2013 at 11:08:16 UTC, Lars T. Kyllingstad 
 wrote:
 It's time to clean up this mess.

 http://wiki.dlang.org/DIP33

Several things. First, the usage of enums isn't the right path. It makes the hierarchy hard to extend, and it is a poor man's replacement for subclasses.

Phobos/druntime devs can always add to the enums. Users still have the option of subclassing if strictly necessary.
 As a rule of thumb, when you use switch in OOP code, you are 
 likely to do something wrong.

I'm not sure I agree with that rule. And anyway, D's final switch mitigates some of the problems with classic switch.
 Second, many of your Errors are recoverable here. That isn't quite 
 satisfying.

 RangeError is a very bad thing IMO. It completely hides why the 
 range fails in the first place. Trying to access front when not 
 possible for instance, is an error for a reason (which is range 
 dependent). That reason must be the source of the 
 error/exception.

No. To call front on an empty range is a programming error, plain and simple. It's like trying to access the first element of an empty array. The fact that some ranges may allow it anyway does not change anything.
 In general the hierarchy is weird. Why isn't 
 NetworkingException (why not NetworkException ?) a subclass of 
 IOException ?

Because they are supposed to signal different error conditions.
 OutOfMemoryError on its own isn't good IMO. The Error hierarchy 
 is made for error that aren't recoverable (or may not be 
 recoverable). It include a whole class of problem, and OOM is 
 only one of them (another example is Stack overflow errors).

The DIP sort of redefines Error to mean "programming error". Lars
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Tuesday, 2 April 2013 at 16:36:36 UTC, Jacob Carlborg wrote:
 On 2013-04-02 17:56, Lars T. Kyllingstad wrote:

 In my experience, most of the time, you don't even bother 
 distinguishing
 between the finer categories.  If you can't open a file, well, 
 that's
 that.  Tell the user why and ask them to try another file.  (I 
 realise
 that this is highly arguable, of course.)

I would say that there's a big difference between a file not existing and not having permission to access it. Think of the command line: you can easily misspell a filename, or forget to use "sudo".

This illustrates my point nicely! What does the shell do in this case? It treats both errors the same: It prints an error message and returns to the command line. It does not magically try to guess the filename, find a way to get you permission, etc. Lars
Apr 02 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Tuesday, April 02, 2013 22:00:57 Lars T. Kyllingstad wrote:
 On Monday, 1 April 2013 at 23:03:56 UTC, Jonathan M Davis wrote:
 On Monday, April 01, 2013 13:08:15 Lars T. Kyllingstad wrote:
 It's time to clean up this mess.
 
 http://wiki.dlang.org/DIP33

Another concern I have is InvalidArgumentError. There _are_ cases where it makes sense for invalid arguments to be an error, but there are also plenty of cases where it should be an exception (TimeException is frequently used in that way), so we may or may not want an InvalidArgumentException, but if you do that, you run the risk of making it too easy to confuse the two, thereby causing nasty bugs. And most of the cases where InvalidArgumentError makes sense could simply be an assertion in an in contract, so I don't know that it's really needed or even particularly useful. In general, I think that having a variety of Exception types is valuable, because you catch exceptions based on their type, but with Errors, you're really not supposed to catch them, so having different Error types is of questionable value. That doesn't mean that we shouldn't ever do it, but they need a very good reason to exist given the relative lack of value that they add.

I definitely don't think we need an IllegalArgumentException. IMO, passing illegal arguments is 1) a simple programming error, in which case it should be an Error, or 2) something the programmer can not avoid, in which case it requires a better description of why things went wrong than just "illegal argument". "File not found", for example.

If we had IllegalArgumentException, it would likely be the base class for some subset of exceptions which were always bad arguments, but it certainly wouldn't make sense to use it by itself in most cases. It would just provide a convenient way to catch that particular subset of exceptions if you needed to. In general though, I think that assert covers what IllegalArgumentError is trying to do just fine, and where it doesn't, the argument about needing more descriptive exceptions applies just as well (e.g. RangeError). So, unless it's used as a base class for more descriptive errors, I don't think that there's much value in having IllegalArgumentError.
 I didn't really consider contracts when I wrote the DIP, and of
 course there will always be the problem of "should this be a
 contract or a normal input check?" The problem with contracts,
 though, is that they go away in release mode, which is certainly
 not safe if that is your error handling mechanism.

If you're treating Error exclusively as programming errors, then it's really no different from AssertError. You're just creating categories for specific types rather than using assert for them all. And I would fully expect things like RangeError to be compiled out in -release. That's what we've started doing in std.range and std.algorithm now that we've got version(assert). So, instead of having a function like opIndex assert, it checks and throws a RangeError on failure - but it does so in a version(assert) block so that it's compiled out. It (nearly) matches the behavior of arrays that way. So, unless you're arguing that assertions should be left in code, then I don't think that it makes any sense to expect that Errors in general will stay in production code. Some may, depending on the code and what's being tested, but there's no guarantee that they will, and I think that it would be very much incorrect to expect them to in the general case - not when they equate specifically to programming errors. - Jonathan M Davis
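
A minimal sketch of the version(assert) pattern described above (the struct and names are illustrative, not the actual std.range code):

```d
import core.exception : RangeError;

struct MyRange(T)
{
    T[] data;

    ref T opIndex(size_t i)
    {
        // The bounds check is compiled in only when assertions are
        // enabled, so it disappears under -release, (nearly) matching
        // the behavior of built-in arrays.
        version (assert)
        {
            if (i >= data.length)
                throw new RangeError();
        }
        return data[i];
    }
}
```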
Apr 02 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 02 Apr 2013 14:54:06 -0400, Jacob Carlborg <doob me.com> wrote:

 On 2013-04-02 19:02, Steven Schveighoffer wrote:

 Right, but what I see here is that the language uses one set of criteria
 to determine whether it should catch, but it's difficult to use that
 same criteria in order to process the exception.  It's not easy to
 switch on a class type, in fact it's downright ugly (maybe we need to
 come up with a way to do that in normal code too).

That's kind of breaking the whole point of OO and virtual functions, you should not need to know the exact type.

The example dictates that we determine why the exception was thrown, yet we can't catch the exact type, because we would have to duplicate the code block. So we somehow have to catch the base type, manually verify it's one of the types we want, and rethrow otherwise. If there is a better idea, I'd love to hear it. -Steve
Apr 02 2013
prev sibling next sibling parent "Jesse Phillips" <Jessekphillips+d gmail.com> writes:
On Tuesday, 2 April 2013 at 19:10:47 UTC, Ali Çehreli wrote:
 We can still throw exceptions in production, I don't tend to

 but maybe this would be a time to say "invalid state stop."

 do you distinguish it from "fix your program?"

(I am not sure that I understand that comment correctly.)

(meant to say: we can still throw Errors in production.) Errors currently get used to identify conditions that could cause an invalid state, since the check may not always be there.
 I've mostly enjoyed having temporary files being cleaned up

 range access error which has no effect on my removing files

 longer valid.

The problem is, the runtime cannot know that it will be doing what you really wanted. The incorrect program state may result in deleting the wrong file.

See above, the state isn't invalid. The error is thrown which is stating, "hey, buddy, good thing you didn't flip that release switch, as I'm about to do something I shouldn't be." However, D does allow nothrow functions to throw Errors. I wouldn't say this would cause the program to enter an invalid state (memory corruption), but it would be a bad state (contract violations). Take the RangeError thrown when you pop an empty range. Under what scenario would receiving one of these indicate that my file isn't correct for deletion (any more so than, say, a ConvException from the same range)?

    auto myFile = "some.tmp";
    scope(exit) remove(myFile);
    // setup code here
    manipulateFileRange(range);

The setup code could of course assign a pointer of myFile to range, and range could make modifications, but doing so could just as likely throw a ConvException (or others) before you hit a RangeError.
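
The snippet above, expanded into a compilable sketch; manipulateFileRange is a hypothetical stand-in for whatever processing might throw:

```d
import std.file : remove, write;

// Hypothetical processing that may throw (a ConvException, a
// RangeError from popping an empty range, etc.).
void manipulateFileRange(int[] range)
{
    foreach (item; range) { /* ... */ }
}

void processTempFile()
{
    auto myFile = "some.tmp";
    write(myFile, "temporary data");
    // Cleanup runs even when an Error unwinds the stack -- in current
    // DMD, at least; the spec does not guarantee this for Errors.
    scope (exit) remove(myFile);

    manipulateFileRange([1, 2, 3]);
}
```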
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 00:07:10 UTC, Jonathan M Davis 
wrote:
 [...]
 If we had IllegalArgumentException, it would likely be the base 
 class for some
 subset of exceptions which were always bad arguments, but it 
 certainly
 wouldn't make sense to use it by itself in most cases. It would 
 just provide a
 convenient way to catch that particular subset of exceptions if 
 you needed to.

I disagree. Some things don't really deserve a more detailed error than just "illegal argument". Examples include passing a negative number where a positive one was expected, an invalid combination of bit flags, an invalid file mode to a file-opening function, etc.
 In general though, I think that assert covers what 
 IllegalArgumentError is
 trying to do just fine, and where it doesn't, the argument 
 about needing more
 descriptive exceptions applies just as well (e.g. RangeError). 
 [...]

The problem with assert is that it gets disabled in release mode. I think it is a bad idea to have this be the "standard" behaviour of parameter validation.
 [...]

 If you're treating Error exclusively as programming errors, 
 then it's really
 no different from AssertError. You're just creating categories 
 for specific
 types rather than using assert for them all.

Wouldn't you say that most of the Error types we have today do, in fact, signal programming errors? Walter even argued that OutOfMemoryError is often a programming error.
 [...]

 So, unless you're arguing that assertions should be left in 
 code, then I don't
 think that it makes any sense to expect that Errors in general 
 will stay in
 production code. [...]

I personally think that, as a general rule, Errors should stay in production code. I thought we had already separated -noboundscheck from -release, but I just tested now and that doesn't seem to be the case: -release still implies -noboundscheck. D prides itself on being safer than C and C++, what with its default array bounds checks to combat buffer overruns and all, but as it turns out we're not that much better off. The -release switch practically screams "use me in production code", but what's the point of bounds checking if it is only ever used while developers are testing their code? Lars
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 05:42:13 UTC, Lars T. Kyllingstad 
wrote:
 The problem with assert is that it gets disabled in release 
 mode.
  I think it is a bad idea to have this be the "standard" 
 behaviour of parameter validation.

Allow me to clarify my position on assert() a bit. As a general rule, I think assert() should be used to verify internal program flow and program invariants, and nothing more. Specifically, public APIs should *not* change depending on whether asserts are compiled in or not. Say I am writing a function that you are using. I don't trust you to always give me correct parameters, so I check them. (Maybe my function could even do something dangerous if I didn't.)

    public void myFunction(someArgs)
    {
        if (someArgs are invalid)
            throw new InvalidArgumentError;
        ...
    }

Now that I have verified that your input is correct, i.e. that your part of the deal is done, everything that happens afterwards is entirely up to me to get right. And *that* is when assertions enter the picture. I'll sprinkle my code with assert statements to make sure that every step of whatever procedure myFunction() performs is correct, and I may use it to verify input parameters to *private* helper functions and such. These kinds of checks are purely for my personal use, for debugging and for verifying that changes I make do not break other parts of the code. There may of course be situations, e.g. in performance-critical code, where it is desirable to disable parameter validation, but then -release should not be the switch that does it. -version=TotallyUnsafeButVeryPerformantDudeYoureOnYourOwnNow, anyone? :) Lars
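
The pseudocode above, spelled out as compilable D, with InvalidArgumentError as a hypothetical class along the lines of what DIP33 proposes:

```d
// Hypothetical error class; DIP33 proposes something along these lines.
class InvalidArgumentError : Error
{
    this(string msg) { super(msg); }
}

// Public API: caller input is validated unconditionally, even in
// release builds.
double mySqrt(double x)
{
    if (x < 0)
        throw new InvalidArgumentError("argument must be non-negative");
    auto result = sqrtImpl(x);
    // Internal invariant: purely for the implementer's own debugging;
    // compiled out with -release.
    assert(result >= 0);
    return result;
}

// Private helper: its argument was already validated above, so a
// plain assert suffices here.
private double sqrtImpl(double x)
{
    import std.math : sqrt;
    assert(x >= 0);
    return sqrt(x);
}
```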
Apr 02 2013
prev sibling next sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 4/2/13, Jesse Phillips <Jessekphillips+d gmail.com> wrote:
 The spec makes no such guarantee

Yeah but let's not treat the spec as if it's something that was written professionally by an ISO committee. :)
 DMD just doesn't bother to not
 execute it. It is one of the many things which will break code
 when DMD decides to follow the spec.

I was wondering about this recently, and thought it's rather unsafe that Errors trigger finally blocks. Anyway is this bug filed somewhere? We don't want to lose track of this.
Apr 02 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 07:01:09 UTC, Jacob Carlborg wrote:
 On 2013-04-02 22:15, Lars T. Kyllingstad wrote:

 This illustrates my point nicely!  What does the shell do in 
 this case?
 It treats both errors the same:  It prints an error message 
 and returns
 to the command line.  It does not magically try to guess the 
 filename,
 find a way to get you permission, etc.

No, but you do know the difference. It doesn't just say "can't open file <filename>". It will say either "file <filename> doesn't exist" or "don't have permission to access <filename>". It's a huge difference: I know _what_ went wrong with that file, not just that _something_ went wrong.

Which is exactly what you'd use FilesystemException.kind and/or FilesystemException.msg for. I never said there shouldn't be a way to distinguish between file errors, I just said that in most cases, an entirely new exception class is overkill. Lars
Apr 03 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 2 April 2013 at 20:11:31 UTC, Lars T. Kyllingstad 
wrote:
 No.  To call front on an empty range is a programming error, 
 plain and simple.  It's like trying to access the first element 
 of an empty array.  The fact that some ranges may allow it 
 anyway does not change anything.

It is illegal for a reason. For instance, with an array, it is an out-of-bounds access. I see no benefit in hiding this information behind a more generic RangeError. It hides information and provides nothing more.
Apr 03 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Tuesday, 2 April 2013 at 20:11:31 UTC, Lars T. Kyllingstad 
wrote:
 Phobos/druntime devs can always add to the enums.  Users still 
 have the option of subclassing if strictly necessary.

This is fundamentally incompatible with :
 I'm not sure I agree with that rule.  And anyway, D's final 
 switch mitigate some of the problems with classic switch.

Adding an entry to the enum will break every single final switch in user code.
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 09:20:02 UTC, Walter Bright wrote:
 On 4/2/2013 1:00 PM, Lars T. Kyllingstad wrote:
 I definitely don't think we need an IllegalArgumentException. 
 IMO, passing
 illegal arguments is 1) a simple programming error, in which 
 case it should be
 an Error, or 2) something the programmer can not avoid, in 
 which case it
 requires a better description of why things went wrong than 
 just "illegal
 argument".  "File not found", for example.

 I didn't really consider contracts when I wrote the DIP, and 
 of course there
 will always be the problem of "should this be a contract or a 
 normal input
 check?"  The problem with contracts, though, is that they go 
 away in release
 mode, which is certainly not safe if that is your error 
 handling mechanism.

A bit of philosophy here: Contracts are not there to validate user input. They are only there to check for logic bugs in the program itself. It's a very clear distinction, and should not be a problem. To reiterate, if a contract fails, that is a BUG in the program, and the program is then considered to be in an undefined state and is not recoverable. CONTRACTS ARE NOT AN ERROR HANDLING MECHANISM.

I completely agree, and this is exactly why we need InvalidArgumentError. Lars
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 08:37:32 UTC, deadalnix wrote:
 On Tuesday, 2 April 2013 at 20:11:31 UTC, Lars T. Kyllingstad 
 wrote:
 No.  To call front on an empty range is a programming error, 
 plain and simple.  It's like trying to access the first 
 element of an empty array.  The fact that some ranges may 
 allow it anyway does not change anything.

It is illegal for a reason. For instance, with an array, it is an out of bound access. I see ne benefice to hide this information in a more generic RangeError. This is hiding information and providing nothing more.

For arrays, RangeError is synonymous with "out of bounds". I see no reason to invent a new class just for this purpose. And note that I'm not saying that ranges should be restricted to *only* throwing RangeErrors. Generally, it should be used in situations that are analogous to out of bounds for arrays, such as trying to pop past the end of the range. However, some ranges may want to do something else in this situation. An output range that writes to a file, for instance, may want to throw a "disk full" exception. A wrapper range may simply propagate any errors from the underlying range. RangeError is for the cases when it is not possible/necessary to provide more detail than "you tried to call popFront on an empty range, which is illegal". Lars
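
A sketch of such a range, using the existing core.exception.RangeError (the range itself is made up for illustration):

```d
import core.exception : RangeError;

// A trivial forward-counting range that treats misuse of its
// primitives the way arrays treat out-of-bounds indexing.
struct Counter
{
    int current, end;

    @property bool empty() const { return current >= end; }

    @property int front()
    {
        if (empty)
            throw new RangeError();  // analogous to indexing past the end
        return current;
    }

    void popFront()
    {
        if (empty)
            throw new RangeError();  // popping past the end of the range
        ++current;
    }
}
```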
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 08:40:52 UTC, deadalnix wrote:
 On Tuesday, 2 April 2013 at 20:11:31 UTC, Lars T. Kyllingstad 
 wrote:
 Phobos/druntime devs can always add to the enums.  Users still 
 have the option of subclassing if strictly necessary.

This is fundamentally incompatible with :
 I'm not sure I agree with that rule.  And anyway, D's final 
 switch mitigate some of the problems with classic switch.

As adding an entry unto the enum will break every single final switch in user code.

I don't see the incompatibility. This is exactly the purpose of final switch. If the user didn't want to be forced to handle a new error category, they'd use normal switch instead. You have yet to specify the problems with switch in OOP. Maybe you meant something else, that final switch doesn't solve? Lars
Apr 03 2013
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 03 Apr 2013 03:12:56 -0400, Lars T. Kyllingstad  
<public kyllingen.net> wrote:

 On Wednesday, 3 April 2013 at 07:01:09 UTC, Jacob Carlborg wrote:
 On 2013-04-02 22:15, Lars T. Kyllingstad wrote:

 This illustrates my point nicely!  What does the shell do in this case?
 It treats both errors the same:  It prints an error message and returns
 to the command line.  It does not magically try to guess the filename,
 find a way to get you permission, etc.

No, but you do know the difference. It doesn't just say "can't open file <filename>". It will say either, "file <filename> doesn't exist" or "don't have permission to access <filename>". It's a huge difference. I know _what_ went wrong with that file, not just that _something_ when wrong.

Which is exactly what you'd use FilesystemException.kind and/or FilesystemException.msg for. I never said there shouldn't be a way to distinguish between file errors, I just said that in most cases, an entirely new exception class is overkill.

Think of it this way, is there a use case where you would want to catch the "file not found" exception, but let the "do not have permission" exception go through to the next handler? or vice versa? This is the only reason to have separate classes. Then we are in the awkward position of what to do if we need to catch both. In that case, you have to have a base class that covers both. The problem I have with this whole scheme of one class per error type is, you inevitably cannot cover everyone's use case, so they end up having to catch a base class and then doing the work to figure out what it is manually. -Steve
Apr 03 2013
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
03-Apr-2013 21:43, Jonathan M Davis пишет:
 On Wednesday, April 03, 2013 16:23:31 Lars T. Kyllingstad wrote:
 On Wednesday, 3 April 2013 at 14:11:09 UTC, Steven Schveighoffer

 wrote:
 The problem I have with this whole scheme of one class per
 error type is, you inevitably cannot cover everyone's use case,
 so they end up having to catch a base class and then doing the
 work to figure out what it is manually.

Precisely. And then, a switch over an enum is both way more efficient and more readable than a bunch of ifs and casts.

Except that the enum solution doesn't work at all when a 3rd party needs to add an error type to the mix,

This is the critical point. There shouldn't ever be a need to add a new exception type that differs only in a minor way. Mark your wild type as the unknown Kind of IOException if you need it. If you want an error to propagate some extra information, that's a whole other story, but just subclassing everything and putting your brand name on it doesn't scale. It helps neither standardized error handling nor propagating specific info. Having a million kinds (each library with its own) to propagate this info is awful and proven to suck (see C++).
 In general, I think that using error codes in exception types like you're
 basically suggesting is not properly taking advantage of the language's built-
 in exception features. catch is _designed_ for differentiating between errors

Between classes of Exceptions. Not between individual subtle thingies. -- Dmitry Olshansky
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 14:11:09 UTC, Steven Schveighoffer 
wrote:

 The problem I have with this whole scheme of one class per 
 error type is, you inevitably cannot cover everyone's use case, 
 so they end up having to catch a base class and then doing the 
 work to figure out what it is manually.

Precisely. And then, a switch over an enum is both way more efficient and more readable than a bunch of ifs and casts. Lars
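
A sketch of what that dispatch could look like, assuming a hypothetical DIP33-style kind enum (all names are illustrative):

```d
// Hypothetical DIP33-style exception carrying a kind enum.
enum FileErrorKind { notFound, accessDenied, unknown }

class FilesystemException : Exception
{
    FileErrorKind kind;
    this(string msg, FileErrorKind kind)
    {
        super(msg);
        this.kind = kind;
    }
}

// One catch clause, then a cheap, readable dispatch on the kind --
// no chain of catch blocks, no casts.
string describeFailure(lazy void operation)
{
    try
    {
        operation();
        return "ok";
    }
    catch (FilesystemException e)
    {
        final switch (e.kind)
        {
            case FileErrorKind.notFound:     return "try another filename";
            case FileErrorKind.accessDenied: return "try sudo";
            case FileErrorKind.unknown:      return e.msg;
        }
    }
    assert(0);  // unreachable: all paths above return
}
```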
Apr 03 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 03 Apr 2013 01:59:32 -0400, Lars T. Kyllingstad  
<public kyllingen.net> wrote:

 On Wednesday, 3 April 2013 at 05:42:13 UTC, Lars T. Kyllingstad wrote:
 The problem with assert is that it gets disabled in release mode.
  I think it is a bad idea to have this be the "standard" behaviour of  
 parameter validation.

Allow me to clarify my position on assert() a bit. As a general rule, I think assert() should be used to verify internal program flow and program invariants, and nothing more. Specifically, public APIs should *not* change depending on whether asserts are compiled in or not. Say I am writing a function that you are using. I don't trust you to always give me correct parameters, so I check them. (Maybe my function could even do something dangerous if I didn't.) public void myFunction(someArgs) { if (someArgs are invalid) throw new InvalidArgumentError; ... }

I disagree here. There are two "users" involved, one is the actual user, typing a command on the command line, and then the developer who uses the function. The developer should be checked with assert, the user should be checked with code like you wrote. The problem becomes apparent when developers don't check user input before passing to your functions. That is on them, not you. The library should be able to have all the safety checks removed to improve performance. I wish there was a way to say "this data is unchecked" or "this data is checked and certified to be correct" when you call a function. That way you could run the in contracts on user-specified data, even with asserts turned off, and avoid the checks in release code when the data has already proven valid. -Steve
Apr 03 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 3 April 2013 at 14:32:59 UTC, Steven Schveighoffer 
wrote:
 I wish there was a way to say "this data is unchecked" or "this 
 data is checked and certified to be correct" when you call a 
 function.  That way you could run the in contracts on 
 user-specified data, even with asserts turned off, and avoid 
 the checks in release code when the data has already proven 
 valid.

 -Steve

But you can. It is not much different from Range vs SortedRange; type systems exist exactly to solve such issues. Using UDAs is probably even more elegant, but that is an implementation detail. And it is one of the big improvements, on both the safety and the performance side, that you can get from using strongly typed languages in network services.
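
A sketch of the idea, using a hypothetical Checked wrapper (analogous to how SortedRange proves sortedness through its type):

```d
import std.exception : enforce;

// Hypothetical wrapper: values of this type can only come out of
// validate(), so a function accepting it may skip re-checking.
struct Checked(T)
{
    private T value;
    @property T get() const { return value; }
}

Checked!int validate(int raw)
{
    enforce(raw >= 0, "value must be non-negative");
    return Checked!int(raw);
}

// No run-time check needed here; the parameter type carries the proof.
int doubled(Checked!int v)
{
    return v.get * 2;
}
```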
Apr 03 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 3 April 2013 at 09:40:27 UTC, Lars T. Kyllingstad 
wrote:
 For arrays, RangeError is synonymous with "out of bounds".  I 
 see no reason to invent a new class just for this purpose.

Exactly, no need for RangeError (as out-of-bounds access can be produced in many situations that don't involve ranges).
 And note that I'm not saying that ranges should be restricted 
 to *only* throwing RangeErrors.  Generally, it should be used 
 in situations that are analogous to out of bounds for arrays, 
 such as trying to pop past the end of the range.

The data always comes from somewhere, and that somewhere can't provide any more data for a reason: because you reached the end of a file (IOException or something), because the network disconnected (NetworkException), or whatever. RangeError implies that the data magically appears from a range, without any actual source, which is impossible.
 However, some ranges may want to do something else in this 
 situation.  An output range that writes to a file, for 
 instance, may want to throw a "disk full" exception.  A wrapper 
 range may simply propagate any errors from the underlying range.

Exactly. In general, a wrapper should simply forward calls and let the source fail when its own logic isn't involved.
 RangeError is for the cases when it is not possible/necessary 
 to provide more detail than "you tried to call popFront on an 
 empty range, which is illegal".

It is always illegal for a reason.
Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 14:32:59 UTC, Steven Schveighoffer 
wrote:
 On Wed, 03 Apr 2013 01:59:32 -0400, Lars T. Kyllingstad 
 <public kyllingen.net> wrote:
 Say I am writing a function that you are using.  I don't trust 
 you to always give me correct parameters, so I check them.  
 (Maybe my function could even do something dangerous if I 
 didn't.)

   public void myFunction(someArgs)
   {
       if (someArgs are invalid)
           throw new InvalidArgumentError;
       ...
   }

I disagree here. There are two "users" involved, one is the actual user, typing a command on the command line, and then the developer who uses the function. The developer should be checked with assert, the user should be checked with code like you wrote. The problem becomes apparent when developers don't check user input before passing to your functions. That is on them, not you. The library should be able to have all the safety checks removed to improve performance.

Some, yes, but not all. You always have to weigh the benefit, i.e. the improved performance, against the drawbacks, i.e. reduced safety. If you are removing trivial safety checks from a function that performs a very expensive and possibly dangerous operation -- a disk operation, say -- you're doing something wrong. I agree it should be possible to remove safety checks from functions which are expected to be performant, and where the checks will have an impact (e.g. the range primitives), but it should be done with a less attractive compiler switch than -release. I think it's a big mistake to encourage programmers to ship their programs with array bounds checks and the like disabled. Such programs should be the exception, not the rule. It's always better to err on the side of safety rather than performance.
 I wish there was a way to say "this data is unchecked" or "this 
 data is checked and certified to be correct" when you call a 
 function.  That way you could run the in contracts on 
 user-specified data, even with asserts turned off, and avoid 
 the checks in release code when the data has already proven 
 valid.

That would be awesome indeed. Lars
Apr 03 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 3 April 2013 at 09:44:23 UTC, Lars T. Kyllingstad 
wrote:
 I don't see the incompatibility.  This is exactly the purpose 
 of final switch.  If the user didn't want to be forced to 
 handle a new error category, they'd use normal switch instead.

This is a good thing in your own code. But in phobos, this is guaranteed to break a lot of user code when adding to the enum.
 You have yet to specify the problems with switch in OOP.  Maybe 
 you meant something else, that final switch doesn't solve?

The idiomatic way to execute code depending on a value in OOP is virtual dispatch.
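The contrast deadalnix is drawing can be sketched in a few lines. All names below are hypothetical, invented for illustration, not from DIP33:

```d
// Sketch: the same error dispatch expressed with an enum switch
// and with plain virtual dispatch. Names are illustrative only.
import std.stdio;

enum ErrorCategory { network, disk }

class AppException : Exception
{
    ErrorCategory category;

    this(string msg, ErrorCategory cat)
    {
        super(msg);
        category = cat;
    }

    // OOP style: subclasses override; existing call sites need no
    // changes when a new category is added.
    void report() { writeln("generic failure: ", msg); }
}

class NetworkException : AppException
{
    this(string msg) { super(msg, ErrorCategory.network); }
    override void report() { writeln("network failure: ", msg); }
}

void main()
{
    AppException e = new NetworkException("timeout");

    // Enum style: every call site switches, and a final switch
    // stops compiling whenever the enum grows.
    final switch (e.category)
    {
        case ErrorCategory.network: writeln("retrying");  break;
        case ErrorCategory.disk:    writeln("giving up"); break;
    }

    e.report(); // virtual dispatch: one call, no switch
}
```

The trade-off discussed in the thread is visible here: the switch is explicit and efficient, but every new enum member touches every call site, while a new subclass only has to override report().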
Apr 03 2013
prev sibling next sibling parent "Tobias Pankrath" <tobias pankrath.net> writes:
On Wednesday, 3 April 2013 at 15:33:53 UTC, deadalnix wrote:
 On Wednesday, 3 April 2013 at 09:44:23 UTC, Lars T. Kyllingstad 
 wrote:
 I don't see the incompatibility.  This is exactly the purpose 
 of final switch.  If the user didn't want to be forced to 
 handle a new error category, they'd use normal switch instead.

This is a good thing in your own code. But in phobos, this is a guarantee to break a lot of user code when adding to the enum.

You're using final switch because you want to make statically sure that you consider every enum member. If you now add another enum value, the code is by definition broken, regardless of whether dmd issues an error or not. The same is true for any kind of virtual dispatch (if a normal switch with default: wasn't enough, a base class method won't be either).
 You have yet to specify the problems with switch in OOP.  
 Maybe you meant something else, that final switch doesn't 
 solve?

The idiomatic way to execute code depending on a value in OOP is virtual dispatch.

Apr 03 2013
prev sibling next sibling parent "Lars T. Kyllingstad" <public kyllingen.net> writes:
On Wednesday, 3 April 2013 at 15:29:01 UTC, deadalnix wrote:
 On Wednesday, 3 April 2013 at 09:40:27 UTC, Lars T. Kyllingstad 
 wrote:
 For arrays, RangeError is synonymous with "out of bounds".  I 
 see no reason to invent a new class just for this purpose.

Exactly, no need for RangeError (as out of bound access can be produced in many situation that don't involve ranges).

Well the word "range" has two meanings here: One is the programming language concept, i.e. a type that defines empty, front, etc. The other is the English word for which Google offers the following definition: "The area of variation between upper and lower limits on a particular scale." In light of this, I don't think RangeError is a bad name for an array bounds violation.
 And note that I'm not saying that ranges should be restricted 
 to *only* throwing RangeErrors.  Generally, it should be used 
 in situations that are analogous to out of bounds for arrays, 
 such as trying to pop past the end of the range.

The data always come from somewhere. That somewhere can't provide anymore data for a reason. Because you reached the end of a file (IOException or something) because the network disconnected (NetworkException) or whatever. RangeError imply that the data magically appears from a range, without any actual source, which is impossible.

Of course it's not impossible:

std.range.iota
std.range.recurrence
std.range.sequence
 However, some ranges may want to do something else in this 
 situation.  An output range that writes to a file, for 
 instance, may want to throw a "disk full" exception.  A 
 wrapper range may simply propagate any errors from the 
 underlying range.

Exactly. In general, wrapper should simply forward calls and let the source fail when its own logic isn't involved.

This is not always possible or convenient. Assume the directory "foo" contains 4 files.

   auto fourFiles = std.file.dirEntries("foo", SpanMode.shallow);
   auto twoFiles = std.range.take(fourFiles, 2);
   twoFiles.popFront();
   twoFiles.popFront();
   twoFiles.popFront();

Which exception should the last popFront() throw?

Lars
Apr 03 2013
prev sibling next sibling parent "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 03 Apr 2013 11:32:07 -0400, Lars T. Kyllingstad  
<public kyllingen.net> wrote:

 On Wednesday, 3 April 2013 at 14:32:59 UTC, Steven Schveighoffer wrote:

 I disagree here.  There are two "users" involved, one is the actual  
 user, typing a command on the command line, and then the developer who  
 uses the function.  The developer should be checked with assert, the  
 user should be checked with code like you wrote.

 The problem becomes apparent when developers don't check user input  
 before passing to your functions.  That is on them, not you.  The  
 library should be able to have all the safety checks removed to improve  
 performance.

Some, yes, but not all. You always have to weigh the benefit, i.e. the improved performance, against the drawbacks, i.e. reduced safety. If you are removing trivial safety checks from a function that performs a very expensive and possibly dangerous operation -- a disk operation, say -- you're doing something wrong.

The problem with this is if you "weigh the benefit" in the library, then the user of the library has no choice. Especially in phobos or druntime, we are disallowing libraries that build on top of them from saying "yes, I know this data is correct". In essence, he is stuck checking user input, and then passing the data to your function, which checks it again. There is no way out.
 I agree it should be possible to remove safety checks from functions  
 which are expected to be performant, and where the checks will have an  
 impact (e.g. the range primitives), but it should be done with a less  
 attractive compiler switch than -release.

I think it needs to be on a call-by-call basis, not a compiler-global. The same function could be called in the program twice, once with user-supplied data, once with static data. In the former, I want to run the checks, in the latter, I don't.
 I think it's a big mistake to encourage programmers to ship their  
 programs with array bounds checks and the like disabled.  Such programs  
 should be the exception, not the rule.  It's always better to err on the  
 side of safety rather than performance.

No, I completely disagree. It's very rare that you need bounds checks on data that is generated by the program. Note that you can enable bounds checks by writing code in safe D if that's how you wish to operate. -Steve
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 16:23:31 Lars T. Kyllingstad wrote:
 On Wednesday, 3 April 2013 at 14:11:09 UTC, Steven Schveighoffer
 
 wrote:
 The problem I have with this whole scheme of one class per
 error type is, you inevitably cannot cover everyone's use case,
 so they end up having to catch a base class and then doing the
 work to figure out what it is manually.

Precisely. And then, a switch over an enum is both way more efficient and more readable than a bunch of ifs and casts.

Except that the enum solution doesn't work at all when a 3rd party needs to add an error type to the mix, so they'll subclass it, and in the general case, you're forced to deal with the base class regardless.

In general, I think that using error codes in exception types like you're basically suggesting is not properly taking advantage of the language's built-in exception features. catch is _designed_ for differentiating between errors, and using error codes in exceptions circumvents that. I think that the only case where it's cleaner to have the code is when you want to handle two exceptions of differing types with the same handler but don't want to handle all exceptions with their common base type in the same way. Other than that, I think that it's much better to let catch do its job - especially when people _do_ need to handle each exception type differently.

- Jonathan M Davis
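As a hedged illustration of "letting catch do its job" (the exception class names below are invented for the example, not Phobos types):

```d
// Sketch: differentiating failures by catch clause type instead of
// switching on an embedded error code. All names are hypothetical.
import std.stdio;

class IOException : Exception
{
    this(string m) { super(m); }
}

class DiskFullException : IOException
{
    this(string m) { super(m); }
}

string handleSave(void delegate() save)
{
    try
    {
        save();
        return "ok";
    }
    catch (DiskFullException e)
    {
        // most specific handler first
        return "free some space: " ~ e.msg;
    }
    catch (IOException e)
    {
        // any other I/O failure falls through to the base type
        return "io error: " ~ e.msg;
    }
}

void main()
{
    writeln(handleSave({ throw new DiskFullException("disk full"); }));
    writeln(handleSave({ throw new IOException("seek failed"); }));
}
```

A 3rd-party library can subclass IOException and existing catch (IOException) handlers keep working, which is exactly what an enum of error codes cannot offer.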
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 10:37:31 deadalnix wrote:
 On Tuesday, 2 April 2013 at 20:11:31 UTC, Lars T. Kyllingstad
 
 wrote:
 No. To call front on an empty range is a programming error,
 plain and simple. It's like trying to access the first element
 of an empty array. The fact that some ranges may allow it
 anyway does not change anything.

It is illegal for a reason. For instance, with an array, it is an out-of-bounds access. I see no benefit in hiding this information behind a more generic RangeError. This is hiding information and providing nothing more.

RangeError _is_ out-of-bounds access. Range in this context has nothing to do with ranges in the D sense (though that would be a good argument for changing it to something more like OutOfBoundsError). I'm not sure that I'd use RangeError for the popFront case (if I didn't, I'd just use a normal assertion), but I don't think that it's necessarily wrong or at all bad to use a RangeError in that case. We've definitely started moving towards using RangeError in version(assert) blocks for opIndex and opSlice in std.range and std.algorithm. And we're doing that precisely to make them act more like arrays.

- Jonathan M Davis
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 17:41:52 Tobias Pankrath wrote:
 On Wednesday, 3 April 2013 at 15:33:53 UTC, deadalnix wrote:
 On Wednesday, 3 April 2013 at 09:44:23 UTC, Lars T. Kyllingstad
 
 wrote:
 I don't see the incompatibility. This is exactly the purpose
 of final switch. If the user didn't want to be forced to
 handle a new error category, they'd use normal switch instead.

This is a good thing in your own code. But in phobos, this is a guarantee to break a lot of user code when adding to the enum.

You're using final switch because you want to make statically sure that you consider every enum member. If you now add a another enum value the code is by definition broken regardless of dmd issues an error or not. The same is true for any kind of virtual dispatch (if a normal switch with default: wasn't enough your base class method, won't be either).

Yeah. I'm very much in favor of using derived classes over putting error types in the exceptions, but I'm not sure that breaking code using final switch is a good argument against the error types. You use final switch _because_ you want your code to break if the list of enum members changes. Then again, given the increased focus on not breaking user code and Walter's general attitude about that, I expect that he'd then be against ever adding or removing members from such an enum, because it _would_ break code. - Jonathan M Davis
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 10:33:00 Steven Schveighoffer wrote:
 On Wed, 03 Apr 2013 01:59:32 -0400, Lars T. Kyllingstad
 
 <public kyllingen.net> wrote:
 On Wednesday, 3 April 2013 at 05:42:13 UTC, Lars T. Kyllingstad wrote:
 The problem with assert is that it gets disabled in release mode.
 
 I think it is a bad idea to have this be the "standard" behaviour of
 
 parameter validation.

Allow me to clarify my position on assert() a bit. As a general rule, I think assert() should be used to verify internal program flow and program invariants, and nothing more. Specifically, public APIs should *not* change depending on whether asserts are compiled in or not.

Say I am writing a function that you are using. I don't trust you to always give me correct parameters, so I check them. (Maybe my function could even do something dangerous if I didn't.)

   public void myFunction(someArgs)
   {
       if (someArgs are invalid)
           throw new InvalidArgumentError;
       ...
   }

I disagree here. There are two "users" involved, one is the actual user, typing a command on the command line, and then the developer who uses the function. The developer should be checked with assert, the user should be checked with code like you wrote. The problem becomes apparent when developers don't check user input before passing to your functions. That is on them, not you. The library should be able to have all the safety checks removed to improve performance.

I completely agree with this. There _are_ Errors which should definitely stay in in release, but in general, anything relating to DbC should be removed in release. It's then up to the caller to decide what testing they do or don't want in release.
 I wish there was a way to say "this data is unchecked" or "this data is
 checked and certified to be correct" when you call a function. That way
 you could run the in contracts on user-specified data, even with asserts
 turned off, and avoid the checks in release code when the data has already
 proven valid.

That would definitely be cool, though I have no idea how we could ever implement that beyond creating wrapper types which indicate whether the data was checked or not, which would introduce extra overhead - though if you're willing to overload all of your functions, you could do something like

   Verified!T verifyFoo(T param)
   {
       //do checks...
       return Verified!T(param);
   }

   void foo(T param)
   {
       foo(verifyFoo(param));
   }

   void foo(Verified!T param)
   {
       ...
   }

That would be rather intrusive, but you _could_ do it if you wanted to.

However, the way that contracts really _should_ work is that whether they're enabled or not is completely dependent on how the caller is compiled rather than the callee, whereas right now, other than templated code, it's based entirely on how the callee is compiled. That way, it's completely up to the caller whether the checks happen or not. But unfortunately, I don't think that it would ever work very well from an implementation standpoint for it to work that way - not as long as we're using the C linker - since you'd need a way to associate the contracts with the functions in a way that it was up to the caller to run them or not, and I don't think that that's really possible with how C linking works.

- Jonathan M Davis
Apr 03 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 3 April 2013 at 18:37:09 UTC, Jonathan M Davis 
wrote:
 ... you could do something like

 Verify!T verifyFoo(T param)
 {
  //do checks...
  return Verify!T(param);
 }

 void foo(T param)
 {
  foo(verifyFoo(param));
 }

 void foo(Verified!T param)
 {
  ...

 }

 That would be rather intrusive, but you _could_ do it if you 
 wanted to.

What problems do you see with this (or a similar) approach? I find it very straightforward, and no overhead is really necessary. Isn't this exactly what the type system is for - enforcing static guarantees between different parts of an application?
Apr 03 2013
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 21:05:19 Dicebot wrote:
 On Wednesday, 3 April 2013 at 18:37:09 UTC, Jonathan M Davis
 
 wrote:
 ... you could do something like
 
 Verify!T verifyFoo(T param)
 {
 
 //do checks...
 return Verify!T(param);
 
 }
 
 void foo(T param)
 {
 
 foo(verifyFoo(param));
 
 }
 
 void foo(Verified!T param)
 {
 
 ...
 
 }
 
 That would be rather intrusive, but you _could_ do it if you
 wanted to.

What problems do you see with this (or similar) approach? I find it very straightforward and no overhead is really necessary. Isn't it exactly what type system is for - enforcing static guarantees in between different parts of application?

It's verbose. You're adding a fair bit of boilerplate just to statically determine whether verification has been done or not. It may very well be worth it in many cases, but it's enough extra code that I don't think that I'd advise it as a general solution. - Jonathan M Davis
Apr 03 2013
parent Jacob Carlborg <doob me.com> writes:
On 2013-04-03 23:44, Jonathan M Davis wrote:

 The main issue I have with the wrapper is the fact that you're then forced to
 overload your function if you want it to test the argument for validity if
 it's not wrapped and not test if it's wrapped. So, you're creating an extra
 overload with every function that's using the wrapper to determine whether it
 should test or not. And if you're not creating those overloads, then there was
 no point in creating the wrapper in the first place.

Then you're doing it wrong. The point is that you should validate the data in one place, then pass the validated data around.

You can also turn it around: instead of having Verified!(T) you could have Raw!(T). Wherever you get the input data from should return Raw!(T). You have one function accepting Raw!(T): validate. The rest of the functions accept T.

-- 
/Jacob Carlborg
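A minimal sketch of the Raw!(T) idea, with illustrative names (validate and compute are invented for the example):

```d
// Sketch: input enters the program wrapped in Raw!T, and only the
// validator unwraps it. After that, a plain T means "already validated".
struct Raw(T)
{
    T payload;
}

int validate(Raw!int input)
{
    // the single place where checking happens
    assert(input.payload >= 0, "negative input");
    return input.payload;
}

int compute(int validated)
{
    // trusts its argument; no re-checking needed
    return validated * 2;
}

void main()
{
    auto raw = Raw!int(21);
    auto result = compute(validate(raw));
    assert(result == 42);
    // compute(raw); // would not compile: Raw!int is not int
}
```

The design point is that the type system, not runtime checks, stops unvalidated data from reaching the trusting functions.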
Apr 04 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 3 April 2013 at 19:21:53 UTC, Jonathan M Davis 
wrote:
 It's verbose. You're adding a fair bit of boilerplate just to 
 statically
 determine whether verification has been done or not. It may 
 very well be worth
 it in many cases, but it's enough extra code that I don't think 
 that I'd
 advise it as a general solution.

 - Jonathan M Davis

Erm, really? In most cases only the wrapper type is needed. Almost all functions should accept either the verified or the raw version of the type, so there's no real boilerplate here (otherwise something is most likely wrong with your module responsibility organization). Wrapper types themselves are trivial and can be created automagically with some template/mixin. What else?
Apr 03 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Wednesday, April 03, 2013 22:00:17 Dicebot wrote:
 On Wednesday, 3 April 2013 at 19:21:53 UTC, Jonathan M Davis
 
 wrote:
 It's verbose. You're adding a fair bit of boilerplate just to
 statically
 determine whether verification has been done or not. It may
 very well be worth
 it in many cases, but it's enough extra code that I don't think
 that I'd
 advise it as a general solution.
 
 - Jonathan M Davis

Erm, really? In most cases only wrapper type is needed. Almost all functions should accepts either verified or raw type version so no real boilerplate here (otherwise something is most likely wrong with your module responsibility organization). Wrapper types themselves are trivial and can be created automagically with some template/mixin. What else?

The main issue I have with the wrapper is the fact that you're then forced to overload your function if you want it to test the argument for validity if it's not wrapped and not test if it's wrapped. So, you're creating an extra overload with every function that's using the wrapper to determine whether it should test or not. And if you're not creating those overloads, then there was no point in creating the wrapper in the first place. - Jonathan M Davis
Apr 03 2013
prev sibling next sibling parent "Dicebot" <m.strashun gmail.com> writes:
On Wednesday, 3 April 2013 at 21:44:36 UTC, Jonathan M Davis 
wrote:
 The main issue I have with the wrapper is the fact that you're 
 then forced to
 overload your function if you want it to test the argument for 
 validity if
 it's not wrapped and not test if it's wrapped. So, you're 
 creating an extra
 overload with every function that's using the wrapper to 
 determine whether it
 should test or not. And if you're not creating those overloads, 
 then there was
 no point in creating the wrapper in the first place.

 - Jonathan M Davis

Imagine a typical web app. It wants all string data used with the db backend escaped, to prevent any SQL injections. It does not want to add checks that data is escaped in every single db-related function, because those checks aren't free and the data is supposed to come in already escaped by the user input validator.

Consider using wrapper types here. The db backend functions don't need to accept raw data, because they are supposed to get it already escaped. User input validation works with raw strings and can never receive wrapped ones from anyone. It is like a contract, but it works in release builds and is verified by the type system instead of custom code.

You may just omit the wrapper, of course, but then you lose compile-time errors on attempts to send a raw string to the db. That is a huge difference.
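A rough sketch of that wrapper, assuming a toy escape function (EscapedSql, escape, and runQuery are hypothetical; a real db driver's escaping is far more involved):

```d
// Sketch: a wrapper type that marks a string as SQL-escaped, so the
// db layer can refuse raw strings at compile time.
import std.array : replace;

struct EscapedSql
{
    string text;
}

EscapedSql escape(string raw)
{
    // toy escaping: double single quotes; real escaping is driver-specific
    return EscapedSql(raw.replace("'", "''"));
}

void runQuery(EscapedSql q)
{
    // would hand q.text to the db driver; the type guarantees escaping ran
}

void main()
{
    string userInput = "O'Brien";
    runQuery(escape(userInput)); // compiles: escaped
    // runQuery(userInput);      // would not compile: raw string rejected
    assert(escape(userInput).text == "O''Brien");
}
```

As Dicebot says, this moves the "is it escaped?" contract from runtime checks in every db function into a single compile-time guarantee.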
Apr 04 2013
prev sibling next sibling parent "Jesse Phillips" <Jessekphillips+D gmail.com> writes:
On Wednesday, 3 April 2013 at 16:19:25 UTC, Ali Çehreli wrote:
      auto myFile = "some.tmp";
      scope(exit) remove(myFile);

      // setup code here
      manipulateFileRange(range);

We are in agreement that it would be impossible to prove one way or the other whether removing the file would be the right thing to do or whether it will succeed.

All you need is one example where it would remove the wrong file. I just requested that it have higher accuracy than Exception, since what you're claiming as invalid state is the same invalid state exceptions check for (I didn't expect this).
Apr 04 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Wednesday, 3 April 2013 at 21:44:36 UTC, Jonathan M Davis 
wrote:
 The main issue I have with the wrapper is the fact that you're 
 then forced to
 overload your function if you want it to test the argument for 
 validity if
 it's not wrapped and not test if it's wrapped. So, you're 
 creating an extra
 overload with every function that's using the wrapper to 
 determine whether it
 should test or not. And if you're not creating those overloads, 
 then there was
 no point in creating the wrapper in the first place.

I think the solution here is to ensure we have a way to implicitly construct the Verified from the value.
Apr 04 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, April 04, 2013 14:13:55 Ali Çehreli wrote:
 On 04/04/2013 12:16 PM, Ali Çehreli wrote:
 core.exception.RangeError deneme(125887): Range violation

I have realized something: Maybe some of the confusion here is due to range violation being an Error. I think that it should be an Exception. The rationale is, some function is told to provide the element at index 100 and there is no such element. The function cannot accomplish that task so it throws an Exception. (Same story for popFront() of an empty range.) My earlier comments about invalid program state apply to Error conditions. (Come to think of it, perhaps even more specifically to AssertError.)

Those are Errors for arrays. Sure, someone could choose to write their containers or ranges in a way that throws an exception when you try and access an element that isn't there, but that then incurs a performance cost for the program as a whole, which would be particularly bad if the standard library made that decision. In most cases, you know whether the element is there or not, and if you don't, it's easy to check, so from an efficiency standpoint, it makes far more sense to treat out-of-bounds errors (which is what RangeError really is) as Errors.

Phobos definitely takes the approach that accessing an element in a range that isn't there is an Error (be it by calling popFront on an empty range, or using opIndex on an element which isn't there, or whatever), and in general, I think that that's very much the correct approach. There are obviously exceptions to that (e.g. the in operator), but as far as indexing, slicing, and popFront go, it should be considered a programming bug if they're used on elements that aren't there.

- Jonathan M Davis
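The contract described above - the caller checks, the primitives trust - can be sketched like this (simplified; count is an invented helper, and Phobos's actual primitives carry the checks in version(assert) blocks):

```d
// Sketch: caller-side emptiness checks make the range primitives cheap;
// violating the contract is a programming bug, not a recoverable failure.
import std.range : iota;

int count(R)(R range)
{
    int n = 0;
    while (!range.empty) // the explicit caller-side check Phobos expects
    {
        range.popFront(); // legal only because we just checked empty
        ++n;
    }
    // calling range.front or range.popFront here would be a bug:
    // debug builds trip an assertion/RangeError, release builds don't check
    return n;
}

void main()
{
    assert(count(iota(0, 4)) == 4);
    assert(count(iota(0, 0)) == 0);
}
```

Because the check sits at the call site, it costs one comparison the caller needed anyway, instead of a throw-capable check inside every primitive.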
Apr 04 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Thursday, 4 April 2013 at 21:25:18 UTC, Jonathan M Davis wrote:
 On Thursday, April 04, 2013 14:13:55 Ali Çehreli wrote:
 On 04/04/2013 12:16 PM, Ali Çehreli wrote:
 core.exception.RangeError deneme(125887): Range violation

I have realized something: Maybe some of the confusion here is due to range violation being an Error. I think that it should be an Exception. The rationale is, some function is told to provide the element at index 100 and there is no such element. The function cannot accomplish that task so it throws an Exception. (Same story for popFront() of an empty range.) My earlier comments about invalid program state apply to Error conditions. (Come to think of it, perhaps even more specifically to AssertError.)

Those are Errors for arrays. Sure, someone could choose to write their containers or ranges in a way that throws an exception when you try and access an element that isn't there, but that then incurs a performance cost for the program as a whole, which would be particularly bad if the standard library then made that decision. In most cases, you know whether the element is there or not, and if you don't it's easy to check, so from an efficiency standpoint, it makes far more sense to treat out of bounds errors (which is what RangeError really is) as Errors. Phobos definitely take the approach that accessing an element in a range that isn't there is an Error (be it by calling popFront on an empty range or using opIndex to an element which isn't there or whatever), and in general, I think that that's very much the correct approach. There are obviously exceptions to that (e.g. the in operator), but as far as indexing, slicing, and popFront go, it should be considered a programming bug if they're used on elements that aren't there. - Jonathan M Davis

This is where the current design choice makes no sense. If they are Errors, the recovery code isn't executed. Such operations don't put the program in an invalid state (in fact, they prevent the program from going into an invalid state). In such a situation, not running that recovery code is likely to transform a small error into a huge mess. The case is very different from Ali's example before, where the wrong file can be deleted.
Apr 04 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, April 04, 2013 23:50:25 deadalnix wrote:
 On Thursday, 4 April 2013 at 21:25:18 UTC, Jonathan M Davis wrote:
 There are obviously exceptions to that (e.g. the in operator),
 but as far as
 indexing, slicing, and popFront go, it should be considered a
 programming bug
 if they're used on elements that aren't there.

It is where the current design choice make no sense. If they are error, the recovery code isn't executed. Such operation don't put the program in an invalid state (in fact, it prevent the program to go in invalid state). In such situation, not running that recovery code is likely to transform a small error into a huge mess. The case is very different from Ali's example before, where the wrong file can be deleted.

Well, the program has no way of knowing _why_ popFront is being called on an empty range or an invalid index is being passed to opIndex or opSlice. The fact that it happened is proof that either there's a programming bug or that things are corrupted and who-knows-what is happening. In either case, the program has no way of knowing whether it's safe to run the clean-up code or not. It could be perfectly safe, or things could already be in seriously bad shape, and running the clean-up code would make things worse (possibly resulting in things like deleting the wrong file, depending on what the clean-up code does and what went wrong).

The problem is that while it's frequently safe to just run the clean-up code, sometimes it's very much _not_ safe to run it (especially if you get memory corruption in @system code or something like that). And we have to decide which risk is worse.

And one good thing to remember is that Errors should be _extremely_ rare. They should basically only happen in debug builds when you're writing and debugging the program, and in released code when things go horribly, horribly wrong. And that would mean that it's far more likely that in production code, Errors are normally being thrown in situations where doing clean-up is likely to make things worse.

Another good thing to remember is that there's _never_ any guarantee that clean-up code will actually run, because your program could be forcibly killed in a way that you can't control or protect against (e.g. the plug being pulled). So if your code truly relies on the clean-up code running for it to work properly when it's restarted, or to leave your system in a consistent state, or anything like that, then you're pretty much screwed regardless of whether clean-up is done on Errors.

- Jonathan M Davis
Apr 04 2013
prev sibling next sibling parent "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
Thank you. I'd like to say I agree with you that errors should not run cleanup, and your definition of when we don't want to run cleanup code is spot on. I'm also not looking to change the language spec. I'm still struggling with convincing myself that this thrown error more likely indicates a corrupt state than an exception does.

This post will likely get long, I'm just hoping to articulate why 
I'm struggling to be in full agreement here.

One thing I keep thinking is: what about when I'm trying to write/read the file and some code throws an Exception prior to the range access? Cleanup would be run and no error in sight.

Then I think about: what if arrays threw an exception? Or why is it an error? Arrays make an agreement that they will operate on valid input. If you index outside the array, then the operation "can't complete that task."

And to that, the reason arrays don't throw IndexOutOfBoundsException is that in release mode the check for that condition no longer runs.

So I'm back to considering why a RangeError gives a better indication that the program has entered an invalid state than an IOOBException would.

What I've come to is that an Error indicates corruption more accurately than an Exception does, because a program is expected never to hit an Error; the only logical explanation is that either cosmic rays flipped some bits or another component of the program has overwritten this perfectly good section of code "I'm executing."

I think I'm satisfied with this. Thanks again.
Apr 04 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 5 April 2013 at 01:11:40 UTC, Jonathan M Davis wrote:
 Well, the program has no way of knowing _why_ popFront is being 
 called on an
 empty range or an invalid index is being passed to opIndex or 
 opSlice. The
 fact that it happened is proof that either there's a 
 programming bug or that
 things are corrupted and who-knows-what is happening. In either 
 case, the
 program has no way of knowing whether it's safe to run the 
 clean-up code or
 not. It could be perfectly safe, or things could already be in 
 seriously bad
 shape, and running the clean-up code would make things worse 
 (possibly
 resulting in things like deleting the wrong file, depending on 
 what the clean-
 up code does and what went wrong).

 The problem is that while it's frequently safe to just run the 
 clean-up code,
 sometimes it's very much _not_ safe to run it (especially if 
 you get memory
 corruption in  system code or something like). And we have to 
 decide which
 risk is worse.

 And one good thing to remember is that Errors should be 
 _extremely_ rare. They
 should basically only happen in debug builds when you're 
 writing and debugging
 the program and in released code when things go horribly, 
 horribly wrong. And
 that would mean that it's far more likely that in production 
 code, Errors are
 normally being thrown in situations where doing clean-up is 
 likely to make
 things worse.

 Another good thing to remember is that there's _never_ any 
 guarantee that
 clean-up code wil actually run, because your program could be 
 forcibly killed
 in a way that you can't control or protect against (e.g. the 
 plug being
 pulled), so if your code truly relies on the clean-up code 
 running for it to
 work properly when it's restarted or leave your system in a 
 consistent state
 or anything like that, then you're pretty much screwed 
 regardless of whether
 clean-up is done on Errors.

 - Jonathan M Davis

Removing the plug is a failure way more serious than an array 
out-of-bounds access. Why do we want to worsen the array case 
just because the latter may happen?

I guess that is the same logic that leads to these cars we see 
in movies that explode every time something goes wrong. After 
all, the car is likely to be broken, so let's just let it 
explode.

Back to a more software-related example: let's consider a media 
player in which such an error occurs (such software uses a lot 
of 3rd-party code to support many formats, desktop integration, 
whatever). How do you argue that the software must plain crash, 
and, by the way, the config and playlist are not saved, so 
you'll restart the player playing random crap, preferably at 
maximum volume in your headphones (bonus points if it is some 
porn in a public area), instead of simply displaying a 
graphical glitch, skipping a frame, going to the next item in 
the playlist, or even quitting while saving the playlist/config 
so it can be restarted and the user can resume the film?

Right now, it isn't even possible to try a graceful shutdown 
when really, the program is unlikely to be in a completely 
unpredictable state, especially in @safe code.
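[A sketch of the kind of last-ditch handling being asked for 
here; savePlaylist and the simulated decoder bug are 
hypothetical, and nothing inside a catch(Error) block is 
guaranteed to work:]

```d
__gshared bool playlistSaved = false;

void savePlaylist() nothrow
{
    // Hypothetical: persist playlist/config via the simplest
    // possible code path, since the program state is already suspect.
    playlistSaved = true;
}

void decodeNextFrame()
{
    int[] frame;
    frame[0] = 0; // simulated bug in 3rd-party decoding code: RangeError
}

void main()
{
    try
    {
        decodeNextFrame();
    }
    catch (Error e)
    {
        // Best-effort graceful shutdown; a real player would still
        // terminate (or rethrow) after this.
        savePlaylist();
    }
    assert(playlistSaved);
}
```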
Apr 05 2013
prev sibling next sibling parent "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Friday, April 05, 2013 15:42:00 deadalnix wrote:
 Removing the plug a failure that is way more serious than an
 array out of bound access. Why do we want to worsen the array
 thing just because the later may happen ?
 
 I guess that is the same logic that lead to theses cars we see in
 movies that explode each time something goes wrong. After all,
 the car is likely to be broken, so let's just let it explode.
 
 Back on a more software related example. Let's consider a media
 player in which such error occurs (such software uses a lot of
 3rd party code to support many format, desktop integration,
 whatever). How to argue that the software must plain crash, and,
 by the way, the config and playlist are not saved, so you'll
 restart the soft playing random crap preferably at maximum volume
 in your headphones (bonus point if it is some porn in a public
 area), instead of simply displaying a graphical glitch, skip a
 frame, go to the next item in the playlist, or even quit while
 saving the playlist/config so it can be restarted and the user
 can resume its film ?
 
 Right now, it isn't even possible to try a graceful shutdown when
 really, the program is unlikely to be in a completely
 unpredictable state, especially in  safe code.

Part of the point is that if you had an Error, there _is_ no 
graceful shutdown. It's in an invalid state. Doing _anything_ 
at that point is risky. Depending on why the Error occurred, 
you could just as easily completely corrupt playlist files as 
properly update them. If you have an Error, you basically just 
crashed. Don't expect that to be any cleaner than if someone 
just pulled the plug on your computer.

The fact that an Error is thrown rather than the program simply 
aborting gives you some chance of saving your program in 
circumstances where you actually know that catching the Error 
and continuing can work (e.g. in rare circumstances, that might 
work with OutOfMemoryError), and it gives you the chance to 
attempt some truly critical clean-up code if you want to, but 
again, you can _never_ rely on clean-up happening correctly 
100% of the time no matter what the language does, because your 
program could be forcibly killed by kill -9 or by power loss or 
whatever. So, if there's anything that you need to do in order 
to guarantee that your program always has consistent state, 
don't even rely on Exceptions for that. It won't always work.

Errors should be extremely rare. It's not like you just had bad 
input. If an Error occurs, something seriously wrong happened 
in your program. It could be a programming bug that you never 
caught, or it could be that you have data corruption or that 
your hardware is actually busted and doing the wrong thing. You 
can't know what's going wrong, so you can't know how safe it is 
to attempt clean-up code. Maybe it is, maybe it isn't. It's 
basically the same as if your computer outright crashed. And do 
you really expect that to go cleanly?

All the programs that I use that have playlists and the like 
return to a previously valid state when they crash or are 
killed - either by updating their state as they go along and 
making sure that they do so in a way that the playlist is 
valid, and/or by restoring to the state that they were in when 
the program last shut down correctly. Neither of those requires 
that Errors be handled. Rather, they require making updates 
after successful operations. It's then pretty much irrelevant 
if future operations go horribly wrong. When you start the 
program up again, it'll simply put itself back in the last 
known good state.

- Jonathan M Davis
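[The "last known good state" approach described above can be 
sketched as a write-then-rename update; file names are 
illustrative, and the rename is atomic on POSIX only when 
source and destination are on the same filesystem:]

```d
import std.file : readText, remove, rename, write;

__gshared string savedContents;

// Update state atomically: write a temporary file first, then
// rename it over the real one. If the program dies mid-write
// (Error, kill -9, power loss), the previous good state survives.
void saveState(string path, string state)
{
    auto tmp = path ~ ".tmp";
    write(tmp, state);
    rename(tmp, path);
}

void main()
{
    auto path = "playlist.txt";
    saveState(path, "track1\ntrack2\n");
    savedContents = readText(path);
    remove(path); // tidy up after the demo
    assert(savedContents == "track1\ntrack2\n");
}
```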
Apr 05 2013
prev sibling next sibling parent "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 5 April 2013 at 13:42:02 UTC, deadalnix wrote:

 Right now, it isn't even possible to try a graceful shutdown 
 when really, the program is unlikely to be in a completely 
 unpredictable state, especially in  safe code.

It is possible. Catch the error.

However, having the language pretend that it can make any 
logical guarantees to you like it does with exceptions (i.e. 
finally blocks, chaining etc.) only encourages people not to 
take Errors as seriously as one should. Soon people would be 
throwing Errors where they should be Exceptions and vice versa. 
Even worse: people would be catching Errors everywhere, and 
their code could be happily running for days performing 
undefined behaviour.

This is a similar situation to shared (although with some 
important differences). Making it easier to use would be like 
putting a seatbelt on a motorbike. Sure, it might be safer some 
of the time. It'll definitely require less care to use. But 
when the bike slips sideways underneath you going round a bend 
at 80mph, you need to kick it away as fast as possible. It'll 
save you all the times it *doesn't* matter, but it'll kill you 
that one time when it *does*.
Apr 05 2013
prev sibling next sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 5 April 2013 at 18:38:46 UTC, Jonathan M Davis wrote:
 Part of the point is that if you had an Error, there _is_ no 
 graceful
 shutdown. It's in an invalid state. Doing _anything_ at that 
 point is risky.
 Depending on why the Error occurred, you could just as easily 
 completely
 corrupt playlist files as properly update them. If you have an 
 Error, you
 basically just crashed. Don't expect that to be any cleaner 
 than if someone
 just pulled the plug on your computer.

That is once again nonsense. It MAY not be possible to recover, 
but that doesn't mean that refusing to even make it possible to 
try is the right thing to do.
Apr 06 2013
prev sibling parent "deadalnix" <deadalnix gmail.com> writes:
On Friday, 5 April 2013 at 19:39:14 UTC, John Colvin wrote:
 On Friday, 5 April 2013 at 13:42:02 UTC, deadalnix wrote:

 Right now, it isn't even possible to try a graceful shutdown 
 when really, the program is unlikely to be in a completely 
 unpredictable state, especially in  safe code.

It is possible. Catch the error.

No. At this point, the small issue has already been transformed 
into complete havoc. Mutexes are not released, nothing is 
cleaned up, etc.
 However, having the language pretend that it can make any 
 logical guarantees to you like it does with exceptions (i.e. 
 finally blocks, chaining etc.) only encourages people not to 
 take Errors as seriously as one should.
 Soon people are throwing errors where they should be exceptions 
 and vice versa. Even worse: people will be catching errors 
 everywhere and their code could be happily running for days 
 performing undefined behaviour.

Well, go all the way down that reasoning: nothing ensures that 
the stack isn't corrupted and that unwinding is even possible.
 This is a similar situation to shared (although with some 
 important differences). Making it easier to use would be like 
 putting a seatbelt on a motorbike. Sure, it might be safer some 
 of the time. It'll definitely require less care to use. But 
 when the bike slips sideways underneath you going round a bend 
 at 80mph, you need to kick it away as fast as possible.
 It'll save you all the times it *doesn't* matter, but it'll 
 kill you that one time when it *does*.

I'm not sure a media player can kill me.
Apr 06 2013