
digitalmars.D - Program logic bugs vs input/environmental errors

reply Walter Bright <newshound2 digitalmars.com> writes:
This issue comes up over and over, in various guises. I feel like Yosemite Sam
here:

     https://www.youtube.com/watch?v=hBhlQgvHmQ0

In that vein, Exceptions are for either recovering from input/environmental
errors, or reporting them to the user of the application.

When I say "They are NOT for debugging programs", I mean they are NOT for 
debugging programs.

assert()s and contracts are for debugging programs.
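
To make the distinction concrete, here is a minimal D sketch (the loadConfig function and its messages are hypothetical, not from the thread): enforce() raises an Exception for an input/environmental error the user can be told about, while assert() checks an assumption that can only be false if the calling code is buggy.

    import std.exception : enforce;
    import std.file : exists, readText;

    // Hypothetical helper: load a configuration file named by the user.
    string loadConfig(string path)
    {
        // An empty path can only come from a bug in the calling code:
        // assert() documents and checks the programmer's assumption.
        assert(path.length != 0, "loadConfig called with an empty path");

        // A missing file is an environmental error the user can cause
        // and fix: enforce() throws an Exception meant to be caught
        // and reported.
        enforce(exists(path), "cannot open '" ~ path ~ "': file not found");

        return readText(path);
    }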

After all, what would you think of a compiler that spewed out messages like
this:

    > dmd test.d
    test.d(15) Error: missing } thrown from dmd/src/parse.c(283)

?

See:

     https://issues.dlang.org/show_bug.cgi?id=13543

As for the programmer wanting to know where the message "missing }" came from,

     grep -r "missing }" dmd/src/*.c

works nicely. I do that sort of thing all the time. It really isn't a problem.
Sep 27 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 As for the programmer wanting to know where the message 
 "missing }" came from,

     grep -r "missing }" dmd/src/*.c

 works nicely. I do that sort of thing all the time. It really 
 isn't a problem.
grep is not useful for the purposes explained in issue 13543, because the file name is often inside a string variable that is initialized elsewhere or generated in some way. So the exception is useful for finding the instruction in user code that attempted the failed I/O action, as I've explained in that issue.

Bye,
bearophile
Sep 27 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/27/2014 4:33 PM, bearophile wrote:
 Walter Bright:

 As for the programmer wanting to know where the message "missing }" came from,

     grep -r "missing }" dmd/src/*.c

 works nicely. I do that sort of thing all the time. It really isn't a problem.
 grep is not useful for the purposes explained in issue 13543, because the file name is often inside a string variable that is initialized elsewhere or generated in some way. So the exception is useful for finding the instruction in user code that attempted the failed I/O action, as I've explained in that issue.
Even if that is what you wanted, you won't get that from FileException, as it will only show file/lines emanating from calls inside std.file, not from higher-level callers.

Besides, take a bit of care when formulating a string for exceptions, and you won't have any trouble grepping for it. This isn't rocket science.

Presenting internal debugging data to users for input/environmental errors is just bad programming practice. We shouldn't be enshrining it in Phobos and presenting it as a professional way to code.
Sep 27 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Sep 27, 2014 at 04:42:18PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/27/2014 4:33 PM, bearophile wrote:
Walter Bright:

As for the programmer wanting to know where the message "missing }"
came from,

    grep -r "missing }" dmd/src/*.c

works nicely. I do that sort of thing all the time. It really isn't
a problem.
 grep is not useful for the purposes explained in issue 13543, because the file name is often inside a string variable that is initialized elsewhere or generated in some way. So the exception is useful for finding the instruction in user code that attempted the failed I/O action, as I've explained in that issue.
 Even if that is what you wanted, you won't get that from FileException, as it will only show file/lines emanating from calls inside std.file, not from higher-level callers. Besides, take a bit of care when formulating a string for exceptions, and you won't have any trouble grepping for it. This isn't rocket science. Presenting internal debugging data to users for input/environmental errors is just bad programming practice. We shouldn't be enshrining it in Phobos and presenting it as a professional way to code.
My take on this is that uncaught exceptions are a program bug. Any messages displayed to the user ought to come from a catch block that not only prints the exception message (*without* things like line numbers and stack traces, btw), but also provides context (e.g., "Error in configuration file section 'abc': illegal field value" instead of just "illegal field value" with no context of where it might have been triggered).

Uncaught exceptions (which ideally should only be Errors, not Exceptions) are a program bug that ought to be fixed. In the case that somehow one managed to elude your catch blocks, the full debug infodump (source file, line number, stack trace) is useful for users to hand back to you in a bug report, so that you can track down the problem. The user should not be expected to understand the infodump from an uncaught exception, whereas a message printed from a catch block ought to be user-understandable (like "can't open 'myphoto.jpg': file not found", not "internal error on line 12345", which makes no sense to a user).

T

--
Laissez-faire is a French term commonly interpreted by Conservatives to mean 'lazy fairy,' which is the belief that if governments are lazy enough, the Good Fairy will come down from heaven and do all their work for them.
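
A minimal sketch of the pattern described above, assuming a hypothetical parseConfig() that throws on bad input: the catch block supplies the user-level context the low-level code lacked, and nothing like e.file, e.line, or a stack trace leaks out.

    import std.exception : enforce;
    import std.stdio : stderr;

    // Hypothetical parser; throws an Exception with a low-level message.
    void parseConfig(string path)
    {
        enforce(false, "illegal field value"); // stand-in for real parsing
    }

    int main(string[] args)
    {
        immutable path = args.length > 1 ? args[1] : "app.conf";
        try
            parseConfig(path);
        catch (Exception e)
        {
            // Show only the message, wrapped in context the user understands.
            stderr.writeln("error in configuration file '", path, "': ", e.msg);
            return 1;
        }
        return 0;
    }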
Sep 27 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/27/2014 4:55 PM, H. S. Teoh via Digitalmars-d wrote:
 My take on this is that uncaught exceptions are a program bug.
Not to me. Custom messages would be better, but the exception message should be serviceable.
 Uncaught exceptions (which ideally should only be Errors, not
 Exceptions) are a program bug that ought to be fixed. In the case that
 somehow one managed to elude your catch blocks, the full debug infodump
 (source file, line number, stack trace) is useful for users to hand back
 to you in a bug report, so that you can track down the problem. The user
 should not be expected to understand the infodump from an uncaught
 exception, whereas a message printed from a catch block ought to be
 user-understandable (like "can't open 'myphoto.jpg': file not found",
 not "internal error on line 12345" which makes no sense to a user).
Whoa, Camel! You're again thinking of Exceptions as a debugging tool.
Sep 27 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/28/2014 02:40 AM, Walter Bright wrote:
 On 9/27/2014 4:55 PM, H. S. Teoh via Digitalmars-d wrote:
 My take on this is that uncaught exceptions are a program bug.
Not to me. ...
Is it not worth fixing if a program terminates with a stack trace?
Sep 27 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/27/2014 5:54 PM, Timon Gehr wrote:
 On 09/28/2014 02:40 AM, Walter Bright wrote:
 On 9/27/2014 4:55 PM, H. S. Teoh via Digitalmars-d wrote:
 My take on this is that uncaught exceptions are a program bug.
Not to me. ...
 Is it not worth fixing if a program terminates with a stack trace?
I never was in favor of adding the stack trace output, either, for the same reason: Exceptions are not programming bugs. For Errors, a case can be made for stack traces.
Sep 27 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 00:40:26 UTC, Walter Bright wrote:
 Whoa, Camel! You're again thinking of Exceptions as a debugging 
 tool.
They can be. What if an API you're using throws an exception you didn't expect, and therefore don't handle? This might be considered a logic error if the exception is recoverable and you don't intend the program to abort from that operation.
Sep 28 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 16:16:09 UTC, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 00:40:26 UTC, Walter Bright 
 wrote:
 Whoa, Camel! You're again thinking of Exceptions as a 
 debugging tool.
 They can be. What if an API you're using throws an exception you didn't expect, and therefore don't handle? This might be considered a logic error if the exception is recoverable and you don't intend the program to abort from that operation.
Also, I think the idea that a program is created and shipped to an end user is overly simplistic. In the server/cloud programming world, when an error occurs, the client who submitted the request will get a response appropriate for them, and the system will also generate log information intended for people working on the system. So things like stack traces and assertion failure information are useful even for production software. Same with any critical system, as I'm sure you're aware. The systems are designed to handle failures in specific ways, but they also have to leave a breadcrumb trail so the underlying problem can be diagnosed and fixed.

Internal testing is never perfect, and achieving a high coverage percentage is nearly impossible if the system wasn't designed from the ground up to be testable in such a way (mock frameworks and such).
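
As a hedged illustration of that server-side split (the Request/Response types and process() are hypothetical): the client receives a sanitized reply, while the full diagnostic, stack trace included, goes to the log for the people operating the system.

    import std.stdio : File;

    struct Request  { string id; }
    struct Response { int status; string text; }

    Response process(Request req) // stand-in for the real work; may throw
    {
        throw new Exception("boom");
    }

    // Hypothetical per-request entry point on a server.
    Response handle(Request req, File log)
    {
        try
            return process(req);
        catch (Exception e)
        {
            // Operators get the full infodump, trace and all...
            log.writeln("request ", req.id, " failed: ", e.toString());
            // ...the client gets a response appropriate for them.
            return Response(500, "internal error; reference id " ~ req.id);
        }
    }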
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 9:23 AM, Sean Kelly wrote:
 Also, I think the idea that a program is created and shipped to an end user is
 overly simplistic.  In the server/cloud programming world, when an error occurs,
 the client who submitted the request will get a response appropriate for them,
 and the system will also generate log information intended for people working on
 the system.  So things like stack traces and assertion failure information are
 useful even for production software.  Same with any critical system, as I'm sure
 you're aware.  The systems are designed to handle failures in specific ways, but
 they also have to leave a breadcrumb trail so the underlying problem can be
 diagnosed and fixed.  Internal testing is never perfect, and achieving a high
 coverage percentage is nearly impossible if the system wasn't designed from the
 ground up to be testable in such a way (mock frameworks and such).
Then use assert(). That's just what it's for.
Sep 28 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 17:33:38 UTC, Walter Bright wrote:
 On 9/28/2014 9:23 AM, Sean Kelly wrote:
 Also, I think the idea that a program is created and shipped 
 to an end user is overly simplistic.  In the server/cloud 
 programming world, when an error occurs, the client who 
 submitted the request will get a response appropriate for them 
 and the system will also generate log information intended for 
 people working on the system.  So things like stack traces and 
 assertion failure information is useful even for production 
 software.  Same with any critical system, as I'm sure you're 
 aware.  The systems are designed to handle failures in 
 specific ways, but they also have to leave a breadcrumb trail 
 so the underlying problem can be diagnosed and fixed.  
 Internal testing is never perfect, and achieving a high 
 coverage percentage is nearly impossible if the system wasn't 
 designed from the ground up to be testable in such a way (mock 
 frameworks and such).
 Then use assert(). That's just what it's for.
What if I don't want to be forced to abort the program in the event of such an error?
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 12:33 PM, Sean Kelly wrote:
 Then use assert(). That's just what it's for.
 What if I don't want to be forced to abort the program in the event of such an error?
Then we are back to the discussion about whether a program can continue after a logic error is uncovered, or not.

In any program, the programmer must decide if an error is a bug or not, before shipping it. Trying to avoid making this decision leads to confusion and using the wrong techniques to deal with it.

A program bug is, by definition, unknown and unanticipated. The idea that one can "recover" from it is fundamentally wrong. Of course, in D one can try and recover from them anyway, but you're on your own trying that, just as you're on your own when casting integers to pointers.

On the other hand, input/environmental errors must be anticipated, and can often be recovered from. But presenting debug traces to the users for these implies at the very least a sloppily engineered product, in my not so humble opinion :-)
Sep 28 2014
next sibling parent reply Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 28/09/14 22:13, Walter Bright via Digitalmars-d wrote:
 On 9/28/2014 12:33 PM, Sean Kelly wrote:
 Then use assert(). That's just what it's for.
 What if I don't want to be forced to abort the program in the event of such an error?
 Then we are back to the discussion about whether a program can continue after a logic error is uncovered, or not. In any program, the programmer must decide if an error is a bug or not, before shipping it. Trying to avoid making this decision leads to confusion and using the wrong techniques to deal with it. A program bug is, by definition, unknown and unanticipated. The idea that one can "recover" from it is fundamentally wrong. Of course, in D one can try and recover from them anyway, but you're on your own trying that, just as you're on your own when casting integers to pointers.
Allowing for your "you can try ..." remarks, I still feel this doesn't really cover the practical realities of how some applications need to behave.

Put it this way: suppose we're writing the software for a telephone exchange, which is handling thousands of simultaneous calls. If an Error is thrown inside the part of the code handling one single call, is it correct to bring down everyone else's call too?

I appreciate that you might tell me "You need to find a different means of error handling that can distinguish errors that are recoverable", but the bottom line is, in such a scenario it's not possible to completely rule out an Error being thrown (an obvious cause would be an assert that gets triggered because the programmer forgot to put a corresponding enforce() statement at a higher level in the code). However, it's clearly very desirable in this use-case for the application to keep going if at all possible and for any problem, even an Error, to be contained in its local context if we can do so. (By "local context", in practice this probably means a thread or fiber or some other similar programming construct.)

Sean's touched on this in the current thread with his reference to Erlang, and I remember that he and Dicebot brought the issue up in an earlier discussion on the Error vs. Exception question, but I don't recall that discussion having any firm conclusion, and I think it's important to address; we can't simply take "An Error is unrecoverable" as a point of principle for every application.

(Related note: if I recall right, an Error or uncaught Exception thrown within a thread or fiber will not actually bring the application down, only cause that thread/fiber to hang, without printing any indication of anything going wrong. So on a purely practical basis, it can be essential for the top-level code of a thread or fiber to have a catch {} block for both Errors and Exceptions, just in order to be able to report what has happened effectively.)
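
The reporting pattern from that related note, as a minimal sketch (worker() and doWork() are hypothetical names): the thread's top-level function catches Throwable purely so the failure is reported somewhere, instead of the thread dying silently.

    import core.thread : Thread;
    import std.stdio : stderr;

    void doWork() { assert(0, "simulated logic bug"); } // hypothetical work

    void worker()
    {
        try
            doWork(); // may throw an Exception or an Error
        catch (Throwable t)
        {
            // Report what happened, then let the thread end; no attempt
            // is made to keep running after an Error.
            stderr.writeln("worker thread died: ", t.toString());
        }
    }

    void main()
    {
        auto t = new Thread(&worker);
        t.start();
        t.join();
    }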
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 3:51 PM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 However, it's clearly very desirable in this use-case for the application to
 keep going if at all possible and for any problem, even an Error, to be
 contained in its local context if we can do so.  (By "local context", in
 practice this probably means a thread or fiber or some other similar
 programming construct.)
If the program has entered an unknown state, its behavior from then on cannot be predictable. There's nothing I or D can do about that. D cannot officially endorse such a practice, though D being a systems programming language it will let you do what you want.

I would not even consider such a practice for a program that is in charge of anything that could result in injury, death, property damage, security breaches, etc.
Sep 28 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 00:09:59 UTC, Walter Bright wrote:
 On 9/28/2014 3:51 PM, Joseph Rushton Wakeling via Digitalmars-d 
 wrote:
 However, it's clearly very desirable in this use-case for the application to
 keep going if at all possible and for any problem, even an Error, to be
 contained in its local context if we can do so.  (By "local context", in
 practice this probably means a thread or fiber or some other similar
 programming construct.)
 If the program has entered an unknown state, its behavior from then on cannot be predictable. There's nothing I or D can do about that. D cannot officially endorse such a practice, though D being a systems programming language it will let you do what you want. I would not even consider such a practice for a program that is in charge of anything that could result in injury, death, property damage, security breaches, etc.
Well... suppose you design a system with redundancy such that an error in a specific process isn't enough to bring down the system. Say it's a quorum method or whatever. In the instance that a process goes crazy, I would argue that the system is in an undefined state, but a state that it's designed specifically to handle, even if that state can't be explicitly defined at design time. Now if enough things go wrong at once the whole system will still fail, but it's about building systems with the expectation that errors will occur. They may even be logic errors--I think it's kind of irrelevant at that point.

Even in a network of communicating processes, one process getting into a bad state can theoretically poison the entire system, and you're often not in a position to simply shut down the whole thing and wait for a repairman. And simply rebooting the system, if it's a bad sensor that's causing the problem, just means a pause before another failure cascade. I think any modern program designed to run continuously (increasingly the typical case) must be designed with some degree of resiliency or self-healing in mind. And that means planning for and limiting the scope of undefined behavior.
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 6:39 PM, Sean Kelly wrote:
 Well... suppose you design a system with redundancy such that an error in a
 specific process isn't enough to bring down the system.  Say it's a quorum
 method or whatever.  In the instance that a process goes crazy, I would argue
 that the system is in an undefined state, but a state that it's designed
 specifically to handle, even if that state can't be explicitly defined at
 design time.  Now if enough things go wrong at once the whole system will still
 fail, but it's about building systems with the expectation that errors will
 occur.  They may even be logic errors--I think it's kind of irrelevant at that
 point.

 Even in a network of communicating processes, one process getting into a bad
 state can theoretically poison the entire system, and you're often not in a
 position to simply shut down the whole thing and wait for a repairman.  And
 simply rebooting the system, if it's a bad sensor that's causing the problem,
 just means a pause before another failure cascade.  I think any modern program
 designed to run continuously (increasingly the typical case) must be designed
 with some degree of resiliency or self-healing in mind.  And that means
 planning for and limiting the scope of undefined behavior.
I've said that processes are different, because the scope of the effects is limited by the hardware.

If a system with threads that share memory cannot be restarted, there are serious problems with the design of it, because a crash and the necessary restart are going to happen sooner or later, probably sooner.

I don't believe that the way to get 6 sigma reliability is by ignoring errors and hoping. Airplane software is most certainly not done that way.

I recall Toyota got into trouble with their computer controlled cars because of their idea of handling of inevitable bugs and errors. It was one process that controlled everything. When something unexpected went wrong, it kept right on operating, any unknown and unintended consequences be damned.

The way to get reliable systems is to design to accommodate errors, not pretend they didn't happen, or hope that nothing else got affected, etc. In critical software systems, that means shut down and restart the offending system, or engage the backup. There's no other way that works.
Sep 28 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 02:57:03 UTC, Walter Bright wrote:
 I've said that processes are different, because the scope of 
 the effects is limited by the hardware.

 If a system with threads that share memory cannot be restarted, 
 there are serious problems with the design of it, because a 
 crash and the necessary restart are going to happen sooner or 
 later, probably sooner.
Right. But if the condition that caused the restart persists, the process can end up in a cascading restart scenario. Simply restarting on error isn't necessarily enough.
 I don't believe that the way to get 6 sigma reliability is by 
 ignoring errors and hoping. Airplane software is most certainly 
 not done that way.
I believe I was arguing the opposite. More to the point, I think it's necessary to expect undefined behavior to occur and to plan for it. I think we're on the same page here and just miscommunicating.
 I recall Toyota got into trouble with their computer controlled 
 cars because of their idea of handling of inevitable bugs and 
 errors. It was one process that controlled everything. When 
 something unexpected went wrong, it kept right on operating, 
 any unknown and unintended consequences be damned.

 The way to get reliable systems is to design to accommodate 
 errors, not pretend they didn't happen, or hope that nothing 
 else got affected, etc. In critical software systems, that 
 means shut down and restart the offending system, or engage the 
 backup.
My point was that it's often more complicated than that. There have been papers written on self-repairing systems, for example, and ways to design systems that are inherently durable when it comes to even internal errors.

I think what I'm trying to say is that simply aborting on error is too brittle in some cases, because it only deals with one vector--memory corruption that is unlikely to reoccur. But I've watched always-on systems fall apart from some unexpected ongoing situation, and simply restarting doesn't actually help.
Sep 28 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 9:03 PM, Sean Kelly wrote:
 On Monday, 29 September 2014 at 02:57:03 UTC, Walter Bright wrote:
 Right.  But if the condition that caused the restart persists, the process can
 end up in a cascading restart scenario.  Simply restarting on error isn't
 necessarily enough.
When it isn't enough, use the "engage the backup" technique.
 I don't believe that the way to get 6 sigma reliability is by ignoring errors
 and hoping. Airplane software is most certainly not done that way.
 I believe I was arguing the opposite. More to the point, I think it's necessary to expect undefined behavior to occur and to plan for it. I think we're on the same page here and just miscommunicating.
Assuming that the program bug couldn't have affected other threads is relying on hope. Bugs happen when the program has gone into an unknown and unanticipated state. You cannot know, until after you debug it, what other damage the fault caused, or what other damage caused the detected fault.
 My point was that it's often more complicated than that.  There have been papers
 written on self-repairing systems, for example, and ways to design systems that
 are inherently durable when it comes to even internal errors.
I confess much skepticism about such things when it comes to software. I do know how reliable avionics software is done, and that stuff does work even in the face of all kinds of bugs, damage, and errors. I'll be betting my life on that tomorrow :-)

Would you bet your life on software that had random divide by 0 bugs in it that were just ignored in the hope that they weren't serious? Keep in mind that software is rather unique in that a single bit error in a billion bytes can render the software utterly demented.

Remember the Apollo 11 lunar landing, when the descent computer software started showing self-detected faults? Armstrong turned it off and landed manually. He wasn't going to bet his ass that the faults could be ignored. You and I wouldn't, either.
 I think what I'm
 trying to say is that simply aborting on error is too brittle in some cases,
 because it only deals with one vector--memory corruption that is unlikely to
 reoccur.  But I've watched always-on systems fall apart from some unexpected
 ongoing situation, and simply restarting doesn't actually help.
In such a situation, ignoring the error seems hardly likely to do any better.
Sep 28 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 05:15:14 UTC, Walter Bright wrote:
 I confess much skepticism about such things when it comes to 
 software. I do know how reliable avionics software is done, and 
 that stuff does work even in the face of all kinds of bugs, 
 damage, and errors. I'll be betting my life on that tomorrow :-)

 Would you bet your life on software that had random divide by 0 
 bugs in it that were just ignored in the hope that they weren't 
 serious? Keep in mind that software is rather unique in that a 
 single bit error in a billion bytes can render the software 
 utterly demented.
I'm not saying the errors should be ignored, but rather that there are other approaches to handling errors besides (or in addition to) terminating the process. For me, the single most important thing is detecting errors as soon as possible so corrective action can be taken before things go too far south (so hooray for contracts!). From there, the proper response depends on the error detected and the type of system I'm working on. Like with persistent stateful systems, even if a restart occurs, can you assume that the persisted state is valid? With a mesh of communicating systems, if one node goes insane, what impact might it have on other nodes in the network? I think the definition of what constitutes an interdependent system is application defined.

And yes, I know all about tiny bugs creating insane problems. With event-based asynchronous programming, the most common serious bugs I encounter are memory corruption problems from dangling pointers, and the only way to find and fix these is by analyzing gigabytes worth of log files to try and unpack what happened after the fact. Spending a day looking at the collateral damage from what ultimately turns out to be a backwards conditional expression in an error handler somewhere gives a pretty healthy respect for the brittleness of memory-unsafe code. This is one area where having a GC is an enormous win.
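
For the "hooray for contracts" point, a small sketch of early detection at a function boundary (scaleReading() is a hypothetical name, using D's in/out contract syntax): the contracts catch a bad value where it enters, long before it can corrupt distant state.

    // Hypothetical sensor-style function guarded by contracts.
    double scaleReading(double raw, double factor)
    in
    {
        assert(factor > 0, "non-positive scale factor is a caller bug");
    }
    out (result)
    {
        assert((result >= 0) == (raw >= 0), "scaling must preserve sign");
    }
    body
    {
        return raw * factor;
    }
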
 Remember the Apollo 11 lunar landing, when the descent computer 
 software started showing self-detected faults? Armstrong turned 
 it off and landed manually. He wasn't going to bet his ass that 
 the faults could be ignored. You and I wouldn't, either.
And this is great if there's a human available to take over. But what if this were a space probe?
 I think what I'm trying to say is that simply aborting on 
 error is too brittle in some cases, because it only deals with 
 one vector--memory corruption that is unlikely to reoccur.  
 But I've watched always-on systems fall apart from some 
 unexpected ongoing situation, and simply restarting doesn't 
 actually help.
 In such a situation, ignoring the error seems hardly likely to do any better.
Again, not ignoring, but rather that a restart may not be the appropriate response to the problem. Or it may be a part of the appropriate response, but other things need to happen as well.
Sep 29 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/29/2014 12:09 PM, Sean Kelly wrote:
 I'm not saying the errors should be ignored, but rather that
 there are other approaches to handling errors besides (or in
 addition to) terminating the process.
Those "other methods" are not robust and are not acceptable for a system claiming to be robust.
 Remember the Apollo 11 lunar landing, when the descent computer software
 started showing self-detected faults? Armstrong turned it off and landed
 manually. He wasn't going to bet his ass that the faults could be ignored. You
 and I wouldn't, either.
 And this is great if there's a human available to take over. But what if this were a space probe?
A space probe would have either:

1. an independent backup system

-- or --

2. a "can't fail" system that fails, and the probe will be lost

http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716

Any system designed around software that "cannot fail" is doomed from the start. You cannot write such software, and nobody else can, either. Attempting to do so will only reap expensive and bitter lessons :-(
Oct 04 2014
prev sibling next sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Monday, 29 September 2014 at 04:03:37 UTC, Sean Kelly wrote:
 On Monday, 29 September 2014 at 02:57:03 UTC, Walter Bright

 Right.  But if the condition that caused the restart persists, 
 the process can end up in a cascading restart scenario.  Simply 
 restarting on error isn't necessarily enough.
This can be mitigated: a cascading reboot would only occur if the problem affects the reboot sequence itself.

---
/Paolo
Sep 29 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 07:52:33 UTC, Paolo Invernizzi
wrote:
 On Monday, 29 September 2014 at 04:03:37 UTC, Sean Kelly wrote:
 On Monday, 29 September 2014 at 02:57:03 UTC, Walter Bright

 Right.  But if the condition that caused the restart persists, 
 the process can end up in a cascading restart scenario.  
 Simply restarting on error isn't necessarily enough.
 This can be mitigated: a cascading reboot would only occur if the problem affects the reboot sequence itself.
Or if an ongoing situation causes the problem to rapidly reoccur. Look at most MMO game launches for example. Production load hits and some process falls over in a weird way, which increases load because everyone goes nuts trying to log back in, and when the system comes back up it immediately falls over again. Rinse, repeat.
Sep 29 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Monday, 29 September 2014 at 19:23:28 UTC, Sean Kelly wrote:
 On Monday, 29 September 2014 at 07:52:33 UTC, Paolo Invernizzi
 wrote:
 On Monday, 29 September 2014 at 04:03:37 UTC, Sean Kelly wrote:
 On Monday, 29 September 2014 at 02:57:03 UTC, Walter Bright

 Right.  But if the condition that caused the restart 
 persists, the process can end up in a cascading restart 
 scenario.  Simply restarting on error isn't necessarily 
 enough.
 This can be mitigated: a cascading reboot would only occur if the problem affects the reboot sequence itself.
 Or if an ongoing situation causes the problem to rapidly reoccur. Look at most MMO game launches for example. Production load hits and some process falls over in a weird way, which increases load because everyone goes nuts trying to log back in, and when the system comes back up it immediately falls over again. Rinse, repeat.
Is it not better to throttle down the connection volume before it reaches processes that are not able to handle an overload in a correct way?

---
/Paolo
Sep 29 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 21:49:44 UTC, Paolo Invernizzi
wrote:
 Is it not better to throttle down the connection volume before
 it reaches processes that are not able to handle an overload in a
 correct way?
Well, in many cases it isn't the pure load numbers that are the problem, but the interaction of the behavior of actual real users. It isn't overly difficult to simulate a large volume of traffic on a system, but it's extremely difficult to simulate realistic behavior patterns. I think in many cases the problem ends up being unexpected interactions of different behaviors rather than simply high volume.
Sep 29 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/09/2014 05:03, Sean Kelly wrote:
 I recall Toyota got into trouble with their computer controlled cars
 because of their idea of handling of inevitable bugs and errors. It
 was one process that controlled everything. When something unexpected
 went wrong, it kept right on operating, any unknown and unintended
 consequences be damned.

 The way to get reliable systems is to design to accommodate errors,
 not pretend they didn't happen, or hope that nothing else got
 affected, etc. In critical software systems, that means shut down and
 restart the offending system, or engage the backup.
 My point was that it's often more complicated than that. There have been papers written on self-repairing systems, for example, and ways to design systems that are inherently durable when it comes to even internal errors. I think what I'm trying to say is that simply aborting on error is too brittle in some cases, because it only deals with one vector--memory corruption that is unlikely to reoccur. But I've watched always-on systems fall apart from some unexpected ongoing situation, and simply restarting doesn't actually help.
Sean, I fully agree with the points you have been making so far.

But if Walter is fixated on thinking that all the practical uses of D will be critical systems, or simple (ie, single-use, non-interactive) command-line applications, it will be hard for him to comprehend the whole point that "simply aborting on error is too brittle in some cases".

PS: Walter, what browser do you use?

--
Bruno Medeiros
https://twitter.com/brunodomedeiros
Oct 01 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Bruno Medeiros:

 But if Walter is fixated on thinking that all the practical 
 uses of D will be critical systems, or simple (ie, single-use, 
 non-interactive) command-line applications,
There's still some way to go for D's design to make it a good fit for high-integrity systems (some people even use a restricted subset of C for such purposes, but it's a bad language for it).

Bye,
bearophile
Oct 01 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/1/2014 7:32 AM, bearophile wrote:
 Bruno Medeiros:

 But if Walter is fixated on thinking that all the practical uses of D will be
 critical systems, or simple (ie, single-use, non-interactive) command-line
 applications,
 There's still some way to go for D's design to make it a good fit for high-integrity systems (some people even use a restricted subset of C for such purposes, but it's a bad language for it).
No matter what language is used, a high integrity system cannot be constructed from continuing after the program has entered an unknown state.
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/1/2014 7:17 AM, Bruno Medeiros wrote:
 Sean, I fully agree with the points you have been making so far.
 But if Walter is fixated on thinking that all the practical uses of D will be
 critical systems, or simple (ie, single-use, non-interactive) command-line
 applications, it will be hard for him to comprehend the whole point that "simply
 aborting on error is too brittle in some cases".
Airplane avionics systems all abort on error, yet the airplanes don't fall out of the sky. I've explained why and how this works many times; here it is again:

http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
Oct 04 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 4 October 2014 at 09:04:58 UTC, Walter Bright wrote:
 Airplane avionics systems all abort on error, yet the airplanes 
 don't fall out of the sky.
To be fair, that's a function of aerodynamics more than system design. But I see what you're getting at.
Oct 04 2014
prev sibling parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 04/10/2014 10:05, Walter Bright wrote:
 On 10/1/2014 7:17 AM, Bruno Medeiros wrote:
 Sean, I fully agree with the points you have been making so far.
 But if Walter is fixated on thinking that all the practical uses of D
 will be
 critical systems, or simple (ie, single-use, non-interactive)
 command-line
 applications, it will be hard for him to comprehend the whole point
 that "simply
 aborting on error is too brittle in some cases".
 Airplane avionics systems all abort on error, yet the airplanes don't fall out of the sky. I've explained why and how this works many times; here it is again: http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
That's completely irrelevant to the "simply aborting on error is too brittle in some cases" point above, because I wasn't talking about avionics systems, or any kind of mission-critical systems at all. In fact, the opposite (non-critical systems).

--
Bruno Medeiros
https://twitter.com/brunodomedeiros
Oct 08 2014
prev sibling next sibling parent reply Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 29/09/14 02:09, Walter Bright via Digitalmars-d wrote:
 If the program has entered an unknown state, its behavior from then on cannot
 be predictable. There's nothing I or D can do about that. D cannot officially
 endorse such a practice, though D being a systems programming language it will
 let you do what you want.

 I would not even consider such a practice for a program that is in charge of
 anything that could result in injury, death, property damage, security
 breaches, etc.
I think I should clarify that I'm not asking you to say "I endorse catching Errors". Your point about systems responsible for the safety of people or property is very well made, and I'm fully in agreement with you about this.

What I'm asking you to consider is a use-case, one that I picked quite carefully. Without assuming anything about how the system is architected, if we have a telephone exchange, and an Error occurs in the handling of a single call, it seems to me fairly unarguable that it's essential to avoid this bringing down everyone else's call with it. That's not simply a matter of convenience -- it's a matter of safety, because those calls might include emergency calls, urgent business communications, or any number of other circumstances where dropping someone's call might have severe negative consequences.

As I'm sure you realize, I also picked that particular use-case because it's one where there is a well-known technological solution -- Erlang -- which has as a key feature its ability to isolate different parts of the program, and to deal with errors by bringing down the local process where the error occurred, rather than the whole system. This is an approach which is seriously battle-tested in production.

As I said, I'm not asking you to endorse catching Errors in threads, or other gross simplifications of Erlang's approach. What I'm interested in are your thoughts on how we might approach resolving the requirement for this kind of stability and localization of error-handling with the tools that D provides.

I don't mind if you say to me "That's your problem" (which it certainly is:-), but I'd like it to be clear that it _is_ a problem, and one that it's important for D to address, given its strong standing in the development of super-high-connectivity server applications.
Oct 03 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 10:00 AM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 What I'm asking you to consider is a use-case, one that I picked quite
 carefully.  Without assuming anything about how the system is architected, if
 we have a telephone exchange, and an Error occurs in the handling of a single
 call, it seems to me fairly unarguable that it's essential to avoid this
 bringing down everyone else's call with it.  That's not simply a matter of
 convenience -- it's a matter of safety, because those calls might include
 emergency calls, urgent business communications, or any number of other
 circumstances where dropping someone's call might have severe negative
 consequences.
What you're doing is attempting to write a program with the requirement that the program cannot fail.

It's impossible.

If that's your requirement, the system needs to be redesigned so that it can accommodate the failure of the program.

(Ignoring bugs in the program is not accommodating failure, it's pretending that the program cannot fail.)
 As I'm sure you realize, I also picked that particular use-case because it's
 one where there is a well-known technological solution -- Erlang -- which has
 as a key feature its ability to isolate different parts of the program, and to
 deal with errors by bringing down the local process where the error occurred,
 rather than the whole system.  This is an approach which is seriously
 battle-tested in production.
As I (and Brad) have stated before, process isolation, shutting down the failed process, and restarting the process are acceptable, because processes are isolated from each other.

Threads are not isolated from each other. They are not. Not. Not.
 As I said, I'm not asking you to endorse catching Errors in threads, or other
 gross simplifications of Erlang's approach.  What I'm interested in are your
 thoughts on how we might approach resolving the requirement for this kind of
 stability and localization of error-handling with the tools that D provides.

 I don't mind if you say to me "That's your problem" (which it certainly is:-),
 but I'd like it to be clear that it _is_ a problem, and one that it's important
 for D to address, given its strong standing in the development of
 super-high-connectivity server applications.
The only way to have super high uptime is to design the system so that failure is isolated, and the failed process can be quickly restarted or replaced. Ignoring bugs is not isolation, and hoping that a bug in one thread hasn't affected memory shared with other threads doesn't work.
Oct 04 2014
next sibling parent reply Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 04/10/14 11:18, Walter Bright via Digitalmars-d wrote:
 What you're doing is attempting to write a program with the requirement that
 the program cannot fail.

 It's impossible.
No, I'm attempting to discuss how to approach the problem that the program _can_ fail, and how to isolate that failure appropriately. I'm asking for discussion of how to handle a use-case, not trying to advocate for particular solutions.

You seem to be convinced that I don't understand the principles you are advocating of isolation, backup, and so forth. What I've been trying (but obviously failing) to communicate to you is, "OK, I agree on these principles, let's talk about how to achieve them in a practical sense with D."
 If that's your requirement, the system needs to be redesigned so that it can
 accommodate the failure of the program.

 (Ignoring bugs in the program is not accommodating failure, it's pretending
 that the program cannot fail.)
Indeed.
 As I'm sure you realize, I also picked that particular use-case because it's
 one where there is a well-known technological solution -- Erlang -- which has
 as a key feature its ability to isolate different parts of the program, and to
 deal with errors by bringing down the local process where the error occurred,
 rather than the whole system.  This is an approach which is seriously
 battle-tested in production.
 As I (and Brad) have stated before, process isolation, shutting down the failed process, and restarting the process are acceptable, because processes are isolated from each other. Threads are not isolated from each other. They are not. Not. Not.
I will repeat what I said in my previous email: "Without assuming anything about how the system is architected". I realize that in my earlier remark:
 However, it's clearly very desirable in this use-case for the application to
 keep going if at all possible and for any problem, even an Error, to be
 contained in its local context if we can do so.  (By "local context", in
 practice this probably means a thread or fiber or some other similar
 programming construct.)
... I probably conveyed the idea that I was seeking to contain Errors inside threads or fibers. I was already anticipating that the answer here would be a definitive "You can't under any circumstances", and hence why I wrote, "or other similar programming construct", by which I was thinking of Erlang-style processes.

Actually, a large part of my reason for continuing this discussion is because where high-connectivity server applications are concerned, I'm keen to ensure that their developers _avoid_ the dangerous solution that is, "Spawn lots of threads and fibers, and localize Errors by catching them and throwing away the thread rather than the application." However, unless there is an alternative in a practical sense, that is probably what people are going to do, because the trade-offs of their use-case make it seem the least bad option. I think that's a crying shame and that we can and should do better.
 The only way to have super high uptime is to design the system so that failure
 is isolated, and the failed process can be quickly restarted or replaced.
 Ignoring bugs is not isolation, and hoping that bugs in one thread doesn't
 affected memory shared by other threads doesn't work.
Right. Which is why I'd like to move the discussion over to "How can we achieve this in D?"
Oct 04 2014
next sibling parent "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Saturday, 4 October 2014 at 11:19:10 UTC, Joseph Rushton
Wakeling via Digitalmars-d wrote:
 On 04/10/14 11:18, Walter Bright via Digitalmars-d wrote:
 What you're doing is attempting to write a program with the 
 requirement that the
 program cannot fail.
 The only way to have super high uptime is to design the system 
 so that failure
 is isolated, and the failed process can be quickly restarted 
 or replaced.
 Ignoring bugs is not isolation, and hoping that bugs in one 
 thread doesn't
 affected memory shared by other threads doesn't work.
 Right. Which is why I'd like to move the discussion over to "How can we achieve this in D?"
I see two things that are in the way (aside from the obvious things like non-@safe code): casting away shared, and implicitly shared immutable data. The former can be checked statically, but the latter is harder to work around in the current language.
Oct 04 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 4:19 AM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 On 04/10/14 11:18, Walter Bright via Digitalmars-d wrote:
 You seem to be convinced that I don't understand the principles you are
 advocating of isolation, backup, and so forth.  What I've been trying (but
 obviously failing) to communicate to you is, "OK, I agree on these principles,
 let's talk about how to achieve them in a practical sense with D."
Ok, I understand. My apologies for misunderstanding you.

I would suggest the best way to achieve that is to use the process isolation abilities provided by the operating system. Separate the system into processes that communicate via some messaging system provided by the operating system (not shared memory).

I read that the Chrome browser was done this way, so if one part of Chrome crashed, the failed part could be restarted without restarting the rest of Chrome.

Note that such a solution has little to do with D in particular, or C or C++. It's more to do with what the operating system provides for process isolation and interprocess communication.
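
A hedged sketch of that arrangement using std.process (the ./worker executable and its flags are hypothetical): the risky work runs in a separate OS process, so a crash there cannot corrupt the supervisor, which restarts the worker a bounded number of times.

    import std.process : execute;
    import std.stdio : stderr, writeln;

    void main()
    {
        foreach (attempt; 1 .. 4)
        {
            // Run the work in an isolated process; only its exit status
            // and output cross the process boundary.
            auto r = execute(["./worker", "--job", "42"]);
            if (r.status == 0)
            {
                writeln(r.output);
                return; // worker succeeded
            }
            stderr.writeln("worker failed with status ", r.status,
                           "; restart attempt ", attempt);
        }
        stderr.writeln("worker kept failing; engaging the backup");
    }
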
 Right.  Which is why I'd like to move the discussion over to "How can we
 achieve this in D?"
D provides a lot of ability to make a single process more robust, such as pure functions, immutable data structures, unit testing, @safe, etc., so bugs are less likely. And my personal experience with developing D programs is that they come up faster and are less buggy than my C++ ones.

But once a bug is detected, we're back to chucking the process.
Oct 04 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 4 October 2014 at 09:18:41 UTC, Walter Bright wrote:
 Threads are not isolated from each other. They are not. Not. 
 Not.
Neither are programs that communicate in some fashion. I'll grant that the possibility of memory corruption doesn't exist in this case (a problem unique to systems languages like D), but system corruption still does. And I absolutely agree with you that if memory corruption is ever even suspected, the process must immediately halt. In that case I wouldn't even throw an Error, I'd call exit(1).
Oct 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 9:16 AM, Sean Kelly wrote:
 On Saturday, 4 October 2014 at 09:18:41 UTC, Walter Bright wrote:
 Threads are not isolated from each other. They are not. Not. Not.
 Neither are programs that communicate in some fashion.
Operating systems typically provide methods of interprocess communication that are robust against corruption, such as pipes, message passing, etc. The receiving process should regard such input as "user/environmental input", and must validate it. Corruption in it would not be regarded as a logic bug in the receiving process (unless it failed to check for it).

Interprocess shared memory, though, is not robust.
 I'll grant that the possibility of memory corruption doesn't exist in this case
 (a problem unique to systems languages like D), but system corruption still
 does.  And I absolutely agree with you that if memory corruption is ever even
 suspected, the process must immediately halt.  In that case I wouldn't even
 throw an Error, I'd call exit(1).
System corruption is indeed a problem with this type of setup. We're relying here on the operating system not having such bugs in it, and indeed OS vendors work very hard at preventing an errant program from corrupting the system. We all know, of course, that this sort of thing happens anyway.

An even more robust system design will need a way to deal with that, and failure of the hardware, and failure of the data center, etc.

All components of a reliable system are unreliable, and a robust system needs to be able to recover from the inevitable failure of any component. This kind of thinking needs to pervade the initial system design from the ground up; it's hard to tack it on later.
Oct 04 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 4 October 2014 at 19:36:02 UTC, Walter Bright wrote:
 On 10/4/2014 9:16 AM, Sean Kelly wrote:
 On Saturday, 4 October 2014 at 09:18:41 UTC, Walter Bright 
 wrote:
 Threads are not isolated from each other. They are not. Not. 
 Not.
 Neither are programs that communicate in some fashion.
 Operating systems typically provide methods of interprocess communication that are robust against corruption, such as pipes, message passing, etc. The receiving process should regard such input as "user/environmental input", and must validate it. Corruption in it would not be regarded as a logic bug in the receiving process (unless it failed to check for it). Interprocess shared memory, though, is not robust.
 I'll grant that the
 possibility of memory corruption doesn't exist in this case (a 
 problem unique to
 systems languages like D), but system corruption still does.  
 And I absolutely
 agree with you that if memory corruption is ever even 
 suspected, the process
 must immediately halt.  In that case I wouldn't even throw an 
 Error, I'd call
 exit(1).
 System corruption is indeed a problem with this type of setup. We're relying here on the operating system not having such bugs in it, and indeed OS vendors work very hard at preventing an errant program from corrupting the system. We all know, of course, that this sort of thing happens anyway. An even more robust system design will need a way to deal with that, and failure of the hardware, and failure of the data center, etc. All components of a reliable system are unreliable, and a robust system needs to be able to recover from the inevitable failure of any component. This kind of thinking needs to pervade the initial system design from the ground up; it's hard to tack it on later.
This is not different from the fiber or thread based approach. If one uses only immutable data for inter-thread communication (== does not use inter-process shared memory), the same guarantees and reasoning apply. And such a design allows for many data optimizations that are hard or impossible to do with a process-based approach.

There is no magic solution that prevents screwing up in 100% of cases, whatever the programmer does. Killing the process is a pragmatic default, but not a pragmatic silver bullet, and from a purely theoretical point of view it has no advantage over killing the thread/fiber - it is all about the chances of failure, not preventing failure outright. Same in Erlang - some failures warrant killing the runtime, some only a specific process. It is all about the context, and the programmer should decide which approach is best for any specific program. I am fine with the non-default being hard, but I want it to be still possible within legal language restrictions.
Oct 05 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 8:35 AM, Dicebot wrote:
 I am fine with the non-default being hard, but I
 want it to be still possible within legal language restrictions.
D being a systems language, you can without much difficulty do whatever works for you.

People do look to us for guidance, however. The levels of programming mastery:

    newbie: follow the rules because you're told to
    master: follow the rules because you understand them
    guru:   break the rules because you understand their limitations

I'd be doing our users an injustice by not making sure they understand the rules before trying to break them.
Oct 05 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:
 On 10/5/2014 8:35 AM, Dicebot wrote:
 I am fine with the non-default being hard, but I
 want it to be still possible within legal language restrictions.
 D being a systems language, you can without much difficulty do whatever works for you.
Yes, but it shouldn't be in the undefined behaviour domain. In other words, there needs to be confidence that some new compiler optimization will not break the application completely.

Right now the Throwable/Error docs heavily suggest catching it is a "shoot yourself in the foot" thing, and a new compiler release can possibly change its behaviour without notice. I'd like to have more specific documentation about what can and can't be expected. Experimental observations are that one shouldn't rely on any cleanup code (RAII / scope(exit)) to happen, but other than that it is OK to consume an Error if the execution context for it (a fiber in our case) gets terminated. As the D1 compiler does not change, that is a good enough observation for practical purposes. But for D2 it would be nice to have some official clarification.

I think this is the only important concern I have, as long as power user stuff remains possible without re-implementing the whole exception system from scratch.
Oct 05 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 2:51 PM, Dicebot wrote:
 On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:
 On 10/5/2014 8:35 AM, Dicebot wrote:
 I am fine with the non-default being hard, but I
 want it to be still possible within legal language restrictions.
D being a systems language, you can without much difficulty do whatever works for you.
 Yes, but it shouldn't be in the undefined behaviour domain. In other words, there needs to be confidence that some new compiler optimization will not break the application completely.
Relying on program state after entering an unknown state is undefined by definition. I don't see how a language can make a statement like "it's probably ok".
 Right now the Throwable/Error docs heavily suggest catching it is a "shoot
 yourself in the foot" thing, and a new compiler release can possibly change its
 behaviour without notice. I'd like to have more specific documentation about
 what can and can't be expected. Experimental observations are that one
 shouldn't rely on any cleanup code (RAII / scope(exit)) to happen, but other
 than that it is OK to consume an Error if the execution context for it (a fiber
 in our case) gets terminated. As the D1 compiler does not change, that is a
 good enough observation for practical purposes. But for D2 it would be nice to
 have some official clarification.
Definitely unwinding may or may not happen from Error throws, "nothrow" functions may throw Errors, and optimizers need not account for Errors being thrown. Attempting to unwind the stack when an Error is thrown may cause further corruption (if the Error was thrown because of corruption), another reason for the language not to try to do it. An Error is, by definition, unrecoverable.
 I think this is the only important concern I have, as long as power user stuff remains possible without re-implementing the whole exception system from scratch.
You can catch an Error. But what is done from there is up to you - and to do more than just log the error, engage the backup, and exit, I cannot recommend. To do more, use an Exception. But to throw an Exception when a logic bug has been detected, and then try to continue based on it "probably" being ok, is something I cannot recommend, and D certainly cannot guarantee anything. If the program does anything that matters, that is.
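For instance, a top-level handler along these lines (runService and the message are made up for illustration):

    import core.stdc.stdlib : EXIT_FAILURE, exit;
    import std.stdio : stderr;

    void runService() { /* real work; may throw Error if a bug is detected */ }

    void main()
    {
        try
            runService();
        catch (Error e)
        {
            // log and terminate; do not try to continue, since the
            // program state is unknown once an Error has been thrown
            stderr.writeln("fatal: ", e.msg);
            exit(EXIT_FAILURE);
        }
    }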
Oct 05 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 5 October 2014 at 23:01:48 UTC, Walter Bright wrote:
 Definitely unwinding may or may not happen from Error throws, 
 "nothrow" functions may throw Errors, and optimizers need not 
 account for Errors being thrown.
This is the real concern. If an Error is thrown out of a nothrow function that contains a synchronized block, for example, the mutex might still be locked. So the only viable option is to terminate, even for something theoretically recoverable like a divide by zero or an OOME.
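For illustration, a sketch of that hazard (names made up; the assert stands in for any condition that throws an Error):

    __gshared int count;

    void bump() nothrow
    {
        synchronized    // acquires a monitor for this statement
        {
            ++count;
            assert(count < 100);  // an AssertError may leave bump() without
                                  // unwinding, so the monitor stays locked
                                  // and the next caller deadlocks
        }
    }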
Oct 05 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 4:28 PM, Sean Kelly wrote:
 On Sunday, 5 October 2014 at 23:01:48 UTC, Walter Bright wrote:
 Definitely unwinding may or may not happen from Error throws, "nothrow"
 functions may throw Errors, and optimizers need not account for Errors being
 thrown.
This is the real concern. If an Error is thrown out of a nothrow function that contains a synchronized block, for example, the mutex might still be locked. So the only viable option is to terminate, even for something theoretically recoverable like a divide by zero or an OOME.
Divide by zero is not recoverable since you don't know why it occurred. It could be the result of overflowing a buffer with 0s. Until a human debugs it and figures out why it happened, it is not recoverable. Because it could be the result of corruption like buffer overflows, the less code that is executed between the detection of the bug and terminating the program, the safer the program is. Continuing execution may mess up user data, may execute injected malware, etc.
Oct 05 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 5 October 2014 at 23:01:48 UTC, Walter Bright wrote:
 On 10/5/2014 2:51 PM, Dicebot wrote:
 On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:
 On 10/5/2014 8:35 AM, Dicebot wrote:
 I am fine with non-default being hard but I
 want it to be still possible within legal language 
 restrictions.
D being a systems language, you can without much difficulty do whatever works for you.
 Yes, but it shouldn't be in the undefined behaviour domain. In other words, there needs to be confidence that some new compiler optimization will not break the application completely.
Relying on program state after entering an unknown state is undefined by definition. I don't see how a language can make a statement like "it's probably ok".
It is only in an undefined state because the language handles Errors that way. At the point of throwing the Error, the state was perfectly defined and 100% recoverable. This is the typical case for an assertion failure in a contract - it detects some program flaw, like an inability to handle a specific data combination from another process, but it does not mean memory is corrupted or the program is inherently broken. Just killing the fiber and continuing with other requests (which don't trigger that unexpected code path) is absolutely fine, unless the compiler kicks in and optimizes something away in surprising fashion.

If destructors are ignored, they must always be ignored, and that must be defined in the spec. Same for scope(exit) or any similarly affected feature. Currently it may or may not attempt cleanup, and that is the problem when trying to circumvent the semantics.
 I think this is the only important concern I have, as long as power user stuff remains possible without re-implementing the whole exception system from scratch.
You can catch an Error. But what is done from there is up to you - and to do more than just log the error, engage the backup, and exit, I cannot recommend.
Killing the whole process is unacceptable in many cases; it will effectively shut down the whole service if a faulty request happens at least once every few seconds.
 To do more, use an Exception. But to throw an Exception when a 
 logic bug has been detected, then try and continue based on it 
 "probably" being ok, is something I cannot recommend and D 
 certainly cannot guarantee anything. If the program does 
 anything that matters, that is.
Assertions / contracts use Error. Do you think it is a better approach to prohibit using `assert` and throw custom exceptions from contracts?
Oct 06 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/6/2014 10:09 AM, Dicebot wrote:
 It is only in an undefined state because the language handles Errors that way. At the point of throwing the Error, the state was perfectly defined and 100% recoverable. This is the typical case for an assertion failure in a contract - it detects some program flaw, like an inability to handle a specific data combination from another process, but it does not mean memory is corrupted or the program is inherently broken. Just killing the fiber and continuing with other requests (which don't trigger that unexpected code path) is absolutely fine, unless the compiler kicks in and optimizes something away in surprising fashion.
What you're describing sounds like using asserts to validate input data. This is not what asserts are for.
Oct 06 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Oct 06, 2014 at 07:55:23PM -0700, Walter Bright via Digitalmars-d wrote:
 On 10/6/2014 10:09 AM, Dicebot wrote:
It is only in an undefined state because the language handles Errors that way. At the point of throwing the Error, the state was perfectly defined and 100% recoverable. This is the typical case for an assertion failure in a contract - it detects some program flaw, like an inability to handle a specific data combination from another process, but it does not mean memory is corrupted or the program is inherently broken. Just killing the fiber and continuing with other requests (which don't trigger that unexpected code path) is absolutely fine, unless the compiler kicks in and optimizes something away in surprising fashion.
What you're describing sounds like using asserts to validate input data. This is not what asserts are for.
Using assertions in contracts in D currently has some design issues, one of the foremost being that in-contracts of derived classes are allowed to be more relaxed than those of base classes, which means that the effective contract is (baseClassContract || derivedClassContract). However, since in-contracts are allowed to be arbitrary statements, the only way to implement this is to catch AssertErrors and continue running if at least one of the contracts didn't assert. But that means we're technically in an undefined state after that point. :-( (If the in-contract calls a nothrow function that asserts, for example, dtors may have been skipped, cleanups not performed, etc., and yet we blindly barge on because the other contract didn't assert.)
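For instance (a minimal example of the OR'd in-contracts described above; class names are illustrative):

    class Base
    {
        void process(int x)
        in { assert(x > 0); }     // base requires a positive argument
        body {}
    }

    class Derived : Base
    {
        override void process(int x)
        in { assert(x >= 0); }    // derived is allowed to relax it
        body {}
    }

    void main()
    {
        Base b = new Derived;
        // Accepted: the base contract's assert fails, the runtime catches
        // that AssertError and tries the derived contract, which passes -
        // the effective precondition is (x > 0) || (x >= 0).
        b.process(0);
    }


T

-- 
LINUX = Lousy Interface for Nefarious Unix Xenophobes.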
Oct 06 2014
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/06/2014 01:01 AM, Walter Bright wrote:
 On 10/5/2014 2:51 PM, Dicebot wrote:
 On Sunday, 5 October 2014 at 20:41:44 UTC, Walter Bright wrote:
 On 10/5/2014 8:35 AM, Dicebot wrote:
 I am fine with non-default being hard but I
 want it to be still possible within legal language restrictions.
D being a systems language, you can without much difficulty do whatever works for you.
 Yes, but it shouldn't be in the undefined behaviour domain. In other words, there needs to be confidence that some new compiler optimization will not break the application completely.
Relying on program state after entering an unknown state is undefined by definition.
What definition?
 I don't see how a language can make a statement like "it's
 probably ok".
E.g. type safety.
Oct 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 6:56 AM, Timon Gehr wrote:
 On 10/06/2014 01:01 AM, Walter Bright wrote:
 Relying on program state after entering an unknown state is undefined by
 definition.
What definition?
How can one define the behavior of an unknown state?
Oct 07 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/07/2014 09:26 PM, Walter Bright wrote:
 On 10/7/2014 6:56 AM, Timon Gehr wrote:
 On 10/06/2014 01:01 AM, Walter Bright wrote:
 Relying on program state after entering an unknown state is undefined by
 definition.
What definition?
How can one define the behavior of an unknown state?
Well, how do you define the behaviour of a program that will be fed an unknown input? That way. I don't really understand what this question is trying to get at. Just define the language semantics appropriately. Your reasoning usually goes like a certain kind of event you assume to be bad -> bug -> unknown state -> undefined behaviour. Why does this apply to D and not to e.g. Java?
Oct 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 12:44 PM, Timon Gehr wrote:
 On 10/07/2014 09:26 PM, Walter Bright wrote:
 On 10/7/2014 6:56 AM, Timon Gehr wrote:
 On 10/06/2014 01:01 AM, Walter Bright wrote:
 Relying on program state after entering an unknown state is undefined by
 definition.
What definition?
How can one define the behavior of an unknown state?
Well, how do you define the behaviour of a program that will be fed an unknown input? That way. I don't really understand what this question is trying to get at. Just define the language semantics appropriately. Your reasoning usually goes like a certain kind of event you assume to be bad -> bug -> unknown state -> undefined behaviour.
What defined behavior would you suggest would be possible after an overflow bug is detected?
Oct 07 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/07/2014 10:09 PM, Walter Bright wrote:
 On 10/7/2014 12:44 PM, Timon Gehr wrote:
 On 10/07/2014 09:26 PM, Walter Bright wrote:
 On 10/7/2014 6:56 AM, Timon Gehr wrote:
 On 10/06/2014 01:01 AM, Walter Bright wrote:
 Relying on program state after entering an unknown state is
 undefined by
 definition.
What definition?
How can one define the behavior of an unknown state?
Well, how do you define the behaviour of a program that will be fed an unknown input? That way. I don't really understand what this question is trying to get at. Just define the language semantics appropriately. Your reasoning usually goes like a certain kind of event you assume to be bad -> bug -> unknown state -> undefined behaviour.
What defined behavior would you suggest would be possible after an overflow bug is detected?
At the language level, there are many possibilities. Just look at what type safe languages do. It is not true that this must lead to UB by a "definition" commonly agreed upon by participants in this thread.
Oct 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 2:12 PM, Timon Gehr wrote:
 On 10/07/2014 10:09 PM, Walter Bright wrote:
 What defined behavior would you suggest would be possible after an
 overflow bug is detected?
At the language level, there are many possibilities. Just look at what type safe languages do. It is not true that this must lead to UB by a "definition" commonly agreed upon by participants in this thread.
And even in a safe language, how would you know that a bug in the runtime didn't lead to corruption which put your program into the unknown state? Your assertion rests on some assumptions:

1. the "safe" language doesn't have bugs in its proof or specification
2. the "safe" language doesn't have bugs in its implementation
3. that it is knowable what caused a bug without ever having debugged it
4. that program state couldn't have been corrupted due to hardware failures
5. that it's possible to write a perfect system

all of which are false. I.e. it is not possible to define the state of a program after it has entered an unknown state that was defined to never happen.
Oct 07 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/08/2014 02:27 AM, Walter Bright wrote:
 On 10/7/2014 2:12 PM, Timon Gehr wrote:
 On 10/07/2014 10:09 PM, Walter Bright wrote:
 What defined behavior would you suggest would be possible after an
 overflow bug is detected?
At the language level, there are many possibilities. Just look at what type safe languages do. It is not true that this must lead to UB by a "definition" commonly agreed upon by participants in this thread.
And even in a safe language, how would you know that a bug in the runtime didn't lead to corruption which put your program into the unknown state? Your assertion
Which assertion? That there are languages that call themselves type safe?
 rests on some assumptions:

 1. the "safe" language doesn't have bugs in its proof or specification
So what? I can report these if present. That's not undefined behaviour, it is a wrong specification or a bug in the automated proof checker. (In my experience, however, the developers might not actually acknowledge that the specification violates type safety. UB in safe code is a joke. But I am digressing.) Not specific to our situation where we get an overflow.
 2. the "safe" language doesn't have bugs in its implementation
So what? I can report these if present. That's not undefined behaviour, it is wrong behaviour. Not specific to our situation where we get an overflow.
 3. that it is knowable what caused a bug without ever having debugged it
Why would I need to assume this to make my point? Not specific to our situation where we get an overflow.
 4. that program state couldn't have been corrupted due to hardware failures
Not specific to our situation where we detect the problem.
 5. that it's possible to write a perfect system
You cannot disprove this one, and no, I am not assuming this, but it would be extraordinarily silly to write into the official language specification: "a program may do anything at any time, because a conforming implementation might contain bugs". Also: Not specific to our situation where we detect the problem.
 all of which are false.


 I.e.
Why "I.e."?
 it is not possible to define the state of a program after it has
 entered an unknown state that was defined to never happen.
By assuming your 5 postulates are false, and filling in the justification for the "i.e." you left out, one will quickly reach the conclusion that it is not possible to define the behaviour of a program at all. Therefore, if we describe programs, our words are meaningless, because this is not "possible". This seems to quickly become a great example of the kind of black/white thinking you warned against in another post in this thread.

It has to be allowed to use idealised language, otherwise you cannot say or think anything. What is _undefined behaviour_ depends on the specification alone, and as flawed and ambiguous as that specification may be, in practice it will still be an invaluable tool for communication among language users/developers.

Can we at least agree that Dicebot's request for having the behaviour of inadvisable constructs defined such that an implementation cannot randomly change behaviour and then have the developers close down the corresponding bugzilla issue because it was the user's fault anyway is not unreasonable by definition because the system will not reach a perfect state anyway, and then retire this discussion?
Oct 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 6:18 PM, Timon Gehr wrote:
 I can report these if present.
Writing a strongly worded letter to the White Star Line isn't going to help you when the ship is sinking in the middle of the North Atlantic. What will help is minimizing the damage that a detected fault may cause. You cannot rely on the specification when a fault has been detected. "This can't be happening!" are likely the last words of more than a few people.
 Can we at least agree that Dicebot's request for having the behaviour of
 inadvisable constructs defined such that an implementation cannot randomly
 change behaviour and then have the developers close down the corresponding
 bugzilla issue because it was the user's fault anyway is not unreasonable by
 definition because the system will not reach a perfect state anyway, and then
 retire this discussion?
I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for. As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.
Oct 07 2014
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 10/08/2014 05:19 AM, Walter Bright wrote:
 On 10/7/2014 6:18 PM, Timon Gehr wrote:
  > I can report these if present.

 Writing a strongly worded letter to the White Star Line isn't going to
 help you when the ship is sinking in the middle of the North Atlantic.
 ...
Maybe it is going to help the next guy whose ship will not be sinking due to that report.
 What will help is minimizing the damage that a detected fault may cause.
 You cannot rely on the specification when a fault has been detected.
 "This can't be happening!" are likely the last words of more than a few
 people.
Sure, I agree. Just note that if some programmer is checking for overflow after the fact using the following idiom:

    int x = y*z;
    if (x/y != z) assert(0);

Then the language can be defined such that e.g.:

0. The overflow will throw on its own.
1. Overflow is undefined, i.e. the optimizer is allowed to remove the check and avoid the detection of the bug.
2. Guaranteed wrap-around behaviour makes the code valid and the bug is detected by the assert.
3. Arbitrary-precision integers.
4. ...

Code is simply less likely to run as intended or else abort if possibility 1 is consciously taken. The language implementation may still be buggy, but if it may even sink your ship when it generated code according to the specification, it likely sinks in more cases. Of course you can say that the programmer is at fault for checking for overflow in the wrong fashion, but this does not matter at the point where your ship is sinking.

One may still see this choice as the right trade-off, but it is not the only possibility 'by definition'.
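For comparison, a sketch of making the check explicit with druntime's core.checkedint, which reports overflow through a flag instead of relying on wrap-around, so the optimizer cannot legally remove it (mulChecked is a made-up name):

    import core.checkedint : muls;

    int mulChecked(int y, int z)
    {
        bool overflow = false;
        int x = muls(y, z, overflow);  // signed multiply with overflow flag
        if (overflow)
            assert(0, "integer overflow");
        return x;
    }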
Oct 08 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 8 October 2014 at 03:20:21 UTC, Walter Bright wrote:
 Can we at least agree that Dicebot's request for having the 
 behaviour of
 inadvisable constructs defined such that an implementation 
 cannot randomly
 change behaviour and then have the developers close down the 
 corresponding
 bugzilla issue because it was the user's fault anyway is not 
 unreasonable by
 definition because the system will not reach a perfect state 
 anyway, and then
 retire this discussion?
I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for. As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.
Just wanted to point out that the resulting solution (== manually switching many of the contracts from asserts to exceptions) is, to me, an unhappy workaround for an overly opinionated language, and not actually a solution. I still consider this a problem.
Oct 09 2014
parent reply Johannes Pfau <nospam example.com> writes:
Am Thu, 09 Oct 2014 13:10:34 +0000
schrieb "Dicebot" <public dicebot.lv>:

 On Wednesday, 8 October 2014 at 03:20:21 UTC, Walter Bright wrote:
 Can we at least agree that Dicebot's request for having the 
 behaviour of
 inadvisable constructs defined such that an implementation 
 cannot randomly
 change behaviour and then have the developers close down the 
 corresponding
 bugzilla issue because it was the user's fault anyway is not 
 unreasonable by
 definition because the system will not reach a perfect state 
 anyway, and then
 retire this discussion?
I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for. As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.
 Just wanted to point out that the resulting solution (== manually switching many of the contracts from asserts to exceptions) is, to me, an unhappy workaround for an overly opinionated language, and not actually a solution. I still consider this a problem.
A point which hasn't been discussed yet: Errors, and therefore asserts, can be used in nothrow functions. This is a pain for compilers because they can't do certain optimizations then. When porting GDC to ARM we started to see problems because of that (can't unwind from nothrow functions on ARM; the program just aborts), and now we have to worsen the codegen for nothrow functions because of this.

I think Walter sometimes suggested that it would be valid for a compiler to not unwind Errors at all (in release mode), but simply kill the program and dump an error message. This would finally allow us to optimize nothrow functions.
Oct 09 2014
next sibling parent Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 9 October 2014 17:33, Johannes Pfau via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 Am Thu, 09 Oct 2014 13:10:34 +0000
 schrieb "Dicebot" <public dicebot.lv>:

 On Wednesday, 8 October 2014 at 03:20:21 UTC, Walter Bright wrote:
 Can we at least agree that Dicebot's request for having the
 behaviour of
 inadvisable constructs defined such that an implementation
 cannot randomly
 change behaviour and then have the developers close down the
 corresponding
 bugzilla issue because it was the user's fault anyway is not
 unreasonable by
 definition because the system will not reach a perfect state
 anyway, and then
 retire this discussion?
I've been working with Dicebot behind the scenes to help resolve the particular issues with the code he's responsible for. As for D, D cannot offer any guarantees about behavior after a program crash. Nor can any other language.
 Just wanted to point out that the resulting solution (== manually switching many of the contracts from asserts to exceptions) is, to me, an unhappy workaround for an overly opinionated language, and not actually a solution. I still consider this a problem.
 A point which hasn't been discussed yet: Errors, and therefore asserts, can be used in nothrow functions. This is a pain for compilers because they can't do certain optimizations then. When porting GDC to ARM we started to see problems because of that (can't unwind from nothrow functions on ARM; the program just aborts), and now we have to worsen the codegen for nothrow functions because of this. I think Walter sometimes suggested that it would be valid for a compiler to not unwind Errors at all (in release mode), but simply kill the program and dump an error message. This would finally allow us to optimize nothrow functions.
This behaviour was agreed at the bar at DConf. We'd have to put it in the spec to give it the official stamp of approval though. Iain.
Oct 09 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 16:33:53 UTC, Johannes Pfau wrote:
 I think Walter sometimes suggested that it would be valid for a
 compiler to not unwind Errors at all (in release mode), but 
 simply kill
 the program and dump an error message. This would finally allow 
 us to
 optimize nothrow functions.
I think this is reasonable in general, but as long as assert throws Error and assert is encouraged in unittest blocks, such an implementation would make the compiler unusable for me. We may need to have another look at what is truly an Error and what is not before going down that path.
Oct 09 2014
next sibling parent reply "Marc =?UTF-8?B?U2Now7x0eiI=?= <schuetzm gmx.net> writes:
On Thursday, 9 October 2014 at 17:31:32 UTC, Dicebot wrote:
 On Thursday, 9 October 2014 at 16:33:53 UTC, Johannes Pfau 
 wrote:
 I think Walter sometimes suggested that it would be valid for a
 compiler to not unwind Errors at all (in release mode), but 
 simply kill
 the program and dump an error message. This would finally allow 
 us to
 optimize nothrow functions.
 I think this is reasonable in general, but as long as assert throws Error and assert is encouraged in unittest blocks, such an implementation would make the compiler unusable for me.
Can it simply skip unwinding up to the next not-nothrow function? I.e. destructors of objects (and finally/scope(exit)) inside `nothrow` functions will not be executed, but unwinding will continue as normal at the first function up the call stack that supports it? Would this work for both GDC and LDC? If yes, your unittest framework will probably continue to work as is.
Oct 09 2014
parent "Dicebot" <public dicebot.lv> writes:
On Thursday, 9 October 2014 at 18:32:55 UTC, Marc Schütz wrote:
 Can it simply skip unwinding up to the next not-nothrow 
 function? I.e. destructors of objects (and finally/scope(exit)) 
 inside `nothrow` functions will not be executed, but unwinding 
 will continue as normal at the first function up the call stack 
 that supports it?

 Would this work for both GDC and LDC? If yes, your unittest 
 framework will probably continue to work as is.
Funny that it happens the other way around right now with dmd (http://dpaste.dzfl.pl/e685c0c32b0d):

    struct S
    {
        string id;

        ~this() nothrow
        {
            import core.stdc.stdio;
            printf("%s\n", id.ptr);
        }
    }

    void foo() nothrow
    {
        auto s = S("foo");
        throw new Error("hmm");
    }

    void main()
    {
        auto s = S("main");
        foo();
    }

====

foo
object.Error: hmm
----------------
./f57(_Dmain+0x23) [0x4171cf]
./f57(void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll().void __lambda1()+0x18) [0x417f40]
./f57(void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate())+0x2a) [0x417e9a]
./f57(void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll()+0x30) [0x417f00]
./f57(void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate())+0x2a) [0x417e9a]
./f57(_d_run_main+0x1a3) [0x417e1b]
./f57(main+0x17) [0x4171f3]
/usr/lib/libc.so.6(__libc_start_main+0xf5) [0x40967a15]
Oct 09 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/9/2014 10:31 AM, Dicebot wrote:
 On Thursday, 9 October 2014 at 16:33:53 UTC, Johannes Pfau wrote:
 I think Walter sometimes suggested that it would be valid for a
 compiler to not unwind Errors at all (in release mode), but simply kill
 the program and dump an error message. This would finally allow us to
 optimize nothrow functions.
 I think this is reasonable in general, but as long as assert throws Error and assert is encouraged in unittest blocks, such an implementation would make the compiler unusable for me.
All assert actually does is call a function in druntime. You can override and insert your own assert handling function, and have it do as you need. It was deliberately designed that way.
 We may need to have another look at what is truly an Error and what is not before going down that path.
This involves making some hard decisions, but is worthwhile.
Oct 11 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 11 October 2014 at 22:06:38 UTC, Walter Bright wrote:
 All assert actually does is call a function in druntime. You 
 can override and insert your own assert handling function, and 
 have it do as you need. It was deliberately designed that way.
A while ago I was looking for a way to throw Exception instead of Error for failing assertions inside unittest blocks, but druntime didn't seem to provide the needed context. Do you think this could be a worthy addition?
Oct 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/14/2014 8:03 PM, Dicebot wrote:
 On Saturday, 11 October 2014 at 22:06:38 UTC, Walter Bright wrote:
 All assert actually does is call a function in druntime. You can override and
 insert your own assert handling function, and have it do as you need. It was
 deliberately designed that way.
A while ago I was looking for a way to throw Exception instead of Error for failing assertions inside unittest blocks, but druntime didn't seem to provide the needed context. Do you think this could be a worthy addition?
assert() in a unittest calls:

    __d_unittest(string file, uint line)

which calls:

    onUnittestErrorMsg(string file, size_t line, string msg)

which calls:

    extern (C) void onAssertError( string file = __FILE__, size_t line = __LINE__ ) nothrow
    {
        if( _assertHandler is null )
            throw new AssertError( file, line );
        _assertHandler( file, line, null);
    }

You can use setAssertHandler() to set _assertHandler to do whatever you wish. However, the compiler is still going to regard the assert() as nothrow, so the unwinding from an Exception won't happen until up stack a throwing function is encountered.
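For illustration, a sketch of hooking that mechanism (note the handler must itself be nothrow to match druntime's AssertHandler type, so it can log or abort, but not throw an Exception):

    import core.exception : setAssertHandler;
    import core.stdc.stdio : printf;

    void myAssertHandler(string file, size_t line, string msg) nothrow
    {
        printf("assertion failed: %.*s(%u)\n",
               cast(int) file.length, file.ptr, cast(uint) line);
    }

    void main()
    {
        setAssertHandler(&myAssertHandler);
        int x = 1;
        assert(x == 2);  // routed to myAssertHandler; no AssertError thrown,
                         // and execution continues if the handler returns
    }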
Oct 14 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 15 October 2014 at 03:18:31 UTC, Walter Bright 
wrote:
 However, the compiler is still going to regard the assert() as 
 nothrow, so the unwinding from an Exception won't happen until 
 up stack a throwing function is encountered.
This makes it impossible to have non-fatal unittests, which is the very reason I was looking for a replacement.
Oct 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/14/2014 8:36 PM, Dicebot wrote:
 On Wednesday, 15 October 2014 at 03:18:31 UTC, Walter Bright wrote:
 However, the compiler is still going to regard the assert() as nothrow, so the
 unwinding from an Exception won't happen until up stack a throwing function is
 encountered.
This makes it impossible to have non-fatal unittests, which is the very reason I was looking for a replacement.
I don't really understand the issue. Unittests are usually run with a separate build of the app, not in the main app. When they are run in the main app, they are run before the app even gets to main(). Why do you need non-fatal unittests?
Oct 14 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-15 07:57, Walter Bright wrote:

 Why do you need non-fatal unittests?
I don't know if this would cause problems with the current approach. But most unit test frameworks do NOT stop on the first failure like D does. They catch the exception, continue with the next test, and in the end print a final report. -- /Jacob Carlborg
Oct 14 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/14/2014 11:23 PM, Jacob Carlborg wrote:
 On 2014-10-15 07:57, Walter Bright wrote:

 Why do you need non-fatal unittests?
I don't know if this would cause problems with the current approach. But most unit test frameworks do NOT stop on the first failure like D does. They catch the exception, continue with the next test, and in the end print a final report.
I understand that, but I don't think that is what Dicebot is looking for. He's looking to recover from unittests, not just continue.
Oct 15 2014
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
On 10/15/14, 4:25 AM, Walter Bright wrote:
 On 10/14/2014 11:23 PM, Jacob Carlborg wrote:
 On 2014-10-15 07:57, Walter Bright wrote:

 Why do you need non-fatal unittests?
I don't know if this would cause problems with the current approach. But most unit test frameworks do NOT stop on the first failure like D does. They catch the exception, continue with the next test, and in the end print a final report.
I understand that, but I don't think that is what Dicebot is looking for. He's looking to recover from unittests, not just continue.
I think this means you can't get stack traces for exceptions thrown in unit tests, right?
Oct 15 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Wednesday, 15 October 2014 at 07:26:03 UTC, Walter Bright 
wrote:
 On 10/14/2014 11:23 PM, Jacob Carlborg wrote:
 On 2014-10-15 07:57, Walter Bright wrote:

 Why do you need non-fatal unittests?
I don't know if this would cause problems with the current approach. But most unit test frameworks do NOT stop on the first failure like D does. They catch the exception, continue with the next test, and in the end print a final report.
I understand that, but I don't think that is what Dicebot is looking for. He's looking to recover from unittests, not just continue.
How can one continue without recovering? This will result in any kind of environment not being cleaned and false failures of other tests that share it.
Oct 15 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 15 October 2014 at 14:25:43 UTC, Dicebot wrote:
 How can one continue without recovering? This will result in 
 any kind of environment not being cleaned and false failures of 
 other tests that share it.
fork()?
Oct 15 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Wednesday, 15 October 2014 at 14:47:33 UTC, Ola Fosheim 
Grøstad wrote:
 On Wednesday, 15 October 2014 at 14:25:43 UTC, Dicebot wrote:
 How can one continue without recovering? This will result in 
 any kind of environment not being cleaned and false failures 
 of other tests that share it.
fork()?
http://check.sourceforge.net/doc/check_html/check_2.html "Writing a framework for C requires solving some special problems that frameworks for Smalltalk, Java or Python don’t have to face. In all of those language, the worst that a unit test can do is fail miserably, throwing an exception of some sort. In C, a unit test is just as likely to trash its address space as it is to fail to meet its test requirements, and if the test framework sits in the same address space, goodbye test framework. To solve this problem, Check uses the fork() system call to create a new address space in which to run each unit test, and then uses message queues to send information on the testing process back to the test framework. That way, your unit test can do all sorts of nasty things with pointers, and throw a segmentation fault, and the test framework will happily note a unit test error, and chug along. "
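A sketch of the same idea in D (POSIX only; runIsolated is a made-up name):

    import core.stdc.stdio : printf;
    import core.sys.posix.sys.types : pid_t;
    import core.sys.posix.sys.wait : waitpid, WEXITSTATUS, WIFEXITED;
    import core.sys.posix.unistd : _exit, fork;

    // run one test in a forked child: even a segfault or an Error that
    // skips unwinding cannot take down the test runner itself
    bool runIsolated(void function() test)
    {
        pid_t pid = fork();
        if (pid == 0)             // child: run the test, report via exit code
        {
            try { test(); } catch (Throwable) { _exit(1); }
            _exit(0);
        }
        int status;
        waitpid(pid, &status, 0); // parent: harvest the child's result
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }

    void main()
    {
        bool pass = runIsolated(function() { assert(1 + 1 == 2); });
        bool fail = runIsolated(function() { assert(0); });
        printf("pass=%d fail=%d\n", pass, fail);
    }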
Oct 15 2014
prev sibling parent reply Dan Olson <zans.is.for.cans yahoo.com> writes:
"Ola Fosheim "Grøstad\"" <ola.fosheim.grostad+dlang gmail.com> writes:

 On Wednesday, 15 October 2014 at 14:25:43 UTC, Dicebot wrote:
 How can one continue without recovering? This will result in any
 kind of environment not being cleaned and false failures of other
 tests that share it.
fork()?
Forking each unittest sounds like a good solution. -- dano
Oct 16 2014
parent reply "monnoroch" <monnoroch gmail.com> writes:
Hi all!
I've read the topic and I am really surprised that so many engineers
have argued for so long without taking a systematic approach to the
problem.

As I see it, Walter states that there are environmental errors
and program bugs, which are non-recoverable. So use exceptions
(enforce) for the former and asserts for the latter.

Other folks argue that you might want to recover from a program
bug, or not recover from invalid input. Also, an exception might
itself be a program bug. This is a valid point too.

So if both are true, that clearly means that the right solution
would be to introduce four categories: a cross product of the
above:

- bugs that are recoverable
- bugs that are unrecoverable
- input errors that are recoverable
- input errors that are not

Given that, it makes sense to use exceptions for recoverable errors,
no matter whether they are bugs or environmental errors, and
asserts if you can't recover.

So the programmer decides if his program can recover and puts
an assert or an enforce call in his code.

The problem is, as always, with libraries. The library writer
cannot possibly decide whether some unexpected condition is recoverable
or not, so he just can't put both assert and enforce into his
library function, and the caller must check the arguments before
calling the function. Yes, this is annoying, but it is the only
correct way.

But what if he didn't? This brings us to error codes. Yes, they
are the best for library error handling, imo, of course in the
form of Maybe and Error monads. They amount to asking the
caller to decide what to do with the error. But I realize that
you guys are all against error codes of any kind, so...
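For illustration, a sketch of that error-value idea in D (Result is hypothetical, not a Phobos type):

    struct Result(T)
    {
        T value;
        string error;   // empty means success

        bool ok() const { return error.length == 0; }

        T get()         // unchecked access is unrecoverable by default
        {
            assert(ok, error);
            return value;
        }
    }

    Result!int parsePort(string s)
    {
        import std.conv : ConvException, to;
        try
            return Result!int(s.to!int);
        catch (ConvException)
            return Result!int(0, "not a number: " ~ s);
    }

    unittest
    {
        assert(parsePort("8080").ok);   // caller checks: recoverable
        assert(!parsePort("oops").ok);
    }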

I would say that since the caller didn't check the arguments
himself, the bug becomes unrecoverable by default and there should
be an assert, which gives a stack trace, so the programmer can
insert appropriate enforces before the function call.

Finally, this brings me to the conclusion: you don't need a stack
trace in the exception; it is never a bug.
Oct 16 2014
parent "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 16 October 2014 at 18:53:22 UTC, monnoroch wrote:
 So if both are true, that clearly means that the right solution
 would be to introduce four categories: a cross product of the
 above:

 - bugs that are recoverable
 - bugs that are unrecoverable
 - input errors that are recoverable
 - input errors that are not
Yes, I've already started a thread for this: http://forum.dlang.org/thread/zwnycclpgvfsfaactcyl forum.dlang.org but almost no one replied.
Oct 16 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-15 16:25, Dicebot wrote:

 How can one continue without recovering? This will result in any kind of
 environment not being cleaned and false failures of other tests that
 share it.
I will probably use something other than "assert" in my unit tests. Something like assertEq, assertNotEq and so on. It's more flexible, can give better error messages, and I can have it throw an exception instead of an error. But there's still the problem with asserts in contracts and other parts of the code.
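For illustration, a sketch of such a helper (assertEq here is hypothetical, not a Phobos API):

    import std.string : format;

    // throws Exception rather than AssertError, so a test runner can
    // unwind normally, clean up, and continue with the next test
    void assertEq(T1, T2)(T1 actual, T2 expected,
                          string file = __FILE__, size_t line = __LINE__)
    {
        if (actual != expected)
            throw new Exception(format("%s(%s): expected %s, got %s",
                                       file, line, expected, actual));
    }

    unittest
    {
        assertEq(1 + 1, 2);
    }

-- /Jacob Carlborg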
Oct 15 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 16 October 2014 at 06:11:46 UTC, Jacob Carlborg 
wrote:
 On 2014-10-15 16:25, Dicebot wrote:

 How can one continue without recovering? This will result in 
 any kind of
 environment not being cleaned and false failures of other 
 tests that
 share it.
I will probably use something other than "assert" in my unit tests. Something like assertEq, assertNotEq and so on. It's more flexible, can give better error messages, and I can have it throw an exception instead of an error. But there's still the problem with asserts in contracts and other parts of the code.
This is what we are using right now:

    public void test ( char[] op, T1, T2 ) ( T1 a, T2 b,
        char[] file = __FILE__, size_t line = __LINE__ )
    {
        enforce!(op, TestException)(a, b, file, line);
    }

but it won't work well with 3rd-party libraries that use assertions.
Oct 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/16/2014 12:21 PM, Dicebot wrote:
 On Thursday, 16 October 2014 at 06:11:46 UTC, Jacob Carlborg wrote:
 On 2014-10-15 16:25, Dicebot wrote:

 How can one continue without recovering? This will result in any kind of
 environment not being cleaned and false failures of other tests that
 share it.
I will probably use something other than "assert" in my unit tests. Something like assertEq, assertNotEq and so on. It's more flexible, can give better error messages, and I can have it throw an exception instead of an error. But there's still the problem with asserts in contracts and other parts of the code.
This is what we are using right now:

    public void test ( char[] op, T1, T2 ) ( T1 a, T2 b,
        char[] file = __FILE__, size_t line = __LINE__ )
    {
        enforce!(op, TestException)(a, b, file, line);
    }

but it won't work well with 3rd-party libraries that use assertions.
Ok, but why would 3rd party library unittests be a concern? They shouldn't have shipped it if their own unittests fail - that's the whole point of having unittests.
Oct 16 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright wrote:
 Ok, but why would 3rd party library unittests be a concern? 
 They shouldn't have shipped it if their own unittests fail - 
 that's the whole point of having unittests.
Libraries tend to be forked and modified. Libraries aren't always tested in an environment similar to the specific production case. At the same time, not being able to use the same test runner in all Continuous Integration jobs greatly reduces the value of having standard unittest blocks in the first place.
Oct 16 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 16 October 2014 at 19:56:57 UTC, Dicebot wrote:
 Libraries tend to be forked and modified. Libraries aren't always tested in an environment similar to the specific production case.
This seems relevant: http://www.tele-task.de/archive/video/flash/16130/
Oct 16 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/16/2014 12:56 PM, Dicebot wrote:
 On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright wrote:
 Ok, but why would 3rd party library unittests be a concern? They shouldn't
 have shipped it if their own unittests fail - that's the whole point of having
 unittests.
Libraries tend to be forked and modified.
If you're willing to go that far, then yes, you do wind up owning the unittests, in which case s/assert/myassert/ should do it.
 Libraries aren't always tested in an environment similar to the specific production case.
Unittests should not be testing their environment. They should be testing the function's logic, and should mock up input for them as required.
 At the same time, not being able to use the same test runner in all Continuous Integration jobs greatly reduces the value of having standard unittest blocks in the first place.
I understand that, but wouldn't you be modifying the unittests anyway if using an external test runner tool?
Oct 16 2014
parent reply "Dicebot" <public dicebot.lv> writes:
On Thursday, 16 October 2014 at 20:18:04 UTC, Walter Bright wrote:
 On 10/16/2014 12:56 PM, Dicebot wrote:
 On Thursday, 16 October 2014 at 19:35:40 UTC, Walter Bright 
 wrote:
 Ok, but why would 3rd party library unittests be a concern? 
 They shouldn't
 have shipped it if their own unittests fail - that's the 
 whole point of having
 unittests.
Libraries tend to be forked and modified.
If you're willing to go that far, then yes, you do wind up owning the unittests, in which case s/assert/myassert/ should do it.
Which means changing almost all sources and resolving conflicts upon each merge. Forking a library for a few tweaks is not "going that far"; it is absolutely routine. It also complicates propagating changes back upstream, because all tests need to be re-adjusted back to the original style.
 Libraries aren't always tested in an environment similar to the specific production case.
Unittests should not be testing their environment. They should be testing the function's logic, and should mock up input for them as required.
Compiler version, libc version, kernel version - all of it can affect the behaviour of even a pretty self-contained function. A perfectly tested library is as much a reality as a program with 0 bugs.
 At the same time, not being able to use the same test runner in all Continuous Integration jobs greatly reduces the value of having standard unittest blocks in the first place.
I understand that, but wouldn't you be modifying the unittests anyway if using an external test runner tool?
No, right now one can affect the way tests are run by simply replacing the runner with a custom one, and it will work for any number of modules compiled in. The beauty of the `unittest` block approach is that it is simply a bunch of functions that are somewhat easy to discover from the combined sources of the program - a custom runner can do pretty much anything with those. Or it could, if not for the issue with AssertError and cleanup.
Oct 16 2014
parent reply "Atila Neves" <atila.neves gmail.com> writes:
 No, right now one can affect the way tests are run by simply replacing the runner with a custom one, and it will work for any number of modules compiled in. The beauty of the `unittest` block approach is that it is simply a bunch of functions that are somewhat easy to discover from the combined sources of the program - a custom runner can do pretty much anything with those. Or it could, if not for the issue with AssertError and cleanup.
Is cleaning up in a unittest build a problem? I'd say no; if the tests fail it doesn't make much sense to clean up, unless it affects the reporting of failing tests. I catch assertion errors in unit-threaded exactly to support the standard unittest blocks and can't see why I'd care about clean-up. At least in practice it hasn't been an issue, although to be fair I haven't used that functionality a lot (of using unit-threaded to run unittest blocks). Atila
Oct 17 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-17 10:26, Atila Neves wrote:

 Is cleaning up in a unittest build a problem? I'd say no; if the tests fail it doesn't make much sense to clean up, unless it affects the reporting of failing tests.
I have used files in some of my unit tests. I would certainly like those to be properly closed if a test fails (for whatever reason). Now, some of you will argue that one shouldn't use files in unit tests. But that would only work in an ideal and perfect world, which we don't live in. -- /Jacob Carlborg
Oct 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/17/2014 9:10 AM, Jacob Carlborg wrote:
 On 2014-10-17 10:26, Atila Neves wrote:

 Is cleaning up in a unittest build a problem? I'd say no; if the tests fail it doesn't make much sense to clean up, unless it affects the reporting of failing tests.
I have used files in some of my unit tests. I would certainly like those to be properly closed if a test fails (for whatever reason). Now, some of you will argue that one shouldn't use files in unit tests. But that would only work in an ideal and perfect world, which we don't live in.
This should be fairly straightforward to deal with:

1. Write functions to input/output from/to ranges instead of files. Then, have the unittests "mock up" input to drive them that does not come from files. I've used this technique very successfully in Warp.

2. If (1) cannot be done, then write the unittests like:

   {
     openfile();
     scope (exit) closefile();
     scope (failure) assert(0);
     ... use enforce() instead of assert() ...
   }

3. In a script that compiles/runs the unittests, have the script delete any extraneous generated files.
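For illustration, a sketch of (1) (countNonEmpty is a made-up example function):

    import std.algorithm : count;

    // takes any range of lines instead of a File, so a unittest can
    // drive it with an in-memory array - no filesystem involved
    size_t countNonEmpty(R)(R lines)
    {
        return lines.count!(l => l.length != 0);
    }

    unittest
    {
        assert(countNonEmpty(["one", "", "three"]) == 2);
    }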
Oct 17 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 18 October 2014 at 05:22:54 UTC, Walter Bright wrote:
 2. If (1) cannot be done, then write the unittests like:

   {
     openfile();
     scope (exit) closefile();
     scope (failure) assert(0);
     ... use enforce() instead of assert() ...
   }

 3. In a script that compiles/runs the unittests, have the 
 script delete any extraneous generated files.
This is bad, it means:

- I risk having my filesystem ruined by running unit-tests through the compiler.
- The test environment changes between runs.

Built in unit tests should have no side effects. Something along these lines would be a better setup:

1. Load a filesystem from a read-only file to a virtual driver.
2. Run a special initializer for unit tests to set up the in-memory test environment.
3. Create N forks (N = number of cores):
4. Fork the filesystem/program before running a single unit test.
5. Mount the virtual filesystem (from 1).
6. Run the unit test.
7. Collect result from child process and print result.
8. goto 4

But just banning writing to resources would be more suitable. D unit tests are only suitable for testing simple library code anyway.
Oct 17 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-16 21:35, Walter Bright wrote:

 Ok, but why would 3rd party library unittests be a concern? They
 shouldn't have shipped it if their own unittests fail - that's the whole
 point of having unittests.
They will have asserts in contracts and other parts of that code that is not unit tests. -- /Jacob Carlborg
Oct 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/17/2014 9:05 AM, Jacob Carlborg wrote:
 On 2014-10-16 21:35, Walter Bright wrote:

 Ok, but why would 3rd party library unittests be a concern? They
 shouldn't have shipped it if their own unittests fail - that's the whole
 point of having unittests.
They will have asserts in contracts and other parts of that code that is not unit tests.
This particular subthread is about unittests.
Oct 17 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-10-18 06:36, Walter Bright wrote:

 This particular subthread is about unittests.
That doesn't make the problem go away. -- /Jacob Carlborg
Oct 18 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/15/2014 7:25 AM, Dicebot wrote:
 How can one continue without recovering? This will result in any kind of
 environment not being cleaned and false failures of other tests that share it.
Unittest asserts are top level - they shouldn't need recovering from (i.e. unwinding). Just continuing.
Oct 16 2014
prev sibling parent reply Dan Olson <zans.is.for.cans yahoo.com> writes:
Walter Bright <newshound2 digitalmars.com> writes:

 On 10/14/2014 11:23 PM, Jacob Carlborg wrote:
 On 2014-10-15 07:57, Walter Bright wrote:

 Why do you need non-fatal unittests?
I don't know if this would cause problems with the current approach. But most unit test frameworks don't NOT stop on the first failure, like D does. It catches the exception, continues with the next test and in the end prints a final report.
I understand that, but I don't think that is what Dicebot is looking for. He's looking to recover from unittests, not just continue.
That is what I am looking for, just being able to continue from a failed assert in a unittest. On iOS, it is easier to build one app with all unit tests. This is because I don't know of a way to automate the download/run of a bunch of smaller unittests. The test driver catches Throwables, records failure, then goes on to the next test. After catching up on this thread, I feel like unittests should throw Exceptions.
Oct 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/15/2014 7:35 AM, Dan Olson wrote:
 That is what I am looking for, just being able to continue from a failed
 assert in a unittest.
Just use enforce() or something similar instead of assert(). Nothing says you have to use assert() in a unittest.
Oct 16 2014
parent reply Dan Olson <zans.is.for.cans yahoo.com> writes:
Walter Bright <newshound2 digitalmars.com> writes:

 On 10/15/2014 7:35 AM, Dan Olson wrote:
 That is what I am looking for, just being able to continue from a failed
 assert in a unittest.
Just use enforce() or something similar instead of assert(). Nothing says you have to use assert() in a unittest.
Makes sense. However it is druntime and phobos unittests that already use assert. I have convinced myself that catching Throwable is just fine in my case because at worst, unittests that follow an Error might be tainted, but only a perfect score of passing all tests really counts. -- dano
Oct 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/16/2014 8:36 AM, Dan Olson wrote:
 Walter Bright <newshound2 digitalmars.com> writes:

 On 10/15/2014 7:35 AM, Dan Olson wrote:
 That is what I am looking for, just being able to continue from a failed
 assert in a unittest.
Just use enforce() or something similar instead of assert(). Nothing says you have to use assert() in a unittest.
Makes sense. However it is druntime and phobos unittests that already use assert. I have convinced myself that catching Throwable is just fine in my case because at worst, unittests that follow an Error might be tainted, but only a perfect score of passing all tests really counts.
I don't understand why unittests in druntime/phobos are an issue for users. We don't release a DMD unless they all pass - it should be moot for users.
Oct 16 2014
next sibling parent Dan Olson <zans.is.for.cans yahoo.com> writes:
Walter Bright <newshound2 digitalmars.com> writes:
 I don't understand why unittests in druntime/phobos are an issue for
 users. We don't release a DMD unless they all pass - it should be moot
 for users.
I think some context was lost. This is different. I am making mods to LDC, druntime, and phobos to target iPhones and iPads (ARM-iOS). I also can't claim victory until all unittests pass.
Oct 16 2014
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-10-16 20:50, Walter Bright wrote:

 I don't understand why unittests in druntime/phobos are an issue for
 users. We don't release a DMD unless they all pass - it should be moot
 for users.
There are asserts elsewhere in the code. -- /Jacob Carlborg
Oct 17 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 15 October 2014 at 03:18:31 UTC, Walter Bright 
wrote:
 However, the compiler is still going to regard the assert() as 
 nothrow, so the unwinding from an Exception won't happen until 
 up stack a throwing function is encountered.
I hate to say it, but I'm inclined to treat nothrow the same as in C++, which is to basically pretend it's not a part of the language. The efficiency is nice, but not if it means that throwing an Error will cause the program to be invalid. Please tell me there's no plan to change the unwinding behavior when Error is thrown in standard (ie not nothrow) code. I touched on all this in my "on errors" thread that seems to have died. I suppose I could write a DIP but I was hoping for discussion.
Oct 15 2014
next sibling parent reply "Elena" <elena.works gmail.com> writes:
Hi, everyone!
How can I do mathematical rounding (not banker's rounding) for double in D?

0.4 -> 0
0.5 -> 1
1.5 -> 2
2.5 -> 3
3.5 -> 4

Thank you.
Oct 15 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/15/14, 8:11 PM, Elena wrote:
 Hi, everyone!
 How can I do mathematical rounding (not banker's rounding) for double in D?
 0.4 -> 0
 0.5 -> 1
 1.5 -> 2
 2.5 -> 3
 3.5 -> 4

 Thank you.
Add 0.5 and then cast to integral. -- Andrei
Oct 15 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 16 October 2014 at 05:24:23 UTC, Andrei Alexandrescu 
wrote:
 On 10/15/14, 8:11 PM, Elena wrote:
 Hi, everyone!
 How can I do mathematical rounding (not banker's rounding) for double in D?
 0.4 -> 0
 0.5 -> 1
 1.5 -> 2
 2.5 -> 3
 3.5 -> 4

 Thank you.
Add 0.5 and then cast to integral. -- Andrei
This would be the "round half up" mode that produces 0 for -0.5:

    floor(x + 0.5)

Whereas "round nearest" (ties away from zero) should produce -1 for -0.5:

    floor(abs(x) + 0.5) * sgn(x)
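For illustration, both formulas as D code (note std.math.round also rounds ties away from zero, if that's acceptable):

    import std.math : abs, floor, sgn;

    double roundHalfUp(double x)    // ties toward +infinity: -0.5 -> 0
    {
        return floor(x + 0.5);
    }

    double roundHalfAway(double x)  // ties away from zero: -0.5 -> -1
    {
        return floor(abs(x) + 0.5) * sgn(x);
    }

    unittest
    {
        assert(roundHalfUp(2.5) == 3 && roundHalfUp(-0.5) == 0);
        assert(roundHalfAway(2.5) == 3 && roundHalfAway(-0.5) == -1);
    }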
Oct 16 2014
prev sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Thursday, 16 October 2014 at 03:11:57 UTC, Elena wrote:
 Hi, everyone!
 How can I do mathematical rounding (not banker's rounding) for
Never used it, but: http://dlang.org/library/std/math/FloatingPointControl.html phobos/math.d lists the enums: roundToNearest, roundDown, roundUp, roundToZero
Oct 15 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/15/2014 6:54 PM, Sean Kelly wrote:
 I hate to say it, but I'm inclined to treat nothrow the same as in C++, which is to basically pretend it's not a part of the language. The efficiency is nice, but not if it means that throwing an Error will cause the program to be invalid. Please tell me there's no plan to change the unwinding behavior when Error is thrown in standard (ie not nothrow) code.
Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use enforce() instead of assert().
Oct 16 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:
 On 10/15/2014 6:54 PM, Sean Kelly wrote:
 I hate to say it, but I'm inclined to treat nothrow the same 
 as in C++, which is
 to basically pretend it's not a part of the language. The 
 efficiency is nice,
 but not if it means that throwing an Error will cause the 
 program to be
 invalid.  Please tell me there's no plan to change the 
 unwinding behavior when
 Error is thrown in standard (ie not nothrow) code.
Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use enforce() instead of assert().
I'm more concerned about Phobos. If it uses nothrow and asserts in preconditions then the decision has been made for me.
Oct 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/16/2014 6:46 AM, Sean Kelly wrote:
 On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:
 Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use
 enforce() instead of assert().
I'm more concerned about Phobos. If it uses nothrow and asserts in preconditions then the decision has been made for me.
Which function(s) in particular?
Oct 16 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Thursday, 16 October 2014 at 18:49:13 UTC, Walter Bright wrote:
 On 10/16/2014 6:46 AM, Sean Kelly wrote:
 On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright 
 wrote:
 Don't throw Errors when you need to unwind. Throw Exceptions. 
 I.e. use enforce() instead of assert().
I'm more concerned about Phobos. If it uses nothrow and asserts in preconditions then the decision has been made for me.
Which function(s) in particular?
Nothing specifically... which is kind of the problem. If I call an impure nothrow function, it's possible the function accesses shared state that will not be properly cleaned up in the event of a thrown Error--say it contains a synchronized block, for example. So even if I can be sure that the problem that resulted in an Error being thrown did not corrupt program state, I can't be sure that the failure to unwind did not as well. That said, I'm inclined to say that this is only a problem because of how many things are classified as Errors at this point. If contracts used some checking mechanism other than assert, perhaps this would be enough. Again I'll refer to my "on errors" post that gets into this a bit. Using two broad categories: exceptions and errors, is unduly limiting.
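A sketch of the hazard being described (the names are made up; assumes a non-release build):

__gshared int counter;

nothrow void bump()
{
    synchronized   // takes a monitor that is normally released on scope exit
    {
        // If this fails, the AssertError can propagate out of this
        // nothrow function without running cleanups, potentially leaving
        // the monitor locked for every later caller.
        assert(counter < 100);
        ++counter;
    }
}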
Oct 16 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/16/2014 12:08 PM, Sean Kelly wrote:
 On Thursday, 16 October 2014 at 18:49:13 UTC, Walter Bright wrote:
 On 10/16/2014 6:46 AM, Sean Kelly wrote:
 On Thursday, 16 October 2014 at 07:44:37 UTC, Walter Bright wrote:
 Don't throw Errors when you need to unwind. Throw Exceptions. I.e. use
 enforce() instead of assert().
I'm more concerned about Phobos. If it uses nothrow and asserts in preconditions then the decision has been made for me.
Which function(s) in particular?
Nothing specifically... which is kind of the problem. If I call an impure nothrow function, it's possible the function accesses shared state that will not be properly cleaned up in the event of a thrown Error--say it contains a synchronized block, for example. So even if I can be sure that the problem that resulted in an Error being thrown did not corrupt program state, I can't be sure that the failure to unwind did not as well.
Contract errors in Phobos/Druntime should be limited to having passed it invalid arguments, which should be documented, or simply that the function has a bug in it, or that it ran out of memory (which is generally not recoverable anyway). I.e. I'm not seeing where this is a practical problem.
 That said, I'm inclined to say that this is only a problem
 because of how many things are classified as Errors at this
 point.  If contracts used some checking mechanism other than
 assert, perhaps this would be enough.  Again I'll refer to my "on
 errors" post that gets into this a bit.  Using two broad
 categories: exceptions and errors, is unduly limiting.
My initial impression is that there's so much confusion about what should be an Error and what should be an Exception, that adding a third category will not improve things.
Oct 16 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-16 21:31, Walter Bright wrote:

 Contract errors in Phobos/Druntime should be limited to having passed it
 invalid arguments, which should be documented
That doesn't mean it won't happen. -- /Jacob Carlborg
Oct 17 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/17/2014 9:13 AM, Jacob Carlborg wrote:
 On 2014-10-16 21:31, Walter Bright wrote:

 Contract errors in Phobos/Druntime should be limited to having passed it
 invalid arguments, which should be documented
That doesn't mean it won't happen.
Which means they'll be program bugs, not environmental errors. It is of great value to distinguish between program bugs and input/environmental errors, and to treat them entirely differently. It makes code easier to understand, more robust, and better/faster code can be generated. Using asserts to detect input/environmental errors is a bad practice - something like enforce() should be used instead.

I understand that some have to work with poorly written libraries that incorrectly use assert. If that's the only issue with those libraries, you're probably lucky :-) Short term, I suggest editing the code of those libraries, and pressuring the authors of them. Longer term, we need to establish a culture of using assert/enforce correctly.

This is not as pie-in-the-sky as it sounds. Over the years, a lot of formerly popular bad practices in C and C++ have been relentlessly driven out of existence by getting the influential members of the communities to endorse and advocate proper best practices.

----------------------

I do my best to practice what I preach. In the DMD source code, an assert tripping always, by definition, means it's a compiler bug. It is never used to signal errors in code being compiled or environmental errors. If a badly formed .d file causes dmd to assert, it is always a BUG in dmd.
Oct 17 2014
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior. -- /Jacob Carlborg
Oct 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.
As I've said before, tripping an assert by definition means the program has entered an unknown state. I don't believe it is possible for any language to make guarantees beyond that point. Now, if it is a "known" unknown state, and you want to recover, the solution is straightforward - use enforce(). enforce() offers the guarantees you're asking for. Using assert() when you mean enforce() is like pulling the fire alarm but not wanting the fire dept. to show up.
Oct 18 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.
As I've said before, tripping an assert by definition means the program has entered an unknown state. I don't believe it is possible for any language to make guarantees beyond that point.
What about using the contracts of a function to optimize? They are mainly asserts, after all.
Oct 20 2014
parent reply "rst256" <ussr.24 yandex.ru> writes:
On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
 On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
 wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental 
 errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.
As I've said before, tripping an assert by definition means the program has entered an unknown state. I don't believe it is possible for any language to make guarantees beyond that point.
 What about using the contracts of a function to optimize? They are mainly asserts, after all.
this(errnoEnforce(.fopen(name, stdioOpenmode),
        text("Cannot open file `", name, "' in mode `", stdioOpenmode, "'")),
    name);

Making a couple of instances of classes without knowing whether they are necessary at all - and performance did not cry? And why do you have all these kinds of machinery? Is your compiler really that good?
 What about using the contracts of a function to optimize? They
It's link time.
 is possible for any language to make guarantees beyond that
Of course not; I will explain later, in 2-3 hours. Sorry, business.

Offtopic:

string noexist_file_name = "bag_file_global";
{
    writefln("------ begin scope: after");
    auto fobj = File(noexist_file_name);
    scope(failure) writefln("test1.failure");
    scope(exit) writefln("test1.exit");
}

std.exception.ErrnoException std\stdio.d(362): Cannot open file `bag_...
---------------
(5 lines with only a memory addr)
0x7C81077 in RegisterWaitForInputIdle

I think you need to stop after the first error message; see exception.d, in the constructor of class ErrnoException : Exception.
Oct 20 2014
next sibling parent "rst256" <ussr.24 yandex.ru> writes:
On Tuesday, 21 October 2014 at 03:25:55 UTC, rst256 wrote:
In this post I forgot to correct the machine translation.
I am so sorry!
 On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
 On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
 wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental 
 errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.
As I've said before, tripping an assert by definition means the program has entered an unknown state. I don't believe it is possible for any language to make guarantees beyond that point.
 What about using the contracts of a function to optimize? They are mainly asserts, after all.
this(errnoEnforce(.fopen(name, stdioOpenmode),
        text("Cannot open file `", name, "' in mode `", stdioOpenmode, "'")),
    name);

Making a couple of instances of classes without knowing whether they are necessary at all - and performance did not cry? And why do you have all these kinds of machinery? Is your compiler really that good?
 What about using the contracts of a function to optimize? They
 It's link time.
 is possible for any language to make guarantees beyond that
Of course not; I will explain later, in 2-3 hours. Sorry, business.

Offtopic:

string noexist_file_name = "bag_file_global";
{
    writefln("------ begin scope: after");
    auto fobj = File(noexist_file_name);
    scope(failure) writefln("test1.failure");
    scope(exit) writefln("test1.exit");
}

std.exception.ErrnoException std\stdio.d(362): Cannot open file `bag_...
---------------
(5 lines with only a memory addr)
0x7C81077 in RegisterWaitForInputIdle

I think you need to stop after the first error message; see exception.d, in the constructor of class ErrnoException : Exception.
Oct 21 2014
prev sibling parent "eles" <eles eles.com> writes:
On Tuesday, 21 October 2014 at 03:25:55 UTC, rst256 wrote:
 On Monday, 20 October 2014 at 20:36:58 UTC, eles wrote:
 On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
 wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:
 It's link time.
No, it's not. At least not if you move them into the generated .di files along with the function prototypes. Basically, you would pull more source from the .d files into the .di files.
Oct 22 2014
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/18/2014 07:40 PM, Walter Bright wrote:
 As I've said before, tripping an assert by definition means the program
 has entered an unknown state. I don't believe it is possible for any
 language to make guarantees beyond that point.
What about the guarantee that your compiler didn't _intentionally_ screw them completely?
Oct 20 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/20/2014 1:54 PM, Timon Gehr wrote:
 On 10/18/2014 07:40 PM, Walter Bright wrote:
 As I've said before, tripping an assert by definition means the program
 has entered an unknown state. I don't believe it is possible for any
 language to make guarantees beyond that point.
What about the guarantee that your compiler didn't _intentionally_ screw them completely?
What does that mean?
Oct 20 2014
parent "rst256" <ussr.24 yandex.ru> writes:
In the extreme case, if you want to change anything,
or as a way to shut the problem down quickly:
add file and line info to the
debug-symbol generation algorithm.
- Walter?
Oct 23 2014
prev sibling next sibling parent reply "w0rp" <devw0rp gmail.com> writes:
On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:

 Which means they'll be program bugs, not environmental errors.
Yes, but just because I made a mistake in using a function (hitting an assert) doesn't mean I want to have undefined behavior.
As I've said before, tripping an assert by definition means the program has entered an unknown state. I don't believe it is possible for any language to make guarantees beyond that point. Now, if it is a "known" unknown state, and you want to recover, the solution is straightforward - use enforce(). enforce() offers the guarantees you're asking for. Using assert() when you mean enforce() is like pulling the fire alarm but not wanting the fire dept. to show up.
I agree with you on this. I've only ever used assert() for expressing, "This should never happen." There's a difference between "this might happen if the environment goes wrong," which is like a tire being popped on a car, and "this should never happen," which is like a car turning into King Kong and flying away. My most common assert in D is typically assert(x !is null) for demanding that objects are initialised.
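A sketch of that pattern (Widget and render are made-up names):

class Widget { }

void render(Widget w)
{
    // "This should never happen": callers must pass an initialised object.
    assert(w !is null, "render() was given a null Widget");
    // ... actual drawing would go here
}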
Oct 22 2014
parent "eles" <eles eles.com> writes:
On Wednesday, 22 October 2014 at 15:05:58 UTC, w0rp wrote:
 On Saturday, 18 October 2014 at 17:40:43 UTC, Walter Bright 
 wrote:
 On 10/18/2014 8:21 AM, Jacob Carlborg wrote:
 On 2014-10-18 07:09, Walter Bright wrote:
 never happen," which is like a car turning into King Kong and
It depends on the environment: http://i.dailymail.co.uk/i/pix/2011/03/27/article-1370559-0B499D2E00000578-931_634x470.jpg
Oct 22 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 18/10/2014 18:40, Walter Bright wrote:
 As I've said before, tripping an assert by definition means the program
 has entered an unknown state. I don't believe it is possible for any
 language to make guarantees beyond that point.
The guarantees (if any) would not be made by the language, but by the programmer. The language cannot know if a program is totally broken and undefined when an assert fails, but a programmer can, for each particular assert, make some assumptions about which fault domains (like Sean put it) can be affected and which are not. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 29 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/29/2014 5:37 AM, Bruno Medeiros wrote:
 On 18/10/2014 18:40, Walter Bright wrote:
 As I've said before, tripping an assert by definition means the program
 has entered an unknown state. I don't believe it is possible for any
 language to make guarantees beyond that point.
The guarantees (if any), would not be made by the language, but by the programmer. The language cannot know if a program is totally broken and undefined when an assert fails, but a programmer can, for each particular assert, make some assumptions about which fault domains (like Sean put it) can be affected and which are not.
Assumptions are not guarantees. In any case, if the programmer knows that an assert error is restricted to a particular domain, and is recoverable, and wants to recover from it, use enforce(), not assert().
Oct 29 2014
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2014-10-29 22:22, Walter Bright wrote:

 Assumptions are not guarantees.

 In any case, if the programmer knows that an assert error is restricted to
 a particular domain, and is recoverable, and wants to recover from it,
 use enforce(), not assert().
I really don't like "enforce". It encourage the use of plain Exception instead of a subclass. -- /Jacob Carlborg
Oct 30 2014
prev sibling next sibling parent reply "Kagamin" <spam here.lot> writes:
On Wednesday, 29 October 2014 at 21:23:00 UTC, Walter Bright 
wrote:
 In any case, if the programmer knows that an assert error is 
 restricted to a particular domain, and is recoverable, and 
 wants to recover from it, use enforce(), not assert().
But all that does is work around the assert's behavior of ignoring cleanups. Maybe, when it's known that a failure is not restricted, some different way of failure reporting should be used?
Nov 01 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 4:14 AM, Kagamin wrote:
 On Wednesday, 29 October 2014 at 21:23:00 UTC, Walter Bright wrote:
 In any case, if the programmer knows that an assert error is restricted to a
 particular domain, and is recoverable, and wants to recover from it, use
 enforce(), not assert().
 But all that does is work around the assert's behavior of ignoring cleanups.
It is not "working around" anything unless you're trying to use a screwdriver as a hammer. Cleanups are not appropriate after a program has entered an unknown state.
 Maybe, when it's known that a failure is not restricted, some different way of
 failure reporting should be used?
assert() and enforce() both work as designed.
Nov 01 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/10/2014 21:22, Walter Bright wrote:
 On 10/29/2014 5:37 AM, Bruno Medeiros wrote:
 On 18/10/2014 18:40, Walter Bright wrote:
 As I've said before, tripping an assert by definition means the program
 has entered an unknown state. I don't believe it is possible for any
 language to make guarantees beyond that point.
The guarantees (if any), would not be made by the language, but by the programmer. The language cannot know if a program is totally broken and undefined when an assert fails, but a programmer can, for each particular assert, make some assumptions about which fault domains (like Sean put it) can be affected and which are not.
Assumptions are not guarantees.
Let me give an example:

double sqrt(double num) {
    assert(num >= 0);
    ...

With just this, then purely from a compiler/language viewpoint, if the assert is triggered the *language* doesn't know if the whole program is corrupted (formatting the hard disk, etc.), or if the fault is localized there, and an error/exception can be thrown cleanly (clean in the sense that other parts of the program are not corrupted). So the language doesn't know, but the *programmer* can make a reasoning in each particular assert of which domains/components of the program are affected by that assertion failure. In the sqrt() case above, the programmer can easily state that the math library that sqrt is part of is not corrupted, and its state is not totally unknown (as in, it's not deadlocked, nor is it formatting the hard disk!). That being the case, sqrt() can be made to throw an exception, and then that "assertion" failure can be recovered cleanly. Which leads to what you say next:
 In any case, if the programmer knows that an assert error is restricted to
 a particular domain, and is recoverable, and wants to recover from it,
 use enforce(), not assert().
Very well then. But then we'll get to the point where enforce() will become much more popular than assert to check for contract conditions. assert() will be relegated to niche and rare situations where the program can't really know how to continue/recover cleanly (memory corruption for example). That idiom is fine with me actually - but then the documentation for assert should reflect that. -- Bruno Medeiros https://twitter.com/brunodomedeiros
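The enforce() variant of the sqrt() example above might look like this (mySqrt is a made-up name to avoid clashing with std.math.sqrt):

import std.exception : enforce;
static import std.math;

double mySqrt(double num)
{
    // A recoverable, localized failure: throws a plain Exception that
    // callers of the math library can catch cleanly.
    enforce(num >= 0, "mySqrt: argument must be non-negative");
    return std.math.sqrt(num);
}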
Nov 07 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Friday, 7 November 2014 at 15:00:35 UTC, Bruno Medeiros wrote:
 Very well then. But then we'll get to the point where enforce() 
 will become much more popular than assert to check for contract 
 conditions. assert() will be relegated to niche and rare 
 situations where the program can't really know how to 
 continue/recover cleanly (memory corruption for example).

 That idiom is fine with me actually - but then the 
 documentation for assert should reflect that.
This looks like the only practical solution to me right now - but it is complicated by the fact that assert is not the only source of Error in the library code.
Nov 09 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/7/2014 7:00 AM, Bruno Medeiros wrote:
 Let me give an example:

 double sqrt(double num) {
    assert(num >= 0);
    ...

 With just this, then purely from a compiler/language viewpoint, if the assert
is
 triggered the *language* doesn't know if the whole program is corrupted
 (formatting the hard disk, etc.), or if the fault is localized there, and an
 error/exception can be thrown cleanly (clean in the sense that other parts of
 the program are not corrupted).

 So the language doesn't know, but the *programmer* can make a reasoning in each
 particular assert of which domains/components of the program are affected by
 that assertion failure. In the sqrt() case above, the programmer can easily
 state that the math library that sqrt is part of is not corrupted, and its
state
 is not totally unknown (as in, it's not deadlocked, nor is it formatting the
 hard disk!).
Making such an assumption presumes that the programmer knows the SOURCE of the bug. He does not. The source could be a buffer overflow, a wild pointer, any sort of corruption.
 Very well then. But then we'll get to the point where enforce() will become
much
 more popular than assert to check for contract conditions. assert() will be
 relegated to niche and rare situations where the program can't really know how
to
 continue/recover cleanly (memory corruption for example).

 That idiom is fine with me actually - but then the documentation for assert
 should reflect that.
I created this thread because it is an extremely important topic. It has come up again and again for my entire career. There is no such thing as knowing in advance what caused a bug, and that the bug is "safe" to continue from. If you know in advance what caused it, then it becomes expected program behavior, and is not a bug. assert() is for bug detection, detecting state that should have never happened. By definition you cannot know it is "safe", you cannot know what caused it. enforce() is for dealing with known, expected conditions.
Nov 09 2014
next sibling parent reply "eles" <eles eles.com> writes:
On Sunday, 9 November 2014 at 21:34:05 UTC, Walter Bright wrote:
 On 11/7/2014 7:00 AM, Bruno Medeiros wrote:
 assert() is for bug detection, detecting state that should have 
 never happened. By definition you cannot know it is "safe", you 
 cannot know what caused it.

 enforce() is for dealing with known, expected conditions.
This is clear. The missing piece is a way to make the compile enforce that use on the user. Code review alone does not work.
Nov 09 2014
parent "eles" <eles eles.com> writes:
On Sunday, 9 November 2014 at 21:59:19 UTC, eles wrote:
 On Sunday, 9 November 2014 at 21:34:05 UTC, Walter Bright wrote:
 On 11/7/2014 7:00 AM, Bruno Medeiros wrote:
 assert() is for bug detection, detecting state that should 
 have never happened. By definition you cannot know it is 
 "safe", you cannot know what caused it.

 enforce() is for dealing with known, expected conditions.
This is clear. The missing piece is a way to make the compile enforce that use on the user. Code review alone does not work.
This is clear. The missing piece is a way to make the compiler enforce that separate use on the user. Code review alone does not work.
Nov 09 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 09/11/2014 21:33, Walter Bright wrote:
 On 11/7/2014 7:00 AM, Bruno Medeiros wrote:
 Let me give an example:

 double sqrt(double num) {
    assert(num >= 0);
    ...

 With just this, then purely from a compiler/language viewpoint, if the
 assert is
 triggered the *language* doesn't know if the whole program is corrupted
 (formatting the hard disk, etc.), or if the fault is localized there,
 and an
 error/exception can be thrown cleanly (clean in the sense that other
 parts of
 the program are not corrupted).

 So the language doesn't know, but the *programmer* can make a
 reasoning in each
 particular assert of which domains/components of the program are
 affected by
 that assertion failure. In the sqrt() case above, the programmer can
 easily
 state that the math library that sqrt is part of is not corrupted, and
 its state
 is not totally unknown (as in, it's not deadlocked, nor is it
 formatting the
 hard disk!).
Making such an assumption presumes that the programmer knows the SOURCE of the bug. He does not. The source could be a buffer overflow, a wild pointer, any sort of corruption.
 Very well then. But then we'll get to the point where enforce() will
 become much
 more popular than assert to check for contract conditions. assert()
 will be
 relegated to niche and rare situations where the program cant really
 know how to
 continue/recover cleanly (memory corruption for example).

 That idiom is fine with me actually - but then the documentation for
 assert
 should reflect that.
I created this thread because it is an extremely important topic. It has come up again and again for my entire career. There is no such thing as knowing in advance what caused a bug, and that the bug is "safe" to continue from. If you know in advance what caused it, then it becomes expected program behavior, and is not a bug. assert() is for bug detection, detecting state that should have never happened. By definition you cannot know it is "safe", you cannot know what caused it. enforce() is for dealing with known, expected conditions.
As I mentioned before, it's not about knowing exactly what caused it, nor knowing for sure if it is "safe" (this is an imprecise term anyways, in this context). It's about making an educated guess about what will provide a better user experience when an assertion is triggered: halting the program, or ignoring the bug and continuing the program (even if admittedly the program will be in a buggy state). I've already mentioned several examples of situations where I think the latter is preferable. Just to add another one, one that I recently came across while coding, was an assertion check that I put which, if it were to fail, would only cause a redundant use of memory (but no NPEs or access violations or invalid state, etc.). -- Bruno Medeiros https://twitter.com/brunodomedeiros
Nov 19 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/19/2014 3:59 AM, Bruno Medeiros wrote:
 Just to add another one, one that I recently came across while coding, was an
 assertion check that I put, which, if it were to fail, would only cause a
 redundant use of memory (but no NPEs or access violations or invalid state,
etc.).
If you're comfortable with that, then you should be using enforce(), not assert().
Nov 19 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 18 October 2014 at 05:10:20 UTC, Walter Bright wrote:
 I understand that some have to work with poorly written 
 libraries that incorrectly use assert. If that's the only issue 
 with those libraries, you're probably lucky :-) Short term, I 
 suggest editing the code of those libraries, and pressuring the 
 authors of them. Longer term, we need to establish a culture of 
 using assert/enforce correctly.
So you consider the library interface to be user input? What about calls that are used internally but also exposed as part of the library interface?
Oct 18 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/18/2014 9:01 AM, Sean Kelly wrote:
 So you consider the library interface to be user input?
The library designer has to make that decision, not the language.
 What about calls that are used internally but also exposed as part of the
library interface?
The library designer has to make a decision about whose job it is to validate the parameters to an interface, and thereby deciding what are input/environmental errors and what are programming bugs. Avoiding making this decision means the API is underspecified, and we all know how poorly that works. Consider:

/******************************
 * foo() does magical things.
 * Parameters:
 *      x a value that must be greater than 0 and less than 8
 *      y a positive integer
 * Throws:
 *      Exception if y is negative
 * Returns:
 *      magic value
 */
int foo(int x, int y)
in
{
    assert(x > 0 && x < 8);
}
body
{
    enforce(y >= 0, "silly rabbit, y should be positive");
    ...
    return ...;
}
Oct 18 2014
parent "rst256" <ussr.24 yandex.ru> writes:
On Saturday, 18 October 2014 at 17:55:04 UTC, Walter Bright wrote:
 On 10/18/2014 9:01 AM, Sean Kelly wrote:
 So you consider the library interface to be user input?
The library designer has to make that decision, not the language.
 What about calls that are used internally but also exposed as 
 part of the library interface?
 The library designer has to make a decision about whose job it is to validate the parameters to an interface, and thereby deciding what are input/environmental errors and what are programming bugs. Avoiding making this decision means the API is underspecified, and we all know how poorly that works. Consider:

 /******************************
  * foo() does magical things.
  * Parameters:
  *      x a value that must be greater than 0 and less than 8
  *      y a positive integer
  * Throws:
  *      Exception if y is negative
  * Returns:
  *      magic value
  */
 int foo(int x, int y)
 in
 {
     assert(x > 0 && x < 8);
 }
 body
 {
     enforce(y >= 0, "silly rabbit, y should be positive");
     ...
     return ...;
 }
Contract programming. A contract rider would list all the API conditions required for this item; in this case, a list of exception handlers on the client side. The epic example from the first post might look like this:

in { rider(except, "this class may throw an IO exception, you must define ...") }
Oct 20 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/9/2014 9:33 AM, Johannes Pfau wrote:
 A point which hasn't been discussed yet:

 Errors and therefore assert can be used in nothrow functions. This is a
 pita for compilers because it now can't do certain optimizations. When
 porting GDC to ARM we started to see problems because of that (can't
 unwind from nothrow functions on ARM, program just aborts). And now we
 therefore have to worsen the codegen for nothrow functions because of
 this.

 I think Walter sometimes suggested that it would be valid for a
 compiler to not unwind Errors at all (in release mode), but simply kill
 the program and dump a error message. This would finally allow us to
 optimize nothrow functions.
Currently, Errors can be caught, but intervening finally blocks are not necessarily run. The reasons for this are:

1. better code generation

2. since after an Error the program is likely in an unknown state, the less code that is run after an Error, the better, because it may make things worse
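A minimal illustration of that rule, assuming a non-release build; whether "finally ran" prints is exactly the latitude described above:

import std.stdio : writeln;

void f()
{
    try
    {
        assert(false, "boom"); // throws AssertError, an Error
    }
    finally
    {
        // Not guaranteed to execute while an Error is unwinding.
        writeln("finally ran");
    }
}

void main()
{
    try { f(); }
    catch (Throwable t) { writeln("caught: ", t.msg); } // Errors are catchable
}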
Oct 11 2014
prev sibling next sibling parent reply Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10/3/2014 10:00 AM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 On 29/09/14 02:09, Walter Bright via Digitalmars-d wrote:
 If the program has entered an unknown state, its behavior from then on
 cannot be
 predictable. There's nothing I or D can do about that. D cannot
 officially
 endorse such a practice, though D being a systems programming language
 it will
 let you do what you want.

 I would not even consider such a practice for a program that is in
 charge of
 anything that could result in injury, death, property damage, security
 breaches,
 etc.
I think I should clarify that I'm not asking you to say "I endorse catching Errors". Your point about systems responsible for the safety of people or property is very well made, and I'm fully in agreement with you about this. What I'm asking you to consider is a use-case, one that I picked quite carefully. Without assuming anything about how the system is architected, if we have a telephone exchange, and an Error occurs in the handling of a single call, it seems to me fairly unarguable that it's essential to avoid this bringing down everyone else's call with it. That's not simply a matter of convenience -- it's a matter of safety, because those calls might include emergency calls, urgent business communications, or any number of other circumstances where dropping someone's call might have severe negative consequences. As I'm sure you realize, I also picked that particular use-case because it's one where there is a well-known technological solution -- Erlang -- which has as a key feature its ability to isolate different parts of the program, and to deal with errors by bringing down the local process where the error occurred, rather than the whole system. This is an approach which is seriously battle-tested in production. As I said, I'm not asking you to endorse catching Errors in threads, or other gross simplifications of Erlang's approach. What I'm interested in are your thoughts on how we might approach resolving the requirement for this kind of stability and localization of error-handling with the tools that D provides. I don't mind if you say to me "That's your problem" (which it certainly is:-), but I'd like it to be clear that it _is_ a problem, and one that it's important for D to address, given its strong standing in the development of super-high-connectivity server applications.
The part of Walter's point that is either deliberately overlooked or somewhat misunderstood here is the notion of a fault domain. In a typical unix or windows based environment, it's a process. A fault within the process yields the aborting of the process but not all processes. Erlang introduces within its execution model a concept of a process within the higher level notion of the os level process. Within the Erlang runtime its individual processes run independently and can each fail independently. The Erlang runtime guarantees a higher level of separation than a typical threaded java or c++ application. An error within the Erlang runtime itself would justifiably cause the entire system to be halted. Just as within an airplane, to use Walter's favorite analogy, the seat entertainment system is physically and logically separated from flight control systems, thus a fault within the former has no impact on the latter. So, where you have domains which must not impact each other, you reach for tools that allow complete separation such that faults within one CANNOT impact the other. You don't leave room for 'might not'. Later, Brad
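To make that concrete in D terms: one way to get a separate fault domain today is to put risky work behind an OS process boundary. A minimal supervisor sketch using std.process (the ./worker program and its argument are hypothetical):

import std.process : execute;
import std.stdio : writeln;

void main()
{
    // The worker runs in its own process - the default fault domain in D.
    // If it trips an assert and aborts, only the worker dies; this
    // supervisor keeps running and can report or retry.
    auto result = execute(["./worker", "job-42"]);
    if (result.status != 0)
        writeln("worker failed with status ", result.status);
}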
Oct 03 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 3 October 2014 at 17:38:40 UTC, Brad Roberts via
Digitalmars-d wrote:
 The part of Walter's point that is either deliberately 
 overlooked or somewhat misunderstood here is the notion of a 
 fault domain.  In a typical unix or windows based environment, 
 it's a process.  A fault within the process yields the aborting 
 of the process but not all processes.  Erlang introduces within 
 its execution model a concept of a process within the higher 
 level notion of the os level process.  Within the Erlang 
 runtime its individual processes run independently and can 
 each fail independently.  The Erlang runtime guarantees a 
 higher level of separation than a typical threaded java or c++ 
 application.  An error within the Erlang runtime itself would 
 justifiably cause the entire system to be halted.  Just as 
 within an airplane, to use Walter's favorite analogy, the seat 
 entertainment system is physically and logically separated from 
 flight control systems thus a fault within the former has no 
 impact on the latter.
Yep. And I think it's a fair assertion that the default fault domain in a D program is at the process level, since D is not inherently memory safe. But I don't think the language should necessarily make that assertion to the degree that no other definition is possible.
Oct 03 2014
parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 03/10/2014 19:20, Sean Kelly wrote:
 On Friday, 3 October 2014 at 17:38:40 UTC, Brad Roberts via
 Digitalmars-d wrote:
 The part of Walter's point that is either deliberately overlooked or
 somewhat misunderstood here is the notion of a fault domain.  In a
 typical unix or windows based environment, it's a process.  A fault
 within the process yields the aborting of the process but not all
 processes.  Erlang introduces within its execution model a concept of
 a process within the higher level notion of the os level process.
 Within the Erlang runtime its individual processes run independently
 and can each fail independently.  The Erlang runtime guarantees a
 higher level of separation than a typical threaded java or c++
 application.  An error within the Erlang runtime itself would
 justifiably cause the entire system to be halted.  Just as within an
 airplane, to use Walter's favorite analogy, the seat entertainment
 system is physically and logically separated from flight control
 systems thus a fault within the former has no impact on the latter.
Yep. And I think it's a fair assertion that the default fault domain in a D program is at the process level, since D is not inherently memory safe. But I don't think the language should necessarily make that assertion to the degree that no other definition is possible.
Yes to Brad, and then yes to Sean. That nailed the point. To that I would only add that, when encountering a fault in a process, even an estimation (that is, not a 100% certainty) that such a fault only affects a certain domain of the process would still be useful to certain kinds of systems and applications. I don't think memory-safety is at the core of the issue. Java is memory-safe, yet if you encounter a null pointer exception, you're still not sure if your whole application is now in an unusable state, or if the NPE was just confined to, say, the operation the user just tried to do, or some other component of the application. There are no guarantees. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 08 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Wed, 08 Oct 2014 16:30:19 +0100
schrieb Bruno Medeiros <bruno.do.medeiros+dng gmail.com>:

 I don't think memory-safety is at the core of the issue. Java is 
 memory-safe, yet if you encounter a null pointer exception, you're still 
 not sure if your whole application is now in an unusable state, or if 
 the NPE was just confined to, say, the operation the user just tried to 
 do, or some other component of the application. There are no guarantees.
I know of a karaoke program written in .NET that doesn't check whether a list view is empty before trying to select the first element. It results in a popup message about the Exception, and you can continue from there. It is a logic error (aka an assertion), but being tailored towards user interfaces, .NET doesn't kill the app. Heck, if Mono-D closed down on every NPE I wouldn't be using it any more. ;) -- Marco
Oct 10 2014
parent reply Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10/10/2014 2:26 AM, Marco Leise via Digitalmars-d wrote:
 Am Wed, 08 Oct 2014 16:30:19 +0100
 schrieb Bruno Medeiros <bruno.do.medeiros+dng gmail.com>:

 I don't think memory-safety is at the core of the issue. Java is
 memory-safe, yet if you encounter a null pointer exception, you're still
 not sure if your whole application is now in an unusable state, or if
 the NPE was just confined to, say, the operation the user just tried to
 do, or some other component of the application. There are no guarantees.
 I know of a karaoke program written in .NET that doesn't check whether a list view is empty before trying to select the first element. It results in a popup message about the Exception, and you can continue from there. It is a logic error (aka an assertion), but being tailored towards user interfaces, .NET doesn't kill the app. Heck, if Mono-D closed down on every NPE I wouldn't be using it any more. ;)
How much do you want to bet that if it did exit the app, the bug would actually have been addressed rather than still remaining broken?
Oct 10 2014
parent "Dicebot" <public dicebot.lv> writes:
On Friday, 10 October 2014 at 18:53:46 UTC, Brad Roberts via 
Digitalmars-d wrote:
 How much do you want to bet that if it did exit the app, the 
 bug would actually have been addressed rather than still 
 remaining broken?
It would more likely cause the maintainer to abandon the project after encountering raging users he can't help. Most such Mono-D issues simply are not reproducible on Alex's side and remain unfixed because of that. I can't remember a single case where he remained willingly ignorant of an unhandled exception.
Oct 10 2014
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 10:38 AM, Brad Roberts via Digitalmars-d wrote:
 The part of Walter's point that is either deliberately overlooked or somewhat
 misunderstood here is the notion of a fault domain.  In a typical unix or
 windows based environment, it's a process.  A fault within the process yields
 the aborting of the process but not all processes.  Erlang introduces within
 its execution model a concept of a process within the higher level notion of
 the os level process.  Within the Erlang runtime its individual processes run
 independently and can each fail independently.  The Erlang runtime guarantees a
 higher level of separation than a typical threaded java or c++ application.  An
 error within the Erlang runtime itself would justifiably cause the entire
system
 to be halted.  Just as within an airplane, to use Walter's favorite analogy,
the
 seat entertainment system is physically and logically separated from flight
 control systems thus a fault within the former has no impact on the latter.

 So, where you have domains which must not impact each other, you reach for
tools
 that allow complete separation such that faults within one CANNOT impact the
 other.  You don't leave room for 'might not'.
Thanks, Brad, that is a correct formulation.
Oct 04 2014
prev sibling parent Marco Leise <Marco.Leise gmx.de> writes:
Am Fri, 03 Oct 2014 10:38:21 -0700
schrieb Brad Roberts via Digitalmars-d
<digitalmars-d puremagic.com>:

 Just as within an airplane, to use Walter's 
 favorite analogy, the seat entertainment system is physically and 
 logically separated from flight control systems thus a fault within the 
 former has no impact on the latter.
And just like the Erlang runtime, the electrical components could be faulty or operating beyond their limits and cause the aircraft to shut down. http://www.dailymail.co.uk/news/article-2519705/Blaze-scare-BA-Jumbo-Serious-electrical-sparked-planes-flight-entertainment-system.html http://en.wikipedia.org/wiki/Swissair_Flight_111 http://www.iasa.com.au/folders/sr111/747IFE-fire.htm -- Marco
Oct 05 2014
prev sibling next sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 28 Sep 2014 17:09:57 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 If the program has entered an unknown state, its behavior from then
 on cannot be predictable.
and the D compiler itself contradicts this principle. why does it try to "recover" from parsing/compiling errors? it should stop on the first encountered error, not try to "recover" itself from an unknown state. hate this. and it's inconsistent with your words.
Oct 03 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Saturday, 4 October 2014 at 01:52:41 UTC, ketmar via 
Digitalmars-d wrote:
 On Sun, 28 Sep 2014 17:09:57 -0700
 Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 If the program has entered an unknown state, its behavior from 
 then
 on cannot be predictable.
and the D compiler itself contradicts this principle. why does it try to "recover" from parsing/compiling errors? it should stop on the first encountered error, not try to "recover" itself from an unknown state. hate this. and it's inconsistent with your words.
I think that there's a big confusion about terms: there's nothing unknown in the parser state when it reaches an error in the grammar. --- /Paolo
Oct 04 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 08:26:53 +0000
Paolo Invernizzi via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I think that there's a big confusion about terms: there's nothing
 unknown in the parser state when it reaches an error in the grammar.
just like there's nothing unknown in the program state when it gets garbage input. yet there is no sense in processing garbage input to produce garbage output. the best thing the program can do is to complain and shut down (unless it is a special recovery program which tries to get all possible bits of info).
Oct 04 2014
prev sibling next sibling parent Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10/3/2014 6:52 PM, ketmar via Digitalmars-d wrote:
 On Sun, 28 Sep 2014 17:09:57 -0700
 Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 If the program has entered an unknown state, its behavior from then
 on cannot be predictable.
 and the D compiler itself contradicts this principle. why does it try to "recover" from parsing/compiling errors? it should stop on the first encountered error, not try to "recover" itself from an unknown state. hate this. and it's inconsistent with your words.
Where's the contradiction?  The compiler's state hasn't been corrupted just because it encounters errors in the text file. In fact, it's explicitly built to detect and handle them. There's not even a contradiction in making assumptions about what that input could have been and attempting to continue based on those assumptions. At no time in there is the compiler's internal state corrupted. And in direct affirmation of the principle, the compiler has numerous asserts scattered around that _do_ abort compilation should an unexpected and invalid state be detected.
Oct 03 2014
prev sibling parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Fri, 03 Oct 2014 19:25:53 -0700
Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Where's the contradiction?  The compiler's state hasn't been corrupted
 just because it encounters errors in the text file.
but the compiler is in an unknown state. it can't do telepathy, and its attempts are annoying. there is no reason to guess what code the programmer meant to write; it's just a source of mystic/nonsensical error messages ("garbage", in other words). the original motivation for "trying to recover and continue analysis" was slow compilation times: it was really painful to restart the compiler after each error. but D compilation times are good enough to stop this "guess-and-miss" nonsense. and we have good IDEs that can analyse code in the background and highlight errors, so there are virtually no reasons for telepathy left. yet many compilers (including D) still try to do telepathy (and fail). c'mon, it's better to improve compile times than to attempt to guess. PL/1 failed at this, and all other compilers since then fail too. it's strange to me that Walter is telling us that a program should stop once it enters an unknown state, while forcing the D compiler to make uneducated guesses when it enters an unknown state. something is very wrong with one of these things.
Oct 03 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 03:16:08 UTC, ketmar via 
Digitalmars-d wrote:
 On Fri, 03 Oct 2014 19:25:53 -0700
 Brad Roberts via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 Where's the contradiction?  The compiler's state hasn't been 
 corrupted just because it encounters errors in the text file.
 but the compiler is in an unknown state.
It's not. It just detected that another system would enter an unknown state if built. The compiler is like an engineer who examines the design of a project and discovers an error in it, so he refuses to build the product. Is the engineer in an unknown state? No; he is, just like the compiler, outside his normal execution flow, which is "take the design and build the product". It is a known state of the compiler/engineer, namely the error-processing path. While on the error-processing path you allow yourself to be slower and take time to guess. This is important. Just as with the engineer, this will educate the designers, and they will not come back so often with the same design mistake. Then, working/building fast is measured on the normal execution path, not on the error-recovery path, since the latter is assumed to occur quite rarely (in any case, it's seen as an exceptional issue). Of course, the engineer has to handle the work conflict with the designer in a polite form. For the compiler this means printing a nice error message to the user. Guessing might not be good, but it is a nice effort to make. Do you really miss the super-cryptic C (let's not even talk about C++) error messages that you sometimes receive?
Oct 03 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 04:10:49 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Where's the contradiction?  The compiler's state hasn't been
 corrupted just because it encounters errors in the text file.
 but the compiler is in an unknown state.
It's not. It just detected that another system would enter an unknown state if built. The compiler is like an engineer who examines the design of a project and discovers an error in it, so he refuses to build the product. Is the engineer in an unknown state?
sorry. i meant that the compiler WILL be in an unknown state if it continues processing invalid source. that's why it should stop right after the first error.
 Guessing might not be good, but it is a nice effort to make. Do you
 really miss the super-cryptic C (let's not even talk about C++)
 error messages that you sometimes receive?
yes. DMD's attempts to 'guess' what identifier i mistyped drive me crazy. just shut up and stop after "unknown identifier", you robot, don't try to show me your artificial idiocy!
Oct 03 2014
next sibling parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 04:26:45 UTC, ketmar via 
Digitalmars-d wrote:
 On Sat, 04 Oct 2014 04:10:49 +0000
 eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 Where's the contradiction?  The compiler's state hasn't been 
 corrupted just because it encounters errors in the text 
 file.
 but the compiler is in an unknown state.
It's not. It just detected that another system would enter an unknown state if built. The compiler is like an engineer who examines the design of a project and discovers an error in it, so he refuses to build the product. Is the engineer in an unknown state?
sorry. i meant that the compiler WILL be in an unknown state if it continues processing invalid source. that's why it should stop right after the first error.
No. It might produce an invalid product, just like a real engineer could produce a flawed product on the basis of wrong designs. Yes, you might reach a state where you are no longer able to continue because it is physically impossible: "Build a round square". Both the engineer and the compiler will bark when they see this.
 Guessing might not be good, but it is nice effort to do. Do 
 you really miss the super-cryptic C (let's not even talk about 
 C++) error messages that you sometimes receive?
yes. DMD's attempts to 'guess' what identifier i mistyped drive me crazy. just shut up and stop after "unknown identifier", you robot, don't try to show me your artificial idiocy!
Could we add a flag to the compiler for that? -cassandramode=[yes/no/whatever] Joke :) Anyway, the Cassandra name for the compiler is just perfect: it might be right, but you won't believe it!
Oct 03 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 04:36:48 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 sorry. i meant that the compiler WILL be in an unknown state if it
 continues processing invalid source. that's why it should stop right
 after the first error.
No. It might produce an invalid product, just like a real engineer could produce a flawed product on the basis of wrong designs.
i can't see any sane reason to process garbage data, 'cause result is known to be garbage too.
 Could we add a flag to the compiler for that?

 -cassandramode=[yes/no/whatever]

 Joke :)
but this would be fine. let beginners get suggestions about importing 'std.stdio' on an unknown 'writeln', but don't spit that at me. i tend to read compiler messages, and additional noise is annoying.
Oct 03 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 05:02:04 UTC, ketmar via 
Digitalmars-d wrote:
 On Sat, 04 Oct 2014 04:36:48 +0000
 eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 sorry. i meant that the compiler WILL be in an unknown state if it
 continues processing invalid source. that's why it should stop right
 after the first error.
No. It might produce an invalid product, just like a real engineer could produce a flawed product on the basis of wrong designs.
i can't see any sane reason to process garbage data, 'cause result is known to be garbage too.
For the same reason that a schoolteacher will process garbage exam sheets.
Oct 03 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 05:19:32 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 i can't see any sane reason to process garbage data, 'cause result is
 known to be garbage too.
For the same reason that a schoolteacher will process garbage exam sheets.
that could be true if DMD gets advanced AI someday -- to clearly understand what the programmer wants to write. but then there will be no reason to emit error messages. and no reason to write code manually, either.
Oct 03 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/3/14, 9:26 PM, ketmar via Digitalmars-d wrote:
 yes. DMD's attempts to 'guess' what identifier i mistyped drive me
 crazy. just shut up and stop after "unknown identifier", you robot,
 don't try to show me your artificial idiocy!
awesome feature -- Andrei
Oct 04 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 2:45 PM, Andrei Alexandrescu wrote:
 On 10/3/14, 9:26 PM, ketmar via Digitalmars-d wrote:
 yes. DMD's attempts to 'guess' what identifier i mistyped drive me
 crazy. just shut up and stop after "unknown identifier", you robot,
 don't try to show me your artificial idiocy!
awesome feature -- Andrei
I agree, I like it very much.
Oct 04 2014
parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Sunday, 5 October 2014 at 03:34:31 UTC, Walter Bright wrote:
 On 10/4/2014 2:45 PM, Andrei Alexandrescu wrote:
 On 10/3/14, 9:26 PM, ketmar via Digitalmars-d wrote:
 yes. DMD's attempts to 'guess' what identifier i mistyped drive me
 crazy. just shut up and stop after "unknown identifier", you robot,
 don't try to show me your artificial idiocy!
awesome feature -- Andrei
I agree, I like it very much.
This is a great feature where we lack a really solid IDE experience (which would have intellisense and auto-completion that could be accurate and prevent such errors from occurring in the first place). Otherwise it would probably be redundant.
Oct 04 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 05 Oct 2014 03:47:31 +0000
Cliff via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This is a great feature where we lack a really solid IDE
 experience (which would have intellisense and auto-completion
 that could be accurate and prevent such errors from occurring in
 the first place). Otherwise it would probably be redundant.
i haven't used IDEs for more than a decade (heh, i'm using mcedit to write code). yet this feature drives me mad: it trashes my terminal with useless garbage output. it was *never* of help: there was never a moment when i looked at the suggested identifier and thought: "aha, THAT is the bug!" but virtually each time i see a suggestion i'm thinking: "oh, well, i know. c'mon, why don't you just shut up?!" it's like colorizing the output, yet colorizing can be turned off, and suggestions can't.
Oct 04 2014
parent reply "Cliff" <cliff.s.hudson gmail.com> writes:
On Sunday, 5 October 2014 at 05:46:56 UTC, ketmar via 
Digitalmars-d wrote:
 On Sun, 05 Oct 2014 03:47:31 +0000
 Cliff via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 This is a great feature where we lack a really solid IDE
 experience (which would have intellisense and auto-completion
 that could be accurate and prevent such errors from occurring
 in the first place). Otherwise it would probably be redundant.
i haven't used IDEs for more than a decade (heh, i'm using mcedit to write code). yet this feature drives me mad: it trashes my terminal with useless garbage output. it was *never* of help: there was never a moment when i looked at the suggested identifier and thought: "aha, THAT is the bug!" but virtually each time i see a suggestion i'm thinking: "oh, well, i know. c'mon, why don't you just shut up?!" it's like colorizing the output, yet colorizing can be turned off, and suggestions can't.
That you make the typo at all which triggers the error is an indication that the developer workflow you use is fundamentally flawed. This is something which should be caught much earlier - when you are at the point the typo was made - not after you have committed a change to disk and presented it to the compiler, where your train of thought may be significantly different. I'd much rather energy be directed at the prevention of mistakes than at the suppression of help in fixing them - if I had to choose. But I wouldn't object to having a switch to turn off the help if it bothers you that much. Seems like a very small thing to add.
Oct 04 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 05 Oct 2014 05:55:37 +0000
Cliff via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 That you make the typo at all which triggers the error is an
 indication that the developer workflow you use is fundamentally
 flawed.
what we are talking about here? sorry, i'm lost.
 This is something which should be caught much earlier -
 when you are at the point the typo was made - not after you have
 committed a change to disk and presented it to the compiler,
 where your train of thought may be significantly different.
i'm writing code in big pieces, and fixing typos etc. then works as a "background thread" in my brain, while the "main thread" is still thinking about the overall picture. no IDE helps me to improve this, and i don't need all their tools and their bloat. and i'm not making a lot of typos. ;-)
 But I wouldn't object to having a switch to turn off the help if
 it bothers you that much.  Seems like a very small thing to add.
this is a spin-off ;-) of the actual discussion. what i was talking about originally is that the compiler should stop after the first error it encounters, not try to parse/analyze the code further. and there i was talking about IDEs, which help to fix a lot of typos and other grammar bugs even before compilation starts. and what i'm talking about is that trying to make sense of garbage input is not a sign of robust software, if that software is not special-case software designed to parse garbage. and that's why i'm against any "warnings" in compilers: all warnings should be errors.
Oct 04 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 14:45:42 -0700
Andrei Alexandrescu via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 On 10/3/14, 9:26 PM, ketmar via Digitalmars-d wrote:
 yes. the way DMD attempts to 'guess' what identifier i mistyped drives me
 crazy. just shut up and stop after "unknown identifier", you robot,
 don't try to show me your artificial idiocity!
awesome feature -- Andrei
the only bad thing with this feature is that it is useless. a simple typo is not worth the additional noise, and in complex cases the feature doesn't work (i.e. it can't guess a method name from another module, especially if it's not imported). all in all this adds noise for nothing. ah, and for some fake coolness.
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 8:15 PM, ketmar via Digitalmars-d wrote:
 there is no reason to guess what code the programmer meant to write,
It doesn't do that anymore (at least mostly not). The compiler only continues to issue error messages for code that is not dependent on the code that caused an error message.
Oct 04 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 01:06:14 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 It doesn't do that anymore (at least mostly not). The compiler only
 continues to issue error messages for code that is not dependent on
 the code that caused an error message.
but first it tries to guess what to skip to "get in sync" again. that's exactly the guessing i'm against. got an error? stop parsing/analyzing and die.
Oct 04 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 08:15:57 UTC, ketmar via 
Digitalmars-d wrote:
 On Sat, 04 Oct 2014 01:06:14 -0700
 Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> 
 wrote:

 It doesn't do that anymore (at least mostly not). The compiler 
 only
 continues to issue error messages for code that is not 
 dependent on
 the code that caused an error message.
but first it tries to guess what to skip to "get in sync" again. that's exactly the guessing i'm against. got an error? stop parsing/analyzing and die.
The compiler or the parser is not guessing "what I should do", but "what the user wanted to do". They are very different things. The parser itself is as sound as ever. It is on its user-input error-processing path, but hey, it was designed to do that. It knows what it is doing.
Oct 04 2014
parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 04 Oct 2014 09:14:03 +0000
eles via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 It knows what it is doing.
yes. processing garbage to generate more garbage.
Oct 04 2014
prev sibling parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 09/28/2014 04:13 PM, Walter Bright wrote:
 A program bug is, by definition, unknown and unanticipated. The idea
 that one can "recover" from it is fundamentally wrong.
In many cases you can. Don't forget, enterprise code aside, we're not in the days of spaghetti code where everything's directly intermingled: Things are encapsulated. If I call on a component, and that component fails, chances are the problems exist purely within that component. So yes, recovery is most definitely possible. Not *always*, but often.

I imagine you're about to jump on that "chances are" and "not always" part...and you're right: For safety-critical software, relying on that "*chances are*" the bug exists purely within that component is, of course, completely inappropriate. I won't dispute that.

But a LOT of software is *NOT* safety-critical. In those cases, going through the trouble of process-separated components or complete shutdowns on every minor compartmentalized hiccup, just on the off-chance that it might be symptomatic of something bigger (it usually isn't), usually amounts to swatting a fly with a bazooka. If tetris, or "semi-interactive movie of the year", or a music library manager, or an online poll, or youtube comments suddenly goes haywire, it's really not a big deal. And process separation and permissions still limit the collateral damage anyway. It's unprofessional, sure, all bugs and crashes are, but it's certainly no less professional than a program that shuts down whenever the digital wind shifts.

Also, on a somewhat separate note: As others have indicated, suppose you *do* strictly limit Exceptions to bad user inputs and other expected non-logic-errors. What happens when you forget to handle one of those Exceptions? That's a logic error that manifests as an uncaught Exception. Code that handles and recovers from bad user input and other such "Exception" purposes is every bit as subject to logic errors as any other code. Therefore it is *impossible* to reliably ensure that Exception *only* indicates non-logic-errors such as bad user input. So the whole notion of strict separation falls apart right there. Exceptions are *going to* occasionally indicate logic errors, period. Therefore, they must carry enough information to help the developer diagnose and fix.

Is showing that diagnostic information to the user upon a crash unprofessional? Sure. But *all* crashes are unprofessional, so there's really no avoiding that. The *least* unprofessional crashes are the ones that:

1. Don't get triggered more often than actually necessary (considering the given program domain), and

2. Contain enough information to actually get fixed.
Oct 04 2014
prev sibling next sibling parent reply Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 28/09/14 19:33, Walter Bright via Digitalmars-d wrote:
 On 9/28/2014 9:23 AM, Sean Kelly wrote:
 Also, I think the idea that a program is created and shipped to an end user is
 overly simplistic.  In the server/cloud programming world, when an error
occurs,
 the client who submitted the request will get a response appropriate for them
 and the system will also generate log information intended for people working
on
 the system.  So things like stack traces and assertion failure information is
 useful even for production software.  Same with any critical system, as I'm
sure
 you're aware.  The systems are designed to handle failures in specific ways,
but
 they also have to leave a breadcrumb trail so the underlying problem can be
 diagnosed and fixed.  Internal testing is never perfect, and achieving a high
 coverage percentage is nearly impossible if the system wasn't designed from the
 ground up to be testable in such a way (mock frameworks and such).
Then use assert(). That's just what it's for.
I don't follow this point. How can this approach work with programs that are built with the -release switch?

Moreover, Sean's points here are absolutely on the money -- there are cases where the "users" of a program may indeed want to see traces even for anticipated errors. And even if you design a nice structure of throwing and catching exceptions so that the simple error message _always_ gives good enough context to understand what went wrong, you still have the other issue that Sean raised -- of an exception accidentally escaping its intended scope, because you forgot to handle it -- when a trace may be extremely useful.

Put it another way -- I think you make a good case that stack traces for exceptions should be turned off by default (possibly just in -release mode?), but if that happens I think there's also a good case for a build flag that ensures stack traces _are_ shown for Exceptions as well as Errors.
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 4:18 PM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 I don't follow this point.  How can this approach work with programs that are
 built with the -release switch?
All -release does is not generate code for assert()s. To leave the asserts in, do not use -release. If you still want the asserts to be in even with -release:

    if (condition) assert(0);
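For instance (a minimal sketch; the function and messages are invented, assuming dmd's documented -release behavior):

    void check(int x)
    {
        // an ordinary assert: no code is generated for it under -release
        assert(x != 0, "x must not be zero");

        // assert(0) is special: it survives -release (emitted as a halt),
        // so execution can never run past this point when x < 0
        if (x < 0)
            assert(0, "negative x should be impossible here");
    }

    void main()
    {
        check(1);  // passes either way
        check(-1); // always stops the program, -release or not
    }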
 Moreover, Sean's points here are absolutely on the money -- there are cases
 where the "users" of a program may indeed want to see traces even for
 anticipated errors.
 And even if you design a nice structure of throwing and
 catching exceptions so that the simple error message _always_ gives good enough
 context to understand what went wrong, you still have the other issue that Sean
 raised -- of an exception accidentally escaping its intended scope, because you
 forgot to handle it -- when a trace may be extremely useful.

 Put it another way -- I think you make a good case that stack traces for
 exceptions should be turned off by default (possibly just in -release mode?),
 but if that happens I think there's also a good case for a build flag that
 ensures stack traces _are_ shown for Exceptions as well as Errors.
The -g switch should take care of that. It's what I use when I need a stack trace, as there are many ways a program can fail (not just Errors).
Sep 28 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 09/29/2014 02:47 AM, Walter Bright wrote:
 On 9/28/2014 4:18 PM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 I don't follow this point.  How can this approach work with programs
 that are
 built with the -release switch?
All -release does is not generate code for assert()s. ...
(Euphemism for undefined behaviour.)
Sep 28 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/29/2014 06:06 AM, Timon Gehr wrote:
 On 09/29/2014 02:47 AM, Walter Bright wrote:
 On 9/28/2014 4:18 PM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 I don't follow this point.  How can this approach work with programs
 that are
 built with the -release switch?
All -release does is not generate code for assert()s. ...
(Euphemism for undefined behaviour.)
Also, -release additionally removes contracts, in particular invariant calls, and disables version(assert).
Sep 28 2014
prev sibling next sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Sun, 28 Sep 2014 17:47:56 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 On 9/28/2014 4:18 PM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 I don't follow this point.  How can this approach work with
 programs that are built with the -release switch?
All -release does is not generate code for assert()s. To leave the asserts in, do not use -release. If you still want the asserts to be in even with -release:

    if (condition) assert(0);
Right now, but some time ago there was a huge debate whether it should be valid for the compiler to optimize based on asserts. I wonder if these 'use asserts for stack traces' and 'an assert is always supposed to pass, so it's valid to assume the condition holds (in release)' notions can go together. I guess it might at least lead to programs that are unusable when compiled with -release.
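A hypothetical illustration of that tension (whether the optimizer may actually do this is exactly what the debate was about):

    // If the optimizer is allowed to assume asserted conditions hold,
    // then under -release the check below is stripped *and* treated as
    // true -- so a caller's logic bug could turn into silent memory
    // corruption instead of a clean failure.
    int pick(int[] a, size_t i)
    {
        assert(i < a.length);
        return a[i];
    }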
Sep 29 2014
next sibling parent "Daniel N" <ufo orbiting.us> writes:
On Monday, 29 September 2014 at 08:18:29 UTC, Johannes Pfau wrote:
 Right now, but some time ago there was a huge debate whether it 
 should
 be valid for the compiler to optimize based on asserts.

 I wonder if these 'use asserts for stack traces' and 'an assert 
 is
 always supposed to pass, so it's valid to assume the condition 
 holds
 (in release)' notions can go together. I guess it might at 
 least lead
 to programs that are unusable when compiled with -release.
For this reason I think it makes more sense to use abort() if you plan to use "-release".
Sep 29 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 29 Sep 2014 10:18:27 +0200
Johannes Pfau via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 I wonder if these 'use asserts for stack traces' and 'an assert is
 always supposed to pass, so it's valid to assume the condition holds
 (in release)' notions can go together. I guess it might at least lead
 to programs that are unusable when compiled with -release.
compiler never optimizes away `assert(0)`, AFAIR. `assert(0)` is a special thing, it means `abort()`.
Sep 29 2014
prev sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Monday, 29 September 2014 at 00:47:58 UTC, Walter Bright wrote:
 On 9/28/2014 4:18 PM, Joseph Rushton Wakeling via Digitalmars-d 
 wrote:
 I don't follow this point.  How can this approach work with 
 programs that are
 built with the -release switch?
All -release does is not generate code for assert()s. To leave the asserts in, do not use -release. If you still want the asserts to be in even with -release:

    if (condition) assert(0);
The reason I queried your approach here is because I feel you're conflating two things:

* the _definition_ of an Exception vs. an Error, on which we 100% agree: the former as an anticipated possibility which a program is committed to try and handle, the latter a failure which is fundamentally wrong and should not happen under any conditions.

* the way in which a program should report these different kinds of error.

You seem to be advocating that, by definition, Exceptions and Errors should be reported differently (one without, and one with, a trace). I don't at all object to that as a sensible default, but I think that the ultimate decision on how Exceptions and Errors report themselves should be in the hands of the program developer, depending on the use-case of the application.
Sep 29 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 29 September 2014 at 15:39:08 UTC, Joseph Rushton 
Wakeling wrote:
 with, a trace).  I don't at all object to that as a sensible 
 default, but I think that the ultimate decision on how 
 Exceptions and Errors report themselves should be in the hands 
 of the program developer, depending on the use-case of the 
 application.
That's a very sensible argument. From a pragmatic point of view I am usually not interested in the whole stack trace. I am primarily interested in where it was thrown, where it was turned into an error and where it was logged + "function call input" at that point. Maybe a trade off can be found that is less costly than building a full stack trace.
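One possible trade-off, sketched with the __FILE__/__LINE__ default-argument idiom (AppException is an invented name; Exception's constructor already accepts a file and line):

    // Records only the throw site, avoiding the cost of a full stack trace.
    class AppException : Exception
    {
        this(string msg, string file = __FILE__, size_t line = __LINE__)
        {
            super(msg, file, line);
        }
    }

    // usage: throw new AppException("bad input");
    // the caller's file/line are captured at the throw point only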
Sep 29 2014
prev sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, 29 Sep 2014 01:18:02 +0200
Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com>
wrote:

 Then use assert(). That's just what it's for.
 I don't follow this point. How can this approach work with programs that are built with the -release switch?
don't use the "-release" switch. the whole concept of a "release version" is broken by design. ship what you debugged, not what you think you debugged.
Sep 28 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 9:16 AM, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 00:40:26 UTC, Walter Bright wrote:
 Whoa, Camel! You're again thinking of Exceptions as a debugging tool.
They can be.
Of course they can be. But it's inappropriate to use them that way, and we should not be enshrining such use in the library.
 What if an API you're using throws an exception you didn't expect,
 and therefore don't handle?
Then the app user sees the error message. This is one of the cool things about D - I can write small apps with NO error handling logic in it, and I still get appropriate and friendly messages when things go wrong like missing files. That is, until recently, when I get a bunch of debug stack traces and internal file/line messages, which are of no use at all to an app user and look awful.
 This might be considered a logic error if the
 exception is recoverable and you don't intend the program to abort from that
 operation.
Adding file/line to all exceptions implies that they are all bugs, and encourages them to be thought of as bugs and debugging tools, when they are NOT.

Exceptions are for:

1. enabling recovery from input/environmental errors
2. reporting input/environmental errors to the app user
3. making input/environmental errors not ignorable by default

They are not for detecting logic errors. Assert is designed for that.
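A small sketch of that division of labor (parseRatio and its rule are invented for illustration):

    import std.conv;

    // userArg comes from outside the program; cap is computed by the caller
    double parseRatio(string userArg, double cap)
    {
        // a non-positive cap is a bug in the caller: assert, don't throw
        assert(cap > 0, "caller bug: cap must be positive");

        // malformed user text is an input error: throw, don't assert
        auto r = to!double(userArg);   // throws ConvException on bad input
        if (r > cap)
            throw new Exception("ratio exceeds limit of " ~ to!string(cap));
        return r;
    }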
Sep 28 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sun, Sep 28, 2014 at 10:32:14AM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/28/2014 9:16 AM, Sean Kelly wrote:
[...]
What if an API you're using throws an exception you didn't expect,
and therefore don't handle?
Then the app user sees the error message. This is one of the cool things about D - I can write small apps with NO error handling logic in it, and I still get appropriate and friendly messages when things go wrong like missing files. That is, until recently, when I get a bunch of debug stack traces and internal file/line messages, which are of no use at all to an app user and look awful.
It looks even more awful when the person who wrote the library code is Russian, and the user speaks English, and when an uncaught exception terminates the program, you get a completely incomprehensible message in a language you don't know. Not much different from a line number and filename that has no meaning for a user.

That's why I said, an uncaught exception is a BUG. The only place where user-readable messages can be output is in a catch block, where you actually have the chance to localize the error string. But if no catch block catches it, then by definition it's a bug, and you might as well print some useful info with it that your users can send back to you, rather than unhelpful bug reports of the form "the program crashed with error message 'internal error'". Good luck finding where in the code that is. (And no, grepping does not work -- the string 'internal error' could have come from a system call or C library error code translated by a generic code-to-message function, which could've been called from *anywhere*.)
This might be considered a logic error if the exception is
recoverable and you don't intend the program to abort from that
operation.
Adding file/line to all exceptions implies that they are all bugs, and encourages them to be thought of as bugs and debugging tools, when they are NOT. Exceptions are for: 1. enabling recovery from input/environmental errors 2. reporting input/environmental errors to the app user 3. making input/environmental errors not ignorable by default They are not for detecting logic errors. Assert is designed for that.
I do not condone adding file/line to exception *messages*. Catch blocks can print / translate those messages, which can be made user-friendly, but if the program failed to catch an exception, you're already screwed anyway, so why not provide more info rather than less?

Unless, of course, you're suggesting that we put this around every main() function:

	void main() {
		try {
			...
		} catch(Exception e) {
			assert(0, "Unhandled exception: I screwed up");
		}
	}


T

-- 
I think Debian's doing something wrong, `apt-get install pesticide', doesn't seem to remove the bugs on my system! -- Mike Dresser
Sep 28 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 20:58:20 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 That's why I said, an uncaught exception is a BUG. The only  
 place where
 user-readable messages can be output is in a catch block where  
 you
 actually have the chance to localize the error string. But if  
 no catch
 block catches it, then by definition it's a bug, and you might  
 as while
 print some useful info with it that your users can send back to 
  you,
 rather than unhelpful bug reports of the form "the program 
 crashed with
 error message 'internal error'".
Pretty much every system should generate a localized error message for the user and a detailed log of the problem for the programmer. Ideally, the user message will indicate how to provide the detailed information to the developer so the problem can be fixed.

The one case where uncaught exceptions aren't really a bug is with programs that aren't being used outside the group that developed them. In these cases, the default behavior is pretty much exactly what's desired--a message, file/line info, and a stack trace. Which is why it's there.

The vast bulk of today's shipping code doesn't run from the command line anyway, so the default exception handler should be practically irrelevant.
Sep 28 2014
prev sibling next sibling parent "Cliff" <cliff.s.hudson gmail.com> writes:
On Sunday, 28 September 2014 at 20:58:20 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I do not condone adding file/line to exception *messages*. 
 Catch blocks
 can print / translate those messages, which can be made 
 user-friendly,
 but if the program failed to catch an exception, you're already 
 screwed
 anyway so why not provide more info rather than less?

 Unless, of course, you're suggesting that we put this around 
 every
 main() function:

 	void main() {
 		try {
 			...
 		} catch(Exception e) {
 			assert(0, "Unhandled exception: I screwed up");
 		}
 	}
applicable here:

1. main() definitely had a top-level try/catch handler to produce useful output messages. Because throwing an uncaught exception out to the user *is* a bug, we naturally want to not just toss out a stack trace but information on what to do with it should a user encounter it. Even better if there is additional runtime information which can be provided for a bug report.

2. We also registered a top-level unhandled exception handler on the AppDomain (equivalent to a process in .NET, except that multiple AppDomains may exist within a single OS process), which allows the catching of exceptions which would otherwise escape background threads. Depending on the nature of the application, these could be logged to some repository to which the user could be directed. It's hard to strictly automate this because exactly what you can do with an exception which escapes a thread will be application dependent. In our case, these exceptions were considered bugs, were considered to be unrecoverable, and resulted in a program abort with a user message indicating where to find the relevant log outputs and how to contact us.

3. For some cases, throwing an exception would also trigger an application dump suitable for post-mortem debugging from the point the exception was about to be thrown. This functionality is, of course, OS-specific, but helped us on more than a few occasions by eliminating the need to try to pre-determine which information was important and which was not, so the exception could be usefully populated.

I'm not a fan of eliminating the stack from exceptions. While exceptions should not be used to catch logic errors, an uncaught exception is itself a logic error (that is, one has omitted some required conditions in their code) and thus the context of the error needs to be made available somehow.
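Point 1 translates to D fairly directly; a rough sketch (run() and the log file name are invented):

    import std.file : append;
    import std.stdio : stderr;

    int run(string[] args) { /* the real program */ return 0; }

    int main(string[] args)
    {
        try
            return run(args);
        catch (Exception e)
        {
            // full detail (message, file/line, trace where available)
            // goes to a log for diagnosis...
            append("bug-report.log", e.toString() ~ "\n");
            // ...while the user gets a short note on what to send us
            stderr.writeln("Unexpected failure; please send bug-report.log to the developers.");
            return 1;
        }
    }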
Sep 28 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 1:56 PM, H. S. Teoh via Digitalmars-d wrote:
 It looks even more awful when the person who wrote the library code is
 Russian, and the user speaks English, and when an uncaught exception
 terminates the program, you get a completely incomprehensible message in
 a language you don't know. Not much different from a line number and
 filename that has no meaning for a user.
I cannot buy into the logic that since Russian error messages are incomprehensible to me, incomprehensible messages are therefore ok.
 That's why I said, an uncaught exception is a BUG.
It's a valid opinion, but is not the way D is designed to work.
 The only place where
 user-readable messages can be output is in a catch block where you
 actually have the chance to localize the error string. But if no catch
 block catches it, then by definition it's a bug, and you might as while
 print some useful info with it that your users can send back to you,
 rather than unhelpful bug reports of the form "the program crashed with
 error message 'internal error'".
If anyone is writing code that throws an Exception with "internal error", then they are MISUSING exceptions to throw on logic bugs. I've been arguing this all along.
 if the program failed to catch an exception, you're already screwed
 anyway
This is simply not true. One can write utilities with no caught exceptions at all, and yet have the program emit user friendly messages about "disk full" and stuff like that.
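For instance, a throwaway utility in this style (the tool itself is invented; note there is deliberately no try/catch anywhere):

    import std.file;

    void main(string[] args)
    {
        if (args.length < 2)
            throw new Exception("usage: backup FILE");
        // A missing or unreadable file raises a FileException; with no
        // catch block anywhere, the runtime reports its message and the
        // program exits non-zero -- adequate for a small tool.
        auto text = readText(args[1]);
        write(args[1] ~ ".bak", text);
    }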
 so why not provide more info rather than less?
Because having an internal stack dump presented to the app user when he, say, puts in invalid command line arguments is quite inappropriate.
 Unless, of course, you're suggesting that we put this around every
 main() function:

 	void main() {
 		try {
 			...
 		} catch(Exception e) {
 			assert(0, "Unhandled exception: I screwed up");
 		}
 	}
I'm not suggesting that Exceptions are to be thrown on programmer screwups - I suggest the OPPOSITE.
Sep 28 2014
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09/29/2014 12:59 AM, Walter Bright wrote:
 ...

 Unless, of course, you're suggesting that we put this around every
 main() function:

     void main() {
         try {
             ...
         } catch(Exception e) {
             assert(0, "Unhandled exception: I screwed up");
         }
     }
I'm not suggesting that Exceptions are to be thrown on programmer screwups - I suggest the OPPOSITE.
He does not suggest that Exceptions are to be thrown on programmer screw-ups, but rather that the thrown exception itself is the screw-up, with a possibly complex cause.

It is not:

    if(screwedUp()) throw new Exception("");

It is rather:

    void foo(int x){
        if(!test(x)) throw new Exception(""); // this may be an expected code path for some callers
    }

    void bar(){
        // ...
        int y=screwUp();
        foo(y); // yet it is unexpected here
    }
Sep 28 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 28 September 2014 at 22:59:46 UTC, Walter Bright wrote:
 If anyone is writing code that throws an Exception with 
 "internal error", then they are MISUSING exceptions to throw on 
 logic bugs. I've been arguing this all along.
Nothing wrong with it. It is quite common and useful for a non-critical web service to log the exception, then re-throw something like "internal error", catch the internal error at the root and return the appropriate 5xx HTTP response, then keep going.

You are arguing as if it is impossible to know whether the logic error is local to the handler, or not, with a reasonable probability. "Division by zero" is usually not a big deal, but it is a logic error. No need to shut down the service.
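Sketched with invented names -- validateForm, log and respond5xx stand in for whatever the service actually uses:

    void validateForm() { /* may throw if it contains a bug */ }
    void log(string s) { /* developer-facing record */ }
    void respond5xx() { /* terse "internal error" to the client */ }

    void handleRequest()
    {
        try
        {
            validateForm();       // a bug here is judged low-impact
        }
        catch (Exception e)
        {
            log(e.toString());    // full detail for the developers
            respond5xx();         // short 5xx response for the client
        }
        // the service itself keeps running
    }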
 I'm not suggesting that Exceptions are to be thrown on 
 programmer screwups - I suggest the OPPOSITE.
It is impossible to verify what the source is. It might be a bug in a boolean expression leading to a throw when the system is ok.

assert()s should also not be left in production code. They are not for catching runtime errors, but for testing at the expense of performance. Pre/postconditions between subsystems are on a different level, though. They should not be conflated with regular asserts.

Uncaught exceptions should be re-thrown higher up in the call chain to a different error level based on the possible impact on the system. Getting an unexpected mismatch exception in a form-validator is not a big deal. Getting an out-of-bounds error in main storage is a big deal. Whether it is a big deal can only be decided at the higher level.

It is no doubt useful to be able to obtain a stack trace so that you can log it when an exception turns out to fall into the "big deal" category and therefore should be re-thrown as a critical failure. The deciding factor should be performance.
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 9:31 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 Nothing wrong with it. Quite common and useful for a non-critical web
 service to log the exception, then re-throw something like "internal
 error", catch the internal error at the root and return the appropriate
 5xx HTTP response, then keep going.
Lots of bad practices are commonplace.
 You are arguing as if it is impossible to know whether the logic error is local
 to the handler, or not, with a reasonable probability.
You're claiming to know that a program in an unknown and unanticipated state is really in a known state. It isn't.
 assert()s should also not be left in production code. They are not for catching
 runtime errors, but for testing at the expense of performance.
Are you really suggesting that asserts should be replaced by thrown exceptions? I suspect we have little common ground here.
 Uncaught exceptions should be re-thrown higher up in the call chain to a
 different error level based on the possible impact on the system. Getting an
 unexpected mismatch exception in a form-validator is not a big deal. Getting
 an out-of-bounds error in main storage is a big deal. Whether it is a big deal can
 only be decided at the higher level.
A vast assumption here that you know in advance what bugs you're going to have and what causes them.
 It is no doubt useful to be able to obtain a stack trace so that you can log it
 when an exception turns out to fall into the "big deal" category and therefore
 should be re-thrown as a critical failure. The deciding factor should be
 performance.
You're using exceptions as a program bug reporting mechanism. Whoa camel, indeed!
Sep 28 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Monday, 29 September 2014 at 04:57:45 UTC, Walter Bright wrote:
 Lots of bad practices are commonplace.
This is not an argument, it is a postulate.
 You are arguing as if it is impossible to know whether the 
 logic error is local
 to the handler, or not, with a reasonable probability.
You're claiming to know that a program in an unknown and unanticipated state is really in a known state. It isn't.
It does not have to be known; it is sufficient that it is isolated, that it is improbably global, or that it is of low impact to long-term integrity.
 Are you really suggesting that asserts should be replaced by 
 thrown exceptions? I suspect we have little common ground here.
No, regular asserts should not be caught except for mailing the error log to the developer. They are for testing only. Pre/postconditions between subsystems are on a different level though. They should not be conflated with regular asserts.
 A vast assumption here that you know in advance what bugs 
 you're going to have and what causes them.
I know in advance that a "division-by-zero" error is of limited scope with high probability, and that an error in a strictly pure validator is of low impact with high probability. I also know that any sign of a flaw in a transaction engine is a critical error that warrants a shutdown.

We know in advance that all programs above low complexity will contain bugs, most of them innocent, and for many services they are not a good excuse for shutting down the entire service. If you have memory safety, reasonable isolation and well-tested global data structures, it is most desirable to keep the system running if it is incapable of corrupting a critical database.
 You're using exceptions as a program bug reporting mechanism.
Uncaught exceptions are bugs and should be logged as such. If a form validator throws an unexpected exception then it is a bug. It makes the validation questionable, but does not affect the rest of the system. It is a non-critical bug that needs attention.
 Whoa camel, indeed!
By your line of reasoning no software should ever be shipped without a formal proof, because it will most certainly be buggy and contain unspecified, undetected states.

Keep in mind that a working program, in the real world, is a program that provides reasonable output for reasonable input. Total correctness is a pipe dream; it is not within reach for most real programs. Not even with formal proofs.
Sep 28 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 11:08 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com> wrote:
 It does not have to be known; it is sufficient that it is isolated,
 that it is improbably global, or that it is of low impact to
 long-term integrity.
You cannot make such presumptions and then pretend you've written robust software.
 By your line of reasoning no software should ever be shipped without
 a formal proof, because it will most certainly be buggy and contain
 unspecified, undetected states.
You're utterly misunderstanding my point. Perhaps this will help: http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
Oct 04 2014
prev sibling parent reply Johannes Pfau <nospam example.com> writes:
Am Sun, 28 Sep 2014 15:59:45 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 if the program failed to catch an exception, you're already screwed
 anyway  
This is simply not true. One can write utilities with no caught exceptions at all, and yet have the program emit user friendly messages about "disk full" and stuff like that.
You're always thinking of simple console apps, but that is the only place where the default 'print exception to console' strategy works. In a daemon which logs to syslog, or in a GUI application or a game, an uncaught 'disk full exception' would go completely unnoticed, and that's definitely a bug.
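Which is why a daemon needs its own top-level reporting; a minimal sketch (serve() is a placeholder, and this assumes druntime's core.sys.posix.syslog binding):

    import core.sys.posix.syslog : syslog, LOG_ERR;
    import std.string : toStringz;

    void serve() { /* the daemon's real loop */ }

    void daemonMain()
    {
        try
            serve();
        catch (Exception e)
            // without this, the runtime's console printout would be lost
            syslog(LOG_ERR, "%s", e.toString().toStringz);
    }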
Sep 29 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/29/2014 1:27 AM, Johannes Pfau wrote:
 In a daemon which logs to syslog or in a GUI application or a game an
 uncaught 'disk full exception' would go completely unnoticed and that's
 definitely a bug.
Failure to respond properly to an input/environmental error is a bug. But the input/environmental error is not a bug. If it was, then the program should assert on the error, not throw.
Sep 29 2014
parent Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/09/2014 10:06, Walter Bright wrote:
 On 9/29/2014 1:27 AM, Johannes Pfau wrote:
 In a daemon which logs to syslog or in a GUI application or a game an
 uncaught 'disk full exception' would go completely unnoticed and that's
 definitely a bug.
Failure to respond properly to an input/environmental error is a bug. But the input/environmental error is not a bug. If it was, then the program should assert on the error, not throw.
I agree. And isn't that exactly what Teoh said then: "That's why I said, an uncaught exception is a BUG. " I think people should be more careful with the term "uncaught exception" because it's not very precise. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 01 2014
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 Even if that is what you wanted, you won't get that from 
 FileException, as it will only show file/lines emanating from 
 calls inside std.file, not from higher level callers.
Can't we use template arguments like __LINE__ to report the line of user code that called the I/O function?
 Besides, take a bit of care when formulating a string for 
 exceptions, and you won't have any trouble grepping for it. 
 This isn't rocket science.
The exception is generated inside library code, not in user code. There's nothing to grep in the user code. The D script often looks like this:

void main() {
    import std.stdio;

    auto file_name = "some_file";
    // some code here
    auto file_name1 = file_name ~ "1.txt";
    auto f1 = File(file_name1);
    // some code here
    auto file_name2 = file_name ~ "2.txt";
    auto f2 = File(file_name2, "w");
    // some code here
}
 Presenting internal debugging data to users for 
 input/environmental errors is just bad programming practice.
 We shouldn't be enshrining it in Phobos and presenting it as a 
 professional way to code.
The file/line of code is not meant to replace serious practices for dealing with I/O failures in serious D programs. Bye, bearophile
Sep 27 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/27/2014 4:59 PM, bearophile wrote:
 Walter Bright:

 Even if that is what you wanted, you won't get that from FileException, as it
 will only show file/lines emanating from calls inside std.file, not from
 higher level callers.
 Can't we use template arguments like __LINE__ to report the line of user code that called the I/O function?
We could, but again, WHOA CAMEL! Also, one should be careful in using __LINE__ in a template, as it will be forced to generate a new instantiation with every use.
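To illustrate the instantiation issue (openT and openF are invented names):

    // As a template parameter, each distinct call site bakes in its own
    // (file, line) pair, forcing a separate instantiation every time:
    void openT(string file = __FILE__, size_t line = __LINE__)(string name)
    { /* ... */ }

    // As ordinary default arguments, the caller's location is passed as
    // runtime values, so one instance of the function serves everyone:
    void openF(string name, string file = __FILE__, size_t line = __LINE__)
    { /* ... */ }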
 Besides, take a bit of care when formulating a string for exceptions, and you
 won't have any trouble grepping for it. This isn't rocket science.
 The exception is generated inside library code, not in user code.
 There's nothing to grep in the user code. The D script often looks
 like this:

 void main() {
     import std.stdio;

     auto file_name = "some_file";
     // some code here
     auto file_name1 = file_name ~ "1.txt";
     auto f1 = File(file_name1);
     // some code here
     auto file_name2 = file_name ~ "2.txt";
     auto f2 = File(file_name2, "w");
     // some code here
 }
Come on. What is hard about "Cannot open file `xxxx` in mode `xxxx`", then you grep for "Cannot open file"? Let's try it:

    > grep "Cannot open file" *.d
    stdio.d:            text("Cannot open file `", name, "' in mode `",
    stdio.d:// new StdioException("Cannot open file `"~fName~"' for reading"));
    stdio.d:// new StdioException("Cannot open file `"~fName~"' for reading"));

Besides, File doesn't throw a FileException and doesn't add the caller's __FILE__ and __LINE__.

If you're seriously proposing adding __FILE__ and __LINE__ to every function call, then D is the wrong language. This would have disastrous bloat and performance problems. You mentioned Python and Ruby, both of which are well known for miserable performance.

Building the equivalent of a symbolic debugger into RELEASED D code is just not acceptable.
 Presenting internal debugging data to users for input/environmental errors is
 just bad programming practice. We shouldn't be enshrining it in Phobos and
 presenting it as a professional way to code.
The file/line of code is not meant to replace serious practices for dealing with I/O failures in serious D programs.
That's exactly what you're doing with it.
Sep 27 2014
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/27/14 7:15 PM, Walter Bright wrote:

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.
Library code often cannot make that choice. The issue with exceptions vs. errors is that often you don't know where the input comes from.

e.g.:

auto f = File(someInternalStringThatIsCorrupted) -> error
auto f = File(argv[1]) -> exception

How does File know where its target file name came from?

-Steve
Sep 27 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/27/2014 6:24 PM, Steven Schveighoffer wrote:
 On 9/27/14 7:15 PM, Walter Bright wrote:

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.
 Library code often cannot make that choice. The issue with exceptions
 vs. errors is that often you don't know where the input comes from.

 e.g.:

 auto f = File(someInternalStringThatIsCorrupted) -> error
 auto f = File(argv[1]) -> exception

 How does File know where its target file name came from?
If the app is concerned about invalid filenames as bugs, you should scrub the filenames first. I.e. the interface is defined improperly if the code confuses a programming bug with input errors.
Sep 27 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/27/14 9:52 PM, Walter Bright wrote:
 On 9/27/2014 6:24 PM, Steven Schveighoffer wrote:
 On 9/27/14 7:15 PM, Walter Bright wrote:

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.
 Library code often cannot make that choice. The issue with exceptions
 vs. errors is that often you don't know where the input comes from.

 e.g.:

 auto f = File(someInternalStringThatIsCorrupted) -> error
 auto f = File(argv[1]) -> exception

 How does File know where its target file name came from?
If the app is concerned about invalid filenames as bugs, you should scrub the filenames first. I.e. the interface is defined improperly if the code confuses a programming bug with input errors.
OK, so if you want to avoid improperly having errors/exceptions, don't put bugs into your code. A simple plan! -Steve
Sep 29 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/29/2014 4:47 AM, Steven Schveighoffer wrote:
 On 9/27/14 9:52 PM, Walter Bright wrote:
 On 9/27/2014 6:24 PM, Steven Schveighoffer wrote:
 On 9/27/14 7:15 PM, Walter Bright wrote:

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.
 Library code often cannot make that choice. The issue with exceptions
 vs. errors is that often you don't know where the input comes from.

 e.g.:

 auto f = File(someInternalStringThatIsCorrupted) -> error
 auto f = File(argv[1]) -> exception

 How does File know where its target file name came from?
If the app is concerned about invalid filenames as bugs, you should scrub the filenames first. I.e. the interface is defined improperly if the code confuses a programming bug with input errors.
OK, so if you want to avoid improperly having errors/exceptions, don't put bugs into your code. A simple plan!
Validating user input is not the same thing as removing all the logic bugs from the program.
Sep 29 2014
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/29/14 7:53 AM, Walter Bright wrote:
 On 9/29/2014 4:47 AM, Steven Schveighoffer wrote:
 On 9/27/14 9:52 PM, Walter Bright wrote:
 If the app is concerned about invalid filenames as bugs, you should
 scrub the filenames first. I.e. the interface is defined improperly if
 the code confuses a programming bug with input errors.
OK, so if you want to avoid improperly having errors/exceptions, don't put bugs into your code. A simple plan!
Validating user input is not the same thing as removing all the logic bugs from the program.
What if it's not user input? -Steve
Sep 29 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-28 03:24, Steven Schveighoffer wrote:

 Library code often cannot make that choice. The issue with exceptions
 vs. errors is that often you don't know where the input comes from.

 e.g.:

 auto f = File(someInternalStringThatIsCorrupted) -> error
 auto f = File(argv[1]) -> exception

 How does File know where its target file name came from?
Both of these should throw an exception. Most stuff related to file operations should throw an exception, not an error.

-- 
/Jacob Carlborg
Sep 28 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/28/14 5:01 AM, Jacob Carlborg wrote:
 On 2014-09-28 03:24, Steven Schveighoffer wrote:

 Library code often cannot make that choice. The issue with exceptions
 vs. errors is that often you don't know where the input comes from.

 e.g.:

 auto f = File(someInternalStringThatIsCorrupted) -> error
 auto f = File(argv[1]) -> exception

 How does File know where its target file name came from?
 Both of these should throw an exception. Most stuff related to file
 operations should throw an exception, not an error.
That makes no sense. The opening of the file is subject to issues with the filesystem, which means the failures may be environmental or user errors, not programming errors. But that doesn't mean the opening of the file failed because the file doesn't exist; it could be an error in how you constructed the file name.

What about:

File f;
f.open(null);

Is that an environmental error or a user error?

-Steve
Sep 29 2014
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/29/2014 4:51 AM, Steven Schveighoffer wrote:
 What about:

 File f;
 f.open(null);

 Is that an environmental error or User error?
Passing invalid arguments to a function is a programming bug.
Sep 29 2014
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/29/14 7:54 AM, Walter Bright wrote:
 On 9/29/2014 4:51 AM, Steven Schveighoffer wrote:
 What about:

 File f;
 f.open(null);

 Is that an environmental error or User error?
Passing invalid arguments to a function is a programming bug.
That throws an exception. My point exactly. -Steve
Sep 29 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 29/09/14 13:51, Steven Schveighoffer wrote:

 That makes no sense. The opening of the file is subject to issues with
 the filesystem, which means they may be environmental or user errors,
 not programming errors. But that doesn't mean the opening of the file
 failed because the file doesn't exist, it could be an error in how you
 construct the file name.

 What about:

 File f;
 f.open(null);

 Is that an environmental error or User error?
That depends on what "open" expects from its argument. In this particular case, in D, "null" is the same as the empty string. I don't see why that technically shouldn't be allowed.

Of course, you can specify that "open" expects a string argument with a length greater than 0; in that case it's a bug by the programmer who uses the "open" function.
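That "specify it" option maps naturally onto a D `in` contract; a sketch, with the actual body elided:

    void open(string name)
    in
    {
        // violating this is a bug in the *caller*, checked like an assert
        // (note: like asserts, in-contracts are compiled out with -release)
        assert(name.length > 0, "caller bug: empty file name");
    }
    body
    {
        // ... actually open the file ...
    }

-- 
/Jacob Carlborg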
Sep 29 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/29/14 10:48 AM, Jacob Carlborg wrote:
 On 29/09/14 13:51, Steven Schveighoffer wrote:

 That makes no sense. The opening of the file is subject to issues with
 the filesystem, which means they may be environmental or user errors,
 not programming errors. But that doesn't mean the opening of the file
 failed because the file doesn't exist, it could be an error in how you
 construct the file name.

 What about:

 File f;
 f.open(null);

 Is that an environmental error or User error?
 That depends on what "open" expects from its argument. In this particular case, in D, "null" is the same as the empty string. I don't see why that technically shouldn't be allowed.
My entire point is, it doesn't matter what is expected or what is treated as "correct." What matters is where the input CAME from. A library function has no idea. There is no extra type info saying "this parameter comes from user input."
 Of course, you can specify that "open" expects a string argument with
 a length greater than 0; in that case it's a bug by the
 programmer who uses the "open" function.
Is it? I can think of cases where it's programmer error, and cases where it's user error. There are also better example functions, but I'm focusing on File because that's what this thread is about.

The largest issue I see with this whole scheme is that exceptions can be turned into errors, but not the reverse. Once an error is thrown, it's pretty much game over. So defensive coding would suggest that when you don't know the answer, you throw an exception, and something higher up would say "Oh, that is really a program error, rethrowing". But expecting developers to do this at EVERY CALL is really impossible.

My expectation is that an exception is really an error unless caught. It would be nice to postpone generating the stack trace unless the exception is caught outside the main function. I don't know enough about how exceptions work to know if this is possible or not.
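The one-way conversion, as a minimal sketch (risky() is a placeholder):

    void risky() { /* may throw an ordinary, recoverable Exception */ }

    void callerThatKnowsBetter()
    {
        try
            risky();
        catch (Exception e)
            // escalate: at this level it can only mean a bug, so wrap it
            // in an Error, chaining the original for diagnosis
            throw new Error("this can only be a bug here", e);
        // the reverse -- demoting a thrown Error back to a recoverable
        // Exception -- is not generally possible, since unwinding
        // guarantees no longer hold once an Error is in flight.
    }

-Steve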
Sep 29 2014
next sibling parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Sep 29, 2014 at 8:13 AM, Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 My entire point is, it doesn't matter what is expected or what is treated
 as "correct." what matters is where the input CAME from. To a library
 function, it has no idea. There is no extra type info saying "this
 parameter comes from user input."
From the method's view, parameters passed in are user input.  Full stop.
One thing that seems to be talked around a bit here is the separation/encapsulation of things. It is perfectly fine, and expected, for a library method to throw exceptions on bad input to the method - even if this input turns out to be a programming bug elsewhere. From the standpoint of the method, it does not know (and does not care) where the thing ultimately came from - all it knows is that it is input here, and it is wrong.

If you call a method with bad input, and fail to catch the resulting exception, then _that_ is a bug, not the method throwing. It may be perfectly recoverable to ignore/retry/whatever, or it may be a symptom of something that should abort the program. But the method throwing does not know or care.
Sep 29 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/29/14 2:43 PM, Jeremy Powers via Digitalmars-d wrote:
 On Mon, Sep 29, 2014 at 8:13 AM, Steven Schveighoffer via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     My entire point is, it doesn't matter what is expected or what is
     treated as "correct." what matters is where the input CAME from. To
     a library function, it has no idea. There is no extra type info
     saying "this parameter comes from user input."


  From the method's view, parameters passed in are user input.  Full stop.
This is missing the point of an exception. An uncaught exception is an error which crashes the program. If you catch the exception, you can handle it, but if you don't expect it, then it's a bug. Any uncaught exceptions are BY DEFINITION programming errors.
 One thing that seems to be talked around a bit here is the
 separation/encapsulation of things.  It is perfectly fine, and expected,
 for a library method to throw exceptions on bad input to the method -
 even if this input turns out to be a programming bug elsewhere.  From
 the standpoint of the method, it does not know (and does not care) where
 the thing ultimately came from - all it knows is that it is input here,
 and it is wrong.
What is being discussed here is removing the stack trace and printout when an exception is thrown. Imagine this error message:

    myprompt> ./mycoolprogram

    (1 hour later)

    FileException: Error opening file xgghfsnbuer

    myprompt>

Now what? xgghfsnbuer may not even be in the code anywhere. There may be no hints at all as to what caused it to happen. You don't even know which line of code YOU wrote that was causing the issue! You have to examine every File open, and every call that may have opened a file, and see if possibly that file name was passed into it.

Whereas if you get a trace, you can at least see where the exception occurred, and start from there.

Now, RELYING on this printout to be your interface to the user, that is incorrect design, I will agree. But one cannot possibly be expected to handle every possible exception at every possible call so one can throw an error in the cases where it actually is an error. D doesn't even require listing the exceptions that may be thrown on the API (and no, I'm not suggesting it should).
 If you call a method with bad input, and fail to catch the resulting
 exception, then _that_ is a bug, not the method throwing.  It may be
 perfectly recoverable to ignore/retry/whatever, or it may be a symptom
 of something that should abort the program.  But the method throwing
 does not know or care.
Sure, but it doesn't happen. Just like people do not check return values from syscalls. The benefit of the exception printing is at least you get a trace of where things went wrong when you didn't expect them to. Ignoring a call's return value doesn't give any notion something is wrong until much later. -Steve
Sep 29 2014
next sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Sep 29, 2014 at 11:58 AM, Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 9/29/14 2:43 PM, Jeremy Powers via Digitalmars-d wrote:

 On Mon, Sep 29, 2014 at 8:13 AM, Steven Schveighoffer via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     My entire point is, it doesn't matter what is expected or what is
     treated as "correct." what matters is where the input CAME from. To
     a library function, it has no idea. There is no extra type info
     saying "this parameter comes from user input."


  From the method's view, parameters passed in are user input.  Full stop.
This is missing the point of an exception. An uncaught exception is an error which crashes the program. If you catch the exception, you can handle it, but if you don't expect it, then it's a bug. Any uncaught exceptions are BY DEFINITION programming errors.
I agree (except the bit about missing the point). The point I wanted to make was that encapsulation means what is a fatal error to one part of a program may be easily handled by the containing part. Just because an exception is thrown somewhere does not mean the program is broken - it is the failure to handle the exception (explicit or inadvertent) that indicates an error.

 What is being discussed here is removing the stack trace and printout when
 an exception is thrown.
 ....
Sure, but it doesn't happen. Just like people do not check return values
 from syscalls.

 The benefit of the exception printing is at least you get a trace of where
 things went wrong when you didn't expect them to. Ignoring a call's return
 value doesn't give any notion something is wrong until much later.

 -Steve
I absolutely do not want a removal of stack trace information. If an uncaught exception bubbles up and terminates the program, this is a bug and I sure as hell want to know as much about it as possible. If having such information presented to the end user is unacceptable, then wrap it and spew something better.

Ignoring an error return value is like ignoring an exception - bad news, and indicative of a broken program.
Sep 29 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/09/2014 19:58, Steven Schveighoffer wrote:
 Any uncaught exceptions are BY DEFINITION programming errors.
Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 01 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/1/14 9:47 AM, Bruno Medeiros wrote:
 On 29/09/2014 19:58, Steven Schveighoffer wrote:
 Any uncaught exceptions are BY DEFINITION programming errors.
Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception.
No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve
Oct 01 2014
next sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 01/10/2014 14:55, Steven Schveighoffer wrote:
 On 10/1/14 9:47 AM, Bruno Medeiros wrote:
 On 29/09/2014 19:58, Steven Schveighoffer wrote:
 Any uncaught exceptions are BY DEFINITION programming errors.
Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception.
No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve
Well, at the very least it's bad UI design for sure (textual UI is still UI). But it's only a *bug* if it's not the behavior the programmer intended. -- Bruno Medeiros https://twitter.com/brunodomedeiros
Oct 01 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/1/14 10:36 AM, Bruno Medeiros wrote:
 On 01/10/2014 14:55, Steven Schveighoffer wrote:
 On 10/1/14 9:47 AM, Bruno Medeiros wrote:
 On 29/09/2014 19:58, Steven Schveighoffer wrote:
 Any uncaught exceptions are BY DEFINITION programming errors.
Not necessarily. For some applications (for example simple console apps), you can consider the D runtime's default exception handler to be an appropriate way to respond to the exception.
No, this is lazy/incorrect coding. You don't want your user to see an indecipherable stack trace on purpose. -Steve
Well, at the very least it's bad UI design for sure (textual UI is still UI). But it's only a *bug* if it's not the behavior the programmer intended.
Sure, one could also halt a program by reading a null pointer on purpose. This is a grey area that I think reasonable people can correctly call a bug if they so wish, despite the intentions of the developer. -Steve
Oct 01 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Wednesday, 1 October 2014 at 14:46:50 UTC, Steven 
Schveighoffer wrote:
 On 10/1/14 10:36 AM, Bruno Medeiros wrote:

This is a grey area that I think reasonable people
 can correctly call a bug if they so wish, despite the 
 intentions of the developer.
Correctly? In a discussion, it's amazing how difficult it is to agree even on the meaning of simple words: is an _intentional programmer behaviour_ a bug? Whah ;-P

---
/Paolo
Oct 01 2014
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/1/14 3:24 PM, Paolo Invernizzi wrote:
 On Wednesday, 1 October 2014 at 14:46:50 UTC, Steven Schveighoffer wrote:
 On 10/1/14 10:36 AM, Bruno Medeiros wrote:

 This is a grey area that I think reasonable people
 can correctly call a bug if they so wish, despite the intentions of
 the developer.
 Correctly? In a discussion, it's amazing how difficult it is to agree even on the meaning of simple words: is an _intentional programmer behaviour_ a bug?
More appropriately, it's not a bug but an incorrect design. I still would call it a bug, and I'm 99% sure the users would report it as a bug :) -Steve
Oct 01 2014
prev sibling parent reply Andrej Mitrovic via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10/1/14, Steven Schveighoffer via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 No, this is lazy/incorrect coding. You don't want your user to see an
 indecipherable stack trace on purpose.
So when they file a bug report are you going to also ask them to run the debugger so they capture the stack trace and file that to you? Come on.
Oct 01 2014
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/1/14 11:00 AM, Andrej Mitrovic via Digitalmars-d wrote:
 On 10/1/14, Steven Schveighoffer via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 No, this is lazy/incorrect coding. You don't want your user to see an
 indecipherable stack trace on purpose.
So when they file a bug report are you going to also ask them to run the debugger so they capture the stack trace and file that to you? Come on.
No, what I mean is:

./niftyapp badfilename.txt

Result should be:

Error: Could not open badfilename.txt, please check and make sure the file exists and is readable.

Not:

std.exception.ErrnoException std/stdio.d(345): Cannot open file `badfilename.txt' in mode `rb' (No such file or directory)
----------------
5   testexception  0x0000000104fad02d ref std.stdio.File std.stdio.File.__ctor(immutable(char)[], const(char[])) + 97
6   testexception  0x0000000104f8d735 _Dmain + 69
7   testexception  0x0000000104f9f771 void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll().void __lambda1() + 33
8   testexception  0x0000000104f9f6bd void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate()) + 45
9   testexception  0x0000000104f9f71d void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).runAll() + 45
10  testexception  0x0000000104f9f6bd void rt.dmain2._d_run_main(int, char**, extern (C) int function(char[][])*).tryExec(scope void delegate()) + 45
11  testexception  0x0000000104f9f639 _d_run_main + 449
12  testexception  0x0000000104f8d75c main + 20
13  libdyld.dylib  0x00007fff8fb2a5fd start + 1
14  ???            0x0000000000000001 0x0 + 1

If it's an error due to *user input*, you should not rely on the exception handling of the runtime; you should print a more user-friendly message. Obviously, if you fail to handle it, the full trace happens, and then you must fix that in your code. It's for your benefit too :) This way you get fewer nuisance troubleshooting calls, since the error message is clearer.

-Steve
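In code, the distinction looks roughly like this - a minimal sketch, assuming a hypothetical niftyapp whose only input error is a bad file name:

import std.exception : ErrnoException;
import std.stdio;

int main(string[] args)
{
    if (args.length < 2)
    {
        stderr.writeln("usage: niftyapp FILENAME");
        return 1;
    }
    try
    {
        auto f = File(args[1], "rb"); // throws ErrnoException on failure
        // ... process the file ...
    }
    catch (ErrnoException e)
    {
        // expected input/environment error: friendly message, no trace
        stderr.writefln("Error: Could not open %s, please check and make sure the file exists and is readable.", args[1]);
        return 1;
    }
    return 0; // anything uncaught is a bug and keeps the full trace
}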
Oct 01 2014
prev sibling next sibling parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Sep 29, 2014 at 8:13 AM, Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 The largest issue I see with this whole scheme is that exceptions can be
 turned into errors, but not the reverse. Once an error is thrown, it's
 pretty much game over. So defensive coding would suggest when you don't
 know the answer, throw an exception, and something higher up would say "Oh,
 that is really a program error, rethrowing"

 But expecting developers to do this at EVERY CALL is really impossible.
And this is an argument for checked exceptions - being able to explicitly state 'these are known fatal cases for this component, you should deal with them appropriately' when defining a method. It cuts down the catch/check to just the common cases, and makes such cases explicit to the caller. Anything not a checked exception falls into the 'error, abort!' path. (Memory corruption etc. being abort scenarios.)

If I really needed to write a robust program in D right now, I would (attempt to) wrap every call in a try/catch, and check whether the thrown exception was of a handleable type - roughly as sketched below. But knowing which types go with which methods would lead me to basically hacking up some documentation-enforced checked exceptions, and be entirely unmaintainable.
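A minimal sketch of that documentation-enforced style, with a made-up RetryableException and fetchData standing in for a real API:

import std.stdio;

class RetryableException : Exception
{
    this(string msg) { super(msg); }
}

// documented (by convention only) to throw RetryableException on transient failure
string fetchData()
{
    throw new RetryableException("connection reset");
}

void main()
{
    foreach (attempt; 0 .. 3)
    {
        try
        {
            writeln(fetchData());
            break;
        }
        catch (RetryableException e)
        {
            // the one type we know how to handle here; retry
            writeln("transient failure, retrying: ", e.msg);
        }
        // any other Exception propagates - by assumption it is not handleable here
    }
}

Nothing in the language checks that fetchData's documentation matches what it actually throws, which is exactly the maintainability problem described above.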
Sep 29 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 29 September 2014 at 18:59:59 UTC, Jeremy Powers via
Digitalmars-d wrote:
 And this is an argument for checked exceptions - being able to 
 explicitly state 'these are known fatal cases for this 
 component, you should deal with them appropriately' when 
 defining a method.  Cuts down the catch/check to just the 
 common cases, and makes such cases explicit to the caller. 
 Anything not a checked exception falls into the 'error, abort!' 
 path.  (Memory corruption etc. being abort scenarios)
Checked exceptions are good in theory but they failed utterly in Java. I'm not interested in seeing them in D.
Sep 29 2014
next sibling parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Sep 29, 2014 at 12:28 PM, Sean Kelly via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 Checked exceptions are good in theory but they failed utterly in
 Java.  I'm not interested in seeing them in D.
I've heard this before, but have not seen a reasonable argument as to why they are a failure. Last time this was discussed, a link to a blog was provided, with lots of discussion there - which as far as I could tell boiled down to 'catching exceptions is ugly, and people just do the wrong thing anyway, which is ugly when you have checked exceptions.'

I am unlucky enough to write Java all day, and from my standpoint checked exceptions are a huge win. There are certain edges which can catch you, but they are immensely useful in developing robust programs. Basically: checked exceptions -> recoverable problems, unchecked -> unrecoverable/programming errors (like asserts or memory errors).

Note I am not advocating adding checked exceptions to D (though I would like it). The point is to acknowledge that there are different kinds of exceptions, and an exception for one part of the code may not be a problem for the bit that invokes it.
Sep 29 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 9/29/14 3:44 PM, Jeremy Powers via Digitalmars-d wrote:
 On Mon, Sep 29, 2014 at 12:28 PM, Sean Kelly via Digitalmars-d
 <digitalmars-d puremagic.com <mailto:digitalmars-d puremagic.com>> wrote:

     Checked exceptions are good in theory but they failed utterly in
     Java.  I'm not interested in seeing them in D.


 I've heard this before, but have not seen a reasonable argument as to
 why they are a failure.  Last time this was discussed a link to a blog
 was provided, with lots of discussion there - which as far as I could
 tell boiled down to 'catching exceptions is ugly, and people just do the
 wrong thing anyway which is ugly when you have checked exceptions.'

 I am unlucky enough to write Java all day, and from my standpoint
 checked exceptions are a huge win.  There are certain edges which can
 catch you, but they are immensely useful in developing robust programs.
 Basically checked exceptions -> recoverable problems, unchecked ->
 unrecoverable/programming errors (like asserts or memory errors).
Well, the failure comes from the effort to effect a certain behavior.

Sun was looking to make programmers more diligent about handling errors. However, humans are lazy worthless creatures. What ends up happening is, the compiler complains they aren't handling an exception. They can't see any reason why the exception would occur, so they simply catch and ignore it to shut the compiler up.

In 90% of cases, they are right -- the exception will not occur. But because they have been "trained" to simply discard exceptions, it ends up defeating the purpose for the 10% of the time that they are wrong.

If you have been able to resist that temptation and handle every exception, then I think you are in the minority. But I have no evidence to back this up, it's just a belief.
 Note I am not advocating adding checked exceptions to D (though I would
 like it).  Point is to acknowledge that there are different kinds of
 exceptions, and an exception for one part of the code may not be a
 problem for the bit that invokes it.
I think this is appropriate for a lint tool, for those out there like yourself who want that information. But requiring checked exceptions is, I think, a futile attempt to outlaw natural human behavior.

-Steve
Sep 30 2014
parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, Sep 30, 2014 at 5:43 AM, Steven Schveighoffer via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 ...
 Well, the failure comes from the effort to effect a certain behavior.

 Sun was looking to make programmers more diligent about handling errors.
 However, humans are lazy worthless creatures. What ends up happening is,
 the compiler complains they aren't handling an exception. They can't see
 any reason why the exception would occur, so they simply catch and ignore
 it to shut the compiler up.

 In 90% of cases, they are right -- the exception will not occur. But
 because they have been "trained" to simply discard exceptions, it ends up
 defeating the purpose for the 10% of the time that they are wrong.
That's the argument, but it doesn't seem valid to me. Without checked exceptions, you will always be ignoring exceptions. With checked exceptions, you have to explicitly ignore (some) exceptions, and when you do, it is immediately obvious in the code. You go from everyone ignoring exceptions all the time to some people ignoring them - and being able to easily notice and call out such cases.

Anyone 'trained' to ignore checked exceptions is simply shooting themselves in the foot - same as if there were no checked exceptions, but with more verbosity. This is not a failure of checked exceptions, but a failure of people to use a language feature properly. (Which, yeah, meta, is a failure of the feature... not going to go there)
 If you have been able to resist that temptation and handle every
 exception, then I think you are in the minority. But I have no evidence to
 back this up, it's just a belief.
In my world of professional Java, ignoring exceptions is an immediate, obvious indicator of bad code. You will be called on it, and chastised appropriately. So from my standpoint, Sun was successful in making programmers more diligent about handling errors.
  Note I am not advocating adding checked exceptions to D (though I would
 like it).  Point is to acknowledge that there are different kinds of
 exceptions, and an exception for one part of the code may not be a
 problem for the bit that invokes it.
I think this is appropriate for a lint tool for those out there like yourself who want that information. But requiring checked exceptions is I think a futile attempt to outlaw natural human behavior.
Perhaps I shouldn't have mentioned checked exceptions at all; it seems to be distracting from what I wanted to say. The important bit I wanted to bring to the discussion is that not all exceptions are the same, and different sections of code have their own ideas of what is a breaking problem.

A module/library/component/whatever treats anything passed into it as its input, and thus appropriately throws exceptions on bad input. But code using that whatever may be perfectly fine handling exceptions coming from there. Exceptions need to be appropriate to the given abstraction, and dealt with by the user of that abstraction.
Sep 30 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 29/09/2014 20:28, Sean Kelly wrote:
 Checked exceptions are good in theory but they failed utterly in
 Java.  I'm not interested in seeing them in D.
That is the conventional theory, the established wisdom. But the more experienced I've become with Java over the years, the more I've become convinced otherwise. What has failed is not the concept of checked exceptions per se, but mostly the failure of Java programmers to use checked exceptions effectively, and to properly design their code around this paradigm.

Like Jeremy mentioned, if one puts catch blocks right around the function that throws an exception, and just swallows/forgets it there without doing anything else, then it's totally the programmer's fault for being lazy. If one is annoyed that adding a throws clause to a function will often require adding the same throws clause to several other functions, well, that is editing work you have to accept for the sake of more correctness. But one should also understand there are ways to mitigate this editing work.

The first point is that in a lot of code it is better to have a function throw just one generic (but checked) exception that can wrap any other specific errors/exceptions. If you are doing an operation that can throw File-Not-Found, Invalid-Path, No-Permissions, IO-Exception, etc., then often all of these will be handled in the same user-reporting code, so they could be wrapped under a single exception that would be used in the throws clause. That way the whole function call chain doesn't need to be modified every time a new exception is added or removed. (A sketch of the idea follows below.)

If you're thinking that means adding a "throws Exception" to such functions in Java, then no. That would catch RuntimeExceptions too (the unchecked exceptions of Java), and these you often want to handle elsewhere than where you handle the checked exceptions. In this regard, Java does have a design fault, IMO, which is that there is no common superclass for checked exceptions. (There is only one for unchecked exceptions.)

The second point is that adding (or modifying) the throws clause of function signatures can be made much easier with an IDE, and in particular Eclipse JDT helps a lot. If you have an error in the editor about a checked exception that is not caught or thrown, you can just press Ctrl-1 to automatically add either a throws clause or a surrounding try-catch block.

--
Bruno Medeiros
https://twitter.com/brunodomedeiros
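Expressed in D for consistency with the rest of this thread (D has no checked exceptions, so this only illustrates the wrapping idea; LoadException and loadConfig are made up):

import std.stdio;

// one coarse exception for the whole operation; callers catch only this
class LoadException : Exception
{
    this(string msg, Throwable cause = null) { super(msg, cause); }
}

void loadConfig(string path)
{
    try
    {
        auto f = File(path, "r"); // may fail in several specific ways
        // ... parse the configuration ...
    }
    catch (Exception e)
    {
        // wrap the specific failure, whatever it was; the original
        // exception stays reachable as the cause
        throw new LoadException("could not load " ~ path, e);
    }
}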
Oct 01 2014
parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Wed, Oct 1, 2014 at 7:24 AM, Bruno Medeiros via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 What has failed is not the concept of checked exceptions per se, but
 mostly, the failure of Java programmers to use checked exceptions
 effectively, and properly design their code around this paradigm.
This. I have seen many Java programs (and their programmers) fail utterly at using exceptions. Like OO, the design of the system has to leverage it properly, and there are places where people can easily get tripped up - but when used well, it can be immensely useful.

Error handling is part of an API, and exceptions are error handling, so they should be considered when designing an API. Checked exceptions are a language-supported way to do this. For those that consider checked exceptions a failure: what other feature(s) would work instead?

NB: If you see "throws Exception" in Java code, chances are the code is broken, same as if you see "catch (Exception" - this tells you nothing about the exception that happened, and hence you can do nothing with it. So you either swallow (and silently break in many cases) or rethrow (and break for things you needn't have). As mentioned, the standard way to avoid this is to have a parent exception type appropriate to the abstraction in the API, and throw subtypes in the implementation (see the sketch below). Among other things, this means you can change the implementation to throw different exceptions without breaking any users (who will already be catching the parent exception). Adding/modifying a throws clause is an API-breaking change, so it should be avoided however easy it is in the IDE. (Yes, I'm biased towards writing libraries consumed by others.)

NNB: Retrofitting a program to use proper exception handling is much harder than designing it the right way from scratch. I'm going to compare to OO again: don't consider OO broken because people use inheritance when they want ownership, and it is painful to fix later.
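A tiny sketch of that parent-type/subtype split, with illustrative names:

// part of the public API: callers catch this
class StoreException : Exception
{
    this(string msg) { super(msg); }
}

// implementation detail: may change without breaking callers
class ConnectionLostException : StoreException
{
    this(string msg) { super(msg); }
}

// callers write:
//     try { store.put(key, value); }
//     catch (StoreException e) { /* still works if the subtypes change */ }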
Oct 01 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Wednesday, 1 October 2014 at 22:42:27 UTC, Jeremy Powers via
Digitalmars-d wrote:
 If you see "throws Exception" in java code chances are the code 
 is broken,
 same as if you see "catch (Exception" - this tells you nothing 
 about the
 exception that happened, and hence you can do nothing with it.  
 So you
 either swallow (and silently break in many cases) or rethrow 
 (and break for things you needn't have).  As mentioned, the 
 standard way to avoid this is to have a parent exception type 
 appropriate to the abstraction in the API, and throw subtypes 
 in the implementation.  Among other things, this means you can 
 change the implementation to throw different exceptions without 
 breaking any users (who will already be catching the parent 
 exception).
...while in Phobos, most of the subtyped exceptions were eliminated a while back in favor of just always throwing Exception.
Oct 01 2014
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Wednesday, 1 October 2014 at 23:00:41 UTC, Sean Kelly wrote:
 ...while in Phobos, most of the subtyped exceptions were
 eliminated a while back in favor of just always throwing
 Exception.
What are you referring to specifically? Compared to Tango, yes, Phobos might have a lot fewer concrete exception types. But I don't recall actually eliminating existing ones. David
Oct 01 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 02/10/14 01:19, David Nadlinger wrote:

 What are you referring to specifically? Compared to Tango, yes, Phobos
 might have a lot fewer concrete exception types. But I don't recall
 actually eliminating existing ones.
It happens implicitly when using "enforce". By default it will throw an instance of Exception, and in most cases I don't think the developer bothers to specify a more specific exception type (see the sketch below).

I really, really hate this. It makes it basically impossible to do any form of error handling correctly. I think Exception should be an interface or abstract class.

--
/Jacob Carlborg
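The difference in question, as a minimal sketch (FileMissingException is made up for illustration):

import std.exception : enforce;

class FileMissingException : Exception
{
    this(string msg) { super(msg); }
}

void check(bool exists, string path)
{
    // the common, lazy form: throws a plain Exception
    enforce(exists, "no such file: " ~ path);

    // the form that preserves a catchable type, but takes extra effort
    enforce(exists, new FileMissingException(path));
}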
Oct 01 2014
parent reply "David Nadlinger" <code klickverbot.at> writes:
On Thursday, 2 October 2014 at 06:39:17 UTC, Jacob Carlborg wrote:
 On 02/10/14 01:19, David Nadlinger wrote:

 What are you referring to specifically? Compared to Tango, 
 yes, Phobos
 might have a lot fewer concrete exception types. But I don't 
 recall actually eliminating existing ones.
It happens implicitly when using "enforce".
I'm well aware of that. But as Sean Kelly wrote on Wednesday, 1 October 2014 at 23:00:41 UTC:
 ...while in Phobos, most of the subtyped exceptions were
 eliminated a while back in favor of just always throwing
 Exception.
are you saying that specific exceptions were replaced by enforce? I can't recall something like this happening.

David
Oct 03 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-03 14:36, David Nadlinger wrote:

 you are saying that specific exceptions were replaced by enforce? I
 can't recall something like this happening.
I have no idea about this, but I know there are a lot of "enforce" calls in Phobos and its use seems to be encouraged. It would be really sad if specific exceptions were deliberately replaced with less specific ones.

--
/Jacob Carlborg
Oct 03 2014
parent reply Marco Leise <Marco.Leise gmx.de> writes:
Am Fri, 03 Oct 2014 21:35:01 +0200
schrieb Jacob Carlborg <doob me.com>:

 On 2014-10-03 14:36, David Nadlinger wrote:
 
 you are saying that specific exceptions were replaced by enforce? I
 can't recall something like this happening.
I have no idea about this but I know there are a lot of "enforce" in Phobos and it sees to be encouraged to use it. Would be really sad if specific exceptions were deliberately replaced with less specific exceptions.
Nice, finally someone who actually wants to discern Exception types. I'm always at a loss as to what warrants its own exception type. E.g. when looking at network protocols, would a 503 be a NetworkException, an HTTPException or an HTTPInternalServerErrorException?

Where do *you* wish libraries would differentiate? Or does it really come down to categories like "illegal argument", "division by zero", "null pointer", "out of memory" for you?

--
Marco
Oct 05 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 8:08 AM, Marco Leise wrote:
 Nice, finally someone who actually wants to discern Exception
 types. I'm always at a loss as to what warrants its own
 exception type. E.g. when looking at network protocols, would
 a 503 be a NetworkException, a HTTPException or a
 HTTPInternalServerErrorException ?
 Where do *you* wish libraries would differentiate?
 Or does it really come down to categories like "illegal
 argument", "division by zero", "null pointer", "out of memory"
 for you?
That reflects my misgivings about using exception hierarchies for error kinds as well. -- Andrei
Oct 05 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-05 17:08, Marco Leise wrote:

 Nice, finally someone who actually wants to discern Exception
 types. I'm always at a loss as to what warrants its own
 exception type.
Yeah, that can be quite difficult. In the end, if you have a good exception hierarchy I don't think it will hurt to have many different exception types.
 E.g. when looking at network protocols, would
 a 503 be a NetworkException, a HTTPException or a
 HTTPInternalServerErrorException ?
For this specific case I would probably have one general exception for all HTTP error codes.
 Where do *you* wish libraries would differentiate?
I would like to have as specific an exception type as possible, plus a nice hierarchy of exceptions for when catching a specific exception is not interesting. Instead of just a FileException there could be FileNotFoundException, PermissionDeniedException and so on.
 Or does it really come down to categories like "illegal
 argument", "division by zero", "null pointer", "out of memory"
 for you?
-- /Jacob Carlborg
Oct 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 8:56 AM, Jacob Carlborg wrote:
 I would like to have as specific an exception type as possible, plus a nice
 hierarchy of exceptions for when catching a specific exception is not
 interesting. Instead of just a FileException there could be
 FileNotFoundException, PermissionDeniedException and so on.
Exceptions are all about centralized error handling. How, and how often, would you handle FileNotFoundException differently than PermissionDeniedException? Andrei
Oct 05 2014
next sibling parent "Dicebot" <public dicebot.lv> writes:
On Sunday, 5 October 2014 at 16:18:33 UTC, Andrei Alexandrescu 
wrote:
 On 10/5/14, 8:56 AM, Jacob Carlborg wrote:
 I would like to have as specific an exception type as possible,
 plus a nice
 hierarchy of exceptions for when catching a specific exception is
 not
 interesting. Instead of just a FileException there could be
 FileNotFoundException, PermissionDeniedException and so on.
Exceptions are all about centralized error handling. How, and how often, would you handle FileNotFoundException differently than PermissionDeniedException? Andrei
While a precise formalization of the principle is hard, I think this comment nails it in general. When defining exception hierarchies it makes sense to think in terms of "what code is likely to catch it, and why?" rather than "what should this code throw?". A dedicated exception type only makes sense if it is common to catch it separately; any additional details can be stored as runtime fields (like the status code for HTTPStatusException - see the sketch below).
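A minimal sketch of that status-code-as-field approach (this HTTPStatusException is illustrative, not vibe.d's actual class):

class HTTPStatusException : Exception
{
    int status; // the detail lives in a field, not in the type

    this(int status, string msg)
    {
        super(msg);
        this.status = status;
    }
}

// one catch site decides what the status means:
//     catch (HTTPStatusException e)
//     {
//         if (e.status >= 500) scheduleRetry();   // hypothetical handler
//         else reportToUser(e.msg);               // hypothetical handler
//     }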
Oct 05 2014
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 9:18 AM, Andrei Alexandrescu wrote:
 Exceptions are all about centralized error handling. How, and how often, would
 you handle FileNotFoundException differently than PermissionDeniedException?
You would handle it differently if there was extra data attached to that particular exception, specific to that sort of error.
Oct 05 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 2:27 PM, Walter Bright wrote:
 On 10/5/2014 9:18 AM, Andrei Alexandrescu wrote:
 Exceptions are all about centralized error handling. How, and how
 often, would
 you handle FileNotFoundException differently than
 PermissionDeniedException?
You would handle it differently if there was extra data attached to that particular exception, specific to that sort of error.
Indeed. Very few in Phobos do. -- Andrei
Oct 05 2014
prev sibling next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
05-Oct-2014 20:18, Andrei Alexandrescu wrote:
 On 10/5/14, 8:56 AM, Jacob Carlborg wrote:
 I would like to have as specific exception type as possible. Also a nice
 hierarchy of exception when catching a specific exception is not
 interesting. Instead of just a FileException there could be
 FileNotFoundException, PermissionDeniedExcepton and so on.
Exceptions are all about centralized error handling. How, and how often, would you handle FileNotFoundException differently than PermissionDeniedException?
Seems like it should be possible to define multiple interfaces for exceptions, and then catch by those (and/or combinations of such). Each interface would be interested in a particular property of the exception. Then catching by:

FileException with PermissionException

would mean an OS-level permission was violated during file access, while

ProcessException with PermissionException

would mean process manipulation was forbidden, etc.

Of course, some code may be interested only in the PermissionException side of things, while other code may want to contain anything related to files, with the catch-all-sensible-ones inside the main function. (A rough approximation in today's D is sketched below.)

--
Dmitry Olshansky
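D's catch clauses only match classes, but the idea can be approximated with interfaces plus a runtime cast - a sketch, with all types hypothetical:

interface PermissionProblem { }

class FileException : Exception
{
    this(string msg) { super(msg); }
}

// carries both categories: file-related AND permission-related
class FilePermissionException : FileException, PermissionProblem
{
    this(string msg) { super(msg); }
}

void handle()
{
    try
    {
        throw new FilePermissionException("read denied on /etc/shadow");
    }
    catch (Exception e)
    {
        if (auto p = cast(PermissionProblem) e)
        {
            // the permission side of things, whatever the concrete hierarchy
        }
        else
            throw e; // not ours to handle
    }
}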
Oct 05 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 05/10/14 23:50, Dmitry Olshansky wrote:

 Seems like it should be possible to define multiple interfaces for
 exceptions, and then catch by that (and/or combinations of such).

 Each of interface would be interested in a particular property of
 exception. Then catching by:

 FileException with PermissionException

 would mean OS-level permission viloated and it was during file access,

 while

 ProcessException with PermissionException would mean process
 manipulation was forbiden, etc.

 Of course, some code may be interested only in PermissionException side
 of things, while other code may want to contain anything related to
 files, and the catch-all-sensible-ones inside of the main function.
Why not just define two different PermissionExceptions?

--
/Jacob Carlborg
Oct 05 2014
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
06-Oct-2014 10:33, Jacob Carlborg wrote:
 On 05/10/14 23:50, Dmitry Olshansky wrote:

 Seems like it should be possible to define multiple interfaces for
 exceptions, and then catch by that (and/or combinations of such).

 Each of interface would be interested in a particular property of
 exception. Then catching by:

 FileException with PermissionException

 would mean OS-level permission viloated and it was during file access,

 while

 ProcessException with PermissionException would mean process
 manipulation was forbiden, etc.

 Of course, some code may be interested only in PermissionException side
 of things, while other code may want to contain anything related to
 files, and the catch-all-sensible-ones inside of the main function.
Why not just define two different PermissionException?
Rather, N permission exceptions. It gets worse with more problem domains to cover. Again, the trick is being able to catch all PermissionExceptions, including exception types you may not know in advance.

It's obvious to me that one hierarchy is way too limiting for exceptions, simply because there could be many ways to categorize the same set of error conditions.

--
Dmitry Olshansky
Oct 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 11:39 PM, Dmitry Olshansky wrote:
 It's obvious to me that one hierarchy is way too limiting for exceptions,
 simply because there could be many ways to categorize the same set of
 error conditions.
Well put. -- Andrei
Oct 06 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Mon, Oct 06, 2014 at 06:46:31AM -0700, Andrei Alexandrescu via Digitalmars-d
wrote:
 On 10/5/14, 11:39 PM, Dmitry Olshansky wrote:
It's obvious to me that one hierarchy is way too limiting for
exceptions, simply because there could be many ways to categorize the
same set of error conditions.
Well put. -- Andrei
What's the alternative, though? I can't think of any solution that (1) isn't far more complicated than the current state of things and (2) is easier to use. T -- "Holy war is an oxymoron." -- Lazarus Long
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 7:01 AM, H. S. Teoh via Digitalmars-d wrote:
 On Mon, Oct 06, 2014 at 06:46:31AM -0700, Andrei Alexandrescu via
Digitalmars-d wrote:
 On 10/5/14, 11:39 PM, Dmitry Olshansky wrote:
 It's obvious to me that one hierarchy is way too limiting for
 exceptions, simply because there could be many ways to categorize the
 same set of error conditions.
Well put. -- Andrei
What's the alternative, though? I can't think of any solution that (1) isn't far more complicated than the current state of things and (2) is easier to use.
I'm thinking a simple key-value store Variant[string] would accommodate any state needed for differentiating among exception kinds whenever that's necessary (see the sketch below).

It's commonly accepted that the usability scope of OOP has gotten significantly narrower since its heyday. However, surprisingly, the larger community hasn't gotten to the point of scrutinizing object-oriented error handling, which as far as I can tell has never delivered.

Andrei
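A minimal sketch of the payload idea; AppException and the key names are made up for illustration:

import std.variant : Variant;

class AppException : Exception
{
    Variant[string] info; // free-form payload instead of a subtype per error kind

    this(string msg, Variant[string] info = null)
    {
        super(msg);
        this.info = info;
    }
}

// the throw site attaches whatever state it has:
//     throw new AppException("cannot open file",
//         ["path": Variant("data.txt"), "errno": Variant(2)]);
//
// the catch site differentiates by payload, not by type:
//     catch (AppException e)
//     {
//         if (auto p = "errno" in e.info) { /* inspect *p */ }
//     }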
Oct 06 2014
parent reply Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 10/6/14, 7:01 AM, H. S. Teoh via Digitalmars-d wrote:

 On Mon, Oct 06, 2014 at 06:46:31AM -0700, Andrei Alexandrescu via
 Digitalmars-d wrote:

 On 10/5/14, 11:39 PM, Dmitry Olshansky wrote:

 It's obvious to me that one hierarchy is way too limiting for
 exceptions, simply because there could be many ways to categorize the
 same set of error conditions.
Well put. -- Andrei
What's the alternative, though? I can't think of any solution that (1) isn't far more complicated than the current state of things and (2) is easier to use.
I'm thinking a simple key-value store Variant[string] would accommodate any state needed for differentiating among exception kinds whenever that's necessary.
And 'kinds' is a synonym for 'types' - you can have different kinds of problems, so you raise them with different kinds of exceptions. s/kind/type/g and the question is: why not leverage the type system?

For a consumer-of-something-that-throws, having different types of exceptions for different things with different data makes sense. You have to switch on something to determine what data you can get from the exception anyway.
 It's commonly accepted that the usability scope of OOP has gotten
 significantly narrower since its heydays. However, surprisingly, the larger
 community hasn't gotten to the point to scrutinize object-oriented error
 handling, which as far as I can tell has never delivered.
Maybe, but what fits better? Errors/Exceptions have an inherent hierarchy, which maps well to a hierarchy of types. When catching an Exception, you want to guarantee you only catch the kinds (types) of things you are looking for, and nothing else.
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 4:46 PM, Jeremy Powers via Digitalmars-d wrote:
 On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d
     I'm thinking a simple key-value store Variant[string] would
     accommodate any state needed for differentiating among exception
     kinds whenever that's necessary.


 And 'kinds' is a synonym for 'types' - You can have different kinds of
 problems, so you raise them with different kinds of exceptions.

 s/kind/type/g and the question is: why not leverage the type system?
I've used "kinds" intentionally there. My basic thesis here is I haven't seen any systematic and successful use of exception hierarchies in 20 years. In rare scattered cases I've seen a couple of multiple "catch"es, and even those could have been helped by the use of a more flat handling. You'd think in 20 years some good systematic use of the feature would come forward. It's probably time to put exception hierarchies in the "emperor's clothes" bin.
 For a consumer-of-something-that-throws, having different types of
 exceptions for different things with different data makes sense.  You
 have to switch on something to determine what data you can get from the
 exception anyway.
Oh yah I know the theory. It's beautiful.
     It's commonly accepted that the usability scope of OOP has gotten
     significantly narrower since its heydays. However, surprisingly, the
     larger community hasn't gotten to the point to scrutinize
     object-oriented error handling, which as far as I can tell has never
     delivered.


 Maybe, but what fits better?  Errors/Exceptions have an inherent
 hierarchy, which maps well to a hierarchy of types.  When catching an
 Exception, you want to guarantee you only catch the kinds (types) of
 things you are looking for, and nothing else.
Yah, it's just that most/virtually all of the time I'm looking for all. And nothing else :o). Andrei
Oct 06 2014
next sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Oct 6, 2014 at 6:19 PM, Andrei Alexandrescu via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 10/6/14, 4:46 PM, Jeremy Powers via Digitalmars-d wrote:

 On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d
     I'm thinking a simple key-value store Variant[string] would
     accommodate any state needed for differentiating among exception
     kinds whenever that's necessary.


 And 'kinds' is a synonym for 'types' - You can have different kinds of
 problems, so you raise them with different kinds of exceptions.

 s/kind/type/g and the question is: why not leverage the type system?
I've used "kinds" intentionally there. My basic thesis here is I haven't seen any systematic and successful use of exception hierarchies in 20 years. In rare scattered cases I've seen a couple of multiple "catch"es, and even those could have been helped by the use of a more flat handling. You'd think in 20 years some good systematic use of the feature would come forward. It's probably time to put exception hierarchies in the "emperor's clothes" bin. For a consumer-of-something-that-throws, having different types of
 exceptions for different things with different data makes sense.  You
 have to switch on something to determine what data you can get from the
 exception anyway.
Oh yah I know the theory. It's beautiful.
I'm not talking theory (exclusively). From a practical standpoint, if I ever need information from an exception, I need to know what information I can get. If different exceptions have different information, how do I tell what I can get? Types fit this as well as or better than anything else I can think of.
      It's commonly accepted that the usability scope of OOP has gotten
     significantly narrower since its heydays. However, surprisingly, the
     larger community hasn't gotten to the point to scrutinize
     object-oriented error handling, which as far as I can tell has never
     delivered.


 Maybe, but what fits better?  Errors/Exceptions have an inherent
 hierarchy, which maps well to a hierarchy of types.  When catching an
 Exception, you want to guarantee you only catch the kinds (types) of
 things you are looking for, and nothing else.
Yah, it's just that most/virtually all of the time I'm looking for all. And nothing else :o).
Most/virtually all of the time I am looking only for the kinds of exceptions I expect and can handle. If I catch an exception that I was not expecting, this is a program bug (and may result in undefined behavior, memory corruption, etc). Catching all is almost _never_ what I want.

I have not found a whole lot of use for deep exception hierarchies, but some organization of types/kinds of exceptions is needed. At the very least you need to know if it is the kind of exception you know how to handle - and without a hierarchy, you need to know every single specific kind of exception anything you call throws. Which is not tenable.
Oct 07 2014
prev sibling parent Jeremy Powers via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Mon, Oct 6, 2014 at 6:19 PM, Andrei Alexandrescu via Digitalmars-d <
digitalmars-d puremagic.com> wrote:

 On 10/6/14, 4:46 PM, Jeremy Powers via Digitalmars-d wrote:

 On Mon, Oct 6, 2014 at 7:50 AM, Andrei Alexandrescu via Digitalmars-d
     I'm thinking a simple key-value store Variant[string] would
     accommodate any state needed for differentiating among exception
     kinds whenever that's necessary.


 And 'kinds' is a synonym for 'types' - You can have different kinds of
 problems, so you raise them with different kinds of exceptions.

 s/kind/type/g and the question is: why not leverage the type system?
I've used "kinds" intentionally there. My basic thesis here is I haven't seen any systematic and successful use of exception hierarchies in 20 years. In rare scattered cases I've seen a couple of multiple "catch"es, and even those could have been helped by the use of a more flat handling. You'd think in 20 years some good systematic use of the feature would come forward. It's probably time to put exception hierarchies in the "emperor's clothes" bin.
Sorry, forgot to respond to this part. As mentioned, I'm not a defender of hierarchies per se - but I've not seen any alternative way to accomplish what they give. I need to know that I am catching exceptions that I can handle, and not catching exceptions I can't/won't handle. Different components and layers of code have different ideas of what can and should be handled.

Without particular exception types, how can I know that I am only catching what is appropriate, and not catching and swallowing other problems?
Oct 07 2014
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 05/10/14 18:18, Andrei Alexandrescu wrote:

 Exceptions are all about centralized error handling. How, and how often,
 would you handle FileNotFoundException differently than
 PermissionDeniedException?
Probably not that often. But I don't think it's up to "File" to decide that. I think "File" should throw as specific an error as possible, containing all the details it has. Then it's up to the user of "File" how to handle the exception. If you're not interested in the differences between FileNotFoundException and PermissionDeniedException, then catch the base class instead.

The details provided by these two exceptions could be different as well. FileNotFoundException should contain the path of the file that wasn't found. PermissionDeniedException should contain some information about whether it was the source or the target that caused the exception to be thrown - think of a copy or move operation.

How would one localize error messages if the specific exception is not known? (See the sketch below.)

--
/Jacob Carlborg
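Roughly what is being asked for, as a sketch (the type, the field, and the localize/showError helpers are all hypothetical):

class FileNotFoundException : Exception
{
    string path; // structured data, available without parsing the message

    this(string path)
    {
        super("file not found: " ~ path);
        this.path = path;
    }
}

// a UI layer can then localize without string-matching on e.msg:
//     catch (FileNotFoundException e)
//         showError(localize("errors.fileNotFound", e.path));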
Oct 05 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/5/14, 11:31 PM, Jacob Carlborg wrote:
 On 05/10/14 18:18, Andrei Alexandrescu wrote:

 Exceptions are all about centralized error handling. How, and how often,
 would you handle FileNotFoundException differently than
 PermissionDeniedException?
Probably not that often. But I don't think it's up to "File" to decide that. I think "File" should throw as specific an error as possible, containing all the details it has.
Sure, but that can be distinguished by payload, not type.
 Then it's up to the user of "File" how to
 handle the exception. If you're not interested in the differences
 between FileNotFoundException and PermissionDeniedException, then catch
 the base class instead.

 The details provided by these two exceptions could be different as
 well. FileNotFoundException should contain the path of the file that
 wasn't found. PermissionDeniedException should contain some information
 about if it was the source or target that caused the exception to be
 thrown. Think of a copy or move operation.

 How would one localize error messages if the specific exception is not
 known?
Knowledge doesn't have to be by type; just place data inside the exception. About the only place where multiple "catch" statements are used to make fine distinctions between exception types is in sample code showing how to use multiple "catch" statements :o). This whole notion that different exceptions need different types is as far as I can tell a red herring. Andrei
Oct 06 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 06/10/14 15:45, Andrei Alexandrescu wrote:

 Knowledge doesn't have to be by type; just place data inside the
 exception. About the only place where multiple "catch" statements are
 used to make fine distinctions between exception types is in sample code
 showing how to use multiple "catch" statements :o). This whole notion
 that different exceptions need different types is as far as I can tell a
 red herring.
What do you suggest, error codes? I consider that an ugly hack. -- /Jacob Carlborg
Oct 06 2014
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 7:48 AM, Jacob Carlborg wrote:
 On 06/10/14 15:45, Andrei Alexandrescu wrote:

 Knowledge doesn't have to be by type; just place data inside the
 exception. About the only place where multiple "catch" statements are
 used to make fine distinctions between exception types is in sample code
 showing how to use multiple "catch" statements :o). This whole notion
 that different exceptions need different types is as far as I can tell a
 red herring.
What do you suggest, error codes? I consider that an ugly hack.
I don't. On the contrary, I do consider proliferating types to the multiplicity of possible errors an obvious design sin. -- Andrei
Oct 06 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-06 17:07, Andrei Alexandrescu wrote:

 I don't. On the contrary, I do consider proliferating types to the
 multiplicity of possible errors an obvious design sin. -- Andrei
You lose the ability to have exception-specific data. And no, I don't want to see an associative array of Variants; that's an even worse hack than error codes. You'll run into problems with unique keys and will most likely need to "scope" all keys.

--
/Jacob Carlborg
Oct 06 2014
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdan.org> writes:
Jacob Carlborg <doob me.com> wrote:
 On 2014-10-06 17:07, Andrei Alexandrescu wrote:
 
 I don't. On the contrary, I do consider proliferating types to the
 multiplicity of possible errors an obvious design sin. -- Andrei
You lose the ability to have exception-specific data. And no, I don't want to see an associative array of Variants; that's an even worse hack than error codes. You'll run into problems with unique keys and will most likely need to "scope" all keys.
Then scope them.
Oct 06 2014
parent Jacob Carlborg <doob me.com> writes:
On 06/10/14 20:26, Andrei Alexandrescu wrote:

 Then scope them.
We already have scoping rules built into the language. Why should we invent a new way and a new set of rules for this?

--
/Jacob Carlborg
Oct 06 2014
prev sibling parent reply "Regan Heath" <regan netmail.co.nz> writes:
On Mon, 06 Oct 2014 15:48:31 +0100, Jacob Carlborg <doob me.com> wrote:

 On 06/10/14 15:45, Andrei Alexandrescu wrote:

 Knowledge doesn't have to be by type; just place data inside the
 exception. About the only place where multiple "catch" statements are
 used to make fine distinctions between exception types is in sample code
 showing how to use multiple "catch" statements :o). This whole notion
 that different exceptions need different types is as far as I can tell a
 red herring.
What do you suggest, error codes? I consider that an ugly hack.
Why?

It gives us the benefits of error code return values:
  - the ability to easily/cheaply check for/compare them using "switch" on the code value (vs comparing/casting types)
  - the ability to pass through OS-level codes directly

Without any of the penalties:
  - checking for them after every call
  - losing the return value "slot", or having to engineer multiple return values in the language
  - having to mix error codes in with valid return values (for int() functions)

We also get:
  - no type proliferation
  - no arguments about what exception types are needed, or the hierarchy to put them in

Seems like a win to me.

Of course.. it would be nicer still if there was a list of OS/platform-agnostic error codes which were used throughout Phobos and could be re-used by client code. And.. (for example) it would be nice if there was a FileNotFound(string path) function which returned an Exception using the correct code, allowing:

throw FileNotFound(path);

and so on. (A sketch follows below.)

I do not know a lot about how exceptions are thrown and caught at the compiler/compiled-code level, but perhaps there is even a performance benefit to be had if you know that only 2 possible types (Exception and Error) can/will be thrown..

R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/
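A minimal sketch of that scheme; the ErrorCode values, CodedException and the FileNotFound helper are all hypothetical, not an existing Phobos API:

enum ErrorCode { fileNotFound, permissionDenied }

class CodedException : Exception
{
    ErrorCode code; // one type for all errors; the code carries the kind

    this(ErrorCode code, string msg)
    {
        super(msg);
        this.code = code;
    }
}

Exception FileNotFound(string path)
{
    return new CodedException(ErrorCode.fileNotFound, "file not found: " ~ path);
}

void demo()
{
    try
    {
        throw FileNotFound("foo.txt");
    }
    catch (CodedException e)
    {
        final switch (e.code) // cheap comparison on the code value
        {
            case ErrorCode.fileNotFound:     /* report or retry */ break;
            case ErrorCode.permissionDenied: /* escalate */        break;
        }
    }
}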
Oct 06 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 2014-10-06 18:03, Regan Heath wrote:

 Why?

 It gives us the benefits of error code return values:
   - ability to easily/cheaply check for/compare them using "switch" on
 code value (vs comparing/casting types)
   - ability to pass through OS level codes directly

 Without any of the penalties:
   - checking for them after every call.
   - losing the return value "slot" or having to engineer multiple return
 values in the language.
   - having to mix error codes in with valid return values (for int()
 functions).

 We also get:
   - no type proliferation.
   - no arguments about what exception types are needed, or the hierarchy
 to put them in.

 Seems like a win to me.
Then you'll always catch all exceptions. If the error code doesn't match, you need to rethrow the exception. Or make a language change that allows catching based on the error code.

--
/Jacob Carlborg
Oct 06 2014
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/6/14, 11:13 AM, Jacob Carlborg wrote:
 Then you'll always catch all exceptions. If error code doesn't match you
 need to rethrow the exception. Or make a language change that allows to
 catch based on the error code.
Either solution is fine here because that's the rare case. -- Andrei
Oct 06 2014
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 10/1/14, 4:00 PM, Sean Kelly wrote:
 On Wednesday, 1 October 2014 at 22:42:27 UTC, Jeremy Powers via
 Digitalmars-d wrote:
 If you see "throws Exception" in java code chances are the code is
 broken,
 same as if you see "catch (Exception" - this tells you nothing about the
 exception that happened, and hence you can do nothing with it. So you
 either swallow (and silently break in many cases) or rethrow (and
 break for things you needn't have).  As mentioned, the standard way to
 avoid this is to have a parent exception type appropriate to the
 abstraction in the API, and throw subtypes in the implementation.
 Among other things, this means you can change the implementation to
 throw different exceptions without breaking any users (who will
 already be catching the parent exception).
....while in Phobos, most of the subtyped exceptions were eliminated a while back in favor of just always throwing Exception.
My recollection is that was only talked about. Anyhow, one thing is clear - as of now there are no clear idioms and successful techniques for handling errors with exceptions (including the use of subtyping). -- Andrei
Oct 01 2014
parent Jacob Carlborg <doob me.com> writes:
On 02/10/14 01:19, Andrei Alexandrescu wrote:

 My recollection is that was only talked about. Anyhow, one thing is
 clear - as of now there are no clear idioms and successful techniques
 for handling errors with exceptions (including the use of subtyping). --
 Andrei
I think most error handling is done with "enforce", which will, by default, throw an instance of Exception. I really, really hate this. I think Exception should be an abstract class or an interface.

--
/Jacob Carlborg
Oct 01 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-29 17:13, Steven Schveighoffer wrote:

 Is it? I can think of cases where it's programmer error, and cases where
 it's user error.
When would it be a user error? -- /Jacob Carlborg
Oct 01 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/1/14 3:27 PM, Jacob Carlborg wrote:
 On 2014-09-29 17:13, Steven Schveighoffer wrote:

 Is it? I can think of cases where it's programmer error, and cases where
 it's user error.
When would it be a user error?
./progThatExpectsFilename "" -Steve
Oct 01 2014
parent reply Jacob Carlborg <doob me.com> writes:
On 01/10/14 21:57, Steven Schveighoffer wrote:

 ./progThatExpectsFilename ""

 -Steve
It's the developer's responsibility to make sure a value like that never reaches the "File" constructor. That is, the developer of the "progThatExpectsFilename" application that uses "File", not the developer of "File".

Although, I don't see why you shouldn't be able to pass an empty string to "File". You'll just get an exception, "cannot open file ''". This is exactly what happens in Bash:

$ echo "asd" > ""
-bash: : No such file or directory

--
/Jacob Carlborg
Oct 01 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/2/14 2:45 AM, Jacob Carlborg wrote:
 On 01/10/14 21:57, Steven Schveighoffer wrote:

 ./progThatExpectsFilename ""

 -Steve
It's the developer's responsibility to make sure a value like that never reaches the "File" constructor. That is, the developer of the "progThatExpectsFilename" application that uses "File". Not the developer of "File".
Then what is the point of File's constructor throwing an exception? This means File is checking the file name, and I also have to check the file name.
 Although, I don't see why you shouldn't be able to pass an empty string
 to "File". You'll just get an exception, "cannot open file ''".
Right, that is fine. If you catch the exception and handle the result with a nice message to the user, that is exactly what should happen.

If you forget to catch the exception, this is a bug, and the program should crash with an appropriate stack trace.

Note 2 things:

1. You should NOT depend on the stack trace/Exception to be your error message.
2. File's ctor has NO IDEA whether throwing an exception is going to be a bug or a handled error condition.

I would say, as soon as an exception is thrown and is not caught by user code, for all intents and purposes it becomes an Error.

-Steve
Oct 04 2014
parent Jacob Carlborg <doob me.com> writes:
On 2014-10-04 12:29, Steven Schveighoffer wrote:

 Then what is the point of File's constructor throwing an exception? This
 means, File is checking the filename, and I have to also check the file
 name.
"File" should check if the file exists, can be opened and similar things. These are things that can change from outside your application during a function call between your application and the underlying system. But, if "File" for some reason doesn't accept null as a valid value then that's something the developer of the application that uses "File" is responsible to check. It's not like the value can suddenly change without you knowing it.
 Right, that is fine. If you catch the exception and handle the result
 with a nice message to the user, that is exactly what should happen.

 If you forget to catch the exception, this is a bug, and the program
 should crash with an appropriate stack trace.
Yes, I agree.
 Note 2 things:

 1. You should NOT depend on the stack trace/Exception to be your error
 message.
Agree.
 2. File's ctor has NO IDEA whether throwing an exception is going to be
 a bug or a handled error condition.
Yes, but it's the responsibility of "File" to properly document what exceptions it can throw and under which conditions. Then it's up to the developer who uses "File" to handle these exceptions appropriately.
 I would say, as soon as an exception is thrown and is not caught by user
 code, for all intents and purposes, it becomes an Error.
Sure. In theory you can have another library that handles these exceptions. Think of something like a web framework that turns most exceptions into 500 responses. -- /Jacob Carlborg
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
More carefully design the interfaces if programmer error and input error are conflated.
Oct 04 2014
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 10/4/14 4:47 AM, Walter Bright wrote:
 On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
More carefully design the interfaces if programmer error and input error are conflated.
You mean more carefully design File's ctor? How so? -Steve
Oct 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 3:30 AM, Steven Schveighoffer wrote:
 On 10/4/14 4:47 AM, Walter Bright wrote:
 On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
More carefully design the interfaces if programmer error and input error are conflated.
You mean more carefully design File's ctor? How so?
You can start with deciding if random binary data passed as a "file name" is legal input to the ctor or not.
Oct 04 2014
next sibling parent Tobias Müller <troplin bluewin.ch> writes:
Walter Bright <newshound2 digitalmars.com> wrote:
 On 10/4/2014 3:30 AM, Steven Schveighoffer wrote:
 On 10/4/14 4:47 AM, Walter Bright wrote:
 On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
More carefully design the interfaces if programmer error and input error are conflated.
You mean more carefully design File's ctor? How so?
You can start with deciding if random binary data passed as a "file name" is legal input to the ctor or not.
I think it helps to see contracts as an informal extension to the type system. Ideally, the function signature would not allow invalid input at all. In practice, that's not always possible, and contracts are a less formal way to specify the function signature. But conceptually they are still part of the signature.

And of course (as with normal contract-less functions) you are always allowed to provide convenience functions with extended input validation. Those should then be based on the strict version.

For example, take a constructor for an XML document class. It could take the (unvalidated) file path as a string parameter. Or a file object (validated that it exists). Or a stream (validated that it exists and is already opened). You can provide all three for convenience, but I think it's still good design to provide three _different_ functions. Contracts are not a magical tool that provides all three variants in one function depending somehow on the needs of the caller.

Tobi
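A minimal D sketch of that layering; XmlDocument and its parsing internals are hypothetical:

~~~
import std.stdio : File;

class XmlDocument
{
    // strict version: the stream is already validated (open and readable)
    this(File stream)
    {
        // ... parse from the open stream ...
    }

    // convenience: validates the path by opening it, then delegates
    this(string path)
    {
        this(File(path));   // File's ctor throws if the path cannot be opened
    }

    // convenience: no I/O validation at all, parse an in-memory string
    static XmlDocument fromString(string xml)
    {
        // ... parse the string directly ...
        return null;   // placeholder for the sketch
    }
}
~~~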
Oct 05 2014
prev sibling parent reply Marco Leise <Marco.Leise gmx.de> writes:
On Sat, 04 Oct 2014 13:12:43 -0700,
Walter Bright <newshound2 digitalmars.com> wrote:

 On 10/4/2014 3:30 AM, Steven Schveighoffer wrote:
 On 10/4/14 4:47 AM, Walter Bright wrote:
 On 9/29/2014 8:13 AM, Steven Schveighoffer wrote:
 I can think of cases where it's programmer error, and cases where it's
 user error.
 More carefully design the interfaces if programmer error and input error
 are conflated.
You mean more carefully design File's ctor? How so?
 
 You can start with deciding if random binary data passed as a "file name" is
 legal input to the ctor or not.
In POSIX speak [1] a file name consisting only of A-Za-z0-9._- is a "character string" (a portable file name), whereas anything not representable in all locales is just a "string". Locales' charsets are required to be able to represent A-Za-z0-9._- but may use a different mapping than ASCII for that. Only the slash '/' must have a fixed value of 0x2F.

From that I conclude that File() should open files by ubyte[] exclusively to be POSIX compliant.

This is the stuff that's frustrating me so much about POSIX. It practically makes it impossible to write correct code. Even Qt and Gtk+ settled for the system locale and UTF-8 respectively as the assumed I/O charset for all file names, although each file system could be mounted in a different charset, e.g. CD-ROMs in ISO charset. Windows does much better by offering Unicode versions on top of the "ANSI" functions. The only fix I see for POSIX is to deprecate all other locales except UTF-8 at some point.

[1] http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html#tag_03_267

-- Marco
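As an illustration, a sketch of what opening by raw bytes could look like on POSIX; openRaw is a hypothetical helper, not an existing Phobos API:

~~~
import core.sys.posix.fcntl : open, O_RDONLY;

// open a file whose name is raw bytes, with no assumption that the name
// is valid UTF-8 in any locale
int openRaw(const(ubyte)[] name)
{
    auto z = name ~ cast(ubyte) 0;   // NUL-terminated copy
    return open(cast(const(char)*) z.ptr, O_RDONLY);
}
~~~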
Oct 05 2014
parent reply ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sun, 5 Oct 2014 17:44:31 +0200
Marco Leise via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 From that I conclude, that File() should open files by ubyte[]
 exclusively to be POSIX compliant.
and yet there is currently no way in hell to open a file with a hardcoded non-utf8 name without ugly hacks using File("..."). and ubyte will not help here, as D has no way to represent a non-utf8 string literal without unmaintainable shit like "\xNN".

and speaking about utf-8: for strings we at least have a hack, and for shebangs... nothing. nada. ничего. locale settings? other encodings? who needs that, there CAN'T be non-utf8 shebangs. OS interoperability? it's overhyped, there are only two kinds of OSes: those which are D-compatible and bad. changing your locale to non-utf8 magically turns your OS into a bad one, and D will not try to interoperate with it anymore.
Oct 05 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
On Sun, 5 Oct 2014 19:04:23 +0300,
ketmar via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 On Sun, 5 Oct 2014 17:44:31 +0200
 Marco Leise via Digitalmars-d <digitalmars-d puremagic.com> wrote:
 
  From that I conclude that File() should open files by ubyte[]
  exclusively to be POSIX compliant.
 and yet there is currently no way in hell to open a file with a hardcoded non-utf8 name without ugly hacks using File("..."). and ubyte will not help here, as D has no way to represent a non-utf8 string literal without unmaintainable shit like "\xNN".
 
 and speaking about utf-8: for strings we at least have a hack, and for shebangs... nothing. nada. ничего. locale settings? other encodings? who needs that, there CAN'T be non-utf8 shebangs. OS interoperability? it's overhyped, there are only two kinds of OSes: those which are D-compatible and bad. changing your locale to non-utf8 magically turns your OS into a bad one, and D will not try to interoperate with it anymore.
There comes the day that you have to let Sputnik go and board the ISS. Still, I and others agree with you that Phobos should not assume Unicode locales everywhere, no need to rant.

What I find difficult is to define just where in std.stdio the locale transcoding has to happen. Java probably just wraps a stream in a stream in a stream as usual, but in std.stdio it is just a File struct that more or less directly writes to the file descriptor. So my guess is, the transcoding has to happen at an earlier stage.

Next, when output is NOT a terminal you typically want to output with no transcoding, or set it up depending on your needs. Personally I want transcoding to the system locale when stdout/stderr is a terminal and UTF-8 for everything else (i.e. pipes and files). Those would be my defaults.

-- Marco
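A sketch of how those defaults could be probed; this is a proposed policy, not current std.stdio behavior:

~~~
import core.sys.posix.unistd : isatty;
import core.stdc.locale : setlocale, LC_CTYPE;
import std.stdio : stdout;
import std.string : fromStringz;

// terminal output: transcode to the system locale; pipes and files: UTF-8
bool transcodeToLocale()
{
    return isatty(stdout.fileno) != 0;
}

// the name of the environment's locale, e.g. "en_US.UTF-8"
string localeName()
{
    setlocale(LC_CTYPE, "");   // adopt the environment's locale
    return setlocale(LC_CTYPE, null).fromStringz.idup;
}
~~~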
Oct 05 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Sat, 27 Sep 2014 16:15:40 -0700
Walter Bright via Digitalmars-d <digitalmars-d puremagic.com> wrote:

 After all, what would you think of a compiler that spewed out
 messages like this:
 
     > dmd test.d
     test.d(15) Error: missing } thrown from dmd/src/parse.c(283)
 
 ?
"wow, that's cool! one more pleasing feature!"
Sep 27 2014
prev sibling next sibling parent reply luka8088 <luka8088 owave.net> writes:
On 28.9.2014. 1:15, Walter Bright wrote:
 This issue comes up over and over, in various guises. I feel like
 Yosemite Sam here:

      https://www.youtube.com/watch?v=hBhlQgvHmQ0

 In that vein, Exceptions are for either being able to recover from
 input/environmental errors, or report them to the user of the application.

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.

 assert()s and contracts are for debugging programs.

 After all, what would you think of a compiler that spewed out messages
 like this:

     > dmd test.d
     test.d(15) Error: missing } thrown from dmd/src/parse.c(283)

 ?

 See:

      https://issues.dlang.org/show_bug.cgi?id=13543

 As for the programmer wanting to know where the message "missing }" came
 from,
      grep -r dmd/src/*.c "missing }"

 works nicely. I do that sort of thing all the time. It really isn't a
 problem.
We had this issue at work (we are working with PHP). We outputted a stack trace for both exceptions and asserts, but the lines that should be addressed are not always so obvious. I found a solution and it works great for us.

All library code is marked appropriately, so when a stack trace is outputted it shadows out (in gray) all the lines in library code and points out the first non-library line from the top of the stack. 95% of the time that is the line the programmer should look into. The other 5% is when it shows the line where the programmer is forwarding a call to the library, which turns out to be OK, as it is much more comprehensible than the entire stack. One thing worth mentioning is that juniors have a much easier time understanding which lines concern them, and from that I can only conclude that such an approach is more intuitive. Marking is done on the namespace level so it can be easily disabled for an entire namespace.

I think outputting a stack trace for asserts is a must because of that 5%. And for exceptions I agree completely with your arguments and I think that there is no need for a stack trace.

From my experience this has been a good approach and I think it is worth considering.
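A sketch of the classification step in D; the "library" markers are hypothetical and would be project-specific:

~~~
import std.algorithm : canFind;
import std.stdio : writeln;

// treat frames from the runtime and standard library as "library" code
bool isLibraryFrame(const(char)[] frame)
{
    return frame.canFind("std.") || frame.canFind("core.");
}

// dim library frames, highlight the first application frame
void printTrace(const(char)[][] frames)
{
    bool highlighted = false;
    foreach (f; frames)
    {
        if (isLibraryFrame(f))
            writeln("\x1b[90m", f, "\x1b[0m");   // gray
        else if (!highlighted)
        {
            writeln("\x1b[1m", f, "\x1b[0m");    // bold: look here first
            highlighted = true;
        }
        else
            writeln(f);
    }
}
~~~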
Sep 28 2014
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
luka8088:

 All library code is marked appropriately, so when a stack trace 
 is outputted it shadows out (in gray) all the lines in 
 library code and points out the first non-library line from the top 
 of the stack. 95% of the time that is the line the 
 programmer should look into. The other 5% is when it shows 
 the line where the programmer is forwarding a call to the library, 
 which turns out to be OK, as it is much more 
 comprehensible than the entire stack. One thing worth mentioning 
 is that juniors have a much easier time understanding which lines 
 concern them, and from that I can only conclude that such an 
 approach is more intuitive.
This looks like a little enhancement request: colourize the library/runtime parts of the stack trace differently from the user code.
 And for exceptions I agree completely with your arguments and I 
 think that there is no need for stack.
I think Walter is not suggesting to remove the stack trace for exceptions. Bye, bearophile
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 2:28 AM, bearophile wrote:
 And for exceptions I agree completely with your arguments and I think that
 there is no need for stack.
I think Walter is not suggesting to remove the stack trace for exceptions.
I suggest removal of stack trace for exceptions, but leaving them in for asserts. Asserts are a deliberately designed debugging tool. Exceptions are not.
Sep 28 2014
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 9/28/14, 10:36 AM, Walter Bright wrote:
 On 9/28/2014 2:28 AM, bearophile wrote:
 And for exceptions I agree completely with your arguments and I think
 that
 there is no need for stack.
I think Walter is not suggesting to remove the stack trace for exceptions.
I suggest removal of stack trace for exceptions, but leaving them in for asserts. Asserts are a deliberately designed debugging tool. Exceptions are not.
I'm fine with that philosophy, with the note that it's customary nowadays to inflict things like the stack trace on the user. -- Andrei
Sep 28 2014
prev sibling next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Walter Bright:

 I suggest removal of stack trace for exceptions, but leaving 
 them in for asserts.
I suggest keeping the stack trace for both cases, and improving it with colors :-) Another possibility is to keep the stack trace for exceptions in non-release mode only.
 Asserts are a deliberately designed debugging tool. Exceptions 
 are not.
Exceptions are often used to help debugging... We have even allowed exceptions inside D contracts (but I don't know why). Bye, bearophile
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 11:25 AM, bearophile wrote:
 Exceptions are often used to help debugging...
https://www.youtube.com/watch?v=hBhlQgvHmQ0
Sep 28 2014
parent luka8088 <luka8088 owave.net> writes:
On 28.9.2014. 21:32, Walter Bright wrote:
 On 9/28/2014 11:25 AM, bearophile wrote:
 Exceptions are often used to help debugging...
https://www.youtube.com/watch?v=hBhlQgvHmQ0
Example exception messages:

    Unable to connect to database
    Invalid argument count
    Invalid network package format

All these messages do not require a stack trace, as they do not require code fixes; they indicate an issue outside the program itself. If a stack trace is required, then an assert should have been used instead.

Or to put it better: can anyone give an example of an exception that would require a stack trace?
Sep 28 2014
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2014-09-28 19:36, Walter Bright wrote:

 I suggest removal of stack trace for exceptions, but leaving them in for
 asserts.
If you don't like the stack trace, just wrap the "main" function in a try-catch block, catch all exceptions and print the error message. -- /Jacob Carlborg
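A minimal sketch of that wrapper; run() is a hypothetical stand-in for the real program logic:

~~~
import std.stdio : stderr;

// hypothetical: the real program logic, which may throw on bad input
int run(string[] args)
{
    import std.file : readText;
    auto text = readText(args.length > 1 ? args[1] : "");   // throws FileException on failure
    return 0;
}

int main(string[] args)
{
    try
    {
        return run(args);
    }
    catch (Exception e)
    {
        stderr.writeln("error: ", e.msg);   // the message only, no stack trace
        return 1;
    }
}
~~~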
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 12:33 PM, Jacob Carlborg wrote:
 On 2014-09-28 19:36, Walter Bright wrote:

 I suggest removal of stack trace for exceptions, but leaving them in for
 asserts.
 If you don't like the stack trace, just wrap the "main" function in a try-catch block, catch all exceptions and print the error message.
That's what the runtime that calls main() is supposed to do.
Sep 28 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
On Sun, 28 Sep 2014 13:14:43 -0700,
Walter Bright <newshound2 digitalmars.com> wrote:

 On 9/28/2014 12:33 PM, Jacob Carlborg wrote:
 On 2014-09-28 19:36, Walter Bright wrote:

 I suggest removal of stack trace for exceptions, but leaving them in for
 asserts.
If you don't like the stack trace, just wrap the "main" function in a try-catch block, catch all exceptions and print the error message.
That's what the runtime that calls main() is supposed to do.
Guys, a druntime flag could settle matters in 10 minutes. But this topic is clearly about the right school of thought.

I use contracts to check for logical errors, like when an argument must not be null or a value must be less than the length of some data structure. I use exceptions to check for invalid input and the return values of external libraries. External libraries can be anything from my own code in the same project to OpenGL from vendor XY. They could error out on valid input (if we leave out of memory aside for now) because of bugs or incorrect assumptions in the implementation. If that happens and all I get is "Library XY Exception: code 0x13533939 (Invalid argument).", I'm at a loss as to where the library might have had a hiccup. Did some function internally handle a uint as an int and wrap around?

Maybe with std.logger we will see single-line messages on the terminal and multi-line exception traces in the logs (which by default print to stderr as well). And then this discussion can be resolved.

-- Marco
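A sketch of that division of labor; nth() and checkLibraryResult() are hypothetical:

~~~
import std.conv : to;
import std.exception : enforce;

// a logical error: the caller must pass a valid index, checked with a contract
int nth(int[] data, size_t i)
in { assert(i < data.length, "index out of range: bug in the caller"); }
do
{
    return data[i];
}

// invalid input / an external library's return value, checked with an exception
void checkLibraryResult(int code)
{
    enforce(code == 0, "Library XY call failed with code " ~ code.to!string);
}
~~~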
Oct 01 2014
prev sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 17:36:14 UTC, Walter Bright wrote:
 On 9/28/2014 2:28 AM, bearophile wrote:
 And for exceptions I agree completely with your arguments and 
 I think that
 there is no need for stack.
I think Walter is not suggesting to remove the stack trace for exceptions.
I suggest removal of stack trace for exceptions, but leaving them in for asserts. Asserts are a deliberately designed debugging tool. Exceptions are not.
Fair. So we generate traces for Errors but not Exceptions.
Sep 28 2014
prev sibling next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 9/27/14, 8:15 PM, Walter Bright wrote:
 This issue comes up over and over, in various guises. I feel like
 Yosemite Sam here:

      https://www.youtube.com/watch?v=hBhlQgvHmQ0

 In that vein, Exceptions are for either being able to recover from
 input/environmental errors, or report them to the user of the application.

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.

 assert()s and contracts are for debugging programs.
For me, assert is useless.

We are developing a language using LLVM as our backend. If you give LLVM something it doesn't like, you get something like this:

~~~
Assertion failed: (S1->getType() == S2->getType() && "Cannot create binary
operator with two operands of differing type!"), function Create, file
Instructions.cpp, line 1850.

Abort trap: 6
~~~

That is what the user gets when there is a bug in the compiler, at least when we are generating invalid LLVM code. And that's one of the good paths, if you compiled LLVM with assertions, because otherwise I guess it's undefined behaviour.

What I'd like to do, as a compiler, is to catch those errors and tell the user: "You've found a bug in the app, could you please report it at this URL? Thank you.". We can't: the assert is there and we can't change it.

Now, this is when you interface with C++/C code. But inside our language code we always use exceptions so that programmers can choose what to do in case of an error. With assert you lose that possibility.

Raising an exception is costly, but that should happen in exceptional cases. Installing an exception handler is cost-free, so I don't see why there is a need for a less powerful construct like assert.
Sep 28 2014
next sibling parent "Xiao Xie" <xiao.xie 163.com> writes:
On Sunday, 28 September 2014 at 15:10:26 UTC, Ary Borenszweig 
wrote:
 What I'd like to do, as a compiler, is to catch those errors 
 and tell the user: "You've found a bug in the app, could you 
 please report it in this URL? Thank you.". We can't: the assert 
 is there and we can't change it.
Why is a SIGABRT handler not working for your use case? Print and exit?
Sep 28 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 8:10 AM, Ary Borenszweig wrote:
 For me, assert is useless.

 We are developing a language using LLVM as our backend. If you give LLVM
 something it doesn't like, you get something like this:

 ~~~
 Assertion failed: (S1->getType() == S2->getType() && "Cannot create binary
 operator with two operands of differing type!"), function Create, file
 Instructions.cpp, line 1850.

 Abort trap: 6
 ~~~

 That is what the user gets when there is a bug in the compiler, at least when we
 are generating invalid LLVM code. And that's one of the good paths, if you
 compiled LLVM with assertions, because otherwise I guess it's undefined behaviour.

 What I'd like to do, as a compiler, is to catch those errors and tell the user:
 "You've found a bug in the app, could you please report it in this URL? Thank
 you.". We can't: the assert is there and we can't change it.
You can hook D's assert and do what you want with it.
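druntime exposes that hook as core.exception.setAssertHandler; a minimal sketch, with a hypothetical report URL:

~~~
import core.exception : setAssertHandler;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

shared static this()
{
    setAssertHandler(function(string file, size_t line, string msg) nothrow {
        fprintf(stderr,
            "You've found a bug, please report it at https://example.org/bugs\n%.*s(%llu): %.*s\n",
            cast(int) file.length, file.ptr, cast(ulong) line,
            cast(int) msg.length, msg.ptr);
        abort();   // the handler must not return normally
    });
}
~~~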
 Now, this is when you interface with C++/C code. But inside our language code we
 always use exceptions so that programmers can choose what to do in case of an
 error. With assert you lose that possibility.
If you want to use Exceptions for debugging in your code, I won't try and stop you. But using them for debugging in official Phobos I strongly object to.
 Installing an exception handler is cost-free,
Take a look at the assembler dump from std.file.copy() that I posted in the other thread.
 so I don't see why there is a need
 for a less powerful construct like assert.
Exceptions are meant for RECOVERABLE errors. If you're using them instead of assert for logic bugs, you're looking at undefined behavior. Logic bugs are not recoverable.
Sep 28 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 17:40:49 UTC, Walter Bright wrote:
 You can hook D's assert and do what you want with it.
With the caveat that you must finish by either exiting the app or throwing an exception, since the compiler doesn't generate a stack frame that can be returned from.
 Exceptions are meant for RECOVERABLE errors. If you're using 
 them instead of assert for logic bugs, you're looking at 
 undefined behavior. Logic bugs are not recoverable.
In a multithreaded program, does this mean that the thread must be terminated or the entire process? In a multi-user system, does this mean the transaction or the entire process? The scope of a logic bug can be known to be quite limited. Remember my earlier point about Erlang, where a "process" there is actually just a logical thread in the VM.
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 12:38 PM, Sean Kelly wrote:
 Exceptions are meant for RECOVERABLE errors. If you're using them instead of
 assert for logic bugs, you're looking at undefined behavior. Logic bugs are
 not recoverable.
In a multithreaded program, does this mean that the thread must be terminated or the entire process? In a multi-user system, does this mean the transaction or the entire process? The scope of a logic bug can be known to be quite limited. Remember my earlier point about Erlang, where a "process" there is actually just a logical thread in the VM.
This has been asked many times before. If the threads share memory, the only robust choice is to terminate all the threads and the application. If the thread is in another process, where the memory is not shared, then terminating and possibly restarting that process is quite acceptable.
 The scope of a logic bug can be known to be quite limited.
If you know about the bug, then you'd have fixed it already instead of inserting recovery code for unknown problems. I can't really accept that one has "unknown bugs of known scope".
Sep 28 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 20:31:03 UTC, Walter Bright wrote:
 If the threads share memory, the only robust choice is to 
 terminate all the threads and the application.

 If the thread is in another process, where the memory is not 
 shared, then terminating and possibly restarting that process 
 is quite acceptable.

 The scope of a logic bug can be known to be quite limited.
If you know about the bug, then you'd have fixed it already instead of inserting recovery code for unknown problems. I can't really accept that one has "unknown bugs of known scope".
Well, say you're using SafeD or some other system where you know that memory corruption is not possible (pure functional programming, for example). In this case, if you know what data a particular execution flow touches, you know the scope of the potential damage. And if the data touched is all either shared but read-only or generated during the processing of the request, you can be reasonably certain that nothing outside the scope of the transaction has been adversely affected at all.
Sep 28 2014
next sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
29-Sep-2014 00:50, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 20:31:03 UTC, Walter Bright wrote:
 If the threads share memory, the only robust choice is to terminate
 all the threads and the application.

 If the thread is in another process, where the memory is not shared,
 then terminating and possibly restarting that process is quite
 acceptable.

 The scope of a logic bug can be known to be quite limited.
If you know about the bug, then you'd have fixed it already instead of inserting recovery code for unknown problems. I can't really accept that one has "unknown bugs of known scope".
Well, say you're using SafeD or some other system where you know that memory corruption is not possible (pure functional programming, for example).
 In this case, if you know what data a particular execution
 flow touches, you know the scope of the potential damage.  And if the
 data touched is all either shared but read-only or generated during the
 processing of the request, you can be reasonably certain that nothing
 outside the scope of the transaction has been adversely affected at all.
Not possible / highly unlikely (i.e. a bug in the VM or said system).

But otherwise agreed: dropping the whole process is not always a good idea, or it easily becomes a DoS attack vector in a public service. -- Dmitry Olshansky
Sep 28 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 21:16:51 UTC, Dmitry Olshansky 
wrote:
 But otherwise agreed, dropping the whole process is not always 
 a good idea or it easily becomes a DoS attack vector in a 
 public service.
What I really want to work towards is the Erlang model where an app is a web of communicating processes (though Erlang processes are effectively equivalent to D objects). Then, killing a process on an error is absolutely correct. It doesn't affect the resilience of the system. But if these processes are actually threads or fibers with memory protection, things get a lot more complicated. I really need to spend some time investigating how modern Linux systems handle tons of processes running on them and try to find a happy medium.
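A sketch of that direction with std.concurrency as it exists today; the worker's body is left open:

~~~
import std.concurrency;
import std.variant : Variant;

// a task that may die from an uncaught throwable; only its thread ends
void worker()
{
    // ... receive and process messages ...
}

// restart the worker whenever it terminates, Erlang-supervisor style
void supervise()
{
    auto tid = spawnLinked(&worker);
    for (;;)
    {
        try
            receive((Variant _) { /* route normal traffic here */ });
        catch (LinkTerminated e)
            tid = spawnLinked(&worker);
    }
}
~~~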
Sep 28 2014
parent Dmitry Olshansky <dmitry.olsh gmail.com> writes:
29-Sep-2014 01:21, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 21:16:51 UTC, Dmitry Olshansky wrote:
 But otherwise agreed, dropping the whole process is not always a good
 idea or it easily becomes a DoS attack vector in a public service.
What I really want to work towards is the Erlang model where an app is a web of communicating processes (though Erlang processes are effectively equivalent to D objects). Then, killing a process on an error is absolutely correct. It doesn't affect the resilience of the system. But if these processes are actually threads or fibers with memory protection, things get a lot more complicated.
One thing I really appreciated about the JVM is exactly the memory safety, with the ability to handle this pretty much in the same way Erlang does.
 I really need to spend
 some time investigating how modern Linux systems handle tons of
 processes running on them and try to find a happy medium.
Keep us posted. -- Dmitry Olshansky
Sep 28 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 1:50 PM, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 20:31:03 UTC, Walter Bright wrote:
 The scope of a logic bug can be known to be quite limited.
If you know about the bug, then you'd have fixed it already instead of inserting recovery code for unknown problems. I can't really accept that one has "unknown bugs of known scope".
Well, say you're using SafeD or some other system where you know that memory corruption is not possible (pure functional programming, for example). In this case, if you know what data a particular execution flow touches, you know the scope of the potential damage. And if the data touched is all either shared but read-only or generated during the processing of the request, you can be reasonably certain that nothing outside the scope of the transaction has been adversely affected at all.
You may know the error is not a memory corrupting one, but that doesn't mean there aren't non-corrupting changes to the shared memory that would result in additional unexpected failures. Also, the logic bug may be the result of an @system part of the code going wrong. You do not know, because YOU DO NOT KNOW the cause of the error. And if you knew the cause, you wouldn't need a stack trace to debug it anyway. I.e. despite being 'safe', it does not imply the program is in a predictable or anticipated state.

I can't get behind the notion of "reasonably certain". I certainly would not use such techniques in any code that needs to be robust, and we should not be using such cowboy techniques in Phobos nor officially advocate their use.
Sep 28 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Sunday, 28 September 2014 at 22:00:24 UTC, Walter Bright wrote:
 I can't get behind the notion of "reasonably certain". I 
 certainly would not use such techniques in any code that needs 
 to be robust, and we should not be using such cowboy techniques 
 in Phobos nor officially advocate their use.
I think it's a fair stance not to advocate this approach. But as it is I spend a good portion of my time diagnosing bugs in production systems based entirely on archived log data, and analyzing the potential impact on the system to determine the importance of a hot fix. The industry seems to be moving towards lowering the barrier between engineering and production code (look at what Netflix has done for example), and some of this comes from an isolation model akin to the Erlang approach, but the typical case is still that hot fixing code is incredibly expensive and so you don't want to do it if it isn't necessary. For me, the correct approach may simply be to eschew assert() in favor of enforce() in some cases. But the direction I want to be headed is the one you're encouraging. I simply don't know if it's practical from a performance perspective. This is still developing territory.
Sep 28 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/28/2014 6:17 PM, Sean Kelly wrote:
 On Sunday, 28 September 2014 at 22:00:24 UTC, Walter Bright wrote:
 I can't get behind the notion of "reasonably certain". I certainly would not
 use such techniques in any code that needs to be robust, and we should not be
 using such cowboy techniques in Phobos nor officially advocate their use.
I think it's a fair stance not to advocate this approach. But as it is I spend a good portion of my time diagnosing bugs in production systems based entirely on archived log data, and analyzing the potential impact on the system to determine the importance of a hot fix. The industry seems to be moving towards lowering the barrier between engineering and production code (look at what Netflix has done for example), and some of this comes from an isolation model akin to the Erlang approach, but the typical case is still that hot fixing code is incredibly expensive and so you don't want to do it if it isn't necessary. For me, the correct approach may simply be to eschew assert() in favor of enforce() in some cases. But the direction I want to be headed is the one you're encouraging. I simply don't know if it's practical from a performance perspective. This is still developing territory.
You've clearly got a tough job to do, and I understand you're doing the best you can with it. I know I'm hardcore and uncompromising on this issue, but that's where I came from (the aviation industry). I know what works (airplanes are incredibly safe) and what doesn't work (Toyota's approach was in the news not too long ago). Deepwater Horizon and Fukushima are also prime examples of not dealing properly with modest failures that cascaded into disaster.
Sep 28 2014
parent reply "Kagamin" <spam here.lot> writes:
On Monday, 29 September 2014 at 03:04:11 UTC, Walter Bright wrote:
 You've clearly got a tough job to do, and I understand you're 
 doing the best you can with it. I know I'm hardcore and 
 uncompromising on this issue, but that's where I came from (the 
 aviation industry).

 I know what works (airplanes are incredibly safe) and what 
 doesn't work (Toyota's approach was in the news not too long 
 ago). Deepwater Horizon and Fukushima are also prime examples 
 of not dealing properly with modest failures that cascaded into 
 disaster.
Do you interpret airplane safety right? As I understand it, airplanes are safe exactly because they recover from assert failures and continue operation. Your suggestion is: when seat 2A creaks, shut down the whole airplane. In reality airplanes continue to operate until there's zero physical resource to operate.

Fukushima caused a disaster because it didn't try to handle failure. But this is your idea that one can do nothing meaningful on failure, and Fukushima did just that: nothing.

Termination of the process is the safe default, especially in the case of client software, but servers should probably terminate the failed request, gracefully clean up and continue operation, like airplanes.
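A sketch of that per-request policy; Request and its handling are hypothetical:

~~~
import std.stdio : writeln;

struct Request { string path; }   // hypothetical request type

void handle(Request req)
{
    // an input error: report it to the client, it is not a bug in the server
    if (req.path.length == 0)
        throw new Exception("empty path");
}

void serve(Request req)
{
    try
        handle(req);
    catch (Exception e)
        writeln("500 ", e.msg);   // fail the request, keep the process alive
    // Errors (logic bugs) are deliberately not caught: let the process die
}
~~~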
Oct 03 2014
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 03/10/14 13:27, Kagamin wrote:

 Do you interpret airplane safety right? As I understand, airplanes are
 safe exactly because they recover from assert failures and continue
 operation. Your suggestion is when seat 2A creaks, shut down the whole
 airplane. In reality airplanes continue to operate until there's zero
 physical resource to operate.
I have no idea how an airplane works, but I think Walter usually says they have at least three backup systems. If one system fails, shut it down and switch to the backup. -- /Jacob Carlborg
Oct 03 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 3 October 2014 at 12:16:30 UTC, Jacob Carlborg wrote:
 On 03/10/14 13:27, Kagamin wrote:

 Do you interpret airplane safety right? As I understand, 
 airplanes are
 safe exactly because they recover from assert failures and 
 continue
 operation. Your suggestion is when seat 2A creaks, shut down 
 the whole airplane. In reality airplanes continue to operate 
 until there's zero physical resource to operate.
I have no idea of airplane works but I think Walter usual says they have at least three backup systems. If one system fails, shut it down and switch to the backup.
My point, and I think Kagamin's as well, is that the entire plane is a system and the redundant internals are subsystems. They may not share memory, but they are wired to the same sensors, servos, displays, etc. Thus the point about shutting down the entire plane as a result of a small failure is fair.
Oct 03 2014
next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:
 My point, and I think Kagamin's as well, is that the entire 
 plane is a system and the redundant internals are subsystems.  
 They may not share memory, but they are wired to the same 
 sensors, servos, displays, etc.  Thus the point about shutting 
 down the entire plane as a result of a small failure is fair.
An airplane is a bad analogy for a regular server. You have redundant backups everywhere and you are not allowed to take off at the smallest sign of deviation from normal operation. You will never see D in a fighter jet (and you can probably not fly it without the controller in operation either; your only choice is to send the plane into the ocean and escape in a parachute).

I think Walter forgets that you ensure integrity of a complex system of servers by utilizing a rock-solid, proven transaction database/task-scheduler for handling all critical information. If that fails, you probably should shut down everything, roll back to the last backup and reboot.

But you don't shut down a restaurant because the waiter forgets to write down an order every once in a while; you shut it down if the kitchen is unsuitable for preparing food. After sanitizing the kitchen you open the restaurant again. You also don't fire the sloppy waiter until you have a better waiter at hand…
Oct 03 2014
next sibling parent reply "Piotrek" <p nonexistent.pl> writes:
On Friday, 3 October 2014 at 16:11:00 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:
 My point, and I think Kagamin's as well, is that the entire 
 plane is a system and the redundant internals are subsystems.  
 They may not share memory, but they are wired to the same 
 sensors, servos, displays, etc.  Thus the point about shutting 
 down the entire plane as a result of a small failure is fair.
An airplane is a bad analogy for a regular server. You have redundant backups everywhere and you are not allowed to take off at the smallest sign of deviation from normal operation.
That depends on design (logic). Ever heard of this? http://www.reddit.com/r/programming/comments/1ax0oa/how_kdes_1500_git_repositories_almost_were_lost/
 I think Walter forgets that you ensure integrity of a complex 
 system of servers by utilizing a rock solid proven transaction 
 database/task-scheduler for handling all critical information. 
 If that fails, you probably should shut down everything, roll 
 back to the last backup and reboot.
I agree with Walter wholeheartedly. If I understand him correctly, he speaks about the distinction between program logic errors and input errors, not about recovery strategies/decisions.
 But you don't shut down a restaurant because the waiter forgets 
 to write down an order every once in a while, you shut it down 
 if the kitchen is unsuitable for preparing food. After 
 sanitizing the kitchen you open the restaurant again. You also 
 don't fire the sloppy waiter until you have a better waiter at 
 hand…
Let me play the game of finding analogies ;) IMO, an exception is more suitable for the analogy with the waiter and the dirty kitchen. A logic error would be a case when you think you are running a garage but suddenly you notice your staff is selling meals and wearing chefs' uniforms. Piotrek
Oct 03 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 3 October 2014 at 17:33:33 UTC, Piotrek wrote:
 That depends on design (logic). Ever heard of this?
 http://www.reddit.com/r/programming/comments/1ax0oa/how_kdes_1500_git_repositories_almost_were_lost/
How is not having redundant storage, logging or backup related to this? This is a risk assessment scenario: «What are we willing to lose compared to what it costs to mitigate the risks by investing in additional resources?»
 A logic error would be a case when you think you are running a 
 garage but suddenly you notice your staff is selling meals and 
 wearing chefs' uniforms.
But it is a business decision whether it is better to take amazon.com off the network for a week or just let their search engine occasionally serve food instead of books as search results. Not an engineering decision.

It is a business decision whether it is better for a game to corrupt 1% of user accounts and let customer support manually build them back up than to take the game off the network until the problem is fixed. You would probably have a heavier load on customer support and lose more subscriptions by taking the game off the network than by giving those 1% one year of free game play as compensation.

If you have a logic error in a functional routine, it is local. It might not matter, it might be expected. Logic errors do not imply memory corruption. Memory corruption does not imply that exceptions are thrown. Even if memory corruption would lead to exceptions being thrown in 30% of the cases, you'd still have 70% of cases where memory corruption goes undetected. So if that is a concern, you need to focus elsewhere.

You have to think about this in probabilistic terms and relate it to business decisions. Defining thresholds for acceptable reliability is not an engineering decision. An engineering decision is to use isolates, Erlang, Haskell etc. to achieve the thresholds set as acceptable reliability/quality viewed from a business point of view.
Oct 03 2014
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Friday, 3 October 2014 at 19:05:51 UTC, Ola Fosheim Grøstad 
wrote:
 But it is a business decision whether it is better to take 
 amazon.com off the network for a week or just let their search 
 engine occasionally serve food instead of books as search 
 results. Not an engineering decision.

 It is a business decision whether it is better for a game to 
 corrupt 1% of user accounts and let customer support manually 
 build them back up than to take the game off the network until 
 the problem is fixed. You would probably have heavier load on 
 customer support and lose more subscriptions by taking the 
 game off the network than giving those 1% one year of free game 
 play as a compensation.
The thing is, the privilege to make that kind of business decision is wholly dependent on the fact that there are no meaningful safety issues involved. Compare that to the case of the Ford Pinto. The allegation made was that Ford had preferred to risk paying out lawsuits to injured drivers over fixing a design flaw responsible for those (serious) injuries, because a cost-benefit analysis had shown the payouts were cheaper than rolling out the fix. This allegation was rightly met with outrage, and severe punitive damages in court.
Oct 04 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 4:39 AM, Joseph Rushton Wakeling wrote:
 The thing is, the privilege to make that kind of business decision is wholly
 dependent on the fact that there are no meaningful safety issues involved.

 Compare that to the case of the Ford Pinto.  The allegation made was that Ford
 had preferred to risk paying out lawsuits to injured drivers over fixing a
 design flaw responsible for those (serious) injuries, because a cost-benefit
 analysis had shown the payouts were cheaper than rolling out the fix.  This
 allegation was rightly met with outrage, and severe punitive damages in court.
Unfortunately, such business decisions are always made. Nobody can make a 100% safe system, and if one even tried, such a system would be unusable. A car where safety was the overriding priority could not move an inch, nobody could afford to buy one, etc. The best one can do in an imperfect world is set a standard of the maximum probability of a fatal accident. In aviation, this standard is set by regulation, and airframe manufacturers are obliged to prove that the system reliability is greater than that standard, in order to get their designs certified. The debate then is how high can that standard be set and still have affordable, useful products.
Oct 04 2014
prev sibling parent "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 4 October 2014 at 11:39:24 UTC, Joseph Rushton 
Wakeling wrote:
 The thing is, the privilege to make that kind of business 
 decision is wholly dependent on the fact that there are no 
 meaningful safety issues involved.
Surgeons can do remote surgery using VR equipment. There are obvious dangerous technical factors there, but you have to weigh that against getting an emergency operation done by the most experienced surgeon in the country. So it is a probabilistic calculation.

From an absolute safety point of view, you should not do that. The comm link might fail, and that could be serious. It is a system that might create extra complications. From a probabilistic point of view it might lead to the highest survival rate and make local surgeons better, so it might be worth the risk when you amortize risk over N operations.
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 9:10 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 I think Walter forgets that you ensure integrity of a complex system of servers
 by utilizing a rock solid proven transaction database/task-scheduler for
 handling all critical information. If that fails, you probably should shut down
 everything, roll back to the last backup and reboot.
You don't ensure integrity of anything by running software after it has entered an unknown and unanticipated state. There's no way you'd bet your life on it.
 rock solid proven
Yeah, right.
 If that fails
"When that fails" FTFY
 I think Walter forgets
I think you forget my background in designing critical flight controls systems. I know what works, and the proof is the incredible safety of airliners. Yeah, I know that's "appeal to authority", but I've backed it up, too.
Oct 04 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 4 October 2014 at 08:25:22 UTC, Walter Bright wrote:
 On 10/3/2014 9:10 AM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 I think Walter forgets that you ensure integrity of a complex 
 system of servers
 by utilizing a rock solid proven transaction 
 database/task-scheduler for
 handling all critical information. If that fails, you probably 
 should shut down
 everything, roll back to the last backup and reboot.
You don't ensure integrity of anything by running software after it has entered an unknown and unanticipated state.
Integrity is ensured by the transaction engine. The world outside of the transaction engine has NO WAY of affecting integrity. D code that is written today belongs outside the transaction engine.
 There's no way you'd bet your life on it.
SAAB Gripen crashed in 1989 and 1993 due to control software; the pilots used their parachutes and sent the plane in a safe direction. Eurofighter is wire controlled; you most likely cannot keep it stable without electronic control. So if it fails, you have to use the parachute. Bye, bye $100,000,000.

Anyway, failure should not be due to "asserts", that should be covered by program verification and formal proofs. Failure can still happen if the stabilizing model is inadequate.

During peace time fighter jets stay grounded for many days every year due to technical issues, maybe as much as 50%. In war time they would be up fighting… So yes, you bet your life on it when you defend the air base. Your life is worth nothing in certain circumstances. It is contextual.
 I think you forget my background in designing critical flight 
 controls systems. I know what works, and the proof is the 
 incredible safety of airliners. Yeah, I know that's "appeal to 
 authority", but I've backed it up, too.
That's a marginal use scenario and software for critical control systems should not rely on asserts in 2014. Critical software should be formally proven correct.
Oct 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 1:40 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Saturday, 4 October 2014 at 08:25:22 UTC, Walter Bright wrote:
 On 10/3/2014 9:10 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 I think Walter forgets that you ensure integrity of a complex system of servers
 by utilizing a rock solid proven transaction database/task-scheduler for
 handling all critical information. If that fails, you probably should shut down
 everything, roll back to the last backup and reboot.
You don't ensure integrity of anything by running software after it has entered an unknown and unanticipated state.
Integrity is ensured
Sorry, Ola, you've never written bug-free software, and nobody else has, either.
 by the transaction engine. The world outside of the
 transaction engine has NO WAY of affecting integrity.
Hardware fails, too.
 SAAB Gripen crashed in 1989 and 1993 due to control software,
Wikipedia sez these were caused by "pilot induced oscillations". http://en.wikipedia.org/wiki/Accidents_and_incidents_involving_the_JAS_39_Gripen#February_1989 In any case, fighter aircraft are not built to airliner safety standards.
 Eurofighter is wire
 controlled; you most likely cannot keep it stable without electronic control. So
 if it fails, you have to use the parachute. Bye, bye $100,000,000.
That doesn't mean there are no backups to the primary flight control computer.
 Anyway, failure should not be due to "asserts", that should be covered by
 program verification and formal proofs.
The assumption that "proof" means the code doesn't have bugs is charming, but still false.
 Failure can still happen if the stabilizing model is inadequate.
It seems we can't escape bugs.
 During peace time fighter jets stay grounded for many days every year due to
 technical issues, maybe as much as 50%. In war time they would be up fighting…
 So yes, you bet your life on it when you defend the air base. Your life is worth
 nothing in certain circumstances. It is contextual.
Again, warplanes are not built to airliner safety standards. They have different priorities.
 I think you forget my background in designing critical flight controls
 systems. I know what works, and the proof is the incredible safety of
 airliners. Yeah, I know that's "appeal to authority", but I've backed it up, too.
That's a marginal use scenario and software for critical control systems should not rely on asserts in 2014. Critical software should be formally proven correct.
Airframe companies are going to continue to rely on things that have a long, successful track record. It's pretty hard to argue with success.
Oct 04 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 09:40:26 UTC, Walter Bright wrote:
 On 10/4/2014 1:40 AM, "Ola Fosheim Grøstad" 
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Saturday, 4 October 2014 at 08:25:22 UTC, Walter Bright 
 wrote:
 On 10/3/2014 9:10 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 In any case, Fighter aircraft are not built to airliner safety 
 standards.
That concerns only the degrees of freedom of the pilot's inputs and the airplane's angles, but not the standard of the software or other components. It just trusts the pilot for the assessment of the situation. After all, a missile is far more dangerous than a possible stall.
Oct 04 2014
prev sibling next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Oct 04, 2014 at 02:40:28AM -0700, Walter Bright via Digitalmars-d wrote:
 On 10/4/2014 1:40 AM, "Ola Fosheim Grøstad"
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
[...]
Anyway, failure should not be due to "asserts", that should be
covered by program verification and formal proofs.
The assumption that "proof" means the code doesn't have bugs is charming, but still false.
[...] "Beware -- I've only proven that the code is correct, not tested it." -- Donald Knuth. :-) T -- It is not the employer who pays the wages. Employers only handle the money. It is the customer who pays the wages. -- Henry Ford
Oct 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 7:13 AM, H. S. Teoh via Digitalmars-d wrote:
 "Beware -- I've only proven that the code is correct, not tested it." --
 Donald Knuth.

 :-)
Quotes like that prove (!) what a cool guy Knuth is!
Oct 04 2014
parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 4 October 2014 at 19:49:23 UTC, Walter Bright wrote:
 On 10/4/2014 7:13 AM, H. S. Teoh via Digitalmars-d wrote:
 "Beware -- I've only proven that the code is correct, not 
 tested it." --
 Donald Knuth.

 :-)
Quotes like that prove (!) what a cool guy Knuth is!
Nah, it only proves that he had not tested the code in his memo (and probably not run the proof through a verifier either).
Oct 04 2014
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 10/04/2014 10:37 PM, Ola Fosheim Grostad wrote:
 On Saturday, 4 October 2014 at 19:49:23 UTC, Walter Bright wrote:
 On 10/4/2014 7:13 AM, H. S. Teoh via Digitalmars-d wrote:
 "Beware -- I've only proven that the code is correct, not tested it." --
 Donald Knuth.

 :-)
Quotes like that prove (!) what a cool guy Knuth is!
Nah, it only proves that he had not tested the code in his memo (and probably not run the proof through a verifier either).
It doesn't prove that.
Oct 04 2014
prev sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 4 October 2014 at 09:40:26 UTC, Walter Bright wrote:
 Sorry, Ola, you've never written bug-free software, and nobody 
 else has, either.
I have, but only simple ones. This is also key to making systems robust. Army equipment tends to consist of few parts, robust construction, and as few fancy features as you can get away with. But it is often rather inconvenient and hard to adapt to non-army settings.

D is on the other side of the spectrum. Nothing wrong with that, but it is not really following the principles that would make it suitable for creating simple robust systems.
 by the transaction engine. The world outside of the
 transaction engine has NO WAY of affecting integrity.
Hardware fails, too.
Sure, but the point is to have integrity ensured in a conceptually simple system that has been harnessed at a cost level that exceeds the budget of any single application.
 That doesn't mean there are no backups to the primary flight 
 control computer.
No, but the point is that the operational context matters. Robustness is something you have to reason about on a probabilistic level in relation to the operational context.
 The assumption that "proof" means the code doesn't have bugs is 
 charming, but still false.
A validated correctness proof ensures that the code follows the specification, so no bugs.
 Failure can still happen if the stabilizing model is inadequate.
It seems we can't escape bugs.
An inadequate specification is not a bug!
 Again, warplanes are not built to airliner safety standards. 
 They have different priorities.
Indeed, so the operational context is what matters, therefore the app should set the priorities, not the language and libraries.
Oct 04 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/04/2014 03:57 PM, Ola Fosheim Grostad wrote:
 A validated correctness proof ensures that the code follows the
 specification, so no bugs.
The day we can guarantee a correctness proof as being perfectly sound and...well...correct is the day we can guarantee software correctness without the formal proof. Proofs are just as subject to mistakes and oversights as actual code. Proofs deal with that via auditing and peer review, but...so does software. It sure as hell doesn't guarantee lack of bugs in software, so what in the world would make anyone think it magically guarantees lack of mistakes in a "proof"? (A "proof" btw, is little more than code which may or may not ever actually get mechanically executed.)
Oct 04 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/04/2014 06:20 PM, Nick Sabalausky wrote:
 On 10/04/2014 03:57 PM, Ola Fosheim Grostad wrote:
 A validated correctness proof ensures that the code follows the
 specification, so no bugs.
The day we can guarantee a correctness proof as being perfectly sound and...well...correct is the day we can guarantee software correctness without the formal proof. Proofs are just as subject to mistakes and oversights as actual code. Proofs deal with that via auditing and peer review, but...so does software. It sure as hell doesn't guarantee lack of bugs in software, so what in the world would make anyone think it magically guarantees lack of mistakes in a "proof"? (A "proof" btw, is little more than code which may or may not ever actually get mechanically executed.)
And the "specification" itself may have flaws as well, so again, there are NO guarantees here whatsoever. The only thing proofs do in an engineering context is decrease the likelihood of problems, just like any other engineering strategy.
Oct 04 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 3:24 PM, Nick Sabalausky wrote:
 On 10/04/2014 06:20 PM, Nick Sabalausky wrote:
 The day we can guarantee a correctness proof as being perfectly sound
 and...well...correct is the day we can guarantee software correctness
 without the formal proof.

 Proofs are just as subject to mistakes and oversights as actual code.
 Proofs deal with that via auditing and peer review, but...so does
 software. It sure as hell doesn't guarantee lack of bugs in software, so
 what in the world would make anyone think it magically guarantees lack
 of mistakes in a "proof"? (A "proof" btw, is little more than code which
 may or may not ever actually get mechanically executed.)
And the "specification" itself may have flaws as well, so again, there are NO guarantees here whatsoever. The only thing proofs do in an engineering context is decrease the likelihood of problems, just like any other engineering strategy.
Yup. Proofs are just another tool that helps, not a magic solution.
Oct 04 2014
prev sibling parent reply "Ola Fosheim Grostad" <ola.fosheim.grostad+dlang gmail.com> writes:
On Saturday, 4 October 2014 at 22:24:08 UTC, Nick Sabalausky 
wrote:
 And the "specification" itself may have flaws as well, so 
 again, there are NO guarantees here whatsoever. The only thing 
 proofs do in an engineering context is decrease the likelihood 
 of problems, just like any other engineering strategy.
Machine validated proofs guarantee that there are no bugs in the source code for any reasonable definition of "guarantee". There is no reason for having proper asserts left in the code after that.

If the specification the contract is based on is inadequate, then that is not an issue for the contractor. You still implement according to the spec/contract until the contract is changed by the customer. If an architect didn't follow the requirements of the law when drawing a house, then he cannot blame the carpenter for building the house according to the drawings.
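(To make the "proper asserts left in the code" part concrete, here is a minimal sketch in D; the isqrt function and its spec are made up for illustration. The out contract is exactly the property a machine-validated proof would establish once and for all; the claim above is that with such a proof the runtime checks add nothing, and D indeed already strips them in -release builds.)

    import std.stdio;

    // Hypothetical example: integer square root with its spec written
    // as an out contract. These asserts are the "proper asserts left
    // in the code" under discussion; a machine-validated proof of the
    // same property would make the runtime checks redundant.
    uint isqrt(uint x)
    out (r)
    {
        // The spec: r is the largest integer whose square is <= x.
        assert(cast(ulong) r * r <= x);
        assert((cast(ulong) r + 1) * (r + 1) > x);
    }
    body // (later D versions spell this keyword "do")
    {
        uint r = 0;
        while ((cast(ulong) r + 1) * (r + 1) <= x)
            ++r;
        return r;
    }

    void main()
    {
        writeln(isqrt(15)); // 3
        writeln(isqrt(16)); // 4
    }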
Oct 04 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 10:24 PM, Ola Fosheim Grostad wrote:
 On Saturday, 4 October 2014 at 22:24:08 UTC, Nick Sabalausky wrote:
 And the "specification" itself may have flaws as well, so again, there are NO
 guarantees here whatsoever. The only thing proofs do in an engineering context
 is decrease the likelihood of problems, just like any other engineering
strategy.
Machine validated proofs guarantee that there are no bugs in the source code for any reasonable definition of "guarantee". There is no reason for having proper asserts left in the code after that. If the specification the contract is based on is inadequate, then that is not an issue for the contractor. You still implement according to the spec/contract until the contract is changed by the customer. If an architect didn't follow the requirements of the law when drawing a house, then he cannot blame the carpenter for building the house according to the drawings.
Carpenters can be liable for building things they know are wrong, regardless of what the spec says.
Oct 04 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 5 October 2014 at 06:09:44 UTC, Walter Bright wrote:
 Carpenters can be liable for building things they know are 
 wrong, regardless of what the spec says.
You can be made liable if you don't notify the responsible entity you work for that the solution is unsuitable for the purpose. How it works in my country is that to raise a building you need a qualified entity that is "responsible for the quality of the construction" which could be an architect, a building engineer or a carpenter with some extra certification. They are responsible for ensuring/approving the overall quality. They are required to ensure the quality of the spec/work and are therefore liable vs the customer.
Oct 04 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Sunday, 5 October 2014 at 06:55:16 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 5 October 2014 at 06:09:44 UTC, Walter Bright wrote:
 Carpenters can be liable for building things they know are 
 wrong, regardless of what the spec says.
You can be made liable if you don't notify the responsible entity you work for that the solution is unsuitable for the purpose. How it works in my country is that to raise a building you need a qualified entity that is "responsible for the quality of the construction" which could be an architect, a building engineer or a carpenter with some extra certification. They are responsible for ensuring/approving the overall quality. They are required to ensure the quality of the spec/work and are therefore liable vs the customer.
Oh, I think that here in Italy we outperform your country with that, as for sure we are the most bureaucratised country on the hearth. So it happens that we have people "responsible for the quality" for pretty much everything, from building houses to making pizzas. And guess what, here the buildings made by the ancient Romans are still "up and running", while we have school buildings made in the '90s that come down at every earthquake...

Guru meditation... ;-P

---
/Paolo
Oct 05 2014
next sibling parent "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:
 On Sunday, 5 October 2014 at 06:55:16 UTC, Ola Fosheim Grøstad

 Oh, I think that here in Italy we outperform your country with 
 that, as for sure we are the most bureaucratised country on the 
 hearth.
/hearth/earth

I must stop writing while doing breakfast *sigh*

---
/P
Oct 05 2014
prev sibling next sibling parent reply "Ola Fosheim Grøstad" writes:
On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:
 Oh, I think that here in Italy we outperform your country with 
 that, as for sure we are the most bureaucratised country on the 
 hearth.
Hah! In Norway parents sign evaluations of progress for 6-year-old school children every 2 weeks due to a "quality reform". And all pupils are kept on the same level of progress so that nobody should feel left behind, due to "social democracy principles"... In Italy you have Montessori! Consider yourself lucky!

In Norway bureaucracy is a lifestyle and a way of being. In northern Norway the public sector accounts for 38-42% of all employees. In Norway the cost of living is so high that starting a business is a risky proposition unless the public sector is your customer base. :^)
Oct 05 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/05/2014 05:35 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Sunday, 5 October 2014 at 09:06:45 UTC, Paolo Invernizzi wrote:
 Oh, I think that here in Italy we outperform your country with that,
 as for sure we are the most bureaucratised country on the hearth.
Hah! In Norway parents sign evaluations of progress for 6-year-old school children every 2 weeks due to a "quality reform". And all pupils are kept on the same level of progress so that nobody should feel left behind, due to "social democracy principles"...
Aside from those 2-week evals (ouch!), the US isn't a whole lot different. US schools are still notoriously bureaucracy-heavy (just ask any school employee), and "No child left behind" is a big thing (at least, supposedly) while any advanced kids are capped at the level of the rest of their age group and forbidden from advancing at their own level (thus boring the shit out of them and seeding quite a few additional problems).

Partly, that level-capping is done because there's a prevalent (but obviously BS) belief that kids should be kept with others of the same age, rather than with others of the same level of development or even a healthy mix. But also, they call this capping of advanced students "Being fair to the *other* kids". Obviously US teachers have no idea what the word "fair" actually means.

But then, in my experience, there's a LOT that US teachers don't know. I blame both the teachers' unions (that's not intended as a statement on unions in general, BTW) and the complete and total lack of "logic" being part of the curriculum *they* were taught as kids (which is still inexcusably absent from modern curriculums).
 In Italy you have
Montessori! Consider yourself lucky!
The US has a few of those too. They're constantly ridiculed (leave it to the US to blast anything that isn't group-think-compatible), but from what I've seen Montessoris are at least less god-awful than US public schools. I almost went to one (but backed out since, by that point, it would have only been for one year - actually wound up with one of the best teachers I ever had that year, so it worked out fine in the end).
Oct 05 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 5 October 2014 at 20:37:18 UTC, Nick Sabalausky wrote:
 forbidden from advancing at their own level (thus boring the 
 shit out of them and seeding quite a few additional problems).
Yeah, I've always wondered if the motivation for keeping kids locked up in schools is to prevent them from doing mischief in the streets and teach them to accept hours of pointless boredom so that they will eventually accept doing boring pointless stuff when they later join the workforce as bureaucrats. :-/
 Obviously US teachers have no idea what the word "fair" 
 actually means. But then, in my experience, there's a LOT that 
 US teachers don't know.
The status of teaching has gone downhill. It used to be viewed as highly skilled work; then you got some idealistic teachers of the 1968 generation, but after the yuppie period in the '80s it has become the easy path to higher education when you don't know what you want to do, except that you aren't good with theory and you want to work with people. I think the same can be said about psychology, the field you enter if you don't know what to study and find science too difficult… (or am I being unfair?)
 statement on unions in general, BTW) and the complete and total 
 lack of "logic" being part of the curriculum *they* were taught 
 as kids (which is still inexcusably absent from modern 
 curriculums).
"logic" is theory. Theory does not belong in schools. Too difficult. You are only supposed to learn things that you don't have to figure out, otherwise finding qualified teachers will become impossible.
 US has a few of those too. They're constantly ridiculed (leave 
 it to the US to blast anything that isn't 
group-think-compatible), but from what I've seen Montessoris
 are at least less god-awful than US public schools.
We have a few Montessori and a few more Rudolf Steiner schools, which I believe are better at motivating kids and at least not killing the fun of learning. Learning that figuring things out and doing things your own way is fun is a very important lesson. The government says that the public school system has changed and is kind of incorporating those methodologies too, but that is not what I see. And how can a single teacher handle 27 kids in a class, with only 2 minutes per kid per hour and a rigid plan for what you are supposed to teach?

Btw, did you know that the majority in Norway voted against joining the EU, so we did not, but since the majority of politicians were pro-EU they got us in the backdoor by signing away our freedom in treaties instead? And even when the treaties do not apply we "harmonize our laws to EU" because we "just have to". So thanks to bureaucratic maneuvers we are now 99% EU regulated, have shittier consumer laws than we used to, have next to no control of the borders, and are flooded by Romanian beggars and criminals from every corner of Europe and beyond; in return we get no vote in the EU decision-making process since we are "independent"… You gotta love democracy…

On a positive note: the IOC managed to demand that the Norwegian King ought to hold a party for the IOC leaders, and additionally demanded that he should pay for their drinks. It was part of their 7000-page olympics qualification requirements document. It is so heavily regulated that it explicitly specifies that the personnel in the hotels MUST SMILE at the IOC leaders when they arrive. I kid you not; even games and pastimes are heavily bureaucratic down to minuscule details these days. So, due to pressure from the newspapers/grassroots and the royal insult, the politicians eventually had to turn down the ~$10.000.000.000 winter olympics budget proposal. Good riddance. Long live monarchy!

I am personally looking forward to Beijing hosting the winter olympics in 2022. I am sure they will manage to fake a smile after the politicians have demolished their homes to make space for the ski-jumping event. In the meantime Norway should not even think about hosting any sporting event until we can avoid being ranked as the world's most expensive country:

http://www.numbeo.com/cost-of-living/rankings_by_country.jsp

'Cause, we all know that a $10 billion budget turns into at least a $30 billion budget before the games are over. That would have been $6000 per capita. I'd say that royal insult paid off.

Was this off-topic?
Oct 06 2014
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/06/2014 07:06 PM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Sunday, 5 October 2014 at 20:37:18 UTC, Nick Sabalausky wrote:
 statement on unions in general, BTW) and the complete and total lack
 of "logic" being part of the curriculum *they* were taught as kids
 (which is still inexcusably absent from modern curriculums).
"logic" is theory. Theory does not belong in schools. Too difficult. You are only supposed to learn things that you don't have to figure out, otherwise finding qualified teachers will become impossible.
Math is theory too. But regardless: Yes, there *is* a theoretical side to logic, but logic is also *extremely* applicable to ordinary everyday life. Even moreso than math, I would argue.

Now, I don't necessarily mean things like formal symbolic logic or lambda calculus. Although, my 9th grade math class *did* have a very heavy focus on formal proofs - and it wasn't even remotely one of the hardest math classes I'd taken, even just up to that point. Students can handle theory just fine as long as it isn't the more advanced/complex stuff...although college students should be *expected* to be capable of handling even that. Now, *cutting edge* theory? Sure, leave that for grad students and independent study.

Anyway, when I say "teach logic in schools" I just mean (at the very least) the basic things: Like recognizing and identifying the basic logical fallacies (no need necessarily to dive into the actual latin names - the names aren't nearly as crucial as understanding the concepts themselves), recognizing ambiguity, understanding *why* the fallacies and ambiguity are flaws, and the problems and absurdities that can occur when such things aren't noticed and avoided.

This is VERY simple, and crucial, stuff. And yet I see SOOO many grown adults, even ones with advanced graduate degrees, consistently fail completely and utterly at basic logical reasoning in everyday life (and we're talking very, very obvious and basic fallacies), that it's genuinely disturbing.
 I am personally looking forward to Beijing hosting the winter olympics
in 2022. I am sure they will manage to fake a smile after the politicians
 have demolished their homes to make space for the ski-jumping event.
Don't know whether this has always been the case and just never got noticed until recent years, but between the last winter olympics and the recent soccer/football match, and what you're saying about 2022, I'm noticing a rather bad trend with these big international sporting events. I get the feeling this'll be something that'll get bigger and bigger until either A. the right people get together and do something about it, or B. things come to a head and the shit *really* starts to hit the fan. (Yes, I like outdated slang ;) ) Nothing good can come from the current trajectory.
 Was this off-topic?
It was off-topic several posts up. :)
Oct 07 2014
next sibling parent reply Mike Parker <aldacron gmail.com> writes:
On 10/7/2014 5:19 PM, Nick Sabalausky wrote:

 Anyway, when I say "teach logic in schools" I just mean (at the very
 least) the basic things: Like recognizing and identifying the basic
 logical fallacies (no need necessarily to dive into the actual latin
 names - the names aren't nearly as crucial as understanding the concepts
 themselves), recognizing ambiguity, understanding *why* the fallacies
 and ambiguity are flaws, and the problems and absurdities that can occur
 when such things aren't noticed and avoided.
In other words, critical thinking. This is something that, at least in America, is not at all part of the primary school experience.
 This is VERY simple, and crucial, stuff. And yet I see SOOO many grown
 adults, even ones with advanced graduate degrees, consistently fail
 completely and uttery at basic logical reasoning in everyday life (and
 we're talking very, very obvious and basic fallacies), that it's
 genuinely disturbing.


I've personally seen two university courses offered under different guises that try to correct this problem. One is called "Introduction to Mathematical Thinking" and is taught by Keith Devlin at Stanford. The other is called "Think Again: How to Reason and Argue", headed by Walter Sinnott-Armstrong at Duke. Despite the disparity in the course titles and the very different approaches taken by the instructors, the content is directed at the same goal -- pushing students to get past their cognitive biases and critically and logically examine any data presented to them.

Sadly, American culture seems to increasingly encourage the opposite of critical thinking. It has almost become a badge of honor among some (rather large) circles to embrace a form of willful ignorance rooted in rejecting logic and hard, cold data in favor of falling victim to confirmation bias.
Oct 07 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 04:55 AM, Mike Parker wrote:
 On 10/7/2014 5:19 PM, Nick Sabalausky wrote:

 Anyway, when I say "teach logic in schools" I just mean (at the very
 least) the basic things: Like recognizing and identifying the basic
 logical fallacies (no need necessarily to dive into the actual latin
 names - the names aren't nearly as crucial as understanding the concepts
 themselves), recognizing ambiguity, understanding *why* the fallacies
 and ambiguity are flaws, and the problems and absurdities that can occur
 when such things aren't noticed and avoided.
In other words, critical thinking. This is something that, at least in America, is not at all part of the primary school experience.
Pretty much, yea.

In all my years of schooling, I only had one class that actually covered any of that stuff (as an actual stated topic anyway, rather than just as an implied part of another topic): It wasn't until college, *and* it was just an elective. Formal Logic, IIRC, or something along those lines, from the Philosophy dept (aiui, logic *is* considered a branch of philosophy, at least historically, which does make sense IMO). It was actually a good course (not much new to me though, since I was already neck-deep in programming, which basically *IS* applied logic. But it's one of the few courses I've ever actually been impressed with.)

The downside of the course, though: Ever since I took it I've been ashamed at society for placing such incredibly minimal emphasis on something so crucially fundamental and important. :/ Just something that makes me scream in my head "Yes! Everybody needs to know this!!!"
 This is VERY simple, and crucial, stuff. And yet I see SOOO many grown
 adults, even ones with advanced graduate degrees, consistently fail
 completely and uttery at basic logical reasoning in everyday life (and
 we're talking very, very obvious and basic fallacies), that it's
 genuinely disturbing.


I've personally seen two university courses offered under different guises that try to correct this problem. One is called "Introduction to Mathematical Thinking" and is taught by Keith Devlin at Stanford. The other is called "Think Again: How to Reason and Argue", headed by Walter Sinnott-Armstrong at Duke. Despite the disparity in the course titles and the very different approaches taken by the instructors, the content is directed at the same goal -- pushing students to get past their cognitive biases and critically and logically examine any data presented to them.
Personally, I think that not presenting it *as* logic may be somewhat of a mistake. Makes it sound almost like some self-help or management seminar or something. Less respectable-sounding, and obscures the true core nature of the material: logic.

But then again, MANY people seem to be repelled by any mention of logic, whereas I've always been attracted to it, so maybe that's just my own bias.
 Sadly, American culture seems to increasingly encourage the opposite of
 critical thinking. It has almost become a badge of honor among some
 (rather large) circles to embrace a form of willful ignorance rooted in
 rejecting logic and hard, cold data in favor of falling victim to
 confirmation bias.
Unfortunate, yes. Of course, there have *always* been things that have quite blatantly encouraged people to deliberately *not* think, reason, or question assumptions. So it's naturally not limited to just a modern American culture thing, FWIW.
Oct 07 2014
next sibling parent Mike Parker <aldacron gmail.com> writes:
On 10/7/2014 7:52 PM, Nick Sabalausky wrote:
 On 10/07/2014 04:55 AM, Mike Parker wrote:
 On 10/7/2014 5:19 PM, Nick Sabalausky wrote:
 In all my years of schooling, I only had one class that actually covered
 any of that stuff (as an actual stated topic anyway, rather than just as
 an implied part of another topic): It wasn't until college, *and* it was
 just an elective. Formal Logic, IIRC, or something along those lines,
 from the Philosophy dept (aiui, logic *is* considered a branch of
 philosophy, at least historically. Which does make sense IMO).
Yeah, my "Introduction to Logic" was branded PHIL 170 (I only know that because I just happened to find the syllabus in the front of my copy of Copi & Cohen -- the only book from college somehow managed to hold on to).

 Personally, I think that not presenting it *as* logic may be somewhat of
a mistake. Makes it sound almost like some self-help or management
 seminar or something. Less respectable-sounding, and obscures the true
 core nature of the material: logic.

 But then again, MANY people seem to be repelled by any mention of logic,
whereas I've always been attracted to it, so maybe that's just my own bias.
Formal logic (which is what is typically taught in courses with "Logic" in the title) and critical thinking aren't quite the same thing, though. Logic as a means of probing arguments is just one tool in the critical thinker's toolbox. A lot of people can pick up how to draw a Venn diagram or call out a fallacy, but they fall short in their ability to actually understand where an opposing argument is coming from, or in expressing themselves in a way appropriate for their target audience, or in getting past their own biases. That takes a lot of hard work that isn't solved by mapping out truth tables.
Oct 07 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 3:52 AM, Nick Sabalausky wrote:
 But then again, MANY people seem to be repelled by any mention of logic,
whereas
I've always been attracted to it, so maybe that's just my own bias.
Amusingly, Mr. Spock was the most illogical member of the crew.
Oct 07 2014
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 03:39 PM, Walter Bright wrote:
 On 10/7/2014 3:52 AM, Nick Sabalausky wrote:
 But then again, MANY people seem to be repelled by any mention of
 logic, whereas
I've always been attracted to it, so maybe that's just my own bias.
Amusingly, Mr. Spock was the most illogical member of the crew.
There was one episode in particular that really bugged me with that. Left me thinking "Uhh, I'm not sure the writer actually understood logic very well":

Spock and a few crew members were stranded in a shuttle with seemingly no chance for rescue except for a one-in-a-million longshot. Spock objected to actually doing the longshot because he'd inexplicably decided that its chances were really zero (even though logic would *really* dictate the chances were merely very small - which was obviously an improvement, albeit minor, over the "do nothing and guarantee lack of rescue" approach Spock was bizarrely in favor of).

Oddly, the episode was clearly *trying* to tell people "logic isn't always right, use your gut"...which was interesting because, uhh, the author's logic wasn't right ;) Left me with a very big "Wait...WTF?!?" That episode's been bugging me ever since!
Oct 07 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky wrote:
 But regardless: Yes, there *is* a theoretical side to logic, 
 but logic is also *extremely* applicable to ordinary everyday 
 life. Even moreso than math, I would argue.
Yep, however what the human brain is really bad at is reasoning about probability. I agree that primary school should cover modus ponens, modus tollens, and how you can define equivalence in terms of two implications. BUT I think you also need to experiment informally with probability at the same time and experience how intuition clouds our thinking. It is important to avoid the fallacies of black/white reasoning that come with propositional logic.

Actually, one probably should start with teaching "ad hoc" object-oriented modelling in primary schools: turning what humans are really good at, abstraction, into something structured and visual. That way you also learn that when you argue a point you are biased; you always model certain limited projections of the relations that are present in the real world.
 up to that point. Students can handle theory just fine as long 
 as it isn't the more advanced/complex stuff...Although college 
 students should be *expected* to be capable of handling even 
 that. Now, *cutting edge* theory? Sure, leave that for grad 
 students and independent study.
Educational research shows that students can handle theory much better if they view it as useful. Students have gone from being very bad at math to doing fine when it was applied to something they cared about (like building something, or predicting the outcome of soccer matches). Internalized motivation is really the key to progress in school, which is why the top-down fixed-curriculum approach is underperforming compared to the enormous potential kids have. They are really good at learning stuff they find fun (like games).
 This is VERY simple, and crucial, stuff. And yet I see SOOO 
 many grown adults, even ones with advanced graduate degrees, 
 consistently fail completely and uttery at basic logical 
 reasoning in everyday life (and we're talking very, very 
 obvious and basic fallacies), that it's genuinely disturbing.
Yes, social factors are more important in the real world than optimal decision making, unless you build something that can fall apart in a spectacular way that makes it to the front page of the newspapers. :-)
 big international sporting events. I get the feeling this'll be 
 something that'll get bigger and bigger until either A. the 
 right people get together and do something about it, or B. 
 things come to a head and the shit *really* starts to hit the 
 fan. (Yes, I like outdated slang ;) ) Nothing good can come 
 from the current trajectory.
Yeah, I think the trajectory will keep going upwards until there are no more less-democratic countries willing to pay the price to look civilized. It is probably also the result of it being increasingly hard to be heard in the increased and interactive information flow of media, so being big and loud is viewed as a necessity. The Internet makes it much easier to escape from the events; in the '80s the olympics would be on all media surfaces. I barely noticed the last winter olympics despite the 20 billion price tag.
 Was this off-topic?
It was off-topic several posts up. :)
At some point the forum will split into a developer section and an end-user section. It is kind of inevitable :). The current confusion about the roles of developer vs end-user is kind of interesting. Maybe it is a positive thing. Not sure. :-)
Oct 07 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 06:47 AM, "Ola Fosheim Grøstad" <ola.fosheim.grostad+dlang gmail.com> wrote:
 On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky wrote:
 But regardless: Yes, there *is* a theoretical side to logic, but logic
 is also *extremely* applicable to ordinary everyday life. Even moreso
 than math, I would argue.
Yep, however what the human brain is really bad at is reasoning about probability.
Yea, true. Probability can be surprisingly unintuitive even for people well-versed in logic.

Ex: A lot of people have trouble understanding that getting "heads" in a coinflip many times in a row does *not* increase the likelihood of the next flip being "tails". And there's a very understandable reason why that's difficult to grasp.

I've managed to grok it, but yet even I (try as I may) just cannot truly grok the monty hall problem. I *can* reliably come up with the correct answer, but *never* through an actual mental model of the problem, *only* by very, very carefully thinking through each step of the problem. And that never changes no matter how many times I think it through.

That really impressed me about the one student depicted in the "21" movie (the one based around the real-life people who created card counting): Don't know how much of it was hollywood artistic license, but when he demonstrated a crystal-clear *intuitive* understanding of the monty hall problem - that was *impressive*.
 I agree that primary school should cover modus ponens,
modus tollens and how you can define equivalence in terms of two
 implications. BUT I think you also need to experiment informally with
 probability at the same time and experience how intuition clouds our
 thinking. It is important to avoid the fallacies of black/white
 reasoning that comes with propositional logic.

 Actually, one probably should start with teaching "ad hoc"
 object-oriented modelling in primary schools. Turning what humans are
 really good at, abstraction, into something structured and visual. That
 way you also learn that when you argue a point you are biased, you
 always model certain limited projections of the relations that are
 present in real world.
Interesting points, I hadn't thought of any of that.
 Educational research shows that students can handle theory much better
 if it they view it as useful. Students have gone from being very bad at
 math, to doing fine when it was applied to something they cared about
 (like building something, or predicting the outcome of soccer matches).
Yea, definitely. Self-intimidation has a lot to do with it too. I've talked to several math teachers who say they've had very good success teaching algebra to students who struggled with it *just* by replacing the letter-based variables with empty squares. People are very good at intimidating themselves into refusing to even think. It's not just students, it's people in general; heck, I've seen both my parents do it quite a bit:

"Nick! Something popped up on my screen! I don't know what to do!!"
"What does it say?"
"I dunno! I didn't read it!! How do I get rid of it?!?"

/facepalm
 Internalized motivation is really the key to progress in school,
This is something I've always felt needed to be drilled into the heads of every educator, as mandatory. Required knowledge for educators, IMO. Things like "gold stars" are among the worst things you can do: it really drives the point home that it's all tedium and has no *inherent* value. Of course, in the classroom, most of it usually *is* tedium with little inherent value...
 which
 is why the top-down fixed curriculum approach is underperforming
 compared to the enormous potential kids have. They are really good at
 learning stuff they find fun (like games).
Yea, and that really proves just how bad the current approach is. Something I think is an appropriate metaphor for that (and bear with me on this): Are you familiar with the sitcom "It's Always Sunny in Philadelphia"? Created by a group of young writers/actors who were just getting their start. After the first season, it had impressed Danny DeVito enough (apparently he was a fan of the show) that he joined the cast.

In an interview with one of the show's creators (on the Season 1 & 2 DVDs), this co-creator talked about how star-struck they were about having Danny DeVito on board, and how insecure/panicked he was about writing for DeVito...until he realized (his words, more or less): "Wait a minute, this is *Danny DeVito* - if we can't make **him** funny, then we really suck!"

A school that has trouble teaching kids is like a writer who can't make Danny DeVito funny. Learning is what kids *do*! How much failure does it take to mess *that* up?

"Those who make a distinction between education and entertainment don't know the first thing about either."
 Yes, social factors are more important in the real world than optimal
 decision making,
I was quite disillusioned when I finally discovered that as an adult. Intelligence, knowledge and ability don't count for shit 90+% of the time (in fact, frequently it's a liability - people *expect* group-think and get very agitated and self-righteous when you don't conform to group-think). Intelligence/knowledge/ability *should* matter a great deal, and people *think* they do. But they don't.
 unless you build something that can fall apart in a
 spectacular way that makes it to the front page of the newspapers. :-)
I've noticed that people refuse to recognize (let alone fix) problems, even when directly pointed out, until people start dying. And even then it's kind of a crapshoot as to whether or not anything will actually be done.
Oct 07 2014
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 On 10/07/2014 06:47 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky wrote:
 But regardless: Yes, there *is* a theoretical side to logic, but logic
 is also *extremely* applicable to ordinary everyday life. Even moreso
 than math, I would argue.
Yep, however what the human brain is really bad at is reasoning about probability.
Yea, true. Probability can be surprisingly unintuitive even for people well-versed in logic. ...
Really?
 Ex: A lot of people have trouble understanding that getting "heads" in a
 coinflip many times in a row does *not* increase the likelihood of the
 next flip being "tails". And there's a very understandable reason why
 that's difficult to grasp.
What is this reason? It would be really spooky if the probability were actually increased in this way. You could win at 'heads or tails' by flipping a coin very many times until you got a sufficiently long run of 'tails', then going to another room and betting that the next flip will be 'heads'. If people didn't intuitively understand that this doesn't work, some would actually try to apply this trick. (Do they?)
 I've managed to grok it, but yet even I (try
 as I may) just cannot truly grok the monty hall problem. I *can*
 reliably come up with the correct answer, but *never* through an actual
 mental model of the problem, *only* by very, very carefully thinking
 through each step of the problem. And that never changes no matter how
 many times I think it through.
It is actually entirely straightforward, but it is popular to present the problem as if it was actually really complicated, and those who like to present it often seem to understand it poorly as well. The stage is usually set up to maximise entertainment, not understanding. The presenter is often trying to impress by forcing a quick answer, hoping that you will not think at all and get it wrong. Sometimes, the context is even set up so that such a quick shot is more likely to be wrong, because of an intended wrong analogy to some other completely obvious question that came just before.

Carefully thinking it through step by step multiple times afterwards tends only to confuse oneself into strengthening the belief that something counter-intuitive is going on, and this is aggravated by the fact that there isn't, which is why the supposedly counter-intuitive part can never be pinned down. I.e. I think it is confusing because one approaches the problem with a wrong set of assumptions.

That said, it's just: When you first randomly choose the door, you would intuitively rather bet that you guessed wrong. The show master is simply proposing to tell you behind which of the other doors the car is in case you indeed guessed wrong.

There's not more to it.
 I agree that primary school should cover modus ponens,
 modus tollens and how you can define equivalance in terms of two
 implications. BUT I think you also need to experiment informally with
 probability at the same time and experience how intuition clouds our
 thinking. It is important to avoid the fallacies of black/white
 reasoning that comes with propositional logic.

 Actually, one probably should start with teaching "ad hoc"
 object-oriented modelling in primary schools. Turning what humans are
 really good at, abstraction, into something structured and visual. That
 way you also learn that when you argue a point you are biased, you
 always model certain limited projections of the relations that are
 present in real world.
Interesting points, I hadn't thought of any of that. ...
I mostly agree, except I wouldn't go object-oriented, but do something else, because it tends to quickly fail at actually capturing relations that are present in the real world in a straightforward fashion.
 Educational research shows that students can handle theory much better
 if it they view it as useful. Students have gone from being very bad at
 math, to doing fine when it was applied to something they cared about
 (like building something, or predicting the outcome of soccer matches).
Yea, definitely. Self-intimidation has a lot to do with it too. I've talked to several math teachers who say they've had very good success teaching algebra to students who struggled with it *just* by replacing the letter-based variables with empty squares. People are very good at intimidating themselves into refusing to even think. It's not just students, it's people in general, heck I've seen both my parents do it quite a bit: "Nick! Something popped up on my screen! I don't know what to do!!" "What does it say?" "I dunno! I didn't read it!! How do I get rid of it?!?" /facepalm
Sounds familiar. I last ran into this with e.g. category theory (especially monads) and the monty hall problem. :-P In fact, I only now realised that those two seem to be rather related phenomena. Thanks!
Oct 07 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
[...]
 I've managed to grok it, but yet even I (try as I may) just cannot
 truly grok the monty hall problem. I *can* reliably come up with the
 correct answer, but *never* through an actual mental model of the
 problem, *only* by very, very carefully thinking through each step of
 the problem. And that never changes no matter how many times I think
 it through.
[...] The secret behind the monty hall scenario is that the host is actually leaking extra information to you about where the car might be.

You make a first choice, which has 1/3 chance of being right, then the host opens another door, which is *always* wrong. This last part is where the information leak comes from. The host's choice is *not* fully random, because if your initial choice was the wrong door, then he is *forced* to pick the other wrong door (because he never opens the right door, for obvious reasons), thereby indirectly revealing which is the right door. So we have:

1/3 chance: you picked the right door. Then the host can randomly choose between the 2 remaining doors. In this case, no extra info is revealed.

2/3 chance: you picked the wrong door, and the host has no choice but to pick the other wrong door, thereby indirectly revealing the right door.

So if you stick with your initial choice, you have 1/3 chance of winning, but if you switch, you have 2/3 chance of winning, because if your initial choice was wrong, which is 2/3 of the time, the host is effectively leaking out the right answer to you.

The supposedly counterintuitive part comes from wrongly assuming that the host has full freedom to pick which door to open, which he does not in the given scenario. Of course, this scenario is also often told in a deliberately misleading way -- the fact that the host *never* opens the right door is often left as an unstated "common sense" assumption, thereby increasing the likelihood that people will overlook this minor but important detail.

T

-- 
Written on the window of a clothing store: No shirt, no shoes, no service.
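(For anyone who would rather check that empirically than re-derive it, here is a minimal Monte Carlo sketch in D -- an illustrative toy, not anyone's canonical solution. It plays the game a million times and prints win rates near 1/3 for sticking and 2/3 for switching:

    import std.random;
    import std.stdio;

    void main()
    {
        enum trials = 1_000_000;
        int stickWins, switchWins;

        foreach (_; 0 .. trials)
        {
            immutable car  = uniform(0, 3); // door hiding the car
            immutable pick = uniform(0, 3); // contestant's first pick

            // The host opens a door that is neither the pick nor the car.
            int opened;
            do
                opened = uniform(0, 3);
            while (opened == pick || opened == car);

            // Switching means taking the one remaining unopened door.
            immutable switched = 3 - pick - opened;

            if (pick == car)     ++stickWins;
            if (switched == car) ++switchWins;
        }

        writefln("stick:  %.3f", cast(double) stickWins / trials);
        writefln("switch: %.3f", cast(double) switchWins / trials);
    }

The do/while loop is exactly the "host never opens the right door" rule; delete the `opened == car` test and the two strategies come out equal, which is where the intuitive 50/50 answer actually belongs.)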
Oct 07 2014
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10/08/2014 02:37 AM, H. S. Teoh via Digitalmars-d wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 [...]
 I've managed to grok it, but yet even I (try as I may) just cannot
 truly grok the monty hall problem. I *can* reliably come up with the
 correct answer, but *never* through an actual mental model of the
 problem, *only* by very, very carefully thinking through each step of
 the problem. And that never changes no matter how many times I think
 it through.
[...] The secret behind the monty hall scenario, is that the host is actually leaking extra information to you about where the car might be. You make a first choice, which has 1/3 chance of being right, then the host opens another door, which is *always* wrong. This last part is where the information leak comes from. The host's choice is *not* fully random, because if your initial choice was the wrong door, then he is *forced* to pick the other wrong door (because he never opens the right door, for obvious reasons), thereby indirectly revealing which is the right door. So we have: 1/3 chance: you picked the right door. Then the host can randomly choose between the 2 remaining doors. In this case, no extra info is revealed. 2/3 chance: you picked the wrong door, and the host has no choice but to pick the other wrong door, thereby indirectly revealing the right door. So if you stick with your initial choice, you have 1/3 chance of winning, but if you switch, you have 2/3 chance of winning, because if your initial choice was wrong, which is 2/3 of the time, the host is effectively leaking out the right answer to you. The supposedly counterintuitive part comes from wrongly assuming that the host has full freedom to pick which door to open, which he does not in the given scenario. Of course, this scenario is also often told in a deliberately misleading way -- the fact that the host *never* opens the right door is often left as an unstated "common sense" assumption, thereby increasing the likelihood that people will overlook this minor but important detail. T
The problem with this explanation is simply that it is too long and calls the overly detailed reasoning a 'secret'. :o) It's like monad tutorials!
Oct 07 2014
parent reply "Dominikus Dittes Scherkl" writes:
On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:
 The secret behind the monty hall scenario, is that the host is 
 actually
 leaking extra information to you about where the car might be.

 You make a first choice, which has 1/3 chance of being right, 
 then the
 host opens another door, which is *always* wrong. This last 
 part is
 where the information leak comes from.  The host's choice is 
 *not* fully
 random, because if your initial choice was the wrong door, 
 then he is
 *forced* to pick the other wrong door (because he never opens 
 the right
 door, for obvious reasons), thereby indirectly revealing which 
 is the
 right door.  So we have:

 1/3 chance: you picked the right door. Then the host can 
 randomly choose
 	between the 2 remaining doors. In this case, no extra info is
 	revealed.

 2/3 chance: you picked the wrong door, and the host has no 
 choice but to
 	pick the other wrong door, thereby indirectly revealing the
 	right door.

 So if you stick with your initial choice, you have 1/3 chance 
 of
 winning, but if you switch, you have 2/3 chance of winning, 
 because if
 your initial choice was wrong, which is 2/3 of the time, the 
 host is
 effectively leaking out the right answer to you.

 The supposedly counterintuitive part comes from wrongly 
 assuming that
 the host has full freedom to pick which door to open, which he 
 does not
But yes. He has. It makes no difference. If he would ever open the right door, you would just take it too. So if the win is behind the two doors you did not choose first, you will always get it.
 The problem with this explanation is simply that it is too long 
 and calls the overly detailed reasoning a 'secret'. :o)
So take this shorter explanation: "There are three doors and two of them are opened, one by him and one by you. So the chance to win is two out of three."

It doesn't matter if he uses his knowledge to always open a false door. It only matters that you open your door AFTER him, which allows you to react to the result of his door. If you open the door first, your chance is only 1/3.
Oct 08 2014
next sibling parent "eles" <eles eles.com> writes:
On Wednesday, 8 October 2014 at 07:00:38 UTC, Dominikus Dittes 
Scherkl wrote:
 On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:
 It doesn't matter if he uses his knowledge to open always a
 false door.
It does. Actually, this is the single most important thing.
 It only matters that you open your door AFTER him,
 which allows you to react on the result of his door. If you
 open the door first, your chance is only 1/3.
If his choice were completely random, as you seem to suggest above (because, actually, his choice is conditioned by your first choice), then, even if you open a door after him, the only thing you have is the fact that you are now in a problem with a 50% probability to win. If you remove the above piece of information, you could simply assume that there are only two doors and you are to open one of them. In this case it is just 50/50.
Oct 08 2014
prev sibling parent "eles" <eles eles.com> writes:
On Wednesday, 8 October 2014 at 07:00:38 UTC, Dominikus Dittes 
Scherkl wrote:
 On Wednesday, 8 October 2014 at 01:22:49 UTC, Timon Gehr wrote:
 If he would ever open the right door, you would just take it 
 too.
Almost. If he opens the winning door, he gives you another very important piece of information: the correctness of your first choice. If you already know whether your first choice is correct or wrong, then having the host open a door (it does not matter which of the remaining two, in this case) solves the problem without ambiguity.

But when you make your second choice, you still do not know if your first choice was correct or not. The only thing that you know is that the chance that your first choice was correct is two times less than the chance it was wrong. So you bet that your first choice was wrong, and you move on to the next problem, which, assuming this bet, now becomes a non-ambiguous problem.

The key is this: "how would a third person bet on my first choice?" Reasonably, he would bet that the choice is wrong. So why wouldn't I do the same?
Oct 08 2014
prev sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 [...]
 I've managed to grok it, but yet even I (try as I may) just cannot
 truly grok the monty hall problem. I *can* reliably come up with the
 correct answer, but *never* through an actual mental model of the
 problem, *only* by very, very carefully thinking through each step of
 the problem. And that never changes no matter how many times I think
 it through.
[...] The secret behind the monty hall scenario is that the host is actually leaking extra information to you about where the car might be. [...]
Hmm, yea, that is a good thing to realize about it. I think a big part of what trips me up is applying a broken, contorted version of the "coin toss" reasoning (coin toss == my usual approach to probability). Because of the "coin toss" problem, I'm naturally inclined to see past events as irrelevant. So, initial impression is: I see two closed doors, an irrelevant open door, and a choice: "Closed door A or Closed door B?"

Obviously, it's a total fallacy to assume "well, if there's two choices then they must be weighted equally." But, naturally, I figure that all three doors initially have equal chances, and so I'm already thinking "unweighted, even distribution", and then bam, my mind (falsely) sums up: "Two options, uniform distribution, third door isn't a choice so it's just a distraction. Therefore, 50/50."

Now yes, I *do* see several fallacies/oversights/mistakes in that, but that's how my mind tries to set up the problem. So then I wind up working backwards from that or forcing myself to abandon it by starting from the beginning and carefully working it through.

Another way to look at it, very similar to yours actually, and I think more or less the way the kid presented it in "21" (but in a typical hollywood "We hate exposition with a passion, so just rush through it as hastily as possible, we're only including it because writers expect us to, so who cares if nobody can follow it" style):

1. Three equal choices: 1/3 I'm right, 2/3 I'm wrong.
2. New choice: Bet that I was right (1/3), bet that I was wrong (2/3).
3. "An extra 33% chance for free? Sure, I'll take it."

Hmm, looking at it now, I guess the second choice is simply *inverting* your first choice. Ahh, now I get what the kid (and you) was saying much better: Choosing "I'll change my guess" is equivalent to choosing *both* of the other two doors. The fact that he opens one of those other two doors is a complete distraction and totally irrelevant. Makes you think you're only choosing "the other ONE door" when you're really choosing "the other TWO doors". Interesting.

See, this is why I love this NG :)
Oct 08 2014
parent "eles" <eles eles.com> writes:
On Wednesday, 8 October 2014 at 08:16:08 UTC, Nick Sabalausky 
wrote:
 On 10/07/2014 08:37 PM, H. S. Teoh via Digitalmars-d wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 [...]
 equivalent to choosing *both* of the other two doors.
Yeah, I think that's the best way to put it.
Oct 08 2014
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 4:49 PM, Timon Gehr wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 On 10/07/2014 06:47 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 Yep, however what the human brain is really bad at is reasoning about
 probability.
Yea, true. Probability can be surprisingly unintuitive even for people well-versed in logic. ...
Really?
Yes. See "Thinking, Fast and Slow": http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 I'm working my way through it now. Very interesting.
Oct 07 2014
prev sibling next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 07:49 PM, Timon Gehr wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 Ex: A lot of people have trouble understanding that getting "heads" in a
 coinflip many times in a row does *not* increase the likelihood of the
 next flip being "tails". And there's a very understandable reason why
 that's difficult to grasp.
What is this reason? It would be really spooky if the probability was actually increased in this way. You could win at 'heads or tails' by flipping a coin really many times until you got a sufficiently long run of 'tails', then going to another room and betting that the next flip will be 'heads', and if people didn't intuitively understand that, some would actually try to apply this trick. (Do they?)
I have actually met a lot of people who instinctively believe that getting "tails" many times in a row means that "heads" becomes more and more inevitable. Obviously they're wrong about that, but I think I *do* understand how they get tripped up:

What people *do* intuitively understand is that the overall numbers of "heads" and "tails" are likely to be similar. Moreover, statistically speaking, the more coin tosses there are, the more the overall past results tend to converge towards 50%/50%. (Which is pretty much what's implied by "uniform random distribution".) This much is pretty easy for people to intuitively understand, even if they don't know the mathematical details. As a result, people's mental models will usually involve some general notion of "There's a natural tendency for the 'heads' and 'tails' to even out."

Unfortunately, that summary is...well...partly truth but also partly inaccurate. So they take that kinda-shaky and not-entirely-accurate (but still *partially* true) mental summary and are then faced with the coin toss problem:

"You've gotten 'tails' 10,000 times in a row."

"Wow, really? That many?"

So then the questionable mental model kicks in: "...natural tendency to even out..." The inevitable result? "Wow, I must be overdue for a heads!" Fallacious certainly, but also common and somewhat understandable.

Another aspect that can mix people up: If you keep flipping the coin, over and over and over, it *is* very likely that at *some* point you'll get a "heads". That much *is* true and surprises nobody. Unfortunately, as true as it is, it's *not* relevant to individual tosses: Their individual likelihoods *always* stay the same: 50%. So we seemingly have a situation where something ("very, very likely to get a heads") is true of the whole *without* being true of *any* of the individual parts. While that does occur, it isn't exactly a super-common thing in normal everyday life, so it can be unintuitive for people.

And another confusion: Suppose we rephrase it like this: "If you keep tossing a coin, how likely are you to get 10,000 'tails' in a row AND then get ANOTHER 'tails'?" Not very freaking likely, of course: 1 in 2^10,001. But *after* those first 10,000 'tails' have already occurred, the answer changes completely. What? Seriously? Math that totally changes based on "when"?!? But 2+2 is *always* 4!! All of a sudden, here we have a math where your location on the timeline is *crucially* important[1], and that's gotta trip up some of the people who (like everyone) started out with math just being arithmetic.

[1] Or at least time *appears* to be crucially important, depending on your perspective: We could easily say that "time" is nothing more than an irrelevant detail of the hypothetical scenario and the *real* mathematics is just one scenario of "I have 10,001 samples of 50% probability" versus a completely different scenario of "I have 10,000 samples of 100% probability and 1 sample of 50% probability". Of course, deciding which of those problems is the one we're actually looking at involves considering where you are on the timeline.
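(The "location on the timeline" part is easy to sanity-check with a simulation; a minimal sketch in D, with the run length and flip count chosen arbitrarily for illustration. It estimates P(heads | the previous 5 flips were all tails) and gets ~0.5, the same as any other flip:

    import std.random;
    import std.stdio;

    void main()
    {
        enum flips = 10_000_000;
        enum runLength = 5; // runs of 10,000 tails would never show up in practice

        int run;        // current streak of consecutive tails
        int after;      // flips observed right after such a streak
        int headsAfter; // how many of those flips were heads

        foreach (_; 0 .. flips)
        {
            immutable heads = uniform(0, 2) == 0;
            if (run >= runLength)
            {
                ++after;
                if (heads) ++headsAfter;
            }
            run = heads ? 0 : run + 1;
        }

        writefln("P(heads | %s tails in a row) ~ %.3f",
                 runLength, cast(double) headsAfter / after);
    }

The 1-in-2^10,001 figure and the ~0.5 figure are both right; they just answer the two different questions described above.)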
 That said, it's just: When you first randomly choose the door, you would
 intuitively rather bet that you guessed wrong. The show master is simply
 proposing to tell you behind which of the other doors the car is in case
 you indeed guessed wrong.

 There's not more to it.
Hmm, yea, an interesting way to look at it.
Oct 08 2014
parent reply "eles" <eles eles.com> writes:
On Wednesday, 8 October 2014 at 07:35:25 UTC, Nick Sabalausky 
wrote:
 On 10/07/2014 07:49 PM, Timon Gehr wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 Ex: A lot of people have trouble understanding that getting 
 "heads" in a
 coinflip many times in a row does *not* increase the 
 likelihood of the
 next flip being "tails". And there's a very understandable
Of course it does not increase the probability to get a "tails". Actually, it increases the probability that you'll get "heads" again. For the simplest explanation, see here: http://batman.wikia.com/wiki/Two-Face's_Coin
Oct 08 2014
parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Wednesday, 8 October 2014 at 07:40:14 UTC, eles wrote:
 Of course it does not increase the probability to get a 
 "tails". Actually, it increases the probability that you'll get 
 "heads" again.
Er, no, unless you assume that repeated flips of the coin in some way deform it so as to bias the outcome. An extended run of heads might lead you to conclude that the coin is biased, and so increase your _expectation_ that the next flip will also result in a head, but that doesn't alter the actual underlying probability.
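To make the expectation-vs-probability distinction concrete, here's a minimal D sketch of the Bayes update, assuming a made-up prior that 1 coin in 1000 is double-headed (the numbers are purely illustrative):

    import std.math, std.stdio;

    void main()
    {
        // Hypothetical prior: 1 in 1000 coins is double-headed. A long
        // run of heads raises the *posterior* belief that this coin is
        // biased, without touching any fair coin's actual 50%.
        double pBiased = 0.001;
        foreach (n; [1, 5, 10, 20])
        {
            double pRunGivenFair = 0.5 ^^ n; // P(n heads | fair coin)
            double posterior = pBiased /
                (pBiased + (1 - pBiased) * pRunGivenFair);
            writefln("after %2s heads in a row: P(biased) = %.4f",
                     n, posterior);
        }
    }

After 10 heads the posterior is already around 0.5; after 20 it is nearly 1. The *expectation* shifts, while each fair coin's per-flip probability never moves.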
Oct 09 2014
next sibling parent reply Iain Buclaw via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 10 Oct 2014 01:35, "Joseph Rushton Wakeling via Digitalmars-d" <
digitalmars-d puremagic.com> wrote:
 On Wednesday, 8 October 2014 at 07:40:14 UTC, eles wrote:
 Of course it does not increase the probability to get a "tails".
Actually, it increases the probability that you'll get "heads" again.
 Er, no, unless you assume that repeated flips of the coin in some way
deform it so as to bias the outcome.
 An extended run of heads might lead you to conclude that the coin is
biased, and so increase your _expectation_ that the next flip will also result in a head, but that doesn't alter the actual underlying probability. http://www.wired.com/2012/12/what-does-randomness-look-like/
Oct 09 2014
next sibling parent reply "eles" <eles eles.com> writes:
On Friday, 10 October 2014 at 06:28:06 UTC, Iain Buclaw via 
Digitalmars-d wrote:
 On 10 Oct 2014 01:35, "Joseph Rushton Wakeling via 
 Digitalmars-d" <
 digitalmars-d puremagic.com> wrote:
 On Wednesday, 8 October 2014 at 07:40:14 UTC, eles wrote:
 Of course it does not increase the probability to get a 
 "tails".
Actually, it increases the probability that you'll get "heads" again.
 Er, no, unless you assume that repeated flips of the coin in 
 some way
deform it so as to bias the outcome.
;) It was a joke. The link that I posted was supposed to make things clear. :)
Oct 10 2014
parent "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Friday, 10 October 2014 at 09:21:37 UTC, eles wrote:
 ;) It was a joke. The link that I posted was supposed to make 
 things clear. :)
You two-faced person, you. ;-)
Oct 10 2014
prev sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Friday, 10 October 2014 at 06:28:06 UTC, Iain Buclaw via 
Digitalmars-d wrote:
 http://www.wired.com/2012/12/what-does-randomness-look-like/
... yes, allowing for the reasonable expectations one can have for extended runs of heads in a regular 50/50 coin-flip process.

Actually, related to that article: in my very first stats lecture at university, the first slide the lecturer showed us (on the overhead projector...) was, side by side, two patterns of dots, each within a rectangular area. He asked: "Do you think these points are distributed at random?"

Well, they pretty much looked the same to the naked eye. Then he took another transparency, which placed grids over the two rectangular dot-filled areas. In one, the dots were here, there, with some grid squares containing no dots at all, some containing clusters, whatever. In the other, every single grid square contained exactly one dot.

I still think that was one of the single most important lessons in probability that I ever had.
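For anyone who wants to recreate that demonstration, a minimal D sketch (grid size and point count picked arbitrarily):

    import std.random, std.stdio;

    void main()
    {
        // Scatter 100 points uniformly over a 10x10 grid and count how
        // many land in each cell. Genuinely random placement leaves some
        // cells empty and others crowded; "exactly one dot per cell" is
        // the signature of a deliberately evened-out, non-random layout.
        int[10][10] counts;
        auto rng = Random(unpredictableSeed);
        foreach (i; 0 .. 100)
            counts[uniform(0, 10, rng)][uniform(0, 10, rng)]++;
        foreach (row; counts)
            writeln(row);
    }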
Oct 10 2014
next sibling parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/10/2014 05:31 AM, Joseph Rushton Wakeling wrote:
 On Friday, 10 October 2014 at 06:28:06 UTC, Iain Buclaw via
 Digitalmars-d wrote:
 http://www.wired.com/2012/12/what-does-randomness-look-like/
... yes, allowing for the reasonable expectations one can have for extended runs of heads in a regular 50/50 coin-flip process. Actually, related to that article, in my very first stats lecture at university, the first slide the lecturer showed us (on the overhead projector...) was, side by side, two patterns of dots each within a rectangular area. He asked: "Do you think these points are distributed at random?" Well, they pretty much looked the same to the naked eye. Then he took another transparency, which placed grids over the two rectangular dot-filled areas. In one, the dots were here, there, with some grid squares containing no dots at all, some containing clusters, whatever. In the other, every single grid square contained exactly one dot. I still think that was one of the single most important lessons in probability that I ever had.
I like that. I actually have a similar classroom probability story too (involving one of the best teachers I ever had):

As part of a probability homework assignment, we were asked to flip a coin 100 times and write down the results. "Uhh, yea, there's no way I'm doing that. I'm just gonna write down a bunch of T's and F's."

Having previously played around with PRNG's (using them, not actually creating them), I had noticed that you do tend to get surprisingly long runs of one value missing, or the occasional clustering. I carefully used that knowledge to help me cheat.

During the next class, the teacher pointed out that "I can tell, most of you didn't actually flip a coin, did you? You just wrote down T's and F's..." Which turned out to be the whole *point* of the assignment. Deliberately get students to "cheat" and fake randomness - poorly - in order to *really* get them to understand the nature of randomness.

Then he turned to me and said, "Uhh, Nick, you actually DID flip a coin, didn't you?"

Hehe heh heh. "Nope :)" I got a good chuckle out of that.
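The runs are easy to quantify, by the way - a minimal D sketch; in 100 fair flips, the longest run of identical results usually comes out around 6 or 7, which is exactly what hand-faked sequences tend to lack:

    import std.random, std.stdio;

    void main()
    {
        // Flip 100 fair coins and report the longest run of identical
        // outcomes.
        auto rng = Random(unpredictableSeed);
        int longest = 1, current = 1;
        bool prev = uniform(0, 2, rng) == 1;
        foreach (i; 1 .. 100)
        {
            bool flip = uniform(0, 2, rng) == 1;
            current = (flip == prev) ? current + 1 : 1;
            if (current > longest) longest = current;
            prev = flip;
        }
        writefln("longest run in 100 flips: %s", longest);
    }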
Oct 11 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/11/2014 1:34 PM, Nick Sabalausky wrote:
 Having previously played around with PRNG's (using them, not actually creating
 them), I had noticed that you do tend to get surprisingly long runs of one
value
 missing, or the occasional clustering. I carefully used that knowledge to help
 me cheat.
Election fraud is often detected by examining the randomness of the least significant digits of the vote totals.
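A minimal sketch of that check in D, with made-up vote totals (a real analysis would apply a chi-squared test to the digit frequencies):

    import std.stdio;

    void main()
    {
        // Hypothetical vote totals. Genuine counts should have roughly
        // uniform last digits; human-invented numbers tend to over-use
        // "round" digits like 0 and 5.
        int[] totals = [1520, 2310, 1885, 1240, 1675,
                        2005, 1450, 1995, 1730, 1865];
        int[10] freq;
        foreach (t; totals)
            freq[t % 10]++;
        writeln("last-digit frequencies: ", freq);
    }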
Oct 14 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/10/2014 2:31 AM, Joseph Rushton Wakeling wrote:
 I still think that was one of the single most important lessons in probability
 that I ever had.
Research shows that humans, even trained statisticians, are spectacularly bad at intuitive probability. -- "Thinking, Fast and Slow" by Daniel Kahneman
Nov 01 2014
prev sibling parent "eles" <eles eles.com> writes:
On Friday, 10 October 2014 at 00:32:05 UTC, Joseph Rushton
Wakeling wrote:
 On Wednesday, 8 October 2014 at 07:40:14 UTC, eles wrote:
 Of course it does not increase the probability to get a 
 "tails". Actually, it increases the probability that you'll 
 get "heads" again.
Did you see the link? ;;)
Oct 10 2014
prev sibling next sibling parent ketmar via Digitalmars-d <digitalmars-d puremagic.com> writes:
On Tue, 7 Oct 2014 17:37:51 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> wrote:

 told in a deliberately misleading way -- the fact that the host
 *never* opens the right door is often left as an unstated "common
 sense" assumption, thereby increasing the likelihood that people will
 overlook this minor but important detail.
that's why i was always against this "problem". if you're giving me a logic problem, give me all the information. anything not written clearly can't be part of the problem. that's why the right answer for the problem, when i wasn't told that the host never opens the right door, is "50/50".
Oct 08 2014
prev sibling parent reply "eles" <eles eles.com> writes:
On Tuesday, 7 October 2014 at 23:49:37 UTC, Timon Gehr wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 On 10/07/2014 06:47 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky
 What is this reason?
This one: The result of a coin toss is independent at any given attempt. It does not depend on past or future results. The probability is 0.5.

On the other hand, the probability of obtaining a series of 100 heads in a row is very small (exactly because of the independence). Obtaining a series of 101 heads in a row is even smaller, so people will assume that the 101st toss should probably give a "tails". But they forget that the probability of a 101-series where the first 100 are heads and the 101st is tails is exactly *the same* as the probability of a 101-series of heads.

They compare the probability of a 101-series of heads with the probability of a series of 100 heads instead of comparing it against a series with first 100 heads and the 101rd being a tail. It is a choice bias (we have a tendency to compare things that are easier - not more pertinent - to compare).
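Written out (just the arithmetic, in LaTeX notation):

    P(100 heads, then a tail) = (1/2)^{100} \cdot (1/2) = (1/2)^{101}
    P(101 heads)              = (1/2)^{101}
    P(100 heads)              = (1/2)^{100}

The two 101-toss sequences are exactly equally (un)likely; the tail only looks "due" when it is compared against the already-completed 100-head run instead.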
Oct 08 2014
parent "eles" <eles eles.com> writes:
On Wednesday, 8 October 2014 at 07:51:39 UTC, eles wrote:
 On Tuesday, 7 October 2014 at 23:49:37 UTC, Timon Gehr wrote:
 On 10/08/2014 12:10 AM, Nick Sabalausky wrote:
 On 10/07/2014 06:47 AM, "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?=
 <ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Tuesday, 7 October 2014 at 08:19:15 UTC, Nick Sabalausky
 against a series with first 100 heads and the 101rd being a
well, 101st
Oct 08 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 1:19 AM, Nick Sabalausky wrote:
 This is VERY simple, and crucial, stuff. And yet I see SOOO many grown adults,
 even ones with advanced graduate degrees, consistently fail completely and
 utterly at basic logical reasoning in everyday life (and we're talking very,
very
 obvious and basic fallacies), that it's genuinely disturbing.
I agree with teaching logical fallacies. I believe one of the most important things we can teach the young is how to separate truth from crap. And this is not done - I'd never really heard of logical fallacies until after college. (I was taught the scientific method, though.) I.e. logical fallacies and the scientific method should be core curriculum. Ironically, I've seen many researchers with PhD's carefully using the scientific method in their research, and promptly lapsing into logical fallacies with everything else. It's like sales techniques. I've read books on sales techniques and the psychology behind them. I don't use or apply them with any skill, but it has enabled me to recognize when those techniques are used on me, and has the effect of immunizing me against them. At least learning the logical fallacies helps immunize one against being fraudulently influenced.
Oct 07 2014
parent reply Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 03:37 PM, Walter Bright wrote:
 I believe one of the most
 important things we can teach the young is how to separate truth from
 crap. And this is not done
Hear, hear!
 I.e. logical fallacies and the scientific method should be core curriculum.
Yes. My high-school (and maybe junior high, IIRC) science classes covered the scientific method at least. So that much is good (at least, where I was anyway).
 Ironically, I've seen many researchers with PhD's carefully using the
 scientific method in their research, and promptly lapsing into logical
 fallacies with everything else.
Yes, people use entirely different mindsets for different topics. Seems to be an inherent part of the mind, and I can certainly see some benefit to that. Unfortunately it can occasionally go wrong, like you describe.
 It's like sales techniques. I've read books on sales techniques and the
 psychology behind them. I don't use or apply them with any skill, but it
 has enabled me to recognize when those techniques are used on me, and
 has the effect of immunizing me against them.

 At least learning the logical fallacies helps immunize one against being
 fraudulently influenced.
Definitely. I can always spot a commissioned (or "bonus"-based) salesman a mile away. A lot of their tactics are incredibly irritating, patronizing, and frankly very transparent. (But my dad's a salesman, so maybe that's how I managed to develop a finely-tuned "sales-bullshit detector".)

It's interesting (read: disturbing) how convinced they are that you're just being rude and difficult when you don't fall hook, line and sinker for their obvious bullshit and their obvious lack of knowledge.

Here's another interesting tactic you may not be aware of: I'm not sure how widespread this is, but I have direct inside information that it *is* common in car dealerships around my general area. Among themselves, the salesmen have a common saying: "Buyers are liars". It's an interesting (and disturbing) method of ensuring salesmen police themselves and continue to be 100% ready and willing to abandon ethics and bullshit the crap out of customers. Obviously customers *do* lie of course (and that helps the tactic perpetuate itself), but when a *salesman* says it, it really is an almost hilarious case of "The pot calling the grey paint 'black'." It's a salesman's whole freaking *job* to be a professional liar! (And there's all sorts of tricks to self-rationalizing it and staying on the good side of the law. But their whole professional JOB is to *bullshit*! And they themselves are dumb enough to buy into their *own* "It's the *buyers* who are dishonest!" nonsense.)

Casinos are similar. Back in college, when my friends and I were all 19 and attending a school only about 2 hours from Canada...well, whaddya expect?...We took a roadtrip up to Casino Windsor! Within minutes of walking through the place I couldn't even *help* myself from counting the seemingly-neverending stream of blatantly-obvious psychological gimmicks. It was just one after another, everywhere you'd look, and they were so SO OBVIOUS it was like walking inside a salesman's brain. The physical manifestation of a direct insult to people's intelligence. It's really unbelievable how stupid a person has to be to fall for those blatant tricks.

But then again, slots and video poker aren't exactly my thing anyway. I'm from the 80's: If I plunk coins into a machine I expect to get food, beverage, clean laundry, or *actual gameplay*. Repeatedly purchasing the message "You lose" while the entire building itself is treating me like a complete brain-dead idiot isn't exactly my idea of "addictive". If I want to spend money to watch non-interactive animations and occasionally push a button or two to keep it all going, I'll just buy "The Last of Us".
Oct 07 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/7/2014 3:54 PM, Nick Sabalausky wrote:
 It's a salesman's whole freaking *job* to be a professional liar!
Poor salesmen are liars. But the really, really good ones are ones who are able to match up what a customer needs with the right product for him. There, he is providing a valuable service to the customer. Serve the customer well like that, and you get a repeat customer. I know many salesmen who get my repeat business because of that. The prof who taught me accounting used to sell cars. I asked him how to tell a good dealership from a bad one. He told me the good ones have been in business for more than 5 years, because by then one has run out of new suckers and is relying on repeat business.
 But then again, slots and video poker aren't exactly my thing anyway. I'm from
 the 80's: If I plunk coins into a machine I expect to get food, beverage, clean
 laundry, or *actual gameplay*. Repeatedly purchasing the message "You lose"
 while the entire building itself is treating me like a complete brain-dead
idiot
 isn't exactly my idea of "addictive".
I found gambling to be a painful experience, not entertaining at all.
Oct 07 2014
parent Nick Sabalausky <SeeWebsiteToContactMe semitwist.com> writes:
On 10/07/2014 11:29 PM, Walter Bright wrote:
 On 10/7/2014 3:54 PM, Nick Sabalausky wrote:
 It's a salesman's whole freaking *job* to be a professional liar!
Poor salesmen are liars. But the really, really good ones are ones who are able to match up what a customer needs with the right product for him. There, he is providing a valuable service to the customer.
Can't say I've personally come across any of the latter (it relies on salesmen knowing what they're talking about and still working sales anyway - which I'm sure does occur for various reasons, but doesn't seem common from what I've seen). But maybe I've just spent far too much time at MicroCenter ;) Great store, but dumb sales staff ;)
 Serve the customer well like that, and you get a repeat customer. I know
 many salesmen who get my repeat business because of that.
Certainly reasonable points, and I'm glad to hear there *are* respectable ones out there.
 The prof who taught me accounting used to sell cars. I asked him how to
 tell a good dealership from a bad one. He told me the good ones have
 been in business for more than 5 years, because by then one has run out
 of new suckers and is relying on repeat business.
That sounds reasonable on the surface, but it relies on several questionable assumptions:

1. Suckers routinely know they've been suckered.
2. Suckers avoid giving repeat business to those who suckered them (not as reliable an assumption as one might expect).
3. The rate of loss on previous suckers overshadows the rate of new suckers. (Ex: No matter how badly people hate working at McDonald's, they're unlikely to run low on fresh applicants without a major birthrate decline - and even then they'd have 16 years to prepare.)
4. Good dealerships don't become bad.
5. There *exists* a good one within a reasonable distance.
6. People haven't become disillusioned and given up on trying to find a good one (whether a good one exists or not, the effect here would be the same).
7. The bad ones aren't able to compete/survive through other means. (Cornering a market, mindshare, convincing ads, misc gimmicks, merchandising or other side-revenue streams, anti-competitive practices, etc.)

Also, the strategy has a potential self-destruct switch: Even if the strategy works, if enough people follow it then even good dealerships might not be able to survive the initial 5-year trial.

Plus, I know of a counter-example around here. From an insider, I've heard horror stories about the shit the managers, finance people, etc. would pull. But they're a MAJOR dealer in the area and have been for as long as I can remember.
 But then again, slots and video poker aren't exactly my thing anyway.
 I'm from
 the 80's: If I plunk coins into a machine I expect to get food,
 beverage, clean
 laundry, or *actual gameplay*. Repeatedly purchasing the message "You
 loose"
 while the entire building itself is treating me like a complete
 brain-dead idiot
 isn't exactly my idea of "addictive".
I found gambling to be a painful experience, not entertaining at all.
I actually enjoyed that evening quite a bit: A road trip with friends is always fun, as is seeing new places, and it was certainly a very pretty place inside (for a very good reason, of course). But admittedly, the psychological tricks were somewhat insulting, and by the time I got through the $20 I'd budgeted I had already gotten very, very bored with slot machines and video poker. And blackjack's something I'd gotten plenty of all the way back on the Apple II. If I want to gamble I'll just go buy more insurance ;) Better odds. Or stock market. At least that doesn't have quite as much of a "house" that "always wins", at least not to the same extent.
Oct 08 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/6/2014 4:06 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On a positive note: the IOC managed to demand that the Norwegian King ought to
 hold a party for the IOC leaders and additionally demanded that he should pay
 their drinks. It was part of their 7000 page olympics qualification
requirements
 document. It is so heavily regulated that it explicitly specifies that the
 personnel in the hotels MUST SMILE to the IOC leaders when they arrive. I kid
 you not, even games and pastimes are heavily bureaucratic down to minuscule
 details these days. So, due pressure from the newspapers/grassroots and the
 royal insult the politicians eventually had to turn down the ~$10.000.000.000
 winter olympics budget proposal. Good riddance. Live monarchy!
Yay for Norway!
Oct 07 2014
prev sibling parent reply =?UTF-8?Q?Tobias=20M=C3=BCller?= <troplin bluewin.ch> writes:
"Paolo Invernizzi" <paolo.invernizzi no.address> wrote:
 And guess what, here the buildings made by the ancient Romans are still "up and
 running", while we have school buildings made in the '90s that come down
 at every earthquake...
All the bad buildings from the ancient Romans already came down over the last 2000 years. The best 1% survived.

Tobi
Oct 05 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 3:00 AM, Tobias Müller wrote:
 All the bad buildings from the ancient Romans already came down over the
 last 2000 years. The best 1% survived.
Yay for survivorship bias!
Oct 05 2014
parent reply "eles" <eles215 gzk.dot> writes:
On Sunday, 5 October 2014 at 20:46:16 UTC, Walter Bright wrote:
 On 10/5/2014 3:00 AM, Tobias Müller wrote:
 All the bad buildings from the ancient Romans already came
 down over the
 last 2000 years. The best 1% survived.
Yay for survivorship bias!
The same happens when asking people whether this or that dictatorship was good or bad. You hear "it wasn't so bad, after all". Yes, because they only ask the survivors; they don't go and ask the dead, too. Sorry for my rant about politics.
Oct 05 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/5/2014 2:55 PM, eles wrote:
 The same happens when asking people whether this or that dictatorship was good or
 bad. You hear "it wasn't so bad, after all". Yes, because they only ask the
 survivors; they don't go and ask the dead, too.
Yeah, I've often wondered what the dead soldiers would say about wars - whether it was worth it or not. Only the ones who lived get interviewed.
 Sorry, my rant about politics.
Oct 05 2014
prev sibling next sibling parent reply "Piotrek" <p nonexistent.pl> writes:
On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:

 My point, and I think Kagamin's as well, is that the entire 
 plane is a system and the redundant internals are subsystems.  
 They may not share memory, but they are wired to the same 
 sensors, servos, displays, etc.  Thus the point about shutting 
 down the entire plane as a result of a small failure is fair.
This "real life" example: http://en.wikipedia.org/wiki/Air_France_Flight_447 I just pick some interesting statements (there are other factors described as well): "temporary inconsistency between the measured speeds, likely as a result of the obstruction of the pitot tubes by ice crystals, causing autopilot disconnection and reconfiguration to alternate law;" And as I can see it, all subsystems related to the "small failure" was shut down. But what is also important information was not clearly provided to the pilots: "Despite the fact that they were aware that altitude was declining rapidly, the pilots were unable to determine which instruments to trust: it may have appeared to them that all values were incoherent" "the cockpit lacked a clear display of the inconsistencies in airspeed readings identified by the flight computers;" Piotrek
Oct 03 2014
next sibling parent "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:
 And as I can see it, all subsystems related to the "small
 failure" were shut down. But what is also important:
 information was not clearly provided to the pilots:

 "Despite the fact that they were aware that altitude was 
 declining rapidly, the pilots were unable to determine which 
 instruments to trust: it may have appeared to them that all 
 values were incoherent"

 "the cockpit lacked a clear display of the inconsistencies in 
 airspeed readings identified by the flight computers;"
There's a similar issue with nuclear reactors, which is that there are so many blinky lights and such that it can be impossible to spot or prioritize problems in a failure scenario. I know there have been articles written on revisions of user interface design in reactors specifically to deal with this issue, and I suspect the ideas are applicable to error handling in general.
Oct 03 2014
prev sibling next sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:
 On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:

 This "real life" example:

 http://en.wikipedia.org/wiki/Air_France_Flight_447

 I just pick some interesting statements (there are other 
 factors described as well):

 "temporary inconsistency between the measured speeds, likely as 
 a result of the obstruction of the pitot tubes by ice crystals, 
 causing autopilot disconnection and reconfiguration to 
 alternate law;"


 And as I can see it, all subsystems related to the "small
 failure" were shut down. But what is also important:
 information was not clearly provided to the pilots:

 "Despite the fact that they were aware that altitude was 
 declining rapidly, the pilots were unable to determine which 
 instruments to trust: it may have appeared to them that all 
 values were incoherent"

 "the cockpit lacked a clear display of the inconsistencies in 
 airspeed readings identified by the flight computers;"

 Piotrek
As someone who has read the original report in full, I think that you have taken a bad example: although the autopilot was disengaged, the stall alarm rang a plethora of times.

There's no real alternative to disengaging the autopilot if that fundamental parameter is compromised.

It took the captain only a few moments to understand the problem (read the voice-recording transcription), but it was too late...

---
/Paolo
Oct 03 2014
next sibling parent reply "Piotrek" <p nonexistent.pl> writes:
On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi wrote:

 As someone who has read the original report in full, I think
 that you have taken a bad example: although the autopilot was
 disengaged, the stall alarm rang a plethora of times.
My point was that the broken speed indicators shut down the autopilot systems. Piotrek
Oct 03 2014
next sibling parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Friday, 3 October 2014 at 22:27:44 UTC, Piotrek wrote:
 On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi 
 wrote:

 As someone who has read the original report in full, I think
 that you have taken a bad example: although the autopilot was
 disengaged, the stall alarm rang a plethora of times.
My point was that the broken speed indicators shut down the autopilot systems. Piotrek
And that is still the only reasonable thing to do in that case. --- /Paolo
Oct 04 2014
parent "Piotrek" <p nonexistent.pl> writes:
On Saturday, 4 October 2014 at 08:24:40 UTC, Paolo Invernizzi 
wrote:

 And that is still the only reasonable thing to do in that case.

 ---
 /Paolo
And I never said otherwise. See my response to Walter's post. Piotrek
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 3:27 PM, Piotrek wrote:
 My point was that the broken speed indicators shut down the autopilot systems.
The alternative is to have the autopilot crash the airplane. The autopilot cannot fly with compromised airspeed data.
Oct 04 2014
parent "Piotrek" <p nonexistent.pl> writes:
On Saturday, 4 October 2014 at 08:30:11 UTC, Walter Bright wrote:
 On 10/3/2014 3:27 PM, Piotrek wrote:
 My point was that the broken speed indicators shut down the 
 autopilot systems.
The alternative is to have the autopilot crash the airplane. The autopilot cannot fly with compromised airspeed data.
Yes, I know. I just provided that example as a response to:
 Do you interpret airplane safety right? As I understand, 
 airplanes are safe
 exactly because they recover from assert failures and continue 
 operation.
And Paolo stated it's a bad example. Maybe it is, but I couldn't find a better one. This accident just sits in my head, as its sequence of events shocked me the most of all the accident stories I've heard.

Piotrek
Oct 04 2014
prev sibling parent reply "eles" <eles215 gzk.dot> writes:
On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi wrote:
 On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:
 On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:
 As someone who has read the original report in full, I think
 that you have taken a bad example: although the autopilot was
 disengaged, the stall alarm rang a plethora of times.

 There's no real alternative to disengaging the
 autopilot if that fundamental parameter is compromised.

 It took the captain only a few moments to understand the problem
 (read the voice-recording transcription), but it was too late...
For the curious, the flight analysis is here:

http://www.popularmechanics.com/technology/aviation/crashes/what-really-happened-aboard-air-france-447-6611877

The captain's first error was to leave the cockpit while approaching a storm. His second was to hand command to the less-experienced co-pilots - a problem not only for the quality of the flight, but also for the quality of the team. His third error was to have neglected the fact that the radar was not correctly set up. And his most important (and final) error was not taking the controls back when he came back into the cockpit.

And it was the airline's fault to staff a transatlantic flight with just one experienced man and two very inexperienced co-pilots.

This is a beginner's mistake - "the insanity of pulling back on the controls while stalled" - and this passage sums it up quite well:

"the captain of the flight makes no attempt to physically take control of the airplane. Had Dubois done so, he almost certainly would have understood, as a pilot with many hours flying light airplanes, the insanity of pulling back on the controls while stalled"

If you read the analysis, you get scared.
Oct 03 2014
parent reply "eles" <eles eles.com> writes:
On Saturday, 4 October 2014 at 05:26:52 UTC, eles wrote:
 On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi 
 wrote:
 On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:
 On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:
 For the curious, the flight analysis here:

 http://www.popularmechanics.com/technology/aviation/crashes/what-really-happened-aboard-air-france-447-6611877
A newly published analysis of the same: http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash
Oct 14 2014
parent "eles" <eles215 gzk.dot> writes:
On Tuesday, 14 October 2014 at 15:57:05 UTC, eles wrote:
 On Saturday, 4 October 2014 at 05:26:52 UTC, eles wrote:
 On Friday, 3 October 2014 at 20:31:42 UTC, Paolo Invernizzi 
 wrote:
 On Friday, 3 October 2014 at 18:00:58 UTC, Piotrek wrote:
 On Friday, 3 October 2014 at 15:43:59 UTC, Sean Kelly wrote:
 For the curious, the flight analysis here:

 http://www.popularmechanics.com/technology/aviation/crashes/what-really-happened-aboard-air-france-447-6611877
 A newly published analysis of the same: http://www.vanityfair.com/business/2014/10/air-france-flight-447-crash
& the movie of it: http://www.almdares.net/vz/youtube_browser.php?do=show&vidid=TsgyBqlFixo
Oct 27 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 11:00 AM, Piotrek wrote:
 http://en.wikipedia.org/wiki/Air_France_Flight_447

 I just pick some interesting statements (there are other factors described as
 well):
The overriding failure in that accident was that the pilot panicked and persistently did the wrong thing, the opposite of what every pilot is relentlessly trained to do.
Oct 04 2014
parent reply "Paolo Invernizzi" <paolo.invernizzi no.address> writes:
On Saturday, 4 October 2014 at 08:28:40 UTC, Walter Bright wrote:
 On 10/3/2014 11:00 AM, Piotrek wrote:
 http://en.wikipedia.org/wiki/Air_France_Flight_447

 I just pick some interesting statements (there are other 
 factors described as
 well):
 The overriding failure in that accident was that the pilot panicked and persistently did the wrong thing, the opposite of what every pilot is relentlessly trained to do.
One point that the report stressed: when one co-pilot repeatedly pulled back on the joystick to raise the plane, the other could not see that action, as the two joysticks are not connected and do not move in sync in the Airbus.

Basically, the info-path for the joystick between two disconnected systems (the co-pilots) is cut in a modern plane, so it's more difficult to check whether their "output" (push/pull/etc.) is coherent in the end.

That, in my opinion, was the biggest problem in that tragedy.

---
/Paolo
Oct 04 2014
next sibling parent reply "Piotrek" <p nonexistent.pl> writes:
On Saturday, 4 October 2014 at 08:45:57 UTC, Paolo Invernizzi 
wrote:

 Basically, the info-path for the joystick between two
 disconnected systems (the co-pilots) is cut in a modern plane, so
 it's more difficult to check whether their "output" (push/pull/etc.)
 is coherent in the end.

 That, in my opinion, was the biggest problem in that tragedy.
 ---
 /Paolo
Yeah, I'd like to know the rationale for asynchronous joysticks.

Piotrek
Oct 04 2014
parent "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 09:36:37 UTC, Piotrek wrote:
 On Saturday, 4 October 2014 at 08:45:57 UTC, Paolo Invernizzi 
 wrote:
 Yeah, I wish to know the rationale for asynchronous joysticks.
Not only that, the computer also *averaged* the inputs. (I am really at a loss with that, in a plane: if there is a mountain in front of the plane, and one pilot pulls left while the other pilot pulls right, the computer makes the "smart" decision to fly straight ahead through the mountain.)

At least if it would let you know: "the other seat is giving me different inputs! I warn you, I am averaging the inputs!". It did not warn.

As the right seat had been pulling all the way back, the left seat saw nothing except that the plane did not respond (or responded very little) to his own inputs. He, following his inputs, had no feedback about how the plane was reacting. Not knowing the other was pulling back, all he saw was an unresponsive plane.

Anyway, systems might be blamed. But the attitude of that guy, who never informed the others that he had been pulling back for a quarter of an hour, is the issue there.
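For illustration only, a toy D sketch of dual-input arbitration with a made-up disagreement threshold - not the real flight-control law, just the shape of the warning that was missing:

    import std.math, std.stdio;

    // Average the two sidesticks, but raise a flag when they disagree --
    // the explicit "dual input" warning discussed above.
    double arbitrate(double left, double right, out bool conflict)
    {
        conflict = abs(left - right) > 0.5; // hypothetical threshold
        return (left + right) / 2;
    }

    void main()
    {
        bool conflict;
        // Full nose-down on one stick, full nose-up on the other:
        auto cmd = arbitrate(-1.0, +1.0, conflict);
        writefln("averaged command: %s, dual-input warning: %s",
                 cmd, conflict);
    }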
Oct 04 2014
prev sibling parent "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 08:45:57 UTC, Paolo Invernizzi 
wrote:
 On Saturday, 4 October 2014 at 08:28:40 UTC, Walter Bright 
 wrote:
 On 10/3/2014 11:00 AM, Piotrek wrote:
 That, in my opinion, was the biggest problem in that tragedy.
There is also this one: http://www.flightglobal.com/news/articles/stall-warning-controversy-haunts-af447-inquiry-360336/ "It insists the design of the stall warning "misled" the pilots. "Each time they reacted appropriately the alarm triggered inside the cockpit, as though they were reacting wrongly. Conversely each time the pilots pitched up the aircraft, the alarm shut off, preventing a proper diagnosis of the situation." SNPL's argument essentially suggests the on-off alarm might have incorrectly been interpreted by the crew as an indication that the A330 was alternating between being stalled and unstalled, when it was actually alternating between being stalled with valid airspeed data and stalled with invalid airspeed data."
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 8:43 AM, Sean Kelly wrote:
 My point, and I think Kagamin's as well, is that the entire plane is a system
 and the redundant internals are subsystems.  They may not share memory, but
they
 are wired to the same sensors, servos, displays, etc.
No, they do not share sensors, servos, etc.
 Thus the point about shutting down the entire plane as a result of a small
failure is fair.
That's a complete misunderstanding. In NO CASE does avionics software do anything after an assert but get shut down and physically isolated from what it controls. I've explained this over and over. It baffles me how twisted up this simple concept becomes when repeated back to me.
Oct 04 2014
next sibling parent "eles" <eles215 gzk.dot> writes:
On Saturday, 4 October 2014 at 08:15:51 UTC, Walter Bright wrote:
 On 10/3/2014 8:43 AM, Sean Kelly wrote:
 In NO CASE does avionics software do anything after an assert 
 but get shut down and physically isolated from what it controls.

 I've explained this over and over. It baffles me how twisted up 
 this simple concept becomes when repeated back to me.
AFAICT, you hold the right idea. Either the discussion fellows are arguing about a different thing and this is a communication problem, or they are on a wrong path.

Once an inconsistency in its own state is detected, any program quits guessing and shuts down, sometimes with minimal logging (if that can be done harmlessly). An inconsistency is when a software or a system finds itself in a situation it wasn't designed to face. Such a situation simply breaks the ultimate invariant of any critical program. This invariant is: "I know what I'm doing."

Yes, a software might detect inconsistencies in the rest of the system and try to correct those, *if* it is designed to (read: if it is made to know what it is doing). Majority voting, for example, is such a case, but there the basic hypothesis is that the sensors are not 100% reliable. And, sometimes, even the majority voting is directly wired.

But a critical software (and, for the general case, any software) only goes as far as it (believes it) knows what it's doing. When this is not the case, it does not continue; it stops immediately.
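Majority voting itself is simple enough to sketch. A minimal, purely illustrative D version of a 2-out-of-3 vote over redundant sensor readings (real avionics voting is, of course, far more involved):

    import std.algorithm, std.stdio;

    // Take the median of three redundant readings, so a single faulty
    // sensor cannot steer the result.
    double vote(double a, double b, double c)
    {
        double[3] r = [a, b, c];
        r[].sort();
        return r[1];
    }

    void main()
    {
        // One wildly wrong sensor is simply outvoted: prints 101.2
        writeln(vote(101.2, 100.9, 250.0));
    }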
Oct 04 2014
prev sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Saturday, 4 October 2014 at 08:15:51 UTC, Walter Bright wrote:
 On 10/3/2014 8:43 AM, Sean Kelly wrote:
 My point, and I think Kagamin's as well, is that the entire 
 plane is a system
 and the redundant internals are subsystems.  They may not 
 share memory, but they
 are wired to the same sensors, servos, displays, etc.
No, they do not share sensors, servos, etc.
Gotcha. I imagine there are redundant displays in the cockpit as well, which makes sense. Thus the unifying factor in an airplane is the pilot. In an unmanned system, it would be a control program (or a series of redundant control programs). So the system in this case includes the pilot.
 Thus the point about shutting down the entire plane as a 
 result of a small failure is fair.
That's a complete misunderstanding.
Right. So the system relies on the intelligence and training of the pilot for proper operation. Choosing which systems are in error vs. which are correct, etc. I still think an argument could be made that an entire airplane, pilot included, is analogous to a server infrastructure, or even a memory isolated program (the Erlang example). My only point in all this is that while choosing the OS process is a good default when considering the potential scope of undefined behavior, it's not the only definition. The pilot misinterpreting sensor data and making a bad judgement call is equivalent to the failure of distinct subsystems corrupting the state of the entire system to the point where the whole thing fails. The sensors were communicating confusing information to the pilot, and his programming, as it were, was not up to the task of separating the good information from the bad. Do you have any thoughts concerning my proposal in the "on errors" thread?
Oct 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 9:09 AM, Sean Kelly wrote:
 On Saturday, 4 October 2014 at 08:15:51 UTC, Walter Bright wrote:
 On 10/3/2014 8:43 AM, Sean Kelly wrote:
 My point, and I think Kagamin's as well, is that the entire plane is a system
 and the redundant internals are subsystems.  They may not share memory, but
they
 are wired to the same sensors, servos, displays, etc.
No, they do not share sensors, servos, etc.
Gotcha. I imagine there are redundant displays in the cockpit as well, which makes sense. Thus the unifying factor in an airplane is the pilot.
Even the pilot has a backup! Next time you go flying, peek in the cockpit. You'll see dual instruments and displays. If you examine the outside, you'll see two (or three) pitot tubes (which measure airspeed).
 Right.  So the system relies on the intelligence and training of the pilot for
 proper operation.  Choosing which systems are in error vs. which are correct,
 etc.
A lot of design revolves around making it obvious which component is the failed one, the classic being a red light on the instrument panel.
 I still think an argument could be made that an entire airplane, pilot
 included, is analogous to a server infrastructure, or even a memory isolated
 program (the Erlang example).
Anyone with little training can fly an airplane. Heck, you can go to any flight school and they'll take you up on an introductory flight and let you try out the controls in flight. Most of a pilot's training consists of learning how to deal with failure.
 My only point in all this is that while choosing the OS process is a good
 default when considering the potential scope of undefined behavior, it's not
the
 only definition.  The pilot misinterpreting sensor data and making a bad
 judgement call is equivalent to the failure of distinct subsystems corrupting
 the state of the entire system to the point where the whole thing fails.  The
 sensors were communicating confusing information to the pilot, and his
 programming, as it were, was not up to the task of separating the good
 information from the bad.
That's true. Many accidents have resulted from the pilot getting confused about the failures being reported to him, and his failure to properly grasp the situation and what to do about it. All of these result in reevaluations of how failures are presented to the pilot, and the pilot's training and procedures. On the other hand, many failures have not resulted in accidents because of the pilot's ability to "think outside the box" and come up with a creative solution on the spot. It's why we need human pilots. These solutions then become part of standard procedure!
 Do you have any thoughts concerning my proposal in the "on errors" thread?
Looks interesting, but haven't gotten to it yet.
Oct 04 2014
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 5:16 AM, Jacob Carlborg wrote:
 I have no idea of airplane works but I think Walter usual says they have at
 least three backup systems. If one system fails, shut it down and switch to the
 backup.
It's a "dual path" system, meaning there's one backup. Some systems are in triplicate, such as the hydraulic system.
Oct 04 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/3/2014 4:27 AM, Kagamin wrote:
 Do you interpret airplane safety right? As I understand, airplanes are safe
 exactly because they recover from assert failures and continue operation.
Nope. That's exactly 180 degrees from how it works. Any airplane system that detects a fault shuts itself down and the backup is engaged. No way in hell is software that has asserted allowed to continue.
Oct 04 2014
parent reply "Kagamin" <spam here.lot> writes:
On Saturday, 4 October 2014 at 08:08:49 UTC, Walter Bright wrote:
 On 10/3/2014 4:27 AM, Kagamin wrote:
 Do you interpret airplane safety right? As I understand, 
 airplanes are safe
 exactly because they recover from assert failures and continue 
 operation.
Nope. That's exactly 180 degrees from how it works. Any airplane system that detects a fault shuts itself down and the backup is engaged. No way in hell is software that has asserted allowed to continue.
Sure, software is one part of an airplane, like a thread is a part of a process. When the part fails, you discard it and continue operation. In software it works by rolling back a failed transaction. An airplane has some tricks to recover from failures, but still it's a "no fail" design you argue against: it shuts down parts one by one when and only when they fail and continues operation no matter what until nothing works and even then it still doesn't fail, just does nothing. The airplane example works against your arguments. The unreliable design you talk about would be committing a failed transaction, but no, nobody suggests that.
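Concretely, the rollback-and-continue I mean looks like this - a minimal D sketch with hypothetical request strings; the failed part is discarded and the process keeps serving, while the assert stays reserved for program-logic bugs:

    import std.stdio;

    void handle(string request)
    {
        assert(request !is null); // program-logic invariant: a bug if it fails
        if (request.length == 0)
            throw new Exception("empty request"); // input error: recoverable
        writeln("handled: ", request);
    }

    void main()
    {
        foreach (req; ["ok", "", "also ok"]) // hypothetical requests
        {
            try
                handle(req);
            catch (Exception e)
                writeln("rolled back: ", e.msg); // discard only the failed part
        }
    }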
Oct 15 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/15/2014 12:19 AM, Kagamin wrote:
 Sure, software is one part of an airplane, like a thread is a part of a
process.
 When the part fails, you discard it and continue operation. In software it
works
 by rolling back a failed transaction. An airplane has some tricks to recover
 from failures, but still it's a "no fail" design you argue against: it shuts
 down parts one by one when and only when they fail and continues operation no
 matter what until nothing works and even then it still doesn't fail, just does
 nothing. The airplane example works against your arguments.
This is a serious misunderstanding of what I'm talking about. Again, on an airplane, no way in hell is a software system going to be allowed to continue operating after it has self-detected a bug. Trying to bend the imprecise language I use into meaning the opposite doesn't change that.
Oct 16 2014
next sibling parent "Kagamin" <spam here.lot> writes:
On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:
 On 10/15/2014 12:19 AM, Kagamin wrote:
 Sure, software is one part of an airplane, like a thread is a 
 part of a process.
 When the part fails, you discard it and continue operation. In 
 software it works
 by rolling back a failed transaction. An airplane has some 
 tricks to recover
 from failures, but still it's a "no fail" design you argue 
 against: it shuts
 down parts one by one when and only when they fail and 
 continues operation no
 matter what until nothing works and even then it still doesn't 
 fail, just does
 nothing. The airplane example works against your arguments.
This is a serious misunderstanding of what I'm talking about. Again, on an airplane, no way in hell is a software system going to be allowed to continue operating after it has self-detected a bug.
Neither does a failed transaction. I already acknowledged that:
 When the part fails, you discard it and continue operation. In 
 software it works by rolling back a failed transaction.
 Trying to bend the imprecise language I use into meaning the 
 opposite doesn't change that.
Do you think I question that? I don't. I agree discarding a failed part is OK, and this is what traditional multithreaded server software already does: roll back a failed transaction and continue operation, just like an airplane: losing a part doesn't lose the whole.
Oct 17 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:
 On 10/15/2014 12:19 AM, Kagamin wrote:
 Sure, software is one part of an airplane, like a thread is a 
 part of a process.
 When the part fails, you discard it and continue operation. In 
 software it works
 by rolling back a failed transaction. An airplane has some 
 tricks to recover
 from failures, but still it's a "no fail" design you argue 
 against: it shuts
 down parts one by one when and only when they fail and 
 continues operation no
 matter what until nothing works and even then it still doesn't 
 fail, just does
 nothing. The airplane example works against your arguments.
This is a serious misunderstanding of what I'm talking about. Again, on an airplane, no way in hell is a software system going to be allowed to continue operating after it has self-detected a bug. Trying to bend the imprecise language I use into meaning the opposite doesn't change that.
To better depict the big picture as I see it:

You suggest that a system should shut down as soon as possible on the first sign of a failure which can affect the system. You provide the hospital-in-a-hurricane example. But you don't praise the hospitals which shut down on failure; you praise the hospital which continues to operate in the face of an unexpected and uncontrollable disaster - in total contradiction with your suggestion to shut down ASAP. You refer to an airplane's ability to not shut down ASAP and continue operation on unexpected failure as if it corresponds to your suggestion to shut down ASAP. This makes no sense; you contradict yourself.

Why didn't you praise the hospital shutdown? Why does nobody want airplanes to dive into the ocean on first suspicion? Because that's how unreliable systems work: they often stop working. Reliable systems work in a completely different way: they employ many tricks, but one big objective of those tricks is to have the ability to continue operation on failure. All the effort put into airplane design has one reason: to fight against immediate shutdown, which you defend as the only true way of operation - exactly the way explicitly rejected by real reliable-systems design.

How would an airplane without the tricks work? It would dive into the ocean on the first failure (and a crash investigation team diagnoses the failure) - exactly as you suggest. That's "safe": it could fall on a city or a nuclear reactor. How does a real airplane work? A failure happens and it still flies, contrary to your suggestion to shut down on failure. That's how critical missions are done: they take the risk of a greater disaster to complete the mission, and failures can be diagnosed when appropriate.

That's why I think your examples contradict your proposal.
Oct 31 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Oct 31, 2014 at 08:15:17PM +0000, Kagamin via Digitalmars-d wrote:
 On Thursday, 16 October 2014 at 19:53:42 UTC, Walter Bright wrote:
On 10/15/2014 12:19 AM, Kagamin wrote:
Sure, software is one part of an airplane, like a thread is a part
of a process.  When the part fails, you discard it and continue
operation. In software it works by rolling back a failed
transaction. An airplane has some tricks to recover from failures,
but still it's a "no fail" design you argue against: it shuts down
parts one by one when and only when they fail and continues
operation no matter what until nothing works and even then it still
doesn't fail, just does nothing. The airplane example works against
your arguments.
This is a serious misunderstanding of what I'm talking about. Again, on an airplane, no way in hell is a software system going to be allowed to continue operating after it has self-detected a bug. Trying to bend the imprecise language I use into meaning the opposite doesn't change that.
 To better depict the big picture as I see it: You suggest that a system should shut down as soon as possible on the first sign of a failure which can affect the system. You provide the hospital-in-a-hurricane example. But you don't praise the hospitals which shut down on failure; you praise the hospital which continues to operate in the face of an unexpected and uncontrollable disaster - in total contradiction with your suggestion to shut down ASAP. You refer to an airplane's ability to not shut down ASAP and continue operation on unexpected failure as if it corresponds to your suggestion to shut down ASAP. This makes no sense; you contradict yourself.
You are misrepresenting Walter's position. His whole point was that once a single component has detected a consistency problem within itself, it can no longer be trusted to continue operating and therefore must be shut down. That, in turn, leads to the conclusion that your system design must include multiple, redundant, independent modules that perform that one function. *That* is the real answer to system reliability.

Pretending that a failed component can somehow fix itself is a fantasy. The only way you can be sure you are not making the problem worse is by having multiple redundant units that can perform each other's function. Then, when one of the units is known to be malfunctioning, you turn it off and fall back to one of the other, known-to-be-good, components.


T

-- 
Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
Oct 31 2014
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 You are misrepresenting Walter's position. His whole point was 
 that once
 a single component has detected a consistency problem within 
 itself, it
 can no longer be trusted to continue operating and therefore 
 must be
 shut down. That, in turn, leads to the conclusion that your 
 system design
 must include multiple, redundant, independent modules that 
 perform that
 one function. *That* is the real answer to system reliability.
In server software, such a component is a transaction/request. They are independent.
 Pretending that a failed component can somehow fix itself is a 
 fantasy.
Traditionally a failed transaction is indeed rolled back. It's more a business logic requirement because a partially completed operation would confuse the user.
Oct 31 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Oct 31, 2014 at 09:11:53PM +0000, Kagamin via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via Digitalmars-d
 wrote:
You are misrepresenting Walter's position. His whole point was that
once a single component has detected a consistency problem within
itself, it can no longer be trusted to continue operating and
therefore must be shut down. That, in turn, leads to the conclusion
that your system design must include multiple, redundant, independent
modules that perform that one function. *That* is the real answer to
system reliability.
In server software, such a component is a transaction/request. They are independent.
You're using a different definition of "component". An inconsistency in a transaction is a problem with the input, not a problem with the program logic itself. If something is wrong with the input, the program can detect it and recover by aborting the transaction (rollback the wrong data). But if something is wrong with the program logic itself (e.g., it committed the transaction instead of rolling back when it detected a problem) there is no way to recover within the program itself.
Pretending that a failed component can somehow fix itself is a
fantasy.
Traditionally a failed transaction is indeed rolled back. It's more a business logic requirement because a partially completed operation would confuse the user.
Again, you're using a different definition of "component". A failed transaction is a problem with the data -- this is recoverable to some extent (that's why we have the ACID requirement of databases, for example). For this purpose, you vet the data before trusting that it is correct. If the data verification fails, you reject the request. This is why you should never use assert to verify data -- assert is for checking the program's own consistency, not for checking the validity of data that came from outside. A failed component, OTOH, is a problem with program logic. You cannot recover from that within the program itself, since its own logic has been compromised. You *can* rollback the wrong changes made to data by that malfunctioning program, of course, but the rollback must be done by a decoupled entity outside of that program. Otherwise you might end up causing even more problems (for example, due to the compromised / malfunctioning logic, the program commits the data instead of reverting it, thus turning an intermittent problem into a permanent one). T -- By understanding a machine-oriented language, the programmer will tend to use a much more efficient method; it is much closer to reality. -- D. Knuth
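In D terms, the distinction looks like this - a minimal sketch with a hypothetical parsePercent function; std.exception.enforce rejects bad outside data with a recoverable Exception, while assert guards the program's own logic:

    import std.conv, std.exception, std.stdio;

    int parsePercent(string input)
    {
        int n = input.to!int; // may throw ConvException: an input error
        enforce(n >= 0 && n <= 100, "percent out of range"); // input error
        int scaled = n * 255 / 100;
        assert(scaled >= 0 && scaled <= 255); // internal invariant: a bug if false
        return scaled;
    }

    void main()
    {
        writeln(parsePercent("42"));
        try
            writeln(parsePercent("142"));
        catch (Exception e)
            writeln("rejected: ", e.msg); // recover: reject the bad input
    }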
Oct 31 2014
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/31/2014 2:31 PM, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Oct 31, 2014 at 09:11:53PM +0000, Kagamin via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 20:33:54 UTC, H. S. Teoh via Digitalmars-d
 wrote:
 You are misrepresenting Walter's position. His whole point was that
 once a single component has detected a consistency problem within
 itself, it can no longer be trusted to continue operating and
 therefore must be shut down. That, in turn, leads to the conclusion
 that your system design must include multiple, redundant, independent
 modules that perform that one function. *That* is the real answer to
 system reliability.
In server software, such a component is a transaction/request. They are independent.
You're using a different definition of "component". An inconsistency in a transaction is a problem with the input, not a problem with the program logic itself. If something is wrong with the input, the program can detect it and recover by aborting the transaction (rollback the wrong data). But if something is wrong with the program logic itself (e.g., it committed the transaction instead of rolling back when it detected a problem) there is no way to recover within the program itself.
 Pretending that a failed component can somehow fix itself is a
 fantasy.
 Traditionally a failed transaction is indeed rolled back. It's more a business logic requirement because a partially completed operation would confuse the user.
 Again, you're using a different definition of "component". A failed transaction is a problem with the data -- this is recoverable to some extent (that's why we have the ACID requirement of databases, for example). For this purpose, you vet the data before trusting that it is correct. If the data verification fails, you reject the request. This is why you should never use assert to verify data -- assert is for checking the program's own consistency, not for checking the validity of data that came from outside.
 A failed component, OTOH, is a problem with program logic. You cannot recover from that within the program itself, since its own logic has been compromised. You *can* roll back the wrong changes made to data by that malfunctioning program, of course, but the rollback must be done by a decoupled entity outside of that program. Otherwise you might end up causing even more problems (for example, due to the compromised / malfunctioning logic, the program commits the data instead of reverting it, thus turning an intermittent problem into a permanent one).
This is a good summation of the situation.
Oct 31 2014
prev sibling next sibling parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 You're using a different definition of "component". An 
 inconsistency in
 a transaction is a problem with the input, not a problem with 
 the
 program logic itself. If something is wrong with the input, the 
 program
 can detect it and recover by aborting the transaction (rollback 
 the
 wrong data).
Transactions roll back when there is contention for resources and/or when you have any kind of integrity issue. That's why you have retries… so no, it is not only something wrong with the input. Something is temporarily wrong with the situation overall.
Oct 31 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/31/2014 5:38 PM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 Transactions roll back when there is contention for resources and/or when you
 have any kind of integrity issue. That's why you have retries… so no, it is not
 only something wrong with the input. Something is temporarily wrong with the
 situation overall.
Those are environmental errors, not programming bugs, and asserting for those conditions is the wrong approach.
Oct 31 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 1 November 2014 at 03:39:02 UTC, Walter Bright wrote:
 Those are environmental errors, not programming bugs, and 
 asserting for those conditions is the wrong approach.
The point is this: what happens in the transaction engine matters, what happens outside of it does not matter much. Asserts do not belong in release code at all...
Oct 31 2014
prev sibling next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Again, you're using a different definition of "component".
I see no justified reasoning why a process can be considered such a "component" and anything else cannot. In practice it is completely dependent on the system design as a whole, and calling the process a silver bullet only creates problems when it is in fact not.
Nov 01 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Nov 01, 2014 at 09:38:23AM +0000, Dicebot via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via Digitalmars-d
 wrote:
Again, you're using a different definition of "component".
 I see no justified reasoning why a process can be considered such a "component" and anything else cannot. In practice it is completely dependent on the system design as a whole, and calling the process a silver bullet only creates problems when it is in fact not.
I never said "component" == "process". All I said was that at the OS level, at least with current OSes, processes are the smallest unit that is decoupled from each other. If you go below that level of granularity, you have the possibility of shared memory being corrupted by one thread (or fibre, or whatever smaller than a process) affecting the other threads. So that means they are not fully decoupled, and the failure of one thread makes all other threads no longer trustworthy.

Obviously, you can go up to larger units than just processes when designing your system, as long as you can be sure they are decoupled from each other.


T

-- 
"No, John. I want formats that are actually useful, rather than over-featured megaliths that address all questions by piling on ridiculous internal links in forms which are hideously over-complex." -- Simon St. Laurent on xml-dev
Nov 01 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I never said "component" == "process". All I said was that at 
 the OS
 level, at least with current OSes, processes are the smallest 
 unit
 that is decoupled from each other. If you go below that level of
 granularity, you have the possibility of shared memory being 
 corrupted
 by one thread (or fibre, or whatever smaller than a process) 
 affecting
 the other threads. So that means they are not fully decoupled, 
 and the
 failure of one thread makes all other threads no longer 
 trustworthy.
This is a question of probability and impact.

If my Python program fails unexpectedly, then it could in theory be a bug in a C library, but it probably is not. So it is better to trap it and continue. If D provides bounds checks, is a solid language, has a solid compiler, has a solid runtime, and solid libraries… then the same logic applies!

If my C program traps on division by zero, then it probably is an unlucky incident and not a memory corruption issue. So it is probably safe to continue. If my program cannot find a file, it MIGHT be a kernel issue, but it probably isn't. So it is safe to continue. If my critical state is recorded behind a wall built on transactions or full-blown event logging, then it is safe to continue even if my front end might suffer from memory corruption.

You need to consider:

1. probability (what is the most likely cause of this signal?)
2. impact (do you have insurance?)
3. alternatives (are you in the middle of an air fight?)
Nov 01 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 I never said "component" == "process". All I said was that at 
 the OS
 level, at least with current OSes, processes are the smallest 
 unit
 that is decoupled from each other.
Which is exactly the statement I call wrong. With current OSes, processes aren't decoupled units at all - it is all about the feature set you stick to. Same with any other unit.
 If you go below that level of
 granularity, you have the possibility of shared memory being 
 corrupted
 by one thread (or fibre, or whatever smaller than a process) 
 affecting
 the other threads.
You already have that possibility at process level via shared process memory and kernel mode code. And you still don't have that possibility at thread/fiber level if you don't use mutable shared memory (or any global state in general). It is all about system design.

Pretty much the only reliably decoupled units I can imagine are processes running in different restricted virtual machines (or, better, different physical machines). Everything else gives just a certain level of expectations.

Walter has experience with certain types of systems where a process is indeed the most appropriate unit of granularity, and calls that a silver bullet by explicitly designing the language in a way that makes any other approach inherently complicated and effort-consuming. But there is more than that in the software world.
Nov 02 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 3:48 AM, Dicebot wrote:
 On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via Digitalmars-d
wrote:
 Which is exactly the statement I call wrong. With current OSes, processes
 aren't decoupled units at all - it is all about the feature set you stick to.
 Same with any other unit.
They have hardware protection against sharing memory between processes. It's a reasonable level of protection.
 If you go below that level of
 granularity, you have the possibility of shared memory being corrupted
 by one thread (or fibre, or whatever smaller than a process) affecting
 the other threads.
You already have that possibility at process level via shared process memory
1. very few processes use shared memory
2. those that do should regard it as input/environmental, and not trust it
 and kernel mode code.
Kernel mode code is the responsibility of the OS system, not the app.
 And you still don't have that possibility at thread/fiber
 level if you don't use mutable shared memory (or any global state in general).
A buffer overflow will render all that protection useless.
 It is all about system design.
It's about the probability of coupling and the level of coupling that your system can stand. Process level protection is adequate for most things.
 Pretty much the only reliably decoupled units I can imagine are processes
 running in different restricted virtual machines (or, better, different
 physical machines). Everything else gives just a certain level of expectations.
Everything is coupled at some level. Again, it's about the level of reliability needed.
 Walter has experience with certain types of systems where a process is indeed
 the most appropriate unit of granularity, and calls that a silver bullet by
 explicitly designing the language
I design the language to do what it can. A language cannot compensate for coupling and bugs in the operating system, nor can a language compensate for two machines being plugged into the same power circuit.
 in a way that makes any other approach inherently complicated
 and effort-consuming.
Using enforce is neither complicated nor effort consuming.

The idea that asserts can be recovered from is fundamentally unsound, and makes D unusable for robust critical software.

Asserts are for checking for programming bugs. A bug can be tripped because of a buffer overflow, memory corruption, a malicious code injection attack, etc. NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT. Running arbitrary cleanup code at this point is literally undefined behavior. This is not a failure of language design - no language can offer any guarantees about this.

If you want code cleanup to happen, use enforce(). If you are using enforce() to detect programming bugs, well, that's your choice. enforce() isn't any more complicated or effort-consuming than using assert().
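For example (a minimal sketch; the function names are made up):

import std.exception : enforce;
import std.file : exists;

void setThrottle(int percent)
{
    // a violation here is a caller bug: assert, don't throw
    assert(percent >= 0 && percent <= 100, "internal error: bad throttle");
}

void openConfig(string path)
{
    // environmental condition: enforce throws a catchable Exception,
    // and cleanup runs normally while it unwinds
    enforce(path.exists, "cannot find " ~ path);
}

void main()
{
    setThrottle(50);
    openConfig("app.conf");   // throws if the file is missing
}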
Nov 02 2014
next sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Sunday, 2 November 2014 at 17:53:45 UTC, Walter Bright wrote:
 On 11/2/2014 3:48 AM, Dicebot wrote:
 On Saturday, 1 November 2014 at 15:02:53 UTC, H. S. Teoh via 
 Digitalmars-d wrote:
 Which is exactly the statement I call wrong. With current OSes 
 processes aren't
 decoupled units at all - it is all about feature set you stick 
 to. Same with any
 other units.
They have hardware protection against sharing memory between processes. It's a reasonable level of protection.
reasonable default - yes
reasonable level of protection in general - no
 1. very few processes use shared memory
 2. those that do should regard it as input/environmental, and 
 not trust it
This is no different from:

1. very few threads use shared
2. those that do should regard it as input/environmental
 and kernel mode code.
 Kernel mode code is the responsibility of the OS system, not the app.
In some (many?) large scale server systems the OS is the app, or at least heavily integrated. Thinking about the app as a single independent user-space process is a bit outdated.
 And you still don't have that possibility at thread/fiber
 level if you don't use mutable shared memory (or any global 
 state in general).
A buffer overflow will render all that protection useless.
Nice that we have @safe and default thread-local memory!
 It is all about system design.
 It's about the probability of coupling and the level of coupling that your system can stand. Process level protection is adequate for most things.
Again, I am fine with advocating it as a reasonable default. What frustrates me is intentionally making any other design harder than it should be by explicitly allowing normal cleanup to be skipped. This behaviour is easy to achieve by installing a custom assert handler (it could be a generic Error handler too), but impossible to back out of when it is the default one.

Because of the above, avoiding more corruption during cleanup does not sound to me like a strong enough benefit to force that on everyone. Ironically, in a system with decent fault protection and safety redundancy it won't even matter (everything it can possibly corrupt is duplicated and proof-checked anyway).
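(For reference, this is roughly the one-liner I mean - a sketch assuming druntime's core.exception.assertHandler API:)

import core.exception : assertHandler;

shared static this()
{
    // abort immediately on a failed assert: no Error, no unwinding
    assertHandler = function void(string file, size_t line, string msg) nothrow
    {
        import core.stdc.stdlib : abort;
        abort();
    };
}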
 Walter has experience with certain types of systems where a process is
 indeed the most appropriate unit of granularity, and calls that a silver
 bullet by explicitly designing the language
I design the language to do what it can. A language cannot compensate for coupling and bugs in the operating system, nor can a language compensate for two machines being plugged into the same power circuit.
I don't expect you to do magic. My complaint is about making decisions that support designs you have great expertise with but hamper something different (but still very real) - decisions that are uncharacteristic of D (which I believe is a non-opinionated language) and don't really belong in a systems programming language.
 in a way that makes any other approach inherently complicated
 and effort-consuming.
Using enforce is neither complicated nor effort consuming.
 If you want code cleanup to happen, use enforce(). If you are 
 using enforce() to detect programming bugs, well, that's your 
 choice. enforce() isn't any more complicated or 
 effort-consuming than using assert().
I don't have another choice and I don't like it. It is effort consuming because it requires a manually maintained exception hierarchy and style rules to keep Errors distinct from Exceptions - something the language otherwise provides out of the box. And there is always that 3rd party library that is hard-coded to throw Error.

It is not something that I realistically expect to change in D and there are specific plans for working with it (thanks for helping with it btw!). Just mentioning it as one of the few D design decisions I find rather broken conceptually.
 The idea that asserts can be recovered from is fundamentally 
 unsound, and makes D unusable for robust critical software.
Not "recovered" but "terminate user-defined portion of the system".
 Asserts are for checking for programming bugs. A bug can be 
 tripped because of a buffer overflow, memory corruption, a 
 malicious code injection attack, etc.

 NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
As I have already mentioned, it almost never can be truly reliable. You simply declare one (higher) chance of reliability good enough and another (lower) one disastrous. I don't agree this is the language's call to make, even if the decision is reasonable and fits 90% of cases.

This is really no different than GC usage in Phobos before the @nogc push. If a language decision may result in fundamental code base fragmentation (even for a relatively small portion of users), it is likely an overly opinionated decision.
 Running arbitrary cleanup code at this point is literally 
 undefined behavior. This is not a failure of language design - 
 no language can offer any guarantees about this.
Some small chance of undefined behaviour vs 100% chance of resource leaks? The former can be more practical in many cases. And if it isn't for a specific application, one can always install a custom assert handler that kills the program right away. I don't see a deal breaker here.
Nov 02 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 3:44 PM, Dicebot wrote:
 They have hardware protection against sharing memory between processes. It's a
 reasonable level of protection.
 reasonable default - yes
 reasonable level of protection in general - no
No language can help when that is the requirement.
 1. very few processes use shared memory
 2. those that do should regard it as input/environmental, and not trust it
 This is no different from:
 1. very few threads use shared
 2. those that do should regard it as input/environmental
It is absolutely different because of scale; having 1K of shared memory is very different from having 100Mb shared between processes including the stack and program code.
 and kernel mode code.
 Kernel mode code is the responsibility of the OS system, not the app.
 In some (many?) large scale server systems the OS is the app, or at least heavily integrated. Thinking about the app as a single independent user-space process is a bit outdated.
Haha, I've used such a system (MSDOS) for many years. Switching to process protection was a huge advance. Sad that we're "modernizing" by reverting to such an awful programming environment.
 And you still don't have that possibility at thread/fiber
 level if you don't use mutable shared memory (or any global state in general).
 A buffer overflow will render all that protection useless.
 Nice that we have @safe and default thread-local memory!
Assert is to catch program bugs, which should never happen in a correctly functioning program. Nor can D possibly guarantee that called C functions are safe.
 It is all about system design.
 It's about the probability of coupling and the level of coupling that your system can stand. Process level protection is adequate for most things.
 Again, I am fine with advocating it as a reasonable default. What frustrates me is intentionally making any other design harder than it should be by explicitly allowing normal cleanup to be skipped. This behaviour is easy to achieve by installing a custom assert handler (it could be a generic Error handler too), but impossible to back out of when it is the default one.
Running normal cleanup code when the program is in an undefined, possibly corrupted, state can impede proper shutdown.
 Because of the above, avoiding more corruption during cleanup does not sound
 to me like a strong enough benefit to force that on everyone.
I have considerable experience with what programs can do when continuing to run after a bug. This was on real mode DOS, which infamously does not seg fault on errors.

It's AWFUL. I've had quite enough of having to reboot the operating system after every failure, and even then that often wasn't enough because it might scramble the disk driver code so it won't even boot.

I got into the habit of layering in asserts to stop the program when it went bad. "Do not pass go, do not collect $200" is the only strategy that has a hope of working under such systems.
 I don't expect you to do magic. My complaint is about making decisions that
 support designs you have great expertise with but hamper something different
 (but still very real) - decisions that are uncharacteristic of D (which I
 believe is a non-opinionated language) and don't really belong in a systems
 programming language.
It is my duty to explain how to use the features of the language correctly, including how and why they work the way they do. The how, why, and best practices are not part of a language specification.
 I don't have another choice and I don't like it. It is effort consuming
 because it requires a manually maintained exception hierarchy and style rules
 to keep Errors distinct from Exceptions - something the language otherwise
 provides out of the box. And there is always that 3rd party library that is
 hard-coded to throw Error.

 It is not something that I realistically expect to change in D and there are
 specific plans for working with it (thanks for helping with it btw!). Just
 mentioning it as one of the few D design decisions I find rather broken
 conceptually.
I hope to eventually change your mind about it being broken.
 NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
 As I have already mentioned, it almost never can be truly reliable.
That's correct, but not a justification for making it less reliable.
 You simply declare one (higher) chance of reliability good enough and
 another (lower) one disastrous.
 I don't agree this is the language's call to make, even if the decision is
 reasonable and fits 90% of cases.
If D changes assert() to do unwinding, then D will become unusable for building reliable systems until I add in yet another form of assert() that does not.
 This is really no different than GC usage in Phobos before the @nogc push.
 If a language decision may result in fundamental code base fragmentation
 (even for a relatively small portion of users), it is likely an overly
 opinionated decision.
The reason I initiated this thread is to point out the correct way to use assert() and to get that into the culture of best practices for D. This is because if I don't, then people will tend to fill that vacuum with misunderstandings and misuse.

It is an extremely important topic.
 Running arbitrary cleanup code at this point is literally undefined behavior.
 This is not a failure of language design - no language can offer any
 guarantees about this.
Some small chance of undefined behaviour vs 100% chance of resource leaks?
If the operating system can't handle resource recovery for a process terminating, it is an unusable operating system.
Nov 02 2014
next sibling parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:
 I have considerable experience with what programs can do when 
 continuing to run after a bug. This was on real mode DOS, which 
 infamously does not seg fault on errors.

 It's AWFUL. I've had quite enough of having to reboot the 
 operating system after every failure, and even then that often 
 wasn't enough because it might scramble the disk driver code so 
 it won't even boot.
Yep. Fun stuff. http://research.microsoft.com/en-us/people/mickens/thenightwatch.pdf
 I got into the habit of layering in asserts to stop the program 
 when it went bad. "Do not pass go, do not collect $200" is the 
 only strategy that has a hope of working under such systems.
The tough thing is that in D, contracts serve as this early warning system, but the errors this system catches are sometimes benign, and sometimes the programmer knows that they're benign; but if the person writing the test did so with an assert, then the decision of how to respond to the error has been made for him.
 It is my duty to explain how to use the features of the 
 language correctly, including how and why they work the way 
 they do. The how, why, and best practices are not part of a 
 language specification.
I don't entirely agree. The standard library serves as the how-to and best-practices guide for a language, and while a programmer can go against this, it's often like swimming upstream. For better or worse, we need to establish how parameters are validated and such in Phobos, and this will serve as the template for nearly all code written in D.

The core design goal of Druntime is to make the default behavior as safe as possible, but to allow the discerning user to override this behavior in certain key places. I kind of see the entire D language like this--it has the power of C/C++ but the ease of use of a much higher level language. We should strive for all aspects of the language and standard library to have the same basic behavior: a safe, efficient default but the ability to customize in key areas to meet individual needs. The trick is determining what these key areas are and how much latitude the user should have.
 If D changes assert() to do unwinding, then D will become 
 unusable for building reliable systems until I add in yet 
 another form of assert() that does not.
To be fair, assert currently does unwinding. It always has. The proposal is that it should not.
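For example, this compiles and runs today when built without -release (minimal sketch):

import core.exception : AssertError;
import std.stdio : writeln;

void main()
{
    try
        assert(false, "tripped");
    catch (AssertError e)        // unwinds like any other Throwable
        writeln("caught: ", e.msg);
    writeln("still running");
}

With -release the check is compiled out (assert(0) becomes a halt), which is a separate question from whether unwinding happens.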
 The reason I initiated this thread is to point out the correct 
 way to use assert() and to get that into the culture of best 
 practices for D. This is because if I don't, then people will tend
 to fill that vacuum with misunderstandings and misuse.

 It is an extremely important topic.
I still feel like there's something really important here that we're all grasping at, but it hasn't quite come to the fore yet: along the lines of the idea that a @safe program may be able to recover from a logic error. It seems like a uniquely D thing insofar as systems languages are concerned.
 If the operating system can't handle resource recovery for a 
 process terminating, it is an unusable operating system.
There are all kinds of resources, and not all of them are local to the system. Everything will eventually recover though, it just won't happen immediately as is the case with resource cleanup within a process.
Nov 02 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/2/2014 8:54 PM, Sean Kelly wrote:
 On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:
 I got into the habit of layering in asserts to stop the program when it went
 bad. "Do not pass go, do not collect $200" is the only strategy that has a
 hope of working under such systems.
 The tough thing is that in D, contracts serve as this early warning system, but the errors this system catches are sometimes benign
Whether it is benign is not known until it is debugged by a human.
 and sometimes the programmer knows that they're benign; but if the person
 writing the test did so with an assert, then the decision of how to respond
 to the error has been made for him.
The person who wrote "assert" decided that it was a non-recoverable programming bug. I deliberately wrote "bug", and not "error".
 It is my duty to explain how to use the features of the language correctly,
 including how and why they work the way they do. The how, why, and best
 practices are not part of a language specification.
 I don't entirely agree. The standard library serves as the how-to and
 best-practices guide for a language, and while a programmer can go against
 this, it's often like swimming upstream. For better or worse, we need to
 establish how parameters are validated and such in Phobos, and this will
 serve as the template for nearly all code written in D.
Definitely Phobos should exhibit best practices. Whether bad function argument values are input/environmental errors or bugs is decidable only on a case-by-case basis. There is no overarching rule. Input/environmental errors must not use assert to detect them.
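A sketch of that case-by-case split (the functions are made up):

import std.exception : enforce;
import std.file : exists, readText;

// the argument is data from the outside world: validate with enforce
string readConfig(string path)
{
    enforce(path.exists, path ~ ": no such file");
    return readText(path);
}

// the argument comes from the caller's own logic: precondition, assert
int triangular(int n)
{
    assert(n >= 0, "caller bug: n must be non-negative");
    return n * (n + 1) / 2;
}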
 To be fair, assert currently does unwinding.  It always has.  The proposal is
 that it should not.
Not entirely - a function with only asserts in it is considered "nothrow" and callers may not have exception handlers for them.
 The reason I initiated this thread is to point out the correct way to use
 assert() and to get that into the culture of best practices for D. This is
 because if I don't, then people will tend to fill that vacuum
 with misunderstandings and misuse.

 It is an extremely important topic.
 I still feel like there's something really important here that we're all grasping at, but it hasn't quite come to the fore yet: along the lines of the idea that a @safe program may be able to recover from a logic error. It seems like a uniquely D thing insofar as systems languages are concerned.
It's a false hope. D cannot offer any guarantees of recovery from programming bugs. Assert failures, by definition, can never happen. So when they do, something is broken. Broken programs are not recoverable, because one cannot know why they broke until they are debugged.

As I mentioned to Dicebot, @safe only applies to the function's logic. D programs can call C functions, and C functions are not @safe. There can be compiler bugs. There can be other threads corrupting memory. There can be hardware failures, operating system bugs, etc., that result in tripping the assert.

If a programmer "knows" a bug is benign and wants to recover from it, D has a mechanism for it - enforce(). I do not understand the desire to bash assert() into behaving like enforce(). Just use enforce() in the first place.

The idea was brought up that one may be using a library that uses assert() to detect input/environmental errors. I do not understand using a library in a system that must be made robust, having the source code to the library, and not being allowed to change that source code to fix bugs in it. A robust application cannot be made using such a library - assert() misuse will not be the only problem with it.
 If the operating system can't handle resource recovery for a process
 terminating, it is an unusable operating system.
There are all kinds of resources, and not all of them are local to the system. Everything will eventually recover though, it just won't happen immediately as is the case with resource cleanup within a process.
I'd say that is a poor design for an operating system. Be that as it may, if you want to recover from assert()s, use enforce() instead.

There are other consequences from trying to make assert() recoverable:

1. functions with assert()s cannot be "nothrow"
2. assert()s cannot provide hints to the optimizer

Those are high prices to pay for a systems performance language.
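For example, this is accepted today precisely because AssertError is an Error, not an Exception (minimal sketch):

int div(int a, int b) nothrow
{
    // fine inside nothrow; enforce(b != 0) would not compile here
    assert(b != 0);
    return a / b;
}

And since a failed assert never returns control to the caller, an implementation is free to treat the asserted condition as a given when optimizing.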
Nov 02 2014
prev sibling parent reply "Dicebot" <public dicebot.lv> writes:
On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:
 On 11/2/2014 3:44 PM, Dicebot wrote:
 They have hardware protection against sharing memory between 
 processes. It's a
 reasonable level of protection.
 reasonable default - yes
 reasonable level of protection in general - no
 No language can help when that is the requirement.
Yes, because it is a property of the system architecture as a whole, which is exactly what I am speaking about.
 It is absolutely different because of scale; having 1K of 
 shared memory is very different from having 100Mb shared 
 between processes including the stack and program code.
It is possible to have a minimal amount of shared mutable memory inside one process. There is nothing inherently preventing one from doing so, same as there is nothing inherently preventing one from screwing up inter-process shared memory. Different only because of scale -> not really different.
 Kernel mode code is the responsibility of the OS system, not 
 the app.
 In some (many?) large scale server systems the OS is the app, or at least heavily integrated. Thinking about the app as a single independent user-space process is a bit outdated.
 Haha, I've used such a system (MSDOS) for many years. Switching to process protection was a huge advance. Sad that we're "modernizing" by reverting to such an awful programming environment.
What is a huge advance for a user-land application is a problem for server code. Have you ever heard the "OS is the problem, not the solution" slogan that is slowly becoming more popular in the high-load networking world?
 It is all about system design.
 It's about the probability of coupling and the level of coupling that your system can stand. Process level protection is adequate for most things.
 Again, I am fine with advocating it as a reasonable default. What frustrates me is intentionally making any other design harder than it should be by explicitly allowing normal cleanup to be skipped. This behaviour is easy to achieve by installing a custom assert handler (it could be a generic Error handler too), but impossible to back out of when it is the default one.
 Running normal cleanup code when the program is in an undefined, possibly corrupted, state can impede proper shutdown.
Preventing cleanup can be done with roughly one line of user code. Enabling it again is effectively impossible. With this decision you don't trade a safer default for a more dangerous one - you trade a configurable default for an unavoidable one.

To preserve the same safe defaults you could define all thrown Errors to result in a plain HLT / abort call, with the possibility of defining a user handler that actually throws. That would have addressed all concerns nicely while still not making life harder for those who want cleanup.
 Because of the above, avoiding more corruption during cleanup
 does not sound to me like a strong enough benefit to force
 that on everyone.
 I have considerable experience with what programs can do when continuing to
 run after a bug. This was on real mode DOS, which infamously does not seg
 fault on errors. It's AWFUL. I've had quite enough of having to reboot the
 operating system after every failure, and even then that often wasn't enough
 because it might scramble the disk driver code so it won't even boot.
I don't dispute the necessity of terminating the program. I dispute the strict relation "program == process", which is impractical and inflexible.
 It is my duty to explain how to use the features of the 
 language correctly, including how and why they work the way 
 they do. The how, why, and best practices are not part of a 
 language specification.
You can't just explain things to make them magically appropriate for the user's domain. I fully understand how you propose to design applications. Unfortunately, it is completely unacceptable in some cases and quite inconvenient in others. Right now your proposal is effectively "design applications like I do, or reimplement language / library routines yourself".
 NO CODE CAN BE RELIABLY EXECUTED PAST THIS POINT.
 As I have already mentioned, it almost never can be truly reliable.
 That's correct, but not a justification for making it less reliable.
It is justification for making it more configurable.
 If D changes assert() to do unwinding, then D will become 
 unusable for building reliable systems until I add in yet 
 another form of assert() that does not.
My personal perfect design would be like this:

- Exceptions work as they do now
- Errors work the same way as exceptions but don't get caught by catch(Exception)
- assert does not throw Error but simply aborts the program (configurable with a druntime callback)
- define "die" which is effectively "assert(false)"
- tests don't use assert

That would provide default behaviour similar to the one we currently have (with all the good things) but leave much more configurable choices for the system designer.
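A sketch of the "die" part (hypothetical function, not an existing API):

import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// never stripped by -release, never unwinds
void die(string msg)
{
    fprintf(stderr, "fatal: %.*s\n", cast(int) msg.length, msg.ptr);
    abort();
}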
 Some small chance of undefined behaviour vs 100% chance of 
 resource leaks?
 If the operating system can't handle resource recovery for a process terminating, it is an unusable operating system.
There are many unusable operating systems out there then :) And don't forget about remote network resources - while a leak there will eventually time out, it will still have a negative impact.
Nov 09 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/9/2014 1:12 PM, Dicebot wrote:
 On Monday, 3 November 2014 at 03:29:05 UTC, Walter Bright wrote:
 It is absolutely different because of scale; having 1K of shared memory is
 very different from having 100Mb shared between processes including the stack
 and program code.
 It is possible to have a minimal amount of shared mutable memory inside one process.
D's type system tries to minimize it, but the generated code knows nothing at all about the difference between local and shared memory, and has no protection against crossing the boundary. Interprocess protection is done via the hardware.
 There is nothing inherently preventing one from doing so, same as there is
 nothing inherently preventing one from screwing up inter-process shared
 memory. Different only because of scale -> not really different.
Sharing 1K of interprocess memory is one millionth of the vulnerability surface of a 1G multithreaded program.
 What is a huge advance for a user-land application is a problem for server
 code. Have you ever heard the "OS is the problem, not the solution" slogan
 that is slowly becoming more popular in the high-load networking world?
No, but my focus is what D can provide, not what the OS can provide.
 Preventing cleanup can be done with roughly one line of user code. Enabling
 it again is effectively impossible. With this decision you don't trade a
 safer default for a more dangerous one - you trade a configurable default
 for an unavoidable one.

 To preserve the same safe defaults you could define all thrown Errors to
 result in a plain HLT / abort call, with the possibility of defining a user
 handler that actually throws. That would have addressed all concerns nicely
 while still not making life harder for those who want cleanup.
There is already a cleanup solution - use enforce().
 That's correct, but not a justification for making it less reliable.
It is justification for making it more configurable.
In general, some things shouldn't be configurable. For example, one cannot mix 3rd party libraries when each one is trying to customize global behavior.
 My personal perfect design would be like this:

 - Exceptions work as they do now
 - Errors work the same way as exceptions but don't get caught by
 catch(Exception)
 - assert does not throw Error but simply aborts the program (configurable
 with a druntime callback)
 - define "die" which is effectively "assert(false)"
 - tests don't use assert
Having assert() not throw Error would be a reasonable design choice.
Nov 09 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:
 Having assert() not throw Error would be a reasonable design 
 choice.
What if you could turn assert() in libraries into enforce() using a compiler switch? Servers should be able to record failure and free network resources/locks even on fatal failure.
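Something close is already expressible without a compiler switch proper, if the library routes its checks through one helper - a sketch, with RecoverableChecks as a made-up version identifier:

import std.exception : enforce;

void check(string file = __FILE__, size_t line = __LINE__)
          (bool cond, lazy string msg)
{
    version (RecoverableChecks)
        enforce(cond, new Exception(msg, file, line)); // unwind + cleanup
    else
        assert(cond, msg);   // Error; stripped by -release
}

Build with -version=RecoverableChecks to get the throwing behaviour.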
Nov 12 2014
next sibling parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On the other hand I guess HLT will signal SIGSEGV which can be 
caught using a signal handler, but then D should provide the 
OS-specific infrastructure for obtaining the necessary 
information before exiting.
Nov 12 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2014 11:40 AM, "Ola Fosheim Grøstad" 
<ola.fosheim.grostad+dlang gmail.com>" wrote:
 On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:
 Having assert() not throw Error would be a reasonable design choice.
What if you could turn assert() in libraries into enforce() using a compiler switch?
Forgive me for being snarky, but there are text editing utilities where one can:

    s/assert/enforce/

because if one can use a compiler switch, then one has the source which can be edited.

In any case, compiler switches should not change behavior like that. assert() and enforce() are completely different.
Nov 12 2014
parent reply "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 12 November 2014 at 20:40:45 UTC, Walter Bright 
wrote:
 Forgive me for being snarky, but there are text editing 
 utilities where one can:
Snarky is ok. :)
 In any case, compiler switches should not change behavior like 
 that. assert() and enforce() are completely different.
Well, but I don't understand how assert() can unwind the stack if everyone should assume that the stack might be trashed and therefore invalid? In order to be consistent with your line of reasoning it should simply HLT, then a SIGSEGV handler should set up a preallocated stack, obtain the information and send it off to a logging service using pure system calls before terminating (or send it to the parent process).
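I.e. something along these lines - a rough POSIX sketch assuming druntime's core.sys.posix bindings, not a vetted implementation:

import core.stdc.signal : SIGSEGV;
import core.sys.posix.signal;
import core.sys.posix.unistd : write, _exit;

__gshared ubyte[64 * 1024] altStack;   // the preallocated handler stack

extern (C) void onFatal(int) nothrow @nogc
{
    // async-signal-safe calls only: raw write(), then hard exit
    enum msg = "fatal trap; minimal report written\n";
    write(2, msg.ptr, msg.length);
    _exit(70);
}

void installTrapHandler()
{
    stack_t st;
    st.ss_sp = altStack.ptr;
    st.ss_size = altStack.length;
    sigaltstack(&st, null);

    sigaction_t sa;
    sa.sa_handler = &onFatal;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGSEGV, &sa, null);
}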
Nov 12 2014
parent "Ola Fosheim =?UTF-8?B?R3LDuHN0YWQi?= writes:
On Wednesday, 12 November 2014 at 20:52:28 UTC, Ola Fosheim 
Grøstad wrote:
 In order to be consistent with your line of reasoning it 
 should simply HLT, then a SIGSEGV handler should set up a 
 preallocated stack, obtain the information and send it off to a 
 logging service using pure system calls before terminating (or 
 send it to the parent process).
Btw, in C you should get SIGABRT on assert()
Nov 12 2014
prev sibling parent "Kagamin" <spam here.lot> writes:
On Saturday, 1 November 2014 at 16:42:31 UTC, Walter Bright wrote:
 My ideas are what are implemented on airplanes.
For components, not for a system. Nobody said a word against component design; it's systems that people want to be able to design, yet you prohibit it.
 I didn't originate these ideas, they come from the aviation 
 industry.
You're original in claiming it is the only working solution, but the aviation industry proves error-resilient systems are possible and successful, even though you claim their design is unsound and unusable. Yet you praise them, acknowledging their success, which makes your claims ever so ironic.
 Recall that I was employed as an engineer working on flight 
 critical systems design for the 757.
This is how problem decomposition works: you don't need to understand the whole system to work on a component.

On Sunday, 2 November 2014 at 17:53:45 UTC, Walter Bright wrote:
 Kernel mode code is the responsibility of the OS system, not 
 the app.
Suddenly safety is not the top priority. If it can't always be the priority, there should be a choice of priorities, but you deny that choice. It's a matter of compliance with reality. Whatever way you design the language, can you change reality that way? I don't see why the possibility of choice prevents anything.
Nov 11 2014
prev sibling parent reply "Kagamin" <spam here.lot> writes:
On Friday, 31 October 2014 at 21:33:22 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 You're using a different definition of "component".
System granularity is decided by the designer. You either allow people to design their systems or force your design on them; if you do both, you contradict yourself.
 An inconsistency in a transaction is a problem with the input, 
 not a problem with the program logic itself.
The distinction between failures doesn't matter. A reliable system manages any failure, especially unexpected and unforeseen ones, without diagnostics.
 If something is wrong with the input, the program
 can detect it and recover by aborting the transaction (rollback 
 the
 wrong data). But if something is wrong with the program logic 
 itself
 (e.g., it committed the transaction instead of rolling back 
 when it
 detected a problem) there is no way to recover within the 
 program
 itself.
Not the case for an airplane: it recovers from any failure within itself. Another indication that the airplane example contradicts Walter's proposal. See my post about the big picture.
 A failed component, OTOH, is a problem with program logic. You 
 cannot
 recover from that within the program itself, since its own 
 logic has
 been compromised. You *can* rollback the wrong changes made to 
 data by
 that malfunctioning program, of course, but the rollback must 
 be done by
 a decoupled entity outside of that program. Otherwise you might 
 end up
 causing even more problems (for example, due to the compromised 
 /
 malfunctioning logic, the program commits the data instead of 
 reverting
 it, thus turning an intermittent problem into a permanent one).
No misunderstanding, I think that Walter's idea is good, just not always practical, and that real critical systems don't work the way he describes; they make more complicated tradeoffs.
Nov 01 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/1/2014 3:35 AM, Kagamin wrote:
 No misunderstanding, I think that Walter's idea is good, just not always
 practical, and that real critical systems don't work the way he describes;
 they make more complicated tradeoffs.
My ideas are what are implemented on airplanes. I didn't originate these ideas, they come from the aviation industry. Recall that I was employed as an engineer working on flight critical systems design for the 757.
Nov 01 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 28/09/2014 23:00, Walter Bright wrote:
 I can't get behind the notion of "reasonably certain". I certainly would
 not use such techniques in any code that needs to be robust,
On 29/09/2014 04:04, Walter Bright wrote:
 I know I'm hardcore and uncompromising on this issue, but that's where I
 came from (the aviation industry).
Walter, you do understand that not all software has to be robust - in the critical systems sense - to be quality software? And that in fact, the majority of software is not critical systems software?...

I was under the impression that D was meant to be a general purpose language, not a language just for critical systems. Yet, on language design issues, you keep making a series of arguments and points that apply *only* to critical systems software.

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Oct 01 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/1/2014 6:17 AM, Bruno Medeiros wrote:
 Walter, you do understand that not all software has to be robust -
Yes, I understand that.
 in the critical systems sense - to be quality software? And that in fact, the
majority
 of software is not critical systems software?...

 I was under the impression that D was meant to be a general purpose language,
 not a language just for critical systems. Yet, on language design issues, you
 keep making a series of arguments and points that apply *only* to critical
 systems software.
If someone writes non-robust software, D allows them to do that. However, I won't leave unchallenged attempts to pass such stuff off as robust.

Nor will I accept such practices in Phobos, because, as this thread clearly shows, there are a lot of misunderstandings about what robust software is. Phobos needs to CLEARLY default towards solid, robust practice.

It's really too bad that I've never seen any engineering courses on reliability.

http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
Oct 04 2014
next sibling parent "Piotrek" <p nonexistent.pl> writes:
On Saturday, 4 October 2014 at 08:39:47 UTC, Walter Bright wrote:

 It's really too bad that I've never seen any engineering 
 courses on reliability.

 http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
Thanks Walter. I was going to ask you about papers :) Maybe we need to mention the key things in a D guideline or in a future book: "Effective D".

Piotrek
Oct 04 2014
prev sibling next sibling parent reply "Joseph Rushton Wakeling" <joseph.wakeling webdrake.net> writes:
On Saturday, 4 October 2014 at 08:39:47 UTC, Walter Bright wrote:
 If someone writes non-robust software, D allows them to do 
 that. However, I won't leave unchallenged attempts to pass such 
 stuff off as robust.

 Nor will I accept such practices in Phobos, because, as this 
 thread clearly shows, there are a lot of misunderstandings 
 about what robust software is. Phobos needs to CLEARLY default 
 towards solid, robust practice.
Would it help to clarify my intentions in this discussion if I said that, on this note, I entirely agree -- and nothing I have said in this discussion is intended to be an argument about how Phobos should be designed?
Oct 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 4:25 AM, Joseph Rushton Wakeling wrote:
 Would it help to clarify my intentions in this discussion if I said that, on
 this note, I entirely agree -- and nothing I have said in this discussion is
 intended to be an argument about how Phobos should be designed?
Yes. Thank you!
Oct 04 2014
prev sibling next sibling parent reply Joseph Rushton Wakeling via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 04/10/14 10:39, Walter Bright via Digitalmars-d wrote:
 If someone writes non-robust software, D allows them to do that. However, I
 won't leave unchallenged attempts to pass such stuff off as robust.

 Nor will I accept such practices in Phobos, because, as this thread clearly
 shows, there are a lot of misunderstandings about what robust software is.
 Phobos needs to CLEARLY default towards solid, robust practice.
A practical question that occurs to me here.

Suppose that I implement, in D, a framework creating Erlang-style processes (i.e. properly isolated, lightweight processes within a defined runtime environment, with an appropriate error-handling framework that allows those processes to be brought down and restarted without bringing down the entire application).

Is there any reasonable scope for accessing Phobos directly from programs written to operate within that runtime, or is it going to be necessary to wrap all of Phobos in order to ensure that it's accessed in a safe way (e.g. to ensure that the conditions required by in-contracts are enforced before the call gets to Phobos, etc.)?
Oct 04 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/4/2014 6:36 AM, Joseph Rushton Wakeling via Digitalmars-d wrote:
 Suppose that I implement, in D, a framework creating Erlang-style processes
 (i.e. properly isolated, lightweight processes within a defined runtime
 environment, with an appropriate error-handling framework that allows those
 processes to be brought down and restarted without bringing down the entire
 application).

 Is there any reasonable scope for accessing Phobos directly from programs
 written to operate within that runtime, or is it going to be necessary to wrap
 all of Phobos in order to ensure that it's accessed in a safe way (e.g. to
 ensure that the conditions required by in-contracts are enforced before the
 call gets to Phobos, etc.)?
A start to this would be to ensure that the Erlang-style processes only call pure functions. Then I'd add pervasive use of immutable data structures. This should help a lot.
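A minimal sketch of that shape with std.concurrency (these are threads, so the isolation is by convention, not by hardware):

import std.concurrency;
import std.stdio : writeln;

void worker()
{
    // messages must be immutable or value types: no shared mutable state
    auto job = receiveOnly!(immutable(int)[])();
    ownerTid.send(job.length);
}

void main()
{
    auto tid = spawn(&worker);
    immutable(int)[] job = [1, 2, 3];
    tid.send(job);
    writeln("worker answered: ", receiveOnly!size_t());
}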
Oct 04 2014
prev sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 04/10/2014 09:39, Walter Bright wrote:
 On 10/1/2014 6:17 AM, Bruno Medeiros wrote:
 Walter, you do understand that not all software has to be robust -
Yes, I understand that.
 in the critical systems sense - to be quality software? And that in
 fact, the majority
 of software is not critical systems software?...

 I was under the impression that D was meant to be a general purpose
 language,
 not a language just for critical systems. Yet, on language design
 issues, you
 keep making a series or arguments and points that apply *only* to
 critical
 systems software.
If someone writes non-robust software, D allows them to do that. However, I won't leave unchallenged attempts to pass such stuff off as robust. Nor will I accept such practices in Phobos, because, as this thread clearly shows, there are a lot of misunderstandings about what robust software is. Phobos needs to CLEARLY default towards solid, robust practice. It's really too bad that I've never seen any engineering courses on reliability. http://www.drdobbs.com/architecture-and-design/safe-systems-from-unreliable-parts/228701716
Well, I set myself a trap to get that response... Of course, I too want my software to be robust! I doubt that anyone would disagree that Phobos should be designed to be as robust as possible. But "robust" is too general a term to be precise here, so this belies my original point. I did say robust-in-the-critical-systems-sense...

What I was questioning was whether D and Phobos should be designed in a way that takes critical systems software as the main use case, relegating the other kinds of software to secondary importance.

(Note: I don't think such dichotomy and compromise *has* to exist in order to design a great D and Phobos. But in this discussion I feel the choices and vision were heading in a way that would likely harm the development of general purpose software in favor of critical systems.)

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
Oct 08 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/8/2014 8:06 AM, Bruno Medeiros wrote:
 (Note: I don't think such dichotomy and compromise *has* to exist in order to
 design a great D and Phobos. But in this discussion I feel the choices and
 vision were heading in a way that would likely harm the development of general
 purpose software in favor of critical systems.)
I definitely believe that Phobos should aid the implementation of critical systems over sloppy ones. But I don't believe that harms sloppy ones.
Oct 09 2014
prev sibling next sibling parent reply Bruno Medeiros <bruno.do.medeiros+dng gmail.com> writes:
On 28/09/2014 00:15, Walter Bright wrote:
 This issue comes up over and over, in various guises. I feel like
 Yosemite Sam here:

 https://www.youtube.com/watch?v=hBhlQgvHmQ0

 In that vein, Exceptions are for either being able to recover from
 input/environmental errors, or report them to the user of the application.

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.
This is incorrect.

Yes, the primary purpose of Exceptions is not for debugging, but to report exceptional state to the user (or some other component of the system).

But they also have a purpose for debugging, particularly the stack traces of exceptions. Take what you said:

"Failure to respond properly to an input/environmental error is a bug. But the input/environmental error is not a bug. If it was, then the program should assert on the error, not throw."

So, some component/function Foo detects an environmental error, and throws an Exception, accordingly. Foo is not responsible for handling these errors, but some other component is.

Component/function Bar is the one that should handle such an error (for example, it should display a dialog to the user, and continue the application). But due to a bug, it doesn't do so, and the Exception goes all the way through main().

The programmer notices this happening, and clearly recognizes it's a bug (but doesn't know where the bug is, doesn't know that it's Bar that should be handling it). Now, what is best: to just have the Exception message (something like "File not found") displayed to the programmer - or even an end-user that could report a bug - or to have the stack trace of the Exception, so that the programmer can more easily look at which function should be handling it?

-- 
Bruno Medeiros
https://twitter.com/brunodomedeiros
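In code, the scenario is (minimal sketch):

import std.stdio : writeln;

void foo()   // detects the environmental error; not its job to handle it
{
    throw new Exception("File not found");
}

void bar()   // *should* catch this and show a dialog, but has a bug: it doesn't
{
    foo();
}

void main()
{
    bar();   // escapes main(): the runtime prints the message and the
             // stack trace that points back through bar()
}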
Oct 01 2014
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/1/2014 6:44 AM, Bruno Medeiros wrote:
 This is incorrect.

 Yes, the primary purpose of Exceptions is not for debugging, but to report
 exceptional state to the user (or some other component of the system).

 But they also have a purpose for debugging, particularly the stack traces of
 exceptions. Take what you said:

 "Failure to respond properly to an input/environmental error is a bug.
 But the input/environmental error is not a bug. If it was, then the
 program should assert on the error, not throw. "

 So, some component/function Foo detects an environmental error, and throws an
 Exception, accordingly. Foo is not responsible for handling these errors, but
 some other component is.

 Component/function Bar is the one that should handle such an error (for
example,
 it should display a dialog to the user, and continue the application). But due
 to a bug, it doesn't do so, and the Exception goes all the way through main().
This is like saying "if statements are for debugging".
 The programmer notices this happening, and clearly recognizes it's a bug (but
 doesn't know where the bug is, doesn't know that it's Bar that should be
 handling it). Now, what is best, to just have the Exception message (something
 like "File not found") displayed to the programmer - or even an end-user that
 could report a bug -, or to have the stack trace of the Exception so that the
 programmer can more easily look at which function should be handling it?
Would you agree that every time DMD reports a syntax error in user code, it should also include a stack trace of the DMD source code to where in DMD it reported the error?
Oct 04 2014
parent Marco Leise <Marco.Leise gmx.de> writes:
Am Sat, 04 Oct 2014 01:43:40 -0700
schrieb Walter Bright <newshound2 digitalmars.com>:

 Would you agree that every time DMD reports a syntax error in user code, it 
 should also include a stack trace of the DMD source code to where in DMD it 
 reported the error?
Of course not. DMD is a compiler, and it is part of its normal operation to have sophisticated code to report syntax errors.

I distinguish between "expected" exceptions and "unexpected" exceptions. Expected exceptions are those that occur due to validating user input or as part of handling documented error codes at the interface layer with external APIs. Unexpected exceptions are those we don't handle explicitly because we don't expect them - maybe because we think that code won't throw since the input has been validated, or because of a bug in an external library, or out of laziness.

A FileNotFoundException can be the expected result of an "open file" dialog with the user typing the name of a non-existent file, or the unexpected result of loading a static asset with an index that is off-by-one (e.g. icon<idx>.bmp).

In the case of DMD, syntax errors are *expected* as part of validating user input. And that's why it prints a single line that can be parsed by IDEs to jump to the source. Now to make things interesting, we also see *unexpected* exceptions in DMD: internal compiler errors.

While expected exceptions are commonly handled with a simple message, unexpected exceptions are handled depending on the application. In non-interactive applications like DMD they can all be AssertErrors that terminate the program. In interactive software like a video editor, someone might just have spent hours on editing, and the user should be allowed to reason about the severity of the fault and decide to ignore the exception or quit the program. There are other solutions, like logging and emailing the error to someone.

The example above, about the off-by-one when reading an icon, is a typical case of an exception that you would want to investigate with an exception stack trace, but keep the program running.

-- 
Marco
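A minimal sketch of the two cases (file names are made up):

import std.conv : to;
import std.file : FileException, read, readText;
import std.stdio : writeln;

void openUserFile(string nameTypedByUser)
{
    try
        writeln(readText(nameTypedByUser));
    catch (FileException e)                 // *expected*: user typo
        writeln("No such file: ", nameTypedByUser);
}

void loadIcon(size_t idx)
{
    // a failure here is *unexpected* - an off-by-one is a bug, so let the
    // exception escape with its stack trace rather than swallowing it
    auto bytes = read("icon" ~ idx.to!string ~ ".bmp");
}

void main()
{
    openUserFile("maybe-missing.txt");
}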
Oct 05 2014
prev sibling next sibling parent "rst256" <ussr.24 yandex.ru> writes:
https://en.wikipedia.org/wiki/Code_smell
Have you read this?

Yes, Phobos' std.stdio is very smelly code. Disagree?
107 KB of source code only to call a few functions from stdio. Are you sure
that this code is fully correct?
"silly rabbit, y should be positive" - maybe that is because it uses a class
like this?

The funny thing is that this code could easily be written by a machine, based
on a few rules and a header.
Say, what do you think about this code model?

File("output"){
    ...
    some file operation
    ...
}else{
    ...
    optional error handling, if defined
    ...
}
...
This is not a class instance, it is a macro definition. And "output" is not a
file, it is an external resource. It may be bound to a command-line argument
(-output), to a static file, or to a GUI (just set the assign property in the
form designer); it may even come from the network. That is cool; about this I
can say: "Take it to the bank". With macros like this, a GUI can be built
very quickly.
File is declared as a subject, not a class (e.g. in JSON format); you still
build it from existing functions. Example:

File = {
    export: [ export list ],
    blocks: {
        main: {},
        else: {}
    }
}

Understand?
See https://en.wikipedia.org/wiki/Subject-oriented_programming
and how this is done in JavaScript: https://github.com/jonjamz/amethyst
Oct 22 2014
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
On 9/27/14, 8:15 PM, Walter Bright wrote:
 This issue comes up over and over, in various guises. I feel like
 Yosemite Sam here:

      https://www.youtube.com/watch?v=hBhlQgvHmQ0

 In that vein, Exceptions are for either being able to recover from
 input/environmental errors, or report them to the user of the application.

 When I say "They are NOT for debugging programs", I mean they are NOT
 for debugging programs.

 assert()s and contracts are for debugging programs.
Here's another +1 for exceptions.

I want to add a slash command to Slack
(https://slack.zendesk.com/hc/en-us/articles/201259356-Slash-Commands).
So, for example, when I say:

/bot random phrase

this hits a web server that processes the request and returns a random phrase.

Now, imagine I have an assert in my application. When the web server hits the
assertion it shuts down and the user doesn't get a response. What I'd like to
do is to trap that assertion, tell the user that there's a problem, and send
me an email telling me to debug it and fix it. That way the user can continue
using the bot and meanwhile I can fix the bug.

In the real world, where you don't want unhappy users, asserts don't work.

Walter: how can you do that with an assertion triggering?
Oct 24 2014
next sibling parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Oct 24, 2014 at 03:29:43PM -0300, Ary Borenszweig via Digitalmars-d
wrote:
 On 9/27/14, 8:15 PM, Walter Bright wrote:
This issue comes up over and over, in various guises. I feel like
Yosemite Sam here:

     https://www.youtube.com/watch?v=hBhlQgvHmQ0

In that vein, Exceptions are for either being able to recover from
input/environmental errors, or report them to the user of the application.

When I say "They are NOT for debugging programs", I mean they are NOT
for debugging programs.

assert()s and contracts are for debugging programs.
Here's another +1 for exceptions.

I want to add a slash command to Slack
(https://slack.zendesk.com/hc/en-us/articles/201259356-Slash-Commands).
So, for example, when I say:

/bot random phrase

this hits a web server that processes the request and returns a random phrase.

Now, imagine I have an assert in my application. When the web server hits the
assertion it shuts down and the user doesn't get a response. What I'd like to
do is to trap that assertion, tell the user that there's a problem, and send
me an email telling me to debug it and fix it. That way the user can continue
using the bot and meanwhile I can fix the bug.

In the real world, where you don't want unhappy users, asserts don't work.

Walter: how can you do that with an assertion triggering?
Sure they do. Your application should be running in a separate process from
the webserver itself. The webserver receives a request and forwards it to the
application process. The application process processes the request and sends
the response back to the webserver, which forwards it back on the client
socket. Meanwhile, the webserver also monitors the application process; if it
crashes before producing a response, the webserver steps in and sends an HTTP
500 response to the client instead. It can also email you about the bug,
possibly with the stack trace of the crashed application process, etc. (And
before you complain about inefficiency, there *are* ways of eliminating
copying overhead when forwarding requests/responses between the client and
the application.)

But if the webserver itself triggers an assertion, then it should NOT attempt
to send anything back to the client, because the assertion may be indicating
some kind of memory corruption or security exploit attempt. You don't know
whether you might accidentally send sensitive personal data (e.g. another
user's password) back to the wrong client, because your data structures got
scrambled and the wrong data is now associated with the wrong client.

Basically, if you want a component to recover from a serious problem like a
failed assertion, the recovery code should be in a *separate* component.
Otherwise, if the recovery code is within the failing component, you have no
way to know whether the recovery code itself has been compromised, and
trusting it to do the right thing is very dangerous (and is what often leads
to nasty security exploits). The watcher must be separate from the watched;
otherwise, how can you trust the watcher?
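As a bare-bones D sketch of that watchdog pattern (std.process is real; the
"./request-handler" binary and the canned 500 response are placeholders):

import std.process : spawnProcess, wait;
import std.stdio : writeln;

int main()
{
    // Run the request handler in a *separate* process.
    auto pid = spawnProcess(["./request-handler"]);
    const status = wait(pid);

    if (status != 0)
    {
        // The handler crashed (e.g. a failed assert) before producing
        // a response: don't trust its state; report the failure.
        writeln("HTTP/1.1 500 Internal Server Error");
        // ... and email the admin about the crash here ...
        return 1;
    }
    return 0;
}


T

-- 
Why ask rhetorical questions? -- JC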
Oct 24 2014
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 24 October 2014 at 18:47:59 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 Basically, if you want a component to recover from a serious problem
 like a failed assertion, the recovery code should be in a *separate*
 component. Otherwise, if the recovery code is within the failing
 component, you have no way to know whether the recovery code itself has
 been compromised, and trusting it to do the right thing is very
 dangerous (and is what often leads to nasty security exploits). The
 watcher must be separate from the watched; otherwise, how can you trust
 the watcher?
You make process isolation sound like a silver bullet, but failure can happen
on any scale, from a temporary variable to the global network. You can't use
process isolation to contain a failure of larger-than-process scale, and it's
overkill for a failure at the scale of a temporary variable.
Oct 31 2014
parent reply "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Fri, Oct 31, 2014 at 08:23:04PM +0000, Kagamin via Digitalmars-d wrote:
 On Friday, 24 October 2014 at 18:47:59 UTC, H. S. Teoh via Digitalmars-d
 wrote:
Basically, if you want a component to recover from a serious problem
like a failed assertion, the recovery code should be in a *separate*
component. Otherwise, if the recovery code is within the failing
component, you have no way to know whether the recovery code itself has
been compromised, and trusting it to do the right thing is very
dangerous (and is what often leads to nasty security exploits).
The watcher must be separate from the watched; otherwise, how can you
trust the watcher?
You make process isolation sound like a silver bullet, but failure can happen
on any scale, from a temporary variable to the global network. You can't use
process isolation to contain a failure of larger-than-process scale, and it's
overkill for a failure at the scale of a temporary variable.
You're missing the point. The point is that a reliable system made of
unreliable parts can only be reliable if you have multiple *redundant* copies
of each component that are *decoupled* from each other.

The usual unit of isolation at the lowest level is a single process, because
threads within a process have full access to the memory shared by all
threads. Therefore they are not decoupled from each other, and you cannot put
any confidence in the correct functioning of the other threads once a single
thread has become inconsistent. The only failsafe solution is to have
multiple redundant processes, so that when one process becomes inconsistent,
you fall back to another, *decoupled* process that is known to be good.

This does not mean that process isolation is a "silver bullet" -- I never
said any such thing. The same reasoning applies to larger components in the
system as well. If you have a server that performs function X, and the server
begins to malfunction, you cannot expect the server to fix itself -- you
don't know whether a hacker has rooted the server and is running exploit code
instead of your application. The only 100% safe way to recover is to have one
or more redundant servers that also perform function X, shut down the
malfunctioning server for investigation and repair, and in the meantime
switch over to a redundant server to continue operations. You don't shut down
the *entire* network unless all redundant components have failed.

The reason you cannot go below the process level as a unit of redundancy is
coupling. The above design of failing over to a redundant module only works
if the modules are completely decoupled from each other. Otherwise, you end
up with a situation where two redundant modules M1 and M2 share a common
helper module M3. Then if M1 detects a problem, you cannot be 100% sure it
wasn't caused by a problem in M3, so if you just switch to M2, it will fail
in the same way. Similarly, you cannot guarantee that M1, while
malfunctioning, hasn't somehow damaged M3, thereby making M2 unreliable as
well. The only way to be 100% sure that failover will actually fix the
problem is to make sure that M1 and M2 are completely isolated from each
other (e.g., by having two redundant copies of M3 that are isolated from
each other).

Since a single process is the unit of isolation in the OS, you can't go below
this granularity: as I've already said, if one thread is malfunctioning, it
may have trashed the data shared by all the other threads in the same
process, and therefore none of the other threads can be trusted to continue
operating correctly. The only way to be 100% sure that failover will actually
fix the problem is to switch over to another process that you *know* is not
coupled to the old, malfunctioning one.

Attempting to have a process "fix itself" after detecting an inconsistency is
unreliable -- you're leaving it up to chance whether the attempted recovery
will actually work and not make the problem worse. You cannot guarantee that
the recovery code itself hasn't been compromised by the failure: it lives in
the same process, is vulnerable to the same problem that caused the original
failure, and is vulnerable to memory corruption caused by malfunctioning code
prior to the point where the problem was detected. Therefore, the recovery
code is not trustworthy and cannot be relied on to continue operating
correctly.
That kind of "maybe, maybe not" recovery is not something I'd want to put any
trust in, especially when it comes to critical applications where things
going wrong can cost lives.
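As a tiny sketch of that failover idea in D (std.process is real; the two
worker binaries are hypothetical stand-ins for M1's and M2's hosting
processes):

import std.process : spawnProcess, wait;

// Fail over between two decoupled worker processes: if the first one
// crashes, switch to the redundant copy instead of "fixing" it.
int runWithFailover()
{
    foreach (worker; ["./worker-a", "./worker-b"])
    {
        if (wait(spawnProcess([worker])) == 0)
            return 0;   // this worker completed successfully
        // inconsistent worker: abandon it and try the redundant one
    }
    return 1;           // all redundant components failed
}


T

-- 
English has the lovely word "defenestrate", meaning "to execute by
throwing someone out a window", or more recently "to remove Windows from
a computer and replace it with something useful". :-) -- John Cowan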
Oct 31 2014
parent reply "Kagamin" <spam here.lot> writes:
On Friday, 31 October 2014 at 21:06:49 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 This does not mean that process isolation is a "silver bullet" -- I
 never said any such thing.
But you made it sound that way:
 The only failsafe solution is to have multiple redundant processes, so
 that when one process becomes inconsistent, you fall back to another,
 *decoupled* process that is known to be good.
If you think a hacker rooted the server, how do you know the other perfectly
isolated processes are good? Not to mention you suggested building a system
from *communicating* processes, which doesn't sound like perfect isolation at
all.
 You don't shut down the *entire* network unless all redundant
 components have failed.
If you have a hacker in your network, the network is compromised and in an
unknown state; why do you want the network to continue operating? You
contradict yourself.
Nov 01 2014
parent "H. S. Teoh via Digitalmars-d" <digitalmars-d puremagic.com> writes:
On Sat, Nov 01, 2014 at 10:52:31AM +0000, Kagamin via Digitalmars-d wrote:
 On Friday, 31 October 2014 at 21:06:49 UTC, H. S. Teoh via Digitalmars-d
 wrote:
This does not mean that process isolation is a "silver bullet" -- I
never said any such thing.
But made it sound that way:
The only failsafe solution is to have multiple redundant processes,
so that when one process becomes inconsistent, you fall back to
another, *decoupled* process that is known to be good.
If you think a hacker rooted the server, how do you know the other perfectly
isolated processes are good? Not to mention you suggested building a system
from *communicating* processes, which doesn't sound like perfect isolation at
all.
You're confusing the issue. Process-level isolation is for detecting
per-process faults. If you want to handle server-level faults, you need
external monitoring per server, so that when it detects a possible exploit on
one server, it shuts that server down and fails over to another server known
to be OK.

And I said decoupled, not isolated. Decoupled means they can still
communicate with each other, but through a known protocol that insulates them
from each other's faults. E.g., you don't send binary executable code over
the communication lines for the receiving process to blindly run; you send
data in a predefined format that the receiving party verifies before acting
on it. I'm pretty sure this is obvious.
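For instance, a tiny D sketch of that verify-before-acting rule at a process
boundary (the message format is invented for the example):

struct Request
{
    uint ver;        // protocol version agreed on by both processes
    string command;  // what the peer asks us to do
}

// Reject anything outside the agreed protocol instead of blindly
// trusting whatever the peer process sent.
bool isValid(in Request r) pure nothrow @safe
{
    return r.ver == 1
        && (r.command == "fetch" || r.command == "store");
}

void handle(Request r)
{
    if (!isValid(r))
        return;   // drop malformed input; never execute it
    // ... act on the request ...
}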
 You don't shut down the *entire* network unless all redundant
 components have failed.
If you have a hacker in your network, the network is compromised and in an
unknown state; why do you want the network to continue operating? You
contradict yourself.
The only contradiction here is the one you introduced. If one or two servers
on your network have been compromised, does that mean the *entire* network is
compromised? No, it doesn't. It just means those one or two servers have been
compromised. So you have monitoring tools set up to detect problems within
the network and isolate the compromised servers. If you are no longer sure
the entire network is in a good state, e.g. if your monitoring tools can't
detect certain large-scale problems, then sure, go ahead and shut down the
entire network. It depends on what granularity you're operating at.

A properly-designed reliable system needs multiple levels of monitoring and
failover: process-level decoupling, server-level, network-level, etc. You
can't just rely on a single level of granularity and expect it to solve
everything.


T

-- 
Leather is waterproof. Ever see a cow with an umbrella?
Nov 01 2014
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 10/24/2014 11:29 AM, Ary Borenszweig wrote:
 On 9/27/14, 8:15 PM, Walter Bright wrote:
 Now, imagine I have an assert in my application. When the web server hits the
 assertion it shuts down and the user doesn't get a response. What I'd like to
 do is to trap that assertion, tell the user that there's a problem, and send
 me an email telling me to debug it and fix it. That way the user can continue
 using the bot and meanwhile I can fix the bug.
Don't need an exception for that. You can insert your own handler with core.assertHandler(myAssertHandler). Or you can catch(Error). But you don't want to try doing anything more than notification with that - the program is in an unknown state.
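A minimal sketch of the first option (in current druntime the hook is the
core.exception.assertHandler property; the handler body here is illustrative,
and note it only notifies and aborts -- throwing from it would break nothrow
guarantees):

import core.exception : assertHandler;
import core.stdc.stdio : fprintf, stderr;
import core.stdc.stdlib : abort;

// Notification only -- the program is in an unknown state, so print
// a message and die rather than trying to recover.
void notifyAndDie(string file, size_t line, string msg) nothrow
{
    fprintf(stderr, "assert failed at %.*s(%u): %.*s\n",
            cast(int) file.length, file.ptr,
            cast(uint) line,
            cast(int) msg.length, msg.ptr);
    abort();
}

void main()
{
    assertHandler = &notifyAndDie;
    assert(false, "boom");   // in a non-release build, this now goes
                             // through notifyAndDie
}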
Oct 24 2014
parent reply "Sean Kelly" <sean invisibleduck.org> writes:
On Friday, 24 October 2014 at 19:09:23 UTC, Walter Bright wrote:
 You can insert your own handler with 
 core.assertHandler(myAssertHandler). Or you can catch(Error). 
 But you don't want to try doing anything more than notification 
 with that - the program is in an unknown state.
Also be aware that if you throw an Exception from the assertHandler you could be violating nothrow guarantees.
Oct 27 2014
parent Walter Bright <newshound2 digitalmars.com> writes:
On 10/27/2014 1:54 PM, Sean Kelly wrote:
 On Friday, 24 October 2014 at 19:09:23 UTC, Walter Bright wrote:
 You can insert your own handler with core.assertHandler(myAssertHandler). Or
 you can catch(Error). But you don't want to try doing anything more than
 notification with that - the program is in an unknown state.
Also be aware that if you throw an Exception from the assertHandler you could be violating nothrow guarantees.
Right.
Oct 29 2014