
digitalmars.D - [Sorta OT] License Restrictions

reply Paul Bonser <misterpib gmail.com> writes:
Some mention of license problems got me thinking about this piece of 
standard Sun boilerplate:

"Nuclear, missile, chemical biological weapons or nuclear maritime end 
uses or end users, whether direct or indirect, are strictly prohibited."

Are we going to have that kind of restrictions on D, or will we be free 
to use it to guide weapons of mass destruction? :P

-- 
-PIB

--
"C++ also supports the notion of *friends*: cooperative classes that
are permitted to see each other's private parts." - Grady Booch
Feb 03 2005
next sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Paul Bonser wrote:
 Some mention of license problems got me thinking about this piece of 
 standard Sun boilerplate:
 
 "Nuclear, missile, chemical biological weapons or nuclear maritime end 
 uses or end users, whether direct or indirect, are strictly prohibited."
 
 Are we going to have that kind of restrictions on D, or will we be free 
 to use it to guide weapons of mass destruction? :P
 
The context I've usually seen that in is a disclaimer of responsibility for the results of using (this or that) product for (this or that) purpose. I doubt that it would have any effect (IANAL), but supposedly the claim is implicitly: "We aren't responsible if you use it that way, so you can't sue us, and neither can your victims."
Feb 04 2005
parent reply Anders F Björklund <afb algonet.se> writes:
Charles Hixson wrote:

 Are we going to have that kind of restrictions on D, or will we be 
 free to use it to guide weapons of mass destruction? :P
The context I've usually seen that in is a disclaimer of responsibility for the results of using (this or that) product for (this or that) purpose. I doubt that it would have any effect (IANAL), but supposedly the claim is implicitly: "We aren't responsible if you use it that way, so you can't sue us, and neither can your victims."
I think the D license's:
 Do not use this software for life critical applications, or applications
 that could cause significant harm or property damage.
Might cover long distance missiles :-) --anders
Feb 04 2005
parent reply pragma <pragma_member pathlink.com> writes:
In article <cu0q2s$a8v$1 digitaldaemon.com>, 
I think the D license's:
 Do not use this software for life critical applications, or applications
 that could cause significant harm or property damage.
Might cover long distance missiles :-)
It's interesting that you bring that up. Walter may want to clarify that language because it would clearly put great organizations like NASA or ESA out of the loop... that is, if it's not changed after v1.0. - EricAnderton at yahoo
Feb 04 2005
next sibling parent "Walter" <newshound digitalmars.com> writes:
"pragma" <pragma_member pathlink.com> wrote in message
news:cu0qpe$avq$1 digitaldaemon.com...
 In article <cu0q2s$a8v$1 digitaldaemon.com>,
I think the D license's:
 Do not use this software for life critical applications, or
applications
 that could cause significant harm or property damage.
Might cover long distance missiles :-)
It's interesting that you bring that up. Walter may want to clarify that language because it would clearly put great organizations like NASA or ESA out of the loop... that is, if it's not changed after v1.0.
I don't care for the liability. An organization could use it for such purposes, but only if they're willing to send me a signed statement assuming liability and indemnifying Digital Mars.
Feb 04 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"pragma" <pragma_member pathlink.com> wrote in message 
news:cu0qpe$avq$1 digitaldaemon.com...
 In article <cu0q2s$a8v$1 digitaldaemon.com>,
I think the D license's:
 Do not use this software for life critical applications, or 
 applications
 that could cause significant harm or property damage.
Might cover long distance missiles :-)
It's interesting that you bring that up. Walter may want to clarify that language because it would clearly put great organizations like NASA or ESA out of the loop... that is, if it's not changed after v1.0.
Guys, if we persist with the mechanism of no compile-time detection of return paths, and rely on the runtime exceptions, do we really think NASA would use D? Come on!
Feb 04 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cu15pb$jqf$1 digitaldaemon.com...
 "pragma" <pragma_member pathlink.com> wrote in message 
 news:cu0qpe$avq$1 digitaldaemon.com...
 In article <cu0q2s$a8v$1 digitaldaemon.com>,
I think the D license's:
 Do not use this software for life critical applications, or 
 applications
 that could cause significant harm or property damage.
Might cover long distance missiles :-)
It's interesting that you bring that up. Walter may want to clarify that language because it would clearly put great organizations like NASA or ESA out of the loop... that is, if it's not changed after v1.0.
Guys, if we persist with the mechanism of no compile-time detection of return paths
"and switch cases"
, and rely on the runtime exceptions, do we really think NASA would use 
D? Come on!
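[For readers unfamiliar with the complaint: a minimal D sketch of both cases - code that compiles cleanly but fails only at runtime. The behaviour described matches DMD as commonly reported in this era; treat the details as an assumption, not something verified against a specific v0.x release.]

```d
// Missing return path: accepted at compile time, but a call with
// x == 0 falls off the end and triggers a runtime assert.
int sign(int x)
{
    if (x > 0)
        return 1;
    else if (x < 0)
        return -1;
    // no return for x == 0
}

// Missing switch default: accepted at compile time, but an
// unmatched value raises a SwitchError at runtime.
char grade(int score)
{
    switch (score / 10)
    {
        case 10, 9: return 'A';
        case 8:     return 'B';
        // no default case
    }
}
```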
Feb 04 2005
parent reply Vathix <vathix dprogramming.com> writes:
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would use
 D? Come on!
Would you fly to Mars in debug mode?
Feb 04 2005
next sibling parent John Reimer <brk_6502 yahoo.com> writes:
Vathix wrote:
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would use
 D? Come on!
Would you fly to mars in debug mode?
Maybe if there were a debugger available, and one could single step. ;-)
Feb 04 2005
prev sibling next sibling parent "Walter" <newshound digitalmars.com> writes:
"Vathix" <vathix dprogramming.com> wrote in message
news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would use
 D? Come on!
Would you fly to mars in debug mode?
If it was critical software, yes, I'd run it with all the debugging checks turned on.
Feb 04 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Vathix" <vathix dprogramming.com> wrote in message 
news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time detection 
 of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would 
 use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.

I recently worked on a large-scale multi-protocol, (multi-threaded) multi-process, non-stop system, and used a lot of contract programming (CP) in it. It's now humming away happily with all that good contract enforcement, and suicidal servers. I have to tell you, I had a devil of a time persuading the project managers of the utility of CP, and even the techie guy had his qualms.

Like all commercial projects, this one started system testing the day it went into production. And, do you know, it has only had two bugs so far. One of these had an invariant condition ready for it, so it killed itself informatively and the bug was fixed in 10 minutes. The second did not have an invariant coded for it - much to my chagrin - and took over a week to find.

So, the lesson to me is that CP should always be on, and the more complex the system the more important it is that that be so. Although I've worked on a few complex large-scale systems in the past that did not have it (and which have run without flaw for years), I will not do so in the future. CP all the way!

btw, we're going to write this up as a case-study for an instalment of Bjorn Karlsson's and my Smart Pointers column, called "The Nuclear Reactor and the Deep Space Probe". It's mostly written, including some excellent quotes from big-W, and we hope to get it out sometime this month. (The column's on Artima.com, and available free for anyone; no sign-up required.)

Cheers

Matthew
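[To make the anecdote concrete, here is a minimal D sketch of the kind of invariant being described. The class and condition names are invented for illustration, not taken from the project discussed.]

```d
class Server
{
    private uint pending;
    private uint capacity = 100;

    // Checked on entry and exit of every public member function
    // (when contracts are compiled in): a violated invariant kills
    // the process informatively, rather than hiding the bug.
    invariant
    {
        assert(pending <= capacity);
    }

    void enqueue()
    in
    {
        assert(pending < capacity); // precondition: room must remain
    }
    body
    {
        ++pending;
    }
}
```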
Feb 04 2005
parent reply "Dave" <Dave_member pathlink.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message 
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.
How about some middle ground?

    class Foo
    {
        ...
        invariant(X) // runs with debug=X
        {
            ...
        }

        int foo(int i)
        in(Y) {}   // runs with debug=Y
        out(Y) {}
        body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z
    }

Then you could do this:

    dmd -O -inline -release -debug=X -debug=Y foo.d

(-release and -debug are mutually exclusive, but not -release and -debug=ident)

And the contract code could be turned on/off via already built-in command line functionality and keywords. Would it create a mess for compiler implementors to do something like this (Walter)? Would it make sense in the commercial world where PwC is used (like your last project, Matthew)?

IMO, this could give the best of both worlds. Keep the contracts where they are really needed but let the optimizer do its thing everywhere else, like removing asserts and array bounds checking.

- Dave
Feb 05 2005
next sibling parent reply "Charles" <no email.com> writes:
 IMO, this could give the best of both worlds.
I would even settle for less , like a -release-with-cp flag. Charlie "Dave" <Dave_member pathlink.com> wrote in message news:cu2uq3$288a$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time detection
of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would
use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at
least
 for 'important commercial', should be shipped with contract programming
 enforcement on.
How about some middle ground? class Foo { ... invariant(X) // runs with debug=X { ... } int foo(int i) in(Y) {} // runs with debug=Y out(Y) {} body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z } Then you could do this: dmd -O -inline -release -debug=X -debug=Y foo.d (-release and -debug are mutually exclusive, but not -release and -debug=ident) And the contract code could be turned on/off via already built-in command line functionality and keywords. Would it create a mess for compiler implementors to do something like this (Walter)? Would it make sense in the commercial world where PwC is used (like your last project, Matthew)? IMO, this could give the best of both worlds. Keep the contracts where
they
 are really needed but let the optimizer do it's thing everywhere else like
 remove asserts and array bounds checking.

 - Dave
Feb 05 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I just think that we should have CP and debug/release somewhat 
independent.

Ideally, I'd like

    "-debug" to have debugging info _and_ CP
    "-release" to have CP
    "-release -contracts=off" to have neither

and, if anyone's that perverse

    "-debug -contracts=off" to have debugging info only

This all seems eminently straightforward. The only 'twist' is that CP is 
on by default, unless one explicitly requests it to be off. (I'm sure we 
can now start a heated battle about that ...)

"Charles" <no email.com> wrote in message 
news:cu36bt$2f2h$1 digitaldaemon.com...
 IMO, this could give the best of both worlds.
I would even settle for less , like a -release-with-cp flag. Charlie "Dave" <Dave_member pathlink.com> wrote in message news:cu2uq3$288a$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time 
 detection
of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA 
 would
use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at
least
 for 'important commercial', should be shipped with contract 
 programming
 enforcement on.
How about some middle ground? class Foo { ... invariant(X) // runs with debug=X { ... } int foo(int i) in(Y) {} // runs with debug=Y out(Y) {} body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z } Then you could do this: dmd -O -inline -release -debug=X -debug=Y foo.d (-release and -debug are mutually exclusive, but not -release and -debug=ident) And the contract code could be turned on/off via already built-in command line functionality and keywords. Would it create a mess for compiler implementors to do something like this (Walter)? Would it make sense in the commercial world where PwC is used (like your last project, Matthew)? IMO, this could give the best of both worlds. Keep the contracts where
they
 are really needed but let the optimizer do it's thing everywhere else 
 like
 remove asserts and array bounds checking.

 - Dave
Feb 05 2005
parent Thomas Kuehne <thomas-dloop kuehne.thisisspam.cn> writes:
Matthew schrieb am Sun, 6 Feb 2005 07:08:19 +1100:
 I just think that we should have CP and debug/release somewhat 
 independent.

 Ideally, I'd like

     "-debug" to have debugging info _and_ CP
     "-release" to have CP
     "-release -contracts=off" to have neither

 and, if anyone's that perverse

     "-debug -contracts=off" to have debugging info only

 This all seems eminently straightforward. The only 'twist' is that CP is 
 on by default, unless one explicitly requests it to be off. (I'm sure we 
 can now start a heated battle about that ...)
How about using GDC? Have a look at:

    d/dmd/mars.h -> Param.useAssert|useInvariants|useIn|useOut|useArrayBounds|useSwitchError|useUnitTests
    d/d-lang.cc  -> opt_code, d_handle_option

Thomas
Feb 05 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Dave" <Dave_member pathlink.com> wrote in message 
news:cu2uq3$288a$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message 
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time 
 detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA 
 would use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.
How about some middle ground?

    class Foo
    {
        ...
        invariant(X) // runs with debug=X
        {
            ...
        }

        int foo(int i)
        in(Y) {}   // runs with debug=Y
        out(Y) {}
        body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z
    }

Then you could do this:

    dmd -O -inline -release -debug=X -debug=Y foo.d

(-release and -debug are mutually exclusive, but not -release and -debug=ident)
If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants, then at first blush I'd say yes. But I think this should take some thinking about, as there may be good reasons against it that don't immediately spring to mind.

Oh, no, I see. You mean mix CP constructs with version. I'd say no; I think this is _too_ much flexibility. In the rare cases where people really need to have some class's constructs versioned, I think version would suffice. (That'd require that body can be supplied without in or out. I don't know if that's currently legal, but it should be.)
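[A sketch of what "version would suffice" might look like in practice. The identifier names are invented, and it is an assumption here that a version block is legal inside an in-contract body.]

```d
// Stand-in for a costly validation routine (hypothetical).
bool expensiveCheck(int i)
{
    return i < 1_000_000;
}

int foo(int i)
in
{
    assert(i >= 0);                 // always compiled (when contracts are on)
    version (HeavyChecks)
    {
        assert(expensiveCheck(i));  // only with -version=HeavyChecks
    }
}
body
{
    return i * 2;
}
```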
 Would it make sense in the commercial world where PwC is used (like 
 your last project, Matthew)?
I'd say probably not. In this last large project, I did consider having a more granular approach, but the components run with ample speed anyway. The specific constructs which are 'purely debug' are left as simple debug-time asserts (ACME_ASSERT / ACME_MESSAGE_ASSERT), while the CP constructs are ACME_ASSERT_PRECONDITION, ACME_ASSERT_POSTCONDITION and ACME_ASSERT_INVARIANT.

Now this does raise the question of whether/how we discriminate between CP constructs that we want moderated with the "-contracts=on/off" flag and those moderated with the "-debug" flag. In my recent experience, I would say that it's *very important* to be able to have both types, and to control them separately. But that's going to require a new keyword, since people will not be willing to pepper their code with debug { ... } blocks.

Walter, your thoughts?
Feb 05 2005
parent reply "Dave" <Dave_member pathlink.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cu3a23$2iuq$1 digitaldaemon.com...
 "Dave" <Dave_member pathlink.com> wrote in message 
 news:cu2uq3$288a$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message 
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time detection 
 of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA would 
 use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.
How about some middle ground?

    class Foo
    {
        ...
        invariant(X) // runs with debug=X
        {
            ...
        }

        int foo(int i)
        in(Y) {}   // runs with debug=Y
        out(Y) {}
        body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z
    }

Then you could do this:

    dmd -O -inline -release -debug=X -debug=Y foo.d

(-release and -debug are mutually exclusive, but not -release and -debug=ident)
If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants. At first blush, I'd say yes. But I think this should take some thinking about, as there may be good reasons against that don't immediately spring to mind. I'd say probably not. In this last large project, I did consider having a more granular approach, but the components run with ample speed
I think you're right. What I mentioned above would tie PwC too closely to debug(...) and also potentially complicate large projects greatly, by allowing too much granularity.

I'm with what you and Charlie posted earlier: some kind of -contracts=[on,off] flag, or some such, that could be used to override what -debug and -release both enforce now.

I'm currently thinking that the defaults should perhaps act the same as-is ('-debug' implies '-contracts=on'; '-release' implies '-contracts=off'), because that is what current D users have come to expect, and also because PwC is generally considered to be "debug" related (in other words, I think those defaults would be more intuitive for the majority of people familiar with PwC, but of course I could be wrong).

- Dave
Feb 05 2005
next sibling parent Kris <Kris_member pathlink.com> writes:
In article <cu3iia$2r7v$1 digitaldaemon.com>, Dave says...
I'm with what you and Charlie posted earlier, some kind 
of -contracts=[on,off] flag or some such that could be used to override 
what -debug and -release both enforce now.

I'm currently thinking that the defaults should perhaps act the same as-is 
('-debug' implies '-contracts=on'; '-release' implies '-contracts=off') 
because that is what current D users have come to expect and also because 
PwC is generally considered to be "debug" related (in other words, I think 
those defaults would be more intuitive for the majority of people familiar 
with PwC, but of course I could be wrong).

- Dave
Aye - add my voice to that call.
Feb 05 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Dave" <Dave_member pathlink.com> wrote in message 
news:cu3iia$2r7v$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cu3a23$2iuq$1 digitaldaemon.com...
 "Dave" <Dave_member pathlink.com> wrote in message 
 news:cu2uq3$288a$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cu1q0r$1609$1 digitaldaemon.com...
 "Vathix" <vathix dprogramming.com> wrote in message 
 news:opslo861ihkcck4r esi...
 Guys, if we persist with the mechanism of no compile-time 
 detection of
 return paths
"and switch cases"
 , and rely on the runtime exceptions, do we really think NASA 
 would use
 D? Come on!
Would you fly to mars in debug mode?
Well, to seriously answer your question: I think production code, at least for 'important commercial', should be shipped with contract programming enforcement on.
How about some middle ground?

    class Foo
    {
        ...
        invariant(X) // runs with debug=X
        {
            ...
        }

        int foo(int i)
        in(Y) {}   // runs with debug=Y
        out(Y) {}
        body(Z) {} // "error: missing body { ... } after in or out" if debug=Y but not debug=Z
    }

Then you could do this:

    dmd -O -inline -release -debug=X -debug=Y foo.d

(-release and -debug are mutually exclusive, but not -release and -debug=ident)
If you mean that we should be able to individually select on/off the components of CP, i.e. preconditions, postconditions and invariants. At first blush, I'd say yes. But I think this should take some thinking about, as there may be good reasons against that don't immediately spring to mind. I'd say probably not. In this last large project, I did consider having a more granular approach, but the components run with ample speed
I think you're right. What I mentioned above would tie PwC too closely to debug(...) and also potentially complicate large projects greatly, by allowing too much granularity. I'm with what you and Charlie posted earlier, some kind of -contracts=[on,off] flag or some such that could be used to override what -debug and -release both enforce now. I'm currently thinking that the defaults should perhaps act the same as-is ('-debug' implies '-contracts=on'; '-release' implies '-contracts=off') because that is what current D users have come to expect and also because PwC is generally considered to be "debug" related (in other words, I think those defaults would be more intuitive for the majority of people familiar with PwC, but of course I could be wrong).
I would agree, for reasons of legacy breaking, were it not that D is pre-1.0. Since I'm coming to believe more and more that PwC should _not_ be considered a debug-only thing, I think it should be the default.

Naturally, I'm looking at this from the perspective of large commercial systems. For simple utilities, I'd be adding -contracts=off to my makefiles, and be content with that decision.

Walter, may we have it on by default? (and divorce it from debug?)

Please. I'll be polite. Honest, ...., mate! :-)
Feb 05 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3tn3$292$1 digitaldaemon.com...
 I would agree, for reasons of legacy breaking, were it not that D is
 pre-1.0. Since I'm coming to believe more and more that PwC should _not_
 be considered a debug only thing, I think it should be the default.

 Naturally, I'm looking at this from the perspective of large commercial
 systems. For simple utilities, I'd be adding -contracts=off to my
 makefiles, and be content with that decision.

 Walter, may we have it on by default? (and divorce from debug?)

 Please. I'll be polite. Honest, ...., mate! :-)
It is on by default. All -release does is turn it off. You could compile with:

    dmd -O foo

and you'll get debug off, contracts on, optimization on.

-debug turns on the debug() statements. -g turns on "generate symbolic debug info". These are all independent of each other.

The reason -inline is a separate switch is that sometimes inlining can make things slower, debugging can be difficult with inlining happening, and profiling is more accurate with inlining off.
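[A small sketch of that independence. Module and function names follow the Phobos of the time (std.stdio / writefln); treat the exact flag behaviour as an assumption rather than a tested claim.]

```d
import std.stdio;

int half(int i)
in
{
    assert((i & 1) == 0); // contract: stripped only by -release
}
body
{
    return i / 2;
}

void main()
{
    debug writefln("compiled in only with -debug");
    debug (X) writefln("compiled in only with -debug=X");
    writefln("half(8) = ", half(8)); // contract checked unless built with -release
}
```

So `dmd -O foo.d` gives an optimized build with contracts still enforced and neither debug line compiled in.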
Feb 05 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cu454v$7km$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu3tn3$292$1 digitaldaemon.com...
 I would agree, for reasons of legacy breaking, were it not that D is
 pre-1.0. Since I'm coming to believe more and more that PwC should 
 _not_
 be considered a debug only thing, I think it should be the default.

 Naturally, I'm looking at this from the perspective of large 
 commercial
 systems. For simple utilities, I'd be adding -contracts=off to my
 makefiles, and be content with that decision.

 Walter, may we have it on by default? (and divorce from debug?)

 Please. I'll be polite. Honest, ...., mate! :-)
It is on by default. All -release does is turn it off. You could compile with:

    dmd -O foo

and you'll get debug off, contracts on, optimization on.

-debug turns on the debug() statements. -g turns on "generate symbolic debug info". These are all independent of each other.

The reason -inline is a separate switch is that sometimes inlining can make things slower, debugging can be difficult with inlining happening, and profiling is more accurate with inlining off.
Cool. Sounds like the _only_ thing to do is rename the misnomer "-release" to "-nocontracts". Can we have that, and forestall all the wasted mental cycles for people who have to learn what it really means?
Feb 05 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu45td$89b$1 digitaldaemon.com...
 Cool. Sounds like the _only_ thing to do is rename the misnomer
 "-release" to "-nocontracts". Can we have that, and forestall all the
 wasted mental cycles for people who have to learn what it really means?
The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane, weird switches (look at any C++ compiler!); there'd be a switch that would make it "just work". Looks like I failed :-(
Feb 05 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cu4c88$ecu$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu45td$89b$1 digitaldaemon.com...
 Cool. Sounds like the _only_ thing to do is rename the misnomer
 "-release" to "-nocontracts". Can we have that, and forestall all the
 wasted mental cycles for people who have to learn what it really 
 means?
The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Yes, it's that real world again. Better to have several switches which tell the absolute truth about what they each do than a few umbrella switches that mislead, don't you think? :-)
Feb 05 2005
prev sibling next sibling parent reply Mark T <Mark_member pathlink.com> writes:
In article <cu4c88$ecu$1 digitaldaemon.com>, Walter says...
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu45td$89b$1 digitaldaemon.com...
 Cool. Sounds like the _only_ thing to do is rename the misnomer
 "-release" to "-nocontracts". Can we have that, and forestall all the
 wasted mental cycles for people who have to learn what it really means?
The original idea behind -release was to not require the DMD programmer to learn a bunch of arcane weird switches (look at any C++ compiler!), there'd be a switch that would make it "just work". Looks like I failed :-(
Maybe you could recycle -release to turn off contracts and debug, etc. It appears that you also need to provide individual control for the major D features. Do the GDC folks use the same switches?

Feb 06 2005
parent Anders F Björklund <afb algonet.se> writes:
Mark T wrote:

 Do the GDC folks use the same switches?
For the most part, yes. See this page: http://home.earthlink.net/~dvdfrdmn/d/

With GDC, they are called for instance: -frelease -finline-functions -fbounds-check -fdeprecated -funittest -fdebug (i.e. usually the same, with an "f" prefix). There's also a "dmd" wrapper perl script that converts DMD syntax to a GDC call...

--anders

PS. The Missing Manual Pages can be found at: http://www.algonet.se/~afb/d/d-manpages/
Feb 06 2005
prev sibling next sibling parent reply sai <sai_member pathlink.com> writes:
In article <cu4c88$ecu$1 digitaldaemon.com>, Walter says...
The original idea behind -release was to not require the DMD programmer to
learn a bunch of arcane weird switches (look at any C++ compiler!), there'd
be a switch that would make it "just work".

Looks like I failed :-(
I would say -release is more intuitive and self-explanatory; it's just a matter of documentation to specify what -release does.

Or have multiple switches -nocontracts, -noarrayboundchecks etc., and specify in the documentation that -release is a shortcut for all of the above switches.

Sai
Feb 06 2005
parent reply Anders F Björklund <afb algonet.se> writes:
sai wrote:

 I would say -release is more intuitive and self-explanatory; it's just a matter of documentation to specify what -release does.
Okay, so -release is "intuitive" and "self-explanatory" - but you'll have to read the docs to find out what it does? Does not compute :-)

I find "-debug -release" to be a rather weird combination of DFLAGS? But the first is just a version, and the second means to drop contracts (including pre-conditions, invariants, post-conditions and assertions) and also array-bounds and switch-default checks. Thus it is OK to mix.

And of course, the -O and -g are somewhat related to debug vs. release too - but in D that's something else: optimization and debug symbols (neither of which is affected by settings for "-debug" or "-release").

--anders

PS. GDC has already added two flags that are not found in DMD: -fbounds-check (for ArrayBounds) and -femit-templates (needed for template workarounds, on compilers without one-only linkage)
Feb 06 2005
parent reply sai <sai_member pathlink.com> writes:
Anders F Björklund says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll 
have to read the docs to find out what it does ? Does not compute :-)
I find "-debug -release" to be a rather weird combination of DFLAGS ?
Yes, -release means... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc. turned off. Quite self-explanatory to me!

Now, to know what types of checks, contracts, pre-conditions, invariants etc. are supported by the compiler, see its documentation.
But the first is just a version, and the second means to drop contracts
(including pre-conditions, invariants, post-conditions and assertions)
and also array-bounds and switch-default checks. Thus it is OK to mix.
I didn't say it is OK to mix. I usually don't put a -debug switch along with a -release switch. Mixing both switches doesn't make sense either.

Sai
Feb 06 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"sai" <sai_member pathlink.com> wrote in message 
news:cu65me$27i4$1 digitaldaemon.com...
 Anders_F_Bj=F6rklund?= says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute :-)
I find "-debug -release" to be a rather weird combination of DFLAGS ?
Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course) should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering.
Feb 06 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:

 "sai" <sai_member pathlink.com> wrote in message 
 news:cu65me$27i4$1 digitaldaemon.com...
 Anders_F_Bj=F6rklund?= says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute :-)
I find "-debug -release" to be a rather weird combination of DFLAGS ?
Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
I'm thinking as I write here, so I could be way off ...

Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing? And by 'bugs', I mean behaviour that is not documented in the program's (business) requirements specifications. As distinct from runtime handling of bad data or unexpected situations.

If so, then by the time you build a final production version of the application, all the testing is completed. And thus contracts can be removed from the final release. However, you might keep them in for a beta release.

Bad data and unexpected situations should be still addressed by exceptions and/or simple messages, designed to be read by an *end* user and not only the developers.

-- 
Derek
Melbourne, Australia
7/02/2005 9:39:01 AM
Feb 06 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:cu66qc$29vn$1 digitaldaemon.com...
 On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:

 "sai" <sai_member pathlink.com> wrote in message
 news:cu65me$27i4$1 digitaldaemon.com...
 Anders_F_Bj=F6rklund?= says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute 
:-)
I find "-debug -release" to be a rather weird combination of DFLAGS 
?
Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.
Well, yes and no.

Yes, in the sense of a literal interpretation of that sentence.

No, in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested! As such, there's a strong argument that contracts should stay in. IMO, the only reasonable refutations of that argument are on performance grounds.
 And by 'bugs', I mean behaviour that is not documented in
 the program's (business) requirements specifications. As distinct from
 runtime handling of bad data or unexpected situations.
Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing. A lot of this depends on which term one wishes to use for what concept. Hence, one could argue that if a program encounters an 'unexpected situation', then it's operating counter to its design, and is invalid.
 If so, then by the time you build a final production version of the
 application, all the testing is completed.
As I said, this can never be asserted with 100% confidence.
 And thus contracts can be
 removed from the final release.
So this conclusion may not be drawn.
 However, you might keep them in for a beta
 release.
Most certainly. Again, the only reason to leave them out would be a decision made on performance grounds.
 Bad data and unexpected situations should be still addressed by 
 exceptions
 and/or simple messages, designed to be read by an *end* user and not 
 only
 the developers.
Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
Feb 06 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 7 Feb 2005 13:36:48 +1100, Matthew wrote:

 "Derek Parnell" <derek psych.ward> wrote in message 
 news:cu66qc$29vn$1 digitaldaemon.com...
 On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:

 "sai" <sai_member pathlink.com> wrote in message
 news:cu65me$27i4$1 digitaldaemon.com...
 Anders_F_Bj=F6rklund?= says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute 
:-)
I find "-debug -release" to be a rather weird combination of DFLAGS 
?
Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.
Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!
Of course. In the same sense that nothing is ever perfect.

By 'testing' I was referring to the formal development process. And I was thinking more about *who* was doing the testing (as a formal process). The contract code, as I see it, is designed to interact with a developer and not an end user.
 
 As such, there's a strong argument that contracts should stay in. IMO, 
 the only reasonable refutations of that argument are on performance 
 grounds.
'stay in' what? The executable shipped to the end user? Well of course you could, as in the end it's really a matter of style.

I'm using the model that says that "contract code" is that portion of the source code that is only examining stuff so that it can detect specification errors. Other sorts of errors, such as bad data, and such as (illogical?) situations that *have not been specified*, are being tested by different portions of source code at run-time. So it's just a matter of definition, I guess.

I'm just segregating the types of errors being tested based on who will be getting the messages about said errors. "Contract" code assumes its audience for its messages is the development team; "Other Error Testing" code assumes its audience for its messages is both development people *and* end users.

"Contract" code checks for bad output using good input, bad input caused by coding errors (i.e. not user entered data), illogical process flows, etc.

"Other Error Testing" code checks for bad inputs, bad environments (eg. missing files), temporal anomalies (eg. a file which was open, is suddenly found to be closed), etc.
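The split can be sketched in D - the function names and messages below are invented for illustration. The in block guards against a *caller's coding error*, addresses the development team, and vanishes under -release; the thrown exception guards against *bad user input*, addresses the end user, and survives every build:

```d
import std.conv;
import std.stdio;

// "Other Error Testing": bad data from the outside world.
// Stays in the release build; its message targets the end user.
int parsePercentage(char[] userInput)
{
    int value = toInt(userInput);   // may itself throw on junk input
    if (value < 0 || value > 100)
        throw new Exception("Please enter a value between 0 and 100.");
    return value;
}

int scalePercentage(int percent, int factor)
in
{
    // "Contract" code: percent was already validated upstream, so a
    // violation here is a coding error. The message targets the
    // developer, and the whole block is elided by -release.
    assert(percent >= 0 && percent <= 100);
}
body
{
    return percent * factor;
}

void main()
{
    int p = parsePercentage("85");
    writefln("%d", scalePercentage(p, 2));
}
```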
 And by 'bugs', I mean behaviour that is not documented in
 the program's (business) requirements specifications. As distinct from
 runtime handling of bad data or unexpected situations.
Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing.
Sorry. They are two (of many) distinct classes of errors. 'bad data' is one type of error. 'unexpected situations' is another type of error.
 A lot of 
 this depends on which term one wishes to use for what concept. Hence, 
 one could argue that if a program encounters an 'unexpected situation', 
 then it's operating counter to its design, and is invalid.
I meant 'unexpected' in the sense that it is a situation that was not documented in the requirements specification, but happened anyway. It could be seen as a bug in the spec, rather than the code.
 If so, then by the time you build a final production version of the
 application, all the testing is completed.
As I said, this can never be asserted with 100% confidence.
Again it's a definition thing. "Testing is completed" means that the formal testing process for release candidate X is completed and the source code for that candidate is frozen. A production build is produced from that and a 'gold disk' created for the marketing/sales group.

Of course, the test builds for the code still exist, but they are only used in house and by beta testers. But yes, I agree that end users are also involuntary gamma testers ;-)
 And thus contracts can be
 removed from the final release.
So this conclusion may not be drawn.
 However, you might keep them in for a beta
 release.
Most certainly. Again, if, for performance reasons, a decision is made on performance grounds.
 Bad data and unexpected situations should be still addressed by 
 exceptions
 and/or simple messages, designed to be read by an *end* user and not 
 only
 the developers.
Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
I'm thinking about the *cause* of invariant violations. When caused by coding errors, then they should be tested for by contract code. When caused by inputting bad data, then they should be handled by non-contract testing code.

I say this just because I can conceive that some testing code is not suitable for shipping to unsuspecting customers, and should really just be handled in-house. Such code needs to be removed from production versions, and the DMD -release switch is the current mechanism for doing that.

-- 
Derek
Melbourne, Australia
7/02/2005 1:51:12 PM
Feb 06 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:cu6noo$9a1$1 digitaldaemon.com...
 On Mon, 7 Feb 2005 13:36:48 +1100, Matthew wrote:

 "Derek Parnell" <derek psych.ward> wrote in message
 news:cu66qc$29vn$1 digitaldaemon.com...
 On Mon, 7 Feb 2005 09:32:19 +1100, Matthew wrote:

 "sai" <sai_member pathlink.com> wrote in message
 news:cu65me$27i4$1 digitaldaemon.com...
 Anders_F_Bj=F6rklund?= says...
Okay, so -release is "intuitive" and "self-explanatory" - but 
you'll
have to read the docs to find out what it does ? Does not compute
:-)
I find "-debug -release" to be a rather weird combination of 
DFLAGS
?
Yes, -release means ..... it is a release version with all contracts (including pre-conditions, invariants, post-conditions and assertions) etc etc turned off, quite self explainatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
I'm thinking as I write here, so I could be way off ... Isn't the idea of contracts just a mechanism to assist *coders* locate bugs during testing.
Well, yes and no. Yes, in the sense of a literal interpretation of that sentence. No in the sense that testing never ends - there is no non-trivial code that can be demonstrated to be fully tested!
Of course. In the same sense that nothing is ever perfect. By 'testing' I was referring to the formal development process.
Gotcha
 And I was thinking more
 about *who* was doing the testing (as a formal process). The contract 
 code,
 as I see it, is designed to interact with a developer and not an end 
 user.
Yes, that's a valid pov. For myself, I tend towards thinking of the contract as being between the software design and its reification in code (by fallible programmers), whose primary purpose is to protect the system on which it runs from damage. Sounds a bit calamitous, I know, but I've found that to be the most helpful, albeit a bit strict, perspective.
 As such, there's a strong argument that contracts should stay in. 
 IMO,
 the only reasonable refutations of that argument are on performance
 grounds.
'stay in' what? The executable shipped to the end user?
Most certainly. Since a contract violation indicates that the code is performing out of the bounds / against its design, that's something that needs to be detected and handled in all possible circumstances. (Naturally, that's a completely different thing from handling out of bounds runtime conditions.)
 Well of course you
 could, as in the end, its really a matter of style. I'm using the 
 model
 that says that "contract code" is that portion of the source code that 
 is
 only examining stuff so that it can detect specification errors. Other
 sorts of errors, such as bad data, and such as (illogical?) situations 
 that
 *have not been specified*, are being tested by different portions of 
 source
 code at run-time. So its just a matter of definition, I guess.
It is indeed.
 I'm just segregating the types of errors being tested based on who 
 will be
 getting the messages about said errors. "Contract" code assumes its
 audience for its messages is the development team, "Other Error 
 Testing"
 code assumes its audience for its messages is both development people 
 *and*
 end users.
From the perspective of who gets to see them, then I agree with what you say. Furthermore, I think it's a nice way of looking at it.

Attractive as it is, however, I don't think it can be allowed to sway us into accepting that contracts should not manifest in the presence of users just because they're ill equipped to deal with the messages that will be produced. Their prime purpose is to detect invalid programs, which must, axiomatically, be something that a user would not want to be executing on their system.

More practically, I can say from the experience of my recent work for a client - the project that has informed on / firmed up my commitment to 'CP-live' - that the users did indeed find it strange that the software informed them that it was invalid. However, when I (i) explained what this meant, and (ii) fixed the bug and had everything flying again within 10 minutes, they grok'ed it. Enthusiastically.
 "Contract" code checks for bad output using good input, bad input 
 caused by
 coding errors (i.e. not user entered data), illogical process flows, 
 etc...
Yes.
 "Other Error Testing" code checks for bad inputs, bad environments 
 (eg.
 missing files), temporal anomalies (eg. a file which was open, is 
 suddenly
 found to be closed), etc...
Yes, although I would hazard to suggest that the latter (the unexpectedly closed file) would more likely be a sign of a bug in most instances in which it might possibly happen. Now, if you meant unexpectedly deleted, that would be more a runtime error condition, rather than a violation.
 And by 'bugs', I mean behaviour that is not documented in
 the program's (business) requirements specifications. As distinct 
 from
 runtime handling of bad data or unexpected situations.
Well, your terminology is a bit off. You say "distinct from runtime handling of bad data or unexpected situations", implying 'bad data' and 'unexpected situations' are kind of part of the same thing.
Sorry. They are two (of many) distinct classes of errors. 'bad data' is one type of error. 'unexpected situations' is another type of error.
Here is where we get woolled up on the terminology, I think. Bad data is unequivocally a runtime error condition. But 'unexpected situations' can be that, or it can be a contract violation, depending on circumstances.

The (10 minute) violation of which I've spoken a couple of times was most certainly an unexpected situation - hence it fired the violation assert and killed the process. There were other 'unexpected conditions' that were, in a sense, not so unexpected, since they were catered for in the code. One or two of these did occur, even though we didn't expect them to, but because we'd accounted for them, they resulted in a graceful reset and restart of the offended communications channel.

I think I'd have to say that 'unexpected situation' is too malleable a term to be meaningful. I see it in black-and-white: there are contract violations and there are runtime error conditions. The former detect violations of the design assumptions. The latter are part of the design.

(It is in the cracks between the two where the nightmares occur. The only other bug in the system was not caught for a week because the invariant for the Channel class was insufficiently specific. Thus, this can be said to be a deficiency in the design of the contracts, just as much as it manifests as a bug in the code.)
 A lot of
 this depends on which term one wishes to use for what concept. Hence,
 one could argue that if a program encounters an 'unexpected 
 situation',
 then it's operating counter to its design, and is invalid.
I meant 'unexpected' in the sense that it is a situation that was not documented in the requirements specification, but happened anyway. I could be seen as a bug in the spec, rather than the code.
Between spec and code is design. If it was accounted for in the design, then the code should handle it. If not, violation! <G>
 If so, then by the time you build a final production version of the
 application, all the testing is completed.
As I said, this can never be asserted with 100% confidence.
Again it's a definition thing. "testing is completed" means that the formal testing process for release candidate X is completed and the source code for that candidate is frozen. A production build is produced from that and a 'gold disk' created for the marketing/sales group. Of course, the test builds for the code still exist, but they are only used in house and by beta testers. But yes, I agree that end users are also involuntary gamma testers ;-)
An excellent phrase! I shall quote you with gay abandon. <G>
 And thus contracts can be
 removed from the final release.
So this conclusion may not be drawn.
 However, you might keep them in for a beta
 release.
Most certainly. Again, if, for performance reasons, a decision is made on performance grounds.
 Bad data and unexpected situations should be still addressed by
 exceptions
 and/or simple messages, designed to be read by an *end* user and not
 only
 the developers.
Assuming that your unexpected situations are in the 'bad data' camp, rather than invariant violations, in which case: Yes.
I'm thinking about the *cause* of invariant violations. When caused by coding errors, then they should be tested for by contract code. When caused by inputting bad data, then they should be handled by non-contract testing code.
By definition, a contract violation cannot be as a result of bad data.
 I say this, just because I can conceive that some testing code is not
 suitable for shipping to unsuspecting customers, and should really 
 just be
 handled in-house.
As can I. In the latest project - the only one in which I've really gone to town on C.P. - there was a lot of code that was debug only. I wrote ACMECLIENT_ASSERT and ACMECLIENT_MESSAGE_ASSERT for the debugging stuff, and there was ACMECLIENT_ASSERT_PRECONDITION, ACMECLIENT_ASSERT_POSTCONDITION and ACMECLIENT_ASSERT_INVARIANT for the C.P. The presence/absence of the two sets were independent. In practice, we have everything in the debug build, and only the CP ones in release.
 Such code needs to be removed from production versions
 and the DMD -release switch is the current mechanism for doing that.
Agreed. That's why I'm suggesting that we need:

1. Separate 'assertion' constructs for C.P., as opposed to debug/developer-only assertions

2. To separate the elision/enabling of the two types. Specifically, I suggest that assertions are in unless "-release" is specified, but C.P. constructs are in unless "-nocontracts" is specified.

However, I fear all this good stuff we've been covering in the last few days will falter because Walter may not want to, at this stage, introduce separate constructs for CP vs debug/developer assertions. Which, then, makes writing large system stuff in D harder, as Kris has been regretfully observing.

Perhaps a solution is that "-nocontracts" elide all assertions within in/out/invariant blocks, and "-release" all those without.

Walter?

Cheers

Matthew
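Until such a split exists, it can be approximated by hand with the constructs D already has: contract assertions go in in/out/invariant blocks (stripped by -release), while developer-only diagnostics go in debug blocks (compiled in only with -debug). A minimal sketch, with invented names:

```d
import std.stdio;

int divide(int numerator, int denominator)
in
{
    // contract assertion: under the proposal above this would stay
    // unless -nocontracts; today it is elided by -release
    assert(denominator != 0);
}
body
{
    debug
    {
        // developer-only diagnostic: present only when built with
        // -debug, so it never reaches a shipped build
        writefln("divide(%d, %d)", numerator, denominator);
    }
    return numerator / denominator;
}

void main()
{
    writefln("%d", divide(10, 2));
}
```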
Feb 07 2005
parent reply "Unknown W. Brackets" <unknown simplemachines.org> writes:
I can also lament this; when people feel they've found a bug, they feel 
one of two things: anger, that there's a bug (most common in production) 
or satisfaction, for finding a mistake in someone else's work.

In fact, some of the second case of people will get noticeably 
disappointed if you verify that what they experienced, for whatever 
reason, is NOT a bug - even if it's not their fault either (I remember 
remarking on this, but at the moment I can't think of how these two 
conditions can both be true.)

Anyway, if someone finds a bug and they get a warning describing 
something about what happened, they tend to be more often of the second 
class.  Not only because the error tends to corrupt their data less (not 
transparently screwing up, but catching itself) but because it's 
definitely a bug.  There's still often anger there, but there's usually 
satisfaction too.

But, next, if you have it fixed almost immediately after it's 
reported... well, let's be realistic:

Most end users know that much of the software they use has bugs.  They've 
been frustrated by bugs in Windows, spyware (which often crashes 
Internet Explorer for them), and other even good software.  There are 
always bugs, and so EVEN WHEN THEY DON'T FIND ANY, they expect them.

But, if they report a bug (which hopefully should be unlikely anyway) 
and it gets fast response, that's something else.  Confidence.  They 
knew there would be bugs before, but now, NOW they know that if they 
ever find one, they'll have an easy message to report, and once you get 
it you'll fix it for them and get them the new version.  They will love you.

Maybe if we were back 20 years ago, we could try to fix this.  But, it's 
too late now.  We can't pretend bugs don't exist, or are so uncommon our 
clients won't expect them - even in OUR software.  Nor can we pretend 
they won't, because... they will.

So, the trick is optimizing the solution.  Making them trust us, you, 
again.  At least, imho.

-[Unknown]


 More practically, I can say that from the experience of my recent work 
 for a client - the project that has informed on / firmed up my 
 commitment to 'CP-live' - that the users did indeed find it strange that 
 the software informed them that it was invalid. However, when I (i) 
 explained what this meant, and (ii) fixed the bug and had everything 
 flying again with 10 minutes, they grok'ed it. Enthusiastically.
Feb 07 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 07 Feb 2005 22:56:53 -0800, Unknown W. Brackets  
<unknown simplemachines.org> wrote:

<snip>

 But, if they report a bug (which hopefully should be unlikely anyway)  
 and it gets fast response, that's something else.  Confidence.  They  
 knew there would be bugs before, but now, NOW they know that if they  
 ever find one, they'll have an easy message to report, and once you get  
 it you'll fix it for them and get them the new version.  They will love  
 you.
This is the principle that the company I work for operates by, and by and large it appears to work as you have described.

Regan
Feb 08 2005
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
In article <cu6k7l$2dk$1 digitaldaemon.com>, Matthew says...

<snip>
As such, there's a strong argument that contracts should stay in. IMO, 
the only reasonable refutations of that argument are on performance 
grounds.
<rant>

I agree. I mean let's face it, we've probably all shipped production code with what amounts to non-language-formalized PwC in it, hence my earlier suggestion to divorce PwC from any switches, or at least any of the default "meta" switches.

In the current ref. compiler implementation of PwC, the only perf. side effect that I can see is that calls are made to check for invariants even when there are none defined for a class. That is extremely expensive because it is done for every call to every method. If the compiler front-end can optimize those away then we only pay for what we use.

I'm afraid that as-is, I'd end up falling back to some non-formalized form of PwC (and not even use it as D intends) because, as you pointed out, Matthew, non-trivial & complicated code is hard to ever completely debug.

As an example, consider an ad-hoc reporting UI where the user can select all sorts of different interdependent parameters that may have side-effects on any number of other parameters. Typically most of these different combinations will never be tested before release. The UI param class doesn't need high performance, but the report data-gen class does. What to do? You can either write gobs of code spread out all over the place to try and validate the input params, or you can place it all in an invariant block and live with the shitty performance of the data-gen class that doesn't even use invariant (or end up segregating the code - and compiler switches in the build script - even though it may not otherwise make sense to do so, only because of the performance side effects of invariants on classes that don't use them).

The compiler already has to walk the class genealogy tree for things like making sure that super ctors are implemented with correct arguments, to enforce protection attributes, etc., etc. I suspect it can't add much complexity to add a cd->hasInvariant || cd->baseClass->hasInvariant flag or whatever to make the decision on whether or not to emit the invariant call for a class.

Formalized PwC is a great feature of D and I just want to see it used. After all, (hopefully) our D applications will go through many more CPU cycles in production than in test.

</rant>

- Dave
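For reference, the invariant machinery being discussed looks like this in D (names invented, modelled loosely on the Date example in the D contracts documentation). The invariant runs on entry and exit of public member functions, and can also be invoked explicitly by asserting on a class reference - the point above is that this call is currently emitted even for classes that define *no* invariant:

```d
class Date
{
    int day;
    int hour;

    this(int d, int h) { day = d; hour = h; }

    // run automatically around public member calls (non-release builds)
    invariant
    {
        assert(1 <= day && day <= 31);
        assert(0 <= hour && hour < 24);
    }
}

void main()
{
    Date d = new Date(15, 12);
    // assert on a class reference runs its invariant explicitly
    assert(d);
}
```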
Feb 07 2005
parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Dave wrote:

 I agree. I mean let's face it, we've probably all shipped production code with
 what amounts to non-language formalized PwC in it, hence my earlier suggestion
 to divorce PwC from any switches, or at least any of the default "meta"
 switches.
Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ? --anders
Feb 07 2005
next sibling parent reply Dave <Dave_member pathlink.com> writes:
In article <cu87kd$2soo$1 digitaldaemon.com>,
=?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= says...
Dave wrote:

 I agree. I mean let's face it, we've probably all shipped production code with
 what amounts to non-language formalized PwC in it, hence my earlier suggestion
 to divorce PwC from any switches, or at least any of the default "meta"
 switches.
Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ? --anders
I'm not sure what the consensus would be on that but PwC expresses the implementation side of the DbC idea better in my mind, and it seems that the terms are used interchangeably anyhow. Whether or not the originators of the two terms intended that I don't know. If the consensus here is that DbC is the better blanket term, I'll be more than happy to use that. - Dave
Feb 07 2005
parent reply =?windows-1252?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Dave wrote:

 Maybe I missed something, but what's the difference between
Programming with Contracts (PwC) and Design by Contract (DbC) ?
 If the consensus here is that DbC is the better blanket term,
 I'll be more than happy to use that.
Nah, it was only that I had to look it up myself - AFAIK, it stood for PricewaterhouseCoopers :-)

The D spec only used Design by Contract™, though. As in http://www.digitalmars.com/d/dbc.html

"Design by Contract" is now a trademark of Eiffel Software. (yet another reason to use a generic: Contract Programming)

But "Contracts" seems to be a simple enough term to use ? (otherwise we'll just debate "in/out" vs. "pre/post"...)

There does seem to be quite the confusion about contracts vs. exceptions vs. unittests vs. release vs. whatever, so anything specific added to the D language spec helps here.

--anders
Feb 07 2005
parent Kris <Kris_member pathlink.com> writes:
In article <cu8bdv$389$1 digitaldaemon.com>,
=?windows-1252?Q?Anders_F_Bj=F6rklund?= says...
Dave wrote:

 Maybe I missed something, but what's the difference between
Programming with Contracts (PwC) and Design by Contract (DbC) ?
 If the consensus here is that DbC is the better blanket term,
 I'll be more than happy to use that.
Nah, it was only that I had to look it up myself, - AFAIK, it stood for PricewaterhouseCoopers :-) The D spec only used Design by Contract™, though. As in http://www.digitalmars.com/d/dbc.html "Design by Contract" is now a trademark by Eiffel Software. (yet another reason to use a generic: Contract Programming) But "Contracts" seems to be a simple enough term to use ? (otherwise we'll just debate "in/out" vs. "pre/post"...) There does seem to be quite the confusion about contracts vs. exceptions vs. unittests vs. release vs. whatever, so anything specific added to the D language spec helps here. --anders
I'd just like to point out that DbC is but a subset of AOP. The latter wields some pretty awesome mechanics for 'instrumenting' code, in highly manageable and ultimately configurable ways. Where DbC is about manually instrumenting each method and class, AOP subsumes that and extends it across classes; across behavior. It would seem to be a perfect match for the stated goals of D.

Note that some 'aspects' of AOP relate to the injection of code before and after a particular set of methods is executed. D supports this via in{} and out{} constructs, plus invariant{}. This is why, I imagine, one can write an AOP preprocessor for D. However, AOP goes far beyond that. I encourage all to read up on AOP, just to see what the potential is.

Why does this relate to the -release switch? The D version{} feature could be leveraged to enable/disable broad swathes of AOP functionality, at a high & adroitly manageable level. This would represent the ultimate in controlling which particular tests are retained for any given compile.

- Kris
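The version{} idea might be sketched like this - HeavyChecks is an invented version identifier, switched on per build with -version=HeavyChecks on the dmd command line, giving finer-grained control over which checks survive a given compile than the all-or-nothing -release:

```d
import std.stdio;

void processOrder(int quantity)
in
{
    // cheap contract, present in any non-release build
    assert(quantity > 0);

    // expensive cross-cutting check, compiled in only when the
    // build is run with -version=HeavyChecks
    version (HeavyChecks)
    {
        assert(quantity < 1_000_000);
    }
}
body
{
    writefln("processing %d", quantity);
}

void main()
{
    processOrder(10);
}
```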
Feb 07 2005
prev sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:cu87kd$2soo$1 digitaldaemon.com...
 Dave wrote:

 I agree. I mean let's face it, we've probably all shipped production 
 code with
 what amounts to non-language formalized PwC in it, hence my earlier 
 suggestion
 to divorce PwC from any switches, or at least any of the default 
 "meta"
 switches.
Maybe I missed something, but what's the difference between Programming with Contracts (PwC) and Design by Contract (DbC) ?
It's a distinction primarily promoted by Chris Diggins
(www.heron-language.com), which attempts to draw a meaningful distinction
between the practice of contract programming techniques in code, and the
use of contracts as design specifications.

Being a decompositionist by nature, I get where he's coming from, and
there's good historical support for his position - people have been using
asserts for a long time in blissful ignorance of the lofty ideals of CP
(or DbC, if you want to pay Mssr Dr Meyer lots of cash) - and it's
eminently reasonable to use a contract to specify the behaviour of a
function without enforcing it at runtime; we all do it all the time!

However, since the two things are more and more coming together, I tend
to prefer to follow Walter's lead, and just call it Contract Programming.
Feb 07 2005
prev sibling next sibling parent reply Anders F Björklund <afb algonet.se> writes:
Matthew wrote:

 I think the original issue under debate, sadly largely ignored since, is 
 whether contracts (empty clauses elided, of course), should be included 
 in a 'default' release build. I'm inexorably moving over to the opinion 
 that they should, and I was hoping for opinions from people, considering 
 the long-term desire to turn D into a major player in systems 
 engineering
I just don't think contracts have anything to do with release vs. debug ?
For instance, I had to build a non-release version of the Phobos lib
just to make it check the runtime contracts in my own debugging builds.

I thought that pure debugging code was to be put in debug {} blocks ?
And that the contracts *could* remain, even in released versions...
Array-bounds and switch-default are probably OK to strip for release.

Maybe stripping asserts and contracts in release builds is standard
procedure, but it would be more straight-forward if called -contract ?
(which could be a "subflag" that is triggered to 0 by -release, but)

Or maybe I am just mixing up exceptions versus contracts, as usual...
http://research.remobjects.com/blogs/mh/archive/2005/01/11/232.aspx

Even so, having a libphobos-debug.a version has helped me catch a few.
(i.e. for debugging builds I use -lphobos-debug, -lphobos for release)

--anders
Feb 06 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I think it's getting clear that we're going to need fine grained control 
of what stays in a final production build, and what does not.

I also think that we should probably either:
    1 make "-release" mean "*all* CP stuff stays in", or
    2 make "*all* CP is elided".

Any halfway house is just likely to lead to confusion.

Since I, personally, think that 2 is a bad thing, but I strongly suspect 
there'll be objections to 1, for obverse reasons.

Maybe the answer might be to drop "-release" entirely. Can someone with 
a more detailed understanding than me describe what this might entail, 
i.e. what the diff between "-debug" and "" might be?

Vaguely, yours

Matthew

"Anders F Björklund" <afb algonet.se> wrote in message 
news:cu66t7$29pc$1 digitaldaemon.com...
 Matthew wrote:

 I think the original issue under debate, sadly largely ignored since, 
 is whether contracts (empty clauses elided, of course), should be 
 included in a 'default' release build. I'm inexorably moving over to 
 the opinion that they should, and I was hoping for opinions from 
 people, considering the long-term desire to turn D into a major 
 player in systems engineering
I just don't think contracts have anything to do with release vs. debug ?
For instance, I had to build a non-release version of the Phobos lib
just to make it check the runtime contracts in my own debugging builds.

I thought that pure debugging code was to be put in debug {} blocks ?
And that the contracts *could* remain, even in released versions...
Array-bounds and switch-default are probably OK to strip for release.

Maybe stripping asserts and contracts in release builds is standard
procedure, but it would be more straight-forward if called -contract ?
(which could be a "subflag" that is triggered to 0 by -release, but)

Or maybe I am just mixing up exceptions versus contracts, as usual...
http://research.remobjects.com/blogs/mh/archive/2005/01/11/232.aspx

Even so, having a libphobos-debug.a version has helped me catch a few.
(i.e. for debugging builds I use -lphobos-debug, -lphobos for release)

--anders
Feb 06 2005
parent Anders F Björklund <afb algonet.se> writes:
Matthew wrote:

 I think it's getting clear that we're going to need fine grained control 
 of what stays in a final production build, and what does not.
 
 I also think that we should probably either:
     1 make "-release" mean "*all* CP stuff stays in", or
     2 make "*all* CP is elided".
 
 Any halfway house is just likely to lead to confusion.
 
 Since I, personally, think that 2 is a bad thing, but I strongly suspect 
 there'll be objections to 1, for obverse reasons.
After doing some more reading on the subject, I've come to agree with
the default setting of DMD, which is to strip all contracts on -release.

But there could still be optional individual flags to toggle each of:

*) contracts (all four of them)
*) array bounds
*) switch default

There are already such flags in the code, just not on the command line.
     if (global.params.release)
     {
         global.params.useInvariants = 0;
         global.params.useIn = 0;
         global.params.useOut = 0;
         global.params.useAssert = 0;
         global.params.useArrayBounds = 0;
         global.params.useSwitchError = 0;
     }
Just didn't make it all the way clear to the documentation, I suppose ?
 Maybe the answer might be to drop "-release" entirely. Can someone with 
 a more detailed understanding than me describe what this might entail, 
 i.e. what the diff between "-debug" and "" might be?
"debug" is just a version, i.e. it triggers the debug { ... } sections.
It doesn't have *anything* to do with the "-release" flag whatsoever...

Whereas leaving out "-release" leaves the default values for the above:
     global.params.useAssert = 1;
     global.params.useInvariants = 1;
     global.params.useIn = 1;
     global.params.useOut = 1;
     global.params.useArrayBounds = 1;
     global.params.useSwitchError = 1;
I found some more interesting opinions on: http://c2.com/cgi/wiki?WhatAreAssertions
 Since assertions that don't fail are no-ops, once a program has been
 thoroughly tested and bug-fixed, it is possible to recompile the source
 code without the assertions to produce a program that is both smaller
 and faster.

 Once upon a time assertions were not executable: See AssertionsAsComments.
--anders
Feb 07 2005
prev sibling next sibling parent Kris <Kris_member pathlink.com> writes:
Got my vote, on both counts. 

Contracts 'on' by default and, perhaps more pressing, a means to retain the
contracts whilst removing asserts, invariants, array-bounds, etc.

I don't suppose there will be any real agreement upon which of the latter are
appropriate or not; hence it would seem prudent to expose a compound flag,
controlling which of those tests should be on or off ... <sigh>

- Kris


In article <cu65vg$289g$1 digitaldaemon.com>, Matthew says...
"sai" <sai_member pathlink.com> wrote in message 
news:cu65me$27i4$1 digitaldaemon.com...
Anders F Björklund says...
Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute :-)
I find "-debug -release" to be a rather weird combination of DFLAGS ?
Yes, -release means ..... it is a release version with all contracts
(including pre-conditions, invariants, post-conditions and assertions)
etc etc turned off, quite self-explanatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
Feb 06 2005
prev sibling parent Lars Ivar Igesund <larsivar igesund.net> writes:
Sorry if I didn't say so before Matthew, but I agree wholeheartedly. 
Which switches to use I don't really care about, though I agree that 
'-release' is somewhat badly named, as it means different things to 
different people, dependent on experience, company practices and more.

Lars Ivar Igesund

Matthew wrote:
 "sai" <sai_member pathlink.com> wrote in message 
 news:cu65me$27i4$1 digitaldaemon.com...
 
Anders F Björklund says...

Okay, so -release is "intuitive" and "self-explanatory" - but you'll
have to read the docs to find out what it does ? Does not compute :-)
I find "-debug -release" to be a rather weird combination of DFLAGS ?
Yes, -release means ..... it is a release version with all contracts
(including pre-conditions, invariants, post-conditions and assertions)
etc etc turned off, quite self-explanatory to me !!
I think the original issue under debate, sadly largely ignored since, is whether contracts (empty clauses elided, of course), should be included in a 'default' release build. I'm inexorably moving over to the opinion that they should, and I was hoping for opinions from people, considering the long-term desire to turn D into a major player in systems engineering
Feb 07 2005
prev sibling parent "Dave" <Dave_member pathlink.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cu4c88$ecu$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu45td$89b$1 digitaldaemon.com...
 Cool. Sounds like the _only_ thing to do is rename the misnomer
 "-release" to "-nocontracts". Can we have that, and forestall all the
 wasted mental cycles for people who have to learn what it really means?
The original idea behind -release was to not require the DMD programmer
to learn a bunch of arcane weird switches (look at any C++ compiler!);
there'd be a switch that would make it "just work". Looks like I
failed :-(
I don't think you failed - actually I think the original plan and how
it's implemented makes a lot of sense, with perhaps one exception on the
implementation.

If this issue:

http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/15834

is addressed (basically, so a call to check for invariants at run-time is
done /only/ if there is an invariant defined or inherited for a class),
then I think PwC can be divorced from the -release switch altogether.

What that would mean for developers is that 1) they could always have PwC
without paying for it where it isn't used, 2) there would be no new
switches and 3) they would have to change asserts to throws, i.e.:

import std.stream;

class X
{
    int i, j;

    this(int iVal, int jVal)
    {
        i = iVal;
        j = jVal;
    }

    // code that also manipulates i and j here

    invariant
    {
        // assert(i >= 6 && i <= 10);
        if (i < 6 || i > 10)
            throw new Exception("Class X value i is out of range");

        // assert(j >= 20 && j <= 100);
        if (j < 20 || j > 100)
            throw new Exception("Class X value j is out of range");
    }
}

void main()
{
    X x;
    int iVal = 0, jVal = 0;

    // simulate a confused user picking the values
    iVal = 6;
    jVal = 102;

    while (!foo(x, iVal, jVal))
    {
        if (!(jVal % 2))
            iVal--;
        else
            iVal++;
        jVal--;
    }

    // ...
    if (x)
        stdout.writefln("x.i,j = %d,%d", x.i, x.j);
}

bool foo(out X x, int iVal, int jVal)
{
    try
    {
        x = new X(iVal, jVal);
        return true;
    }
    catch (Exception e)
    {
        stdout.writefln("%s", e);
        delete x;
        return false;
    }
}

which will probably work out to be a good thing, because the error
messages displayed to users can be better than the asserts, plus the
developer can use invariant blocks to roll up common exceptions for a
class into one place if they so desire.

What that would mean for /users/ is that 1) they could always have PwC
without paying for it where it isn't used, and 2) developers would be
encouraged to use catchable exceptions in PwC that would make more sense
than cryptic asserts and allow for better exception recovery.
This could move PwC into being something that commonly makes it into production code instead of stopping at the release build. I think that could be a great thing to move the whole concept of language formalized PwC forward. - Dave
Feb 06 2005
prev sibling parent Dave <Dave_member pathlink.com> writes:
In article <cu454v$7km$1 digitaldaemon.com>, Walter says...
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3tn3$292$1 digitaldaemon.com...
 I would agree, for reasons of legacy breaking, were it not that D is
 pre-1.0. Since I'm coming to believe more and more that PwC should _not_
 be considered a debug only thing, I think it should be the default.

 Naturally, I'm looking at this from the perspective of large commercial
 systems. For simple utilities, I'd be adding -contracts=off to my
 makefiles, and be content with that decision.

 Walter, may we have it on by default? (and divorce from debug?)

 Please. I'll be polite. Honest, ...., mate! :-)
It is on by default. All -release does is turn it off. You could compile
with:

    dmd -O foo

and you'll get debug off, contracts on, optimization on.
But right now (when not using -release) there are invariant calls for each method of each class even when the class doesn't have or inherit an invariant definition. Can that issue be addressed? Thanks, - Dave
Feb 05 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu15pb$jqf$1 digitaldaemon.com...
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths, and rely on the runtime exceptions, do we really think
 NASA would use D? Come on!
NASA uses C, C++, Ada and assembler for space hardware.
http://www.spacenewsfeed.co.uk/2004/11July2004_6.html
http://vl.fmnet.info/safety/lang-survey.html

That said, you and I have different ideas on what constitutes support for
writing reliable code. I think it's better to have mechanisms in the
language that:

1) make it impossible to ignore situations the programmer did not think of
2) bias toward forcing bugs to show themselves in an obvious manner
3) don't make it easy for the programmer to insert dead code to "shut up
the compiler"

This is why the return and the switch defaults are the way they are.

The illustrative example of why this is a superior approach is the Java
compiler's insistence on function signatures listing every exception they
might raise. Sounds like a great idea to create robust code.
Unfortunately, the opposite happens. Java programmers get used to just
inserting catch-all statements to get the compiler to shut up. The end
result is that critical errors get SILENTLY IGNORED rather than dealt
with. The ABSOLUTELY WORST thing a critical software app can do is
silently ignore errors.

I've talked to a couple of NASA probe engineers. They insert "deadman"
switches in the computers. If the code crashes, locks, or has an
unhandled exception, the deadman trips and the computer resets. The other
approach I've seen in critical systems is "shut me down, notify the
pilot, and engage the backup" upon crash, lock, or unhandled exception.
This won't happen if the error is silently ignored.

Having the compiler complain about lack of a return statement will
encourage the programmer to just throw in a return 0; statement. The
compiler is happy, the potential bug is NOT fixed, the maintenance
programmer is left wondering why there's a statement there that never
gets executed, visual inspection of the code will not reveal anything
obviously wrong, and testing will likely not reveal the bug since the
function returns normally. Testers who use code coverage analyzers (an
excellent QA technique) will have dead code sticking in their craws.

However, if the runtime exception does throw, the programmer knows he has
a REAL bug, not a HYPOTHETICAL bug, and it's something that needs fixing,
not an annoyance that needs shutting up. Testing will likely reveal it.
If it happens in the field in critical software, the deadman or backup
can be engaged. A return 0; will paper over the bug, potentially causing
far worse things to happen.

The same comments apply to the switch default issue.

Correct me if I'm wrong, but your position is that the compiler issuing
an error will ensure that the programmer will correct the hypothetical
error by inserting dead code, thereby making it correct. This may happen
most of the time, but I worry about the cases where the
shut-the-compiler-up dead code is inserted instead, as happens in Java
even by Java experts who KNOW BETTER yet do it anyway. (I know this
because they've told me they do this even knowing they shouldn't.)

I've used compilers that insisted that I insert dead code. I usually add
a comment saying it's dead code to shut the compiler up. It doesn't look
good <g>.

I want to comment on the idea that having an unhandled exception
happening to the customer makes the app developer look bad. Yep, it
makes the developer look bad. Bugs always make the developer look bad.
Silently ignoring bugs doesn't make them go away. At least with the
exception you have a good chance of being able to reproduce the problem
and get it fixed. That's much better for the customer than having a
silent papered-over bug insidiously corrupt his expensive database he
didn't back up.

In short, I strongly believe that inserting dead code (code that will
never be executed) is not the answer to writing bug-free code. Having
such code in there is misleading at best, and at worst will cause
critical errors to be silently ignored.
Feb 04 2005
next sibling parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
I'd just like to say how much I agree with this.  Of course, this is an 
open source concept, really, but in practice it is something I have 
found to polish software much more robustly.

As an example, I write forum software (in PHP - I also do stuff in D, 

database.  One of the primary causes of bugs is database errors - 
meaning, syntax or data errors created by unexpected input (for example, 
not selecting any items but then clicking "delete selected".)

Of course, showing such database errors to end users is a bad idea.  In 
the worst case, showing detailed information about these errors can more 
easily expose a security hole which might otherwise be patched by the 
time we heard of the error.  Instead, database errors are shown to 
administrators (that is, people with privilege to see them) and also 
logged in the database for later retrieval.

Now, this may not seem to translate directly, but to me it does.  In 
previous versions of the software, database error messages were neither 
logged nor shown to anyone.  After the change, fixing bugs became much 
easier... especially for third party add-on developers.  It was then 
possible to fix bugs much more quickly and easily for all involved 
(including the users, which were sometimes programmers themselves), only 
increasing productivity and stability.

Moreover, sometimes relying on the compiler to detect your errors makes 
you soft.  By this, I don't mean you just stick in dead code - I mean 
that you expect the compiler to tell you if there are any paths that 
lead to a missing return (as an admittedly bad example.)  If the 
compiler, for any reason, mistakenly ignores a possibility... you will 
ignore it too.  Yes, this could be considered a bug in the compiler... 
but that only compounds the number of bugs.  IMHO, one of the best ways 
to make software stable is to make it so that if there ARE bugs, they 
won't do as much damage as they might otherwise.

Some people think they can dream up some way to rid the world of bugs. 
You can't do it - they can live through nuclear blasts, darn it! 
Prevention and a good strong boot are the only things that work, and 
thinking otherwise is only going to cause infestation not salvation. 
For those who don't like metaphors, I only mean to emphasize what I said 
above; there is no catch all solution to software bugs - even misplaced 
returns.

-[Unknown]

 I want to comment on the idea that having an unhandled exception happening
 to the customer makes the app developer look bad. Yep, it makes the
 developer look bad. Bugs always make the developer look bad. Silently
 ignoring bugs doesn't make them go away. At least with the exception you
 have a good chance of being able to reproduce the problem and get it fixed.
 That's much better for the customer than having a silent papered over bug
 insidiously corrupt his expensive database he didn't back up.
Feb 04 2005
prev sibling next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
This is all tired old ground, and I know I'm not going to prevail. 
However, the fact that my comment's got your back up sufficiently to 
post a long and erudite response must indicate that you realise that I'm 
not the sole barking-mad dog, howling at the wind. So, I'll bite. Just a 
little.

Before I kick off, I must say I find a disappointing lack of weight to 
your list of points, which I think reflects the lack of cogency to the 
state of D around this area:

 That said, you and I have different ideas on what constitutes support 
 for
 writing reliable code. I think it's better to have mechanisms in the
 language that:

 1) make it impossible to ignore situations the programmer did not 
 think of
So do I. So does any sane person. But it's a question of level, context,
time. You're talking about two measures that are small-scale, whose
effects may or may not ever be seen in a running system. If they do, they
may or may not be in a context, and at a time, which renders them useless
as an aid to improving the program.
 2) the bias is to force bugs to show themselves in an obvious manner.
So do I. But this statement is too bland to be worth anything. What is
"obvious"? *Who decides* what is obvious? How does/should the bug show
itself? When should the showing be done: early, or late?

Frankly, one might argue that the notion that the language and its
premier compiler actively work to _prevent_ the programmer from detecting
bugs at compile-time, forcing a wait of an unknowable amount of testing
(or, more horribly, deployment time) to find them, is simply crazy.
 3) not making it easy for the programmer to insert dead code to "shut 
 up the
 compiler"
I completely agree. But you're hamstringing 100% of all developers for
the careless/unprofessional/inept few. Do you really think it's worth it?
Will those handful % of better-employed-working-in-the-spam-industry find
no other way to screw up their systems? Is this really going to answer
all the issues attendant with a lack of
skill/learning/professionalism/adequate quality mechanisms (incl. design
reviews, code reviews, documentation, refactoring, unit testing, system
testing, etc. etc.)?

But I'm not going to argue point by point with your post, since you lost
me at "Java's exceptions". The analogy is specious, and thus
unconvincing. (Though I absolutely concur that they were a little-tried
'good idea', like C++'s exception specifications or, in fear of drawing
unwanted venom from my friends in the C++ firmament, export.)

My position is simply that compile-time error detection is better than
runtime error detection. Further, where compile-time detection is not
possible, runtime protection should be to the MAX: practically, this
means that I *strongly* believe that contract violations mean death for
an application, without exception. (So, FTR, the last several paragraphs
of your post most certainly don't apply to this position. I'm highly
confident you already know I hold this position, so I assume they're in
there for wider pedagogical purposes, and will not comment on them
further.)

Your position now - or maybe it's just expressed altogether in a single
place for the first time - seems to be that having a compiler detect
potential errors opens up the door for programmers to shut the compiler
up with dead code. This is indeed true. You seem to argue that, as a
consequence, it's better to prevent the compiler from giving (what you
admit would be: "This may happen most of the time ...") very useful help
in the majority of cases. I disagree.
Now you're absolutely correct that an invalid state throwing an
exception, leading to application/system reset, is a good thing.
Absolutely. But let's be honest. All that achieves is to prevent a bad
program from continuing to function once it is established to be bad. It
doesn't make that program less bad, or help it run well again. Depending
on the vagaries of its operating environment, it may well just keep going
bad, in the same (hopefully very short) amount of time, again and again
and again. The system's not being (further) corrupted, but it's not
getting anything done either.

It's clear, or seems so to me, that this issue, at least as far as the
strictures of D is concerned, is a balance between the likelihoods of:

1. producing a non-violating program, and
2. preventing a violating program from continuing its execution and,
therefore, potentially wrecking a system.

You seem to be of the opinion that the current situation of missing
return/case handling (MRCH) minimises the likelihood of 2. I agree that
it does so. However, contrarily, I assert that D's MRCH minimises the
likelihood of producing a non-violating program in the first place. The
reasons are obvious, so I'll not go into them. (If anyone cares to
disagree, I ask you to write a non-trivial C++ program in a hurry,
disable *all* warnings, and go straight to production with it.)

Walter, I think that you've hung D on the petard of 'absolutism in the
name of simplicity', on this and other issues. For good reasons, you
won't conscience warnings, or pragmas, or even switch/function decorator
keywords (e.g. "int allcases func(int i) { if (i < 0) return -1; }").
Indeed, as I think most participants will acknowledge, there are good
reasons for all the decisions made for D thus far. But there are also
good reasons against most/all of those decisions. (Except for slices.
Slices are *the best thing* ever, and coupled with auto+GC, will
eventually stand D out from all other mainstream languages. <G>)
Software engineering hasn't yet found a perfect language. D is not
perfect, and it'd be surprising to hear anyone here say that it is. That
being the case, how can the policy of absolutism be deemed a sensible
one? It cannot be sanely argued that throwing on missing returns is a
perfect solution, any more than it can be argued that compiler errors on
missing returns is. That being the case, why has D made manifest in its
definition the stance that one of these positions is indeed perfect?

I know the many dark roads that await once the tight control on the
language is loosened, but the real world's already here, batting on the
door. I have an open mind, and willing fingers to all kinds of
languages. I like D a lot, and I want it to succeed a *very great deal*.
But I really cannot imagine recommending use of D to my clients with
these flaws of absolutism. (My hopeful guess for the future is that
other compiler variants will arise that will, at least, allow warnings
to detect such things at compile time, which may alter the commercial
landscape markedly; D is, after all, full of a great many wonderful
things.)

One last word: I recall a suggestion a year or so ago that would have
required the programmer to explicitly insert what is currently inserted
implicitly. This would have the compiler report errors to me if I missed
a return. It'd have the code throw errors to you if an unexpected code
path occurred. Other than screwing over people who prize typing one less
line over robustness, what's the flaw? And yet it got no traction ....

[My goodness! That was way longer than I wanted. I guess we'll still be
arguing about this when the third edition of DPD's running hot through
the presses ...]

Matthew

"Walter" <newshound digitalmars.com> wrote in message
news:cu1clr$r71$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu15pb$jqf$1 digitaldaemon.com...
 Guys, if we persist with the mechanism of no compile-time detection 
 of
 return paths, and rely on the runtime exceptions, do we really think
 NASA would use D? Come on!
NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are. The illustrative example of why this is a superior approach is the Java compiler's insistence on function signatures listing every exception they might raise. Sounds like a great idea to create robust code. Unfortunately, the opposite happens. Java programmers get used to just inserting catch all statements just to get the compiler to shut up. The end result is that critical errors get SILENTLY IGNORED rather than dealt with. The ABSOLUTELY WORST thing a critical software app can do is silently ignore errors. I've talked to a couple NASA probe engineers. They insert "deadman" switches in the computers. If the code crashes, locks, or has an unhandled exception, the deadman trips and the computer resets. The other approach I've seen in critical systems is "shut me down, notify the pilot, and engage the backup" upon crash, lock, or unhandled exception. This won't happen if the error is silently ignored. Having the compiler complain about lack of a return statement will encourage the programmer to just throw in a return 0; statement. Compiler is happy, the potential bug is NOT fixed, maintenance programmer is left wondering why there's a statement there that never gets executed, visual inspection of the code will not reveal anything obviously wrong, and testing will likely not reveal the bug since the function returns normally. 
Testers who use code coverage analyzers (an excellent QA technique) will have dead code sticking in their craws. However, if the runtime exception does throw, the programmer knows he has a REAL bug, not a HYPOTHETICAL bug, and it's something that needs fixing, not an annoyance that needs shutting up. Testing will likely reveal it. If it happens in the field in critical software, the deadman or backup can be engaged. A return 0; will paper over the bug, potentially causing far worse things to happen. The same comments apply to the switch default issue. Correct me if I'm wrong, but your position is that the compiler issuing an error will ensure that the programmer will correct the hypothetical error by inserting dead code, thereby making it correct. This may happen most of the time, but I worry about the cases where the shut the compiler up dead code is inserted instead, as what happens in Java even by Java experts who KNOW BETTER yet do it anyway. (I know this because they've told me they do this even knowing they shouldn't.) I've used compilers that insisted that I insert dead code. I usually add a comment saying it's dead code to shut the compiler up. It doesn't look good <g>. I want to comment on the idea that having an unhandled exception happening to the customer makes the app developer look bad. Yep, it makes the developer look bad. Bugs always make the developer look bad. Silently ignoring bugs doesn't make them go away. At least with the exception you have a good chance of being able to reproduce the problem and get it fixed. That's much better for the customer than having a silent papered over bug insidiously corrupt his expensive database he didn't back up. In short, I strongly believe that inserting dead code (code that will never be executed) is not the answer to writing bug free code. Having such code in there is misleading at best, and at worst will cause critical errors to be silently ignored.
Feb 04 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
 2) the bias is to force bugs to show themselves in an obvious manner.
So do I. But this statement is too bland to be worth anything. What is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable amount
 of testing (or, more horribly, deployment time) to find them, is simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that they know is wrong, and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."
 Do you really think it's worth it?
Absolutely.
 Will those handful % of better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl, design
 reviews, code reviews, documentation, refactoring, unit testing, system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
 But I'm not going to argue point by point with your post, since you lost
 me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a little tried
 'good idea', like C++'s exception specifications or, in fear of drawing
 unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time".

I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.

There's some growing thought that even static type checking is an emperor without clothes, that dynamic type checking (like Python does) is more robust and more productive. I'm not at all convinced of that yet <g>, but it's fun seeing the conventional wisdom being challenged. It's good for all of us.
 My position is simply that compile-time error detection is better than
 runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
 Further, where compile-time detection is not
 possible, runtime protection should be to the MAX: practically, this
 means that I *strongly* believe that contract violations mean death for
 an application, without exception. (So, FTR, the last several paragraphs
 of your post most certainly don't apply to this position. I'm highly
 confident you already know I hold this position, so I assume they're in
 there for wider pedegagical purposes, and will not comment on them
 further.)

 Your position now - or maybe its just expressed altogether in a single
 place for the first time-  seems to be that having a compiler detect
 potential errors opens up the door for programmers to shut the compiler
 up with dead code. This is indeed true. You seem to argue that, as a
 consequence, it's better to prevent the compiler from giving (what you
 admit would be: "This may happen most of the time ...") very useful help
 in the majority of cases. I disagree.
I know we disagree. <g>
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a bad
 program from continuing to function once it is established to be bad. It
 doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.
 Depending
 on the vagaries of its operating environment, it may well just keep
 going bad, in the same (hopefully very short) amount of time, again and
 again and again. The system's not being (further) corrupted, but it's
 not getting anything done either.
One of the Mars landers went silent for a couple of days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.

On airliners, the self-detected faults trigger a dedicated circuit that disables the faulty computer and engages the backup. The last, last, last thing you want the autopilot on an airliner to do is execute a return 0; some programmer threw in to shut the compiler up. An exception thrown, shutting down the autopilot, engaging the backup, and notifying the pilot is what you'd much rather happen.
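The fail-loudly-and-fail-over discipline described above can be sketched like this; it is a hypothetical illustration only (runWithDeadman, primaryControlLoop, notifyPilot and engageBackup are invented names, not anything from the thread):

```d
// Hypothetical stubs standing in for real avionics code.
void primaryControlLoop() { /* flight control; may throw or trip an assert */ }
void notifyPilot(Object reason) { /* raise an alert in the cockpit */ }
void engageBackup() { /* hand control to the redundant system */ }

void runWithDeadman()
{
    try
    {
        primaryControlLoop();
    }
    catch (Object fault)    // in D, any thrown object lands here
    {
        notifyPilot(fault); // fail loudly...
        engageBackup();     // ...and fail over, rather than run on bad state
    }
}
```

The point is that an unhandled exception gives a harness like this something to catch; a silently executed return 0; gives it nothing.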
 It's clear, or seems to to me, that this issue, at least as far as the
 strictures of D is concerned, is a balance between the likelihoods of:
     1.    producing a non-violating program, and
     2.    preventing a violating program from continuing its execution
 and, therefore, potentially wreck a system.
There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.
 You seem to be of the opinion that the current situation of missing
 return/case handling (MRCH) minimises the likelihood of 2. I agree that
 it does so.

 However, contrarily, I assert that D's MRCH minimises the likelihood of
 producing a non-violating program in the first place. The reasons are
 obvious, so I'll not go into them. (If anyone's cares to disagree, I ask
 you to write a non-trival C++ program in a hurry, disable *all*
 warnings, and go straight to production with it.)

 Walter, I think that you've hung D on the petard of 'absolutism in the
 name of simplicity', on this and other issues. For good reasons, you
 won't conscience warnings, or pragmas, or even switch/function
 decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
 return -1; }"). Indeed, as I think most participants will acknowledge,
 there are good reasons for all the decisions made for D thus far. But
 there are also good reasons against most/all of those decisions. (Except
 for slices. Slices are *the best thing* ever, and coupled with auto+GC,
 will eventually stand D out from all other mainstream languages.<G>).
Jan Knepper came up with the slicing idea. Sheer genius!
 Software engineering hasn't yet found a perfect language. D is not
 perfect, and it'd be surprising to hear anyone here say that it is. That
 being the case, how can the policy of absolutism be deemed a sensible
 one?
Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)
 It cannot be sanely argued that throwing on missing returns is a perfect
 solution, any more than it can be argued that compiler errors on missing
 returns is. That being the case, why has D made manifest in its
 definition the stance that one of these positions is indeed perfect?
I don't believe it is perfect. I believe it is the best balance of competing factors.
 I know the many dark roads that await once the tight control on the
 language is loosened, but the real world's already here, batting on the
 door. I have an open mind, and willing fingers to all kinds of
 languages. I like D a lot, and I want it to succeed a *very great deal*.
 But I really cannot imagine recommending use of D to my clients with
 these flaws of absolutism. (My hopeful guess for the future is that
 other compiler variants will arise that will, at least, allow warnings
 to detect such things at compile time, which may alter the commercial
 landscape markedly; D is, after all, full of a great many wonderful
 things.)
I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.
 One last word: I recall a suggestion a year or so ago that would
 required the programmer to explicitly insert what is currently inserted
 implicitly. This would have the compiler report errors to me if I missed
 a return. It'd have the code throw errors to you if an unexpected code
 path occured. Other than screwing over people who prize typing one less
 line over robustness, what's the flaw? And yet it got no traction ....
Essentially, that means requiring the programmer to insert:

    assert(0);
    return 0;

It just seems that requiring some fixed boilerplate to be inserted means that the language should do that for you. After all, that's what computers are good at!
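Spelled out, the boilerplate in question would look something like this (a sketch: lookup is an invented example, and the exact syntax of the year-old suggestion is not quoted in the thread):

```d
// Hypothetical example: the point is the two trailing statements.
int lookup(int[] table, int key)
{
    foreach (int i, int v; table)
    {
        if (v == key)
            return i;
    }
    // The fixed boilerplate the suggestion would require the programmer
    // to write explicitly on the "can't happen" path:
    assert(0);  // documents that this path is believed unreachable
    return 0;   // satisfies a compiler that insists on a return value
}
```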
 [My goodness! That was way longer than I wanted. I guess we'll still be
 arguing about this when the third edition of DPD's running hot through
 the presses ...]
I don't expect we'll agree on this anytime soon.
Feb 05 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious 
 manner.
So do I. But this statement is too bland to be worth anything. What is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be dealt with at runtime. Your response: exceptions are the best way to indicate runtime error. Come on.

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right?
A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable 
 amount
 of testing (or, more horribly, deployment time) to find them, is 
 simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
Disagree.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that they know is wrong, and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."
Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.
 Will those handful % of better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl, 
 design
 reviews, code reviews, documentation, refactoring, unit testing, 
 system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
 But I'm not going to argue point by point with your post, since you 
 lost
 me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a little 
 tried
 'good idea', like C++'s exception specifications or, in fear of 
 drawing
 unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.
All of this is of virtually no relevance to the topic under discussion.
 There's some growing thought that even static type checking is an 
 emperor
 without clothes, that dynamic type checking (like Python does) is more
 robust and more productive. I'm not at all convinced of that yet <g>, 
 but
 it's fun seeing the conventional wisdom being challenged. It's good 
 for all
 of us.
I'm with you there.
 My position is simply that compile-time error detection is better 
 than
 runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
Nothing is *always* true. That's kind of one of the bases of my thesis.
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a 
 bad
 program from continuing to function once it is established to be bad. 
 It
 doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.
Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth.

So, again, my position is: checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do, except when that error can be detected at compile time.

Please stop arguing against your demons on this, and address my point: if an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.
 Depending
 on the vagaries of its operating environment, it may well just keep
 going bad, in the same (hopefully very short) amount of time, again 
 and
 again and again. The system's not being (further) corrupted, but it's
 not getting anything done either.
One of the Mars landers went silent for a couple days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.
Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking. At no time have I ever said such a thing.
 On airliners, the self detected faults trigger a dedicated circuit 
 that
 disables the faulty computer and engages the backup. The last, last, 
 last
 thing you want the autopilot on an airliner to do is execute a return 
 0;
 some programmer threw in to shut the compiler up. An exception thrown,
 shutting down the autopilot, engaging the backup, and notifying the 
 pilot is
 what you'd much rather happen.
Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have addressing.
 It's clear, or seems to to me, that this issue, at least as far as 
 the
 strictures of D is concerned, is a balance between the likelihoods 
 of:
     1.    producing a non-violating program, and
     2.    preventing a violating program from continuing its 
 execution
 and, therefore, potentially wreck a system.
There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.
Absolutely. But that is not, in and of itself, sufficient justification for ditching compile-time detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.
 You seem to be of the opinion that the current situation of missing
 return/case handling (MRCH) minimises the likelihood of 2. I agree 
 that
 it does so.

 However, contrarily, I assert that D's MRCH minimises the likelihood 
 of
 producing a non-violating program in the first place. The reasons are
 obvious, so I'll not go into them. (If anyone's cares to disagree, I 
 ask
 you to write a non-trival C++ program in a hurry, disable *all*
 warnings, and go straight to production with it.)

 Walter, I think that you've hung D on the petard of 'absolutism in 
 the
 name of simplicity', on this and other issues. For good reasons, you
 won't conscience warnings, or pragmas, or even switch/function
 decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
 return -1; }"). Indeed, as I think most participants will
 acknowledge,
 there are good reasons for all the decisions made for D thus far. But
 there are also good reasons against most/all of those decisions. 
 (Except
 for slices. Slices are *the best thing* ever, and coupled with 
 auto+GC,
 will eventually stand D out from all other mainstream languages.<G>).
Jan Knepper came up with the slicing idea. Sheer genius!
Truly
 Software engineering hasn't yet found a perfect language. D is not
 perfect, and it'd be surprising to hear anyone here say that it is. 
 That
 being the case, how can the policy of absolutism be deemed a sensible
 one?
Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)
? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?
 It cannot be sanely argued that throwing on missing returns is a 
 perfect
 solution, any more than it can be argued that compiler errors on 
 missing
 returns is. That being the case, why has D made manifest in its
 definition the stance that one of these positions is indeed perfect?
I don't believe it is perfect. I believe it is the best balance of competing factors.
I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.
 I know the many dark roads that await once the tight control on the
 language is loosened, but the real world's already here, batting on 
 the
 door. I have an open mind, and willing fingers to all kinds of
 languages. I like D a lot, and I want it to succeed a *very great 
 deal*.
 But I really cannot imagine recommending use of D to my clients with
 these flaws of absolutism. (My hopeful guess for the future is that
 other compiler variants will arise that will, at least, allow 
 warnings
 to detect such things at compile time, which may alter the commercial
 landscape markedly; D is, after all, full of a great many wonderful
 things.)
I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.
I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(
 One last word: I recall a suggestion a year or so ago that would
 required the programmer to explicitly insert what is currently 
 inserted
 implicitly. This would have the compiler report errors to me if I 
 missed
 a return. It'd have the code throw errors to you if an unexpected 
 code
 path occured. Other than screwing over people who prize typing one 
 less
 line over robustness, what's the flaw? And yet it got no traction 
 ....
 Essentially, that means requiring the programmer to insert:

     assert(0);
     return 0;
That is not the suggested syntax, at least not to the best of my recollection.
 It just seems that requiring some fixed boilerplate to be inserted 
 means
 that the language should do that for you. After all, that's what 
 computers
 are good at!
LOL! Well, there's no arguing with you there, eh?

You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do.

Just in case anyone's missed the extreme illogic of that position, I'll reiterate:

Camp A want behaviour X to be done automatically by the compiler.
Camp B want behaviour Y to be done automatically by the compiler.
X and Y are incompatible, when done automatically.
By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.)
Walter reckons that Z should be done automatically by the compiler.
Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear if there's any on-point response to this?
 [My goodness! That was way longer than I wanted. I guess we'll still 
 be
 arguing about this when the third edition of DPD's running hot 
 through
 the presses ...]
I don't expect we'll agree on this anytime soon.
Agreed
Feb 05 2005
next sibling parent reply "Unknown W. Brackets" <unknown simplemachines.org> writes:
Matthew, this response makes it sound like you're ignoring Walter's 
primary argument, which you earlier stated you disagree with.

Walter says: if it's compile time, programmers will patch it without 
thinking.  That's bad.  So let's use runtime.

You say: Runtime checking is bad.  Let's use compile time, that fixes 
everything!

You and Derek, who posted earlier, have implied that the runtime 
checking can still supplement the compile time checking.  Perhaps I've 
missed something crucial here, but I don't understand how - either there 
is a return there, or there isn't.  Example:

int main()
{
    return 0;
}

I do not see any space for runtime checking there.  None.  Not a single 
bit.  So, by that, we can logically come to the conclusion that if 
compile time checking is used runtime checking is impossible, because it 
makes no sense.

Walter, to my reckoning, is saying that the problem is this:

int main(char[][] args)
{
    if (args[1] != "--help")
    {
       doStuff();
       return 0;
    }
    else
       showHelp();
}

Oops.  Forgot the "return 1;".  His argument is that, in a more 
complicated function (with many lines and possibly different return 
values...) it may be difficult to tell what should be returned here.

Tell me, if you're working on a group project, using CVS or otherwise, 
and you are testing some code you've just added which you are about to 
check in... but someone else has checked in some code which no longer 
compiles because of said return warning - what is your instinct?  To sit 
on it until the return is fixed?  Maybe.  Or, maybe you want to fix it.

Being that you didn't write the code, you might say... well, it looks 
like if it gets to here it should return a 0.  Maybe you're right. 
Maybe you're wrong.  Maybe if you're wrong, the original author will 
notice and fix it.  Maybe not.  I hate maybes, they mean bugs.

Now, I'm sure I'm misrepresenting you.  We're all good patient 
programmers, and we'll wait for the guy on vacation who wrote this to 
come back and add his return.  Then we'll all break his bones for 
checking in code that doesn't even compile.

Here's another example.  Someone might argue that the compiler should 
give errors/warnings for the following:

if (true)
    1;
else if (var == 4)
    2;

Obviously, 2 will never happen.  Unreachable code detected, yes?  But 
what if it's this:

if (true) //var == 3)
    1;
else if (var == 4)
    2;

Suddenly, the obviousness of this error is gone.  It's no longer an 
error, it's testing.  2 isn't unreachable at all, it's only "commented 
out" so to speak!

What about this...?

version (1)
    1;
else version(2)
    2;

Is that an error?  No else for the versions... shouldn't there 
(probably) be a static assert there or similar?  Yes, maybe.  Obviously 
that can't be relied on, because sometimes it won't be true.  But, 
should you be forced to do this?

version (1)
    1;
else version(2)
    2;
else
    1 == 1;

Okay.  Let me reformat this example.  Should you be forced to do this?

int doIt(int var)
{
    if (var == 1)
       return 1;
    else if (var == 2)
       return 2;
    else
       return 0;
}

Same thing.  You'll say no, though.  These are different.  One's 
returning things, the other isn't, you'll say.

-[Unknown]
Feb 05 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Matthew, this response makes it sound like you're ignoring Walter's 
 primary argument, which you earlier stated you disagree with.
Does it? How?
 Walter says: if it's compile time, programmers will patch it without 
 thinking.  That's bad.  So let's use runtime.

 You say: Runtime checking is bad.  Let's use compile time, that fixes 
 everything!
I didn't say that. You appear to have caught Walter's disease.
Feb 05 2005
prev sibling next sibling parent John Reimer <brk_6502 yahoo.com> writes:
Unknown W. Brackets wrote:
 Matthew, this response makes it sound like you're ignoring Walter's 
 primary argument, which you earlier stated you disagree with.
 
 Walter says: if it's compile time, programmers will patch it without 
 thinking.  That's bad.  So let's use runtime.
 
 You say: Runtime checking is bad.  Let's use compile time, that fixes 
 everything!
 
I better stay out of this... but Matthew's last post did clarify that he was /not/ against runtime checking. He states that quite clearly.

<Ducks away again>

- John R.
Feb 05 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Unknown W. Brackets" <unknown simplemachines.org> wrote in message
news:cu25p2$1jbc$1 digitaldaemon.com...
 Walter says: if it's compile time, programmers will patch it without
 thinking.  That's bad.  So let's use runtime.
That's essentially right. I'll add one more example to the ones you presented:

int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
}

By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write:

int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
    return 0;
}

I'm not saying you would advocate "fixing" the code this way. I don't either. Nobody would. I am saying that this is often how real programmers will fix it. I know this because I see it done, time and again, in response to compilers that emit such error messages.

This kind of code is a disaster waiting to happen. No compiler will detect it. It's hard to pick up on a code review. Testing isn't going to pick it up. It's an insidious, nasty kind of bug. Its root cause is not bad programmers, but a compiler error message that encourages writing bad code.

Instead, having the compiler insert essentially an assert(0); where the missing return is means that if it isn't a bug, nothing happens, and everyone is happy. If it is a bug, the assert gets tripped, and the programmer *knows* it's a real bug that needs a real fix, and he won't be tempted to insert a return of an arbitrary value "because it'll never be executed anyway".

This is the point I have consistently failed to make clear.
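In other words, the compiler-generated form amounts to this sketch (Collection and Value are the unspecified types from the example above):

```d
// What the first version of foo effectively becomes, per the
// description above: the missing return is a trap, not garbage.
int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
    assert(0);  // trips only if the "can't happen" path is ever reached
}
```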
Feb 05 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cu3rt4$ra$1 digitaldaemon.com...
 "Unknown W. Brackets" <unknown simplemachines.org> wrote in message
 news:cu25p2$1jbc$1 digitaldaemon.com...
 Walter says: if it's compile time, programmers will patch it without
 thinking.  That's bad.  So let's use runtime.
That's essentially right. I'll add one more example to the ones you presented:

    int foo(Collection c, int y)
    {
        foreach (Value v; c)
        {
            if (v.x == y)
                return v.z;
        }
    }

By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write:

    int foo(CollectionClass c, int y)
    {
        foreach (Value v; c)
        {
            if (v.x == y)
                return v.z;
        }
        return 0;
    }
This is total rubbish. A maintenance engineer is stymied by *both* forms, and confused contrarily: the first looks like a bug but may not be, the second is a bug but doesn't look like it. The only form that stands up to maintenance is something along the lines of what Derek's talking about:

    int foo(CollectionClass c, int y)
    {
        foreach (Value v; c)
        {
            if (v.x == y)
                return v.z;
        }

        throw logic_error("This function has encountered a situation which contradicts its design and/or the design of the software within which it resides");

        return 0;
    }

This is what I also do in such cases, and I believe (and have witnessed) it being a widely practiced technique.

Walter, you're just digging yourself in deeper. It's embarrassing. It strongly gives the impression that you only work with yourself. You're keen to mould D with a view to catering for, or at least mitigating the actions of, the lowest common denominators of the programming gene pool. Yet you seem decidedly uninterested in addressing the concerns of large-scale and/or commercial and/or large-team and/or long-lasting codebases. How can this attitude help D to prosper?

One of the reviewers of Imperfect C++ made the sage comment that I was spending too much time "protecting from Machiavelli". He said that that was a quest without end, and he's spot on.

Your measure adds an indeterminately timed exception fire, in the case that a programmer doesn't add a return 0. That's great, so far as it goes. But here's the fly in your soup: what's to stop them adding the return 0? The code's still wrong, but now it doesn't even have your backup plan active.

Here's a thought. When people cotton on to this implicit behaviour in D, maybe there'll be a large-scale propagation of "Make sure you get all your returns in!" warnings.
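The throw-at-the-impossible-point pattern above renders directly into compilable C++ (a sketch; the container, names, and message are illustrative, using std::logic_error from <stdexcept>):

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

struct Value { int x, z; };

// The impossible path is made explicit: it neither looks like a bug
// when it isn't one, nor hides one when it is.
int foo(const std::vector<Value>& c, int y) {
    for (const Value& v : c) {
        if (v.x == y)
            return v.z;
    }
    throw std::logic_error(
        "foo: encountered a situation which contradicts its design");
}
```

If the caller's guarantee is ever violated, the exception carries an explanation instead of an arbitrary return value.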
Do you have empirical evidence that there won't be a concomitant swell of crappy / neophyte programmers who will add a return X; at the end of every function by rote, to avoid the dreaded swipe of the indeterminate exception? Maybe you're going to actually exacerbate the problem you think you're countering!
 Instead, having the compiler insert essentially an assert(0); where 
 the
 missing return is means that if it isn't a bug, nothing happens, and
 everyone is happy. If it is a bug, the assert gets tripped, and the
 programmer *knows* it's a real bug that needs a real fix, and he won't 
 be
 tempted to insert a return of an arbitrary value "because it'll never 
 be
 executed anyway".

 This is the point I have consistently failed to make clear.
Man, this is *so* frustrating. You obviously (now admittedly!) think that we're all just not getting your point. I get it. WE GET IT! I/we just think you're wrong. There's a problem with two opposing automatic ways of doing things. So the answer is to not have things automatic. I feel like Cassandra. Gah! I give up.
Feb 05 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3v58$3c3$1 digitaldaemon.com...
 A maintenance engineer is stymied by *both*
 forms, and confused contrarily: the first looks like a bug but may not
 be, the second is a bug but doesn't look like it. The only form that
 stands up to maintenance is something along the lines of what Derek's
 talking about:

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
                 return v.z;
         }

         throw logic_error("This function has encountered a situation
 which contradicts its design and/or the design of the software within
 which it resides");

         return 0;
     }
From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.
 This is what I also do in such cases, and I believe (and have witnessed)
 it being a widely practiced technique.
Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.
You're
 keen to mould D with a view to catering for, or at least mitigating the
 actions of,  the lowest common denominators of the programming gene
 pool.
I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.
Yet you seem decidedly uninterested in addressing the concerns of
 large scale and/or commercial and/or large-teams and/or long-lasting
 codebases. How can this attitude help D to prosper?
I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.
 Your measure adds an indeterminately timed exception fire, in the case
 that a programmer doesn't add a return 0. That's great, so far as it
 goes. But here's the fly in your soup: what's to stop them adding the
 return 0?
Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way, here are the choices (numbers pulled out of dimension X):

1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug.

2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 05 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Sorry, mate, I've given up - I'll just have to content myself with 
writing Imperfect D with all the good ammo you're providing - and am 
about to take the family out for some retail therapy. Ah, D.J.'s pesto, 
there's nothing like it ..... once you've tasted it, anything that comes 
out of a bottle might as well be cat vomit.

As for our doomed debate, I'll leave this parting shot: you've ignored 
the two most salient points of the debate recently made, namely the 
effect that missing error harbinging will have on the mindset - will it 
cause people to (mis-)add more return 0's than they would have anyway? - 
and the issue of having the compiler require what Derek wisely suggests. 
Alas, it appears that you are wont to do so.

Dr Cassandra Bigboy

P.S. For all the people who've joined the NG since the middle of last 
year, and have not seen such stag-rutting battles between Walter and 
myself (and others), you should know that despite (what I believe) are 
his stunning misapprehensions, I have a higher (technical and 
good-egg-edness) regard for big-W than almost anyone I know, famous or 
just quietly-good-at-their-job. Maybe it's because of that that I find 
his wrongness so affronting? Kind of like finding out your mother farts. 
;)


"Walter" <newshound digitalmars.com> wrote in message 
news:cu44i1$739$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu3v58$3c3$1 digitaldaemon.com...
 A maintenance engineer is stymied by *both*
 forms, and confused contrarily: the first looks like a bug but may 
 not
 be, the second is a bug but doesn't look like it. The only form that
 stands up to maintenance is something along the lines of what Derek's
 talking about:

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
                 return v.z;
         }

         throw logic_error("This function has encountered a situation
 which contradicts its design and/or the design of the software within
 which it resides");

         return 0;
     }
From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.
 This is what I also do in such cases, and I believe (and have 
 witnessed)
 it being a widely practiced technique.
Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.
You're
 keen to mould D with a view to catering for, or at least mitigating 
 the
 actions of,  the lowest common denominators of the programming gene
 pool.
I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.
Yet you seem decidely uninterested in addressing the concerns of
 large scale and/or commercial and/or large-teams and/or long-lasting
 codebases. How can this attitude help D to prosper?
I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.
 Your measure adds an indeterminately timed exception fire, in the 
 case
 that a programmer doesn't add a return 0. That's great, so far as it
 goes. But here's the fly in your soup: what's to stop them adding the
 return 0?
Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarilly a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit. Let's put it this way, here are the choices (numbers pulled out of dimension X): 1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug. 2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner. It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), b ecause the penalties of such bugs, even though they are less frequent, are so severe they overshadows everything else.
Feb 05 2005
prev sibling next sibling parent reply Kris <Kris_member pathlink.com> writes:
Here's a suggestion that /might/ help: are you, Walter, familiar with AOP at
all? If so, you might consider treating such things as part of "cross-cutting
concerns", where 

a) a "point-cut" is declared, by the programmer, to add code to those
non-void-returning functions which don't actually end with a return statement.

b) the code generated at the end of a function would thus be an "advice". One
which has been explicitly provided by the programmer, rather than by the
compiler.

Here's a little blurb on AOP, via Google:
http://www.onjava.com/pub/a/onjava/2004/01/14/aop.html

- Kris


Feb 05 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
AOP is cool, I wish it was possible to use it in D.

Feb 06 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message
news:opslsuv5qc23k2f5 ally...
 AOP is cool, I wish it was possible to use it in D.
I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Feb 06 2005
next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com>  
wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslsuv5qc23k2f5 ally...
 AOP is cool, I wish it was possible to use it in D.
I looked at Kris' reference, but AOP is one of those things I don't understand at all.
I looked too, and I found it less easy to follow than the article on Aspect-Oriented Programming that Christopher Diggins wrote in the August 2004 edition of Dr Dobbs Journal.

A simple example of AOP is... you have a class Bob, and you want to log calls to all its functions, dumping state etc. You write the code to do the logging and 'hook' it up to certain methods in Bob, but, and this is the important part, doing so does not require changes to Bob, and the new code can be 'hook'ed up to another class in the same way. Three things are involved: the original class, the new code, and a 'pointcut' which defines the methods the new code affects, i.e. how to 'hook' it up.

C/C++ achieves it using the preprocessor. I am not sure how Java does it. If you/we think about it a bit I'm sure we can come up with a syntax for D. Mixins are almost it, though what you need is a way of defining where the mixins go without actually modifying the original class.

See also: http://www.aspectc.org/

Regan
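That logging 'hook' can be hand-rolled as a sketch in C++ (withLogging is an invented name; a real AOP system would weave this in automatically from a pointcut instead of wrapping by hand):

```cpp
#include <cassert>
#include <functional>
#include <iostream>
#include <string>

// The logging concern lives entirely outside the wrapped function:
// the original code stays unmodified, and the same wrapper can be
// attached to anything with a matching signature.
std::function<int(int)> withLogging(const std::string& name,
                                    std::function<int(int)> fn) {
    return [=](int arg) {
        std::cout << "Entering(" << name << ")\n";
        int result = fn(arg);
        std::cout << "Leaving(" << name << ")\n";
        return result;
    };
}

int square(int x) { return x * x; }
```

Usage: withLogging("square", square) gives a callable that logs entry and exit around each call while still returning square's result, without square ever knowing it was wrapped.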
Feb 06 2005
prev sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com>  
wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslsuv5qc23k2f5 ally...
 AOP is cool, I wish it was possible to use it in D.
I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Ok, here is my attempt at a syntax for AOP for D. In reality I am no expert on AOP. Or on context free grammar.

    //An attempt at a syntax for Aspect Oriented Programming in D.
    //Based on the Aug 2004 DDJ article by Christopher Diggins
    //-Regan Heath

    //the original class, remains unmodified by this process.
    class Original {
        this() {}
        ~this() {}
        void foo() {}
        void bar() {}
        void baz() {}
    }

    //the aspect: to be added to classes as defined in a pointcut.
    aspect Logging {
        //joinpoint: code to be placed before the start of a function
        void in { writefln("Entering(",this.name,")"); }

        //joinpoint: code to be placed after the end of a function
        void out { writefln("Leaving(",this.name,")"); }

        //joinpoint: called before all other joinpoints; if it returns false the joinpoint is skipped
        bool query { }

        //joinpoint: executed on an exception
        void except { }

        //joinpoint: called after execution of the joinpoint even if 'query' returns false
        void finally { }

        //more joinpoints could be defined and added, requires more thought.
        //the definitions of the above are not set in stone, requires more thought.
    }

    //defines the new class, based on an existing class and 1 or more aspects
    pointcut newOriginal, Original {
        Logging {
            this, foo, bar
        }
        //<other aspect name> {
        //    this, bar, baz
        //}
        //..etc..
    }

    //how to use the new class
    void main() {
        newOriginal o = new newOriginal();
    }
Feb 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Feb 2005 12:48:45 +1300, Regan Heath <regan netwin.co.nz> wrote:
 On Sun, 6 Feb 2005 19:42:14 -0800, Walter <newshound digitalmars.com>  
 wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslsuv5qc23k2f5 ally...
 AOP is cool, I wish it was possible to use it in D.
I looked at Kris' reference, but AOP is one of those things I don't understand at all.
Ok, here is my attempt at a syntax for AOP for D. In reality I am no expert on AOP. Or on context free grammar.

    //An attempt at a syntax for Aspect Oriented Programming in D.
    //Based on the Aug 2004 DDJ article by Christopher Diggins
    //-Regan Heath

    //the original class, remains unmodified by this process.
    class Original {
        this() {}
        ~this() {}
        void foo() {}
        void bar() {}
        void baz() {}
    }

    //the aspect: to be added to classes as defined in a pointcut.
    aspect Logging {
        //joinpoint: code to be placed before the start of a function
        void in { writefln("Entering(",this.name,")"); }

        //joinpoint: code to be placed after the end of a function
        void out { writefln("Leaving(",this.name,")"); }

        //joinpoint: called before all other joinpoints; if it returns false the joinpoint is skipped
        bool query { }

        //joinpoint: executed on an exception
        void except { }

        //joinpoint: called after execution of the joinpoint even if 'query' returns false
        void finally { }

        //more joinpoints could be defined and added, requires more thought.
        //the definitions of the above are not set in stone, requires more thought.
    }

    //defines the new class, based on an existing class and 1 or more aspects
    pointcut newOriginal, Original {
        Logging {
            //defines the method to apply the aspect to
            this, foo, bar
        }
        //<other aspect name> {
        //    this, bar, baz
        //}
        //..etc..
    }

    //how to use the new class
    void main() {
        newOriginal o = new newOriginal();
    }
Small addition added above, specifically:

    //defines the method to apply the aspect to

Regan
Feb 07 2005
parent reply "Walter" <newshound digitalmars.com> writes:
Thank-you. That actually does make sense. I can see now why it would be an
interesting feature. I can only understand these things in terms of how they
are implemented :-). So for AOP, what I see it as being is essentially a derived
class, with the modified methods created as wrappers around the
base class's methods. The aspect code is inserted into the wrappers.
Mar 02 2005
next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 2 Mar 2005 10:34:14 -0800, Walter <newshound digitalmars.com>  
wrote:
 Thank-you. That actually does make sense. I can see now why it would be  
 an
 interesting feature. I can only understand these things in terms of how  
 they
 are implemented :-). So for AOP, what I see is being essentially a  
 derived
 class, with the modified methods being created that are wrappers around  
 the
 base class's methods. The aspect code is inserted into the wrappers.
Yes, that's essentially it. Regan
Mar 02 2005
prev sibling parent reply pandemic <pandemic_member pathlink.com> writes:
In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...
Thank-you. That actually does make sense. I can see now why it would be an
interesting feature. I can only understand these things in terms of how they
are implemented :-). So for AOP, what I see is being essentially a derived
class, with the modified methods being created that are wrappers around the
base class's methods. The aspect code is inserted into the wrappers.
Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes. Without a common base-class, and no multiple inheritance. It's not really class-oriented, since the methods involved are often identified using a limited regex form (select all 'put' methods across all classes, for example). As suggested, the additional code is wrapped around the methods.
Mar 02 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 2 Mar 2005 23:39:21 +0000 (UTC), pandemic  
<pandemic_member pathlink.com> wrote:
 In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...
 Thank-you. That actually does make sense. I can see now why it would be  
 an
 interesting feature. I can only understand these things in terms of how  
 they
 are implemented :-). So for AOP, what I see is being essentially a  
 derived
 class, with the modified methods being created that are wrappers around  
 the
 base class's methods. The aspect code is inserted into the wrappers.
Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes. Without a common base-class, and no multiple inheritance. It's not really class-oriented, since the methods involved are often identified using a limited regex form (select all 'put' methods across all classes, for example).
Well, true, technically. The way I see it, you're simply stating:

Take class A, add concerns C1,C2 to methods x, y, and z, and call it C12_A.
Take class B, add concerns C1,C3 to methods x and y, and call it C13_B.

It is similar to inheritance, as in, you could do it manually...

    class A {
        void foo() {}
    }

    class C12_A : A {    <- take class A, call it C12_A
        void foo() {
            ..concern..      <- add concern to method foo
            super.foo();
            ..~concern..
        }
    }

...but the idea is that it's done automatically, via some description/format: it can be done to any base class, not just A; you can add several different concerns to a class; and you can pick the methods for each concern, with the picks differing from concern to concern.

Or am I missing your point? I must admit my understanding of it comes from a couple of articles I've read and not much else.

Regan
Mar 02 2005
next sibling parent "Charlie Patterson" <charliep1 excite.com> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opsm1bmre123k2f5 ally...
 Well, true, technically.
 The way I see it, you're simply stating.

 Take class A. add concern C1,C2 to method x,y, and z. call it C12_A.
 Take class B. add concern C1,C3 to method x, and y. call it C13_B.

 It is similar to inheritance, as in, you could do it manually...

 class A {
   void foo() {}
 }

 class C12_A : A {    <- take class A, call it C12_A
   void foo() {
     ..concern..      <- add concern to method foo
     super.foo();
     ..~concern..
   }
 }
Before the topic gets dropped: I read about AOP a couple of years ago and forgot about it. (-:  But there was a Java reference implementation which could be perused.

Also, I *think* you can do cross-cuts on variables as well as functions. This may pose a problem for

class T {
    int _a;
}
class B {
    public T t;
}

B b = new B();      // special case for init of t
b.t = something();  // check t again
b.t._a = 3;         // could even have a check on an int
Mar 04 2005
prev sibling parent reply xs0 <xs0 xs0.com> writes:
Hi,

are you sure a new class gets defined? I thought the point was to add 
functionality to existing classes without actually modifying them directly.

There are three issues that are addressed by AOP - scattering (similar 
code all over the place), tangling (same span of code doing more than 
one thing) and crosscutting (if I get it right, the problem of 
connecting modules that do completely different stuff, like a profiler 
and profilee).

The typical case seems to be logging - you normally have to include 
logging code in all the classes you want to log (scattering+tangling), 
and you need to have a Logger class and pass it all around 
(crosscutting). This results in a large amount of code, and it's not 
even related to the original class functionality (an ImageProcessor 
should process images, not concern itself with logging). If you don't 
want to use Logger anymore, but a different class, you have to change 
all the classes that use it.

Conversely, if you have AOP at hand, you can just write an aspect that 
takes care of logging without modifying the original classes' code. This 
is obviously more efficient, especially in the amount of code to be 
written, ease of turning the functionality on/off (you can just remove 
the aspect) and modifyability (is this a word? anyway, it's really easy 
to do something different with all the concerned classes). It's also a 
cleanly defined "link" between the logging part of your app and its 
other parts.

Now, as far as compiling goes, what is done at compile time is that all 
aspects that are turned on are resolved and classes are compiled with 
their original code wrapped in aspect code. I'm really almost sure that 
you don't get a new class (it does have additional functionality, of 
course, but its name and position in class hierarchy are still exactly 
the same). For example, in AspectJ, you can attach code to 
reading/writing a field and calls of methods, ctors and exception 
handlers (you can match both before and after). So, for each of those, 
all matching aspects are identified and their code is inserted.
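The "weaving" just described can be sketched roughly — in Python rather than AspectJ, as an illustration only (ImageProcessor and the logging advice are made-up names). The point is that the method is replaced in place, so the class's name and position in the hierarchy are unchanged:

```python
# Illustration of in-place "weaving": no new class name is produced.
class ImageProcessor:
    def process(self, img):
        return img.upper()

log = []

def weave_logging(cls, method_name):
    """Wrap an existing method in place; the class keeps its identity."""
    original = getattr(cls, method_name)
    def woven(self, *args, **kwargs):
        log.append("before " + method_name)     # advice before the join point
        result = original(self, *args, **kwargs)
        log.append("after " + method_name)      # advice after the join point
        return result
    setattr(cls, method_name, woven)            # replace method on the class itself

weave_logging(ImageProcessor, "process")        # turn logging on from outside

p = ImageProcessor()
print(p.process("img"))   # IMG
print(type(p).__name__)   # ImageProcessor -- still the same class
print(log)                # ['before process', 'after process']
```

This mirrors the claim above: the class with the aspect applied replaces the original rather than coexisting with it.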


xs0

Regan Heath wrote:
 On Wed, 2 Mar 2005 23:39:21 +0000 (UTC), pandemic  
 <pandemic_member pathlink.com> wrote:
 
 In article <d051g0$fq8$1 digitaldaemon.com>, Walter says...

 Thank-you. That actually does make sense. I can see now why it would 
 be  an
 interesting feature. I can only understand these things in terms of 
 how  they
 are implemented :-). So for AOP, what I see is being essentially a  
 derived
 class, with the modified methods being created that are wrappers 
 around  the
 base class's methods. The aspect code is inserted into the wrappers.
Yes and no. As I understand it, the real power of AOP lies in its ability to cut across multiple, perhaps unrelated, classes, without a common base class and without multiple inheritance. It's not really class-oriented, since the methods involved are often identified using a limited regex form (select all 'put' methods across all classes, for example).
Well, true, technically. The way I see it, you're simply stating:

Take class A. Add concern C1,C2 to methods x, y, and z. Call it C12_A.
Take class B. Add concern C1,C3 to methods x and y. Call it C13_B.

It is similar to inheritance, as in, you could do it manually...

class A {
  void foo() {}
}

class C12_A : A {    <- take class A, call it C12_A
  void foo() {
    ..concern..      <- add concern to method foo
    super.foo();
    ..~concern..
  }
}

but the idea is that it's done automatically, via some description/format, and can be done to any base class, not just A; that you can add several different concerns to a class; and that you can pick methods for each concern, and they may differ from the picks for another concern.

Or am I missing your point? I must admit my understanding of it comes from a couple of articles I've read and not much else.

Regan
Mar 05 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Sat, 05 Mar 2005 10:45:31 +0100, xs0 <xs0 xs0.com> wrote:
 are you sure a new class gets defined?
Yes, though it's defined in a different manner to inheritance.
 I thought the point was to add functionality to existing classes without  
 actually modifying them directly.
Correct.
 There are three issues that are addressed by AOP - scattering (similar  
 code all over the place), tangling (same span of code doing more than  
 one thing) and crosscutting (if I get it right, the problem of  
 connecting modules that do completely different stuff, like a profiler  
 and profilee).

 The typical case seems to be logging - you normally have to include  
 logging code in all the classes you want to log (scattering+tangling),  
 and you need to have a Logger class and pass it all around  
 (crosscutting). This results in a large amount of code, and it's not  
 even related to the original class functionality (an ImageProcessor  
 should process images, not concern itself with logging). If you don't  
 want to use Logger anymore, but a different class, you have to change  
 all the classes that use it.

 Conversely, if you have AOP at hand, you can just write an aspect that  
 takes care of logging without modifying the original classes' code. This  
 is obviously more efficient, especially in the amount of code to be  
 written, ease of turning the functionality on/off (you can just remove  
 the aspect) and modifyability (is this a word? anyway, it's really easy  
 to do something different with all the concerned classes). It's also a  
 cleanly defined "link" between the logging part of your app and its  
 other parts.
I agree with the description(s) above.
 Now, as far as compiling goes, what is done at compile time is that all  
 aspects that are turned on are resolved and classes are compiled with  
 their original code wrapped in aspect code.
That seems to me to be how Walter sees it working (from his reply earlier).
 I'm really almost sure that you don't get a new class (it does have  
 additional functionality, of course, but its name and position in class  
 hierarchy are still exactly the same).
You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the hierarchy; it's not identical to the old class.

IMO AOP is just a different form of code sharing, like a mixin, and the result is a new class.

Regan
Mar 06 2005
parent reply xs0 <xs0 xs0.com> writes:
Hopefully we won't do the same thing as the last time :)


 I'm really almost sure that you don't get a new class (it does have  
 additional functionality, of course, but its name and position in 
 class  hierarchy are still exactly the same).
You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the hierarchy; it's not identical to the old class. IMO AOP is just a different form of code sharing, like a mixin, and the result is a new class.
If you look at
http://dev.eclipse.org/viewcvs/indextech.cgi/~checkout~/aspectj-home/doc/progguide/examples-development.html
you'll see that no new class names are produced. It wouldn't make sense either - the point, if you want logging, for example, is to have exactly the same code (including class hierarchy) for the logged part, and turn on logging from outside..

Or, to put it another way, there is a "new" class, but it has the same name as the "old" class and the old class doesn't exist anymore.. Or, yet another way, with AOP, the class is no longer just itself, but itself+all its aspects..

xs0
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 07 Mar 2005 09:29:33 +0100, xs0 <xs0 xs0.com> wrote:
 I'm really almost sure that you don't get a new class (it does have   
 additional functionality, of course, but its name and position in  
 class  hierarchy are still exactly the same).
You need a new name to refer to the old class + new functionality. The old class + new functionality is a new 'thing' which sits somewhere else in the hierarchy; it's not identical to the old class. IMO AOP is just a different form of code sharing, like a mixin, and the result is a new class.
If you look at http://dev.eclipse.org/viewcvs/indextech.cgi/~checkout~/aspectj-home/doc/progguide/examples-development.html you'll see that no new class names are produced. It wouldn't make sense either - the point, if you want logging, for example, is to have exactly the same code (including class hierarchy) for the logged part, and turn on logging from outside..
I see. I don't like it.
 Or, to put it another way, there is a "new" class, but it has the same  
 name as the "old" class and the old class doesn't exist anymore..
 Or, yet another way, with AOP, the class is no longer just itself, but  
 itself+all its aspects..
The very reason I don't like it. What if I want to use the old class and the new class in the same application?

Regan
Mar 07 2005
parent reply xs0 <xs0 xs0.com> writes:
 Or, to put it another way, there is a "new" class, but it has the 
 same  name as the "old" class and the old class doesn't exist anymore..
 Or, yet another way, with AOP, the class is no longer just itself, 
 but  itself+all its aspects..
The very reason I don't like it. What if I want to use the old class and the new class in the same application?
Well, as far as I know, you can't - the aspect is an integral part of the class (or method), just like class members are; it's just defined elsewhere because it may not be what the class is about.

For example, the purpose of a Shape class is to have something that can draw itself, and not also to perform timing measurements, so it makes sense to put such profiling code outside in an aspect, where it can be turned off when no longer needed. It can also be reused for all other cases where you want to measure performance, because it is not tied in to Shape. Of course, you can reuse such code as is by putting it inside some class (Profiler), but you need to change your classes to use it, and then again to not use it anymore, when you're done.

If you take a look at what the typical aspects are (tracing, logging, change monitoring, etc.), it would seem you don't use them in cases where you don't want the new behavior. Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of them (of course, the logging code can choose to not do anything, but it's still "turned on" all the time). You do have the option of defining pointcuts for just the classes that are of interest, and, of course, if you want to control this inside the classes themselves, you don't need aspects, I guess..

xs0
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 07 Mar 2005 11:02:58 +0100, xs0 <xs0 xs0.com> wrote:
 Or, to put it another way, there is a "new" class, but it has the  
 same  name as the "old" class and the old class doesn't exist anymore..
 Or, yet another way, with AOP, the class is no longer just itself,  
 but  itself+all its aspects..
The very reason I don't like it. What if I want to use the old class and the new class in the same application?
Well, as far as I know, you can't
Using my concept, I can. What do you mean by "as far as I know"? Are you talking about an existing implementation? If so, is it the "JAspect" one, and if so, why do we have to do it that way?
 If you take a look at what the typical aspects are (tracing, logging,  
 change monitoring, etc.), it would seem you don't use them in cases  
 where you don't want the new behavior.
Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point.

Eg. Locking: I need to lock access to the members of a class, but only if it's being shared between threads.
 Like, if you want to log all calls to some method (or whatever), you  
 can't also want to not log some of them
Yes, I can. It's called targeting a specific instance. It would be great for debugging.
 (of course, the logging code can choose to not do anything, but it's  
 still "turned on" all the time).
There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do its thing some of the time.
 You do have the option of defining pointcuts for just classes that are  
 of interest, and, of course, if you want to control this inside the  
 classes themselves, you don't need aspects, I guess..
You never want to "control this inside the classes themselves"; that would defeat the purpose of AOP.

However, you might want to enable or disable logging with a button. That button would flip a variable, and that variable would be checked by the AOP code (not the class itself) - it's the feature I described above.

In addition to this feature, you might want to apply the AOP code to one instance of a class and not another. I see no point in limiting AOP in the ways you describe.

Regan
Mar 07 2005
parent reply xs0 <xs0 xs0.com> writes:
First, I'd like to say that I responded to your claim that an aspect 
causes a new class to be produced (with the original one still 
available), which I disagreed with, so please, let's keep the discussion 
focused on that.


 Well, as far as I know, you can't
Using my concept, I can. What do you mean by "as far as I know" are you talking about an existing implementation, if so, is it the "JAspect" one, if so, why do we have to do it that way?
No, I was talking in general. I think that if you want two versions of the class, you need to declare the new class (if for no other reason, to give it a name), which is very different than declaring an aspect (which just modifies the existing class).

If you do declare a new class, you can of course implement the new functionality using an aspect that matches the new class but not the old class. Or, you can just use a mixin..
 If you take a look at what the typical aspects are (tracing, logging,  
 change monitoring, etc.), it would seem you don't use them in cases  
 where you don't want the new behavior.
Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point.
Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.
 Eg. Locking, I need to lock access to the members of a class, but only 
 if  it's being shared between threads.
I don't see how you can do this by producing a new class (how will you switch implementations at runtime? except by creating a new instance, or by using a proxy, but I guess that causes more trouble than it solves), while I can see how you could do this with a single class (e.g. if (shared) { mutex.acquire(); }). So, please provide an example of how you would do this.
 Like, if you want to log all calls to some method (or whatever), you  
 can't also want to not log some of them
Yes, I can. It's called targetting a specific instance. It would be great for debugging.
That's not logical - if you want to log all calls, you want to log all calls, not just some :)

On a more serious note, you can easily implement what you want by having the aspect check some variable to see whether the instance is the one you're interested in. However, the aspect will still get executed on all calls to the method in all instances.
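A rough sketch of that idea — in Python as an illustration, with Service and `watched` being hypothetical names. The advice code runs on every call, but only acts for the instance of interest:

```python
# Illustration: aspect code executes on all calls, acts on one instance.
class Service:
    def handle(self):
        return "ok"

watched = None   # the instance we are interested in (set at runtime)
log = []

original = Service.handle
def logging_handle(self):
    if self is watched:          # the check runs on every single call...
        log.append("handled")    # ...but only the watched instance is logged
    return original(self)
Service.handle = logging_handle  # weave the advice into the class

a, b = Service(), Service()
watched = a
a.handle()
b.handle()
print(log)   # ['handled'] -- only a's call was logged
```

This is the "check some variable" approach: one class, one woven method, per-instance behavior decided at runtime.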
 (of course, the logging code can choose to not do anything, but it's  
 still "turned on" all the time).
There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do its thing some of the time.
Can I have the link to the article?
 You do have the option of defining pointcuts for just classes that 
 are  of interest, and, of course, if you want to control this inside 
 the  classes themselves, you don't need aspects, I guess..
 However, you might want to enable or disable logging with a button, 
 that  button would flip a variable, that variable would be checked by 
 the AOP  code (not the class itself), it's the feature I described above.
Sure, AOP code can check a variable, but I don't see how this requires a new class to be produced.
 In addition to this feature, you might want to apply the AOP code to an  
 instance of a class and not another. I see no point in limiting AOP in 
 the  ways you describe.
I would argue that your approach is the one that is limiting. Consider this:

class A {
}

aspect B {
   // match A and do something with it
}

A obj=new A();

Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to

A obj=new A_B();

to use the aspect. I don't see how that could be useful (I mean, it's far easier to just modify A than to modify all references to A).

You seem to see aspects as something similar to mixins, but they are actually quite different, even though they superficially seem to do the same thing - include some code somewhere. Mixins' primary purpose is to reuse a piece of code instead of typing it over and over again. Aspects' primary purpose is to connect two parts of an app that do not have much in common, in a way that is clean and doesn't require those parts to handle each other.

For example, you can have a rendering module and a profiling module. It does not make sense for the rendering module to call the profiling module (which is the non-AOP way), because the rendering module should not concern itself with profiling. Likewise, the profiling module should not need to know that there exists a rendering module, because its purpose is to measure time (or memory or whatnot). So, the solution AOP provides is to have those two modules completely unaware of each other, and the only thing that provides profiling of rendering is the aspect.

The benefits are obvious - in non-AOP code, you will need to have every rendering class you want to profile be aware of a profiler, you will need to implement methods to set the profiler that is used (and you will also need to set it somewhere), and each draw() method will need to call functions of the profiler. When you decide you no longer need the profiling code, you will have to manually delete it from everywhere (or set the profiler to null, but that will require a bunch of null-checks slowing the thing down). If you decide to use a completely different profiler (i.e. a non-compatible class), you will again have to manually change all references to the new one, possibly also changing which methods get called and in what order. If you use AOP, you avoid all that.

xs0
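The rendering/profiling example can be sketched along these lines — in Python as an illustration, with Renderer and profile() being made-up names. Neither module references the other; the aspect is the only link between them:

```python
import time

# The rendering module: knows nothing about profiling.
class Renderer:
    def draw(self):
        return "drawn"

# The profiling module: knows nothing about rendering.
timings = []

def profile(cls, method_name):
    """The 'aspect': the single place where the two modules meet."""
    original = getattr(cls, method_name)
    def timed(self, *args, **kwargs):
        start = time.perf_counter()
        result = original(self, *args, **kwargs)
        timings.append((method_name, time.perf_counter() - start))
        return result
    setattr(cls, method_name, timed)

profile(Renderer, "draw")      # profiling is turned on from outside
print(Renderer().draw())       # drawn
print(len(timings))            # 1
```

Removing the single `profile(...)` line removes all profiling; no draw() method ever mentions a profiler.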
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 07 Mar 2005 14:16:18 +0100, xs0 <xs0 xs0.com> wrote:
 Well, as far as I know, you can't
Using my concept, I can. What do you mean by "as far as I know" are you talking about an existing implementation, if so, is it the "JAspect" one, if so, why do we have to do it that way?
No, I was talking in general. I think that if you want two versions of the class, you need to declare the new class (if for no other reason, to give it a name), which is very different than declaring an aspect (which just modifies the existing class).
Nope. IMO adding aspects to a class defines a new class.
 If you take a look at what the typical aspects are (tracing, logging,   
 change monitoring, etc.), it would seem you don't use them in cases   
 where you don't want the new behavior.
Well obviously you won't use them if you don't want them. My point is that it's entirely possible I want to use them on a class at one point in my code and not on that same class at another point.
Well, then you can implement the aspect in a way that supports this. The point is that the aspect code still gets executed all the time, even though it can obviously do nothing if it is written that way. That does not require two different classes.
This is less efficient.
 Eg. Locking, I need to lock access to the members of a class, but only  
 if  it's being shared between threads.
I don't see how you can do this by producing a new class (how will you switch implementation in runtime?
You won't. You simply need a locked version of class Foo at one point, and not at another.
 except by creating a new instance
Exactly. You create a LockedFoo when you need a shared one, and a Foo when you don't (see below).
 , or by using a proxy, but I guess that causes more trouble than it  
 solves), while I can see how you could do this with a single class (e.g.  
 if (shared) { mutex.acquire(); }). So, please provide an example how you  
 would do this.
The aspect calls mutex.acquire() instead of the class itself. So, instead of "if (shared) mutex.acquire();" we just apply the locked aspect to the class. eg.

class Foo {
  void baz() {}
}

<.. AOP definition defining locked version of Foo ..> LockedFoo;

LockedFoo a;

static this() {
  a = new LockedFoo();
}

void main() {
  Foo b;

  ..create threads, threads share 'a' ..

  b = new Foo();   <- nothing but main accesses 'b'
  ..use b..

  a.baz();         <- aspect calls mutex.acquire();
}
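A runnable sketch of the LockedFoo idea — in Python as an illustration, mapping the aspect onto a hand-written wrapper subclass, per the concept above:

```python
import threading

class Foo:
    def baz(self):
        return "baz"

class LockedFoo(Foo):                 # the "new class": Foo + locking aspect
    def __init__(self):
        self._mutex = threading.Lock()
    def baz(self):
        with self._mutex:             # aspect acquires the mutex...
            return super().baz()      # ...around the original method

a = LockedFoo()   # shared between threads: every baz() is locked
b = Foo()         # used by one thread only: zero locking overhead
print(a.baz(), b.baz())   # baz baz
```

Both versions coexist in the same program, which is the flexibility being argued for here.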
 Like, if you want to log all calls to some method (or whatever), you   
 can't also want to not log some of them
Yes, I can. It's called targetting a specific instance. It would be great for debugging.
That's not logical - if you want to log all calls, you want to log all calls, not just some :)
It's perfectly logical. I didn't say I wanted to "log all calls". I said "targeting a specific instance"; in other words, I wanted to "log calls for specific instances", not "all calls".
 On a more serious note, you can easily implement what you want by having  
 the aspect check some variable to see whether the instance is the one  
 you're interested in. However, the aspect will still get executed on all  
 calls to the method in all instances.
I could, but that requires a global var, and is less efficient.
 (of course, the logging code can choose to not do anything, but it's   
 still "turned on" all the time).
There was a facility in the AOP article I read to do this. The aspect was included, but it decided not to do it's thing some of the time.
Can I have the link to the article?
It was in Dr. Dobb's Journal, written by "Christopher Diggin" (sp?). I cannot remember the issue (the mag is at work). I have posted the article info in another message to this NG; if you search you might find it.
 You do have the option of defining pointcuts for just classes that  
 are  of interest, and, of course, if you want to control this inside  
 the  classes themselves, you don't need aspects, I guess..
 However, you might want to enable or disable logging with a button,  
 that  button would flip a variable, that variable would be checked by  
 the AOP  code (not the class itself), it's the feature I described  
 above.
Sure, AOP code can check a variable, but I don't see how this requires a new class to be produced.
It doesn't. This is another feature described in the article. It's for runtime enable/disable of an aspect.
 In addition to this feature, you might want to apply the AOP code to  
 an  instance of a class and not another. I see no point in limiting AOP  
 in the  ways you describe.
I would argue that your approach is the one that is limiting. Consider this:

class A {
}

aspect B {
   // match A and do something with it
}

A obj=new A();

Now, if the aspect produces a new class (even if it is named A_B (or whatever) automatically), you need to change the last line to

A obj=new A_B();
no, A_B obj = new A_B();
 I don't see how that could be useful
It's useful because you can _also_ say:

A obj = new A();

at the same time, and use both the normal class and the class with aspects applied.
 (I mean, it's far easier to just modify A than to modify all references  
 to A).
True, which is why, if it's an aspect for debugging/profiling, one that would be enabled/disabled a lot and/or periodically, you would use an alias, eg.

class NormalFoo {}

<.. aspect ..> LockedFoo;

alias NormalFoo Foo;

...

Foo f = new Foo();

just as we've been doing in C/C++ for years (with #define).
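The alias trick can be sketched like this — in Python as an illustration, with a plain class-object assignment standing in for D's alias (LockedFoo here is hand-written, standing in for the aspect-generated class):

```python
class NormalFoo:
    def baz(self):
        return "plain"

class LockedFoo(NormalFoo):      # stand-in for the aspect-generated class
    def baz(self):
        return "locked " + super().baz()

DEBUG = True
Foo = LockedFoo if DEBUG else NormalFoo   # the 'alias' switch, flipped in one place

f = Foo()
print(f.baz())   # locked plain
```

Flipping the one `DEBUG` line switches every `Foo` use between the plain and the aspect-wrapped class, without touching call sites.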
 You seem to see aspects as something similar to mixins, but they are  
 actually quite different, even though they superficially seem to do the  
 same thing - include some code somewhere. Mixins' primary purpose is to  
 reuse a piece of code instead of typing it over and over again. Aspects'  
 primary purpose is to connect two parts of an app that do not have much  
 in common, in a way that is clean and doesn't require those parts to  
 handle each other.
I understand your concept, I just don't think it makes for a better AOP implementation than my own. Yours appears (to me) to be less flexible and/or less efficient (due to inflexibility).
 For example, you can have a rendering module and a profiling module. It  
 does not make sense for the rendering module to call the profiling  
 module (which is the non-AOP way), because the rendering module should  
 not concern itself with profiling. Likewise, the profiling module should  
 not need to know that there exists a rendering module, because its  
 purpose is to measure time (or memory or whatnot). So, the solution AOP  
 provides is to have those two modules completely unaware of each other,  
 and the only thing that provides profiling of rendering is the aspect.

 The benefits are obvious - in non-AOP code, you will need to have every  
 rendering class you want to profile be aware of a profiler, you will  
 need to implement methods to set the profiler that is used (and you will  
 also need to set it somewhere), each draw() method will need to call  
 functions of the profiler; when you decide you no longer need the  
 profiling code, you will have to manually delete it from everywhere (or  
 set the profiler to null, but that will require a bunch of null-checks  
 slowing the thing down). If you decide to use a completely different  
 profiler (i.e. a non-compatible class), you will again have to manually  
 change all references to the new one, possibly also changing which  
 methods get called and in what order. If you use AOP, you avoid all that.
I agree with this example; it's a good description of where you'd use AOP. I still prefer my concept/method of implementing it.

Regan
Mar 07 2005
parent reply xs0 <xs0 xs0.com> writes:
 Nope. IMO adding aspects to a class defines a new class.
Well, I looked at several AOP languages, and you seem to be the only one who thinks aspects define a new class. If I'm wrong, please provide a reference (preferably on the web this time).
 Well, then you can implement the aspect in a way that supports this. 
 The  point is that the aspect code still gets executed all the time, 
 even  though it can obviously do nothing if it is written that way. 
 That does  not require two different classes.
This is less efficient.
How? I'd say it's faster to check a var than to execute completely different code; because modern CPUs rely on cache so heavily, it's far more efficient to stay within cache than to avoid two CPU instructions. That is even more true in the case you're arguing (tracking a single instance), because the branch predictor will be right most of the time, avoiding even the potential cost of a conditional jump (i.e. a pipeline flush).
 I don't see how you can do this by producing a new class (how will 
 you  switch implementation in runtime?
You wont. You simply need a locked version of class Foo at one point, and not at another.
Why do you need an aspect for this? There is no cross-cutting concern and whatnot, if that is what you want to do..
 Now, if the aspect produces a new class (even if it is named A_B (or  
 whatever) automatically), you need to change the last line to

 A obj=new A_B();
no, A_B obj = new A_B();
How is that less of a change?
 I don't see how that could be useful
It's useful because you can _also_ say: A obj = new A(); at the same time, and use both the normal class and the class with aspects applied.
But there is no point in using aspects if all you want is different versions of the same class. Or, as a question, why would you use an aspect in this case?
 I agree with this example, it's a good description of where you'd use AOP.
 I still prefer my concept/method of implementing it.
Well, it's a contradiction that you agree with what I said and also think that aspects should produce new classes.

If an aspect produces a new class, you still have to manually change all references from OriginalClassName to AOPClassName (and back when you no longer want it), which is again far more work than just changing the original class, so rather pointless. Why would you do more work for the same benefits (i.e. new functionality), and how is that better?

xs0
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Mar 2005 02:07:28 +0100, xs0 <xs0 xs0.com> wrote:
 Nope. IMO adding aspects to a class defines a new class.
Well, I looked at several AOP languages, and you seem to be the only one that thinks aspects define a new class. If I'm wrong, please provide a reference (preferably on web this time).
Sorry, no can do. The article is in DDJ, and you have to be a subscriber to read it, IIRC. I'm not saying "you're wrong". I'm saying I prefer my concept to yours (the one you're describing).
 Well, then you can implement the aspect in a way that supports this.  
 The  point is that the aspect code still gets executed all the time,  
 even  though it can obviously do nothing if it is written that way.  
 That does  not require two different classes.
This is less efficient.
How? I'd say it's faster to check a var than to execute completely different code
It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.
 , because modern CPUs rely on cache so heavily, its far more efficient  
 to stay within cache than to avoid two CPU instructions. That is even  
 more true in the case you're arguing (tracking a single instance),  
 because the branch predictor will be right most of the time, avoiding  
 even the potential cost of conditional jump (i.e. pipeline flush).
I'm saying additional code will make it slower.
 I don't see how you can do this by producing a new class (how will  
 you  switch implementation in runtime?
You wont. You simply need a locked version of class Foo at one point, and not at another.
Why do you need an aspect for this? There is no cross-cutting concern and whatnot, if that is what you want to do..
How else can I apply a set of generic code to specific methods of any number of existing classes?
 Now, if the aspect produces a new class (even if it is named A_B (or   
 whatever) automatically), you need to change the last line to

 A obj=new A_B();
no, A_B obj = new A_B();
How is that less of a change?
It's not, I was correcting a mistake.
 I don't see how that could be useful
It's useful because you can _also_ say: A obj = new A(); at the same time, and use both the normal class and the class with aspects applied.
But there is no point in using aspects if all you want is different versions of the same class.
Yes there is, because there is no other generic way to do it.
 Or, as a question, why would you use an aspect in this case?
Why not? It appears to be the best way to achieve what I want.
 I agree with this example, it's a good description of where you'd use  
 AOP.
 I still prefer my concept/method of implementing it.
Well, it's a contradiction that you agree with what I said and also think that aspects should produce new classes.
No, it's not. I agreed with your description of a problem. A problem solved by AOP. There are other problems, also solved by AOP. I believe my concept solves them better than the one you're describing.
 If an aspect produces a new class, you still have to manually change all  
 references from OriginalClassName to AOPClassName (and back when you no  
 longer want it),
No. I've already explained how that would be done. Using alias.
 which is again far more work than just changing the original class, so  
 rather pointless. Why would you do more work with same benefits (i.e.  
 new functionality) and how is that better?
It's not more work. My way has more benefits, i.e. it is more flexible. That is why I prefer it. Regan
Mar 07 2005
parent reply xs0 <xs0 xs0.com> writes:
 How? I'd say it's faster to check a var than to execute completely  
 different code
It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.
It's not no code at all.. if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspect (and in the case the original does nothing, why is it there?)
 I'm saying additional code will make it slower.
Good for you.. I was wondering, however, what your arguments are. And, as explained above, there is actually more code in your case..
 How else can I apply a set of generic code to specific methods of any  
 number of existing classes?
Well, OK, that might be true, but let's compare:

- in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes..

- in "your" version, new classes are always produced, which might be useful in some cases, but is completely useless when you don't want new classes, as there is no single-line-of-code "workaround" (unless you go and change all the rest of the code as well; again, this defeats the purpose)

xs0
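The subclass route described here can be sketched in plain D. `LoggedFoo` below is written by hand as a stand-in for what an aspect weaver might generate for a call-counting aspect; the class and member names are invented for illustration, not real AOP syntax:

```d
import std.stdio;

class Foo
{
    int work(int x) { return x * 2; }
}

// Hypothetical result of applying a call-counting aspect to Foo.
// A weaver might generate this; here it is written by hand.
class LoggedFoo : Foo
{
    int calls;

    override int work(int x)
    {
        ++calls;                // the "aspect" advice
        return super.work(x);   // the original behaviour
    }
}

void main()
{
    auto plain  = new Foo;      // no aspect overhead at all
    auto logged = new LoggedFoo;

    assert(plain.work(3) == 6);
    assert(logged.work(3) == 6 && logged.calls == 1);

    Foo viaBase = logged;       // still usable wherever a Foo is expected
    viaBase.work(3);
    assert(logged.calls == 2);

    writeln("both versions coexist");
}
```

Both variants coexist in one program, and a `LoggedFoo` is still usable through a `Foo` reference; the remaining cost is that each call site must name one of them (or use an alias), which is the objection raised against the new-class approach.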
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Mar 2005 03:13:23 +0100, xs0 <xs0 xs0.com> wrote:
 How? I'd say it's faster to check a var than to execute completely   
 different code
It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.
It's not no code at all..
Sorry, I meant to say "no additional code".
 if you apply an aspect to a method/function (and all code is in a method  
 or a function), there are then two functions, the original, and the  
 original+aspect
Correct. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.
 How else can I apply a set of generic code to specific methods of any   
 number of existing classes?
Well, OK, that might be true, but let's compare: - in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes..
True.
 - in "your" version, new classes are always produced, which might be  
 useful in some cases, but is completely useless when you don't want new  
 classes
True.
 , as there is no single-line-of-code "workaround"
There is a workaround, alias, it's a couple of lines, it's comparable to what is done in C/C++ for the same reason. I still prefer to create a new class, if simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no functional difference between our ideas, I simply prefer mine. Go figure. Regan
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Mar 2005 15:36:36 +1300, Regan Heath <regan netwin.co.nz> wrote:
 On Tue, 08 Mar 2005 03:13:23 +0100, xs0 <xs0 xs0.com> wrote:
 How? I'd say it's faster to check a var than to execute completely   
 different code
It's not "execute completely different code", it's "execute no code at all" as in, the class without the aspect applied. So, it's faster.
It's not no code at all..
Sorry, I meant to say "no additional code".
 if you apply an aspect to a method/function (and all code is in a  
 method or a function), there are then two functions, the original, and  
 the original+aspect
Correct. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.
 How else can I apply a set of generic code to specific methods of any   
 number of existing classes?
Well, OK, that might be true, but let's compare: - in "my" version, you can easily define a new class that extends the original and have the aspect target just the new one. All you need to do is to define the new class, which takes a line of code. So, you have both versions, which you seem to want, while you can still use aspects to change existing classes in cases where you don't want new classes..
True.
 - in "your" version, new classes are always produced, which might be  
 useful in some cases, but is completely useless when you don't want new  
 classes
True.
 , as there is no single-line-of-code "workaround"
There is a workaround, alias, it's a couple of lines, its comparable to what is done in C/C++ for the same reason. I still prefer to create a new class. If, simply because when a class behaviour ismodified I think the name should change to relfect that. It appears there is little or no function difference between our ideas, I simply prefer mine. Go figure.
I can't seem to 'edit' with my client "Opera".. allow me to re-phrase the para above: I still prefer to create a new class, if simply because when a class behaviour is modified I think the name should change to reflect that. It appears there is little or no functional difference between our ideas, I simply prefer mine. Go figure. Regan
Mar 07 2005
parent reply xs0 <xs0 xs0.com> writes:
I see no point in arguing this further. You're just making arbitrary 
unsubstantiated claims and/or say that your preference is somehow a good 
argument in itself. Even when I said the whole world disagrees with you, 
the only thing you managed to respond with was that I'm not a DDJ 
subscriber..


xs0


 I still prefer to create a new class. If, simply because when a class
 behaviour is modified I think the name should change to relfect that. It
 appears there is little or no functional difference between our ideas, I
 simply prefer mine. Go figure.
 
 Regan
Mar 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Mar 2005 08:50:54 +0100, xs0 <xs0 xs0.com> wrote:
 I see no point in arguing this further.
I agree.
 You're just making arbitrary unsubstantiated claims and/or say that your  
 preference is somehow a good argument in itself. Even when I said the  
 whole world disagrees with you, the only thing you managed to respond  
 with was that I'm not a DDJ subscriber..
*sigh* I just don't understand what I'm doing that makes you so hostile. Obviously it must be something *I'm doing* because "the whole world disagrees with [me]" Regan
Mar 08 2005
parent reply xs0 <xs0 xs0.com> writes:
Disclaimer: this post is long and meant for Regan, so it is probably not 
worth your time reading it.

 You're just making arbitrary unsubstantiated claims and/or say that 
 your  preference is somehow a good argument in itself. Even when I 
 said the  whole world disagrees with you, the only thing you managed 
 to respond  with was that I'm not a DDJ subscriber..
*sigh* I just don't understand what I'm doing that makes you so hostile. Obviously it must be something *I'm doing* because "the whole world disagrees with [me]"
I'm not trying to be hostile; perhaps that is the result of my limited knowledge of English or something. But since you asked why you [annoy] me (in random order):

- you ignore half of what other people write (e.g. I said something about CPU cache and how checking a flag could actually be faster; you just said "no, it's slower" without even considering _why_ I said checking a flag _could_ be faster)

- when you misread something, you'll tag the other person as basically stupid without considering that you may be the one that made the mistake (e.g. in the thread on stable functions you misunderstood the comment on caching and suggested that the poster was proposing some bizarre global caching scheme)

- you cling to your ideas like it was a matter of life and death (e.g. in the opCast thread, even after several people, including me, said that using cast to select a method is ridiculous, you still went on and on about how it is something natural; if it were the natural thing to do, there would obviously be no disagreement)

- you use a type of argument, but don't allow others to use that same type of argument (e.g., again in the opCast thread, the whole Brad-is-with-me/Greg-is-with-you thing)

- you never admit you're wrong (e.g. I said "if you want to log all calls, you can't also want to not log some calls", and you said you can; that's simply a logical fallacy, but you failed to admit even something that simple)

- even though you're quick to point out that other people use their preference as an argument, _your_ own preference is often the only argument you have (e.g. "I'm saying I prefer my concept to yours" without any argumentation; if you do manage to say something like "I prefer it because it is more flexible", you totally fail to argue that it is indeed more flexible, at least in my opinion)

- you fail to provide counterarguments in most cases, and just say something arbitrary. Our typical conversation goes like:

me: A
you: B
me: ~B, because C, D, E
you: B
me: ~B, because D, F, G
you: B, isn't it obvious?

This thread was also going this exact same way, so I decided to drop it, because I don't feel either of us is gaining anything from it.

- you take things out of context way too often (e.g. "the whole world disagrees with you"; the point of that sentence was that you fail to counterargue, and I exaggerated a bit to make that point clearer; you sliced it and took it out of context (which naturally completely changes its meaning) and again failed to counterargue (which you could do by showing that you do indeed counterargue))

There you have it; it got much longer than I planned, but I tried to show that you do indeed do those things that bother me :)

xs0
Mar 08 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Mar 2005 12:05:09 +0100, xs0 <xs0 xs0.com> wrote:
 I'm not trying to be hostile, perhaps that is the result of my limited  
 knowledge of english or something. But since you asked why you [annoy]  
 me (in random order):

 - you ignore half of what other people write (e.g. I said something  
 about CPU cache and how checking a flag could actually be faster, you  
 just said "no, it's slower" without even considering _why_ I said  
 checking a flag _could_ be faster)
The reason I didn't address the cache comment is because you misunderstood what I was trying to say; here is the thread: <quote>
 Well, then you can implement the aspect in a way that supports this. 
 The  point is that the aspect code still gets executed all the time, 
 even  though it can obviously do nothing if it is written that way. 
 That does  not require two different classes.
This is less efficient.
How? I'd say it's faster to check a var than to execute completely different code, because modern CPUs rely on cache so heavily, it's far more efficient to stay within cache than to avoid two CPU instructions. That is even more true in the case you're arguing (tracking a single instance), because the branch predictor will be right most of the time, avoiding even the potential cost of conditional jump (i.e. pipeline flush).
It's not "execute completely different code", it's "execute no code at all", as in, the class without the aspect applied. So, it's faster.
It's not no code at all.. if you apply an aspect to a method/function (and all code is in a method or a function), there are then two functions, the original, and the original+aspect (and in the case the original does nothing, why is it there?)
Correct. If I don't need the "original+aspect" then being forced to use it, but skip it with a variable will be slower than "the original" without aspect applied.
</quote> caching didn't apply to what I was saying.
 - when you misread something, you'll tag the other person as basically  
 stupid without considering that you may be the one that made the mistake  
 (e.g. in the thread on stable functions you misunderstood the comment on  
 caching and suggested that the poster is proposing some bizarre global  
 caching scheme)
FYI: I don't "tag" people as anything. In that particular example the OP suggested a compile time optimisation, I amended their definition of "stable functions" to include "known at compile time", they agreed. <quote "martin">
 Instead I suggest the concept of "stable functions". A class of  
 functions
 (and methods), that have no side effects and are guaranteed to generate 
 the same result given the same input
known at compile time.
Exactly. </quote> Ilya disagreed "No need to limit it to compile-time known arguments. ... " we discussed it, Sebastian joined in, I am still unclear exactly what he was suggesting. I would still like to know.
 - you cling to your ideas like it was a matter of life and death (e.g.  
 in the opCast thread, even after several people, including me, said that  
 using cast to select a method is ridiculous, you still went on and on  
 how it is something natural; if it was the natural thing to do, there  
 would obviously be no disagreement)
I will argue my own point of view up until I convince you, you convince me, or we agree to go our separate ways. It appears (to me) you do the same thing. Yes, some people disagreed, you, and Brad. Some people also agreed.

<quote "georg">
If we had overloading on return type, then in some situations we'd want some way to choose which return type to use. Using cast for this would seem natural.
</quote>

<quote "derek">
To counter this, one could make the rule that every call to a function must either assign the result or indicate to the compiler which return type is being ignored/required. This would help make programs more robust and help readers know the coder's intentions better. For example...

cast(int)foo('x'); // Call the 'int' version and ignore the result.
bar( cast(real)foo('y') ); // Call the 'real' version of foo and bar.
</quote>
 - you use a type of argument, but don't allow others to use that same  
 type of argument (e.g., again in the opCast thread, the whole Brad is  
 with me/Greg is with you thing)
That was a bad comment on my part, "taking sides" should not happen in a NG. Sorry.
 - you never admit you're wrong (e.g. I said "if you want to log all  
 calls, you can't also want to not log some calls", and you said you can;  
 that's simply a logical fallacy, but you failed to admit even something  
 that simple)
You misrepresented my argument (and you're doing it again); that is a logical fallacy. <quote "regan"> The very reason I don't like it. What if I want to use the old class and the new class in the same application? </quote> To which you replied with a very long paragraph, which I won't quote in its entirety; the part in question read: <quote> Like, if you want to log all calls to some method (or whatever), you can't also want to not log some of them </quote> You brought up logging all calls, not I. You misunderstood my comment.
 - even though you're quick to point out that other people use their  
 preference as arguments, _your_ own preference is often the only  
 argument you have (e.g. "I'm saying I prefer my concept to yours"  
 without any argumentation; if you do manage to say something like "I  
 prefer it because it is more flexible", you totally fail to argument  
 that it is indeed more flexible, at least in my opinion)
The entire thread was my argument as to why I preferred my idea. As it turned out, our ideas were almost functionally identical. But, they were different and I preferred the tradeoffs of my idea to the tradeoffs of yours. Those statements were reflections of that fact.
 - you fail to provide counterarguments in most cases, and just say  
 something arbitrary. Our typical conversation goes like
 me: A
 you: B
 me: ~B, because C, D, E
 you: B
 me: ~B, because D, F, G
 you: B, isn't it obvious?
 this thread was also going this exact same way, so I decided to drop it,  
 because I don't feel any of us is gaining something from it.
Please post an example of this. I don't believe it has ever occurred, as I intentionally make a point to address every argument someone makes (the exception being when I got "fed up" halfway through a reply to you).
 - you take things out of context way too often (e.g. "the whole world  
 disagrees with you"; the point of that sentence was that you fail to  
 counterargue and I exagerated a bit to make that point clearer;
You should have said "you fail to counterargue". Instead your statement could not be proven and was simply inflammatory.
 you sliced it and took it out of context (which naturally completely  
 changes its meaning) and again failed to counterargue (which you could  
 do by showing that you do indeed counterargue))
What is the point in arguing with a statement which cannot be proven?
 There you have it, it got much longer than I planned, but I tried to  
 argument that you do indeed do those things that bother me :)
I honestly believe that a lot of the 'problems' we seem to have with each other stem from misunderstanding. You've stated that you have a "limited knowledge of english", so I will take extra care to be as clear as possible in future discussions. For the record I can only speak English, and I have respect for anyone who is multi-lingual. Regan
Mar 09 2005
parent reply xs0 <xs0 xs0.com> writes:
I'll reply to OT stuff via e-mail later, as it probably is of no 
interest to anybody but us..


 The reason I didn't address the cache comment is because you  
 missunderstood what I was trying to say, here is the thread:
 
 [snip quotes]
 
 caching didn't apply to what I was saying.
Yes it did. You're suggesting that there exist two methods (actually, two entire classes), one without the aspect code, the other one with aspect code. Code also occupies cache. If the compiled original method is 1000 bytes long, and the new method is 1200 bytes, they take 2200 bytes of cache. If you just have one version that checks a flag, it's like 1210 bytes (including the flag). Considering that L1 cache is usually really small (like 16K for code and 16K for data), that can be a significant difference.

I'm not saying it's always the case that it's better to check flags than to have two methods, I'm just saying it can be faster in some cases. Not to even mention how much more flexible a flag is than conditionally doing something with two separate classes..

If you test this with really simple/short functions (I did test), flag checking is indeed slower, because you'll have everything in cache anyway (although even such a simple thing as "if (flag) a++; else a+=2" is only like 2% slower compared to having two methods that "a++" or "a+=2"), but in "real" code, flag checking may be faster.

Efficiency (as in speed) in modern systems is really not that simple anymore. I read an article the other day on real-time ray tracing, and the two things that provided the biggest speed gains were using SSE instructions (because they can work on more than one piece of data at a time) and a cache-friendly layout of data structures. It was faster to unconditionally do 4 calculations than to conditionally do one. It was faster to convert everything to triangles so that only one case exists, than to handle other primitives (even though a single sphere became like 50 triangles). It was faster to just compute some stuff than to have a check if it is even needed and only then compute it (even though the check was far simpler and the total executed instruction count would be lower, the time that took was longer).

These cases all go against the conventional wisdom that the fastest code is the one that doesn't get executed (that is still true, of course, just not 100% of the time).

xs0
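The trade-off being argued over can be made concrete with a deliberately tiny sketch (function names and values invented for illustration). One version pays a branch on every call but keeps a single body in the instruction cache; the other pays nothing per call but ships two compiled bodies:

```d
// One body plus a flag: a branch on every call, but only a single
// copy of the code competes for instruction cache.
int stepFlagged(int a, bool aspectOn)
{
    return aspectOn ? a + 2 : a + 1;
}

// Two separate bodies, chosen when the aspect is (not) applied:
// no branch at the call site, but both copies exist in the binary.
int stepPlain(int a)  { return a + 1; }
int stepAspect(int a) { return a + 2; }

void main()
{
    // The two schemes agree on results; they differ only in
    // where the dispatch cost is paid (branch vs. code size).
    assert(stepFlagged(10, false) == stepPlain(10));
    assert(stepFlagged(10, true)  == stepAspect(10));
}
```

Which scheme is faster depends on code size and call patterns, which is exactly the conclusion this sub-thread eventually reaches.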
Mar 09 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Thu, 10 Mar 2005 07:33:21 +0100, xs0 <xs0 xs0.com> wrote:
 I'll reply to OT stuff via e-mail later, as it probably is of no  
 interest to anybody but us..
If you like.
 The reason I didn't address the cache comment is because you   
 missunderstood what I was trying to say, here is the thread:
  [snip quotes]
  caching didn't apply to what I was saying.
Yes it did. You're suggesting that there exist two methods (actually, two entire classes)
Yes, two classes.
 , one without the aspect code, the other one with aspect code.
Yes.
 Code also occupies cache. If the compiled original method is 1000 bytes  
 long, and the new method is 1200 bytes, they take 2200 bytes of cache.  
 If you just have one version that checks a flag, it's like 1210 bytes  
 (including the flag). Considering that L1 cache is usually really small  
 (like 16K for code and 16K for data), that can be a significant  
 difference. I'm not saying it's always the case that it's better to  
 check flags than to have two methods, I'm just saying it can be faster  
 in some cases.
Ahh.. I see what you're saying now.
 Not to even mention how much more flexible a flag is than conditionally  
 doing something with two separate classes..
It's more flexible in that it allows a runtime change in behaviour. I think it has a place regardless of which method Walter chooses to use (if he chooses to implement AOP).
 If you test this with really simple/short functions (I did test), flag  
 checking is indeed slower, because you'll have everything in cache  
 anyway (although even such a simple thing as "if (flag) a++; else a+=2"  
 is only like 2% slower compared to having two methods that "a++" or  
 "a+=2"), but in "real" code, flag checking may be faster.
So, in short: if the code is cached, flags are slower, but if the code isn't cached, flags may be faster.
 Efficiency (as in speed) in modern systems is really not that simple  
 anymore. I read an article the other day on real-time ray tracing, and  
 the two things that provided the biggest speed gains were using SSE  
 instructions (because they can work on more than one data at a time) and  
 a cache-friendly layout of data structures.

 It was faster to unconditionally do 4 calculations than to conditionally  
 do one. It was faster to convert everything to triangles so that only  
 one case exists, than to handle other primitives (even though a single  
 sphere became like 50 triangles). It was faster to just compute some  
 stuff than to have a check if it is even needed and only then compute it  
 (even though the check was far simpler and the total executed  
 instructions count would be lower, the time that took was longer). These  
 cases all go against the conventional wisdom that the fastest code is  
 the one that doesn't get executed (that is still true, of course, just  
 not 100% of time).
So, from this we can conclude that efficiency is neither a pro nor a con for either method, as it's dependent on the exact situation in which the code is used. Regan
Mar 13 2005
prev sibling parent h3r3tic <foo bar.baz> writes:
Regan Heath wrote:
 AOP is cool, I wish it was possible to use it in D.
I've written a simple aspect preprocessor for D, but it hasn't received too much attention in the ng. If still anyone wants to take a look, it's here: http://codeinsane.info/download/adp.zip
Feb 06 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
I'm jumping into this at a somewhat arbitrary point, but the general claim
Walter (apparently) makes is that 

1) D tries to catch dumb mistakes made by a user
2) D tries to steer the programmer in the 'right' direction 

Let's see here:

char[] getLine (char[] s)
{
    uint length = s.length;
    foreach (uint i, char c; s)
    {
        if (c == '\n')
            length = i;
    }
    return s [0..length];
}

The above is just an arbitrary example of the apparent hypocritical nature of
(1) and (2). The function is supposed to return the subset of its argument only
as far as a newline. 

Do you see the insidious bug there? Many of you will not, so I'll spell it out:

Walter added a very subtle pseudo-reserved word, that's only used when it comes
to arrays. Yes, it's the word "length". When used within square-brackets, it
always means "the length of the enclosing array". Of course, this overrides any
other variable that happens to be called "length". Naturally, no warning is
emitted.

This would perhaps not be so bad if the pseudo-reserved word were
"implicitArrayLength" or something like that. But NO! Walter uses an
undecorated, and exceptionally common variable name instead. Oh; and this was
introduced to ease the implementation of certain templates - on technical
merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from
"length" to a 'decorated' version instead.
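For the record, a minimal defensive fix is simply to rename the local, so nothing is left for the implicit array length to shadow. This sketch keeps the loop logic unchanged; using size_t instead of uint is an incidental tidy-up, not part of the fix:

```d
char[] getLine (char[] s)
{
    size_t len = s.length;      // renamed: cannot collide with the
                                // implicit array length inside [ ]
    foreach (size_t i, char c; s)
    {
        if (c == '\n')
            len = i;
    }
    return s[0 .. len];        // now unambiguously the local variable
}
```

A distinct symbol for the implicit length (as suggested elsewhere in this thread) would avoid the collision without any renaming at all.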

Any talk about D with regard to (1) and (2) is moot, when D clearly injects
subtle and glorious ways to f%ck the programmer in simple, and shall I say
common, ways.

Fair warning :-)

I fully sympathize with your head-beating-wall exercise, Matthew. Keep it up!





In article <cu44i1$739$1 digitaldaemon.com>, Walter says...
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3v58$3c3$1 digitaldaemon.com...
 A maintenance engineer is stymied by *both*
 forms, and confused contrarily: the first looks like a bug but may not
 be, the second is a bug but doesn't look like it. The only form that
 stands up to maintenance is something along the lines of what Derek's
 talking about:

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
return v.z;
         }

         throw logic_error("This function has encountered a situation
 which contradicts its design and/or the design of the software within
 which it resides");

         return 0;
     }
From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.
 This is what I also do in such cases, and I believe (and have witnessed)
 it being a widely practiced technique.
Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.
You're
 keen to mould D with a view to catering for, or at least mitigating the
 actions of,  the lowest common denominators of the programming gene
 pool.
I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.
Yet you seem decidely uninterested in addressing the concerns of
 large scale and/or commercial and/or large-teams and/or long-lasting
 codebases. How can this attitude help D to prosper?
I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.
 Your measure adds an indeterminately timed exception fire, in the case
 that a programmer doesn't add a return 0. That's great, so far as it
 goes. But here's the fly in your soup: what's to stop them adding the
 return 0?
Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way, here are the choices (numbers pulled out of dimension X):

1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug.

2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 07 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
I agree 'length' seems to be poorly implemented, or perhaps is simply a  
bad idea. I think using a symbol like $ is better for this very reason.

It should be an "error" IMO (not a warning) to 'hide' a variable from an  
enclosing scope. There could be 3 options for avoiding this error:

1. rename the variable or the enclosing variable

2. specify the reference in full. We can specify the enclosing variable in  
full, but we need to be able to say <in this scope>.varname also.

3. use an alias, we need some way to pick the preferred variable i.e. the  
inner variable, allowing you to specify the enclosing var in full.


Regardless, either you're arguing that because this is bad about D, the  
missing return behaviour must also be bad, which is clearly illogical.

Or, this post is simply an attack designed to make Walter's position seem  
weaker, when in fact it supplies no logical evidence to do so.

In other words I can't see how this post has any bearing on the argument  
at hand. At best it's a strawman:
http://www.datanation.com/fallacies/straw.htm

On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris <Kris_member pathlink.com>  
wrote:
 I'm jumping into this at a somewhat arbitrary point, but the general  
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
     uint length = s.length;
     foreach (uint i, char c; s)
     {
         if (c == '\n')
             length = i;
     }
     return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical  
 nature of
 (1) and (2). The function is supposed to return the subset of its  
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll spell  
 it out:

 Walter added a very subtle pseudo-reserved word, that's only used when  
 it comes
 to arrays. Yes, it's the word "length". When used within  
 square-brackets, it
 always means "the length of the enclosing array". Of course, this  
 overrides any
 other variable that happens to be called "length". Naturally, no warning  
 is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and  
 this was
 introduced to ease the implementation of certain templates - on technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/  
 change from
 "length" to a 'decorated' version instead.

 Any talk about D with regard to (1) and (2) is moot, when D clearly  
 injects
 subtle and glorious ways to f%ck the programmer in simple, and shall I  
 say
 common, ways.

 Fair warning :-)

 I fully sympathize with your head-beating-wall exercise, Matthew. Keep  
 it up!





 In article <cu44i1$739$1 digitaldaemon.com>, Walter says...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu3v58$3c3$1 digitaldaemon.com...
 A maintenance engineer is stymied by *both*
 forms, and confused contrarily: the first looks like a bug but may not
 be, the second is a bug but doesn't look like it. The only form that
 stands up to maintenance is something along the lines of what Derek's
 talking about:

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
                 return v.z;
         }

         throw logic_error("This function has encountered a situation
 which contradicts its design and/or the design of the software within
 which it resides");

         return 0;
     }
From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null. I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.
 This is what I also do in such cases, and I believe (and have  
 witnessed)
 it being a widely practiced technique.
Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.
 You're
 keen to mould D with a view to catering for, or at least mitigating the
 actions of,  the lowest common denominators of the programming gene
 pool.
I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.
 Yet you seem decidely uninterested in addressing the concerns of
 large scale and/or commercial and/or large-teams and/or long-lasting
 codebases. How can this attitude help D to prosper?
I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one of the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.
 Your measure adds an indeterminately timed exception fire, in the case
 that a programmer doesn't add a return 0. That's great, so far as it
 goes. But here's the fly in your soup: what's to stop them adding the
 return 0?
Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value. Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way, here are the choices (numbers pulled out of dimension X):

1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug.

2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 07 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Regardless, either you're arguing that because this is bad about D, 
 the  missing return behaviour must also be bad, which is clearly 
 illogical.
He's not saying that at all.
 Or, this post is simply an attack designed to make Walter's position 
 seem  weaker, when in fact it supplies no logical evidence to do so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
 In other words I can't see how this post has any bearing on the 
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Feb 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 8 Feb 2005 09:21:12 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Regardless, either you're arguing that because this is bad about D,
 the  missing return behaviour must also be bad, which is clearly
 illogical.
He's not saying that at all.
Good.
 Or, this post is simply an attack designed to make Walter's position
 seem  weaker, when in fact it supplies no logical evidence to do so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion).

Yes, Walter's motivation may be as stated; however, clearly he believes he is being true to that motivation WRT the missing return situation. Therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
1. You have chosen to attack the method in which I have presented my argument, instead of the actual argument itself: http://www.datanation.com/fallacies/style.htm

2. To put it simply, "whether there is a link or not has no bearing on whether it's important or not"; to argue otherwise is clearly illogical.

Regan
Feb 07 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Or, this post is simply an attack designed to make Walter's position
 seem  weaker, when in fact it supplies no logical evidence to do so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
 1. You have chosen to  attack the method in which I have presented my 
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting others' wisdom than acquiring your own. Specifically, I _did_ attack the argument, and the proof of that is that you responded to my point. Doh!

Let's see what gnomic little nugget you're going to proffer next ...
Feb 07 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 8 Feb 2005 09:48:10 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walter's position
 seem  weaker, when in fact it supplies no logical evidence to do so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular?

- Walter believes it's true to his motivation.
- The behaviour is true to Walter's motivation.
- This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By definition I have one for every instance in which someone appears to _me_ to be illogical. (I accept the possibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have presented my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting others' wisdom than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the second is illogical.
 Let's see what gnomic little nugget you're going to profer next ...
The reason I proffer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything, and they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical; I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning.

Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all; you both exhibit very good logical and rational reasoning. However, in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), yet for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion.

I may be wrong about this argument being illogical. If you believe so, please make an attempt to refute my argument in the same manner in which it was proffered, with logic.

Regan
Feb 07 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters 
 position
 seem  weaker, when in fact it supplies no logical evidence to do 
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By defintion I have one for every instance in which someone appears to _me_ to be illogical. (I accept the posibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have presented 
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting other's wisdoms, than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the seccond is illogical.
 Let's see what gnomic little nugget you're going to profer next ...
The reason I profer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything, they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical, I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all, you both exhibit very good logical and rational reasoning, however in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so please make an attempt to refute my argument in the same manner in which it was proferred, with logic.
Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute.

Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was proffered, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form.

I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.
Feb 07 2005
next sibling parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters 
 position
 seem  weaker, when in fact it supplies no logical evidence to do 
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By defintion I have one for every instance in which someone appears to _me_ to be illogical. (I accept the posibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have presented 
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting other's wisdoms, than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the seccond is illogical.
 Let's see what gnomic little nugget you're going to profer next ...
The reason I profer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything, they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical, I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all, you both exhibit very good logical and rational reasoning, however in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so please make an attempt to refute my argument in the same manner in which it was proferred, with logic.
Very well, put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was preferred, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.
Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P

-Ben
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <Ben_member pathlink.com> wrote in message 
news:cuaddf$1tqk$1 digitaldaemon.com...
Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P
Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round.

The Ranting Twit .....
Feb 08 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 06:42:44 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Ben Hinkle" <Ben_member pathlink.com> wrote in message
 news:cuaddf$1tqk$1 digitaldaemon.com...
 In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters
 position
 seem  weaker, when in fact it supplies no logical evidence to do
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someones past mistake (a matter of opinion, which I happen to share), has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walters motivation may be as stated, however, clearly he believes he is being true to that motivation WRT to the missing return situation, therefore "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By defintion I have one for every instance in which someone appears to _me_ to be illogical. (I accept the posibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have presented
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting other's wisdoms, than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the seccond is illogical.
 Let's see what gnomic little nugget you're going to profer next ...
The reason I profer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything, they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical, I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all, you both exhibit very good logical and rational reasoning, however in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so please make an attempt to refute my argument in the same manner in which it was proferred, with logic.
Very well, put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was preferred, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.
Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P
Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round.
Matthew, I too am sorry. My intention wasn't to tell you what to do or how to do it, but rather to share an ideal to which I subscribe. I have replied to your last post, probably because I have to have the last word. :) I would be happy if you felt like reading and replying, though I'll understand if you simply want to leave the horse where it lies, so to speak. To re-iterate, I have the greatest respect for both you and Kris (and many other people here); at the same time I have strong opinions of my own and will always share them. I realise that I can come across aggressively; I fear it's a flaw of what I hope is a passionate nature. Regan
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opslwiflq723k2f5 ally...
 On Wed, 9 Feb 2005 06:42:44 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Ben Hinkle" <Ben_member pathlink.com> wrote in message
 news:cuaddf$1tqk$1 digitaldaemon.com...
 In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters
 position
 seem  weaker, when in fact it supplies no logical evidence to 
 do
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated; however, clearly he believes he is being true to that motivation WRT the missing return situation. Therefore, "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on 
 the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By definition I have one for every instance in which someone appears to _me_ to be illogical. (I accept the possibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have 
 presented
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting others' wisdom than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the second is illogical.
 Let's see what gnomic little nugget you're going to proffer next 
 ...
The reason I proffer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything; they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical. I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all; you both exhibit very good logical and rational reasoning. However, in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so, please make an attempt to refute my argument in the same manner in which it was proffered, with logic.
Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was proffered, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.
Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P
Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round.
Matthew, I too am sorry. My intention wasn't to tell you what to do or how to do it, but rather to share an ideal to which I subscribe. I have replied to your last post, probably because I have to have the last word. :) I would be happy if you felt like reading and replying, though I'll understand if you simply want to leave the horse where it lies, so to speak. To re-iterate, I have the greatest respect for both you and Kris (and many other people here); at the same time I have strong opinions of my own and will always share them. I realise that I can come across aggressively; I fear it's a flaw of what I hope is a passionate nature.
Regan, I am, like everyone else, flawed in myriad ways. One of 'em is I don't like being told what to do. Add that to a few frustrating days in my work life, and you get overreaction, rudeness, patronisation and general bad form. I think the way you carry on with the logic is irritating, but (i) I overreacted, and (ii) I know full well that I can be, and often am, at least as irritating, and probably in several different ways. 'nuff said? Cheers The Huffy Nerfal .....
Feb 08 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 10:07:19 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslwiflq723k2f5 ally...
 On Wed, 9 Feb 2005 06:42:44 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Ben Hinkle" <Ben_member pathlink.com> wrote in message
 news:cuaddf$1tqk$1 digitaldaemon.com...
 In article <cu9eiv$26pa$1 digitaldaemon.com>, Matthew says...
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters
 position
 seem  weaker, when in fact it supplies no logical evidence to
 do
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated; however, clearly he believes he is being true to that motivation WRT the missing return situation. Therefore, "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on
 the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By definition I have one for every instance in which someone appears to _me_ to be illogical. (I accept the possibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have
 presented
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting others' wisdom than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the second is illogical.
 Let's see what gnomic little nugget you're going to proffer next
 ...
The reason I proffer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything; they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical. I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all; you both exhibit very good logical and rational reasoning. However, in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so, please make an attempt to refute my argument in the same manner in which it was proffered, with logic.
Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute. Indeed, examining your posts from a psychological perspective reveals all manner of interesting little tactics. For example, "please make an attempt to refute my argument in the same manner in which it was proffered, with logic". This not only attempts to (subconsciously) persuade the recipient (me) _and_ others to accept that my/Kris' arguments thus far are devoid of logic, it also inclines us all to treat your posts as logical because you explicitly and overtly put in your impressive links. Furthermore, it attempts to control the debate - in your favour no doubt - by prescribing its form. I'm neither impressed with your tactics (though I recognise that they may well be effective in many of your online relationships), nor am I inclined to comply with your attempts to frame the debates according to your own terms.
Jeepers, guys. Chill out. I'm half-way not believing that Matthew posted that since it doesn't really sound like him. This "debate" has gotten too polarized IMO. Everyone put the knives down and back away... :-P
Agreed. Bad day behaviour. I guess I just don't like being told what to do, or how to think. Sorry all round.
Matthew, I too am sorry. My intention wasn't to tell you what to do or how to do it, but rather to share an ideal to which I subscribe. I have replied to your last post, probably because I have to have the last word. :) I would be happy if you felt like reading and replying, though I'll understand if you simply want to leave the horse where it lies, so to speak. To re-iterate, I have the greatest respect for both you and Kris (and many other people here); at the same time I have strong opinions of my own and will always share them. I realise that I can come across aggressively; I fear it's a flaw of what I hope is a passionate nature.
Regan, I am, like everyone else, flawed in myriad ways. One of 'em is I don't like being told what to do. Add that to a few frustrating days in my work life, and you get overreaction, rudeness, patronisation and general bad form. I think the way you carry on with the logic is irritating, but
Understood. I'll do my best to curtail my religious zeal.
 (i) I
 overreacted, and (ii) I know full well that I can be, and often am, at
 least as irritating, and probably in several different ways.

 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... ) Regan
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Matthew, I too am sorry. My intention wasn't to tell you what to do 
 or
 how  to do it, but rather to share an ideal to which I subscribe.

 I have replied to your last post, probably because I have to have 
 the
 last  word. :)

 I would be happy if you felt like reading and replying, though I'll
 understand if you simply want to leave the horse where it lies, so 
 to
 speak.

 To re-iterate I have the greatest respect for both you and Kris (and
 many  other people here), at the same time I have strong opinions of
 my own and  will always share them. I realise that I can come across
 aggressively, I  fear it's a flaw of what I hope is a passionate
 nature.
Regan, I am, like everyone else, flawed in myriad ways. One of 'em is I don't like being told what to do. Add that to a few frustrating days in my work life, and you get overreaction, rudeness, patronisation and general bad form. I think the way you carry on with the logic is irritating, but
Understood. I'll do my best to curtail my religious zeal.
 (i) I
 overreacted, and (ii) I know full well that I can be, and often am, 
 at
 least as irritating, and probably in several different ways.

 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
Feb 08 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 10:23:50 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
(secretly stealing the last word again) LOL.. elegantly done! Regan
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opslwlvgg823k2f5 ally...
 On Wed, 9 Feb 2005 10:23:50 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
(secretly stealing the last word again) LOL.. elegantly done!
It was nothing
Feb 08 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 11:22:27 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslwlvgg823k2f5 ally...
 On Wed, 9 Feb 2005 10:23:50 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
(secretly stealing the last word again) LOL.. elegantly done!
It was nothing
Again! I fear I am no match...
Feb 08 2005
next sibling parent John Reimer <brk_6502 yahoo.com> writes:
Regan Heath wrote:
 On Wed, 9 Feb 2005 11:22:27 +1100, Matthew  
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslwlvgg823k2f5 ally...

 On Wed, 9 Feb 2005 10:23:50 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:

 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
(secretly stealing the last word again) LOL.. elegantly done!
It was nothing
Again! I fear I am no match...
Okay, guys! This is ridiculous. I'll have the last word and be done with it! :-P
Feb 08 2005
prev sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opslwm1ox523k2f5 ally...
 On Wed, 9 Feb 2005 11:22:27 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslwlvgg823k2f5 ally...
 On Wed, 9 Feb 2005 10:23:50 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 'nuff said?
Yeah. (yet here I am posting more? I really must admit to having a problem with needing the last word... )
You're welcome to it.
(secretly stealing the last word again) LOL.. elegantly done!
It was nothing
Again! I fear I am no match...
Surely not
Feb 08 2005
prev sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 8 Feb 2005 15:18:52 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsluof7fx23k2f5 ally...
 On Tue, 8 Feb 2005 09:48:10 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 Or, this post is simply an attack designed to make Walters
 position
 seem  weaker, when in fact it supplies no logical evidence to do
 so.
Oh come on! It goes to the motivation behind the missing return value. Plain as the nose on your face.
I don't see how showing someone's past mistake (a matter of opinion, which I happen to share) has any bearing on another action which may/may not be a mistake (another matter of opinion). Yes, Walter's motivation may be as stated; however, clearly he believes he is being true to that motivation WRT the missing return situation. Therefore, "at best" Kris has shown that length was/is a bad idea and needs to be changed, but it has little or no bearing on the missing return situation.
Well put. I just don't agree.
Sorry, don't agree with what in particular? - Walter believes it's true to his motivation. - The behaviour is true to Walters motivation. - This argument has no bearing on the other.
 In other words I can't see how this post has any bearing on the
 argument  at hand. At best it's a strawman:
 http://www.datanation.com/fallacies/straw.htm
Yawn! Keep trotting 'em out. They must be important and apposite, if there's a link you can reference.
Marvellous stuff. Keep going. I'm sure you've got one for every occasion, and it's ripping good sport.
By definition I have one for every instance in which someone appears to _me_ to be illogical. (I accept the possibility that I could be wrong and welcome a rebuttal)
 1. You have chosen to  attack the method in which I have presented
 my
 argument, instead of the actual argument itself:
Well, it appears that you're more adept at quoting others' wisdom than acquiring your own.
Now you're attacking me: http://www.datanation.com/fallacies/attack.htm
 Specifically, I _did_ attack the argument, and the
 proof of that is that you responded to my point. Doh!
You attacked _both_ the argument _and_ the method in which it was proposed. The first is fine, the second is illogical.
 Let's see what gnomic little nugget you're going to proffer next ...
The reason I proffer these links is simple. In my experience a skillful writer/speaker can sway an audience to believe/disbelieve just about anything; they can do it without providing any logical or rational reasoning. These links helped _me_ understand what they were doing and why it was illogical. I hope to enlighten as many people as I can, so that we can all get on with having logical, rational debates with good sound reasoning. Now, I'm not saying either you or Kris _are_ illogical and/or irrational at all; you both exhibit very good logical and rational reasoning. However, in this particular case I think the argument is illogical and I'm trying to explain why to the best of my ability. My intention is not to attack the person at all (for that would be illogical), however for some reason you seem to have taken it as an attack against the person, and attacked back in that fashion. I may be wrong about this argument being illogical. If you believe so, please make an attempt to refute my argument in the same manner in which it was proffered, with logic.
Very well put. What you either fail to recognise, or may recognise all too well, is that by contextualising both your own arguments and those of others in logic terms (I'd say logical terms, but that'd be confusing, illogical as that may be), you are attempting to coerce just as surely as those whom you (claim to) refute.
Please explain, I don't understand.
 Indeed, examining your posts from a psychological perspective reveals
 all manner of interesting little tactics. For example, "please make an
 attempt to refute my argument in the same manner in which it was
 preferred, with logic". This not only attempts to (subconsciously)
 persuade the recipient (me) _and_ others to accept that my/Kris'
 arguments thus far are devoid of logic
I see what you mean, and I agree: that sentence was ill-considered. What I meant by it was that I believed some of the arguments presented were illogical, in particular those that I indicated were illogical, and why.
 , it also inclines us all to treat
 your posts as logical because you explicitly and overtly put in your
 impressive links.
The links merely serve to better explain the concepts I am trying to convey. I can see your point: some people view links as 'authoritative', and posting links therefore has an effect. What can I say, I wish it were not so. It all gets a bit circular; by reading these links I've learnt to spot things like this, but I had to follow the link to do so.
 Furthermore, it attempts to control the debate - in
 your favour no doubt - by prescribing its form.
You seem to be assuming malicious intent on my part? I realise not everything can be expressed logically, but it appears to me that this can, and should be.
 I'm neither impressed with your tactics (though I recognise that they
 may well be effective in many of your online relationships), nor am I
 inclined to comply with your attempts to frame the debates according to
 your own terms.
Saying I have 'tactics' implies that I am trying to beat you in some way. My intent is for us to find and share common ground, not war. Regan
Feb 08 2005
prev sibling next sibling parent reply Derek <derek psych.ward> writes:
On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:

 I'm jumping into this at a somewhat arbitrary point, but the general claim
 Walter (apparently) makes is that 
 
 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction 
 
 Let's see here:
 
 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }
 
 The above is just an arbitrary example of the apparent hypocritical nature of
 (1) and (2). The function is supposed to return the subset of its argument only
 as far as a newline. 
 
 Do you see the insidious bug there? Many of you will not, so I'll spell it out:
 
 Walter added a very subtle pseudo-reserved word, that's only used when it comes
 to arrays. Yes, it's the word "length". When used within square-brackets, it
 always means "the length of the enclosing array". Of course, this overrides any
 other variable that happens to be called "length". Naturally, no warning is
 emitted.
 
 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and this was
 introduced to ease the implementation of certain templates - on technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/ change from
 "length" to a 'decorated' version instead.
And this is one of the reasons why I use 'decorated' identifier names: to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope)

-- 
Derek
Melbourne, Australia
Feb 07 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek wrote:

 And this is one of the reason why I use 'decorated' identifier names;
I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:
 Hungarian Notation
 Just say no.
--anders
Feb 07 2005
parent reply Derek <derek psych.ward> writes:
On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:

 Derek wrote:
 
 And this is one of the reason why I use 'decorated' identifier names;
I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:
 Hungarian Notation
 Just say no.
Well it's been working for me and my teams for 10 years now, so sue me. ;-) -- Derek Melbourne, Australia
Feb 07 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek" <derek psych.ward> wrote in message 
news:1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net...
 On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:

 Derek wrote:

 And this is one of the reason why I use 'decorated' identifier 
 names;
I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:
 Hungarian Notation
 Just say no.
Well it's been working for me and my teams for 10 years now, so sue me. ;-)
Type decoration - void fn(long lLimit); - is bad, because it is non-portable, and introduces strong probabilities that the code itself will be turned into a liar. Purpose decoration - void fn(char const *name, int bOverwrite); - is good, notwithstanding its uglification. (It's still better to do without, if that does not promote ambiguities.) (For a better exposition, consult section 17.4 of your copy of Imperfect C++ <g>) Cheers -- Matthew Wilson Author: "Imperfect C++", Addison-Wesley, 2004 (http://www.imperfectcplusplus.com) Contributing editor, C/C++ Users Journal (http://www.synesis.com.au/articles.html#columns) STLSoft moderator (http://www.stlsoft.org) "I can't sleep nights till I found out who hurled what ball through what apparatus" -- Dr Niles Crane -------------------------------------------------------------------------------
Feb 07 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net>, Derek says...
On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:

 Derek wrote:
 
 And this is one of the reason why I use 'decorated' identifier names;
I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:
 Hungarian Notation
 Just say no.
Well it's been working for me and my teams for 10 years now, so sue me. ;-)
I applaud any group that sets its own standards to deal with complexity, and then sticks with them. The issue here is that D slyly injects its own variable named 'length', which then (a) forces one to adopt just such a standard, once you've hopefully noticed the bug, and (b) D does not tell you what it did to f&ck you in the first place :-( Given Walter's current position on this particular language 'idiom', one must resort to (a). Hence, one has to adopt a somewhat tongue-in-cheek attitude to lofty claims regarding the goals of D to "protect and serve". I understand Walter invoked the "do as I say, not do as I do" as a rebuke within this thread somewhere. My opinion, and suggestion, is that perhaps he might reflect upon that for a while :-)
Feb 07 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 7 Feb 2005 22:38:27 +0000 (UTC), Kris wrote:

 In article <1lxyunvbocmz8$.jojxhcylx4ev.dlg 40tude.net>, Derek says...
On Mon, 07 Feb 2005 22:21:25 +0100, Anders F Björklund wrote:

 Derek wrote:
 
 And this is one of the reason why I use 'decorated' identifier names;
I'm not sure that warts classify as decorations in all cultures? :-) http://www.digitalmars.com/d/dstyle.html:
 Hungarian Notation
 Just say no.
Well it's been working for me and my teams for 10 years now, so sue me. ;-)
I applaud any group that sets its own standards to deal with complexity, and then sticks with them. The issue here is that D slyly injects its own variable named 'length', which then (a) forces one to adopt just such a standard, once you've hopefully noticed the bug, and (b) D does not tell you what it did to f&ck you in the first place :-( Given Walter's current position on this particular language 'idiom', one must resort to (a). Hence, one has to adopt a somewhat tongue-in-cheek attitude to lofty claims regarding the goals of D to "protect and serve". I understand Walter invoked the "do as I say, not do as I do" as a rebuke within this thread somewhere. My opinion, and suggestion, is that perhaps he might reflect upon that for a while :-)
Sorry I digressed from the main point of your post. I tend to agree with your assessment of the 'length' decision. However, even with that said, and without using decorated identifiers, your example could do with improved identifier naming, for example ...

char[] getLine (char[] text_string)
{
    uint newline_position = text_string.length;
    foreach (uint curr_position, char curr_char; text_string)
    {
        if (curr_char == '\n')
        {
            newline_position = curr_position;
            break;
        }
    }
    return text_string [0..newline_position];
}

Short identifier names do not always lead to better legibility, just as longer ones do not always enhance legibility. But there is a balance that can work.

-- 
Derek
Melbourne, Australia
8/02/2005 11:24:10 AM
Feb 07 2005
parent Kris <Kris_member pathlink.com> writes:
In article <cu91gc$1fc7$1 digitaldaemon.com>, Derek Parnell says...
<snip>
Short identifier names do not always lead to better legibility, just as
longer ones do not always enhance legibility. But there is a balance that can
work.
Amen, Derek. But that's a somewhat different topic. This one is "Compiler support for writing bug free code". The point is that, in this case, the compiler does just the opposite. If I may be so bold: What bothers me is that while Walter acknowledges this issue, he doesn't think it's worthy enough to warrant any attention. This is in rather stark contrast to the "preaching" and "hand clasping" that's somewhat evident in parts of this thread. It would be funny, if it weren't so sad :-) Anyway; I must apologise for drifting this thread away from the original problem, so I'll finish with the following: ultimately, we all want D to be a better language -- not better than C++/Java -- better than D is currently. To that end, we have to point out all of the shortcomings and endeavour to have them resolved appropriately. This is why I'm not giving your alternate topic of "better variable names; best practices" the credit it duly & truly deserves, yet keep carping on about the hypocrisy that needs to be rectified :~} Cheers! - Kris
Feb 07 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek" <derek psych.ward> wrote in message 
news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...
 On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:

 I'm jumping into this at a somewhat arbitrary point, but the general 
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical 
 nature of
 (1) and (2). The function is supposed to return the subset of its 
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll 
 spell it out:

 Walter added a very subtle pseudo-reserved word, that's only used 
 when it comes
 to arrays. Yes, it's the word "length". When used within 
 square-brackets, it
 always means "the length of the enclosing array". Of course, this 
 overrides any
 other variable that happens to be called "length". Naturally, no 
 warning is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and 
 this was
 introduced to ease the implementation of certain templates - on 
 technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/ 
 change from
 "length" to a 'decorated' version instead.
And this is one of the reasons why I use 'decorated' identifier names; to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope)
Very sensible. But very sad that we must do so, given the total ugliness of decorations in general, and the almost total uselessness of decorations of a Hungarian nature.

The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave.

Here's a possible compromise, although I'm not sure I like it:

    char[] getLine (char[] s)
    {
        uint length = s.length;
        foreach (uint i, char c; s)
        {
            if (c == '\n')
                length = i;
        }

        return s [0 .. .length];
    }

The . before length indicates it's local to 's'.

Hmmm, on second thoughts, that stinks.

In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.
Feb 07 2005
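[For reference, the trap being debated reduces to a few lines. This is a sketch only, reflecting the compiler behaviour described in this thread, where a bare "length" inside slice brackets silently means the enclosing array's length:]

```d
char[] getLine (char[] s)
{
    uint length = s.length;      // a perfectly ordinary local variable
    foreach (uint i, char c; s)
    {
        if (c == '\n')
            length = i;
    }
    // BUG: inside the brackets, "length" is rewritten to mean s.length,
    // not the local above -- so the newline search has no effect and the
    // whole of s is returned, with no warning from the compiler.
    return s [0..length];
}
```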
next sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 8 Feb 2005 09:24:38 +1100, Matthew wrote:

 "Derek" <derek psych.ward> wrote in message 
 news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...
 On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:

 I'm jumping into this at a somewhat arbitrary point, but the general 
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical 
 nature of
 (1) and (2). The function is supposed to return the subset of its 
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll 
 spell it out:

 Walter added a very subtle pseudo-reserved word, that's only used 
 when it comes
 to arrays. Yes, it's the word "length". When used within 
 square-brackets, it
 always means "the length of the enclosing array". Of course, this 
 overrides any
 other variable that happens to be called "length". Naturally, no 
 warning is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and 
 this was
 introduced to ease the implementation of certain templates - on 
 technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/ 
 change from
 "length" to a 'decorated' version instead.
And this is one of the reasons why I use 'decorated' identifier names; to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope)
Very sensible. But very sad that we must do so, given the total ugliness of decorations in general, and the almost total uselessness of decorations of a Hungarian nature.
Agreed, if the only purpose of using decorated words for identifiers is to work around clashes with keywords.

However, another major reason for using the decoration scheme that we use here is to make it faster for people to understand the code that they are reading. By having 'scope/purpose' hints in the identifier names, it usually saves people scanning large blocks of code looking for where an identifier was declared.
 The last I recall from last year was that the implicit length was going 
 to be $. I'm sure there were reasons against, but they cannot be as 
 compelling as the example Kris gave.
You are preaching to the converted, brother ;-)
 Here's a possible compromise, although I'm not sure I like it:
 
     char[] getLine (char[] s)
     {
         uint length = s.length;
         foreach (uint i, char c; s)
         {
             if (c == '\n')
             length = i;
         }
 
         return s [0 .. .length];
     }
 
 The . before length indicates its 'local' to 's'.
 
 Hmmm, on second thoughts, that stinks.
Yes, it does. But nice try though.
 In general - indeed, it's harder to think of a contrary example - 
 verbose code is better than dangerous code. Kris is quite right when he 
 says that D has introduced some of the latter.
Agreed. I have always supported the use of a symbol rather than an English word to represent the array's length property. I'm keen to promote the readability of source code by humans, so an extra 'dot' seems counter-productive to that aim.

-- 
Derek
Melbourne, Australia
8/02/2005 11:15:38 AM
Feb 07 2005
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
In article <cu8qfi$11q3$1 digitaldaemon.com>, Matthew says...
"Derek" <derek psych.ward> wrote in message 
news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...
 On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:

 I'm jumping into this at a somewhat arbitrary point, but the general 
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical 
 nature of
 (1) and (2). The function is supposed to return the subset of its 
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll 
 spell it out:

 Walter added a very subtle pseudo-reserved word, that's only used 
 when it comes
 to arrays. Yes, it's the word "length". When used within 
 square-brackets, it
 always means "the length of the enclosing array". Of course, this 
 overrides any
 other variable that happens to be called "length". Naturally, no 
 warning is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and 
 this was
 introduced to ease the implementation of certain templates - on 
 technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/ 
 change from
 "length" to a 'decorated' version instead.
And this is one of the reasons why I use 'decorated' identifier names; to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope)
Very sensible. But very sad that we must do so, given the total ugliness of decorations in general, and the almost total uselessness of decorations of a Hungarian nature.

The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave.

Here's a possible compromise, although I'm not sure I like it:

    char[] getLine (char[] s)
    {
        uint length = s.length;
        foreach (uint i, char c; s)
        {
            if (c == '\n')
                length = i;
        }

        return s [0 .. .length];
    }

The . before length indicates it's local to 's'.

Hmmm, on second thoughts, that stinks.

In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.
How about:

array[from...] // analogous to array[from .. array.length];

- Dave

Feb 07 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Dave" <Dave_member pathlink.com> wrote in message 
news:cu91eb$1f90$1 digitaldaemon.com...
 In article <cu8qfi$11q3$1 digitaldaemon.com>, Matthew says...
"Derek" <derek psych.ward> wrote in message
news:19f0tg7229js5.y2wq16gmtkq3.dlg 40tude.net...
 On Mon, 7 Feb 2005 19:47:15 +0000 (UTC), Kris wrote:

 I'm jumping into this at a somewhat arbitrary point, but the 
 general
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical
 nature of
 (1) and (2). The function is supposed to return the subset of its
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll
 spell it out:

 Walter added a very subtle pseudo-reserved word, that's only used
 when it comes
 to arrays. Yes, it's the word "length". When used within
 square-brackets, it
 always means "the length of the enclosing array". Of course, this
 overrides any
 other variable that happens to be called "length". Naturally, no
 warning is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses 
 an
 undecorated, and exceptionally common variable name instead. Oh; 
 and
 this was
 introduced to ease the implementation of certain templates - on
 technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/
 change from
 "length" to a 'decorated' version instead.
And this is one of the reasons why I use 'decorated' identifier names; to avoid clashes with language keywords.

char[] getLine (char[] pString)
{
    uint lLength = pString.length;
    foreach (uint fIdx, char fCurrChar; pString)
    {
        if (fCurrChar == '\n')
            lLength = fIdx;
    }
    return pString [0..lLength];
}

(The prefixes give hints as to the identifiers' scope)
Very sensible. But very sad that we must do so, given the total ugliness of decorations in general, and the almost total uselessness of decorations of a Hungarian nature.

The last I recall from last year was that the implicit length was going to be $. I'm sure there were reasons against, but they cannot be as compelling as the example Kris gave.

Here's a possible compromise, although I'm not sure I like it:

    char[] getLine (char[] s)
    {
        uint length = s.length;
        foreach (uint i, char c; s)
        {
            if (c == '\n')
                length = i;
        }

        return s [0 .. .length];
    }

The . before length indicates it's local to 's'.

Hmmm, on second thoughts, that stinks.

In general - indeed, it's harder to think of a contrary example - verbose code is better than dangerous code. Kris is quite right when he says that D has introduced some of the latter.
How about:

array[from...] // analogous to array[from .. array.length];
IIRC, that was a popular suggestion at the time, as was

    array[ .. 2]   // from 0 => 2

and

    array[ .. ]    // from 0 => length

but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)
Feb 07 2005
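[For comparison, a sketch of the slice spellings this thread treats as already accepted; the function name is hypothetical, and the full-slice form a[] is assumed per the language of the time:]

```d
void example (int[] a)
{
    int[] whole = a[];               // the whole array; same as a[0 .. a.length]
    int[] head  = a[0 .. 2];         // explicit bounds at both ends
    int[] tail  = a[2 .. a.length];  // "to the end" still repeats the array name
}
```

The proposals above (array[from...], array[ .. 2], array[ .. ]) would only shorten the cases where one bound is the extreme.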
next sibling parent Ben Hinkle <Ben_member pathlink.com> writes:
 How about:

 array[from...] // analogous to array[from .. array.length];
IIRC, that was a popular suggestion at the time, as was

    array[ .. 2]   // from 0 => 2

and

    array[ .. ]    // from 0 => length
Unicode to the rescue:

array[from..\u221E]

For those who don't immediately recognize \u221E, it is the codepoint for infinity. :-)
Feb 07 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cu930u$1j33$1 digitaldaemon.com>, Matthew says...
 How about:

 array[from...] // analogous to array[from .. array.length];
IIRC, that was a popular suggestion at the time, as was

    array[ .. 2]   // from 0 => 2

and

    array[ .. ]    // from 0 => length

but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)
The problem is that one may need to reference the array-length within an expression; an expression within the brackets. This extends to templates, which needed a means to explicitly reference the array length whilst avoiding restating the array itself (or something like that).

The upshot, I understand, was that the notion of an implicit array-length 'temporary' seemed appropriate. Unfortunately it was implemented as a pseudo-reserved "length", rather than in an alternate manner that didn't quite shove it up the programmers' proverbial arse.
Feb 07 2005
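[The expression case Kris refers to looks roughly like this. A sketch of the feature as this thread describes it; the function name is hypothetical:]

```d
char[] chop (char[] s)
{
    // Inside the brackets, the bare word "length" is rewritten by the
    // compiler to mean s.length, so arithmetic can be done without
    // repeating the array name:
    return s [0 .. length - 1];

    // Spelled out, with no implicit name, the same slice would read:
    //     return s [0 .. s.length - 1];
}
```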
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Kris" <Kris_member pathlink.com> wrote in message 
news:cu956n$1msm$1 digitaldaemon.com...
 In article <cu930u$1j33$1 digitaldaemon.com>, Matthew says...
 How about:

 array[from...] // analogous to array[from .. array.length];
IIRC, that was a popular suggestion at the time, as was

    array[ .. 2]   // from 0 => 2

and

    array[ .. ]    // from 0 => length

but they were not accepted. I can't remember why, and they seem ok to me. In neither Ruby nor Python are such things in the least confusing. (Although Ruby's use of inclusive and exclusive ranges via .. and ... gets a little confusing - I'd have to have a look in the book now to tell you which was which.)
The problem is that one may need to reference the array-length within an expression; an expression within the brackets.
Ah, of course. Silly me.
 This extends to templates, which
 needed a means to explicitly reference the array length whilst 
 avoiding
 recanting the array itself (or something like that).

 The upshot, I understand, was that the notion of an implicit 
 array-length
 'temporary' seemed appropriate. Unfortunately it was implemented as a
 pseudo-reserved "length", rather than an alternate manner that didn't 
 quite
 shove it up the programmers' proverbial arse
Indeed
Feb 07 2005
prev sibling next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I'm afraid I was out working + book writing when that went in. Very poor 
form indeed. (And you're quite right that it totally takes the legs out 
from Walter's arguments on missing returns.)

:-(

"Kris" <Kris_member pathlink.com> wrote in message 
news:cu8gk2$e0a$1 digitaldaemon.com...
 I'm jumping into this at a somewhat arbitrary point, but the general 
 claim
 Walter (apparently) makes is that

 1) D tries to catch dumb mistakes made by a user
 2) D tries to steer the programmer in the 'right' direction

 Let's see here:

 char[] getLine (char[] s)
 {
 uint length = s.length;
 foreach (uint i, char c; s)
 {
 if (c == '\n')
 length = i;
 }
 return s [0..length];
 }

 The above is just an arbitrary example of the apparent hypocritical 
 nature of
 (1) and (2). The function is supposed to return the subset of its 
 argument only
 as far as a newline.

 Do you see the insidious bug there? Many of you will not, so I'll 
 spell it out:

 Walter added a very subtle pseudo-reserved word, that's only used when 
 it comes
 to arrays. Yes, it's the word "length". When used within 
 square-brackets, it
 always means "the length of the enclosing array". Of course, this 
 overrides any
 other variable that happens to be called "length". Naturally, no 
 warning is
 emitted.

 This would perhaps not be so bad if the pseudo-reserved word were
 "implicitArrayLength" or something like that. But NO! Walter uses an
 undecorated, and exceptionally common variable name instead. Oh; and 
 this was
 introduced to ease the implementation of certain templates - on 
 technical
 merits. Oh! And Walter feels this pseudo-reserved name should /not/ 
 change from
 "length" to a 'decorated' version instead.

 Any talk about D with regard to (1) and (2) is moot, when D clearly 
 injects
 subtle and glorious ways to f%ck the programmer in simple, and shall I 
 say
 common, ways.

 Fair warning :-)

 I fully sympathize with your head-beating-wall exercise, Matthew. Keep 
 it up!





 In article <cu44i1$739$1 digitaldaemon.com>, Walter says...
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3v58$3c3$1 digitaldaemon.com...
 A maintenance engineer is stymied by *both*
 forms, and confused contrarily: the first looks like a bug but may 
 not
 be, the second is a bug but doesn't look like it. The only form that
 stands up to maintenance is something along the lines of what 
 Derek's
 talking about:

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
return v.z;
         }

         throw logic_error("This function has encountered a situation
 which contradicts its design and/or the design of the software 
 within
 which it resides");

         return 0;
     }
From a C/C++ perspective, you're right, this is the only correct solution. From a D perspective, however, I submit that the first example is not confusing. There is no falling off the end in D functions, as an exception would be thrown. The only returns that can happen are explicitly there with return statements. The maintenance engineer will know this as surely as he knows that after an assert(p) that p is not null.

I agree this is a different way of thinking about the code, that coming from a solid C/C++ background it might be a bit off-putting.
 This is what I also do in such cases, and I believe (and have 
 witnessed)
 it being a widely practiced technique.
Yes, and I've written magazine articles and done lectures pushing exactly that. It's what one has to do with C/C++.
You're
 keen to mould D with a view to catering for, or at least mitigating 
 the
 actions of,  the lowest common denominators of the programming gene
 pool.
I've seen this kind of error written by experts, not just the lowest common denominator. If D cannot prevent an error, it should try to mitigate the damage.
Yet you seem decidely uninterested in addressing the concerns of
 large scale and/or commercial and/or large-teams and/or long-lasting
 codebases. How can this attitude help D to prosper?
I have to disagree with this. Many features of D are the result of many long conversations with program development managers. They need positive mechanisms in the language to prevent or at least mitigate the effects of common, very human, programming mistakes. C and C++ are seriously deficient in this area. That you disagree with the efficacy of one the solutions does not at all mean I am uninterested. A very large part of D is providing support for writing robust code.
 Your measure adds an indeterminately timed exception fire, in the 
 case
 that a programmer doesn't add a return 0. That's great, so far as it
 goes. But here's the fly in your soup: what's to stop them adding 
 the
 return 0?
Absolutely nothing. But as I wrote before, if he's looking at fixing the code after the exception fired, he knows he's dealing with a bug that needs fixing. In the case of the compiler error message, there is not necessarily a bug there, so the easy temptation is to throw in a return of some arbitrary value.

Is that bad programming technique? Absolutely. Does it happen anyway? Yes, it does. I've been in code review meetings and listened to the excuses for it. Those kinds of things are hard to pick up in a code review, so removing the cause of it and trying to mitigate the damage is of net benefit.

Let's put it this way, here are the choices (numbers pulled out of dimension X):

1) A bug catching feature that 90% of the time will cause the programmer to write correct code, but 10% of the time will result in code that has an insidious, nasty, hard to reproduce & find bug.

2) A bug catching feature that 70% of the time will cause the programmer to write correct code, but the 30% that get it wrong results in code that, when it fails, fails cleanly, in an easy to reproduce, find and therefore fixable manner.

It's a judgement call, not dogma. I'd rather have (2), and I believe that (2) is better for the long term success of a code base. I do not like (1), because the penalties of such bugs, even though they are less frequent, are so severe they overshadow everything else.
Feb 07 2005
prev sibling next sibling parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
I agree.  IMHO, "length" should either be:
	- reserved; it should be an error to use it as a variable name, etc.
	- an error to use within a slice (in other words, if the scope contains 
a "length" variable and you want to use "length" within a slice - 
whichever you mean, you get an error!)
	- removed, and either not replaced or replaced with a symbol.

I think you're advocating 2 or 3, which I agree with.  If neither of 
those, 1 would be a good choice.  But, I think it is fully clear that 
the current situation is a problem.

-[Unknown]


 The above is just an arbitrary example of the apparent hypocritical nature of
 (1) and (2). The function is supposed to return the subset of its argument only
 as far as a newline. 
Feb 07 2005
prev sibling parent reply Ben Hinkle <Ben_member pathlink.com> writes:
Walter added a very subtle pseudo-reserved word, that's only used when it comes
to arrays. Yes, it's the word "length". When used within square-brackets, it
always means "the length of the enclosing array". Of course, this overrides any
other variable that happens to be called "length". Naturally, no warning is
emitted.
Two things:

1) I wouldn't mind seeing that feature removed or changed so that it works with overloaded opIndex and friends. I avoid using it. It's a piece of syntactic salt (looks like sugar but doesn't taste quite right).

2) A dlint program could flag shadowed variables called "length".

-Ben
Feb 08 2005
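[A first cut of the check Ben suggests could be as simple as a substring scan. A sketch in the D of the time; flagShadowedLength is a hypothetical helper, and a real dlint would consult the compiler front end's symbol table rather than match text:]

```d
import std.string;

// Return the 1-based line numbers that declare a variable named
// "length", which the implicit slice-bracket length would shadow.
int[] flagShadowedLength (char[][] lines)
{
    int[] hits;
    foreach (int n, char[] line; lines)
    {
        // Deliberately naive: "int length" also matches "uint length",
        // and misses other declaration forms entirely.
        if (std.string.find(line, "int length") != -1)
            hits ~= n + 1;
    }
    return hits;
}
```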
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <Ben_member pathlink.com> wrote in message 
news:cuafb1$22ud$1 digitaldaemon.com...
Walter added a very subtle pseudo-reserved word, that's only used 
when it comes
to arrays. Yes, it's the word "length". When used within 
square-brackets, it
always means "the length of the enclosing array". Of course, this 
overrides any
other variable that happens to be called "length". Naturally, no 
warning is
emitted.
Two things:

1) I wouldn't mind seeing that feature removed or changed so that it works with overloaded opIndex and friends. I avoid using it. It's a piece of syntactic salt (looks like sugar but doesn't taste quite right).

2) A dlint program could flag shadowed variables called "length".
It's got to be 1. Kris is right that this is just a crazy idea. (While I disagree with the return value thingy, I can actually see some sense in it. With this, though, it's just plain wrong.) Can someone enlighten me as to why $ was rejected?
Feb 08 2005
parent reply Derek <derek psych.ward> writes:
On Wed, 9 Feb 2005 06:44:54 +1100, Matthew wrote:


[snip]
 
 Can someone enlighten me as to why $ was rejected?
I believe it was being saved for later, just in case a better usage came about. -- Derek Melbourne, Australia
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek" <derek psych.ward> wrote in message 
news:1q7ahceywgxwl.156v9zjgzkizw$.dlg 40tude.net...
 On Wed, 9 Feb 2005 06:44:54 +1100, Matthew wrote:


 [snip]

 Can someone enlighten me as to why $ was rejected?
I believe it was being saved for later, just in case a better usage came about.
Fair enough. Though one might observe that even if a different behaviour was forthcoming, it may well be orthogonal to an array range expression. [OT] btw, if Walter was to build regex into the language, like in Ruby, I'd go for that. :-)
Feb 08 2005
parent "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cub9la$mbe$1 digitaldaemon.com...
 [OT] btw, if Walter was to build regex into the language, like in Ruby,
 I'd go for that. :-)
Doing that, as I argued elsewhere, would break some of the core principles on which the D grammar is based. But something close to it can be achieved, see the new regexp and string functions in DMD 0.114.
Mar 02 2005
prev sibling parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
I'd just like to say three more things and then I'll shut up since no 
one asked me anyway:

1. I never said whether I actually think a warning of some sort would be 
nice.  To say it now: I would like one indeed, although lint would be okay.

2. Some of you live in a vacuum where only bad programmers make mistakes 
and where maintenance programmers find every flaw there is.  Read this: 
Customers see bugs in software all of the time.  It happens.  Get over 
it.  Now make it so when they see it they don't use other software 
because it screws them over.

3. The code below is what I hate about this feature in some compilers. 
If I throw an exception I *should not* have to put a return (but have to 

detect unreachable code, why can't it detect that?

-[Unknown]

     int foo(CollectionClass c, int y)
     {
         foreach (Value v; c)
         {
             if (v.x == y)
                 return v.z;
         }
 
         throw logic_error("This function has encountered a situation 
 which contradicts its design and/or the design of the software within 
 which it resides");
 
         return 0;
     }
Feb 05 2005
prev sibling next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cu3rt4$ra$1 digitaldaemon.com...
 "Unknown W. Brackets" <unknown simplemachines.org> wrote in message
 news:cu25p2$1jbc$1 digitaldaemon.com...
 Walter says: if it's compile time, programmers will patch it without
 thinking.  That's bad.  So let's use runtime.
That's essentially right.
[snip] I don't know where to jump into this thread so I'll jump here. Walter, would it be possible to get a "lint" program that flags dubious constructs? I'd like to working something like that into the D emacs mode so that typical errors can be flagged as the source code is written instead - but at the user's request. At work we use a tool like this called mlint - obviously based on the old lint program and it does wonders for cleaning up code and suggesting more efficient constructs etc. We have it integrated into all of our MATLAB editor tools so that you just hit a button and it highlights all the lines with recommendations and what the recommendations are. I hope that given D's lack of preprocessor and simpler syntax a dlint program would be able to generate some very useful recommendations. -Ben
Feb 05 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:cu40vq$4ne$1 digitaldaemon.com...
 I don't know where to jump into this thread so I'll jump here. Walter,
would
 it be possible to get a "lint" program that flags dubious constructs? I'd
 like to working something like that into the D emacs mode so that typical
 errors can be flagged as the source code is written instead - but at the
 user's request.
 At work we use a tool like this called mlint - obviously based on the old
 lint program and it does wonders for cleaning up code and suggesting more
 efficient constructs etc. We have it integrated into all of our MATLAB
 editor tools so that you just hit a button and it highlights all the lines
 with recommendations and what the recommendations are. I hope that given
D's
 lack of preprocessor and simpler syntax a dlint program would be able to
 generate some very useful recommendations.
I don't think it would be hard to morph the D front end code into a lint. Such a program could also be configurable by the end user to enforce the local coding style guide. I think it could be a valuable tool. Anyone looking for a D project to do? <g>
Feb 05 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 I don't think it would be hard to morph the D front end code into a lint.
 Such a program could also be configurable by the end user to enforce the
 local coding style guide. I think it could be a valuable tool. Anyone
 looking for a D project to do? <g>
I hope someone picks this up. Some of the possible rules that come to mind:

1) casting arrays to pointers vs using the ptr property
2) unused variables
3) dead code
4) returns at end of functions
5) switch statements without default clauses
6) checks for compilation errors
7) properties with getters and setters that mismatch types
8) replace simple for loops with foreach
9) comparing an object with null using ==, < or >
10) replace memmove/memcpy with slice assignment
11) using a floating-point or char variable that has not been explicitly initialized (I know D initializes all variables, but the initial values for some types are chosen precisely to force programmers to initialize them)

maybe some others...
Feb 06 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Ben Hinkle wrote:

 Some of the possible rules that come to mind:
 5) switch statements without default clauses
Currently this throws an Error at run-time, for non-release builds:

"Error: Switch Default"

This Exception is not thrown with -release. (like ArrayBoundsError)
 6) checks for compilation errors
Using -c sort of works, with the side effect of generating objects.
 9) comparing an object with null using ==, < or >
Hopefully this will be made into a "hard" compilation error, even ? --anders
Feb 06 2005
parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message 
news:cu5d4b$19r6$1 digitaldaemon.com...
 Ben Hinkle wrote:

 Some of the possible rules that come to mind:
 5) switch statements without default clauses
Currently this throws an Error at run-time, for non-release builds:

"Error: Switch Default"

This Exception is not thrown with -release. (like ArrayBoundsError)
That's why it would make a good candidate for dlint. It is legal code but potentially dangerous. I can sympathize with Walter's argument that "potentially dangerous" shouldn't necessarily be "illegal" (maybe I shouldn't put words in his mouth but hopefully I'm not too far off the mark). A tool to find potentially dangerous code should be available.
 6) checks for compilation errors
Using -c sort of works, with the side effect of generating objects.
Plus, dlint should generate output that is easy for other tools to parse. The compiler generates errors when it has to, but its main job is to compile code. I wouldn't expect it to have a nice interface for dlint-like uses. Plus I would bet the output format would change from compiler to compiler, so each tool would have to have special logic for parsing the output of every supported compiler.
 9) comparing an object with null using ==, < or >
Hopefully this will be made into a "hard" compilation error, even ?
Actually I would be satisfied if dlint caught it. I just worry about my ported Java code that has these things floating around. I've been scanning the code manually or with grep but without some tool I just don't have confidence that I've found all the bugs. To me it doesn't really matter if that tool is the compiler or something else.
 --anders
Feb 06 2005
prev sibling parent reply zwang <nehzgnaw gmail.com> writes:
Ben Hinkle wrote:
I don't think it would be hard to morph the D front end code into a lint.
Such a program could also be configurable by the end user to enforce the
local coding style guide. I think it could be a valuable tool. Anyone
looking for a D project to do? <g>
I hope someone picks this up. Some of the possible rules that come to mind:

1) casting arrays to pointers vs using the ptr property
2) unused variables
3) dead code
4) returns at end of functions
5) switch statements without default clauses
6) checks for compilation errors
7) properties with getters and setters that mismatch types
8) replace simple for loops with foreach
9) comparing an object with null using ==, < or >

maybe some others...

10) replace memmove/memcpy with slice assignment
11) using a floating point or char variable that has not been explicitly initialized (I know D initializes all variables, but the initial values for some types are chosen to usually force programmers to initialize variables)
and some more:

12) detect unused parameters/methods/members/labels/code blocks
13) suggest the use of "with" statement when applicable
14) list classes with high cyclomatic complexity measurements
15) list classes with getters/setters only
16) rant on use of uninstantiated objects
17) highlight ill-named identifiers
...

wait a sec. is there anyone passionate enough to launch the dlint project?
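For rule 13, the kind of rewrite dlint might suggest (Point is a made-up class for illustration):

```d
class Point { int x, y; }

void demo()
{
    Point p = new Point;

    // before: the common prefix is repeated on every line
    p.x = 1;
    p.y = 2;

    // after: the with statement factors the prefix out
    with (p)
    {
        x = 10;
        y = 20;
    }
}
```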
Feb 06 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 14) list classes with high cyclomatic complexity measurements
Interesting idea but it could be hard to compute and hard to know what to do about it. Can you see a code analysis program saying something like "Your code is too complicated. Make it simpler." Or are you hinting at a dangerous problem involving cyclic references?
 15) list classes with getters/setters only
why is this one a problem?
 16) rant on use of uninstantiated objects
I like it - a lint with attitude. For those C++ programmers who don't read the documentation carefully enough.
 17) highlight ill-named identifiers
Style-guides would be nice - as long as they are very customizable and optional!
 ...
 wait a sec.  is there anyone passionate enough to launch the dlint 
 project?
If it's still not done by the time MinWin gets settled (still many months) then I will definitely give it a shot. I could use a dlint ASAP.
Feb 06 2005
parent zwang <nehzgnaw gmail.com> writes:
Ben Hinkle wrote:
14) list classes with high cyclomatic complexity measurements
Interesting idea but it could be hard to compute and hard to know what to do about it. Can you see a code analysis program saying something like "Your code is too complicated. Make it simpler." Or are you hinting at a dangerous problem involving cyclic references?
I was just thinking about a list of classes sorted by their complexity, showing potential directions of high-level refactoring.
15) list classes with getters/setters only
why is this one a problem?
It's not. I don't know why I wrote it down :p
16) rant on use of uninstantiated objects
I like it - a lint with attitude. For those C++ programmers who don't read the documentation carefully enough.
17) highlight ill-named identifiers
Style-guides would be nice - as long as they are very customizable and optional!
...
wait a sec.  is there anyone passionate enough to launch the dlint 
project?
If it's still not done by the time MinWin gets settled (still many months) then I will definitely give it a shot. I could use a dlint ASAP.
I suppose MinWin is more anticipated by the D community. Keep up the good work :)
Feb 06 2005
prev sibling next sibling parent reply Derek <derek psych.ward> writes:
On Sat, 5 Feb 2005 17:27:07 -0800, Walter wrote:

 "Unknown W. Brackets" <unknown simplemachines.org> wrote in message
 news:cu25p2$1jbc$1 digitaldaemon.com...
 Walter says: if it's compile time, programmers will patch it without
 thinking.  That's bad.  So let's use runtime.
That's essentially right. I'll add one more example to the ones you presented:

int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
}
And the coder probably should have done something more like ...

int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
    throw new Exception("foo: given an impossible value '" ~ toString(y) ~ "'");
}
 By the nature of the program I'm writing, "y" is guaranteed to be within c.
 Therefore, there is only one return from the function, and that is the one
 shown. But the compiler cannot verify this. You recommend that the compiler
 complain about it. 
You use the word 'complain', whereas I'd tend to use the phrase 'alert the coder to a potential problem'.
I, the programmer, know this can never happen, and I'm
 in a hurry with my mind on other things and I want to get it to compile and
 move on, so I write:
 
 int foo(CollectionClass c, int y)
 {
     foreach (Value v; c)
     {
         if (v.x == y)
             return v.z;
     }
     return 0;
 }
For which the peer review team should give you a smack on the hand. Most commercial coders do not work in a vacuum. The lone coder is always going to be a minority. Most coders have multiple others reading and critiquing their code long before it gets into commercial release.
 I'm not saying you would advocate "fixing" the code this way. I don't
 either. Nobody would. I am saying that this is often how real programmers
 will fix it. 
Is that 'real' as opposed to 'unreal'? Or are you implying that 'real' also covers the majority of commercial coders working in a typical organisation that cares about quality?
I know this because I see it done, time and again, in response
 to compilers that emit such error messages. This kind of code is a disaster
 waiting to happen. No compiler will detect it. It's hard to pick up on a
 code review.
Why is that? Common checklist items for functions include ...

** Does every possible return value meet the contract for the function?
** Does the default return value meet the function contract?
** Does the default return value imply an error situation, and if so, does the caller respond to the error value?
** Is the default return value also one of the possible non-default values, and if so, is it allowed for in the function requirements?
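The first two checklist items map directly onto D's contract syntax, which makes them machine-checkable in non-release builds. A sketch (a hypothetical function, not from the thread; newer compilers spell the "body" keyword "do"):

```d
// The out contract is checked on every return path, so a careless
// "return 0;" that violates the function's stated contract trips the
// assert instead of slipping through code review
int lookup(int[] table, int key)
out (result)
{
    assert(result >= 0);
}
body
{
    foreach (v; table)
    {
        if (v == key)
            return v;
    }
    assert(0, "key not present in table");
}
```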
 Testing isn't going to pick it up. 
Testing *may not* pick it up. Sometimes it does.
It's an insidious, nasty kind of bug. 
Amen to that brother.
It's root cause is not bad programmers, but a compiler error
 message that encourages writing bad code.
No, its root cause *is* bad programmers. 'Bad' in the sense that they are not responsible coders. The compiler message would remind responsible coders to do the right thing; to others it just pisses them off.
 Instead, having the compiler insert essentially an assert(0); where the
 missing return is means that if it isn't a bug, nothing happens, and
 everyone is happy. If it is a bug, the assert gets tripped, and the
 programmer *knows* it's a real bug that needs a real fix, and he won't be
 tempted to insert a return of an arbitrary value "because it'll never be
 executed anyway".
Actually, it's more likely that it is not the programmer who finds out about it, but one of his customers. Then the programmer *and* the customer are not so happy. What is so wrong about trying to find problems as early as possible? Why wait until the customer rings you up to complain?
 This is the point I have consistently failed to make clear.
You have made your point very clear. I really, really, really do understand your point of view. I just don't agree with you that it is useful. -- Derek Melbourne, Australia
Feb 05 2005
parent "Walter" <newshound digitalmars.com> writes:
"Derek" <derek psych.ward> wrote in message
news:1iq2ryvz6s0l9.id90ayeemth4$.dlg 40tude.net...
 You have made your point very clear. I really, really , really do
 understand your point of view. I just don't agree with you that is useful.
It's fair to disagree. I just want to get across the reasoning, so the decision doesn't look arbitrary. Now that I've succeeded in that, I'll retire for the moment from this debate <g>.
Feb 05 2005
prev sibling next sibling parent reply Scott Wood <scott buserror.net> writes:
On Sat, 5 Feb 2005 17:27:07 -0800, Walter <newshound digitalmars.com> wrote:
 I'm not saying you would advocate "fixing" the code this way. I don't
 either. Nobody would. I am saying that this is often how real programmers
 will fix it. I know this because I see it done, time and again, in response
 to compilers that emit such error messages. This kind of code is a disaster
 waiting to happen. No compiler will detect it. It's hard to pick up on a
 code review. Testing isn't going to pick it up. It's an insidious, nasty
 kind of bug. It's root cause is not bad programmers, but a compiler error
 message that encourages writing bad code.
No, its root cause *is* bad programmers. A good programmer would not interpret a missing-return error as an encouragement to mindlessly stick a "return 0;" in the code, but rather an indication that there's a code path that isn't properly terminated. Now, it may be the case that there are a lot of bad programmers out there, but that doesn't make it the compiler's fault[1], nor does it mean that those who would not commit this particular offense should have a useful compile-time diagnostic denied to them. What if the error message were "missing return statement or assert(0)" rather than just "missing return statement"?
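Under Scott's proposed diagnostic, the honest version of Walter's example would state the impossibility explicitly instead of papering over it with a dummy return (Value is sketched here as a struct to keep the example self-contained):

```d
struct Value { int x, z; }

int foo(Value[] c, int y)
{
    foreach (v; c)
    {
        if (v.x == y)
            return v.z;
    }
    // the programmer's claim, on the record: this path is unreachable;
    // if the claim is ever wrong, an assert failure points straight at it
    assert(0);
}
```

The compiler treats assert(0) as terminating the path, so no missing-return complaint is needed, and the reader can tell deliberate unreachability from an oversight.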
 Instead, having the compiler insert essentially an assert(0); where the
 missing return is means that if it isn't a bug, nothing happens, and
 everyone is happy.
The person reading the code isn't happy when he can't tell whether it was an error that simply hadn't been caught in testing (or that was, but he's unsure of which piece of code to blame), or an intentional implicit assert(0). The person who mainly gets missing-return errors for cases where there really should be a return but it was forgotten (such as when a function is changed from returning void to returning non-void, or a non-void-returning stub function that was left completely empty) is also not happy that he doesn't find the bug until run-time testing.
 If it is a bug, the assert gets tripped, and the programmer *knows*
 it's a real bug that needs a real fix,
Only if it shows up in testing. If it's on a rare but known-possible execution path, where the programmer would have realized the need for a proper return statement (or other action) if he had been shown an error message, the bug gets found later. -Scott [1] This isn't in the same class as array bounds checking, garbage collection, etc. where the programmer mistakes that they avoid arise out of overlooking something (which all humans do from time to time); in this case, the programmer looked directly at the problem and decided to do something stupid.
Feb 06 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Scott Wood" <scott buserror.net> wrote in message
news:slrnd0cocc.5nk.nospam odin.buserror.net...
 No, its root cause *is* bad programmers.  A good programmer would not
 interpret a missing-return error as an encouragement to mindlessly
 stick a "return 0;" in the code, but rather an indication that
 there's a code path that isn't properly terminated.
Back when I used to work for Boeing, a major focus of attention was making it impossible to cross the hydraulic lines to critical flight controls. There have been many crashes and near disasters from this happening. Measures taken include:

1) making sure the lines are not long enough to connect to the wrong port
2) using different diameter lines and fittings for each port
3) on one port use left-hand threads, on the other use right-hand threads
4) warnings and labels
5) test procedures to verify correct hookup
6) preflight checks to verify correct hookup

Real life FAA certified mechanics sometimes went to astonishing lengths to idiotically cross the lines. You can't rely on better training, more certification, etc., eliminating the problem. You've got to design the machine to minimize the potential of mechanics getting it wrong. This kind of effort to eliminate tempting sources of error is pervasive in jetliner design.

We all have to work with "bad" programmers; we can't wish them away or assume that better training will transform them. Even great programmers make silly mistakes. I tried to design D in a way that doing the right thing is *less* work than doing the wrong thing. This is because the wrong things often happen because they are easier. Sure, a determined mechanic could still cross the lines. But he's going to have to go through a great deal of effort to do it, and hopefully at some point the thought will cross his mind "this is too hard, I must be doing something wrong."

P.S. There was a crash a few years back of a fighter (not a Boeing design). The pitch controls were reversed. Reversing the controls on that bird was as easy as moving a control rod from one spot to the wrong one. Their ace mechanic did it by mistake, and the pilot didn't do his preflight checkout right. Both got blamed. Me, I blame the design of the flight controls.

http://www.aviationtoday.com/sia/20010801.htm
http://www.ntsb.gov/ntsb/brief.asp?ev_id=20001212X24781&key=1
Feb 06 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 6 Feb 2005 13:35:14 -0800, Walter wrote:

 "Scott Wood" <scott buserror.net> wrote in message
 news:slrnd0cocc.5nk.nospam odin.buserror.net...
 No, its root cause *is* bad programmers.  A good programmer would not
 interpret a missing-return error as an encouragement to mindlessly
 stick a "return 0;" in the code, but rather an indication that
 there's a code path that isn't properly terminated.
Back when I used to work for Boeing, a major focus of attention was making it impossible to cross the hydraulic lines to critical flight controls. There have been many crashes and near disasters from this happening. Measures taken include:

1) making sure the lines are not long enough to connect to the wrong port
2) using different diameter lines and fittings for each port
3) on one port use left-hand threads, on the other use right-hand threads
4) warnings and labels
5) test procedures to verify correct hookup
6) preflight checks to verify correct hookup

Real life FAA certified mechanics sometimes went to astonishing lengths to idiotically cross the lines. You can't rely on better training, more certification, etc., eliminating the problem. You've got to design the machine to minimize the potential of mechanics getting it wrong. This kind of effort to eliminate tempting sources of error is pervasive in jetliner design.

We all have to work with "bad" programmers; we can't wish them away or assume that better training will transform them. Even great programmers make silly mistakes. I tried to design D in a way that doing the right thing is *less* work than doing the wrong thing. This is because the wrong things often happen because they are easier. Sure, a determined mechanic could still cross the lines. But he's going to have to go through a great deal of effort to do it, and hopefully at some point the thought will cross his mind "this is too hard, I must be doing something wrong."

P.S. There was a crash a few years back of a fighter (not a Boeing design). The pitch controls were reversed. Reversing the controls on that bird was as easy as moving a control rod from one spot to the wrong one. Their ace mechanic did it by mistake, and the pilot didn't do his preflight checkout right. Both got blamed. Me, I blame the design of the flight controls.

http://www.aviationtoday.com/sia/20010801.htm
http://www.ntsb.gov/ntsb/brief.asp?ev_id=20001212X24781&key=1
It appears to me then that you now have DMD informing the pilot at 30,000 feet that the lines are crossed and the system is shutting down, rather than letting maintenance people on the ground know before the plane takes off. -- Derek Melbourne, Australia 7/02/2005 10:06:54 AM
Feb 06 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message
news:cu6835$2cmj$1 digitaldaemon.com...
 It appears to me then that you now have DMD informing the pilot at 30,000
 feet that the lines are crossed and the system is shutting down, rather
 than letting maintenance people on the ground know before the plane takes
 off.
Actually, the point of that story was that one cannot assume away "bad" programmers, one must assume their existence and design to prevent errors or mitigate the damage they can cause. I view the compiler error in this case as akin to "Hey, hydraulic fluid was leaking. A couple of the lines weren't hooked up, so I just screwed them into a couple of ports nearby. It doesn't leak anymore!" It's a mistake to assume that the mechanic will go read the documentation and hook them to the correct port. Sooner or later, some mechanic will do the easiest "fix" possible, so it's very, very important to design so that the easiest fix is the correct one. The solution you use, while very correct, is not the easiest one. I wish all programmers were as careful as you obviously are.
Feb 06 2005
parent sai <sai_member pathlink.com> writes:
I think Walter's stand on 'missing return' is an opposite extreme of java 
checked exceptions enforcement.

I mean, Java enforces checked exceptions. Many people didn't like it;
Walter didn't like it, he hated it, and believed that the compiler
should never enforce certain things, including 'missing returns'.

But not checking for missing return statements is just too extreme.
People hated the Java compiler for its over-involvement,
and now they might hate D for its under-involvement!!

Just my opinion
Sai
Feb 06 2005
prev sibling parent Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 "Unknown W. Brackets" <unknown simplemachines.org> wrote in message
 news:cu25p2$1jbc$1 digitaldaemon.com...
 
Walter says: if it's compile time, programmers will patch it without
thinking.  That's bad.  So let's use runtime.
That's essentially right. I'll add one more example to the ones you presented:

int foo(Collection c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
}

By the nature of the program I'm writing, "y" is guaranteed to be within c. Therefore, there is only one return from the function, and that is the one shown. But the compiler cannot verify this. You recommend that the compiler complain about it. I, the programmer, know this can never happen, and I'm in a hurry with my mind on other things and I want to get it to compile and move on, so I write:

int foo(CollectionClass c, int y)
{
    foreach (Value v; c)
    {
        if (v.x == y)
            return v.z;
    }
    return 0;
}
Hey, hey, one really should put an assert(0) there! And then I remembered that a year ago I would've put the return(0) there myself, if all it was for was to shut up the compiler.
 I'm not saying you would advocate "fixing" the code this way. I don't
 either. Nobody would. I am saying that this is often how real programmers
 will fix it. I know this because I see it done, time and again, in response
 to compilers that emit such error messages. This kind of code is a disaster
 waiting to happen. No compiler will detect it. It's hard to pick up on a
 code review. Testing isn't going to pick it up. It's an insidious, nasty
 kind of bug. It's root cause is not bad programmers, but a compiler error
 message that encourages writing bad code.
 
 Instead, having the compiler insert essentially an assert(0); where the
 missing return is means that if it isn't a bug, nothing happens, and
 everyone is happy. If it is a bug, the assert gets tripped, and the
 programmer *knows* it's a real bug that needs a real fix, and he won't be
 tempted to insert a return of an arbitrary value "because it'll never be
 executed anyway".
 
 This is the point I have consistently failed to make clear.
You got me. Now I see it your way!
Mar 08 2005
prev sibling next sibling parent reply "Charles" <no email.com> writes:
I can see both your points.  And I do not want to gang up on Walter but

 Q: Do you think driving on the left-hand side of the road is more or
 less sensible than driving on the right?
 A: When driving on the left-hand side of the road, be careful to monitor
 junctions from the left.
represents a lot of responses from Walter when he's dead set on something.

I agree that putting in "shut-up" code can indeed lead to more bugs, like Walter was saying, but in this case I don't think that a 'return' statement is one of them. Using C/C++ it's always been an error on my compilers to not have a return statement, and it's _never_ been a problem (it probably saved many bugs). Have to agree 100% with Matthew on that one.

Now a switch with no default and no matching 'case'? I can see the argument for that. I think that is a good example of 'shut-up code' doing harm, where it's better caught at runtime than at compile time. But a function is almost GUARANTEED not to run right with a missing return statement, so best to catch it at compile time.

Charlie

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu23g5$1hh3$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they are, they may or may not appear in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious
 manner.
So do I. But this statement is too bland to be worth anything. What's is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable
 amount
 of testing (or, more horribly, deployment time) to find them, is
 simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
Disagree.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."
Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.
 Will those handful % of better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl,
 design
 reviews, code reviews, documentation, refactoring, unit testing,
 system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
 But I'm not going to argue point by point with your post, since you
 lost
 me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a little
 tried
 'good idea', like C++'s exception specifications or, in fear of
 drawing
 unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy, as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up of booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.
All of this is of virtually no relevance to the topic under discussion
 There's some growing thought that even static type checking is an
 emperor
 without clothes, that dynamic type checking (like Python does) is more
 robust and more productive. I'm not at all convinced of that yet <g>,
 but
 it's fun seeing the conventional wisdom being challenged. It's good
 for all
 of us.
I'm with you there.
 My position is simply that compile-time error detection is better
 than
 runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
Nothing is *always* true. That's kind of one of the bases of my thesis.
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a
 bad
 program from continuing to function once it is established to be bad.
 It
 doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the the source of it, meaning it will be easier to reproduce, find and correct.
Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.
 Depending
 on the vaguaries of its operating environment, it may well just keep
 going bad, in the same (hopefully very short) amount of time, again
 and
 again and again. The system's not being (further) corrupted, but it's
 not getting anything done either.
One of the Mars landers went silent for a couple days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.
Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking. At no time have I ever said such a thing.
 On airliners, the self detected faults trigger a dedicated circuit
 that
 disables the faulty computer and engages the backup. The last, last,
 last
 thing you want the autopilot on an airliner to do is execute a return
 0;
 some programmer threw in to shut the compiler up. An exception thrown,
 shutting down the autopilot, engaging the backup, and notifying the
 pilot is
 what you'd much rather happen.
Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have addressing.
 It's clear, or seems to to me, that this issue, at least as far as
 the
 strictures of D is concerned, is a balance between the likelihoods
 of:
     1.    producing a non-violating program, and
     2.    preventing a violating program from continuing its
 execution
 and, therefore, potentially wreck a system.
There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.
Absolutely. But that is not, in and of itself, sufficient justification for ditching compile detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.
 You seem to be of the opinion that the current situation of missing
 return/case handling (MRCH) minimises the likelihood of 2. I agree
 that
 it does so.

 However, contrarily, I assert that D's MRCH minimises the likelihood
 of
 producing a non-violating program in the first place. The reasons are
 obvious, so I'll not go into them. (If anyone's cares to disagree, I
 ask
 you to write a non-trival C++ program in a hurry, disable *all*
 warnings, and go straight to production with it.)

 Walter, I think that you've hung D on the petard of 'absolutism in
 the
 name of simplicity', on this and other issues. For good reasons, you
 won't conscience warnings, or pragmas, or even switch/function
 decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
 return -1; }"). Indeed, as I think most participants will
 acknowledge,
 there are good reasons for all the decisions made for D thus far. But
 there are also good reasons against most/all of those decisions.
 (Except
 for slices. Slices are *the best thing* ever, and coupled with
 auto+GC,
 will eventually stand D out from all other mainstream languages.<G>).
Jan Knepper came up with the slicing idea. Sheer genius!
Truly
 Software engineering hasn't yet found a perfect language. D is not
 perfect, and it'd be surprising to hear anyone here say that it is.
 That
 being the case, how can the policy of absolutism be deemed a sensible
 one?
Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)
? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?
 It cannot be sanely argued that throwing on missing returns is a
 perfect
 solution, any more than it can be argued that compiler errors on
 missing
 returns is. That being the case, why has D made manifest in its
 definition the stance that one of these positions is indeed perfect?
I don't believe it is perfect. I believe it is the best balance of competing factors.
I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.
 I know the many dark roads that await once the tight control on the
 language is loosened, but the real world's already here, batting on
 the
 door. I have an open mind, and willing fingers to all kinds of
 languages. I like D a lot, and I want it to succeed a *very great
 deal*.
 But I really cannot imagine recommending use of D to my clients with
 these flaws of absolutism. (My hopeful guess for the future is that
 other compiler variants will arise that will, at least, allow
 warnings
 to detect such things at compile time, which may alter the commercial
 landscape markedly; D is, after all, full of a great many wonderful
 things.)
I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.
I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(
 One last word: I recall a suggestion a year or so ago that would
 required the programmer to explicitly insert what is currently
 inserted
 implicitly. This would have the compiler report errors to me if I
 missed
 a return. It'd have the code throw errors to you if an unexpected
 code
 path occured. Other than screwing over people who prize typing one
 less
 line over robustness, what's the flaw? And yet it got no traction
 ....
Essentially, that means requiring the programmer to insert: assert(0); return 0;
That is not the suggested syntax, at least not to the best of my recollection.
 It just seems that requiring some fixed boilerplate to be inserted
 means
 that the language should do that for you. After all, that's what
 computers
 are good at!
LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do. Just in case anyone's missed the extreme illogic of that position, I'll reiterate:

Camp A want behaviour X to be done automatically by the compiler.
Camp B want behaviour Y to be done automatically by the compiler.
X and Y are incompatible, when done automatically.
By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.)

Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear if there's any on-point response to this?
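The "Z done manually" compromise can be sketched in C++ terms. This is only an illustration, not any proposed D syntax: the `unreachable_return` helper and its name are invented here. The programmer writes the fall-off-the-end behaviour explicitly, so the compiler can hard-error on a genuinely missing return (camp A), while the "impossible" path still fails loudly if it is ever executed (camp B).

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical helper, invented for illustration: marks a path the
// programmer believes is unreachable, and fails hard if it ever runs.
[[noreturn]] inline void unreachable_return() {
    assert(!"supposedly unreachable path executed");
    throw std::logic_error("supposedly unreachable path executed");
}

int classify(int i) {
    if (i < 0)  return -1;
    if (i == 0) return 0;
    if (i > 0)  return 1;
    unreachable_return();   // explicit, greppable, and never silent
}
```

Because the helper is `[[noreturn]]`, the compiler accepts the function as returning on all paths; omit the call entirely and a strict compiler errors, which is exactly the compile-time check camp A wants.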
 [My goodness! That was way longer than I wanted. I guess we'll still
 be
 arguing about this when the third edition of DPD's running hot
 through
 the presses ...]
I don't expect we'll agree on this anytime soon.
Agreed
Feb 05 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 I can see both your points.  And I do not want to gang up on Walter 
 but

 represents a lot of responses from Walter when he's dead set on
 something.
Yes. Kind of like calling upon a disinterested god. A night's sleep has intervened, and I tire of smashing my head on the same brick wall, so I'll segue on y'all to say:
 Q: Do you think driving on the left-hand side of the road is more or
 less sensible than driving on the right?
 A: When driving on the left-hand side of the road, be careful to 
 monitor
 junctions from the left.
Given the fact that the majority of people are right-handed, and steering correctly is probably more important than changing gear correctly, driving on the LHS is the better thing. So nya nya from Australasia/Japan/UK to all the rest of the world. Of course, an increasing number of people drive automatics, so it's largely moot. And then there's that bizarre steering wheel change business that you NW hemisphere types enjoy, which totally blows my argument. :-)

"Charles" <no email.com> wrote in message news:cu3747$2foi$1 digitaldaemon.com...
 I can see both your points.  And I do not want to gang up on Walter 
 but

 Q: Do you think driving on the left-hand side of the road is more or
 less sensible than driving on the right?
 A: When driving on the left-hand side of the road, be careful to 
 monitor
 junctions from the left.
represents a lot of responses from Walter when he's dead set on something. I agree that putting in "shut-up" code can indeed lead to more bugs like Walter was saying, but in this case, I don't think that a 'return' statement is one of them. Using C/C++ it's always been an error on my compilers to not have a return statement, and it's _never_ been a problem (probably saved many bugs because of it). Have to agree 100% with Matthew on that one. Now a switch with no default and no matching 'case'? I can see the argument for that. I think that is a good example of 'shut-up code' doing harm, where it's better caught at runtime than at compile time. But it's almost GUARANTEED not to run right with a missing return statement, best to catch it at compile time.

Charlie

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message news:cu23g5$1hh3$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did 
 not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious
 manner.
So do I. But this statement is too bland to be worth anything. What is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on.

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right?
A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable
 amount
 of testing (or, more horribly, deployment time) to find them, is
 simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
Disagree.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."
Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.
 Will those handful % of 
 better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going 
 to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl,
 design
 reviews, code reviews, documentation, refactoring, unit testing,
 system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
 But I'm not going to argue point by point with your post, since 
 you
 lost
 me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a
 little-tried
 'good idea', like C++'s exception specifications or, in fear of
 drawing
 unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.
All of this is of virtually no relevance to the topic under discussion.
 There's some growing thought that even static type checking is an
 emperor
 without clothes, that dynamic type checking (like Python does) is 
 more
 robust and more productive. I'm not at all convinced of that yet 
 <g>,
 but
 it's fun seeing the conventional wisdom being challenged. It's good
 for all
 of us.
I'm with you there.
 My position is simply that compile-time error detection is better
 than
 runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
Nothing is *always* true. That's kind of one of the bases of my thesis.
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a
 bad
 program from continuing to function once it is established to be 
 bad.
 It
 doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.
Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do, except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point: if an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.
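Matthew's two-stage position reads cleanly in C++ terms, where both stages exist side by side. A hedged sketch only; the function name is invented for illustration:

```cpp
#include <cassert>

// What can be proven while compiling is rejected while compiling...
static_assert(sizeof(int) >= 2, "int narrower than the standard allows");

// ...and what cannot be proven is checked, non-ignorably, at runtime.
int checked_divide(int a, int b) {
    assert(b != 0 && "division by zero");   // hard failure, not a silent 0
    return a / b;
}
```

The `static_assert` never reaches the shipped binary at all; the `assert` is the runtime backstop for the case no compiler can decide.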
 Depending
 on the vaguaries of its operating environment, it may well just 
 keep
 going bad, in the same (hopefully very short) amount of time, 
 again
 and
 again and again. The system's not being (further) corrupted, but 
 it's
 not getting anything done either.
One of the Mars landers went silent for a couple days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.
Feb 05 2005
prev sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Sat, 5 Feb 2005 13:34:54 -0600, Charles <no email.com> wrote:
 I can see both your points.  And I do not want to gang up on Walter but

 Q: Do you think driving on the left-hand side of the road is more or
 less sensible than driving on the right?
 A: When driving on the left-hand side of the road, be careful to monitor
 junctions from the left.
represents a lot of responses from Walter when he's dead set on something. I agree that putting in "shut-up" code can indeed lead to more bugs like Walter was saying, but in this case, I don't think that a 'return' statement is one of them. Using C/C++ it's always been an error on my compilers to not have a return statement, and it's _never_ been a problem (probably saved many bugs because of it). Have to agree 100% with Matthew on that one.
The problem is not whether compile time detecting missing returns is good or bad, most would agree that it's good. IIRC the problem is that it's not possible to do it with 100% accuracy, and further, doing it to a degree that approaches 100% requires a lot of code which basically amounts to checking for each and every weird possible combination of factors.
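A sketch of that accuracy problem, in C++ since the thread keeps comparing against it (the function is a made-up example): every path below does return, but proving the two tests exhaustive is beyond straightforward flow analysis, so a strict compiler demands a trailing return it believes might be reachable.

```cpp
// To a human, the two conditions cover every int, so the final line is
// dead code. Flow analysis does not evaluate the predicates, so the
// compiler still insists on a return after the second if.
int sign_bit(int i) {
    if (i < 0)  return 1;
    if (i >= 0) return 0;
    return 0;   // the "shut up" line the compiler demanded; dead today
}
```

This is exactly the combinatorial hole Regan describes: each such pattern needs its own special case in the checker, and there is no end to the patterns.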
 Now a switch with no default and no matching 'case'? I can see the
 argument
 for that. I think that is a good example of 'shut-up code' doing harm,
 where it's better caught at runtime than at compile time. But it's almost
 GUARANTEED not to run right with a missing return statement, best to
 catch
 it at compile time.
We're not simply talking about a missing return. We're talking about a return which:
- The programmer believed was not required, as it would never be executed.
- The compiler complained about (correctly, but seemingly incorrectly).
- The programmer added a 'return 0;' "to shut it up".
- Which was then later executed.
- Causing unexpected behaviour, potentially unnoticed for ...

Regan
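That sequence of events can be shown in miniature, in C++ for concreteness (the enum and function are invented for illustration): when `Op` held only `Add` and `Sub`, the trailing `return 0;` looked unreachable and was added purely to silence the compiler. Later someone appends `Mul` without updating the switch, and the shut-up line starts executing.

```cpp
enum Op { Add, Sub, Mul };   // Mul was appended long after apply() was written

int apply(Op op, int a, int b) {
    switch (op) {
        case Add: return a + b;
        case Sub: return a - b;
        // no case for Mul: it falls through to the line below
    }
    return 0;   // the "shut it up" line, now silently executed for Mul
}
```

Every multiplication now quietly "computes" zero with no diagnostic at all, which is the masked bug Walter warns about; a thrown exception on that path would have surfaced it on the first run.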
Feb 06 2005
prev sibling next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Sat, 5 Feb 2005 20:26:43 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:

<snip>

     Camp A want behaviour X to be done automatically by the compiler
     Camp B want behaviour Y to be done automatically by the compiler. X
 and Y are incompatible, when done automatically.
Not true, assuming:

X == compile time warning/error of the fault
Y == auto-insert of code to cause runtime error for the fault.

As described, the compiler can do both.

<snip>

Regan
Feb 06 2005
prev sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
Disclaimer: Please correct me if I have misrepresented anyone; I
apologise in advance for doing so, it was not my intent.

The following is my impression of the points/positions in this argument:

1. Catching things at compile time is better than at runtime.
  - all parties agree

2. If it cannot be caught at compile time, then a hard failure at runtime  
is desired.
  - all parties agree

3. An error which causes the programmer to add code to 'shut the compiler  
up' causes hidden bugs
  - Walter

Matthew?

4. Programmers should take responsibility for the code they add to 'shut
the compiler up' by adding an assert/exception.
  - Matthew

Walter?

5. The language/compiler should where it can make it hard for the  
programmer to write bad code
  - Walter

Matthew?


IMO it seems to be a disagreement about what happens in the "real world":
Matthew has an optimistic view, Walter a pessimistic view, e.g.

Matthew: If it were a warning, programmers would notice immediately,  
consider the error, fix it or add an assert for protection, thus the error  
would be caught immediately or at runtime.

It seems to me that Matthews position is that warning the programmer at  
compile time about the situation gives them the opportunity to fix it at  
compile time, and I agree.

Walter: If it were a warning, programmers might add 'return 0;' causing  
the error to remain un-detected for longer.

It seems to me that Walters position is that if it were a warning there is  
potential for the programmer to do something stupid, and I agree.

So why can't we have both?

To explore this, an imaginary situation:

- Compiler detects problem.
- Adds code to handle it (hard-fail at runtime).
- Gives notification of the potential problem.
- Programmer either:

  a. cannot see the problem, adds code to shut the compiler up. (causing  
removal of auto hard-fail code)

  b. cannot see the problem, adds an assert (hard-fail) and code to shut  
the compiler up.

  c. sees the problem, fixes it.

if a then the bug could remain undetected for longer.
if b then the bug is caught at runtime.
if c then the bug is avoided.

Without the notification (a) is impossible, so it seems Walters position  
removes the worst case scenario, BUT, without the notification (c) is  
impossible, so it seems Walters position removes the best case scenario  
also.

Of course for any programmer who would choose (b) over (a) 'all the time'  
Matthews position is clearly the superior one, however...

The real question is. In the real world are there more programmers who  
choose (a), as Walter imagines, or are there more choosing (b) as Matthew  
imagines?

Those that choose (a), do they do so out of ignorance, impatience, or  
stupidity? (or some other reason)

If stupidity, there is no cure for stupidity.

If impatience (as Walter has suggested), what do we do? Can we do anything?

If ignorance, then how do we teach them? does auto-inserting the hard fail  
and giving no warning do so? would giving the warning do a better/worse  
job?

eg.

"There is the potential for undefined behaviour here, an exception has  
been added automatically please consider the situation and either: A. add  
your own exception or B. fix the bug."

Regan

On Sat, 5 Feb 2005 20:26:43 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system . If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious
 manner.
So do I. But this statement is too bland to be worth anything. What's is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable
 amount
 of testing (or, more horribly, deployment time) to find them, is
 simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
Disagree.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong even though they know it is wrong and tell others it is wrong, is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language that the experts will say "do as I say, not as I do."
Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.
 Will those handful % of better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl,
 design
 reviews, code reviews, documentation, refactoring, unit testing,
 system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
 But I'm not going to argue point by point with your post, since you
 lost me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a little-tried
 'good idea', like C++'s exception specifications or, in fear of
 drawing unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.
All of this is of virtually no relevance to the topic under discussion.
 There's some growing thought that even static type checking is an
 emperor without clothes, that dynamic type checking (like Python does)
 is more robust and more productive. I'm not at all convinced of that
 yet <g>, but it's fun seeing the conventional wisdom being challenged.
 It's good for all of us.
I'm with you there.
 My position is simply that compile-time error detection is better
 than runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
Nothing is *always* true. That's kind of one of the bases of my thesis.
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset, is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a bad
 program from continuing to function once it is established to be bad.
 It doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.
Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth.

So, again, my position is: checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do, except when that error can be detected at compile time.

Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.
 Depending on the vagaries of its operating environment, it may well
 just keep going bad, in the same (hopefully very short) amount of
 time, again and again and again. The system's not being (further)
 corrupted, but it's not getting anything done either.
One of the Mars landers went silent for a couple of days. Turns out it was a self-detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked up, oh well.
Abso-bloody-lutely spot-on behaviour. What, you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking? At no time have I ever said such a thing.
 On airliners, the self-detected faults trigger a dedicated circuit
 that disables the faulty computer and engages the backup. The last,
 last, last thing you want the autopilot on an airliner to do is
 execute a return 0; some programmer threw in to shut the compiler up.
 An exception thrown, shutting down the autopilot, engaging the backup,
 and notifying the pilot is what you'd much rather happen.
Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.
 It's clear, or seems to me, that this issue, at least as far as the
 strictures of D is concerned, is a balance between the likelihoods of:
     1.    producing a non-violating program, and
     2.    preventing a violating program from continuing its execution
 and, therefore, potentially wrecking a system.
There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.
Absolutely. But that is not, in and of itself, sufficient justification for ditching compile-time detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.
 You seem to be of the opinion that the current situation of missing
 return/case handling (MRCH) minimises the likelihood of 2. I agree
 that it does so.

 However, contrarily, I assert that D's MRCH minimises the likelihood
 of producing a non-violating program in the first place. The reasons
 are obvious, so I'll not go into them. (If anyone cares to disagree, I
 ask you to write a non-trivial C++ program in a hurry, disable *all*
 warnings, and go straight to production with it.)

 Walter, I think that you've hung D on the petard of 'absolutism in
 the name of simplicity', on this and other issues. For good reasons,
 you won't conscience warnings, or pragmas, or even switch/function
 decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
 return -1; }"). Indeed, as I think most participants will acknowledge,
 there are good reasons for all the decisions made for D thus far. But
 there are also good reasons against most/all of those decisions.
 (Except for slices. Slices are *the best thing* ever, and coupled with
 auto+GC, will eventually stand D out from all other mainstream
 languages.<G>)
Jan Knepper came up with the slicing idea. Sheer genius!
Truly
 Software engineering hasn't yet found a perfect language. D is not
 perfect, and it'd be surprising to hear anyone here say that it is.
 That being the case, how can the policy of absolutism be deemed a
 sensible one?
Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)
? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?
 It cannot be sanely argued that throwing on missing returns is a
 perfect solution, any more than it can be argued that compiler errors
 on missing returns is. That being the case, why has D made manifest in
 its definition the stance that one of these positions is indeed
 perfect?
I don't believe it is perfect. I believe it is the best balance of competing factors.
I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.
 I know the many dark roads that await once the tight control on the
 language is loosened, but the real world's already here, batting on
 the door. I have an open mind, and willing fingers to all kinds of
 languages. I like D a lot, and I want it to succeed a *very great
 deal*. But I really cannot imagine recommending use of D to my clients
 with these flaws of absolutism. (My hopeful guess for the future is
 that other compiler variants will arise that will, at least, allow
 warnings to detect such things at compile time, which may alter the
 commercial landscape markedly; D is, after all, full of a great many
 wonderful things.)
I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.
I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(
 One last word: I recall a suggestion a year or so ago that would have
 required the programmer to explicitly insert what is currently
 inserted implicitly. This would have the compiler report errors to me
 if I missed a return. It'd have the code throw errors to you if an
 unexpected code path occurred. Other than screwing over people who
 prize typing one less line over robustness, what's the flaw? And yet
 it got no traction ....
Essentially, that means requiring the programmer to insert:

    assert(0);
    return 0;
That is not the suggested syntax, at least not to the best of my recollection.
 It just seems that requiring some fixed boilerplate to be inserted
 means that the language should do that for you. After all, that's
 what computers are good at!
LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do.

Just in case anyone's missed the extreme illogic of that position, I'll reiterate:

    Camp A want behaviour X to be done automatically by the compiler.
    Camp B want behaviour Y to be done automatically by the compiler.
    X and Y are incompatible, when done automatically.
    By having Z done manually, X and Y are moot, and everything works
    well. (To the degree that D will, then, and only then, achieve
    resultant robustnesses undreamt of.)

Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins.

Less insanely, I'm keen to hear if there's any on-point response to this?
 [My goodness! That was way longer than I wanted. I guess we'll still
 be arguing about this when the third edition of DPD's running hot
 through the presses ...]
I don't expect we'll agree on this anytime soon.
Agreed
Feb 06 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Sounds good to me.

But I suspect Walter will argue that giving the programmer any hint of
the problem will result in them putting something in to shut the
compiler up. At which point I'll have to smash myself in the head with 
my laptop.

"Regan Heath" <regan netwin.co.nz> wrote in message
news:opslstuwzg23k2f5 ally...
 Disclaimer: Please correct me if I have misrepresented anyone; I
 apologise in advance for doing so, it was not my intent.

 The following is my impression of the points/positions in this
 argument:

 1. Catching things at compile time is better than at runtime.
  - all parties agree

 2. If it cannot be caught at compile time, then a hard failure at
 runtime  is desired.
  - all parties agree

 3. An error which causes the programmer to add code to 'shut the
 compiler  up' causes hidden bugs
  - Walter

 Matthew?

 4. Programmers should take responsibility for the code they add to
 'shut the compiler up' by adding an assert/exception.
  - Matthew

 Walter?

 5. The language/compiler should, where it can, make it hard for the
 programmer to write bad code
  - Walter

 Matthew?


 IMO it seems to be a disagreement about what happens in the "real
 world",  IMO Matthew has an optimistic view, Walter a pessimistic
 view, eg.

 Matthew: If it were a warning, programmers would notice immediately,
 consider the error, fix it or add an assert for protection, thus the
 error  would be caught immediately or at runtime.

 It seems to me that Matthew's position is that warning the programmer
 at compile time about the situation gives them the opportunity to fix
 it at compile time, and I agree.

 Walter: If it were a warning, programmers might add 'return 0;'
 causing the error to remain undetected for longer.

 It seems to me that Walter's position is that if it were a warning
 there is potential for the programmer to do something stupid, and I
 agree.

 So why can't we have both?

 To explore this, an imaginary situation:

 - Compiler detects problem.
 - Adds code to handle it (hard-fail at runtime).
 - Gives notification of the potential problem.
 - Programmer either:

  a. cannot see the problem, adds code to shut the compiler up.
 (causing  removal of auto hard-fail code)

  b. cannot see the problem, adds an assert (hard-fail) and code to
 shut  the compiler up.

  c. sees the problem, fixes it.

 if a then the bug could remain undetected for longer.
 if b then the bug is caught at runtime.
 if c then the bug is avoided.

 Without the notification (a) is impossible, so it seems Walter's
 position removes the worst case scenario, BUT, without the
 notification (c) is impossible, so it seems Walter's position removes
 the best case scenario also.

 Of course for any programmer who would choose (b) over (a) 'all the
 time', Matthew's position is clearly the superior one, however...

 The real question is: in the real world, are there more programmers
 who choose (a), as Walter imagines, or more choosing (b), as Matthew
 imagines?

 Those that choose (a), do they do so out of ignorance, impatience, or
 stupidity? (or some other reason)

 If stupidity, there is no cure for stupidity.

 If impatience (as Walter has suggested), what do we do? Can we do
 anything?

 If ignorance, then how do we teach them? Does auto-inserting the hard
 fail and giving no warning do so? Would giving the warning do a
 better/worse job?

 eg.

 "There is the potential for undefined behaviour here, an exception has
 been added automatically please consider the situation and either: A.
 add  your own exception or B. fix the bug."

 Regan

 On Sat, 5 Feb 2005 20:26:43 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they do, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious
 manner.
So do I. But this statement is too bland to be worth anything. What is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be dealt with at runtime. Your response: exceptions are the best way to indicate runtime error. Come on.

Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right?
A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show itself?
 When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
Feb 06 2005
next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 7 Feb 2005 13:27:06 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 Sounds good to me.

 But I suspect Walter will argue that given the programmer any hint of
 the problem will result in them putting something in to shut the
 compiler up.
He already has: it's option (a) from my earlier list, the 'worst case' scenario. I did note your point that people might start using a "return 0;" to avoid the auto assert; I agree it's possible, I just don't think it's very likely.
 At which point I'll have to smash myself in the head with
 my laptop.
I do hope it's one of those new ones that isn't very big and/or solid, not like the one I used to have which survived being run over by a car. Regan
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslstuwzg23k2f5 ally...
 Disclaimer: Please correct me if I have miss-represented anyone, I
 appologise in advance for doing so, it was not my intent.

 The following is my impression of the points/positions in this
 argument:

 1. Catching things at compile time is better than at runtime.
  - all parties agree

 2. If it cannot be caught at compile time, then a hard failure at
 runtime  is desired.
  - all parties agree

 3. An error which causes the programmer to add code to 'shut the
 compiler  up' causes hidden bugs
  - Walter

 Matthew?

 4. Programmers should take responsibilty for the code they add to
 'shut  the compiler up' by adding an assert/exception.
  - Matthew

 Walter?

 5. The language/compiler should where it can make it hard for the
 programmer to write bad code
  - Walter

 Matthew?


 IMO it seems to be a disagreement about what happens in the "real
 world",  IMO Matthew has an optimistic view, Walter a pessimistic
 view, eg.

 Matthew: If it were a warning, programmers would notice immediately,
 consider the error, fix it or add an assert for protection, thus the
 error  would be caught immediately or at runtime.

 It seems to me that Matthews position is that warning the programmer
 at  compile time about the situation gives them the opportunity to fix
 it at  compile time, and I agree.

 Walter: If it were a warning, programmers might add 'return 0;'
 causing  the error to remain un-detected for longer.

 It seems to me that Walters position is that if it were a warning
 there is  potential for the programmer to do something stupid, and I
 agree.

 So why can't we have both?

 To explore this, an imaginary situation:

 - Compiler detects problem.
 - Adds code to handle it (hard-fail at runtime).
 - Gives notification of the potential problem.
 - Programmer either:

  a. cannot see the problem, adds code to shut the compiler up.
 (causing  removal of auto hard-fail code)

  b. cannot see the problem, adds an assert (hard-fail) and code to
 shut  the compiler up.

  c. sees the problem, fixes it.

 if a then the bug could remain undetected for longer.
 if b then the bug is caught at runtime.
 if c then the bug is avoided.
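 Regan's options (a) and (b) differ by a single guarding line. A minimal
 C++ sketch of the two responses (the lookup function and its types are
 hypothetical, invented purely for illustration):

```cpp
#include <cassert>
#include <vector>

struct Value { int x, z; };

// Option (a): code added purely to shut the compiler up --
// the bogus 0 flows onward silently if the loop ever falls through.
int lookup_a(const std::vector<Value>& c, int y) {
    for (const Value& v : c)
        if (v.x == y)
            return v.z;
    return 0;  // masks the unhandled case
}

// Option (b): the same shut-up line, but guarded by a hard failure,
// so the "impossible" path announces itself at runtime.
int lookup_b(const std::vector<Value>& c, int y) {
    for (const Value& v : c)
        if (v.x == y)
            return v.z;
    assert(!"lookup_b: no element matched y");  // hard-fail at runtime
    return 0;  // unreachable while asserts are enabled
}
```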

 Without the notification (a) is impossible, so it seems Walter's
 position removes the worst case scenario, BUT, without the
 notification (c) is impossible, so it seems Walter's position removes
 the best case scenario also.

 Of course for any programmer who would choose (b) over (a) 'all the
 time', Matthew's position is clearly the superior one, however...

 The real question is: in the real world, are there more programmers
 who choose (a), as Walter imagines, or more choosing (b), as Matthew
 imagines?

 Those that choose (a), do they do so out of ignorance, impatience, or
 stupidity? (or some other reason)

 If stupidity, there is no cure for stupidity.

 If impatience (as Walter has suggested), what do we do? Can we do
 anything?

 If ignorance, then how do we teach them? Does auto-inserting the hard
 fail and giving no warning do so? Would giving the warning do a
 better/worse job?

 eg.

 "There is the potential for undefined behaviour here; an exception has
 been added automatically. Please consider the situation and either: A.
 add your own exception, or B. fix the bug."

 Regan

 On Sat, 5 Feb 2005 20:26:43 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu1pe6$15ks$1 digitaldaemon.com...
 1) make it impossible to ignore situations the programmer did not
 think of
So do I. So does any sane person. But it's a question of level, context, time. You're talking about two measures that are small-scale, whose effects may or may not ever be seen in a running system. If they are, they may or may not be in a context, and at a time, which renders them useless as an aid to improving the program.
If the error is silently ignored, it will be orders of magnitude harder to find. Throwing in a return 0; to get the compiler to stop squawking is not helping.
I'm not arguing for that! You have the bad habit of attributing positions to me that are either more extreme, or not representative whatsoever, in order to have something against which to argue more strongly. (You're not unique in that, of course. I'm sure I do it as well sometimes.)
 2) the bias is to force bugs to show themselves in an obvious
 manner.
So do I. But this statement is too bland to be worth anything. What is "obvious"?
Throwing an uncaught exception is designed to be obvious and is the preferred method of being obvious about a runtime error.
Man oh man! Have you taken up politics? My problem is that you're forcing issues that can be dealt with at compile time to be runtime. Your response: exceptions are the best way to indicate runtime error. Come on. Q: Do you think driving on the left-hand side of the road is more or less sensible than driving on the right? A: When driving on the left-hand side of the road, be careful to monitor junctions from the left.
 *Who decides* what is obvious? How does/should the bug show
 itself? When should the showing be done: early, or late?
As early as possible. Putting in the return 0; means the showing will be late.
Oh? And that'd be later than the compiler preventing it from even getting to object code in the first place?
 Frankly, one might argue that the notion that the language and its
 premier compiler actively work to _prevent_ the programmer from
 detecting bugs at compile-time, forcing a wait of an unknowable
 amount
 of testing (or, more horribly, deployment time) to find them, is
 simply
 crazy.
I understand your point, but for this case, I do not agree for all the reasons stated here. I.e. there are other factors at work, factors that will make the bugs harder to find, not easier, if your approach is used. It is recognition of how programmers really write code, rather than the way they are exhorted to write code.
Disagree.
 But you're hamstringing 100% of all developers for the
 careless/unprofessional/inept of a few.
I don't believe it is a few. It is enough that Java was forced to change things, to allow unchecked exceptions. People who look at a lot of Java code and work with a lot of Java programmers tell me it is a commonplace practice, *even* among the experts. When even the experts tend to write code that is wrong, even though they know it is wrong and tell others it is wrong, that is a very strong signal that the language requirement they are dealing with is broken. I don't want to design a language about which the experts will say "do as I say, not as I do."
Yet again, you are broad-brushing your arbitrary (or at least partial) absolute decisions with a complete furphy. This is not an analogy, it's a mirror with some smoke machines behind it.
 Will those handful % of
 better-employed-working-in-the-spam-industry
 find no other way to screw up their systems? Is this really going
 to
 answer all the issues attendant with a lack of
 skill/learning/professionalism/adequate quality mechanisms (incl,
 design
 reviews, code reviews, documentation, refactoring, unit testing,
 system
 testing, etc. etc. )?
D is based on my experience and that of many others on how programmers actually write code, rather than how we might wish them to. (Supporting a compiler means I see an awful lot of real world code!) D shouldn't force people to insert dead code into their source. It's tedious, it looks wrong, it's misleading, and it entices bad habits even from expert programmers.
Sorry, but wrong again. As I mentioned in the last post, there's a mechanism for addressing both camps, yet you're still banging on with this all-or-nothing position.
 But I'm not going to argue point by point with your post, since you
 lost
 me at "Java's exceptions". The analogy is specious, and thus
 unconvincing. (Though I absolutely concur that they were a little
 tried
 'good idea', like C++'s exception specifications or, in fear of
 drawing
 unwanted venom from my friends in the C++ firmament, export.)
I believe it is an apt analogy as it shows how forcing programmers to do something unnatural leads to worse problems than it tries to solve. The best that can be said for it is "it seemed like a good idea at the time". I was at the last C++ standard committee meeting, and the topic came up on booting exception specifications out of C++ completely. The consensus was that it was now recognized as a worthless feature, but it did no harm (since it was optional), so leave it in for legacy compatibility.
All of this is of virtually no relevance to the topic under discussion.
 There's some growing thought that even static type checking is an
 emperor
 without clothes, that dynamic type checking (like Python does) is
 more
 robust and more productive. I'm not at all convinced of that yet
 <g>,
 but
 it's fun seeing the conventional wisdom being challenged. It's good
 for all
 of us.
I'm with you there.
 My position is simply that compile-time error detection is better
 than
 runtime error detection.
In general, I agree with that statement. I do not agree that it is always true, especially in this case, as it is not necessarily an error. It is hypothetically an error.
Nothing is *always* true. That's kind of one of the bases of my thesis.
 Now you're absolutely correct that an invalid state throwing an
 exception, leading to application/system reset is a good thing.
 Absolutely. But let's be honest. All that achieves is to prevent a
 bad
 program from continuing to function once it is established to be
 bad.
 It
 doesn't make that program less bad, or help it run well again.
Oh, yes it does make it less bad! It enables the program to notify the system that it has failed, and the backup needs to be engaged. That can make the difference between an annoyance and a catastrophe. It can help it run well again, as the error is found closer to the source of it, meaning it will be easier to reproduce, find and correct.
Sorry, but this is totally misleading nonsense. Again, you're arguing against me as if I think runtime checking is invalid or useless. Nothing could be further from the truth. So, again, my position is: Checking for an invalid state at runtime, and acting on it in a non-ignorable manner, is the absolute best thing one can do. Except when that error can be detected at compile time. Please stop arguing against your demons on this, and address my point. If an error can be detected at compile time, then it is a mistake to detect it at runtime. Please address this specific point, and stop general carping at the non-CP adherents. I'm not one of 'em.
 Depending
 on the vaguaries of its operating environment, it may well just
 keep
 going bad, in the same (hopefully very short) amount of time, again
 and
 again and again. The system's not being (further) corrupted, but
 it's
 not getting anything done either.
One of the Mars landers went silent for a couple days. Turns out it was a self detected fault, which caused a reset, then the fault, then the reset, etc. This resetting did eventually allow JPL to wrest control of it back. If it had simply locked, oh well.
Abso-bloody-lutely spot on behaviour. What: you think I'm arguing that the lander should have all its checking done at compile time (as if that's even possible) and eschew runtime checking. At no time have I ever said such a thing.
 On airliners, the self detected faults trigger a dedicated circuit
 that
 disables the faulty computer and engages the backup. The last, last,
 last
 thing you want the autopilot on an airliner to do is execute a
 return
 0;
 some programmer threw in to shut the compiler up. An exception
 thrown,
 shutting down the autopilot, engaging the backup, and notifying the
 pilot is
 what you'd much rather happen.
Same as above. Please address my thesis, not the more conveniently down-shootable one you seem to have been addressing.
 It's clear, or seems to to me, that this issue, at least as far as
 the
 strictures of D is concerned, is a balance between the likelihoods
 of:
     1.    producing a non-violating program, and
     2.    preventing a violating program from continuing its
 execution
 and, therefore, potentially wreck a system.
There's a very, very important additional point - that of not enticing the programmer into inserting "shut up" code to please the compiler that winds up masking a bug.
Absolutely. But that is not, in and of itself, sufficient justification for ditching compile detection in favour of runtime detection. Yet again, we're having to swallow absolutism - dare I say dogma? - instead of coming up with a solution that handles all requirements to a healthy degree.
 You seem to be of the opinion that the current situation of missing
 return/case handling (MRCH) minimises the likelihood of 2. I agree
 that
 it does so.

 However, contrarily, I assert that D's MRCH minimises the
 likelihood
 of
 producing a non-violating program in the first place. The reasons
 are
 obvious, so I'll not go into them. (If anyone cares to disagree, I ask
 you to write a non-trivial C++ program in a hurry, disable *all*
 warnings, and go straight to production with it.)

 Walter, I think that you've hung D on the petard of 'absolutism in
 the
 name of simplicity', on this and other issues. For good reasons,
 you
 won't conscience warnings, or pragmas, or even switch/function
 decorator keywords (e.g. "int allcases func(int i) { if (i < 0)
 return -1; }"). Indeed, as I think most participants will
 acknowledge,
 there are good reasons for all the decisions made for D thus far.
 But
 there are also good reasons against most/all of those decisions.
 (Except
 for slices. Slices are *the best thing* ever, and coupled with
 auto+GC,
 will eventually stand D out from all other mainstream
 languages.<G>).
Jan Knepper came up with the slicing idea. Sheer genius!
Truly
 Software engineering hasn't yet found a perfect language. D is not
 perfect, and it'd be surprising to hear anyone here say that it is.
 That
 being the case, how can the policy of absolutism be deemed a
 sensible
 one?
Now that you set yourself up, I can't resist knocking you down with "My position is simply that compile-time error detection is better than runtime error detection." :-)
? If you're trying to say that I've implied that compile-time detection can handle everything, leaving nothing to be done at runtime, you're either kidding, sly, or mental. I'm assuming kidding, from the smiley, but it's a bit disingenuous at this level of the debate, don't you think?
 It cannot be sanely argued that throwing on missing returns is a
 perfect
 solution, any more than it can be argued that compiler errors on
 missing
 returns is. That being the case, why has D made manifest in its
 definition the stance that one of these positions is indeed
 perfect?
I don't believe it is perfect. I believe it is the best balance of competing factors.
I know you do. We all know that you do. It's just that many disagree that it is. That's one of the problems.
 I know the many dark roads that await once the tight control on the
 language is loosened, but the real world's already here, batting on
 the
 door. I have an open mind, and willing fingers to all kinds of
 languages. I like D a lot, and I want it to succeed a *very great
 deal*.
 But I really cannot imagine recommending use of D to my clients
 with
 these flaws of absolutism. (My hopeful guess for the future is that
 other compiler variants will arise that will, at least, allow
 warnings
 to detect such things at compile time, which may alter the
 commercial
 landscape markedly; D is, after all, full of a great many wonderful
 things.)
I have no problem at all with somebody making a "lint" for D that will explore other ideas on checking for errors. One of the reasons the front end is open source is so that anyone can easily make such a tool.
I'm not talking about lint. I confidently predict that the least badness that will happen will be the general use of non-standard compilers and the general un-use of DMD. But I realistically think that D'll splinter as a result of making the same kinds of mistakes, albeit for different reasons, as C++. :-(
 One last word: I recall a suggestion a year or so ago that would
 required the programmer to explicitly insert what is currently
 inserted
 implicitly. This would have the compiler report errors to me if I
 missed
 a return. It'd have the code throw errors to you if an unexpected
 code
 path occured. Other than screwing over people who prize typing one
 less
 line over robustness, what's the flaw? And yet it got no traction
 ....
Essentially, that means requiring the programmer to insert: assert(0); return 0;
That is not the suggested syntax, at least not to the best of my recollection.
 It just seems that requiring some fixed boilerplate to be inserted
 means
 that the language should do that for you. After all, that's what
 computers
 are good at!
LOL! Well, there's no arguing with you there, eh? You don't want the compiler to automate the bits I want. I don't want it to automate the bits you want. I suggest a way to resolve this, by requiring more of the programmer - fancy that! - and you discount that because it's something the compiler should do. Just in case anyone's missed the extreme illogic of that position, I'll reiterate. Camp A want behaviour X to be done automatically by the compiler. Camp B want behaviour Y to be done automatically by the compiler. X and Y are incompatible, when done automatically. By having Z done manually, X and Y are moot, and everything works well. (To the degree that D will, then, and only then, achieve resultant robustnesses undreamt of.) Walter reckons that Z should be done automatically by the compiler. Matthew auto-defolicalises and goes to wibble his frimble in the back drim-drim with the other nimpins. Less insanely, I'm keen to hear if there's any on-point response to this?
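For concreteness, the boilerplate under discussion ("assert(0); return 0;") looks like this in a hedged C++ analogue; the sign function is an invented example of a case-exhaustive function whose exhaustiveness the compiler cannot prove:

```cpp
#include <cassert>

// Hypothetical example: every int falls into one of the three tests,
// but flow analysis cannot prove it, so a "missing return" is flagged.
int sign(int i) {
    if (i > 0) return 1;
    if (i < 0) return -1;
    if (i == 0) return 0;
    // The manually inserted marker: a hard failure plus a dead
    // return to satisfy the compiler's flow analysis.
    assert(0);
    return 0;
}
```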
 [My goodness! That was way longer than I wanted. I guess we'll
 still
 be
 arguing about this when the third edition of DPD's running hot
 through
 the presses ...]
I don't expect we'll agree on this anytime soon.
Agreed
Feb 06 2005
prev sibling parent reply John Reimer <brk_6502 yahoo.com> writes:
Matthew wrote:
 Sounds good to me.
 
 But I suspect Walter will argue that given the programmer any hint of
 the problem will result in them putting something in to shut the
 compiler up. At which point I'll have to smash myself in the head with 
 my laptop.
 
If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Feb 06 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message 
news:cu6mt4$785$1 digitaldaemon.com...
 Matthew wrote:
 Sounds good to me.

 But I suspect Walter will argue that given the programmer any hint of
 the problem will result in them putting something in to shut the
 compiler up. At which point I'll have to smash myself in the head 
 with my laptop.
If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the desk, with its miserable little broken hinge.
Feb 06 2005
parent reply "Carlos Santander B." <csantander619 gmail.com> writes:
Matthew wrote:
 
 
 Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on the 
 desk, with its miserable little broken hinge.
 
 
 
Does that mean you don't want it? I'll take it. Seriously. _______________________ Carlos Santander Bernal
Feb 09 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Carlos Santander B." <csantander619 gmail.com> wrote in message 
news:cuefto$o4k$1 digitaldaemon.com...
 Matthew wrote:
 Nah! It's not arrived yet. It'd be this 5 kilo old Dell sitting on 
 the desk, with its miserable little broken hinge.
Does that mean you don't want it? I'll take it. Seriously.
He he. No, sorry. I've ordered a hinge from Dell - a company that seems to know how to provide at least acceptable, if not great, customer service - and it's about to become a Linux machine. (GDC, here I come!)
Feb 09 2005
parent "Carlos Santander B." <csantander619 gmail.com> writes:
Matthew wrote:
 
 He he. No, sorry. I've ordered a hinge from Dell - a company that seems 
 to know how to provide at least acceptable, if not great, customer 
 service - and it's about to become a Linux machine. (GDC, here I come!)
 
 
 
Oh, ok. Can't say I didn't try... :D _______________________ Carlos Santander Bernal
Feb 10 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message 
news:cu6mt4$785$1 digitaldaemon.com...
 Matthew wrote:
 Sounds good to me.

 But I suspect Walter will argue that given the programmer any hint of
 the problem will result in them putting something in to shut the
 compiler up. At which point I'll have to smash myself in the head 
 with my laptop.
If it's that new Apple laptop you've got on order, please send it to me before you smash it on your head! ;-)
Cancelled it. I'm not going to document here why Apple have lost my business, but suffice it to say, one can understand their consistent lack of market share. Tossers!
Feb 08 2005
parent reply John Reimer <brk_6502 yahoo.com> writes:
Matthew wrote:

 
 Cancelled it. I'm not going to document here why Apple have lost my 
 business, but suffice it to say, one can understand their consistent 
 lack of market share. Tossers! 
 
 
Oh no! why?! Too many delays? Apple losing /your/ business is not a good thing. Darn it. - John
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message 
news:cuc6at$1g83$1 digitaldaemon.com...
 Matthew wrote:

 Cancelled it. I'm not going to document here why Apple have lost my 
 business, but suffice it to say, one can understand their consistent 
 lack of market share. Tossers!
Oh no! why?! Too many delays?
Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.
 Apple losing /your/ business is not a good thing.
Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Feb 08 2005
parent reply John Reimer <brk_6502 yahoo.com> writes:
Matthew wrote:
 "John Reimer" <brk_6502 yahoo.com> wrote in message 
 news:cuc6at$1g83$1 digitaldaemon.com...
 
Matthew wrote:


Cancelled it. I'm not going to document here why Apple have lost my 
business, but suffice it to say, one can understand their consistent 
lack of market share. Tossers!
Oh no! why?! Too many delays?
Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.
Oh Bother!
 
Apple losing /your/ business is not a good thing.
Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Um... something like that! :-(
Feb 08 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message 
news:cuc7r9$1i7t$1 digitaldaemon.com...
 Matthew wrote:
 "John Reimer" <brk_6502 yahoo.com> wrote in message 
 news:cuc6at$1g83$1 digitaldaemon.com...

Matthew wrote:


Cancelled it. I'm not going to document here why Apple have lost my 
business, but suffice it to say, one can understand their consistent 
lack of market share. Tossers!
Oh no! why?! Too many delays?
Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.
Oh Bother!
Apple losing /your/ business is not a good thing.
Why? What's so special about me? Do you wonder whether I may be inclined to document their shortcomings ... ;)
Um... something like that! :-(
Well, I've sent off a snotty letter to sales apple.com and sales apple.com.au. The latter bounced, from which I deduce that Apple probably don't have, or don't service, any guessable email addresses - lord knows, there are none on their websites - and so it's gone in the bit bucket. If I don't hear anything back soon, I reckon there'll be a blog entry coming in a couple of weeks ...
Feb 08 2005
parent reply John Reimer <brk_6502 yahoo.com> writes:
Matthew wrote:

 Well, I've sent off a snotty letter to sales apple.com and 
 sales apple.com.au. The latter bounced, from which I deduce that Apple 
 probably don't have, or don't service, any guessable email addresses - 
 lord knows, there are none on their websites - and so it's gone in the 
 bit bucket.
 
 If I don't hear anything back soon, I reckon there'll be a blog entry 
 coming in a couple of weeks ... 
 
 
Ok, Matthew. Quit holding back. Where's your blog site?
Feb 08 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"John Reimer" <brk_6502 yahoo.com> wrote in message 
news:cucak4$1kcp$1 digitaldaemon.com...
 Matthew wrote:

 Well, I've sent off a snotty letter to sales apple.com and 
 sales apple.com.au. The latter bounced, from which I deduce that 
 Apple probably don't have, or don't service, any guessable email 
 addresses - lord knows, there are none on their websites - and so 
 it's gone in the bit bucket.

 If I don't hear anything back soon, I reckon there'll be a blog entry 
 coming in a couple of weeks ...
Ok, Matthew. Quit holding back. Where's your blog site?
It's on Artima, where I can rub shoulders with people who really know what they're talking about. But I haven't posted any yet. I've been, er, busy. I will be kicking it off next week, for sure, now I've got my back up!
Feb 08 2005
prev sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
John Reimer wrote:
 Matthew wrote:
 Cancelled it. I'm not going to document here why Apple have lost my 
 business, but suffice it to say, one can understand their consistent 
 lack of market share. Tossers!
Oh no! why?! Too many delays?
Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.
Oh Bother!
Being new to the Mac, it's easy to see how you could misunderstand this. Apple is famous for their design, and infamous for their service. And over the years, they've also produced a fair share of "lemons"... Some of us like them anyway, and just make a big glass of lemonade. If you don't like that, you can always buy your Mac els... nevermind. :-P --anders
Feb 09 2005
parent John Reimer <brk_6502 yahoo.com> writes:
On Wed, 09 Feb 2005 10:31:48 +0100, Anders F Björklund wrote:

 John Reimer wrote:
 Matthew wrote:
 Cancelled it. I'm not going to document here why Apple have lost my 
 business, but suffice it to say, one can understand their consistent 
 lack of market share. Tossers!
Oh no! why?! Too many delays?
Yeah, plus an unbelievably slack attitude. World leaders in customer service ... not.
Oh Bother!
Being new to the Mac, it's easy to see how you could misunderstand this. Apple is famous for their design, and infamous for their service.
Actually, I'm not really that new to the Mac. I grew up with one. My parents got the first Mac 128 as soon as it came out. I spent hours on it with word processing, music programs, games (my favourite was Fokker Triplane Simulator: hours of fun). I think I even did a little programming in BASIC on it (not much; I didn't start programming until a few years later on the C64). We used it for years. I've read tons of MacWorld Mags while growing up. I learned to type on the Mac with Typing Tutor 3 when I was 9 or 10. My folks upgraded to a new PowerPC Mac years later. That machine was also plagued with problems. Apple eventually replaced the motherboard because of known problems with the model (I don't think the cost was completely covered by Apple, though). I've followed Apple history closely throughout the time, watching their successes and many failures. It was the era I grew up in. Despite all this, for some inexplicable reason, I've maintained a certain fondness for the machine. It doesn't make sense, really. :-) That said, I've never personally taken the plunge to get my own personal Mac (I've used other computers for years). Now with Mac OS X and cheaper Macs available, I'm almost ready to take the plunge DESPITE Apple's tenuous grasp on market share /and/ reputation for abysmal customer service. The disappointment I share here is just an expression of sadness that Apple's poor customer service might destroy their chances at success. They have to impress people like Matthew, if they know what's good for them. ;-) I think Apple needs to succeed, at the very least to give Microsoft the competition it so badly needs.
 And over the years, they've also produced a fair share
of "lemons"...
 Some of us like them anyway, and just make a big glass of lemonade.
Oh yes, they sure have made their share of lemons. In fact, I must correct myself. I /did/ buy myself a Mac once... an iMac (or was it an eMac? It was a G3 machine, one of the first models of the new shell design). I loved it! ... that is until it began to repeatedly crash, and the CD drive refused to work within a few days of purchasing it. I was mortified. After years of wanting the machine, I finally got one and look what happened. I sent it back and never got a replacement... That was several years ago, and I haven't tried again yet. I'm hoping my next attempt won't meet with such failure.
 If you don't like that, you can always buy your Mac els... nevermind.
 :-P
 --anders
There aren't many options, really. It's too bad. It's too bad Apple isn't a little more open (as in clones; I realize they tried that once before), but then they are probably afraid of losing market share within their own ranks... *sigh*. Later, John R.
Feb 09 2005
prev sibling parent reply Charles Patterson <charliep1 excite.com> writes:
First time reader, first time poster!

I think some people are missing the point that code rots.  A coder might 
add a return(0) to shut up the compiler and this might be a reasonable 
thing to do *at that point in time*.  For instance, if it is simply 
impossible for the code to reach this point, then there is probably no 
acceptable real artifact that can be returned, so he might as well return 0.

Later, if the code morphs around this function, the returned "artifact" 
might not come through as such. Zero might be an actual useful value 
and was just returned as an error because there was nothing more 
appropriate. The compiler cannot guess that return(0) has become outdated.

Restated, as code changes and assumptions change, leaving a potentially 
leaky return boldly states, "this will never happen and the compiler can 
add assertions all it wants".  However, adding a return(0) loses that 
intention.
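The rot scenario can be made concrete with a hedged C++ sketch (the names and data are invented): the sentinel 0 is harmless on the day it is written, and wrong the day 0 becomes a legitimate value:

```cpp
#include <vector>

struct Value { int x, z; };

// At the time of writing, z is always positive, so returning 0
// on the "impossible" fall-through path looks safe enough.
int find_z(const std::vector<Value>& c, int y) {
    for (const Value& v : c)
        if (v.x == y)
            return v.z;
    return 0;  // "can't happen" -- today
}

// Later the data admits z == 0, and callers written like this can
// no longer tell a genuine hit from the stale sentinel:
bool contains(const std::vector<Value>& c, int y) {
    return find_z(c, y) != 0;  // silently wrong once {y, 0} is legal
}
```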

So next argument is, "OK. So don't use a no-nag return but put an 
assertion in.  This will accomplish the same thing without involving the 
compiler."...

First, this will still not help until run-time.

I think there is an elegance issue to work through as well. And I'll 
mention up front that I don't have any conclusions. (-: The following, 
Andrew's code, is clean code.

   int foo(CollectionClass c, int y)
   {
     foreach (Value v; c)
     {
       if (v.x == y)
         return v.z;
     }
   }

Aahhh.  This reminds me of the simple code snippets you would find in an 
analysis of algorithms tome -- no extra if's used for checking on 
production results.  Could real code truly be this clean?

And yet the question will nag: is this programmer boldly stating that he 
will always return inside the loop and so needs no terminal case? 
What if the programmer simply forgot his terminal exception case?

I wouldn't be upset with forcing the coder to have his own assertion 
(or throwing one in for him if not?), but it does seem less elegant. 
However, code like the above rarely appears in production code. It's 
usually tons of tests against file opens, matrix reads, zero values, 
etc. The "inelegance" of peppering your own code with asserts isn't too 
bad.
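The "peppered" version, transliterated into C++ (types and names are stand-ins for the D snippet above): the terminal assert records the author's intent that the loop always returns:

```cpp
#include <cassert>
#include <vector>

struct Value { int x, z; };

int foo(const std::vector<Value>& c, int y) {
    for (const Value& v : c)
        if (v.x == y)
            return v.z;
    // The terminal case the clean version leaves implicit: reaching
    // here means no element matched, which the author believed
    // impossible when the function was written.
    assert(!"foo: no matching element");
    return 0;
}
```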

So I think it is 6 of one, half dozen of the other.  Perusing a few of 
the messages on this newsgroup, it appears that this language is 
Walter's puppy, and when it comes to a tie, I'd let him have his way. 
(-:  I'd hate to see this language suffer the same burnt-out, 
never-finished fate of so many other promising starts.

Let me also say that I don't think it is worth distorting a language too 
much in order to get as much compile time checking as possible. I think 
the reasonable limits are fairly well known (strong typing, etc.). I 
wouldn't over-use the argument that the compiler *could have* caught this.

- Charlie



Regan Heath wrote:
 Disclaimer: Please correct me if I have miss-represented anyone, I  
 appologise in advance for doing so, it was not my intent.
 
 The following is my impression of the points/positions in this argument:
 
 1. Catching things at compile time is better than at runtime.
  - all parties agree
 
 2. If it cannot be caught at compile time, then a hard failure at 
 runtime  is desired.
  - all parties agree
 
 3. An error which causes the programmer to add code to 'shut the 
 compiler  up' causes hidden bugs
  - Walter
 
 Matthew?
 
 4. Programmers should take responsibilty for the code they add to 'shut  
 the compiler up' by adding an assert/exception.
  - Matthew
 
 Walter?
 
 5. The language/compiler should where it can make it hard for the  
 programmer to write bad code
  - Walter
 
 Matthew?
 
 
 IMO it seems to be a disagreement about what happens in the "real 
 world",  IMO Matthew has an optimistic view, Walter a pessimistic view, eg.
 
 Matthew: If it were a warning, programmers would notice immediately,  
 consider the error, fix it or add an assert for protection, thus the 
 error  would be caught immediately or at runtime.
 
 It seems to me that Matthew's position is that warning the programmer at  
 compile time about the situation gives them the opportunity to fix it at  
 compile time, and I agree.
 
 Walter: If it were a warning, programmers might add 'return 0;' causing  
 the error to remain un-detected for longer.
 
 It seems to me that Walter's position is that if it were a warning there is  
 potential for the programmer to do something stupid, and I agree.
 
 So why can't we have both?
 
 To explore this, an imaginary situation:
 
 - Compiler detects problem.
 - Adds code to handle it (hard-fail at runtime).
 - Gives notification of the potential problem.
 - Programmer either:
 
  a. cannot see the problem, adds code to shut the compiler up. (causing  
 removal of auto hard-fail code)
 
  b. cannot see the problem, adds an assert (hard-fail) and code to shut  
 the compiler up.
 
  c. sees the problem, fixes it.
 
 if a then the bug could remain undetected for longer.
 if b then the bug is caught at runtime.
 if c then the bug is avoided.
 
 Without the notification (a) is impossible, so it seems Walter's position  
 removes the worst case scenario; BUT, without the notification (c) is  
 impossible, so it seems Walter's position removes the best case scenario  
 also.
 
 Of course, for any programmer who would choose (b) over (a) 'all the time',  
 Matthew's position is clearly the superior one. However...
 
 The real question is: in the real world, are there more programmers who  
 choose (a), as Walter imagines, or more choosing (b), as Matthew imagines?
 
 Those that choose (a), do they do so out of ignorance, impatience, or  
 stupidity? (or some other reason)
 
 If stupidity, there is no cure for stupidity.
 
 If impatience (as Walter has suggested), what do we do? Can we do anything?
 
 If ignorance, then how do we teach them? does auto-inserting the hard 
 fail  and giving no warning do so? would giving the warning do a 
 better/worse  job?
 
 eg.
 
 "There is the potential for undefined behaviour here; an exception has  
 been added automatically. Please consider the situation and either: A. add  
 your own exception, or B. fix the bug."
 
Feb 08 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 First time reader, first time poster!

 I think some people are missing the point that code rots.  A coder 
 might add a return(0) to shut up the compiler and this might be a 
 reasonable thing to do *at that point in time*.  For instance, if it 
 is simply impossible for the code to reach this point, then there is 
 probably no acceptable real artifact that can be returned, so he might 
 as well return 0.

 Later, if the code morphs around this function, the returned 
 "artifact" might not come through as such.  Zero might be an actual 
 useful value and was just returned as an error because there was 
 nothing more appropriate.  The compiler can not guess that return(0) 
 has become outdated.
Agreed. Last time this was debated, someone suggested a "neverreturn" keyword, or some such. That's less likely to rot, don't you think?
Feb 08 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 06:47:39 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 First time reader, first time poster!

 I think some people are missing the point that code rots.  A coder
 might add a return(0) to shut up the compiler and this might be a
 reasonable thing to do *at that point in time*.  For instance, if it
 is simply impossible for the code to reach this point, then there is
 probably no acceptable real artifact that can be returned, so he might
 as well return 0.

 Later, if the code morphs around this function, the returned
 "artifact" might not come through as such.  Zero might be an actual
 useful value and was just returned as an error because there was
 nothing more appropriate.  The compiler can not guess that return(0)
 has become outdated.
Agreed. Last time this was debated, someone suggested a "neverreturn" keyword, or some such. That's less likely to rot, don't you think?
I agree. Having some sort of way to tell the compiler what you mean is useful; in addition, it tells the next guy who looks at the code your intent. It's not strictly necessary to have a keyword, as an assert can do the job, but it has to be visible in the code to have the full effect. The problem (it seems this is what Walter is most worried about) is how to get people to use it... a keyword might achieve that goal? Regan
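The "visible assert" pattern under discussion can be sketched in C/C++ terms (a transliteration only, not D; the function and names are invented for illustration): the assert records the claim "control never falls off the end" in the source, and hard-fails at runtime if the assumption ever rots.

```cpp
#include <cassert>
#include <vector>

// Hypothetical example: find the first element equal to y and return its
// double. The author's assumption "y is always present, so the loop always
// returns" is made visible by the assert, which hard-fails at runtime if
// that assumption ever stops holding.
int lookup(const std::vector<int>& xs, int y) {
    for (int v : xs) {
        if (v == y)
            return v * 2;  // the normal (supposedly only) exit path
    }
    assert(!"unreachable: y is guaranteed to be present in xs");
    return 0;  // the dreaded shut-up value; reached only if asserts are compiled out
}
```

With asserts enabled, a violated assumption stops the program at the exact line, rather than silently handing back 0 to the caller.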
Feb 08 2005
prev sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
I think you and I have very similar opinions on this matter.

I think most all of us here agree on what the best outcome is, what we  
seem to disagree over is what the compiler can best do to achieve it.

On Tue, 08 Feb 2005 13:07:21 -0500, Charles Patterson  
<charliep1 excite.com> wrote:
 First time reader, first time poster!

 I think some people are missing the point that code rots.  A coder might  
 add a return(0) to shut up the compiler and this might be a reasonable  
 thing to do *at that point in time*.  For instance, if it is simply  
 impossible for the code to reach this point, then there is probably no  
 acceptable real artifact that can be returned, so he might as well  
 return 0.

 Later, if the code morphs around this function, the returned "artifact"  
   might not come through as such.  Zero might be an actual useful value  
 and was just returned as an error because there was nothing more  
 appropriate.  The compiler can not guess that return(0) has become  
 outdated.

 Restated, as code changes and assumptions change, leaving a potentially  
 leaky return boldly states, "this will never happen and the compiler can  
 add assertions all it wants".  However, adding a return(0) loses that  
 intention.

 So next argument is, "OK. So don't use a no-nag return but put an  
 assertion in.  This will accomplish the same thing without involving the  
 compiler."...

 First, this will still not help until run-time.

 I think there is an elegance issue to work through as well.  And I'll  
 mention up front that I don't have any conclusions.  (-:  The following,  
 Andrew code, is clean code.

    int foo(CollectionClass c, int y)
    {
      foreach (Value v; c)
      {
        if (v.x == y)
          return v.z;
      }
    }

 Aahhh.  This reminds me of the simple code snippets you would find in an  
 analysis of algorithms tome -- no extra if's used for checking on  
 production results.  Could real code truly be this clean?

 And yet the question will nag: is this programmer boldly stating that he  
 will always return inside the loop and so he needs no terminal case?  
 What if the programmer simply forgot his terminal exception case?

 I wouldn't be upset with forcing the coder to have his own assertion,  
 (or throw one in for him if not?) but it does seem less elegant.  
 However, code like the above rarely appears in production code.  It's  
 usually tons of tests against file opens, matrix reads, zero values,  
 etc.  The "inelegance" of peppering your own code with asserts isn't too  
 bad.

 So I think it is 6 of one, half dozen of the other.  Perusing a few of  
 the messages on this newsgroup, it appears that this language is  
 Walter's puppy, and when it comes to a tie, I'd let him have his way.  
 (-:  I'd hate to see this language suffer the same burnt-out,  
 never-finished fate of so many other promising starts.

 Let me also say that I dont' think it is worth distorting a language too  
 much in order to get as much compile time checking as possible.  I think  
 the reasonable limits are fairly well known (strong typing, etc.)  I  
 wouldn't over-use the argument that the compiler *could have* caught  
 this.

 - Charlie



 Regan Heath wrote:
 Disclaimer: Please correct me if I have miss-represented anyone, I   
 appologise in advance for doing so, it was not my intent.
  The following is my impression of the points/positions in this  
 argument:
  1. Catching things at compile time is better than at runtime.
  - all parties agree
  2. If it cannot be caught at compile time, then a hard failure at  
 runtime  is desired.
  - all parties agree
  3. An error which causes the programmer to add code to 'shut the  
 compiler  up' causes hidden bugs
  - Walter
  Matthew?
  4. Programmers should take responsibilty for the code they add to  
 'shut  the compiler up' by adding an assert/exception.
  - Matthew
  Walter?
  5. The language/compiler should where it can make it hard for the   
 programmer to write bad code
  - Walter
  Matthew?
   IMO it seems to be a disagreement about what happens in the "real  
 world",  IMO Matthew has an optimistic view, Walter a pessimistic view,  
 eg.
  Matthew: If it were a warning, programmers would notice immediately,   
 consider the error, fix it or add an assert for protection, thus the  
 error  would be caught immediately or at runtime.
  It seems to me that Matthews position is that warning the programmer  
 at  compile time about the situation gives them the opportunity to fix  
 it at  compile time, and I agree.
  Walter: If it were a warning, programmers might add 'return 0;'  
 causing  the error to remain un-detected for longer.
  It seems to me that Walters position is that if it were a warning  
 there is  potential for the programmer to do something stupid, and I  
 agree.
  So why can't we have both?
  To explore this, an imaginary situation:
  - Compiler detects problem.
 - Adds code to handle it (hard-fail at runtime).
 - Gives notification of the potential problem.
 - Programmer either:
   a. cannot see the problem, adds code to shut the compiler up.  
 (causing  removal of auto hard-fail code)
   b. cannot see the problem, adds an assert (hard-fail) and code to  
 shut  the compiler up.
   c. sees the problem, fixes it.
  if a then the bug could remain undetected for longer.
 if b then the bug is caught at runtime.
 if c then the bug is avoided.
  Without the notification (a) is impossible, so it seems Walters  
 position  removes the worst case scenario, BUT, without the  
 notification (c) is  impossible, so it seems Walters position removes  
 the best case scenario  also.
  Of course for any programmer who would choose (b) over (a) 'all the  
 time'  Matthews position is clearly the superior one, however...
  The real question is. In the real world are there more programmers  
 who  choose (a), as Walter imagines, or are there more choosing (b) as  
 Matthew  imagines?
  Those that choose (a), do they do so out of ignorance, impatience, or   
 stupidity? (or some other reason)
  If stupidity, there is no cure for stupidity.
  If impatience (as Walter has suggested) what do we do, can we do  
 anything.
  If ignorance, then how do we teach them? does auto-inserting the hard  
 fail  and giving no warning do so? would giving the warning do a  
 better/worse  job?
  eg.
  "There is the potential for undefined behaviour here, an exception  
 has  been added automatically please consider the situation and either:  
 A. add  your own exception or B. fix the bug."
Feb 08 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what we 
 seem to disagree over is what the compiler can best do to achieve it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
 On Tue, 08 Feb 2005 13:07:21 -0500, Charles Patterson 
 <charliep1 excite.com> wrote:
 First time reader, first time poster!

 I think some people are missing the point that code rots.  A coder 
 might  add a return(0) to shut up the compiler and this might be a 
 reasonable  thing to do *at that point in time*.  For instance, if it 
 is simply  impossible for the code to reach this point, then there is 
 probably no  acceptable real artifact that can be returned, so he 
 might as well  return 0.

 Later, if the code morphs around this function, the returned 
 "artifact"  might not come through as such.  Zero might be an actual 
 useful value  and was just returned as an error because there was 
 nothing more  appropriate.  The compiler can not guess that return(0) 
 has become  outdated.

 Restated, as code changes and assumptions change, leaving a 
 potentially  leaky return boldly states, "this will never happen and 
 the compiler can  add assertions all it wants".  However, adding a 
 return(0) loses that  intention.

 So next argument is, "OK. So don't use a no-nag return but put an 
 assertion in.  This will accomplish the same thing without involving 
 the  compiler."...

 First, this will still not help until run-time.

 I think there is an elegance issue to work through as well.  And I'll 
 mention up front that I don't have any conclusions.  (-:  The 
 following,  Andrew code, is clean code.

    int foo(CollectionClass c, int y)
    {
      foreach (Value v; c)
      {
        if (v.x == y)
          return v.z;
      }
    }

 Aahhh.  This reminds me of the simple code snippets you would find in 
 an  analysis of algorithms tome -- no extra if's used for checking on 
 production results.  Could real code truly be this clean?

 And yet the question will nag: is this programmer boldly stating that 
 he  will always return inside the loop and so he needs no terminal 
 case?  what if the programmer simply forgot his terminal exception 
 case?

 I wouldn't be upset with forcing the coder to have his own assertion, 
 (or throw one in for him if not?) but it does seem less elegant. 
 However, code like the above rarely appears in production code.  It's 
 usually tons of tests against file opens, matrix reads, zero values, 
 etc.  The "inelegance" of peppering your own code with asserts isn't 
 too  bad.

 So I think it is 6 of one, half dozen of the other.  Perusing a few 
 of  the messages on this newsgroup, it appears that this language is 
 Walter's puppy, and when it comes to a tie, I'd let him have his way. 
 (-:  I'd hate to see this language suffer the same burnt-out, 
 never-finished fate of so many other promising starts.

 Let me also say that I dont' think it is worth distorting a language 
 too  much in order to get as much compile time checking as possible. 
 I think  the reasonable limits are fairly well known (strong typing, 
 etc.)  I  wouldn't over-use the argument that the compiler *could 
 have* caught  this.

 - Charlie



 Regan Heath wrote:
 Disclaimer: Please correct me if I have miss-represented anyone, I 
 appologise in advance for doing so, it was not my intent.
  The following is my impression of the points/positions in this 
 argument:
  1. Catching things at compile time is better than at runtime.
  - all parties agree
  2. If it cannot be caught at compile time, then a hard failure at 
 runtime  is desired.
  - all parties agree
  3. An error which causes the programmer to add code to 'shut the 
 compiler  up' causes hidden bugs
  - Walter
  Matthew?
  4. Programmers should take responsibilty for the code they add to 
 'shut  the compiler up' by adding an assert/exception.
  - Matthew
  Walter?
  5. The language/compiler should where it can make it hard for the 
 programmer to write bad code
  - Walter
  Matthew?
   IMO it seems to be a disagreement about what happens in the "real 
 world",  IMO Matthew has an optimistic view, Walter a pessimistic 
 view,  eg.
  Matthew: If it were a warning, programmers would notice 
 immediately,   consider the error, fix it or add an assert for 
 protection, thus the  error  would be caught immediately or at 
 runtime.
  It seems to me that Matthews position is that warning the 
 programmer  at  compile time about the situation gives them the 
 opportunity to fix  it at  compile time, and I agree.
  Walter: If it were a warning, programmers might add 'return 0;' 
 causing  the error to remain un-detected for longer.
  It seems to me that Walters position is that if it were a warning 
 there is  potential for the programmer to do something stupid, and I 
 agree.
  So why can't we have both?
  To explore this, an imaginary situation:
  - Compiler detects problem.
 - Adds code to handle it (hard-fail at runtime).
 - Gives notification of the potential problem.
 - Programmer either:
   a. cannot see the problem, adds code to shut the compiler up. 
 (causing  removal of auto hard-fail code)
   b. cannot see the problem, adds an assert (hard-fail) and code to 
 shut  the compiler up.
   c. sees the problem, fixes it.
  if a then the bug could remain undetected for longer.
 if b then the bug is caught at runtime.
 if c then the bug is avoided.
  Without the notification (a) is impossible, so it seems Walters 
 position  removes the worst case scenario, BUT, without the 
 notification (c) is  impossible, so it seems Walters position 
 removes  the best case scenario  also.
  Of course for any programmer who would choose (b) over (a) 'all the 
 time'  Matthews position is clearly the superior one, however...
  The real question is. In the real world are there more programmers 
 who  choose (a), as Walter imagines, or are there more choosing (b) 
 as  Matthew  imagines?
  Those that choose (a), do they do so out of ignorance, impatience, 
 or   stupidity? (or some other reason)
  If stupidity, there is no cure for stupidity.
  If impatience (as Walter has suggested) what do we do, can we do 
 anything.
  If ignorance, then how do we teach them? does auto-inserting the 
 hard  fail  and giving no warning do so? would giving the warning do 
 a  better/worse  job?
  eg.
  "There is the potential for undefined behaviour here, an exception 
 has  been added automatically please consider the situation and 
 either:  A. add  your own exception or B. fix the bug."
Feb 08 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Wed, 9 Feb 2005 09:25:50 +1100, Matthew wrote:

I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what we 
 seem to disagree over is what the compiler can best do to achieve it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
The best compromise I've heard of so far is to have the '-v' (verbose) DMD switch tell me (a hopefully responsible coder) where DMD has inserted the assert(0) code. Most people do not use -v in normal compilations, so only anal coders, such as myself, could use it to find out where I could improve my poor coding practices.

[snipped stuff that is not relevant to the above comment]

-- 
Derek
Melbourne, Australia
9/02/2005 10:11:31 AM
Feb 08 2005
parent Kris <Kris_member pathlink.com> writes:
In article <1koccnt7piohu.8r5ivwwg1lso.dlg 40tude.net>, Derek Parnell says...
On Wed, 9 Feb 2005 09:25:50 +1100, Matthew wrote:

I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what we 
 seem to disagree over is what the compiler can best do to achieve it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
The best compromise I've heard of so far is to have the '-v' (verbose) DMD switch to tell me (a hopefully responsible coder) where DMD has inserted the assert(0) code. Most people do not use -v in normal compilations, so only anal coders, such as myself, could use it to find out where I could improve my poor coding practices. [snipped stuff that is not relevant to the above comment] -- Derek Melbourne, Australia 9/02/2005 10:11:31
Good stuff. Whilst on the subject, let's add the implicit "default:" injection to that list of "diagnostics" also. Frankly, I find it vaguely annoying when a compiler thinks it knows best, and does so silently. All changes made to the original code, as 'designed' by the programmer, should be clearly noted during compile time -- if that requires a -v switch, then great! Diagnostics are not warnings; therefore there cannot be any wiffle waffle about them, and Walter may actually accept that as a compromise. - Kris
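The implicit "default:" injection Kris mentions can be illustrated in C++ (a sketch of the shape of the rewrite, not D's actual injected code): a switch with no default behaves as if the programmer had written a hard-failing one himself, and writing it out by hand is what keeps the intent visible.

```cpp
#include <cassert>

// Illustrative enum; the names are invented for this sketch.
enum Color { Red, Green, Blue };

// What the silent injection amounts to: if the programmer omits the
// default, the compiler acts as though the hard-failing default below had
// been written. The debate is whether the compiler should at least report
// (e.g. under a verbose flag) when it adds this on the programmer's behalf.
const char* colorName(Color c) {
    switch (c) {
    case Red:   return "red";
    case Green: return "green";
    case Blue:  return "blue";
    default:
        assert(!"unhandled Color value");  // the injected hard-fail, made explicit
        return "";
    }
}
```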
Feb 08 2005
prev sibling next sibling parent "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 9 Feb 2005 09:25:50 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what we
 seem to disagree over is what the compiler can best do to achieve it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up
Because of real-life constraints, time, etc.
 because programmers are unprofessional.
Some are.
 This hamstrings all
 diligent engineers.
I agree. It also mitigates mistakes made by less diligent engineers, or diligent engineers having a bad day or in a bad position. Basically my problem is that I can see both sides and have no way to measure which side is correct. Interestingly, it's my impression people like the default-switch behaviour and dislike the missing-return one; I am struggling to find the difference between the two, though I have a nagging feeling there is one.
 Pessimism vs Optimism/Responsibility. As has been
 observed, there's no resolution of this difference, so we need to find a
 compromise.
Wouldn't life be boring if we were all the same. Regan
Feb 08 2005
prev sibling next sibling parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
I disagree.  He doesn't think that.  He thinks (umm, I think he thinks) 
that one possible outcome of such a situation is shut-up code.  He has 
expressed that he believes this happens 10% of the time.

Regardless, I think we can probably all agree that verbose should 
*definitely* show this message.

It seems to me as if Walter doesn't like too many options, but to me 
this argument and the sides in it seem to indicate the perfect 
opportunity for some sort of "tell me when I omit returns" option.  The 
obvious problem is that then, you never know if the compiler has this 
enabled, and so sometimes good programs when built on other machines 
(e.g. other platforms, etc.) will give these warnings to the great 
annoyance of the programmer(s).

But, perhaps, if there was a way to indicate options in the current 
directory (e.g. ".dmd") that wouldn't be a problem.  And, everyone could 
enable this option if they understood its effects and that shut up code 
is bad.  The default could remain off.

Then again, I like options.  When I open Firefox and go to 
"about:config", it makes me happy.  That was one of the main things that 
sold me on the browser.  Oh, well.

-[Unknown]


 Absolutely. That's the entire problem. Walter thinks that if the 
 compiler tells the user there's a problem, the most likely outcome is a 
 shut-up because programmers are unprofessional. This hamstrings all 
 diligent engineers. Pessimism vs Optimism/Responsibility. As has been 
 observed, there's no resolution of this difference, so we need to find a 
 compromise. 
Feb 08 2005
prev sibling parent reply "Charlie Patterson" <charliep1 excite.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cubfs7$t5n$1 digitaldaemon.com...
I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what we 
 seem to disagree over is what the compiler can best do to achieve it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
I don't see the problem as that.

<preach>
I don't think it is fair to call one side pessimistic. That's pretty rhetorical. And if this is a discussion between pessimists and optimists, then I'm not interested, because both camps are typically full of non-thinkers. I think the appropriate position here would be realist. Sorry.
</preach>

<ot>
Let me weigh in that most programmers *are* unprofessional. (-: You don't have to dig through too many books on software engineering to find that out. There is a factor of 30 between the productivity of the bad and good coders, for example. And most people in any "sweatshop" environment, of which there are plenty in programming, do the minimum work they can. But I don't think that matters.
</ot>

I see the problem as a matter of elegance and consistency. And I think it is more elegant to provide the assert as a run-time check. Doesn't D also have array bounds checks? Would you also think that the user should be forced to make these explicitly? What if the code

    // I am not a D programmer yet, so bear with me if this is C/C++
    // A has at least 10 elements and we only care about the first 10
    for ( int i = 0; i < 10; i++ )
      test( A[ i ] );

caused a compile-time warning because it can't tell for sure that your assumption is correct? The only way to "shut up" the compiler would be to have an assert in-line:

    for ( int i = 0; i < 10; i++ )
    {
      assert( i < A.size );
      A[ i ] = i;
    }

So I think the compiler should supply one automatically for the return(0) case, just like it does for other situations where the coder is checked on but given the flexibility to write the code as he sees fit.

And this is all assuming you can tell the assert is related to the problem. How smart will the compiler have to be to force an assert in the return(0) case?

    int foo(CollectionClass c, int y)
    {
      int t;
      foreach (Value v; c)
      {
        if (v.x == y)
          return v.z;
      }
      assert( t == 0 );
    }

How does the compiler know that the assert is unrelated to the loop?
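Charlie's bounds-check analogy can be made concrete in C++ (again a transliteration; the function and its data are invented for illustration): the explicit assert states the programmer's assumption in the source, while a bounds-checked access is the automatic runtime check the language supplies on his behalf.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the analogy: the assert is the explicit,
// programmer-written statement "a has at least 10 elements"; the checked
// access (vector::at) is the implicit runtime check the language provides,
// throwing std::out_of_range if the assumption is wrong.
int sumFirstTen(const std::vector<int>& a) {
    int total = 0;
    for (std::size_t i = 0; i < 10; ++i) {
        assert(i < a.size());  // explicit: states the assumption in the code
        total += a.at(i);      // implicit: automatic bounds check at runtime
    }
    return total;
}
```

Both checks fail hard rather than letting the out-of-bounds read proceed; the question in the thread is only which of them the language should supply silently.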
Feb 09 2005
next sibling parent reply Derek <derek psych.ward> writes:
On Wed, 9 Feb 2005 11:31:46 -0500, Charlie Patterson wrote:

[snip]

 
 I don't see the problem as that.
[snip]
 I see the problem as a matter of elegance and consistency.  And I think it 
 is more elegant to provide the assert as a run-time check.
I have no problem with this as well. The issue for me has boiled down to whether or not the compiler tells me that's what it's done. I want to be told whenever the compiler does this sort of thing on my behalf. And I'm happy to have to ask for this too, for example via the -v compiler switch.

-- 
Derek
Melbourne, Australia
Feb 09 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek" <derek psych.ward> wrote in message 
news:r5usmcalgzby.16ym9b7gewm7p.dlg 40tude.net...
 On Wed, 9 Feb 2005 11:31:46 -0500, Charlie Patterson wrote:

 [snip]

 I don't see the problem as that.
[snip]
 I see the problem as a matter of elegance and consistency.  And I 
 think it
 is more elegant to provide the assert as a run-time check.
I have no problem with this as well. The issue for me has boiled down to whether or not the compiler tells me that's what its done. I want to be told whenever the compiler does this sort of thing on my behalf.
Agreed.
 And I'm happy to have to ask for this too, for example via the -v
 compiler switch.
I think this is a mistake, but if this is as far as Walter can be pushed on this issue, then an invocation of matthew->ShutUp("You get it as a flag on the compiler") would probably not throw an exception at this stage (though it would result in the printing of a critical missive to stdwhine).
Feb 09 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Charlie Patterson" <charliep1 excite.com> wrote in message 
news:cuddto$2o74$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cubfs7$t5n$1 digitaldaemon.com...
I think you and I have very similar opinions on this matter.

 I think most all of us here agree on what the best outcome is, what 
 we seem to disagree over is what the compiler can best do to achieve 
 it.
Absolutely. That's the entire problem. Walter thinks that if the compiler tells the user there's a problem, the most likely outcome is a shut-up because programmers are unprofessional. This hamstrings all diligent engineers. Pessimism vs Optimism/Responsibility. As has been observed, there's no resolution of this difference, so we need to find a compromise.
I don't see the problem as that. <preach> I don't think it is fair to call one side pessimistic. That's pretty rhetorical.
Agreed. As I've just posted on the 'const/readonly string' thread, I think this issue maybe needs to go back to a simple distillation of the problem.
 And if this is a discussion between pessimists and optimists, then I'm 
 not interested because both camps are typically full of non-thinkers. 
 I think the appropriate position would be realist here. Sorry.
 </preach>

 <ot>
 Let me weigh in that most programmers *are* unprofessional.  (-:  You 
 don't have to dig through too many books on software engineering to 
 find that out.
Yes, alas, I guess I maybe have to admit that a significant proportion of them are. Lord knows - btw, is there an atheistic/agnostic equivalent to "Lord knows"?? - I've worked with enough of them in years gone by.

I suppose my thinking's been coloured these last few years because I've spent most of my time writing, wherein one learns all the myriad ways in which one's assumptions / implementations are wrong/bad, and any that are missed are gleefully pointed out by reviewers. The times I have worked have been consultative projects where I've done most/all the implementation myself. I confess that some years past I've worked with people who should be selling fake jewellery on street corners, rather than working on complex systems.

But if we agree that there are many unprofessional programmers, must we not also agree that there will be a continuum, rather than just, say, 10% of programmers being highly professional and 90% utterly unprofessional? Given that, I think a language which actually lays traps for people who are somewhere in between - and I am convinced that D does indeed do that - is a bad thing. Someone who's half-arsed may well put in "return 0;"s in every function by rote, so as to avoid the dreaded "indeterminate return" that their (somewhat more professional) colleague has warned them about (or their team-leader has castigated them about, after they've caused run-time f-ups for the third time).

To me the only sane solution is a middle ground. But this clashes with Walter's strong intent to avoid warnings (and for good reason). My position is that something's got to give.
 There is a factor of 30 between the productivity of the bad and good 
 coders, for example.
Indeed, but try getting a 3000% pay rise on the strength of that. (btw, IIRC, it's only 29x <g>.)
  And most people in any "sweatshop" environment, of which there are 
 plenty in programming, do the minimum work they can.  But I don't 
 think that matters.
 </ot>

 I see the problem as a matter of elegance and consistency.  And I 
 think it is more elegant to provide the assert as a run-time check. 
 Doesn't D also have array bounds checks?
Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
Feb 09 2005
next sibling parent reply "Carlos Santander B." <csantander619 gmail.com> writes:
Matthew wrote:
 of them are. Lord knows - btw, is there an aetheistic/agnostic 
 equivalent to "Lord knows"?? - I've worked with enough of them in years 
No offense intended, but it reminds me of what a teacher of mine said once: "Atheists are weird, especially when they say: 'I swear by God that I'm an atheist'" :D _______________________ Carlos Santander Bernal
Feb 09 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Carlos Santander B." <csantander619 gmail.com> wrote in message 
news:cuegnn$opn$1 digitaldaemon.com...
 Matthew wrote:
 of them are. Lord knows - btw, is there an aetheistic/agnostic 
 equivalent to "Lord knows"?? - I've worked with enough of them in 
 years
No offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
No offence to me, I'm not an atheist. (Certainty on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)
Feb 09 2005
parent reply "Alex Stevenson" <ans104 cs.york.ac.uk> writes:
On Thu, 10 Feb 2005 13:49:51 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:

 "Carlos Santander B." <csantander619 gmail.com> wrote in message
 news:cuegnn$opn$1 digitaldaemon.com...
 Matthew wrote:
 of them are. Lord knows - btw, is there an aetheistic/agnostic
 equivalent to "Lord knows"?? - I've worked with enough of them in
 years
No offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)
I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob. -- Using Opera's revolutionary e-mail client: http://www.opera.com/m2/
Feb 09 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Alex Stevenson" <ans104 cs.york.ac.uk> wrote in message
news:opslywebqz08qma6 mjolnir.spamnet.local...
 On Thu, 10 Feb 2005 13:49:51 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:

 "Carlos Santander B." <csantander619 gmail.com> wrote in message
 news:cuegnn$opn$1 digitaldaemon.com...
 Matthew wrote:
 of them are. Lord knows - btw, is there an aetheistic/agnostic
 equivalent to "Lord knows"?? - I've worked with enough of them in
 years
No offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)
I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob.
That's great! Thanks Bob only knows why I didn't think of it before. :-)
Feb 09 2005
parent Kramer <Kramer_member pathlink.com> writes:
In article <cuetqd$14g2$1 digitaldaemon.com>, Matthew says...
"Alex Stevenson" <ans104 cs.york.ac.uk> wrote in message
news:opslywebqz08qma6 mjolnir.spamnet.local...
 On Thu, 10 Feb 2005 13:49:51 +1100, Matthew
 <admin stlsoft.dot.dot.dot.dot.org> wrote:

 "Carlos Santander B." <csantander619 gmail.com> wrote in message
 news:cuegnn$opn$1 digitaldaemon.com...
 Matthew wrote:
 of them are. Lord knows - btw, is there an aetheistic/agnostic
 equivalent to "Lord knows"?? - I've worked with enough of them in
 years
No offense intended, but it reminds me of what a teacher of mine said once: "Atheist are weird, especially when they say: 'I swear by God that I'm atheist'" :D
No offence to me, I'm not an atheist. (Certainly on either side indicates, to me, a decidedly unnerving degree of certitude.) I just wanted to know if anyone could suggest an alternative to "Lord knows, " or "Heaven knows, ", which I find myself saying more often than I'd like. :-)
I tend to use 'Bob' as a generic drop-in replacement for deity invocation - "Bob only knows" warning: Do not use around people called Bob.
That's great! Thanks Bob only knows why I didn't think of it before. :-)
What about Void? Isn't that supposed to cover everything? Void only knows! or cast(void)(Deity) only knows! Sorry, couldn't help myself!
Feb 09 2005
prev sibling parent reply "Charlie Patterson" <charliep1 excite.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cueank$jur$1 digitaldaemon.com...
 I see the problem as a matter of elegance and consistency.  And I think 
 it is more elegant to provide the assert as a run-time check. Doesn't D 
 also have array bounds checks?
Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
When the fog lifts... (-:

So why is it OK to remove array-bounds checks at release but not function return points? They seem remarkably similar to me. The assumption seems to be that if you didn't catch them in debug, you'll be alright. And why should the user be forced to insert a dummy return point (or assertions at return points), but not dummy array-bounds checks or assertions?

I'm also OK with Derek that a compile option such as --sanity would point out the automatically inserted assertions, but how big might this list be if it includes, again, array-bounds checks, etc?

For the record, I hate it when compilers dump warnings about things that aren't a problem. I guess I'm anal like that, but it frustrates me for the compiler to point out non-errors. It's like being micro-managed. Like I'm painting and someone is standing over my shoulder saying, "Hey you missed a spot! Did you mean to leave it uneven? Are you going to put something there?" Plus, I really hate inheriting code that throws warnings. I don't know the code base yet, so how worried should I be?
Feb 10 2005
next sibling parent Nick <Nick_member pathlink.com> writes:
In article <cufvel$2aln$1 digitaldaemon.com>, Charlie Patterson says...
So why is it OK to remove array-bounds checks at release but not functional 
return points?  They seem remarkably similar to me.  The assumption seems to 
be that if you didn't catch them in debug, you'll be alright.
One difference is that bounds checking costs cycles for every array lookup, but an assert(0) at the end of a function doesn't cost anything by just being there. Nick
Feb 10 2005
prev sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Charlie Patterson" <charliep1 excite.com> wrote in message 
news:cufvel$2aln$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:cueank$jur$1 digitaldaemon.com...
 I see the problem as a matter of elegance and consistency.  And I 
 think it is more elegant to provide the assert as a run-time check. 
 Doesn't D also have array bounds checks?
Not in release, AFAIK. <snip reason = "ran out of time; mentally foggy">
When the fog lifts... (-: So why is it OK to remove array-bounds checks at release but not functional return points? They seem remarkably similar to me. The assumption seems to be that if you didn't catch them in debug, you'll be alright. And why should the user be forced to insert dummy return point (or assertions at return points), but not dummy array-bounds checks or assertions?
Assuming we're correct re checks in release, I agree it's inconsistent, as you point out.
Feb 10 2005
prev sibling next sibling parent reply Derek <derek psych.ward> writes:
On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu15pb$jqf$1 digitaldaemon.com...
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths, and rely on the runtime exceptions, do we really think
 NASA would use D? Come on!
NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler"
I come from the position that a compiler's job (apart from compiling) is to help the coder write correct programs. Of course, it can't do this to the Nth degree, because how does the compiler 'know' what is correct or not? However, a compiler is often able to detect things that are *probably* incorrect or have a high probability of causing the application to function incorrectly. Thus I think that a good compiler is one that is allowed the ability to point these situations out to the code writer. (The compiler should also allow coders to tell the compiler that the coder knows what they are doing in this instance and just let me get on with it, okay?!)

Now, what to do though if the code writer chooses to ignore the compiler's observations? I would suggest that the compiler should insert run-time code that prevents the application from continuing if the application tries to continue past the code that the compiler thinks is highly likely to cause bad results.

You seem to be concerned that a coder will always insert 'dead code' just so the compiler will stop nagging them. Of course, some coders are just this immature. They either grow up or wither. As a coder matures, they will begin to take the compiler seriously and add in code that makes sense in the context. I'm 50 years old and I've been coding for 28 years. You will often find in my code such things as ...

  Abort("Logic Error #nnn. If you see this, a mistake was made by the programmer. This should never be seen. Inform your supplier about this message.");

You might regard this as superfluous 'dead code', however a 'nice' message from the coder to the user is better than a compiler generated 'jargon' message that the user must decode. Thus my switch constructs always have a default clause, and in any 'if' statement in which an unhandled false would cause problems, I have an 'else' phrase. I always have a return statement at the end of my function text, even if it will never be executed (if all goes well). Call it overkill if you like, but in the long run, it keeps the users better informed and, *more* importantly, keeps future maintainers aware of the previous coder's intentions and reasons for doing things.

Currently, D is way too dogmatic and unreasonably unhelpful to the coder. It is mostly still better than C/C++ though.

-- 
Derek
Melbourne, Australia
Feb 05 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Derek" <derek psych.ward> wrote in message
news:1uqf5a6fc42ei$.8hb72yklj5d2.dlg 40tude.net...
 You seem to be concerned that a code will always insert 'dead code' just
so
 the compiler will stop nagging them.
Always? No. But it happens much more often than one would think. It usually happens when one is in a hurry, or thinking about something else at the time. One promises oneself that one will go back and fix it later. But that never happens.
 I'm 50 years old and I've been coding for 28 years. You will often find in
 my code such things as ...

   Abort("Logic Error #nnn. If you see this, a mistake was made by the
 programmer. This should never be seen. Inform your supplier about this
 message.");

 You might regard this is superfluous 'dead code',
No, I do not. I think such practice as yours is fine. My concern is with the temptation to just insert a return statement without any abort call. I've seen it happen, a lot, by professional programmers. This is where the good intentions of the compiler error message have gone awry and caused things to be worse.
 however a 'nice' message
 from the coder to the user is better than a compiler generated 'jargon'
 message that the user must decode.
This is good, and is also achievable by putting a catch in at the top level to catch any wayward uncaught exceptions and print out any message desired.
 Thus my switch constructs always have a
 default clause,
That's my normal practice with C/C++ code; years ago I had a paper advocating such, called "Defensive Programming". Many of those ideas have been automated in D, as D will insert a default clause for you if none is specified, and that inserted default clause will throw an exception. It takes the place of all the default: assert(0); break; statements I write in C/C++. I think we are much more in agreement on these issues than not.
 and any 'if' statement in which an unhandled false would
 cause problems, I have an 'else' phrase. I always have a return statement
 at the end of my function text, even if its will never be executed (if all
 goes well). Call it overkill if you like, but in the long run, it keeps
the
 users better informed and *more* importantly, keeps future maintainers
 aware of the previous coder's intentions and reasons for doing things.
No, I don't regard it as overkill. But I would regard an inserted return statement (that is not preceded by one of the abort messages you showed above) that is not intended to ever be executed as masking a potential bug. I also believe that dead code, unless it is marked with an Abort() like your example, is a problem for future maintainers. He'll see that return, and wonder what it's for and why it doesn't seem to be possible to execute it. (As an aside, it's interesting how much dead code tends to accumulate in an app. You can find such by running a coverage analyzer. Dead code accumulates like all the useless DNA we carry around <g>. I've been thinking of writing such a tool for D, it would be a good complement to the profiler.)
 Currently, D is way too dogmatic and unreasonably unhelpful to the coder.
But I think that a compiler requiring dead code to be inserted is being dogmatic! <g> Guess it's all in one's perspective.
 It is mostly still better than C/C++ though.
I surely hope so!
Feb 05 2005
parent reply Derek <derek psych.ward> writes:
I don't think I made myself very clear.

I would like to see a compiler that would insert run-time code that would
crash an application in those instances where it detected a probable
mistake made by the coder, *and* inform the coder about what the compiler
has done about it. The coder can then take one of three choices for each of
these instances,

(1) Do nothing. The coder lives with the compiler informing them and
accepts the compiler inserted code.

(2) The coder modifies their code so that the situation detected by the
compiler no longer exists. If the coder adds irresponsible code then they
have just continued their stupid (or uneducated) behaviour, as this is just
as poor as leaving it unattended. The issue here is, who's responsibility
is it to code well? The coder or the compiler? I maintain it is the coder
and one role that the compiler brings to is similar to that of a mentor or
coach, rather than a moral enforcement officer.

(3) The coder adds into their code, a statement that informs the compiler
that the coder acknowledges the situation and that the compiler no longer
needs to inform the coder of it. The compiler still inserts the run-time
code but no longer informs the coder.


With the current DMD behaviour, if the coder is the type or person who
continually writes irresponsible code, then it is more likely that the
first person to find this out would be the user of the application rather
than the coder. I believe it is politer for the compiler to mention the
poor code to the coder before the end user is disturbed by it. If the coder
does not mend their ways, then they probably deserve their consumer
backlash. However, if the compiler's inserted code is what the user sees,
then the whole programming community is tarnished, not just the original
coder.

On Sat, 5 Feb 2005 02:19:15 -0800, Walter wrote:

 "Derek" <derek psych.ward> wrote in message
 news:1uqf5a6fc42ei$.8hb72yklj5d2.dlg 40tude.net...
 You seem to be concerned that a code will always insert 'dead code' just
so
 the compiler will stop nagging them.
Always? No. But it happens much more often than one would think. It usually happens when one is in a hurry, or thinking about something else at the time. One promises oneself that one will go back and fix it later. But that never happens.
I work in the software production industry. It pays my wages. I also have a load of hands-on experience from many projects - large and small. The behaviour you just described is not universal. In our development regime, peer inspection, bloody-minded testers, and management-supported quality control processes virtually ensure that irresponsible coding practices are detected, corrected, and perpetrators retrained.
 I'm 50 years old and I've been coding for 28 years. You will often find in
 my code such things as ...

   Abort("Logic Error #nnn. If you see this, a mistake was made by the
 programmer. This should never be seen. Inform your supplier about this
 message.");

 You might regard this is superfluous 'dead code',
No, I do not. I think such practice as yours is fine. My concern is with the temptation to just insert a return statement without any abort call. I've seen it happen, a lot, by professional programmers. This is where the good intentions of the compiler error message have gone awry and caused things to be worse.
Define 'professional'? My definition includes the concept of responsibility.
 however a 'nice' message
 from the coder to the user is better than a compiler generated 'jargon'
 message that the user must decode.
This is good, and is also achievable by putting a catch in at the top level to catch any wayward uncaught exceptions and print out any message desired.
Yes, and that is just one of many mechanisms to achieve this effect.
 Thus my switch constructs always have a
 default clause,
That's my normal practice with C/C++ code, years ago I had a paper advocating such called "Defensive Programming". Many of those ideas have been automated in D, as D will insert a default clause for you if none is specified, and that inserted default clause will throw an exception. It takes the place of all the default: assert(0); break; I write in C/C++. I think we are much more in agreement on these issues than not.
Having D insert this code is not a problem. But having DMD be silent about it is. I would regard it as good manners to inform the coder about what you have done to their code.
 and any 'if' statement in which an unhandled false would
 cause problems, I have an 'else' phrase. I always have a return statement
 at the end of my function text, even if its will never be executed (if all
 goes well). Call it overkill if you like, but in the long run, it keeps
the
 users better informed and *more* importantly, keeps future maintainers
 aware of the previous coder's intentions and reasons for doing things.
No, I don't regard it as overkill. But I would regard an inserted return statement (that is not preceded by one of the abort messages you showed above) that is not intended to ever be executed as masking a potential bug. I also believe that dead code, unless it is marked with an Abort() like your example, is a problem for future maintainers. He'll see that return, and wonder what it's for and why it doesn't seem to be possible to execute it. (As an aside, it's interesting how much dead code tends to accumulate in an app. You can find such by running a coverage analyzer. Dead code accumulates like all the useless DNA we carry around <g>. I've been thinking of writing such a tool for D, it would be a good complement to the profiler.)
Sounds okay. Off you go then... ;-)
 Currently, D is way too dogmatic and unreasonably unhelpful to the coder.
But I think that a compiler requiring dead code to be inserted is being dogmatic! <g> Guess it's all in one's perspective.
See my choices above. The compiler is not required to force the coder to add dead code. The coder should be able to tell the compiler that "Hey! I know what I'm doing, ok!?"
 It is mostly still better than C/C++ though.
I surely hope so!
But it is still not as good as you can make it, Walter. You can make it even better still. The journey is not over with v1.0. -- Derek Melbourne, Australia
Feb 05 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
What you're advocating sounds very much like how compile time warnings work
in typical C/C++ compilers. Is this what you mean?

"Derek" <derek psych.ward> wrote in message
news:w9f4rolhiyrh.t9depibclhn6$.dlg 40tude.net...
 I don't think I made myself very clear.

 I would like to see a compiler that would insert run-time code that would
 crash an application in those instances where it detected a probable
 mistake made by the coder, *and* inform the coder about what the compiler
 has done about it. The coder can then take one of three choices for each
of
 these instances,

 (1) Do nothing. The coder lives with the compiler informing them and
 accepts the compiler inserted code.

 (2) The coder modifies their code so that the situation detected by the
 compiler no longer exists. If the coder adds irresponsible code then they
 have just continued their stupid (or uneducated) behaviour, as this is
just
 as poor as leaving it unattended. The issue here is, who's responsibility
 is it to code well? The coder or the compiler? I maintain it is the coder
 and one role that the compiler brings to is similar to that of a mentor or
 coach, rather than a moral enforcement officer.

 (3) The coder adds into their code, a statement that informs the compiler
 that the coder acknowledges the situation and that the compiler no longer
 needs to inform the coder of it. The compiler still inserts the run-time
 code but no longer informs the coder.


 With the current DMD behaviour, if the coder is the type or person who
 continually writes irresponsible code, then it is more likely that the
 first person to find this out would be the user of the application rather
 than the coder. I believe it is politer for the compiler to mention the
 poor code to the coder before the end user is disturbed by it. If the
coder
 does not mend their ways, then they probably deserve their consumer
 backlash. However, if the compiler's inserted code is what the user sees,
 then the whole programming community is tarnished, not just the original
 coder.
Feb 05 2005
next sibling parent reply Derek <derek psych.ward> writes:
On Sat, 5 Feb 2005 17:46:37 -0800, Walter wrote:

 What you're advocating sounds very much like how compile time warnings work
 in typical C/C++ compilers. Is this what you mean?
Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching', whatever... messages, I don't care. I don't care what you call the messages. I am asking for better (useful, helpful, detailed) information to be passed from the compiler to the coder. As I know you have some deep-seated hang-up with the concept of 'warning message', what say we call them Transitory Information/Problem Status messages (TIPS for short).

Secondly, we are only talking about two or three distinct situations, not all the hundreds of possible constructs out there. Currently DMD *already* takes special action in these situations, so it's not a big difference. DMD already has all the information at its fingertips, so to speak; all it needs to do is pass this information on to the coder.

If the coder decides to ignore them, or add stupid code, or tells DMD to shut up, then there is nothing more you can do. It's not your fault! It's okay, really. You did your best to help. In the long run, one cannot protect oneself, or others, from idiots. A fool-proof system just causes the universe to come up with a better class of fool.

-- 
Derek
Melbourne, Australia
Feb 05 2005
parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <w34t3lnducuh$.1cbudn9r87umu.dlg 40tude.net>, Derek says...
On Sat, 5 Feb 2005 17:46:37 -0800, Walter wrote:

 What you're advocating sounds very much like how compile time warnings work
 in typical C/C++ compilers. Is this what you mean?
Firstly, be they 'warning', 'information', 'error', 'FOOBAR', 'coaching', whatever... messages, I don't care. I don't care what you call the messages. I am asking for better (useful, helpful, detailed) information to be passed from the compiler to the coder. As I know you have some deep seated hang-up with the concept of 'warning message', what say we call them Transitory Information/Problem Status messages (TIPS for short). Secondly, we are only talking about two or three distinct situations, not all the hundreds of possible constructs out there. Currently DMD *already* takes special action in these situations, so its not a big difference. DMD already has all the information at its fingertips, so to speak, all it needs to do is pass this information on to the coder. If the coder decides to ignore them, or add stupid code, or tells DMD to shut up, then there is nothing more you can do. Its not your fault! Its okay, really. You did your best to help. In the long run, one cannot protect oneself, or others, from idiots. A fool-proof system just causes the universe to come up with a better class of fool. -- Derek Melbourne, Australia
A statement about inserting code could be made in verbose mode (-v) since that is a flag to the compiler to get all the details about what it is doing. Non-verbose mode should be... non-verbose.
Feb 06 2005
parent Derek Parnell <derek psych.ward> writes:
On Mon, 7 Feb 2005 04:10:40 +0000 (UTC), Ben Hinkle wrote:

 A statement about inserting code could be made in verbose mode (-v) since that
 is a flag to the compiler to get all the details about what it is doing.
 Non-verbose mode should be... non-verbose.
Now that's a decent idea. -- Derek Melbourne, Australia 7/02/2005 5:48:22 PM
Feb 06 2005
prev sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com>  
wrote:
 What you're advocating sounds very much like how compile time warnings  
 work
 in typical C/C++ compilers. Is this what you mean?
<snip>

To me, there seems to be one important difference between this proposition and the current compile time warnings of a C/C++ compiler. That difference is that the D compiler is going to do something about it, eg.

switch(a) {
case 1:
case 2:
}

a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered; now, can't it also issue a notification of what it has done, eg. "Note: default case added".

To me, this behaviour cannot be called a 'warning' by either the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to.

According to the web pages, the reason for removing warnings..

"No Warnings. D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design."

This new imagined behaviour does not violate the above paragraph. As mentioned previously, this new behaviour allows you to catch the bug in the compile phase, before the testing phase.

Regan
Feb 06 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opslsqnpdr23k2f5 ally...
 On Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com> 
 wrote:
 What you're advocating sounds very much like how compile time 
 warnings  work
 in typical C/C++ compilers. Is this what you mean?
<snip> To me, there seems to be one important difference between this proposition and the current compile time warnings of a c/c++ compiler. That difference is that the D compiler is going to do something about it, eg. switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added". To me, this behaviour cannot be called a 'warning' but either the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to. According to the web pages, the reason for removing warnings.. "No Warnings D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design." This new imagined behaviour does not violate the above paragraph. As mentioned previously this new behaviour allows you to catch the bug in the compile phase and before the testing phase. Regan
Sounds like a pretty excellent compromise to me!!
Feb 06 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Mon, 7 Feb 2005 09:33:27 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opslsqnpdr23k2f5 ally...
 On Sat, 5 Feb 2005 17:46:37 -0800, Walter <newshound digitalmars.com>
 wrote:
 What you're advocating sounds very much like how compile time
 warnings  work
 in typical C/C++ compilers. Is this what you mean?
<snip> To me, there seems to be one important difference between this proposition and the current compile time warnings of a c/c++ compiler. That difference is that the D compiler is going to do something about it, eg. switch(a) { case 1: case 2: } a C compiler might say, "Warning: no default case". The D compiler is going to add a default case which throws an exception if triggered, now, can't it also issue a notification of what it has done eg. "Note: default case added". To me, this behaviour cannot be called a 'warning' but either the literal definition of the word: http://dictionary.reference.com/search?q=warning or by the behaviour we are all used to. According to the web pages, the reason for removing warnings.. "No Warnings D compilers will not generate warnings for questionable code. Code will either be acceptable to the compiler or it will not be. This will eliminate any debate about which warnings are valid errors and which are not, and any debate about what to do with them. The need for compiler warnings is symptomatic of poor language design." This new imagined behaviour does not violate the above paragraph. As mentioned previously this new behaviour allows you to catch the bug in the compile phase and before the testing phase. Regan
Sounds like a pretty excellent compromise to me!!
It did to me too, however after more thought it appears more complex, see my later post dated: Mon, 07 Feb 2005 12:21:58 +1300 in this same thread for my later ramblings. Regan
Feb 06 2005
prev sibling parent reply Charles Patterson <charliep1 excite.com> writes:
Regan wrote:
 switch(a) {
 case 1:
 case 2:
 }
 
 a C compiler might say, "Warning: no default case". The D compiler is  
 going to add a default case which throws an exception if triggered, 
 now,  can't it also issue a notification of what it has done eg. "Note: 
 default  case added".
Not to pick on you Regan, because this is the second note of yours I've replied to, but if it is possible to leave off a default case *on purpose*, then I hate it when I have done what I intended and the compiler is spitting out any warnings or errors. Maybe I'm just anal, but I wouldn't like to see the compiler say "5 notes; 0 errors" when I'm through. I bring this up because I bet I'm not alone.
Feb 08 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 08 Feb 2005 13:22:06 -0500, Charles Patterson  
<charliep1 excite.com> wrote:
 Regan wrote:
 switch(a) {
 case 1:
 case 2:
 }
  a C compiler might say, "Warning: no default case". The D compiler is   
 going to add a default case which throws an exception if triggered,  
 now,  can't it also issue a notification of what it has done eg. "Note:  
 default  case added".
Not to pick on you Regan
I don't feel you are. :)
 , because this is the second note of yours I've replied to, but if it is  
 possible to leave off a default case *on purpose*
Sure, you leave it off because you don't believe it can 'ever' occur; I can understand that. The current D compiler behaviour is such that in this case the compiler inserts the 'assert', and if you're right, it never occurs and no harm is done. But if you're wrong, you get an assert which clearly shows where the bug is. The alternative, if it did not add the assert, is for the program to continue without having executed any of your switch branches, likely crashing shortly thereafter, in which case you'd be looking in the wrong place for the bug.
 , then I hate it when I have done what I intended and the compiler is  
 spitting out any warnings or errors. Maybe I'm just anal, but I wouldn't  
 like to see the compiler say "5 notes; 0 errors" when I'm through.  I  
 bring this up because I bet I'm not alone.
You're not alone. In fact, I agree with you. I too would find the "5 notes" annoying and would want to 'fix' them; I think it's part of our nature. This is exactly the behaviour Walter is describing which causes programmers to add 'dead code' in order to 'shut the compiler up'.

So, if it gave the 'note', you'd fix it: at best by adding a default case with an assert (which the compiler is already doing automatically), at worst by adding a default case with nothing in it. (There are other options, but I believe they fall into one of the two categories based on their outcome.) I say at best and at worst because those two actions cause the two results I have described above: an assert at the bug, or a crash at some indeterminate stage later.

I get the impression most people like the current behaviour WRT switch statements; it seems people dislike the same behaviour WRT missing returns. I am confused as to why; to me they seem like the same thing, though there is a nagging in the back of my mind, which I cannot put into words, that says there is a difference somewhere.

Regan
Feb 08 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I don't think I made myself very clear.

 I would like to see a compiler that would insert run-time code that 
 would
 crash an application in those instances where it detected a probable
 mistake made by the coder, *and* inform the coder about what the 
 compiler
 has done about it. The coder can then take one of three choices for 
 each of
 these instances,

 (1) Do nothing. The coder lives with the compiler informing them and
 accepts the compiler inserted code.

 (2) The coder modifies their code so that the situation detected by 
 the
 compiler no longer exists. If the coder adds irresponsible code then 
 they
 have just continued their stupid (or uneducated) behaviour, as this is 
 just
 as poor as leaving it unattended. The issue here is, who's 
 responsibility
 is it to code well? The coder or the compiler? I maintain it is the 
 coder
 and one role that the compiler plays is similar to that of a 
 mentor or
 coach, rather than a moral enforcement officer.

 (3) The coder adds into their code, a statement that informs the 
 compiler
 that the coder acknowledges the situation and that the compiler no 
 longer
 needs to inform the coder of it. The compiler still inserts the 
 run-time
 code but no longer informs the coder.
This is eminently sensible. I give it 0.01% chance of getting traction. Sarcasm aside, it's a warning by any other name, and we're not allowed warnings in D. :-(
 With the current DMD behaviour, if the coder is the type or person who
 continually writes irresponsible code, then it is more likely that the
 first person to find this out would be the user of the application 
 rather
 than the coder. I believe it is politer for the compiler to mention 
 the
 poor code to the coder before the end user is disturbed by it.
Brilliantly put. I'm going to email myself a copy of this and quote you next chance I get. <g>
 If the coder
 does not mend their ways, then they probably deserve their consumer
 backlash. However, if the compiler's inserted code is what the user 
 sees,
 then the whole programming community is tarnished, not just the 
 original
 coder.
In Chapter 1 of IC++, I expressed it more verbosely, as:

"

1.1 Eggs And Ham

I'm no doubt teaching all you gentle readers about egg-sucking here, but it's an important thing to state nevertheless. Permit me to wax awhile:

· It's better to catch a bug at design time than at coding/compile time[1].
· It's better to catch a bug at coding/compile time than during unit testing[2].
· It's better to catch a bug during unit testing than during debug system testing.
· It's better to catch a bug during debug system testing than in pre-release/beta system testing.
· It's better to catch a bug during pre-release/beta system testing than have your customer catch one.
· It's better to have your customer catch a bug (in a reasonably sophisticated/graceful manner), than to have no customers.

This is all pretty obvious stuff, although customers would probably disagree with the last one; best keep that one to ourselves. There are two ways in which such enforcements can take effect: at compile-time and at run-time, and these form the substance of this chapter.

[1] I'm not a waterfaller, so coding time and compiling time are the same time for me. But even though I like unit-tests and have had some blisteringly fast pair-programming partnerships, I don't think I'm an XP-er [Beck2000] either.

[2] This assumes you do unit testing. If you don't, then you need to start doing so, sharpish!

"

Walter was one of the reviewers of IC++, and never expressed reservations on this section.

Was I wrong?

Matthew

P.S. Sorry for having the bad manners to be quoting myself. But worry not, most of the gnomes I proffer originate from others, so oftentimes I'm actually quoting someone worth listening to. <CG>
Feb 05 2005
parent "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cu3vu0$3sv$1 digitaldaemon.com...
 Walter was one of the reviewers of IC++, and never expressed
 reservations on this section.

 Was I wrong?
It is the conventional wisdom, and all other things being equal, it's correct. It makes an implicit assumption that all bugs are equally bad. I'll refer you to the tradeoff I mentioned in the other posting, about preferring a lightweight bug in greater quantity to a heavyweight bug in lesser quantity.
Feb 05 2005
prev sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 I come from the position that a compiler's job (a part from 
 compiling), is
 to help the coder write correct programs. Of course, it can't do this 
 to
 the Nth degree because how does the compiler 'know' what is correct or 
 not?
 However, a compiler is often able to detect things that are *probably*
 incorrect or have a high probability to cause the application to 
 function
 incorrectly. Thus I think that a good compiler is one that is 
 allowed
 to have the ability to point these situations out to the code writer. 
 (The
 compiler should also allow coders to tell the compiler that the coder 
 knows
 what they are doing in this instance and just let me get on with it,
 okay?!)

 Now, what to do though if the code writer chooses to ignore the 
 compiler's
 observations? I would suggest that the compiler should insert run time 
 code
 that prevents the application from continuing if the application tries 
 to
 continue past the code that the compiler thinks might ( i.e. highly 
 likely)
 cause bad results.

 You seem to be concerned that a coder will always insert 'dead code' 
 just so
 the compiler will stop nagging them. Of course, some coders are just 
 this
 immature. They either grow up or wither. As a coder matures, they 
 will
 begin to take the compiler seriously and add in code that makes sense 
 in
 the context.

 I'm 50 years old and I've been coding for 28 years. You will often 
 find in
 my code such things as ...

  Abort("Logic Error #nnn. If you see this, a mistake was made by the
 programmer. This should never be seen. Inform your supplier about this
 message.");
He he! Great stuff
 You might regard this as superfluous 'dead code', however a 'nice' 
 message
 from the coder to the user is better than a compiler generated 
 'jargon'
 message that the user must decode. Thus my switch constructs always 
 have a
 default clause, and any 'if' statement in which an unhandled false 
 would
 cause problems, I have an 'else' clause. I always have a return 
 statement
 at the end of my function text, even if it will never be executed (if 
 all
 goes well). Call it overkill if you like, but in the long run, it 
 keeps the
 users better informed and *more* importantly, keeps future maintainers
 aware of the previous coder's intentions and reasons for doing things.

 Currently, D is way too dogmatic and unreasonably unhelpful to the 
 coder.

 It is mostly still better than C/C++ though.
It is indeed mostly better. Unfortunately, in the ways in which it is not better it is disconcertingly flawed. I've been involved with D for nearly three years now, and I've yet to meet a client who doesn't have use-preventing reservations about it. Though a big fan of D, and a hoper for its future, I myself do not use it for anything serious.
Feb 05 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu15pb$jqf$1 digitaldaemon.com...
 Guys, if we persist with the mechanism of no compile-time detection of
 return paths, and rely on the runtime exceptions, do we really think
 NASA would use D? Come on!
NASA uses C, C++, Ada and assembler for space hardware.

http://www.spacenewsfeed.co.uk/2004/11July2004_6.html
http://vl.fmnet.info/safety/lang-survey.html

That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that:

1) make it impossible to ignore situations the programmer did not think of
2) the bias is to force bugs to show themselves in an obvious manner
3) not making it easy for the programmer to insert dead code to "shut up the compiler"

This is why the return and the switch defaults are the way they are.
Of course, this is all a moot point if you compile using the -release switch. In that case, one gets neither run time nor compile time messages, but the bugs still remain. So a really lazy/ignorant/impatient/stupid coder just always compiles using -release and never gets nagged by the compiler. -- Derek Melbourne, Australia 7/02/2005 10:36:21 AM
Feb 06 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:cu69r8$2gqi$1 digitaldaemon.com...
 On Fri, 4 Feb 2005 18:53:15 -0800, Walter wrote:

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cu15pb$jqf$1 digitaldaemon.com...
 Guys, if we persist with the mechanism of no compile-time detection 
 of
 return paths, and rely on the runtime exceptions, do we really think
 NASA would use D? Come on!
NASA uses C, C++, Ada and assembler for space hardware. http://www.spacenewsfeed.co.uk/2004/11July2004_6.html http://vl.fmnet.info/safety/lang-survey.html That said, you and I have different ideas on what constitutes support for writing reliable code. I think it's better to have mechanisms in the language that: 1) make it impossible to ignore situations the programmer did not think of 2) the bias is to force bugs to show themselves in an obvious manner 3) not making it easy for the programmer to insert dead code to "shut up the compiler" This is why the return and the switch defaults are the way they are.
Of course, this is all a moot point if you compile using the -release switch. In that case, one gets neither run time nor compile time messages, but the bugs still remain. So a really lazy/ignorant/impatient/stupid coder just always compiles using -release and never gets nagged by the compiler.
Shit! Is that so? I hadn't cottoned on to that. If that is indeed the case, then this whole thing is just a joke. Wake me up when things get sane again.
Feb 06 2005
parent "Unknown W. Brackets" <unknown simplemachines.org> writes:
I agree this behavior is not nice at all, unless you've never seen 
programs like "Windows" and believe that code can be totally bug free 
when compiled in release mode.

I might suggest making it an error when in release mode, but obviously 
that has flaws: not only *could* someone be MORE keen to shut the 
compiler up when switching to release, but it would be very annoying if 
code was compiling along fine and suddenly wouldn't compile using 
-release :/. So, I suppose that won't work.

Personally, I think the asserts should go away and instead we should get 
your all-knowing exceptions thrown, whether in debug or release, with 
line and file information (assuming the case where no compile-time 
error/warning is shown). Alas, I dream.

-[Unknown]


 Wake me up when things get sane again.
Feb 06 2005
prev sibling next sibling parent Paul Bonser <misterpib gmail.com> writes:
Paul Bonser wrote:
 Some mention of license problems got me thinking about this piece of 
 standard Sun boilerplate:
 
 "Nuclear, missile, chemical biological weapons or nuclear maritime end 
 uses or end users, whether direct or indirect, are strictly prohibited."
 
 Are we going to have that kind of restrictions on D, or will we be free 
 to use it to guide weapons of mass destruction? :P
 
Leave it to you guys to take a perfectly good semi-off-topic thread and bring it onto a topic :P -- -PIB -- "C++ also supports the notion of *friends*: cooperative classes that are permitted to see each other's private parts." - Grady Booch
Feb 07 2005
prev sibling parent reply Paul Bonser <misterpib gmail.com> writes:
I'm proud to have fathered such a successful thread...

-- 
-PIB

--
"C++ also supports the notion of *friends*: cooperative classes that
are permitted to see each other's private parts." - Grady Booch
Feb 22 2005
parent John Reimer <brk_6502 yahoo.com> writes:
Paul Bonser wrote:
 
 
 I'm proud to have fathered such a successful thread...
 
He he... well I don't think this is just /a/ thread... it's a multi-thread. Hard to believe there can be so many topics on one. :-) - John R.
Feb 22 2005