www.digitalmars.com

digitalmars.D - That override keyword ...

reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
I really like the override keyword. It works well as a reminder when you're
refactoring classes. However, I'd like to see it extended such that it
becomes /required/. This would be in keeping with its current meaning, and
may also resolve some of the issues surrounding the need to use an alias to
pull in superclass methods of the same basic name (which is criminal in my
opinion, but a different battle).

Anyway; if override were required it would catch a number of subtle bugs in
derived classes. For example, I was refactoring some classes in Mango and
decided to rename a couple of methods to pause() and resume() because that
suited their behavior much better than the previous names. A few days later
I started noticing odd program-termination behaviour, and just couldn't
track it down. The MSVC debugger couldn't even help me.

Turns out that the class with the renamed methods was three levels removed
from a base class of (you guessed it) ... Thread. I had inadvertently
overridden the Thread.pause() and Thread.resume().  On win32 it didn't cause
a disaster, but on linux it caused segfaults during shutdown and/or thread
termination. Fancy that :-)
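[The trap described above can be sketched in Java terms, since it is identical there; all class names below are invented for illustration, and Java's @Override annotation is only an opt-in analogue of D's override keyword:]

```java
// Hypothetical sketch of the accidental-override trap, in Java terms
// (names invented for illustration). WorkerThread's author believes
// pause() is a brand-new method, but a base class further up the
// hierarchy already defines it -- and the language overrides silently.
class BaseThread {
    public String pause() { return "BaseThread.pause: suspend OS thread"; }
}

class WorkerThread extends BaseThread {
    // Silently overrides BaseThread.pause(); no keyword required,
    // no warning issued.
    public String pause() { return "WorkerThread.pause: stop the work queue"; }
}

class Accident {
    public static void main(String[] args) {
        BaseThread t = new WorkerThread();
        // Runtime code that calls t.pause() expecting to suspend the
        // OS thread now runs the unrelated work-queue logic instead.
        System.out.println(t.pause());
    }
}
```

A mandatory override keyword would turn this silent rebinding into a compile-time error at the point of the rename.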

Now this is in my own library. Imagine the crap that will inevitably hit the
fan in a long-term maintenance scenario? I think this is a great opportunity
for D to shine in this arena; typing in the override keyword is no biggie
for overriding methods, given the potential two-way benefit (refactoring and
inadvertent-override).

I think it should be mandated by the compiler; an overriding method without
the override keyword should result in a compile time error. What say you?

- Kris
Jul 17 2004
next sibling parent reply "Matthew Wilson" <admin.hat stlsoft.dot.org> writes:
You know me, the more strictly maintenance is supported, the better.

Of course, you realise it's not going to happen, don't you ...

"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdatji$17fc$1 digitaldaemon.com...
 I really like the override keyword. It works well as a reminder when you're
 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/. This would be in keeping with its current meaning, and
 may also resolve some of the issues surrounding the need to use an alias to
 pull in superclass methods of the same basic name (which is criminal in my
 opinion, but a different battle).

 Anyway; if override were required it would catch a number of subtle bugs in
 derived classes. For example, I was refactoring some classes in Mango and
 decided to rename a couple of methods to pause() and resume() because that
 suited their behavior much better than the previous names. A few days later
 I started noticing odd program-termination behaviour, and just couldn't
 track it down. The MSVC debugger couldn't even help me.

 Turns out that the class with the renamed methods was three levels removed
 from a base class of (you guessed it) ... Thread. I had inadvertently
 overridden the Thread.pause() and Thread.resume().  On win32 it didn't cause
 a disaster, but on linux it caused segfaults during shutdown and/or thread
 termination. Fancy that :-)

 Now this is in my own library. Imagine the crap that will inevitably hit the
 fan in a long-term maintenance scenario? I think this is a great opportunity
 for D to shine in this arena; typing in the override keyword is no biggie
 for overriding methods, given the potential two-way benefit (refactoring and
 inadvertent-override).

 I think it should be mandated by the compiler; an overriding method without
 the override keyword should result in a compile time error. What say you?

 - Kris

Jul 17 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Matthew Wilson" wrote in message
 You know me, the more strictly maintenance is supported, the better.
 Of course, you realise it's not going to happen, don't you ...

Any particular reason why not Matthew? After all, it's a simple change that makes D a more robust language for commercial usage. I've yet to hear a realistic negative against the notion ...

The obvious problem with the compiler making such an enforcement is that of backward compatibility. As it stands, the body of D code right now is small enough to manage such a change. That will not (hopefully) be the case with a version 2 of the language. Now is the time to do it, if ever. Personally I'd be more than glad to spend a half hour or so patching the 1.2MB of Mango source given the potential time-savings and lessened headaches for both myself and for everyone who ever uses (or maintains) Mango down the road. I would have thought you'd be a staunch advocate given that you're developing a toolkit also <g>

In addition to catching refactoring and inadvertent-override problems, a third benefit relates to adding methods to say, a library base-class, where you don't know what potential subclass method names might be. In such a scenario the compiler would at least pinpoint where the new library version conflicts with the 'client' method names (when the client compiles against the new codebase). That would also be highly beneficial IMHO.

Three real benefits; and almost free. Not to mention a possibly safe passage out of the alias quagmire WRT overloaded method names.

What sayeth thou? And thee, Walter?
Jul 17 2004
next sibling parent Andy Friesen <andy ikagames.com> writes:
Kris wrote:
 "Matthew Wilson" wrote in message
 
You know me, the more strictly maintenance is supported, the better.
Of course, you realise it's not going to happen, don't you ...

Any particular reason why not Matthew? After all, it's a simple change that makes D a more robust language for commercial usage. I've yet to hear a realistic negative against the notion ...

The obvious problem with the compiler making such an enforcement is that of backward compatibility. As it stands, the body of D code right now is small enough to manage such a change. That will not (hopefully) be the case with a version 2 of the language. Now is the time to do it, if ever. Personally I'd be more than glad to spend a half hour or so patching the 1.2MB of Mango source given the potential time-savings and lessened headaches for both myself and for everyone who ever uses (or maintains) Mango down the road. I would have thought you'd be a staunch advocate given that you're developing a toolkit also <g>

In addition to catching refactoring and inadvertent-override problems, a third benefit relates to adding methods to say, a library base-class, where you don't know what potential subclass method names might be. In such a scenario the compiler would at least pinpoint where the new library version conflicts with the 'client' method names (when the client compiles against the new codebase). That would also be highly beneficial IMHO.

Three real benefits; and almost free. Not to mention a possibly safe passage out of the alias quagmire WRT overloaded method names.

What sayeth thou? And thee, Walter?

I think requiring the override keyword is a great idea. C# does exactly this, and it works fantastically.

The only way this could possibly cause any pain is when you forget to use the keyword on something you are intentionally overriding. This is pretty minuscule in the face of all the headaches it can save: the compiler can trivially generate a useful error message, and it only takes one extra keyword to fix it.

With respect to backwards compatibility, there will never be a better time to change D than right now. Ever. Changing D after 1.0 will have to be done in a much more careful manner once that commitment to stability has been made.

-- andy
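[A sketch of the C# behaviour in Java terms, since Java's @Override annotation gives the same check in opt-in form; the class names here are invented for illustration:]

```java
// Opt-in version of the check: annotating an intended override lets
// the compiler verify that it really overrides something.
class Pausable {
    public String pause() { return "Pausable.pause"; }
}

class PoolWorker extends Pausable {
    @Override                        // compiler-verified override
    public String pause() { return "PoolWorker.pause"; }

    // If pause() were renamed in Pausable, or misspelled here, the
    // annotated method would override nothing and compilation would
    // fail, e.g.:
    //
    //   @Override
    //   public String pasue() { ... }  // error: method does not override
}
```

C# (and the proposal in this thread) makes the check mandatory in both directions: overriding *without* the keyword is an error too.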
Jul 17 2004
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdbtc5$1hum$1 digitaldaemon.com...
 "Matthew Wilson" wrote in message
 You know me, the more strictly maintenance is supported, the better.
 Of course, you realise it's not going to happen, don't you ...

Any particular reason why not Matthew? After all, it's a simple change that makes D a more robust language for commercial usage. I've yet to hear a realistic negative against the notion ...

The obvious problem with the compiler making such an enforcement is that of backward compatibility. As it stands, the body of D code right now is small enough to manage such a change. That will not (hopefully) be the case with a version 2 of the language. Now is the time to do it, if ever. Personally I'd be more than glad to spend a half hour or so patching the 1.2MB of Mango source given the potential time-savings and lessened headaches for both myself and for everyone who ever uses (or maintains) Mango down the road. I would have thought you'd be a staunch advocate given that you're developing a toolkit also <g>

I already said I want it. But I'm also a realist, and experience with D is that very few things which constrain the developer towards robustness and maintenance get in. Or am I being a cynic?
 In addition to catching refactoring and inadvertent-override problems, a
 third benefit relates to adding methods to say, a library base-class, where
 you don't know what potential subclass method names might be. In such a
 scenario the compiler would at least pinpoint where the new library version
 conflicts with the 'client' method names (when the client compiles against
 the new codebase). That would also be highly beneficial IMHO.

 Three real benefits; and almost free. Not to mention a possibly safe passage
 out of the alias quagmire WRT overloaded method names.

 What sayeth thou? And thee, Walter?

For the third time. Let's have it. Just don't go holding your breath.

And on that note of optimism, I must go away and write a plea for my own new keyword which I believe is absolutely necessary to the success of D(TL): autotype.
Jul 17 2004
next sibling parent Regan Heath <regan netwin.co.nz> writes:
On Sun, 18 Jul 2004 06:32:15 +1000, Matthew 
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 I already said I want it. But I'm also a realist, and experience with D 
 is that
 very few things which constrain the developer towards robustness and 
 maintenance
 get in. Or am I being a cynic?

You're being a cynic. I prefer optimism at all times, perhaps I'm just a dreamer ;o)

Regan

--
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 17 2004
prev sibling parent "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Matthew" wrote in message...
 I already said I want it. But I'm also a realist, and experience with D is

 very few things which constrain the developer towards robustness and

 get in. Or am I being a cynic?

Yep; interfaces made it ~ and I was just being an arse <g>
Jul 17 2004
prev sibling next sibling parent reply teqDruid <me teqdruid.com> writes:
I think this is a great idea.

John

On Sat, 17 Jul 2004 03:06:53 -0700, Kris wrote:

 I really like the override keyword. It works well as a reminder when
 you're refactoring classes. However, I'd like to see it extended such that
 it becomes /required/. This would be in keeping with its current meaning,
 and may also resolve some of the issues surrounding the need to use an
 alias to pull in superclass methods of the same basic name (which is
 criminal in my opinion, but a different battle).
 
 Anyway; if override were required it would catch a number of subtle bugs
 in derived classes. For example, I was refactoring some classes in Mango
 and decided to rename a couple of methods to pause() and resume() because
 that suited their behavior much better than the previous names. A few days
 later I started noticing odd program-termination behaviour, and just
 couldn't track it down. The MSVC debugger couldn't even help me.
 
 Turns out that the class with the renamed methods was three levels removed
 from a base class of (you guessed it) ... Thread. I had inadvertently
 overridden the Thread.pause() and Thread.resume().  On win32 it didn't
 cause a disaster, but on linux it caused segfaults during shutdown and/or
 thread termination. Fancy that :-)
 
 Now this is in my own library. Imagine the crap that will inevitably hit
 the fan in a long-term maintenance scenario? I think this is a great
 opportunity for D to shine in this arena; typing in the override keyword
 is no biggie for overriding methods, given the potential two-way benefit
 (refactoring and inadvertent-override).
 
 I think it should be mandated by the compiler; an overriding method
 without the override keyword should result in a compile time error. What
 say you?
 
 - Kris

Jul 17 2004
parent "Blandger" <zeroman aport.ru> writes:
"teqDruid" <me teqdruid.com> wrote in message
news:pan.2004.07.17.22.24.00.469883 teqdruid.com...
 I think this is a great idea.

Take my vote too. +1
Jul 18 2004
prev sibling next sibling parent reply Derek <derek psyc.ward> writes:
On Sat, 17 Jul 2004 03:06:53 -0700, Kris wrote:

 I really like the override keyword. It works well as a reminder when you're
 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/. This would be in keeping with its current meaning, and
 may also resolve some of the issues surrounding the need to use an alias to
 pull in superclass methods of the same basic name (which is criminal in my
 opinion, but a different battle).
 
 Anyway; if override were required it would catch a number of subtle bugs in
 derived classes. For example, I was refactoring some classes in Mango and
 decided to rename a couple of methods to pause() and resume() because that
 suited their behavior much better than the previous names. A few days later
 I started noticing odd program-termination behaviour, and just couldn't
 track it down. The MSVC debugger couldn't even help me.
 
 Turns out that the class with the renamed methods was three levels removed
 from a base class of (you guessed it) ... Thread. I had inadvertently
 overridden the Thread.pause() and Thread.resume().  On win32 it didn't cause
 a disaster, but on linux it caused segfaults during shutdown and/or thread
 termination. Fancy that :-)
 
 Now this is in my own library. Imagine the crap that will inevitably hit the
 fan in a long-term maintenance scenario? I think this is a great opportunity
 for D to shine in this arena; typing in the override keyword is no biggie
 for overriding methods, given the potential two-way benefit (refactoring and
 inadvertent-override).
 
 I think it should be mandated by the compiler; an overriding method without
 the override keyword should result in a compile time error. What say you?
 

As I understand the issue, the current setup is that the compiler may do something that the coder is totally unaware of. There is not even an 'information' message (let's not call it a warning) to tell the coder what's going on.

If this understanding is correct, then I support the idea of the _override_ keyword being mandatory. It will cause fewer bugs in code, without a doubt, and at very little cost to the coder.

The general principle I have is that a computer language's primary purpose is to *help* people create correctly functioning executables. That is, the emphasis is on the "helping" role.

--
Derek
Melbourne, Australia
Jul 17 2004
parent reply Juanjo Álvarez <juanjuxNO SPAMyahoo.es> writes:
Derek wrote:


 If this understanding is correct, then I support the idea of the _override_
 keyword being mandatory. It will cause fewer bugs in code, without a
 doubt, and at very little cost to the coder.

I totally agree; after all, it's not like the burden of Java forcing programmers to explicitly declare all the exceptions that a method can throw (as an example) but a simpler thing: want to override? Use "override". Makes sense.
Jul 17 2004
parent reply teqDruid <me teqdruid.com> writes:
On Sun, 18 Jul 2004 02:14:59 +0200, Juanjo Álvarez wrote:

 Derek wrote:
 
 
 If this understanding is correct, then I support the idea of the
 _override_ keyword being mandatory. It will cause fewer bugs in
 code, without a doubt, and at very little cost to the coder.

I totally agree; after all, it's not like the burden of Java forcing programmers to explicitly declare all the exceptions that a method can throw (as an example) but a simpler thing: want to override? Use "override". Makes sense.

Actually, although having to continually write throws ... in my code is a pain, I rather like that Java requires the caller to write a try-catch block. While this is also a pain, it makes for better, more stable code. I believe it's a general rule in programming that writing stable code to deal with all situations is a pain in the ass.
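[A minimal sketch of the Java mechanism under discussion, with invented names: a checked exception must either be caught or declared in the caller's own throws clause, or the code does not compile.]

```java
// A checked exception: every caller of parseDigit() must handle it.
class ParseFailure extends Exception {
    ParseFailure(String msg) { super(msg); }
}

class Parser {
    // The throws clause is part of the signature; omitting both the
    // try-catch and a throws clause at the call site is a
    // compile-time error -- that is the "checked" part.
    static int parseDigit(String s) throws ParseFailure {
        if (s.length() != 1 || !Character.isDigit(s.charAt(0)))
            throw new ParseFailure("not a single digit: " + s);
        return s.charAt(0) - '0';
    }
}

class CheckedDemo {
    public static void main(String[] args) {
        try {
            System.out.println(Parser.parseDigit("7"));
            System.out.println(Parser.parseDigit("xy"));
        } catch (ParseFailure e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```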
Jul 19 2004
next sibling parent Berin Loritsch <bloritsch d-haven.org> writes:
teqDruid wrote:

 On Sun, 18 Jul 2004 02:14:59 +0200, Juanjo Álvarez wrote:
 
 
Derek wrote:



If this understanding is correct, then I support the idea of the
_override_ keyword being mandatory. It will cause fewer bugs in
code, without a doubt, and at very little cost to the coder.

I totally agree; after all, it's not like the burden of Java forcing programmers to explicitly declare all the exceptions that a method can throw (as an example) but a simpler thing: want to override? Use "override". Makes sense.

Actually, although having to continually write throws ... in my code is a pain, I rather like that Java requires the caller to write a try-catch block. While this is also a pain, it makes for better, more stable code. I believe it's a general rule in programming that writing stable code to deal with all situations is a pain in the ass.

I do like checked exceptions, but I have to admit that many times an API defines a checked exception where a runtime exception would have been better. They should be few and far between.
Jul 19 2004
prev sibling parent reply Andy Friesen <andy ikagames.com> writes:
teqDruid wrote:

 On Sun, 18 Jul 2004 02:14:59 +0200, Juanjo Álvarez wrote:
 
Derek wrote:

If this understanding is correct, then I support the idea of the
_override_ keyword being mandatory. It will cause fewer bugs in
code, without a doubt, and at very little cost to the coder.

I totally agree; after all, it's not like the burden of Java forcing programmers to explicitly declare all the exceptions that a method can throw (as an example) but a simpler thing: want to override? Use "override". Makes sense.

Actually, although having to continually write throws ... in my code is a pain, I rather like that Java requires the caller to write a try-catch block. While this is also a pain, it makes for better, more stable code.

There's a *great* interview on artima with Anders Hejlsberg on this issue. <http://www.artima.com/intv/handcuffs.html>

Basically, it amounts to two big things: programmers are lazy oafs (it takes one to know one) and will circumvent checked exceptions, and that most people just want the exception to propagate to a toplevel error handler, so the error can be reported to the user in one place. Try/finally blocks ensure that everything gets released properly on such an occasion.

Besides, all that error checking is a lot of code. The whole point of exceptions is to make sure that error checking is NOT a lot of code.
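[The two halves of that argument in a short Java sketch, with invented names: the exception travels unchecked to a single toplevel handler, while a finally block guarantees the resource is released during unwinding.]

```java
import java.util.ArrayList;
import java.util.List;

class Unwind {
    static List<String> log = new ArrayList<>();

    static void useResource() {
        log.add("open");
        try {
            throw new RuntimeException("boom");  // unchecked: no local catch needed
        } finally {
            log.add("close");                    // still runs while unwinding
        }
    }

    public static void main(String[] args) {
        try {
            useResource();
        } catch (RuntimeException e) {           // one toplevel handler
            log.add("report: " + e.getMessage());
        }
        System.out.println(log);                 // [open, close, report: boom]
    }
}
```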
 I believe it's a general rule in programming that writing stable code to
 deal with all situations is a pain in the ass.

I feel pretty confident saying that, if it's a pain in the ass, then the tools didn't do their job. (in their defense, they have hard jobs: there are lots of common problems for which there are no sufficiently powerful tools)

-- andy
Jul 19 2004
next sibling parent reply Hauke Duden <H.NS.Duden gmx.net> writes:
Andy Friesen wrote:
 teqDruid wrote:
 
 On Sun, 18 Jul 2004 02:14:59 +0200, Juanjo Álvarez wrote:

 Derek wrote:

 If this understanding is correct, then I support the idea of the
 _override_ keyword being mandatory. It will cause fewer bugs in
 code, without a doubt, and at very little cost to the coder.

I totally agree; after all, it's not like the burden of Java forcing programmers to explicitly declare all the exceptions that a method can throw (as an example) but a simpler thing: want to override? Use "override". Makes sense.

Actually, although having to continually write throws ... in my code is a pain, I rather like that Java requires the caller to write a try-catch block. While this is also a pain, it makes for better, more stable code.

There's a *great* interview on artima with Anders Hejlsberg on this issue. <http://www.artima.com/intv/handcuffs.html> Basically, it amounts to two big things: programmers are lazy oafs (it takes one to know one) and will circumvent checked exceptions, and that most people just want the exception to propagate to a toplevel error handler, so the error can be reported to the user in one place. Try/finally blocks ensure that everything gets released properly on such an occasion. Besides, all that error checking is a lot of code. The whole point of exceptions is to make sure that error checking is NOT a lot of code.

Apart from this, checked exceptions are not compatible with interfaces. The whole point of an interface is to abstract from the implementation. But if you have to specify which kinds of errors can occur then you have to foresee any error that could occur in any future implementation - which is impossible, of course.

Hauke
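[Hauke's point in a short Java sketch, with invented names: the interface fixes its throws list up front, so an implementation with an unforeseen failure mode cannot surface it as a checked exception and has to smuggle it out unchecked.]

```java
import java.io.IOException;
import java.io.UncheckedIOException;

interface Store {
    String load(String key);   // no checked exceptions declared
}

class NetworkStore implements Store {
    @Override
    public String load(String key) {
        try {
            // stand-in for real I/O that can fail
            if (key.isEmpty()) throw new IOException("connection lost");
            return "value-for-" + key;
        } catch (IOException e) {
            // Cannot add "throws IOException" without breaking the
            // interface contract, so it must be wrapped unchecked.
            throw new UncheckedIOException(e);
        }
    }
}
```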
Jul 19 2004
parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Hauke Duden wrote:
 Apart from this, checked exceptions are not compatible with interfaces. 
 The whole point of an interface is to abstract from the implementation. 
 But if you have to specify which kinds of errors can occur then you have 
 to foresee any error that could occur in any future implementation - 
 which is impossible, of course.

Huh? Saying this is akin to saying that interfaces shouldn't specify return values.
Jul 19 2004
prev sibling next sibling parent reply Juanjo Álvarez <juanjuxNO SPAMyahoo.es> writes:
Andy Friesen wrote:


 
 There's a *great* interview on artima with Anders Hejlsberg on this
 issue.  <http://www.artima.com/intv/handcuffs.html>

True, great interview. I'm somewhere in the middle on checked exceptions. I think they could be one of the places where (with some parameter, like -Wuncatched or -Euncatched) it could make sense to have compiler warnings, or just errors, like:

"Warning||Error: method cl.foo (line 300) can throw FooException and you're not catching it."

Then you could decide if you need to catch the exception or not (because depending on how you use cl.foo it could be impossible to trigger FooException, or you just want the program to abort in that case). Wouldn't that be nice?

In fact, one of the projects on my long "TODO" list is a program (or a patch to pychecker) to show these warnings on Python code.
Jul 19 2004
next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Juanjo Álvarez wrote:

 Andy Friesen wrote:
There's a *great* interview on artima with Anders Hejlsberg on this
issue.  <http://www.artima.com/intv/handcuffs.html>

True, great interview. I'm somewhere in the middle on checked exceptions. I think they could be one of the places where (with some parameter, like -Wuncatched or -Euncatched) it could make sense to have compiler warnings, or just errors, like: "Warning||Error: method cl.foo (line 300) can throw FooException and you're not catching it." Then you could decide if you need to catch the exception or not (because depending on how you use cl.foo it could be impossible to trigger FooException, or you just want the program to abort in that case). Wouldn't that be nice? In fact, one of the projects on my long "TODO" list is a program (or a patch to pychecker) to show these warnings on Python code.

It's entirely reasonable to plow through a whole algorithm, expecting any thrown exceptions to be handled by the caller. What's less reasonable is letting open sockets dangle around because of it. (is there any reason at all to want this to occur?)

For this reason, it would be better if a warning was raised when the compiler can verifiably prove that such a resource would be left hanging in the case of an uncaught exception. This gives us the chance to make sure we declared our autos as autos and put our finally clauses in place.

(the trick is telling the compiler what's expensive. A pragma might be overkill for such a specific thing. Maybe it's worth it)

-- andy
Jul 19 2004
parent Juanjo Álvarez <juanjuxNO SPAMyahoo.es> writes:
Andy Friesen wrote:

 It's entirely reasonable to plow through a whole algorithm, expecting
 any thrown exceptions to be handled by the caller.  What's less
 reasonable is letting open sockets dangle around because of it.  (is
 there any reason at all to want this to occur?)

That would be a killer feature of this "exception" detector. The problem is, how does the compiler (or the external tool) know what is a "cleanable" resource and what is not?
 For this reason, it would be better if a warning was raised when the
 compiler can verifiably prove that such a resource would be left hanging
 in the case of an uncaught exception.

The best solution would probably be that all those resources are implemented as objects with proper destructors freeing the resource. Then the GC (for non-auto types) does the rest. Of course this will not always be the case.
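[That "resource object that frees itself" idea sketched in Java terms, with invented names: a type implementing AutoCloseable, used in a try-with-resources block -- roughly the role D's auto classes play -- is guaranteed to be closed even when an exception escapes the block.]

```java
import java.util.ArrayList;
import java.util.List;

class TrackedSocket implements AutoCloseable {
    static List<String> events = new ArrayList<>();
    TrackedSocket()               { events.add("open"); }
    @Override public void close() { events.add("close"); }
}

class Cleanup {
    public static void main(String[] args) {
        try (TrackedSocket s = new TrackedSocket()) {
            throw new RuntimeException("failure mid-use");
        } catch (RuntimeException e) {
            TrackedSocket.events.add("handled");
        }
        // close() ran during unwinding, before the handler:
        System.out.println(TrackedSocket.events);  // [open, close, handled]
    }
}
```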
 (the trick is telling the compiler what's expensive.  A pragma might be
 overkill for such a specific thing.  Maybe it's worth it)

I think it would be worth it, but I see it more as a 2.0 feature.
Jul 19 2004
prev sibling parent Berin Loritsch <bloritsch d-haven.org> writes:
Juanjo Álvarez wrote:

 Andy Friesen wrote:
 
 
  
 
There's a *great* interview on artima with Anders Hejlsberg on this
issue.  <http://www.artima.com/intv/handcuffs.html>

True, great interview. I'm somewhere in the middle on checked exceptions. I think they could be one of the places where (with some parameter, like -Wuncatched or -Euncatched) it could make sense to have compiler warnings, or just errors, like: "Warning||Error: method cl.foo (line 300) can throw FooException and you're not catching it." Then you could decide if you need to catch the exception or not (because depending on how you use cl.foo it could be impossible to trigger FooException, or you just want the program to abort in that case). Wouldn't that be nice?

Having experience with a language that supports checked and unchecked exceptions, I must say that if you are going to err at all, do it on the side of unchecked exceptions. I like checked exceptions, but I wouldn't want to have to check all of them all the time. The only time a checked exception should be considered, IMO, is when something happens outside the control of the runtime (in the D case, something that happens with the OS/device interaction).

I have worked on several projects where too many exceptions were required to be checked, and the only real gain from it was code bloat. While you intend for people to be better programmers, ultimately, being the lazy beasts that we are, the cost is too high, so people circumvent dealing with exceptions altogether.

If there are going to be checked and unchecked exceptions they should be done intelligently. For example, all the Java formatters (DateFormatter, CurrencyFormatter, MessageFormatter, etc.) require the user to catch a ParseException. The only time that might be necessary is if we are dealing with user input--nine times out of ten, the strings being formatted are already debugged. However, if you do have a checked exception you want it to be checked all the time.

I would learn from Java and provide a mechanism to handle exceptions that terminate a thread. In the more recent versions of Java there is a callback for Thread termination to deal with exceptions not handled in code. That will provide a decent mechanism to deal with things that slip through the cracks in user code.
Jul 20 2004
prev sibling parent teqDruid <me teqdruid.com> writes:
On Mon, 19 Jul 2004 11:02:10 -0700, Andy Friesen wrote:
 Actually, although having to continually write throws ... in my code is
 a pain, I rather like that Java requires the caller to write a try-catch
 block.  While this is also a pain, it makes for better, more stable
 code.

There's a *great* interview on artima with Anders Hejlsberg on this issue. <http://www.artima.com/intv/handcuffs.html> Basically, it amounts to two big things: programmers are lazy oafs (it takes one to know one) and will circumvent checked exceptions, and that most people just want the exception to propagate to a toplevel error handler, so the error can be reported to the user in one place. Try/finally blocks ensure that everything gets released properly on such an occasion. Besides, all that error checking is a lot of code. The whole point of exceptions is to make sure that error checking is NOT a lot of code.

In my experience checked exceptions force you not to put a lot of checking code everywhere, but to put checking code at the right locations, assuming you've done a decent job designing your classes and interfaces. The biggest code bloat, and the pain in the ass, is adding throws clauses to method signatures. This is what having non-checked exceptions rids one of.

The biggest problem I have with non-checked exceptions is that there's no way to tell what exceptions might be thrown without looking at the source (unless it's documented, which I guarantee it won't be), so unless you're going to be catching the general Exception a lot, there's a strong possibility for lots of exceptions to propagate up to main(). How does one get around this?
 
 I believe it's a general rule in programming that writing stable code to
 deal with all situations is a pain in the ass.

I feel pretty confident saying that, if it's a pain in the ass, then the tools didn't do their job. (in their defense, they have hard jobs: there are lots of common problems for which there are no sufficiently powerful tools)

This is true. The tools available for Java make dealing with checked exceptions MUCH easier, so I use them without thinking twice. However, as the current eclipseD programmer, I'll admit that the tools for D right now suck. This will probably be true for some time. It's taken years for the good Java tools to emerge, and the C/C++ tools (I feel) are still pretty bad (at least compared to the Java ones).

John
Jul 19 2004
prev sibling next sibling parent reply "Vathix" <vathixSpamFix dprogramming.com> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdatji$17fc$1 digitaldaemon.com...
 I really like the override keyword. It works well as a reminder when you're
 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/. This would be in keeping with its current meaning, and
 may also resolve some of the issues surrounding the need to use an alias to
 pull in superclass methods of the same basic name (which is criminal in my
 opinion, but a different battle).

 Anyway; if override were required it would catch a number of subtle bugs in
 derived classes. For example, I was refactoring some classes in Mango and
 decided to rename a couple of methods to pause() and resume() because that
 suited their behavior much better than the previous names. A few days later
 I started noticing odd program-termination behaviour, and just couldn't
 track it down. The MSVC debugger couldn't even help me.

 Turns out that the class with the renamed methods was three levels removed
 from a base class of (you guessed it) ... Thread. I had inadvertently
 overridden the Thread.pause() and Thread.resume().  On win32 it didn't cause
 a disaster, but on linux it caused segfaults during shutdown and/or thread
 termination. Fancy that :-)

 Now this is in my own library. Imagine the crap that will inevitably hit the
 fan in a long-term maintenance scenario? I think this is a great opportunity
 for D to shine in this arena; typing in the override keyword is no biggie
 for overriding methods, given the potential two-way benefit (refactoring and
 inadvertent-override).

 I think it should be mandated by the compiler; an overriding method without
 the override keyword should result in a compile time error. What say you?

 - Kris

I always use override; this sounds good to me.
Jul 17 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cdcbrr$1np2$1 digitaldaemon.com>, Vathix says...
I always use override; this sounds good to me.

And I. (Well, almost - see below). Now, pardon me for taking this reasoning one step further, but, if override is to be compulsory, then we don't actually need the keyword, do we?

That is, if it is to be compulsory that a function in a derived class which has the same name as a function in a base class must have the same signature (which is what "override" dictates), then we might as well simply have the compiler enforce this at all times - in which case the keyword "override" becomes entirely redundant. We can dispense with it. Throw it away.

But ... might there be times when you /want/ a subclass to provide a same-name function with a different signature? I can think of a good example - my Int class sensibly overrides (with the override keyword in place) the function opEquals(Object) ... but it /also/ has a function opEquals(int), allowing you to write stuff like:

#    Int n;
#    if (n == 4)  // calls opEquals(int)

Now, I agree with majority opinion here, in the sense that overriding is what you /normally/ want to do. But, like everything else, sometimes there are exceptions.

Maybe another approach might work. How about this:
(1) Ditch the "override" keyword - member functions override by default.
(2) Introduce a new keyword to allow same-name-different-signature functions.
Just as a thought, the keyword "new" springs to mind as possibly appropriate.

Arcane Jill
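[Editorial sketch: the same override-plus-overload pattern Jill describes can be shown in C++ (an analogue only - the thread is about D, and the names here are hypothetical). The derived class both overrides the base virtual and adds a same-name function with a different signature:]

```cpp
#include <cassert>

// Base class with a virtual comparison, standing in for D's Object.opEquals.
struct Object {
    virtual ~Object() {}
    virtual bool equalsObj(const Object& o) const { return this == &o; }
};

// Stand-in for Jill's Int class.
struct Int : Object {
    int value;
    explicit Int(int v) : value(v) {}

    // Overrides the base signature; 'override' makes the compiler verify that.
    bool equalsObj(const Object& o) const override {
        const Int* p = dynamic_cast<const Int*>(&o);
        return p && p->value == value;
    }

    // Same name, different signature: an overload, NOT an override.
    bool equalsObj(int n) const { return value == n; }
};

// Resolves to the int overload - the convenient "if (n == 4)" case.
bool demoOverload() { Int n(4); return n.equalsObj(4); }

// Resolves to the overriding virtual via the base reference.
bool demoOverride() { Int a(4); Int b(4); const Object& r = b; return a.equalsObj(r); }
```

Note that in C++ the overload only coexists with the override because both are declared in Int; declaring only equalsObj(int) would hide the base function, which is roughly the situation D's alias requirement addresses.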
Jul 18 2004
next sibling parent Andy Friesen <andy ikagames.com> writes:
Arcane Jill wrote:
 Maybe another approach might work. How about this:
 (1) Ditch the "override" keyword - member functions override by default.
 (2) Introduce a new keyword to allow same-name-different-signature functions.
 Just as a thought, the keyword "new" springs to mind as possibly appropriate.

Both approaches would have about the same effect on the writability of the code, but it seems to me that 'override', as an explicit annotation declaring that the function supplants base class functionality, does much more for the readability.

 -- andy
Jul 18 2004
prev sibling next sibling parent "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
Ditching the "override" would lose one of the key benefits. In fact it would
lose the only benefit it has right now: namely the ability for the compiler
to point out that your method is no longer overriding a super-class
instance (that's what it does at this point). The compiler depends upon the
explicit annotation to catch this.

Lets not even begin to talk about the lack of method signature-matching
that's grossly missing under various scenarios ... that's another topic
altogether. One that I'll write an Exposé upon soon, since it needs to be
fixed before any sane commercial venture would touch the language/compiler.

 - Kris


"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cdeppu$2m2o$1 digitaldaemon.com...
 In article <cdcbrr$1np2$1 digitaldaemon.com>, Vathix says...
I always use override; this sounds good to me.

And I. (Well, almost - see below). Now, pardon me for taking this

 step further, but, if override is to be compulsory, then we don't actually

 the keyword, do we?

 That is, if it is to be compulsory that a function in a derived class

 the same name as a function in a base class must have the same signature

 is what "override" dictates), then we might as well simply have the

 enforce this at all times - in which case the keyword "override" becomes
 entirely redundant. We can dispense with it. Throw it away.

 But ... might there be times when you /want/ a subclass to provide a

 function with a different signature? I can think of a good example - my

 class sensibly overrides (with override keyword in place) the function
 opEquals(Object) ... but it /also/ has a function opEquals(int), allowing

 write stuff like:

 #    Int n;
 #    if (n == 4)  // calls opEquals(int)

 Now, I agree with majority opinion here, in the sense that overriding is

 you /normally/ want to do. But, like everything else, sometimes there are
 exceptions.

 Maybe another approach might work. How about this:
 (1) Ditch the "override" keyword - member functions override by default.
 (2) Introduce a new keyword to allow same-name-different-signature

 Just as a thought, the keyword "new" springs to mind as possibly

 Arcane Jill

Jul 18 2004
prev sibling next sibling parent "me" <memsom interalpha.co.uk> writes:
 Maybe another approach might work. How about this:
 (1) Ditch the "override" keyword - member functions override by default.
 (2) Introduce a new keyword to allow same-name-different-signature

 Just as a thought, the keyword "new" springs to mind as possibly

Delphi (ultimately where C# got its override keyword from, as Anders also designed the Delphi Pascal dialect, based on Borland's Object Pascal) uses:

"virtual" to mark a method as virtual (can be overridden)
"override" to override the method (a warning if you don't do this since Delphi 4)
"reintroduce" to redefine a method with the same name but a different signature (since Delphi 4; before that the compiler really didn't seem to care ;-)

(There's also "dynamic", which is the same as virtual, except it uses chaining rather than an explicit VMT to look up the inherited method call.) We also have an "overload" keyword for method overloading.

This makes the whole business of inheritance and polymorphism very clear. Love it or hate it, it's very hard to make a mistake.
Jul 19 2004
prev sibling parent Berin Loritsch <bloritsch d-haven.org> writes:
Arcane Jill wrote:

 In article <cdcbrr$1np2$1 digitaldaemon.com>, Vathix says...
 
I always use override; this sounds good to me.

And I. (Well, almost - see below). Now, pardon me for taking this reasoning one step further, but, if override is to be compulsory, then we don't actually need the keyword, do we?

And here is where I resonate.
 
 That is, if it is to be compulsory that a function in a derived class which has
 the same name as a function in a base class must have the same signature (which
 is what "override" dictates), then we might as well simply have the compiler
 enforce this at all times - in which case the keyword "override" becomes
 entirely redundant. We can dispense with it. Throw it away.
 
 But ... might there be times when you /want/ a subclass to provide a same-name
 function with a different signature? I can think of a good example - my Int
 class sensibly overrides (with override keyword in place) the function
 opEquals(Object) ... but it /also/ has a function opEquals(int), allowing you
to
 write stuff like:
 
 #    Int n;
 #    if (n == 4)  // calls opEquals(int)
 
 Now, I agree with majority opinion here, in the sense that overriding is what
 you /normally/ want to do. But, like everything else, sometimes there are
 exceptions.
 
 Maybe another approach might work. How about this:
 (1) Ditch the "override" keyword - member functions override by default.
 (2) Introduce a new keyword to allow same-name-different-signature functions.
 Just as a thought, the keyword "new" springs to mind as possibly appropriate.

Of course then there is the argument that we might want to *replace* a method by design. For example:

class A
{
    void foo(); // buggy code
}

class B : A
{
    void foo(); // never calls super.foo()
}

Currently this compiles and doesn't provide any info to the user invoking the compiler. This is the source of the discussion. In most cases we want the "overrides" semantics, but in a few cases we actually intend to replace the method.

Truthfully, I am always of the mindset that whatever is most common should be easiest to do, and whatever is least common should require a little more. So in this case using the "new" keyword in this context would provide semantic clues as to what is intended by design. Alternatively one could use "replaces" as a keyword, but if "new" can do it, why add another keyword?

I don't think anyone (not even me) is arguing that the behavior identified by overrides is bad. The only thing being discussed is whether it should be mandated all the time. My thinking is that if the language requires it to be used all the time, then it is probably what the default should be. Differences would be signified with a different keyword.

Obviously, the final keyword would not lose any of its semantics. Once a method is declared final it cannot (or at least should not) be able to be overridden or replaced.
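[Editorial sketch: the extend-vs-replace distinction Berin draws can be made concrete in C++ (an analogue, with hypothetical names - not D code). Both derived classes override, but only one delegates to the base implementation:]

```cpp
#include <cassert>
#include <string>

struct A {
    virtual ~A() {}
    virtual std::string foo() { return "A"; }
};

// Extension: the override calls the base implementation and builds on it.
struct Extends : A {
    std::string foo() override { return A::foo() + "+B"; }
};

// Replacement: the override never calls A::foo(); base behavior is gone.
struct Replaces : A {
    std::string foo() override { return "B-only"; }
};
```

From the signature alone the compiler cannot tell which intent the author had, which is why the thread keeps circling around an explicit marker for one of the two cases.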
Jul 20 2004
prev sibling next sibling parent reply Jonathan Leffler <jleffler earthlink.net> writes:
Kris wrote:

 I really like the override keyword. It works well as a reminder when you're
 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/. 

Check out Stroustrup 'Design and Evolution of C++' for a discussion of why override was dropped from C++. -- Jonathan Leffler #include <disclaimer.h> Email: jleffler earthlink.net, jleffler us.ibm.com Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/
Jul 17 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
Could you give us all a brief summary please Jonathan? It may not apply to D
in the same manner ...

"Jonathan Leffler" <jleffler earthlink.net> wrote in message
news:cdcf1n$1osa$2 digitaldaemon.com...
 Kris wrote:

 I really like the override keyword. It works well as a reminder when


 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/.

Check out Stroustrup 'Design and Evolution of C++' for a discussion of why override was dropped from C++. -- Jonathan Leffler #include <disclaimer.h> Email: jleffler earthlink.net, jleffler us.ibm.com Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/

Jul 17 2004
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Echoed.

I don't want to be a "Cabal" about this, but I've just scanned D&E and can find
no mention of it. Overriding is specifically indexed at pages 76 and 293,
neither
of which talk about the keyword, but rather pertain to the issue of covariant
return types.


"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdcjlr$1qor$1 digitaldaemon.com...
 Could you give us all a brief summary please Jonathan? It may not apply to D
 in the same manner ...

 "Jonathan Leffler" <jleffler earthlink.net> wrote in message
 news:cdcf1n$1osa$2 digitaldaemon.com...
 Kris wrote:

 I really like the override keyword. It works well as a reminder when


 refactoring classes. However, I'd like to see it extended such that it
 becomes /required/.

Check out Stroustrup 'Design and Evolution of C++' for a discussion of why override was dropped from C++. -- Jonathan Leffler #include <disclaimer.h> Email: jleffler earthlink.net, jleffler us.ibm.com Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/


Jul 17 2004
parent reply Jonathan Leffler <jleffler earthlink.net> writes:
Matthew wrote:

 Echoed.
 
 I don't want to be a "Cabal" about this, but I've just scanned D&E and can find
 no mention of it. Overriding is specifically indexed at pages 76 and 293,
neither
 of which talk about the keyword, but rather pertain to the issue of covariant
 return types.

Beg pardon - the keyword was overload, not override (shaky memory - overloaded, I guess :-)

p226 refers - 11.2.2 Ambiguity Control. Prior to C++ 2.0, there was a keyword 'overload' functioning as a storage class:

overload void print(int);
void print(double);

And the order of declaration controlled which function was used, which was 'trivial for implementors to get right, and was a constant source of errors and confusion. Reversing the declaration order could completely change the meaning of a piece of code: [...example...] Basically, order dependence was too error-prone. It also became a serious obstacle to the effort to evolve C++ programming towards a greater use of libraries.'

There's a whole lot more, more particularly about what changed in 2.0 and beyond. On second thoughts, it most probably does not apply to D - wrong keyword for one thing, and in any case 'override' probably applies in a different context, and without the problems. Sorry for disturbing the ether with a pointless message.
 "Kris" <someidiot earthlink.dot.dot.dot.net> wrote:
Could you give us all a brief summary please Jonathan? It may not apply to D
in the same manner ...

"Jonathan Leffler" <jleffler earthlink.net> wrote:
Kris wrote:
I really like the override keyword. It works well as a reminder when
you're
refactoring classes. However, I'd like to see it extended such that it
becomes /required/.

Check out Stroustrup 'Design and Evolution of C++' for a discussion of why override was dropped from C++.



-- Jonathan Leffler #include <disclaimer.h> Email: jleffler earthlink.net, jleffler us.ibm.com Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/
Jul 17 2004
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Jonathan Leffler" <jleffler earthlink.net> wrote in message
news:cdd54f$222s$1 digitaldaemon.com...
 Matthew wrote:

 Echoed.

 I don't want to be a "Cabal" about this, but I've just scanned D&E and can


 no mention of it. Overriding is specifically indexed at pages 76 and 293,


 of which talk about the keyword, but rather pertain to the issue of covariant
 return types.

Beg pardon - the keyword was overload, not override (shaky memory - overloaded, I guess :-) p226 refers - 11.2.2 Ambiguity Control. Prior to C++ 2.0, there was a keyword 'overload' functioning as a storage class: overload void print(int); void print(double); And the order of declaration controlled which function was used, which was 'trivial for implementors to get right, and was a constant source of errors and confusion. Reversioing the declaration order could completely change the meaning of a piece of code: [...example...] Basically, order dependence was too error-prone. It also became a serious obstacle to the effort to evolve C++ programming towards a greater use of libraries.' There's a whole lot more, more particularly about what changed in 2.0 and beyond. On second thoughts, it most probably does not apply to D - wrong keyword for one thing, and in any case 'override' probably applies in a different context, and without the problems. Sorry for disturbing the ether with a pointless message.

He he. Rather than doing a "Cabal", you actually ended up doing a "Matthew". :)

No worries from my POV. It caused me to dip back into that wonderful book, which I've not done much of in years. FWIW, I think Dr Stroustrup is something of a prescient genius, despite the overload debacle. Several of the things I've "invented" in C++ in the last couple of years have been anticipated, in one way or another, by him several years before. My "Fast String Concatenation" technique was inspired by his technique for efficient matrix operations.
 "Kris" <someidiot earthlink.dot.dot.dot.net> wrote:
Could you give us all a brief summary please Jonathan? It may not apply to D
in the same manner ...

"Jonathan Leffler" <jleffler earthlink.net> wrote:
Kris wrote:
I really like the override keyword. It works well as a reminder when
you're
refactoring classes. However, I'd like to see it extended such that it
becomes /required/.

Check out Stroustrup 'Design and Evolution of C++' for a discussion of why override was dropped from C++.



-- Jonathan Leffler #include <disclaimer.h> Email: jleffler earthlink.net, jleffler us.ibm.com Guardian of DBD::Informix v2003.04 -- http://dbi.perl.org/

Jul 18 2004
prev sibling next sibling parent reply "Bent Rasmussen" <exo bent-rasmussen.info> writes:
I like that it's optional. Perhaps an IDE could help here.

Btw. (and a bit off topic) I just found this superb resource
http://www.artima.com/articles/index.jsp?topic=intv

There's no D topic yet, but there are some very nice insights, e.g.
http://www.artima.com/intv/goldilocks3.html

Having the compiler detect non-virtual functions makes this style pleasant.
Jul 17 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
May I ask you to expound on your preference for optional please?

"Bent Rasmussen" <exo bent-rasmussen.info> wrote in message
news:cdclsk$1rlm$1 digitaldaemon.com...
 I like that its optional. Perhaps an IDE could help here.

 Btw. (and a bit off topic) I just found this superb resource
 http://www.artima.com/articles/index.jsp?topic=intv

 There's no D topic yet, but there are some very nice insigts, e.g.
 http://www.artima.com/intv/goldilocks3.html

 Having the compiler detect non-virtual functions makes this style


Jul 17 2004
parent reply "Bent Rasmussen" <exo bent-rasmussen.info> writes:
I'm lazy and haven't run into a situation as you describe, or if so, it
hasn't passed the irritation threshold, but then my programs tend not to be
large. I'd rather have an editor inform me of overrides than write
"override" next to every overridden function just in case. Personal
preference.
Jul 18 2004
parent "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
Thanks; yeah, an editor doing that would be very nice (I'm lazy too). But
from a strategic perspective it makes sense for the compiler to be strict
about it; the latter would provide a good bargaining chip for getting D into
the commercial sector (where long-term maintenance is /truly/ an issue).

- Kris

"Bent Rasmussen" <exo bent-rasmussen.info> wrote in message
news:cdeigq$2isd$1 digitaldaemon.com...
 I'm lazy and haven't run into a situation as you describe, or if so, it
 hasn't passed the irritation threshold, but then my programs tend not to

 large. I'd rather have an editor inform me of overrides than write
 "override" next to every overridden function just in case. Personal
 preference.

Jul 18 2004
prev sibling parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Perhaps we need to add the almost-keyword:
	!override
Then we have 3 types of functions:
	For good, explicit design:
		Must-be-override functions
		Must-not-be-override functions
	For rapid prototyping, and learning D:
		Unspecified functions

And, happily, we don't break any existing codebase.

Thoughts?
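[Editorial sketch: Russ's three categories roughly correspond to specifiers that C++11 later adopted (shown here as an analogue, not D syntax - C++ has no direct "!override" marker, though 'final' on the base side forbids overriding entirely):]

```cpp
#include <cassert>

struct Base {
    virtual ~Base() {}
    virtual int run()  { return 1; }
    virtual int stop() final { return 0; } // must-NOT-be-overridden: sealed in the base
};

struct Derived : Base {
    int run() override { return 2; } // must-be-override: compiler-checked annotation
    int pause() { return 3; }        // unspecified: a new function, no check either way
};
```

Renaming Base::run() would now make Derived::run() a compile error rather than a silently non-overriding function, which is exactly the refactoring safety net the thread is asking for.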
Jul 19 2004
parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Russ Lewis wrote:

 Perhaps we need to add the almost-keyword:
     !override
 Then we have 3 types of functions:
     For good, explicit design:
         Must-be-override functions
         Must-not-be-override functions
     For rapid prototyping, and learning D:
         Unspecified functions
 
 And, happily, we don't break any existing codebase.

What's wrong with having all functions/methods being overridable unless there is a keyword saying don't do it? This is something that works quite happily in the Java language. Having to explicitly say "I want this to be overridden" is quite annoying, as most of the time that is what I expect.

Of course, another C++ism that I have found annoying is that you can have multiple methods with the same signature because of different placements of "const", etc. C# makes things a bit more confusing because you can have one method act one way when accessed through one interface, and another way when accessed through a different interface.

I just like the simple one-method-to-one-signature relationship in Java. It's simple, and it isn't too hard to deal with. Just my 2 cents.
Jul 19 2004
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Berin Loritsch wrote:
 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java lanaguage.
 
 Having to explicitly say "I want this to be overriden" is quite
 annoying, as most of the time that is what I expect.

The question is not whether a function is overridABLE. It is whether the function is overridING.

Putting 'override' on a function means that it must override some base class function. This is an easy way to check that you're actually overriding a base class function.

The question at hand in this thread is whether or not there should be some way to explicitly say that you are NOT overriding a base class function. If you happen to accidentally override a base class function - and you don't know what that function is supposed to do - then you are practically guaranteed to have a bug, and you won't know about it until runtime.
Jul 19 2004
next sibling parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Russ Lewis wrote:

 Berin Loritsch wrote:
 
 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java lanaguage.

 Having to explicitly say "I want this to be overriden" is quite
 annoying, as most of the time that is what I expect.

The question is not whether a function is overridABLE. It is whether the function is overridING.

Ok. I'm used to it happening implicitly. Since I do make use of inheritance, having to state that explicitly would be annoying to me.
 
 Putting 'override' on a function means that it must override some base 
 class function.  This is an easy way to check that you're actually 
 overriding a base class function.
 
 The question at hand in this thread is whether or not there should be 
 some way to explicitly say that you are NOT overriding a base class 
 function.  If you happen to accidentally override a base class function 
 - and you don't know what that function is supposed to do - then you are 
 practically guaranteed to have a bug, and you won't know about it until 
 runtime.

When would you *not* override a base class function? I don't get when you wouldn't want that to happen.
Jul 19 2004
next sibling parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Berin Loritsch wrote:
 When would you *not* override a base class function?  I don't get when 
 you wouldn't want that to happen.

Somebody (I forget who) had a problem where he added a pause() function to one of his classes. However, he later discovered that the class Thread (which, 3 levels back, was a base class of this guy's class) also had a pause() function. So when some thread code called pause(), it got his pause() function...which definitely does NOT pause the thread.
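[Editorial sketch: the accident described above (it was Kris's, from the original post) looks like this in C++, with hypothetical names - a derived class introduces pause() without realizing a distant base class declares a virtual pause(), so internal base-class calls now dispatch to the unrelated derived version:]

```cpp
#include <cassert>
#include <string>

struct Thread {
    virtual ~Thread() {}
    virtual std::string pause() { return "thread suspended"; }
    // Base-class machinery relies on its own virtual pause().
    std::string shutdown() { return pause(); }
};

struct Middle : Thread {}; // the "three levels removed" gap

struct Worker : Middle {
    // The author meant this as a brand-new method, but it silently
    // overrides Thread::pause() three levels up.
    std::string pause() { return "worker paused its queue"; }
};
```

A call to Worker's shutdown() now runs the worker's queue-pausing code where the thread machinery expected its own suspension logic - the segfault-at-shutdown scenario in miniature.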
Jul 19 2004
parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Russ Lewis wrote:

 Berin Loritsch wrote:
 
 When would you *not* override a base class function?  I don't get when 
 you wouldn't want that to happen.

Somebody (I forget who) had a problem where he added a pause() function to one of his classes. However, he later discovered that the class Thread (which, 3 levels back, was a base class of this guy's class) also had a pause() function. So when some thread code called pause(), it got his pause() function...which definitely does NOT pause the thread.

So he did not override a method, he replaced it? He should have placed a call to super.pause() to have everything working as expected, however that is done in D.
Jul 19 2004
next sibling parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Berin Loritsch wrote:

 Russ Lewis wrote:
 
 Berin Loritsch wrote:

 When would you *not* override a base class function?  I don't get 
 when you wouldn't want that to happen.

Somebody (I forget who) had a problem where he added a pause() function to one of his classes. However, he later discovered that the class Thread (which, 3 levels back, was a base class of this guy's class) also had a pause() function. So when some thread code called pause(), it got his pause() function...which definitely does NOT pause the thread.

So he did not override a method, he replaced the method? He should have placed a call to super.pause() to have everything working as expected. However that is done in D.

What about a compiler warning/error if a method overrides a base class method, but never calls super.methodName()?
Jul 19 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cdh400$kov$3 digitaldaemon.com>, Berin Loritsch says...

Berin, I think you've misunderstood. May I take the trouble to explain things?
First, here's the status quo: The "override" keyword is /not/ compulsory. This
means that you have a choice of two different ways of writing things. Version
one:

#    class A
#    {
#        void f();
#    }
#
#    class B : A
#    {
#        void f();
#    }
#
#    // and now for some polymorphism
#    A a = new B();
#    a.f(); // calls B.f();

This works fine. And now, here's version 2:

#    class A
#    {
#        void f();
#    }
#
#    class B : A
#    {
#        override void f();
#    }
#
#    // and now for some polymorphism
#    A a = new B();
#    a.f(); // calls B.f();

The difference? Well, right now, none at all. Both versions do exactly the same
thing. BUT - suppose that, at some point in the future, some developer modifes
class A (but not class B) so that class A becomes:

#    class A
#    {
#        void f(bool b);
#    }

Now /this/ is where the override keyword comes in, because now, version 1 will
still compile, but version 2 will not. Trouble is, version 1 now contains a bug,
because polymorphism has now stopped working, contrary to the programmer's
expectations. Thus:

#    // and now for some FAILED polymorphism
#    A a = new B();
#    a.f(false); // Whoops! - Calls A.f();, not B.f()

So how is the designer of class B ever supposed to know that the base class
signature has changed? Wait for a bug report? "override" gives you a way of
being notified of the need to change your code (next time you compile the
source).

So, "override" is very, very useful, as it can help you find bugs which
otherwise you might miss for years, and the /absence/ of the "override" keyword
could be viewed as a bug waiting to happen. So some people have proposed on this
thread that the keyword be compulsory.

Another school of thought is that the compiler should always behave /as if/ the
"override" keyword had been present, so you never actually have to type it.
However, there are times when you /do/ want to provide a function with the same
name, but a different signature from, a base class. In D the philosophy seems to
be to /encourage/ people to do the "right thing", but nonetheless to allow them
to do something a bit different if they really need to. Which brings me to your
question:

When would you *not* override a base class function?  I don't get when 
you wouldn't want that to happen.

There's a really neat example of this in my class Int (unlimited precision integers). It's a class, so it derives from Object. Object defines opEquals() as having this signature:

# int opEquals(Object obj)

Just in case Walter should ever decide to change this signature, within Int I declare:

# override int opEquals(Object obj)

however, in addition to this, I /also/ provide a same-name-different-signature function, which does /not/ override any function in Object. It is this:

# int opEquals(int n)

Why would I do this? Well, because it allows users of my class to write tests such as:

# // Int n
# if (n == 42)

instead of the more cumbersome (and less efficient)

# if (n == new Int(42))

So, /sometimes/ you don't want to override, but /usually/ you do, and /usually/ it's a bug waiting to happen if you don't (because someone might change the base class without your knowledge).

With that in mind, the possibilities at our disposal are:

(1) the status quo - "override" implies an override; no keyword implies nothing at all (so it might or might not be an override)

(2) as (1), except that the keyword "override" is now compulsory whenever overriding.

(3) my suggestion - that the behavior currently enabled by the keyword "override" be the default behavior, and that, therefore, the keyword itself becomes redundant and may be dropped. A side-effect of this is that we'd then need a new keyword to do something /other/ than the default.

(4) require an explicit keyword for each and every possibility.

Changes such as this can be made (by Walter) with ease right now, but post v1.0 the syntax is supposed to be frozen in concrete, so this is almost our last chance to press for changes in syntax. Walter is amenable to suggestions, but only if he thinks they're a good idea. (He's not swayed by a mere majority vote - you actually have to /convince/ him).

Arcane Jill
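[Editorial sketch: Jill's "version 1" failure mode, rendered in C++ with hypothetical names. After the base signature gains a bool parameter, the derived f() no longer overrides anything, and calls through the base reach A's implementation; had B::f() carried an override marker, the mismatch would have been a compile error instead of a silent bug:]

```cpp
#include <cassert>
#include <string>

struct A {
    virtual ~A() {}
    // The signature some developer later changed from f() to f(bool).
    virtual std::string f(bool) { return "A::f(bool)"; }
};

struct B : A {
    // No 'override' here, so this still compiles: it is now an unrelated
    // function, and polymorphism through A has silently stopped working.
    std::string f() { return "B::f()"; }
};

std::string callThroughBase() {
    B b;
    A& a = b;
    return a.f(false); // Whoops: dispatches to A::f(bool), nothing in B runs
}

std::string callDirect() {
    B b;
    return b.f(); // only direct calls on B still reach B::f()
}
```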
Jul 19 2004
next sibling parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Arcane Jill wrote:

 In article <cdh400$kov$3 digitaldaemon.com>, Berin Loritsch says...
 
 Berin, I think you've misunderstood. May I take the trouble to explain things?
 First, here's the status quo: The "override" keyword is /not/ compulsory. This
 means that you have a choice of two different ways of writing things. Version
 one:

Please do.

<snip type="examples"/>

Ok now, if I may, can I illustrate my example?

class A
{
    void foo();
}

class B : A
{
    void foo() { super.foo(); }
}

Now, if we change superclass A's signature to use the boolean, we still get a compile error:

class A
{
    void foo(boolean);
}

There is no super.foo() to call. Also, the examples you gave omitted any type of implementation, so it is hard to prove what you would expect to see. The thing is, it is method replacement if an overridden method never calls the superclass's method. That is where you will never get a warning or error.
 So how is the designer of class B ever supposed to know that the base class
 signature has changed? Wait for a bug report? "override" gives you a way of
 being notified of the need to change your code (next time you compile the
 source).

Does D somehow always call the superclass's implementation? I mean sometimes you want to replace a method, but most times you want to extend it and provide more info. I understand where you are coming from, but using normal class semantics you can still get the desired effect that you want.
 
 So, "override" is very, very useful, as it can help you find bugs which
 otherwise you might miss for years, and the /absence/ of the "override" keyword
 could be viewed as a bug waiting to happen. So some people have proposed on
this
 thread that the keyword be compulsory.

Yuck. I would much rather see a warning/error if I never called the superclass method--which would have essentially the same effect without having to type a keyword that can very easily be interpreted as clutter. I prefer my source code to be as clean and clear as possible; extra compulsory keywords add clutter even when they are not needed.
 Another school of thought is that the compiler should always behave /as if/ the
 "override" keyword had been present, so you never actually have to type it.
 However, there are times when you /do/ want to provide a function with the same
 name, but a different signature from, a base class. In D the philosophy seems
to
 be to /encourage/ people to do the "right thing", but nonetheless to allow them
 to do something a bit different if they really need to. Which brings me to your
 question:
 
 
When would you *not* override a base class function?  I don't get when 
you wouldn't want that to happen.


<snip type="example of method overloading"/>

Well, I know some people who are completely opposed to method overloading, much less operator overloading. But that really doesn't come into play here. I suppose a better question would be when would you want method *replacement* as opposed to method *extension*? If you call super.methodName(), then you are calling a discrete method in a super class--which means that it must exist in order for something to compile correctly. So this gets the desires of both camps satisfied without using extra keywords.
 
 So, /sometimes/ you don't want to override, but /usually/ you do, and /usually/
 it's a bug waiting to happen if you don't (because someone might change the
 base class without your knowledge).
 
 With that in mind, the possibilities at our disposal are:
 
 (1) the status quo - "override" implies an override; no keyword implies nothing
 at all (so it might or might not be an override)
 
 (2) as (1), except that the keyword "override" is now compulsory whenever
 overriding.

Yuck. I don't like that.
 (3) my suggestion - that the behavior currently enabled by the keyword
 "override" be the default behavior, and that, therefore, the keyword itself
 becomes redundant and may be dropped. A side-effect of this is that we'd then
 need a new keyword to do something /other/ than the default.
 
 (4) require an explicit keyword for each and every possibility.

This is the worst option of the four. It adds clutter, and the real desired meaning gets lost in something your brain naturally filters out due to the number of times it sees it.

(5) have a compiler warning if you override a method but never call the superclass's version. Conversely, if you call a superclass's version and the signature you need is not there, treat it as an error. Note this does have a corner case that wouldn't be detected:

    class A { void foo(); }
    class B : A { void foo(bool); }
    class C : B { void foo() { super.foo(); } }

This would still compile, with C.foo() calling A.foo() internally, and the foo(bool) method would not be called from B.
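For reference, here is roughly what the A/B shape above does in actual D (a sketch of D's hiding rule, not of the hypothetical option (5)): the differently-signatured foo in B hides the inherited overload, and an explicit alias is needed to pull it back into view.

```d
class A
{
    string foo() { return "A.foo()"; }
}

class B : A
{
    string foo(bool b) { return "B.foo(bool)"; } // different signature
    alias foo = A.foo; // without this, A.foo() is hidden inside B
}

void main()
{
    auto b = new B;
    assert(b.foo() == "A.foo()");         // only visible thanks to the alias
    assert(b.foo(true) == "B.foo(bool)"); // the new overload
}
```

This alias requirement is the one Kris calls "criminal" at the top of the thread.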
 
 Changes such as this can be made (by Walter) with ease right now, but post v1.0
 the syntax is supposed to be frozen in concrete, so this is almost our last
 chance to press for changes in syntax. Walter is amenable to suggestions, but
 only if he thinks they're a good idea. (He's not swayed by a mere majority vote
 - you actually have to /convince/ him).

Understood.
Jul 19 2004
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
There are many reasons why you might not want to call the superclass' 
version of a method.  You can't expect that you will call the superclass 
method all the time.

Berin Loritsch wrote:
 Arcane Jill wrote:
 
 In article <cdh400$kov$3 digitaldaemon.com>, Berin Loritsch says...

 Berin, I think you've misunderstood. May I take the trouble to explain 
 things?
 First, here's the status quo: The "override" keyword is /not/ 
 compulsory. This
 means that you have a choice of two different ways of writing things. 
 Version
 one:

Please do. <snip type="examples"/>

Ok now, if I may, can I illustrate my example?

    class A { void foo(); }
    class B : A { void foo() { super.foo(); } }

Now, if we change superclass A's signature to use the bool, we still get a compile error:

    class A { void foo(bool); }

There is no super.foo() to call. Also, the examples you gave omitted any type of implementation, so it is hard to prove what you would expect to see. The thing is, it is method replacement if an overridden method never calls the superclass's method. That is where you will never get a warning or error.
 So how is the designer of class B ever supposed to know that the base 
 class
 signature has changed? Wait for a bug report? "override" gives you a 
 way of
 being notified of the need to change your code (next time you compile the
 source).

<snip type="quoted text duplicated from the posts above"/>

Jul 19 2004
parent Arcane Jill <Arcane_member pathlink.com> writes:
There are many reasons why you might not want to call the superclass' 
version of a method.  You can't expect that you will call the superclass 
method all the time.

Well yeah! Like, who in their right mind would want to call Object.opCmp() from within SomeOtherClass.opCmp()? (Or indeed, at all!) And why on Earth would MyClass.toString() want to call Object.toString()? What's with all this base-class version stuff? Where did that come from? It doesn't seem to me to have anything to do with what anyone else is discussing.

Jill
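Jill's examples compile as-is in D: overriding one of Object's methods without ever touching the base version is the normal case. Point is a made-up class, and modern D spells the return type `string` where 2004-era D used `char[]`:

```d
import std.conv : to;

class Point
{
    int x, y;
    this(int x, int y) { this.x = x; this.y = y; }

    // a pure replacement: calling super.toString() here would only yield
    // the class name, which is useless for a Point
    override string toString()
    {
        return "Point(" ~ to!string(x) ~ "," ~ to!string(y) ~ ")";
    }
}

void main()
{
    assert((new Point(1, 2)).toString() == "Point(1,2)");
}
```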
Jul 19 2004
prev sibling next sibling parent Juanjo Álvarez <juanjuxNO SPAMyahoo.es> writes:
Arcane Jill wrote:


 (1) the status quo - "override" implies an override; no keyword implies
 nothing at all (so it might or might not be an override)
 
 (2) as (1), except that the keyword "override" is now compulsory whenever
 overriding.
 
 (3) my suggestion - that the behavior currently enabled by the keyword
 "override" be the default behavior, and that, therefore, the keyword
 itself becomes redundant and may be dropped. A side-effect of this is that
 we'd then need a new keyword to do something /other/ than the default.
 
 (4) require an explicit keyword for each and every possibility.

I would vote for (2) or (4). I'm not voting for your choice (3) for a simple statistical and byte-economy reason: in my experience, when I've written a subclass there were usually more non-overriding methods than overriding ones (except if the base class was abstract, of course). But this is only a "soft" preference and I would be happy with any of (2), (3) or (4).
Jul 19 2004
prev sibling parent reply Andy Friesen <andy ikagames.com> writes:
Arcane Jill wrote:
 (3) my suggestion - that the behavior currently enabled by the keyword
 "override" be the default behavior, and that, therefore, the keyword itself
 becomes redundant and may be dropped. A side-effect of this is that we'd then
 need a new keyword to do something /other/ than the default.

So, something like:

    class A
    {
        void foo() { ... }           // ok. no such thing as Object.foo()
    }

    class B : A
    {
        void bar() { ... }           // ok. No A.bar() or Object.bar()
        void foo() { ... }           // ok. overrides A.foo()
        int foo(float f) { ... }     // no. A.foo() exists and this method cannot override it
        new int foo(float f) { ... } // we must write it like so
    }

This does clear things up exactly the same way as the override keyword, but (I've said this before, I'm sure) I think it's a better idea to have that positive assertion that the method is supplanting or augmenting behaviours defined in an ancestor class. All a 'new' method tells you is that there's some stuff in the base class that is sort of superficially similar, and that the method has nothing to do with it. The concept almost reminds me of a Monty Python sketch or somesuch: "See that thing over there? The little wee speck waaaay off in the distance? Well, it's completely unrelated to what we're doing here, so stop staring."

It makes more sense to me if D forces us to emphasize the links which are there, as opposed to the ones which are not. :)

 -- andy
Jul 19 2004
next sibling parent Daniel Horn <hellcatv hotmail.com> writes:
+1
Andy Friesen wrote:
 <snip type="quoted text duplicated from the post above"/>

Jul 19 2004
prev sibling parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Andy Friesen" wrote
 It makes more sense to me if D forces us to emphasize the links which
 are there, as opposed to the ones which are not. :)

Yes indeed. The other thing about positive assertion is exemplified by the current use of the keyword: if the superclass has the original overridden method /removed/ or /renamed/, then the compiler can tell you about it. That's what override currently does, and no-one is suggesting changing that behaviour; it's very helpful.

I think there's a distinction emerging here; if I may be so bold, I get the feeling that those in favour have extensive and sordid experience with long-term maintenance, while those detracting from the notion do so because they don't want to type in an additional keyword now and then. Enforcing the use of override does not restrict the language in any manner identified thus far. Rather, it simply enhances it.

Let's face it: actually writing the algorithmic 'decoration' for the compiler (the method body, braces, data types, etc.) is the very lowest on the scale of time consumption. It's the design, implementation, testing, documenting and debugging that consume at least 99%, right? Arguing that "override" should not be required because one doesn't wish to type in the word is like saying you don't wish to type in the "class" or "struct" keyword.

Another vague detraction is that of "snow blindness". I think that clearly identifying those methods that override from those that don't is effective in helping either/or stand out against the background, regardless of whether your classes mostly/typically override or not. However, this is hardly an argument against something that's guaranteed to reduce the subsequent maintenance and debugging costs. So let's talk about that.

The overriding (heh heh) financial cost of any long-lived software project is typically borne long after the initial release has been shipped. It's the cycle of updates, bug fixes, "enhancements" and so on that really suck up the dollars. We're not talking about some dorm project that goes out to a few buddies and is then dropped after a month or two. We're talking projects involving potentially hundreds of man-years. Anything, and I really do mean *anything*, that a computer language can do to reduce the element of 'surprise' during that long-drawn-out phase is a huge boon in terms of overall productivity and in terms of hard currency.

This latter part is what gets the attention of management. If the use of a language can reduce the bottom line over time, then it gets a great big pat on the back. DbC is one such notion embraced by Walter; a stricter application of the override keyword would be another. You start adding up all these little features, and pretty soon you have something that the commercial development sector will start to take notice of (purely from a bottom-line perspective).

Anyone who argues against such features based on personal laziness simply paints themselves as an ignorant fool. If you're too lazy to add in an appropriate keyword here and there, then you're almost certain to be unspeakably lazy elsewhere also (error condition? what error condition?). Frankly, I'd hate to see any code by such an individual, and they certainly would not get a job at my company.

- Kris
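Kris's refactoring scenario in miniature, using hypothetical Task/Worker classes rather than the real Thread API. An explicit override turns a base-class rename into a compile-time error instead of a silent behaviour change (and, as it happens, later versions of D did make override mandatory):

```d
class Task
{
    bool paused;
    void pause() { paused = true; }
}

class Worker : Task
{
    // if Task.pause() is ever renamed (say, to suspend()), this declaration
    // stops compiling instead of silently becoming an unrelated new method
    override void pause() { paused = true; }
}

void main()
{
    Task t = new Worker;
    t.pause(); // dispatches to Worker.pause
    assert(t.paused);
}
```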
Jul 19 2004
next sibling parent reply Farmer <itsFarmer. freenet.de> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in
news:cdhnbk$svg$1 digitaldaemon.com: 

[snip]
 I think there's a distinction emerging here; if I may be so bold I get
 the feeling that those in favour have extensive and sordid experience
 with long-term maintenance, while those detracting from the notion do so
 because they don't want to type in an additional keyword now and then.
 Enforcing the use of override does not restrict the language in any
 manner identified thus far. Rather, it simply enhances it.
 
 Let's face it: actually writing the algorithmic 'decoration' for the
 compiler (the method-body, braces, data types, etc) is the very lowest
 on the scale of time consumption. It's the design, implementation,
 testing, documenting and debugging that consume at least 99%, right?

No comments really. Just repeated this paragraph because it's so rare to read such musings, here.
 Arguing that the "override" should not be required because one doesn't
 wish to type in the word is like saying you don't wish to type in the
 "class" or "struct" keyword.

How about preparing a proposal for 'implicit class and struct declarations'? I bet it gets plenty of supporters but very few (if any) detractors. After all, OO-folks must write *so many* class declarations.
 
 Another vague detraction is that of  "snow blindness". I think that
 clearly identifying those methods that override from those that don't is
 effective in helping either/or stand out against the background;
 regardless of whether your classes mostly/typically override or not.
 However, this is hardly an argument against something that's guaranteed
 to reduce the subsequent maintenance and debugging costs. So let's talk
 about that. 
 
 The overriding (heh heh) financial cost of any long-lived software
 project is typically borne long after the initial release has been
 shipped. It's the cycle of updates, bug fixes, "enhancements" and so on
 that really suck up the dollars. 

But what about all that short-lived software: projects that get canceled before they are finished, or software that is abandoned shortly after its initial release? The overriding number of projects belongs to this category. And the more effort is put in upfront (e.g. into maintainability), the more likely the project is to be canceled.
 We're not talking about some dorm
 project that goes out to a few buddies and is then dropped after a month
 or two. We're talking projects involving potentially hundreds of
 man-years. Anything, and I really do mean *anything*, that a computer
 language can do to reduce the element of 'surprise' during that
 long-drawn-out-phase is a huge boon in terms of overall productivity and
 in terms of hard currency.

You really do mean *anything*? Wow, that's uncompromising, but real real-world computer languages put your considerations into the 'nice to have' basket at best.
 This latter part is what gets the attention
 of management. If the use of a language can reduce the bottom-line over
 time, then it gets a great big pat on the back. DbC is one such notion
 embraced by Walter; a stricter application of the override keyword would
 be another. You start adding up all these little features, and pretty
 soon you have something that the commercial development sector will 
 start to take notice of (purely from a bottom-line perspective). 

Probably, management doesn't care much for the maintenance phase of software projects, since successful managers have moved on to new projects before that stage is reached. Consider how Java took management by storm, although Java promotes several programming practices that are prone to bite maintainers, and dropped some of C++'s features to improve maintainability.

I believe that the DbC thing isn't really about maintainability in the first place; it's about cranking out reasonably bug-free code *fast*. If I crank out code with no other help than compile-time type checking, I lose much time fixing all my bugs with a debugger. Writing full-blown test cases with 100% test coverage is a time trap, too. But D's integrated DbC feature might reduce the time to write prototype-quality code, since it catches a fair amount of bugs but requires only a modest time to write and maintain.
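Farmer's cost/benefit point can be made concrete with D's built-in contract syntax (a minimal sketch; modern D introduces the function body with `do`, where early D wrote `body`):

```d
// in/out contracts are checked in debug builds and compiled out with
// -release, so they cost little to write yet catch a fair amount of bugs
int percent(int part, int whole)
in { assert(0 <= part && part <= whole && whole > 0); }
out (r) { assert(0 <= r && r <= 100); }
do
{
    return part * 100 / whole;
}

void main()
{
    assert(percent(1, 4) == 25);
    assert(percent(3, 3) == 100);
    // percent(5, 4) would trip the in-contract in a debug build
}
```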
 
 Anyone who argues against such features based on personal laziness
 simply paints themselves as an ignorant fool. If you're too lazy to add
 in an appropriate keyword here and there, then you're almost certain to
 be unspeakably lazy elsewhere also (error condition? what error
 condition?). Frankly, I'd hate to see any code by such an individual,
 and they certainly would not get a job at my company.
 
 - Kris
 

Unfortunately, it's always those belonging to the minority that are considered fools, no matter how foolishly the majority (of developers) acts. Individuals with 'personal laziness' get job offers from most other companies, so they don't have to care.

Frankly, I've heard of developers that got fired for not getting the job done, or for not getting it done _fast enough_. But I've never heard of developers fired for writing code that isn't maintainable enough. My impression is that as long as you agree to format your code according to the company's coding style, you're fine in the 'code maintainability' department.

Farmer.
Jul 21 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
Thought I'd return the courtesy to you, Farmer:

"Farmer" wrote:
 Just repeated this paragraph because it's so rare to read
 such musings, here.

Likewise re your comments
 Arguing that the "override" should not be required because one doesn't
 wish to type in the word is like saying you don't wish to type in the
 "class" or "struct" keyword.

How about preparing a proposal for 'implicit class and struct declarations'? I bet it gets plenty of supporters but very few (if any) detractors. After all, OO-folks must write *so many* class declarations.

Don't follow you here, I'm afraid.
 Another vague detraction is that of  "snow blindness". I think that
 clearly identifying those methods that override from those that don't is
 effective in helping either/or stand out against the background;
 regardless of whether your classes mostly/typically override or not.
 However, this is hardly an argument against something that's guaranteed
 to reduce the subsequent maintenance and debugging costs. So let's talk
 about that.

 The overriding (heh heh) financial cost of any long-lived software
 project is typically borne long after the initial release has been
 shipped. It's the cycle of updates, bug fixes, "enhancements" and so on
 that really suck up the dollars.

 But what about all that short-lived software: projects that get canceled
 before they are finished, or software that is abandoned shortly after its
 initial release? The overriding number of projects belongs to this category.
 And the more effort is put in upfront (e.g. into maintainability), the more
 likely the project is to be canceled.

What you're partly talking about here is unrealistic scheduling, which, unfortunately, seems to be the bane of the software industry. Regardless, we're talking about typing in perhaps a handful of additional keywords per class: I can't imagine you mean to say that will make or break a project? Surely some stronger compilation checks would not hinder such ventures?
 We're not talking about some dorm
 project that goes out to a few buddies and is then dropped after a month
 or two. We're talking projects involving potentially hundreds of
 man-years. Anything, and I really do mean *anything*, that a computer
 language can do to reduce the element of 'surprise' during that
 long-drawn-out-phase is a huge boon in terms of overall productivity and
 in terms of hard currency.

 You really do mean *anything*? Wow, that's uncompromising, but real-world
 computer languages put your considerations into the 'nice to have' basket
 at best.

OK, fair enough. I meant anything that is accepted by the programming community. If adding such features restricts expression, then the language will die from lack of attention. That's not what we're talking about here: explicit use of "override" does not restrict expression in D at all. In return for a few measly additional characters you get more robustness. Plain and simple. Both long and short term.
 This latter part is what gets the attention
 of management. If the use of a language can reduce the bottom-line over
 time, then it gets a great big pat on the back. DbC is one such notion
 embraced by Walter; a stricter application of the override keyword would
 be another. You start adding up all these little features, and pretty
 soon you have something that the commercial development sector will
 start to take notice of (purely from a bottom-line perspective).

 Probably, management doesn't care much for the maintenance phase of software
 projects, since successful managers have moved on to new projects before that
 stage is reached. Consider how Java took management by storm, although Java
 promotes several programming practices that are prone to bite maintainers,
 and dropped some of C++'s features to improve maintainability.

Yes, those managers who wear Crampons to work. Regardless; someone pays the maintenance bills, and who do you think takes responsibility for commercial software after yon clambering manager has moved on to another pinnacle? They would certainly appreciate development features that help make their task smoother and less frustrating. Yes? There again, it's not just about maintenance as you rightly point out :
 I believe that the DbC thing isn't really about maintainability in the first
 place; it's about cranking out reasonably bug-free code *fast*. If I crank
 out code with no other help than compile-time type checking, I lose much time
 fixing all my bugs with a debugger. Writing full-blown test cases with 100%
 test coverage is a time trap, too. But D's integrated DbC feature might
 reduce the time to write prototype-quality code, since it catches a fair
 amount of bugs but requires only a modest time to write and maintain.

Good point. I didn't claim that DbC or stricter overrides were purely about maintainability. Rather, I noted "If the use of a language can reduce the bottom-line over time, then it gets a great big pat on the back". Stricter overrides would absolutely help out in the short term also, in a vein similar to DbC. The problems induced by a misguided, mistyped, or "I just didn't know" method signature are just as difficult to track down as anything that DbC could help with.

If someone likes DbC, I can't imagine why they wouldn't like stricter overrides. As you point out, DbC catches a fair amount of bugs but requires only modest investment. Surely the benefits of stricter overrides require only the most minimal investment one could hope for?
 Unfortunately, it's always those belonging to the minority that are
 considered as fools no matter how foolish the majority (of developers)

Right. Those were frustrated and inappropriate comments on my part ... should never have written them.
Jul 21 2004
parent reply Farmer <itsFarmer. freenet.de> writes:
First, I realize some "mis-communication": my last post isn't really about
the topic 'mandatory override keyword', and some of my comments are cynical
(beware!).
You are trying to focus the discussion back on topic, but I'm unwilling to 
follow you: Matthew already said *everything* that is to say about your 
proposal in his first post. So I won't add *anything*. 

But I also had the impression your post wasn't restricted to 'override',
either:
<quote>
Anything, and I really do mean *anything*, that a computer language can do
to reduce the element of 'surprise' during that long-drawn-out-phase is a
huge boon in terms of overall productivity and in terms of hard currency. 
</quote>
You must be thinking about more than just the 'override' keyword. After
all, the mandatory override keyword, per se, won't generate such great
financial benefits (only minor ones) for commercial projects. 
One has to add many more bug-prevention features to get the huge boon
you're speaking about. I recommend reading the 'C# Language Specification',
if you haven't already done so. There are quite some 'features' in there.
Yes, C# has the override keyword, and it is mandatory, of course.


More comments embedded.
 
 Yes, those managers who wear Crampons to work. Regardless; someone pays
 the maintenance bills, and who do you think takes responsibility for
 commercial software after yon clambering manager has moved on to another
 pinnacle? [...]

I think that usually *nobody* takes responsibility for the financial loss. But I know who has to deal with all the messed-up code: it's always _me_ and other unfortunate individuals.
 Good point. I didn't claim that DbC or stricter overrides were purely
 about maintainability. Rather, I noted "If the use of a language can
 reduce the bottom-line over time, then it gets a great big pat on the
 back". Stricter overrides would absolutely help out in the short term
 also, in a vein similar to DbC. The problems induced by a misguided,
 mistyped, or "I just didn't know" method signature are just as difficult
 to track down as anything that DbC could help with. 

Sorry, you've never made such claims about DbC; I've shamelessly put words in your mouth. <cynic mode again> I'll refrain from commenting on your estimation of the short-term value of stricter overrides. Just keep on arguing about the short-term productivity of stricter overrides; this way, we might get them! </>
 If someone likes
 DbC, I can't imagine why they wouldn't like stricter overrides. 

Because DbC is completely optional, but stricter overrides aren't.

Farmer.
Jul 22 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
Please forgive the subject title: was being a tad cynical myself.

So; more off topic musings. But really, actually, surreptitiously, on topic;

I thought a little about that "laziness" comment, and realized that the
cause is likely to be something else. I'm sure many people on this NG have
heard the hoary old line about how software design and construction is
somehow "a thin line between art and science". This is a truly wonderful
spin! What it apparently implies is that software is somehow mystical: dark
shrouded science mixed with creative spices from distant shores, plus some
old-fashioned voodoo thrown in for good measure ...

That is just so much BS :-) Sure, sometimes solutions for problems seem to
appear from nowhere, or you wake up in the middle of the night with the
"perfect answer" in your mind. If you can regularly do that regarding
software design, you can do it for any other profession; and more power to
you. No; the spin in that line is about a lack of discipline. You don't need
to be disciplined to be good at something, but you do need some of it to be
/consistently/ good. Creative people generally don't like such shackles.
After all, it gets in the way of the creative juices right? In fact, for
certain "creative" people I know or have met, almost anything that smacks of
discipline gets the finger <g>

It's interesting that the software industry often employs people in powerful
positions (less so at the very top) who have absolutely zero self-control.
Most of us can probably recount a story about some totally out-of-control,
schizophrenic sociopath who makes life truly miserable for coworkers, but
whom the board-of-directors either tolerate or bow down to (there seemed to
be barrowloads of 'em around in the dot-com heyday). That begs the question:
isn't it predominantly a level of discipline that separates the consummate
professional from the rank amateur?

I'm tempted to suggest that this creative-versus-disciplined notion plays a
significant part in why so much software truly stinks today in terms of
reliability. And yet the general consumer seems to take it for granted that
their "computing tool" should sometimes require rebooting several times a
day. Great marketing. Go figure.

Anyway; the point is that I may have mistaken total laziness for a total
lack of discipline. I mean, wouldn't those who view the term "strictness" as
a perceived impingement (upon personal creativity) cry out in the loudest
terms possible? Perhaps the phrase "stricter application of override" is
just inviting trouble based purely upon the choice of words? After all,
there is no shackle presented there; no hindrance to language expression.

Perhaps if anyone has to walk that "thin line between art and science" it
might be computer language designers ... I mean there has to be a good
measure of hard-algorithmic science present, yet somehow taking into account
the vagaries of a particular audience section who might just balk at any
perception of inhibition. Nasty job. That aside; we all know there's both
black-magic and voodoo inside a compiler ...

How about that Walter? Would you prefer to see "less looseness" with respect
to override, but are concerned it might upset too many?

:-)

Just a few thoughts. Of course, I could be just wildly wrong on all counts.
Back to the cave ...
Jul 22 2004
parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
I've struggled to keep up with the arguments of either you or Farmer; frankly
you've both seemed to be more polemic than
point, and I really can't divine a definitive position for either of you.

That being said, I think that the issue of language strictness is secondary to
the issue of professionalism of the
practitioner. If a language is strict, it's easier to write most things well,
but far harder to step outside of the
constraints when you need to. This is one of the reasons why I suspect C++ has
a *very* long future; we see already that
there are several ways in which D's eschewal of the preprocessor is causing
headaches for many of us. (Note: that's just
an example. I don't seek to start a pre-processor argument, and I don't intend
to participate in one.)

For my part, the professionalism and the training/experience of the
practitioner is far more important. If someone
doesn't care about maintainability, their code will not be maintainable. (In
fact, it will be shit, and they are the
kind of person that should be selling ice-creams on street corners rather than
taking part in the most complex activity
of man.)

I believe there are a wealth of studies (referenced in "Facts and Fallacies",
"The Art Of UNIX Programming" and many
other books - you should check for yourselves, given my track record in
remembering what book gives what wisdom) that
demonstrate that the quality of the software depends on the quality of the
programmer up to 10 times more than it does
on the language being used.

Not sure what all this means wrt your debate, but at least it lets me spout off
like a moaning old git.



"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdpui8$1cjq$1 digitaldaemon.com...
 Please forgive the subject title: was being a tad cynical myself.

 So; more off topic musings. But really, actually, surreptitiously, on topic;

 I thought a little about that "laziness" comment, and realized that the
 cause is likely to be something else. I'm sure many people on this NG have
 heard the hoary old line about how software design and construction is
 somehow "a thin line between art and science". This is a truly wonderful
 spin! What it apparently implies is that software is somehow mystical: dark
 shrouded science mixed with creative spices from distant shores, plus some
 old-fashioned voodoo thrown in for good measure ...

 That is just so much BS :-) Sure, sometimes solutions for problems seem to
 appear from nowhere, or you wake up in the middle of the night with the
 "perfect answer" in your mind. If you can regularly do that regarding
 software design, you can do it for any other profession; and more power to
 you. No; the spin in that line is about a lack of discipline. You don't need
 to be disciplined to be good at something, but you do need some of it to be
 /consistently/ good. Creative people generally don't like such shackles.
 After all, it gets in the way of the creative juices right? In fact, for
 certain "creative" people I know or have met, almost anything that smacks of
 discipline gets the finger <g>

 It's interesting that the software industry often employs people in powerful
 positions (less so at the very top) who have absolutely zero self-control.
 Most of us can probably recount a story about some totally out-of-control,
 schizophrenic sociopath who makes life truly miserable for coworkers, but
 whom the board-of-directors either tolerate or bow down to (there seemed to
 be barrowloads of 'em around in the dot-com heyday). That begs the question:
 isn't it predominantly a level of discipline that separates the consummate
 professional from the rank amateur?

 I'm tempted to suggest that this creative-versus-disciplined notion plays a
 significant part in why so much software truly stinks today in terms of
 reliability. And yet the general consumer seems to take it for granted that
 their "computing tool" should sometimes require rebooting several times a
 day. Great marketing. Go figure.

 Anyway; the point is that I may have mistaken total laziness for a total
 lack of discipline. I mean, wouldn't those who view the term "strictness" as
 a perceived impingement (upon personal creativity) cry out in the loudest
 terms possible? Perhaps the phrase "stricter application of override" is
 just inviting trouble based purely upon the choice of words? After all,
 there is no shackle presented there; no hindrance to language expression.

 Perhaps if anyone has to walk that "thin line between art and science" it
 might be computer language designers ... I mean there has to be a good
 measure of hard-algorithmic science present, yet somehow taking into account
 the vagaries of a particular audience section who might just balk at any
 perception of inhibition. Nasty job. That aside; we all know there's both
 black-magic and voodoo inside a compiler ...

 How about that Walter? Would you prefer to see "less looseness" with respect
 to override, but are concerned it might upset too many?

 :-)

 Just a few thoughts. Of course, I could be just wildly wrong on all counts.
 Back to the cave ...

Jul 22 2004
next sibling parent Sean Kelly <sean f4.ca> writes:
Matthew wrote:
 
 I believe there are a wealth of studies (referenced in "Facts and Fallacies",
"The Art Of UNIX Programming" and many
 other books - you should check for yourselves, given my track record in
remembering what book gives what wisdom) that
 demonstrate that the quality of the software depends on the quality of the
programmer up to 10 times more than it does
 on the language being used.

An excellent book I ran across a few months ago is "Large Scale C++ Software Design." Reading it really drove home the difference between software engineering and programming. I must say I'm glad that the dot com boom is over and folks who taught themselves to write code by reading "Javascript for Dummies" have mostly moved on to greener pastures. I got trapped in a contracting job with one such fellow and it was an experience I hope never to repeat.
 Not sure what all this means wrt your debate, but at least it lets me spout
off like a moaning old git.

Sometimes that's reason enough in itself :)

Sean
Jul 22 2004
prev sibling next sibling parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Matthew" wrote ..

"That being said, I think that the issue of language strictness is secondary
to the issue of professionalism of the practitioner. If a language is
strict, it's easier to write most things well, but far harder to step
outside of the constraints when you need to."

Couldn't agree more, but that's not the point being made re "override" (and
I think you know that). Just to be clear for others: the "constraints" you
imply one would wish to step beyond could only be bugs in this case. That's
not what most sentient developers would refer to as "strict" WRT your above
assertion ~ therein lurks a source of unwarranted anxiety.

"Not sure what all this means wrt your debate, but at least it lets me spout
off like a moaning old git"

Hey! I thought that was my job today?

I didn't notice there was further debate (actually I failed miserably to
comprehend what Farmer's position and point was). This was my personal time
for spouting off like a moaning old git, so stop interrupting and get your
own thread. Seriously though:  if there were a salient point to these latter
musings, then it would have to be a cone of your ice-cream perspective,
topped with a dollop of lamentful sauce.

- Kris
Jul 22 2004
next sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdq690$1h8i$1 digitaldaemon.com...
 "Matthew" wrote ..

 "That being said, I think that the issue of language strictness is secondary
 to the issue of professionalism of the practitioner. If a language is
 strict, it's easier to write most things well, but far harder to step
 outside of the constraints when you need to."

 Couldn't agree more, but that's not the point being made re "override" (and
 I think you know that). Just to be clear for others: the "constraints" you
 imply one would wish to step beyond could only be bugs in this case. That's
 not what most sentient developers would refer to as "strict" WRT your above
 assertion ~ therein lurks a source of unwarranted anxiety.

Sorry, mate. I think I must have put my dumb head on. I'm really not getting you. I'll leave this one unread, I think, and try again in a few days. ;)
 "Not sure what all this means wrt your debate, but at least it lets me spout
 off like a moaning old git"

 Hey! I thought that was my job today?

 I didn't notice there was further debate (actually I failed miserably to
 comprehend what Farmer's position and point was). This was my personal time
 for spouting off like a moaning old git, so stop interrupting and get your
 own thread. Seriously though:  if there were a salient point to these latter
 musings, then it would have to be a cone of your ice-cream perspective,
 topped with dollop of lamentful sauce.

Ok.
Jul 22 2004
parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cdq986$1i74$1 digitaldaemon.com...
 Sorry, mate. I think I must have put my dumb head on. I'm really not getting
 you. I'll leave this one unread, I think, and try again in a few days. ;)

And perhaps I have my dumb ass on ... not a pretty sight
Jul 22 2004
parent "Matthew" <admin.hat stlsoft.dot.org> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in message
news:cdqa5c$1iig$1 digitaldaemon.com...
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cdq986$1i74$1 digitaldaemon.com...
 Sorry, mate. I think I must have put my dumb head on. I'm really not getting
 you. I'll leave this one unread, I think, and try again in a few days. ;)

And perhaps I have my dumb ass on ... not a pretty sight

Euch! That's conjuring all kinds of nasty images. Please desist!
Jul 22 2004
prev sibling parent reply Farmer <itsFarmer. freenet.de> writes:
"Kris" <someidiot earthlink.dot.dot.dot.net> wrote in
news:cdq690$1h8i$1 digitaldaemon.com: 


 I didn't notice there was further debate (actually I failed miserably to
 comprehend what Farmer's position and point was). 

I agree. And admittedly, it's my fault that there wasn't a debate. Because I deliberately try to not reveal my position on the 'strict override' matter.

BTW, I think my posts read quite like the typical unmaintainable (code) mess:
- intentions remain completely mysterious to everyone
- only the one who wrote it, understands it
- after three days, even the one who wrote it, doesn't understand it
- the one who wrote it, has written it in a language that he doesn't understand ;-)

Farmer.
Jul 23 2004
parent "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Farmer" wrote ..
 "Kris"  wrote

 I didn't notice there was further debate (actually I failed miserably to
 comprehend what Farmer's position and point was).

 I agree. And admittedly, it's my fault that there wasn't a debate. Because I
 deliberately try to not reveal my position on the 'strict override' matter.

 BTW, I think my posts read quite like the typical unmaintainable (code)
 mess:
 - intentions remain completely mysterious to everyone
 - only the one who wrote it, understands it
 - after three days, even the one who wrote it, doesn't understand it
 - the one who wrote it, has written it in a language that he doesn't
 understand ;-)

That's cool Farmer. In the spirit of comradery (and a tip of the hat to James McComb) I repost my earlier drivel, but in a more understandable form: I dought some little about dat "laziness" comment, and realized dat de cause be likesly t'be sump'n else. I'm sho' man many sucka's on dis NG gots heard da damn ho'y old line about how software design and construcshun is somehow "a din line between art and science". Dis be a truly wonderful spin! Right on! Whut it apparently implies be dat software be somehow mah'stical, dig dis: dark shrouded science mixed wid creative spices fum distant sho'es, plus some old-fashioned voodoo drown in fo' baaaad measho' man ... Dat be plum so much BS :-) Sho' man, sometimes solushuns fo' problems seem to appear fum nowhere, o' ya' wake down in de middle uh de night wid de "puh'fect answer" in yo' mind. If ya' kin regularly do dat regardin' software design, ya' kin do it fo' any oda' profession; and mo'e powa' to ya'. No; de spin in dat line be about some lack uh discipline. You's duzn't need to be disciplined t'be baaaad at sump'n, but ya' do need some uh it t'be /consistently/ baaaad. Creative sucka's generally duzn't likes such shackles. Afta' all, it digs in de way uh de creative juices right? In fact, fo' certain "creative" sucka's ah' know o' gots met, mos' nuthin dat smacks of discipline digs de fin'a' It's interestin' dat da damn software industry often employs sucka's in powerful posishuns (less so's at da damn very top) who gots absolutely zero self-control. Most uh us kin probably recount some sto'y about some totally out-of-control, schizophrenic sociopad who makes life truly miserable fo' cowo'kers, but whom de bo'd-of-directo's eida' tolerate o' bow waaay down t'(dere seemed to be barrowloads uh 'em around in de dot-com heyday). Dat begs de quesshun: ain't it predominantly some level uh discipline dat separates de consummate professional fum de rank beginna'? 
I'm tempted t'suggest dat dis creative-versus-disciplined noshun plays a significant part in why so's much software truly stinks today in terms of reliability. Slap mah fro! And yet da damn general consuma' seems t'snatch it fo' granted dat deir "computin' tool" should sometimes require rebootin' several times a day. Slap mah fro! Great marketin'. Go figure. Anyway; de point be dat ah' may gots missnatchn total laziness fo' some total lack uh discipline. ah' mean, wouldn't dose who view de term "strictness" as a puh'ceived impin'ement (upon sucka'al creativity) cry out in de loudest terms possible? Perhaps de phrase "stricta' applicashun uh override" is plum invitin' trouble based purely downon de choice uh wo'ds? Afta' all, dere be no shackle presented dere; no hindrance t'language 'espression. 'S coo', bro. Perhaps if any sucka gots'ta walk dat "din line between art and science" it might be clunker language designers ... ah' mean dere gots'ta be some baaaad measho' man uh hard-algo'idmic science present, yet somehow takin' into account de vagaries uh a particular audience secshun who might plum balk at any puh'cepshun uh inhibishun. Nasty job. Co' got d' beat! Dat aside; we all know dere's bod black-magic and voodoo inside some compila' ... How about dat Walter? Would ya' prefa' to see "less looseness" wid respect to override, but is concerned it might downset too many? Just some few doughts. Of course, ah' could be plum wildly wrong on all counts. Back t'de cave ...
Jul 23 2004
prev sibling parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Matthew wrote:

 
 That being said, I think that the issue of language strictness is secondary to
the issue of professionalism of the
 practitioner. If a language is strict, it's easier to write most things well,
but far harder to step outside of the
 constraints when you need to. This is one of the reasons why I suspect C++ has
a *very* long future; we see already that
 there are several ways in which D's eschewal of the preprocessor is causing
headaches for many of us. (Note: that's just
 an example. I don't seek to start a pre-processor argument, and I don't intend
to participate in one.)
 

The preprocessor thing kind of piques my interest in the sense of why it would be giving problems. Having worked without one for so long it makes me think why one would want the preprocessor. To my knowledge, the only reasons for the precompiler are:

1) protect from double include. Fixed by "import"
2) Override implementation of method. That's evil, and a source of maintenance hell (so is it the system printf I'm using or the one in another library?). Simple, don't do it.
3) Define constants or magic numbers. Fixed by "const" or "final"
4) Conditional compilation. Fixed by "version"
5) Define "macros". Thought was that templated functions could solve this, but perhaps there might be a need for a language supported macro language?

What other possible need would there be for the preprocessor? (A well versed Java programmer asks.) The only time I was tempted to have one for Java, the "version" keyword in D would have solved my problem.
Jul 23 2004
parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Berin Loritsch" <bloritsch d-haven.org> wrote in message
news:cdr427$1usb$1 digitaldaemon.com...
 Matthew wrote:

 That being said, I think that the issue of language strictness is secondary to
the issue of professionalism of the
 practitioner. If a language is strict, it's easier to write most things well,
but far harder to step outside of the
 constraints when you need to. This is one of the reasons why I suspect C++ has
a *very* long future; we see already


 there are several ways in which D's eschewal of the preprocessor is causing
headaches for many of us. (Note: that's


 an example. I don't seek to start a pre-processor argument, and I don't intend
to participate in one.)

The preprocessor thing kind of piques my interest in the sense of why it would be giving problems. Having worked without one for so long it makes me think why one would want the preprocessor. To my knowledge, the only reasons for the precompiler are:

1) protect from double include. Fixed by "import"
2) Override implementation of method. That's evil, and a source of maintenance hell (so is it the system printf I'm using or the one in another library?). Simple, don't do it.
3) Define constants or magic numbers. Fixed by "const" or "final"
4) Conditional compilation. Fixed by "version"
5) Define "macros". Thought was that templated functions could solve this, but perhaps there might be a need for a language supported macro language?

What other possible need would there be for the preprocessor? (A well versed Java programmer asks.) The only time I was tempted to have one for Java, the "version" keyword in D would have solved my problem.

I did say: "I don't seek to start a pre-processor argument, and I don't intend to participate in one", but I'm not surprised that I was not believed.

The most recent problem this has caused is while trying to get some Doxygen documentation for DTL, in order to release it in a format that stands a chance of being understood. Since Doxygen can use the preprocessor, and D uses version statements, whole swathes of the DTL code are skipped. I can think of no simple way to resolve this problem.

And that's my final word on the subject, since there's precisely 0% chance of the preprocessor being added! :-)
Jul 23 2004
next sibling parent Berin Loritsch <bloritsch d-haven.org> writes:
Matthew wrote:

<snip/>

 I did say: "I don't seek to start a pre-processor argument, and I don't intend
to participate in one", but I'm not
 surprised that I was not believed.

sorry, it was not my intention to make you a liar. :(
 
 The most recent problem this has caused is while trying to get some Doxygen
documentation for DTL, in order to release
 it in a format that stands a chance of being understood. Since Doxygen can use
the preprocessor, and D uses version
 statements, whole swathes of the DTL code is skipped. I can think of no simple
way to resolve this problem.
 
 And that's my final word on the subject, since there's precisely 0% chance of
the preprocessor being added! :-)

I see. It's not so much preprocessor as another tool used for documentation purposes? I assume Doxygen is generating more output than is actually represented in the compiled D code? Maybe there is a need for the equiv. to JavaDoc--that understands the D language and only outputs the comments necessary for the compiled unit?
Jul 23 2004
prev sibling parent Regan Heath <regan netwin.co.nz> writes:
On Sat, 24 Jul 2004 00:19:57 +1000, Matthew <admin.hat stlsoft.dot.org> 
wrote:

 "Berin Loritsch" <bloritsch d-haven.org> wrote in message 
 news:cdr427$1usb$1 digitaldaemon.com...
 Matthew wrote:

 That being said, I think that the issue of language strictness is 

 practitioner. If a language is strict, it's easier to write most 

 constraints when you need to. This is one of the reasons why I 


 there are several ways in which D's eschewal of the preprocessor is 


 an example. I don't seek to start a pre-processor argument, and I 


The preprocessor thing kind of peaks my interest in the sense of why it would be giving problems. Having worked without one for so long it makes me think why one would want the preprocessor. In my knowlege, the only reasons for the precompiler are: 1) protect from double include. Fixed by "import" 2) Override implementation of method. That's evil, and a source of maintenance hell (so is it the system printf I'm using or the one in another library?). Simple, don't do it. 3) Define constants or magic numbers. Fixed by "const" or "final" 4) Conditional compilation. Fixed by "version" 4) Define "macros". Thought was that templated functions could solve this, but perhaps there might be a need for a language supported macro language? What other possible need would there be for the preprocessor? (A well versed Java programmer asks.) The only time I was tempted to have one for Java, the "version" keyword in D would have solved my problem.

I did say: "I don't seek to start a pre-processor argument, and I don't intend to participate in one", but I'm not surprised that I was not believed.

The most recent problem this has caused is while trying to get some Doxygen documentation for DTL, in order to release it in a format that stands a chance of being understood. Since Doxygen can use the preprocessor, and D uses version statements, whole swathes of the DTL code are skipped. I can think of no simple way to resolve this problem.

And that's my final word on the subject, since there's precisely 0% chance of the preprocessor being added! :-)

Hooray for that.

I wonder how long it would take a sufficiently motivated/talented person to write a documentation tool in D, for D...

Regan

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 23 2004
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
In article <cdhnbk$svg$1 digitaldaemon.com>, Kris says...
Another vague detraction is that of  "snow blindness". I think that clearly
identifying those methods that override from those that don't is effective
in helping either/or stand out against the background; regardless of whether
your classes mostly/typically override or not. However, this is hardly an
argument against something that's guaranteed to reduce the subsequent
maintenance and debugging costs. So let's talk about that.

Kind of an aside, but in C++ I always declare methods that override parent methods "virtual" even though the virtual label is inherited. It makes it easy for me to see what's overriding inherited behavior and what's new. I've never been bitten by this particular bug myself, but it might be a nice rule to enforce. It would be an interesting feature, as traditional languages such as C++ only allow for setting override requirements in a top down manner while this is almost bottom up.

Sean
Jul 21 2004
parent reply Juanjo =?ISO-8859-15?Q?=C1lvarez?= <juanjuxNO SPAMyahoo.es> writes:
Sean Kelly wrote:

 Kind of an aside, but in C++ I always declare methods that override parent
 methods "virtual" even though the virtual label is inherited. 

And I do that too, for exactly the same reasons.
Jul 21 2004
parent qw <qw_member pathlink.com> writes:
In article <cdmvs9$4l4$1 digitaldaemon.com>, Juanjo =?ISO-8859-15?Q?=C1lvarez?=
says...
Sean Kelly wrote:

 Kind of an aside, but in C++ I always declare methods that override parent
 methods "virtual" even though the virtual label is inherited. 

And I do that too, for exactly the same reasons.

When i learned C++ i thought that the virtual was in the wrong place. Why do i have to guess if a method will be overridden? What i wanted was an "override" keyword. And i guess i still think that way... (Well, i understand that some things are not designed to be overridden, so i am less annoyed now).

The same thought crossed my mind when i learned virtual base classes in C++. Why do i have to design classes to be (multiply) inherited together (instead of correcting the "diamond" only when it happens)? Maybe it's an implementation issue?
Jul 21 2004
prev sibling next sibling parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Berin Loritsch" wrote ...
 He should have placed a call to super.pause() to have everything working
 as expected.  However that is done in D.

With respect Berin, may I gently suggest that you go back and read the initial posts on this topic? It's clear that you don't quite grasp the issue at hand. Take a look at those who are supporting the notion of a stricter use of "override":

Matthew
Andy Friesen
tecDruid
Blandger
Derek
Juanjo Álvarez
Vathix
Daniel Horn
Kris

Each of these folks has, to their credit, written a good chunk of D code ~ whereas you apparently have not written any. Of course, you wouldn't know who these people are since you just showed up. Having a "few years" experience and not having run into this issue yourself adds nothing of value to the topic, and the combined experience of the above people likely overshadows that of yours to a reasonably large degree.

If you can comprehend the real issue and come up with a detraction somewhat more constructive than "it would annoy me", that would be most helpful. For example, if you /have/ run into the problems discussed and /have/ found an alternative strategy to deal with them, then let's hear it. Otherwise you're just blowing hot air out yer arse.

Regards;
Jul 19 2004
next sibling parent Berin Loritsch <bloritsch d-haven.org> writes:
Kris wrote:

 "Berin Loritsch" wrote ...
 
He should have placed a call to super.pause() to have everything working
as expected.  However that is done in D.

With respect Berin, may I gently suggest that you go back and read the initial posts on this topic? It's clear that you don't quite grasp the issue at hand. Take a look at those who are supporting the notion of a stricter use of "override":

Matthew
Andy Friesen
tecDruid
Blandger
Derek
Juanjo Álvarez
Vathix
Daniel Horn
Kris

Ok.
Jul 19 2004
prev sibling next sibling parent reply "Bent Rasmussen" <exo bent-rasmussen.info> writes:
There is no denying that it's a potential problem. I wonder how often the
problem occurs in practice though. Part of the problem is checking the
APIs you use, part of the problem is that the API can change. There are good
reasons for using override but I haven't heard a really good reason for
making it compulsory. I find it rather counterproductive to write a list of
names in favor of your position, even if it is the most popular and you
might be right.
Jul 19 2004
next sibling parent Juanjo =?ISO-8859-15?Q?=C1lvarez?= <juanjuxNO SPAMyahoo.es> writes:
Bent Rasmussen wrote:

 There is no denying that it's a potential problem. I wonder how often the
 problem occurs in practice though. Part of the problem is checking the
 APIs you use, part of the problem is that the API can change. There are
 good reasons for using override but I haven't heard a really good reason
 for making it compulsory. I find it rather counterproductive to write a
 list of names in favor of your position, even if it is the most popular
 and you might be right.

From a previous message from Daniel Horn:

| I was refactoring Vega Strike (http://vegastrike.sourceforge.net/ ) to
| divide graphics and physics. This happened countless times in the
| process :-/ )

This has also bitten me on Python (where functions of subclasses with the same name always override) more times than I would have liked.

Making "override" compulsory doesn't really change too much in the current definition of the language but it can avoid a good number of (sometimes very hard to find) bugs.
Jul 19 2004
prev sibling next sibling parent reply "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Bent Rasmussen"wrote ...
 making it compulsory. I find it rather counterproductive to write a list

 names in favor of your position, even if it is the most popular and you
 might be right

That was intended only to place the implied importance of Berin's "experience" claim into perspective for him. But I may have stretched the point, and you make a fair comment.

Perhaps there's a whiff of this going on <g>:
http://www.ars-technica.com/news/posts/20040717-4003.html

- Kris
Jul 19 2004
parent reply "Bent Rasmussen" <exo bent-rasmussen.info> writes:
 That was intended only to place the implied importance of Berin's
 "experience" claim into perspective for him. But I may have stretched the
 point, and you make a fair comment.

 Perhaps there's a whiff of this going on <g>:
 http://www.ars-technica.com/news/posts/20040717-4003.html

 - Kris

I'm on the choice side of the fence, although I don't have deep roots there.

I am impressed by your main sponsor
http://dmawww.epfl.ch/roso.mosaic/dm/murphy.html

:-)
Jul 19 2004
parent "Kris" <someidiot earthlink.dot.dot.dot.net> writes:
"Bent Rasmussen" wrote:
 I am impressed by your main sponsor
 http://dmawww.epfl.ch/roso.mosaic/dm/murphy.html

Right! Although Murphy was apparently an optimist ... O'Brian's Law stipulates that: "Tis' fu%*ed up already, tae be shure now" <g>
Jul 19 2004
prev sibling parent Daniel Horn <hellcatv hotmail.com> writes:
Bent Rasmussen wrote:
 There is no denying that it's a potential problem. I wonder how often the
 problem occurs in practice though. Part of the problem is checking the
 APIs you use, part of the problem is that the API can change. There are good
 reasons for using override but I haven't heard a really good reason for
 making it compulsory. I find it rather counterproductive to write a list of
 names in favor of your position, even if it is the most popular and you
 might be right.
 
 

Otherwise you forget. You can better believe in the beginning stages of Vega Strike development anything that was NOT mandatory was ignored and I paid for it later... had the override option been mandatory in C++, I would not have gotten burned by the missile thing... and the missile collision check was not the only thing -- there have been at least 15 bugs that were exactly the same in Vega Strike over the years... I could name them one by one, but I think it would bore you... and VS is a young project... imagine what might happen in a more OO-based project that's older...
Jul 19 2004
prev sibling parent J C Calvarese <jcc7 cox.net> writes:
Kris wrote:
 "Berin Loritsch" wrote ...
 
He should have placed a call to super.pause() to have everything working
as expected.  However that is done in D.

With respect Berin, may I gently suggest that you go back and read the initial posts on this topic? It's clear that you don't quite grasp the issue at hand. Take a look at those who are supporting the notion of a stricter use of "override":

Matthew
Andy Friesen
tecDruid
Blandger
Derek
Juanjo Álvarez
Vathix
Daniel Horn
Kris

Please add my name to the list. :)

I don't claim to have the same amount of experience as the others arguing this position, but you make a compelling argument. It's a small price to pay for the prevention of subtle bugs.

-- 
Justin (a/k/a jcc7)
http://jcc_7.tripod.com/d/
Jul 19 2004
prev sibling parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Berin Loritsch wrote:
 Russ Lewis wrote:
 
 Berin Loritsch wrote:

 When would you *not* override a base class function?  I don't get 
 when you wouldn't want that to happen.

Somebody (I forget who) had a problem where he added a pause() function to one of his classes. However, he later discovered that the class Thread (which, 3 levels back, was a base class of this guy's class) also had a pause() function. So when some thread code called pause(), it got his pause() function...which definitely does NOT pause the thread.

So he did not override a method, he replaced the method? He should have placed a call to super.pause() to have everything working as expected. However that is done in D.

The point was that he didn't *realize* that there was a method named pause() in the superclass. Sure, you can make a valid argument that he could have checked the documentation...but the same problem can occur if the Thread class is later modified to include a new member function, and it turns out that there's a conflict on that member name.
Jul 19 2004
prev sibling parent reply Andy Friesen <andy ikagames.com> writes:
Berin Loritsch wrote:

 Russ Lewis wrote:
 
 Berin Loritsch wrote:

 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java language.

 Having to explicitly say "I want this to be overridden" is quite
 annoying, as most of the time that is what I expect.



I disagree with this completely. Implementation inheritance is a tricky problem because you not only have a public interface, you have a private invariant that must be maintained. If any method can be overridden, your job gets a whole lot harder. It's better to keep a tight leash on classes and design them such that the only non-final methods are the ones which were made to be non-final. (I prefer to write interfaces and abstract boilerplate classes. No actual method overriding involved.)
 The question is not whether a function is overridABLE.  It is whether 
 the function is overridING.

Ok. I'm used to it happening implicitly. Since I do make use of inheritance, having to state that explicitly would be annoying to me.

The problem is that, when that explicit annotation is not present, changing method signatures in some base class can cause complete chaos because of all sorts of methods which aren't actually overriding anything. Nonetheless, it's all legal syntax, and typically compiles just fine. I think this is a case where it is so easy to do the wrong thing that it is worthwhile to demand that extra bit of explicitness.
 When would you *not* override a base class function?  I don't get when 
 you wouldn't want that to happen.

This problem arises in C# sometimes. It basically amounts to wanting to use some method signature which happens to conflict with another method defined in a dark corner of a superclass someplace. Your method has absolutely nothing to do with that one, save the signature, and you'd rather not be forced to kludge around it by renaming it. -- andy
Jul 19 2004
parent reply Berin Loritsch <bloritsch d-haven.org> writes:
Andy Friesen wrote:

 Berin Loritsch wrote:
 
 Russ Lewis wrote:

 Berin Loritsch wrote:

 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java language.

 Having to explicitly say "I want this to be overridden" is quite
 annoying, as most of the time that is what I expect.



I disagree with this completely.

Which part? The part that says I have to explicitly state "I want to override" for each class/interface method from a parent class I might be working on? Or the part that says overriding is expected unless told otherwise?
 
 Implementation inheritance is a tricky problem because you not only have 
 a public interface, you have a private invariant that must be 
 maintained.  If any method can be overridden, your job gets a whole lot 
 harder.

I really haven't run into this problem, and I have developed software for a few years now. Mostly Java and C++. I run into more problems because a language (or compiler in some cases) *doesn't* override a method I expect it to.
 
 It's better to keep a tight leash on classes and design it such that the 
 only non-final methods are the ones which were made to be non-final. (I 
 prefer to write interfaces and abstract boilerplate classes.  No actual 
 method overriding involved)

This is more a policy of development style and methodology than anything else. I don't believe a language should restrict a user unnecessarily. The purpose of any language is to enable users to solve problems they could not solve before, or that were very inconvenient to solve before.
 
 The question is not whether a function is overridABLE.  It is whether 
 the function is overridING.

Ok. I'm used to it happening implicitly. Since I do make use of inheritance, having to state that explicitly would be annoying to me.

The problem is that, when that explicit annotation is not present, changing method signatures in some base class can cause complete chaos because of all sorts of methods which aren't actually overriding anything. Nonetheless, it's all legal syntax, and typically compiles just fine.

Does this happen often? Once a project reaches a certain level of maturity, I rarely see a need to change a base class method name.

Look at it this way: a method that does not call super.methodName() isn't overriding a method, it is replacing it. Any call to super.methodName() will cause a compile error if super.methodName() does not exist. (Translate that into D syntax if that isn't right.) If there is a true override instead of a replacement, then yes, the compiler will complain loudly. Or at least it should.
 
 I think this is a case where it is so easy to do the wrong thing that it 
 is worthwhile to demand that extra bit of explicitness.

I don't think it is as easy to do wrong as you think, unless there is a bug in the compiler.
 
 When would you *not* override a base class function?  I don't get when 
 you wouldn't want that to happen.

This problem arises in C# sometimes. It basically amounts to wanting to use some method signature which happens to conflict with another method defined in a dark corner of a superclass someplace. Your method has absolutely nothing to do with that one, save the signature, and you'd rather not be forced to kludge around it by renaming it.

IMO, if you have this situation, then the work should be split into two separate classes. No reason to have separate semantics for one method. It is just one more place where maintenance can go wrong.
Jul 19 2004
parent reply Andy Friesen <andy ikagames.com> writes:
Berin Loritsch wrote:

 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java language.

 Having to explicitly say "I want this to be overridden" is quite
 annoying, as most of the time that is what I expect.



I disagree with this completely.

Which part? The part that says I have to explicitly state "I want to override" for each class/interface method from a parent class I might be working on? Or the part that says overriding is expected unless told otherwise?

I think it's a good thing that a class designer be able to decide which methods can and can't be overridden. D does provide this, which makes me a happy camper either way, but I prefer the explicit annotation.

Explicitly decorating a function as being overridable serves as a positive assertion that you want it to be so. This carries with it a piece of your notion of how the class ought to be used: the class's own definition explicitly says "I was made to be subclassed in precisely this manner." Negatives aren't as strong, because they mean either yes or "no, but I forgot to say so".
 Implementation inheritance is a tricky problem because you not only 
 have a public interface, you have a private invariant that must be 
 maintained.  If any method can be overridden, your job gets a whole 
 lot harder.

I really haven't run into this problem, and I have developed software for a few years now. Mostly Java and C++. I run into more problems because a language (or compiler in some cases) *doesn't* override a method I expect it to.

I agree that the C++ way is even worse. :) I think, in general, there should be some mandatory decoration any time you create a method which, in some way, is made to replace a method in a super or subclass. (decorating overloads isn't as important because they're all defined in the same class)
 It's better to keep a tight leash on classes and design it such that 
 the only non-final methods are the ones which were made to be 
 non-final. (I prefer to write interfaces and abstract boilerplate 
 classes.  No actual method overriding involved)

This is more a policy of development style and methodology than anything else. I don't believe a language should restrict a user unnecessarily. The purpose of any language is to enable users to solve problems they could not solve before, or that were very inconvenient to solve before.

Absolutely. I try to make a point to never forget that programming languages exist solely because both computers and people can read them. D's built-in syntax for contracts and unit testing brings out the paranoid, fault-tolerance-obsessed monster in me. :)
 The problem is that, when that explicit annotation is not present, 
 changing method signatures in some base class can cause complete chaos 
 because of all sorts of methods which aren't actually overriding 
 anything.  Nonetheless, it's all legal syntax, and typically compiles 
 just fine.

Does this happen often? Once a project reaches a certain level of maturity, I rarely see a need to change a base class method name.

Right. D does precisely the right thing in this respect. (That is the exact syntax, by the way.) It is a problem that there's no straightforward way to contractually force an override to call the superclass method it overrides, but that's neither here nor there.
 Look at it this way:
 
 A method that does not call super.methodName() isn't overriding a 
 method, it is replacing it.  Any call to super.methodName() will
 cause a compile error if super.methodName() does not exist.

This is another idea entirely. I'm not a fan of C++'s default behaviour of "hiding" the base class method non-polymorphically. It's not what most people mean, and it is the cause of a ton of bugs as a result.
 I think this is a case where it is so easy to do the wrong thing that 
 it is worthwhile to demand that extra bit of explicitness.

I don't think it is as easy to do wrong as you think, unless there is a bug in the compiler.

Here's an example:

    class Base {
        typedef int value_type;
        void setValue(value_type v) { ... }
    }

    class Derived : Base {
        // does not override because D considers
        // Base.value_type to be distinct from int
        void setValue(int v) { ... }
    }

Now, the question is: is this an error?

Presently, in the eyes of DMD, it is not. Base defines a method, Derived defines a method with a different signature, and the world goes on.

Were I, as a programmer maintaining existing code, to review this, I would suspect that it is a mistake. At the very least, I'd have to spend some time fishing around for contextual clues as to whether or not this was intentional. (Time's expensive stuff!) If I were a bit more rash, I might "fix" it right there on the spot, and potentially create a bug where there was none before.

If 'override' were a required keyword, this could not be a problem. Either the keyword is there and it is a compile-time error, or it is not, and therefore cannot be an override. One way or another, the maintenance programmer can feel pretty sure he knows what's going on.
 When would you *not* override a base class function?  I don't get 
 when you wouldn't want that to happen.

This problem arises in C# sometimes. It basically amounts to wanting to use some method signature which happens to conflict with another method defined in a dark corner of a superclass someplace. Your method has absolutely nothing to do with that one, save the signature, and you'd rather not be forced to kludge around it by renaming it.

IMO, if you have this situation, then the work should be split into two separate classes. No reason to have separate semantics for one method. It is just one more place where maintenance can go wrong.

Typically, I'm inclined to agree myself. It's more of a way to kludge over the problem in the interest of saving a perfectly viable design. -- andy
Jul 19 2004
next sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <cdh8fs$mo2$1 digitaldaemon.com>, Andy Friesen says...
I think it's a good thing that a class designer be able to decide which 
methods can and can't be overridden.  D does provide this, which makes 
me a happy camper either way, but I prefer the explicit annotation.

D does provide this - but with the keyword "final", not the keyword "override". Of course, you knew that.
Explicitly decorating a function as being overridable serves as a 
positive assertion that you want it to be so.  This carries with it a 
piece of your notion of how the class ought to be used: The class's own 
definition explicitly says "I was made to be subclassed in precisely 
this manner."

Negatives aren't as strong because they mean either yes or "no but I 
forgot to say so".

Indeed, that's what not using the keyword "final" means, but this is unrelated to the use of "override".
I think, in general, there should be some mandatory decoration any time 
you create a method which, in some way, is made to replace a method in a 
super or subclass.  (decorating overloads isn't as important because 
they're all defined in the same class)

Okay, /now/ you're talking about "override".
Jul 19 2004
prev sibling parent reply Berin Loritsch <bloritsch d-haven.org> writes:
I am enjoying the conversation BTW.

Andy Friesen wrote:

 It is a problem is that there's no straightforward way to contractually
 force an override to call the superclass method it overrides, but that's
 neither here nor there.

I think requiring this is more elegant than forcing a keyword to be used everywhere. More on that later.
 Here's an example:

     class Base {
         typedef int value_type;
         void setValue(value_type v) { ... }
     }

     class Derived : Base {
         // does not override because D considers
         // Base.value_type to be distinct from int
         void setValue(int v) { ... }
     }

 Now, the question is, is this an error?

 Presently, in the eyes of DMD, it is not.  Base defines a method,
 Derived defines a method with a different signature, the world goes on.

:) In D, value_type and int are two different things. In C++ they are the same thing--so this is something that can be a source of confusion. Esp. if there is an implicit conversion from int to value_type (the C++ way).

I can understand there being an issue here, and the possibility of requiring the override keyword in this case.

On requiring the keyword all the time:
-------------------------------------

I am very much not in favor of this approach. The reason is the way our brains work. When we see something repeated over and over again, it comes across as noise which the brain attempts to filter out. It is looking for the differences to see what the problem might be. When something is repeated over and over and over, it has less significance to the brain's powers of perception. When you have keywords of similar size in place to explicitly declare the intent, then I know I personally wouldn't notice them as quickly.

Take for instance the textbook scoping error:

    class Example {
        Label m_label;

        void display() {
            Label m_label = new Label("see");
        }
    }

Most people, even experienced developers, can miss it. It's because seeing the word "Label" starts looking like noise, so the brain just throws out the one that redeclares the variable within the method. The same effect can happen if you require a keyword to be used in each and every method. Methods are fairly plentiful.

-- 

"Programming today is a race between software engineers striving to build bigger and better idiot-proof programs, and the Universe trying to produce bigger and better idiots. So far, the Universe is winning." - Rich Cook
Jul 19 2004
parent Andy Friesen <andy ikagames.com> writes:
Berin Loritsch wrote:
 I am enjoying the conversation BTW.
 
 Andy Friesen wrote:
 
  > It is a problem is that there's no straightforward way to contractually
  > force an override to call the superclass method it overrides, but that's
  > neither here nor there.
 
 I think requiring this is more elegant than forcing a keyword to be used 
 everywhere.  More on that later.

I think I see what you are talking about. The trouble is that you're talking about something completely different. ;)

Kris's problem had to do with creating a method that had absolutely nothing at all to do with Thread.pause(). There was absolutely no semantic link, or intent/desire to call super.pause(), because Kris wasn't even aware that it existed. The problem is that DMD didn't say a word, because there is no requirement that a method specifically be denoted as an override. Kris overrode Thread.pause() without even realizing it: fireworks ensued.

If the override keyword were mandatory, this would not have been a problem: it would be a compile error.
 :)  In D, value_type and int are two different things.  In C++ they are
 the same thing--so this is something that can be a source of confusion.
 Esp. if there is an implicit conversion from int to value_type (the C++
 way).
 
 I can understand there being an issue here, and the possibility of 
 requiring the override keyword in this case.

My point was that, whether or not it's a bug, to a person, it looks like it *could* be a bug, but you can't be sure. If it looks like a bug, you run the risk of someone "fixing" it.
 On requiring the keyword all the time:
 -------------------------------------
 
 	Label m_label = new Label("see");
 
 Most people, even experienced developers, can miss it.  Its because 
 seeing the word "Label" starts looking like noise, so the brain just
 throws out the one that redeclares the variable within the method.

This isn't a problem: if you denote every method as being an override, the compilation will (rightly) fail. -- andy
Jul 19 2004
prev sibling parent Daniel Horn <hellcatv hotmail.com> writes:
I was refactoring Vega Strike (http://vegastrike.sourceforge.net/) to
divide graphics and physics. This happened countless times in the
process. :-/

Many times it was the opposite case, where I had functions that originally
overrode other functions and then stopped overriding them (because
of overloading rules).
If those functions had had override as a keyword before my refactor, the
compiler would have helped me discover those problems.
As it turns out, missiles had not been working for a YEAR before I found
out, and I identified the problem as being that their collide() function
was not working as intended because it was not being overridden any more.

I strongly recommend doing this override thing *before* 1.0
-Daniel

Russ Lewis wrote:
 Berin Loritsch wrote:
 
 What's wrong with having all functions/methods being overridable
 unless there is a keywords saying don't do it?  This is something
 that works quite happily in the Java language.

 Having to explicitly say "I want this to be overridden" is quite
 annoying, as most of the time that is what I expect.

The question is not whether a function is overridABLE. It is whether the function is overridING. Putting 'override' on a function means that it must override some base class function. This is an easy way to check that you're actually overriding a base class function.

The question at hand in this thread is whether or not there should be some way to explicitly say that you are NOT overriding a base class function. If you happen to accidentally override a base class function - and you don't know what that function is supposed to do - then you are practically guaranteed to have a bug, and you won't know about it until runtime.

Jul 19 2004