
digitalmars.D - Should operator overload methods be virtual?

reply Walter Bright <newshound1 digitalmars.com> writes:
Making them not virtual would also make them not overridable, they'd all 
be implicitly final.

Is there any compelling use case for virtual operator overloads? Keep in 
mind that any non-virtual function can still be a wrapper for another 
virtual method, so it is still possible (with a bit of extra work) for a 
class to have virtual operator overloads. It just wouldn't be the default.
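
For instance, a minimal sketch of that wrapper pattern, assuming the template-based opBinary scheme (hypothetical names, not from any real library):

    // The operator is a member template, so it can never be virtual,
    // but it forwards to an ordinary virtual method that a derived
    // class can override.
    class Matrix
    {
        Matrix opBinary(string op)(Matrix rhs) if (op == "+")
        {
            return addImpl(rhs);   // dynamic dispatch happens here
        }

        protected Matrix addImpl(Matrix rhs)   // virtual by default
        {
            return new Matrix;
        }
    }

    class SparseMatrix : Matrix
    {
        protected override Matrix addImpl(Matrix rhs)
        {
            return new SparseMatrix;
        }
    }

With this, a + b on a Matrix reference still ends up in SparseMatrix.addImpl when the object really is a SparseMatrix, even though opBinary itself is not virtual.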
Nov 27 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Sat, 28 Nov 2009 02:32:21 +0300, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Making them not virtual would also make them not overridable, they'd all  
 be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep in  
 mind that any non-virtual function can still be a wrapper for another  
 virtual method, so it is still possible (with a bit of extra work) for a  
 class to have virtual operator overloads. It just wouldn't be the  
 default.
I thought operator overloading was going to be implemented via templates. As such, they are non-virtual by default, which is okay, in my opinion.
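
Roughly, the distinction being discussed (a sketch with hypothetical classes; opAdd is the old-style named operator, opBinary the template-based scheme):

    // Old-style named operator: an ordinary member function,
    // virtual by default and overridable in a subclass.
    class A
    {
        A opAdd(A rhs) { return this; }
    }

    // Template-based operator: member function templates can never
    // be virtual, so the overload is implicitly final.
    class B
    {
        B opBinary(string op)(B rhs) if (op == "+") { return this; }
    }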
Nov 27 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 I thought operator overloading was going to be implemented via 
 templates. As such, they are non-virtual by default, which is okay, in 
 my opinion.
Yes, that's the rationale. I'm looking for a hole in it.
Nov 28 2009
prev sibling next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Walter Bright (newshound1 digitalmars.com)'s article
 Making them not virtual would also make them not overridable, they'd all
 be implicitly final.
 Is there any compelling use case for virtual operator overloads? Keep in
 mind that any non-virtual function can still be a wrapper for another
 virtual method, so it is still possible (with a bit of extra work) for a
 class to have virtual operator overloads. It just wouldn't be the default.
What would making them non-virtual accomplish? I don't think making them non-virtual would hurt too much in practice, but it would introduce an inconsistency into the language relative to "regular" methods. Therefore, I don't think it should be done without a very good reason.
Nov 27 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
dsimcha:
 but it would introduce an
 inconsistency into the language relative to "regular" methods.
Right, it's an exception to a rule of the language, so it increases the language complexity. Bye, bearophile
Nov 28 2009
parent dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Sat, 28 Nov 2009 08:16:33 -0500, bearophile wrote:
 dsimcha:
 but it would introduce an
 inconsistency into the language relative to "regular" methods.
Right, it's an exception to a rule of the language, so it increases the language complexity.
I guess systems programming language users more often think that 'the more executable bloat when compiled with the currently available practical real world tools, the more complex the language in practical real world use'. So if there's some tiny little feature that saves you 1-2 CPU cycles in practical real world systems programming applications, or makes building a practical real world non-academic commercial compiler a bit easier and thus provides more practical value to the paying customer, the language should include that feature.
Ok, well then how does making operator overloads implicitly final improve over being consistent with the rest of the language and making them explicitly final if you want them final? Note: I'm not against making overloading non-virtual if it's implemented with templates, because this is non-arbitrary and consistent with the rest of the language. I'm only against it if it's done arbitrarily by treating operator overload functions as "special" in this regard.
Nov 28 2009
prev sibling next sibling parent reply retard <re tard.com.invalid> writes:
Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:

 Making them not virtual would also make them not overridable, they'd all
 be implicitly final.
 
 Is there any compelling use case for virtual operator overloads? Keep in
 mind that any non-virtual function can still be a wrapper for another
 virtual method, so it is still possible (with a bit of extra work) for a
 class to have virtual operator overloads. It just wouldn't be the
 default.
Is this again one of those features that is supposed to hide the fact that the dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual but is used in a non-polymorphic manner.
Nov 27 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from retard (re tard.com.invalid)'s article
 Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:
 Making them not virtual would also make them not overridable, they'd all
 be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep in
 mind that any non-virtual function can still be a wrapper for another
 virtual method, so it is still possible (with a bit of extra work) for a
 class to have virtual operator overloads. It just wouldn't be the
 default.
Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
If so, I think it's a bad idea.

1. Eventually, we will get a better optimizer. GDC has been resurrected, and after D2 is finalized and all of the more severe bugs are fixed, hopefully Walter will have some time to focus on performance issues.

2. This optimization can trivially be done manually by declaring the overloads final. What would we gain by introducing the inconsistency with "normal" methods?
Nov 27 2009
prev sibling next sibling parent "Robert Jacques" <sandford jhu.edu> writes:
On Fri, 27 Nov 2009 22:58:00 -0500, retard <re tard.com.invalid> wrote:

 Fri, 27 Nov 2009 15:32:21 -0800, Walter Bright wrote:

 Making them not virtual would also make them not overridable, they'd all
 be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep in
 mind that any non-virtual function can still be a wrapper for another
 virtual method, so it is still possible (with a bit of extra work) for a
 class to have virtual operator overloads. It just wouldn't be the
 default.
Is this again one of those features that is supposed to hide the fact that dmd & optlink toolchain sucks? At least gcc can optimize the calls in most cases where the operator is defined to be virtual, but is used in non-polymorphic manner.
Yes and no. Yes, DMD doesn't have link time optimization (LTO), which is what enables this. No, because LTO can't do this optimization in many cases, such as creating/using a DLL/shared object. (Static libraries might also have some issues, but I'm not sure.)
Nov 27 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
retard wrote:
 Is this again one of those features that is supposed to hide the fact 
 that dmd & optlink toolchain sucks? At least gcc can optimize the calls 
 in most cases where the operator is defined to be virtual, but is used in 
 non-polymorphic manner.
The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.

    struct C { virtual int foo() { return 3; } };

    void bar(C* c)
    {
        c->foo();    // <== no virtual call optimization here (1)
    }

    int main()
    {
        C* c = new C();
        c->foo();    // <== virtual call optimization here (2)
        bar(c);
        return 0;
    }

What D doesn't do is (2). What D does do, and C++ does not, is allow one to specify a class is final or a method is final, and then both (1) and (2) will be optimized to direct calls. Doing (2) is entirely a function of the front end, not the linker.
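A rough D counterpart of the same point (sketch):

    // Marking the method (or the whole class) final tells the front end
    // that no override can exist, so both call sites below can become
    // direct calls.
    class C
    {
        final int foo() { return 3; }
    }

    void bar(C c)
    {
        c.foo();    // direct call: foo is final
    }

    void main()
    {
        auto c = new C;
        c.foo();    // direct call here too
        bar(c);
    }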
Nov 28 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el 28 de noviembre a las 13:31 me escribiste:
 retard wrote:
Is this again one of those features that is supposed to hide the
fact that dmd & optlink toolchain sucks? At least gcc can optimize
the calls in most cases where the operator is defined to be
virtual, but is used in non-polymorphic manner.
The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.
The *new* GNU Linker (gold) does (with plug-ins; both GCC and LLVM provide plug-ins for gold to do LTO). See:

http://gcc.gnu.org/wiki/LinkTimeOptimization
http://llvm.org/docs/GoldPlugin.html

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Nov 29 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 Walter Bright, el 28 de noviembre a las 13:31 me escribiste:
 retard wrote:
 Is this again one of those features that is supposed to hide the
 fact that dmd & optlink toolchain sucks? At least gcc can optimize
 the calls in most cases where the operator is defined to be
 virtual, but is used in non-polymorphic manner.
The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.
The *new* GNU Linker (gold) does (with plug-ins, both GCC and LLVM provides plug-ins for gold to do LTO). See: http://gcc.gnu.org/wiki/LinkTimeOptimization http://llvm.org/docs/GoldPlugin.html
I don't see that particular one in the links.
Dec 01 2009
parent Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  1 de diciembre a las 11:17 me escribiste:
 Leandro Lucarella wrote:
Walter Bright, el 28 de noviembre a las 13:31 me escribiste:
retard wrote:
Is this again one of those features that is supposed to hide the
fact that dmd & optlink toolchain sucks? At least gcc can optimize
the calls in most cases where the operator is defined to be
virtual, but is used in non-polymorphic manner.
The gnu linker (ld) does not do any optimizations of virtual call => direct call. Optlink has nothing to do with it.
The *new* GNU Linker (gold) does (with plug-ins, both GCC and LLVM provides plug-ins for gold to do LTO). See: http://gcc.gnu.org/wiki/LinkTimeOptimization http://llvm.org/docs/GoldPlugin.html
I don't see that particular one in the links.
Well, I was talking about link-time optimization in general, not virtual call elimination in particular :). I don't know exactly what kind of optimizations are supported currently, but bear in mind this is all very new (Gold and the LTO plug-ins)...

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
Dec 01 2009
prev sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Making them not virtual would also make them not overridable, they'd all  
 be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep in  
 mind that any non-virtual function can still be a wrapper for another  
 virtual method, so it is still possible (with a bit of extra work) for a  
 class to have virtual operator overloads. It just wouldn't be the  
 default.
I use virtual operator overloads in dcollections, such as opCat and opAppend:

    collection1 ~= collection2; // 2 different collection types, using interfaces
                                // instead of templates to avoid code bloat

Also, opApply should be virtual by default, since it's not a true operator.
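In rough outline, the pattern is something like this (a simplified sketch, not actual dcollections code, using the D1-era opCatAssign name for ~=):

    interface List
    {
        // Declared on the interface, so each concrete collection
        // supplies its own virtual implementation.
        List opCatAssign(List other);   // list1 ~= list2
    }

    class LinkList : List
    {
        List opCatAssign(List other)
        {
            // append other's elements to this list
            return this;
        }
    }

    class ArrayList : List
    {
        List opCatAssign(List other)
        {
            // copy other's elements into this array
            return this;
        }
    }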
Dec 01 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Steven Schveighoffer wrote:
 On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 Making them not virtual would also make them not overridable, they'd 
 all be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep 
 in mind that any non-virtual function can still be a wrapper for 
 another virtual method, so it is still possible (with a bit of extra 
 work) for a class to have virtual operator overloads. It just wouldn't 
 be the default.
I use virtual operator overloads in dcollections. Such as opCat and opAppend. collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat. Also, opApply should be by default virtual, since it's not a true operator.
Would you put up with a couple of forwarding functions? Andrei
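Something along these lines, say (a sketch of one possible shape, not a concrete proposal):

    // The templated operator is non-virtual, but it is only a thin
    // forwarder; the real work is done by a named virtual method.
    class Collection
    {
        Collection opOpAssign(string op)(Collection other) if (op == "~")
        {
            return append(other);
        }

        // virtual by default, overridable in each concrete collection
        Collection append(Collection other) { return this; }
    }

    class SortedCollection : Collection
    {
        override Collection append(Collection other)
        {
            // insert other's elements in sorted order
            return this;
        }
    }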
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 13:53:37 -0500, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Steven Schveighoffer wrote:
 On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 Making them not virtual would also make them not overridable, they'd  
 all be implicitly final.

 Is there any compelling use case for virtual operator overloads? Keep  
 in mind that any non-virtual function can still be a wrapper for  
 another virtual method, so it is still possible (with a bit of extra  
 work) for a class to have virtual operator overloads. It just wouldn't  
 be the default.
I use virtual operator overloads in dcollections. Such as opCat and opAppend. collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat. Also, opApply should be by default virtual, since it's not a true operator.
Would you put up with a couple of forwarding functions?
Well, I'd certainly put up with it if I had no choice :) But if I had a choice, I'd choose to keep them virtual. I have little need for defining bulk operators with templates and mixins; my usage is mainly going to be separate implementations for each operator. If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.

One more thing I wonder: can you alias template instantiations? For example, I have code like this:

    struct S
    {
        alias opAdd add;

        void opAdd(int x);
    }

How does one do that when opAdd is a template with an argument?

-Steve
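One way that might work is to alias the particular instantiation (an untested sketch, assuming the D2 opBinary scheme; whether this compiles may depend on the compiler version):

    struct S
    {
        void opBinary(string op)(int x) if (op == "+")
        {
            // ...
        }

        // alias the "+" instantiation under a plain name
        alias opBinary!("+") add;
    }

    void main()
    {
        S s;
        s + 1;      // calls opBinary!"+"
        s.add(1);   // the same function through the alias
    }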
Dec 01 2009
next sibling parent reply dsimcha <dsimcha yahoo.com> writes:
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 If the compiler could somehow
 optimize out all instances of the template function to reduce bloat, I
 think that would make it a little less annoying.
What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it. http://stackoverflow.com/questions/1771692/when-does-template-instantiation-bloat-matter-in-practice
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:

 == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 If the compiler could somehow
 optimize out all instances of the template function to reduce bloat, I
 think that would make it a little less annoying.
What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.
If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler will only call a template, then I would like the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me.

Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption).

An anecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do, because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? The wireless adapter worked flawlessly.

It's shit like this that pisses me off when people say "oh, bloat is a thing of the past, you get so much memory and CPU nowadays, you don't even notice it." All those little 10MB programs add up pretty quickly.

-Steve
Dec 01 2009
next sibling parent dsimcha <dsimcha yahoo.com> writes:
== Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:
 == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 If the compiler could somehow
 optimize out all instances of the template function to reduce bloat, I
 think that would make it a little less annoying.
What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.
If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler only will call a template, then I would like for the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on the binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me. Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption). an ancecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? the wireless adapter worked flawlessly. It's shit like this that pisses me off when people say "oh, bloat is a think of the past, you get soo much memory and cpu now adays, you don't even notice it." All those little 10MB programs add up pretty quickly. -Steve
No, I agree. Space efficiency does matter. I've certainly jumped through some serious hoops to make my code more space efficient when dealing with large datasets. The thing is that, at least in my experience, in any modern non-embedded program large enough for space efficiency to matter, the space requirements are dominated by data, not code. Therefore, I use as many templates as I feel like and don't worry about it, and when I think about space efficiency, I think about representing my data efficiently.
Dec 01 2009
prev sibling parent reply retard <re tard.com.invalid> writes:
Tue, 01 Dec 2009 16:48:34 -0500, Steven Schveighoffer wrote:

 On Tue, 01 Dec 2009 16:28:14 -0500, dsimcha <dsimcha yahoo.com> wrote:
 
 == Quote from Steven Schveighoffer (schveiguy yahoo.com)'s article
 If the compiler could somehow
 optimize out all instances of the template function to reduce bloat, I
 think that would make it a little less annoying.
What is the sudden obsession with code bloat here lately? Check out this StackOverflow question that I posed a few weeks ago. If anyone has a decent answer to it, I'd love to hear it.
If I'm writing template code effectively as a "macro" meaning "call this virtual method", then there is no point in having template code whatsoever. If I'm forced to write it because the compiler only will call a template, then I would like for the compiler to optimize out its "mistake". Then I have no problem with it, because the net effect on the binary performance and size should be zero. Even if I have to annotate the function to force it, that is fine with me. Larger programs take more memory to run, and longer to load. Not that my D programs need to squeeze every ounce of power out of the system, but I think nowadays there's too little emphasis on executable size optimization (or even memory consumption). an ancecdote on bloatage: I once had a driver for XP for my wireless USB network adapter that put an icon on the task tray, consuming roughly 10MB of memory. Yep, to put an icon on the task tray, it needed 10MB. Just in case I ever wanted to click on that icon to set up my wireless network (which I would never do because once it's set up, I'm done). As a bonus, every week or so, some kind of memory leak would trigger, and it would consume about 200MB of memory before my system started thrashing and I had to kill the icon. I tried to disable it and use Windows to configure my wireless card, and then it used 10MB to put a *grayed out icon* in the tray (which would continue the bonus plan). I finally had to hunt down the offending executable and rename it to prevent it from starting. And guess what? the wireless adapter worked flawlessly. It's shit like this that pisses me off when people say "oh, bloat is a think of the past, you get soo much memory and cpu now adays, you don't even notice it." All those little 10MB programs add up pretty quickly.
If it leaks 200 MB per day, people can already run it for a month on a typical home PC before the machine runs out of physical memory (assuming 8GB of physical RAM, like most of my friends have these days on their $500-600 systems). A typical user reboots every day, so a program can freely leak at least 7 gigs per day; during an 8h work day that's 15 MB per minute, or 250 kB per second. According to Moore's law the leak rate can grow exponentially. So in 2013 your typical taskbar apps leak at least one megabyte per second and most users are still happy. With a RAM upgrade they can use apps that leak 4+ MB per second. As users tend to restart programs when the system starts running slowly, the shorter uptime of apps means that they can leak a lot more.
Dec 01 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Wed, 02 Dec 2009 01:36:54 -0500, retard <re tard.com.invalid> wrote:

 If it leaks 200 MB per day, people can already run it for a month on a
 typical home PC before the machine runs out of physical memory (assuming
 8GB physical RAM like most of my friends have these days on their
 $500-600 systems).
Notice I said XP. This system had 500MB of RAM; it's not a new system. AFAIK, XP doesn't even *support* more than 4GB of RAM (and I don't think my chipset would support more than 1GB). 200MB is probably the most the OS would give it, because I think my typical idle memory usage was 400MB. Let's just say instead of 200MB, it uses whatever memory was left to consume, ok?

But the memory leak isn't the biggest issue; that is clearly a bug and not a feature. The problem I have is the 10MB of memory it uses to put an icon on the task tray. I see loads of these icons all the time on other people's computers, all using up huge chunks of memory so they can instantaneously check for the latest Logitech driver for their keyboard (oooh! what new awesome amazing things will my keyboard be able to do with this upgrade!). It's the computer equivalent of hiring a team of people around you 24/7, where some of those team members' *ONLY* job is to give you a q-tip in case you want it.

And Moore's law seems to apply to moronic icon developers as well -- the more memory available, the bloatier they make their nifty task tray icons: "hey, Windows 7 supports an alpha channel! let's make the icon [that nobody ever uses] fade in and out!"
 A typical user reboots every day so a program can freely leak at least 7
 gigs per day, (during a 8h work day) that's 15 MB per minute or 250 kB
 per second. According to Moore's law the leak rate can grow
 exponentially. So in 2013 your typical taskbar apps leak at least one
 megabyte per second and most of users are still happy. With a RAM upgrade
 they can use apps that leak 4+ MB per second. As users tend to restart
 programs when the system starts running slowly, the shorter uptime of
 apps means that they can leak a lot more.
I don't know what typical users you know, but the typical users I know do not reboot their computer unless it requires it. Most of the people I know have installed so much bloatware on their system that it takes 20 minutes to boot their system, so they only reboot when necessary.

Your idea of "x amount of leakage is OK" where x > 0 is exactly the developer mindset I was talking about.

-Steve
Dec 02 2009
parent retard <re tard.com.invalid> writes:
Wed, 02 Dec 2009 13:15:33 -0500, Steven Schveighoffer wrote:

 I don't know what typical users you know, but the typical users I know
 do not reboot their computer unless it requires it.  Most of the people
 I know have installed so much bloatware on their system that it takes 20
 minutes to boot their system, so they only reboot when necessary.
Ok, if they accept those long boot times, you can waste even more memory, since they would probably accept disk cache thrashing, too. Nowadays laptops have 640 GB hard drives, so basically a taskbar applet could easily use 100 GB of virtual RAM without the stupid user noticing anything.
 Your idea of "x amount of leakage is OK" where x > 0 is exactly the
 developer mindset I was talking about.
It's not my idea :D I guess even if I were badly drunk, I couldn't make my code leak as much as those taskbar application developers do. I don't encourage writing bloaty crap applications; it's just the general trend. Applications get larger and slower. Wirth's law.

If I recall correctly, my old PostScript printer only needed a PPD driver file (< 100 kB). Nowadays even the cheapest printers with very modest features come with 500+ megabytes of "drivers". Since there is no good package manager on Windows, each vendor implements their own, poorly. The high-end printers still use lightweight drivers. What does this tell you? If the printer costs $40, a webcam $15, and a network card $5..10, how can you expect extremely high quality drivers? They hire the worst offshore coders to do the job, the cheapest artists draw the 16-color installer backgrounds (saved in 24-bit BMP format, of course, to waste more space), etc.
Dec 02 2009
prev sibling parent Don <nospam nospam.com> writes:
Steven Schveighoffer wrote:
 On Tue, 01 Dec 2009 13:53:37 -0500, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Steven Schveighoffer wrote:
 On Fri, 27 Nov 2009 18:32:21 -0500, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 Making them not virtual would also make them not overridable, they'd 
 all be implicitly final.

 Is there any compelling use case for virtual operator overloads? 
 Keep in mind that any non-virtual function can still be a wrapper 
 for another virtual method, so it is still possible (with a bit of 
 extra work) for a class to have virtual operator overloads. It just 
 wouldn't be the default.
I use virtual operator overloads in dcollections. Such as opCat and opAppend. collection1 ~= collection2; // 2 different collection types, using interfaces instead of templates to avoid code bloat. Also, opApply should be by default virtual, since it's not a true operator.
Would you put up with a couple of forwarding functions?
Well, I'd certainly put up with it if I had no choice :) But if I had a choice, I'd choose to keep them virtual. I have little need for defining bulk operators with templates and mixins, my usage is mainly going to be separate implementations for each operator. If the compiler could somehow optimize out all instances of the template function to reduce bloat, I think that would make it a little less annoying.
Most of the bloat in my experience comes from that ruddy int->ulong implicit conversion, which gets used in the function lookup rules. If ints didn't implicitly convert to ulong, 3/4 of my operator overloads would disappear -- because then 'long' would be able to do every integer type other than ulong.
 
 One more thing I wonder, can you alias template instantiations?  For 
 example, I have code like this:
 
 struct S
 {
   alias opAdd add;
 
   void opAdd(int x);
 }
 
 How does one do that when opAdd is a template with an argument?
 
 -Steve
Dec 01 2009