
D - Operator overloading.

reply John Fletcher <J.P.Fletcher aston.ac.uk> writes:
Quote from the specification for D:

Operator overloading. The only practical applications for operator
overloading seem to be implementing a complex floating point type, a
string class, and smart pointers. D provides the first two natively,
smart pointers are irrelevant in a garbage collected language.

Another quote:

 D has many features to directly support features needed by numerics
programmers, like direct support for the complex data type and defined
behavior for NaN's and infinities.


Comment:

For numerical computing it is convenient to define classes for vectors,
matrices and other entities beyond complex numbers, such as
quaternions.  For these, overloading operators such as +, -, +=, etc.
means that top-level code can be written easily and remains readable.

John Fletcher
Aug 14 2001
next sibling parent reply "Walter" <walter digitalmars.com> writes:
I agree that operator overloading can be used for matrix and vector classes,
in fact, the first user of my C++ compiler used it to develop a matrix
class! The results were disappointing, though. It's amazingly difficult to
write a good class that overloads operators.

What's a quaternion? 4 reals?

-Walter

John Fletcher wrote in message <3B78E154.BC62E006 aston.ac.uk>...
Quote from the specification for D:

Operator overloading. The only practical applications for operator
overloading seem to be implementing a complex floating point type, a
string class, and smart pointers. D provides the first two natively,
smart pointers are irrelevant in a garbage collected language.

Another quote:

 D has many features to directly support features needed by numerics
programmers, like direct support for the complex data type and defined
behavior for NaN's and infinities.


Comment:

For numerical computing it is convenient to define classes e.g. vectors,
matrices and other entities beyond complex numbers, such as
quaternions.  For this overloading of operators such as +, -, +=, etc
means that top level code can be easily written and readable.

John Fletcher

Aug 14 2001
next sibling parent reply John Fletcher <J.P.Fletcher aston.ac.uk> writes:
Walter wrote:

 I agree that operator overloading can be used for matrix and vector classes,
 in fact, the first user of my C++ compiler used it to develop a matrix
 class! The results were disappointing, though. It's amazingly difficult to
 write a good class that overloads operators.

 What's a quaternion? 4 reals?

 -Walter

A quaternion is an object which has four components. One is real; the other three can be viewed as three mutually perpendicular imaginary numbers. They were invented by Hamilton in 1843 (before vectors) and now have a lot of use calculating rotations in three dimensions.

See for example "SymbolicC++, An Introduction to Computer Algebra Using Object-Oriented Programming" by T.K. Shi & W.H. Steeb, http://issc.rau.ac.za/symbolic/symbolic.html, who define a quaternion class. I use this software with the DM compiler, making extensive use of templates and operator overloading, so that the top-level program resembles as far as possible the algebra which is written down. You can see some of the results of this at http://www.ceac.aston.ac.uk/clifford/paper25/index.htm

It is difficult to write operator overloading classes well. I have used as a basis the work of Coplien. Some of the issues are around the creation of temporary copies along the way. I quite frequently have to extend the stack space to large values to get the programs to run. That is the attraction of a language with garbage collection.

John Fletcher
Aug 15 2001
parent reply "Walter" <walter digitalmars.com> writes:
Wow! I see what you're doing now.

"John Fletcher" <J.P.Fletcher aston.ac.uk> wrote in message
news:3B7A3FCB.DD2550E7 aston.ac.uk...
 Walter wrote:

 I agree that operator overloading can be used for matrix and vector classes,
 in fact, the first user of my C++ compiler used it to develop a matrix
 class! The results were disappointing, though. It's amazingly difficult to
 write a good class that overloads operators.

 What's a quaternion? 4 reals?

 -Walter

 A quaternion is an object which has four components. One is real. The other
 three can be viewed as three mutually perpendicular imaginary numbers. They
 were invented by Hamilton in the 1850's or thereabouts (before vectors) and
 now have a lot of use calculating rotations in three dimensions.

 See for example "SymbolicC++, An introduction to Computer Algebra using
 object oriented programming" T.K. Shi & W.H. Steeb
 http://issc.rau.ac.za/symbolic/symbolic.html
 who define a quaternion class.

 I use this software with DM compiler, making extensive use of templates and
 operator overloading, so that the top level program resembles as far as
 possible the algebra which is written down. You can see some of the results
 of this at http://www.ceac.aston.ac.uk/clifford/paper25/index.htm

 It is difficult to write operator overloading classes well. I have used as a
 basis the work of Coplien. Some of the issues are around the creation of
 temporary copies along the way. I quite frequently have to extend the stack
 space to large values to get the programs to run. That is the attraction of
 a language with garbage collection.

 John Fletcher

Aug 15 2001
parent John Fletcher <J.P.Fletcher aston.ac.uk> writes:
Walter wrote:

 Wow! I see what you're doing now.

It was a related program which had the bug you found for me a little while ago.

John
Aug 16 2001
prev sibling next sibling parent "Rolf Campbell" <rolf.campbell tropicnetworks.com> writes:
Whether it is easy to write good overloaded operators or not doesn't remove
the handiness of their existence.

How do you suggest matrix algebra be done in D?

Matrix a = matrixMult(matrixAdd(c, d), e);

or

Matrix a = c.add(d).mult(e);

???

Neither one of these is especially useful for complex expressions.

Also, another use for operator overloading is arbitrary precision
integers/floats.

If I want a 128-bit or a 256-bit integer, I want to be able to use it in
normal mathematical expressions.

-Rolf Campbell

"Walter" <walter digitalmars.com> wrote in message
news:9lbhub$20ib$1 digitaldaemon.com...
 I agree that operator overloading can be used for matrix and vector classes,
 in fact, the first user of my C++ compiler used it to develop a matrix
 class! The results were disappointing, though. It's amazingly difficult to
 write a good class that overloads operators.

 What's a quaternion? 4 reals?

 -Walter

 John Fletcher wrote in message <3B78E154.BC62E006 aston.ac.uk>...
Quote from the specification for D:

Operator overloading. The only practical applications for operator
overloading seem to be implementing a complex floating point type, a
string class, and smart pointers. D provides the first two natively,
smart pointers are irrelevant in a garbage collected language.

Another quote:

 D has many features to directly support features needed by numerics
programmers, like direct support for the complex data type and defined
behavior for NaN's and infinities.


Comment:

For numerical computing it is convenient to define classes e.g. vectors,
matrices and other entities beyond complex numbers, such as
quaternions.  For this overloading of operators such as +, -, +=, etc
means that top level code can be easily written and readable.

John Fletcher


Aug 16 2001
prev sibling next sibling parent reply Jonathan Cano <jcano mmcnet.com> writes:
Walter wrote:

 I agree that operator overloading can be used for matrix and vector classes,
 in fact, the first user of my C++ compiler used it to develop a matrix
 class! The results were disappointing, though. It's amazingly difficult to
 write a good class that overloads operators.

Just because something is difficult is no reason to exclude it from the language. If we never allowed folks to attempt things that are difficult we'd never make any progress. Operator overloading should definitely be in the language!

--
Jonathan Cano
Member of Technical Staff
MMC Networks, Inc.
Aug 16 2001
parent reply "Carlo E. Prelz" <fluido fluidware.com> writes:
Jonathan Cano wrote:
 
 Just because something is difficult is no reason to exclude it from the
 language.  If we never allowed folks to attempt things that are difficult we'd
 never make any progress.  Operator overloading should definitely be in the
 language!

I think the problem here is to be able to balance the complexity you are inserting with the amount of real-life problems you are solving. Operator overloading implementation as a problem has been solved many times already.

I personally agree with Walter's decision: operators mean something very precise for me. For me, adding fancy meanings to them would just add to the confusion of the result.

Carlo

--
Carlo E. Prelz - fluido fluido.as
"If the Way and its Virtue had not been set aside, what need would there be
to speak so much of love and righteousness?" (Chuang-Tzu)
Aug 17 2001
parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Carlo E. Prelz wrote:
 Jonathan Cano wrote:
 ...

 I think the problem here is to be able to balance the complexity you are
 inserting with the amount of real-life problems you are solving. Operator
 overloading implementation as a problem has been solved many times already.

 I personally agree with Walter's decision: operators mean something very
 precise for me. For me, adding fancy meanings to them would just add to the
 confusion of the result.

 Carlo

Overloaded operators can be quite useful, but perhaps the standard operators shouldn't be overloadable. Perhaps one should instead be able to define infix functions (i.e., operators) with a form like, just as a wild choice, /\:[-~! #$%^&*_+`,?0-9A-z]*\:/, i.e., no white space, no colons, no control characters, no parens, brackets, or braces, no backslash, and surrounded by colons. Also require white-space separation from everything else. So one could define :+: to add matrices, etc., and as these would really be functions, they should follow the normal overloading rules of functions.

This would allow them to be easily parsed, would distinguish them clearly from the standard operators, and would provide the majority of the notational compactness that normal operators provide. The fact that A :+: B would be syntactic sugar for A.add(B) is minor, but convenient.

As to precedence... they should probably bind more strongly than any other operator, and all to the same degree. If you want to get fancy, you'd need to use parentheses.
Aug 17 2001
next sibling parent reply "Walter" <walter digitalmars.com> writes:
Charles Hixson wrote in message <3B7D2C6B.4040702 earthlink.net>...
Overloaded operators can be quite useful, but perhaps the standard
operators shouldn't be overloadable.  Perhaps one should instead be able
to define infix functions (i.e., operators) have a form like, O, just as
a wild choice /\:[-~! #$%^&*_+`,?0-9A-z]*\:/, i.e., no white space, no
colons, no control characters, no parens, bracketts, or braces, no
backslash, and surrounded by colons.  Also require white-space
separation from everything else.  So one could define :+: to add
matrices, etc., and as these would really be functions, they should
follow the normal overloading rules of functions.

This would allow them to be easily parsed, would distinguish them
clearly from the standard operators, and would provide the majority of
the notational compactness that normal operators provide.  The fact that
A :+: B would be syntactic sugar for A.add (B) is minor, but convenient.
   As to precedence ... they should probably bind more strongly than any
other operator, and all to the same degree.  If you want to get fancy,
you'ld need to use parentheses.

Now this idea has a lot of merit! Thanks for posting it. I *like* it being clearly distinguishable from the native operators.
Aug 17 2001
next sibling parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
Walter wrote:
 
 Charles Hixson wrote in message <3B7D2C6B.4040702 earthlink.net>...
 ...So one could define :+: to add
matrices, etc., and as these would really be functions, they should
follow the normal overloading rules of functions.

This would allow them to be easily parsed, would distinguish them
clearly from the standard operators, and would provide the majority of
the notational compactness that normal operators provide.  The fact that
A :+: B would be syntactic sugar for A.add (B) is minor, but convenient.
   As to precedence ... they should probably bind more strongly than any
other operator, and all to the same degree.  If you want to get fancy,
you'ld need to use parentheses.

Now this idea has a lot of merit! Thanks for posting it. I *like* it being clearly distinguishable from the native operators.

Again, I think I read in Stroustrup the suggestion that a language that was happy using Unicode as the source charset could allow you to overload "funny characters" as operators. This would let you use, e.g., u22C5 ("dot operator") for matrix multiply, u22C5 and u00D7 ("multiplication sign") for dot and cross products of vectors, u221A, u221B, u221C for square, third, and fourth roots, etc. This lets a huge variety of obscure branches of mathematics use a "native notation", including notations that haven't been invented yet. You do need a way of describing the prefix/postfix/infixity and possibly precedence and binding of such new operators, though.[1]

Of course, adopting this rule leads you down a slippery slope back down to the plus sign. I'd be very happy to have you wind up at the bottom of that slippery slope and decide that C++-style operator overloading is acceptable for D. All language features are abusable; don't let the design of iostream sour you on the concept of operator overloading.

-Russell B

[1] I didn't like the "dummy operand" solution for ++/-- in C++, though. How about ++operator and operator++ ?
Aug 17 2001
parent reply "Walter" <walter digitalmars.com> writes:
Russell Bornschlegel wrote in message <3B7DC29E.3415E464 estarcion.com>...
Of course, adopting this rule leads you down a slippery slope back
down to the plus sign. I'd be very happy to have you wind up at the
bottom of that slippery slope and decide that C++-style operator
overloading is acceptable for D. All language features are abusable;
don't let the design of iostream sour you on the concept of operator
overloading.

I admit that implementing iostream and just the way it *looks* turned me off. All those << and >> just rub me the wrong way.
[1] I didn't like the "dummy operand" solution for ++/-- in C++,
though. How about ++operator and operator++ ?

Why not just say ++ and -- are not overloadable <g> ?
Aug 17 2001
parent reply "Sean L. Palmer" <spalmer iname.com> writes:
"Walter" <walter digitalmars.com> wrote in message
news:9lkiqi$2pqd$1 digitaldaemon.com...
 Russell Bornschlegel wrote in message <3B7DC29E.3415E464 estarcion.com>...
Of course, adopting this rule leads you down a slippery slope back
down to the plus sign. I'd be very happy to have you wind up at the
bottom of that slippery slope and decide that C++-style operator
overloading is acceptable for D. All language features are abusable;
don't let the design of iostream sour you on the concept of operator
overloading.

I admit that implementing iostream and just the way it *looks* turned me off. All those << and >> just rubs me the wrong way.
[1] I didn't like the "dummy operand" solution for ++/-- in C++,
though. How about ++operator and operator++ ?

Why not just say ++ and -- are not overloadable <g> ?

If that were the case, STL iterators would be much harder to use... as normal pointers provide ++ and -- but not .inc() and .dec() or whatever. Not providing operator overloading at the time you decide to provide templates could lead to problems.

Keeping an eye out for templates could be a large factor in the design of your operator overloading solution... otherwise there'd be no way to design a template that could accept either standard numeric types or user-defined types interchangeably, as the user-defined classes would end up having to use a different syntax for operators such as :+: or whatever, so the template for a + b wouldn't match operator :+:.

I've written and used enough matrix and vector classes to know how valuable operator overloading is... so of course I want them in the D language. But I also want templates at some point, so don't rush to conclusions about how operator overloading should work.

One solution for this may be that if you specify :+:, and there's no match for the types involved for a user-redefined operator :+:, the compiler should try to apply the normal operator + instead. Same for all other operators D has built in: :*: => *, etc. Then inside templates you would always use :+:. However, that will surely make templates even uglier than they would have to be already.

I'd rather just be able to overload any identifier or series of symbols not allowing white space and not mixing digits or alphanumerics together with symbols, and specify whether it's infix (binary) or prefix (unary) or postfix (unary). Sure, people can misuse this. Those are the kind of things people will both praise and curse D for.

For those who've been shot in the foot and are against operator overloading, I have this to say: if you can't tolerate being shot in the foot, you may be in the wrong profession. ;)

Sean
Oct 29 2001
parent "Walter" <walter digitalmars.com> writes:
"Sean L. Palmer" <spalmer iname.com> wrote in message
news:9rj3u0$2pf2$1 digitaldaemon.com...
 For those who've been shot in the foot and are against operator overloading,
 I have this to say: if you can't tolerate being shot in the foot, you may
 be in the wrong profession.  ;)

 Sean

Owwww! -Walter
Oct 30 2001
prev sibling parent John English <je brighton.ac.uk> writes:
Walter wrote:
 
 Now this idea has a lot of merit! Thanks for posting it. I *like* it being
 clearly distinguishable from the native operators.

I don't see that resolving an overloaded operator is any easier just because you have colons around it to distinguish it... the parser generates a tree using expression rules for +, * etc., so no extra complexity there. The code generator then has to decide whether to emit an integer addition for int + int, a floating point addition for float + float, or look up a function in the symbol table for user-defined cases. You still have to choose int+int vs. float+float, so the extra complexity is surely minimal.

It's another story if, as in Algol 68, you can define your own operator tokens and change operator priorities, or as in Ada where you can overload based on the result type rather than the parameter types, but I hope no-one's suggesting that... :-)

-----------------------------------------------------------------
John English              | mailto:je brighton.ac.uk
Senior Lecturer           | http://www.it.bton.ac.uk/staff/je
Dept. of Computing        | ** NON-PROFIT CD FOR CS STUDENTS **
University of Brighton    |    -- see http://burks.bton.ac.uk
-----------------------------------------------------------------
May 02 2002
prev sibling parent Roland <rv ronetech.com> writes:
Yes, overloaded operators are useful.
Creating operators as Charles Hixson suggests can be useful as well!
If I use the D language in the future, I would like to have both.

In fact, Charles Hixson's way looks like another way of calling functions: the first argument placed before the name of the function, the second argument placed after it, and no parentheses. Why not allow any function with one or two arguments to be called that way?

Roland

Charles Hixson wrote:

 Overloaded operators can be quite useful, but perhaps the standard
 operators shouldn't be overloadable.  Perhaps one should instead be able
 to define infix functions (i.e., operators) have a form like, O, just as
 a wild choice /\:[-~! #$%^&*_+`,?0-9A-z]*\:/, i.e., no white space, no
 colons, no control characters, no parens, bracketts, or braces, no
 backslash, and surrounded by colons.  Also require white-space
 separation from everything else.  So one could define :+: to add
 matrices, etc., and as these would really be functions, they should
 follow the normal overloading rules of functions.

 This would allow them to be easily parsed, would distinguish them
 clearly from the standard operators, and would provide the majority of
 the notational compactness that normal operators provide.  The fact that
 A :+: B would be syntactic sugar for A.add (B) is minor, but convenient.
    As to precedence ... they should probably bind more strongly than any
 other operator, and all to the same degree.  If you want to get fancy,
 you'ld need to use parentheses.

Aug 23 2001
prev sibling parent reply John English <je brighton.ac.uk> writes:
Walter wrote:
 
 I agree that operator overloading can be used for matrix and vector classes,
 in fact, the first user of my C++ compiler used it to develop a matrix
 class! The results were disappointing, though. It's amazingly difficult to
 write a good class that overloads operators.

There are lots of other common applications, too; consider a Date class. In Java, the lack of operator overloading means that to add a number of days to a Date you have to write this:

   d.setDate( d.getDate() + n );

and to compare dates:

   if (d1.before(d2)) ...

In "sensible" languages (C++, Ada etc.), you can overload operators so that you can say "if (d1 + n < d2) ...", which IMHO is a hell of a lot more readable.

-----------------------------------------------------------------
John English              | mailto:je brighton.ac.uk
Senior Lecturer           | http://www.it.bton.ac.uk/staff/je
Dept. of Computing        | ** NON-PROFIT CD FOR CS STUDENTS **
University of Brighton    |    -- see http://burks.bton.ac.uk
-----------------------------------------------------------------
May 02 2002
parent reply "Walter" <walter digitalmars.com> writes:
"John English" <je brighton.ac.uk> wrote in message
news:3CD14339.3146AD65 brighton.ac.uk...
 There are lots of other common applications, too; consider a Date
 class. In Java, the lack of operator overloading means that to add
 a number of days to a Date you have to write this:
    d.setDate( d.getDate() + n );
 and to compare dates:
    if (d1.before(d2)) ...

I don't know why Java decided to do dates that way, but I don't think it's the right way. In D (as in C, Javascript, etc.) a date is represented by an arithmetic type. Comparisons, math, etc., on dates are all ordinary arithmetic operations on an ordinary arithmetic type. The only time it has meaning as a date is when it is converted to or from a string, or when things like "day of week" are extracted from it.
 In "sensible" languages (C++, Ada etc.), you can overload operators
 so that you can say "if (d1 + n < d2) ...", which IMHO is a hell of
 a lot more readable.

You are right, but I don't think Date is the best example <g>.
May 06 2002
next sibling parent reply "Pavel Minayev" <evilone omen.ru> writes:
"Walter" <walter digitalmars.com> wrote in message
news:ab5rnl$1o5t$1 digitaldaemon.com...

 I don't know why Java decided to do dates that way, but I don't think it's
 the right way.

I guess it's because you can then make properties like month, day, dayOfWeek etc, all being members of class Date, rather than global functions. With operator overloading, it would make sense.
May 06 2002
parent "OddesE" <OddesE_XYZ hotmail.com> writes:
"Pavel Minayev" <evilone omen.ru> wrote in message
news:ab63ke$20j7$1 digitaldaemon.com...
 "Walter" <walter digitalmars.com> wrote in message
 news:ab5rnl$1o5t$1 digitaldaemon.com...

 I don't know why Java decided to do dates that way, but I don't think
 it's the right way.

 I guess it's because you can then make properties like month, day, dayOfWeek
 etc, all being members of class Date, rather than global functions.

 With operator overloading, it would make sense.

In MFC Microsoft defined a COleDateTime class which does just that. It contains a member field which holds a variable of type DATE (an OLE type that maps to a double) and has all kinds of helper functions and operators defined to ease working with it.

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
May 06 2002
prev sibling parent reply John English <je brighton.ac.uk> writes:
Walter wrote:
 
 "John English" <je brighton.ac.uk> wrote in message
 news:3CD14339.3146AD65 brighton.ac.uk...
 There are lots of other common applications, too; consider a Date
 class. In Java, the lack of operator overloading means that to add
 a number of days to a Date you have to write this:
    d.setDate( d.getDate() + n );
 and to compare dates:
    if (d1.before(d2)) ...

 I don't know why Java decided to do dates that way, but I don't think it's
 the right way. In D (as in C, Javascript, etc.) a date is represented by an
 arithmetic type. Comparisons, math, etc., on dates are all ordinary
 arithmetic operations on an ordinary arithmetic type. The only time it has
 meaning as a date is when it is converted to or from a string, or when
 things like "day of week" is extracted from it.

So you can multiply and divide dates? Hmm... I wonder what 8 May 2002 divided by 3 is?

-----------------------------------------------------------------
John English              | mailto:je brighton.ac.uk
Senior Lecturer           | http://www.it.bton.ac.uk/staff/je
Dept. of Computing        | ** NON-PROFIT CD FOR CS STUDENTS **
University of Brighton    |    -- see http://burks.bton.ac.uk
-----------------------------------------------------------------
May 08 2002
next sibling parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
John English wrote:

 So you can multiply and divide dates? Hmm... I wonder what 8 May 2002
 divided by 3 is?

You can't really multiply or divide a date... but you can do so for a time. So you either see a Date as "the time since some epoch," or as a special addable value, where 2 of the 3 elements in any equation must be Date and the other is a Time:

   Date = Date + Time
   Time = Date - Date

--
The Villagers are Online! http://villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
May 08 2002
prev sibling parent reply "Roberto Mariottini" <rmariottini lycosmail.com> writes:
"John English" <je brighton.ac.uk> ha scritto nel messaggio
news:3CD90494.484154AF brighton.ac.uk...
 So you can multiply and divide dates? Hmm... I wonder what 8 May 2002
 divided by 3 is?

It's the wrong question. The right one is: what is 8 May 2002 divided by 16 Oct 1992? And sin(8 May 2002) * cos(16 Oct 1992)? :-)

Ciao
May 09 2002
parent reply "OddesE" <OddesE_XYZ hotmail.com> writes:
"Roberto Mariottini" <rmariottini lycosmail.com> wrote in message
news:abd9va$2osk$1 digitaldaemon.com...
 "John English" <je brighton.ac.uk> ha scritto nel messaggio
 news:3CD90494.484154AF brighton.ac.uk...
 So you can multiply and divide dates? Hmm... I wonder what 8 May 2002
 divided by 3 is?

 It's the wrong question. The right one is: what is 8 May 2002 divided by
 16 Oct 1992? And sin(8 May 2002) * cos(16 Oct 1992)? :-)

 Ciao

What is the square root of -4? Some operations on dates or times do not make sense, but the same goes for ordinary numbers.

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
May 09 2002
parent Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
OddesE wrote:

 What is the square root of -4?

It's 2j.

I'm going to fork this off... it gave me a cool ponder.

--
The Villagers are Online! villagersonline.com
.[ (the fox.(quick,brown)) jumped.over(the dog.lazy) ]
.[ (a version.of(English).(precise.more)) is(possible) ]
?[ you want.to(help(develop(it))) ]
May 09 2002
prev sibling next sibling parent reply Brendan McCane <mccane cs.otago.ac.nz> writes:
John Fletcher wrote:

 Comment:

 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

 John Fletcher

Gotta agree wholeheartedly with this. If you are doing any numerical computation, then using infix operators is an enormous convenience. This is one (only one, there are others :-) of the problems with Java IMO.

--
Cheers, Brendan.
----------------------------------------------------------------------------
Brendan McCane                    Email: mccane cs.otago.ac.nz
Department of Computer Science    Phone: +64 3 479 8588/8578.
University of Otago               Fax: +64 3 479 8529
Box 56, Dunedin, New Zealand.     There's only one catch - Catch 22.
----------------------------------------------------------------------------
Aug 16 2001
next sibling parent "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
Brendan McCane wrote:

 John Fletcher wrote:

 Comment:

 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

 John Fletcher

Gotta agree wholeheartedly with this. If you are doing any numerical computation, then using infix operators is an enormous convenience. This is one (only one, there are others:-) of the problems with Java IMO.

Remember, from a distant perspective *all* "operators" are "syntactic sugar"! (A "convenience" as mentioned above.)

How's that? Well, let's look at an arbitrary complex expression in C, one with several operators:

   this.that.pointer_to->something_else = z**(2*pi) / (x ^ 0xff);

Let's move it all to functions:

   memberOf(deRef(memberOf(memberOf(this, that), pointer_to)), something_else) =
       div(pow(z, times(2, pi)), xor(x, 0xff));

Not a single operator in the line, and nearly completely unreadable. (Should I have used RPN instead?)

Where to go from here? Well, the isomorphism is visible, at the very least. And that's the key. Though I'm a D newbie, one resolution path comes from the separation of the syntactic and semantic elements of the language. While pre-processing is not a "feature" of D, I'm sure some enterprising individuals will hack the parsers in a way that will allow arbitrary operator creation (and overloading), done under the guise of a simple mapping operation. Of course, a tool such as M4 may be easier to use, or even cpp or sed!

For example, create the file "myoperators.d" with the following contents:

   import math;
   import operatorLib;

   Operator("**", <double LHS>, <double RHS>, "pow(LHS,RHS)");
   Overload("^", <double LHS>, <double RHS>, "pow(LHS,RHS)");

Then, in D code, do an "import myoperators". The first definition would create the new operator "**", and the second would overload the "^" (XOR) operator for doubles. (The assignment versions "**=" and "^=" could be implicitly defined whenever the LHS and RHS are of the same type.) This would map expressions of the form "A**B" and "A^B" to the string "pow(A,B)" when A and B are doubles (where "LHS" and "RHS" are placeholder tokens for each side of the operator). Once the code is generated, the compiler would catch type mismatch errors as usual. Ideally, any such errors would be mapped back to the original text, and not to the replacement (a common problem with C macros).

Hence the need to hack the parsers (especially if things like operator precedence are to be implemented simply). [Actually, I'd vote to eliminate multiple levels of operator precedence! Let's make everything left-to-right, and use parens as needed to make precedence explicit.] This could possibly be done via an enhanced module interface, allowing us to avoid many problems by simply forbidding operator definitions to be used within the files containing their definition.

I feel that many of the problems with (and objections to) C++ operator creation and overloading reduce to management issues. By making the process simpler and explicit (via implementation as a true dynamic language extension, rather than as a textual substitution) we may be able to control and manage the beast. I'm not yet familiar enough with D and its parsers to make a specific recommendation. Help?

Of course, Lisp and Forth adherents will point out the difficulties with "operator explosion", where so many custom operators are implemented that the language morphs into something quite implementation-specific, and completely impenetrable even to language experts. Let's not make operator creation and overloading too easy! But, within reasonable restrictions, it should be made possible.

How can we get what we need from operator creation and overloading with minimal fuss, safety, readability and reliability? The C++ way has well documented difficulties and is easy to misuse, but it can be made to work. Remember, D already provides overloading for the common math operators over the intrinsic numeric types. Can we allow the D programmer to extend this on their own, or must the whole concept be scrapped?

If there really is no "suitable" way to do all this within D itself, then at least we'll always have functions and M4...

-BobC
Aug 16 2001
prev sibling parent reply "Anthony Steele" <asteele nospam.iafrica.com> writes:
My 2c:

Operator overloading is purely syntactic sugar, it adds no new functionality
to a language. In many cases it makes the code harder to read and to debug.
It is aimed at the problem domain of science/math modelling, which makes up
a rather small part of computing today.

 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.


the difference between a += b; and a.append(b); is minor and really not worth adding big features to the language to support.
Sep 09 2001
parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Anthony Steele wrote:

 My 2c:
 
 Operator overloading is purely syntactic sugar, it adds no new functionality
 to a language. In many cases it makes the code harder to read and to debug.
 It is aimed at the problem domain of science/math modelling, which makes up
 a rather small part of computing today.
 
 
quaternions.  For this overloading of operators such as +, -, +=, etc
means that top level code can be easily written and readable.


the difference between a += b; and a.append(b); is minor and really not worth adding big features to the language to support.

a) It's not a big feature if you already have overloaded functions. When done properly it's rather simple to replace operator calls with a rewrite into functional notation. It's just harder for people to get correct.

b) You can be obfuscated with nearly any language feature. Just because some people have gone hog-wild is no reason to denigrate the whole concept. OTOH, I do support a clear distinction between user defined operators and system defined operators. But I also know of languages that don't support that distinction, and which don't suffer excessively because of it.

c) The math folks have had longer to develop their operators. String processing might be expected to develop standard operators over time. Presumably other domains would also be appropriate. Certainly sets have readily definable operators, and it would be desirable for them to be usable. And one would, e.g., want to use the same operators for sets composed of lists and for sets generated by rules, though of course the implementations would need to be quite different. This extends into SQL processing (not such a small field anymore, perhaps?), etc. And that's just what occurs off the top of my head. I'm sure that most people have some domain that they would use operators with, if they were possible. And the domains are probably not identical.

If you are going to do a small isolated operation, then you are correct: the amount gained by overloaded operators is small. But if one is composing operations on data structures, then the operation will not necessarily be small. To take a minor and simple example:

    strVal = salute :+: firstName :+: midInit :+: lastName :+:
             perhapsComma :+: jr_etc :+: titlePunct :+: title

can probably be understood without any explanation. Replacing it with the re-written code:

    strVal = salute.append(firstName.append(midInit.append(
             lastName.append(perhapsComma.append(jr_etc.append(
             titlePunct.append(title...

is not only more difficult to read, but I didn't finish it because I didn't want to count how many parentheses to put at the end. (And on looking it over, I corrected one typo of a missing open paren... nothing similar happened with the first version.)
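[Editor's note: the readability contrast is reproducible in compilable C++, where std::string already overloads +. The append free function below is a stand-in invented for this sketch:]

```cpp
#include <string>

// A free-function stand-in for the post's hypothetical append method
// (an invented helper, not part of any real string API).
std::string append(const std::string& l, const std::string& r) {
    return l + r;
}

// Operator form: reads left to right, no parenthesis counting needed.
std::string withOperator(const std::string& salute,
                         const std::string& name,
                         const std::string& title) {
    return salute + name + title;
}

// Function form: nests inside out, exactly the shape lamented above.
std::string withFunctions(const std::string& salute,
                          const std::string& name,
                          const std::string& title) {
    return append(salute, append(name, title));
}
```

Concatenation is associative, so the right-nested function form yields the same string as the left-to-right operator form.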
Sep 10 2001
next sibling parent reply Axel Kittenberger <axel dtone.org> writes:
 strVal = salute :+: firstName :+: midInit :+: lastName :+:
 perhapsComma :+: jr_etc :+: titlePunct :+: title

Well, = will also most likely be an overloaded assignment operator, so:

    strVal :=: salute :+: firstName :+: midInit :+: lastName :+:
               perhapsComma :+: jr_etc :+: titlePunct :+: title

I can already estimate that people will not be pleased to realise that

    a = b :+: c;

will do something fundamentally different than

    a :=: b :+: c;

This results in something error-prone :/
Sep 10 2001
next sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Axel Kittenberger wrote:

strVal = salute :+: firstName :+: midInit :+: lastName :+:
perhapsComma :+: jr_etc :+: titlePunct :+: title

Well, = will also most likely be an overloaded assignment operator, so: strVal :=: salute :+: firstName :+: midInit :+: lastName :+: perhapsComma :+: jr_etc :+: titlePunct :+: title I can already estimate that people will not be pleased to realise that a = b :+: c; will do something fundamentally different than a :=: b :+: c; This results in something error-prone :/

Sep 10 2001
prev sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Axel Kittenberger wrote:

strVal = salute :+: firstName :+: midInit :+: lastName :+:
perhapsComma :+: jr_etc :+: titlePunct :+: title

Well, = will also most likely be an overloaded assignment operator, so: strVal :=: salute :+: firstName :+: midInit :+: lastName :+: perhapsComma :+: jr_etc :+: titlePunct :+: title I can already estimate that people will not be pleased to realise that a = b :+: c; will do something fundamentally different than a :=: b :+: c; This results in something error-prone :/

Sorry about the empty reply.

If you don't want it to mean something "fundamentally different", then don't define it that way. You could also define append to mean something silly, but we generally suppose that you won't. And the = was intended to be the standard string-to-string assignment, so I didn't redefine it. But append is only analogous to addition, so it was appropriate to use an analogous operator, but not appropriate to re-define the original addition.

Now I would admit that it would be reasonable to use the + operator for all purely numeric types, provided that they implement transitivity, commutativity, and associativity. This would include matrices, vectors, etc. But notice that this doesn't extend to multiplication, as the multiplication of structured numeric types doesn't obey commutativity. I.e., A * B != B * A for matrices (except in special cases).

Similarly, string concatenation doesn't obey commutativity:

    "a" + "b" != "b" + "a"

so the simple + operation is the wrong choice. Etc.

Frequently an operation can be defined appropriately only for the classes that implement "some interface": e.g., if Comparable is implemented then the operators < <= == >= > != are definable. If a particular class implements Comparable, then all of its descendant classes should be able to use those operators (well, D doesn't seem to provide a way of hiding them). But this doesn't seem to me to mean that they should necessarily use the same ordering.

Actually, this highlights one of the weaknesses in the class inheritance structuring of programs. Frequently a particular chunk of data, stored in a particular structure, will need to be accessed in more than one order. For example, databases will generally use more than one index to access the data in the files. Some of these indexes (most, actually) will only pull up a part of the data. So one defines views over the data.

Now imagine that the database is in memory. Probably it would be in an array, but the index would not be significant, except as an accessing method. Stepping through on the index would access all of the data in sequence, but not in any particularly useful sequence. So one would want to define several different views of the data, which would allow one to access the data in different orders (and pull up different pieces of it). Some of them would be read-only views. Etc.

This seems to map more easily onto structs and functions than onto classes and inheritance. And yet these aren't simple things. One would often want them to exhibit fairly complex behaviors. So one would end up implementing a class for each record type. And even fairly closely related record types would have difficulty defining an inheritance relationship that implied mixing their functionality with that of another record type. Two identical fields would be more likely to be a join key field than a field for sorting the two kinds of record on, e.g. So the typical class building operation would involve joining pieces of the two record types together into one. But this is a prohibited variety of inheritance. And in any case it would frequently be implemented by building a list of correspondences between the two record categories, and then for each function call on an item in the list, forwarding the call to the appropriate record type for processing.

So this view of "multiple inheritance" is a bit different from the normal programming language model. This is viewing multiple inheritance as rather like a SQL Select statement, with multiple files. And one would want the join operator to be able to work on all of the record types defined.

I seem to have veered a bit from operator overloading. My apologies. But it doesn't feel less important because of that.
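[Editor's note: the algebra point above, that matrix products and string concatenation fail to commute while remaining associative, can be checked concretely. A hedged sketch; Mat2 and its sample values are ad-hoc illustrations:]

```cpp
#include <array>
#include <string>

// A minimal 2x2 integer matrix type, invented for this sketch.
using Mat2 = std::array<std::array<int, 2>, 2>;

// Matrix product: associative, but not commutative.
Mat2 mul(const Mat2& a, const Mat2& b) {
    Mat2 r{};
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Sample operands: a generic matrix and a row-swap matrix.
const Mat2 A{{{1, 2}, {3, 4}}};
const Mat2 B{{{0, 1}, {1, 0}}};
```

String concatenation shows the same asymmetry: "a" + "b" differs from "b" + "a", even though grouping doesn't matter.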
Sep 10 2001
parent reply Axel Kittenberger <axel dtone.org> writes:
 If you don't want it to mean something "fundamentally
 different", then don't define it that way.  You could also
 define append to mean something silly, but we generally suppose
 that you won't.  And the = was inteneded to be the standard
 string-to-string assignment, so I didn't redefine it.

Well, "fundamentally different" was false; it's even worse than that: it's slightly different.

    a = b;

will assign a to point to b's object and to be a synonym for it in future;

    a :=: b;

will take the contents of b and assign/copy them to a. Now take the following hypothetical code:

    a = b;
    b.printf("%d", 7);
    writeln(a);

vs:

    a :=: b;
    b.printf("%d", 7);
    writeln(a);

The results will be different. Not that the language is wrong that way; I just want to point out that this way we're creating a caveat, a hole thousands will fall into. Well, suppose ten thousand people will use it :) so let's better say 25% of users. How many people debugged hour after hour, again and again, over writing '=' in ifs until compilers learned to warn about it?
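[Editor's note: the aliasing-versus-copy difference can be shown in standard C++, where pointers give the rebinding behavior and values give the copying behavior. Function names below are invented for the sketch:]

```cpp
#include <string>

// Reference-style assignment: a and b end up naming the same object,
// so a later write through b is visible through a.
bool visibleThroughAlias() {
    std::string target = "old";
    std::string* b = &target;
    std::string* a = b;      // "a = b": a now aliases b's object
    *b = "7";                // mutate through b
    return *a == "7";        // the change shows through a
}

// Copy assignment (the post's hypothetical :=:): a gets its own
// snapshot, so later writes to b don't affect it.
bool visibleThroughCopy() {
    std::string b = "old";
    std::string a = b;       // copy the contents
    b = "7";                 // later mutation of b
    return a == "7";         // false: a kept the old contents
}
```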
 But append is only annalogous to addition, so it was appropriate
 to use an analogous operator, but not appropriate to re-define
 the original addition.  Now I would admit that it was reasonable
 to use to + operator for all purely numeric types, providing
 that the implemented transitivity, commutativity, and
 association.  This would include matricies, vectory, etc.  But
 notice that this doesn't extend to multiplication, as the
 multiplication of structured numeric types doesn't obey
 commutativity.  I.e., A * B != B * A for matricies (except in
 special cases.)
 
 Similarly, string concatenation doesn't obey association:
    "a" + "b" != "b" + "a"
 so the simple + operation is the wrong choice.  Etc.

Okay, I agree, but an optimizer should know if a user operation is associative, or commutative, or, ummm, I forget now the name of the last one: from a = b and b = c follows a = c. Now you define the syntax for telling the compiler this, a checker that checks whether it is true, and an optimizer that's able to handle that :o)
 Now imagine that the database is in memory.  Probably it would
 be in an array, but the index would not be significant, except
 as a accessing method.  Stepping through on the index would
 access all of the data in sequence, but not in any particularly
 useful sequence.  So one would want to define several different
 views of the data, which would allow one to access the data in
 different orders (and pull up different pieces of it).  Some of
 them would be read only views.  Etc.  

For example, a hashmap, right?
 This seems to map more
 easily onto structs and functions then onto classes and
 inheritance.  And yet these aren't simple things.  One would
 often want them to exhibit fairly complex behaviors.  So one
 would end up implementing a class for each record type.  And
 even fairly closely related record types would have difficulty
 defining an inheritance relationship that implied mixing their
 functionality with that of another record type.  Two identical
 fields would be more likely to be a join key field than a field
 for sorting the two kinds of record on, e.g.  So they typical
 class building operation would involve joining pieces of the two
 record types together into one.  But this is a prohibited
 variety of inheritance.  And in any case it would frequently be
 implemented by building a list of correspondences between the
 two record categories, and then for each funciton call on an
 item in the list, forwarding the call to the appropriate record
 type for processing.
 
 So this view of "multiple inheritance" is a bit different than
 the normal programming language model.  This is viewing multiple
 inheritance as rather like a SQL Select statement, with multiple
 files.  And one would want the join operator to be able to work
 on all of the record types defined.

Reading this, and thinking about what I touched in the past, only one word hits my mind regarding this: Delphi! Am I right? I guess many things were handled this way there, like the Tables, Lists, and its wonderful interface to SQL. Well, wasn't this why polymorphism was introduced? (virtual functions)

- Axel
Sep 10 2001
next sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Axel Kittenberger wrote:

 ...
 Now imagine that the database is in memory.  Probably it
 would be in an array, but the index would not be
 significant, except as a accessing method.  Stepping
 through on the index would access all of the data in
 sequence, but not in any particularly useful sequence.
 So one would want to define several different views of
 the data, which would allow one to access the data in different
  orders (and pull up different pieces of it).  Some of them
  would be read only views.  Etc.

For example, a hashmap, right?

A hashmap doesn't imply an ordering. I suppose that you can sort the keys, but then what's the gain in making it a hashmap rather than, say, a balanced tree, which would inherently have the order present?
 ..
 So this view of "multiple inheritance" is a bit different
 than the normal programming language model.  This is
 viewing multiple inheritance as rather like a SQL Select
 statement, with multiple files.  And one would want the
 join operator to be able to work on all of the record
 types defined.

Reading this, and thinking about what I touched in the past, only one word hits my mind regarding this: Delphi! Am I right? I guess many things were handled this way there, like the Tables, Lists, and its wonderful interface to SQL. Well, wasn't this why polymorphism was introduced? (virtual functions) - Axel

Polymorphism handles a portion of it. This comes more out of databases than any programming language that I've encountered. MS Basic ends up wrapping all the data definitions into a string class that's interpreted at run time. Not a good answer. Most languages seem to just ignore the problem.

Python allows one to import an external B+Tree (SleepyCatDB) with an interface that's the same as a hashtable (Dictionary) even though it is inherently sorted. It works well because a B+Tree supports random access quite well. But Python allows one to dynamically determine the members of a class. This is probably inappropriate for D, but I'm not sure what the appropriate way is.

And, I notice that I'm suddenly back on the thread again. Python handles the mapping by re-defining the dictionary access operators, so that one can access the B+Tree as if it were a dictionary. There are, however, also methods that permit the definition of indices within the database, etc. But a dictionary that gets too large, or which suddenly needs to become persistent, can be migrated to a database with only the change of a few lines in the program.
Sep 10 2001
parent Axel Kittenberger <axel dtone.org> writes:
 A hashmap doesn't imply an ordering.  I suppose that you can
 sort the keys, but then what's the gain in making it a hashmap
 rather than, say, a balanced tree, which would inherently have
 the order present?

No, I didn't mean sorting, but that you've also got two ways to access a hashmap: a) iterate through all entries without any order, b) fast-get a specific entry by its key.
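[Editor's note: the two access modes map directly onto a real hashmap type. A small sketch; the sample data is made up:]

```cpp
#include <string>
#include <unordered_map>

// Made-up sample data for the sketch.
const std::unordered_map<std::string, int> table{
    {"alpha", 1}, {"beta", 2}, {"gamma", 3}};

// (a) iterate through all entries, in no particular order.
int sumAll(const std::unordered_map<std::string, int>& m) {
    int total = 0;
    for (const auto& kv : m)
        total += kv.second;
    return total;
}

// (b) fast-get a specific entry by its key.
int lookup(const std::unordered_map<std::string, int>& m,
           const std::string& key) {
    auto it = m.find(key);
    return it != m.end() ? it->second : -1;
}
```

Iteration order is unspecified, but the sum is the same regardless; lookup bypasses iteration entirely.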
Sep 10 2001
prev sibling parent timeless <timeless mac.com> writes:
Axel Kittenberger wrote:
 associative, or commutative, or ummm. I forget now the name of the last one
 from a = b, and b = c follows a = c.

Feb 04 2002
prev sibling parent reply a <a b.c> writes:
Charles Hixson wrote:

 b) You can be obfuscated with nearly any language feature.  Just
 because some people have gone hog-wild is no reason to denigrate
 the whole concept.  OTOH, I do support a clear distinction
 between user defined operators and system defined operators.
 But I also know of languages that don't support that
 disctinction, and which don't suffer excessively because of that.

Just for argument's sake: if you had distinct user-defined operators, and if we ever get some form of generic programming into the language, how would you suggest writing generic code that could work both for built-in types like floats and user-defined types like vectors? You can't use '==' on user types, and you (presumably) can't use ':==:' (or whatever we make) on built-ins.

I'm all for overloads, and I don't mind differentiating between user types and built-in types, but I don't want to use this to paint us into a corner with regard to generic programming.

Dan
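[Editor's note: the generic-programming concern can be sketched in C++, where one spelling of equality is exactly what lets a single template serve built-ins and user types alike. Vec2 and contains are invented for illustration:]

```cpp
#include <cstddef>

// An invented user-defined type with the same == spelling as built-ins.
struct Vec2 {
    double x, y;
    bool operator==(const Vec2& o) const { return x == o.x && y == o.y; }
};

// Generic membership test: compiles for double and Vec2 alike only
// because both spell equality the same way.
template <typename T, std::size_t N>
bool contains(const T (&items)[N], const T& wanted) {
    for (std::size_t i = 0; i < N; ++i)
        if (items[i] == wanted) return true;
    return false;
}

// Sample data for the sketch.
const double nums[] = {1.0, 2.5, 4.0};
const Vec2 vecs[] = {{0.0, 0.0}, {1.0, 2.0}};
```

If user types were forced onto a distinct ':==:' spelling, this one template could no longer cover both cases.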
Sep 10 2001
parent reply Charles Hixson <charleshixsn earthlink.net> writes:
a wrote:

 Charles Hixson wrote:
 ... 
 	Just for arguments sake, if you had distinct user defined operators,
 and if we ever get some form of generic programming into the language,
 how would you suggest writing generic code that could work both for
 built in type like floats, and user defined type like vectors?  You can
 use '==' on user types, and you (presumably) can't use ':==:' (or what
 ever we make) on built ins.
 	I'm all for overloads, and I don't mind differentiated between user
 types and built in type, but I don't want use this to paint us into a
 corner with regard to generic programming.
 
 Dan
 

One could certainly define:

    operator :==: (left, right : float)
    {
        return (left == right);
    }

So I don't see that as a problem.
Sep 11 2001
parent reply a <a b.c> writes:
Charles Hixson wrote:
 
 a wrote:
 
 Charles Hixson wrote:
 ...
       Just for arguments sake, if you had distinct user defined operators,
 and if we ever get some form of generic programming into the language,
 how would you suggest writing generic code that could work both for
 built in type like floats, and user defined type like vectors?  You can
 use '==' on user types, and you (presumably) can't use ':==:' (or what
 ever we make) on built ins.
       I'm all for overloads, and I don't mind differentiated between user
 types and built in type, but I don't want use this to paint us into a
 corner with regard to generic programming.

 Dan

One could certainly define: operator :==: (left, right : float) { return (left == right); } So I don't see that as a problem.

And we go from looking at (left == right) and not knowing if we have built-ins or objects, to seeing (left :==: right) and not knowing if we are looking at built-ins or objects. I think we have just subverted someone else's intentions.

Dan
Sep 11 2001
parent Charles Hixson <charleshixsn earthlink.net> writes:
a wrote:

 Charles Hixson wrote:

 a wrote:


Charles Hixson wrote: ...

One could certainly define: operator :==: (left, right : float) { return (left == right); } So I don't see that as a problem.

And we go from looking at (left == right) and not know if we have built ins or object to seeing (left :==: right) and not know if we are looking at built ins or objects. I think we have just subverted someone else's intentions. Dan

The built-in == hasn't been redefined, so it hasn't been subverted. The :==: has been defined in a way that is seen by the designer as consistent with the way he generally uses the operator, so that hasn't been subverted. I don't understand your point. Whose intentions to do what have been subverted how?
Sep 12 2001
prev sibling next sibling parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

Operator overloading as it exists in C++ is unsatisfactory. Unless it can be radically improved I don't see the point of having the feature.

I do a lot of graphics programming (for fun; I program Telecommunications software for a living), so I use Vectors a lot. Personally I think they provide a good reason why you *don't* want to use operator overloading. Yes, you can overload +, -, +=, and you can overload * for cross-product, but you can't overload . (period) for dot product. The problems with operator overloading in C++ are:

(1) You can only overload the operators that C++ already provides, which diminishes the value of operator overloading, and is a damn pain when there are some extremely common operations for your type which would benefit from it (like a dot operator).

(2) You inherit the precedence of the native operators from C++, which is not always appropriate.

(3) Overloading new is just *evil*. It makes debugging a large codebase a nightmare.

(4) It introduces another style for doing very common stuff. One group of programmers does vec.plus(v2) and another does vec + v2. One of the main issues with maintaining any large code base is uniformity of style. The more you can encourage people to express simple concepts in the same way, the better.

Personally I think that overloading is not worth the hype and tends to obfuscate code. If you *must* implement it, don't do it unless you can use any operators you like (regardless of whether the language supports them natively) and you must be able to specify your own precedence (that'd be a fun compiler to write!).

Peter.
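[Editor's note: the vector situation above can be sketched in standard C++: the arithmetic operators overload fine, but dot product must fall back to a named function since '.' cannot be overloaded. Vec3 is an ad-hoc illustration type:]

```cpp
// An ad-hoc 3-vector type, invented for this sketch.
struct Vec3 {
    double x, y, z;
};

Vec3 operator+(const Vec3& a, const Vec3& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

Vec3 operator*(const Vec3& a, const Vec3& b) {   // cross product
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

double dot(const Vec3& a, const Vec3& b) {       // no '.' to overload
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The standard basis vectors, for checking the identities below.
const Vec3 X{1, 0, 0};
const Vec3 Y{0, 1, 0};
```

X cross Y gives the Z axis, and perpendicular vectors dot to zero, matching the usual notation only for the operators C++ happens to allow.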
Aug 17 2001
next sibling parent reply John Fletcher <J.P.Fletcher aston.ac.uk> writes:
kaffiene wrote:

 Personally I think that overloading is not worth the hype and tends to
 obfuscate code.  If you *must* implement it, don't do it unless you can use
 any operators you like (regardless of whether the language supports them
 natively) and  you must be able to specify your own precedence (that'd be a
 fun compiler to write!).

 Peter.

I know of a C++ compiler (SALFORD C++, http://www.suns.salford.ac.uk/compilers/salfordc/index.shtml) which has a pragma to allow the definition of new operators and to specify their position in the precedence hierarchy. It also produces very lint-like error messages. I could not find a user group.

John Fletcher
Aug 17 2001
parent John English <je brighton.ac.uk> writes:
John Fletcher wrote:
 
 kaffiene wrote:
 
 Personally I think that overloading is not worth the hype and tends to
 obfuscate code.  If you *must* implement it, don't do it unless you can use
 any operators you like (regardless of whether the language supports them
 natively) and  you must be able to specify your own precedence (that'd be a
 fun compiler to write!).

 Peter.

I know of a C++ compiler (SALFORD C++ http://www.suns.salford.ac.uk/compilers/salfordc/index.shtml) which has a pragma to allow the definition of new operators and specify the position in the precedence. It also produces very lint like error messages. I could not find a user group.

Algol 68 did this too, many years ago. ----------------------------------------------------------------- John English | mailto:je brighton.ac.uk Senior Lecturer | http://www.it.bton.ac.uk/staff/je Dept. of Computing | ** NON-PROFIT CD FOR CS STUDENTS ** University of Brighton | -- see http://burks.bton.ac.uk -----------------------------------------------------------------
May 03 2002
prev sibling next sibling parent reply Christophe de Dinechin <descubes earthlink.net> writes:
For LX, I invented the "written" notation. In D-like syntax, it would be:

    matrix Add(matrix M, matrix N) written M+N
    {
        ...
    }

One benefit is that it allows you to define N-way operators:

    matrix MultiplyAndAdd(matrix A, float B, matrix C) written A*B+C
    {
        ...
    }

There are many cases where this is really simpler or more efficient than
two-way operators. Another slight extension offered by LX is to allow the
definition of named infix operators (which all share the same, lowest
priority):

    int And(int A, int B) written A and B;


This is much easier to define and implement than it sounds. This being
presented, I can discuss your e-mail in more details...
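[Editor's note: a rough C++ approximation of the three-way "written A*B+C" definition is possible by capturing A*B lazily, the seed of the C++ "expression template" technique. Vec and Scaled below are invented illustration types, not LX:]

```cpp
// An ad-hoc 2-vector type for the sketch.
struct Vec {
    double x, y;
};

// Lazy node recording the operands of A*B -- nothing is computed yet.
struct Scaled {
    Vec a;
    double b;
};

Scaled operator*(const Vec& a, double b) { return {a, b}; }

// The whole pattern A*B+C arrives here and is fused into one pass,
// much like the "written A*B+C" definition above.
Vec operator+(const Scaled& s, const Vec& c) {
    return {s.a.x * s.b + c.x, s.a.y * s.b + c.y};
}

const Vec A{1, 2};
const Vec C{10, 20};
```

The difference from LX is that C++ makes the author build the lazy intermediate by hand, where "written" declares the pattern directly.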


kaffiene wrote:

 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

Operator overloading as it exists in C++ is unsatisfactory. Unless it can be radically improved I don't see the point of having the feature.

The point being that a very large portion of C++ code out there uses it in one form or another. I take this as an indication that it's probably useful to some. Avoid the common pitfall of thinking: "I don't need it, therefore nobody needs it."
 I do a lot of graphics programming (for fun, I program Telecommunications
 software for a living), so I use Vectors a lot.  Personally I think they
 provide a good reason why you *don't* want to use operator overloading.
 Yes, you can overload +, -, +=, and you can overload * for cross-product,
 but you can't overload . (period) for dot product.

With the written approach, at least, you could define

    float DotProduct(vector A, vector B) written A dot B

(As a matter of fact, the LX compiler would let you override A.B that way, if memory serves me right, but it might not be possible in the general framework of the D semantics)
  The problems with
 operator overloading in C++ are:

 (1) You can only overload the operators that C++ already provides, which
 deminishes the value of operator overloading, and is a damn pain when there
 are some extremely common operations for your type which would benefit from
 it (like a dot operator)

As a matter of fact, overloading the . operator was a hot debate within the C++ committee. Microsoft was all in favor of it. I personally consider it a mistake that it was not allowed, because it disabled a whole class of "smart objects". Anyway, this doesn't have to be the case in D.
 (2) You inheirit the precedence of the native operators from C++ which is
 not always appropriate.

That's not a problem, that's a feature. The compiler would be unable to parse expressions correctly if operator precedence changed. If the compiler can't parse it, the human brain probably would have trouble too. For instance, say you have A+B*C, with operators that make both (A+B)*C and A+(B*C) valid, with different meanings... which do you select? If you enable variable-precedence operators, you end up with a real compile-time nightmare, at best.

This being said, the written approach allows you to redefine priorities, if you are really nasty:

    glop Reorder(glop A, glop B, glop C) written A+B*C
    {
        return (A+B)*C; // Yuck yuck
    }
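[Editor's note: the fixed-precedence behavior is easy to confirm in C++ itself. Trace is an invented type whose overloaded operators record how the expression grouped:]

```cpp
#include <string>

// An invented type that records expression grouping as a string.
struct Trace {
    std::string repr;
};

Trace operator*(const Trace& a, const Trace& b) {
    return {"(" + a.repr + "*" + b.repr + ")"};
}

Trace operator+(const Trace& a, const Trace& b) {
    return {"(" + a.repr + "+" + b.repr + ")"};
}

const Trace A{"A"};
const Trace B{"B"};
const Trace C{"C"};

// Overloaded operators keep native precedence: * binds before +,
// whatever the operand types, so this groups as A + (B * C).
const Trace parsed = A + B * C;
```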
 (3) Overloading new is just *evil*.  It makes debugging a large codebase
 code a nightmare.

The LX compiler uses a form of garbage collection. The first thing it needs to do is to know what objects to collect. operator new tells me precisely that. The problem with C++ features in general is not that they are necessarily bad, but that people want to show off by using them at the wrong place and time. There are C++ features that are bad (declaration syntax, lookup rules). But definitely not operator overloading.
 (4) It introduces another style for doing very common stuff.  One group of
 programmers does vec.plus(v2) and another does vec + v2.  One of the main
 issues with maintaining any large code base is uniformity of style. The more
 you can encourage people to express simple concepts in the same way, the
 better.

The absence of operator overloading is bad, because it introduces yet another style for doing very common stuff. For instance, to add two integers, I write A+B, but to add two vectors, I write A.add(B). One of the main issues with maintaining any large code base is uniformity of style. The more you can encourage people to express simple concepts in the same way, the better. Actually, this is more serious than you think. You just CAN'T WRITE TEMPLATE CODE if the form in which you express things is not common.
 Personally I think that overloading is not worth the hype and tends to
 obfuscate code.  If you *must* implement it, don't do it unless you can use
 any operators you like (regardless of whether the language supports them
 natively) and  you must be able to specify your own precedence (that'd be a
 fun compiler to write!).

Personally, I think that overloading tends to produce less obfuscated code than most other notations. Christophe
Aug 17 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
"Christophe de Dinechin" <descubes earthlink.net> wrote in message
news:3B7D2B88.C079FD16 earthlink.net...
 For LX, I invented the "written" notation. In D-like syntax, it would be:

     matrix Add(matrix M, matrix N) written M+N
     {
         ...
     }

 One benefit is that it allows you to define N-way operators:

     matrix MultiplyAndAdd(matrix A, float B, matrix C) written A*B+C
     {
         ...
     }

 There are many cases where this is really simpler or more efficient than
 two-way operators. Another slight extension offered by LX is to allow the
 definition of named infix operators (which all share the same, lowest
 priority):

     int And(int A, int B) written A and B;


 This is much easier to define and implement than it sounds. This being
 presented, I can discuss your e-mail in more details...

That's really nice.
 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

Operator overloading as it exists in C++ is unsatisfactory. Unless it can
be radically improved I don't see the point of having the feature.

The point being that a very large portion of C++ code out there uses it in
one form or another. I take this as an indication that it's probably useful to
some. Avoid the common pitfall of thinking: "I don't need it, therefore nobody
needs it."

That is not my point of view at all - my view is that one should avoid multiplying the number of ways in which you can achieve the same result - it makes for a code maintenance headache. I think C++ is supremely guilty of this. Perl is an example of a language which will provide a zillion solutions to a given problem. It is also a language that you wouldn't want to maintain a large code base in. I think these two facts are related.
 I do a lot of graphics programming (for fun, I program Telecommunications
 software for a living), so I use Vectors a lot.  Personally I think they
 provide a good reason why you *don't* want to use operator overloading.
 Yes, you can overload +, -, +=, and you can overload * for cross-product,
 but you can't overload . (period) for dot product.

With the written approach, at least, you could define

    float DotProduct(vector A, vector B) written A dot B

That really is nice.
 (As a matter of fact, the LX compiler would let you override A.B that way, if
 memory serves me right, but it might not be possible in the general framework
 of the D semantics)


  The problems with
 operator overloading in C++ are:

 (1) You can only overload the operators that C++ already provides, which
 diminishes the value of operator overloading, and is a damn pain when there
 are some extremely common operations for your type which would benefit from
 it (like a dot operator)

As a matter of fact, overloading the . operator was a hot debate within the
C++ committee. Microsoft was all in favor of it. I personally consider it a
mistake that it was not allowed, because it disabled a whole class of "smart
objects". Anyway, this doesn't have to be the case in D.

 (2) You inherit the precedence of the native operators from C++, which is
 not always appropriate.

That's not a problem, that's a feature. The compiler would be unable to parse
 expressions correctly if operator precedence changed. If the compiler can't
 parse it, the human brain probably would have trouble too. For instance, say
 you have A+B*C, with operators that make both (A+B)*C and A+(B*C) be valid,
 with different meanings... what do you select? If you enable
 variable-precedence operators, you end up with a real compile-time mess,
 at best.

The rhetoric of operator overloading is that it is better because it more accurately reflects the normal mathematical notation of operations in a given context. If operator overloading must enforce a given precedence then this is not true, is it?

If I define a * operation on Vectors for cross-product, then I want it to have the same precedence as . for dot product. And why should a vector cross product be arbitrarily forced to have higher or lower precedence than other operators such as addition? No such limitation is imposed on the mathematical notation. Having to put up with it because it's a hangover from how the compiler handles basic types doesn't seem like a very noble, clean or advanced notion, does it?
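The precedence complaint is easy to demonstrate in C++. In this sketch (the `Vec3` type and its operators are hypothetical illustrations, not code from this thread), `*` is overloaded as the vector cross product, yet it silently inherits C++'s multiplicative precedence, so `a + b * c` parses as `a + (b * c)` regardless of the notation being imitated:

```cpp
#include <cassert>

// Hypothetical 3-vector; '*' is overloaded as the cross product.
struct Vec3 {
    double x, y, z;
};

Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// Cross product: the overload inherits C++'s multiplicative precedence,
// whether or not that matches the mathematical convention in use.
Vec3 operator*(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

bool operator==(Vec3 a, Vec3 b) {
    return a.x == b.x && a.y == b.y && a.z == b.z;
}
```

Only explicit parentheses recover the left-to-right grouping (a + b) * c.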
 This being said, the written approach allows you to redefine priorities,
 though the results are really nasty.

     glop Reorder(glop A, glop B, glop C) written A+B*C
     {
         return (A+B)*C; // Yuck yuck
     }


 (3) Overloading new is just *evil*.  It makes debugging a large codebase
 a nightmare.

The LX compiler uses a form of garbage collection. The first thing it needs to
 do is to know what objects to collect. operator new tells me precisely that.
 The problem with C++ features in general is not that they are necessarily bad,
 but that people want to show off by using them at the wrong place and time.
 There are C++ features that are bad (declaration syntax, lookup rules), but
 definitely not operator overloading.

 (4) It introduces another style for doing very common stuff.  One group of
 programmers does vec.plus(v2) and another does vec + v2.  One of the main
 issues with maintaining any large code base is uniformity of style. The more
 you can encourage people to express simple concepts in the same way, the
 better.

The absence of operator overloading is bad, because it introduces yet another
 style for doing very common stuff. For instance, to add two integers, I write
 A+B, but to add two vectors, I write A.add(B). One of the main issues with
 maintaining any large code base is uniformity of style. The more
 you can encourage people to express simple concepts in the same way, the
 better.

My point is that you have arithmetic operators for basic types. You have method calls on objects. This is two ways of doing things. If you add operator overloading that's three. With operator overloading you will still end up with some objects using method calls for add, subtract etc... and other code using operator overloading to achieve exactly the same kind of thing.

I *know* that the idea is to have object manipulations looking like operations on simple arithmetic types, but the fact is that as soon as you work with more than about three programmers, or inherit code from someone else or import an API, someone is going to do the object manipulations in a different way to you. Given that you are committed to having general method calls on objects (obj.method()), deciding to do *all* manipulations that way ensures that there is one and only one style for performing such operations. Choosing to use operator overloading guarantees that as soon as you move beyond your own stylistically/ideologically pure code, some other bugger will mess it up by doing it a different way and you have no choice but to use their idiom. Maintaining C++ code is full of experiences like this - *whichever* style you choose, some other code that you have to work with does it radically differently.

Operator overloading is typically implemented in a very limited way, so it often doesn't do the job of accurately representing the mathematical notation anyway. For example, on vectors I often want to get the magnitude. This is usually represented as ||v|| for magnitude of v. Most operator overloading systems don't cope with such notation, and so cannot fully represent the syntax for working with a given mathematical type properly.
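The magnitude example can be made concrete in C++ (the `Vec2` type and `norm` name below are hypothetical illustrations): there is no way to declare an outfix `||v||` operator, so magnitude falls back to an ordinary named function next to the overloaded `+`, mixing the two styles in one expression:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical vector type: '+' can be overloaded, but C++ offers no
// way to declare an outfix ||v|| operator.
struct Vec2 {
    double x, y;
};

Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }

// Magnitude must be an ordinary named function: norm(v) instead of ||v||.
double norm(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }
```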
It seems to me that you can:

(a) Go the C++ route and say that only some predetermined operators may be overloaded and that you have to live with the precedence that they already have - this falls short of the stated goal of making the code syntax look like the mathematical syntax.

(b) Go the whole hog and design a system that allows creation of arbitrary operators and overloading the precedence of these and existing operators. This actually *would* allow your system to live up to the rhetoric, but you have to put up with the proliferation of style choices made by each individual programmer you work with, and live with the fact that a maintenance programmer who is unfamiliar with the given mathematical notation will have an uphill battle to be able to work on your codebase.

(c) Don't overload operators at all and have just one idiom for calling methods on objects.

I think that there are two extremes - (b) and (c). I believe that (c) is the best idea, but if you must overload operators at least do it properly (b), but personally I think that the negatives (maintenance costs) far outweigh the gains. (a) is a system that does not really do anything well.

Peter.
Aug 18 2001
next sibling parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
kaffiene wrote:
 
 "Christophe de Dinechin" <descubes earthlink.net> wrote in message
 news:3B7D2B88.C079FD16 earthlink.net...
Avoid the common pitfall of thinking: "I don't need it, therefore
 nobody needs it."

That is not my point of view at all - my view is that one should avoid multiplying the number of ways in which you can achieve the same result - it makes for a code maintenance headache. I think C++ is supremely guilty of this. Perl is an example of a language which will provide a zillion solutions to a given problem. It is also a language that you wouldn't want to maintain a large code base in. I think these two facts are related.

As a counterpoint to this, I'll just point out that no one is forced to write obfuscated Perl. The Perl culture promotes it more than it should, but you _can_ write and comment Perl in such a way that it's no harder to read than, say, C. (I'll leave aside the question of whether anyone would want to maintain a large code base in C. :)

It's been said before, but bad code can be written in any language. I personally believe that _good_ code can be written in any language, as well. Even Perl.
 My point is that you have arithmetic operators for basic types. You have
 methods calls on objects.  This is two ways of doing things.  If you add
 operator overloading that's three.  With operator overloading you will still
 end up with some objects using method calls for add, subtract etc... and
 other code using operator overloading to achieve exactly the same kind of
 thing.

But the first way of doing things, the builtin arithmetic operators, aren't available to classes; the whole point of operator overloading is to allow the _user_ of a class the ability to use the "first way".

Philosophically, this makes more sense if your "class implementors" and "class users" are disjoint sets of people; I've never actually worked on a C++ project where this was the case. Blah blah code reuse blah blah.

-Russell B
Aug 18 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
"Russell Bornschlegel" <kaleja estarcion.com> wrote in message
news:3B7EB9F9.E9B357EF estarcion.com...
 kaffiene wrote:
 "Christophe de Dinechin" <descubes earthlink.net> wrote in message
 news:3B7D2B88.C079FD16 earthlink.net...
Avoid the common pitfall of thinking: "I don't need it, therefore
 nobody needs it."

That is not my point of view at all - my view is that one should avoid
 multiplying the number of ways in which you can achieve the same result - it
 makes for a code maintenance headache.  I think C++ is supremely guilty of
 this.  Perl is an example of a language which will provide a zillion solutions
 to a given problem.  It is also a language that you wouldn't want to maintain
 a large code base in.  I think these two facts are related.

As a counterpoint to this, I'll just point out that no one is forced to write obfuscated Perl. The Perl culture promotes it more than it should, but you _can_ write and comment Perl in such a way that it's no harder to read than, say, C. (I'll leave aside the question of whether anyone would want to maintain a large code base in C. :)

Having done it, I'd have to say C is not a bad language to maintain. It doesn't scale well, but it is at least consistent.
 It's been said before, but bad code can be written in any language.
 I personally believe that _good_ code can be written in any language,
 as well. Even Perl.

Sure. It's just a given that when you work in Perl or C++ with more than a couple of people you will start to get code in a multitude of different styles. OTOH, C, Scheme, Java are languages where most people can read most other people's code fairly easily. This is a major factor in maintaining large codebases.
 My point is that you have arithmetic operators for basic types. You have
 method calls on objects.  This is two ways of doing things.  If you add
 operator overloading that's three.  With operator overloading you will still
 end up with some objects using method calls for add, subtract etc... and
 other code using operator overloading to achieve exactly the same kind of
 thing.

But the first way of doing things, the builtin arithmetic operators, aren't available to classes; the whole point of operator overloading is to allow the _user_ of a class the ability to use the "first way".

Yes - as I said in my previous mail, I understand that point. My point was that even if you do have operators on classes so users can use them like arithmetical types, you *will* get other code that refuses to use overloading. Hence you *will* get conflicting styles. One codebase will overload operators where it is sensible, one will overload where it is not, another will refuse to overload even though it might make sense to do so.

I fully understand that disallowing operator overloading prevents a user using classes as if they were arithmetic primitives - I am saying that the gain is scant and the cost is too severe.

As I also said previously, if we *must* overload operators - at least do it properly and allow for arbitrary operators and precedence so you really can represent the normal mathematical notation for a given type (e.g. ||v|| for vector magnitude).

Cheers, Peter.
Aug 18 2001
parent reply Christophe de Dinechin <descubes earthlink.net> writes:
kaffiene wrote:

  I fully understand that disallowing operator overloading prevents a user
 using classes as if they were arithmetic primitives - I am saying that the
 gain is scant and the cost is too severe.

The gain is scant (if not negative) for something like "cout <<". But it is quite serious for classes that essentially represent arithmetic objects (matrices, vectors, complex numbers, large integers.) In that case, the difference between A+B*C=D+E and A.Add(B.Add(C)).Eq(D.Add(E)) is really significant. In particular in long-term maintenance.

My impression is that your own background did not expose you to such "math-oriented" environments. Then, you are right, avoiding operator overloading is probably the right thing to do.
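For illustration, the two notations being contrasted can be put side by side in C++ with a toy wrapped-integer type (hypothetical, not from this thread); both spellings compute the same value, so the argument is purely about which one reads and maintains better:

```cpp
#include <cassert>

// Toy arithmetic type standing in for a big integer or matrix class.
struct Num {
    int v;
    Num Add(Num o) const { return {v + o.v}; }
    Num Mul(Num o) const { return {v * o.v}; }
};

// Operator spellings forward to the named methods.
Num operator+(Num a, Num b) { return a.Add(b); }
Num operator*(Num a, Num b) { return a.Mul(b); }

// Operator style:    a + b * c
// Method style:      a.Add(b.Mul(c))
```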
 As I also said previously, if we
 *must* overload operators - at least do it properly and allow for arbitrary
 operators and precedence so you really can represent the normal mathematical
 notation for a given type (e.g. ||v|| for vector magnitude).

No, that is a very bad idea, and not only for technological reasons. Mathematicians invented operator precedence, not compilers (precedence is not an artifact of limited technology, it is a simplifying notation.) I don't know of any context where mathematicians read AxB+C as meaning (Ax(B+C))...

But technology also indicates that this is a bad idea, because arbitrary precedences would render most expressions ambiguous.

Christophe
Aug 20 2001
next sibling parent "Sheldon Simms" <sheldon semanticedge.com> writes:
In article <3B80C66E.A86462DB earthlink.net>, "Christophe de
Dinechin" <descubes earthlink.net> wrote:

 kaffiene wrote:
 
  I fully understand that disallowing operator overloading prevents a
  user
 using classes as if they were arithmetic primitives - I am saying that
 the gain is scant and the cost is too severe.

The gain is scant (if not negative) for something like "cout <<". But it is quite serious for classes that essentially represent arithmetic objects (matrices, vectors, complex numbers, large integers.) In that case, the difference between A+B*C=D+E and A.Add(B.Add(C)).Eq(D.Add(E)) is really significant. In particular in long-term maintenance. My impression is that your own background did not expose you to such "math-oriented" environments. Then, you are right, avoiding operator overloading is probably the right thing to do.
 As I also said previously, if we
 *must* overload operators - at least do it properly and allow for
 arbitrary operators and precedence so you really can represent the
 normal mathematical notation for a given type (e.g. ||v|| for vector
 magnitude).

No, that is a very bad idea, and not only for technological reasons. Mathematicians invented operator precedence, not compilers (precedence is not an artifact of limited technology, it is a simplifying notation.) I don't know of any context where mathematicians read AxB+C as meaning (Ax(B+C))... But technology also indicates that this is a bad idea, because arbitrary precedences would render most expressions ambiguous.

I'm pretty sure that by "arbitrary precedence", he didn't mean that there is no precedence, or that the compiler decides what the precedence is, but rather that the precedence of a given user-defined operator is user-defined, and that the user (programmer) can choose any precedence he wants for an operator that he defines.

-- Sheldon Simms / sheldon semanticedge.com
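What Sheldon describes, user-chosen precedence for user-defined operators, can be sketched as a toy precedence-climbing evaluator in C++ (all names below are hypothetical): the binding power of each operator comes from a table the user supplies rather than from the grammar:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Toy precedence-climbing evaluator: the precedence of each operator
// comes from a user-supplied table, illustrating user-defined precedence.
using PrecTable = std::map<char, int>;

int applyOp(char op, int l, int r) {
    switch (op) {
        case '+': return l + r;
        case '*': return l * r;
        default:  return 0;
    }
}

int parseExpr(std::istringstream& in, const PrecTable& prec, int minPrec) {
    int lhs;
    in >> lhs;
    while (true) {
        char op = 0;
        in >> op;
        auto it = op ? prec.find(op) : prec.end();
        if (it == prec.end() || it->second < minPrec) {
            if (op) in.putback(op);  // not ours to consume
            break;
        }
        // Left-associative: the right operand must bind strictly tighter.
        int rhs = parseExpr(in, prec, it->second + 1);
        lhs = applyOp(op, lhs, rhs);
    }
    return lhs;
}

int eval(const std::string& s, const PrecTable& prec) {
    std::istringstream in(s);
    return parseExpr(in, prec, 0);
}
```

With the conventional table, `1 + 2 * 3` evaluates to 7; flipping the two entries makes the very same string evaluate as `(1 + 2) * 3`, which is exactly the hazard for human readers that the thread worries about.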
Aug 20 2001
prev sibling parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
"Christophe de Dinechin" <descubes earthlink.net> wrote in message
news:3B80C66E.A86462DB earthlink.net...
 kaffiene wrote:

  I fully understand that disallowing operator overloading prevents a user
 using classes as if they were arithmetic primitives - I am saying that the
 gain is scant and the cost is too severe.

The gain is scant (if not negative) for something like "cout <<". But it is
 quite serious for classes that essentially represent arithmetic objects
 (matrices, vectors, complex numbers, large integers.) In that case, the
 difference between A+B*C=D+E and A.Add(B.Add(C)).Eq(D.Add(E)) is really
 significant. In particular in long-term maintenance.

 My impression is that your own background did not expose you to such
 "math-oriented" environments. Then, you are right, avoiding operator
 overloading is probably the right thing to do.

I majored in Computer Graphics. I'm quite familiar with both animation for broadcast using raytracing and real time graphics using OpenGL and DirectX. I have written several raytracers. I am about to take up a job with a broadcast graphics company. These are "math-oriented" environments. Your impression is wrong.

I noticed that another poster asked you to refrain from sarcasm at one point - I would also ask you to refrain from personal assumptions. If you want to debate the technical merits of a language feature, please do so but don't do it by claiming that people with alternate opinions are ignorant.
 As I also said previously, if we
 *must* overload operators - at least do it properly and allow for


 operators and precedence so you really can represent the normal


 notation for a given type (e.g. ||v|| for vector magnitude).

No, that is a very bad idea, and not only for technological reasons. Mathematicians invented operator precedence, not compilers (precedence is not
 an artifact of limited technology, it is a simplifying notation.) I don't know
 of any context where mathematicians read AxB+C as meaning (Ax(B+C))... But

If you are talking about vectors, then A + B * C is the vector A added to vector B (resulting in a vector) crossed with the vector C (resulting in a vector). This evaluates as (A + B) * C. There you have a context in which the standard arithmetic operator precedence does not hold.
 technology also indicates that this is a bad idea, because arbitrary
 precedences would render most expressions ambiguous.

*sigh* - please read the history of this thread. My point is that for *new* operators that don't already exist in the language, you need to specify precedence. For example, a Vector class could overload * for cross-product, but C++ doesn't allow overloading . (period) for dot product or ||v|| for magnitude. If you introduce new operators you must be able to specify the precedence. If you change the context of an operator (e.g. * in the vector example above) then the natural precedence of operators may change as well.

Peter.
Aug 20 2001
next sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
kaffiene wrote:
 "Christophe de Dinechin" <descubes earthlink.net> wrote in message
 news:3B80C66E.A86462DB earthlink.net...
 ...vector
 example above) then the natural precedence of operators may change as well.
 
 Peter.
 

I'd give them the same precedence as the dot operator. If you want to specify a different order, you need to use parentheses. The idea here is that these operators are merely syntactic sugar for the functional notation. A :+: B === A.add(B). It is a bit easier to read. But you don't want to encourage excessive proliferation, because if you do, then it won't be easier any more. So you give them the same precedence as the dot operator. (That's what they really are.)
Aug 21 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
"Charles Hixson" <charleshixsn earthlink.net> wrote in message
news:3B828ADA.8010506 earthlink.net...
 kaffiene wrote:
 "Christophe de Dinechin" <descubes earthlink.net> wrote in message
 news:3B80C66E.A86462DB earthlink.net...
 ...vector
 example above) then the natural precedence of operators may change as well.

 Peter.

I'd give them the same precedence as the dot operator. If you want to specify a different order, you need to use parentheses. The idea here is that these operators are merely syntactic sugar for the functional notation. A :+: B === A.add(B). It is a bit easier to read. But you don't want to encourage excessive proliferation, because if you do, then it won't be easier any more. So you give them the same precedence as the dot operator. (That's what they really are.)

I agree with your point about the proliferation of operator overloading making things harder. That's why I was arguing previously that it's probably better not to overload operators at all. Peter.
Aug 21 2001
parent Charles Hixson <charleshixsn earthlink.net> writes:
kaffiene wrote:
 "Charles Hixson" <charleshixsn earthlink.net> wrote in message
 news:3B828ADA.8010506 earthlink.net...
 
kaffiene wrote:

"Christophe de Dinechin" <descubes earthlink.net> wrote in message
news:3B80C66E.A86462DB earthlink.net...
...vector
example above) then the natural precedence of operators may change as well.


Peter.

I'd give them the same precedence as the dot operator. If you want to specify a different order, you need to use parentheses. The idea here is that these operators are merely syntactic sugar for the functional notation. A :+: B === A.add(B). It is a bit easier to read. But you don't want to encourage excessive proliferation, because if you do, then it won't be easier any more. So you give them the same precedence as the dot operator. (That's what they really are.)

I agree with your point about the proliferation of operator overloading making things harder. That's why I was arguing previously that it's probably better not to overload operators at all. Peter.

But you ignore the adjective "excessive". It makes a big difference here. Reasonable use of operator syntax greatly IMPROVES readability. Still, it's probably reasonable to restrict the overloading to non-predefined operators.
Aug 22 2001
prev sibling parent reply Christophe de Dinechin <descubes earthlink.net> writes:
kaffiene wrote:

 My impression is that your own background did not expose you to such
 "math-oriented" environments. Then, you are right, avoiding operator
 overloading is probably the right thing to do.

I majored in Computer Graphics. I'm quite familiar with both animation for broadcast using raytracing and real time graphics using OpenGL and DirectX. I have written several raytracers. I am about to take up a job with a broadcast graphics company. These are "math-oriented" environments. Your impression is wrong. I noticed that another poster asked you to refrain from sarcasm at one point - I would also ask you to refrain from personal assumptions. If you want to debate the technical merits of a language feature, please do so but don't do it by claiming that people with alternate opinions are ignorant.

I said it was an impression, I never claimed it was a fact. You did not qualify your comment above as such ;-) And, sometimes, sarcasm is funny, if properly identified as such. Sorry the other guy took it personally, it was actually a grunt against the C++ committee and Bjarne's decision to have template<> rather than, e.g., template[].

I take your point and will try to avoid that in the future, but I'd like you to get back to the point and answer my questions. How is A.Add(B) not introducing maintenance complexity in complicated expressions? How is A.Add(B) not adding a new notation that differs from the traditional A+B?
 No, that is a very bad idea, and not only for technological reasons.
 Mathematicians invented operator precedence, not compilers (precedence is not
 an artifact of limited technology, it is a simplifying notation.) I don't know
 of any context where mathematicians read AxB+C as meaning (Ax(B+C))... But

If you are talking about vectors, then A + B * C is the vector A added to vector B (resulting in a vector) crossed with the vector C (resulting in a vector). This evaluates as (A + B) * C. There you have a context in which the standard arithmetic operator precedence does not hold.

??? You may be right in your field, but that's the very first time I ever hear this. Any pointer justifying this notation?

Let me take a few books here. Quantum mechanics book, formula H^=E0+A'/4.sigma1.sigma2+Dsigma[1,z].sigma[2.z]. I know that the . has higher precedence here, no parentheses.

An old book by Lillian Lieber on Relativity (one of the few I have in English): Tensor notation all over the place: A.B+C.D, where + is lower priority. Just in case you wonder, some of these tensor products actually are the dot product you are describing.

A third book, same thing, sampling three equations at random, they all had A+B.C mean A+(B.C).

So, again, I'm very curious wrt. the field where A+B.C means (A+B).C. If this is a "frequent case", I might change my mind on priority of operators in LX.
 technology also indicates that this is a bad idea, because arbitrary
 precedences would render most expressions ambiguous.

*sigh* - please read the history of this thread. My point is that for *new* operators that don't already exist in the language, you need to specify precedence. For example, a Vector class could overload * for cross-product, but C++ doesn't allow overloading . (period) for dot product or ||v|| for magnitude. If you introduce new operators you must be able to specify the precedence. If you change the context of an operator (e.g. * in the vector example above) then the natural precedence of operators may change as well.

We have a different definition of "new" operator. You call "new" an operator that can be overloaded, I call new an operator that doesn't exist in the language.

While I have a weaker feeling about defining some arbitrary precedence for "A znort B", I have a strong feeling about defining it for "A*B" as a function of the types of A and B. There, compiler technology has a serious issue, because it means that to parse any expression, you have to do semantics on the items in the expression. That makes it really really hard.

In addition, unless you give me a pointer for the A+B.C example above, I stick to it that mathematicians do not generally change the precedence of operators based on the type of arguments.

Christophe
Aug 22 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
[snip]

 I take your point and will try to avoid that in the future, but I'd like you
 to get back to the point and answer my questions. How is A.Add(B) not
 introducing maintenance complexity in complicated expressions? How is
 A.Add(B) not adding a new notation that differs from the traditional A+B?

My point is that you will be using <class>.<method>(<params>) anyway for 'normal' methods on the class - some people will use this for things like addition, subtraction etc... other people will use <class> + <class>, <class> * scalar etc... When you get programmers looking over the code, some classes will do things using method calls, some will do things using overloaded operators. The code is therefore more variant in style, which means an increased maintenance cost.

In *either* approach (overloaded operators or all method calls) you will still get the situation that a + b will be valid for all scalars and invalid for some classes (all in the case of no operator overloading). Even if you allow operator overloading, you can still have A.Add(B) as a variant notation, so I reject the assertion that not having operator overloading gives you a 'new notation'.
 No, that is a very bad idea, and not only for technological reasons.
 Mathematicians invented operator precedence, not compilers (precedence is
 not an artifact of limited technology, it is a simplifying notation.) I
 don't know of any context where mathematicians read AxB+C as meaning
 (Ax(B+C))...

 If you are talking about vectors, then A + B * C is the vector A added to
 vector B (resulting in a vector) crossed with the vector C (resulting in a
 vector).  This evaluates as (A + B) * C.  There you have a context in which
 the standard arithmetic operator precedence does not hold.

??? You may be right in your field, but that's the very first time I ever hear
 this. Any pointer justifying this notation?

 Let me take a few books here. Quantum mechanics book, formula
 H^=E0+A'/4.sigma1.sigma2+Dsigma[1,z].sigma[2.z]. I know that the . has higher
 precedence here, no parentheses.

 An old book by Lillian Lieber on Relativity (one of the few I have in
 English): Tensor notation all over the place: A.B+C.D, where + is lower
 priority. In case you wonder, some of these tensor products actually are the
 dot product you are describing.

 A third book, same thing, sampling three equations at random, they all had
 A+B.C mean A+(B.C).

 So, again, I'm very curious wrt. the field where A+B.C means (A+B).C. If this
 is a "frequent case", I might change my mind on priority of operators in LX.

Oh sorry - I didn't mean to imply that the + operator had a higher precedence than *, I was using the brackets to indicate how the notation is parsed naturally. The + operator (vector addition) does not have a naturally higher precedence than * (vector cross product). It parses naturally left to right.

[snip]

Peter.
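The left-to-right reading matters because the cross product is not associative: regrouping the same three vectors changes the answer, so a fixed precedence really does pick one meaning over another. A quick C++ check (hypothetical `Vec3`/`cross` names):

```cpp
#include <cassert>

// Minimal 3-vector with cross product to demonstrate non-associativity.
struct Vec3 {
    double x, y, z;
};

Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

bool operator==(Vec3 a, Vec3 b) {
    return a.x == b.x && a.y == b.y && a.z == b.z;
}
```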
Aug 22 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
If everyone is dead keen on operator overloading, then let's have it.  But
if we're going to add it, let's at least be able to add new (lexically new)
operators such as A . B and ||v||.  I can live with inheriting the
precedence of 'standard' operators such as * - + / etc, as long as for new
operators you can specify their precedence (e.g., I'd like ||v|| to have
the same precedence as unary -)


Peter.


"kaffiene" <kaffiene xtra.co.nz> wrote in message
news:9m17t7$19b9$1 digitaldaemon.com...
 [snip]

Aug 22 2001
next sibling parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
kaffiene wrote:
 
 If everyone is dead keen on operator overloading, then let's have it.  But
 if we're going to add it, let's at least be able to add new (lexically new)
 operators such as A . B and ||v||.  I can live with inheriting the
 precedence of 'standard' operators such as * - + / etc, as long as for new
 operators you can specify their precedence (e.g., I'd like ||v|| to have
 the same precedence as unary -)

Agreed, with support for operators written with arbitrary Unicode. (Arrggh. I'm crushed. The bastards rejected the Klingon encoding for Unicode. There is no justice in the world.) -RB
Aug 22 2001
next sibling parent "kaffiene" <kaffiene xtra.co.nz> writes:
 If everyone is dead keen on operator overloading, then let's have it.  But
 if we're going to add it, let's at least be able to add new (lexically new)
 operators such as A . B and ||v||.  I can live with inheriting the
 precedence of 'standard' operators such as * - + / etc, as long as for new
 operators you can specify their precedence (e.g., I'd like ||v|| to have
 the same precedence as unary -)

Agreed, with support for operators written with arbitrary Unicode.
(Arrggh. I'm crushed. The bastards rejected the Klingon encoding for Unicode.
 There is no justice in the world.)

:-) hehe
Aug 22 2001
prev sibling parent reply Eric Gerlach <egerlach canada.com> writes:
Hmph, it's really hard to decide where exactly to put a post when you're 
replying to many...

So, I was thinking about operator overloading and templates a fair bit 
last night.  You see, I like both of them, though not enough to defend 
them with the zeal that some have.  Although I use them, I'd be willing 
to lose them, but I would mourn their demise.  While I was so thinking, 
I came up with an idea that had been described previously, but I don't 
think ever fully and in one place.

My train of thought went as such:

One of the main complaints about operator overloading is it confuses 
what a given operator means in a given situation.  This is a Bad Thing. 
  However, we get around this by not using operator overloading, but 
operator definition.  Thus I refer to exhibit a), by Charles Hixson:

 Perhaps one should instead be able
 to define infix functions (i.e., operators) that have a form like, oh, just
 as a wild choice, /\:[-~! #$%^&*_+`,?0-9A-z]*\:/, i.e., no white space, no
 colons, no control characters, no parens, brackets, or braces, no
 separation from everything else.

So one could define :+: to add matrices, etc., and as these would really be
functions, they should follow the normal overloading rules of functions.
Don't do operator *overloading*, do operator *definition*!

So, having come up with a completely unoriginal idea, I got excited.  But
:+: and :-: and :*: and such all over the place could get quite ugly, IMHO.
So how to solve this?  Enter exhibits b) and c), by Russell Bornschlegel:
 Agreed, with support for operators written with arbitrary Unicode.

 This would let you use, e.g., u22C5 ("dot operator") for matrix
 multiply, u22C5 and u00D7 ("multiplication sign") for dot and cross
 products of vectors, u221A, u221B, u221C for square, third, and fourth
 roots, etc.

Unicode!  D supports Unicode!  Again excited that I had found something
completely unoriginal, I rushed to my computer, and went to
www.unicode.org.  I'm not an expert on Unicode, but I know what it does.
I thought to myself: "Unicode is perfect!  It must have an entire
section devoted to mathematical symbols!"  And lo-and-behold!  It does!
For the interested, you can look here:

http://www.unicode.org/charts/PDF/U2200.pdf (PDF)
or here:
http://charts.unicode.org/Web/U2200.html (GIFs, but I couldn't connect)

If you haven't seen the vast extent of possibilities with this block, go
check out that PDF file.  Neato.

So, the end result of my proposition is this:  Allow the definition of
new infix and unary operators from the Unicode block 2200 to 22FF,
mathematical operators.  Also, symbols from this block are *reserved*
for new operators.  (The allowance of colon-delimited operators was
something I hadn't thought of... but could be another allowed grammar
for new ops)

The reason I don't say 'arbitrary' Unicode is that then I'd go right
ahead and define all sorts of cool operators from the Japanese kanji.
Obfuscated D at its finest.  Heck, I'd see if I could make Rashomon with
a few English interjections a program :)

I think this takes care of some of the concerns about operator
overloading/definition.  I'm going to do a point-by-point on Angus
Graham's wonderful list, which outlines just about every reason not to
have operators change (and which I 100% agree with, despite liking
operator overloading).  However, I'm going to rearrange it into "solved"
and "not-solved":

Solved (or minimised)
---------------------
 - I don't like not being able to see at a glance what is a built in type and
 what is a class.  Stroustrup thinks classes should not be second class
 citizens. I disagree.

Problem solved. New operators for new types.
 - I don't look forward to explaining their use to new programmers.

True, but I think that is minimised by the fact that you're adding, not redefining.
 - I don't like the way overloaded operators hide their true runtime cost in
 a cursory look at the code.

The cost would be less hidden due to the fact that they are unique, and therefore identifiable.
 - I don't like the way they reduce the self-documenting nature of code.
 concatenate() is clear,
 convert_strings_to_integer_representations_and_add_them_and_then_convert_result_to_string()
 is clear, but string_result = string1 + string2 is definitely not clear.

Less of a problem due to the uniqueness of the operators.

Not solved
----------
 - I don't like the added complexity to the language.

If you mean, just as a feature... well, I can't stop that :)
 - I don't like the way overloaded operators can be easily written wrong and
 used wrong so that they generate various temporaries and assignments that
 are not obvious to the programmer.

*sigh* I can't argue against this point, it's too true.
 - I don't like the way they are harder to debug, as they encourage putting
 many function calls on one line.

Again, a problem.
 - I don't like the way we always end up with some classes using overloaded
 operators and some not leading to two ways of doing the same thing.

Yet again, a fundamental problem.

So, I'll admit it isn't perfect, but it might be a start for a compromise
between the two camps.  OTOH, to be honest, I wouldn't miss them much at
all... I use them for mathematical vectors, and that's about it.  But, in
what appears to be the "philosophy of D", I was looking for the "better
solution".

Yet another treatise from the guy who talks too much.

Cheers,

Eric

P.S. I never did find a solution for templates, and I'd miss them a
lot... *sigh*

P.P.S. For me, generics won't cut it.  I need meta-programming...

P.P.P.S. Would you like them in a house?  Would you like them with a
mouse?  Or in a box?  Or with a fox?  Would you like them with a goat?
Would you like them on a boat?  (I'm sorry, I love the great Doctor...
not to mention I think operator overloading is much more acceptable with
a goat)
Aug 26 2001
next sibling parent reply "Walter" <walter digitalmars.com> writes:
Thanks for the entertaining and excellent summary. BTW, what is
"meta-programming" as opposed to generics?

Eric Gerlach wrote in message <3B892085.2060903 canada.com>...
Hmph, it's really hard to decide where exactly to put a post when you're
replying to many...

 [snip]

Aug 26 2001
parent reply Eric Gerlach <egerlach canada.com> writes:
 Thanks for the entertaining and excellent summary. BTW, what is
 "meta-programming" as opposed to generics?

It sounds like you are entreating me to treat you with another treatise.
Hopefully I'll be able to keep the length of this down... though on a
topic so near and dear to my heart, I sincerely doubt it. :)

Meta-programming... meta-programming... where to start.  Well, in a
nutshell it's a way to write a program that is run at compile-time by the
compiler, of which the output is code, which is then compiled.  Hmmm....
that probably makes little sense.  Allow me to illustrate with an example
in C++: the factorial.

    template <int N>
    class Factorial
    {
    public:
        enum { value = N * Factorial<N-1>::value };
    };

    template <>
    class Factorial<1>
    {
    public:
        enum { value = 1 };
    };

    int main()
    {
        int i = Factorial<5>::value;
    }

So what is going on here?  Well, when 'Factorial<5>::value' is compiled,
the compiler tries to resolve it.  But it can't without computing
Factorial<4>::value, and so on down the recursion.  Then the compiler
computes the result of the expression (because it has to be constant,
it's an enum), and uses it *as a constant* in the program.  You get the
time savings of not having to compute it at runtime.

Now, I'll admit that was a pretty useless example.  It makes for somewhat
more readable code, but otherwise isn't great.  Where template
metaprogramming can really be useful is in things like matrices.  Now
hold on, and I'll see if I can get this right first shot.
    // (forward declaration so MatrixComp can mention Matrix<N>)
    template <int N> class Matrix;

    template <int I, int J, int N>
    class MatrixComp
    {
    public:
        static void add(const Matrix<N> & m1, const Matrix<N> & m2,
                        Matrix<N> & result)
        {
            result.m[I][J] = m1.m[I][J] + m2.m[I][J];
            MatrixComp<I-1,J,N>::add(m1, m2, result);
        }
    };

    template <int J, int N>
    class MatrixComp<0, J, N>
    {
    public:
        static void add(const Matrix<N> & m1, const Matrix<N> & m2,
                        Matrix<N> & result)
        {
            result.m[0][J] = m1.m[0][J] + m2.m[0][J];
            MatrixComp<N-1,J-1,N>::add(m1, m2, result);
        }
    };

    template <int N>
    class MatrixComp<0, 0, N>
    {
    public:
        static void add(const Matrix<N> & m1, const Matrix<N> & m2,
                        Matrix<N> & result)
        {
            result.m[0][0] = m1.m[0][0] + m2.m[0][0];
        }
    };

    template <int N>
    class Matrix
    {
    public:
        float m[N][N];

        void add(const Matrix<N> & m)
        {
            Matrix<N> temp;
            MatrixComp<N-1,N-1,N>::add(*this, m, temp);
            this->m = temp.m;   // assuming this was legal :)
        }
    };

    int main()
    {
        Matrix<3> m1, m2;
        /* fill matrices */
        m1.add(m2);
    }

So, what does all that jibber-jabber do for you?  Well, the add() call in
main gets expanded at compile time to this:

    add(m2)
    {
        Matrix<3> temp;
        temp.m[2][2] = (*this).m[2][2] + m2.m[2][2];
        temp.m[2][1] = (*this).m[2][1] + m2.m[2][1];
        temp.m[2][0] = (*this).m[2][0] + m2.m[2][0];
        temp.m[1][2] = (*this).m[1][2] + m2.m[1][2];
        temp.m[1][1] = (*this).m[1][1] + m2.m[1][1];
        temp.m[1][0] = (*this).m[1][0] + m2.m[1][0];
        temp.m[0][2] = (*this).m[0][2] + m2.m[0][2];
        temp.m[0][1] = (*this).m[0][1] + m2.m[0][1];
        temp.m[0][0] = (*this).m[0][0] + m2.m[0][0];
        this->m = temp.m;
    }

And voila!  All this unrolling done before the program is even run!
Incredible!

Now, I can hear some people saying: "But the optimiser will handle all
that for you!"  But that isn't completely true.  First of all, you've
just given the compiler a head-start.  It might be able to squeeze
additional optimisations out because you've already unraveled the loops.
Second, consider matrix multiplication, which requires several nested
loops.
Written properly using template meta-programming, *all* of those loops
are unraveled at compile-time, and you get a straight function such as
the one above, which consists solely of multiplications and additions.
No loops, no counters, just wonderfully optimised code.  Of course, the
more complex the procedure, the greater the runtime savings.

However, this comes at increased compile-time cost.  Doing template
meta-programming takes a lot of time to compile, as the compiler is
acting as an interpreter... in fact, it's acting very similar to LISP.
Also, this generates much larger code files.  But if you need the speed
it can be well worth the trouble.

I think that's a decent intro to the topic.  I only found out about it
recently myself.  But it was love at first sight.  Truly, properly done
metaprogramming is a thing of beauty.  My advantage was that I learned it
from someone who learned it from the guy who wrote the book on template
metaprogramming, Todd Veldhuizen.  You can find his page on the subject
here:

http://www.extreme.indiana.edu/~tveldhui/papers/Template-Metaprograms/meta-art.html

If you want to see a good metaprogramming library, check out blitz++.
I've never used it myself, but apparently it's quite good at what it
does.

Anywho, that's it for me for now.  I don't know if stuff like this could
ever make it into D (and in fact Walter has stated D isn't for
real-time, so this doesn't fit the schema anyways), but it's neat stuff
anyways.

Eric

P.S. An interesting tidbit:  I wrote a template metaprogrammed matrix
inversion routine.  It's got 3-5 levels of recursion... I'd have to
count them again.  GCC, in attempting to compile and optimise a 6x6
matrix with this inversion routine, ate up >600MB of RAM.  It crashed
before it finished.  Fortunately, we only have to use 4x4 matrices in
our code.
Aug 27 2001
parent "Walter" <walter digitalmars.com> writes:
Ok, I see. Thanks for the explanation. I'm not sure I'm ready to try it yet,
though!

Eric Gerlach wrote in message <3B8B0F5E.7030305 canada.com>...
 Thanks for the entertaining and excellent summary. BTW, what is
 "meta-programming" as opposed to generics?

 [snip]
Aug 29 2001
prev sibling next sibling parent reply Russell Bornschlegel <kaleja estarcion.com> writes:
This is going to look like I'm jumping the fence repeatedly on 
operator overloading. For the record, I'm in favor of D incorporating
C++'s operator overloading capability (though not necessarily using
the same definition syntax) _and_ in extending this to arbitrary 
single-unicode-above-ASCII-character operators. 

I understand the problems with operator overloading, and am willing 
to accept them. My need to write code manipulating matrices and 
vectors makes this an absolute requirement for my language-of-choice.

Eric Gerlach wrote:
 So, the end result of my proposition is this:  Allow the definition of
 new infix and unary operators from the Unicode block 2200 to 22FF,
 mathematical operators.  Also, symbols from this block are *reserved*
 for new operators. (The allowance of colon-delimited operators was
 something I hadn't thought of... but could be another allowed grammar
 for new ops)

 The reason I don't say 'arbitrary' Unicode is that then I'd go right
 ahead and define all sorts of cool operators from the Japanese kanji.
 Obfuscated D at its finest.  Heck, I'd see if I could make Rashomon
 with a few English interjections a program :)

I'd be annoyed if Unicode operator definition were adopted with that
arbitrary restriction, personally.  The Miscellaneous Technical block at
2300 includes the operators used by the APL language, which at least
have some justification for being used as user-defined operators in a
programming language.

The Mathematical Operators in the 2200 block also include a slash, an
asterisk, and a tilde, so you can't easily distinguish them from the
builtin operators.  There are lots of operators there that look
something like other operators in C.  Your proposal is thus
simultaneously too restrictive for me, and not safe enough for Mr.
Graham.

-RB
Aug 26 2001
parent reply Eric Gerlach <egerlach canada.com> writes:
 For the record, I'm in favor of D incorporating
 C++'s operator overloading capability (though not necessarily using
 the same definition syntax) _and_ in extending this to arbitrary
 single-unicode-above-ASCII-character operators.

I'm not too picky about operator overloading being in or not (I was just
proposing a solution) but arbitrary Unicode operators is a no-no.  A big
no-no.  Have you seen how many characters there are in Unicode?  It's
crazy to allow all those for operators.  Go look here:

http://www.unicode.org/charts/PDF/U4E00.pdf

There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of
Unicode alone.  I don't want a single one of them being an operator!
The main reason behind that is some people read those as words, and
would be very confused.

If that isn't enough, there are dozens of other scripts that could be
redefined as operators with a generic Unicode solution.  If it exists,
it should be relegated to the block of symbols *called* Mathematical
Operators.
Aug 27 2001
parent reply Russell Borogove <kaleja estarcion.com> writes:
Eric Gerlach wrote:
 There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of
 unicode alone.  I don't want a single one of them being an operator!
 The main reason behind that is some people read those as words, and
 would be very confused.

But as I point out, that arbitrary restriction doesn't make it
particularly hard to confuse people.  The ability to overload the
mathematical-operator-asterisk to mean something different from the
ASCII asterisk is far more likely to be abused than, say, something in
Arabic script.
Sep 05 2001
parent Charles Hixson <charleshixsn earthlink.net> writes:
Russell Borogove wrote:

 Eric Gerlach wrote:
 
There are 0x9FAF - 0x4E00 == 0x51AF == 20911 symbols in that section of
unicode alone.  I don't want a single one of them being an operator!
The main reason behind that is some people read those as words, and
would be very confused.

 But as I point out, that arbitrary restriction doesn't make it
 particularly hard to confuse people.  The ability to overload the
 mathematical-operator-asterisk to mean something different from the
 ASCII asterisk is far more likely to be abused than, say, something in
 Arabic script.

You can also confuse people by naming a routine that takes the RMS of
two numbers "add".  The idea is to not confuse them by accident.  The
possibility of obfuscation is always with us.
Sep 06 2001
prev sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
Eric Gerlach wrote:
 ...
 Unicode!  D supports Unicode!  Again excited that I had found something 
 completely unoriginal, I rushed to my computer, and went to 
 www.unicode.org.  I'm not an expert on Unicode, but I know what it does. 
  I thought to myself: "Unicode is perfect!  It must have an entire 
 section devoted to mathematical symbols!"  And lo-and-behold!  It does! 
  For the interested, you can look here:
 
 http://www.unicode.org/charts/PDF/U2200.pdf (PDF)
 or here:
 http://charts.unicode.org/Web/U2200.html (GIFs, but I couldn't connect)
 
 If you haven't seen the vast extent of possibilities with this block, go 
 check out that PDF file.  Neato.
 
 So, the end result of my proposition is this:  Allow the definition of 
 new infix and unary operators from the Unicode block 2200 to 22FF, 
 mathematical operators.  Also, symbols from this block are *reserved* 
 for new operators. (The allowance of colon-delimited operators was 
 something I hadn't thought of... but could be another allowed grammar 
 for new ops)
 ...
 

I prefer symbols that are easier to type on a standard keyboard, but I
do find your suggestion acceptable.  It would have the side effect
(benefit?) of making the symbols harder to enter, and thus less commonly
used.  But I feel that it might be carrying things a bit to extremes,
unless there is some obviously easy way to remember how to enter them.
Aug 27 2001
parent Eric Gerlach <egerlach canada.com> writes:
 I prefer symbols that are easier to type on a standard keyboard, but I do find
 your suggestion acceptable.  It would have the side effect (benefit?) of making
 the symbols harder to enter, and thus less commonly used.  But I feel that it
 might be carrying things a bit to extremes, unless there is some obviously easy
 way to remember how to enter them.

Yes, that was the downside to the whole idea... not many people have unicode-enabled text editors... but maybe this would encourage proliferation of them! :)
Aug 27 2001
prev sibling next sibling parent reply "Angus Graham" <agraham_d agraham.ca> writes:
"kaffiene" <kaffiene xtra.co.nz> wrote

 If everyone is dead keen on operator overloading, then let's have it.

Not everyone is dead keen on it.  I haven't commented on the subject (or
followed this thread) simply because I'm so uninterested in it.  But so
that my silence != consent, I will say what I don't like about overloaded
operators (although I'm sure I will repeat some stuff).

- I don't like not being able to see at a glance what is a built in type and
what is a class.  Stroustrup thinks classes should not be second class
citizens. I disagree.
- I don't like the added complexity to the language.
- I don't look forward to explaining their use to new programmers.
- I don't like the way overloaded operators can be easily written wrong and
used wrong so that they generate various temporaries and assignments that
are not obvious to the programmer.
- I don't like the way overloaded operators hide their true runtime cost in
a cursory look at the code.
- I don't like the way they are harder to debug, as they encourage putting
many function calls on one line.
- I don't like the way they reduce the self-documenting nature of code.
concatenate() is clear,
convert_strings_to_integer_representations_and_add_them_and_then_convert_result_to_string()
is clear, but string_result = string1 + string2 is definitely not clear.
- I don't like the way we always end up with some classes using overloaded
operators and some not leading to two ways of doing the same thing.

Angus Graham

ps
- I do not like them on a train.
- I do not like them on a plane.
- I do not like them here nor there.
- I do not like them anywhere.
Aug 22 2001
next sibling parent "Kent Sandvik" <sandvik excitehome.net> writes:
Same here, I don't like operator overloads; the complexity added for
'syntactic' sugar is not worth it.  I used to write C++ libraries with
operator overloads, and there are so many gotchas.  If someone wants
those, there's always C++.  --Kent
Aug 22 2001
prev sibling next sibling parent "kaffiene" <kaffiene xtra.co.nz> writes:
hehe - I don't like operator overloading either, but given that everyone was
beating me up about it, I assumed I was in the minority.  I agree with
everything on your list.

Peter.

"Angus Graham" <agraham_d agraham.ca> wrote in message
news:9m1epf$1d47$1 digitaldaemon.com...
 "kaffiene" <kaffiene xtra.co.nz> wrote

 If everyone is dead keen on operator overloading, then let's have it.

Not everyone is dead keen on it. I haven't commented on the subject (or followed this thread) simply because I'm so uninterested in it. But so that my silence != consent, I will say what I don't like about overloaded operators (although I'm sure I will repeat some stuff). - I don't like not being able to see at a glance what is a built in type

 what is a class.  Stroustrup thinks classes should not be second class
 citizens. I disagree.
 - I don't like the added complexity to the language.
 - I don't look forward to explaining their use to new programmers.
 - I don't like the way overloaded operators can be easily written wrong

 used wrong so that they generate various temporaries and assignments that
 are not obvious to the programmer.
 - I don't like the way overloaded operators hide their true runtime cost

 a cursory look at the code.
 - I don't like the way they are harder to debug, as they encourage putting
 many function calls on one line.
 - I don't like the way they reduce the self-documenting nature of code.
 concatenate() is clear,
 convert_strings_to_integer_representations_and_add_them_and_then_convert_result_to_string()
 is clear, but string_result = string1 + string2 is
 definitely not clear.
 - I don't like the way we always end up with some classes using overloaded
 operators and some not leading to two ways of doing the same thing.


 Angus Graham

 ps
 - I do not like them on a train.
 - I do not like them on a plane.
 - I do not like them here nor there.
 - I do not like them anywhere.

Aug 22 2001
prev sibling next sibling parent Arjan Knepper <arjan jak.nl> writes:
Angus Graham wrote:

 "kaffiene" <kaffiene xtra.co.nz> wrote

 - I don't like not being able to see at a glance what is a built in type and
 what is a class

So do I.
 . - I don't like the way they reduce the self-documenting nature of code.
 concatenate() is clear,
 convert_strings_to_integer_representations_and_add_them_and_then_convert_result_to_string()
 is clear, but string_result = string1 + string2 is
 definitely not clear.

I agree. Using (member) functions with clear self-explanatory names such as
Add, Minus, Eq, Ge, Gt, Le, etc. is more self-explanatory than anything else
IMHO.
 Angus Graham

 ps
 - I do not like them on a train.
 - I do not like them on a plane.
 - I do not like them here nor there.
 - I do not like them anywhere.

<g> Arjan
Aug 23 2001
prev sibling parent "Walter" <walter digitalmars.com> writes:
Mind if I put that in the FAQ? -Walter

Angus Graham wrote in message <9m1epf$1d47$1 digitaldaemon.com>...
"kaffiene" <kaffiene xtra.co.nz> wrote

 If everyone is dead keen on operator overloading, then let's have it.

Not everyone is dead keen on it. I haven't commented on the subject (or
followed this thread) simply because I'm so uninterested in it. But so that
my silence != consent, I will say what I don't like about overloaded
operators (although I'm sure I will repeat some stuff).
- I don't like not being able to see at a glance what is a built in type

what is a class.  Stroustrup thinks classes should not be second class
citizens. I disagree.
- I don't like the added complexity to the language.
- I don't look forward to explaining their use to new programmers.
- I don't like the way overloaded operators can be easily written wrong and
used wrong so that they generate various temporaries and assignments that
are not obvious to the programmer.
- I don't like the way overloaded operators hide their true runtime cost in
a cursory look at the code.
- I don't like the way they are harder to debug, as they encourage putting
many function calls on one line.
- I don't like the way they reduce the self-documenting nature of code.
concatenate() is clear,
convert_strings_to_integer_representations_and_add_them_and_then_convert_result_to_string()
is clear, but string_result = string1 + string2 is
definitely not clear.
- I don't like the way we always end up with some classes using overloaded
operators and some not leading to two ways of doing the same thing.


Angus Graham

ps
- I do not like them on a train.
- I do not like them on a plane.
- I do not like them here nor there.
- I do not like them anywhere.

Aug 23 2001
prev sibling parent reply Dan Hursh <hursh infonet.isl.net> writes:
	It seemed like you were mostly against A + B and Add(A, B) being
synonyms, right?  (OK you said A.add(B), but I hate that syntax and C++
supported the other so I used it.)  Anyhow, what would you think of a
syntax that allowed the designer of an API to decide whether a routine
would be infix-only, or whether there could be a public function syntax
too?  We don't need to use C++ syntax.  It was pretty ugly.  If anything,
it sounds like Walter wants D to be an apology for what C++ did to C.
	I only want the operators because I doubt that we can anticipate all
the good ways people will use operators when they can define them.  If we
are sure we can cover it all, then fine; otherwise I'd hate to lose a good
tool just because somebody else abused it.  I'm not dead set on passing out
guns and rope for the inexperienced to shoot and hang themselves with, but
the inexperienced can injure themselves with some of the most handy and
powerful of tools.  It might be a good idea to leave operator overloading
to the advanced or special purpose chapter of the manual and let the
programmer grow into it, kind of like word alignment, inline assembler and
alternate calling conventions.

Dan

kaffiene wrote:
 
 If everyone is dead keen on operator overloading, then let's have it.  But
 if we're going to add it, let's at least be able to add new (lexically new)
 operators such as A . B and ||v||.  I can live with inheriting the
 precedence of 'standard' operators such as * - + / etc, as long as for new
 operators you can specify their precedence (e.g., I'd like ||v|| to have
 the same precedence as unary -)
 
 Peter.
 
 "kaffiene" <kaffiene xtra.co.nz> wrote in message
 news:9m17t7$19b9$1 digitaldaemon.com...
 [snip]

 I take your point and will try to avoid that in the future, but I'd like

 get back to the point and answer my questions. How is A.Add(B) not

 maintenance complexity in complicated expressions? How is A.Add(B) not

 new notation that differs from the traditional A+B?

My point is that you will be using <class>.<method>(<params>) anyway for
'normal' methods on the class - some people will use this for things like
addition, subtraction etc... other people will use <class> + <class>,
<class> * scalar etc...

When you get programmers looking over the code, some classes will do things
using method calls, some will do things using overloaded operators. The
code is therefore more variant in style, which means an increased
maintenance cost.

In *either* approach (overloaded operators or all method calls) you will
still get the situation that a + b will be valid for all scalars and

 for some classes
 (all in the case of no operator overloading).  Even if you allow operator
 overloading, you can still have A.Add(B) as a variant notation so I reject
 the assertion that not having operator overloading gives you a 'new
 notation'.

 No, that is a very bad idea, and not only for technological reasons.
 Mathematicians invented operator precedence, not compilers




 is
 not
 an artifact of limited technology, it is a simplifying notation.) I



 know
 of any context where mathematicians read AxB+C as meaning




 But
 If you are talking about vectors, then A + B * C is the vector A added


 vector B (resulting in a vector) crossed with the vector C (resulting



 a
 vector).  This evaluates as (A + B) * C.  There you have a context in


 the standard arithmetic operator precedence does not hold.

??? You may be right in your field, but that's the very first time I


 hear
 this. Any pointer justifying this notation?

 Let me take a few books here. Quantum mechanics book, formula
 H^=E0+A'/4.sigma1.sigma2+Dsigma[1,z].sigma[2.z]. I know that the . has

 precedence here, no parentheses.

 An old book by Lillian Lieber on Relativity (one of the few I have in

 Tensor notation all over the place: A.B+C.D, where + is lower priority.

 case you wonder, some of these tensor products actually are the dot

 are describing

 A third book, same thing, sampling three equations at random, they all


 A+B.C mean A+(B.C).

 So, again, I'm very curious wrt. the field where A+B.C means (A+B).C. If

 is a "frequent case", I might change my mind on priority of operators in

Oh sorry - I didn't mean to imply that the + operator had a higher precedence than *, I was using the brackets to indicate how the notation

 parsed naturally.  The + operator - vector addition does not have a
 naturally higher precedence than * - vector cross product.  It parses
 naturally left to right.

 [snip]

 Peter.


Aug 23 2001
parent Charles Hixson <charleshixsn earthlink.net> writes:
Dan Hursh wrote:
 	It seemed like you were mostly against A + B and Add(A, B) being
 synonyms, right?  (OK you said A.add(B), but I hate that syntax and
 C++ supported the other so I used it.)  Anyhow, what would you think
 of a syntax that allowed the design of an API to decide with a 
 routine would be only an infix or if there could be a public function
 syntax too?  We don't need to use C++ syntax.  It was pretty ugly....
 
 Dan
 ...

always works (inside of a class) if Add is a method of the class. So just make it a method of Object (or some descendant of Object that all of your classes descend from). Or put it in an interface that you implement, but far enough up the tree that it will be an ancestor to all the classes in which you are working.
Aug 24 2001
prev sibling parent reply "Sean L. Palmer" <spalmer iname.com> writes:
 (b) Go the whole hog and design a system that allows creation of arbitrary
 operators and overloading the precedence of these and existant operators.
 This actually *would* allow your system to live up to the rhetoric, but

 have to put up with the proliferation of style choices made by each
 individual programmer you work with, and live with the fact that that a
 maintenance programmer who is unfamiliar with the given mathematical
 notation will have an uphill battle to be able to work on your codebase.

I think it's funny that you believe that somebody that "is unfamiliar with
the given mathematical notation" would be able to understand the concepts
involved in such a codebase enough to *qualify* as a maintenance programmer.

I know this firsthand... I never took Calculus, so reading research papers
which deal with mathematical subject matter (as I have many occasions to do
as a 3D graphics programmer) usually means I'm left not fully grokking the
theory behind why something works, just because they're using notation to do
the explanation... notation I never learned properly.

Yes, to the inexperienced, it looks like job security. Forcing mathematical
concepts to be handled thru functional notation just results in really
unreadably messy math, and if your *math* is unreadable, you are bound to
have bugs in the math code. Let's see that maintenance programmer try to fix
some of those.

Sean
Oct 29 2001
parent reply "Walter" <walter digitalmars.com> writes:
While operator overloading is an idea with many merits, I just don't see
that arbitrary operators with arbitrary precedence would lead to something
that wouldn't be better done with YACC.

"Sean L. Palmer" <spalmer iname.com> wrote in message
news:9rj5av$2qnl$1 digitaldaemon.com...
 (b) Go the whole hog and design a system that allows creation of


 operators and overloading the precedence of these and existant


 This actually *would* allow your system to live up to the rhetoric, but

 have to put up with the proliferation of style choices made by each
 individual programmer you work with, and live with the fact that that a
 maintenance programmer who is unfamiliar with the given mathematical
 notation will have an uphill battle to be able to work on your codebase.

I think it's funny that you believe that somebody that "is unfamiliar with the given mathematical notation" would be able to understand the concepts involved in such a codebase enough to *qualify* as a maintenance

 I know this firsthand... I never took Calculus, so reading research papers
 which deal with mathematical subject matter (as I have many occasion to do
 as a 3D graphics programmer) usually means I'm left not fully grokking the
 theory behind why something works, just because they're using notation to

 the explanation... notation I never learned properly.

 Yes, to the inexperienced, it looks like job security.  Forcing

 concepts to be handled thru functional notation just results in really
 unreadably messy math, and if your *math* is unreadable, you are bound to
 have bugs in the math code.  Let's see that maintenance programmer try to
 fix some of those.

 Sean

Nov 02 2001
parent reply "Robert W. Cunningham" <rwc_2001 yahoo.com> writes:
Walter wrote:

 While operator overloading is an idea with many merits, I just don't see
 that arbitrary operators with arbitrary precedence would lead to something
 that wouldn't be better done with YACC.

Much of the operator overloading I've seen often appears to be little more
than a wish for an infix notation to use in place of the standard function
call notation for invoking a two-parameter function. Rather than directly
support operator overloading, why not support a generic infix function call?

So, instead of:  val = pow(2,10);

We could have:  val = 2 pow 10;

Then, if I wish to "overload" the multiplication operator, I might do
something like this:

    myType x(myType a, myType b) infix
    {
        return <however these two types multiply>;
    }

And use it like this:

    product = varA x varB;

For functions of two parameters, perhaps the infix form can be unambiguously
detected by its use, and no "infix" keyword or declaration syntax will be
needed. Be sure to give user-defined infix operators the lowest possible
precedence, which should also encourage the use of parentheses to clarify
such notation.

Is this Food or Fluff? It certainly is Syntactic Sugar, but it looks sweet
to me...

-BobC
Nov 03 2001
next sibling parent Roland <rv ronetech.com> writes:
I agree.
I suggested something like this.
See "Operator overloading, an other idea".

Ciao

Roland


"Robert W. Cunningham" a écrit :

 Walter wrote:

 While operator overloading is an idea with many merits, I just don't see
 that arbitrary operators with arbitrary precedence would lead to something
 that wouldn't be better done with YACC.

Much of the operator overloading I've seen often appears to be little more
than a wish for an infix notation, to use instead standard functional call
notation used to invoke a two parameter function. Rather than directly
support operator overloading, why not support a generic infix function call?

So, instead of:  val = pow(2,10);

We could have:  val = 2 pow 10;

Then, if I wish to "overload" the multiplication operator, I might do
something like this:

    myType x(myType a, myType b) infix
    {
        return <however these two types multiply>;
    }

And use it like this:

    product = varA x varB;

For functions of two parameters, perhaps the infix form can be unambiguously
detected by its use, and no "infix" keyword or declaration syntax will be
needed. Be sure to give user-defined infix operators the lowest possible
precedence, which should also encourage the use of parentheses to clarify
such notation.

Is this Food or Fluff? It certainly is Syntactic Sugar, but it looks sweet
to me...

-BobC

Nov 05 2001
prev sibling parent reply Axel Kittenberger <axel dtone.org> writes:
 So, instead of:  val = pow(2,10);
 
 We could have:  val = 2 pow 10;

The problem arises: what if you have:

    x = 2 pow 10 pow 3;

Should it do pow(pow(2, 10), 3), or pow(2, pow(10, 3))? Or with:

    x = 2 pow 3 x 4 * 2 + 7 % 3 pow 8;

How do you define and sort the priority of these new infix functions?
Additionally, you should be able to tell the compiler which rules the
operation follows: is it commutative? associative? Can it extract common
arguments, like a * b + a * c => a * (b + c)? I think a general operator
declaration and ruling system is something nobody has yet managed to
design/implement.

- Axel

--
|D) http://www.dtone.org
Nov 05 2001
parent reply "Juarez Rudsatz" <juarez mpsinf.com.br> writes:
Could this problem not be solved by defining operator precedence levels?
e.g.:

Level 1
--------------------
~ , ^ , & and others
Level 3
--------------------
* / %
Level 5
--------------------
+ -
Level 7
--------------------
* / %
Level 9
--------------------
== != > <

Two or more operators at the same level can be evaluated in any order.
Your infix function's level must be in ( 0..10 ).

 So, instead of:  val = pow(2,10);

 We could have:  val = 2 pow 10;

The problem arises what is if you have: x = 2 pow 10 pow 3; Should it do pow(pow(2 , 10), 3) , or pow(2, pow(10, 3)) ? or with x = 2 pow 3 x 4 * 2 + 7 % 3 pow 8; How do you define and sort priority of these new infix functions? Additionally you should be able to tell the compiler which rules this operation follows, is it communative? associative? can he extract common arguments? like a * b + a * c => a * (b + c). I think a general operator decleration and ruling system is something nobody yet managed to design/implement. - Axel "Axel Kittenberger" <axel dtone.org> wrote in message news:9s62ji$1hoh$1 digitaldaemon.com...
 So, instead of:  val = pow(2,10);

 We could have:  val = 2 pow 10;

The problem arises what is if you have: x = 2 pow 10 pow 3; Should it do pow(pow(2 , 10), 3) , or pow(2, pow(10, 3)) ? or with x = 2 pow 3 x 4 * 2 + 7 % 3 pow 8; How do you define and sort priority of these new infix functions? Additionally you should be able to tell the compiler which rules this operation follows, is it communative? associative? can he extract common arguments? like a * b + a * c => a * (b + c). I think a general operator decleration and ruling system is something nobody yet managed to design/implement. - Axel -- |D) http://www.dtone.org

Nov 05 2001
parent "Sean L. Palmer" <spalmer iname.com> writes:
I think I'd rather deal with precedence in terms of existing operators in
the language.  i.e. give my new super_plus operator the same precedence as
operator +, or maybe give my dot product operator a higher precedence than
operator *, but lower than unary &.  In fact though I wouldn't mind having
to use explicit parens around my expressions for now until someone works out
a good system for this.

I like the commutative/associative modifiers (there are more, but I'm no
mathematician) because they give the compiler hints about how it can
rearrange expressions during optimization, something it does know about the
builtin operators.  M1 * M2 != M2 * M1 when M is a matrix, or a quaternion.

The big problem to be solved here is how can you make a language parser that
can insert arbitrary grammar pieces into the parser during compilation, in
fact it would have to be done only within a certain scope.  Even more
troublesome is that a symbol may not be visible in the source file; it'd
have to be able to change the grammar to add these operators whenever it
detects a scope has been accessed/entered during parsing of an expression.

Another big problem for operator overloading is how to deal with promotion
of arguments.  Ideally this could be done by stating a preference for one
typecast direction over another, thus preferring float->double as safer than
double->float for instance, char->int as safer than int->char, so that if it
sees 'R' plus 43 it'll know to promote the 'R' to int instead of vice-versa.

I'm disappointed that there will likely be no operator overloading in D 1.0.

Sean

"Juarez Rudsatz" <juarez mpsinf.com.br> wrote in message
news:9s6dbf$1pv9$1 digitaldaemon.com...
 This problem could not be soulved by defining operator precedence levels ?
 e.g :

 Level 1
 --------------------
 ~ , ^ , &, ando anothers
 Level 3
 --------------------
 * / %
 Level 5
 --------------------
 + -
 Level 7
 --------------------
 * / %
 Level 9
 --------------------
 == != > <

 Two or more operator in same level can be operated in any order.
 Your infix fuction level must be in ( 0..10 ).

 So, instead of:  val = pow(2,10);

 We could have:  val = 2 pow 10;

The problem arises what is if you have: x = 2 pow 10 pow 3; Should it do pow(pow(2 , 10), 3) , or pow(2, pow(10, 3)) ? or with x = 2 pow 3 x 4 * 2 + 7 % 3 pow 8; How do you define and sort priority of these new infix functions? Additionally you should be able to tell the compiler which rules this operation follows, is it communative? associative? can he extract common arguments? like a * b + a * c => a * (b + c). I think a general operator decleration and ruling system is something nobody yet managed to design/implement. - Axel "Axel Kittenberger" <axel dtone.org> wrote in message news:9s62ji$1hoh$1 digitaldaemon.com...
 So, instead of:  val = pow(2,10);

 We could have:  val = 2 pow 10;

The problem arises what is if you have: x = 2 pow 10 pow 3; Should it do pow(pow(2 , 10), 3) , or pow(2, pow(10, 3)) ? or with x = 2 pow 3 x 4 * 2 + 7 % 3 pow 8; How do you define and sort priority of these new infix functions? Additionally you should be able to tell the compiler which rules this operation follows, is it communative? associative? can he extract common arguments? like a * b + a * c => a * (b + c). I think a general operator decleration and ruling system is something nobody yet managed to design/implement. - Axel -- |D) http://www.dtone.org


Nov 05 2001
prev sibling next sibling parent reply Charles Hixson <charleshixsn earthlink.net> writes:
If the C++ overloading doesn't work well, then look at other languages. 
  This is one thing that Eiffel handles fairly well (though I was at one 
point ungruntled after it refused to allow = [i.e., test for equality] 
to be overridden).
Aug 17 2001
parent reply "kaffiene" <kaffiene xtra.co.nz> writes:
Eiffel seems to do a lot of things well that C++ doesn't.  But I'm a bigot -
I can't stand that pascal-esque syntax =)


"Charles Hixson" <charleshixsn earthlink.net> wrote in message
news:3B7D2D68.4080905 earthlink.net...
 If the C++ overloading doesn't work well, then look at other languages.
   This is one thing that Eiffel handles fairly well (though I was at one
 point ungruntled after it refused to allow = [i.e., test for equality]
 to be overridden).

Aug 18 2001
parent reply Charles Hixson <charleshixsn earthlink.net> writes:
kaffiene wrote:
 Eiffel seems to do a lot of things well that C++ doesn't.  But I'm a bigot -
 I can't stand that pascal-esque sytnax =)
 ...
 

and bad. The syntax doesn't matter that much if we are mining it for
features to include in a new language. Not unless we consider the syntax as
one of the features that we are mining.

Actually, in the case of generics, the syntax might be worth mining. It's a
lot better than the template notation, and does most of the same job.

And there's not too much wrong with the way that it declares operators (an
infix operator is a function with one return value and two parameters, one
implicitly specified by the class that contains it and one explicitly
specified; in the declaration the keyword infix is used preceding the name
of the operator in quotes), but the way that Ada does it is as reasonable,
and the way that C++ does it is probably more familiar.
Aug 21 2001
parent "kaffiene" <kaffiene xtra.co.nz> writes:
"Charles Hixson" <charleshixsn earthlink.net> wrote in message
news:3B828CC2.70706 earthlink.net...
 kaffiene wrote:
 Eiffel seems to do a lot of things well that C++ doesn't.  But I'm a


 I can't stand that pascal-esque sytnax =)
 ...

and bad. The syntax doesn't matter that much if we are mining it for features to include in a new language. Not unless we consider the syntax as one of the features that we are mining.

Oh sure! That's exactly what I meant - I like a lot of the ideas in Eiffel. I just don't use it because of the syntax. Peter.
Aug 21 2001
prev sibling parent reply John English <je brighton.ac.uk> writes:
kaffiene wrote:
 
 (4) It introduces another style for doing very common stuff.  One group of
 programmers does vec.plus(v2) and another does vec + v2.

Without overloading, one group does vec.plus(v2), another does vec.add(v2).
It's like in the Java API: you have add() for Vectors and put() for
Hashtables -- consistent naming is hard to enforce and something always goes
wrong. If there is an "obvious" name like +, surely it's better to use that
(even at the risk of idiots defining it in "non-obvious" ways...)?

-----------------------------------------------------------------
John English              | mailto:je brighton.ac.uk
Senior Lecturer           | http://www.it.bton.ac.uk/staff/je
Dept. of Computing        | ** NON-PROFIT CD FOR CS STUDENTS **
University of Brighton    | -- see http://burks.bton.ac.uk
-----------------------------------------------------------------
May 03 2002
parent "OddesE" <OddesE_XYZ hotmail.com> writes:
"John English" <je brighton.ac.uk> wrote in message
news:3CD27936.7152F760 brighton.ac.uk...
 kaffiene wrote:
 (4) It introduces another style for doing very common stuff.  One group


 programmers does vec.plus(v2) and another does vec + v2.

Without overloading, one group does vec.plus(v2), another does vec.add(v2).
It's like in the Java API: you have add() for Vectors and put() for
Hashtables -- consistent naming is hard to enforce and something always goes
wrong. If there is an "obvious" name like +, surely it's better to use that
(even at the risk of idiots defining it in "non-obvious" ways...)?

This discussion is periodically returning here. I am in favour of operator
overloading for exactly this reason. Common stuff, such as adding things,
already has very common and standardized 'names' in math: +, -, * and /.
With named functions, you can get:

    vec1.Add (vec2);   // seems logical right?
    vec1.add (vec2);   // Argh, we are case sensitive...
    vec1.Plus (vec2);  // Mmm makes sense too right?
    vec1.And (vec2);   // And this too...
    vec1.Und (vec2);   // written by a german :)
    vec1.En (vec2);    // Or by a dutchman...

Need I go on? + is + and math is math. If you are a programmer you must know
math, period. Of course + could be interpreted to mean something different
by different people, but this also goes for named functions. What does
matrix1.Mul (matrix2) mean? Does it do a cross product?

It seems to me operator overloading solves some problems and introduces some
others, but one of the things it does solve is allow the use of a common
language for describing mathematical operations: not English, German or
Dutch, but math itself! If you are interested, read the thread "Operator
overloading, a way to make everybody happy"; it talks about many of these
things. Don't be afraid of a little flaming here and there though.

In abovementioned thread I proposed a way where standard interfaces would be
defined for supporting common math operations, such as IAddable,
IAssignable, IComparable etcetera. The compiler would then map the operators
=, ==, <, >, +, -, / and * to these interfaces, creating some sweet syntax
sugar.

Walter has implemented a tweaked version of this for the comparison
operators, where the base class Object contains functions cmp() and equals()
for comparison and for testing for equality. The compiler then maps the
operators <, <=, > and >= to cmp(), and == and != to equals(). You can
override the functions cmp() and equals() and you will then automagically
overload the comparison operators too! At the moment there is no operator
overloading support for assignment.
A big problem with this is how to distinguish between assignment by
reference and by value. I proposed using = for assignment by reference and
:= for assignment by value, but this has not met with an enthusiastic
reception (and for obvious reasons...). At the moment you will need to call
dup(), but at least it is standard for all objects.

For addition, subtraction, multiplication and division all bets are off, but
this might change quickly, like it has done with the overloading of the
comparison operators and with the inclusion of delegates.. :) Fingers
crossed... :)

--
Stijn
OddesE_XYZ hotmail.com
http://OddesE.cjb.net
_________________________________________________
Remove _XYZ from my address when replying by mail
May 05 2002
prev sibling next sibling parent "Tim Sweeney" <tim epicgames.com> writes:
As someone who writes tons of vector math code in production applications,
my feeling is that a language which doesn't support operator overloading and
template-style programming is going to be really painful for modern
math-intensive applications, such as games, modelling programs, etc.

I'm not necessarily advocating C++'s approach (operator overloading can be
confusing, such as using an overloaded "<<" operator to represent both bit
shifting and character output!)

Haskell-style "typeclasses" are an interesting replacement for general
operator overloading.  This approach encapsulates the idea that a given
operator like "+" or "*" should have the same notional meaning everywhere,
even when operating on different data types.

A simple example is that "+" and "*" would belong to the "numeric"
typeclass.  If you create a new numeric typeclass of your own (for example,
a "bigint" class), then you declare your class to belong to the "numeric"
typeclass, and implement those specific operators.  However, you wouldn't be
able to create a "+" operator for a non-numeric class.

This can map to popular languages easily by having your class implement an
abstract interface (aka typeclass) and provide an appropriate static or
friend function for those operators.

However, this approach doesn't extend well to more general situations.  For
example, say you define a template like vector<float,4> (a mathematical
4-component vector), one wants the ability to multiply those vectors by
scalars using "*", which requires the more general form of overloading.

-Tim

"John Fletcher" <J.P.Fletcher aston.ac.uk> wrote in message
news:3B78E154.BC62E006 aston.ac.uk...
 Quote from the specification for D:

 Operator overloading. The only practical applications for operator
 overloading seem to be implementing a complex floating point type, a
 string class, and smart pointers. D provides the first two natively,
 smart pointers are irrelevant in a garbage collected language.

 Another quote:

  D has many features to directly support features needed by numerics
 programmers, like direct support for the complex data type and defined
 behavior for NaN's and infinities.


 Comment:

 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.

 John Fletcher

Aug 17 2001
prev sibling parent reply a <hursh infonet.isl.net> writes:
	I'll second this.  I understand that C++ overloads are insufficient
and ugly, but this might be a case to find a better solution.  I like the
suggestion of the 'written' syntax.  It is powerful but could blow off
several toes if aimed at one's foot.  It might be nice to declare a
precedence level for new ops; that way if you define a 'dot' infix
expression you can say it has the precedence of a multiplication op.
	At the very least, I would like to see D better address matrix math
(quaternions too) and arbitrary sized numbers.

Dan

John Fletcher wrote:
 
 Quote from the specification for D:
 
 Operator overloading. The only practical applications for operator
 overloading seem to be implementing a complex floating point type, a
 string class, and smart pointers. D provides the first two natively,
 smart pointers are irrelevant in a garbage collected language.
 
 Another quote:
 
  D has many features to directly support features needed by numerics
 programmers, like direct support for the complex data type and defined
 behavior for NaN's and infinities.
 
 Comment:
 
 For numerical computing it is convenient to define classes e.g. vectors,
 matrices and other entities beyond complex numbers, such as
 quaternions.  For this overloading of operators such as +, -, +=, etc
 means that top level code can be easily written and readable.
 
 John Fletcher

Aug 18 2001
parent reply Christophe de Dinechin <descubes earthlink.net> writes:
a wrote:

 I like the
 suggestion of the 'written' syntax.  It is powerful but could blow off
 several toes if aimed at one's foot.

Why? Could you give an example?

Christophe
Aug 20 2001
parent Dan Hursh <hursh infonet.isl.net> writes:
Christophe de Dinechin wrote:
 
 a wrote:
 
 I like the
 suggestion of the 'written' syntax.  It is powerful but could blow off
 several toes if aimed at one's foot.

Why? Could you give an example? Christophe

It's hard to give an example without a concrete syntax to start with.
(Christophe, you might also remember that I'm bad at examples. :)  One
case I'm thinking of would be unexpected ambiguity like:

    typedef int[3] vec;
    typedef double scalar;

    vec build_vec(scalar larry, scalar curly, scalar moe)
        written ( larry curly moe )
    {
        return { larry, curly, moe };  // I know this array syntax
    }                                  // wouldn't work, but wouldn't
                                       // it be nice?

    scalar abs(vec tor) written || tor ||
    {
        // using the sqrt syntax I defined as \/ <exp> |
        return \/ tor[0] * tor[0] + tor[1] * tor[1] + tor[2] * tor[2] |;
    }

    scalar x = 0;
    scalar z = 1.0;
    vec why = {1, 1, 1};

    // NOTE: I'm assuming that abs is defined for built-in types like
    // int, and that bool could cast to int
    int it = abs( x || why || z );

What is it?  Is it 0, since x could short-circuit the '||', or is it
sqrt(0, sqrt(3), 1)?  It could reasonably be either.  You might be able
to build a compiler that could brute-force through and catch all the
possible ambiguities in a given piece of code, but you could still end
up with libraries that can't be used together due to a nasty mix of
poorly chosen written clauses.

Don't get me wrong.  I like the feature.  It's just like many of C++'s
(or perl's) features: beautiful when used right and dangerous if used
wrong.  The same is true of C++'s operator overloads, but man, they are
handy at times.  The ability to say:

    written A * B + C
    written (A * B) + C

or the like would have made libraries like blitz++ a lot easier to do,
I bet.  I suspect they would have made the templates less necessary,
making the compile times faster.  But Walter seems to favor safety and
simpler implementation.  This probably crosses a line for him.

Cheers,
Dan
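
For reference, the template machinery Dan alludes to in blitz++ looks
roughly like this expression-template sketch (all names here are mine,
not blitz++'s): a + b returns a lightweight unevaluated node instead of
a temporary vector, so a + b + c runs in a single loop when evaluated.

```cpp
#include <cstddef>
#include <vector>

// A plain vector of doubles.
struct Vec {
    std::vector<double> d;
    double operator[](std::size_t i) const { return d[i]; }
    std::size_t size() const { return d.size(); }
};

// Represents "l + r" element by element, without computing anything yet.
template <class L, class R>
struct Add {
    const L& l;
    const R& r;
    double operator[](std::size_t i) const { return l[i] + r[i]; }
    std::size_t size() const { return l.size(); }
};

// Vec + Vec builds a node; (node) + Vec nests another node.
Add<Vec, Vec> operator+(const Vec& l, const Vec& r) { return { l, r }; }

template <class L, class R>
Add<Add<L, R>, Vec> operator+(const Add<L, R>& l, const Vec& r) {
    return { l, r };
}

// Walk the whole expression tree once per element.
template <class E>
Vec eval(const E& e) {
    Vec out{ std::vector<double>(e.size()) };
    for (std::size_t i = 0; i < e.size(); ++i)
        out.d[i] = e[i];
    return out;
}
```

Here eval(a + b + c) touches each element once with no intermediate
vectors; a hypothetical `written (A * B) + C` clause could let a library
claim such patterns directly, without this template machinery.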
Aug 21 2001