
digitalmars.D - WWI-#1: "How close are we to 1.0, and how does that impact amenability to fundamental change?"

reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Walter

Some of the discussions recently have hedged scope/input/effort as a 
result of uncertainty regarding the state of D-1.0-ness.

In one thread on opCmp, when discussing a proposal I'd made with 
Ben, Kris et al, I made the following observation regarding our need 
to address this uncertainty. Can you provide answers?

(Note to Brad and fellow WWI-ers, I'm posting this according to your 
proposed WWI-protocol, since several people have already said they'd 
also like to see Walter's response on the questions raised.)


["
Before we get into this, and potentially waste many more hours on
challenging, enlightening, but eventually fruitless debate, I want
to hear from Walter as to his _feeling_ on how close we are to 1.0,
and therefore how amenable he's going to be to making significant
language changes at this point.

Walter, can you characterise your position? Specifically,
    1. Do you believe that there are any "serious flaws" in the D 
specification as it stands?
    2. If the answer is no, do you have a degree of certainty that 
is unlikely to be overridden by any observations any of us might 
make?
    3. If the answer to either 1 or 2 is yes, do you nonetheless 
have a need to expedite 1.0 despite any/all such "major flaws"?

Obviously, if we get (X, Yes, Y) or (X, Y, Yes), then there's no 
point having the debate.

"]

I think getting a clear position from you on this will help either 
to eliminate a lot of pointless debate, or to provide resolve to 
those people who want to achieve change to push on and perhaps 
formalise their proposals for change.

Cheers

Matthew
Apr 20 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:d46rrl$1km0$1 digitaldaemon.com...
 Walter

 Some of the discussions recently have hedged scope/input/effort as a 
 result of uncertainty regarding the state of D-1.0-ness.

 In one thread on opCmp, when discussing a proposal I'd made with Ben, Kris 
 et al, I made the following observation regarding our need to address this 
 uncertainty. Can you provide answers?
I hope Walter eventually responds to some of the recent posts. Aside from fixing std.stream bugs I personally am not working any more on phobos issues (ie - any issues raised in the few recent threads requesting phobos feedback) until it's a little more clear just what the situation is. His recent post about CRTP actually scared me a little on where his focus is. But then he can do what he wants...
Apr 24 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d4hnu7$q62$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
 news:d46rrl$1km0$1 digitaldaemon.com...
 Walter

 Some of the discussions recently have hedged scope/input/effort 
 as a result of uncertainty regarding the state of D-1.0-ness.

 In one thread on opCmp, when discussing a proposal I'd made with 
 Ben, Kris et al, I made the following observation regarding our 
 need to address this uncertainty. Can you provide answers?
I hope Walter eventually responds to some of the recent posts. Aside from fixing std.stream bugs I personally am not working any more on phobos issues (ie - any issues raised in the few recent threads requesting phobos feedback) until it's a little more clear just what the situation is. His recent post about CRTP actually scared me a little on where his focus is. But then he can do what he wants...
Other than I'm working with Lars on expanding std.openrj, I am in the same position, and agree completely.
Apr 24 2005
parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
I hope Walter doesn't mind me quoting him in here on something that was
said outside of the newsgroup,
but I figure as busy as he is, asking would just take up
more of his time.

When I mentioned to him that we were trying to save him time by bringing his
attention to specific topics, part of his response was...

<quote>
I try to read all the posts, because the overall quality of the posts and
the posters in the D newsgroup is very high.
</quote>

So, even if he doesn't respond...
he probably at least reads what we're hoping he will.
Don't know how he can keep up.
Lots of new stuff in here every day.
I barely have time to skim it, myself.

TZ

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d4hphn$rcn$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d4hnu7$q62$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d46rrl$1km0$1 digitaldaemon.com...
 Walter

 Some of the discussions recently have hedged scope/input/effort
 as a result of uncertainty regarding the state of D-1.0-ness.

 In one thread on opCmp, when discussing a proposal I'd made with
 Ben, Kris et al, I made the following observation regarding our
 need to address this uncertainty. Can you provide answers?
<snip>
Apr 25 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message
news:d4hnu7$q62$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d46rrl$1km0$1 digitaldaemon.com...
 Walter

 Some of the discussions recently have hedged scope/input/effort as a
 result of uncertainty regarding the state of D-1.0-ness.

 In one thread on opCmp, when discussing a proposal I'd made with Ben, Kris
 et al, I made the following observation regarding our need to address this
 uncertainty. Can you provide answers?
 I hope Walter eventually responds to some of the recent posts. Aside from
 fixing std.stream bugs I personally am not working any more on phobos issues
 (ie - any issues raised in the few recent threads requesting phobos
 feedback) until it's a little more clear just what the situation is. His
 recent post about CRTP actually scared me a little on where his focus is.
 But then he can do what he wants...
My focus for the last few releases, and for the next one, is fixing the outstanding obvious bugs in the implementation. Having a stable, reliable compiler is of paramount importance. The CRTP took about 10 minutes to write; I wrote it to point out that D's templates are powerful.
Apr 30 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d50voc$1as2$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message
 news:d4hnu7$q62$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d46rrl$1km0$1 digitaldaemon.com...
 Walter

 Some of the discussions recently have hedged scope/input/effort as a
 result of uncertainty regarding the state of D-1.0-ness.

 In one thread on opCmp, when discussing a proposal I'd made with Ben, Kris
 et al, I made the following observation regarding our need to address this
 uncertainty. Can you provide answers?

 I hope Walter eventually responds to some of the recent posts. Aside from
 fixing std.stream bugs I personally am not working any more on phobos issues
 (ie - any issues raised in the few recent threads requesting phobos
 feedback) until it's a little more clear just what the situation is. His
 recent post about CRTP actually scared me a little on where his focus is.
 But then he can do what he wants...
My focus for the last few releases, and for the next one, is fixing the outstanding obvious bugs in the implementation. Having a stable, reliable compiler is of paramount importance. The CRTP took about 10 minutes to write; I wrote it to point out that D's templates are powerful.
Walter, since this thread's obviously not passed under your radar, is there any chance you might respond to its main theme?

That's kind of a significant watershed for a lot of people, both in terms of the specific issues raised by the questions, and in terms of your willingness to be drawn out on such things at all.
Apr 30 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d510v1$1bqj$1 digitaldaemon.com...
 Walter, since this thread's obviously not passed under your radar,
 is there any chance you might respond to its main theme?

 That's kind of a significant watershed for a lot of people, both in
 terms of the specific issues raised by the questions, and in terms
 of your willingness to be drawn out on such things at all.
I view the single most important impediment to D 1.0 at the moment as being
the bug list. It's too long, there are too many problems. Design problems
can always be worked around, but nothing is a turnoff as much as buggy
software.

I know there's been a lot of discussion about what to do with opCmp and
opEquals. I understand there are some issues with it, and I don't have a
clear idea on what to do about it yet. But let me extol some of the reasons
why it is not such a terrible idea:

o    The language should be usable without having to resort to templates.

o    The cost of the vtbl[] slots for them is one per class. So, if your
application has 2000 different classes in it, opCmp costs 8000 bytes. Not a
big deal. There is no runtime cost to having them there.

o    Having them in the root class enables one way (templates being the
other way) of doing generic programming.

o    Having them in Object makes the implementation of associative arrays
and sorting compact.

o    Having reasonable default behaviors for them means that one can write
fully functional classes without doing a lot of typing. This is not true
for C++.

o    It can be completely ignored. You can implement class C, provide an
opCmp(C), and it will work for comparisons between objects of type C and
types derived from C. You'll get a compile time error if you try to compare
c<o, where o is an Object. Array .sort and associative arrays of C will
fail, and this is a problem, but if you don't use those features for C,
it'll be fine.

o    Continuing with that thought, there's nothing impeding writing one's
own template sorting device like C++ that completely ignores Object.opCmp.

o    People have worked to root out a lot of Java's mistakes, but evidently
this particular issue did not appear on the radar of being a huge Java
mistake. You know I'm not in the camp of "do it because Java did it", but
Java is a very successful language and one can't just offhandedly dismiss
what it does as all wrong.

Now for the problems:

o    The current Object implementations of hash and opCmp fail if a moving
GC is used, because they rely on the address of the object instance. This is
not as insurmountable as it first seems. There's a straightforward way to
fix this in the presence of a moving GC, there's just no point at the moment
because there isn't a moving GC.

o    The current Object.opEquals just compares addresses. One alternative is
doing a bit compare of the instance contents, but that gets problematical
with a moving GC. Not sure what the right answer is.

o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
what if it's (c == n) instead, and n is null? The best solution is probably
to have c.opEquals(o) return false if o is null. (null == c) will get
rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
is just too inefficient.

o    The C.opCmp(C) thing. This is in my opinion the most serious issue,
because its failure won't be obvious (unlike the c==null thing, which will
produce the obvious exception). This is a more general problem not just with
opCmp but with any overriding function which hides all the previous
functions with the same name up in the hierarchy. Perhaps the solution here
is to issue a compiler diagnostic if this happens, and if the programmer
wants it to happen, use the 'override' attribute on the C.opCmp(C).
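To make that last, quiet failure concrete, a minimal sketch (hypothetical class, added here for illustration):

    class C
    {
        int value;

        // Declares opCmp(C): this hides Object.opCmp(Object)
        // instead of overriding it.
        int opCmp(C other)
        {
            return value < other.value ? -1
                 : value > other.value ?  1 : 0;
        }
    }

    void main()
    {
        C a = new C;  a.value = 2;
        C b = new C;  b.value = 1;

        assert(a > b);            // fine: resolves to C.opCmp(C)

        C[] arr = new C[2];
        arr[0] = a;  arr[1] = b;
        arr.sort;                 // silently uses Object.opCmp, which
                                  // compares addresses -- the quiet failure
    }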
Apr 30 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Give a man a crumb, and he'll eat for a moment, (but he'll stick 
with you a long time in case he gets another). :-)

(Hmmm, not sure that sounds as polite and appreciative as I 
intended, but I think it gives the gist.)

What I take from this reply is:
    - You're practically encumbered with other things at the moment, 
having chosen the course - as reasonable as any - of focusing on 
bugs before language at the moment. That's useful knowledge for us 
to have.
    - There're some issues no-one has yet raised about practical 
advances towards "proper opCmp semantics", which _might_ mean it's 
less terrible than we thought. Presumably no-one else has mentioned 
this because no-one else knows it.
    - You're not closed minded about the whole affair.

I take that as a positive sign, and will personally find it adds a 
bit of help in sustaining my interest/involvement. Of course, some 
of that information would've profited us, both factually and 
spiritually, had it been offered earlier, but I'm not going to smack 
the gift horse on the chops today.

Thanks

Matthew


"Walter" <newshound digitalmars.com> wrote in message 
news:d51jpj$1qh7$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d510v1$1bqj$1 digitaldaemon.com...
 Walter, since this thread's obviously not passed under your radar,
 is there any chance you might respond to its main theme?

 That's kind of a significant watershed for a lot of people, both in
 terms of the specific issues raised by the questions, and in terms
 of your willingness to be drawn out on such things at all.
<snip>
Apr 30 2005
prev sibling next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
 what if it's (c == n) instead, and n is null? The best solution is probably
 to have c.opEquals(o) return false if o is null. (null == c) will get
 rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
 to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
 is just too inefficient.
c.opEquals(o) already returns false if o is null. The problem is if c is null. The idiom (c == null) is very common (more common than c == n), so I think a warning (when c has class type) would do the trick. The expression would only have to be (c && c.opEquals(n)). Everything else can be caught in the opEquals body.
May 01 2005
next sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d52hk7$2hqp$1 digitaldaemon.com...
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
 what if it's (c == n) instead, and n is null? The best solution is probably
 to have c.opEquals(o) return false if o is null. (null == c) will get
 rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
 to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
 is just too inefficient.
c.opEquals(o) already returns false if o is null. The problem is if c is null. The idiom (c == null) is very common (more common than c == n), so I think a warning (when c has class type) would do the trick. The expression would only have to be (c && c.opEquals(n)). Everything else can be caught in the opEquals body.
oh, scratch the expression part since that wouldn't catch null==null. So the expression would have to be what you said. I agree that's probably too much and a warning will do.
May 01 2005
parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
news:d52hv8$2i54$1 digitaldaemon.com...
 "Ben Hinkle" <ben.hinkle gmail.com> wrote in message 
 news:d52hk7$2hqp$1 digitaldaemon.com...
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
 what if it's (c == n) instead, and n is null? The best solution is probably
 to have c.opEquals(o) return false if o is null. (null == c) will get
 rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
 to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
 is just too inefficient.
c.opEquals(o) already returns false if o is null. The problem is if c is null. The idiom (c == null) is very common (more common than c == n), so I think a warning (when c has class type) would do the trick. The expression would only have to be (c && c.opEquals(n)). Everything else can be caught in the opEquals body.
oh, scratch the expression part since that wouldn't catch null==null. So the expression would have to be what you said. I agree that's probably too much and a warning will do.
holy schnitzel. never write code before having coffee. The expression would only have to be c==n || c && c.opEquals(n).
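Written out as ordinary code, the rewrite being discussed would look roughly like this sketch (helper name hypothetical; the identity test is spelled with 'is' so the expansion cannot recurse back into ==):

    // What (c == n) on class references could expand to:
    bool equalsWithNullChecks(Object c, Object n)
    {
        // Identity first: null == null and c is n are handled cheaply,
        // and opEquals is only ever invoked on a non-null receiver.
        return c is n || (!(c is null) && c.opEquals(n) != 0);
    }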
May 01 2005
prev sibling parent reply d c++.com writes:
In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, 
 as what if it's (c == n) instead, and n is null? The best solution is 
 probably to have c.opEquals(o) return false if o is null. (null == c)
 will get rewritten by the compiler as c.opEquals(null) anyway. Not
 really sure what to do here. Rewriting the expression as
 (c == n || c && n &&  c.opEquals(n)) is just too inefficient.
Ahh, too many things happen behind the scenes! How is the programmer going to remember all these special "rules"? Now I start to agree that operator overloading may be a bad thing in this case. Why can't we make simple things simple (the Java approach):

0) remove all the '===/!==/is/isnot' operators completely
1) use '==' as identity test
2) if an equality test is required, let the programmer *explicitly* call the method a.equals(b)

This way the programmer may even implement a different version of the equality test, e.g. a.deepEquals(b), which cannot be offered by just opEquals alone. (Similar approach should be applied to clone() and deepClone())
c.opEqual(o) already returns null if o is null. The problem is if c is null. 
The idiom (c == null) is very common (more common than c == n) so I think a 
warning (when c has class type) would do the trick. The expression would 
only have to be (c && c.opEquals(n)). Everything else can be caught in the 
opEquals body. 
I'd vote for it as a *second* option (i.e. a compromise). A warning message certainly helps here if Walter doesn't want to take the approach suggested above.
May 02 2005
next sibling parent reply Benji Smith <dlanguage xxagg.com> writes:
 In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 Why can't we make simple things simple (the Java approach):
 
 0) remove all the '===/!==/is/isnot' operator completely
 1) use '==' as identity test
 2) if equality test is required, let the programmer *explicitly* call the
 method a.equals(b)
 
 This way the programmer may even implement a different version of equality
 test, e.g. a.deepEquals(b), which cannot be offered by just opEquals alone.
 (Similar approach should be applied to clone() and deepClone())
That sounds a wee bit too simple and straightforward to me.

Personally, I've noticed that we ought to have ==== for deepEquals() and !=== as notDeepEquals().

Then we really also should have !=!= for neitherDeepNorShallowEquals() and ==!= as shallowEqualsButDoesntDeepEqual(), and also maybe !!!= for doesntEqualAnything().

After we get those, I'd like to have !=>= for whoTheHellNeedsSoManyEqualityOperatorsAnyhow().

Viva la Overloade Operateur!!

--BenjiSmith
May 02 2005
parent John Reimer <brk_6502 yahoo.com> writes:
Benji Smith wrote:
 In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 Why can't we make simple things simple (the Java approach):

 0) remove all the '===/!==/is/isnot' operator completely
 1) use '==' as identity test
 2) if equality test is required, let the programmer *explicitly* call 
 the method
 a.equals(b)

 This way the programmer may even implement a different version of 
 equality test,
 e.g. a.deepEquals(b), which cannot be offered by just opEquals alone.  
 (Similar
 approach should be applied to clone() and deepClone())
That sounds a wee bit too simple and straightforward to me.

Personally, I've noticed that we ought to have ==== for deepEquals() and !=== as notDeepEquals().

Then we really also should have !=!= for neitherDeepNorShallowEquals() and ==!= as shallowEqualsButDoesntDeepEqual(), and also maybe !!!= for doesntEqualAnything().

After we get those, I'd like to have !=>= for whoTheHellNeedsSoManyEqualityOperatorsAnyhow().

Viva la Overloade Operateur!!

--BenjiSmith
LOL! That just about killed me! :-D That about gave me enough medicine for the year!
May 02 2005
prev sibling next sibling parent Derek Parnell <derek psych.ward> writes:
On Mon, 2 May 2005 17:22:10 +0000 (UTC), d c++.com wrote:

 In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, 
 as what if it's (c == n) instead, and n is null? The best solution is 
 probably to have c.opEquals(o) return false if o is null. (null == c)
 will get rewritten by the compiler as c.opEquals(null) anyway. Not
 really sure what to do here. Rewriting the expression as
 (c == n || c && n &&  c.opEquals(n)) is just too inefficient.
Ahh, too many things happen behind the scenes! How is the programmer going to remember all these special "rules"? Now I start to agree that operator overloading may be a bad thing in this case. Why can't we make simple things simple (the Java approach):

0) remove all the '===/!==/is/isnot' operators completely
1) use '==' as identity test
2) if an equality test is required, let the programmer *explicitly* call the method a.equals(b)
Or, for consistency's sake, remove the "==" operator too and let the programmer *explicitly* call the free function "is(a,b)" ;-)

(Don't bother to respond, it's a joke okay.)

-- 
Derek
Melbourne, Australia
3/05/2005 9:50:14 AM
May 02 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
<d c++.com> wrote in message news:d55nk2$26fs$1 digitaldaemon.com...
 In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 o    The (c == null) thing. Issuing a compiler error on it is 
 incomplete,
 as what if it's (c == n) instead, and n is null? The best solution is
 probably to have c.opEquals(o) return false if o is null. (null == c)
 will get rewritten by the compiler as c.opEquals(null) anyway. Not
 really sure what to do here. Rewriting the expression as
 (c == n || c && n &&  c.opEquals(n)) is just too inefficient.
Ahh, too many things happen behind the scenes! How is the programmer going to remember all these special "rules"? Now I start to agree that operator overloading may be a bad thing in this case. Why can't we make simple things simple (the Java approach):

0) remove all the '===/!==/is/isnot' operators completely
1) use '==' as identity test
2) if an equality test is required, let the programmer *explicitly* call the method a.equals(b)

This way the programmer may even implement a different version of the equality test, e.g. a.deepEquals(b), which cannot be offered by just opEquals alone. (Similar approach should be applied to clone() and deepClone())
An example where overloading == is best is writing generic code that involves a BigInt class (like the mpz class I play around with sometimes). Say one has the test

  if (x+1 == 2*y) { ... }

and x and y are ints. That test compiles fine. Now with overloading == I can plug in my own BigInt class for things larger than long and the same test works fine (as long as I overload + and *, too). If a method like equals() was required then I could no longer use the same code for ints and my BigInts. If == wasn't overloadable there would have to be some way of saying x.equals(y) for ints x and y. Needless to say overloading == is a simpler language concept than adding methods to ints.
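As a concrete sketch of that genericity argument (names hypothetical; BigInt stands for any user type that overloads opAdd, opMul and opEquals):

    template sameValue(T)
    {
        bool sameValue(T x, T y)
        {
            // Resolves to the built-in operators for int, and to
            // opAdd/opMul/opEquals for a user-defined BigInt.
            return x + 1 == 2 * y;
        }
    }

    // usage: sameValue!(int)(3, 2) and sameValue!(BigInt)(bx, by)
    // both compile from the same template body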
May 02 2005
next sibling parent B.G. <B.G._member pathlink.com> writes:
In article <d56j99$aj0$1 digitaldaemon.com>, Ben Hinkle says...
<d c++.com> wrote in message news:d55nk2$26fs$1 digitaldaemon.com...
 In article <d52hk7$2hqp$1 digitaldaemon.com>, Ben Hinkle says...
 o    The (c == null) thing. Issuing a compiler error on it is 
 incomplete,
 as what if it's (c == n) instead, and n is null? The best solution is
 probably to have c.opEquals(o) return false if o is null. (null == c)
 will get rewritten by the compiler as c.opEquals(null) anyway. Not
 really sure what to do here. Rewriting the expression as
 (c == n || c && n &&  c.opEquals(n)) is just too inefficient.
Ahh, too many things happen behind the scenes! How is the programmer going to remember all these special "rules"? Now I start to agree that operator overloading may be a bad thing in this case. Why can't we make simple things simple (the Java approach):

0) remove all the '===/!==/is/isnot' operators completely
1) use '==' as identity test
2) if an equality test is required, let the programmer *explicitly* call the method a.equals(b)

This way the programmer may even implement a different version of the equality test, e.g. a.deepEquals(b), which cannot be offered by just opEquals alone.
That's an interesting point... But there may exist different versions of many operators, like less and greater for case-sensitive and case-insensitive string comparison. What can we do about that?

I think it's a typical situation where it's better to find a 'natural' way covering most cases and use the operator overloading syntactic sugar for that case. For opEquals, I personally perceive shallowEquals to be the 'natural' use for == and covering most cases. The rest are special cases. Any other opinions?
 (Similar
 approach should be applied to clone() and deepClone())
An example where overloading == is best is writing generic code that involves a BigInt class (like the mpz class I play around with sometimes). Say one has the test

  if (x+1 == 2*y) { ... }

and x and y are ints. That test compiles fine. Now with overloading == I can plug in my own BigInt class for things larger than long and the same test works fine (as long as I overload + and *, too). If a method like equals() was required then I could no longer use the same code for ints and my BigInts. If == wasn't overloadable there would have to be some way of saying x.equals(y) for ints x and y. Needless to say overloading == is a simpler language concept than adding methods to ints.
That's a very good point, I think... Any new language expects some new habits to be adopted; D is different in many ways. A dilemma like this is truly painful. I've been programming a lot in C and Java, and it really appears very strange and even ugly to curly-bracket-language-minded people.

But if I were the one to decide, having a young language like this, I'd be very careful before making it 'compatible' with existing _HABITS_, only to have nightmares and regrets about why I did that ;-)

Actually this behaviour is not really unique: remember comparison with NULL in SQL?
May 02 2005
prev sibling next sibling parent reply Oskar Linde <Oskar_member pathlink.com> writes:
Jumping in here from nothing.

In article <d56j99$aj0$1 digitaldaemon.com>, Ben Hinkle says...
An example where overloading == is best is writing generic code that 
involves a BigInt class (like the mpz class I play around with sometimes). 
Say one has the test
  if (x+1 == 2*y) { ... }
and x and y are ints. That test compiles fine. Now with overloading == I can 
plug in my own BigInt class for things larger than long and the same test 
works fine (as long as I overload + and *, too). If a method like
equals() was required then I could no longer use the same code for ints and 
my BigInts. If == wasn't overloadable there would have to be some way of 
saying x.equals(y) for ints x and y. Needless to say overloading == is a 
simpler language concept than adding methods to ints.
Hmm. This got me thinking about value/reference semantics. Why try to make classes and structs have the same operator overloading rules, when their semantics are so different?

A BigInt is supposed to conform to the behaviour of primitive types, i.e. have value semantics...? Shouldn't it therefore be made a struct instead (internally, it could just contain a (possibly shared) pointer to whatever data the bigint is stored as)? That would not be (much) more expensive than passing pointers to classes.

Structs with value semantics are the natural choice for implementing numeric types, such as vectors, matrices, etc. Operator overloads for structs should make it possible to make structs equal to primitive types in all respects.

Why not make/keep structs as the way to make things that conform to the primitive types and consider classes as different beasts with other requirements? I.e. overloading of ==, etc... They have after all different semantics.

/Oskar Linde

PS. I'm sure there has been lots of discussion about this that I've missed.
May 03 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 3 May 2005 08:42:45 +0000 (UTC), Oskar Linde wrote:

 Hmm. This got me thinking about value/reference semantics.
 Why try to make classes and structs have the same operator overloading rules,
 when their semantics are so different?
Because it makes template writing a lot easier?

-- 
Derek Parnell
Melbourne, Australia
3/05/2005 7:28:32 PM
May 03 2005
parent reply Oskar Linde <Oskar_member pathlink.com> writes:
In article <1esq63e2pos0u$.1473fyzzok7n7$.dlg 40tude.net>, Derek Parnell says...
On Tue, 3 May 2005 08:42:45 +0000 (UTC), Oskar Linde wrote:

 Hmm. This got me thinking about value/reference semantics.
 Why try to make classes and structs have the same operator overloading rules,
 when their semantics are so different?
Because it makes template writing a lot easier?
True, but my feeling is that a considerable amount of the template code (collections etc) still needs to differ between things with reference vs value semantics. No idea how large an amount, though...

/Oskar Linde
May 03 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 3 May 2005 11:10:26 +0000 (UTC), Oskar Linde wrote:

 In article <1esq63e2pos0u$.1473fyzzok7n7$.dlg 40tude.net>, Derek Parnell
says...
On Tue, 3 May 2005 08:42:45 +0000 (UTC), Oskar Linde wrote:

 Hmm. This got me thinking about value/reference semantics.
 Why try to make classes and structs have the same operator overloading rules,
 when their semantics are so different?
Because it makes template writing a lot easier?
True, but my feeling is that a considerable amount of the template code (collections etc) still needs to differ between things with reference vs value semantics.
Why? Conceptually, assigning the value of one thing to another is the same regardless of how the things have been implemented. And a programming language is a symbolic language for humans to use (rather than computers), so it should help us people express concepts and algorithms, and leave the details of implementation to the compiler/linker. Thus different syntax, depending on value/reference semantics, is a hindrance to efficiently writing concepts and algorithms.

-- 
Derek Parnell
Melbourne, Australia
3/05/2005 9:12:35 PM
May 03 2005
parent reply Oskar Linde <Oskar_member pathlink.com> writes:
In article <1gfbyn52jdrya.r2kk552kym1i.dlg 40tude.net>, Derek Parnell says...
On Tue, 3 May 2005 11:10:26 +0000 (UTC), Oskar Linde wrote:

 In article <1esq63e2pos0u$.1473fyzzok7n7$.dlg 40tude.net>, Derek Parnell
says...
On Tue, 3 May 2005 08:42:45 +0000 (UTC), Oskar Linde wrote:

 Hmm. This got me thinking about value/reference semantics.
 Why try to make classes and structs have the same operator overloading rules,
 when their semantics are so different?
Because it makes template writing a lot easier?
True, but my feeling is that a considerable amount of the template code (collections etc) still need to differ between things with reference vs value semantics.
Why? Conceptually, assigning the value of one thing to another is the same regardless of how the things have been implemented. And a programming language is a symbolic language for humans to use (rather than computers), so it should help us people express concepts and algorithms, and leave the details of implementation to the compiler/linker. Thus different syntax, depending on value/reference semantics, is a hindrance to efficiently writing concepts and algorithms.
There are already subtle or not-so-subtle differences. The most obvious is that changing the value of a copy with reference semantics also changes the value of the original. Another difference is that references can be invalid (null), values can not.

/Oskar Linde
May 03 2005
next sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 3 May 2005 12:07:02 +0000 (UTC), Oskar Linde wrote:

 In article <1gfbyn52jdrya.r2kk552kym1i.dlg 40tude.net>, Derek Parnell says...
[snip]
Thus different syntax, depending on value/reference semantics, is a
hindrance to efficiently writing concepts and algorithms.
There are already subtle or not-so-subtle differences. The most obvious is that changing the value of a copy with reference semantics also changes the value of the original.
Agreed. This is a big difference.

To help with templates, it would be useful to be able to interrogate, at template instantiation time, whether an identifier is a reference-type, array-type, or value-type item, and then be able to generate the appropriate code (a sketch of this follows below).

At the syntax level, it would be useful to be able to express the concept of a copy-assign for all types and not just arrays. The '.dup' is a reasonable syntax device to express copy-assign for arrays, but we do not (yet?) have the same syntax for other data types. For classes and structs, it is up to the class/struct programmer to code the various dup() methods but there is no default method. And .dup for primitive types is just not available in any form. For the sake of making templates easier to write, this idea would help.
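A rough sketch of that kind of instantiation-time interrogation, using static if and the is() expression (the per-class dup() method here is hypothetical; no such default exists):

    template copyOf(T)
    {
        T copyOf(T x)
        {
            static if (is(T : Object))
                return cast(T) x.dup();  // hypothetical per-class dup() method
            else
                return x;  // value types copy on assignment; an array
                           // branch could likewise return x.dup
        }
    }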
 Another difference is that references can be invalid (null), values can not.
Again, is there any good reason why we can't test value-types against null? The result would always be false, but it would make generic (template) programming easier.

-- 
Derek Parnell
Melbourne, Australia
3/05/2005 10:18:29 PM
May 03 2005
prev sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Oskar Linde" <Oskar_member pathlink.com> wrote in message
news:d57ph6$1vpc$1 digitaldaemon.com...
 In article <1gfbyn52jdrya.r2kk552kym1i.dlg 40tude.net>, Derek Parnell says...

 There are already subtle or not-so-subtle differences.
 The most obvious is that changing the value of a copy with reference semantics
 also changes the value of the original.
 Another difference is that references can be invalid (null), values can not.

 /Oskar Linde
Actually, that's not entirely true. Conceptually, you're simply talking about changing something that has two names. When you change the contents of something, the change happens to the contents... without regard to the name being used. If another name identifies the same item, that's not really another item at all. It's the same item being called something else.

The problem is that reference items expose their identities by default in the same way that value items expose their contents. This is exactly why it would be good to have ways of explicitly assigning and comparing the value referenced by an item's identity, without having to choose which syntax to use based on whether the item is a value type or reference type item.

There are advantages to the differences between value and reference types, and removing those advantages would be a big mistake... However, there are also advantages to not being forced to use one syntax with value types and a different syntax with reference types to accomplish the same operation.

This is why it would be best to have a way of addressing the identity of an item with a syntax that will work on any type (for example, itemname.identity or itemname.reference), and a way of addressing the item pointed to by that identity with a syntax that will work on any type (for example, itemname.value or itemname.contents or maybe even itemname.item).

The value assignment operator would then just be a simplified way of writing a value assignment.

  destinationitem := sourceitem;

would mean the same thing as...

  destinationitem.value = sourceitem.value;

or the same as...

  destinationitem = sourceitem;

as it has also been proposed.

TZ
May 03 2005
prev sibling parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Oskar Linde" <Oskar_member pathlink.com> wrote in message 
news:d57di5$1ecg$1 digitaldaemon.com...
 Jumping in here from nothing.

 In article <d56j99$aj0$1 digitaldaemon.com>, Ben Hinkle says...
An example where it overloading == is best is writing generic code that
involves a BigInt class (like the mpz class I play around with sometimes).
Say one has the test
  if (x+1 == 2*y) { ... }
and x and y are ints. That test compiles fine. Now with overloading == I 
can
plug in my own BigInt class for things larger than long and the same test
works fine (as long as I overload + and *, too). If a method like
equals() was requred then I could no longer use the same code for ints and
my BigInts. If == wasn't overloadable there would have to be some way of
saying x.equals(y) for ints x and y. Needless to say overloading == is a
simpler languge concept than adding methods to ints.
Hmm. This got me thinking about value/reference semantics. Why try to make classes and structs have the same operator overloading rules, when their semantics are so different? A BigInt is supposed to conform to the behaviour of primitive types, i.e. have value semantics...? Shouldn't it therefore be made a struct instead (internally, it could just contain a (possibly shared) pointer to whatever data the bigint is stored as)? That would not be (much) more expensive than passing pointers to classes.
I'm not sure what exactly you are suggesting - is it that BigInt be a struct with a shared pointer to the digits and that == not be overloadable and do a bitwise comparison of the members (ie compare the pointers)? This looks to me the same as having a class and doing a === on the reference. In any case the contents of the pointers need to be compared, since two different computations can produce two BigInts that have pointers to two different memory locations that happen to have the digit "1". We would want those two "1"'s to compare equal.
 Structs with value semantics are the natural choice for implementing
 numeric types, such as vectors, matrices, etc. Operator overloads for
 structs should make it possible to make structs equal to primitive types
 in all respects.
A struct containing a single pointer is the same as a class reference - no? I don't see the advantage of using structs. The only one I can think of is that structs get initialized to their default values and are immediately usable, while classes need to be explicitly instantiated.
 Why not make/keep structs as the way to make things that conform to the
 primitive types and consider classes as different beasts with other
 requirements? I.e. overloading of ==, etc...
 They have after all different semantics.

 /Oskar Linde

 Ps. I'm sure there has been lots of discussion about this that I've 
 missed.
May 03 2005
parent Oskar Linde <Oskar_member pathlink.com> writes:
In article <d57r6u$22f7$1 digitaldaemon.com>, Ben Hinkle says...
"Oskar Linde" <Oskar_member pathlink.com> wrote in message 
news:d57di5$1ecg$1 digitaldaemon.com...
 Jumping in here from nothing.

 In article <d56j99$aj0$1 digitaldaemon.com>, Ben Hinkle says...
An example where overloading == is best is writing generic code that
involves a BigInt class (like the mpz class I play around with sometimes).
Say one has the test
  if (x+1 == 2*y) { ... }
and x and y are ints. That test compiles fine. Now with overloading == I 
can
plug in my own BigInt class for things larger than long and the same test
works fine (as long as I overload + and *, too). If a method like
equals() was required then I could no longer use the same code for ints and
my BigInts. If == wasn't overloadable there would have to be some way of
saying x.equals(y) for ints x and y. Needless to say overloading == is a
simpler language concept than adding methods to ints.
Hmm. This got me thinking about value/reference semantics. Why try to make classes and structs have the same operator overloading rules, when their semantics are so different? A BigInt is supposed to conform to the behaviour of primitive types, i.e. have value semantics...? Shouldn't it therefore be made a struct instead (internally, it could just contain a (possibly shared) pointer to whatever data the bigint is stored as)? That would not be (much) more expensive than passing pointers to classes.
I'm not sure what exactly you are suggesting - is it that BigInt be a struct with a shared pointer to the digits
Yes.
and that == not be overloadable
No.
and do a 
bitwise comparison of the members (ie compare the pointers)?
No.
This looks to 
me the same as having a class and doing a === on the reference. In any case 
the contents of the pointers need to be compared since two different 
computations can produce two BigInts that have pointers to two different 
memory locations that happen to have the digit "1". We would want those two 
"1"'s to compare equal.
They should of course compare equal. Sorry, my paragraph makes me confused too. I just mean that structs (value semantics) are natural for numeric objects. Implementing copy-on-write for BigInts implemented as structs is possible.

What I was trying to imply about overloadability was quite the opposite. I am actually suggesting to make overloading of = (opAssign?) possible for structs, without making it possible for classes (where it makes sense to disallow it, since you are merely assigning references). This would help make it possible to make numeric types behave as primitive types.
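For what it's worth, a rough sketch of that layout (addDigits is a hypothetical helper; the digit arithmetic is elided):

    // Value-semantics BigInt as a struct: copying the struct copies only
    // the array reference. Operations allocate fresh storage for their
    // results, so a shared payload is never mutated in place.
    struct BigInt
    {
        uint[] digits;

        BigInt opAdd(BigInt rhs)
        {
            BigInt result;
            result.digits = addDigits(digits, rhs.digits); // hypothetical
            return result;
        }

        int opEquals(BigInt rhs)
        {
            return digits == rhs.digits;  // array == compares contents
        }
    }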
 Structs with value semantics are the natural choice for implementing
 numeric types, such as vectors, matrices, etc. Operator overloads for
 structs should make it possible to make structs equal to primitive types
 in all respects.
A struct containing a single pointer is the same as a class reference - no?
Yes, unless you can overload opAssign.
I don't see the advantage of using structs. The only one I can think of is 
that structs get initialized to their default values and are immediately 
usable, while classes need to be explicitly instantiated.
Yes, this makes them behave more like primitive types.

/Oskar Linde
May 03 2005
prev sibling parent d c++.com writes:
An example where overloading == is best is writing generic code that 
involves a BigInt class (like the mpz class I play around with sometimes). 
Say one has the test
  if (x+1 == 2*y) { ... }
and x and y are ints. That test compiles fine. Now with overloading == I can 
plug in my own BigInt class for things larger than long and the same test 
works fine (as long as I overload + and *, too). If a method like
equals() was required then I could no longer use the same code for ints and 
my BigInts. If == wasn't overloadable there would have to be some way of 
saying x.equals(y) for ints x and y. Needless to say overloading == is a 
simpler language concept than adding methods to ints.
I don't quite buy that argument. Taking C++/STL as an example: although the C++ language offers operator overloading, STL makes the decision not to rely on it; instead the programmer has to pass an explicit parameter when instantiating the template:

  hash_map<Key, Data, HashFcn, EqualKey, Alloc>

(ref. http://www.sgi.com/tech/stl/hash_map.html)

I believe this is the right way to go: let the programmer have the explicit control, and give him the least surprise.
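The same explicit-parameter style is expressible in D without touching Object.opCmp; a sketch (hypothetical, not an actual Phobos API):

    template sortWith(T)
    {
        // Insertion sort driven entirely by the supplied comparison
        // function, so Object.opCmp never enters the picture.
        void sortWith(T[] arr, int function(T, T) cmp)
        {
            for (int i = 1; i < arr.length; i++)
            {
                T v = arr[i];
                int j = i;
                for (; j > 0 && cmp(arr[j - 1], v) > 0; j--)
                    arr[j] = arr[j - 1];
                arr[j] = v;
            }
        }
    }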
May 03 2005
prev sibling next sibling parent "TechnoZeus" <TechnoZeus PeoplePC.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d51jpj$1qh7$1 digitaldaemon.com...
 I view the single most important impediment to D 1.0 at the moment as being
 the bug list. It's too long, there are too many problems. Design problems
 can always be worked around, but nothing is a turnoff as much as buggy
 software.

 I know there's been a lot of discussion about what to do with opCmp and
 opEquals. I understand there are some issues with it, and I don't have a
 clear idea on what to do about it yet. But let me extol some of the reasons
 why it is not such a terrible idea:

 o    The language should be usable without having to resort to templates.
*snip*

Thanks Walter. I'm sure everyone here appreciates your input on this matter, and especially the heads-up on your current mode of operation.

TechnoZeus
May 01 2005
prev sibling next sibling parent reply Daniel Horn <hellcatv hotmail.com> writes:
Walter wrote:
 
 Now for the problems:
 
 o    The current Object implementations of hash and opCmp fail if a moving
 GC is used, because they rely on the address of the object instance. This is
 not as insurmountable as it first seems. There's a straightforward way to
 fix this in the presence of a moving GC, there's just no point at the moment
 because there isn't a moving GC.
I think having a moving GC (even if it isn't efficient at all, just to test the idea of moving live memory as much as possible) is important if we intend people to write libraries that don't depend on the GC not moving things around (pinning things before passing them to C, for example)
 
 o    The current Object.opEquals just compares addresses. One alternative is
 doing a bit compare of the instance contents, but that gets problematical
 with a moving GC. Not sure what the right answer is.
a bit compare is probably the only option--but for empty classes that would be the vtable only?
 
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
 what if it's (c == n) instead, and n is null? The best solution is probably
 to have c.opEquals(o) return false if o is null. (null == c) will get
 rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
 to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
 is just too inefficient.
why is that inefficient--I'm sure some assembly magic could make it quite efficient... if people want it to be efficient, they can always call c.opEquals(n). You could even make new syntactic sugar for that test (that might segfault)
 o    The C.opCmp(C) thing. This is in my opinion the most serious issue,
 because its failure won't be obvious (unlike the c==null thing, which will
 produce the obvious exception). This is a more general problem not just with
an "obvious exception" is far too late in the development cycle to be considered for a large system language...
 opCmp but with any overriding function which hides all the previous
 functions with the same name up in the hierarchy. Perhaps the solution here
 is to issue a compiler diagnostic if this happens, and if the programmer
 wants it to happen, use the 'override' attribute on the C.opCmp(C).
I think bit compare with contents is the only reasonable default
 
May 01 2005
parent reply "Uwe Salomon" <post uwesalomon.de> writes:
 opCmp but with any overriding function which hides all the previous
 functions with the same name up in the hierarchy. Perhaps the solution here
 is to issue a compiler diagnostic if this happens, and if the programmer
 wants it to happen, use the 'override' attribute on the C.opCmp(C).
I think bit compare with contents is the only reasonable default
No, it is not. The only reasonable default is an exception. Otherwise you would have to write a dummy opCmp() for every class that is not comparable, with an assertion in it, to protect from programming failures.

Ciao
uwe
May 01 2005
parent reply Daniel Horn <hellcatv hotmail.com> writes:
Uwe Salomon wrote:
 opCmp but with any overriding function which hides all the previous
 functions with the same name up in the hierarchy. Perhaps the solution here
 is to issue a compiler diagnostic if this happens, and if the programmer
 wants it to happen, use the 'override' attribute on the C.opCmp(C).
I think bit compare with contents is the only reasonable default
No, it is not. The only reasonable default is an exception. Otherwise you would have to write a dummy opCmp() for every class that is not comparable, with an assertion in it, to protect from programming failures. Ciao uwe
An exception is not an option---that turns something that *should* be a reasonable compile error into an error that happens at runtime. Might as well code in Python then, without any sort of compile time guarantees about errors.

In C++ if I type a < b; I know that won't segfault unless I *personally* have written a strange opCmp function. If that guarantee went away I might lose my mind.

If opCmp is there at all it shouldn't throw an exception--this could cause so much havoc--the most reasonable option is to remove it, and then you'd get a compile error if you used it. If that's not feasible then bitwise comparison is certainly better than pointer comparison, which could theoretically change around!

The argument is you may want to dump otherwise incomparable objects into a balanced tree (though I guess you could just use a hashmap). If you want to allow incomparable objects then the toplevel Object shouldn't have a <, or better yet classes should inherit from Comparable by default unless overridden by a user to inherit from Object.
May 02 2005
next sibling parent "Uwe Salomon" <post uwesalomon.de> writes:
 I think bit compare with contents is the only reasonable default
No, it is not. The only reasonable default is an exception. Otherwise you would have to write a dummy opCmp() for every class that is not comparable, with an assertion in it, to protect from programming failures.
an exception is not an option---that turns something that *should* be a reasonable compile error into an error that happens at runtime.
Yes, I thought that as well. But if you write a template container which somewhere uses <> operators, it will only instantiate for objects that have an opCmp, even if you do not use the functionality. But that happens with structs too. So this probably is not a very good argument. Well, maybe you are right.

But if this is not possible/feasible, then I would favor the exception, because you would otherwise turn a clean runtime error (easy to fix: "class xxx has no opCmp") into some undefined behaviour which works silently! Even better than Python... :)

Anyways, I do not think this is a big issue in practice.

Ciao
uwe
May 02 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Daniel Horn" <hellcatv hotmail.com> wrote in message
news:d54kun$14im$1 digitaldaemon.com...
 in C++ if I type a < b; I know that won't segfault unless
 I *personally* have written a strange opCmp function. if that guarantee
 went away I might lose my mind.
It will seg fault if operator<() is written as a virtual function.
May 02 2005
parent "Matthew" <admin.hat stlsoft.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:d560bf$2hjp$1 digitaldaemon.com...
 "Daniel Horn" <hellcatv hotmail.com> wrote in message
 news:d54kun$14im$1 digitaldaemon.com...
 in C++ if I type a < b; I know that won't segfault unless
 I *personally* have written a strange opCmp function. if that guarantee
 went away I might lose my mind.
It will seg fault if operator<() is written as a virtual function.
That's a worthless argument. There is no established idiom in C++ for a virtual operator <().
May 03 2005
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Walter wrote:
<snip>
 I know there's been a lot of discussion about what to do with opCmp and
 opEquals. I understand there are some issues with it, and I don't have a
 clear idea on what to do about it yet. But let me extol some of the reasons
 why it is not such a terrible idea:
<snip>
 o    Having them in Object makes the implementation of associative arrays
 and sorting compact.
I'm not sure I see how a template implementation would be bloated in comparison....
 o    Having reasonable default behaviors for them means that one can write
 fully functional classes without
 doing a lot of typing. This is not true for C++.
 
 o    It can be completely ignored. You can implement class C, provide an
 opCmp(C), and it will work for comparisons between objects of type C and
 types derived from C. You'll get a compile time error if you try to compare
 c<o, where o is an Object. Array .sort and associative arrays of C will
 fail, and this is a problem, but if you don't use those features for C,
 it'll be fine.
Until someone else uses your library and expects to be able to use the class as an AA key....
 o    Continuing with that thought, there's nothing impeding writing one's
 own template sorting device like C++ that completely ignores Object.opCmp.
Is there anything impeding "one's" hooking up such a thing as the behaviour of .sort?

<snip>
 o    The current Object.opEquals just compares addresses. One alternative is
 doing a bit compare of the instance contents, but that gets problematical
 with a moving GC. Not sure what the right answer is.
IMO the right answer is to leave opEquals as it is. Changing it to do a bit compare would break some classes that use the default principle - each object is equal only to itself.
 o    The (c == null) thing. Issuing a compiler error on it is incomplete, as
 what if it's (c == n) instead, and n is null? The best solution is probably
 to have c.opEquals(o) return false if o is null. (null == c) will get
 rewritten by the compiler as c.opEquals(null) anyway. Not really sure what
 to do here. Rewriting the expression as (c == n || c && n && c.opEquals(n))
 is just too inefficient.
It would also infinitely recurse. I guess you meant (c === n || c && n && c.opEquals(n)) or (c is n || c && n && c.opEquals(n)) ?
 o    The C.opCmp(C) thing. This is in my opinion the most serious issue,
 because its failure won't be obvious (unlike the c==null thing, which will
 produce the obvious exception). This is a more general problem not just with
 opCmp but with any overriding function which hides all the previous
 functions with the same name up in the hierarchy. Perhaps the solution here
 is to issue a compiler diagnostic if this happens, and if the programmer
 wants it to happen, use the 'override' attribute on the C.opCmp(C).
In what sense would it override? By something like

  override int opCmp(C c) { ... }

being defined to mean

  override int opCmp(Object o) { return opCmp(cast(C) o); }
  int opCmp(C c) { ... }

? And if there's more than one matching method of the base class, would it override them all?

You left off the other problem that's been mentioned on various occasions: it prevents AAs from working properly on classes that have no ordering.

Stewart.

-- 
My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
May 03 2005