www.digitalmars.com         C & C++   DMDScript  

digitalmars.D - [Again] One big isue before 1.0

reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
There are bigger problems but:

Can please these:
    int opCmp(Object o);
    int opEquals(Object o);

be removed from Object!

No one uses them (i guess) (except AAs but i believe
this can be changed!) and they can cause a lot of problems!

Isn't it time to get rid of them?
Aug 07 2004
next sibling parent reply Ant <duitoolkit yahoo.ca> writes:
On Sat, 07 Aug 2004 13:02:40 +0200, Ivan Senji wrote:

 There are bigger problems but:
 
 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);
 
 be removed from Object!
 
 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!
 
 Isn't it time to get rid of them?
I use them! How else can an array of objects be sorted? what problems do they cause? Ant
Aug 07 2004
next sibling parent reply Maik Zumstrull <Maik.Zumstrull gmx.de> writes:
Ant schrieb:

 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);
 
 be removed from Object!
 
 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!
 
 Isn't it time to get rid of them?
I use them!
And why not?
 How else can an array of objects be sorted?
Have a look at Java's "Comparable" Interface. Their util.Arrays.sort() can use it. It works, but that's about it. Ugly to use, even uglier to implement. I think operator overloading is a much nicer approach than "Interfaces for everything".
 what problems do they cause?
People who still think in plain C get confused.
Aug 07 2004
parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Maik Zumstrull" <Maik.Zumstrull gmx.de> wrote in message
news:cf2h7n$1q95$1 digitaldaemon.com...
 Ant schrieb:

 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);

 be removed from Object!

 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!

 Isn't it time to get rid of them?
I use them!
And why not?
 How else can an array of objects be sorted?
Have a look at Java's "Comparable" Interface. Their util.Arrays.sort() can use it. It works, but that's about it. Ugly to use, even uglier to implement. I think operator overloading is a much nicer approach than "Interfaces for everything".
 what problems do they cause?
People who still think in plain C get confused.
I am confused becouse i am thinking in 'plain' C++: (not C) If i don't write an operator i don't want it to magically be there :(
Aug 07 2004
prev sibling next sibling parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Ant" <duitoolkit yahoo.ca> wrote in message
news:pan.2004.08.07.11.23.04.166202 yahoo.ca...
 On Sat, 07 Aug 2004 13:02:40 +0200, Ivan Senji wrote:

 There are bigger problems but:

 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);

 be removed from Object!

 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!

 Isn't it time to get rid of them?
I use them!
I use them too, but i wish that finally they are not automatically defined for every class!
 How else can an array of objects be sorted?
 what problems do they cause?
Problems are: I write a class or struct and by accident forget to declare these operators, and if i pass my type to a template using == or < on this type the compiler will not complain but use the default ones from object! In the newer versions of DMD object doesn't define these operators, it only declares them, but this is still legal: class A{} A a,b; a = new A; b = new A; if(a==b)... if(a<b)... IMO this shouldn't work because i didn't write these operators! And additionally i may wan't my class objects to be incomparable, and this is not possible now.
 Ant
Aug 07 2004
next sibling parent reply Ant <duitoolkit yahoo.ca> writes:
On Sat, 07 Aug 2004 14:36:15 +0200, Ivan Senji wrote:

 "Ant" <duitoolkit yahoo.ca> wrote in message
 news:pan.2004.08.07.11.23.04.166202 yahoo.ca...
 On Sat, 07 Aug 2004 13:02:40 +0200, Ivan Senji wrote:

 There are bigger problems but:

 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);

 be removed from Object!
Problems are:
ahh... wasn't it called cmp(Object) (and equals(Object)?)? why did it change? Ant
Aug 07 2004
parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
i think walter added "op" to the front of all the operator functions to
avoid name conflicts (they used to be things like add(), mul() etc.  and now
they're opAdd(), opMul()).  personally i don't see what's wrong with
operator+() , but oh well.
Aug 07 2004
parent reply "Walter" <newshound digitalmars.com> writes:
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message
news:cf2qfj$1toe$1 digitaldaemon.com...
 personally i don't see what's wrong with
 operator+() , but oh well.
operator+ doesn't work when you need both the forward and the 'r' versions of an operator overload. Also, opCmp handles <, <=, >, >=. And lastly, opCmp is eminently greppable.
Aug 07 2004
parent reply davepermen <davepermen_member pathlink.com> writes:
In article <cf31k5$223l$2 digitaldaemon.com>, Walter says...
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message
news:cf2qfj$1toe$1 digitaldaemon.com...
 personally i don't see what's wrong with
 operator+() , but oh well.
operator+ doesn't work when you need both the forward and the 'r' versions of an operator overload. Also, opCmp handles <, <=, >, >=. And lastly, opCmp is eminently greppable.
i'd still prefer operator cmp() and similar. opAnything looks ugly. operator Anything() not (imho). but thats me.. and doesn't really mather
Aug 08 2004
parent reply "Walter" <newshound digitalmars.com> writes:
"davepermen" <davepermen_member pathlink.com> wrote in message
news:cf60oo$1pt$1 digitaldaemon.com...
 In article <cf31k5$223l$2 digitaldaemon.com>, Walter says...
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message
news:cf2qfj$1toe$1 digitaldaemon.com...
 personally i don't see what's wrong with
 operator+() , but oh well.
operator+ doesn't work when you need both the forward and the 'r'
versions
of an operator overload. Also, opCmp handles <, <=, >, >=. And lastly,
opCmp
is eminently greppable.
i'd still prefer operator cmp() and similar. opAnything looks ugly.
operator
 Anything() not (imho). but thats me.. and doesn't really mather
I think it's advantageous to have a predictable and recognizable 'look' for operator overload function names, since they don't have a keyword setting them off.
Aug 09 2004
parent reply davepermen <davepermen_member pathlink.com> writes:
your style has no advantage. it is just another special name for something that
has a special meaning. a context-dependent keyword.

searching for 'opAnything' or 'operator Anything' doesn't do any difference. i
just prefer the second by much. that way, the method really shines out to have
special behaviour, and is not planned to get called manually, too..

but as i said. it's just me, and it's just about style. you don't have any
keywords with capital letters in else. and it would remove all those op*
keywords, and instead create only one keyword, namely operator.

later, in D 2.0, or so, you could even allow others.. like float operator
dot(vec second)
to get called as v0 dot v1

for example..


but i guess this was discussed before. it's a minor issue. it's just about
style. keywords shall be small case, and low in count. all those opAnything are
ugly imho.

i prefer the c++/cli style, with the concatenated keywords, for new features..

value class X {}, ref class Y {} etc.. in the same way i like the operator XX()
style.

In article <cf79j8$dqp$4 digitaldaemon.com>, Walter says...
"davepermen" <davepermen_member pathlink.com> wrote in message
news:cf60oo$1pt$1 digitaldaemon.com...
 In article <cf31k5$223l$2 digitaldaemon.com>, Walter says...
"Jarrett Billingsley" <kb3ctd2 yahoo.com> wrote in message
news:cf2qfj$1toe$1 digitaldaemon.com...
 personally i don't see what's wrong with
 operator+() , but oh well.
operator+ doesn't work when you need both the forward and the 'r'
versions
of an operator overload. Also, opCmp handles <, <=, >, >=. And lastly,
opCmp
is eminently greppable.
i'd still prefer operator cmp() and similar. opAnything looks ugly.
operator
 Anything() not (imho). but thats me.. and doesn't really mather
I think it's advantageous to have a predictable and recognizable 'look' for operator overload function names, since they don't have a keyword setting them off.
Aug 09 2004
parent reply "Walter" <newshound digitalmars.com> writes:
"davepermen" <davepermen_member pathlink.com> wrote in message
news:cf8oq1$10lq$1 digitaldaemon.com...
 i prefer the c++/cli style, with the concatenated keywords, for new
features.. The context-sensitive keyword technique breaks the separation between lexical analysis and syntactic analysis, something that C++ does all the time but I wish to avoid with D. One consequence of breaking this rule is it makes syntax highlighters a lot more work to build.
Aug 09 2004
parent davepermen <davepermen_member pathlink.com> writes:
has to be hell of a lot more work to scan for... 'operator add' compared to
opAdd..

but it's okay. it's your language, and you can do what ever you want.


In article <cf8upo$12fc$1 digitaldaemon.com>, Walter says...
"davepermen" <davepermen_member pathlink.com> wrote in message
news:cf8oq1$10lq$1 digitaldaemon.com...
 i prefer the c++/cli style, with the concatenated keywords, for new
features.. The context-sensitive keyword technique breaks the separation between lexical analysis and syntactic analysis, something that C++ does all the time but I wish to avoid with D. One consequence of breaking this rule is it makes syntax highlighters a lot more work to build.
Aug 10 2004
prev sibling next sibling parent Ben Hinkle <bhinkle4 juno.com> writes:
Ivan Senji wrote:

 "Ant" <duitoolkit yahoo.ca> wrote in message
 news:pan.2004.08.07.11.23.04.166202 yahoo.ca...
 On Sat, 07 Aug 2004 13:02:40 +0200, Ivan Senji wrote:

 There are bigger problems but:

 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);

 be removed from Object!

 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!

 Isn't it time to get rid of them?
I use them!
I use them too, but i wish that finally they are not automatically defined for every class!
 How else can an array of objects be sorted?
 what problems do they cause?
Problems are: I write a class or struct and by accident forget to declare these operators, and if i pass my type to a template using == or < on this type the compiler will not complain but use the default ones from object!
That's exactly what I would expect it to do. One person's bug is another's feature.
 In the newer versions of DMD object doesn't define these
 operators, it only declares them, but this is still legal:
 
 class A{}
 
 A a,b;
 a = new A;
 b = new A;
 
 if(a==b)...
 if(a<b)...
 
 IMO this shouldn't work because i didn't write these operators!
By that argument we shouldn't have anything in Object since you don't seem to want to inherit any behavior automatically.
 And additionally i may wan't my class objects to be incomparable,
 and this is not possible now.
Then throw an exception in opCmp - sure it's run-time but incomparability is presumably very rare.
 Ant
Aug 07 2004
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cf2ibj$1qnr$1 digitaldaemon.com...
 And additionally i may wan't my class objects to be incomparable,
 and this is not possible now.
Just put an assert(0) in the opCmp() overload for your class.
Aug 07 2004
parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:cf31k5$223l$1 digitaldaemon.com...
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
 news:cf2ibj$1qnr$1 digitaldaemon.com...
 And additionally i may wan't my class objects to be incomparable,
 and this is not possible now.
Just put an assert(0) in the opCmp() overload for your class.
This isn't right! If i don't write opCmp i have opCmp, If i don't wan't to have opCmp i have to write one! What is so special about opCmp, (ok it is used in AAs but couldn't compiler report: "this type cann't be used in AA because of the missing opCmp!") I know everyone would agree that it would be stupid if we had to write: A opAdd(A a){assert(0);} to make are type un-addable! opCmp shouldn't be discriminated this way! :)
Aug 07 2004
next sibling parent reply Lars Ivar Igesund <larsivar igesund.net> writes:
Ivan Senji wrote:

 opCmp shouldn't be discriminated this way! :)
Ivan's correct. You got my vote. Lars Ivar Igesund
Aug 07 2004
next sibling parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Lars Ivar Igesund" <larsivar igesund.net> wrote in message
news:cf3ik6$2abe$1 digitaldaemon.com...
 Ivan Senji wrote:

 opCmp shouldn't be discriminated this way! :)
Ivan's correct. You got my vote.
Hooray! Thanks. But: Walter is the one to say if i am correct or not and it seems to me he thinks i am wrong on this subject. :(
 Lars Ivar Igesund
Aug 08 2004
prev sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
Me too. I dislike the root class having things that derived classes (or their
designers) cannot make informed choices
on.

And a runtime assert is a revolting way to denote that a type should not take
part in comparison operations.

"Lars Ivar Igesund" <larsivar igesund.net> wrote in message
news:cf3ik6$2abe$1 digitaldaemon.com...
 Ivan Senji wrote:

 opCmp shouldn't be discriminated this way! :)
Ivan's correct. You got my vote. Lars Ivar Igesund
Aug 08 2004
parent Charles Hixson <charleshixsn earthlink.net> writes:
Matthew wrote:
 Me too. I dislike the root class having things that derived classes (or their
designers) cannot make informed choices
 on.
 
 And a runtime assert is a revolting way to denote that a type should not take
part in comparison operations.
 
 "Lars Ivar Igesund" <larsivar igesund.net> wrote in message
news:cf3ik6$2abe$1 digitaldaemon.com...
 
Ivan Senji wrote:


opCmp shouldn't be discriminated this way! :)
Ivan's correct. You got my vote. Lars Ivar Igesund
If there were multiple inheritance, then I could see your point. There isn't. I know that in principle it's feasible to add them in to every class via interfaces...but UGH! The problem is, sometimes I need to add functionality that's common to multiple subclasses of a class that didn't need it. And it needs to be seen as not just the same name, but the same function (with variations for inherited alterations). Well, I CAN do it, I've proven that. I've also proved I don't like the process. And the functions that most frequently need to be added are the operator equivalents. And of those, the comparison operators are the most frequent. Having opCmp defined at Object saves me a bunch of time, though what I really want is more like "This comparison is via the grandparent inheritance object". I DO wish that C++ hadn't poisoned the well for multiple inheritance. Eiffel showed a valid and decent approach, but almost nobody notices because C++ has convinced everyone that it's a horrible idea. (Even Ada95 did it better than C++, and Ada95 was designed under tremendous constraints...and decided on by a committee.)
Aug 23 2004
prev sibling next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cf3i3d$2a7r$3 digitaldaemon.com...
 What is so special about opCmp, (ok it is used in AAs but
 couldn't compiler report: "this type cann't be used in AA because
 of the missing opCmp!")
Putting it in Object reserves a predictable vtbl[] slot for it.
Aug 09 2004
parent reply Regan Heath <regan netwin.co.nz> writes:
On Mon, 9 Aug 2004 00:32:37 -0700, Walter <newshound digitalmars.com> 
wrote:
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
 news:cf3i3d$2a7r$3 digitaldaemon.com...
 What is so special about opCmp, (ok it is used in AAs but
 couldn't compiler report: "this type cann't be used in AA because
 of the missing opCmp!")
Putting it in Object reserves a predictable vtbl[] slot for it.
That that achieves... allowing you to aggressively optimise performance? Could you achieve the same thing with some sort of hard-coded globally defined interface? eg. interface IComparable { int opCmp(IComparable rhs); } class Foo : IComparable { } Does an interface reserve a vtbl slot? and can it be predictable? Regan -- Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Aug 09 2004
parent reply "Walter" <newshound digitalmars.com> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message
news:opschjvoak5a2sq9 digitalmars.com...
 On Mon, 9 Aug 2004 00:32:37 -0700, Walter <newshound digitalmars.com>
 wrote:
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
 news:cf3i3d$2a7r$3 digitaldaemon.com...
 What is so special about opCmp, (ok it is used in AAs but
 couldn't compiler report: "this type cann't be used in AA because
 of the missing opCmp!")
Putting it in Object reserves a predictable vtbl[] slot for it.
That that achieves... allowing you to aggressively optimise performance? Could you achieve the same thing with some sort of hard-coded globally defined interface? eg. interface IComparable { int opCmp(IComparable rhs); } class Foo : IComparable { } Does an interface reserve a vtbl slot? and can it be predictable?
Each interface implemented by a class creates another vptr and corresponding vtbl[] for it. Testing whether an object supports a particular interface is a runtime check, not a compile time one. So, yes, it will impact sorting performance.
Aug 09 2004
parent Sha Chancellor <schancel pacific.net> writes:
In article <cf8vcm$12k3$1 digitaldaemon.com>,
 "Walter" <newshound digitalmars.com> wrote:

 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opschjvoak5a2sq9 digitalmars.com...
 On Mon, 9 Aug 2004 00:32:37 -0700, Walter <newshound digitalmars.com>
 wrote:
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
 news:cf3i3d$2a7r$3 digitaldaemon.com...
 What is so special about opCmp, (ok it is used in AAs but
 couldn't compiler report: "this type cann't be used in AA because
 of the missing opCmp!")
Putting it in Object reserves a predictable vtbl[] slot for it.
That that achieves... allowing you to aggressively optimise performance? Could you achieve the same thing with some sort of hard-coded globally defined interface? eg. interface IComparable { int opCmp(IComparable rhs); } class Foo : IComparable { } Does an interface reserve a vtbl slot? and can it be predictable?
Each interface implemented by a class creates another vptr and corresponding vtbl[] for it. Testing whether an object supports a particular interface is a runtime check, not a compile time one. So, yes, it will impact sorting performance.
So LEAVE the opCmp operator there, and put assert( 0 ); in it.....
Aug 18 2004
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Ivan Senji wrote:
<snip>
 What is so special about opCmp, (ok it is used in AAs but
 couldn't compiler report: "this type cann't be used in AA because
 of the missing opCmp!")
<snip> If the compiler were capable of that, it would be equally capable of hooking up an alternative AA implementation that doesn't depend on opCmp. See also http://www.digitalmars.com/drn-bin/wwwnews?D/26144 http://www.digitalmars.com/drn-bin/wwwnews?D/26193 http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/5406 Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004
prev sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Ant wrote:
<snip>
 How else can an array of objects be sorted?
 what problems do they cause?
Can you think of a scenario in which an array of objects, of no common base class besides Object, would be mutually comparable? Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004
parent reply Russ Lewis <spamhole-2001-07-16 deming-os.org> writes:
Stewart Gordon wrote:
 Ant wrote:
 <snip>
 
 How else can an array of objects be sorted?
 what problems do they cause?
Can you think of a scenario in which an array of objects, of no common base class besides Object, would be mutually comparable?
Object[] list; void Add(Object o) { list ~= o; } char[] Work() { char[] ret; foreach(Object o; list) ret ~= "{"~o.toString()~"}"; return ret; } void Remove(Object o) { list.sort; <find object> <remove object> }
Aug 09 2004
next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Russ Lewis wrote:
 Stewart Gordon wrote:
 
 Ant wrote:
 <snip>

 How else can an array of objects be sorted?
 what problems do they cause?
Can you think of a scenario in which an array of objects, of no common base class besides Object, would be mutually comparable?
Object[] list; void Add(Object o) { list ~= o; } char[] Work() { char[] ret; foreach(Object o; list) ret ~= "{"~o.toString()~"}"; return ret; } void Remove(Object o) { list.sort; <find object> <remove object> }
This is arguably the "D way" of thinking, but it strikes me as being a hack brought around by the lack of a standard Set or MultiSet container. Also, the cost is very high: class Apple { } class Orange { } Apple a = new Apple(); Orange o = new Orange(); if (a > o) { /* why can we do this? */ } Object.opCmp causes unintuitive behaviour in other places too: class IntWrapper { int value; this(int i) { value = i; } char[] toString() { return std.string.toString(value); } int opCmp(TestInteger rhs) { return value - rhs.value; } int opEquals(TestInteger rhs) { return rhs !== null && value == rhs.value; } } Due to overload resolution rules, the sort property will use Object.opCmp, not IntWrapper.opCmp! When overloading opCmp for class types, you must /always/ recieve an Object and do a dynamic cast if only wish to allow comparison against specific types. This means, in the vast majority of cases, an unneccesary runtime test for every object comparison. As an unrelated sidenote, omitting the 'override' annotation effectively achieves C++-style non-polymorphic inheritance: the opCmp chosen depends purely on how the reference is cast. (and, in this case, both overloads can match the argument, so we don't get the benefit of a compile error!) Going back to the root of the problem, Object defines opCmp and opEquals for exactly two reasons: array.sort and associative arrays. Associative arrays can just as easily use the object's hash value instead of calling opCmp. The actual ordering of the keys in an AA isn't any of our business anyway. Fixing array.sort is easy too: dump it and replace it with a Phobos function. For the sake of testing, I refactored the built-in sort function into a template that recieves a comparer function. It took about 20 minutes, and it still passes the unit test. (attached. Use -debug=main to build a standalone executable that runs the unit test then quits. Use -debug=qsort for verbosity) This isn't a hard change, or even a big one. But it is causing a lot of inconvenient and unintuitive behaviour, which suggests that it is an important change. -- andy
Aug 09 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Andy Friesen wrote:
<snip>
 This is arguably the "D way" of thinking, but it strikes me as being a 
 hack brought around by the lack of a standard Set or MultiSet container. 
 Also, the cost is very high:
Actually, I think the "D way" is that you'd almost never need a container of arbitrary Objects, since D has templates. <snip>
 Object.opCmp causes unintuitive behaviour in other places too:
 
     class IntWrapper {
       int value;
       this(int i) { value = i; }
 
       char[] toString() {
         return std.string.toString(value);
       }
 
       int opCmp(TestInteger rhs) {
         return value - rhs.value;
       }
 
       int opEquals(TestInteger rhs) {
         return rhs !== null && value == rhs.value;
       }
     }
 
 Due to overload resolution rules, the sort property will use 
 Object.opCmp, not IntWrapper.opCmp!
And because TestInteger is not a superclass of IntWrapper.
 When overloading opCmp for class types, you must /always/ recieve an 
 Object and do a dynamic cast if only wish to allow comparison against 
 specific types.  This means, in the vast majority of cases, an 
 unneccesary runtime test for every object comparison.
Yes, as has been said already, this should be a compile-time check.
 As an unrelated sidenote, omitting the 'override' annotation effectively 
 achieves C++-style non-polymorphic inheritance: the opCmp chosen depends 
 purely on how the reference is cast. (and, in this case, both overloads 
 can match the argument, so we don't get the benefit of a compile error!)
AIUI the override keyword is just a typo catcher. I.e. if a program compiles, it will compile to exactly the same with the keyword removed.
 Going back to the root of the problem, Object defines opCmp and opEquals 
 for exactly two reasons:  array.sort and associative arrays.
 
 Associative arrays can just as easily use the object's hash value 
 instead of calling opCmp.  The actual ordering of the keys in an AA 
 isn't any of our business anyway.
AIUI, AAs already use both toHash and opCmp.
 Fixing array.sort is easy too: dump it and replace it with a Phobos 
 function.
<snip> I'm not sure how simply moving something into Phobos would fix it. Indeed, a lot of D's builtins are actually part of Phobos. Look at dmd\src\phobos\internal. But by being builtins, they have the advantage that a compiler can optimise them if it likes. Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 10 2004
parent reply Andy Friesen <andy ikagames.com> writes:
Stewart Gordon wrote:
 Andy Friesen wrote:
 <snip>
 
 This is arguably the "D way" of thinking, but it strikes me as being a 
 hack brought around by the lack of a standard Set or MultiSet 
 container. Also, the cost is very high:
Actually, I think the "D way" is that you'd almost never need a container of arbitrary Objects, since D has templates. <snip>
 Object.opCmp causes unintuitive behaviour in other places too:

     class IntWrapper {
       int value;
       this(int i) { value = i; }

       char[] toString() {
         return std.string.toString(value);
       }

       int opCmp(IntWrapper rhs) {
         return value - rhs.value;
       }

       int opEquals(IntWrapper rhs) {
         return rhs !== null && value == rhs.value;
       }
     }

 Due to overload resolution rules, the sort property will use 
 Object.opCmp, not IntWrapper.opCmp!
And because TestInteger is not a superclass of IntWrapper.
eugh. That's a typo. Assume that TestInteger and IntWrapper are the same class. Sorry.
 When overloading opCmp for class types, you must /always/ recieve an 
 Object and do a dynamic cast if only wish to allow comparison against 
 specific types.  This means, in the vast majority of cases, an 
 unneccesary runtime test for every object comparison.
Yes, as has been said already, this should be a compile-time check.
 As an unrelated sidenote, omitting the 'override' annotation 
 effectively achieves C++-style non-polymorphic inheritance: the opCmp 
 chosen depends purely on how the reference is cast. (and, in this 
 case, both overloads can match the argument, so we don't get the 
 benefit of a compile error!)
AIUI the override keyword is just a typo catcher. I.e. if a program compiles, it will compile to exactly the same with the keyword removed.
Right.
 Going back to the root of the problem, Object defines opCmp and 
 opEquals for exactly two reasons:  array.sort and associative arrays.

 Associative arrays can just as easily use the object's hash value 
 instead of calling opCmp.  The actual ordering of the keys in an AA 
 isn't any of our business anyway.
AIUI, AAs already use both toHash and opCmp.
Currently, yes. But the algorithm doesn't fundamentally need to in order to work correctly.
 Fixing array.sort is easy too: dump it and replace it with a Phobos 
 function.
<snip> I'm not sure how simply moving something into Phobos would fix it. Indeed, a lot of D's builtins are actually part of Phobos. Look at dmd\src\phobos\internal. But by being builtins, they have the advantage that a compiler can optimise them if it likes.
What I meant was to remove the built-in sort property altogether and replace it with a plain old, ordinary function in Phobos. The current implementation effectively translates "x.sort" to "_adSort(x, typeid(typeof(x[0])));". _adSort() is implemented in phobos/internal/qsort.d, so the gain in optimizibility isn't all that much. In contrast, a template function is actually *more* optimizable because the comparison function can be inlined instead of requiring a virtual call to a TypeInfo instance. (this virtual call, incidently, itself performs another virtual call: opCmp of the class type) -- andy
Aug 10 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Andy Friesen wrote:

<snip>
 In contrast, a template function is actually *more* optimizable because 
 the comparison function can be inlined instead of requiring a virtual 
 call to a TypeInfo instance. (this virtual call, incidently, itself 
 performs another virtual call: opCmp of the class type)
A builtin could just as well be implemented via a template. Have you seen my post on this, FTM? http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/5406 Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 10 2004
parent reply Andy Friesen <andy ikagames.com> writes:
Stewart Gordon wrote:

 Andy Friesen wrote:
 
 <snip>
 
 In contrast, a template function is actually *more* optimizable 
 because the comparison function can be inlined instead of requiring a 
 virtual call to a TypeInfo instance. (this virtual call, incidently, 
 itself performs another virtual call: opCmp of the class type)
A builtin could just as well be implemented via a template. Have you seen my post on this, FTM? http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/5406
No, I must have missed it. Your proposed solution is more or less perfect. :) I was working under the assumption that having the compiler instantiate templates would be a nontrivial increase in complexity, so I suggested a solution which can easily be implemented. -- andy
Aug 10 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Andy Friesen wrote:
<snip>
 I was working under the assumption that having the compiler instantiate 
 templates would be a nontrivial increase in complexity, so I suggested a 
 solution which can easily be implemented.
The compiler instantiates templates whenever the programmer uses them. It would be fairly trivial to expand int[] data; ... data.sort; into int[] data; ... internal.sort!(int)(data); Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 10 2004
parent Andy Friesen <andy ikagames.com> writes:
Stewart Gordon wrote:
 Andy Friesen wrote:
 <snip>
 
 I was working under the assumption that having the compiler 
 instantiate templates would be a nontrivial increase in complexity, so 
 I suggested a solution which can easily be implemented.
The compiler instantiates templates whenever the programmer uses them. It would be fairly trivial to expand int[] data; ... data.sort; into int[] data; ... internal.sort!(int)(data);
Right. I was speaking more from the perspective of how much the compiler would need to be changed to implement this than sheer complexity. (currently, the .sort translation is done in the back-end, where code is assumed to have already been parsed and checked) -- andy
Aug 10 2004
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Russ Lewis wrote:
 Stewart Gordon wrote:
<snip>
 Can you think of a scenario in which an array of objects, of no common 
 base class besides Object, would be mutually comparable?
<snip> Your code has nothing to do with my question at all. You've merely given me a set of wrapper functions for the process of sorting an Object[]. They give not a proton, neutron or electron of evidence of objects of different classes being added to the list, nor of their being mutually comparable. Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 10 2004
prev sibling next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Ivan Senji wrote:

 There are bigger problems but:
 
 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);
 
 be removed from Object!
 
 No one uses them (i guess) (except AAs but i believe
 this can be changed!) and they can cause a lot of problems!
 
 Isn't it time to get rid of them?
Yes. Anything Object defines, *everything* defines, and, when it's something as broad as this, it sucks. The wost part is what exactly Object's behaviour is: the default opCmp effectively generates a random number for each object, and yields a comparison between those. If this isn't abuse of overloaded operators, what is? All it would take, as far as I can tell, is to change TypeInfoClass to compare hash codes instead of calling opCmp and opEquals. Lucky for us, this means that Object can still be sorted in an array and used as an AA key while making comparison operators raise a compile time error. Everybody wins. If nothing else, this would go a huge way toward disambiguating == and != from === and !==. Using the former on most object references would be a compile error. -- andy
Aug 07 2004
next sibling parent Ben Hinkle <bhinkle4 juno.com> writes:
[snip]

 The wost part is what exactly Object's behaviour is: the default opCmp
 effectively generates a random number for each object, and yields a
 comparison between those.  
The random number is unique to the object. Think of it an an ID - kindof like a Social Security Number for objects. The problem I see with the current behavior is the impact it has on the GC architecture. Maybe the way out of that one is to have a marker interface called MovableObject that tells the GC an object can be moved - or something like that. [snip]
 All it would take, as far as I can tell, is to change TypeInfoClass to
 compare hash codes instead of calling opCmp and opEquals.  
[snip] The problem is that two distinct objects (by any measurement) can have the same hash code.
Aug 07 2004
prev sibling parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Andy Friesen" <andy ikagames.com> wrote in message
news:cf2s0k$1uu5$1 digitaldaemon.com...
 Ivan Senji wrote:

 Isn't it time to get rid of them?
Yes.
I'm glad that someone agrees with me on this. Maybe if we are boring enough we can convince others. :)
 Anything Object defines, *everything* defines, and, when it's something
 as broad as this, it sucks.

 The wost part is what exactly Object's behaviour is: the default opCmp
 effectively generates a random number for each object, and yields a
 comparison between those.  If this isn't abuse of overloaded operators,
 what is?

 All it would take, as far as I can tell, is to change TypeInfoClass to
 compare hash codes instead of calling opCmp and opEquals.  Lucky for us,
 this means that Object can still be sorted in an array and used as an AA
 key while making comparison operators raise a compile time error.
 Everybody wins.

 If nothing else, this would go a huge way toward disambiguating == and
 != from === and !==.  Using the former on most object references would
 be a compile error.

   -- andy
Aug 07 2004
prev sibling next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cf2cs6$1p49$1 digitaldaemon.com>, Ivan Senji says...

    int opCmp(Object o);
    int opEquals(Object o);
I tend to think about things in mathematical terms, and in math, everything can be compared for equality, but not for less than. That is, for all x in U (the Universal Set), x = x. Put more simply, everything equals itself. So an equality test makes sense for everything. However, less than and greater than /don't/ make sense for everything (in math). You cannot, for example, assign a truth value to the sentence "Wednesday is less than Sunday". It doesn't make sense. Nor does ((3+4i) < (4+3i)). Matrices, vectors, symmetry groups, functions, etc., etc., (the list is endless) do not impose a complete ordering on their elements, and therefore < and > are mathematically (rightly) undefined for those sets. In D, every class is derived from Object, so every class inherits opCmp() - and this is nonsense. Suppose I write a class Weekday, or a class Complex, or Vector, or Matrix. These classes should /not/ allow expressions like (x < y) to compile. So, from an inheritance point of view, the problem is not that opCmp() is bad, it's that it's not, actually, a property of /everything/. It's only a property of /some/ things. There's a basic principle of polymorphism. Derived classes are supposed to /add/ functions, not remove them. If class A has a function f(), then polymorphism says that class B (which derives from A) must also have a function f(). Similarly, if Object has a function opCmp(), then polymorphism says that Matrix (which derives from Object) must also have a function opCmp(). BUT - the elements of the set of all matrices are not ordered, so opCmp() makes no sense for a matrix. Walter advises us to override opCmp() in these cases, and to throw an exception in the implementation. That sucks. But more than that, this "solution" would prevent such objects (Matrices etc) from ever being stored in an associative array. That sucks even more. But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays don't need opCmp(). Honestly, they don't. Why not? - because the only thing AAs require is a sort criterion. That sort criterion does not have to be based on <. The current implementation of AAs is to sort based on (x < y), but then to have Object define < such that (x < y) is implemented as (&x < &y). So why not just have the sort criterion for AAs be (&x < &y) instead of (x < y)? If this were so, Object wouldn't need <, everything would still work, < and > would still be defined for those objects which /explicitly/ defined opCmp(), and everyone would be happy. Arcane Jill PS. Actually, I retract "everyone would be happy". (If only making everyone happy were that easy). It's still a workable suggestion though.
Aug 07 2004
next sibling parent "Stratus" <vdai spamnet.net> writes:
Another workable solution is to remove AA's from the language syntax
altogether; then the fundamental opCmp() requirement goes away. For all
their initially perceived value, the AA baggage affects the core language in
a number of detrimental ways. This would not be the case if AA's were a
(replaceable & extensible) library module.


"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cf3dst$28kb$1 digitaldaemon.com...
 In article <cf2cs6$1p49$1 digitaldaemon.com>, Ivan Senji says...

    int opCmp(Object o);
    int opEquals(Object o);
I tend to think about things in mathematical terms, and in math,
everything can
 be compared for equality, but not for less than. That is, for all x in U
(the
 Universal Set), x = x.

 Put more simply, everything equals itself. So an equality test makes sense
for
 everything.

 However, less than and greater than /don't/ make sense for everything (in
math).
 You cannot, for example, assign a truth value to the sentence "Wednesday
is less
 than Sunday". It doesn't make sense. Nor does ((3+4i) < (4+3i)). Matrices,
 vectors, symmetry groups, functions, etc., etc., (the list is endless) do
not
 impose a complete ordering on their elements, and therefore < and > are
 mathematically (rightly) undefined for those sets.

 In D, every class is derived from Object, so every class inherits
opCmp() - and
 this is nonsense. Suppose I write a class Weekday, or a class Complex, or
 Vector, or Matrix. These classes should /not/ allow expressions like (x <
y) to
 compile.

 So, from an inheritance point of view, the problem is not that opCmp() is
bad,
 it's that it's not, actually, a property of /everything/. It's only a
property
 of /some/ things.

 There's a basic principle of polymorphism. Derived classes are supposed to
/add/
 functions, not remove them. If class A has a function f(), then
polymorphism
 says that class B (which derives from A) must also have a function f().
 Similarly, if Object has a function opCmp(), then polymorphism says that
Matrix
 (which derives from Object) must also have a function opCmp(). BUT - the
 elements of the set of all matrices are not ordered, so opCmp() makes no
sense
 for a matrix.

 Walter advises us to override opCmp() in these cases, and to throw an
exception
 in the implementation. That sucks. But more than that, this "solution"
would
 prevent such objects (Matrices etc) from ever being stored in an
associative
 array. That sucks even more.

 But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays
don't
 need opCmp(). Honestly, they don't. Why not? - because the only thing AAs
 require is a sort criterion. That sort criterion does not have to be based
on <.
 The current implementation of AAs is to sort based on (x < y), but then to
have
 Object define < such that (x < y) is implemented as (&x < &y). So why not
just
 have the sort criterion for AAs be (&x < &y) instead of (x < y)? If this
were
 so, Object wouldn't need <, everything would still work, < and > would
still be
 defined for those objects which /explicitly/ defined opCmp(), and everyone
would
 be happy.

 Arcane Jill

 PS. Actually, I retract "everyone would be happy". (If only making
everyone
 happy were that easy). It's still a workable suggestion though.
Aug 07 2004
prev sibling next sibling parent Andy Friesen <andy ikagames.com> writes:
Arcane Jill wrote:
 In article <cf2cs6$1p49$1 digitaldaemon.com>, Ivan Senji says...
 
 
   int opCmp(Object o);
   int opEquals(Object o);
I tend to think about things in mathematical terms, and in math, everything can be compared for equality, but not for less than. That is, for all x in U (the Universal Set), x = x. Put more simply, everything equals itself. So an equality test makes sense for everything. However, less than and greater than /don't/ make sense for everything (in math). You cannot, for example, assign a truth value to the sentence "Wednesday is less than Sunday". It doesn't make sense. Nor does ((3+4i) < (4+3i)). Matrices, vectors, symmetry groups, functions, etc., etc., (the list is endless) do not impose a complete ordering on their elements, and therefore < and > are mathematically (rightly) undefined for those sets. In D, every class is derived from Object, so every class inherits opCmp() - and this is nonsense. Suppose I write a class Weekday, or a class Complex, or Vector, or Matrix. These classes should /not/ allow expressions like (x < y) to compile. So, from an inheritance point of view, the problem is not that opCmp() is bad, it's that it's not, actually, a property of /everything/. It's only a property of /some/ things. There's a basic principle of polymorphism. Derived classes are supposed to /add/ functions, not remove them. If class A has a function f(), then polymorphism says that class B (which derives from A) must also have a function f(). Similarly, if Object has a function opCmp(), then polymorphism says that Matrix (which derives from Object) must also have a function opCmp(). BUT - the elements of the set of all matrices are not ordered, so opCmp() makes no sense for a matrix. Walter advises us to override opCmp() in these cases, and to throw an exception in the implementation. That sucks. But more than that, this "solution" would prevent such objects (Matrices etc) from ever being stored in an associative array. That sucks even more. But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays don't need opCmp(). Honestly, they don't. Why not? - because the only thing AAs require is a sort criterion. That sort criterion does not have to be based on <. The current implementation of AAs is to sort based on (x < y), but then to have Object define < such that (x < y) is implemented as (&x < &y). So why not just have the sort criterion for AAs be (&x < &y) instead of (x < y)? If this were so, Object wouldn't need <, everything would still work, < and > would still be defined for those objects which /explicitly/ defined opCmp(), and everyone would be happy.
Right! But there's still two kinks. One is easy, the other is harder. The easy one is dealing with a potentially compacting garbage collector. &x < &y isn't so reliable if the collector decides to shuffle things around. As promised, though, this is easy: use x.toHash() < y.toHash(). The hard one is that arrays have that built-in sort property. Personally, I think we could just dump it and be done with it. Back before D grew the respectable template mechanism we know and love today, building this into the language made perfect sense, but now that we do, it's cruft. We don't need a built-in sort now that we can easily implement a good one using a template. Failing that, the compiler could always use do a search on the array element type to determine whether it has an opCmp method and generate code accordingly (potentially ugly to implement, but completely seamless to the programmer), or define a 'special' interface ('Comparable' or somesuch) and only provide sorting functionality to classes which implement it. -- andy
Aug 07 2004
prev sibling next sibling parent Lars Ivar Igesund <larsivar igesund.net> writes:
Arcane Jill wrote:

 There's a basic principle of polymorphism. Derived classes are supposed to
/add/
 functions, not remove them. 
Well, Eiffel (which seems to be the most complete OO-language I've seen) allow you to change *everything*, even removal. Neither Java nor C++ should define OOP nor polymorphism (IMHO), although I feel most comfortable with those two languages myself. Lars Ivar Igesund
Aug 07 2004
prev sibling next sibling parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cf3dst$28kb$1 digitaldaemon.com...
 In article <cf2cs6$1p49$1 digitaldaemon.com>, Ivan Senji says...

    int opCmp(Object o);
    int opEquals(Object o);
I tend to think about things in mathematical terms, and in math,
everything can
 be compared for equality, but not for less than. That is, for all x in U
(the
 Universal Set), x = x.

 Put more simply, everything equals itself. So an equality test makes sense
for
 everything.
Isn't this identity? x === x ? That is what we have a great seperate operator === and !== for. Why should we also have == be defined by default to do the same as ===. Ridicolous!
 However, less than and greater than /don't/ make sense for everything (in
math).
 You cannot, for example, assign a truth value to the sentence "Wednesday
is less
 than Sunday". It doesn't make sense. Nor does ((3+4i) < (4+3i)). Matrices,
 vectors, symmetry groups, functions, etc., etc., (the list is endless) do
not
 impose a complete ordering on their elements, and therefore < and > are
 mathematically (rightly) undefined for those sets.

 In D, every class is derived from Object, so every class inherits
opCmp() - and
 this is nonsense. Suppose I write a class Weekday, or a class Complex, or
 Vector, or Matrix. These classes should /not/ allow expressions like (x <
y) to
 compile.

 So, from an inheritance point of view, the problem is not that opCmp() is
bad,
 it's that it's not, actually, a property of /everything/. It's only a
property
 of /some/ things.
Great, now not only Andy Friesen agrees on this with me but even you (at least in some part) :)
 There's a basic principle of polymorphism. Derived classes are supposed to
/add/
 functions, not remove them. If class A has a function f(), then
polymorphism
 says that class B (which derives from A) must also have a function f().
 Similarly, if Object has a function opCmp(), then polymorphism says that
Matrix
 (which derives from Object) must also have a function opCmp(). BUT - the
 elements of the set of all matrices are not ordered, so opCmp() makes no
sense
 for a matrix.

 Walter advises us to override opCmp() in these cases, and to throw an
exception
 in the implementation. That sucks. But more than that, this "solution"
would
 prevent such objects (Matrices etc) from ever being stored in an
associative
 array. That sucks even more.

 But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays
don't
 need opCmp(). Honestly, they don't. Why not? - because the only thing AAs
 require is a sort criterion. That sort criterion does not have to be based
on <.
 The current implementation of AAs is to sort based on (x < y), but then to
have
 Object define < such that (x < y) is implemented as (&x < &y). So why not
just
 have the sort criterion for AAs be (&x < &y) instead of (x < y)? If this
were
 so, Object wouldn't need <, everything would still work, < and > would
still be
 defined for those objects which /explicitly/ defined opCmp(), and everyone
would
 be happy.
I would be perfectly happy with AAs requiring opCmp. But a compiler should be smart enough (and i Know that W could do this) to generate an error message if a class that doesn't define opCmp is used as a key. so class A{} int[A] xy; would generate: A cannot be a key of AA (missing opCmp) in line (someline) It is much better than being able to have "class A{}" as a key but with instances of A magically sorted by their adresses wich is ofcourse ridicolous(again)!
 Arcane Jill

 PS. Actually, I retract "everyone would be happy". (If only making
everyone
 happy were that easy). It's still a workable suggestion though.
Impossible! :) (i mean making everyone happy)
Aug 07 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Ivan Senji wrote:

<snip>
 Isn't this identity? x === x ?
 That is what we have a great seperate operator === and !== for.
 Why should we also have == be defined by default to do the same as ===.
 Ridicolous!
<snip> We rebutted this argument of yours a while back: http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/3419 and the chain of followups. Do you have a further defence of your position here? Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004
parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cf7kdt$hg5$1 digitaldaemon.com...
 Ivan Senji wrote:

 <snip>
 Isn't this identity? x === x ?
 That is what we have a great seperate operator === and !== for.
 Why should we also have == be defined by default to do the same as ===.
 Ridicolous!
<snip> We rebutted this argument of yours a while back: http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/3419 and the chain of followups. Do you have a further defence of your position here?
There isn't really much more i could say about this. Why does == have to work as === by default? I have never writen a code were two objects are equal when they are in the same place in memory (===), and when one object is less than an other when his adress is smaller. It IMO doesn't make any sence to use these comparisons in AAs, as i also wouldn't use these in a contanier. But it looks like i will have to give up on this for a while :)
 Stewart.

 --
 My e-mail is valid but not my primary mailbox.  Please keep replies on
 the 'group where everyone may benefit.
Aug 17 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Ivan Senji wrote:
<snip>
 There isn't really much more i could say about this. Why does == have 
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that. For example, suppose you have an auto class representing a lock on some resource. How would you define two of them being equal? Them locking the same resource? If the setup is such that each resource can only be locked once, then the lock object is equal only to itself. Suppose you have a GUI library. Two GUI control objects could be considered equal if they interface the same control. But if the library can only create controls, and not interface existing controls (such as those on a Windows dialog resource), and cannot create two objects interfacing the same control, then each control object is equal only to itself. For that matter, I can't think of any examples in which an object should not be equal to itself. Can you?
 and when one object is less
 than an other when his adress is smaller.
That's changing the subject....
 It IMO doesn't make any sence to use
 these comparisons in AAs, as i also wouldn't use these in a contanier.
Exactly. That's why it's been suggested plenty of times that Object.opCmp be got rid of, and AAs be reimplemented so that they will work without a comparator. Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on on the 'group where everyone may benefit.
Aug 22 2004
next sibling parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does == have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that. For example, suppose you have an auto class representing a lock on some resource. How would you define two of them being equal? Them locking the same resource? If the setup is such that each resource can only be locked once, then the lock object is equal only to itself. Suppose you have a GUI library. Two GUI control objects could be considered equal if they interface the same control. But if the library can only create controls, and not interface existing controls (such as those on a Windows dialog resource), and cannot create two objects interfacing the same control, then each control object is equal only to itself.
OK! These are good examples. But if i was writing that rasource locking stuff and i wanted to know if they lock the same resource i would use === to test if these locks are identical.
 For that matter, I can't think of any examples in which an object should
 not be equal to itself.  Can you?
Just hypothetically (hm!spelling:) an object whose state depends on processors clock would never be equal to itself because the two opEquals could not be called at the same moment. But still two objects of this type could be identical.
 and when one object is less
 than an other when his adress is smaller.
That's changing the subject....
No it isn't: i see a==b <-> a===b and a<b <-> &a<&b as part of the same bag of problems. That is one default behavior that i don't like.
 It IMO doesn't make any sence to use
 these comparisons in AAs, as i also wouldn't use these in a contanier.
Exactly. That's why it's been suggested plenty of times that Object.opCmp be got rid of, and AAs be reimplemented so that they will work without a comparator.
Yes! Get rid of them (and take Object.opEquals with it) :) I still can't understand why it can't work like in C++. If C++ had AAs it would when seeing int[SomeType] check if SomeType defines opCmp and opEquals and would report an error if they are undefined. And when D looks at int[SomeType] it knows that these exist (even if we didn't write them, D did). Why? I don't have anything against AAs needing opCmp and opEquals just as long as we are informed when we havent writen them. PS. (a==b) !=== (a===b) && (a<b) !== (&a<&b)
 Stewart.

 --
 My e-mail is valid but not my primary mailbox.  Please keep replies on
 on the 'group where everyone may benefit.
Aug 22 2004
parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Ivan Senji wrote:

<snip>
 No it isn't:
 i see
 a==b <-> a===b
 and
 a<b <-> &a<&b
 
 as part of the same bag of problems.
Yes, the same bag. But not the same problem. <snip>
 I don't have anything against AAs needing opCmp and opEquals
 just as long as we are informed when we havent writen them.
<snip> But what should it mean if opEquals hasn't been written? (a) that it's a class representing a one-to-one lock or GUI widget or something, in which an object is equal only to itself and the type could be used as an AA key? Makes sense to me. (b) that an object shouldn't even be equal to itself, preventing use in AAs? (c) that the class isn't finished? Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on on the 'group where everyone may benefit.
Aug 22 2004
prev sibling next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Stewart Gordon wrote:

 For that matter, I can't think of any examples in which an object should 
 not be equal to itself.  Can you?
NaN? :) The real problem is that opEquals and opCmp are defined for Object, which means that any object must be comparable to any other: class Apple { } class Orange { } Apple a = new Apple(); Orange o = new Orange(); bool b = a == o; -- andy
Aug 22 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cgar00$126b$1 digitaldaemon.com>, Andy Friesen says...

The real problem is that opEquals and opCmp are defined for Object, 
which means that any object must be comparable to any other:
I figured out a way to make == (etc) into a compile error. Here's an example: This is what you get when you try to compile it: eq.d(11): function opEquals () does not match argument types (A ) eq.d(11): Error: expected 0 arguments, not 1 eq.d(11): void does not have a boolean value Well, at least the file and line number are right! :) It works because of Walter's now famous rule that name resolution happens before signature matching. The zero-argument version of opEquals() will be found first, so the rule says it will be used, and the version in Object won't! But that function has the wrong argument list, so now D can't find a match at all. Voila - one compile-time error. And you can do the same thing for opCmp() too. Arcane Jill
Aug 22 2004
next sibling parent "antiAlias" <fu bar.com> writes:
Nice one! That's rather amusing, Jill <G>


"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cgat6b$1424$1 digitaldaemon.com...
 In article <cgar00$126b$1 digitaldaemon.com>, Andy Friesen says...

The real problem is that opEquals and opCmp are defined for Object,
which means that any object must be comparable to any other:
I figured out a way to make == (etc) into a compile error. Here's an
example:
















 This is what you get when you try to compile it:

 eq.d(11): function opEquals () does not match argument types (A )
 eq.d(11): Error: expected 0 arguments, not 1
 eq.d(11): void does not have a boolean value

 Well, at least the file and line number are right! :)

 It works because of Walter's now famous rule that name resolution happens
before
 signature matching. The zero-argument version of opEquals() will be found
first,
 so the rule says it will be used, and the version in Object won't! But
that
 function has the wrong argument list, so now D can't find a match at all.
Voila
 - one compile-time error. And you can do the same thing for opCmp() too.

 Arcane Jill
Aug 22 2004
prev sibling next sibling parent Andy Friesen <andy ikagames.com> writes:
Arcane Jill wrote:
 In article <cgar00$126b$1 digitaldaemon.com>, Andy Friesen says...
 
 
The real problem is that opEquals and opCmp are defined for Object, 
which means that any object must be comparable to any other:
I figured out a way to make == (etc) into a compile error. Here's an example: This is what you get when you try to compile it: eq.d(11): function opEquals () does not match argument types (A ) eq.d(11): Error: expected 0 arguments, not 1 eq.d(11): void does not have a boolean value Well, at least the file and line number are right! :) It works because of Walter's now famous rule that name resolution happens before signature matching. The zero-argument version of opEquals() will be found first, so the rule says it will be used, and the version in Object won't! But that function has the wrong argument list, so now D can't find a match at all. Voila - one compile-time error. And you can do the same thing for opCmp() too.
Awesome! How about this one? :) class Rational { int numerator, denominator; int opCmp(Rational rhs) { ... } } Rational[] r = ...; r.sort; // sorts by hash code! -- andy
Aug 22 2004
prev sibling next sibling parent "Matthew" <admin.hat stlsoft.dot.org> writes:
"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cgat6b$1424$1 digitaldaemon.com...
 In article <cgar00$126b$1 digitaldaemon.com>, Andy Friesen says...

The real problem is that opEquals and opCmp are defined for Object,
which means that any object must be comparable to any other:
I figured out a way to make == (etc) into a compile error. Here's an example:
No. Stupid. (it, not you <g>) All one needs to is cast an A to Object, and then it's visible again.












 This is what you get when you try to compile it:

 eq.d(11): function opEquals () does not match argument types (A )
 eq.d(11): Error: expected 0 arguments, not 1
 eq.d(11): void does not have a boolean value

 Well, at least the file and line number are right! :)

 It works because of Walter's now famous rule that name resolution happens
before
 signature matching. The zero-argument version of opEquals() will be found
first,
 so the rule says it will be used, and the version in Object won't! But that
 function has the wrong argument list, so now D can't find a match at all. Voila
 - one compile-time error. And you can do the same thing for opCmp() too.

 Arcane Jill
Aug 22 2004
prev sibling parent Daniel Horn <hellcatv hotmail.com> writes:
brilliant! Score one for compile-time type checking!

down with runtime checks ;-)

Arcane Jill wrote:
 In article <cgar00$126b$1 digitaldaemon.com>, Andy Friesen says...
 
 
The real problem is that opEquals and opCmp are defined for Object, 
which means that any object must be comparable to any other:
I figured out a way to make == (etc) into a compile error. Here's an example: This is what you get when you try to compile it: eq.d(11): function opEquals () does not match argument types (A ) eq.d(11): Error: expected 0 arguments, not 1 eq.d(11): void does not have a boolean value Well, at least the file and line number are right! :) It works because of Walter's now famous rule that name resolution happens before signature matching. The zero-argument version of opEquals() will be found first, so the rule says it will be used, and the version in Object won't! But that function has the wrong argument list, so now D can't find a match at all. Voila - one compile-time error. And you can do the same thing for opCmp() too. Arcane Jill
Aug 22 2004
prev sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does == have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than any kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on some
 resource.  How would you define two of them being equal?  Them locking
 the same resource?  If the setup is such that each resource can only be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any way? I can think of no reason why one would ever need to so do. Indeed of all such things I've been writing and using over the last 10+ years in C++, I've never had cause to do such a thing. C++ misstepped badly in allowing copying (initialisation and assignment) by default for classes, where it should have only preserved that for structs. But it wisely does not provide any default value comparison for class types. We know why D has done such a thing - built-in AAs, and array.sort - but it's a serious mistake, and is a blight on the language. If a given semantic makes sense for only a subset of all types, why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the library
 can only create controls, and not interface existing controls (such as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only to
 itself.
*If* that's the value comparison you define for two GUI control objects, then you should be able to define that. No-one's saying otherwise, just that the default === === == is jejune and harmful.
 For that matter, I can't think of any examples in which an object should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate value comparison. That's a practically infinite set of things
Aug 22 2004
parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
Thank you! I (naturally) agree with everything you said because
you agree that something that i am babling about for some time
now makes sence. Not that this means that things will change now
but atleast they have a slightly better chanse to change. :)

"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does == have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than any
kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on some
 resource.  How would you define two of them being equal?  Them locking
 the same resource?  If the setup is such that each resource can only be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any way?
I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using over
the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and assignment)
by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's a
serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all types,
why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the library
 can only create controls, and not interface existing controls (such as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only to
 itself.
*If* that's the value comparison you define for two GUI control objects,
then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune and
harmful.
 For that matter, I can't think of any examples in which an object should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate value
comparison. That's a practically infinite set
 of things
Aug 22 2004
parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
 Thank you! I (naturally) agree with everything you said because
 you agree that something that i am babling about for some time
 now makes sence. Not that this means that things will change now
 but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's (increasingly shaky, IMO) preference for ease of compiler implementation over language "complexity". I still like D a lot, but I increasingly see several of its "unique features" as a bad idea. Because we're still pre 1.0, I'd be voting for weilding the scythe at this stage, before we're irretrievably stuck with some very ugly warts. By no means an exhaustive list: - drop the bit type - have a serious discussion about dropping the AA built-in type - make the return type from opApply delegates be an enum with a single public "Ok" member - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals() - give pseudo methods to built-in types, e.g. toString() - stop shoehorning C-struct and stack-based class functionality into the single D class-key "struct". This is a marriage made in hell! The fact that I've identified a very real problem with bit arrays in this regard is, IMO, only the tip of the ice-berg. Let's let stack-based classes have a new class-key, to go with the fact that they're a new concept. I don't care if it's something as hideous as "stackclass", "sclass", "stass", or even "sklass". Any of those naming yucks are irrelevant compared to the current conceptual nightmare. (A side effect of this is that sklasses could have ctors, and maybe even dtors!!) - address the DLL-GC issue. This is a *HUGE* issue with respect to the future appeal of the language. Without D seamlessly supporting DLLs in (almost) all of their potential class/function C/D/D/C guises, it'll be still born. - allow recursive templates - have a clear import/name-resolution policy. (Note: I cannot really comment on this, since I've not (yet) dived into this hairy beast, but I respect the people who've been banging on about it for some time.) Some others I'd also like to see, but which I'm not going to bother arguing for: - make bool => int - have incomplete switch cases be flagged as compile-time errors If anyone takes this as evidence why D will _never_ take off: don't. But I do hope Walter is influenced to address these issues, as I believe they are going to be very serious areas for criticism if ratified into 1.0. A couple of examples: - it seems to me that opCmp() and/or opEquals() are in Object to support AAs, and array.sort. They are very troublesome theoretically, and are beginning to be practically troublesome for some. Since AAs are getting increasing criticism, aren't we keeping a big wart in to support a small wart? If so, it's time to get out the liquid paper, I think. And why not just provide a template sort() function for arrays in Phobos? - as I've opined recently, the conceptual attractiveness of the bit type is specious. Some of us have had this opinion for a long time, some more recently, but I'd hazard a guess that now a large portion of D's devotees think it's now more trouble than it's worth. I still strongly assert that my recent discussion of bit arrays in D structs is compelling, and I"ve not yet heard any counter to it. Here's a challenge to any remaining holders of the "keep bit" position: can anyone identify to me the compelling case for having the bit type, i.e. the things that cannot be adequalty covered by a library. Anyway, to repeat myself. I am optimisitic about D, but I believe me *must* have a serious refactoring of the language *before* 1.0, and I think we're getting close to the appropriate time, before Walter's time is completely consumed on issues that could disappear just by a collective belt-tightening. btw, I've a shocking case of the flu at the mo, so if any of this reads as insane, it might well be. :-) William Smythe, ready with Scythe
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does == have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than any
kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on some
 resource.  How would you define two of them being equal?  Them locking
 the same resource?  If the setup is such that each resource can only be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any way?
I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using over
the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and assignment)
by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's a
serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all types,
why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the library
 can only create controls, and not interface existing controls (such as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only to
 itself.
*If* that's the value comparison you define for two GUI control objects,
then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune and
harmful.
 For that matter, I can't think of any examples in which an object should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate value
comparison. That's a practically infinite set
 of things
Aug 22 2004
next sibling parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cgc4fc$20kf$1 digitaldaemon.com...
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
 Thank you! I (naturally) agree with everything you said because
 you agree that something that i am babling about for some time
 now makes sence. Not that this means that things will change now
 but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's
(increasingly shaky, IMO) preference for ease of
 compiler implementation over language "complexity".
I don't think i do. Everyone can have some influence if what they are saying makes sence. But afterall D is Walter's baby so he is allowed to take some time to think about things. For more than a couple of times i have seen things suddenly change after a lot of people complained.
 I still like D a lot, but I increasingly see several of its "unique
features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts. (I had to look up scythe in the dictionary) Yes
 By no means an exhaustive list:

 - drop the bit type
I was in favour of keeing bit and adding normal bool, but bit really is only useful for bit[] wich could be very nicelly implemented in a library
 - have a serious discussion about dropping the AA built-in type
Don't drop it! (It would be a shame because it can be useful (for example: quick and dirty implementation of set)) but fix it!
 - make the return type from opApply delegates be an enum with a single
public "Ok" member Agree
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
!!
 - give pseudo methods to built-in types, e.g. toString()
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell!
I'll yust believe you on this one
 The fact that I've identified a very real problem with bit arrays in this
regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with
the fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or
even "sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the
future appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential
class/function C/D/D/C guises, it'll be still born.
 - allow recursive templates
 - have a clear import/name-resolution policy. (Note: I cannot really
comment on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it
for some time.)
 Some others I'd also like to see, but which I'm not going to bother
arguing for:
 - make bool => int
 - have incomplete switch cases be flagged as compile-time errors


 If anyone takes this as evidence why D will _never_ take off: don't. But I
do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for criticism
if ratified into 1.0. A couple of examples:
 - it seems to me that opCmp() and/or opEquals() are in Object to support
AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically troublesome for some. Since AAs are
getting increasing criticism, aren't we keeping
 a big wart in to support a small wart? If so, it's time to get out the
liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?

 - as I've opined recently, the conceptual attractiveness of the bit type
is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a
large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent discussion
of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that
cannot be adequalty covered by a library.
 Anyway, to repeat myself. I am optimisitic about D, but I believe me
*must* have a serious refactoring of the language
 *before* 1.0, and I think we're getting close to the appropriate time,
before Walter's time is completely consumed on
 issues that could disappear just by a collective belt-tightening.
I am also very optimistic (as are most of us here probabbly or we wouldn't be using this language for such a long time now) Walter is going to fix things :)
 btw, I've a shocking case of the flu at the mo, so if any of this reads as
insane, it might well be. :-) I read nothing crazy...
 William Smythe, ready with Scythe
...until now :)
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does ==
have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than
any
 kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on
some
 resource.  How would you define two of them being equal?  Them
locking
 the same resource?  If the setup is such that each resource can only
be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any
way?
 I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using
over
 the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and
assignment)
 by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's
a
 serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all
types,
 why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the
library
 can only create controls, and not interface existing controls (such
as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only
to
 itself.
*If* that's the value comparison you define for two GUI control
objects,
 then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune
and
 harmful.
 For that matter, I can't think of any examples in which an object
should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate
value
 comparison. That's a practically infinite set
 of things
Aug 23 2004
prev sibling next sibling parent Daniel Horn <hellcatv hotmail.com> writes:
Matthew wrote:
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
 
Thank you! I (naturally) agree with everything you said because
you agree that something that i am babling about for some time
now makes sence. Not that this means that things will change now
but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's (increasingly shaky, IMO) preference for ease of compiler implementation over language "complexity". I still like D a lot, but I increasingly see several of its "unique features" as a bad idea. Because we're still pre 1.0, I'd be voting for weilding the scythe at this stage, before we're irretrievably stuck with some very ugly warts.
agree...there are huge swaths of good ideas, but the bad features keep many of the good features at bay but introducing unnecessary run-time problems.
 By no means an exhaustive list:
 
 - drop the bit type
agree--it's only useful as vector<bool> which can be a specialization in DTL. the number of people who use this is I'm sure ironically small. I used it back when I had 640K of memory to save space for a "has this part of the framebuffer changed" pool. it made the system grind to a halt wrt speed ;-), so I ate the memory cost of a char-per. Perhaps if they were built in, it would have gone faster, but nowadays I'd use vector<bool> to try it
 - have a serious discussion about dropping the AA built-in type
agree..that belongs in a library--perhaps in phobos, but a library nevertheless. If it can't be written efficiently *in* the language then perhaps the language needs to change
 - make the return type from opApply delegates be an enum with a single public
"Ok" member
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
got to get rid of these-- if I have an opEquals() (typo, no argument) function in a subclass, then it depends whether or not I've cast my subclass to object which one it'll call--yuck!
 - give pseudo methods to built-in types, e.g. toString()
sure...the whole std.string.toString(...) is clunky and leads me to alias tos(..) everywhere
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell! The fact that I've identified a very real problem with bit arrays in
this regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with the
fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or even
"sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
agree...numeric types as stack-based items is important...classes force people into using the heap--and for librarys people need to be as generic as possible, i.e. using the stack unless new MyStruct is used....
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the future
appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential class/function
C/D/D/C guises, it'll be still born.
 - allow recursive templates
the metaprogramming language must be turing complete, yes! (at least with some sort of recusion limit set by the user)
 - have a clear import/name-resolution policy. (Note: I cannot really comment
on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it for
some time.)
 
 Some others I'd also like to see, but which I'm not going to bother arguing
for:
I agree that the way it is now makes it utterly fragile to import new libraries which may cause completely unrelated conflicts with other libraries.
 
 - make bool => int
typesafety wrt bools is also a key issue here I'd think...but arguing is wasteful of time
 - have incomplete switch cases be flagged as compile-time errors
anything potentially incorrect that *can* be a compile time error *should* be one :-) fixing all these issues will help D take off... I think adding more compile time sort of checks is also important--things like manditory override keywords and other ways to help a willing programmer prevent self-foot-shooting ;-) --Daniel
 
 
 If anyone takes this as evidence why D will _never_ take off: don't. But I do
hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for criticism if
ratified into 1.0. A couple of examples:
 
 - it seems to me that opCmp() and/or opEquals() are in Object to support AAs,
and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically troublesome for some. Since AAs are
getting increasing criticism, aren't we keeping
 a big wart in to support a small wart? If so, it's time to get out the liquid
paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?
 
 - as I've opined recently, the conceptual attractiveness of the bit type is
specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a large
portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent discussion of
bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
 
 Anyway, to repeat myself. I am optimisitic about D, but I believe me *must*
have a serious refactoring of the language
 *before* 1.0, and I think we're getting close to the appropriate time, before
Walter's time is completely consumed on
 issues that could disappear just by a collective belt-tightening.
 
 btw, I've a shocking case of the flu at the mo, so if any of this reads as
insane, it might well be. :-)
 
 William Smythe, ready with Scythe
 
 
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cgblol$1mt9$1 digitaldaemon.com...

"Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
Ivan Senji wrote:
<snip>

There isn't really much more i could say about this. Why does == have
to work as === by default?   I have never writen a code were two
objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than any
kind of software engineering axiom.
For example, suppose you have an auto class representing a lock on some
resource.  How would you define two of them being equal?  Them locking
the same resource?  If the setup is such that each resource can only be
locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any way?
I can think of no reason why one would ever
need to so do. Indeed of all such things I've been writing and using over
the last 10+ years in C++, I've never had
cause to do such a thing.

C++ misstepped badly in allowing copying (initialisation and assignment)
by default for classes, where it should have
only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
why D has done such a thing - built-in AAs, and array.sort - but it's a
serious mistake, and is a blight on the
language. If a given semantic makes sense for only a subset of all types,
why provide it for all? That's MFC!
Suppose you have a GUI library.  Two GUI control objects could be
considered equal if they interface the same control.  But if the library
can only create controls, and not interface existing controls (such as
those on a Windows dialog resource), and cannot create two objects
interfacing the same control, then each control object is equal only to
itself.
*If* that's the value comparison you define for two GUI control objects,
then you should be able to define that.
No-one's saying otherwise, just that the default === === == is jejune and
harmful.
For that matter, I can't think of any examples in which an object should
not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate value
comparison. That's a practically infinite set
of things
Aug 23 2004
prev sibling next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Matthew wrote:

 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell! The fact that I've identified a very real problem with bit arrays in
this regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with the
fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or even
"sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
I think we already (almost) have the tools for this: auto classes. The problem is that they still require that 'new' cruft. Structs don't, and we're all lazy oafs, so here we are. :) This deserves emphasis:
 Here's a challenge to any remaining holders of the "keep bit" position: can
anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
 
 Anyway, to repeat myself. I am optimisitic about D, but I believe me *must*
have a serious refactoring of the language
 *before* 1.0, and I think we're getting close to the appropriate time, before
Walter's time is completely consumed on
 issues that could disappear just by a collective belt-tightening.
I agree. Only now are we starting to get a feel for what the completed language is turning into, and we should feel lucky that it's happening before 1.0, when there's still a chance to fix things. Fixing the conceptual bugs is a lot harder than fixing the implementation bugs once you have backwards compatibility to worry about. :) I forget who said this, but it seems fitting: "It's done when there isn't anything you could possibly remove" -- andy
Aug 23 2004
parent Daniel Horn <hellcatv hotmail.com> writes:
but I wouldn't use an auto class to make a numeric type... numeric types 
need to be in containers and need to sit in the heap if people wish to 
keep them there... auto is merely to tell the compiler that resources 
need to be freed as soon as the class goes out of scope...it doesn't 
specify where the "class" itself comes from

Andy Friesen wrote:
 
 I think we already (almost) have the tools for this: auto classes.
 
 The problem is that they still require that 'new' cruft.  Structs don't, 
 and we're all lazy oafs, so here we are. :)
 
Aug 23 2004
prev sibling next sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cgc4fc$20kf$1 digitaldaemon.com>, Matthew says...
Because we're still pre
1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts.
Okay, let's call this a poll and see where the votes go. I do hope Walter counts up the votes.
By no means an exhaustive list:

- drop the bit type
At first, I liked the bit type. Then I learned of some problems with bit arrays and wrote a workaround for some special cases, but problems remained with (for example) expressions like &myBitArray[3], or passing myBitArray[3] to a function as an out or inout parameter. Currently, I'd say I could live with ditching the bit /if/ a bitarray type existed to replace it. I have functions in my (pending) random number library which return a bit[]. I do need bit /arrays/, I just don't need bit. So, if the suggestion had been:
- drop the bit type and replace it with a bitarray type
then I would have voted yes. Without the replacement, I'm not sure. Furthermore, the only (note, *ONLY*) reason I don't want to argue about bool at this time, is because I know that doing so would be a complete waste of time, because this is one area in which Walter doesn't appear to listen to majority opinion. But "bool" must be aliased to /something/, since libraries need, on occasion, to return "true" or "false" to calling applications. So if we really can't have a typesafe bool until the distant future, then for now I'd prefer to see it aliased to int (rather than byte).
- have a serious discussion about dropping the AA built-in type
We don't need to vote on whether or not to have a serious discussion. Just start the thread - I'll join in.
- make the return type from opApply delegates be an enum with a single public
"Ok" member
Sorry, you've lost me. If there is only /one/ possible return type, then why not just return void? Surely, you only need a return value if there are at least two possibilities for it?
- remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
Removing opCmp() gets a loud yes vote from me. I'm still not convinced about opEquals() though. In math (and yes, I know D is not math), an equals test always makes sense - even if you're comparing apples and oranges (but the answer is always false in such cases). For example, if N is the set of non-negative integers, and V is the set of three dimensional vectors, (two sets which don't intersect), then I can still write: and it would be a true statement (as opposed to a meaningless statement). For most things, you want == to return false, but you always want an object to be equal to itself. The current default gives this behaviour in /most/ cases, and for that reason I think it should stand (and simply be overridden where that behaviour is different from the default). But I'm with you on opCmp().
- give pseudo methods to built-in types, e.g. toString()
Aye
- stop shoehorning C-struct and stack-based class functionality into the single
D class-key "struct".
At first, I wasn't too keen on this, but then I realized that mostly I was objecting because I didn't like the suggested names for things. But since you said "Any of those naming yucks are irrelevant", I guess you're saying you only want the feature, not necessarily any particular syntax. That being so, I'll vote yes, and here are my suggested names for things: * struct -- a D-style structure, as now * union -- a D-style union, as now * strict struct -- a struct which is only permitted to contain primitive types,
                   strict stucts and strict unions
* strict union -- a union which is only permitted to contain primitive types,
                   strict stucts and strict unions
The "strict" variants are your C-style structs/unions.
(A side effect of this is that sklasses could have ctors, and
maybe even dtors!!)
No, I think there are some very real issues with D-structs having D-structors. I think we'll have to be content with constructors only for a while. (When are we going to get those struct constructors, by the way?)
- address the DLL-GC issue. This is a *HUGE* issue with respect to the future
appeal of the language. Without D
seamlessly supporting DLLs in (almost) all of their potential class/function
C/D/D/C guises, it'll be still born.
Aye.
- allow recursive templates
Aye. And member function templates. And type deduction.
- have a clear import/name-resolution policy. (Note: I cannot really comment on
this, since I've not (yet) dived into
this hairy beast, but I respect the people who've been banging on about it for
some time.)
Aye. In particular, it should /not/ be an error for a module (d source file) and a package (directory) to have the same name in the same place. This is a /horrible/ wart.
- make bool => int
I vote for make bool => bool, a distinct type. But I know Walter will ignore that. bool => int would be my second preference.
- have incomplete switch cases be flagged as compile-time errors
Aye
Here's a challenge to any remaining holders of the "keep bit" position: can
anyone
identify to
me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
That's not a fair question. "bit" type is the status quo. One does not need a "compelling case" to maintain the status quo. Rather, the status quo remains unless there is a compelling case to change it. Personally, I don't think there is a compelling case to keep the bit, but I'd expect to need a stronger argument than that to ditch it. Come to think of it, I don't think there's a compelling case for keeping char or wchar either. Let's ditch them both, then we can rename "dchar" as "char" and use /one/ character type throughout, with appropriate transcoding relegated to library routines. Just /imagine/ how much that would simplify D string handling. No more toUTF8(); no more cast(dchar[])"hello"; - just the obvious representation: What more could you want? D's multiple char types are a bad idea because they don't store characters, they store "code units" (UTF-xx fragments), and trying to explain this to everyone is getting to be more trouble than it's worth. Once library-based transcoding is in place, with the UTF family fully supported, we actually won't need char or wchar any more. (Sorry, that digression should probably have been in a separate thread). Arcane Jill
Aug 23 2004
parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:cgcfba$284k$1 digitaldaemon.com...

- have a serious discussion about dropping the AA built-in type
We don't need to vote on whether or not to have a serious discussion. Just start the thread - I'll join in.
I was lazily hoping to prompt arguments from others who have stronger (and better informed) opinions on the subject than I
- make the return type from opApply delegates be an enum with a single public
"Ok" member
Sorry, you've lost me. If there is only /one/ possible return type, then why not just return void? Surely, you only need a return value if there are at least two possibilities for it?
The foreach delegate, created by the compiler, can return 0 to indicate "continue enumeration" to opApply(), or it can return one of several other integer values to indicate "break", "goto" and other such things. The author of an opApply() *MUST* only either return 0, or retain unaltered the non-0 value returned by the delegate. But since the type is int there's nothing syntactically helping them do so. If we had an enum with only one public value, say Ok, or ContinueEnumeration, coders would have to resort to casting to break the foreach mechanism, whereas at the moment all they need do is any integral operation. Naturally, the compiler would use other values, but we don't care about what they are, since they're hidden in the foreach/delegate mechanisms.
- stop shoehorning C-struct and stack-based class functionality into the single
D class-key "struct".
At first, I wasn't too keen on this, but then I realized that mostly I was objecting because I didn't like the suggested names for things. But since you said "Any of those naming yucks are irrelevant", I guess you're saying you only want the feature, not necessarily any particular syntax. That being so, I'll vote yes, and here are my suggested names for things: * struct -- a D-style structure, as now * union -- a D-style union, as now * strict struct -- a struct which is only permitted to contain primitive types,
                   strict stucts and strict unions
* strict union -- a union which is only permitted to contain primitive types,
                   strict stucts and strict unions
The "strict" variants are your C-style structs/unions.
I don't like the two keyword. How about "cstruct" and "cunion"?
(A side effect of this is that sklasses could have ctors, and
maybe even dtors!!)
No, I think there are some very real issues with D-structs having D-structors. I think we'll have to be content with constructors only for a while. (When are we going to get those struct constructors, by the way?)
Sure. I'm not in the least expert about D structs, so I take your word for that.
 Aye.

 And member function templates.
We have these, although there are problems with them.
 And type deduction.
Here's a challenge to any remaining holders of the "keep bit" position: can
anyone
identify to
me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
That's not a fair question. "bit" type is the status quo. One does not need a "compelling case" to maintain the status quo. Rather, the status quo remains unless there is a compelling case to change it. Personally, I don't think there is a compelling case to keep the bit, but I'd expect to need a stronger argument than that to ditch it.
Reasonable
 Come to think of it, I don't think there's a compelling case for keeping char
or
 wchar either. Let's ditch them both, then we can rename "dchar" as "char" and
 use /one/ character type throughout, with appropriate transcoding relegated to
 library routines. Just /imagine/ how much that would simplify D string
handling.
 No more toUTF8(); no more cast(dchar[])"hello"; - just the obvious
 representation:



 What more could you want?

 D's multiple char types are a bad idea because they don't store characters,
they
 store "code units" (UTF-xx fragments), and trying to explain this to everyone
is
 getting to be more trouble than it's worth. Once library-based transcoding is
in
 place, with the UTF family fully supported, we actually won't need char or
wchar
 any more.

 (Sorry, that digression should probably have been in a separate thread).
No. It was apposite. And also thrilling. Let's have only one char type, if indeed it really is possible. That'd be marvellous. (I'm one of those completely baffled by D's internationalisation mechanisms!)
Aug 23 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cgdt7a$267$1 digitaldaemon.com>, Matthew says...

- have a serious discussion about dropping the AA built-in type
We don't need to vote on whether or not to have a serious discussion. Just start the thread - I'll join in.
I was lazily hoping to prompt arguments from others who have stronger (and better informed) opinions on the subject than I
Ah, well that's probably not I. I am very familiar with /using/ C++'s STL containers, but I have never /written/ a decent container class. (I do know how they work though, so I'm kind of half-informed).
 Surely, you only need a return value if there are at least two
 possibilities for it?
The foreach delegate, created by the compiler, can return 0 to indicate "continue enumeration" to opApply(), or it can return one of several other integer values to indicate "break", "goto" and other such things. The author of an opApply() *MUST* only either return 0, or retain unaltered the non-0 value returned by the delegate. But since the type is int there's nothing syntactically helping them do so. If we had an enum with only one public value, say Ok, or ContinueEnumeration, coders would have to resort to casting to break the foreach mechanism, whereas at the moment all they need do is any integral operation. Naturally, the compiler would use other values, but we don't care about what they are, since they're hidden in the foreach/delegate mechanisms.
So in other words there are TWO possible returns. Why can't they be true and false then? I'm not arguing with you, just trying to understand, and keep things simple. (Of course, I understand that bools are less typesafe than enums. It's something that I've complained about for ages. But if you want to use enums to get around the lack of type-safety in bools then that merely strengthens the case for typesafe bools.)
 The "strict" variants are your C-style structs/unions.
I don't like the two keyword. How about "cstruct" and "cunion"?
Okay. Suits me. Arcane Jill
Aug 24 2004
next sibling parent "Carlos Santander B." <carlos8294 msn.com> writes:
"Arcane Jill" <Arcane_member pathlink.com> escribió en el mensaje
news:cgf74s$ogc$1 digitaldaemon.com
| In article <cgdt7a$267$1 digitaldaemon.com>, Matthew says...
|| The foreach delegate, created by the compiler, can return 0 to indicate
"continue
|| enumeration" to opApply(), or it can return one of several other integer
values
|| to indicate "break", "goto" and other such things. The author of an opApply()
|| *MUST* only either return 0, or retain unaltered the non-0 value returned by
the
|| delegate. But since the type is int there's nothing syntactically helping
them do
|| so. If we had an enum with only one public value, say Ok, or
ContinueEnumeration,
|| coders would have to resort to casting to break the foreach mechanism,
whereas at
|| the moment all they need do is any integral operation. Naturally, the
compiler
|| would use other values, but we don't care about what they are, since they're
|| hidden in the foreach/delegate mechanisms.
|
| So in other words there are TWO possible returns. Why can't they be true and
| false then? I'm not arguing with you, just trying to understand, and keep
things
| simple.
|

Not two: several, as Matthew stated. We don't know which values are those, and I
don't think we have to. So a bool wouldn't suffice.

-----------------------
Carlos Santander Bernal
Aug 24 2004
prev sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Surely, you only need a return value if there are at least two
 possibilities for it?
The foreach delegate, created by the compiler, can return 0 to indicate "continue enumeration" to opApply(), or it
can
return one of several other integer values to indicate "break", "goto" and
other such things. The author of an
opApply()
*MUST* only either return 0, or retain unaltered the non-0 value returned by
the delegate. But since the type is int
there's nothing syntactically helping them do so. If we had an enum with only
one public value, say Ok, or
ContinueEnumeration, coders would have to resort to casting to break the
foreach mechanism, whereas at the moment all
they need do is any integral operation. Naturally, the compiler would use other
values, but we don't care about what
they are, since they're hidden in the foreach/delegate mechanisms.
So in other words there are TWO possible returns. Why can't they be true and false then? I'm not arguing with you, just trying to understand, and keep things simple.
No. There are many returns. But the opApply() implementator may only know the following of the value returned from the delegate: - the value is zero. Carry on enumerating - the value is non-zero. This value must be returned *UNMODIFIED* to the caller of opApply. This value is the communcation from the compiler-generated delegate to the compiler-generated foreach code. The important aspect of this is that if the opApply() code (accidentally) modifies this value, then the whole thing'll drop like a cow in an abertoire. Since there is *NO NEED* for the opApply() to ever know anything about this value, other than the fact that it is non-zero, I've been requesting that it be changed to a single-valued enum, as in enum OpApplyRetVal { OK = 0 }; for a _long_ time. That way, the only way that an error can occur is if the opApply() author _deliberately_ tries to screw it. (Inside the compiler-created infrastructure, Walter'll still use it as an int, of course.) This is something that would cost nothing performance-wise, would be easy to do in the compiler, has absolutely no bad points to users of D, and offers a significant improvement in robustness. I've never understood why Walter considers this too trivial to be bothered with, and I still question that. Just because no-one's screwed opApply() - to our knowledge, anyway - does not mean that no-one will in the future. IMO, it's pretty much guaranteed. And when one considers that this is _more_ likely the _more_ complex the opApply() is, the harder it will be for the hapless author to spot that they've done something wrong. They'll just think that D's foreach mechanism is flaky shit, and, in a funny kind of way, they'll be right!
 The "strict" variants are your C-style structs/unions.
I don't like the two keyword. How about "cstruct" and "cunion"?
Let's go for it then, eh? Walter, cstruct & cunion??
Aug 24 2004
prev sibling next sibling parent reply Dave <Dave_member pathlink.com> writes:
Ok, so I'm a newbie to the language, but I think that makes it even more 
significant that I can immediately recognize and agree on four issues
brought up by people a lot smarter than me who've used the language a lot
longer <g>.

Based on the criteria of usefulness, ease of compiler implementation and
performance, I think built-in AA's, the bit type and 'bool as bit' could be
dropped. I also strongly agree that a stack based class needs to be added.

- Built-in AA's. I originally thought the idea was cool, but they are not
performant and already they are seemingly being 'replaced' by library
functionality (that in itself is pretty damning). The clincher here is that
they also seem to force other things on the language that perhaps should not
be there. If AA's and the adverse semantics they currently require for D can
be fixed then I think they would still be a great addition.

- Bit type. Seems cool but in reality may only end up as a 'nice to
have for small jobs' type of thing that is better implemented through
libraries anyhow. Also I think it makes the language itself harder to
implement. Take a look at gc.d for example. About 1/5 of that code is
dedicated just to bit arrays. I'd imagine it adds a fair amount to the
compiler implementation as well.

- bool as bit. For this I have to ask: Why?!? Much slower than byte or word
and I really don't see any real practical advantage to it, at least at this
point in my D education.

- Stack based class. IMHO, this is perhaps the most important performance
advantage that C++ has over Java from a pure computer science point of view.
And evidently in the 'real world' it can make a heck of a difference
because C++ is surviving, some would say thriving, for OOP programming in a
world where Java is widely recognized as a better pure-OOP language (Much of
the currently relevant 'Java is SLOW' sentiment is related to this one
thing). It would be really, really useful to have Ctors for some sort of
very efficient object.

I think D should really, really concentrate on 1) fixing the weaknesses of

functionality only if it doesn't impede 1) & 2) _or_ add weaknesses to D
that may facilitate a need for 'E' ;)

- Dave

Matthew wrote:

 
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
 news:cgc1s6$1vdk$1 digitaldaemon.com...
 Thank you! I (naturally) agree with everything you said because
 you agree that something that i am babling about for some time
 now makes sence. Not that this means that things will change now
 but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's (increasingly shaky, IMO) preference for ease of compiler implementation over language "complexity". I still like D a lot, but I increasingly see several of its "unique features" as a bad idea. Because we're still pre 1.0, I'd be voting for weilding the scythe at this stage, before we're irretrievably stuck with some very ugly warts. By no means an exhaustive list: - drop the bit type - have a serious discussion about dropping the AA built-in type - make the return type from opApply delegates be an enum with a single public "Ok" member - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals() - give pseudo methods to built-in types, e.g. toString() - stop shoehorning C-struct and stack-based class functionality into the single D class-key "struct". This is a marriage made in hell! The fact that I've identified a very real problem with bit arrays in this regard is, IMO, only the tip of the ice-berg. Let's let stack-based classes have a new class-key, to go with the fact that they're a new concept. I don't care if it's something as hideous as "stackclass", "sclass", "stass", or even "sklass". Any of those naming yucks are irrelevant compared to the current conceptual nightmare. (A side effect of this is that sklasses could have ctors, and maybe even dtors!!) - address the DLL-GC issue. This is a *HUGE* issue with respect to the future appeal of the language. Without D seamlessly supporting DLLs in (almost) all of their potential class/function C/D/D/C guises, it'll be still born. - allow recursive templates - have a clear import/name-resolution policy. (Note: I cannot really comment on this, since I've not (yet) dived into this hairy beast, but I respect the people who've been banging on about it for some time.) Some others I'd also like to see, but which I'm not going to bother arguing for: - make bool => int - have incomplete switch cases be flagged as compile-time errors If anyone takes this as evidence why D will _never_ take off: don't. But I do hope Walter is influenced to address these issues, as I believe they are going to be very serious areas for criticism if ratified into 1.0. A couple of examples: - it seems to me that opCmp() and/or opEquals() are in Object to support AAs, and array.sort. They are very troublesome theoretically, and are beginning to be practically troublesome for some. Since AAs are getting increasing criticism, aren't we keeping a big wart in to support a small wart? If so, it's time to get out the liquid paper, I think. And why not just provide a template sort() function for arrays in Phobos? - as I've opined recently, the conceptual attractiveness of the bit type is specious. Some of us have had this opinion for a long time, some more recently, but I'd hazard a guess that now a large portion of D's devotees think it's now more trouble than it's worth. I still strongly assert that my recent discussion of bit arrays in D structs is compelling, and I"ve not yet heard any counter to it. Here's a challenge to any remaining holders of the "keep bit" position: can anyone identify to me the compelling case for having the bit type, i.e. the things that cannot be adequalty covered by a library. Anyway, to repeat myself. I am optimisitic about D, but I believe me *must* have a serious refactoring of the language *before* 1.0, and I think we're getting close to the appropriate time, before Walter's time is completely consumed on issues that could disappear just by a collective belt-tightening. btw, I've a shocking case of the flu at the mo, so if any of this reads as insane, it might well be. :-) William Smythe, ready with Scythe
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does ==
 have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than any
kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on
 some
 resource.  How would you define two of them being equal?  Them
 locking
 the same resource?  If the setup is such that each resource can only
 be locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any way?
I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using
 over
the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and
 assignment)
by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's a
serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all
 types,
why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the
 library can only create controls, and not interface existing controls
 (such as those on a Windows dialog resource), and cannot create two
 objects interfacing the same control, then each control object is
 equal only to itself.
*If* that's the value comparison you define for two GUI control objects,
then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune
 and
harmful.
 For that matter, I can't think of any examples in which an object
 should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate value
comparison. That's a practically infinite set
 of things
Aug 23 2004
parent Ilya Minkov <minkov cs.tum.edu> writes:
Dave schrieb:
 Ok, so I'm a newbie to the language, but I think that makes it even more 
 significant that I can immediately recognize and agree on four issues
 brought up by people a lot smarter than me who've used the language a lot
 longer <g>.
There are people who have used the language even longer, but don't have enough time to read the newsgroup, or just stick out of discussions which won't change Walters mind anyway. (That is, otherwise we'd loose faith in Walter) I have been here much longer than Mathhew, Jill and many others. I'm definately not smarter then them, but i just want to have my point of view stated as a counter-weight.
 Based on the criteria of usefulness, ease of compiler implementation and
 performance, I think built-in AA's, the bit type and 'bool as bit' could be
 dropped. I also strongly agree that a stack based class needs to be added.
Stack based class would turn D into only better C++ performance wise. We will have implicitly generated try...finally frames all over, which i would like to avoid.
 - Built-in AA's. I originally thought the idea was cool, but they are not
 performant and already they are seemingly being 'replaced' by library
 functionality (that in itself is pretty damning). The clincher here is that
 they also seem to force other things on the language that perhaps should not
 be there. If AA's and the adverse semantics they currently require for D can
 be fixed then I think they would still be a great addition.
I got used to using them. And there is nothing which makes them "not performant" in principle. And a very nice syntax. But if there are so many people against them, then to hell with them, they're not gonna be missed too much.
 - Bit type. Seems cool but in reality may only end up as a 'nice to
 have for small jobs' type of thing that is better implemented through
 libraries anyhow. Also I think it makes the language itself harder to
 implement. Take a look at gc.d for example. About 1/5 of that code is
 dedicated just to bit arrays. I'd imagine it adds a fair amount to the
 compiler implementation as well.
The strong point of this is added consistency. If we had, say, a larger sized bool, then people would still wish that boolarrays would be packed bitwise. And then we're back to the complexity.
 - bool as bit. For this I have to ask: Why?!? Much slower than byte or word
 and I really don't see any real practical advantage to it, at least at this
 point in my D education.
Why the heck should it be slower? The compiler groups bits always into bytes. Single bools may even be int-sized - although in the current implementation they are not, Walter said he would like to do that once.
 - Stack based class. IMHO, this is perhaps the most important performance
 advantage that C++ has over Java from a pure computer science point of view.
 And evidently in the 'real world' it can make a heck of a difference
 because C++ is surviving, some would say thriving, for OOP programming in a
 world where Java is widely recognized as a better pure-OOP language (Much of
 the currently relevant 'Java is SLOW' sentiment is related to this one
 thing). It would be really, really useful to have Ctors for some sort of
 very efficient object.
Where raw performance is needed, polymorphy has to pass, for some good reasons - one of them is that the space that a class takes on the stack must be known in advance. This also works in C++ this way - polymorphy breaks when you create objects on a stack. So why not use a struct already??? We could make some nice enhancements, like allow a struct to inherit from a class, and vice versa. There is also a technical problem with destructors. For heap-placed objects, garbage collector is responsible for calling the destructor. For stack-based objects, a try...finally frame has to be arranged. In C++, this has to be arranged in reality almost every function. This makes you say goodbye to small (non-inlinable, e.g. virtual) functions being fast, due to higher call overhead, programs binaries grow faster than without, exceptions propagate more slowly... So, we have our auto objects which place such a frame, but they don't invite to be used frequently. Perhaps if someone proposes a sane extension which could be made to the auto objects, it could be considered. If you are talking about initial allocation of an object on the stack, this can already be done, but it may not have a distructor called. One of the other strong points of C++ performance is that you can allocate objects from "pools" or other specialised memory managers much faster than a standard allocator would. In D, we have a class new and delete construsts to accomplish that easily.
 I think D should really, really concentrate on 1) fixing the weaknesses of

 functionality only if it doesn't impede 1) & 2) _or_ add weaknesses to D
 that may facilitate a need for 'E' ;)
D must have a strong face and resist certain kinds of temptations. :) -eye
Sep 04 2004
prev sibling next sibling parent "antiAlias" <fu bar.com> writes:
I'll give my full support (FWIW) to each of the items you note, except for
opEquals(). And I'll add (or emphasize) a few more, too:

1) the damnable name-resolution approach belongs in hell; not a modern
language. AJ's example of breaking opEquals(), opCmp() etc shows just how
puny and fragile the approach truly is. If D were strongly-typed regarding
method-arguments, all those wacky examples of sloppy-coding and
edge-conditions would not be a problem. As Regan pointed out a while back,
it's the conversion and promotion of primitive types that cause the
confusion over which method should be invoked within an inheritance scheme.
If that were fixed, the method-alias 'concept' would have zero value, at
best.

2) Interface-contracts should be satisfied by any means available to the
class; including inherited methods, abstract declarations, etc, etc

3) primitive types should have overloaded methods on them, like properties

that D could do something slick via a struct for each primitive type.

4) the override keyword should be mandatory.

5) bit-array is now stillborn. Better off with a bit-set class, or
equivalent.

6) AAs are a dead duck. The one benefit they exhibit (IMO) is that the
compiler can sneakily avoid casting the return value, and it works for
primitive types. Both of these can be addressed by templates. The negatives
against AAs are as long as my arm. Array sorting is in the same boat. There
was a need at one time, but that time has long since past.

7) additional emphasis on the shared GC, and the whole DLL issue.

8) the weird dual behavior of concatenation (~+ different than ax ~ y ~ z)

9) printf is /still/ linked-in via the root Object

I, also, am an optimist regarding D. That doesn't mean I have to close my
eyes to the flaws it often exhibits.



"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cgc4fc$20kf$1 digitaldaemon.com...
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
 Thank you! I (naturally) agree with everything you said because
 you agree that something that i am babling about for some time
 now makes sence. Not that this means that things will change now
 but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's
(increasingly shaky, IMO) preference for ease of
 compiler implementation over language "complexity".

 I still like D a lot, but I increasingly see several of its "unique
features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts.
 By no means an exhaustive list:

 - drop the bit type
 - have a serious discussion about dropping the AA built-in type
 - make the return type from opApply delegates be an enum with a single
public "Ok" member
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
 - give pseudo methods to built-in types, e.g. toString()
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell! The fact that I've identified a very real problem with bit arrays in
this regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with
the fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or
even "sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the
future appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential
class/function C/D/D/C guises, it'll be still born.
 - allow recursive templates
 - have a clear import/name-resolution policy. (Note: I cannot really
comment on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it
for some time.)
 Some others I'd also like to see, but which I'm not going to bother
arguing for:
 - make bool => int
 - have incomplete switch cases be flagged as compile-time errors


 If anyone takes this as evidence why D will _never_ take off: don't. But I
do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for criticism
if ratified into 1.0. A couple of examples:
 - it seems to me that opCmp() and/or opEquals() are in Object to support
AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically troublesome for some. Since AAs are
getting increasing criticism, aren't we keeping
 a big wart in to support a small wart? If so, it's time to get out the
liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?

 - as I've opined recently, the conceptual attractiveness of the bit type
is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a
large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent discussion
of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that
cannot be adequalty covered by a library.
 Anyway, to repeat myself. I am optimisitic about D, but I believe me
*must* have a serious refactoring of the language
 *before* 1.0, and I think we're getting close to the appropriate time,
before Walter's time is completely consumed on
 issues that could disappear just by a collective belt-tightening.

 btw, I've a shocking case of the flu at the mo, so if any of this reads as
insane, it might well be. :-)
 William Smythe, ready with Scythe

 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does ==
have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than
any
 kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on
some
 resource.  How would you define two of them being equal?  Them
locking
 the same resource?  If the setup is such that each resource can only
be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any
way?
 I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using
over
 the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and
assignment)
 by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's
a
 serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all
types,
 why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the
library
 can only create controls, and not interface existing controls (such
as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only
to
 itself.
*If* that's the value comparison you define for two GUI control
objects,
 then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune
and
 harmful.
 For that matter, I can't think of any examples in which an object
should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate
value
 comparison. That's a practically infinite set
 of things
Aug 23 2004
prev sibling next sibling parent Sean Kelly <sean f4.ca> writes:
In article <cgc4fc$20kf$1 digitaldaemon.com>, Matthew says...
I still like D a lot, but I increasingly see several of its "unique features"
as a bad idea. Because we're still pre
1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts.

By no means an exhaustive list:

- drop the bit type
Agreed. It's a nice idea, but seems to cause too many problems in practice.
- remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
As long as they're removed from Object and not removed altogether. Unless we want to rely entirely on external comparison functions, which seems clunky.
- give pseudo methods to built-in types, e.g. toString()
Works for me.
- stop shoehorning C-struct and stack-based class functionality into the single
D class-key "struct". This is a marriage
made in
hell! The fact that I've identified a very real problem with bit arrays in this
regard is, IMO, only the tip of the
ice-berg. Let's let stack-based classes have a new class-key, to go with the
fact that they're a new concept. I don't
care if it's something as hideous as "stackclass", "sclass", "stass", or even
"sklass". Any of those naming yucks are
irrelevant compared to the current conceptual nightmare. (A side effect of this
is that sklasses could have ctors, and
maybe even dtors!!)
I like using "struct" for POD types, but it stinks that there's no built-in support for constructors. As for stack classes, I think "auto" might serve in most cases.
- address the DLL-GC issue. This is a *HUGE* issue with respect to the future
appeal of the language. Without D
seamlessly supporting DLLs in (almost) all of their potential class/function
C/D/D/C guises, it'll be still born.
Definately. This has to be sorted out before 1.0.
- allow recursive templates
Yes please.
- make bool => int
Why make it == sizeof(int) rather than sizeof(byte)? Not to derail the thread, I'm just curious.
- have incomplete switch cases be flagged as compile-time errors
I haven't seen any discussion of this one. Will have to search the archives.
- it seems to me that opCmp() and/or opEquals() are in Object to support AAs,
and array.sort. They are very troublesome
theoretically,
and are beginning to be practically troublesome for some. Since AAs are getting
increasing criticism, aren't we keeping
a big wart in to support a small wart? If so, it's time to get out the liquid
paper, I think. And why not just provide a
template sort() function for arrays in Phobos?
I haven't done enough class-level programming in D to really say. I like the idea of the built-in AAs, even though they don't have all the features that a library implementation would have. Would it be too much to keep AAs and drop Object.opCmp and Object.OpEquals?
- as I've opined recently, the conceptual attractiveness of the bit type is
specious. Some of us have had this opinion
for a long time, some more recently, but I'd hazard a guess that now a large
portion of D's devotees think it's now more
trouble than it's worth. I still strongly assert that my recent discussion of
bit arrays in D structs is compelling, and
I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
identify to
me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
I still have your long post on bits marked "unread" so I will get around to replying to it, but I can summarize. I personally like "bit" though I think it is a mistake to confuse it with "bool." Also, getting bit to work with slicing and such seems complex enough that I don't think it's worth keeping. A bitset library class could be implemented fairly easily and has the potential to be just as fast as the language version we have now. Also, I think it would be a mistake to have more than one language type that has "bool" semantics. So adding bool as a separate type and leaving bit is just not workable IMO.
Anyway, to repeat myself. I am optimisitic about D, but I believe me *must*
have a serious refactoring of the language
*before* 1.0, and I think we're getting close to the appropriate time, before
Walter's time is completely consumed on
issues that could disappear just by a collective belt-tightening.
Agreed. Especially since most of these issues are fairly significant aspects of the language. I suppose it's worth discussing all of this with attention to the fact that D should be different from C++, so feature evaluation should be as disconnected from past experience as possible. Sean
Aug 23 2004
prev sibling next sibling parent "antiAlias" <fu bar.com> writes:
And let's not forget complete & full debug symbols ...


"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cgc4fc$20kf$1 digitaldaemon.com...
 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
 Thank you! I (naturally) agree with everything you said because
 you agree that something that i am babling about for some time
 now makes sence. Not that this means that things will change now
 but atleast they have a slightly better chanse to change. :)
I think you overestimate my influence and/or underestimate Walter's
(increasingly shaky, IMO) preference for ease of
 compiler implementation over language "complexity".

 I still like D a lot, but I increasingly see several of its "unique
features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts.
 By no means an exhaustive list:

 - drop the bit type
 - have a serious discussion about dropping the AA built-in type
 - make the return type from opApply delegates be an enum with a single
public "Ok" member
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
 - give pseudo methods to built-in types, e.g. toString()
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell! The fact that I've identified a very real problem with bit arrays in
this regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with
the fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or
even "sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the
future appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential
class/function C/D/D/C guises, it'll be still born.
 - allow recursive templates
 - have a clear import/name-resolution policy. (Note: I cannot really
comment on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it
for some time.)
 Some others I'd also like to see, but which I'm not going to bother
arguing for:
 - make bool => int
 - have incomplete switch cases be flagged as compile-time errors


 If anyone takes this as evidence why D will _never_ take off: don't. But I
do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for criticism
if ratified into 1.0. A couple of examples:
 - it seems to me that opCmp() and/or opEquals() are in Object to support
AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically troublesome for some. Since AAs are
getting increasing criticism, aren't we keeping
 a big wart in to support a small wart? If so, it's time to get out the
liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?

 - as I've opined recently, the conceptual attractiveness of the bit type
is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a
large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent discussion
of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that
cannot be adequalty covered by a library.
 Anyway, to repeat myself. I am optimisitic about D, but I believe me
*must* have a serious refactoring of the language
 *before* 1.0, and I think we're getting close to the appropriate time,
before Walter's time is completely consumed on
 issues that could disappear just by a collective belt-tightening.

 btw, I've a shocking case of the flu at the mo, so if any of this reads as
insane, it might well be. :-)
 William Smythe, ready with Scythe

 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cgblol$1mt9$1 digitaldaemon.com...
 "Stewart Gordon" <smjg_1998 yahoo.com> wrote in message
news:cgamjk$uvl$1 digitaldaemon.com...
 Ivan Senji wrote:
 <snip>
 There isn't really much more i could say about this. Why does ==
have
 to work as === by default?   I have never writen a code were two
 objects are equal when they are in the same place in memory (===),
To save the user having to write code like int opEquals(Object o) { return this === o; } everywhere when they _do_ write code like that.
That's no argument. One might also say that *every* single object Such things are convention - and a bad one in this case - rather than
any
 kind of software engineering axiom.
 For example, suppose you have an auto class representing a lock on
some
 resource.  How would you define two of them being equal?  Them
locking
 the same resource?  If the setup is such that each resource can only
be
 locked once, then the lock object is equal only to itself.
This is nonsense. Why should lock objects be value-comparable in any
way?
 I can think of no reason why one would ever
 need to so do. Indeed of all such things I've been writing and using
over
 the last 10+ years in C++, I've never had
 cause to do such a thing.

 C++ misstepped badly in allowing copying (initialisation and
assignment)
 by default for classes, where it should have
 only preserved that for structs. But it wisely does not provide any
default value comparison for class types. We know
 why D has done such a thing - built-in AAs, and array.sort - but it's
a
 serious mistake, and is a blight on the
 language. If a given semantic makes sense for only a subset of all
types,
 why provide it for all? That's MFC!
 Suppose you have a GUI library.  Two GUI control objects could be
 considered equal if they interface the same control.  But if the
library
 can only create controls, and not interface existing controls (such
as
 those on a Windows dialog resource), and cannot create two objects
 interfacing the same control, then each control object is equal only
to
 itself.
*If* that's the value comparison you define for two GUI control
objects,
 then you should be able to define that.
 No-one's saying otherwise, just that the default === === == is jejune
and
 harmful.
 For that matter, I can't think of any examples in which an object
should
 not be equal to itself.  Can you?
Absolutely. Any type that is not a value type should not facilitate
value
 comparison. That's a practically infinite set
 of things
Aug 24 2004
prev sibling next sibling parent Lars Ivar Igesund <larsivar igesund.net> writes:
Matthew wrote:
 By no means an exhaustive list:
 
 - drop the bit type
Ok, if I can keep 'true' and 'false'.
 - have a serious discussion about dropping the AA built-in type
Discussion, yes.
 - make the return type from opApply delegates be an enum with a single public
"Ok" member
You make good arguments for this.
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
I agree!
 - give pseudo methods to built-in types, e.g. toString()
Fair enough.
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
 made in
 hell! The fact that I've identified a very real problem with bit arrays in
this regard is, IMO, only the tip of the
 ice-berg. Let's let stack-based classes have a new class-key, to go with the
fact that they're a new concept. I don't
 care if it's something as hideous as "stackclass", "sclass", "stass", or even
"sklass". Any of those naming yucks are
 irrelevant compared to the current conceptual nightmare. (A side effect of
this is that sklasses could have ctors, and
 maybe even dtors!!)
Agreed.
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the future
appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential class/function
C/D/D/C guises, it'll be still born.
Heartily agree. D has made the DllHell much bigger.
 - allow recursive templates
I'm sure that will be nice.
 - have a clear import/name-resolution policy. (Note: I cannot really comment
on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it for
some time.)
Definately. And as antiAlias just said; Complete, working debug symbols. Lars Ivar Igesund
Aug 25 2004
prev sibling next sibling parent reply Ilya Minkov <minkov cs.tum.edu> writes:
Matthew schrieb:

 I still like D a lot, but I increasingly see several of its "unique features"
as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're
irretrievably stuck with some very ugly warts.
 
 By no means an exhaustive list:
 
 - drop the bit type
Not necessary IMO. Perhaps the implementation can be enhanced or some semantic changes can be brought in.
 - have a serious discussion about dropping the AA built-in type
That wouldn't hurt much. However, if your consern is efficiency, it shouldn't be. A compiler could be made check idiomatic cases of successive test and access, and optimize the second test on access out.
 - make the return type from opApply delegates be an enum with a single public
"Ok" member
Hmm... some bit paranoia? ;> I agree that this is a consistency spot which could (should?) be enhanced. But than again, if you mean to protect the programmer against his own stupidity, this spot is too harmless to concentrate on - compared with hundreds of other issues.
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), opEquals()
Hm, perhaps... But it would make an impact on performance, wouldn't it?
 - give pseudo methods to built-in types, e.g. toString()
And to operator overloads, which would be nice for templates, thus thingls like "5.opCmp(6)" would work.
 - stop shoehorning C-struct and stack-based class functionality into the
single D class-key "struct". This is a marriage
I think i'm taking Walter's position here.
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the future
appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential class/function
C/D/D/C guises, it'll be still born.
Perhaps we could develop our own cross-OS binary format for libraries, and embed a loader in Phobos. The task of this should be to provide both interface and object code on the one hand, and allow the library to use the object code from the main executable. It would not only solve the GC problem, but also all of the plug-in writing problems. DLL is not very adequate, i guess.
 - have a clear import/name-resolution policy. (Note: I cannot really comment
on this, since I've not (yet) dived into
 this hairy beast, but I respect the people who've been banging on about it for
some time.)
In fact, probably top-level private import works quite OK, but i still don't like the situation of mixing the scopes. Import "as" would be cool.
 Some others I'd also like to see, but which I'm not going to bother arguing
for:
 
 - make bool => int
I think you have been mixing up the principal, design issues, and the implementation issues. It is possible to make a single bit take the same storage as an int.
 - have incomplete switch cases be flagged as compile-time errors
Perhaps they should be, but it's not worth bringing up.
 If anyone takes this as evidence why D will _never_ take off: don't. But I do
hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for criticism if
ratified into 1.0. A couple of examples:
 
 - it seems to me that opCmp() and/or opEquals() are in Object to support AAs,
and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically troublesome for some. Since AAs are
getting increasing criticism, aren't we keeping
 a big wart in to support a small wart? If so, it's time to get out the liquid
paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?
Is the library willing to pay the interface casting penalty? How much is it anyway?
 - as I've opined recently, the conceptual attractiveness of the bit type is
specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a large
portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent discussion of
bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any remaining
holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that cannot be
adequalty covered by a library.
It is only a consistency issue. When dealing with boolean types, there will always be issues and/or hacks, because everything which looks like a boolean array is expected to be bit-packed. -eye
Sep 05 2004
next sibling parent h3r3tic <h3r3tic dev.null> writes:
Ilya Minkov wrote:
 Matthew schrieb:
 - address the DLL-GC issue. This is a *HUGE* issue with respect to the 
 future appeal of the language. Without D
 seamlessly supporting DLLs in (almost) all of their potential 
 class/function C/D/D/C guises, it'll be still born.
Perhaps we could develop our own cross-OS binary format for libraries, and embed a loader in Phobos. The task of this should be to provide both interface and object code on the one hand, and allow the library to use the object code from the main executable. It would not only solve the GC problem, but also all of the plug-in writing problems. DLL is not very adequate, i guess.
++this.votes; // :)
Sep 05 2004
prev sibling next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Ilya Minkov wrote:
 Matthew schrieb:
 
 I still like D a lot, but I increasingly see several of its "unique 
 features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're 
 irretrievably stuck with some very ugly warts.

 By no means an exhaustive list:

 - drop the bit type
Not necessary IMO. Perhaps the implementation can be enhanced or some semantic changes can be brought in.
I disagree. bit has always been the black sheep, and it probably always will. You can't take the address of a bit, nor can you pass one by reference. Personally, I think the fact that it can easily be implemented within D is the most compelling reason to drop it.
 - have a serious discussion about dropping the AA built-in type
That wouldn't hurt much. However, if your consern is efficiency, it shouldn't be. A compiler could be made check idiomatic cases of successive test and access, and optimize the second test on access out.
I think AAs could be removed for the same reason as bit: it's very easy to implement it using D, so why complicate the language by building it in? (further, there's the fact that the implementation requires that Object define opCmp and opEquals)
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), 
 opEquals()
Hm, perhaps... But it would make an impact on performance, wouldn't it?
Only if AAs and array.sort remain as they are. It would be more than feasable to turn this into a performance /gain/. (more on this below)
 If anyone takes this as evidence why D will _never_ take off: don't. 
 But I do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for 
 criticism if ratified into 1.0. A couple of examples:

 - it seems to me that opCmp() and/or opEquals() are in Object to 
 support AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically            
 troublesomeforsome.SinceAAsaregettingincreasingcriticism,aren'twekeeping
 a big wart in to support a small wart? If so, it's time to get out the 
 liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?
Is the library willing to pay the interface casting penalty? How much is it anyway?
As it stands, array.sort always makes a virtual call to TypeInfo.compare, which itself, in the case of classes, makes a virtual call to Object.opCmp. A template sort() function could skip the TypeInfo and go straight to Object.opCmp at the very least. In the best case, even that call could potentially be inlined. (of course, it could always be inlined in the case of primitive types, so there's an instant win there)
 - as I've opined recently, the conceptual attractiveness of the bit 
 type is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a 
 large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent 
 discussion of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any 
 remaining holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that 
 cannot be adequalty covered by a library.
It is only a consistency issue. When dealing with boolean types, there will always be issues and/or hacks, because everything which looks like a boolean array is expected to be bit-packed.
I would instead say that bit[] needs the axe because D doesn't need it anymore. It's my understanding that, in the beginning, operator overloading was intentionally omitted from D because it's misused so much in C++. In this context, bit[] and builtin AAs make perfect sense: they had to be, lest they be second-class citizens like Java containers. (methods for everything, manual boxing because there were no templates either) Now, though, it's easily possible to implement these as simple, succinct little template classes. The only change that pre-existing code would need is an import (unless it's done in object.d) and different declaration syntax. D hasn't even hit 1.0, and it already has language constructs which only exist because of its history. :) -- andy
Sep 05 2004
next sibling parent "antiAlias" <fu bar.com> writes:
Hear!  Hear!  Well said, Andy.

It's a golden opportunity to clean house, while at the same time showing
some exemplary benefits of D templates.


"Andy Friesen" <andy ikagames.com> wrote in message
news:chfh34$g4i$1 digitaldaemon.com...
Ilya Minkov wrote:
 Matthew schrieb:

 I still like D a lot, but I increasingly see several of its "unique
 features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're
 irretrievably stuck with some very ugly warts.

 By no means an exhaustive list:

 - drop the bit type
Not necessary IMO. Perhaps the implementation can be enhanced or some semantic changes can be brought in.
I disagree. bit has always been the black sheep, and it probably always will. You can't take the address of a bit, nor can you pass one by reference. Personally, I think the fact that it can easily be implemented within D is the most compelling reason to drop it.
 - have a serious discussion about dropping the AA built-in type
That wouldn't hurt much. However, if your consern is efficiency, it shouldn't be. A compiler could be made check idiomatic cases of successive test and access, and optimize the second test on access out.
I think AAs could be removed for the same reason as bit: it's very easy to implement it using D, so why complicate the language by building it in? (further, there's the fact that the implementation requires that Object define opCmp and opEquals)
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(),
 opEquals()
Hm, perhaps... But it would make an impact on performance, wouldn't it?
Only if AAs and array.sort remain as they are. It would be more than feasable to turn this into a performance /gain/. (more on this below)
 If anyone takes this as evidence why D will _never_ take off: don't.
 But I do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for
 criticism if ratified into 1.0. A couple of examples:

 - it seems to me that opCmp() and/or opEquals() are in Object to
 support AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically
 troublesomeforsome.SinceAAsaregettingincreasingcriticism,aren'twekeeping
 a big wart in to support a small wart? If so, it's time to get out the
 liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?
Is the library willing to pay the interface casting penalty? How much is it anyway?
As it stands, array.sort always makes a virtual call to TypeInfo.compare, which itself, in the case of classes, makes a virtual call to Object.opCmp. A template sort() function could skip the TypeInfo and go straight to Object.opCmp at the very least. In the best case, even that call could potentially be inlined. (of course, it could always be inlined in the case of primitive types, so there's an instant win there)
 - as I've opined recently, the conceptual attractiveness of the bit
 type is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a
 large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent
 discussion of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any
 remaining holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that
 cannot be adequalty covered by a library.
It is only a consistency issue. When dealing with boolean types, there will always be issues and/or hacks, because everything which looks like a boolean array is expected to be bit-packed.
I would instead say that bit[] needs the axe because D doesn't need it anymore. It's my understanding that, in the beginning, operator overloading was intentionally omitted from D because it's misused so much in C++. In this context, bit[] and builtin AAs make perfect sense: they had to be, lest they be second-class citizens like Java containers. (methods for everything, manual boxing because there were no templates either) Now, though, it's easily possible to implement these as simple, succinct little template classes. The only change that pre-existing code would need is an import (unless it's done in object.d) and different declaration syntax. D hasn't even hit 1.0, and it already has language constructs which only exist because of its history. :) -- andy
Sep 05 2004
prev sibling parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <chfh34$g4i$1 digitaldaemon.com>, Andy Friesen says...
Ilya Minkov wrote:
 Matthew schrieb:
 
 I still like D a lot, but I increasingly see several of its "unique 
 features" as a bad idea. Because we're still pre
 1.0, I'd be voting for weilding the scythe at this stage, before we're 
 irretrievably stuck with some very ugly warts.

 By no means an exhaustive list:

 - drop the bit type
Not necessary IMO. Perhaps the implementation can be enhanced or some semantic changes can be brought in.
I disagree. bit has always been the black sheep, and it probably always will. You can't take the address of a bit, nor can you pass one by reference. Personally, I think the fact that it can easily be implemented within D is the most compelling reason to drop it.
I bet most people don't like bit because of bool. The fact that bits are not addressable is just because bytes are the smallest addressable unit. If bits were addressable I'd have to ask which bit is being addressed - and what about the addresses of the other 7 bits sharing that same address?
 - have a serious discussion about dropping the AA built-in type
That wouldn't hurt much. However, if your consern is efficiency, it shouldn't be. A compiler could be made check idiomatic cases of successive test and access, and optimize the second test on access out.
I think AAs could be removed for the same reason as bit: it's very easy to implement it using D, so why complicate the language by building it in? (further, there's the fact that the implementation requires that Object define opCmp and opEquals)
 - remove some of the top-heavy hierarchy methods, e.g. opCmp(), 
 opEquals()
Hm, perhaps... But it would make an impact on performance, wouldn't it?
Only if AAs and array.sort remain as they are. It would be more than feasable to turn this into a performance /gain/. (more on this below)
 If anyone takes this as evidence why D will _never_ take off: don't. 
 But I do hope Walter is influenced to address these
 issues, as I believe they are going to be very serious areas for 
 criticism if ratified into 1.0. A couple of examples:

 - it seems to me that opCmp() and/or opEquals() are in Object to 
 support AAs, and array.sort. They are very troublesome
 theoretically,
 and are beginning to be practically            
 troublesomeforsome.SinceAAsaregettingincreasingcriticism,aren'twekeeping
 a big wart in to support a small wart? If so, it's time to get out the 
 liquid paper, I think. And why not just provide a
 template sort() function for arrays in Phobos?
Is the library willing to pay the interface casting penalty? How much is it anyway?
As it stands, array.sort always makes a virtual call to TypeInfo.compare, which itself, in the case of classes, makes a virtual call to Object.opCmp. A template sort() function could skip the TypeInfo and go straight to Object.opCmp at the very least. In the best case, even that call could potentially be inlined. (of course, it could always be inlined in the case of primitive types, so there's an instant win there)
There isn't anything stopping the builtin AA from using templates in the implementation (at least that I can think of). Speaking abstractly anything a library can do the compiler can do, so putting it into the compiler can only improve performance and ease of use. The fact that D's AA's have different declaration syntax than template instantiation means generic programming is impacted but presumably that can be fixed by introducing some names into object.d that are aliases for the AAs.
 - as I've opined recently, the conceptual attractiveness of the bit 
 type is specious. Some of us have had this opinion
 for a long time, some more recently, but I'd hazard a guess that now a 
 large portion of D's devotees think it's now more
 trouble than it's worth. I still strongly assert that my recent 
 discussion of bit arrays in D structs is compelling, and
 I"ve not yet heard any counter to it. Here's a challenge to any 
 remaining holders of the "keep bit" position: can anyone
 identify to
 me the compelling case for having the bit type, i.e. the things that 
 cannot be adequalty covered by a library.
It is only a consistency issue. When dealing with boolean types, there will always be issues and/or hacks, because everything which looks like a boolean array is expected to be bit-packed.
I would instead say that bit[] needs the axe because D doesn't need it anymore. It's my understanding that, in the beginning, operator overloading was intentionally omitted from D because it's misused so much in C++. In this context, bit[] and builtin AAs make perfect sense: they had to be, lest they be second-class citizens like Java containers. (methods for everything, manual boxing because there were no templates either) Now, though, it's easily possible to implement these as simple, succinct little template classes. The only change that pre-existing code would need is an import (unless it's done in object.d) and different declaration syntax. D hasn't even hit 1.0, and it already has language constructs which only exist because of its history. :) -- andy
Sep 05 2004
parent reply Ilya Minkov <minkov cs.tum.edu> writes:
Ben Hinkle schrieb:

 I bet most people don't like bit because of bool. The fact that bits are not
 addressable is just because bytes are the smallest addressable unit. If bits
 were addressable I'd have to ask which bit is being addressed - and what about
 the addresses of the other 7 bits sharing that same address?
This simply means that bits requiere a completely separate infrastructure of the following: * bit as a boolean type. Can be placed in a byte or int or whatever. * bit pointer type - consisiting of byte pointer and additional 3 bits offset. * bit array struct - conceptually consisting of 2 bit pointers for beginning and end. This may also be bit pointer and length, perhaps the range of length can even be limited to save space. This is how it was intended to be implemented. Walter probably just doesn't have enough hands to do so. The latter 2 of these types would not work by the usual DMD types implementation (as now), but simply map separate to corresponding syntaxes. There's probably nothing fancy about it, it just takes some time and power to implement, and the current "implementation" is just a quick hack. The implementation itself can be in the library, the compiler would only have to translate the syntax. The current implementation may not stay as it is, but discarding the good idea just because the implementation is not there yet would be a great pity. And Andy, please don't place any bets on bits staying crippled: they are not crippled by the spec, and you don't want them to be, do you?
 There isn't anything stopping the builtin AA from using templates in the
 implementation (at least that I can think of). Speaking abstractly anything a
 library can do the compiler can do, so putting it into the compiler can only
 improve performance and ease of use. The fact that D's AA's have different
 declaration syntax than template instantiation means generic programming is
 impacted but presumably that can be fixed by introducing some names into
 object.d that are aliases for the AAs.
Very true. Refining a built-in implementation may ultimately result in less codebloat and/or higher performance than the rigid template library. -eye
Sep 05 2004
next sibling parent reply Andy Friesen <andy ikagames.com> writes:
Ilya Minkov wrote:
 Ben Hinkle schrieb:
 
 I bet most people don't like bit because of bool. The fact that bits 
 are not
 addressable is just because bytes are the smallest addressable unit. 
 If bits
 were addressable I'd have to ask which bit is being addressed - and 
 what about
 the addresses of the other 7 bits sharing that same address?
This simply means that bits requiere a completely separate infrastructure of the following: * bit as a boolean type. Can be placed in a byte or int or whatever. * bit pointer type - consisiting of byte pointer and additional 3 bits offset. * bit array struct - conceptually consisting of 2 bit pointers for beginning and end. This may also be bit pointer and length, perhaps the range of length can even be limited to save space. This is how it was intended to be implemented. Walter probably just doesn't have enough hands to do so. The latter 2 of these types would not work by the usual DMD types implementation (as now), but simply map separate to corresponding syntaxes. There's probably nothing fancy about it, it just takes some time and power to implement, and the current "implementation" is just a quick hack. The implementation itself can be in the library, the compiler would only have to translate the syntax. The current implementation may not stay as it is, but discarding the good idea just because the implementation is not there yet would be a great pity. And Andy, please don't place any bets on bits staying crippled: they are not crippled by the spec, and you don't want them to be, do you?
Fair enough. I'm just not convinced that fixing them would be worth the trouble. :) What's the use case that justifies all this complexity? I haven't seen any D at all that uses, or even calls for the use of bit[], nor can I think of any situation where they would be significantly better than a library implementation. -- andy
Sep 05 2004
next sibling parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Andy Friesen wrote:
<snip>
 This simply means that bits requiere a completely separate 
 infrastructure of the following:

 * bit as a boolean type. Can be placed in a byte or int or whatever.
Do you mean that if one declares just a bit, not a bit[], it would occupy a byte or four? IMM, it would make sense to pack multiple bit members of a struct or class into bytes or four-byte blocks. Moreover, if in a struct, it ought to obey the alignment setting. (I haven't checked this, and don't know how much impact it would have on existing code.)
 * bit pointer type - consisiting of byte pointer and additional 3 bits 
 offset.
 * bit array struct - conceptually consisting of 2 bit pointers for 
 beginning and end. This may also be bit pointer and length, perhaps 
 the range of length can even be limited to save space.
This idea has been around for a while. I coded up an implementation on roughly this basis a while back: http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D.bugs/495 Indeed, I recall from the change log that the current bit pointer implementation allows space to address individual bits, but hasn't implemented the use of it yet. <snip>
 Fair enough.  I'm just not convinced that fixing them would be worth the 
 trouble. :)
 
 What's the use case that justifies all this complexity?  I haven't seen 
 any D at all that uses, or even calls for the use of bit[],
I've used it experimentally, but I'm quite sure it could be put to practical use. Otherwise, it probably wouldn't have been put in at all. Some possible things that could benefit from bits/bit arrays: - bit flags, which seem to be common in APIs - data compression algorithms - implementations of things such as Life or Turing machines OK, so the latter could be done with bytes just as well, but it would bloat memory requirements.
 nor can I think of any situation where they would be significantly
 better than a library implementation.
Hmm.... Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Sep 06 2004
parent reply Ilya Minkov <minkov cs.tum.edu> writes:
Stewart Gordon schrieb:

 Andy Friesen wrote:
 <snip>
 
 This simply means that bits requiere a completely separate 
 infrastructure of the following:

 * bit as a boolean type. Can be placed in a byte or int or whatever.
Do you mean that if one declares just a bit, not a bit[], it would occupy a byte or four? IMM, it would make sense to pack multiple bit members of a struct or class into bytes or four-byte blocks. Moreover, if in a struct, it ought to obey the alignment setting. (I haven't checked this, and don't know how much impact it would have on existing code.)
Yes, packing adjacent bits together may be done someday as well. However, it is not requiered for a complete and working implmentation. And now single bits are types as well, if i'm not mistaken.
 * bit pointer type - consisiting of byte pointer and additional 3 
 bits offset.
 * bit array struct - conceptually consisting of 2 bit pointers for 
 beginning and end. This may also be bit pointer and length, perhaps 
 the range of length can even be limited to save space.
This idea has been around for a while. I coded up an implementation on roughly this basis a while back:
I have been around for a while as well. This was how approximately it was intended to be implemented. A couple of years ago, bit pointer was not implemented at all and, there was just a problem that there were outstanding issues in different parts of compiler and the language so that Walter didn't want to deal with it, but the users actively requiered for a bit pointer so that inout bit parameters and such would work. I think we already had templates back then, added due to Daniel Yokomiso developing a template library. I had suggested that a temporary implementation would make a byte pointer out of it, and throw exception if a bit doesn't lie on a byte boundary. So did Walter implement it back then. Now, we still have many other issues outstanding in the language and the compiler. Thanks for the Implementation. Chances are it can be put to use by Walter. I keep the link for it here for future reference. http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D.bugs/495
 Indeed, I recall from the change log that the current bit pointer 
 implementation allows space to address individual bits, but hasn't 
 implemented the use of it yet.
Ah, interesting.
 Fair enough.  I'm just not convinced that fixing them would be worth 
 the trouble. :)

 What's the use case that justifies all this complexity?  I haven't 
 seen any D at all that uses, or even calls for the use of bit[],
Avoid adding another specialization to the templates which contain arrays and may be used on bit? -eye
Sep 06 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Ilya Minkov wrote:
<snip>
 Yes, packing adjacent bits together may be done someday as well. 
 However, it is not requiered for a complete and working implmentation. 
 And now single bits are types as well, if i'm not mistaken.
<snip> But a clear specification of how bits are held within structs clearly is required. Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Sep 06 2004
parent Ilya Minkov <minkov cs.tum.edu> writes:
Stewart Gordon schrieb:

 Ilya Minkov wrote:
 <snip>
 
 Yes, packing adjacent bits together may be done someday as well. 
 However, it is not requiered for a complete and working implmentation. 
 And now single bits are types as well, if i'm not mistaken.
A typo. This should be bytes, not types.
 <snip>
 
 But a clear specification of how bits are held within structs clearly is 
 required.
You are right, haven't thought of that. You mean to relace C's bit fields with them? However, in classes and on stack the layout is left unspecified. -eye
Sep 06 2004
prev sibling parent Arcane Jill <Arcane_member pathlink.com> writes:
In article <chgd5m$pm5$1 digitaldaemon.com>, Andy Friesen says...

What's the use case that justifies all this complexity?  I haven't seen 
any D at all that uses, or even calls for the use of bit[],
I do have one "use", but I wouldn't say that it justifies any complexity. I have an entropy provider thingy in development which uses bit[], allowing the user to get N bits of entropy for use in random-number stuff. But the truth is, I only used bit[] because it was there. It would be no trouble at all to instead return a ubyte[], or some library bit-array implementation in its place.
nor can I 
think of any situation where they would be significantly better than a 
library implementation.
Nor can I. For that matter, I'm starting to wonder if even a library implementation is strictly necessary. I suspect we could get away with just the existing ubyte[] type, and the following four functions: (where bool is aliased to something other than bit, obviously). Arcane Jill
Sep 06 2004
prev sibling parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <chg524$n07$1 digitaldaemon.com>, Ilya Minkov says...
Ben Hinkle schrieb:

 I bet most people don't like bit because of bool. The fact that bits are not
 addressable is just because bytes are the smallest addressable unit. If bits
 were addressable I'd have to ask which bit is being addressed - and what about
 the addresses of the other 7 bits sharing that same address?
This simply means that bits requiere a completely separate infrastructure of the following: * bit as a boolean type. Can be placed in a byte or int or whatever. * bit pointer type - consisiting of byte pointer and additional 3 bits offset. * bit array struct - conceptually consisting of 2 bit pointers for beginning and end. This may also be bit pointer and length, perhaps the range of length can even be limited to save space. This is how it was intended to be implemented. Walter probably just doesn't have enough hands to do so. The latter 2 of these types would not work by the usual DMD types implementation (as now), but simply map separate to corresponding syntaxes. There's probably nothing fancy about it, it just takes some time and power to implement, and the current "implementation" is just a quick hack. The implementation itself can be in the library, the compiler would only have to translate the syntax.
hmm - a pointer to bits should have the same size as pointers to anything else IMO. I want to be able to cast pointers to void* and back without losing information. Pascal had bit arrays (you put the word "packed" in front of the declaration) and they seemed to work fine back when I used Pascal. I don't know if I ever tried taking the address of a packed array in Pascal, though.
Sep 06 2004
next sibling parent reply Ilya Minkov <minkov cs.tum.edu> writes:
Ben Hinkle schrieb:

 hmm - a pointer to bits should have the same size as pointers to anything else
 IMO. I want to be able to cast pointers to void* and back without losing
 information. 
Why do you want to be able to cast it to void*? If you mean it for containers, then i'll have to think it over what kinds of solutions are possible. One of them would be that a cast to void* would new a bit pointer on the heap and copy the original one there, and return a void* which points to it. A cast back would return a bit pointer struct. The leftover place in the struct can be filled with checksum, which can be generated on one cast and checked on another, just to try to make sure that it's not random data. A garbage solution. :> There is a problem with "walking" such a void*, but you cannot safely walk a pointer if you don't know the size of a type. It looks very differently with void[], which starts to be a real problem. For templates, a separate solution can be figured out. I have to think over exactly how much trouble it is causing. Perhaps really much.
 Pascal had bit arrays (you put the word "packed" in front of the
 declaration) and they seemed to work fine back when I used Pascal. I don't know
 if I ever tried taking the address of a packed array in Pascal, though.
Oh, was it long ago that i used Delphi. If one declares a packed boolean array, can one take adress of a single element of it? I don't know any longer and don't have Delphi to test here. Perhaps Carlos can test it. And then whether it can be converted to a byte pointer is another question. -eye
Sep 06 2004
next sibling parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <chhrke$1dk2$1 digitaldaemon.com>, Ilya Minkov says...
Ben Hinkle schrieb:

 hmm - a pointer to bits should have the same size as pointers to anything else
 IMO. I want to be able to cast pointers to void* and back without losing
 information. 
Why do you want to be able to cast it to void*?
For any type Foo a pointer to Foo (Foo*) should be castable to void*. It's probably just the word "pointer" that tells me it has a certain behavior. A "reference" to a bit in a bit array can be any size and for that I wouldn't expect to be able to cast to void*. So maybe my issue is just with using the word "pointer" in this context.
If you mean it for containers, then i'll have to think it over what 
kinds of solutions are possible. One of them would be that a cast to 
void* would new a bit pointer on the heap and copy the original one 
there, and return a void* which points to it. A cast back would return a 
bit pointer struct. The leftover place in the struct can be filled with 
checksum, which can be generated on one cast and checked on another, 
just to try to make sure that it's not random data. A garbage solution. :>

There is a problem with "walking" such a void*, but you cannot safely 
walk a pointer if you don't know the size of a type. It looks very 
differently with void[], which starts to be a real problem.
I don't follow. The size in question isn't the size of the thing being pointed to - it's the size of the pointer itself.
For templates, a separate solution can be figured out. I have to think 
over exactly how much trouble it is causing. Perhaps really much.

 Pascal had bit arrays (you put the word "packed" in front of the
 declaration) and they seemed to work fine back when I used Pascal. I don't know
 if I ever tried taking the address of a packed array in Pascal, though.
Oh, was it long ago that i used Delphi. If one declares a packed boolean array, can one take adress of a single element of it? I don't know any longer and don't have Delphi to test here. Perhaps Carlos can test it. And then whether it can be converted to a byte pointer is another question.
Maybe a separate type really is needed to distinguish between packed arrays of bits and unpacked arrays of bits. Something like "unpackedbit". Also we can make bool an alias for "unpackedbit" instead of regular packable "bit". That way arrays of bools don't surprise people used to bools as bytes.
-eye
Sep 06 2004
parent reply "antiAlias" <fu bar.com> writes:
Ben; why is there a need for a bit[] in the first place? And why the need
for a native bit pointer? One can easily put together a BitSet class, which
will take care of the required functionality. It could even support array
semantics, and perhaps even expose a pseudo-pointer (with the bit offset).
Perhaps the thing to do is write one, stick it into MinTL, and put an end to
all this 'debate' ?

I just don't understand why BitSet needs to be a built-in, primitive type.
Can you enlighten me, please?



"Ben Hinkle" <Ben_member pathlink.com> wrote in message
news:chhtv8$1edk$1 digitaldaemon.com...
In article <chhrke$1dk2$1 digitaldaemon.com>, Ilya Minkov says...
Ben Hinkle schrieb:

 hmm - a pointer to bits should have the same size as pointers to anything
else
 IMO. I want to be able to cast pointers to void* and back without losing
 information.
Why do you want to be able to cast it to void*?
For any type Foo a pointer to Foo (Foo*) should be castable to void*. It's probably just the word "pointer" that tells me it has a certain behavior. A "reference" to a bit in a bit array can be any size and for that I wouldn't expect to be able to cast to void*. So maybe my issue is just with using the word "pointer" in this context.
If you mean it for containers, then i'll have to think it over what
kinds of solutions are possible. One of them would be that a cast to
void* would new a bit pointer on the heap and copy the original one
there, and return a void* which points to it. A cast back would return a
bit pointer struct. The leftover place in the struct can be filled with
checksum, which can be generated on one cast and checked on another,
just to try to make sure that it's not random data. A garbage solution. :>

There is a problem with "walking" such a void*, but you cannot safely
walk a pointer if you don't know the size of a type. It looks very
differently with void[], which starts to be a real problem.
I don't follow. The size in question isn't the size of the thing being pointed to - it's the size of the pointer itself.
For templates, a separate solution can be figured out. I have to think
over exactly how much trouble it is causing. Perhaps really much.

 Pascal had bit arrays (you put the word "packed" in front of the
 declaration) and they seemed to work fine back when I used Pascal. I
don't know
 if I ever tried taking the address of a packed array in Pascal, though.
Oh, was it long ago that i used Delphi. If one declares a packed boolean array, can one take adress of a single element of it? I don't know any longer and don't have Delphi to test here. Perhaps Carlos can test it. And then whether it can be converted to a byte pointer is another question.
Maybe a separate type really is needed to distinguish between packed arrays of bits and unpacked arrays of bits. Something like "unpackedbit". Also we can make bool an alias for "unpackedbit" instead of regular packable "bit". That way arrays of bools don't surprise people used to bools as bytes.
-eye
Sep 06 2004
parent reply Ben Hinkle <Ben_member pathlink.com> writes:
In article <chi18u$1fhn$1 digitaldaemon.com>, antiAlias says...
Ben; why is there a need for a bit[] in the first place? And why the need
for a native bit pointer? One can easily put together a BitSet class, which
will take care of the required functionality. It could even support array
semantics, and perhaps even expose a pseudo-pointer (with the bit offset).
Perhaps the thing to do is write one, stick it into MinTL, and put an end to
all this 'debate' ?

I just don't understand why BitSet needs to be a built-in, primitive type.
Can you enlighten me, please?
I don't know. Walter probably decided to pack bit arrays without realizing all the side effects - after all if one wants unpacked "bit" arrays one can explicitly use bytes. I used a bit array in D a long time ago and I don't even remember exactly what for - I think it had to do with casting an int to a bit array or something. They are cute but it looks like they cause lots of feather ruffling. Personally if supporting them in the D language gets too messy then I agree they should be removed and moved to the/a library. As I mentioned before I think the reason people actually care is because of bool. A packed bool array seems less useful than a packed bit array - if you get my meaning.
"Ben Hinkle" <Ben_member pathlink.com> wrote in message
news:chhtv8$1edk$1 digitaldaemon.com...
In article <chhrke$1dk2$1 digitaldaemon.com>, Ilya Minkov says...
Ben Hinkle schrieb:

 hmm - a pointer to bits should have the same size as pointers to anything
else
 IMO. I want to be able to cast pointers to void* and back without losing
 information.
Why do you want to be able to cast it to void*?
For any type Foo a pointer to Foo (Foo*) should be castable to void*. It's probably just the word "pointer" that tells me it has a certain behavior. A "reference" to a bit in a bit array can be any size and for that I wouldn't expect to be able to cast to void*. So maybe my issue is just with using the word "pointer" in this context.
If you mean it for containers, then i'll have to think it over what
kinds of solutions are possible. One of them would be that a cast to
void* would new a bit pointer on the heap and copy the original one
there, and return a void* which points to it. A cast back would return a
bit pointer struct. The leftover place in the struct can be filled with
checksum, which can be generated on one cast and checked on another,
just to try to make sure that it's not random data. A garbage solution. :>

There is a problem with "walking" such a void*, but you cannot safely
walk a pointer if you don't know the size of a type. It looks very
differently with void[], which starts to be a real problem.
I don't follow. The size in question isn't the size of the thing being pointed to - it's the size of the pointer itself.
For templates, a separate solution can be figured out. I have to think
over exactly how much trouble it is causing. Perhaps really much.

 Pascal had bit arrays (you put the word "packed" in front of the
 declaration) and they seemed to work fine back when I used Pascal. I
don't know
 if I ever tried taking the address of a packed array in Pascal, though.
Oh, was it long ago that i used Delphi. If one declares a packed boolean array, can one take adress of a single element of it? I don't know any longer and don't have Delphi to test here. Perhaps Carlos can test it. And then whether it can be converted to a byte pointer is another question.
Maybe a separate type really is needed to distinguish between packed arrays of bits and unpacked arrays of bits. Something like "unpackedbit". Also we can make bool an alias for "unpackedbit" instead of regular packable "bit". That way arrays of bools don't surprise people used to bools as bytes.
-eye
Sep 06 2004
parent reply "antiAlias" <fu bar.com> writes:
"Ben Hinkle" <Ben_member pathlink.com>
As I mentioned before I think the reason people actually care is because of
bool. A packed bool array seems less useful than a packed bit array - if you
get
my meaning.

===============

No question that BitSet functionality is very useful, if that's what you
mean. Unfortunately, bit[] does not support operations like OR, XOR, AND,
etc.  This relegates the usefulness of bit[] to, uhhh, somewhere dark. In
comparison, a BitSet class would be all sweetness and light :-)
Sep 06 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <chi86l$1iqg$1 digitaldaemon.com>, antiAlias says...

No question that BitSet functionality is very useful, if that's what you
mean. Unfortunately, bit[] does not support operations like OR, XOR, AND,
etc.  This relegates the usefulness of bit[] to, uhhh, somewhere dark. In
comparison, a BitSet class would be all sweetness and light :-)
There is /almost/ a BitSet class already. Please note the word "almost". The class Int in Deimos represents an unlimited precision integer ... which could easily be interpretted as an unlimited size bit array. It already has those very functions - OR, XOR and AND. It also has bit test, bit set, and bit clear functions. Unfortunately, it is not quite suitable for this purpose, because Ints are immutable. Setting a bit doesn't actually set a bit - instead, it makes a copy, and then sets a bit in the copy. But it strikes me that with very little work, the bigint package could be made to do the job. In addition to the regular functions and operator overloads, the bigint package also provides low-level functions, and these are allowed to modify-in-place. It strikes me that with only minimal change, the package could easily implement those low-level functions in a high-level class with reference semantics (possibly called BitArray) with straightforward conversion to and from Int. Anyone think that a good way to go? Arcane Jill
Sep 07 2004
parent "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Arcane Jill" <Arcane_member pathlink.com> wrote in message
news:chjmpn$26al$1 digitaldaemon.com...
 In article <chi86l$1iqg$1 digitaldaemon.com>, antiAlias says...

No question that BitSet functionality is very useful, if that's what you
mean. Unfortunately, bit[] does not support operations like OR, XOR, AND,
etc.  This relegates the usefulness of bit[] to, uhhh, somewhere dark. In
comparison, a BitSet class would be all sweetness and light :-)
There is /almost/ a BitSet class already. Please note the word "almost". The class Int in Deimos represents an unlimited precision integer ...
which
 could easily be interpretted as an unlimited size bit array. It already
has
 those very functions - OR, XOR and AND. It also has bit test, bit set, and
bit
 clear functions.

 Unfortunately, it is not quite suitable for this purpose, because Ints are
 immutable. Setting a bit doesn't actually set a bit - instead, it makes a
copy,
 and then sets a bit in the copy.

 But it strikes me that with very little work, the bigint package could be
made
 to do the job. In addition to the regular functions and operator
overloads, the
 bigint package also provides low-level functions, and these are allowed to
 modify-in-place. It strikes me that with only minimal change, the package
could
 easily implement those low-level functions in a high-level class with
reference
 semantics (possibly called BitArray) with straightforward conversion to
and from
 Int.
Mine is called BitArray :) Right now you can do things like: float f = 3.141; BitArray ba = new BitArray(f); f[31] = 1; //f is now -3.141 and you can have slices: BitArray ba2 = ba[3..14]; for(int i=0; i<ba2.length; i++) ba2[i]=1; now bits 3 .. 13 of f are set to 1. But i am strugling at the moment with the way to make al the constructors with mixins.
 Anyone think that a good way to go?

 Arcane Jill
Sep 07 2004
prev sibling parent "Carlos Santander B." <carlos8294 msn.com> writes:
"Ilya Minkov" <minkov cs.tum.edu> escribió en el mensaje
news:chhrke$1dk2$1 digitaldaemon.com
| Oh, was it long ago that i used Delphi. If one declares a packed boolean
| array, can one take adress of a single element of it? I don't know any
| longer and don't have Delphi to test here. Perhaps Carlos can test it.
| And then whether it can be converted to a byte pointer is another question.
|
| -eye

(warning: Delphi code follows)

////////////////////////////////
program Project1;

{$APPTYPE CONSOLE}

uses
  SysUtils;

var
  arr     : packed array of boolean;
  element : ^boolean;
  i       : integer;

begin
  setlength(arr,8);
  element :=   arr[2];
  arr[2] := true;
  writeln(element^);
  for i := 0 to 7 do
    writeln(arr[i]);
end.
////////////////////////////////

Outputs:
TRUE
FALSE
FALSE
TRUE
FALSE
FALSE
FALSE
FALSE
FALSE

So, unless I didn't understand something, it is possible.

-----------------------
Carlos Santander Bernal
Sep 06 2004
prev sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <chhns7$1c3p$1 digitaldaemon.com>, Ben Hinkle says...

I want to be able to cast pointers to void* and back without losing
information.
That's an intriguing one. According to "The C Programming Language" (K&R), the rules for C are: (1)It is guaranteed that a pointer to an object may be converted to a pointer to an object whose type requires less or equally strict storage alignment and back again without change. And: (2) A pointer may be converted to type void * and back again without change. Note that the first rule implies that converting char* -> int* -> char* is NOT safe, because when you go from char* to int*, the C compiler is permitted to zero the low order bits in order to meet operating system specific alignment requirements. The second rule is true in C, because void* has the same alignment requirements as char* (char is the smallest addressable unit). But the bit makes things different. In D, given that bit requires less strict storage alignment than byte, we must choose between either: (1) bit* -> void* -> bit* is NOT safe, or (2) void* is implemented with a bit offset Doesn't the bit make life interesting? Arcane Jill
Sep 06 2004
parent Joel Salomon <spamm_trapp yahoo.com> writes:
Arcane Jill wrote:
 But the bit makes things different. In D, given that bit requires less strict
 storage alignment than byte, we must choose between either:
 
 (1) bit* -> void* -> bit* is NOT safe, or
 (2) void* is implemented with a bit offset
How about device dependency? i.e. bit* -> void* -> bit* is valid only on bit-addressable machines. I'm new to D, but are function pointers compatible with void* ? If not, then bit* (and nybble* and tayste*) needn't be either - or void* should be extended to hold function pointers and represent sub-byte addressing.
 
 Doesn't the bit make life interesting?
 
And how! --Joel
Sep 06 2004
prev sibling parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <cheqr4$308q$1 digitaldaemon.com>, Ilya Minkov says...

Perhaps we could develop our own cross-OS binary format for libraries, 
and embed a loader in Phobos. The task of this should be to provide both 
interface and object code on the one hand, and allow the library to use 
the object code from the main executable. It would not only solve the GC 
problem, but also all of the plug-in writing problems. DLL is not very 
adequate, i guess.
Is cross-OS actually what you mean? I don't see how an application running on a Windows OS can run object code compiled and running on a Linus OS. You mean two different-OS PCs networked together? Connected via the internet? What? I am particularly intrigued by anything which claims to "solve the GC problem". Please could you define "the GC problem", and tell me how such a cross-OS-binary format would solve it? That would be amazing. Arcane Jill
Sep 06 2004
parent Martin <Martin_member pathlink.com> writes:
Most of code is very OS undependent, mostly only the IO operations are os
dependent and also the headers of the excecutables.

For example, if you write a math library, then only the header of this library
is OS dependent, and it is quite simple to runtime convert it.

And if this library only uses input-output through D standart libraris, then is
is the same situation.

I think it is a good idea.

By the way, what is THE GC problem?


In article <chhhp2$19td$1 digitaldaemon.com>, Arcane Jill says...
In article <cheqr4$308q$1 digitaldaemon.com>, Ilya Minkov says...

Perhaps we could develop our own cross-OS binary format for libraries, 
and embed a loader in Phobos. The task of this should be to provide both 
interface and object code on the one hand, and allow the library to use 
the object code from the main executable. It would not only solve the GC 
problem, but also all of the plug-in writing problems. DLL is not very 
adequate, i guess.
Is cross-OS actually what you mean? I don't see how an application running on a Windows OS can run object code compiled and running on a Linus OS. You mean two different-OS PCs networked together? Connected via the internet? What? I am particularly intrigued by anything which claims to "solve the GC problem". Please could you define "the GC problem", and tell me how such a cross-OS-binary format would solve it? That would be amazing. Arcane Jill
Sep 06 2004
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 23 Aug 2004 16:56:49 +1000, Matthew wrote:

 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
[snip]
 - drop the bit type
I suspect that there is some ambiguity loaded in the use of the term 'bit'. To me, a bit is an integer that can hold only one of two possible values - zero and one. (And a bool is almost the same except that its not a number; it holds either True or False.) So an array of bits is a vector of integers that can each hold only one of two values, and this can be implemented in a multitude of ways. In addition, there is a real need to be able to describe the physical layout of RAM, and as such we often superimpose a bit array (in language terms) over a RAM area to access the RAM bits. However a 'bit' as a programming concept is not necessarily the same as a physical RAM bit. So in D, it would be useful if we could have a conceptual bit type - used to implement programming concepts of switches, etc..; and a physical bit type - used to describe RAM layouts. We could then workout a method of notating the address of RAM bits based on physical RAM (byte) addresses. Addressing conceptual bits via standard vector addressing would remain the same. cbit v[17]; v[5] = 1; -- Sets the 6th conceptual bit to 1. And something like this could describe the layout of a 4-byte (32-bit) area of RAM... struct X { pbit a[2]; pbit b[5]; align byte; byte2 c; byte d; } Or am I missing something again? -- Derek Melbourne, Australia 7/Sep/04 9:22:18 AM
Sep 06 2004
parent reply Sean Kelly <sean f4.ca> writes:
In article <chisup$1s00$1 digitaldaemon.com>, Derek Parnell says...
On Mon, 23 Aug 2004 16:56:49 +1000, Matthew wrote:

 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
[snip]
 - drop the bit type
I suspect that there is some ambiguity loaded in the use of the term 'bit'. To me, a bit is an integer that can hold only one of two possible values - zero and one. (And a bool is almost the same except that its not a number; it holds either True or False.) So an array of bits is a vector of integers that can each hold only one of two values, and this can be implemented in a multitude of ways. In addition, there is a real need to be able to describe the physical layout of RAM, and as such we often superimpose a bit array (in language terms) over a RAM area to access the RAM bits. However a 'bit' as a programming concept is not necessarily the same as a physical RAM bit. So in D, it would be useful if we could have a conceptual bit type - used to implement programming concepts of switches, etc..; and a physical bit type - used to describe RAM layouts. We could then workout a method of notating the address of RAM bits based on physical RAM (byte) addresses. Addressing conceptual bits via standard vector addressing would remain the same. cbit v[17]; v[5] = 1; -- Sets the 6th conceptual bit to 1. And something like this could describe the layout of a 4-byte (32-bit) area of RAM... struct X { pbit a[2]; pbit b[5]; align byte; byte2 c; byte d; } Or am I missing something again?
There may be some confusion if bits are packed across adjacent bit arrays. It could also lead so some odd problems or compiler issues if multithreading is thrown into the picture (imagine a and b being protected by separate locks though a[0] and b[0] occupy the same byte in memory). I'm pretty sure that D has alignment specifiers so we don't really need bit arrays for this. The contention is around the fact that some believe that arrays of anything should behave the same. That is, that the address of an element can be taken, etc. For obvious reasons, this isn't easy if bit arrays are packed :) I think that things should either stay the way they are, or 'bit' should be replaced with 'bool' and always have a 1-byte storage value. As others have said, it's easy enough to create a packed bitset in a library. Sean
Sep 06 2004
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 7 Sep 2004 03:08:20 +0000 (UTC), Sean Kelly wrote:

 In article <chisup$1s00$1 digitaldaemon.com>, Derek Parnell says...
On Mon, 23 Aug 2004 16:56:49 +1000, Matthew wrote:

 "Ivan Senji" <ivan.senji public.srce.hr> wrote in message
news:cgc1s6$1vdk$1 digitaldaemon.com...
[snip]
 - drop the bit type
I suspect that there is some ambiguity loaded in the use of the term 'bit'. To me, a bit is an integer that can hold only one of two possible values - zero and one. (And a bool is almost the same except that its not a number; it holds either True or False.) So an array of bits is a vector of integers that can each hold only one of two values, and this can be implemented in a multitude of ways. In addition, there is a real need to be able to describe the physical layout of RAM, and as such we often superimpose a bit array (in language terms) over a RAM area to access the RAM bits. However a 'bit' as a programming concept is not necessarily the same as a physical RAM bit. So in D, it would be useful if we could have a conceptual bit type - used to implement programming concepts of switches, etc..; and a physical bit type - used to describe RAM layouts. We could then workout a method of notating the address of RAM bits based on physical RAM (byte) addresses. Addressing conceptual bits via standard vector addressing would remain the same. cbit v[17]; v[5] = 1; -- Sets the 6th conceptual bit to 1. And something like this could describe the layout of a 4-byte (32-bit) area of RAM... struct X { pbit a[2]; pbit b[5]; align byte; byte2 c; byte d; } Or am I missing something again?
I suspect that either I don't understand you, or that you don't understand me. ;-)
 There may be some confusion if bits are packed across adjacent bit arrays.
Are you talking about physical RAM layouts or conceptual bit arrays? They are not the same thing. All the 'struct' above is saying is ... 'a' is the first two bits in RAM. 'b' is the next 5 bits in RAM. Then align to the next byte boundary. 'c' is a single 2-byte value. 'd is a single 1-byte value. So how can one say "if bits are packed across adjacent bit arrays"? Do you mean if the last bit in 'a' and 'the first bit in 'b' (thus the 3rd and 4th bit in the structure) are accessed as if they were a single bit array? Or are you talking about conceptual bits in arrays? If so, it makes as much sense as saying "but what if integers are packed across integer arrays?". There is no guarantee that such arrays are even adjacently stored in RAM?
  It
 could also lead so some odd problems or compiler issues if multithreading is
 thrown into the picture (imagine a and b being protected by separate locks
 though a[0] and b[0] occupy the same byte in memory).
What are you on about??? The struct above is trying to say that these two bit arrays are *explicitly* not sharing the same RAM. If you are talking about conceptual bit arrays, then the same problem also exists for arrays of any sort of data type, not just bits.
  I'm pretty sure that D
 has alignment specifiers so we don't really need bit arrays for this. 
It does? How does one address the 3rd and 4th bits of a byte as if they were a single 2-bit integer or even a 2-bit array? I'd be pleased if you could show me example code for that. I'm not saying that one can't, but just that I haven't learned how to yet.
 The
 contention is around the fact that some believe that arrays of anything should
 behave the same.  That is, that the address of an element can be taken, etc.
Yep. Sounds like a good idea.
 For obvious reasons, this isn't easy if bit arrays are packed :)
If I declare 'a' as a bit array of 17 bits long, then doesn't a[5] refer to the 6th bit in that array? <code> import std.stdio; bit[17] a; void main() { a[5] = 1; writef("%d\n", a[5]); writef("%d\n", a.length); writef("%d\n", a.sizeof); } </code> Out is ... c:\temp>type test.d import std.stdio; bit[17] a; void main() { a[5] = 1; writef("%d\n", a[5]); writef("%d\n", a.length); writef("%d\n", a.sizeof); } c:\temp>dmd test C:\DPARNELL\DMD\BIN\..\..\dm\bin\link.exe test,,,user32+kernel32/noi; c:\temp>test 1 17 4 c:\temp> What am I misunderstanding?
 
 I think that things should either stay the way they are, or 'bit' should be
 replaced with 'bool' and always have a 1-byte storage value. 
But bits are numbers and bools are not numbers. Just like ints are numbers and chars are not numbers.
 As others have
 said, it's easy enough to create a packed bitset in a library.
What is a "packed bitset"? Maybe this is the term that I'm not understanding? -- Derek Melbourne, Australia 7/Sep/04 2:30:47 PM
Sep 06 2004
parent Sean Kelly <sean f4.ca> writes:
In article <chjepa$236l$1 digitaldaemon.com>, Derek Parnell says...
And something like this could describe the layout of a 4-byte (32-bit) area
of RAM...

  struct X
  {
     pbit a[2];
     pbit b[5];
     align byte;
     byte2 c;
     byte d;
  }

Or am I missing something again?
I suspect that either I don't understand you, or that you don't understand me. ;-)
 There may be some confusion if bits are packed across adjacent bit arrays.
Are you talking about physical RAM layouts or conceptual bit arrays? They are not the same thing. All the 'struct' above is saying is ... 'a' is the first two bits in RAM. 'b' is the next 5 bits in RAM. Then align to the next byte boundary. 'c' is a single 2-byte value. 'd is a single 1-byte value. So how can one say "if bits are packed across adjacent bit arrays"? Do you mean if the last bit in 'a' and 'the first bit in 'b' (thus the 3rd and 4th bit in the structure) are accessed as if they were a single bit array?
I mean that since a byte is the smallest addressable unit, you have two separate non-overlapping variables (ie. not aliased or sliced) that effectively address the same memory location.
Or are you talking about conceptual bits in arrays? If so, it makes as much
sense as saying "but what if integers are packed across integer arrays?".
There is no guarantee that such arrays are even adjacently stored in RAM?
True enough. Though I assumed because you said "a 4-byte area of RAM" that in your example the data was stored adjacently.
 could also lead so some odd problems or compiler issues if multithreading is
 thrown into the picture (imagine a and b being protected by separate locks
 though a[0] and b[0] occupy the same byte in memory).
What are you on about??? The struct above is trying to say that these two bit arrays are *explicitly* not sharing the same RAM. If you are talking about conceptual bit arrays, then the same problem also exists for arrays of any sort of data type, not just bits.
I thought the above struct was requiring pbit a and pbit b to share the same byte. Maybe not the same bits, but the memory location will be effectively identical.
  I'm pretty sure that D
 has alignment specifiers so we don't really need bit arrays for this. 
It does? How does one address the 3rd and 4th bits of a byte as if they were a single 2-bit integer or even a 2-bit array? I'd be pleased if you could show me example code for that. I'm not saying that one can't, but just that I haven't learned how to yet.
Ooooh... I see what you're saying. No, AFAIK D does not support anything like this natively.
 For obvious reasons, this isn't easy if bit arrays are packed :)
If I declare 'a' as a bit array of 17 bits long, then doesn't a[5] refer to the 6th bit in that array?
..
What am I misunderstanding?
I think we're on the same page, and that there's just been some miscommunication. I was merely saying since modern computers don't support bit addressing, you've got to do some magic to deal with bit arrays and such--offsets, masks, etc.
 I think that things should either stay the way they are, or 'bit' should be
 replaced with 'bool' and always have a 1-byte storage value. 
But bits are numbers and bools are not numbers. Just like ints are numbers and chars are not numbers.
The distinction shouldn't matter in most cases. One could argue that everything is a number of a sequence of numbers, or that nothing is. It depends on the level of abstraction you're talking about :)
 As others have
 said, it's easy enough to create a packed bitset in a library.
What is a "packed bitset"? Maybe this is the term that I'm not understanding?
I think it's a term used in the D documentation. A bit array is "packed" in that it only uses 1 bit of RAM to represent 1 'bit' in D, so conceptually: &a[0] == &a[1] == ... == &a[7] Thus the compiler had to do some magic behind the scenes to allow a bit array to behave as if every bit were individually addressible. Sean
Sep 06 2004
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 7 Sep 2004 03:08:20 +0000 (UTC), Sean Kelly wrote:

 In article <chisup$1s00$1 digitaldaemon.com>, Derek Parnell says...
[snip]
 That is, that the address of an element can be taken, etc.
 For obvious reasons, this isn't easy if bit arrays are packed :)
Of sorry, you mean the RAM address of a single physical bit array element. Yes, this is difficult, only because of the convention (solidified in silicon) that a RAM address refers to a byte (collection of 8-bits). If the convention however was that a RAM address referred to a bit, then it would not be a problem. I guess we would have to adopt the segment:offset addressing notation again ;-) -- Derek Melbourne, Australia 7/Sep/04 2:55:47 PM
Sep 06 2004
parent Sean Kelly <sean f4.ca> writes:
In article <chjf8s$23fl$1 digitaldaemon.com>, Derek Parnell says...
On Tue, 7 Sep 2004 03:08:20 +0000 (UTC), Sean Kelly wrote:

 In article <chisup$1s00$1 digitaldaemon.com>, Derek Parnell says...
[snip]
 That is, that the address of an element can be taken, etc.
 For obvious reasons, this isn't easy if bit arrays are packed :)
Of sorry, you mean the RAM address of a single physical bit array element. Yes, this is difficult, only because of the convention (solidified in silicon) that a RAM address refers to a byte (collection of 8-bits). If the convention however was that a RAM address referred to a bit, then it would not be a problem. I guess we would have to adopt the segment:offset addressing notation again ;-)
Yup :) IMO this is too messy just to support packed bit arrays, though I still like the idea in theory. Sean
Sep 06 2004
prev sibling next sibling parent reply teqDruid <me teqdruid.com> writes:
On Sat, 07 Aug 2004 20:26:05 +0000, Arcane Jill wrote:
 But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays don't
 need opCmp(). Honestly, they don't. Why not? - because the only thing AAs
 require is a sort criterion. That sort criterion does not have to be based on
<.
 The current implementation of AAs is to sort based on (x < y), but then to have
 Object define < such that (x < y) is implemented as (&x < &y). So why not just
 have the sort criterion for AAs be (&x < &y) instead of (x < y)? If this were
I don't know how the AA algorithm works, but since &x != &y doesn't mean x != y, wouldn't using &x < &y not be kosher? In other words, import AJ's bigint; Int a = new Int(5); Int b = new Int(5); bit[Int] aa; aa[a] = true; if (aa[b]) printf("true\n\0"); else printf("false\n\0"); That should print true, but if the AA is sorted using &a and &b, it could easily print false. Again, I don't know how the algorithm works, so this small example might work using the references, but with larger AAs, what's the point of sorting by a (basically) arbitrary and irrelevant number? Or am I WAY off base? (If so, please explain) Instead of using (x < y) I think the hash should be used, this way it can be overridden by classes, since &x can't be overridden (and I really hope I'm not wrong about that.) John
Aug 08 2004
parent reply Arcane Jill <Arcane_member pathlink.com> writes:
In article <pan.2004.08.08.08.43.07.193849 teqdruid.com>, teqDruid says...

That should print true, but if the AA is sorted using &a and &b, it could
easily print false.  Again, I don't know how the algorithm works, so this
small example might work using the references, but with larger AAs, what's
the point of sorting by a (basically) arbitrary and irrelevant number?

Or am I WAY off base? (If so, please explain)
No, you're right. Damn!
Instead of using (x < y) I think the hash should be used, this way it can
be overridden by classes, since &x can't be overridden (and I really hope
I'm not wrong about that.)
I don't think that would work either, since hash(x) == hash(y) does not imply x == y. (Although obviously using hash() would work for a hashmap). What you need for AAs is a sort function for which precisely one of the following conditions is always true: either (a) x preceeds y; or (b) x == y; or (c) x succeeds y. Using (hash(x)<hash(y)) as the sort criterion would allow a fourth possibility: that (a), (b) and (c) are all simulataneously false. An AA would fall over if that possibility were encountered. So I don't have an answer any more. The way that C++ does it (in std::map) is that the sort criterion is <, but < is not defined for everything, so if < isn't defined, you simply can't store your object in a std::map. C++ also provides std::hash_map, which requires storable objects to implement the function hash(). I wonder if AAs couldn't simply be re-implemented /as/ a hashmap? That way you don't need a sort criterion at all. It would require hash() to implemented for all storable objects though. Jill
Aug 08 2004
next sibling parent Ben Hinkle <bhinkle4 juno.com> writes:
Arcane Jill wrote:

 In article <pan.2004.08.08.08.43.07.193849 teqdruid.com>, teqDruid says...
 
That should print true, but if the AA is sorted using &a and &b, it could
easily print false.  Again, I don't know how the algorithm works, so this
small example might work using the references, but with larger AAs, what's
the point of sorting by a (basically) arbitrary and irrelevant number?

Or am I WAY off base? (If so, please explain)
No, you're right. Damn!
Instead of using (x < y) I think the hash should be used, this way it can
be overridden by classes, since &x can't be overridden (and I really hope
I'm not wrong about that.)
I don't think that would work either, since hash(x) == hash(y) does not imply x == y. (Although obviously using hash() would work for a hashmap). What you need for AAs is a sort function for which precisely one of the following conditions is always true: either (a) x preceeds y; or (b) x == y; or (c) x succeeds y. Using (hash(x)<hash(y)) as the sort criterion would allow a fourth possibility: that (a), (b) and (c) are all simulataneously false. An AA would fall over if that possibility were encountered. So I don't have an answer any more. The way that C++ does it (in std::map) is that the sort criterion is <, but < is not defined for everything, so if < isn't defined, you simply can't store your object in a std::map. C++ also provides std::hash_map, which requires storable objects to implement the function hash(). I wonder if AAs couldn't simply be re-implemented /as/ a hashmap? That way you don't need a sort criterion at all. It would require hash() to implemented for all storable objects though. Jill
One can see how AAs are implemented by looking at src/phobos/internal/aaA.d It is a hash table with a nifty collision data structure (none of this slow linked-list collision stuff). I bet Walter has been tweaking it for years.
Aug 08 2004
prev sibling parent reply Nick <Nick_member pathlink.com> writes:
In article <cf5gq2$2uq4$1 digitaldaemon.com>, Arcane Jill says...
 So I don't have an answer any more. The way that C++ does it (in std::map) is
 that the sort criterion is <, but < is not defined for everything, so if <
isn't
 defined, you simply can't store your object in a std::map.
Basing AAs on the user implemented opCmp is not a good idea, since it's behavior is not defined. I might for example have a group of objects which are all "equal" in the way I want to compare them (opCmp() always 0, opEquals() always true) but which contain entirely different data. The AAs should use the algorithm it already uses, but it should be implemented as part of the AA, not in Object. It's different for std::map, since it's defined and "advertised" as a _sorted_ container, placing certain restrictions on the input type (in the docs.) Also, STL always allows you to specify a custom ordering functor as an optional template parameter, so < and > might not be used at all if you don't want them to. This is a good practice for DTL to follow, btw :)
 C++ also provides
 std::hash_map, which requires storable objects to implement the function
 hash().
The hash containers are not really part of C++, just in the SGI implementation (which gcc uses.) Nick
Aug 08 2004
next sibling parent reply "Ivan Senji" <ivan.senji public.srce.hr> writes:
"Nick" <Nick_member pathlink.com> wrote in message
news:cf5rd5$75$1 digitaldaemon.com...
 In article <cf5gq2$2uq4$1 digitaldaemon.com>, Arcane Jill says...
 So I don't have an answer any more. The way that C++ does it (in
std::map) is
 that the sort criterion is <, but < is not defined for everything, so if
< isn't
 defined, you simply can't store your object in a std::map.
Basing AAs on the user implemented opCmp is not a good idea, since it's
behavior
 is not defined.
What do you mean by undefined behaviour?
 I might for example have a group of objects which are all
 "equal" in the way I want to compare them (opCmp() always 0, opEquals()
always
 true) but which contain entirely different data. The AAs should use the
 algorithm it already uses, but it should be implemented as part of the AA,
not
 in Object.
Maybe your objects are strange? In most cases my for my classes the two objects are either a<b or a==b (or !(a<b))
 It's different for std::map, since it's defined and "advertised" as a
_sorted_
 container, placing certain restrictions on the input type (in the docs.)
I like it the way std::map dos things!
 Also,
 STL always allows you to specify a custom ordering functor as an optional
 template parameter, so < and > might not be used at all if you don't want
them
 to. This is a good practice for DTL to follow, btw :)
Yes it is! But what do you think about the subject of removing opCmp and opEquals from Object! They prevent us from writing templates that require these operators because there are default versions of these, and they probbably don't do what the author of the class wants!
 C++ also provides
 std::hash_map, which requires storable objects to implement the function
 hash().
The hash containers are not really part of C++, just in the SGI
implementation
 (which gcc uses.)

 Nick
Aug 08 2004
parent reply Nick <Nick_member pathlink.com> writes:
In article <cf61ji$1vh$1 digitaldaemon.com>, Ivan Senji says...
 Basing AAs on the user implemented opCmp is not a good idea, since it's
behavior
 is not defined.
What do you mean by undefined behaviour? (...) Maybe your objects are strange? In most cases my for my classes the two objects are either a<b or a==b (or !(a<b))
Yes, that's the point. Maybe I WANT strange objects, I should be free to make opCmp work however I want (and independent of ==). And since the docs for AAs don't mention anywhere that they even use opCmp, I should expect my strange objects to work perfectly in an associated array.
 It's different for std::map, since it's defined and "advertised" as
 a _sorted_ container, placing certain restrictions on the input type
 (in the docs.)
I like it the way std::map dos things!
I do too, I love it in fact.
 Also, STL always allows you to specify a custom ordering functor as
 an optional template parameter, so < and > might not be used at all
 if you don't want them to. This is a good practice for DTL to follow,
 btw :)
Yes it is! But what do you think about the subject of removing opCmp and opEquals from Object! They prevent us from writing templates that require these operators because there are default versions of these, and they probbably don't do what the author of the class wants!
I agree with you, they should be removed. And take toString and print with you while you're at it. Also, now that writef/writefln is here (and they work great!) having printf and wprintf imported in object.d is just silly. Take them out too. That's my opinion on that :) Nick
Aug 09 2004
parent reply Stewart Gordon <smjg_1998 yahoo.com> writes:
Nick wrote:

<snip>
 Yes, that's the point. Maybe I WANT strange objects, I should be free
 to make opCmp work however I want (and independent of ==).
If you're going to abuse operator overloading, it follows that you should be aware of the consequences. D data structures are designed for those of us who use D 'correctly', not for abusers.
 And since the docs for AAs don't mention anywhere that they even use 
 opCmp, I should expect my strange objects to work perfectly in an
 associated array.
Yes, the documentation needs clearing more than a bit here. <snip>
 I agree with you, they should be removed. And take toString and print
 with you while you're at it.
<snip> What's wrong with toString? Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004
parent Nick <Nick_member pathlink.com> writes:
In article <cf808g$m2r$1 digitaldaemon.com>, Stewart Gordon says...
Nick wrote:
 And since the docs for AAs don't mention anywhere that they even use 
 opCmp, I should expect my strange objects to work perfectly in an
 associated array.
Yes, the documentation needs clearing more than a bit here.
Then I think I agree with you on this point.
What's wrong with toString?
Ok maybe there's nothing wrong with toString(). But print() seems a bit unnecessary, all it does is write toString() to stdout. Nick
Aug 09 2004
prev sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Nick" <Nick_member pathlink.com> wrote in message
news:cf5rd5$75$1 digitaldaemon.com...
 In article <cf5gq2$2uq4$1 digitaldaemon.com>, Arcane Jill says...
 So I don't have an answer any more. The way that C++ does it (in std::map) is
 that the sort criterion is <, but < is not defined for everything, so if <
isn't
 defined, you simply can't store your object in a std::map.
Basing AAs on the user implemented opCmp is not a good idea, since it's behavior is not defined. I might for example have a group of objects which are all "equal" in the way I want to compare them (opCmp() always 0, opEquals() always true) but which contain entirely different data. The AAs should use the algorithm it already uses, but it should be implemented as part of the AA, not in Object. It's different for std::map, since it's defined and "advertised" as a _sorted_ container, placing certain restrictions on the input type (in the docs.) Also, STL always allows you to specify a custom ordering functor as an optional template parameter, so < and > might not be used at all if you don't want them to. This is a good practice for DTL to follow, btw :)
Those prototypes are already in. (Just not implemented in most cases.)
 C++ also provides
 std::hash_map, which requires storable objects to implement the function
 hash().
The hash containers are not really part of C++, just in the SGI implementation (which gcc uses.)
Aug 08 2004
parent Nick <Nick_member pathlink.com> writes:
Nick:
 It's different for std::map, since it's defined and "advertised" as
 a _sorted_ container, placing certain restrictions on the input type
 (in the docs.) Also, STL always allows you to specify a custom ordering
 functor as an optional template parameter, so < and > might not be used
 at all if you don't want them to. This is a good practice for DTL to
 follow, btw :)
Matthew:
Those prototypes are already in. (Just not implemented in most cases.)
Good to hear! I wasn't implying anything else :) You're doing a great job. Nick
Aug 09 2004
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Arcane Jill wrote:
<snip>
 But we /can/ have our cake and eat it. [SUGGESTION:] Associative arrays don't
 need opCmp(). Honestly, they don't. Why not? - because the only thing AAs
 require is a sort criterion.
They don't even need that. In reality, they only need an equality criterion. At the moment, AAs rely on three methods: toHash, opCmp, opEquals. The opCmp stage could easily be skipped if it doesn't exist, if only it weren't for Object.opCmp.
 That sort criterion does not have to be based on <.
 The current implementation of AAs is to sort based on (x < y), but then to have
 Object define < such that (x < y) is implemented as (&x < &y). So why not just
 have the sort criterion for AAs be (&x < &y) instead of (x < y)?
Generally, equality of objects is completely independent of their relative memory addresses. Also, screwing up GC: " Depending on the ordering of pointers: if (p1 < p2) // error: undefined behavior ... since, again, the garbage collector can move objects around in memory." But that itself depends on too many things: http://www.digitalmars.com/drn-bin/wwwnews?D/26273 -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
Ivan Senji wrote:

 There are bigger problems but:
 
 Can please these:
     int opCmp(Object o);
     int opEquals(Object o);
 
 be removed from Object!
<snip> Been suggested many a time. Several links to posts on this issue here: http://www.wikiservice.at/wiki4d/wiki.cgi?PendingPeeves Not getting rid of these (at least opCmp) is a bit like not trying to give up smoking - every excuse I have seen has a rebuttal. Stewart. -- My e-mail is valid but not my primary mailbox. Please keep replies on the 'group where everyone may benefit.
Aug 09 2004