
digitalmars.D - Method resolution sucks

reply Kris <Kris_member pathlink.com> writes:
As many of you know, I have a certain distaste for the D method-resolution
implementation. This example is one of many that I feel justifies that distaste:

We have a method foo: 

uint foo (inout int x) {return 0;}

and we have a caller bar:

void bar()
{
uint y;

foo (y);
}

Compiler says "cast(uint)(x) is not an lvalue" -- referring to the invocation of
foo(y)

Fair enough. Implicit conversion is another issue altogether, so we'll add
another foo() method to handle the alternate argument type:

uint foo (inout int x) {return 0;}
uint foo (inout uint x) {return 0;}

void wumpus()
{
uint y;

return foo(y);
}

Compiler says "function foo overloads uint(inout uint x) and uint(inout int x)
both match argument list for foo" -- again, for the invocation of foo(y).

You get the drift of that? 

Note, of course, that one cannot even think about typecasting with
inout/reference arguments ~ that's yet another dead-end issue.

The whole arena of method resolution (when combined with implicit typecasting,
pass by reference, overloading, etc) is about as bulletproof as a torn t-shirt.

Please try to do something about this, Walter.
Feb 24 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Fri, 25 Feb 2005 03:34:18 +0000 (UTC), Kris wrote:

 As many of you know, I have a certain distaste for the D method-resolution
 imlementation. This example is one of many that I feel justifies that distaste:
 
 We have a method foo: 
 
 uint foo (inout int x) {return 0;}
 
 and we have a caller bar:
 
 void bar()
 {
 uint y;
 
 foo (y);
 }
 
 Compiler says "cast(uint)(x) is not an lvalue" -- referring to the invocation
of
 foo(y)
 
 Fair enough. Implicit conversion is another issue altogether, so we'll add
 another foo() method to handle the alternate argument type:
 
 uint foo (inout int x) {return 0;}
 uint foo (inout uint x) {return 0;}
 
 void wumpus()
 {
 uint y;
 
 return foo(y);
 }
 
 Compiler says "function foo overloads uint(inout uint x) and uint(inout int x)
 both match argument list for foo" -- again, for the invocation of foo(y).
 
 You get the drift of that? 
 
 Note, of course, that one cannot even think about typecasting with
 inout/reference arguments ~ that's yet another dead-end issue.
 
 The whole arena of method resolution (when combined with implicit typecasting,
 pass by reference, overloading, etc) is about as bulletproof as a torn t-shirt.
 
 Please try to do something about this, Walter.

Sorry, Kris, but I'm not getting any errors here, except that your example
function 'wumpus()' returns a void. Once I changed that to a uint it
compiled fine. Windows v0.113

<code>
uint foo (inout int x) {return 0;}
uint foo (inout uint x) {return 0;}

uint wumpus()
{
uint y;

return foo(y);
}
</code>

-- 
Derek
Melbourne, Australia
25/02/2005 2:48:33 PM
Feb 24 2005
next sibling parent Kris <Kris_member pathlink.com> writes:
In article <1vsrqzrq891rl$.heltnytmakww.dlg 40tude.net>, Derek Parnell says...
Sorry, Kris, but I'm not getting any errors here, except that your example
function 'wumpus()' returns a void. Once I changed that to a uint it
compiled fine.  Windows v0.113

This is bizarre. Ignoring the typos (from writing whilst on the phone), the snippet was intended to represent the real thing. But I cannot get this to fail in an isolated test-case either! Even when I copy out the original code, try two modules instead of one, change the order of declaration, and so on. Yet the problem persists in the original code; something smells a bit fishy.
Feb 24 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <1vsrqzrq891rl$.heltnytmakww.dlg 40tude.net>, Derek Parnell says...
Sorry, Kris, but I'm not getting any errors here, except that your example
function 'wumpus()' returns a void. Once I changed that to a uint it
compiled fine.  Windows v0.113

<code>
uint foo (inout int x) {return 0;}
uint foo (inout uint x) {return 0;}

uint wumpus()
{
uint y;

return foo(y);
}
</code>

OK ~ found out what actually breaks:

uint foo (char[] s, inout int v, uint r=10) {return 0;}
uint foo (char[] s, inout uint v, uint r=10) {return 0;}

void bar()
{
char[] s;
int    c;

foo (s, c);      // compiles
foo (s, c, 12);  // fails
}

There's some kind of issue with the default argument ... yet the error message states "cast(uint)(c) is not an lvalue", along with the other msg about "both match argument list".

Again; about as bulletproof as a broken window.
Feb 24 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 uint foo (char[] s, inout int v, uint r=10) {return 0;}
 uint foo (char[] s, inout uint v, uint r=10) {return 0;}
 
 void bar()
 {
 char[] s;
 int    c;
 
 foo (s, c);      // compiles
 foo (s, c, 12);  // fails
 }

Yep, you've tripped up on the unexpected implicit conversion of literals.
Try ...

foo (s, c, 12U);  // Force it to use an unsigned literal.

-- 
Derek
Melbourne, Australia
25/02/2005 4:22:45 PM
Feb 24 2005
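For reference, the disambiguating effect of the unsigned suffix can be sketched as follows. This is a minimal reconstruction of the snippet under discussion (the zero return values are arbitrary), showing which calls a compiler with D's exact-match-first rule should accept:

```d
uint foo (char[] s, inout int v, uint r = 10) { return 0; }
uint foo (char[] s, inout uint v, uint r = 10) { return 0; }

void bar ()
{
    char[] s;
    int    c;

    foo (s, c);       // compiles: 'c' exactly matches the (inout int) overload
    foo (s, c, 12U);  // compiles: 12U is already uint, so the first overload
                      // is an exact match on every argument
    //foo (s, c, 12); // ambiguous under DMD of the day: 12 is int, so neither
    //                // overload matches without an implicit conversion
}
```

With `12U` the first overload matches exactly on all three arguments, so overload resolution never falls through to the implicit-conversion level where the ambiguity lives.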
parent reply Kris <Kris_member pathlink.com> writes:
In article <ztzjyq7wiz1e$.1e67b7jjrdwpz.dlg 40tude.net>, Derek Parnell says...
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 uint foo (char[] s, inout int v, uint r=10) {return 0;}
 uint foo (char[] s, inout uint v, uint r=10) {return 0;}
 
 void bar()
 {
 char[] s;
 int    c;
 
 foo (s, c);      // compiles
 foo (s, c, 12);  // fails
 }

Yep, you've tripped up on the unexpected implicit conversion of literals. Try ... foo (s, c, 12U); // Force it to use an unsigned literal.

Oof ... that shows just how fragile it truly is. The '12' will always be implicitly cast anyway (given the current scheme of things), yet it breaks in this particular case with an error on the argument 'c' instead ~ great. Can we expect a fix for this please, Walter?
Feb 24 2005
parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message 
news:cvme71$2svv$1 digitaldaemon.com...
 In article <ztzjyq7wiz1e$.1e67b7jjrdwpz.dlg 40tude.net>, Derek Parnell 
 says...
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 uint foo (char[] s, inout int v, uint r=10) {return 0;}
 uint foo (char[] s, inout uint v, uint r=10) {return 0;}

 void bar()
 {
 char[] s;
 int    c;

 foo (s, c);      // compiles
 foo (s, c, 12);  // fails
 }

Yep, you've tripped up on the unexpected implicit conversion of literals. Try ... foo (s, c, 12U); // Force it to use an unsigned literal.

Oof ... that shows just how fragile it truly is. The '12' will always be implicity cast anyway (given the current scheme of things), yet it breaks in this particular case with an error on the argument 'c' instead ~ great. Can we expect a fix for this please, Walter?

I don't have an opinion about what the behavior should be but in any case the help in http://www.digitalmars.com/d/function.html#overloading could use some more details and/or examples. An example of where "multiple matches with implicit conversions" can come up would be nice. eg:

void foo(int v, uint r) { }
void foo(uint v, int r) { }
void bar() { foo(10,10); }  // doesn't compile: multiple match

another example

void foo(int v, uint r) { }
void foo(uint v, char[] r) { }
void bar() { foo(10,"hello"); }  // does compile

another example

void foo(int v, uint r) { }
void foo(uint v, uint r) { }
void bar() { foo(10,10); }  // doesn't compile: multiple match - should it?
Feb 25 2005
parent reply Kris <Kris_member pathlink.com> writes:
In article <cvngu1$uvm$1 digitaldaemon.com>, Ben Hinkle says...
"Kris" <Kris_member pathlink.com> wrote in message 
news:cvme71$2svv$1 digitaldaemon.com...
 In article <ztzjyq7wiz1e$.1e67b7jjrdwpz.dlg 40tude.net>, Derek Parnell 
 says...
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 uint foo (char[] s, inout int v, uint r=10) {return 0;}
 uint foo (char[] s, inout uint v, uint r=10) {return 0;}

 void bar()
 {
 char[] s;
 int    c;

 foo (s, c);      // compiles
 foo (s, c, 12);  // fails
 }

Yep, you've tripped up on the unexpected implicit conversion of literals. Try ... foo (s, c, 12U); // Force it to use an unsigned literal.

Oof ... that shows just how fragile it truly is. The '12' will always be implicity cast anyway (given the current scheme of things), yet it breaks in this particular case with an error on the argument 'c' instead ~ great. Can we expect a fix for this please, Walter?

I don't have an opinion about what the behavior should be but in any case the help in http://www.digitalmars.com/d/function.html#overloading could use some more details and/or examples. An example of where "multiple matches with implicit conversions" can come up would be nice. eg: void foo(int v, uint r) { } void foo(uint v, int r) { } void bar() { foo(10,10); } // doesn't compile multiple match another example void foo(int v, uint r) { } void foo(uint v, char[] r) { } void bar() { foo(10,"hello"); } // does compile another example void foo(int v, uint r) { } void foo(uint v, uint r) { } void bar() { foo(10,10); } // doesn't compile multiple match - should it?

You're right, Ben. The documentation could always be better. This is not a case of documentation though, so you might have misread the thread.

To reiterate, there's a bug in the signature-matching algorithm whereby it gets hopelessly confused by the prior example, and emits an error that is both thoroughly misleading and confounding. It doesn't complain about literal-number type mismatches ... instead it whines about the preceding argument of a reference type, which would otherwise be acceptable. Your examples above are great, but they are not quite representative of the issue spawning this thread. Again, here is the issue:

void foo (inout int x, uint y = 10){}
void foo (inout uint x, uint y = 10){}

void bar()
{
int x;

foo (x);     // good
foo (x, 0);  // fails, with error on type of 'x', and multiple sig match
}

As you can see, there's no possible confusion over the type of 'y' -- they're both of type 'uint'. Yet the compiler error points at a problem with 'x'. Bork. Bork. The number of errors is dependent upon the order of foo() declarations.

I'm a bit surprised no-one else seems to have posted about this in the past.
Feb 25 2005
next sibling parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message 
news:cvnq8n$18lj$1 digitaldaemon.com...
 In article <cvngu1$uvm$1 digitaldaemon.com>, Ben Hinkle says...
"Kris" <Kris_member pathlink.com> wrote in message
news:cvme71$2svv$1 digitaldaemon.com...
 In article <ztzjyq7wiz1e$.1e67b7jjrdwpz.dlg 40tude.net>, Derek Parnell
 says...
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 uint foo (char[] s, inout int v, uint r=10) {return 0;}
 uint foo (char[] s, inout uint v, uint r=10) {return 0;}

 void bar()
 {
 char[] s;
 int    c;

 foo (s, c);      // compiles
 foo (s, c, 12);  // fails
 }

Yep, you've tripped up on the unexpected implicit conversion of literals. Try ... foo (s, c, 12U); // Force it to use an unsigned literal.

Oof ... that shows just how fragile it truly is. The '12' will always be implicity cast anyway (given the current scheme of things), yet it breaks in this particular case with an error on the argument 'c' instead ~ great. Can we expect a fix for this please, Walter?

I don't have an opinion about what the behavior should be but in any case the help in http://www.digitalmars.com/d/function.html#overloading could use some more details and/or examples. An example of where "multiple matches with implicit conversions" can come up would be nice. eg: void foo(int v, uint r) { } void foo(uint v, int r) { } void bar() { foo(10,10); } // doesn't compile multiple match another example void foo(int v, uint r) { } void foo(uint v, char[] r) { } void bar() { foo(10,"hello"); } // does compile another example void foo(int v, uint r) { } void foo(uint v, uint r) { } void bar() { foo(10,10); } // doesn't compile multiple match - should it?

You're right, Ben. The documentation could always be better. This is not a case of documentation though, so you might have misread the thread. To reiterate, there's a bug in the signature-matching algorithm whereby it gets hopelessly confused by the prior example; and emits an error that is both thoroughly misleading and confounding. It doesn't complain about literal-number type mismatches ... instead it whines about the preceding argument of a reference type, which would otherwise be acceptable. Your examples above are great, but they are not quite representative of the issue spawning this thread. Again, here is the issue: void foo (inout int x, uint y = 10){} void foo (inout uint x, uint y = 10){} void bar() { int x; foo (x); // good foo (x, 0); // fails, with error on type of 'x', and multiple sig match } As you can see, there's no possible confusion over the type of 'y' -- they're both of type 'uint'. Yet the complier error points at a problem with 'x'. Bork. Bork. The number of errors is dependent upon order of foo() declarations. I'm a bit surprised no-one else seems to have posted about this in the past.

Hmm. The error message I got (and I just tried with the reposted example) is just the multiple match error. I stored the code above in test.d and ran dmd test.d on Windows:

D:\d>dmd test.d
test.d(10): function test.foo overloads void(inout int x,uint y = cast(uint)(10)) and void(inout uint x,uint y = cast(uint)(10)) both match argument list for foo

Anyway, I don't really get it so I'll stop confusing the issue any more than I already have... :-P
Feb 25 2005
parent reply Kris <Kris_member pathlink.com> writes:
In article <cvnt5c$1bre$1 digitaldaemon.com>, Ben Hinkle says...
"Kris" <Kris_member pathlink.com> wrote in message 
 Bork. The number of errors is dependent upon order of foo() declarations.

Hmm. The error message I got (and I just tried with the reposted example) is just the multiple match error.

That's due to the order of foo() declarations, as noted above. Switch them around (or change x from uint to int; or vice versa) and you'll get the "cast(type)(x) not an lvalue" as well.
Feb 25 2005
parent reply Nick Sabalausky <a a.a> writes:
Kris wrote:
 In article <cvnt5c$1bre$1 digitaldaemon.com>, Ben Hinkle says...
 
"Kris" <Kris_member pathlink.com> wrote in message 

Bork. The number of errors is dependent upon order of foo() declarations.

Hmm. The error message I got (and I just tried with the reposted example) is just the multiple match error.

That's due to the order of foo() declarations, as noted above. Switch them around (or change x from uint to int; or vice versa) and you'll get the "cast(type)(x) not an lvalue" as well.

Here's what I'm getting on Windows with DMD 0.113:

In this order:

void foo (inout int x, uint y = 10){}
void foo (inout uint x, uint y = 10){}

I get:

bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo

In this order:

void foo (inout uint x, uint y = 10){}
void foo (inout int x, uint y = 10){}

I get:

bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo
bork.d(10): cast(uint)(x) is not an lvalue

In both cases I get the multiple overloads error, and in one I also get the strange "not an lvalue". I agree, this is strange. Both of the errors, in fact. Although, I have noticed that DMD tends to give a lot of cryptic/inaccurate error messages anyway - I've just been attributing it to DMD still being beta. Perhaps this is just due to some underlying bug in default arguments?

Just to clarify, are you saying you think something about the method resolution needs to be redesigned, or that it just needs a lot of bug-fixing?
Feb 25 2005
next sibling parent kris <fu bar.org> writes:
Nick Sabalausky wrote:
 Kris wrote:
 
 In article <cvnt5c$1bre$1 digitaldaemon.com>, Ben Hinkle says...

 "Kris" <Kris_member pathlink.com> wrote in message

 Bork. The number of errors is dependent upon order of foo() 
 declarations.

Hmm. The error message I got (and I just tried with the reposted example) is just the multiple match error.

That's due to the order of foo() declarations, as noted above. Switch them around (or change x from uint to int; or vice versa) and you'll get the "cast(type)(x) not an lvalue" as well.

Here's what I'm getting on Windows with DMD 0.113: In this order: void foo (inout int x, uint y = 10){} void foo (inout uint x, uint y = 10){} I get: bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo In this order: void foo (inout uint x, uint y = 10){} void foo (inout int x, uint y = 10){} I get: bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo bork.d(10): cast(uint)(x) is not an lvalue I both cases I get the multiple overloads error, and in one I also get the strange "not an lvalue". I agree, this is strange. Both of the errors, in fact. Although, I have noticed that DMD tends to give a lot of cryptic/inaccurate error messages anyway - I've just been attributing it to DMD still being beta. Perahps this is just due to some underlying bug in default arguments? Just to clarify, are you saying you think something about the method resolution needs redesigned, or that it just needs a lot of bug-fixing?

Before realizing the default-arg was involved in some bizarre manner, I put this down to another glorious wart in that area of the language; hence the thread title. As it turns out, it appears to be an unintentional mistake that will hopefully (one would imagine) be resolved quickly. As to my thoughts about some of the related design aspects: you'll forgive me if I refrain from opening that particular Pandora's Box ~ we've been down that path before, and I'm not writing the compiler. Cheers;
Feb 26 2005
prev sibling parent "Walter" <newshound digitalmars.com> writes:
"Nick Sabalausky" <a a.a> wrote in message
news:cvp5q6$2ivf$1 digitaldaemon.com...
 Kris wrote:
 In article <cvnt5c$1bre$1 digitaldaemon.com>, Ben Hinkle says...

"Kris" <Kris_member pathlink.com> wrote in message

Bork. The number of errors is dependent upon order of foo()




Hmm. The error message I got (and I just tried with the reposted



just the multiple match error.

That's due to the order of foo() declarations, as noted above. Switch


 around (or change x from uint to int; or vice versa) and you'll get the
 "cast(type)(x) not an lvalue" as well.

Here's what I'm getting on Windows with DMD 0.113: In this order: void foo (inout int x, uint y = 10){} void foo (inout uint x, uint y = 10){} I get: bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo In this order: void foo (inout uint x, uint y = 10){} void foo (inout int x, uint y = 10){} I get: bork.d(10): function bork.foo overloads void(inout uint x,uint y = cast(uint)(10)) and void(inout int x,uint y = cast(uint)(10)) both match argument list for foo bork.d(10): cast(uint)(x) is not an lvalue I both cases I get the multiple overloads error, and in one I also get the strange "not an lvalue".

What happens is there's an overload error where two functions match the argument list types. So the compiler reports the first error message. Then, it attempts to recover from the error by just picking one of the functions (the first one) and soldiering on, trying to make it work. This results in the second error message when that doesn't work.
 I agree, this is strange. Both of the errors, in fact.

Why is the first error message strange?
Feb 27 2005
prev sibling parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Kris <Kris_member pathlink.com> wrote:

[...]
 
 void foo (inout int x, uint y = 10){}
 void foo (inout uint x, uint y = 10){}
 
 void bar()
 {
 int x;
 
 foo (x);     // good
 
 foo (x, 0);  // fails, with error on type of 'x', and multiple
 sig match }
 
 As you can see, there's no possible confusion over the type of
 'y' -- they're both of type 'uint'. Yet the complier error
 points at a problem with 'x'. Bork. Bork. The number of errors
 is dependent upon order of foo() declarations. 
 
 I'm a bit surprised no-one else seems to have posted about this
 in the past. 

That is no surprise: the error messages are totally okay, because there is no exact match and no single match by implicit casting and/or integer promotion. Therefore the multiple signature match is okay. And if there are multiple signature matches then the lexically first is chosen for exact comparison of the elements of the signature, giving one more error in the case that the (uint, uint)-version comes lexically first.

-manfred
Feb 26 2005
parent reply Kris <Kris_member pathlink.com> writes:
In article <cvqfcj$11bl$1 digitaldaemon.com>, Manfred Nowak says...
Kris <Kris_member pathlink.com> wrote:

[...]
 
 void foo (inout int x, uint y = 10){}
 void foo (inout uint x, uint y = 10){}
 
 void bar()
 {
 int x;
 
 foo (x);     // good
 
 foo (x, 0);  // fails, with error on type of 'x', and multiple
 sig match }
 
 As you can see, there's no possible confusion over the type of
 'y' -- they're both of type 'uint'. Yet the complier error
 points at a problem with 'x'. Bork. Bork. The number of errors
 is dependent upon order of foo() declarations. 
 
 I'm a bit surprised no-one else seems to have posted about this
 in the past. 

That is no surprise: the error messages are totally okay, because there is no exact match and no single match by implicit casting and/or integer propagation. Therefore the multiple signature match is okay. And if there are multiple signature matches then the lexically first is choosen for exact comparison of the elements of the signature, giving one more error in the case that the (uint, uint)- version comes lexically first.

--------------------
[Ignoring the bogus msg about cast(uint)(x)]

Aye; that's logically sound ~ if you're behind the looking-glass. Just part of the twisted horror foisted upon us through the combination of method-overloading and implicit-casting. Are you perhaps saying this is a /good/ thing, Manfred ?

WALTER: Follow! But! follow only if ye be men of valor, for the entrance to this cave is guarded by a notion so foul, so cruel, that no man yet has fought with it and lived! Bones of full fifty men lie strewn about its lair. So, brave knights, if you do doubt your courage or your strength, come nae further, for death awaits you all with nasty, big, pointy teeth ...

[The rest of us; chanting] Pie Iesu domine, dona eis requiem. [bonk] Pie Iesu domine,... [bonk] ..dona eis requiem. [bonk] Pie Iesu domine,... [bonk] ..dona eis requiem.
Feb 26 2005
parent reply Manfred Nowak <svv1999 hotmail.com> writes:
Kris <Kris_member pathlink.com> wrote:

[...]
 Are you perhaps saying this is a /good/ thing, Manfred ?

Good or bad is decided by one's wishes.

It is known that every error renders the following code more or less useless. Therefore the error messages generated after the first error message are also more or less useless. In this case the first error message reported is that of a multiple signature match. If you are content with this, then just ignore the following error messages generated by the compiler. If you want more information than that first error message, please generically describe what you want.

Walter has chosen the solution with the least computational complexity for the compiler, but one of the main goals of D is to increase productivity. So, instead of intoning choruses, prove that your wish improves productivity in general and Walter will follow.

-manfred
Feb 27 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Manfred Nowak" <svv1999 hotmail.com> wrote in message
news:cvs3dl$2km0$1 digitaldaemon.com...
 Walter has choosen the solution with the least computational
 complexity for the compiler but one of the main goals of D is to
 increase productivity.

I find C++ overloading rules to be exceptionally and overly complicated, with layers and layers of special cases and rules only an expert would understand. For D I opted for a simple 3-level matching system - exact match, match with implicit conversions, no match.

So, for the example, comparing the types of the arguments with the types for each of the functions, each function is matched with implicit conversions, hence the ambiguity error:

foo(int, uint)
foo(uint, uint)

called with two int arguments, c and 12, looking like:

foo(int, int)

The rule is easy to explain. I think much of the confusion comes from being used to the complicated C++ rules; when it's so simple, I suspect people just assume it is far more complicated than it is.

I think this can be cleared up by improving the error message.
Feb 27 2005
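Walter's three matching levels can be illustrated with a short sketch. The function and variable names here are hypothetical (not from the thread); the point is that implicit conversion is only consulted when no exact match exists, and it is only ambiguous when more than one overload matches at that level:

```d
void take (uint x) { }

void demo ()
{
    uint u;
    int  i;

    take (u);         // level 1: exact match, argument is already uint
    take (i);         // level 2: match with implicit conversion, int -> uint
    // take ("abc");  // level 3: no match; char[] never converts to uint
}
```

With a single take() overload the level-2 conversion is accepted silently; the errors in this thread arise when two overloads both match at level 2, so neither can be preferred.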
next sibling parent reply Derek <derek psych.ward> writes:
On Sun, 27 Feb 2005 10:31:34 -0800, Walter wrote:

[snip]
 
 I think this can be cleared up by improving the error message.

Agreed. A simple message such as "cannot find a matching routine for foo(int,int)" would be wonderful. My main issue is that the current message does not make it clear what the results of implicit conversions are, especially with literals.

-- 
Derek
Melbourne, Australia
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek" <derek psych.ward> wrote in message
news:1h5dxczfxlo28$.1l3a1aduh16x5.dlg 40tude.net...
 On Sun, 27 Feb 2005 10:31:34 -0800, Walter wrote:
 I think this can be cleared up by improving the error message.

Agreed. A simple message such as "cannot find a matching routine for foo(int,int)" would be wonderful. My main issue is that the current message does not make it clear what are the results of implicit conversions. Especially with literals.

I've modified the error message to be:

C:\mars>dmd test
test.d(9): function test.foo called with argument types:
        (char[],int,int)
matches both:
        test.foo(char[],int,uint)
and:
        test.foo(char[],uint,uint)

This will go out in the next update.
Feb 27 2005
next sibling parent Derek Parnell <derek psych.ward> writes:
On Sun, 27 Feb 2005 13:44:58 -0800, Walter wrote:
 I've modified the error message to be:
 
 C:mars>dmd test
 test.d(9): function test.foo called with argument types:
         (char[],int,int)
 matches both:
         test.foo(char[],int,uint)
 and:
         test.foo(char[],uint,uint)
 
 This will go out in the next update.

That'll do it. Thank you.

-- 
Derek
Melbourne, Australia
28/02/2005 8:57:27 AM
Feb 27 2005
prev sibling next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cvtf29$sri$1 digitaldaemon.com...
 "Derek" <derek psych.ward> wrote in message
 news:1h5dxczfxlo28$.1l3a1aduh16x5.dlg 40tude.net...
 On Sun, 27 Feb 2005 10:31:34 -0800, Walter wrote:
 I think this can be cleared up by improving the error message.

Agreed. A simple message such as "cannot find a matching routine for foo(int,int)" would be wonderful. My main issue is that the current message does not make it clear what are the results of implicit conversions. Especially with literals.

I've modified the error message to be: C:mars>dmd test test.d(9): function test.foo called with argument types: (char[],int,int) matches both: test.foo(char[],int,uint) and: test.foo(char[],uint,uint) This will go out in the next update.

That'll be a big boon.
Feb 27 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvtf29$sri$1 digitaldaemon.com>, Walter says...
"Derek" <derek psych.ward> wrote in message
news:1h5dxczfxlo28$.1l3a1aduh16x5.dlg 40tude.net...
 On Sun, 27 Feb 2005 10:31:34 -0800, Walter wrote:
 I think this can be cleared up by improving the error message.

Agreed. A simple message such as "cannot find a matching routine for foo(int,int)" would be wonderful. My main issue is that the current message does not make it clear what are the results of implicit conversions. Especially with literals.

I've modified the error message to be: C:mars>dmd test test.d(9): function test.foo called with argument types: (char[],int,int) matches both: test.foo(char[],int,uint) and: test.foo(char[],uint,uint) This will go out in the next update.

That certainly cannot make matters worse. The problem, though, is that the compiler can inadvertently focus one in the wrong direction (in my case it was the 'inout int' and 'inout uint' that I thought was wrong). It was two days before anyone posted the real crux of the error. I think that indicates a problem that this reworked error message does not address. Having said that, better diagnostics are always welcome.
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message
news:cvtmi8$145v$1 digitaldaemon.com...
 In article <cvtf29$sri$1 digitaldaemon.com>, Walter says...
I've modified the error message to be:

C:mars>dmd test
test.d(9): function test.foo called with argument types:
        (char[],int,int)
matches both:
        test.foo(char[],int,uint)
and:
        test.foo(char[],uint,uint)

This will go out in the next update.

That certainly cannot make matters worse. The problem, though, is that the compiler can inadvertantly focus one in the wrong direction (in my case it

 the 'inout int' and 'inout uint' that I thought was wrong).

That's why, in the error diagnostic above, things irrelevant to the overload matching (the existence of 'inout', any default parameter values, function return values, etc.) are omitted.
 It was two days before anyone posted the real crux of the error.

Your first post did not reproduce the error. The second one was posted 2/24 at 9:16pm, and Derek replied with the cause of the error at 9:23pm. 7 minutes ain't bad <g>.
 I think that indicates a problem that this reworked error message does not

You did have an issue with the "cast(uint)(c) is not an lvalue" message; however, that was the second error message put out by the compiler. The real error was the first error message (both error messages pointed to the same line).

This is the well known "cascading error message" thing, where the compiler discovers an error, issues a correct diagnostic, then makes its best guess at how to patch things up so it can move on. If it guesses wrong (there is no way it can reliably guess right, otherwise it might as well write the program for you!), then more errors come out. The cascading error message problem is common to all compilers that try to recover from errors and continue compiling (it's an especially horrific problem with C++ templates <g>). The way for the user to deal with it is to focus on fixing the first error, and not to worry about the subsequent errors if they don't make sense.

The alternative is for the compiler to just quit on the first error, and the early DMD compiler did that. But users found that unacceptable, and I agreed.

If there's another problem you're referring to, I don't know what it might be.
 Having said that, better diagnostics are always welcome.

Diagnostics are always going to be a work in progress, like any user interface.
Feb 27 2005
next sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvtoi1$16mn$1 digitaldaemon.com>, Walter says...
[snip]
Your first post did not reproduce the error. The second one was posted 2/24
at 9:16pm, and Derek replied with the cause of the error at 9:23pm. 7
minutes ain't bad <g>.

<sigh> We're both wrong, Walter. I posted the broken example late Friday afternoon at 5:16pm. Manfred posted the 'compiler perspective' early Saturday evening -- unfortunately, I didn't pick it up until Sunday afternoon, and didn't actually inspect the posting date. Shame on me. We're both off by a day. Is this entirely necessary?
You did have an issue with the "cast(uint)(c) is not an lvalue" message,
however, that was the second error message put out by the compiler. The real
error was the first error message (both error messages pointed to the same
line). This is the well known "cascading error message" thing, where the
compiler discovers an error, issues a correct diagnostic, then makes its
best guess at how to patch things up so it can move on. If it guesses wrong
(there is no way it can reliably guess right, otherwise it might as well
write the program for you!), then more errors come out.

Yes, yes. It was a red-herring, and we should all know better. But the above might as well be the documentation from the algorithms involved; it doesn't even begin to consider some of the very real problems inherent in combined implicit-casting & method-overloading. Does it? Or do you feel there are perhaps no problems there at all? This is an important question, Walter.
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message
news:cvtu6b$1cl3$1 digitaldaemon.com...
 In article <cvtoi1$16mn$1 digitaldaemon.com>, Walter says...
 The second one was posted 2/24 at 9:16pm,


5:16, my newsreader shows 9:16. Obviously, we're in different timezones!
 [...] it doesn't even begin to consider some of very real problems inherent
 within combined implicit-casting & method-overloading. Does it? Or do you
 feel there are perhaps no problems there at all? This is an important
 question, Walter.

I think it would save time if you list the problems as you see them, or if it's in a message I have overlooked (a real possibility, I don't think I'll ever catch up!) please point me to that message. I don't feel the particular example you posted in this thread has problems beyond a confusing error message.
Feb 27 2005
parent reply Kris <Kris_member pathlink.com> writes:
In article <cvu1in$1gkr$1 digitaldaemon.com>, Walter says...
 We're both wrong, Walter. I posted the broken example late Friday
 afternoon at 5:16pm.

5:16, my newsreader shows 9:16. Obviously, we're in different timezones!

Imagine!
 [...] it doesn't even begin to consider some of very real problems inherent
 within combined implicit-casting & method-overloading. Does it? Or do you
 feel there are perhaps no problems there at all? This is an important
 question, Walter.

I think it would save time if you list the problems as you see them, or if it's in a message I have overlooked (a real possibility, I don't think I'll ever catch up!) please point me to that message. I don't feel the particular example you posted in this thread has problems beyond a confusing error message.

I've already noted some in a prior post in this thread (today). For more, you might refer to the very examples you used against the issues brought up via the old "alias peek-a-boo game" thread. Those examples are both speculative cases where issues arise over implicit-casting combined with method-overloading. For want of a better term, I'll refer to the latter as ICMO.

At that time, you argued such examples were the reason why it was so important for any and all overloaded superclass-methods be /deliberately hidden/ from any subclass ~ and using an 'alias' to (dubiously, IMO) bring them back into scope was the correct solution. I noted at that time my suspicion this usage of 'alias' was a hack. Indeed, it is used as part of an attempt to cover up some of the issues surrounding ICMO.

To reiterate some of the other issues, here they are, copied from the prior response to your post (the first one is obviously related to what's described above):

===========================================================

Just look at what we (the users) have to do with 'alias' to bring back superclass methods that have had their name overloaded in the subclass? Each of those contrived old examples as to /why/ alias is supposedly necessary are based upon the fragility of combined implicit-casting & overloading within various scenarios.

Here's another old & trivial example of this type of bogosity:

    void print (char[] s);
    void print (wchar[] s);

    {print ("bork");}

Because the char literal can be implicitly cast to wchar[], the compiler fails. One has to do this instead:

    print (cast(char[]) "bork");

This is daft, Walter. And it hasn't been fixed a year later.

Oh, and let's not forget that D will implicitly, and silently, convert a long argument into an int or byte. Not even a peep from the compiler. MSVC 6 certainly will not allow one to be so reckless. If I drove my car in a similar manner, I'd likely be put in jail.

There's many, many more examples.
Here's another old one that I always found somewhat amusing: news:cgat6b$1424$1 digitaldaemon.com

I'm not saying that implicit-casting is necessarily bad, and I'm not saying that method-overloading is bad. I am saying the combination of the two leads to all sorts of nastiness; some of which you've attempted to cover-up by implicitly hiding superclass method-names overloaded by a subclass, and which you just might cover up with an explicit-type-prefix on "literal" strings (such as w"wide-string").

It shouldn't have to be like this. Surely there's a more elegant solution all round, both for the compiler and for the user? Less special-cases is better for everyone.

- Kris
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message
news:cvu5go$1k2j$1 digitaldaemon.com...
 I've already noted some in a prior post in this thread (today). For more, you
 might refer to the very examples you used against the issues brought up via
 the old "alias peek-a-boo game" thread. Those examples are both speculative
 cases where issues arise over implicit-casting combined with
 method-overloading. For want of a better term, I'll refer to the latter as
 ICMO.

 At that time, you argued such examples were the reason why it was so important
 for any and all overloaded superclass-methods be /deliberately hidden/ from
 any subclass ~ and using an 'alias' to (dubiously, IMO) bring them back into
 scope was the correct solution. I noted at that time my suspicion this usage
 of 'alias' was a hack. Indeed, it is used as part of an attempt to cover up
 some of the issues surrounding ICMO.

Yes, we discussed that at length. I don't think either of us have changed our minds.
 Here's another old & trivial example of this type of bogosity:

     void print (char[] s);
     void print (wchar[] s);

     {print ("bork");}

 Because the char literal can be implicitly cast to wchar[], the compiler
 fails. One has to do this instead:

     print (cast(char[]) "bork");

 This is daft, Walter. And it hasn't been fixed a year later.

That kind of thing happens when top down type inference meets bottom up type inference. It's on the list of things to fix. (It's technically not really an implicit casting issue. A string literal begins life with no type, the type is inferred from its context. This falls down in the overloading case you mentioned.)
 Oh, and let's not forget that D will implicitly, and silently, convert a long
 argument into an int or byte. Not even a peep from the compiler. MSVC 6
 certainly will not allow one to be so reckless.

Actually, it's the defined, standard behavior of both C and C++ to do this. MSVC 6 will allow it without a peep unless you crank up the warning level. At one point I had such disallowed, but it produced more error messages than it was worth, requiring the insertion of many daft cast expressions (and I know how you don't like them with string literals!).

Unlike C and C++, implicit conversions of floating point expressions to integral ones are disallowed in D, and that doesn't seem to cause any problems.
 There's many, many more examples. Here's another old one that I always found
 somewhat amusing: news:cgat6b$1424$1 digitaldaemon.com

That's the same issue as you brought up above - which happens first, name lookup or overloading? C++ does it one way, Java the other. D does it the C++ way. C++ can achieve the Java semantics with a 'using' declaration, D provides the same with 'alias'. In fact, D goes further by offering complete control over which functions are overloaded with which by using alias. Java has no such capability.
 I'm not saying that implicit-casting is necessarily bad, and I'm not saying
 that method-overloading is bad. I am saying the combination of the two leads
 to all sorts of nastiness; some of which you've attempted to cover-up by
 implicitly hiding superclass method-names overloaded by a subclass,

I suppose if you look at it from a Java perspective, it might seem like a coverup. But if you look at it from a C++ perspective, it behaves just as one would expect and be used to. Implicit casting has been in C and C++ for a very long time, and it's necessary to cope with the plethora of basic types (Java has a sharply reduced number of basic types, reducing the need for implicit conversions.)
 and which you just
 might cover up with an explicit-type-prefix on "literal" strings (such as
 w"wide-string").

C/C++ use L"wide string".
 It shouldn't have to be like this. Surely there's a more elegant solution all
 round, both for the compiler and for the user? Less special-cases is better
 for everyone.

Nobody likes special cases, but they are an inevitable result of conflicting requirements.
Feb 27 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 Oh, and let's not forget that D will implicitly, and silently, convert a long
 argument into an int or byte. Not even a peep from the compiler. MSVC 6
 certainly will not allow one to be so reckless.

Actually, it's the defined, standard behavior of both C and C++ to do this. MSVC 6 will allow it without a peep unless you crank up the warning level.

That being the case, why have all those commercially-minded compiler vendors added in warnings to detect them? What makes you/D different from them?
 At one point I had such disallowed, but it produced more error messages than
 it was worth, requiring the insertion of many daft cast expressions

You say they're daft, and yet we're denied any mechanism to determine the daftness for ourselves (save spending the very significant amount of time in hacking away at the compiler front-end). When we discuss the issue, you present us with your favoured examples, but this might well be only 1% of the totality of contexts for narrowing conversions.

Unless and until we have a *standard* compiler option that turns a narrowing conversion into an error, or at least a warning, such that we can judge the situation for ourselves, we're just going to continue to think that you're wrong and partial.
 (and I
 know how you don't like them with string literals!).

Disingenuous. Regan: please let me know (privately, if you wish) what kind of fallacy this is. :-)
 Unlike C and C++,
 implicit conversions of floating point expressions to integral ones is
 disallowed in D, and that doesn't seem to cause any problems.

That's good. But, erm, so what? You're telling us you're fixing some bad things in C/C++ and not some others. Or that floating=>integral narrowing is wrong in *all* cases, and integral=>integral narrowing is right in *all* cases. Doesn't stand up to common sense. Without being able to test it out for ourselves, we're left with only two options: Walter is smarter than us in all circumstances, or Walter is wrong but not listening/caring.

For all that warnings and options are lambasted by one or two people - and Bob knows I don't want to get a GCC level of insanity - there seems no acknowledgement of the fact that they afford the *user* the ability to make choices for themselves.

Given that the inventor of C++ (and the stds members) did not foresee the incredibly arcane yet powerful uses for templates, is it so hard to accept that one man, however bright (pun intended, but not sarcastically), cannot see the extent of the D world?
 I suppose if you look at it from a Java perspective, it might seem like a
 coverup. But if you look at it from a C++ perspective, it behaves just as one
 would expect and be used to. Implicit casting has been in C and C++ for a
 very long time

Yes, and it totally frigging stinks. Right now I'm running the last couple of cycles on STLSoft 1.8.3, and am building unittest projects again and again with 19 C/C++ compilers. *Many* times most of the compilers have let me through with 0E0W, only for one to reject (by dint of -wx) an implicit integral conversion. In at least 50% of these cases, I am bloody glad to have had at least one compiler that has sussed it out, so I can (re-)consider the ramifications. In many cases, say at least 5%, this has shown a bug.

So, once again, I claim that your insight and experience, though huge, and probably much larger than mine, is not total. Therefore, the all-or-nothing approach is flawed.
 Surely there's a more elegant solution all round, both for the compiler and
 for the user? Less special-cases is better for everyone.

Nobody likes special cases, but they are an inevitable result of conflicting requirements.

Exactly. And if there is even 1 special case in a particular area, intransigence on the part of the compiler is the absolute wrong approach. Maybe we're just going to have to wait until there are several D compilers (for a given platform) such that one can only trust one's code when it's been multiplexed through a bunch of them. Don't see how that's any kind of evolution ...
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
Implicit narrowing conversions:

"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cvue5s$1sh4$1 digitaldaemon.com...
 Or that floating=>integral
 narrowing is wrong in *all* cases, and integral=>integral narrowing  is
 right in *all* cases. Doesn't stand up to common sense.

Of course it doesn't stand up to common sense, and it is not what I said. What I did say, perhaps indirectly, is that implicit floating->integral conversions in the wild tend to nearly always be a mistake. For the few cases where it is legitimate, a cast can be used. But the reverse is true for other integral conversions. They happen frequently, legitimately. Having to insert casts for them means you'll wind up with a lot of casts in the code. Having lots of casts in the code results in *less* type safety, not more. Having language rules that encourage lots of casting is not a good idea. Casts should be fairly rare.

It isn't all or nothing. It's a judgement call on the balance between the risk of bugs from a narrowing conversion and the risk of bugs from a cast masking a totally wrong type. With integral conversions, it falls on the one side, with floating conversions, it falls on the other.

Warnings:

I'm pretty familiar with warnings. If a compiler has 15 different warnings, then it is compiling 15! (15 factorial) different languages. Warnings tend to be unique to each compiler vendor (sometimes even contradictory), making more and more different languages that call themselves C++. Even worse, with each warning often comes a pragma to turn it off here and there, because, well, warnings are inherently wishy-washy and often wrong.

Now I download Bob's Cool Project off the net, and compile it. I get a handful of warnings. What do I do about them? I don't know Bob's code, I don't know if they're bugs or not. I just want a clean compile so I can use the program. If the language behaves consistently, I've got a better chance of that happening. Warnings make a language inconsistent.

There are a lot of things Java has done wrong, and a lot they've done right. One of the things they've done right is specify the language so it is consistent from platform to platform. A Java program compiles successfully or it does not, there are no warnings. If it compiles successfully and runs correctly on one platform, chances are far better than they are for C++ that it will compile and run correctly without modification on another platform. You know as well as anyone how much work it is to get a complex piece of code to run on multiple platforms with multiple C++ compilers (though this has gotten better in recent years).

The idea has come up before of creating a "lint" program out of the D front end sources. It's an entirely reasonable thing to do, and I encourage anyone who wants to do it. It would serve as a nice tool for proving out ideas about what should be an error and what shouldn't.
Feb 28 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:cvuku9$24j5$1 digitaldaemon.com...
 But the reverse is true for other integral conversions. They happen
 frequently, legitimately. Having to insert casts for them means you'll wind
 up with a lot of casts in the code.

An example is in order:

    byte b;
    ...
    b = b + 1;

That gives an error if implicit narrowing conversions are disallowed. You'd have to write:

    b = cast(byte)(b + 1);

or, if writing generic code:

    b = cast(typeof(b))(b + 1);

Yuk.
Feb 28 2005
next sibling parent reply Derek <derek psych.ward> writes:
On Mon, 28 Feb 2005 02:01:18 -0800, Walter wrote:

 "Walter" <newshound digitalmars.com> wrote in message
 news:cvuku9$24j5$1 digitaldaemon.com...
 But the reverse is true for other integral conversions. They happen
 frequently, legitimately. Having to insert casts for them means you'll wind
 up with a lot of casts in the code.

 An example is in order:

     byte b;
     ...
     b = b + 1;

 That gives an error if implicit narrowing conversions are disallowed. You'd
 have to write:

     b = cast(byte)(b + 1);

 or, if writing generic code:

     b = cast(typeof(b))(b + 1);

Unless the '1' was interpreted to be a byte rather than an int. DMD gives up trying too soon. A '1' is a member of many integer subsets, not just the 'int' subset.

-- 
Derek
Melbourne, Australia
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek" <derek psych.ward> wrote in message
news:1uxqa6iv5mgq3.hjei032qxzwk.dlg 40tude.net...
 On Mon, 28 Feb 2005 02:01:18 -0800, Walter wrote:

 "Walter" <newshound digitalmars.com> wrote in message
     b = cast(typeof(b))(b + 1);

 Unless the '1' was interpreted to be a byte rather than an int. DMD gives up
 trying too soon. A '1' is a member of many integer subsets, not just the
 'int' subset.

Even if the '1' was typed as a byte, such as with a cast, you'd still have to write:

    b = cast(typeof(b))(b + cast(typeof(b))1);

because of the integral promotion rules: given (e1 + e2), both are promoted to ints or uints if they are integral types smaller than ints or uints. The integral promotion rules are taken from C, and I believe it would be a huge mistake to try and rewrite them.
Feb 28 2005
parent reply Derek <derek psych.ward> writes:
On Mon, 28 Feb 2005 10:06:58 -0800, Walter wrote:

 "Derek" <derek psych.ward> wrote in message
 news:1uxqa6iv5mgq3.hjei032qxzwk.dlg 40tude.net...
 On Mon, 28 Feb 2005 02:01:18 -0800, Walter wrote:

 "Walter" <newshound digitalmars.com> wrote in message
     b = cast(typeof(b))(b + 1);

 Unless the '1' was interpreted to be a byte rather than an int. DMD gives up
 trying too soon. A '1' is a member of many integer subsets, not just the
 'int' subset.

 Even if the '1' was typed as a byte, such as with a cast, you'd still have to
 write:

     b = cast(typeof(b))(b + cast(typeof(b))1);

 because of the integral promotion rules: given (e1 + e2), both are promoted
 to ints or uints if they are integral types smaller than ints or uints. The
 integral promotion rules are taken from C, and I believe it would be a huge
 mistake to try and rewrite them.

I'm also looking forward to the day when somebody designs the language which learns from *all* the mistakes of C, C++ and D.

-- 
Derek
Melbourne, Australia
Feb 28 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek wrote:

 I'm also looking forward to the day when somebody designs the language
 which learns from *all* the mistakes of C, C++ and D.

++D, right ? (Since the green E seems to be taken already, which would otherwise be a natural progression from red D)
 Why E?
 
 Java, Perl, Python, C++, TCL, and on and on. Don't we have enough
 languages already? How could we justify bringing yet another
 programming language into the world?

(from http://www.skyhunter.com/marcs/ewalnut.html, with apologies) --anders
Feb 28 2005
prev sibling next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 An example is in order:
 
     byte b;
     ...
     b = b + 1;
 
 That gives an error if implicit narrowing conversions are disallowed. You'd
 have to write:
 
     b = cast(byte)(b + 1);
 
 or, if writing generic code:
 
     b = cast(typeof(b))(b + 1);
 
 Yuk.

I assume this is because of Natural number literals being integers? Now, there's a gratuitous widening implicit cast -- which then results in the narrowing cast later, right?

I'd wish such literals get implicitly cast to the type "needed". Or better yet, cast to the smallest possible type.
Feb 28 2005
parent "Walter" <newshound digitalmars.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:4222F9DA.1020707 nospam.org...
 I assume this is because of Natural number literals being integers?

No. The same issue happens even with:

    byte a,b;
    b = a + b;

It's because of the "default integral promotions".
 Now,
 there's a gratuituous widening implicit cast -- which then results to
 the narrowing cast later, right?

 I'd wish such literals get implicitly cast to the type "needed".
 Or better yet, cast to the smallest possible type.

What you're suggesting is called top down type inference, rather than bottom up type inference. Currently, there are a couple cases in D (and C++) where top down inference is used. The two together cause conflicts. It causes a lot of trouble with overloading, and it may not be possible to ever get it working in a reasonable manner.
Feb 28 2005
prev sibling next sibling parent reply Matthias Becker <Matthias_member pathlink.com> writes:
 But the reverse is true
 for other integral conversions. They happen frequently, legitimately.

 to insert casts for them means you'll wind up with a lot of casts in the
 code.

 An example is in order:

     byte b;
     ...
     b = b + 1;

 That gives an error if implicit narrowing conversions are disallowed. You'd
 have to write:

     b = cast(byte)(b + 1);

 or, if writing generic code:

     b = cast(typeof(b))(b + 1);

 Yuk.

The type of string literals is inferred by the context. Couldn't this be done with integer literals, too? If byte + byte results in byte the above wouldn't be a problem.

-- 
Matthias Becker
Feb 28 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Matthias Becker wrote:

 The type of string literals is inferred by the context. Couldn't this be done
 with integer literals, too? If byte + byte results in byte the above wouldn't
 be a problem.

Converting string literals is not narrowing, it's round-trip... Besides, you still have to cast strings used as parameters if both char[] and wchar[] method overloads have been provided ? --anders
Feb 28 2005
parent reply Matthias Becker <Matthias_member pathlink.com> writes:
In article <cvuvsd$2gdd$1 digitaldaemon.com>,
=?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= says...
Matthias Becker wrote:

 The type of string literals is inferred by the context. Couldn't this be done
 with integer literals, too? If byte + byte results in byte the above wouldn't
 be a problem.

Converting string literals is not narrowing, it's round-trip...

Inferring the type of strings isn't narrowing, right, but inferring the type of integers isn't either. Inferring the type has nothing to do with narrowing at all.

-- 
Matthias Becker
Feb 28 2005
parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Matthias Becker wrote:

 Inferring the type of strings isn't narrowing, right, but inferring the type
 of integers isn't either. Inferring the type has nothing to do with narrowing
 at all.

I don't think that D infers the type of integer literals at all... They're all "int" due to C tradition ? Maybe stupid, but so be it.

Strings are all char[], but can be interpreted as wchar[] and dchar[] too for special uses (although it is only implicit for string literals). If D should have some way of specifying smaller literals too, it would have to be added to the language. Right now, they're either int or long.

Like Walter said: "a string literal begins life with no type", but integer literals (without L) still begin life being of the type: int. Are you saying all integer literals should start without a type as well ? (equally likely to convert into byte/short/int, depending on context)

Maybe I am misunderstanding something, but I think the casts work... The integer promotions are part of the C legacy built into the design. They also happen to map nicely onto 32-bit registers, which my CPU has. (the new computer actually has 64-bit and 128-bit registers too :-D )

--anders
Feb 28 2005
parent "Walter" <newshound digitalmars.com> writes:
"Anders F Björklund" <afb algonet.se> wrote in message
news:cvv5jq$2m2o$1 digitaldaemon.com...
 Like Walter said: "a string literal begins life with no type", but
 integer literals (without L) still begins life being of the type: int.

That's right. It's a bit subtle, but a key difference.
Feb 28 2005
prev sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Matthias Becker" <Matthias_member pathlink.com> wrote in message
news:cvuuh5$2evh$1 digitaldaemon.com...
 But the reverse is true
 for other integral conversions. They happen frequently, legitimately.

 to insert casts for them means you'll wind up with a lot of casts in the
 code.

 An example is in order:

     byte b;
     ...
     b = b + 1;

 That gives an error if implicit narrowing conversions are disallowed. You'd
 have to write:

     b = cast(byte)(b + 1);

 or, if writing generic code:

     b = cast(typeof(b))(b + 1);

 Yuk.

 The type of string literals is inferred by the context. Couldn't this be done
 with integer literals, too? If byte + byte results in byte the above wouldn't
 be a problem.

Alas, it's a little more complex than that. byte+byte would have to result in ushort, since byte+byte can overflow byte. This in itself is an implicit narrowing conversion.

But the example itself is bogus. Let's look at it again, with a bit more flesh:

    byte b;

    b = 255;

    b = b + 1;

Hmmm .... do we still want that implicit behaviour? I think not!!
Feb 28 2005
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 But the example itself is bogus. Let's look at it again, with a bit more flesh:
 
     byte b;
 
     b = 255
 
     b = b + 1;
 
 Hmmm .... do we still want that implicit behaviour? I think not!!

    int big;

    big = A_BIG_LITERAL_VALUE;

    big = big + 1; // or any value at all, actually.

Where's the difference? If somebody typed b as byte, then it's his own decision. Anyone using a small integral type has to know they can overflow -- this is no place for nannying. Heck, any integral type might overflow.

(What I'd love is a switch for non-release code to check for overflow!)
Feb 28 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:422311B7.3010701 nospam.org...
 Matthew wrote:
 But the example itself is bogus. Let's look at it again, with a bit 
 more flesh:

     byte b;

     b = 255

     b = b + 1;

 Hmmm .... do we still want that implicit behaviour? I think not!!

     int big;

     big = A_BIG_LITERAL_VALUE;

     big = big + 1; // or any value at all, actually.

 Where's the difference?

In principle, none. In practice:

- the larger a type, the less likely it is to overflow with expressions involving literals
- the larger a type, the fewer types there are larger than it
- there has to be some line drawn.
 If somebody typed b as byte, then it's his own decision. Anyone using 
 a small integral type has to know they can overflow -- this is no 
 place for nannying.

That's just total crap. Calling it nannying doesn't make it any less important, it just shows that your argument is in need of some emotional bolstering.

I get back to the basic point: a language that allows the following:

    long l = 12345678901234;
    byte b = l;
    short s = l;

without allowing *any* feedback from a standards compliant compiler is a non-starter.
 Heck, any integral type might overflow.

True. There are limits to all principles, and imperfections to all notions. However, saying all integral=>integral conversions are *by definition* valid is not the pragmatic answer to our theoretical limitations. It's ostrich stuff.
 (What I'd love is a switch for non-release code to check for 
 overflow!)

Why non-release?
Feb 28 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:422311B7.3010701 nospam.org...
 
Matthew wrote:

But the example itself is bogus. Let's look at it again, with a bit 
more flesh:

    byte b;

    b = 255

    b = b + 1;

Hmmm .... do we still want that implicit behaviour? I think not!!

     int big;

     big = A_BIG_LITERAL_VALUE;

     big = big + 1; // or any value at all, actually.

 Where's the difference?

 In principle, none. In practice:

 - the larger a type, the less likely it is to overflow with expressions
   involving literals
 - the larger a type, the fewer types there are larger than it
 - there has to be some line drawn.

So we sweep it under the rug.
Heck, any integral type might overflow.

 True. There are limits to all principles, and imperfections to all notions.
 However, saying all integral=>integral conversions are *by definition* valid
 is not the pragmatic answer to our theoretical limitations. It's ostrich
 stuff.

I was suggesting the opposite.
(What I'd love is a switch for non-release code to check for 
overflow!)

Why non-release?

For practical matters, and speed.
Feb 28 2005
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Georg Wrede wrote:
 
 
 Matthew wrote:
 
 But the example itself is bogus. Let's look at it again, with a bit 
 more flesh:

     byte b;

     b = 255

     b = b + 1;

 Hmmm .... do we still want that implicit behaviour? I think not!!

     int big;

     big = A_BIG_LITERAL_VALUE;

     big = big + 1; // or any value at all, actually.

 Where's the difference? If somebody typed b as byte, then it's his own
 decision. Anyone using a small integral type has to know they can overflow --
 this is no place for nannying. Heck, any integral type might overflow.

 (What I'd love is a switch for non-release code to check for overflow!)

If you want it to wrap, you can declare a variable of a type that wraps. If you want an error, you can declare a variable of a type which has a limited range. (OTOH, if you want type promotion, you've got to take special steps to allow it...)
Feb 28 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cvv05b$2gnu$2 digitaldaemon.com...
 But the example itself is bogus. Let's look at it again, with a bit more

     byte b;

     b = 255

     b = b + 1;

 Hmmm .... do we still want that implicit behaviour? I think not!!

Yes, we do want it. Consider the following:

    byte b;
    int a,c;
    ...
    a = b + 1 + c;

Do you really want the subexpression (b + 1) to "wrap around" on byte overflow? No.

The notion of the default integral promotions is *deeply* rooted in the C psyche. Breaking this would mean that quite a lot of complicated, debugged, working C expressions will subtly break if transliterated into D. People routinely use the shorter integral types to save memory, and the expressions using them *expect* them to be promoted to int. D will just get thrown out the window by any programmer having to write:

    a = cast(int)b + 1 + c;

The default integral promotion rules are what makes the plethora of integral types in C (and D) usable.
Feb 28 2005
next sibling parent reply xs0 <xs0 xs0.com> writes:
Walter wrote:
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cvv05b$2gnu$2 digitaldaemon.com...
 
But the example itself is bogus. Let's look at it again, with a bit more

flesh:
    byte b;

    b = 255

    b = b + 1;

Hmmm .... do we still want that implicit behaviour? I think not!!

Yes, we do want it. Consider the following:

    byte b;
    int a,c;
    ...
    a = b + 1 + c;

Do you really want the subexpression (b + 1) to "wrap around" on byte overflow? No. The notion of the default integral promotions is *deeply* rooted in the C psyche. Breaking this would mean that quite a lot of complicated, debugged, working C expressions will subtly break if transliterated into D. People routinely use the shorter integral types to save memory, and the expressions using them *expect* them to be promoted to int. D will just get thrown out the window by any programmer having to write:

    a = cast(int)b + 1 + c;

The default integral promotion rules are what makes the plethora of integral types in C (and D) usable.

Wouldn't it be possible to produce an error only in the case that the result of an expression is stored (or otherwise used) in a type smaller than any other type that was involved in the expression, with constants handled as being the smallest type they can be that preserves their value? (what an ugly sentence :)

I think I understand the need for default integral promotions and I don't think this would hurt that.. In other words, the expressions still get evaluated like they do now; the only thing that changes is that the maximum size of all operands in an expression is checked, and if the assignee (left part of =, or function parameter type) is smaller, it is an error, otherwise it is not.. For example:

short=short+5;     // OK - 5 fits into short
short=short+50000; // error, because 50000 is int
int=byte+1+int;    // OK, this is the above example
int=100+100;       // OK, you get 200, because expansion-to-int is still done
                   // even though 100s could be considered as bytes
short=int+short;   // error - int is larger than short in the result
short=30000+30000; // error - constant expression is evaluated as ints
                   // and it doesn't fit in short, even though
                   // both 30000s do

xs0
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 short=short+5; // OK - 5 fits into short

Not necessarily
 int=byte+1+int; // OK, this is the above example

Not necessarily
Feb 28 2005
parent reply xs0 <xs0 xs0.com> writes:
Matthew wrote:
short=short+5; // OK - 5 fits into short

Not necessarily
int=byte+1+int; // OK, this is the above example

Not necessarily

And your point is? That + should be forbidden, because the result may overflow?

If int=int+int+int is deemed completely acceptable (or don't you agree even with this?), I don't see how int=byte+5+int can be not acceptable?

xs0
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"xs0" <xs0 xs0.com> wrote in message 
news:d00dq4$14vh$1 digitaldaemon.com...
 Matthew wrote:
short=short+5; // OK - 5 fits into short

Not necessarily
int=byte+1+int; // OK, this is the above example

Not necessarily

And your point is? That + should be forbidden, because the result may overflow?

No, my point was to correct a mistake in the post.
 If int=int+int+int is deemed completely acceptable (or don't you agree 
 even with this?), I don't see how int=byte+5+int can be not 
 acceptable?

These things are what's under debate. I don't believe I've defined a precise and overarching policy. Indeed, I'm suggesting that stipulating such a policy - as is currently the case with D's "all integral=>integral conversions are valid" - is a mistake.

So, pay attention, don't misrepresent, and save your sarcasm for when you have some ammo.. In the meanwhile why not try and contribute to the debate?
Feb 28 2005
parent xs0 <xs0 xs0.com> writes:
Sorry for the tone, had a long day at work :)

I was trying to contribute, I just fail to see what you mean with "not 
necessarily" (i.e. why not?), and you didn't even address the basic 
suggestion I made..


xs0


 These things are what's under debate. I don't believe I've defined a 
 precise and overarching policy. Indeed, I'm suggesting that stipulating 
 such a policy - as is currently the case with D's "all 
 integral=>integral conversions are valid" - is a mistake.
 
 So, pay attention, don't misrepresent, and save your sarcasm for when 
 you have some ammo.. In the meanwhile why not try and contribute to the 
 debate?

Mar 01 2005
prev sibling parent MAtthias Becker <MAtthias_member pathlink.com> writes:
 But the example itself is bogus. Let's look at it again, with a bit more
 flesh:

     byte b;

     b = 255

     b = b + 1;

 Hmmm .... do we still want that implicit behaviour? I think not!!

Yes, we do want it. Consider the following:

    byte b;
    int a,c;
    ...
    a = b + 1 + c;

Do you really want the subexpression (b + 1) to "wrap around" on byte overflow? No. The notion of the default integral promotions is *deeply* rooted in the C psyche. Breaking this would mean that quite a lot of complicated, debugged, working C expressions will subtly break if transliterated into D. People routinely use the shorter integral types to save memory, and the expressions using them *expect* them to be promoted to int. D will just get thrown out the window by any programmer having to write:

    a = cast(int)b + 1 + c;

The default integral promotion rules are what makes the plethora of integral types in C (and D) usable.

# float foo, bar;
# ...
# foo = bar * 2.5f;

I use a suffix to indicate that 2.5 has type float so I don't get a warning from my C++ compiler. I could do the same in D:

# byte b;
# int a,c;
# ...
# a = b + 1i + c;

OK, your next example will be:

# byte b,c;
# int a,d;
# ...
# a = b + c + d;

But what about:

# int b,c;
# long a,d;
# ...
# a = b + c + d;

Do I have to insert a cast to prevent 'b + c' from wrapping around?

-- Matthias Becker
Mar 01 2005
prev sibling parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message
news:cvuq59$2agl$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> wrote in message
 news:cvuku9$24j5$1 digitaldaemon.com...
 But the reverse is true
 for other integral conversions. They happen frequently, legitimately.

 to insert casts for them means you'll wind up with a lot of casts in the
 code.

An example is in order:

    byte b;
    ...
    b = b + 1;

That gives an error if implicit narrowing conversions are disallowed. You'd have to write:

    b = cast(byte)(b + 1);

or, if writing generic code:

    b = cast(typeof(b))(b + 1);

Yuk.

Sorry, but that's only required if all integral expressions must be promoted to (at least) int.

Who says that's the one true path? Why is int the level at which it's drawn? Why can there not be a bit more contextual information applied? Is there no smarter way of dealing with it?
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cvv05a$2gnu$1 digitaldaemon.com...
 Sorry, but that's only required if all integral expressions must be

Right.
 Who says that's the one true path?

Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
 Why is int the level at which it's drawn?

Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different from how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.)

CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.

Note that when the initial C standard was being drawn up, there was an unfortunate reality that there were two main branches of default integral promotions - the "sign preserving" and "value preserving" camps. They were different in some edge cases. One way had to be picked, the newly wrong compilers had to be fixed, and some old code would break. There was a lot of wailing and gnashing and a minor earthquake about it, but everyone realized it had to be done. That was a *minor* change compared to throwing out default integral promotions.
 Why can there not be a bit more contextual information applied?
 Is there no smarter way of dealing with it?

Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.

D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard to not subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is.
Feb 28 2005
next sibling parent reply brad domain.invalid writes:
Just a comment from the sidelines - feel free to ignore.
To me it doesn't look like Walter is in any hurry to flag narrowing 
casts as warnings (or add any warnings into D at all).  And it doesn't 
look like everyone else is willing to give up without a fight.  Why not 
use this as an opportunity to get started on a D lint program?  There 
are enough skilled people with a stake in this particular issue alone to 
make it worth while.
Walter - how hard, in your opinion, is it to add lint like checking into 
the GPL D frontend?  Can it be done in a painless way so that future 
updates to the frontend have low impact on the lint code?

Brad
Feb 28 2005
next sibling parent "Walter" <newshound digitalmars.com> writes:
<brad domain.invalid> wrote in message
news:cvvrs8$e8v$1 digitaldaemon.com...
 Just a comment from the sidelines - feel free to ignore.
 To me it doesn't look like Walter is in any hurry to flag narrowing
 casts as warnings (or add any warnings into D at all).  And it doesn't
 look like everyone else is willing to give up without a fight.  Why not
 use this as an opportunity to get started on a D lint program?  There
 are enough skilled people with a stake in this particular issue alone to
 make it worth while.
 Walter - how hard, in your opinion, is it to add lint like checking into
 the GPL D frontend?

Not hard. Of course, that depends on how far one wants to go with it.
 Can it be done in a painless way so that future
 updates to the frontend have low impact on the lint code?

Painless, probably not. But I try to be careful not to introduce wholesale or gratuitous changes, because I don't want to upset GDC. If the lint additions are marked in the source, it probably wouldn't be too hairy to fold them into future versions. It probably also would only be necessary to do this occasionally, not with every update to DMD.
Feb 28 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
'cause if it's not part of the language spec, it might as well not 
exist.

Several commentators (including IIRC, the original authors of the 
language; must trip off and check my copy of The Art Of UNIX Programming 
...) have observed that the biggest single mistake in the development of 
the C language was having, for reasons of program size, the lint 
functionality as a separate program. Most people don't use it.

Consider the case of C++, which is simply too complicated for an 
effective lint.

And anyway, there's the more fundamental thing that if any item's going 
to be effectively mandatory in lint, it should be in the compiler in the 
first place. As I've said before, which competent C++ engineer does not 
set warnings to max?

<brad domain.invalid> wrote in message 
news:cvvrs8$e8v$1 digitaldaemon.com...
 Just a comment from the sidelines - feel free to ignore.
 To me it doesn't look like Walter is in any hurry to flag narrowing 
 casts as warnings (or add any warnings into D at all).  And it doesn't 
 look like everyone else is willing to give up without a fight.  Why 
 not use this as an opportunity to get started on a D lint program? 
 There are enough skilled people with a stake in this particular issue 
 alone to make it worth while.
 Walter - how hard, in your opinion, is it to add lint like checking 
 into the GPL D frontend?  Can it be done in a painless way so that 
 future updates to the frontend have low impact on the lint code?

 Brad 

Feb 28 2005
next sibling parent reply brad domain.invalid writes:
Matthew wrote:
 'cause if it's not part of the language spec, it might as well not 
 exist.
 
 Several commentators (including IIRC, the original authors of the 
 language; must trip off and check my copy of The Art Of UNIX Programming 
 ...) have observed that the biggest single mistake in the development of 
 the C language was having, for reasons of program size, the lint 
 functionality as a separate program. Most people don't use it.
 
 Consider the case of C++, which is simply too complicated for an 
 effective lint.
 
 And anyway, there's the more fundamental thing that if any item's going 
 to be effectively mandatory in lint, it should be in the compiler in the 
 first place. As I've said before, which competent C++ engineer does not 
 set warnings to max?
 

I agree. However, I think that a lint-like program built into the existing frontend via a #define would provide good quantitative evidence on this issue, and others. At the moment the whole thread boils down to "I like it one way" vs "I like it the other way". With a lint program built into the frontend, I think that you will have harder evidence either way.

For example, add narrowing warnings to the frontend, run it over Mango. If it turns up genuine bugs, then the "warn on narrowing cast" crowd is proved right. If it turns up no actual bugs, well, Walter's case is stronger. At the moment it is just a face off between two (obviously qualified) experts who happen to have differing opinions.

Brad
Feb 28 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
<brad domain.invalid> wrote in message 
news:cvvvqp$jo5$1 digitaldaemon.com...
 Matthew wrote:
 'cause if it's not part of the language spec, it might as well not 
 exist.

 Several commentators (including IIRC, the original authors of the 
 language; must trip off and check my copy of The Art Of UNIX 
 Programming ...) have observed that the biggest single mistake in the 
 development of the C language was having, for reasons of program 
 size, the lint functionality as a separate program. Most people don't 
 use it.

 Consider the case of C++, which is simply too complicated for an 
 effective lint.

 And anyway, there's the more fundamental thing that if any item's 
 going to be effectively mandatory in lint, it should be in the 
 compiler in the first place. As I've said before, which competent C++ 
 engineer does not set warnings to max?

I agree. However, I think that a lint-like program built into the existing frontend via a #define would provide good quantitative evidence on this issue, and others. At the moment the whole thread boils down to "I like it one way" vs "I like it the other way". With a lint program built into the frontend, I think that you will have harder evidence either way.

For example, add narrowing warnings to the frontend, run it over Mango. If it turns up genuine bugs, then the "warn on narrowing cast" crowd is proved right. If it turns up no actual bugs, well, Walter's case is stronger. At the moment it is just a face off between two (obviously qualified) experts who happen to have differing opinions.

Absolutely. There's nothing to be gained from our chronic incapability to resist the temptation of having the last word. Action will prove it.

Since Walter's the compiler writer, and (I presume) could add those warnings in within a day, I think it's appropriate to make the request of him, rather than waiting for the otherwise considerable efforts of any of the rest of us. I would think this would be of benefit to Walter, since he will undoubtedly learn something from it (as will we all!), and also stands to gain shutmouthedness from me and Kris, which he will not otherwise achieve.
Feb 28 2005
prev sibling parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:cvvun0$i78$1 digitaldaemon.com...
 'cause if it's not part of the language spec, it might as well not exist.

Is this an argument for making certain warnings part of the spec? I don't understand the context.
 Several commentators (including IIRC, the original authors of the 
 language; must trip off and check my copy of The Art Of UNIX Programming 
 ...) have observed that the biggest single mistake in the development of 
 the C language was having, for reasons of program size, the lint 
 functionality as a separate program. Most people don't use it.

 Consider the case of C++, which is simply too complicated for an effective 
 lint.

Plus the preprocessor makes linting harder.

I run the MATLAB editor and the Emacs mode for MATLAB with mlint turned on and it lints code on load and save. It totally rocks. Load a file and you immediately see the "problem areas". MATLAB is interpreted so the concept of compiler warnings is moot, but the work-flow is similar enough that integrated linting should have obvious benefits. Plus the linting interface can be more customizable than the warning interface on a compiler.

I'd rather have Walter fix compiler bugs than work on a fancy warning infrastructure (or even a non-fancy warning infrastructure).
 And anyway, there's the more fundamental thing that if any item's going to 
 be effectively mandatory in lint, it should be in the compiler in the 
 first place. As I've said before, which competent C++ engineer does not 
 set warnings to max?

A compiler's verboseness about warnings is entirely up to the compiler writer. If one doesn't like the lack of warnings in one compiler, run or make another. GDC is a good candidate for a starting point. If all D has to worry about is the lack of warnings from the reference compiler then we are in great shape.
 <brad domain.invalid> wrote in message 
 news:cvvrs8$e8v$1 digitaldaemon.com...
 Just a comment from the sidelines - feel free to ignore.
 To me it doesn't look like Walter is in any hurry to flag narrowing casts 
 as warnings (or add any warnings into D at all).  And it doesn't look 
 like everyone else is willing to give up without a fight.  Why not use 
 this as an opportunity to get started on a D lint program? There are 
 enough skilled people with a stake in this particular issue alone to make 
 it worth while.
 Walter - how hard, in your opinion, is it to add lint like checking into 
 the GPL D frontend?  Can it be some in a painless way so that future 
 updates to the frontend have low impact on the lint code?

 Brad


Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 And anyway, there's the more fundamental thing that if any item's 
 going to be effectively mandatory in lint, it should be in the 
 compiler in the first place. As I've said before, which competent C++ 
 engineer does not set warnings to max?

A compiler's verboseness about warnings is entirely up to the compiler writer. If one doesn't like the lack of warnings in one compiler, run or make another. GDC is a good candidate for a starting point. If all D has to worry about is the lack of warnings from the reference compiler then we are in great shape.

But isn't the whole point to avoid dialecticism?

At the rate it's going, DMD is going to be like Borland, an easy to use toy compiler that people learn on, but which is simply not good enough for any serious development. I can't believe that's in line with Walter's ambitions for Digital Mars (as opposed to D).

Anyway, I tire of this, mainly my own voice if I'm honest. I'll try my best to keep my word and shut up.
Feb 28 2005
parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message 
news:d005sa$r86$1 digitaldaemon.com...
 And anyway, there's the more fundamental thing that if any item's going 
 to be effectively mandatory in lint, it should be in the compiler in the 
 first place. As I've said before, which competent C++ engineer does not 
 set warnings to max?

A compiler's verboseness about warnings is entirely up to the compiler writer. If one doesn't like the lack of warnings in one compiler run or make another. GDC is a good candidate for a starting point. If all D has to worry about is the lack of warnings from the reference compiler then we are in great shape.

But isn't the whole point to avoid dialecticism?

If by dialect you mean a different language then I disagree that having two compilers - one with warnings and one without - forks the language into two dialects. It's the same D underneath.
 At the rate it's going, DMD is going to be like Borland, an easy to use 
 toy compiler that people learn on, but which is simply not good enough for 
 any serious development. I can't believe that's in line with Walter's 
 ambitions for Digital Mars (as opposed to D).

I find it hard to believe that DMD lives and dies for "serious development" based on the verbosity of its warnings. I think D will live or die for serious development based on the tools, libraries and community surrounding it.
 Anyway, I tire of this, mainly my own voice if I'm honest. I'll try my 
 best to keep my word and shut up.

Feb 28 2005
prev sibling next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cvvp0e$b66$1 digitaldaemon.com...
 "Matthew" <admin.hat stlsoft.dot.org> wrote in message
 news:cvv05a$2gnu$1 digitaldaemon.com...
 Sorry, but that's only required if all integral expressions must be

Right.
 Who says that's the one true path?

Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
 Why is int the level at which it's drawn?

Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different from how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.)

CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.

Note that when the initial C standard was being drawn up, there was an unfortunate reality that there were two main branches of default integral promotions - the "sign preserving" and "value preserving" camps. They were different in some edge cases. One way had to be picked, the newly wrong compilers had to be fixed, and some old code would break. There was a lot of wailing and gnashing and a minor earthquake about it, but everyone realized it had to be done. That was a *minor* change compared to throwing out default integral promotions.
 Why can there not be a bit more contextual information applied?
 Is there no smarter way of dealing with it?

Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.

D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard to not subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is.

Ok, I'm sold on integral promotion.

Then the answer is that we must have narrowing warnings. I can't see a sensible alternative. Sorry
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cvvs75$ehg$1 digitaldaemon.com...
 Ok, I'm sold on integral promotion.

 Then the answer is that we must have narrowing warnings. I can't see a
 sensible alternative. Sorry

You might be interested in that:

    int foo(char c)
    {
        int i = 300;
        foo(i);
        c = c + 1;
        return c;
    }

compiled with:

    gcc -c -Wall test.c

produces no warnings. Neither does:

    g++ -c -Wall test.c
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cvvtnq$gjk$2 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cvvs75$ehg$1 digitaldaemon.com...
 Ok, I'm sold on integral promotion.

 Then the answer is that we must have narrowing warnings. I can't see 
 a
 sensible alternative. Sorry

You might be interested in that:

    int foo(char c)
    {
        int i = 300;
        foo(i);
        c = c + 1;
        return c;
    }

compiled with:

    gcc -c -Wall test.c

produces no warnings. Neither does:

    g++ -c -Wall test.c

Why? Does that prove you right? Me?

All it says is that G++ is not perfect, which I already knew.

How come sometimes all that's needed to justify something in D is to show something wrong in C++? Not much of an evolution. More like a stasis in a different colour.

:-(
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cvvuq4$ica$1 digitaldaemon.com...
 Why? Does that prove you right? Me?

 All it says is that G++ is not perfect, which I already knew.

 How come sometimes all that's needed to justify something in D is to
 show something wrong in C++? Not much of an evolution. More like a
 stasis in a different colour.

 :-(

I meant it as a comment as to whether the existence of such warnings makes a compiler/language usable for serious development or not, and as a comment on its relative importance to developers in general.

Why doesn't g++ warn on it? It has many other warnings.

1) is it too hard to implement? I doubt it. g++ implements many very hard things.
2) did the keepers of the main g++ branch reject such a warning? Why?
3) has nobody cared enough about that particular issue to implement it?
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d008bq$ukc$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cvvuq4$ica$1 digitaldaemon.com...
 Why? Does that prove you right? Me?

 All it says is that G++ is not perfect, which I already knew.

 How come sometimes all that's needed to justify something in D is to
 show something wrong in C++? Not much of an evolution. More like a
 statis in different colour.

 :-(

I meant it as a comment as to whether the existence of such warnings makes a compiler/language usable for serious development or not, and as a comment on its relative importance to developers in general.

Why doesn't g++ warn on it? It has many other warnings.

1) is it too hard to implement? I doubt it. g++ implements many very hard things.
2) did the keepers of the main g++ branch reject such a warning? Why?
3) has nobody cared enough about that particular issue to implement it?

Maybe they don't care about it (which is what I presume you're getting at)? Other compiler vendors do.

I compile my libraries with various versions of ~10 compiler vendors, including GCC. Every single compiler, even Watcom, has provided me with a warning that I have found to be useful that the others have missed. It irks me that I need to go to such lengths, but I value the facility of doing so.

Your answer to this tiresome activity of the poor unpaid library writer is to just not worry about it. Mine is to pray for (several) standards-violating D compilers. I think both attitudes are regrettable, and I refuse to believe there's not a sensible, and largely agreeable, answer. If there isn't, then one would have to wonder whether that represented an underlying fundamental problem with D.
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d008rv$v8k$1 digitaldaemon.com...
 Maybe they don't care about it (which is what I presume you're getting

I'd consider that the most likely.
 Other compiler vendors do.

I just tried it with Comeau with "strict mode" http://www.comeaucomputing.com/tryitout/. No warnings.
 I compile my libraries with various versions of ~10 compiler vendors,
 including GCC. Every single compiler, even Watcom, has provided me with
 a warning that I have found to be useful that the others have missed. It
 irks me that I need to go to such lengths, but I value the facility of
 doing so.

I'd be curious about the results of that snippet (and the other one I posted) run through each. It'd be fun to see the results.

Every compiler has its own personality, and the warnings reflect the particular biases of its developer. Warnings are not part of the language standard, and there are no standard warnings, despite some attempts to put them in.
 Your answer to this tiresome activity of the poor unpaid library writer
 is to just no worry about it. Mine is to pray for (several)
 standards-violating D compilers. I think both attitudes are regrettable,
 and I refuse to believe there's not a sensible, and largely agreeable,
 answer. If there isn't, then one would have to wonder whether that
 represented an underlying fundamental problem with D.

Why would it be a problem with D and not with the other languages? Anyone implementing a D compiler is free to add warnings to it for whatever they feel is justifiable, just like they are with C++.
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
Using the Arturius Compiler Multiplexer (coming to a download site to 
you sometime this year .... don't hold your breath):


H:\Dev\Test\compiler\D_warnings>arcc.debug 
D_warnings.cpp -c --warning-level=maximum --announce-tools
arcc: Intel C/C++
Tool: bcc/5.6
D_warnings.cpp(5): Warning W8071 : Conversion may lose significant 
digits in function foo(char)
D_warnings.cpp(6): Warning W8071 : Conversion may lose significant 
digits in function foo(char)
Tool: cw/7
### mwcc.exe Compiler:
#    File: D_warnings.cpp
# -----------------------
#       3:  {
# Warning:  ^
#   function has no prototype
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:           ^
#   implicit arithmetic conversion from 'int' to 'char'
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:            ^
#   result of function call is not used
### mwcc.exe Compiler:
#       6:      c = c + 1;
# Warning:               ^
#   implicit arithmetic conversion from 'int' to 'char'
Tool: cw/8
### mwcc.exe Compiler:
#    File: D_warnings.cpp
# -----------------------
#       3:  {
# Warning:  ^
#   function has no prototype
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:           ^
#   implicit arithmetic conversion from 'int' to 'char'
### mwcc.exe Compiler:
#       5:      foo(i);
# Warning:            ^
#   result of function call is not used
### mwcc.exe Compiler:
#       6:      c = c + 1;
# Warning:               ^
#   implicit arithmetic conversion from 'int' to 'char'
Tool: dm/8.40
Tool: dm/beta-sgi
Tool: dm/beta-stlport
Tool: gcc/2.9.5
Tool: gcc/3.2
Tool: gcc/3.3
Tool: gcc/3.4
Tool: icl/6
Tool: icl/7
D_warnings.cpp(2): remark #1418: external definition with no prior 
declaration
Tool: icl/8
D_warnings.cpp(2): remark #1418: external definition with no prior 
declaration
Tool: ow/1.2
[OW;Ruby]: D_warnings.cpp(5): Warning! W716: col(9) integral value may 
be truncated
D_warnings.cpp(5): Warning! W716: col(9) integral value may be truncated
[OW;Ruby]: D_warnings.cpp(6): Warning! W716: col(11) integral value may 
be truncated
D_warnings.cpp(6): Warning! W716: col(11) integral value may be 
truncated
[OW;Ruby]: Error: Compiler returned a bad status compiling 
'D_warnings.cpp'
Tool: vc/2
D_warnings.cpp(5): warning C4244: 'function' : conversion from 'int' to 
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'const int' to 
'char', possible loss of data
Tool: vc/4.2
D_warnings.cpp(5): warning C4244: 'function' : conversion from 'int' to 
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'const int' to 
'char', possible loss of data
Tool: vc/5
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to 
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'int' to 'char', 
possible loss of data
Tool: vc/6
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to 
'char', possible loss of data
D_warnings.cpp(6): warning C4244: '=' : conversion from 'int' to 'char', 
possible loss of data
Tool: vc/7
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to 
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo' 
: recursive on all control paths, function will cause runtime stack 
overflow
Tool: vc/7.1
D_warnings.cpp(5): warning C4242: 'argument' : conversion from 'int' to 
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo' 
: recursive on all control paths, function will cause runtime stack 
overflow
Tool: vc/8
D_warnings.cpp(5): warning C4244: 'argument' : conversion from 'int' to 
'char', possible loss of data
h:\dev\test\compiler\d_warnings\d_warnings.cpp(8): warning C4717: 'foo' 
: recursive on all control paths, function will cause runtime stack 
overflow

H:\Dev\Test\compiler\D_warnings>


"Walter" <newshound digitalmars.com> wrote in message 
news:d00cg1$13em$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:d008rv$v8k$1 digitaldaemon.com...
 Maybe they don't care about it (which is what I presume you're 
 getting

I'd consider that the most likely.
 Other compiler vendors do.

I just tried it with Comeau with "strict mode" http://www.comeaucomputing.com/tryitout/. No warnings.
 I compile my libraries with various versions of ~10 compiler vendors,
 including GCC. Every single compiler, even Watcom, has provided me 
 with
 a warning that I have found to be useful that the others have missed. 
 It
 irks me that I need to go to such lengths, but I value the facility 
 of
 doing so.

I'd be curious of the results of that snippet (and the other one I posted) run through each. It'd be fun to see the results. Every compiler has its own personality, and the warnings reflect the particular biases of its developer. Warnings are not part of the language standard, and there are no standard warnings, despite some attempts to put them in.
 Your answer to this tiresome activity of the poor unpaid library 
 writer
 is to just not worry about it. Mine is to pray for (several)
 standards-violating D compilers. I think both attitudes are 
 regrettable,
 and I refuse to believe there's not a sensible, and largely 
 agreeable,
 answer. If there isn't, then one would have to wonder whether that
 represented an underlying fundamental problem with D.

Why would it be a problem with D and not with the other languages? Anyone implementing a D compiler is free to add warnings to it for whatever they feel is justifiable, just like they are with C++.

Feb 28 2005
next sibling parent "Walter" <newshound digitalmars.com> writes:
Thanks. It is interesting, and a bit surprising. I'm a bit bemused by the
one about no prototype for foo() <g>.
Feb 28 2005
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 1 Mar 2005 11:24:35 +1100, Matthew wrote:

 [snip - Matthew's full compiler-multiplexer output and the Walter/Matthew exchange, quoted verbatim above]
I'm with Matthew on this one. The more warnings the better, so long as they are under the coder's control. I *hate* surprises in my code. If I can explicitly code away a warning (either by 'fixing' the code or telling the compiler that I'm grown up and will accept the consequences), I will do it.

This has also shown me that there are some real wimpy C compilers out there.

-- 
Derek
Melbourne, Australia
1/03/2005 11:44:36 AM
Feb 28 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvvp0e$b66$1 digitaldaemon.com>, Walter says...
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cvv05a$2gnu$1 digitaldaemon.com...
 Sorry, but that's only required if all integral expressions must be

Right.
 Who says that's the one true path?

Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
 Why is int the level at which it's drawn?

Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPUs to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different than how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.)

CPU makers, listening to their marketing departments, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.

This appears to be thoroughly misleading. We're talking about what the compiler enforces as the model of correctness; /not/ how that is translated into machine code. Do you see the distinction, Walter?
Note that when the initial C standard was being drawn up, there was an
unfortunate reality that there were two main branches of default integral
promotions - the "sign preserving" and "value preserving" camps. They were
different in some edge cases. One way had to be picked, the newly wrong
compilers had to be fixed, and some old code would break. There was a lot of
wailing and gnashing and a minor earthquake about it, but everyone realized
it had to be done. That was a *minor* change compared to throwing out
default integral promotions.

Great stuff for the talk show, but unrelated. D is not C ~ you've already broken that concept in so many ways.
 Why can there not a bit more contextual information applied?
 Is there no smarter way of dealing with it?

Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.

On the face of it, the contrary would be true. However, that hardly answers Matthew's question about whether there's another approach. Instead it seems to indicate that you don't know of one, or are not open to one. Or am I misreading this?
CPUs are designed to execute C semantics
efficiently. That pretty much nails down accepting C's operator precedence
and default integral promotion rules as is.

Misleading and invalid. Your argument fully binds the language model to the machine-code model. There's no need for them to be as tightly bound as you claim. Sure, there's tension between the two ~ but you are claiming it's all or nothing. It may have been a number of years since I wrote a compiler, but I don't buy this nonsense. And neither should anyone else.

- Kris
Feb 28 2005
next sibling parent Derek <derek psych.ward> writes:
On Mon, 28 Feb 2005 20:22:04 +0000 (UTC), Kris wrote:

 In article <cvvp0e$b66$1 digitaldaemon.com>, Walter says...
"Matthew" <admin.hat stlsoft.dot.org> wrote in message
news:cvv05a$2gnu$1 digitaldaemon.com...
 Sorry, but that's only required if all integral expressions must be

Right.
 Who says that's the one true path?

Breaking that would make transliterating C and C++ code into D an essentially impractical task. It's even worse than trying to "fix" C's operator precedence levels.
 Why is int the level at which it's drawn?

Because of the long history of the C notion of 'int' being the natural size of the underlying CPU, and the incestuous tendency of CPU's to be designed to execute C code efficiently. (Just to show what kind of trouble one can get into with things like this, the early Java spec required floating point behavior to be different than how the most popular floating point hardware on the planet wanted to do it. This caused Java implementations to be either slow or non-compliant.) CPUs makers, listening to their marketing department, optimize their designs for C, and that means the default integral promotion rules. Note that doing short arithmetic on Intel CPUs is slow and clunky.

This appears to be thoroughly misleading. We're talking about what the compiler enforces as the model of correctness; /not/ how that is translated into machine code. Do you see the distinction, Walter?

Kris has a damn good point or two here. DMD is a compiler, not an assembler.
Note that when the initial C standard was being drawn up, there was an
unfortunate reality that there were two main branches of default integral
promotions - the "sign preserving" and "value preserving" camps. They were
different in some edge cases. One way had to be picked, the newly wrong
compilers had to be fixed, and some old code would break. There was a lot of
wailing and gnashing and a minor earthquake about it, but everyone realized
it had to be done. That was a *minor* change compared to throwing out
default integral promotions.

Great stuff for the talk show, but unrelated. D is not C ~ you've already broken that concept in so many ways.

D has deliberately broken away from some aspects of C and C++ in order to come up with a better language. But not all excess baggage seems to have been left behind.
 
 Why can there not a bit more contextual information applied?
 Is there no smarter way of dealing with it?

Start pulling on that string, and everything starts coming apart, especially for overloading and partial specialization.

On the face of it, the contrary would be true. However, that hardly answers Matthew's question about whether there's another approach. Instead it seems to indicate that you don't know of one, or are not open to one. Or am I misreading this?
CPUs are designed to execute C semantics
efficiently. That pretty much nails down accepting C's operator precedence
and default integral promotion rules as is.

Misleading and invalid. Your argument fully binds the language model to the machine-code model. There's no need for them to be as tightly bound as you claim. Sure, there's tension between the two ~ but you are claiming it's all or nothing. It may have been a number of years since I wrote a compiler, but I don't buy this nonsense. And neither should anyone else.

With Walter's reasoning here, it would seem that DMD would be better off outputting C source code and getting that compiled with a decent C compiler.

-- 
Derek
Melbourne, Australia
Feb 28 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message
news:cvvuhc$hqd$1 digitaldaemon.com...
 In article <cvvp0e$b66$1 digitaldaemon.com>, Walter says...
CPUs are designed to execute C semantics
efficiently. That pretty much nails down accepting C's operator


and default integral promotion rules as is.


Correct, but that's because much of the context is omitted. I wrote: "D, being derived from C and C++ and being designed to appeal to those programmers, is designed to try hard to not subtly break code that looks the same in both languages. CPUs are designed to execute C semantics efficiently. That pretty much nails down accepting C's operator precedence and default integral promotion rules as is." The second sentence just adds to the first, it does not stand alone. Where D does break from C/C++ is in things that look different, or that will issue error messages if the C syntax is tried.

Many older CPUs had special instructions in them that catered to Pascal compilers. With the decline of Pascal and the rise of C, there was a change to support C better and drop Pascal. No CPU designer expects to stay in business if their new design does not execute compiled C efficiently. Intel added special instructions to support C integral promotion rules, and made darn sure they executed efficiently. If you carefully read their optimization manuals, they don't give much attention to the performance of integer operations that are not normally generated by C compilers. Changes in compiler code generation techniques led to a complete redesign of the FPU instructions. Even Java was forced to change its semantics to allow for the way Intel FPUs operated. And if you follow Java, you'll know how resistant Sun is to making any changes in the VM's semantics; it has to be a huge issue.

Would that D were big enough in the market to dictate CPU design, but it isn't. C is, but even Java isn't. As such, it just makes sense to work with what works efficiently. I honestly believe that if D gets a reputation of being less efficient than C/C++, it will be dead in the marketplace. Look at all the drubbing Java gets over this issue, and the enormous resources Sun/IBM/etc. pour into trying to fix it.
I believe Ada does what you wish (been a long time since I looked at it), but it just isn't very successful.
Feb 28 2005
parent Kris <Kris_member pathlink.com> writes:
In article <d00ba7$125c$1 digitaldaemon.com>, Walter says...
[snip - Walter's post quoted in full above]
I believe Ada does what you wish (been a long time since I looked at it), but it just isn't very successful.

What I wish? What do I wish for, Walter? I can't even get you to the point of discourse without an avalanche of blanket denials.

The above is just another example of what I noted in another post: you completely ignored the salient points about the questionable tight bonding you seem to believe in, or any of the other points, and replied with more of the same branded gumph you're becoming so adept at. You'd think we all worked in the laundry rather than being steeped in this stuff ourselves. If you'd bothered to address the other points also, then fair play. But that's not your recent style.

Please read back what you wrote. It's exactly the kind of misguidance that diminishes your position as a realist. It's not even particularly factual. And in reply to a post that noted you should /not/ be coupling the language model /directly/ to the CPU model. There's no need to do that, as you are no doubt intensely aware. I'm astonished.

And thanks for the subtle direction on language choice, dude. Along with the unnecessary 'success' sarcasm. I imagine that's the closest you'll get to saying "F-off, Kris". This is hardly the manner in which to limit escalation, Walter.

Please take a good look at what's happened here:

1) An issue arises with the language, and some examples are knocked around
2) It seems that there are maybe some issues, maybe not
3) I, and certain others, are concerned that method resolution is not what it's perhaps cracked up to be: some new limitations come to light
4) Some old notions are brought to the fore again, within a somewhat different perspective than before
5) There's a whole spate of denial from you that anything could possibly be wrong, mixed in with the kind of spin that would make some marketeers blush. You make noises about "my wishes" or "Matthew's wishes", when we can't even get you to the point of rational discussion.
6) You make subtle and vaguely sarcastic overtones that I should move on from D

Way to go, dude.
That's constructive co-operation at its best. There may be little love lost between us, Walter, but I've still given over a vast quantity of my time and effort to help this thing along. During that time, I've often felt that the biggest impediment to the success of D is the stranglehold you have upon it, combined with your outright stubbornness and your apparent lack of experience in a variety of software realms beyond writing compilers. You are certainly most adept at the latter, but language design must account for many things other than bit-twiddling. I have no doubt you already know this. But you often act as though you don't.

Finally, it's worth pointing out that regardless of how far apart our positions might seem, the truth (as is so often the case) lies somewhere between. You don't have all the answers, Walter; I doubt any of us do. It's why open-minded discussion is generally viewed as a good thing.
Feb 28 2005
prev sibling next sibling parent reply Matthias Becker <Matthias_member pathlink.com> writes:
Implicit narrowing conversions:

 Or that floating=>integral
 narrowing is wrong in *all* cases, and integral=>integral narrowing  is
 right in *all* cases. Doesn't stand up to common sense.

Of course it doesn't stand up to common sense, and it is not what I said. What I did say, perhaps indirectly, is that implicit floating->integral conversions in the wild tend to nearly always be a mistake. For the few cases where it is legitimate, a cast can be used. But the reverse is true for other integral conversions. They happen frequently, legitimately. Having to insert casts for them means you'll wind up with a lot of casts in the code. Having lots of casts in the code results in *less* type safety, not more. Having language rules that encourage lots of casting is not a good idea. Casts should be fairly rare.

It isn't all or nothing. It's a judgement call on the balance between the risk of bugs from a narrowing conversion and the risk of bugs from a cast masking a totally wrong type. With integral conversions, it falls on the one side; with floating conversions, it falls on the other.

When I get a warning from my C++ compiler that I'm doing a narrowing conversion, I always insert a cast. So I wouldn't have a problem with inserting them in D code. But what about down-casting? It's often done when you expect it to work (just as you do with integral narrowing conversions). And it is even safer, as you get null if it doesn't work. So why aren't down-casts implicit?

-- Matthias Becker
Feb 28 2005
parent reply "Matthew" <admin.hat stlsoft.dot.org> writes:
"Matthias Becker" <Matthias_member pathlink.com> wrote in message
news:cvuqnu$2b05$1 digitaldaemon.com...
Implicit narrowing conversions:

 Or that floating=>integral
 narrowing is wrong in *all* cases, and integral=>integral narrowing  is
 right in *all* cases. Doesn't stand up to common sense.

[snip - Walter's full paragraph on narrowing conversions, quoted above]

When I get a warning from my C++ compiler that I'm doing a narrowing conversion, I always insert a cast.

Do you mean, "after considering whether the code is correct"? If not, the you add grist to Walter's mill that narrowing errors just precipiate the careless (ab)use of casts. (Which I don't believe.)
Feb 28 2005
parent Matthias Becker <Matthias_member pathlink.com> writes:
 When I get a warning from my C++ compiler that I do a narrowing conversation I
 allways insert a cast.

Do you mean, "after considering whether the code is correct"? If not, the you add grist to Walter's mill that narrowing errors just precipiate the careless (ab)use of casts. (Which I don't believe.)

If I just inserted casts without looking at the code, I could turn off the warning as well.

-- Matthias Becker
Mar 01 2005
prev sibling next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 Implicit narrowing conversions:
 
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cvue5s$1sh4$1 digitaldaemon.com...
 
Or that floating=>integral
narrowing is wrong in *all* cases, and integral=>integral narrowing  is
right in *all* cases. Doesn't stand up to common sense.

[snip - implicit narrowing conversions, quoted in full above]

Warnings:

I'm pretty familiar with warnings. If a compiler has 15 different warnings, then it is compiling 15! (15 factorial) different languages. Warnings tend

Assuming 15 different on-offable compiler warning switches, one would tend to think that it should be 2^15 different languages? Or did I miss something?
 to be unique to each compiler vendor (sometimes even contradictory), making
 more and more different languages that call themselves C++. Even worse, with
 each warning often comes a pragma to turn it off here and there, because,
 well, warnings are inherently wishy-washy and often wrong. Now I download
 Bob's Cool Project off the net, and compile it. I get a handful of warnings.
 What do I do about them? I don't know Bob's code, I don't know if they're
 bugs or not. I just want a clean compile so I can use the program. If the
 language behaves consistently, I've got a better chance of that happening.
 Warnings make a language inconsistent.
 
 There are a lot of things Java has done wrong, and a lot they've done right.
 One of the things they've done right is specify the language so it is
 consistent from platform to platform. A Java program compiles successfully
 or it does not, there are no warnings. If it compiles successfully and runs
 correctly on one platform, chances are far better than they are for C++ that
 it will compile and run correctly without modification on another platform.
 You know as well as anyone how much work it is to get a complex piece of
 code to run on multiple platforms with multiple C++ compilers (though this
 has gotten better in recent years).
 
 The idea has come up before of creating a "lint" program out of the D front
 end sources. It's an entirely reasonable thing to do, and I encourage anyone
 who wants to to do it. It would serve as a nice tool for proving out ideas
 about what should be an error and what shouldn't.
 
 

Feb 28 2005
parent "Walter" <newshound digitalmars.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:4222F26F.8070509 nospam.org...
 I'm pretty familiar with warnings. If a compiler has 15 different


 then it is compiling 15! (15 factorial) different languages. Warnings


 Assuming 15 different on-offable compiler warning switches, one would
 tend to think that it should be 2^15 different languages? Or did I miss
 something?

It's factorial. Using 2^15 creates duplicates, since the order of the switches doesn't matter.
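For what it's worth, the two counts being argued over are easy to check mechanically: with 15 warnings that can each independently be on or off, the number of distinct warning configurations is the number of subsets of a 15-element set, 2^15, whereas 15! counts the number of ways to *order* 15 distinct switches.

```python
from math import factorial

# Each of 15 warnings is independently on or off, so a configuration
# is a subset of the 15 switches.
subsets = 2 ** 15
print(subsets)  # 32768

# 15! instead counts orderings of 15 distinct items - a different
# (and vastly larger) quantity.
orderings = factorial(15)
print(orderings)  # 1307674368000
```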
Feb 28 2005
prev sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:cvuku9$24j5$1 digitaldaemon.com...
 Implicit narrowing conversions:

 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cvue5s$1sh4$1 digitaldaemon.com...
 Or that floating=>integral
 narrowing is wrong in *all* cases, and integral=>integral narrowing 
 is
 right in *all* cases. Doesn't stand up to common sense.

Of course it doesn't stand up to common sense, and it is not what I said.

Maybe I misspoke. When I said "You're telling us ... that floating=>integral narrowing is wrong in *all* cases, and integral=>integral narrowing is right in *all* cases." I didn't mean that you've explicitly said that. Rather it is implicit in the fact that there are no warnings / compiler switches / pragmas / whatever. A fixed decision has been made, which the D programmer has precisely no power to adjust.

Now, in a general sense that's right and proper. People cannot and should not be able to change the language willy-nilly. But the fine print on this issue is that you've allowed any implicit integral=>integral conversion, and disallowed any floating point=>integral conversion. That's what I'm objecting to.

It's an all or nothing judgment, in a case - int=>int conv - that's not all or nothing: int=>long is ok in general, int=>bool is not ok in general. I cannot see the sense in the argument that says those two things should be treated alike.
 What I did say, perhaps indirectly, is that implicit 
 floating->integral
 conversions in the wild tend to nearly always be a mistake. For the 
 few
 cases where it is legitimate, a cast can be used. But the reverse is 
 true
 for other integral conversions. They happen frequently, legitimately. 
 Having
 to insert casts for them means you'll wind up with a lot of casts in 
 the
 code.

 Having lots of casts in the code results in *less* type safety, not 
 more.

Says who?
 Having language rules that encourage lots of casting is not a good 
 idea.
 Casts should be fairly rare.

I absolutely agree. When I'm coding in C++ - and let's be honest: I do a *lot* of that - I absolutely want to know when I'm converting from a larger to a smaller int. Every single time. No exceptions. And I am *very glad indeed* that my code contains an ugly cast in it. When I, or someone else, reads

m_size = static_cast<size_t>(std::distance(first, last));

it is clear that the ptrdiff_t (signed) returned by std::distance() must be explicitly converted to an unsigned type. After all, I don't want to define m_size as a signed quantity, because there's no way that a container with a -ve size makes sense. That's what happens in Java. And std::distance() cannot return size_t, because it can obviously be passed a pair of pointers, as well as iterators.

There's only one place where the decision makes sense, and it only makes sense for that decision to be explicit. In the previous lines of code to that shown, we will have established that !(last < first), explicitly. And only then can we, the designers of that code, know that it's valid to convert from a signed value to an unsigned one.
 It isn't all or nothing.

Indeed
 It's a judgement call on the balance between the
 risk of bugs from a narrowing conversion

Risk implies the possibility of not having them. Since I am frequently prevented from introducing bugs by judicious use of promoted compiler warnings in C++, I believe I can authoritatively stipulate that having unguarded narrowing conversions is a guaranteed source of bugs. Please point out the logical flaw in that conclusion.
 and the risk of bugs from a cast
 masking a totally wrong type.

Please explain how this might be so.

One thing that occurs: if D's cast()s are too coarse a tool, then why don't we consider the divide and conquer approach. What's wrong with a

long l = -12345678901234;
ubyte u = narrow(l);

??? narrow() can't be used for downcasting, or for casting between pointer types, etc. etc.

This is what C++ does, albeit to an overly coarse degree. There are only

static_cast
reinterpret_cast
const_cast
dynamic_cast

The good thing about C++ is that, by using templates + default parameters, one can implement custom casts. As you know, I've written several, including:

union_cast - for when you have to be *really* dirty in casting
interface_cast - for casting between COM interface pointers
explicit_cast - for providing explicit, rather than implicit, conversion operators
literal_cast - check the size of literals
ptr_cast - throws bad_cast for ptrs as well as for refs

I've yet to hear a single sensible criticism of a rich cast taxonomy. People say they're ugly. Well, duh! Show me how that's not a good thing. People say there are too many casts. Well, that's the whole bloody point. C's uber cast sucks hard. C++'s four casts were a good start, and the facility for introducing more is a very powerful feature. Why can't we have a fine-grained approach in D? It is supposed to be an evolution
 With integral conversions, it falls on the one
 side, with floating conversions, it falls on the other.

 Warnings:

 I'm pretty familiar with warnings. If a compiler has 15 different 
 warnings,
 then it is compiling 15! (15 factorial) different languages.

Maybe so in theory. In practice, what competent C++ developer does not set warnings to max, allowing exceptions on a case-by-case basis?
 Warnings tend
 to be unique to each compiler vendor (sometimes even contradictory), 
 making
 more and more different languages that call themselves C++.

True. But I'd rather have to deal with 29 dithering but insightful geniuses than one cock-sure dogmatist. Kind of like the current schism in the 'civilised' world, no?
 Even worse, with
 each warning often comes a pragma to turn it off here and there, 
 because,
 well, warnings are inherently wishy-washy and often wrong. Now I 
 download
 Bob's Cool Project off the net, and compile it. I get a handful of 
 warnings.
 What do I do about them? I don't know Bob's code, I don't know if 
 they're
 bugs or not. I just want a clean compile so I can use the program. If 
 the
 language behaves consistently, I've got a better chance of that 
 happening.
 Warnings make a language inconsistent.

That's a compelling example, and would be persuasive if you were promoting Java, where they've largely gone to town on strictness. But it just doesn't hold for a language where

long l = -12345678901234;
short s = l;
byte b = l;

is valid code. Given this, you cannot *ever* know that any code you interface to is valid. Hmm, maybe the answer is to never download any D code?
 There are a lot of things Java has done wrong, and a lot they've done 
 right.
 One of the things they've done right is specify the language so it is
 consistent from platform to platform.

Agreed
 A Java program compiles successfully
 or it does not, there are no warnings.

Great. They have the luxury of being super strict. Some here would go for super strict for D. Some others - like me - would live with super strict if there's to be no variability. But what we have at the moment is super slack. Given that, the analogy with Java makes no sense.
 If it compiles successfully and runs
 correctly on one platform, chances are far better than they are for 
 C++ that
 it will compile and run correctly without modification on another 
 platform.

True. But that doesn't hold for D, since we've already established that D is by definition more open to bugs than it need be.
 You know as well as anyone how much work it is to get a complex piece 
 of
 code to run on multiple platforms with multiple C++ compilers (though 
 this
 has gotten better in recent years).

I know all too well. But my experience with C++ does not incline me towards D's smaller degree of platform variability. Quite the opposite.

When deciding on a language, one needs to consider a host of factors, including support over multiple platforms, efficiency, expressiveness, etc. etc. If I don't care about performance, but I do care about portability, I'm going to use Ruby. Hands down. (Except when I need a rich set of established libraries, in which case it's Python.) Since Ruby and Python are untyped, the issue of compile time control of conversions is moot.

If I care about performance, I'm choosing C/C++ every time, no matter how much pain I have to go through porting between platforms. Since C++ is an almost perfect superset of C, it's easy peasy to deal with platform APIs. If I'm doing something that involves working with large teams of, how to say, junior programmers, then I'm going to use Java.

I can barely think of a single reason to use .NET. The only time I've done so in a commercial context was because it could interface to a C++ library of mine with minimal effort. (The extension was <60 mins to write.) Ah, hang on, the only other time was when I needed to quickly write a non-trivial TCP test driver - the only thing that has so far impressed me about .NET is that it's got some nice networking libs.

So where will D fit in? It's not untyped, so it's not going to be a competitor for scripting languages. So it has to compete for mindshare with Java/.NET or C/C++. To be incredibly simplistic, the problems with Java/.NET are that they treat the programmer like an idiot and are slow. The problems with C/C++ are that they are minefields for even the very experienced, and C++ is evolving into an almost impossible to understand pursuit of the intellectually pretentious.
D has to be ~ as efficient as C/C++, as powerfully expressive as C++, and as easy to use as Java/.NET. It has to suppress the dialectical nature of the D language at large, in the line of Java/Python rather than C++/Perl/(and to some degree) Ruby, and be as productive as .NET/Python. It also has to have better libs than C++ (not hard!), be easier on the end user than Java/.NET (not hard!), and be easier to learn than C++/Perl/.NET (not hard!). Finally, it needs to help users avoid making errors, without nannying them and without _overly_ impinging on productivity.

Now I believe that D has the potential to do all of that. At such time, it'd probably take a significant share of my development language time - which is now pretty much 80% C/C++, 20% Ruby - maybe up to 40-50%! That's why I'm sticking around. But it is _not_ there yet, and there's going to have to be a lot more argy bargy before it gets there.

If it keeps the problems it has now, I _could_ still use it successfully, because I'm very experienced in C++, and am very skilled at avoiding errors in C/C++. But I wouldn't do so, because I don't think it'd succeed, and I think I'd be wasting my investment of time/skills - my ambition is not to be an exponent of the next Sather/Dylan/Modula-2/whatever. And I could not recommend D as it stands now, to my clients, or to people who want to learn programming, because it holds traps for people who've not already learned hard lessons and good instincts.

But I'm going to shut up now, because I _do_ believe it will come right, and that we'll end up with a pretty fine balance of all the must-haves mentioned above. You and I have already agreed that as we go through the research for DPD we'll look in detail at all the areas, and I will provide cogent evidentiary rationale for my criticisms, rather than (actually, in addition to) my ng rants.

Optimistically

The Dr .....
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
There's a lot there, and I just want to respond to a couple of points.

1) You ask what's the bug risk with having a lot of casts. The point of a
cast is to *escape* the typing rules of the language. Consider the extreme
case where the compiler inserted a cast wherever there was any type
mismatch. Typechecking pretty much will just go out the window.

2) You said you use C++ every time when you want speed. That implies you
feel that C++ is inherently more efficient than D (or anything else). That
isn't my experience with D compared with C++. D is faster. DMDScript in D is
faster than the C++ version. The string program in D
www.digitalmars.com/d/cppstrings.html is much faster in D. The dhrystone
benchmark is faster in D.

And this is using an optimizer and back end that is designed to efficiently
optimize C/C++ code, not D code. What can be done with a D optimizer hasn't
even been explored yet.

There's a lot of conventional wisdom that since C++ offers low level
control, that therefore C++ code executes more efficiently. The emperor has
no clothes, Matthew! I can offer detailed explanations of why this is true
if you like.

I challenge you to take the C++ program in
www.digitalmars.com/d/cppstrings.html and make it faster than the D version.
Use any C++ technique, hack, trick, you need to.
Feb 28 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 There's a lot there, and I just want to respond to a couple of points.

Yes. I was faced with a choice to do my final STLSoft tests on Linux, or prate on to an unwilling audience. ...
 1) You ask what's the bug risk with having a lot of casts. The point 
 of a
 cast is to *escape* the typing rules of the language. Consider the 
 extreme
 case where the compiler inserted a cast wherever there was any type
 mismatch. Typechecking pretty much will just go out the window.

This point is lost on me. Why would the compiler insert casts at every mismatch? Do you mean the user? AFAICS, the compiler is already inserting casts wherever there is an integral type mismatch.
 2) You said you use C++ every time when you want speed. That implies 
 you
 feel that C++ is inherently more efficient than D (or anything else). 
 That
 isn't my experience with D compared with C++. D is faster. DMDScript 
 in D is
 faster than the C++ version. The string program in D
 www.digitalmars.com/d/cppstrings.html is much faster in D. The 
 dhrystone
 benchmark is faster in D.

 And this is using an optimizer and back end that is designed to 
 efficiently
 optimize C/C++ code, not D code. What can be done with a D optimizer 
 hasn't
 even been explored yet.

 There's a lot of conventional wisdom that since C++ offers low level
 control, that therefore C++ code executes more efficiently. The 
 emperor has
 no clothes, Matthew! I can offer detailed explanations of why this is 
 true
 if you like.

 I challenge you to take the C++ program in
 www.digitalmars.com/d/cppstrings.html and make it faster than the D 
 version.
 Use any C++ technique, hack, trick, you need to.

Not so. I choose C/C++ for speed over established languages Ruby/Python/Java/.NET, for the eminently sensible and persuasive reasons given. (One day I'll take up that challenge, btw. <g>) The reasons I choose C++ over D are nothing to do with speed. That was kind of the point of my whole rant. It concerns me a little that you respond relatively voluminously to the perceived slight on speed, but not about my concerns about D's usability and robustness.

I believe that software engineering has a pretty much linear order of concerns:

Correctness - does the chosen language/libraries/techniques promote software that actually does the right thing?
Robustness - does the language/libraries/techniques support writing software that can cope with erroneous environmental conditions _and_ with design violations (CP)?
Efficiency - does the language/libraries/techniques support writing software that can perform with all possible speed (in circumstances where that's a factor)?
Maintainability - does the language/libraries/techniques support writing software that is maintainable?
Reusability - does the language/libraries/techniques support writing software that may be reused?

As I said, I think that, in most cases, these concerns are in descending order as shown. In other words, Correctness is more important than Robustness. Maintainability is more important than Reusability. Sometimes Efficiency moves about, e.g. it might swap places with Maintainability.

Given that list, correct use of C++ scores (I'm using scores out of five since this is entirely subjective and not intended to be a precise scale):

Correctness - 4/5: very good, but not perfect; you have to know what you're doing
Robustness - 3/5: good, but could be much better
Efficiency - 4/5: very good, but not perfect
Maintainability - 2/5: yes, but it's hard yacka. If you're disciplined, you can get 4/5, but man! that's some rare discipline
Reusability - 2/5: pretty ordinary; takes a lot of effort/skill/experience/luck

I also think there's an ancillary trait, of importance purely to the programmer:

Challenge/Expressiveness/Enjoyment - 4/5

I'd say Ruby scores:

Correctness - 3/5: it's a scripting language, so you need to be able to test all your cases!!
Robustness - 2/5: hard to ensure you're writing correctly; hard to have asserts
Efficiency - 2/5: scripting language
Maintainability - 4/5: pretty easy
Reusability - 4/5: if you write your stuff in modules, it's really quite nice
Challenge/Expressiveness/Enjoyment - 5/5

I'd say Python scores:

Correctness - 4/5
Robustness - 3/5
Efficiency - 2/5
Maintainability - 4/5
Reusability - 5/5
Challenge/Expressiveness/Enjoyment - 2/5

I'd say Java/.NET score:

Correctness - 4/5: it might be like pulling teeth to use, but you _can_ write very robust software in them
Robustness - 3.5/5
Efficiency - 3/5
Maintainability - 4/5: pretty easy
Reusability - ~5/5: whatever one might think of these horrid things, they have damn impressive libraries
Challenge/Expressiveness/Enjoyment - 0/5

I'd be very interested to hear what people think of D. IMO, D may well score 4.75 out of 5 on performance (although we're waiting to see what effect the GC has in large-scale high-throughput systems), but it scores pretty poorly on correctness.

Since Correctness is the sine qua non of software - there's precisely zero use for a piece of incorrect software to a client; ask 'em! - it doesn't matter if D programs perform 100x faster than C/C++ programs on all possible architectures. If the language promotes the writing of buggy software, I ain't gonna use it.

D probably scores really highly on Robustness, with unittests, exceptions, etc. But it's pretty damn unmaintainable when we've things like the 'length' pseudo-keyword, and the switch-default/return thing. I demonstrated only a couple of days ago how easy it was to introduce broken switch/case code with nary a peep from the compiler. All we got from Walter was a "I don't want to be nagged when I'm inserting debugging code". That's just never gonna fly in commercial development.

As for reusability, I know it aims to be very good, and I would say from personal experience that it's quite good. But I've thus far only done small amounts of reuse, using primarily function APIs. I know others have experienced problems with name resolution, and then there's the whole dynamic class loading stuff. However optimistically we might view it, it's not going to be anywhere near the level of Java/.NET/Python, but it absolutely must, and can, be better than C++.

So, I'd give two scores for D. What I think it is now:

Correctness - 2/5
Robustness - 3/5
Efficiency - ~5/5
Maintainability - 1/5
Reusability - 2/5
Challenge/Expressiveness/Enjoyment - 3/5

What I think it can be:

Correctness - 4/5
Robustness - 4/5
Efficiency - ~5/5
Maintainability - 3/5
Reusability - 4/5
Challenge/Expressiveness/Enjoyment - 4/5

What do others think, both "is" and "could be"?

Anyway, that really is going to be my last word on the subject. Unless and until D crawls out of the bottom half in Correctness and Maintainability, I just don't see a future for it as a commercial language. (And as I've said before, I'm _not_ going to stop barracking for that, because I *really* want to use it for such. The pull of the potential of DTL is strong, Luke ...)

I'd be interested to hear what others think on all facets of this issue, but I'm particularly keen to hear people's views on how much D presents as a realistic choice for large-scale, commercial developments.

Thoughts?
Feb 28 2005
next sibling parent reply Anders F Björklund <afb algonet.se> writes:
Matthew wrote:

 IMO, D may well score 4.75 out of 5 on performance (although we're 
 waiting to see what effect the GC has in large-scale high-throughput 
 systems), but it scores pretty poorly on correctness.
 
 Since Correctness is the sine qua non of software - there's precisely 
 zero use for a piece of incorrect software to a client; ask 'em! - it 
 doesn't matter if D programs perform 100x faster than C/C++ programs on 
 all possible architectures. If the language promotes the writing of 
 buggy software, I ain't gonna use it.

Also, there's a lot of interesting new stuff coming up on the C/C++ front, like system vendors' heavy optimizations and auto-vectorization, that the D implementation will be missing out on and thus could lose...

I like that performance is a constant D focus. But it can't be *all*?

--anders
Feb 28 2005
next sibling parent reply MicroWizard <MicroWizard_member pathlink.com> writes:
To Matthew and Anders:

Maybe you are right in some points but ...

Performance is still an important issue nowadays, when everybody says
..want speed, buy a bigga machine...
(Try to buy X, 2*X, X^2 or 2^X size machines when X is growing :-)

Most software developers (company and individual managers also) do not
care about theoretical "correctness".
If they did, Windows would never have been born.
Windows is what managers plan and people buy. It works, but it is not correct.

I personally like D because it is much more correct than C/C++
in some respects. In D the binary code does (roughly) what I want.
A C++ binary does not (the code is unreadable compared to D).
A C binary does what is written in the source, but it is
very painful to explain every small thing to the compiler.

In C/C++ there are traps everywhere. I have to watch each and every step.
If I forget to delete something I'll be punished with a crash where
I do not expect it... There is no (or not much) help from the
language (compiler) itself.

D helps a lot _while_ developing. To me, this is Correctness.
I agree that the compiler is not finished yet.

Tamas Nagy


In article <d004m9$pbo$2 digitaldaemon.com>,
Anders F Björklund says...
Matthew wrote:

 IMO, D may well score 4.75 out of 5 on performance (although we're 
 waiting to see what effect the GC has in large-scale high-throughput 
 systems), but it scores pretty poorly on correctness.
 
 Since Correctness is the sine qua non of software - there's precisely 
 zero use for a piece of incorrect software to a client; ask 'em! - it 
 doesn't matter if D programs perform 100x faster than C/C++ programs on 
 all possible architectures. If the language promotes the writing of 
 buggy software, I ain't gonna use it.

Also, there's a lot of interesting new stuff coming up on the C/C++ front, like system vendors' heavy optimizations and auto-vectorization, that the D implementation will be missing out on and thus could lose...

I like that performance is a constant D focus. But it can't be *all*?

--anders

Feb 28 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 To Matthew and Anders:

 Maybe you are right in some points but ...

 Performance is still an important issue nowadays

In no way have I said it's not important. (You should see the Herculean efforts of self-inflicted torture I go through in the STLSoft libraries to eke out performance!) I just said that if a program is not correct, its performance is absolutely irrelevant.
, when everybody says
 ..want speed, buy a bigga machine...
 (Try to buy X, 2*X, X^2 or 2^X size machines when X is growing :-)

 Most software developer (company and individual managers also) does
 not take care about theoretical "correctness".

Who said I was talking about "theoretical" correctness? AFAIUI, no non-trivial program is amenable to program proof techniques, and therefore theoretical correctness is of precisely zero use in the real world, or in this debate.
 If it was true Windows would never born.
 Windows is what managers plan and people buy. Works, but not correct.

 I personally like D while is it much more correct than C/C++
 in some aspects. In D the binary code does (roughly) what I want.
 A C++ binary it does not (the code is unreadable compared to D).
 A C binary does what is written in the source, but it is
 very painful to explain every small things to the compiler.

 In C/C++ there are everywhere traps. I have to watch all and every 
 steps.
 If I forget to delete something I'll be punished with a crash where
 I do not expect it... There are no (or not too much) help from the
 language(compiler) itself.

 D helps a lot _while_ developing. I mean this is Correctness.
 I agree, that the compiler is not finished yet.

 Tamas Nagy

I have worked on several projects over the last decade where (practical) correctness was of paramount importance. They also had exacting performance requirements. They were written in C/C++. They've all worked fine for months/years without a single failure once they've gone into production. Had they performed well but only been roughly correct, there'd've been losses of $Ms, and the clients wouldn't have cared a toss that the performance was good.
Feb 28 2005
parent reply MicroWizard <MicroWizard_member pathlink.com> writes:
It seems to me a joke...

I just said that if a program is not correct, its performance is 
absolutely irrelevant.

Triviality.
I have worked on several projects over the last decade where (practical) 
correctness was of paramount importance. They also had exacting 
performance requirements. They were written in C/C++. They've all worked 
fine for months/years without a single failure once they've gone into 
production. Had they performed well but only been roughly correct, 
there'd've been losses of $Ms, and the clients wouldn't have cared a 
toss that the performance was good.

And I worked on several banking, database, stock-exchange data-providing, industrial automation, bookkeeping bla-bla-bla projects (as a programmer and as project manager also). C/C++ was never the _correct_ language, but a "we should try it because it is extremely fast" language. The development is painful, few (average) programmers understand what it does, and it is easy to make bugs...

What were correct: Basic, MUMPS, Turbo Pascal, Clipper, FoxPro, LINC, DOS batch, nowadays PHP, Java ... they did what we wrote (generally :-)

This conversation reminds me of an old Hungarian TV advertisement, which is almost a joke. Two small children sit in the sand on a playground. They are vying/talking about what kind of insurance their fathers have... (Like Tom and the new guy in Mark Twain's Tom Sawyer)

If it hurts you, please forgive me.

Tamas Nagy
Feb 28 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"MicroWizard" <MicroWizard_member pathlink.com> wrote in message 
news:d00all$11ak$1 digitaldaemon.com...
 It seems to me a joke...

I just said that if a program is not correct, its performance is
absolutely irrelevant.

Triviality.

Trivial, and obvious. And yet something you disagreed on in your previous post. Or am I misreading you?
I have worked on several projects over the last decade where 
(practical)
correctness was of paramount importance. They also had exacting
performance requirements. They were written in C/C++. They've all 
worked
fine for months/years without a single failure once they've gone into
production. Had they performed well but only been roughly correct,
there'd've been losses of $Ms, and the clients wouldn't have cared a
toss that the performance was good.

And I worked on several banking, database, stock-exchange data-providing, industrial automation, bookkeeping bla-bla-bla projects (as a programmer and as project manager also). C/C++ was never the _correct_ language, but a "we should try it because it is extremely fast" language. The development is painful, few (average) programmers understand what it does, and it is easy to make bugs...

What were correct: Basic, MUMPS, Turbo Pascal, Clipper, FoxPro, LINC, DOS batch, nowadays PHP, Java ... they did what we wrote (generally :-)

This conversation reminds me of an old Hungarian TV advertisement, which is almost a joke. Two small children sit in the sand on a playground. They are vying/talking about what kind of insurance their fathers have... (Like Tom and the new guy in Mark Twain's Tom Sawyer)

If it hurts you, please forgive me.

Nah, you'll have to try harder. ;)

But I actually don't get your point. The debate has involved, to a significant degree, discussion of the flaws of C/C++, and the related trade-offs of correctness and performance. That's why I mentioned my work. Maybe we work with different kinds of people?

I would say that the worst project I ever worked on - as a Software Quality Manager - involved Java technologies. The programmers there had so little skill - maybe a trait attendant with some Java projects, though I couldn't generalise - that it was an unmitigated disaster. Never experienced anything like it before or since.
Feb 28 2005
prev sibling parent Anders F Björklund <afb algonet.se> writes:
MicroWizard wrote:

 Also, there's a lot of interesting new stuff coming up on the C/C++
front, like system vendor's heavy optimizations and auto-vectorization,
that the D implementation will be missing out on and thus could lose...

I like that performance is a constant D focus. But it can't be *all* ?


 Performance is still an important issue nowadays, when everybody says
 ..want speed, buy a bigga machine...
 (Try to buy X, 2*X, X^2 or 2^X size machines when X is growing :-)

I did not say performance was unimportant, on the contrary... I'm an old-time assembler/C nerd, so them's fighting words! :-)

I just think it would be sad to spend all one's time optimizing, only to be beaten by the C/C++ compilers anyway (since they "cheat"), and then to get run over because the code isn't correct... ?

--anders
Feb 28 2005
prev sibling parent reply Dave <Dave_member pathlink.com> writes:
In article <d004m9$pbo$2 digitaldaemon.com>,
Anders F Björklund says...
Matthew wrote:

 IMO, D may well score 4.75 out of 5 on performance (although we're 
 waiting to see what effect the GC has in large-scale high-throughput 
 systems), but it scores pretty poorly on correctness.
 
 Since Correctness is the sine qua non of software - there's precisely 
 zero use for a piece of incorrect software to a client; ask 'em! - it 
 doesn't matter if D programs perform 100x faster than C/C++ programs on 
 all possible architectures. If the language promotes the writing of 
 buggy software, I ain't gonna use it.

Also, there's a lot of interesting new stuff coming up on the C/C++ front, like system vendor's heavy optimizations and auto-vectorization, that the D implementation will be missing out on and thus could lose...

If I'm understanding you correctly, you are talking about new compiler implementations, right?

D the language should actually allow more aggressive optimization than C/C++ within the current confines of those languages (imports vs. headers is a great example).

Am I understanding you? Or are you speaking of proposals for the next C or C++ language specs?

Thanks,

- Dave
I like that performance is a constant D focus. But it can't be *all* ?

--anders

Feb 28 2005
parent Anders F Björklund <afb algonet.se> writes:
Dave wrote:

Also, there's a lot of interesting new stuff coming up on the C/C++
front, like system vendor's heavy optimizations and auto-vectorization,
that the D implementation will be missing out on and thus could lose...

If I'm understanding you correctly, you are talking about new compiler implementations, right?

Yes, specific example: GCC 3.3 and GCC 4.0 - with patches from Apple.
 Compared to C/C++, D the language should actually allow more aggressive
 optimization than C/C++ within the current confines of those languages (imports
 vs. headers is a great example).
 
 Am I understanding you? Or are you speaking of proposals for the next C or C++
 language specs.?

Same old boring languages (C/C++), just better compilers for them. --anders
Feb 28 2005
prev sibling next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:cvvtk1$ggn$1 digitaldaemon.com...
 1) You ask what's the bug risk with having a lot of casts. The point
 of a
 cast is to *escape* the typing rules of the language. Consider the
 extreme
 case where the compiler inserted a cast wherever there was any type
 mismatch. Typechecking pretty much will just go out the window.

This point is lost on me. Why would the compiler insert casts at every mismatch?

It's just a hypothetical what if to illustrate the point.
 AFACS, the compiler already is inserting casts wherever there is an
 integral type mismatch.

Implicit casting is a form of this, but a very restricted form. Full casting is unrestricted, and so will negate most of the advantages of type checking. I agree that implicit casting *does* negate some of the features of static type checking. But it's worth it.
 I believe that software engineering has a pretty much linear order of
 concerns:

     Correctness           - does the chosen
 language/libraries/techniques promote software that actually does the right
 thing?
     Robustness            - does the language/libraries/techniques
 support writing software that can cope with erroneous environmental
 conditions _and_ with design violations (CP)
     Efficiency              - does the language/libraries/techniques
 support writing software that can perform with all possible speed (in
 circumstances where that's a factor)
     Maintainability       - does the language/libraries/techniques
 support writing software that is maintainable
     Reusability            - does the language/libraries/techniques
 support writing software that may be reused

It's a good list, but I think a serious issue that matters is programmer productivity in the language. Programmer productivity is one of the strong appeals of Perl and Python, enough that it often justifies the performance hit. Trying to improve productivity is a major goal of D; it's why, for example, you don't have to write .h files and forward reference declarations.
 Given that list, correct use of C++ scores (I'm using scores out of five
 since this is entirely subjective and not intended to be a precise
 scale)

     Correctness           - 4/5: very good, but not perfect; you have to
 know what you're doing

I don't agree with this assessment at all. I'd give it a 2, maybe a 3. I'll give some examples where C++ makes it very, very difficult to write correct code. This is based on many years of real experience:

1) No default initialization of variables. This leads to code that appears to work, but sometimes fails mysteriously. When you try to insert debugging code, the failure shifts away or disappears. Sometimes it doesn't show up until you port to another platform. It's a rich source of erratic, random bugs, which are bugs of the WORST sort.

2) No default initialization of class members. I can't tell you how many times I've had mysterious bugs because I've added a member to a class with many constructors, and forgot to add an initializer to one of them.

3) Overriding a non-virtual member function in a base class, and forgetting to go back and make the earliest appearance of it in the hierarchy virtual. Programmers tend to make such functions non-virtual for efficiency.

4) 'Holes' in between struct members being filled with random garbage.

5) Namespace pollution caused by macros, making it very hard to integrate diverse libraries and ensure they do not step on each other.

6) No array bounds checking for builtin arrays. Yes, I know about std::vector<>, and how sometimes it does bounds checking and sometimes it doesn't. We all know how well this works in practice, witness the endless buffer overrun security bugs.

7) Overreliance on pointers to do basic chores. Pointers have no bounds checking, or really much of any checking at all.

8) The old bugaboos of memory leaks, dangling pointers, double deletion of memory, etc. STL claims to mitigate this, but I am unconvinced. The more STL tries to lock this up, the more tempted programmers are to go around it for efficiency.

9) C++ code is not portable 'out of the box'. One has to learn how to write portable code by having lots of experience with it. This is due to all the "undefined" and "implementation defined" behaviors in it. It is very hard to examine a piece of code and determine if it is portable or not.

10) Static constructors are run in an "undefined" order.

11) Lovely syntactical traps like:

    for (i = 0; i < 10; i++);
    {
        ...
    }

12) Undefined order of evaluation of expression side effects.

13) Name lookup rules. How many C++ programmers really understand dependent and non-dependent lookups? Or how forward references work in class scope but not global scope? Or ADL? Or how about some of those wretchedly bizarre rules like the base class not being in scope in a template class? How is correctness achieved when not only do programmers not understand the lookup rules, but the compiler developers often get them wrong too (resulting in variation among compilers)? The 'length' D issue is a real one, but it pales in comparison to C++'s problems.

14) Then there are the portability issues, like implementation defined int sizes.

The only reason you and I are able to be successful writing 'correct' code in C++ is because we have many, many, many years of practice running into the potholes and learning to avoid them. I suspect you gave C++ such a high score on this because you are *so used* to driving around those potholes, you don't see them anymore. Watch some of the newbie code posts on comp.lang.c++ sometimes <g>. Even I was surprised when I ported some of my C++ code over to D, code I thought was thoroughly debugged, field tested, and correct. Bink! Array overflows!
     Robustness            - 3/5: good, but could be much better

Agree with 3, doubt it can be improved. I know that Boost claims to resolve much of this, but Boost is far too heavily reliant on standards compliance at the very edges of the language, pushing compilers past their limits. I also feel that what Boost is doing is inventing a language on top of C++, sort of using the C++ compiler as a back end, and is that new language actually C++ or does it merit being called a different language?
     Efficiency              - 4/5: very good, but not perfect.

Agree. C++'s overreliance on pointers, and the aliasing problem, are significant impediments.
     Maintainability       - 2/5: yes, but it's hard yacka. If you're
 disciplined, you can get 4/5, but man! that's some rare discipline

Agree.
     Reusability            - 2/5: pretty ordinary; takes a lot of
 effort/skill/experience/luck

Agree.
   I also think there's an ancillary trait, of importance purely to the
 programmer
     Challenge/Expressiveness/Enjoyment            -    4/5

I'd give it a 3. I find D a lot more fun to program in, as it frees me of much of the tedium required by C++.
 IMO, D may well score 4.75 out of 5 on performance (although we're
 waiting to see what effect the GC has in large-scale high-throughput
 systems),

I've used gc's on large-scale high throughput systems. I'm confident D's will perform well. It's also possible to do a much better gc than the rather old-fashioned one it has now. I know how to do a better one, and have carefully left the door open for it in the semantics of the language.
 but it scores pretty poorly on correctness.

I disagree strongly on this. Unless you have something else in mind I'm forgetting, in your posts you've focused on two or three issues and have assigned major importance to them. None of them, even if they go awry in the manner you predict, will cause the kinds of erratic, random, awful bugs that the C++ holes mentioned above will and do cause. All of them, as I've argued (and you and Kris disagree, fair enough), will open the door to other kinds of correctness bugs if they are fixed per your suggestions.
 Since Correctness is the sine qua non of software - there's precisely
 zero use for a piece of incorrect software to a client; ask 'em! - it
 doesn't matter if D programs perform 100x faster than C/C++ programs on
 all possible architectures. If the language promotes the writing of
 buggy software, I ain't gonna use it.

But I think you already do <g>. See my list above.
 D probably scores really highly on Robustness, with unittests,
 exceptions, etc. But it's pretty damn unmaintainable when we've things
 like the 'length' pseudo keyword, and the switch-default/return thing. I
 demonstrated only a couple of days ago how easy it was to introduce
 broken switch/case code with nary a peep from the compiler. All we got
 from Walter was a "I don't want to be nagged when I'm inserting
 debugging code". That's just never gonna fly in commercial development.

Try the following code with g++ -Wall:

    int foo()
    {
        return 3;
        return 4;
    }

and this:

    int foo(int x)
    {
        switch (x)
        {
            case 3: x = 4;
                    break;
                    x = 5;
            case 4: break;
        }
        return x;
    }

I understand how you feel about it, but I don't agree that it is a showstopper of an issue.
 So, I'd give two scores for D. What I think it is now:

     Correctness           - 2/5:
     Robustness            - 3/5:

I'm going to argue about the robustness issue. I've gotten programs to work reliably in much less time in D than in C++.
     Efficiency              - ~5/5:
     Maintainability       - 1/5:

I don't understand your basis for 1/5. Even a small thing like 'deprecated' is a big improvement for any maintainer wanting to upgrade a library.
     Reusability            - 2/5:

This remains to be seen. Certainly, D doesn't suffer from the macro problems that seriously impede C++ reusability. Just that should move it up to a 3.
     Challenge/Expressiveness/Enjoyment            -    3/5

Feb 28 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d006j3$s40$1 digitaldaemon.com...
 "Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
 news:cvvtk1$ggn$1 digitaldaemon.com...
 1) You ask what's the bug risk with having a lot of casts. The 
 point
 of a
 cast is to *escape* the typing rules of the language. Consider the
 extreme
 case where the compiler inserted a cast wherever there was any
 type
 mismatch. Typechecking pretty much will just go out the window.

This point is lost on me. Why would the compiler insert casts at every mismatch?

It's just a hypothetical what if to illustrate the point.

Sigh. You just keep moving the goalposts. If it was just hypothetical, then it was a furphy indeed.
[snip]

In Steve Krug's rather excellent book "Don't Make Me Think!", he discusses the notion of "satisficing". He says "we tend to assume that [people] ... consider all of the available options, and choose the best one. In reality, though, most of the time we don't choose the *best* option - we choose the *first reasonable option*, a strategy known as _satisficing_. As soon as we find [something] that [leads] to what we're looking for, there's a very good chance we'll [choose] it." (The heavy editing is because he's talking about usage patterns for Web sites. However, he goes on to show how this is based on drawing data from a variety of fields.)

My point is that D may very well solve some of the Big Scary Issues in C++, but that doesn't matter. I, along with all the readers of Imperfect C++ <g>, already know how to obviate/ameliorate/avoid these problems. That D solves them is therefore of little consequence when there are other issues, perhaps seemingly trivial to some but far more fundamental, that it gets completely wrong. I repeat, I have no confidence in using D as it stands now to develop commercial software.

<Aside>
My interest in D ascribes precisely 0.0 to the fact that D initialises variables, or other such stuff that you claim is so important. (I'm not saying this is not important to others, or in principle, mind.) What I'm interested in is:

- slicing
- foreach
- intelligent and efficient class loading (a post 1.0 feature, I know)
- an auto type mechanism (another post 1.0)
- a full and proper (and therefore unique!) handling of threading (a post 2.0 feature, I suspect!!)
- doing a truly beautiful+powerful+efficient+EASY_TO_UNDERSTAND template library that surpasses all the arcane nonsense of C++ (whether Boost, or STLSoft or whatever)
- a "portable C++"
- a language that has good libraries of broad scope
- this community
- the incredibly fortuitous mix of features that makes D's memory management "Not Your Father's Resource Management"

But if those things are layered atop something that I believe is fundamentally fragile/flawed, I'm not gonna use it (and I'm also gonna cry).
</Aside>

I rely on Java proscribing my writing invalid conversions. I rely on Ruby making conversions not matter (so much). I rely on C/C++ compilers warning me about dodgy conversions. But in D I would have to rely on code reviews, and nothing else. Consequent confidence factor: 0%.

But instead of all this ultimately fruitless and wearing back and forth, why can't you simply provide us with flags - "--pre1.0-only--check-narrowing", "--pre1.0-only--check-switch-default", "--pre1.0-only--check-missing-return", "--pre1.0-only--check-boolean-subexpression", etc. - to test things out? Let's run such things over Mango's impressive code base. If I'm wrong, I'll be happy to admit it. I'm sure Kris is of similar mettle. As it stands, I'm *never* going to be convinced I'm wrong through talk, because I have _real experience_ in other similar languages where such things _have_ caused bugs when unchecked.
Feb 28 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <d006j3$s40$1 digitaldaemon.com>, Walter says...
[snip]
I disagree strongly on this. Unless you have something else in mind I'm
forgetting, in your posts you've focussed in on two or three issues and have
assigned major importance to them. None of them, even if they go awry in the
manner you predict, will cause the kinds of erratic, random, awful bugs that
C++ holes mentioned above will and do cause. All of them, as I've argued
(and you and Kris disagree, fair enough), will open the door to other kinds
of correctness bugs if they are fixed per your suggestions.

Suggestions? Fixes? What? How can one get to the point of remedy when one cannot get you to even tentatively admit there might /actually/ be a problem to begin with? That has to happen before one can even begin to weigh the odds. You know? That thing about getting the Moose onto the Table?

This is what's so infuriating. It's called denial, or stonewalling. I've tried all kinds of ways to elicit some real honesty about the current design, over almost a year, yet you've been consistent in denying the very possibility of issue (not to mention the measurable quantities of subtle misinformation, which is yet more frustrating). How can you possibly say I, or anyone else, has managed to even begin to offer a resolution when you simply deny, deny, and deny again? And besides, the point is not necessarily for us to offer resolutions, but hopefully to encourage you to think of one. Where's the forthright interaction here?

I've yet to witness a discourse upon a serious topic with you where it did not feel like I'm corresponding with the ex Iraqi Information Minister ~ like there's a whole lot of spin and very little truth. I see you using the same approach with certain others, but the vast majority are spared. Is this in line with your expectations?

This is what troubles me far more than anything, manifest or not, within the D language itself. I mean, it's just another computer language ~ take it or leave it. What I find disturbing is that neither I, nor anyone else, can make /serious/ critique of the language without a corresponding barrage of denial and something resembling marketing propaganda. Where's the constructive discourse?

Naturally, you might find that my approach is not to your liking. Perhaps I'm overly brusque for your taste. Perhaps I just get your back up. Since you've made a point about not replying to my last several posts, if you choose to not answer this one I'll understand the message loud and clear.

- Kris
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
While I may not necessarily be in full accord with either Kris's content 
or his presentation in this particular post, I do think he has something 
of a point.

Let me put a question to you Walter:

    1. I have found bugs in my C/C++ code involving 
unintended/undesirable integral narrowing. This is a fact. If you cannot 
accept this, then we have no basis for discussion. Do you accept this to 
be so?
    2. D provides, by definition, no facility for 
detecting/warning/remedying unintended/undesirable integral narrowing. 
This is a fact. Do you accept this to be so?

Now I am NOT saying that D's integral conversion is wrong. I'm NOT 
saying it has to be changed. I'm NOT saying DMD (and every other 
compiler) must have narrowing warnings.

What I AM saying is that, if 1 is true, and 2 is true, then one can draw 
one of two conclusions:

    A. D is flawed, by design, since errors due to integral narrowing 
cannot be automatically detected, other than by tools that do not 
represent the absolute letter of the language definition. Or,
    B. D is different from C/C++ in a fundamental way such that the 
truth of 1. does not affect 2.

What I'm asking, Walter, is for you to tell me:

    - that 1. is wrong, or
    - that 2. is wrong, or
    - that A holds, or
    - that B holds

I don't want/need to hear any more examples of stupid and pointless 
conversion warnings/errors, because I absolutely admit they are legion. 
I don't want/need to have to demonstrate the contrary position of 
dangerous unwarned/unerrored conversions, because they are also legion.

I just want to know which of the above four is true. Whichever of those 
four it is, I'd like some explanation, but at this point I'd just settle 
for the bald answer.

I don't care what the solution is, either, because this ng is full of 
super smart people. But to get a solution requires a problem, which is 
where the answer comes in.

Cheers

Matthew



"Kris" <Kris_member pathlink.com> wrote in message 
news:d00dfj$14nj$1 digitaldaemon.com...
[snip]

Feb 28 2005
next sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Matthew" <admin stlsoft.dot.dot.dot.dot.org> wrote in message
news:d00fgo$16uf$3 digitaldaemon.com...
     1. I have found bugs in my C/C++ code involving
 unintended/undesirable integral narrowing. This is a fact. If you cannot
 accept this, then we have no basis for discussion. Do you accept this to
 be so?

Yes, and I've already done so to you (in the post about the risk of bugs with it).
     2. D provides, by definition, no facility for
 detecting/warning/remedying unintended/undesirable integral narrowing.
 This is a fact. Do you accept this to be so?

Yes.
 What I AM saying is that, if 1 is true, and 2 is true, then one can draw
 one of two conclusions:

     A. D is flawed, by design, since errors due to integral narrowing
 cannot be automatically detected, other than by tools that do not
 represent the absolute letter of the language definition. Or,
     B. D is different from C/C++ in a fundamental way such that the
 truth of 1. does not affect 2.

Let's just dismiss (B) as wrong.

I have trouble with (A), though. All constructs in any language can result in bugs if used wrong, including explicit casts, and no compiler can detect them. The nirvana is to be able to figure out a language design that eliminates all possible error, but that's a research project I am not equipped to attempt.

The issue is a judgement call - what are the benefits vs risks of a particular construct? You assign the risk of an unintended narrowing as unacceptably high. I do not understand why you place it so high, but I am convinced you feel this to be so. Factoring in to that risk is the cost of the bug produced. I submit that such a bug will likely be reproducible and hence relatively easy to find. (This is as opposed to an uninitialized pointer bug, which can be really, really hard to reproduce and find. Part of my failure here is not understanding why you seem to regard the former problem as more dangerous than the latter, which is endemic in C++. I really loathe irreproducible bugs, and the design of D is clearly oriented towards stamping those out.)

Clearly there is at least *some* benefit to the implicit narrowing conversions, as they have survived unscathed in two revisions of the C standard, one in C++, and I know of no proposal to C++ to make them an error.

Then there are the benefits and risks of the alternatives. The proposed solution is, issue a warning, and then eliminate the warning by inserting a cast. A cast is powerful but very blunt. It won't just do narrowing conversions, it will heroically attempt all sorts of things to convert the type. I submit that these kinds of casts can hide bugs. So far, I have not convinced you that this is a risk greater than 0. Running mango through this with warnings on will not prove that mango doesn't have such bugs in it; I don't know any way to automatically detect such things.

You've proposed a special cast type, narrow(t), to provide a finer point to casting. I know you like having many different kinds of casting available as separate constructs. I find the multitude of them, even the C++ ones, confusing and find myself constantly going back to just ordinary casts.

The same points here generally apply to the other correctness issues we've discussed here - is the cure worse than the problem? It's not that there is *no* problem, it's that the cure is a worse problem. This is a point I have obviously failed to make in an understandable manner.
Feb 28 2005
next sibling parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
I'm ignoring much of the attendant stuff as the reflective airborne 
particulate matter that it is:

    "nirvana", "research project", "opposed to an uninitialized pointer 
bug, which can be really, really hard to reproduce and find" - smoke

    "I do not understand why you place it so high, but I am convinced 
you feel this to be so" - irrelevant

    "I submit that such a bug will likely be reproducible and hence 
relatively easy to find" - conjecture. (I've just been submitting 
STLSoft 1.8.3 to many-compiler-compilations, and in a more organised and 
comprehensive fashion than ever before, and I've found several long 
dormant bugs. Either I'm a complete moron, or you're wrong. Could be 
either ...)

    "The proposed solution is, issue a warning, and then eliminate the 
warning by inserting a cast." - invalid assumption of the response to 
the problem, based on ... ??what?? (That is to say, maybe I might change 
the lhs type?)

    "A cast is powerful but very blunt" - says who? It is in C. Less so 
in C++. Why must it be in D?


Thankfully, you've provided enough substance within to _finally_ allow 
me to understand your POV. Here's the best I can make of your argument, 
then. Please point out where I'm misrepresenting.

    1. D supports all integral=>integral conversions, including 
narrowing ones. This is a potential source of bugs, which you 
acknowledge, though you suggest that the alternative is worse.

    2. The reason for 1. is that you believe that providing a warning 
may incline diligent developers to develop a bad habit of applying casts 
with insufficient thought. Furthermore, the problem with this is not 
that they will do so for narrowing, since the compiler already does that 
for them, but rather than they will *also* do so in other circumstances. 
I acknowledge that this is a very real danger.

    3. You do not think that separating the concerns of casts into 
different casts operators is a good idea. Eureka! Now we're at the nub 
of the matter, at last.

Your whole argument rests, then, on the fact that using a cast for 
narrowing might incline people to (mis)use casts elsewhere. As a 
consequence, narrowing is always allowed.

You agree that the current behaviour is manifestly a cause of bugs 
(naturally we would debate frequency of incidence, detectability, 
seriousness, etc....). I agree that requiring a simple cast may lead to 
cast-happy behaviour, which is also a source of bugs. All agree so far?

The problem, then, simply boils down to the notion of whether we split 
up cast functionality. You have not argued that a narrow_cast() would 
fail to address the requirement for narrowing conversions to be part of 
the programmer's cognisance. Nor have you argued that it would fail by 
inclining unrelated cast abuse (e.g. people'd start going mad with 
cross_cast(), down_cast(), etc. etc.). No, what you've said is:

    "I know you like having many different kinds of casting available as
    separate constructs. I find the multitude of them, even the C++ 
ones,
    confusing and find myself constantly going back to just ordinary 
casts."

This is bogus because:

    1. It's based on your own personal tastes and experiences. (As the 
only commentator in recent times to have published a C++ book that, in 
one circumstance only, advocates eschewing them for the wicked old C 
cast, I think I am qualified to confidently say that C++ casts are well 
received.)

    2. It's based on the (naive, IMO) dogma of simplicity, which 
pervades D. Einstein said "as simple as possible, but no simpler". I 
submit that, in this case at least, you've forgotten the second half of 
that sentence. cast() is _too_ simple. It's needs complicationating. (I 
confess that, prior to this topic, I hadn't really realised that D's 
cast() has the same multifaceted nature as the C cast in C++. Now I 
have, I must say I'm completely appalled, and will inevitably be banging 
on about that too very soon.)

We've moved from two informed, intelligent, reasoned, reasonable, but 
opposing points of view, to an arbitrary decision based on whim and 
partial experience. While I would acknowledge that your experience may 
be greater than that of anyone else on this newsgroup, it does not 
represent the totality of software engineering, and I think this is a 
real and growing problem for D.

From a more general perspective, I wonder why you don't see how this 
will look to other people. You've now acknowledged that D has (at least 
one) flaw-by-design, and we've pretty much identified the rationale 
behind it. I submit that people coming from outside this community will 
look at that and dismiss D. Not because they care more about integral 
conversions above all else (though some might). But because they will 
naturally infer from this relatively simple and straightforward flaw 
that there will be other, deeper, and potentially nastier flaws. After 
all, how can one fault the reasoning: "I don't know all about X, but I 
have seen that X has done something very simple quite badly, and I 
therefore assume that X does not-so-simple things very badly indeed. 
I'll avoid X."

One last comment: I would agree with Kris in so far as that you compound 
the arbitrary nature of these decisions by refusing us ready access to 
the means by which we might prove/disprove them. If you're so confident 
of your position, allow us a pre-1.0 cast warning to play with, and 
instrument the inappropriate use of casts for non-narrowing things. 
Data's far more convincing than partial argument, however much it might 
be informed by experience.

Matthew
Feb 28 2005
prev sibling parent reply Roberto Mariottini <Roberto_member pathlink.com> writes:
In article <d00kbq$1bfj$1 digitaldaemon.com>, Walter says...

All
constructs in any language can result in bugs if used wrong, including
explicit casts, and no compiler can detect them. The nirvana is to be able
to figure out a language design that eliminates all possible error, but
that's a research project I am not equipped to attempt.

A cast is powerful but very blunt. It won't just do narrowing
conversions, it will heroically attempt all sorts of things to convert the
type. I submit that these kinds of casts can hide bugs. So far, I have not
convinced you that this is a risk greater than 0. Running mango through this
with warnings on will not prove that mango doesn't have such bugs in it, I
don't know any way to automatically detect such things.

Once again, what about doing this at runtime? Like the common "missing default
in a switch" error, such narrowing conversion errors can be found at runtime
(and I'd add overflows):

int i = 256;
ubyte b1 = (ubyte) i; // no error: explicit cast
ubyte b2 = i; // throws NarrowCastError at runtime in debug builds

ubyte b;
for (i = 0; i < 1000; ++i)
{
..
..
b++; // throws IntegerOverflowError at runtime in debug builds
}

Ciao
Mar 01 2005
next sibling parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Roberto Mariottini wrote:

 Once again, what about doing this at runtime? Like the common "missing default
 in a switch" error, such narrowing conversion errors can be found at runtime
 (and I'd add overflows):
 
 int i = 256;
 ubyte b1 = (ubyte) i; // no error: explicit cast
 ubyte b2 = i; // throws NarrowCastError at runtime in debug builds
 
 ubyte b;
 for (i = 0; i < 1000; ++i)
 {
 ..
 ..
 b++; // throws IntegerOverflowError at runtime in debug builds
 }

Petty language lawyer warning: "debug builds" in D is not really very clear,
since debug is just a special version in the D specification? You meant to
say "non-release builds" :-)

--anders
Mar 01 2005
prev sibling parent "Charlie Patterson" <charliep1 excite.com> writes:
"Roberto Mariottini" <Roberto_member pathlink.com> wrote in message 
news:d01bt4$25h8$1 digitaldaemon.com...
 Once again, what about doing this at runtime? Like the common "missing 
 default
 in a switch" error, such narrowing conversion errors can be found at 
 runtime
 (and I'd add overflows):

 int i = 256;
 ubyte b1 = (ubyte) i; // no error: explicit cast
 ubyte b2 = i; // throws NarrowCastError at runtime in debug builds

I was going to suggest the same thing.

<soapbox>I don't see why, when it comes to an impasse over preference, you
guys don't look for *other* solutions. Back up in your decision tree, so to
speak. In the end, and I've said it before, if you get mired in preference,
and Walter is writing the language, then Walter wins. So the impetus is
really on Matthew or whoever to find other creative solutions when the only
difference is preference.</soapbox>

Here are a couple of other ideas:

int i = 32;
ubyte b1 = (ubyte) i; // not allowed; don't care if it *would* fit.

I've been programming at various levels for 20 years, and I can't remember
any times that one really wanted to overflow an integer or even "narrow" one
in practice. The exception which proves the rule is

unsigned int c = -1; // to get 0xFFFFFFFF

Part of the excuse is that the coder doesn't have a set size for an int. And
I think this one comes up now and then as a "hack" that should be avoided
and replaced with

unsigned int c = ~0;

Can others name good situations for "narrow casting"? If there were no cast
and trying it without a cast were an error, coders would be forced to
"upgrade" the LHS even if it propagates up the code tree. OK. Admittedly this
is harsh, because you can never get narrow casted if you want, but I can't
think of reasons it would be necessary to narrow cast. The run-time check in
debug sounds pretty good.

In short, if you consider comp langs as a way to express math sans infinity,
overflow and lost most significant bits is simply an error. Not something
that you might need occasionally.

The solution categories for a language problem X are

* always trust the coder knows what he's doing and let X through,
* allow implicit X but check at run-time (probably in debug only for speed),
* allow the coder to explicitly force X,
* allow the coder to explicitly force X, but check at run-time (probably in debug only for speed),
* always stop the coder from doing X
* avoid the issue of X altogether
Mar 01 2005
prev sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
First, let me say, good post Matthew, I think this cuts down to the real  
issue here.
Which IMO is a basic disagreement over an initial proposition or supposition.  
(*see end)

I'll just sprinkle my opinions in below, no-one asked me to, but this _is_  
a discussion group of sorts...

On Tue, 1 Mar 2005 12:11:48 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 While I may not necessarily be in full accord with either Kris's content
 or his presentation in this particular post, I do think he has something
 of a point.

 Let me put a question to you Walter:

     1. I have found bugs in my C/C++ code involving
 unintended/undesirable integral narrowing. This is a fact. If you cannot
 accept this, then we have no basis for discussion. Do you accept this to
 be so?

     2. D provides, by definition, no facility for
 detecting/warning/remedying unintended/undesirable integral narrowing.
 This is a fact. Do you accept this to be so?

 Now I am NOT saying that D's integral conversion is wrong. I'm NOT
 saying it has to be changed. I'm NOT saying DMD (and every other
 compiler) must have narrowing warnings.

 What I AM saying is that, if 1 is true, and 2 is true, then one can draw
 one of two conclusions:

     A. D is flawed, by design, since errors due to integral narrowing
 cannot be automatically detected, other than by tools that do not
 represent the absolute letter of the language definition. Or,
     B. D is different from C/C++ in a fundamental way such that the
 truth of 1. does not affect 2.

I think there are other conclusions, for example:

C. Integral narrowing might be a bug, it might not. It is outside the scope
of the D compiler to identify/notify you of this possibility.

The reason for the above is the fact that D does not have warnings, so,
unless it can be 100% sure something is a bug then it cannot do anything
about it. Perhaps _this_ is the flaw with D?

That said, a number of decisions have been made to call things which are
'likely' to be erroneous as simply erroneous, for example:

if (a) ; <- error, must use {}

technically it cannot know for sure if that _is_ an error, but, it's
'likely' to be an error, and there is another easy alternative, so it calls
it an error.

So, is it 'likely' to be an error? What percentage of implicit integer
narrowing occurrences result in an error?

The alternative is explicit casts, or refactoring the code in some way,
Walter has given some examples of the results of that... they don't look
pretty, at least to me.

If the answer is that it's not likely and/or the alternative is bad, then I
don't think D can give an error, in which case, given it has no warnings, it
is a "lint processing" task.

I view Walter's responses as attempts to show that it's unlikely and/or the
alternative is bad.

* I think the basic disagreement is a combination of "disagreement over how
likely this is to cause a bug" and "the responsibility of the compiler to
notify the programmer", which comes back to "D doesn't give warnings".

So, first you need to agree whether it is likely enough to warrant an
error... if not, then isn't it a "lint processing" task?

I personally like the concept of segregating "compile" and "lint
processing". I view notifying you of any and every even remotely possible
bug a "lint processing" task. I think "lint processing" needs to be done
once and only once on the same code, meaning, you run it when you compile on
your dev box, but not when you compile the same code on your other x
machines.
If anyone asked my opinion I would say Matthew was the best person for the task of writing the lint processor.

Regan
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message
news:opsmxt9puo23k2f5 ally...
 I view Walters responses as attempts to show that it's unlikely and/or the
 alternative is bad.

Yes, that's correct. I'd like to add that not all bugs are equal - some are easy to reproduce and find, some are very hard to track down. I argue that the latter are so costly that they are worth considerable cost to try and prevent, whereas the former are less costly and so less risky. D expends a lot more cost on trying to prevent the latter than the former. Things like array bounds checking, guaranteed initialization, garbage collection, etc., are examples of such steps.
Feb 28 2005
next sibling parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsmxt9puo23k2f5 ally...
 I view Walters responses as attempts to show that it's unlikely 
 and/or the
 alternative is bad.

Yes, that's correct. I'd like to add that not all bugs are equal - some are easy to reproduce and find, some are very hard to track down. I argue that the latter are so costly that they are worth considerable cost to try and prevent, whereas the former are less costly and so less risky. D expends a lot more cost on trying to prevent the latter than the former. Things like array bounds checking, guaranteed initialization, garbage collection, etc., are examples of such steps.

Oh, come on!! Is this truly _your_ experience, or supposition? Whichever, it
is *NOT* my experience.

I have not had an array bounds error since somewhere early last year.
I have not had an uninitialised variable bug since late last year.
I had a memory leak (one) in January (I assume this is the equiv to GC item)
I have had two truncation bugs last month, one of which I only found out
about when running STLSoft through multiple compilers.

Now either you're representative of every other developer on the planet bar
one, or I am, or we each represent but a small set of the spectrum. Of
course it's the latter, which means that _you_ cannot cherry pick which
issues you think matter and which do not. (Well, you can, it being your
language, but you are not going to be right, and it ain't going to make D
all it can be.)

Does no one else get this?!?!
Feb 28 2005
next sibling parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 1 Mar 2005 14:39:18 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsmxt9puo23k2f5 ally...
 I view Walters responses as attempts to show that it's unlikely
 and/or the
 alternative is bad.

Yes, that's correct. I'd like to add that not all bugs are equal - some are easy to reproduce and find, some are very hard to track down. I argue that the latter are so costly that they are worth considerable cost to try and prevent, whereas the former are less costly and so less risky. D expends a lot more cost on trying to prevent the latter than the former. Things like array bounds checking, guaranteed initialization, garbage collection, etc., are examples of such steps.

Oh, come on!! Is this truly _your_ experience, or supposition? Whichever, it
is *NOT* my experience.

I have not had an array bounds error since somewhere early last year.
I have not had an uninitialised variable bug since late last year.
I had a memory leak (one) in January (I assume this is the equiv to GC item)
I have had two truncation bugs last month, one of which I only found out
about when running STLSoft through multiple compilers.

Now either you're representative of every other developer on the planet bar
one, or I am, or we each represent but a small set of the spectrum. Of
course it's the latter, which means that _you_ cannot cherry pick which
issues you think matter and which do not. (Well, you can, it being your
language, but you are not going to be right, and it ain't going to make D
all it can be.)

Does no one else get this?!?!

Sure, but how do you pick the right thing in this case without canvassing
every programmer in existence?

Someone has to make a decision, that person is Walter, D is his project. D
will become all that Walter makes it, plus all that we contribute. My
opinion is that he's (and we are) doing a fairly good job so far.

There is nothing stopping you taking the D front end and writing a
lint-like program, I think having one, written by you, would be the best
possible thing for D for several reasons:

1. you're going to do your damndest (sp?) to include as many possible
things (like this) which you think should be done by the compiler.

2. you already use 10 different C++ compilers to get a series of unique,
and useful warnings.

3. you have a good understanding of the many pitfalls and have thought
about how to avoid them in the best possible manner.

The only problem being the amount of time it might take, so, perhaps you or
someone else organises a team of people to work on this. I'd help, but I
have a lack of spare time myself.

Regan
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opsmxxnciy23k2f5 ally...
 On Tue, 1 Mar 2005 14:39:18 +1100, Matthew 
 <admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsmxt9puo23k2f5 ally...
 I view Walters responses as attempts to show that it's unlikely
 and/or the
 alternative is bad.

Yes, that's correct. I'd like to add that not all bugs are equal - some are easy to reproduce and find, some are very hard to track down. I argue that the latter are so costly that they are worth considerable cost to try and prevent, whereas the former are less costly and so less risky. D expends a lot more cost on trying to prevent the latter than the former. Things like array bounds checking, guaranteed initialization, garbage collection, etc., are examples of such steps.

Oh, come on!! Is this truly _your_ experience, or supposition? Whichever, it
is *NOT* my experience.

I have not had an array bounds error since somewhere early last year.
I have not had an uninitialised variable bug since late last year.
I had a memory leak (one) in January (I assume this is the equiv to GC item)
I have had two truncation bugs last month, one of which I only found out
about when running STLSoft through multiple compilers.

Now either you're representative of every other developer on the planet bar
one, or I am, or we each represent but a small set of the spectrum. Of
course it's the latter, which means that _you_ cannot cherry pick which
issues you think matter and which do not. (Well, you can, it being your
language, but you are not going to be right, and it ain't going to make D
all it can be.)

Does no one else get this?!?!

Sure, but how do you pick the right thing in this case? without canvasing every programmer in existance?

That's my point. The only person is the programmer writing the code that's about to be compiled. Any other judgement is flawed.
 Someone has to make a decision, that person is Walter, D is his 
 project. D  will become all that Walter makes it, plus all that we 
 contribute. My  opinion is that he's (and we are) doing a fairly good 
 job so far.

Good, but not great. Get this: I believe that D has the potential to be truly fantastic. As it's going, I suspect it'll be hamstrung by its flaws, and become YAGI.
 There is nothing stopping you taking the D front end and writing a 
 lint-like program, I think having one, written by you, would be the 
 best  possible thing for D for several reasons:

But that's the whole bleeding point. (Not swearing at you, btw.) I don't _want_ to write something that's going to go _against_ the standard. If I wanted that I'd just write a front end that gave me a boolean type, prevented any non-boolean subexpressions, etc. etc. But what's the point in that? That's not a commercial solution, it's a hobby.
 1. you're going to do your damndest (sp?) to include as many possible 
 things (like this) which you think should be done by the compiler.

 2. you already use 10 different C++ compilers to get a series of 
 unique,  and useful warnings.

I do, because I'm a pragmatist, and that's the pragmatic choice for C++. Were I to be presented with having to do the same thing for D, the pragmatic choice would be to stick to C++. (I'm serious.)
 3. you have a good understanding of the many pitfalls and have thought 
 about how to avoid them in the best possible manner.

 The only problem being the amount of time it might take, so, perhaps 
 you  or someone else organises a team of people to work on this. I'd 
 help, but  I have a lack of spare time myself.

I do plan to write such a thing in the future, just as I did for Java some years ago, to help with things that are outside the language, e.g. bracing styles, import lists, checking for doc comments, etc.. I have no intention to do it for such fundamental things _within_ the language. If D needs a lint for such things, I think D will not be worth using. (I base this on the supposition that many/most engineers do not use a lint, and belief that they should not have to use a lint to be able to write robust programs.)
Feb 28 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 1 Mar 2005 15:17:12 +1100, Matthew  
<admin stlsoft.dot.dot.dot.dot.org> wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsmxxnciy23k2f5 ally...
 Sure, but how do you pick the right thing in this case? without
 canvasing  every programmer in existance?

That's my point. The only person is the programmer writing the code that's about to be compiled. Any other judgement is flawed.

Fair enough. So, the tool, whatever it is, must be very configurable, in
order to make the right choice for each person it is used by, right?

I think, and this is due to Walter convincing me some time back, that the
compiler should behave the same no matter which platform you compile your
code on, code should not pass one compiler and fail another.

There seems to me to be only one path to achieve this, and that is to
define everything that is an error, and not allow anything outside that to
be given as an error or a warning.

The reason you cannot go on to define all the possible warnings is that, as
you've shown, there are many of them, and each person has different ideas
about what should be warned about. Given that, and the desire for the
compiler to behave the same in all incarnations, warnings are not allowed.

I think there are several cases, of which this is one, where it's not
likely enough to always be an error, or isn't detectable, so the compiler
cannot call an error. But, we still want some sort of warning about it; as
the compiler can't give one, we need another app to do it, a lint program.
 Someone has to make a decision, that person is Walter, D is his
 project. D  will become all that Walter makes it, plus all that we
 contribute. My  opinion is that he's (and we are) doing a fairly good
 job so far.

Good, but not great. Get this: I believe that D has the potential to be truly fantastic. At its going I suspect it'll be hamstrung by its flaws, and become YAGI

Perhaps, time will tell.
 There is nothing stopping you taking the D front end and writing a
 lint-like program, I think having one, written by you, would be the
 best  possible thing for D for several reasons:

But that's the whole bleeding point. (Not swearing at you, btw.) I don't _want_ to write something that's going to go _against_ the standard.

AFAICS a lint program does not go _against_ the standard; in fact Walter has supported its creation ever since I can remember.
 If
 I wanted that I'd just write a front end that gave me a boolean type,
 prevented any non-boolean subexprssions, etc. etc. But what's the point
 in that. That's not a commercial solution, it's a hobby.

It's a commercial solution, if you write it as a lint program, eg. change "prevented any non-boolean subexpressions" to "optionally warned of non-boolean subexpressions", Walter endorses it (which I believe he will), it gets packaged with DMD, and people start to use it.
 1. you're going to do your damndest (sp?) to include as many possible
 things (like this) which you think should be done by the compiler.

 2. you already use 10 different C++ compilers to get a series of
 unique,  and useful warnings.

I do, because I'm a pragmatist, and that's the pragmatic choice for C++. Were I to be presented with having to do the same thing for D, the pragmatic choice would be to stick to C++. (I'm serious.)

Sure, until you write the lint program some of us have been dreaming of.
 3. you have a good understanding of the many pitfalls and have thought
 about how to avoid them in the best possible manner.

 The only problem being the amount of time it might take, so, perhaps
 you  or someone else organises a team of people to work on this. I'd
 help, but  I have a lack of spare time myself.

I do plan to write such a thing in the future, just as I did for Java some years ago, to help with things that are outside the language, e.g. bracing styles, import lists, checking for doc comments, etc..

Interesting... I don't see the above examples as things a lint program
does, more like they're other tools in the build process, i.e.

bracing styles sounds like an editor/code formatting task.
dmake automatically figures out import lists.
doc comments, a task for documentation generating software/app.

All this could be combined into one app, but why, when if you segregate you
can choose from the many different incarnations of each which will no doubt
spring up due to the differences in style each person has.
 I have no
 intention to do it for such fundamental things _within_ the language. If
 D needs a lint for such things, I think D will not be worth using.

Given that IMO the D compilers task is not to question code, but to compile it, provided it can be compiled, I think this issue and many like it should be handled in a lint program.
 (I
 base this on the supposition that many/most engineers do not use a lint,
 and belief that they should not have to use a lint to be able to write
 robust programs.)

I think the above belief to be based on experience with C/C++ compilers;
the same is not true for D, because the D compiler does not issue warnings,
it does not perform all of the same tasks as a C/C++ compiler.

I believe we can change the above attitude, with D, by supplying a D
compiler and a D lint program AKA 'the nanny'. I think both are separate
but essential.

Regan
Feb 28 2005
parent reply brad beveridge <brad nowhere.com> writes:
Why not have the compiler be able to run programs at compile time?  What 
I mean is that the DMD config file can have a set of (I hate to say this 
word) "preprocessors", programs that get run before the real compiler 
and get passed the same input as the compiler.
Sure, you can do this with a script - but then people won't bother.  So, 
in this way the compiler can remain pure - either code passes or fails, 
and you can enforce the running of a lint (or any other) program 
transparently.  D could potentially check the return values from those 
programs and fail to run if any of the "preprocessors" fail.
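A config-driven version of this idea might look something like the following (purely hypothetical syntax: neither a [Preprocess] section nor a 'dlint' tool exists in the real sc.ini/dmd.conf):

```
# Hypothetical dmd.conf fragment: tools the compiler would run before
# each compile. Both the section and the tool name are made up.
[Preprocess]
TOOL1=dlint -quiet %SOURCES%
FAIL_ON_ERROR=yes
```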
As long as D supports different config files (ie, look for the config 
file in the current directory first, then /etc/dmd.conf), then you can 
make sure that you have global sensible options, with project specific 
overrides.

This is also in keeping with the philosophy that each program should do 
one task and do it well.  There is some lost efficiency in multiple 
handling of the files, but D is so fast at compiling that I think that 
would be minimal.

Brad
Feb 28 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 01 Mar 2005 18:39:14 +1300, brad beveridge <brad nowhere.com>  
wrote:
 Why not have the compiler be able to run programs at compile time?  What  
 I mean is that the DMD config file can have a set of (I hate to say this  
 word) "preprocessors", programs that get run before the real compiler  
 and get passed the same input as the compiler.

I don't think you want 'lint' to run before the compile, but rather after, I mean, if compile fails, who cares what lint says?
 Sure, you can do this with a script - but then people won't bother.  So,  
 in this way the compiler can remain pure - either code passes or fails,  
 and you can enforce the running of a lint (or anyother) program  
 transparently.

I think it's a good idea; once a lint program has been written it can be bundled with the compiler and thus configured to happen automatically. This serves to get everyone doing it, by default. You'd then use lint on your dev box, and disable it on your build boxes.
 D could potentially check the return values from those programs and fail  
 to run if any of the "preprocessors" fail.

I don't think lint processing will give a pass/fail result. It will simply
give a list of things to double check, things you should check then either
fix, or ignore.

Incidentally, if you want to ignore a lint warning, how do you go about
doing that? I mean, you could just ignore it, but it will still pop up
every compile and may obfuscate other lint warnings.
 As long as D supports different config files (ie, look for the config  
 file in the current directory first, then /etc/dmd.conf), then you can  
 make sure that you have global sensible options, with project specific  
 overrides.

 This is also in keeping with the philosophy that each program should do  
 one task and do it well.  There is some lost efficiency in multiple  
 handling of the files, but D is so fast at compiling that I think that  
 would be minimal.

The lint program is going to need to read object files as well, isn't it, because some lint warnings might be based on global variables, types etc. defined outside the source file it's processing.

Regan
Mar 01 2005
parent reply "Ben Hinkle" <bhinkle mathworks.com> writes:
"Regan Heath" <regan netwin.co.nz> wrote in message 
news:opsmy95bqr23k2f5 ally...
 On Tue, 01 Mar 2005 18:39:14 +1300, brad beveridge <brad nowhere.com> 
 wrote:
 Why not have the compiler be able to run programs at compile time?  What 
 I mean is that the DMD config file can have a set of (I hate to say this 
 word) "preprocessors", programs that get run before the real compiler 
 and get passed the same input as the compiler.

I don't think you want 'lint' to run before the compile, but rather after; I mean, if the compile fails, who cares what lint says?

That sounds reasonable, though I can see why people would want linting info independently of whether the compile worked or not. There are arguments for running it either before or after.
 Sure, you can do this with a script - but then people won't bother.  So, 
 in this way the compiler can remain pure - either code passes or fails, 
 and you can enforce the running of a lint (or anyother) program 
 transparently.

I think it's a good idea. Once a lint program has been written it can be bundled with the compiler and configured to run automatically; this gets everyone using it by default. You'd then use lint on your dev box, and disable it on your build boxes.
 D could potentially check the return values from those programs and fail 
 to run if any of the "preprocessors" fail.

I don't think lint processing will give a pass/fail result; it will simply give a list of things to double-check, things you should then either fix or ignore.

agreed.
 Incidently, if you want to ignore a lint warning, how do you go about 
 doing that?

The lint program should have ways of customizing the output to filter some messages you don't ever want to see.
 I mean, you could just ignore it, but it will still popup every compile 
 and may obfuscate other lint warnings.

 As long as D supports different config files (ie, look for the config 
 file in the current directory first, then /etc/dmd.conf), then you can 
 make sure that you have global sensible options, with project specific 
 overrides.

 This is also in keeping with the philosophy that each program should do 
 one task and do it well.  There is some lost efficiency in multiple 
 handling of the files, but D is so fast at compiling that I think that 
 would be minimal.

The lint program is going to need to read object files also, isn't it, because some lint warnings might be based on global variables, types etc defined outside the source file it's processing..

Lint doesn't check link errors - only syntax and probably some semantic errors. So it won't need to look at object files. The first pass - syntax checking - can also happen without looking at other source files. I don't know how much semantic analysis is possible without looking at the imports, though. Probably not much.
 Regan 

Mar 01 2005
parent reply "Regan Heath" <regan netwin.co.nz> writes:
On Tue, 1 Mar 2005 17:01:56 -0500, Ben Hinkle <bhinkle mathworks.com>  
wrote:
 "Regan Heath" <regan netwin.co.nz> wrote in message
 news:opsmy95bqr23k2f5 ally...
 On Tue, 01 Mar 2005 18:39:14 +1300, brad beveridge <brad nowhere.com>
 wrote:
 Why not have the compiler be able to run programs at compile time?   
 What
 I mean is that the DMD config file can have a set of (I hate to say  
 this
 word) "preprocessors", programs that get run before the real compiler
 and get passed the same input as the compiler.

I don't think you want 'lint' to run before the compile, but rather after, I mean, if compile fails, who cares what lint says?

That sounds reasonable though I can see why people would want to know about linting info independent of whether the compile worked or not. I can see arguments for running either before or after.

True, given that it's a separate app, they can run it stand-alone, whenever they like.
 Sure, you can do this with a script - but then people won't bother.   
 So,
 in this way the compiler can remain pure - either code passes or fails,
 and you can enforce the running of a lint (or anyother) program
 transparently.

I think it's a good idea, once a lint program has been written it can be bundled with the compiler and this configured to happen automatically, this serves to get everyone doing it, by default. You'd then use lint on your dev box, and disable it on your build boxes.
 D could potentially check the return values from those programs and  
 fail
 to run if any of the "preprocessors" fail.

I don't think lint processing will give a pass/fail result; it will simply give a list of things to double-check, things you should then either fix or ignore.

agreed.
 Incidently, if you want to ignore a lint warning, how do you go about
 doing that?

The lint program should have ways of customizing the output to filter some messages you don't ever want to see.

Ok, but what if it's a warning about x, and you do want to know about x because it might be a bug, but in this one instance you've already decided it's not? Basically it seems to me you want to be able to ignore x in this instance only. Or does that never occur for some reason?
 I mean, you could just ignore it, but it will still popup every compile
 and may obfuscate other lint warnings.

 As long as D supports different config files (ie, look for the config
 file in the current directory first, then /etc/dmd.conf), then you can
 make sure that you have global sensible options, with project specific
 overrides.

 This is also in keeping with the philosophy that each program should do
 one task and do it well.  There is some lost efficiency in multiple
 handling of the files, but D is so fast at compiling that I think that
 would be minimal.

The lint program is going to need to read object files also, isn't it, because some lint warnings might be based on global variables, types etc defined outside the source file it's processing..

Lint doesn't check link errors - only syntax and probably some semantic errors. So it won't need to look at object files. The first pass - syntax checking - can also happen without looking at other source files. I don't know how much semantic analysis is possible without looking at the imports, though. Probably not much.

I was thinking of a "narrowing integer conversion" warning: say you're assigning a value to a global int. Lint would need to know that the global was an int in order to give a warning about it.

Regan
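To make the scenario concrete, here is a minimal sketch in (2005-era) D of what such a lint pass would have to see; the names are invented for illustration:

```d
// sketch: 'total' stands in for a global declared in another,
// imported module (names invented for illustration)
int total;

void accumulate(long value)
{
    // the assignment narrows long -> int; D of this era performs it
    // implicitly and silently, so a lint pass must look up the
    // declared type of 'total' -- possibly in an imported module --
    // in order to warn here
    total = value;
}
```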
Mar 01 2005
next sibling parent reply brad domain.invalid writes:
 
 Ok, but what if it's a warning about x, and you do want to know about 
 x,  because it might be a bug, but in this one instance you've already 
 decided  it's not. Basically it seems to me you want to be able to 
 ignore x in this  instance only.
 
 Or does that never occur for some reason?

- You could use a pragma to suppress the warning, but that results in ugly code
- You could use a config file to suppress specific warnings on specific lines, but that will break when the code moves
- You could use a config file to suppress specific warnings within a function and happening on a particular symbol, but that may be too coarse
- You could do as above, and specify the count; if the counts don't match, warn. Ie - "suppress narrow cast : foo.d : main : counter : 2" could perhaps mean: suppress the narrowing cast warnings for the foo.d file, within the function main, happening on the symbol counter - and suppress 2 of them. If you change the code so that lint doesn't see exactly 2 narrowing casts on that symbol in that function, then it warns again.

Hand editing config files sucks, but presumably if you are super paranoid and have lint set to super-paranoid, then you won't mind suppressing the genuine occasions where you know better.

Brad
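For illustration only, a suppression file following the counted scheme might look like this (the file name and second rule are invented):

```
# lint.suppress (hypothetical)
# suppress <warning> : <file> : <function> : <symbol> : <count>
suppress narrow cast : foo.d : main : counter : 2
suppress narrow cast : foo.d : parse : offset : 1
```

The count acts as a checksum on intent: edit the function so the number of narrowing casts changes, and the warning comes back.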
Mar 01 2005
parent "Regan Heath" <regan netwin.co.nz> writes:
On Wed, 02 Mar 2005 11:35:18 +1300, <brad domain.invalid> wrote:
  Ok, but what if it's a warning about x, and you do want to know about  
 x,  because it might be a bug, but in this one instance you've already  
 decided  it's not. Basically it seems to me you want to be able to  
 ignore x in this  instance only.
  Or does that never occur for some reason?

I think that this would be a hard problem in general.

That was my impression also.
   - You could use a pragma to suppress the warning, but that results in  
 ugly code

And ties the code to the lint, not something I think is a good idea.
   - You could use a config file to suppress specific warnings on  
 specific lines, but that will break when the code moves

Yep.. I thought about this also.
   - You could use a config file to suppress specific warnings within a  
 function and happening on a particular symbol, but that may be too coarse

Possibly, but it's not as bad as the other options.
   - You could do as above, and specify the count, if the counts don't  
 match, warn.  Ie - "suppress narrow cast : foo.d : main : counter : 2"  
 could perhaps mean, suppress the narrowing cast warnings for the foo.d  
 file, within the function main, happening on the symbol counter - and  
 suppress 2 of them.  If you change the code so that lint doesn't see  
 exactly 2 narrowing casts on that symbol in that function, then it warns  
 again.

Sounds like a useful, optional addition to the idea.
 Hand editing config files sucks, but presumably if you are super  
 paranoid and have lint set to super-paranoid, then you won't mind  
 suppressing the genuine occasions that you know better.

Or the lint program could manage the config, i.e. you run

  lint -s W10254 foo.d main 2

(meaning "suppress narrow cast : foo.d : main : counter : 2") once, and it writes the config.

Regan
Mar 01 2005
prev sibling parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
 The lint program is going to need to read object files also, isn't it,
 because some lint warnings might be based on global variables, types etc
 defined outside the source file it's processing..

Lint doesn't check link errors - only syntax and probably some semantic errors. So it won't need to look at object files. The first pass - syntax checking - can also happen without looking at other source files. I don't know how much semantic analysis is possible without looking at the imports, though. Probably not much.

I was thinking of a "narrowing integer conversions warning", say you're assigning a value to a global int. Lint would need to know that global was an int in order to give a warning about it.

It might need to search imported modules for the declaration of the global int, but it wouldn't have to know anything about object files (to me "object files" are the things generated by the compiler that contain machine code). I don't know how much a lint program would be able to accomplish aside from syntax checking without reading declarations from imported modules.
Mar 01 2005
prev sibling next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 1 Mar 2005 14:39:18 +1100, Matthew wrote:


[snipped a lot of good stuff that I might come back to]
 
 Does no one else get this?!?!

I must admit that if I really look at what has been said so far, I feel a little offended by Walter's apparent attitude. It's sort of like Walter assumes I still wear nappies (diapers) and can't be trusted to use a toilet correctly.

I guess that's the crux as I see it now: Walter doesn't trust me. It's not personal, as he doesn't trust any programmer. If I could be trusted to write responsible code, then DMD would be free to show me what could possibly be mistakes that I've made, and then let me act on them in a responsible manner, or at least take responsibility for my coding.

It could be that Walter has seen so much irresponsible code that he assumes everybody (including himself?) will always do the wrong thing if they could. And that position hurts a little, so maybe I'm too much of a boy scout.

-- 
Derek
Melbourne, Australia
1/03/2005 3:01:54 PM
Feb 28 2005
parent reply "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:1myhii2vp4h97$.5zhponxkjp52$.dlg 40tude.net...
 On Tue, 1 Mar 2005 14:39:18 +1100, Matthew wrote:


 [snipped a lot of good stuff that I might come back to]

 Does no one else get this?!?!

I must admit that if I really look at what has been said so far, I feel a little offended by Walter's apparent attitude. It's sort of like Walter assumes I still wear nappies (diapers) and can't be trusted to use a toilet correctly. I guess that's the crux as I see it now; Walter doesn't trust me; its not personal, as he doesn't trust any programmer.

I get the opposite impression - the situation he's arguing for is to allow implicit casting without any warnings. That's putting all the responsibility on the coder (or on other tools besides the compiler) to make sure it's right.
 If I could be trusted to write responsible code, then DMD would be free to
 show me what could possibly be mistakes that I've made, and then let me 
 act
 on them in a responsible manner, or at least take responsibility for my
 coding.

I expect the grumbling masses will eventually make Walter put a warning or two in the "verbose mode".
 It could be that Walter has seen so much irresponsible code that he 
 assumes
 everybody (including himself?) will always do the wrong thing if they
 could. And that position hurts a little, so maybe I'm too much of a boy
 scout.

He's not a backseat driver - he's letting us drive on our own without someone nagging us to signal when changing lanes, etc.
 -- 
 Derek
 Melbourne, Australia
 1/03/2005 3:01:54 PM 

Feb 28 2005
parent Derek Parnell <derek psych.ward> writes:
On Mon, 28 Feb 2005 23:15:21 -0500, Ben Hinkle wrote:

 "Derek Parnell" <derek psych.ward> wrote in message 
 news:1myhii2vp4h97$.5zhponxkjp52$.dlg 40tude.net...
 On Tue, 1 Mar 2005 14:39:18 +1100, Matthew wrote:


 [snipped a lot of good stuff that I might come back to]

 Does no one else get this?!?!

I must admit that if I really look at what has been said so far, I feel a little offended by Walter's apparent attitude. It's sort of like Walter assumes I still wear nappies (diapers) and can't be trusted to use a toilet correctly. I guess that's the crux as I see it now; Walter doesn't trust me; its not personal, as he doesn't trust any programmer.

I get the opposite impression - the situation he's arguing for is to allow implicit casting without any warnings. That's putting all the responsibility on the coder (or on other tools besides the compiler) to make sure it's right.

But why no warnings? Because he doesn't trust us to handle the situation responsibly? However, I'm resigned to wait for a 'lint' to appear.
 If I could be trusted to write responsible code, then DMD would be free to
 show me what could possibly be mistakes that I've made, and then let me 
 act
 on them in a responsible manner, or at least take responsibility for my
 coding.

I expect the grumbling masses will eventually make Walter put a warning or two in the "verbose mode".

I wouldn't hold my breath.
 It could be that Walter has seen so much irresponsible code that he 
 assumes
 everybody (including himself?) will always do the wrong thing if they
 could. And that position hurts a little, so maybe I'm too much of a boy
 scout.

He's not a backseat driver - he's letting us drive on our own without someone nagging them to signal when changing lanes, etc.

I'm sorry Ben, I guess I didn't make myself clear again ;-) *I* want to control the nagging, not let someone else decide for me whether I should be nagged or not.

-- 
Derek
Melbourne, Australia
1/03/2005 3:17:45 PM
Feb 28 2005
prev sibling parent Daniel Horn <hellcatv hotmail.com> writes:
It took 1.5 years of painful debugging and harassment by users to 
actually get weapons working in Vega Strike (C++ and Python) more than a 
few light seconds away from the sun, because of some implicit casts from 
double -> float...
and believe me, trying to distinguish a double from a float multiplied 
by and added to several doubles is not trivial.

clearly implicit casts can cause nastier bugs than a few minutes with 
valgrind to find uninitialized memory or bad pointers ;-)
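The failure mode Daniel describes is easy to reproduce. A few light-seconds from the origin, the spacing between adjacent 32-bit floats is larger than any metre-scale movement, so an implicit double -> float conversion silently freezes a position (a self-contained sketch in D, not Vega Strike's actual code):

```d
void main()
{
    double precise = 5 * 3.0e8;   // ~5 light-seconds from the sun, in metres
    float  lossy   = precise;     // the implicit narrowing in question

    // near 1.5e9 adjacent floats are 128 metres apart, so a 10-metre
    // movement rounds straight back to the same value once it is
    // stored in a float
    float moved = lossy + 10.0f;
    assert(moved == lossy);
}
```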


Matthew wrote:
 
 Oh, come on!!
 
 Is this truly _your_ experience, or supposition? Whichever, it is *NOT* 
 my experience.
 
     I have not had an array bounds error since somewhere early last 
 year.
     I have not had an uninitialised variable bug since late last year.
     I had a memory leak (one) in January (I assume this is the equiv to 
 GC item)
     I have had two truncation bugs last month, one of which I only found 
 out about when running STLSoft through multiple compilers.
 
 Now either you're representative of every other developer on the planet 
 bar one, or I am, or we each represent but a small set of the spectrum. 
 Of course it's the latter, which means that _you_ cannot cherry pick 
 which issues you think matter and which do not. (Well, you can, it being 
 your language, but you are not going to be right, and it ain't going to 
 make D all it can be.)
 
 Does no one else get this?!?!
 
 
 

Mar 01 2005
prev sibling parent "Charlie Patterson" <charliep1 excite.com> writes:
"Walter" <newshound digitalmars.com> wrote in message 
news:d00n60$1e5e$1 digitaldaemon.com...
 D expends a lot more cost on trying to prevent the latter than the former.
 Things like array bounds checking, guaranteed initialization, garbage
 collection, etc., are examples of such steps.

Hmm, I was more on Walter's side of this until the list came out. (-: I agree that pointer initialization, for instance, can be costly to find. However, I haven't made a bounds mistake in years (though I still think checking it is nice, and maybe useful for newbies), lack of regular variable initialization isn't so bad, and garbage collection was just for making coding easier for me, not really about bugs. So now I'm thinking there is a continuum here, and casting issues are about on par with array bounds, to me. So if array bounds deserve run-time checks then I think casts might, too, in my experience.
Mar 01 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 1 Mar 2005 07:06:26 +1100, Matthew wrote:


[snip]

 
 The reasons I choose C++ over D are nothing to do with speed. That was 
 kind of the point of my whole rant. It concerns me a little that you 
 respond relatively voluminously to the perceived slight on speed, but 
 not about my concerns about D's usability and robustness.
 
 I believe that software engineering has a pretty much linear order of 
 concerns:
 
     Correctness           - does the chosen 
 language/libraries/techniques promote the software actually do the right 
 thing?
     Robustness            - does the language/libraries/techniques 
 support writing software that can cope with erroneous environmental 
 conditions _and_ with design violations (CP)
     Efficiency              - does the language/libraries/techniques 
 support writing software that can perform with all possible speed (in 
 circumstances where that's a factor)
     Maintainability       - does the language/libraries/techniques 
 support writing software that is maintainable
     Reusability            - does the language/libraries/techniques 
 support writing software that may be reused
 
 As I said, I think that, in most cases, these concerns are in descending 
 order as shown.
 
 In other words, Correctness is more important than Robustness. 
 Maintainability is more important than Reusability. Sometimes Effeciency 
 moves about, e.g. it might swap places with Maintainability.

If I may put this into a slightly different point of view, but still from the perspective of a commercial development company (eg. the one I work for): basically it boils down to the cost of an application over its life-time.

(1) Cost to purchaser. This includes 'Correctness', 'Efficiency', 'Maintainability', ... A correctly working (100%) program is usually more important than one which runs as fast as possible. To have a really fast program that produces rubbish is pretty costly to the purchaser. However, having a program that is mostly correct (95+%) with a trade-off for speed is also often an acceptable cost to the purchaser. Having a program that is expensive to upgrade or fix is not a cost that purchasers appreciate.

(2) Cost to developer. This includes 'Maintainability', 'Reusability', 'Robustness', 'Fun', ... Having program source that is legible (humans can read it without effort) is a godsend. Having a program that is cheap to maintain and design is a blessing. The initial coding effort is tiny when compared to the ongoing maintenance coding effort, so lots of time needs to be spent on design (get it right the first time) and quality control. The fun factor influences the cost of training and retaining people: if the source coding effort is boring or too hard, you will lose people regardless of how good the quality of the program is, or how fast it runs. And a boring code base will have more mistakes than an exciting and interesting one.
 Given that list, correct use of C++ scores (I'm using scores out of five 
 since this is entirely subjective and not intended to be a precise 
 scale)

I don't know C++, Java, Ruby, Python, etc... well enough to comment, so I'll limit myself to generalities.

Interpreted dynamic-typed languages are both cheap for purchasers and developers. An initial release of a program may not be 100% correct, but the cost of repairs is reasonable. The speed of most programs is not an issue! This is because most programs in commercial environments meet bottlenecks long before code efficiency comes into play. So long as most GUI apps can keep up with the user's keystroke rate, nobody complains. The speed of database systems, networks, and printers is more likely to be the source of irritation long before the code is worked hard. Of course, there are some applications that do need to be AFAP (as fast as possible), such as a bank's overnight interest calculation and accruals, and these few apps (less than 10% of all apps) can be targeted for efficient design and coding.

Compiled static-typed languages are much more expensive to develop, and thus to purchase. However, their forte is applications that *must* be fast to run. These are typically non-interactive apps.

So, generally speaking, there is a valid place in the world for both types (and other types too: assembler, hybrids, ...). With respect to C++ vs D performance, I didn't really care. With respect to their comparative costs, I do care, and it would seem that D can be a winner there.

[snip]
 
 I'd be interested to hear what others think on all facets of this issue, 
 but I'm particularly keen to hear people's views on how much D presents 
 as a realistic choice for large-scale, commercial developments.
 
 Thoughts?

D appears to have a *bloody huge* potential to excel in reducing costs to both purchasers and developers. Currently however, it can't be used to produce cheaper commercial software because it's still a Work-In-Progress, and has many areas that cry out for tidying up.

My concern is not whether or not it can be tidied, but how long it will be before that happens. Using the current method of improving D, I can't see a commercially prudent D (or DMD) for some years to come. Either Walter needs to get some serious extra manpower (not a sexist term BTW), or give up control of D. Neither of which I can see happening anytime soon. So this saddens me, but not enough to jump ship yet.

-- 
Derek
Melbourne, Australia
1/03/2005 10:30:59 AM
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message
news:110v9i57xoxck$.kym8gprc7gyx$.dlg 40tude.net...
 D appears to have a *bloody huge* potential to excel in reducing costs to
 both purchasers and developers. Currently however, it can't be used to
 produce cheaper commercial software because its still a Work-In-Progress,
 and has many areas that cry out for tidying up.

Can you list those areas, please?
Feb 28 2005
next sibling parent Derek Parnell <derek psych.ward> writes:
On Mon, 28 Feb 2005 16:41:07 -0800, Walter wrote:

 "Derek Parnell" <derek psych.ward> wrote in message
 news:110v9i57xoxck$.kym8gprc7gyx$.dlg 40tude.net...
 D appears to have a *bloody huge* potential to excel in reducing costs to
 both purchasers and developers. Currently however, it can't be used to
 produce cheaper commercial software because its still a Work-In-Progress,
 and has many areas that cry out for tidying up.

Can you list those areas, please?

Glad to, but later. I'll be back soon. Got some clients coming in to see me in a few minutes.

-- 
Derek
Melbourne, Australia
1/03/2005 11:55:59 AM
Feb 28 2005
prev sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 "Derek Parnell" <derek psych.ward> wrote in message
 news:110v9i57xoxck$.kym8gprc7gyx$.dlg 40tude.net...
 
D appears to have a *bloody huge* potential to excel in reducing costs to
both purchasers and developers. Currently however, it can't be used to
produce cheaper commercial software because its still a Work-In-Progress,
and has many areas that cry out for tidying up.


First sensible entry for a while! OTOH, isn't this as it's supposed to be? We are pre-1.0. Most other languages haven't been this far along even at 3.0! Not that these shouldn't be fixed, of course. ;-)
 Can you list those areas, please?

Mar 01 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Georg Wrede wrote:

 OTOH, isn't this as it's supposed?
 We are pre 1.0. Most other languages
 haven't been this far even at 3.0!

I think it was "Mac OS X 10.x", the "Windows 2000" / "XP" releases of NT 5.x, or the "Java 2 Platform Standard Edition (J2SE) version 5.0" release of JDK 1.5 that convinced me that Marketing has destroyed all notions of version numbers forever... :-)

But you are right. It's pre-release still.

--anders
Mar 01 2005
prev sibling next sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvvqp9$d89$1 digitaldaemon.com>, Walter says...
There's a lot there, and I just want to respond to a couple of points.

1) You ask what's the bug risk with having a lot of casts. The point of a
cast is to *escape* the typing rules of the language. Consider the extreme
case where the compiler inserted a cast whereever there was any type
mismatch. Typechecking pretty much will just go out the window.

2) You said you use C++ every time when you want speed. That implies you
feel that C++ is inherently more efficient than D (or anything else). That
isn't my experience with D compared with C++. D is faster. DMDScript in D is
faster than the C++ version. The string program in D
www.digitalmars.com/d/cppstrings.html is much faster in D. The dhrystone
benchmark is faster in D.

And this is using an optimizer and back end that is designed to efficiently
optimize C/C++ code, not D code. What can be done with a D optimizer hasn't
even been explored yet.

There's a lot of conventional wisdom that since C++ offers low level
control, that therefore C++ code executes more efficiently. The emperor has
no clothes, Matthew! I can offer detailed explanations of why this is true
if you like.

I challenge you to take the C++ program in
www.digitalmars.com/d/cppstrings.html and make it faster than the D version.
Use any C++ technique, hack, trick, you need to.

There is no doubt in my mind that array slicing is a huge factor in making D faster than C in very specific areas; I'm a big proponent of it. Having said that, making sweeping statements regarding "low level control" and so on does nobody any favours. I would very much like to hear this detailed explanation you offer:
Feb 28 2005
parent reply Dave <Dave_member pathlink.com> writes:
In article <cvvuqm$id4$1 digitaldaemon.com>, Kris says...
In article <cvvqp9$d89$1 digitaldaemon.com>, Walter says...
There's a lot there, and I just want to respond to a couple of points.

1) You ask what's the bug risk with having a lot of casts. The point of a
cast is to *escape* the typing rules of the language. Consider the extreme
case where the compiler inserted a cast whereever there was any type
mismatch. Typechecking pretty much will just go out the window.

2) You said you use C++ every time when you want speed. That implies you
feel that C++ is inherently more efficient than D (or anything else). That
isn't my experience with D compared with C++. D is faster. DMDScript in D is
faster than the C++ version. The string program in D
www.digitalmars.com/d/cppstrings.html is much faster in D. The dhrystone
benchmark is faster in D.

And this is using an optimizer and back end that is designed to efficiently
optimize C/C++ code, not D code. What can be done with a D optimizer hasn't
even been explored yet.

There's a lot of conventional wisdom that since C++ offers low level
control, that therefore C++ code executes more efficiently. The emperor has
no clothes, Matthew! I can offer detailed explanations of why this is true
if you like.

I challenge you to take the C++ program in
www.digitalmars.com/d/cppstrings.html and make it faster than the D version.
Use any C++ technique, hack, trick, you need to.

There is no doubt in my mind that array slicing is a huge factor in making D faster than C in very specific areas. I'm a big proponent of this. Having said that; making sweeping statements regarding "low level control" and so on does nobody any favours. I would very much like to hear this detailed explanation you offer:

So would I (just for the educational aspects), but I'd rather have Walter use that time to continue work on the reference compiler ;-)

As to the 'performance via low-level control' type of argument, the recent thread on why higher-level array and vector expressions can be optimized by a compiler (but are very difficult, if not impossible, to optimize as well with hand-tuned, low-level C code) offers a good general example of what I think Walter had in mind with his statement.

- Dave
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Dave" <Dave_member pathlink.com> wrote in message
news:d00055$k34$1 digitaldaemon.com...
 So would I (just for the educational aspects), but I'd rather have Walter
 use that time to continue work on the reference compiler ;-)

Kris mentions slicing; that's just part of it. Slicing, *coupled with garbage collection*, makes for a big win. Not having to keep track of ownership of memory means one can be much more flexible in designing and manipulating data structures. The need for copy constructors, move constructors, and overloaded assignment operators pretty much goes away. So not only is the code easier to write, there's much less that has to be executed. Ergo, faster. I'll go out on a limb and say that there's no way Matthew is going to be able to beat D in the wc benchmark without writing a very ugly, kludgy piece of C++ hackery.

There's another, subtler issue of why D code is faster. I originally wrote DMDScript in C++ (and it used a gc, leveling the playing field with D on that issue). The C++ version was heavily tuned, with a lot of hacks in it to make it fast. I did a fairly rote translation of it into D (taking out the hacks). Initially, the D version was slightly slower. But then an interesting thing happened: the D version was significantly shorter and easier to express. I started rewriting a bit in a more "D" style than a C++ style. I found I could easily experiment with different data structures and layouts; the C++ one was hard to change, since the data structures and algorithms in it are much more 'brittle'. So in just a few days, I was able to tweak the D version into being significantly faster, *without* the hacks I resorted to in the C++ version. In other words, because expressing the algorithms is more straightforward, without the brittle framework C++ always seems to require, it's easier to see inefficiencies in those algorithms and change them.
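A sketch of the slicing-plus-GC point: splitting a buffer into words can return views into the original text, with no copies and no ownership bookkeeping, because the GC keeps the buffer alive as long as any slice into it survives (illustrative only, using D's 2005-era char[] strings):

```d
// each returned word is a slice into 'text'; nothing is copied and
// nobody tracks who owns the characters -- the GC keeps 'text' alive
// for as long as any slice of it is reachable
char[][] words(char[] text)
{
    char[][] result;
    size_t start = 0;
    bool inWord = false;

    foreach (i, c; text)
    {
        if (c == ' ' || c == '\t' || c == '\n')
        {
            if (inWord)
                result ~= text[start .. i];   // a view, not a copy
            inWord = false;
        }
        else if (!inWord)
        {
            start = i;
            inWord = true;
        }
    }
    if (inWord)
        result ~= text[start .. text.length];
    return result;
}
```

The equivalent C++ of the time would either copy each word into its own string or hand out pointers whose lifetime someone must manage by hand; that bookkeeping is exactly the code that "goes away" above.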
 As to the 'performance via low-level control' type of argument, the recent
 thread on why higher level array and vector expressions could be optimized
 by a compiler (but very difficult if not impossible to optimize as well with
 hand-tuned, low-level C code) offers a good general example of what I think
 Walter had in mind w/ his statement.

Yes, you hit the nail on the head with that. With C++, optimizers need to "reverse engineer" what was intended, which is much harder than optimizing what was expressed directly. For another trivial example, in D you can write code like:

    s ~ "xxx" ~ "yyy"

and the optimizer could combine the two strings at compile time.
Feb 28 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter wrote:

 Kris mentions slicing, that's just part of it. Slicing, *coupled with
 garbage collection* makes for a big win. Not having to keep track of
 ownership of memory means one can be much more flexible in designing and
 manipulating data structures. The need for copy constructors, move
 constructors, overloaded assignment operators, pretty much goes away. So not
 only is the code easier to write, there's much less that has to be executed.
 Ergo, faster.

Yeah, if Mango could even be compiled on Mac OS X using GDC, that is... I'm sure the car's speedy, if only she'd start?
 I'll go out on a limb and say that there's no way Matthew is going to be
 able to beat D in the wc benchmark without writing a very ugly, kludgy piece
 of C++ hackery.

Or perhaps by using vector instructions or just different compilers :-) I don't like C++ much, but last I checked you could still use it for C. --anders
Feb 28 2005
prev sibling parent reply "Martin M. Pedersen" <martin moeller-pedersen.dk> writes:
"Walter" <newshound digitalmars.com> skrev i en meddelelse 
news:cvvqp9$d89$1 digitaldaemon.com...
 I challenge you to take the C++ program in
 www.digitalmars.com/d/cppstrings.html and make it faster than the D 
 version.
 Use any C++ technique, hack, trick, you need to.

I could not resist having a go at it. The following are my results for the attached version:

    dmd -o -release    359 ms
    dmc -o -6          375 ms
    cl -Ox -G6 -ML     296 ms

I used an input file with 64 times "alice30.txt" to have something measurable. "cl" is MSVC 7.1.

Regards, Martin
Feb 28 2005
next sibling parent reply brad domain.invalid writes:
Martin M. Pedersen wrote:
 "Walter" <newshound digitalmars.com> skrev i en meddelelse 
 news:cvvqp9$d89$1 digitaldaemon.com...
 
I challenge you to take the C++ program in
www.digitalmars.com/d/cppstrings.html and make it faster than the D 
version.
Use any C++ technique, hack, trick, you need to.

I could not resist having a go at it. The following are my results for the attached version: dmd -o -release 359 ms

Brad
Feb 28 2005
parent "Martin M. Pedersen" <martin moeller-pedersen.dk> writes:
<brad domain.invalid> skrev i en meddelelse 
news:d00i8f$19dg$1 digitaldaemon.com...
     dmd -o -release          359 ms


Sorry, I didn't write it as I compiled it. I used "dmd -O -inline -release" and consistently get 359/360 ms. Regards, Martin
Feb 28 2005
prev sibling next sibling parent "Ben Hinkle" <ben.hinkle gmail.com> writes:
"Martin M. Pedersen" <martin moeller-pedersen.dk> wrote in message 
news:d00i1o$1974$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> skrev i en meddelelse 
 news:cvvqp9$d89$1 digitaldaemon.com...
 I challenge you to take the C++ program in
 www.digitalmars.com/d/cppstrings.html and make it faster than the D 
 version.
 Use any C++ technique, hack, trick, you need to.

 I could not resist having a go at it. The following are my results for the attached version:

     dmd -o -release    359 ms
     dmc -o -6          375 ms
     cl -Ox -G6 -ML     296 ms

 I used an input file with 64 times "alice30.txt" to have something measurable. "cl" is MSVC 7.1. Regards, Martin

Well, the code has very little C++ in it (the only STL is the sort at the end). No use of C++ strings. But technically, yes, it does count as a C++ program since it can be compiled by a C++ compiler. But I think it misses the spirit of Walter's challenge. I'd be curious what the D compiler's performance would be on that code ported to D. Since the code is almost entirely C it should be pretty easy to write a D version. -Ben
Feb 28 2005
prev sibling parent reply "Walter" <newshound digitalmars.com> writes:
"Martin M. Pedersen" <martin moeller-pedersen.dk> wrote in message
news:d00i1o$1974$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> skrev i en meddelelse
 news:cvvqp9$d89$1 digitaldaemon.com...
 I challenge you to take the C++ program in
 www.digitalmars.com/d/cppstrings.html and make it faster than the D
 version.
 Use any C++ technique, hack, trick, you need to.

 I could not resist having a go at it. The following are my results for the attached version:

     dmd -o -release    359 ms
     dmc -o -6          375 ms
     cl -Ox -G6 -ML     296 ms

 I used an input file with 64 times "alice30.txt" to have something measurable. "cl" is MSVC 7.1.

I noticed that you're using memory mapped files! Cheater! <g> Trying the D version with std.mmfile, I get about a 10% speedup. It adds one line of code and an import. Anyhow, I think the effort you made just shows the point!
Feb 28 2005
parent reply "Martin M. Pedersen" <martin moeller-pedersen.dk> writes:
"Walter" <newshound digitalmars.com> skrev i en meddelelse 
news:d00oun$1frg$1 digitaldaemon.com...
 I noticed that you're using memory mapped files! Cheater! <g> Trying the D
 version with std.mmfile, I get about a 10% speedup. It adds one line of 
 code
 and an import.
 Anyhow, I think the effort you made just shows the point!

I partly agree. The std::string is a sorry accident of history, and without reference counting, it has very little to offer compared to vector<char>. But no string type will fit for all purposes. If you really are going for efficiency, you need some specialized string handling, and my point is, that you can have that in C and C++ too (but STL will not help you). The slicing concept of D is strong, and my solution in C++ was using slices too. Of course much less elegantly than in D :-)

Another win for the D version, is its associative arrays which I suspect are hash tables internally. STL does not offer hash tables yet, but that does not mean that you cannot have them anyway. You just need to look elsewhere.

Regards, Martin
Mar 01 2005
parent "Walter" <newshound digitalmars.com> writes:
"Martin M. Pedersen" <martin moeller-pedersen.dk> wrote in message
news:d02i4u$lsk$1 digitaldaemon.com...
 "Walter" <newshound digitalmars.com> skrev i en meddelelse
 news:d00oun$1frg$1 digitaldaemon.com...
 I noticed that you're using memory mapped files! Cheater! <g> Trying the


 version with std.mmfile, I get about a 10% speedup. It adds one line of
 code
 and an import.
 Anyhow, I think the effort you made just shows the point!

 I partly agree. The std::string is a sorry accident of history, and without
 reference counting, it has very little to offer compared to vector<char>.
 But no string type will fit for all purposes. If you really are going for
 efficiency, you need some specialized string handling, and my point is, that
 you can have that in C and C++ too (but STL will not help you). The slicing
 concept of D is strong, and my solution in C++ was using slices too. Of
 cause much less elegantly than in D :-)

 Another win for the D version, is its associative arrays which I suspect are
 hash tables internally.

And you'd be right. They are general purpose and aren't as efficient as a full custom hash table, but they work better than what one could quickly or casually build, and they're available at your fingertips.
 STL does not offer hash tables yet, but that does not mean that you cannot
 have them anyway. You just need to look elsewhere.
Yup.
Mar 01 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvubs2$1qdl$1 digitaldaemon.com>, Walter says...
"Kris" <Kris_member pathlink.com> wrote in message
news:cvu5go$1k2j$1 digitaldaemon.com...
 I've already noted some in a prior post in this thread (today). For more, you
 might refer to the very examples you used against the issues brought up in the
 old "alias peek-a-boo game" thread. Those examples are both speculative cases
 where issues arise over implicit-casting combined with method-overloading. For
 want of a better term, I'll refer to the latter as ICMO.

 At that time, you argued such examples were the reason why it was so important
 for any and all overloaded superclass-methods be /deliberately hidden/ by the
 subclass ~ and using an 'alias' to (dubiously, IMO) bring them back into scope
 was the correct solution. I noted at that time my suspicion this usage of
 'alias' was a hack. Indeed, it is used as part of an attempt to cover up
 the issues surrounding ICMO.

Yes, we discussed that at length. I don't think either of us have changed our minds.

What you didn't note at the time was those issues are all due primarily to implicit-casting. We did not discuss that at all. That's why it's being brought up again. Perhaps you didn't recognise the other culprit.
 Here's another old & trivial example of this type of bogosity:

 void print (char[] s);
 void print (wchar[] s);

 {print ("bork");}

 Because the char literal can be implicitly cast to wchar[], the compiler
 cannot decide which overload to call. One has to do this instead:

 print (cast(char[]) "bork");

 This is daft, Walter. And it hasn't been fixed a year later.

That kind of thing happens when top down type inference meets bottom up type inference. It's on the list of things to fix. (It's technically not really an implicit casting issue. A string literal begins life with no type, the type is inferred from its context. This falls down in the overloading case you mentioned.)

Agreed. It's easier to talk about these things in simplistic terms though.
 Oh, and let's not forget that D will implicitly, and silently, convert a

 argument into an int or byte. Not even a peep from the compiler. MSVC 6
 certainly will not allow one to be so reckless.

Actually, it's the defined, standard behavior of both C and C++ to do this. MSVC 6 will allow it without a peep unless you crank up the warning level.

Not true. It will even complain about casting int to uint, if the argument is an expression rather than a simple variable. That's without any warning levels explicitly set (as I recall).
At one point I had such disallowed, but it produced more error messages than
it was worth, requiring the insertion of many daft cast expressions 

Some might say that lots of implicit-casts are a sign of sloppy code. Will you allow us to try it out please? How about a test? There's a vast sea of code in Mango, for example. I'd very much like to try a strict compiler on the code there. Frankly, it could only serve to tighten up my coding habits. Don't you think the results would be useful for purposes of discussion?

You do realize, don't you, all this nonsense with ICMO simply vanishes (along with some special cases) if implicit-casting were to be narrowed in scope? Have you thought about how a model might be constructed, whereby implicit-casting is /not/ the norm? That's another serious question for you.
 (and I know how you don't like them with string literals!). Implicit
 conversions of floating point expressions to integral ones is disallowed in
 D, and that doesn't seem to cause any problems.

I agree that it doesn't cause problems. If you're changing the basic type of some value, then you'd better be explicit about it. This is not the case with char[] (as you appear to imply by proximity). So ~ great! Let's see more of that!
 There's many, many more examples. Here's another old one that I always found
 somewhat amusing: news:cgat6b$1424$1 digitaldaemon.com

That's the same issue as you brought up above - which happens first, name lookup or overloading? C++ does it one way, Java the other. D does it the C++ way. C++ can achieve the Java semantics with a 'using' declaration, D provides the same with 'alias'. In fact, D goes further by offering complete control over which functions are overloaded with which by using alias. Java has no such capability.

I'm sorely pushed to call you out on this one. You're falling back on the old defensive posture of Java vs C++ again. Perhaps you think I'm some kind of Java bigot? I'm not. There's no benefit in that: those languages are not D. And who really cares about how it's implemented? The end result is what's important.

In addition, you're claiming the use of alias gives "complete control" over these ICMO issues. You know (or ought to know) very well that alias is not so fine-grained. It will pull in all methods of a particular name, which can just as surely lead to the ICMO issues you claim your 'method-hiding' approach is 'resolving'. The 'horror' of those examples you used in your arguments, so long ago, is back again in a slightly smaller dose. Not only that; any and all alias's are inherited down the subclass chain. So one can easily imagine a scenario where the poor hapless sod using someone else's code runs into the same minefield ~ wearing snow shoes rather than Ballet Pumps. Except this time the minefield has been "blessed" by alias instead. Great.

It's broken, and it's a sham. Implicit-casting combined with method-overloading is borked. Changing my mind on that would be sticking my head into the sand. This is not about language comparison, Walter. And this is not about who's nuts are bigger. It's about usability, maintainability, and deterministic assertion. D has taken on board all the baggage of C++ regarding ICMO, and tried to cover it up with something else.

There are serious issues here. Pretending it's all OK doesn't make them go away, and it would be far more productive if you were at least open to thinking about how the whole mess could be resolved cleanly and elegantly. I am not trying to cut your limbs off! :-) Are you not /open/ to considering *any* other alternative?
 I'm not saying that implicit-casting is necessarily bad, and I'm not saying
 method-overloading is bad. I am saying the combination of the two leads to this
 sort of nastiness; some of which you've attempted to cover-up by deliberately
 hiding superclass method-names overloaded by a subclass,
I suppose if you look at it from a Java perspective, it might seem like a coverup. But if you look at it from a C++ perspective, it behaves just as one would expect and be used to. Implicit casting has been in C and C++ for a very long time, and it's necessary to cope with the plethora of basic types (Java has a sharply reduced number of basic types, reducing the need for implicit conversions.)

There's a whole raft of assumption in there. Again, I really don't give a whit about how Java or C++ does it. D is supposed to be better, so let's at least discuss how that could happen?
 It shouldn't have to be like this. Surely there's a more elegant solution all
 round, both for the compiler and for the user? Less special-cases is better
 for everyone.

Nobody likes special cases, but they are an inevitable result of conflicting requirements.

What are these conflicting requirements? Please spell them out. Are they:

1) implicit conversion?
2) overloaded methods?

And, please, I'm completely serious about testing a strict compiler on Mango. It would be a good test case -- and who knows what it might tell us? At least there would be some concrete evidence to discuss. That's often so much better than personal opinion.

- Kris
Feb 27 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Kris wrote:
 There are serious issues here. Pretending it's all OK doesn't
 make them go away, and it would be far more productive if you
 were at least open to thinking about how the whole mess could
 be resolved cleanly and elegantly. I am not trying to cut your
 limbs off!   :-)

Maybe we should do it ourselves? Discuss it here, and if/when we come to a resolution, then ask Walter to reconsider?

My point being that Walter gets quite a few demands to think about a number of things, all while he's also writing DMD, running a company, having a family, etc.

Maybe we could start with experimenting with the idea that D had no implicit casts at all. Then see what would happen. (This is not offensive, it's just that in my experience one should investigate the perimeter and then approach the center.)

With the combined practical and theoretical knowledge on this forum, we may very well come up with something worthwhile.
Feb 28 2005
prev sibling parent Matthias Becker <Matthias_member pathlink.com> writes:
 I think that indicates a problem that this reworked error message does not

You did have an issue with the "cast(uint)(c) is not an lvalue" message; however, that was the second error message put out by the compiler. The real error was the first error message (both error messages pointed to the same line). This is the well known "cascading error message" thing, where the compiler discovers an error, issues a correct diagnostic, then makes its best guess at how to patch things up so it can move on. If it guesses wrong (there is no way it can reliably guess right, otherwise it might as well write the program for you!), then more errors come out.

In this case the best would be to test all the ambiguous functions. Then it would know that one of the functions has a problem with the argument types while the other one does not, and could report that. But that might be too complicated to implement. -- Matthias Becker
Feb 28 2005
prev sibling next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:

 For D I opted for a simple 3 level matching system - exact
 match, match with implicit conversions, no match. 

The guys forced to do C++ for a living would cry big tears of envy. ;-)

BTW, is it possible that overloading should be in that phrase somewhere? Implicit conversion is matching arguments to the function at hand, and overload resolution is the opposite (i.e. matching functions to the arguments at hand). I'd sure wish for it to be as easy to remember when that happens, as it is to remember the excellent "three level rule" of D.
Feb 27 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:42224176.8020809 nospam.org...
 Walter wrote:

 For D I opted for a simple 3 level matching system - exact
 match, match with implicit conversions, no match.

The guys forced to do C++ for a living would cry big tears of envy. ;-)

<rant>
Not so. I know you've a smiley there, but I really think this "C++ is too complicated, D is superior" is incredibly hasty, not to say naive. I accept that from the compiler writer's perspective, D wins out over C++ in straightforwardness, implementability, etc. etc. But I cannot believe everyone simply takes as read that out there, in the real world of large projects and diverse and tricky minds, D can be confidently stated and assumed to be superior.

I'll give you one tiny little issue: D does not have implicit template instantiation. This has potentially enormous ramifications. It may turn out that Walter has another of his brilliant insights and works a way around it, or that we discover techniques with module namespaces and 'standard' utility function conventions that obviate it. But that's all a big maybe. To ignore/dismiss this _at this time_ is just mental.

And there are lots more. We've only scratched the surface of TMP so far. The module model is, AFAICT, still a bit confused/ing. I don't think the issue of dynamic link-units is solved to everyone's satisfaction as yet. There's an ugly amount of coupling. We are yet to see how D's 'slack' implicit conversion handles in big projects with multiple developers and commercial deadlines.

I think the writing of DPD, the maturation of D towards 1.0, and the increasing number and sophistication of libraries like Mango will, over the next several months, inform on this debate. But it is *not* a done deal at the moment, and saying so makes the D community, IMO, look somewhat idiotic.

btw, all the foregoing is not aimed at Georg personally, just at the oft espoused idea that "D's superior to C++". It's not, albeit I look forward to 2005 being the year that it achieves the status of being so, or at least being a highly attractive alternative.
</rant>
Feb 27 2005
next sibling parent reply Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:42224176.8020809 nospam.org...
 
Walter wrote:


For D I opted for a simple 3 level matching system - exact
match, match with implicit conversions, no match.

The guys forced to do C++ for a living would cry big tears of envy. ;-)

<rant> Not so. I know you've a smiley there, but I really think this "C++ is too complicated, D is superior" is incredibly hasty, not to say naive. I accept that from the compiler writer's perspective, D wins out over C++ in straightforwardness, implementatility, etc.etc. But I cannot believe everyone simply takes as read that out there, in the real world of large projects and diverse and tricky minds, D can be confidently stated and assumed to be superior. I'll give you one tiny little issue: D does not have implicit template instantiation. This has potentially enormous ramifications. It may turn out that Walter has another of his brilliant insights and works a way around it, or that we discover techniques with module namespaces and 'standard' utility function conventions that obviate it. But that's all a big maybe. To ignore/dismiss this _at this time_ is just mental. And there are lots more. We've only scratched the surface of TMP so far. The module model is, AFAICT, still a bit confused/ing. I don't think the issue of dynamic link-units is solved to everyone's satisfaction as yet. There's an ugly amount of coupling. We are yet to see how D's 'slack' implicit conversion handles in big projects with multiple developers and commercial deadlines. I think the writing of DPD, the maturation of D towards 1.0, and the increasing number and sophistication of libraries like Mango will, over the next several months, inform on this debate. But it is *not* a done deal at the moment, and saying so makes the D community, IMO, look somewhat idiotic. btw, all the foregoing is not aimed at Georg personally, just at the oft espoused idea that "D's superior to C++". It's not, albiet I look forward to 2005 being the year that it achieves the status of being so, or at least being a highly attractive alternative.

What can I say? You are right. And most of the things you refer to are, admittedly, wishful thinking. No argument there.

But it's a little like the football (American) team assembling on the field and murmuring "we will kick ass". Never mind the other guys are better and bigger. It's just what one does to keep up morale.

And dammit, if D ain't gonna kick ass! (One day, that is.) Heck, if we didn't thoroughly believe that, then we'd all skip this forum, read Imperfect C++, etc. and go back to C++.

I think we need this "my daddy is stronger than your daddy" attitude. (Actually making this explicit and discussing it here may deteriorate the function of such thinking.) :-)
Feb 27 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:42225D39.7010302 nospam.org...
 Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:42224176.8020809 nospam.org...

Walter wrote:


For D I opted for a simple 3 level matching system - exact
match, match with implicit conversions, no match.

The guys forced to do C++ for a living would cry big tears of envy. ;-)

<rant> Not so. I know you've a smiley there, but I really think this "C++ is too complicated, D is superior" is incredibly hasty, not to say naive. I accept that from the compiler writer's perspective, D wins out over C++ in straightforwardness, implementatility, etc.etc. But I cannot believe everyone simply takes as read that out there, in the real world of large projects and diverse and tricky minds, D can be confidently stated and assumed to be superior. I'll give you one tiny little issue: D does not have implicit template instantiation. This has potentially enormous ramifications. It may turn out that Walter has another of his brilliant insights and works a way around it, or that we discover techniques with module namespaces and 'standard' utility function conventions that obviate it. But that's all a big maybe. To ignore/dismiss this _at this time_ is just mental. And there are lots more. We've only scratched the surface of TMP so far. The module model is, AFAICT, still a bit confused/ing. I don't think the issue of dynamic link-units is solved to everyone's satisfaction as yet. There's an ugly amount of coupling. We are yet to see how D's 'slack' implicit conversion handles in big projects with multiple developers and commercial deadlines. I think the writing of DPD, the maturation of D towards 1.0, and the increasing number and sophistication of libraries like Mango will, over the next several months, inform on this debate. But it is *not* a done deal at the moment, and saying so makes the D community, IMO, look somewhat idiotic. btw, all the foregoing is not aimed at Georg personally, just at the oft espoused idea that "D's superior to C++". It's not, albiet I look forward to 2005 being the year that it achieves the status of being so, or at least being a highly attractive alternative.

What can I say? You are right. And most of the things you refer to are, admittedly wishful thinking. No argument there. But it's a little like the football (american) team assembling on the field and murmuring "we will kick ass". Never mind the other guys are better and bigger. It's just what one does to keep up morale.

But it's really misleading, both to newbies, who might read such posts and think all is well, and then subsequently be really disheartened and annoyed when they discover all is not quite so rosy.

The other mislead is more dangerous: Walter may be misled into thinking everyone's happy (and, more importantly, working successfully with D) with the current state of play, and that the odd carp from Kris and myself are just noises from people who like to scream a lot.
 And dammit, if D ain't gonna kick ass! (One day, that is.)

I agree that it has enormous potential. I don't think it's anywhere near realised it yet.
 Heck, if we didn't thoroughly believe that, then we'd all skip this 
 forum, read Imperfect C++, etc. and go back to C++.

He he. Well, a wise friend once told me that to ally oneself to only one language was folly. In any case, it's boring. I'm interested in D because:

    (i) this is one of a very few forums that're not full of complete tw*ts
    (ii) Walter's overall attitude to IT and DM users is very attractive
    (iii) I think the idea of D showing up .NET and Java is appealing
    (iv) the potential for D to be better, or rather to better support, template programming without the necessary intellectual pain that goes on in C++

I do not *believe* that it will become all that it can, anymore than I *believe* my sons will grow up to be Nobel prize-winning physicists, or that Imperfect C++ will eventually be recognised as better for developers than some recent C++ books that like to make the reader feel inferior with all manner of nonsensical intellectual masturbation. I just think these things are not beyond the bounds of reason.
 I think we need this "my daddy is stronger than your daddy" attitude.

 (Actually making this explicit and discussing it here may deteriorate 
 the function of such thinking.) :-)

I think strength comes from acknowledging one's shortcomings, and either learning to live with / obviate them, or using them as impetus to improve. Pretending that everything's good is the kind of crap we get from large corporations, or, dare I say it, 21st-century English-speaking governments (Canada and NZ largely excepted).
Feb 27 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:42225D39.7010302 nospam.org...

(lots of good and to-the-point stuff deleted) (also points taken) :-)
     (iv) the potential for D to be better, or rather to better support, 
 template programming without the necessary intellectual pain that goes 
 on in C++

We really do need to somehow get a grip on what's needed for templates!!

In a couple of posts I've tried to find out exactly what the requirements for a (superior? superlative? -- aw crap, just well working) template system are. What I'd wish is a discussion about what kinds of things people see a template system should bring. Like instead of how we can tweak the existing one, we'd start from scratch -- at least with the thinking and discussion.

I also somehow feel that we'd need to try out some of the better ideas. At least the syntax could be tried out with preprocessors (to some extent at least).

Matthew, (as the best qualified in this particular subject) what would you really wish a "perfect" template system could do? Like if you were granted Any Three Wishes style.
 I do not *believe* that it will become all that it can, anymore than I 
 *believe* my sons will grow up to be Nobel prize-winning physicists, or 
 that Imperfect C++ will eventually be recognised as better for 
 developers than some recent C++ books that like to make the reader feel 
 inferior with all manner of nonsensical intellectual masturbation. I 
 just think these things are not beyond the bounds of reason.

Oh, Bob! What if my sons don't either??? {8-O
 I think strength comes from acknowledging one's shortcomings, and either 
 learning to live with / obviate them, or using them as impetus to 
 improve. Pretending that everything's good is the kind of crap we get 
 from large corporations, or, dare I say it, 21st English-speaking 
 governments (Canada and NZ largely excepted).

Oh, God, let me have strength to change all I could. And let me placidly accept all I can't change!
Feb 28 2005
parent reply "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message 
news:4222DA20.2030206 nospam.org...
 Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 
 news:42225D39.7010302 nospam.org...

(lots of good and to-the-point stuff deleted) (also points taken) :-)
     (iv) the potential for D to be better, or rather to better 
 support, template programming without the necessary intellectual pain 
 that goes on in C++

We really do need to somehow get a grip on what's needed for templates!! In a couple of posts I've tried to find out exactly what the requirements for a (superior? superlative? -- aw crap, just well working) template system are. What I'd wish is a discussion about what kinds of things people see a template system should bring. Like instead of how we can tweak the existing one, we'd start from scratch -- at least with the thinking and discussion. I also somehow feel that we'd need to try out some of the better ideas. At least the syntax could be tried out with preprocessors (to some extent at least). Matthew, (as the best qualified in this particular subject) what would you really wish a "perfect" template system could do? Like if you were granted Any Three Wishes style.

Here's where my particular talents/disabilities cause a problem, for me at least. My brain just doesn't work the way (that I suspect) that most other engineers' do. I barely retain a conscious cognisance of the terms 'parameterisation' and 'instantiation' - anything deeper than that in a conversational context leaves me wandering around in a mental fog.

I'm afraid I am a do-er - I believe it's more formally known as kinetic form thinking - and as such will be thoroughly unqualified to comment/request/philosophise until such time as I dive back into DTL. Thankfully, that time is very close. (Although it may seem to (any interested parties in) the outside world that I've been thumb twiddling over the last few months, the truth is quite the opposite: I've been working through lots of core stuff with my libraries, and also formulating ideas, strategies and automated tools.)

So, I am confident that I'll be heavily back in DTL next month (March), and will make good progress. I'll be full of all kinds of fun then. :-)
 I do not *believe* that it will become all that it can, anymore than 
 I *believe* my sons will grow up to be Nobel prize-winning 
 physicists, or that Imperfect C++ will eventually be recognised as 
 better for developers than some recent C++ books that like to make 
 the reader feel inferior with all manner of nonsensical intellectual 
 masturbation. I just think these things are not beyond the bounds of 
 reason.

Oh, Bob! What if my sons don't either??? {8-O

Then we'll just have to hope they're happy. (Which'll probably be the better outcome anyway)
Feb 28 2005
parent reply Georg Wrede <georg.wrede nospam.org> writes:
Matthew wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message 

Matthew, (as the best qualified in this particular subject)
what would you really wish a "perfect" template system
could do? Like if you were granted Any Three Wishes style.

Here's where my particular talents/disabilities cause a problem, for me at least. My brain just doesn't work the way (that I suspect) that most other engineers' do. I barely retain a conscious cognisance of the terms 'parameterisation' and 'instantiation' - anything deeper than that in a conversational context leaves me wandering around in a mental fog. I'm afraid I am a do-er - I believe it's more formally known as kinetic form thinking - and as such will be thoroughly unqualified to comment/request/philosophise until such time as I dive back into DTL.

I may end up with some customer before long, so I'm pursuing this hard right now. So, instead: what was the last thing you would have done, except that it was too hard or impossible with the current template system? (Either in D or in C++!) I could also use snippets of code, as you would have written them (given a better D/C++ preprocessor).
Oh, Bob! What if my sons don't either???    {8-O

Then we'll just have to hope they're happy. (Which'll probably be the better outcome anyway)

Yeah. At times I feel that is a challenge. Give a toy, and they're happy for the day. But what to do to get them happy _for the rest of their lives_? ((Er, this was _rhetorical_, so don't nobody answer this, or we'll get a thread that'll wipe off D from this ng. And still end up back at square one.))
Feb 28 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Georg Wrede" <georg.wrede nospam.org> wrote in message
news:42231636.8090805 nospam.org...
 Give a toy, and they're happy
 for the day. But what to do to get them happy _for the rest of their
 lives_?

Ultimately, they decide for themselves if they're going to be happy or not. You cannot cause them to be happy. Probably the best you can do is help them realize this, and that if they expect that things or other people will make them happy, they'll be disappointed.
Feb 28 2005
parent Georg Wrede <georg.wrede nospam.org> writes:
Walter wrote:
 "Georg Wrede" <georg.wrede nospam.org> wrote in message
 news:42231636.8090805 nospam.org...
 
Give a toy, and they're happy
for the day. But what to do to get them happy _for the rest of their
lives_?

Ultimately, they decide for themselves if they're going to be happy or not. You cannot cause them to be happy. Probably the best you can do is help them realize this, and that if they expect that things or other people will make them happy, they'll be disappointed.

True. Now that I don't see them every week (we're separated, as you remember), it occasionally worries me, though. My ex is not too good at this sort of thing, so the entire burden is on my (remote) shoulders. But why whine; at least I didn't take the kids to Thailand this Christmas, though for a long time it was an option.
Mar 01 2005
prev sibling parent =?windows-1252?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Matthew wrote:

 The guys forced to do C++ for a living would cry big tears of envy. 
 ;-) 


 <rant>
 Not so. I know you've a smiley there, but I really think this "C++ is 
 too complicated, D is superior" is incredibly hasty, not to say naive.
 
 I accept that from the compiler writer's perspective, D wins out over 
 C++ in straightforwardness, implementability, etc. etc.

[...]
 btw, all the foregoing is not aimed at Georg personally, just at the oft 
 espoused idea that "D's superior to C++". It's not, albeit I look 
 forward to 2005 being the year that it achieves the status of being so, 
 or at least being a highly attractive alternative.
 </rant>

I'm not sure that D ("in itself") set out to be superior to C++? I know that it set out to be *simpler*, and I think it has succeeded (at least in theory; in practice there are a few D "gotchas" still).

For people not having had to use C++, like myself, D is a nice mix between the regular old C (it looks a lot similar, if not doing OOP) and Java 2 (with the garbage collection and the single inheritance etc).

But is D more portable than C? No. Is it simpler to start with than Java*? No. And is it a just-drop-in replacement for C++? No. Is it still worth learning and playing with anyway? Yes! :-)

Though it would be nice if the show-stoppers could be fixed so that a "D 1.0" can be released? Even if it means dropping a few features, like the array operations or the array literals... Otherwise it runs a rather great risk of ending up as another could-have-been-great idea/language, and being eaten by C++/Java*? And here on Mac OS X we already have one weirdo C-based language.

Just my 2 öre,
--anders

* Or C#, same difference. (I can run that language too, using Mono™)
Feb 27 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvt3m9$iao$1 digitaldaemon.com>, Walter says...
"Manfred Nowak" <svv1999 hotmail.com> wrote in message
news:cvs3dl$2km0$1 digitaldaemon.com...
 Walter has choosen the solution with the least computational
 complexity for the compiler but one of the main goals of D is to
 increase productivity.

I find C++ overloading rules to be exceptionally and overly complicated, with layers and layers of special cases and rules only an expert would understand. For D I opted for a simple 3 level matching system - exact match, match with implicit conversions, no match.

It may, indeed, be simpler in certain manifest ways. Yet the fact remains that implicit-casting and method-overloading make ill-suited bedfellows. Just look at what we (the users) have to do with 'alias' to bring back superclass methods that have had their name overloaded in the subclass. Each of those contrived old examples as to /why/ alias is supposedly necessary is based upon the fragility of combined implicit-casting & overloading within various scenarios.

Here's another old & trivial example of this type of bogosity:

void print (char[] s);
void print (wchar[] s);

{print ("bork");}

Because the string literal can be implicitly cast to wchar[], the compiler fails. One has to do this instead:

print (cast(char[]) "bork");

This is daft, Walter. And it hasn't been fixed a year later.

Oh, and let's not forget that D will implicitly, and silently, convert a long argument into an int or byte. Not even a peep from the compiler. MSVC 6 certainly will not allow one to be so reckless. If I drove my car in a similar manner, I'd likely be put in jail.

There's many, many more examples. Here's another old one that I always found somewhat amusing: news:cgat6b$1424$1 digitaldaemon.com

I'm not saying that implicit-casting is necessarily bad, and I'm not saying that method-overloading is bad. I am saying the combination of the two leads to all sorts of nastiness; some of which you've attempted to cover up by implicitly hiding superclass method-names overloaded by a subclass, and which you just might cover up with an explicit type-prefix on "literal" strings (such as w"wide-string").

It shouldn't have to be like this. Surely there's a more elegant solution all round, both for the compiler and for the user? Fewer special cases is better for everyone.

- Kris
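For reference, Kris's broken and fixed calls as one compilable unit (D of the era; the `writefln` bodies are mine, added purely so the program does something observable):

```d
import std.stdio;

void print( char[] s) { writefln("char[] overload"); }
void print(wchar[] s) { writefln("wchar[] overload"); }

void main()
{
    // print("bork");           // error: "bork" implicitly converts to both
    //                          // char[] and wchar[], so the call is ambiguous
    print(cast(char[]) "bork"); // the explicit cast selects the char[] overload
}
```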
Feb 27 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 27 Feb 2005 23:18:59 +0000 (UTC), Kris wrote:

[snip]
 
 Here's another old & trivial example of this type of bogosity:
 
 void print (char[] s);
 void print (wchar[] s);
 
 {print ("bork");}
 
 Because the char literal can be implicitly cast to wchar[], the compiler fails.
 One has to do this instead:
 
 print (cast(char[]) "bork");
 
 This is daft, Walter. And it hasn't been fixed a year later.

Kris, what do you think the compiler ought to do in this case, given only the information you have supplied above? I ask because I can't think of a decent alternative to failing (given that we are not allowed to have warning messages).

Upon seeing ...

print ("bork");

the compiler is required to generate a call to a 'print' routine. It has two to choose from. One requires char[] and the other wchar[]. What the coder has told the compiler is that it has "bork", which is a string literal in an unspecified encoding format. Because the coder hasn't specified the format, the compiler tries the various possible ones available to it. So it tries the obvious first one, 'char[]', and gets a possible candidate routine. It then tries 'wchar[]' and finds another candidate. Now it has two routines which could be called - so which does it decide to call? The compiler cannot know your intentions in such an ambiguous situation, and since it's too timid to assume one, it fails instead.

So it seems to me that the coder is obliged to give a bit more information to the compiler to help it make better decisions. That could be decorating the literal with some sort of encoding notation, or maybe putting in a 'pragma' type thingy to tell the compiler that all undecorated string literals are of type char[] so don't go implicitly trying other encoding formats, or maybe something else. In any case, the coder needs to take more responsibility rather than giving that up to the compiler writer.
 Oh, and let's not forget that D will implicitly, and silently, convert a long
 argument into an int or byte. Not even a peep from the compiler.

Yes. Now this is a bit daft. Silently losing data is bound to cause grief sooner or later. Here is a case in which the compiler is not timid and does make a unilateral decision, without alerting the coder.

[snip]
 I'm not saying that implicit-casting is necessarily bad, and I'm not saying
that
 method-overloading is bad. I am saying the combination of the two leads to all
 sort of nastiness; some of which you've attempted to cover-up by implicitly
 hiding superclass method-names overloaded by a subclass, and which you just
 might cover up with an explicit-type-prefix on "literal" strings (such as
 w"wide-string").

And the alternative is ...??
 It shouldn't have to be like this. Surely there's a more elegant solution all
 round, both for the compiler and for the user? Less special-cases is better for
 everyone.

Yes, but have you any ideas as to what that might look like?

-- 
Derek
Melbourne, Australia
28/02/2005 10:38:33 AM
Feb 27 2005
parent reply Kris <Kris_member pathlink.com> writes:
In article <1hc00815rv4ri$.mu3h8w8oh3gq.dlg 40tude.net>, Derek Parnell says...

[lots of snipping]

 print (cast(char[]) "bork");
 
 This is daft, Walter. And it hasn't been fixed a year later.

Kris, what do you think the compiler ought to do in this case, given only the information you have supplied above?

 might cover up with an explicit-type-prefix on "literal" strings (such as
 w"wide-string").

And the alternative is ...??

Experience (here) suggests whenever anyone posts their own brand of 'resolution', it often tends to steer discussion in an unproductive direction. Ignoring that completely, here's one approach: string literals are always char[] ~ unless explicitly denoted as something other. The latter could be via the presence of an embedded wide-char (that "\u1234" notion the D website discusses), or a 'w' or 'd' prefix. Walter wasn't keen on the prefix notion, although there are plenty of prefixes already (perhaps that's why?). Regardless, the approach is simple and easy to remember.
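To make the proposal concrete, here is a hypothetical sketch of how denoted literals would disambiguate the earlier `print` example. It is shown with the postfix `c`/`w`/`d` markers D eventually adopted; the prefix spelling Kris mentions would work the same way:

```d
void print( char[] s) {}
void print(wchar[] s) {}
void print(dchar[] s) {}

void main()
{
    print("bork"c); // explicitly char[] -- the proposed undecorated default
    print("bork"w); // explicitly wchar[]
    print("bork"d); // explicitly dchar[]
}
```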
 It shouldn't have to be like this. Surely there's a more elegant solution all
 round, both for the compiler and for the user? Less special-cases is better for
 everyone.

Yes, but have you any ideas as to what that might look like?

With respect, Derek, I'd rather not go there. There's several big issues bundled up within this arena, and I don't have anything I'd consider bulletproof at this time. Frankly, I'd rather Walter be encouraged into thinking long and hard about a better overall approach.
Feb 27 2005
parent Derek Parnell <derek psych.ward> writes:
On Mon, 28 Feb 2005 01:09:43 +0000 (UTC), Kris wrote:

 In article <1hc00815rv4ri$.mu3h8w8oh3gq.dlg 40tude.net>, Derek Parnell says...
 
 [lots of snipping]
 
 print (cast(char[]) "bork");
 
 This is daft, Walter. And it hasn't been fixed a year later.

Kris, what do you think the compiler ought to do in this case, given only the information you have supplied above?

 might cover up with an explicit-type-prefix on "literal" strings (such as
 w"wide-string").

And the alternative is ...??

Experience (here) suggests whenever anyone posts their own brand of 'resolution', it often tends to steer discussion in an unproductive direction. Ignoring that completely, here's one approach: string literals are always char[] ~ unless explicitly denoted as something other. The latter could be via the presence of an embedded wide-char (that "\u1234" notion the D website discusses), or a 'w' or 'd' prefix. Walter wasn't keen on the prefix notion, although there are plenty of prefixes already (perhaps that's why?). Regardless, the approach is simple and easy to remember.

That would sure simplify things nicely. I'd abide with it.
 It shouldn't have to be like this. Surely there's a more elegant solution all
 round, both for the compiler and for the user? Less special-cases is better for
 everyone.

Yes, but have you any ideas as to what that might look like?

With respect, Derek, I'd rather not go there. There's several big issues bundled up within this arena, and I don't have anything I'd consider bulletproof at this time. Frankly, I'd rather Walter be encouraged into thinking long and hard about a better overall approach.

Fair enough. Hope the approach works.

-- 
Derek
Melbourne, Australia
28/02/2005 12:15:32 PM
Feb 27 2005
prev sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvs3dl$2km0$1 digitaldaemon.com>, Manfred Nowak says...
So, instead of intoning choruses prove that your wish improves 
productivity in general and Walter will follow.

That's a convenient, if disingenuous, position to take. You might as well suggest this instead: "prove Iraq had no weapons-of-mass-destruction, and Bush will step down from office". Whatever one's personal beliefs are, neither will happen ~ so let's forego the "Norman Rockwell", please.

If you feel that such behaviour (the topic) is indicative of a language that will be a wild and raving success, then fair enough. I think it could be handled a whole lot more intuitively ~ and I'm bringing it up, based on real-world experience with D, so that it doesn't get swept under the carpet.
Feb 27 2005
parent "Matthew" <admin stlsoft.dot.dot.dot.dot.org> writes:
"Kris" <Kris_member pathlink.com> wrote in message 
news:cvte46$rl4$1 digitaldaemon.com...
 In article <cvs3dl$2km0$1 digitaldaemon.com>, Manfred Nowak says...
So, instead of intoning choruses prove that your wish improves
productivity in general and Walter will follow.

That's a convenient, if disingenuous, position to take. You might as well suggest this instead: "prove Iraq had no weapons-of-mass-destruction, and Bush will step down from office". Whatever one's personal beliefs are, neither will happen ~ so let's forego the "Norman Rockwell", please.

If you feel that such behaviour (the topic) is indicative of a language that will be a wild and raving success, then fair enough. I think it could be handled a whole lot more intuitively ~ and I'm bringing it up, based on real-world experience with D, so that it doesn't get swept under the carpet.

Whatever your position on such matters - and beware this advice from me precisely because I agree with him about most things - I think Kris' opinions are generally well worth regarding, because he is doing a lot of D. AFAIK, and please correct me if I'm wrong, Mango is the largest single D project thus far. Even if I'm wrong about that, Mango's certainly brought to the fore many previously unforeseen issues in 'normal' D.

FWIW, when I _finally_ get back to DTL next month, I expect it'll provide a similar function for the template side of D. I'm going to start by trawling the NG and various projects for template wisdoms, and then jump back in. So, any template-related issues/ideas are best floated (either here or via email) in the next 2-3 weeks.
Feb 27 2005
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Fri, 25 Feb 2005 05:16:31 +0000 (UTC), Kris wrote:

 In article <1vsrqzrq891rl$.heltnytmakww.dlg 40tude.net>, Derek Parnell says...
Sorry, Kris, but I'm not getting any errors here, except that your example
function 'wumpus()' returns a void. Once I changed that to a uint it
compiled fine.  Windows v0.113

<code>
uint foo (inout int x) {return 0;}
uint foo (inout uint x) {return 0;}

uint wumpus()
{
uint y;

return foo(y);
}
</code>

OK ~ found out what actually breaks:

uint foo (char[] s, inout int v, uint r=10) {return 0;}
uint foo (char[] s, inout uint v, uint r=10) {return 0;}

void bar()
{
char[] s;
int c;

foo (s, c);      // compiles
foo (s, c, 12);  // fails
}

There's some kind of issue with the default assignment ... yet the error message states "cast(uint)(c) is not an lvalue", along with the other msg about "both match argument list".

Again; about as bulletproof as a broken window

I've been doing a little bit more analysis on the actual message given by this situation.

Firstly, the messages as reported by Kris seem to imply that the "not an lvalue" message appears before the "both match" message. However, using the *code as supplied*, I can only get the "both match" message; there is no "not an lvalue" message. I can, however, get the lvalue message if I change the example code to read "uint c;" rather than "int c;", but then it occurs after the "both match" message.

Here is the exact message that I get with the example above ...

  test.d(10): function test.foo overloads uint(char[]s,inout int v,uint r = cast(uint)(10)) and uint(char[]s,inout uint v,uint r = cast(uint)(10)) both match argument list for foo

Now if I reformat the message to improve legibility ...

  test.d(10): function test.foo overloads
    uint(char[], inout int, uint)
  and
    uint(char[], inout uint, uint)
  both match argument list for foo

Then add the (implied) signature for the failing foo call ...

  test.d(10): function test.foo
    uint(char[], inout int, int)
  overloads
    uint(char[], inout int, uint)
  and
    uint(char[], inout uint, uint)
  both match argument list for foo

I find myself asking, "why did the compiler implicitly cast the constant 12 to an int *and* then stop scanning for matches when it found the first mismatch?" For example, if it had continued scanning for *exact* matches, it could have cast the 12 to a uint and found the exact match.

So I'm guessing the algorithm for matching goes something like this ...

(1) Build a list of all routines in scope that have the same name. This gives us two:

    uint(in char[], inout int, in uint)
    uint(in char[], inout uint, in uint)

(2) For each item in the list above, see if the call in question exactly matches; if so, flag it. Otherwise, see if the call in question can be coerced into matching; if so, flag it. (BTW, I'm assuming that matching is done on both data type and storage class ~ in, inout, out.)

(3) If there are two or more items in the list flagged, issue the "both match" message.

(4) If there are zero items in the list flagged, issue the "no match" message.

(5) If there is exactly one item flagged, use it as the target of the call.

Using this algorithm with what we have, "(char[], int, 12)" (and note that I disregard the return type), the first item "(in char[], inout int, in uint)" does not exactly match, because we don't know what to do with the literal yet. But it can be coerced to match, so flag it.

The second item "(in char[], inout uint, in uint)" also does not match. And it *cannot* be coerced, because even though we can cast the 12 to a (in uint), we cannot cast the "int c" into an "inout uint" (because by doing so we'd have an lvalue error). So we don't flag this item, because it cannot match the call.

That leaves us with exactly one matched routine. The compiler should have selected "uint foo (char[] s, inout int v, uint r=10) {return 0;}" because that is the only routine that can match the calling arguments without errors.

So why did the compiler complain? Because I suspect that the storage class is not really being used as I expect for signature matching. If the routines did not have "inout" then we would have had both routines matching, and the error message would have made better sense.

-- 
Derek
Melbourne, Australia
28/02/2005 2:27:18 PM
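Derek's five steps can be modelled in a few lines of D. This is a toy sketch, not DMD's actual resolver; the `Match` enum, `matchArg`, and the two-entry conversion table are all my own names and assumptions:

```d
import std.stdio;

// The three matching levels Walter describes: exact match,
// match with implicit conversions, no match.
enum Match { none, implicit, exact }

// Toy conversion rule (my assumption, not DMD's table):
// int and uint implicitly convert to each other.
Match matchArg(string arg, string param)
{
    if (arg == param)
        return Match.exact;
    if ((arg == "int" && param == "uint") || (arg == "uint" && param == "int"))
        return Match.implicit;
    return Match.none;
}

// A call matches a signature at the level of its worst-matching argument.
Match matchCall(string[] args, string[] params)
{
    if (args.length != params.length)
        return Match.none;
    auto worst = Match.exact;
    foreach (i, arg; args)
    {
        auto m = matchArg(arg, params[i]);
        if (m < worst)
            worst = m;
    }
    return worst;
}

void main()
{
    // foo(s, c, 12): the literal 12 types as int, per Walter
    auto call = ["char[]", "int", "int"];
    auto overloads = [
        ["char[]", "int",  "uint"],
        ["char[]", "uint", "uint"],
    ];

    size_t flagged;
    foreach (sig; overloads)
        if (matchCall(call, sig) != Match.none)
            ++flagged;

    // Both overloads survive at the implicit level, hence the ambiguity error.
    writeln(flagged > 1 ? "error: both match argument list for foo"
                        : "resolved to one overload");
}
```

Because storage classes are absent from the model (as Walter says they are in the real compiler), both signatures flag at the implicit level and the ambiguity error follows.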
Feb 27 2005
parent reply "Walter" <newshound digitalmars.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message
news:1aryxa64qi6wv.1tueiq1jegdq2.dlg 40tude.net...
 I find myself asking, "why did the compiler implicitly cast the constant
 12 to an int

It didn't. The literal 12 is an int. Hence:

C:\mars>dmd test
test.d(9): function test.foo called with argument types:
        (char[],int,int)
matches both:
        test.foo(char[],int,uint)
and:
        test.foo(char[],uint,uint)

both match with implicit casts, hence the ambiguity error. Also, storage classes (such as inout) are ignored for overload matching; *only* argument types are looked at.
Feb 27 2005
next sibling parent reply Kris <Kris_member pathlink.com> writes:
In article <cvuceu$1r0p$1 digitaldaemon.com>, Walter says...
"Derek Parnell" <derek psych.ward> wrote in message
news:1aryxa64qi6wv.1tueiq1jegdq2.dlg 40tude.net...
 I find myself asking, "why did the compiler implicitly cast the constant
 12 to an int

It didn't. The literal 12 is an int. Hence:

C:\mars>dmd test
test.d(9): function test.foo called with argument types:
        (char[],int,int)
matches both:
        test.foo(char[],int,uint)
and:
        test.foo(char[],uint,uint)

both match with implicit casts, hence the ambiguity error. Also, storage classes (such as inout) are ignored for overload matching; *only* argument types are looked at.

Well, of course they both match with implicit casts, Walter -- you went and stripped off the important distinguishing traits before the matching process occurred! Doh!

Pointer types are treated more strictly, yet you state D deliberately ignores that information ... what's left is something that only partially resembles the original signature, and the user intent.

If I refrain from stating the obvious, may I ask if this is how it will remain?

- Kris
Feb 28 2005
parent "Walter" <newshound digitalmars.com> writes:
"Kris" <Kris_member pathlink.com> wrote in message
news:cvul8e$255i$1 digitaldaemon.com...
 In article <cvuceu$1r0p$1 digitaldaemon.com>, Walter says...
"Derek Parnell" <derek psych.ward> wrote in message
news:1aryxa64qi6wv.1tueiq1jegdq2.dlg 40tude.net...
 I find myself asking, "why did the compiler implicitly cast the constant
 12 to an int

It didn't. The literal 12 is an int. Hence:

C:\mars>dmd test
test.d(9): function test.foo called with argument types:
        (char[],int,int)
matches both:
        test.foo(char[],int,uint)
and:
        test.foo(char[],uint,uint)

both match with implicit casts, hence the ambiguity error. Also, storage classes (such as inout) are ignored for overload matching; *only* argument types are looked at.

 Well, of course they both match with implicit casts Walter -- you went and
 stripped off the important distinguishing traits before the matching process
 occurred! Doh!

 Pointer types are treated more strictly, yet you state D deliberately ignores
 that information ... what's left is something that only partially resembles the
 original signature, and the user intent.

inout is a storage class, not a type. Overloading is based on the types. Although the inout is implemented as a pointer, that isn't how the type system views it.
 If I refrain from stating the obvious, may I ask if this is how it will
 remain?
Yes. I don't think overloading based on the storage class is a good idea; it would introduce the possibility of things like:

void foo(int x);
void foo(inout int x);

existing, and what would that mean? Saying that case isn't allowed starts opening up the door to the C++ long list of special overloading cases. (C++ has a lot of funky weird overloading rules about when an int is 'better' than int&, and when it isn't. Matthew and I had the recent joy of attempting to track down yet another oddity with template partial specialization due to this. Both of us are experts at it, and we both expended much time trying to figure out what the exact behaviour should be, and it wound up surprising us both.)

Sometimes making the "obvious" cases work leads to a lot of unintended consequences in the corners; that old "conflicting requirements" thing I was talking about before. Sometimes I think living with a simple rule is better.
Feb 28 2005
prev sibling parent reply Derek <derek psych.ward> writes:
On Sun, 27 Feb 2005 22:07:21 -0800, Walter wrote:

 "Derek Parnell" <derek psych.ward> wrote in message
 news:1aryxa64qi6wv.1tueiq1jegdq2.dlg 40tude.net...
 I find myself asking, "why did the compiler implicitly cast the constant
 12 to an int

It didn't. The literal 12 is an int. Hence:

Hmmm... I think we have a problem. I would have said that 12 is an *integer*, but I couldn't tell you what sort of integer it is until I saw it in context. I believe that 12 could be any of ...

  byte
  ubyte
  short
  ushort
  int
  uint
  long
  ulong

All are integers, but they are all different subsets of the integer set. And you are still absolutely positive that 12 is only an int? Could it not have been a uint? If not, why not?

Also, if 12 is an int, why does this compile without complaint?

void foo(real x) { }
void main(){ foo(12); }

Because, if the compiler casts it to a real, it can find a match. So let's flow with that idea for a moment...
 C:mars>dmd test
 test.d(9): function test.foo called with argument types:
         (char[],int,int)

And yet, if the compiler had chosen to assume 12 was a uint, then there would have been an exact match:

(char[],int,uint)
 matches both:
         test.foo(char[],int,uint)
 and:
         test.foo(char[],uint,uint)
 
 both match with implicit casts, hence the ambiguity error.

So that's why I wondered why the compiler stopped trying to find an exact match. It gave up too soon.
 Also, storage
 classes (such as inout) are ignored for overload matching, *only* argument
 types are looked at.

However, isn't 'inout int' very much similar in nature to the C++ 'int*' pointer concept? And our beloved C++ would notice that an 'int' is very much different to an 'int*', so why should D see no difference between 'inout int' and 'in int'?

But aside from that, what would be the consequence of having the compiler take into consideration the storage classes of the signature? And this is a serious question from someone who is trying to find some answers to that very question. It is not a rhetorical one, nor is it trolling.

So far, I would think that it would make the compiler find more bugs (mistakes), and it might be possible for the compiler to discover better optimizations.

-- 
Derek
Melbourne, Australia
Feb 28 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek wrote:

 Hmmm... I think we have a problem.  I would have said that 12 is an
 *integer*, but I couldn't tell you what sort of integer it is until I saw
 it in context. I believe that 12 could be any of ...
 
   byte
   ubyte
   short
   ushort
   int
   uint
   long
   ulong

I think "smallest type that could possibly match" would be a good rule?

Thus, 42 would be a byte, 4242 a short, "hello" would be a char[] and "smörgåsbord" a wchar[]. (Just an idea, what do you think...?)
 All are integers, but they are all different subsets of the integer set.
 And you are still absolutely positive that 12 is only an int? Could it not
 have been a uint? If not, why not?

Because then you say 12U, if you mean unsigned? (Just as 0.0f is a float, and 0.0 is a double...)

--anders
Feb 28 2005
parent reply Derek <derek psych.ward> writes:
On Mon, 28 Feb 2005 11:35:05 +0100, Anders F Björklund wrote:

 Derek wrote:
 
 Hmmm... I think we have a problem.  I would have said that 12 is an
 *integer*, but I couldn't tell you what sort of integer it is until I saw
 it in context. I believe that 12 could be any of ...
 
   byte
   ubyte
   short
   ushort
   int
   uint
   long
   ulong

I think "smallest type that could possibly match" would be a good rule?

Thus, 42 would be a byte, 4242 a short, "hello" would be a char[] and "smörgåsbord" a wchar[]. (Just an idea, what do you think...?)
 All are integers, but they are all different subsets of the integer set.
 And you are still absolutely positive that 12 is only an int? Could it not
 have been a uint? If not, why not?

Because then you say 12U, if you mean unsigned ? (just as 0.0f is a float, and 0.0 is a double...) --anders

So maybe all integer subsets need a literal decorator. How would I indicate that 42 is meant to be a ushort?

-- 
Derek
Melbourne, Australia
Feb 28 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek wrote:

Thus, 42 would be a byte, 4242 a short, "hello" would be a char[]
and "smörgåsbord" a wchar[] (just an idea, what do you think... ?)


Let's leave the strings out of this, by the way, since it might as well use char[] always (works fine for both) Got myself confused with 'a' => char, 'å' => wchar, which was what I meant to write in the first place ;-)
 So maybe all integer subsets need a literal decorator. How whould I
 indicate the 42 is meant to be a ushort?

"cast(ushort) 42U" :-)

In theory the part-register int types, byte and short, *could* have suffixes too. But casts work in practice?

Can't say I have worried about the type of int literals much; they have always been "int" (but that was since C). And as long as "true" is of the type bit in D, I haven't really started to worry about the other D types either?

The only one with a designated type in D at the moment is long, which has the L suffix appended - as in 42L... For the floating point types, all three have suffixes, even if the extended/real type might just be double too. (D only *guarantees* 64 bits of precision for "real", but provides 80 bits on X86 hardware. Other CPUs vary.)

--anders
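Collected in one place, the literal markings being discussed. This is a quick reference sketch; as Anders says, byte and short (and ushort) have no suffix of their own, so an explicit cast is the only spelling:

```d
long   a = 42L;             // L suffix: long
uint   b = 42U;             // U suffix: uint
ulong  c = 42UL;            // combined: ulong
float  f = 0.0f;            // f suffix: float
double d = 0.0;             // unsuffixed floating literal: double
real   r = 0.0L;            // L on a floating literal: real
ushort s = cast(ushort) 42; // no ushort suffix; a cast is needed
```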
Feb 28 2005
prev sibling parent "Walter" <newshound digitalmars.com> writes:
"Derek" <derek psych.ward> wrote in message
news:qmpq5b84012d$.1un5lj704bsoa$.dlg 40tude.net...
 And you are still absolutely positive that 12 is only an int?

Yes.
 Could it not have been a uint? If not, why not?

It needs a type assigned at the bottom level. So it gets an "int". The type inference semantics in C, C++ and D are "bottom up".
 And yet, if the compiler had chosen to assume 12 was a uint, then there
 would have been an exact match.

Right. But there you're arguing for "top down" type inference, which is something very, very different from how C, C++ and D handle expressions.
 However, isn't 'inout int' very much similar in nature to the C++ 'int*'
 pointer concept.

It's more akin to the C++ "int&" concept.
 And our beloved C++ would notice that an 'int' is very
 much different to an 'int*', so why should D see no difference between
 'inout int' and 'in int'.

C++ overloading rules with int and int& are complicated and a rich source of bugs with overloading and template partial specialization. Even experts have trouble with it, as Matthew and I were working on just that a couple days ago. (The problems come from "when are they the same, and when are they different.")
 But aside from that, what would be the consequence of having the compiler
 take into consideration the storage classes of the signature? And this is a
 serious question from someone who is trying to find some answers to that
 very question. It is not a rhetorical one, nor is it trolling.

It's a very good question. Consider the following:

void foo(int i);
void foo(inout int i);

Is it really a good idea to pick different overloads based on whether the argument is a literal or a variable, but they are the *same* types?
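For illustration, a hypothetical call site for that pair, assuming storage-class overloading were permitted. This does not compile in D; the comments spell out the ambiguity Walter is pointing at:

```d
void foo(int x)       {}  // by value
void foo(inout int x) {}  // hypothetical: D rejects declaring this pair

void main()
{
    int i = 1;
    foo(i);  // ambiguous: i is an int lvalue, so both overloads could apply
    foo(5);  // presumably foo(int), since a literal cannot bind to inout
}
```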
Feb 28 2005