
digitalmars.D.learn - automatic int to short conversion - the HELL?

reply downs <default_357-line yahoo.de> writes:
void main() { int i; short x; x = i; }

Excuse me, but - how exactly is it that this is in any way, shape or form valid
code?

How can I trust a language that allows those kinds of shenanigans?
Sep 17 2008
next sibling parent reply downs <default_357-line yahoo.de> writes:
n/t
Sep 17 2008
parent "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Wed, Sep 17, 2008 at 10:32 PM, downs <default_357-line yahoo.de> wrote:
 n/t

I'm not sure what this has to do with you not specifying -w.
Sep 17 2008
prev sibling next sibling parent reply "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de> wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or form
valid code?

 How can I trust a language that allows those kind of shenanigans?

lern2warningsflag.
Sep 17 2008
parent reply downs <default_357-line yahoo.de> writes:
Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de> wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or form
valid code?

 How can I trust a language that allows those kind of shenanigans?

lern2warningsflag.

"Warning. Your code is broken." I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.
Sep 17 2008
next sibling parent reply "Chris R. Miller" <lordsauronthegreat gmail.com> writes:
downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de> wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or form
valid code?

 How can I trust a language that allows those kind of shenanigans?


"Warning. Your code is broken." I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.

I don't get it. Why can that not be simple implicit type casting?
Sep 17 2008
parent reply downs <default_357-line yahoo.de> writes:
Chris R. Miller wrote:
 downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de>
 wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or
 form valid code?

 How can I trust a language that allows those kind of shenanigans?


"Warning. Your code is broken." I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.

I don't get it. Why can that not be simple implicit type casting?

Because short is not a superset of int.
Sep 17 2008
next sibling parent reply "Chris R. Miller" <lordsauronthegreat gmail.com> writes:
downs wrote:
 Chris R. Miller wrote:
 downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de>
 wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or
 form valid code?

 How can I trust a language that allows those kind of shenanigans?


I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.


Because short is not a superset of int.

Well.... then it's just a loss-of-precision warning like in every other language (Java and C++, off the top of my head). -w and be on thy way, unless I'm missing something else.
Sep 18 2008
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Chris R. Miller wrote:
 downs wrote:
 Chris R. Miller wrote:
 downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de>
 wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or
 form valid code?

 How can I trust a language that allows those kind of shenanigans?


I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.


Because short is not a superset of int.

Well.... then it's just a loss-of-precision warning like in every other language (Java and C++, off the top of my head). -w and be on thy way, unless I'm missing something else.

No, no. In Java it's an error, an explicit cast is required.
http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting

Which is perfect. It expresses the intent of the programmer:

    long x = ...;
    int y = (int) x; // yes, I know I might lose information,
                     // but I'm sure it won't happen

However, if you see this code (in D):

    long x = ...;
    int y = x;

you start wondering whether the original author simply forgot to add the cast or knew what he was doing. How can you know? I like the compiler to force you to write an explicit cast. It is saying: "Hey, please tell me you know what you are doing here... because maybe you didn't notice you might lose information here."
Sep 18 2008
next sibling parent reply Tomas Lindquist Olsen <tomas famolsen.dk> writes:
Ary Borenszweig wrote:
 Chris R. Miller wrote:
 downs wrote:
 Chris R. Miller wrote:
 downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de>
 wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, shape or
 form valid code?

 How can I trust a language that allows those kind of shenanigans?


I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.


Because short is not a superset of int.

Well.... then it's just a loss-of-precision warning like in every other language (Java and C++, off the top of my head). -w and be on thy way, unless I'm missing something else.

No, no. In Java it's an error, an explicit cast is required.
http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting

Which is perfect. It expresses the intent of the programmer:

    long x = ...;
    int y = (int) x; // yes, I know I might lose information,
                     // but I'm sure it won't happen

However, if you see this code (in D):

    long x = ...;
    int y = x;

you start wondering whether the original author simply forgot to add the cast or knew what he was doing. How can you know? I like the compiler to force you to write an explicit cast. It is saying: "Hey, please tell me you know what you are doing here... because maybe you didn't notice you might lose information here."

Doesn't all this come down to convincing Walter to lose the "must follow C rules" mantra? I doubt that's gonna happen ... Who knows :)

- Tomas
Sep 18 2008
parent "Chris R. Miller" <lordsauronthegreat gmail.com> writes:
Tomas Lindquist Olsen wrote:
 Ary Borenszweig wrote:
 Chris R. Miller wrote:
 downs wrote:
 Chris R. Miller wrote:
 downs wrote:
 Jarrett Billingsley wrote:
 On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de>
 wrote:
 void main() { int i; short x; x = i; }

 Excuse me, but - how exactly is it that this is in any way, 
 shape or
 form valid code?

 How can I trust a language that allows those kind of shenanigans?


I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.


Because short is not a superset of int.

Well.... then it's just a loss-of-precision warning like in every other language (Java and C++, off the top of my head). -w and be on thy way, unless I'm missing something else.

No, no. In Java it's an error, an explicit cast is required.
http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting

Which is perfect. It expresses the intent of the programmer:

    long x = ...;
    int y = (int) x; // yes, I know I might lose information,
                     // but I'm sure it won't happen

However, if you see this code (in D):

    long x = ...;
    int y = x;

you start wondering whether the original author simply forgot to add the cast or knew what he was doing. How can you know? I like the compiler to force you to write an explicit cast. It is saying: "Hey, please tell me you know what you are doing here... because maybe you didn't notice you might lose information here."

Doesn't all this come down to convincing Walter to lose the "must follow C rules" mantra? I doubt that's gonna happen ...

Yeah, when pigs fly, correct? <looks out window> Never mind, gotta find a different metaphor. Stupid viking catapult... ;-)
Sep 18 2008
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Bye,
bearophile
Sep 19 2008
next sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Wow. At first, I thought that was already fixed. Now I've written that code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.
 Bye,
 bearophile

Sep 19 2008
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)

But if the length of an array is a uint and you compare it to an int, then what bearophile has just shown might happen. Now, if the length of an array is an int that is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result. Conclusion: you prevent a bug at zero cost. So... why not?
Sep 19 2008
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:45 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar>
 wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

the type of array.length ;)

what bearophile has just shown might happen. Now, if the length of an array is an int that is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result. Conclusion: you prevent a bug at zero cost. So... why not?

The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.

Now I see what you mean, and I agree.
Sep 19 2008
prev sibling parent reply Don <nospam nospam.com.au> writes:
Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:45 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar>
 wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

the type of array.length ;)

what bearophile has just shown might happen. Now, if the length of an array is an int that is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result. Conclusion: you prevent a bug at zero cost. So... why not?

The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.

But the solution is NOT to leave the language as-is, only disallowing signed-unsigned comparison. That's a cure that's as bad as the disease.

One of the biggest stupidities from C is that 0 is an int. Once you've done that, you HAVE to have implicit conversions. And once you have implicit conversions, you have to allow signed-unsigned comparison.
Sep 19 2008
next sibling parent Sean Kelly <sean invisibleduck.org> writes:
Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 11:52 AM, Don <nospam nospam.com.au> wrote:
 One of the biggest stupidities from C is that 0 is an int. Once you've done
 that, you HAVE to have implicit conversions. And once you have implicit
 conversions, you have to allow signed-unsigned comparison.

OK, then what's the solution? "cast(int)0" everywhere?

I'd be inclined to say that polysemous values are one possible solution.

Sean
Sep 19 2008
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Don:

 But the solution is NOT to leave the language as-is, only disallowing 
 signed-unsigned comparison. That's a cure that's as bad as the disease.

May I ask you why?
 One of the biggest stupidities from C is that 0 is an int. Once you've 
 done that, you HAVE to have implicit conversions. And once you have 
 implicit conversions, you have to allow signed-unsigned comparison.

I don't understand (I know no languages where 0 isn't an int), can you explain a bit better?

Bye and thank you,
bearophile
Sep 19 2008
parent reply Sean Kelly <sean invisibleduck.org> writes:
bearophile wrote:
 Don:
 
 But the solution is NOT to leave the language as-is, only disallowing 
 signed-unsigned comparison. That's a cure that's as bad as the disease.

May I ask you why?
 One of the biggest stupidities from C is that 0 is an int. Once you've 
 done that, you HAVE to have implicit conversions. And once you have 
 implicit conversions, you have to allow signed-unsigned comparison.

I don't understand (I know no languages where 0 isn't an int), can you explain a bit better?

I think this actually applies to any integer literal. For example:

    short i = 0;
    unsigned j = 1;

In C, the above code implicitly converts int(0) to short and int(1) to unsigned. If literals had a type and implicit conversions were illegal, this code would have to be:

    short i = (short)0;
    unsigned j = (unsigned)1;

which obviously stinks. However, typed literals plus allowed conversion also makes this legal:

    unsigned k = -2;

which makes no sense, given the types involved.

Sean
Sep 19 2008
parent reply bearophile <bearophileHUGS lycos.com> writes:
Sean Kelly:
 In C, the above code implicitly converts int(0) to short and int(1) to 
 unsigned.  If literals had a type and implicit conversions were illegal, 
 this code would have to be:
      short i = (short)0;
      unsigned j = (unsigned)1;
 which obviously stinks.  However, typed literals plus allowed conversion 
 also makes this legal:
 
      unsigned k = -2;
 
 which makes no sense, given the types involved.

Let's see, this is a quick rough list of things that can be done:

- Type conversions can be implicit only when upcasting. All downcasting requires an explicit cast.
- Literals are checked at compile time, so the compiler can perform an implicit cast safely, even downcasting.
- < > between signed and unsigned numbers requires automatic upcasting (uint > int becomes long > long), or manual casting.
- Things like unsigned k = -2; are of course disallowed.
- In arrays and other collections the length can be named "size" and it has to be a signed integer as long as the CPU word.
- When not in -release mode the executable contains overflow checks on all integral operations (Delphi almost does this, with a trick). They can be removed in release mode.
- cent/ucent can be introduced, and maybe silently used by the compiler to avoid some bugs, when not in release mode.
- Eventually, a multiprecision integer type optimized for 4-10 byte integers can be used here and there, but the compiler can replace its operations with signed or unsigned operations on CPU-word integrals wherever there's no risk of overflow.
- Octal literals have to be improved.

Similar things can help remove some of the bugs from future D2 programs.

Bye,
bearophile
Sep 19 2008
parent bearophile <bearophileHUGS lycos.com> writes:
Jarrett Billingsley:
 I like the idea of replacing 'length' with 'size', but why does it
 have to be signed?  As long as the interactions between signed and
 unsigned integers are better defined and restricted, there's no point.

Because I am not sure the interactions between signed and unsigned integers will become that good :o) So I don't trust a fully perfect global solution to the problem and I prefer to solve it locally too :-) That's a way to design more reliable engineering systems, to keep each subsystem safer by itself :-)

Bye,
bearophile
Sep 20 2008
prev sibling parent "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Fri, Sep 19, 2008 at 8:23 PM, bearophile <bearophileHUGS lycos.com> wrote:

 - In arrays and other collections the length can be named "size" and it has to
be a signed integer as long as the CPU word.

I like the idea of replacing 'length' with 'size', but why does it have to be signed? As long as the interactions between signed and unsigned integers are better defined and restricted, there's no point.
Sep 19 2008
prev sibling parent "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Fri, Sep 19, 2008 at 11:52 AM, Don <nospam nospam.com.au> wrote:
 Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:45 AM, Ary Borenszweig <ary esperanto.org.ar>
 wrote:
 Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar>
 wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Wow. At first, I thought that was already fixed. Now I've written that code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)

But if the length of an array is a uint and you compare it to an int, then what bearophile has just shown might happen. Now, if the length of an array is an int that is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result. Conclusion: you prevent a bug at zero cost. So... why not?

The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.

But the solution is NOT to leave the language as-is, only disallowing signed-unsigned comparison. That's a cure that's as bad as the disease.

One of the biggest stupidities from C is that 0 is an int. Once you've done that, you HAVE to have implicit conversions. And once you have implicit conversions, you have to allow signed-unsigned comparison.

OK, then what's the solution? "cast(int)0" everywhere?
Sep 19 2008
prev sibling parent "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Fri, Sep 19, 2008 at 9:45 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 Jarrett Billingsley wrote:
 On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar>
 wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Wow. At first, I thought that was already fixed. Now I've written that code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)

But if the length of an array is a uint and you compare it to an int, then what bearophile has just shown might happen. Now, if the length of an array is an int that is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result. Conclusion: you prevent a bug at zero cost. So... why not?

The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.
Sep 19 2008
prev sibling next sibling parent Janderson <ask me.com> writes:
downs wrote:
 void main() { int i; short x; x = i; }
 
 Excuse me, but - how exactly is it that this is in any way, shape or form
valid code?
 
 How can I trust a language that allows those kind of shenanigans?

I totally agree. This should be an error. You should be required to explicitly cast.

-Joel
Sep 18 2008
prev sibling next sibling parent "Bill Baxter" <wbaxter gmail.com> writes:
On Fri, Sep 19, 2008 at 4:05 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 Chris R. Miller wrote:
 No, no. In Java it's an error, an explicit cast is required.

 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting

 Which is perfect. It expresses the intents of the programmer:

 long x = ...;
 int y = (int) x; // yes, I know I might loose information, but I'm sure
                 // it won't happen

Well, I wouldn't say it *perfectly* expresses the programmer's intent. I doubt the programmer really intended to say "I want to pretend x is an int no matter what its actual type is". By which I mean a better expression of the user's intent would be "I want to convert x into an integer, so long as it is a type the compiler knows how to convert". I.e. I don't want to accidentally try to convert an array or something like that to an int.

Or maybe that's not an issue in Java? Maybe it doesn't let you cast arbitrary things to ints. But anyway, D's cast is not like that, so the same solution needs tweaking to port to D. Like some kind of conversion cast that's distinct from the generic coercive cast.

I suppose that's basically what modules like std.conv aim to provide. An alternate kind of cast: to!(int)(x) instead of cast(int)x.

--bb
Sep 18 2008
prev sibling parent "Jarrett Billingsley" <jarrett.billingsley gmail.com> writes:
On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
 bearophile wrote:
 Ary Borenszweig:
 No, no. In Java it's an error, an explicit cast is required.
 http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
 Which is perfect. It expresses the intents of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a less bug-prone language: casts are where D still has to improve in this regard. Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Wow. At first, I thought that was already fixed. Now I've written that code, compiled it, run it, and seen that it gives false. Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int to avoid this kind of bug?? You lose nothing by doing this. You are never going to need an array of 2147483647 positions, much less a bigger one. I've checked C#, which has uint as a type. The length of an array there is an int, not a uint. A much better choice.

signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)
Sep 19 2008