digitalmars.D.learn - automatic int to short conversion - the HELL?
- downs (3/3) Sep 17 2008 void main() { int i; short x; x = i; }
- downs (1/1) Sep 17 2008 n/t
- Jarrett Billingsley (2/3) Sep 17 2008 I'm not sure what this has to do with you not specifying -w.
- Jarrett Billingsley (2/5) Sep 17 2008 lern2warningsflag.
- downs (3/12) Sep 17 2008 "Warning. Your code is broken."
- Chris R. Miller (2/15) Sep 17 2008 I don't get it. Why can that not be simple implicit type casting?
- downs (2/21) Sep 17 2008 Because short is not a superset of int.
- Chris R. Miller (4/24) Sep 18 2008 Well.... then it's just a loss of precision warning like on every other
- Ary Borenszweig (15/41) Sep 18 2008 No, no. In Java it's an error, an explicit cast is required.
- Tomas Lindquist Olsen (5/54) Sep 18 2008 Doesn't all this come down to convincing Walter to lose the "must follow...
- Chris R. Miller (4/58) Sep 18 2008 Yeah, when pigs fly, correct?
- Bill Baxter (16/23) Sep 18 2008 Well, I wouldn't say it *perfectly* expresses the programmer's intent.
- bearophile (12/15) Sep 19 2008 About such matters I suggest you all to also take a look at how Ada work...
- Ary Borenszweig (9/27) Sep 19 2008 Wow. At first, I thought that was already fixed. Now I've written that
- Jarrett Billingsley (3/35) Sep 19 2008 signed-unsigned comparison is, I think, a slightly larger problem than
- Ary Borenszweig (8/42) Sep 19 2008 But if the length of an array is an uint, if you compare it to an int
- Jarrett Billingsley (4/61) Sep 19 2008 The point is that signed-unsigned comparison _in general_ is a bad
- Ary Borenszweig (2/61) Sep 19 2008 Now I see what you mean, and I agree.
- Don (6/65) Sep 19 2008 But the solution is NOT to leave the language as-is, only disallowing
- Jarrett Billingsley (2/83) Sep 19 2008 OK, then what's the solution? "cast(int)0" everywhere?
- Sean Kelly (3/10) Sep 19 2008 I'd be inclined to say that Polysemous values are one possible solution.
- bearophile (5/10) Sep 19 2008 I don't understand (I know no languages where 0 isn't an int), can you e...
- Sean Kelly (14/27) Sep 19 2008 I think this actually applies to any integer literal. For example:
- bearophile (14/25) Sep 19 2008 Let's see, this is a quick rough list of things that can be done:
- Jarrett Billingsley (4/5) Sep 19 2008 I like the idea of replacing 'length' with 'size', but why does it
- bearophile (4/7) Sep 20 2008 Because I am not sure the interactions between signed and unsigned integ...
- Janderson (4/9) Sep 18 2008 I totally agree. This should be an error. You should be required to
void main() { int i; short x; x = i; }

Excuse me, but - how exactly is it that this is in any way, shape or form valid code? How can I trust a language that allows those kinds of shenanigans?
Sep 17 2008
On Wed, Sep 17, 2008 at 10:32 PM, downs <default_357-line yahoo.de> wrote:
> n/t

I'm not sure what this has to do with you not specifying -w.
Sep 17 2008
On Wed, Sep 17, 2008 at 10:26 PM, downs <default_357-line yahoo.de> wrote:
> void main() { int i; short x; x = i; }
>
> Excuse me, but - how exactly is it that this is in any way, shape or form valid code? How can I trust a language that allows those kinds of shenanigans?

lern2warningsflag.
Sep 17 2008
Jarrett Billingsley wrote:
> lern2warningsflag.

"Warning. Your code is broken."

I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.
Sep 17 2008
downs wrote:
> "Warning. Your code is broken."
>
> I still claim it should actually be an error, although the only practical and correct solution might be full ranged type support.

I don't get it. Why can that not be simple implicit type casting?
Sep 17 2008
Chris R. Miller wrote:
> I don't get it. Why can that not be simple implicit type casting?

Because short is not a superset of int.
Sep 17 2008
downs wrote:
> Because short is not a superset of int.

Well.... then it's just a loss of precision warning like on every other language (Java and C++ off the top of my head). -w and be on thy way, unless I'm missing something else.
Sep 18 2008
Chris R. Miller wrote:
> Well.... then it's just a loss of precision warning like on every other language (Java and C++ off the top of my head). -w and be on thy way, unless I'm missing something else.

No, no. In Java it's an error, an explicit cast is required.

http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting

Which is perfect. It expresses the intent of the programmer:

    long x = ...;
    int y = (int) x; // yes, I know I might lose information, but I'm sure
                     // it won't happen

However, if you see this code (in D):

    long x = ...;
    int y = x;

you start wondering whether the original author simply forgot to add the cast or knew what he was doing. How can you know? I like the compiler to force you to write an explicit cast. It is saying: "Hey, please tell me you know what you are doing here... because maybe you didn't notice you might lose information here".
Sep 18 2008
Ary Borenszweig wrote:
> I like the compiler to force you to write an explicit cast. It is saying: "Hey, please tell me you know what you are doing here... because maybe you didn't notice you might lose information here".

Doesn't all this come down to convincing Walter to lose the "must follow C rules" mantra? I doubt that's gonna happen ... Who knows :)

- Tomas
Sep 18 2008
Tomas Lindquist Olsen wrote:
> Doesn't all this come down to convincing Walter to lose the "must follow C rules" mantra? I doubt that's gonna happen ...

Yeah, when pigs fly, correct?

<looks out window>

Never mind, gotta find a different metaphor. Stupid viking catapult... ;-)
Sep 18 2008
On Fri, Sep 19, 2008 at 4:05 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
> No, no. In Java it's an error, an explicit cast is required. Which is perfect. It expresses the intent of the programmer:
>
>     long x = ...;
>     int y = (int) x; // yes, I know I might lose information, but I'm sure
>                      // it won't happen

Well, I wouldn't say it *perfectly* expresses the programmer's intent. I doubt the programmer really intended to say "I want to pretend x is an int no matter what its actual type is". By which I mean a better expression of the user's intent would be "I want to convert x into an integer so long as it is a type for which the compiler knows how to convert". I.e. I don't want to accidentally try to convert an array or something like that to an int.

Or maybe that's not an issue in Java? Maybe it doesn't let you cast arbitrary things to ints. But anyway, D's cast is not like that, so the same solution needs tweaking to port to D. Like some kind of conversion cast that's distinct from the generic coercive cast.

I suppose that's basically what modules like std.conv aim to provide. An alternate kind of cast: to!(int)(x) instead of cast(int)x.

--bb
Sep 18 2008
Ary Borenszweig:
> No, no. In Java it's an error, an explicit cast is required.
> http://www.programmersheaven.com/2/FAQ-JAVA-Type-Conversion-Casting
> Which is perfect. It expresses the intent of the programmer:

About such matters I suggest you all also take a look at how Ada works. Ada was designed first of all to create reliable software, so avoiding casting-derived bugs too is essential. D tries to avoid some of the pitfalls of C, to be a language less bug-prone: casts are where D still has to improve in such regards.

Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:

    import std.stdio;
    void main() {
        int n = -5;
        int[] a = [1, 2, 3];
        writefln(a.length > n); // prints false
    }

A well designed language, even a system language like Ada or D, must avoid such kinds of bugs, regardless of the amount of ignorance of the programmer.

Bye,
bearophile
Sep 19 2008
bearophile wrote:
> Some time ago (when I was more of a D newbie) I had a bug in my code because of a casting bug:
>
>     import std.stdio;
>     void main() {
>         int n = -5;
>         int[] a = [1, 2, 3];
>         writefln(a.length > n); // prints false
>     }

Wow. At first I thought that was already fixed. Now I've written that code, compiled it, run it, and saw it gives false.

Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int, to avoid this kind of bug? You lose nothing doing this. You are never going to need an array of 2147483647 positions, much less a bigger array. [...] an int, not a uint. A much better choice.
Sep 19 2008
On Fri, Sep 19, 2008 at 9:29 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
> Ok, I know a.length is a uint because logically it cannot be negative. But... shouldn't it be an int, to avoid this kind of bug?

signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)
Sep 19 2008
Jarrett Billingsley wrote:
> signed-unsigned comparison is, I think, a slightly larger problem than the type of array.length ;)

But if the length of an array is a uint and you compare it to an int, then what bearophile has just shown might happen. Now, if the length of an array is an int which is guaranteed to always be positive or zero, then whether you compare it to an int or a uint, you always get the desired result.

Conclusion: you avoid a bug at zero cost. So... why not?
Sep 19 2008
On Fri, Sep 19, 2008 at 9:45 AM, Ary Borenszweig <ary esperanto.org.ar> wrote:
> Conclusion: you avoid a bug at zero cost. So... why not?

The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.
Sep 19 2008
Jarrett Billingsley wrote:
> The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.

Now I see what you mean, and I agree.
Sep 19 2008
Jarrett Billingsley wrote:
> The point is that signed-unsigned comparison _in general_ is a bad thing, and array.length-int is just one manifestation of it. If signed-unsigned comparison were made illegal, this would be an error.

But the solution is NOT to leave the language as-is, only disallowing signed-unsigned comparison. That's a cure that's as bad as the disease.

One of the biggest stupidities from C is that 0 is an int. Once you've done that, you HAVE to have implicit conversions. And once you have implicit conversions, you have to allow signed-unsigned comparison.
Sep 19 2008
On Fri, Sep 19, 2008 at 11:52 AM, Don <nospam nospam.com.au> wrote:
> One of the biggest stupidities from C is that 0 is an int. Once you've done that, you HAVE to have implicit conversions. And once you have implicit conversions, you have to allow signed-unsigned comparison.

OK, then what's the solution? "cast(int)0" everywhere?
Sep 19 2008
Jarrett Billingsley wrote:
> OK, then what's the solution? "cast(int)0" everywhere?

I'd be inclined to say that polysemous values are one possible solution.

Sean
Sep 19 2008
Don:
> But the solution is NOT to leave the language as-is, only disallowing signed-unsigned comparison. That's a cure that's as bad as the disease.

May I ask you why?

> One of the biggest stupidities from C is that 0 is an int. Once you've done that, you HAVE to have implicit conversions. And once you have implicit conversions, you have to allow signed-unsigned comparison.

I don't understand (I know no languages where 0 isn't an int), can you explain a bit better?

Bye and thank you,
bearophile
Sep 19 2008
bearophile wrote:
> I don't understand (I know no languages where 0 isn't an int), can you explain a bit better?

I think this actually applies to any integer literal. For example:

    short i = 0;
    unsigned j = 1;

In C, the above code implicitly converts int(0) to short and int(1) to unsigned. If literals had a type and implicit conversions were illegal, this code would have to be:

    short i = (short)0;
    unsigned j = (unsigned)1;

which obviously stinks. However, typed literals plus allowed conversion also makes this legal:

    unsigned k = -2;

which makes no sense, given the types involved.

Sean
Sep 19 2008
Sean Kelly:
> However, typed literals plus allowed conversion also makes this legal:
>
>     unsigned k = -2;
>
> which makes no sense, given the types involved.

Let's see, this is a quick rough list of things that can be done:

- Type conversions can be implicit only when upcasting. All downcasting requires an explicit cast.
- Literals are controlled at compile time. So at compile time the compiler can perform an implicit cast safely, even downcasting.
- < > between signed and unsigned numbers requires automatic upcasting (uint > int becomes long > long), or manual casting.
- Things like unsigned k = -2; are of course disallowed.
- In arrays and other collections the length can be named "size" and it has to be a signed integer as long as the CPU word.
- When not in -release mode the executable contains overflow controls on all integral operations (Delphi almost does this, with a trick). They can be removed in release mode.
- cent/ucent can be introduced and maybe silently used by the compiler to avoid some bugs, when not in release mode.
- Eventually, later, a multiprecision integer type, optimized for 4-10 byte integers, can be used here and there, but the compiler can replace its operations with signed or unsigned operations among CPU-word integrals everywhere there's no risk of overflow.
- Octal literals have to be improved.

Similar things can help remove some of the bugs from future D2 programs.

Bye,
bearophile
Sep 19 2008
On Fri, Sep 19, 2008 at 8:23 PM, bearophile <bearophileHUGS lycos.com> wrote:
> - In arrays and other collections the length can be named "size" and it has to be a signed integer as long as the CPU word.

I like the idea of replacing 'length' with 'size', but why does it have to be signed? As long as the interactions between signed and unsigned integers are better defined and restricted, there's no point.
Sep 19 2008
Jarrett Billingsley:
> I like the idea of replacing 'length' with 'size', but why does it have to be signed? As long as the interactions between signed and unsigned integers are better defined and restricted, there's no point.

Because I am not sure the interactions between signed and unsigned integers will become that good :o) So I don't trust a fully perfect global solution to the problem, and I prefer to solve it locally too :-) That's a way to design more reliable engineering systems: keep each subsystem safer by itself :-)

Bye,
bearophile
Sep 20 2008
downs wrote:
> void main() { int i; short x; x = i; }
>
> Excuse me, but - how exactly is it that this is in any way, shape or form valid code? How can I trust a language that allows those kinds of shenanigans?

I totally agree. This should be an error. You should be required to explicitly cast.

-Joel
Sep 18 2008