
digitalmars.D - unsigned policy

reply Henning Hasemann <hhasemann web.de> writes:
I know this is a more general question, as it applies to C and C++ as well,
but somewhere I have to ask, and D is what I'm actually coding in:

Should one try to use uint in favor of int whenever one knows for sure the value
won't be negative? That would be a bit more expressive, but on the other hand it
sometimes leads to type problems.
For example, when having things like this:

T min(T)(T a, T b) {
  return a < b ? a : b;
}

Here you would need to cast the values so they share a common type.

How do you code? Do you use uint whenever it suitably reflects the data to
store (e.g. an x-y position on the screen), or only when necessary?

TIA for your tips,
Henning
Feb 07 2007
next sibling parent renoX <renosky free.fr> writes:
Henning Hasemann a écrit :
 I know this is a more general question, as it applies to C and C++ as well,
 but somewhere I have to ask, and D is what I'm actually coding in:
 
 Should one try to use uint in favor of int whenever one knows for sure the value
 won't be negative? That would be a bit more expressive, but on the other hand it
 sometimes leads to type problems.
 For example, when having things like this:
 
 T min(T)(T a, T b) {
   return a < b ? a : b;
 }
 
 Here you would need to cast the values so they share a common type.
 
 How do you code? Do you use uint whenever it suitably reflects the data to
 store (e.g. an x-y position on the screen), or only when necessary?

I don't know the generic answer, but for an x-y position, using unsigned would be a bad idea: if you had a rectangle only partially visible on the screen, for example, you wouldn't be able to represent it. Regards, renoX
 
 TIA for your tips,
 Henning

Feb 07 2007
prev sibling next sibling parent reply orgoton <orgoton mindless.com> writes:
I always use unsigned variables when their values don't go below 0. I also
always use the smallest variable possible.

More often than not, I use "ubyte" instead of "int" in "for" loops. Signed
variables use the most significant bit to represent sign. I don't know if
there's any performance gain (even if marginal) when using mathematical
operations on unsigned variables.

About conditions, you can always force a variable to be unsigned by masking
away its most significant bit:

short var2;
(...)
unsigned=var2 && 0x3FFF; (0x3FFF is hexadecimal for 0111_1111_1111_1111)

but it would be simpler just to use "abs()" function to obtain the absolute
value.
Feb 07 2007
parent torhu <fake address.dude> writes:
orgoton wrote:
 I always use unsigned variables when their values don't go below 0. I also
always use the smallest variable possible.
 
 More often than not, I use "ubyte" instead of "int" in "for" loops. Signed
variables use the most significant bit to represent sign. I don't know if
there's any performance gain (even if marginal) when using mathematical
operations on unsigned variables.

Using the smallest variable type possible doesn't gain you anything. It's common to use int in most cases, or size_t for indices. Smaller types are generally used to save space for strings, structs that you have large arrays of, etc. int and other 32-bit types are generally the fastest type for a 32-bit cpu to work with. Except when copying arrays of data around, that's when it can be faster to use smaller types. Not for individual variables.
 About conditions, you can always force a variable to be unsigned by masking
away its most significant bit:
 
 short var2;
 (...)
 unsigned=var2 && 0x3FFF; (0x3FFF is hexadecimal for 0111_1111_1111_1111)

I think this is what you mean: ushort var3 = var2 & 0x7FFF; But this doesn't make the variable unsigned. Signed or unsigned is not a quality of the value, it's a matter of how a value is interpreted and treated by the compiler/cpu/library. Look at this: uint x = -1; If you print x, you will see that it's 4294967295, since -1 is 0xFFFFFFFF.
Feb 07 2007
prev sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Henning Hasemann wrote:
 I know this is a more general question, as it applies to C and C++ as well,
 but somewhere I have to ask, and D is what I'm actually coding in:
 
 Should one try to use uint in favor of int whenever one knows for sure the value
 won't be negative? That would be a bit more expressive, but on the other hand it
 sometimes leads to type problems.
 For example, when having things like this:
 
 T min(T)(T a, T b) {
   return a < b ? a : b;
 }
 
 Here you would need to cast the values so they share a common type.
 
 How do you code? Do you use uint whenever it suitably reflects the data to
 store (e.g. an x-y position on the screen), or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance with that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully. If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code. To answer your question: with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, under the new rules, ordering comparisons between mixed-sign types will be disallowed. Andrei
Feb 07 2007
next sibling parent Derek Parnell <derek nomail.afraid.org> writes:
On Wed, 07 Feb 2007 15:29:22 -0800, Andrei Alexandrescu (See Website For
Email) wrote:

 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not be 
 allowed implicitly.

Yes, please! This has been a wart for far too long. Can we please slow down the featuritis and return to cleaning up the product as a priority? -- Derek (skype: derek.j.parnell) Melbourne, Australia "Down with mediocrity!" 8/02/2007 10:38:39 AM
Feb 07 2007
prev sibling next sibling parent reply Bill Baxter <dnewsgroup billbaxter.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:

 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not be 
 allowed implicitly. Walter is willing to fix D in accordance to that 
 rule, which would yield an implicit conversion graph as shown in:
 
 http://erdani.org/d-implicit-conversions.pdf
 

I notice the graph doesn't include complex types. Is there any reason why float shouldn't be automatically converted to cfloat? --bb
Feb 07 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Thu, 08 Feb 2007 08:59:56 +0900, Bill Baxter wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 http://erdani.org/d-implicit-conversions.pdf


What is the justification for character types to be implicitly converted to integer types? For the same reason that arithmetic with booleans is meaningless, so are such operations on characters. int x = 'a' + 'z'; // Is meaningless. int y = cast(int)'a' + cast(int)'z'; // Is now purposeful. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 8/02/2007 11:09:42 AM
Feb 07 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Thu, 08 Feb 2007 08:59:56 +0900, Bill Baxter wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:
 http://erdani.org/d-implicit-conversions.pdf


What is the justification for character types to be implicitly converted to integer types? For the same reason that arithmetic with booleans is meaningless, so are such operations on characters. int x = 'a' + 'z'; // Is meaningless. int y = cast(int)'a' + cast(int)'z'; // Is now purposeful.

I agree. Those conversions are in there mostly for historical reasons. Andrei
Feb 07 2007
prev sibling next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not 
 be allowed implicitly. Walter is willing to fix D in accordance to 
 that rule, which would yield an implicit conversion graph as shown in:

 http://erdani.org/d-implicit-conversions.pdf

I notice the graph doesn't include complex types. Is there any reason why float shouldn't be automatically converted to cfloat?

Sharp eyes :o). I was simply too lazy to include complex types. Probably real-to-complex conversion should be allowed implicitly, too, as long as the basic principle of preserving value is respected. Andrei
Feb 07 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 I notice the graph doesn't include complex types.
 Is there any reason why float shouldn't be automatically converted to 
 cfloat?

Sharp eyes :o). I was simply too lazy to include complex types. Probably real-to-complex conversion should be allowed implicitly, too, as long as the basic principle of preserving value is respected.

Implicit conversions from floats to complex types were disallowed because they caused overloading problems with math functions. Separate functions for float and complex arguments are desirable.
Feb 12 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 I notice the graph doesn't include complex types.
 Is there any reason why float shouldn't be automatically converted to 
 cfloat?

Sharp eyes :o). I was simply too lazy to include complex types. Probably real-to-complex conversion should be allowed implicitly, too, as long as the basic principle of preserving value is respected.

Implicit conversions from floats to complex types were disallowed because they caused overloading problems with math functions. Separate functions for float and complex arguments are desirable.

So the way things should be is: all meaning-preserving integral promotions should be kept; then, all implicit integral->floating point promotions should be severed; then, all implicit floating point->complex should go. Right? Andrei
Feb 12 2007
parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 So the way things should be is: all meaning-preserving integral 
 promotions should be kept; then, all implicit integral->floating point 
 promotions should be severed; then, all implicit floating point->complex 
  should go.
 
 Right?

Yes. Also disallow implicit conversion of Object to void*.
Feb 12 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 So the way things should be is: all meaning-preserving integral 
 promotions should be kept; then, all implicit integral->floating point 
 promotions should be severed; then, all implicit floating 
 point->complex  should go.

 Right?

Yes. Also disallow implicit conversion of Object to void*.

How iz zis: http://erdani.org/d-implicit-conversions.pdf I put Object and void* in there for your sake :o). Did I forget something? Andrei
Feb 12 2007
next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
Email) wrote:

 
 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 13/02/2007 1:03:32 PM
Feb 12 2007
next sibling parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

http://www.digitalmars.com/d/type.html specifies they are "unsigned" and also the number of bits. Their default initial value is written in hex. This made me assume that they can be treated as numbers. Andrei
Feb 12 2007
parent Derek Parnell <derek nomail.afraid.org> writes:
On Mon, 12 Feb 2007 18:14:50 -0800, Andrei Alexandrescu (See Website For
Email) wrote:

 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

http://www.digitalmars.com/d/type.html specifies they are "unsigned" and also the number of bits. Their default initial value is written in hex. This made me assume that they can be treated as numbers.

I was trying not to confuse implementation with theory. D implements characters using unsigned integers, but as they are not semantically numbers, it makes no sense to do many numerical operations on characters, such as adding or multiplying them. To allow them to be /implicitly/ converted to integers may lead to subtle bugs, as the compiler will miss semantically incorrect usage. int add(int a, int b) { return a + b; } int d = add('a', 'z'); // Should not match signature, but does. -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 13/02/2007 1:49:57 PM
Feb 12 2007
prev sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.
Feb 12 2007
next sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:

 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.

Ionno. Probably instead of dealing with data as a stream/string of characters, you handle it as integers, and that's just one cast. Pascal didn't offer you that. How about the infamous automatic bool -> int conversion? Now that's a sucker that caused a ton of harm to C++. Andrei
Feb 12 2007
prev sibling next sibling parent reply James Dennett <jdennett acm.org> writes:
Walter Bright wrote:
 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:

 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.

I don't find that; for reasons we don't need to go into (except to say that I'm glad C++09 will have better Unicode support than C++03), I've been using a separate type for characters in a significant body of C++ code, and find very little need for casts. Certainly not enough to dispense with the advantages of type safety. When the code gets low level enough to need integral values, I don't mind doing the conversion manually as there will typically be a need to handle byte ordering issues or similar too. But that's just in the cases I've seen in the last {mumble} years. The examples you give are real, make up a tiny fraction of code that handles characters, and aren't, in my experience, significantly adversely affected by the elimination of these implicit conversions. -- James
Feb 12 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
James Dennett wrote:
 Walter Bright wrote:
 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:

 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?


That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.

I don't find that; for reasons we don't need to go into (except to say that I'm glad C++09 will have better Unicode support than C++03), I've been using a separate type for characters in a significant body of C++ code, and find very little need for casts. Certainly not enough to dispense with the advantages of type safety. When the code gets low level enough to need integral values, I don't mind doing the conversion manually as there will typically be a need to handle byte ordering issues or similar too. But that's just in the cases I've seen in the last {mumble} years. The examples you give are real, make up a tiny fraction of code that handles characters, and aren't, in my experience, significantly adversely affected by the elimination of these implicit conversions.

I agree. To add insult to injury, the inverse automated conversion would allow me to call toupper(a * b /c) without a cast in sight. What the hell is that needed for? Dammit. Andrei
Feb 12 2007
prev sibling next sibling parent reply Derek Parnell <derek nomail.afraid.org> writes:
On Mon, 12 Feb 2007 18:59:49 -0800, Walter Bright wrote:

 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?

Characters are not numbers.

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.

D has a neat property sub-system already. For example, it is used to get at the underlying implementation data for arrays. So why not call a spade a spade and stop helping bug-making. You have recently done this with implicit conversion from array pointers and arrays. If characters had a property called, for example, ".numval", then ugly casts would not be needed *and* such special character usage would be documented. In the examples above, (1) and (2) are really far too complex in the Unicode world to simply perform arithmetic on the implementation value of a specific character to get a result. They really need table look-ups or similar to do it well. As you know, not all strings are ASCII. Compression/encryption is best done using unsigned bytes, so I would cast the 'string' to that. And by doing so, it highlights to the code reader that something special is going on here. ubyte[] res = encrypt(cast(ubyte[]) stringdata ); Note the result of encryption/compression is most certainly not going to be a valid UTF string, so a ubyte[] would be a better choice. Finally, the fourth example lends itself to the .numval property very nicely ... ulong a = char_prop[ somechar.numval ]; If our aim is to make writing and reading D code as easy as possible, while also helping the coder to implement their algorithms correctly, then the compiler should at least highlight inappropriate implicit conversions such as ... return lowchar + 'A' - 'a'; If one really feels that they must do this, then at least let the code reader know that this is odd. return lowchar + 'A'.numval - 'a'.numval; -- Derek (skype: derek.j.parnell) Melbourne, Australia "Justice for David Hicks!" 13/02/2007 2:27:39 PM
Feb 12 2007
parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Derek Parnell wrote:
 On Mon, 12 Feb 2007 18:59:49 -0800, Walter Bright wrote:
 
 Derek Parnell wrote:
 On Mon, 12 Feb 2007 16:03:14 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:

 http://erdani.org/d-implicit-conversions.pdf

 Did I forget something?


That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case 3) doing compression/encryption code 4) using characters as indices (isspace() for example) Take away the implicit conversions, and such code gets littered with ugly casts.

D has a neat property sub-system already. For example, it is used to get at the underlying implementation data for arrays. So why not call spade a spade and stop helping bug-making. You have recently done this with implicit conversion from array pointers and arrays. If characters had a property called, for example, ".numval" then ugly casts would not be needed *and* such special character usage will be documented.

I was thinking pretty much the same. That (the ".numval" property) together with the idea that Joel Salomon presented (that we could still allow subtraction of characters without casts) would neatly solve any problems in disallowing the implicit conversion of chars to numbers. -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 13 2007
prev sibling parent reply "Joel C. Salomon" <JoelCSalomon Gmail.com> writes:
Walter Bright wrote:
 Derek Parnell wrote:
 Characters are not numbers. 

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case

Neither are pointers numbers, but &a - &b yields a usable number. So long as x += c - 'a' works, I don’t care if 'a' * '3' breaks.
 4) using characters as indices (isspace() for example)

Is there a way to declare the index type of an array? --Joel
Feb 12 2007
parent reply Frits van Bommel <fvbommel REMwOVExCAPSs.nl> writes:
Joel C. Salomon wrote:
 Walter Bright wrote:
 Derek Parnell wrote:
 Characters are not numbers. 

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case

Neither are pointers numbers, but &a - &b yields a usable number. So long as x += c - 'a' works, I don’t care if 'a' * '3' breaks.

Thank $DEITY. If nobody had made this point before I read all of this subthread, I would have had to write a long post explaining it. Let me reiterate that: characters are to integers as pointers are to integers. The difference between two pointers is an integer, and you can add integers to pointers. These are the only arithmetic operations allowed on pointers. The same should hold if you substitute 'character' for 'pointer' everywhere in the previous two sentences. Now, a short comment on each of the cases: Walter Bright wrote:
 Examples:

 1) converting text <=> integers

I don't see any reason why disallowing conversions from characters to integers would disallow one to add or subtract integers from characters. So (for c of type char/wchar/dchar) c - '0' can still be an integer, for example. But it makes absolutely no sense to be able to say c + '0'. Or c * '0'.
 2) converting case

As above, (c - 'A') + 'a' can still be allowed. (c - 'A') is an integer, add 'a' to get a character again.
 3) doing compression/encryption code

These should probably use void[] for input and ubyte[] for output.
 4) using characters as indices (isspace() for example)

If you're using Unicode this is a bad idea anyway. Except perhaps if you use a sparse associative array, and then this isn't a problem anyway. If you insist on using a regular array (and make sure the character value is suitably small) you don't necessarily have to use a cast, you can also subtract '\0' if you prefer. Back to Joel:
 4) using characters as indices (isspace() for example)

Is there a way to declare the index type of an array?

Only if you use an associative array.
Feb 13 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Frits van Bommel wrote:
 Joel C. Salomon wrote:
 Walter Bright wrote:
 Derek Parnell wrote:
 Characters are not numbers. 

That's an enticing point of view, and it sounds good. But Pascal has that view, and my experience with it is it's one of the reasons Pascal sucks. Examples: 1) converting text <=> integers 2) converting case

Neither are pointers numbers, but &a - &b yields a usable number. So long as x += c - 'a' works, I don’t care if 'a' * '3' breaks.

Thank $DEITY. If nobody made this point before I read all of this subthread I would have to write a long post explaining this. Let me reiterate that: characters are to integers as pointers are to integers. The difference between two pointers is an integer, and you can add integers to pointers. These are the only arithmetic operations allowed on pointers. The same should hold if you substitute 'character' for 'pointer' everywhere in previous two sentences. Now, a short comment for each of the cases: Walter Bright wrote: > Examples: > > 1) converting text <=> integers I don't see any reason why disallowing conversions from characters to integers would disallow one to add or subtract integers from characters. So (for c of type char/wchar/dchar) c - '0' can still be an integer, for example. But it makes absolutely no sense to be able to say c + '0'. Or c * '0'. > 2) converting case As above, (c - 'A') + 'a' can still be allowed. (c - 'A') is an integer, add 'a' to get a character again. > 3) doing compression/encryption code These should probably use void[] for input and ubyte[] for output. > 4) using characters as indices (isspace() for example) If you're using Unicode this is a bad idea anyway. Except perhaps if you use a sparce associative array, and then this isn't a problem anyway. If you insist on using a regular array (and make sure the character value is suitably small) you don't necessarily have to use a cast, you can also subtract '\0' if you prefer. Back to Joel:
 4) using characters as indices (isspace() for example)

Is there a way to declare the index type of an array?

Only if you use an associative array.

I think these are great ideas that could help us rethink the whole character handling business. Andrei
Feb 13 2007
prev sibling next sibling parent reply Walter Bright <newshound digitalmars.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 So the way things should be is: all meaning-preserving integral 
 promotions should be kept; then, all implicit integral->floating 
 point promotions should be severed; then, all implicit floating 
 point->complex  should go.

 Right?

Yes. Also disallow implicit conversion of Object to void*.

How iz zis: http://erdani.org/d-implicit-conversions.pdf I put Object and void* in there for your sake :o). Did I forget something?

ubyte => char, ushort => wchar, uint => dchar.
Feb 12 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 So the way things should be is: all meaning-preserving integral 
 promotions should be kept; then, all implicit integral->floating 
 point promotions should be severed; then, all implicit floating 
 point->complex  should go.

 Right?

Yes. Also disallow implicit conversion of Object to void*.

How iz zis: http://erdani.org/d-implicit-conversions.pdf I put Object and void* in there for your sake :o). Did I forget something?

ubyte => char, ushort => wchar, uint => dchar.

Is that both ways??? :oO Andrei
Feb 12 2007
prev sibling parent Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Walter Bright wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 So the way things should be is: all meaning-preserving integral 
 promotions should be kept; then, all implicit integral->floating 
 point promotions should be severed; then, all implicit floating 
 point->complex  should go.

 Right?

Yes. Also disallow implicit conversion of Object to void*.

How iz zis: http://erdani.org/d-implicit-conversions.pdf I put Object and void* in there for your sake :o). Did I forget something? Andrei

That chart looks nice. Perhaps it should be put in the official D doc (when it is finished)? -- Bruno Medeiros - MSc in CS/E student http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 13 2007
prev sibling parent "Rioshin an'Harthen" <rharth75 hotmail.com> writes:
"Walter Bright" <newshound digitalmars.com> wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Bill Baxter wrote:
 I notice the graph doesn't include complex types.
 Is there any reason why float shouldn't be automatically converted to 
 cfloat?

Sharp eyes :o). I was simply too lazy to include complex types. Probably real-to-complex conversion should be allowed implicitly, too, as long as the basic principle of preserving value is respected.

Implicit conversions from floats to complex types was disallowed because it caused overloading problems with math functions. Separate functions for float and complex functions are desirable.

True, but there has been a proposal for this, which would make it work correctly, and allow for the implicit conversions when needed. :) I refer you to http://www.digitalmars.com/d/archives/digitalmars/D/36360.html, from last spring.
Feb 15 2007
prev sibling parent reply don <nospam nospam.com> writes:
Bill Baxter Wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 
 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not be 
 allowed implicitly. Walter is willing to fix D in accordance to that 
 rule, which would yield an implicit conversion graph as shown in:
 
 http://erdani.org/d-implicit-conversions.pdf
 

I notice the graph doesn't include complex types. Is there any reason why float shouldn't be automatically converted to cfloat? --bb

Yes. It used to. It was removed at my request. The problem is that it introduces *two* directions that float can be converted in: float -> double -> real, and float -> cfloat. Suppose you have: void func(real) {} void func(creal) {} and then you type: func(3.1); What happens? 3.1 is a double, not a real, so there's no exact match. So the compiler has an ambiguous conversion, and the code won't compile. Consequence: under the old rules, if you provide both real and complex overloads for a function, you must provide float, double, and real versions. If the function has multiple arguments, you must provide all combinations. It's untenable. Note that the same thing happens if you had int -> double conversions: func(long) func(real) --> you must provide func(int) and func(short) variants. It would be OK if there were a rule that 'lengthening' conversions (char > wchar > dchar, byte > short > int > long, float > double > real, ...) were preferred over meaning-changing conversions (char > byte, wchar > ushort, int > double, ...), but that would require another level of matching in the lookup rules.
Feb 08 2007
parent Bill Baxter <dnewsgroup billbaxter.com> writes:
don wrote:
 Bill Baxter Wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:

 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not be 
 allowed implicitly. Walter is willing to fix D in accordance to that 
 rule, which would yield an implicit conversion graph as shown in:

 http://erdani.org/d-implicit-conversions.pdf

Is there any reason why float shouldn't be automatically converted to cfloat? --bb

Yes. It used to. It was removed at my request. The problem is, that it introduces *two* directions that float can be converted to. float -> double -> real and float -> cfloat. Suppose you have: void func(real) {} void func(creal){} and then you type: func(3.1); What happens? 3.1 is a double, not a real, so there's no exact match. So the compiler has an ambiguous conversion, and the code won't compile. Consequence: under the old rules, if you provide both real and complex overloads for a function, you must provide float, double, and real versions. If the function has multiple arguments, you must provide all combinations. It's untenable.

Hmm. Well, maybe in that case there should be a distinction made based on context. Because to me, this being an error is just silly:

   cfloat x = 1.0;

--bb
Feb 09 2007
prev sibling next sibling parent reply Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 
 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not be 
 allowed implicitly.

Exactly.
 Walter is willing to fix D in accordance to that
 rule, which would yield an implicit conversion graph as shown in:
 
 http://erdani.org/d-implicit-conversions.pdf
 
 Notice that there is no arrow e.g. between int and uint (loss of 
 meaning), or between int and float (loss of precision). But there is an 
 arrow from int and uint to double, because double is able to represent 
 them faithfully.
 
 If we are nice, we may convince Walter to implement that soon (maybe in 
 1.006?) but it must be understood that the tighter rules will prompt 
 changes in existing code.

That's fine with me. Many of us have been asking for this for quite a while.

Sean
Feb 07 2007
parent kris <foo bar.com> writes:
Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 
 Current D botches quite a few of the arithmetic conversions. Basically 
 all conversions that may lose value, meaning, or precision should not 
 be allowed implicitly.

Exactly.

 Walter is willing to fix D in accordance to that
 rule, which would yield an implicit conversion graph as shown in:

 http://erdani.org/d-implicit-conversions.pdf

 Notice that there is no arrow e.g. between int and uint (loss of 
 meaning), or between int and float (loss of precision). But there is 
 an arrow from int and uint to double, because double is able to 
 represent them faithfully.

 If we are nice, we may convince Walter to implement that soon (maybe 
 in 1.006?) but it must be understood that the tighter rules will 
 prompt changes in existing code.

That's fine with me. Many of us have been asking for this for quite a while.

Yeah, me too. Don't care if it means 1 change or 1,000 in my code ...
Feb 08 2007
prev sibling next sibling parent reply Bradley Smith <digitalmars-com baysmith.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for sure 
 the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance with that rule, which would yield an implicit conversion graph as shown in:

http://erdani.org/d-implicit-conversions.pdf

Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully.

If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code.

To answer your question, with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, by the new rules ordering comparisons between mixed-sign types will be disallowed.

Andrei

Does this mean that int would no longer implicitly convert to bool? For example, the following would no longer compile:

   int i = 1;
   if (i) {}

This would instead give an error something like "no implicit conversion from int to bool".

If this were the case, wouldn't it make the expression (a < b < c) illegal, as is being discussed in another thread? Since "a < b" would result in a bool, but bool < int is not a legal comparison.

Thanks,
Bradley
Feb 07 2007
next sibling parent Derek Parnell <derek nomail.afraid.org> writes:
On Wed, 07 Feb 2007 22:46:02 -0800, Bradley Smith wrote:

 Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for sure 
 the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully. If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code. To answer your question, with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, by the new rules ordering comparisons between mixed-sign types will be disallowed. Andrei

Does this mean that int would no longer implicitly convert to bool? For example, the following would not longer compile. int i = 1; if (i) {} This would instead give an error something like "no implicit conversion from int to bool".

I hope it would cause this error, unless the compiler treats this as shorthand for

   if (i != 0) {}
 If this were the case, wouldn't it make the expression (a < b < c) 
 illegal, as is being discussed in another thread. Since "a < b" would 
 result in a bool, but bool < int is not a legal comparison.

Not so, I think, because the bool would be converted to an int and then the comparison would take place.

-- 
Derek
(skype: derek.j.parnell)
Melbourne, Australia
"Justice for David Hicks!"
8/02/2007 5:52:03 PM
Feb 07 2007
prev sibling parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bradley Smith wrote:
 
 Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for 
 sure the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully. If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code. To answer your question, with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, by the new rules ordering comparisons between mixed-sign types will be disallowed. Andrei

Does this mean that int would no longer implicitly convert to bool? For example, the following would not longer compile. int i = 1; if (i) {} This would instead give an error something like "no implicit conversion from int to bool".

if (i) {} does not mean that i is converted to a bool and then tested. It's just a shortcut for if (i != 0) {}.

Andrei
Feb 07 2007
prev sibling next sibling parent Lionello Lunesu <lio lunesu.remove.com> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for sure 
 the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf

It's got my vote!

L.
Feb 08 2007
prev sibling next sibling parent reply Bruno Medeiros <brunodomedeiros+spam com.gmail> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for sure 
 the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf

By the way, do you think bool should be implicitly convertible to a numeric type? Such that this preciousness is allowed in D:

   if( true == 2 ) writeln("THEN");

and the "then" clause is not executed.

-- 
Bruno Medeiros - MSc in CS/E student
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
Feb 12 2007
parent "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Bruno Medeiros wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for 
 sure the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf

By the way, do you think bool should be implicitely convertible to a numeric type? Such that this preciousness is allowed in D: if( true == 2 ) writeln("THEN"); and the "then" clause is not executed.

I don't like it. I couldn't convince Walter.

Andrei
Feb 12 2007
prev sibling parent reply Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for sure 
 the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully. If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code. To answer your question, with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, by the new rules ordering comparisons between mixed-sign types will be disallowed.

When this change occurs (since it seems like it will) is there any chance that the default opEquals method in Object could have its signature changed from:

   int opEquals(Object o);

to:

   bool opEquals(Object o);

Not doing so would disallow the following seemingly legal statement:

   bool c = a == b;

This issue has come up before, and it was shown that the bool rval case can be made no less efficient than the int rval case, so the only remaining problem is all the code that would break. However, since a lot of code will probably break anyway with the tighter implicit conversion rules, perhaps it would be a good time to address this issue as well?

Sean
Feb 14 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Sean Kelly wrote:
 Andrei Alexandrescu (See Website For Email) wrote:
 Henning Hasemann wrote:
 I know this is a more general questions as it applies to C and C++ as 
 well,
 but somewhere I have to ask and actually D is what Im coding in:

 Should one try to use uint in favor of int whenever one knows for 
 sure the value
 wont be negative? That whould be a bit more expressive but on the 
 other hand
 sometimes leads to type problems.
 For example, when having things like this:

 T min(T)(T a, T b) {
   return a < b ? a : b;
 }

 Here you whould need to ensure to cast values so they share a common 
 type.

 How do you code? Do you use uint whenever it suitable reflects the 
 data to
 store (eg a x-y-position on the screen) or only when necessary?

Current D botches quite a few of the arithmetic conversions. Basically all conversions that may lose value, meaning, or precision should not be allowed implicitly. Walter is willing to fix D in accordance to that rule, which would yield an implicit conversion graph as shown in: http://erdani.org/d-implicit-conversions.pdf Notice that there is no arrow e.g. between int and uint (loss of meaning), or between int and float (loss of precision). But there is an arrow from int and uint to double, because double is able to represent them faithfully. If we are nice, we may convince Walter to implement that soon (maybe in 1.006?) but it must be understood that the tighter rules will prompt changes in existing code. To answer your question, with the new rules in hand, using unsigned types will considerably increase your expressiveness and your ability to detect bugs statically. Also, by the new rules ordering comparisons between mixed-sign types will be disallowed.

When this change occurs (since it seems like it will) is there any chance that the default opEquals method in Object could have its signature changed from: int opEquals(Object o); to: bool opEquals(Object o); Not doing so would disallow the following seemingly legal statement: bool c = a == b; This issue has come up before, and it was shown that the bool rval case can be made no less efficient than the int rval case, so the only remaining problem is all the code that would break. However, since a lot of code will probably break anyway with the tighter implicit conversion rules, perhaps it would be a good time to address this issue as well?

It's up to Walter. Also I haven't heard word about dropping the implicit bool-to-int conversion. What I think is reasonable:

   int x;
   if (x) {}  // should work: compare x against zero
   bool y;
   if (y) {}  // should work :o)
   y = x;     // should not work, use y = (x != 0);
   x = y;     // should not work, use x = cast(bool)y or x = y ? 1 : 0

The last rule seems overkill, but really it's rare that you want to convert a bool to an integer, and when you do, many people actually do use the ?: notation to clarify their intent.

Andrei
Feb 14 2007
parent reply Derek Parnell <derek psych.ward> writes:
On Wed, 14 Feb 2007 12:24:59 -0800, Andrei Alexandrescu (See Website For
Email) wrote:


 x = y;    // should not work, use x = cast(bool)y or x = y ? 1 : 0

Did you mean ...

   int x;
   bool y;
   x = y;    // should not work, use x = cast(int)y or x = y ? 1 : 0

I'm in full agreement with your bool/int rules, BTW.

-- 
Derek Parnell
Melbourne, Australia
"Justice for David Hicks!"
skype: derek.j.parnell
Feb 14 2007
parent reply "Andrei Alexandrescu (See Website For Email)" <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Wed, 14 Feb 2007 12:24:59 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:
 
 
 x = y;    // should not work, use x = cast(bool)y or x = y ? 1 : 0

Did you mean ... int x; bool y; x = y; // should not work, use x = cast(int)y or x = y ? 1 : 0 I'm in full agreement with your bool/int rules, BTW.

Yah, that's correct, thanks for the fix. Let's hear what Walter has to say :o).

In fact, I suggest that we all think of some compelling cases one way or another. We all know of the mess created by implicit bool->int in C++, so let's look for some "positive" example. On both sides.

Andrei
Feb 14 2007
parent Sean Kelly <sean f4.ca> writes:
Andrei Alexandrescu (See Website For Email) wrote:
 Derek Parnell wrote:
 On Wed, 14 Feb 2007 12:24:59 -0800, Andrei Alexandrescu (See Website For
 Email) wrote:


 x = y;    // should not work, use x = cast(bool)y or x = y ? 1 : 0

Did you mean ... int x; bool y; x = y; // should not work, use x = cast(int)y or x = y ? 1 : 0 I'm in full agreement with your bool/int rules, BTW.

Yah, that's correct, thanks for the fix. Let's hear what Walter has to say :o). In fact, I suggest that we all think of some compelling cases one way or another. We all know of the mess created by implicit bool->int in C++, so let's look for some "positive" example. On both sides.

The only one I can think of is using bool as a primitive constrained type. For example, BitArray can be initialized using an array of bools as:

   BitArray b;
   b.init( [1,0,1,0] );

With the conversion rules in place, this would have to be rewritten as:

   BitArray b;
   b.init( [true,false,true,false] );

Not too pretty :-) But bool is a logical type so it's really being misused here anyway. What we really want is:

   alias ${0,1} bit;
   class BitArray {
       void init( bit[] buf ) {}
   }

Or something like that. I've personally never had much of a need to convert bool->int and vice-versa. In the few instances where this has been necessary, it's easy enough to use:

   int i;
   bool b;
   b = i != 0;
   i = b ? 1 : 0;

Sean
Feb 14 2007