
digitalmars.D.learn - bit vs. bool?!

reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
I am totally lost here.  I must have missed something.

What is the difference between bit and bool?

Bool is obviously not part of the D spec.

This line appears in object.d:

alias bit bool;

I keep seeing people trying to correct others by insisting that they use 
"bool" over "bit."

What's the deal?

Is there something wrong with representing a true/false value with a bit? 
It makes perfect sense to me.  And if bool is an alias for bit...? 
May 09 2005
next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 9 May 2005 20:30:48 -0400, Jarrett Billingsley wrote:

 I am totally lost here.  I must have missed something.
 
 What is the difference between bit and bool?
 
 Bool is obviously not part of the D spec.
 
 This line appears in object.d:
 
 alias bit bool;
 
 I keep seeing people trying to correct others by insisting that they use 
 "bool" over "bit."
 
 What's the deal?
 
 Is there something wrong with representing a true/false value with a bit? 
 It makes perfect sense to me.  And if bool is an alias for bit...?
Sorry, I was being a bit 'naughty'. Conceptually, there is a difference. But Walter has chosen to *implement* the concept of _bool_ via the _bit_ data type. This has certain advantages from the compiler writer's point of view, but from a purist point of view it also allows a degree of silliness to happen (e.g. you can do arithmetic on truth values, such as "true * 3 + false").

In practice it probably doesn't matter all that much, as people are not generally going to do crazy things with truth values. However, from a readability point of view, if a coder uses bool when expressing truth values it more clearly explains their intentions. Using numeric types instead adds a slight fog over the coder's intentions.

So, just to help people read *and* understand what you are intending by your code, one should try to be consistent and use bool for truth concepts and bit for numeric (on-off) concepts. The "D" compiler doesn't care, but humans reading your code might.

It is similar to the idea that a string is a vector of characters and not really a vector of integers, even though we all know that characters are represented by specific sets of integers.

-- 
Derek
Melbourne, Australia
10/05/2005 10:30:09 AM
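[The "silliness" Derek describes can be seen directly; a minimal sketch, assuming a D compiler of this era, where bool is an alias for bit:

```
// Arithmetic on truth values compiles because bit implicitly
// widens to int: true is 1 and false is 0.
void main()
{
    int silly = true * 3 + false; // 1 * 3 + 0
    assert(silly == 3);
}
```

The compiler accepts this without complaint, which is exactly what a separate boolean type would reject.]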
May 09 2005
next sibling parent reply =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek Parnell wrote:

 Sorry, I was being a bit 'naughty'. 
A "bool" naughty, yes ? (sorry, sorry)
 So just to help people read *and* understand what you are intending by your
 code, one should try to be consistent and use bool for truth concepts and
 bit for numeric concepts (on-off) concepts. The "D" compiler doesn't care,
 but humans reading your code might.
I've been wondering: does this include the concepts "true" and "false" versus "0" and "1"? So would it be better to write, say, "assert(false);" instead?

Another thing is that "bit" needs casts to work in numeric contexts! I expected code like this to compile; I was using bit for a "sign":

   void main()
   {
      int i;
      bit sign;

      i = -1;
      sign = i >> 31;
   }

But it does not:

   "cannot implicitly convert expression (i >> 31) of type int to bit"

Since bit is a pseudo-boolean type, and not one of the integer ones.
 It is similar to the idea that a string is a vector of characters and not
 really a vector of integers, even though we all know that characters are
 represented by specific sets of integers.
Actually it's an array of code units, but that's another discussion...
(more on http://www.prowiki.org/wiki4d/wiki.cgi?CharsAndStrs)

--anders
May 09 2005
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 10 May 2005 08:34:54 +0200, Anders F Björklund wrote:

 Derek Parnell wrote:
 
 Sorry, I was being a bit 'naughty'. 
A "bool" naughty, yes ? (sorry, sorry)
LOL ... very good.
 So just to help people read *and* understand what you are intending by your
 code, one should try to be consistent and use bool for truth concepts and
 bit for numeric concepts (on-off) concepts. The "D" compiler doesn't care,
 but humans reading your code might.
I've been wondering: does this include the concepts "true" and "false" versus "0" and "1"? So would it be better to write, say, "assert(false);" instead?
I believe so. I feel that 'assert(0)' is just as meaningful (conceptually) as 'assert(42)', 'assert("qwerty")' or 'assert(3.1472)'.
 Another thing is that "bit" needs casts to work in numeric contexts!
 I expected code like this to compile, I was using bit for a "sign":
 
 void main()
 {
    int i;
    bit sign;
 
    i = -1;
    sign = i >> 31;
 }
 
 But it does not:
 "cannot implicitly convert expression (i >> 31) of type int to bit"
That is because your code is trying to convert an integer (-1 >> 31) to a bit, and Walter has decided that just isn't possible. You can't do an implicit conversion from double to integer either, for the same reason.

But you can implicitly convert bit to integer, just like you can convert integer to double. Thus you don't always need casts to do numeric work with bits:

   int i = true * 3 + false; // compiles with no casts.
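[Both directions can be sketched in one example, assuming the compiler of the time:

```
void main()
{
    bit b = true;
    int i = b + 2;           // fine: bit widens to int implicitly
    // b = i;                // error: cannot implicitly convert int to bit
    b = cast(bit)(i & 1);    // narrowing the other way needs an explicit cast
}
```
]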
 Since bit is a pseudo-boolean type, and not one of the integer ones.
True: not all integers are bits, but all bits are integers. The bit type is a subset of the integers, *but* with a couple of restrictions.
 It is similar to the idea that a string is a vector of characters and not
 really a vector of integers, even though we all know that characters are
 represented by specific sets of integers.
Actually it's an array of code units, but that's another discussion...
No, conceptually it's not. UTF8, UTF16, and UTF32 are vectors of code units, because these are explicit encodings (implementations) of strings. I'm talking about *strings* as a concept, not how they may or may not have been implemented by any specific programming language.
 (more on http://www.prowiki.org/wiki4d/wiki.cgi?CharsAndStrs)
-- 
Derek Parnell
Melbourne, Australia
10/05/2005 6:52:11 PM
May 10 2005
parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Derek Parnell wrote:

"cannot implicitly convert expression (i >> 31) of type int to bit"
That is because your code is trying to convert an integer (-1 >> 31) to a bit, and Walter has decided that just isn't possible. You can't do an implicit conversion from double to integer either, for the same reason.
It worked OK when I inserted a cast, but it was still surprising.
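[The thread doesn't show the exact cast Anders used, but presumably it was something along these lines (a sketch, assuming the era's compiler):

```
void main()
{
    int i = -1;
    bit sign;
    // >>> is D's unsigned right shift, so i >>> 31 yields 0 or 1,
    // which can then be narrowed with an explicit cast.
    sign = cast(bit)(i >>> 31);
    assert(sign == true);
}
```
]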
Actually it's an array of code units, but that's another discussion...
No, conceptually it's not. UTF8, UTF16, and UTF32 are vectors of code units, because these are explicit encodings (implementations) of strings. I'm talking about *strings* as a concept, not how they may or may not have been implemented by any specific programming language.
Okay, then it's exactly the same as with D's booleans - I agree. --anders
May 10 2005
prev sibling parent reply "Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"Derek Parnell" <derek psych.ward> wrote in message 
news:9mqpaqqe6nrg.1ik5gcaa5j0yv$.dlg 40tude.net...
 But Walter has chosen to *implement* the concept of _bool_ via the _bit_
 data type. This has certain advantages from the compiler writer's point of
 view but from a purist point of view it also allows a degree of silliness
 to happen. (e.g. You can do arithmetic on truth values such as "true * 3 +
 false" )
Okay, I can see what you mean.

I don't know, I've just never thought of bits as anything other than a true/false value... I've never really considered them as "numbers," as they can't do much besides represent a boolean value.

I suppose the ability to use bits in numeric expressions is so you can write obfuscated code, i.e.

   int x = (5 < y) * 7;

;)
May 10 2005
parent Hasan Aljudy <hasan.aljudy gmail.com> writes:
Jarrett Billingsley wrote:
 "Derek Parnell" <derek psych.ward> wrote in message 
 news:9mqpaqqe6nrg.1ik5gcaa5j0yv$.dlg 40tude.net...
 
But Walter has chosen to *implement* the concept of _bool_ via the _bit_
data type. This has certain advantages from the compiler writer's point of
view but from a purist point of view it also allows a degree of silliness
to happen. (e.g. You can do arithmetic on truth values such as "true * 3 +
false" )
Okay, I can see what you mean.

I don't know, I've just never thought of bits as anything other than a true/false value... I've never really considered them as "numbers," as they can't do much besides represent a boolean value.

I suppose the ability to use bits in numeric expressions is so you can write obfuscated code, i.e.

   int x = (5 < y) * 7;

;)
I think you do bit arithmetic a lot in assembly, or at least your CPU does it all the time.
May 22 2005
prev sibling parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Jarrett Billingsley wrote:

 I am totally lost here.  I must have missed something.
 
 What is the difference between bit and bool?
Uh-oh... "We haven't had a fight in a while" :-) http://www.prowiki.org/wiki4d/wiki.cgi?BooleanNotEquBit
 Bool is obviously not part of the D spec.
 
 This line appears in object.d:
 
 alias bit bool;
Yes, and true and false are of type const bit...
(but implemented in the compiler instead of the lib)

   assert(typeid(typeof(true)) == typeid(bit));
   assert(typeid(typeof(false)) == typeid(bit));

That's how it's being done in the *current* D spec.
 I keep seeing people trying to correct others by insisting that they use 
 "bool" over "bit."
 
 What's the deal?
Some people are used to having a boolean type in their language, and have some trouble going back... so they keep using the bool alias, and hoping for change?
 Is there something wrong with representing a true/false value with a bit? 
 It makes perfect sense to me.  And if bool is an alias for bit...? 
Not to nit-pick, but true and false are represented with "1" and "0". They just happen to usually be stored in a bit. But sometimes in an int?

More on http://www.prowiki.org/wiki4d/wiki.cgi?BitsAndBools

--anders
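[Anders' point that true and false *are* the values 1 and 0 can be checked directly (a minimal sketch, again assuming the era's compiler):

```
void main()
{
    // true and false compare equal to their numeric representations
    assert(true == 1);
    assert(false == 0);
}
```
]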
May 09 2005