
digitalmars.D.announce - DIP 1015--Deprecation of Implicit Conversion of Int. & Char. Literals

reply Mike Parker <aldacron gmail.com> writes:
DIP 1015, "Deprecation and removal of implicit conversion from 
integer and character literals to bool", has been rejected, 
primarily on the grounds that it is factually incorrect in 
treating bool as a type distinct from other integral types.

The TL;DR is that the DIP is trying to change behavior that is 
working as intended.

 From Example A in the DIP:

     bool b = 1;

This works because bool is a "small integral" with a range of 
0..1. The current behavior is consistent with all other integrals.
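The value-range behavior can be checked directly. A minimal sketch (the out-of-range lines are commented out because they are compile errors):

```d
void main()
{
    bool b0 = 0;      // OK: 0 is within bool's range of 0..1
    bool b1 = 1;      // OK: 1 is within bool's range of 0..1
    // bool b2 = 2;   // Error: 2 does not fit in 0..1

    ubyte u = 255;    // the same value-range rule as other small integrals
    // ubyte v = 256; // Error: 256 does not fit in 0..255
    assert(b1 && u == 255);
}
```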

 From Example B in the DIP:

```
int f(bool b) { return 1; }
int f(int i) { return 2; }

enum E : int
{
     a = 0,
     b = 1,
     c = 2,
}
```

Here, f(a) and f(b) call the bool overload, while f(c) calls the 
int version. This works because D selects the overload with the 
tightest conversion. This behavior is consistent across all 
integral types. Replace bool with ubyte and f(a), f(b) would both 
call the ubyte version. The same holds for the DIP's Example C.
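The ubyte variant described above can be sketched like this (assuming the tightest-conversion rule as stated; with ubyte, the value 2 also fits the tighter type, so all three members select the ubyte overload):

```d
int f(ubyte b) { return 1; }
int f(int i)   { return 2; }

enum E : int
{
    a = 0,
    b = 1,
    c = 2,
}

void main()
{
    assert(f(E.a) == 1); // tightest conversion: 0 fits ubyte
    assert(f(E.b) == 1); // 1 fits ubyte
    assert(f(E.c) == 1); // 2 fits ubyte too, unlike bool
}
```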

Walter and Andrei left the door open to change the overload 
behavior for *all* integral types, with the caveat that it's a 
huge hurdle for such a DIP to be accepted. It would need a 
compelling argument.

You can read a few more details in the summary I appended to the 
DIP:

https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1015.md#formal-assessment

Thanks to Mike Franklin for sticking with the process to the end.
Nov 12
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Monday, November 12, 2018 2:45:14 AM MST Mike Parker via
Digitalmars-d-announce wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from
 integer and character literals to bool, has been rejected,
 primarily on the grounds that it is factually incorrect in
 treating bool as a type distinct from other integral types.
*sigh* Well, I guess that's the core issue right there. A lot of us would strongly disagree with the idea that bool is an integral type and consider code that treats it as such as inviting bugs. We _want_ bool to be considered as being completely distinct from integer types. The fact that you can ever pass 0 or 1 to a function that accepts bool without a cast is a problem in and of itself.

But it doesn't really surprise me that Walter doesn't agree on that point, since he's never agreed on that point, though I was hoping that this DIP was convincing enough, and its failure is certainly disappointing.

- Jonathan M Davis
Nov 12
next sibling parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis 
wrote:
 I was hoping that this DIP was convincing enough, and its 
 failure is certainly disappointing.
Indeed.
Nov 12
prev sibling next sibling parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis 
wrote:
 On Monday, November 12, 2018 2:45:14 AM MST Mike Parker via 
 Digitalmars-d-announce wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from 
 integer and character literals to bool, has been rejected, 
 primarily on the grounds that it is factually incorrect in 
 treating bool as a type distinct from other integral types.
*sigh* Well, I guess that's the core issue right there. A lot of us would strongly disagree with the idea that bool is an integral type and consider code that treats it as such as inviting bugs. We _want_ bool to be considered as being completely distinct from integer types. The fact that you can ever pass 0 or 1 to a function that accepts bool without a cast is a problem in and of itself. But it doesn't really surprise me that Walter doesn't agree on that point, since he's never agreed on that point, though I was hoping that this DIP was convincing enough, and its failure is certainly disappointing. - Jonathan M Davis
The issue that I see is unintended implicit conversion when passing values to functions that have both int and bool overloads. If we have a way of indicating that implicit conversions are not allowed when passing values to functions, then the issues that the DIP brought up are resolved. - Alex
Nov 12
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2018 8:28 AM, 12345swordy wrote:
 The issue that I see is unintended implicit conversation when passing values
to 
 functions that have both int and bool overloads.
The exact same thing happens when there are both int and short overloads.

The underlying issue is: is bool a one bit integer type, or something special? D defines it as a one bit integer type, fitting it into the other integer types using exactly the same rules.

If it is to be a special type with special rules, what about the other integer types? D has a lot of basic types :-)
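A minimal sketch of the int/short analogue of Example B (per the same tightest-conversion rule; every member's value fits a short, so the short overload wins for all of them):

```d
int g(short s) { return 1; }
int g(int i)   { return 2; }

enum E : int { a = 0, b = 1, c = 2 }

void main()
{
    assert(g(E.a) == 1); // 0 fits short: tightest conversion
    assert(g(E.b) == 1);
    assert(g(E.c) == 1);
}
```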
Nov 12
next sibling parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Monday, 12 November 2018 at 21:38:27 UTC, Walter Bright wrote:
 On 11/12/2018 8:28 AM, 12345swordy wrote:
 The issue that I see is unintended implicit conversation when 
 passing values to functions that have both int and bool 
 overloads.
The exact same thing happens when there are both int and short overloads. The underlying issue is is bool a one bit integer type, or something special? D defines it as a one bit integer type, fitting it into the other integer types using exactly the same rules. If it is to be a special type with special rules, what about the other integer types? D has a lot of basic types :-)
Ok, you don't want to introduce special rules for integers, and that's understandable. However, there needs to be a tool for the programmer to prevent unwanted implicit conversions when it comes to other users passing values to their public overload functions (unless there is already a zero-cost abstraction that we are not aware of). -Alex
Nov 12
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Nov 13, 2018 at 02:12:30AM +0000, 12345swordy via
Digitalmars-d-announce wrote:
 On Monday, 12 November 2018 at 21:38:27 UTC, Walter Bright wrote:
[...]
 The underlying issue is is bool a one bit integer type, or something
 special? D defines it as a one bit integer type, fitting it into the
 other integer types using exactly the same rules.
 
 If it is to be a special type with special rules, what about the
 other integer types? D has a lot of basic types :-)
Ok, you don't want to introduce special rules for integers, and that understandable. However there needs be a tool for the programmer to prevent unwanted implicit conversation when it comes to other users passing values to their public overload functions.(Unless there is already a zero cost abstraction that we are not aware of).
[...]

This discussion makes me want to create a custom bool type that does not allow implicit conversion. Something like:

```
struct Boolean
{
    private bool impl;

    static Boolean True = Boolean(1);
    static Boolean False = Boolean(0);

    // For if(Boolean b)
    bool opCast(T : bool)() { return impl; }

    ...
}
```

Unfortunately, it wouldn't quite work because there's no way for built-in comparisons to convert to Boolean instead of bool. So you'd have to manually surround everything with Boolean(...), which is a severe usability handicap.

T

--
People tell me that I'm skeptical, but I don't believe them.
Nov 12
prev sibling next sibling parent reply NoMoreBugs <NoMoreBugs outlook.com> writes:
On Monday, 12 November 2018 at 21:38:27 UTC, Walter Bright wrote:
 On 11/12/2018 8:28 AM, 12345swordy wrote:
 The issue that I see is unintended implicit conversation when 
 passing values to functions that have both int and bool 
 overloads.
The exact same thing happens when there are both int and short overloads. The underlying issue is is bool a one bit integer type, or something special? D defines it as a one bit integer type, fitting it into the other integer types using exactly the same rules. If it is to be a special type with special rules, what about the other integer types? D has a lot of basic types :-)
You nailed it on the head.

The only sensible course of action, therefore, is to give programmers the option to disable implicit conversions, completely (or if doable, more precisely).

And while you're thinking about how to do that, can you also please think about how to give the programmer the option to enforce privacy on variables/methods within a module.

Programmers just want to be able to write code that is more likely to be correct, than not, and have the compiler catch it when it's not.

You want to attract such programmers, or not?
Nov 12
parent NoMoreBugs <NoMoreBugs outlook.com> writes:
On Tuesday, 13 November 2018 at 07:13:01 UTC, NoMoreBugs wrote:
 You nailed it on the head.

 The only sensible course of action, therefore, is to give 
 programmers the option to disable implicit conversions, 
 completely (or if doable, more precisely).

 And while you're thinking about how to do that, can you also 
 please think about how to give the programmer the option to 
 enforce privacy on variable/method within a module.

 Programmers just want to be able to write code that is more 
 likely to be correct, than not, and have the compiler catch it 
 when it's not.

 You want to attract such programmers, or not?
ok... not such a great idea (the compiler switch), after I thought more about that idea.

but with some sort of annotation, the programmer's intent could become *much* clearer (allowing the programmer - and those reading it - to better reason about the correctness of the code):

```
bool(this) b;   // compiler not allowed to do implicit conversions when assigning to b
double(this) d; // compiler not allowed to do implicit conversions when assigning to d

class C // (or struct)
{
    private(this) password;
    // compiler will not allow other code outside of this type,
    // (but in the same module) to directly access password.
}
```

i.e. (this) is just an annotation, for saying: "I own this type, and the type needs to stay the way it's defined - no exceptions".

Too much? The language could not handle it? The programmer would never want it?
Nov 13
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/12/18 4:38 PM, Walter Bright wrote:
 On 11/12/2018 8:28 AM, 12345swordy wrote:
 The issue that I see is unintended implicit conversation when passing 
 values to functions that have both int and bool overloads.
The exact same thing happens when there are both int and short overloads. The underlying issue is is bool a one bit integer type, or something special? D defines it as a one bit integer type, fitting it into the other integer types using exactly the same rules.
D's definition is wanting. Most integer types act differently than bool:

1. Integer types can be incremented, bool cannot.
2. Integer types truncate by removing the extraneous bits, bool truncates to `true` for all values except 0.
3. Integer types have signed and unsigned variants, bool does not.
4. Integer types allow negation, bool does not.
5. Integer types can be used in a foreach(x; v1 .. v2), bool cannot.

It is true that bools act similarly to a 1-bit integer type in many cases, but only via promotion. That is, they *convert* to 1-bit integers, but don't behave like integers in their own type.

Regarding enums with base types, I admit I would totally expect an enum based on int to match an int overload over a short overload. You don't think this is confusing to an average developer?

```
import std.stdio;

void foo(int x) { writeln("integer"); }
void foo(short x) { writeln("short"); }

enum A : int
{
    a = 1,
    b = 2,
    c = 3
}

void main()
{
    auto a = A.a;
    foo(A.a); // case 1
    foo(a);   // case 2
}
```

case 1 prints short, but case 2 prints integer. Both are passed the same value. This comes into play when using compile-time generation -- you are expecting the same behavior when using the same values. This is super-confusing.

But on the other hand, an A can ONLY be 3 or less, so why doesn't case 2 print short? If VRP is used here, it seems lacking.

Maybe the biggest gripe here is that enums don't prefer their base types over what their base types convert to. In the developer's mind, the conversion is:

A => int => (via VRP) short

which seems more complex than just

A => int
 If it is to be a special type with special rules, what about the other 
 integer types? D has a lot of basic types :-)
The added value of having bool implicitly cast to integer types is great. I wouldn't want to eliminate that. The other way around seems of almost no value, except maybe to avoid extra code in the compiler.

So, I would be fine to have bool be a non-integer type that implicitly casts to integer for use in math or other reasons. But having 1 or 0 implicitly cast to true or false has little value, and especially using it as "just another 1-bit integer", which it really isn't, has almost no usage.

-Steve
Nov 13
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Tue, 13 Nov 2018 09:46:17 -0500, Steven Schveighoffer wrote:
 Maybe the biggest gripe here is that enums don't prefer their base types
 over what their base types convert to. In the developer's mind, the
 conversion is:
 
 A => int => (via VRP) short
 
 which seems more complex than just
 
 A => int
It affects explicit casts too:

```
void foo(short a) { writefln("short %s", a); }
void foo(int a) { writefln("int %s", a); }

foo(cast(int)0); // prints: short 0
```

In order to force the compiler to choose a particular overload, you either need to assign to a variable or use a struct with alias this.

C++, Java, and C# all default to int, even for bare literals that fit into bytes or shorts, and let you use casts to select overloads. C++ has some weird stuff where an enum that doesn't fit into an int is an equal match for all integer types:

```
void foo(unsigned long long);
void foo(short);
enum A : unsigned long long { a = 2 };
foo(a); // ambiguous!
```

But if you just have an unsigned long long that's not in an enum, it only matches the unsigned long long overload.

In C#, if you define multiple implicit casts from a type that match multiple overloads, the compiler prefers the smallest matching type, and it prefers signed over unsigned types. However, for this situation to come up at all, you need to define implicit conversions for multiple numeric types, so it's not directly comparable.

Anyway, VRP overload selection hit me yesterday (accepts-invalid sort): I was calling a function `init_color(short, short, short, short)` with a bunch of things that I explicitly casted to int. Tried wrapping it in a function and I discovered the compiler had implicitly casted int to short. Not the end of the world, but I thought a cast would set the type of the expression (instead of just, in this case, truncating floating point numbers).
Nov 13
parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Tuesday, 13 November 2018 at 17:50:20 UTC, Neia Neutuladh 
wrote:
 On Tue, 13 Nov 2018 09:46:17 -0500, Steven Schveighoffer wrote:
 Maybe the biggest gripe here is that enums don't prefer their 
 base types over what their base types convert to. In the 
 developer's mind, the conversion is:
 
 A => int => (via VRP) short
 
 which seems more complex than just
 
 A => int
It affects explicit casts too: void foo(short a) { writefln("short %s", a); } void foo(int a) { writefln("int %s", a); } foo(cast(int)0); // prints: short 0
Ok, now that has got to be a bug. If you explicitly cast the number to an integer then you expect the overload function with int to be called. -Alex
Nov 13
parent Neia Neutuladh <neia ikeran.org> writes:
On Tue, 13 Nov 2018 17:53:27 +0000, 12345swordy wrote:
 Ok, now that has got to be a bug. If you explicit cast the number to an
 integer then you expect the overload function with int to be called.
 
 -Alex
...my mistake, I can't reproduce that anymore. Pretend I didn't say anything.
Nov 13
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2018 2:05 AM, Jonathan M Davis wrote:
 *sigh* Well, I guess that's the core issue right there. A lot of us would
 strongly disagree with the idea that bool is an integral type and consider
 code that treats it as such as inviting bugs.
In my college daze I was learning programming alongside designing and building digital circuits, and later software for FPGAs and PLDs (ABEL). The notions of True, T, 1, !0 (from C and Asm), and +5V are all completely interchangeable in my mind. I once worked with software that defined true as 0 and false as 1, and it was like being in London where all your intuition about cars is wrong. (I'd look to the left when stepping into the street, just to get nearly smacked by a car coming from the right.)
Nov 12
next sibling parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Monday, 12 November 2018 at 21:29:20 UTC, Walter Bright wrote:

 I once worked with software that defined true as 0 and false as 
 1
OK, I've got to know what language you were using at the time, because I am curious what other oddities it has. -Alex
Nov 12
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2018 1:39 PM, 12345swordy wrote:
 OK, I got to know what language you were using at the time, because I am
curious 
 at what other oddities does it have.
I wish I could remember what it was. It was like 40 years ago :-)
Nov 12
prev sibling parent Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Monday, 12 November 2018 at 21:29:20 UTC, Walter Bright wrote:
 *snip*

 In my college daze I was learning programming alongside 
 designing and building digital circuits, and later software for 
 FPGAs and PLDs (ABEL). The notions of True, T, 1, !0 (from C 
 and Asm), and +5V are all completely interchangeable in my mind.

 *snip*
Well if we're talking about code smell: I would regard any code that expects true to be the same as 1 a code smell (I've interacted with C++ code that uses integers instead of bools and it's annoying and hard to read).

There's a reason I use

```
const foo = boolValue ? otherValue : 0;
```

and not

```
const foo = otherValue * boolValue;
```

because it shows _intent_.

I think the rest of us would like to hear an actual justification for bool being an "integer" type.
Nov 12
prev sibling next sibling parent NoMoreBugs <NoMoreBugs outlook.com> writes:
On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis 
wrote:
 *sigh* Well, I guess that's the core issue right there. A lot 
 of us would strongly disagree with the idea that bool is an 
 integral type and consider code that treats it as such as 
 inviting bugs. We _want_ bool to be considered as being 
 completely distinct from integer types. The fact that you can 
 ever pass 0 or 1 to a function that accepts bool without a cast 
 is a problem in and of itself. But it doesn't really surprise 
 me that Walter doesn't agree on that point, since he's never 
 agreed on that point, though I was hoping that this DIP was 
 convincing enough, and its failure is certainly disappointing.

 - Jonathan M Davis
Well, I think the DIP was too narrow in its thinking - by restricting itself to bool. There is a bigger picture, which is more important.

Fact 1 - Implicit conversions are nothing more than a weakening of type safety.

Fact 2 - A weakening of type safety can (and often does) contribute to bugs.

If anyone wants to dispute facts 1 and 2, please go ahead.

Ideally, a 'modern' programming language would have addressed these two facts already (i.e. Rust). Unfortunately, D is very much tied to its C/C++ heritage, so 'modernizing' can be painful. D can still modernize though, without breaking backward compatibility, by providing 'an option' for the programmer to explicitly declare their desire for greater type safety - and not just with bools.

Fact 3 - Everyone will benefit from greater type safety (disputable - at least for those that prefer convenience over correctness).

There is just no reason that I can see why any modern programming language should allow my bool to be implicitly converted to a char, int, short, byte, long, double, float... and god knows what else... and certainly not without some warning.

Additionally, it really troubles me to see a programming language wanting to strut itself on the world's stage that can (and worse, just will) do things like that - no warning, no option to prevent it.
Nov 13
prev sibling parent reply Carl Sturtivant <sturtivant gmail.com> writes:
On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis 
wrote:
 *sigh* Well, I guess that's the core issue right there. A lot 
 of us would strongly disagree with the idea that bool is an 
 integral type and consider code that treats it as such as 
 inviting bugs. We _want_ bool to be considered as being 
 completely distinct from integer types. The fact that you can 
 ever pass 0 or 1 to a function that accepts bool without a cast 
 is a problem in and of itself. But it doesn't really surprise 
 me that Walter doesn't agree on that point, since he's never 
 agreed on that point, though I was hoping that this DIP was 
 convincing enough, and its failure is certainly disappointing.
I'm at a loss to see any significant advantage to having bool as a part of the language itself if it isn't deliberately isolated from `integral types`.
Nov 14
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Nov 14, 2018 at 06:59:30PM +0000, Carl Sturtivant via
Digitalmars-d-announce wrote:
 On Monday, 12 November 2018 at 10:05:09 UTC, Jonathan M Davis wrote:
 *sigh* Well, I guess that's the core issue right there. A lot of us
 would strongly disagree with the idea that bool is an integral type
 and consider code that treats it as such as inviting bugs. We _want_
 bool to be considered as being completely distinct from integer
 types. The fact that you can ever pass 0 or 1 to a function that
 accepts bool without a cast is a problem in and of itself.
+1.

Honestly, I think 'bool' as understood by Walter & Andrei ought to be renamed to 'bit', i.e., a numerical, rather than logical, value. Of course, that still doesn't address the conceptually awkward behaviour of && and || returning a numerical value rather than a logical true/false state.

The crux of the issue is whether we look at it from an implementation POV, or from a conceptual POV. Since there's a trivial 1-to-1 mapping from a logical true/false state to a binary digit, it's tempting to conflate the two, but they are actually two different things. It just so happens that in D, a true/false state is *implemented* as a binary value of 0 or 1. Hence, if you think of it from an implementation POV, it sort of makes sense to treat it as a numerical entity, since after all, at the implementation level it's just a binary digit, a numerical entity.

However, if you look at it from a conceptual POV, the mapping true=>1, false=>0 is an arbitrary one, and nothing about the truth values true/false entails an ability to operate on them as numerical values, much less promotion to multi-bit binary numbers like int. I argue that viewing it from an implementation POV is a leaky abstraction, whereas enforcing the distinction of bool from integral types is more encapsulated -- because it hides away the implementation detail that a truth value is implemented as a binary digit.

It's a similar situation with char vs. ubyte: if we look at it from an implementation point of view, there is no need for the existence of char at all, since at the implementation level it's not any different from a ubyte. But clearly, it is useful to distinguish between them, since otherwise why would Walter & Andrei have introduced distinct types for them in the first place? The usefulness is that we can define char to be a UTF-8 code unit, with a different .init value, and this distinction lets the compiler catch potentially incorrect usages of the types in user code.
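The char/ubyte distinction mentioned here is observable through the differing default initializers; a small illustration:

```d
void main()
{
    char c;   // char.init is 0xFF, an invalid UTF-8 code unit,
              // which helps flag uninitialized character data
    ubyte u;  // ubyte.init is 0, like the other integer types
    assert(c == 0xFF);
    assert(u == 0);
}
```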
(Unfortunately, even here there's a fly in the ointment that char also implicitly converts to int -- again you see the symptoms of viewing things from an implementation POV, and the trouble that results, such as the wrong overload being invoked when you pass a char literal that no-thanks to VRP magically becomes an integral value.)
 But it doesn't really surprise me that Walter doesn't agree on that
 point, since he's never agreed on that point, though I was hoping
 that this DIP was convincing enough, and its failure is certainly
 disappointing.
I am also disappointed. One of the reasons I like D so much is its powerful abstraction mechanisms, and the ability of user types to behave (almost) like built-in types. This conflation of bool with its implementation as a binary digit seems to be antithetical to abstraction and encapsulation, and frankly does not leave a good taste in the mouth.

(Though I will concede that it's a minor enough point that it wouldn't be grounds for deserting D. But still, it does leave a bad taste in the mouth.)
 I'm at a loss to see any significant advantage to having bool as a
 part of the language itself if it isn't deliberately isolated from
 `integral types`.
Same thing with implicit conversion to/from char types and integral types. I understand the historical / legacy reasons behind both cases, but I have to say it's rather disappointing from a modern programming language design point of view. T -- Written on the window of a clothing store: No shirt, no shoes, no service.
Nov 14
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2018-11-12 10:45, Mike Parker wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from integer
 and character literals to bool, has been rejected, primarily on the
 grounds that it is factually incorrect in treating bool as a type
 distinct from other integral types.

 The TL;DR is that the DIP is trying to change behavior that is working
 as intended.

  From Example A in the DIP:

      bool b = 1;

 This works because bool is a "small integral" with a range of 0..1. The
 current behavior is consistent with all other integrals.

  From Example B in the DIP:

 ```
 int f(bool b) { return 1; }
 int f(int i) { return 2; }

 enum E : int
 {
      a = 0,
      b = 1,
      c = 2,
 }
 ```

 Here, f(a) and f(b) call the bool overload, while f(c) calls the int
 version. This works because D selects the overload with the tightest
 conversion.
Why is that? Is it because "a" and "b" are enum members? I mean, "E" is typed as an int. If I pass an integer literal directly to the function it doesn't behave like that. Example:

```
import std.stdio;

void foo(int a) { writeln("int"); }
void foo(bool a) { writeln("bool"); }

void main()
{
    foo(0);
    foo(1);
    foo(2);
}
```

The above example prints "int" three times. The bool overload is not called. Seems like the enum is treated specially.

--
/Jacob Carlborg
Nov 12
prev sibling next sibling parent Kagamin <spam here.lot> writes:
That's strange, I thought polysemous literals prefer default 
type, not tightest type.
---
auto b=1;
static assert(is(typeof(b)==bool));
---
Error: static assert:  is(int == bool) is false
Nov 12
prev sibling next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 The TL;DR is that the DIP is trying to change behavior that is 
 working as intended.
I thought the whole point of a DIP is to change behavior that is working as intended. Otherwise, we have a bug fix rather than a language change.
Nov 12
parent reply M.M. <matus email.cz> writes:
On Monday, 12 November 2018 at 15:03:08 UTC, Adam D. Ruppe wrote:
 On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 The TL;DR is that the DIP is trying to change behavior that is 
 working as intended.
I thought the whole point of a DIP is to change behavior that is working as intended. Otherwise, we have a bug fix rather than a language change.
+1
Nov 12
parent reply Mike Parker <aldacron gmail.com> writes:
On Monday, 12 November 2018 at 15:15:17 UTC, M.M. wrote:
 On Monday, 12 November 2018 at 15:03:08 UTC, Adam D. Ruppe 
 wrote:
 On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 The TL;DR is that the DIP is trying to change behavior that 
 is working as intended.
I thought the whole point of a DIP is to change behavior that is working as intended. Otherwise, we have a bug fix rather than a language change.
+1
Let's not get hung up on my apparently poor choice of words for an informal summary in a newsgroup post. The more formal summary I appended to the DIP is closer to what they actually said. The DIP starts from the assumption that bool should be a distinct type from integrals, a point of view that is not uncommon. Walter and Andrei take the position that this is the wrong way to view a bool.
Nov 12
parent reply Johannes Loher <johannes.loher fg4f.de> writes:
On Monday, 12 November 2018 at 16:39:47 UTC, Mike Parker wrote:
 Walter and Andrei take the position that this is the wrong way 
 to view a bool.
Unfortunately you did not include their justification for this position (if any). To me it would be interesting to know about the reasoning that is behind this position.
Nov 12
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Monday, 12 November 2018 at 17:25:15 UTC, Johannes Loher wrote:
 On Monday, 12 November 2018 at 16:39:47 UTC, Mike Parker wrote:
 Walter and Andrei take the position that this is the wrong way 
 to view a bool.
Unfortunately you did not include their justification for this position (if any). To me it would be interesting to know about the reasoning that is behind this position.
Everything I know is in the summary at the bottom of the DIP.
Nov 12
prev sibling parent reply Joakim <dlang joakim.fea.st> writes:
On Monday, 12 November 2018 at 17:25:15 UTC, Johannes Loher wrote:
 On Monday, 12 November 2018 at 16:39:47 UTC, Mike Parker wrote:
 Walter and Andrei take the position that this is the wrong way 
 to view a bool.
Unfortunately you did not include their justification for this position (if any). To me it would be interesting to know about the reasoning that is behind this position.
Maybe you didn't read the link to their reasoning in the DIP, but it's quite simple: they view a bool as an integral type with two possible values, a `bit` if you like. As such, they prefer to fit it into the existing scheme for integral types rather than special-casing booleans as Mike proposed.
Nov 12
parent reply Bastiaan Veelo <Bastiaan Veelo.net> writes:
On Monday, 12 November 2018 at 17:49:55 UTC, Joakim wrote:
[…]
 it's quite simple: they view a bool as an integral type with 
 two possible values, a `bit` if you like. As such, they prefer 
 to fit it into the existing scheme for integral types rather 
 than special-casing booleans as Mike proposed.
I can’t say I have a strong opinion on this, but possibly it would be right to have an integral “bit” type to differentiate it from the Boolean type, just like we have a “byte” type to differentiate it from “char”...
Nov 12
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 12 November 2018 at 18:25:22 UTC, Bastiaan Veelo wrote:
 I can’t say I have a strong opinion on this, but possibly it 
 would be right to have an integral “bit” type to differentiate 
 it from the Boolean type, just like we have a “byte” type to 
 differentiate it from “char”...
D used to have a `bit` type, waaaay back in the day. It was renamed to `bool` way back in D 0.148, released Feb 25, 2006. And BTW D 0.149 includes this under the bugs fixed section: "Implicit casts of non-bool to bool disallowed". What happened since then?
Nov 12
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2018 11:28 AM, Adam D. Ruppe wrote:
 D used to have a `bit` type, waaaay back in the day. It was renamed to `bool` 
 way back in D 0.148, released Feb 25, 2006.
D's old bit type was not a bool. It literally was a single bit, and an array of bits was packed into an int by the compiler. It was abandoned because it caused more or less ugly problems with the type system, i.e. a <pointer to bit> required a fat pointer to represent it. A bit type is far better done as a library type.
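Phobos does provide such a library type; a minimal sketch with std.bitmanip.BitArray:

```d
import std.bitmanip : BitArray;

void main()
{
    BitArray bits;
    bits.length = 8;  // bits are packed into machine words internally
    bits[3] = true;
    assert(bits[3]);
    assert(!bits[0]);
}
```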
Nov 12
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/12/18 4:45 AM, Mike Parker wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from integer 
 and character literals to bool, has been rejected, primarily on the 
 grounds that it is factually incorrect in treating bool as a type 
 distinct from other integral types.
 
 The TL;DR is that the DIP is trying to change behavior that is working 
 as intended.
 
  From Example A in the DIP:
 
      bool b = 1;
 
 This works because bool is a "small integral" with a range of 0..1. The 
 current behavior is consistent with all other integrals.
But it's not consistent:

void isItAnInteger(T, bool makeCompile = false)() // works for all integers
{
    T val = T.min; // I was surprised these work for bool
    val = T.max;
    static if(!makeCompile)
    {
        long x = -val; // Error: operation not allowed on bool b
        ++val;         // Error: operation not allowed on bool val += 1
        val += 1;      // same error
    }
    val = cast(T)(T.max + 1);
    assert(val == val.min); // error for bool, true + 1 == 2, but cast(bool)2
                            // truncates to true, not false.
}

void main()
{
    import std.meta;
    static foreach(T; AliasSeq!(int, uint, long, ulong,
                                short, ushort, byte, ubyte))
    {
        isItAnInteger!T();
    }
    // switch second parameter to false to see compiler errors.
    isItAnInteger!(bool, true)();
}

If you have the makeCompile flag set to true, then it asserts for bool, but nothing else.

-Steve
Nov 12
parent Neia Neutuladh <neia ikeran.org> writes:
On Mon, 12 Nov 2018 14:10:42 -0500, Steven Schveighoffer wrote:
 But it's not consistent:
And std.traits.isIntegral has not considered bools integral since its initial creation in 2007. Both Walter and Andrei have mucked about with that code and saw no reason to change it, even in wild and lawless days without deprecation cycles or DIPs. Andrei added a doc comment to explicitly note that bools and character types aren't considered integral back in 2009.
Nov 12
prev sibling next sibling parent reply Neia Neutuladh <neia ikeran.org> writes:
On Mon, 12 Nov 2018 09:45:14 +0000, Mike Parker wrote:
  From Example B in the DIP:
 
 ```
 int f(bool b) { return 1; }
 int f(int i) { return 2; }
 
 enum E : int {
      a = 0,
      b = 1,
      c = 2,
 }
 ```
 
 Here, f(a) and f(b) call the bool overload, while f(c) calls the int
 version. This works because D selects the overload with the tightest
 conversion. This behavior is consistent across all integral types.
enum : int { a = 0 }
enum A : int { a = 0 }

f(a);   // calls the int overload
f(A.a); // calls the bool overload

Tell me more about this "consistency".
Nov 12
next sibling parent Neia Neutuladh <neia ikeran.org> writes:
On Mon, 12 Nov 2018 20:34:11 +0000, Neia Neutuladh wrote:
 enum : int { a = 0 }
 enum A : int { a = 0 }
 f(a);   // calls the int overload f(A.a); // calls the bool overload
 
 Tell me more about this "consistency".
Filed issue 19394. (Sorry for spam.)
Nov 12
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/12/2018 12:34 PM, Neia Neutuladh wrote:
 Tell me more about this "consistency".
int f(short s) { return 1; }
int f(int i)   { return 2; }

enum : int { a = 0 }
enum A : int { a = 0 }

pragma (msg, f(a));   // calls f(int)
pragma (msg, f(A.a)); // calls f(short)

I.e. it's consistent. Here's how it works:

f(a): `a` is a manifest constant of type `int`, and `int` is an exact match for f(int), while f(short) requires an implicit conversion. The exact match of f(int) is better.

f(A.a): `a` is an enum of type `A`. `A` gets implicitly converted to `int`. The `int` then gets an exact match to f(int), and an implicit match to f(short). The sequence of conversions is folded into one according to:

    <implicit conversion> <exact>               => <implicit conversion>
    <implicit conversion> <implicit conversion> => <implicit conversion>

Both f(int) and f(short) match, because implicit conversions rank the same. To disambiguate, f(short) is pitted against f(int) using partial ordering rules, which are:

    Can a short be used to call f(int)? Yes.
    Can an int be used to call f(short)? No.

So f(short) is selected, because the "Most Specialized" function is selected when there is an ambiguous match.

Note: the "most specialized" partial ordering rules are independent of the arguments being passed.

---

One could have <implicit conversion><exact> be treated as "better than" <implicit conversion><implicit conversion>, and it sounds like a good idea, but even C++, not known for simplicity, tried that and had to abandon it because nobody could figure it out once the code got beyond trivial examples.
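The pragma output above can also be checked at run time; a small self-contained sketch of the behavior Walter describes (the enum member is renamed to `b` here just to keep the two `a`s apart):

```d
int f(short s) { return 1; }
int f(int i)   { return 2; }

enum : int { a = 0 }   // manifest constant of type int
enum A : int { b = 0 } // member of enum type A

void main()
{
    assert(f(a) == 2);   // exact match on f(int) wins
    assert(f(A.b) == 1); // both overloads match; partial ordering
                         // picks the more specialized f(short)
}
```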
Nov 12
next sibling parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
 *snip*
 Both f(int) and f(short) match, because implicit conversions 
 rank the same.
 To disambiguate, f(short) is pitted against f(int) using 
 partial ordering rules,
 which are:

    Can a short be used to call f(int)? Yes.
    Can an int be used to call f(short)? No.

 So f(short) is selected, because the "Most Specialized" 
 function is selected
 when there is an ambiguous match.
 Note: the "most specialized" partial ordering rules are 
 independent of the arguments being passed.
Well frankly that's bad design. If I declare Foo as an int enum, I (and any _reasonable_ programmer) would expect Foo to prefer the int overload. Think of it this way: each enum can be thought of like this:

immutable struct EnumName(T)
{
    T value;
    alias value this;
}

So, if a programmer declares an enum of type int (EnumName!int), the alias-this will convert the enum to an int. Thus, the programmer expects the value to implicitly convert to an int, which is a _direct_ implicit conversion (i.e. it is weighed heavier than just an implicit conversion).

Regardless of what you believe, it is inconsistent behavior to the programmer (i.e. the person you should be considering as a language designer). If this horrid design choice stays, this _will_ go down as a mistaken "feature" of the language that everyone has to account for, otherwise it bites them (example: C++ converting implicitly by default instead of requiring an `implicit` attribute).

If you really want this plague in the language, at least make it not affect those that gave their enum a type. If you at least do that, someone can add it to DScanner to tell anyone that doesn't type their enum to expect illogical behavior.
Nov 12
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Tue, 13 Nov 2018 00:08:04 +0000, Isaac S. wrote:
 If you really want this plaque in the language, at least make it not
 affect those that gave their enum a type. If you at least do that,
 someone can add it to DScanner to tell anyone that doesn't type their
 enum to expect illogical behavior.
Unfortunately, dscanner only parses code. It can't tell you that your overload resolution depends on value range propagation on an enum value; that depends on semantic analysis. So it would have to aggressively warn you against using `enum Foo : some_int_type`.
Nov 12
parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Tuesday, 13 November 2018 at 00:21:25 UTC, Neia Neutuladh 
wrote:
 On Tue, 13 Nov 2018 00:08:04 +0000, Isaac S. wrote:
 If you really want this plaque in the language, at least make 
 it not affect those that gave their enum a type. If you at 
 least do that, someone can add it to DScanner to tell anyone 
 that doesn't type their enum to expect illogical behavior.
Unfortunately, dscanner only parses code. It can't tell you that your overload resolution depends on value range propagation on an enum value; that depends on semantic analysis. So it would have to aggressively warn you against using `enum Foo : some_int_type`.
Sorry if it wasn't clear, I meant that if `enum Foo : some_int_type` makes it so some_int_type is preferred (because it's a more direct conversion) DScanner could warn anyone that just does `enum Foo`.
Nov 12
parent Neia Neutuladh <neia ikeran.org> writes:
On Tue, 13 Nov 2018 00:28:46 +0000, Isaac S. wrote:
 Sorry if it wasn't clear, I meant that if `enum Foo : some_int_type`
 makes it so some_int_type is preferred (because it's a more direct
 conversion) DScanner could warn anyone that just does `enum Foo`.
Sorry, I read too hastily and thought you meant relative to the status quo.
Nov 12
prev sibling next sibling parent Neia Neutuladh <neia ikeran.org> writes:
On Mon, 12 Nov 2018 14:07:39 -0800, Walter Bright wrote:
      <implicit conversion> <exact>               => <implicit
      conversion>
      <implicit conversion> <implicit conversion> => <implicit
      conversion>
One confusion is from value range propagation / constant folding reaching past the static type information to yield a different result from what static typing alone would suggest. My intuition was that the compiler should prefer the declared type of the symbol when it's got a symbol with a declared type. Like, A implicitly converts to int, and int doesn't implicitly convert to short, so an expression of type A shouldn't implicitly convert to short. And this is *generally* true, but when the compiler can use constant folding to get a literal value out of the expression, it does things I don't expect. And this doesn't happen with structs with alias this, but I can't tell if that's an oversight or not, and there's no doubt some nuanced explanation of how things work, and it probably solves some edge cases to have it work differently...
Nov 12
prev sibling next sibling parent reply aliak <something something.com> writes:
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
 On 11/12/2018 12:34 PM, Neia Neutuladh wrote:
 Tell me more about this "consistency".
int f(short s) { return 1; }
int f(int i)   { return 2; }

enum : int { a = 0 }
enum A : int { a = 0 }

pragma (msg, f(a));   // calls f(int)
pragma (msg, f(A.a)); // calls f(short)

I.e. it's consistent. Here's how it works:

f(a): `a` is a manifest constant of type `int`, and `int` is an exact match for f(int), and f(short) requires an implicit conversion. The exact match of f(int) is better.

f(A.a): `a` is an enum of type `A`. `A` gets implicitly converted to `int`. The `int` then gets exact match to f(int), and an implicit match to f(short). The sequence of conversions is folded into one according to:

    <implicit conversion> <exact>               => <implicit conversion>
    <implicit conversion> <implicit conversion> => <implicit conversion>
Doesn't the above miss a step, and wouldn't it be:

1) A.a => <implicit-convert-to-int><exact-match-on-f(int)>
2) A.a => <implicit-convert-to-int><implicit-convert-to-short><exact-match-on-f(short)>

So basically for the f(short) path you have 3 steps instead of 2 for the f(int) path.

So does it matter how many implicit conversions need to happen before D stops trying? Or is it basically convert as long as you can? Does D actually do a "find the shortest path via implicit conversions to an overload" algorithm?
 One could have <implicit conversion><exact> be treated as 
 "better than" <implicit conversion><implicit conversion>, and 
 it sounds like a good idea, but even C++, not known for 
 simplicity, tried that and had to abandon it as nobody could 
 figure it out once the code examples got beyond trivial 
 examples.
Interesting. This seems simpler intuitively (shorter path, pick it), so I'm wondering if there're any links you can point to that describe what these problems were? Cheers, - Ali
Nov 13
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 12:23 AM, aliak wrote:
 Doesn't the above miss a step, and wouldn't it be:
 
 1) A.a => <implicit-convert-to-int><exact-match-on-f(int)>
 2) A.a => 
 <implicit-convert-to-int><implicit-convert-to-short><exact-match-on-f(short)>
 
 So basically for the f(short) path you have 3 steps instead of 2 for the
f(int) 
 path.
 
 So does it matter how many implicit conversions need to happen before D stops 
 trying? Or is it basically convert as long as you can? Does D actually do a 
 "find the shortest path via implicit conversions to an overload" algorithm?
It is not a shortest path algorithm. It's simply the enum is converted to the base type and the base type is matched against the parameter type.
 One could have <implicit conversion><exact> be treated as "better than" 
 <implicit conversion><implicit conversion>, and it sounds like a good idea, 
 but even C++, not known for simplicity, tried that and had to abandon it as 
 nobody could figure it out once the code examples got beyond trivial examples.
Interesting. This seems simpler intuitively (shorter path, pick it), so I'm wondering if there're any links you can point to that describe what these problems were?
No, I simply remember the discussions about it in the early 90's. Yes, it seems to intuitively make sense, but if you look at real C++ code and try to figure it out, it's a nightmare. There can also be multiple paths of conversions, and loops in those paths. There's a quadratic problem when there are multiple parameters.
Nov 13
parent reply aliak <something something.com> writes:
On Tuesday, 13 November 2018 at 09:17:51 UTC, Walter Bright wrote:
 On 11/13/2018 12:23 AM, aliak wrote:
 Doesn't the above miss a step, and wouldn't it be:
 
 1) A.a => <implicit-convert-to-int><exact-match-on-f(int)>
 2) A.a => 
 <implicit-convert-to-int><implicit-convert-to-short><exact-match-on-f(short)>
 
 So basically for the f(short) path you have 3 steps instead of 
 2 for the f(int) path.
 
 So does it matter how many implicit conversions need to happen 
 before D stops trying? Or is it basically convert as long as 
 you can? Does D actually do a "find the shortest path via 
 implicit conversions to an overload" algorithm?
It is not a shortest path algorithm. It's simply the enum is converted to the base type and the base type is matched against the parameter type.
Ok, thanks!
 One could have <implicit conversion><exact> be treated as 
 "better than" <implicit conversion><implicit conversion>, and 
 it sounds like a good idea, but even C++, not known for 
 simplicity, tried that and had to abandon it as nobody could 
 figure it out once the code examples got beyond trivial 
 examples.
Interesting. This seems simpler intuitively (shorter path, pick it), so I'm wondering if there're any links you can point to that describe what these problems were?
No, I simply remember the discussions about it in the early 90's. Yes, it seems to intuitively make sense, but if you look at real C++ code and try to figure it out, it's a nightmare. There can also be multiple paths of conversions, and loops in those paths. There's a quadratic problem when there are multiple parameters.
Bummer. At least if this enum : int case is fixed, that doesn't seem like it's hard to work out in my head at least - but I guess I'm missing some edge case maybe, but I can't figure it out. Plus, it seems to work as "expected" with alias this. So I kinda wonder what reasons there could be to not make it work as expected for other scenarios.

struct B
{
    enum A : int { a }
    alias b = A.a;
    alias b this;
}

void f(short) {}
void f(int) {}

f(B()); // does what anyone would expect
Nov 13
parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 14 November 2018 at 06:56:12 UTC, aliak wrote:
 On Tuesday, 13 November 2018 at 09:17:51 UTC, Walter Bright 
 wrote:
 [...]
Ok, thanks!
 [...]
Bummer. At least if this enum : int case is fixed, that doesn't seem like it's hard to work out in my head at least - but I guess I'm missing some edge case maybe, but I can't figure it out. Plus, it seems to work as "expected" with alias this. So I kinda wonder what reasons there could be to not make it work as expected for other scenarios.

struct B
{
    enum A : int { a }
    alias b = A.a;
    alias b this;
}

void f(short) {}
void f(int) {}

f(B()); // does what anyone would expect
Hahaha! That is hilarious! for the curious https://run.dlang.io/is/fqlllS
Nov 13
prev sibling next sibling parent reply Rubn <where is.this> writes:
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
 On 11/12/2018 12:34 PM, Neia Neutuladh wrote:
 Tell me more about this "consistency".
int f(short s) { return 1; } int f(int i) { return 2; } enum : int { a = 0 } enum A : int { a = 0 } pragma (msg, f(a)); // calls f(int) pragma (msg, f(A.a)); // calls f(short) I.e. it's consistent. Here's how it works: f(a): `a` is a manifest constant of type `int`, and `int` is an exact match for f(int), and f(short) requires an implicit conversion. The exact match of f(int) is better. f(A.a): `a` is an enum of type `A`. `A` gets implicitly converted to `int`. The `int` then gets exact match to f(int), and an implicit match to f(short). The sequence of conversions is folded into one according to: <implicit conversion> <exact> => <implicit conversion> <implicit conversion> <implicit conversion> => <implicit conversion> Both f(int) and f(short) match, because implicit conversions rank the same. To disambiguate, f(short) is pitted against f(int) using partial ordering rules, which are: Can a short be used to call f(int)? Yes. Can an int be used to call f(short)? No. So f(short) is selected, because the "Most Specialized" function is selected when there is an ambiguous match. Note: the "most specialized" partial ordering rules are independent of the arguments being passed. --- One could have <implicit conversion><exact> be treated as "better than" <implicit conversion><implicit conversion>, and it sounds like a good idea, but even C++, not known for simplicity, tried that and had to abandon it as nobody could figure it out once the code examples got beyond trivial examples.
This just seems like a bug to me. Any sane human being would expect all these functions to output the same thing. But it entirely depends on how you use it.

import std.stdio;

void foo(byte v) { writeln("byte ", v); }
void foo(int v)  { writeln("int ", v); }

enum : int { a = 127 }
enum A : int { a = 127 }

void main()
{
    A v = A.a;
    foo(A.a); // byte 127 < These two are probably the best showcase of what's wrong
    foo(v);   // int 127  < same value being passed with same type but different result
    foo(a);   // int 127
    foo(127); // int 127
}

https://run.dlang.io/is/aARCDo
Nov 13
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 3:29 PM, Rubn wrote:
 enum : int { a = 127 }
To reiterate, this does not create an anonymous enum type. 'a' is typed as 'int'. Technically, `a` is a manifest constant of type `int` with a value of `127`.
 enum A : int { a = 127 }
`a` is a manifest constant of type `A` with a value of `127`. Remember that `A` is not an `int`. It is implicitly convertible to an integer type that its value will fit in (Value Range Propagation). Other languages do not have VRP, so expectations from how those languages behave do not apply to D. VRP is a nice feature; it is why:

enum s = 100; // typed as int
enum t = 300; // also typed as int

ubyte u = s + 50; // works, no cast required,
                  // although the type is implicitly converted
ubyte v = t + 50; // fails

In your articles, it is crucial to understand the difference between a manifest constant of type `int` and one of type `A`.
Nov 13
next sibling parent Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Wednesday, 14 November 2018 at 02:45:38 UTC, Walter Bright 
wrote:
 In your articles, it is crucial to understand the difference 
 between a manifest constant of type `int` and one of type `A`.
Still doesn't change the fact that a typed enum should convert to its own type first (rather than blindly using the literal).
Nov 13
prev sibling next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2018-11-14 03:45, Walter Bright wrote:
 On 11/13/2018 3:29 PM, Rubn wrote:
 enum : int { a = 127 }
To reiterate, this does not create an anonymous enum type. 'a' is typed as 'int'. Technically, `a` is a manifest constant of type `int` with a value of `127`.

 enum A : int { a = 127 }

`a` is a manifest constant of type `A` with a value of `127`. Remember that `A` is not an `int`.
What is ": int" doing, only specifying the size? -- /Jacob Carlborg
Nov 14
parent Neia Neutuladh <neia ikeran.org> writes:
On Wed, 14 Nov 2018 12:09:33 +0100, Jacob Carlborg wrote:
 What is ": int" doing, only specifying the size?
It specifies the type to match for overloading when the compiler isn't required by the language to constant-fold the value.
Nov 14
prev sibling parent Rubn <where is.this> writes:
On Wednesday, 14 November 2018 at 02:45:38 UTC, Walter Bright 
wrote:
 On 11/13/2018 3:29 PM, Rubn wrote:
 enum A : int { a = 127 }
`a` is a manifest constant of type `A` with a value of `127`. Remember that `A` is not an `int`. It is implicitly convertible to an integer type that its value will fit in (Value Range Propagation). Other languages do not have VRP, so expectations from how those languages behave do not apply to D. VRP is a nice feature, it is why: enum s = 100; // typed as int enum t = 300; // also typed as int ubyte u = s + 50; // works, no cast required, // although the type is implicitly converted ubyte v = t + 50; // fails In your articles, it is crucial to understand the difference between a manifest constant of type `int` and one of type `A`.
At least can you understand where the problem lies? If you have code like this:

foo(Enum.value);

Then it gets changed:

// oops, might be calling a different function now
foo(runtimeCond ? Enum.value : Enum.otherValue);

Or how about if we just add another enum to our list:

enum Enum : int {
    // ...
    // add new enum here, shifting the values down
    value,      // 126 -> 127
    otherValue, // 127 -> 128 - oops, now we are calling a different function ~somewhere~
    // ...
}

From your implementation perspective I can see why it is a good thing. But from my user's perspective this just screams unreliable chaotic mess, even in the most trivial examples. What D does is only suitable for the absolute most trivial example:

enum int s = 100;
ubyte v = s; // ok, no cast required

But even just a slightly less trivial example like we have now, and it falls apart:

enum int s = 100;

void foo(int);
void foo(byte);

foo(s); // Not suitable for determining overloads,
        // though is good for variable initialization

No one's really asking to add another layer to anything. Merely to not treat named enum types as if they are just constants like anonymous enums.

ubyte a = Enum.value; // this is ok
foo(Enum.value);      // this needs to be x1000 more reliable
Nov 14
prev sibling next sibling parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
 int f(short s) { return 1; }
 int f(int i) { return 2; }

 enum : int { a = 0 }
 enum A : int { a = 0 }

 pragma (msg, f(a));   // calls f(int)
 pragma (msg, f(A.a)); // calls f(short)

 *snip*

 So f(short) is selected, because the "Most Specialized" 
 function is selected when there is an ambiguous match.

 Note: the "most specialized" partial ordering rules are 
 independent of the arguments being passed.
Walter, this still doesn't change the fact that any _reasonable_ programmer would expect foo(A.a) to, in a way, convert to foo(int(0)), because that keeps the type information rather than ignoring it completely and just putting the literal value in as if it were foo(0). Honestly, while I (and most others in the community) wanted DIP 1015, I'm not going to sweat it (even given the illogical reason for refusing it), but marking issue 10560 as "correct behavior" is asinine and ignorant.
Nov 13
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 3:50 PM, Isaac S. wrote:
 is asinine and ignorant.
Some friendly advice - nobody is going to pay serious attention to articles that sum up with such unprofessional statements. Continuing the practice will just result in the moderators removing them.
Nov 13
next sibling parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Wednesday, 14 November 2018 at 03:02:48 UTC, Walter Bright 
wrote:
 On 11/13/2018 3:50 PM, Isaac S. wrote:
 is asinine and ignorant.
Some friendly advice - nobody is going to pay serious attention to articles that sum up with such unprofessional statements. Continuing the practice will just result in the moderators removing them.
I'm sorry that it is unkind, but I came to D because I found it to be an extremely well-designed language. Seeing something like 10560 be declared as "correct" is really disheartening because it's _obviously_ a design flaw (and should thus be fixed). Regardless of my unprofessional attitude (I do apologize; I normally try to be professional, but something like this is really irritating): why should an enum not convert to its declared type, rather than blindly using its literal value? Just using the literal value discards the secondary type information the programmer had given it.
Nov 13
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 7:12 PM, Isaac S. wrote:
 why should an enum 
 not convert to its declared type, rather than blindly using its literal value. 
 Just using the literal value discards the secondary-type information the 
 programmer had given it.
D has the following match levels:

    1. exact
    2. conversion to const
    3. implicit conversion
    4. no match

C++, on the other hand, has a long list of match levels, which nobody remembers, and yet it still causes problems (see Scott Meyers).

The conversion of `A` to `int` already drops it to match level 3, from which it will not rise. I.e. the second level being an exact match with `int` does not help.

The further disambiguation between multiple matching functions is done using partial ordering. This has NOTHING to do with the arguments. It just looks at:

    f(int)
    f(short)

and picks f(short) because it is more specialized.

This partial ordering is what C++ does with template functions. It is simpler and more robust than the older, more primitive "match level" system C++ uses for non-template functions. I suspect that if C++ were to do a "do-over" with function overloading, it would use partial ordering instead of match levels. Interestingly, the match level and partial ordering methods almost always produce the same results.

There have been various attempts over the years to "fix" various things in the D matching system by adding "just one more" match level. I've rejected all of them, because things that look simple and obvious with trivial examples tend to sink in a swamp with the dirty reality of the rather vast number of types and conversions that D supports. This happens in C++, and what people tend to do is just throw up their hands and hackishly add in more overloads until they get the result they want.
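Levels 1 and 2 can be seen side by side with a const conversion; a minimal sketch of my own, assuming current DMD behavior:

```d
int g(int[] a)        { return 1; } // exact match for int[]
int g(const(int)[] a) { return 2; } // requires conversion to const

void main()
{
    int[] m = [1, 2, 3];
    assert(g(m) == 1); // level 1 (exact) beats level 2 (conversion to const)

    const(int)[] c = m;
    assert(g(c) == 2); // only the const overload matches at all
}
```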
Nov 13
next sibling parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Wednesday, 14 November 2018 at 04:27:05 UTC, Walter Bright 
wrote:
 There have been various attempts over the years to "fix" 
 various things in the D matching system by adding "just one 
 more" match level. I've rejected all of them, because things 
 that look simple and obvious with trivial examples tend to sink 
 in a swamp with the dirty reality of the rather vast number of 
 types and conversions that D supports. This happens in C++, and 
 what people tend to do is just throw up their hands and 
 hackishly add in more overloads until they get the result they 
 want.
The thing is, this isn't a new match level. Rather than the enum implicitly converting to its literal (I'm hoping I'm using the correct word here), I'm proposing it implicitly convert to its typed literal (instead of A.a implicitly converting to 0, it converts to int(0)). This would mean it would match the int overload since it's a direct match.
Nov 13
parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 14 November 2018 at 04:33:23 UTC, Isaac S. wrote:
 On Wednesday, 14 November 2018 at 04:27:05 UTC, Walter Bright 
 wrote:
 There have been various attempts over the years to "fix" 
 various things in the D matching system by adding "just one 
 more" match level. I've rejected all of them, because things 
 that look simple and obvious with trivial examples tend to 
 sink in a swamp with the dirty reality of the rather vast 
 number of types and conversions that D supports. This happens 
 in C++, and what people tend to do is just throw up their 
 hands and hackishly add in more overloads until they get the 
 result they want.
The thing is, this isn't a new match level. Rather than the enum implicitly casting to its literal (I'm hoping I'm using the correct word here) I'm proposing it implicitly cast to its typed literal (Instead of A.a implicitly converting to 0, it converts to int(0)). This would mean it would match the int since its a direct match.
The water is already somewhat murky here; the magic enums `__c_long` & friends already do some of this, but arrays of them don't (which I'm going to fix in https://github.com/dlang/dmd/pull/8950 as it's needed to make __c_wchar_t actually useful). Extending this to all enums would probably do the trick.
 enum implicitly casting to its literal
memory type is what the compiler calls it.
Nov 13
prev sibling parent reply Neia Neutuladh <neia ikeran.org> writes:
On Tue, 13 Nov 2018 20:27:05 -0800, Walter Bright wrote:
 There have been various attempts over the years to "fix" various things
 in the D matching system by adding "just one more" match level.
I kind of feel like, if something would be confusing like this, maybe the compiler shouldn't be making an automatic decision. Not "just one more" match level, but just...don't match. If there are multiple matching overloads, just error out. Don't try to be clever and surprise people, just tell the user to be more explicit.
Nov 14
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/14/18 1:11 PM, Neia Neutuladh wrote:
 On Tue, 13 Nov 2018 20:27:05 -0800, Walter Bright wrote:
 There have been various attempts over the years to "fix" various things
 in the D matching system by adding "just one more" match level.
I kind of feel like, if something would be confusing like this, maybe the compiler shouldn't be making an automatic decision. Not "just one more" match level, but just...don't match. If there are multiple matching overloads, just error out. Don't try to be clever and surprise people, just tell the user to be more explicit.
You don't think this is confusing?

enum A : int {
    val
}

A a;
foo(a); // error: be more specific
int x = a;
foo(x); // Sure

-Steve
Nov 14
parent reply Neia Neutuladh <neia ikeran.org> writes:
On Wed, 14 Nov 2018 13:40:46 -0500, Steven Schveighoffer wrote:
 You don't think this is confusing?
 
 enum A : int {
      val
 }
 
 A a;
 foo(a); // error: be more specific
 int x = a;
 foo(x); // Sure
I find this confusing:

void foo(int i) {}
void foo(ubyte b) {}

enum A : int { val = 0 }

foo(A.val); // calls foo(ubyte)
A a = A.val;
foo(a);     // calls foo(int)

If it instead produced an error, the error would look like:

    Error: foo called with argument types (A) matches both:
    example.d(1): foo(int i)
    and:
    example.d(2): foo(ubyte i)

Or else:

    Error: none of the overloads of foo are callable using
    argument types (A), candidates are:
    example.d(1): foo(int i)
    example.d(2): foo(ubyte i)

These aren't the intuitively obvious thing to me, but they're not going to surprise me by calling the wrong function, and there are obvious ways to make the code work as I want. Of the two, I'd prefer the former.

The intuitively obvious thing for me is:

* Don't use VRP to select an overload. Only use it if there's only one candidate with the right number of arguments.
* Don't use VRP if the argument is a ctor, cast expression, or symbol expression referring to a non-builtin. Maybe disallow with builtins.
* Don't use VRP if the argument is a literal with an explicitly indicated type (0UL shouldn't match to byte, for instance).

I think this would make things more as most people expect:

foo(A.val);        // A -> int, but no A -> byte; calls foo(int)
foo(0);            // errors (currently calls foo(int))
foo(0L);           // errors (currently calls foo(ubyte))
foo(cast(ulong)0); // errors (currently calls foo(ubyte))

And when there's only one overload:

void bar(byte b) {}

bar(A.val);        // errors; can't convert A -> byte
bar(0);            // type any-number and fits within byte, so should work
bar(0UL);          // errors; explicit incorrect type
bar(0UL & 0x1F);   // bitwise and expression can do VRP
bar("foo".length); // length is a builtin; maybe do VRP?
bar(byte.sizeof);  // sizeof is a builtin; maybe do VRP?
Nov 14
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/14/18 4:32 PM, Neia Neutuladh wrote:
 On Wed, 14 Nov 2018 13:40:46 -0500, Steven Schveighoffer wrote:
 You don't think this is confusing?

 enum A : int {
       val
 }

 A a;
 foo(a); // error: be more specific
 int x = a;
 foo(x); // Sure
 I find this confusing:
 
 void foo(int i) {}
 void foo(ubyte b) {}
 enum A : int { val = 0 }
 foo(A.val);  // calls foo(ubyte)
 A a = A.val;
 foo(a);      // calls foo(int)
 
 If it instead produced an error, the error would look like:
 
      Error: foo called with argument types (E) matches both:
      example.d(1): foo(int i)
      and:
      example.d(2): foo(ubyte i)
I'm reminded of my son: sometimes when he asks for a glass of water, I hand it to him, and then he asks, "Should I drink this?" To me, making the user jump through these hoops will be insanely frustrating.
 
 Or else:
 
      Error: none of the overloads of foo are callable using
      argument types (A), candidates are:
      example.d(1): foo(int i)
      example.d(2): foo(ubyte i)
 
 These aren't the intuitively obvious thing to me, but they're not going to
 surprise me by calling the wrong function, and there are obvious ways to
 make the code work as I want. Of the two, I'd prefer the former.
I prefer the correct version, which calls foo(int). I truly think this is a misapplication of the overload rules, and it can be fixed.
 The intuitively obvious thing for me is:
 
 * Don't use VRP to select an overload. Only use it if there's only one
 candidate with the right number of arguments.
 * Don't use VRP if the argument is a ctor, cast expression, or symbol
 expression referring to a non-builtin. Maybe disallow with builtins.
 * Don't use VRP if the argument is a literal with explicitly indicated type
 (0UL shouldn't match to byte, for instance).
To me, an enum based on int is an int before it's a typeless integer value. VRP should be trumped by type (which it normally is). I would say an enum *derives* from int, and is a more specialized form of int. It is not derived from byte or ubyte. For it to match those overloads *over* int is surprising. Just like you wouldn't expect the `alias this` inside a class to match an overload over its base class. Oh, crap, it actually does... But I think that's a different bug.
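Steven's aside about `alias this` can be sketched as follows. This is a hypothetical illustration of the surprise he describes (the names `Base`, `Derived`, and `f` are invented for the example); per his observation, the `alias this` conversion does win over the base-class overload, which he considers a separate bug:

```d
// Hypothetical sketch of the `alias this` surprise Steven mentions.
class Base {}
class Derived : Base
{
    int value;
    alias value this; // implicit conversion Derived -> int
}

int f(Base b) { return 1; }
int f(int i)  { return 2; }

void main()
{
    auto d = new Derived;
    // One might expect the Base overload (a plain upcast) to win here,
    // but per Steven's observation the alias this conversion to int
    // matches the int overload instead.
    auto r = f(d);
}
```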
 
 I think this would make things more as most people expect:
 
      foo(A.val);  // A -> int, but no A -> byte; calls foo(int)
      foo(0);      // errors (currently calls foo(int))
If you have foo(int) and for some reason you can't call it with foo(0), nobody is going to expect or want that.
      foo(0L);     // errors (currently calls foo(ubyte))
      foo(cast(ulong)0);  // errors (currently calls foo(ubyte))
I'm actually OK with this being the way it is, or even if it called the foo(int) version. Either way, as long as there is a clear definition of why it does that.
 
 And when there's only one overload:
 
      void bar(byte b) {}
      bar(A.val);  // errors; can't convert A -> byte
      bar(0);      // type any-number and fits within byte, so should work
      bar(0UL);    // errors; explicit incorrect type
      bar(0UL & 0x1F);    // bitwise and expression can do VRP
      bar("foo".length);  // length is a builtin; maybe do VRP?
      bar(byte.sizeof);   // sizeof is a builtin; maybe do VRP?
 
I am OK with VRP calls, even in the case of the enum when there's no int overload. -Steve
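For readers who want to check the behavior being debated, here is a minimal, self-contained D sketch of the current overload resolution, assuming the behavior Neia describes (2018-era DMD): an enum member with a compile-time value of 0 fits in ubyte via value range propagation and picks the tighter overload, while a runtime variable of the enum type picks the int overload.

```d
import std.stdio;

int foo(int i)   { return 1; }
int foo(ubyte b) { return 2; }

enum A : int { val = 0 }

void main()
{
    // Compile-time value 0: VRP selects the tighter foo(ubyte) overload.
    assert(foo(A.val) == 2);

    // Runtime variable of type A: no VRP, so foo(int) is selected.
    A a = A.val;
    assert(foo(a) == 1);

    writeln("ok");
}
```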
Nov 15
prev sibling parent 12345swordy <alexanderheistermann gmail.com> writes:
On Wednesday, 14 November 2018 at 18:11:59 UTC, Neia Neutuladh 
wrote:
 On Tue, 13 Nov 2018 20:27:05 -0800, Walter Bright wrote:
 There have been various attempts over the years to "fix" 
 various things in the D matching system by adding "just one 
 more" match level.
I kind of feel like, if something would be confusing like this, maybe the compiler shouldn't be making an automatic decision. Not "just one more" match level, but just...don't match. If there are multiple matching overloads, just error out. Don't try to be clever and surprise people, just tell the user to be more explicit.
That type of behavior is best left to the programmer defining the public interface. -Alex
Nov 14
prev sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 14 November 2018 at 03:02:48 UTC, Walter Bright 
wrote:
 On 11/13/2018 3:50 PM, Isaac S. wrote:
 is asinine and ignorant.
Some friendly advice - nobody is going to pay serious attention to articles that sum up with such unprofessional statements. Continuing the practice will just result in the moderators removing them.
I read the first adjective as a statement of opinion about your reasoning for rejection and the second about the way you have dismissed the opinions of others, neither of which are uncalled for and certainly not unprofessional. You would do well to think about that before you post further.
Nov 13
parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, November 13, 2018 8:47:01 PM MST Nicholas Wilson via 
Digitalmars-d-announce wrote:
 On Wednesday, 14 November 2018 at 03:02:48 UTC, Walter Bright

 wrote:
 On 11/13/2018 3:50 PM, Isaac S. wrote:
 is asinine and ignorant.
Some friendly advice - nobody is going to pay serious attention to articles that sum up with such unprofessional statements. Continuing the practice will just result in the moderators removing them.
I read the first adjective as a statement of opinion about your reasoning for rejection and the second about the way you have dismissed the opinions of others, neither of which are uncalled for and certainly not unprofessional. You would do well to think about that before you post further.
Given how strong the negative response is to this and how incomprehensible a number of us find the reasoning behind how bool functions in some scenarios, Walter probably does need to sit back and think about this, but using words like asinine is pretty much always uncalled for in a professional discussion. I can very much understand Isaac's frustration, but making statements like that really is the sort of thing that comes across as attacking the poster and is going to tend to result in folks not listening to your arguments anymore, even if they're well-reasoned and logical. It's already hard enough to convince people when your arguments are solid without getting anything into the mix that could come across as insulting. - Jonathan M Davis
Nov 13
parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 14 November 2018 at 04:24:20 UTC, Jonathan M Davis 
wrote:
 Given how strong the negative response is to this and how 
 incomprehensible a number of us find the reasoning behind how 
 bool functions in some scenarios, Walter probably does need to 
 sit back and think about this, but using words like asinine is 
 pretty much always uncalled for in a professional discussion. I 
 can very much understand Isaac's frustration, but making 
 statements like that really is the sort of thing that comes 
 across as attacking the poster and is going to tend to result 
 in folks not listening to your arguments anymore, even if 
 they're well-reasoned and logical. It's already hard enough to 
 convince people when your arguments are solid without getting 
 anything into the mix that could come across as insulting.

 - Jonathan M Davis
asinine, adjective: extremely stupid or foolish. Is there some additional connotation I am missing on this living (comparatively) in the middle of nowhere? (Genuine question.)
Nov 13
next sibling parent reply Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Wednesday, 14 November 2018 at 04:27:29 UTC, Nicholas Wilson 
wrote:
 asinine, adjective: extremely stupid or foolish. Is there some 
 additional connotation I am missing on this living 
 (comparatively) in the middle of nowhere? (Genuine question.)
It probably depends on where someone is from (asinine isn't considered a big insult where I live [rural US]). Regardless of that, I will admit I did overstep (especially in calling Walter ignorant) and so I do apologize to Walter (I'm sorry if that sounds insincere; I don't know how to properly apologize in text form).
Nov 13
next sibling parent Isaac S. <spam-no-reply-isaac outlook.com> writes:
On Wednesday, 14 November 2018 at 04:37:38 UTC, Isaac S. wrote:
 On Wednesday, 14 November 2018 at 04:27:29 UTC, Nicholas Wilson 
 wrote:
 asinine, adjective: extremely stupid or foolish. Is there some 
 additional connotation I am missing on this living 
 (comparatively) in the middle of nowhere? (Genuine question.)
It probably depends on where someone is from (asinine isn't considered a big insult where I live [rural US]).
And to clarify what I meant by asinine: it was the position that issue 10560 is the correct behavior, not Walter himself. I was not meaning to call Walter extremely foolish (although now I can see how it could be interpreted that way, and if he took it that way: I'm sorry).
Nov 13
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 8:37 PM, Isaac S. wrote:
 It probably depends on where someone is from (asinine isn't considered a big 
 insult where I live [rural US]).
 
 Regardless of that, I will admit I did overstep (especially in calling Walter 
 ignorant) and so I do apologize to Walter (I'm sorry that sounds insincere, I 
 don't know how to properly apologize in text-form).
Thank you. I gladly accept.
Nov 13
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Tuesday, November 13, 2018 9:27:29 PM MST Nicholas Wilson via 
Digitalmars-d-announce wrote:
 On Wednesday, 14 November 2018 at 04:24:20 UTC, Jonathan M Davis

 wrote:
 Given how strong the negative response is to this and how
 incomprehensible a number of us find the reasoning behind how
 bool functions in some scenarios, Walter probably does need to
 sit back and think about this, but using words like asinine is
 pretty much always uncalled for in a professional discussion. I
 can very much understand Isaac's frustration, but making
 statements like that really is the sort of thing that comes
 across as attacking the poster and is going to tend to result
 in folks not listening to your arguments anymore, even if
 they're well-reasoned and logical. It's already hard enough to
 convince people when your arguments are solid without getting
 anything into the mix that could come across as insulting.

 - Jonathan M Davis
asinine, adjective: extremely stupid or foolish. Is there some additional connotation I am missing on this living (comparatively) in the middle of nowhere? (Genuine question.)
Not AFAIK, but calling someone or something extremely stupid or foolish is almost always a terrible idea in a professional discussion (or pretty much any discussion that you want to be civil) - especially if it can be interpreted as calling the person stupid or foolish. That's just throwing insults around. If an idea or decision is bad, then it should be shown as to why it's bad, and if it is indeed a terrible idea, then the arguments themselves should make that obvious without needing to throw insults around. It's not always easy to avoid calling ideas stupid when you get emotional about something, but the stronger the language used, the more likely it is that you're going to get a strong emotional response out of the other person rather than a logical, reasoned discussion that can come to a useful conclusion rather than a flame war, and asinine is a pretty strong word. It's the sort of word that's going to tend to get people mad and insulted rather than help with a logical argument in any way - which is why Walter called it unprofessional. - Jonathan M Davis
Nov 13
parent Walter Bright <newshound2 digitalmars.com> writes:
On 11/13/2018 8:49 PM, Jonathan M Davis wrote:
 Not AFAIK, but calling someone or something extremely stupid or foolish is
 almost always a terrible idea in a professional discussion (or pretty much
 any discussion that you want to be civil) - especially if it can be
 interpreted as calling the person stupid or foolish. That's just throwing
 insults around. If an idea or decision is bad, then it should be shown as to
 why it's bad, and if it is indeed a terrible idea, then the arguments
 themselves should make that obvious without needing to throw insults around.
 
 It's not always easy to avoid calling ideas stupid when you get emotional
 about something, but the stronger the language used, the more likely it is
 that you're going to get a strong emotional response out of the other person
 rather than a logical, reasoned discussion that can come to a useful
 conclusion rather than a flame war, and asinine is a pretty strong word.
 It's the sort of word that's going to tend to get people mad and insulted
 rather than help with a logical argument in any way - which is why Walter
 called it unprofessional.
Exactly right. It's not that I'm angry about this (I'm not), I've been around too long to get annoyed at this sort of thing. I'm pointing out that using such tactics will produce the following reactions:

1. professionals (i.e. people that matter) will ignore you
2. the recipient will get angry with you, will go out of his way to refuse to acknowledge your position, and will entrench himself deeper in his position
3. professionals (i.e. people that matter) will be discouraged from participating in the forums
4. you'll find yourself interacting solely with other egg-throwers, accomplishing nothing

None of these are a desirable result.
Nov 13
prev sibling parent reply Rubn <where is.this> writes:
On Monday, 12 November 2018 at 22:07:39 UTC, Walter Bright wrote:
 One could have <implicit conversion><exact> be treated as 
 "better than" <implicit conversion><implicit conversion>, and 
 it sounds like a good idea, but even C++, not known for 
 simplicity, tried that and had to abandon it as nobody could 
 figure it out once the code examples got beyond trivial 
 examples.
I wonder what these examples are? What did C++ do instead? Because something tells me it didn't do what D is doing. An enum in C++ doesn't call different function overloads based on the constant value. Even the trivial examples under D's current implementation don't seem to be understood by most people.
Nov 13
parent Neia Neutuladh <neia ikeran.org> writes:
On Wed, 14 Nov 2018 00:43:54 +0000, Rubn wrote:
 I wonder what these examples are? What did C++ do instead, cause
 something tells me it didn't do what D is doing. An enum in C++ doesn't
 call different function overloads based on the constant value.
Long long and unsigned long long give an ambiguous overload error. Unsigned int uses the unsigned int overload. Everything else uses the int overload.

Test code:

```
#include <iostream>
#include <climits>
using namespace std;

void foo(bool c) { cout << "bool " << c << endl; }
void foo(unsigned char c) { cout << "unsigned char " << c << endl; }
void foo(char c) { cout << "char " << c << endl; }
void foo(int c) { cout << "int " << c << endl; }
void foo(unsigned int c) { cout << "unsigned int " << c << endl; }
void foo(long long c) { cout << "long long " << c << endl; }
void foo(unsigned long long c) { cout << "unsigned long long " << c << endl; }

enum Bool : bool { b = 1 };
enum Char : char { c = CHAR_MAX };
enum UChar : unsigned char { d = UCHAR_MAX };
enum Short : short { e = SHRT_MAX };
enum UShort : unsigned short { f = USHRT_MAX };
enum Int : int { g = INT_MAX };
enum UInt : unsigned int { h = UINT_MAX };
enum LongLong : long long { i = LLONG_MAX };
enum ULongLong : unsigned long long { j = ULLONG_MAX };

int main(int argc, char** argv)
{
    foo(b);
    foo(c);
    foo(d);
    foo(e);
    foo(f);
    foo(g);
    foo(h);
    //foo(i);
    //foo(j);
}
```

Output:

```
int 1
int 127
int 255
int 32767
int 65535
int 2147483647
unsigned int 4294967295
```
Nov 13
prev sibling parent reply Chris M. <chrismohrfeld comcast.net> writes:
On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from 
 integer and character literals to bool, has been rejected, 
 primarily on the grounds that it is factually incorrect in 
 treating bool as a type distinct from other integral types.

 The TL;DR is that the DIP is trying to change behavior that is 
 working as intended.

 From Example A in the DIP:

     bool b = 1;

 This works because bool is a "small integral" with a range of 
 0..1. The current behavior is consistent with all other 
 integrals.

 From Example B in the DIP:

 ```
 int f(bool b) { return 1; }
 int f(int i) { return 2; }

 enum E : int
 {
     a = 0,
     b = 1,
     c = 2,
 }
 ```

 Here, f(a) and f(b) call the bool overload, while f(c) calls 
 the int version. This works because D selects the overload with 
 the tightest conversion. This behavior is consistent across all 
 integral types. Replace bool with ubyte and f(a), f(b) would 
 both call the ubyte version. The same holds for the DIP's 
 Example C.

 Walter and Andrei left the door open to change the overload 
 behavior for *all* integral types, with the caveat that it's a 
 huge hurdle for such a DIP to be accepted. It would need a 
 compelling argument.

 You can read a few more details in the summary I appended to 
 the DIP:

 https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1015.md#formal-assessment

 Thanks to Mike Franklin for sticking with the process to the 
 end.
I was going to write something up about how you can't do arithmetic on bool types and therefore they aren't integral, but I tested and realized D allows this (i.e. bool + bool, bool * bool). Still, that seems nonsensical, so I guess my question is: what is the definition of an integral type, and how does bool fit?
Nov 13
next sibling parent Chris M. <chrismohrfeld comcast.net> writes:
On Tuesday, 13 November 2018 at 16:26:55 UTC, Chris M. wrote:
 On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 [...]
I was going to write something up about how you can't do arithmetic on bool types and therefore they aren't integral, but I tested and realized D allows this (i.e. bool + bool, bool * bool). Still, that seems nonsensical, so I guess my question is: what is the definition of an integral type, and how does bool fit?
Addendum: definition of an integral type in Walter/Andrei's mind
Nov 13
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 11/13/18 11:26 AM, Chris M. wrote:
 On Monday, 12 November 2018 at 09:45:14 UTC, Mike Parker wrote:
 DIP 1015, "Deprecation and removal of implicit conversion from integer 
 and character literals to bool, has been rejected, primarily on the 
 grounds that it is factually incorrect in treating bool as a type 
 distinct from other integral types.

 The TL;DR is that the DIP is trying to change behavior that is working 
 as intended.

 From Example A in the DIP:

     bool b = 1;

 This works because bool is a "small integral" with a range of 0..1. 
 The current behavior is consistent with all other integrals.

 From Example B in the DIP:

 ```
 int f(bool b) { return 1; }
 int f(int i) { return 2; }

 enum E : int
 {
     a = 0,
     b = 1,
     c = 2,
 }
 ```

 Here, f(a) and f(b) call the bool overload, while f(c) calls the int 
 version. This works because D selects the overload with the tightest 
 conversion. This behavior is consistent across all integral types. 
 Replace bool with ubyte and f(a), f(b) would both call the ubyte 
 version. The same holds for the DIP's Example C.

 Walter and Andrei left the door open to change the overload behavior 
 for *all* integral types, with the caveat that it's a huge hurdle for 
 such a DIP to be accepted. It would need a compelling argument.

 You can read a few more details in the summary I appended to the DIP:

 https://github.com/dlang/DIPs/blob/master/DIPs/rejected/DIP1015.
d#formal-assessment 


 Thanks to Mike Franklin for sticking with the process to the end.
I was going to write something up about how you can't do arithmetic on bool types and therefore they aren't integral, but I tested and realized D allows this (i.e. bool + bool, bool * bool). Still, that seems nonsensical, so I guess my question is: what is the definition of an integral type, and how does bool fit?
What it's doing is promoting false to an integer 0 and true to an integer 1. These work just like all the other integer promotion rules.

Interestingly enough, since bool can only be promoted to 0 or 1, bool * bool can be assigned back to a bool, but bool + bool can't be assigned back to a bool unless you cast. But don't expect the addition to behave as other integers do, as true + true == 2, which then casts to true.

-Steve
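The promotions Steve describes can be seen in a minimal D sketch (assuming 2018-era DMD semantics, where bool participates in the usual integer promotions: false becomes 0 and true becomes 1):

```d
void main()
{
    bool a = true, b = true;

    // Arithmetic on bool promotes both operands to int first.
    static assert(is(typeof(a + b) == int));
    assert(a + b == 2); // true + true == 2

    // bool * bool has a value range of 0..1, so it fits back in a bool.
    bool product = a * b;
    assert(product == true);

    // bool sum = a + b;  // error: range 0..2 doesn't fit bool without a cast
    assert(cast(bool)(a + b) == true); // 2 casts back to true
}
```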
Nov 13