
digitalmars.D - Deprecate implicit `int` to `bool` conversion for integer literals

reply Michael V. Franklin <slavo5150 yahoo.com> writes:
What's the official word on this:  
https://github.com/dlang/dmd/pull/6404

Does it need a DIP?

If I revive it will it go anywhere?

What needs to be done to move it forward?

Thanks,
Mike
Nov 11
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Saturday, November 11, 2017 13:40:23 Michael V. Franklin via Digitalmars-d wrote:
 What's the official word on this:
 https://github.com/dlang/dmd/pull/6404

 Does it need a DIP?

 If I revive it will it go anywhere?

 What needs to be done to move it forward?
It probably needs a DIP, since it's a language change, and based on what Walter has said in the past about this topic, I don't know how convincible he is. I think that most everyone else thought that it was terrible when code like this

auto foo(bool) {...}
auto foo(long) {...}

foo(1);

ends up with the bool overload being called, but Walter's answer was just to add an int overload if you didn't want 1 to call the bool overload. He may be more amenable to deprecating the implicit conversion now than he was then, but it's the sort of thing where I would expect there to have to be a DIP rather than it simply being done in a PR, since it's a definite semantic change and not one that Walter previously agreed should be made. I have no idea what Andrei's opinion on the topic is.

- Jonathan M Davis
Nov 11
parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Saturday, 11 November 2017 at 14:54:42 UTC, Jonathan M Davis 
wrote:
 On Saturday, November 11, 2017 13:40:23 Michael V. Franklin via Digitalmars-d wrote:
 What's the official word on this: 
 https://github.com/dlang/dmd/pull/6404

 Does it need a DIP?

 If I revive it will it go anywhere?

 What needs to be done to move it forward?
 It probably needs a DIP, since it's a language change, and based on what Walter has said in the past about this topic, I don't know how convincible he is. I think that most everyone else thought that it was terrible when code like this

 auto foo(bool) {...}
 auto foo(long) {...}

 foo(1);

 ends up with the bool overload being called, but Walter's answer was just to add an int overload if you didn't want 1 to call the bool overload.
Yeah, this is bad. However, I’d hate to rewrite things like:

if (a & (flag1 | flag2))

to

if ((a & (flag1 | flag2)) != 0)

when the first is quite obvious.
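For reference, a self-contained version of this snippet (the flag names and values are invented for illustration; as discussed downthread, the proposal targets integer literals, and `a` here is a runtime value):

```d
// Hypothetical flag values, just for illustration.
enum int flag1 = 1 << 0;
enum int flag2 = 1 << 1;

void main()
{
    int a = flag1 | (1 << 5);

    // Terse form: the int result of `&` is tested directly.
    if (a & (flag1 | flag2)) { /* at least one flag set */ }

    // Explicit form the deprecation might push people toward:
    bool anySet = (a & (flag1 | flag2)) != 0;
    assert(anySet);
}
```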
Nov 12
next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky 
wrote:

 However, I’d hate to rewrite things like:

 if (a & (flag1 | flag2))

 to

 if ((a & (flag1 | flag2)) != 0)

 When the first is quite obvious.
I don't think the proposal to deprecate integer literal conversions to `bool` would affect that as there doesn't appear to be an integer literal in the code. Mike
Nov 12
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Sunday, 12 November 2017 at 13:49:51 UTC, Michael V. Franklin 
wrote:

 I don't think the proposal to deprecate integer literal 
 conversions to `bool` would affect that as there doesn't appear 
 to be an integer literal in the code.
Nevermind. I see what you mean now. Mike
Nov 12
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/12/2017 08:54 AM, Michael V. Franklin wrote:
 On Sunday, 12 November 2017 at 13:49:51 UTC, Michael V. Franklin wrote:
 
 I don't think the proposal to deprecate integer literal conversions to 
 `bool` would affect that as there doesn't appear to be an integer 
 literal in the code.
Nevermind. I see what you mean now. Mike
A DIP could be formulated to only address the problem at hand. BTW, here's a really fun example:

void fun(long) { assert(0); }
void fun(bool) {}

enum int a = 2;
enum int b = 1;

void main()
{
    fun(a - b);
}

The overload being called depends on (a) whether a - b can be computed during compilation or not, and (b) the actual value of a - b. Clearly a big problem for modular code. This is the smoking gun motivating the DIP.

Andrei
Nov 12
next sibling parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu 
wrote:

 A DIP could be formulated to only address the problem at hand. 
 BTW, here's a really fun example:

 void fun(long) { assert(0); }
 void fun(bool) {}

 enum int a = 2;
 enum int b = 1;

 void main()
 {
     fun(a - b);
 }

 The overload being called depends on (a) whether a - b can be 
 computed during compilation or not, and (b) the actual value of 
 a - b. Clearly a big problem for modular code. This is the 
 smoking gun motivating the DIP.
As I understand it, the case above can be solved by changing the overload resolution rules without deprecating the implicit conversion to bool. A PR for such a fix was submitted here: https://github.com/dlang/dmd/pull/1942. I fear a proposal to deprecate the implicit conversion to bool based solely on the example above could be refused in favor of overload resolution changes.

IMO, the example above, while certainly a smoking gun, is actually just a symptom of the deeper problem, so I tried to make that case in the DIP. The DIP has been submitted here: https://github.com/dlang/DIPs/pull/99

Perhaps I'm not the right person to be formulating these arguments, but given that the issue has been in Bugzilla for 4 years, I'm probably all you've got. Sorry.

Mike
Nov 12
prev sibling parent reply Nick Treleaven <nick geany.org> writes:
On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu 
wrote:
 The overload being called depends on (a) whether a - b can be 
 computed during compilation or not, and (b) the actual value of 
 a - b. Clearly a big problem for modular code. This is the 
 smoking gun motivating the DIP.
A very similar problem exists for int and char overloads:

alias foo = (char c) => 1;
alias foo = (int i) => 4;

enum int e = 7;
static assert(foo(e) == 4); // fails
Nov 14
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/14/17 8:20 AM, Nick Treleaven wrote:
 On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:
 The overload being called depends on (a) whether a - b can be computed 
 during compilation or not, and (b) the actual value of a - b. Clearly 
 a big problem for modular code. This is the smoking gun motivating the 
 DIP.
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Thanks. Addressing this should be part of the DIP as well. -- Andrei
Nov 14
next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 13:32:52 UTC, Andrei Alexandrescu 
wrote:

 A very similar problem exists for int and char overloads:
 
 alias foo = (char c) => 1;
 alias foo = (int i) => 4;
 
 enum int e = 7;
 static assert(foo(e) == 4); // fails
Thanks. Addressing this should be part of the DIP as well. --
It doesn't appear to be related to the implicit conversion of integer literals to bool. While Andrei's example is fixed by deprecating the implicit conversion of integral literals to bool (at least using this implementation: https://github.com/dlang/dmd/pull/7310), Nick's example isn't.

Nick, if it's not in bugzilla already, can you please add it?

Mike
Nov 14
next sibling parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 13:43:32 UTC, Michael V. Franklin 
wrote:

 A very similar problem exists for int and char overloads:
 
 alias foo = (char c) => 1;
 alias foo = (int i) => 4;
 
 enum int e = 7;
 static assert(foo(e) == 4); // fails
Thanks. Addressing this should be part of the DIP as well. --
It doesn't appear to be related to the implicit conversion of integer literals to bool. While Andrei's example is fixed by deprecating the implicit conversion of integral literals to bool (at least using this implementation: https://github.com/dlang/dmd/pull/7310), Nick's example isn't.
Well, of course it's not related; it's a char, not a bool. But there do seem to be some systematic problems in D's implicit conversion rules. I'll have to investigate this and perhaps I can address them both in one DIP.

Mike
Nov 14
prev sibling parent Nick Treleaven <nick geany.org> writes:
On Tuesday, 14 November 2017 at 13:43:32 UTC, Michael V. Franklin 
wrote:
 Nick, if it's not in bugzilla already, can you please add it?
Sure: https://issues.dlang.org/show_bug.cgi?id=17983
Nov 14
prev sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 8:32 AM, Andrei Alexandrescu wrote:
 On 11/14/17 8:20 AM, Nick Treleaven wrote:
 On Sunday, 12 November 2017 at 16:57:05 UTC, Andrei Alexandrescu wrote:
 The overload being called depends on (a) whether a - b can be 
 computed during compilation or not, and (b) the actual value of a - 
 b. Clearly a big problem for modular code. This is the smoking gun 
 motivating the DIP.
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Thanks. Addressing this should be part of the DIP as well. -- Andrei
IMO, no character types should implicitly convert from integer types. In fact, character types shouldn't convert from ANYTHING (even other character types). We have so many problems with this. -Steve
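A sketch of what currently compiles, as I understand today's rules; these are the conversions Steve's stricter rule would require casts for:

```d
void main()
{
    // int literal -> char: allowed today via value range propagation.
    char c = 65;
    assert(c == 'A');

    // char -> wchar -> dchar: implicit today as well.
    wchar w = c;
    dchar d = w;
    assert(d == 'A');

    // The narrowing direction already requires an explicit cast.
    char c2 = cast(char) d;
    assert(c2 == 'A');
}
```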
Nov 14
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven 
Schveighoffer wrote:

 IMO, no character types should implicitly convert from integer 
 types. In fact, character types shouldn't convert from ANYTHING 
 (even other character types). We have so many problems with 
 this.
Is everyone in general agreement on this? Can anyone think of a compelling use case? Mike
Nov 14
next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via Digitalmars-d
wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:
 
 IMO, no character types should implicitly convert from integer
 types. In fact, character types shouldn't convert from ANYTHING
 (even other character types). We have so many problems with this.
Is everyone in general agreement on this? Can anyone think of a compelling use case?
[...]

I am 100% for this change. I've been bitten before by things like this:

void myfunc(char ch) { ... }
void myfunc(int i) { ... }

char c;
int i;

myfunc(c);   // calls first overload
myfunc('a'); // calls second overload (WAT)
myfunc(i);   // calls second overload
myfunc(1);   // calls second overload

There is no compelling use case for implicitly converting char types to int. If you want to directly manipulate ASCII values / Unicode code point values, a direct cast is warranted (clearer code intent).

Converting char to wchar (or dchar, or vice versa, etc.) implicitly is also fraught with peril: if the char happens to be an upper byte of a multibyte sequence, you *implicitly* get a garbage value. Not useful at all. Needing to write an explicit cast will remind you to think twice, which is a good thing.

T

-- 
Famous last words: I wonder what will happen if I do *this*...
Nov 14
next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 6:09 PM, H. S. Teoh wrote:
 On Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via
Digitalmars-d wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:

 IMO, no character types should implicitly convert from integer
 types. In fact, character types shouldn't convert from ANYTHING
 (even other character types). We have so many problems with this.
Is everyone in general agreement on this? Can anyone think of a compelling use case?
 [...] I am 100% for this change. I've been bitten before by things like this:

 void myfunc(char ch) { ... }
 void myfunc(int i) { ... }

 char c;
 int i;

 myfunc(c);   // calls first overload
 myfunc('a'); // calls second overload (WAT)
 myfunc(i);   // calls second overload
 myfunc(1);   // calls second overload
I couldn't believe that this is the case, so I tested it: https://run.dlang.io/is/AHQYtA

For those who don't want to look, it does indeed call the first overload for a character literal, so this is not a problem (maybe you were thinking of something else?)
 There is no compelling use case for implicitly converting char types to
 int.  If you want to directly manipulate ASCII values / Unicode code
 point values, a direct cast is warranted (clearer code intent).
I think you misunderstand the problem. It's fine for chars to promote to int, or even bools for that matter. It's the other way around that is problematic.

To put it another way, if you make this require a cast, you will have some angry coders :)

if (c >= '0' && c <= '9')
    value = c - '0';
 Converting char to wchar (or dchar, or vice versa, etc.) implicitly is
 also fraught with peril: if the char happens to be an upper byte of a
 multibyte sequence, you *implicitly* get a garbage value.  Not useful at
 all.  Needing to write an explicit cast will remind you to think twice,
 which is a good thing.
Agree, these should require casts, since the resulting type is probably not what you want in all cases.

Where this continually comes up is char ranges. Other than actual char[] arrays, the following code doesn't do the right thing at all:

foreach (dchar d; charRange)

If we made it require a cast, this would find such problems easily.

-Steve
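Steve's char-range point can be demonstrated like this (a sketch using `std.utf.byChar`; char[] arrays are special-cased by foreach and decoded, generic char ranges are not):

```d
import std.utf : byChar;

void main()
{
    string s = "é"; // one code point, two UTF-8 code units

    dchar[] overArray, overRange;

    // char[] arrays are special-cased: foreach decodes code points.
    foreach (dchar d; s)
        overArray ~= d;

    // A generic char range is not decoded: each code unit is merely
    // converted to dchar, yielding two garbage "code points".
    foreach (dchar d; s.byChar)
        overRange ~= d;

    assert(overArray.length == 1);
    assert(overRange.length == 2);
}
```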
Nov 14
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Tue, Nov 14, 2017 at 06:53:43PM -0500, Steven Schveighoffer via
Digitalmars-d wrote:
 On 11/14/17 6:09 PM, H. S. Teoh wrote:
 On Tue, Nov 14, 2017 at 11:05:51PM +0000, Michael V. Franklin via
Digitalmars-d wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:
 
 IMO, no character types should implicitly convert from integer
 types. In fact, character types shouldn't convert from ANYTHING
 (even other character types). We have so many problems with
 this.
Is everyone in general agreement on this? Can anyone think of a compelling use case?
 [...] I am 100% for this change. I've been bitten before by things like this:

 void myfunc(char ch) { ... }
 void myfunc(int i) { ... }

 char c;
 int i;

 myfunc(c);   // calls first overload
 myfunc('a'); // calls second overload (WAT)
 myfunc(i);   // calls second overload
 myfunc(1);   // calls second overload
I couldn't believe that this is the case so I tested it: https://run.dlang.io/is/AHQYtA for those who don't want to look, it does indeed call the first overload for a character literal, so this is not a problem (maybe you were thinking of something else?)
[...]

Argh, should've checked before I posted. What I meant was more something like this:

	import std.stdio;
	void f(dchar) { writeln("dchar overload"); }
	void f(ubyte) { writeln("ubyte overload"); }
	void main() {
		f(1);
		f('a');
	}

Output:

	ubyte overload
	ubyte overload

It "makes sense" from the POV of C/C++-compatible integer promotion rules, but in the context of D, it's just very WAT-worthy.

T

-- 
Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian W. Kernighan
Nov 14
parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 23:53:49 UTC, H. S. Teoh wrote:

 Argh, should've checked before I posted.  What I meant was more 
 something like this:

 	import std.stdio;
 	void f(dchar) { writeln("dchar overload"); }
 	void f(ubyte) { writeln("ubyte overload"); }
 	void main() {
 		f(1);
 		f('a');
 	}

 Output:
 	ubyte overload
 	ubyte overload

 It "makes sense" from the POV of C/C++-compatible integer 
 promotion rules, but in the context of D, it's just very 
 WAT-worthy.
If you haven't already, can you please submit this to bugzilla?

Thanks,
Mike
Nov 14
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/14/2017 3:09 PM, H. S. Teoh wrote:
 I've been bitten before by things like this:
 
 	void myfunc(char ch) { ... }
 	void myfunc(int i) { ... }
 
 	char c;
 	int i;
 
 	myfunc(c);	// calls first overload
 	myfunc('a');	// calls second overload (WAT)
 	myfunc(i);	// calls second overload
 	myfunc(1);	// calls second overload
I just tried:

  import core.stdc.stdio;
  void foo(char c) { printf("char\n"); }
  void foo(int c) { printf("int\n"); }
  void main() {
    enum int e = 1;
    foo(e);
    foo(1);
    foo('c');
  }

and it prints:

  int
  int
  char

I cannot reproduce your or Nick's error.
Nov 14
parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Wednesday, 15 November 2017 at 04:30:32 UTC, Walter Bright 
wrote:

 I just tried:

   import core.stdc.stdio;
   void foo(char c) { printf("char\n"); }
   void foo(int c) { printf("int\n"); }
   void main() {
     enum int e = 1;
     foo(e);
     foo(1);
     foo('c');
   }

 and it prints:

   int
   int
   char
The code posted was incorrect. See http://forum.dlang.org/post/mailman.154.1510704335.9493.digitalmars-d puremagic.com
Nov 14
prev sibling next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/14/2017 06:05 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:
 
 IMO, no character types should implicitly convert from integer types. 
 In fact, character types shouldn't convert from ANYTHING (even other 
 character types). We have so many problems with this.
Is everyone in general agreement on this?  Can anyone think of a compelling use case?
No, that would be too large a change of the rules. FWIW 'a' has type dchar, not char. -- Andrei
Nov 14
parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 6:48 PM, Andrei Alexandrescu wrote:
 On 11/14/2017 06:05 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:

 IMO, no character types should implicitly convert from integer types. 
 In fact, character types shouldn't convert from ANYTHING (even other 
 character types). We have so many problems with this.
Is everyone in general agreement on this?  Can anyone think of a compelling use case?
No, that would be too large a change of the rules.
All it means is that when VRP allows it, you still have to cast. It's not that large a change actually, but I can see how it might be too disruptive to be worth it.
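To spell out what "when VRP allows it" means, here is a sketch of the current rules as I understand them:

```d
void main()
{
    enum int e = 200;

    // The value is known at compile time to fit in a char, so value
    // range propagation permits the implicit int -> char conversion.
    char c1 = e;
    assert(c1 == 200);

    // With a runtime int, a cast is already required today:
    int i = 200;
    // char c2 = i;          // error: cannot implicitly convert
    char c2 = cast(char) i;
    assert(c2 == 200);
}
```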
 FWIW 'a' has type 
 dchar, not char. -- Andrei
 
No:

pragma(msg, typeof('a')); // char

-Steve
Nov 14
prev sibling parent Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 6:05 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 13:54:03 UTC, Steven Schveighoffer wrote:
 
 IMO, no character types should implicitly convert from integer types. 
 In fact, character types shouldn't convert from ANYTHING (even other 
 character types). We have so many problems with this.
Is everyone in general agreement on this?  Can anyone think of a compelling use case?
I would think this is another DIP from the one you are looking at, as it is more far-reaching than just overload problems. They are real problems, but this makes the DIP scope broader than it should be, and lessens the chance of acceptance.

-Steve
Nov 14
prev sibling next sibling parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven 
wrote:

 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Wait a minute! This doesn't appear to be a casting or overload problem. Can you really overload aliases in D? I would expect the compiler to throw an error as `foo` is being redefined, or for `foo` to be replaced by the most recent assignment in lexical order. Am I missing something?

Mike
Nov 14
next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 6:14 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven wrote:
 
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Wait a minute!  This doesn't appear to be a casting or overload problem.  Can you really overload aliases in D?
In fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered.

Indeed, this is a completely different problem:

enum int e = 500;
static assert(foo(e) == 4); // fails to compile (can't call char with 500)

If you define foo as an actual overloaded function set, it works as expected.
 
 I would expect the compiler to throw an error as `foo` is being 
 redefined.  Or for `foo` to be replaced by the most recent assignment in 
 lexical order.  Am I missing something?
In this case, the compiler simply *ignores* the newest definition. It should throw an error IMO. -Steve
Nov 14
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven 
Schveighoffer wrote:
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Wait a minute!  This doesn't appear to be a casting or overload problem.  Can you really overload aliases in D?
In fact, I'm surprised you can alias to an expression like that. Usually you need a symbol. It's probably due to how this is lowered.
Boy did I "step in it" with my original post: started out with one issue and ended up with 3.

I looked at what the compiler is doing, and it is generating a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether (sigh! more breakage)?

Mike
Nov 14
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/14/17 8:56 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer wrote:
 In fact, I'm surprised you can alias to an expression like that. 
 Usually you need a symbol. It's probably due to how this is lowered.
Boy did I "step in it" with my original post:  Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generating a new symbol (e.g. `__lambda4`).  I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?
I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates:

alias mul = (a, b) => a * b;

vs.

auto mul(A, B)(A a, B b) { return a * b; }

However, it would be good to prevent the second alias which effectively does nothing.

-Steve
Nov 15
next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Wednesday, November 15, 2017 07:28:02 Steven Schveighoffer via 
Digitalmars-d wrote:
 On 11/14/17 8:56 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer 
wrote:
 In fact, I'm surprised you can alias to an expression like that.
 Usually you need a symbol. It's probably due to how this is lowered.
Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generating a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?
 I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates:

 alias mul = (a, b) => a * b;

 vs.

 auto mul(A, B)(A a, B b) { return a * b; }
In general, alias aliases symbols, whereas a lambda isn't a symbol. They're essentially the rvalue equivalent of functions. So, in that sense, it's pretty weird that it works. However, we _do_ use it all the time with alias template parameters. So, regardless of what would make sense for other aliases, if we just made it so that alias in general didn't work with lambdas, we'd be in big trouble.

It wouldn't surprise me if the fact that aliases like this work with lambdas is related to the fact that it works with alias template parameters, but I don't know. It may simply be that because of how the compiler generates lambdas, it ends up with a name for them (even if you don't see it), and it just naturally came out that those were aliasable.
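The alias-template-parameter usage Jonathan refers to, sketched with a made-up `apply` helper:

```d
// `apply` is a hypothetical helper; the point is that lambdas bind
// both to alias declarations and to alias template parameters.
int apply(alias fun)(int x)
{
    return fun(x);
}

void main()
{
    alias inc = (int a) => a + 1;         // alias declaration naming a lambda
    assert(apply!inc(41) == 42);          // the named lambda as an alias argument
    assert(apply!(a => a * 2)(21) == 42); // a literal lambda works too
}
```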
 However, it would be good to prevent the second alias which effectively
 does nothing.
As far as I'm concerned, in principle, it's identical to declaring a variable with the same name in the same scope, and I'm very surprised that it works. Interestingly, this code

alias foo = int;
alias foo = float;

_does_ produce an error. So, it looks like it's a problem related to lambdas specifically.

- Jonathan M Davis
Nov 15
next sibling parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/15/17 9:07 AM, Jonathan M Davis wrote:
 On Wednesday, November 15, 2017 07:28:02 Steven Schveighoffer via
 Digitalmars-d wrote:
 On 11/14/17 8:56 PM, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 23:41:39 UTC, Steven Schveighoffer
wrote:
 In fact, I'm surprised you can alias to an expression like that.
 Usually you need a symbol. It's probably due to how this is lowered.
Boy did I "step in it" with my original post: Started out with one issue and ended up with 3. I looked at what the compiler is doing, and it is generating a new symbol (e.g. `__lambda4`). I suspect this is not intended. My question now is, should the compiler actually be treating the lambda as an expression instead of a new symbol, thus disallowing it altogether? (sigh! more breakage)?
 I don't think we can prevent the aliasing in the first place, because if this is possible, I guarantee people use it, and it looks quite handy actually. Much less verbose than templates:

 alias mul = (a, b) => a * b;

 vs.

 auto mul(A, B)(A a, B b) { return a * b; }
In general, alias aliases symbols, whereas a lambda isn't a symbol. They're essentially the rvalue equivalent of functions. So, in that sense, it's pretty weird that it works. However, we _do_ use it all the time with alias template parameters. So, regardless of what would make sense for other aliases, if we just made it so that alias in general didn't work with lambdas, we'd be in big trouble. It wouldn't surprise me if the fact that aliases like this works with lambdas is related to the fact that it works with alias template parameters, but I don't know. It may simply be that because of how the compiler generates lambdas, it ends up with a name for them (even if you don't see it), and it just naturally came out that those were aliasable.
 However, it would be good to prevent the second alias which effectively
 does nothing.
 As far as I'm concerned, in principle, it's identical to declaring a variable with the same name in the same scope, and I'm very surprised that it works. Interestingly, this code

 alias foo = int;
 alias foo = float;
alias statements and alias parameters have differences, so I don't know if there is any real relation here. For example, int cannot bind to a template alias.

Some really weird stuff is going on with aliasing and function overloads in general. If we change them from anonymous lambdas to actual functions:

auto lambda1(char c) { return 1; }
auto lambda2(int i) { return 4; }

alias foo = lambda1;
alias foo = lambda2;

void main() {
    assert(foo('a') == 1);
    assert(foo(1) == 4);
}

Hey look, it all works! Even if lambda1 and lambda2 are turned into templates, it works. I seriously did not expect this to work.

-Steve
Nov 15
parent reply Andrea Fontana <nospam example.com> writes:
On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven 
Schveighoffer wrote:
 alias foo = lambda1;
 alias foo = lambda2;
What?
Nov 15
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/15/17 11:59 AM, Andrea Fontana wrote:
 On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven Schveighoffer wrote:
 alias foo = lambda1;
 alias foo = lambda2;
What?
Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Steve
Nov 15
parent reply Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Wednesday, 15 November 2017 at 19:29:29 UTC, Steven 
Schveighoffer wrote:
 On 11/15/17 11:59 AM, Andrea Fontana wrote:
 On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven 
 Schveighoffer wrote:
 alias foo = lambda1;
 alias foo = lambda2;
What?
Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Steve
I guess you guys haven't been keeping up with language changes :P

https://dlang.org/changelog/2.070.0.html#alias-funclit

And yes, you can use 'alias' to capture overload sets. See also:

https://github.com/dlang/dmd/pull/1660/files
https://github.com/dlang/dmd/pull/2125/files#diff-51d0a1ca6214e6a916212fcbf93d7e40
https://github.com/dlang/dmd/pull/2417/files
https://github.com/dlang/dmd/pull/4826/files
https://github.com/dlang/dmd/pull/5162/files
https://github.com/dlang/dmd/pull/5202
https://github.com/dlang/phobos/pull/5818/files
Nov 16
parent reply Meta <jared771 gmail.com> writes:
On Thursday, 16 November 2017 at 13:05:51 UTC, Petar Kirov 
[ZombineDev] wrote:
 On Wednesday, 15 November 2017 at 19:29:29 UTC, Steven 
 Schveighoffer wrote:
 On 11/15/17 11:59 AM, Andrea Fontana wrote:
 On Wednesday, 15 November 2017 at 15:25:06 UTC, Steven 
 Schveighoffer wrote:
 alias foo = lambda1;
 alias foo = lambda2;
What?
Yep. Would never have tried that in a million years before seeing this thread :) But it does work. Tested with dmd 2.076.1 and 2.066. So it's been there a while. -Steve
 I guess you guys haven't been keeping up with language changes :P

 https://dlang.org/changelog/2.070.0.html#alias-funclit

 And yes, you can use 'alias' to capture overload sets. See also:

 https://github.com/dlang/dmd/pull/1660/files
 https://github.com/dlang/dmd/pull/2125/files#diff-51d0a1ca6214e6a916212fcbf93d7e40
 https://github.com/dlang/dmd/pull/2417/files
 https://github.com/dlang/dmd/pull/4826/files
 https://github.com/dlang/dmd/pull/5162/files
 https://github.com/dlang/dmd/pull/5202
 https://github.com/dlang/phobos/pull/5818/files
Yes, as far as I understand this is just the normal way that you add a symbol to an existing overload set, except now it also interacts with the functionality of using an alias to create a named function literal. Kind of interesting, because I don't think it was possible to do this before, e.g.:

int function(int) f1 = (int n) => n;
int function(int) f2 = (char c) => c;

would obviously be rejected by the compiler. However, using the alias syntax we can create an overload set from function literals in addition to regular functions.
Nov 16
parent Meta <jared771 gmail.com> writes:
On Thursday, 16 November 2017 at 16:10:50 UTC, Meta wrote:
 int function(int) f1 = (int n) => n;
 int function(int) f2 = (char c) => c;
Should be int function(char)
Nov 16
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 15.11.2017 15:07, Jonathan M Davis wrote:
 In general, alias aliases symbols, whereas a lambda isn't a symbol. ...
There is essentially no merit to the symbol/no symbol distinction. It's just a DMD implementation detail resulting in weird inconsistencies between alias declarations and alias template parameters that are being fixed one by one.
 ...
 alias foo = int;
 alias foo = float;
Case in point. Neither int nor float are symbols.
Nov 18
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 15.11.2017 13:28, Steven Schveighoffer wrote:
 
 However, it would be good to prevent the second alias which effectively 
 does nothing.
No. It should just overload properly.
Nov 18
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 15.11.2017 00:14, Michael V. Franklin wrote:
 On Tuesday, 14 November 2017 at 13:20:22 UTC, Nick Treleaven wrote:
 
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
Wait a minute!  This doesn't appear to be a casting or overload problem.  Can you really overload aliases in D? ...
Yes.

auto foo(int x){ return x; }
auto bar(double x){ return x+1; }
alias qux=foo;
alias qux=bar;
void main(){
    import std.stdio;
    writeln(qux(1)," ",qux(1.0)); // 1 2
}

This is by design. The fact that the following does not work is just a (known) compiler bug:

alias qux=(int x)=>x;
alias qux=(double x)=>x+1;
void main(){
    import std.stdio;
    writeln(qux(1)," ",qux(1.0)); // error
}

https://issues.dlang.org/show_bug.cgi?id=16099
Nov 18
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 11/14/2017 5:20 AM, Nick Treleaven wrote:
 A very similar problem exists for int and char overloads:
 
 alias foo = (char c) => 1;
 alias foo = (int i) => 4;
 
 enum int e = 7;
 static assert(foo(e) == 4); // fails
I cannot reproduce this error.
Nov 14
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Wednesday, 15 November 2017 at 04:24:58 UTC, Walter Bright 
wrote:
 On 11/14/2017 5:20 AM, Nick Treleaven wrote:
 A very similar problem exists for int and char overloads:
 
 alias foo = (char c) => 1;
 alias foo = (int i) => 4;
 
 enum int e = 7;
 static assert(foo(e) == 4); // fails
I cannot reproduce this error.
Try it here: https://run.dlang.io/is/nfMGfG
DMD-nightly
Nov 14
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/14/17 11:33 PM, Michael V. Franklin wrote:
 On Wednesday, 15 November 2017 at 04:24:58 UTC, Walter Bright wrote:
 On 11/14/2017 5:20 AM, Nick Treleaven wrote:
 A very similar problem exists for int and char overloads:

 alias foo = (char c) => 1;
 alias foo = (int i) => 4;

 enum int e = 7;
 static assert(foo(e) == 4); // fails
I cannot reproduce this error.
Try it here: https://run.dlang.io/is/nfMGfG
DMD-nightly
Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei
Nov 15
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Thursday, 16 November 2017 at 07:24:44 UTC, Andrei 
Alexandrescu wrote:

 Try it here: https://run.dlang.io/is/nfMGfG
 DMD-nightly
Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei
Bugzilla issue is here: https://issues.dlang.org/show_bug.cgi?id=17983

Mike
Nov 15
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 11/16/2017 02:29 AM, Michael V. Franklin wrote:
 On Thursday, 16 November 2017 at 07:24:44 UTC, Andrei Alexandrescu wrote:
 
 Try it here: https://run.dlang.io/is/nfMGfG
 DMD-nightly
Cool, thanks. That seems to be an unrelated bug. Have you added it to bugzilla? Thanks! -- Andrei
Bugzilla Issue is here: https://issues.dlang.org/show_bug.cgi?id=17983 Mike
Gracias! -- Andrei
Nov 16
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky 
wrote:
 if (a & (flag1 | flag2))

 to

 if ((a & (flag1 | flag2)) != 0)

 When the first is quite obvious.
Just change the typing of the if-conditional to: if (boolean|integral) {…}
Nov 12
next sibling parent reply Temtaime <temtaime gmail.com> writes:
On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky 
 wrote:
 if (a & (flag1 | flag2))

 to

 if ((a & (flag1 | flag2)) != 0)

 When the first is quite obvious.
Just change the typing of the if-conditional to: if (boolean|integral) {…}
There's no forced change. if explicitly converts the condition to bool.
Nov 12
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= writes:
On Sunday, 12 November 2017 at 16:04:59 UTC, Temtaime wrote:
 There's no forced change.
 if explicitly converts cond to bool.
Yes, but that is a flaw IMO. E.g. NaN will convert to true.
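Ola's point about NaN can be checked directly; a minimal sketch:

```d
void main()
{
    double x = double.nan;

    // Converting a floating-point value to bool tests it against zero,
    // and NaN != 0 evaluates to true, so NaN is truthy in a condition.
    if (x)
        assert(cast(bool) x);
    else
        assert(false, "unreachable: NaN converts to true");
}
```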
Nov 12
prev sibling parent reply Dmitry Olshansky <dmitry.olsh gmail.com> writes:
On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad 
wrote:
 On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky 
 wrote:
 if (a & (flag1 | flag2))

 to

 if ((a & (flag1 | flag2)) != 0)

 When the first is quite obvious.
Just change the typing of the if-conditional to: if (boolean|integral) {…}
Rather, I recall that:

if(expr)

is considered to be

if(cast(bool)expr)

The latter is to support user-defined types. So we are good.
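A minimal sketch of the user-defined case this lowering enables (the `Flags` type here is illustrative, not from the thread): a struct only needs an `opCast!bool` for `if` to accept it.

```d
struct Flags
{
    uint bits;

    // `if (f)` is lowered to `if (cast(bool) f)`, which calls this.
    bool opCast(T : bool)() const
    {
        return bits != 0;
    }
}

void main()
{
    auto f = Flags(0b0101);
    if (f)                         // uses Flags.opCast!bool
        assert(f.bits != 0);
    assert(!cast(bool) Flags(0));  // zero bits -> false
}
```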
Nov 12
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Sunday, November 12, 2017 19:13:00 Dmitry Olshansky via Digitalmars-d 
wrote:
 On Sunday, 12 November 2017 at 16:00:28 UTC, Ola Fosheim Grøstad

 wrote:
 On Sunday, 12 November 2017 at 13:34:50 UTC, Dmitry Olshansky

 wrote:
 if (a & (flag1 | flag2))

 to

 if ((a & (flag1 | flag2)) != 0)

 When the first is quite obvious.
Just change the typing of the if-conditional to: if (boolean|integral) {…}
Rather, I recall that:

if(expr)

is considered to be

if(cast(bool)expr)

The latter is to support user-defined types. So we are good.
Yes. In conditional expressions, you get an implicitly inserted cast to bool. So, you have an implicit, explicit cast to bool (weird as that sounds).

If the implicit cast from integers to bool were removed (meaning neither integer literals nor VRP allowed the conversion), then it would have no effect on if statements or loops and whatnot. It would affect overloading and other expressions. So, something like

bool a = 2 - 1;

or

auto foo(bool) {...}
foo(1);

wouldn't compile anymore. But something like

if(1)

would compile just fine, just like

if("str")

compiles just fine, but

auto foo(bool) {...}
foo("str");

does not.

- Jonathan M Davis
Nov 12
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Hi Mike, thanks for your inquiry. A DIP is necessary for all language 
changes. In this case a short and well-argued DIP seems to be the 
ticket. Walter and I spoke and such a proposal has a good chance to be 
successful.

Thanks,

Andrei

On 11/11/2017 08:40 AM, Michael V. Franklin wrote:
 What's the official word on this: https://github.com/dlang/dmd/pull/6404
 
 Does it need a DIP?
 
 If I revive it will it go anywhere?
 
 What needs to be done to move it forward?
 
 Thanks,
 Mike
Nov 11
parent reply Michael V. Franklin <slavo5150 yahoo.com> writes:
On Saturday, 11 November 2017 at 23:30:18 UTC, Andrei 
Alexandrescu wrote:
 A DIP is necessary for all language changes. In this case a 
 short and well-argued DIP seems to be the ticket. Walter and I 
 spoke and such a proposal has a good chance to be successful.
Subject issues:
https://issues.dlang.org/show_bug.cgi?id=9999
https://issues.dlang.org/show_bug.cgi?id=10560

Spec in question: https://dlang.org/spec/type.html#bool

DIP: https://github.com/dlang/DIPs/pull/99

I need some feedback from the community before I move forward with the DIP. I'm torn between a few ideas and not sure how to proceed.

1. Deprecate implicit conversion of integer literals to bool.
2. Allow implicit conversion of integer literals to bool if a function is not overloaded, but disallow it if the function is overloaded.
3. Change the overload resolution rules as illustrated in https://github.com/dlang/dmd/pull/1942

If I had to choose one I would go with 1, simply because the implicit conversion is janky and circumvents the type system for a mild-at-best convenience. But it will cause breakage that needs to be managed.

2 would solve the issues in question but keep breakage at a minimum, and would probably be preferred by users who wish to maintain the status quo. Its disadvantage is that it's a special case to document, consider, and explain.

3 is similar to 2, and like 2, is a special case.

I don't even really have a dog in this fight, but the demonstration of the problem in the bugzilla issues is simply embarrassing, and I'm tired of seeing issues languish for so long in bugzilla without any resolution.

Is there any general consensus in the community on this issue so I can be sure I'm fulfilling the community's preference?

Thanks,
Mike
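For concreteness, a small sketch of the behavior option 1 targets, mirroring the bool/long overload example from the start of the thread (the overload bodies are illustrative):

```d
auto foo(bool b) { return "bool"; }
auto foo(long n) { return "long"; }

void main()
{
    // Today the literal 1 implicitly converts to bool, and the bool
    // overload is the better match, so this resolves to foo(bool).
    assert(foo(1) == "bool");

    // 2 doesn't fit in a bool, so it never converted implicitly.
    assert(foo(2) == "long");

    // Under option 1, foo(1) would instead resolve to foo(long), and
    // reaching the bool overload would require an explicit cast:
    assert(foo(cast(bool) 1) == "bool");
}
```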
Nov 13
parent reply Steven Schveighoffer <schveiguy yahoo.com> writes:
On 11/13/17 8:01 PM, Michael V. Franklin wrote:
 On Saturday, 11 November 2017 at 23:30:18 UTC, Andrei Alexandrescu wrote:
 A DIP is necessary for all language changes. In this case a short and 
 well-argued DIP seems to be the ticket. Walter and I spoke and such a 
 proposal has a good chance to be successful.
Subject issues:
https://issues.dlang.org/show_bug.cgi?id=9999
https://issues.dlang.org/show_bug.cgi?id=10560

Spec in question: https://dlang.org/spec/type.html#bool

DIP: https://github.com/dlang/DIPs/pull/99

I need some feedback from the community before I move forward with the DIP. I'm torn between a few ideas and not sure how to proceed.

1. Deprecate implicit conversion of integer literals to bool.
2. Allow implicit conversion of integer literals to bool if a function is not overloaded, but disallow it if the function is overloaded.
3. Change the overload resolution rules as illustrated in https://github.com/dlang/dmd/pull/1942

If I had to choose one I would go with 1, simply because the implicit conversion is janky and circumvents the type system for a mild-at-best convenience. But it will cause breakage that needs to be managed.

2 would solve the issues in question but keep breakage at a minimum, and would probably be preferred by users who wish to maintain the status quo. Its disadvantage is that it's a special case to document, consider, and explain.

3 is similar to 2, and like 2, is a special case.

I don't even really have a dog in this fight, but the demonstration of the problem in the bugzilla issues is simply embarrassing, and I'm tired of seeing issues languish for so long in bugzilla without any resolution.

Is there any general consensus in the community on this issue so I can be sure I'm fulfilling the community's preference?
My vote would be for 1. It's disruptive, but not that disruptive. I almost always initialize a bool with true or false, not with 1 or 0.

The array handling is probably the only part that would be painful, but we could handle that the same way we deprecated octal numbers:

bools!"01001101"; => [false, true, false, false, true, true, false, true];

-Steve
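A hypothetical CTFE sketch of Steve's `bools!` helper (the name comes from his post; no such symbol exists in Phobos today):

```d
// Hypothetical helper: convert a string of '0'/'1' digits into a
// bool[] at compile time via CTFE.
enum bools(string s) = ()
{
    bool[] r;
    foreach (c; s)
    {
        assert(c == '0' || c == '1', "bools!: expected only '0' or '1'");
        r ~= c == '1';
    }
    return r;
}();

void main()
{
    static assert(bools!"01001101" ==
        [false, true, false, false, true, true, false, true]);
}
```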
Nov 14
parent Michael V. Franklin <slavo5150 yahoo.com> writes:
On Tuesday, 14 November 2017 at 13:17:22 UTC, Steven 
Schveighoffer wrote:

 The array handling is probably the only part that would be 
 painful. but we could handle that the same way we deprecated 
 octal numbers:

 bools!"01001101"; => [false, true, false, false, true, true, 
 false, true];
Thanks for chiming in. `bool[] boolValues = cast(bool[])[0,1,0,1]` will still work fine under option 1. It's only *implicit* casting that's being proposed for deprecation.

Mike
Nov 15