
digitalmars.D.announce - dmd 1.046 and 2.031 releases

reply Walter Bright <newshound1 digitalmars.com> writes:
Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip
Jul 05 2009
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Ooooh... looks very nice.  Thanks again, Walter. :)

Incidentally, the links to Final Switch Statement and Case Range
Statement in the changelog for 2.031 are broken.
Jul 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Daniel Keep wrote:
 Ooooh... looks very nice.  Thanks again, Walter. :)
Actually, a lot of people worked on this release, not just me.
 
 Incidentally, the links to Final Switch Statement and Case Range
 Statement in the changelog for 2.031 are broken.
Jul 05 2009
next sibling parent Daniel Keep <daniel.keep.lists gmail.com> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 Ooooh... looks very nice.  Thanks again, Walter. :)
Actually, a lot of people worked on this release, not just me.
True; but it's getting harder to keep track of. How about this? Thanks, mixin(reduce!"a~`, `~b"(D_CONTRIBUTORS)).
Jul 05 2009
prev sibling parent Tim Matthews <tim.matthews7 gmail.com> writes:
Walter Bright wrote:
 Daniel Keep wrote:
 Ooooh... looks very nice.  Thanks again, Walter. :)
Actually, a lot of people worked on this release, not just me.
 Incidentally, the links to Final Switch Statement and Case Range
 Statement in the changelog for 2.031 are broken.
You quoted that but still missed that you've mixed up your a href links with the display names. Thanks a lot for this release.
Jul 06 2009
prev sibling next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 05 Jul 2009 22:05:10 -0700, Walter Bright wrote:

 Something for everyone here.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
The -deps= switch is helpful, but can we also have a "-nogen" switch so that a compile is done but no object files are created? Kind of like the "-c" switch, which does a compile but no linking. Then we can use "-deps=dep.txt -nogen" to get the dependency data so build tools can work out what actually needs to be compiled.

And in that vein, a hash (eg CRC32, MD5, SHA256) of the files used by DMD would be nice to see in the 'deps' file. It would help build tools detect which files have been modified.

May I make a small syntax suggestion for the deps format? Instead of enclosing a path in parentheses, and using ':' as a field delimiter, have the first (and last) character of each line be the field delimiter to use in that line. The delimiter would be guaranteed to never be part of any of the fields' characters. That way, we don't need escape characters and parsing the text is greatly simplified. Also, simplifying the paths by resolving the ".." and "." would be nice. eg.

!std.stdio!c:\dmd\dmd\src\phobos\std\stdio.d!public!std.format!c:\dmd\dmd\src\phobos\std\format.d!

If this is ok can I submit a patch?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
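To illustrate, a tiny sketch (the helper name is hypothetical; this is not the proposed patch) of how a line in the suggested format could be parsed, given that the line's first character names its own field delimiter:

```d
import std.array : split;

// Hypothetical helper: parse one line of the proposed deps format,
// e.g.  !std.stdio!c:\dmd\dmd\src\phobos\std\stdio.d!public!
// The first (and last) character of the line is the field delimiter.
string[] parseDepsLine(string line)
{
    if (line.length < 2 || line[0] != line[$ - 1])
        return null;                      // malformed line
    auto delim = line[0 .. 1];            // delimiter as a 1-char string
    return line[1 .. $ - 1].split(delim); // no escaping needed
}
```

Since the delimiter is promised never to occur inside a field, the split needs no escape handling at all, which is the whole point of the proposal.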
Jul 05 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
The deps thing comes from the LDC group. They've been relying on it 
as-is, so they'd need to agree on any changes.
Jul 05 2009
parent Christian Kamm <check-ldc commits.com> writes:
Walter Bright Wrote:
 The deps thing comes from the LDC group. They've been relying on it 
 as-is, so they'd need to agree on any changes.
Actually, it's from Tomasz' xfBuild: http://wiki.team0xf.com/index.php?n=Tools.XfBuild . As it is intended as a generally useful dependency format, though, I'm sure he'll be open to suggestions.
Jul 06 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
What about pragmas? Aren't both rebuild/dsss and Build using them for 
linking? It seems to me, you would still have to parse the source files for 
those.
Jul 06 2009
prev sibling next sibling parent reply BCS <none anon.com> writes:
Hello Derek,

 The -deps= switch is helpful, but can we also have a "-nogen" switch
 so that a compile is done but no object files are created. 
look at: -o-
Jul 06 2009
parent Derek Parnell <derek psych.ward> writes:
On Mon, 6 Jul 2009 15:03:20 +0000 (UTC), BCS wrote:

 Hello Derek,
 
 The -deps= switch is helpful, but can we also have a "-nogen" switch
 so that a compile is done but no object files are created. 
look at: -o-
Thanks, I've never noticed that switch before. Excellent.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
prev sibling parent Tom S <h3r3tic remove.mat.uni.torun.pl> writes:
Derek Parnell wrote:
 Then we can use "-deps=dep.txt -nogen" to get the dependency data so build
 tools can then work out what needs to actually be compiled. And in that
 vein, a hash (eg CRC32, MD5, SHA256) of the files used by DMD would be
 nice to see in the 'deps' file. Would help build tools detect which files
 have been modified.
I think this should be the job of the build tool, not the compiler. For example, xfBuild uses the compiler-generated dependency files to keep track of its own project database containing dependencies and file modification times. I guess I'll be adding hashes as well :)

Why a separate file? When doing incremental builds, you'll only pass some of the project's modules to the compiler, so the deps file would not contain everything. The proper approach is to parse it and update the project database with it.
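A sketch of what hash-based change detection in a build tool's project database could look like (illustrative names, not xfBuild's actual API; uses present-day Phobos modules):

```d
import std.digest.md : md5Of;
import std.file : read;

alias Digest = ubyte[16];   // an MD5 digest

// Hypothetical helper: returns true when the file's content hash
// differs from the one recorded in the project database `db`,
// and records the latest hash either way.
bool isModified(string path, ref Digest[string] db)
{
    Digest now = md5Of(cast(const(ubyte)[]) read(path));
    auto old = path in db;
    db[path] = now;
    return old is null || *old != now;
}
```

Unlike modification times, a content hash is unaffected by touching a file without changing it, so spurious rebuilds are avoided.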
 May I make a small syntax suggestion for the deps format. Instead of
 enclosing a path in parentheses, and using ':' as a field delimiter, have
 the first (and last) character of each line be the field delimiter to use
 in that line. The delimiter would be guaranteed to never be part of any of
 the fields' characters. That way, we don't need escape characters and
 parsing the text is greatly simplified.
I don't think the parsing is currently very complicated at all, but I guess YMMV. I'd argue that the current format is easier to generate and more human-readable than your proposed syntax. The latter might also be harder to process by UNIXy tools like grep or cut.
 Also, simplifying the paths by resolving the ".." and "." would be nice. 
Yea, that would be nice.

-- 
Tomasz Stachowiak
http://h3.team0xf.com/
h3/h3r3tic on #D freenode
Jul 07 2009
prev sibling next sibling parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 05 Jul 2009 22:05:10 -0700, Walter Bright wrote:

 Something for everyone here.
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
One of the very much appreciated updates here is "Implicit integral conversions that could result in loss of significant bits are no longer allowed." An excellent enhancement, thank you.

But I am confused as this below compiles without complaint...

-----------
import std.stdio;
void main()
{
   byte iii;
   ubyte uuu = 250;
   iii = uuu;
   writefln("%s %s", iii, uuu);
}
-----------

Output is ...

-6 250

But I expected the compiler to complain that an unsigned value cannot be implicitly converted to a signed value, as that results in loss of *significant* bits.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 05 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 One of the very much appreciated updates here is "Implicit integral
 conversions that could result in loss of significant bits are no longer
 allowed.". An excellent enhancement, thank you.
Thank Andrei for that, he was the prime mover behind it.
 
 But I am confused as this below compiles without complaint...
 -----------
 import std.stdio;
 void main()
 {
    byte iii;
    ubyte uuu = 250;
    iii = uuu;
    writefln("%s %s", iii, uuu);
 }
 -----------
 
 Output is ...
 -6 250
 
 But I expected the compiler to complain that an unsigned value cannot be
 implicitly converted to a signed value as that results in loss of
 *significant* bits.
We tried for a long time to come up with a sensible way to deal with the signed/unsigned dichotomy. We finally gave that up as unworkable. Instead, we opted for a method of significant bits, *not* how those bits are interpreted.

-6 and 250 are the same bits in byte and ubyte; the difference is interpretation.
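The bit-identity can be seen directly in a minimal snippet mirroring Derek's example, with the reinterpretation made explicit:

```d
import std.stdio;

void main()
{
    ubyte u = 250;            // bits: 1111_1010
    byte  b = cast(byte) u;   // same bits, now read as two's complement
    writefln("%s %s", b, u);  // prints: -6 250
}
```

No bits are created or destroyed by the conversion; only the reading of the top bit as a sign changes.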
Jul 05 2009
parent reply Derek Parnell <derek psych.ward> writes:
On Sun, 05 Jul 2009 23:35:24 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 One of the very much appreciated updates here is "Implicit integral
 conversions that could result in loss of significant bits are no longer
 allowed.". An excellent enhancement, thank you.
Thank Andrei for that, he was the prime mover behind it.
Yes, our English language is poor. I should have said "thank yous" ;-)
 But I am confused as this below compiles without complaint...
 -----------
 import std.stdio;
 void main()
 {
    byte iii;
    ubyte uuu = 250;
    iii = uuu;
    writefln("%s %s", iii, uuu);
 }
 -----------
 
 Output is ...
 -6 250
 
 But I expected the compiler to complain that an unsigned value cannot be
 implicitly converted to a signed value as that results in loss of
 *significant* bits.
We tried for a long time to come up with a sensible way to deal with the signed/unsigned dichotomy. We finally gave that up as unworkable. Instead, we opted for a method of significant bits, *not* how those bits are interpreted. -6 and 250 are the same bits in byte and ubyte, the difference is interpretation.
I am disappointed. I hope that you haven't stopped working on a solution to this though, as allowing D to silently permit bugs it could prevent is not something we are hoping for.

I can see that the argument so far hinges on the meaning of "significant". I was hoping that a 'sign' bit would have been significant.

As for "the same bits in X and Y, the difference is interpretation", this is something that can be selective. For example ...

----------
short iii;
struct U {align (1) byte a; byte b;}
U uuu;
iii = uuu;
----------

The bits in 'uuu' can be accommodated in 'iii', so why not allow implicit conversion? Yes, that is a rhetorical question. Because we know that the struct means something different to the scalar 'short', conversion via bit-mapping is not going to be valid in most cases. However, we also know that a signed value is not the same as an unsigned value even though they have the same number of bits; that is, the compiler already knows how to interpret those bits.

I'm struggling to see why the compiler cannot just disallow any signed<->unsigned implicit conversion? Is it a matter of backward compatibility again?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 05 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 I'm struggling to see why the compiler cannot just disallow any
 signed<->unsigned implicit conversion? Is it a matter of backward
 compatibility again?
What's the signed-ness of 5? When you index a pointer, is the index signed or unsigned?
Jul 06 2009
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 00:11:26 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 I'm struggling to see why the compiler cannot just disallow any
 signed<->unsigned implicit conversion? Is it a matter of backward
 compatibility again?
What's the signed-ness of 5?
Positive. A positive number can be assigned to an 'int' if there is no size issue. What's the problem that I'm obviously missing?
 When you index a pointer, is the index signed or unsigned?
An index can be either. What's the problem here?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 On Mon, 06 Jul 2009 00:11:26 -0700, Walter Bright wrote:
 
 Derek Parnell wrote:
 I'm struggling to see why the compiler cannot just disallow any
 signed<->unsigned implicit conversion? Is it a matter of backward
 compatibility again?
What's the signed-ness of 5?
Positive. A positive number can be assigned to an 'int' if there is no size issue.
It can also be an unsigned.
 What's the problem that I'm obviously missing?
  
 When you index a pointer, is the index signed or unsigned?
An index can be either. What's the problem here?
auto x = p1 - p2;

What's the type of x?
Jul 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 14:13:45 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Derek Parnell wrote:
 On Mon, 06 Jul 2009 00:11:26 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 I'm struggling to see why the compiler cannot just disallow any
 signed<->unsigned implicit conversion? Is it a matter of backward
 compatibility again?
What's the signed-ness of 5?
Positive. A positive number can be assigned to an 'int' if there is no size issue.
It can also be an unsigned.
 What's the problem that I'm obviously missing?

 When you index a pointer, is the index signed or unsigned?
An index can be either. What's the problem here?
auto x = p1 - p2;

What's the type of x?
ptrdiff_t, signed counterpart of size_t
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 auto x = p1 - p2;

 What's the type of x?
ptrdiff_t, signed counterpart of size_t
Do you really want an error if you go:

size_t y = p1 - p2;

?
Jul 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 14:28:38 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Denis Koroskin wrote:
 auto x = p1 - p2;

 What's the type of x?
ptrdiff_t, signed counterpart of size_t
Do you really want an error if you go: size_t y = p1 - p2; ?
Of course, what sense does it make when p2 > p1? I'd put an assert and made a cast explicit, if a size_t is so badly needed for ptr difference:

assert(p1 >= p2);
size_t y = cast(size_t)p1 - p2;
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 I'd put an assert and made a cast explicit, if a size_t is so 
 badly needed for ptr difference:
 
 assert(p1 >= p2);
 size_t y = cast(size_t)p1 - p2;
Aside from the typo in that code (!) the problem with casts is they are a sledgehammer approach. Casts should be minimized because they *hide* typing problems in the code. The more casts in the code, the more the type-checking abilities of the compiler are disabled. I suspect this will *hide* more bugs than it reveals.

The reality is that most integers used in programs are positive and relatively small. int and uint are equally correct for these, and people tend to use both in a mish-mash. Trying to build a barrier between them that requires explicit casting to overcome is going to require a lot of casts that accomplish nothing other than satisfying a nagging, annoying compiler.

I've used such a compiler - Pascal back in the early 80s. All the casts it required me to insert basically sucked (and never revealed a single bug). When I discovered C with its sensible system of implicit casting, it was like putting on dry clothes after being soaked out in the cold rain.
Jul 06 2009
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Walter Bright wrote:
 Denis Koroskin wrote:
 I'd put an assert and mad a case explicit, if there is a size_t is so
 badly needed for ptr difference:

 assert(p1 >= p2);
 size_t y = cast(size_t)p1 - p2;
Aside from the typo in that code (!) the problem with casts is they are a sledgehammer approach. Casts should be minimized because they *hide* typing problems in the code. The more casts in the code, the more the type-checking abilities of the compiler are disabled. I suspect this will *hide* more bugs than it reveals.

The reality is that most integers used in programs are positive and relatively small. int and uint are equally correct for these, and people tend to use both in a mish-mash. Trying to build a barrier between them that requires explicit casting to overcome is going to require a lot of casts that accomplish nothing other than satisfying a nagging, annoying compiler.

I've used such a compiler - Pascal back in the early 80s. All the casts it required me to insert basically sucked (and never revealed a single bug). When I discovered C with its sensible system of implicit casting, it was like putting on dry clothes after being soaked out in the cold rain.
In the context of a sign-sensitive language,

assert(p1 >= p2);
size_t y = cast(size_t)p1 - p2;

looks to me like it is equivalent to

size_t y = p1 - p2;

in the context of a sign-insensitive language. The difference in behavior is that the former has a runtime assert, which is arguably useful. The difference in aesthetics/maintainability is that the former has that undesirable cast in there.

Perhaps we can have the best of both worlds, and just make the latter work but automatically insert the given runtime assert while in debug/non-release mode. So in D2 the code

size_t y = p1 - p2;

would become

assert(p1 >= p2);
size_t y = p1 - p2;

during compilation. Sure, having negative indices around would eventually crash the program anyway, but it's really helpful to have the program crash as close to the bug's location as possible.

(Also not seeing that typo. dmd seems to think it's alright syntactically, and don't we want p1 to be greater?)
Jul 06 2009
prev sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 14:28:38 +0400, Walter Bright
<newshound1 digitalmars.com> wrote:

 Denis Koroskin wrote:
 auto x = p1 - p2;

 What's the type of x?
ptrdiff_t, signed counterpart of size_t
Do you really want an error if you go: size_t y = p1 - p2; ?
Of course, what sense does it make when p2 > p1? I'd put an assert and made a cast explicit, if a size_t is so badly needed for ptr difference:

assert(p1 >= p2);
size_t y = cast(size_t)p1 - p2;
Jul 06 2009
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 03:13:45 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 On Mon, 06 Jul 2009 00:11:26 -0700, Walter Bright wrote:
 
 Derek Parnell wrote:
 I'm struggling to see why the compiler cannot just disallow any
 signed<->unsigned implicit conversion? Is it a matter of backward
 compatibility again?
What's the signed-ness of 5?
Positive. A positive number can be assigned to an 'int' if there is no size issue.
It can also be an unsigned.
Which is a positive value, right? Can you think of any unsigned value which is also negative?
 What's the problem that I'm obviously missing?
  
 When you index a pointer, is the index signed or unsigned?
An index can be either. What's the problem here?
auto x = p1 - p2;

What's the type of x?
Is that what you meant by "index a pointer"? Anyhow, it is a signed value. The difference between any two random memory addresses can be positive or negative. Whatever the 'signedness' of 'x' is, the expression "p2 + x == p1" must be true. If p1 is 0 and p2 is uint.max then 'x' must still be able to hold (-uint.max).
 Denis Koroskin wrote:
 auto x = p1 - p2;

 What's the type of x?
ptrdiff_t, signed counterpart of size_t
Do you really want an error if you go: size_t y = p1 - p2;
Yes I do.

size_t y = cast(size_t)p1 - p2;  -- No error.
ptrdiff_t y = p1 - p2;           -- No error.
size_t y = p1 - p2;              -- Error.

Safety is supposed to be enhanced by using D, is it not?

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 Safety is supposed to be enhanced by using D, is it not?
See my post to Denis. Requiring too many casts reduces safety by essentially disabling the static type checking system.
Jul 06 2009
parent Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 11:02:12 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 Safety is supposed to be enhanced by using D, is it not?
See my post to Denis. Requiring too many casts reduces safety by essentially disabling the static type checking system.
I totally agree that cast() should be avoided, almost at all costs. A better way in this situation is to use a variable that can accommodate the range of possible values. Only use the cast() construct if you are deliberately doing something other than normal.

For example, if you know that the difference between two addresses will always be less than a 16-bit value AND you are deliberately storing the difference in a 'short', then using a cast() is a good idea as it alerts the code reader to this unusual situation.

short x = cast(short)(p1 - p2);

However,

auto x = p1 - p2;

should imply that 'x' is able to hold any value from -(uintptr_t.max) to uintptr_t.max inclusive.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
prev sibling next sibling parent reply MIURA Masahiro <echochamber gmail.com> writes:
Thanks for the new release!  Are case ranges limited to 256 cases?

% cat -n foo.d
     1  import std.conv;
     2  import std.stdio;
     3
     4  void main(string[] args)
     5  {
     6      int i = to!int(args[1]);
     7
     8      switch (i) {
     9      case int.min: .. case -1:   // line 9
    10          writefln("negative");
    11          break;
    12      case 0:
    13          writefln("zero");
    14          break;
    15      default:
    16          writefln("positive");
    17          break;
    18      }
    19  }
% dmd foo.d
foo.d(9): Error: more than 256 cases in case range
%
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
MIURA Masahiro wrote:
 Thanks for the new release!  Are case ranges limited to 256 cases?
Yes.
Jul 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 12:19:47 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 MIURA Masahiro wrote:
 Thanks for the new release!  Are case ranges limited to 256 cases?
Yes.
Does it compare on case-by-case basis? Up to 256 comparisons?
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Denis Koroskin wrote:
 Does it compare on case-by-case basis? Up to 256 comparisons?
What do you mean? Obj2asm will show what it is doing.
Jul 06 2009
parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 14:12:40 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Denis Koroskin wrote:
 Does it compare on case-by-case basis? Up to 256 comparisons?
What do you mean? Obj2asm will show what it is doing.
I mean, will it translate

switch (i) {
    case 0: .. case 9: doSomething();
}

into

if (i == 0 || i == 1 || i == 2 || etc) doSomething();

or into

if (i >= 0 && i <= 9) doSomething();
Jul 06 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Denis Koroskin wrote:
 On Mon, 06 Jul 2009 14:12:40 +0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 Denis Koroskin wrote:
 Does it compare on case-by-case basis? Up to 256 comparisons?
What do you mean? Obj2asm will show what it is doing.
I mean, will it translate

switch (i) {
    case 0: .. case 9: doSomething();
}

into

if (i == 0 || i == 1 || i == 2 || etc) doSomething();

or into

if (i >= 0 && i <= 9) doSomething();
I'm sure it translates it to:

case 0: case 1: case 2: case 3: case 4:
case 5: case 6: case 7: case 8: case 9:

That's obvious from the "maximum of 256 cases allowed". And also you can see that in the source code. :)

Statement *CaseRangeStatement::semantic(Scope *sc)
...
/* This works by replacing the CaseRange with an array of Case's.
 *
 * case a: .. case b: s;
 *    =>
 * case a:
 * [...]
 * case b:
 *     s;
 */
Jul 06 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Walter Bright escribio':
 MIURA Masahiro wrote:
 Thanks for the new release!  Are case ranges limited to 256 cases?
Yes.
Why?
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Ary Borenszweig wrote:
 Walter Bright escribio':
 MIURA Masahiro wrote:
 Thanks for the new release!  Are case ranges limited to 256 cases?
Yes.
Why?
To avoid dealing with it in the back end for the moment. The back end will die if you pass it 3,000,000 case statements :-)
Jul 06 2009
parent BCS <ao pathlink.com> writes:
Reply to Walter,

 Ary Borenszweig wrote:
 
 Walter Bright escribio':
 
 MIURA Masahiro wrote:
 
 Thanks for the new release!  Are case ranges limited to 256 cases?
 
Yes.
Why?
To avoid dealing with it in the back end for the moment. The back end will die if you pass it 3,000,000 case statements :-)
I get your point. OTOH I rather suspect the back end will die if you give it 3,000,000 of just about anything.
Jul 06 2009
prev sibling next sibling parent reply =?UTF-8?B?IuOBruOBl+OBhOOBiyAobm9zaGlpa2EpIg==?= writes:
Thank you for the great work, Walter and all the other contributors.

But I am a bit disappointed with the CaseRangeStatement syntax.
Why is it
    case 0: .. case 9:
instead of
    case 0 .. 9:

With the latter notation, ranges can be easily used together with 
commas, for example:
    case 0, 2 .. 4, 6 .. 9:

And CaseRangeStatement, being inconsistent with other syntaxes using the 
.. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
Shouldn't D make use of another operator to express ranges that include 
the endpoints as Ruby or Perl6 does?
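The inconsistency being described can be seen by putting the two notations side by side; a minimal sketch (function and variable names are illustrative):

```d
void classify(int x)
{
    foreach (i; 0 .. 10)       // visits 0 through 9; the endpoint 10 is excluded
    {
    }

    switch (x)
    {
        case 0: .. case 9:     // matches 0 through 9; the endpoint 9 is INCLUDED
            break;
        default:
            break;
    }
}
```

So the same `..` token is exclusive in slices and ForeachRangeStatements but inclusive in case ranges, which is the source of the complaint.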
Jul 06 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.
 
 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
 
 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:
 
 And CaseRangeStatement, being inconsistent with other syntaxes using the 
 .. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
 Shouldn't D make use of another operator to express ranges that include 
 the endpoints as Ruby or Perl6 does?
I think this was hashed out ad nauseam in the n.g. D does introduce another operator, the :..case operator <g>.
Jul 06 2009
next sibling parent reply grauzone <none example.net> writes:
Walter Bright wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
Or

case [0..10]:

? Compatible with how list slicing works. Ah yes, bikeshed issue, but my solution is more beautiful.

Also, Walter, did you ever think about doing something about the fall-through-by-default issue? Of course in a way that preserves C compatibility.
Jul 06 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Also, Walter, did you ever think about doing something about the 
 fall-through-by-default issue? Of course in a way that preserves C 
 compatibility.
There have always been much more pressing issues.
Jul 06 2009
prev sibling next sibling parent reply Tim Matthews <tim.matthews7 gmail.com> writes:
grauzone wrote:
 Walter Bright wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
Or case [0..10]: ? Compatible with how list slicing works. Ah yes, bikeshed issue, but my solution is more beautiful. Also, Walter, did you ever think about doing something about the fall-through-by-default issue? Of course in a way that preserves C compatibility.
Do you mean this? http://digitalmars.com/d/2.0/statement.html#FinalSwitchStatement
Jul 06 2009
parent reply grauzone <none example.net> writes:
Tim Matthews wrote:
 grauzone wrote:
 Walter Bright wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
Or case [0..10]: ? Compatible with how list slicing works. Ah yes, bikeshed issue, but my solution is more beautiful. Also, Walter, did you ever think about doing something about the fall-through-by-default issue? Of course in a way that preserves C compatibility.
Do you mean this? http://digitalmars.com/d/2.0/statement.html#FinalSwitchStatement
No. Also, this final switch feature seems to be only marginally useful, and normal switch statements do the same, just at runtime. So much for "more pressing issues" but it's his language and not mine so I'll shut up.
Jul 06 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 No. Also, this final switch feature seems to be only marginally useful, 
 and normal switch statements do the same, just at runtime. So much for 
 "more pressing issues" but it's his language and not mine so I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
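A minimal sketch of the scenario being described, using final switch semantics as documented (names here are illustrative):

```d
enum Color { red, green, blue }

void paint(Color c)
{
    final switch (c)   // compile error if any Color member lacks a case
    {
        case Color.red:   break;
        case Color.green: break;
        case Color.blue:  break;
    }
    // Add a `yellow` member to Color and this function stops compiling
    // until a matching `case Color.yellow:` is added here.
}
```

Because the check happens at compile time, every switch site over the enum is flagged the moment the enum grows, with no grepping required.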
Jul 06 2009
parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs. They are that evil kind of bug where you can stare at the exact lines of code that cause it and remain completely clueless until the epiphany hits. This is no good, unless the compiler can be rewritten to induce epiphanies. "much less of an issue" T_T
Jul 06 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Chad J wrote:
 Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it.

I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it...

Andrei
Jul 06 2009
next sibling parent reply "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 22:48:07 +0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Chad J wrote:
 Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it... Andrei
Reuse goto?
Jul 06 2009
next sibling parent Ary Borenszweig <ary esperanto.org.ar> writes:
Denis Koroskin wrote:
 On Mon, 06 Jul 2009 22:48:07 +0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Chad J wrote:
 Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it... Andrei
Reuse goto?
I was thinking "continue case;" But of course this discussion is pointless because it's bikeshed idontknow...
Jul 06 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Denis Koroskin wrote:
 Reuse goto?
So any case-labeled code should end with a control flow statement that transfers control elsewhere? That sounds like a great idea. Fall-through is so rare and so rarely intended, it makes sense to require the programmer to state the intent explicitly via a goto case. Andrei
Jul 06 2009
parent reply Jesse Phillips <jessekphillips gmail.com> writes:
On Mon, 06 Jul 2009 14:38:53 -0500, Andrei Alexandrescu wrote:

 Denis Koroskin wrote:
 Reuse goto?
So any case-labeled code should end with a control flow statement that transfers control elsewhere? That sounds like a great idea. Fall-through is so rare and so rarely intended, it makes sense to require the programmer to state the intent explicitly via a goto case. Andrei
The goto method already works; the only change needed would be to not have fall-through as the default. http://digitalmars.com/d/2.0/statement.html#GotoStatement
Jul 06 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Jesse Phillips escribió:
 On Mon, 06 Jul 2009 14:38:53 -0500, Andrei Alexandrescu wrote:
 
 Denis Koroskin wrote:
 Reuse goto?
So any case-labeled code should end with a control flow statement that transfers control elsewhere? That sounds like a great idea. Fall-through is so rare and so rarely intended, it makes sense to require the programmer to state the intent explicitly via a goto case. Andrei
The goto method already works, the only change needed would be to not have fallthru default. http://digitalmars.com/d/2.0/statement.html#GotoStatement
But that's kind of redundant: case 1: goto case 11; case 11: goto case 111; case 111: goto case 1111; case 1111: doIt(); don't you think? If you change the case expression, you must change it twice. Why not: case 1: continue case; case 11: continue case; etc.?
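As an aside, the grammar already allows a plain goto case; with no target expression, which transfers to the next case and avoids naming the label twice (a sketch, assuming dmd 2.031 semantics):

```d
import std.stdio;

void doIt() { writeln("doIt"); }

void f(int x)
{
    switch (x)
    {
        case 1:    goto case;   // fall into the next case, no target repeated
        case 11:   goto case;
        case 111:  goto case;
        case 1111: doIt(); break;
        default:   break;
    }
}
```

Here changing a case expression needs only one edit, since no goto names it.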
Jul 06 2009
next sibling parent BCS <none anon.com> writes:
Hello Ary,

 But that's kind of redundant:
 
 case 1: goto case 11;
 case 11: goto case 111;
 case 111: goto case 1111;
 case 1111:
 doIt();
 don't you think?
 
case 1, 11, 111, 1111: doIt();
 If you change the case expression, you must change it twice.
 
 Why not:
 
 case 1: continue case;
 case 11: continue case;
 etc.?
 
target. For that matter, it's debatable if going from one case label to an immediately following one even constitutes a fall-through, considering they are attached to the following statement rather than being statements in their own right.
Jul 06 2009
prev sibling parent reply KennyTM~ <kennytm gmail.com> writes:
Ary Borenszweig wrote:
 Jesse Phillips escribió:
 On Mon, 06 Jul 2009 14:38:53 -0500, Andrei Alexandrescu wrote:

 Denis Koroskin wrote:
 Reuse goto?
So any case-labeled code should end with a control flow statement that transfers control elsewhere? That sounds like a great idea. Fall-through is so rare and so rarely intended, it makes sense to require the programmer to state the intent explicitly via a goto case. Andrei
The goto method already works, the only change needed would be to not have fallthru default. http://digitalmars.com/d/2.0/statement.html#GotoStatement
But that's kind of redundant: case 1: goto case 11; case 11: goto case 111; case 111: goto case 1111; case 1111: doIt(); don't you think?
Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .
 
 If you change the case expression, you must change it twice.
 
 Why not:
 
 case 1: continue case;
 case 11: continue case;
 
 etc.?
Jul 07 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
KennyTM~ Wrote:
 Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .
That compromise design looks good to be adopted by D too :-) Bye, bearophile
Jul 07 2009
parent Jesse Phillips <jessekphillips gmail.com> writes:
On Tue, 07 Jul 2009 11:05:31 -0400, bearophile wrote:

 KennyTM~ Wrote:
 Maybe http://msdn.microsoft.com/en-us/vcsharp/aa336815.aspx .
That compromise design looks good to be adopted by D too :-) Bye, bearophile
For which we have, case 1, 2, 3: writeln("I believe");
Jul 07 2009
prev sibling parent reply Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Andrei Alexandrescu wrote:
 Chad J wrote:
 These bugs always take me no less than 2 hours to find, unless I am
 specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it... Andrei
Don't marginalize something that's important! The syntax is arbitrary. Just pick something that works. Also, IIRC, the main argument in favor of fallthrough behavior was compatibility with C code. Now that we have "final switch", we can, with reasonable certainty, assume that no C coder has written "final switch(...) ...". Thus we are allowed to choose a syntax that's different without breaking anything. And for those few people that want to write Duff's device, they can just use the old style switch-case. <rambling and syntax> I'm personally a bit biased to the haXe switch-case statements that I've been using for the past few months. In haXe you don't even have to write break; at the end, it will always exit the statement at the end of a case-block. It also has that behavior added in final switch where switching on an enum requires you to be exhaustive about your handling of the possibilities. It has worked well. If it isn't powerful enough, there's always "goto label;" where label is either some external label also worked well. So something like this: final switch( foo ) { case 0: goto 1; // emulate the fallthrough case 1: writefln("Got a zero or a one."); case 2: writefln("This isn't supposed to happen."); } I wouldn't complain if it was done some other way though. Just killing the fallthrough bugs would make me so much happier. </rambling and syntax>
Jul 06 2009
next sibling parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Chad J wrote:
 Andrei Alexandrescu wrote:
 Chad J wrote:
 These bugs always take me no less than 2 hours to find, unless I am
 specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it... Andrei
Don't marginalize something that's important! The syntax is arbitrary. Just pick something that works. Also, IIRC, the main argument in favor of fallthrough behavior was compatibility with C code. Now that we have "final switch", we can, with reasonable certainty, assume that no C coder has written "final switch(...) ...". Thus we are allowed to choose a syntax that's different without breaking anything. And for those few people that want to write Duff's device, they can just use the old style switch-case. <rambling and syntax> I'm personally a bit biased to the haXe switch-case statements that I've been using for the past few months. In haXe you don't even have to write break; at the end, it will always exit the statement at the end of a case-block. It also has that behavior added in final switch where switching on an enum requires you to be exhaustive about your handling of the possibilities. It has worked well. If it isn't powerful enough, there's always "goto label;" where label is either some external label also worked well. So something like this: final switch( foo ) { case 0: goto 1; // emulate the fallthrough case 1: writefln("Got a zero or a one."); case 2: writefln("This isn't supposed to happen."); } I wouldn't complain if it was done some other way though. Just killing the fallthrough bugs would make me so much happier. </rambling and syntax>
Err, forgot to show usage in the example. auto foo = 0; final switch( foo ) { case 0: goto 1; // emulate the fallthrough case 1: writefln("Got a zero or a one."); case 2: writefln("This isn't supposed to happen."); } // "Got a zero or a one." is printed.
Jul 06 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Chad J wrote:
 Andrei Alexandrescu wrote:
 Chad J wrote:
 These bugs always take me no less than 2 hours to find, unless I am
 specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it... Andrei
Don't marginalize something that's important! The syntax is arbitrary. Just pick something that works.
I've posted here and also sent private message to Walter asking him what he thinks of requiring each case to end with a control flow statement.
 Also, IIRC, the main argument in favor of fallthrough behavior was
 compatibility with C code.  Now that we have "final switch", we can,
 with reasonable certainty, assume that no C coder has written "final
 switch(...) ...".  Thus we are allowed to choose a syntax that's
 different without breaking anything.  And for those few people that want
 to write Duff's device, they can just use the old style switch-case.
That's not the restriction we're having. The C-compatibility restriction is to not have code that compiles in both C and D and has different semantics in the two languages. So requiring all case-labeled sections in a switch statement to end with a control flow statement would leave some code that compiles in C but not in D. That is acceptable.
 <rambling and syntax>
 I'm personally a bit biased to the haXe switch-case statements that I've
 been using for the past few months.  In haXe you don't even have to
 write break; at the end, it will always exit the statement at the end of
 a case-block.  It also has that behavior added in final switch where
 switching on an enum requires you to be exhaustive about your handling
 of the possibilities.  It has worked well.  If it isn't powerful enough,
 there's always "goto label;" where label is either some external label

 also worked well.
 
 So something like this:
 
 final switch( foo )
 {
 	case 0: goto 1; // emulate the fallthrough
 	case 1: writefln("Got a zero or a one.");
 	case 2: writefln("This isn't supposed to happen.");
 }
 
 I wouldn't complain if it was done some other way though.  Just killing
 the fallthrough bugs would make me so much happier.
 </rambling and syntax>
There is a goto case statement in D already. The only change we need to effect is to require it for fall-through code. Andrei
Jul 06 2009
parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Andrei Alexandrescu wrote:
 [awesome stuff]
 
 
 Andrei
Jul 06 2009
prev sibling next sibling parent reply BCS <ao pathlink.com> writes:
Reply to Chad,

 Walter Bright wrote:
 
 The fall-through thing, though, is purely local and so much less of
 an issue.
 
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs. They are that evil kind of bug where you can stare at the exact lines of code that cause it and remain completely clueless until the epiphany hits. This is no good, unless the compiler can be rewritten to induce epiphanies. "much less of an issue" T_T
The much worse class of bugs is the ones that you can be staring at the location of the bug, have an epiphany about what the bug is, and still not be able to tell if that is the case: the bug is here, but caused by a line of code that could be anywhere.
Jul 06 2009
parent Walter Bright <newshound1 digitalmars.com> writes:
BCS wrote:
 The much worse class of bugs is the ones that you can be staring at the 
 location of the bug, have an epiphany about what the bug is and still 
 not be able to tell if that is the case: the bug is here, but caused by a 
 line of code that could be anywhere.
If I understand that correctly, it's when you alter one line or declaration in one part of the code, and it silently breaks another piece of code far away in some other piece of not-obviously-related code. I tend to agree. It's why we don't hardcode array dimensions outside of the declaration - we started out in C by using a #define for it, and then in D encode the array dimension as part of the type. A local issue is one that has no adverse effects outside of the local context. The fall-through would be this. A non-local one is one like adding an enum case, because it can silently break code far away. D makes some significant progress in reducing the non-local effects of changes, and is one of the big improvements over using C and C++.
Jul 06 2009
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 13:47:44 -0500, Andrei Alexandrescu wrote:

 Chad J wrote:
 Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it...
"too much aggravation" for whom? Certainly not for the coder, IMO. Consider this syntax suggestion ... int x = 1; switch (x; fallthru) { case 1: write(1); case 2: write(2); } Output: 12 int x = 1; switch (x; break) { case 1: write(1); case 2: write(2); } Output: 1 int x = 1; switch (x; fallthru) { case 1: write(1); break; case 2: write(2); } Output: 1 int x = 1; switch (x; break) { case 1: write(1); fallthru; case 2: write(2); } Output: 12 The default case "switch (x) {" would be the same as "switch (x; fallthru) {". One new keyword, and not a standard English word, allows all bases to be covered. Too much aggravation for whom? By the way, the above syntax format suggestion is because I see that the syntax for 'switch' is ... switch ( Expression ) ScopeStatement and ScopeStatement is either a BlockStatement or a NonEmptyStatement. Meaning that switch (i) j = k; is valid syntax! Why is that? -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Mon, 06 Jul 2009 13:47:44 -0500, Andrei Alexandrescu wrote:
 
 Chad J wrote:
 Walter Bright wrote:
 grauzone wrote:
 No. Also, this final switch feature seems to be only marginally
 useful, and normal switch statements do the same, just at runtime. So
 much for "more pressing issues" but it's his language and not mine so
 I'll shut up.
The final switch deals with a problem where you add an enum member in one file and then have to find and update every switch statement that uses that enum. There's no straightforward way to find them to ensure the case gets added to each switch. It's solving a similar problem that symbolic constants do. The fall-through thing, though, is purely local and so much less of an issue.
huh? These bugs always take me no less than 2 hours to find, unless I am specifically looking for fall-through bugs.
I agree. Probably a good option would be to keep on requiring break, but also requiring the user to explicitly specify they want fallthrough in the rare case when they do want it. I'd love to use "continue" for that but it's already "occupied" by cases like while (...) switch (...). Requiring !break or ~break would work but is a bit too cute. Adding a new keyword or a whole new switch statement is too much aggravation. I guess we'll have to live with it...
"too much aggravation" for whom? Certainly not for the coder, IMO. Consider this syntax suggestion ...
[snip] Well, I think I'd call that aggravation. Andrei
Jul 06 2009
parent Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 14:42:12 -0500, Andrei Alexandrescu wrote:

 Well, I think I'd call that aggravation.
Ok, if we do not have to bend to the C winds, then the default would be to have switch NOT fall through. That way we do not have to code "break;" at all. Is that aggravation? And then for those rare times when falling through is required, recycle "goto" or any other existing keyword - (I hear that "static" is always available) - to effect a fall-through behaviour. Anyhow, sorry to have been a nuisance again. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 06 2009
prev sibling parent Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 Walter Bright wrote:
 のしいか (noshiika) wrote:
 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
Or case [0..10]: ? Compatible with how list slicing works. Ah yes, bikeshed issue, but my solution is more beautiful.
No, it isn't compatible. [0..10] for slices does not include the 10, while the case range does. Using Andrei's syntax clearly makes the point that it is different, and not confusingly similar to something very different.
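The asymmetry Walter points out can be shown side by side (a minimal sketch; both endpoints of a case range match, while a slice's upper bound is excluded):

```d
void main()
{
    int hits = 0;
    foreach (x; 0 .. 10)        // foreach range: 0 .. 10 excludes 10
    {
        switch (x)
        {
            case 0: .. case 9:  // case range: includes both 0 and 9
                ++hits;
                break;
            default:
                break;
        }
    }
    assert(hits == 10);          // all of 0 through 9 matched

    auto a = [1, 2, 3, 4, 5];
    assert(a[0 .. 2] == [1, 2]); // slice: element at index 2 excluded
}
```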
Jul 06 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
Walter Bright Wrote:

 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.
 
 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
 
 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:
 
 And CaseRangeStatement, being inconsistent with other syntaxes using the 
 .. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
 Shouldn't D make use of another operator to express ranges that include 
 the endpoints as Ruby or Perl6 does?
I think this was hashed out ad nauseum in the n.g.
Hardly. There seemed to mostly be complaints about it with Andrei saying things like "I can't believe you don't see the elegance of the syntax". In the end, Andrei commented that he shouldn't involve the community in such small changes and went silent.
Jul 06 2009
next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Jason House:
 Hardly. There seemed to mostly be complaints about it with Andrei saying
things like "I can't believe you don't see the elegance of the syntax". In the
end, Andrei commented that he shouldn't involve the community in such small
changes and went silent.<
He was wrong. Even very intelligent people now and then do the wrong thing. Bye, bearophile
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Jason House:
 Hardly. There seemed to mostly be complaints about it with Andrei saying
things like "I can't believe you don't see the elegance of the syntax". In the
end, Andrei commented that he shouldn't involve the community in such small
changes and went silent.<
He was wrong. Even very intelligent people now and then do the wrong thing.
Of course the latter statement is true, but is in no way evidence supporting the former. About the former, in that particular case I was right. Andrei
Jul 06 2009
parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
Andrei Alexandrescu wrote:
 bearophile wrote:
 Jason House:
 Hardly. There seemed to mostly be complaints about it with Andrei
 saying things like "I can't believe you don't see the elegance of the
 syntax". In the end, Andrei commented that he shouldn't involve the
 community in such small changes and went silent.<
He was wrong. Even very intelligent people now and then do the wrong thing.
Of course the latter statement is true, but is in no way evidence supporting the former. About the former, in that particular case I was right. Andrei
Now, now. Let's all play nicely together... I don't like the `case a:..case b:` syntax. It doesn't matter. The functionality is in place and the syntax has a sane explanation and rationale. Unless there's some egregious problem aside from being a bit on the ugly side [1], it's just bike-shedding. And I'm so, SOOO sick of bike-shedding. Please, let's all just move on. [1] like me. My girlfriend disagrees with me on this, though. *I* think she's crazy, but I'm not exactly inclined to try and change her mind. :)
Jul 06 2009
next sibling parent reply BCS <none anon.com> writes:
Hello Daniel,

 [1] like me. My girlfriend disagrees with me on this,
You have a girlfriend that even bothers to have an opinion on a programming issue, lucky bastard.
 though. *I* think she's crazy, but I'm not exactly
 inclined to try and change her mind. :)
That reminds me of a quote: "If you assume a woman's mind is supposed to work like a man's, the only conclusion you can come to is they are *all* crazy." OTOH you can switch the perspective on that around and I expect it's just as true. It should be pointed out that, almost by definition, you can't have 50% of the world be crazy.
Jul 07 2009
next sibling parent reply Daniel Keep <daniel.keep.lists gmail.com> writes:
BCS wrote:
 Hello Daniel,
 
 [1] like me. My girlfriend disagrees with me on this,
You have a girlfriend that even bothers to have an opinion on a programming issue, lucky bastard.
No, when I said "like me", I meant: "Unless there's some egregious problem aside from being a bit on the ugly side (like me), ..." My girlfriend is actually a nurse, but I could ask for her opinion on case ranges if you want. :)
 though. *I* think she's crazy, but I'm not exactly
 inclined to try and change her mind. :)
That reminds me of a quote: "If you assume a woman's mind is supposed to work like a man's, the only conclusion you can come to is they are *all* crazy." OTOH you can switch the perspective on that around and I expect it's just as true. It should be pointed out that, almost by definition, you can't have 50% of the world be crazy.
My opinion is based more on that, with respect to the above issue, she seems to think differently to more or less everyone else I've ever met, including myself.
Jul 07 2009
parent BCS <none anon.com> writes:
Hello Daniel,

 BCS wrote:
 
 Hello Daniel,
 
 [1] like me. My girlfriend disagrees with me on this,
 
You have a girlfriend that even bothers to have an opinion on a programming issue, lucky bastard.
No, when I said "like me", I meant: "Unless there's some egregious problem aside from being a bit on the ugly side (like me), ..." My girlfriend is actually a nurse, but I could ask for her opinion on case ranges if you want. :)
Odd, the exact same words read differently around midnight. :b
Jul 07 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
BCS wrote:
 Hello Daniel,
 
 [1] like me. My girlfriend disagrees with me on this,
You have a girlfriend that even bothers to have an opinion on a programming issue, lucky bastard.
My understanding is that he's referring to a different issue.
 though. *I* think she's crazy, but I'm not exactly
 inclined to try and change her mind. :)
That reminds me of a quote: "If you assume a woman's mind is supposed to work like a man's, the only conclusion you can come to is they are *all* crazy."
To paraphrase: "If you assume a woman's mind is supposed to work like a man's, you won't get laid. Ever." Andrei
Jul 07 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Andrei Alexandrescu escribió:
 BCS wrote:
 Hello Daniel,

 [1] like me. My girlfriend disagrees with me on this,
You have a girlfriend that even bothers to have an opinion on a programming issue, lucky bastard.
My understanding is that he's referring to a different issue.
 though. *I* think she's crazy, but I'm not exactly
 inclined to try and change her mind. :)
That reminds me of a quote: "If you assume a woman's mind is supposed to work like a man's, the only conclusion you can come to is they are *all* crazy."
To paraphrase: "If you assume a woman's mind is supposed to work like a man's, you won't get laid. Ever."
lol :)
Jul 07 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Daniel Keep, el  7 de julio a las 15:40 me escribiste:
 
 
 Andrei Alexandrescu wrote:
 bearophile wrote:
 Jason House:
 Hardly. There seemed to mostly be complaints about it with Andrei
 saying things like "I can't believe you don't see the elegance of the
 syntax". In the end, Andrei commented that he shouldn't involve the
 community in such small changes and went silent.<
He was wrong. Even very intelligent people now and then do the wrong thing.
Of course the latter statement is true, but is in no way evidence supporting the former. About the former, in that particular case I was right. Andrei
Now, now. Let's all play nicely together... I don't like the `case a:..case b:` syntax. It doesn't matter. The functionality is in place and the syntax has a sane explanation and rationale. Unless there's some egregious problem aside from being a bit on the ugly side [1], it's just bike-shedding. And I'm so, SOOO sick of bike-shedding.
I think Walter is right, this syntax introduces an inconsistency in the ".." operator semantics, which is used with inclusive meaning sometimes (case) and with exclusive meaning other times (slices and foreach). -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ----------------------------------------------------------------------------
Jul 07 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jason House wrote:
 Walter Bright Wrote:
 
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using the 
 .. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
 Shouldn't D make use of another operator to express ranges that include 
 the endpoints as Ruby or Perl6 does?
I think this was hashed out ad nauseum in the n.g.
Hardly. There seemed to mostly be complaints about it with Andrei saying things like "I can't believe you don't see the elegance of the syntax".
There's been also suggestions of inferior syntax, as are now. Andrei
Jul 06 2009
prev sibling parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
のしいか (noshiika) escribió:
 Thank you for the great work, Walter and all the other contributors.
 
 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
 
 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:
 
 And CaseRangeStatement, being inconsistent with other syntaxes using the 
 .. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
 Shouldn't D make use of another operator to express ranges that include 
 the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
Jul 06 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Ary Borenszweig wrote:
 のしいか (noshiika) escribió:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using 
 the .. operator, i.e. slicing and ForeachRangeStatement, includes the 
 endpoint.
 Shouldn't D make use of another operator to express ranges that 
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
And what did those people use when they wanted to express a range of case labels? In other words, where did those people turn their heads towards? Andrei
Jul 06 2009
parent reply Ary Borenszweig <ary esperanto.org.ar> writes:
Andrei Alexandrescu wrote:
 Ary Borenszweig wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using 
 the .. operator, i.e. slicing and ForeachRangeStatement, includes the 
 endpoint.
 Shouldn't D make use of another operator to express ranges that 
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
And what did those people use when they wanted to express a range of case labels? In other words, where did those people turn their heads towards?
They probably used an if. But I think it's not about that. If D didn't have the possibility to define case range statements, it would be better. Now there's a possibility to do that, but with an ugly syntax (you'll find out when this newsgroup starts receiving one or two complaints about this each month, not to mention there were already a lot of complaints). You can find other "ugly" things by looking at repetitive mails to this newsgroup. Also, there's a limitation of just 256 cases. What's that? Where does that limitation come from? That looks weak.
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Ary Borenszweig wrote:
 Andrei Alexandrescu wrote:
 Ary Borenszweig wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using 
 the .. operator, i.e. slicing and ForeachRangeStatement, includes 
 the endpoint.
 Shouldn't D make use of another operator to express ranges that 
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
And what did those people use when they wanted to express a range of case labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
 But I think it's not about that. If D didn't have the possibility to 
 define case range statements, it would be better. Now there's a 
 possibility to do that, but with an ugly syntax (you'll find out when 
 this newsgroup will receive about one or two complaints about this each 
 month, not to mention there were already a lot of complaints).
This is speculation. And the complaints usually were accompanied with inferior suggestions for "improving" things. Everyone wanted to add some incomprehensible, inconsistent, or confusing syntax to do ranged cases, as long as it wasn't the one I'd chosen.
 You can 
 find other "ugly" things by looking at repetitive mails to this newsgroup.
I and others don't find the added syntax ugly in the least.
 Also, there's a limitation of just 256 cases. What's that? Where does that 
 limitation come from? That looks weak.
That's just an implementation limitation that future versions will eliminate. Andrei
Jul 06 2009
next sibling parent Chad J <chadjoan __spam.is.bad__gmail.com> writes:
Andrei Alexandrescu wrote:
 Ary Borenszweig wrote:
 Andrei Alexandrescu wrote:
 Ary Borenszweig wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes
 using the .. operator, i.e. slicing and ForeachRangeStatement,
 includes the endpoint.
 Shouldn't D make use of another operator to express ranges that
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
And what did those people use when they wanted to express a range of case labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
 But I think it's not about that. If D didn't have the possibility to
 define case range statements, it would be better. Now there's a
 possibility to do that, but with an ugly syntax (you'll find out when
 this newsgroup will receive about one or two complaints about this
 each month, not to mention there were already a lot of complaints).
This is speculation. And the complaints usually were accompanied with inferior suggestions for "improving" things. Everyone wanted to add some incomprehensible, inconsistent, or confusing syntax to do ranged cases, as long as it wasn't the one I'd chosen.
 You can find other "ugly" things by looking at repetitive mails to
 this newsgroup.
I and others don't find the added syntax ugly in the least.
 Also, there's a limitation of just 256 cases. What's that? Where does that
 limitation come from? That looks weak.
That's just an implementation limitation that future versions will eliminate. Andrei
Shoving is the way. Pushing is inferior.
Jul 06 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu wrote to me on July 6 at 10:44:
And what did those people use when they wanted to express a range of case 
labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
Yes, but when you try to make people move to a different language, you have to do considerably better. When I have to choose between something well known, well supported and mature and something that is, at least, unknown (even if it's mature and well supported, I won't know that until I use it a lot, so it's a risk), I want it to be really good, not just barely good. Details like this one are not deal breakers on their own, but when there are a lot of them, it tends to make the language look ugly as a whole. What bugs me the most is that there are a lot of new constructs in the language that are plain ugly from the start. D is buying its own baggage (__traits, enum for manifest constants, now the case range, and I'm sure I'm forgetting something else) with no reason...

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Jul 06 2009
next sibling parent reply aarti_pl <aarti interia.pl> writes:
Leandro Lucarella writes:
 Andrei Alexandrescu wrote to me on July 6 at 10:44:
 And what did those people use when they wanted to express a range of case 
 labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
Yes, but when you try to make people move to a different language, you have to do considerably better. When I have to choose between something well known, well supported and mature and something that is, at least, unknown (even if it's mature and well supported, I won't know that until I use it a lot, so it's a risk), I want it to be really good, not just barely good. Details like this one are not deal breakers on their own, but when there are a lot of them, it tends to make the language look ugly as a whole. What bugs me the most is that there are a lot of new constructs in the language that are plain ugly from the start. D is buying its own baggage (__traits, enum for manifest constants, now the case range, and I'm sure I'm forgetting something else) with no reason...
...
* foreach_reverse
* access to variadic function parameters with _argptr & _arguments
* mess with the compile-time is expression (sorry, that's my favorite ugliness :-] )

I believe that these issues are greatly underestimated by D language designers...

BR
Marcin Kuszczak (aarti_pl)
Jul 06 2009
parent Leandro Lucarella <llucax gmail.com> writes:
aarti_pl wrote to me on July 7 at 00:27:
 Leandro Lucarella writes:
Andrei Alexandrescu wrote to me on July 6 at 10:44:
And what did those people use when they wanted to express a range of case 
labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
Yes, but when you try to make people move to a different language, you have to do considerably better. When I have to choose between something well known, well supported and mature and something that is, at least, unknown (even if it's mature and well supported, I won't know that until I use it a lot, so it's a risk), I want it to be really good, not just barely good. Details like this one are not deal breakers on their own, but when there are a lot of them, it tends to make the language look ugly as a whole. What bugs me the most is that there are a lot of new constructs in the language that are plain ugly from the start. D is buying its own baggage (__traits, enum for manifest constants, now the case range, and I'm sure I'm forgetting something else) with no reason...
...
* foreach_reverse
* access to variadic function parameters with _argptr & _arguments
* mess with the compile-time is expression (sorry, that's my favorite ugliness :-] )
But these ones at least are not new (I'm sure they were new at some point, but by now they are baggage). The new constructs are future baggage.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Jul 07 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu wrote to me on July 6 at 10:44:
 And what did those people use when they wanted to express a range of case 
 labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
Yes, but when you try to make people move to a different language, you have to do considerably better. When I have to choose between something well known, well supported and mature and something that is, at least, unknown (even if it's mature and well supported, I won't know that until I use it a lot, so it's a risk), I want it to be really good, not just barely good.
That goes without saying.
 Details like this one are not deal breakers on their own, but when there are
 a lot of them, it tends to make the language look ugly as a whole.
You are just saying it's ugly. I don't think it's ugly. Walter doesn't think it's ugly. Other people don't think it's ugly. Many of the people who said it's ugly actually came up with proposals that are arguably ugly, hopelessly confusing, or both. Look at only some of the rehashed proposals of today: the genial "case [0 .. 10]:" which is horribly inconsistent, and the awesome "case 0: ... case 10:", also inconsistent (and gratuitously so) because ellipses today only end lists without having something to their right. The authors claim those are better than the current syntax, and one even claimed "beauty", completely ignoring the utter lack of consistency with the rest of the language. I don't claim expertise in language design, so I wish there were a few good experts in this group.
 What bugs me the most is that there are a lot of new constructs in the language
 that are plain ugly from the start. D is buying its own baggage
 (__traits, enum for manifest constants, now the case range, and I'm sure
 I'm forgetting something else) with no reason...
I agree there are ugly constructs in D, and is-expressions would near the top of the list (particularly the absolutely awful is(T : T[])), but you have no case (heh) with the switch statement. Andrei
Jul 06 2009
next sibling parent reply grauzone <none example.net> writes:
 You are just saying it's ugly. I don't think it's ugly. Walter doesn't 
 think it's ugly. Other people don't think it's ugly. Many of the people 
 who said it's ugly actually came up with proposals that are arguably 
 ugly, hopelessly confusing, or both. Look at only some of the rehashed 
 proposals of today: the genial "case [0 .. 10]:" which is horribly 
 inconsistent, and the awesome "case 0: ... case 10:", also inconsistent 
 (and gratuitously so) because ellipses today only end lists without 
 having something to their right. The authors claim those are better than 
 the current syntax, and one even claimed "beauty", completely ignoring 
 the utter lack of consistency with the rest of the language. I don't 
I oriented this on the syntax of array slices. Which work that way. Not inconsistent at all. It's also consistent with foreach(_; x..y). Other than that, I realize it's not that good of a choice and it's not elegant at all. But I think it's still better than some of the horrible language crimes (including yours) that are being forced into D. In any way, I think we should completely redesign the switch statement and give it a different syntax. No more C compatibility. No more Duff's device. We can keep the "old" switch statement for that.
Jul 06 2009
next sibling parent grauzone <none example.net> writes:
 In any way, I think we should completely redesign the switch statement 
 and give it a different syntax. No more C compatibility. No more Duff's 
 device. We can keep the "old" switch statement for that.
PS: we could add awesome stuff like pattern matching to this, which would make D much more like a functional language, which seems to be the new cool thing to do. We can also omit case labels completely, which will reduce repeated typing of keywords.
Jul 06 2009
prev sibling next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive. Creating such an inconsistency would sentence programmers to forever thinking "which way is it this time". To avoid such confusion an obviously different syntax is required.
Jul 06 2009
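[The inclusive/exclusive distinction Walter describes can be seen directly in code. A minimal sketch, not taken from the original posts; it assumes a D2 compiler with the 2.031 case-range feature, and the variable names are made up for illustration:]

```d
import std.stdio;

void main()
{
    int[] a = [0, 1, 2, 3, 4];

    // Slices and foreach ranges EXCLUDE the right endpoint:
    writeln(a[0 .. 3]);     // the slice holds 0, 1, 2 -- index 3 is excluded
    foreach (i; 0 .. 3)     // visits 0, 1, 2 only
        write(i, ' ');
    writeln();

    // A case range INCLUDES both endpoints:
    int x = 3;
    switch (x)
    {
        case 0: .. case 3:  // matches 0, 1, 2 AND 3
            writeln("between 0 and 3, inclusive");
            break;
        default:
            writeln("outside");
    }
}
```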
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:h2u735$sn8$1 digitalmars.com...
 grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive.
The current way has that inconsistency:

    variable .. variable    // exclusive end
    caseLabel .. caseLabel  // inclusive end

And yes, I know that's not how it's actually parsed, but that's how people visually parse it.

Ah the hell with it, I don't care any more: The *real* issue here is that the current switch, being based on C's, is horribly antiquated and what we really need is a comprehensive redesign incorporating some sort of generalized pattern matching. Like "case > 1, <= 10:" or something like Nemerle, or whatever. I don't care, as long as it doesn't continue to get trivialized as something that can be solved by tossing in a recycled ".." here, a recycled "final" there, etc.
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:h2u735$sn8$1 digitalmars.com...
 grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive.
The current way has that inconsistency:

    variable .. variable    // exclusive end
    caseLabel .. caseLabel  // inclusive end

And yes, I know that's not how it's actually parsed, but that's how people visually parse it.
I don't think so at all. There's a lot of punctuation that has different roles depending on the context. For example, ":" means key/value separator or ternary operator participant; "*" means multiplication or pointer dereference; "&" means taking address or binary "and"... plenty of examples. So you can't center on ".." and claim that it visually means the same thing even though the surrounding is different. You really have no argument here.
 Ah the hell with it, I don't care any more: The *real* issue here is that 
 the current switch, being based on C's, is horribly antiquated and what we 
 really need is a comprehensive redesign incorporating some sort of 
 generalized pattern matching. Like "case > 1, <= 10:" or something like 
 Nemerle, or whatever. I don't care, as long as it doesn't continue to get 
 trivialized as something that can be solved by tossing in a recycled ".." 
 here, a recycled "final" there, etc. 
On the full side of the glass, with the latest dmd release, the language has acquired some useful feature improvements and the implementation has fixed many bugs. Why the crankiness? Andrei
Jul 06 2009
next sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:h2udmf$1b0c$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:h2u735$sn8$1 digitalmars.com...
 grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive.
The current way has that inconsistency:

    variable .. variable    // exclusive end
    caseLabel .. caseLabel  // inclusive end

And yes, I know that's not how it's actually parsed, but that's how people visually parse it.
I don't think so at all. There's a lot of punctuation that has different roles depending on the context. For example, ":" means key/value separator or ternary operator participant; "*" means multiplication or pointer dereference; "&" means taking address or binary "and"... plenty of examples. So you can't center on ".." and claim that it visually means the same thing even though the surrounding is different. You really have no argument here.
Those examples are all cases where the meaning and context are wildly different between one use and the other. But with '..', both uses are very similar: "From xxx to (incl/excl) yyy". Big differences are ok, they stand out as obvious. Small differences can be more problematic. FWIW, even though I dislike it, I don't think it's a sky-falling issue or anything. I just don't think it's so "obviously great" as you and Walter see it. Basically, I see it as a questionable *but* acceptable solution *provided that* it's just a stop-gap in the interim before finally getting a completely re-thought switch/pattern-matcher.
 Ah the hell with it, I don't care any more: The *real* issue here is that 
 the current switch, being based on C's, is horribly antiquated and what 
 we really need is a comprehensive redesign incorporating some sort of 
 generalized pattern matching. Like "case > 1, <= 10:" or something like 
 Nemerle, or whatever. I don't care, as long as it doesn't continue to get 
 trivialized as something that can be solved by tossing in a recycled ".." 
 here, a recycled "final" there, etc.
On the full side of the glass, with the latest dmd release, the language has acquired some useful feature improvements and the implementation has fixed many bugs. Why the crankiness?
Because it's really hot in this room right now... ;) But seriously though, it's just this issue I'm cranky about. I can absolutely agree a lot has been incorporated... *squeak**squeak* says the wheel ;)
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:h2udmf$1b0c$1 digitalmars.com...
 Nick Sabalausky wrote:
 "Walter Bright" <newshound1 digitalmars.com> wrote in message 
 news:h2u735$sn8$1 digitalmars.com...
 grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive.
The current way has that inconsistency:

    variable .. variable    // exclusive end
    caseLabel .. caseLabel  // inclusive end

And yes, I know that's not how it's actually parsed, but that's how people visually parse it.
I don't think so at all. There's a lot of punctuation that has different roles depending on the context. For example, ":" means key/value separator or ternary operator participant; "*" means multiplication or pointer dereference; "&" means taking address or binary "and"... plenty of examples. So you can't center on ".." and claim that it visually means the same thing even though the surrounding is different. You really have no argument here.
Those examples are all cases where the meaning and context are wildly different between one use and the other. But with '..', both uses are very similar: "From xxx to (incl/excl) yyy". Big differences are ok, they stand out as obvious. Small differences can be more problematic.
You'd have an uphill battle using a counterfeit Swiss army knife against a battery of Gatling guns arguing that

    case 'a': .. case 'z':

is very similar to

    0 .. 10

That's actually much more different than e.g.

    a = b * c;

versus

    b * c;
 FWIW, Even though I dislike it, I don't think it's a sky-falling issue or 
 anything. I just don't think it's so "obviously great" as you and Walter see 
 it.
I'm not claiming it's obviously great. I do claim it's highly appropriate.
 Basically, I see it as a questionable *but* acceptable solution 
 *provided that* it's just a stop-gap in the interim before finally getting a 
 completely re-thought switch/pattern-matcher.
What is the question you are asking in the "questionable" part? Andrei
Jul 06 2009
parent Michel Fortin <michel.fortin michelf.com> writes:
On 2009-07-07 01:12:12 -0400, Andrei Alexandrescu 
<SeeWebsiteForEmail erdani.org> said:

 Nick Sabalausky wrote:
 Those examples are all cases where the meaning and context are wildly 
 different fbetween one use and the other. But with '..', both uses are 
 very similar: "From xxx to (incl/excl) yyy". Big differences are ok, 
 they stand out as obvious. Small differences can be more problematic.
You'd have an uphill battle using a counterfeit Swiss army knife against a battery of Gatling guns
 arguing that
 
 case 'a': .. case 'z':
 
 is very similar to
 
 0 .. 10
 
 That's actually much more different than e.g.
 
 a = b * c;
 
 versus
 
 b * c;
They aren't so much different if you consider "case 'a':" and "case 'z':" as two items joined by a "..", which I believe is the expected way to read it. In the first case "case 'a':" and "case 'z':" joined by a ".." means an inclusive range; in the second case "0" and "10" joined by a ".." means an exclusive one. With "b * c", the meaning is completely different depending on whether b is a type or not. If "b" is a type, you can't reasonably expect "b * c" to do a multiplication, and you'll get an error about it if that's what you're trying to do. Whereas with "case 'a': .. case 'b':" you can reasonably expect an exclusive range if you aren't too familiar with the syntax (and that's a reasonable expectation if you know about ranges), and you won't get an error if you mix things up; thus, clarity of the syntax becomes more important. I still think that having that syntax is better than nothing, but I do believe it's an inconsistency and that it may look ambiguous to someone unfamiliar with it. But my worst grief about that new feature is the restriction to 256 values, which is pretty limiting if you're writing a parser dealing with ranges of Unicode characters. I guess I'll have to continue using ifs for that.

-- 
Michel Fortin
michel.fortin michelf.com
http://michelf.com/
Jul 07 2009
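[The if-based workaround Michel mentions for ranges wider than the current case-range limit can be sketched like this. The helper name and the chosen ranges are mine, for illustration only, not from the thread:]

```d
import std.stdio;

// Classify a Unicode code point with plain if-chains instead of case
// ranges: a range like the CJK Unified Ideographs block spans roughly
// 21,000 values, far more than the 256-case limit discussed above.
string classify(dchar c)
{
    if (c >= '0' && c <= '9')
        return "digit";
    if (c >= 0x4E00 && c <= 0x9FFF)  // CJK Unified Ideographs
        return "cjk";
    return "other";
}

void main()
{
    writeln(classify('7'));   // digit
    writeln(classify('字'));  // cjk ('字' is U+5B57)
}
```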
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 21:59:54 -0500, Andrei Alexandrescu wrote:

 There's a lot of punctuation that has different 
 roles depending on the context. 
Agreed. No argument here. The meaning of punctuation depends on its context. Got it. However, that aside, the syntax you have chosen will have a rational explanation for its superiority. So can you explain in simple terms why

    CaseLabelInt .. CaseLabelInt  e.g. case 1: .. case 9:

is superior to

    case CaseRange:  e.g. case 1 .. 9:

given that

  CaseLabelInt ==> case IntegerExpression :
  CaseRange    ==> IntegerExpression .. IntegerExpression
 On the full side of the glass, with the latest dmd release, the language 
 has acquired some useful feature improvements and the implementation has 
 fixed many bugs.
Yes it truly has, and thank you very much to all contributors.
 Why the crankiness?
Because it is so demoralizing to point out 'warts' in D/DMD and be subsequently dismissed as ungrateful plebeians who are wasting the time of the patricians. Sorry, but you did ask.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Derek Parnell wrote:
 However, that aside, the syntax you have chosen will have a rational
 explanation for its superiority. So can you explain in simple terms why 
 
     CaseLabelInt .. CaseLabelInt  eg. case 1: .. case 9:
 
 is superior to
 
     case CaseRange:  eg. case 1 .. 9:
 
 given that
   CaseLabelInt ==> case IntegerExpression :
   CaseRange    ==> IntegerExpression .. IntegerExpression
Because

1.  case X..Y:

looks like

2.  foreach(e; X..Y)
3.  array[X..Y]

yet the X..Y has a VERY DIFFERENT meaning. (1) is inclusive of Y, and (2) and (3) are exclusive of Y. Having a very different meaning means it should have a distinctly different syntax.
Jul 06 2009
parent reply Derek Parnell <derek psych.ward> writes:
On Mon, 06 Jul 2009 21:29:43 -0700, Walter Bright wrote:

 Derek Parnell wrote:
 However, that aside, the syntax you have chosen will have a rational
 explanation for its superiority. So can you explain in simple terms why 
 
     CaseLabelInt .. CaseLabelInt  eg. case 1: .. case 9:
 
 is superior to
 
     case CaseRange:  eg. case 1 .. 9:
 
 given that
   CaseLabelInt ==> case IntegerExpression :
   CaseRange    ==> IntegerExpression .. IntegerExpression
Because 1. case X..Y: looks like 2. foreach(e; X..Y) 3. array[X..Y] yet the X..Y has a VERY DIFFERENT meaning. (1) is inclusive of Y, and (2) and (3) are exclusive of Y. Having a very different meaning means it should have a distinctly different syntax.
Thank you, but now I am confused ... Andrei just got through lecturing us that the meaning of punctuation is dependent upon context. So I think your example must be more like ...

Because

1.  case X..Y:

looks like

2.  foreach(e; X..Y)
3.  array[X..Y]
4.  case X: .. case Y:

yet the X..Y has a VERY DIFFERENT meaning. (1) is inclusive of Y, and (2) and (3) are exclusive of Y, and (4) is inclusive of Y ... oh, hang on...

Sorry, but I'm just not getting the "VERY DIFFERENT" part yet. Right now, D has ".." meaning exclude-end-value (2 and 3) AND it also has ".." meaning include-end-value (4), depending on context.

Ok, I admit that there is one subtle difference. Examples 2 and 3 are of the form

  IntExpression .. IntExpression

and example 4 is

  CaseLabelInt .. CaseLabelInt

but seriously, people are not going to notice that. We see the double-dot and think "range".

I know that this is not ever going to be changed, so I'm not arguing that it should.

(One of the most frequent bugs I have in my D programs is that I forget that X..Y excludes Y, because it's not natural to me to see text that looks like "the range X to Y" but means "the range X to Y-1".)

It seems that D would benefit from having a standard syntax format for expressing various range sets:
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
  Because
  
  1.	case X..Y:
  
  looks like
  
  2.	foreach(e; X..Y)
  3.	array[X..Y]
  4.     case X: .. case Y:
 
 
  yet the X..Y has a VERY DIFFERENT meaning. (1) is inclusive of Y, and 
  (2) and (3) are exclusive of Y, and (4) is inclusive of Y ... oh, hang
 on...
 
 Sorry, but I'm just not getting the "VERY DIFFERENT" part yet. Right now, D
 has ".." meaning exclude-end value (2. and 3.) AND it also has ".." meaning
 include-end value (4.), depending on context. 
I think here's the point where our thinking diverges. You say ".." has a meaning. No. It does not have any meaning. It's punctuation. Same way, "*", "&", ":", ",", etc. have no meaning in D all alone. They are tokens. They acquire a meaning by partaking in grammatical constructs. What does have arguably a meaning is "expression1 .. expression2". (Even that could be debatable because it's not a standalone expression). But anyway, the meaning that "expression1 .. expression2" acquires in array slices and foreach statements is uniform. Now we have the different construct "case expression1: .. case expression2:" That is evidently not "expression1 .. expression2", does not include it as a part, and shares essentially nothing except the ".." with it. That's my point.
 Ok, I admit that there is one subtle difference. Examples 2 and 3 are of
 the form 
 
   IntExpression .. IntExpression
 
 and example 4 is 
 
   CaseLabelInt .. CaseLabelInt
 
 but seriously, people are not going to notice that. We see double-dot and
 think "range". 
You should see "expression1 .. expression2" and think "range". There's very few tokens that make one think of only one thing when seeing them. One of them is "%" - makes one think of modulus, another is "^" meaning xor, another is "$" - array size... but that's about it.
 I know that this is not ever going to be changed so I'm not arguing that it
 should. 
 
 (One of the most frequent bugs I have in my D programs is that I forget
 that X..Y excludes Y because it's not natural to me to see text that looks
 like "the range X to Y" but means "the range X to Y-1".)
 
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens. Andrei
Jul 06 2009
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
 I'm afraid this would majorly mess with pairing of parens.
	I think Derek's point was to have *some* syntax to mean this, not
necessarily the one he showed (which he showed because I believe
that's the "standard" mathematical way to express it for English
speakers). For example, we could say that [] is always inclusive and
have another character which makes it exclusive like:
 a. Include begin Include end, i.e. [  a .. b  ]
 b. Include begin Exclude end, i.e. [  a .. b ^]
 c. Exclude begin Include end, i.e. [^ a .. b  ]
 d. Exclude begin Exclude end, i.e. [^ a .. b ^]

		Jerome

PS: If you *really* want messed parens pairing, try it with the
French convention: [] [[ ]] ][ ;)
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year:

=========================
I like:

   a .. b+1

to mean inclusive range.
=========================

Consider "+1]" a special symbol that means the range is to be closed to the right :o).

Andrei
Jul 07 2009
next sibling parent reply =?ISO-8859-1?Q?=22J=E9r=F4me_M=2E_Berger=22?= <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 J=E9r=F4me M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format f=
or
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not =
 necessarily the one he showed (which he showed because I believe=20
 that's the "standard" mathematical way to express it for English=20
 speakers). For example, we could say that [] is always inclusive and=20
 have another character which makes it exclusive like:
  a. Include begin Include end, i.e. [  a .. b  ]
  b. Include begin Exclude end, i.e. [  a .. b ^]
  c. Exclude begin Include end, i.e. [^ a .. b  ]
  d. Exclude begin Exclude end, i.e. [^ a .. b ^]
=20 I think Walter's message really rendered the whole discussion moot. Pos=
t=20
 of the year:
=20
 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D
 I like:
=20
    a .. b+1
=20
 to mean inclusive range.
 =3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D
=20
 Consider "+1]" a special symbol that means the range is to be closed to=
=20
 the right :o).
=20
Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers. Jerome --=20 mailto:jeberger free.fr http://jeberger.free.fr Jabber: jeberger jabber.fr
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers.
How does it not work for floating point numbers? Andrei
Jul 07 2009
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format
 for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
 I'm afraid this would majorly mess with pairing of parens.
 I think Derek's point was to have *some* syntax to mean this,
 not necessarily the one he showed (which he showed because I believe
 that's the "standard" mathematical way to express it for English
 speakers). For example, we could say that [] is always inclusive and
 have another character which makes it exclusive like:
  a. Include begin Include end, i.e. [  a .. b  ]
  b. Include begin Exclude end, i.e. [  a .. b ^]
  c. Exclude begin Include end, i.e. [^ a .. b  ]
  d. Exclude begin Exclude end, i.e. [^ a .. b ^]
 I think Walter's message really rendered the whole discussion moot.
 Post of the year:

 =========================
 I like:

    a .. b+1

 to mean inclusive range.
 =========================

 Consider "+1]" a special symbol that means the range is to be closed
 to the right :o).
 Ah, but:
	- This is inconsistent between the left and right limit;
	- This only works for integers, not for floating point numbers.
 How does it not work for floating point numbers?

	Is that a trick question? Depending on the actual value of b, you
might have b+1 == b (if b is large enough). Conversely, range a ..
b+1 may contain a lot of extra numbers I may not want to include
(like b+0.5)...

		Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax 
 format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers.
How does it not work for floating point numbers?
Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)...
It wasn't a trick question, or it was of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get in a lot of trouble because the running variable will be incremented. Andrei
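[Editor's aside: a minimal sketch making the pitfall concrete. Once a float is large enough, adding 1 is absorbed by rounding, so a running variable incremented by 1 stops advancing.]

```d
void main()
{
    float b = 1e8f;     // here the spacing between adjacent floats is 8
    float c = b + 1;    // the +1 is lost to rounding
    assert(c == b);

    // A loop incrementing by 1 would therefore spin forever:
    // for (float f = b; f <= c; ++f) { ... }  // ++f never changes f
}
```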
Jul 07 2009
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax
 format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
 I'm afraid this would majorly mess with pairing of parens.
 I think Derek's point was to have *some* syntax to mean this,
 not necessarily the one he showed (which he showed because I
 believe that's the "standard" mathematical way to express it for
 English speakers). For example, we could say that [] is always
 inclusive and have another character which makes it exclusive like:
  a. Include begin Include end, i.e. [  a .. b  ]
  b. Include begin Exclude end, i.e. [  a .. b ^]
  c. Exclude begin Include end, i.e. [^ a .. b  ]
  d. Exclude begin Exclude end, i.e. [^ a .. b ^]
 I think Walter's message really rendered the whole discussion moot.
 Post of the year:

 =========================
 I like:

    a .. b+1

 to mean inclusive range.
 =========================

 Consider "+1]" a special symbol that means the range is to be
 closed to the right :o).
 Ah, but:
	- This is inconsistent between the left and right limit;
	- This only works for integers, not for floating point numbers.
 How does it not work for floating point numbers?
	Is that a trick question? Depending on the actual value of b, you
 might have b+1 == b (if b is large enough). Conversely, range a .. b+1
 may contain a lot of extra numbers I may not want to include (like
 b+0.5)...

 It wasn't a trick question, or it was of sorts. If you iterate with e.g.
 foreach through a floating-point range that has b == b + 1, you're bound
 to get in a lot of trouble because the running variable will be
 incremented.

	Well:
	- A floating point range should allow you to specify the iteration
step, or else it should allow you to iterate through all numbers
that can be represented with the corresponding precision;
	- The second issue remains: what if I want to include b but not
b+ε for any ε>0?

		Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
  - A floating point range should allow you to specify the iteration 
 step, or else it should allow you to iterate through all numbers that 
 can be represented with the corresponding precision;
We don't have that, so you'd need to use a straight for statement.
  - The second issue remains: what if I want to include b but not b+ε for 
 any ε>0?
real a, b; ... for (real f = a; f <= b; update(f)) { } I'd find it questionable to use ranged for with floats anyway. Andrei
Jul 07 2009
parent reply "Jérôme M. Berger" <jeberger free.fr> writes:
Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
  - A floating point range should allow you to specify the iteration
 step, or else it should allow you to iterate through all numbers that
 can be represented with the corresponding precision;

 We don't have that, so you'd need to use a straight for statement.

	struct FloatRange {
		float begin, end, step;
		bool  includeBegin, includeEnd;
		int opApply (int delegate (ref float) dg) { whatever; }
		whatever;
	}

  - The second issue remains: what if I want to include b but not b+ε
 for any ε>0?

 real a, b;
 ...
 for (real f = a; f <= b; update(f)) {
 }

 I'd find it questionable to use ranged for with floats anyway.

	So would I. But a range of floats is useful for more than iterating
over it. Think interval arithmetic for example.

		Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
  - A floating point range should allow you to specify the iteration 
 step, or else it should allow you to iterate through all numbers that 
 can be represented with the corresponding precision;
We don't have that, so you'd need to use a straigh for statement.
struct FloatRange { float begin, end, step; bool includeBegin, includeEnd; int opApply (int delegate (ref float) dg) { whatever; } whatever; }
  - The second issue remains: what if I want to include b but not b+ε 
 for any ε>0?
real a, b; ... for (real f = a; f <= b; update(f)) { } I'd find it questionable to use ranged for with floats anyway.
So would I. But a range of floats is useful for more than iterating over it. Think interval arithmetic for example.
Cool. I'm positive that open ranges will not prevent you from implementing such a library (and from subsequently proposing it to Phobos :o)). Andrei
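[Editor's aside: a rough sketch of what such a library interval type might look like. The name `Interval` and its layout are hypothetical illustrations, not an actual Phobos API; addition here uses simple endpoint arithmetic.]

```d
// Hypothetical closed/open interval over doubles (illustration only).
struct Interval
{
    double lo, hi;               // endpoints, lo <= hi assumed
    bool closedLo, closedHi;     // whether each endpoint is included

    // Interval addition: [a,b] + [c,d] = [a+c, b+d]; an endpoint of the
    // sum is included only if both contributing endpoints are included.
    Interval add(Interval b)
    {
        return Interval(lo + b.lo, hi + b.hi,
                        closedLo && b.closedLo,
                        closedHi && b.closedHi);
    }
}
```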
Jul 07 2009
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax 
 format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers.
How does it not work for floating point numbers?
Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)...
It wasn't a trick question, or it was of sorts. If you iterate with e.g. foreach through a floating-point range that has b == b + 1, you're bound to get in a lot of trouble because the running variable will be incremented.
Well: - A floating point range should allow you to specify the iteration step, or else it should allow you to iterate through all numbers that can be represented with the corresponding precision; - The second issue remains: what if I want to include b but not b+ε for any ε>0? Jerome
I'd say that a floating point range requires a lazy interpretation, and should only get evaluated on an as-needed basis.

But clearly open, half-open, and closed intervals aren't the same kind of thing as ranges. They are more frequently used for making assertions about when something is true (or false). I.e., they're used as an integral part of standard mathematics, but hardly at all in computer science (except in VERY peculiar cases). In math one makes an assertion that, say, a particular equation holds for all members of an interval, and open or closed is only a statement about whether the end-points are included in the interval. Proof isn't usually by exhaustive calculation, but rather by more abstract reasoning.

It would be nice to be able to express mathematical reasoning as part of a computer program, but it's not something that's likely to be efficiently implementable, and certainly not executable. Mathematica can do that kind of thing, I believe, but it's a bit distant from a normal computer language.
Jul 08 2009
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 21:20:42 +0200, "Jérôme M. Berger" wrote:

 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format 
 for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
Ah, but: - This is inconsistent between the left and right limit; - This only works for integers, not for floating point numbers.
How does it not work for floating point numbers?
Is that a trick question? Depending on the actual value of b, you might have b+1 == b (if b is large enough). Conversely, range a .. b+1 may contain a lot of extra numbers I may not want to include (like b+0.5)... Jerome
If Andrei is not joking (the smiley notwithstanding) the "+1" doesn't mean add one to the previous expression, instead it means that the previous expression's value is the last value in the range set. Subtle, no? -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 07 2009
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 I think Walter's message really rendered the whole discussion moot. Post 
 of the year:
 =========================
 I like:
     a .. b+1
 to mean inclusive range.
That was my preferred solution, starting from months ago. Bye, bearophile
Jul 07 2009
prev sibling next sibling parent reply Bill Baxter <wbaxter gmail.com> writes:
2009/7/7 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 I think Walter's message really rendered the whole discussion moot. Post of
 the year:

 =========================
 I like:

    a .. b+1

 to mean inclusive range.
 =========================

Not everything is an integer.

--bb
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Bill Baxter wrote:
 2009/7/7 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 I think Walter's message really rendered the whole discussion moot. Post of
 the year:

 =========================
 I like:

   a .. b+1

 to mean inclusive range.
 =========================
Not everything is an integer.
Works with pointers too. Andrei
Jul 07 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Bill Baxter wrote:
 2009/7/7 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 I think Walter's message really rendered the whole discussion moot. 
 Post of
 the year:

 =========================
 I like:

   a .. b+1

 to mean inclusive range.
 =========================
Not everything is an integer.
Works with pointers too.
It works for the cases where an inclusive range makes sense.
Jul 07 2009
parent "Jérôme M. Berger" <jeberger free.fr> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 Bill Baxter wrote:
 2009/7/7 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 I think Walter's message really rendered the whole discussion moot.
 Post of
 the year:

 =========================
 I like:

    a .. b+1

 to mean inclusive range.
 =========================
 Not everything is an integer.
 Works with pointers too.
 It works for the cases where an inclusive range makes sense.

	Doesn't work with floats, which *do* make sense too...

		Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 14:16:12 -0500, Andrei Alexandrescu wrote:

 Bill Baxter wrote:
 2009/7/7 Andrei Alexandrescu <SeeWebsiteForEmail erdani.org>:
 I think Walter's message really rendered the whole discussion moot. Post of
 the year:

 =========================
 I like:

   a .. b+1

 to mean inclusive range.
 =========================
Not everything is an integer.
Works with pointers too.
A pointer is an integer because the byte it is referring to always has an integral address value. Pointers do not point to partial bytes. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 07 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de julio a las 13:18 me escribiste:
 Jérôme M. Berger wrote:
Andrei Alexandrescu wrote:
Derek Parnell wrote:
It seems that D would benefit from having a standard syntax format for
expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
What about bearophile response: what about x..uint.max+1? -- Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/ ---------------------------------------------------------------------------- GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05) ---------------------------------------------------------------------------- More than 50% of the people in the world have never made Or received a telephone call
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  7 de julio a las 13:18 me escribiste:
 Jérôme M. Berger wrote:
 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
 a. Include begin Include end, i.e. []
 b. Include begin Exclude end, i.e. [)
 c. Exclude begin Include end, i.e. (]
 d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed (which he showed because I believe that's the "standard" mathematical way to express it for English speakers). For example, we could say that [] is always inclusive and have another character which makes it exclusive like: a. Include begin Include end, i.e. [ a .. b ] b. Include begin Exclude end, i.e. [ a .. b ^] c. Exclude begin Include end, i.e. [^ a .. b ] d. Exclude begin Exclude end, i.e. [^ a .. b ^]
I think Walter's message really rendered the whole discussion moot. Post of the year: ========================= I like: a .. b+1 to mean inclusive range. ========================= Consider "+1]" a special symbol that means the range is to be closed to the right :o).
What about bearophile response: what about x..uint.max+1?
How often did you encounter that issue? Andrei
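[Editor's aside, spelling out the objection: with unsigned wraparound, the "+1" idiom cannot denote a range that includes the type's maximum value.]

```d
void main()
{
    uint b = uint.max;
    assert(b + 1 == 0);             // unsigned arithmetic wraps around
    // foreach (x; 0 .. b + 1) {}   // 0 .. 0: an empty range, not an inclusive one
}
```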
Jul 07 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 How often did you encounter that issue?
Please, let's be serious, and let's stop adding special cases to D, or they will kill the language. Lately I have seen too many special cases. For example, the current design of the integral rules seems bad: it has bugs and special cases from the start. The ".." used in case statements is another special case, even if Andrei is blind regarding that and doesn't see its problem.

Why not, for a change, stop implementing things, and start implementing a feature only after 55-60+% of the people think it's a good idea? Let's discuss features first, and let's not add any more half-baked things. Before adding a feature X let's discuss it; let's create a forum or place to keep a thread for each feature, plus a wiki-based text of the best solution found, etc. If not enough people like a solution then let's not add it. Better not to have a feature than to have a bad one; see Python, which even today misses basic things like a switch/case.

Bye,
bearophile
Jul 07 2009
next sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Andrei Alexandrescu:
 How often did you encounter that issue?
Please, let's be serious, and let's stop adding special cases to D, or they will kill the language.
Don't get me going about what could kill the language.
 Lately I have seen too many special
 cases. For example the current design of the rules of integral seems
 bad. It has bugs and special cases from the start.
Bugs don't imply that the feature is bad. The special cases are well motivated. Value range propagation as defined in D is principled and puts D on the right side of both safety and speed. It's better in this respect than the other languages mentioned: safer than C and C++, and requiring much fewer casts than Java and C#.

Andrei
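[Editor's aside: a sketch of what value range propagation buys in practice. The compiler tracks the possible range of an integer expression and allows a narrowing conversion without a cast when the result provably fits.]

```d
void main()
{
    uint a = 1000;
    ubyte b = a & 0xFF;   // OK: the masked result provably lies in 0 .. 255
    ubyte c = a % 64;     // OK: the remainder provably lies in 0 .. 63
    // ubyte d = a;       // error: 'a' might not fit in a ubyte
}
```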
Jul 07 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"bearophile" <bearophileHUGS lycos.com> wrote in message 
news:h3093m$2mu6$1 digitalmars.com...
 Before adding a feature X let's discuss them, ... If not enough people 
 like a solution then let's not add it.
Something like that was attempted once before. Andrei didn't like what we had to say, got huffy, and withdrew from the discussion. Stay tuned for the exciting sequel where the feature goes ahead as planned anyway, and our protagonists get annoyed that people still have objections to it.
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:h3093m$2mu6$1 digitalmars.com...
 Before adding a feature X let's discuss them, ... If not enough people 
 like a solution then let's not add it.
Something like that was attempted once before. Andrei didn't like what we had to say, got huffy, and withdrew from the discussion. Stay tuned for the exciting sequel where the feature goes ahead as planned anyway, and our protagonists get annoyed that people still have objections to it.
Put yourself in my place. What would you do? Honest. Sometimes I find it difficult to find the right mix of being honest, being technically accurate, being polite, and not wasting too much time explaining myself. Andrei
Jul 07 2009
parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 19:39:55 -0500, Andrei Alexandrescu wrote:

 Nick Sabalausky wrote:
 "bearophile" <bearophileHUGS lycos.com> wrote in message 
 news:h3093m$2mu6$1 digitalmars.com...
 Before adding a feature X let's discuss them, ... If not enough people 
 like a solution then let's not add it.
Something like that was attempted once before. Andrei didn't like what we had to say, got huffy, and withdrew from the discussion. Stay tuned for the exciting sequel where the feature goes ahead as planned anyway, and our protagonists get annoyed that people still have objections to it.
Put yourself in my place. What would you do? Honest. Sometimes I find it difficult to find the right mix of being honest, being technically accurate, being polite, and not wasting too much time explaining myself. Andrei
Ditto. We know that the development of the D language is not a democratic process, and that's fine. Really, it is. However, clear rationale for decisions made would go a long way to helping reduce dissent, as would some pre-announcements to avoid surprises. By the way, I appreciate that you guys are now closing off bugzilla issues before the release of their fix implementation. It's a good heads-up and demonstrates activity in between releases. Well done. -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 07 2009
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 20:13:45 +0200, "Jérôme M. Berger" wrote:

 Andrei Alexandrescu wrote:
 Derek Parnell wrote:
 It seems that D would benefit from having a standard syntax format for
 expressing various range sets;
  a. Include begin Include end, i.e. []
  b. Include begin Exclude end, i.e. [)
  c. Exclude begin Include end, i.e. (]
  d. Exclude begin Exclude end, i.e. ()
I'm afraid this would majorly mess with pairing of parens.
I think Derek's point was to have *some* syntax to mean this, not necessarily the one he showed
Thank you, Jérôme. I got too frustrated to explain it well enough.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 07 2009
prev sibling parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 On Mon, 06 Jul 2009 21:59:54 -0500, Andrei Alexandrescu wrote:
 
 There's a lot of punctuation that has different 
 roles depending on the context. 
Agreed. No argument here. The meaning of punctuation depends on its context. Got it. However, that aside, the syntax you have chosen will have a rational explanation for its superiority. So can you explain in simple terms why

  CaseLabelInt .. CaseLabelInt    e.g. case 1: .. case 9:

is superior to

  case CaseRange:                 e.g. case 1 .. 9:
It is superior because case 1 .. 9: raises the question of whether 9 is included or not. Consistency with a[1 .. 9] and foreach (x; 1 .. 9) suggests that 9 should not be included, while common usage of case statements suggests that 9 should be included.
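For illustration, here is the accepted syntax next to the exclusive-range forms it is deliberately distinguished from (a minimal sketch; it should compile with dmd 2.031 or later):

```d
import std.stdio;

// The case range statement includes BOTH endpoints.
string classify(int n)
{
    switch (n)
    {
        case 1: .. case 9:   // matches 1 through 9, inclusive
            return "in range";
        default:
            return "out";
    }
}

void main()
{
    assert(classify(9) == "in range");  // 9 IS included

    // By contrast, slice-style ranges exclude the right endpoint:
    int count;
    foreach (i; 1 .. 9)                 // iterates 1 through 8
        ++count;
    assert(count == 8);

    writeln("ok");
}
```

The second `case` keyword is what signals that, unlike `a[1 .. 9]` and `foreach (i; 1 .. 9)`, the endpoint is part of the range.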
 given that
   CaseLabelInt ==> case IntegerExpression :
   CaseRange    ==> IntegerExpression .. IntegerExpression
What's this?
 On the full side of the glass, with the latest dmd release, the language 
 has acquired some useful feature improvements and the implementation has 
 fixed many bugs.
Yes it truly has, and thank you very much to all contributors.
 Why the crankiness?
Because it is so demoralizing to point out 'warts' in D/DMD and be subsequently dismissed as ungrateful plebeians who are wasting the time of the patricians. Sorry, but you did ask.
I understand. Sorry about that. I am certain there's a misreading of attitude, though I clearly see how you could acquire that impression. Andrei
Jul 06 2009
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Walter Bright wrote:
 grauzone wrote:
 I oriented this on the syntax of array slices. Which work that way. 
 Not inconsistent at all. It's also consistent with foreach(_; x..y).
It would look consistent, but it would behave very differently. x..y for foreach and slices is exclusive of the y, while case x..y is inclusive. Creating such an inconsistency would sentence programmers to forever thinking "which way is it this time". To avoid such confusion an obviously different syntax is required.
This isn't a matter that's very important to me, as I rarely use case statements, but the suggestion made elsewhere of allowing restricted pattern matching of some sort, or concatenated logical tests, is appealing. Being able to test for (e.g.) case (< 5 & > j): would be very appealing. (I read that as case less than 5 and greater than j.) OTOH, I'm not at all sure that such a thing could be implemented efficiently. The places that I've usually found such things were in languages interpreted at run time.
Jul 08 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
grauzone wrote:
 You are just saying it's ugly. I don't think it's ugly. Walter doesn't 
 think it's ugly. Other people don't think it's ugly. Many of the 
 people who said it's ugly actually came up with proposals that are 
 arguably ugly, hopelessly confusing, or both. Look at only some of the 
 rehashed proposals of today: the genial "case [0 .. 10]:" which is 
 horribly inconsistent, and the awesome "case 0: ... case 10:", also 
 inconsistent (and gratuitously so) because ellipses today only end 
 lists without having something to their right. The authors claim those 
 are better than the current syntax, and one even claimed "beauty", 
 completely ignoring the utter lack of consistency with the rest of the 
 language. I don't 
I oriented this on the syntax of array slices. Which work that way.
No, it works differently because the slice is open to the right, whereas with switch one seldom wants to specify an open range.
 Not 
 inconsistent at all. It's also consistent with foreach(_; x..y).
No, it isn't consistent. It's a lose-lose proposition. If you want to make it consistent you'd need to have ['a' .. 'z'] exclude the 'z'. That would confuse people who expect ['a' .. 'z'] to contain 'z'. On the other hand, if you choose to include 'z' you will confuse people who expect behavior to be similar with that in arrays. Going with a syntax that uses ".." just as punctuation but otherwise firmly departs from the slice notation eliminates expectation of semantic similarity. And the presence of the second "case" firmly clarifies that the last label is to be included in the range, even to the first-time reader. There would be seldom a need to check the manual for that.
 Other than that, I realize it's not that good of a choice and it's not 
 elegant at all. But I think it's still better than some of your horrible 
 language crimes (including yours) that are being forced into D.
Thanks for emphasizing twice that it's about me. Yep, they're my horrible language crimes - and those definitely include mine :o). I genuinely appreciate the honesty, and to reciprocate, I don't think very highly of your competence either (as every other post of yours makes some technical mistake), and I find your attitude corrosive. Andrei
Jul 06 2009
parent grauzone <none example.net> writes:
 Thanks for emphasizing twice that it's about me. Yep, they're my 
 horrible language crimes - and those definitely include mine :o). I 
 genuinely appreciate the honesty, and to reciprocate, I don't think very 
 highly of your competence either (as every other post of yours makes 
 some technical mistake), and I find your attitude corrosive.
It was a typo, I meant to diss you only once. There are plenty of language crimes not caused by you, as well as good features which were proposed by you. I hope this makes you happy.
Jul 06 2009
prev sibling next sibling parent reply The Anh Tran <trtheanh gmail.com> writes:
Andrei Alexandrescu wrote:
 I agree there are ugly constructs in D, and is-expressions would near 
 the top of the list (particularly the absolutely awful is(T : T[])), but 
 you have no case (heh) with the switch statement.
 
 
 Andrei
Just a funny suggestion: could we change the is() expression to imperative style?

Now:

template Something(T, U, V)
    if ( is(T : T[]) && is(...) )
{
    alias T blah1;
    U blah2;
    class C {};
    struct S {};
}

In mirror universe:

template Something(T, U, V)
in
{
    static if ( T : T[] ) // static if here means if (is(...))
    {
        // may do something with T, make alias, change type
        // or check another constraint.
    }
    else
    {
        pragma(msg, "T must be Klingon");
        static assert(false); // Or return false; ????
    }

    static if ( U == int )
    {
        V = float; // ?????
    }

    return true;
}
body
{
    alias T blah1;
    U blah2;
    class C {};
    struct S {};
}
Jul 06 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
The Anh Tran wrote:
 Andrei Alexandrescu wrote:
 I agree there are ugly constructs in D, and is-expressions would near 
 the top of the list (particularly the absolutely awful is(T : T[])), 
 but you have no case (heh) with the switch statement.


 Andrei
Just a funny suggestion: could we change the is() expression to imperative style?

Now:

template Something(T, U, V)
    if ( is(T : T[]) && is(...) )
{
    alias T blah1;
    U blah2;
    class C {};
    struct S {};
}

In mirror universe:

template Something(T, U, V)
in
{
    static if ( T : T[] ) // static if here means if (is(...))
    {
        // may do something with T, make alias, change type
        // or check another constraint.
    }
    else
    {
        pragma(msg, "T must be Klingon");
        static assert(false); // Or return false; ????
    }

    static if ( U == int )
    {
        V = float; // ?????
    }

    return true;
}
body
{
    alias T blah1;
    U blah2;
    class C {};
    struct S {};
}
I wished for the longest time to simplify the often-used if(is(...)) syntax, but Walter said there are too many ambiguities involved if "is" gets dropped. Andrei
Jul 06 2009
prev sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu wrote on July 6 at 18:32:
 Leandro Lucarella wrote:
Andrei Alexandrescu wrote on July 6 at 10:44:
And what did those people use when they wanted to express a range of case 
labels? In other words, where did those people turn their heads towards?
They probably used an if.
So they used an inferior means to start with.
Yes, but when you try to make people move to a different language, you have to do considerably better. When I have to choose between something well known, well supported and mature, and something that is, at least, unknown (even if it's mature and well supported, I won't know that until I use it a lot, so it's a risk), I want it to be really good, not just barely good.
That goes without saying.
Details as this one are not deal breaker on their own, but when they are
a lot, it tends to make the language look ugly as a whole.
You are just saying it's ugly. I don't think it's ugly. Walter doesn't think it's ugly. Other people don't think it's ugly. Many of the people who said it's ugly actually came up with proposals that are arguably ugly, hopelessly confusing, or both. Look at only some of the rehashed proposals of today: the genial "case [0 .. 10]:" which is horribly inconsistent, and the awesome "case 0: ... case 10:", also inconsistent (and gratuitously so) because ellipses today only end lists without having something to their right. The authors claim those are better than the current syntax, and one even claimed "beauty", completely ignoring the utter lack of consistency with the rest of the language. I don't claim expertise in language design, so I wish there were a few good experts in this group.
Please read the thread at the D NG, the current syntax *is* inconsistent too.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Jul 07 2009
prev sibling parent reply "Lars T. Kyllingstad" <public kyllingen.NOSPAMnet> writes:
Ary Borenszweig wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using 
 the .. operator, i.e. slicing and ForeachRangeStatement, includes the 
 endpoint.
 Shouldn't D make use of another operator to express ranges that 
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
When the discussion first came up in the NG, I was a bit sceptical about Andrei's suggestion for the case range statement as well. Now, I definitely think it's the best choice, and it's only because I realised it can be written like this:

case 1:
..
case 4:
    // do stuff

Even though it's the same as case 1: .. case 4:, and even though adding those two newlines is just a visual change, it leaves (to me, at least) no doubt that this is an inclusive range even though the .. operator is used, simply because what I would otherwise write is:

case 1:
case 2:
case 3:
case 4:
    // do stuff

Also: Thanks for a great release, Walter, Andrei, Don, Sean and everyone else! (Who else is involved in core development of D2, by the way?) I am liking this language better and better the more I use it.

And now, to convince the rest of the scientific community that FORTRAN must go...

-Lars
Jul 06 2009
parent Moritz Warning <moritzwarning web.de> writes:
On Tue, 07 Jul 2009 08:53:49 +0200, Lars T. Kyllingstad wrote:

 Ary Borenszweig wrote:
 のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.

 But I am a bit disappointed with the CaseRangeStatement syntax. Why is
 it
    case 0: .. case 9:
 instead of
    case 0 .. 9:

 With the latter notation, ranges can be easily used together with
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:

 And CaseRangeStatement, being inconsistent with other syntaxes using
 the .. operator, i.e. slicing and ForeachRangeStatement, includes the
 endpoint.
 Shouldn't D make use of another operator to express ranges that
 include the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
When the discussion first came up in the NG, I was a bit sceptical about Andrei's suggestion for the case range statement as well. Now, I definitely think it's the best choice, and it's only because I realised it can be written like this:

 case 1:
 ..
 case 4:
     // do stuff
[snip]

I think it looks much better that way and users are more likely to be comfortable with the syntax. I hope it will be displayed that way in the examples. Still, the syntax overall looks a bit alien, because it's a syntax addition.
Jul 07 2009
prev sibling next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
Thanks everybody!
Jul 06 2009
prev sibling next sibling parent "Denis Koroskin" <2korden gmail.com> writes:
On Mon, 06 Jul 2009 09:05:10 +0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Someone's clock must have stopped at May 14 - that's what is being shown as the D 2.031 release date (although "Last update Sun Jul 5 01:18:49 2009"). You may want to check the D 1.046 release date, too.
Jul 06 2009
prev sibling next sibling parent =?ISO-8859-1?Q?Anders_F_Bj=F6rklund?= <afb algonet.se> writes:
Walter Bright wrote:
 Something for everyone here.
 
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
The dmd2 phobos seems to have a directory replaced with a file, for std.c.osx.socket?

std/socket.d(79): Error: module socket cannot read file 'std/c/osx/socket.d'

std/c/osx -> std/c/osx/socket.d

Ran into some Makefile issues as well.

--anders
Jul 06 2009
prev sibling next sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
I have just started trying the latest D1 compiler; if I try to compile my dlibs
DMD stops with this error:
Assertion failure: '0' on line 136 in file 'statement.c'
I'll try to locate the trouble better.

--------------------

I can see a very large amount of bug fixes.

Use of with symbols that shadow local symbols is no longer allowed<
std.conv: added Shin Fujishiro's code for printing and parsing enumerated
values.<
Good. Walter:
The deps thing comes from the LDC group. They've been relying on it as-is, so
they'd need to agree on any changes.<
Some of the ideas of Derek Parnell regarding 'deps' seem a significant improvement. (LDC developers are quite flexible, and they usually agree to improve things when they can.)
Bugzilla 2900: Array appending slowed drastically since integration of druntime<
How is the "solution" to this implemented?
Implicit integral conversions that could result in loss of significant bits are
no longer allowed.<
That doesn't seem to solve a bug-prone situation like:

import std.stdio: writeln;
void main() {
    int[] a = [1, 2];
    a ~= 3;
    writeln("a.length=", a.length);
    int n = 5;
    writeln("n=", n, " (a.length < n)=", a.length < n);
    n = -5;
    writeln("n=", n, " (a.length < n)=", a.length < n);
}

In the meantime, until a better solution is found, I suggest changing all "length"s to type ptrdiff_t, which is safer.

--------------------

Possible worsenings:

Walter:
The final switch deals with a problem where you add an enum member in one file
and then have to find and update every switch statement that uses that enum.
There's no straightforward way to find them to ensure the case gets added to
each switch. It's solving a similar problem that symbolic constants do. The
fall-through thing, though, is purely local and so much less of an
issue.<

Having two kinds of switch in a language is not a nice thing; C++ teaches us that duplication in a language is bad. And big and really well debugged programs in the past have failed because of the fall-through-related bug; some time ago I have shown a famous example, so there's evidence it's indeed an issue. You may take a look at the 50 thousand answers given by Google here:
http://www.google.com/search?q=switch+%22fall-through%22+bug
Also, I have already shown this in the past:
http://www.soft.com/AppNotes/attcrash.html
So:
- If you are going to accept the hassle of having two kinds of switch statements, then it may be better to make one of them not perform fall-through by default.
- To avoid switch-related bugs the safe one has to be the default. A possible solution is then to have a "switch" that acts safely and a "cswitch" that acts like C's, to keep porting C code possible.
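For reference, the final switch Walter is quoted describing (one of the two kinds being discussed) turns the unhandled-enum-member problem into a compile-time error; a minimal sketch:

```d
enum Color { red, green, blue }

string name(Color c)
{
    // final switch over an enum: the compiler rejects this statement
    // if any member of Color is left unhandled, so adding a member to
    // Color later flags every such switch. A default case is not allowed.
    final switch (c)
    {
        case Color.red:   return "red";
        case Color.green: return "green";
        case Color.blue:  return "blue";
    }
    assert(0); // unreachable
}

void main()
{
    assert(name(Color.green) == "green");
}
```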
D does introduce another operator, the  :..case operator <g>.<
Unfortunately people's brains will see just ".." as the operator there, and it has different semantics there; it's a special case. I am not going to like this.
std.string: deprecated std.string.find and std.string.find, replaced with
std.string.indexOf; deprecated std.string.rfind and std.string.irfind, replaced
with std.string.lastIndexOf; added flag CaseSensitive for indexOf and
lastIndexOf; removed startsWith and endsWith because std.algorithm defines
them; defined std.string.byDchar.<
Replacing the simple-to-read and easy-to-understand names "find" and "rfind" with "indexOf" and "lastIndexOf", which are longer and also have upper case letters in the middle, doesn't look like an improvement.

---------------------------

Derek Parnell:
 If this is ok can I submit a patch?
Please, do it :-)
And in that vein, a hash (eg CRC32, MD5, SHA256) of the file's used by DMD
would be nice to see in the 'deps' file. Would help build tools detect which
files have been modified.<
A 64 bit hash value may be enough. But before the hash a timestamp may also be useful: it can be a "fast path" to see if the module has changed, avoiding the computation of the 64 bit hash value (which requires some time if done on megabytes of source code). If the time is different then the module is considered different. If the timestamp is the same then the hash is computed.

Bye,
bearophile
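The scheme described above can be sketched like this (a sketch only: the names and cache layout are hypothetical, the modules are today's Phobos rather than the 2009 library, and crc32Of stands in for whatever hash a build tool would actually pick):

```d
import std.datetime : SysTime;
import std.digest.crc : crc32Of;
import std.file : read, timeLastModified;

struct CacheEntry
{
    SysTime time;   // timestamp recorded at the last build
    ubyte[4] hash;  // content hash recorded at the last build
}

bool isModified(string path, const CacheEntry cached)
{
    // Fast path: a different timestamp means the module is
    // considered different, with no hashing needed.
    if (timeLastModified(path) != cached.time)
        return true;
    // Same timestamp: compute the hash to confirm.
    return crc32Of(cast(const(ubyte)[]) read(path)) != cached.hash;
}
```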
Jul 06 2009
next sibling parent Lutger <lutger.blijdestijn gmail.com> writes:
bearophile wrote:

...
D does introduce another operator, the  :..case operator <g>.<
Unfortunately the brain of all people will see just ".." as the operator there, and it has a different semantics there, it's a special case. I am not going to like this.
I don't think that will happen. After all, many brains got used to [..) and [..], this is not that different. In the context of case statements, an inclusive range is more intuitive.
std.string: deprecated std.string.find and std.string.find, replaced with
std.string.indexOf; deprecated std.string.rfind and std.string.irfind,
replaced with std.string.lastIndexOf; added flag CaseSensitive for indexOf
and lastIndexOf; removed startsWith and endsWith because std.algorithm
defines them; defined std.string.byDchar.<
Replacing the simple-to-read and easy-to-understand names "find" and "rfind" with "indexOf" and "lastIndexOf", which are longer and also have upper case letters in the middle, doesn't look like an improvement.
indexOf and lastIndexOf are much more descriptive of what the function actually does, I think they are easier to understand.
Jul 06 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
Regarding switch(), could a "static switch" be useful?

To solve the semantic special case of ".." in switch cases, D2 may use a
compromise: keep the case-case syntax of Andrei, but use three dots:
case 0: ... case 10:

Bye,
bearophile
Jul 06 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
bearophile wrote:
 Regarding switch(), can be a "static switch" useful?
 
 To solve the semantic special case of ".." in switch cases D2 may use a
compromise, keep the case-case syntax of Andrei, but use three points:
 case 0: ... case 10:
Why? Andrei
Jul 06 2009
prev sibling next sibling parent Jason House <jason.james.house gmail.com> writes:
randomSample is in the changelog, but not documented.

http://www.digitalmars.com/d/2.0/phobos/std_random.html


Walter Bright Wrote:

 Something for everyone here.
 
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Jul 06 2009
prev sibling next sibling parent Aarti_pl <aarti_please_no_spam interia.pl> writes:
Ary Borenszweig Wrote:


のしいか (noshiika) wrote:
 Thank you for the great work, Walter and all the other contributors.
 
 But I am a bit disappointed with the CaseRangeStatement syntax.
 Why is it
    case 0: .. case 9:
 instead of
    case 0 .. 9:
 
 With the latter notation, ranges can be easily used together with 
 commas, for example:
    case 0, 2 .. 4, 6 .. 9:
 
 And CaseRangeStatement, being inconsistent with other syntaxes using the 
 .. operator, i.e. slicing and ForeachRangeStatement, includes the endpoint.
 Shouldn't D make use of another operator to express ranges that include 
 the endpoints as Ruby or Perl6 does?
I agree. I think this syntax is yet another one of those things people looking at D will say "ugly" and turn their heads away.
Yeah... It is ugly...

BR
Marcin Kuszczak
Aarti_pl
Jul 06 2009
prev sibling next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 I agree. Probably a good option would be to keep on requiring break, but
 also requiring the user to explicitly specify they want fallthrough in
 the rare case when they do want it. I'd love to use "continue" for that
 but it's already "occupied" by cases like while (...) switch (...).
 Requiring !break or ~break would work but is a bit too cute. Adding a
 new keyword or a whole new switch statement is too much aggravation. I
 guess we'll have to live with it...
In D there is also "break label". On the other hand, if a safer switch is introduced in D, most of the time you may want to use the C-style switch in code ported from C. As there are safeD modules, there can be C-like modules too, where switches work as in C. (This has the downside that locally the code doesn't give you much of a clue, and you have to go look at the top of the module to see if it's a C-like module or not.)

Bye,
bearophile
Jul 06 2009
prev sibling next sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright  
<newshound1 digitalmars.com> wrote:

 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release.

Also, I'm not sure if this is a bug or a feature with regard to the new integer rules:

byte x,y,z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte

which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.

BTW: The fact that in my original code base DMD gave me the line, inside the string, inside the mixin, inside the template, inside the mixin, inside the struct was just awesome.

P.S. There's a bunch of functions in phobos (like std.math.lrint) which return a long and should also have (at least) an integer version as well. (Maybe rndto!T() ?)
Jul 06 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 
 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release. Also, I'm not sure if this is a bug or a feature with regard to the new integer rules:

byte x,y,z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte

which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.
Walter has implemented an ingenious scheme for disallowing narrowing conversions while at the same time minimizing the number of casts required. He hasn't explained it, so I'll sketch an explanation here.

The basic approach is "value range propagation": each expression is associated with a minimum possible value and a maximum possible value. As complex expressions are assembled out of simpler expressions, the ranges are computed and propagated. For example, this code compiles:

int x = whatever();
bool y = x & 1;

The compiler figures that the range of x is int.min to int.max, the range of 1 is 1 to 1, and (here's the interesting part) the range of x & 1 is 0 to 1. So it lets the code go through. However, it won't allow this:

int x = whatever();
bool y = x & 2;

because x & 2 has range between 0 and 2, which won't fit in a bool.

The approach generalizes to arbitrarily complex expressions. Now here's the trick though: the value range propagation is local, i.e. all ranges are forgotten beyond one expression. So as soon as you move on to the next statement, the ranges have been forgotten. Why? Simply put, increased implementation difficulties and increased compiler memory footprint for diminishing returns. Both Walter and I noticed that expression-level value range propagation gets rid of all dangerous cases and the vast majority of required casts. Indeed, his test suite, Phobos, and my own codebase required surprisingly few changes with the new scheme. Moreover, we both discovered bugs due to the new feature, so we're happy with the status quo.

Now consider your code:

byte x,y,z;
z = x+y;

The first line initializes all values to zero. In an intra-procedural value range propagation, these zeros would be propagated to the next statement, which would range-check. However, in the current approach, the ranges of x, y, and z are forgotten at the first semicolon. Then, x+y has range byte.min+byte.min up to byte.max+byte.max as far as the type checker knows.
That would fit in a short (and by the way I just found a bug on that occasion) but not in a byte.
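The rules sketched above, as code (a minimal illustration of the described behavior, not the compiler's implementation; the rejected lines are left as comments since they do not compile):

```d
void main()
{
    int x = 123;

    bool a = x & 1;      // ok: x & 1 has range 0 .. 1, which fits in bool
    // bool b = x & 2;   // error: range 0 .. 2 does not fit in bool

    byte p, q;
    short s = p + q;     // ok: p + q has range -256 .. 254, fits in short
    // byte r = p + q;   // error: range -256 .. 254 does not fit in byte
    byte r = cast(byte)(p + q);  // an explicit cast is required

    assert(a && s == 0 && r == 0);
}
```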
 BTW: The fact that in my original code base DMD gave me the line, inside 
 the string, inside the mixin, inside the template, inside the mixin, 
 inside the struct was just awesome.
 
 P.S. There's a bunch of functions in phobos (like std.math.lrint) which 
 return a long and should also have (at least) an integer version as 
 well. (Maybe rndto!T() ?)
Sounds like Don's area. Don? Andrei
Jul 06 2009
next sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 01:48:41 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release. Also, I'm not sure if this is a bug or a feature with regard to the new integer rules:

byte x,y,z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte

which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.
Walter has implemented an ingenious scheme for disallowing narrowing conversions while at the same time minimizing the number of casts required. He hasn't explained it, so I'll sketch an explanation here.

The basic approach is "value range propagation": each expression is associated with a minimum possible value and a maximum possible value. As complex expressions are assembled out of simpler expressions, the ranges are computed and propagated. For example, this code compiles:

int x = whatever();
bool y = x & 1;

The compiler figures that the range of x is int.min to int.max, the range of 1 is 1 to 1, and (here's the interesting part) the range of x & 1 is 0 to 1. So it lets the code go through. However, it won't allow this:

int x = whatever();
bool y = x & 2;

because x & 2 has range between 0 and 2, which won't fit in a bool.

The approach generalizes to arbitrarily complex expressions. Now here's the trick though: the value range propagation is local, i.e. all ranges are forgotten beyond one expression. So as soon as you move on to the next statement, the ranges have been forgotten. Why? Simply put, increased implementation difficulties and increased compiler memory footprint for diminishing returns. Both Walter and I noticed that expression-level value range propagation gets rid of all dangerous cases and the vast majority of required casts. Indeed, his test suite, Phobos, and my own codebase required surprisingly few changes with the new scheme. Moreover, we both discovered bugs due to the new feature, so we're happy with the status quo.

Now consider your code:

byte x,y,z;
z = x+y;

The first line initializes all values to zero. In an intra-procedural value range propagation, these zeros would be propagated to the next statement, which would range-check. However, in the current approach, the ranges of x, y, and z are forgotten at the first semicolon. Then, x+y has range byte.min+byte.min up to byte.max+byte.max as far as the type checker knows.
That would fit in a short (and by the way I just found a bug on that occasion) but not in a byte.
That's really cool. But I don't think that's actually happening (or are these the bugs you're talking about?):

byte x,y;
short z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short
// Repeat for ubyte, bool, char, wchar and *, -, /

And by that logic shouldn't the following happen?

int x,y;
int z;
z = x+y; // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int

i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic, i.e. templated, code.)

BTW: this means byte and short are not closed under arithmetic operations, which drastically limits their usefulness.
Jul 06 2009
next sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 02:35:44 -0400, Robert Jacques <sandford jhu.edu>  
wrote:

 On Tue, 07 Jul 2009 01:48:41 -0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright  
 <newshound1 digitalmars.com> wrote:

 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release. Also, I'm not sure if this is a bug or a feature with regard to the new integer rules:

byte x,y,z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte

which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.
Walter has implemented an ingenious scheme for disallowing narrowing conversions while at the same time minimizing the number of casts required. He hasn't explained it, so I'll sketch an explanation here. The basic approach is "value range propagation": each expression is associated with a minimum possible value and a maximum possible value. As complex expressions are assembled out of simpler expressions, the ranges are computed and propagated. For example, this code compiles: int x = whatever(); bool y = x & 1; The compiler figures that the range of x is int.min to int.max, the range of 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to 1. So it lets the code go through. However, it won't allow this: int x = whatever(); bool y = x & 2; because x & 2 has range between 0 and 2, which won't fit in a bool. The approach generalizes to arbitrary complex expressions. Now here's the trick though: the value range propagation is local, i.e. all ranges are forgotten beyond one expression. So as soon as you move on to the next statement, the ranges have been forgotten. Why? Simply put, increased implementation difficulties and increased compiler memory footprint for diminishing returns. Both Walter and I noticed that expression-level value range propagation gets rid of all dangerous cases and the vast majority of required casts. Indeed, his test suite, Phobos, and my own codebase required surprisingly few changes with the new scheme. Moreover, we both discovered bugs due to the new feature, so we're happy with the status quo. Now consider your code: byte x,y,z; z = x+y; The first line initializes all values to zero. In an intra-procedural value range propagation, these zeros would be propagated to the next statement, which would range-check. However, in the current approach, the ranges of x, y, and z are forgotten at the first semicolon. Then, x+y has range -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows. 
That would fit in a short (and by the way I just found a bug with that occasion) but not in a byte.
That's really cool. But I don't think that's actually happening (Or are these the bugs you're talking about?): byte x,y; short z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short // Repeat for ubyte, bool, char, wchar and *, -, / And by that logic shouldn't the following happen? int x,y; int z; z = x+y; // Error: cannot implicitly convert expression (cast(long)x + cast(long)y) of type long to int i.e. why the massive inconsistency between byte/short and int/long? (This is particularly a pain for generic i.e. templated code) BTW: this means byte and short are not closed under arithmetic operations, which drastically limit their usefulness.
Another inconsistency:

byte[] x,y,z;
z[] = x[]*y[];  // Compiles
Jul 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 Another inconsistency:
 
     byte[] x,y,z;
     z[] = x[]*y[]; // Compiles
Bugzilla is its name. Andrei
Jul 07 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 01:48:41 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Robert Jacques wrote:
 On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:

 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release. Also, I'm not sure if this is a bug or a feature with regard to the new integer rules: byte x,y,z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.
Walter has implemented an ingenious scheme for disallowing narrowing conversions while at the same time minimizing the number of casts required. He hasn't explained it, so I'll sketch an explanation here. The basic approach is "value range propagation": each expression is associated with a minimum possible value and a maximum possible value. As complex expressions are assembled out of simpler expressions, the ranges are computed and propagated. For example, this code compiles: int x = whatever(); bool y = x & 1; The compiler figures that the range of x is int.min to int.max, the range of 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to 1. So it lets the code go through. However, it won't allow this: int x = whatever(); bool y = x & 2; because x & 2 has range between 0 and 2, which won't fit in a bool. The approach generalizes to arbitrary complex expressions. Now here's the trick though: the value range propagation is local, i.e. all ranges are forgotten beyond one expression. So as soon as you move on to the next statement, the ranges have been forgotten. Why? Simply put, increased implementation difficulties and increased compiler memory footprint for diminishing returns. Both Walter and I noticed that expression-level value range propagation gets rid of all dangerous cases and the vast majority of required casts. Indeed, his test suite, Phobos, and my own codebase required surprisingly few changes with the new scheme. Moreover, we both discovered bugs due to the new feature, so we're happy with the status quo. Now consider your code: byte x,y,z; z = x+y; The first line initializes all values to zero. In an intra-procedural value range propagation, these zeros would be propagated to the next statement, which would range-check. However, in the current approach, the ranges of x, y, and z are forgotten at the first semicolon. Then, x+y has range -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows. 
That would fit in a short (and by the way I just found a bug with that occasion) but not in a byte.
That's really cool. But I don't think that's actually happening (Or are these the bugs you're talking about?): byte x,y; short z; z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to short // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
 And by that logic shouldn't the following happen?
 
     int x,y;
     int z;
     z = x+y;  // Error: cannot implicitly convert expression 
 (cast(long)x + cast(long)y) of type long to int
No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long.
 i.e. why the massive inconsistency between byte/short and int/long? 
 (This is particularly a pain for generic i.e. templated code)
I don't find it a pain. It's a practical decision.
 BTW: this means byte and short are not closed under arithmetic 
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values. Andrei
Jul 07 2009
next sibling parent reply Brad Roberts <braddr puremagic.com> writes:
 That's really cool. But I don't think that's actually happening (Or
 are these the bugs you're talking about?):

     byte x,y;
     short z;
     z = x+y;  // Error: cannot implicitly convert expression
 (cast(int)x + cast(int)y) of type int to short

     // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
Before going too far, consider:

byte x, y, z;
short a;
a = x + y + z;

How far should the logic go?
Jul 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Brad Roberts wrote:
 That's really cool. But I don't think that's actually happening (Or
 are these the bugs you're talking about?):

     byte x,y;
     short z;
     z = x+y;  // Error: cannot implicitly convert expression
 (cast(int)x + cast(int)y) of type int to short

     // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
Before going too far, consider: byte x, y, z; short a; a = x + y + z; How far should the logic go?
Arbitrarily far for any given expression, which is the beauty of it all. In the case above, the expression is evaluated as (x + y) + z, yielding a range of byte.min+byte.min to byte.max+byte.max for the parenthesized part. Then that range is propagated to the second addition, yielding a final range of byte.min+byte.min+byte.min to byte.max+byte.max+byte.max for the entire expression. That still fits in a short, so the expression is valid. Now, if you add more than 256 bytes, things won't compile anymore ;o).

Andrei
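To put a number on that cut-off (a rough Python sketch, not compiler code; the helper names are mine): the sum of n bytes has range [-128·n, 127·n], and it is the negative bound that outgrows a short first.

```python
BYTE = (-128, 127)
SHORT = (-32768, 32767)

def add_ranges(a, b):
    # Interval addition: minimum adds to minimum, maximum to maximum.
    return (a[0] + b[0], a[1] + b[1])

def fits(r, t):
    return t[0] <= r[0] and r[1] <= t[1]

r, n = BYTE, 1
while fits(add_ranges(r, BYTE), SHORT):
    r = add_ranges(r, BYTE)
    n += 1
print(n)  # largest number of byte terms whose range still fits a short: 256
```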
Jul 07 2009
prev sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  That's really cool. But I don't think that's actually happening (Or  
 are these the bugs you're talking about?):
      byte x,y;
     short z;
     z = x+y;  // Error: cannot implicitly convert expression  
 (cast(int)x + cast(int)y) of type int to short
      // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
Added. In summary, + * - / % >> >>> don't work for types 8 bits and under. << is inconsistent (x<<1 errors, but x<<y compiles). All the op-assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile, which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile.
 And by that logic shouldn't the following happen?
      int x,y;
     int z;
     z = x+y;  // Error: cannot implicitly convert expression  
 (cast(long)x + cast(long)y) of type long to int
No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long.
 i.e. why the massive inconsistency between byte/short and int/long?  
 (This is particularly a pain for generic i.e. templated code)
I don't find it a pain. It's a practical decision.
Andrei, I have a short vector template (think vec!(byte,3), etc.) where I've had to wrap the majority of lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain.
 BTW: this means byte and short are not closed under arithmetic  
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values.
Andrei, consider anyone who wants to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically out of luck.
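For what it's worth, the usual image-processing pattern (sketched here in Python; `clamp_u8` is just an illustrative helper, not anything from Phobos) is to widen the intermediate and then saturate back into the pixel type, rather than let the 8-bit result wrap:

```python
def clamp_u8(v):
    """Saturate an intermediate result into the 0..255 ubyte pixel range."""
    return 0 if v < 0 else 255 if v > 255 else v

a, b = 200, 100           # two ubyte pixel values
print(clamp_u8(a + b))    # widen, add, then clamp: 255
print((a + b) & 0xFF)     # raw 8-bit wraparound:    44
```

Either way the intermediate arithmetic has to happen at a wider type, which is exactly where the required casts pile up.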
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  That's really cool. But I don't think that's actually happening (Or 
 are these the bugs you're talking about?):
      byte x,y;
     short z;
     z = x+y;  // Error: cannot implicitly convert expression 
 (cast(int)x + cast(int)y) of type int to short
      // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
Added. In summary, + * - / % >> >>> don't work for types 8-bits and under. << is inconsistent (x<<1 errors, but x<<y compiles). All the op assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile.
 And by that logic shouldn't the following happen?
      int x,y;
     int z;
     z = x+y;  // Error: cannot implicitly convert expression 
 (cast(long)x + cast(long)y) of type long to int
No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long.
 i.e. why the massive inconsistency between byte/short and int/long? 
 (This is particularly a pain for generic i.e. templated code)
I don't find it a pain. It's a practical decision.
Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain.
Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
 BTW: this means byte and short are not closed under arithmetic 
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values.
Andrei, consider anyone who want to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck.
I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei
Jul 07 2009
next sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  Andrei, I have a short vector template (think vec!(byte,3), etc) where  
 I've had to wrap the majority lines of code in cast(T)( ... ), because  
 I support bytes and shorts. I find that both a kludge and a pain.
Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
Suggestion 1: Loft the right-hand side of the expression (when lofting is valid) to the size of the left-hand side. i.e.

byte a,b,c;
c = a + b;  => c = a + b;

short d;
d = a + b;  => d = cast(short) a + cast(short) b;

int e, f;
e = a + b;  => e = cast(short) a + cast(short) b;
e = a + b + d;  => e = cast(int)(cast(short) a + cast(short) b) + cast(int) d;
  Or e = cast(int) a + (cast(int) b + cast(int) d);

long g;
g = e + f;  => g = cast(long) e + cast(long) f;

When choosing operator overloads or auto, prefer the ideal lofted interpretation (as per the new rules, but without the exception for int/long) over truncated variants. i.e.

auto h = a + b;  => short h = cast(short) a + cast(short) b;

This would also properly handle some of the corner/inconsistent cases with the current rules:

ubyte i;
ushort j;
j = -i;  => j = -cast(short)i;  (This currently evaluates to j = cast(short)(-i);)

And

a += a;

is equivalent to

a = a + a;

and is logically consistent with

byte[] k,l,m;
m[] = k[] + l[];

Essentially, instead of trying to prevent overflows, except for those from int and long, this scheme attempts to minimize the risk of overflows, including those from int (and long, once cent exists. Maybe long+long => bigInt?)

Suggestion 2: Enable the full rules as part of SafeD and allow non-promotion in un-safe D. Note this could be synergistically combined with Suggestion 1.
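One way to see what Suggestion 1 trades away (illustrative Python only; `wrap` is my own helper simulating two's-complement truncation at a given width): doing the arithmetic at the destination's width instead of always at int width changes where the overflow lands.

```python
def wrap(v, bits):
    """Two's-complement wrap of v to a signed integer of the given width."""
    m = 1 << bits
    v %= m
    return v - m if v >= 1 << (bits - 1) else v

a, b = 100, 100            # two byte values
print(wrap(a + b, 8))      # byte  c = a + b;  arithmetic at 8 bits wraps: -56
print(wrap(a + b, 16))     # short d = a + b;  arithmetic at 16 bits is exact: 200
```

Under the lofting scheme, the byte destination silently accepts the wrapped result, which is the "unsafe" part Andrei objects to below.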
 BTW: this means byte and short are not closed under arithmetic  
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values.
Andrei, consider anyone who want to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck.
I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei
Well, how often does everyone else use bytes?
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  Andrei, I have a short vector template (think vec!(byte,3), etc) 
 where I've had to wrap the majority lines of code in cast(T)( ... ), 
 because I support bytes and shorts. I find that both a kludge and a 
 pain.
Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
Suggestion 1: Loft the right hand of the expression (when lofting is valid) to the size of the left hand. i.e.
What does loft mean in this context?
 byte a,b,c;
 c = a + b;  => c = a + b;
Unsafe.
 short d;
 d = a + b;  => d = cast(short) a + cast(short) b;
Should work today modulo bugs.
 int e, f;
 e = a + b;  => e = cast(short) a + cast(short) b;
Why cast to short? e has type int.
 e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) + 
 cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int)d);
I don't understand this.
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
 When choosing operator overloads or auto, prefer the ideal lofted 
 interpretation (as per the new rules, but without the exception for 
 int/long), over truncated variants. i.e.
 auto h = a + b; => short h = cast(short) a + cast(short) b;
This would yield semantics incompatible with C expressions.
 This would also properly handled some of the corner/inconsistent cases 
 with the current rules:
 ubyte  i;
 ushort j;
 j = -i;    => j = -cast(short)i; (This currently evaluates to j = 
 cast(short)(-i);
That should not compile, sigh. Walter wouldn't listen...
 And
 a += a;
 is equivalent to
 a = a + a;
Well not quite equivalent. In D2 they aren't. The former clarifies that you want to reassign the expression to a, and no cast is necessary. The latter would not compile if a is shorter than int.
 and is logically consistent with
 byte[] k,l,m;
 m[] = k[] + l[];
 
 Essentially, instead of trying to prevent overflows, except for those 
 from int and long, this scheme attempts to minimize the risk of 
 overflows, including those from int (and long, once cent exists. Maybe 
 long+long=>bigInt?)
But if you close operations for types smaller than int, you end up with a scheme even more error-prone than C!
 Suggestion 2:
 Enable the full rules as part of SafeD and allow non-promotion in 
 un-safe D. Note this could be synergistically combined with Suggestion 1.
Safe D is concerned with memory safety only. Andrei
Jul 07 2009
next sibling parent bearophile <bearophileHUGS lycos.com> writes:
Andrei Alexandrescu:
 Safe D is concerned with memory safety only.
And hopefully you will understand that is wrong :-) Bye, bearophile
Jul 07 2009
prev sibling next sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 14:16:14 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 On Tue, 07 Jul 2009 11:36:26 -0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  Andrei, I have a short vector template (think vec!(byte,3), etc)  
 where I've had to wrap the majority lines of code in cast(T)( ... ),  
 because I support bytes and shorts. I find that both a kludge and a  
 pain.
Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
Suggestion 1: Loft the right hand of the expression (when lofting is valid) to the size of the left hand. i.e.
What does loft mean in this context?
Sorry. loft <=> up-casting. i.e. byte => short => int => long => cent? => bigInt?
 byte a,b,c;
 c = a + b;  => c = a + b;
Unsafe.
So is int + int or long + long. Or float + float, for that matter. My point is that if a programmer is assigning a value to a byte (or short or int or long), then they are willing to accept the associated over/under flow errors of that type.
 short d;
 d = a + b;  => d = cast(short) a + cast(short) b;
Should work today modulo bugs.
 int e, f;
 e = a + b;  => e = cast(short) a + cast(short) b;
Why cast to short? e has type int.
Oops. You're right. (I was thinking of the new rules, not my suggestion.) Should be:

e = a + b;  => e = cast(int) a + cast(int) b;
 e = a + b + d; => e = cast(int)(cast(short) a + cast(short) b) +  
 cast(int) d; Or e = cast(int) a + (cast(int) b + cast(int)d);
I don't understand this.
Same "Oops. You're right." as above.

e = a + b + d;  => e = cast(int) a + cast(int) b + cast(int) d;
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is:

g = cast(long)(e+f);

And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special; in this suggestion, it's not.
 When choosing operator overloads or auto, prefer the ideal lofted  
 interpretation (as per the new rules, but without the exception for  
 int/long), over truncated variants. i.e.
 auto h = a + b; => short h = cast(short) a + cast(short) b;
This would yield semantics incompatible with C expressions.
How so? The auto rule is identical to the "new rules". The overload rule is identical to the "new rules", except when no match can be found, in which case it tries to "relax" the expression to a smaller number of bits.
 This would also properly handled some of the corner/inconsistent cases  
 with the current rules:
 ubyte  i;
 ushort j;
 j = -i;    => j = -cast(short)i; (This currently evaluates to j =  
 cast(short)(-i);
That should not compile, sigh. Walter wouldn't listen...
 And
 a += a;
 is equivalent to
 a = a + a;
Well not quite equivalent. In D2 they aren't. The former clarifies that you want to reassign the expression to a, and no cast is necessary. The latter would not compile if a is shorter than int.
I understand, but that dichotomy increases the cognitive load on the programmer. Also, there's the issue of:

byte x;
++x;

which is defined in the spec as being equivalent to x = x + 1;
 and is logically consistent with
 byte[] k,l,m;
 m[] = k[] + l[];
  Essentially, instead of trying to prevent overflows, except for those  
 from int and long, this scheme attempts to minimize the risk of  
 overflows, including those from int (and long, once cent exists. Maybe  
 long+long=>bigInt?)
But if you close operations for types smaller than int, you end up with a scheme even more error-prone that C!
Since C (IIRC) always evaluates "x+x" in the manner most prone to causing overflows, no matter the type, a scheme can't be more error-prone than C (at the instruction level). However, it can be less consistent, which I grant can lead to higher-level logic errors. (BTW, operations for types smaller than int are closed (by my non-mathy definition) in C.)

The new rules are definitely an improvement over C, but they make byte/ubyte/short/ushort second-class citizens, because practically every assignment requires a cast:

byte a,b,c;
c = cast(byte)(a + b);

And if it weren't for compatibility issues, it would almost be worth it to remove them completely.
Jul 07 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Robert Jacques wrote:
 The new rules are definitely an improvement over C, but they make 
 byte/ubyte/short/ushort second class citizens, because practically every 
 assignment requires a cast:
 byte a,b,c;
 c = cast(byte) a + b;
They've always been second class citizens, as their types keep getting promoted to int. They've been second class on the x86 CPUs, too, as short operations tend to be markedly slower than the corresponding int operations.
 And if it weren't for compatibility issues, it would almost be worth it 
 to remove them completely.
Shorts and bytes are very useful in arrays and data structures, but aren't worth much as local variables. If I see a

short s;

as a local, it always raises an eyebrow with me that there's a lurking bug.
Jul 07 2009
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Jul 07 2009
next sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=".
It's also troublesome because it would silently produce different answers than C would.
Jul 07 2009
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 21:05:45 -0400, Walter Bright <newshound1 digitalmars.com> wrote:

 Andrei Alexandrescu wrote:
 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=".
It's also troublesome because it would silently produce different answers than C would.
Please, correct me if I'm wrong, but it seems C works by promoting byte/short/etc to int and then casting back down if need be. (Something tells me this wasn't always true.) So (I think) the differences would be limited to integer expressions assigned to longs. Also, doing this 'right' might be important to 64-bit platforms. Actually, after finding and skimming the C spec (from http://frama-c.cea.fr/download/acsl_1.4.pdf via wikipedia): "2.2.3 Typing The language of logic expressions is typed (as in multi-sorted first-order logic). Types are either C types or logic types defined as follows: - 'mathematical' types: integer for unbounded, mathematical integers, real for real numbers, boolean for booleans (with values written \true and \false); - logic types introduced by the specification writer (see Section 2.6). There are implicit coercions for numeric types: - C integral types char, short, int and long, signed or unsigned, are all subtypes of type integer; - integer is itself a subtype of type real; - C types float and double are subtypes of type real. ... 2.2.4 Integer arithmetic and machine integers The following integer arithmetic operations apply to mathematical integers: addition, subtraction, multiplication, unary minus. The value of a C variable of an integral type is promoted to a mathematical integer. As a consequence, there is no such thing as "arithmetic overflow" in logic expressions. Division and modulo are also mathematical operations, which coincide with the corresponding C operations on C machine integers, thus following the ANSI C99 conventions. In particular, these are not the usual mathematical Euclidean division and remainder. Generally speaking, division rounds the result towards zero. The results are not specified if divisor is zero; otherwise if q and r are the quotient and the remainder of n divided by d then:"

So by the spec (and please correct me if I'm reading this wrong):

g = e + f;  =>  g = cast(long)( cast(integer)e + cast(integer)f );

where integer is unbounded in bits (and therefore has no overflow). Therefore

g = e + f;  =>  g = cast(long) e + cast(long) f;

is more in keeping with the spec than

g = cast(long)(e+f);

in terms of a practical implementation, since there's less possibility for overflow error.

(Caveat: most 32-bit compilers probably defaulted integer to int, though 64-bit compilers are probably defaulting integer to long.)
Jul 07 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Jacques wrote:
 So by the spec   (and please correct me if I'm reading this wrong)
 g = e + f => g = cast(long)(  cast(integer)e + cast(integer)f  );
 where integer is unbounded in bits (and therefore has no overflow)
 therefore
 g = e + f;  => d = cast(long) e + cast(long) f;
 is more in keeping with the spec than
 g = cast(long)(e+f);
 in terms of a practical implementation, since there's less possibility 
 for overflow error.
The spec leaves a lot of room for implementation-defined behavior. But still, there are common definitions for those implementation-defined behaviors, and C programs routinely rely on them. Just as the C standard supports 32-bit "bytes", yet essentially zero C programs will port to such a platform without major rewrites.

Silently changing the expected results is a significant problem. The guy who does the translation is hardly likely to be the guy who wrote the program. When he notices the program failing, I guarantee he'll write it off as "D sux". He doesn't have the time to debug what looks like a fault in D, and frankly I would agree with him.

I have a lot of experience with people porting C/C++ programs to Digital Mars compilers. They run into some implementation-defined issue, or rely on some bug in B/M/X compilers, and yet it's always DM's problem, not B/M/X or the code. There's no point in fighting that; it's just the way it is, and dealing with reality means that DM must follow the same implementation-defined behavior and bugs as the B/M/X compilers do. For a C integer expression, D must either refuse to compile it or produce the same results.
 (Caveat: most 32-bit compilers probably defaulted integer to int, though 
 64-bit compilers are probably defaulting integer to long.)
All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
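The concrete difference being debated upthread (widening after the int add, as C does, versus widening the operands first) can be simulated in Python. `wrap32` is my own stand-in for 32-bit two's-complement arithmetic; strictly, signed overflow is undefined in C, but the wrapped result is the de facto behavior on the compilers Walter is talking about.

```python
def wrap32(v):
    """Wrap v to a signed 32-bit value, mimicking C int arithmetic."""
    v &= 0xFFFFFFFF
    return v - (1 << 32) if v >= (1 << 31) else v

e = f = 2_000_000_000
print(wrap32(e + f))   # long g = e + f;  int add first, then widen: -294967296
print(e + f)           # cast(long)e + cast(long)f, widen first:     4000000000
```

Silently switching from the first answer to the second is exactly the kind of porting surprise the post above warns about.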
Jul 07 2009
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright  
<newshound1 digitalmars.com> wrote:
 Robert Jacques wrote:
 (Caveat: most 32-bit compilers probably defaulted integer to int,  
 though 64-bit compilers are probably defaulting integer to long.)
All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
Jul 07 2009
parent reply Walter Bright <newshound1 digitalmars.com> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright 
 <newshound1 digitalmars.com> wrote:
 Robert Jacques wrote:
 (Caveat: most 32-bit compilers probably defaulted integer to int, 
 though 64-bit compilers are probably defaulting integer to long.)
All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
Not that I've seen. I'd be very surprised if any did.
Jul 07 2009
parent reply Brad Roberts <braddr puremagic.com> writes:
Walter Bright wrote:
 Robert Jacques wrote:
 On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Robert Jacques wrote:
 (Caveat: most 32-bit compilers probably defaulted integer to int,
 though 64-bit compilers are probably defaulting integer to long.)
All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
Not that I've seen. I'd be very surprised if any did.
From wikipedia: http://en.wikipedia.org/wiki/64-bit
model   short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32    64    64   Microsoft Win64 (X64/IA64)
LP64      16    32    64    64    64   Most UNIX and UNIX-like systems (Solaris, Linux, etc)
ILP64     16    64    64    64    64   HAL
SILP64    64    64    64    64    64   ?
Jul 07 2009
parent "Robert Jacques" <sandford jhu.edu> writes:
On Wed, 08 Jul 2009 00:08:13 -0400, Brad Roberts <braddr puremagic.com>  
wrote:

 Walter Bright wrote:
 Robert Jacques wrote:
 On Tue, 07 Jul 2009 23:01:58 -0400, Walter Bright
 <newshound1 digitalmars.com> wrote:
 Robert Jacques wrote:
 (Caveat: most 32-bit compilers probably defaulted integer to int,
 though 64-bit compilers are probably defaulting integer to long.)
All 32 bit C compilers defaulted int to 32 bits. 64 bit C compilers are setting int at 32 bits for sensible compatibility reasons.
But are the 64-bit compilers setting the internal "integer" type to 32 or 64 bits? (I'm not running any 64-bit OSes at the moment to test this)
Not that I've seen. I'd be very surprised if any did.
 From wikipedia: http://en.wikipedia.org/wiki/64-bit
model   short  int  long  llong  ptrs  Sample operating systems
LLP64     16    32    32    64    64   Microsoft Win64 (X64/IA64)
LP64      16    32    64    64    64   Most UNIX and UNIX-like systems (Solaris, Linux, etc)
ILP64     16    64    64    64    64   HAL
SILP64    64    64    64    64    64   ?
Thanks, but what we're looking for is what format the data is in while in a register. For example, in 32-bit C, bytes/shorts are computed as ints and truncated back down. I've found some references to 64-bit native integers in the CLI spec, but nothing definitive. The question boils down to whether b == 0 or not:

int a = 2147483647;
long b = a+a+2; // or long long depending on platform
Jul 07 2009
prev sibling parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is:

g = cast(long)(e+f);

And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special; in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions.
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions.
Anything can be done... in infinite time with infinite resources. :o) Andrei
Jul 07 2009
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions.
Anything can be done... in infinite time with infinite resources. :o) Andrei
:) Well, weren't polysemous expressions already in the pipeline somewhere?
Jul 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 21:21:47 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 
 Robert Jacques wrote:
 On Tue, 07 Jul 2009 20:48:50 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 Robert Jacques wrote:
 long g;
 g = e + f;  => d = cast(long) e + cast(long) f;
Works today.
Wrong. I just tested this and what happens today is: g = cast(long)(e+f); And this is (I think) correct behavior according to the new rules and not a bug. In the new rules int is special, in this suggestion, it's not.
I think this is a good idea that would improve things. I think, however, it would be troublesome to implement because expressions are typed bottom-up. The need here is to "teleport" type information from the assignment node to the addition node, which is downwards. And I'm not sure how this would generalize to other operators beyond "=". Andrei
Hmm... why can't multiple expressions be built simultaneously and then the best chosen once the assignment/function call/etc is reached? This would also have the benefit of paving the way for polysemous values & expressions.
Anything can be done... in infinite time with infinite resources. :o) Andrei
:) Well, weren't polysemous expressions already in the pipeline somewhere?
I'm afraid they didn't get wings. We have incidentally found different ways to address the issues they were supposed to address. Andrei
Jul 07 2009
prev sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 13:16:14 -0500, Andrei Alexandrescu wrote:


 Safe D is concerned with memory safety only.
That's a pity. Maybe it should be renamed to Partially-Safe D, Safe-ish D, Memory-Safe D, or ... well, you get the point. It could be misleading for the great unwashed.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 07 2009
prev sibling parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:


 Well, how often does everyone else use bytes?
Cryptography, in my case.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 07 2009
parent reply "Robert Jacques" <sandford jhu.edu> writes:
On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell <derek psych.ward> wrote:

 On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:


 Well, how often does everyone else use bytes?
Cryptography, in my case.
Cool. If you don't mind, what's your take on the new rules? (As different use cases and points of view are very valuable.)
Jul 07 2009
parent reply Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 18:10:24 -0400, Robert Jacques wrote:

 On Tue, 07 Jul 2009 18:05:26 -0400, Derek Parnell <derek psych.ward> wrote:
 
 On Tue, 07 Jul 2009 14:05:33 -0400, Robert Jacques wrote:


 Well, how often does everyone else use bytes?
Cryptography, in my case.
Cool. If you don't mind, what's your take on the new rules? (As different use cases and points of view are very valuable.)
By new rules you mean the ones implemented in D 2.031? I'm not sure yet. I need to use them more in practice to see how they sort themselves out. It seems that what they are trying to do is predict runtime behaviour at compile time and take the appropriate (as defined by Walter) steps to avoid runtime errors.

Anyhow, and be warned that I'm just thinking out loud here, we could have a scheme where the coder explicitly tells the compiler that, in certain specific sections of code, the coder would like to have runtime checking of overflow situations added by the compiler. Something like ...

  byte a,b,c;
  try {
      a = b + c;
  } catch (OverflowException e) {
      ...
  }

and in this situation the compiler would not give a message, because I've instructed the compiler to generate runtime checking.

The problem we would now have though is balancing the issuing of messages with ease of coding. It seems that the most common kind of assignment is one where the LHS type is the same as the RHS type(s), so we don't want to make that any harder to code. But clearly, this is also the most common source of potential overflows.

Ok, let's assume that we don't want the D compiler to be our nanny; that we are adults and understand stuff. This now leads me to think that unless the coder says differently, the compiler should be silent about potential overflows. The "try .. catch" example above is verbose, however it does scream "run-time checking" to me, so it is probably worth the effort.

The only remaining issue for me is how to catch the accidental overflows that I, as a responsible coder, knowingly wish to avoid in special cases. Here is where I propose having a signal to the compiler about which specific variables I'm worried about; if I code an assignment to one of these that can potentially overflow, then the compiler must issue a message.

NOTA BENE: For the purposes of these examples, I use the word "guard" as the signal for the compiler to guard against overflows.
I don't care so much about which specific signalling method could be adopted. This is still conceptual stuff, okay?

  guard byte a; // I want this byte guarded.
  byte b,c;     // I don't care about these bytes.

  a = 3 + 29;            // No message 'cos 32 fits into a byte.
  a = b + c;             // Message 'cos it could overflow.
  a = cast(byte)(b + c); // No message 'cos cast overrides messages.
  a++;                   // Message - overflow is possible.
  a += 1;                // Message - overflow is possible.
  a = a + 1;             // Message - overflow is possible.
  a = cast(byte)a + 1;   // No message 'cos cast overrides messages.

And for a really smart compiler ...

  a = 0;
  a++; // No message as it can determine that the run time value
       // at this point in time is okay.

  for (a = 'a'; a <= 'z'; a++) // Still no message.

Additionally, I'm pretty certain that I think ...

  auto x = y + z;

should ensure that 'x' is a type that will always be able to hold any value from (y.min + z.min) to (y.max + z.max) inclusive.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Derek Parnell wrote:
 Here is where I propose having a signal to the compiler about which
 specific variables I'm worried about, and if I code an assignment to one of
 these that can potentially overflow, then the compiler must issue a
 message. 
You can implement that as a library. In fact I wanted to do it for Phobos for a long time. I've discussed it in this group too (to an unusual consensus), but I forgot the thread's title and stupid Thunderbird "download 500 headers at a time forever even long after I have changed that idiotic default option" won't let me find it. Andrei
Jul 07 2009
next sibling parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 20:13:40 -0500, Andrei Alexandrescu wrote:

 Derek Parnell wrote:
 Here is where I propose having a signal to the compiler about which
 specific variables I'm worried about, and if I code an assignment to one of
 these that can potentially overflow, then the compiler must issue a
 message. 
You can implement that as a library. In fact I wanted to do it for Phobos for a long time.
What does "implement that as a library" actually mean? Does it mean that a Phobos module could be written that defines a struct template (presumably) that holds the data and implements opAssign, etc., to issue a message if required? I assume it could do some limited compile-time value tests so it doesn't always have to issue a message.

-- 
Derek Parnell
Melbourne, Australia
skype: derek.j.parnell
Jul 07 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 You can implement that as a library. In fact I wanted to do it for 
 Phobos for a long time. I've discussed it in this group too (to an 
 unusual consensus), but I forgot the thread's title and stupid 
 Thunderbird "download 500 headers at a time forever even long after have 
 changed that idiotic default option" won't let me find it.
All the messages from the dawn of time are online and available at http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable from the search box in the upper left.
Jul 07 2009
parent Derek Parnell <derek psych.ward> writes:
On Tue, 07 Jul 2009 18:26:36 -0700, Walter Bright wrote:


 All the messages from the dawn of time are online and available at 
 http://www.digitalmars.com/d/archives/digitalmars/D/ and are searchable 
 from the search box in the upper left.
Okaaayy ... I see that this (checking for integer overflow) has been an issue since at least 2003. http://www.digitalmars.com/d/archives/19850.html At this rate, D v2 will be released some time after C++0X :-) -- Derek Parnell Melbourne, Australia skype: derek.j.parnell
Jul 07 2009
prev sibling parent Charles Hixson <charleshixsn earthlink.net> writes:
Andrei Alexandrescu wrote:
 Robert Jacques wrote:
 On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
  That's really cool. But I don't think that's actually happening (Or 
 are these the bugs you're talking about?):
      byte x,y;
     short z;
     z = x+y;  // Error: cannot implicitly convert expression 
 (cast(int)x + cast(int)y) of type int to short
      // Repeat for ubyte, bool, char, wchar and *, -, /
http://d.puremagic.com/issues/show_bug.cgi?id=3147 You may want to add to it.
Added. In summary, + * - / % >> >>> don't work for types 8-bits and under. << is inconsistent (x<<1 errors, but x<<y compiles). All the op assigns (+= *= -= /= %= >>= <<= >>>=) and pre/post increments (++ --) compile which is maddeningly inconsistent, particularly when the spec defines ++x as sugar for x = x + 1, which doesn't compile.
 And by that logic shouldn't the following happen?
      int x,y;
     int z;
     z = x+y;  // Error: cannot implicitly convert expression 
 (cast(long)x + cast(long)y) of type long to int
No. Int remains "special", i.e. arithmetic operations on it don't automatically grow to become long.
 i.e. why the massive inconsistency between byte/short and int/long? 
 (This is particularly a pain for generic i.e. templated code)
I don't find it a pain. It's a practical decision.
Andrei, I have a short vector template (think vec!(byte,3), etc) where I've had to wrap the majority lines of code in cast(T)( ... ), because I support bytes and shorts. I find that both a kludge and a pain.
Well suggestions for improving things are welcome. But I don't think it will fly to make int+int yield a long.
 BTW: this means byte and short are not closed under arithmetic 
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values.
Andrei, consider anyone who want to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck.
I understand, but also keep in mind that making small integers closed is the less safe option. So we'd be hurting everyone for the sake of the image manipulation folks. Andrei
You could add modular arithmetic types. They are frequently useful... though I admit that, of the common 2^n bases, bytes are the most useful; I've often needed base three or others. (Probably not worth the effort, but modular arithmetic on 2^n for n from 1 to, say, 64 would be reasonably easy.)
Jul 08 2009
prev sibling parent "Jérôme M. Berger" <jeberger free.fr> writes:
Robert Jacques wrote:
 On Tue, 07 Jul 2009 03:33:24 -0400, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:
 Robert Jacques wrote:
 BTW: this means byte and short are not closed under arithmetic 
 operations, which drastically limit their usefulness.
I think they shouldn't be closed because they overflow for relatively small values.
Andrei, consider anyone who want to do image manipulation (or computer vision, video, etc). Since images are one of the few areas that use bytes extensively, and have to map back into themselves, they are basically sorry out of luck.
Wrong example: in most cases, when doing image manipulations, you don't want the overflow to wrap but instead to be clipped. Having the compiler notify you when there is a risk of an overflow and requiring you to be explicit in how you want it to be handled is actually a good thing IMO.

Jerome
-- 
mailto:jeberger free.fr
http://jeberger.free.fr
Jabber: jeberger jabber.fr
Jul 07 2009
prev sibling next sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Jul 7, 2009 at 11:15 AM, Jarrett
Billingsley<jarrett.billingsley gmail.com> wrote:
 On Tue, Jul 7, 2009 at 1:48 AM, Andrei
 Alexandrescu<SeeWebsiteForEmail erdani.org> wrote:
 Walter has implemented an ingenious scheme for disallowing narrowing
 conversions while at the same time minimizing the number of casts required.
 He hasn't explained it, so I'll sketch an explanation here.

 The basic approach is "value range propagation": each expression is
 associated with a minimum possible value and a maximum possible value. As
 complex expressions are assembled out of simpler expressions, the ranges are
 computed and propagated.

 For example, this code compiles:

 int x = whatever();
 bool y = x & 1;

 The compiler figures that the range of x is int.min to int.max, the range of
 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to
 1. So it lets the code go through. However, it won't allow this:

 int x = whatever();
 bool y = x & 2;

 because x & 2 has range between 0 and 2, which won't fit in a bool.
Very cool. :)
 The approach generalizes to arbitrary complex expressions. Now here's the
 trick though: the value range propagation is local, i.e. all ranges are
 forgotten beyond one expression. So as soon as you move on to the next
 statement, the ranges have been forgotten.

 Why? Simply put, increased implementation difficulties and increased
 compiler memory footprint for diminishing returns. Both Walter and I noticed
 that expression-level value range propagation gets rid of all dangerous
 cases and the vast majority of required casts. Indeed, his test suite,
 Phobos, and my own codebase required surprisingly few changes with the new
 scheme. Moreover, we both discovered bugs due to the new feature, so we're
 happy with the status quo.
Sounds fairly reasonable.
 Now consider your code:

 byte x,y,z;
 z = x+y;

 The first line initializes all values to zero. In an intra-procedural value
 range propagation, these zeros would be propagated to the next statement,
 which would range-check. However, in the current approach, the ranges of x,
 y, and z are forgotten at the first semicolon. Then, x+y has range
 -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows.
 That would fit in a short (and by the way I just found a bug with that
 occasion) but not in a byte.
The only thing is: why doesn't _this_ fail, then?

int x, y, z;
z = x + y;

I'm sure it's out of convenience, but what about in ten, fifteen years
when 32-bit architectures are a historical relic and there's still
this hole in the type system?

The same argument applies for the implicit conversions between int and
uint. If you're going to do that, why not have implicit conversions
between long and ulong on 64-bit platforms?

I think I've confused the mailing list's threading algorithm.
Jul 07 2009
prev sibling next sibling parent reply Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Jul 7, 2009 at 1:48 AM, Andrei
Alexandrescu<SeeWebsiteForEmail erdani.org> wrote:
 Walter has implemented an ingenious scheme for disallowing narrowing
 conversions while at the same time minimizing the number of casts required.
 He hasn't explained it, so I'll sketch an explanation here.

 The basic approach is "value range propagation": each expression is
 associated with a minimum possible value and a maximum possible value. As
 complex expressions are assembled out of simpler expressions, the ranges are
 computed and propagated.

 For example, this code compiles:

 int x = whatever();
 bool y = x & 1;

 The compiler figures that the range of x is int.min to int.max, the range of
 1 is 1 to 1, and (here's the interesting part), the range of x & 1 is 0 to
 1. So it lets the code go through. However, it won't allow this:

 int x = whatever();
 bool y = x & 2;

 because x & 2 has range between 0 and 2, which won't fit in a bool.
Very cool. :)
 The approach generalizes to arbitrary complex expressions. Now here's the
 trick though: the value range propagation is local, i.e. all ranges are
 forgotten beyond one expression. So as soon as you move on to the next
 statement, the ranges have been forgotten.

 Why? Simply put, increased implementation difficulties and increased
 compiler memory footprint for diminishing returns. Both Walter and I noticed
 that expression-level value range propagation gets rid of all dangerous
 cases and the vast majority of required casts. Indeed, his test suite,
 Phobos, and my own codebase required surprisingly few changes with the new
 scheme. Moreover, we both discovered bugs due to the new feature, so we're
 happy with the status quo.
Sounds fairly reasonable.
 Now consider your code:

 byte x,y,z;
 z = x+y;

 The first line initializes all values to zero. In an intra-procedural value
 range propagation, these zeros would be propagated to the next statement,
 which would range-check. However, in the current approach, the ranges of x,
 y, and z are forgotten at the first semicolon. Then, x+y has range
 -byte.min-byte.min up to byte.max+byte.max as far as the type checker knows.
 That would fit in a short (and by the way I just found a bug with that
 occasion) but not in a byte.
The only thing is: why doesn't _this_ fail, then?

int x, y, z;
z = x + y;

I'm sure it's out of convenience, but what about in ten, fifteen years when 32-bit architectures are a historical relic and there's still this hole in the type system?

The same argument applies for the implicit conversions between int and uint. If you're going to do that, why not have implicit conversions between long and ulong on 64-bit platforms?
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Jarrett Billingsley wrote:
 The only thing is: why doesn't _this_ fail, then?
 
 int x, y, z;
 z = x + y;
 
 I'm sure it's out of convenience, but what about in ten, fifteen years
 when 32-bit architectures are a historical relic and there's still
 this hole in the type system?
Well 32-bit architectures may be a historical relic but I don't think 32-bit integers are. And I think it would be too disruptive a change to promote results of arithmetic operation between integers to long.
 The same argument applies for the implicit conversions between int and
 uint.  If you're going to do that, why not have implicit conversions
 between long and ulong on 64-bit platforms?
This is a different beast. We simply couldn't devise a satisfactory scheme within the constraints we have. No simple solution we could think of has worked, nor have a number of sophisticated solutions. Ideas would be welcome, though I need to warn you that the devil is in the details so the ideas must be fully baked; too many good sounding high-level ideas fail when analyzed in detail. Andrei
Jul 07 2009
next sibling parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Tue, Jul 7, 2009 at 11:33 AM, Andrei
Alexandrescu<SeeWebsiteForEmail erdani.org> wrote:
 Well 32-bit architectures may be a historical relic but I don't think 32-bit
 integers are. And I think it would be too disruptive a change to promote
 results of arithmetic operation between integers to long.

 ...

 This is a different beast. We simply couldn't devise a satisfactory scheme
 within the constraints we have. No simple solution we could think of has
 worked, nor have a number of sophisticated solutions. Ideas would be
 welcome, though I need to warn you that the devil is in the details so the
 ideas must be fully baked; too many good sounding high-level ideas fail when
 analyzed in detail.
Hm. Just throwing this out there, as a possible solution for both problems.

Suppose you kept the current set of integer types, but made all of them "open" (i.e. byte+byte=short, int+int=long etc.). Furthermore, you made it impossible to implicitly convert between the signed and unsigned types of the same size (the int<>uint hole disappears).

But then you introduce two new native-size integer types. Well, we already have them - ptrdiff_t and size_t - but give them nicer names, like word and uword. Unlike the other integer types, these would be implicitly convertible to one another. They'd more or less take the place of 'int' and 'uint' in most code, since most of the time, the size of the integer isn't that important.
Jul 07 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:h2vprn$1t77$1 digitalmars.com...
 This is a different beast. We simply couldn't devise a satisfactory scheme 
 within the constraints we have. No simple solution we could think of has 
 worked, nor have a number of sophisticated solutions. Ideas would be 
 welcome, though I need to warn you that the devil is in the details so the 
 ideas must be fully baked; too many good sounding high-level ideas fail 
 when analyzed in detail.
What about C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things?
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:h2vprn$1t77$1 digitalmars.com...
 This is a different beast. We simply couldn't devise a satisfactory scheme 
 within the constraints we have. No simple solution we could think of has 
 worked, nor have a number of sophisticated solutions. Ideas would be 
 welcome, though I need to warn you that the devil is in the details so the 
 ideas must be fully baked; too many good sounding high-level ideas fail 
 when analyzed in detail.
What about C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things?
An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that. Andrei
Jul 07 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 Nick Sabalausky wrote:

 checked/unchecked scheme and someone's (I forget who) idea of 
 expanding that to something like unchecked(overflow, sign)? What was 
 wrong with those sorts of things? 
An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that.
We also should be careful not to turn D into a "bondage and discipline" language that nobody will use unless contractually forced to.
Jul 07 2009
prev sibling parent reply "Nick Sabalausky" <a a.a> writes:
"Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
news:h30907$2lk0$3 digitalmars.com...
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:h2vprn$1t77$1 digitalmars.com...
 This is a different beast. We simply couldn't devise a satisfactory 
 scheme within the constraints we have. No simple solution we could think 
 of has worked, nor have a number of sophisticated solutions. Ideas would 
 be welcome, though I need to warn you that the devil is in the details 
 so the ideas must be fully baked; too many good sounding high-level 
 ideas fail when analyzed in detail.
What about C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things?
An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that.
That scheme is more like "unchecked" being the default for the code where you mostly don't care, and then "checked" to enable the checks in the spots where you do care. And then there's been the suggestions for finer-grained control for wherever that's needed.
Jul 07 2009
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:h30907$2lk0$3 digitalmars.com...
 Nick Sabalausky wrote:
 "Andrei Alexandrescu" <SeeWebsiteForEmail erdani.org> wrote in message 
 news:h2vprn$1t77$1 digitalmars.com...
 This is a different beast. We simply couldn't devise a satisfactory 
 scheme within the constraints we have. No simple solution we could think 
 of has worked, nor have a number of sophisticated solutions. Ideas would 
 be welcome, though I need to warn you that the devil is in the details 
 so the ideas must be fully baked; too many good sounding high-level 
 ideas fail when analyzed in detail.
What about C#'s checked/unchecked scheme and someone's (I forget who) idea of expanding that to something like unchecked(overflow, sign)? What was wrong with those sorts of things?
An unchecked-based approach was not on the table. Our focus was more on checking things properly, instead of over-checking and then relying on "unchecked" to disable that.
That scheme is more like "unchecked" being the default for the code where you mostly don't care, and then "checked" to enable the checks in the spots where you do care. And then there's been the suggestions for finer-grained control for wherever that's needed.
Well unfortunately that all wasn't considered. If properly championed, it would. I personally consider the current approach superior because it's safe and unobtrusive. Andrei
Jul 07 2009
prev sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de julio a las 00:48 me escribiste:
 Robert Jacques wrote:
On Mon, 06 Jul 2009 01:05:10 -0400, Walter Bright <newshound1 digitalmars.com> 
wrote:
Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip
Thanks for another great release. Also, I'm not sure if this is a bug or a feature with regard to the new integer rules:

byte x,y,z;
z = x+y; // Error: cannot implicitly convert expression (cast(int)x + cast(int)y) of type int to byte

which makes sense, in that a byte can overflow, but also doesn't make sense, since integer behaviour is different.
Walter has implemented an ingenious scheme for disallowing narrowing conversions while at the same time minimizing the number of casts required. He hasn't explained it, so I'll sketch an explanation here.

The basic approach is "value range propagation": each expression is associated with a minimum possible value and a maximum possible value. As complex expressions are assembled out of simpler expressions, the ranges are computed and propagated. For example, this code compiles:

int x = whatever();
bool y = x & 1;

The compiler figures that the range of x is int.min to int.max, the range of 1 is 1 to 1, and (here's the interesting part) the range of x & 1 is 0 to 1. So it lets the code go through. However, it won't allow this:

int x = whatever();
bool y = x & 2;

because x & 2 has range between 0 and 2, which won't fit in a bool.

The approach generalizes to arbitrarily complex expressions. Now here's the trick though: the value range propagation is local, i.e. all ranges are forgotten beyond one expression. So as soon as you move on to the next statement, the ranges have been forgotten. Why? Simply put, increased implementation difficulties and increased compiler memory footprint for diminishing returns. Both Walter and I noticed that expression-level value range propagation gets rid of all dangerous cases and the vast majority of required casts. Indeed, his test suite, Phobos, and my own codebase required surprisingly few changes with the new scheme. Moreover, we both discovered bugs due to the new feature, so we're happy with the status quo.

Now consider your code:

byte x,y,z;
z = x+y;

The first line initializes all values to zero. In an intra-procedural value range propagation, these zeros would be propagated to the next statement, which would range-check. However, in the current approach, the ranges of x, y, and z are forgotten at the first semicolon. Then, x+y has range byte.min + byte.min up to byte.max + byte.max as far as the type checker knows.
That would fit in a short (and by the way I just found a bug with that occasion) but not in a byte.
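The scheme Andrei describes can be modelled in a few lines. The following is an illustrative Python sketch (not DMD's actual implementation): each expression carries a (lo, hi) interval, and a narrowing conversion is accepted only when that whole interval fits inside the target type's range.

```python
# Illustrative sketch of expression-level value range propagation,
# modelling the idea described above -- NOT DMD's actual code.
# An expression's range is a (lo, hi) pair; a narrowing conversion is
# allowed only when the whole range fits inside the target type.

INT  = (-2**31, 2**31 - 1)
BYTE = (-128, 127)
BOOL = (0, 1)

def range_and(x, y):
    """Range of (a & b) when one operand is a known non-negative
    constant mask: the result is bounded by [0, mask]."""
    if y[0] == y[1] and y[0] >= 0:
        return (0, y[0])
    if x[0] == x[1] and x[0] >= 0:
        return (0, x[0])
    return INT  # conservative fallback

def range_add(x, y):
    """Range of (a + b): add the corresponding endpoints."""
    return (x[0] + y[0], x[1] + y[1])

def fits(expr_range, type_range):
    """True if every value the expression can take fits the type."""
    return type_range[0] <= expr_range[0] and expr_range[1] <= type_range[1]

# bool y = x & 1;  -- accepted: range [0, 1] fits bool
assert fits(range_and(INT, (1, 1)), BOOL)
# bool y = x & 2;  -- rejected: range [0, 2] does not fit bool
assert not fits(range_and(INT, (2, 2)), BOOL)
# byte z = x + y;  -- rejected: range [-256, 254] does not fit byte
assert not fits(range_add(BYTE, BYTE), BYTE)
```

Because ranges are forgotten at each semicolon, the model gives every variable its full type range at the start of each expression, which is exactly why x + y over two bytes gets the range byte.min + byte.min up to byte.max + byte.max.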
This seems nice. I think it would be nice if this kind of things are commented in the NG before a compiler release, to allow community input and discussion.

I think this kind of things are the ones that deserves some kind of RFC (like Python PEPs) like someone suggested a couple of days ago.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 This seems nice. I think it would be nice if this kind of things are
 commented in the NG before a compiler release, to allow community input
 and discussion.
Yup, that's what happened to case :o).
 I think this kind of things are the ones that deserves some kind of RFC
 (like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up? Andrei
Jul 07 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de julio a las 10:56 me escribiste:
 Leandro Lucarella wrote:
This seems nice. I think it would be nice if this kind of things are
commented in the NG before a compiler release, to allow community input
and discussion.
Yup, that's what happened to case :o).
I think this kind of things are the ones that deserves some kind of RFC
(like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up?
What's wrong with the Wiki?

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
He andáu muchos caminos, muchos caminos he andáu,
Chile tiene el buen vino y Suecia, el bacalao.
Esta'o Unido tiene el hot do', Cuba tiene el mojito,
Guatemala, el cornalito y Brasil la feishoada.
Jul 07 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  7 de julio a las 10:56 me escribiste:
 Leandro Lucarella wrote:
 This seems nice. I think it would be nice if this kind of things are
 commented in the NG before a compiler release, to allow community input
 and discussion.
Yup, that's what happened to case :o).
 I think this kind of things are the ones that deserves some kind of RFC
 (like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up?
What's wrong with the Wiki?
Where's the link? Andrei
Jul 07 2009
parent reply Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de julio a las 15:12 me escribiste:
 Leandro Lucarella wrote:
Andrei Alexandrescu, el  7 de julio a las 10:56 me escribiste:
Leandro Lucarella wrote:
This seems nice. I think it would be nice if this kind of things are
commented in the NG before a compiler release, to allow community input
and discussion.
Yup, that's what happened to case :o).
I think this kind of things are the ones that deserves some kind of RFC
(like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up?
What's wrong with the Wiki?
Where's the link?
I mean the D Wiki! http://prowiki.org/wiki4d/wiki.cgi

(BTW, nice job with the Wiki for whoever did it, I don't remember who was putting a lot of work on improving the Wiki, but it's really much better organized now)

I think we can add a DIP (D Improvement Proposal =) section in the "Language Development" section: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Ya ni el cielo me quiere, ya ni la muerte me visita
Ya ni el sol me calienta, ya ni el viento me acaricia
Jul 07 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Andrei Alexandrescu, el  7 de julio a las 15:12 me escribiste:
 Leandro Lucarella wrote:
 Andrei Alexandrescu, el  7 de julio a las 10:56 me escribiste:
 Leandro Lucarella wrote:
 This seems nice. I think it would be nice if this kind of things are
 commented in the NG before a compiler release, to allow community input
 and discussion.
Yup, that's what happened to case :o).
 I think this kind of things are the ones that deserves some kind of RFC
 (like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up?
What's wrong with the Wiki?
Where's the link?
I mean the D Wiki! http://prowiki.org/wiki4d/wiki.cgi (BTW, nice job with the Wiki for whoever did it, I don't remember who was putting a lot of work on improving the Wiki, but it's really much better organized now) I think we can add a DIP (D Improvement Proposal =) section in the "Language Development" section: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
Great idea. I can only hope the technical level will be much higher than the two threads related to switch. Andrei
Jul 07 2009
parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  7 de julio a las 16:54 me escribiste:
 Leandro Lucarella wrote:
Andrei Alexandrescu, el  7 de julio a las 15:12 me escribiste:
Leandro Lucarella wrote:
Andrei Alexandrescu, el  7 de julio a las 10:56 me escribiste:
Leandro Lucarella wrote:
This seems nice. I think it would be nice if this kind of things are
commented in the NG before a compiler release, to allow community input
and discussion.
Yup, that's what happened to case :o).
I think this kind of things are the ones that deserves some kind of RFC
(like Python PEPs) like someone suggested a couple of days ago.
I think that's a good idea. Who has the time and resources to set that up?
What's wrong with the Wiki?
Where's the link?
I mean the D Wiki! http://prowiki.org/wiki4d/wiki.cgi (BTW, nice job with the Wiki for whoever did it, I don't remember who was putting a lot of work on improving the Wiki, but it's really much better organized now) I think we can add a DIP (D Improvement Proposal =) section in the "Language Development" section: http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
Great idea. I can only hope the technical level will be much higher than the two threads related to switch.
I think proposals should be published there but discussed here, so be ready for all kinds of discussions (the ones you like and the ones you don't =). From time to time, when there is some kind of agreement, the proposal should be updated (with a new "revision number").

I just went wild and added a DIP index[1] and the first DIP (DIP1), a template for creating new DIPs[2]. These are just rough drafts, but I think they are good enough to start with. Comments are appreciated. I will post a "formal" announcement too.

[1] http://www.prowiki.org/wiki4d/wiki.cgi?DiPs
[2] http://www.prowiki.org/wiki4d/wiki.cgi?DiP1

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
"The Guinness Book of Records" holds the record for being the most
stolen book in public libraries
Jul 07 2009
prev sibling parent reply Jesse Phillips <jessekphillips gmail.com> writes:
On Tue, 07 Jul 2009 18:43:41 -0300, Leandro Lucarella wrote:

 
 (BTW, nice job with the Wiki for whoever did it, I don't remember who
 was putting a lot of work on improving the Wiki, but it's really much
 better organized now)
Hi, thanks.
 I think we can add a DIP (D Improvement Proposal =) section in the
 "Language Development" section:
 http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
I was reusing the Ideas and Discussion page for these things, but DIP could be for those proposals brought forward by the few involved in accepting ideas, since Ideas and Discussion will likely end up with a lot of old or less thought-out ideas.

http://www.prowiki.org/wiki4d/wiki.cgi?IdeaDiscussion
Jul 07 2009
parent Leandro Lucarella <llucax gmail.com> writes:
Jesse Phillips, el  8 de julio a las 01:27 me escribiste:
 On Tue, 07 Jul 2009 18:43:41 -0300, Leandro Lucarella wrote:
 
 
 (BTW, nice job with the Wiki for whoever did it, I don't remember who
 was putting a lot of work on improving the Wiki, but it's really much
 better organized now)
Hi, thanks.
 I think we can add a DIP (D Improvement Proposal =) section in the
 "Language Development" section:
 http://prowiki.org/wiki4d/wiki.cgi?LanguageDevel
I was reusing the Ideas and Discussion page for these things, but DIP could be for those proposals brought forward by the few involved in accepting ideas, since Ideas and Discussion will likely end up with a lot of old or less thought-out ideas. http://www.prowiki.org/wiki4d/wiki.cgi?IdeaDiscussion
Oops! I'm sorry I missed that. =/

Anyway, I think that page serves as a way to index interesting discussions in the NG. The idea of DIPs is to work the other way around: you first present the idea as a DIP, then it's discussed. When you get sufficient input, you update the DIP with a new revision number, then you put it up for discussion again in the NG. You repeat that until it's Accepted or Rejected (or you give up and Withdraw it =).

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Pitzulino! Pitzulino! Todos a cantar por el tubo!
Pitzulino! Pitzulino! Todos a cantar por el codo!
Jul 08 2009
prev sibling next sibling parent Don <nospam nospam.com> writes:
Walter Bright wrote:
 Something for everyone here.
 
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Why is 'final switch' required? Another possible way of dealing with the same issue would be:

switch(e) {
    case E.A: blah; break;
    case E.B: blah; break;
    ...
    default: assert(0);
}

Ie, if switch is over an enum type, and the 'default' clause consists only of assert(0), the compiler could generate a warning if some of the possible enum values never appear in a case statement. It's not quite the same as 'final switch', but I think it captures most of the use cases.
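The warning proposed above amounts to a set comparison between the enum's members and the handled case values. A minimal Python sketch of the check (purely illustrative, not any compiler's implementation):

```python
# Sketch of the exhaustiveness warning proposed above (illustrative only):
# when a switch over an enum ends in `default: assert(0);`, compare the
# enum's members against the values that appear in case statements and
# warn about any member that can only reach the assert(0) default.

def missing_cases(enum_members, case_values):
    """Return enum members with no corresponding case statement."""
    return sorted(set(enum_members) - set(case_values))

E = ["A", "B", "C"]
assert missing_cases(E, ["A", "B"]) == ["C"]  # warn: E.C is unhandled
assert missing_cases(E, E) == []              # exhaustive: no warning
```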
Jul 07 2009
prev sibling next sibling parent reply "Lionello Lunesu" <lionello lunesu.remove.com> writes:
"Walter Bright" <newshound1 digitalmars.com> wrote in message 
news:h2s0me$30f2$1 digitalmars.com...
 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
Great release, thanks to all those that have contributed to it!

Walter, since the lib/include folders were split according to OS, the dmd2 zip consistently has an extensionless "lib" file in the dmd2 folder. This is because of the 'install' target in win32.mak that would previously copy phobos.lib and gcstub.obj to the lib folder, but now copies their contents to a file called "lib" instead.

I've made a patch and attached it to http://d.puremagic.com/issues/show_bug.cgi?id=3153

L.
Jul 07 2009
next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Thanks.
Jul 07 2009
prev sibling parent "Nick Sabalausky" <a a.a> writes:
"Lionello Lunesu" <lionello lunesu.remove.com> wrote in message 
news:h30vss$pmo$1 digitalmars.com...
 Walter, since the lib/include folders were split according to OS, the dmd2 
 zip consistently has an extensionless "lib" file in the dmd2 folder.
It's also in D1.
Jul 07 2009
prev sibling next sibling parent reply Leandro Lucarella <llucax gmail.com> writes:
Walter Bright, el  5 de julio a las 22:05 me escribiste:
 Something for everyone here.
 
 
 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip
 
 
 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised by how many of them had patches by Don (the vast majority!).

Thanks Don! I think it's great that more people are becoming major D contributors.

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
MP: Qué tengo?                     B: Un balcón-terraza.
MP: No, en mi mano, Bellini...     B: Un secarropas!
MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
    números.
MP: No Bellini. Toma medidas.      B: Un ministro.
MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
    de plástico y otras de madera.
MP: No, Bellini, no y no!
	-- El Gran Bellini (Mario Podestá con una regla)
Jul 08 2009
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Leandro Lucarella wrote:
 Walter Bright, el  5 de julio a las 22:05 me escribiste:
 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised by how many of them had patches by Don (the vast majority!). Thanks Don! I think it's great that more people are becoming major D contributors.
Don is awesome and a good example to follow!

Andrei

P.S. With the help of a dictionary I think I figured out most of this joke:

MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
MP: Qué tengo?                     B: Un balcón-terraza.
MP: No, en mi mano, Bellini...     B: Un secarropas!
MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
    números.
MP: No Bellini. Toma medidas.      B: Un ministro.
MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
    de plástico y otras de madera.
MP: No, Bellini, no y no!
	-- El Gran Bellini (Mario Podestá con una regla)

It's about wild and funny semantic confusions made by Bellini in attempting to guess with hints, due to homonymy and, heck, polysemy I guess :o). But what does the secarropas (tumble-dryer according to http://www.spanishdict.com/translate/secarropas) have to do with anything?
Jul 08 2009
next sibling parent Leandro Lucarella <llucax gmail.com> writes:
Andrei Alexandrescu, el  8 de julio a las 11:46 me escribiste:

I'm sorry about the spanish taglines, they are selected randomly =)

And most (in spanish) are pretty local (argentine) jokes.

 P.S. With the help of a dictionary I think I figured most of this joke:
 
 MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
 MP: Qué tengo?                     B: Un balcón-terraza.
 MP: No, en mi mano, Bellini...     B: Un secarropas!
 MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
     números.
 MP: No Bellini. Toma medidas.      B: Un ministro.
 MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
     de plástico y otras de madera.
 MP: No, Bellini, no y no!
 	-- El Gran Bellini (Mario Podestá con una regla)
 
 It's about wild and funny semantic confusions made by Bellini in
 attempting to guess with hints, due to homonymy and, heck, polysemy
 I guess :o). But what does the secarropas (tumble-dryer according to
 http://www.spanishdict.com/translate/secarropas) have to do with
 anything?
It doesn't; it's just absurd to have a tumble-dryer in your hand.

This quote is a sketch from a cable (cult) TV show from Argentina called "Magazine For Fai". It was mostly sketch-based absurd humor (a format similar to Monty Python's Flying Circus) with the particularity of being performed by children (except for the creator). This sketch is about a not-so-good mentalist ("El Gran Bellini", or "The Great Bellini"), who tries to guess what's in the hand of Mario Podestá (the creator) with his eyes covered.

If you manage to understand spoken Spanish, you can see this sketch on YouTube: http://www.youtube.com/watch?v=dANeOdBX6QM

Or read the Wikipedia article about the show: http://es.wikipedia.org/wiki/Magazine_For_Fai

-- 
Leandro Lucarella (luca) | Blog colectivo: http://www.mazziblog.com.ar/blog/
----------------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145 104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------------
Que importante, entonces en estos días de globalización refregar
nuestras almas, pasarle el lampazo a nuestros corazones para alcanzar
un verdadero estado de babia peperianal.
	-- Peperino Pómoro
Jul 08 2009
prev sibling parent reply Walter Bright <newshound1 digitalmars.com> writes:
Andrei Alexandrescu wrote:
 P.S. With the help of a dictionary I think I figured most of this joke:
 
 MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
 MP: Qué tengo?                     B: Un balcón-terraza.
 MP: No, en mi mano, Bellini...     B: Un secarropas!
 MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
     números.
 MP: No Bellini. Toma medidas.      B: Un ministro.
 MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
     de plástico y otras de madera.
 MP: No, Bellini, no y no!
     -- El Gran Bellini (Mario Podestá con una regla)
Translation for the lazy: A donkey, a horse, and a fish walk into a bar.
Jul 08 2009
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
Walter Bright wrote:
 Andrei Alexandrescu wrote:
 P.S. With the help of a dictionary I think I figured most of this joke:

 MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
 MP: Qué tengo?                     B: Un balcón-terraza.
 MP: No, en mi mano, Bellini...     B: Un secarropas!
 MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
     números.
 MP: No Bellini. Toma medidas.      B: Un ministro.
 MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
     de plástico y otras de madera.
 MP: No, Bellini, no y no!
     -- El Gran Bellini (Mario Podestá con una regla)
Translation for the lazy: A donkey, a horse, and a fish walk into a bar.
And the bartender asks the horse: "Why the long face?" Andrei
Jul 08 2009
parent Ary Borenszweig <ary esperanto.org.ar> writes:
Andrei Alexandrescu wrote:
 Walter Bright wrote:
 Andrei Alexandrescu wrote:
 P.S. With the help of a dictionary I think I figured most of this joke:

 MP: Cómo está, estimado Bellini?   B: Muy bien, Mario, astrologando.
 MP: Qué tengo?                     B: Un balcón-terraza.
 MP: No, en mi mano, Bellini...     B: Un secarropas!
 MP: No, escuche bien, eh. Tiene    B: El circo de Moscú.
     números.
 MP: No Bellini. Toma medidas.      B: Un ministro.
 MP: No Bellini, eh! Algunas son    B: Una modelo, Mario!
     de plástico y otras de madera.
 MP: No, Bellini, no y no!
     -- El Gran Bellini (Mario Podestá con una regla)
Translation for the lazy: A donkey, a horse, and a fish walk into a bar.
And the bartender asks the horse: "Why the long face?"
http://www.youtube.com/watch?v=KZ-Okkpgeh4
Jul 08 2009
prev sibling next sibling parent Walter Bright <newshound1 digitalmars.com> writes:
Leandro Lucarella wrote:
 I incidentally went through all the D2 bug reports that had been fixed in
 this release and I was really surprised by how many of them had patches
 by Don (the vast majority!).
Don's an awesome contributor. I and the rest of the D community are very much indebted to him.
 Thanks Don! I think it's great that more people are becoming major
 D contributors.
 
Jul 08 2009
prev sibling parent Don <nospam nospam.com> writes:
Leandro Lucarella wrote:
 Walter Bright, el  5 de julio a las 22:05 me escribiste:
 Something for everyone here.


 http://www.digitalmars.com/d/1.0/changelog.html
 http://ftp.digitalmars.com/dmd.1.046.zip


 http://www.digitalmars.com/d/2.0/changelog.html
 http://ftp.digitalmars.com/dmd.2.031.zip
I incidentally went through all the D2 bug reports that had been fixed in this release and I was really surprised by how many of them had patches by Don (the vast majority!). Thanks Don! I think it's great that more people are becoming major D contributors.
Thanks! Yeah, I did a major assault on the segfault/internal compiler error bugs. I figured that right now, the most useful thing I could do was to make the compiler stable. I have a few more to give to Walter, but in general it should be quite difficult to crash the compiler now.

A couple of my other bug patches -- 1994 and 3010 -- appear to be fixed in this release, though they are not in the changelog. Also the ICE from 339 is fixed.
Jul 09 2009
prev sibling next sibling parent Max Samukha <spambox d-coding.com> writes:
On Sun, 05 Jul 2009 22:05:10 -0700, Walter Bright
<newshound1 digitalmars.com> wrote:

Something for everyone here.


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.046.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.031.zip
Nice release. Thanks!

I wonder if expression tuples had been considered for use in the multiple case statement? And if yes, what was the reason they were discarded? Some examples:

case InclusiveRange!('a', 'z'):
case StaticTuple!(1, 2, 5, 6):
case AnEnum.tupleof[1..3]:
Jul 13 2009
prev sibling parent reply bearophile <bearophileHUGS lycos.com> writes:
I'm playing with the new D2 a bit, this comes from some real D1 code:

void main(string[] args) {
    int n = args.length;
    ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
}

At compile-time the compiler says:
temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255 ?
255 : n) of type int to ubyte

You have to add a silly cast:

void main(string[] args) {
    int n = args.length;
    ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
}

In theory if the compiler gets a smarter such cast can be unnecessary.

Bye,
bearophile
Jul 16 2009
next sibling parent reply John C <johnch_atms hotmail.com> writes:
bearophile Wrote:

 I'm playing with the new D2 a bit, this comes from some real D1 code:
 
 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
 }
 
 At compile-time the compiler says:
 temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255
? 255 : n) of type int to ubyte
 
 You have to add a silly cast:
 
 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
 }
 
 In theory if the compiler gets a smarter such cast can be unnecessary.
 
 Bye,
 bearophile
Did you not read the change log? "Implicit integral conversions that could result in loss of significant bits are no longer allowed."
Jul 16 2009
parent reply bearophile <bearophileHUGS lycos.com> writes:
John C:
 Did you not read the change log?
 "Implicit integral conversions that could result in loss of significant bits
are no longer allowed."
This was the code:

ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));

That last n is guaranteed to fit inside a ubyte. (Yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I had thought it was. So I was wrong, and I have shown this to other people who may be interested. I have also encouraged making the compiler smarter to avoid a cast in such cases, because this is a single expression, so range propagation is probably not too hard to implement given the current design of the front-end. You have missed most of the purposes of my post.)

Bye,
bearophile
Jul 16 2009
parent reply BCS <ao pathlink.com> writes:
Reply to bearophile,

 John C:
 
 Did you not read the change log?
 "Implicit integral conversions that could result in loss of
 significant bits are no longer allowed."
This was the code: ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); That last n is guaranteed to fit inside an ubyte (yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I have thought it was. So I am wrong and I have shown this to other people, that may be interested. I have also encouraged to make the compiler smarter to avoid a cast in such case, because this is a single expression, so range propagation is probably not too much hard to implement given the current design of the front-end. You have missed most of the purposes of my post). Bye, bearophile
I'm going with Steven on this one. Making the legality of code dependent on its semantics is risky, because it then ends up with bizarre portability issues or requires that the scope of the semantic analysis engine be part of the language spec.
Jul 16 2009
next sibling parent reply Don <nospam nospam.com> writes:
BCS wrote:
 Reply to bearophile,
 
 John C:

 Did you not read the change log?
 "Implicit integral conversions that could result in loss of
 significant bits are no longer allowed."
This was the code: ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n)); That last n is guaranteed to fit inside an ubyte (yes, I understand the compiler is not smart enough yet to understand it, but from the things explained by Andrei I have thought it was. So I am wrong and I have shown this to other people, that may be interested. I have also encouraged to make the compiler smarter to avoid a cast in such case, because this is a single expression, so range propagation is probably not too much hard to implement given the current design of the front-end. You have missed most of the purposes of my post). Bye, bearophile
I'm going with Steven on this one. Making the legality of code dependent on its semantics is risky, because it then ends up with bizarre portability issues or requires that the scope of the semantic analysis engine be part of the language spec.
In this case, I think bearophile's right: it's just a problem with range propagation of the ?: operator. I think the compiler should be required to do the semantic analysis for single expressions. Not more, not less.
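Extending the expression-level scheme to ?: is mostly interval arithmetic. Here is an illustrative Python sketch (not DMD's code) of how bearophile's clamp expression could be accepted, assuming the comparison in the condition narrows the variable's range within each branch:

```python
# Illustrative sketch: the range of (cond ? a : b) is the union of the
# two branch ranges, and a comparison in cond can tighten a variable's
# range inside a branch. NOT any compiler's actual implementation.

def range_cond(a, b):
    """Union of the ranges of the two branches of cond ? a : b."""
    return (min(a[0], b[0]), max(a[1], b[1]))

UBYTE = (0, 255)

# ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
# In the innermost branch both tests have failed, so n is known to lie
# in [1, 254]; the whole expression therefore has range [0, 255].
inner = range_cond((255, 255), (1, 254))  # n >= 255 ? 255 : n
whole = range_cond((0, 0), inner)         # n <= 0 ? 0 : inner
assert whole == (0, 255)
assert UBYTE[0] <= whole[0] and whole[1] <= UBYTE[1]  # fits ubyte: no cast
```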
Jul 17 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 17 Jul 2009 08:08:23 -0400, Don <nospam nospam.com> wrote:

 In this case, I think bearophile's right: it's just a problem with range  
 propagation of the ?: operator. I think the compiler should be required  
 to do the semantics analysis for single expressions. Not more, not less.
Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow:

ubyte foo(uint x)
{
   if(x < 256) return x;
   return 0;
}

-Steve
Jul 17 2009
parent reply Don <nospam nospam.com> writes:
Steven Schveighoffer wrote:
 On Fri, 17 Jul 2009 08:08:23 -0400, Don <nospam nospam.com> wrote:
 
 In this case, I think bearophile's right: it's just a problem with 
 range propagation of the ?: operator. I think the compiler should be 
 required to do the semantics analysis for single expressions. Not 
 more, not less.
Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow: ubyte foo(uint x) { if(x < 256) return x; return 0; } -Steve
Already happens. This works:

ubyte foo(uint n) {
    return true ? 255 : n;
}

And this fails:

ubyte boo(uint n) {
    if (true) return 255;
    else return n;
}
Jul 17 2009
parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Fri, 17 Jul 2009 09:46:11 -0400, Don <nospam nospam.com> wrote:

 Steven Schveighoffer wrote:
 On Fri, 17 Jul 2009 08:08:23 -0400, Don <nospam nospam.com> wrote:

 In this case, I think bearophile's right: it's just a problem with  
 range propagation of the ?: operator. I think the compiler should be  
 required to do the semantics analysis for single expressions. Not  
 more, not less.
Why? What is the benefit of keeping track of the range of integral variables inside an expression, to eliminate a cast? I don't think it's worth it. As far as I know, the ?: is the only expression where this can happen. You will get cries of inconsistency when the compiler doesn't allow: ubyte foo(uint x) { if(x < 256) return x; return 0; } -Steve
 Already happens. This works:

 ubyte foo(uint n)
 {
     return true ? 255 : n;
 }

 And this fails:

 ubyte boo(uint n)
 {
     if (true) return 255; else return n;
 }
Does that require range propagation?  That is, when the compiler sees:

return true ? 255

does it even look at the type or range of the other branch?  Does this compile:

class C {}

ubyte foo(C n)
{
   return true ? 255 : n;
}

(don't have the latest compiler installed yet, so I couldn't check it myself)

I think the situation is different because the compiler isn't forced to consider the other branch; it can be optimized out (I'm surprised it doesn't do that in the general if(true) case anyway, even with optimization turned off).

-Steve
Jul 17 2009
parent bearophile <bearophileHUGS lycos.com> writes:
Steven Schveighoffer:
 Does this compile:
 
 class C {}
 
 ubyte foo(C n)
 {
    return true ? 255 : n;
 }
 
 (don't have the latest compiler installed yet, so I couldn't check it  
 myself)
It doesn't compile (DMD v2.031):

temp.d(5): Error: incompatible types for ((255) ? (n)): 'int' and 'temp.C'

Bye,
bearophile
Jul 17 2009
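[bearophile's result shows that the common type of a ?: expression is computed from both branches before any constant folding of the condition. A toy Python model of that check — the function `common_type` and its promotion rules are invented simplifications, not DMD's internals:]

```python
# Hypothetical sketch: why `true ? 255 : n` fails when n is a class.
# The common type of ?: is computed from BOTH branches before any
# constant folding, so an unreachable branch still participates.

def common_type(a, b):
    """Grossly simplified common-type rule for `cond ? a : b`."""
    numeric = {"int", "uint", "ubyte"}
    if a in numeric and b in numeric:
        return "int"          # simplified integer promotion
    if a == b:
        return a
    raise TypeError(f"incompatible types: '{a}' and '{b}'")

print(common_type("int", "uint"))   # 255 vs uint n: promotes fine
try:
    common_type("int", "C")         # 255 vs class C
except TypeError as e:
    print(e)                        # incompatible types: 'int' and 'C'
```

[Under this model, constant-folding `true` away would have to happen before type checking to make the class example compile — which is why Steve's "the other branch can be optimized out" intuition doesn't match the observed behavior.]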
prev sibling parent Stewart Gordon <smjg_1998 yahoo.com> writes:
BCS wrote:
 Reply to bearophile,
 
 John C:

 Did you not read the change log?
 "Implicit integral conversions that could result in loss of
 significant bits are no longer allowed."
 This was the code:

 ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));

 That last n is guaranteed to fit inside an ubyte
<snip>
 I'm going with Steven on this one. Making the legality of code dependent 
 on its semantics is risky because it then ends up with bizarre 
 portability issues or requires that the scope of the semantic analysis 
 engine be part of the language spec.
For the record, Nice has a form of automatic downcasting that works something like this, though not AFAIK on numerical comparisons. To take an example from http://nice.sourceforge.net/safety.html#id2488356 :

----------
Component c = ...;
?List<Component> children;

if (c instanceof ContainerComponent)
    children = c.getChildren();
else
    children = null;
----------

getChildren is a method of ContainerComponent, but not of general Component. The test performed in the condition of the if statement has the additional effect of casting c to a ContainerComponent within the if statement's body. Nice also has nullable and non-nullable types (note the ?) and, in the same way, it forces you to check that it isn't null before you try to dereference it.

The principle could be applied to if statements and ?: expressions alike (as it would appear Nice does), and even && and || expressions. And it could be extended to arithmetic comparisons. A possible way is to spec that, if n is an int, and k is a compile-time constant >= 0, then given

n >= k ? expr1 : expr2

any occurrence of n in expr1 is treated as cast(uint) n. And similarly for the other relational operators and other signed integer types.

And then that, if u is of some unsigned integer type, and k is a compile-time constant within the range of u's type, then given

u <= k ? expr1 : expr2

any occurrence of u in expr1 is treated as cast to the smallest unsigned integer type that u will fit into. And similarly for the other relational operators. Then your example would compile.

However,
- if we're going to do this, then for consistency we probably ought to define all literals to be of the smallest type they'll fit into, and prefer unsigned over signed, unless overridden with a suffix
- we could go on defining rules like this for more complicated conditions, and it could get complicated
- I'm not sure if this kind of automatic casting is desirable from a generic programming POV.

Stewart.
Jul 17 2009
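[Stewart's proposed rule amounts to intersecting a variable's known range with the constraint implied by a comparison against a constant, keeping one refined range per branch. A minimal hypothetical sketch in Python — the helper `refine_le` is an invented name, and empty ranges are not modeled:]

```python
# Hypothetical sketch of the comparison-based narrowing Stewart
# describes: for `u <= k ? expr1 : expr2`, occurrences of u inside
# expr1 may assume the range capped at k; the false branch gets the
# complementary range starting at k + 1.

def refine_le(var_range, k):
    """Split a (lo, hi) range on the test `var <= k`.

    Returns (range if the test is true, range if it is false)."""
    lo, hi = var_range
    true_r = (lo, min(hi, k))
    false_r = (max(lo, k + 1), hi)
    return true_r, false_r

UINT = (0, 2**32 - 1)

# u <= 255 ? ... : ...
true_r, false_r = refine_le(UINT, 255)
print(true_r)    # (0, 255): inside the true branch, u fits a ubyte
print(false_r)   # (256, 4294967295)
```

[Per Stewart's second rule, the (0, 255) refined range would let the compiler pick ubyte as "the smallest unsigned integer type that u will fit into" within the true branch, with no explicit cast.]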
prev sibling next sibling parent reply "Steven Schveighoffer" <schveiguy yahoo.com> writes:
On Thu, 16 Jul 2009 08:49:14 -0400, bearophile <bearophileHUGS lycos.com>  
wrote:

 I'm playing with the new D2 a bit, this comes from some real D1 code:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
 }

 At compile-time the compiler says:
 temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n  
 >= 255 ? 255 : n) of type int to ubyte
 You have to add a silly cast:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
 }

 In theory if the compiler gets smarter, such a cast can be unnecessary.
I don't see how, doesn't this require semantic analysis to determine whether implicit casting is allowed?  I think you are asking too much of the compiler.  What if the expression was instead a function call, should the compiler look at the function source to determine whether it can fit in a ubyte?  Where do you draw the line?

I think the current behavior is fine.  The D1 code probably works not because the compiler is 'smarter' but because it blindly truncates data.

Perhaps if it were an optimization it could be implemented, but the result of an optimization cannot change the validity of the code...  In other words, it couldn't be a compiler feature, it would have to be part of the spec, which would mean all compilers must implement it.

BTW, I think cast is a perfect requirement here -- you are saying, yes I know the risks and I'm casting anyways.

-Steve
Jul 16 2009
parent Charles Hixson <charleshixsn earthlink.net> writes:
Steven Schveighoffer wrote:
 On Thu, 16 Jul 2009 08:49:14 -0400, bearophile 
 <bearophileHUGS lycos.com> wrote:
 
 I'm playing with the new D2 a bit, this comes from some real D1 code:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
 }

 At compile-time the compiler says:
 temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n 
 >= 255 ? 255 : n) of type int to ubyte
 You have to add a silly cast:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
 }

 In theory if the compiler gets smarter, such a cast can be unnecessary.
 I don't see how, doesn't this require semantic analysis to determine 
 whether implicit casting is allowed?  I think you are asking too much of 
 the compiler.  What if the expression was instead a function call, 
 should the compiler look at the function source to determine whether it 
 can fit in a ubyte?  Where do you draw the line?

 I think the current behavior is fine.  The D1 code probably works not 
 because the compiler is 'smarter' but because it blindly truncates data.

 Perhaps if it were an optimization it could be implemented, but the 
 result of an optimization cannot change the validity of the code...  In 
 other words, it couldn't be a compiler feature, it would have to be part 
 of the spec, which would mean all compilers must implement it.

 BTW, I think cast is a perfect requirement here -- you are saying, yes I 
 know the risks and I'm casting anyways.

 -Steve
He's saying the cast shouldn't be required, as the code entails that n will fit into a ubyte without loss of information. Perhaps it's too much to ask. I'm not sure. I don't think he's sure. But if he doesn't ask, he won't find out. (And it sure would be nice to avoid casts in situations analogous to that.)
Jul 16 2009
prev sibling parent reply Jason House <jason.james.house gmail.com> writes:
bearophile Wrote:

 I'm playing with the new D2 a bit, this comes from some real D1 code:
 
 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
 }
 
 At compile-time the compiler says:
 temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n >= 255
? 255 : n) of type int to ubyte
 
 You have to add a silly cast:
 
 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
 }
 
 In theory if the compiler gets smarter, such a cast can be unnecessary.
 
 Bye,
 bearophile
add it to bugzilla.
Jul 16 2009
parent Jarrett Billingsley <jarrett.billingsley gmail.com> writes:
On Thu, Jul 16, 2009 at 6:43 PM, Jason House<jason.james.house gmail.com> wrote:
 bearophile Wrote:

 I'm playing with the new D2 a bit, this comes from some real D1 code:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : n));
 }

 At compile-time the compiler says:
 temp.d(3): Error: cannot implicitly convert expression (n <= 0 ? 0 : n 
 >= 255 ? 255 : n) of type int to ubyte
 You have to add a silly cast:

 void main(string[] args) {
     int n = args.length;
     ubyte m = (n <= 0 ? 0 : (n >= 255 ? 255 : cast(ubyte)n));
 }

 In theory if the compiler gets smarter, such a cast can be unnecessary.

 Bye,
 bearophile
add it to bugzilla.
Bearophile has never reported anything in Bugzilla. It's inexplicable. He constantly complains about D and does nothing to help it.
Jul 16 2009