
digitalmars.D - Regarding the proposed Binary Literals Deprecation

reply Puneet Goel <puneet coverify.com> writes:
I recently saw a talk by Walter Bright at the recently concluded 
DConf where he made a case for dropping compiler support for 
binary literals.

I use Dlang for hardware. Binary literals are scattered 
everywhere in my code. D is a systems programming language. And 
thanks to industry-wide FPGA/CPU consolidation, programmable 
hardware is projected to grow big in the coming years.

Please reconsider binary literal deprecation.
Sep 09 2022
next sibling parent IGotD- <nise nise.com> writes:
On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently 
 concluded DConf where Walter made a case for dropping compiler 
 support for Binary literals.

 I use Dlang for hardware. Binary literals are scattered 
 everywhere in my code. D is a systems programming language. And 
 thanks to industry-wide FPGA/CPU consolidation, programmable 
 hardware is projected to grow big in the coming years.

 Please reconsider binary literal deprecation.
Why would you want to drop binary literals, which are useful for near-HW programming? It's just a literal; why would it be so hard to support?
Sep 09 2022
prev sibling next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently 
 concluded DConf where Walter made a case for dropping compiler 
 support for Binary literals.

 I use Dlang for hardware. Binary literals are scattered 
 everywhere in my code. D is a systems programming language. And 
 thanks to industry-wide FPGA/CPU consolidation, programmable 
 hardware is projected to grow big in the coming years.

 Please reconsider binary literal deprecation.
Same here. Please leave the binary literals alone. In general, "I haven't needed this feature for a long time/my friends don't use it" is a terrible indicator of what is/may be useful.
Sep 09 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Sep 09, 2022 at 05:52:37PM +0000, Max Samukha via Digitalmars-d wrote:
 On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently concluded
 DConf where Walter made a case for dropping compiler support for
 Binary literals.
 
 I use Dlang for hardware. Binary literals are scattered everywhere
 in my code. D is a systems programming language. And thanks to
 industry-wide FPGA/CPU consolidation, programmable hardware is
 projected to grow big in the coming years.
 
 Please reconsider binary literal deprecation.
Same here. Please leave the binary literals alone. In general, "I haven't needed this feature for a long time/my friends don't use it" is a terrible indicator of what is/may be useful.
I also oppose dropping binary literals. Although I don't use them often, the few times I do need them I'm very glad they are there. I consider it one of the niceties of D that C missed, and would be rather disappointed if we dropped it. It would be a pain to have to resort to a template just so I can use binary literals. T -- If you love to ride, you must also love pulling the sled.
Sep 09 2022
parent reply Dave P. <dave287091 gmail.com> writes:
On Friday, 9 September 2022 at 18:20:56 UTC, H. S. Teoh wrote:
 On Fri, Sep 09, 2022 at 05:52:37PM +0000, Max Samukha via 
 Digitalmars-d wrote:
 On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 [...]
I also oppose dropping binary literals. Although I don't use them often, the few times I do need them I'm very glad they are there. I consider it one of the niceties of D that C missed, and would be rather disappointed if we dropped it. It would be a pain to have to resort to a template just so I can use binary literals. T
C23 is actually adding binary literals (and you could already use them with gcc and clang). Although I do agree with Walter about how they’re not very useful. They’re only practical for small numbers and at that point just learning what the small hex literals are in binary is not a big deal.
Sep 09 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 9 September 2022 at 23:43:49 UTC, Dave P. wrote:
 They’re only practical for small numbers and at that point just 
 learning what the small hex literals are in binary is not a big 
 deal.
The nice thing is D lets you group the bits with underscores. So you might write something like 0b11_111_101_001, which makes it a lot easier to manage, and you can group something like a flags register the same way it appears in the documentation.
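For instance, a minimal sketch of that datasheet-style grouping (the register name and field layout here are hypothetical):

```d
// Hypothetical 11-bit status register, grouped as its datasheet draws it:
// [2-bit field][3-bit field][3-bit field][3-bit field]
enum uint STATUS_INIT = 0b11_111_101_001;

static assert(STATUS_INIT == 0x7E9); // same value; the grouping is lost in hex
```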
Sep 09 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 4:53 PM, Adam D Ruppe wrote:
 The nice thing is D lets you group the bits with underscores.
Yes, that is a great feature I copied from Ada, where it had lain forgotten. Now everyone is adding it!
 So you might do like 0b11_111_101_001 which makes it a lot easier to manage
and 
 you can group something like a flags register the same way it appears in the 
 documentation.
I haven't seen CPUs that were documented in octal since the PDP-11, even though it didn't quite work with 16 bits. It was a holdover from the 36-bit PDP-10. 8- and 16-bit processors ever since used hex.

BTW, a 0 should go after the b, unless you've got an 11-bit flags register! It's still easier to write as 0x7E9.
Sep 09 2022
next sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 02:31:05 UTC, Walter Bright 
wrote:
 I haven't seen CPUs that were documented in octal since the 
 PDP-11, even though it didn't quite work with 16 bits.
You caught me. I've never actually programmed anything in my entire life and just made all this up. It is true, no such thing exists. No programmer anywhere has ever had use for octal nor binary, and if they did, I wouldn't know anyway since I'm a huge fraud. I only come to these forums because I'm a compulsive liar dedicated to sabotaging the D language. Why? I don't know. I guess I just like to watch the world burn.
Sep 10 2022
prev sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Saturday, 10 September 2022 at 02:31:05 UTC, Walter Bright 
wrote:
 On 9/9/2022 4:53 PM, Adam D Ruppe wrote:
 The nice thing is D lets you group the bits with underscores.
Yes, that is a great feature I copied from Ada, where it had lain forgotten. Now everyone is adding it!
 So you might do like 0b11_111_101_001 which makes it a lot 
 easier to manage and you can group something like a flags 
 register the same way it appears in the documentation.
I haven't seen CPUs that were documented in octal since the PDP-11, even though it didn't quite work with 16 bits. It was a holdover from the 36-bit PDP-10. 8- and 16-bit processors ever since used hex.
8080/8085/Z80 opcodes are much easier to handle when expressed in octal. Emulation is a niche domain, but a lot of CPU and peripheral registers quite often have octal fields. Of course, hex is often enough to handle them.
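As a hedged illustration (the MOV encoding is from the 8080 data book; the helper below is just a sketch):

```d
import std.conv : octal;

// 8080 MOV r,r' encodes as octal 1ds: digit d = destination register,
// digit s = source. Register codes: B=0, C=1, D=2, E=3, H=4, L=5, M=6, A=7.
enum ubyte MOV_E_H = octal!"134"; // 0b01_011_100 == 0x5C

// Hypothetical helper rebuilding the opcode from its octal fields.
ubyte mov(uint dst, uint src)
{
    return cast(ubyte)(octal!"100" | (dst << 3) | src);
}

static assert(mov(3, 4) == MOV_E_H);
```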
Sep 10 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 12:15 PM, Patrick Schluter wrote:
 8080/8085/Z80 opcodes when expressed in octal are much easier to handle. 
I've never seen 8080/Z80 opcodes expressed in octal. I know that the modregrm byte for the x86 is 2,3,3, but I deal with that by using an inline function to manipulate it. It never occurred to me to use octal. Interesting.

```d
ubyte modregrm(uint m, uint r, uint rm)
{
    return cast(ubyte)((m << 6) | (r << 3) | rm);
}
```

https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/code_x86.d#L508

Of course, bit fields might be better :-)
Sep 10 2022
next sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 19:53:11 UTC, Walter Bright 
wrote:
 Of course, bit fields might be better :-)
Not for this! At least not the godawful C flavor of bitfields. As you know, the actual layout is implementation-defined, which means it is of extremely limited value for hardware interop. This is one thing the Phobos template at least defines (though it isn't great either).

What I'd like to see for D bitfields is a *good* system, one that defines these things in a useful manner. C's thing is ok for packing bits into available space, to make private structs smaller. But its undefined bits make it unsuitable for any kind of public api or hardware matching.
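For reference, a minimal sketch using the Phobos template mentioned above; std.bitmanip.bitfields at least documents its layout (fields are allocated from the least significant bit upward):

```d
import std.bitmanip : bitfields;

struct ModRM
{
    // Packed from bit 0 up: rm in bits 0-2, reg in bits 3-5, mod in bits 6-7.
    mixin(bitfields!(
        uint, "rm",  3,
        uint, "reg", 3,
        uint, "mod", 2));
}

unittest
{
    ModRM b;
    b.mod = 3; b.reg = 2; b.rm = 1;
    assert(*cast(ubyte*) &b == 0b11_010_001);
}
```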
Sep 10 2022
prev sibling parent Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Saturday, 10 September 2022 at 19:53:11 UTC, Walter Bright 
wrote:
 On 9/10/2022 12:15 PM, Patrick Schluter wrote:
 8080/8085/Z80 opcodes when expressed in octal are much easier 
 to handle.
I've never seen 8080/Z80 opcodes expressed in octal. I know that the modregrm byte for the x86 is 2,3,3, but I deal with that by using an inline function to manipulate it. It never occurred to me to use octal. Interesting. ubyte modregrm (uint m, uint r, uint rm) { return cast(ubyte)((m << 6) | (r << 3) | rm); } https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/code_x86.d#L508 Of course, bit fields might be better :-)
Here's a short article explaining the idea (it even works up to AMD64): https://dercuano.github.io/notes/8080-opcode-map.html#addtoc_2
Sep 11 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 4:43 PM, Dave P. wrote:
 C23 is actually adding binary literals (and you could already use them with
gcc 
 and clang).
C is regularly adding small window dressing features and not anything that would fundamentally improve the language :-/
 Although I do agree with Walter about how they’re not very useful. 
 They’re only practical for small numbers and at that point just learning
what 
 the small hex literals are in binary is not a big deal.
Hex values are far easier to read, too. Have you ever put the tip of your pencil on the screen to count the number of 1's? I have. Binary literals are user unfriendly.
Sep 09 2022
next sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright 
wrote:
 Hex values are far easier to read, too. Have you ever put the 
 tip of your pencil on the screen to count the number of 1's? I 
 have. Binary literals are user unfriendly.
Bzzzzt! Wrong. That is precisely why we allow underscores in integer literals, of any kind. If you think binary literals are user unfriendly you're using them wrong.
Sep 09 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 7:45 PM, Nicholas Wilson wrote:
 If you think binary literals are user unfriendly you're using them wrong.
Do you use them? :-)
Sep 09 2022
parent TheGag96 <thegag96 gmail.com> writes:
On Saturday, 10 September 2022 at 05:40:49 UTC, Walter Bright 
wrote:
 On 9/9/2022 7:45 PM, Nicholas Wilson wrote:
 If you think binary literals are user unfriendly you're using 
 them wrong.
Do you use them? :-)
Just searched my projects - I am, in a few places, and I'm glad they're there. For goodness sake, *please* do not remove binary literals...
Sep 10 2022
prev sibling next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright 
wrote:

 Hex values are far easier to read, too. Have you ever put the 
 tip of your pencil on the screen to count the number of 1's? I 
 have. Binary literals are user unfriendly.
Bit flags are easier to read as binary grouped in nibbles. For example:

```d
enum ubyte[16] ledDigits =
    [
        0b0011_1111, // 0
        0b0000_0110, // 1
        0b0101_1011, // 2
        0b0100_1111, // 3
        0b0110_0110, // 4
        0b0110_1101, // 5
        0b0111_1101, // 6
        0b0000_0111, // 7
        0b0111_1111, // 8
        0b0110_1111, // 9
        0b0111_0111, // A
        0b0111_1100, // b
        0b0011_1001, // C
        0b0101_1110, // d
        0b0111_1001, // E
        0b0111_0001, // F
    ];
```

Those are the bit masks for a 7-segment display. Of course, you could define them by or'ing enum flags or translating into hex, or use a template, but that would be annoying.
Sep 10 2022
next sibling parent mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 08:19:18 UTC, Max Samukha wrote:
 On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright 
 wrote:

 Hex values are far easier to read, too. Have you ever put the 
 tip of your pencil on the screen to count the number of 1's? I 
 have. Binary literals are user unfriendly.
Bit flags are easier to read as binary grouped in nibbles. For example: enum ubyte[16] ledDigits = [ 0b0011_1111, // 0 0b0000_0110, // 1 ...
Exactly! It's true that binary literals are user-unfriendly *by themselves*, but *with* separating underscores "_" they're much easier to use than hex values.
Sep 10 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 1:19 AM, Max Samukha wrote:
 Bit flags are easier to read as binary grouped in nibbles. For example:
 
 enum ubyte[16] ledDigits =
      [
          0b0011_1111, // 0
          0b0000_0110, // 1
          0b0101_1011, // 2
          0b0100_1111, // 3
          0b0110_0110, // 4
          0b0110_1101, // 5
          0b0111_1101, // 6
          0b0000_0111, // 7
          0b0111_1111, // 8
          0b0110_1111, // 9
          0b0111_0111, // A
          0b0111_1100, // b
          0b0011_1001, // C
          0b0101_1110, // d
          0b0111_1001, // E
          0b0111_0001, // F
      ];
 
 Those are the bit masks for a 7-segment display. Of course, you could define 
 them by or'ing enum flags or translating into hex, or use a template, but that 
 would be annoying.
Interesting that you brought up 7-segment display data, as I've actually written that stuff for embedded systems, and once again as a demonstration for the ABEL programming language.

A couple things about it:

1. The visual representation of the binary doesn't have any correlation with how the display looks.

2. It's a one-off. Once it's written, it never changes.

3. Writing it in hex isn't any difficulty for 16 entries.

A more compelling example would be, say, a character generator ROM, which I've also done.

```
0b01110
0b10001
0b11111
0b10001
0b10001
```

and you'll be doing a couple hundred of those at least. Wouldn't this be more appealing:

```
"
   .XXX.
   X...X
   XXXXX
   X...X
   X...X
"
```

? Then write a trivial parser, and use CTFE to generate the binary data for the table. Of course, such a parser could be used over and over for other projects.
Sep 10 2022
next sibling parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright 
wrote:
 
 and you'll be doing a couple hundred of those at least. 
 Wouldn't this be more appealing:

 "
    .XXX.
    X...X
    XXXXX
    X...X
    X...X
 "

 ? Then write a trivial parser, and use CTFE to generate the 
 binary data for the table. Of course, such a parser could be 
 used over and over for other projects.
First, the above strings as-is only work for uint8, while binary literals work for all integer sizes. Second, why not provide the above "trivial parser" in the std lib (so nobody needs to reinvent the wheel), and ask users to use it over the years to discover unseen problems and get feedback before deprecating / removing binary literals?
Sep 10 2022
next sibling parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 18:44:14 UTC, mw wrote:
 On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright
 
 Second, why not provide the above "trivial parser" into std lib 
 (so nobody need to reinvent the wheel), and ask users use it 
 over years to discover unseen problems and get feedback before 
 deprecate / remove binary literals?
And in general, can we provide alternative solutions or migration tools before deprecating / removing language features that many users depend on? E.g., Python provides the command line tool `2to3`.
Sep 10 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
There is dfix, but nobody is actively working on it.
Sep 10 2022
parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 19:04:30 UTC, rikki cattermole 
wrote:
 There is dfix, but nobody is actively working on it.
Well, 2to3 is included in the standard distribution of Python. So where is dfix? If it's not included in the standard distribution, it can easily go out of sync with the main compiler.
Sep 10 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
https://github.com/dlang-community/dfix
Sep 10 2022
parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 19:14:30 UTC, rikki cattermole 
wrote:
 https://github.com/dlang-community/dfix
Let's add it to the standard distribution of D compilers then, i.e. dmd, ldc and gdc. It needs to be maintained in the same repo as the main compilers.
Sep 10 2022
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 11/09/2022 7:17 AM, mw wrote:
 On Saturday, 10 September 2022 at 19:14:30 UTC, rikki cattermole wrote:
 https://github.com/dlang-community/dfix
Let's add it to the standard distribution of D compilers then, i.e. dmd, ldc and gdc. It needs to be maintained in the same repo as the main compilers.
It doesn't need to be moved. It's perfectly capable of being updated; it's just that nobody cares about it. See DCD and dfmt as examples; they are up to date and in the same family of tools.
Sep 10 2022
prev sibling parent Sergey <kornburn yandex.ru> writes:
On Saturday, 10 September 2022 at 19:17:34 UTC, mw wrote:
 On Saturday, 10 September 2022 at 19:14:30 UTC, rikki 
 cattermole wrote:
 https://github.com/dlang-community/dfix
Let's add to the standard distribution of D compliers then. I.e dmd, ldc and gdc. It needs to be maintained in the same repo as the main compilers.
https://github.com/dlang-community/dfix/issues/60
Sep 10 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 11:44 AM, mw wrote:
 Second, why not provide the above "trivial parser" into std lib (so nobody
need 
 to reinvent the wheel)
Indeed, why not! Want to give it a go?
Sep 10 2022
parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 19:56:14 UTC, Walter Bright 
wrote:
 On 9/10/2022 11:44 AM, mw wrote:
 Second, why not provide the above "trivial parser" into std 
 lib (so nobody need to reinvent the wheel)
Indeed, why not! Want to give it a go?
Not me. I'm fine with (e.g. uint32):

```
0b0000_0110_0110_0110_0110_0110_0110_0110
```

I don't see any advantage of

```
"...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."
```

over the binary literal, nor is it worth the effort. And I don't think the latter is more readable than the former.

What I'm saying is that if you insist on removing binary literals (I hope you don't), then [why not provide ...] the migration tool.
Sep 10 2022
next sibling parent reply mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 20:12:05 UTC, mw wrote:
 I'm fine with: (e.g. uint32)
Actually I just realized this is a good example, so I'll align these two literals here:

```
0b0000_0110_0110_0110_0110_0110_0110_0110
"...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."
```

Can we have a poll here? Which is more readable for uint32?

1) the binary literal
2) the string literal

And I have another argument for *NOT* deprecating binary literals. Let's talk about the int binary literal => string repr => int round trip: many people (including me) write code generators / formatters sometimes, so there's the builtin "%b" format char:

```
writeln(format("%b", 0xcc)); // output 11001100
```

For the round trip, it can easily be converted back to an int again. Now, how much more work do I need to do to make those "XX..XX.." strings work in such a round trip?!
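For reference, a minimal sketch of that round trip using only Phobos:

```d
import std.conv : to;
import std.format : format;

void main()
{
    string s = format("%b", 0xCC); // int -> binary string: "11001100"
    uint n = s.to!uint(2);         // binary string -> int, parsed in radix 2
    assert(n == 0xCC);
}
```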
Sep 10 2022
parent matheus <matheus gmail.com> writes:
On Saturday, 10 September 2022 at 20:43:29 UTC, mw wrote:
 ...
 0b0000_0110_0110_0110_0110_0110_0110_0110
  "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX."
 ...
For me, in this example, the former (binary representation) is without doubt better. Matheus.
Sep 10 2022
prev sibling next sibling parent reply Dave P. <dave287091 gmail.com> writes:
On Saturday, 10 September 2022 at 20:12:05 UTC, mw wrote:
 On Saturday, 10 September 2022 at 19:56:14 UTC, Walter Bright 
 wrote:
 On 9/10/2022 11:44 AM, mw wrote:
 Second, why not provide the above "trivial parser" into std 
 lib (so nobody need to reinvent the wheel)
Indeed, why not! Want to give it a go?
Not me. I'm fine with: (e.g. uint32) 0b0000_0110_0110_0110_0110_0110_0110_0110 I don't see any advantage of: `"...._.XX._.XX._.XX._.XX._.XX._.XX._.XX.”` over the binary literals, and it worth the effort. And I don't think the latter is more readable than the former. What I'm saying is that: if you insist on removing binary (I hope you not), then [why not provide ...] the migration tool.
```D
ulong parse(const char[] data){
    ulong result = 0;
    foreach(ch; data){
        switch(ch){
            case '.':
                result <<= 1;
                break;
            case 'x':
            case 'X':
                result <<= 1;
                result |= 1;
                break;
            case ' ':
            case '\t':
            case '\n':
            case '\r':
            case '_':
                continue;
            default:
                throw new Exception("oops");
        }
    }
    return result;
}

static assert("...".parse == 0b000);
static assert("..x".parse == 0b001);
static assert(".x.".parse == 0b010);
static assert(".xx".parse == 0b011);
static assert("x..".parse == 0b100);
static assert("x.x".parse == 0b101);
static assert("xx.".parse == 0b110);
static assert("xxx".parse == 0b111);

static assert("
        xxx
        x.x
        xxx
        ".parse == 0b111_101_111);

static assert("x.x.__x__.x.x".parse == 0b1010__1__0101);

private bool does_throw(const char[] data){
    bool caught = false;
    try {
        const _ = data.parse;
    } catch (Exception e){
        caught = true;
    }
    return caught;
}

static assert(does_throw("x0x"));
static assert(does_throw("1010"));
static assert(does_throw("1010"));
```
Sep 10 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 2:13 PM, Dave P. wrote:
 static assert("
          xxx
          x.x
          xxx
          ".parse == 0b111_101_111);
Nice
Sep 11 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 1:12 PM, mw wrote:
 I don't see any advantage of:
 
 "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX.”
That wasn't my suggestion.
Sep 11 2022
parent mw <mingwu gmail.com> writes:
On Sunday, 11 September 2022 at 19:55:26 UTC, Walter Bright wrote:
 On 9/10/2022 1:12 PM, mw wrote:
 I don't see any advantage of:
 
 "...._.XX._.XX._.XX._.XX._.XX._.XX._.XX.”
That wasn't my suggestion.
Then what's your suggestion in this scenario? And what's your suggestion for the int binary literal => string repr => int round trip example: https://forum.dlang.org/post/bilqldkymsjgcmbrnqqq forum.dlang.org
Sep 11 2022
prev sibling next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright 
wrote:

 and you'll be doing a couple hundred of those at least. 
 Wouldn't this be more appealing:

 "
    .XXX.
    X...X
    XXXXX
    X...X
    X...X
 "
It probably would, but that's not our use case.
 ? Then write a trivial parser, and use CTFE to generate the 
 binary data for the table. Of course, such a parser could be 
 used over and over for other projects.
As I said, we could do it differently, but binary literals + comments are just optimal for our use case. There are other cases where we use them (in the definition of a binary protocol and in a few other places). We could but don't want to write a parser, translate into hex, etc. Replacing the literals with templates would be acceptable but annoying.
Sep 11 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/11/2022 12:02 AM, Max Samukha wrote:
 As I said, we could do it differently, but binary literals + comments are just 
 optimal for our use case.
You could do the 7 segment as:

```
enum ubyte[16] digits = [
    segment!"
 -
| |

| |
 -
",
    segment!"

  |

  |

",
    segment!"
 -
  |
 -
|
 -
",
    ..
];
```

or even:

```
enum ubyte[16] digits = segment!"
 -        -   -
| |    |   |   |
          -   -
| |    | |     |
 -        -   -
";
```

D is ideal for doing micro-DSLs like this.
Sep 11 2022
parent Max Samukha <maxsamukha gmail.com> writes:
On Sunday, 11 September 2022 at 20:11:36 UTC, Walter Bright wrote:

 or even:

 enum ubyte[16] = segment!"
  -        -   -
 | |    |   |   |
           -   -
 | |    | |     |
  -        -   -
 ";


 D is ideal for doing micro-DSLs like this.
Nice trolling.)
Sep 11 2022
prev sibling parent reply Kagamin <spam here.lot> writes:
On Saturday, 10 September 2022 at 18:14:38 UTC, Walter Bright 
wrote:
 Interesting that you brought up 7-segment display data, as I've 
 actually written that stuff for embedded systems, and once 
 again as a demonstration for the ABEL programming language.

 A couple things about it:

 1. The visual representation of the binary doesn't have any 
 correlation with how the display looks.
Dunno, I figured it out just by eyeballing it. Hex would be opaque.
 2. It's a one-off. Once it's written, it never changes.
Perl-style programming ("write once and never touch") is sad; I try to write my code in an auditable manner. If I were to simplify D, I'd slay delimited strings; IIRC there are dozens of them.
Sep 12 2022
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Monday, 12 September 2022 at 17:56:42 UTC, Kagamin wrote:
 If I were to simplify D, I'd slay delimited strings, IIRC there 
 are dozens of them.
There are three forms. All are trivial lexer features that haven't changed for ages and haven't caused bugs in any other language component. While some editors might not support them properly, it isn't hard to add them and then forget about it.
Sep 12 2022
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Sep 12, 2022 at 06:08:21PM +0000, Adam D Ruppe via Digitalmars-d wrote:
 On Monday, 12 September 2022 at 17:56:42 UTC, Kagamin wrote:
 If I were to simplify D, I'd slay delimited strings, IIRC there are
 dozens of them.
There's three forms. All trivial lexer features that haven't changed for ages and not caused bugs in any other language component. While some editors might not support them properly, it isn't hard to add then forget about it.
+1. I thoroughly enjoy D's delimited strings; they make code-generating code more readable. Long snippets of code in a language being generated would be utterly horrendous if I had to manually escape every line of it. Heredoc syntax is ideal for this sort of thing. T -- It only takes one twig to burn down a forest.
Sep 12 2022
prev sibling next sibling parent reply 0xEAB <desisma heidel.beer> writes:
On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright 
wrote:
 Hex values are far easier to read, too.
“easier to read” is not “easier to comprehend”. Comprehensibility really depends on the use case. But if we assume one’s working with binary flags or something similar (which probably was the reason to use binary literals in the first place), why would we write them in a different notation?

To give an example: I can’t translate hex literals to their binary form in my head (in reasonable time). And I never even had to do so – except for an exam or two at school. Wanna know how I did it? – I wrote down the `0=0000`…`1=0001`…`F=1111` table…

I understand that “I’ve written binary literals in hexadecimal form for 30 years” is a reasonable point of view. But that doesn’t really help anyone who doesn’t have the strong mental connection between them and their binary meaning.

Writing binary literals using “bitshifting” notation in C didn’t provide great readability either. (We did so in the microcontroller programming course at school.) But I’d consider `(1 << 7) | (1 << 1)` way more comprehensible than `0x82`.

On the contrary, if our use case is RGB channels (where the individual binary bits don’t matter and we instead think in “whole” numbers), of course it would be inconvenient to use binary literals here. (As always: use the right tool for the job!)
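A quick sketch of that comparison in D (the constant is only illustrative):

```d
// The same two-flag value three ways:
enum a = 0b1000_0010;          // bits 7 and 1 are directly visible
enum b = (1 << 7) | (1 << 1);  // the shifts spell out the bit positions
enum c = 0x82;                 // compact, but decoded mentally
static assert(a == b && b == c);
```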
 Have you ever put the tip of your pencil on the screen to count 
 the number of 1's? I have.
No, I haven’t. Thankfully numeric literals are group-able using underscores (in D and binary ones in PHP at least). I actually find myself rather counting digits of decimal number literals (when not using D of course or in real life). Well, I have to admit I haven’t had to deal with code “abusing” binary literals so far…
Sep 10 2022
next sibling parent 0xEAB <desisma heidel.beer> writes:
On Saturday, 10 September 2022 at 15:21:50 UTC, 0xEAB wrote:
 I actually find myself rather counting digits of decimal number 
 literals (when not using D of course or in real life).
A note on that: I really like *underscore* being used as *thousands separator*. Way easier to parse than having to decipher whether a number is written in English or German (applies to a lot of other locales, too!). `1.000` vs `1,000` – which one is “one thousand” and which is “one and zero thousandths”? We might have different opinions on that, but we all will agree on `1_000` meaning “one thousand” :)
Sep 10 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 8:21 AM, 0xEAB wrote:
 But if we assume one’s working with binary flags or something similar (which 
 probably was the reason to use binary literals in the first place), why would
we 
 write them in a different notation?
I use binary flags all the time:

```d
enum Flag
{
    CARRY    = 1,
    SIGN     = 2,
    OVERFLOW = 4,
    PARITY   = 8,
    ZERO     = 0x10,
    ...
}
```

but as mnemonics.
 To give an example:
 I can’t translate hex literals to their binary form in my head (in
reasonable 
 time).
I understand. I suppose it's like learning to touch-type. Takes some effort at first, but a lifetime of payoff. There's no way to avoid working with binary data without getting comfortable with hex.

(In 8th grade I took a 2-week summer school course in touch typing. The typewriters were mechanical monsters, you really had to hammer the keys to get it to work, but that helped build the muscle memory. Having a lifetime of payoff from that was soooo worth the few hours.)

Other things worth taking the time to get comfortable with:

1. 2's complement arithmetic
2. how floating point works
3. pointers
 And I never even had to do so – except for an exam or two at school.
 Wanna know how I did it? – I wrote down the `0=0000`…`1=0001`…`F=1111`
table…
That's how I learned the multiplication tables. I'd write out the matrix by hand.
Sep 10 2022
parent 0xEAB <desisma heidel.beer> writes:
  On Saturday, 10 September 2022 at 18:32:26 UTC, Walter Bright 
wrote:
 I understand. I suppose it's like learning to touch-type. Takes 
 some effort at first, but a lifetime of payoff. There's no way 
 to avoid working with binary data without getting comfortable 
 with hex.
Might not be generally applicable, but “binary data” and “data represented by binary digits” aren’t practically the same thing to me. Like: do the individual digits of a number impose a special meaning? Or are they just a vehicle to represent the number?

If the individual binary digits have a meaning (and the number they form is only their storage, think: container), viewing them by their number in a consolidating representation imposes extra work to derive the individual digits. Because it’s the digits that matter, the number itself has no meaning. Even if they’re technically the same.

The opposite would be the case with 8-bit RGB color channels: the digits impose no meaning by themselves. It’s the whole number that represents the brightness of the channel. Whether any digit is 0,1,2,3,… is useless information on its own. It only serves a purpose when one has the whole number available. The digits are only a tool to visualize the number here.

…unless we consolidate multiple channels into one number: e.g. `#B03931` (= 0xB0_39_31). While the number itself still represents a specific color (mixed from the three channels R/G/B), the meaning is implied by looking at the digits representing (the brightness of) the individual channels. Once one converts said number (the “color code”) to decimal (→ 11548977), that meaning is lost.

Another example: if someone told me that DDRB (Data Direction Register B) of my ATmega 328p were 144, I’d know nothing while technically I’ve been told everything. I first have to separate said number into binary digits; then I will find out what I could have been told in the first place: PINB4 (digit 4) and PINB7 (digit 7) are set to 1 (“input”). Hex form 0x90 might make the separation process easier, but it’s still a number representing a certain state, not the state data itself. Binary form however matches perfectly the actual state data.

Why this differentiation matters: in case we get one digit wrong, how much a “whole number” is off depends on the position of the digit in the number; e.g.

• if we’re talking about a color channel, then 255 (xFF) vs 55 (x37) will make a whole lot of a difference, while 255 (xFF) vs 250 (xFA) might be barely visible in real life. Nevertheless, the “whole” color channel (it’s an atomic thing from this point of view) is wrong. There’s practically no specific sub-portion that is wrong (we’re only making up one by partitioning the number into digits).

• If a digit of the data direction register in my microcontroller is wrong, only one pin will malfunction (further impact depending on the application’s circuit; not the best example obviously). In other words, only a part of the whole thing is wrong.

• If one channel of our RGB color lamp is off by whatever value, at first glance the whole color might look wrong (because colors mix in our eyes/brains), but in fact it’s only one LED/bulb of the three that is actually wrong.

Don’t get me wrong, please. Of course, there is a difference between binary digits vs decimal digits vs hexadecimal digits. But again, isolated digits have no actual standalone meaning for things like the brightness of a single color channel. On the other hand, if we consolidate the binary digits of a register of our microcontroller to dec or hex, even a single digit being off by a little will now also make a huge impact (on up to 4 pins!).

In the end it comes down to the “atomic” unit we’re looking at. Like I wrote, if any of the (irrelevant) digits of the value of our color channel is wrong, the whole channel is wrong.
 (In 8th grade I took a 2 week summer school course in touch 
 typing. The typewriters were mechanical monsters, you really 
 had to hammer the keys to get it to work, but that helped build 
 the muscle memory. Having a lifetime of payoff from that was 
 soooo worth the few hours.)
Touch-typing is useful to me, too :) Learning to mentally translate hex to binary patterns isn’t, to be honest (at least yet; removing binary literals from D would introduce a potential use case).

If I were asked, hex numbers are only possibly nice for the first 4 bits; beyond these I have to keep track of the position as well. There we go: on the screen I could at least use a pointer pencil (if wanted). In my mind there is no such option.

Most of the time, when I’m working with binary data, individual binary digits (“bits”) of bytes don’t matter. Binary representation serves no purpose there. If it weren’t for control characters etc., ISO-8859-15 visualization would work for me as well… Memorizing the 16-pattern HEX->BIN table doesn’t really help me with binary data retrieved from /dev/urandom or when mysql-native (MySQL/MariaDB client library) decides to return BINARY columns typed as strings. (In case someone wondered what binary data I mostly work with, ignoring media files.)

But if someone types "DDRB |= 144;" when programming my microcontroller, I’ll nervously get my calculator out ;)
Sep 10 2022
prev sibling parent reply wjoe <invalid example.com> writes:
On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright 
wrote:
 Hex values are far easier to read, too. Have you ever put the 
 tip of your pencil on the screen to count the number of 1's? I 
 have. Binary literals are user unfriendly.
That's true only if you're working with hex values. Besides, I've had to use the tip of a pencil to count Fs in hex values. Are you saying that hex is friendlier when dealing with decimal values, too? Because I've always used decimal for that - and I think the most user-friendly way to go about binary values is binary literals. Converting something else to hex and then back to something else is the opposite of user-friendly, IMO.
Sep 13 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/13/22 2:07 PM, wjoe wrote:
 On Saturday, 10 September 2022 at 02:22:53 UTC, Walter Bright wrote:
 Hex values are far easier to read, too. Have you ever put the tip of 
 your pencil on the screen to count the number of 1's? I have. Binary 
 literals are user unfriendly.
That's true only if you're working with hex values. Besides, I've had to use the tip of a pencil to count Fs in hex values. Are you saying that hex is user friendlier when dealing with decimal values, too ?
I've used the tip of a pencil to write out what the real bits are of a hex literal, because I can never remember what all of them actually mean. -Steve
Sep 13 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/13/2022 11:07 AM, wjoe wrote:
 Besides, I've had to use the tip of a pencil to count Fs in hex values.
So have I. But I have only 1/4 as many digits to count, and don't need to if there are 6 or fewer digits.
Sep 13 2022
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 13 September 2022 at 18:50:01 UTC, Walter Bright 
wrote:

 So have I. But I have only 1/4 as many digits to count, and 
 don't need to if there are 6 or fewer digits.
Nibble is four digits.
Sep 13 2022
parent reply Don Allen <donaldcallen gmail.com> writes:
On Tuesday, 13 September 2022 at 19:21:11 UTC, Max Samukha wrote:
 On Tuesday, 13 September 2022 at 18:50:01 UTC, Walter Bright 
 wrote:

 So have I. But I have only 1/4 as many digits to count, and 
 don't need to if there are 6 or fewer digits.
Nibble is four digits.
And your point is? One hex digit represents 4 binary digits. What Walter said is correct.

I would also add that talking about user-friendly/unfriendly doesn't make a lot of sense unless you state the purpose of the literal. If I wanted to initialize an int to the number of states in the US, no one sane would write

```
int n_us_states = 0b110010;
```

If I were defining a mask to extract a field from a hardware register, I might use a binary literal, though I personally would use the shifting technique I described in an earlier post.
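For reference, a minimal sketch of the kind of shifting technique Don alludes to (his earlier post is not part of this thread, so the exact form is assumed):

```d
// Extract an n-bit field starting at bit `lo` from a hardware register value.
uint field(uint reg, uint lo, uint n)
{
    return (reg >> lo) & ((1u << n) - 1);
}

static assert(field(0b1011_0100, 2, 3) == 0b101);
```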
Sep 13 2022
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 13 September 2022 at 19:47:43 UTC, Don Allen wrote:
 On Tuesday, 13 September 2022 at 19:21:11 UTC, Max Samukha 
 wrote:
 On Tuesday, 13 September 2022 at 18:50:01 UTC, Walter Bright 
 wrote:

 So have I. But I have only 1/4 as many digits to count, and 
 don't need to if there are 6 or fewer digits.
Nibble is four digits.
And your point is?
My point has been restated multiple times in this thread by several people: if the binary represents bit flags and is grouped with a dash in nibbles, it is easier to read than a hex. You count bits, not hex digits, and there is no need to count more than 4.
Sep 13 2022
next sibling parent Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 13 September 2022 at 19:56:22 UTC, Max Samukha wrote:

 My point has been restated multiple times in this thread by 
 several people: if the binary represents bit flags and is 
 grouped with a dash in nibbles, it is easier to read than a 
 hex. You count bits, not hex digits, and there is no need to 
 count more than 4.
s/dash/underscore
Sep 13 2022
prev sibling next sibling parent Daniel N <no public.email> writes:
On Tuesday, 13 September 2022 at 19:56:22 UTC, Max Samukha wrote:
 My point has been restated multiple times in this thread by 
 several people: if the binary represents bit flags and is 
 grouped with a dash in nibbles, it is easier to read than a 
 hex. You count bits, not hex digits, and there is no need to 
 count more than 4.
You are right. Furthermore, complexity is not reduced by removing literals: consider if you use 5+ languages, and every language has its own include to enable literals; then you need to remember 5 different includes. (std.conv really makes no sense; what if you don't even use Phobos?)
Sep 13 2022
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Tuesday, 13 September 2022 at 19:56:22 UTC, Max Samukha wrote:
 On Tuesday, 13 September 2022 at 19:47:43 UTC, Don Allen wrote:
 On Tuesday, 13 September 2022 at 19:21:11 UTC, Max Samukha 
 wrote:
 On Tuesday, 13 September 2022 at 18:50:01 UTC, Walter Bright 
 wrote:

 So have I. But I have only 1/4 as many digits to count, and 
 don't need to if there are 6 or fewer digits.
Nibble is four digits.
And your point is?
My point has been restated multiple times in this thread by several people: if the binary represents bit flags and is grouped with a dash in nibbles, it is easier to read than a hex. You count bits, not hex digits, and there is no need to count more than 4.
I'm aware of those arguments. It wasn't at all clear how your terse comment related to them. Yes, if you are concerned with individual bits, then binary representation is obviously more natural than hex (or octal or decimal). But depending on your purpose, other representations may be more natural than 0b. I gave an example in a previous post and therefore won't repeat.
Sep 13 2022
parent Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 13 September 2022 at 20:08:02 UTC, Don Allen wrote:

 I'm aware of those arguments. It wasn't at all clear how your 
 terse comment related to them.
Sorry for that. I all too often assume people can read my mind.
 Yes, if you are concerned with individual bits, then binary 
 representation is obviously more natural than hex (or octal or 
 decimal). But depending on your purpose, other representations 
 may be more natural than 0b. I gave an example in a previous 
 post and therefore won't repeat.
Yes, nobody is arguing otherwise, and we'd better go back to the point of the discussion. Walter wants to remove binary literals on these false assumptions:

1. Nobody uses binary literals.
2. Hex is always more readable than binary.
3. Removing binary literals would simplify the language and compiler.

Let's stop him.
Sep 13 2022
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/13/22 3:47 PM, Don Allen wrote:

 I would also add that talking about user-friendly/unfriendly doesn't 
 make a lot of sense unless you state the purpose of the literal. If I 
 wanted to initialize an int to the number of states in the US, no one 
 sane would write
 ````
 int n_us_states = 0b110010
 ````
 If I were defining a mask to extract a field from a hardware register, I 
 might use a binary literal, though I personally would use the shifting 
 technique I described in an earlier post.
Agreed. The purpose is important.

If I wanted to specify an "every third bit set" mask, in hex it would be `0x924924924...`. But in binary it is `0b100100100100...`. The second version is immediately clear what it is, whereas the first is not. While hex is usually clearer than decimal, it's not always as clear as binary.

BTW, you know how I figured out that 924 pattern? In the easiest way possible of course!

```d
writefln("%x", 0b100100100100100100100100);
```

-Steve
Sep 13 2022
next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Tuesday, 13 September 2022 at 20:06:55 UTC, Steven 
Schveighoffer wrote:
 On 9/13/22 3:47 PM, Don Allen wrote:

 I would also add that talking about user-friendly/unfriendly 
 doesn't make a lot of sense unless you state the purpose of 
 the literal. If I wanted to initialize an int to the number of 
 states in the US, no one sane would write
 ````
 int n_us_states = 0b110010
 ````
 If I were defining a mask to extract a field from a hardware 
 register, I might use a binary literal, though I personally 
 would use the shifting technique I described in an earlier 
 post.
 Agreed. The purpose is important. If I wanted to specify an "every third bit set" mask, in hex it would be `0x924924924...`. But in binary it is `0b100100100100...`. The second version is immediately clear what it is, whereas the first is not. While hex is usually clearer than decimal, it's not always as clear as binary. BTW, you know how I figured out that 924 pattern? In the easiest way possible of course!

 ```d
 writefln("%x", 0b100100100100100100100100);
 ```

 -Steve
So you used 0b notation to come up with a justification for 0b notation :-)

I do this sort of thing with an HP calculator.
Sep 13 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/13/22 4:12 PM, Don Allen wrote:
 On Tuesday, 13 September 2022 at 20:06:55 UTC, Steven Schveighoffer wrote:
 On 9/13/22 3:47 PM, Don Allen wrote:

 I would also add that talking about user-friendly/unfriendly doesn't 
 make a lot of sense unless you state the purpose of the literal. If I 
 wanted to initialize an int to the number of states in the US, no one 
 sane would write
 ````
 int n_us_states = 0b110010
 ````
 If I were defining a mask to extract a field from a hardware 
 register, I might use a binary literal, though I personally would use 
 the shifting technique I described in an earlier post.
Agreed. The purpose is important. If I wanted to specify an "every third bit set" mask, in hex it would be `0x924924924...`. But in binary it is `0b100100100100...`. The second version is immediately clear what it is, whereas the first is not. While hex is usually clearer than decimal, it's not always as clear as binary. BTW, you know how I figured out that 924 pattern? In the easiest way possible of course! ```d writefln("%x", 0b100100100100100100100100);
So you used 0b notation to come up with a justification for 0b notation :-)
I used it to figure out what the pattern would be in hex. The arguments here are that binary is not needed because hex has it covered. Like if I get an error on Windows of -1073741819, I have to put it into hex to see what the true error is (in hex, you can recognize the pattern of 0xc0000005).

-Steve
Sep 13 2022
prev sibling parent wjoe <invalid example.com> writes:
On Tuesday, 13 September 2022 at 20:12:10 UTC, Don Allen wrote:
 So you used 0b notation to come up with a justification for 0b 
 notation :-)

 I do this sort of thing with an HP calculator.
No, he used binary notation because it's the best tool for the job. He didn't even need to reach for his calculator and that's the point. I would also never do the bit shifting thing you showed earlier (for that purpose). It's way too much mental friction. You need to build in your mind what you'd see on screen with binary literals.
Sep 14 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/13/2022 1:06 PM, Steven Schveighoffer wrote:
 If I wanted to specify an "every third bit set" mask, in hex it would be 
 `0x924924924...`. But in binary it is `0b100100100100...`. The second version
is 
 immediately clear what it is, whereas the first is not.
Is it? How do you know it didn't overflow the int and create a long? How do you know you filled up the int? It's pretty clear the hex one is a long.
Sep 13 2022
next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, Sep 13, 2022 at 01:43:44PM -0700, Walter Bright via Digitalmars-d wrote:
 On 9/13/2022 1:06 PM, Steven Schveighoffer wrote:
 If I wanted to specify an "every third bit set" mask, in hex it would be
 `0x924924924...`. But in binary it is `0b100100100100...`. The second
 version is immediately clear what it is, whereas the first is not.
Is it? How do you know it didn't overflow the int and create a long? How do you know you filled up the int?
Simple, use `_`: `0b1001_0010_0100`. This makes it immediately obvious exactly how many bits the literal occupies. T -- Javascript is what you use to allow third party programs you don't know anything about and doing you know not what to run on your computer. -- Charles Hixson
Sep 13 2022
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/13/22 4:43 PM, Walter Bright wrote:
 On 9/13/2022 1:06 PM, Steven Schveighoffer wrote:
 If I wanted to specify an "every third bit set" mask, in hex it would 
 be `0x924924924...`. But in binary it is `0b100100100100...`. The 
 second version is immediately clear what it is, whereas the first is not.
Is it? How do you know it didn't overflow the int and create a long? How do you know you filled up the int?
How do you know the purpose is to fill up an int?
 It's pretty clear the hex one is a long.
It's not a specific example with actual requirements for an existing problem. The point is that the sequence 924 isn't as clear that every 3rd bit is set vs. 100.

Have you never started with "I need a number that has these properties", and then proceeded to build that number? Like if you started with "I need a number that's all 9 digits in decimal" you wouldn't try to figure it out in hex, right? You'd just write 99999.... And if you need it to fit in an int, you figure that out (probably with trial-and-error). You don't start with "well, I can't use decimal, because then I'll never know if it fits in an int!"

Same thing with binary. It allows me to express *certain numbers* without thinking or figuring too hard. Like building a number that has n consecutive bits set (i.e. the poker example). Or if you have a register that has sets of odd-length bit patterns.

The list of things that it helps with is not large. It's also not completely eclipsed by hex. And unlike the horrible C octal syntax, it's not error-prone. IMO, enough to counter any justification for removal, or hoisting into an expensive library implementation. It's not even a different *type*, it costs nothing, because everything happens in the lexer.

Removing this feature is so insignificant in terms of compiler/language "savings", and significant in breaking existing code. It just shouldn't even be considered for removal.

-Steve
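A minimal sketch of that trade-off for an n-consecutive-bits mask (n = 5 assumed for illustration):

```d
// The same 5-bit mask three ways; all equal 31.
enum a = 0b1_1111;     // binary literal: the five bits are directly visible
enum b = 0x1F;         // hex: compact, but translated mentally
enum c = (1 << 5) - 1; // shift arithmetic: clear intent, more machinery
static assert(a == b && b == c);
```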
Sep 13 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/13/2022 2:04 PM, Steven Schveighoffer wrote:
 Is it? How do you know it didn't overflow the int and create a long? How do 
 you know you filled up the int?
How do you know the purpose is to fill up an int?
Ok, I'll rephrase that. How do you know when to stop? There's a reason hex is so ubiquitous. It's compact. Binary literals beyond a few digits (8 max) are more or less unreadable. Yes, the _ can extend it to more digits before it becomes unreadable. (Even long hex numbers benefit from _, again, after 8 digits.)
Sep 13 2022
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/13/22 8:35 PM, Walter Bright wrote:
 On 9/13/2022 2:04 PM, Steven Schveighoffer wrote:
 Is it? How do you know it didn't overflow the int and create a long? 
 How do you know you filled up the int?
How do you know the purpose is to fill up an int?
Ok, I'll rephrase that. How do you know when to stop?
Because I'm done making the mask. In this specific situation, I'm only testing 9 bits.

```
0b100100100 // obvious, clear, easy
0x?         // have to calculate using 0b numbers (hint: it's not 924)
```
 There's a reason hex is so ubiquitous. It's compact. Binary literals 
 beyond a few digits (8 max) are more or less unreadable. Yes, the _ can 
 extend it to more digits before it becomes unreadable. (Even long hex 
 numbers benefit from _, again, after 8 digits.)
But it doesn't disprove the fact that *sometimes*, hex digits aren't as clear.

1) binary numbers are sometimes clearer given the context
2) binary numbers *already* are a thing in D
3) there is no ambiguity with a binary number literal. The `0b` prefix is obvious.

-Steve
Sep 13 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
Sep 13 2022
next sibling parent monkyyy <crazymonkyyy gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
Classes, private, immutable, udas, attributes, contracts, dmd having optimizations (everyone who needs them will use ldc), expectations, betterc, playing with c++ and objective c
Sep 13 2022
prev sibling next sibling parent Max Samukha <maxsamukha gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:

 People often complain that D has too many features. What 
 features would you say are not worth it?
Hex/binary/octal literals are perceived as a single feature. Removing some of them would actually complicate the language by introducing an inconsistency into that feature (see Timon's post). What really simplifies a language is removal of inconsistencies and special cases, and also improvement of interactions between features. For example, we are now struggling with the impossibility to perfectly forward a variadic function call:

```d
import core.lifetime : forward;

void bar(short a) { }

void foo(alias target, A...)(auto ref A args)
{
    target(forward!args);
}

void main()
{
    bar(1);     // ok
    foo!bar(1); // error because args[0] is inferred as int
}
```

We now have to force superfluous casts on the API user, or resort to all kinds of metaprogramming hackery, or go back to Java-style monkey-coding.
Sep 14 2022
prev sibling next sibling parent Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways?
Yes, definitely. Why use `foreach` sometimes when you have `for`, `while` and even `goto` to do it in another way?
 People often complain that D has too many features. What 
 features would you say are not worth it?
Clumsy Phobos solutions replacing simple, elegant, consistent language syntax. For complex numbers, the language complexity involved probably justifies a library solution. For the `0b` literals, definitely not.
Sep 14 2022
prev sibling next sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways?
It’s *change* not *features* that must be justified. If a feature is already properly implemented and available for years, removing it must be justified. It is trivial to justify removal of a feature that was promised by the spec, but never implemented (e.g. `cent`). It is easy to justify changes on a feature (including removal) that was inconsistent or never worked correctly (I’m no expert, but I remember `shared` being named in this context).
 People often complain that D has too many features. What 
 features would you say are not worth it?
Any feature that works as intended is probably used or even relied upon by someone. I cannot give you a list, but a criterion: the ones that are worth removing are those that have not worked correctly for years. Apparently, no fix is being found and/or nobody cares enough. It makes sense to remove those.
Sep 14 2022
prev sibling next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
This is an example of what I mention at this link below (and in the thread that led to that post). https://forum.dlang.org/post/jslyncewynqaefohloog forum.dlang.org You argue about compiler-complexity AND user-complexity on this topic. Sometimes one, sometimes the other. It's not always consistent and some can find it confusing.
Sep 14 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/14/2022 4:00 AM, jmh530 wrote:
 You argue about compiler-complexity AND user-complexity on this topic.
Sometimes 
 one, sometimes the other. It's not always consistent and some can find it 
 confusing.
These things do not have right and wrong answers, and aspects are often contradictory. It's true of most non-trivial things.
Sep 14 2022
next sibling parent Daniel N <no public.email> writes:
On Wednesday, 14 September 2022 at 19:30:40 UTC, Walter Bright 
wrote:
 On 9/14/2022 4:00 AM, jmh530 wrote:
 You argue about compiler-complexity AND user-complexity on 
 this topic. Sometimes one, sometimes the other. It's not 
 always consistent and some can find it confusing.
These things do not have right and wrong answers, and aspects are often contradictory. It's true of most non-trivial things.
As I wrote before, the cognitive load of remembering the correct import is worse than the cost of the feature, because the feature itself exists in other languages. But no other language uses std.conv.
Sep 14 2022
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 14.09.22 21:30, Walter Bright wrote:
 On 9/14/2022 4:00 AM, jmh530 wrote:
 You argue about compiler-complexity AND user-complexity on this topic. 
 Sometimes one, sometimes the other. It's not always consistent and 
 some can find it confusing.
These things do not have right and wrong answers, and aspects are often contradictory. It's true of most non-trivial things.
This is a trivial issue with an obvious correct answer.
Sep 15 2022
parent reply Loara <loara noreply.com> writes:
Honestly, every time I need to use binary literals there is always a better approach (hex literals, enums, bitfields, ...) that does the job. Bit counting can be done very well with hex literals too, with a little practice; it's not so hard.

The point is that a lot of people come from C/C++ where they were used to binary literals, so they want them in D too. If tomorrow binary literals were dropped from every programming language on the surface of the earth, I would simply say "Ok."

On the other hand, removing it seems useless. I'd prefer to issue a warning during compilation every time a binary literal is used and to allow the user to disable these warnings with a compiler flag if you really need to use these literals.
Sep 17 2022
next sibling parent reply Salih Dincer <salihdb hotmail.com> writes:
On Saturday, 17 September 2022 at 17:22:33 UTC, Loara wrote:
 
 On the other hand removing it seems useless, I'd prefer to 
 issue a warning during compilation every time a binary literal 
 is used and to allow the user to disable these warnings with a 
 compiler flag if you really need to use these literals.
This idea sounds good. Since Walter is determined, he should give us a compiler flag. SDB 79
Sep 17 2022
parent reply 0xEAB <desisma heidel.beer> writes:
On Saturday, 17 September 2022 at 19:33:50 UTC, Salih Dincer 
wrote:
 On Saturday, 17 September 2022 at 17:22:33 UTC, Loara wrote:
 
 On the other hand removing it seems useless, I'd prefer to 
 issue a warning during compilation every time a binary literal 
 is used and to allow the user to disable these warnings with a 
 compiler flag if you really need to use these literals.
This idea sounds good. Since Walter is determined, he should give us a compiler flag. SDB 79
I fail to see the value of such a flag. If there were no plan to remove or deprecate them, why issue a warning in the first place?! There’s nothing wrong with binary literals (as in potential pitfalls), is there?
Sep 17 2022
parent reply Loara <loara noreply.com> writes:
On Saturday, 17 September 2022 at 19:38:52 UTC, 0xEAB wrote:
 I fail to see the value of such a flag. If there were no plan 
 to remove or deprecate them, why issue a warning in the first 
 place?!

 There’s nothing wrong with binary literals (as in potential 
 pitfalls), is there?
Because it seems that some people really need binary literals and can't work without them, although there are many higher-level alternatives. Binary literals aren't efficient (too many digits to represent relatively small numbers) and aren't essential to the language itself (hex literals are far more efficient). If you need to set flags, an enum approach makes code much more readable:

```d
enum MyFlag {A = 0, B, C}

T asFlag(T)(in T f) pure @safe nothrow @nogc {
  return (1 << f);
}

...

int field = MyFlag.A.asFlag & MyFlag.B.asFlag;
```
Sep 18 2022
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Sunday, 18 September 2022 at 11:00:12 UTC, Loara wrote:

 Because it seems that some people really need binary literals 
 and can't work without them although there are many higher 
 level alternatives.
Yes, we need them because they have use cases where alternatives are not justifiable. For example, we have a simple communication protocol that defines just a handful of commands with additional information encoded in the bits (command length, flags, etc). It would be unreasonable to complicate the code with bit ops, parsers, etc., which would make it *less* readable. Having to replace the literals with `bin!...` would be tolerable, though.
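(To illustrate the kind of encoding described here; a hypothetical sketch in which the command names and bit layout are invented, not the actual protocol:)

```d
// Hypothetical command byte: the top two bits select the command,
// the low six bits carry the payload length. Binary literals make
// the bit partitioning visible at a glance.
enum Command : ubyte
{
    read  = 0b01_000000,
    write = 0b10_000000,
    reset = 0b11_000000,
}

ubyte encode(Command c, ubyte len)
{
    assert(len <= 0b00_111111); // length must fit in the low six bits
    return cast(ubyte)(c | len);
}
```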
Sep 18 2022
parent Loara <loara noreply.com> writes:
On Sunday, 18 September 2022 at 12:15:17 UTC, Max Samukha wrote:
 On Sunday, 18 September 2022 at 11:00:12 UTC, Loara wrote:

 Because it seems that some people really need binary literals 
 and can't work without them although there are many higher 
 level alternatives.
Yes, we need them because they have use cases where alternatives are not justifiable. For example, we have a simple communication protocol that defines just a handful of commands with additional information encoded in the bits (command length, flags, etc). It would be unreasonable to complicate the code with bit ops, parsers, etc., which would make it *less* readable. Having to replace the literals with `bin!...` would be tolerable, though.
You can use hex literals behind a high-level interface that hides them behind enums/aliases; that way users won't need to know the position of each bit when they use your library. Also, if you need to change the order of two bits for some reason, you'll just change 2/3 enums and your code will work anyway. In general it's better to hide constants behind an alias instead of copying them in several places.
Sep 18 2022
prev sibling parent Kagamin <spam here.lot> writes:
On Sunday, 18 September 2022 at 11:00:12 UTC, Loara wrote:
 Binary literals aren't efficient (too many digits to represent 
 relatively small numbers) and aren't essential to the language 
 itself (hex literals are far more efficient). If you need to 
 set flags an enum approach makes code much more readable:

 ```d
 enum MyFlag {A = 0, B, C}

 T asFlag(T)(in T f) pure @safe nothrow @nogc {
   return (1 << f);
 }

 ...

 int field = MyFlag.A.asFlag & MyFlag.B.asFlag;
 ```
You made an error right away.
Sep 20 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/17/2022 10:22 AM, Loara wrote:
 The point is that a lot of people comes from C/C++ and they're used to use 
 binary literals
Not really. C11 doesn't have binary literals. C++ added them quite recently.
Sep 17 2022
next sibling parent reply Preetpal <preetpal.sohal gmail.com> writes:
On Saturday, 17 September 2022 at 21:39:59 UTC, Walter Bright 
wrote:
 On 9/17/2022 10:22 AM, Loara wrote:
 The point is that a lot of people comes from C/C++ and they're 
 used to use binary literals
Not really. C11 doesn't have binary literals. C++ added them quite recently.
FWIW, [C#](https://devblogs.microsoft.com/dotnet/new-features-in-c-7-0/#literal-improvements), [Java](https://docs.oracle.com/javase/8/docs/technotes/guides/language/binary-literals.html), [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Numbers_and_dates#binary_numbers), [Python](https://docs.python.org/3/reference/lexical_analysis.html#integer-literals) and [Ruby](https://ruby-doc.org/core-3.1.2/doc/syntax/literals_rdoc.html#label-Integer+Literals) all have binary literals.
Sep 17 2022
parent reply Preetpal <preetpal.sohal gmail.com> writes:
On Saturday, 17 September 2022 at 21:57:41 UTC, Preetpal wrote:
 On Saturday, 17 September 2022 at 21:39:59 UTC, Walter Bright 
 wrote:
 On 9/17/2022 10:22 AM, Loara wrote:
 The point is that a lot of people comes from C/C++ and 
 they're used to use binary literals
Not really. C11 doesn't have binary literals. C++ added them quite recently.
FWIW, [C#](https://devblogs.microsoft.com/dotnet/new-features-in-c-7-0/#literal-improvements), [Java](https://docs.oracle.com/javase/8/docs/technotes/guides/language/binary-literals.html), [JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Numbers_and_dates#binary_numbers), [Python](https://docs.python.org/3/reference/lexical_analysis.html#integer-literals) and [Ruby](https://ruby-doc.org/core-3.1.2/doc/syntax/literals_rdoc.html#label-Integer+Literals) all have binary literals.
Also, [Emacs Lisp](https://www.gnu.org/software/emacs/manual/html_node/elisp/Integer-Basics.html), [Fortran](https://riptutorial.com/fortran/example/6321/literal-constants), [Go](https://go.dev/ref/spec#Integer_literals), [Haskell](https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/binary_literals.html), [OCaml](https://v2.ocaml.org/manual/lex.html#sss:integer-literals) and [Rust](https://doc.rust-lang.org/rust-by-example/primitives/literals.html). It's hard to find a language without this feature (every single language that came to mind had it).
Sep 17 2022
parent reply Don Allen <donaldcallen gmail.com> writes:
On Saturday, 17 September 2022 at 22:11:35 UTC, Preetpal wrote:
 On Saturday, 17 September 2022 at 21:57:41 UTC, Preetpal wrote:
 [...]
Also, [Emacs Lisp](https://www.gnu.org/software/emacs/manual/html_node/elisp/Integer-Basics.html), [Fortran](https://riptutorial.com/fortran/example/6321/literal-constants), [Go](https://go.dev/ref/spec#Integer_literals), [Haskell](https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/binary_literals.html), [OCaml](https://v2.ocaml.org/manual/lex.html#sss:integer-literals) and [Rust](https://doc.rust-lang.org/rust-by-example/primitives/literals.html). It's hard to find a language without this feature (every single language that came to mind had it).
Haskell does *not* have binary integer literals. See the 2010 language report, and:

````
Prelude> 0x10
16
Prelude> 0o10
8
Prelude> 0b10

<interactive>:8:2: error: Variable not in scope: b10
````
Sep 17 2022
parent reply Preetpal <preetpal.sohal gmail.com> writes:
On Sunday, 18 September 2022 at 01:55:35 UTC, Don Allen wrote:
 On Saturday, 17 September 2022 at 22:11:35 UTC, Preetpal wrote:
 On Saturday, 17 September 2022 at 21:57:41 UTC, Preetpal wrote:
 [...]
Also, [Emacs Lisp](https://www.gnu.org/software/emacs/manual/html_node/elisp/Integer-Basics.html), [Fortran](https://riptutorial.com/fortran/example/6321/literal-constants), [Go](https://go.dev/ref/spec#Integer_literals), [Haskell](https://ghc.gitlab.haskell.org/ghc/doc/users_guide/exts/binary_literals.html), [OCaml](https://v2.ocaml.org/manual/lex.html#sss:integer-literals) and [Rust](https://doc.rust-lang.org/rust-by-example/primitives/literals.html). It's hard to find a language without this feature (every single language that came to mind had it).
Haskell does *not* have binary integer literals. See the 2010 language report, and:

````
Prelude> 0x10
16
Prelude> 0o10
8
Prelude> 0b10

<interactive>:8:2: error: Variable not in scope: b10
````
"The language extension BinaryLiterals adds support for expressing integer literals". So you need to use a language extension to use it. ``` ps DESKTOP:~$ ghci GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help Prelude> :set -XBinaryLiterals Prelude> 0b0010 2 Prelude> ```
Sep 17 2022
parent reply Don Allen <donaldcallen gmail.com> writes:
On Sunday, 18 September 2022 at 04:26:45 UTC, Preetpal wrote:
 On Sunday, 18 September 2022 at 01:55:35 UTC, Don Allen wrote:
 On Saturday, 17 September 2022 at 22:11:35 UTC, Preetpal wrote:
 [...]
Haskell does *not* have binary integer literals. See the 2010 language report, and:

````
Prelude> 0x10
16
Prelude> 0o10
8
Prelude> 0b10

<interactive>:8:2: error: Variable not in scope: b10
````
"The language extension BinaryLiterals adds support for expressing integer literals". So you need to use a language extension to use it. ``` ps DESKTOP:~$ ghci GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help Prelude> :set -XBinaryLiterals Prelude> 0b0010 2 Prelude> ```
An extension offered by one compiler, off by default, is not the same as inclusion in the official language definition. The official Haskell language does not include binary literals.
Sep 18 2022
next sibling parent Preetpal <preetpal.sohal gmail.com> writes:
On Sunday, 18 September 2022 at 12:47:03 UTC, Don Allen wrote:
 On Sunday, 18 September 2022 at 04:26:45 UTC, Preetpal wrote:
 On Sunday, 18 September 2022 at 01:55:35 UTC, Don Allen wrote:
 On Saturday, 17 September 2022 at 22:11:35 UTC, Preetpal 
 wrote:
 [...]
Haskell does *not* have binary integer literals. See the 2010 language report, and:

````
Prelude> 0x10
16
Prelude> 0o10
8
Prelude> 0b10

<interactive>:8:2: error: Variable not in scope: b10
````
"The language extension BinaryLiterals adds support for expressing integer literals". So you need to use a language extension to use it. ``` ps DESKTOP:~$ ghci GHCi, version 8.8.4: https://www.haskell.org/ghc/ :? for help Prelude> :set -XBinaryLiterals Prelude> 0b0010 2 Prelude> ```
An extension offered by one compiler, off by default, is not the same as inclusion in the official language definition. The official Haskell language does not include binary literals.
Well, if you use that as your definition, you can technically take it off the list. The language extensions in Haskell are probably candidates for the next official standard, though.
Sep 18 2022
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 18.09.22 14:47, Don Allen wrote:
 ...
 
 An extension offered by one compiler, off by default, is not the same as 
 inclusion in the official language definition. The official Haskell 
 language does not include binary literals.
In practice, this is just not how Haskell works. It's really common for Haskell code to rely on at least some GHC extensions (including GHC itself).
Sep 18 2022
parent reply Don Allen <donaldcallen gmail.com> writes:
On Sunday, 18 September 2022 at 22:45:17 UTC, Timon Gehr wrote:
 On 18.09.22 14:47, Don Allen wrote:
 ...
 
 An extension offered by one compiler, off by default, is not 
 the same as inclusion in the official language definition. The 
 official Haskell language does not include binary literals.
In practice, this is just not how Haskell works. It's really common for Haskell code to rely on at least some GHC extensions (including GHC itself).
You are missing my point. In any language -- C, Haskell, what have you -- some compilers will implement extensions, such as the nested functions in C introduced by gcc. The essential point is that just because some compiler implements an extension, there is no guarantee that extension will make it into the official language definition; therefore you use that extension at the risk of writing non-portable, non-future-proofed code. Haskell is no different in that respect from any other language. That this is true can be seen in a number of the hits you turn up when you search for 'haskell language extensions'.

Whether it is common or not for "Haskell code to rely on *at least some* GHC extensions" is not the issue. The issue is whether those extensions eventually become an official part of the language. Some do, some don't, or some do in revised form.

You can have the last word if you like; I'm done with this thread, which has long since crossed the ad nauseam threshold.
Sep 21 2022
parent Tejas <notrealemail gmail.com> writes:
On Wednesday, 21 September 2022 at 19:22:00 UTC, Don Allen wrote:
 On Sunday, 18 September 2022 at 22:45:17 UTC, Timon Gehr wrote:
 [...]
You are missing my point. In any language -- C, Haskell, what have you -- some compilers will implement extensions, such as the nested functions in C introduced by gcc. The essential point is that just because some compiler implements an extension, there is no guarantee that extension will make it into the official language definition; therefore you use that extension at the risk of writing non-portable, non-future-proofed code. Haskell is no different in that respect from any other language. That this is true can be seen in a number of the hits you turn up when you search for 'haskell language extensions'.

Whether it is common or not for "Haskell code to rely on *at least some* GHC extensions" is not the issue. The issue is whether those extensions eventually become an official part of the language. Some do, some don't, or some do in revised form.

You can have the last word if you like; I'm done with this thread, which has long since crossed the ad nauseam threshold.
Yeah, I guess an extreme version of this could be saying that Haskell has refinement types because Liquid Haskell exists.

A lot of C's flaws could be accounted for if we were willing to consider GNU C rather than ISO C.
Sep 21 2022
prev sibling parent Don Allen <donaldcallen gmail.com> writes:
On Saturday, 17 September 2022 at 21:39:59 UTC, Walter Bright 
wrote:
 On 9/17/2022 10:22 AM, Loara wrote:
 The point is that a lot of people comes from C/C++ and they're 
 used to use binary literals
Not really. C11 doesn't have binary literals. C++ added them quite recently.
That's right -- binary literals are not part of the official C language, not now (as defined by C11) and not in the past. The language, as described in the second edition of K&R (1988), does not have them. Nor do they appear in the fifth edition of Harbison and Steele. But compilers such as gcc and clang add them as an extension, and the working draft of C2x does include them.

So while C's history and current state are no argument for D having binary literals, C++ recently added them, as you said, and C seems to be heading the same way.
Sep 17 2022
prev sibling next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 People often complain that D has too many features. What 
 features would you say are not worth it?
ImportC, -betterC, @nogc, nothrow, @live. These things don't even *work* on their own terms, and they continue to have additional downstream effects over several parts of D and the ecosystem. Massive complication for little benefit. To a lesser extent, @safe and dip1000 can go too.
Sep 14 2022
parent reply Daniel N <no public.email> writes:
On Wednesday, 14 September 2022 at 13:30:46 UTC, Adam D Ruppe 
wrote:
 On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
 wrote:
 People often complain that D has too many features. What 
 features would you say are not worth it?
ImportC, -betterC, @nogc, nothrow, @live. These things don't even *work* on their own terms, and they continue to have additional downstream effects over several parts of D and the ecosystem. Massive complication for little benefit. To a lesser extent, @safe and dip1000 can go too.
Because D is multiparadigm, everyone has their own list. I love and use all of those features. Currently I can only think of 1 feature I don't use, but others use it so it doesn't matter.
Sep 14 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 15/09/2022 6:35 AM, Daniel N wrote:
 Because D is multiparadigm, everyone has their own list. I love and use 
 all of those features. Currently I can only think of 1 feature I don't 
 use, but others use it so it doesn't matter.
By any chance would it happen to be @property?
Sep 14 2022
parent reply Daniel N <no public.email> writes:
On Wednesday, 14 September 2022 at 18:38:21 UTC, rikki cattermole 
wrote:
 On 15/09/2022 6:35 AM, Daniel N wrote:
 Because D is multiparadigm, everyone has their own list. I 
 love and use all of those features. Currently I can only think 
 of 1 feature I don't use, but others use it so it doesn't 
 matter.
 By any chance would it happen to be @property?
OK, you got me, lol.
Sep 14 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
I was going to post a poll on people who actually use its semantics last 
night.

I am pretty sure if we replaced it with a UDA there would be very 
limited breakage.
Sep 14 2022
parent Daniel N <no public.email> writes:
On Wednesday, 14 September 2022 at 18:44:03 UTC, rikki cattermole 
wrote:
 I was going to post a poll on people who actually use its 
 semantics last night.

 I am pretty sure if we replaced it with a UDA there would be 
 very limited breakage.
Yep, I actually considered using it for documentation, but never bothered.
Sep 14 2022
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/14/22 1:58 AM, Walter Bright wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits aren't 
 as clear.
Does sometimes justify a language feature, when there are other ways?
This isn't Go. We have comfort features that make code easier to read and write. Binary literals cost nothing to have. There is no cost on the user, and no cost on the compiler (parsing a binary literal isn't any more taxing than parsing a hex literal).

If we were talking about adding binary literals, when it's possible to do it via a template, maybe you have a point. But we aren't. We are talking about *removing* a *zero-cost* feature.

Can you describe exactly what is gained by removing binary literals? If you think it's because the compiler gets simpler, think again. The *only* code that is removed is this:

https://github.com/dlang/dmd/blob/978cd5d766f22957e029754f43245e9d76830d70/compiler/src/dmd/lexer.d#L1979-L1985

Half of which is dealing with ImportC.
 
 People often complain that D has too many features. What features would 
 you say are not worth it?
 
There's a difference between "not worth adding" and "not worth keeping". Removing features needs a very high bar to make sense. Adding features also needs a high bar, considering that it's more difficult to remove later than it is to not add it.

That being said, if binary literals weren't in the language, I'd be fine adding them. They don't cost anything, and add a way to write code that is clearer in some cases.

If I had to pick at gunpoint an established language feature to remove, it would be betterC. But I can't see any features I'd *want* to remove. D's features are pretty nice.

-Steve
Sep 14 2022
prev sibling next sibling parent Nick Treleaven <nick geany.org> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
Template constraints. Horrible error messages (though better than they were) and confusing to work out which overload matches. They make documentation complicated. Just use static if and static assert instead to solve all these problems.
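(A minimal sketch of the two styles being compared; `twice` is an invented example, not code from the thread:)

```d
// Constraint style: when no overload matches, the compiler just lists
// the failed constraints, which can be hard to decipher.
auto twice(T)(T x) if (is(typeof(x + x)))
{
    return x + x;
}

// static assert style: the template always matches, and the message
// can spell out exactly which requirement was violated.
auto twice2(T)(T x)
{
    static assert(is(typeof(x + x)), T.stringof ~ " must support binary +");
    return x + x;
}
```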
Sep 14 2022
prev sibling next sibling parent claptrap <clap trap.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways?
"Sometimes" justifies a lot of features. Hex literals are only sometimes useful for example.
Sep 14 2022
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
Started a [new thread](https://forum.dlang.org/thread/rbfmdwbveiberqbxoomy forum.dlang.org) on that.
Sep 15 2022
prev sibling next sibling parent Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways?
"When there are other ways" doesn't always justify removing a language feature, to turn your question around. One can write switch statements with if-then-else chains, so why not remove switch? The obvious answer is that *sometimes* switch is a better way to say what needs to be said. There's overlap in all languages, programming and otherwise, for exactly this reason. Again, since you have said there isn't a compelling compiler reason to remove binary literals, I see no good reason to remove them. The simplification benefit is just too small to justify the cost in upset users.
Sep 15 2022
prev sibling next sibling parent reply 0xEAB <desisma heidel.beer> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 People often complain that D has too many features.
Do they really complain about “too many features”? Isn’t it rather “too many *incompatible* features” most of the time? Also: how often do people actually mean “too many attributes” instead?
Sep 17 2022
parent reply 0xEAB <desisma heidel.beer> writes:
On Saturday, 17 September 2022 at 19:23:58 UTC, 0xEAB wrote:
 On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
 wrote:
 People often complain that D has too many features.
Do they really complain about “too many features”?
There’s also an essential difference between “too many features available” and “too many unfinished features WIP” (shared, DIP1000, @live, importC, … to name a few)
Sep 17 2022
parent zjh <fqbqrr 163.com> writes:
On Saturday, 17 September 2022 at 20:09:02 UTC, 0xEAB wrote:
 People often complain that D has too many features.
Do they really complain about “too many features”?
There’s also an essential difference between “too many features available” and “too many unfinished features WIP” (shared, DIP1000, @live, importC, … to name a few)
I don't complain that D has too many features. I complain that D has too many unfinished features.
Sep 17 2022
prev sibling next sibling parent Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
`extern(C++, ns)`

It is _never_ what you want (that would be `extern(C++, "ns")`). It is never what you want because, unless the entire namespace is contained within a single module, modA.ns is not the same thing as modB.ns, and to "fix" it you have to add a bunch of cross-referencing aliases. Without the aliases it leads to terrible, nonsensical errors and error messages.

Also, it blocks string expressions for StdNamespace (being either "std" or "std::__cxx11" or whatever it is) from "just working" (you need to do `extern(C++, (StdNamespace))` to get what you actually want).
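(The distinction being drawn, in sketch form; `ns`, `f` and `g` are placeholders:)

```d
// Identifier form: introduces a D scope `ns` in each module that uses
// it, so modA.ns and modB.ns end up as distinct symbols.
extern(C++, ns) void f();

// String form: only affects C++ name mangling, introduces no D scope.
extern(C++, "ns") void g();
```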
Sep 17 2022
prev sibling parent reply Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
wrote:
 On 9/13/2022 7:56 PM, Steven Schveighoffer wrote:
 But it doesn't disprove the fact that *sometimes*, hex digits 
 aren't as clear.
Does sometimes justify a language feature, when there are other ways? People often complain that D has too many features. What features would you say are not worth it?
Function types. I don’t mean types like `int function(string)` (that is a function *pointer* type); I mean `int(string)`. They are nowhere documented (as far as I can tell) and make some meta-programming tasks nasty.

```D
void F(T)(T* ptr)
{
    pragma(msg, T); // prints void()
}

void main()
{
    void function() f = { };
    F(f);
}
```

If you try making `f` a delegate, the call `F(f)` cannot resolve `T` (obviously).

In C++, I can at least make use of them when a lot of functions should have the same signature and I don’t want to bother the reader making sure that is actually the case; I can `typedef` (via `using`) the function type and declare the functions to have exactly the same signature without repeating it:

```cpp
// file.h
using F = void(int, double);
F f1, f2; // look like variables, but are function declarations.

// file.cpp
void f1(int i, double d) { }
void f2(int i, double d) { }
```

If I wanted something like that in D, my take would *not* be to add function types, but to do this:

```D
alias F = void function(int, double);
enum F f1 = (i, d) { };
enum F f2 = (i, d) { };
```

If needed, just add a language rule that `enum` function pointers are an alternative syntax for free or static member functions and that `enum` delegates are an alternative syntax for non-static member functions (aka. methods).

Currently, function types are “needed” to apply `__parameters` to. Not only should `__parameters` work with function pointer types and delegate types, it should not work with function types because function types should not exist.
Sep 21 2022
parent reply Zealot <no2 no.no> writes:
On Wednesday, 21 September 2022 at 11:50:16 UTC, Quirin Schroll 
wrote:
 On Wednesday, 14 September 2022 at 05:58:53 UTC, Walter Bright 
 wrote:
 [...]
```D void F(T)(T* ptr) { pragma(msg, T); // prints void() } void main() { void function() f = { }; F(f); } ``` If you try making `f` a delegate, the call `F(f)` cannot resolve `T` (obviously). [...]
```
import std.stdio : writeln;

alias F = void function(int, double);
enum:F{ f1 = (i, d) { writeln("f1"); }}
enum:F{ f2 = (i, d) { writeln("f2"); }}
```

or

```
alias F = void function(int, double);
enum : F
{
    f1 = (i, d) { },
    f2 = (i, d) { }
}
```

so i'd say this should work too

```
enum:F f1 = (i, d) { };
```
Sep 21 2022
parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Wednesday, 21 September 2022 at 12:12:52 UTC, Zealot wrote:
 ```d
 alias F = void function(int, double);
 enum : F
 {
     f1 = (i, d) { },
     f2 = (i, d) { }
 }
 ```
This is big. You can do

```d
enum : void function(int, double)
{
    f1 = (i, d) { },
    f2 = (i, d) { },
}
```
 so i'd say this should work too
 ```d
 enum : F f1 = (i, d) { };
 ```
Why do you think this is good? Just use a space instead of `:`.
Sep 21 2022
prev sibling parent Salih Dincer <salihdb hotmail.com> writes:
On Wednesday, 14 September 2022 at 00:35:04 UTC, Walter Bright 
wrote:
 On 9/13/2022 2:04 PM, Steven Schveighoffer wrote:
 Is it? How do you know it didn't overflow the int and create 
 a long? How do you know you filled up the int?
How do you know the purpose is to fill up an int?
Ok, I'll rephrase that. How do you know when to stop? There's a reason hex is so ubiquitous. It's compact. Binary literals beyond a few digits (8 max) are more or less unreadable. Yes, the _ can extend it to more digits before it becomes unreadable. (Even long hex numbers benefit from _, again, after 8 digits.)
As a digital electronics lover, I wouldn't normally side with Walter. But in these days when big data has penetrated our bones, Walter is right. As the number of bits increases, grouping and counting become meaningless. The simplest example is the RGB color system: 16.7 million colors from 3 x 8 bits!

SDB
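(As a concrete aside, here is the "every third bit" mask from earlier in the thread in both notations, with separators; just an illustration:)

```d
enum binMask = 0b0010_0100_1001_0010_0100_1001_0010_0100; // pattern visible
enum hexMask = 0x2492_4924;                               // compact
static assert(binMask == hexMask);
```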
Sep 14 2022
prev sibling parent wjoe <invalid example.com> writes:
On Tuesday, 13 September 2022 at 21:04:51 UTC, Steven 
Schveighoffer wrote:
 Same thing with binary. It allows me to express *certain 
 numbers* without thinking or figuring too hard. Like building a 
 number that has n consecutive bits set (i.e. the poker 
 example). Or if you have a register that has sets of odd-length 
 bit patterns.
Exactly - or the bit pattern doesn't even mean a number, but the state of switches or buttons or something like that.
Sep 14 2022
prev sibling parent reply IGotD- <nise nise.com> writes:
On Tuesday, 13 September 2022 at 20:43:44 UTC, Walter Bright 
wrote:
 On 9/13/2022 1:06 PM, Steven Schveighoffer wrote:
 If I wanted to specify an "every third bit set" mask, in hex 
 it would be `0x924924924...`. But in binary it is 
 `0b100100100100...`. The second version is immediately clear 
 what it is, whereas the first is not.
Is it? How do you know it didn't overflow the int and create a long? How do you know you filled up the int? It's pretty clear the hex one is a long.
It is just as easy with binary literals, as D supports the `_` delimiter: 0b00100100_10010010_01001001_00100100. You can use the `_` as you prefer.

It is actually irrelevant what you think. If D had been a commercial project, you would have supported it because some customers would have demanded it. You would put a student on it to implement it over a weekend for a few dollars and a movie ticket.

I can't believe this discussion. The initial motivation was to save a few lines of code; now the discussion is more or less about which color of underwear is best.
Sep 13 2022
parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Tuesday, 13 September 2022 at 21:05:23 UTC, IGotD- wrote:
 I can't believe this discussion. Initial motivation was to save 
 a few lines of code, now the discussion more or less like what 
 color of your underwear that is the best.
I have the same feeling. We’re forced to defend something that so obviously should not need defense.
Sep 14 2022
prev sibling parent wjoe <invalid example.com> writes:
On Tuesday, 13 September 2022 at 18:50:01 UTC, Walter Bright 
wrote:
 On 9/13/2022 11:07 AM, wjoe wrote:
 Besides, I've had to use the tip of a pencil to count Fs in 
 hex values.
So have I. But I have only 1/4 as many digits to count, and don't need to if there are 6 or fewer digits.
Neither do you if you have only 8 bits, especially if you group them with underscores. Even 16 bits would be easy to read like that.

ubyte foo = 0b0010_1011;
ubyte bar = 0x2b;

Task: flip bit 3.

foo:
1. overwrite bit 3 with a 0.

bar:
1. convert 2b to binary
2. flip the bit
3. convert that back to hex.

foo: 1 step. bar: 3 steps.

Checking which bits are set?
foo - it's obvious at a glance.
bar - 2 steps: convert to binary -> read the bit

Which is user-friendlier?
Sep 14 2022
prev sibling next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently 
 concluded DConf where Walter made a case for dropping compiler 
 support for Binary literals.
He thought it was already dropped.... and the octal drop btw wasn't really that much of a success either. We should have gone with 0o.
Sep 09 2022
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09.09.22 21:13, Adam D Ruppe wrote:
 On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently concluded DConf 
 where Walter made a case for dropping compiler support for Binary 
 literals.
He thought it was already dropped.... and the octal drop btw wasn't really that much of a success either. We should have gone with 0o.
+1.
Sep 09 2022
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/9/22 3:13 PM, Adam D Ruppe wrote:
 On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 I recently saw a talk by Walter Bright in the recently concluded DConf 
 where Walter made a case for dropping compiler support for Binary 
 literals.
He thought it was already dropped.... and the octal drop btw wasn't really that much of a success either. We should have gone with 0o.
Deprecating 0-leading literals to mean octal is and will always be a success.

The octal literal template -- meh. It functions. I don't think it's a thing we need to highlight. It runs a simple parser at CTFE which isn't nearly as cheap as the octal parser in the compiler.

The truly ironic thing is that the compiler is still correctly parsing octal literals, so it can tell you how to write them with std.conv.octal ;)

-Steve
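(For reference, the library route looks like this; standard std.conv.octal usage:)

```d
import std.conv : octal;

enum filePerms = octal!"644"; // same value the old 0644 literal had
static assert(filePerms == 420);
```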
Sep 09 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 9 September 2022 at 20:35:49 UTC, Steven Schveighoffer 
wrote:
 Deprecating 0-leading literals to mean octal is and will always 
 be a success.
Well, yeah, that was a silly syntax. I don't know what the C designers were thinking with that. But using 0o solves all those problems.
 It runs a simple parser at CTFE which isn't nearly as cheap as 
 the octal parser in the compiler.
Well it isn't like the cheapness really matters tbh since it is a small job. And it is a kinda cool technique that D can do it. I use it in other places too. Just compared to the 0x and 0b and a prospective 0o.... you're right, it is solidly meh.
Sep 09 2022
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Sep 09, 2022 at 08:54:08PM +0000, Adam D Ruppe via Digitalmars-d wrote:
 On Friday, 9 September 2022 at 20:35:49 UTC, Steven Schveighoffer wrote:
 Deprecating 0-leading literals to mean octal is and will always be a
 success.
Well, yeah, that was a silly syntax. I don't know what the C designers were thinking with that.
+1.
 But using 0o solves all those problems.
+1. 0o totally makes sense for octal, just as 0x totally makes sense for hexadecimal.
 It runs a simple parser at CTFE which isn't nearly as cheap as the
 octal parser in the compiler.
Well it isn't like the cheapness really matters tbh since it is a small job. And it is a kinda cool technique that D can do it. I use it in other places too. Just compared to the 0x and 0b and a prospective 0o.... you're right, it is solidly meh.
+1. T -- Famous last words: I *think* this will work...
Sep 09 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 1:35 PM, Steven Schveighoffer wrote:
 The octal literal template -- meh. It functions. I don't think it's a thing we 
 need to highlight. It runs a simple parser at CTFE which isn't nearly as cheap 
 as the octal parser in the compiler.
That's because it's poorly implemented and overly complex. The implementation I showed in my presentation at Dconf is much simpler. If you're using a lot of octal literals such that this is an issue, one wonders, what for? The only use I know of is for Unix file permissions.
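(Walter's actual DConf code isn't reproduced in the thread; the following is only a sketch of how small such a CTFE converter can be, assuming a string-parameter design:)

```d
// A manifest-constant template that converts its octal digits at CTFE.
enum octal(string s) =
{
    int result = 0;
    foreach (c; s)
    {
        if (c == '_') continue;          // permit digit separators
        assert('0' <= c && c <= '7', "invalid octal digit");
        result = result * 8 + (c - '0');
    }
    return result;
}();

static assert(octal!"755" == 493); // rwxr-xr-x
```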
 The truly ironic thing is that the compiler is still correctly parsing octal 
 literals, so it can tell you how to write them with std.conv.octal ;)
To make transition easier. Simplifying the language has a lot of downstream simplifications.
Sep 09 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 9 September 2022 at 23:04:17 UTC, Walter Bright wrote:
 That's because it's poorly implemented and overly complex. The 
 implementation I showed in my presentation at Dconf is much 
 simpler.
The implementation is awful, but nobody cares enough to fix it since it just isn't that user friendly. It is often less hassle to translate it to binary or hex than to bother moving up, adding the import, then moving back. However, the newer imported!"std.conv".octal!433 pattern alleviates that somewhat... though it is wordy enough that you then get tempted to make an alias which means moving up again.
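(For readers unfamiliar with that pattern, it's the well-known self-importing template idiom; a rough sketch, not necessarily the exact implementation being used:)

```d
// An eponymous template that imports a module and aliases itself to it,
// letting you reach into the module inline without a top-level import.
template imported(string moduleName)
{
    mixin("import imported = " ~ moduleName ~ ";");
}

enum perms = imported!"std.conv".octal!433;
```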
 If you're using a lot of octal literals such that this is an 
 issue, one wonders, what for? The only use I know of is for 
 Unix file permissions.
I keep hitting them in random C code I'm translating. Various unix things beyond file permissions, and a hardware manual for a thing I had to drive (an rfid chip), used them for various bit triplets too.

I often prefer using binary literals anyway, but changing something like 0o50000 to binary is a little obnoxious.
Sep 09 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 4:43 PM, Adam D Ruppe wrote:
 If you're using a lot of octal literals such that this is an issue, one 
 wonders, what for? The only use I know of is for Unix file permissions.
I keep hitting them in random C code I'm translating. Various unix things beyond file permissions, and a hardware manual for a thing I had to drive (an rfid chip), used them for various bit triplets too.
octal!433 is really not much different from 0433. It could even be shortened to o!433, exactly the same number of characters as 0o433.

The reasons for adding language syntactic sugar:

1. it's very commonplace
2. the workarounds are gross

Of course it's a judgement call, and I understand you see them randomly in C code, but does it really pay off? The downside is the language gets bigger and more complex, the spec gets longer, and people who don't come from a C background wonder why their 093 integer isn't 93.
 the newer imported!"std.conv".octal!433 pattern
Nobody would ever write that unless they used octal exactly once, which suggests that octal literals aren't common enough to justify special syntax.
 I often prefer using binary literals anyway, but changing something like
0o50000 
 to binary is a little obnoxious.
I first implemented binary literals in the 1980s, thinking they were cool and useful. They were not and not. I haven't found a reasonable use for them, or ever wanted them.

(I prefer writing them in hex notation, as binary literals take up way too much horizontal space. After all, C3 is a lot easier than 11000011. The latter makes my eyes bleed a bit, too.)

Let's simplify D.
Sep 09 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 02:17:30 UTC, Walter Bright 
wrote:
 octal!433 is really not much different from 0433. It could even 
 be shortened to o!433, exactly the same number of characters as 
 0o433.
You snipped the relevant point about having to change context to add the import. That's objectively not a big deal but subjectively proves to be a barrier to adoption. (I do think it would be a bit better too if it was `core.octal` instead of `std.conv` so it brings in a bit less baggage too.)
 The downside is the language gets bigger and more complex
The question of bigger languages is with interaction between features. Octal literals are about the most trivial addition you can do since they don't interact with anything else.
 and people who don't come from a C background wonder why their 
 093 integer isn't 93.
This is a completely separate issue that nobody is talking about changing here. While I'd love for it to be good, it is probably practical to keep a deprecation in place so the C programmers can be educated.
 Nobody would ever write that unless they used octal exactly once
This is demonstrably untrue. Local imports are common in D, even when used repeatedly.
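(For concreteness, the local import idiom being referred to; a trivial sketch:)

```d
void setPermissions()
{
    // Local import: scoped to this function, repeated per use site.
    import std.conv : octal;
    auto mode = octal!"755";
}
```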
 Let's simplify D.
This doesn't achieve anything. If you carry on with this path, you're gonna force a fork of the language. Is that what you want?
Sep 09 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/9/2022 7:38 PM, Adam D Ruppe wrote:
 On Saturday, 10 September 2022 at 02:17:30 UTC, Walter Bright wrote:
 octal!433 is really not much different from 0433. It could even be shortened 
 to o!433, exactly the same number of characters as 0o433.
You snipped the relevant point about having to change context to add the import.
I normally do not quote everything. I'm not trying to hide the import thing, I just don't attach much importance to it.
 That's objectively not a big deal but subjectively proves to be a barrier to 
 adoption.
If you're using octal a lot, it is worth it.
 (I do think it would be a bit better too if it was `core.octal` instead of 
 `std.conv` so it brings in a bit less baggage too.)
It's not really a core feature, but std.octal would be better.
 The question of bigger languages is with interaction between features. Octal 
 literals are about the most trivial addition you can do since it doesn't 
 interact with anything else.
It does, as 093 doesn't work as non-C programmers would expect.
 Nobody would ever write that unless they used octal exactly once
This is demonstrably untrue. Local imports are common in D, even when used repeatedly.
While I like that D can do things like that, it's not a great style, because it wouldn't be discovered with grep (i.e. obfuscates what things are imported).
 Let's simplify D.
This doesn't achieve anything. If you carry on with this path, you're gonna force a fork of the language. Is that what you want?
Do you really want to use the nuclear option over octal literals? It really bothers me why so many discussions head down this path. Let's please try and keep the voltage down.
Sep 09 2022
next sibling parent reply Daniel N <no public.email> writes:
On Saturday, 10 September 2022 at 05:58:25 UTC, Walter Bright 
wrote:
 On 9/9/2022 7:38 PM, Adam D Ruppe wrote:

 (I do think it would be a bit better too if it was 
 `core.octal` instead of `std.conv` so it brings in a bit less 
 baggage too.)
It's not really a core feature, but std.octal would be better.
Personally I think anything which WAS a language feature should be in object, to keep the feature working out of the box. Not sure why people are so afraid to use it; it's not often you have to read the object.d source, and as long as it's just one file, it will be blazingly fast as always.
 It does, as 093 doesn't work as non-C programmers would expect.
I don't think anyone is arguing in favour of 093, but rather 0o93. I use octal seldom, but binary very often; google bitboards. This has an obvious visual meaning, but in hex it would be hard to read:

0b111111
0b100001
0b100001
0b111111
Sep 10 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 07:03:21 UTC, Daniel N wrote:
 Personally I think anything which WAS a language feature should 
 be in object to keep the feature working out of the box. Not 
 sure why people are so afraid to use it, not often you have to 
 read object.d source, as long as it's just one file, it will be 
 blazingly fast as always.
object isn't actually blazingly fast. Its growth at least appears to be responsible for much of the slowdown of basic builds compared to the older D1 era builds. I'm not sure exactly why though, if it is growth per se, or the use of several internal imports causing slowdowns, or specific implementation techniques, but if you zero out the object.d it does speed up builds.

Though I will concede it tends to be a smaller percentage as the program grows, it still imposes a baseline cost to each compiler invocation, so we might want to keep an eye on it.

Another aspect is that object is implicitly imported, so you get things in the global namespace. The module system has ways to disambiguate it, but I still just generally prefer the explicit import to keep it clear, especially since the error messages can be fairly poor without import to clean it up. But then you get the ergonomic issue of having to import it.
Sep 10 2022
parent Daniel N <no public.email> writes:
On Saturday, 10 September 2022 at 12:02:57 UTC, Adam D Ruppe 
wrote:
 On Saturday, 10 September 2022 at 07:03:21 UTC, Daniel N wrote:
 Personally I think anything which WAS a language feature 
 should be in object to keep the feature working out of the 
 box. Not sure why people are so afraid to use it, not often 
 you have to read object.d source, as long as it's just one 
 file, it will be blazingly fast as always.
object isn't actually blazingly fast. Its growth at least appears to be responsible for much of the slowdown of basic builds compared to the older D1 era builds.
If it is slow, then it could/should be cached by some mechanism; object.d seldom changes unless you use some advanced tricks, like the ones in your book.

C# recently added implicit global imports:
https://endjin.com/blog/2021/09/dotnet-csharp-10-implicit-global-using-directives

I would be fine with that also, but it's easier to use what we have already, object.d.
Sep 10 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 12:03 AM, Daniel N wrote:
 This has an obvious visual meaning but in hex it would be hard to read.
 0b111111
 0b100001
 0b100001
 0b111111
That was the original motivation back in the 80s. But I've since realized that this works much better:

XXXXXX
X....X
X....X
XXXXXX

Wrap it in a string literal, and write a simple parser to translate it to binary data. Like what I did here:

https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/disasm86.d#L3645

which makes it really easy to add test cases to the disassembler. Well worth the extra effort to make a tiny parser for it.
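(A minimal sketch of the technique, not the disassembler's actual parser; assume 'X' is a set bit and '.' a clear one:)

```d
// Parse an ASCII-art bitmap into row bytes at compile time.
ubyte[] parseArt(string art)
{
    ubyte[] rows;
    ubyte row = 0;
    bool pending = false;
    foreach (c; art)
    {
        if (c == '\n') { rows ~= row; row = 0; pending = false; continue; }
        row = cast(ubyte)((row << 1) | (c == 'X'));
        pending = true;
    }
    if (pending) rows ~= row;
    return rows;
}

enum glyph = parseArt("XXXXXX\nX....X\nX....X\nXXXXXX");
static assert(glyph == [0b111111, 0b100001, 0b100001, 0b111111]);
```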
Sep 10 2022
prev sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 05:58:25 UTC, Walter Bright 
wrote:
 Do you really want to use the nuclear option over octal 
 literals?

 It really bothers me why so many discussions head down this 
 path. Let's please try and keep the voltage down.
Have you ever stopped to ask WHY there's so little confidence in D's leadership?

This whole thing started because you made the *patently false* statement in DConf that binary literals were *already* deprecated. This shows you didn't fact check your talk - you didn't try to compile the code, you didn't check the spec, and you didn't talk to any experienced D users, who would have pointed out the error. Yet you presume to lecture us about what things are used and how, then make unilateral decisions, just ignoring community experience.

A lot of us were excited about std.conv.octal. I wrote the first draft myself, then even voluntarily rewrote it into a convoluted mess when requested to by the Phobos maintainer, which was a lot of work, and VRP improvements rendered most of that work moot since then, but at the time, I thought it was worthwhile to get it in.

That was over ten years ago. Since then, despite hitting uses for octal literals several times, I've only ever translated them to use std.conv.octal a few times. I more often translate to binary or hex, not necessarily because they're the best representation (though like I said, I often do prefer binary literals to octal), but just because they're built in.

Similarly, I have been arguing that `throw new Exception("some string")` is bad form for a long time, but I often do it anyway just because it is the most convenient thing to do. On the other hand, you've pointed out before that `unittest` is not a fantastic test system, but it gets used because it is convenient, and this is a good thing, since some unittest is better than none.

I do think octal in its own, fully independent module would be better than what we have now, since at least then it is easier to pull in without additional baggage. But I've heard a lot of people complain they just won't do the import at all because you have to move elsewhere in the code to add it and it is just an added hassle. So it wouldn't fix that, but it might be more used than it is now.

But regardless, binary literals are already here and shouldn't go anywhere. (btw another reason why is the octal!555 trick - using an int literal - won't work with binary since it will overflow too easily. you'd have to quote the string. which is not a big deal but another little thing)

Anyway, removing the binary literals we've had for decades would *hurt* D. And ignoring D's users over and over and over again IS going to lead to a fork.
Sep 10 2022
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
I'm still upset over hex strings.

They were useful for generated files.

https://raw.githubusercontent.com/Project-Sidero/basic_memory/main/database/generated/sidero/base/internal/unicode/unicodedata.d

2.72mb!

It is an absolute nightmare to debug without hex strings and you can't 
tell me my builds are going to be faster and use less memory if I have 
to call a function at CTFE to do the conversion from a regular string...
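(For context, the feature in question; the first line shows the removed literal form, the second the CTFE library replacement:)

```d
// The removed hex string literal produced the bytes directly:
// immutable data = x"DE AD BE EF";

// The library replacement converts at CTFE instead:
import std.conv : hexString;
immutable data = hexString!"DEADBEEF";
static assert(data.length == 4);
```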
Sep 10 2022
parent reply Kagamin <spam here.lot> writes:
On Saturday, 10 September 2022 at 14:50:18 UTC, rikki cattermole 
wrote:
 I'm still upset over hex strings.

 They were useful for generated files.

 https://raw.githubusercontent.com/Project-Sidero/basic_memory/main/database/generated/sidero/base/internal/unicode/unicodedata.d

 2.72mb!

 It is an absolute nightmare to debug without hex strings
Can't you use escapes there?
Sep 12 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 13/09/2022 5:50 AM, Kagamin wrote:
 Can't you use escapes there?
Every character you add increases the file size and slows down your editor. Escapes require extra processing, more memory usage to display, which is unnecessary for this type of data.
Sep 12 2022
next sibling parent reply Kagamin <spam here.lot> writes:
On Monday, 12 September 2022 at 18:05:14 UTC, rikki cattermole 
wrote:
 Escapes require extra processing, more memory usage to display, 
 which is unnecessary for this type of data.
Hex strings require extra processing too, they aren't WYSIWYG. And there being lots of small code points I suspect you can even save space by using short escapes compared to hex strings.
Sep 12 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 13/09/2022 6:11 AM, Kagamin wrote:
 On Monday, 12 September 2022 at 18:05:14 UTC, rikki cattermole wrote:
 Escapes require extra processing, more memory usage to display, which 
 is unnecessary for this type of data.
Hex strings require extra processing too, they aren't WYSIWYG.
They require very little processing and only in the compiler.
 And there 
 being lots of small code points I suspect you can even save space by 
 using short escapes compared to hex strings.
Keep in mind that the hex string use case is not string data. It is a method of getting a whole load of raw bytes directly into the binary, which can be cast to the appropriate type before usage (i.e. a struct). The direct comparison is to array literal syntax ``[123, 14, 72]``, for which you still have the commas, which add extra processing for the editor...
Sep 12 2022
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 13/09/2022 6:19 AM, rikki cattermole wrote:
 Keep in mind that hex strings use case is not for string data. It is a 
 method of getting a whole load of raw bytes directly into the binary 
 which can be cast to the appropriate type before usage (i.e. struct).
The example I used for bin2d[0] was shared library dependencies that get extracted out at runtime. It generated hex strings for the file contents. https://github.com/rikkimax/Bin2D
Sep 12 2022
parent Kagamin <spam here.lot> writes:
On Monday, 12 September 2022 at 18:34:04 UTC, rikki cattermole 
wrote:
 The example I used for bin2d[0] was shared library dependencies 
 that get extracted out at runtime. It generated hex strings for 
 the file contents.

 https://github.com/rikkimax/Bin2D
Tar format is the standard unix workaround for the case where you can't have folder hierarchy and need to have everything as one file.
Sep 12 2022
prev sibling parent reply Kagamin <spam here.lot> writes:
On Monday, 12 September 2022 at 18:19:19 UTC, rikki cattermole 
wrote:
 They require very little processing and only in the compiler.
Escapes are processed by the compiler too. In fact, only combining and control characters need to be escaped; all others can appear literally for best compression and minimal processing, except for transcoding from utf-8 to utf-32.
Sep 12 2022
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 13/09/2022 6:47 AM, Kagamin wrote:
 On Monday, 12 September 2022 at 18:19:19 UTC, rikki cattermole wrote:
 They require very little processing and only in the compiler.
Escapes are processed by the compiler too. In fact, only combining and control characters need to be escaped, all others can appear literally for best compression and minimal processing except for transcoding from utf-8 to utf-32.
Which is great for textual data. Not useful if you have binary data of any fixed-size type.
Sep 12 2022
prev sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Tue, Sep 13, 2022 at 06:05:14AM +1200, rikki cattermole via Digitalmars-d
wrote:
 
 On 13/09/2022 5:50 AM, Kagamin wrote:
 Can't you use escapes there?
Every character you add increases the file size and slows down your editor. Escapes require extra processing, more memory usage to display, which is unnecessary for this type of data.
Yes, but isn't this making mountains out of molehills? Unless you're talking about hundreds of thousands of extra characters, the difference is likely to be unnoticeable. If something on the order of a few hundred extra characters can cause a noticeable performance hit in your editor, you need to find a new editor. :-P

Not to mention, it sounds a bit disingenuous to complain about memory usage when DMD blithely allocates huge amounts of memory (and never frees it) just to compile a program, such that on a low-memory system it can't compile anything except trivial programs before running out of memory and crashing. A few thousand extra characters in a source file isn't going to make much of a difference here.

T
Sep 12 2022
parent rikki cattermole <rikki cattermole.co.nz> writes:
On 13/09/2022 7:07 AM, H. S. Teoh wrote:
 Yes, but isn't this a bit making mountains out of molehills?
I'm complaining about it here, as it was basically an ideal way to handle a set of specific use cases. It had no real cost to keep, just like binary literals. It is an identical scenario, one that I regret not speaking up about at the time, and I don't want the same thing to happen again.
Sep 12 2022
prev sibling next sibling parent reply IGotD- <nise nise.com> writes:
On Saturday, 10 September 2022 at 14:21:02 UTC, Adam D Ruppe 
wrote:
 Anyway, removing the binary literals we've had for decades 
 would *hurt* D. And ignoring D's users over and over and over 
 again IS going to lead to a fork.
It will lead to a fork. If the maintainers want to remove simple literals because they think they aren't used, then this project aims very low. Simple literal support is written once, and then it's done and there forever without any maintenance at all. And this while, at the same time, ImportC is claimed to be "simple" despite being worked on for over a year. I think that both octal and binary literals can be supported because it is trivial. It doesn't matter whether you think they are used or not; somewhere there will be someone who absolutely loves binary or octal literals. Also, I agree we can have the 0o syntax for octals and fix one blunder from C. I don't understand; is this some kind of new ill will, where programmers are no longer allowed certain literals?
Sep 10 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 9:11 AM, IGotD- wrote:
 And this while, at the same time, ImportC is claimed to be 
 "simple" despite being worked on for over a year.
I have many other things to do besides working on ImportC. And it is simple, as language implementations go. Besides, ImportC is a big win for D. I should have done it from the beginning.
Sep 10 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 7:21 AM, Adam D Ruppe wrote:
 Have you ever stopped to ask WHY there's so little confidence in D's 
 leadership?
 
 This whole thing started because you made the *patently false* statement in 
 DConf that binary literals were *already* deprecated.
Yes, I made a mistake. There was a decision to remove it, but it just got deferred and then overlooked. Not making an excuse, but D has grown to the point where I can't keep it all in my head at once. Hence at least one motivation for simplification. I've caught Bjarne in errors with C++, too. So what.
 Yet you presume to lecture us about what things are used and how, then make 
 unilateral decisions, just ignoring community experience.

There is a DIP process, and it's why we have a n.g. where people can discuss. A lot of changes I felt were a good idea were dropped because of opposition on the n.g., like dropping the initializer syntax in favor of sticking with expression style.
Sep 10 2022
parent Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 10 September 2022 at 17:05:23 UTC, Walter Bright 
wrote:

 Yes, I made a mistake. There was a decision to remove it, but 
 it just got deferred and then overlooked.
So, are you going to ignore the almost unanimous feedback from the community again and remove the binary literals anyway?
Sep 10 2022
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Saturday, 10 September 2022 at 02:38:33 UTC, Adam D Ruppe 
wrote:
 The question of bigger languages is with interaction between 
 features. Octal literals are about the most trivial addition 
 you can do since it doesn't interact with anything else.
They add a new token to the grammar, which means that tools like syntax highlighters have to be updated. Obviously it's not a difficult change to make, but there is non-zero friction here.
Sep 10 2022
next sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 12:18:49 UTC, Paul Backus wrote:
 Obviously it's not a difficult change to make, but there is 
 non-zero friction here.
Yeah. And on the other hand, deprecating the currently existing binary literals (which is what this thread is about, remember) also requires those same changes. So it has a cost in updating things with no benefit in removing actual complexity since the feature is quite isolated.
Sep 10 2022
prev sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Saturday, 10 September 2022 at 12:18:49 UTC, Paul Backus wrote:
 They add a new token to the grammar, which means that tools 
 like syntax highlighters have to be updated.

 Obviously it's not a difficult change to make, but there is 
 non-zero friction here.
In general yes, in this case (mostly) no. Most syntax highlighters use the fact that identifiers may not start with a digit. They interpret anything starting with a digit as a number up until a character that definitely isn’t part of a number like an operator or space. Even run.dlang.io does this. `03284FHKLLL.dfd.d..f_if` is highlighted as if it were a number.
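For what it's worth, a minimal sketch of that naive rule, assuming a regex-based highlighter (the pattern is illustrative, not taken from any real tool):

```d
// Hedged sketch of the rule described above: anything starting with a
// digit is consumed as one "number" token, letters and dots included.
import std.regex : matchFirst, regex;
import std.stdio : writeln;

void main()
{
    auto numberish = regex(`[0-9][0-9A-Za-z_.]*`);
    // Prints the whole nonsense token, just as a highlighter would color it.
    writeln(matchFirst("03284FHKLLL.dfd.d..f_if", numberish).hit);
}
```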
Sep 12 2022
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10.09.22 04:17, Walter Bright wrote:
 On 9/9/2022 4:43 PM, Adam D Ruppe wrote:
 If you're using a lot of octal literals such that this is an issue, 
 one wonders, what for? The only use I know of is for Unix file 
 permissions.
I keep hitting them in random C code I'm translating. Various unix things beyond file permissions, and a hardware manual for a thing I had to drive (an RFID chip), used them for various bit triplets too.
octal!433 is really not much different from 0433. It could even be shortened to o!433, exactly the same number of characters as 0o433. ...
o!422 is such a hack, and it does not even (always) work. This problem is even worse for binary literals.
 The reasons for adding language syntactic sugar:
 
 1. it's very commonplace
 
 2. the workarounds are gross
 
 Of course it's a judgement call, and I understand you see them randomly 
 in C code, but does it really pay off? The downside is the language gets 
 bigger and more complex, the spec gets longer, and people who don't come 
 from a C background wonder why their 093 integer isn't 93.
 ...
I think basically everyone here agrees that 093 is bad syntax and was a mistake.
  > the newer imported!"std.conv".octal!433 pattern
 
 Nobody would ever write that unless they used octal exactly once, which 
 suggests that octal literals aren't common enough to justify special 
 syntax.
 
 
 I often prefer using binary literals anyway, but changing something 
 like 0o50000 to binary is a little obnoxious.
I first implemented binary literals in the 1980s, thinking they were cool and useful. They were not and not. I haven't found a reasonable use for them, or ever wanted them. (I prefer writing them in hex notation, as binary literals take up way too much horizontal space. After all, C3 is a lot easier than 11000011. The latter makes my eyes bleed a bit, too.) ...
Binary literals are, e.g., a GNU C extension and they are in C++14, so clearly people see a use for them.
 Let's simplify D.
I really don't understand why you seem to think removing simple and convenient lexer features that behave exactly as expected in favor of overengineered Phobos templates that have weird corner cases and are orders of magnitude slower to compile is a meaningful simplification of D. It utterly makes no sense to me. Let's simplify D in a way that actually positively impacts the user experience, for example by getting rid of weird corner cases and arbitrary limitations. Of course, that requires actual design work and sometimes even nontrivial compiler improvements, which is a bit harder than just deleting a few lines of code in the lexer and then adding ten times that amount to Phobos.
Sep 10 2022
next sibling parent Sergey <kornburn yandex.ru> writes:
On Saturday, 10 September 2022 at 16:18:53 UTC, Timon Gehr wrote:
 
 Binary literals are, e.g., a GNU C extension and they are in 
 C++14, so clearly people see a use for them.
Just some more examples of supported languages: Zig - https://ziglearn.org/chapter-1/#integer-rules Julia - https://docs.julialang.org/en/v1/manual/integers-and-floating-point-numbers/ Kotlin - https://kotlinlang.org/docs/numbers.html#literal-constants-for-numbers (but no Octal)
Sep 10 2022
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 9:18 AM, Timon Gehr wrote:
 o!422 is such a hack,
How so?
 and it does not even (always) work.
You're referring to when it has too many digits, it has to be done as:

    o!"442"

It would be interesting to see a proposal to improve this sort of thing.
 Binary literals are, e.g., a GNU C extension and they are in C++14, so clearly 
 people see a use for them.
I implemented them back in the 80s as an extension, and nobody commented on them. I never found a use. As for seeing a use, seeing a use for them and actually using them are different things. D originally was embeddable in html. The compiler was able to extract it from html files. I saw a use for it, but never found one. It was dropped. Nobody commented on that, either.
 Let's simplify D.
I really don't understand why you seem to think removing simple and convenient lexer features that behave exactly as expected in favor of overengineered Phobos templates that have weird corner cases and are orders of magnitude slower to compile is a meaningful simplification of D. It utterly makes no sense to me.
The idea is to have a simple core language, and have a way that users can add features via the library. For example, user-defined literals are a valuable feature. C++ added specific syntax for them. D has user-defined literals as fallout from the template syntax. User-defined literals in D are indeed an order of magnitude slower than builtin ones. But that only matters if one is using a lot of them. Like having files filled with them. How often does that happen? The Phobos implementation of octal is indeed overengineered, as I mentioned in another post here. Phobos in general has been overengineered, but that's not a fault of the language. I suppose I should submit a PR to fix the octal template implementation.
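For illustration, a minimal sketch of what "user-defined literals as fallout from the template syntax" can look like; the `percent` name and conversion are made up:

```d
// Hedged sketch: a "user-defined literal" as an eponymous enum template.
enum percent(int n) = n / 100.0;

static assert(percent!25 == 0.25);
```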
 Let's simplify D in a way that actually positively impacts the user experience,
 for example by getting rid of weird corner cases and arbitrary limitations. Of
 course, that requires actual design work and sometimes even nontrivial compiler
 improvements, which is a bit harder than just deleting a few lines of code in
 the lexer and then adding ten times that amount to Phobos.
We do this all the time.
Sep 10 2022
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/10/22 1:43 PM, Walter Bright wrote:
 On 9/10/2022 9:18 AM, Timon Gehr wrote:
 Binary literals are, e.g., a GNU C extension and they are in C++14, so 
 clearly people see a use for them.
I implemented them back in the 80s as an extension, and nobody commented on them. I never found a use. As for seeing a use, seeing a use for them and actually using them are different things.
I just used them a couple months ago: https://github.com/schveiguy/poker/blob/master/source/poker/hand.d#L261 This was so much easier to comprehend than the equivalent hex. -Steve
Sep 10 2022
next sibling parent mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 19:00:01 UTC, Steven 
Schveighoffer wrote:
 On 9/10/22 1:43 PM, Walter Bright wrote:
 
 I implemented them back in the 80s as an extension, and nobody 
 commented on them. I never found a use. As for seeing a use, 
 seeing a use for them and actually using them are different 
 things.
This is your *own personal* experience; the many people in this very thread opposing such deprecation clearly show that many people *do* use and are happy with binary literals!
Sep 10 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 12:00 PM, Steven Schveighoffer wrote:
 https://github.com/schveiguy/poker/blob/master/source/poker/hand.d#L261
 
 This was so much easier to comprehend than the equivalent hex.
    assert(beststraight(0b10000000011110) == Rank.Five);
    assert(beststraight(0b10101111111110) == Rank.Ten);

Are you sure the number of digits is correct? Does your card deck really have 14 cards in a suit? :-)

Me, annoying curmudgeon that I am, would write a little parser so I could write such tests as:

    assert(beststraight(hand!"A234567890JQK") == Rank.Five);

and use HCDS for Hearts, Clubs, Diamonds, and Spades.

D is ideal for creating such micro-DSLs, such as this one:

https://github.com/dlang/dmd/blob/master/compiler/src/dmd/backend/disasm86.d#L3601

which makes adding test cases for the disassembler dramatically easier to read and compose.
Sep 11 2022
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/11/22 3:20 PM, Walter Bright wrote:
 On 9/10/2022 12:00 PM, Steven Schveighoffer wrote:
 https://github.com/schveiguy/poker/blob/master/source/poker/hand.d#L261

 This was so much easier to comprehend than the equivalent hex.
    assert(beststraight(0b10000000011110) == Rank.Five);
    assert(beststraight(0b10101111111110) == Rank.Ten);

 Are you sure the number of digits is correct? Does your card deck really have 14 cards in a suit? :-)
Not really relevant, but yes. I have 2 aces, one called `Ace`, and one called `LowAce`. When checking for a straight, if an ace is present, it's copied to the `LowAce` spot as well (A-2-3-4-5 is also a straight), because all I'm doing is searching for 5 consecutive bits. I'm actually quite proud of the bit shifting code, I tried to find the most efficient/clever mechanism to test for a straight given a bitfield. Once I thought of it, it was simple to implement and understand: https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L245-L256
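For readers following along, a minimal sketch of that shift-and-AND idea (not the exact code from the repo):

```d
// Hedged sketch: each `check &= check >> 1` shortens every run of set
// bits by one, so after four rounds a nonzero value means some run of
// consecutive set bits had length >= 5.
bool hasFiveInARow(uint check)
{
    foreach (_; 0 .. 4)
        check &= check >> 1;
    return check != 0;
}

unittest
{
    assert( hasFiveInARow(0b0111110));
    assert(!hasFiveInARow(0b0111011));
}
```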
 
 Me, annoying curmudgeon that I am, would write a little parser so I 
 could write such tests as:
 
     assert(beststraight(hand!"A234567890JQK") == Rank.Five);
Sure, that's already there. But it's perfectly understandable with the bit pattern as well. And I don't need to test the parser with this unittest, just the function I'm testing. Every unit test should test the smallest amount possible (sometimes it's unavoidable to pull in more) to avoid coupling between parts of the code.
 and use HCDS for Hearts, Clubs, Diamonds, and Spades.
 
 D is ideal for creating such micro-DSLs, such as this one:
e.g.: https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L70-L73 Note that when I'm testing the higher-level functions, I do use this: https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L461
 which makes adding test cases for the disassembler dramatically easier 
 to read and compose.
 
Again, the test is easy to compose and understand -- because it's in binary. The whole No True Scotsman line of argument is not convincing. -Steve
Sep 11 2022
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/11/2022 4:00 PM, Steven Schveighoffer wrote:
 Again, the test is easy to compose and understand -- because it's in binary.
For you, I accept that. But the only binary language I understand is that of moisture vaporators.
Sep 11 2022
prev sibling parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 9/11/22 16:00, Steven Schveighoffer wrote:

 I'm actually quite proud of the bit shifting code, I tried to find the
 most efficient/clever mechanism to test for a straight given a bitfield.
 Once I thought of it, it was simple to implement and understand:

 
 https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L245-L256

That's very clever! :)

Ali
Sep 13 2022
parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Tuesday, 13 September 2022 at 20:41:53 UTC, Ali Çehreli wrote:
 On 9/11/22 16:00, Steven Schveighoffer wrote:

 I'm actually quite proud of the bit shifting code, I tried to find the 
 most efficient/clever mechanism to test for a straight given a bitfield. 
 Once I thought of it, it was simple to implement and understand:
 
 https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L245-L256

 That's very clever! :)
`core.bitop.popcnt(check) == 4` would be simpler, and most modern CPUs have an instruction for that.
Sep 14 2022
next sibling parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 9/14/22 01:43, Patrick Schluter wrote:

 `core.bitop.popcnt(check) == 4` would be simpler, and most modern CPUs 
 have an instruction for that.
That will give the total number of bits set. Steve's function determines whether 5 neighboring bits are set consecutively, anywhere in the "register".

And that 4 is a red herring that made me think I found a bug. No, the function finds 5 consecutive bits set by 4 shifts. :)

Ali
Sep 14 2022
parent Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Wednesday, 14 September 2022 at 12:36:09 UTC, Ali Çehreli 
wrote:
 On 9/14/22 01:43, Patrick Schluter wrote:

 `core.bitop.popcnt(check) == 4` would be simpler, and most modern CPUs 
 have an instruction for that.
 That will give the total number of bits set. Steve's function determines whether 5 neighboring bits are set consecutively, anywhere in the "register".
ok. My bad.
 And that 4 is a red herring that made me think I found a bug. 
 No, the function finds 5 consecutive bits set by 4 shifts. :)
Sep 14 2022
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/14/22 4:43 AM, Patrick Schluter wrote:

 
 `core.bitop.popcnt(check) == 4` would be simpler, and most modern CPUs 
 have an instruction for that.
I do use that for flushes! https://github.com/schveiguy/poker/blob/6f70cf7ca74e19e78b470f81640a3ce34a95d0d3/source/poker/hand.d#L376-L386 And a cool thing about that too, the "best" flush is simply the mask that is larger (because it has higher bits set, i.e. bigger rank cards). Man, that was a fun little project ;) -Steve
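For illustration, a minimal sketch of both observations; this is not the repo's actual code:

```d
// Hedged sketch: popcnt detects a flush, and of two flush masks the
// numerically larger one holds the higher-ranked cards.
import core.bitop : popcnt;

bool isFlush(uint suitMask)
{
    return popcnt(suitMask) >= 5;
}

unittest
{
    enum high = 0b10000000011110; // five cards, top one high
    enum low  = 0b00000000011111; // five low cards
    assert(isFlush(high) && isFlush(low));
    assert(high > low); // the "better" flush is simply the larger mask
}
```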
Sep 14 2022
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 10.09.22 19:43, Walter Bright wrote:
 On 9/10/2022 9:18 AM, Timon Gehr wrote:
 o!422 is such a hack,
How so? ...
422 is a decimal literal. E.g., there is type deduction logic in the compiler:

```d
import std.conv: o = octal;

void main(){
    auto x0 = o!1000000000;
    static assert(is(typeof(x0) == int));
    auto x1 = o!10000000000;
    static assert(is(typeof(x1) == long));
}
```

It was never meant to be interpreted as octal, and it shows.
 and it does not even (always) work.
You're referring to when it has too many digits, it has to be done as:   o!"442" ...
Sure, for example. Pure, unadulterated incidental complexity. Now every D programmer has to know multiple details about this nonsense. No simplification was achieved. I was objecting to the argument where you used the hack to claim it's not much more ugly. However, this hack should not exist and not be used.
 It would be interesting to see a proposal to improve this sort of thing.
 ...
Build it into the lexer using the canonical syntax.
 Binary literals are, e.g., a GNU C extension and they are in C++14, so 
 clearly people see a use for them.
I implemented them back in the 80s as an extension, and nobody commented on them. I never found a use.
Obviously you have been using them on a daily basis and are just lying to us so you can simplify the lexer. /s (Actually, I can see no upside whatsoever. I imagine the lexer will just get a bit bigger, because it still has to give a nice error message saying to use the Phobos template instead.)
 As for seeing a use, seeing a use for them 
 and actually using them are different things.
 ...
People have been telling you they are actually using them. I think you should consider updating your wrong belief that nobody is actually using them. It seems quite rude to tell people that they are not actually using features that they themselves can see they are obviously using.
 D originally was embeddable in html. The compiler was able to extract it 
 from html files. I saw a use for it, but never found one. It was 
 dropped. Nobody commented on that, either.
 
 
 Let's simplify D.
I really don't understand why you seem to think removing simple and convenient lexer features that behave exactly as expected in favor of overengineered Phobos templates that have weird corner cases and are orders of magnitude slower to compile is a meaningful simplification of D. It utterly makes no sense to me.
The idea is to have a simple core language, and have a way that users can add features via the library.
You consistently argue against things like AST macros. I don't think you can have it both ways. There's value in the language providing obvious features as built-ins with standardized syntax. The features are the language and clean syntax is one of the things that people consistently bring up when they have to explain why they are using D. This becomes harder to uphold when you turn obvious code into logically inconsistent line noise for purely ideological reasons.
 For example, user-defined literals are 
 a valuable feature. C++ added specific syntax for them. D has 
 user-defined literals as fallout from the template syntax.
 ...
D has templates with string arguments. I am pretty sure that even if C++ had templates with string arguments, they still would have added user-defined literals.
 User-defined literals in D are indeed an order of magnitude slower than 
 builtin ones. But that only matters if one is using a lot of them. Like 
 having files filled with them. How often does that happen?
 ...
It adds up. It opens the language up to ridicule. The cost/benefit analysis for removing this feature seems wildly off.
 The Phobos implementation of octal is indeed overengineered, as I 
 mentioned in another post here. Phobos in general has been 
 overengineered, but that's not a fault of the language. I suppose I 
 should submit a PR to fix the octal template implementation.
 ...
I don't think there's a very good implementation of the idea.
 
  > Let's simplify D in a way that actually positively impacts the user 
 experience,
  > for example by getting rid of weird corner cases and arbitrary 
 limitations. Of
  > course, that requires actual design work and sometimes even 
 nontrivial compiler
  > improvements, which is a bit harder than just deleting a few lines of 
 code in
  > the lexer and then adding ten times that amount to Phobos.
 
 We do this all the time.
Which is good.

Which situation is the simplest one?

a)

0x... for hexadecimal
0o... for octal
0b... for binary

b)

0x... for hexadecimal
std.conv.octal!"..." for octal
0b... for binary

c)

0x... for hexadecimal
std.conv.octal!"..." for octal
std.conv.binary!"..." for binary
Sep 10 2022
next sibling parent mw <mingwu gmail.com> writes:
On Saturday, 10 September 2022 at 21:57:51 UTC, Timon Gehr wrote:
 On 10.09.22 19:43, Walter Bright wrote:
 Which situation is the simplest one?

 a)

 0x... for hexadecimal
 0o... for octal
 0b... for binary
Surely it's (a), and it's consistent! The other options... even looking at them makes me want to vomit, so I had to delete them from the quote.
Sep 10 2022
prev sibling next sibling parent IGotD- <nise nise.com> writes:
On Saturday, 10 September 2022 at 21:57:51 UTC, Timon Gehr wrote:
 D has templates with string arguments. I am pretty sure that 
 even if C++ had templates with string arguments, they still 
 would have added user-defined literals.
C++ has had user-defined literals since C++11. Typically something that ends with "_myliteral", for example 300_km. Both string and value literals are supported.
Sep 10 2022
prev sibling next sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 21:57:51 UTC, Timon Gehr wrote:
 422 is a decimal literal. E.g., there is type deduction logic 
 in the compiler:
Well, that was a requirement for Phobos inclusion (and one that caused considerable complexity increase). The idea was to replicate the behavior of the original literal.

Though I will grant that the requirement that if the literal has a L on the end it always comes out typed as long is what causes the effect you pointed out, since the compiler will treat 10000000000 and 10000000000L, and 10L too, all the same way, so the template can't tell the difference.

Still, consider this massively simplified implementation:

enum o(ulong a) = to!int(to!string(a), 8);

That's consistently going to give you an int, even if the compiler made the input a long. But it no longer will give a long if you use the L suffix.

(and yes i know it throws if the octal is legit more than 31 bits too, so you realistically might need a static if branch in there to allow that to be long in those cases. Or you could simply make it always return ulong and let VRP take care of things:

enum o(ulong a) = to!ulong(to!string(a), 8);

int x1 = o!10000000000; // perfectly fine

but then `auto` will always give you ulong. so there is no perfect answer. only the string one can really match the suffix rules etc. or, of course, a built in literal. which is the correct answer. But I personally never liked the string version.

BTW there's also an argument to be made that the whole `enum` aspect is a bit hacky - you can also use a standard function and let the ordinary constant folder and dead code elimination give you the very same result too.)

like i get your point and you can call it a hack if you want but i consider it pure genius just like its brilliant inventor

it just isn't meant for this fallen world in which we live
Sep 10 2022
prev sibling next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Sat, Sep 10, 2022 at 11:57:51PM +0200, Timon Gehr via Digitalmars-d wrote:
[...]
 Which situation is the simplest one?
 
 a)
 
 0x... for hexadecimal
 0o... for octal
 0b... for binary
 
 
 b)
 
 0x... for hexadecimal
 std.conv.octal!"..." for octal
 0b... for binary
 
 
 c)
 
 0x... for hexadecimal
 std.conv.octal!"..." for octal
 std.conv.binary!"..." for binary
There's also:

d)

std.conv.hex!"..." for hexadecimal
std.conv.octal!"..." for octal
std.conv.binary!"..." for binary

Which, judging by the way things are going, is where we're headed. The logical conclusion of which is:

e)

std.conv.hex!"..." for hexadecimal
std.conv.octal!"..." for octal
std.conv.binary!"..." for binary
std.conv.decimal!"..." for decimal

- because the language becomes *much* simpler when it doesn't natively support any integer literals at all. :-P

T -- Being able to learn is a great learning; being able to unlearn is a greater learning.
Sep 10 2022
prev sibling next sibling parent Tejas <notrealemail gmail.com> writes:
On Saturday, 10 September 2022 at 21:57:51 UTC, Timon Gehr wrote:
 On 10.09.22 19:43, Walter Bright wrote:
 [...]
422 is a decimal literal. E.g., there is type deduction logic in the compiler: [...]
I always thought we had user-defined literals thanks to UFCS. Stuff like `3.seconds`, `27.metres`. What do user-defined literals used via templates that take `string` arguments look like?
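For comparison, here is what the string-argument style looks like, using std.conv.octal as the example (octal itself is real Phobos; the `filePerm` name is made up):

```d
import std.conv : octal;

enum filePerm = octal!"644"; // string-argument form; octal!644 also exists
static assert(filePerm == 420); // 6*64 + 4*8 + 4
```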
Sep 11 2022
prev sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Saturday, 10 September 2022 at 21:57:51 UTC, Timon Gehr wrote:
 `0x...` for hexadecimal
 `std.conv.octal!"..."` for octal
 `std.conv.binary!"..."` for binary
IF we get to this point, I’d want `std.conv.hex!"..."` just for the sake of consistency.
Sep 12 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
What's ironic about this discussion is the exact opposite happened with D bitfields.

After implementing it for C, I realized that we could add bitfields to D by simply turning the existing implementation on. The code was already there, it was already supported and debugged.

The other side preferred a template solution that didn't have quite the simple syntax that the C solution had, whereas I thought bitfields would be used enough to justify the simpler builtin syntax.

Another irony was that in turning it on for D, it exposed a serious bug that the extensive tests I wrote for the C side had missed.
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
A lot of us kept quiet about bitfields being turned on.

Honestly, since we are already paying the price for them, turning them on (with a DIP of course) would be brilliant.

With you talking about irony and tests failing, I'm reminded of the fact that Unicode in symbols is currently not being tested with export. Which, if tested, would result in linker errors. How fun!
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/10/22 1:48 PM, Walter Bright wrote:
 What's ironic about this discussion is the exact opposite happened with 
 D bitfields.
 
 After implementing it for C, I realized that we could add bitfields to D 
 by simply turning the existing implementation on. The code was already 
 there, it was already supported and debugged.
 
 The other side preferred a template solution that didn't have quite the 
 simple syntax that the C solution had, whereas I thought bitfields would 
 be used enough to justify the simpler builtin syntax.
I'm not sure that's correct. I think we all preferred a *cross-platform* solution. I.e. a defined bitfield system which does the same thing on all platforms, not necessarily what C does. I personally dislike the phobos bitfields, just because of how they are specified. I'd much prefer a builtin bitfield system. -Steve
Sep 10 2022
prev sibling next sibling parent Dennis <dkorpel gmail.com> writes:
On Saturday, 10 September 2022 at 17:48:15 UTC, Walter Bright 
wrote:
 What's ironic about this discussion is the exact opposite 
 happened with D bitfields.
There's no irony; the two situations are not comparable.

With bitfields, the current situation is we have std.bitmanip in Phobos, which has its own simple layout scheme, doesn't break metaprogramming, but has ugly syntax. The proposal was to make D additionally inherit C's bitfields, which have C's platform-dependent layout scheme, break metaprogramming, but have nice syntax.

With binary literals, the current situation is we have a perfectly fine implementation in the lexer, with the proposal to replace it with a Phobos template that does the exact same thing but with a worse user experience.

What *is* ironic, though, is that [you were against](https://forum.dlang.org/post/qs9cf2$kbt$1 digitalmars.com) deprecating `q"EOS text EOS"` strings, dismissing complexity concerns and siding with user experience. Now you're doing the reverse.
Sep 10 2022
prev sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 10 September 2022 at 17:48:15 UTC, Walter Bright 
wrote:
 What's ironic about this discussion is the exact opposite 
 happened with D bitfields.
C bitfields are legitimately *awful*. That discussion was about that particular definition (or lack thereof), not the concept as a whole. Just like how C's octal literals suck but some octal literals are ok, C's bitfields suck but other bitfields could do well. This was actually going to be the topic of my blog this week but i never got around to finishing it. The basic idea I'd like to see though is that alignment, size, and layout are all defined in a separately reflectable definition.
Sep 10 2022
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Saturday, 10 September 2022 at 02:17:30 UTC, Walter Bright 
wrote:
 On 9/9/2022 4:43 PM, Adam D Ruppe wrote:
 If you're using a lot of octal literals such that this is an 
 issue, one wonders, what for? The only use I know of is for 
 Unix file permissions.
I keep hitting them in random C code I'm translating. Various unix things beyond file permissions, and a hardware manual for a thing I had to drive (an RFID chip), used them for various bit triplets too.
octal!433 is really not much different from 0433. It could even be shortened to o!433, exactly the same number of characters as 0o433.

The reasons for adding language syntactic sugar:

1. it's very commonplace

2. the workarounds are gross

Of course it's a judgement call, and I understand you see them randomly in C code, but does it really pay off? The downside is the language gets bigger and more complex, the spec gets longer, and people who don't come from a C background wonder why their 093 integer isn't 93.
 the newer imported!"std.conv".octal!433 pattern
Nobody would ever write that unless they used octal exactly once, which suggests that octal literals aren't common enough to justify special syntax.
 I often prefer using binary literals anyway, but changing 
 something like 0o50000 to binary is a little obnoxious.
I first implemented binary literals in the 1980s, thinking they were cool and useful. They were not and not. I haven't found a reasonable use for them, or ever wanted them. (I prefer writing them in hex notation, as binary literals take up way too much horizontal space. After all, C3 is a lot easier than 11000011. The latter makes my eyes bleed a bit, too.) Let's simplify D.
I couldn't agree more with this. I've made it clear that I've done some very successful work with D and have been very pleased with the outcome. But this work involved porting C code I wrote 10 years ago that had become ugly (or maybe it always was) and difficult to maintain. The D version is a big improvement. But if I were starting with an empty editor buffer, would I choose D? Especially to write a garden-variety application rather than bashing hardware registers? Perhaps not. Some of that would be simply that a higher-level language would be more suitable, e.g., Haskell or Scheme, both personal favorites. But some of that would be due to the hangover from C and C++ that D still exhibits. My opinion: C was a bad language in 1970 and it is horrifying today. C++? Words fail me, unless they are scatological. I think the more D can detach itself from its C heritage and emphasize modern programming language practice (in other words, take advantage of what we have learned in the last 52 years), the better.
Sep 10 2022
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 9/10/2022 12:05 PM, Don Allen wrote:
 But if I were starting with an empty editor buffer, would I choose D? Especially 
 to write a garden-variety application rather than bashing hardware registers? 
 Perhaps not. Some of that would be simply that a higher-level language would be 
 more suitable, e.g., Haskell or Scheme, both personal favorites. But some of 
 that would be due to the hangover from C and C++ that D still exhibits. My 
 opinion: C was a bad language in 1970 and it is horrifying today. C++? Words 
 fail me, unless they are scatological. I think the more D can detach itself from 
 its C heritage and emphasize modern programming language practice (in other 
 words, take advantage of what we have learned in the last 52 years), the better.
What's amusing is when I embarked on implementing ImportC, I rediscovered all the things I disliked about it that had been fixed in D.
Sep 10 2022
prev sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 10 September 2022 at 19:05:38 UTC, Don Allen wrote:
 I couldn't agree more with this. I've made it clear that I've 
 done some very successful work with D and have been very 
 pleased with the outcome. But this work involved porting C code 
 I wrote 10 years ago that had become ugly (or maybe it always 
 was) and difficult to maintain. The D version is a big 
 improvement.
Removing the binary literals does not mean a reduction in complexity, either in the compiler or in the user code.
Sep 11 2022
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Sunday, 11 September 2022 at 07:24:03 UTC, Max Samukha wrote:
 On Saturday, 10 September 2022 at 19:05:38 UTC, Don Allen wrote:
 I couldn't agree more with this. I've made it clear that I've 
 done some very successful work with D and have been very 
 pleased with the outcome. But this work involved porting C 
 code I wrote 10 years ago that had become ugly (or maybe it 
 always was) and difficult to maintain. The D version is a big 
 improvement.
 Removing the binary literals does not mean a reduction in complexity, either in the compiler or in the user code.
There are multiple ways that complexity has been used on this thread, which I think contributed to a lot of disagreements. It might be better in the future if people make clear whether they refer to compiler-complexity or user-complexity (or call it developer-complexity, same idea).

I don't have the knowledge to comment on how they impact compiler-complexity.

I think most people would agree that removing binary literals would not meaningfully reduce user-complexity. I haven't heard of a new D programmer struggling to understand how binary literals interact with some other feature in a complex way. They aren't that frequently used, but people can look up how they work if they need them. However, there's also an asymmetry. The more a user makes use of them, the larger the potential cost to them of the removal. So, even if there is a minor reduction of user-complexity, the people who make use of them face a larger cost. I think this is what frustrates some on the thread.

I would echo the comments of others about the importance of automated tools to reduce the burden on users of these kinds of changes. I don't recall anyone mentioning the removal of complex/imaginary numbers, but the issues are the same.
Sep 12 2022
next sibling parent IGotD- <nise nise.com> writes:
On Monday, 12 September 2022 at 14:48:12 UTC, jmh530 wrote:
 There are multiple ways that complexity has been used on this 
 thread, which I think contributed to a lot of disagreements. It 
 might be better in the future if people make clear whether they 
 refer to compiler-complexity or user-complexity (or call it 
 developer-complexity, same idea).

 I don't have the knowledge to comment on how they impact 
 compiler-complexity.

 I think most people would agree that removing binary literals 
 would not meaningfully reduce user-complexity. I haven't heard 
 of a new D programmer struggling with understanding about how 
 binary literals interact with some other feature in a complex 
 way.  They aren't that frequently used, but people can look up 
 how they work if you need them. However, there's also an 
 asymmetry. The more a user makes use of them, the larger the 
 potential cost to them for the removal. So, even if there is a 
 minor reduction of user-complexity, the people who make use of 
 them face a larger cost. I think this is what frustrates some 
 on the thread.

 I would echo the comments of others about the importance of 
 automated tools to reduce the burden on users of these kinds of 
 changes. I don't recall anyone mentioning the removal of 
 complex/imaginary numbers, but the issues are the same.
Common number literals should be part of the compiler and not some kind of library. In the case of betterC, you don't want to pull in Phobos just to have simple number literals, which would not even compile on a bare-bones target. BetterC will be used for near-HW programming, where binary literals are useful. You're talking about complexity, and these are simple number literals.

This thread has really made me lose faith in the D project altogether. When you can't support simple things in the compiler, what happens with more complex features? Will you stand there with eyes as empty as birdhouses? A painful thread for me, it is.
Sep 12 2022
prev sibling next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Monday, 12 September 2022 at 14:48:12 UTC, jmh530 wrote:
 On Sunday, 11 September 2022 at 07:24:03 UTC, Max Samukha wrote:
 On Saturday, 10 September 2022 at 19:05:38 UTC, Don Allen 
 wrote:
 I couldn't agree more with this. I've made it clear that I've 
 done some very successful work with D and have been very 
 pleased with the outcome. But this work involved porting C 
 code I wrote 10 years ago that had become ugly (or maybe it 
 always was) and difficult to maintain. The D version is a big 
 improvement.
 Removing the binary literals does not mean a reduction in complexity, either in the compiler or in the user code.
 There are multiple ways that complexity has been used on this thread, which I think contributed to a lot of disagreements. It might be better in the future if people make clear whether they refer to compiler-complexity or user-complexity (or call it developer-complexity, same idea). I don't have the knowledge to comment on how they impact compiler-complexity. I think most people would agree that removing binary literals would not meaningfully reduce user-complexity. I haven't heard of a new D programmer struggling to understand how binary literals interact with some other feature in a complex way. They aren't that frequently used, but people can look up how they work if they need them. However, there's also an asymmetry. The more a user makes use of them, the larger the potential cost to them of the removal. So, even if there is a minor reduction of user-complexity, the people who make use of them face a larger cost. I think this is what frustrates some on the thread. I would echo the comments of others about the importance of automated tools to reduce the burden on users of these kinds of changes. I don't recall anyone mentioning the removal of complex/imaginary numbers, but the issues are the same.
I was talking about language complexity, as was Walter. I thought that was quite clear and still do.

While I expressed agreement with Walter regarding his desire to simplify D, I don't have a strong opinion about the possibility of removing binary literals, because this is not where I see D's complexity problems. I don't use binary literals myself. A recent use case for me was defining various flag bits that are part of integer columns in several SQLite tables. I define them with enums like so:

````
enum AccountFlags {
    descendents_are_marketable_bit = 0,
    descendents_are_marketable = 1,
    hidden_bit = 1,
    hidden = 1 << hidden_bit,
    descendents_are_assets_bit = 2,
    descendents_are_assets = 1 << descendents_are_assets_bit,
    placeholder_bit = 3,
    placeholder = 1 << placeholder_bit,
    ...
````

It's nice that the shifting takes place at compile time in D.

Hardware registers are usually similar to the above -- a collection of bits, each having its own meaning and a specific bit number. If I were doing this sort of work in D, I would most likely deal with those register bits in the same way I describe above. Creating masks for multi-bit fields can be dealt with similarly (a sketch follows below).

So while I don't have a personal use for binary literals, clearly others have. But Walter sees an internal cost to the compiler. So while most of us can weigh in on the perceived benefit of retaining binary literals or not, few of us can understand the cost and therefore the tradeoff.

I'd mention that Rust has binary literals, as does Nim. Haskell does not, though it has both octal and hex literals.
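A minimal sketch of such a multi-bit mask, in the same style as the enum above; the field name, shift, and width here are made up for illustration:

````
enum currency_shift = 4;  // hypothetical field position
enum currency_width = 3;  // hypothetical field width
enum currency_mask  = ((1 << currency_width) - 1) << currency_shift;

static assert(currency_mask == 0b0111_0000); // the mask, as a binary literal
````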
Sep 13 2022
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 13 September 2022 at 13:52:43 UTC, Don Allen wrote:
 [snip]
 I was talking about language complexity, as was Walter. I 
 thought that was quite clear and still do.

 [snip]
I think Walter has referenced both and it's not always clear which ones he is referring to. That's part of the confusion. I can't recall your arguments, to be honest.
Sep 13 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/13/2022 6:52 AM, Don Allen wrote:
 So while I don't have a personal use for binary literals, clearly others have. 
 But Walter sees an internal cost to the compiler.
The internal cost to the compiler is pretty small as these things go. It's more the cognitive cost of a larger language.
Sep 14 2022
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 14 September 2022 at 19:39:19 UTC, Walter Bright 
wrote:
 On 9/13/2022 6:52 AM, Don Allen wrote:
 So while I don't have a personal use for binary literals, 
 clearly others have. But Walter sees an internal cost to the 
 compiler.
The internal cost to the compiler is pretty small as these things go. It's more the cognitive cost of a larger language.
This focuses things on what I'm calling user-complexity. What many people are arguing here is that removing binary literals will not reduce the cognitive load meaningfully.
Sep 14 2022
prev sibling parent Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 14 September 2022 at 19:39:19 UTC, Walter Bright 
wrote:
 On 9/13/2022 6:52 AM, Don Allen wrote:
 So while I don't have a personal use for binary literals, 
 clearly others have. But Walter sees an internal cost to the 
 compiler.
The internal cost to the compiler is pretty small as these things go. It's more the cognitive cost of a larger language.
If that's the case, then I would say that removing binary literals is a really tiny step that is not worth taking, given the push-back from the community. I think there are other much larger ways in which the language is too complex, many inherited from C and C++. For me, that's where the real payoff is. Backwards compatibility, as always, will be the big impediment.
Sep 14 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of complex/imaginary 
 numbers, but the issues are the same.
I was surprised at the pretty much non-existent pushback on removing them, even though it did carry with it the loss of the convenient syntax for them.
Sep 14 2022
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 15/09/2022 7:34 AM, Walter Bright wrote:
 I was surprised at the pretty much non-existent pushback on removing 
 them, even though it did carry with it the loss of the convenient syntax 
 for them.
At some point we need to look into what C is doing and sync back up with them. C23 has an awful lot of new things that we can't represent right now.
Sep 14 2022
prev sibling next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Wednesday, 14 September 2022 at 19:34:00 UTC, Walter Bright 
wrote:
 On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of 
 complex/imaginary numbers, but the issues are the same.
I was surprised at the pretty much non-existent pushback on removing them, even though it did carry with it the loss of the convenient syntax for them.
I had some code that broke. It took maybe 15 minutes or half an hour to fix. I don't recall there being a preview switch for that, but if not, it might have been good to have one. I agree it was a convenient syntax, but most people probably agreed that it wasn't pulling its weight.
Sep 14 2022
parent Daniel N <no public.email> writes:
On Wednesday, 14 September 2022 at 19:56:13 UTC, jmh530 wrote:
 On Wednesday, 14 September 2022 at 19:34:00 UTC, Walter Bright 
 wrote:
 On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of 
 complex/imaginary numbers, but the issues are the same.
I was surprised at the pretty much non-existent pushback on removing them, even though it did carry with it the loss of the convenient syntax for them.
I had some code that broke. It took maybe 15 minutes or a half an hour to fix. I don't recall there being a preview switch for that, but if not it might have been good to have one. I agree it was a convenient syntax, but most people probably agreed that it wasn't pulling its weight.
... and it's worse if you remove common C features, as then D is no longer a BetterC.
Sep 14 2022
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 9/14/22 3:34 PM, Walter Bright wrote:
 On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of complex/imaginary 
 numbers, but the issues are the same.
Not even close. Complex numbers were a set of 6 *types*. Binary literals are not even a type, but just another way to specify an integer. Removing complex numbers means you have to change the type of anything you were using it for to the library Complex type. Removing binary literals would just mean you had to change literals to either hex, or some std.conv function. So in terms of burden, the complex number removal is far greater (for those who used it) than removing binary literals would be. But in terms of language complexity, the *benefits* of removing complex numbers as a builtin are much much greater -- removing a slew of code generation, removing 6 types and the accompanying TypeInfo classes, (possibly) removing keywords, etc. The library also gets more complicated, because now it must replace that functionality. But with that cost, now you have a template that can be used with other things (like half-float). Removing binary literals means removing 5 lines of code in the lexer. That's it. And you could add a std.conv.binary function (but probably not necessary). Which is why it's so confusing that we are even having this debate. It's neither a monumental achievement, nor a monumental burden, it's just... annoying. It would be like removing a dedicated clock in a car dashboard, because you could see the time in the touch-screen system corner. For saving a few pennies you piss off all the customers who *liked* that clock feature.
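For what it's worth, a minimal sketch of the std.conv.binary-style helper mentioned above; note that no such function exists in Phobos today, so this is purely hypothetical:

```d
import std.conv : to;

// Hedged sketch of the hypothetical helper: parse a binary digit string
// at compile time.
enum binary(string s) = to!ulong(s, 2);

static assert(binary!"1011" == 11);
```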
 I was surprised at the pretty much non-existent pushback on removing 
 them, even though it did carry with it the loss of the convenient syntax 
 for them.
I've never used complex numbers in code. Not even while playing around. I've used binary literals, not a ton, but I have used them. And when I do use them, it's not because I like binary literals more than hex literals, or decimals, it's because for that specific case, they were a tad clearer. I suspect many are in the same boat. -Steve
Sep 14 2022
next sibling parent reply Daniel N <no public.email> writes:
On Thursday, 15 September 2022 at 00:15:03 UTC, Steven 
Schveighoffer wrote:
 On 9/14/22 3:34 PM, Walter Bright wrote:

 Removing binary literals means removing 5 lines of code in the 
 lexer. That's it. And you could add a std.conv.binary function 
 (but probably not necessary).
Actually we can't even remove it, we need to keep the support for ImportC and add extra logic to disable it for D!
Sep 14 2022
parent reply Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 15 September 2022 at 05:48:31 UTC, Daniel N wrote:


 Actually we can't even remove it, we need to keep the support 
 for ImportC and add extra logic to disable it for D!
C11 doesn't have binary literals, but the idea that support for C11 would be enough to make ImportC widely useful is strange.
Sep 14 2022
parent reply Daniel Nielsen <no public.email> writes:
On Thursday, 15 September 2022 at 05:57:17 UTC, Max Samukha wrote:
 On Thursday, 15 September 2022 at 05:48:31 UTC, Daniel N wrote:

 Actually we can't even remove it, we need to keep the support 
 for ImportC and add extra logic to disable it for D!
C11 doesn't have binary literals, but the idea that support for C11 would be enough to make ImportC widely useful is strange.
Besides C23, it's also a GNU extension. We added other GNU/Clang extensions to ImportC, such as typed enums (enum : uint32_t), etc.
Sep 14 2022
parent Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 15 September 2022 at 06:42:34 UTC, Daniel Nielsen 
wrote:
 Besides C23 it's also a gnu extension. We added other gnu/clang 
 extension to ImportC such as typed enum : uint32_t etc.
Yeah, I read this: "Implementation Defined: Adjustment to the ImportC dialect is made to match the behavior of the C compiler that the D compiler is matched to, i.e. the Associated C Compiler." as meaning that LDC/GDC will have to support every C extension GNU/Clang supports.
Sep 15 2022
prev sibling parent zjh <fqbqrr 163.com> writes:
On Thursday, 15 September 2022 at 00:15:03 UTC, Steven 
Schveighoffer wrote:
 For saving a few pennies you piss off all the customers who 
 *liked* that clock feature.
A small feature that solves a big problem, and in only a few lines of code! And of this feature, the author of D says: I will delete it! Come and stop me!
Sep 14 2022
prev sibling next sibling parent 0xEAB <desisma heidel.beer> writes:
On Wednesday, 14 September 2022 at 19:34:00 UTC, Walter Bright 
wrote:
 On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of 
 complex/imaginary numbers, but the issues are the same.
I was surprised at the pretty much non-existent pushback on removing them, even though it did carry with it the loss of the convenient syntax for them.
I’ve *seen* complex numbers being used in real-world D code exactly once (in an example of the Dplug framework, I think). That’s where the huge difference comes in: binary literals on the other hand are a feature I’ve actually used myself. And I’d think the same applies to others.
Sep 17 2022
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 14.09.22 21:34, Walter Bright wrote:
 On 9/12/2022 7:48 AM, jmh530 wrote:
 I don't recall anyone mentioning the removal of complex/imaginary 
 numbers, but the issues are the same.
I was surprised at the pretty much non-existent pushback on removing them, even though it did carry with it the loss of the convenient syntax for them.
I will always miss "creal". It was the most humorous D keyword.
Sep 18 2022
prev sibling parent Kagamin <spam here.lot> writes:
On Friday, 9 September 2022 at 23:04:17 UTC, Walter Bright wrote:
 If you're using a lot of octal literals such that this is an 
 issue, one wonders, what for? The only use I know of is for 
 Unix file permissions.
Also, UTF parsing involves inspecting the leading set bits to determine the number of code units. Binary is more comfortable than octal for unix permissions for me: whenever I see them in octal form, I have to convert them to binary to make sense of them, and then back from binary to octal to write them.
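For illustration, a minimal sketch of that kind of UTF-8 leading-byte test, where binary literals keep the masks readable (illustrative, not from any particular library):

```d
// Hedged sketch: the leading set bits of the first code unit give the
// UTF-8 sequence length.
uint utf8SequenceLength(ubyte first)
{
    if ((first & 0b1000_0000) == 0)           return 1;
    if ((first & 0b1110_0000) == 0b1100_0000) return 2;
    if ((first & 0b1111_0000) == 0b1110_0000) return 3;
    if ((first & 0b1111_1000) == 0b1111_0000) return 4;
    return 0; // continuation or invalid leading byte
}

unittest
{
    assert(utf8SequenceLength(0b0100_0001) == 1); // 'A'
    assert(utf8SequenceLength(0b1110_0010) == 3); // first byte of a 3-byte sequence
}
```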
Sep 12 2022
prev sibling next sibling parent reply 0xEAB <desisma heidel.beer> writes:
On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 Please reconsider binary literal deprecation.
+1 funfact, PHP got them as well: ```php <?php var_dump(0b1011); // int(11) var_dump(0b0011_1111); // int(63) ```
Sep 09 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Sep 09, 2022 at 11:10:16PM +0000, 0xEAB via Digitalmars-d wrote:
 On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 Please reconsider binary literal deprecation.
+1 funfact, PHP got them as well: ```php <?php var_dump(0b1011); // int(11) var_dump(0b0011_1111); // int(63) ```
PHP has all sorts of things that I'm not sure are wise to emulate. Just because something is in PHP doesn't make a good argument for including it in D. :-D T -- A computer doesn't mind if its programs are put to purposes that don't match their names. -- D. Knuth
Sep 09 2022
parent 0xEAB <desisma heidel.beer> writes:
On Friday, 9 September 2022 at 23:21:38 UTC, H. S. Teoh wrote:
 PHP has all sorts of things I'm not sure is wise to emulate.
“At least there aren’t two different yet incompatible standard libraries in PHP.” – assuming we’re throwing around obsolete prejudices.
Sep 10 2022
prev sibling next sibling parent MrSmith33 <mrsmith33 yandex.ru> writes:
On Friday, 9 September 2022 at 16:55:18 UTC, Puneet Goel wrote:
 Please reconsider binary literal deprecation.
Just to add a data point to the discussion, I have run a search over one of my D projects and got `151 hits in 15 files` for `\b0b`, so I use them quite a lot.
Sep 10 2022
prev sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Sep 14, 2022 at 06:42:00PM +0000, Daniel N via Digitalmars-d wrote:
 On Wednesday, 14 September 2022 at 18:38:21 UTC, rikki cattermole wrote:
 
 On 15/09/2022 6:35 AM, Daniel N wrote:
 Because D is multiparadigm, everyone has their own list. I love
 and use all of those features. Currently I can only think of 1
 feature I don't use, but others use it so it doesn't matter.
 By any chance would it happen to be @property?
OK, you got me, lol.
On Thu, Sep 15, 2022 at 06:44:03AM +1200, rikki cattermole via Digitalmars-d wrote:
 I was going to post a poll on people who actually use its semantics
 last night.
 
 I am pretty sure if we replaced it with a UDA there would be very
 limited breakage.
At one time I used to studiously write @property on my range methods, because isInputRange/isForwardRange at one point required it. But since then, somebody has changed Phobos to relax this requirement, so nowadays I don't bother with @property anymore. Except maybe occasionally for self-documentation purposes, but even that is falling out of use. I *probably* wouldn't miss @property if it was removed today (as long as it doesn't break older code). T -- People who are more than casually interested in computers should have at least some idea of what the underlying hardware is like. Otherwise the programs they write will be pretty weird. -- D. Knuth
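For context, a minimal sketch of the old pattern being described (illustrative, not from Phobos):

```d
// Hedged sketch: @property on range members, which isInputRange once
// effectively required.
struct Counter
{
    int i, n;
    @property bool empty() const { return i >= n; }
    @property int front() const { return i; }
    void popFront() { ++i; }
}

unittest
{
    import std.range.primitives : isInputRange;
    static assert(isInputRange!Counter);
}
```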
Sep 14 2022