
digitalmars.D - Fixing C's Biggest Mistake

reply Walter Bright <newshound2 digitalmars.com> writes:
https://news.ycombinator.com/edit?id=34084894

I'm wondering. Should I just go ahead and implement [..] in ImportC?
Dec 21 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 11:09 AM, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894
 
 I'm wondering. Should I just go ahead and implement [..] in #ImportC?
Vote here: https://twitter.com/WalterBright/status/1605642794965483521
Dec 21 2022
next sibling parent Dave P. <dave287091 gmail.com> writes:
On Wednesday, 21 December 2022 at 19:18:19 UTC, Walter Bright 
wrote:
 On 12/21/2022 11:09 AM, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894
 
 I'm wondering. Should I just go ahead and implement [..] in 
 #ImportC?
Vote here: https://twitter.com/WalterBright/status/1605642794965483521
I don’t use twitter, so count this as a vote for yes. The C committee doesn’t like adding features out of whole cloth like the C++ committee does, so having a C compiler with this extension is one step towards standardization.
Dec 21 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 11:18 AM, Walter Bright wrote:
 On 12/21/2022 11:09 AM, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in #ImportC?
Vote here: https://twitter.com/WalterBright/status/1605642794965483521
Stunning poll results just in:

41.6% yes
29.2% no
 0.0% MS syntax is better
29.2% Hal needs to open the pod bay doors
Dec 22 2022
parent reply Greggor <Greggor notareal.email> writes:
On Thursday, 22 December 2022 at 22:45:40 UTC, Walter Bright 
wrote:
  0.0% MS syntax is better
At least we can all agree on one thing, the MS syntax is horrid :^)
Dec 22 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 4:08 PM, Greggor wrote:
 On Thursday, 22 December 2022 at 22:45:40 UTC, Walter Bright wrote:
  0.0% MS syntax is better
At least we can all agree on one thing, the MS syntax is horrid :^)
One reason I get paid the Big Bucks!
Dec 22 2022
prev sibling next sibling parent reply matheus <matheus gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
Could you please post what's going on for those who can't access the link? Matheus.
Dec 21 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 11:22 AM, matheus wrote:
 On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in ImportC?
Could you please post what's going on for those who can't access the link?
I posted this on Hacker News in response to an article about Microsoft's Checked C.
=======================
Checked C:

    int a[5] = { 0, 1, 2, 3, 4};
    _Array_ptr<int> p : count(5) = a;  // p points to 5 elements.

My proposal for C:

    int a[5] = { 0, 1, 2, 3, 4};
    int p[..] = a;  // p points to 5 elements.

https://www.digitalmars.com/articles/C-biggest-mistake.html

https://github.com/Microsoft/checkedc/wiki/New-pointer-and-a...
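For comparison, D's built-in slices already carry the length that the proposed `[..]` declaration would add to C. A minimal D sketch of what that buys (assuming a current D compiler; the identifiers are illustrative only):

    void main()
    {
        int[5] a = [0, 1, 2, 3, 4];
        int[] p = a[];           // slice: pointer and length travel together
        assert(p.length == 5);   // the length is part of the type's value
        // p[7] = 0;             // would fail the runtime bounds check
    }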
Dec 21 2022
next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
wrote:
 I posted this on Hacker News in response to an article about 
 Microsoft's Checked C.
 =======================
 Checked C:

     int a[5] = { 0, 1, 2, 3, 4};
     _Array_ptr<int> p : count(5) = a;  // p points to 5 
 elements.

 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.

 https://www.digitalmars.com/articles/C-biggest-mistake.html

 https://github.com/Microsoft/checkedc/wiki/New-pointer-and-a...
I don't like Microsoft's Checked C, and I don't like your alternative. Not because yours isn't better than Microsoft's (it certainly is), but because I don't believe C needs fixing.

I also don't want to have to learn new things to program in C. I'm sick of having to learn new things all the time!! That's precisely why I like C: because I don't need to. Sure, C is unsafe, but that's the way I like it.

Having said that, I'd much rather see your alternative in the C standard than Microsoft's. But I'd rather see neither. The ship has sailed on C. Anyone that uses it knows its quirks and just deals with them. Thankfully, it's extremely unlikely either would ever get into the standard in any case.

As for ImportC, I don't see a problem, since D should be able to provide extensions to its C implementation, just as other implementers are free to do.
Dec 22 2022
next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 22 December 2022 at 10:44:07 UTC, 
areYouSureAboutThat wrote:

and just a followup to my previous post...

I'd like to see C modules in the standard, that's for sure.

Not this bounds checking nonsense!

Can you focus on C modules getting into the standard instead 
perhaps ;-)
Dec 22 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 2:48 AM, areYouSureAboutThat wrote:
 Can you focus on C modules getting into the standard instead perhaps ;-)
My attempts at getting things into the C or C++ standards have all failed. That doesn't mean D features haven't gotten into the standards, though - the committees have copied successful D features!
Dec 22 2022
prev sibling parent reply cc <cc nevernet.com> writes:
On Thursday, 22 December 2022 at 10:44:07 UTC, 
areYouSureAboutThat wrote:
 I also don't want to have to learn new things, to program in C.
 I'm sick of having to learn new things all the time!!
 That's precisely why I like C. Cause I don't need to.
 Sure C is unsafe, but that's the way I like it.
I greatly admire this post.
Dec 25 2022
parent reply bachmeier <no spam.net> writes:
On Sunday, 25 December 2022 at 20:04:01 UTC, cc wrote:
 On Thursday, 22 December 2022 at 10:44:07 UTC, 
 areYouSureAboutThat wrote:
 I also don't want to have to learn new things, to program in C.
 I'm sick of having to learn new things all the time!!
 That's precisely why I like C. Cause I don't need to.
 Sure C is unsafe, but that's the way I like it.
I greatly admire this post.
To be honest, I spend hours learning every time I write C code. And it's not the fun kind of learning. It's time spent debugging uninteresting things because the language is an abomination.
Dec 25 2022
next sibling parent cc <cc nevernet.com> writes:
On Sunday, 25 December 2022 at 20:34:25 UTC, bachmeier wrote:
 On Sunday, 25 December 2022 at 20:04:01 UTC, cc wrote:
 On Thursday, 22 December 2022 at 10:44:07 UTC, 
 areYouSureAboutThat wrote:
 I also don't want to have to learn new things, to program in 
 C.
 I'm sick of having to learn new things all the time!!
 That's precisely why I like C. Cause I don't need to.
 Sure C is unsafe, but that's the way I like it.
I greatly admire this post.
To be honest, I spend hours learning every time I write C code. And it's not the fun kind of learning. It's time spent debugging uninteresting things because the language is an abomination.
C++ is an abomination. C is just an innocent troglodyte. Primitive, dangerous when provoked (which is often), but not malevolent.
Dec 25 2022
prev sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Sunday, 25 December 2022 at 20:34:25 UTC, bachmeier wrote:
 To be honest, I spend hours learning every time I write C code. 
 And it's not the fun kind of learning. It's time spent 
 debugging uninteresting things because the language is an 
 abomination.
C was designed to compete with assembly. I'm unaware of any other programming language that has been more successful in doing just that. The problems you mention are the price one must pay to use C. It's also why it is so hard to create a 'better' C: because you can't.
Dec 25 2022
parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 26/12/2022 2:13 PM, areYouSureAboutThat wrote:
 I'm unaware of any other programming language that has been more 
 successful in doing just that.
Ooh, I can name two! BASIC: almost every microcomputer had it come by default in ROM. And C itself. C before and after putting parameter declarations into the function parameter list has a pretty different feel, and I believe the change had some semantic flow-on effects, from what I saw in a 1970s-era C compiler which was dead simple.
Dec 25 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/25/2022 5:19 PM, Richard (Rikki) Andrew Cattermole wrote:
 C pre/post putting parameters into function parameter list is a pretty
different 
 feel and I believe it had some semantic flow on effects from what I saw in a 
 1970's era C compiler which was dead simple.
You're referring to function prototyping, a C++ feature which was backported to C. I programmed in C a lot before function prototyping. The errors were rampant and very difficult to find. C compilers first started adding prototyping as an extension, and it grew so popular it had to be put in the Standard.

C has had a number of fundamental improvements over K&R C that dramatically reduced the incidence of bugs. Function prototyping was probably the biggest. Another big problem was sign preserving vs value preserving integer promotion. That made C unportable, and the community was evenly split between the two. Sign preserving won basically by fiat, and everyone else had to change their code.
Dec 25 2022
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Monday, 26 December 2022 at 04:31:38 UTC, Walter Bright wrote:
 ...
Well, to paraphrase Andrew Koenig (and taking much liberty in doing so, i.e. going beyond what he actually said):

---
Many programmers hesitate to program in C for fear of getting it wrong. Others are fearless... and do get it wrong.
---
Dec 26 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
C's uninitialized variables were another fountain of endless and hard-to-track-down problems. D initializes them by default for a very good reason.
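A minimal sketch of those defaults as D documents them (zero for integers, NaN for floats, null for pointers):

    void main()
    {
        import std.math : isNaN;

        int i;        // 0, never garbage
        double d;     // double.nan
        int* p;       // null
        assert(i == 0);
        assert(p is null);
        assert(isNaN(d));   // a forgotten float is loud, not silently zero
    }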
Dec 26 2022
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 27 December 2022 at 00:38:33 UTC, Walter Bright wrote:
 C uninitialized variables was another fountain of endless and 
 hard to track down problems. D initializes them by default for 
 a very good reason.
Is it initialization or "branding"? Honestly, I've never been comfortable with D's initialization semantics. On the one hand, the language postulates that, for any type T, there is always a valid value T.init. On the other hand, it makes that weird distinction between initialization and "branding". That has always felt a bit schizophrenic.

If T.init is supposed to be a valid value, then the constructor receives an already initialized object (not some "branded" aberrant), so the constructor actually plays the role of assignment.

If T.init is supposed to be an invalid value useful for debugging, then variables initialized to that value... are not initialized.

IMO, D's attempt to conflate those two meanings of T.init is a failed experiment. Even C++ seems to be going in the right direction (https://youtu.be/ELeZAKCN4tY?t=4887) in that respect.
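For reference, a minimal sketch of the behaviour being debated: in D, a struct's fields already hold their field-initializer (.init) values when the constructor body starts, so the constructor works on an already-"branded" object (the struct name here is illustrative only):

    struct S
    {
        int x = 42;             // contributes to S.init
        this(int y)
        {
            assert(x == 42);    // field already holds its .init value here
            x = y;              // constructor then acts much like assignment
        }
    }

    void main()
    {
        auto s = S(7);
        assert(s.x == 7);
    }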
Dec 27 2022
next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Tuesday, 27 December 2022 at 09:41:59 UTC, Max Samukha wrote:
 If T.init is supposed to be a valid value, then the constructor 
 receives an already initialized object (not some "branded" 
 abberant), so the constructor actually plays the role of 
 assignment.

 If T.init is supposed to be an invalid value useful for 
 debugging, then variables initialized to that value... are not 
 initialized.
The `.init` value is supposed to be both. A null pointer is a good example. It is valid in the sense that its behaviour is reliable. Dereferencing it always crashes the program, as opposed to undefined behaviour. Also it will reliably say yes when compared to another null pointer.

But it is also a useful value for debugging, because accidentally using it immediately crashes and produces a core dump, making it obvious we had a null where there shouldn't be one. Also when debugging, a pointer to address `0x0000_0000_0000_0000` is clearly uninitialised, while a pointer to whatever happens to be there might look like it's pointing to something valid.
Dec 27 2022
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 27 December 2022 at 11:32:51 UTC, Dukc wrote:
 On Tuesday, 27 December 2022 at 09:41:59 UTC, Max Samukha wrote:
 The `.init` value is supposed to be both. A null pointer is a 
 good example. It is valid in the sense it's behaviour is 
 reliable. Dereferencing it always crashes the program, as 
 opposed to undefined behaviour. Also it will reliably say yes 
 when compared to another null pointer.
I'd say it is invalid, but using it results in deterministic behavior. Hence "invalid but good for debugging".
 But it is also an useful value for debugging, because 
 accidently using it immediately crashes and produces a core 
 dump, making it obvious we had a null where there shouldn't be 
 one. Also when debugging, pointer to address 
 `0x0000_0000_0000_0000` is clearly uninitialised, while a 
 pointer to whatever happens might look like it's pointing to 
 something valid.
Yeah, but in case of an int, you never can tell whether the programmer wanted to initialize it to 0 or forgot to initialize it.
Dec 27 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/27/2022 11:12 AM, Max Samukha wrote:
 Yeah, but in case of an int, you never can tell whether the programmer wanted
to 
 initialize it to 0 or forgot to initialize it.
It's better than C++'s approach of default initializing it with a random bit pattern.
Dec 27 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/27/2022 3:32 AM, Dukc wrote:
 The `.init` value is supposed to be both. A null pointer is a good example. It 
 is valid in the sense it's behaviour is reliable. Dereferencing it always 
 crashes the program, as opposed to undefined behaviour. Also it will reliably 
 say yes when compared to another null pointer.
 
 But it is also an useful value for debugging, because accidently using it 
 immediately crashes and produces a core dump, making it obvious we had a null 
 where there shouldn't be one. Also when debugging, pointer to address 
 `0x0000_0000_0000_0000` is clearly uninitialised, while a pointer to whatever 
 happens might look like it's pointing to something valid.
D's positive initialization also ensures that instances will not be initialized with random garbage.
Dec 27 2022
parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Tuesday, 27 December 2022 at 22:54:59 UTC, Walter Bright wrote:
 
 ...
 D's positive initialization also ensures that instances will 
 not be initialized with random garbage.
And it may not be 'random garbage'. It could be your privateKey. And you don't want to be sending your uninitialised buffer over the network with your privateKey still in it...

It is very difficult to argue against init-by-default, in any language. Thankfully D provides an opt-out -> '.. = void'; (except for pointers in safe mode).

But I would never support default-init for stack/heap allocations in C. How would I then discover your privateKey!?!?
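A minimal sketch of that opt-out (assuming non-@safe code; `= void` deliberately leaves the memory uninitialized, so it really can contain whatever was there before):

    void main()
    {
        ubyte[4096] zeroed;           // default-initialized: all zeros
        ubyte[4096] scratch = void;   // explicitly uninitialized, easy to grep for
        scratch[0] = 1;               // fine, but reading before writing is on you
    }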
Dec 27 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/27/2022 1:41 AM, Max Samukha wrote:
 If T.init is supposed to be an invalid value useful for debugging, then 
 variables initialized to that value... are not initialized.
It depends on the designer of the struct to decide on an initialized value that can be computed at compile time. This is not a failure, it's a positive feature. It means struct instances will *never* be in a garbage state. C++ does it a different way, not a better way.
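A sketch of that idea, using a hypothetical `Handle` type: the designer picks the field initializers, the resulting `.init` is a compile-time constant, and default-constructed instances are never garbage.

    struct Handle
    {
        int fd = -1;          // designer-chosen sentinel: "not open"
        bool open = false;
    }

    static assert(Handle.init.fd == -1);   // computed at compile time

    void main()
    {
        Handle h;             // never garbage: h is exactly Handle.init
        assert(h.fd == -1 && !h.open);
    }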
Dec 27 2022
next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/27/22 23:53, Walter Bright wrote:
 This is not a failure, it's a positive feature.
To some extent. Aspects of this have been lovingly nicknamed the "billion dollar mistake".
 It means struct instances will *never* be in a garbage state. 
(Memory corruption.)
Dec 28 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/28/2022 1:33 AM, Timon Gehr wrote:
 On 12/27/22 23:53, Walter Bright wrote:
 This is not a failure, it's a positive feature.
To some extent. Aspects of this have been lovingly nicknamed the "billion dollar mistake".
I don't agree with that assessment at all. Having a seg fault when your program enters an unanticipated, invalid state is a *good* thing. The *actual* billion dollar mistake(s) in C are:

1. uninitialized data leading to undefined behavior

2. no way to do array buffer overflow detection

because those lead to malware and other silent disasters. And it's good to have a state that a memory object can be initialized to that cannot fail.
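Mistake 2 is what D's array bounds checking addresses; a minimal sketch (with the default, non-release bounds checks, an out-of-range index throws a RangeError instead of silently overwriting memory):

    void main()
    {
        int[] a = [0, 1, 2];
        size_t i = 5;           // comes from runtime data, say
        if (i < a.length)
            a[i] = 42;          // in-bounds writes are fine
        // a[i] = 42;           // out of bounds: throws core.exception.RangeError
        //                      // rather than silently corrupting memory
    }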
Dec 29 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Thursday, 29 December 2022 at 20:38:23 UTC, Walter Bright 
wrote:
 I don't agree with that assessment at all. Having a seg fault 
 when your program enters an unanticipated, invalid state is a 
 *good* thing. The *actual* billion dollar mistake(s) in C are:
The alternative is the language could have prevented this state from being unanticipated at all, e.g. nullable vs not-null types.
Dec 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2022 12:45 PM, Adam D Ruppe wrote:
 The alternative is the language could have prevent this state from being 
 unanticipated at all, e.g. nullable vs not null types.
It can't really prevent it. What happens is people assign a value, any value, just to get it to compile. I've seen it enough to not encourage that practice.

If there are no null pointers, what happens to designate a leaf node in a tree? An equivalent "null" object is invented. Nothing is really gained.

Null pointers are an excellent debugging tool. When a seg fault happens, it leads directly to the mistake with a backtrace. The "go directly to jail, do not pass go, do not collect $200" nature of what happens is good. *Hiding* those errors happens with non-null pointers.

Initialization with garbage is terrible. I've spent days trying to find the source of those bugs.

Null pointer seg faults are as useful as array bounds overflow exceptions.

NaNs are another excellent tool. They enable, for example, dealing with a data set that may have unknown values in it from bad sensors. Replacing that missing data with "0.0" is a very bad idea.
Dec 29 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
All that said, we'll get non-nullable pointers with sum types. How well that 
will work wrt bugs, we'll see!
Dec 29 2022
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 04:02, Walter Bright wrote:
 All that said, we'll get non-nullable pointers with sum types. How well 
 that will work wrt bugs, we'll see!
Well, if we get compile-time checking for them it will work better, otherwise it won't.
Dec 29 2022
prev sibling parent Hipreme <msnmancini hotmail.com> writes:
On Friday, 30 December 2022 at 03:02:04 UTC, Walter Bright wrote:
 All that said, we'll get non-nullable pointers with sum types. 
 How well that will work wrt bugs, we'll see!
So, although many things happened here in this post, I would like to say that although many people here didn't like the idea of ImportC, I believe it can be a great game changer by attracting people who want to integrate D into existing C code without requiring them to port anything; read that as less investment to make.

I would just like to see ImportC working in real life before you start extending it to do things it wasn't supposed to do. No one here wants more half-assed features. I just wish the focus to be on finishing ImportC, and then on fixing C's biggest mistake.
Dec 30 2022
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 03:03, Walter Bright wrote:
 On 12/29/2022 12:45 PM, Adam D Ruppe wrote:
 The alternative is the language could have prevent this state from 
 being unanticipated at all, e.g. nullable vs not null types.
It can't really prevent it. What happens is people assign a value, any value, just to get it to compile.
No, if they want a special state, they just declare that special state as part of the type. Then the type system makes sure they don't dereference it.
 I've seen it enough to not encourage that practice.
 ...
There is ample experience with that programming model and languages are generally moving in the direction of not allowing null dereferences. This is because it works. You can claim otherwise, but you are simply wrong.
 If there are no null pointers, what happens to designate a leaf node in 
 a tree?
E.g. struct Node{ Node[] children; }
 An equivalent "null" object is invented.
No, probably it would be a "leaf" object. E.g.:

    data BinaryTree = Inner BinaryTree BinaryTree | Leaf

Now neither of the two cases is special. You can pattern match on BinaryTrees to figure out whether it is an inner node or a leaf. The compiler checks that you cover all cases. This is not complicated.

    size tree = case tree of
        Inner t1 t2 -> size t1 + size t2 + 1
        Leaf -> 1

No null was necessary.

    size (Inner Leaf (Inner Leaf Leaf))
    5
 Nothing is really gained.
 ...
Nonsense. Compile-time checking is really gained. This is just a question of type safety.
 Null pointers are an excellent debugging tool. When a seg fault happens, 
 it leads directly to the mistake with a backtrace. The "go directly to 
 jail, do not pass go, do not collect $200" nature of what happens is 
 good. *Hiding* those errors happens with non-null pointers.
 ...
Not at all. You simply get those errors at compile time. As you say, it's an excellent debugging tool.
 Initialization with garbage is terrible.
Of course.
 I've spent days trying to find 
 the source of those bugs.
 
 Null pointer seg faults are as useful as array bounds overflow exceptions.
 ...
Even array bounds overflow exceptions would be better as compile-time errors. If you don't consider that practical, that's fine, I guess it will take a couple of decades before people accept that this is a good idea, but it's certainly practical today for null dereferences.
 NaNs are another excellent tool. They enable, for example, dealing with 
 a data set that may have unknown values in it from bad sensors. 
 Replacing that missing data with "0.0" is a very bad idea.
This is simply about writing code that does not lie.

Current way:

    Object obj; // <- this is _not actually_ an Object

Much better:

    Object? obj; // <- Object or null made explicit

    if(obj){
        static assert(is(typeof(obj)==Object)); // ok, checked
        // can dereference obj here
    }
    obj.member(); // error, obj could be null

The same is true for floats. It would in principle make sense to have an additional floating-point type that does not allow NaN.

This is simply about type system expressiveness: you can still do everything you were able to do before, but the type system will be able to catch your mistakes early because you are making your expectations explicit across function call boundaries. It just makes no sense to add an additional invalid state to every type and defer to runtime where it may or may not crash, when instead you could have just given a type error.
Dec 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2022 8:01 PM, Timon Gehr wrote:
 Even array bounds overflow exceptions would be better as compile-time errors. If you don't consider that practical, that's fine, I guess it will take a couple of decades before people accept that this is a good idea,

The size of the array depends on the environment. I don't see how to do that at compile time.
 but it's certainly practical today for null dereferences.
Pattern matching inserts an explicit runtime check, rather than using the hardware memory protection to do the check. All you get with pattern matching is (probably) a better error message, and a slower program. You still get a fatal error, if the pattern match arm for the null pointer is fatal.

You can also get a better error message with a seg fault if you code a trap for that error.

Isn't it great that the hardware provides runtime null checking for you at zero cost?

If a seg fault resulted in memory corruption, then I agree with you. But it doesn't, it's at zero cost, your program runs at full speed.

P.S. in the bad old DOS days, a null pointer write would scramble DOS's interrupt table, which had unpredictable and often terrible effects. Fortunately, uP's have evolved since then into having hardware memory protection, so that is no longer an issue. As soon as I got a machine with memory protection, I switched all my development to that. Only as a last step did I recompile it for DOS.
Dec 30 2022
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 21:27, Walter Bright wrote:
 On 12/29/2022 8:01 PM, Timon Gehr wrote:
  > Even array bounds overflow exceptions would be better as compile-time 
 errors. If you don't consider that practical, that's fine, I guess it 
 will take a couple of decades before people accept that this is a good 
 idea,
 
 The size of the array depends on the environment. I don't see how to do 
 that at compile time.
 ...
Well, it can be done in many cases. The point is it would be even better. I made this point because you said a null pointer segfault is like a bounds-checking failure. I agree.
  > but it's certainly practical today for null dereferences.
 
 Pattern matching inserts an explicit runtime check, rather than using 
 the hardware memory protection to do the check. All you get with pattern 
 matching is (probably) a better error message, and a slower program. You 
 still get a fatal error, if the pattern match arm for the null pointer 
 is fatal.
 ...
If the pattern match arm for the null pointer is fatal, maybe the reference should not have been typed as nullable in the first place, and no check should have been required at all. You seem to be reasoning from the position that the fatal runtime error was unavoidable. This is just not the case I care about at all.
 You can also get a better error message with a seg fault if you code a 
 trap for that error.
 
 Isn't it great that the hardware provides runtime null checking for you 
 at zero cost?
 ...
I am saying A > B, you are saying B > C. I agree that B > C. I don't know what to tell you.
 If a seg fault resulted in memory corruption, then I agree with you. But 
 it doesn't, it's at zero cost, your program runs at full speed.
 ...
Errors manifesting in production that should have been caught immediately at compile time are not zero cost. Not even close. E.g., see https://deepsource.io/blog/exponential-cost-of-fixing-bugs/

Sometimes, the original developer is not even around anymore and/or does not care to fix crashes. The cost to users can be very high, and at the very least it is embarrassing to the developer.
 P.S. in the bad old DOS days, a null pointer write would scramble DOS's 
 interrupt table, which had unpredictable and often terrible effects. 
 Fortunately, uP's have evolved since then into having hardware memory 
 protection, so that is no longer an issue. As soon as I got a machine 
 with memory protection, I switched all my development to that. Only as a 
 last step did I recompile it for DOS.
 
So your point is that B > C, therefore you switched from C to B. My point is that A > B, so I want to switch from B to A. Unfortunately, there's some nontrivial cost to jumping ship and switching languages, so I am engaging in this kind of seemingly fruitless discussion instead.
Dec 30 2022
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 21:27, Walter Bright wrote:
 
 Pattern matching inserts an explicit runtime check, rather than using 
 the hardware memory protection to do the check.
If the program crashes in case a reference is null, there is a bug in the logic of the program. If I fix that bug, the hardware memory protection will no longer be the one that's responsible for the check, just like for pattern matching. It will be explicit in the logic. I care about the correct program. Why would I care whether a crash costs me additional runtime? The check is either necessary or it is not necessary. If it is not necessary, even the hardware check is redundant, otherwise the hardware check does nothing that the compiler should not have caught even earlier.
Dec 30 2022
prev sibling parent reply Nick Treleaven <nick geany.org> writes:
On Friday, 30 December 2022 at 20:27:43 UTC, Walter Bright wrote:
 Pattern matching inserts an explicit runtime check, rather than 
 using the hardware memory protection to do the check. All you 
 get with pattern matching is (probably) a better error message, 
 and a slower program.
What that buys you is a way to convert a nullable reference to non-nullable, without the possibility of an accidental fatal error. Non-nullable reference types that *need no checks whatsoever*. No chance of accidentally passing a nullable type to a function parameter that is non-nullable. No segfault ever unless you use the clear red flag `unwrap` escape hatch, which is obvious in code review.

Plus sometimes whole program optimization can remove those checks. The compiler could potentially generate another version of a function taking a nullable parameter, one for a non-nullable reference. Any null path is statically removed in that version.
 You still get a fatal error, if the pattern match arm for the 
 null pointer is fatal.
If the arm for null is fatal then the programmer must have deliberately opted-in to a fatal error by typing e.g. `assert(0)`. Not accidentally forgetting to handle null, which is a common mistake.
 You can also get a better error message with a seg fault if you 
 code a trap for that error.

 Isn't it great that the hardware provides runtime null checking 
 for you at zero cost?

 If a seg fault resulted in memory corruption, then I agree with 
 you. But it doesn't, it's at zero cost, your program runs at 
 full speed.
Unfortunately it's not 100% reliable. If you have a null reference to a type with fields whose offset is big enough to take it into a valid memory address, then that hardware check won't catch it. (And anything which is not guaranteed memory-safe is not supposed to be allowed in safe code).
Dec 30 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 1:46 PM, Nick Treleaven wrote:
 What that buys you is a way to convert a nullable reference to non-nullable, 
 without the possibility of an accidental fatal error.
At some point, you did insert a check. For example, if you have an array of file pointers, and want to remove one. Do you compact the array, or just leave a null entry? Leaving a null entry is faster, and if you forget to add a check, the hardware will check it for you.
 Plus sometimes whole program optimization can remove those checks. The
compiler 
 could potentially generate another version of a function taking a nullable 
 parameter, one for a non-nullable reference. Any null path is statically
removed 
 in that version.
That's true. And for a null pointer, the null path is statically not there, even though the checking remains for free!
 You still get a fatal error, if the pattern match arm for the null pointer is 
 fatal.
If the arm for null is fatal then the programmer must have deliberately opted-in to a fatal error by typing e.g. `assert(0)`. Not accidentally forgetting to handle null, which is a common mistake.
Yes, that's exactly what I was talking about. You've substituted a free hardware check for a costly check.
 Unfortunately it's not 100% reliable. If you have a null reference to a type 
 with fields whose offset is big enough to take it into a valid memory address, 
 then that hardware check won't catch it.
Yes, you are correct. I think D has a limit on the size of a struct object for that reason.

----

But hey, this discussion is ultimately pointless. D will get sumtypes and pattern matching at some point. You can write code in your preferred style - no problem!
Dec 30 2022
next sibling parent Adam D Ruppe <destructionator gmail.com> writes:
On Saturday, 31 December 2022 at 01:45:06 UTC, Walter Bright 
wrote:
 But hey, this discussion is ultimately pointless. D will get 
 sumtypes
Object and int* and friends are already sumtypes. They are sumtype { typeof(null), actual Object/int*/whatever } But the compiler doesn't really realize this.
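A sketch of making that sum explicit with today's library sum type (std.sumtype, using a hypothetical `None` marker struct in place of the implicit null case):

    import std.sumtype;

    struct None {}                              // explicit "no object" case
    alias MaybeObject = SumType!(None, Object);

    size_t describe(MaybeObject m)
    {
        // every case must be handled; forgetting None is a compile error
        return m.match!(
            (None _)   => size_t(0),
            (Object o) => size_t(1)
        );
    }

    void main()
    {
        MaybeObject m;                          // defaults to None, not a hidden null
        assert(describe(m) == 0);
    }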
Dec 30 2022
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 02:45, Walter Bright wrote:
 If the arm for null is fatal then the programmer must have 
 deliberately opted-in to a fatal error by typing e.g. `assert(0)`. Not 
 accidentally forgetting to handle null, which is a common mistake.
Yes, that's exactly what I was talking about. You've substituted a free hardware check for a costly check. ...
Well, he said "e.g.". It's perfectly reasonable to design the language such that people can opt into the free hardware check. Ultimately those are implementation details.
 ...
 ----
 
 But hey, this discussion is ultimately pointless. D will get sumtypes 
 and pattern matching at some point. You can write code in your preferred 
 style - no problem!
So far my understanding is that you are aiming for:
- keep references/pointers with implicit null values
- add references/pointers with explicit null values

What's still potentially missing:
- references/pointers without any null values (fundamentally at odds with T.init, this is why I brought this up in the first place)
- static checking
Dec 30 2022
parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Saturday, 31 December 2022 at 02:15:43 UTC, Timon Gehr wrote:
 What's still potentially missing:
 - references/pointers without any null values (fundamentally at 
 odds with T.init, this is why I brought this up in the first 
 place)
 - static checking
Non-null pointers would be a major step forward. Don't know how they will fit in with the rest, but I would use them all the time. As for null, note that in WASM it is a valid address.
Dec 30 2022
prev sibling parent norm <norm.rowtree gamil.com> writes:
On Saturday, 31 December 2022 at 01:45:06 UTC, Walter Bright 
wrote:
 ... Leaving a null entry is faster, and if you forget to add a 
 check, the hardware will check it for you.
This won't be deemed acceptable by most devs I know outside the small batch-utility space. Trapping a seg-fault does not work: you lose context and cannot gracefully continue operating. A NULL object existing at runtime is not a fatal error and should be recoverable. Dereferencing a NULL pointer and triggering a seg-fault, even with a handler in place, is a fatal error and unrecoverable.

Is the objection really the performance hit of checking for NULL? Then C++ it, and only pay for what you use: if devs don't want the runtime cost of a software null check in their code, they don't use the Nullable type. Call that type whatever you want; in Python it is Optional. But having the compiler statically enforce a null check is incredibly useful in my experience, because more often than not you do not want to crash.

On that note, mypy statically checks for a None check prior to accessing any Optional type. I bring this up because Python is inherently unsafe and typing is not high on the Python dev radar, but even Python devs consider this a useful feature.

bye,
norm
Dec 31 2022
prev sibling next sibling parent reply cc <cc nevernet.com> writes:
On Friday, 30 December 2022 at 02:03:39 UTC, Walter Bright wrote:
 NaNs are another excellent tool. They enable, for example, 
 dealing with a data set that may have unknown values in it from 
 bad sensors. Replacing that missing data with "0.0" is a very 
 bad idea.
How many D programmers acquire data from sensors that require such default language-integrated fault detection? Versus how many D programmers would be well benefited from having floating point types treated like other numeric types, and sensibly default initialize to a usable 0 value? Why is one group determined to be the one that needs its use case catered to, and not the other?
Dec 30 2022
next sibling parent Dom Disc <dominikus scherkl.de> writes:
On Saturday, 31 December 2022 at 06:04:25 UTC, cc wrote:
 On Friday, 30 December 2022 at 02:03:39 UTC, Walter Bright 
 wrote:
 NaNs are another excellent tool. They enable, for example, 
 dealing with a data set that may have unknown values in it 
 from bad sensors. Replacing that missing data with "0.0" is a 
 very bad idea.
How many D programmers acquire data from sensors that require such default language-integrated fault detection?
E.g. we do.
 Versus how many D programmers would be well benefited from 
 having floating point types treated like other numeric types, 
 and sensibly default initialize to a usable 0 value?
Well, all other types should have a NaN value too. But D allows you to create such types, so it's not such a big problem.
 Why is one group determined to be the one that needs its use 
 case catered to, and not the other?
I suspect that's C legacy. :-/
Dec 31 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 10:04 PM, cc wrote:
 How many D programmers acquire data from sensors that require such default 
 language-integrated fault detection?
Any that use D for data analysis are going to have to deal with missing data points.
 Versus how many D programmers would be well benefited from having floating
point 
 types treated like other numeric types, and sensibly default initialize to a 
 usable 0 value?
Having the program run is one thing, having it produce correct results is quite another. The bias here is to not guess at what the programmer intended, thereby producing results that look good but are wrong. The bias is to let the programmer know that the results are wrong, and that he needs to select the correct initial value, rather than guess at it.
 Why is one group determined to be the one that needs its use case catered to, 
 and not the other?
Think of it like implicit declaration of variables. Users don't need to be bothered with declaring variables before use, the language will do it for you. This is a great idea that surfaces time and again. Unfortunately, the language designers eventually figure out this produces more problems than it solves, and wind up having to painfully break existing code by requiring declarations.
Dec 31 2022
prev sibling parent FeepingCreature <feepingcreature gmail.com> writes:
On Friday, 30 December 2022 at 02:03:39 UTC, Walter Bright wrote:
 On 12/29/2022 12:45 PM, Adam D Ruppe wrote:
 The alternative is the language could have prevent this state 
 from being unanticipated at all, e.g. nullable vs not null 
 types.
It can't really prevent it. What happens is people assign a value, any value, just to get it to compile. I've seen it enough to not encourage that practice. If there are no null pointers, what happens to designate a leaf node in a tree? An equivalent "null" object is invented. Nothing is really gained.
This all hangs together if you have a sufficient type system. What you'd do in Neat (with non-nullable objects) is, for instance: you'd start with a non-nullable type by default, say `Class class`; you'd get an error that "null is not convertible to Class" or such; and you'd either declare the variable/field as `nullable Class` (which doesn't implconv to Class), if it's a variable declare it as `mut uninitialized Class class` (which is easy to grep for), or if it's in a data structure add a leaf case as `(Class | :none)` - but then you're obligated to handle `:none` at every use site; the compiler won't let you get to `Class` otherwise. That can be as simple as `.case(:none: die)`, which tbf is not *much* better than a segfault, but at least the termination happens in the right location, rather than some random time later on an access attempt.

The point is that you don't get automatically opted into crashes by the language, but have to choose them explicitly. You can get null crashes by telling the compiler to crash in case of null, but you can't get null crashes by *forgetting about null.*
Jan 10 2023
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/29/22 21:38, Walter Bright wrote:
 On 12/28/2022 1:33 AM, Timon Gehr wrote:
 On 12/27/22 23:53, Walter Bright wrote:
 This is not a failure, it's a positive feature.
To some extent. Aspects of this have been lovingly nicknamed the "billion dollar mistake".
I don't agree with that assessment at all. Having a seg fault when your program enters an unanticipated, invalid state
The bad thing is allowing programs to enter unanticipated, invalid states in the first place...
 is a *good* thing. The 
 *actual* billion dollar mistake(s) in C are:
 
 1. uninitialized data leading to undefined behavior
 
 2. no way to do array buffer overflow detection
 
 because those lead to malware and other silent disasters.
 ...
Not all disasters are silent. Maybe you are biased because you only write batch programs that are intended to implement a very precise spec.
Dec 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2022 12:46 PM, Timon Gehr wrote:
 The bad thing is allowing programs to enter unanticipated, invalid states in
the 
 first place...
We both agree on that one. But invalid states happen in the real world.
 Not all disasters are silent. Maybe you are biased because you only write
batch 
 programs that are intended to implement a very precise spec.
I'm biased from my experience designing aircraft systems. You never, ever want an avionics program to proceed if it has entered an invalid state. It must fail instantly, fail hard, and allow the backup to take over. The idea that a program should soldier on once it is in an invalid state is very bad system design.

Perfect programs cannot be made. The solution is to not pretend that the program is perfect, but to be able to tolerate its failure by shutting it down and engaging the backup.

I think Chrome was the first browser to do this. It's an amalgamation of independent processes. The processes do not share memory, they communicate with interprocess protocols. If one process fails, its failure is isolated, it is aborted, and a replacement is spun up.

The hubris of "can't be allowed to fail" software is what allowed hackers to manipulate a car's engine and brakes remotely by hacking in via the keyless door lock. (Saw this on an episode of "60 Minutes".)
Dec 29 2022
next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 30 December 2022 at 02:17:58 UTC, Walter Bright wrote:
 The hubris of "can't be allowed to fail" software is what 
 allowed hackers to manipulate a car's engine and brakes 
 remotely by hacking in via the keyless door lock. (Saw this on 
 an episode of "60 Minutes".)
I'd blame that more on the hubris of adding computers to a physical system that could be simple.

I don't understand why it's such a rare opinion to think about software as fail safe or fail dangerous depending on context; most software that exists should be fail safe, where every attempt is made to keep it going. Airplanes, NASA, and maybe even hard drive drivers: write and triple-check every line of code, turn on every safety check, and have meetings about each and every type; fine. Code I realistically will write? Nah.
Dec 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2022 7:04 PM, monkyyy wrote:
 I dont understand why its such a rare opinion to think about software as fail 
 safe or fail dangerous depending on context; most software that exists should
be 
 fail safe, where every attempt is made to make it to keep going.
Please reconsider your "every attempt" statement. It's a surefire way to disaster.
 Airplanes, nasa 
 and maybe even hard drive drivers; write triple check every line of code, turn 
 on every safety check and have meetings about each and every type; fine.
Sorry, but again, that is attempting to write perfect software. It is *impossible* to do. Humans aren't capable of doing it, and from what I read about the space shuttle software, it is terrifyingly expensive to do all that checking, and so it does not scale.

The right way is not to imagine one can write perfect software. It is to have a plan for what to do *when* the software fails. Because it *will* fail.

For example, a friend of mine years ago told me he was using a password manager for his hundreds of passwords to keep them safe. I told him that the PWM was a single point of failure, and when it failed it would compromise all of his passwords. He dismissed the idea, saying he trusted the password manager company.

Fast forward to today. LastPass, which is what he was relying on, failed. Now all his hundreds of passwords are compromised.
Dec 30 2022
next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 30 December 2022 at 20:38:52 UTC, Walter Bright wrote:
 On 12/29/2022 7:04 PM, monkyyy wrote:
 [...]
Please reconsider your "every attempt" statement. It's a surefire way to disaster.
 [...]
Sorry, but again, that is attempting to write perfect software. It is *impossible* to do. Humans aren't capable of doing it, and from what I read about the space shuttle software is it is terrifyingly expensive to do all that checking and so it does not scale. The right way is not to imagine one can write perfect software. It is to have a plan for what to do *when* the software fails. Because it *will* fail. For example, a friend of mine years ago told me he was using a password manager for his hundreds of passwords to keep them safe. I told him it that the PWM is a single point of failure, and when it failed it would compromise all of his passwords. He dismissed the idea, saying he trusted the password manager company. Fast forward to today. LastPass, which is what he was relying on, failed. Now all his hundreds of passwords are compromised.
Yes, no matter how correct the software is, no matter how perfectly memory safe the programming language is, it all comes back to 'the unanticipated interactions' - which cannot be avoided if you live in our universe. The fate of software is never just in the hands of the programmer.
Dec 30 2022
prev sibling next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 30 December 2022 at 20:38:52 UTC, Walter Bright wrote:
 Fast forward to today. LastPass, which is what he was relying 
 on, failed. Now all his hundreds of passwords are compromised.
After a decade as a systems engineer, I changed careers in 2014 after realising how foolish it was for all these companies (and gov agencies) to want to take all their systems and data and put them into the 'cloud'. I knew what was coming, didn't want to be the person they end up blaming, and decided I wanted no part of it.

Now these companies (and gov agencies) (and many more to come) pay the price for their foolishness.. and so will their customers. And this will continue, indefinitely.

The only system that cannot be hacked is the system that is turned off.

Why would anyone store their passwords in the cloud? What were they thinking!?!?
Dec 30 2022
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 30, 2022 at 11:47:32PM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
 On Friday, 30 December 2022 at 20:38:52 UTC, Walter Bright wrote:
 
 Fast forward to today. LastPass, which is what he was relying on,
 failed. Now all his hundreds of passwords are compromised.
After a decade as systems engineer, I changed career in 2014 after realising how foolish it was for all these companies (and gov agencies) wanting to take all their systems and data, and put them into the 'cloud'. I knew what was coming, didn't want to be the person they end up blaming, and decided I wanted no part of it.
Finally, someone else that sees through the king's invisible clothes! In the old days you just had to somehow include the word "Java" somewhere in your product and everyone will flock to buy it -- regardless of whether it actually solved anything. These days, "Java" has been replaced by "cloud".
 Now, these companies (and gov agencies) (and many more too come) pay
 the price for their foolishness.. and so will their customers. And
 this will continue, indefinately.
 
 The only system that cannot be hacked, is the system that is turned
 off.
The only safe way to use the internet is not to use it. :-D
 Why would anyone would store their passwords in the cloud? What were
 they thinking!?!?
They weren't thinking. :-D

T

--
May you live all the days of your life. -- Jonathan Swift
Dec 30 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 4:24 PM, H. S. Teoh wrote:
 Finally, someone else that sees through the king's invisible clothes!
 In the old days you just had to somehow include the word "Java"
 somewhere in your product and everyone will flock to buy it --
 regardless of whether it actually solved anything.  These days, "Java"
 has been replaced by "cloud".
You guys should be pleased I didn't add "blockchain" to the list of D's features. But wait a minute - maybe I should?
Dec 30 2022
next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 02:50, Walter Bright wrote:
 On 12/30/2022 4:24 PM, H. S. Teoh wrote:
 Finally, someone else that sees through the king's invisible clothes!
 In the old days you just had to somehow include the word "Java"
 somewhere in your product and everyone will flock to buy it --
 regardless of whether it actually solved anything.  These days, "Java"
 has been replaced by "cloud".
You guys should be pleased I didn't add "blockchain" to the list of D's features. But wait a minute - maybe I should?
It's called `pure`, but D falls a bit short of the ideal. Nondeterministic semantics of `pure` functions are fatal for blockchain applications.
Dec 30 2022
prev sibling parent max haughton <maxhaton gmail.com> writes:
On Saturday, 31 December 2022 at 01:50:34 UTC, Walter Bright 
wrote:
 On 12/30/2022 4:24 PM, H. S. Teoh wrote:
 Finally, someone else that sees through the king's invisible 
 clothes!
 In the old days you just had to somehow include the word "Java"
 somewhere in your product and everyone will flock to buy it --
 regardless of whether it actually solved anything.  These 
 days, "Java"
 has been replaced by "cloud".
You guys should be pleased I didn't add "blockchain" to the list of D's features. But wait a minute - maybe I should?
https://github.com/zhuowei/nft_ptr This but for the GC
Dec 30 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 3:47 PM, areYouSureAboutThat wrote:
 Why would anyone would store their passwords in the cloud? What were they 
 thinking!?!?
Google is always popping up offering to store my passwords for me. Not a chance of that happening.
Dec 30 2022
prev sibling next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 30 December 2022 at 20:38:52 UTC, Walter Bright wrote:
 On 12/29/2022 7:04 PM, monkyyy wrote:
 I dont understand why its such a rare opinion to think about 
 software as fail safe or fail dangerous depending on context; 
 most software that exists should be fail safe, where every 
 attempt is made to make it to keep going.

 Airplanes, nasa and maybe even hard drive drivers; write 
 triple check every line of code, turn on every safety check 
 and have meetings about each and every type; fine.
Sorry, but again, that is attempting to write perfect software. It is *impossible* to do. Humans aren't capable of doing it,
I am discussing failure modes; "how should doors fail?"

A Walmart sliding door should be "fail safe" and attempt to open if it's confused about the situation, like if someone pulls a fire alarm. A nuclear launch code safe should be "fail dangerous", and attempt to explode if someone is picking it. So it's nonsense to answer "how should a door fail?" without picking a context.

It's all well and good that you made airplane software the way you did, and therefore want floats to init to NaN and nullables to be strict, etc. etc. Airplane software can be fail dangerous so the backup kicks in. When adr is making a video game on stream and defines a vec2 with default-initialized floats, it's a video game; it should be fail safe and init to 0 rather than have him take 10 minutes on stage debugging it.

Different situations can call for different solutions, so why is safety within computer science universally without context?
Dec 31 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2022 10:33 AM, monkyyy wrote:
 When adr is 
 making a video game on stream and defines a vec2 with default initialized 
 floats; it's a video game it should be fail-safe and init to 0 rather than
have 
 him take 10 minutes on stage debugging it. Different situations can call for 
 different solutions, why is safety within computer science universally without 
 context?
You're right that a video game need not care about correctness or corruption. I remember the Simpsons video game where there was a sign error (I infer) in the rendering that caused one to see the back side of the renderings. It was quite funny. Had to reboot it to get it working again.

Unfortunately, at some point the language has to make decisions. The tradeoff made was to value correctness more than convenience. After all, professionals using it for financial programs, engineering programs, scientific data analysis, etc., need correctness. Any program that manages a device that people rely on needs correctness.
Jan 01 2023
parent reply cc <cc nevernet.com> writes:
On Sunday, 1 January 2023 at 20:04:13 UTC, Walter Bright wrote:
 On 12/31/2022 10:33 AM, monkyyy wrote:
 When adr is making a video game on stream and defines a vec2 
 with default initialized floats; it's a video game it should 
 be fail-safe and init to 0 rather than have him take 10 
 minutes on stage debugging it. Different situations can call 
 for different solutions, why is safety within computer science 
 universally without context?
You're right that a video game need not care about correctness or corruption.
I don't think that's a very apt take *at all*.  Frankly it's insulting.  You do realize video games are a *business*, right? They absolutely care about correctness and corruption. On this specific issue, it so happens that developers also tend to find it very useful and common for numeric types to initialize to zero (when they are initialized at all).  Which is why they find it very *surprising* and *confusing* when they suddenly don't.  This should not be interpreted to mean that their industry is lazy and "doesn't care" about the financial viability of releasing sound code.  "If they really cared, they'd just initialize everything like they're supposed to" is not a well-aimed criticism coming from D (and the same argument applies to aviation; should they not initialize everything to NaN, just to be safe? Or do they not care?).

C++ requires initialization, because depending on compiler there's a good chance your program will just catastrophically explode otherwise.  Many other heavy lifting languages in the marketplace have adopted default/value initialization.  D, for some reason, has decided to free the programmer from worry by adopting usable default initialization, and then turn around and give you a value *you can't use*, for certain arbitrary extremely common primitive types, but not all variables, in order to satisfy what is, in some estimations, a minority use case, thus making things more difficult for the majority who might expect consistency, both internal and external.  D *almost* solves the first problem, then creates a new one.

Yes, I realize the NaN thing is an old dead horse and isn't going to change.  I had not intended to make any posts complaining about "D leadership" as I've at times witnessed in this thread and others, as I ordinarily have no direct problems with, engagement with, or influence over it.  But it's very troubling to suddenly see a mentality of "ah, who cares about those use cases? They don't care about writing real programs, they're not flying airplanes!" coming from the top.

Developers are not just hobbyists.  There are careers, employees, and families in the mix.  $180 billion dollar industry.  Shigeru Miyamoto was not harmed by the fact Mario can clip through otherwise solid blocks due to mathematical insufficiencies in his 1985 video game.  These days, a buggy mess means jobs are lost.  Does D only ever see itself as a "hobby language" in this field then?  Should that be how we treat it?  I believe it has the potential to be much more, enough to the point I was willing to stake at least a portion of my livelihood on it.  I took a chance--and still am--because I believed the good parts of D are **so good**, *even in its current state*, it's worth the potential risks to grow alongside it, and so far it has been.  Not every studio or developer can be in the position to incur this cost of what is in some ways a show of faith.  I don't want to see those risks expand and those years lead to an eventual dead end because the designers don't consider my career a respectable use of time or the language.
Jan 05 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2023 2:25 PM, cc wrote:
 On Sunday, 1 January 2023 at 20:04:13 UTC, Walter Bright wrote:
 On 12/31/2022 10:33 AM, monkyyy wrote:
 When adr is making a video game on stream and defines a vec2 with default 
 initialized floats; it's a video game it should be fail-safe and init to 0 
 rather than have him take 10 minutes on stage debugging it. Different 
 situations can call for different solutions, why is safety within computer 
 science universally without context?
You're right that a video game need not care about correctness or corruption.
I don't think that's a very apt take *at all*.  Frankly it's insulting.  You do realize video games are a *business*, right? They absolutely care about correctness and corruption.
Sorry I made it sound that way. Nobody is going to die if the display is a bit off. And the reason video game developers asked for D to support half-floats is because of speed, not accuracy. (It's in the Sargon library now.)

John Carmack famously used the fast inverse square root algorithm that was faster, but less accurate, than the usual method.

https://en.wikipedia.org/wiki/Fast_inverse_square_root

That said, I believe you when you say you care about this. I believe you want very much to have your programs be correct. Which is great! I'm certainly not going to try and talk you out of that. I posit that NaN initialization is a good path to get there. It's the whole reason for it. I'm not even sure why we're debating it!
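For reference, the trick Carmack popularized looks roughly like this when transcribed into D. This is a sketch of the classic bit-twiddling version, not the exact Quake III source:

    import std.math : sqrt;
    import std.stdio : writefln;

    // The classic bit-twiddling estimate plus one Newton-Raphson refinement.
    float fastInvSqrt(float x)
    {
        int i = *cast(int*) &x;            // reinterpret the float's bits as an int
        i = 0x5f3759df - (i >> 1);         // magic-constant initial guess
        float y = *cast(float*) &i;        // back to a float
        y = y * (1.5f - 0.5f * x * y * y); // one Newton step; roughly 0.2% worst-case error
        return y;
    }

    void main()
    {
        float x = 2.0f;
        writefln("approx 1/sqrt(2): %s   exact: %s", fastInvSqrt(x), 1.0f / sqrt(x));
    }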
 On this specific issue, it so happens that developers also tend to find it
 very useful and common for numeric types to initialize to zero (when they
 are initialized at all). Which is why they find it very *surprising* and
 *confusing* when they suddenly don't. This should not be interpreted to mean
 that their industry is lazy and "doesn't care" about the financial viability
 of releasing sound code.
1.0 is also a popular initial value. The compiler is never going to reliably guess it right. Suppose 1.3 was what it was supposed to be. Does initialization to 0.0 make the program more or less likely to be correct than if it was initialized with NaN? I think we can agree that the program is wrong under both scenarios. But which program is more *likely* to produce an error in the output that cannot be ignored? I propose the NaN initialization would make the error much more obvious, and so fixable before release.
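A minimal illustration of that argument in D, with a hypothetical scale/area computation; the point is only how the two default values behave:

    import std.stdio : writeln;
    import std.math : isNaN;

    void main()
    {
        float scale;                        // oops: never assigned; D defaults it to float.nan
        float area = scale * 10.0f * 4.0f;  // the NaN propagates through the arithmetic

        writeln(area);                      // prints "nan" -- hard to mistake for a real result
        assert(isNaN(area));

        // Had `scale` silently defaulted to 0.0, `area` would have printed a
        // plausible-looking 0 and the bug could have shipped unnoticed.
    }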
Jan 07 2023
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 1/8/23 06:51, Walter Bright wrote:
 I don't think that's a very apt take *at all*.  Frankly it's 
 insulting.  You do realize video games are a *business*, right? They 
 absolutely care about correctness and corruption.
Sorry I made it sound that way. Nobody is going to die if the display is a bit off. And the reason video game developers asked for D to support half-floats is because of speed, not accuracy. (It's in the Sargon library now.) John Carmack famously used the fast inverse square root algorithm that was faster, but less accurate, than the usual method. https://en.wikipedia.org/wiki/Fast_inverse_square_root
It's not "less accurate". John Carmack decides what the rules of the game are. This is not comparable at all to the game crashing with a segmentation fault in the middle of an online multiplayer session...
Jan 14 2023
prev sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On Friday, 30 December 2022 at 20:38:52 UTC, Walter Bright wrote:
 Fast forward to today. LastPass, which is what he was relying 
 on, failed. Now all his hundreds of passwords are compromised.
Nope. That's not how LastPass (and password managers in general) work. -Steve
Jan 08 2023
parent reply RTM <riven baryonides.ru> writes:
On Sunday, 8 January 2023 at 21:53:32 UTC, Steven Schveighoffer 
wrote:
 Nope. That's not how LastPass (and password managers in 
 general) work.
https://en.m.wikipedia.org/wiki/LastPass#2022_security_incidents It’s serious.
Jan 08 2023
parent reply max haughton <maxhaton gmail.com> writes:
On Monday, 9 January 2023 at 00:18:50 UTC, RTM wrote:
 On Sunday, 8 January 2023 at 21:53:32 UTC, Steven Schveighoffer 
 wrote:
 Nope. That's not how LastPass (and password managers in 
 general) work.
https://en.m.wikipedia.org/wiki/LastPass#2022_security_incidents It’s serious.
Serious yes, but look at the data that actually leaked, it's not the keys to the safe I think
Jan 08 2023
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On Monday, 9 January 2023 at 00:43:08 UTC, max haughton wrote:
 On Monday, 9 January 2023 at 00:18:50 UTC, RTM wrote:
 On Sunday, 8 January 2023 at 21:53:32 UTC, Steven 
 Schveighoffer wrote:
 Nope. That's not how LastPass (and password managers in 
 general) work.
https://en.m.wikipedia.org/wiki/LastPass#2022_security_incidents It’s serious.
Serious yes, but look at the data that actually leaked, it's not the keys to the safe I think
Yes, it's no different from any other data breach of any other company -- email addresses, billing information, etc.

Note that LastPass and others do not even have the keys to the safe to be stolen in the first place -- they never store your master password. The "100s of passwords" are not compromised (that is, unless they use "password123!" as their master password).

LastPass runs the master password through 100,100 rounds of key derivation, which means each guess takes a long time to test to see if it's right. Brute force will take millions of years.

Everyone today should use a password manager, whether it's cloud based or not. And the *most important rule* is to not use a previous password as your master password.

-Steve
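To make the "each guess takes a long time" point concrete, here is a toy sketch of key stretching in D. It is plain iterated hashing with a made-up salt and the 100,100 figure plugged in, not LastPass's actual PBKDF2 construction:

    import std.digest.sha : sha256Of;
    import std.stdio : writefln;

    // Toy key stretching: hash the (salt + master password) repeatedly so that
    // every brute-force guess has to pay for all the rounds. Real managers use
    // PBKDF2/Argon2 with a random per-user salt; this only illustrates the cost.
    ubyte[32] stretch(string masterPassword, string salt, uint rounds)
    {
        ubyte[32] key = sha256Of(salt ~ masterPassword);
        foreach (_; 0 .. rounds)
            key = sha256Of(key[]);
        return key;
    }

    void main()
    {
        auto key = stretch("correct horse battery staple", "per-user-salt", 100_100);
        writefln("%(%02x%)", key[]);
    }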
Jan 08 2023
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/8/2023 5:44 PM, Steven Schveighoffer wrote:
 Everyone today should use a password manager, whether it's cloud based or not.
Yes, because password managers are perfect software, unlike every other piece of software on the planet. I heard today that Pegasus can read Whatsapp encrypted communications. If Pegasus can do it, anybody can.
 And the *most important rule* is to not use a previous password as your master 
 password.
A master password is a single point of failure.
Jan 08 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Monday, 9 January 2023 at 03:02:31 UTC, Walter Bright wrote:
 On 1/8/2023 5:44 PM, Steven Schveighoffer wrote:
 Everyone today should use a password manager, whether it's 
 cloud based or not.
Yes, because password managers are perfect software, unlike every other piece of software on the planet. I heard today that Pegasus can read Whatsapp encrypted communications. If Pegasus can do it, anybody can.
 And the *most important rule* is to not use a previous 
 password as your master password.
A master password is a single point of failure.
So is an airplane (despite the internal redundancies, the whole system can fail, e.g., the 737 rudder actuator failures), and yet we fly. That something is a single point of failure is, considered alone, not an argument against its use. The decision to use or not should be based on a weighing of the benefits vs the risk/cost (probability of failure and its cost).

As for LastPass, I was a user, with a long-enough random password drawn from a large enough character set resulting in > 10^15 possibilities. A key that hard to find by brute force gets the risk low enough for me so I can enjoy the benefit of having access to my passwords from all my devices and share them with my wife and vice-versa. What's the alternative? An encrypted spreadsheet? Unworkable.

I will say, though, that I have cancelled my LastPass subscription and migrated to 1Password, because I think the way LastPass handled this was dishonest.
Jan 09 2023
next sibling parent reply jmh530 <john.michael.hall gmail.com> writes:
On Monday, 9 January 2023 at 15:12:41 UTC, Don Allen wrote:
 [snip]

 I will say, though, that I have cancelled my LastPass 
 subscription and migrated to 1Password, because I think the way 
 LastPass handled this was dishonest.
Sorry if this is off topic, but how was the migration? Any difficulties?
Jan 09 2023
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On Monday, 9 January 2023 at 15:23:56 UTC, jmh530 wrote:
 Sorry if this is off topic, but how was the migration? Any 
 difficulties?
I migrated to 1Password, and it had some rough spots (this was a couple years ago), most things were fine. Most of the strife is differences in how the two services store the notes. Like some notes went into the wrong buckets or the fields didn't match up. -Steve
Jan 09 2023
prev sibling next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Monday, 9 January 2023 at 15:23:56 UTC, jmh530 wrote:
 On Monday, 9 January 2023 at 15:12:41 UTC, Don Allen wrote:
 [snip]

 I will say, though, that I have cancelled my LastPass 
 subscription and migrated to 1Password, because I think the 
 way LastPass handled this was dishonest.
Sorry if this is off topic, but how was the migration? Any difficulties?
I'm only about a week into this, so not a lot of experience, but 1Password has worked well so far. A minor inconvenience is that the 1Password browser extension wants the master password the first time you use it after a browser restart. My password is 15 random characters, so impossible to remember. I've dealt with this by putting the password in a file on a USB key and writing a little script to mount the key (using doas so I don't have to do this as root), print the password, and umount the key (doas again). I then copy-paste the password to make the extension happy. Not a big deal.
Jan 09 2023
next sibling parent jmh530 <john.michael.hall gmail.com> writes:
On Monday, 9 January 2023 at 17:30:32 UTC, Don Allen wrote:
 [snip]

 I'm only about a week into this, so not a lot of experience, 
 but 1Password has worked well so far. A minor inconvenience is 
 that the 1Password browser extension wants the master password 
 the first time you use it after a browser restart. My password 
 is 15 random characters, so impossible to remember. I've dealt 
 with this by putting the password in a file in a USB key and 
 have a little script to mount the key (using doas so I don't 
 have to do this as root), print the password and umount the key 
 (doas again). I then copy-paste the password to make the 
 extension happy. Not a big deal.
Thanks. The USB key approach is difficult if you use a work computer that locks down the ability to use a USB driver at all.
Jan 09 2023
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On Monday, 9 January 2023 at 17:30:32 UTC, Don Allen wrote:
 I'm only about a week into this, so not a lot of experience, 
 but 1Password has worked well so far. A minor inconvenience is 
 that the 1Password browser extension wants the master password 
 the first time you use it after a browser restart. My password 
 is 15 random characters, so impossible to remember. I've dealt 
 with this by putting the password in a file in a USB key and 
 have a little script to mount the key (using doas so I don't 
 have to do this as root), print the password and umount the key 
 (doas again). I then copy-paste the password to make the 
 extension happy. Not a big deal.
A word of warning, 1Password requires you to enter the master password every 2 weeks. So you may want to find a better way to do it, or try to memorize that password. I personally have it set to require the MP whenever the screen is locked and reopened. I have the Mac desktop app installed, and you log in once, and it works for all browsers at that point (restarting browser doesn't require a reentry). -Steve
Jan 09 2023
prev sibling parent bachmeier <no spam.net> writes:
On Monday, 9 January 2023 at 15:23:56 UTC, jmh530 wrote:
 On Monday, 9 January 2023 at 15:12:41 UTC, Don Allen wrote:
 [snip]

 I will say, though, that I have cancelled my LastPass 
 subscription and migrated to 1Password, because I think the 
 way LastPass handled this was dishonest.
Sorry if this is off topic, but how was the migration? Any difficulties?
I moved from Lastpass to Bitwarden long before this incident (maybe a year ago). Moving all my passwords and notes took a minute or two. I exported everything to a csv file and Bitwarden imported it all correctly.
Jan 09 2023
prev sibling next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Monday, 9 January 2023 at 15:12:41 UTC, Don Allen wrote:
 I will say, though, that I have cancelled my LastPass 
 subscription and migrated to 1Password, because I think the way 
 LastPass handled this was dishonest.
just watch out for phishing attacks, now that someone out there has all that customer data. Won't be long till they have 1password customer data too ;-)
Jan 09 2023
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/9/2023 7:12 AM, Don Allen wrote:
 So is an airplane (despite the internal redundancies, the whole system can
 fail, e.g., the 737 rudder actuator failures), and yet we fly. That something
 is a single point of failure is, considered alone, not an argument against
 its use. The decision to use or not should be based on a weighing of the
 benefits vs the risk/cost (probability of failure and its cost).
The rudder failure was a very baffling problem, and it wasn't even clear it *was* a rudder failure for years.
 As for LastPass, I was a user, with a long-enough random password drawn from
 a large enough character set resulting in > 10^15 possibilities. A key that
 hard to find by brute force gets the risk low enough for me so I can enjoy
 the benefit of having access to my passwords from all my devices and share
 them with my wife and vice-versa. What's the alternative? An encrypted
 spreadsheet? Unworkable.
A strong password isn't good enough. There are other ways in. A key logger may record your password.
 I will say, though, that I have cancelled my LastPass subscription and
 migrated to 1Password, because I think the way LastPass handled this was
 dishonest.
Just be aware you've got a single point of failure for *all* your passwords.
Jan 09 2023
next sibling parent reply bachmeier <no spam.net> writes:
On Monday, 9 January 2023 at 21:38:34 UTC, Walter Bright wrote:

 I will say, though, that I have cancelled my LastPass 
 subscription and migrated to 1Password, because I think the 
 way LastPass handled this was dishonest.
Just be aware you've got a single point of failure for *all* your passwords.
Your computer is always going to be a single point of failure for passwords if you're not using 2FA, independent of the password manager issue. Weak/recycled passwords are far worse than anything that can happen when you put your passwords in the cloud.

The bigger advantage of a password manager is that it provides a convenient way to generate secure passwords - even if you ultimately choose to write them on a piece of paper. But you don't need to put them in the cloud or on paper. KeepassXC works just fine on your local machine.
Jan 09 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/9/2023 3:19 PM, bachmeier wrote:
 Your computer is always going to be a single point of failure for passwords
 if you're not using 2FA, independent of the password manager issue.
 Weak/recycled passwords are far worse than anything that can happen when you
 put your passwords in the cloud.
I'm aware of that. But I don't lose *all* my passwords due to one failure.
 The bigger advantage of a password manager is that it provides a convenient
 way to generate secure passwords - even if you ultimately choose to write
 them on a piece of paper.
Assuming the password generator algorithm isn't itself cracked.
 But you don't need to put them in the cloud or on paper.
Storing passwords in the cloud is one of the riskier ideas. What if your cloud provider disables your account? What if they just "go dark"? How is *losing* all your passwords going to affect you?
 KeepassXC works just fine on your local machine.
Replacing one single point of failure with another single point of failure is not progress.
Jan 09 2023
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Tuesday, 10 January 2023 at 03:49:17 UTC, Walter Bright wrote:
 What if your cloud provider disables your account? What if they 
 just "go dark"?
p sure you'd still have the local copy, similarly to how github works. but i don't use these things either; im all about the sticky notes on the monitor
Jan 10 2023
parent jmh530 <john.michael.hall gmail.com> writes:
On Tuesday, 10 January 2023 at 12:57:51 UTC, Adam D Ruppe wrote:
 On Tuesday, 10 January 2023 at 03:49:17 UTC, Walter Bright 
 wrote:
 What if your cloud provider disables your account? What if 
 they just "go dark"?
p sure you'd still have the local copy, similarly to how github works. but i don't use these things either im all about the sticky notes on the monitor
Speaking of single points of failure... ;)
Jan 10 2023
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Monday, 9 January 2023 at 21:38:34 UTC, Walter Bright wrote:
 On 1/9/2023 7:12 AM, Don Allen wrote:
 So is an airplane (despite the internal redundancies, the 
 whole system can fail, e.g., the 737 rudder actuator 
 failures), and yet we fly. That something is a single point of 
 failure is, considered alone, not an argument against its use. 
 The decision to use or not should be based on a weighing of 
 the benefits vs the risk/cost (probability of failure and its 
 cost).
The rudder failure was a very baffling problem, and it wasn't even clear it *was* a rudder failure for years.
 As for LastPass, I was a user, with a long-enough random
 password drawn from a large enough character set resulting in
 > 10^15 possibilities. A key that hard to find by brute force
 gets the risk low enough for me so I can enjoy the benefit of
 having access to my passwords from all my devices and share them
 with my wife and vice-versa. What's the alternative? An encrypted
 spreadsheet? Unworkable.
A strong password isn't good enough. There are other ways in. A key logger may record your password.
I'm well aware of key loggers. It's pretty unlikely that a key logger is going to get installed on my FreeBSD or Linux systems, which sit behind a firewall with the sshd port blocked. In addition, I never type my 1Password password. I keep it on a USB key that gets inserted and mounted when I need it, and a script prints the password and umounts the key. I then copy-paste the password.

I'm not looking for zero risk, which is impossible. I'm looking for the most reasonable operating point. Again, cost/risk vs. benefit.
Jan 10 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/10/2023 7:39 PM, Don Allen wrote:
 I'm not looking for zero risk, which is impossible. I'm looking for the most 
 reasonable operating point. Again, cost/risk vs. benefit.
I don't know your situation, but losing all my passwords would be a disaster for me. I've had my checking account compromised, credit cards compromised several times. Multiply that by a hundred.

I've seen sob stories on HackerNews where some victim had his Mac compromised, and the hacker then took over all his accounts, changed the passwords, and started impersonating the victim. Apple won't fix it for you, Google won't fix it for you, Amazon won't fix it for you.

You're borked.

No thanks.
Jan 10 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 04:03:02 UTC, Walter Bright 
wrote:
 On 1/10/2023 7:39 PM, Don Allen wrote:
 I'm not looking for zero risk, which is impossible. I'm 
 looking for the most reasonable operating point. Again, 
 cost/risk vs. benefit.
I don't know your situation, but losing all my passwords would be a disaster for me. I've had my checking account compromised, credit cards compromised several times. Multiply that by a hundred. I've seen sob stories on HackerNews where some victim had his Mac compromised, and the hacker then took over all his accounts, changed the passwords, and started impersonating the victim.
I think it's a pretty safe bet that the "victim" did something dumb. If you use your wife's maiden name as the password of your Google account, don't enable 2FA, and your account gets hacked, are you a victim? I don't think so. Information for how to protect yourself online is everywhere. People ignore it, just as they ignore warnings about smoking.
 Apple won't fix it for you, Google won't fix it for you, Amazon 
 won't fix it for you.

 You're borked.

 No thanks.
Well, you and I just have a different set of weighting factors. Do you carry a cellphone? There are risks, as I'm sure you well know. I have friends at MIT who won't use them who, I'm quite sure, would agree with you about password managers. Use credit cards? See what Richard Stallman has to say about that. Write checks? Risks. I think this is just like getting on an airplane or driving a car. Most of us accept the risks in return for the benefits. But not all.
Jan 10 2023
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/10/2023 8:32 PM, Don Allen wrote:
 Do you carry a cellphone? There are risks, as I'm sure you well know.
Yeah, I do. But I don't build my life around the phone, because I'm aware of what happens when it gets hacked, stolen, lost, etc. Pegasus can remotely read everything you do on your phone. You can catch Pegasus by clicking on a link. A password manager is useless when faced by that.
 I have friends at MIT who won't use them who, I'm quite sure, would agree
 with you about password managers. Use credit cards? See what Richard
 Stallman has to say about that. Write checks? Risks.
I've had my credit cards stolen, and my checking account compromised. Both were a fair amount of work to fix. But it's not *ALL* of my online accounts. I keep things compartmentalized, like how airplanes are designed.

Airplanes are deliberately designed to withstand any single failure and land safely. They can lose an engine, a pilot, a wing spar, or a hydraulic system, and survive a bird strike, a hole in the cabin, jammed actuators, etc. The incredible safety record of airliners shows this works.

I take my cues from that.
Jan 10 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 05:27:05 UTC, Walter Bright 
wrote:
 On 1/10/2023 8:32 PM, Don Allen wrote:
 Do you carry a cellphone? There are risks, as I'm sure you 
 well know.
Yeah, I do. But I don't build my life around the phone, because I'm aware of what happens when it gets hacked, stolen, lost, etc. Pegasus can remotely read everything you do on your phone. You can catch Pegasus by clicking on a link. A password manager is useless when faced by that.
 I have friends at MIT who won't use them who, I'm quite sure, 
 would agree with you about password managers. Use credit 
 cards? See what Richard Stallman has to say about that. Write 
 checks? Risks.
I've had my credit cards stolen, and my checking account compromised. Both were a fair amount of work to fix. But it's not *ALL* of my online accounts. I keep things compartmentalized, like how airplanes are designed. Airplanes are deliberately designed to withstand any single failure and land safely. They can lose an engine, a pilot, a wing spar, a hydraulic system, a bird strike, a hole in the cabin, jammed actuators, etc. The incredible safety record of airliners shows this works.
True. But it doesn't work perfectly, which was my point. The jammed 737 rudder actuator problem killed a lot of people and it took years for the NTSB and Boeing to figure it out. Need I mention the DC-10 cargo door latches? You can have ATC errors, pilot errors, etc. that get people killed. So flying is not a zero-risk proposition, just like everything else we've been talking about, including password managers. Again, every case requires that we do our own personal calculation to decide whether the benefit is worth the risk.
 I take my cues from that.
Jan 11 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/11/2023 5:43 AM, Don Allen wrote:
 True. But it doesn't work perfectly, which was my point. The jammed 737 rudder 
 actuator problem killed a lot of people and it took years for the NTSB and 
 Boeing to figure it out. Need I mention the DC-10 cargo door latches? You can 
 have ATC errors, pilot errors, etc. that get people killed. So flying is not a 
 zero-risk proposition, just like everything else we've been talking about, 
 including password managers.
The difference is, Boeing fixed that single point of failure problem as soon as they could. The rudder now has no known single point of failure problems. The password manager remains a known, unfixed, single point of failure.
Jan 11 2023
next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 23:28:15 UTC, Walter Bright 
wrote:
 On 1/11/2023 5:43 AM, Don Allen wrote:
 True. But it doesn't work perfectly, which was my point. The 
 jammed 737 rudder actuator problem killed a lot of people and 
 it took years for the NTSB and Boeing to figure it out. Need I 
 mention the DC-10 cargo door latches? You can have ATC errors, 
 pilot errors, etc. that get people killed. So flying is not a 
 zero-risk proposition, just like everything else we've been 
 talking about, including password managers.
The difference is, Boeing fixed that single point of failure problem as soon as they could. The rudder now has no known single point of failure problems. The password manager remains a known, unfixed, single point of failure.
There is no difference whatsoever. The airplane (including pilots and ATC) remains a "known, unfixed, single point of failure" when you fly. Yes, the 737 is a safer airplane now that the rudder actuator has been fixed. The 737 Max has that fix, I'm sure. And 346 people died in two Max crashes because of a badly designed software change, a failed angle-of-attack sensor (MCAS used only one of the two angle-of-attack sensors, which was crazy), pilots who weren't told about the software change and how to disable it in case of trouble, etc.

You persist in missing my point. I'll try once more. The use of everything we've talked about -- password managers, cellphones, airplanes, cars, etc. -- carries risks. Those risks may change over time but they are never zero. We either accept those risks in return for the benefits or we don't. That's a personal decision. You won't use a password manager, but you do use a cellphone. I'm guessing that you fly, especially given your interest in aviation. I'm also guessing you drive a car. I'm not trying to argue that I think this is unreasonable, because you have made subjective decisions about risks vs. benefits.

But you keep focusing on your perceived risks of password managers, implying that I'm making a huge mistake by using one -- you haven't said it, but the implication is clear -- losing sight of the fact that this is a personal risk-benefit decision just like your decision to use a cellphone or a car or an airplane. You are also losing sight of the fact that I understand security issues quite well, having dealt with them professionally for many years, and I am satisfied that the measures taken by the manager I am using bring the risk into line -- for me -- with the benefits.
Jan 11 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/11/2023 6:51 PM, Don Allen wrote:
 There is no difference whatsoever. The airplane (including pilots and ATC)
 remains a "known, unfixed, single point of failure" when you fly. Yes, the
 737 is a safer airplane now that the rudder actuator has been fixed. The 737
 Max has that fix, I'm sure. And 346 people died in two Max crashes because of
 a badly designed software change, a failed angle-of-attack sensor (MCAS used
 only one of the two angle-of-attack sensors, which was crazy), pilots who
 weren't told about the software change and how to disable it in case of
 trouble, etc.
Oh, but the pilots were told. The pilots were sent an EAD (Emergency Airworthiness Directive) which explained how to disable it and live. The MCAS was not a single point of failure.
 You persist in missing my point.
We're just talking past each other at this point.
Jan 11 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Thursday, 12 January 2023 at 04:20:27 UTC, Walter Bright wrote:
 On 1/11/2023 6:51 PM, Don Allen wrote:
 There is no difference whatsoever. The airplane (including 
 pilots and ATC) remains a "known, unfixed, single point of 
 failure" when you fly. Yes, the 737 is a safer airplane now 
 that the rudder actuator has been fixed. The 737 Max has that 
 fix, I'm sure. And 346 people died in  Max crashes because of 
 a badly designed software change, a failed angle-of-attack 
 sensor, (MCAS used only one of the two angle-of-attack 
 sensors, which was crazy), pilots who weren't told about the 
 software change and how to disable it in case of trouble, etc.
Oh, but the pilots were told. The pilots were sent an EAD (Emergency Airworthiness Directive) which explained how to disable it and live.
Lion Air Flight 610 crashed on October 29, 2018, killing 189. The Emergency Airworthiness Directive was issued on November 7, 2018.
 The MCAS was not a single point of failure.

 You persist in missing my point.
We're just talking past each other at this point.
That is the only thing on which we agree about this.
Jan 12 2023
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/12/2023 12:14 PM, Don Allen wrote:
 On Thursday, 12 January 2023 at 04:20:27 UTC, Walter Bright wrote:
 Oh, but the pilots were told. The pilots were sent an EAD (Emergency 
 Airworthiness Directive) which explained how to disable it and live.
Lion Air Flight 610 crashed on October 29, 2018, killing 189. The Emergency Airworthiness Directive was issued on November 7, 2018.
That's correct. Months later the Ethiopian Airlines crash happened, as the pilots did not follow the EAD procedure.

What nobody reports on is that the first MCAS incident resulted in the plane completing its flight and landing safely. The pilots simply followed their training and did what the EAD reiterated. Boeing had issued the EAD to emphasize what to do. The instructions are simple:

1. restore normal trim with the electric trim switches (all three crews did this)
2. turn off the stab trim system

That's it. The crew of that last flight turned the stab trim system back on when the airplane was pointed at the ground. Crashed.
Jan 12 2023
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 1/12/23 00:28, Walter Bright wrote:
 On 1/11/2023 5:43 AM, Don Allen wrote:
 True. But it doesn't work perfectly, which was my point. The jammed 
 737 rudder actuator problem killed a lot of people and it took years 
 for the NTSB and Boeing to figure it out. Need I mention the DC-10 
 cargo door latches? You can have ATC errors, pilot errors, etc. that 
 get people killed. So flying is not a zero-risk proposition, just like 
 everything else we've been talking about, including password managers.
The difference is, Boeing fixed that single point of failure problem as soon as they could. The rudder now has no known single point of failure problems. The password manager remains a known, unfixed, single point of failure.
Well, what are you comparing it to? Reusing the same password and/or email for a lot of services probably carries an even higher risk.
Jan 14 2023
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/14/2023 10:01 AM, Timon Gehr wrote:
 On 1/12/23 00:28, Walter Bright wrote:
 On 1/11/2023 5:43 AM, Don Allen wrote:
 The password manager remains a known, unfixed, single point of failure.
Well, what are you comparing it to?
Having a different name and password for each site. Fortunately, many places that bill me allow me to pay via a "guest" payment rather than a login. Many places that require a login just to look at their site get a pass from me.
 Reusing the same password and/or email for a
 lot of services probably carries an even higher risk.
Of course, and I never suggested that.
Jan 14 2023
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 04:32:41 UTC, Don Allen wrote:
 On Wednesday, 11 January 2023 at 04:03:02 UTC, Walter Bright 
 wrote:
 On 1/10/2023 7:39 PM, Don Allen wrote:
 I'm not looking for zero risk, which is impossible. I'm 
 looking for the most reasonable operating point. Again, 
 cost/risk vs. benefit.
I don't know your situation, but losing all my passwords would be a disaster for me. I've had my checking account compromised, credit cards compromised several times. Multiply that by a hundred. I've seen sob stories on HackerNews were some victim had has Mac compromised, and the hacker then took over all his accounts, changed the passwords, and started impersonating the victim.
I think it's a pretty safe bet that the "victim" did something dumb. If you use your wife's maiden name as the password of your Google account, don't enable 2FA, and your account gets hacked, are you a victim? I don't think so. Information for how to protect yourself online is everywhere. People ignore it, just as they ignore warnings about smoking.
 Apple won't fix it for you, Google won't fix it for you, 
 Amazon won't fix it for you.

 You're borked.

 No thanks.
Well, you and I just have a different set of weighting factors. Do you carry a cellphone? There are risks, as I'm sure you well know. I have friends at MIT who won't use them who, I'm quite sure, would agree with you about password managers. Use credit cards? See what Richard Stallman has to say about that. Write checks? Risks. I think this is just like getting on an airplane or driving a car. Most of us accept the risks in return for the benefits. But not all.
I forgot to mention a couple of things about password managers. I won't convince you, but for the benefit of anyone reading this who may be considering their use:

1. Any password manager worth using provides 2FA for the main password. So in the very unlikely event that a hacker got your password (key logger or whatever), they are not going to get past the need for a time-dependent code. 1Password has this, of course, and requires codes generated by a phone-based authenticator.

2. 1Password gives you a long "secret key", which you must produce to set up 1Password on a new device. They provide that key in a .pdf file, which you can store offline, or encrypted (I encrypt sensitive files with AES256 using a 32-character key that is stored offline).

So for someone to get into your 1Password account from a device other than yours, they need to

1. Steal your password
2. Produce the "secret key", which they won't be able to
3. Get past 2FA, which they won't be able to
Jan 11 2023
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/11/2023 5:35 AM, Don Allen wrote:
 1. Steal your password
 2. Produce the "secret key", which they won't be able to
 3. Get past 2FA, which they won't be able to
Those are all good things. But it doesn't help you if you download a trojan version of the manager, or a trojan masquerading as an update. I've also seen several schemes that outmaneuver 2FA.

Allow me to explain the framing. At Boeing, it was never "that part cannot fail". It is always framed as "when that part fails, how do we land safely?"

So, *when* your password manager fails, what are you going to do about it?

I'm not singling you out, I'm trying to make a point. Far too many software developers develop a hubris that they can write software that cannot fail. Unfortunately, usually someone else is going to have to pay for that mistake.
Jan 11 2023
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Jan 11, 2023 at 03:39:50PM -0800, Walter Bright via Digitalmars-d wrote:
 On 1/11/2023 5:35 AM, Don Allen wrote:
 1. Steal your password
 2. Produce the "secret key", which they won't be able to
 3. Get past 2FA, which they won't be able to
Those are all good things. But it doesn't help you if you download a trojan version of the manager, or a trojan masquerading as an update. I've also seen several schemes that outmaneuver 2FA.
Indeed, there are (at least) 11 ways to defeat 2FA: https://www.knowbe4.com/hubfs/KB4-11WaystoDefeat2FA-RogerGrimes.pdf
 Allow me to explain the framing. At Boeing, it was never "that part
 cannot fail". It is always framed as "when that part fails, how do we
 land safely?"
 
 So, *when* your password manager fails, what are you going to do about
 it?
 
 I'm not singling you out, I'm trying to make a point. Far too many
 software developers develop a hubris that they can write software that
 cannot fail.  Unfortunately, usually someone else is going to have to
 pay for that mistake.
We've had several decades of industry experience proving that all non-trivial software is inevitably buggy and has failure modes, oftentimes ugly ones. :-) It's only a matter of time before yet another software tower of cards comes crashing down, and all your precious data with it.

This is why I'm a big skeptic of cloud-based services (or indeed, anything that relies on some remote network resource being always available / secure). There's a time and place for it, but if you follow the bandwagon in putting *everything* on it even when you really shouldn't be, then you should be prepared for the catastrophic failure that's inevitably coming.


T

-- 
Why can't you just be a nonconformist like everyone else? -- YHL
Jan 11 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/11/2023 4:07 PM, H. S. Teoh wrote:
 This is why I'm a big skeptic of cloud-based services (or indeed,
 anything that relies on some remote network resource being always
 available / secure).
Me too. You all know my practice of putting links to more information in code I write. I've been doing that for a long time. I've found a lot of those older links to Microsoft documentation have become dead ends, and google doesn't reveal any replacement.

It's just gone. Poof.
Jan 11 2023
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 1/12/23 03:32, Walter Bright wrote:
 On 1/11/2023 4:07 PM, H. S. Teoh wrote:
 This is why I'm a big skeptic of cloud-based services (or indeed,
 anything that relies on some remote network resource being always
 available / secure).
Me too. You all know my practice of putting links to more information in code I write. I've been doing that for a long time. I've found a lot of those older links to Microsoft documentation have become deadends, and google doesn't reveal any replacement. It's just gone. Poof.
https://archive.org/web/
Jan 14 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/14/2023 10:17 AM, Timon Gehr wrote:
 On 1/12/23 03:32, Walter Bright wrote:
 Me too. You all know my practice of putting links to more information in code 
 I write. I've been doing that for a long time. I've found a lot of those older 
 links to Microsoft documentation have become deadends, and google doesn't 
 reveal any replacement.

 It's just gone. Poof.
https://archive.org/web/
If they were there, wouldn't google have found them? They disappeared around 2001.
Jan 14 2023
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 1/14/23 22:24, Walter Bright wrote:
 On 1/14/2023 10:17 AM, Timon Gehr wrote:
 On 1/12/23 03:32, Walter Bright wrote:
 Me too. You all know my practice of putting links to more information 
 in code I write. I've been doing that for a long time. I've found a 
 lot of those older links to Microsoft documentation have become 
 deadends, and google doesn't reveal any replacement.

 It's just gone. Poof.
https://archive.org/web/
If they were there, wouldn't google have found them? They disappeared around 2001.
I don't know. I don't remember being pointed towards archived webpages by google, but you can just enter the links there directly and find out.
Jan 14 2023
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/14/2023 2:06 PM, Timon Gehr wrote:
 I don't know. I don't remember being pointed towards archived webpages by 
 google, but you can just enter the links there directly and find out.
I never thought of that, thanks for the tip.
Jan 14 2023
prev sibling parent Tejas <notrealemail gmail.com> writes:
On Saturday, 14 January 2023 at 22:06:58 UTC, Timon Gehr wrote:
 On 1/14/23 22:24, Walter Bright wrote:
 On 1/14/2023 10:17 AM, Timon Gehr wrote:
 On 1/12/23 03:32, Walter Bright wrote:
 Me too. You all know my practice of putting links to more 
 information in code I write. I've been doing that for a long 
 time. I've found a lot of those older links to Microsoft 
 documentation have become deadends, and google doesn't 
 reveal any replacement.

 It's just gone. Poof.
https://archive.org/web/
If they were there, wouldn't google have found them? They disappeared around 2001.
I don't know. I don't remember being pointed towards archived webpages by google, but you can just enter the links there directly and find out.
I never once received a Google query result that pointed to archive.org. It's always me having to explicitly visit the site and enter the URL.
Jan 14 2023
prev sibling next sibling parent Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 23:39:50 UTC, Walter Bright 
wrote:
 On 1/11/2023 5:35 AM, Don Allen wrote:
 1. Steal your password
 2. Produce the "secret key", which they won't be able to
 3. Get past 2FA, which they won't be able to
Those are all good things. But it doesn't help you if you download a trojan version of the manager, or a trojan masquerading as an update. I've also seen several schemes that outmaneuver 2FA.
The safety of 1Password doesn't depend on 2FA alone. A hacker has to get through three barriers.
 Allow me to explain the framing. At Boeing, it was never "that 
 part cannot fail". It is always framed as "when that part 
 fails, how do we land safely?"

 So, *when* your password manager fails, what are you going to 
 do about it?
Maybe nothing, since these packages are designed using the Boeing approach: try your damnedest to fail safe.

LastPass just did fail. They had a security breach months ago. That's where my passwords were at the time. Have I seen any evidence that my accounts have been compromised? Absolutely not. To what do I attribute that? My use of long-enough random passwords drawn from a big-enough character set. And the effectiveness of AES256. And the fact that I have 2FA enabled on all sensitive accounts where it is optional, e.g., Amazon.

I moved my passwords to 1Password for the simple reason, mentioned in an earlier post, that the LastPass management handled the situation dishonestly. I prefer not to give my business to such people.
 I'm not singling you out, I'm trying to make a point. Far too 
 many software developers develop a hubris that they can write 
 software that cannot fail. Unfortunately, usually someone else 
 is going to have to pay for that mistake.
Yes, that's true. I don't see the relevance to this discussion. I am making an educated guess that password managers are safe enough to use, not that they are perfect. Just as you make the same educated guess when you get on an airplane: that Boeing or Airbus or Embraer knew what it was doing when it built the airplane, that the people in the cockpit are competent, especially in an emergency (sometimes you get a Sullenberger, sometimes you get a Pierre-Cédric Bonin (AF447), or the Asiana 214 pilot who couldn't hand-fly a landing in perfect VFR conditions, or the guy in Buffalo who responded to a stall warning by pulling back while his co-pilot retracted the flaps), and that ATC doesn't screw up. It's a damned good system, but it's not perfect. Same thing exactly.
Jan 11 2023
prev sibling parent Max Samukha <maxsamukha gmail.com> writes:
On Wednesday, 11 January 2023 at 23:39:50 UTC, Walter Bright 
wrote:

 Far too many software developers develop a hubris that they can 
 write software that cannot fail.
You seem to be attacking that poor strawman again.) I've never met a sober programmer who would claim he can write software that cannot fail. Minimizing the probability of a failure is still a good thing regardless of how much redundancy there is to deal with failures.
Jan 11 2023
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
I can't find your other post, so I'll reply here. You mentioned that ImportC 
failed to compile a .h file you were using. I asked for you to post these to 
bugzilla.

I can't find them there. Can you please post them? Help me help you get your 
code working.
Jan 11 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 11 January 2023 at 23:41:22 UTC, Walter Bright 
wrote:
 I can't find your other post, so I'll reply here. You mentioned 
 that ImportC failed to compile a .h file you were using. I 
 asked for you to post these to bugzilla.

 I can't find them there. Can you please post them? Help me help 
 you get your code working.
On Wednesday, 11 January 2023 at 23:41:22 UTC, Walter Bright wrote:
 I can't find your other post, so I'll reply here. You mentioned 
 that ImportC failed to compile a .h file you were using. I 
 asked for you to post these to bugzilla.

 I can't find them there. Can you please post them? Help me help 
 you get your code working.
See https://issues.dlang.org/show_bug.cgi?id=23485, "ImportC: two tests with gtk", posted 2022-11-14, tagged ImportC as you requested.
Jan 11 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/11/2023 6:27 PM, Don Allen wrote:
 See https://issues.dlang.org/show_bug.cgi?id=23485, "ImportC: two tests with 
 gtk", posted 2022-11-14, tagged ImportC as you requested.
Thank you. Can I press you to just quote enough of the .h file to show the problem, rather than a giant tar file?
Jan 11 2023
parent reply Don Allen <donaldcallen gmail.com> writes:
On Thursday, 12 January 2023 at 02:34:39 UTC, Walter Bright wrote:
 On 1/11/2023 6:27 PM, Don Allen wrote:
 See https://issues.dlang.org/show_bug.cgi?id=23485, "ImportC: 
 two tests with gtk", posted 2022-11-14, tagged ImportC as you 
 requested.
Thank you. Can I press you to just quote enough of the .h file to show the problem, rather than a giant tar file?
No. I've done what I can and will for the D project. The information you need is in that tar file. As I've already said, I am no longer using D for my own work, based on what I've learned about the nature of this project and its leadership in this thread. I said all this in a previous post.

Good luck with the project going forward. I mean that sincerely. I think you need to make some substantive changes in how the project is led and managed, to get roles in line with abilities, but I do believe there is a core of good work here. I won't be posting here again.

/Don Allen
Jan 12 2023
parent reply Dukc <ajieskola gmail.com> writes:
On Thursday, 12 January 2023 at 20:21:42 UTC, Don Allen wrote:
 As I've already said, I am no longer using D for my own work, 
 based on what I've learned about the nature of this project and 
 its leadership in this thread. I said all this in a previous 
 post.

 Good luck with the project going forward. I mean that 
 sincerely. I think you need to make some substantive changes in 
 how the project is lead and managed, to get roles in line with 
 abilities, but I do believe there is a core of good work here. 
 I won't be posting here again.

 /Don Allen
Consider copying your earlier posts about our shortcomings to an email and sending it to the feedback campaign [Mike is running](https://forum.dlang.org/thread/rcnrixuppxflnrheyvsz forum.dlang.org). That way your observations (which do sound useful to me) are less likely to go to waste. That is, if you did not already do so.
Jan 13 2023
parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Friday, 13 January 2023 at 08:15:57 UTC, Dukc wrote:
 Consider copying your earlier posts about our shortcoming to an 
 email and sending it for the feedback campaign
On bug reports:

1) he gives the info
2) D people be like "do more free labor to reformat this in a way that's more convenient for me"
3) he refuses

On feedback campaign:

1) he gives the info
2) D people be like "do more free labor to reformat this in a way that's more convenient for me"
3) ........... it is a mystery!!!!!!
Jan 13 2023
parent Guillaume Piolat <first.last spam.org> writes:
On Friday, 13 January 2023 at 14:05:13 UTC, Adam D Ruppe wrote:
 On feedback campaign
I sell software for a living, and people with proper feedback do not hesitate to send it privately by email. The more you interact in public forums, the more people criticize with relatively random feedback for clout. That is all there is to it, really; forums are largely a diversion.
Jan 13 2023
prev sibling next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Monday, 9 January 2023 at 01:44:41 UTC, Steven Schveighoffer 
wrote:
 Yes, it's no different than any other data breach of any other 
 company -- email addresses, billing information, etc.

 Note that LastPass and others do not even have the keys to the 
 safe to be stolen in the first place -- they never store your 
 master password.

 the "100s of passwords" are not compromised (that is, unless 
 they use "password123!" as their master password).

 LastPass uses 100100 rounds of encryption, which means each 
 guess takes a long time to test to see if it's right. Brute 
 force will take millions of years.

 Everyone today should use a password manager, whether it's 
 cloud based or not. And the *most important rule* is to not use 
 a previous password as your master password.

 -Steve
Sadly, many people's 'master' password will most likely be something they can easily remember. Also, there is almost certainly a backdoor into the password database. The backdoor could be intentional (to assist law enforcement), or it could just be an API that someone forgot to properly lock down. But it's there. It always is. "The cloud is another name for 'someone else's computer'": https://www.schneier.com/blog/archives/2022/12/lastpass-breach.html
Jan 08 2023
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Monday, 9 January 2023 at 01:44:41 UTC, Steven Schveighoffer 
wrote:

And btw, people talk a lot about reducing 'the attack surface' 
by using more 'memory safe' programming languages.

But if only people would stop uploading critical information to 
the cloud!

(like the list of all their passwords!!)

That would reduce the need to attack in the first place.

As long as the cloud exists, attacks on it will always be 
occurring. No matter the platform, no matter the programming 
language.
Jan 08 2023
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Monday, 9 January 2023 at 00:43:08 UTC, max haughton wrote:
 On Monday, 9 January 2023 at 00:18:50 UTC, RTM wrote:
 On Sunday, 8 January 2023 at 21:53:32 UTC, Steven 
 Schveighoffer wrote:
 Nope. That's not how LastPass (and password managers in 
 general) work.
https://en.m.wikipedia.org/wiki/LastPass#2022_security_incidents It’s serious.
Serious yes, but look at the data that actually leaked, it's not the keys to the safe I think
Even if it was just the 'customer data', that data alone is worth a lot, as it can be (and likely will be) used in very mischievous ways. It may well be they were after that data after all.
Jan 08 2023
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 03:17, Walter Bright wrote:
 On 12/29/2022 12:46 PM, Timon Gehr wrote:
 The bad thing is allowing programs to enter unanticipated, invalid 
 states in the first place...
We both agree on that one. But invalid states happen in the real world. ...
That's certainly not a reason to introduce even _more_ opportunities for bad things to happen...
 
 Not all disasters are silent. Maybe you are biased because you only 
 write batch programs that are intended to implement a very precise spec.
I'm biased from my experience designing aircraft systems. You never, ever want an avionics program to proceed if it has entered an invalid state. It must fail instantly, fail hard, and allow the backup to take over. ...
That's context-specific and for the programmer to decide. You can't have the backup take over if you blow up the plane.
 The idea that a program should soldier on once it is in an invalid state 
 is very bad system design.
Well, here it's the language that is encouraging people to choose a design that allows invalid states.
 Perfect programs cannot be made. The solution 
 is to not pretend that the program is perfect, but be able to tolerate 
 its failure by shutting it down and engaging the backup.
 ...
Great, so let's just give up I guess. All D programs should just segfault on startup. They were not perfect anyway.
 I think the Chrome was the first browser to do this. It's an 
 amalgamation of independent processes. The processes do not share 
 memory, they communicate with interprocess protocols. If one process 
 fails, its failure is isolated, it is aborted, and a replacement is spun 
 up.
 
 The hubris of "can't be allowed to fail" software is what allowed 
 hackers to manipulate a car's engine and brakes remotely by hacking in 
 via the keyless door lock. (Saw this on an episode of "60 Minutes".)
I am not saying software can't be allowed to fail, just that it should fail compilation, not at runtime.
Dec 29 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/29/2022 7:37 PM, Timon Gehr wrote:
 I am not saying software can't be allowed to fail, just that it should fail 
 compilation, not at runtime.
In your description of pattern matching checks in this thread, the check was at runtime. (Of course, we'd all like the compiler to detect all the errors at compile time. D does an awful lot of that.)
Dec 30 2022
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/30/22 21:41, Walter Bright wrote:
 On 12/29/2022 7:37 PM, Timon Gehr wrote:
 I am not saying software can't be allowed to fail, just that it should 
 fail compilation, not at runtime.
In your description of pattern matching checks in this thread, the check was at runtime. ...
No, the check was at compile time. The check I care about is the check for _failure_. The check for _null_ may or may not be _necessary_ depending on the type of the reference. Relying on hardware memory protection to catch the null reference is never necessary, because _valid programs should not even compile if that's the kind of runtime check they would require to ensure type safety_. The hardware memory protection can still catch compiler bugs I guess.
 (Of course, we'd all like the compiler to detect all the errors at 
 compile time. D does an awful lot of that.)
 
Well, glad we are on the same page at least. But what is the concern then? This technology has a proven track record.
Dec 30 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 1:07 PM, Timon Gehr wrote:
 In your description of pattern matching checks in this thread, the check was 
 at runtime.
 ...
No, the check was at compile time.
The pattern matching is done at run time.
 The check I care about is the check for 
 _failure_. The check for _null_ may or may not be _necessary_ depending on the 
 type of the reference.
NonNull pointers:

    int* p = ...;
    nonnull int* np = isPtrNull(p) ? fatalError("it's null!") : p;
    *np = 3; // guaranteed not to fail!

Null pointers:

    int* p = ...;
    *p = 3;  // seg fault!

Which is better? Both cause the program to quit on a null pointer.
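The "check once, then carry the proof in the type" idea can also be sketched in today's D as a library wrapper. The NonNull name and deref helper below are hypothetical, not an existing Phobos type:

    struct NonNull(T)
    {
        private T* p;

        // The single checkpoint: you cannot build a NonNull from a null pointer.
        this(T* candidate)
        {
            assert(candidate !is null, "it's null!");
            p = candidate;
        }

        ref T deref() { return *p; }
    }

    void main()
    {
        int x;
        auto np = NonNull!int(&x);
        np.deref() = 3;                  // guaranteed not to fail; no per-use check

        int* q = null;
        // auto bad = NonNull!int(q);    // would fail loudly at the one checkpoint
    }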
 This technology has a proven track record.
A proven track record of not seg faulting, sure. A proven track record of no fatal errors when converting a nullable pointer to nonnull, I'm not so sure.
 Relying on hardware memory protection to catch the null
 reference is never necessary,
If you manually code in a runtime check, sure, you won't need a builtin check at runtime.
 because _valid programs should not even compile if
 that's the kind of runtime check they would require to ensure type safety_.
Then we don't need sumtypes with pattern matching?
 The hardware memory protection can still catch compiler bugs I guess.
Having a hardware check is perfectly valid for checking things.

BTW, back in the bad old DOS days, I used to write a lot of:

    assert(p != NULL);

It was very effective. But with modern CPUs, such checks add no value, and I removed them.
Dec 30 2022
next sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Saturday, 31 December 2022 at 06:34:38 UTC, Walter Bright 
wrote:
 NonNull pointers:

   int* p = ...;
   nonnull int* np = isPtrNull(p) ? fatalError("it's null!") : p;
   *np = 3; // guaranteed not to fail!

 Null pointers:

   int* p = ...;
   *p = 3;  // seg fault!

 Which is better? Both cause the program to quit on a null 
 pointer.
In a larger program the first one allows the programmer to do the check once and rely on it for the remainder of the program. Essentially it leverages the type system to make invalid state unrepresentable. This simplifies subsequent code. It is very similar to representing a phone number as either a string or a dedicated phone number type. The way you construct an instance of the phone number type is through a check, and any function accepting it can rely on that check having been done. In contrast, if one passes phone numbers around as strings, you need so many checks everywhere that you are likely to forget one.
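As a concrete illustration of that pattern, here is a minimal D sketch (the `PhoneNumber` type and its validation rule are invented for this example): construction goes through one checked factory, and every function downstream accepts the wrapper instead of a raw string.

```d
import std.algorithm.searching : all;
import std.ascii : isDigit;
import std.typecons : Nullable, nullable;

struct PhoneNumber
{
    private string digits;

    // The only way to obtain a PhoneNumber is through this checked factory.
    static Nullable!PhoneNumber parse(string s)
    {
        if (s.length >= 7 && s.all!isDigit)
            return nullable(PhoneNumber(s));
        return Nullable!PhoneNumber.init; // invalid input, no instance created
    }

    string toString() const { return digits; }
}

// Functions taking a PhoneNumber need no further validation.
void dial(PhoneNumber number)
{
    import std.stdio : writeln;
    writeln("dialing ", number);
}

void main()
{
    auto parsed = PhoneNumber.parse("5551234");
    if (!parsed.isNull)
        dial(parsed.get);
}
```

Once a `PhoneNumber` exists, no caller needs to re-validate it; grep for `parse` and you have found every place unvalidated input can enter.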
 Having a hardware check is perfectly valid for checking things.
Not all targets have said check though.
Dec 30 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/30/2022 11:55 PM, Sebastiaan Koppe wrote:
 On Saturday, 31 December 2022 at 06:34:38 UTC, Walter Bright wrote:
 Which is better? Both cause the program to quit on a null pointer.
In a larger program the first one allows the programmer to do the check once and rely on it for the remainder of the program.
Which is what usually happens with nullable pointers. We check once and rely on it to be non-null for the rest of the program, and the hardware ensures we didn't screw it up.
 Essentially it leverages the type system to make invalid state unrepresentable.
I actually do understand that, I really do. I'm pointing out that the hardware makes dereferencing null pointers impossible. Different approach, but with the same end result.
 This simplifies subsequent code.
I'm not so sure it does. It requires two types rather than one - one with the possibility of a null, one without. Even the pattern matching to convert the type is more work than:

   if (p) ...
 Having a hardware check is perfectly valid for checking things.
Not all targets have said check though.
True. Some 16 bit processors don't, notably the 8086. The 80286 had it since 1985 or thereabouts, back in the stone age. My experience with such machines is to develop and debug the code on a machine with hardware memory protection, and port it to the primitive target as the very last step.

----

I know I'm not convincing anyone, and that's OK. Seg faults are a marvel of modern CPU technology, but 99% of programmers regard them as uncool as a zit. D will get sumtypes and pattern matching and then everyone can do what works best for them. D has always been a language where you can choose between a floor wax and a dessert topping.

Personally, I'm most interested in sumtypes and pattern matching as a better error handling mechanism than throwing exceptions.
Dec 31 2022
next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Sunday, 1 January 2023 at 01:58:18 UTC, Walter Bright wrote:
 True. Some 16 bit processors don't, notably the 8086. The 80286 
 had it since 1985 or thereabouts, back in the stone age.
The very popular WebAssembly virtual machine doesn't have it *today*. I personally think this shows how little thought they put into the spec, but it is what it is.
Dec 31 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2022 6:07 PM, Adam D Ruppe wrote:
 The very popular WebAssembly virtual machine doesn't have it *today*.
I'm rather disturbed to hear that.
 I personally think this shows how little thought they put into the spec, but
it 
 is what it is.
Indeed. The 8086 was designed to put the ROM at the *high* end of the address space. If they'd put it at the *bottom* end, it would have saved developers a million hours. Weirdly, I never heard anyone suggest that.
Jan 01 2023
prev sibling next sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 1/1/23 02:58, Walter Bright wrote:
 In a larger program the first one allows the programmer to do the 
 check once and rely on it for the remainder of the program.
Which is what usually happens with nullable pointers. We check once and rely on it to be non-null for the rest of the program, and the hardware ensures we didn't screw it up.
No, it absolutely, positively does not... It only ensures no null dereference takes place on each specific run. You can have screwed it up and only notice once the program is published. I know this happens because I have been a _user_ of software with this kind of problem. Notably this kind of thing happens in released versions of DMD sometimes...
 
 Essentially it leverages the type system to make invalid state unrepresentable.
I actually do understand that, I really do. I'm pointing out that the hardware makes dereferencing null pointers impossible. Different approach, but with the same end result.
If you really think it's the same, you actually do not understand it.
 This simplifies subsequent code. 
I'm not so sure it does. It requires two types rather than one - one with the possibility of a null, one without.
You have to be aware of this in either case. It's simpler if it's actually tracked in the type system.
 Even the pattern matching to convert the type is more work than:
 
    if (p) ...
Most languages with nonnull pointers allow the above syntax. This is a non-argument.
 I know I'm not convincing anyone, and that's OK.
I don't even understand what your position is.
 Seg faults are a marvel of modern CPU technology, but 99% of programmers
regard them as uncool as a zit.
They are great at what they do, this is just not a good use case for them.
 D will get sumtypes and pattern matching and then everyone can do what works
best for them. D has always been a language where you can choose between a
floor wax and a dessert topping. 
That's great. However, it's somewhat aggravating to me that I am currently not actually convinced you understand what's needed to achieve that. This is because you are making statements that equate nonnull pointers in the type system to runtime hardware checking with segmentation faults.
Dec 31 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2022 7:06 PM, Timon Gehr wrote:
 No, it absolutely, positively does not... It only ensures no null dereference 
 takes place on each specific run. You can have screwed it up and only notice 
 once the program is published. I know this happens because I have been a
_user_ 
 of software with this kind of problem. Notably this kind of thing happens in 
 released versions of DMD sometimes...
You're absolutely right. And if I do a pattern match to create a non-nullable pointer, where the null arm does a fatal error if it can't deal with the null, it's the same thing. But we've both stated this same thing several times now.
 That's great. However, it's somewhat aggravating to me that I am currently not 
 actually convinced you understand what's needed to achieve that. This is
because 
 you are making statements that equate nonnull pointers in the type system to 
 runtime hardware checking with segmentation faults.
Yes, I am doing just that. Perhaps I can state our difference thusly. You are coming from a type theory point of view, and your position is quite right from that point of view. I'm not saying you are wrong. You are right. But I am coming from an engineering point of view, saying that for practical purposes, the hardware check produces the same result. If the hardware check wasn't there, I'd be all in on your approach. Which is why I'm excited about sumtypes being used for error states.
Jan 01 2023
next sibling parent reply mate <aiueo aiueo.aiueo> writes:
On Sunday, 1 January 2023 at 18:18:57 UTC, Walter Bright wrote:
 That's great. However, it's somewhat aggravating to me that I 
 am currently not actually convinced you understand what's 
 needed to achieve that. This is because you are making 
 statements that equate nonnull pointers in the type system to 
 runtime hardware checking with segmentation faults.
Yes, I am doing just that.
I don’t understand how it can be argued that both approaches are equivalent. In the hardware case, the null check is done at runtime. If the programmer forgets to handle some edge case, the bug gets released and affects the customers in production. In the type system case, the language ensures that, when the appropriate type is used, this bug cannot happen. In other words, that fixes the billion dollar mistake.
Jan 02 2023
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/2/2023 3:20 AM, mate wrote:
 I don’t understand how it can be argued that both approaches are equivalent.
In 
 the hardware case, the null check is done at runtime. If the programmer
forgets 
 to handle some edge case, the bug gets released and affects the customers in 
 production. In the type system case, the language ensures that, when the 
 appropriate type is used, this bug cannot happen. In other words, that fixes
the 
 billion dollar mistake.
May I quote myself: "if I do a pattern match to create a non-nullable pointer, where the null arm does a fatal error if it can't deal with the null, it's the same thing."
Jan 03 2023
prev sibling next sibling parent reply claptrap <clap trap.com> writes:
On Sunday, 1 January 2023 at 18:18:57 UTC, Walter Bright wrote:
 On 12/31/2022 7:06 PM, Timon Gehr wrote:
 No, it absolutely, positively does not... It only ensures no 
 null dereference takes place on each specific run. You can 
 have screwed it up and only notice once the program is 
 published. I know this happens because I have been a _user_ of 
 software with this kind of problem. Notably this kind of thing 
 happens in released versions of DMD sometimes...
You're absolutely right. And if I do a pattern match to create a non-nullable pointer, where the null arm does a fatal error if it can't deal with the null, it's the same thing.
That's a logically flawed argument, because it rests on the assumption that the only possible way to handle a null when converting from a nullable to a non-nullable is to abort the program. It's also a strawman, because it misses the point that it's not about what you do with the null, but about knowing where nulls might get in. Statically knowing where the external doors are.
 That's great. However, it's somewhat aggravating to me that I 
 am currently not actually convinced you understand what's 
 needed to achieve that. This is because you are making 
 statements that equate nonnull pointers in the type system to 
 runtime hardware checking with segmentation faults.
Yes, I am doing just that.
It's not the same, because non-nullables allow you to narrow down the places where the abort can happen. It's like putting metal detectors on doorways. When someone passes through a metal detector they get an id badge; with that badge they can pass through any doorway. If there are only 3 external entrances, you put the metal detectors on those. Now the building is safe. The nullable to non-nullable conversions are the metal detectors.
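To make the analogy concrete, here is a minimal D sketch of the idea (the `NonNull` wrapper and its `from` helper are invented for this example, not an existing library type): the one conversion point is the only place a null can show up, and all the code behind it works with the wrapper.

```d
import core.stdc.stdlib : abort;

// Hypothetical wrapper: holding one of these means the pointer was
// checked exactly once, at the "door".
struct NonNull(T)
{
    private T* ptr;

    // No check needed here: a NonNull can only be built via `from`.
    ref T get() { return *ptr; }

    // The single "metal detector": reject null at the boundary.
    static NonNull!T from(T* p)
    {
        if (p is null)
        {
            import std.stdio : stderr;
            stderr.writeln("null pointer at the boundary");
            abort(); // or report an error to the caller; the point is *where* this check lives
        }
        return NonNull!T(p);
    }
}

// Interior code: no null checks anywhere, by construction.
void increment(NonNull!int n)
{
    n.get() += 1;
}

void main()
{
    int x = 41;
    auto p = NonNull!int.from(&x); // the only external entrance
    increment(p);
    assert(x == 42);
}
```

Grep for `from` and you have found every external entrance; the rest of the program cannot even express the unchecked case.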
Jan 02 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/2/2023 1:50 PM, claptrap wrote:
 That's a logically flawed argument because it rests on the assumption that the 
 only possible way to handle a null when converting from a nullable to a 
 non-nullable is to abort the program.
I never said "only possible way". However, it is the usual case. Now, if the *user* checks a pointer for null, and it isn't, then it won't seg fault from subsequent use.
 It's also a strawman because it also misses the point that its not about what 
 you do with the null, but knowing where they might get in. Statically knowing 
 where the external doors are.
Indeed, that is a difference. I've rarely had any actual trouble tracking it back to its source, though.
Jan 03 2023
parent claptrap <clap trap.com> writes:
On Tuesday, 3 January 2023 at 08:49:55 UTC, Walter Bright wrote:
 On 1/2/2023 1:50 PM, claptrap wrote:
 That's a logically flawed argument because it rests on the 
 assumption that the only possible way to handle a null when 
 converting from a nullable to a non-nullable is to abort the 
 program.
I never said "only possible way". However, it is the usual case. Now, if the *user* checks a pointer for null, and it isn't, then it won't seg fault from subsequent use.
It's implicit in your position. If you want to say "they are the same", then you also have to hold that they must always abort on null. "Usually" isn't enough:

A usually aborts on NULL
B always aborts on NULL
Therefore they are the same... ***doesn't work***.
 It's also a strawman because it also misses the point that its 
 not about what you do with the null, but knowing where they 
 might get in. Statically knowing where the external doors are.
Indeed, that is a difference. I've rarely had any actual trouble tracking it back to its source, though.
It'd be like being able to verify at compile time that 90% of your numeric code won't generate NaNs, so you'd only have to check the boundary between the 10% that can and the 90% that can't. It'd be more useful than floats defaulting to NaN is. And you like that?
Jan 03 2023
prev sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 1/1/23 19:18, Walter Bright wrote:
 On 12/31/2022 7:06 PM, Timon Gehr wrote:
 No, it absolutely, positively does not... It only ensures no null 
 dereference takes place on each specific run. You can have screwed it 
 up and only notice once the program is published. I know this happens 
 because I have been a _user_ of software with this kind of problem. 
 Notably this kind of thing happens in released versions of DMD 
 sometimes...
You're absolutely right. And if I do a pattern match to create a non-nullable pointer, where the null arm does a fatal error if it can't deal with the null, it's the same thing. ...
_IF_. It's a very big IF. If you can't deal with the null, it should have been a non-null pointer in the first place. While, without even more powerful language features, this is not absolutely _always_ possible, it is _usually_ possible. This should be very easy to understand from an "engineering point of view".
 But we've both stated this same thing several times now.
 
 
 That's great. However, it's somewhat aggravating to me that I am 
 currently not actually convinced you understand what's needed to 
 achieve that. This is because you are making statements that equate 
 nonnull pointers in the type system to runtime hardware checking with 
 segmentation faults.
Yes, I am doing just that. Perhaps I can state our difference thusly. You are coming from a type theory point of view, and your position is quite right from that point of view. ...
I am approaching this from a practical angle, as a user and creator of software.
 I'm not saying you are wrong. You are right. But I am coming from an 
 engineering point of view, saying that for practical purposes, the 
 hardware check produces the same result.
 ...
This is wrong from any point of view that includes occasionally running software. I have suffered as a user from many bugs whose underlying cause is exceedingly easy to guess as just being "somebody forgot about null", and a proper type system would have obviously prevented most of those _during the initial design of the system, when everyone's memory of the code base was very fresh_.

If your software crashes with a fatal segmentation fault, that's an engineering failure. Using the right tools, such as type systems, is _part_ of software engineering.

Let me translate the previous discussion to the bridge setting:

TG: I observe bridges collapsing. Maybe we should actually calculate statics during the planning phase, before building them?

WB: From a computational standpoint you are right. However, from an engineering standpoint, you can just keep building bridges. You will then learn their flaws as they collapse, which gives you exactly the same end result.

I just don't think good engineers argue in this fashion. This is exactly the kind of out-of-touch ivory tower reasoning that theorists are sometimes accused of.
 If the hardware check wasn't there, I'd be all in on your approach. 
 Which is why I'm excited about sumtypes being used for error states.
Sure. And on some targets there is not even a hardware check. (WASM in particular would be useful to me.)
Jan 03 2023
prev sibling next sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Sunday, 1 January 2023 at 01:58:18 UTC, Walter Bright wrote:
 On 12/30/2022 11:55 PM, Sebastiaan Koppe wrote:
 In a larger program the first one allows the programmer to do 
 the check once and rely on it for the remainder of the program.
Which is what usually happens with nullable pointers. We check once and rely on it to be non-null for the rest of the program
Oh, but here is the difference: with nonnull pointers it's the type system that keeps me honest, and in your case it's another responsibility for the programmer.
 Essentially it leverages the type system to make invalid state 
 unrepresentable.
I actually do understand that, I really do. I'm pointing out that the hardware makes dereferencing null pointers impossible. Different approach, but with the same end result.
Valuable software is forever in a state of change; a series of additions, refactorings, removals etc. To make those changes easier it helps if you limit the assumptions each line of code has. Things like: "This dereference on line 293 is fine because we checked it at line 237" make refactorings harder. In order to keep high plasticity you want to limit these implicit - invisible - connections. Having unittests helps fearless refactoring. Using the type system to your advantage helps a lot too.
 This simplifies subsequent code.
I'm not so sure it does. It requires two types rather than one - one with the possibility of a null, one without. Even the pattern matching to convert the type is more work than: if (p) ...
Yes, it is a little bit more work, but that often happens somewhere on the boundary of the (sub)program, so that the remainder can just work with the more restricted type.

Also, somewhat tangentially related, isn't nonnull-ness one of the things `ref` helps with:

```dlang
// transmogrifies `p`, `p` must be non-null!
void transmogrify1(int* p);

// transmogrifies `p`
void transmogrify2(ref int p);
```

`transmogrify2` uses the type system to express its non-null requirement, which anyone can see just glancing at the signature. That is worth something.
 Personally, I'm most interested in sumtypes and pattern 
 matching as a better error handling mechanism than throwing 
 exceptions.
Very happy to hear that. I recently wrote https://github.com/skoppe/oras/blob/master/source/oras/client.d which uses mir's algebraics to make all the possible errors explicit. On top of that it uses `nothrow` to make sure no exceptions slip in due to changes.
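For readers who haven't seen the style, here is a minimal sketch of "all possible errors explicit in the return type", using Phobos' std.sumtype rather than mir's algebraic (the error types and the lookup function are invented for this example):

```d
import std.sumtype;

// Each failure mode is an ordinary type, so it shows up in the signature.
struct NotFound   { string key; }
struct BadRequest { string reason; }

alias LookupResult = SumType!(int, NotFound, BadRequest);

// The signature tells the caller exactly what can come back.
LookupResult lookup(string key)
{
    if (key.length == 0)
        return LookupResult(BadRequest("empty key"));
    if (key == "answer")
        return LookupResult(42);
    return LookupResult(NotFound(key));
}

void main()
{
    import std.stdio : writeln;

    // match! makes the caller handle every alternative, checked at compile time.
    lookup("answer").match!(
        (int value)     => writeln("found ", value),
        (NotFound nf)   => writeln("no entry for ", nf.key),
        (BadRequest br) => writeln("bad request: ", br.reason)
    );
}
```

Adding `nothrow` on top of this, as described above, keeps exceptions from sneaking back in through later changes.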
Jan 01 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/1/2023 7:24 AM, Sebastiaan Koppe wrote:
 Also, somewhat tangentially related, isn't nonnull-ness one of the things
`ref` 
 helps with:
Sadly, not completely:

   void foo(ref int);

   void test()
   {
       int* p = null;
       foo(*p);
   }

(C++ has the same issue.)

As for your other points, we'd just be repeating our positions.
Jan 01 2023
parent reply kdevel <kdevel vogtner.de> writes:
On Sunday, 1 January 2023 at 18:11:10 UTC, Walter Bright wrote:
 On 1/1/2023 7:24 AM, Sebastiaan Koppe wrote:
 Also, somewhat tangentially related, isn't nonnull-ness one of 
 the things `ref` helps with:
Sadly, not completely: void foo(ref int); void test() { int* p = null; foo(*p); } (C++ has the same issue.)
How? Quote from https://eel.is/c++draft/dcl.ref:

   [...] A reference shall be initialized to refer to a valid object or function.

   [Note 2: In particular, a null reference cannot exist in a well-defined program, because the only way to create such a reference would be to bind it to the “object” obtained by indirection through a null pointer, which causes undefined behavior. {...} — end note]
Jan 01 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/1/2023 11:14 AM, kdevel wrote:
 On Sunday, 1 January 2023 at 18:11:10 UTC, Walter Bright wrote:
 (C++ has the same issue.)
How? Quote from https://eel.is/c++draft/dcl.ref:    [...] A reference shall be initialized to refer to a valid object or function.    [Note 2: In particular, a null reference cannot exist in a well-defined    program, because the only way to create such a reference would be to bind it    to the “object” obtained by indirection through a null pointer, which causes    undefined behavior. {...} — end note]
The spec says don't do it. That doesn't stop it from happening, though, as the compiler has no way to detect it.
Jan 02 2023
parent kdevel <kdevel vogtner.de> writes:
On Monday, 2 January 2023 at 22:54:41 UTC, Walter Bright wrote:
 On 1/1/2023 11:14 AM, kdevel wrote:
 On Sunday, 1 January 2023 at 18:11:10 UTC, Walter Bright wrote:
 (C++ has the same issue.)
How? Quote from https://eel.is/c++draft/dcl.ref:    [...] A reference shall be initialized to refer to a valid object or function.    [Note 2: In particular, a null reference cannot exist in a well-defined    program, because the only way to create such a reference would be to bind it    to the “object” obtained by indirection through a null pointer, which causes    undefined behavior. {...} — end note]
The spec says don't do it.
The C++ standard does not primarily say "don't do this or that" but "if you do this or that your code does not form a valid program".
 That doesn't stop it from happening,
By definition it does not happen in valid C++ programs. Urge programmers to write valid C++ programs and all's right with the world!
 though, as the compiler has no way to detect it.
It's like speeding. The guys who deploy road signs are not in charge of surveilling the traffic.
Jan 02 2023
prev sibling next sibling parent reply "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 01/01/2023 2:58 PM, Walter Bright wrote:
 I know I'm not convincing anyone, and that's OK. Seg faults are a marvel 
 of modern CPU technology, but 99% of programmers regard them as uncool 
 as a zit. D will get sumtypes and pattern matching and then everyone can 
 do what works best for them. D has always been a language where you can 
 choose between a floor wax and a dessert topping.
You convinced me ages ago. I was saying it a few days ago on Discord: why don't we add null checks for things like classes? Because the CPU already does it! The only issue is when it doesn't give you a stack trace.
 Personally, I'm most interested in sumtypes and pattern matching as a 
 better error handling mechanism than throwing exceptions.
So am I. The only difference is, I want it automatic as part of throw/try catch statements. I should really finish off that DIP...
Jan 01 2023
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 1/1/2023 7:32 AM, Richard (Rikki) Andrew Cattermole wrote:
 Only issue is when it 
 doesn't give you a stack trace.
That could probably be added. After all, we already give a stack trace for some other kinds of errors.
Jan 03 2023
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 03/01/2023 9:46 PM, Walter Bright wrote:
 On 1/1/2023 7:32 AM, Richard (Rikki) Andrew Cattermole wrote:
 Only issue is when it doesn't give you a stack trace.
That could probably be added. After all, we already give a stack trace for some other kinds of errors.
Yeah, absolutely. It's a small QoL improvement for non-Windows systems (on Windows you can have VS attach automatically, although it should work there too!).
Jan 03 2023
prev sibling parent Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Sunday, 1 January 2023 at 01:58:18 UTC, Walter Bright wrote:
 I know I'm not convincing anyone, and that's OK. Seg faults are 
 a marvel of modern CPU technology, but 99% of programmers 
 regard them as uncool as a zit.
Maybe this applies to the programmers that you know personally, but my experience is very different from yours. I guess I'm lucky to be surrounded by much more competent people.
 I'm trying to make a point. Far too many software developers
 develop a hubris that they can write software that cannot fail.
It's the other way around. Inexperienced beginners tend to believe that they can write software that cannot fail, but this kind of delusion does not last long if they keep developing software and learn a thing or two.
Jan 11 2023
prev sibling next sibling parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 07:34, Walter Bright wrote:
 On 12/30/2022 1:07 PM, Timon Gehr wrote:
 In your description of pattern matching checks in this thread, the 
 check was at runtime.
 ...
No, the check was at compile time.
The pattern matching is done at run time.
 The check I care about is the check for _failure_. The check for 
 _null_ may or may not be _necessary_ depending on the type of the 
 reference.
 NonNull pointers:

   int* p = ...;
   nonnull int* np = isPtrNull(p) ? fatalError("it's null!") : p;
   *np = 3; // guaranteed not to fail!

 Null pointers:

   int* p = ...;
   *p = 3;  // seg fault!

 Which is better? Both cause the program to quit on a null pointer.
 ...
You have deliberately chosen an example where it does not matter because your aim was specifically to dereference a possibly null pointer. I care about this case:

   nonnull int* p = ...; // possibly a compile time error
   *p = 3; // no runtime check. no seg fault!

Note that the declaration and dereference can be a few function calls apart. The further away the two are, the more useful tracking it in the type system becomes. Manual checks can be used to turn possibly null pointers into non-null pointers anywhere in the program where there is a sensible way to handle the null case separately. This is just a special case of sum types, where the compiler checks that you dealt with all cases exhaustively. The especially efficient tag encoding provided by `null` is just an additional small detail.
 
 This technology has a proven track record.
A proven track record of not seg faulting, sure.
Of making people think about, and handle the null case if it is necessary at all. I have already told you that my main gripe here is not specifically the segfault (though that does not help), it's the fatal and implicit nature of the crash.
 A proven trackrecord of 
 no fatal errors at converting a nullable pointer to nonnull, I'm not so 
 sure.
 ...
Converting a nullable pointer to nonnull without handling the null case is inherently an unsafe operation. D currently does it implicitly. Explicit is better than implicit for fatal runtime errors that will shut down your program completely. Typically you'd mostly use nonnull pointers and not get any fatal errors. It is true that if you have nontrivial logic determining whether some pointer should be null or not you may have to check that invariant at runtime with the techniques present in popular languages, but at least it's explicit.

My experience has been that null pointer segfaults usually happen in places where either a null pointer is never expected (and a nonnull pointer should have been used, making the type system ensure that the caller provides one) or there should have been a check, with different logic for the null case. I.e., they happen because people failed to think about the null case at all. The language encourages this lack of thinking by treating all references as non-null references during type checking and then crashing at runtime implicitly once the type checker's assumptions are inevitably violated.

Nonnull pointers allow expressing such assumptions in the type system. They are actually more useful than runtime segfaults and assertion failures, because they document expectations and the error will be at the place where the bad null pointer originates instead of at the place where it was not expected to occur. Runtime segfaults/assertion failures are actually much more susceptible to being papered over by subtly changing a function's interface and making it more complex by doing some checking internally and ignoring null instead of addressing the underlying issue. This is because it's harder to find the root cause, especially in a large undocumented code base. Nonnull is compiler-checked documentation and it will direct your attention to the function that is actually wrong by default.
 
  > Relying on hardware memory protection to catch the null
  > reference is never necessary,
 
 If you manually code in a runtime check, sure, you won't need a builtin 
 check at runtime.
 ...
No, you don't need any runtime check at all to dereference a nonnull pointer.

   nonnull x = new A;
   x.y = 3; // runtime checks completely redundant here
  > because _valid programs should not even compile if
  > that's the kind of runtime check they would require to ensure type 
 safety_.
 
 Then we don't need sumtypes with pattern matching?
 ...
That's not what I said. I am specifically talking about _implicit_ runtime checks causing a _program panic/segfault_. It's just a bad combination for null handling. Bad UX and hardly defensible with technical limitations.
  > The hardware memory protection can still catch compiler bugs I guess.
 
 Having a hardware check is perfectly valid for checking things.
 ...
Sure, in principle it can still be leveraged for some sort of explicit runtime-checked null pointer dereference syntax. Personally, the convenience of having the assertion failure tell me where it happened (even if I don't happen to be running in a debugger) is probably _by far_ worth the additional runtime check in the couple places where it would even remain necessary. Also as Sebastiaan points out, there are actually relevant targets that don't give you the check.
 BTW, back in the bad old DOS days, I used to write a lot of:
 
      assert(p != NULL);
 
 It was very effective. But with modern CPUs, this check adds no value, 
 and I removed them.
I have not much to add to this off-topic point. As I told you many times by now, I mostly agree here, but I want to be able to move most of this checking to compile time instead. BTW: I really dislike the terminology "nonnull pointer/reference". It's a weird inversion of defaults. nonnull is a much better default.
Dec 31 2022
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 07:34, Walter Bright wrote:
 On 12/30/2022 1:07 PM, Timon Gehr wrote:
 In your description of pattern matching checks in this thread, the 
 check was at runtime.
 ...
No, the check was at compile time.
The pattern matching is done at run time.
I don't get the relevance of this.
Dec 31 2022
parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 15:31, Timon Gehr wrote:
 On 12/31/22 07:34, Walter Bright wrote:
 On 12/30/2022 1:07 PM, Timon Gehr wrote:
 In your description of pattern matching checks in this thread, the 
 check was at runtime.
 ...
No, the check was at compile time.
The pattern matching is done at run time.
I don't get the relevance of this.
By the way, you were under-quoting. This is the relevant context: On 12/31/22 07:34, Walter Bright wrote:
 On 12/29/2022 7:37 PM, Timon Gehr wrote:
 I am not saying software can't be allowed to fail, just that it should fail
compilation, not at runtime.
In your description of pattern matching checks in this thread, the check was at runtime. ...
I.e., we were talking about failure, then you made an unrelated and obvious, i.e., spurious, remark about pattern matching happening at runtime instead of addressing the actual point. Now you seem to be doubling down on this. I was talking about _failure_. You then started talking about pattern matching. I don't want to contest that pattern matching happens at runtime. Not at all. But it's just not the check we had been talking about...
Dec 31 2022
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 12/31/22 15:41, Timon Gehr wrote:
 
 I was talking about _failure_. You then started talking about pattern 
 matching. I don't want to contest that pattern matching happens at 
 runtime. Not at all. But it's just not the check we had been talking 
 about...
My point has always been that with pattern matching, the exhaustiveness check is (ideally) at compile time. The check of the tag is at runtime. We are currently discussing the exhaustiveness check.
Dec 31 2022
prev sibling parent reply A moo person <moo_mail fake.com> writes:
On Friday, 30 December 2022 at 02:17:58 UTC, Walter Bright wrote:
 The idea that a program should soldier on once it is in an 
 invalid state is very bad system design.
There are definitely cases where it is desirable. In games, especially competitive real-time games, the show must go on. If you are in a high-adrenaline match and your game crashes at the worst time because some animation system got into an invalid state, you will be very mad.

Also, this thread makes me sad reading through it... :(
Jan 10 2023
next sibling parent A moo person <moo_mail fake.com> writes:
Oh, I see you talked about the Simpsons game and sort of made that point already.

Reading through this thread was a slog. Not sure why I did it, but it definitely convinced me that non-nullable types are cool and rad.

Also, somehow nothing ever seems to change over here in D land... bikeshedding, people asking for D3, weird meandering progress with seemingly no end goal. Classic Dlang forums thread.
Jan 10 2023
prev sibling parent reply Guillaume Piolat <first.last spam.org> writes:
On Wednesday, 11 January 2023 at 00:02:30 UTC, A moo person wrote:
 There are definitely cases where it is desirable. In games, 
 especially competitive real time games, the show must go on. If 
 you are in a high adrenalin match and your game crashes at the 
 worst time because some animation system got into an invalid 
 state, you will be very mad.
We need to characterize where it's OK to go on; typically it's cases where showing errors would be worse for the user, and the user is creating some "content".

- Markdown has a design where it always compiles. No errors, because errors have a visual impact, and in content creation if it has no visual impact it's not a real error.
- typically a game engine: if a file failed to load
- HTML and CSS are famously lenient

But all those cases are "input errors", not "invalid state".
Jan 11 2023
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Wednesday, 11 January 2023 at 08:57:25 UTC, Guillaume Piolat 
wrote:
 On Wednesday, 11 January 2023 at 00:02:30 UTC, A moo person 
 wrote:
 There are definitely cases where it is desirable. In games, 
 especially competitive real time games, the show must go on. 
 If you are in a high adrenalin match and your game crashes at 
 the worst time because some animation system got into an 
 invalid state, you will be very mad.
We need to caracterize where it's ok to go on, typically it's cases where showing errors in would be worse for the user, and the user is creating some "content". - Markdown has a design where it always compile. No errors because erros have a visual impact, and in content creation if it has no visual impact it's not a real error. - typically a game engine: if a file failed to load - HTML and CSS are famously lenient But all those cases are "input errors", not "invalid state".
I used to buy into the propaganda of the distinction between "input" and "logic" errors. Now I believe the distinction is mostly useless. "Invalid state" becomes "input error" depending on how you modularize the system.
Jan 11 2023
parent reply Dukc <ajieskola gmail.com> writes:
On Wednesday, 11 January 2023 at 10:57:18 UTC, Max Samukha wrote:
 I used to buy into the propaganda of the distinction between 
 "input" and "logic" errors. Now I beleive the distinction is 
 moslty useless. "Invalid state" becomes "input error" depending 
 on how you modularize the system.
Your observation does not contradict the original idea. An unrecoverable assertion failure is a recoverable input error from the perspective of the operating system or a separate watchdog process - recoverable by restarting the crashed program. The point is, each program needs to distinguish what it can handle by itself, and where it must consider itself out of control and leave it up to others to restart (or ditch) it.
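A minimal sketch of that watchdog idea in D (the `./worker` path and the restart policy are assumptions for illustration): the supervisor treats any abnormal exit of the child, whatever its internal cause, as an input it recovers from by restarting.

```d
import std.process : spawnProcess, wait;
import std.stdio : writeln;
import core.thread : Thread;
import core.time : seconds;

void main()
{
    while (true)
    {
        // Launch the worker; its crashes surface here as a nonzero exit status.
        auto pid = spawnProcess(["./worker"]);
        const status = wait(pid);

        if (status == 0)
            break; // clean exit, nothing to recover from

        writeln("worker exited with status ", status, ", restarting");
        Thread.sleep(1.seconds); // crude back-off before the restart
    }
}
```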
Jan 11 2023
parent reply Max Samukha <maxsamukha gmail.com> writes:
On Wednesday, 11 January 2023 at 13:38:42 UTC, Dukc wrote:

 Your observation does not contradict the original idea. An 
 unrecoverable assertion failure is a recoverable input error 
 from perspective of the operating system or a separate watchdog 
 process - recoverable by restarting the crashed program. The 
 point is, Each program needs to distinguish what it can handle 
 by itself, and where it must consider itself out of control and 
 leave it up to others to restart (or ditch) it.
My point is you can rarely decide upfront how to handle input to a public API, because the decision depends on how the API will be used:

(1)
```
to!int(readln); // "bad input error", expected to be recoverable
```

(2)
```
string s = <computation that may contain a logic error>
to!int(s); // "logic error", expected to panic
```

If you decide on 'assert', then (1) will require a redundant 'enforce'. If you decide on 'enforce', then (2) will require a redundant 'assert'.
Jan 14 2023
parent reply Dukc <ajieskola gmail.com> writes:
On Saturday, 14 January 2023 at 10:59:38 UTC, Max Samukha wrote:
 My point is you can rarely decide upfront how to handle input 
 to a public API, because the decision depends on how the API 
 will be used:

 (1)
 ```
 to!int(readln); // "bad input error", expected to be recoverable
 ```

 (2)
 ```
 string s = <computation that may contain a logic error>
 to!int(s); // "logic error", expected to panic
 ```

 If you decide on 'assert', then (1) will require a redundant 
 'enforce'. If you decide on 'enforce', then (2) will require a 
 redundant 'assert'.
The distinction between recoverable and unrecoverable errors is still relevant. The library author is only picking the default. The user still needs to make the decision.
Jan 14 2023
parent Max Samukha <maxsamukha gmail.com> writes:
On Saturday, 14 January 2023 at 12:15:46 UTC, Dukc wrote:

 The distinction between recoverable and unrecoverable errors is 
 still relevant. The library author is only picking the default. 
 User still needs to make the decision.
Which they never make. I've never seen an 'assert' preceding a call to `to!int` in real code.
Jan 14 2023
prev sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Wednesday, 11 January 2023 at 08:57:25 UTC, Guillaume Piolat 
wrote:
 We need to caracterize where it's ok to go on, typically it's 
 cases where showing errors in would be worse for the user, and 
 the user is creating some "content".
I always want the problem to be logged, so I want a descriptive entry to be submitted over TCP/IP with as much context as possible, even if I decide that a restart is necessary.
Jan 14 2023
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 29 December 2022 at 20:38:23 UTC, Walter Bright 
wrote:
 ..... The *actual* billion dollar mistake(s) in C are:

 1. uninitialized data leading to undefined behavior

 2. no way to do array buffer overflow detection

 because those lead to malware and other silent disasters.

 And it's good to have a state that a memory object can be 
 initialized too that cannot fail.
I would argue the billion dollar mistakes are really the fault of the users of the C programming language, and not the language itself. Those same users can make billion dollar mistakes in any language. Perhaps not those particular ones you mentioned, but others. Even in the safest language possible, a programmer could leave an API exposed that wasn't meant to be exposed...

The programmer can actually do runtime bounds checking in C, e.g. create your own vector type with bounds checking. The programmer can also initialise everything to a known state in C. One could also use calloc instead of malloc, or create their own memory allocator.

The C standard library didn't help either. It too could have been designed in a more memory-safe manner. But like C itself, it is minimal, performance oriented, and not designed to get in your way and make things difficult for you.

Even if C did all these things for you, and more, it's likely C programmers would have found a way to remove them or turn them off: created their own vector that doesn't do bounds checking, created their own memory allocator that doesn't initialise its allocations... e.g. -release -noboundscheck .. sound familiar?
Dec 29 2022
prev sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Tuesday, 27 December 2022 at 22:53:45 UTC, Walter Bright wrote:
 On 12/27/2022 1:41 AM, Max Samukha wrote:
 If T.init is supposed to be an invalid value useful for 
 debugging, then variables initialized to that value... are not 
 initialized.
It depends on the designed of the struct to decide on an initialized value that can be computed at compile time. This is not a failure, it's a positive feature. It means struct instances will *never* be in a garbage state.
Yes, they will be in an invalid state.
 C++ does it a different way, not a better way.
C++ is looking for a principled solution, as that presentation by Herb Sutter suggests. By saying "no dummy values", he must be referring to T.init :)

If you don't want a proper fix as Timon and others are proposing, can we at least have nullary constructors for structs to be consistent with the "branding" vs construction ideology?

   struct S { this(); }

   S x;        // S x = S.init;
   S x = S();  // S x = S.init; x.__ctor();

There is no reason to require the use of factory functions for this. Constructors *are* "standard" factory functions. People have resorted to all kinds of half-working hacks to work around this in generic code. The latest one I've seen is like:

   mixin template Ctors()
   {
       static typeof(this) opCall(A...)(auto ref A a)
       {
           import core.lifetime: forward;
           typeof(this) r;
           r.__init(forward!a);
           return r;
       }
   }

   struct S
   {
       // fake ctors
       void __init() {}
       void __init(...) {}
       mixin Ctors;
   }

   void foo(T)() { T x = T(); } // no need to pass around a factory function anymore
Dec 30 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
What I meant was default construction, which is not necessary in D.
Constructors 
are still allowed for non-default construction.
Dec 30 2022
parent reply Max Samukha <maxsamukha gmail.com> writes:
On Friday, 30 December 2022 at 20:44:04 UTC, Walter Bright wrote:
 What I meant was default construction, which is not necessary 
 in D.
It is necessary. For types that require runtime construction, initializing to T.init does not result in a constructed object. Forcing programmers to use factory functions doesn't make much sense:

   struct S { this() @disable; }

   S s() { S r = ...; return r; } // you disallowed `S()` just to make people fake it.

There is no need to forbid the nullary constructor. I intentionally don't call it "default constructor", because I want it to be distinct from initializing to T.init. I want this:

   @disable(init) // or whatever syntax you prefer
   struct S
   {
       this() {}
   }

   S s;            // error: explicit constructor call required
   S s = S();      // ok, explicit constructor call
   S s = S.init;   // shouldn't be allowed in safe code

   S[] ss;
   ss.length = 1;  // error
   S[] ss = [S()]; // ok
   // etc.

I would still dislike it, but it at least would save me from the factory function nonsense.
Dec 31 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/31/2022 2:28 AM, Max Samukha wrote:
 For types that require runtime construction, initializing to 
 T.init does not result in a constructed object.
The idea is to:

1. have construction that cannot fail. This helps avoid things like double-fault exceptions

2. have initializers that can be placed in read only memory

3. have something to set a destroyed object to, in case of dangling references and other bugs

4. present to a constructor an already initialized object. This prevents the common C++ problem of adding a field and forgetting to construct it in one of the overloaded constructors, a problem that has plagued me with erratic behavior

5. provide a NaN state. I know many people don't like NaN states, but if one does, the default construction is perfect for implementing one.

6. it fits in well with (future) sumtypes, where the default initializer can be the error state.

An alternative to factory functions is to have a constructor with a dummy argument. Nothing says one has to actually use the parameters to a constructor.
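For readers who haven't seen the trick, here is a minimal sketch of that dummy-argument workaround (the `Dummy` type and the `Widget` fields are invented for this example). Since D structs cannot declare `this()`, a throwaway parameter gives you a constructor that otherwise plays the role of a nullary one:

```d
struct Dummy {}

struct Widget
{
    int id;
    int[] data;

    // The parameter exists only to make this a non-default constructor;
    // its value is never used.
    this(Dummy)
    {
        id = 1;
        data = new int[](4);
    }
}

void main()
{
    Widget a;                 // a == Widget.init, no constructor runs
    auto b = Widget(Dummy()); // runs the "nullary" constructor
    assert(b.data.length == 4);
}
```

The call site has to spell `Widget(Dummy())` instead of `Widget()`, which is the part objected to further down the thread.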
Jan 02 2023
next sibling parent Max Samukha <maxsamukha gmail.com> writes:
On Monday, 2 January 2023 at 22:53:30 UTC, Walter Bright wrote:
 On 12/31/2022 2:28 AM, Max Samukha wrote:
 For types that require runtime construction, initializing to 
 T.init does not result in a constructed object.
The idea is to: 1. have construction that cannot fail. This helps avoid things like double-fault exceptions
I understand the idea. My point (again) is that an object initialized to a dummy value is not a constructed object. Yes, the dummy value is better than random garbage. However, you cannot say "construction cannot fail". An object initialized to a dummy value has not been constructed. You have deferred construction to a later point, and it may fail there. Lazy initialization, initialization in factory functions, etc. is nothing but deferred construction.

C# distinguishes `default(T)` (their equivalent of `T.init`) from `T()`. From https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/struct#struct-initialization-and-default-values:

"That creates a distinction between an uninitialized struct, which has its default value and an initialized struct, which stores values set by constructing it."

See, they don't pretend that "set to default value" is "initialized"?
 2. have initializers that can be placed in read only memory
That seems to be irrelevant to my argument. Fully constructed objects can be serialized to ROM as well.
 3. have something to set a destroyed object to, in case of 
 dangling references and other bugs
Yes, the object is set to the dummy state before construction and after destruction. Constructors are for construction. `this()` is a constructor. Just unban it.
 4. present to a constructor an already initialized object. This 
 prevents the common C++ problem of adding a field and 
 forgetting to construct it in one of the overloaded 
 constructors, a problem that has plagued me with erratic 
 behavior
`this()` is a constructor - present to it the default-initialized object just as you do to other constructors.
 5. provide a NaN state. I know many people don't like NaN 
 states, but if one does, the default construction is perfect 
 for implementing one.
That is irrelevant to my point. I am not arguing against default initialization. Just don't call it construction. Constructors replace the dummy value with a useful one.
 6. it fits in well with (future) sumtypes, where the default 
 initializer can be the error state.
No objection.
 An alternative to factory functions is to have a constructor 
 with a dummy argument. Nothing says one has to actually use the 
 parameters to a constructor.
I know about the dummy argument hack. I just see no reason why I have to do that.
Jan 03 2023
prev sibling parent Quirin Schroll <qs.il.paperinik gmail.com> writes:
On Monday, 2 January 2023 at 22:53:30 UTC, Walter Bright wrote:
 On 12/31/2022 2:28 AM, Max Samukha wrote:
 For types that require runtime construction, initializing to 
 T.init does not result in a constructed object.
The idea is to: 1. have construction that cannot fail. This helps avoid things like double-fault exceptions
Here, it would help if you clarified what you mean by _double-fault exceptions_, because I just tried to look it up and was led to tennis and CPU interrupts. I have a rough idea of what you mean and could guess, but I could just ask.

Construction that cannot fail sounds nice, and if an aggregate type constructor can pull it off, it should go for it. But this sounds a lot like `nothrow` and not something the language should impose. I know people dislike complicated rules because of exceptions (to rules, not `Exception`s), but a possibility could be that nullary `struct` constructors must be `nothrow` or be annotated `throw`.
 2. have initializers that can be placed in read only memory
I’m not saying that `init` isn’t a great idea. It’s just that `init` shouldn’t be used explicitly, but only as “the thing a constructor must act on to produce a valid object”. A “naked” `init` may be an object that violates its invariants. An example would be a string optimized for short values (SSO). It has (at least) a `pointer` to data, a fixed-size internal `buffer`, and a `length` with the invariant: `pointer = &buffer[0]` if and only if `length <= buffer.length`. A SSO’s `init` cannot possibly represent the empty string unless we allow `pointer` to be `null` to represent it. This means that a SSO has two representations for the empty string. Or we interpret the `null` data pointer as a `null` string. In any case, we get something we don’t want.
 3. have something to set a destroyed object to, in case of 
 dangling references and other bugs
If a NaN state is available, use that. (I don't think NaN states are bad; I actually think that every built-in type except `bool` should have one: signed and unsigned integer types could use `T.min` and `T.max`. Setting those to 0 is bad because in a lot of contexts, 0 is a perfectly reasonable value, whereas `int.min` and `size_t.max` rarely are.)
 4. present to a constructor an already initialized object. This 
 prevents the common C++ problem of adding a field and 
 forgetting to construct it in one of the overloaded 
 constructors, a problem that has plagued me with erratic 
 behavior
The problem is, C++ does not complain about you forgetting that field. (For other people:) In C++, if a struct field is of built-in type (e.g. `int`) and you forget to initialize it, it has an unspecified value. Aggregate types call a nullary constructor and fail to compile if no nullary constructor exists. Now, even if there is a nullary constructor, it might not be what you want. Requiring initialization of every field in every constructor is what C++ lacks.
 5. provide a NaN state. I know many people don't like NaN 
 states, but if one does, the default construction is perfect 
 for implementing one.
One question is the penalty for the NaN state. Floating-point NaN values are supported by hardware. If we declared `int.min` and `size_t.max` as their respective types' NaN, we'd probably just be specifying existing behavior. The issue is, making them sticky incurs costs. Floating-point NaN serves two purposes: indicating an invalid result, and propagating the error through the program's execution. Integer min/max values are already used for the former. People probably don't like them doing the latter.

Another issue with floating-point NaN values is their weird comparison behavior. I understand the argument that `x == y` should be false if `x` and `y` happen to be `NaN`, but `if (x == double.nan)` being silently always false feels broken.
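A small illustration of that comparison pitfall; `std.math.isNaN` is the check one actually wants:

```d
import std.math : isNaN;
import std.stdio : writeln;

void main()
{
    double x = double.nan;

    // Any ordinary comparison involving NaN is false, including this one,
    // so the branch below can never be taken.
    if (x == double.nan)
        writeln("never printed");

    // NaN is not even equal to itself.
    writeln(x == x);  // false
    writeln(x != x);  // true

    // The correct test:
    writeln(x.isNaN); // true
}
```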
 6. it fits in well with (future) sumtypes, where the default 
 initializer can be the error state.
I’m curious what comes out of this.
 An alternative to factory functions is to have a constructor 
 with a dummy argument. Nothing says one has to actually use the 
 parameters to a constructor.
Or we could just allow a nullary struct constructor. It should be backwards compatible (at least to a large degree) if D defines `this() … {}` when `this()` is not defined explicitly (as `@disable`d or otherwise). An explicit `this()` should be `nothrow` if failure is a problem. With `throw` as an attribute, `this() throw { … }` is a kind of: "Sorry, Walter, I know you wanted the best for me, but for this type, it's too wrong to be right."

That way, a declaration like `T x;` will call a constructor that in almost all cases does nothing, and in the remaining cases almost always does something that cannot fail.

Sorry for the late answer.
Feb 07 2023
prev sibling next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Mon, Dec 26, 2022 at 04:38:33PM -0800, Walter Bright via Digitalmars-d wrote:
 C uninitialized variables was another fountain of endless and hard to
 track down problems. D initializes them by default for a very good
 reason.
C's manual memory management is another fountain of endless hard-to-debug pointer bugs and pernicious memory problems. Manual memory management requires absolute consistency and utmost precision, two things humans are notoriously bad at. In spite of some people having knee-jerk reactions to the GC, D having one has been a big saver of headaches in terms of the amount of time and effort spent writing and debugging memory management code.

T

--
A tree is held up by its roots, a man by his friends. (Дерево держится корнями, а человек - друзьями.)
Dec 27 2022
prev sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Tuesday, 27 December 2022 at 00:38:33 UTC, Walter Bright wrote:
 C uninitialized variables was another fountain of endless and 
 hard to track down problems. D initializes them by default for 
 a very good reason.
Yes, D can certainly claim to have better strategies than C to 'reduce the number of weaknesses that occur in software'. This is a good thing, surely. https://cwe.mitre.org/data/definitions/1337.html

But in the end, C (as you know, of course) operates at a low level of abstraction, and does so on purpose, and therefore such mitigation strategies are not consistent with the spirit and design goals of C. Nobody (as far as I know) works on trying to create a better assembly. It is what it is.

Why does everyone want to create a better C? Well, they don't really. What they really want to do is reduce programming errors by constantly raising the level of abstraction.

I like initialised variables in D. I wouldn't like them in C. It would feel like I've lost control. And in C, it should always be me who is in control (otherwise I'd have to revert to assembly).
Dec 27 2022
parent RTM <riven baryonides.ru> writes:
On Tuesday, 27 December 2022 at 21:39:36 UTC, areYouSureAboutThat 
wrote:

 Nobody (as far as i know) works on trying to create a better 
 assembly. It is what it is.
There was Randall Hyde’s HLA. Abandoned long ago, though.
Dec 27 2022
prev sibling next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
wrote:
 ....
 ......

 Checked C:

     int a[5] = { 0, 1, 2, 3, 4};
     _Array_ptr<int> p : count(5) = a;  // p points to 5 
 elements.

 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.
I've decided that I like neither:

   _Array_ptr<int> p

nor

   int p[..] = a;

I prefer:

   int a[5] = { 0, 1, 2, 3, 4};
   checked int p[] = a;  // p points to 5 elements.

Attributes would be a great way of extending the C language without introducing odd-looking syntax, like Microsoft's, that makes me wanna puke, or yours, that makes me look at it for 30 minutes, trying to work out... what does [..] actually mean...
Jan 03 2023
next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 4 January 2023 at 03:42:42 UTC, areYouSureAboutThat 
wrote:

another example:

/* This function searches for an integer in an array.
    If it finds the integer, it returns the index in the
    array where the integer occurs. Otherwise, it returns -1
*/

//int find (int key , array_ptr <int > a : count ( len ), int len 
) // Microsoft Checked C syntax.
int find (int key ,  checked int a[] : len, int len ) // 
alternative syntax using  attributes
{
     for (int i = 0; i < len; i ++)
     {
         // NOTE: a[i] is bounds checked.
         // The checking ensures that i is between 0 and len .
         if (a[i] == key )
         {
             return i;
         }
     }
     return -1;
}
Jan 03 2023
parent Salih Dincer <salihdb hotmail.com> writes:
On Wednesday, 4 January 2023 at 04:24:33 UTC, areYouSureAboutThat 
wrote:
 int find (int key ,  checked int a[] : len, int len ) // 
 alternative syntax using  attributes
 {
     for (int i = 0; i < len; i ++)
     {
         // NOTE: a[i] is bounds checked.
         // The checking ensures that i is between 0 and len .
         if (a[i] == key )
         {
             return i;
         }
     }
     return -1;
 }
I hardly ever use indexOf anymore, because I don't want to use a signed number. I want to be able to use it inside the if. Wouldn't it occur to anyone other than me to write this function:

```d
auto nextIndexOf(A)(A[] arr, A key)
{
    size_t i = 1;
    while (i <= arr.length)
    {
        if (arr[i - 1] == key)
        {
            return i;
        }
        else
            i++;
    }
    return 0;
}

void main()
{
    auto fun = [ 1, 2, 3, 4, 5 ];
    import std.stdio;

    if (auto result = fun.nextIndexOf(6))
    {
        "index of ".write(result - 1);
    }
    else
    {
        "Not".write;
    }
    " Found".writeln;

    if (auto result = fun.nextIndexOf(5))
    {
        "index of ".write(result - 1);
    }
    else
    {
        "Not".write;
    }
    " Found".writeln;
}
```

Fun: "Yar bana bir eglence" by the traditional Turkish shadow play: Hacivat&Karagoz

SDB 79
Jan 04 2023
prev sibling parent matheus <matheus gmail.com> writes:
On Wednesday, 4 January 2023 at 03:42:42 UTC, areYouSureAboutThat 
wrote:
 On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
 wrote:
 ....
 ......

 Checked C:

     int a[5] = { 0, 1, 2, 3, 4};
     _Array_ptr<int> p : count(5) = a;  // p points to 5 
 elements.

 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.
 I've decided that I like neither:

 _Array_ptr<int> p

 (nor)

 int p[..] = a;

 I prefer:

 int a[5] = { 0, 1, 2, 3, 4};
 checked int p[] = a;  // p points to 5 elements.

 Attributes would be a great way of extending the C language without introducing odd-looking syntax, like Microsoft's, that makes me wanna puke, or yours, that makes me look at it for 30 minutes, trying to work out .. what does [..] actually mean...

In fact this seems reasonable, and with this, what about changing it to __checked? Then C programmers can use #define when needed, i.e.:

#define __checked
__checked int p[] = a;  // p points to 5 elements.

Of course, using #if / #else to define it when compiling as C vs D.

Matheus.
Jan 04 2023
prev sibling parent reply Dom DiSc <dominikus scherkl.de> writes:
On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
wrote:
 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.
I wish you would first implement this for D:

```d
uint[] x = [1,2,3];   // create a dynamic array with initial size 3
uint[..] y = [1,2,3]; // create a static array with automatic size (not possible now)
```
Jan 05 2023
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 09:07:57 UTC, Dom DiSc wrote:
 On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
 wrote:
 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.
 I wish you would first implement this for D:

 ```d
 uint[] x = [1,2,3];   // create a dynamic array with initial size 3
 uint[..] y = [1,2,3]; // create a static array with automatic size (not possible now)
 ```

Neither proposal will get into C. One of the design goals of C is actually to resist change (no, I'm not kidding). I think that is a good thing.

As for D, yes, it sure is surprising the compiler cannot automatically size a static array by the number of arguments being provided to it.

But I'm not a fan of this syntax [..]. Every time I see it, I think, wtf is that! It's also confusing, as it uses syntax from slices [1..$].

Maybe I'd settle on:

int[T] array = [1,2,3];

Now, as a programmer, I already know that T is a token that will get automatically replaced with something meaningful.
Jan 05 2023
next sibling parent Hipreme <msnmancini hotmail.com> writes:
On Thursday, 5 January 2023 at 09:37:18 UTC, areYouSureAboutThat 
wrote:
 On Thursday, 5 January 2023 at 09:07:57 UTC, Dom DiSc wrote:
 On Wednesday, 21 December 2022 at 19:31:22 UTC, Walter Bright 
 wrote:
 My proposal for C:

     int a[5] = { 0, 1, 2, 3, 4};
     int p[..] = a;  // p points to 5 elements.
 I wish you would first implement this for D:

 ```d
 uint[] x = [1,2,3];   // create a dynamic array with initial size 3
 uint[..] y = [1,2,3]; // create a static array with automatic size (not possible now)
 ```

 Neither proposal will get into C. One of the design goals of C is actually to resist change (no, I'm not kidding). I think that is a good thing.

 As for D, yes, it sure is surprising the compiler cannot automatically size a static array by the number of arguments being provided to it.

 But I'm not a fan of this syntax [..]. Every time I see it, I think, wtf is that! It's also confusing, as it uses syntax from slices [1..$].

 Maybe I'd settle on:

 int[T] array = [1,2,3];

 Now, as a programmer, I already know that T is a token that will get automatically replaced with something meaningful.

You can't use [T] because it is reserved as a user symbol and could break D code. For instance, T can be used as a number:

```d
struct IntStaticArray(uint T)
{
    int[T] data;
    alias data this;
}

IntStaticArray!(5) arr;
```

The most accepted syntax for an inferred-length static array was `int[$] a = [1,2,3]`. But people insist that `import std.array : staticArray; int[] a = [1,2,3].staticArray;` is better. So I don't know what to say.

Anyway, this thread has gone quite far and is really unproductive. So just go and fix C's biggest mistake so people can get back to being productive again.
Jan 05 2023
prev sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 09:37:18AM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
[...]
 As for D, yes, it sure is surprising the compiler cannot automatically
 size a static array by the number of arguments being provided to it.
Of course it can. See std.array.staticArray.

Yes, yes, people hate the standard library for some weird reason. D must be the only language where people actively hate its standard library. It's like writing C without using stdlib or stdio, or writing C++ without using STL. Makes no sense.

Built-in syntax has been proposed multiple times in the past, the main blocker being:
 But I'm not a fan of this syntax [..]
Everybody says that about every proposal that has come up.

    // All this has been proposed before:
    int[_] staticArray;     // "but _ is a valid identifier!"
    int[$] staticArray;     // "but $ looks ugly!"
    int[auto] staticArray;  // "but auto is too verbose!"
    ... // the list goes on

    // And now this:
    int[..] staticArray;    // "but i'm not a fan of this syntax!"

If we would stop bikeshedding over trivialities such as syntax, we'd have implemented this years ago.


T

--
Notwithstanding the eloquent discontent that you have just respectfully expressed at length against my verbal capabilities, I am afraid that I must unfortunately bring it to your attention that I am, in fact, NOT verbose.
Jan 05 2023
next sibling parent reply Max Samukha <maxsamukha gmail.com> writes:
On Thursday, 5 January 2023 at 17:30:59 UTC, H. S. Teoh wrote:

 	int[$] staticArray;	// "but $ looks ugly!"
We need to stop listening to people who hate dollars. Seriously, $ is already in the language and means "length of the array". Let's use it.
Jan 05 2023
next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 06:19:12PM +0000, Max Samukha via Digitalmars-d wrote:
 On Thursday, 5 January 2023 at 17:30:59 UTC, H. S. Teoh wrote:
 
 	int[$] staticArray;	// "but $ looks ugly!"
We need to stop listening to people who hate dollars. Seriously, $ is already in the language and means "length of the array". Let's use it.
+1. Anyone up for pushing this DIP through? I'd support it. T -- Береги платье снову, а здоровье смолоду.
Jan 05 2023
prev sibling parent reply IGotD- <nise nise.com> writes:
On Thursday, 5 January 2023 at 18:19:12 UTC, Max Samukha wrote:
 On Thursday, 5 January 2023 at 17:30:59 UTC, H. S. Teoh wrote:

 	int[$] staticArray;	// "but $ looks ugly!"
We need to stop listening to people who hate dollars. Seriously, $ is already in the language and means "length of the array". Let's use it.
I didn't like the dollars in the DIP with the enum inference (https://forum.dlang.org/thread/wpqmuysuxadcwnzypnxk forum.dlang.org). However, here it makes sense and already means something (the array's length).

uint[$] y = [1,2,3];

Looks ok to me. I would support a DIP suggesting this.

uint[..] y = [1,2,3];

is also ok. Static array length based on an initializer is long overdue.
Jan 05 2023
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 07:10:47PM +0000, IGotD- via Digitalmars-d wrote:
[...]
 Static array length based on initializer is long overdue.
Let's do it!! Time to resurrect that DIP and push it through. T -- First Rule of History: History doesn't repeat itself -- historians merely repeat each other.
Jan 05 2023
prev sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 17:30:59 UTC, H. S. Teoh wrote:
 Of course it can. See std.array.staticArray.
I didn't know that. Thanks. I'll use it.

But in that case, what are people fussing about? Just use that. Why introduce nonsense such as this: [$] [..] ??? Mmm .. maybe [?] ??

btw. I don't consider syntax to be bikeshedding. As a programmer, nothing is more important to me than sensible (and predictable) syntax.

However, instead of having to do this:

auto myArray = [0, 1].staticArray;

I would like the compiler to infer that I'm creating a staticArray using this:

int[] myArray = [0, 1].staticArray;

I don't see why it requires me to only ever use auto. I don't like using auto here.
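
For reference, a minimal sketch of what `std.array.staticArray` already infers today (assuming a reasonably recent Phobos):

```d
import std.array : staticArray;

void main()
{
    auto a = [0, 1].staticArray;            // length inferred: typed as int[2]
    static assert(is(typeof(a) == int[2]));

    int[2] b = [0, 1].staticArray;          // spelling the length out also works
}
```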
Jan 05 2023
next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 20:58:52 UTC, areYouSureAboutThat 
wrote:

This would be the absolute best approach:

int[] myDynamicArray = [0, 1]; // your typical dynamic D array.

int[] myStaticArray = [0, 1].staticArray;
static assert(is(typeof(myStaticArray) == int[2]));


Is there any reason the compiler cannot be made to do this?
Jan 05 2023
parent reply IGotD- <nise nise.com> writes:
On Thursday, 5 January 2023 at 21:08:28 UTC, areYouSureAboutThat 
wrote:
 int[] myStaticArray = [0, 1].staticArray;
 static assert(is(typeof(myStaticArray) == int[2]));


 Is there any reason the compiler cannot be made to do this?
In your example, wouldn't myStaticArray be a slice?
Jan 05 2023
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 21:18:00 UTC, IGotD- wrote:
 On Thursday, 5 January 2023 at 21:08:28 UTC, 
 areYouSureAboutThat wrote:
 int[] myStaticArray = [0, 1].staticArray;
 static assert(is(typeof(myStaticArray) == int[2]));


 Is there any reason the compiler cannot be made to do this?
In your example, wouldn't myStaticArray be a slice?
how can this possibly be misinterpreted??

int[] myStaticArray = [0, 1].staticArray;

The '.staticArray' provides all the context needed to understand what type is being created.
Jan 05 2023
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 10:03:10PM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
 On Thursday, 5 January 2023 at 21:18:00 UTC, IGotD- wrote:
 On Thursday, 5 January 2023 at 21:08:28 UTC, areYouSureAboutThat wrote:
 
 int[] myStaticArray = [0, 1].staticArray;
 static assert(is(typeof(myStaticArray) == int[2]));
 
 
 Is there any reason the compiler cannot be made to do this?
In your example, wouldn't myStaticArray be a slice?
how can this possibly be misinterpreted?? int[] myStaticArray = [0, 1].staticArray; the '.staticArray' provides all the context needed to understand what type is being created.
`int[]` is a slice. The static array is written as `int[2]`.

This is valid (though not exactly a good idea):

    // Take a slice of a static array
    int[] myStaticArray = [0, 1].staticArray[];


T

--
Lottery: tax on the stupid. -- Slashdotter
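
To spell out the distinction being made here, a small illustrative sketch:

```d
void main()
{
    int[2] fixed = [0, 1];   // static array: the length is part of the type
    int[]  view  = fixed[];  // slice: a (pointer, length) view over 'fixed'

    static assert(is(typeof(fixed) == int[2]));
    static assert(is(typeof(view) == int[]));
    assert(view.length == 2); // known only at run time for a slice
}
```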
Jan 05 2023
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 22:09:15 UTC, H. S. Teoh wrote:
 `int[]` is a slice. The static array is written as `int[2]`.

 This is valid (though not exactly a good idea):

 	// Take a slice of a static array
 	int[] myStaticArray = [0, 1].staticArray[];


 T
to be honest, I've never needed a static array in D. I just let the gc take care of everything with [] so I'll leave the 'controversy' over this to those that do ;-)

in the meantime.. this is hardly .. hard:

auto myStaticArray = [0, 1].staticArray;

int[2] myStaticArray = [0, 1].staticArray;

people can still count.. can't they?
Jan 05 2023
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 10:35:37PM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
[...]
 to be honest, I've never needed a static array in D.
:-D I haven't really used static arrays very much myself, but occasionally they're useful. E.g. in one project where I use a lot of 4-element int arrays, using static arrays for them reduces a lot of GC pressure, and also gives me nice by-value semantics (useful in certain contexts). [...]
 in the meantime.. this is hardly .. hard:
 
 auto myStaticArray = [0, 1].staticArray;
 
 int[2] myStaticArray = [0, 1].staticArray;
 
 people can still count.. can't they?
It's not so much about counting, it's about maintainability / mutability of the code. For example, if you had a long static array like this:

    int[100] x = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
        43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
        103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157,
        163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
        227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277,
        281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
        353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419,
        421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479,
        487, 491, 499, 503, 509, 521, 523, 541 ];

if one day you decide that some elements have to be added/removed from this array (based on some arbitrary criteria), then you have to recount and update the length, which, for a long array, isn't a trivial effort. It's also not very DRY; the compiler can already figure this out for you, so why make the programmer repeat the same work?

Note that static arrays aren't just arrays of numbers; it could potentially be an array of some aggregate type with complex initializers that makes it annoying to have to recount every time you update it.


T

--
Жил-был король когда-то, при нём блоха жила.
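
For what it's worth, the library route already lets the compiler do the counting; a minimal sketch, assuming `std.array.staticArray`:

```d
import std.array : staticArray;

void main()
{
    // The compiler counts the initializer; no hand-maintained "100" to keep in sync.
    auto primes = [2, 3, 5, 7, 11, 13].staticArray;
    static assert(is(typeof(primes) == int[6]));
}
```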
Jan 05 2023
next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 23:09:43 UTC, H. S. Teoh wrote:
 It's not so much about counting, it's about maintainability / 
 mutability of the code.  For example, if you had a long static 
 array like this:

 	int[100] x = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
 		43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
 		103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157,
 		163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
 		227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277,
 		281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
 		353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419,
 		421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479,
 		487, 491, 499, 503, 509, 521, 523, 541 ];
I'd write that like this ;-)

    int[0] x = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
        43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
        103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157,
        163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
        227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277,
        281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
        353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419,
        421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479,
        487, 491, 499, 503, 509, 521, 523, 541 ];
Error: mismatched array lengths, 0 and 100
..let the compiler count them ;-)
Jan 05 2023
prev sibling parent reply Patrick Schluter <Patrick.Schluter bbox.fr> writes:
On Thursday, 5 January 2023 at 23:09:43 UTC, H. S. Teoh wrote:
 On Thu, Jan 05, 2023 at 10:35:37PM +0000, areYouSureAboutThat 
 via Digitalmars-d wrote: [...]
 to be honest, I've never needed a static array in D.
:-D I haven't really used static arrays very much myself, but occasionally they're useful. E.g. in one project where I use a lot of 4-element int arrays, using static arrays for them reduces a lot of GC pressure, and also gives me nice by-value semantics (useful in certain contexts). [...]
 in the meantime.. this is hardly .. hard:
 
 auto myStaticArray = [0, 1].staticArray;
 
 int[2] myStaticArray = [0, 1].staticArray;
 
 people can still count.. can't they?
 It's not so much about counting, it's about maintainability / mutability of the code. For example, if you had a long static array like this:

     int[100] x = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
         43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
         103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157,
         163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
         227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277,
         281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
         353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419,
         421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479,
         487, 491, 499, 503, 509, 521, 523, 541 ];

 if one day you decide that some elements have to be added/removed from this array (based on some arbitrary criteria), then you have to recount and update the length, which, for a long array, isn't a trivial effort. It's also not very DRY; the compiler can already figure this out for you, so why make the programmer repeat the same work?

 Note that static arrays isn't just arrays of numbers, it could potentially be an array of some aggregate type with complex initializers that makes it annoying to have to recount every time you update it.
The annoying and shameful thing about this static array size auto-determination is that it is a feature that even K&R C was able to provide. We will probably see men on the Moon again before D is able to do it ;-)
Jan 06 2023
next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Jan 06, 2023 at 01:43:33PM +0000, Patrick Schluter via Digitalmars-d
wrote:
 On Thursday, 5 January 2023 at 23:09:43 UTC, H. S. Teoh wrote:
[...]
 	int[100] x = [ 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
 		43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101,
 		103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157,
 		163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223,
 		227, 229, 233, 239, 241, 251, 257, 263, 269, 271, 277,
 		281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349,
 		353, 359, 367, 373, 379, 383, 389, 397, 401, 409, 419,
 		421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479,
 		487, 491, 499, 503, 509, 521, 523, 541 ];
[...]
 The annoying and shameful thing about this static array size
 auto-determination is that it is a feature that even K&R C was able to
 provide. We will probably see men on the Moon again before D is able
 to do it ;-)
LOL... it's *really* time to push that DIP through to the end.

I vote for either `int[$]` or `int[auto]`. Syntactically I prefer the former, but upon more careful consideration the latter might be more bulletproof, because conceivably, in some far-fetched scenario where you might construct a static array inside an index expression where $ may be bound to the length of the outer array, $ could prove to be ambiguous:

    // REALLY contrived case that probably won't happen in real life
    int[] outerArray = [ 1, 2, 3, 4, 5 ];
    outerArray[{
        // Does $ here refer to outerArray.length (5) or the length
        // of the initializer (3)?
        size_t[$] indices = [ 0, 1, 2 ];
        return indices[1];
    }()] = 0;

I don't remember now whether $ is carried over into a function literal subexpression, but if it does, the above example would probably cause a compile error. Using `auto` eliminates this potential ambiguity.


T

--
Маленькие детки - маленькие бедки.
Jan 06 2023
prev sibling parent reply Ki Rill <rill.ki yahoo.com> writes:
On Friday, 6 January 2023 at 13:43:33 UTC, Patrick Schluter wrote:
 On Thursday, 5 January 2023 at 23:09:43 UTC, H. S. Teoh wrote:
 On Thu, Jan 05, 2023 at 10:35:37PM +0000, areYouSureAboutThat 
 via Digitalmars-d wrote: [...]
 to be honest, I've never needed a static array in D.
:-D I haven't really used static arrays very much myself, but occasionally they're useful. E.g. in one project where I use a lot of 4-element int arrays, using static arrays for them reduces a lot of GC pressure, and also gives me nice by-value semantics (useful in certain contexts). [...]
The annoying and shameful thing about this static array size auto-determination is that it is a feature that even K&R C was able to provide. We will probably see men on the Moon again before D is able to do it ;-)
It has been implemented:

```d
scope int[] arr = [10, 20, 30];
```

In my opinion, it's a much better design than this:

```d
int[$] arr = [10, 20, 30];
int[..] arr = [10, 20, 30];
int[auto] arr = [10, 20, 30];
```

Link: https://dlang.org/changelog/pending.html#dmd.scope-array-on-stack
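
One caveat worth noting (a sketch, assuming the behaviour described in that pending changelog entry): the `scope` form avoids the GC allocation, but the variable is still typed as a slice, so the length is not part of the type the way it is for `int[3]`:

```d
void main()
{
    scope int[] arr = [10, 20, 30]; // literal placed on the stack per the pending change
    assert(arr.length == 3);        // run-time length; typeof(arr) is still int[]

    int[3] fixed = [10, 20, 30];    // static array: the length is part of the type
    static assert(fixed.length == 3);
}
```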
Jan 20 2023
parent kdevel <kdevel vogtner.de> writes:
On Friday, 20 January 2023 at 15:41:20 UTC, Ki Rill wrote:
 [...]
 It has been implemented:
 [...]
 Link: 
 https://dlang.org/changelog/pending.html#dmd.scope-array-on-stack
"Change Log: 2.103.0" ??? https://dlang.org/changelog/2.102.0.html#log_float_double_implementations https://dlang.org/changelog/pending.html#log_float_double_implementations One change in two versions?
Jan 20 2023
prev sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 08:58:52PM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
[...]
 But in that case, what are people fussing about. Just use that.
 
 Why introduce nonsense such as this: [$] [..]  ???
As you said yourself later, syntax. ;-)
 Mmm .. maybe [?]   ??
`?` is the ternary operator, it's weird to use it in this context with a totally unrelated meaning.
 btw. I don't consider syntax as bikeshedding. As a programmer, nothing
 is more important to me than sensible (and predictable) syntax.
Syntax certainly has its place (I wouldn't ever want to use C++ template syntax again unless I'm forced to, for example), but semantics is far more important in a programming language than syntax. The prettiest syntax in the world is worthless if it cannot express what I want to express in my code, or if it has weird semantics with strange corner cases that make my code hard to understand.
 However, instead of having to do this:
 
 auto myArray = [0, 1].staticArray;
 
 I would like the compiler to infer that I'm creating a staticArray
 using this:
 
 int[] myArray = [0, 1].staticArray;
 
 I don't see why it requires my to only ever use auto.
This does work:

    int[2] myArray = [0, 1].staticArray;

Though it does also defeat the purpose of .staticArray. :-D
 I don't like using auto here.
It's already obvious from the initializer what the type of myArray is, why would you want to repeat it?

I prefer my code to be DRY. I almost never write:

    int[] x = [1, 2, 3];
    MyStruct s = MyStruct(1, 2, 3);
    MyClass obj = new MyClass;

Too much redundant information. This is better:

    auto x = [1, 2, 3];
    auto s = MyStruct(1, 2, 3);
    auto obj = new MyClass;

The compiler can already figure out the types for me; let the machine do its job while I focus on more important things. Like actually solving the programming problem I set out to solve, for example.


T

--
LINUX = Lousy Interface for Nefarious Unix Xenophobes.
Jan 05 2023
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/21/22 2:09 PM, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894
 
 I'm wondering. Should I just go ahead and implement [..] in ImportC?
Stop trying to fix C. Fix D instead. -Steve
Dec 21 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 11:35 AM, Steven Schveighoffer wrote:
 Stop trying to fix C. Fix D instead.
[..] is edging ahead in the twitter poll!
Dec 21 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/21/22 7:46 PM, Walter Bright wrote:
 On 12/21/2022 11:35 AM, Steven Schveighoffer wrote:
 Stop trying to fix C. Fix D instead.
[..] is edging ahead in the twitter poll!
It's the bots! Don't listen to them! -Steve
Dec 21 2022
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On Thursday, 22 December 2022 at 00:46:03 UTC, Walter Bright 
wrote:
 On 12/21/2022 11:35 AM, Steven Schveighoffer wrote:
 Stop trying to fix C. Fix D instead.
[..] is edging ahead in the twitter poll!
We don't need faster horses.

To quote Coluche: "Just because lots of people are wrong doesn't make them right."
Dec 21 2022
prev sibling parent bauss <jacobbauss gmail.com> writes:
On Wednesday, 21 December 2022 at 19:35:54 UTC, Steven 
Schveighoffer wrote:
 On 12/21/22 2:09 PM, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894
 
 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
Stop trying to fix C. Fix D instead. -Steve
We have gone full circle.
Dec 21 2022
prev sibling next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
I thought the intent of ImportC was primarily to facilitate the use of C libraries, e.g., sqlite, from D without having to manually translate function prototypes and other definitions (such as typedefs). What you seem to be proposing is an extension to the C language that the ImportC compiler (which is a C compiler) understands.

How does fixing a problem in C that the C community hasn't agreed is a problem, in a way it hasn't agreed to, benefit the D community? Wouldn't it be better to devote ImportC time and effort to extending ImportC's understanding of existing variants of C used in libraries that it currently doesn't understand (I've documented examples of this in a recent bug report that you asked me to file)?

Perhaps I'm missing something here. If so, please enlighten me.

/Don
Dec 21 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 11:38 AM, Don Allen wrote:
 Perhaps I'm missing something here. If so, please enlighten me.
ImportC already has a couple extensions from D to make programming in it easier.

But to address your point, ImportC has been a big success for D. People trying to interface D with C and C++ have many times lamented the lack of dynamic arrays in C and C++, leading to ugly interface hacks.

The advantages are:

1. drawing attention to ImportC, which will draw attention to D

2. D can call ImportC code, and ImportC code can call D code. This will make one of the most-used features of D easy to cross over between the two languages.

3. Help get [..] into the C Standard which will help D, too, by making D easier to interface with C

4. Getting it into C means better C debugger support for D
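
A sketch of the kind of boundary this is meant to smooth out, assuming the proposed `[..]` parameter lowers to the same (pointer, length) pair as a D slice; the C-side declaration uses the proposed syntax and is hypothetical, since no C compiler accepts it today:

```d
// Hypothetical C side, in the proposed syntax:
//     int sum(int a[..]);
//
// D side: the same thing is just an ordinary slice parameter.
extern (C) int sum(int[] a)
{
    int total = 0;
    foreach (x; a)
        total += x;
    return total;
}
```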
Dec 21 2022
next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 On 12/21/2022 11:38 AM, Don Allen wrote:
 Perhaps I'm missing something here. If so, please enlighten me.
 ImportC already has a couple extensions from D to make programming in it easier.

 But to address your point, ImportC has been a big success for D. People trying to interface D with C and C++ have many times lamented the lack of dynamic arrays in C and C++, leading to ugly interface hacks.

 The advantages are:

 1. drawing attention to ImportC, which will draw attention to D

 2. D can call ImportC code, and ImportC code can call D code. This will make one of the most-used features of D easy to cross over between the two languages.

 3. Help get [..] into the C Standard which will help D, too, by making D easier to interface with C

 4. Getting it into C means better C debugger support for D

I read your paper/discussion of C's biggest mistake. I'm not prepared to agree that what you discuss is The Biggest, but it's certainly up there in the long list of C's mistakes. So I think your argument is persuasive.

But it is far from clear to me that making that argument by implementing your proposed new construct in ImportC is the best way to go about this. For example, you say

 2. D can call ImportC code, and ImportC code can call D code. This will make one of the most-used features of D easy to cross over between the two languages.

Again, I thought the intent of ImportC was to facilitate access to useful existing C libraries. If the C community doesn't have an instant epiphany and adopt your position of making a distinction between arrays and pointers because you added that feature to ImportC, then your effort is going to be greeted with the sound of one hand clapping.

I think it would be much better if you argued for your proposed change to C to the C-powers-that-be and devoted your ImportC efforts to extending its coverage of existing C dialects. I think *that* would be the best way to make ImportC a big win for D. Zig's C translator handles the include files (gtk.h and everything it drags in and sqlite3.h as well) that ImportC does not. I think your goal for ImportC should be to make it as comprehensive as the Zig translator.

And your key advantage is that D is here now, production-worthy, whereas Zig won't be ready until 2025, according to Kelley's latest roadmap (and based on what I've seen from watching that project, he might well be optimistic).
Dec 21 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 12:25 PM, Don Allen wrote:
 Again, I thought the intent of ImportC was to facilitate access to useful existing C libraries. If the C community doesn't have an instant epiphany and adopt your position of making a distinction between arrays and pointers because you added that feature to ImportC, then your effort is going to be greeted with the sound of one hand clapping.
I have some experience implementing things that no sane person wants, and it becoming a game changer. Of course, I can't know this in advance. But I'm not afraid to try.
 I think it would be much better if you argued for your proposed change to C to the C-powers-that-be and devoted your ImportC efforts to extending its coverage of existing C dialects. I think *that* would be the best way to make ImportC a big win for D.
I agree with you on the importance of improving ImportC's abilities to handle common .h files with their wacky use of C extensions. I just did another improvement for that a couple days ago.
Dec 21 2022
parent Don Allen <donaldcallen gmail.com> writes:
On Wednesday, 21 December 2022 at 20:44:06 UTC, Walter Bright 
wrote:
 On 12/21/2022 12:25 PM, Don Allen wrote:
 Again, I thought the intent of ImportC was to facilitate 
 access to useful existing  C libraries. If the C community 
 doesn't have an instant epiphany and adopt your position of 
 making a distinction between arrays and pointers because you 
 added that feature to ImportC, then your effort is going to be 
 greeted with the sound of one hand clapping.
I have some experience implementing things that no sane person wants, and it becoming a game changer. Of course, I can't know this in advance. But I'm not afraid to try.
 I think it would be much better if you argued for your 
 proposed change to C to the C-powers-that-be and devoted your 
 ImportC efforts to extending its coverage of existing C 
 dialects. I think *that* would be the best way to make ImportC 
 a big win for D.
I agree with you on the importance of improving ImportC's abilities to handle common .h files with their wacky use of C extensions. I just did another improvement for that a couple days ago.
Well, you are the key guy on this project. If I were in your shoes, I'd be prioritizing my todo list in descending order of your best guess of expected value to the project. My guess of the expected value of [..] is epsilon, which is approaching zero. But it's your project, so you do what you want. But you did ask us, and that's my opinion.
Dec 21 2022
prev sibling next sibling parent reply Adam D Ruppe <destructionator gmail.com> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 But to address your point, ImportC has been a big success for D.
How did you determine this? Most people I've talked to either haven't actually used it or have had (predictable) trouble with it. A small minority of D users actually find value in it right now. As I've been saying for a while, I agree it has some potential, but it has not been a big success yet.
Dec 21 2022
next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Wednesday, 21 December 2022 at 21:14:24 UTC, Adam D Ruppe 
wrote:
 On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
 wrote:
 But to address your point, ImportC has been a big success for 
 D.
How did you determine this?
Number of positive Hacker News comments, perhaps? :)
Dec 21 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 1:44 PM, Paul Backus wrote:
 Number of positive Hacker News comments, perhaps? :)
It has indeed generated a lot of positive interest in D.
Dec 21 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 1:14 PM, Adam D Ruppe wrote:
 have had (predictable) trouble with it.
I appreciate any help identifying specific problems and reporting them on bugzilla.
Dec 21 2022
prev sibling next sibling parent IGotD- <nise nise.com> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 But to address your point, ImportC has been a big success for 
 D. People trying to interface D with C and C++ have many times 
 lamented the lack of dynamic arrays in C and C++, leading to 
 ugly interface hacks.
You still need interface hacks anyway, for example C flexible arrays (last member is an array). C often has a size member field determining the size. I think your [..] addition will be used very sparsely as most people will import **standard C** APIs.
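
For instance, bridging a typical flexible-array-member struct still ends up looking something like this on the D side (a sketch; the struct name and layout are illustrative, not from any particular API):

```d
// C side (typical pattern):
//     struct Packet { size_t len; int data[]; };   // flexible array member
//
// D mirror of the fixed part plus a helper exposing the trailing elements
// as a slice; assumes the data starts right after the fixed part, as it
// does for this particular layout.
struct Packet
{
    size_t len;
    // int data[] follows in memory
}

int[] payload(Packet* p) @system
{
    return (cast(int*) (p + 1))[0 .. p.len];
}
```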
 The advantages are:

 1. drawing attention to ImportC, which will draw attention to D
I kind of doubt it. Most people will never know about the [..] addition.
 3. Help get [..] into the C Standard which will help D, too, by 
 making D easier to interface with C
I can get a cat to do my laundry and my taxes before that happens. C has been around for 40 years.
Dec 21 2022
prev sibling next sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 3. Help get [..] into the C Standard which will help D, too, by 
 making D easier to interface with C

 4. Getting it into C means better C debugger support for D
For the record: a feature very similar to [..] was proposed for inclusion into C23, by Martin Uecker in section 3.4 of the paper "Improved Bounds Checking for Array Types" [1]. The paper was discussed at the June 2021 meeting of the C committee [2], and in that discussion, section 3.4 received the following comment:
 With regard to section 3.4, VLA's already store sizes because 
 the platform
 needs the size for arithmetic/subtraction. So you do not need 
 new syntax.
I.e., the use-case of having runtime bounds information available for arrays is already covered by using a pointer-to-VLA, which (unlike VLAs themselves) is a required feature of C23 [3]. For example, one can write:

    void my_func(size_t n, int (*arr)[n])

...and then use sizeof to compute the length at runtime:

    for (size_t i = 0; i < (sizeof *arr / sizeof (*arr)[0]); i++)

Of course, anyone who has used a language like D with real slices will understand that this is a poor substitute, but convincing the C committee of that may be an uphill battle.

[1] https://open-std.org/jtc1/sc22/wg14/www/docs/n2660.pdf
[2] https://open-std.org/jtc1/sc22/wg14/www/docs/n2802.pdf
[3] https://open-std.org/jtc1/sc22/wg14/www/docs/n2778.pdf
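
For contrast with the pointer-to-VLA workaround above, a minimal sketch of the same kind of loop with a D slice, where the length travels with the array itself:

```d
import std.stdio : writeln;

// No separate count parameter: the slice carries its own length.
void printAll(const(int)[] arr)
{
    for (size_t i = 0; i < arr.length; i++)
        writeln(arr[i]);
}

void main()
{
    int[5] a = [0, 1, 2, 3, 4];
    printAll(a[]);
}
```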
Dec 21 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
Thanks for the references.

As you mentioned, they are a poor substitute. The proof of that is I simply 
never run across C code that uses them.
Dec 21 2022
prev sibling next sibling parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:

 But to address your point, ImportC has been a big success for D.
Show me any project that successfully uses ImportC, or it didn't happen.
 I appreciate any help identifying specific problems and 
 reporting them on bugzilla.
And then reverting PRs that actually fix those problems.
 Every time I try to clean up technical debt, a cadre arises 
 objecting that it breaks existing code.
Yet I find dlangui to be broken at least once a month with every new DMD/Phobos release.

On Wednesday, 21 December 2022 at 19:35:54 UTC, Steven Schveighoffer wrote:
 Stop trying to fix C. Fix D instead.
 -Steve
Fully agree. This thing right here https://dlang.org/spec/hash-map.html#static_initialization has been "Not YET implemented" for as long as I've been working with D (that's more than 3 years), and it doesn't seem to be bothering anyone. But yes, implementing a compiler inside a compiler is much more important, when the original compiler doesn't even work properly.
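
For reference, the usual workaround today is to fill the AA in a module constructor instead; a minimal sketch:

```d
immutable int[string] table;

shared static this()
{
    // Runs at program start-up, standing in for the not-yet-implemented
    // static initialization of associative arrays.
    table = ["hello": 1, "world": 2];
}
```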
Dec 22 2022
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 12/22/22 3:26 AM, GrimMaple wrote:

 This thing right here 
 https://dlang.org/spec/hash-map.html#static_initialization has been "Not 
 YET implemented" for as long as I work with D (that's more than 3 
 years), and it doesn't seem to be bothering anyone. But yes, 
 implementing a compiler inside a compiler is much more important, when 
 the original compiler doesn't even work properly.
I have a (clunky) library solution: https://github.com/schveiguy/newaa

You can build one of those at compile-time, and use it at runtime, and it can be explicitly converted to/from builtin AA (without any copying), because it's binary compatible.

e.g.

```d
import std.stdio;
import schlib.newaa;

Hash!(string, int) aa = ["hello": 1, "world": 2];

void foo(int[string] x)
{
    x["blah"] = 5;
}

void main()
{
    foo(aa.asAA);
    writeln(aa["blah"]); // 5
}
```

-Steve
Dec 22 2022
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 12:26 AM, GrimMaple wrote:
 Show me any project that successfuly uses ImportC, or didn't happen.
Phobos compiles its C files with ImportC.
Dec 22 2022
prev sibling next sibling parent Sergey <kornburn yandex.ru> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 On 12/21/2022 11:38 AM, Don Allen wrote:
 ImportC already has a couple extensions from D to make 
 programming in it easier.

 But to address your point, ImportC has been a big success for 
 D. People trying to interface D with C and C++ have many times
At last Beerconf I’ve made a poll for Jitsy users about tools for C interaction. ImportC got 0 votes… the winner btw “manual export(C)” with 3 votes. Of course the statistic is not representative, but still.
 lamented the lack of dynamic arrays in C and C++, leading to 
 ugly interface hacks.

 The advantages are:

 1. drawing attention to ImportC, which will draw attention to D
I can hardly imagine a solid C developer (at the level of an OpenBSD hacker or a Linux/Git kernel contributor) who will use ImportC to get this feature, instead of using gcc/clang or a specific compiler for their target (Intel, Nvidia, etc.).
 2. D can call ImportC code, and ImportC code can call D code.
 This will make one of the most-used features of D easy to cross 
 over between the two languages.
There is no code with this syntax. D needs ImportC to work flawlessly with current libraries and code, not with "code from the future with improved features of C45".
 3. Help get [..] into the C Standard which will help D, too, by 
 making D easier to interface with C
How does that help get it into the C Standard? Isn't it better to prepare a PR for GCC and send a DIP (or whatever they call it) to the C committee? And after it is approved and widely used, add it to ImportC..
 4. Getting it into C means better C debugger support for D
Same as above. Maybe apply the KISS principle and the Unix way? For debugging, use dedicated debugger tools, not an "other-language interoperability tool"?
Dec 22 2022
prev sibling parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 21 December 2022 at 19:59:58 UTC, Walter Bright 
wrote:
 On 12/21/2022 11:38 AM, Don Allen wrote:
 Perhaps I'm missing something here. If so, please enlighten me.
ImportC already has a couple extensions from D to make programming in it easier.
Which no one will ever use. :-)
Dec 22 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 2:21 AM, Iain Buclaw wrote:
 Which no one will ever use. :-)
They were very useful for the ImportC test suite.
Dec 22 2022
next sibling parent Iain Buclaw <ibuclaw gdcproject.org> writes:
On Thursday, 22 December 2022 at 21:18:04 UTC, Walter Bright 
wrote:
 On 12/22/2022 2:21 AM, Iain Buclaw wrote:
 Which no one will ever use. :-)
They were very useful for the ImportC test suite.
It's still a DMD folly. GDC has been able to compile C files since day 0 (19 years ago?) as it delegates foreign sources to the other compilers it was built with - the reverse also works, GCC can compile D sources because it uses the same mechanism. So you could say that C and C++ have been capable of compiling D sources since 2019. ;-)

```
$ gccgo -fno-druntime compiler/test/runnable/bcraii.d -o go-bcraii
$ ./go-bcraii
S.this()
S.~this()
inside
```

None of these tests where dmd pretends to be a C compiler even work outside of dmd.

```
$ gdc compiler/test/compilable/cimport.c
compiler/test/compilable/cimport.c:3:1: error: unknown type name ‘__import’
    3 | __import core.stdc.stdarg;
      | ^~~~~~~~
compiler/test/compilable/cimport.c:3:14: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘.’ token
    3 | __import core.stdc.stdarg;
      |              ^
compiler/test/compilable/cimport.c:4:1: error: unknown type name ‘__import’
    4 | __import imports.impcimport;
      | ^~~~~~~~
compiler/test/compilable/cimport.c:4:17: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘.’ token
    4 | __import imports.impcimport;
      |                 ^
compiler/test/compilable/cimport.c: In function ‘foo’:
compiler/test/compilable/cimport.c:8:5: error: unknown type name ‘va_list’
    8 |     va_list x;
      |     ^~~~~~~
compiler/test/compilable/cimport.c:1:1: note: ‘va_list’ is defined in header ‘<stdarg.h>’; did you forget to ‘#include <stdarg.h>’?
  +++ |+#include <stdarg.h>
    1 | // https://issues.dlang.org/show_bug.cgi?id=22666
compiler/test/compilable/cimport.c:9:16: error: ‘A’ undeclared (first use in this function)
    9 |     return 1 + A;
      |                ^
compiler/test/compilable/cimport.c:9:16: note: each undeclared identifier is reported only once for each function it appears in
```

DMD becoming a C compiler is a side-effect of importC, but just because you can doesn't mean you should.
Dec 23 2022
prev sibling parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Thursday, 22 December 2022 at 21:18:04 UTC, Walter Bright 
wrote:
 On 12/22/2022 2:21 AM, Iain Buclaw wrote:
 Which no one will ever use. :-)
They were very useful for the ImportC test suite.
What you are saying is, ImportC is only useful for, well, ImportC. I'm sorry, but I can't understand your game here. A few posts back you're saying:
 Versioning comes with other problems. The most significant is 
 we lack sufficient staff to maintain multiple versions.
Yet, for maintaining multiple obscure dialects of D there IS enough staff? So, is there enough, or is there not?

There's D, there's worseD ("better" C if you want). Now there's evenWorseD, which is basically going back straight to C. And I picked D specifically because I wanted to break free from C/C++.

What is your goal even, do you care about D at all? I'm going to great lengths to write software that's pure D, and when the creator of D gives up and starts "fixing" other languages, that's a huge turn-off to many.
Dec 23 2022
next sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Friday, 23 December 2022 at 14:24:36 UTC, GrimMaple wrote:

 What is your goal even, do you care about D at all? I'm going 
 to great lengths to write software that's pure D, and when the 
 creator of D gives up and starts "fixing" other languages, 
 that's a huge turn-off to many.
Yes.

This episode prompted me to do some reading about the history of this project. Around the time Andrei Alexandrescu left the project, he wrote a pretty frank message about the state of the D world then and what he thought was needed. This prompted an exchange of messages with others interested in D. My point in bringing this up is that that exchange makes what has happened here feel like deja-vu all over again. Nothing has changed.

I don't think anyone in their right mind would question Walter's knowledge of compilers, his technical chops. But knowing how to make compilers does not qualify a person to manage a complex software project and set its agenda. This requires different talents. It is what the Peter Principle is all about. I speak from experience here, including my own mistakes in a long career of writing code and managing development projects. I was far better at the former than the latter and I think the same can be said of Walter. Reading the Alexandrescu thread demonstrates that I am not the first to make this observation.

The issues with this project's decision-making have exactly the effect you cite. It has had a direct effect on my own confidence in this project. I'm building software for my family to use without me, when that time comes, and it now feels to me like a key component that I'm relying on comes from a dysfunctional family. For that reason, I've decided to fall back on my original C version and bring it up-to-date (including back-porting some of my D code). I have more confidence in the stability of the C environment and its tools, despite the primitive state of the language compared to D.

I do hope this project finally finds a way to right itself. It will take the addition of the right person or people, not an easy task, but not impossible. Good luck.
Dec 23 2022
next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 23 December 2022 at 16:51:40 UTC, Don Allen wrote:
 
 I do hope this project finally finds a way to right itself. It 
 will take the addition of the right person or people, not an 
 easy task, but not impossible. Good luck.
I believe the main hope is with forks or new compiler projects
Dec 23 2022
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 23, 2022 at 05:21:43PM +0000, monkyyy via Digitalmars-d wrote:
 On Friday, 23 December 2022 at 16:51:40 UTC, Don Allen wrote:
 
 I do hope this project finally finds a way to right itself. It will
 take the addition of the right person or people, not an easy task,
 but not impossible. Good luck.
I believe the main hope is with forks or new compiler projects
The size of the community is small enough that forking may well kill the language. For new compiler projects, maybe you should contribute to deadalnix's SDC? It's a D compiler built from scratch according to the published spec. But still quite limited in how much D code it can compile, last time I checked (that was a while ago, though). T -- If I were two-faced, would I be wearing this one? -- Abraham Lincoln
Dec 23 2022
prev sibling parent reply matheus <matheus gmail.com> writes:
On Friday, 23 December 2022 at 16:51:40 UTC, Don Allen wrote:
 ...
 This episode prompted me to do some reading about the history 
of this project. Around the time Andrei Alexandrescu left the 
 project, he wrote a pretty frank message about the state of the 
 D world then and what he thought was needed. This prompted an 
 exchange of messages with others interested in D. My point in 
 bringing this up is that that exchange makes what has happened 
here feel like deja-vu all over again. Nothing has changed.

 ... Reading the Alexandrescu thread demonstrates that I am not the 
 first to make this observation.
 ...
Could you please provide the link for that Thread? Matheus.
Dec 23 2022
parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 23, 2022 at 08:13:20PM +0000, matheus via Digitalmars-d wrote:
 On Friday, 23 December 2022 at 16:51:40 UTC, Don Allen wrote:
 ...
 This episode prompted me to do some reading about the history of
 this project. Around the time Andrei Alexandrescu left the project, he
 wrote a pretty frank message about the state of the D world then and
 what he thought was needed. This prompted an exchange of messages
 with others interested in D. My point in bringing this up is that
 that exchange makes what has happened here feel like deja-vu all
 over again. Nothing has changed.
 
 ... Reading the Alexandrescu thread demonstrates that I am not the
 first to make this observation.  ...
Could you please provide the link for that Thread?
[...]

I'd also like to find out what message exactly Don is referring to here, since as far as I know Andrei never left D; he still pops in here every now and then with something D-related. IIRC, after a period of time in the trenches he just decided to prioritize his personal life instead.


T

--
Question authority. Don't ask why, just do it.
Dec 23 2022
parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 23 December 2022 at 20:37:31 UTC, H. S. Teoh wrote:

I think you have to take Andrei at his word.

https://forum.dlang.org/post/qj18h2$8o1$1 digitalmars.com

At the same time, it's not unreasonable to wonder if there is 
something 'not stated' ;-)

But regardless, it can only be conjecture.

Presumably nothing has changed further since this announcement by 
Andrei?

One thing is for sure... The future of the D Programming language 
is primarily stuck with Walter.

But I think some are overreacting a little.... let Walter add 
this thing... I mean who cares.. really? I certainly don't.

I use C if I want to use C.
I use D if i want to use D.
I do NOT use D cause I want to use C.
Dec 23 2022
next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 23 December 2022 at 20:55:58 UTC, areYouSureAboutThat 
wrote:

..many months before...

https://forum.dlang.org/post/q7lguv$12sd$1 digitalmars.com

I'm still trying to understand what it was that he was trying to 
say ;-)
Dec 23 2022
prev sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 23, 2022 at 08:55:58PM +0000, areYouSureAboutThat via Digitalmars-d
wrote:
[...]
 I think you have to take Andrei at his word.
 
 https://forum.dlang.org/post/qj18h2$8o1$1 digitalmars.com
 
 At the same time, it's not unreasonable to wonder if there is
 something 'not stated' ;-)
 
 But regardless, it can only be conjecture.
Well, Andrei *did* describe elsewhere how he became a target for criticism once he took on the helm, much of which was pretty vicious. There's only so much a person can take before the question pops up, why am I doing this, and should I be doing something else instead?
 Presumably nothing has changed further since this announcement by
 Andrei?
As far as I know, no.
 One thing is for sure... The future of the D Programming language is
 primarily stuck with Walter.
I thought Atila was supposed to be making decisions with Walter as well, it's not just Walter by himself.
 But I think some are overreacting a little.... let Walter add this
 thing...  I mean who cares.. really? I certainly don't.
[...] The frustration is understandable, though. When your favorite pet peeve with D has been sitting in bugzilla for a long time, months and years, while every few months the leadership announce Yet Another New Project (which doesn't affect you nor your project nor address your pet peeve, and which in your opinion address only a niche issue), it's hard not to feel that things just aren't going in the right direction. Meanwhile, Walter feels like he's trapped between a rock and a hard place. Spend time fixing bugs, and people accuse him of making no progress and having no strategic vision; work on something he considers strategic, and people accuse him of working on the wrong thing (in their opinion) while bugs are languishing in bugzilla. At some point, a leader just has to be the bad guy to put down his foot and make a decision, whether or not it's the popular one. And over the years I've observed that what's popular with one segment of the D forums can be extremely unpopular to another segment; there's really no pleasing everybody on a lot of issues. Having said all that, though, I think a lot of reactions on the forum are overblown, because in its current state, D is already pretty darned good. Sure, there are dark corners and quirks, and there are old experimental features that are not worth pushing forward and unfinished features that are extremely complex and will take a lot of time to bring to completion. But it's easy to lose sight of how far D has come when focusing on what lies ahead. D is plenty usable in its current state, and I've been very pleased with it. Even though there's obviously plenty of room for improvement. Another thing is that it's easy to underestimate the complexity and amount of effort required to implement something that, at first glance, may seem very simple. Especially when it's your pet peeve that grates on your nerves every time you encounter it, but which is, in the grand scheme of things, not very high on the priority list. "Feature X is so straightforward, just look at languages P, Q, R, that have it too! Why can't we do it in D? Why hasn't it been done yet, it's been N years?" But feature X may interact with feature Y which may conflict with feature Z; just because something is simple in another language doesn't guarantee it's simple in this language. And just because it's theoretically easy doesn't mean the implementation will be trivial. Add on top of this, that D, being a mainly volunteer-based project, doesn't have the big bucks of a big corporation to pay people to do what they normally wouldn't want to do, and things that require too much time and effort, or that the people who have the skills to do something about it aren't interested in it, simply don't get done. None of this justifies the lack of progress, of course, perceived or otherwise. But it's where things are at currently, and it's what we have to live with. It's not perfect, but D is already pretty darned good at what it does. Moving forward, what will *really* change the current state of things is people who are willing to step up and implement stuff themselves and contribute it to the project. Angry posts in the forum rarely bring about any long-lasting change (if any at all), as proven by history. For example, the WASM story can really improve if there's somebody, or a couple of people, who are really invested in it, could work on making things work, package it nicely, and present it as a solution to the community. 
I'd do it myself, but frankly, after dabbling with WASM for a little bit earlier this year, I don't feel particularly inclined to invest much effort into it after all: I found it to be early-adopter technology, still raw around its edges and with some basic functionality still missing / not working very well (e.g., GC support, passing any data more complex than basic POD types, interfacing with the host environment). You still need a whole bunch of Javascript (involving a LOT of boilerplate) just to get basic functionality running, and core interactions with the host environment (I/O, access to the GPU, etc) still have a lot of gaps and holes and places where you still need lots of JS glue to make things work. In such a state, I'm not sure how wise it is to for the D foundation to invest much resources into WASM -- a large part of it could go to waste as WASM rapidly changes in the coming years. But if somebody who really needs D on WASM could work out something and contribute it to the community, it would make a huge difference. Someone who is motivated enough to keep up with the latest changes in WASM and keep the D interface to it up-to-date. T -- Frank disagreement binds closer than feigned agreement.
Dec 23 2022
next sibling parent bachmeier <no spam.net> writes:
On Friday, 23 December 2022 at 22:09:37 UTC, H. S. Teoh wrote:
 On Fri, Dec 23, 2022 at 08:55:58PM +0000, areYouSureAboutThat 
 via Digitalmars-d wrote: [...]
 I think you have to take Andrei at his word.
 
 https://forum.dlang.org/post/qj18h2$8o1$1 digitalmars.com
 
 At the same time, it's not unreasonable to wonder if there is 
 something 'not stated' ;-)
 
 But regardless, it can only be conjecture.
Well, Andrei *did* describe elsewhere how he became a target for criticism once he took on the helm, much of which was pretty vicious. There's only so much a person can take before the question pops up, why am I doing this, and should I be doing something else instead?
Whatever Andrei's reasons were, I for one think it's a bit strange that people want to know. It was clear that he didn't enjoy spending his time that way. He made the right decision for himself. He donated a massive amount of time (and money) to the project. I'm grateful and wish him well.
 One thing is for sure... The future of the D Programming 
 language is primarily stuck with Walter.
I thought Atila was supposed to be making decisions with Walter as well, it's not just Walter by himself.
D certainly has problems. To my knowledge, none of them are caused by Walter. He's not the reason people complain about IDE support. He's not the reason people complain about the ecosystem. He's not the reason there's no LTS release. The language is good enough to use and there's only so much he can do. Language changes, new features, etc. are completely unimportant if you want to explain why D isn't more popular.
Dec 23 2022
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 23 December 2022 at 22:09:37 UTC, H. S. Teoh wrote:
 Well, Andrei *did* describe elsewhere how he became a target 
 for criticism once he took on the helm, much of which was 
 pretty vicious. There's only so much a person can take before 
 the question pops up, why am I doing this, and should I be 
 doing something else instead?
I think both Walter and Andrei have been pretty upfront as to their 'leadership' qualities ;-) One is a great language design hacker. Arguably, one of the best. The other is a programmer with an OCD-like focus on 'details' (much like me). But yes, when people have certain expectations that are not met.. they do tend to get grumpy..especially if they're volunteering their own time.. whether it's Andrei or those directing criticism towards him. Walter seems to handle these contentions much better than anyone.. perhaps because he is the ultimate gate-keeper.
 One thing is for sure... The future of the D Programming 
 language is primarly stuck with Walter.
I thought Atila was supposed to be making decisions with Walter as well, it's not just Walter by himself.
'Technically', yes. But I don't think Atila (or anyone else in his position) is under any delusions as to what effect he can ultimately have ;-) Personally, I'm comfortable with Walter being the ultimate gatekeeper. As he has pointed out many times, people can just fork it. For those that volunteer their time and effort under the delusion that this is not how it works, that is for them to deal with ;-) Again, people should lower their expectations, and they will be much happier..and if they can't, they won't. Let Walter fix C's biggest mistake.. but do it in D's C only, not in C. Leave C alone!
Dec 23 2022
prev sibling next sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 23 December 2022 at 14:24:36 UTC, GrimMaple wrote:
 What is your goal even, do you care about D at all? I'm going 
 to great lengths to write software that's pure D, and when the 
 creator of D gives up and starts "fixing" other languages, 
 that's a huge off point to many.
https://idioms.thefreedictionary.com/kills+your+darlings *Many a writer faces the uncomfortable need to kill their darlings in the editing process. If something in your art is no longer working, then you'll have to be ruthless and kill your darlings. How else will you grow as an artist?*
Dec 23 2022
parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Friday, 23 December 2022 at 17:06:51 UTC, IGotD- wrote:
 https://idioms.thefreedictionary.com/kills+your+darlings

 *Many a writer faces the uncomfortable need to kill their 
 darlings in the editing process. If something in your art is no 
 longer working, then you'll have to be ruthless and kill your 
 darlings. How else will you grow as an artist?*
Is D compiler supposed to be a practical tool or a piece of art?
Dec 23 2022
next sibling parent IGotD- <nise nise.com> writes:
On Friday, 23 December 2022 at 17:25:38 UTC, Siarhei Siamashka 
wrote:
 Is D compiler supposed to be a practical tool or a piece of art?
I believe it is the same underlying sociopsychological phenomenon, which is also valid for software engineering. When you are a programmer you are also an author, and you can become carried away with things that aren't really relevant. Like sinking time into optimizing an inner loop with very little benefit while other parts of the program need the attention.
Dec 23 2022
prev sibling parent GrimMaple <grimmaple95 gmail.com> writes:
On Friday, 23 December 2022 at 17:25:38 UTC, Siarhei Siamashka 
wrote:
 On Friday, 23 December 2022 at 17:06:51 UTC, IGotD- wrote:
 https://idioms.thefreedictionary.com/kills+your+darlings

 *Many a writer faces the uncomfortable need to kill their 
 darlings in the editing process. If something in your art is 
 no longer working, then you'll have to be ruthless and kill 
 your darlings. How else will you grow as an artist?*
Is D compiler supposed to be a practical tool or a piece of art?
It definitely fails me as a reliable tool, so.
Dec 23 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/23/2022 6:24 AM, GrimMaple wrote:
 What is your goal even, do you care about D at all? I'm going to great lengths 
 to write software that's pure D, and when the creator of D gives up and starts 
 "fixing" other languages, that's a huge off point to many.
Expecting people with large C code bases to translate their C code to D is never, ever going to work. With ImportC, their C code bases can become "user friendly" with D code, making using D viable with a considerably larger user base. After all, look at the success of C++ with its integration with C.
Dec 23 2022
next sibling parent reply Hipreme <msnmancini hotmail.com> writes:
On Friday, 23 December 2022 at 20:34:10 UTC, Walter Bright wrote:
 On 12/23/2022 6:24 AM, GrimMaple wrote:
 What is your goal even, do you care about D at all? I'm going 
 to great lengths to write software that's pure D, and when the 
 creator of D gives up and starts "fixing" other languages, 
 that's a huge off point to many.
Expecting people with large C code bases to translate their C code to D is never, ever going to work. With ImportC, their C code bases can become "user friendly" with D code, making using D viable with a considerably larger user base.
And what problem will that solve? People in C aren't going to use alien syntax, much less use D because it allows that syntax. D can't run on every platform the way C or C++ can. Unless you're thinking about killing the language itself and making """betterC""" the only usable component in D.

Just check how much time D lost while trying to get into Android (which is unfortunately broken again). Now we're living in the Web era without real D support in WASM. D usable as an alternative to Javascript/Typescript would be especially useful to programmers, as it is not a hard language to work with.

The runtime problem must be solved somehow so we can stop dividing D and betterC libraries. A lot of projects were duplicated after the betterC announcement. I'm still really bothered by the many people who quit the language because of stubbornness, not any real community problem.
Dec 23 2022
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/23/2022 12:46 PM, Hipreme wrote:
 And what problem will that solve?
People will be able to incrementally add D modules to existing C projects. I've done such work, and C not supporting dynamic arrays is a particular nuisance. Anything we can do to make that easier will make D more attractive.
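As a sketch of what that incremental mixing looks like today (the stats.c file and its mean() function are made up for illustration; assumes the C file sits on the import path, e.g. building with `dmd app.d stats.c`):

import stats;                               // ImportC compiles stats.c and exposes its declarations
                                            // (assume it defines: double mean(const double *p, size_t n);)

void main()
{
    double[] data = [1.0, 2.0, 3.0];        // a D slice carries its own length
    double m = mean(data.ptr, data.length); // plain C still wants pointer + length passed separately
    assert(m == 2.0);
}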
 Just check how much time D lost while trying to get into Android (which is 
 unfortunately broken again).
The Android project was not one I was involved with, so none of my time was lost on it.
 Now we're living the Web era without real D support in WASM.
If the WASM needs help, then please help it.
 I'm still really bothered with the many people that quit the language because
of 
 stubbornness and no real comunitty problem.
I do not tell people what to work on. (Well, sometimes I do, but that doesn't work. People ask me what to work on, I give a list of things, and they go do something not on the list.) People work on what they want to. The WASM support came about because people decided to support it. I didn't tell anyone to go implement WASM support.

It would be different if the D Foundation had a budget that could spend millions of dollars on top engineers. But we don't, we rely on volunteer work done by top engineers.

Anyone is welcome to fork D. It's designed and licensed to permit this. But unless it comes with a plan for paid staff, it's hard to see how it would be different.

If there's a particular problem you need addressed, I encourage you to do one or more of:

1. file a bug report in bugzilla
2. create and submit a DIP
3. implement your idea or fix
Dec 23 2022
next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Friday, 23 December 2022 at 21:32:28 UTC, Walter Bright wrote:
 ...
What are your views on versioning ;-) Also. The only problem with a programming language that is being developed iteratively by volunteers, apart from that itself, is that it has users ;-) So a few people need to lower their expectations a bit...
Dec 23 2022
prev sibling parent GrimMaple <grimmaple95 gmail.com> writes:
On Friday, 23 December 2022 at 21:32:28 UTC, Walter Bright wrote:

 Anyone is welcome to fork D. It's designed and licensed to 
 permit this. But unless it comes with a plan for paid staff, 
 it's hard to see how it would be different.
After all, maybe it is YOU who should fork the compiler and toy with it in your fork. Just leave the current compiler for the desperate community to fix.
Dec 24 2022
prev sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Dec 23, 2022 at 08:46:09PM +0000, Hipreme via Digitalmars-d wrote:
[...]
 Just check how much time D lost while trying to get into Android
 (which is unfortunately broken again).
What broke in Android? My Android project still compiles (though I haven't worked on it for a while now -- busy with other things).
 Now we're living the Web era without real D support in WASM. D with
 the ability to be used as an alternative to Javascript/Typescript
 programmers would be specially useful as it is not a hard language to
 work with.
Call me a skeptic, but the last time I checked, which was earlier this year, I found WASM still very much in its infancy; it's still early-adopter tech with lots of rough edges and uncertainties. Expecting polished support at this point is IMO a bit unrealistic. As I pointed out in another post, a lot of Javascript glue and boilerplate is still required to make things work.

A while back I posted my vision of what D support might look like: a preprocessing tool for extracting D function signatures, type declarations, target APIs (WebGPU, DOM, etc), etc., and generating the necessary JS boilerplate to make it work. And some way to make the GC work without causing too many problems (which involves working with threads, which involves yet more JS boilerplate, which involves potentially nasty performance bottlenecks lurking behind the convenient abstractions).

The days of being able to just cross-compile a full-fledged D application, bells & whistles and GC and everything, into WASM with just a flick of a switch, are still far ahead in the future. Be glad that LDC *can* compile a pretty big subset of D code into WASM already. The key ingredients are already there, now we just need to build the plumbing. But given the infancy of WASM, I'm really not sure if I want to be pouring a ton of time and effort into this yet.

I mean, c'mon, they haven't even finalized how you're going to interact with the host APIs yet, key things like GPU access, I/O, etc. -- these are pretty fundamental things without which you just ain't gonna have end-to-end WASM support, no matter how much you wish for it. Even something as fundamental as passing string data across the JS/WASM boundary involves a huge amount of glue code and boilerplate; this isn't an off-the-shelf product that you can just take home and plug into your standardized WASM interface socket (it doesn't exist yet) and expect things will Just Work(tm). This is a raw microchip that, despite whatever tremendous potential it may have, you still have to solder to the motherboard yourself with your own soldering iron -- and the motherboard isn't provided, you have to build your own.

If somebody wants to build that standard WASM motherboard for interfacing with D, I'd fully applaud it. But for the time being, I'm not expecting to be able to "just use D" on WASM just yet.
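To make the glue-code point concrete, here is a minimal sketch of only the D side of such a module (assuming LDC targeting wasm32 with -betterC; the function name is made up). A D string cannot cross the boundary as-is: the JS side has to copy the UTF-8 bytes into the module's linear memory and then call in with a pointer and a length, and more glue is needed for anything going the other way.

export extern(C) size_t countSpaces(const(char)* ptr, size_t len)
{
    // The module only ever sees raw bytes in its own linear memory;
    // marshalling the string there is the JS boilerplate being discussed.
    size_t n = 0;
    foreach (i; 0 .. len)
    {
        if (ptr[i] == ' ')
            ++n;
    }
    return n;
}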
 The runtime problem must be solved somehow for we can stop dividing D
 and betterC libraries. A lot of projects were duplicated after betterC
 announcement.
[...] What runtime problem? Not being facetious here, just wasn't clear from your post which issue(s) specifically you're referring to. T -- Study gravitation, it's a field with a lot of potential.
Dec 23 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 24/12/2022 11:45 AM, H. S. Teoh wrote:
 On Fri, Dec 23, 2022 at 08:46:09PM +0000, Hipreme via Digitalmars-d wrote:
 [...]
 Just check how much time D lost while trying to get into Android
 (which is unfortunately broken again).
What broke in Android? My Android project still compiles (though I haven't worked on it for a while now -- busy with other things).
TLS in newer NDK's due to different linker.
Dec 23 2022
parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Sat, Dec 24, 2022 at 11:48:11AM +1300, rikki cattermole via Digitalmars-d
wrote:
 On 24/12/2022 11:45 AM, H. S. Teoh wrote:
 On Fri, Dec 23, 2022 at 08:46:09PM +0000, Hipreme via Digitalmars-d wrote:
 [...]
 Just check how much time D lost while trying to get into Android
 (which is unfortunately broken again).
What broke in Android? My Android project still compiles (though I haven't worked on it for a while now -- busy with other things).
TLS in newer NDK's due to different linker.
Ahh I see. I haven't updated my NDK in a while, that's probably why I didn't notice. :-/ T -- When solving a problem, take care that you do not become part of the problem.
Dec 23 2022
prev sibling parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Friday, 23 December 2022 at 20:34:10 UTC, Walter Bright wrote:
 On 12/23/2022 6:24 AM, GrimMaple wrote:
 What is your goal even, do you care about D at all? I'm going 
 to great lengths to write software that's pure D, and when the 
 creator of D gives up and starts "fixing" other languages, 
 that's a huge off point to many.
Expecting people with large C code bases to translate their C code to D is never, ever going to work. With ImportC, their C code bases can become "user friendly" with D code, making using D viable with a considerably larger user base. After all, look at the success of C++ with its integration with C.
This just further cements my point that you care more about C programmers than about existing D programmers. You know, this is very difficult to deal with. Especially with your attitude of "I'm going to add half-working stuff and if you're not satisfied with it just fix it yourself". You really shouldn't expect the community to run behind you and undo the damage that your toying around with the compiler does. There isn't any problem with manpower in the community at all. It's the fact that you keep introducing half-baked solutions so nobody understands what the heck to focus on, and what to fix. As a result, D promises a lot of potentially great features that only really work on paper.

I went to the effort of checking your GitHub page. I didn't see you write or create anything besides the D compiler. Have you ever stepped down from your compiler work and tried to actually write anything in D? I'm not talking about basing your D code on top of existing C code, I'm talking about quality software written 100% in D. It feels like you're completely missing the point because you don't deal with the sufferings of D on a daily basis. I and a lot of the community are facing those problems, which is why a lot of people here told you to stop fixing C.

Now, about the C++ success part. I don't know how in the world you determined that C++ is a success, considering how every other C++ programmer (I've been a professional C++ programmer for about 10 years now) loathes the language and desperately wants to move away from it, but is held back by legacy code and third-party dependencies. C++ is so successful that Google had to make Carbon to fix it. For God's sake, wasn't the sole purpose of creating D to fix C++ in the first place? Aren't you just going against everything that D is? A large portion of C++ is just legacy that has to be supported despite the suffering. For me, D is slowly turning into the same thing. The sunk cost fallacy.
 Anyone is welcome to fork D.
Are you really that arrogant? I hope you don't mean it, because when people do end up forking D, what would you do? I've been told about the D vs Tango split in the past. It doesn't seem to have taught you anything though.

At this point, I don't think any amount of convincing you is going to work. You're just gonna do it because you can do it. And if anyone disagrees, they, by your words, can just fork the compiler. That line can be read as "You can screw off" by someone who has put any effort into improving the D ecosystem. That being said, consider another D contributor lost. I might fork D later, but I don't think I could be bothered. I'm not a compiler developer, and there doesn't seem to be enough people interested in D as a stable language. So instead I will
Dec 24 2022
parent reply RTM <riven baryonides.ru> writes:
On Saturday, 24 December 2022 at 08:57:45 UTC, GrimMaple wrote:
 It's the fact that you keep introducing half-baked solutions so 
 nobody understands what the heck to focus on, and what to fix.
Sometimes it works. SIMD features were added that way, and it brought D into Remedy's games.

To the topic: ImportC is a good thing. There are lots and lots of libraries written in C. Extending C syntax is definitely not; it's time wasted. No one can beat Herb Sutter (CPPfront).

All D needs is:

1. Roadmap
2. Stick to it

If D2 cannot be fixed because of the codebase (= cashflow from professional users) - it's okay, we all need to eat. It just means D3. Python did it, why can't D?
Dec 24 2022
next sibling parent reply GrimMaple <grimmaple95 gmail.com> writes:
On Saturday, 24 December 2022 at 09:20:26 UTC, RTM wrote:
 On Saturday, 24 December 2022 at 08:57:45 UTC, GrimMaple wrote:
 It's the fact that you keep introducing half-baked solutions 
 so nobody understands what the heck to focus on, and what to 
 fix.
Sometimes it works. SIMD features were added that way, and it brought D into Remedy’s games.
I wouldn't bring up Remedy as a success story, considering they ditched D right after Ethan left.
Dec 24 2022
parent reply RTM <riven baryonides.ru> writes:
On Saturday, 24 December 2022 at 09:23:10 UTC, GrimMaple wrote:
 I wouldn't bring Remedy as a success story considering they 
 ditched D right after Ethan left.
I disagree. Without core.SIMD, there would be nothing to ditch.
Dec 24 2022
parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Saturday, 24 December 2022 at 09:30:15 UTC, RTM wrote:
 On Saturday, 24 December 2022 at 09:23:10 UTC, GrimMaple wrote:
 I wouldn't bring Remedy as a success story considering they 
 ditched D right after Ethan left.
I disagree. Without core.SIMD, there would be nothing to ditch.
Is or was anyone other than Remedy using `core.simd`? There had to be a very good reason to design it in a way that is incompatible with the de-facto standard GCC intrinsics and vector extensions.
Dec 24 2022
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Saturday, 24 December 2022 at 09:54:41 UTC, Siarhei Siamashka 
wrote:
 On Saturday, 24 December 2022 at 09:30:15 UTC, RTM wrote:
 On Saturday, 24 December 2022 at 09:23:10 UTC, GrimMaple wrote:
 I wouldn't bring Remedy as a success story considering they 
 ditched D right after Ethan left.
I disagree. Without core.SIMD, there would be nothing to ditch.
Is or was anyone other than Remedy using `core.simd`? There had to be a very good reason to design it in a way that is incompatible with the de-facto standard GCC intrinsics and vector extensions.
GDC and LDC provide implementations for the generic intrinsics - in [gcc.simd](https://github.com/gcc-mirror/gcc/blob/master/libphobos/libdruntime/gcc/simd.d) and [ldc.simd](https://github.com/ldc-developers/druntime/blob/ldc/src/ldc/simd.di) respectively though. I can only think of the intel intrinsics library that would use the non-portable `__simd` functions for the sake of DMD support.
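For readers following along, a minimal sketch of the portable style being referred to (assuming an x86_64 target where core.simd exposes float4); the same operator-based code builds with DMD, LDC and GDC without touching the DMD-only `__simd` functions:

import core.simd;

float4 addFour(float4 a, float4 b)
{
    // A plain vector expression; each compiler lowers it to a packed add.
    return a + b;
}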
Dec 24 2022
next sibling parent Bruce Carneal <bcarneal gmail.com> writes:
On Saturday, 24 December 2022 at 20:52:53 UTC, Iain Buclaw wrote:
 On Saturday, 24 December 2022 at 09:54:41 UTC, Siarhei 
 Siamashka wrote:
 On Saturday, 24 December 2022 at 09:30:15 UTC, RTM wrote:
 On Saturday, 24 December 2022 at 09:23:10 UTC, GrimMaple 
 wrote:
 I wouldn't bring Remedy as a success story considering they 
 ditched D right after Ethan left.
I disagree. Without core.SIMD, there would be nothing to ditch.
Is or was anyone other than Remedy using `core.simd`? There had to be a very good reason to design it in a way that is incompatible with the de-facto standard GCC intrinsics and vector extensions.
GDC and LDC provide implementations for the generic intrinsics - in [gcc.simd](https://github.com/gcc-mirror/gcc/blob/master/libphobos/lib runtime/gcc/simd.d) and [ldc.simd](https://github.com/ldc-developers/druntime/blob/ldc/src/ldc/simd.di) respectively though. I can only think of the intel intrinsics library that would use the non-portable `__simd` functions for the sake of DMD support.
Yeah. Auto vec and ldc.simd/gcc.simd + their intrinsics cover it, for me anyway. Choosing to use the DMD back-end for performance critical SIMD work would be peculiar. Manual unrolling with static foreach within vLen target introspected library functions can help you avoid some intrinsics, if that is your goal, *but* you'll then be on the hook for checking that the optimizer "does the right thing". Tradeoffs... Dlang has a pretty good story wrt data parallel programming already and it's getting better (thanks Iain, Nic, Guillaume, Manu, ...).
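As a rough illustration of the "manual unrolling with static foreach" idea (the names and the fixed width are made up for the example; whether the optimizer then emits the SIMD code you hoped for still has to be checked):

enum width = 4;

float dot(const float[width] a, const float[width] b)
{
    float sum = 0;
    // Unrolled at compile time; no runtime loop remains.
    static foreach (i; 0 .. width)
        sum += a[i] * b[i];
    return sum;
}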
Dec 24 2022
prev sibling parent Guillaume Piolat <first.last spam.org> writes:
On Saturday, 24 December 2022 at 20:52:53 UTC, Iain Buclaw wrote:
 Is or was anyone other than Remedy using `core.simd`? There 
 had to be a very good reason to design it in a way that is 
 incompatible with the de-facto standard GCC intrinsics and 
 vector extensions.
Well, core.simd is used by a lot of people already. My understanding is that before core.simd there was no vector extension in LDC or GDC; or at least, things would be even more different without that specific core.simd effort. intel-intrinsics does use D_SIMD now; it's a slow work in progress, in order to exploit the capabilities of the DMD compiler. SIMD support has only gotten better in D and DMD since Remedy.
Jan 07 2023
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Saturday, 24 December 2022 at 09:20:26 UTC, RTM wrote:
 ..
 To the topic.
 ImportC is good thing. There are lots and lots of libraries 
 written in C.
 Extending C syntax is definitely not, it’s a time wasted. No 
 one can beat Herb Sutter (CPPfront).
Please don't compare ImportC to cppfront, as you just made the case for ImportC much stronger ;-) i.e. cppfront will happily compile this (even though no such function exists..anywhere):

main: () -> int = { printMyMessage(); }

It's only when you try to compile the file cppfront outputs, using a 'proper' compiler, that you learn no such function exists.
Dec 25 2022
prev sibling next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
If you are finding D too unwieldy to add features to, maybe it's time to start planning D3 and how you're going to clean up the technical debt.
Dec 21 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 3:13 PM, monkyyy wrote:
 If you are finding d too unwieldy to add features to, maybe its time to start 
 planning d3 and how you going to clean up the technical debt
Every time I try to clean up technical debt, a cadre arises objecting that it breaks existing code.
Dec 21 2022
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Wed, Dec 21, 2022 at 07:23:47PM -0800, Walter Bright via Digitalmars-d wrote:
 On 12/21/2022 3:13 PM, monkyyy wrote:
 If you are finding d too unwieldy to add features to, maybe its time
 to start planning d3 and how you going to clean up the technical
 debt
Every time I try to clean up technical debt, a cadre arises objecting that it breaks existing code.
That's what versioning is supposed to solve. Freeze the current language as version 2, put it on long-term maintenance, and start a new branch as a new version with breaking changes that fix technical debts. The current language has plenty of rough spots that shouldn't be set in stone forever; there is still room for D to develop into the future. Don't lock yourself into the current suboptimal state; acknowledge it as the best effort so far, and give yourself a new opportunity to do better. T -- Nearly all men can stand adversity, but if you want to test a man's character, give him power. -- Abraham Lincoln
Dec 21 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/21/2022 9:02 PM, H. S. Teoh wrote:
 That's what versioning is supposed to solve.
I know about versioning. But our -preview and -revert switches are better, as they allow the user to selectively decide which obsolete features they need. The request is always "I want X bug fixed, I want to keep the old Y feature, I want the new Z feature." But the people objecting to breaking code also object to using a -revert switch. Versioning comes with other problems. The most significant is we lack sufficient staff to maintain multiple versions.
 Freeze the current
 language as version 2, put it on long-term maintenance, and start a new
 branch as a new version with breaking changes that fix technical debts.
The trouble there is that fixing bugs *also* comes with breaking existing code. There is no clean separation between the two.
Dec 21 2022
next sibling parent reply Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Thursday, 22 December 2022 at 07:59:58 UTC, Walter Bright 
wrote:
 On 12/21/2022 9:02 PM, H. S. Teoh wrote:
 That's what versioning is supposed to solve.
I know about versioning. But our -preview and -revert switches are better, as they allow the user to selectively decide which obsolete features they need.
Can you provide any example of successfully using these switches for anything practical other than just testing the compiler itself?

One practical scenario is bisecting several years of some application's commit history. But just having a battery of multiple old versions of DMD installed and using the appropriate compiler version to compile the code seems to be much more reliable and straightforward than fooling with the -revert switches.

Another practical scenario is having two libraries A and B as dependencies, where the former is super-modern and the latter is super-old and none of the DMD versions can successfully compile them both with default settings. Would -revert be really useful here? I don't know, this feels like a bad idea to me.
 The request is always "I want X bug fixed, I want to keep the 
 old Y feature, I want the new Z feature."
What's wrong with such request? Assuming that the new feature Z doesn't break compatibility.
 But the people objecting to breaking code also object to using 
 a -revert switch.
Again, is this -revert switch really useful for anything?
 Versioning comes with other problems. The most significant is 
 we lack sufficient staff to maintain multiple versions.
If you spin it this way, then you also lack sufficient staff even to maintain just the one most recent compiler version. Many bugs have been rotting in bugzilla for years. But this isn't a good reason to give up and do nothing. Would selecting some compiler version as LTS and backporting only minor bugfixes to it require too much effort? And if some fix is too difficult to backport, then the bug description and possible workarounds can at least be documented in some kind of errata list.
 Freeze the current
 language as version 2, put it on long-term maintenance, and 
 start a new
 branch as a new version with breaking changes that fix 
 technical debts.
The trouble there is that fixing bugs *also* comes with breaking existing code. There is no clean separation between the two.
Not every bugfix comes with breaking existing code. And there's a big difference between breaking the existing *buggy* code (the maintainers of such existing code will be grateful for getting the bug exposed) and breaking some perfectly *valid* existing code due to unnecessary language syntax changes (the maintainers of such existing code will be only annoyed).
Dec 22 2022
parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 4:05 AM, Siarhei Siamashka wrote:
 The request is always "I want X bug fixed, I want to keep the old Y feature, I 
 want the new Z feature."
What's wrong with such request? Assuming that the new feature Z doesn't break compatibility.
Everyone has a different XYZ combination they want. This is why versioning does not work.
 But the people objecting to breaking code also object to using a -revert
switch.
Again, is this -revert switch really useful for anything?
More useful than versions.
 Would selecting some compiler version as LTS and backporting only minor
bugfixes 
 to it require too much effort?
If someone wants to volunteer to do this, I would welcome it. In fact, an LTS version isn't even necessary. Any fix could be backported to any previous D release.
 Not every bugfix comes with breaking existing code.
I didn't say every. But a lot of critical bugs do.
 And there's a big difference between breaking the existing *buggy* code (the 
 maintainers of such existing code will be grateful for getting the bug
exposed) 
 and breaking some perfectly *valid* existing code due to unnecessary language 
 syntax changes (the maintainers of such existing code will be only annoyed).
One person's valid code is another's bug. Fixing autodecoding, for example, breaks existing code.
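For readers who haven't hit it, a minimal sketch of the behaviour such code currently relies on (so removing autodecoding would silently change its meaning):

import std.range : front;
import std.algorithm.searching : count;

void main()
{
    string s = "héllo";
    // Phobos range primitives autodecode: front yields a dchar code point,
    // and count walks code points (5), while the raw UTF-8 length is 6.
    static assert(is(typeof(s.front) == dchar));
    assert(s.length == 6);
    assert(s.count == 5);
}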
Dec 22 2022
prev sibling parent reply Don Allen <donaldcallen gmail.com> writes:
On Thursday, 22 December 2022 at 07:59:58 UTC, Walter Bright 
wrote:
 On 12/21/2022 9:02 PM, H. S. Teoh wrote:
 That's what versioning is supposed to solve.
 Versioning comes with other problems. The most significant is 
 we lack sufficient staff to maintain multiple versions.
Not enough development horsepower? All the more reason to do First Things First. I think much of this discussion is rooted in disagreement about what the First Things are.
Dec 22 2022
next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Dec 22, 2022 at 03:23:50PM +0000, Don Allen via Digitalmars-d wrote:
 On Thursday, 22 December 2022 at 07:59:58 UTC, Walter Bright wrote:
 On 12/21/2022 9:02 PM, H. S. Teoh wrote:
 That's what versioning is supposed to solve.
[...]
 Versioning comes with other problems. The most significant is we
 lack sufficient staff to maintain multiple versions.
[...]
 Not enough development horsepower? All the more reason to do First
 Things First. I think much of this discussion is rooted in
 disagreement about what the First Things are.
We've been in disagreement about what the First Things are for many years now, unfortunately. :'( T -- Error: Keyboard not attached. Press F1 to continue. -- Yoon Ha Lee, CONLANG
Dec 22 2022
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 7:23 AM, Don Allen wrote:
 Not enough development horsepower? All the more reason to do First Things
First. 
 I think much of this discussion is rooted in disagreement about what the First 
 Things are.
Of course. Everyone has a different view of what FT are. Trying to manage that is the most difficult job I have. There isn't any magic solution. Somebody is always going to be unhappy with the choice.
Dec 22 2022
parent Don Allen <donaldcallen gmail.com> writes:
On Thursday, 22 December 2022 at 20:46:29 UTC, Walter Bright 
wrote:
 On 12/22/2022 7:23 AM, Don Allen wrote:
 Not enough development horsepower? All the more reason to do 
 First Things First. I think much of this discussion is rooted 
 in disagreement about what the First Things are.
Of course. Everyone has a different view of what FT are. Trying to manage that is the most difficult job I have. There isn't any magic solution. Somebody is always going to be unhappy with the choice.
It doesn't matter if *someone* is unhappy with the choice. What matters is if your choices, as the project leader, generally make sense to the troops. You don't need unanimity. You need majority support.
Dec 22 2022
prev sibling next sibling parent Siarhei Siamashka <siarhei.siamashka gmail.com> writes:
On Thursday, 22 December 2022 at 03:23:47 UTC, Walter Bright 
wrote:
 On 12/21/2022 3:13 PM, monkyyy wrote:
 If you are finding d too unwieldy to add features to, maybe 
 its time to start planning d3 and how you going to clean up 
 the technical debt
Every time I try to clean up technical debt, a cadre arises objecting that it breaks existing code.
You can't break D2 by working on D3. Also it would be perfect if D3 could implement ImportD2 and take advantage of the existing D2 modules. But I guess ImportD2 is much more difficult than ImportC and won't even be considered.
Dec 21 2022
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 12/22/22 04:23, Walter Bright wrote:
 
 Every time I try to clean up technical debt, a cadre arises objecting 
 that it breaks existing code.
The technical debt in DMD is how the code and the AST are structured, not the features it supports. This is much harder to clean up than removing isolated features, and I think you are among the people slowing down such efforts. One reason why is that there are a lot of pull requests that would be broken by large-scale refactorings. Those open pull requests are also technical debt. OTOH, removing completely isolated lexer features does nothing to clean up technical debt.
Dec 23 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/23/2022 12:58 PM, Timon Gehr wrote:
 The technical debt in DMD is how the code and AST is structured, not the 
 features it supports. This is much harder to clean up than removing isolated 
 features and I think you are among the people slowing down such efforts.
Please identify any refactorings you regard as productive.
 One reason why is that there are a lot of pull requests that would be broken
by 
 large-scale refactorings. Those open pull requests are also technical debt.
Refactorings inherently break open pull requests.
Dec 23 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 24/12/2022 10:46 AM, Walter Bright wrote:
 On 12/23/2022 12:58 PM, Timon Gehr wrote:
 The technical debt in DMD is how the code and AST is structured, not 
 the features it supports. This is much harder to clean up than 
 removing isolated features and I think you are among the people 
 slowing down such efforts.
Please identify any refactorings you regard as productive.
It is incredibly hard to identify such refactorings when the code base is effectively in an unknown state, and you have refused the things that would put it into a known state.
Dec 23 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/23/2022 2:03 PM, rikki cattermole wrote:
 the things that put it into a known state you have refused.
I can do nothing with such generalizations. Be specific.
Dec 23 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
P1: separate out known state modules (ideally good) vs unknown state.

To turn a module into known state you must severely document it, 
including TODO's.

For good state: do the TODO's.

Sound familiar? It's packagerization of leaf modules.



Oh and on another note: WTF why do we have two ASTs? That's a massive 
technical debt to keep paying in maintenance (which we don't do).

Good example of this: 
https://forum.dlang.org/post/gfwduutjchvppcolsoel forum.dlang.org
Dec 23 2022
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 12/23/2022 2:37 PM, rikki cattermole wrote:
 P1: separate out known state modules (ideally good) vs unknown state.
Please be more specific.
 To turn a module into known state you must severely document it, including
TODO's.
One of my most common review comments is: "Please add ddoc comments to added functions." A common thing I do when working on a section of code is to add ddoc comments to the functions.
 For good state: do the TODO's.
Grepping for TODOs in the source code doesn't yield much.
 Sound familiar? Its packagerization of leaf modules.
We've already discussed that one to death.
 Oh and on another note: WTF why do we have two ASTs? That's a massive
technical 
 debt to keep paying in maintenance (which we don't do).
That was a refactoring added by people other than me. I agree with you on that one. Finding the right refactoring is not easy.
Dec 23 2022
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 24/12/2022 11:57 AM, Walter Bright wrote:
 On 12/23/2022 2:37 PM, rikki cattermole wrote:
 P1: separate out known state modules (ideally good) vs unknown state.
Please be more specific.
The root of the problem is we have an old code base which for all intents and purposes is in an unknown state. We do not understand it in its entirety. There are duplicate behaviors, and a lot of it is undocumented. We cannot deal with it as a whole, but we can start with the leaves, things that minimally depend on others. As long as we keep known state vs unknown state in the same directory structure, we will never be motivated to deal with it.
 To turn a module into known state you must severely document it, 
 including TODO's.
One of my most common review comments are: "Please add ddoc comments to added functions." A common thing I do when working on a section of code is add ddoc comments to the functions.
Yeah that is a great long term strategy which I have been pleased to see happening, but the goal here is to do entire modules at a time.
 For good state: do the TODO's.
Grepping for TODOs in the source code doesn't yield much.
Yeah, it takes concentrated efforts to add them first.
 Sound familiar? Its packagerization of leaf modules.
We've already discussed that one to death.
Unfortunately. I (and a few others) just don't see any other way to pay off such significant debt. What we are doing now is just tinkering around the edges of the problem and isn't taking a sledgehammer to it, which is what's actually needed.
 Oh and on another note: WTF why do we have two ASTs? That's a massive 
 technical debt to keep paying in maintenance (which we don't do).
That was a refactoring added by people other than me. I agree with you on that one. Finding the right refactoring is not easy.
Sounds like some refactoring to do ;) Get the sledge hammer!
Dec 23 2022
parent rikki cattermole <rikki cattermole.co.nz> writes:
I've had some time to think about some more technical debt that could be 
paid off relatively easily.

In my last attempt at fixing ModuleInfo exportation, Iain complained 
about dmd's glue code state infecting the AST.

I.e. 
https://github.com/dlang/dmd/blob/5dfc20e016850820036a81a3c40b01bf08b8c244/compiler/src/dmd/dsymbol.h#L177

Which could be extracted out into its own struct which could be turned 
opaque or even used by ldc/gdc. The main issue is dub, which I think 
might be solvable by having a gluedefinitions subPackage that in turn 
versions on whether you have dmd-be.
Dec 23 2022
prev sibling next sibling parent reply Hipreme <msnmancini hotmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
Nope. I really doubt people nowadays are going to create new big projects using C. I doubt even more that people are going to use non-standard C to write other kinds of code. When ImportC works, I'd like to see something done about what divided the D community: the thing called "betterC".
Dec 21 2022
next sibling parent rikki cattermole <rikki cattermole.co.nz> writes:
On 22/12/2022 1:22 PM, Hipreme wrote:
 I wished to see something done about what divided the D community called 
 "betterC"
Kinda waiting on DLL support on Windows to start breaking the -betterC flag up into its constituent flags. Right now if we did it, it would break the world with no ability to go back from it :/
Dec 21 2022
prev sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Dec 22, 2022 at 12:22:18AM +0000, Hipreme via Digitalmars-d wrote:
[...]
 Nope. I really doubt people nowadays are going to create new big
 projects using C.
You'd be surprised. I'm being paid to work on very large C projects every day, and my employer is regularly introducing new product lines following the same model (i.e., large projects heavily focused on C). And by "very large" I'm talking about millions of lines of code with complex interacting subsystems, each of which may in itself be a large C sub-project. Much as we all wish C would see its sunset soon, it's still going to be sticking around for a good long time before it buckles under its own weight. T -- Computers shouldn't beep through the keyhole.
Dec 21 2022
prev sibling next sibling parent reply Per =?UTF-8?B?Tm9yZGzDtnc=?= <per.nordlow gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
Good idea. I presume you're already keen on doing it so do it.
Dec 22 2022
parent reply =?UTF-8?Q?Ali_=c3=87ehreli?= <acehreli yahoo.com> writes:
On 12/22/22 00:59, Per Nordlöw wrote:
 On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in ImportC?
Good idea. I presume you're already keen on doing it so do it.
It's not clear how sincere you were writing that, but I agree: Life is too short to always do what needs to be done. I think people should be given the liberty to do (or prioritize) what they want, at least to increase motivation, creativity, serendipity, and more. Ali
Dec 22 2022
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 12/22/2022 7:51 AM, Ali Çehreli wrote:
 It's not clear how much sincere you were writing that but I agree: Life is too 
 short to always do what needs to be done. I think people should be given
liberty 
 to do (or prioritize) what they want at least to increase motivation, 
 creativity, serendipity, and more.
It's also a low effort / low risk job, like implementing bitfields in D after having developed and debugged them with ImportC.
Dec 22 2022
prev sibling parent Per =?UTF-8?B?Tm9yZGzDtnc=?= <per.nordlow gmail.com> writes:
On Thursday, 22 December 2022 at 15:51:31 UTC, Ali Çehreli wrote:
 Good idea. I presume you're already keen on doing it so do it.
It's not clear how much sincere you were writing that but I agree: Life is too short to always do what needs to be done. I think people should be given liberty to do (or prioritize) what they want at least to increase motivation, creativity, serendipity, and more.
That was exactly what I meant. :)
Dec 23 2022
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
I don't think it will be of much use. If I want to compile an existing C project, it won't have D slices. If I want to write new code, I presumably want to write standard C. If I'm willing to restrict myself to using a D compiler to compile my code, why in the world would I write C? That said, the feature would not cause issues either, since it doesn't need to be used. If it's already worth implementing to ease testing of other ImportC features, it probably doesn't hurt to have it as a curiosity for users.
Dec 22 2022
prev sibling next sibling parent arandomonlooker <pnkjkoiftqhgbpzfqi tmmcv.net> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
I think it fits D better than C, to be fair. There was a DIP many years ago about implementing a way in the language to avoid using staticArray; I think that would fix the issue neatly.
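For context, a minimal sketch of the library workaround that such a DIP would make unnecessary (assumes a reasonably recent Phobos, where std.array.staticArray infers the length):

import std.array : staticArray;

void main()
{
    auto a = [0, 1, 2, 3, 4].staticArray;   // length inferred from the literal
    static assert(is(typeof(a) == int[5]));
}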
Dec 22 2022
prev sibling next sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
So, presumably you have read all 173 pages of the Checked C draft?? (i.e. there's more to it than just '_Array_ptr<int> p : count(5) = a;')

https://github.com/microsoft/checkedc/releases/download/CheckedC-Clang-12.0.1-rel3/checkedc-v0.9.pdf

173 pages - to talk about how they want to fix bounds checking problems in C. I can learn the entire C language in less than 1/10 of that number of pages.
Dec 23 2022
prev sibling next sibling parent Hipreme <msnmancini hotmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
When are we going to fix D's biggest mistake? Like the D community itself scaring away its own contributors by doing the unreasonable?
Dec 23 2022
prev sibling next sibling parent reply jonatjano <jonatjano gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
I think you should do it; you're losing much more time arguing about its value than if you had done it from the start. People want you to work on this and that, but argue so much with you on a non-breaking addition that you don't have the time to work at all.

As far as I understand, it's nothing more than a syntax for D slices in betterC, which means most of the code is already written (maybe it could use D slice syntax directly for a better transition from betterC to full D, if wanted?)
Jan 03 2023
parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/3/2023 1:42 AM, jonatjano wrote:
 I think you should do it, you're losing much more time arguing about it's
value 
 than if you did it from the start
 People want you to work on this and that but argue so much with you on a 
 non-breaking addition that you don't have the time to work at all
Every post I make spawns multiple leaves with everyone repeating their same positions. At some point any progress grinds to a halt.
 As far as I understand it's nothing more than a syntax for D slices into 
 betterC, which mean most of the code is already written
That's right.
 (maybe it could use D 
 slice syntax directly for a better transition from betterC to full D if
wanted?)
The D syntax won't work in C, which is why I changed it slightly.
Jan 04 2023
prev sibling next sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
I'd much rather see attributes 'strategically' incorporated into D's betterC. e.g.

module test;

extern(C) void main()
{
    import core.stdc.stdio : printf;

    // Accesses to this array will be bounds checked at runtime.
    checked int[5] arr = [ 0, 1, 2, 3, 4];

    for (int i = 0; i < 10; i++) // runtime will catch this error.
    {
        printf("%d\n", arr[i]);
    }
}

and btw, i really hate having to constantly change my C syntax: i.e.

int arr[5] = { 0, 1, 2, 3, 4};  // C syntax

(into)

int[5] arr = [ 0, 1, 2, 3, 4];  // betterC syntax

That is just plain annoying.
Jan 03 2023
parent reply max haughton <maxhaton gmail.com> writes:
On Wednesday, 4 January 2023 at 03:04:31 UTC, areYouSureAboutThat 
wrote:
 On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
 wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
 I'd much rather see attributes 'strategically' incorporated into D's betterC. e.g.
 
 module test;
 
 extern(C) void main()
 {
     import core.stdc.stdio : printf;
 
     // Accesses to this array will be bounds checked at runtime.
     checked int[5] arr = [ 0, 1, 2, 3, 4];
 
     for (int i = 0; i < 10; i++) // runtime will catch this error.
     {
         printf("%d\n", arr[i]);
     }
 }
This is already caught by the runtime. The idea of betterC is that you are opting into these checks.
Jan 03 2023
parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 4 January 2023 at 03:13:25 UTC, max haughton wrote:
 This is already caught by the runtime. The idea of betterC is 
 that you are opting into these checks.
Except that checked, in my example, would mean there is no way to opt-out of that check. i.e. even in -release or -boundscheck=off, that array would always be bounds checked at runtime.
Jan 03 2023
prev sibling next sibling parent reply Dibyendu Majumdar <d.majumdar gmail.com> writes:
Dennis Ritchie proposed fat pointers for C in 1990.

https://github.com/kenmartin-unix/UnixDocs/blob/master/VaribleSized_Arrays_in_C.pdf
Jan 05 2023
next sibling parent "H. S. Teoh" <hsteoh qfbox.info> writes:
On Thu, Jan 05, 2023 at 08:53:55PM +0000, Dibyendu Majumdar via Digitalmars-d
wrote:
 Dennis Ritchie proposed fat pointers for C in 1990.
 
 https://github.com/kenmartin-unix/UnixDocs/blob/master/VaribleSized_Arrays_in_C.pdf
Wow. So it took 30 years (and who knows how many buffer overflow security holes) for people to realize that this might have been a good idea? T -- What did the alien say to Schubert? "Take me to your lieder."
Jan 05 2023
prev sibling next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 1/5/2023 12:53 PM, Dibyendu Majumdar wrote:
 Dennis Ritchie proposed fat pointers for C in 1990.
 
 https://github.com/kenmartin-unix/UnixDocs/blob/master/VaribleSized_Arrays_in_C.pdf
Wow, didn't know about that. Thanks! The way D does it is better, though!
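For comparison, a minimal sketch of the D-side behaviour being referred to (ordinary D slices, nothing ImportC-specific):

void scale(int[] a, int k)        // a slice is a fat pointer: ptr and length travel together
{
    foreach (ref x; a)
        x *= k;
}

void main()
{
    int[5] buf = [0, 1, 2, 3, 4];
    scale(buf[1 .. 4], 10);       // slicing a static array, no copy of the data
    assert(buf[] == [0, 10, 20, 30, 4]);
}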
Jan 05 2023
prev sibling parent reply areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Thursday, 5 January 2023 at 20:53:55 UTC, Dibyendu Majumdar 
wrote:
 Dennis Ritchie proposed fat pointers for C in 1990.

 https://github.com/kenmartin-unix/UnixDocs/blob/master/VaribleSized_Arrays_in_C.pdf
As per his proposal, I too like using the question mark for this: [?]

I'd be happy if C had this now (and I'm a 'big fan' of NOT changing the standard). But if Dennis (RIP) could not get it into the Standard, then that tells you just how resistant to change the C language is (and a good thing IMO, for the C language anyway).

Surely D could use int[?] also. I mean, it's 'contextually obvious' what is intended here, despite the ? being used for other purposes in the language. It's not the dual use of ?, but the context in which it's being used that matters.

int[?] arr = [0,1,2];  // perfect! IMO
Jan 05 2023
parent reply RTM <riven baryonides.ru> writes:
On Friday, 6 January 2023 at 02:51:07 UTC, areYouSureAboutThat 
wrote:
 int[?] arr = [0,1,2];  // perfect! IMO
int[auto] = [0,1,2];
Jan 06 2023
parent RTM <riven baryonides.ru> writes:
On Friday, 6 January 2023 at 11:35:46 UTC, RTM wrote:
 On Friday, 6 January 2023 at 02:51:07 UTC, areYouSureAboutThat 
 wrote:
 int[?] arr = [0,1,2];  // perfect! IMO
int[auto] = [0,1,2];
Sorry) int[auto] arr = [0,1,2];
Jan 06 2023
prev sibling parent areYouSureAboutThat <areYouSureAboutThat gmail.com> writes:
On Wednesday, 21 December 2022 at 19:09:37 UTC, Walter Bright 
wrote:
 https://news.ycombinator.com/edit?id=34084894

 I'm wondering. Should I just go ahead and implement [..] in 
 ImportC?
Is this done yet?

btw, I think C's biggest mistake is that it doesn't have classes.
Jan 14 2023