
digitalmars.D - Thoughts on Backward Compatibility

reply Paul Backus <snarwin gmail.com> writes:
In [a 2019 blog post][1], which I found today on the front page 
of /r/programming, Michael Orlitzky complains that modern 
languages have too many breaking changes. He contrasts "grownup" 
languages like C and Ada, which have remained stable and largely 
backwards compatible for decades, with languages like Rust, which 
[regularly breaks compatibility][2] despite having promised 
otherwise in its 1.0 release.

This is a reasonable criticism, and yet what strikes me most 
about it is how seemingly irrelevant it ultimately is. Rust 
handily outranks Ada on any measure of language popularity you 
care to name ([TIOBE Index][3], [Stack Overflow Survey][4], 
[Github statistics][5]), and its adoption is still trending 
upward. Clearly, there is a sizeable contingent of programmers 
who do not view its breaking changes as deal-breakers.

Why might that be? Glancing over some of the Rust release notes 
myself, I notice a couple of trends.

1. Many of the changes are low-impact and provide a clear 
migration path for existing code.
2. Many of the changes involve fixing holes in Rust's 
memory-safety checks.

For breaking changes that fall under (1), it's easy to understand 
why Rust programmers put up with them. Switching languages is a 
much bigger hassle than doing a simple find-and-replace to update 
a renamed library function.

For breaking changes that fall under (2), it's a little less 
obvious. If your code has unknowingly been taking advantage of a 
safety hole, it can often require a fair amount of work to fix 
(as any D programmer who's tried to test their code with 
`-preview=dip1000` can attest). Why are Rust programmers willing 
to subject themselves to that kind of aggravation?

The answer is: because that's the reason they chose Rust in the 
first place! Rust's memory-safety checks are its biggest 
value-add compared to other languages, and the main driver of its 
adoption. Making those checks more accurate and effective is 
giving Rust programmers more of something that they've already 
demonstrated they want.

It is worth noting that the vast majority of these breaking 
changes occurred *within* a single language edition. For example, 
Rust 1.5.0, which has thirteen breaking changes listed in its 
release notes, was an update to the 2015 edition that started 
with Rust 1.0.0 and continued until Rust 1.30.0. Again, this fact 
does not seem to have had any serious impact on Rust's adoption.

Are there languages where breaking changes *have* hurt adoption? 
The most notable example I can think of is Python 3, which has 
struggled for years to win over Python 2 programmers. And if we 
look at [the changes introduced in Python 3][6], we can see that 
they follow very different trends than the Rust changes discussed 
above:

1. Many of the changes are high-impact, requiring widespread 
changes to existing code, and lack a clear migration path.
2. Many of the changes are focused on performance, correctness, 
and type safety.

Trend (1) is straightforwardly bad because it increases the cost 
of migration. Trend (2) may at first seem like a good thing, but 
the problem is that these qualities are not what most programmers 
chose Python 2 for. They chose it because it was easy to learn, 
convenient, and [fun][7], and forcing them to rewrite all of 
their code to handle Unicode correctly or use generators instead 
of lists is the exact opposite of that.



So, what lessons can we draw from all this? I'd suggest the following.

First, that the success and popularity of a programming language 
is mostly determined by factors other than stability and backward 
compatibility (or lack thereof).

Second, that even without an edition bump, small-scale breaking 
changes with easy migration paths aren't a big deal.

Third, that even with an edition bump, large-scale breaking 
changes that make migration difficult should probably be avoided.

Fourth, that breaking changes should be used to give D 
programmers more of what they already like about D, not to take 
the D language in new directions.

To Walter, Atila, and the rest of D's leadership, I hope this 
post provides some helpful data points for you to take into 
account when designing D's language editions and planning future 
language changes.

To everyone else reading this, I'd like to leave you with one 
last question: what do **you** like about D? What strengths does 
D have, as a language, that you'd like to see become even 
stronger?

[1]: 
https://michael.orlitzky.com/articles/greybeards_tomb:_the_lost_treasure_of_language_design.xhtml
[2]: 
https://michael.orlitzky.com/articles/greybeards_tomb:_the_lost_treasure_of_language_design.xhtml#should-have-a-standard
[3]: https://www.tiobe.com/tiobe-index/
[4]: 
https://survey.stackoverflow.co/2023/#most-popular-technologies-language
[5]: 
https://innovationgraph.github.com/global-metrics/programming-languages
[6]: https://docs.python.org/3/whatsnew/3.0.html
[7]: https://xkcd.com/353/
Feb 15
next sibling parent reply "H. S. Teoh" <hsteoh qfbox.info> writes:
On Fri, Feb 16, 2024 at 01:44:51AM +0000, Paul Backus via Digitalmars-d wrote:
[...]

 
 First, that the success and popularity of a programming language is
 mostly determined by factors other than stability and backward
 compatibility (or lack thereof).
+100. Over the past 5-10 years or so, I've been finding myself wishing that D would introduce some breaking changes so that it could clean up some of its dark, ugly corners that have been like flies in the ointment for a long time. At the same time, I'd had deprecations and breaking changes that are really rather minor, but extremely frustrating, because:
 Second, that even without an edition bump, small-scale breaking
 changes with easy migration paths aren't a big deal.
The worst feeling is when you upgrade your compiler and suddenly find yourself having to do major code surgery to make previously-fine code work again. Having an easy migration path for breaking changes is very important. I'd also add that the migration path should be *easy*: it shouldn't take too much thought to upgrade the code, and it should not involve tricky decisions based on subtle semantic differences that require deep understanding of the code to make the right choice.

The std.math.approxEqual deprecation is a major example that I keep running into. It's intended to be replaced by isClose, with all the right intentions. But it was frustrating because:

(1) It didn't feel necessary -- the previous code worked fine, even if there were some pathological cases that weren't being handled correctly.

(2) The deprecation message didn't give a clear migration path -- isClose has different parameters with subtly different semantics from approxEqual, and it wasn't obvious how you should replace calls to approxEqual with equivalent calls to isClose. There were also no easy defaults you could use that replicated the previous behaviour; you had to sit down and think about each call, then look up the docs to be sure.

(3) The choice of name felt like a serious blunder, even if it was made for all the right reasons.

All of these added up to a very frustrating experience, even if the intentions were right. If we could do this over again, I'd have proposed to keep the old semantics of approxEqual and perhaps add another parameter that opts in to the new semantics, and to make sure the deprecation message is clear about exactly how you decide what to put in that new parameter. I.e., for lazy authors, not changing anything would let their code continue to work as before; if they wanted the new semantics they'd have to explicitly opt in.
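For concreteness, here is roughly what that migration looks like in code (the old approxEqual defaults are quoted from memory, so double-check the std.math docs before relying on them):

```d
import std.math : isClose;

@safe unittest
{
    double a = 1.0, b = 1.000001;

    // Old (deprecated): approxEqual(a, b), whose defaults were roughly
    // maxRelDiff = 1e-2 and maxAbsDiff = 1e-5.
    // Closest equivalent with the new function, spelling the tolerances out:
    bool sameAsBefore = isClose(a, b, 1e-2, 1e-5);

    // The "drop-in" replacement uses much tighter defaults (and no absolute
    // tolerance), so it can give a different answer for the same inputs:
    bool strict = isClose(a, b);

    assert(sameAsBefore);   // passes with the old, loose tolerances
}
```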
 Third, that even with an edition bump, large-scale breaking changes
 that make migration difficult should probably be avoided.
Yes, large breaking changes are a no-no, unless my old code can continue compiling as before and I have to opt in to the new stuff. Editions would help with this, but it still depends on the execution. There should always be a good migration path that doesn't require you to rewrite 5-10 year old code that you no longer remember the details of and can no longer confidently reimplement without spending a disproportionate amount of time re-learning its ins and outs.
 Fourth, that breaking changes should be used to give D programmers
 more of what they already like about D, not to take the D language in
 new directions.
TBH, @nogc, dip1000, @live, etc., feel a lot like D trying to go in entirely new directions. The fact that it's been years and still practically nobody understands exactly how they work and what they do is not a good sign. And all this while things like `shared` and static initialization of AAs are stagnating. Built-in AAs are one of my major reasons for choosing D, and seeing them languish for years with elementary features like static initialization not fixed is quite disheartening. Worse when it feels like D wants to move to newer pastures while its current features are still half-done and have problematic corner cases. I.e., what I like about D is stagnating, while new features that I have little interest in are being pushed on me.
 To Walter, Atila, and the rest of D's leadership, I hope this post
 provides some helpful data points for you to take into account when
 designing D's language editions and planning future language changes.
 
 To everyone else reading this, I'd like to leave you with one last
 question: what do **you** like about D? What strengths does D have, as
 a language, that you'd like to see become even stronger?
[...]

What I like about D:

- Meta-programming power.

- CTFE should be improved. By a lot. It was a big disappointment that Stefan's newCTFE never materialized. IMO we should be improving this story instead of trying to chase rainbows like ARC with @live and dip1000 and what-not. We should make this so good that I'll never need to use an external codegen utility again. And it should not introduce crazy compile times. This is a primary D strength; its story should be maximally optimized.

- The template story should be improved. There should be a way of working with templates that cuts down on needless bloat. Lots of room for exploration here. We shouldn't be confined by C++ limitations here. This is one of D's primary strengths and where we can pioneer even more. One area is improving IFTI to make it work for even more common cases. Another is recognizing common patterns like chains of ranges, and optimizing symbol generation so that you don't end up with unreasonably huge symbols. Esp. when it's a one-of-a-kind UFCS chain (it's unlikely you're ever going to have exactly the same chain twice with exactly the same template arguments -- no point encoding every argument in the symbol, just an ID that gets incremented per instantiation is good enough).

- Compile-time introspection and DbI. This is another huge D strength, and we should be working on streamlining it even more.

- Clean up __traits(), make std.traits more sensible.

- Fix things like scoping issues with static foreach. Introduce local aliases so that static foreach doesn't need crazy hacks with {{...}} and temporary templates just for injecting new identifiers per iteration without running into multiple declaration errors (a small example of that hack is at the end of this post).

- Improve the syntax for retrieving members of some symbol. Something prettier than __traits(allMembers, ...). This is a primary D strength; it should be dressed in better syntax than this.

- Maybe first-class types, to make the metaprogramming story even more powerful.

- GC. Instead of bending over backwards trying to woo the @nogc crowd, who are mass migrating to Rust anyway, what about introducing write barriers that would allow us existing D users to use a much more competitive GC algorithm? Stop-the-world GC in 2024 shouldn't even be a thing anymore. We aren't in 1998 anymore. D should embrace the GC, not sacrifice it for the sake of wooing a crowd that isn't likely to adopt D regardless. Instead of trying to get away from the GC, what about making the GC experience better for existing D users?

- Built-in AAs. It's been at least a decade. Why is static initialization support still sketchy?

- Built-in unittests. The default experience should be top-of-the-line. We shouldn't need to import a dub package for something beyond the current dumb built-in test runner. Named unittests, the ability to select which tests to run, the ability to run all tests regardless of failure and show stats afterwards -- these are all basic functionalities that ought to work out of the box.

- Automatic type & attribute inference. `auto` was revolutionary when I first joined D (C++ got it only years later). We should improve type and attribute inference to the max (e.g., in default parameters for enums there should be no need to repeat the enum name). Nobody likes spelling out attribute soup, just like nobody likes spelling out explicit types when they're already obvious from context. The compiler should automate this to the max. A little bit of breakage here IMO is acceptable as long as it gets us to an even better place. Have negated attributes be a thing as well. In fact, make attributes first-class citizens so that we can use DbI / metaprogramming to manipulate them.

This is what we should be focusing our efforts on, instead of trying to woo an amorphous group of hypothetical potential users somewhere out there who aren't particularly likely to adopt D to begin with.
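Since the {{...}} hack comes up in the list above, here is a small sketch of what it looks like today (type name invented; prints each member name at run time):

```d
import std.stdio : writeln;

struct Point { int x, y; }

void listMembers(T)()
{
    static foreach (m; __traits(allMembers, T))
    {{                      // the extra braces create a real scope per
        enum name = m;      // iteration; with single braces, `name` would be
        writeln(name);      // a multiple declaration on the second pass
    }}
}

void main()
{
    listMembers!Point();    // prints: x, then y
}
```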
T

-- 
People tell me that I'm skeptical, but I don't believe them.
Feb 15
next sibling parent reply Dom DiSc <dominikus scherkl.de> writes:
On Friday, 16 February 2024 at 04:38:03 UTC, H. S. Teoh wrote:
 TBH, @nogc, dip1000, @live, etc., feel a lot like D trying to 
 go in entirely new directions.
I don't think so. Ok, @nogc is somewhat superfluous (as you better control the GC with explicit commands in the few places where this is necessary), but the rest plays well with @safe and pure and so on.

I would like D to be a better Rust (as I hate the Rust syntax and the struggle with the borrow checker in code that has really nothing to do with memory safety). What I miss most is @safe by default and working @properties (and type-properties: it always bugs me that with user-defined properties you need to use myProperty!T instead of T.myProperty - mostly because I tend to always use the wrong syntax. I even started to re-define the built-in properties with templates, just so that I can use them all the same way).
Feb 16
parent Dom DiSc <dominikus scherkl.de> writes:
On Friday, 16 February 2024 at 10:10:07 UTC, Dom DiSc wrote:
 On Friday, 16 February 2024 at 04:38:03 UTC, H. S. Teoh wrote:
 I would like D to be a better Rust (as I hate the Rust syntax 
 and the struggle with the borrow checker in code that has 
 really noching to do with memory safety).
nothing
 What I miss most is @safe by default and working @properties 
 (and type-properties: it always bugs me that with user defined 
 properties you need to use myProperty!T instead of T.myProperty 
 - mostly because I tend to always use the wrong systax.
syntax
 I even started to re-define the buildin properties with 
 templates, just so that I can use them all the same way).
what I mean is: if I declare a template with a single compile-time parameter as a property:

```d
static @property template myProperty(T) {}
```

it should automatically be callable with `int.myProperty` or `myType.myProperty`. Should be very easy to implement.
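For contrast, here is roughly how such a "type property" has to be written and used today, via an eponymous template (the name `bitWidth` is made up):

```d
// A "type property" as an eponymous template:
enum bitWidth(T) = T.sizeof * 8;

static assert(bitWidth!int == 32);   // today's syntax: myProperty!T
// The proposal above would additionally allow: int.bitWidth
```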
Feb 16
prev sibling next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On Friday, 16 February 2024 at 04:38:03 UTC, H. S. Teoh wrote:
 Built-in AA's are one of my major reasons for choosing D, and 
 seeing it languish for years with elementary features like 
 static initialization not fixed is quite disheartening.  Worse 
 when it feels like D wants to move to newer pastures when its 
 current features are still half-done and has problematic corner 
 cases.  I.e., what I like about D is stagnating, while new 
 features that I have little interest in are being pushed on me.
Static AA initialization is in the language now:

https://dlang.org/changelog/2.106.0.html#dmd.static-assoc-array

-Steve
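As I read that changelog entry, it allows initializing an AA directly at module scope; a minimal, untested sketch:

```d
// Previously this needed a module constructor; as of 2.106 it can
// reportedly be written directly:
immutable string[string] colorHex = [
    "red":   "#FF0000",
    "green": "#00FF00",
];

void main()
{
    assert(colorHex["red"] == "#FF0000");
}
```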
Feb 16
prev sibling next sibling parent Carl Sturtivant <sturtivant gmail.com> writes:
On Friday, 16 February 2024 at 04:38:03 UTC, H. S. Teoh wrote:
[...]
+100

And the subtle business mentioned first, of cleaning up dark ugly corners with breaking changes, gets +110. Here are a couple of bottom-level ones that jumped out at me in the last few days.

---

Fix void to be a type like any other and not an unconsidered edge case to trip over.

---

Remove automatic conversions between signed and unsigned, so that unconsidered bizarre semantics is excluded from D (small example at the end of this post).

Here's a sane take on GC, not more ideology:
https://bitbashing.io/gc-for-systems-programmers.html

---

Make the GC a 21st-century GC! And acknowledge that using it by default makes sense, as per the short article.
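To illustrate the signed/unsigned item above, the classic surprise in a tiny, self-contained example:

```d
void main()
{
    int  a = -1;
    uint b = 1;
    // `a` is implicitly converted to uint (becoming 4294967295),
    // so the comparison goes the "wrong" way:
    assert(!(a < b));
}
```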
Feb 18
prev sibling parent cc <cc nevernet.com> writes:
On Friday, 16 February 2024 at 04:38:03 UTC, H. S. Teoh wrote:
 GC. Instead of bending over backwards trying to woo the @nogc 
 crowd
I just want to chime in here: I'm a GC minimalist who avoids it whenever reasonably possible and only uses it when it makes the most sense. And I really don't care about @nogc. Feel free to dump it and work on cooler stuff, you have my support!
 trying to woo an amorphous group of hypothetical potential users
I feel like this describes a lot of the general philosophy I get observing these forums lately. A perhaps overly cynical interpretation would be that (some of?) the D community is so insecure about losing any more of its already well-known-to-be-small userbase that it's terrified to make any serious meaningful positive changes, in the event some unknown person somewhere with a dub package that hasn't been touched in 7 years gets annoyed typing build and decides to move on (hey, I get annoyed typing dub build *every time*). Or some vague future new user to whom this will happen 7 years hence.

Every time I come to General I see another thread with a deep, introspective, heavily passionate argument about why we can't have nice things because of some astronomically remote edge case, and everything gets frozen into a moebius loop of trying to figure out how to account for every possible combinatorial way it might be used or misused. I'll grant lack of foresight has long been the C Family Curse, but there is such a thing as being too navel-gazing as well.

Every time I see one of these hot topics and start drawing up a response, I watch it spiral deeper into interdependent debates, bringing up every flaw that D, Phobos, and C++ have ever experienced in their lifetimes, of why it needs to be absolutely perfect to some exacting cosmological standard so that the nebulous supercorporation that may or may not use it in some unspecified future will be sufficiently satisfied with the implementors' fidelity, and sigh and delete the post. I've discarded more drafts to this forum than I've ever submitted.

Why argue with the heavyweights? They've got all the scientific proof that doing anything that may need to change someday is simply impermissible, and all I have is a fondness for nice things. Nice things are what drew me to D in the first place, but now nice things are anathema, because we might be required to take responsibility for them someday. Why get a dog if you have to walk it?
Feb 20
prev sibling next sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
Fast compilation, modules, and compile-time stuff should have been enough for D to rule in the systems language area... but there was too much distraction (Java OOP stuff), time has passed, and there's still no way to do better than C in some areas.

And I disagree with you on Python 3: the breaking changes didn't hurt its adoption, they gave it a new birth; what hurt its adoption was distro maintainers.

I do a lot of Python these days, and it saddens me whenever I go back to D and can't do a simple pattern match; I have to use the old-ass verbose C-style switch.

Similarly with returning multiple values: you can't do it in D unless you import a template everywhere, and even then, using them is not as smooth and is too verbose.
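For reference, the template in question is presumably std.typecons.Tuple; a small sketch of what using it looks like today:

```d
import std.typecons : tuple, Tuple;

Tuple!(int, int) divMod(int a, int b)
{
    return tuple(a / b, a % b);   // "returns" two values, wrapped in a Tuple
}

void main()
{
    auto r = divMod(7, 2);
    assert(r[0] == 3 && r[1] == 1);
    // Named fields are also possible: Tuple!(int, "quot", int, "rem")
}
```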

It's hard to win people over on these matters because everyone has their own idea of what's needed, what's important, and what's useful.

At some point the leadership has to brute-force it and implement the things they think will be needed, important, and useful.

Expecting users to contribute is wrong: it's your project; we are only using it to power our own projects.

Anyways.. i don't mind breaking changes, as long as:

- it is properly documented

- the upgrade path is automated with tooling as much as possible (I wish dmd had a `dmd fmt` built in)

- it was done to make the language future proof


Backward compatibility should have a minimum version, otherwise 
there is no way to fix past mistakes or to adapt

For me, the strengths of D:

- fast compilation
- close to C performance/control wise
- very easy access to C code / libraries, great to kickstart a 
project
- compile time type introspection
- modules
- not ruled by a big corp

However, it shows its lack of adaptiveness when compared to the competition. Its weaknesses:

- verbosity/repetition in the wrong areas
- no switch as expression
- no native tuple
- no tagged union
- no static array length inference
- can't have anonymous struct within structs
- C++ style of "the solution is a template in phobos"
- hard to predict what's next
Feb 16
parent ryuukk_ <ryuukk.dev gmail.com> writes:
Small addition to my post for a little prediction

I predict that people using high level languages will be replaced 
by generative AI tools

Python developers doing generative AI stuff turn back to C 
whenever they need performance

I want them to turn back to D

If you just have one D library that one of these devs depends on, then it's a win for D, because that'll attract a ton of developers with an interest in maintaining, extending, and funding it. That's IMO what we should all strive for: enabling people to write pragmatic libraries with high impact and high performance -- so no GC, no exceptions, no OOP -- so that it's easy for these people to consume them.


D has to compete with what exists today and with what's to come; it needs to be future-proof.


https://docs.modular.com/mojo/why-mojo.html
Feb 16
prev sibling next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Friday, 16 February 2024 at 01:44:51 UTC, Paul Backus wrote:
 To everyone else reading this, I'd like to leave you with one 
 last question: what do **you** like about D? What strengths 
 does D have, as a language, that you'd like to see become even 
 stronger?
You managed to have me look at DIP1000 in a new light.

If it's going to be made the default, the choices are (sketched in code at the end of this post):

1. Put in a lot of thought to make the code compile.
2. Move stack-allocated stuff to the heap so there's no need to fight with `scope`.
3. Mark the code where there are problems `@system`.

As you said, 1 can be a real problem. If the code has already been battle-tested, it's far too much effort to be worth it if the code in question will not undergo any major overhauls anymore. 2 is okay in most cases - fortunately - but it can't be a general answer, especially not for a systems language like D. And we really don't want people doing option 3. It's less of a problem than combining `@system` and stack references for new code, but the code is still probably going to see some changes. `scope` is an incredible footgun outside `@safe`.

Before, I would have said that if the code doesn't compile with DIP1000 it isn't verifiable as safe anyway, so there's nothing to be done. In practice, though, even pre-DIP1000 safety checks are usually much better than `@system` or `@trusted`, and *especially* better than non-`@safe` code _with scope references_.

I didn't think much of the language foundation's decision to keep pre-DIP1000 semantics the default until we have editions, but considering this, it starts to make sense. We need a way to selectively enable pre-DIP1000 semantics for old functions before we move on; otherwise the choices for those who have old code are just too stark.
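For what it's worth, a minimal sketch of what those three choices tend to look like in practice (function names made up; the exact diagnostics depend on compiler version and switches):

```d
@safe int sum(int[] xs)                 // plain, non-scope parameter
{
    int total;
    foreach (x; xs) total += x;
    return total;
}

@safe int sumScoped(scope int[] xs)     // choice 1: promise not to escape xs
{
    int total;
    foreach (x; xs) total += x;
    return total;
}

@safe int example()
{
    int[4] local = [1, 2, 3, 4];

    // Accepted historically; under -preview=dip1000 the slice of a local
    // is treated as `scope` and may not be passed to a non-scope parameter:
    // int a = sum(local[]);            // error under dip1000

    int b = sumScoped(local[]);         // choice 1: compiles, checked

    int[] heaped = [1, 2, 3, 4];        // choice 2: GC-allocated copy
    int c = sum(heaped);

    return b + c;
}

@system int unchecked()                 // choice 3: opt out of the checks
{
    int[4] local = [1, 2, 3, 4];
    return sum(local[]);
}
```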
Feb 16
parent reply Paul Backus <snarwin gmail.com> writes:
On Friday, 16 February 2024 at 15:20:34 UTC, Dukc wrote:
 Before, I would have said that if the code doesn't compile with 
 DIP1000 it isn't verifiable as safe anyway, so nothing to be 
 done. In practice though, even pre-DIP1000 safety checks are 
 usually much better than `@system` or `@trusted`, and 
 *especially* better than non-`@safe` code _with scope 
 references_.
 
 I didn't think much of the language foundations decision to 
 keep pre-DIP1000 semantics the default until we have editions, 
 but considering this it starts to make sense. We need a way to 
 selectively enable pre-DIP1000 semantics for old functions 
 before we move on, otherwise the choices for those who have old 
 code are just too stark.
I think for DIP1000 to succeed, we're going to have to come at this from both directions. Yes, we need to do as much as we can to reduce the burden of adoption, but even if we do, DIP1000 is always going to be a high-impact breaking change. Which means that *in addition to providing a migration path*, we need to have strong buy-in from the community that `@safe` is one of D's core strengths, and improvements to `@safe` are something they're willing to suffer some breakage for.

If we don't have that buy-in, then, as much as it pains me to say it, the correct move is probably to give up on `@safe` altogether, and focus our efforts on improving D's other strengths.
Feb 16
next sibling parent Sebastiaan Koppe <mail skoppe.eu> writes:
On Friday, 16 February 2024 at 16:15:30 UTC, Paul Backus wrote:
 I think for DIP1000 to succeed...
For me, dip1000 has already succeeded. It aligns very well with structured concurrency. I use it heavily to move almost every async state to the stack.
 Yes, we need to do as much as we can to reduce the burden of 
 adoption, but even if we do, DIP1000 is always going to be a 
 high-impact breaking change.
Most of the code out there that isn't @safe is effectively @trusted. There is no shame in marking it as such. That generally provides a good transition path. Yes, it won't get any benefits until the code is @safe, but it can be done gradually.

There are some pain points, of course: sometimes it is difficult to convince the compiler that what the code does is safe, or you have to decipher which of the many attributes to use. There is also sometimes a cascading need to sprinkle scope on member functions.

Generally, though, I find code that has adopted dip1000 to have a better API and fewer surprises on the inside. It's subtle, but it's there.

So yeah, it can sometimes be very difficult to get everything @safe. That said, application code mostly doesn't have to care. Let all the tricky dip1000 concepts be constrained to libraries and just write applications using the GC.
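A small sketch of that gradual path: wrap the genuinely unsafe part in a @trusted lambda so the surrounding function can already be @safe (names made up):

```d
import core.stdc.string : memcpy;

@safe void copyInto(ubyte[] dst, const(ubyte)[] src)
{
    assert(dst.length >= src.length);
    // Only this small, reviewable kernel is exempt from the checks:
    () @trusted { memcpy(dst.ptr, src.ptr, src.length); }();
}

@safe unittest
{
    ubyte[] src = [1, 2, 3, 4];
    auto dst = new ubyte[](4);
    copyInto(dst, src);
    assert(dst == src);
}
```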
Feb 16
prev sibling parent Dukc <ajieskola gmail.com> writes:
On Friday, 16 February 2024 at 16:15:30 UTC, Paul Backus wrote:
 Which means that *in addition to providing a migration path*, 
 we need to have strong buy-in from the community that `@safe` 
 is one of D's core strengths, and improvements to `@safe` are 
 something they're willing to suffer some breakage for.
For code still in active development, sure. There's no point in enabling `@safe` in the first place if there's no willingness to use it with the rules that are actually safe, which means turning DIP1000 on. If people refuse to keep the structs/arrays they refer to in `@safe` code on the heap and also refuse to take the time to learn DIP1000, then they essentially don't want real memory safety.

But for code in maintenance-only mode, it's different. Whatever bugs it may have had because of the lack of DIP1000 are usually already caught the hard way, or if they remain they manifest only very rarely. That people maintaining code like that don't want to update it doesn't really mean they don't want `@safe` or DIP1000; it's just that in this case the work-to-benefit ratio is much worse than for new code.
 If we don't have that buy-in, then, as much as it pains me to 
 say it, the correct move is probably to give up on `@safe` 
 altogether, and focus our efforts on improving D's other 
 strengths.
Fortunately we don't have to make a hard yes-or-no decision on that one since the language as-is allows keeping `@system` on for everything.
Feb 19
prev sibling next sibling parent monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 16 February 2024 at 01:44:51 UTC, Paul Backus wrote:
 The answer is: because that's the reason they chose Rust in the 
 first place!
You should at least consider other hypotheses:

- Rust programmers are anti-sanity; much like their willingness to design a C++-like syntax hell from scratch, maybe they masochistically enjoy breaking changes.

- Rust's success is political, not meritocratic. Firefox is on the short list of charities who are way over-funded, beg for money, and let their main project rot; Firefox funded Rust.

- Bad, verbose solutions make people feel smart for handling them; breaking changes for security are stratifying for Rust users.

- While everyone wants good code, there's nothing in the world actually delivering that consistently in what we get; so Rust's success isn't permissive, etc etc etc.
Feb 16
prev sibling next sibling parent reply Dukc <ajieskola gmail.com> writes:
On Friday, 16 February 2024 at 01:44:51 UTC, Paul Backus wrote:
 To everyone else reading this, I'd like to leave you with one 
 last question: what do **you** like about D? What strengths 
 does D have, as a language, that you'd like to see become even 
 stronger?
It revolves around the fact that it's truly general purpose - scripting, application development, and systems programming are all equally supported. And also that the language doesn't impose restrictions just for the sake of some stylistic discipline.

If D ever were to "pick its camp" and either force me to use the GC and the standard runtime whether I want to or not, or alternatively ditch the GC and impose RAII/ref counting instead, that would drive me away.
Feb 19
parent reply Carl Sturtivant <sturtivant gmail.com> writes:
On Monday, 19 February 2024 at 17:14:21 UTC, Dukc wrote:
 It revolves around the fact it's truly general purpose - 
 scripting, application development and systems programming all 
 equally supported. And also that the language doesn't impose 
 restrictions just for sake of some stylistic discipline.
+1
 If D ever were to "pick it's camp" and either force me to use 
 the GC and the standard runtime whether I want or not, or 
 alternatively ditch the GC it and impose RAII/ref counting 
 instead, that would drive me away.
To be clear, I am not suggesting that D should force using the GC. Quoting the article I mentioned (https://bitbashing.io/gc-for-systems-programmers.html):
Many developers opposed to garbage collection are building 
“soft” real-time systems. They want to go as fast as 
possible—more FPS in my video game! Better compression in my 
streaming codec! But they don’t have hard latency requirements. 
Nothing will break and nobody will die if the system 
occasionally takes an extra millisecond.
 [...]
**Modern garbage collection offers optimizations that 
alternatives can not.** A moving, generational GC periodically 
recompacts the heap. This provides insane throughput, since 
allocation is little more than a pointer bump! It also gives 
sequential allocations great locality, helping cache performance.
 [...]
I’m not suggesting that all software would benefit from garbage 
collection. Some certainly won’t. But it’s almost 2024, and any 
mention of GC—especially in my milieu of systems 
programmers—still drowns in false dichotomies and FUD. *GC is 
for dum dums, too lazy or incompetent to write an "obviously" 
faster version in a language with manual memory management*.

It’s just not true. It’s ideology. And I bought it for over a 
decade until I joined a team that builds systems—systems people 
bet their lives on—that provide sub-microsecond latency, using a 
garbage-collected language that allocates on nearly every line. 
It turns out modern GCs provide amazing throughput, and you 
don’t need to throw that out for manual memory management just 
because some of your system absolutely needs to run in n clock 
cycles. (Those *specific parts* can be relegated to non-GC code, 
or even hardware!)
What I *am* suggesting is that a modern GC for D would be a game-changer, with the default of using the GC being the best answer most of the time. People who didn't believe this could find it out experimentally.
Feb 19
parent Dukc <ajieskola gmail.com> writes:
On Tuesday, 20 February 2024 at 00:48:42 UTC, Carl Sturtivant 
wrote:
 On Monday, 19 February 2024 at 17:14:21 UTC, Dukc wrote:
 If D ever were to "pick it's camp" and either force me to use 
 the GC and the standard runtime whether I want or not, or 
 alternatively ditch the GC it and impose RAII/ref counting 
 instead, that would drive me away.
To be clear, I am not suggesting that D should force using the GC.
No worries, I wasn't thinking of you - rather the numerous doomsayers here over the years who say D can't succeed "if it can't decide what it wants to be". If they are right, it means my criterion for an ideal language is fundamentally backwards.
 What I *am* suggesting is that a modern GC for D would be a 
 game-changer, with the default of using the GC being the best 
 answer most of the time. People who didn't believe this could 
 find it out experimentally.
D's GC is more modern than you might think. Yes, it stops the world and isn't as fast as many of the alternatives, but that's because it has a unique feature: the GC-collected memory can be referred to with raw pointers. Any GC that can stop its collection and resume it afterwards must only use special references (that tell the GC when they are assigned to), meaning you can't let regular C functions handle your GC-collected memory.

Maybe D would still benefit from a GC that requires pointer write gates, but then all D pointers (and other references, save for function pointers) of that program would need that write gate. This is not an option if, say, you're writing a library for a foreign language that must work without DRuntime being initialised. Hence the pointer assignment semantics would have to be behind a compiler switch. Or maybe pointer assignment would call a symbol somewhere in DRuntime that can be redefined by the user.
Feb 20
prev sibling next sibling parent Guillaume Piolat <first.name gmail.com> writes:
On Friday, 16 February 2024 at 01:44:51 UTC, Paul Backus wrote:
 Why might that be? Glancing over some of the Rust release notes 
 myself, I notice a couple of trends.

 1. Many of the changes are low-impact and provide a clear 
 migration path for existing code.
 2. Many of the changes involve fixing holes in Rust's 
 memory-safety checks.
So here is my first impression of Rust; I had to build a measuring tool today and the only workable one happened to be in Rust. Remarks:

- "rustup" can be gotten with a curl | sh command line... it's nice, it means a few fewer clicks to get, and you can't really have a bad rustup version anyway.

- I had to install various nightlies using "rustup toolchain", which is an "edition" + a target. A bit like Flutter does.

- cargo is essentially like dub, I saw no big difference here. The binaries are annoyingly deep in sub-directories to find. One-letter shortcuts are useful.

- The code didn't build (1000+ packages!), I had to fix the dependencies in ~/.cargo to proceed. I must say it all built pretty fast. Now, for build errors, there was an enormous wall of text (almost too much) explaining how to fix the issues, so despite my ignorance of the language it was possible to continue.

- Libraries don't look as friendly without the GC, for example CLAP vs a D argument parser.

Pros: I think rustup is a big win here; a principled approach to installing this or that version of D for this or that platform would perhaps be a good idea. It is similar to DVM, but DVM isn't much used.

Cons: A bit of a wordy experience, and the necessity to use nightlies for features that have been in nightly for 3 years (will they get merged?). Unnatural package count.
Feb 19
prev sibling parent reply Atila Neves <atila.neves gmail.com> writes:
On Friday, 16 February 2024 at 01:44:51 UTC, Paul Backus wrote:
 In [a 2019 blog post][1], which I found today on the front page 
 of /r/programming, Michael Orlitzky complains that modern 
 languages have too many breaking changes. He contrasts 
 "grownup" languages like C and Ada, which have remained stable 
 and largely backwards compatible for decades, with languages 
 like Rust, which [regularly breaks compatibility][2] despite 
 having promised otherwise in its 1.0 release.

 [...]
Thanks for writing this, some very good points here. I think that making migration easier is something we need to focus on, but that probably needs dmd as a library to be easier to use. In the case of DIP1000 specifically, I think maybe Robert's idea of moving its checks to `@trusted` may be the way forward, and making `@safe` regular GC D. Once I'm done with editions I'm going to write a DIP for this.
Feb 20
next sibling parent Dukc <ajieskola gmail.com> writes:
On Tuesday, 20 February 2024 at 09:03:15 UTC, Atila Neves wrote:
 In the case of DIP1000 specifically I think maybe Robert's idea 
 of moving its checks to `@trusted` may be that way forward, and 
 making `@safe` regular GC D. Once I'm done with editions I'm 
 going to write a DIP for this.
Please remember that it won't be any better than DIP1000 for backwards compatibility - in fact it's even worse, since:

- With DIP1000, your slice of a static array or pointer to a struct field will still compile if you didn't happen to escape it. With Robert's proposal it will always need fixing.

- With Robert's proposal, your choices for breaking code are either removing `@safe` or moving your variables to the heap. With DIP1000 you can also do either, but you have a third choice as well: adding `scope` annotations.
Feb 20
prev sibling next sibling parent Paul Backus <snarwin gmail.com> writes:
On Tuesday, 20 February 2024 at 09:03:15 UTC, Atila Neves wrote:
 In the case of DIP1000 specifically I think maybe Robert's idea 
 of moving its checks to `@trusted` may be that way forward, and 
 making `@safe` regular GC D. Once I'm done with editions I'm 
 going to write a DIP for this.
My understanding is that both Robert's proposal and DIP 1000 are high-impact changes with difficult migration paths for existing `@safe` code. It's not clear to me that switching from one to the other will make things any easier.

Personally, I think if we're going to break people's code either way, we might as well do it by making `@safe` D more powerful, rather than crippling it.
Feb 20
prev sibling parent reply Sebastiaan Koppe <mail skoppe.eu> writes:
On Tuesday, 20 February 2024 at 09:03:15 UTC, Atila Neves wrote:
 In the case of DIP1000 specifically I think maybe Robert's idea 
 of moving its checks to `@trusted` may be that way forward, and 
 making `@safe` regular GC D. Once I'm done with editions I'm 
 going to write a DIP for this.
For me dip1000 isn't about the GC though, but rather about avoiding the use of an object after it has been destructed. In its simplest form this involves restricting the usage of a pointer. The reason I would want to restrict it is because I know the thing it points to will be dead at some point in my program, and I want the compiler to ensure any usage doesn't violate that. (Which, incidentally, is why I believe any allocator needs to return essentially a scope pointer.)

It is a way to have objects with deterministic lifetimes and the guarantee of correct usage, by restricting them to some scope. Importantly, this goes beyond just memory. Just because I use the GC and no longer have to care about the lifetime of raw memory doesn't mean I stop caring about the lifetime of objects in general. Plenty of resources need deterministic destruction. Modelling them as non-copyable structs is an excellent way to achieve that _and_ avoid reference counters. After which it's natural to use dip1000 to pass them around. Even if you put them on the heap you would want to ensure there is no usage after they are logically freed.

Buffers are a good example. They typically live on the heap, you will want to reuse them to avoid churn, and you will want to ensure downstream code doesn't accidentally escape them. So you make them scope and rely on dip1000. Now all that has to be `@trusted`?
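A rough sketch of that pattern (type and member names invented): a non-copyable owner that only hands out `scope` views, so @safe callers can't stash them away under -preview=dip1000:

```d
struct Scratch
{
    private ubyte[] data;

    @disable this(this);                       // non-copyable

    this(size_t n) @safe { data = new ubyte[](n); }

    // Downstream code gets a view it may use, but not store.
    void withView(scope void delegate(scope ubyte[]) @safe dg) @safe
    {
        dg(data);
    }
}

@safe void example()
{
    auto scratch = Scratch(1024);
    scratch.withView((scope ubyte[] view) {
        view[0] = 42;                          // fine: used within the scope
        // storing `view` in a global would be rejected under dip1000
    });
}
```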
Feb 20
parent "Richard (Rikki) Andrew Cattermole" <richard cattermole.co.nz> writes:
On 21/02/2024 10:42 AM, Sebastiaan Koppe wrote:
[...]
Agreed.

Not having reference counting in the language which overrides `scope` is bad enough, and will kill `scope` for me in the future. But this - this would be the end of `scope` on non-delegates for me right now.

Users of libraries should not be touching `@trusted` if everything is working correctly. I want `@safe` code protected, not `@trusted`.
Feb 20