
digitalmars.D - Is it time for D 3.0?

reply Steven Schveighoffer <schveiguy gmail.com> writes:
This pattern has been happening a lot:

1. We need to add feature X, to fix problem Y.
2. This will break ALL CODE IN EXISTENCE
3. OK, cancel the fix, we'll just live with it.

Having a new branch of the compiler will provide a way to keep D2 
development alive while giving a playground to add new mechanisms, fix 
long-existing design issues, and provide an opt-in for code breakage.

Some issues I can think of:

1. The safe by default debate
2. pure by default
3. nothrow by default
4. String interpolation DIP
5. auto-decoding
6. range.save
7. virtual by default
8. ProtoObject

Other languages evolve much quicker than D, but break things only in 
major updates. D seems to "sort of" break things, there's always a risk 
in every release. We try to be conservative, but we have this horrible 
mix of deciding some features can break things, while others are not 
allowed to, and there's no clear guide as to which breakage fits in 
which category.

If we went to a more regular major release schedule, and decided for a 
roadmap for each major release what features would be included, it would 
allow much better planning, and much more defensible breakage of code. 
If you know that your code will only compile with D2.x, and you're fine 
with that, then great, don't upgrade to D3.x. If you desperately want a 
feature, you may have to upgrade to D3.x, but once you get there, you 
know your code is going to build for a while.

We could also not plan for many major releases, but at least move to D3 
for some major TLC to the language that is held back to prevent breakage.

I work occasionally with Swift, and they move very fast, and break a lot 
of stuff, but only in major versions. It's a bit fast for my taste, but 
it seems to work for them. And they get to fix issues that languages 
like C++ might have been stuck with forever.

The biggest drawback is that we aren't a huge language, with lots of 
manpower to keep x branches going at once.

I just wanted to throw it out as a discussion point. We spend an awful 
lot of newsgroup server bytes debating things that to me seem obvious, 
but have legitimate downsides for not breaking them in a "stable" language.

-Steve
Mar 27
next sibling parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 This pattern has been happening a lot:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 [...]
Didn't the 1.0 to 2.0 conversion nearly kill the language? -Alex
Mar 27
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/27/20 12:03 PM, 12345swordy wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 This pattern has been happening a lot:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 [...]
Didn't the 1.0 to 2.0 conversion nearly kill the language?
No. -Steve
Mar 27
parent reply 12345swordy <alexanderheistermann gmail.com> writes:
On Friday, 27 March 2020 at 16:54:28 UTC, Steven Schveighoffer 
wrote:
 On 3/27/20 12:03 PM, 12345swordy wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 This pattern has been happening a lot:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 [...]
Didn't the 1.0 to 2.0 conversion nearly kill the language?
No. -Steve
Oh, I must have confused it with something else. This article is still relevant to this day. https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
Mar 27
next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/27/20 12:58 PM, 12345swordy wrote:
 On Friday, 27 March 2020 at 16:54:28 UTC, Steven Schveighoffer wrote:
 On 3/27/20 12:03 PM, 12345swordy wrote:
 Didn't the 1.0 to 2.0 conversion nearly kill the language?
No.
Oh, I must have confused it with something else.
There was an issue with an alternative standard library (Tango), which divided the community. That shouldn't be a problem for a D3. -Steve
Mar 27
parent reply Mike Parker <aldacron gmail.com> writes:
On Friday, 27 March 2020 at 17:17:44 UTC, Steven Schveighoffer 
wrote:
 On 3/27/20 12:58 PM, 12345swordy wrote:
 On Friday, 27 March 2020 at 16:54:28 UTC, Steven Schveighoffer 
 wrote:
 On 3/27/20 12:03 PM, 12345swordy wrote:
 Didn't the 1.0 to 2.0 conversion nearly kill the language?
No.
Oh, I must have confused it with something else.
There was an issue with an alternative standard library (Tango), which divided the community. That shouldn't be a problem for a D3.
I remembered Tango being the bigger issue, too. But while working on the HOPL IV paper, digging through the forum archives, and recollecting with a few people, I realized that wasn't the whole story. The Tango split was real, but it almost certainly would have been much less of an issue without the move to D2. The D1/D2 split was much more impactful.

Some of the changes and new features required a paradigm shift (e.g., transitive const/immutable, ranges & algorithms). Tango with D1 was an escape hatch for those who were resistant to the changes (I was pretty resistant myself; wrote a big rant about it on my old blog). Some (like me) eventually came around. I'm now of the opinion that if Tango hadn't been around at the time, we may well have lost more people than we did.

There was also this quote from Walter a few years back: https://forum.dlang.org/post/nmf48b$1ckm$1 digitalmars.com

"There are no plans for D3 at the moment. All plans for improvement are backwards compatible as much as possible. D had its wrenching change with D1->D2, and it nearly destroyed us."

I'm saying this just to point out the historical context, not to take sides on a potential D3. If we do move in that direction, IMO we need to do it in a way that's methodical and clearly mapped out.
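The "transitive const" paradigm shift mentioned above can be illustrated with a short sketch (the `Node` type and names are invented for illustration, not from the thread): in D2, const on a reference applies to everything reachable through it, which is what made so much D1-era code stop compiling.

```d
// Hypothetical example: transitive const, one of the D1 -> D2 paradigm
// shifts. In D2, const reaches through every indirection.
struct Node
{
    int value;
    Node* next;
}

void walk(const(Node)* head)
{
    // head.value = 1;      // error: cannot modify const expression
    // head.next.value = 1; // also an error: const is transitive
    for (auto n = head; n !is null; n = n.next)
    {
        // n is const(Node)*; everything reachable from it is const too
    }
}

void main()
{
    auto tail = new Node(1, null);
    auto head = new Node(2, tail);
    walk(head); // mutable data implicitly converts to a const view
}
```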
Mar 27
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 27 March 2020 at 22:34:40 UTC, Mike Parker wrote:
 The D1/D2 split was much more impactful.
What, specifically, was it? D2 was literally just an arbitrary point release in an ongoing evolution.

String literals becoming invariant, for example, happened at 2.006, which was probably the biggest breaking change outside the phobos library. I recall that being a pretty invasive change, but the most annoying to me as I remember was actually renaming in phobos, like stripl to stripLeft and such, which was spread over several releases.

I also recall some commercial users having major problems with the change to slices - the array stomping protection modification, which was a silent runtime breaking change.

But it will be worth looking at exactly what it was and why it bothered people. There's been breaking changes before and after the D2 name too, and the D2 name itself didn't actually break anything. So bringing it up without a specific policy isn't helpful.
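For readers who didn't live through 2.006, here is a minimal sketch (variable names invented) of the kind of code that the invariant-string-literal change broke:

```d
// Hypothetical sketch of the 2.006 break: string literals became
// immutable(char)[] (i.e. string), so D1-style char[] bindings to
// literals stopped compiling.
void main()
{
    // char[] s = "hello";   // accepted in D1; a type error since 2.006
    char[] s = "hello".dup;  // the usual fix: take a mutable copy
    s[0] = 'H';
    assert(s == "Hello");

    string t = "hello";      // literals are immutable(char)[]
    // t[0] = 'H';           // error: cannot modify immutable expression
}
```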
 There was also this quote from Walter a few years back:
His memory is as faulty as anyone else's.
Mar 27
parent Jacob Carlborg <doob me.com> writes:
On 2020-03-28 01:53, Adam D. Ruppe wrote:
 D2 was literally just an arbitrary point release in an ongoing
 evolution.
Yes.
 String literals becoming invariant, for example, happened at 
 2.006, which was probably the biggest breaking change outside the phobos 
 library. I recall that being a pretty invasive change
String literals have always been placed in read-only sections on Posix. In D1, even if the compiler did let you modify string literals, it would crash at runtime. Perhaps it was only a problem on Windows?

--
/Jacob Carlborg
Mar 28
prev sibling parent GreatSam4sure <greatsam4sure gmail.com> writes:
On Friday, 27 March 2020 at 16:58:03 UTC, 12345swordy wrote:
 On Friday, 27 March 2020 at 16:54:28 UTC, Steven Schveighoffer 
 wrote:
 On 3/27/20 12:03 PM, 12345swordy wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven 
 Schveighoffer wrote:
 This pattern has been happening a lot:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 [...]
Didn't the 1.0 to 2.0 conversion nearly kill the language?
No. -Steve
Oh, I must have confused it with something else. This article is still relevant to this day. https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/
Thanks for the article you referenced. Now I know throwing away your codebase is not the right decision. Proper refactoring and improvement is the way to go. I am a little wiser now.
Mar 27
prev sibling next sibling parent reply Meta <jared771 gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 This pattern has been happening a lot:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject

 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.

 If we went to a more regular major release schedule, and 
 decided for a roadmap for each major release what features 
 would be included, it would allow much better planning, and 
 much more defensible breakage of code. If you know that your 
 code will only compile with D2.x, and you're fine with that, 
 then great, don't upgrade to D3.x. If you desperately want a 
 feature, you may have to upgrade to D3.x, but once you get 
 there, you know your code is going to build for a while.

 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.

 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them. And they get 
 to fix issues that languages like C++ might have been stuck 
 with forever.

 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.

 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
D has been around for 20 years and hasn't gained the traction that younger languages like Rust or Go have (though as we all know, the main reason for this is D's lack of a big corporate patron a la Mozilla or Google). Maybe what's needed is a "new" language that breaks backwards compatibility (as conservatively as possible and hopefully in a way that makes it easy to automatically port your D2 code). Walter originally wanted to call it the Mars language - maybe it's time to revive that name in a complete rebranding of the language.
Mar 27
parent reply Russel Winder <russel winder.org.uk> writes:
On Fri, 2020-03-27 at 16:55 +0000, Meta via Digitalmars-d wrote:
[…]
 D has been around for 20 years and hasn't gained the traction
 that younger languages like Rust or Go have (though as we all
 know, the main reason for this is D's lack of a big corporate
 patron a la Mozilla or Google). Maybe what's needed is a "new"
 language that breaks backwards compatibility (as conservatively
 as possible and hopefully in a way that makes it easy to
 automatically port your D2 code).
Whilst D has not had the hype or the instant traction of Rust and Go, it does have some traction and mindshare. This is based on its history, and it would (in my opinion) be a bad idea to lose this. I think having a positive strategy towards a D v3 would be a good idea, but only if there are big breaking changes to D v2. The current Java evolution strategy is fine, but its version numbering is an unmitigated disaster – in my view. Groovy, though, has had the right evolution strategy and the right approach to version numbering – it has fairly recently released v3 for all exactly the right reasons.
 Walter originally wanted to call it the Mars language - maybe=20
 it's time to revive that name in a complete rebranding of the=20
 language.
I think having a brand new language that just happened to have a very simple upgrade path from D v2 would be a self-defeating activity. People would very quickly spot the con. If D has a future it is in terms of v3, v4, etc. with a strong technical evolution (cf. Groovy) and good marketing.

Clearly D remaining at v2 for ever more would, I feel, be a Very Bad Idea™ since it advertises no changes to the language, i.e. a language with a stalled evolution.

--
Russel.

Dr Russel Winder                      t: +44 20 7585 2200
41 Buckmaster Road                    m: +44 7770 465 077
London SW11 1EN, UK                   w: www.russel.org.uk
Mar 27
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/27/2020 10:22 AM, Russel Winder wrote:
 Clearly D
 remaining at v2 for ever more would, I feel,  be a Very Bad Idea™ since
 it advertises no changes to the language, i.e. a language with a
 stalled evolution.
If this happens it still seems like a marketing failure. After all, C++ gets a year appended and yet has large changes.
Mar 28
parent reply Russel Winder <russel winder.org.uk> writes:
On Sat, 2020-03-28 at 01:42 -0700, Walter Bright via Digitalmars-d
wrote:
 On 3/27/2020 10:22 AM, Russel Winder wrote:
 Clearly D
 remaining at v2 for ever more would, I feel, be a Very Bad Idea™
 since
 it advertises no changes to the language, i.e. a language with a
 stalled evolution.
If this happens it still seems like a marketing failure. After all, C++ gets a year appended and yet has large changes.
It is always a delicate balance between keeping a language vibrant and alive in the minds of those *not* already committed to it, and seeming a niche language dead to the mainstream.

Switching D to a proper semantic versioning system would, in my view, help keep D in the former category, and out of the latter one. If we get to D 2.999, I would suggest D has moved into the latter category.

Yes, I know Torvalds did the major version hack on Linux simply to avoid seeming stuck and stalled, not for any actual technical reasons, but it worked.
Mar 29
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 3/29/2020 2:41 AM, Russel Winder wrote:
 It is always a delicate balance of keeping a language vibrant and alive
 in the minds of those *not* already committed to it, and seeming a
 niche language dead to the mainstream.
 
 Switching D to a proper semantic versioning system would, in my view,
 help keep D in the former category, and out of the latter one. If we
 get to D 2.999, I would suggest D has moved into to latter category.
D's development path simply doesn't fit into the semantic versioning system. It's one of continuous change, not long periods of stability punctuated by wrenching change.
 Yes I know Torvalds did the major version hack on Linux simply to avoid
 seeming stuck and stalled, not for any actual technical reasons, but it
 worked.
I'm not familiar with that, but it makes sense.
Mar 29
parent reply Russel Winder <russel winder.org.uk> writes:
On Sun, 2020-03-29 at 12:16 -0700, Walter Bright via Digitalmars-d
wrote:
[…]
 D's development path simply doesn't fit into the semantic versioning
 system.
 It's one of continuous change, not long periods of stability
 punctuated by
 wrenching change.
I think your understanding/appreciation of how semantic versioning works for a dynamic/agile project is failing you. Semantic versioning is entirely applicable to the DMD project – even if it is never going to happen.

[…]
Mar 31
next sibling parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 31 March 2020 at 12:57:04 UTC, Russel Winder wrote:
 On Sun, 2020-03-29 at 12:16 -0700, Walter Bright via 
 Digitalmars-d wrote:
 
[…]
 D's development path simply doesn't fit into the semantic 
 versioning
 system.
 It's one of continuous change, not long periods of stability
 punctuated by
 wrenching change.
I think your understanding/appreciation of how semantic versioning works for a dynamic/agile project is failing you. Semantic versioning is entirely applicable to the DMD project – even if it is never going to happen.
 
[…]
Almost every release would be a major release or a patch release. Hard to see how it adds much value to follow Semantic Versioning in that case.
Mar 31
next sibling parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Tuesday, 31 March 2020 at 17:00:28 UTC, John Colvin wrote:
 Almost every release would be a major release or a patch 
 release. Hard to see how it adds much value to follow Semantic 
 Versioning in that case.
Well, it might concentrate everyone's minds a bit if we had such a clear indicator of either breaking change or zero new features with every compiler release ... :-) BTW, _why_ is there such a regular rate of breaking change? Is it deliberate or accidental? Assuming deliberate, why so regular?
Mar 31
parent reply John Colvin <john.loughran.colvin gmail.com> writes:
On Tuesday, 31 March 2020 at 17:22:34 UTC, Joseph Rushton 
Wakeling wrote:
 On Tuesday, 31 March 2020 at 17:00:28 UTC, John Colvin wrote:
 Almost every release would be a major release or a patch 
 release. Hard to see how it adds much value to follow Semantic 
 Versioning in that case.
Well, it might concentrate everyone's minds a bit if we had such a clear indicator of either breaking change or zero new features with every compiler release ... :-) BTW, _why_ is there such a regular rate of breaking change? Is it deliberate or accidental? Assuming deliberate, why so regular?
So many things in D are breaking changes. Because of introspection and __traits(compiles, ...) etc. almost any bug fix can break someone's code. Even introducing an entirely disjoint overload for a function can be a breaking change.

It would be an interesting but daunting (impractical?) task to characterise what is considered "API" and what is "implementation detail" in phobos.
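A small sketch of the overload point (the `LibV1`/`LibV2` names are invented for illustration): user code that introspects the number of overloads breaks when a disjoint overload is added, even though no existing call site does.

```d
// Invented LibV1/LibV2 types standing in for two releases of a library.
// The only change is an *additive* overload, yet introspective user
// code that pattern-matches the API shape observes a difference.
struct LibV1
{
    static void process(int x) {}
}

struct LibV2 // the "same" library after adding a disjoint overload
{
    static void process(int x) {}
    static void process(string s) {}
}

enum oneOverloadV1 = __traits(getOverloads, LibV1, "process").length == 1;
enum oneOverloadV2 = __traits(getOverloads, LibV2, "process").length == 1;

static assert(oneOverloadV1);  // a user's check passes against v1
static assert(!oneOverloadV2); // the same check fails against v2

void main() {}
```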
Apr 01
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Wednesday, 1 April 2020 at 09:40:34 UTC, John Colvin wrote:
 So many things in D are breaking changes. Because of 
 introspection and __traits(compiles, ...) etc. almost any bug 
 fix can break someone's code. Even introducing an entirely 
 disjoint overload for a function can be a breaking change.

 It would be an interesting but daunting (impractical?) task to 
 characterise what is considered "API" and what is 
 "implementation detail" in phobos.
Has there ever been any kind of systematic effort to record and categorize those sorts of introspection-based breakages? I'm wondering if there are any regularities to what breaks that would help us avoid them more rigorously.
Apr 01
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/1/20 8:43 AM, Joseph Rushton Wakeling wrote:
 On Wednesday, 1 April 2020 at 09:40:34 UTC, John Colvin wrote:
 So many things in D are breaking changes. Because of introspection and 
 __traits(compiles, ...) etc. almost any bug fix can break someone's 
 code. Even introducing an entirely disjoint overload for a function 
 can be a breaking change.

 It would be an interesting but daunting (impractical?) task to 
 characterise what is considered "API" and what is "implementation 
 detail" in phobos.
Has there ever been any kind of systematic effort to record and categorize those sorts of introspection-based breakages?  I'm wondering if there are any regularities to what breaks that would help us avoid them more rigorously.
I think what John is saying is that it's nearly impossible. With introspection, literally EVERYTHING becomes part of the API, including all names of parameters to functions.

It doesn't mean that the API is defined as that, however. But it does mean that code that uses funky techniques might break when something is changed that is not expected to cause a problem.

However, I think we can define what the API does include, and basically warn people "don't depend on this". I just don't think we do that anywhere.

With a more robust versioning system, we could also provide a "breaking" branch which is expected to change, and a "non-breaking" branch which is expected never to break anything. Right now we have a grey area with both.

Your point about deprecations is a very good one. I think we should switch to that (only one deprecation removal release per year). We could probably engineer a versioning system that makes this detectable based on the version number.

-Steve
Apr 01
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Wednesday, 1 April 2020 at 13:26:20 UTC, Steven Schveighoffer 
wrote:
 I think what John is saying is that it's nearly impossible. 
 With introspection, literally EVERYTHING becomes part of the 
 API, including all names of parameters to functions.

 It doesn't mean that the API is defined as that, however. But 
 it does mean that code that uses funky techniques might break 
 when something is changed that is not expected to cause a 
 problem.

 However, I think we can define what the API does include, and 
 basically warn people "don't depend on this". I just don't 
 think we do that anywhere.
Ah, OK. That does make sense. And yes, I agree that strong clarity on what can robustly be introspected in upstream code, and what not, would be a very good thing. Is this in principle something that we could automate warnings about? (I say warnings because I don't imagine banning such introspection would be helpful. After all one could probably do these things internally in a codebase -- on entities you control and hence where you can prevent breakage -- and get benefit out of them. So there would have to be some way to indicate intent to do the risky thing.)
 Your point about deprecations is a very good one. I think we 
 should switch to that (only one deprecation removal release per 
 year). We could probably engineer a versioning system that 
 makes this detectable based on the version number.
Well, for example, if we made the breaking changes happen in the first release of the year, we could version D in annual epochs. D 2020 anyone? :-) [Bad pun, but: did anyone write up a D 2020 vision? ;-)]

On the broader original topic of your first post -- I do agree that it's really starting to feel like it's time for another major revision to the language, as that will allow some very desirable features to be done much more solidly (or be done at all). As someone who still wants to get a lot of production use out of D in the next years, I'd MUCH rather have a breaking change that gets us to even more awesome places, than stability and less powerful features.

I'm not sure I would agree with all your listed examples -- e.g. I'm reluctant to endorse pure by default as I think that would clash too much with the ability to just script easily -- but that sort of stuff is for future discussion anyway. The core idea of wanting to make another major language revision seems sound.
Apr 02
parent reply Mathias LANG <geod24 gmail.com> writes:
On Thursday, 2 April 2020 at 22:51:18 UTC, Joseph Rushton 
Wakeling wrote:
 On Wednesday, 1 April 2020 at 13:26:20 UTC, Steven 
 Schveighoffer wrote:
 I think what John is saying is that it's nearly impossible. 
 With introspection, literally EVERYTHING becomes part of the 
 API, including all names of parameters to functions.

 It doesn't mean that the API is defined as that, however. But 
 it does mean that code that uses funky techniques might break 
 when something is changed that is not expected to cause a 
 problem.

 However, I think we can define what the API does include, and 
 basically warn people "don't depend on this". I just don't 
 think we do that anywhere.
Ah, OK. That does make sense. And yes, I agree that strong clarity on what can robustly be introspected in upstream code, and what not, would be a very good thing. Is this in principle something that we could automate warnings about?
That's actually a topic I was very interested in when I was working at Sociomantic. We had a pretty good versioning thing going on with libraries, but it was never formally defined what was an acceptable change.

As mentioned, the ability to do `static if (!__traits(compiles, ...))` (notice the negation: checking that something does not compile) means that, except for introducing a new module (but that's due to a compiler limitation that we'd eventually fix), everything can be considered a breaking change, if you're a purist.

I've always been very fond of https://community.kde.org/Policies/Binary_Compatibility_Issues_With_C%2B%2B as it describes the techniques one can use to ensure binary compatibility in C++ programs, and wish we had the same in D.
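A minimal illustration of the negated-`__traits(compiles, ...)` hazard described above (the `S` type and `hasLegacyLayout` name are invented):

```d
// Invented example: user code keying on something *not* compiling.
// Any upstream fix that makes the expression start to compile
// silently flips the branch.
struct S
{
    int x; // no member named y
}

static if (!__traits(compiles, { S s; s.y = 1; }))
{
    // Taken today because S has no member y. If a later library
    // version adds y, this whole branch silently disappears.
    enum hasLegacyLayout = true;
}
else
{
    enum hasLegacyLayout = false;
}

static assert(hasLegacyLayout);

void main() {}
```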
 (I say warnings because I don't imagine banning such 
 introspection would be helpful.  After all one could probably 
 do these things internally in a codebase -- on entities you 
 control and hence where you can prevent breakage -- and get 
 benefit out of them.  So there would have to be some way to 
 indicate intent to do the risky thing.)
We don't do warnings. We have some, but they are historical baggage. I know Walter opposes it; it was brought up multiple times.
 Your point about deprecations is a very good one. I think we 
 should switch to that (only one deprecation removal release 
 per year). We could probably engineer a versioning system that 
 makes this detectable based on the version number.
Well, for example, if we made the breaking changes happen in the first release of the year, we could version D in annual epochs. D 2020 anyone? :-)
I still think the deprecation scheme should take a few more things into account, e.g. how long has the replacement feature been available for? What bugs are triggered by the feature that is to be removed? How much is it used?

For example, the body deprecation: Most of the gains come from making `body` a context-dependent keyword, and allowing `do`. Removing `body` as a keyword doesn't actually help the user. It doesn't bring them value. Additionally, TONS of code out there use `body`. Using `body` does not cause bugs at all. And the replacement was introduced only shortly before deprecation. For this reason, we could keep it in the language for a very long time (5 years at least), and just undocument it / replace usages as we can.

On the other hand, D1 operator overloads: The replacement (D2) has been around for over a decade. They have different priorities, so they don't mix well with their replacement, leading to subtle bugs (which was actually what triggered their deprecation). However there is A LOT of code out there that still uses them, so that's why I would want us to go with at least a 2 year deprecation period.
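For context, the `body`/`do` change being discussed looks like this in contract syntax (`sqrtFloor` is an invented example function):

```d
// Invented example showing the syntax in question: `body` was the
// keyword introducing the function body after in/out contracts;
// `do` was later allowed as its replacement.
int sqrtFloor(int n)
in
{
    assert(n >= 0);
}
out (r)
{
    assert(r * r <= n);
}
do // formerly spelled `body`
{
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        ++r;
    return r;
}

void main()
{
    assert(sqrtFloor(10) == 3);
}
```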
 I'm not sure I would agree with all your listed examples -- 
 e.g. I'm reluctant to endorse pure by default as I think that 
 would clash too much with the ability to just script easily -- 
 but that sort of stuff is for future discussion anyway.  The 
 core idea of wanting to make another major language revision 
 seems sound.
That's a good point. There's no escape hatch for `pure`. How do I call `writeln` from main?
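One partial escape hatch that does exist, if I recall the rules correctly: statements under a `debug` condition are exempt from purity checking, so a `pure` function can still log when compiled with `-debug`. A sketch:

```d
// Sketch: purity checks are relaxed inside debug conditionals, so a
// pure function can call the impure writeln there (active with -debug).
import std.stdio : writeln;

pure int triple(int x)
{
    debug writeln("triple(", x, ")"); // impure call, allowed under debug
    return 3 * x;
}

void main()
{
    assert(triple(2) == 6);
}
```

This doesn't help with `pure`-by-default for ordinary scripts, of course; it only covers development-time logging.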
Apr 02
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 4/3/20 1:00 AM, Mathias LANG wrote:
 On Thursday, 2 April 2020 at 22:51:18 UTC, Joseph Rushton Wakeling wrote:
 On Wednesday, 1 April 2020 at 13:26:20 UTC, Steven Schveighoffer wrote:
 I think what John is saying is that it's nearly impossible. With 
 introspection, literally EVERYTHING becomes part of the API, 
 including all names of parameters to functions.

 It doesn't mean that the API is defined as that, however. But it does 
 mean that code that uses funky techniques might break when something 
 is changed that is not expected to cause a problem.

 However, I think we can define what the API does include, and 
 basically warn people "don't depend on this". I just don't think we 
 do that anywhere.
Ah, OK.  That does make sense.  And yes, I agree that strong clarity on what can robustly be introspected in upstream code, and what not, would be a very good thing.  Is this in principle something that we could automate warnings about?
I don't think warnings are possible. But I'd say we can at least have a list of changes that the D library considers to be "non-breaking changes", even if they will break introspective code. What that list is, I have not thought of. But it should be something discussed, especially if we are to have a more formal versioning system.
 
 That's actually a topic I was very interested in when I was working at 
 Sociomantic. We had a pretty good versioning thing going on with 
 libraries, but it was never formally defined what was an acceptable 
 change. As mentioned, the ability to do `static if (!__traits(compiles, 
 ...))` (notice the negation: checking that something does not compile) 
 means that, except for introducing a new module (but that's due to a 
 compiler limitation that we'd eventually fix), everything can be 
 considered a breaking change, if you're a purist.
Yeah, that just leads to breaking changes being OK, since you can't avoid breaking changes.
 I've always been very fond of 
 https://community.kde.org/Policies/Binary_Compatibility_Issues_With_C%2B%2B 
 as it describes the techniques one can use to ensure binary 
 compatibility in C++ program, and wish we had the same in D.
D uses source compatibility almost exclusively, because of the vast metaprogramming capabilities -- you have to recompile because any slight change in a function or type is going to require a rebuild. Only a library that is focused on binary compatibility could specify such a thing. Druntime and Phobos can't be in that boat.
 Your point about deprecations is a very good one. I think we should 
 switch to that (only one deprecation removal release per year). We 
 could probably engineer a versioning system that makes this 
 detectable based on the version number.
Well, for example, if we made the breaking changes happen in the first release of the year, we could version D in annual epochs.  D 2020 anyone? :-)
I think deprecations should have a time requirement (at least X years/months), but also only be done periodically.
 I still think the deprecation scheme should take a few more things into 
 account, e.g. how long has the replacement feature been available for ? 
 What bugs are triggered by the feature that is to be removed ? How much 
 is it used ?
Yes, it's simply the maximum of X time and the next "breaking" release.
 For example, the body deprecation: Most of the gains come from making 
 `body` a context-dependent keyword, and allowing `do`. Removing `body` 
 as a keyword doesn't actually help the user. It doesn't bring them 
 value. Additionally, TONS of code out there use `body`. Using `body` 
 does not cause bugs at all. And the replacement was introduced only 
 shortly before deprecation.
 For this reason, we could keep it in the language for a very long time 
 (5 years at least), and just undocument it / replace usages as we can.
I think you mean removing body as a contextual keyword doesn't help the user because he can use `body` today as a normal variable name, right? I agree with that, it's not critical to remove the "alternate" keyword for `do` here.
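For reference, the two contract syntaxes under discussion look like this (`sqrtFloor` and `twice` are illustrative names):

```d
// New syntax (DIP 1003): `do` replaces `body` after in/out contracts.
int sqrtFloor(int x)
in { assert(x >= 0); }
out (r) { assert(r * r <= x); }
do
{
    int r = 0;
    while ((r + 1) * (r + 1) <= x) ++r;
    return r;
}

// Because `body` is no longer a reserved keyword, it is free for
// ordinary use -- which is most of the user-visible gain:
int twice(int body) { return body * 2; }

unittest
{
    assert(sqrtFloor(10) == 3);
    assert(twice(21) == 42);
}
```

Writing `body` instead of `do` before the function body is the deprecated spelling being discussed.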
 On the other hand, D1 operator overloads: The replacement (D2) has 
 been around for over a decade. They have different priorities, so they 
 don't mix well with their replacement, leading to subtle bugs (which was 
 actually what triggered their deprecation). However there is A LOT of 
 code out there that still uses them, so that's why I would want us to go 
 with at least a 2 year deprecation period.
For sure. opInc or opAdd is a lot easier and quicker to write than opBinary(string op : "+"). So there is going to be a lot of code that just does the former. We still have yet to provide a mixin that implements the forwarding (i.e. if you have opAdd or opInc, it will define the appropriate opBinary forwarding). I think something like this would be a good library or object.d inclusion.
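A forwarding mixin of the kind described might look roughly like this. This is a sketch, not an existing druntime/Phobos facility; `D1OperatorForward` and `Legacy` are made-up names, and only `+` and `-` are wired up:

```d
// Hypothetical mixin forwarding D2's opBinary to D1-style operator
// methods (opAdd, opSub, ...) when the aggregate defines them.
mixin template D1OperatorForward()
{
    auto opBinary(string op, Rhs)(Rhs rhs)
    {
        static if (op == "+" && __traits(compiles, this.opAdd(rhs)))
            return this.opAdd(rhs);
        else static if (op == "-" && __traits(compiles, this.opSub(rhs)))
            return this.opSub(rhs);
        else
            static assert(0, "no D1 overload for operator " ~ op);
    }
}

struct Legacy
{
    int value;
    // D1-style overload; not an operator at all in D2.
    Legacy opAdd(Legacy rhs) { return Legacy(value + rhs.value); }
    mixin D1OperatorForward;
}

unittest
{
    assert((Legacy(1) + Legacy(2)).value == 3); // routed through opAdd
}
```

A real version would cover the full D1 operator table, but the shape is the same: one templated opBinary that dispatches to whatever legacy method exists.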
 I'm not sure I would agree with all your listed examples -- e.g. I'm 
 reluctant to endorse pure by default as I think that would clash too 
 much with the ability to just script easily -- but that sort of stuff 
 is for future discussion anyway.  The core idea of wanting to make 
 another major language revision seems sound.
That's a good point. There's no escape hatch for `pure`. How do I call `writeln` from main ?
That was just a list of wishes, not necessarily what I want. All the "by default" wishes stem from two facts: a) much non-template code is written that is already pure/safe/nothrow, but unless you tag it, it isn't usable from that realm; and b) tagging everything in all the ways it is supposed to be tagged is tedious and verbose.

Honestly, I'd say a mechanism to affect the defaults -- pragma(default_attributes, pure, safe, nothrow); or something like that -- would go a long way. You would also need opposites for all of them to make that work (i.e. throw, impure, system). Then you can alter the defaults, and there truly is an escape hatch to go back to the way it was.

I too would list pure as one that is too difficult to work around to be the default, especially since the standard hello world would fail to compile!

-Steve
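The hello-world problem mentioned above is easy to demonstrate. Under a hypothetical pure-by-default regime, the annotation below would be implicit, and the program would be rejected:

```d
import std.stdio;

// Deliberately does not compile: writeln does I/O and is therefore
// impure, so a pure main cannot call it.
void main() pure
{
    writeln("Hello, world!"); // Error: pure function 'main' cannot call impure function
}
```

That is the missing escape hatch: unlike @safe (with @trusted/@system) there is no attribute that lets a pure caller reach impure code.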
Apr 03
prev sibling parent reply Mathias LANG <geod24 gmail.com> writes:
On Tuesday, 31 March 2020 at 17:00:28 UTC, John Colvin wrote:
 On Tuesday, 31 March 2020 at 12:57:04 UTC, Russel Winder wrote:
 On Sun, 2020-03-29 at 12:16 -0700, Walter Bright via 
 Digitalmars-d wrote:
 
[…]
 D's development path simply doesn't fit into the semantic 
 versioning
 system.
 It's one of continuous change, not long periods of stability
 punctuated by
 wrenching change.
I think your understanding/appreciation of how semantic versioning works for a dynamic/agile project is failing you. Semantic versioning is entirely applicable to the DMD project – even if it is never going to happen.
 
[…]
Almost every release would be a major release or a patch release. Hard to see how it adds much value to follow Semantic Versioning in that case.
Why do you think it would ? At the moment we have a sliding window for deprecation. We'd just change that to have a specific time for removal. I don't see how that would make every release a breaking change ?
Mar 31
parent reply Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Tuesday, 31 March 2020 at 17:52:39 UTC, Mathias LANG wrote:
 Why do you think it would ? At the moment we have a sliding 
 window for deprecation. We'd just change that to have a 
 specific time for removal. I don't see how that would make 
 every release a breaking change ?
Yup, assuming the current 3-month release cadence continues it would make sense to make all breaking changes in only one of those 4 quarterly releases. A Christmas present to the community? O:-)
Mar 31
parent reply Mathias LANG <geod24 gmail.com> writes:
On Tuesday, 31 March 2020 at 23:25:31 UTC, Joseph Rushton 
Wakeling wrote:
 On Tuesday, 31 March 2020 at 17:52:39 UTC, Mathias LANG wrote:
 Why do you think it would ? At the moment we have a sliding 
 window for deprecation. We'd just change that to have a 
 specific time for removal. I don't see how that would make 
 every release a breaking change ?
Yup, assuming the current 3-month release cadence continues it would make sense to make all breaking changes in only one of those 4 quarterly releases. A Christmas present to the community? O:-)
That would be a step... Sideways ? Currently we follow this: https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/accepted/DIP1013.md

Personally, the main disruption I had over the last 2 years was the concurrent GC (a new feature) being enabled by default. There was hardly any breaking change.

With the current 10-release deprecation period and 3 months between releases, a deprecation is up for 30 months, or 2 1/2 years, before being removed. I don't think we should shorten it any further; it already proved problematic on some occasions (e.g. body => do).
Mar 31
next sibling parent reply Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Wednesday, 1 April 2020 at 05:30:41 UTC, Mathias LANG wrote:
 On Tuesday, 31 March 2020 at 23:25:31 UTC, Joseph Rushton 
 Wakeling wrote:
 On Tuesday, 31 March 2020 at 17:52:39 UTC, Mathias LANG wrote:
 Why do you think it would ? At the moment we have a sliding 
 window for deprecation. We'd just change that to have a 
 specific time for removal. I don't see how that would make 
 every release a breaking change ?
Yup, assuming the current 3-month release cadence continues it would make sense to make all breaking changes in only one of those 4 quarterly releases. A Christmas present to the community? O:-)
That would be a step... Sideways ? Currently we follow this: https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/accepted/DIP1013.md Personally, the main disruption I had over the last 2 years was the concurrent GC (a new feature) being enabled by default. There was hardly any breaking change. With the current 10-release deprecation period and 3-months between release, this means a deprecation is up for 30 months, or 2 1/2 years before being removed. I don't think we should shorten it any further, it already proved problematic on some occasions (e.g. body => do).
Small correction: we have a major release every two months - see: https://dlang.org/changelog/release-schedule.html You can see the history of releases here: https://dlang.org/changelog/index.html
Mar 31
next sibling parent Petar Kirov [ZombineDev] <petar.p.kirov gmail.com> writes:
On Wednesday, 1 April 2020 at 06:58:47 UTC, Petar Kirov 
[ZombineDev] wrote:
 On Wednesday, 1 April 2020 at 05:30:41 UTC, Mathias LANG wrote:
 [...]
Small correction: we have a major release every two months - see: https://dlang.org/changelog/release-schedule.html You can see the history of releases here: https://dlang.org/changelog/index.html
s/major release/minor release/
Apr 01
prev sibling parent Mathias LANG <geod24 gmail.com> writes:
On Wednesday, 1 April 2020 at 06:58:47 UTC, Petar Kirov 
[ZombineDev] wrote:
 Small correction: we have a major release every two months - 
 see: https://dlang.org/changelog/release-schedule.html

 You can see the history of releases here: 
 https://dlang.org/changelog/index.html
Oops! So not even 2 years of deprecation.

When I tried to follow the deprecation schedule, one of our corporate users complained and this happened: https://github.com/dlang/dmd/pull/10763

Personally I am sympathetic to their situation, and they are not alone. If I remember correctly, Weka also had a period where they were not updating their compiler because it was just so much work (although that had more to do with regressions, AFAIR). I'd be in favor of extending the deprecation period for symbols or features whose replacement hasn't been available for a long time.
Apr 01
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On Wednesday, 1 April 2020 at 05:30:41 UTC, Mathias LANG wrote:
 On Tuesday, 31 March 2020 at 23:25:31 UTC, Joseph Rushton 
 Wakeling wrote:
 Yup, assuming the current 3-month release cadence continues it 
 would make sense to make all breaking changes in only one of 
 those 4 quarterly releases.  A Christmas present to the 
 community? O:-)
That would be a step... Sideways ? Currently we follow this: https://github.com/dlang/DIPs/blob/5afe088809bed47e45e14c9a90d7e78910ac4054/DIPs/accepted/DIP1013.md
Yup, right. So however many releases there are in a year (4 quarterly, or 6 bimonthly), the removal schedule is based on the number of releases since something got deprecated.

Now suppose that in 6 successive releases you deprecate one thing each. If you follow the 10-releases-later rule then, 10 releases down the line, you wind up with 6 successive releases all making breaking changes.

What I'm suggesting is that instead we could have a once-a-year breaking-change release. So, if a feature is deprecated, then it gets removed 10 releases later or at the next annual breaking-change point, _whichever is later_. So, it's a little bit sideways, but it's a lot more predictable and allows dev teams to plan around it much more robustly. And note that this doesn't shorten deprecation periods -- on average, it will slightly increase them.

Of course, I imagine that in practice where there are multiple breaking changes to make they _do_ get bundled together rather than strictly sticking to a 10-releases-later rule. But if that's at the discretion of maintainers rather than an official policy, it's a lot less easy to plan around.
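The "whichever is later" rule is simple enough to pin down in code. This is a sketch only: release numbering is illustrative, and `removalRelease` is a made-up helper, not anything in the DIP.

```d
import std.algorithm : max;

// Proposed rule: a deprecated feature is removed at the 10th release
// after deprecation, or at the next annual breaking-change release,
// whichever is later. Releases are numbered sequentially; every
// `releasesPerYear`-th release is the designated breaking one.
uint removalRelease(uint deprecatedIn, uint releasesPerYear = 4)
{
    immutable earliest = deprecatedIn + 10;
    // round `earliest` up to the next multiple of releasesPerYear
    immutable nextBreaking =
        ((earliest + releasesPerYear - 1) / releasesPerYear) * releasesPerYear;
    return max(earliest, nextBreaking);
}

unittest
{
    // Deprecated in release 3, quarterly cadence: earliest removal is 13,
    // which gets pushed to the annual breaking release 16.
    assert(removalRelease(3) == 16);
    // Deprecated in release 6: earliest removal 16 is itself breaking.
    assert(removalRelease(6) == 16);
}
```

Note that waiting for the breaking release can only lengthen the deprecation window, never shorten it, which is the point being made above.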
 Personally, the main disruption I had over the last 2 years was 
 the concurrent GC (a new feature) being enabled by default. 
 There was hardly any breaking change.
Sure. That's why I asked earlier where the breaking changes are coming from. We can control for when we schedule planned breaking change (and here SemVer is useful). But if most of the breaking changes are unplanned regressions, we have a different problem.
 With the current 10-release deprecation period and 3-months 
 between release, this means a deprecation is up for 30 months, 
 or 2 1/2 years before being removed. I don't think we should 
 shorten it any further, it already proved problematic on some 
 occasions (e.g. body => do).
Yea, this was inadequate communication on my part. As explained above, I wasn't suggesting shortening the deprecation period.
Apr 01
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2020-03-31 14:57, Russel Winder wrote:

 Semantic versioning is entirely applicable to the DMD project – even if it
is never going
 to happen. 
How would that work? There are several components; would they all share the same version or have their own? I'm thinking of the following:

* The D language itself
* The ABI of the language
* The compiler, i.e. the CLI interface
* The DMD Dub package, i.e. the compiler as a library
* Phobos
* druntime

Additional components that are bundled with DMD in the release packages:

* ddmangle
* dub
* dumpobj
* dustmite
* obj2asm
* rdmd
* shell

-- 
/Jacob Carlborg
Mar 31
prev sibling parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 27 March 2020 at 17:22:30 UTC, Russel Winder wrote:
 On Fri, 2020-03-27 at 16:55 +0000, Meta via Digitalmars-d wrote:
 
[…]
 D has been around for 20 years and hasn't gained the traction 
 that younger languages like Rust or Go have (though as we all 
 know, the main reason for this is D's lack of a big corporate 
 patron a la Mozilla or Google). Maybe what's needed is a "new" 
 language that breaks backwards compatibility (as 
 conservatively as possible and hopefully in a way that makes 
 it easy to automatically port your D2 code).
Whilst D has not had the hype or the instant traction of Rust and Go, it does have some traction and mindshare. This is based on its history, and it would (in my opinion) be a bad idea to lose this.

I think having a positive strategy towards a D v3 would be a good idea, but only if there are big breaking changes to D v2.

The current Java evolution strategy is fine, but its version numbering is an unmitigated disaster – in my view. Groovy, though, has had the right evolution strategy and the right approach to version numbering – it has fairly recently released v3 for exactly the right reasons.
 Walter originally wanted to call it the Mars language - maybe 
 it's time to revive that name in a complete rebranding of the 
 language.
I think having a brand new language that just happened to have a very simple upgrade path from D v2 would be a self-defeating activity. People would very quickly spot the con. If D has a future it is in terms of v3, v4, etc. with a strong technical evolution (cf. Groovy) and good marketing. Clearly D remaining at v2 for ever more would, I feel, be a Very Bad Idea™ since it advertises no changes to the language, i.e. a language with a stalled evolution.
Groovy isn't properly a good example. If it wasn't for Gradle and its use in Android, it would be long gone and forgotten. And even there, there is big pressure to replace it with Kotlin as regards the Android build infrastructure. Such is the fate of any guest language until the main platform language catches up. -- Paulo
Mar 28
parent reply Russel Winder <russel winder.org.uk> writes:
On Sat, 2020-03-28 at 11:01 +0000, Paulo Pinto via Digitalmars-d wrote:
 […]

 Groovy isn't properly a good example.
I see no reason why it isn't: it is an evolving language following the semantic versioning model.
 If it wasn't for Gradle and its use in Android, it would be long 
 gone and forgotten.
In your opinion. The evidence I see is that Groovy has more traction in Java sites than is immediately apparent. Clearly Kotlin is challenging the role of Groovy in many respects, but Groovy is still used by many organisations for dynamic programming. The analogy is where C++ codebases use Python or Lua.
 And even there, there is big pressure to replace it with 
 Kotlin, as regards the Android build infrastructure.
Kotlin rather than Groovy is the language of choice on the Android platform these days certainly, but there are a lot of JVM installations out there using Java, Kotlin, and Groovy – not to mention Scala, Clojure, etc. – all going along happily. Yes, there are a lot of those installations that will only use Java.
 Such is the fate of any guest language until the main platform 
 language catches up.
Java can never catch up with Groovy, whereas it can catch up with Kotlin. Kotlin is the guest language you are talking of for most Java installations, not Groovy. Static Groovy may be a dead thing, but Dynamic Groovy is far from dead.
-- 
Russel.
===========================================
Dr Russel Winder      t: +44 20 7585 2200
41 Buckmaster Road    m: +44 7770 465 077
London SW11 1EN, UK   w: www.russel.org.uk
Mar 29
parent reply Paulo Pinto <pjmlp progtools.org> writes:
On Sunday, 29 March 2020 at 09:47:15 UTC, Russel Winder wrote:
 On Sat, 2020-03-28 at 11:01 +0000, Paulo Pinto via 
 Digitalmars-d wrote:
 […]
 
 Groovy isn't properly a good example.
I see no reason why it isn't: it is an evolving language following the semantic versioning model.
 If it wasn't for Gradle and its use in Android, it would be 
 long gone and forgotten.
In your opinion. The evidence I see is that Groovy has more traction in Java sites than is immediately apparent. Clearly Kotlin is challenging the role of Groovy in many respects, but Groovy is still used by many organisations for dynamic programming. The analogy is where C++ codebases use Python or Lua.
 And even there, there is a big pressure to replace it with 
 Kotlin, as regards the Android build infrastructure.
Kotlin rather than Groovy is the language of choice on the Android platform these days certainly, but there are a lot of JVM installations out there using Java, Kotlin, and Groovy – not to mention Scala, Clojure, etc. – all going along happily. Yes, there are a lot of those installations that will only use Java.
 So is the fate of any guest language until the main platform 
 language catches up.
Java can never catch up with Groovy, whereas it can catch up with Kotlin. Kotlin is the guest language you are talking of for most Java installations, not Groovy. Static Groovy may be a dead thing, but Dynamic Groovy is far from dead.
The times when Groovy made headlines at German Java conferences or local JUGs are long gone; I wonder where Groovy has more than a single-digit market share on the Java platform. I was quite surprised that Groovy actually managed to release the 3.0 version. It is not my opinion, but rather what any Java market analysis report will easily confirm.
Mar 29
next sibling parent Russel Winder <russel winder.org.uk> writes:
On Sun, 2020-03-29 at 12:00 +0000, Paulo Pinto via Digitalmars-d wrote:
[…]

 The times when Groovy made headlines at German Java 
 conferences or local JUGs are long gone; I wonder where Groovy 
 has more than a single-digit market share on the Java platform.
As with the TIOBE Index, headlines are meaningless regarding traction. Groovy is not a programming language with massive traction, but it has quite a lot; mostly it just happens quietly. Gradle is one use of Groovy, but so is Grails, and now Micronaut. Indeed many just use Groovy directly. See the list of users on https://groovy-lang.org/index.html
 I was quite surprised that Groovy actually managed to release the 
 3.0 version.
You weren't tracking Groovy seriously then. Groovy 3.0 was always going to happen. Now if you challenge whether Groovy 4.0 will happen, that is a wholly different situation.
 It is not my opinion, rather what any Java market analysis report 
 will easily confirm.
That just confirms the Java bias of Java journalists.
-- 
Russel.
Mar 29
prev sibling parent reply Meta <jared771 gmail.com> writes:
On Sunday, 29 March 2020 at 12:00:20 UTC, Paulo Pinto wrote:
 On Sunday, 29 March 2020 at 09:47:15 UTC, Russel Winder wrote:
 [...]
 The times when Groovy made headlines at German Java conferences or local JUGs are long gone; I wonder where Groovy has more than a single-digit market share on the Java platform.
IBM Security, one of the largest cybersecurity companies in the world. The most widely used enterprise-level SIEM (by quite a wide margin) uses Groovy extensively for its testing framework.
 I was quite surprised that Groovy actually managed to release 
 the 3.0 version.

 It is not my opinion, rather what any Java market analysis 
 report will easily confirm.
Mar 29
parent Paulo Pinto <pjmlp progtools.org> writes:
On Sunday, 29 March 2020 at 23:27:06 UTC, Meta wrote:
 On Sunday, 29 March 2020 at 12:00:20 UTC, Paulo Pinto wrote:
 [...]
 The times when Groovy made headlines at German Java conferences or local JUGs are long gone; I wonder where Groovy has more than a single-digit market share on the Java platform.
IBM Security, one of the largest cybersecurity companies in the world. The most widely used enterprise-level SIEM (by quite a wide margin) uses Groovy extensively for its testing framework.
The same IBM that introduced BeanShell for scripting Java beans, once upon a time used JTcl for WebSphere scripting until version 6, replaced it with Jython on WebSphere 6, and now just offers JMX MBeans on Liberty? Yep, definitely a guarantee of longevity regarding IBM's usage of JVM guest languages.
Mar 29
prev sibling next sibling parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:
 7. virtual by default
You mean final, by default, right?
Mar 27
next sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/27/20 12:59 PM, Paolo Invernizzi wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 There have been a lot of this pattern happening:
 7. virtual by default
You mean final, by default, right?
Yes, I meant the issue that virtual is the default but it shouldn't be. At one point, we had merged in a change for this, and it was reverted. -Steve
Mar 27
prev sibling parent user1234 <user1234 12.de> writes:
On Friday, 27 March 2020 at 16:59:47 UTC, Paolo Invernizzi wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 There have been a lot of this pattern happening:
 7. virtual by default
You mean final, by default, right?
The "final" attribute does not mean non-virtual. A final method cannot be overridden, which offers opportunities to devirtualize calls, and that's it.
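The distinction can be illustrated briefly (class names are illustrative):

```d
class Base
{
    void f() { }       // virtual by default in D
    final void g() { } // final: cannot be overridden, so calls through a
                       // Base reference are candidates for devirtualization
}

class Derived : Base
{
    override void f() { }    // fine: f is virtual
    // override void g() { } // Error: cannot override final function
}
```

So `final` removes overridability; whether the call is actually emitted as a direct (non-virtual) call is an optimization the compiler may then perform.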
Mar 28
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 27, 2020 at 11:56:40AM -0400, Steven Schveighoffer via
Digitalmars-d wrote:
 There have been a lot of this pattern happening:
 
 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
 
 Having a new branch of the compiler will provide a way to keep D2
 development alive while giving a playground to add new mechanisms, fix
 long-existing design issues, and provide an opt-in for code breakage.
What about supporting multiple versions of the language simultaneously, using some kind of version directive at the top of each file? It will mean more work to develop and maintain the compiler, but the benefit is that *no* code will break, and users can migrate old code incrementally and at their leisure by bumping the version directive on a file and fixing any subsequent errors.

As Andrei said once, one solution to the problem of not breaking old code while improving new code is additive enhancements rather than replacing old things outright. If there's a breaking language change, maybe there's a way to keep the old behaviour for old code while changing it for new code. Versioned source files is one way to do this, albeit at the cost of greater complexity in the compiler.
 Some issues I can think of:
 
 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
Yeah, pretty much all of these items would be nice to have in my book. [...]
 The biggest drawback is that we aren't a huge language, with lots of
 manpower to keep x branches going at once.
Yeah, forking the compiler now may have major unforeseen consequences. Supporting multiple versions of the language simultaneously will also put a drain on our resources, and there will be tricky issues about how to interoperate code written in different versions of D, but at least it still keeps it all in one place.
 I just wanted to throw it out as a discussion point. We spend an awful
 lot of newsgroup server bytes debating things that to me seem obvious,
 but have legitimate downsides for not breaking them in a "stable"
 language.
[...] Yes, we're gradually becoming everything we hated about C++. Perhaps it's not as simple to do better, as we once thought! ;-) T -- Do not reason with the unreasonable; you lose by definition.
Mar 27
next sibling parent Jacob Carlborg <doob me.com> writes:
On 2020-03-27 18:32, H. S. Teoh wrote:

 What about supporting multiple versions of the language simultaneously
Rust is already doing that; they call it "editions". And Rust is a much younger language than D, so it should have less code to worry about. Each of your dependencies can be built using a different edition and it all works together. -- /Jacob Carlborg
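For reference, Rust pins the edition per package in its build manifest, and crates on different editions link together freely (package name and version here are illustrative):

```toml
# Cargo.toml -- each crate opts into a language edition independently
[package]
name = "example"
version = "0.1.0"
edition = "2018"   # a 2015-edition dependency still interoperates
```

A per-file or per-package directive of this kind is essentially what the versioned-source-files idea above proposes for D.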
Mar 28
prev sibling parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Friday, 27 March 2020 at 17:32:44 UTC, H. S. Teoh wrote:
 On Fri, Mar 27, 2020 at 11:56:40AM -0400, Steven Schveighoffer 
 via Digitalmars-d wrote:
 There have been a lot of this pattern happening:
 
 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
 
 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.
What about supporting multiple versions of the language simultaneously, using some kind of version directive at the top of each file? It will mean more work to develop and maintain the compiler, but the benefit is that *no* code will break, and users can migrate old code incrementally and at their leisure by bumping the version directive on a file and fixing any subsequent errors. As Andrei said once, one solution to the problem of not breaking old code while improving new code is additive enhancements rather than replacing old things outright. If there's a breaking language change, maybe there's a way to keep the old behaviour for old code while changing it for new code. Versioned source files is one way to do this, albeit at the cost of greater complexity in the compiler.
 Some issues I can think of:
 
 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
Yeah, pretty much all of these items would be nice to have in my book. [...]
 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.
Yeah, forking the compiler now may have major unforeseen consequences. Supporting multiple versions of the language simultaneously will also put a drain on our resources, and there will be tricky issues about how to interoperate code written in different versions of D, but at least it still keeps it all in one place.
 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not 
 breaking them in a "stable" language.
[...] Yes, we're gradually becoming everything we hated about C++. Perhaps it's not as simple to do better, as we once thought! ;-) T
+1 on this comment.. If i could vote for one feature to add that would be HELLA breaking but would be AMAZING at the same time, it is versioning built into the compiler, preferably GIT because that is what dub uses..

also if we are voting for new names i vote DIALECTIC! A dialectic is what the ancients used for debate.. Hegel used it for dialectic analysis to study things like self vs other, and Marx used it for economic analysis, worker vs owner. Often in philosophy classes the dialectic is taught as a synonym for debate, that is WRONG... a dialectic is TWO end points with tension between the two. LIKE TWO VERSIONS OF A COMPILER!!! OR TWO VERSIONS OF A REPOSITORY.. or two nodes in a graph...

The tension this language is facing is the fact that old versions of the language can not communicate with new versions of the language.. they can not interact with each other! If the past version of the compiler and the future versions of the compiler could communicate.. share information.. have a debate.. ideas could flow up and down the stack..

If i could vote for one feature that would be hella breaking but lowkey amazing it would be adding a versioning system to the compiler.. where child and parent compilers talked back and forth to compile the code..
Apr 01
parent reply Mike Parker <aldacron gmail.com> writes:
On Wednesday, 1 April 2020 at 07:13:53 UTC, Kaitlyn Emmons wrote:
 On Friday, 27 March 2020 at 17:32:44 UTC, H. S. Teoh wrote:
 What about supporting multiple versions of the language 
 simultaneously, using some kind of version directive at the 
 top of each file?
+1 on this comment.. If i could vote for one feature to add that would be HELLA breaking but would be AMAZING at same time it is versioning built into the compiler, preferable GIT because that is what dub uses..
static if(__VERSION__ >= 2091) { } else { }
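That `static if` gate, fleshed out into a complete compilable file; `__VERSION__` is the real built-in constant, an integer encoding the front-end version (2091 corresponds to DMD 2.091.x):

```d
// Gate code on the compiler's front-end version via the built-in
// __VERSION__ constant (e.g. 2091L for DMD 2.091.x).
static if (__VERSION__ >= 2091)
    enum string semantics = "new";
else
    enum string semantics = "old";

void main()
{
    import std.stdio : writeln;
    writeln("compiled with the ", semantics, " semantics branch");
}
```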
Apr 01
parent Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Wednesday, 1 April 2020 at 07:58:47 UTC, Mike Parker wrote:
 On Wednesday, 1 April 2020 at 07:13:53 UTC, Kaitlyn Emmons 
 wrote:
 On Friday, 27 March 2020 at 17:32:44 UTC, H. S. Teoh wrote:
 What about supporting multiple versions of the language 
 simultaneously, using some kind of version directive at the 
 top of each file?
+1 on this comment.. If I could vote for one feature to add that would be HELLA breaking but would be AMAZING at the same time, it is versioning built into the compiler, preferably Git because that is what dub uses..
static if(__VERSION__ >= 2091) { } else { }
That is NOT what I am talking about at all.. I am not talking about the code being VERSIONED.. I'm talking about the COMPILER BEING VERSIONED.. aka if you write something against an older compiler, it can be translated and passed up the chain to newer compilers. The dialectic can not be resolved by pushing the work off onto the user; you missed the point.
Apr 01
prev sibling next sibling parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
Further thoughts on this:

On Fri, Mar 27, 2020 at 10:32:44AM -0700, H. S. Teoh via Digitalmars-d wrote:
 On Fri, Mar 27, 2020 at 11:56:40AM -0400, Steven Schveighoffer via
Digitalmars-d wrote:
[...]
 4. String interpolation DIP
IMO this is a purely additive change, and it was rejected mainly because we couldn't come to an agreement with Walter. I don't think it needs a whole new language version just to implement.
 5. auto-decoding
 6. range.save
These two could be done as Phobos v2, and can be additive changes that don't need a whole new language version. [...]
 8. ProtoObject
[...] If we go with a new language version, ProtoObject could just replace Object altogether. We could also have other ways of dealing with the current issues with Object, such as having the compiler automatically insert things like the Monitor field on-demand. T -- Without geometry, life would be pointless. -- VS
Mar 27
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/27/20 2:33 PM, H. S. Teoh wrote:
 Further thoughts on this:
Yeah, I just listed things that have had roadblocks that could be alleviated by saying it's a new major language version. Certainly I would consider Phobos part of the language at the moment. In D3, it could be separate if needed. Note that I don't think we need DRASTIC changes to the language. I don't think we need to rename it or change core things. But we could just free up some of the roadblocks so we can break some eggs here. Probably there are some more things that make sense for a breaking change. Exceptions might be redone as well. Maybe review keyword usage/consistency. DIP1000 features might work better as a type constructor. Etc. -Steve
Mar 27
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 27, 2020 at 03:08:51PM -0400, Steven Schveighoffer via
Digitalmars-d wrote:
[...]
 Certainly I would consider Phobos part of the language at the moment.
 In D3, it could be separate if needed.
It's arguable, since some of the avid D users here actively avoid Phobos, but OK.
 Note that I don't think we need DRASTIC changes to the language. I
 don't think we need to rename it or change core things. But we could
 just free up some of the roadblocks so we can break some eggs here.
I think these smaller but still breaking changes could be implemented as a simultaneous new version like I suggested. Have some kind of version directive at the top of your file to indicate which language version the source was written for; then everything below version X will support the old semantics, but version X and above will use the new semantics. As long as the change isn't too drastic, source files with different versions in the same project should still be compatible with each other.
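Such a per-file directive might be sketched like this; the `pragma` below is invented purely for illustration and exists in no D compiler:

```d
// HYPOTHETICAL syntax, for illustration only -- no compiler implements
// a per-file language-version directive today.
// pragma(language, "2.091");  // this module keeps D 2.091 semantics
//
// A file without the pragma would default to the newest semantics, and
// the compiler would apply the older rules only within this module,
// so old and new files could still be compiled into one project.
module legacycode;
```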
 Probably there are some more things that make sense for a breaking
 change.  Exceptions might be redone as well. Maybe review keyword
 usage/consistency.  DIP1000 features might work better as a type
 constructor. Etc.
[...] Honestly, I think cheap Exceptions could be done transparently to user code, for the most part. Instead of using libunwind or equivalent, for example, just change the ABI to propagate an out-of-band error indicator, say in a register, that gets checked by the caller and branches to the function's exit block if set. T -- Give a man a fish, and he eats once. Teach a man to fish, and he will sit forever.
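The out-of-band indicator idea can be modeled in ordinary D by making the "register" an explicit flag the caller checks; the names and the struct are illustrative only, not the proposed ABI:

```d
// Model the out-of-band error indicator as an explicit field; the ABI
// version would carry this in a register instead of the return value.
struct Outcome(T)
{
    T value;
    bool err; // stands in for the error register
}

Outcome!int mayFail(int x)
{
    if (x < 0)
        return Outcome!int(0, true);   // "set the register" and return
    return Outcome!int(x * 2, false);  // normal path
}

void main()
{
    auto r = mayFail(-1);
    assert(r.err);                 // the caller's compare-and-branch
    assert(mayFail(21).value == 42);
}
```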
Mar 27
parent reply NaN <divide by.zero> writes:
On Friday, 27 March 2020 at 20:16:19 UTC, H. S. Teoh wrote:
 On Fri, Mar 27, 2020 at 03:08:51PM -0400, Steven Schveighoffer


 Honestly, I think cheap Exceptions could be done transparently 
 to user code, for the most part.  Instead of using libunwind or 
 equivalent, for example, just change the ABI to propagate an 
 out-of-band error indicator, say in a register, that gets 
 checked by the caller and branches to the function's exit 
 block if set.
Why not just pass the address of the error handler into the callee? So instead of

    foo();
    if (R12) goto i_see_dead_people;

    void foo() {
        if (zombies) {
            R12 = err;
            return;
        }
    }

you do...

    R12 = &i_see_dead_people;
    foo();

    void foo() {
        if (zombies) {
            return to R12;
        }
    }

On x86 at least you could just poke the alternate return address into the stack and do RET as you would for a normal return. So in the non-error path it's a single LEA instruction, vs a compare and conditional branch if using an extra return value.
Mar 27
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 27, 2020 at 10:26:11PM +0000, NaN via Digitalmars-d wrote:
 On Friday, 27 March 2020 at 20:16:19 UTC, H. S. Teoh wrote:
[...]
 Honestly, I think cheap Exceptions could be done transparently to
 user code, for the most part.  Instead of using libunwind or
 equivalent, for example, just change the ABI to propagate an
 out-of-band error indicator, say in a register, that gets checked
 by the caller and branches to the function's exit
 block if set.
 Why not just pass the address of the error handler into the callee?
 So instead of

     foo();
     if (R12) goto i_see_dead_people;

     void foo() {
         if (zombies) {
             R12 = err;
             return;
         }
     }

 you do...

     R12 = &i_see_dead_people;
     foo();

     void foo() {
         if (zombies) {
             return to R12;
         }
     }

 On x86 at least you could just poke the alternate return address into
 the stack and do RET as you would for a normal return. So in the
 non-error path it's a single LEA instruction, vs a compare and
 conditional branch if using an extra return value.
That would work too, I guess. The point is, these are all implementation details that can be implemented in a way that's transparent to user code. It doesn't have to be libunwind with its associated overhead that Walter did not like. T -- "Maybe" is a strange word. When mom or dad says it it means "yes", but when my big brothers say it it means "no"! -- PJ jr.
Mar 27
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Friday, 27 March 2020 at 22:47:36 UTC, H. S. Teoh wrote:
 The point is, these are all implementation details that can be 
 implemented in a way that's transparent to user code.  It 
 doesn't have to be libunwind with its associated overhead that 
 Walter did not like.
It used to be this way! D had a custom exception mechanism (on linux, on Windows it used SEH which works very well) that was quite lightweight, but it got killed in the name of C++ compatibility :(
Mar 27
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 27, 2020 at 11:40:35PM +0000, Adam D. Ruppe via Digitalmars-d wrote:
 On Friday, 27 March 2020 at 22:47:36 UTC, H. S. Teoh wrote:
 The point is, these are all implementation details that can be
 implemented in a way that's transparent to user code.  It doesn't
 have to be libunwind with its associated overhead that Walter did
 not like.
It used to be this way! D had a custom exception mechanism (on linux, on Windows it used SEH which works very well) that was quite lightweight, but it got killed in the name of C++ compatibility :(
Hmm. Then maybe we should make this an optional thing so that projects that don't need C++ compatibility don't have to suffer for it? Either that, or do the translation at the C++/D boundary only, instead of throughout the entire call stack. Basically:

1) A D function declared extern(C++) would throw a libunwind-compatible exception in its exit block if it had caught a D-specific exception.

2) A D function that calls an extern(C++) function would insert a libunwind-compatible catch block that translates any thrown C++ exception into the D-specific implementation, then continues propagating it as before (including to any user-defined catch blocks if the user wrote one in the function).

So we only need to pay if C++ compatibility was actually used, rather than pessimize all D code just on the off-chance that C++ compatibility *might* be required. T -- People say I'm indecisive, but I'm not sure about that. -- YHL, CONLANG
Mar 27
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2020-03-28 00:40, Adam D. Ruppe wrote:

 It used to be this way! D had a custom exception mechanism (on linux, on 
 Windows it used SEH which works very well) that was quite lightweight, 
 but it got killed in the name of C++ compatibility :(
Funny thing. There's a C++ proposal by Herb Sutter, "Zero-overhead deterministic exceptions: Throwing values" [1], which more or less lowers the current syntax used for exceptions to something similar to return codes. Existing C++ code like this (which cannot afford to use table-based exceptions):

    expected<int, errc> safe_divide(int i, int j) {
        if (j == 0)
            return unexpected(arithmetic_errc::divide_by_zero);
        if (i == INT_MIN && j == -1)
            return unexpected(arithmetic_errc::integer_divide_overflows);
        if (i % j != 0)
            return unexpected(arithmetic_errc::not_integer_division);
        else
            return i / j;
    }

    expected<double, errc> caller(double i, double j, double k) {
        auto q = safe_divide(j, k);
        if (q)
            return i + *q;
        else
            return q;
    }

can with the new proposal be expressed like this:

    int safe_divide(int i, int j) throws {
        if (j == 0)
            throw arithmetic_errc::divide_by_zero;
        if (i == INT_MIN && j == -1)
            throw arithmetic_errc::integer_divide_overflows;
        if (i % j != 0)
            throw arithmetic_errc::not_integer_division;
        else
            return i / j;
    }

    double caller(double i, double j, double k) throws {
        return i + safe_divide(j, k);
    }

which is more or less lowered to code similar to the original example. If we want to stay compatible with C++ exception handling, we need to implement this :). Although we currently don't support any other features added after C++98.

[1] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0709r0.pdf

-- /Jacob Carlborg
Mar 28
prev sibling next sibling parent reply JN <666total wp.pl> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.
I think there are two things that would need to be in place that would greatly reduce the amount of breakage.

First one is testing the top N dub packages for breakages with new versions/features. Breakages and deprecations will happen. But if something breaks vibe-d or GtkD it's a big deal and needs additional investigation effort (easy to fix or not). But if something breaks someone's personal project... well, tough luck. As far as I know the new D versions are tested on the newer versions, so that's good. It's not just about checking if everything compiles, but assessing the scope of breakages if they happen.

Second thing is having a tool like gofix - https://blog.golang.org/introducing-gofix - gofix is an official Golang tool which automatically applies fixes to code, whether it's converting deprecated code or syntax/stdlib changes. Would also be a good test for a D-compiler-as-a-library project.

On Friday, 27 March 2020 at 16:03:47 UTC, 12345swordy wrote:
 Didn't the 1.0 to 2.0 conversion nearly kill the language?

 -Alex
I don't remember it as such. However, there seems to be a bit of a lack of consensus in the community about what direction D should take. It's hard to unify an effort, especially in an open source (unpaid) community, if everyone has their own idea of the direction it should go and doesn't want to compromise. On Friday, 27 March 2020 at 16:55:14 UTC, Meta wrote:
 D has been around for 20 years and hasn't gained the traction 
 that younger languages like Rust or Go have (though as we all 
 know, the main reason for this is D's lack of a big corporate 
 patron a la Mozilla or Google).
Possibly, or maybe not. While Rust and Go surpassed D in popularity, it's important not only to look at the competition ahead, but to look behind every now and then. If it stays on course, I wouldn't be surprised to see Zig become more popular than D, and the corporate-backing excuse won't work there. On Friday, 27 March 2020 at 17:17:44 UTC, Steven Schveighoffer wrote:
 There was an issue with an alternative standard library 
 (Tango), which divided the community. That shouldn't be a 
 problem for a D3.
I'm probably in a minority here, but I feel like D would be better off if it went more in the Tango direction and being more of an OOP language, rather than moving into the STL/boost direction with templates everywhere. But that's just me, an OOP lover and not a fan of templates. On Friday, 27 March 2020 at 17:22:30 UTC, Russel Winder wrote:
 If D has a future it is in terms of v3, v4, etc. with a strong
 technical evolution (cf. Groovy) and good marketing. Clearly D
 remaining at v2 for ever more would, I feel,  be a Very Bad 
 Idea™ since
 it advertises no changes to the language, i.e. a language with a
 stalled evolution.
D1, D2 is only meaningful within the D community. It doesn't really mean anything outside of it. Outside of the community, people just refer to it as D. Whether it's D 2.0 being improved, or D 3.0 which is just a few breaking changes, I doubt it would make any difference marketing-wise. On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
My main concern here is that while these seem to be fixes to longstanding D issues, they're also very low-level in the long run. Anyone who was turned off from D before would look at it and be like "hmm, ok, well, so what?". No one will ask "have they fixed auto-decoding?". People would ask "does it work without GC now?" "can it compile to WASM?".
Mar 27
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Fri, Mar 27, 2020 at 07:50:44PM +0000, JN via Digitalmars-d wrote:
[...]
 First one is testing top N dub packages for breakages with new
 versions/features.
Isn't the current CI already doing this? [...]
 Second thing is having a tool like gofix -
 https://blog.golang.org/introducing-gofix - gofix is an official
 Golang tool which automatically applies fixes to code, whether it's
 converting deprecated code or syntax/stdlib changes. Would also be a
 good test for a D compiler as a library project.
There's already dfix. Does it not work well enough? What are the issues that prevent us from using it in general? --T [...]
 On Friday, 27 March 2020 at 17:17:44 UTC, Steven Schveighoffer wrote:
 
 There was an issue with an alternative standard library (Tango),
 which divided the community. That shouldn't be a problem for a D3.
I'm probably in a minority here, but I feel like D would be better off if it went more in the Tango direction and being more of an OOP language, rather than moving into the STL/boost direction with templates everywhere. But that's just me, an OOP lover and not a fan of templates.
Why can't we have both? I'm a big fan of templates and compile-time introspection, but OOP is still useful sometimes and I do still pull it out for when I really need runtime polymorphism. In fact, combine the two together, and you get something amazing like Adam's jni.d, which makes Java interop so nice I'm tempted to stop hating Java so much ('cos I can now write Java in D :-P). [...]
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 Some issues I can think of:
 
 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
My main concern here is that while these seem to be fixes to longstanding D issues, they're also very low-level in the long run. Anyone who was turned off from D before would look at it and be like "hmm, ok, well, so what?". No one will ask "have they fixed auto-decoding?". People would ask "does it work without GC now?" "can it compile to WASM?".
Honestly, I think this whole no-GC thing is just barking up the wrong tree. People who have GC phobia will turn off as soon as they see "GC", doesn't matter if you can actually write D without GC or not. They won't even look that far before walking away. And frankly, IMO D should just embrace the GC and relish in it instead of trying to win over the no-GC crowd. Trying to be too many things at once is stretching us too thin; D should just make a danged decision already about these issues, and stick to it instead of trying to be everything to everyone. As for compiling to WASM, isn't LDC already doing that, or on the way to doing that? T -- Three out of two people have difficulties with fractions. -- Dirk Eddelbuettel
Mar 27
next sibling parent reply JN <666total wp.pl> writes:
On Friday, 27 March 2020 at 20:25:14 UTC, H. S. Teoh wrote:
 There's already dfix.  Does it not work well enough?  What are 
 the issues that prevent us from using it in general?
I didn't even know it exists. Guess you learn something every day. I guess the question would be, why can't we use dfix to prevent breakages in case of simple deprecations.
 Honestly, I think this whole no-GC thing is just barking up the 
 wrong tree. People who have GC phobia will turn off as soon as 
 they see "GC", doesn't matter if you can actually write D 
 without GC or not.
I agree.
 As for compiling to WASM, isn't LDC already doing that, or on 
 the way to doing that?


 T
Well, just as with nogc, many questions in D are answered with "Yeah, but...". My understanding is that LDC can compile to WASM, but only in betterC mode; the runtime (and GC) don't work under WASM (this might change in the future with the Spasm effort). But from a bystander's perspective, betterC isn't an acceptable enough compromise.
Mar 27
parent rikki cattermole <rikki cattermole.co.nz> writes:
https://github.com/dlang-community/dfix
Mar 27
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2020-03-27 21:25, H. S. Teoh wrote:

 There's already dfix.  Does it not work well enough?  What are the
 issues that prevent us from using it in general?
It's not good enough. It's not based on the DMD frontend, it uses its own parser. It doesn't do any semantic analysis. As long as the DMD frontend is not used, it will always play catch up and will most likely never do semantic analysis. It's also not an official tool. -- /Jacob Carlborg
Mar 28
parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 28, 2020 at 09:12:17AM +0100, Jacob Carlborg via Digitalmars-d
wrote:
 On 2020-03-27 21:25, H. S. Teoh wrote:
 
 There's already dfix.  Does it not work well enough?  What are the
 issues that prevent us from using it in general?
It's not good enough. It's not based on the DMD frontend, it uses its own parser. It doesn't do any semantic analysis. As long as the DMD frontend is not used, it will always play catch up and will most likely never do semantic analysis.
[...] But how could it use the DMD frontend if its job is to fix old syntax rejected by the new compiler into new syntax acceptable to the new compiler? Would it have to use two copies of the frontend, one for older syntax and one for newer syntax? T -- Having a smoking section in a restaurant is like having a peeing section in a swimming pool. -- Edward Burr
Mar 28
prev sibling next sibling parent reply Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
Yes, but not because we want to break things, but because we are breaking things. The preview switches are great, but can we get a grouped feature set for release. It is critical that the existing libraries can be utilized with newer language versions (at least for a time). Stationary is not an option, but a good upgrade which does not compromise the ecosystem is vital.
Mar 27
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 28/03/2020 5:41 PM, Jesse Phillips wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
Yes, but not because we want to break things, but because we are breaking things. The preview switches are great, but can we get a grouped feature set for release. It is critical that the existing libraries can be utilized with newer language versions (at least for a time). Stationary is not an option, but a good upgrade which does not compromise the ecosystem is vital.
I have said this before, D3 should be a preview switch which turns on all the others. All D2 code should be compilable with D3, but not all D3 should be compilable as D2. dmd3 could literally be the preview switch turned on permanently.
Mar 27
next sibling parent GreatSam4sure <greatsam4sure gmail.com> writes:
On Saturday, 28 March 2020 at 05:19:52 UTC, rikki cattermole 
wrote:
 On 28/03/2020 5:41 PM, Jesse Phillips wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 [...]
Yes, but not because we want to break things, but because we are breaking things. The preview switches are great, but can we get a grouped feature set for release. It is critical that the existing libraries can be utilized with newer language versions (at least for a time). Stationary is not an option, but a good upgrade which does not compromise the ecosystem is vital.
I have said this before, D3 should be a preview switch which turns on all the others. All D2 code should be compilable with D3, but not all D3 should be compilable as D2. dmd3 could literally be the preview switch turned on permanently.
This is a possible path to go through. The D community must be bold to move forward. There seems to be much time now; it is time to map a clear road path for D.
Mar 28
prev sibling next sibling parent shfit <shfit fake.de> writes:
On Saturday, 28 March 2020 at 05:19:52 UTC, rikki cattermole 
wrote:
 On 28/03/2020 5:41 PM, Jesse Phillips wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
Yes, but not because we want to break things, but because we are breaking things. The preview switches are great, but can we get a grouped feature set for release. It is critical that the existing libraries can be utilized with newer language versions (at least for a time). Stationary is not an option, but a good upgrade which does not compromise the ecosystem is vital.
I have said this before, D3 should be a preview switch which turns on all the others. All D2 code should be compilable with D3, but not all D3 should be compilable as D2. dmd3 could literally be the preview switch turned on permanently.
I like this idea. It's simple and pragmatic. The documentation could recommend this to new users, who don't yet know what the right defaults should be, and everyone else could do the transition when they feel like it. At some point in the future it could be the default, and then we could bump the version to D3.
Mar 28
prev sibling parent Jesse Phillips <Jesse.K.Phillips+D gmail.com> writes:
On Saturday, 28 March 2020 at 05:19:52 UTC, rikki cattermole 
wrote:
 I have said this before, D3 should be a preview switch which 
 turns on all the others.

 All D2 code should be compilable with D3, but not all D3 should 
 be compilable as D2.

 dmd3 could literally be the preview switch turned on 
 permanently.
Yes. I think the requirement that D2 code still compile makes some changes inaccessible, but I think it is necessary to keep things moving.
Mar 28
prev sibling next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject

 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.

 If we went to a more regular major release schedule, and 
 decided for a roadmap for each major release what features 
 would be included, it would allow much better planning, and 
 much more defensible breakage of code. If you know that your 
 code will only compile with D2.x, and you're fine with that, 
 then great, don't upgrade to D3.x. If you desperately want a 
 feature, you may have to upgrade to D3.x, but once you get 
 there, you know your code is going to build for a while.

 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.

 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them. And they get 
 to fix issues that languages like C++ might have been stuck 
 with forever.

 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.

 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
I think in any case something is required, not sure if D 3.0 would be it though. D is already known for having had too many unfinished features, even if the reality isn't as bad as those people portray it.

For me personally, I don't care that much about D's adding features taken from Midori project for low level programming, and when GraalVM graduated from Oracle Research labs.

Java and .NET have OS vendors backing, distributed computing, game industry support, big data, to sell as "why use X".

Swift and Objective-C have Apple's mighty hand to sell as "why use X".

Rust has the message of "security above anything else" and now has Apple, Microsoft, and Google on board as well, doing OS features in Rust. All of them are hiring Rust developers currently. Even though those are the same companies that are heavily invested in the C++ ecosystem, and keep doing strong investments into ISO C++, compilers and IDE tooling.

NVidia, a strong C++ house, is now using Ada as well, for high-integrity firmware development.

So D 3.0 or not, what D needs is to finally get its act together about what story the community wants to sell. If this doesn't happen, I have the feeling that D will eventually die when the core team for whatever reason decides to focus elsewhere.

-- Paulo
Mar 28
prev sibling next sibling parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:
I have long wanted to propose this but there was no suitable place. I would like to propose trivially renaming the standard type names this way:

int -> int32
ulong -> uint64
float -> float32
double -> float64
byte -> octet

Reason: Most developers no longer remember where these names came from and why they are so called. In the future this number will be close to 100%. And soon we will have access to all sorts of non-standard FPGA-implemented CPUs with a different byte size, for example.

(It will also break existing code very reliably, and it will be difficult to confuse code of different Dlang versions.)
Mar 28
next sibling parent reply IGotD- <nise nise.com> writes:
On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
In this case wouldn't it be better to rename them to shorter u8, i8 .. u64, i64 and f32 .. f64. This is actually not a breaking change and we can implement it today if we want to with the old type names still there. Also, you can make whatever alias you want.
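A sketch of what such aliases look like; they compile today, but as the posters note they are a private convention rather than part of the language:

```d
// These work with any current D compiler; the standardization
// complaint is precisely that they are NOT universal, just aliases.
alias i8  = byte;
alias u8  = ubyte;
alias i32 = int;
alias u64 = ulong;
alias f32 = float;
alias f64 = double;

static assert(i32.sizeof == 4 && u64.sizeof == 8 && f64.sizeof == 8);
```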
Mar 28
next sibling parent JN <666total wp.pl> writes:
On Saturday, 28 March 2020 at 17:28:59 UTC, IGotD- wrote:
 On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
 wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
In this case wouldn't it be better to rename them to shorter u8, i8 .. u64, i64 and f32 .. f64. This is actually not a breaking change and we can implement it today if we want to with the old type names still there. Also, you can make whatever alias you want.
How is it not a breaking change?
Mar 28
prev sibling parent reply krzaq <dlangmailinglist krzaq.cc> writes:
On Saturday, 28 March 2020 at 17:28:59 UTC, IGotD- wrote:
 On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
 wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
In this case wouldn't it be better to rename them to shorter u8, i8 .. u64, i64 and f32 .. f64. This is actually not a breaking change and we can implement it today if we want to with the old type names still there. Also, you can make whatever alias you want.
I would love that. This is one of the things that Rust got 100% right. I know you can make whatever alias you want, but the point is that it's not universally used. Standardization is important; that's why I'd use the fugly C names in C/C++ even though I could alias them away.
Mar 28
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
 I would love that. This is one of the things that Rust got 100% 
 right.

 I know you can make whatever alias you want, but the point is 
 that it's not universally used.  Standarization is important, 
 that's why I'd use the fugly C names in C/C++ even though I 
 could alias them away.
D did standardize, there's no question as to size in D as it is now.
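Concretely, D's spec pins these widths on every platform, so they can be checked at compile time; a quick sketch:

```d
// D fixes the width of every basic type in the language spec, unlike
// C's platform-dependent int/long. Only real is target-dependent.
static assert(byte.sizeof   == 1);
static assert(short.sizeof  == 2);
static assert(int.sizeof    == 4);
static assert(long.sizeof   == 8);
static assert(float.sizeof  == 4);
static assert(double.sizeof == 8);
static assert(real.sizeof   >= double.sizeof); // the one moving target

void main() {}
```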
Mar 28
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sat, Mar 28, 2020 at 08:22:14PM +0000, Adam D. Ruppe via Digitalmars-d wrote:
 On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
 I would love that. This is one of the things that Rust got 100%
 right.
 
 I know you can make whatever alias you want, but the point is that
 it's not universally used.  Standarization is important, that's why
 I'd use the fugly C names in C/C++ even though I could alias them
 away.
D did standardize, there's no question as to size in D as it is now.
+1. All except 'real', that is, and that has turned into a mini-disaster. T -- Your inconsistency is the only consistent thing about you! -- KD
Mar 28
prev sibling parent reply krzaq <dlangmailinglist krzaq.cc> writes:
On Saturday, 28 March 2020 at 20:22:14 UTC, Adam D. Ruppe wrote:
 On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
 I would love that. This is one of the things that Rust got 
 100% right.

 I know you can make whatever alias you want, but the point is 
 that it's not universally used.  Standarization is important, 
 that's why I'd use the fugly C names in C/C++ even though I 
 could alias them away.
D did standardize, there's no question as to size in D as it is now.
I'm not disputing that. I'm saying that D standardized the wrong names. Rust hit the bullseye, while C/C++ is somewhere in the middle. And the alias argument is IMO weak because of it being non-standard.
Mar 28
parent Guillaume Piolat <firstname.lastname gmail.com> writes:
On Saturday, 28 March 2020 at 23:00:34 UTC, krzaq wrote:
 I'm not disputing that. I'm saying that D standardized the 
 wrong names. Rust hit the bullseye, while C/C++ is somewhere in 
 the middle. And the alias argument is IMO weak because of it 
 being non-standard.
The first letters of a word count the most when reading. To me the whole i8/u8/i16/u16/i32/u32 scheme is just unreadable; "float" and "double" are staple names in the native community for 32-bit and 64-bit floating-point numbers. Those names are used in C / C++ / OpenCL / CUDA etc. C and C++ is the dominant native programming culture, so why change names and syntax people already know? The only other acceptable name would perhaps be "single" instead of "float" (like Pascal did). OCaml, where "float" is 64-bit, gets it particularly wrong. Sorry, but there is very little to win by trying to be clever with different names. Also, regarding integers: we write "int" because it carries the information "this is an integer"; that it is 32-bit is rarely _that_ interesting for perceiving the intent. The new names are an obvious Rust mistake :) they solve no problem, and Rust also departed from the C integer promotion rules... and ended up with casts everywhere, which is another level of unsafety.
Mar 29
prev sibling parent Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Saturday, 28 March 2020 at 20:08:18 UTC, krzaq wrote:
 In this case wouldn't it be better to rename them to shorter 
 u8, i8 .. u64, i64 and f32 .. f64. This is actually not a 
 breaking change and we can implement it today if we want to 
 with the old type names still there. Also, you can make 
 whatever alias you want.
I would love that. This is one of the things that Rust got 100% right. I know you can make whatever alias you want, but the point is that it's not universally used. Standarization is important, that's why I'd use the fugly C names in C/C++ even though I could alias them away.
+1, I like the short names
Apr 08
prev sibling next sibling parent reply Les De Ridder <les lesderid.net> writes:
On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:
I have long wanted to propose this, but there was no suitable place. I would like to propose a trivial renaming of the standard type names, like this: int -> int32 ulong -> uint64 float -> float32 double -> float64 byte -> octet
You could make an argument for the other ones, but I'm pretty sure everyone understands what is meant by 'byte' 99% of the time.
Mar 28
parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Saturday, 28 March 2020 at 17:29:57 UTC, Les De Ridder wrote:

 byte -> octet
You could make an argument for the other ones, but I'm pretty sure everyone understands what is meant by 'byte' 99% of the time.
In this decade, yes. I specifically wrote a remark about FPGAs. Creating new processors is becoming as easy as creating software for them. In the future we will have a lot of different processors inside a PC case. I can easily imagine a special-purpose CPU with a byte size of 6 or 16 bits. So to me it's better to get rid of this name, since from the very first version of the D language it was declared that the standard types are platform-independent (except size_t, of course).
Mar 28
next sibling parent reply JN <666total wp.pl> writes:
On Saturday, 28 March 2020 at 17:40:45 UTC, Denis Feklushkin 
wrote:
 On Saturday, 28 March 2020 at 17:29:57 UTC, Les De Ridder wrote:

 byte -> octet
You could make an argument for the other ones, but I'm pretty sure everyone understands what is meant by 'byte' 99% of the time.
At this decade. I specifically wrote remark about FPGA. Creating new processors becomes easy as creating software for them. In the future we will have a lot of different processors inside of PC case. I can easily imagine special purpose CPU with byte sized 6 or 16. So as for me it’s better to get rid of this name, since in the very first version of D language it was declared that standard types are platform independent (except size_t of course).
I think octet would be confusing, even if it's technically more correct. Most people are aware of what byte means; they wouldn't know what an octet is. Alternatively, the byte type could just be dropped and uint8 used instead.
Mar 28
parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Saturday, 28 March 2020 at 18:40:30 UTC, JN wrote:

 I think octet would be confusing, even if it's technically more 
 correct. Most people are aware of what byte means, they 
 wouldn't know what octet is. Alternatively, byte type could 
 just be dropped and uint8 should be used instead.
Yes, absolutely right
Mar 28
prev sibling parent reply NaN <divide by.zero> writes:
On Saturday, 28 March 2020 at 17:40:45 UTC, Denis Feklushkin 
wrote:
 On Saturday, 28 March 2020 at 17:29:57 UTC, Les De Ridder wrote:

 byte -> octet
You could make an argument for the other ones, but I'm pretty sure everyone understands what is meant by 'byte' 99% of the time.
At this decade. I specifically wrote remark about FPGA. Creating new processors becomes easy as creating software for them. In the future we will have a lot of different processors inside of PC case. I can easily imagine special purpose CPU with byte sized 6 or 16. So as for me it’s better to get rid of this name, since in the very first version of D language it was declared that standard types are platform independent (except size_t of course).
Don't design based on imaginings of the future; you will almost always get it wrong.
Mar 28
parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:

 Dont design based on imaginings of the future, you will almost 
 always get it wrong.
This is almost already reality, not the future. Just take a survey among your friends/colleagues: what is a byte? Then compare with the Wikipedia/dictionary/RFC/etc. definition. You will be very surprised. Already, it is difficult to explain to a beginner why a double is 64 bits, and, if it is double an integer, why an integer is 32. There should be no need to spend time explaining the whole history of IT. It was written above that Rust did something similar. I suppose they followed the same line of reasoning.
Mar 28
next sibling parent reply NaN <divide by.zero> writes:
On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
wrote:
 On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:

 Dont design based on imaginings of the future, you will almost 
 always get it wrong.
This is almost already reality, not future.
I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.
 Just make survey around your friends/collegues about: what is a 
 byte? Then compare with wikipedia/dictionary/RFC/etc 
 definition. You will be very surprised.

 Already, it is difficult for a beginner to explain why the 
 double is 64 bits. And if it is double from integer why integer 
 is 32. I think it is no need to spend time by explaining whole 
 IT history.
I'm struggling to understand why anyone would find it either hard to understand or difficult to explain... float is a 32-bit floating-point number, double is a 64-bit floating-point number. Let's be honest, if that is causing you problems then you probably need to reconsider your career path.
Mar 28
next sibling parent IGotD- <nise nise.com> writes:
On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
 I'm struggling to understand why anyone would find it either 
 hard to understand or difficult to explain...

 float is a 32 bit floating point number
 double is a 64 bit floating point number

 Lets be honest, if that is causing you problems then you 
 probably need to reconsider your career path.
It's about clarity, ease of use, and not naming things inconsistently. It's like the ridiculous USB speeds: Low Speed, Full Speed, High Speed, SuperSpeed, SuperSpeed+. Now, tell me what throughput these names correspond to, without looking it up on the internet.
Mar 28
prev sibling next sibling parent reply krzaq <dlangmailinglist krzaq.cc> writes:
On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:

 Dont design based on imaginings of the future, you will 
 almost always get it wrong.
This is almost already reality, not future.
I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.
 Just make survey around your friends/collegues about: what is 
 a byte? Then compare with wikipedia/dictionary/RFC/etc 
 definition. You will be very surprised.

 Already, it is difficult for a beginner to explain why the 
 double is 64 bits. And if it is double from integer why 
 integer is 32. I think it is no need to spend time by 
 explaining whole IT history.
I'm struggling to understand why anyone would find it either hard to understand or difficult to explain... float is a 32 bit floating point number double is a 64 bit floating point number Lets be honest, if that is causing you problems then you probably need to reconsider your career path.
It's not hard to understand. It's pointless memorization though, as those names and their binding to sizes are based on implementation details of processors from the *previous millennium*. Programming languages should aim to lower the cognitive load of their programmers, not the opposite. To paraphrase your argument: A mile is 1760 yards A yard is 3 feet A foot is 12 inches What's so hard to understand? If that is causing you problems then you probably need to reconsider your career path.
Mar 28
parent reply NaN <divide by.zero> writes:
On Sunday, 29 March 2020 at 00:48:15 UTC, krzaq wrote:
 On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:
float is a 32 bit floating point number double is a 64 bit floating point number Lets be honest, if that is causing you problems then you probably need to reconsider your career path.
It's not hard to understand. It's pointless memorization though, as those names and their binding to sizes are based on implementation details of processors from the *previous millenium*.
Firstly, either way you have to remember something: u16 or short. So there's memorization whichever way you slice it. Secondly, those processors from the last millennium are still the dominant processors of this millennium.
 Programming languages should aim to lower the cognitive load of 
 their programmers, not the opposite.
I agree, but this is so irrelevant it's laughable.
 To paraphrase your agument:
 A mile is 1760 yards
 A yard is 3 feet
 A foot is 12 inches
 What's so hard to understand? If that is causing you problems 
 then you probably need to reconsider your career path.
If your job requires you to work in inches, feet and yards every single day then yes you should know that off the top of your head and you shouldn't even have to try. And if you find it difficult then yes you should reconsider your career path. If you struggle with basic arithmetic then you shouldn't really be looking at a career in engineering.
Mar 28
parent reply krzaq <dlangmailinglist krzaq.cc> writes:
On Sunday, 29 March 2020 at 01:21:25 UTC, NaN wrote:
 Firstly either way you have to remember something, u16 or 
 short. So there's memorization whatever way you slice it.
But you don't have to remember anything other than what you want to use. When you want a 16-bit unsigned integer you don't have to mentally look up the type you want, because you have already spelled it. And if you see a function accepting a long, you don't have to think "Is this C? If so, is this Windows or not (== is this LP64)? Or maybe it's D? But what was the size of a long in D? Oh, 64." If you argued that when you want just an integer you shouldn't need to provide its size, I'd grant you a point. But if anything, `int` should be an alias to whatever is fast on the current arch, not the other way around.
 Secondly those processors from the last millennium are still 
 the dominant processors of this millennium.
Are they really? I have more ARMs around me than I do x86's. Anyway, they're compatible, but not the same. "Double precision" doesn't really mean much outside of hardcore number crunching, and short is (almost?) never used as an optimization over int, but as a limitation of its domain. And, at least for C and C++, any style guide will tell you to use a type with a meaningful name instead.
 Programming languages should aim to lower the cognitive load 
 of their programmers, not the opposite.
I agree, but this is so irrelevant it's laughable.
It is very relevant. Expecting the programmer to remember that some words mean completely different things than anywhere else is not good, and the more of those differences you have, the more difficult it is to use the language. And it's not just the type names: learning that you have to use enum instead of immutable or const for true constants was just as mind-boggling to me as learning that inline means anything but inline in C++.
 To paraphrase your agument:
 A mile is 1760 yards
 A yard is 3 feet
 A foot is 12 inches
 What's so hard to understand? If that is causing you problems 
 then you probably need to reconsider your career path.
If your job requires you to work in inches, feet and yards every single day then yes you should know that off the top of your head and you shouldn't even have to try. And if you find it difficult then yes you should reconsider your career path. If you struggle with basic arithmetic then you shouldn't really be looking at a career in engineering.
That's circular reasoning. The whole argument is that your day job shouldn't require rote memorization of silly incantations. As for "basic arithmetic" - there is a reason why the whole world, bar one country, moved to a sane unit system.
Mar 28
parent NaN <divide by.zero> writes:
On Sunday, 29 March 2020 at 02:09:37 UTC, krzaq wrote:
 On Sunday, 29 March 2020 at 01:21:25 UTC, NaN wrote:
 Firstly either way you have to remember something, u16 or 
 short. So there's memorization whatever way you slice it.
But you don't have to remember anything other than what you want to use. When you want a 16 bit unsigned integer you don't have to mentally lookup the type you want, because you already spelled it. And if you see a function accepting a long you don't have to think "Is this C? If so, is this Windows or not(==is this LP64)? Or maybe it's D? But what was the size of a long in D? oh, 64"
If you're sitting there thinking "wait, what language am I using?" you have bigger problems. I've used maybe 10 different languages over 30 years, and it's never been a problem for me to remember what language I'm using or what the basic types were.
 Secondly those processors from the last millennium are still 
 the dominant processors of this millennium.
Are they really? I have more ARMs around me than I do x86's. Anyway, they're compatible, but not the same. "double precision" doesn't really mean much outside of hardcore number crunching, and short is (almost?) never used as an optimization on integer, but a limitation of its domain. And, at least for C and C++, any style guide will tell you to use a type with a meaningful name instead.
ARMs were outselling x86 by the end of the 90s, just nobody took any notice till the smartphone boom. (In units shipped at least)
 Programming languages should aim to lower the cognitive load 
 of their programmers, not the opposite.
I agree, but this is so irrelevant it's laughable.
It is very relevant. Expecting the programmer to remember that some words mean completely different things than anywhere else is not good, and the more of those differences you have, the more difficult it is to use the language. And it's just not the type names, learning that you have to use enum instead of immutable or const for true constants was just as mind-boggling to me as learning that inline means anything but inline in C++.
I'm 100% with you on the enum thing. I don't struggle to remember it, but it's awful. It's the language equivalent of a "leaky abstraction": from an implementation point of view enum members and manifest constants are pretty much the same thing, so why not use the same keyword? It's like saying, well, a single int is actually just an array with one member, so from now on you have to declare ints as arrays: int[1] oh_really; My other pet hate is nothrow. That actually means no exceptions, not that it won't throw; it can still throw errors. Oh yeah, and assert(0), I hate that too.
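To illustrate the enum point for readers following along, here is a minimal sketch of the two kinds of "constant" being conflated (the identifiers are invented for the example):

```d
// `enum` at declaration scope creates a manifest constant: a pure
// compile-time value with no storage, despite the keyword's name.
enum int maxRetries = 3;

// `immutable` creates a real, addressable variable that never changes.
immutable int answer = 42;

void main()
{
    static assert(maxRetries == 3);  // usable in compile-time contexts
    auto p = &answer;                // fine: immutable data has an address
    // auto q = &maxRetries;         // error: manifest constants have no address
    assert(*p == 42);
}
```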
 To paraphrase your agument:
 A mile is 1760 yards
 A yard is 3 feet
 A foot is 12 inches
 What's so hard to understand? If that is causing you problems 
 then you probably need to reconsider your career path.
If your job requires you to work in inches, feet and yards every single day then yes you should know that off the top of your head and you shouldn't even have to try. And if you find it difficult then yes you should reconsider your career path. If you struggle with basic arithmetic then you shouldn't really be looking at a career in engineering.
That's circular reasoning. The whole argument is that your day job shouldn't require rote memorization of silly incantations. As for "basic arithmetic" - there is a reason why the whole world, bar one country, moved to a sane unit system.
The reason was that the actual math was easier, not that it was hard to remember what a foot was. That doesn't apply here: we're just talking about names, not about whether the system makes actually working with the units easier.
Mar 29
prev sibling parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:

 Dont design based on imaginings of the future, you will 
 almost always get it wrong.
This is almost already reality, not future.
I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.
I do not suggest adding or resizing types. I suggest naming them more correctly, to remove a cause of confusion for people without a beard :-)
Mar 28
parent NaN <divide by.zero> writes:
On Sunday, 29 March 2020 at 04:04:21 UTC, Denis Feklushkin wrote:
 On Sunday, 29 March 2020 at 00:19:57 UTC, NaN wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 On Saturday, 28 March 2020 at 19:50:44 UTC, NaN wrote:

 Dont design based on imaginings of the future, you will 
 almost always get it wrong.
This is almost already reality, not future.
I was responding to your statement regarding FPGAs. If they become ubiquitous, and if people want to use D to program them, and if someone does the work to make it happen, then maybe different width basic types *might* be needed.
I do not suggest adding or resizing types. I suggest name them more correctly to exclude cause of confusion for people without a beard :-)
I think you said something along the lines of: the FPGAs are coming, they might have 6-bit bytes, so we should be prepared and start using a naming system that can accommodate that. I'm saying that you shouldn't base decisions on predictions like that, because they are almost always wrong.
Mar 29
prev sibling parent reply H. S. Teoh <hsteoh quickfur.ath.cx> writes:
On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
wrote:
[...]
 Just make survey around your friends/collegues about: what is a 
 byte? Then compare with wikipedia/dictionary/RFC/etc 
 definition. You will be very surprised.
[...] The Wikipedia article clearly states that definitions of "byte" other than 8 bits are *historical*, and that practically all modern hardware has standardized on the 8-bit byte. I don't understand why this is even in dispute in the first place. Frankly, it smells like just a red herring.
Mar 28
next sibling parent norm <norm.rowtree gmail.com> writes:
On Sunday, 29 March 2020 at 00:58:12 UTC, H. S. Teoh wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 [...]
 Just make survey around your friends/collegues about: what is 
 a byte? Then compare with wikipedia/dictionary/RFC/etc 
 definition. You will be very surprised.
[...] The Wikipedia article clearly states that definitions of "byte" other than 8 bits are *historical*, and that practically all modern hardware has standardized on the 8-bit byte. I don't understand why this is even in dispute in the first place. Frankly, it smells like just a red herring.
smells like a troll to me, best not to feed it
Mar 28
prev sibling parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Sunday, 29 March 2020 at 00:58:12 UTC, H. S. Teoh wrote:
 On Saturday, 28 March 2020 at 21:38:00 UTC, Denis Feklushkin 
 wrote:
 [...]
 Just make survey around your friends/collegues about: what is 
 a byte? Then compare with wikipedia/dictionary/RFC/etc 
 definition. You will be very surprised.
[...] The Wikipedia article clearly states that definitions of "byte" other than 8 bits are *historical*, and that practically all modern hardware has standardized on the 8-bit byte. I don't understand why this is even in dispute in the first place.
Just because there are very few such general-purpose processors for now. It's like the idea of making variables thread-local by default: it did not make sense at first, but in the near future it can be an advantage. And, for example, most newfangled neuroprocessors use small float formats to represent synapses. Isn't it better to stick with the right names from the start?
 Frankly, it smells like just a red herring.
I am surprised that this small proposal caused such a response.
Mar 28
prev sibling next sibling parent reply Ernesto Castellotti <erny.castell gmail.com> writes:
On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 [...]
I have long wanted to propose this, but there was no suitable place. I would like to propose a trivial renaming of the standard type names, like this: int -> int32 ulong -> uint64 float -> float32 double -> float64 byte -> octet Reason: Most developers no longer remember where these names came from and why they are so called. In the future this number will be close to 100%. And soon we will have access to all sorts of non-standard, FPGA-implemented CPUs with a different byte size, for example. (It will also break existing code very reliably, and it will be difficult to confuse code of different D versions.)
Absolutely! But I think it is not necessary to rename. For me it is enough to create aliases in object.d; I did this personally in my implementation of druntime when porting D to AVR (it is not yet public).
Mar 28
parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Saturday, 28 March 2020 at 18:00:01 UTC, Ernesto Castellotti 
wrote:

 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet

 Reason:

 Most developers no longer remember where these names came from 
 and why it are so called. In the future this number will close 
 to 100%. And soon we will have access to all sorts of 
 non-standard FPGA implemented CPUs with a different byte size, 
 for example.

 (It will also break existing code very reliably and it will be 
 difficult to confuse up code of different Dlang versions.)
Absolutely! But I think it is not necessary to rename.
Then everyone who comes from D2 will use the old names, and there can be confusion.
Mar 28
prev sibling next sibling parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/28/20 1:09 PM, Denis Feklushkin wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep D2 
 development alive while giving a playground to add new mechanisms, fix 
 long-existing design issues, and provide an opt-in for code breakage.

 Some issues I can think of:
I have long wanted to offer but there was no suitable place. I would like to propose to trivial rename standart type names by this way: int -> int32 ulong -> uint64 float -> float32 double -> float64 byte -> octet
I would say no, for 2 reasons. One, this is basically renaming without benefit. All those types are well defined, and there is no problem with sizing. Two, you can already do this with aliases if this is what you wish for.
 Reason:
 
 Most developers no longer remember where these names came from and why 
 it are so called. In the future this number will close to 100%. And soon 
 we will have access to all sorts of non-standard FPGA implemented CPUs 
 with a different byte size, for example.
Again, alias can already solve this problem. I would recommend for those implementations, if D were to support them, that byte not be changed to the native byte size, but rather a new type introduced that covers it. I think D 3.0 doesn't mean "let's break everything", it should be an incremental release, but one that is *allowed* to have fixes we have been wishing for that break things we cannot break with 2.x. -Steve
Mar 29
next sibling parent reply Arine <arine123445128843 gmail.com> writes:
On Sunday, 29 March 2020 at 13:34:44 UTC, Steven Schveighoffer 
wrote:
 On 3/28/20 1:09 PM, Denis Feklushkin wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to 
 keep D2 development alive while giving a playground to add 
 new mechanisms, fix long-existing design issues, and provide 
 an opt-in for code breakage.

 Some issues I can think of:
I have long wanted to offer but there was no suitable place. I would like to propose to trivial rename standart type names by this way: int -> int32 ulong -> uint64 float -> float32 double -> float64 byte -> octet
I would say no, for 2 reasons. One, this is basically renaming without benefit. All those types are well defined, and there is no problem with sizing. Two, you can already do this with aliases if this is what you wish for.
What about "real"? It is not defined. On x86 it means the old, deprecated 80-bit float (that nobody should use), and on ARM it means a 128-bit float. The worst part is that, at least last I checked, Phobos implements some functions only for real.
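For reference, the variability can be observed directly; the only portable guarantee the spec gives is that real is at least double precision (a sketch, whose printed output depends on the target):

```d
import std.stdio;

// real is the target's largest hardware floating-point type: the 80-bit
// x87 format on x86 (real.sizeof padded to 12 or 16 by most compilers),
// typically just double elsewhere. Only the lower bound is portable.
void main()
{
    static assert(real.mant_dig >= double.mant_dig);
    writefln("real: %s bytes, %s-bit mantissa", real.sizeof, real.mant_dig);
}
```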
Mar 29
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/29/20 10:01 AM, Arine wrote:
 On Sunday, 29 March 2020 at 13:34:44 UTC, Steven Schveighoffer wrote:
 I would say no, for 2 reasons. One, this is basically renaming without 
 benefit. All those types are well defined, and there is no problem 
 with sizing. Two, you can already do this with aliases if this is what 
 you wish for.
What about "real"? It is not defined. On x86 it means the old deprecated 80-bit float (that nobody should use). And on ARM it means 128-bit float. The worse part is, at least last I checked, Phobos only implements some functions only for real.
I think real is an exception, and possibly available for migration to things like real80 or real128, with real aliased to one of them. Simply because all floating-point types in D are implicitly convertible between them. -Steve
Mar 29
prev sibling parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Sunday, 29 March 2020 at 13:34:44 UTC, Steven Schveighoffer 
wrote:
 On 3/28/20 1:09 PM, Denis Feklushkin wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
I would say no, for 2 reasons. One, this is basically renaming without benefit.
My second proposal is to remove the automatic promotion to int during calculations: int8 * int8 == int16 (not int32). Sometimes it causes unnecessary casts. Intuitively it seems uneconomical, and it possibly spoils superscalarity. Perhaps this dates from those ancient times when D compatibility with C was declared at the source level? (Or am I confused, and there was no such period?)
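For context, a minimal sketch of the current rule the proposal would change: operands narrower than int are promoted to int before arithmetic, so narrowing the result back requires an explicit cast.

```d
// Today's D (C-style integer promotion): byte * byte yields int.
void main()
{
    byte a = 100, b = 100;
    auto c = a * b;                       // both operands promoted to int
    static assert(is(typeof(c) == int));
    assert(c == 10_000);                  // no 8-bit overflow, thanks to promotion

    byte d = cast(byte)(a + b);           // narrowing back needs a cast;
    assert(d == -56);                     // 200 wraps to -56 in 8 bits
}
```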
Mar 29
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 30/03/2020 6:51 PM, Denis Feklushkin wrote:
 On Sunday, 29 March 2020 at 13:34:44 UTC, Steven Schveighoffer wrote:
 On 3/28/20 1:09 PM, Denis Feklushkin wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
I would say no, for 2 reasons. One, this is basically renaming without benefit.
My second proposal is remove auto casting to int while calculations: int8 * int8 == int16 (not int32) Sometimes it causes unncessary casts. Intuitively it seems that it is uneconomical and possibly spoils superscalarity. Perhaps this from those ancient times when compatibility D with C was declared at source level? (Or am I confusing and there was no such period?)
C integer promotion is a feature, it is not going anywhere.
Mar 29
next sibling parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Monday, 30 March 2020 at 05:54:55 UTC, rikki cattermole wrote:

 My second proposal is remove auto casting to int while 
 calculations:
 int8 * int8 == int16 (not int32)
 
 Sometimes it causes unncessary casts. Intuitively it seems 
 that it is uneconomical and possibly spoils superscalarity.
 
 Perhaps this from those ancient times when compatibility D 
 with C was declared at source level? (Or am I confusing and 
 there was no such period?)
C integer promotion is a feature, it is not going anywhere.
I do not understand its usefulness. But I am sure that if this proposal were accepted, existing code would break silently, which means it could only be implemented in D3.
Mar 29
prev sibling next sibling parent reply Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Monday, 30 March 2020 at 05:54:55 UTC, rikki cattermole wrote:

 Perhaps this from those ancient times when compatibility D 
 with C was declared at source level? (Or am I confusing and 
 there was no such period?)
C integer promotion is a feature, it is not going anywhere.
I remember clearly that this has already been discussed here, and your point of view won. I just can't find the thread. Well, okay.
Mar 29
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 30/03/2020 7:12 PM, Denis Feklushkin wrote:
 On Monday, 30 March 2020 at 05:54:55 UTC, rikki cattermole wrote:
 
 Perhaps this from those ancient times when compatibility D with C was 
 declared at source level? (Or am I confusing and there was no such 
 period?)
C integer promotion is a feature, it is not going anywhere.
I remember clearly that this has already been discussed here, and your point of view won. I just can't find the thread. Well, okay.
Short answer: it is too late to change it.

Long answer: all options are fairly opinionated and arbitrary. There is no right answer. Whatever option you go with, you will have cases where you will want to cast to a more appropriate type. With the C promotion rules, at least, most C-family developers should be able to understand them, and they will "just work", including when porting code from other languages.
Mar 29
parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Monday, 30 March 2020 at 06:19:47 UTC, rikki cattermole wrote:

 There is no right answer.

 Whatever option you go with, you will have cases where you will 
 want to cast to a more appropriate type.

 With the C promotion rules at least, most C family developers 
 should be able to understand them and they will "just work" 
 including when they are porting code from other languages.
Then let's not change anything at all - after all, someone will have to learn these changes too. :-)
Mar 30
prev sibling parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/30/20 1:54 AM, rikki cattermole wrote:
 On 30/03/2020 6:51 PM, Denis Feklushkin wrote:
 On Sunday, 29 March 2020 at 13:34:44 UTC, Steven Schveighoffer wrote:
 On 3/28/20 1:09 PM, Denis Feklushkin wrote:
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
I would say no, for 2 reasons. One, this is basically renaming without benefit.
My second proposal is to remove the automatic promotion to int in calculations: int8 * int8 == int16 (not int32). Sometimes the current behavior causes unnecessary casts. Intuitively it seems uneconomical, and possibly hurts superscalar execution. Perhaps this dates from those ancient times when D's compatibility with C was declared at the source level? (Or am I confused, and there was no such period?)
C integer promotion is a feature, it is not going anywhere.
First, it's only a feature in that it means C code compiled as D will do the same thing as C (as long as it compiles). In other words, it's the C compatibility that's a feature, not that C integer promotion is the most desirable mechanism.

Second, it doesn't take long to break down:

auto z = x * y + 1000; // z must be int, no matter what type x and y are.

The unnecessary casts are not terrible in any case, because at least they make you aware of the odd truncation effect. I would say there's very little utility in the truncation effect itself. A better option is to use more sane checked integer types if you want certain behavior.

-Steve
Mar 30
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2020-03-30 07:51, Denis Feklushkin wrote:

 Perhaps this from those ancient times when compatibility D with C was 
 declared at source level? (Or am I confusing and there was no such period?)
Yes, kind of. It was said that if you copy-paste C code into a D file and it compiles, it should have the same behavior as in C. -- /Jacob Carlborg
Mar 30
next sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 30, 2020 at 07:09:30PM +0200, Jacob Carlborg via Digitalmars-d
wrote:
 On 2020-03-30 07:51, Denis Feklushkin wrote:
 
 Perhaps this from those ancient times when compatibility D with C
 was declared at source level? (Or am I confusing and there was no
 such period?)
Yes, kind of. It was said if you copy-paste C code to a D file and it compiles, it should have the same behavior as in C.
[...] Isn't that still true? Keyword here being, "if it compiles". T -- Philosophy: how to make a career out of daydreaming.
Mar 30
prev sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Monday, March 30, 2020 11:30:45 AM MDT H. S. Teoh via Digitalmars-d 
wrote:
 On Mon, Mar 30, 2020 at 07:09:30PM +0200, Jacob Carlborg via Digitalmars-d 
wrote:
 On 2020-03-30 07:51, Denis Feklushkin wrote:
 Perhaps this from those ancient times when compatibility D with C
 was declared at source level? (Or am I confusing and there was no
 such period?)
Yes, kind of. It was said if you copy-paste C code to a D file and it compiles, it should have the same behavior as in C.
[...] Isn't that still true? Keyword here being, "if it compiles".
Mostly. IIRC, there are a few cases where it's not (e.g. a function parameter declared as a static array is passed by value in D, whereas C only ever passes arrays as pointers), but in almost all cases, C code is either valid D code with the same semantics, or it doesn't compile.

- Jonathan M Davis
Mar 30
prev sibling parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin 
wrote:

 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
if ulong is uint64.. byte should be uint8
Apr 08
parent Timon Gehr <timon.gehr gmx.ch> writes:
On 09.04.20 01:24, Kaitlyn Emmons wrote:
 On Saturday, 28 March 2020 at 17:09:34 UTC, Denis Feklushkin wrote:
 
 int -> int32
 ulong -> uint64
 float -> float32
 double -> float64
 byte -> octet
if ulong is uint64.. byte should be uint8
byte is actually signed. There's ubyte. :)
Apr 09
prev sibling next sibling parent Vladimir Panteleev <thecybershadow.lists gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.
I think this is doable with some preparation.

1. LTS branches. Something like every 5 years, maintained for 10 years. Our current system for point-releases is kind of pointless, because we NEVER issue a point-release for a non-current major version. The most important aspect of updating LTS releases is platform support. It seems that macOS versions breaking DMD or DMD-produced binaries is a regular occurrence, with Linux/FreeBSD not far behind. It would be a good time to reconsider SemVer as well.

2. Integrating D version management into build tools. E.g. dub.sdl would allow declaring which language version a program was written for, and Dub could then download and use that particular compiler version. (IMHO Digger's library component is in a good place for this, with the bonus of being able to select non-release versions such as master commits, PRs, or forks.)

3. All language changes should be done in such a way that libraries could still be written, with a reasonable amount of effort, to support compilers before and after the change. This greatly helped Python's 2/3 transition.
Mar 28
prev sibling next sibling parent reply Francesco Mecca <me francescomecca.eu> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject

 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.

 If we went to a more regular major release schedule, and 
 decided for a roadmap for each major release what features 
 would be included, it would allow much better planning, and 
 much more defensible breakage of code. If you know that your 
 code will only compile with D2.x, and you're fine with that, 
 then great, don't upgrade to D3.x. If you desperately want a 
 feature, you may have to upgrade to D3.x, but once you get 
 there, you know your code is going to build for a while.

 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.

 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them. And they get 
 to fix issues that languages like C++ might have been stuck 
 with forever.

 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.

 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
What about Phobos and druntime, then?

If we switch to D3, it makes sense to shrink druntime to a smaller one that doesn't bring all the weight of unused features.

I don't know the Rust model, and I only have experience with Python libraries working with both py2 and py3. What about keeping D2 and D3 code interoperable with each other? That would mean offering the possibility to use a D2 library in a D3 codebase and a D3 library in a D2 codebase, the same way we do with C right now.
Mar 29
parent Denis Feklushkin <feklushkin.denis gmail.com> writes:
On Sunday, 29 March 2020 at 13:35:35 UTC, Francesco Mecca wrote:

 What about phobos and the druntime then?
 If we switch to D3 it makes sense to shrink the druntime to a 
 smaller one that doesn't bring all the weight of unused 
 features.
druntime is good enough. Now I am slowly working on detaching it from libc as much as possible. Also, to reduce its size, it can be compiled statically and with only the necessary functions. This is not implemented in the master branch yet, but no problems with this work are expected (haha).
Mar 29
prev sibling next sibling parent Martin Brezl <martin.brzenska googlemail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
Is it time for D 3.0?
I am a dlang user and a long-time community observer - as such I would be pleased to see D3 with properly implemented "lessons learned", instead of the usual
 3. OK, cancel the fix, we'll just live with it.
..which is a pattern every software developer has seen, and knows the nasty consequences of. IMO there is no alternative to D3 - either D3, or we become the PHP of systems programming languages.
Mar 29
prev sibling next sibling parent reply IGotD- <nise nise.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
Let's go back to the original question. If we are going to move to D3, then we need a proper project plan so that people know what they are supposed to implement. That means someone needs to decide what to implement. As the D community works today, with DIPs yielding a dozen different opinions, it will be impossible even to plan D3; it will take several years before implementation can even begin. Outside opinions must be limited if you are going to decide anything at all.

I can see in front of me that D3 will be a huge project if we are also going to clean up the libraries, druntime, proper pay-as-you-go, and so on. First question: do we have the resources for such a project? Do we have a good management model for such a project? Who will decide what to include? Who will write down everything that needs to be done?
Mar 29
parent reply Mathias Lang <pro.mathias.lang gmail.com> writes:
On Sunday, 29 March 2020 at 18:24:33 UTC, IGotD- wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 There have been a lot of this pattern happening:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
[...]
The problem is not whether we need to come up with a list of things to implement. The problem is: who will implement it? And the answer has always been: those who care, those who will use it, or Walter.

We have a lot of features in the pipeline - just do `dmd -preview=?`. But they are unfinished and aren't getting activated. We have to get those through the door first.
Mar 29
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Sun, Mar 29, 2020 at 07:01:18PM +0000, Mathias Lang via Digitalmars-d wrote:
 On Sunday, 29 March 2020 at 18:24:33 UTC, IGotD- wrote:
[...]
 If we going to go to D3, then we need a proper project plan so that
 people know what they are supposed to implement. That means that
 someone needs to decide what to implement.
[...]
 The problem is not whether or not we need to come up with a list of
 things to implement. The problem is, who will implement it ? And the
 answer has always been, those who care, those who will use it, or
 Walter.
[...] Yes, herein lies the rub. There is no shortage of good ideas and opinions in this forum, but when it comes time to actually write the code and make it work, the enthusiasm seems to evaporate and the manpower is nowhere to be found.

Grand plans have been drawn up in the past, and countless lists of tasks that need to be done. None of that has made much of a difference. The difference has mostly been made by a number of silent contributors who don't speak up much, but do most of the real work behind the scenes and make things tick.

This is the curse of the volunteer project: nobody gets paid, so when they're told what to do rather than making their own choice about what to do, they just walk away. We can invent grand plans all we like, but until there's somebody passionate enough to actually put in the hard work to implement said grand plan, nothing will actually happen. Instead, what tends to get done is the itch that somebody wants to scratch, and that doesn't always match what others want.

T -- Those who don't understand D are condemned to reinvent it, poorly. -- Daniel N
Mar 29
parent reply Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/29/20 9:02 PM, H. S. Teoh wrote:
 On Sun, Mar 29, 2020 at 07:01:18PM +0000, Mathias Lang via Digitalmars-d wrote:
 On Sunday, 29 March 2020 at 18:24:33 UTC, IGotD- wrote:
[...]
 If we going to go to D3, then we need a proper project plan so that
 people know what they are supposed to implement. That means that
 someone needs to decide what to implement.
[...]
 The problem is not whether or not we need to come up with a list of
 things to implement. The problem is, who will implement it ? And the
 answer has always been, those who care, those who will use it, or
 Walter.
[...]
Good points, but I think that we are currently suffering from a different problem - people want to, and do, implement these things, only to be told: no, sorry, we want it but we can't use it, because it breaks things.

I listed several things that have been implemented but were rejected (or merged and reverted) - some of them even by the creator and BDFL of the language. Some other things are just wholesale changes to the library, and implementing them is just not going to happen without some significant buy-in from the community and leaders. On top of that, people who may want to implement things are gun-shy after seeing language changes get shot down left and right.

Yes, we also still need leadership to approve and agree that X should be implemented. But right now, even when they say X should be implemented, we just can't do it without breaking "everything". What we need is a place where that answer can be yes instead. If not D3.0, I don't know what the correct path for such things is.

-Steve
Mar 29
parent reply Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 30 March 2020 at 01:14:06 UTC, Steven Schveighoffer 
wrote:
 On 3/29/20 9:02 PM, H. S. Teoh wrote:
 [...]
[...]
Hey, I'm still waiting for leadership feedback on _adding_, not _changing_, a Phobos method: adding @nogc to socket receive! What's the _policy_ for evolving _obsoleted_ modules? A year and a half of ... fog ... :-P The canary in the mine ... no pun intended!
Mar 30
parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 31/03/2020 12:02 AM, Paolo Invernizzi wrote:
 On Monday, 30 March 2020 at 01:14:06 UTC, Steven Schveighoffer wrote:
 On 3/29/20 9:02 PM, H. S. Teoh wrote:
 [...]
[...]
std.socket is not obsolete. But in this case receive can be @nogc if the internals allow it to be. It does not take callbacks or anything like that.

The only issue surrounding this is if somebody has decided to inherit from one of the socket classes - which is kind of a bad thing to do regardless... So ask point blank if anybody has done it, and it should be fine.
Mar 30
parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 30 March 2020 at 21:24:31 UTC, rikki cattermole wrote:
 But in this case receive can be @nogc if the internals allow it 
 to be.
it can, this should be a trivial change.
 The only issue surrounding this is if somebody has decided to 
 inherit from one of the socket classes. Which is kinda a bad 
 thing to do regardless...
It is a useful thing to do - I did it for SSL support, for example. But my ssl socket child class is also @nogc-compliant, as I suspect most people's would be.
Mar 30
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Mar 30, 2020 at 09:28:44PM +0000, Adam D. Ruppe via Digitalmars-d wrote:
 On Monday, 30 March 2020 at 21:24:31 UTC, rikki cattermole wrote:
 But in this case receive can be  nogc if the internals allow it to be.
it can, this should be a trivial change.
[...] In the name of getting stuff done instead of just talking about it: https://github.com/dlang/phobos/pull/7433 [...]
 The only issue surrounding this is if somebody has decided to
 inherit from one of the socket classes. Which is kinda a bad thing
 to do regardless...
It is a useful thing to do - I did for SSL support, for example. But my ssl socket child class is also nogc compliant, as I suspect most people's would be.
Hmm. Wouldn't this be a problem, if changing .receive to @nogc breaks existing code that inherits from Socket?

T -- Let's call it an accidental feature. -- Larry Wall
Mar 30
next sibling parent reply Adam D. Ruppe <destructionator gmail.com> writes:
On Monday, 30 March 2020 at 22:34:32 UTC, H. S. Teoh wrote:
 Hmm.  Wouldn't this be a problem if changing .receive to  nogc 
 break existing code that inherits from Socket?
It technically breaks, but it is a trivial fix; I'd be generally OK telling people to deal with it. But if someone has a case where it isn't trivial, well, we'd have to see why.

Some D classes in the past have done it by adding a new method with nothrow or @nogc or whatever, then deprecating the old one. Or you could make a new subclass that tightens it there. There are various solutions, but I don't think we even really need to worry - in practice it is all @nogc anyway, at least in my experience.
Mar 30
parent Steven Schveighoffer <schveiguy gmail.com> writes:
On 3/30/20 6:46 PM, Adam D. Ruppe wrote:
 On Monday, 30 March 2020 at 22:34:32 UTC, H. S. Teoh wrote:
 Hmm.  Wouldn't this be a problem if changing .receive to  nogc break 
 existing code that inherits from Socket?
It technically breaks but it is a trivial fix, I'd be generally OK telling people to deal with it. But if someone has a case where it isn't trivial, well, we'd have to see why.
I could see an example of an SSL socket that allocates a buffer for processing the SSL on the first call to receive. Or one that needs to extend an existing buffer to read more data. -Steve
Mar 30
prev sibling parent Paolo Invernizzi <paolo.invernizzi gmail.com> writes:
On Monday, 30 March 2020 at 22:34:32 UTC, H. S. Teoh wrote:
 On Mon, Mar 30, 2020 at 09:28:44PM +0000, Adam D. Ruppe via 
 Digitalmars-d wrote:
 On Monday, 30 March 2020 at 21:24:31 UTC, rikki cattermole 
 wrote:
 But in this case receive can be  nogc if the internals allow 
 it to be.
it can, this should be a trivial change.
[...] In the name of getting stuff done instead of just talking about it: https://github.com/dlang/phobos/pull/7433
https://github.com/dlang/phobos/pull/6730

Please read the history and how it ended; that's the reason I'm citing it in this discussion ...

Citing Steven some messages above:

"Good points, but I think that we are currently suffering from a different problem -- people want to, and do, implement these things, only to be told no, sorry we want it but we can't use it, because it breaks things. I listed several things that have been implemented but were rejected (or merged and reverted). Some of them even by the creator and BDFL of the language. Some other things are just wholesale changes to the library that implementing them is just not going to happen without some significant buy-in from the community and _____leaders_____."

Emphasis on "leaders" mine ..
Mar 31
prev sibling next sibling parent reply Andrea Fontana <nospam example.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 [...]
Some things I'd like to see in D3 that can't be done without breaking D:

- Fixing attribute syntax: @safe vs pure (why not @pure?), or @safe vs @nogc (why no @-prefix?). It would be nice to have #pure #safe #gc and their negatives, and leave @ for UDAs. Then we have no need to reserve any keywords.
- Removing unfinished things, e.g. multiple alias this, @property, real, cfloat, ucent, etc.
- Fixing ranges, autodecoding, ...
- Removing old modules from Phobos
- Adding some features like string interpolation and static initialization of AAs

Andrea
Mar 30
parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Monday, 30 March 2020 at 08:07:38 UTC, Andrea Fontana wrote:
 [...]
+1 good suggestions
Apr 08
parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
So in D3, can we scrap the CTFE engine altogether and just write 
a real D interpreter!

I suggest this actually be written FIRST, before a compiler, and 
then make compilation a feature of the interpreter! Flip the 
paradigm and unleash the true capabilities of metaprogramming!!
Apr 10
parent reply Atila Neves <atila.neves gmail.com> writes:
On Saturday, 11 April 2020 at 04:07:53 UTC, Kaitlyn Emmons wrote:
 So in D3 can we scrap the CTFE engine all together and just 
 write a real D interpreter!

 I suggest this actually be written FIRST before a compiler.. 
 and then make compilation a feature of the interpreter! Flip 
 the paradigm and unleash the true capabilities of meta 
 programming!!
If I were designing and implementing a language from scratch right now, that's exactly what I'd do.
Apr 13
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 13 April 2020 at 15:27:46 UTC, Atila Neves wrote:
 If I were designing and implementing a language from scratch 
 right now, that's exactly what I'd do.
Here is one funny approach:

1. implement an interpreter/JIT
2. run it
3. create a core dump when hitting the first I/O instruction
4. use a program that turns the core dump into an executable

Voila. Compilation finished.
Apr 15
prev sibling next sibling parent user1234 <user1234 12.de> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 [...]
58. Be "tooling-friendly" this time
Mar 30
prev sibling next sibling parent Ola Fosheim Grøstad <ola.fosheim.grostad gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.
I've got a feeling that the major problem with significant changes isn't social, but the technological infrastructure. You can only evolve a code base so far; at some point it becomes more efficient to start over from scratch.
 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them.
It works for them because of the app market, and Swift is the only reasonable option next to Objective-C++ (and in some cases Dart). Because Swift changes so much (and does not perform all that well), I am currently more inclined to use Objective-C++ for my own OS X project.

I agree, though, that major versions help if the vendor keeps the older versions alive, as with Python 2. I am still running Python 2; an update is too costly, not worth it. Same issue with Angular: they sometimes remove stuff for which there is a replacement, but it can still take a lot of time to make the transition. Go 1 and C++, on the other hand, have very little breakage, even as they evolve.
 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.
Actually, I think the biggest drawback is that you probably should start with a clean slate implementation if you intend to make major changes.
Mar 30
prev sibling next sibling parent Dukc <ajieskola gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.
I don't think heavy-handed breaking of backwards compatibility is worth it. Even if we were to do it, redesigning the standard library should be enough - no need to break the compiler.
 Some issues I can think of:

 1. The safe by default debate
Isn't it quite simple? Just add a way to apply `@safe:` and `@system:` at the top of a module without them applying to template inference, and then it really does not matter what the default will be.
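For readers unfamiliar with the mechanism being discussed, here is a minimal sketch of module-level attribute application as it works in current D (module and function names are made up for illustration):

```d
// example.d -- hypothetical module, names are illustrative
module example;

@safe:  // applies to every declaration that follows in this module

void checkedWork()
{
    int[3] buf;
    // Pointer arithmetic like buf.ptr[5] would be rejected here,
    // because this function is @safe via the module-level attribute.
}

// Caveat from the post: today `@safe:` also fixes the attribute of
// templates declared below it, instead of leaving it to
// per-instantiation inference -- that is the part being suggested
// for change.
T twice(T)(T x) { return x + x; }
```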
 2. pure by default
 3. nothrow by default
Same as above, except they will obviously need anti-attributes that turn them off.
 4. String interpolation DIP
Should hardly be a breaking change
 5. auto-decoding
This is the only one of these that I think would make the transition worthwhile, and only if there were many more issues of this caliber. But even this is only about Phobos, not the language spec.
 6. range.save
Each range should just decide its behaviour on copy. Since forgetting to `.save` is so common, changing the copy behaviour of a public range will break code anyway, so the behaviour can just as well be guaranteed.
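For context, this is the protocol being argued against; a minimal sketch of a forward range under the current rules (the type name is made up):

```d
import std.range.primitives : isForwardRange;

// A minimal forward range over the integers [front, end).
struct Span
{
    int front, end;

    bool empty() const { return front >= end; }
    void popFront() { ++front; }

    // The ForwardRange protocol: return an independent copy.
    // Forgetting to call .save (and consuming the original instead)
    // is the common bug the post refers to.
    Span save() const { return this; }
}

static assert(isForwardRange!Span);
```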
 7. virtual by default
A small issue after all. It can be further mitigated by adding a `virtual` keyword, so one can put `final:` on top and cancel it where applicable.
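A sketch of that mitigation in current D; note the proposed `virtual` keyword does not exist today, so there is no way to cancel a `final:` for a later method:

```d
// Methods are virtual by default in D2 classes.
class Widget  // hypothetical example type
{
    private int w, h;

final:  // everything from here down is non-virtual
    int width() const { return w; }
    int height() const { return h; }
    // A `virtual` keyword, as suggested above, would let a method
    // below this point opt back into virtual dispatch; today you
    // would instead group final methods in a `final { ... }` block.
}
```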
 8. ProtoObject
No opinions on this - haven't followed it.

That all being said, I would welcome a better way to deal with code breakage, so the breakage could be more aggressive. For example, if one could compile together both modules that rely on the autodecoding Phobos and modules that rely on a non-decoding Phobos, issue fixed.
Mar 31
prev sibling next sibling parent Tony <tonytdominguez aol.com> writes:
The regular versioning of a language and even standard libraries 
with new features and/or removal of features seems to be the 
norm. I brought this up some years ago and there appeared to be 
unanimous support for remaining with what I would call “gcc-style 
versioning”.
Apr 03
prev sibling next sibling parent reply Istvan Dobos <not.disclosing.here example.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject

 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.

 If we went to a more regular major release schedule, and 
 decided for a roadmap for each major release what features 
 would be included, it would allow much better planning, and 
 much more defensible breakage of code. If you know that your 
 code will only compile with D2.x, and you're fine with that, 
 then great, don't upgrade to D3.x. If you desperately want a 
 feature, you may have to upgrade to D3.x, but once you get 
 there, you know your code is going to build for a while.

 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.

 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them. And they get 
 to fix issues that languages like C++ might have been stuck 
 with forever.

 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.

 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
I agree with the general idea of introducing breaking changes in an organised way.

I haven't written a massive amount of D, but in light of its brilliant features, sticking so rigidly to C and C++ seems like a choice that inherited their difficulties while failing to reap the benefits. The great features which make you all gather here deserve to break free! Fortunately, now seems to be a historical moment when C syntax/semantics no longer needs to be a given, as more languages gain traction that purposefully break away from it. (For the better, I think.)

It would not matter if D only followed in these languages' footsteps, however humiliating that might feel, because it would have the potential to give the language a new life. In its current form, its advantages over C++ seem to diminish with every new standard.

Whether that happens or not, the new language would definitely need a solidly defined identity / value proposition, so it would be clear why to pick it up. It would also need a big break from the current design principles, which have remained essentially unchallenged for a long time now. That might go against the current mindset and could eventually seem like too big a deviation. All in all, I'm not sure any of that will happen, but it would be really exciting!
Apr 08
next sibling parent reply IGotD- <nise nise.com> writes:
On Wednesday, 8 April 2020 at 16:44:57 UTC, Istvan Dobos wrote:
 It does not matter if D only followed in these languages' 
 footsteps, however humiliating that might feel, because it'd 
 have the potential to give the language a new life. In its 
 current form, its advantages over C++ seem to diminish with 
 every new standard.
Funny, so-called "modern C++" (C++17 and beyond) is one of the reasons why I got interested in D. In my opinion, "modern C++" is absolutely awful.

The success of Java can be linked to its similarity in syntax to C and C++, so inheriting from C/C++ doesn't have to be a disadvantage. D did not become as successful as Java for a few reasons, but that feels OT in this thread.
Apr 08
parent reply "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Wed, Apr 08, 2020 at 04:55:56PM +0000, IGotD- via Digitalmars-d wrote:
 On Wednesday, 8 April 2020 at 16:44:57 UTC, Istvan Dobos wrote:
 It does not matter if D only followed in these languages' footsteps,
 however humiliating that might feel, because it'd have the potential
 to give the language a new life. In its current form, its advantages
 over C++ seem to diminish with every new standard.
Funny, so called "modern C++" (C++17 and beyond) is one of the reasons why I got interested in D. In my opinion "modern C++" is absolutely awful.
[...]

Yeah, with every new C++ revision the language becomes more baroque, more overly complex, and introduces yet more strange exceptional cases that nobody can possibly remember (except maybe Scott Meyers :-P). No thanks; despite whatever warts D has, it's still light-years better than C++.

T

-- Жил-был король когда-то, при нём блоха жила.
Apr 08
parent Istvan Dobos <not.disclosing.here example.com> writes:
On Wednesday, 8 April 2020 at 17:17:07 UTC, H. S. Teoh wrote:
 On Wed, Apr 08, 2020 at 04:55:56PM +0000, IGotD- via 
 Digitalmars-d wrote:
 On Wednesday, 8 April 2020 at 16:44:57 UTC, Istvan Dobos wrote:
 It does not matter if D only followed in these languages' 
 footsteps, however humiliating that might feel, because it'd 
 have the potential to give the language a new life. In its 
 current form, its advantages over C++ seem to diminish with 
 every new standard.
Funny, so called "modern C++" (C++17 and beyond) is one of the reasons why I got interested in D. In my opinion "modern C++" is absolutely awful.
[...] Yeah, with every new C++ revision, the language becomes more baroque, more overly complex, and introduces yet more strange exceptional cases that nobody can possibly remember (except maybe for Scott Meyers :-P). No thanks, despite whatever warts D has, it's still light-years better than C++. T
Yeah, not the first time I've heard that opinion. I myself have very little exposure to that beast, and I intend to keep it that way. My point was more that, if the idea of breaking changes plus a new major version settles in, it could be good to aim higher!
Apr 08
prev sibling parent Bienlein <ffm2002 web.de> writes:
On Wednesday, 8 April 2020 at 16:44:57 UTC, Istvan Dobos wrote:

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
Fix the internals in D that prevent changing the GC to be faster.
Apr 09
prev sibling next sibling parent reply zoujiaqing <zoujiaqing gmail.com> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject

 Other languages evolve much quicker than D, but break things 
 only in major updates. D seems to "sort of" break things, 
 there's always a risk in every release. We try to be 
 conservative, but we have this horrible mix of deciding some 
 features can break things, while others are not allowed to, and 
 there's no clear guide as to which breakage fits in which 
 category.

 If we went to a more regular major release schedule, and 
 decided for a roadmap for each major release what features 
 would be included, it would allow much better planning, and 
 much more defensible breakage of code. If you know that your 
 code will only compile with D2.x, and you're fine with that, 
 then great, don't upgrade to D3.x. If you desperately want a 
 feature, you may have to upgrade to D3.x, but once you get 
 there, you know your code is going to build for a while.

 We could also not plan for many major releases, but at least 
 move to D3 for some major TLC to the language that is held back 
 to prevent breakage.

 I work occasionally with Swift, and they move very fast, and 
 break a lot of stuff, but only in major versions. It's a bit 
 fast for my taste, but it seems to work for them. And they get 
 to fix issues that languages like C++ might have been stuck 
 with forever.

 The biggest drawback is that we aren't a huge language, with 
 lots of manpower to keep x branches going at once.

 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
Thank you Steve!

I want to upgrade to D 3.0, and it needs these changes:
1. ARC as the default instead of tracing GC
2. A fast JSON library
3. Async/await for D
4. A reorganized standard library

-Brian
Apr 09
next sibling parent Paulo Pinto <pjmlp progtools.org> writes:
On Thursday, 9 April 2020 at 13:48:15 UTC, zoujiaqing wrote:
 On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
 wrote:
 [...]
Thank you Steve! I want to upgrade to dlang 3.0 and need change it: 1. ARC for GC is default 2. So fast json library 3. Async and await for D 4. Organize standard library -Brian
Beware of ARC by default; the reality usually isn't what the vocal anti-tracing-GC crowd wishes for.

https://github.com/ixy-languages/ixy-languages

https://www.modernescpp.com/index.php/memory-and-performance-overhead-of-smart-pointer

"CppCon 2016: Herb Sutter “Leak-Freedom in C++... By Default."
https://www.youtube.com/watch?v=JfmTagWcqoE

-- Paulo Pinto
Apr 09
prev sibling parent Jack <jackfeng.jia gmail.com> writes:
On Thursday, 9 April 2020 at 13:48:15 UTC, zoujiaqing wrote:

 Thank you Steve!

 I want to upgrade to dlang 3.0 and need change it:
 1. ARC for GC is default
 2. So fast json library
 3. Async and await for D
 4. Organize standard library

 -Brian
It's my wishlist too!
Apr 10
prev sibling next sibling parent reply Kagamin <spam here.lot> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.
I'd say 2.080 was the first D3 release, where the move towards default safety seriously started; now it's only a debate about how much intended breakage should happen.
Apr 10
parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
Oh, and in D3 can we clean up "is"? It's like the syntax version of 
mom's spaghetti.
D2 has fallen into the same trap C++ has.. never cleaning up old 
bad features!



PLEASE BREAK MY CODE!! A BETTER LANGUAGE IS WORTH CODE BREAKAGE..



how is that not obvious.. if we cared about not having to rewrite 
code we would all still be using C++.. just BREAK D2 already!! 
DAMN.. it's old and getting stale..

I just wanted to say that because this thread will most likely die 
out like the n other threads about this topic.. nothing ever gets 
done.. even though there has been strong desire for D3 in the 
community for like 6 years now or something.. WALTER THIS IS ON 
YOU! MAKE THE DECISION TO KILL D2 PLEASE WE BEG YOU <3
Apr 13
next sibling parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Monday, 13 April 2020 at 07:13:37 UTC, Kaitlyn Emmons wrote:


In d3 can we make all functions and templates aliases to literals?

EG:
alias fun = (int x, int y) {
      // some fun >.>
}

alias funT = template(T){
      alias this = (T x, T y){
           // some T fun >o>
      }
}

Also can we merge the concept of alias this with the way 
templates work by declaring an alias with the same name as the 
template.


Also in d3 can we extend operator overloading to ALL aliases not 
just certain ones..
Eg i wana declare a namespace that overrides opDispatch
Apr 13
next sibling parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Monday, 13 April 2020 at 07:30:54 UTC, Kaitlyn Emmons wrote:

In D3 can we PLEASE merge class and struct.. such a bad design 
right there.. we have alias this and mixins and crazy shit to 
make structs do anything we want, and then we have classes, which 
honestly could at this point be implemented as a struct template 
in a library..


Also in D3 can we make all types Voldemort types except for 
fundamental types like int/void etc..

Just embrace the inference.. it's the direction D wants to go 
anyway
Apr 13
parent Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Monday, 13 April 2020 at 07:43:10 UTC, Kaitlyn Emmons wrote:
 Also in D3 can we make all type voldemort types except for 
 fundamental types like int/void ect..

 Just embrace the inference.. it the direction D wants to go 
 anyways
By that I mean: define types as the return from a function, and scrap constructors altogether.

alias SomeStruct = (int x, int y) {
    r = struct {
        X = x;
        Y = y;
    }
    return r;
}
Apr 13
prev sibling parent reply Timon Gehr <timon.gehr gmx.ch> writes:
On 13.04.20 09:30, Kaitlyn Emmons wrote:
 On Monday, 13 April 2020 at 07:13:37 UTC, Kaitlyn Emmons wrote:
 
 
 In d3 can we make all functions and templates aliases to literals?
 
 EG:
 alias fun = (int x, int y) {
       // some fun >.>
 }
 
 alias funT = template(T){
       alias this = (T x, T y){
            // some T fun >o>
       }
 }
 
 Also can we merge the concept of alias this with the way templates work 
 by declaring an alias with the same name as the template.
 
 
 Also in d3 can we extend operator overloading to ALL aliases not just 
 certain ones..
 Eg i wana declare a namespace that overrides opDispatch
 
 
Some of those suggestions are (to a large extent) backwards compatible.

The reason why operator overloading does not get fixed is that there's a vocal group of people who think the weird restrictions prevent "abuse". In reality, e.g., if all you really wanted was to overload the '<' operator, opCmp is just inefficient and error prone.

Here's a small obvious improvement that would break code massively: introduce consistency between template/function declaration and instantiation/call syntax, e.g.:

auto fun!T(T x, T y)(T z){ ... }

(One template parameter, curried function definition.)
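To make the opCmp point concrete, here is a minimal sketch of how D2 routes `<` through a full three-way comparison today (the type is made up):

```d
import std.stdio;

// Hypothetical wrapper type. Overloading `<` in D2 requires opCmp,
// which must compute a full three-way ordering even when the caller
// only ever needed "less than".
struct Price
{
    long cents;

    // Negative, zero, or positive; bool - bool yields int in D.
    int opCmp(const Price rhs) const
    {
        return (cents > rhs.cents) - (cents < rhs.cents);
    }
}

void main()
{
    auto a = Price(100), b = Price(250);
    writeln(a < b);  // lowered to a.opCmp(b) < 0
}
```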
Apr 13
parent Ecstatic Coder <ecstatic.coder gmail.com> writes:
On Monday, 13 April 2020 at 09:17:48 UTC, Timon Gehr wrote:
 On 13.04.20 09:30, Kaitlyn Emmons wrote:
 On Monday, 13 April 2020 at 07:13:37 UTC, Kaitlyn Emmons wrote:
 
 
 In d3 can we make all functions and templates aliases to 
 literals?
 
 EG:
 alias fun = (int x, int y) {
       // some fun >.>
 }
 
 alias funT = template(T){
       alias this = (T x, T y){
            // some T fun >o>
       }
 }
 
 Also can we merge the concept of alias this with the way 
 templates work by declaring an alias with the same name as the 
 template.
 
 
 Also in d3 can we extend operator overloading to ALL aliases 
 not just certain ones..
 Eg i wana declare a namespace that overrides opDispatch
 
 
Some of those suggestions are (to a large extent) backwards compatible. The reason why operator overloading does not get fixed is that there's a vocal group of people who think the weird restrictions prevent "abuse". In reality, e.g., if all you really wanted was to overload the '<' operator, opCmp is just inefficient and error prone. Here's a small obvious improvement that would break code massively: Introduce consistency between template/function declaration and instantiation/call syntax, e.g.: auto fun!T(T x,T y)(T z){ ... } (One template parameter, curried function definition.)
While I agree that D could benefit from a new version with breaking changes, I think that it should focus on:
- keeping the language simple and easy to learn;
- integrating concurrency and better garbage collection.

Languages like Go and Crystal have been designed with concurrency in mind (channels, etc.) and an obvious "less is more" philosophy.

I use D mainly as a scripting language for file processing, and I really enjoy its current syntax. I must say I'm deliberately using only a small imperative or object-oriented subset of the language (slices, maps, regex, functions, classes, genericity), and **for my own needs** it's perfectly fine like that. The only features I **personally** miss are related to Go-like concurrency with automatic parallelism, and an incremental garbage collector like in Nim and Unity.

So, if you want to shape a future version of D, maybe we could take some lessons from Go and Crystal, and instead of making D a better C++, it could make more sense to simply make D a better competitor to both these languages. And MAYBE we could then expect D to benefit from a very fast rise in popularity, like those languages experienced when they were released.
Apr 13
prev sibling parent reply Mathias LANG <geod24 gmail.com> writes:
On Monday, 13 April 2020 at 07:13:37 UTC, Kaitlyn Emmons wrote:
 Oh and in D3 can we clean up "is"? Its like the syntax version 
 of mom spaghetti.
 D2 has fallen into the same trap c++ has.. never cleaning up 
 old bad features!
You can't be serious... Just grep for "Removed" or "deprecated" in the changelog; you get a hit on almost every release. v2.091.0 removed the class deallocator (granted, it already errored out before). 2.089.0 deprecated some cases of shadowing, along with a few changes which could break user code (e.g. mixin + extern(C)). And for the release before that, the list is just too long to cite: https://dlang.org/changelog/2.088.0.html

And that is just language changes. The runtime and Phobos receive a lot of cleanups too.
 PLEASE BREAK MY CODE!! A BETTER LANGUAGE IS WORTH CODE 
 BREAKAGE..
"Better" is not an absolute metric, unfortunately. What is better for you will be worse for someone else. I've seen it repeated countless times: someone tries to get rid of a language feature that is not "worth its weight", only to have someone who uses that feature come up with an argument for why it is.

We had a few cases where language fixes were *definitely* worth it; the `case`-implicit-fallthrough deprecation comes to mind immediately. We had other cases where, while the change was welcome, it forced *a lot* of downstream users to change their (working) code. That's not something we should do too often.
 how is that not obvious.. if we cared about not having to 
 rewrite code we would all still be using cpp.. just BREAK D2 
 already!! DAMN.. its old and getting stale..

 I just wanted to say that cus this thread will most likely die 
 out like the n other threads about this topic.. nothing ever 
 gets done.. even though there has been strong desire for D3 in 
 the community for like 6 years now or something.. WALTER THIS 
 IS ON YOU! MAKE THE DECISION TO KILL D2 PLEASE WE BEG YOU <3
This is so out of touch with the reality of software development it sounds almost like a troll.
Apr 13
parent reply Kaitlyn Emmons <katemmons0 gmail.com> writes:
On Monday, 13 April 2020 at 08:01:49 UTC, Mathias LANG wrote:
 This is so out of touch with the reality of software 
 development it sounds almost like a troll.
I've been following this community for like 6 years now, and I remember that when I joined people were asking for D3.. the troll is the fact that people are still discussing it and nothing has been done
Apr 13
parent reply Konstantin <kostya.hm2 gmail.com> writes:
On Monday, 13 April 2020 at 08:47:23 UTC, Kaitlyn Emmons wrote:
 On Monday, 13 April 2020 at 08:01:49 UTC, Mathias LANG wrote:
 This is so out of touch with the reality of software 
 development it sounds almost like a troll.
I been following this community for like 6 years now and i remember when i joined people were asking for D3.. the troll is the fact people still discussing it and nothing has been done
I'm a C++ programmer and not familiar enough with D. I have only observed the language's features and, from time to time, read forum discussions about D's past and future. I will try to summarize the many different opinions about D.

Marketing and purpose. Some people have written posts about the problem of "selling" D to programmers of modern languages (Rust, Go, Java). It's not clear which niches D was designed to cover. It can't be used for high-frequency trading or other niches where performance matters, because it has a dependency on GC in the language's structure and infrastructure (classes), and poor support for non-GC methods in the standard library. But there are cool features for template metaprogramming and mixins, and good compile-time evaluation support. Good reflection is also a big thing (C++ is just getting there, but D has had reflection support much longer). On the other hand, there are many general-purpose languages, there are no corporations like Oracle or Microsoft behind D, and the community is small. An old-style "stop the world" GC does not add popularity to D either. I have also seen some programmers use D as a scripting language or a language for prototypes, because you can program and compile fast, and the good syntax and infrastructure (package manager, build system) are helpful. In fact, D is old, but still unpopular.

Approaches and leadership. There are many memory management features in D: refcounting, GC, owner/borrow semantics from Rust. However, these features are not integrated into the language and libraries. Maybe choose one and design, implement, and test it until it is ready? Because the community is small, everyone does what they are interested in and tries to move the language in different directions. I see it as a "box of unfinished projects". Were there any discussions about a D fork?

In my opinion, D has the features to be a popular systems and scripting language. It needs not many cool unfinished things, but well-implemented ones. And the language needs a development plan with priorities.

P.S. Sorry for my bad English. I wrote all this because I'm tired of writing simple things the hard way in C++ but don't see any good alternatives. And as far as I know, the initial purpose of D's development was a re-engineering of C++.
Apr 17
next sibling parent JN <666total wp.pl> writes:
On Friday, 17 April 2020 at 22:59:43 UTC, Konstantin wrote:
 It can't be used for high frequency trading or other niches 
 where performance is matter, because has dependency on GC in 
 language structure and infrastructure (classes).
Java and Go are used in HFT and other performance sensitive areas, and they are much more dependent on GC than D is.
 On my opinion Dlang has features to be popular system and 
 scripting language. It needs not many cool unfinished things, 
 but well implemented. And lang needs development plan with 
 priorities.
D suffers from its own version of the "curse of Lisp", which is both a blessing and a curse: http://winestockwebdesign.com/Essays/Lisp_Curse.html . Read the article; the Haskell part especially reminds me of how D suddenly wants to be Rust with its borrow checker.

D is very powerful and gives you a lot of choice, but those choices come with tradeoffs. Languages like Java or Go don't give you a choice when it comes to GC. You either accept the GC and use these languages, or you don't accept the GC and go for C/C++/Rust. That's one less technical decision to make on every project. The paradigms in these languages are also more consistent: for 99% of Java projects/libraries, you know they will be object oriented and you know they will be using GC, exceptions and all that stuff, so you can rely on those. The standard library also provides a good foundation, so a library might take stdlib objects like InputStream as input and enjoy the benefits of interop with other libraries that work with input streams.

I don't know what to think about D 3.0... obviously everyone has their own idea of what it should look like. Most people seem to see D 3.0 as an update with a package of breaking changes. But then it could also be D 2.100 or something like that.
Apr 17
prev sibling next sibling parent welkam <wwwelkam gmail.com> writes:
On Friday, 17 April 2020 at 22:59:43 UTC, Konstantin wrote:
 And as i know initial purpose on Dlang development was 
 re-engineering of c++.
It started as a re-engineering of C. Then Andrei touched it (in a good way), and now people think it was designed as a better C++.
Apr 19
prev sibling parent Kagamin <spam here.lot> writes:
On Friday, 17 April 2020 at 22:59:43 UTC, Konstantin wrote:
 Some people wrote posts about problems "to sell" D to 
 programmers like modern languages(Rust, Go, Java). It's not 
 clear which niches Dlang was designed to cover.
Meme driven development. A niche is a bug, not a feature.
Apr 20
prev sibling parent reply Chris <wendlec tcd.ie> writes:
On Friday, 27 March 2020 at 15:56:40 UTC, Steven Schveighoffer 
wrote:
 There have been a lot of this pattern happening:

 1. We need to add feature X, to fix problem Y.
 2. This will break ALL CODE IN EXISTENCE
 3. OK, cancel the fix, we'll just live with it.

 Having a new branch of the compiler will provide a way to keep 
 D2 development alive while giving a playground to add new 
 mechanisms, fix long-existing design issues, and provide an 
 opt-in for code breakage.

 Some issues I can think of:

 1. The safe by default debate
 2. pure by default
 3. nothrow by default
 4. String interpolation DIP
 5. auto-decoding
 6. range.save
 7. virtual by default
 8. ProtoObject
[snip]
 I just wanted to throw it out as a discussion point. We spend 
 an awful lot of newsgroup server bytes debating things that to 
 me seem obvious, but have legitimate downsides for not breaking 
 them in a "stable" language.

 -Steve
Interesting post. Yet very unspecific, and as far as I remember, Walter decided that unspecific posts would not be heeded anymore. Be that as it may, what difference will it make? As you said, other languages are developing fast and, may I add, are keeping an eye on recent developments like big data and (relatively) new platforms. I fear the D train won't leave the station anymore.

As a prominent D user once said: "So any discussion _now_ would have the very same structure of the discussion _then_, and would lead to the exact same result. It's quite tragic. And I urge the real D supporters to let such conversation die (topics debated to death) as soon as they appear."

It's quite tragic indeed.
Apr 21
parent Tony <tonytdominguez aol.com> writes:
On Tuesday, 21 April 2020 at 18:37:19 UTC, Chris wrote:
 Interesting post. Yet very unspecific, and as far as I 
 remember, Walter decided that unspecific posts would not be 
 heeded anymore. Be that as it may, what difference will it 
 make? As you said, other languages are developing fast and, may 
 I add, are keeping an eye on recent developments like big data 
 and (relatively) new platforms. I fear the D train won't leave 
 the station anymore.

 As a prominent D user once said:

 "So any discussion _now_ would have the very same structure of 
 the discussion _then_, and would lead to the exact same result. 
 It's quite tragic. And I urge the real D supporters to let such 
 conversation die (topics debated to death) as soon as they 
 appear."

 It's quite tragic indeed.
Not quite as tragic as someone who comes to a D language forum year after year to complain about the D language.
Apr 24