
digitalmars.D - D versioning

reply deadalnix <deadalnix gmail.com> writes:
One thing PHP has been good at is evolving and introducing change in 
the language (some can argue that the language is so fucked up that 
this is unavoidable, so I'll concede that now and we can discuss the interesting topic).

I discussed that system with Rasmus Lerdorf at AFUP 2012, and it is something 
that D should definitely look into.

The const vs OOP discussion has shown once again that D will have to 
introduce breaking changes in the language. This isn't an easy matter, 
because if we break people's code, D isn't attractive. But as long as code 
can't be broken, D's developers can't work on what's next, and that slows 
down D's progress.

The system adopted in PHP uses a three-number version. The first 
number is used for major language changes (for instance, 4 > 5 made 
objects pass by reference where they were copied before, and 5 > 6 
switched the whole thing to Unicode).

The second number implies language changes, but either non-breaking ones 
or very specific, rarely used stuff. For instance, 5.2 > 5.3 added a GC, 
closures and namespaces, which does not break code.

The last number is reserved for bug fixes. Several versions are maintained 
at the same time (and since a large amount of the code base is common, bug 
fixes can be applied to many versions at once).
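
To make the scheme concrete, here is a minimal D sketch of the comparison rule implied by such a three-number version (major = breaking changes, minor = additive changes, patch = fixes only). The type and the numbers are purely illustrative, not an official D scheme:

import std.stdio;

// Illustrative three-part version: major.minor.patch.
struct LangVersion
{
    int major, minor, patch;

    // Lexicographic comparison: major first, then minor, then patch.
    int opCmp(const LangVersion rhs) const
    {
        if (major != rhs.major) return major - rhs.major;
        if (minor != rhs.minor) return minor - rhs.minor;
        return patch - rhs.patch;
    }

    // Code written against `required` keeps working on `this` as long as
    // only the minor/patch numbers grew; a higher major signals breakage.
    bool compatibleWith(const LangVersion required) const
    {
        return major == required.major && !(this < required);
    }
}

void main()
{
    auto current  = LangVersion(2, 1, 3);
    auto next     = LangVersion(2, 2, 0);
    auto breaking = LangVersion(3, 0, 0);

    assert(next.compatibleWith(current));      // minor bump: still fine
    assert(!breaking.compatibleWith(current)); // major bump: breaking
    writeln("compatibility rules hold");
}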

We should leverage the switch to git to go in that direction. We could start 
a D2.1.xx right now with the opX methods dropped from Object and 
see how it goes, without requiring everybody to switch immediately.

Such a system would also let us drop all the D1 stuff that is in the current 
DMD, because D1 vs D2 could be chosen at compile time on the same sources.

git provides all we need to implement such a process, and it would be easy to 
do soon (after 2.060, for instance) because it doesn't imply drastic changes 
for users.
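
As a rough illustration of the compile-time selection mentioned above, D already exposes the compiler version to the source, so a single tree can adapt to different compiler releases. This is just a sketch for a D2 compiler, not the proposed process itself:

import std.stdio;

// __VERSION__ encodes the front-end version, e.g. 2060L for DMD 2.060.
static if (__VERSION__ >= 2060)
    enum string note = "built with 2.060 or later";
else
    enum string note = "built with an older 2.x compiler";

void main()
{
    writeln(note, " (__VERSION__ = ", __VERSION__, ")");
}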
Jul 12 2012
next sibling parent Mirko Pilger <pilger cymotec.de> writes:
food for thought:

http://semver.org/
Jul 12 2012
prev sibling next sibling parent Gor Gyolchanyan <gor.f.gyolchanyan gmail.com> writes:
On Thu, Jul 12, 2012 at 8:49 PM, deadalnix <deadalnix gmail.com> wrote:

 One thing PHP has been good at is evolving, and introducing change in the
 language (some can argument that the language is so fucked up that this is
 unavoidable, so I do it now and we can discuss interesting topic).

 I discussed that system with Rasmus Ledorf at afup 2012 and it something
 that D should definitively look into.

 The const vs OOP discussion have shown once again that D will have to
 introduce breaking changes in the language. This isn't easy matter because
 if we break people code, D isn't attractive. But as long as code isn't
 broken, D people can't worked on what's next and it slows down D progress.

 The system adopted in PHP works with a 3 number version. The first number
 is used for major languages changes (for instance 4 > 5 imply passing
 object by reference when it was by copy before, 5 > 6 switched the whole
 thing to unicode).

 The second number imply language changes, but either non breaking or very
 specific, rarely used stuff. For instance 5.2 > 5.3 added GC, closures and
 namespace which does not break code.

 The last one is reserved for bug fixes. Several version are maintained at
 the same time (even if a large amount of code base is common, so bug fixes
 can be used for many version at the time).

 We should leverage the benefit of having switched to git to go in that
 way. We can start right now D2.1.xx with the opX dropped from object and
 see how it goes without requiring everybody to switch now.

 Such a system would also permit to drop all D1 stuff that are in current
 DMD because D1 vs D2 can be chosen at compile time on the same sources.

 git provide all we need to implement such a process, it is easy to do it
 soon (after 2.060 for instance) because it doesn't imply drastic changes
 for users.
Definitely +1!!! -- Bye, Gor Gyolchanyan.
Jul 12 2012
prev sibling next sibling parent reply "Tobias Pankrath" <tobias pankrath.net> writes:
 The system adopted in PHP works with a 3 number version. The 
 first number is used for major languages changes (for instance 
 4 > 5 imply passing object by reference when it was by copy 
 before, 5 > 6 switched the whole thing to unicode).

 The second number imply language changes, but either non 
 breaking or very specific, rarely used stuff. For instance 5.2
 5.3 added GC, closures and namespace which does not break
code.
We can also learn from the Python community, whose from __future__ import <feature> facility is a great success; we should adopt a similar scheme.

Consider the -property switch or a future introduction of tuple syntax. If you start a new project, it's no problem: just use the switch and don't introduce things that won't work without it. But if you have an old codebase and want to use a new feature, you can either a) fix the whole codebase at once, or b) configure your build system to compile some files with the switch and some without. Both options are a PITA.

An alternative is to introduce #pragma(future, tuplesyntax); Now you can insert the pragma in those source files that are new or already fixed, benefit from the feature immediately, and upgrade your code file by file or even scope by scope. Later versions could even introduce #pragma(past, notuplesyntax) before dropping the old syntax completely.
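
For comparison, something close to per-module opt-in is already expressible with D's existing version mechanism. The feature flag below is hypothetical, and this is only an approximation of the proposed pragma, which would need real compiler support:

import std.stdio;

// Hypothetical feature flag; alternatively, drop the next line and pass
// -version=TupleSyntax on the dmd command line for a whole build.
version = TupleSyntax;

version (TupleSyntax)
    enum string syntaxMode = "new syntax (opted in)";
else
    enum string syntaxMode = "old syntax (default)";

void main()
{
    writeln("this module uses the ", syntaxMode);
}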
Jul 12 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 12/07/2012 19:10, Tobias Pankrath wrote:
 The system adopted in PHP works with a 3 number version. The first
 number is used for major languages changes (for instance 4 > 5 imply
 passing object by reference when it was by copy before, 5 > 6 switched
 the whole thing to unicode).

 The second number imply language changes, but either non breaking or
 very specific, rarely used stuff. For instance 5.2
 5.3 added GC, closures and namespace which does not break
code.
We can also learn from the python community, whose from __future__ import <feature> facility is a great success and we should adopt a similar scheme. Consider the -property switch or a future introduction of tuple syntax. If you start a new project, it's no problem. Just use the switch and don't introduce things that will not work without. But if you have an old codebase but want to use a new feature you can either a) fix the hole codebase at one b) fix and configure your build systems to build some files with and some files without the switch. Both options are PITA. An alternative is to introduce #pragma(future, tuplesyntax); Now you can insert the pragma in those sourcefiles that are new or already fixed and you can immediately benefit from the feature and you can upgrade your code file by file or even scope by scope. Later versions could even introduce #pragma(past, notuplesyntax) before dropping the old syntax completely.
This scheme doesn't allow breaking changes to be made. It also requires the compiler to handle a mess of features that can each be activated or not. That is much more work, and I don't really see the benefit. Since non-breaking changes are introduced through a different process than breaking ones, it is easy to migrate to a new version that will not break legacy code and still provides new features.
Jul 12 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Thursday, 12 July 2012 at 17:20:32 UTC, deadalnix wrote:
 On 12/07/2012 19:10, Tobias Pankrath wrote:
 The system adopted in PHP works with a 3 number version. The 
 first
 number is used for major languages changes (for instance 4 > 
 5 imply
 passing object by reference when it was by copy before, 5 > 6 
 switched
 the whole thing to unicode).

 The second number imply language changes, but either non 
 breaking or
 very specific, rarely used stuff. For instance 5.2
 5.3 added GC, closures and namespace which does not break
code.
We can also learn from the python community, whose from __future__ import <feature> facility is a great success and we should adopt a similar scheme. Consider the -property switch or a future introduction of tuple syntax. If you start a new project, it's no problem. Just use the switch and don't introduce things that will not work without. But if you have an old codebase but want to use a new feature you can either a) fix the hole codebase at one b) fix and configure your build systems to build some files with and some files without the switch. Both options are PITA. An alternative is to introduce #pragma(future, tuplesyntax); Now you can insert the pragma in those sourcefiles that are new or already fixed and you can immediately benefit from the feature and you can upgrade your code file by file or even scope by scope. Later versions could even introduce #pragma(past, notuplesyntax) before dropping the old syntax completely.
This scheme don't allow for breaking change to be made. It also require the compiler to handle a mess of features that can be activated or not. This is way more work, and I don't really see the benefice. As non breaking change are introduced in a different process than breaking one, it is easy to migrate to new version that will not break legacy code and provide new features.
OTOH, it may break the community yet again, which we certainly don't want, probably even less than breaking code. Also, the example of Python, with two main stable branches living in parallel, is not very encouraging.
Jul 15 2012
next sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
 OTOH, it may break the community yet again, which we certainly 
 don't want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that 
 live in parallel is not very encouraging.
Are you kidding? Python should be used as an example of how software should be engineered. They keep release schedules, keep stable versions, and never break backward compatibility without giving their users a way to avoid being stuck in a bad situation. It is well thought out and planned. Its popularity and widespread use are not a coincidence, and the fact that it became a de facto part of Linux distributions (which ship 5-year-old versions without fear of deprecation) just proves that people can count on it and use it without fear of the kind of random, unguided development that is typical of D, with its half-thought-out new features that bite it on the ass a year later.
Jul 15 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 15 July 2012 at 20:44:01 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly 
 don't want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that 
 live in parallel is not very encouraging.
Are you kidding? Python should be used as example of how software should be engineered. They keep release schedules, keep stable versions & never break backward compatibility without giving their users ways to not be stuck in bad situation. It is well thought and planned. Its popularity and widespread is not a coincidence, and the fact that it became de facto part of linuxes (shipping with 5 year old versions without a fear of deprecation) just proves people can count on it and use it without fear of some random unguided development that is typical of D with its half thought our new features that bite it on the ass year later.
I understand your gripe with breaking changes and bugs, but your painting of the state of things is a caricature. First, Linux distributions are not shipping with 5-year-old versions of Python; they usually ship 2.7, which is the last version of the 2 branch. Meanwhile, the 3 branch is having a hard time getting adopted, several years after its introduction, and some major packages still haven't been ported. http://wiki.python.org/moin/Python2orPython3 That is what I was referring to.

I agree the Python roadmap is better paved than the D roadmap, which hardly exists. It does make a case for a dev and a stable branch, which makes complete sense. OTOH, Python has suffered from disruptive changes just as much as D, like the fact that incorporating UTF into the language justified a completely new branch. And talking about half-assed features, its reference implementation suffers from *major* issues, like being slow (about 5 times slower than the PyPy JIT implementation) and single-threaded. That is not going to be fixed any time soon, and you can't use PyPy for most serious web projects because native libraries are not compatible and haven't been ported.
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 14:20:14 -0700, SomeDude <lovelydear mailmetrash.com>  
wrote:

 On Sunday, 15 July 2012 at 20:44:01 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly don't  
 want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that live in  
 parallel is not very encouraging.
Are you kidding? Python should be used as example of how software should be engineered. They keep release schedules, keep stable versions & never break backward compatibility without giving their users ways to not be stuck in bad situation. It is well thought and planned. Its popularity and widespread is not a coincidence, and the fact that it became de facto part of linuxes (shipping with 5 year old versions without a fear of deprecation) just proves people can count on it and use it without fear of some random unguided development that is typical of D with its half thought our new features that bite it on the ass year later.
I understand your gripe with breaking changes and bugs, but your painting of the sate of things is caricatural. First Linuxes are not shipping with 5 year old versions of Python, they usually ship with 2.7 which is the last version of the 2 branch. Meanwhile, the 3 branch is having a hard time getting used, several years after its introduction, and some major packages still haven't been ported. http://wiki.python.org/moin/Python2orPython3 That is what I was referring to. I agree the Python roadmap is better paved than the D roadmap, which hardly exists. It does make a case for a dev and a stable branch, which makes complete sense. OTOH, Python has suffered from disruptive changes just as much as D, like the fact that incorporating UTF in the language has justified a completely new branch. And talking about half assed features, its reference implementation suffers from *major* issues, like being slow (about 5 times slower than the Pypy JIT implementation) and monothreaded. And that is not going to be fixed any time soon. And you can't use PyPy for most serious web projects as native libraries are not compatible and haven't been ported.
To be fair, the majority of the problems you listed with Python have nothing to do with their release process but with their design process. The two are unrelated. The fact that Python suffers disruptive changes is an argument for dev/stable branches, not against. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
parent reply Patrick Stewart <ncc1701d starfed.com> writes:
Adam Wilson Wrote:

 On Sun, 15 Jul 2012 14:20:14 -0700, SomeDude <lovelydear mailmetrash.com>  
 wrote:
 
 On Sunday, 15 July 2012 at 20:44:01 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly don't  
 want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that live in  
 parallel is not very encouraging.
Are you kidding? Python should be used as example of how software should be engineered. They keep release schedules, keep stable versions & never break backward compatibility without giving their users ways to not be stuck in bad situation. It is well thought and planned. Its popularity and widespread is not a coincidence, and the fact that it became de facto part of linuxes (shipping with 5 year old versions without a fear of deprecation) just proves people can count on it and use it without fear of some random unguided development that is typical of D with its half thought our new features that bite it on the ass year later.
I understand your gripe with breaking changes and bugs, but your painting of the sate of things is caricatural. First Linuxes are not shipping with 5 year old versions of Python, they usually ship with 2.7 which is the last version of the 2 branch. Meanwhile, the 3 branch is having a hard time getting used, several years after its introduction, and some major packages still haven't been ported. http://wiki.python.org/moin/Python2orPython3 That is what I was referring to. I agree the Python roadmap is better paved than the D roadmap, which hardly exists. It does make a case for a dev and a stable branch, which makes complete sense. OTOH, Python has suffered from disruptive changes just as much as D, like the fact that incorporating UTF in the language has justified a completely new branch. And talking about half assed features, its reference implementation suffers from *major* issues, like being slow (about 5 times slower than the Pypy JIT implementation) and monothreaded. And that is not going to be fixed any time soon. And you can't use PyPy for most serious web projects as native libraries are not compatible and haven't been ported.
To be fair, the majority of the problems you listed with Python have nothing to do with their release process but their design process. The two are unrelated. The fact that it suffers disruptive changes is an argument for dev/stable branches, not against.
The point here is how the community handles problems. It is a matter of engineering skill, not programming. Both languages have bugs and bad decisions. Python fixes them without disrupting its schedule or usability. D says "suck it up for the next X years while we fix it" or "you can have some obscure 4-year-old version without that bug".
 -- 
 Adam Wilson
 IRC: LightBender
 Project Coordinator
 The Horizon Project
 http://www.thehorizonproject.org/
Jul 15 2012
parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Patrick Stewart:

 Both languages have programming bugs and bad decisions. Python 
 fix them without disrupting schedule and usability. D says 
 "suck it up for next X years while we fix it" or "You have some 
 obscure 4 year old version without that bug".
Python's C interpreter is also far simpler to write than designing a new systems language and implementing its compiler and standard library. And behind Python there are far more developers and users. Bye, bearophile
Jul 15 2012
next sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
bearophile Wrote:

 Patrick Stewart:
 
 Both languages have programming bugs and bad decisions. Python 
 fix them without disrupting schedule and usability. D says 
 "suck it up for next X years while we fix it" or "You have some 
 obscure 4 year old version without that bug".
Python C interpreter is also far simpler than designing a new system language, and writing its compiler and standard library. And behind Python there are far more developers and users. Bye, bearophile
We are coming back to the dsource & Tango graveyard story. D had an equally capable and large community too. Its resources got wasted. People left. A huge amount of work was wasted for nothing. On the other hand, Python has one of the largest *operational* standard libraries and tons of 3rd-party ones. Why? Because with a stable language, all those libraries stayed in the game.
Jul 15 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 4:15 PM, Patrick Stewart wrote:
 We are coming back to dsource & Tango graveyard story. D had equally capable
 and large community to. Its resources got wasted. People left. Huge amount of
 work just wasted for nothing. On the other hand, Python has one of the
 largest *operational* standard library and tons of 3rd party ones. Why?
 Because with stable language, all those libraries stayed in the game.
Quite a bit of Tango has moved into D2, the parts whose authors had the rights to change the license and so move it.
Jul 15 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/15/12 7:15 PM, Patrick Stewart wrote:
 We are coming back to dsource&  Tango graveyard story. D had equally
 capable and large community to. Its resources got wasted. People
 left. Huge amount of work just wasted for nothing.
Actually, a couple of weeks ago I was curious and collected a few statistics about the frequency of posts, number of posters, and such. The numbers are not yet in shape to be published, but from what I gathered so far there was no visible glitch around the D1/D2 divergence. There's a strong increase since 2011, but I couldn't yet discern an exponential trend.
 On the other hand,
 Python has one of the largest *operational* standard library and tons
 of 3rd party ones. Why? Because with stable language, all those
 libraries stayed in the game.
Agreed, we have much to learn from Python and other successful languages. I assume those procedures and protocols materialized together with the strong growth of that community, and they may be difficult to transplant to our team.

Right now my main focus as an organizer is to make sure people's cycles are spent on productive, high-impact work. Right now Walter is working on Win64, which is of very high impact. A change of procedure right now would simply mean time taken away from that task.

Finally, since you are interested in effecting durable positive change in D's development, I'll venture that perhaps you're not going about it the best way. Your posts attempt, almost without exception, to inflame, and there's no contribution I know of in your name. That all reduces the credibility of your points, however much merit there may be in them.

Thanks, Andrei
Jul 15 2012
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 18:01:41 -0700, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 7/15/12 7:15 PM, Patrick Stewart wrote:
 We are coming back to dsource&  Tango graveyard story. D had equally
 capable and large community to. Its resources got wasted. People
 left. Huge amount of work just wasted for nothing.
Actually a couple of weeks ago I was curious and collected a few statistics about the frequency of posts, number of posters, and such. The numbers are not yet in shape to be published, but from what I gathered so far there was no visible glitch around the D1/D2 divergence. There's a strong increase since 2011, but I couldn't yet gather an exponential trend.
 On the other hand,
 Python has one of the largest *operational* standard library and tons
 of 3rd party ones. Why? Because with stable language, all those
 libraries stayed in the game.
Agreed, we have much to learn from Python and other successful languages. I assume those procedures and protocols materialized together with strong growth of the community, and may be difficult to transplant to our team. Right now my main focus as an organizer is to make sure people's cycles are spent on productive, high-impact work. Right now Walter is working on Win64, which is of very high impact. A change of procedure right now would simply mean time taken away from that task. Finally, since you are interested in effecting durable positive change in D's development, I'll venture that perhaps you're not going the best way about it. Your posts attempt almost with no exception to inflame, and there's no contribution I know of in your name. That all reduces the credibility of your points, however merit there may be in them. Thanks, Andrei
I would like to state that I am all for waiting on Win64; it's a huge project and trying to do this change in the middle of it would be the height of stupidity. However, directly after Win64 goes live, I move that we make the dual-branch model the default going forward, as it solves too many long-standing community complaints to reasonably dismiss. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
next sibling parent "nazriel" <nazriel6969 gmail.com> writes:
On Monday, 16 July 2012 at 01:06:16 UTC, Adam Wilson wrote:
 On Sun, 15 Jul 2012 18:01:41 -0700, Andrei Alexandrescu 
 <SeeWebsiteForEmail erdani.org> wrote:

 On 7/15/12 7:15 PM, Patrick Stewart wrote:
 We are coming back to dsource&  Tango graveyard story. D had 
 equally
 capable and large community to. Its resources got wasted. 
 People
 left. Huge amount of work just wasted for nothing.
Actually a couple of weeks ago I was curious and collected a few statistics about the frequency of posts, number of posters, and such. The numbers are not yet in shape to be published, but from what I gathered so far there was no visible glitch around the D1/D2 divergence. There's a strong increase since 2011, but I couldn't yet gather an exponential trend.
 On the other hand,
 Python has one of the largest *operational* standard library 
 and tons
 of 3rd party ones. Why? Because with stable language, all 
 those
 libraries stayed in the game.
Agreed, we have much to learn from Python and other successful languages. I assume those procedures and protocols materialized together with strong growth of the community, and may be difficult to transplant to our team. Right now my main focus as an organizer is to make sure people's cycles are spent on productive, high-impact work. Right now Walter is working on Win64, which is of very high impact. A change of procedure right now would simply mean time taken away from that task. Finally, since you are interested in effecting durable positive change in D's development, I'll venture that perhaps you're not going the best way about it. Your posts attempt almost with no exception to inflame, and there's no contribution I know of in your name. That all reduces the credibility of your points, however merit there may be in them. Thanks, Andrei
I would like to state that I am all for waiting onr Win64; it's a huge project and trying to do this change in the middle of it would be the height of stupidity. However, directly after Win64 goes live I move that we make the dual branch model the default going forward as it solves too many long-standing community complaints to reasonably dismiss.
+1. I think the model proposed by deadalnix would drastically increase the productivity of the D development cycle and would cover the needs of those who like to feel "stable" with their software. Let's learn from bigger, successful projects :)
Jul 15 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 03:06, Adam Wilson wrote:

 I would like to state that I am all for waiting onr Win64; it's a huge
 project and trying to do this change in the middle of it would be the
 height of stupidity. However, directly after Win64 goes live I move that
 we make the dual branch model the default going forward as it solves too
 many long-standing community complaints to reasonably dismiss.
I see no reason why COFF/Win64 would need to affect the current release (ok, it already has, I know). Walter doesn't need to push his changes upstream. He can either keep them locally or in his own forks (I assume he uses forks). When the work is finished he can push a new branch upstream and let people try that for a while. Then we can merge that branch when we feel it's stable enough. -- /Jacob Carlborg
Jul 16 2012
prev sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
bearophile Wrote:

 Patrick Stewart:
 
 Both languages have programming bugs and bad decisions. Python 
 fix them without disrupting schedule and usability. D says 
 "suck it up for next X years while we fix it" or "You have some 
 obscure 4 year old version without that bug".
Python C interpreter is also far simpler than designing a new system language, and writing its compiler and standard library.
Simpler, true. But having a more complex problem to solve just makes it more important to have *better* organization. And the D community's approach is that of weekend developers who have discovered VCS and test suites, but know nothing about branching and project planning.
 And behind Python there are far more developers and users.
 
Completely irrelevant. The number of developers has nothing to do with project organization. There is a lot of software out there, equally or more complex, that is the product of just a few or even a single programmer.
Jul 15 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 19:43:44 Patrick Stewart wrote:
 Completely not relevant. Number of developers have nothing to do with
 project organization. There is a lot of software there equal or more
 complex that are product of just a few or even single programmer.
Actually, it's _very_ relevant. If release model A takes up more time and resources than release model B, then release model A will slow down development. For projects with more developers, it may be possible to mitigate those costs such that the benefits outweigh them, but for projects with fewer developers, a slower release model may be completely unacceptable. It all depends on what the various pros and cons are and how they will affect the project. It may very well be that the proposed release model is well worth going to, but it could also be that its benefits aren't worth its costs, and the number of developers involved has a definite effect on that calculation. - Jonathan M Davis
Jul 15 2012
prev sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
SomeDude Wrote:

 On Sunday, 15 July 2012 at 20:44:01 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly 
 don't want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that 
 live in parallel is not very encouraging.
Are you kidding? Python should be used as example of how software should be engineered. They keep release schedules, keep stable versions & never break backward compatibility without giving their users ways to not be stuck in bad situation. It is well thought and planned. Its popularity and widespread is not a coincidence, and the fact that it became de facto part of linuxes (shipping with 5 year old versions without a fear of deprecation) just proves people can count on it and use it without fear of some random unguided development that is typical of D with its half thought our new features that bite it on the ass year later.
I understand your gripe with breaking changes and bugs, but your painting of the sate of things is caricatural. First Linuxes are not shipping with 5 year old versions of Python, they usually ship with 2.7 which is the last version of the 2 branch. Meanwhile, the 3 branch is having a hard time getting used, several years after its introduction, and some major packages still haven't been ported. http://wiki.python.org/moin/Python2orPython3 That is what I was referring to.
CentOS 6.x (the latest) is shipping with a 2-year-old Python version. Most production servers I maintain are 4.x - 5.x with Python 2.4+, which are roughly 5-year-old releases. Upgrading is not an option in most situations. And we are talking here about a Linux distro that targets stability above all.
 I agree the Python roadmap is better paved than the D roadmap, 
 which hardly exists. It does make a case for a dev and a stable 
 branch, which makes complete sense. OTOH, Python has suffered 
It goes the other way around. Stability is achieved by branching off from trunk at certain milestones. The whole of D development is one constant trunk, with bugs added and backward compatibility broken with each release.
 from disruptive changes just as much as D, like the fact that 
 incorporating UTF in the language has justified a completely new 
 branch. And talking about half assed features, its reference 
 implementation suffers from *major* issues, like being slow 
 (about 5 times slower than the Pypy JIT implementation) and 
 monothreaded. And that is not going to be fixed any time soon. 
 And you can't use PyPy for most serious web projects as native 
 libraries are not compatible and haven't been ported.
Web projects? Speed? PyPy? You are so deep into the premature optimization story that you miss what we are talking about here. The second biggest flaw in D development is the premature optimization obsession of a large number of devs: "We haven't made it work quite as the spec defines yet, but let's optimize it, so it can work incorrectly even faster!"
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 4:08 PM, Patrick Stewart wrote:
 Second biggest flaw with D development is premature optimization opsession by
 large number of devs. "We haven't make it work quite yet as specs define, but
 lets us optimize it, so it can work incorrectly even faster!"
All versions pass the D test suite 100%. Any regressions that appeared were not covered by the test suite, but tests for them are added as they are fixed.
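
As a loose illustration of that practice (the dmd test suite itself is organized differently, as a separate set of compile/run test directories), the idea of pinning a fixed bug with a test can be sketched in plain D unittest form; the function and the bug here are made up:

// Build and run with: dmd -unittest -run regression_sketch.d
int wasBuggy(int x)
{
    // fixed implementation: returns twice the input
    return 2 * x;
}

unittest
{
    // regression test for the hypothetical bug where wasBuggy(3) returned 5;
    // once added, the bug cannot silently return in a later release
    assert(wasBuggy(3) == 6);
}

void main() {}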
Jul 15 2012
parent reply Patrick Stewart <ncc1701d starfed.com> writes:
Walter Bright Wrote:

 On 7/15/2012 4:08 PM, Patrick Stewart wrote:
 Second biggest flaw with D development is premature optimization opsession by
 large number of devs. "We haven't make it work quite yet as specs define, but
 lets us optimize it, so it can work incorrectly even faster!"
All versions pass the D test suite 100%. Any regressions that appeared were not in the test suite, but do get added as they are fixed.
100% passing test suites don't make me even a bit happier. Which is my business, of course. What would make me happy is a feature list on a page describing milestones, and each version branched and kept stable, with bug fixes and *no* new features introduced. The real question is: where is D going? When will it be enough with cramming new features down its throat every time a semi-good idea pops up? When will D leave beta and reach v1? Because D2 is still in beta as far as I can see.
Jul 15 2012
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 15 July 2012 at 23:36:56 UTC, Patrick Stewart wrote:
 100% passage of test suites do not make me more happy even a 
 bit. Which is my business, of course.

 What would make me happy is feature list on page describing 
 milestones. And each version branched and kept stable, with 
 bugfixes and *no* new features introduced.

 Real question is where is D going? When will be enough with 
 cramming new features down its throat each time a semi-good 
 idea pops up? When will D leave beta and reach v1 ? Because D2 
 is still in beta as far as I see it.
Seriously, your whining is getting tiring and will give you nothing.
Jul 16 2012
prev sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
 OTOH, it may break the community yet again, which we certainly 
 don't want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that 
 live in parallel is not very encouraging.
Also, check the Python website: they recommend Python v2 for all new users who don't know what to choose. Both are stable, but v2 has more libraries, and they reassure users by saying v2 will be supported for some time to come. On the other hand, on the D website, D1 is pushed into a dark corner like an ugly half-child nobody should know about, and D2 is presented as the thing to choose without thinking. And there is no mention that D1 is relatively stable, while D2 is still unstable, does not conform to the D documentation, has things that just don't work, and is in a constant beta flux that breaks things on a regular basis with each release. So tell me again, which language treats its users with more respect? Which one does more to encourage users to use it?
Jul 15 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 15 July 2012 at 20:50:47 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly 
 don't want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that 
 live in parallel is not very encouraging.
Also, check Python website: they recommend python v2 for all new users that don't know what to choose. They are both stable, but v2 has more libraries, and they do reassure them by saying v2 will be supported for time to come. On the other hand, on D website, D1 is pushed to the dark corners as ugly half child nobody should know about, and D2 is titled as thing to chose without thinking. And there is no mentioning D1 is relatively stable, while D2 is still unstable, non conforming to D documentation and that some things just don't work, while in constant beta flux that breaks things on regular basis with each release. So tell me again, which language treats its users with more respect ? Which one encourages users more to use them?
The problem I raised is not a problem of respect. It's a problem of community. The D community is a tiny fraction of the Python community. It has been steadily growing this last year and a half or so, but it's still fragile. The D1/D2 split basically set it back to near zero for several years, with many people leaving, only a few staying, and a number recently coming back. The project certainly can't afford yet another split, or many key people will simply throw in the towel. I for one would rather see some of the users quit than the active members.

As for the stability of D2, your opinion may be different, but it has largely improved recently due to the increased manpower, as several people have noted (David Simcha said in a recent thread that the stability of the compiler is now good enough that he only rarely encounters a problem). And considering the rate of bug fixing, it will continue to improve. You only need to look at the changelog to see that it's growing with each release, and I'm pretty confident that 2.060 will contain more bug fixes than any past release.
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 14:36:06 -0700, SomeDude <lovelydear mailmetrash.com>  
wrote:

 On Sunday, 15 July 2012 at 20:50:47 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly don't  
 want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that live in  
 parallel is not very encouraging.
Also, check Python website: they recommend python v2 for all new users that don't know what to choose. They are both stable, but v2 has more libraries, and they do reassure them by saying v2 will be supported for time to come. On the other hand, on D website, D1 is pushed to the dark corners as ugly half child nobody should know about, and D2 is titled as thing to chose without thinking. And there is no mentioning D1 is relatively stable, while D2 is still unstable, non conforming to D documentation and that some things just don't work, while in constant beta flux that breaks things on regular basis with each release. So tell me again, which language treats its users with more respect ? Which one encourages users more to use them?
The problem I raised is not a problem of respect. It's a problem of community. The D community is a tiny fraction of the Python community. It has been steadily growing this last year and a half or so, but it's still fragile. The D1/D2 split basically set it back to near zero for several years, with many people leaving, only a few staying, and a number recently coming back. The project certainly can't afford yet another split, or many key people will simply throw the towel. I for one would rather see part of the users quitting than active members. As for the stability of D2, upir opinion may be different, but it has largely improved recently due to increased forces, as several people have noted (David Simcha in a recent thread said something about the stability of the compiler being good enough that he only rarely encountered a problem). And considering the rate of bugs correction, it will continue to improve. You only need to have a look at the changelog to see that it's growing with each release, and I'm pretty confident that the 2.060 will contain more bug fixes than any past release.
You are concerned that adopting an OSS best practice is going to split the community ... seriously? I find that a very hard argument to swallow with a straight face. We aren't splitting D2 into a D3; we are simply arguing for a bugfix branch (stable) and a new-feature branch (dev). How, pray tell, is this a split? We are merely seeking to take two things that are logically unrelated, bugs and new features, and make them physically unrelated as well.

The idea that bugs and new features can and should be rolled into the same release runs counter to every accepted best practice in both FOSS and commercial wisdom. The two have VERY different velocities: bugs can be fixed in days, but new features take much longer. Consider COFF support, for example; Walter has been hammering away at it for weeks now and he isn't even 50% done, yet how many bugs have been fixed and confirmed resolved in the same timespan? Also, consider that adding new features makes it significantly harder to track down regressions (is it a real regression, or did the new feature upset the code in an unexpected way?), and the new features themselves create new bugs. If the branches are separate, it becomes trivial to determine whether the new feature caused the bug, because it will show up in one branch and not the other.

How DARE we DEMAND that our users wait 4 MONTHS for regression fixes because we are afraid of a split or a little extra work? How many users could we lose if we significantly slowed down the release cycle (and therefore the bugfix cycle) such that people are waiting many months for their fixes? The language would be perceived as dead/dying, and that would be just as bad as the D1/D2 split. If you allow your past experiences to paralyze you into inaction, you will bring about the very problem you seek to avoid.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 2:58 PM, Adam Wilson wrote:
 The idea that bugs and new features can and should be rolled into the same
 release runs counter to every accepted best practice in both FOSS and
Commercial
 wisdom. The two have VERY different velocities, bugs can be fixed in days, but
 new features take much longer. Consider COFF support for example, Walter has
 been hammering away at it for weeks now, and he isn't even 50% done, how many
 bugs have been fixed and confirmed resolved, in the same timespan?
Weeks is an exaggeration. And still, there has been a steady accumulation of fixes.
 Also,
 consider that adding new features makes it significantly harder to track down
 regressions (is a real regression or did the new feature upset the code in an
 unexpected way) and the new features themselves create new bugs. If the
branches
 are separate then it becomes trivial to determine if the new feature caused the
 bug, because it will show up in one and not the other.

 How DARE we DEMAND that our users wait 4 MONTHS for regression fixes because we
 are afraid of a split or a little extra work? How many users could we lose if
we
 significantly slowed down the release cycle (and therefore the bugfix cycle)
 such that people are waiting many months for their fixes? The language would be
 perceived as dead/dying and that would be just as bad as the D1/D2 split. If
you
 allow your past experiences to paralyze you into inaction, you will bring about
 the very problem you seek to avoid.
Sigh. Half say we release too often, the other half not often enough.
Jul 15 2012
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 16:26:50 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/15/2012 2:58 PM, Adam Wilson wrote:
 The idea that bugs and new features can and should be rolled into the  
 same
 release runs counter to every accepted best practice in both FOSS and  
 Commercial
 wisdom. The two have VERY different velocities, bugs can be fixed in  
 days, but
 new features take much longer. Consider COFF support for example,  
 Walter has
 been hammering away at it for weeks now, and he isn't even 50% done,  
 how many
 bugs have been fixed and confirmed resolved, in the same timespan?
Weeks is an exaggeration. And still, there have been a steady accumulation of fixes.
 Also,
 consider that adding new features makes it significantly harder to  
 track down
 regressions (is a real regression or did the new feature upset the code  
 in an
 unexpected way) and the new features themselves create new bugs. If the  
 branches
 are separate then it becomes trivial to determine if the new feature  
 caused the
 bug, because it will show up in one and not the other.

 How DARE we DEMAND that our users wait 4 MONTHS for regression fixes  
 because we
 are afraid of a split or a little extra work? How many users could we  
 lose if we
 significantly slowed down the release cycle (and therefore the bugfix  
 cycle)
 such that people are waiting many months for their fixes? The language  
 would be
 perceived as dead/dying and that would be just as bad as the D1/D2  
 split. If you
 allow your past experiences to paralyze you into inaction, you will  
 bring about
 the very problem you seek to avoid.
Sigh. Half say we release too often, the other half not often enough.
This would solve both complaints overnight. The half that says "not often enough" has critical bugfixes they are waiting on; the "too often" camp has new things that they want now (e.g. COFF) and sees all this bugfixing as getting in the way. I agree it's a problem, but this makes both happy. The new-feature camp can use dev and put up with the breakages (what we do now), and the bugfix camp can get back to work. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 16:26:50 Walter Bright wrote:
 Sigh. Half say we release too often, the other half not often enough.
Which is actually one argument for going to a model where you have frequent minor releases which only contain bug fixes and less frequent major releases with the larger changes. You can never make everyone happy, but by doing so, you get the bug fixes faster for the folks complaining about the lack of frequent releases, and you get increased stability as far as the new stuff goes, because it doesn't come with every release. I'm only against the proposed versioning scheme because I think that we need to stabilize things better (e.g. actually have all of the features that TDPL lists fully implemented) before we move to it. But I fully support moving to this sort of scheme in the long run. It manages change much better, and I think that many, many existing projects have shown that it promotes stable code bases while still allowing for them to evolve as necessary. - Jonathan M Davis
Jul 15 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 16/07/2012 01:42, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 16:26:50 Walter Bright wrote:
 Sigh. Half say we release too often, the other half not often enough.
Which is actually one argument for going to a model where you have frequent minor releases which only contain bug fixes and less frequent major releases with the larger changes. You can never make everyone happy, but by doing so, you get the bug fixes faster for the folks complaining about the lack of frequent releases, and you get increased stability as far as the new stuff goes, because it doesn't come with every release. I'm only against the proposed versioning scheme because I think that we need to stabilize things better (e.g. actually have all of the features that TDPL lists fully implemented) before we move to it. But I fully support moving to this sort of scheme in the long run. It manages change much better, and I think that many, many existing projects have shown that it promotes stable code bases while still allowing for them to evolve as necessary. - Jonathan M Davis
The proposed scheme is only a proposal. Other solutions exist that solve the problem, and if they fit better, why not?
Jul 15 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, July 16, 2012 02:07:13 deadalnix wrote:
 On 16/07/2012 01:42, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 16:26:50 Walter Bright wrote:
 Sigh. Half say we release too often, the other half not often enough.
Which is actually one argument for going to a model where you have frequent minor releases which only contain bug fixes and less frequent major releases with the larger changes. You can never make everyone happy, but by doing so, you get the bug fixes faster for the folks complaining about the lack of frequent releases, and you get increased stability as far as the new stuff goes, because it doesn't come with every release. I'm only against the proposed versioning scheme because I think that we need to stabilize things better (e.g. actually have all of the features that TDPL lists fully implemented) before we move to it. But I fully support moving to this sort of scheme in the long run. It manages change much better, and I think that many, many existing projects have shown that it promotes stable code bases while still allowing for them to evolve as necessary. - Jonathan M Davis
The proposed scheme is only a proposed scheme. Other solutions exist that solve the problem, and if they better fit, why not ?
If someone has a better proposal, they should make it (though probably in a separate thread - this one's long enough as it is). I think that the basics of this proposal are good, and a lot of projects work that way. I just think that D needs to be more stable before we worry about having major and minor releases or stable and unstable branches. - Jonathan M Davis
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 17:20:33 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Monday, July 16, 2012 02:07:13 deadalnix wrote:
 On 16/07/2012 01:42, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 16:26:50 Walter Bright wrote:
 Sigh. Half say we release too often, the other half not often enough.
Which is actually one argument for going to a model where you have frequent minor releases which only contain bug fixes and less frequent major releases with the larger changes. You can never make everyone happy,
but
 by doing so, you get the bug fixes faster for the folks complaining  
about
 the lack of frequent releases, and you get increased stability as far  
as
 the new stuff goes, because it doesn't come with every release.

 I'm only against the proposed versioning scheme because I think that  
we
 need to stabilize things better (e.g. actually have all of the  
features
 that TDPL lists fully implemented) before we move to it. But I fully
 support moving to this sort of scheme in the long run. It manages  
change
 much better, and I think that many, many existing projects have shown
 that it promotes stable code bases while still allowing for them to
 evolve as necessary.

 - Jonathan M Davis
The proposed scheme is only a proposed scheme. Other solutions exist that solve the problem, and if they better fit, why not ?
If someone has a better proposal, they should make it (though probably in a separate thread - this one's long enough as it is). I think that the basics of this proposal are good, and a lot of projects work that way. I just think that D needs to be more stable before we worry about having major and minor releases or stable and unstable branches. - Jonathan M Davis
I guess I just see it as differing definitions of "stable". For example, dsimcha was here not twenty hours ago praising D for how stable it's become. I think this is a pretty good summation of stable in the community project context: http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html Note: We meet all criteria for stable. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 17:23:44 Adam Wilson wrote:
 I guess I just see it as differing definitions of "stable". For example,
 dsimcha was here not twenty hours ago praising D for how stable it's
 become.
 
 I think this is a pretty good summation of stable in the community project
 context:
 http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html
 
 Note: We meet all criteria for stable.
What I want to see is dmd having fully implemented all of the features in TDPL (e.g. multiple alias thises) and sorted out all of the major design or implementation issues (e.g. the issues with const and Object). After that, D2 has been fully implemented, and we can look at adding new features if we want to and restricting those as well as any breaking changes that we need to make to a different branch which only gets merged into the main branch in certain releases. Arguably, we've been adding too many new features (e.g. new lambda syntax and SIMD support), given that we're supposed to be making everything that we already have work properly, but those features haven't been breaking changes, and presumably forcing Walter to just fix bugs wouldn't be all that pleasant for him. But until we've fully implemented what we have, I think that it's just going to slow us down to little benefit to change the release model. Once we have, _then_ I'd love to see a release model which promotes major vs minor releases and the like, because then we can evolve the language and library as appropriate while still maintaining stable releases which programmers can rely on for long periods of time without worrying about breaking changes and whatnot. - Jonathan M Davis
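
For context on the "multiple alias thises" point, here is a small sketch (assuming a reasonably recent D2 compiler): a single alias this and the short lambda syntax mentioned both work today; a second alias this per type, which TDPL describes, is the part dmd does not yet accept. The Meters type is made up for illustration:

import std.stdio;

struct Meters
{
    double value;
    string unit = "m";

    alias value this;   // Meters converts implicitly to double
    // alias unit this; // a second alias this, as described in TDPL,
                        // is not accepted by dmd yet
}

void main()
{
    auto m = Meters(3.5);
    double d = m;                      // goes through alias this
    auto twice = (double x) => 2 * x;  // recently added lambda syntax
    writeln(twice(d));                 // prints 7
}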
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 17:36:28 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Sunday, July 15, 2012 17:23:44 Adam Wilson wrote:
 I guess I just see it as differing definitions of "stable". For example,
 dsimcha was here not twenty hours ago praising D for how stable it's
 become.

 I think this is a pretty good summation of stable in the community  
 project
 context:
 http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html

 Note: We meet all criteria for stable.
What I want to see is dmd having fully implemented all of the features in TDPL (e.g. multiple alias thises) and sorted out all of the major design or implementation issues (e.g. the issues with const and Object). After that, D2 has been fully implemented, and we can look at adding new features if we want to and restricting those as well as any breaking changes that we need to make to a different branch which only gets merged into the main branch in certain releases. Arguably, we've been adding too many new features (e.g. new lambda syntax and SIMD support), given that we're supposed to be making everything that we already have work properly, but those features haven't been breaking changes, and presumably forcing Walter to just fix bugs wouldn't be all that pleasant for him. But until we've fully implemented what we have, I think that it's just going to slow us down to little benefit to change the release model. Once we have, _then_ I'd love to see a release model which promotes major vs minor releases and the like, because then we can evolve the language and library as appropriate while still maintaining stable releases which programmers can rely on for long periods of time without worrying about breaking changes and whatnot. - Jonathan M Davis
I think the problem is that in the real world, that state is somewhat unlikely. For example, Walter is currently working on COFF support; this is arguably a new feature (we can already make programs work on Windows). Programmers aren't machines, and fixing bugs all day is boring; we want to do the fun stuff, in this case new features. It just so happens that it's the fun stuff that makes fixing bugs bearable. I don't think it's fair of us to demand that Walter only fix bugs; besides, COFF support is a HIGHLY requested new feature, is he just supposed to ignore that? It is never easy deciding which new features to add versus which bugs to fix, but that's the beauty of this model: you don't have to. You just do both. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 5:45 PM, Adam Wilson wrote:
 I think the problem is that in the real world, that state is somewhat unlikely.
 For example, Walter is currently working on COFF support, this is arguably a
new
 feature (we already can make programs work on Windows). Programmers aren't
 machines and fixing bugs all day is boring, we want to do the fun stuff, in
this
 case, new features. It just so happens that it's the fun stuff that makes
fixing
 bugs bearable. I don't think it's fair of us to demand that Walter only fix
 bugs, besides, COFF support is a HIGHLY requested new feature, he is just
 supposed to ignore them?
Supporting Win64 is absolutely critical for the future of D, and the sooner we get it, the better. The COFF route is the shortest route to doing it, and the most practical for attracting devs, which is why it's the way we're going. 32 bit code is dead on OSX, is dying rapidly on Linux and Windows.
Jul 15 2012
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 23:00:01 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/15/2012 5:45 PM, Adam Wilson wrote:
 I think the problem is that in the real world, that state is somewhat  
 unlikely.
 For example, Walter is currently working on COFF support, this is  
 arguably a new
 feature (we already can make programs work on Windows). Programmers  
 aren't
 machines and fixing bugs all day is boring, we want to do the fun  
 stuff, in this
 case, new features. It just so happens that it's the fun stuff that  
 makes fixing
 bugs bearable. I don't think it's fair of us to demand that Walter only  
 fix
 bugs, besides, COFF support is a HIGHLY requested new feature, he is  
 just
 supposed to ignore them?
Supporting Win64 is absolutely critical for the future of D, and the sooner we get it, the better. The COFF route is the shortest route to doing it, and the most practical for attracting devs, which is why it's the way we're going. 32 bit code is dead on OSX, is dying rapidly on Linux and Windows.
I absolutely agree with this, but you already know that. I've been lobbying for COFF ever since I first showed up here. :-) -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 08:00, Walter Bright wrote:

 Supporting Win64 is absolutely critical for the future of D, and the
 sooner we get it, the better. The COFF route is the shortest route to
 doing it, and the most practical for attracting devs, which is why it's
 the way we're going.
Agree.
 32 bit code is dead on OSX, is dying rapidly on Linux and Windows.
No need to remove that until it causes big maintenance problems.

--
/Jacob Carlborg
Jul 16 2012
prev sibling next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Monday, 16 July 2012 at 06:00:03 UTC, Walter Bright wrote:
 Supporting Win64 is absolutely critical for the future of D, 
 and the sooner we get it, the better. The COFF route is the 
 shortest route to doing it, and the most practical for 
 attracting devs, which is why it's the way we're going.
Sorry, but I don't think this is a valid argument. Yes, Win64 (and even more so, COFF) support is important to have for DMD, but no, it's not a good idea to delay a pending release because of this (cf. the »Time for a new beta« thread from the end of May). Here is why:

http://d.puremagic.com/issues/buglist.cgi?chfieldto=Now&query_format=advanced&chfield=bug_status&chfieldfrom=2012-04-13&bug_status=RESOLVED&resolution=FIXED

Already 289 issues resolved since 2.059! And implementing Win64 support isn't going to be done in a weekend. Sure, the changes needed are not world-shattering: finish COFF writing support, tweak the register spilling/call emitting code to conform to the Win64 ABI, implement vararg support in both the backend and druntime (they are handled differently on Win64 than described in the System V ABI), and transition to the MSVC runtime. druntime and Phobos will also require some changes, although there shouldn't be much left to do, given that GDC (and LDC, except for exceptions) already work on x64 Windows.

After this has been done, there are still the open regressions to deal with:

http://d.puremagic.com/issues/buglist.cgi?chfieldto=Now&query_format=advanced&chfieldfrom=2012-04-13&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

And after those regressions have been fixed and a first beta is released, chances are that some new regressions will pop up, because many people can't even use Git master for their real-world code right now. So, all in all, I wouldn't expect a release before, say, around mid-September. This is five months after the last release, a delay twice as long as our usual release cycle. Several of the bugs fixed since 2.059 were hard to work around, so I don't think it's unreasonable to assume that we will have lost users because of this.

And what for? Chances are that we will just have _two_ semi-working targets in DMD then (structs are still broken on x86_64 Linux/OS X/BSD w.r.t. parameter ABI and in some cases sizing/alignment).

If you really want to make D more attractive, including for corporate use (from what I gathered from several Thrift-related discussions), the easiest thing to do, in my humble opinion, would be to make the release schedule at least somewhat predictable, to publish more or less dependable short-term roadmaps, and most importantly, to actively communicate your decisions on these topics – it just happens that you are D's lead-developer-release-manager-strategist-dictator, regardless of whether you'd prefer to fill only some of the roles.

David
Jul 16 2012
parent Don Clugston <dac nospam.com> writes:
On 16/07/12 16:51, David Nadlinger wrote:
 On Monday, 16 July 2012 at 06:00:03 UTC, Walter Bright wrote:
 Supporting Win64 is absolutely critical for the future of D, and the
 sooner we get it, the better. The COFF route is the shortest route to
 doing it, and the most practical for attracting devs, which is why
 it's the way we're going.
Sorry, but I don't think this is a valid argument. Yes, Win64 (and even more so, COFF) support is important to have for DMD, but no, it's not a good idea to delay a pending release because of this (cf. the »Time for a new beta« thread from the end of May). Here is why: http://d.puremagic.com/issues/buglist.cgi?chfieldto=Now&query_format=advanced&chfield=bug_status&chfieldfrom=2012-04-13&bug_status=RESOLVED&resolution=FIXED Already 289 issues resolved since 2.059!
More than that. Of the official releases, there is no usable 64 bit DMD compiler on ANY platform. Some awful wrong-code bugs were still present in 2.059. They have been fixed for a couple of months in DMD git, but not in an official release.
Jul 16 2012
prev sibling parent Andrej Mitrovic <andrej.mitrovich gmail.com> writes:
On 7/16/12, Walter Bright <newshound2 digitalmars.com> wrote:
 The COFF route is the shortest route to doing it, and the most practical for
attracting devs, which is why it's the way we're going.
Anyone know if MinGW and VC++ COFF object files are linkable? I might have even done this before without knowing, but I don't recall.
Jul 22 2012
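One quick way to find out is to compile one object with each toolchain and hand both to a single linker. A minimal sketch, assuming plain C sources and that MinGW's gcc and the MSVC command-line tools (cl and link) are both installed; whether the link actually succeeds depends on runtime-library and symbol differences between the two toolchains, so treat it as an experiment rather than a known-good recipe:

    gcc -c mingw_part.c -o mingw_part.obj    # MinGW emits a COFF object
    cl /c msvc_part.c                        # MSVC produces msvc_part.obj
    link mingw_part.obj msvc_part.obj /OUT:mixed.exe

If link complains about unresolved runtime symbols, the objects are still both COFF; they were just built against different C runtimes.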
prev sibling next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 5:36 PM, Jonathan M Davis wrote:
 Arguably, we've been adding too many new features (e.g. new lambda syntax and
 SIMD support), given that we're supposed to be making everything that we
 already  have work properly, but those features haven't been breaking changes,
 and presumably forcing Walter to just fix bugs wouldn't be all that pleasant
 for him.
SIMD support is critical for D's mission as a systems programming language, and has been important in attracting some significant adoption of D.
Jul 15 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 22:55:40 Walter Bright wrote:
 On 7/15/2012 5:36 PM, Jonathan M Davis wrote:
 Arguably, we've been adding too many new features (e.g. new lambda syntax
 and SIMD support), given that we're supposed to be making everything that
 we already  have work properly, but those features haven't been breaking
 changes, and presumably forcing Walter to just fix bugs wouldn't be all
 that pleasant for him.
SIMD support is critical for D's mission as a systems programming language, and has been important in attracting some significant adoption of D.
Oh, I'm not saying that the feature isn't valuable. I'm just pointing out that it's adding something new rather than actually finishing all of the features that we're already supposed to have, and in theory, after TDPL's release, we were supposed to avoid adding new features that weren't needed to make the existing features work, until we'd finished all of the features outlined in TDPL. And maybe it _was_ worth adding SIMD support now rather than later, but it goes against what we said we were doing.

- Jonathan M Davis
Jul 15 2012
next sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 11:06 PM, Jonathan M Davis wrote:
 And maybe it _was_ worth adding SIMD support now rather than
 later, but it goes against what we said we were doing.
It was a leap of faith on my part, but I think events have shown that it was indeed worth it.
Jul 15 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 08:06, Jonathan M Davis wrote:

 Oh, I'm not saying that the feature isn't valuable. I'm just pointing out that
 it's adding something new rather than actually finishing all of the features
 that we're already supposed to have, and in theory, after TDPL's release, we
 were supposed to be avoiding adding new features which we didn't need to make
 all of the existing features work until we'd finished all of the features
 outlined in TDPL. And maybe it _was_ worth adding SIMD support now rather than
 later, but it goes against what we said we were doing.
Yeah, and it did pop up somewhat unexpectedly. Sure, there was a lot of discussion about it, but in the middle of the discussions we could see commits adding SIMD support popping up.

--
/Jacob Carlborg
Jul 16 2012
prev sibling next sibling parent reply "David Nadlinger" <see klickverbot.at> writes:
On Monday, 16 July 2012 at 05:56:47 UTC, Walter Bright wrote:
 SIMD support is critical for D's mission as a systems 
 programming language, and has been important in attracting some 
 significant adoption of D.
Has it? David
Jul 16 2012
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 16 July 2012 12:26, David Nadlinger <see klickverbot.at> wrote:
 On Monday, 16 July 2012 at 05:56:47 UTC, Walter Bright wrote:
 SIMD support is critical for D's mission as a systems programming
 language, and has been important in attracting some significant adoption of
 D.
Has it? David
It certainly raised a few eyebrows from D users from the Game Development market. Even had someone from Remedy Games contact me over some support queries - which I found to be a bit of a surprise. :-) -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 16 2012
parent "David Nadlinger" <see klickverbot.at> writes:
On Monday, 16 July 2012 at 11:34:26 UTC, Iain Buclaw wrote:
 It certainly raised a few eyebrows from D users from the Game
 Development market.
I don't doubt that, but has SIMD support in its current form actually led to any »significant adoption«? For example, I got dozens of private requests from different programmers, some from companies with big names, regarding Thrift support in D after last year's GSoC. I know that it led some people to play around with D, but that's hardly what I would call »significant adoption«… David
Jul 16 2012
prev sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 05:56:47 UTC, Walter Bright wrote:
 On 7/15/2012 5:36 PM, Jonathan M Davis wrote:
 Arguably, we've been adding too many new features (e.g. new 
 lambda syntax and
 SIMD support), given that we're supposed to be making 
 everything that we
 already  have work properly, but those features haven't been 
 breaking changes,
 and presumably forcing Walter to just fix bugs wouldn't be all 
 that pleasant
 for him.
SIMD support is critical for D's mission as a systems programming language, and has been important in attracting some significant adoption of D.
OTOH, if the people who were attracted end up quitting because the language is continuously unstable, the net effect is negative, because those who leave the boat won't come back.

I tend to agree that there should be a stable and a dev branch, with regular merges from dev --> stable when new non-breaking features have been shown to work for a while. For instance, nothing prevents you from developing COFF and SIMD on the dev branch, and deciding in 6 months to merge those features into the stable branch, because they've been shown to have stabilized.

In fact, it's much easier to make a roadmap this way than it is right now, where releases are delayed because new features don't work. 2.060 is late, very late.
Jul 16 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/16/12 4:37 PM, SomeDude wrote:
 The 2.060 is late, very late.
Then perhaps it's worth priming dlang-stable with a "2.059 plus best of 2.060" experimental release. Enough to only get a few essential commits in. It would help the project get its bearings. Andrei
Jul 16 2012
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 16 Jul 2012 13:42:36 -0700, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 7/16/12 4:37 PM, SomeDude wrote:
 The 2.060 is late, very late.
Then perhaps it's worth priming dlang-stable with a "2.059 plus best of 2.060" experimental release. Enough to only get a few essential commits in. It would help the project get its bearings. Andrei
I should note that dlang-stable is currently forked from DMD-HEAD, so you'd have to reset it to the 2.059 tag and then start merging commits; doable, but a lot of work.

--
Adam Wilson
IRC: LightBender

Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 16 2012
prev sibling parent "H. S. Teoh" <hsteoh quickfur.ath.cx> writes:
On Mon, Jul 16, 2012 at 04:42:36PM -0400, Andrei Alexandrescu wrote:
 On 7/16/12 4:37 PM, SomeDude wrote:
The 2.060 is late, very late.
Then perhaps it's worth priming dlang-stable with a "2.059 plus best of 2.060" experimental release. Enough to only get a few essential commits in. It would help the project get its bearings.
[...] +1. T -- Famous last words: I wonder what will happen if I do *this*...
Jul 16 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 02:36, Jonathan M Davis wrote:

 Arguably, we've been adding too many new features (e.g. new lambda syntax and
 SIMD support), given that we're supposed to be making everything that we
 already  have work properly, but those features haven't been breaking changes,
 and presumably forcing Walter to just fix bugs wouldn't be all that pleasant
 for him. But until we've fully implemented what we have, I think that it's
 just going to slow us down to little benefit to change the release model. Once
 we have, _then_ I'd love to see a release model which promotes major vs minor
 releases and the like, because then we can evolve the language and library as
 appropriate while still maintaining stable releases which programmers can rely
 on for long periods of time without worrying about breaking changes and
 whatnot.
There are a lot of other things to do besides fixing bugs. For example, the ongoing COFF/Win64 changes: I wouldn't really consider those a new feature, and not really a bug fix either. Then we have Phobos, ARM and tools to work on as well.

--
/Jacob Carlborg
Jul 16 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-16 02:20, Jonathan M Davis wrote:

 If someone has a better proposal, they should make it (though probably in a
 separate thread - this one's long enough as it is). I think that the basics of
 this proposal are good, and a lot of projects work that way. I just think that
 D needs to be more stable before we worry about having major and minor
 releases or stable and unstable branches.
Wasn't that what you just said two posts up? To have major and minor releases. Major containing new features and minor only bug fixes. -- /Jacob Carlborg
Jul 16 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, July 16, 2012 09:25:54 Jacob Carlborg wrote:
 On 2012-07-16 02:20, Jonathan M Davis wrote:
 If someone has a better proposal, they should make it (though probably in
 a
 separate thread - this one's long enough as it is). I think that the
 basics of this proposal are good, and a lot of projects work that way. I
 just think that D needs to be more stable before we worry about having
 major and minor releases or stable and unstable branches.
Wasn't that what you just said two posts up? To have major and minor releases. Major containing new features and minor only bug fixes.
Isn't that essentially what the OP's proposal was? We'd have 2.x.y where x is for major releases and y is for minor releases, with only bug fixes being permitted in minor releases. I don't think that I've proposed any other versioning schemes than what the OP was proposing. If I did, I misunderstood something.

- Jonathan M Davis
Jul 16 2012
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 16/07/12 01:42, Jonathan M Davis wrote:
 I'm only against the proposed versioning scheme because I think that we need
 to stabilize things better (e.g. actually have all of the features that TDPL
 lists fully implemented) before we move to it. But I fully support moving to
 this sort of scheme in the long run. It manages change much better, and I
 think that many, many existing projects have shown that it promotes stable
 code bases while still allowing for them to evolve as necessary.
... but is a switch to this versioning method really going to slow down the implementation of new features?

D is now stable enough (in terms of quality) and broad enough (in terms of features) for this scheme to be useful, so perhaps it's worth defining the "blocking" features that really _must_ be there before a switch in versioning style takes place. I think that should probably be a minimal rather than maximal list, with the aim being to switch versioning style sooner rather than later. It shouldn't have to wait on everything that TDPL lists -- how long is that going to take?

If you want the version number scheme to represent clearly the importance of the complete-TDPL milestone, how about instead bumping the MAJOR version number when it's done? Yes, I know much has been said about "no D3", but this is a different and possibly useful definition of 3.0 :-)
Jul 22 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 15/07/2012 23:36, SomeDude wrote:
 On Sunday, 15 July 2012 at 20:50:47 UTC, Patrick Stewart wrote:
 OTOH, it may break the community yet again, which we certainly don't
 want, probably even less than breaking code.
 Also, the example of Python with two main stable branches that live
 in parallel is not very encouraging.
Also, check Python website: they recommend python v2 for all new users that don't know what to choose. They are both stable, but v2 has more libraries, and they do reassure them by saying v2 will be supported for time to come. On the other hand, on D website, D1 is pushed to the dark corners as ugly half child nobody should know about, and D2 is titled as thing to chose without thinking. And there is no mentioning D1 is relatively stable, while D2 is still unstable, non conforming to D documentation and that some things just don't work, while in constant beta flux that breaks things on regular basis with each release. So tell me again, which language treats its users with more respect ? Which one encourages users more to use them?
The problem I raised is not a problem of respect. It's a problem of community. The D community is a tiny fraction of the Python community. It has been steadily growing this last year and a half or so, but it's still fragile. The D1/D2 split basically set it back to near zero for several years, with many people leaving, only a few staying, and a number recently coming back. The project certainly can't afford yet another split, or many key people will simply throw in the towel. I for one would rather see part of the users quitting than active members. As for the stability of D2, your opinion may be different, but it has largely improved recently due to increased forces, as several people have noted (David Simcha in a recent thread said something about the stability of the compiler being good enough that he only rarely encountered a problem). And considering the rate of bug fixes, it will continue to improve. You only need to have a look at the changelog to see that it's growing with each release, and I'm pretty confident that 2.060 will contain more bug fixes than any past release.
Well, bugs will be fixed for sure.

But, as explained, as new features are also introduced, bugs will also be introduced.

This is why a stable release is never reached.
Jul 15 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Sunday, 15 July 2012 at 23:15:29 UTC, deadalnix wrote:
 On 15/07/2012 23:36, SomeDude wrote:
 Well bug will be fixed for sure.

 But, as explained, as new feature are also introduced, bug will 
 also be introduced.

 This is why stable release is never reached.
I ended up changing my mind, but only if the minor releases don't get distanced by the major releases, i.e. if the features that have proven to be stabilized after a while end up being merged into the stable branch, i.e. the stable branch is not frozen.

This way, the stable branch and the dev branch don't diverge to the point that they effectively become two versions of the same language.
Jul 16 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Mon, 16 Jul 2012 13:45:10 -0700, SomeDude <lovelydear mailmetrash.com>  
wrote:

 On Sunday, 15 July 2012 at 23:15:29 UTC, deadalnix wrote:
 On 15/07/2012 23:36, SomeDude wrote:
 Well bug will be fixed for sure.

 But, as explained, as new feature are also introduced, bug will also be  
 introduced.

 This is why stable release is never reached.
I ended up changing my mind, but only if the minor releases don't get distanced by the major releases, i.e if the features that have proven to be stabilized after a while end up being merged in the stable branch, i.e the stable branch is not frozen. This way, the stable branch and the dev branch don't diverge to the point that there effectively become two versions of the same language.
That is the main goal of dlang-stable. :-) -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 16 2012
parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 20:47:13 UTC, Adam Wilson wrote:
 I ended up changing my mind, but only if the minor releases 
 don't get distanced by the major releases, i.e if the features 
 that have proven to be stabilized after a while end up being 
 merged in the stable branch, i.e the stable branch is not 
 frozen.

 This way, the stable branch and the dev branch don't diverge 
 to the point that there effectively become two versions of the 
 same language.
That is the main goal of dlang-stable. :-)
Then I guess we agree :)
Jul 16 2012
prev sibling parent reply "RivenTheMage" <riven-mage id.ru> writes:
On Monday, 16 July 2012 at 20:45:11 UTC, SomeDude wrote:

 This way, the stable branch and the dev branch don't diverge to 
 the point that there effectively become two versions of the 
 same language.
I think they should be diverging.

The stable branch should be "the TDPL branch". After reaching the point of TDPL compliance (top priority for the branch), only bug fixes and non-breaking features should be accepted (like COFF support).

The dev branch should be "the DMD3/TDPLv2 branch": a cutting-edge version of the D language.
Jul 16 2012
next sibling parent "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 21:29:31 UTC, RivenTheMage wrote:
 On Monday, 16 July 2012 at 20:45:11 UTC, SomeDude wrote:

 This way, the stable branch and the dev branch don't diverge 
 to the point that there effectively become two versions of the 
 same language.
I think, they should be diverging. The stable branch should be "the TDPL branch". After reaching point of TDPL-compliance (top priority for the branch), only bugfixes and non-breaking features must be accepted (like COFF support). The dev branch should be "the DMD3/TDPLv2 branch": a cutting edge version of the D language.
I don't think so. I think it's the best way to split the community of users from the community of developers.
Jul 16 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, July 16, 2012 23:29:30 RivenTheMage wrote:
 On Monday, 16 July 2012 at 20:45:11 UTC, SomeDude wrote:
 This way, the stable branch and the dev branch don't diverge to
 the point that there effectively become two versions of the
 same language.
I think, they should be diverging. The stable branch should be "the TDPL branch". After reaching point of TDPL-compliance (top priority for the branch), only bugfixes and non-breaking features must be accepted (like COFF support). The dev branch should be "the DMD3/TDPLv2 branch": a cutting edge version of the D language.
We're talking about doing something similar to having 2.x.y, with the main branch incrementing x and the "stable" branch incrementing y. Going to v3 would mean incrementing the 2. We have _no_ intention of doing that for years to come. - Jonathan M Davis
Jul 16 2012
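To make that concrete, here is a purely illustrative sketch of how such numbers could map onto git branches and tags; the names and numbers below are hypothetical, not an actual dmd layout:

    git branch      # -> master (next 2.x feature release), stable-2.1
    git tag         # -> v2.1.0, v2.1.1, v2.1.2, v2.2.0
    # v2.1.0 and v2.2.0 would be cut from master: x bumps, new features allowed
    # v2.1.1 and v2.1.2 would be cut from stable-2.1: y bumps, bug fixes only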
next sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/16/12 5:33 PM, Jonathan M Davis wrote:
 We're talking about doing something similar to having 2.x.y, with the main
 branch incrementing x and the "stable" branch incrementing y.
That seems a simple, nice scheme. Andrei
Jul 16 2012
parent deadalnix <deadalnix gmail.com> writes:
On 17/07/2012 00:33, Andrei Alexandrescu wrote:
 On 7/16/12 5:33 PM, Jonathan M Davis wrote:
 We're talking about doing something similar to having 2.x.y, with the
 main
 branch incrementing x and the "stable" branch incrementing y.
That seems a simple, nice scheme. Andrei
That is the scheme proposed in the very first post. It has been used successfully in many projects.
Jul 16 2012
prev sibling parent reply "RivenTheMage" <riven-mage id.ru> writes:
On Monday, 16 July 2012 at 22:14:03 UTC, Jonathan M Davis wrote:

 Going to v3 would mean incrementing the 2.
 We have _no_ intention of doing that for years
 to come.
Small steps are perfect for many projects, but - in my opinion - not for a programming language specification (and reference implementation). Big leaps are better. I'm aware of counterexamples (like PHP), but these are bad examples to follow.
Jul 16 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Tuesday, July 17, 2012 06:37:23 RivenTheMage wrote:
 On Monday, 16 July 2012 at 22:14:03 UTC, Jonathan M Davis wrote:
 Going to v3 would mean incrementing the 2.
 We have _no_ intention of doing that for years
 to come.
Small steps are perfect for many projects, but - in my opinion - not for a programming language specification (and reference implementation). Big leaps are better. I'm aware of counterexamples (like PHP), but these are bad examples to follow.
Actually, most programming languages are very conservative in how often they change, and when they do, it generally doesn't break a lot of code either. They mostly just add more stuff on top of what they did before and then try and get you to use the new stuff in addition to the old stuff (or instead of, depending on what it is). But the old stuff is still there.

We'll continue to add new features to Phobos for the foreseeable future as long as those features are worth adding. And while I don't expect that we'll add very many language features to D or that they'll be added very quickly, non-breaking additions will still occur from time to time. However, _breaking_ changes to the language should be pretty much non-existent for _years_ to come, and breaking changes to Phobos should become quite rare if not non-existent as well.

We'll eventually look at starting D3, which will mean breaking core language stuff where we deem appropriate and possibly completely revamping stuff in Phobos, but D2 needs to become fully stable and get a solid user base _long_ before we'll consider pulling the rug out from everyone with D3.

- Jonathan M Davis
Jul 16 2012
prev sibling parent deadalnix <deadalnix gmail.com> writes:
On 17/07/2012 06:37, RivenTheMage wrote:
 On Monday, 16 July 2012 at 22:14:03 UTC, Jonathan M Davis wrote:

 Going to v3 would mean incrementing the 2.
 We have _no_ intention of doing that for years
 to come.
Small steps are perfect for many projects, but - in my opinion - not for a programming language specification (and reference implementation). Big leaps are better. I'm aware of counterexamples (like PHP), but these are bad examples to follow.
Yeah sure. Nothing was released between 3.0 and 4.0 for instance. Not a single patch.
Jul 17 2012
prev sibling parent Joseph Rushton Wakeling <joseph.wakeling webdrake.net> writes:
On 16/07/12 23:33, Jonathan M Davis wrote:
 We're talking about doing something similar to having 2.x.y, with the main
 branch incrementing x and the "stable" branch incrementing y. Going to v3
 would mean incrementing the 2. We have _no_ intention of doing that for years
 to come.
Just to note that, in the sense of making major changes to the language design, you're obviously right -- but I don't see the problem with bumping the major version number just to indicate that some key milestone has been passed (e.g. implementing in their entirety the features described in TDPL). That might even be a _good_ way of indicating to the wider development world that you have a really well-defined stable release.
Jul 23 2012
prev sibling next sibling parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 12 July 2012 17:49, deadalnix <deadalnix gmail.com> wrote:
 One thing PHP has been good at is evolving, and introducing change in the
 language (some can argument that the language is so fucked up that this is
 unavoidable, so I do it now and we can discuss interesting topic).

 I discussed that system with Rasmus Ledorf at afup 2012 and it something
 that D should definitively look into.

 The const vs OOP discussion have shown once again that D will have to
 introduce breaking changes in the language. This isn't easy matter because
 if we break people code, D isn't attractive. But as long as code isn't
 broken, D people can't worked on what's next and it slows down D progress.

 The system adopted in PHP works with a 3 number version. The first number is
 used for major languages changes (for instance 4 > 5 imply passing object by
 reference when it was by copy before, 5 > 6 switched the whole thing to
 unicode).

 The second number imply language changes, but either non breaking or very
 specific, rarely used stuff. For instance 5.2 > 5.3 added GC, closures and
 namespace which does not break code.

 The last one is reserved for bug fixes. Several version are maintained at
 the same time (even if a large amount of code base is common, so bug fixes
 can be used for many version at the time).

 We should leverage the benefit of having switched to git to go in that way.
 We can start right now D2.1.xx with the opX dropped from object and see how
 it goes without requiring everybody to switch now.

 Such a system would also permit to drop all D1 stuff that are in current DMD
 because D1 vs D2 can be chosen at compile time on the same sources.

 git provide all we need to implement such a process, it is easy to do it
 soon (after 2.060 for instance) because it doesn't imply drastic changes for
 users.
Might as well just say "Lets start D3 now - Let's drop all features that have been deprecated since 0.103 - everyone make a hype and party!" -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 12 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 12/07/2012 19:31, Iain Buclaw wrote:
 On 12 July 2012 17:49, deadalnix<deadalnix gmail.com>  wrote:
 One thing PHP has been good at is evolving, and introducing change in the
 language (some can argument that the language is so fucked up that this is
 unavoidable, so I do it now and we can discuss interesting topic).

 I discussed that system with Rasmus Ledorf at afup 2012 and it something
 that D should definitively look into.

 The const vs OOP discussion have shown once again that D will have to
 introduce breaking changes in the language. This isn't easy matter because
 if we break people code, D isn't attractive. But as long as code isn't
 broken, D people can't worked on what's next and it slows down D progress.

 The system adopted in PHP works with a 3 number version. The first number is
 used for major languages changes (for instance 4>  5 imply passing object by
 reference when it was by copy before, 5>  6 switched the whole thing to
 unicode).

 The second number imply language changes, but either non breaking or very
 specific, rarely used stuff. For instance 5.2>  5.3 added GC, closures and
 namespace which does not break code.

 The last one is reserved for bug fixes. Several version are maintained at
 the same time (even if a large amount of code base is common, so bug fixes
 can be used for many version at the time).

 We should leverage the benefit of having switched to git to go in that way.
 We can start right now D2.1.xx with the opX dropped from object and see how
 it goes without requiring everybody to switch now.

 Such a system would also permit to drop all D1 stuff that are in current DMD
 because D1 vs D2 can be chosen at compile time on the same sources.

 git provide all we need to implement such a process, it is easy to do it
 soon (after 2.060 for instance) because it doesn't imply drastic changes for
 users.
Might as well just say "Lets start D3 now - Let's drop all features that have been deprecated since 0.103 - everyone make a hype and party!"
No, users will need backward-compatible support for any real-life work.
Jul 12 2012
parent "Paulo Pinto" <pjmlp progtools.org> writes:
"deadalnix"  wrote in message news:jtn1ol$juu$1 digitalmars.com...

On 12/07/2012 19:31, Iain Buclaw wrote:
 On 12 July 2012 17:49, deadalnix<deadalnix gmail.com>  wrote:
 One thing PHP has been good at is evolving, and introducing change in the
 language (some can argument that the language is so fucked up that this 
 is
 unavoidable, so I do it now and we can discuss interesting topic).

 I discussed that system with Rasmus Ledorf at afup 2012 and it something
 that D should definitively look into.

 The const vs OOP discussion have shown once again that D will have to
 introduce breaking changes in the language. This isn't easy matter 
 because
 if we break people code, D isn't attractive. But as long as code isn't
 broken, D people can't worked on what's next and it slows down D 
 progress.

 The system adopted in PHP works with a 3 number version. The first number 
 is
 used for major languages changes (for instance 4>  5 imply passing object 
 by
 reference when it was by copy before, 5>  6 switched the whole thing to
 unicode).

 The second number imply language changes, but either non breaking or very
 specific, rarely used stuff. For instance 5.2>  5.3 added GC, closures 
 and
 namespace which does not break code.

 The last one is reserved for bug fixes. Several version are maintained at
 the same time (even if a large amount of code base is common, so bug 
 fixes
 can be used for many version at the time).

 We should leverage the benefit of having switched to git to go in that 
 way.
 We can start right now D2.1.xx with the opX dropped from object and see 
 how
 it goes without requiring everybody to switch now.

 Such a system would also permit to drop all D1 stuff that are in current 
 DMD
 because D1 vs D2 can be chosen at compile time on the same sources.

 git provide all we need to implement such a process, it is easy to do it
 soon (after 2.060 for instance) because it doesn't imply drastic changes 
 for
 users.
Might as well just say "Lets start D3 now - Let's drop all features that have been deprecated since 0.103 - everyone make a hype and party!"
 No, user will need backward compatible support for any real life work.
This is why, besides some small D toy projects, I keep using C++(11) for any native coding at work when the opportunity surfaces.

While I keep complaining on the Go forums that I don't like that the language lacks enums and generics, the way Google opposes language featuritis and is keen on Go 1 stability makes it easier to sell to management.

--
Paulo
Jul 13 2012
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, July 12, 2012 18:49:16 deadalnix wrote:
 One thing PHP has been good at is evolving, and introducing change in
 the language (some can argument that the language is so fucked up that
 this is unavoidable, so I do it now and we can discuss interesting topic).
 
 I discussed that system with Rasmus Ledorf at afup 2012 and it something
 that D should definitively look into.
 
 The const vs OOP discussion have shown once again that D will have to
 introduce breaking changes in the language. This isn't easy matter
 because if we break people code, D isn't attractive. But as long as code
 isn't broken, D people can't worked on what's next and it slows down D
 progress.
 
 The system adopted in PHP works with a 3 number version. The first
 number is used for major languages changes (for instance 4 > 5 imply
 passing object by reference when it was by copy before, 5 > 6 switched
 the whole thing to unicode).
 
 The second number imply language changes, but either non breaking or
 very specific, rarely used stuff. For instance 5.2 > 5.3 added GC,
 closures and namespace which does not break code.
 
 The last one is reserved for bug fixes. Several version are maintained
 at the same time (even if a large amount of code base is common, so bug
 fixes can be used for many version at the time).
 
 We should leverage the benefit of having switched to git to go in that
 way. We can start right now D2.1.xx with the opX dropped from object and
 see how it goes without requiring everybody to switch now.
 
 Such a system would also permit to drop all D1 stuff that are in current
 DMD because D1 vs D2 can be chosen at compile time on the same sources.
 
 git provide all we need to implement such a process, it is easy to do it
 soon (after 2.060 for instance) because it doesn't imply drastic changes
 for users.
There would definitely be value in the long run in having a similar versioning scheme, but I think that we're still ironing enough out that there's not much point yet. We don't want people to continue to code against version 2.X.Y instead of moving their code to 2.X+1.Y. We want people to update their code to the newest version. We provide appropriate deprecation paths to ease transition, but we don't want to be supporting older versions of stuff. If you really want to stick with what dmd 2.059 provides because 2.060 deprecates something that you want, then just stick with 2.059. You don't need a new versioning scheme to do that.

Once the language is stable enough that we expect pretty much anything written now to work several years from now, _then_ providing a more advanced versioning scheme would probably be beneficial. But much as D is far more stable than it was a year or two ago, there's still enough in flux that I don't think that there's much point in switching versioning schemes like that.

- Jonathan M Davis
Jul 12 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 12/07/2012 21:25, Jonathan M Davis wrote:
 There would definitely be value in the long run in having a similar versioning
 scheme, but I think that we're still ironing enough out that there's not much
 point yet. We don't want people to continue to code against verison 2.X.Y
 instead of moving their code to 2.X+1.Y. We want people to update their code
 to the newest version. We provide appropriate deprecation paths to ease
 transition, but we don't want to be supporting older versions of stuff. If you
 really want to stick with what dmd 2.059 provides because 2.060 deprecates
 something that you want, then just stick with 2.059. You don't need a new
 versioning scheme to do that.
You may want to benefit from bug fixes even if you don't want to migrate to the new functionality yet. Sticking with 2.059 is somewhat problematic.

Plus, when a breaking change needs to be introduced, we currently only have the option of talking about it from a theoretical perspective. Being able to play with it without it being integrated into the latest « release » version is something that the language definition would benefit greatly from.

It is clear that for now, we would be unable to support versions for a very extended period of time (we aren't as big as PHP). I still think we can benefit from that.

According to you, what are the drawbacks of switching to that (as, if I understand you correctly, you think this will be useful in the future, but not now)?
Jul 12 2012
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Thu, 12 Jul 2012 14:57:31 -0700, deadalnix <deadalnix gmail.com> wrote:

 On 12/07/2012 21:25, Jonathan M Davis wrote:
 There would definitely be value in the long run in having a similar  
 versioning
 scheme, but I think that we're still ironing enough out that there's  
 not much
 point yet. We don't want people to continue to code against verison  
 2.X.Y
 instead of moving their code to 2.X+1.Y. We want people to update their  
 code
 to the newest version. We provide appropriate deprecation paths to ease
 transition, but we don't want to be supporting older versions of stuff.  
 If you
 really want to stick with what dmd 2.059 provides because 2.060  
 deprecates
 something that you want, then just stick with 2.059. You don't need a  
 new
 versioning scheme to do that.
You may want to benefit from bug fixes even if you don't want to migrate to the new functionality yet. Sticking with 2.059 is somehow problematic.
This. 1000% this. New functionality is fundamentally different, and placing bug fixes in the same development cycle is ridiculous to the point that no successful software endeavor I know of to date has ever considered it a viable strategy, much less promoted its use. I don't necessarily WANT to upgrade my DMD all the time to the latest, but I have no choice if I want the latest set of bugfixes.

It would also make the task of adding new features much simpler: you can pull the fix merges into both trees, and maintain a stable branch and a development branch. You'll note that the versioning system tends to work well with this model. For example: 2.0.60 is the current HEAD, bug fixes only. 2.1.60 is the new feature branch. It is a GitHub fork of the current DMD-HEAD owned by the same org as the current DMD-HEAD. This way Walter can work against both simultaneously.

We could have rolled the Object const change into 2.1.60, found out we didn't like it, but instead of being FORCED to revert it to keep 2.060 stable, we could have continued developing and improving the model or working on the problem from a completely different angle, WITHOUT affecting the release of 2.0.60. We could keep all the COFF work in the DMD 2.1 branch without affecting the DMD 2.0 branch or causing nearly as many breakages as we currently see in HEAD. Most recently, the ElfObj breakage. Roll that work into 2.1.60 and if it breaks, well, you KNEW you were on the development branch, what's your problem?

The stable/development branch model exists for a reason; it works well. We don't have to keep rediscovering the models that worked successfully for other teams the hard way. If we proactively seek best practices, we can proactively avoid a huge amount of pain.

--
Adam Wilson
IRC: LightBender

Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 12 2012
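A minimal sketch of the two-branch flow Adam describes above, using hypothetical branch and tag names (the real layout would be up to the dmd maintainers):

    # one-time setup: fork the stable line off the last release tag
    git checkout -b stable-2.0 v2.059        # assumes a v2.059 release tag exists

    # day to day: fixes land on master first, then get copied to the stable line
    git checkout stable-2.0
    git cherry-pick <sha-of-bugfix-commit>   # placeholder for an actual fix commit

    # feature work (COFF, new syntax, ...) stays on master, the "2.1" line,
    # and only becomes the next stable series once it has settled down
    git checkout master
    git merge --no-ff topic/coff-support     # hypothetical feature branch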
next sibling parent reply Patrick Stewart <ncc1701d starfed.com> writes:
Most ridiculous thing about D is that it breaks so much backward compatibility
that people just give up using it. Decent versioning like this might help
people stick to something.

Wake up, guys, it is 10+ years and it *still* hasn't reached some form of
stable release.

Like I said, engineering failure.
Jul 12 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2012 3:40 PM, Patrick Stewart wrote:
 Most ridiculous thing about D is that it breaks so much backward compatibility
that people just give up using it. Decent versioning like this might help
people stick to something.

 Wake up, guys, it is 10+ years and *still* it haven't reached some form of
stable release.

 Like I sad, engineering failure.
We did do a stable release, D1, and there were plenty of complaints that D1 did not get new features. Also, all the released versions of D are available for download. There is no need to constantly download the latest if that disrupts your projects.
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 14:58:14 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/12/2012 3:40 PM, Patrick Stewart wrote:
 Most ridiculous thing about D is that it breaks so much backward  
 compatibility that people just give up using it. Decent versioning like  
 this might help people stick to something.

 Wake up, guys, it is 10+ years and *still* it haven't reached some form  
 of stable release.

 Like I sad, engineering failure.
We did do a stable release, D1, and there were plenty of complaints that D1 did not get new features. Also, all the released versions of D are available for download. There is no need to constantly download the latest if that disrupts your projects.
And with the coming deprecation of D1, what then? Going backwards is almost never the answer with D2; the bugs are almost always still there.

--
Adam Wilson
IRC: LightBender

Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 3:00 PM, Adam Wilson wrote:
 Also, all the released versions of D are available for download. There is no
 need to constantly download the latest if that disrupts your projects.
And with the comming deprecation of D1, what then?
It'll still be there for download for those that want to use it.
 Going backwards is almost never the answer with D2, the bugs are almost always
 still there.
To me, 'stable' means unchanging, not 'has no bugs'.
Jul 15 2012
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 15:32:06 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/15/2012 3:00 PM, Adam Wilson wrote:
 Also, all the released versions of D are available for download. There  
 is no
 need to constantly download the latest if that disrupts your projects.
And with the comming deprecation of D1, what then?
It'll still be there for download for those that want to use it.
I guess my point is that, at that time, we'd only have one operative branch, per your implication. Great, it's still there, but it's unmaintained.
 Going backwards is almost never the answer with D2, the bugs are almost  
 always
 still there.
To me, 'stable' means unchanging, not 'has no bugs'.
So the problem is semantics then? Because I can dredge up another word to describe what we are asking for, if that's all it takes. But I don't think that anyone else is going to read "stable" as "unchanging". Software is by definition changing, or it's dead. It appears from my parsing of your sentence that you are asserting that stable == static. By that definition of stable, Windows ME is "stable" and ... ehrm, not a soul in the tech world would agree with that summation of WinME.

As I said earlier, no one else in FOSS or commercial software equates stable with "has no bugs"; it means no new features and no regressions. Not a single solitary person I've talked to expects their software to be bug free.

THIS is what we mean when we say "stable":
http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html
It's also how pretty much everyone else will read "stable".

--
Adam Wilson
IRC: LightBender

Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 3:52 PM, Adam Wilson wrote:
 So the problem is semantics then? Because I dredge up another word to describe
 what we are asking for if that's all it takes. But I don't think that anyone
 else is going to read "stable" as "unchanging". Software is by definition
 changing, or it's dead. It appears to my parsing of your sentence that you are
 asserting that stable == static. By that definition of stable, Windows ME is
 "stable" and ... ehrm, not a soul in the tech world would agree with that
 summation of WinME.

 As I said earlier, no one else in FOSS or Commercial equates stable with "has
no
 bugs", it means no new features and no regressions. Not a single solitary
person
 I've talked too expects their software to be bug free.

 THIS is what we mean when we say "stable":
 http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html
 It's also how pretty much everyone else will read "stable".
D does have a test suite, and it is an (almost always achieved) goal to keep it always passing, even on the dev branch. In fact, most of my work is running the test suite and making sure each change doesn't regress. (Regressions that do slip through were not in the test suite.)

Frankly, I don't know how to do what you're asking for. D users, every single day, clamor for:

1. more bug fixes
2. more new features
3. why aren't deprecated features removed more quickly?
4. why don't we add this breaking feature?
5. why did you add that breaking feature which broke my code?

Often, these are the same people! Sometimes, even in the same post!

And, to reiterate, we did release D1. Since its release, it has only received bug fixes. No breaking changes, no regressions. This, inevitably, has made many D1 users unhappy - they wanted new features folded in.

So that was not satisfactory, either.

Yes, I do feel a bit put upon by this, as I see no way to satisfy all these mutually contradictory requests.
Jul 15 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 16:06:58 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/15/2012 3:52 PM, Adam Wilson wrote:
 So the problem is semantics then? Because I dredge up another word to  
 describe
 what we are asking for if that's all it takes. But I don't think that  
 anyone
 else is going to read "stable" as "unchanging". Software is by  
 definition
 changing, or it's dead. It appears to my parsing of your sentence that  
 you are
 asserting that stable == static. By that definition of stable, Windows  
 ME is
 "stable" and ... ehrm, not a soul in the tech world would agree with  
 that
 summation of WinME.

 As I said earlier, no one else in FOSS or Commercial equates stable  
 with "has no
 bugs", it means no new features and no regressions. Not a single  
 solitary person
 I've talked too expects their software to be bug free.

 THIS is what we mean when we say "stable":
 http://www.modernperlbooks.com/mt/2009/06/what-does-stable-mean.html
 It's also how pretty much everyone else will read "stable".
D does have a test suite, and it is a (almost always achieved) goal to keep it always passing, even on the dev branch. In fact, most of my work is running the test suite and making sure each change doesn't regress. (Regressions that do slip through were not in the test suite.) Frankly, I don't know how to do what you're asking for. D users, every single day, clamor for: 1. more bug fixes
Branch A. Rebase into Branch B as needed.
 2. more new features
Branch B.
 3. why aren't deprecated features removed more quickly?
Branch A, marked for deprecation. Branch B, actually removed. Becomes active when merged into Branch A. (Assuming Branch B is merged roughly every other month as per current processes.)
 4. why don't we add this breaking feature?
Add it. Branch B.
 5. why did you add that breaking feature which broke my code?
Why are you using Branch B you knucklehead?
 Often, these are the same people! Sometimes, even in the same post!
This concept is precisely designed to significantly mitigate all five problems. Not everyone will test against Repo B, but this allows you to put the responsibility for not testing against it on them. They know how it works here; it's not your problem if something broke for them that they had the chance to test for but didn't.
 And, to reiterate, we did release D1. Since its release, it has only  
 received bug fixes. No breaking changes, no regressions. This,  
 inevitably, has made many D1 users unhappy - they wanted new features  
 folded in.

 So that was not satisfactory, either.

 Yes, I do feel a bit put upon by this, as I see no way to satisfy all  
 these mutually contradictory requests.
I do apologize for that. It is not my intention to cause undue stress. I am pushing for the change because I think it will mitigate much of your current stress in dealing with us. And I do recognize that we users can be pretty demanding, as I sit on the other side of this equation at work.

But because I sit on the other side, I get frustrated when I see developers actively resisting the proven concepts that will drastically reduce the very problem they are complaining about. I should note that we use this exact model for every project we have where I work and that it has been highly successful at keeping those five points of tension moderated. And our users can actually get work done without waiting for weeks and months because thing X is just plain broken, which in turn makes us look good. (Improving loyalty.)

--
Adam Wilson
IRC: LightBender

Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 15 2012
parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is:

1. Highly motivated
2. With a good understanding of D
3. Expert with git
4. Reliable

I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those.

If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible?

Andrei
Jul 15 2012
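As a rough sketch of what such a volunteer's routine could look like; the remote and branch names are placeholders, assuming a remote called upstream that points at the main dmd repository:

    git fetch upstream

    # bugfixes: hand-picked fix-only commits from upstream's HEAD
    git checkout bugfixes
    git cherry-pick 0123abc            # placeholder hash of a fix-only commit
    # if the pick conflicts, resolve by hand, then:
    git cherry-pick --continue

    # unstable: simply tracks everything that lands upstream
    git checkout unstable
    git merge upstream/master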
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 21:11:12 Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible?
I'm sure it's possible. The problem is that it requires someone who understands the compiler well enough to do it and is willing to take the time to do it. Such a person could be very hard to find. It would probably require that one of our primary contributors to the compiler spend some of the time that they've been putting into that on maintaining the new bugfix-only branch instead. It may very well be worth it to do so, but we'll need a volunteer.

- Jonathan M Davis
Jul 15 2012
prev sibling next sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 16/07/2012 03:11, Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible? Andrei
What would be the difference between dmd head and unstable? Isn't it simpler to merge into unstable only, or into both unstable and bugfix, at first?
Jul 15 2012
parent Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/15/12 9:21 PM, deadalnix wrote:
 What would be the difference betwwen dmd head and unstable ?

 Isn't it more simple to merge in unstable only or both unstable and
 bugfix at first ?
I think you're right, we only need the "stable/bugfix" branch. Andrei
Jul 15 2012
prev sibling next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 18:11:12 -0700, Andrei Alexandrescu  
<SeeWebsiteForEmail erdani.org> wrote:

 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible? Andrei
I like this, A LOT! This is a nice twist on the proposed model and I think it improves on the process. It certainly means that no release is predicated on the state of HEAD, which is a fantastic achievement. And this process certainly wasn't possible prior to git. It also achieves the goal of separate branches for unstable work and stable bugfixes. I may just co-opt this for my projects at work! However, this is all predicated on finding such a person, of which few exist. But I would argue that it should NOT fall on someone in the core team (Walter, Kenji, Braddr, Don, etc.); they should be working on the compiler HEAD. There must be someone out there with decent knowledge of the internals who doesn't consider themselves core team, but the biggest issue, I think, is going to be the time required, which is why I think it shouldn't be a core team member. Actually, here is another idea. How about we train someone who maybe has some experience with the compiler but might not know what to do in all situations? If they had direct access to Walter and the Core Team, they could be quickly brought up to speed on the job, so to speak. And it would widen our potential volunteer pool. Plus it would increase the number of team members who are deeply involved with the compiler. Increasing our bus factor is always a very good thing. If the core team was willing to accept an apprentice, then I would be willing to learn. As far as git goes, the only thing I don't have much experience with is reversions; the rest I can handle, as we use git internally at work, so it's very familiar to me. But I'd want to see if anyone else more qualified than I am is willing to volunteer first! -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 18:28:36 -0700, Adam Wilson <flyboynw gmail.com> wrote:

 On Sun, 15 Jul 2012 18:11:12 -0700, Andrei Alexandrescu  
 <SeeWebsiteForEmail erdani.org> wrote:

 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those  
 five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible? Andrei
I like this, A LOT! This is a nice twist on the proposed model and I think it improves on the process. It certainly means that no release is predicated on the state of HEAD, which is a fantastic achievement. And this process certainly wasn't possible prior to git. It also achieves to goal of separate branches for unstable work and stable bugfixes. I may just co-opt this for my projects at work! However, this is all predicated on finding such a person, of which few exist. But I would argue that it should NOT fall on to someone in the core team (Walter, Kenji, Braddr, Don, etc.), they should be working on the compiler HEAD. There must be someone out there with decent knowledge of the internals and yet doesn't consider themselves core team, but the biggest them I think is going to be the time, which is why I think it shouldn't be a core team member. Actually, here is another idea. How about we train someone who maybe has some experience with the compiler but might not know what to do in all situations. If they had direct access to Walter and the Core Team, they could be quickly brought up to speed on the job so to speak. And it would widen our potential volunteer poll. Plus it would widen the number of team members who are deeply involved with the compiler. Reducing our bus-factor is always a very good thing. If the core team was willing to accept an apprentice, then I would be willing to learn. As far as git goes, the only thing I don't have much experience with is reversions, the rest I can handle, we use git internally at work so it's very familiar to me. But I'd want to see if anyone else more qualified than I was willing to volunteer first!
As an addition to my training proposal, I submit that we make it two people, to account for vacations and other times when one person may not be available. Although I imagine that anything beyond two people might be a little more than the core team can handle, and I can't see it being that much work that we'd need three people. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
prev sibling next sibling parent reply Alex Rønne Petersen <alex lycus.org> writes:
On 16-07-2012 03:11, Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible? Andrei
I propose a slight variation: * master: This is the 'incoming' branch. Unstable, in-dev, etc. It's easier this way since pull requests will usually target this branch and build bots will test this. * stable: This branch contains only bug fixes to existing language features, and enhancements that do not in any way impact existing features (or break code). Should be manually maintained based on master. That's it. I don't see a need for any added complexity to this simple model. Feel free to destroy as you see fit, though! -- Alex Rønne Petersen alex lycus.org http://lycus.org
Jul 15 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, July 16, 2012 03:38:37 Alex Rønne Petersen wrote:
 I propose a slight variation:
 
 * master: This is the 'incoming' branch. Unstable, in-dev, etc. It's
 easier this way since pull requests will usually target this branch and
 build bots will test this.
 * stable: This branch contains only bug fixes to existing language
 features, and enhancements that do not in any way impact existing
 features (or break code). Should be manually maintained based on master.
 
 That's it. I don't see a need for any added complexity to this simple
 model. Feel free to destroy as you see fit, though!
I agree that that's probably better, since I don't see much point in separating unstable from master either, but the main problem is still having someone who is able and willing to do the merging into the bug fix branch. - Jonathan M Davis
Jul 15 2012
prev sibling next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Sun, 15 Jul 2012 18:38:37 -0700, Alex Rønne Petersen <alex lycus.org>  
wrote:

 On 16-07-2012 03:11, Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team.

 Would this be possible?


 Andrei
 I propose a slight variation: * master: This is the 'incoming' branch. Unstable, in-dev, etc. It's easier this way since pull requests will usually target this branch and build bots will test this.
 * stable: This branch contains only bug fixes to existing language features, and enhancements that do not in any way impact existing features (or break code). Should be manually maintained based on master.
 That's it. I don't see a need for any added complexity to this simple model. Feel free to destroy as you see fit, though!
I think this would work very well. When it comes time to roll out new features, you could just merge master into stable and you've got a brand new stable release, tag it, build an installer, and you're done. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 15 2012
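A rough sketch of that release step, using the master/stable names from Alex's proposal (the tag name is only an example):

    # fold the accumulated development work into the stable branch
    git checkout stable
    git merge master

    # mark the result as a release; installers are built from the tag
    git tag v2.061

Between such merges, stable would only receive the individually cherry-picked bug fixes discussed earlier in the thread.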
prev sibling parent "Masahiro Nakagawa" <repeatedly gmail.com> writes:
On Monday, 16 July 2012 at 01:38:38 UTC, Alex Rønne Petersen 
wrote:
 On 16-07-2012 03:11, Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project 
 we have
 where I work and that it is been highly successful at keeping 
 those five
 points of tension moderated. And our users can actually get 
 work done
 without waiting for weeks and months because thing X is just 
 plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team. Would this be possible? Andrei
I propose a slight variation: * master: This is the 'incoming' branch. Unstable, in-dev, etc. It's easier this way since pull requests will usually target this branch and build bots will test this. * stable: This branch contains only bug fixes to existing language features, and enhancements that do not in any way impact existing features (or break code). Should be manually maintained based on master. That's it. I don't see a need for any added complexity to this simple model. Feel free to destroy as you see fit, though!
git-flow is the other candidate. https://github.com/nvie/gitflow/ For more detail, see: http://nvie.com/posts/a-successful-git-branching-model/
Jul 16 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 03:11, Andrei Alexandrescu wrote:

 I wonder if it's possible that that person cherry-picks commits from
 HEAD into two separate branches: bugfixes and unstable. It should be
 easy to create installers etc. for those.
What would the difference between unstable and HEAD be? -- /Jacob Carlborg
Jul 16 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-16 03:11, Andrei Alexandrescu wrote:
 On 7/15/12 7:44 PM, Adam Wilson wrote:
 I should note that we use this exact model for every project we have
 where I work and that it is been highly successful at keeping those five
 points of tension moderated. And our users can actually get work done
 without waiting for weeks and months because thing X is just plain
 broken, which in turn makes us look good. (Improving Loyalty)
Allow me to propose something. Right now all dmd changes get merged in the head. Suppose we find a volunteer in the community who is: 1. Highly motivated 2. With a good understanding of D 3. Expert with git 4. Reliable I wonder if it's possible that that person cherry-picks commits from HEAD into two separate branches: bugfixes and unstable. It should be easy to create installers etc. for those. If we see this works well and gathers steady interest, we can improve it and make it the practice of the entire team.
Another idea to start with would be to create new temporary branches for bigger changes, e.g. COFF/Win64. -- /Jacob Carlborg
Jul 16 2012
prev sibling next sibling parent deadalnix <deadalnix gmail.com> writes:
On 16/07/2012 01:06, Walter Bright wrote:
 Frankly, I don't know how to do what you're asking for. D users, every
 single day, clamor for:

 1. more bug fixes
 2. more new features
 3. why aren't deprecated features removed more quickly?
 4. why don't we add this breaking feature?
 5. why did you add that breaking feature which broke my code?

 Often, these are the same people! Sometimes, even in the same post!
I understand that this may seem messed up. It isn't as much as it seems. It simply shows the need for a more elaborate versioning and releasing system for D. These needs all exist, but not all at the same time or all in the same situation. When dealing with code, as in all engineering fields, you always make tradeoffs, and changing things in a codebase has a cost. Presumably, every feature that is included in D also has a benefit. The larger your codebase, the more attractive it is to slow down the inclusion of new features in your work. The smaller it is, the quicker you want to adopt them, because the cost of doing so isn't the same and you can benefit from the new features at very little cost. Different situations, different needs. The same person can have both needs at the same time, because they have experienced both situations with different codebases.
 And, to reiterate, we did release D1. Since its release, it has only
 received bug fixes. No breaking changes, no regressions. This,
 inevitably, has made many D1 users unhappy - they wanted new features
 folded in.

 So that was not satisfactory, either.

 Yes, I do feel a bit put upon by this, as I see no way to satisfy all
 these mutually contradictory requests.
As for D1, the problem is different. I'll use the example of PHP again, because it has proven to manage the issue quite well, and because I discussed it quite a lot with Rasmus recently, so I know the topic quite well. PHP released PHP 5.2. Then it released PHP 6. PHP 6 introduced breaking changes, just like D2 does. We can compare D1 to PHP 5.2 and D2 to PHP 6. It happened that some new features of PHP 6 weren't breaking (GC, closures, namespaces) and some others were (introducing Unicode into source code). And here is what was done then, and what we should learn from PHP. After PHP 6, PHP released PHP 5.3. PHP 5.3 was basically PHP 5.2 with all the new features of PHP 6, except the ones that were breaking. PHP 5.2 continued to live for very conservative users, 5.3 for users who want to use new features, and 6 for users who feel like beta testers. (Note that 6 was later canceled, but for reasons completely unrelated to what we are talking about here. I could talk about that, but it is really off topic, so let's not dwell on it.)
Jul 15 2012
prev sibling parent reply Andrei Alexandrescu <SeeWebsiteForEmail erdani.org> writes:
On 7/15/12 7:06 PM, Walter Bright wrote:
 Frankly, I don't know how to do what you're asking for. D users, every
 single day, clamor for:

 1. more bug fixes
 2. more new features
 3. why aren't deprecated features removed more quickly?
 4. why don't we add this breaking feature?
 5. why did you add that breaking feature which broke my code?

 Often, these are the same people! Sometimes, even in the same post!

 And, to reiterate, we did release D1. Since its release, it has only
 received bug fixes. No breaking changes, no regressions. This,
 inevitably, has made many D1 users unhappy - they wanted new features
 folded in.

 So that was not satisfactory, either.

 Yes, I do feel a bit put upon by this, as I see no way to satisfy all
 these mutually contradictory requests.
I think you're conflating two different trends. One is the annoying one you mentioned, and the other is a very reasonable request - that D has one branch containing only bug fixes, and another branch with new features and other potentially disruptive things. The key is that the branches are merged once a more risky branch is stable enough, and the essential ingredient is that git makes branch merging easy. This is not something you could have done essentially at any pre-github time in D's history, and is not to be confused with D1 vs D2 or with the known contradictory requests. Andrei
Jul 15 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 20:43:52 Andrei Alexandrescu wrote:
 The key is that the branches are merged once a more risky branch is
 stable enough, and the essential ingredient is that git makes branch
 merging easy.
Yes. This is a huge advantage to using git. It's actually reasonably sane to maintain multiple branches. - Jonathan M Davis
Jul 15 2012
prev sibling next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 14:58:14 Walter Bright wrote:
 On 7/12/2012 3:40 PM, Patrick Stewart wrote:
 Most ridiculous thing about D is that it breaks so much backward
 compatibility that people just give up using it. Decent versioning like
 this might help people stick to something.
 
 Wake up, guys, it is 10+ years and *still* it haven't reached some form of
 stable release.
 
 Like I sad, engineering failure.
We did do a stable release, D1, and there were plenty of complaints that D1 did not get new features.
Well, if we were to move to a model where we had 2.x.y, we only put new features in changes to x and bug fixes in changes to y, and we did x releases every few months and y releases monthly (or something along those lines), then people would theoretically get a more stable release to work off of, with new features still coming semi-frequently. You get a better balance between stability and new stuff than we've had. And I think that in the long run, that's probably what we should do. Instead, what we've had is either stability without any new features at all, or new features but a lack of stability. The problem is that we're still ironing out too much, and most of the breakage relates to bug fixes, not new features. I think that we need to reach the point where D is more or less where TDPL says it should be before we go to a model where we're splitting out the new stuff from the bug fixes. In theory, the only new stuff that we're doing right now relates to matching TDPL and ironing out issues with existing stuff rather than outright adding new features anyway (though some outright new stuff _has_ been added - e.g. the new lambda syntax). So, in that sense, pretty much everything is supposed to be bug fixing right now. - Jonathan M Davis
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 3:27 PM, Jonathan M Davis wrote:
 The problem is that we're still ironing out too much, and most of the breakage
 relates to bug fixes, not new features.
There's been a lot of non-bug-fixing breakage, for example, renaming library functions.
Jul 15 2012
parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 15:30:57 Walter Bright wrote:
 On 7/15/2012 3:27 PM, Jonathan M Davis wrote:
 The problem is that we're still ironing out too much, and most of the
 breakage relates to bug fixes, not new features.
There's been a lot of non-bug-fixing breakage, for example, renaming library functions.
Yeah, but those are always done through a deprecation path, so there's no immediate breakage. And we've done most of that already, so that should be happening less and less. However, if we did move to a model where we had major and minor releases, then deprecating and removing functions would presumably be restricted to major releases. - Jonathan M Davis
Jul 15 2012
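As an illustration of the deprecation path being described, a renamed function typically keeps the old name around for a while so existing code continues to compile; a minimal D sketch (both names are hypothetical):

    // the functionality lives under the new name from now on
    void newName()
    {
        // ... current implementation ...
    }

    // the old name still exists, but the compiler flags any use of it
    // (an error unless compiled with -d on the compilers of this era)
    deprecated void oldName()
    {
        newName();
    }

Only after the symbol has been deprecated for a release cycle or two is it actually removed.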
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 3:43 PM, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 15:30:57 Walter Bright wrote:
 On 7/15/2012 3:27 PM, Jonathan M Davis wrote:
 The problem is that we're still ironing out too much, and most of the
 breakage relates to bug fixes, not new features.
There's been a lot of non-bug-fixing breakage, for example, renaming library functions.
Yeah, but those are always done through a deprecation path, so there's no immediate breakage. And we've done most of that already, so that should be happening less and less.
It needs to stop completely.
Jul 15 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 16:07:40 Walter Bright wrote:
 On 7/15/2012 3:43 PM, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 15:30:57 Walter Bright wrote:
 On 7/15/2012 3:27 PM, Jonathan M Davis wrote:
 The problem is that we're still ironing out too much, and most of the
 breakage relates to bug fixes, not new features.
There's been a lot of non-bug-fixing breakage, for example, renaming library functions.
Yeah, but those are always done through a deprecation path, so there's no immediate breakage. And we've done most of that already, so that should be happening less and less.
It needs to stop completely.
Most of the renaming of functions which has gone on has been because Phobos has been inconsistent with its naming, which makes it harder to use and learn. As that's sorted out (as has mostly been done), those changes will stop. But do you honestly expect that everything in the standard library is going to be frozen at some point? Is that what you're suggesting? If we figure out that function X really should be replaced, we should be able to replace it. If we come up with a way better design for a module, we should be able to replace it. That may mean leaving the old version around for a long period of time, but we shouldn't be stuck with bad design decisions permanently. That's actually one area where having major and minor releases (in addition to D1 vs D2 vs D3 etc) can really help, because then you restrict the larger changes to the major releases. Major libraries do this all the time (e.g. Qt). We definitely want to be more stable than we have been, and we want to reach the point where there are no longer any minor changes for naming and whatnot (and we're getting there), but if you want to freeze the API on Phobos permanently, making it so that we can only ever have additive changes, then there are going to be a number of people who are going to be very unhappy. In the long run, breaking changes should be better managed (e.g. restricted to only certain releases) and much rarer, but they still need to be able to happen. - Jonathan M Davis
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 4:34 PM, Jonathan M Davis wrote:
 It needs to stop completely.
Most of the renaming of functions which has gone on has been because Phobos has been inconsistent with its naming, which makes it harder to use and learn. As that's sorted out (as has mostly been done), those changes will stop. But do you honestly expect that everything in the standard library is going to be frozen at some point? Is that what you're suggesting?
I've had a lot of my own working D code break because of name changes in Phobos. This is extremely annoying. I can fully understand that it drives people away. It's got to stop. We could bikeshed forever about what exact spelling and casing a name should have. That's fine for new names. Old names should stay. Breaking things should have a very high bar. Merely a name change is not good enough.
Jul 15 2012
next sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Sunday, July 15, 2012 23:05:39 Walter Bright wrote:
 On 7/15/2012 4:34 PM, Jonathan M Davis wrote:
 It needs to stop completely.
Most of the renaming of functions which has gone on has been because Phobos has been inconsistent with its naming, which makes it harder to use and learn. As that's sorted out (as has mostly been done), those changes will stop. But do you honestly expect that everything in the standard library is going to be frozen at some point? Is that what you're suggesting?
I've had a lot of my own working D code break because of name changes in Phobos. This is extremely annoying. I can fully understand that it drives people away. It's got to stop. We could bikeshed forever about what exact spelling and casing a name should have. That's fine for new names. Old names should stay. Breaking things should have a very high bar. Merely a name change is not good enough.
Which is precisely why I was trying to get all of the name changes out of the way early and quickly, so that we'd get the names fixed and then not have any more of that kind of breakage. At this point, in almost all cases (maybe even in all cases), when a name gets changed, it's because the function is being replaced by a better function. And even that kind of change is happening less and should eventually become quite rare. While there are a number of symbols currently going through the deprecation process in Phobos, not much is being scheduled for deprecation anymore, and most of the stuff on the deprecation path has already been deprecated and is approaching the point when it will be removed. I understand your annoyance with the name changes, but when it was discussed, almost everyone in the newsgroup thought that it was worth it to make those changes in order to make the library more consistent. Having done that, we are now far more stringent about changing names. - Jonathan M Davis
Jul 15 2012
prev sibling parent reply "Jesse Phillips" <jessekphillips+D gmail.com> writes:
On Monday, 16 July 2012 at 06:05:43 UTC, Walter Bright wrote:

 I've had a lot of my own working D code break because of name 
 changes in Phobos. This is extremely annoying. I can fully 
 understand that it drives people away. It's got to stop.
Name changes have been the least annoying breaking change I've come across from using D, language design changes being the biggest. Luckily I've expected that, and recently being hit by it has been very infrequent. There are still changes that will make big ripples (actually maybe not so much if we are changing how we handle toHash...). Maybe this unstable branch thing will allow us to make a bunch of breaking changes together when we have one of these required disruptions.
Jul 16 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 14:28:45 UTC, Jesse Phillips wrote:
 On Monday, 16 July 2012 at 06:05:43 UTC, Walter Bright wrote:

 I've had a lot of my own working D code break because of name 
 changes in Phobos. This is extremely annoying. I can fully 
 understand that it drives people away. It's got to stop.
Name changes have been the least annoying breaking change I've come across from using D. Language design being the biggest. Luckily I've expected that, and recently hit has been very infrequent. There are still changes that will be making big ripples (actually maybe not so much if we are changing how we handle toHash...). Maybe this unstable branch thing will allow us to make a bunch of a breaking changes together when we have one of these required disruptions.
I don't think the unstable branch should give you the false idea that you are free to make breaking changes, because in the end they will be incorporated in the stable branch, breaking code, or the users will be stuck. So although there may be *some* breaking changes, they should stay relatively limited. Each breaking change should increment the x number in the 2.x.y scheme. So if the x moves too quickly, the stable and unstable branches will quickly diverge and become irreconcilable.
Jul 16 2012
parent reply deadalnix <deadalnix gmail.com> writes:
On 16/07/2012 23:26, SomeDude wrote:
 On Monday, 16 July 2012 at 14:28:45 UTC, Jesse Phillips wrote:
 On Monday, 16 July 2012 at 06:05:43 UTC, Walter Bright wrote:

 I've had a lot of my own working D code break because of name changes
 in Phobos. This is extremely annoying. I can fully understand that it
 drives people away. It's got to stop.
Name changes have been the least annoying breaking change I've come across from using D. Language design being the biggest. Luckily I've expected that, and recently hit has been very infrequent. There are still changes that will be making big ripples (actually maybe not so much if we are changing how we handle toHash...). Maybe this unstable branch thing will allow us to make a bunch of a breaking changes together when we have one of these required disruptions.
I don't think the unstable branch should give you the false idea that you are free to make breaking changes, because in the end, they will be incorporated in the stable branch, breaking code, or the users will be stuck. So although there may be *some* breaking changes, they should stay relatively limited. Each breaking change should increment the x number in the 2.x.y scheme. So if the x moves too quickly, the stabe and unstable branches will quickly diverge to become unreconciliable.
This is exactly the reason why the three-number versioning system exists.
Jul 16 2012
parent reply "Chris NS" <ibisbasenji gmail.com> writes:
+1 for a "2.breaking.bugfix" scheme.  I've used this scheme on 
anything serious for years, and know many others who have; so it 
is not only popular but also quite tried and proven.  Not for 
every project, of course (although I don't understand why the 
Linux kernel team dropped it with 3.x), but for the majority it 
seems to work wonders.

The rest of the thread seems like so much unnecessary politics.
Jul 16 2012
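To make the ordering in a 2.breaking.bugfix scheme concrete, here is a toy D sketch; nothing like this exists in the toolchain, it only illustrates how the three numbers would compare:

    struct Ver
    {
        int major;     // 2 for the current language
        int breaking;  // bumped when upgrading may break existing code
        int bugfix;    // bumped for fix-only releases

        int opCmp(const Ver rhs) const
        {
            if (major != rhs.major)       return major - rhs.major;
            if (breaking != rhs.breaking) return breaking - rhs.breaking;
            return bugfix - rhs.bugfix;
        }
    }

    // Ver(2, 1, 60) > Ver(2, 0, 60): same major, but a potentially breaking jump
    // Ver(2, 0, 61) > Ver(2, 0, 60): fixes only, safe to upgrade

Under the scheme being discussed, only a change to the middle number signals that user code may need attention.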
parent reply Wouter Verhelst <wouter grep.be> writes:
"Chris NS" <ibisbasenji gmail.com> writes:

 +1 for a "2.breaking.bugfix" scheme.  I've used this scheme on
 anything serious for years, and know many others who have; so it is
 not only popular but also quite tried and proven.  Not for every
 project, of course (although I don't understand why the Linux kernel
 team dropped it with 3.x), but for the majority it seems to work
 wonders.
They haven't; on the contrary. 3.x is a release with new features; 3.x.y is a bugfix release. Before the move to 3.x, this was 2.6.x and 2.6.x.y -- which was confusing, because many people thought there was going to be a 2.8 at some point when there wasn't. -- The volume of a pizza of thickness a and radius z can be described by the following formula: pi zz a
Jul 17 2012
parent reply Iain Buclaw <ibuclaw ubuntu.com> writes:
On 17 July 2012 12:05, Wouter Verhelst <wouter grep.be> wrote:
 "Chris NS" <ibisbasenji gmail.com> writes:

 +1 for a "2.breaking.bugfix" scheme.  I've used this scheme on
 anything serious for years, and know many others who have; so it is
 not only popular but also quite tried and proven.  Not for every
 project, of course (although I don't understand why the Linux kernel
 team dropped it with 3.x), but for the majority it seems to work
 wonders.
They haven't, on the contrary. 3.x is a release with new features 3.x.y is a bugfix release. Before the move to 3.x, this was 2.6.x and 2.6.x.y -- which was confusing, because many people thought there was going to be a 2.8 at some point when there wasn't.
The reason for the move to 3.x is in the announcement. http://lkml.org/lkml/2011/7/21/455 But yes, it simplifies the stable vs development kernel versioning. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Jul 17 2012
parent "Chris NS" <ibisbasenji gmail.com> writes:
On Tuesday, 17 July 2012 at 18:12:28 UTC, Iain Buclaw wrote:
 On 17 July 2012 12:05, Wouter Verhelst <wouter grep.be> wrote:
 "Chris NS" <ibisbasenji gmail.com> writes:

 +1 for a "2.breaking.bugfix" scheme.  I've used this scheme on
 anything serious for years, and know many others who have; so 
 it is
 not only popular but also quite tried and proven.  Not for 
 every
 project, of course (although I don't understand why the Linux 
 kernel
 team dropped it with 3.x), but for the majority it seems to 
 work
 wonders.
They haven't, on the contrary. 3.x is a release with new features 3.x.y is a bugfix release. Before the move to 3.x, this was 2.6.x and 2.6.x.y -- which was confusing, because many people thought there was going to be a 2.8 at some point when there wasn't.
The reason for the move to 3.x is in the announcement. http://lkml.org/lkml/2011/7/21/455 But yes, it simplifies the stable vs development kernel versioning.
I don't recall where I first got my information, but clearly I was mistaken. And I'm happy to have been so. Maybe if I actually kept up more with the info on kernel releases I'd have known... alas. -- Chris NS
Jul 18 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 16/07/2012 01:07, Walter Bright wrote:
 On 7/15/2012 3:43 PM, Jonathan M Davis wrote:
 On Sunday, July 15, 2012 15:30:57 Walter Bright wrote:
 On 7/15/2012 3:27 PM, Jonathan M Davis wrote:
 The problem is that we're still ironing out too much, and most of the
 breakage relates to bug fixes, not new features.
There's been a lot of non-bug-fixing breakage, for example, renaming library functions.
Yeah, but those are always done through a deprecation path, so there's no immediate breakage. And we've done most of that already, so that should be happening less and less.
It needs to stop completely.
No. Those changes weren't made for no reason. But yes, some code gets broken in the process. This is exactly why we need a more sophisticated versioning process (note the recurring pattern in my posts :D ). The fact that some people have legacy code shouldn't stop D's progress. But with the current system, D must either break code or make no progress.
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 4:53 PM, deadalnix wrote:
 On 16/07/2012 01:07, Walter Bright wrote:
 It needs to stop completely.
No. It hasn't been made for no reasons. But yes, some code is broken in the process. This is exactly why we need a more sophisticated versionning process (note the recurring pattern in my posts :D ). The fact that some people have legacy code shouldn't stop D progress. But with the current system, D must either break code or make no progress.
Changing names is minute progress, and is too costly in terms of annoying existing users and breaking their code.
Jul 15 2012
parent reply "RivenTheMage" <riven-mage id.ru> writes:
On Monday, 16 July 2012 at 06:07:21 UTC, Walter Bright wrote:

 Changing names is minute progress, and is too costly in terms 
 of annoying existing users and breaking their code.
The cost can be lowered by introducing a (semi-)automatic refactoring/upgrade mode. dmd -upgrade zzz.d The compiler can do renames (clear() -> destroy()), insert workarounds (if needed), and so on. Easy, fast, no risk of human error. Of course, in certain situations no automatic upgrade is possible...
Jul 15 2012
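Short of such a tool, user code can already paper over a rename like clear() -> destroy() by keying on the compiler version; a minimal sketch, where the 2060 cut-off is illustrative rather than the exact release that performed the rename:

    // one wrapper that compiles against both the old and the new runtime name
    void resetObject(T)(ref T obj)
    {
        static if (__VERSION__ >= 2060)
            destroy(obj);  // newer name in druntime's object module
        else
            clear(obj);    // older name
    }

That only helps with renames, of course; the semantic rewrites RivenTheMage mentions would still need compiler support.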
next sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 11:48 PM, RivenTheMage wrote:
 On Monday, 16 July 2012 at 06:07:21 UTC, Walter Bright wrote:

 Changing names is minute progress, and is too costly in terms of annoying
 existing users and breaking their code.
Cost can be lowered - by introducing (semi-)automatic refactoring/upgrade mode. dmd -upgrade zzz.d Compiler can do renames (clear() -> destroy()), insert workarounds (if needed), and so on. Easy, fast, no risk of human error. Of course, in certain cituations no automatic upgrade is possible...
It is a good idea, but I'd be nervous myself about allowing the compiler to edit my code :-)
Jul 15 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-16 08:51, Walter Bright wrote:

 It is a good idea, but I'd be nervous myself about allowing the compiler
 to edit my code :-)
Don't you trust your own compiler? :) The compiler could have a --dry-run option to show what would be changed. It could also show a diff after all processing. -- /Jacob Carlborg
Jul 16 2012
next sibling parent Kevin Cox <kevincox.ca gmail.com> writes:
On Jul 16, 2012 4:15 AM, "Jacob Carlborg" <doob me.com> wrote:
 On 2012-07-16 08:51, Walter Bright wrote:

 It is a good idea, but I'd be nervous myself about allowing the compiler
 to edit my code :-)
Don't you trust your own compiler :) The compiler could have --dry-run option to show what would be changed.
It could also show a diff after all processing.
 --
 /Jacob Carlborg
Yeah, and if you're using some kind of version control system there is really no risk.
Jul 16 2012
prev sibling parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 08:12:39 UTC, Jacob Carlborg wrote:
 On 2012-07-16 08:51, Walter Bright wrote:

 It is a good idea, but I'd be nervous myself about allowing 
 the compiler
 to edit my code :-)
Don't you trust your own compiler :) The compiler could have --dry-run option to show what would be changed. It could also show a diff after all processing.
That's very precisely what the "rename" function of IDEs like Eclipse does, BTW.
Jul 16 2012
parent reply "RivenTheMage" <riven-mage id.ru> writes:
On Monday, 16 July 2012 at 21:29:39 UTC, SomeDude wrote:

 That's very precisely what the "rename" function of IDEs like 
 eclipse do, BTW.
The "upgrade mode" should be more than just textual search-and-replace.
Jul 16 2012
parent reply "SomeDude" <lovelydear mailmetrash.com> writes:
On Monday, 16 July 2012 at 21:34:45 UTC, RivenTheMage wrote:
 On Monday, 16 July 2012 at 21:29:39 UTC, SomeDude wrote:

 That's very precisely what the "rename" function of IDEs like 
 eclipse do, BTW.
The "upgrade mode" should be more than just textual search-and-replace.
It's *not* textual search and replace. It's compiler-checked semantic refactoring.
Jul 16 2012
parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-16 23:36, SomeDude wrote:

 It's *not* textual search and replace. It's compiler checked semantic
 refactoring.
Yet again we need a compiler as a library :) -- /Jacob Carlborg
Jul 16 2012
parent "Chris NS" <ibisbasenji gmail.com> writes:
On Tuesday, 17 July 2012 at 06:29:46 UTC, Jacob Carlborg wrote:
 On 2012-07-16 23:36, SomeDude wrote:

 It's *not* textual search and replace. It's compiler checked 
 semantic
 refactoring.
Yet again we need a compiler as a library :)
I find myself agreeing more and more.
Jul 16 2012
prev sibling parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Monday, July 16, 2012 08:48:31 RivenTheMage wrote:
 On Monday, 16 July 2012 at 06:07:21 UTC, Walter Bright wrote:
 Changing names is minute progress, and is too costly in terms
 of annoying existing users and breaking their code.
Cost can be lowered - by introducing (semi-)automatic refactoring/upgrade mode. dmd -upgrade zzz.d Compiler can do renames (clear() -> destroy()), insert workarounds (if needed), and so on. Easy, fast, no risk of human error. Of course, in certain cituations no automatic upgrade is possible...
If we were doing that all the time, then such a tool would be very useful, and it may very well be worth creating such a tool if/when D3 comes around and code needs to be converted, but if we continue to change names often enough that you really need such a tool, then we're screwing up. Name changes _should_ be rare. We were only doing as many as we were for a while there, because the names in Phobos were quite inconsistent, and almost everyone thought that we'd be better off to fix the names and deal with the breakage then rather than having to deal with the bad names forever. - Jonathan M Davis
Jul 15 2012
prev sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-15 23:58, Walter Bright wrote:

 We did do a stable release, D1, and there were plenty of complaints that
 D1 did not get new features.
Yes, because it was just an arbitrary release that was picked, with seemingly not much thought behind it.
 Also, all the released versions of D are available for download. There
 is no need to constantly download the latest if that disrupts your
 projects.
Then when you hit a bug and ask the community, they say it has already been fixed: "you should always use the latest release". -- /Jacob Carlborg
Jul 16 2012
prev sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-13 00:24, Adam Wilson wrote:

 For example:
 2.0.60 is the current HEAD. Bug fixes Only.
 2.1.60 is the new feature branch. It is a GitHub fork of the current
 DMD-HEAD owned by the same org as current DMD-HEAD. This way Walter can
 work against both simultaneously.

 We could have rolled the Object const change in 2.1.60, found out we
 didn't like them but instead of being FORCED to revert it to keep 2.060
 stable, we could have continued developing and improving the model or
 working on the problem from a completely different angle, WITHOUT
 affecting the release of 2.0.60.

 We could keep all the COFF work in the DMD 2.1 branch without affecting
 DMD 2.0 branch and having nearly as many breakages as we currently do in
 HEAD. Most recently, the ElfObj breakage. Roll that work into 2.1.60 and
 if it breaks well, you KNEW you were on the development branch, what's
 your problem?

 The stable/development branch model exists for a reason, it works, well.
 We don't have to keep rediscovering the models that worked successfully
 for other teams the hard way. If we proactively seek best practices, we
 can proactively avoid a huge amount of pain.
Yeah, I still don't understand why we don't do this. Is Walter against this? Anyone else? -- /Jacob Carlborg
Jul 12 2012
parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Thu, 12 Jul 2012 23:43:40 -0700, Jacob Carlborg <doob me.com> wrote:

 On 2012-07-13 00:24, Adam Wilson wrote:

 For example:
 2.0.60 is the current HEAD. Bug fixes Only.
 2.1.60 is the new feature branch. It is a GitHub fork of the current
 DMD-HEAD owned by the same org as current DMD-HEAD. This way Walter can
 work against both simultaneously.

 We could have rolled the Object const change in 2.1.60, found out we
 didn't like them but instead of being FORCED to revert it to keep 2.060
 stable, we could have continued developing and improving the model or
 working on the problem from a completely different angle, WITHOUT
 affecting the release of 2.0.60.

 We could keep all the COFF work in the DMD 2.1 branch without affecting
 DMD 2.0 branch and having nearly as many breakages as we currently do in
 HEAD. Most recently, the ElfObj breakage. Roll that work into 2.1.60 and
 if it breaks well, you KNEW you were on the development branch, what's
 your problem?

 The stable/development branch model exists for a reason, it works, well.
 We don't have to keep rediscovering the models that worked successfully
 for other teams the hard way. If we proactively seek best practices, we
 can proactively avoid a huge amount of pain.
Yeah, I still don't understand why we don't do this. Is Walter against this? Anyone else?
I hope Walter isn't against this, because I'm not seeing much community disagreement with this... -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 12 2012
next sibling parent reply Jacob Carlborg <doob me.com> writes:
On 2012-07-13 08:52, Adam Wilson wrote:

 I hope Walter isn't against this, because I'm not seeing much community
 disagreement with this...
If he's not against it, I see no reason why this hasn't been done already. -- /Jacob Carlborg
Jul 13 2012
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 13 Jul 2012 00:11:12 -0700, Jacob Carlborg <doob me.com> wrote:

 On 2012-07-13 08:52, Adam Wilson wrote:

 I hope Walter isn't against this, because I'm not seeing much community
 disagreement with this...
If he's not against it, I see know reason why this haven't been done already.
Concurred. The next step is probably to send emails to Walter/Andrei detailing our case. -- Adam Wilson IRC: LightBender Project Coordinator The Horizon Project http://www.thehorizonproject.org/
Jul 13 2012
prev sibling parent reply Don Clugston <dac nospam.com> writes:
On 13/07/12 09:11, Jacob Carlborg wrote:
 On 2012-07-13 08:52, Adam Wilson wrote:

 I hope Walter isn't against this, because I'm not seeing much community
 disagreement with this...
If he's not against it, I see know reason why this haven't been done already.
It has. It's called D1.
Jul 13 2012
parent deadalnix <deadalnix gmail.com> writes:
On 13/07/2012 15:17, Don Clugston wrote:
 On 13/07/12 09:11, Jacob Carlborg wrote:
 On 2012-07-13 08:52, Adam Wilson wrote:

 I hope Walter isn't against this, because I'm not seeing much community
 disagreement with this...
If he's not against it, I see know reason why this haven't been done already.
It has. It's called D1.
No. D1 code is interleaved with D2 code, and you can choose which one you want at compile time. It means we still have to consider all the D1 stuff when doing any D2 evolution now.
Jul 13 2012
prev sibling next sibling parent reply "Roman D. Boiko" <rb d-coding.com> writes:
On Friday, 13 July 2012 at 06:52:25 UTC, Adam Wilson wrote:
 I hope Walter isn't against this, because I'm not seeing much 
 community disagreement with this...
I would not be against having development and stable versions, but the price is not trivial: every pull request must be done in at least two branches, probably diverging significantly. And most of the benefits are already available: we have the git version and the last stable version (of course, the latter is without the latest bug fixes). That would mean slower progress in applying existing pull requests. (There are 100+ of those, aren't there?) Also, nobody is preventing anyone who considers this to be very important from creating a fork of the stable branch and applying bug fixes there. If this happens to be a very useful option, then it could be accepted as a policy. So my point of view is that it might be too early to have such a policy yet.
Jul 13 2012
next sibling parent reply Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 13, 2012 09:30:47 Roman D. Boiko wrote:
 So my point of view is that it might be too early to have such
 policy yet.
Which was my point. I think that we'll need to switch to a model like that eventually, but things are still in too much flux for it to make sense yet. Switching now would just slow everything down. - Jonathan M Davis
Jul 13 2012
next sibling parent deadalnix <deadalnix gmail.com> writes:
On 13/07/2012 09:37, Jonathan M Davis wrote:
 On Friday, July 13, 2012 09:30:47 Roman D. Boiko wrote:
 So my point of view is that it might be too early to have such
 policy yet.
Which was my point. I think that we'll need to switch to a model like that eventually, but things are still in too much flux for it to make sense yet. Switching now would just slow everything down. - Jonathan M Davis
Just think about how long this -property thing has been around, how long ago delete and scope were deprecated, and that all of this doesn't even produce a warning or anything when the compiler stumbles on them.
Jul 13 2012
prev sibling next sibling parent Jacob Carlborg <doob me.com> writes:
On 2012-07-13 09:37, Jonathan M Davis wrote:

 Which was my point. I think that we'll need to switch to a model like that
 eventually, but things are still in too much flux for it to make sense yet.
 Switching now would just slow everything down.
We could have more of an experimental branch which would be used for testing bigger changes or changes that will impact a lot of code. -- /Jacob Carlborg
Jul 13 2012
prev sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Jonathan M Davis:

 I think that we'll need to switch to a model like that 
 eventually,
When D1 bugfixes stop? Bye, bearophile
Jul 13 2012
parent Jonathan M Davis <jmdavisProg gmx.com> writes:
On Friday, July 13, 2012 15:24:03 bearophile wrote:
 Jonathan M Davis:
 I think that we'll need to switch to a model like that
 eventually,
When D1 bugfixes stop?
I don't see what the state of D1 has to do with anything other than the fact that the closest that D has ever had to this sort of model is the fact that D1 was split off specifically so that those using it could continue to use it rather than having everything broken when const was introduced. I would think that if/when we switch is likely to be highly tied to when things stabilize much better and we don't even have to discuss things like removing 4 major functions from Object. We're past the point where D is unstable enough that we're constantly reworking how things work without need, and the target feature set is essentially frozen, but we still make major changes once in a while in order to make the language work as-designed. The proposed model works much better when what you have already is fully stable, and you want to make it so that new stuff can be introduced in a clean and stable manner, and we're still having to make large changes at least once in a while in order to make stuff that we already have work properly. - Jonathan M Davis
Jul 13 2012
prev sibling parent Wouter Verhelst <wouter grep.be> writes:
"Roman D. Boiko" <rb d-coding.com> writes:

 On Friday, 13 July 2012 at 06:52:25 UTC, Adam Wilson wrote:
 I hope Walter isn't against this, because I'm not seeing much
 community disagreement with this...
I would not be against having development and stable versions, but the price is not trivial: every pull request must be done in at least two branches, probably diverging significantly. And most benefits are already available: we have the git version and the last stable version (of course, the latter would be without the latest bug-fixes). That would mean slower progress in applying existing pull requests. (There are 100+ of those, aren't there?)
Speaking from personal experience maintaining some code in git, I believe this fear is unfounded. Although code may and will diverge in such a model, you'll find that in most cases, bugfixes will apply to both branches with little or no change; and that git will be able to automatically handle most of those differences with no issues (things like "the line numbers didn't match, but the code did"). This is actually one of the major strengths of git: merging code and patches to several branches is extremely easy. While you will probably want to review what was merged, this usually doesn't take a whole lot of time, and should be fairly straightforward. And when you eventually do reach the point where maintaining the divergent versions is taking much more of your time, that's probably the point where you need to think about releasing the next stable version. -- The volume of a pizza of thickness a and radius z can be described by the following formula: pi zz a
Jul 14 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/12/2012 11:52 PM, Adam Wilson wrote:
 I hope Walter isn't against this, because I'm not seeing much community
 disagreement with this...
Note this: http://d.puremagic.com/test-results/ I don't see how what we're doing is so broken.
Jul 15 2012
prev sibling next sibling parent reply "Jonathan M Davis" <jmdavisProg gmx.com> writes:
On Thursday, July 12, 2012 18:49:16 deadalnix wrote:
 The system adopted in PHP works with a 3 number version. The first
 number is used for major languages changes (for instance 4 > 5 imply
 passing object by reference when it was by copy before, 5 > 6 switched
 the whole thing to unicode).
 
 The second number imply language changes, but either non breaking or
 very specific, rarely used stuff. For instance 5.2 > 5.3 added GC,
 closures and namespace which does not break code.
 
 The last one is reserved for bug fixes. Several version are maintained
 at the same time (even if a large amount of code base is common, so bug
 fixes can be used for many version at the time).
You know, the more that I think about this, the less that I think that this idea buys us anything right now. This sort of versioning scheme is great when you need to maintain ABI compatibility, when you want to restrict adding new features to only specific releases, and when you want to specify which kind of release is allowed to introduce breaking changes, but it doesn't really fit where D is right now. We're not at a point where we're trying to maintain ABI compatibility, so that doesn't matter, but the major reason that this wouldn't buy us much is that almost all breaking changes are not introduced by new features or major redesigns or whatnot. They're introduced by bug fixes (either by the fixes themselves changing how things work, since the way they worked was broken - though code may have accidentally been relying on it - or because a regression was introduced as part of the fix). Once in a while, fleshing out a partially working feature breaks some stuff (generally causing regressions of some kind), but most of it is bug fixes, and if it's due to an incomplete feature being fleshed out, then fewer people are going to be relying on it anyway. The few new features that we've added since TDPL have not really been breaking changes. They've added new functionality on top of what we've already had. The few cases where we _do_ introduce breaking changes on purpose, we do so via a deprecation path. We try and inform people (sometimes badly) that a feature is going to be changed, removed, or replaced - that it's been scheduled for deprecation. Later, we deprecate it, and even later, we remove it. In the case of Phobos, this is fairly well laid out, with things generally being scheduled for deprecation for about 6 months, and deprecated stuff sticking around for about 6 months before being removed. In the case of the compiler, it's less organized. Features generally don't actually get deprecated for a long time after it's been decided that they'll be deprecated, and they stick around as deprecated for quite a while. Newer functionality which breaks code is introduced with -w (or in one case, with a whole new flag: -property) so that programmers have a chance to switch to it more slowly rather than breaking their code immediately when the next release occurs. Later, they'll become part of the normal build in many cases, but that generally takes forever. And we don't even add new stuff like that very often, so even if you always compile with -w, it should be fairly rare that your code breaks with a new release due to something like that being added to -w. The last one that I can think of was disallowing implicit fallthrough on case statements. So, in general, when stuff breaks, it's by accident or because how things worked before was broken, and some code accidentally relied on the buggy behavior. Even removing opEquals, opCmp, toHash, and toString will be done in a way which minimizes (if not completely avoids) immediate breakage. People will need to change their code to work with the new scheme, but they won't have to do so immediately, because we'll find a way to introduce the changes such that they're phased in rather than immediately breaking everything. All that being the case, I don't know what this proposal actually buys us. The very thing that causes the most breaking changes (bug fixes) is the thing that still occurs in every release. - Jonathan M Davis
Jul 13 2012
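A minimal sketch of the deprecation path Jonathan describes, as it looks in library code. The names oldName and newName are purely hypothetical (not actual Phobos symbols), and how loudly the compiler complains about step 2 depends on the DMD version and flags (-d, -w):

    // Hypothetical library module showing the phased removal of a symbol.
    module mylib;

    // Step 1 (announcement): the documentation states that oldName is
    // scheduled for deprecation and that newName is its replacement.

    // Step 2 (deprecation): oldName is marked deprecated but keeps working,
    // typically by forwarding to the replacement, so existing code still
    // compiles during the transition period.
    deprecated void oldName()
    {
        newName();
    }

    // The replacement that callers are expected to migrate to.
    void newName()
    {
        // new implementation goes here
    }

    // Step 3 (removal, in a later release): oldName is deleted outright and
    // any remaining calls to it stop compiling.

In Phobos the three steps are spread over roughly the six-month windows mentioned above; the sketch only shows the shape of the transition, not the timeline.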
next sibling parent "Adam Wilson" <flyboynw gmail.com> writes:
On Fri, 13 Jul 2012 09:58:22 -0700, Jonathan M Davis <jmdavisProg gmx.com>  
wrote:

 On Thursday, July 12, 2012 18:49:16 deadalnix wrote:
 The system adopted in PHP works with a 3 number version. The first
 number is used for major languages changes (for instance 4 > 5 imply
 passing object by reference when it was by copy before, 5 > 6 switched
 the whole thing to unicode).

 The second number imply language changes, but either non breaking or
 very specific, rarely used stuff. For instance 5.2 > 5.3 added GC,
 closures and namespace which does not break code.

 The last one is reserved for bug fixes. Several version are maintained
 at the same time (even if a large amount of code base is common, so bug
 fixes can be used for many version at the time).
 [...]
 Even removing opEquals, opCmp, toHash, and toString will be done in a way
 which minimizes (if not completely avoids) immediate breakage. People will
 need to change their code to work with the new scheme, but they won't have
 to do so immediately, because we'll find a way to introduce the changes
 such that they're phased in rather than immediately breaking everything.
And if we had a dev branch, we could have rolled Object const into it and let it stay broken there without affecting stable. We have no stable release because we only have one branch: dev. To have a stable release you must first have a branch you consider to be stable. Major changes are rolled INTO stable from dev once they become stable. Another term for stable is staging, if that language helps you understand the concept better.

Stable does NOT and NEVER will mean bug-free. It means that we think this code has generally been well tested and works in most cases, and that we promise not to break it with big changes. Note the last part: we promise not to break it with big changes (such as Object const). That's how you create a stable release - first you must promise not to break it severely. Currently we don't make that promise. But just because we don't make that promise does not mean that we cannot or should not make it. That promise is highly valuable to the community at large.
 All that being the case, I don't know what this proposal actually buys  
 us. The
 very thing that causes the most breaking changes (bug fixes) is the  
 thing that
 still occurs in every release.

 - Jonathan M Davis
-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 13 2012
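A rough sketch of the two-branch workflow Adam describes above, in plain git commands. The branch and tag names are purely illustrative and do not reflect how the dmd repository is actually organized:

    # Create a stable branch from the last release; development stays on master.
    git checkout -b stable v2.059

    # Day-to-day work (new features, big changes like Object const) continues
    # on master, which plays the role of the dev branch.
    git checkout master

    # When a bug fix made on dev is also needed in stable, copy it across.
    git checkout stable
    git cherry-pick <commit-id-of-the-fix>

    # Cut a point release from the stable branch once fixes have accumulated.
    git tag v2.059.1

Whether big changes are later merged into stable wholesale, or stable is simply re-branched from dev at the next major release, is exactly the policy question the rest of the thread argues about.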
prev sibling next sibling parent deadalnix <deadalnix gmail.com> writes:
On 13/07/2012 18:58, Jonathan M Davis wrote:
 So, in general, when stuff breaks, it's on accident or because how things
 worked before was broken, and some code accidentally relied on the buggy
 behavior. Even removing opEquals, opCmp, toHash, and toString will be done in
 a way which minimizes (if not completely avoids) immediate breakage. People
 will need to change their code to work with the new scheme, but they won't
 have to do so immediately, because we'll find a way to introduce the changes
 such that they're phased in rather than immediately breaking everything.
Yeah, I know that. We don't change stuff just to change it, but because something is broken. But come on, we are comparing with PHP here! PHP is so broken that it is hard to even figure out what it does correctly to make it successful.

I'll tell you what: it is successful because you know you'll be able to run your piece of crappy code on top of a half-broken VM next year without problems.
Jul 13 2012
prev sibling parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/13/2012 9:58 AM, Jonathan M Davis wrote:
 All that being the case, I don't know what this proposal actually buys us.
I tend to agree.
Jul 14 2012
next sibling parent reply "Adam Wilson" <flyboynw gmail.com> writes:
On Sat, 14 Jul 2012 16:56:50 -0700, Walter Bright  
<newshound2 digitalmars.com> wrote:

 On 7/13/2012 9:58 AM, Jonathan M Davis wrote:
 All that being the case, I don't know what this proposal actually buys  
 us.
I tend to agree.
If that were the case, 2.059 would not be three months old with no 2.060 in the immediate future. Not having that situation is what this buys us.

I believe that there IS a problem here. There are people who, for various reasons, cannot use Git HEAD, and they have open problems. They are stuck. I believe that is the unstated impetus for this thread.

-- 
Adam Wilson
IRC: LightBender
Project Coordinator
The Horizon Project
http://www.thehorizonproject.org/
Jul 14 2012
next sibling parent reply captaindet <2krnk gmx.net> writes:
On 2012-07-14 20:42, Adam Wilson wrote:
 On Sat, 14 Jul 2012 16:56:50 -0700, Walter Bright <newshound2 digitalmars.com>
wrote:

 On 7/13/2012 9:58 AM, Jonathan M Davis wrote:
 All that being the case, I don't know what this proposal actually buys us.
I tend to agree.
 If that were the case, 2.059 would not be three months old with no 2.060 in
 the immediate future. Not having that situation is what this buys us.

 I believe that there IS a problem here. There are people who, for various
 reasons, cannot use Git HEAD, and they have open problems. They are stuck.
 I believe that is the unstated impetus for this thread.
+1 pls make a fresh build available on a weekly or at least biweekly basis, just with regressions fixed.
Jul 15 2012
parent reply Walter Bright <newshound2 digitalmars.com> writes:
On 7/15/2012 10:25 AM, captaindet wrote:
 pls make a fresh build available on a weekly or at least biweekly basis, just
 with regressions fixed.
2.059 had only 3 outstanding regressions.
Jul 15 2012
parent captaindet <2krnk gmx.net> writes:
On 2012-07-15 17:35, Walter Bright wrote:
 On 7/15/2012 10:25 AM, captaindet wrote:
 pls make a fresh build available on a weekly or at least biweekly basis, just
 with regressions fixed.
2.059 had only 3 outstanding regressions.
My bad, I got the impression regressions were a bigger issue. I just want to add that I am really happy 64-bit for Windows is about to happen. I should not whinge if this means other things have to slow down a bit for the time being. After all, I am not helping, just enjoying the free beer...

/det
Jul 15 2012
prev sibling parent Walter Bright <newshound2 digitalmars.com> writes:
On 7/14/2012 6:42 PM, Adam Wilson wrote:
 I believe that there IS a problem here. There are people who, for various
 reasons, cannot use Git HEAD, and they have open problems. They are stuck. I
 believe that is the unstated impetus for this thread.
There is no answer to: "Do not change things, but change everything."
Jul 15 2012
prev sibling parent reply deadalnix <deadalnix gmail.com> writes:
On 15/07/2012 01:56, Walter Bright wrote:
 On 7/13/2012 9:58 AM, Jonathan M Davis wrote:
 All that being the case, I don't know what this proposal actually buys
 us.
I tend to agree.
After 10 years of D, nothing stable exists yet. We may call 2.059 stable, but frankly, it isn't. This has a lot to do with the fact that new features are included (with new bugs) in the same stream of versions as bug fixes. In other words, bugs are fixed, but at the same time bugs are added along with new features. The natural result is that D is not stable yet, and never will be as long as the same process is used.

The proposed versioning system solves that problem. It doesn't matter if that exact system is the one chosen, in fact, but the current one is certainly not the one D needs.
Jul 15 2012
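A rough picture of the split deadalnix is arguing for; the version numbers are purely illustrative, not planned releases:

    2.1.0 -- 2.1.1 -- 2.1.2      stable stream: bug fixes only
        \
         2.2.0 -- 2.2.1 -- ...   dev stream: new features (and whatever new
                                 bugs they bring) land here first

Under such a scheme, a user who only ever follows the first stream gets fixes without picking up new breakage, which is the stability he says the current single stream cannot provide.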
parent Patrick Stewart <ncc1701d starfed.com> writes:
There is one thing missing from the developers' perspective as far as D is
concerned: not all D users want to be beta testers, but they are all treated
that way.


Tough decision, but after 5 years of tracking the D story I am fed up; I see
nothing has changed, and it will probably stay like that for the foreseeable
future.
Jul 15 2012
prev sibling parent "Jesse Phillips" <Jessekphillips+D gmail.com> writes:
On Thursday, 12 July 2012 at 16:49:17 UTC, deadalnix wrote:

 Such a system would also permit to drop all D1 stuff that are 
 in current DMD because D1 vs D2 can be chosen at compile time 
 on the same sources.
This is how DMD v2 was developed at the beginning; I bet the version 1 compiler still has the -v1 switch.

I'm with Jonathan, though. I don't see much benefit. Yes, the system has great benefit, but we can't support it. If we create a stable branch, we'd need to define what is "big" and decide on a support system. What do we do when the dev branch has another "big" change and we need a stable-dev? How do we promote the "big"-change dev to stable without being "unstable"?

Eventually these happy people will get the unhappy news that they have to fix their code, and probably there aren't the resources to keep it up for years. Maybe someone else can take on the task of merging bug fixes into their own branch - yes, a little bit of rework for them, but will that matter if the fixes merge cleanly?
Jul 13 2012